© Carlos Oliveira 2020
C. OliveiraOptions and Derivatives Programming in C++20https://doi.org/10.1007/978-1-4842-6315-0_9

9. Linear Algebra Algorithms

Carlos Oliveira1 
(1)
Seattle, WA, USA
 

Linear algebra (LA) techniques are used throughout the area of financial engineering and, in particular, in the analysis of options and other financial derivatives. These techniques are used, for example, to calculate the value of large portfolios, or to quickly price derivative instruments. This chapter contains an overview of LA algorithms and their implementation in C++.

Linear algebra algorithms consist of simple operations on sets of values arranged as vectors or matrices. There is a rich mathematical theory behind the use of vectors and matrices. Although explaining that theory is beyond the scope of this book, it is nonetheless essential to understand how such algorithms can be implemented in C++.

It is important to recognize how the traditional methods of linear algebra can be translated to a multi-paradigm language such as C++. As a high-performance language, C++ has been used by software engineers to efficiently encode numerical algorithms, such as the ones used in linear algebra. With this goal in mind, this chapter presents a few examples that illustrate how to use some of the most common linear algebra algorithms. In this chapter, you will also learn how to integrate the following types of LA algorithms into your code:
  • Vector operations: Operations on vectors are some of the most common ways to explore linear algebra algorithms.

  • Implementing matrices: A matrix is a set of numbers ordered in a two-dimensional array. Even though matrices are very common, there is no standard support for matrices in the C++ library. In this chapter, you will see how to easily create a matrix class that supports many of the most common matrix operations.

  • Using linear algebra libraries: There is a set of LA functions, named BLAS (Basic Linear Algebra Subprograms), that have become a de facto standard in the world of numerical computing. You will see in this chapter how to use BLAS and similar implementations, which provide the basic blocks used by most LA software (both free and commercial) available nowadays.

Vector Operations

As you will see in the following examples, linear algebra is concerned with the mathematical properties of vector spaces. Many of the operations either produce vectors or take vectors as their arguments. Therefore, the first step to properly using LA algorithms is to have a good implementation of vectors.

On the positive side, the C++ standard library already contains an optimized container, std::vector, which you have used extensively in the last few chapters. On the other hand, std::vector doesn’t implement some of the most important operations conventionally used in linear algebra algorithms. The first step in implementing such algorithms is therefore to provide these missing operations.

There are two kinds of mathematical operations that are needed when using vectors:
  • Operations between numbers and vectors: Some mathematical operations involve a single number (also called a scalar number) and a vector as arguments. For example, you may need to multiply a vector by a scalar, or add the same number to each entry in the vector. Such operations are not available in std::vector, but are so common that they should be supported by any linear algebra software package.

  • Operations between two or more vectors: Another class of mathematical operations takes two or more vectors and calculates a result based on their values. A common example is the vector (dot) product, where corresponding elements of the two vectors are multiplied pairwise and the products are summed. Other operations, such as vector addition, are also commonly used.

The next few examples will show how to implement some of these operations using the existing containers of the STL, such as std::vector.

Scalar-to-Vector Operations

Scalar operations on vectors allow a vector to be modified by a single scalar number. The two most common scalar operations are scalar addition and scalar multiplication. You can use these operations as building blocks for more complex operations, which will be explored in the following sections.

Because the std::vector class is already part of the STL, the strategy used here is to create free functions (not members of a particular class) that operate on vector containers. This way, you are free to continue to use the well-known functions available for std::vector when necessary. You can also overload these functions with other types if you feel the need to extend these definitions.

Scalar addition to a vector consists of adding the same constant to each component of the vector. It can be implemented in the following way:
#include <iostream>
#include <vector>
typedef std::vector<double> Vector;
Vector add(double num, const Vector &v)
{
    int n = (int)v.size();
    Vector result(n);
    for (int i=0; i<n; ++i)
    {
        result[i] = v[i] + num;
    }
    return result;
}

The first statement is a typedef that lets you write Vector instead of std::vector<double> in this and the other examples in this chapter. Another advantage of such a typedef in numerical code is that you can change the definition of Vector later if necessary; if you switch to another vector type, the rest of the code will still compile with few or no changes.

The add function creates a new Vector with a size equal to the length of the argument vector. Then, it fills the resulting vector with the original elements plus the number given as the first argument. Next, you can see the scalar multiplication operation:
Vector multiply(double num, const Vector &v)
{
    int n = (int)v.size();
    Vector result(n);
    for (int i=0; i<n; ++i)
    {
        result[i] = v[i] * num;
    }
    return result;
}

The multiply function is implemented similarly to add. It receives a double number and a vector, and creates a result vector with the same size as the argument v. The result is then computed element by element, following the definition of scalar multiplication.

These two functions create and return a new vector. This is an effective way to perform the operations, but it can be less than optimal when used in inner loops of complex algorithms. One way to speed up this process is to create a version of these functions that modify the vector in place. That is, one of the vectors is passed using a non-const reference, and it is modified to contain the result of the calculation.

Here is the scalar addition function, implemented as an in-place modifying operation:
void in_place_add(double num, Vector &v)
{
    int n = (int)v.size();
    for (int i=0; i<n; ++i)
    {
        v[i] += num;
    }
}
As you can see, this is the equivalent of the += operator, applied to a vector and a scalar argument. A similar implementation also works for scalar multiplication:
void in_place_multiply(double num, Vector &v)
{
    int n = (int)v.size();
    for (int i=0; i<n; ++i)
    {
        v[i] *= num;
    }
}
Last, you can take advantage of C++ operator overloading when defining these functions. With operator overloading, you can write code much more naturally, so instead of typing
multiply(5, add(10, a));
(assuming that a is a vector), you can type
5 * (10 + a);
which is much easier to understand and maintain. You can create operator versions of the previous functions using the following definitions:
inline Vector operator +(double num, const Vector &v)
{
    return add(num, v);
}
inline Vector operator *(double num, const Vector &v)
{
    return multiply(num, v);
}
inline void operator +=(Vector &v, double num)
{
    in_place_add(num, v);
}
inline void operator *=(Vector &v, double num)
{
    in_place_multiply(num, v);
}

Because these functions are inline, they typically add no extra runtime overhead over calling the underlying functions directly. You can think of these definitions as convenient shorthand for the full vector operations.
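As a quick check that the operator versions behave like the underlying functions, here is a minimal self-contained sketch; it repeats the definitions above so it can compile on its own:

```cpp
#include <cassert>
#include <vector>

typedef std::vector<double> Vector;

// Scalar addition: shift each entry of v by num
Vector add(double num, const Vector &v)
{
    Vector result(v.size());
    for (std::size_t i = 0; i < v.size(); ++i)
        result[i] = v[i] + num;
    return result;
}

// Scalar multiplication: scale each entry of v by num
Vector multiply(double num, const Vector &v)
{
    Vector result(v.size());
    for (std::size_t i = 0; i < v.size(); ++i)
        result[i] = v[i] * num;
    return result;
}

inline Vector operator+(double num, const Vector &v) { return add(num, v); }
inline Vector operator*(double num, const Vector &v) { return multiply(num, v); }
```

With these definitions, the expression 5.0 * (10.0 + a) produces the same result as multiply(5, add(10, a)).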

Vector-to-Vector Operations

The vector-to-vector operations allow you to form mathematical expressions involving two or more vectors. The most common such operations are vector addition and vector product. They can be implemented using strategies similar to the ones used previously.

First, you will see the implementation of vector addition:
Vector add(const Vector &v1, const Vector &v2)
{
    int n = (int)v1.size();
    Vector result(n);
    for (int i=0; i<n; ++i)
    {
        result[i] = v1[i] + v2[i];
    }
    return result;
}
Here, the function allocates a resultant vector, which is populated using element-wise addition of vector entries.

Next, you can apply the same strategy to the implementation of vector products. The first kind of vector product is called the inner product, or dot product, and is defined as the sum of the products of corresponding elements of the two vectors. Here is a simple implementation in C++:

double prod(const Vector &v1, const Vector &v2)
{
    double result = 0;
    int n = (int)v1.size();
    for (int i=0; i<n; ++i)
    {
        result += v1[i] * v2[i];
    }
    return result;
}
Another type of product between two vectors is known as the cross product; it has several applications in physics and engineering. Unlike the inner product, which returns a single number, the cross product produces a new vector that is orthogonal to both of its arguments. Its definition for three-dimensional vectors is given by the equations in the following function:
#include <cassert>
Vector cross_prod_3D(const Vector &a, const Vector &b)
{
    assert(a.size()==3 && b.size()==3); // defined for 3D vectors only
    Vector v(3);  // the resulting vector
    v[0] = (a[1] * b[2] - a[2] * b[1]);
    v[1] = (a[2] * b[0] - a[0] * b[2]);
    v[2] = (a[0] * b[1] - a[1] * b[0]);
    return v;
}
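A useful property for testing: the cross product is orthogonal to both of its inputs, so its dot product with either argument is zero. The following standalone sketch checks this; the helper names dot and cross are mine, not part of the chapter's code:

```cpp
#include <cassert>
#include <vector>

typedef std::vector<double> Vector;

// Dot (inner) product: sum of pairwise products
double dot(const Vector &a, const Vector &b)
{
    double r = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        r += a[i] * b[i];
    return r;
}

// 3D cross product, using the same equations as cross_prod_3D
Vector cross(const Vector &a, const Vector &b)
{
    assert(a.size() == 3 && b.size() == 3);
    Vector v(3);
    v[0] = a[1] * b[2] - a[2] * b[1];
    v[1] = a[2] * b[0] - a[0] * b[2];
    v[2] = a[0] * b[1] - a[1] * b[0];
    return v;
}
```

For the unit vectors x and y, cross(x, y) yields the unit vector z, and dot(cross(x, y), x) is zero.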
Just as you can use in-place operations for scalar-to-vector functions, you can also implement vector-to-vector operations in place, therefore saving some of the effort needed to create temporary data structures. Here are the versions of these two functions designed for in-place updates:
void  in_place_add(Vector &v1, const Vector &v2)
{
    int n = (int)v1.size();
    for (int i=0; i<n; ++i)
    {
        v1[i] += v2[i];
    }
}
void  in_place_product(Vector &v1, const Vector &v2)
{
    int n = (int)v1.size();
    for (int i=0; i<n; ++i)
    {
        v1[i] *= v2[i];
    }
}
void in_place_cross_prod_3D(const Vector &a, const Vector &b,
                            Vector &v)
{
    assert(a.size()==3 && b.size()==3 && v.size()==3); // 3D vectors only
    v[0] = (a[1] * b[2] - a[2] * b[1]);
    v[1] = (a[2] * b[0] - a[0] * b[2]);
    v[2] = (a[0] * b[1] - a[1] * b[0]);
}
Finally, you can also simplify the use of these vector operations with the help of C++ operator overloading. Instead of typing a complex set of function calls, it is much more elegant to apply the standard addition and multiplication operations whenever possible. Therefore, you can use the following inline definitions to call the given vector operations without any runtime performance penalty:
inline Vector operator +(const Vector &v1, const Vector &v2)
{
    return add(v1, v2);
}
inline void  operator +=(Vector &v1, const Vector &v2)
{
    in_place_add(v1, v2);
}
inline double operator *(const Vector &v1, const Vector &v2)
{
    return prod(v1, v2);
}
inline void  operator *=(Vector &v1, const Vector &v2)
{
    in_place_product(v1, v2);
}
The next operation I want to discuss is a very common function defined over a single vector. The norm of a vector can be defined as the square root of the inner product of the vector with itself. Basically, the norm is a single numeric quantity that summarizes the magnitude of the whole vector. You can implement it very easily in the following way:
double norm(const Vector &v)
{
    double result = 0;
    int n = (int)v.size();
    for (int i=0; i<n; ++i)
    {
        result += v[i] * v[i];
    }
    return std::sqrt(result);
}
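As a usage sketch, the norm can be combined with in-place scaling to normalize a vector to unit length. The normalize helper below is a hypothetical addition, not part of the chapter's function set:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

typedef std::vector<double> Vector;

// Norm: square root of the inner product of v with itself
double norm(const Vector &v)
{
    double result = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        result += v[i] * v[i];
    return std::sqrt(result);
}

// Hypothetical helper: rescale v to unit length
// (undefined for the zero vector, which would divide by zero)
void normalize(Vector &v)
{
    double k = 1.0 / norm(v);
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] *= k;
}
```

For example, the vector {3, 4} has norm 5, and after normalization its norm is 1 (up to floating-point rounding).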

Matrix Implementation

In the previous section, you learned about the most basic level of linear algebra functions, dealing with single numbers and vectors. A second level of operations is defined on a two-dimensional arrangement of numbers, also known as a matrix. Matrices arise naturally as the result of linear algebra calculations, and they provide a convenient way to manipulate data.

Matrices are fundamental to the implementation of linear algebra algorithms that are frequently used in the analysis of options and other derivatives. Unfortunately, C++ does not support matrices directly. Programmers need to create a separate abstraction to represent a matrix or use some third-party library that contains such a data type.

For the purpose of illustrating linear algebra and related algorithms, a Matrix class will be introduced in this section. This Matrix data type implements some of the most common operations needed in a financial application. However, the class presented here doesn’t include all the checks that a robust implementation would require; some of these features are left as an exercise for the reader.

In particular, the Matrix class presented in this section offers the following abilities:
  • Creation of square and rectangular matrices, handling the allocation of memory for a two-dimensional container of real (floating-point) numbers

  • Copy constructor and assignment operator that support the basic copy operations used in C++ libraries

  • Indexing operator, so that values can be accessed with the familiar square bracket notation

  • Common linear algebra operations, such as transpose, add, and multiply, implemented as member functions

The first step in defining a matrix class is to define the basic organization of the stored data. In this class, the data is stored as a sequence of rows, making maximum use of the existing vector container to help manage the data.

The header file is presented in Listing 9-1, and it includes the class declaration and a few free operators that simplify the use of the class.
//
//  Matrix.h
//
#ifndef __FinancialSamples__Matrix__
#define __FinancialSamples__Matrix__
#include <vector>
class Matrix {
public:
    typedef std::vector<double> Row;
    Matrix(int size);
    Matrix(int size1, int size2);
    Matrix(const Matrix &s);
    ~Matrix();
    Matrix &operator=(const Matrix &s);
    void transpose();
    double trace();
    void add(const Matrix &s);
    void subtract(const Matrix &s);
    void multiply(const Matrix &s);
    void multiply(double num);
    Row & operator[](int pos);
    int numRows() const;
private:
    std::vector<Row> m_rows;
};
// Free operators
//
Matrix operator+(const Matrix &s1, const Matrix &s2);
Matrix operator-(const Matrix &s1, const Matrix &s2);
Matrix operator*(const Matrix &s1, const Matrix &s2);
#endif /* defined(__FinancialSamples__Matrix__) */
Listing 9-1

Declarations for the Matrix Class

Notice that a Row is defined as a std::vector of double numbers, using a typedef. Next, you see the usual definitions for constructors, destructors, and the assignment operator.

The Matrix class contains a few common operations, implemented as member functions. Last, you see a few operator overloads, so that the class can be comfortably used along with other linear algebra types discussed previously.

The first part of the Matrix class implementation is concerned with the constructors. The class has two constructors: the first constructor creates a square matrix, that is, one that has the same number of rows and columns. This is done by instantiating each row of the matrix and adding it to the top-level m_rows vector, until the complete matrix has been allocated.
//
//  Matrix.cpp
//
#include "Matrix.h"
#include <stdexcept>
Matrix::Matrix(int size)
{
    for (int i=0; i<size; ++i )
    {
        std::vector<double> row(size, 0);
        m_rows.push_back(row);
    }
}
The second way to create a matrix is to give a number of rows and a number of columns, therefore creating a rectangular matrix. The underlying algorithm is similar to the previous case:
Matrix::Matrix(int size1, int size2)
{
    for (int i=0; i<size1; ++i )
    {
        std::vector<double> row(size2, 0);
        m_rows.push_back(row);
    }
}
The next constructor allows you to make a copy of an existing matrix. It simply takes advantage of the fact that vectors copy all of their contents by default. The destructor is also trivial, thanks to the use of std::vector to manage the data.
Matrix::Matrix(const Matrix &s)
: m_rows(s.m_rows)
{
}
Matrix::~Matrix()
{
}
The assignment operator also takes advantage of the use of an std::vector. The only thing it needs to do is copy the underlying m_rows data member.
Matrix &Matrix::operator=(const Matrix &s)
{
    if (this != &s)
    {
        m_rows = s.m_rows;
    }
    return *this;
}
The Matrix class provides an easy way to access elements, using square brackets. For this purpose, it needs to define the operator[] member function. Because an std::vector is returned, the result can also be accessed using square brackets. Therefore, if a is an object of class Matrix, users of this class can just type a[2][3] to access the fourth element of the third row.
Matrix::Row &Matrix::operator[](int pos)
{
    return m_rows[pos];
}
Transposition is one of the most common operations on a matrix. The goal of transposition is to convert rows into columns, changing the orientation of the stored data. This class does so by creating a new set of rows, where each new row contains the elements of the corresponding column. At the end, you just need to replace the existing rows with this new set. This is done using the swap member function of the underlying std::vector. This way, you don’t need to worry about the details of data allocation, taking full advantage of STL data management techniques.
void Matrix::transpose()
{
    std::vector<Row> rows;
    for (unsigned i=0; i<m_rows[0].size(); ++i)
    {
        std::vector<double> row(m_rows.size());
        for (unsigned j=0; j<m_rows.size(); ++j)
        {
            row[j] = m_rows[j][i];
        }
        rows.push_back(row);
    }
    m_rows.swap(rows);
}
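The same build-then-swap strategy can be sketched as a free function over nested std::vectors, which makes it easy to test in isolation:

```cpp
#include <cassert>
#include <vector>

typedef std::vector<double> Row;

// Transpose a row-major grid of numbers: build the transposed rows,
// then swap them into place in one step (as in Matrix::transpose).
void transpose(std::vector<Row> &rows)
{
    std::vector<Row> t;
    for (std::size_t i = 0; i < rows[0].size(); ++i)
    {
        Row row(rows.size());
        for (std::size_t j = 0; j < rows.size(); ++j)
            row[j] = rows[j][i];    // column i becomes row i
        t.push_back(row);
    }
    rows.swap(t);   // constant-time exchange of the underlying buffers
}
```

A 2x3 matrix becomes 3x2, with entry (i, j) moved to position (j, i).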
Next, the Matrix class contains another very common operation called trace. The trace of a matrix is defined as the summation of elements in the diagonal positions of the matrix. That is, for a given matrix a, you need to sum all elements a[i][i], or in mathematical notation:
$$ Trace(A)=\sum \limits_{i=1}^n{A}_{i,i} $$
This function is not defined for nonsquare matrices.
double Matrix::trace()
{
    if (m_rows.size() != m_rows[0].size())
    {
        throw std::runtime_error("invalid matrix dimensions");
    }
    double total = 0;
    for (unsigned i=0; i<m_rows.size(); ++i)
    {
        total += m_rows[i][i];
    }
    return total;
}
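As a quick numeric check of the definition, here is a standalone sketch of trace over a nested std::vector (a hypothetical stand-in for the Matrix class):

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

typedef std::vector<std::vector<double>> Grid;  // stand-in for Matrix

// Trace: sum of the diagonal entries; defined for square matrices only
double trace(const Grid &m)
{
    if (m.size() != m[0].size())
        throw std::runtime_error("invalid matrix dimensions");
    double total = 0;
    for (std::size_t i = 0; i < m.size(); ++i)
        total += m[i][i];
    return total;
}
```

For the 2x2 matrix {{1, 2}, {3, 4}}, the trace is 1 + 4 = 5.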
The add member function implements matrix addition. Just as with vector addition, matrix addition performs the element-wise summation of entries in the matrix. This operation is defined only when the two matrices have the same dimensions; otherwise, a runtime exception is thrown.
void Matrix::add(const Matrix &s)
{
    if (m_rows.size() != s.m_rows.size() ||
        m_rows[0].size() != s.m_rows[0].size())
    {
        throw std::runtime_error("invalid matrix dimensions");
    }
    for (unsigned i=0; i<m_rows.size(); ++i)
    {
        for (unsigned j=0; j<m_rows[0].size(); ++j)
        {
            m_rows[i][j] += s.m_rows[i][j];
        }
    }
}
The subtract operation is similar to addition. It is here just to avoid the need to multiply the whole matrix by -1 in order to do a simple subtraction.
void Matrix::subtract(const Matrix &s)
{
    if (m_rows.size() != s.m_rows.size() ||
        m_rows[0].size() != s.m_rows[0].size())
    {
        throw std::runtime_error("invalid matrix dimensions");
    }
    for (unsigned i=0; i<m_rows.size(); ++i)
    {
        for (unsigned j=0; j<m_rows[0].size(); ++j)
        {
            m_rows[i][j] -= s.m_rows[i][j];
        }
    }
}
The product operation is implemented by the member function multiply. When you’re multiplying two matrices, the resulting matrix has entries that correspond to the vector product of the ith row and the jth column. In mathematical notation, this is represented as
$$ {\left(A\times B\right)}_{i,j}=\sum \limits_{k=1}^n{A}_{ik}{B}_{kj} $$
The multiply member function updates the matrix in place; therefore, it just needs to create a new set of rows and swap the results at the end of the function.
void Matrix::multiply(const Matrix &s)
{
    if (m_rows[0].size() != s.m_rows.size())
    {
        throw std::runtime_error("invalid matrix dimensions");
    }
    std::vector<Row> rows;
    for (unsigned i=0; i<m_rows.size(); ++i)
    {
        std::vector<double> row;
        for (unsigned j=0; j<s.m_rows[0].size(); ++j)
        {
            double Mij = 0;
            for (unsigned k=0; k<m_rows[0].size(); ++k)
            {
                Mij += m_rows[i][k] * s.m_rows[k][j];
            }
            row.push_back(Mij);
        }
        rows.push_back(row);
    }
    m_rows.swap(rows);
}
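To sanity-check the triple loop against the formula above, here is a standalone sketch using nested std::vectors instead of the Matrix class:

```cpp
#include <cassert>
#include <vector>

typedef std::vector<std::vector<double>> Grid;  // stand-in for Matrix

// (A x B)_{i,j} = sum over k of A_{i,k} * B_{k,j}
Grid multiply(const Grid &a, const Grid &b)
{
    // Result has a's row count and b's column count
    Grid result(a.size(), std::vector<double>(b[0].size(), 0.0));
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b[0].size(); ++j)
            for (std::size_t k = 0; k < b.size(); ++k)
                result[i][j] += a[i][k] * b[k][j];
    return result;
}
```

For example, {{1, 2}, {3, 4}} times {{5, 6}, {7, 8}} is {{19, 22}, {43, 50}}, which matches the formula entry by entry.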
The Matrix class also defines a multiply member function that performs multiplication by a scalar number. This is analogous to the scalar multiplication of vectors and multiplies each element of the matrix by the same number.
void Matrix::multiply(double num)
{
    for (unsigned i=0; i<m_rows.size(); ++i)
    {
        for (unsigned j=0; j<m_rows[0].size(); ++j)
        {
            m_rows[i][j] *= num;
        }
    }
}
The numRows member function just returns the number of rows in the matrix.
int Matrix::numRows() const
{
    return (int)m_rows.size();
}
Finally, three operators are defined that simplify the use of the class. They are built on the in-place implementations you have seen previously, and they allow convenient expressions involving matrices. These operators give you an idea of how this works in practice; you can extend them with other common operators, such as /, +=, and *=.
Matrix operator+(const Matrix &s1, const Matrix &s2)
{
    Matrix s(s1);
    s.add(s2);
    return s;
}
Matrix operator-(const Matrix &s1, const Matrix &s2)
{
    Matrix s(s1);
    s.subtract(s2);
    return s;
}
Matrix operator*(const Matrix &s1, const Matrix &s2)
{
    Matrix s(s1);
    s.multiply(s2);
    return s;
}

Using the uBLAS Library

In the previous sections, you saw simple implementations of linear algebra concepts in C++. While they are useful for the examples provided in this book, sometimes you will be required to create high-performance implementations of complex numerical algorithms involving vectors and matrices. In such cases, it is useful to use well-tested and optimized libraries that provide linear algebra–related code.

The most widely used library for linear algebra algorithms is LAPACK. Originally written in Fortran, LAPACK (Linear Algebra Package) aims to provide high-performance, well-tested algorithms for basic operations involving vectors and matrices.

One interesting aspect of LAPACK is that it relies on another library called BLAS (Basic Linear Algebra Subprograms) to implement basic vector and matrix routines. The result is that BLAS became a standard for implementing vector and matrix routines. Several versions of BLAS have been released, providing optimized performance for specific architectures. BLAS also has versions targeting C and C++ that are used in many commercial products and other applications that need extensive support for numerical algorithms.

BLAS defines three levels of routines for support of linear algebra algorithms:
  • BLAS Level 1 supports only vector-to-scalar and vector-to-vector operations. It is the most basic level of support, upon which other levels may be built.

  • BLAS Level 2 offers optimized routines for vector-to-matrix calculations.

  • BLAS Level 3 expands the previous levels to support matrix-to-matrix calculations, including operations such as matrix multiplication.

There are several implementations of BLAS, both in Fortran and in C++. Boost uBLAS is an implementation that is free and mostly compatible with the original BLAS library. It contains the same three support levels listed previously.

For an example of how to use uBLAS, assume that you want to access a fast implementation of the premultiply operation. That is, given a vector and a matrix, you want an algorithm that multiplies the vector by the matrix, producing a vector as a result.

To solve this problem, you can import the uBLAS libraries and create a function that receives two arguments: a vector and a matrix object. Here is a possible implementation for this function:
#include "Matrix.h"
#include <algorithm>
#include <vector>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>
namespace ublas = boost::numeric::ublas;
std::vector<double> preMultiply(const std::vector<double> &v,
 Matrix &m)
{
    ublas::vector<double> vec(v.size());
    std::copy(v.begin(), v.end(), vec.begin());
    int d1 = m.numRows();
    int d2 = (int)m[0].size();
    ublas::matrix<double> M(d1, d2);
    for (int i = 0; i < d1; ++i)
    {
        for (int j = 0; j < d2; ++j)
        {
            M(i,j) = m[i][j];
        }
    }
    ublas::vector<double> pv = ublas::prod(vec, M);
    std::vector<double> result(pv.size());
    std::copy(pv.begin(), pv.end(), result.begin());
    return result;
}

The first step is to include the header files for the boost numerical libraries. (You also need to make sure that the program will link to the necessary libraries; check your boost documentation for details.) Then, a function called preMultiply is defined, receiving a vector and a matrix as its parameters.

Note

For more information about how to install and use boost libraries, check Chapter 14 of this book.

One of the first things this function needs to do is to convert the parameters into types required by the uBLAS library. In particular, uBLAS provides the vector<double> and matrix<double> types. You need to convert your data to these types before calling any uBLAS functions.

Once the data has been prepared, you may call the prod function from uBLAS, which knows how to calculate the product of a vector and a matrix. The result is then saved into an std::vector container and returned to the caller.

Complete Code

This section contains the complete code for the vector operations. These functions may be used as the basis for a complete LA package, which is a common requirement in the analysis of options and derivatives.

The code is spread over two source files—LAVectors.hpp is the header file and LAVectors.cpp is the implementation file—which you’ll find in Listings 9-2 and 9-3.
//
//  LAVectors.hpp
#ifndef LAVectors_hpp
#define LAVectors_hpp
#include <vector>
typedef std::vector<double> Vector;
// Scalar-to-vector operations
Vector add(double num, const Vector &v);
Vector multiply(double num, const Vector &v);
void in_place_add(double num, Vector &v);
void in_place_multiply(double num, Vector &v);
inline Vector operator +(double num, const Vector &v)
{
    return add(num, v);
}
inline Vector operator *(double num, const Vector &v)
{
    return multiply(num, v);
}
inline void operator +=(Vector &v, double num)
{
    in_place_add(num, v);
}
inline void operator *=(Vector &v, double num)
{
    in_place_multiply(num, v);
}
// Vector-to-vector operations
Vector add(const Vector &v1, const Vector &v2);
void  in_place_add(Vector &v1, const Vector &v2);
double product(const Vector &v1, const Vector &v2);
void  in_place_product(Vector &v1, const Vector &v2);
inline Vector operator +(const Vector &v1, const Vector &v2)
{
    return add(v1, v2);
}
inline void  operator +=(Vector &v1, const Vector &v2)
{
    in_place_add(v1, v2);
}
inline double operator *(const Vector &v1, const Vector &v2)
{
    return product(v1, v2);
}
inline void  operator *=(Vector &v1, const Vector &v2)
{
    in_place_product(v1, v2);
}
double norm(const Vector &v);
#endif /* LAVectors_hpp */
Listing 9-2

Header File LAVectors.hpp

//
//  LAVectors.cpp
#include "LAVectors.hpp"
#include <cassert>
#include <cmath>
//
// Adds a scalar number to a vector "v"
//
Vector add(double num, const Vector &v)
{
    int n = (int)v.size();
    Vector result(n);
    for (int i=0; i<n; ++i)
    {
        result[i] = v[i] + num;
    }
    return result;
}
//
// Premultiply a number "num" by the given vector "v"
//
Vector multiply(double num, const Vector &v)
{
    int n = (int)v.size();
    Vector result(n);
    for (int i=0; i<n; ++i)
    {
        result[i] = v[i] * num;
    }
    return result;
}
//
// Perform vector addition in place (modifying the given vector)
//
void in_place_add(double num, Vector &v)
{
    int n = (int)v.size();
    for (int i=0; i<n; ++i)
    {
        v[i] += num;
    }
}
//
// Perform vector multiplication in place
// (modifying the given vector)
//
void in_place_multiply(double num, Vector &v)
{
    int n = (int)v.size();
    for (int i=0; i<n; ++i)
    {
        v[i] *= num;
    }
}
//
// Perform vector addition of two vectors  (v1 and v2)
//
Vector add(const Vector &v1, const Vector &v2)
{
    int n = (int)v1.size();
    Vector result(n);
    for (int i=0; i<n; ++i)
    {
        result[i] = v1[i] + v2[i];
    }
    return result;
}
//
// Perform the vector product of vectors v1 and v2
//
double product(const Vector &v1, const Vector &v2)
{
    double result = 0;
    int n = (int)v1.size();
    for (int i=0; i<n; ++i)
    {
        result += v1[i] * v2[i];
    }
    return result;
}
//
// In-place addition of vectors v1 and v2
//
void  in_place_add(Vector &v1, const Vector &v2)
{
    int n = (int)v1.size();
    for (int i=0; i<n; ++i)
    {
        v1[i] += v2[i];
    }
}
//
// In-place product of vectors v1 and v2
//
void  in_place_product(Vector &v1, const Vector &v2)
{
    int n = (int)v1.size();
    for (int i=0; i<n; ++i)
    {
        v1[i] *= v2[i];
    }
}
//
// Computes the cross product for two three-dimensional vectors
//
Vector cross_prod_3D(const Vector &a, const Vector &b)
{
    assert(a.size()==3 && b.size()==3); // defined for 3D vectors only
    Vector v(3);  // the resulting vector
    v[0] = (a[1] * b[2] - a[2] * b[1]);
    v[1] = (a[2] * b[0] - a[0] * b[2]);
    v[2] = (a[0] * b[1] - a[1] * b[0]);
    return v;
}
//
// In-place version of cross product for 3D vectors
//
void in_place_cross_prod_3D(const Vector &a, const Vector &b,
                            Vector &v)
{
    assert(a.size()==3 && b.size()==3 && v.size()==3); // 3D vectors only
    v[0] = (a[1] * b[2] - a[2] * b[1]);
    v[1] = (a[2] * b[0] - a[0] * b[2]);
    v[2] = (a[0] * b[1] - a[1] * b[0]);
}
//
// Computes the norm of a vector
//
double norm(const Vector &v)
{
    double result = 0;
    int n = (int)v.size();
    for (int i=0; i<n; ++i)
    {
        result += v[i] * v[i];
    }
    return std::sqrt(result);
}
Listing 9-3

Implementation File LAVectors.cpp

Conclusion

In this chapter, you learned about linear algebra algorithms that are commonly used in the development of software for the analysis of options and other derivatives. Linear algebra provides many of the techniques that are applied to important problems such as option pricing and the numerical approximation of certain derivatives occurring in finance.

First, you learned about the basic algorithms that involve a vector and a scalar number. These operations can be implemented in C++ as free functions applied to standard vectors, as the examples in this chapter demonstrated.

Next, you learned how to implement a useful matrix data type. Matrices are not directly provided by the STL, but you can take advantage of the existing support for vectors as a building block for matrix representations. You also learned about the basic operations that can be performed on matrix objects.

Finally, I discussed linear algebra libraries that provide efficient implementations for some of the functionality discussed in the previous sections. In particular, BLAS has been created and improved by some of the greatest specialists in the implementation of numerical algorithms. The BLAS library is organized into different levels of support for linear algebra algorithms. You saw an example of how to take advantage of this highly optimized library to improve the performance of your own linear algebra code.

In the next chapter, you will learn about another building block for financial derivatives: numerical algorithms used to solve mathematical equations. This type of algorithm is at the core of many techniques used in the pricing of options and more exotic derivatives, as you will see in the next few chapters.