Monday, June 29, 2015

A quick look at the QuantLib 1.6 release

Hello everybody.

Unlike in Westeros, summer is coming.  My posting schedule will be more relaxed, as usual, so don't hold your breath waiting for the next posts (hint: subscribe instead, and you'll be notified whenever I post).

A bit of news: the guys at Quants Hub are happy with the success of their new pricing structure, and they're extending it to July 31st. This means that, for the whole of next month, you can still get my video workshop for £99.

However, the bigger piece of news is that last Tuesday I released QuantLib 1.6; you can download it at this link if you haven't already. Thanks to all those who contributed. In this post, I'll share some information on the release (like I did last time).

I'm happy to report that we managed to decrease the distance between releases. This one comes out four and a half months after QuantLib 1.5, which is much better than the one-year hiatus between the 1.4 and 1.5 releases. However, this doesn't mean that the 1.6 release was a small one: it contains 65 issues and pull requests (follow the link to see their list and examine the changes brought by any one of them). A couple of quick git commands also show a grand total of 382 commits by 16 contributors (git outputs 18 because it includes a couple of duplicates it doesn't know about).
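For the curious, the kind of commands I mean would be something like the following (I'm quoting from memory, and the exact tag names are an assumption on my part):
    git log --oneline QuantLib-v1.5..QuantLib-v1.6 | wc -l
    git shortlog -s QuantLib-v1.5..QuantLib-v1.6 | wc -l
The first counts the commits between the two release tags, and the second the contributors that authored them.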


As usual, the list doesn't include contributors who reported bugs, suggested changes, or even contributed patches but didn't actually author the relevant commits. Their names are in the list of changes for the 1.6 release; apologies if I missed anyone.

As I write, there are already 27 open pull requests in the queue for the next release. Stay tuned.

Again, thanks to all those who contributed to the 1.6 release!

Follow me on Twitter if you want to be notified of new posts, or add me to your Google+ circles, or subscribe via RSS: the widgets for that are in the sidebar. Also, make sure to check my Training page.


Monday, June 22, 2015

Chapter 8, part 2 of n: evolution schemes

Welcome back.

This week, the second part of the series on the finite-differences framework that started in the last post. And also this week, I'll be releasing QuantLib 1.6. Stay tuned.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Evolution schemes

In a partial differential equation \( {\partial f}/{\partial t} = L f \), the operators in the previous subsection provide a discretization of the derivatives on the right-hand side. Evolution schemes discretize the time derivative on the left-hand side instead.

As you know, finite-difference methods in finance start from a known state \( \mathbf{f}(T) \) at maturity (the payoff of the derivative) and evolve backwards to today's date. At each step, we need to evaluate \( \mathbf{f}(t) \) based on \( \mathbf{f}(t+\Delta t) \).

The simplest way is to approximate the equation above as
$$
\frac{\mathbf{f}(t+\Delta t) - \mathbf{f}(t)}{\Delta t} =
L \cdot \mathbf{f}(t+\Delta t)
$$
which simplifies to \( \mathbf{f}(t) = \mathbf{f}(t+\Delta t) - \Delta t \cdot L \cdot \mathbf{f}(t+\Delta t) \), or \( \mathbf{f}(t) = (I - \Delta t \cdot L) \cdot \mathbf{f}(t+\Delta t) \). It's a simple matrix multiplication, and updating the function array translates into
    a = (I - dt*L).applyTo(a);
(in actual code, of course, the I - dt*L operator would be precalculated and stored across time steps).
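To make this concrete, here's a minimal sketch of the corresponding rollback; it's not actual library code, just an illustration assuming an evenly spaced time grid and leaving boundary conditions aside:
    // build the explicit operator once...
    TridiagonalOperator P =
        TridiagonalOperator::identity(L.size()) - dt*L;
    // ...then reuse it at each step backwards in time
    for (Size i = 0; i < timeSteps; ++i)
        a = P.applyTo(a);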

A slightly less simple way is to write
$$
\frac{\mathbf{f}(t+\Delta t) - \mathbf{f}(t)}{\Delta t} =
L \cdot \mathbf{f}(t)
$$
which results in \( (I + \Delta t \cdot L) \cdot \mathbf{f}(t) = \mathbf{f}(t + \Delta t) \). This is a more complex step, since it requires solving a linear system (or, equivalently, inverting the operator); but given the interface of our operators, the update translates equally simply into
    a = (I + dt*L).solveFor(a);
Now, this is where using tridiagonal operators pays off. Let me oversimplify, so I can keep a long story short: schemes of the first kind (called explicit schemes) are simpler but less stable. Implicit schemes, instead (those of the second kind), are more stable and can be used with larger time steps. If the operator were a generic one, the price for an implicit step would be \( O(N^3) \) in the size of the grid, making it impractical when compared to the \( O(N^2) \) cost for an explicit step. For tridiagonal operators, though, both methods cost \( O(N) \), which allows us to reap the benefits of implicit steps with no additional price.

Going back to the library: the two schemes above (called the explicit and implicit Euler schemes, respectively) are both available. They are two particular specializations of a generic MixedScheme class template, shown in the listing that follows, which models the discretization scheme
$$
\frac{\mathbf{f}(t+\Delta t) - \mathbf{f}(t)}{\Delta t} =
L \cdot \left[(1-\theta) \cdot \mathbf{f}(t+\Delta t)
+ \theta \cdot \mathbf{f}(t) \right]
$$
where an implicit and explicit step are mixed. As you can see from the formula, \( \theta = 0 \) gives the explicit Euler scheme and \( \theta = 1 \) gives implicit Euler; any other value in the middle can be used (for instance, \( 1/2 \) gives the Crank-Nicolson scheme).
    template <class Operator>
    class MixedScheme  {
      public:
        typedef OperatorTraits<Operator> traits;
        typedef typename traits::operator_type operator_type;
        typedef typename traits::array_type array_type;
        typedef typename traits::bc_set bc_set;
        typedef typename traits::condition_type condition_type;

        MixedScheme(const operator_type& L,
                    Real theta,
                    const bc_set& bcs)
        : L_(L), I_(operator_type::identity(L.size())),
          dt_(0.0), theta_(theta) , bcs_(bcs) {}

        void setStep(Time dt) {
            dt_ = dt;
            if (theta_!=1.0) // there is an explicit part
                explicitPart_ = I_-((1.0-theta_) * dt_)*L_;
            if (theta_!=0.0) // there is an implicit part
                implicitPart_ = I_+(theta_ * dt_)*L_;
        }
        void step(array_type& a, Time t) {
            if (theta_!=1.0) // there is an explicit part
                a = explicitPart_.applyTo(a);
            if (theta_!=0.0) // there is an implicit part
                implicitPart_.solveFor(a, a);
        }
      protected:
        operator_type L_, I_, explicitPart_, implicitPart_;
        Time dt_;
        Real theta_;
        bc_set bcs_;
    };

    template <class Operator>
    class ExplicitEuler : public MixedScheme<Operator> {
      public:
        // typedefs, not shown
        ExplicitEuler(const operator_type& L,
                      const bc_set& bcs)
        : MixedScheme<Operator>(L, 0.0, bcs) {}
    };
As I mentioned before, schemes take the type of their differential operators as a template argument. The MixedScheme class does that as well, and extracts a number of types from the operator class by means of the OperatorTraits template (that I'll describe later on).

The constructor of the MixedScheme class takes the differential operator \( L \), the value of \( \theta \), and a set of boundary conditions (more on that later) and stores them. Another method, setStep, stores the \( \Delta t \) to be used as time step and precomputes the operators \( I - (1-\theta) \cdot \Delta t \cdot L \) and \( I + \theta \cdot \Delta t \cdot L \) on which the applyTo and solveFor methods will be called; the two special cases \( \theta = 0 \) and \( \theta = 1 \) are checked in order to avoid unnecessary computations. This is done in a separate method in case we need to change the time step in the middle of a simulation; by being able to reset the time step, we can reuse the same scheme instance.

Finally, the step method performs the actual evolution of the function array. The implementation shown in the listing is just a basic sketch that I'll be improving in the next subsections; here, it just performs the explicit and implicit parts of the scheme, in this order (again, avoiding unnecessary operations).

The few schemes provided by the library (such as the ExplicitEuler, shown in the listing) are all specializations of the MixedScheme class (I'm just talking about the old framework here; the new framework has different ones). Again, there's no base class for an evolution scheme; other parts of the framework will take them as a template argument and expect them to provide the setStep and step methods.
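To give you an idea of that implicit contract, a driver for the evolution might look like the following sketch (the names of the surrounding variables are made up, and once again I'm leaving boundary conditions aside):
    // scheme built from a differential operator and boundary conditions
    ImplicitEuler<TridiagonalOperator> scheme(L, bcs);
    scheme.setStep(dt);
    Time t = T;                        // start from maturity...
    for (Size i = 0; i < timeSteps; ++i, t -= dt)
        scheme.step(f, t);             // ...and evolve f back one step at a time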


Monday, June 15, 2015

Chapter 8, part 1 of n: the finite-differences framework

Welcome back.

As I write, release candidates for QuantLib 1.6 have been out for a week or two (if you haven't heard about this already, you might want to subscribe to the QuantLib mailing list). If you have a few cycles to spare, please download it, give it a try and report any problems to the mailing list. The official 1.6 release should be out in a week or two. (Brace yourself for new developments afterwards. Just saying.)

This week, some initial content from chapter 8 of my book. Yes, I finally sat down to write it. The first couple of subsections were pushed to the Leanpub edition a couple of weeks ago, with more coming during the summer (both there and here).

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

The finite-differences framework

The last framework I'll be writing about was, in fact, the first to appear in the library. Of course, it was not a framework back then; it was just a bunch of code hard-coded inside a single option pricer.

Since then, the reusable parts of the code were reorganized and grew in complexity—probably to the point where it's hard to find one's way inside the architecture. A few years ago, a new version of the framework was contributed adding 2-dimensional finite-difference models (which were absent in the first version) and sporting a more modular architecture.

The two versions of the framework still coexist in the library. In this chapter, I'll describe both of them—while hoping for the new one to replace the first some day.

The old framework

As I mentioned in a previous post, our master plan to create a common framework for trees and finite-differences models never came to fruition. Instead, the finite-differences code stayed closer to its underlying mathematics and implemented concepts such as differential operators, boundary conditions and so on. I'll tackle one of them in each of the following subsections.

Differential operators

If I were to start at the very beginning, I'd describe the Array class here; but it behaves as you can imagine, so you'll forgive me if I leave its description to a later post and just note that it's used here to discretize the function \( f(x) \) on a grid \( \{ x_0 \ldots x_{N-1} \} \) as the array \( \mathbf{f} \), with \( \mathbf{f}_i = f(x_i) \) for any \( i \).

Onwards to operators. Now, I'm probably preaching to the choir, so I'll keep the math to a minimum to avoid telling you lots of stuff you know already. (And if you don't, there's no way I can tell you all you need. You can refer to any of the several books on finite differences; for instance, Duffy, 2006 [1]).

In short: a differential operator transforms a function \( f(x) \) into one of its derivatives, say, \( f^{\prime}(x) \). The discretized version transforms \( \mathbf{f} \) into \( \mathbf{f^{\prime}} \), and since differentiation is linear, it can be written as a matrix. In general, the operator doesn't give the exact discretization of the derivative but just an approximation; that is, \( \mathbf{f^{\prime}}_i = f^{\prime}(x_i) + \epsilon_i \), where the error terms can be reduced by decreasing the spacing of the grid.

QuantLib provides differential operators for the first derivative \( \partial/\partial x \) and the second derivative \( \partial^2/\partial x^2 \). They are called \( D_0 \) and \( D_+D_- \), respectively, and are defined as:
$$
\frac{\partial f}{\partial x}(x_i) \approx D_0 \mathbf{f}_i
= \frac{\mathbf{f}_{i+1}-\mathbf{f}_{i-1}}{2h} \qquad
\frac{\partial^2 f}{\partial x^2}(x_i) \approx D_+D_- \mathbf{f}_i
= \frac{\mathbf{f}_{i+1}-2\mathbf{f}_i+\mathbf{f}_{i-1}}{h^2}
$$
where \( h = x_i - x_{i-1} \) (we're assuming that the grid is evenly spaced). Taylor expansion shows that the error is \( O(h^2) \) for both of them.
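As a quick sanity check, here is how the two operators can be exercised; it's a sketch, using the DZero and DPlusDMinus classes I'll come back to later (I'm assuming their constructors take the grid size and spacing) and ignoring the first and last points, where lower-order one-sided formulas are used:
    Size N = 101; Real h = 0.01;
    Array f(N);
    for (Size i = 0; i < N; ++i) {
        Real x = i*h;
        f[i] = x*x*x;                          // f(x) = x^3
    }
    Array df  = DZero(N, h).applyTo(f);        // ~ 3x^2 away from the edges
    Array d2f = DPlusDMinus(N, h).applyTo(f);  // ~ 6x away from the edges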

As you can see, the value of the derivative at any given index \( i \) (except the first and last, which I'll ignore for now) only depends on the values of the function at the same index, the one that precedes it, and the one that follows. This makes \( D_0 \) and friends tridiagonal: the structure of such operators can be sketched as
$$
\left(
\begin{array}{ccccccc}
d_0 & u_0 & & & & & \\
l_0 & d_1 & u_1 & & & & \\
& l_1 & d_2 & u_2 & & & \\
& & \ddots & \ddots & \ddots & & \\
& & & l_{n-4} & d_{n-3} & u_{n-3} & \\
& & & & l_{n-3} & d_{n-2} & u_{n-2} \\
& & & & & l_{n-2} & d_{n-1}
\end{array}
\right)
$$

They have a number of desirable properties, including the fact that a linear combination of tridiagonal operators (such as the one found, say, in the Black-Scholes-Merton equation) is still tridiagonal; thus, instead of using a generic Matrix class (which does exist, but is not used here), we made them instances of a specific class called TridiagonalOperator, shown in the following listing.
    class TridiagonalOperator {
        Size n_;
        Array diagonal_, lowerDiagonal_, upperDiagonal_;

      public:
        typedef Array array_type;

        explicit TridiagonalOperator(Size size = 0);
        TridiagonalOperator(const Array& low,
                            const Array& mid,
                            const Array& high);

        static Disposable<TridiagonalOperator>
        identity(Size size);

        Size size() const;
        const Array& lowerDiagonal() const;
        const Array& diagonal() const;
        const Array& upperDiagonal() const;

        void setFirstRow(Real, Real);
        void setMidRow(Size, Real, Real, Real);
        void setMidRows(Real, Real, Real);
        void setLastRow(Real, Real);

        Disposable<Array> applyTo(const Array& v) const;
        Disposable<Array> solveFor(const Array& rhs) const;
        void solveFor(const Array& rhs, Array& result) const;

      private:
        friend Disposable<TridiagonalOperator>
        operator+(const TridiagonalOperator&,
                  const TridiagonalOperator&);
        friend Disposable<TridiagonalOperator>
        operator*(Real, const TridiagonalOperator&);
        // other operators
    };
As you can see from the private members, the representation of the operator reflects its tridiagonal structure: instead of storing all the mostly null elements of the matrix, we just store three arrays corresponding to the diagonal, lower diagonal, and upper diagonal; that is, the \( d_i \), \( l_i \) and \( u_i \) of the earlier sketch.

The constructors and the first bunch of methods deal with building and inspecting the operator. The first constructor sets the size of the matrix (that is, of the three underlying arrays) and trusts the values to be filled later, by means of the provided setters: referring to the figure again, the setFirstRow method sets \( d_0 \) and \( u_0 \), the setLastRow method sets \( l_{n-2} \) and \( d_{n-1} \), setMidRow sets \( l_{i-1} \), \( d_i \) and \( u_i \) for a given \( i \), and the convenience method setMidRows sets them for all \( i \). The second constructor sets all three arrays in one fell swoop; and a factory method, identity, works as a third constructor by building and returning an operator with \( 1 \) on the diagonal and \( 0 \) elsewhere. The inspectors do the obvious thing and each return the corresponding data member. Oftentimes, the returned values are wrapped in a Disposable class template, in a possibly misguided attempt to avoid a few copies. Again, details will be in a future post.
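As an example of the setters at work, here's a sketch of how one might assemble the \( D_+D_- \) operator by hand on an evenly spaced grid with spacing h (if memory serves, the library's DPlusDMinus class does much the same):
    TridiagonalOperator D(N);
    D.setMidRows(1/(h*h), -2/(h*h), 1/(h*h)); // interior rows
    D.setFirstRow(0.0, 0.0);                  // boundary rows: placeholders,
    D.setLastRow(0.0, 0.0);                   // usually overridden by boundary conditions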

The applyTo and solveFor methods are the meat of the interface. Given a tridiagonal operator L and an array u, L.applyTo(u) returns the result \( L u \) of applying \( L \) to \( u \), while L.solveFor(u) returns the array \( v \) such that \( u = L v \) (the overloaded call L.solveFor(u,v) does the same while avoiding an allocation). Both operations will be used later, and for a tridiagonal operator they both have the nice property of being \( O(N) \) in time, that is, of only taking time proportional to the size \( N \) of the operator. The algorithms were taken from the classic Numerical Recipes in C [2]. Not the code, of course: besides not being free, the original code had a number of endearing old conventions (such as numbering arrays from 1, like in Downton Abbey) that wouldn't have matched the rest of the library.
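A round trip shows the two methods at work. In the sketch below, low, mid and high are arrays of the appropriate sizes (as in the second constructor shown earlier) and v is some given array:
    TridiagonalOperator L(low, mid, high);
    Array u = L.applyTo(v);    // u = Lv, in O(N) time
    Array w = L.solveFor(u);   // solves Lw = u, recovering v, also in O(N)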

Finally, the class defines a few operators. They make it possible to write numerical expressions such as 2*L1 + L2 that might be used to compose tridiagonal operators. To this effect, the library also provides a few building blocks such as the \( D_0 \) and \( D_+D_- \) operators I mentioned earlier; for instance, given the Black-Scholes equation, written in operator form as
$$
\frac{\partial f}{\partial t} =
\left[
- \left(r - q - \frac{\sigma^2}{2} \right) \frac{\partial}{\partial x}
- \frac{\sigma^2}{2} \frac{\partial^2}{\partial x^2}
+ r I
\right] f \equiv L_{BS} f
$$
one could define the corresponding \( L_{BS} \) operator as (N and h being the size and spacing of the grid)
    TridiagonalOperator L_BS = -(r-q-sigma*sigma/2)*DZero(N,h)
                               -(sigma*sigma/2)*DPlusDMinus(N,h)
                               +r*TridiagonalOperator::identity(N);
In this case, though, we didn't eat our own dog food. The library does define a Black-Scholes operator, but not through composition as above.

A final note: while they're probably the most performant, tridiagonal operators are not the only ones that could be used in a finite-difference scheme. However, there's no base class for such operators. The interface that they should implement (that is, applyTo and solveFor) is only defined implicitly, through its use in the template classes that we'll see in the next subsection. In hindsight, defining such a base class wouldn't have degraded performance; the cost of a virtual call to applyTo or solveFor would have been negligible, compared to the cost of the actual calculation. But we were eagerly reading about template numerical techniques at that time; and like the two guys in the Wondermark strip, we decided to jingle all the way.

Bibliography

[1] D. J. Duffy, Finite Difference Methods in Financial Engineering: A Partial Differential Equation Approach. John Wiley and Sons, 2006.
[2] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C, 2nd edition. Cambridge University Press, 1992.


Monday, June 1, 2015

An IPython notebook server with QuantLib on Docker

Welcome back.

This past couple of weeks, I've been playing with a new tool. Well, new for me anyway. But before I forget: there are still places available for my Introduction to QuantLib Development course, so click here for more info and for registering.

Back on topic. Now, if you don't know what Docker is, I can't possibly do it justice here: the best I can do is to point you to its documentation, which is very clear and got me up and running in no time.

If you're into Docker already, instead, the bit of news in this post is that I've pushed to the Docker Hub an image of an IPython notebook server with the QuantLib module installed. You can get it by running
    docker pull lballabio/quantlib-notebook
and waiting as it downloads. The image is set to run the notebook server and listen on container port 8888. It also exports the directory /notebooks as a volume, so that it can be mapped to a directory on your computer. So if you say, for instance,
    docker run -P lballabio/quantlib-notebook
you'll have the server up and running, and you'll be able to connect from your browser (the host address and port will depend on your box; the docs will be your friend).
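If you want more control, something like the following should map the container port to the same port on the host and mount one of your own directories as the notebook folder (the host path below is a placeholder, of course):
    docker run -d -p 8888:8888 -v /path/to/your/notebooks:/notebooks lballabio/quantlib-notebook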


You'll find on the server a sample notebook demonstrating that the QuantLib module works—or your notebooks, if you mapped the /notebooks folder to one of yours.

That was it, I guess. I'm not sure how much of a geek I am for fawning over this, but I'll leave this question for another day.

Oh, and if I know Dirk Eddelbuettel, a Docker image for RQuantLib is not far away.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.
