Thursday, May 19, 2016

This is an ex-blog.

Hi all,

I've moved my blog to a new home at http://implementingquantlib.com.

Please update your bookmarks and links accordingly. I'll leave this site up for a while, but it will eventually disappear.

Thanks to Blogger for providing hosting these past three years (and for letting me export the posts without hassles).

Thursday, March 17, 2016

QuantLib notebook: interest-rate sensitivities

Welcome back.

Long time no see, I know. When I said in my last post that blogging would resume after the holidays, I didn't mean quite this long after. Well, a number of things happened; the main one being that I'm co-authoring a new book with Goutham Balaraman. He's been publishing a number of IPython notebooks on his blog, as I did with my screencasts, so we decided to pool our material and publish it as (wait for it) the QuantLib Python Cookbook. The link will take you to a page on Leanpub where you can register and be notified when a first version of the book is published; soon, I hope.

Speaking of my screencasts: here is another. Set it to full screen, sit back and enjoy.


Follow me on Twitter if you want to be notified of new posts, or add me to your Google+ circles, or subscribe via RSS: the links for that are up in the sidebar. Also, make sure to check my Training page.


Thursday, December 17, 2015

Christmas break

Welcome back.

Just a short note to say that, as the song goes, I'll be home for Christmas. And unlike in the song, it won't be only in my dreams.


Blogging will resume after the holidays (not that I have a very regular schedule anyway, but still).

Have a wonderful time, everybody.

Follow me on Twitter if you want to be notified of new posts, or add me to your Google+ circles, or subscribe via RSS: the widgets for that are in the sidebar. Also, make sure to check my Training page.


Thursday, December 10, 2015

Screencast: my talk at the QuantLib user meeting 2015

Welcome back.

In my last post, I promised more info on my talk about QuantLib, the IPython Notebook and Docker at the QuantLib user meeting. Well, I managed to record a screencast on my laptop while I was presenting, so you can now watch and listen to it. It's just the good parts, too, as you can see the screen but not my face.

And by the way, most speakers have now contributed their slides, so when you're done here, go to the meeting page and have a good look.

Enjoy.





Follow me on Twitter if you want to be notified of new posts, or add me to your Google+ circles, or subscribe via RSS: the widgets for that are in the sidebar. Also, make sure to check my Training page.


Thursday, December 3, 2015

Report from the QuantLib user meeting in Düsseldorf

Hello everybody.

I'm back from the QuantLib user meeting (it was last Monday and Tuesday). While I was there, I was told that it must be nice to see all the things that people did with the library.

Oh, yes. Definitely.

As usual, it was great to be there. A good variety of talks, good organization, and on top of that, the chance to see people that I usually just exchange emails with. A big thanks goes, as always, to Michael von der Driesch for preparing the meeting and for keeping it running smoothly, as well as to this year's sponsors, IKB and CompatibL.

Here is a short summary of the event. For more details, you can check the meeting page on the QuantLib site; slides are being collected and published, so you should find them all there in a week or two.

The first day started with a talk by Jörg Kienitz. He described the recent research on SABR models and the ways that people are making them work with negative rates (one of the several themes of the meeting). There was a lot of stuff covered; you'll find it in the slides when they're available. As Jörg mentioned, QuantLib contains an implementation of the SABR and ZABR models, even though it's not up to date with all the recent developments.

The second talk was from Alexander Sokol, describing the work done by his company on the tapescript library. It provides a numerical type meant to be a drop-in replacement for double in AAD calculations; as a proof of concept, they managed to recompile the whole of QuantLib with it. It's an all-or-nothing approach that can be used to quickly convert an existing legacy library to AAD. The tapescript library is available on GitHub.

After lunch, the afternoon started with a joint talk by Ferdinando Ametrano and Paolo Mazzocchi describing their results on modeling tenor basis spread between overnight and forward curves. As it turns out, an abcd parameterization provides quite a nice fit of the observed basis and can be made an exact fit with some small correction coefficients. The approach looks promising, the slides are already published, and an initial implementation is available on Paolo's fork of QuantLib on GitHub (look for the new code in the ql/experimental/tenorbasis folder). A paper will follow.
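
For reference, the abcd function mentioned here is, I assume, the usual parameterization also used for volatilities, \( f(t) = (a + b\,t)\,e^{-c\,t} + d \); the four parameters give it its name.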

Next, a more technological talk. Eric Ehlers reported on the status of his reposit project, the successor to ObjectHandler to be used in future versions of QuantLibXL. It looks good. Also, Eric left the podium for part of his presentation to Cristian Alzati from Sayula, who demonstrated their joint work on the =countify platform, making Excel spreadsheets available on the cloud. QuantLibXL is one of the available addins.

In the final talk of the day, Klaus Spanderen and Johannes Göttker-Schnetmann presented the conclusion of the work, previewed at last year's meeting, on the calibration of stochastic local volatilities. Their code is available as a pull request, and will probably be merged in the next QuantLib release.

And then there was beer and a well-deserved rest.

The second day was opened by Peter Caspers, who made two short presentations because that's how he rolls. The first was on AAD, and described his efforts in enabling part of QuantLib to use it. His approach complements that of Alexander, and trades ease of conversion for better performance. You can read about it on his blog, too. I'll be watching closely for a possible synergy of the two approaches; in any case, it's nice to have the choice. In his second presentation, Peter reviewed the work done to enable QuantLib to work with negative rates. In short: we're almost there, but there are still a couple of pull requests missing.

Andreas Pfadler, the next speaker, had the best opening line of the meeting. Quoting from memory, it was something like "this work is the fruit of many lonely nights in a hotel". Andreas reported on his development of an open-source architecture for distributing calculations, using the QuantLib Java bindings among other tools and adding scripting capabilities via Scala. During his demo, he also took a technical glitch on his computer in stride, moving the demo to the Amazon cloud; another speaker later confessed over coffee that he would have been crushed by it.

In the last talk of the morning, Roland Lichters fought bravely with a sore throat to tell us about CSA pricing using QuantLib, especially in the context of the added complexity and the pricing changes caused by using negative rates for collateral discounting. The details are on the slides, and caused an interesting discussion afterward since there's not a lot of agreement on how the effects should be modeled.

In the afternoon, another couple of talks. In the first, Sebastian Schlenkrich came back to the topic of tenor basis spreads; his take was on transforming rate volatilities in this kind of model. Again, you'll be able to read all about it in his slides when they're available. In the second one, I briefly described how I used the IPython Notebook and Docker together with QuantLib. Stay tuned on this channel for more information.

And with that, we were off to our flights. Again, thanks to the organizers, the sponsors, the speakers, and all the participants. Here's to meeting again next year.

Follow me on Twitter if you want to be notified of new posts, or add me to your Google+ circles, or subscribe via RSS: the widgets for that are in the sidebar. Also, make sure to check my Training page.


Thursday, November 26, 2015

A quick look at the QuantLib 1.7 release

Hello everybody.

Last Monday, I released QuantLib 1.7 (you can download it here if you still haven't). As is becoming a tradition, here's a quick look at the release.

It comes five months after 1.6, which is not bad. (In the meantime, we also had two small releases in the 1.6 series for compatibility with the latest Boost and Visual C++.) It contains 53 issues and pull requests that you can examine in detail on GitHub; just follow the previous link. The usual git incantation shows that the release consists of 217 commits by 14 people (once you filter out a few duplicates).


Other people contributed bug reports or patches; their names, as far as I could find them, are in the list of changes for the release. I hope I didn't miss anyone; if so, I apologize.

There are two new features I'd like to point out in particular. Both are disabled by default, as they cause a performance penalty. The first is the addition of a time of day to the Date class, which finally makes it possible to price intraday options. The second is a reimplementation of the Observer pattern that makes it safe to use from C# or Java, whose garbage collectors had a habit of sometimes destroying objects during notification and bringing the whole thing crashing down. Both new features are mostly the work of Klaus Spanderen. You're encouraged to try them out; the release notes explain how to enable them (look for the instructions at the bottom of the file).
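
To give an idea of the first feature, here is a minimal sketch of what enabling it buys you; it assumes a library built with intraday dates turned on (again, see the release notes for the specifics) and the date and time here are made up, of course.
    #include <ql/quantlib.hpp>
    #include <iostream>
    using namespace QuantLib;

    int main() {
        // with intraday support enabled, the Date class grows an
        // extended constructor that also takes a time of day
        Date d(17, November, 2015, 14, 30, 0);  // 2:30 pm
        Settings::instance().evaluationDate() = d;
        std::cout << d << std::endl;
        return 0;
    }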

That's all for this post. See you after Düsseldorf.

Follow me on Twitter if you want to be notified of new posts, or add me to your Google+ circles, or subscribe via RSS: the widgets for that are in the sidebar. Also, make sure to check my Training page.


Thursday, November 19, 2015

Chapter 8, part 6 of n: example, American option

Welcome back.

This week, I finally continue with content from chapter 8 (the one on the finite-difference framework, a.k.a. the one that's taking forever).

The next post will probably be a report from the QuantLib User Meeting, in two or three weeks. There are probably still places available, so click here for more info.

Follow me on Twitter if you want to be notified of new posts, or add me to your Google+ circles, or subscribe via RSS: the widgets for that are in the sidebar. Also, make sure to check my Training page.

Example: American option

At this point, writing a finite-difference pricing engine should be just a matter of connecting the dots. Well, not quite. In this section, I'll sketch the implementation of an American-option engine in QuantLib, which is somewhat more complex than expected (as you can see for yourself in the following figure).


My reasons for doing this are twofold. On the one hand, I nearly got lost myself when I set out to write this chapter and went to read the code of the engine; so I thought it might be useful to put a map out here. On the other hand, this example will help me draw a comparison with the new (and more modular) framework.

Mind you, I'm not dissing the old implementation. The reason it got so complex was that we tried to abstract out reusable chunks of code, which makes perfect sense. The problem is that, although we didn't see it at the time, inheritance was probably the wrong way to do it.

Let's start with the FDVanillaEngine class, shown in the listing below. It can be used as a base class for both vanilla-option and dividend vanilla-option engines, which might explain why the name is not as specific as, say, FDVanillaOptionEngine. (We might just have decided to shorten the name, though. I don't think anybody remembers after all these years.)
    class FDVanillaEngine {
      public:
        FDVanillaEngine(
             const shared_ptr<GeneralizedBlackScholesProcess>&,
             Size timeSteps, Size gridPoints,
             bool timeDependent = false);
        virtual ~FDVanillaEngine() {}
      protected:
        virtual void setupArguments(
                        const PricingEngine::arguments*) const;
        virtual void setGridLimits() const;
        virtual void initializeInitialCondition() const;
        virtual void initializeBoundaryConditions() const;
        virtual void initializeOperator() const;

        shared_ptr<GeneralizedBlackScholesProcess> process_;
        Size timeSteps_, gridPoints_;
        bool timeDependent_;
        mutable Date exerciseDate_;
        mutable boost::shared_ptr<Payoff> payoff_;
        mutable TridiagonalOperator finiteDifferenceOperator_;
        mutable SampledCurve intrinsicValues_;
        typedef BoundaryCondition<TridiagonalOperator> bc_type;
        mutable std::vector<boost::shared_ptr<bc_type> > BCs_;

        virtual void setGridLimits(Real, Time) const;
        virtual Time getResidualTime() const;
        void ensureStrikeInGrid() const;
      private:
        Size safeGridPoints(Size gridPoints,
                            Time residualTime) const;
    };
This class builds most of the pieces required for a finite-difference model, based on the data passed to its constructor: a Black-Scholes process for the underlying, the number of desired time steps and grid points, and a flag that I'm going to ignore until the next subsection. Besides the passed inputs, the data members of the class include information to be retrieved from the instrument (that is, the exercise date and payoff) and the pieces of the model to be built: the differential operator, the boundary conditions, and the array of initial values corresponding to the intrinsic values of the payoff. The latter array is stored in an instance of the SampledCurve class, which adds a few utility methods to the stored data.

The rest of the class interface is made of protected methods that build and operate on the data members. I'll just go over them quickly: you can read their implementation in the library for more details.

First, the spectacularly misnamed setupArguments method does the opposite of its namesake in the Instrument class: it reads the required exercise and payoff information from the passed arguments structure and copies them into the corresponding data members of FDVanillaEngine.
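
In essence, it boils down to the following sketch (the actual code adds a few checks and details):
    void FDVanillaEngine::setupArguments(
                    const PricingEngine::arguments* a) const {
        // downcast the generic arguments to the option-specific ones...
        const OneAssetOption::arguments* args =
            dynamic_cast<const OneAssetOption::arguments*>(a);
        QL_REQUIRE(args, "incorrect argument type");
        // ...and copy the relevant information into data members
        exerciseDate_ = args->exercise->lastDate();
        payoff_ = args->payoff;
    }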

The setGridLimits method determines and stores the minimum and maximum value of the logarithmic model grid, based on the variance of the passed process over the residual time of the option. The calculation enforces that the current value of the underlying is at the center of the grid, that the strike value is within its range, and that the number of its points is large enough. (I'd note that the method might override the number of grid points passed by the user. In hindsight, I'm not sure that doing it silently is a good idea.) The actual work is delegated to a number of other methods: an overloaded version of setGridLimits, safeGridPoints, and ensureStrikeInGrid.
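
Stripped of the safety checks, the gist of the calculation is something like the sketch below; the coverage factor and the strike variable are illustrative, so look at the library code for the real thing.
    // a simplified sketch of the limit calculation; in the actual
    // code, the strike is extracted from the stored payoff
    Time t = getResidualTime();
    Real variance = process_->blackVolatility()
                            ->blackVariance(t, strike);
    Real stdDev = std::sqrt(variance);
    Real center = std::log(process_->x0());
    // span a few standard deviations around the current value...
    Real xMin = center - 4.0*stdDev;
    Real xMax = center + 4.0*stdDev;
    // ...then widen the range if the strike falls outside it,
    // which is the job of ensureStrikeInGrid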

The initializeInitialCondition method fills the array of intrinsic values by sampling the payoff on the newly specified grid; thus, it must be called after the setGridLimits method.
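
In essence, it performs the equivalent of the loop below; the actual code goes through the interface of the SampledCurve class.
    // sample the payoff at each point of the logarithmic grid
    for (Size i=0; i < grid.size(); ++i)
        intrinsicValues[i] = (*payoff_)(std::exp(grid[i]));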

The initializeBoundaryConditions method, to be called as the next step, instantiates the lower and upper boundary conditions. They're both Neumann conditions, and the value of the derivative to be enforced is calculated numerically from the array of intrinsic values.
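
A sketch of the resulting setup, with v and x standing for the arrays of intrinsic values and grid points:
    // Neumann conditions whose derivatives are estimated
    // numerically at the two ends of the grid
    BCs_[0] = shared_ptr<bc_type>(
        new NeumannBC((v[1]-v[0])/(x[1]-x[0]),
                      NeumannBC::Lower));
    BCs_[1] = shared_ptr<bc_type>(
        new NeumannBC((v[n-1]-v[n-2])/(x[n-1]-x[n-2]),
                      NeumannBC::Upper));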

Finally, the initializeOperator method creates the tridiagonal operator based on the calculated grid and the stored process. Again, the actual work is delegated: in this case, to the OperatorFactory class, which I'll describe later.
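
For a process with constant coefficients, the operator being built is equivalent to the sketch below (with illustrative local variables; DZero and DPlusDMinus are the basic first- and second-derivative operators on a uniform grid). The time-dependent case is more involved.
    // the Black-Scholes-Merton operator on a logarithmic grid
    Real nu = r - q - 0.5*sigma*sigma;  // drift of log(S)
    TridiagonalOperator L =
          r*TridiagonalOperator::identity(n)
        - nu*DZero(n, dx)
        - 0.5*sigma*sigma*DPlusDMinus(n, dx);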

All of these methods are declared as virtual, so that the default implementations can be overridden if needed. This is not optimal: in order to change any part of the logic one has to use inheritance, which introduces an extra concept just for customization and doesn't lend itself to different combinations of changes. A Strategy pattern would be better, and would also make some of the logic more reusable by other instruments.
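
To give an idea, a Strategy-based interface might look something like the following sketch; needless to say, the names are hypothetical.
    // hypothetical sketch: each setup step becomes an object that
    // the engine stores and calls, and that can be replaced
    // independently of the others
    class GridSetup {
      public:
        virtual ~GridSetup() {}
        virtual Array operator()(
            const GeneralizedBlackScholesProcess& process,
            Time residualTime, Size gridPoints) const = 0;
    };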

All in all, though, the thing is manageable: see the FDEuropeanEngine class, shown in the next listing, which can be implemented in a reasonable amount of code.
    template <template <class> class Scheme = CrankNicolson>
    class FDEuropeanEngine : public OneAssetOption::engine,
                             public FDVanillaEngine {
      public:
        FDEuropeanEngine(
             const shared_ptr<GeneralizedBlackScholesProcess>&,
             Size timeSteps=100, Size gridPoints=100,
             bool timeDependent = false);
      private:
        mutable SampledCurve prices_;
        void calculate() const {
            setupArguments(&arguments_);
            setGridLimits();
            initializeInitialCondition();
            initializeOperator();
            initializeBoundaryConditions();

            FiniteDifferenceModel<Scheme<TridiagonalOperator> >
            model(finiteDifferenceOperator_, BCs_);

            prices_ = intrinsicValues_;

            model.rollback(prices_.values(), getResidualTime(),
                           0, timeSteps_);

            results_.value = prices_.valueAtCenter();
            results_.delta = prices_.firstDerivativeAtCenter();
            results_.gamma = prices_.secondDerivativeAtCenter();
            results_.theta = blackScholesTheta(process_,
                                               results_.value,
                                               results_.delta,
                                               results_.gamma);
        }
    };
Its calculate method sets everything up by calling the appropriate methods from FDVanillaEngine, creates the model, starts from the intrinsic value of the option at maturity and rolls it back to the evaluation date. The value and a couple of Greeks are extracted by the corresponding methods of the SampledCurve class, and the theta is calculated from the relationship that the Black-Scholes equation imposes between it and the other results. (By replacing the derivatives with the corresponding Greeks, the Black-Scholes equation says that \( \Theta + \frac{1}{2} \sigma^2 S^2 \Gamma + (r-q)S\Delta -rV = 0 \).)
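
Solving the equation for \( \Theta \), the calculation boils down to the expression below (sketched here with hypothetical local variables):
    // theta as implied by the Black-Scholes equation,
    // given the value and the other Greeks
    Real theta = r*value - (r - q)*S*delta
               - 0.5*sigma*sigma*S*S*gamma;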

What about the figure above, then? How do we get three levels of inheritance between FDVanillaEngine and FDAmericanEngine? It was due to the desire to reuse whatever pieces of logic we could. As I said, the idea was correct: there are other options in the library that use part of this code, such as shout options or options with discrete dividends. The architecture, though, could be improved.

First, we have the FDStepConditionEngine, sketched in the following listing.
    template <template <class> class Scheme = CrankNicolson>
    class FDStepConditionEngine : public FDVanillaEngine {
      public:
        FDStepConditionEngine(
             const shared_ptr<GeneralizedBlackScholesProcess>&,
             Size timeSteps, Size gridPoints,
             bool timeDependent = false);
      protected:
        // ...data members...
        virtual void initializeStepCondition() const = 0;
        virtual void calculate(PricingEngine::results* r) const {
            OneAssetOption::results* results =
                dynamic_cast<OneAssetOption::results*>(r);
            setGridLimits();
            initializeInitialCondition();
            initializeOperator();
            initializeBoundaryConditions();
            initializeStepCondition();

            typedef /* ... */ model_type;

            prices_ = intrinsicValues_;
            controlPrices_ = intrinsicValues_;
            // ...more setup (operator, BC) for control...

            model_type model(operatorSet, bcSet);
            model.rollback(arraySet, getResidualTime(),
                           0.0, timeSteps_, conditionSet);

            results->value = prices_.valueAtCenter()
                - controlPrices_.valueAtCenter()
                + black.value();
            // same for Greeks
        }
    };
It represents a finite-difference engine in which a step condition is applied at each step of the calculation. In its calculate method, it implements the bulk of the pricing logic—and then some. First, it sets up the data members by calling the methods inherited from FDVanillaEngine, as well as an initializeStepCondition method that it declares as pure virtual and that derived classes must implement: it must create an instance of the StepCondition class appropriate for the given engine. Then, it creates two arrays of values: the first for the option being priced, and the second for a European option that will be used as a control variate (this also requires setting up a corresponding operator, as well as a pricer object implementing the analytic Black formula). Finally, the model is created and used for both arrays, with the step condition being applied only to the first one, and the results are extracted and corrected for the control variate.

I don't have any particular bone to pick with this class, except for the name, which is far too generic. I'll just add a note on the usage of control variates. We have already seen the technique in a previous post, where it was used to narrow down the width of the simulated price distribution; here it is used to improve the numerical accuracy. It is currently forced upon the user, since there's no flag for enabling or disabling it; and it is relatively more costly than in Monte Carlo simulations (there, the path generation is the bulk of the computation and is shared between the option and the control; here, using the technique almost doubles the computational effort). The decision of whether it's worth using should probably be left to the user. Also, we should use temporary variables for the control data instead of declaring other mutable data members; they're turning into a bad habit.
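
For what it's worth, leaving the choice to the user might be as simple as the hypothetical constructor change sketched below; the last parameter is not currently in the library.
    FDStepConditionEngine(
         const shared_ptr<GeneralizedBlackScholesProcess>&,
         Size timeSteps, Size gridPoints,
         bool timeDependent = false,
         bool controlVariate = true);  // hypothetical flag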

Next, the FDAmericanCondition class template, shown in the next listing.
    template <typename baseEngine>
    class FDAmericanCondition : public baseEngine {
      public:
        FDAmericanCondition(
             const shared_ptr<GeneralizedBlackScholesProcess>&,
             Size timeSteps = 100, Size gridPoints = 100,
             bool timeDependent = false);
      protected:
        void initializeStepCondition() const;
    };
It takes its base class as a template argument (in our case, it will be FDStepConditionEngine) and provides the initializeStepCondition method, which creates an instance of the AmericanCondition class. Unfortunately, the name FDAmericanCondition is quite confusing: it suggests that the class is a step condition, rather than a building block for a pricing engine.
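
For the record, the condition itself boils down to flooring the rolled-back values at the intrinsic ones after each step; a sketch of its applyTo method follows.
    // sketch of the step condition for American exercise
    void applyTo(Array& a, Time) const {
        for (Size i=0; i < a.size(); ++i)
            a[i] = std::max(a[i], intrinsicValues_[i]);
    }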

The next to last step is the FDEngineAdapter class template.
    template <typename base, typename engine>
    class FDEngineAdapter : public base, public engine {
      public:
        FDEngineAdapter(
             const shared_ptr<GeneralizedBlackScholesProcess>& p,
             Size timeSteps=100, Size gridPoints=100,
             bool timeDependent = false)
        : base(p, timeSteps, gridPoints, timeDependent) {
            this->registerWith(p);
        }
      private:
        void calculate() const {
            base::setupArguments(&(this->arguments_));
            base::calculate(&(this->results_));
        }
    };
It connects an implementation and an interface by taking them as template arguments and inheriting from both: in this case, we'll have FDAmericanCondition as the implementation and OneAssetOption::engine as the interface. The class also provides a bit of glue code in its calculate method that satisfies the requirements of the engine interface by calling the methods of the implementation.

Finally, the FDAmericanEngine class just inherits from FDEngineAdapter and specifies the classes to be used as bases.
    template <template <class> class Scheme = CrankNicolson>
    class FDAmericanEngine
        : public FDEngineAdapter<
                     FDAmericanCondition<
                               FDStepConditionEngine<Scheme> >,
                     OneAssetOption::engine> {
      public:
        FDAmericanEngine(
             const shared_ptr<GeneralizedBlackScholesProcess>&,
             Size timeSteps=100, Size gridPoints=100,
             bool timeDependent = false);
    };
The question is whether it is worth increasing the complexity of the hierarchy in order to reuse the bits of logic in the base classes. I'm not sure I have an answer, but I can show an alternate implementation and let you make the comparison on your own. If we let FDAmericanEngine inherit directly from FDStepConditionEngine and OneAssetOption::engine, and if we move into this class the code from both FDAmericanCondition and FDEngineAdapter (which we can then remove), we obtain the implementation in the listing below.
    template <template <class> class Scheme = CrankNicolson>
    class FDAmericanEngine
        : public FDStepConditionEngine<Scheme>,
          public OneAssetOption::engine {
        typedef FDStepConditionEngine<Scheme> fd_engine;
      public:
        FDAmericanEngine(
             const shared_ptr<GeneralizedBlackScholesProcess>& p,
             Size timeSteps=100, Size gridPoints=100,
             bool timeDependent = false)
        : fd_engine(p, timeSteps, gridPoints, timeDependent) {
            this->registerWith(p);
        }
      protected:
        void initializeStepCondition() const;
        void calculate() const {
            fd_engine::setupArguments(&(this->arguments_));
            fd_engine::calculate(&(this->results_));
        }
    };
My personal opinion? I tend to lean towards simplicity in my old age. The code to be replicated would be little, and the number of classes that reuse it is not large (about half a dozen in the current version of the library). Moreover, the classes that we'd remove (FDAmericanCondition and FDEngineAdapter) don't really model a concept in the domain, so I'd let them go without any qualms. There might be such a thing as too much reuse, after all, when there's no proper abstraction behind it.

A final note: as you can see, in this framework there are no high-level classes encapsulating generic model behavior, such as McSimulation for Monte Carlo (see this post). Whatever logic we had here was written in classes meant for a specific instrument—in this case, plain options in a Black-Scholes-Merton model.
