Monday, December 23, 2013

Intermission: book update and holiday hiatus

Hello, everybody.

Not much to report this week. On the book front, I've uploaded chapter 5 (previously available only as a series of posts) as a PDF file; it's on the book page with the others. Also, I've added a bibliography page to this blog; this makes it easier for me to provide links to the sources I'm citing.

In other news: like many others all over the world, the Ballabio household is celebrating Christmas.


The schedule of this blog will be affected: I won't be posting over the holiday. You'll hear from me again in January. Have a nice holiday, everybody!

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.


Monday, December 16, 2013

Odds and ends: error reporting

Welcome back.

In this post, some old content from appendix A of the book, Odds and ends. Yes, this is what's affectionately known as a filler post. It's technical, rather than finance-related; but it's about a part of coding that everybody bumps into eventually, and I hope that you will find it interesting.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Error reporting

There are a great many places in the library where some condition must be checked. Rather than doing it as
    if (i >= v.size())
        throw Error("index out of range");
we wanted to express the intent more clearly, i.e., with a syntax like
    require(i < v.size(), "index out of range");
where on the one hand, we write the condition to be satisfied and not its opposite; and on the other hand, terms such as require, ensure, or assert—which have a somewhat canonical meaning in programming—would tell whether we're checking a precondition, a postcondition, or a programmer error.

We provided the desired syntax with macros. "Get behind thee", I hear you say. True, macros have a bad name, and in fact they caused us a problem or two, as we'll see below. But in this case, functions had a big disadvantage: they evaluate all their arguments. Many times, we want to create a moderately complex error message, such as:
    require(i < v.size(),
            "index " + to_string(i) + " out of range");
If require were a function, the message would be built whether or not the condition is satisfied, causing a performance hit that would not be acceptable. With a macro, the above is textually replaced by something like:
    if (!(i < v.size()))
        throw Error("index " + to_string(i) + " out of range");
which builds the message only if the condition is violated.
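
For what it's worth, a function-based design could achieve the same laziness by taking a callable instead of a ready-made string. The sketch below shows the idea; it's not something the library provides, and the lambda in the usage example requires C++11.
    #include <stdexcept>
    #include <string>

    // Not in QuantLib: a function that defers message construction
    // by taking a callable and invoking it only on failure.
    template <class MessageBuilder>
    void require(bool condition, MessageBuilder buildMessage) {
        if (!condition)
            throw std::runtime_error(buildMessage());
    }

    // usage: the message is built only if the check fails
    // require(i < v.size(),
    //         [&]{ return "index " + std::to_string(i) + " out of range"; });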

Listing A.3 shows the current version of one of the macros, namely, QL_REQUIRE; the other macros are defined in a similar way.

Listing A.3: Definition of the QL_REQUIRE macro.
    #define QL_REQUIRE(condition,message) \
    if (!(condition)) { \
        std::ostringstream _ql_msg_stream; \
        _ql_msg_stream << message; \
        throw QuantLib::Error(__FILE__,__LINE__, \
                              BOOST_CURRENT_FUNCTION, \
                              _ql_msg_stream.str()); \
    } else

Its definition has a few more bells and whistles than might be expected. Firstly, we use an ostringstream to build the message string. This allows one to use a syntax like
    QL_REQUIRE(i < v.size(),
              "index " << i << " out of range");
to build the message (you can see how that works by replacing the pieces in the macro body). Secondly, the Error instance is passed the name of the current function as well as the line and file where the error is thrown. Depending on a compilation flag, this information can be included in the error message to help developers; the default behavior is to not include it, since it's of little utility for users. Lastly, you might be wondering why we added an else at the end of the macro. That is due to a common macro pitfall, namely, its lack of a lexical scope. The else is needed by code such as
    if (someCondition())
        QL_REQUIRE(i < v.size(), "index out of bounds");
    else
        doSomethingElse();
Without the else in the macro, the above would not work as expected. Instead, the else in the code would pair with the if in the macro and the code would translate into
    if (someCondition()) {
        if (!(i < v.size()))
            throw Error("index out of bounds");
        else
            doSomethingElse();
    }
which has a different behavior.

As a final note, I have to describe a disadvantage of these macros. As they are now, they throw exceptions that can only return their contained message; no inspectors are defined for other relevant data. For instance, although an out-of-bounds message might include the offending index, no method of the exception returns that index as an integer. Therefore, the information can be displayed to the user but is unavailable to recovery code in catch clauses—unless one parses the message, that is; but that is hardly worth the effort. There's no planned solution at this time, so drop us a line if you have one.
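
For illustration only, here's one shape such a solution might take. The IndexError class below is hypothetical and not part of QuantLib; std::to_string requires C++11.
    #include <cstddef>
    #include <stdexcept>
    #include <string>

    // Hypothetical: an exception that stores the offending index so that
    // recovery code in catch clauses can inspect it without parsing text.
    class IndexError : public std::out_of_range {
      public:
        IndexError(std::size_t index, std::size_t size)
        : std::out_of_range("index " + std::to_string(index) +
                            " out of range [0, " + std::to_string(size) + ")"),
          index_(index) {}
        std::size_t index() const { return index_; }
      private:
        std::size_t index_;
    };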


Monday, December 9, 2013

The strange case of the changing implied term structure

Hello again.

A quick note on the book chapters available in PDF form. Last month, Creative Commons released version 4.0 of their licenses; so from now on, the available material is released under an Attribution-NonCommercial-NoDerivatives 4.0 International license. Your rights (and mine) remain the same under the new license; but, in the words of Creative Commons, it is more user-friendly and more internationally robust.

Onwards to the content of this post. As I mentioned in the past, sometimes questions about QuantLib are asked on the Quantitative Finance Stack Exchange. One such question was asked recently, and I thought I'd share the answer. It was an interesting question, and it's a curious enough effect that one wouldn't think of it in advance (I, for one, surely didn't). Also, this might be the first in a series of QuantLib recipes. No, I'm not planning another book—I have my hands full enough. But I'm going to need something to write in this blog after the book is done, right?

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

The Strange Case of the Changing Implied Term Structure

The idea of Lisa Ann (hi, and thanks for the question!) was to forecast a future interest-rate curve based on today's rates, and to use it to estimate some kind of forward price for a bond.

The ImpliedTermStructure class does the job: it takes a given curve, a future date D, and builds a new curve whose reference date is D (that is, a curve that returns a discount factor equal to 1 at D) and that returns the same forward rates as the original curve between any two dates d1 and d2 later than D. Its implementation is simple enough: you can look at the code for the details, but the gist of it is that, for any date d > D, it returns a discount factor B'(d) equal to B(d)/B(D), where the B are the discounts returned by the original curve. You can see how for d = D, the discount factor is B(D)/B(D) = 1; and for any d1 and d2, B'(d1)/B'(d2) = B(d1)/B(d2), which preserves the forward rate between d1 and d2.

Lisa Ann instantiated the original curve and passed it to an ImpliedTermStructure instance with a reference date six months from today. The code was somewhat like the following:
    RelinkableHandle<YieldTermStructure> currentCurve;
    currentCurve.linkTo(depoSwapTermStructure);

    Date futureReference = calendar.advance(todaysDate, 6*Months);

    shared_ptr<YieldTermStructure> impliedDiscountCurve =
        make_shared<ImpliedTermStructure>(currentCurve, futureReference);
A look at the discount factors of the two curves showed that they had been built correctly. As the last step in order to price the bond on the implied curve, Lisa Ann set the evaluation date to the future reference; and to her horror (yes, I'm dramatizing. Bear with me a little, will you?) this action changed the shape of the implied curve.

What had happened? The original curve was built so that it moved with the evaluation date, as described in the posts on term structures. Let's work it out.

Let's say that the original curve was like the fictional one shown in this figure; the x-axis is time and the y-axis shows instantaneous forward rates. From the figure, you can also see that I'm so stuck in my old ways that I'm still using Gnuplot.


The implied curve had the same rates (so I'm not showing it; it would coincide with the original curve) but started at t = 0.5.

When the date changed, the original curve was translated verbatim six months in the future, leading to the next figure. The dashed line is the original curve; the solid line is the curve after moving the evaluation date.


The implied curve was linked to the original one through a handle, and all it could do was to change its rates accordingly.

The solution, as you might have guessed, was to prevent the original curve from moving. In most cases, this can be done by selecting the appropriate constructor; that is, the one that takes an explicit reference date (again, all the details are in this post). This way, the original curve wouldn't change when the evaluation date moved; and neither would the implied curve.
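
In code, the fix might look like the following sketch; here helpers and dayCounter stand for whatever instrument helpers and day counter were used to build the original curve (names assumed for illustration).
    shared_ptr<YieldTermStructure> fixedCurve =
        make_shared<PiecewiseYieldCurve<Discount, LogLinear> >(
            todaysDate,   // explicit reference date: the curve won't move
            helpers, dayCounter);
    currentCurve.linkTo(fixedCurve);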


Monday, December 2, 2013

Chapter 3, part 6 of n: volatility term structures

Welcome back.

Two bits of news about the QuantLib site. The first is that we finally collected the material from the QuantLib User Meeting in Düsseldorf, and the slides for most of the talks are now available on the documentation page. The second is that the QuantLib site itself was renovated; I hope that the navigation is a bit clearer now, and that it will be simpler to find stuff.

As promised, this week's post has some new book content. At last, we're nearing the end of chapter 3; all that's missing is a couple of short sections on interest-rate volatilities.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Term structures

Volatility term structures

There are so many types of volatility structures that finding common behavior between them is a bit of a stretch. In fact, I'm not sure that the base class we defined for them (the VolatilityTermStructure class, shown in listing 3.16) is a useful abstraction.

Listing 3.16: Interface of the VolatilityTermStructure class.
    class VolatilityTermStructure : public TermStructure {
      public:
        VolatilityTermStructure(BusinessDayConvention bdc,
                                const DayCounter& dc = DayCounter());
        VolatilityTermStructure(const Date& referenceDate,
                                const Calendar& cal,
                                BusinessDayConvention bdc,
                                const DayCounter& dc = DayCounter());
        VolatilityTermStructure(Natural settlementDays,
                                const Calendar& cal,
                                BusinessDayConvention bdc,
                                const DayCounter& dc = DayCounter());

        virtual BusinessDayConvention businessDayConvention() const {
            return bdc_;
        }
        Date optionDateFromTenor(const Period& p) const {
            return calendar().advance(referenceDate(),
                                      p,
                                      businessDayConvention());
        }

        virtual Rate minStrike() const = 0;
        virtual Rate maxStrike() const = 0;
      protected:
        void checkStrike(Rate strike, bool extrapolate) const;
      private:
        BusinessDayConvention bdc_;
    };
The class adds two things to TermStructure, from which it inherits. The first is a method, optionDateFromTenor, that calculates the exercise date of an option from its tenor; to do this, it uses the calendar provided by the base-class interface as well as a business-day convention stored in this class (which is passed to the constructors, and for which the usual inspector is provided). Instead, this functionality could be encapsulated in some kind of utility class and used elsewhere. (The idea of such a date calculator was suggested years ago on the mailing list by someone who will forgive me if I can no longer recall nor find out his name.)
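
For instance, with the definition above, retrieving the exercise date of a six-month option boils down to a single call:
    // vols is some volatility term structure built elsewhere
    Date exercise = vols->optionDateFromTenor(6*Months);
    // equivalent to:
    // vols->calendar().advance(vols->referenceDate(), 6*Months,
    //                          vols->businessDayConvention());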

The second addition involves two pure virtual methods that return the minimum and maximum strike over which the term structure is defined and a protected method that checks a given strike against the defined range. Unfortunately, these don't make sense for a local volatility structure; but leaving them out would require yet another level of hierarchy to hold them (namely, an implied-vol structure class) so I'm not shocked to see them here instead.

Equity volatility structures

Equity and FX-rate Black volatilities are modeled by the BlackVolTermStructure class, shown in listing 3.17. Apart from its several constructors (which, as usual, forward their arguments to the base class and would be made unnecessary by constructor inheritance, introduced in C++11 [1]) the class defines the overloaded blackVol method to retrieve the volatility for a given exercise date or time; the blackVariance method for the corresponding variance (the product of the square of the volatility by the time); and the blackForwardVol and blackForwardVariance methods for the forward volatility and variance between two future dates.

Listing 3.17: Partial interface of the BlackVolTermStructure class.
    class BlackVolTermStructure : public VolatilityTermStructure {
      public:
        Volatility blackVol(const Date& maturity,
                            Real strike,
                            bool extrapolate = false) const;
        Volatility blackVol(Time maturity,
                            Real strike,
                            bool extrapolate = false) const;
        Real blackVariance(const Date& maturity,
                           Real strike,
                           bool extrapolate = false) const;
        Real blackVariance(Time maturity,
                           Real strike,
                           bool extrapolate = false) const;
        Volatility blackForwardVol(const Date& date1,
                                   const Date& date2,
                                   Real strike,
                                   bool extrapolate = false) const;
        Real blackForwardVariance(const Date& date1,
                                  const Date& date2,
                                  Real strike,
                                  bool extrapolate = false) const;
        // same two methods as above, taking two times
      protected:
        virtual Real blackVarianceImpl(Time t,
                                       Real strike) const = 0;
        virtual Volatility blackVolImpl(Time t,
                                        Real strike) const = 0;
    };
Following the Template Method pattern (no surprise there) all these methods are implemented by calling the protected and pure virtual blackVolImpl and blackVarianceImpl methods. The public interface adds range checking and, in the case of forward volatility or variance, the bit of logic required to calculate the forward values from the spot values at the two passed dates.
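
For illustration, the time-based blackVariance might forward roughly like this (a simplified sketch; the actual code is slightly more involved):
    Real BlackVolTermStructure::blackVariance(Time maturity, Real strike,
                                              bool extrapolate) const {
        checkRange(maturity, extrapolate);  // time within curve range?
        checkStrike(strike, extrapolate);   // strike within [min, max]?
        return blackVarianceImpl(maturity, strike);
    }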

At the time of this writing, the BlackVolTermStructure class also defines a private static const data member dT that is no longer used—much like we still carry around an appendix, or a vestigial tailbone. By the time you read this, I hope to have it removed. The data member, I mean. Not my appendix.

As usual, adapters are provided to write only one of the blackVolImpl or the blackVarianceImpl method; they are shown in listing 3.18. (For simple examples of either kind, you can look at the ConstantBlackVol and the misleadingly-named ImpliedVolTermStructure classes in the library.) It is unfortunate that the name of one of the adapters, the BlackVolatilityTermStructure class, is so confusingly similar to the name of the base class. I'm open to suggestions for changing either one in a future version of the library.

Listing 3.18: Adapters for the BlackVolTermStructure class.
    class BlackVolatilityTermStructure
                        : public BlackVolTermStructure {
        ... // constructors, not shown
      protected:
        Real blackVarianceImpl(Time t, Real strike) const {
            Volatility vol = blackVolImpl(t, strike);
            return vol*vol*t;
        }
    };

    class BlackVarianceTermStructure
                        : public BlackVolTermStructure {
        ... // constructors, not shown
      protected:
        Volatility blackVolImpl(Time t, Real strike) const {
            Time nonZeroMaturity = (t==0.0 ? 0.00001 : t);
            Real var = blackVarianceImpl(nonZeroMaturity, strike);
            return std::sqrt(var/nonZeroMaturity);
        }
    };

Aside: Interpolations and extrapolations.

One of the available volatility classes is the BlackVarianceSurface class, which interpolates a matrix of quoted Black volatilities. I won't describe it here, since you're probably sick of term-structure examples by now; but it has a couple of interesting features.

The first is that the interpolation can be changed once the structure is built; the relevant method is
    template <class Interpolator>
    void setInterpolation(const Interpolator& i = Interpolator()) {
        varianceSurface_ =
            i.interpolate(times_.begin(), times_.end(),
                          strikes_.begin(), strikes_.end(),
                          variances_);
        notifyObservers();
    }
This is not possible in other interpolated curves, in which the type of the interpolation is a template argument and is fixed at instantiation; see, for instance, the PiecewiseYieldCurve class template in this post. The difference is that BlackVarianceSurface doesn't need to store the interpolator, and thus doesn't need to know its type outside the setInterpolation method.
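
A usage sketch follows; the constructor arguments are from memory, so check the class declaration for the exact signature. Bicubic is one of the interpolator traits provided by the library.
    BlackVarianceSurface surface(referenceDate, calendar,
                                 dates, strikes, volMatrix, dayCounter);
    // start with the default bilinear interpolation, then switch:
    surface.setInterpolation<Bicubic>();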

The second feature of BlackVarianceSurface is the possibility to customize the kind of extrapolation to use when the passed strike is outside the range of the interpolation. It is possible either to extend the underlying interpolation or to extrapolate flatly the value at the end of the range; the behavior at either end can be specified independently.

Now, it would be nice if this behavior could be extracted in some base class and reused. The choice of extrapolation can be implemented in a generic way: given any interpolation f defined up to xmax (or down to xmin), and given an x > xmax, the two choices can be realized by returning f(x) for extension or f(xmax) for flat extrapolation.

The Extrapolator class would seem the obvious choice for defining such behavior; but, unfortunately, this wouldn't work if we still want to make different choices on different boundaries. As a base class, Extrapolator would have no knowledge of the fact that, for instance, an interest-rate structure is defined over a time range whose lower bound is 0, while a volatility surface also has a range of strikes. Since it can't distinguish between boundaries, Extrapolator can't define an interface that specifies behavior on any of them; we'd be forced to make a single choice and apply it everywhere.

Therefore, the only possibility I see for code reuse at this time would be to define a polymorphic Extrapolation class, code the different behaviors into derived classes, and store the required number of instances into any given term structure.
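
To make the idea concrete, here's a sketch of what such a hierarchy might look like; none of these classes exist in the library, and the interface is just one possibility.
    class Extrapolation {
      public:
        virtual ~Extrapolation() {}
        // f is an interpolation valid up to xBoundary; x lies beyond it.
        virtual Real value(const Interpolation& f,
                           Real xBoundary, Real x) const = 0;
    };

    class ExtendingExtrapolation : public Extrapolation {
      public:
        Real value(const Interpolation& f, Real, Real x) const {
            return f(x, true);       // extend the interpolation itself
        }
    };

    class FlatExtrapolation : public Extrapolation {
      public:
        Real value(const Interpolation& f, Real xBoundary, Real) const {
            return f(xBoundary);     // repeat the value at the boundary
        }
    };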

Bibliography

[1] International Organization for Standardization, Programming Languages – C++, International Standard ISO/IEC 14882:2011. Available as a working draft.


Monday, November 25, 2013

Screencast: my talk at the QuantLib User Meeting

Welcome back.

We're still collecting the material from the QuantLib User Meeting in Düsseldorf (see my previous post for a report). In the meantime, here is the screencast of my talk (there's an HD version if you want it; just click the Vimeo logo in the bottom right corner to go there). Thanks to Peter Caspers for recording the audio.

Next week, I'll probably go back to posting content from the book—thanks to a train trip that gave me a few hours for writing.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.



Monday, November 18, 2013

Report from the QuantLib User Meeting

Welcome back.

I'm back from the QuantLib User Meeting in Düsseldorf, and it was great. There were about 30 participants from a number of banks and companies, openly discussing the way they use QuantLib in their work. The intent of the organizers was to start a network of people who could promote the use of QuantLib and learn from each other. It was certainly a good beginning. Thanks to the organizers, the sponsors (IKB, Quaternion and d-fine), the speakers, and all the participants. They all contributed to a great couple of days.

My notes aren't going to do justice to all that was said, so I'll just write a brief summary here. I'll make the slides and/or the audio of the talks available on the QuantLib site as soon as we get the material (and the permission to publish it) from the several speakers. In the meantime, here's a taste of what was said. (The space given to each talk here is not due to its importance; only to the length of the notes I took.)

I had the honor of being asked to give the opening keynote. I had a look at the evolution of QuantLib and pointed out a few flaws in the current architecture (not the only ones, of course): lack of support for multi-threading, some abstraction leaks as some classes are used for more and more tasks, and the lack of a stand-alone functional core. I suggested a few directions that could be explored in order to fix some of those problems.

Michael von den Driesch spoke about the way QuantLib is used at IKB in the pricing and financial modeling group (part of the bank risk control group). The library is part of a number of applications for valuation, historical simulation, in-house consultancy and so on. Michael's group can use Murex curves seamlessly with QuantLib and compare them with their own. Also, he showed the use of IPython notebooks: a nice tool that I'll have to explore.

Roland Lichters presented the Quaternion Risk Engine, a cross-asset CVA application based on QuantLib plus a number of proprietary extensions. By choosing analytically tractable models and by applying some modification to the library, they were able to speed up calculations until it became feasible (with the help of a 96-core cluster) to obtain the billions of NPVs required by CVA simulation. The most interesting modification was to allow a user to momentarily bypass the Observer/Observable notification mechanism. If one knows what one's doing, it is possible to send a single notification for the simultaneous update of multiple rates, thus reducing the overhead of the pattern.

André Miemiec talked about his implementation of the Hunt-Kennedy model for pricing accreting swaptions. "Strangely enough, it works", in his own words. The implementation is available in the QuantLib patch tracker, and I hope to include it in the library shortly.

Concluding the first day of talks, Peter Caspers presented the Markov functional model he added to QuantLib 1.3 and showed a few results. Then we were off to dinner together.



The second day started with Sarp Kaya Acar and Henning Segger talking about two extensions to the library made by Quaternion: the implementation of the Jarrow-Yildirim model for inflation and the JCIR++ model for hazard rates, respectively.

After this, we had a bit of open discussion about new ideas for the library. Those included:
  • treating a swap leg as an instrument in its own right, to allow using different pricing engines for each one;
  • a generic mechanism for creating a composite instrument where different components have different currencies;
  • a more useful implementation of currencies, which at this time are basically not used;
  • alternative approaches to reducing the overhead of the Observer/Observable pattern.
Another idea floated, not directly related to the library: what if a number of small banks joined forces to implement pricing models, distributing the effort between them? The general consensus seemed to be that it would be very nice, but a very hard sell. The same could be said of starting an open-source financial library, though, so I'm not without hopes...

Next talk: Martin Dietrich explained how QuantLib is used at E.ON Global Commodities from Excel, LexiFi or even a web browser through a technology stack including a webserver written in Scala, WebSockets in .Net, and the open-source Excel-DNA. (I know it sounds like a Rube Goldberg machine, but it works fine.)

Klaus Spanderen presented ways to introduce some parallelism in QuantLib, from the easiest (multi-processing, which turns out to be ok in most use cases) to more complex ones such as OpenMP, multi-threaded calibrations, thread-local objects, or Boost.MPI. He also described his experimental redesign of the Observer/Observable pattern that makes it safe to use QuantLib in languages such as Java or C#, which run garbage collection in a separate thread.

Finally, Sebastian Schlenkrich of d-fine talked about algorithmic differentiation and the proof of concept he coded to calculate Bermudan-swaption vegas; and Johannes Göttker-Schnetmann described how QuantLib is used at Helaba for pricing equity derivatives.

As I said, two great days, full of varied and interesting talks. I hope there will be other editions of the user meeting. For my part, I'll surely try to encourage events of this kind happening in other countries.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.


Monday, November 11, 2013

Intermission

Greetings.

No new content this week, on account of (a) my having to prepare my talk at the QuantLib workshop in Düsseldorf and (b) days having just 24 hours. Stay tuned for next week's post, where I'll report from the workshop. I think it's going to be an interesting one.

In the meantime, you can catch up on previous posts. The links are in the archive section in the sidebar on the right.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.



Monday, November 4, 2013

Interlude: code archaeology

Welcome back.

No book content this week; instead, I thought I'd share some curiosities I'd rediscovered recently.

As I mentioned, I'll be speaking at the QuantLib workshop in Düsseldorf in a couple of weeks. As part of preparing my talk, I had to do some research on the way the architecture of QuantLib evolved in its early years; so I put on my Indiana Jones hat and dug into the repository for a couple of nights.

Man, it was a blast.

It was like stepping into another era. Well, it was another era, in a way. We're talking about the early 2000s. C++ had been a standard for just a few years, and the various compilers were scrambling to implement it; thus, we had macros and #ifdefs all over the place to check whether we could use a given language feature, or in the worst cases, to select the syntax to use. Not surprisingly, it was especially bad with templates; but it was needed even with simple features.

One example? Since day one, we have tried to write warning-free code, because (as you all know) both disabling warnings and leaving too many warnings around can lead to disregarding the one that, once in a while, we should really heed. However, if we wrote something like:
if (some_condition) {
    return 42;
} else {
    return 13;
}
a particular compiler would warn that the function might not return a value (because, being particularly dumb, it wouldn't detect that the execution path would hit one of the two return statements in any case). If we tried to silence the warning by writing:
if (some_condition) {
    return 42;
} else {
    return 13;
}
return 0;
another compiler, somewhat smarter than the first, would warn us that the last line would never be executed. The solution? Write
if (some_condition) {
    return 42;
} else {
    return 13;
}
QL_DUMMY_RETURN(0);
and define the macro as return 0 for the first compiler and as empty for the second.
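
For the record, the definition boiled down to something like this (the compiler-detection macro below is made up, but the mechanism is the real one):
#if defined(SOME_DUMB_COMPILER)     // hypothetical detection macro
    #define QL_DUMMY_RETURN(x) return x;
#else
    #define QL_DUMMY_RETURN(x)
#endif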

Kids these days have it easy, I say!

On top of this, sometimes we did our part to confuse the code. There was some class which was incredibly badly designed (*cough* Instrument *cough*), but I'm not talking about that. We had things like different code trees for headers and source files, or namespaces nested two or three levels. We had the strange convention that private data members had a "the" prepended instead of the trailing underscore we use now; as in, say, theSettlementDate. We hadn't begun using the pimpl idiom for classes such as Calendar or DayCounter, so we had to wrap them into smart pointers to pass them around (and we weren't using Boost, of course, so we wrote and used our own vastly inferior smart pointer).

In short: you know the particular "flavor" of QuantLib source code? The set of familiar conventions that we use and that help you recognize the parts of the code, like the particular voice of a writer? They were nowhere to be found. As I said, it looked like another era. Or another code base.

If you want to have some fun looking around the old QuantLib, just clone our git repository, run
git log --until 2001-12-31
to get a log of the revisions in 2000 or 2001, and switch to any one of them that strikes your fancy; for instance, run
git checkout eae05182
to get the very first revision we committed (at that time, we were still committing to CVS).

That's all for this week, I guess. Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.


Monday, October 28, 2013

Chapter 3, part 5 of n: other term structures

Hello again.

Back to chapter 3 of my book after the detour of last post (the series on chapter 3 started here). This post covers two sections: default-probability term structures and inflation term structures. They are both short enough that splitting them into two posts would have felt like stretching the content a bit too much.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Term structures

Other term structures

So far, the focus of this chapter has been on yield term structures. Of course, other kinds of term structure are implemented in the library. In this section, I'll review them briefly: mostly, I'll point out how they differ from yield term structures and what particular features they sport.

Default-probability term structures

Default-probability term structures are the most similar in design to yield term structures. They can be expressed in terms of default probability, survival probability, default density, or hazard rate; any one of the four quantities can be obtained from any other, much like zero rates and discount factors.

Unlike yield term structures (in which all methods are implemented in terms of the discountImpl method) the base default-probability structure has no single method for others to build upon. Instead, as shown in listing 3.13, it declares two abstract methods survivalProbabilityImpl and defaultDensityImpl. It's left to derived classes to decide which one should be written in terms of the other; the base class implements survivalProbability and defaultDensity based on the respective implementation methods (their implementation is not as simple as shown, of course; here I omitted range checking and extrapolation for clarity), defaultProbability based trivially on survivalProbability, and hazardRate in terms of both survival probability and default density.

Listing 3.13: Sketch of the DefaultProbabilityTermStructure class.
    class DefaultProbabilityTermStructure : public TermStructure {
      public:
        // ...constructors...

        Probability survivalProbability(Time t) const {
            return survivalProbabilityImpl(t);
        }

        Probability defaultProbability(Time t) const {
            return 1.0 - survivalProbability(t);
        }
        Probability defaultProbability(Time t1,
                                       Time t2) const {
            Probability p1 = defaultProbability(t1),
                        p2 = defaultProbability(t2);
            return p2 - p1;
        }

        Real defaultDensity(Time t) const {
            return defaultDensityImpl(t);
        }

        Rate hazardRate(Time t) const {
            Probability S = survivalProbability(t);
            return S == 0.0 ? 0.0 : defaultDensity(t)/S;
        }

        // ...other methods...
      protected:
        virtual Probability survivalProbabilityImpl(Time) const = 0;
        virtual Real defaultDensityImpl(Time) const = 0;
      private:
        // ...data members...
    };
Listing 3.14 sketches the adapter classes that, as for yield term structures, allow one to define a new default-probability structure in terms of the single quantity of his choice—either survival probability, default density, or hazard rate (the default probability is so closely related to survival probability that we didn't think it necessary to provide a corresponding adapter). The first one, SurvivalProbabilityStructure, defines defaultDensityImpl in terms of the implementation of survivalProbabilityImpl, which is left purely virtual and must be provided in derived classes; the second one, DefaultDensityStructure, does the opposite; and the last one, HazardRateStructure, defines both survival probability and default density in terms of a newly-declared purely abstract hazardRateImpl method.

Unfortunately, some of the adapters rely on numerical integration in order to provide conversions among the desired quantities. The provided implementations use dark magic in both maths and coding (namely, Gaussian quadratures and boost::bind) to perform the integrations efficiently; but when inheriting from such classes, you should consider overriding the adapter methods if a closed formula is available for the integral.

Listing 3.14: Adapter classes for default-probability term structures.
    class SurvivalProbabilityStructure
        : public DefaultProbabilityTermStructure {
      public:
        // ...constructors...
      protected:
        Real defaultDensityImpl(Time t) const {
            // returns the derivative of the survival probability at t
        }
    };

    class DefaultDensityStructure
        : public DefaultProbabilityTermStructure {
      public:
        // ...constructors...
      protected:
        Probability survivalProbabilityImpl(Time t) const {
            // 1 minus the integral of the default density from 0 to t
        }
    };

    class HazardRateStructure
        : public DefaultProbabilityTermStructure {
      public:
        // ...constructors...
      protected:
        virtual Real hazardRateImpl(Time) const = 0;
        Probability survivalProbabilityImpl(Time t) const {
            // exp(-I); I is the integral of the hazard rate from 0 to t
        }
        Real defaultDensityImpl(Time t) const {
            return hazardRateImpl(t)*survivalProbabilityImpl(t);
        }
    };
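For instance, a curve with a constant hazard rate could override the adapter's numerical integration with the corresponding closed formula. A sketch (the class name is hypothetical; constructors and other methods omitted):
    class FlatHazardRateSketch : public HazardRateStructure {
      protected:
        Real hazardRateImpl(Time) const { return h_; }
        // closed form exp(-h*t), replacing the adapter's quadrature
        Probability survivalProbabilityImpl(Time t) const {
            return std::exp(-h_*t);
        }
      private:
        Rate h_;
    };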
As with yield curves, the library provides a few template classes that implement the adapter interfaces by interpolating discrete data, as well as a generic piecewise default-probability curve template and the traits required to select its underlying quantity. Together with the existing interpolation traits, this allows one to instantiate classes such as PiecewiseDefaultCurve<DefaultDensity,Linear>. The implementation is quite similar to the one described in this post; in fact, so similar that it's not worth describing here. The only thing worth noting is that the default-probability structure is not self-sufficient: in order to price the instruments required for its bootstrap, a discount curve is needed (then again, the same is true of LIBOR curves in today's multiple-curve settings). You'll have to be consistent and use the same curve for your pricing engines; otherwise, you might suddenly find out that your CDS are no longer at the money.
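
A usage sketch follows; the helper construction is abridged, since CDS helpers take quite a few arguments (see, e.g., the SpreadCdsHelper class in the library for the full signature).
    std::vector<shared_ptr<DefaultProbabilityHelper> > helpers;
    helpers.push_back(make_shared<SpreadCdsHelper>(
        quotedSpread, 5*Years,
        ... // other arguments, including the discount-curve handle
        ));

    PiecewiseDefaultCurve<DefaultDensity, Linear> curve(
        todaysDate, helpers, Actual365Fixed());

    Probability p = curve.survivalProbability(2.0);  // at t = 2 years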

Aside: Cinderella method

In the implementation of DefaultProbabilityTermStructure, you've probably noticed another symmetry break like the one discussed in a previous post. There's a difference though; in that case, discount factors were singled out to be given a privileged role. In this case, hazard rates are singled out to play the mistreated stepsister; there's no hazardRateImpl beside the similar methods declared for survival probability or default density. Again, a look at past versions of the code shows that once, it was symmetric; and again, I can give no reason for the change. I'm sure it looked like a good idea at the time.

The effect is that classes deriving from the HazardRateStructure adapter must jump through some hoops to return hazard rates, since they're not able to call hazardRateImpl directly; instead, they have to use the default implementation and return the ratio of default density and survival probability (possibly performing an integration along the way). Unfortunately, even our fairy godmother can't change this now without the risk of breaking existing code.

Inflation term structures

Inflation term structures have a number of features that set them apart from the term structures I described so far. Not surprisingly, most such features add complexity to the provided classes.

The most noticeable difference is that we have two separate kinds of inflation term structures (and two different interfaces) instead of a single one. The library does provide a single base class InflationTermStructure, that contains some inspectors and some common behavior; however, the interfaces returning the actual inflation rates are declared in two separate child classes, leading to the hierarchy sketched in listing 3.15. The two subclasses model zero-coupon and year-on-year inflation rates, which are not easily converted into one another and thus foil our usual multiple-adapter scheme.

Listing 3.15: Sketch of the InflationTermStructure class and its children.
    class InflationTermStructure : public TermStructure {
      public:
        // ...constructors...

        virtual Date baseDate() const = 0;
        virtual Rate baseRate() const;
        virtual Period observationLag() const;

        Handle<YieldTermStructure> nominalTermStructure() const;

        void setSeasonality(const shared_ptr<Seasonality>&);
      protected:
        Handle<YieldTermStructure> nominalTermStructure_;
        Period observationLag_;
        // ...other data members...
    };

    class ZeroInflationTermStructure
        : public InflationTermStructure {
      public:
        // ...constructors...
        Rate zeroRate(const Date &d,
                      const Period& instObsLag = Period(-1,Days),
                      bool forceLinearInterpolation = false,
                      bool extrapolate = false) const;
      protected:
        virtual Rate zeroRateImpl(Time t) const = 0;
    };

    class YoYInflationTermStructure
        : public InflationTermStructure {
      public:
        // ...constructors...
        Rate yoyRate(const Date &d,
                     const Period& instObsLag = Period(-1,Days),
                     bool forceLinearInterpolation = false,
                     bool extrapolate = false) const;
      protected:
        virtual Rate yoyRateImpl(Time time) const = 0;
    };
This state of things has both advantages and disadvantages; possibly more of the latter. On the one hand, it leads to a pair of duplicated sub-hierarchies, which is obviously a smell. (It can get worse. Until now, we haven't considered period-on-period rates with a frequency other than annual. Hopefully, they will only lead to a generalization of the year-on-year curve.) On the other hand, it simplifies the sub-hierarchies a bit; for instance, there are no adapter classes, since each kind of term structure has only one underlying quantity (that is, either zero-coupon rates or year-on-year rates).

Other differences are due to the specific quirks of inflation fixings. Since inflation figures for a given month are announced after an observation lag, inflation term structures have a base date, as well as a reference date; the base date is the one corresponding to the latest announced fixing. If an inflation figure is needed for a date in the past with respect to today's date but after the base date, it must be forecast. (You might remember that, at the beginning of this chapter, I obscurely suggested that the future doesn't always begin at the reference date. This is the exception I was referring to.) Also, since inflation fixings are affected by seasonality, inflation term structures provide the means to store an instance of the polymorphic Seasonality class (which for brevity I won't describe). If given, such instance models the desired seasonal correction for the returned inflation rates, in a plain implementation of the Strategy pattern.

Unlike other kinds of term structure, the inflation interface doesn't provide methods taking a time value instead of a date; the whole fixing machinery depends on a number of date calculations (what month we're in, the corresponding period for the fixing, and whatnot) and there's simply no reliable way to convert from times to dates, so we called the whole thing off.

Yet another difference is that inflation term structures store a discount curve. Yes, I made the same face as you when I realized it—and shortly before the 1.0 release, too, when the interfaces would have to be frozen. No, let me rephrase that: I made a worse face than you. In the end, though, I left it alone. It's not the design we used for default-probability structures; but at least it has the advantage that a bootstrapped inflation term structure carries around the discount curve it used, so one doesn't have to keep track of their association to be consistent. We can choose the best design for release 2.0, when we have used both for a few years; so all's well that ends well, I guess. Still, I made a mental note to perform code reviews more often.

The remaining machinery (interpolated curves, piecewise bootstrap etc.) is similar to what I described for other kinds of curves. Due to the separate sub-hierarchies for zero-coupon and year-on-year rates, though, there are a couple of differences. On the one hand, there are no adapter classes that implement the whole interface based on a single underlying quantity—such as, say, ZeroStructure for discounts or HazardRateStructure for default probabilities. On the other hand, piecewise curves don't use traits to select the underlying quantity (as, for instance, in PiecewiseYieldCurve<Discount,LogLinear>). To select one or the other, you'll have to choose the appropriate class and write something like PiecewiseZeroInflationCurve<Linear> instead.


Monday, October 21, 2013

Odds and ends: interest rates

Welcome back.

This post marks a break in the series on chapter 3 which started in this post and ran for the past few weeks. It covers interest rates, which are relevant to its subject but are not technically part of the chapter.

The idea is that a number of basic classes are glossed over in the main part of the book and are described in an appendix instead, so they don't interrupt the flow of the chapters. As pointed out by Douglas Adams in the fourth book of his Hitchhiker trilogy (no, that's right: it's an inaccurately-named trilogy of five books. It's a long story.):
"[An excessive amount of detail] is guff. It doesn't advance the action. It makes for nice fat books such as the American market thrives on, but it doesn't actually get you anywhere."
I'm not sure that this is still the case with the blog form. I'm already cutting up every chapter in several posts, so the additional content might follow one of those. We'll see: I'm still new at this blogging thing, so I'm likely to experiment in the future.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Odds and ends: interest rates

The InterestRate class (shown in listing A.6) encapsulates general interest-rate calculations. Instances of this class are built from a rate, a day-count convention, a compounding convention, and a compounding frequency (note, though, that the value of the rate is always annualized, whatever the frequency). This allows one to specify rates such as "5%, actual/365, continuously compounded" or "2.5%, actual/360, semiannually compounded." As can be seen, the frequency is not always needed. I'll return to this later.

Listing A.6: Outline of the InterestRate class.
    enum Compounding { Simple,              // 1+rT
                       Compounded,          // (1+r)^T
                       Continuous,          // e^{rT}
                       SimpleThenCompounded
    };

    class InterestRate {
      public:
        InterestRate(Rate r,
                     const DayCounter&,
                     Compounding,
                     Frequency);
        // inspectors
        Rate rate() const;
        const DayCounter& dayCounter() const;
        Compounding compounding() const;
        Frequency frequency() const;
        // automatic conversion
        operator Rate() const;
        // implied discount factor and compounding after a given time
        // (or between two given dates)
        DiscountFactor discountFactor(Time t) const;
        DiscountFactor discountFactor(const Date& d1,
                                      const Date& d2) const;
        Real compoundFactor(Time t) const;
        Real compoundFactor(const Date& d1,
                            const Date& d2) const;
        // other calculations
        static InterestRate impliedRate(Real compound,
                                        const DayCounter&,
                                        Compounding,
                                        Frequency,
                                        Time t);
        ... // same with dates
        InterestRate equivalentRate(Compounding,
                                    Frequency,
                                    Time t) const;
        ... // same with dates
    };
Besides the obvious inspectors, the class provides a number of methods. One is the conversion operator to Rate, i.e., to double. In hindsight, this is kind of risky, as the converted value loses any day-count and compounding information; this might allow, say, a simply-compounded rate to slip undetected where a continuously-compounded one was expected. The conversion was added for backward compatibility when the InterestRate class was first introduced; it might be removed in a future revision of the library, depending on the level of safety we want to force on users. (There are different views on safety among the core developers, ranging from "babysit the user and don't let him hurt himself" to "give him his part of the inheritance, pat him on his back, and send him to find his place in the world.")

Other methods complete a basic set of calculations. The compoundFactor method returns the unit amount compounded for a time t (or equivalently, between two dates d1 and d2) according to the given interest rate; the discountFactor method returns the discount factor between two dates or for a time, i.e., the reciprocal of the compound factor; the impliedRate method returns a rate that, given a set of conventions, yields a given compound factor over a given time; and the equivalentRate method converts a rate to an equivalent one with different conventions (that is, one that results in the same compounded amount).
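
A quick usage sketch:
    InterestRate r(0.05, Actual365Fixed(), Continuous, NoFrequency);

    DiscountFactor df = r.discountFactor(1.0);  // exp(-0.05), about 0.9512
    Real factor = r.compoundFactor(1.0);        // exp(0.05), about 1.0513

    // the same growth, expressed with annual compounding:
    InterestRate r2 = r.equivalentRate(Compounded, Annual, 1.0);
    // r2.rate() is exp(0.05)-1, about 5.127%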

Like the InterestRate constructor, some of these methods take a compounding frequency. As I mentioned, this doesn't always make sense; and in fact, the Frequency enumeration has a NoFrequency item just to cover this case.

Obviously, this is a bit of a smell. Ideally, the frequency should be associated only with those compounding conventions that need it, and left out entirely for those (such as Simple and Continuous) that don't. If C++ supported it, we would write something like
    enum Compounding { Simple,
                       Compounded(Frequency),
                       Continuous,
                       SimpleThenCompounded(Frequency)
    };
which would be similar to algebraic data types in functional languages, or case classes in Scala (both support pattern matching on an object, which is like a neater switch on steroids; go have a look when you have some time) but unfortunately that's not an option. To have something of this kind, we'd have to go for a full-featured Strategy pattern and turn Compounding into a class hierarchy. That would probably be overkill for the needs of this class, so we're keeping both the enumeration and the smell.
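
For the record, the full Strategy version we decided against might look something like the following sketch; only the conventions that actually need a frequency would carry one.
    class CompoundingConvention {
      public:
        virtual ~CompoundingConvention() {}
        virtual Real compoundFactor(Rate r, Time t) const = 0;
    };

    class SimpleCompounding : public CompoundingConvention {
      public:
        Real compoundFactor(Rate r, Time t) const { return 1.0 + r*t; }
    };

    class CompoundedCompounding : public CompoundingConvention {
      public:
        explicit CompoundedCompounding(Frequency f) : n_(Real(f)) {}
        Real compoundFactor(Rate r, Time t) const {
            return std::pow(1.0 + r/n_, n_*t);
        }
      private:
        Real n_;  // compounding periods per year
    };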


Monday, October 14, 2013

Chapter 3, part 4 of n: adding z-spread to an interest-rate curve

Hello again.

This post is the fourth in an ongoing series covering chapter 3 of my book and starting here. It's a short one, to make up for the monster post of last week.

As I already mentioned, IKB, Quaternion and d-fine are organizing a two-day QuantLib workshop in Düsseldorf on November 13th and 14th. The agenda is here; if you're interested, send an email to the contacts listed in there. Registration is free; there are just 35 places, though.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Term structures

Example: adding z-spread to an interest-rate curve

This example (a lot simpler than the previous one) shows how to build a term-structure based on another one. We'll take an existing risk-free curve and modify it to include credit risk. The risk is expressed as a z-spread, i.e., a constant spread to be added to the zero-yield rates. For the pattern-savvy, this is an application of the Decorator pattern; we'll wrap an existing object, adding some behavior and delegating the rest to the original instance.

Listing 3.12: Implementation of the ZeroSpreadedTermStructure class.
  class ZeroSpreadedTermStructure : public ZeroYieldStructure {
    public:
      ZeroSpreadedTermStructure(
                        const Handle<YieldTermStructure>& h,
                        const Handle<Quote>& spread)
      : originalCurve_(h), spread_(spread) {
          registerWith(originalCurve_);
          registerWith(spread_);
      }
      const Date& referenceDate() const {
          return originalCurve_->referenceDate();
      }
      DayCounter dayCounter() const {
          return originalCurve_->dayCounter();
      }
      Calendar calendar() const {
          return originalCurve_->calendar();
      }
      Natural settlementDays() const {
          return originalCurve_->settlementDays();
      }
      Date maxDate() const {
          return originalCurve_->maxDate();
      }
    protected:
      Rate zeroYieldImpl(Time t) const {
          InterestRate zeroRate =
              originalCurve_->zeroRate(t, Continuous,
                                       NoFrequency, true);
          return zeroRate + spread_->value();
      }
    private:
      Handle<YieldTermStructure> originalCurve_;
      Handle<Quote> spread_;
  };
The implementation of the ZeroSpreadedTermStructure is shown in listing 3.12. As previously mentioned, it is based on zero-yield rates; therefore, it inherits from the ZeroYieldStructure adapter described here and will have to implement the required zeroYieldImpl method. Not surprisingly, its constructor takes as arguments the risk-free curve to be modified and the z-spread to be applied; to allow switching data sources, both are passed as handles. The arguments are stored in the corresponding data members, and the curve registers with both of them as an observer. The update method inherited from the base class will take care of forwarding any received notifications.

Note that none of the base-class constructors was called explicitly. As you might remember from this post, this means that instances of our curve store no data that can be used by the TermStructure machinery; therefore, the class must provide its own implementation of the methods related to reference-date calculation. In true Decorator fashion, this is done by delegating behavior to the wrapped object; each of the referenceDate, dayCounter, calendar, settlementDays, and maxDate methods forwards to the corresponding method in the risk-free curve.

Finally, we can implement our own specific behavior—namely, adding the z-spread. This is done in the zeroYieldImpl method; we ask the risk-free curve for the zero-yield rate at the required time (continuously compounded, since that's what our method must return), add the value of the z-spread, and return the result as the new zero-yield rate. The machinery of the ZeroYieldStructure adapter will take care of the rest, giving us the desired risky curve.
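
A usage sketch (riskFreeCurve stands for any curve built elsewhere):
    RelinkableHandle<YieldTermStructure> riskFree;
    riskFree.linkTo(riskFreeCurve);

    shared_ptr<SimpleQuote> spread = make_shared<SimpleQuote>(0.005);
    shared_ptr<YieldTermStructure> risky =
        make_shared<ZeroSpreadedTermStructure>(
            riskFree, Handle<Quote>(spread));

    // changing the quote later notifies the risky curve automatically:
    spread->setValue(0.006);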


Monday, October 7, 2013

Chapter 3, part 3 of n: bootstrapping an interest-rate curve

Welcome back.

This post is the third in a series covering chapter 3 of my book and starting here. It's a rather large example in which I dissect the code used for bootstrapping an interest-rate curve.

A bit of news: IKB, Quaternion and d-fine are organizing a QuantLib workshop in Düsseldorf on November 13th and 14th, and they were kind enough to ask me to give the keynote on the 13th. As you can guess, it's flattering and scaring at the same time. Possibly more scaring than flattering.

The perks of this will be, first, that I get to meet in person for the first or second time a couple of people that I've worked with on the library, sometimes for years; and second, that I have the opportunity to collect a few data points about the way QuantLib is used in the industry. I'll try to share with you some of the goodies (not the beers, of course: those will stay in Düsseldorf). I'll report on this blog on what I see, and I'll ask the speakers to put their material on the QuantLib site.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Term Structures

Example: bootstrapping an interpolated yield curve

In this section, we'll build an all-purpose yield-curve template. Building upon the classes described in the previous subsections, we'll give it the ability to interpolate in a number of ways on either discount factors, zero yields, or instantaneous forward rates. The nodes of the curve will be bootstrapped from quoted—and possibly varying—market rates.

Needless to say, this example is a fairly complex one. The class template I'll describe (PiecewiseYieldCurve) makes use of a few helper classes, as well as a few template tricks. I'll try and explain all of them as needed.

Listing 3.7: Implementation of the PiecewiseYieldCurve class template.
template <class Traits, class Interpolator,
          template <class> class Bootstrap = IterativeBootstrap>
class PiecewiseYieldCurve
    : public Traits::template curve<Interpolator>::type,
      public LazyObject {
  private:
    typedef typename Traits::template curve<Interpolator>::type base_curve;
    typedef PiecewiseYieldCurve<Traits,Interpolator,Bootstrap> this_curve;
    typedef typename Traits::helper helper;
  public:
    typedef Traits traits_type;
    typedef Interpolator interpolator_type;
    PiecewiseYieldCurve(
           Natural settlementDays,
           const Calendar& calendar,
           const std::vector<shared_ptr<helper> >& instruments,
           const DayCounter& dayCounter,
           Real accuracy = 1.0e-12,
           const Interpolator& i = Interpolator());

    // inspectors not shown

    void update();
  private:
    void performCalculations() const;
    DiscountFactor discountImpl(Time) const;
    std::vector<shared_ptr<helper> > instruments_;
    Real accuracy_;

    friend class Bootstrap<this_curve>;
    friend class BootstrapError<this_curve>;
    Bootstrap<this_curve> bootstrap_;
};
The implementation of our class template is shown in listing 3.7. The class takes three template arguments. The first two determine the choice of the underlying data and the interpolation method, respectively; our goal is to instantiate the template as, say,
    PiecewiseYieldCurve<Discount,LogLinear>
or some such combination. I'll refer to the first parameter as the bootstrap traits; the second is the interpolator already described in a previous post. The third, and seemingly ungainly, parameter specifies a class which implements the bootstrapping algorithm. The parameter has a default value (the IterativeBootstrap class, which I'll describe later) so you can happily forget about it most of the time; it is provided so that interested developers can replace the bootstrapping algorithm with another one, either provided by the library or of their own creation. If you're not familiar with its syntax, it's a template template parameter [1]. The repetition is not an error, as suspected by my spell checker as I write these lines; it means that the template parameter should, in turn, be an uninstantiated class template (in this case, one taking a single template argument) rather than a typename. (An uninstantiated template is something like std::vector, as opposed to an instantiated one like std::vector<double>. The second names a type; the first doesn't.)
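
If a toy example helps, here's the syntax in isolation (this is not library code):

    template <class T>
    class DefaultPolicy { /* ... */ };

    template <class T,
              template <class> class Policy = DefaultPolicy>
    class Widget {
        Policy<T> policy_;   // the passed template is instantiated here
    };

    Widget<double> w;        // uses DefaultPolicy<double>

The enclosing template gets to instantiate the passed one with whatever type it needs; in our case, PiecewiseYieldCurve instantiates Bootstrap with the curve type itself.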

Before declaring the class interface, we need another bit of template programming to determine the parent class of the curve. On the one hand, we inherit our term structure from the LazyObject class; as described here, this will enable the curve to re-bootstrap itself when needed. On the other hand, we want to inherit it from one of the interpolated curves described here; depending on the choice of underlying data (discount, zero yields, or forward rates), we must select one of the available class templates and instantiate it with the chosen interpolator class. This is done by storing the class template in the bootstrap traits. Unluckily, C++ didn't allow template typedefs until recently (template aliases were introduced in the 2011 revision of the C++ standard). Therefore, the traits define an inner class template curve which takes the interpolator as its template parameter and defines the instantiated parent class as a typedef; this can be seen in the definition of the Discount traits, partially shown in listing 3.8.

Listing 3.8: Sketch of the Discount bootstrap traits.
    struct Discount {
        template <class Interpolator>
        struct curve {
            typedef InterpolatedDiscountCurve<Interpolator> type;
        };
        typedef BootstrapHelper<YieldTermStructure> helper;

        // other static methods
    };
The described machinery allows us to finally refer to the chosen class by using the expression
    Traits::template curve<Interpolator>::type
that we can add to the list of parent classes. (For those unfamiliar with the dark corners of template syntax, the template keyword in the expression is a hint for the compiler. When reading this expression, the compiler doesn't know what Traits is and has no means to determine that Traits::curve is a class template. Adding the keyword gives it the information required for processing the rest of the expression correctly.) For the instantiation shown above as an example, with Discount as bootstrap traits and LogLinear as interpolator, the above works out as
    Discount::curve<LogLinear>::type
which in turn corresponds—as desired—to
    InterpolatedDiscountCurve<LogLinear>
as can be seen from the definition of the Discount class.
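
As an aside: with C++11 template aliases, the inner struct would no longer be needed. A sketch of the alternative (not what the library does, at least at the time of writing):

    struct Discount {
        template <class Interpolator>
        using curve = InterpolatedDiscountCurve<Interpolator>;
        // ...
    };

in which case the parent class could be spelled simply as Traits::template curve<Interpolator>, without the trailing ::type.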

We make a last short stop before we finally implement the curve; in order to avoid long template expressions, we define a few typedefs, namely, base_curve, this_curve, and helper. The first one refers to the parent class; the second one refers to the very class we're declaring; and the third one extracts from the bootstrap traits the type of a helper class. This class will be described later in the example; for the time being, I'll just say that it provides the quoted value of an instrument, as well as the means to evaluate that instrument on the term structure being bootstrapped. The aim of the bootstrap will be to modify the curve until the two values coincide. Finally, two other typedefs (traits_type and interpolator_type) store the two corresponding template arguments so that they can be retrieved later.

The actual interface, at last. The constructors (of which only one is shown in the listing) take the arguments required for instantiating the parent interpolated curve, as well as a vector of helpers and the target accuracy for the bootstrap. Next, a number of inspectors (such as times or data) are defined, overriding the versions in the parent class; as we'll see, this allows them to ensure that the curve is fully built before returning the required data. The public interface is completed by the update method.

The private methods include performCalculations, needed to implement the LazyObject interface, and discountImpl, which (like the public inspectors) overrides the parent-class version. The Bootstrap class template is instantiated with the type of the curve being defined. Since it will need access to the internals of the curve, the resulting class is declared as a friend of the PiecewiseYieldCurve class; the same is done for the BootstrapError class, used in the bootstrap algorithm and described later. Finally, we store an instance of the bootstrap class, as well as the required curve accuracy and the helpers.

Listing 3.7 (continued)
template <class T, class I,
          template <class> class B>
PiecewiseYieldCurve<T,I,B>::PiecewiseYieldCurve(
           Natural settlementDays,
           const Calendar& calendar,
           const std::vector<shared_ptr<helper> >& instruments,
           const DayCounter& dayCounter,
           Real accuracy,
           const I& interpolator)
: base_curve(settlementDays, calendar, dayCounter, interpolator),
  instruments_(instruments), accuracy_(accuracy) {
    bootstrap_.setup(this);
}

template <class T, class I,
          template <class> class B>
void PiecewiseYieldCurve<T,I,B>::update() {
    base_curve::update();
    LazyObject::update();
}

template <class T, class I,
          template <class> class B>
void PiecewiseYieldCurve<T,I,B>::performCalculations() const {
    bootstrap_.calculate();
}

template <class T, class I,
          template <class> class B>
DiscountFactor PiecewiseYieldCurve<T,I,B>::discountImpl(Time t) const {
    calculate();
    return base_curve::discountImpl(t);
}
Let's now have a look at the implementation. The constructor holds no surprises: it passes the needed arguments to the base class and stores the remaining ones in this class. Finally, it passes the curve itself to the stored Bootstrap instance, which performs some preliminary work—more on this later. The update method is needed for disambiguation: we are inheriting from two classes (LazyObject and TermStructure) which both define their own implementation of the method. The compiler justly refuses to second-guess us, so we have to explicitly call both parent implementations.

The performCalculations method simply delegates its work to the Bootstrap instance. Lastly, a look at the discountImpl method shows us why it had to be overridden: before calling the parent-class implementation, it has to ensure that the data were bootstrapped, which it does by calling the calculate method. The same holds for the other overridden inspectors, all following the same pattern. On the other hand, there's no need to override the zeroYieldImpl and forwardImpl methods, even if the curve inherits from ZeroYieldStructure or ForwardRateStructure; if you look back at listing 3.5 here, you'll see that those methods are only ever called by discountImpl. Thus, overriding the latter is enough.

At this point, I need to describe the BootstrapHelper class template; its interface is sketched in listing 3.9. For our curve, we'll instantiate it (as you can see in the Discount traits shown earlier) as BootstrapHelper<YieldTermStructure>; for convenience, the library provides an alias to this class called RateHelper that can be used in place of the more verbose type name.

Listing 3.9: Interface of the BootstrapHelper class template.
    template <class TS>
    class BootstrapHelper : public Observer, public Observable {
      public:
        BootstrapHelper(const Handle<Quote>& quote);
        virtual ~BootstrapHelper() {}

        Real quoteError() const;
        const Handle<Quote>& quote() const;
        virtual Real impliedQuote() const = 0;

        virtual void setTermStructure(TS*);

        virtual Date latestDate() const;

        virtual void update();
      protected:
        Handle<Quote> quote_;
        TS* termStructure_;
        Date latestDate_;
    };
Each instance of this class—or rather, of derived classes; the class itself is an abstract one—will help bootstrap a single node on the curve. The input datum for the node is the quoted value of an instrument; this is provided as a Handle to a Quote instance, since the value will change over time. For a yield curve, such instruments might be deposits or swaps, quoted as the corresponding market rates. For each kind of instrument, a derived class must be provided.

The functionality that is common to all helpers is implemented in the base class. BootstrapHelper inherits from both Observer and Observable; this double role allows it to register with the market value and to notify the curve of changes, signaling the need for a new bootstrap. Its constructor takes a Handle<Quote> providing the input market value, stores it as a data member, and registers itself as an observer. Three methods deal with the underlying instrument value: the quote method returns the handle containing the quoted value; the abstract impliedQuote method returns the value as calculated on the curve being bootstrapped; and the convenience method quoteError returns the signed difference between the two values.

Two more methods are used for setting up the bootstrap algorithm. The setTermStructure method links the helper with the curve being built. The latestDate method returns the latest date for which curve data are required in order to calculate the implied value of the market datum; that date will be used as the coordinate of the node being bootstrapped. (The latest required date does not necessarily correspond to the maturity of the instrument. For instance, if the instrument were a constant-maturity swap, the curve should extend a few years beyond the swap maturity in order to forecast the rate paid by the last coupon.) The last method, update, forwards notifications from the quote to the observers of the helper.

The library provides a few concrete helper classes inherited from RateHelper. In the interest of brevity, allow me to do a little hand-waving here instead of showing you the actual code. Each of the helper classes implements impliedQuote for a specific instrument and includes code for returning the proper latest date. For instance, the DepositRateHelper class forecasts a quoted deposit rate by asking the curve being bootstrapped for the forward rate between its start and maturity dates, whereas the SwapRateHelper class forecasts a swap rate by instantiating a Swap object, pricing it on the curve, and implying its fair rate. In a multi-curve setting, it's also possible to use the bootstrapped curve for forecasting forward rates and an external curve for discounting. If you're interested, more details on this (and a discussion of which helpers to choose for a given curve) are available in Ametrano and Bianchetti [2].
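
Just to give you the flavor, here's a simplified sketch of what a deposit helper might look like. This is not the actual DepositRateHelper (which takes care of calendars, conventions and fixing days); the class and member names are made up for the example:

    class SketchedDepositHelper : public RateHelper {
      public:
        SketchedDepositHelper(const Handle<Quote>& rate,
                              const Date& start,
                              const Date& maturity,
                              const DayCounter& dayCounter)
        : RateHelper(rate), start_(start),
          maturity_(maturity), dayCounter_(dayCounter) {}
        Date latestDate() const { return maturity_; }
        Real impliedQuote() const {
            // forecast the deposit rate on the curve being built
            return termStructure_->forwardRate(start_, maturity_,
                                               dayCounter_, Simple,
                                               Annual, true);
        }
      private:
        Date start_, maturity_;
        DayCounter dayCounter_;
    };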

We can finally dive into the bootstrap code. Listing 3.10 shows the interface of the IterativeBootstrap class, which is provided by the library and used by default.

Listing 3.10: Sketch of the IterativeBootstrap class template.
template <class Curve>
class IterativeBootstrap {
    typedef typename Curve::traits_type Traits;
    typedef typename Curve::interpolator_type Interpolator;
  public:
    IterativeBootstrap();
    void setup(Curve* ts);
    void calculate() const;
  private:
    Curve* ts_;
};

template <class Curve>
void IterativeBootstrap<Curve>::calculate() const {
    Size n = ts_->instruments_.size();

    // sort rate helpers by maturity
    // check that no two instruments have the same maturity
    // check that no instrument has an invalid quote

    for (Size i=0; i<n; ++i)
        ts_->instruments_[i]->setTermStructure(
                                       const_cast<Curve*>(ts_));

    ts_->dates_ = std::vector<Date>(n+1);
    // same for the other data vectors

    ts_->dates_[0] = Traits::initialDate(ts_);
    ts_->times_[0] = ts_->timeFromReference(ts_->dates_[0]);
    ts_->data_[0] = Traits::initialValue(ts_);

    for (Size i=0; i<n; ++i) {
        ts_->dates_[i+1] = ts_->instruments_[i]->latestDate();
        ts_->times_[i+1] =
            ts_->timeFromReference(ts_->dates_[i+1]);
    }

    Brent solver;

    for (Size iteration = 0; ; ++iteration) {
      for (Size i=1; i<n+1; ++i) {
          if (iteration == 0) {
              // extend interpolation a point at a time
              ts_->interpolation_ =
                  ts_->interpolator_.interpolate(
                                        ts_->times_.begin(),
                                        ts_->times_.begin()+i+1,
                                        ts_->data_.begin());
              ts_->interpolation_.update();
          }

          Rate guess;
          // estimate guess by using the value at the previous iteration,
          // by extrapolating, or by asking the traits

          // bracket the solution
          Real min = Traits::minValueAfter(i, ts_->data_);
          Real max = Traits::maxValueAfter(i, ts_->data_);

          BootstrapError<Curve> error(ts_,
                                      ts_->instruments_[i-1], i);
          ts_->data_[i] = solver.solve(error, ts_->accuracy_,
                                       guess, min, max);
      }

      if (!Interpolator::global)
          break;      // no need for convergence loop

      // check convergence and break if tolerance is reached; bail out
      // if tolerance wasn't reached in the given number of iterations
    }
}
For convenience, typedefs are defined to extract from the curve the traits and interpolator types. The constructor and the setup method are not of particular interest. The first just initializes the contained term-structure pointer to a null one; the second stores the passed curve pointer, checks that we have enough helpers to bootstrap the curve, and registers the curve as an observer of each helper. The bootstrap algorithm is implemented by the calculate method. In the version shown here, I'll gloss over a few details and corner cases; if you're interested, you can peruse the full code in the library.

First, the current term structure is set on all helpers. This is done each time (rather than in the setup method) so that a given set of helpers can be used with different curves. (Of course, the same helpers could not be passed safely to different curves in a multi-threaded environment, since the curves would compete for them. Then again, much of QuantLib is not thread-safe, so it's kind of a moot point.) Then, the data vectors are initialized. The date and value for the initial node are provided by the passed traits; for yield term structures, the initial date corresponds to the reference date of the curve. The initial value depends on the choice of the underlying data; it is 1 for discount factors and a dummy value (which will be overwritten during the bootstrap procedure) for zero or forward rates. The dates of the other nodes are the latest needed dates of the corresponding helpers; the times are obtained by means of the available curve facilities.

At this point, we can instantiate the one-dimensional solver that we'll use at each node (more details on solvers in a future post) and start the actual bootstrap. The calculation is written as two nested loops; an inner one—the bootstrap proper—that walks over each node, and an outer one that repeats the process. Iterative bootstrap is needed when a non-local interpolation (such as cubic splines) is used. In this case, setting the value of a node modifies the whole curve, invalidating previous nodes; therefore, we must go over the nodes a number of times until convergence is reached.

As I mentioned, the inner loop walks over each node—starting, of course, from the one at the earliest date. During the first iteration (when iteration == 0) the interpolation is extended one point at a time; later iterations use the full data range, so that the previous results are used as a starting point and refined. After each node is added, a one-dimensional root-finding algorithm is used to reproduce the corresponding market quote. For the first iteration, a guess for the solution can be provided by the bootstrap traits or by extrapolating the curve built so far; for further iterations, the previous result is used as the guess. The maximum and minimum values are provided by the traits, possibly based on the nodes already bootstrapped. For instance, the traits for bootstrapping zero or forward rates can prevent negative values by setting the minimum to zero; the traits for discount factors can do the same by ensuring that the discounts are not increasing, i.e., by setting the maximum to the discount at the previous node.
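
As an illustration, here is a sketch of the corresponding static methods for the Discount traits (simplified with respect to the library code):

    struct Discount {
        // ...
        static Real initialValue(const YieldTermStructure*) {
            return 1.0;         // discount at the reference date
        }
        static Real minValueAfter(Size,
                                  const std::vector<Real>&) {
            return QL_EPSILON;  // keep discount factors positive
        }
        static Real maxValueAfter(Size i,
                                  const std::vector<Real>& data) {
            return data[i-1];   // discounts must not increase
        }
        static void updateGuess(std::vector<Real>& data,
                                Real discount, Size i) {
            data[i] = discount;
        }
    };

The updateGuess method, used by the BootstrapError class shown next, simply writes the candidate value into the data vector.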

The only missing ingredient for the root-finding algorithm is the function whose zero must be found. It is provided by the BootstrapError class template (shown in listing 3.11), which adapts the helper's quoteError calculation to a function-object interface. Its constructor takes the curve being built, the helper for the current node, and the node index, and stores them. Its operator() makes instances of this class usable as functions; it takes a guess for the node value, modifies the curve data accordingly, and returns the quote error.

Listing 3.11: Interface of the BootstrapError class template.
    template <class Curve>
    class BootstrapError {
        typedef typename Curve::traits_type Traits;
      public:
        BootstrapError(
            const Curve* curve,
            const shared_ptr<typename Traits::helper>& helper,
            Size segment);
        Real operator()(Rate guess) const {
            Traits::updateGuess(curve_->data_, guess, segment_);
            curve_->interpolation_.update();
            return helper_->quoteError();
        }
      private:
        const Curve* curve_;
        const shared_ptr<typename Traits::helper> helper_;
        const Size segment_;
    };
At this point, we're all set. The inner bootstrap loop creates a BootstrapError instance and passes it to the solver, which returns the node value for which the error is zero—i.e., for which the implied quote equals (within accuracy) the market quote. The curve data are then updated to include the returned value, and the loop turns to the next node.

When all nodes are bootstrapped, the outer loop checks whether another iteration is necessary. For local interpolations, this is not the case and we can break out of the loop. For non-local ones, the obtained accuracy is checked (I'll spare you the details here) and iterations are added until convergence is reached.

This concludes the bootstrap; and, as I don't want to further test your patience, it also concludes the example. Sample code using the PiecewiseYieldCurve class can be found in the QuantLib distribution, e.g., in the swap-valuation example.
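
For a taste of the usage, here is a minimal sketch; the rates, dates and conventions below are made up for the example:

    std::vector<shared_ptr<RateHelper> > helpers;
    helpers.push_back(shared_ptr<RateHelper>(
        new DepositRateHelper(0.0325, 3*Months, 2, TARGET(),
                              ModifiedFollowing, true, Actual360())));
    // ...more deposit and swap helpers, in increasing maturity order...

    PiecewiseYieldCurve<Discount,LogLinear> curve(2, TARGET(),
                                                  helpers, Actual360());

    DiscountFactor d = curve.discount(Date(16, June, 2014));

Being a lazy object, the curve doesn't bootstrap upon construction; the work is triggered by the first call to discount or to any other inspector.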

Aside: a friend in need

"Wait a minute," you might have said upon looking at the PiecewiseYieldCurve declaration, "wasn't friend considered harmful?" Well, yes—friend declarations break encapsulation and force tight coupling of classes.

However, let's look at the alternatives. The Bootstrap class needs write access to the curve data. Besides declaring it as a friend, we might have given it access in three ways. On the one hand, we might have passed the data to the Bootstrap instance; but this would have coupled the two classes just as tightly (the curve internals couldn't be changed without changing the bootstrap code as well). On the other hand, we might have exposed the curve data through its public interface; but this would have been an even greater break of encapsulation (remember that we need write access). And on the gripping hand, we might have encapsulated the data in a separate class and used private inheritance to control access. At the time of release 1.0, we felt that the friend declaration was no worse than the first two alternatives and resulted in simpler code. The third (and best) alternative might be implemented in a future release.

Bibliography

[1] V. Simonis and R. Weiss, Exploring Template Template Parameters. In Perspectives of System Informatics, Lecture Notes in Computer Science 2244, Springer Berlin/Heidelberg, 2001.
[2] F. Ametrano and M. Bianchetti, Everything You Always Wanted to Know About Multiple Interest Rate Curve Bootstrapping but Were Afraid to Ask, SSRN working papers series n. 2219548, 2013.
