## Monday, September 30, 2013

### Chapter 3, part 2 of n: Yield term structures

Hello everybody.

This post is the second in a series of a still undetermined number; the first part is here.

Call me a slowcoach (or whatever the expression might be in your part of the world), but I only found out this week that there's a Twitter feed for the Quantitative Finance Stack Exchange site. I'll be retweeting the QuantLib-related questions when the answers are useful, so you can push that "follow" button on the right if you're interested in those but don't want to get the full site feed. Or you can push it anyway. I won't mind.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

## Term Structures

### Yield Term Structures

The YieldTermStructure class predates TermStructure—in fact, it was even called TermStructure back in the day, when it was the only kind of term structure in the library and we still hadn't seen the world. Its interface provides the means to forecast interest rates and discount factors at any date in the curve domain; also, it implements some machinery to ease the task of writing a concrete yield curve.

#### Interface and implementation

The interface of the YieldTermStructure class is sketched in listing 3.3.

Listing 3.3: Partial interface of the YieldTermStructure class.
    class YieldTermStructure : public TermStructure {
      public:
        YieldTermStructure(const DayCounter& dc = DayCounter());
        YieldTermStructure(const Date& referenceDate,
                           const Calendar& cal = Calendar(),
                           const DayCounter& dc = DayCounter());
        YieldTermStructure(Natural settlementDays,
                           const Calendar&,
                           const DayCounter& dc = DayCounter());

        InterestRate zeroRate(const Date& d,
                              const DayCounter& dayCounter,
                              Compounding compounding,
                              Frequency frequency = Annual,
                              bool extrapolate = false) const;

        InterestRate zeroRate(Time t,
                              Compounding compounding,
                              Frequency frequency = Annual,
                              bool extrapolate = false) const;

        DiscountFactor discount(const Date&,
                                bool extrapolate = false) const;
        // same at time t

        InterestRate forwardRate(const Date& d1,
                                 const Date& d2,
                                 const DayCounter& dayCounter,
                                 Compounding compounding,
                                 Frequency frequency = Annual,
                                 bool extrapolate = false) const;
        // same starting from date d and spanning a period p
        // same between times t1 and t2

        // ...more methods
      protected:
        virtual DiscountFactor discountImpl(Time) const = 0;
    };

The constructors just forward their arguments to the corresponding constructors in the TermStructure class—nothing to write home about. The other methods return information on the yield structure in different ways: on the one hand, they can return zero rates, forward rates, and discount factors (rates are returned as instances of the InterestRate class, which I'll describe briefly in a future post); on the other hand, they are overloaded so that they can return information as a function of either dates or times.

Of course, there is a relationship between zero rates, forward rates, and discount factors; the knowledge of any one of them is sufficient to deduce the others. (I won't bore you with the formulas here—you know them.) This is reflected in the implementation, outlined in listing 3.4; the Template Method pattern is used to implement all public methods, directly or indirectly, in terms of the protected discountImpl abstract method. Derived classes only need to implement the latter in order to return any of the above quantities.
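
For the record, under continuous compounding the relationships read as follows (a standard recap, not library code; here D is the discount factor, z the zero rate, and f the forward rate between two times):

```latex
% D(T): discount factor, z(T): continuously-compounded zero rate,
% f(t1,t2): continuously-compounded forward rate between t1 and t2.
\begin{align*}
  D(T)       &= e^{-z(T)\,T} \\
  z(T)       &= -\frac{\ln D(T)}{T} \\
  f(t_1,t_2) &= \frac{\ln D(t_1) - \ln D(t_2)}{t_2 - t_1}
\end{align*}
```

Knowing D on the whole domain, both rates follow; this is why discountImpl alone suffices as the abstract method.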

Listing 3.4: Partial implementation of the YieldTermStructure class.
    InterestRate YieldTermStructure::zeroRate(
                                  const Date& d,
                                  const DayCounter& dayCounter,
                                  Compounding comp,
                                  Frequency freq,
                                  bool extrapolate) const {
        // checks and/or special cases
        Real compound = 1.0/discount(d, extrapolate);
        return InterestRate::impliedRate(compound,
                                         referenceDate(), d,
                                         dayCounter, comp, freq);
    }

    DiscountFactor YieldTermStructure::discount(
                                  const Date& d,
                                  bool extrapolate) const {
        checkRange(d, extrapolate);
        return discountImpl(timeFromReference(d));
    }


#### Discount, forward-rate, and zero-rate curves

What if the author of a derived class doesn't want to implement discountImpl, though? After all, one might want to describe a yield curve in terms, say, of zero rates. Ever ready to serve (just like Jeeves in the P. G. Wodehouse novels—not that you're Bertie Wooster, of course), QuantLib provides a couple of classes to be used in this case. The two classes (outlined in listing 3.5) are called ZeroYieldStructure and ForwardRateStructure. They use the Adapter pattern (in case you're keeping count, this would be another notch in the spine of our Gang-of-Four book) to transform the discount-based interface of YieldTermStructure into interfaces based on zero-yield and instantaneous-forward rates, respectively.

Listing 3.5: Outline of the ZeroYieldStructure and ForwardRateStructure classes.
    class ZeroYieldStructure : public YieldTermStructure {
      public:
        // forwarding constructors, not shown
      protected:
        virtual Rate zeroYieldImpl(Time) const = 0;
        DiscountFactor discountImpl(Time t) const {
            Rate r = zeroYieldImpl(t);
            return std::exp(-r*t);
        }
    };

    class ForwardRateStructure : public YieldTermStructure {
      public:
        // forwarding constructors, not shown
      protected:
        virtual Rate forwardImpl(Time) const = 0;
        virtual Rate zeroYieldImpl(Time t) const {
            // averages forwardImpl between 0 and t
        }
        DiscountFactor discountImpl(Time t) const {
            Rate r = zeroYieldImpl(t);
            return std::exp(-r*t);
        }
    };


The implementation of ZeroYieldStructure is simple enough. A few constructors (not shown here) forward their arguments to the corresponding constructors in the parent YieldTermStructure class. The Adapter pattern is implemented in the protected section: an abstract zeroYieldImpl method is declared and used to implement the discountImpl method. Thus, authors of derived classes only need to provide an implementation of zeroYieldImpl to obtain a fully functional yield curve. (Of course, the other required methods, such as maxDate, must be implemented as well.) Note that, due to the formula used to obtain the discount factor, the method must return zero yields as continuously-compounded annualized rates.
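
As a minimal, self-contained illustration of the mechanism (with stand-in types rather than the library's own, and a made-up FlatZeroCurve class that is not part of QuantLib), a derived class providing only zeroYieldImpl gets a working discount method for free:

```cpp
#include <cmath>

// Stand-ins for the library typedefs (illustration only).
typedef double Time;
typedef double Rate;
typedef double DiscountFactor;

// Discount-based interface, as in YieldTermStructure.
class YieldCurve {
  public:
    virtual ~YieldCurve() {}
    DiscountFactor discount(Time t) const { return discountImpl(t); }
  protected:
    virtual DiscountFactor discountImpl(Time) const = 0;
};

// The adapter: discountImpl is written once in terms of zeroYieldImpl.
class ZeroYieldCurve : public YieldCurve {
  protected:
    virtual Rate zeroYieldImpl(Time) const = 0;
    DiscountFactor discountImpl(Time t) const {
        Rate r = zeroYieldImpl(t);  // continuously compounded
        return std::exp(-r*t);
    }
};

// A concrete curve only needs to provide the zero yields.
class FlatZeroCurve : public ZeroYieldCurve {
  public:
    explicit FlatZeroCurve(Rate r) : r_(r) {}
  protected:
    Rate zeroYieldImpl(Time) const { return r_; }
  private:
    Rate r_;
};
```

With a flat 5% zero yield, discount(2.0) returns exp(-0.1), exactly as the formula in discountImpl prescribes.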

In a similar way, the ForwardRateStructure class provides the means to describe the curve in terms of instantaneous forward rates (again, on an annual basis) by implementing a forwardImpl method in derived classes. However, it has an added twist. In order to obtain the discount at a given time T, we have to average the instantaneous forwards between 0 and T, thus retrieving the corresponding zero-yield rate. This class can't make any assumption on the shape of the forwards; therefore, all it can do is perform a numerical integration—an expensive calculation. In order to provide a hook for optimization, the average is performed in a virtual zeroYieldImpl method that can be overridden if a faster calculation is available. You might object that if an expression is available for the zero yields, one can inherit from ZeroYieldStructure and be done with it; however, it is conceptually cleaner to express the curve in terms of the forwards if those are the actual focus of the model.

The two adapter classes I just described and the base YieldTermStructure class itself were used to implement interpolated discount, zero-yield, and forward curves. Listing 3.6 outlines the InterpolatedZeroCurve class template; the other two (InterpolatedForwardCurve and InterpolatedDiscountCurve) are implemented in the same way.

Listing 3.6: Outline of the InterpolatedZeroCurve class template.
    template <class Interpolator>
    class InterpolatedZeroCurve : public ZeroYieldStructure {
      public:
        // constructor
        InterpolatedZeroCurve(const std::vector<Date>& dates,
                              const std::vector<Rate>& yields,
                              const DayCounter& dayCounter,
                              const Interpolator& interpolator
                                                    = Interpolator())
        : ZeroYieldStructure(dates.front(), Calendar(),
                             dayCounter),
          dates_(dates), data_(yields),
          interpolator_(interpolator) {
            // check that dates are sorted, that there are
            // as many rates as dates, etc.

            // convert dates_ into times_

            interpolation_ =
                interpolator_.interpolate(times_.begin(),
                                          times_.end(),
                                          data_.begin());
        }
        Date maxDate() const {
            return dates_.back();
        }
        // other inspectors, not shown
      protected:
        // other constructors, not shown
        Rate zeroYieldImpl(Time t) const {
            return interpolation_(t, true);
        }
        mutable std::vector<Date> dates_;
        mutable std::vector<Time> times_;
        mutable std::vector<Rate> data_;
        mutable Interpolation interpolation_;
        Interpolator interpolator_;
    };


The template argument Interpolator has a twofold task. On the one hand, it acts as a traits class [1]. It specifies the kind of interpolation to be used, as well as a few of its properties: namely, how many points are required (e.g., at least two for a linear interpolation) and whether the chosen interpolation is global (i.e., whether or not moving a data point changes the interpolation in intervals that do not contain it; this is the case, e.g., for cubic splines). On the other hand, it doubles as a poor man's factory: when given a set of data points, it is able to build and return the corresponding Interpolation instance. (The Interpolation class will be described in a later post.)

The public constructor takes the data needed to build the curve: the set of dates over which to interpolate, the corresponding zero yields, the day counter to be used, and an optional Interpolator instance. For most interpolations, the last parameter is not needed; it can be passed when the interpolation needs parameters. The implementation forwards to the parent ZeroYieldStructure class the first of the passed dates, assumed to be the reference date for the curve, and the day counter; the other arguments are stored in the corresponding data members. After performing a few consistency checks, it converts the dates into times (using, of course, the passed reference date and day counter), asks the interpolator to create an Interpolation instance, and stores the result.

At this point, the curve is ready to be used. The other required methods can be implemented as one-liners; maxDate returns the latest of the passed dates, and zeroYieldImpl returns the interpolated value of the zero yield. Since the TermStructure machinery already takes care of range-checking, the call to the Interpolation instance includes a true argument. This causes the value to be extrapolated if the passed time is outside the given range.

Finally, the InterpolatedZeroCurve class also defines a few protected constructors. They take the same arguments as the constructors of its parent class ZeroYieldStructure, as well as an optional Interpolator instance, and forward them to the corresponding parent-class constructors; however, they don't create the interpolation—they cannot, since they don't take any zero-yield data. These constructors are defined so that it is possible to inherit from InterpolatedZeroCurve; derived classes will provide the data and create the interpolation based on whatever arguments they take (an example of this will be shown in the remainder of this section). For the same reason, most data members are declared as mutable; this makes it possible for derived classes to update the interpolation lazily, should their data change.

#### Aside: symmetry break.

You might argue that, as in George Orwell's Animal Farm, some term structures are more equal than others. The discount-based implementation seems to have a privileged role, being used in the base YieldTermStructure class. A more symmetric implementation might define three abstract methods in the base class (discountImpl, zeroYieldImpl, and forwardImpl, to be called from the corresponding public methods) and provide three adapters, adding a DiscountStructure class to the existing ones.

Well, the argument is sound; in fact, the very first implementation of the YieldTermStructure class was symmetric. The switch to the discount-based interface and the reasons thereof are now lost in the mists of time, but might have to do with the use of InterestRate instances; since they can require changes of frequency or compounding, zeroYield (to name one method) wouldn't be allowed to return the result of zeroYieldImpl directly anyway.

#### Aside: twin classes.

You might guess that code for interpolated discount and forward curves would be very similar to that for the interpolated zero-yield curve described here. The question naturally arises: would it be possible to abstract out common code? Or maybe we could even do with a single class template?

The answers are yes and no, respectively. Some code can be abstracted in a template class (in fact, this has been done already). However, the curves must implement three different abstract methods (discountImpl, forwardImpl, and zeroYieldImpl) so we still need all three classes as well as the one containing the common code.

#### Bibliography

[1] N. C. Myers, Traits: a new and useful template technique. C++ Report, June 1995.

## Monday, September 23, 2013

### Chapter 3, part 1 of n: Term structures

Hello again.

This post starts a new series that will cover chapter 3 of my book. Most of the content was already available (even though I'll review it a bit before posting, so there might be some revisions) but the chapter is still missing the last section. Hopefully, I'll write it by the end of the series. (Suspense. That's what makes you come back here again and again.)


## Term structures

CHANGE IS the only constant, as Heraclitus said. Paradoxically, the aphorism still holds after twenty-five centuries, and it holds in quantitative finance, too, where practically all quantities vary (sometimes spectacularly) over time.

This leads us straight to the subject of term structures. This chapter describes the basic facilities available for their construction, as well as a few existing term structures that can be used as provided.

### The TermStructure class

The current base class for term structures is a fine example of design ex-post. After some thinking, you might come up with a specification for such a class. When we started the library, we didn't; we just started growing classes as we needed them. A couple of years later, older and somewhat wiser, we looked at the existing term structures and abstracted out their common features. The result is the TermStructure class as described in this section.

#### Interface and requirements

Once abstracted out, the base term-structure class (whose interface is shown in listing 3.1) was responsible for three basic tasks.

Listing 3.1: Interface of the TermStructure class.
    class TermStructure : public virtual Observer,
                          public virtual Observable,
                          public Extrapolator {
      public:
        TermStructure(const DayCounter& dc = DayCounter());
        TermStructure(const Date& referenceDate,
                      const Calendar& calendar = Calendar(),
                      const DayCounter& dc = DayCounter());
        TermStructure(Natural settlementDays,
                      const Calendar&,
                      const DayCounter& dc = DayCounter());
        virtual ~TermStructure();

        virtual DayCounter dayCounter() const;
        virtual Date maxDate() const = 0;
        virtual Time maxTime() const;
        virtual const Date& referenceDate() const;
        virtual Calendar calendar() const;
        virtual Natural settlementDays() const;
        Time timeFromReference(const Date& date) const;

        void update();
      protected:
        void checkRange(const Date&, bool extrapolate) const;
        void checkRange(Time, bool extrapolate) const;

        bool moving_;
    };

The first is to keep track of its own reference date, i.e., the date at which—in a manner of speaking—the future begins. (This is not strictly true of all term structures. However, we'll leave it at that for the time being.) For a volatility term structure, that would most likely be today's date. For a yield curve, it might be today's date, too; but depending on the conventions used at one's desk (for instance, an interest-rate swap desk whose deals are all settled spot; that's on the second business day for you equity folks) the reference date might be the result of advancing today's date by a few business days. Our term-structure class must be able to perform such a calculation if needed. Also, there might be cases in which the reference date is specified externally (such as when a sequence of dates, including the reference, is tabulated somewhere together with the corresponding discount factors). Finally, the calculation of the reference date might be altogether delegated to some other object; we'll see such an arrangement in a later example. In all these cases, the reference date will be provided to client code by means of the referenceDate method. The related calendar and settlementDays methods return the calendar and the number of days used for the calculation ("settlement" applies to instruments, but is probably not the correct word for a term structure).
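
The business-day adjustment can be sketched like this, with a toy, weekends-only stand-in for the library's Calendar class (dates are plain serial day numbers here; illustrative only):

```cpp
// Day of week from a serial day number; by convention in this sketch,
// serial 0 is a Monday, so 0-4 are weekdays and 5-6 the weekend.
bool isBusinessDay(long serial) {
    return serial % 7 < 5;
}

// Advance a date by the given number of business days, as a calendar
// would when computing a spot reference date from today's date.
long advance(long serial, int businessDays) {
    while (businessDays > 0) {
        ++serial;
        if (isBusinessDay(serial))
            --businessDays;
    }
    return serial;
}
```

Starting from a Friday (serial 4), two business days later is the following Tuesday (serial 8): the weekend is skipped, exactly the calculation a spot-settled swap desk needs for its reference date.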

The second (and somewhat mundane) task is to convert dates to times, i.e., points on a real-valued time axis starting with t=0 at the reference date. Such times might be used in the mathematical model underlying the curve, or simply to convert, say, from discount factors to zero-yield rates. The calculation is made available by means of the timeFromReference method.

The third task (also a mundane one) is to check whether a given date or time belongs to the domain covered by the term structure. The TermStructure class delegates to derived classes the specification of the latest date in the domain—which must be implemented in the maxDate method—and provides a corresponding maxTime method as well as an overloaded checkRange method performing the actual test; there is no minDate method, as the domain is assumed to start at the reference date.

#### Implementation

The first task—keeping track of the reference date—starts when the term structure is instantiated.

Listing 3.2: Implementation of the TermStructure class.
    TermStructure::TermStructure(const DayCounter& dc)
    : moving_(false), updated_(true),
      settlementDays_(Null<Natural>()), dayCounter_(dc) {}

    TermStructure::TermStructure(const Date& referenceDate,
                                 const Calendar& calendar,
                                 const DayCounter& dc)
    : moving_(false), referenceDate_(referenceDate),
      updated_(true), settlementDays_(Null<Natural>()),
      calendar_(calendar), dayCounter_(dc) {}

    TermStructure::TermStructure(Natural settlementDays,
                                 const Calendar& calendar,
                                 const DayCounter& dc)
    : moving_(true), updated_(false),
      settlementDays_(settlementDays),
      calendar_(calendar), dayCounter_(dc) {
        registerWith(Settings::instance().evaluationDate());
    }

    DayCounter TermStructure::dayCounter() const {
        return dayCounter_;
    }

    Time TermStructure::maxTime() const {
        return timeFromReference(maxDate());
    }

    const Date& TermStructure::referenceDate() const {
        if (!updated_) {
            Date today = Settings::instance().evaluationDate();
            referenceDate_ =
                calendar().advance(today, settlementDays_, Days);
            updated_ = true;
        }
        return referenceDate_;
    }

    Calendar TermStructure::calendar() const {
        return calendar_;
    }

    Natural TermStructure::settlementDays() const {
        return settlementDays_;
    }

    Time TermStructure::timeFromReference(const Date& d) const {
        return dayCounter().yearFraction(referenceDate(), d);
    }

    void TermStructure::update() {
        if (moving_)
            updated_ = false;
        notifyObservers();
    }

    void TermStructure::checkRange(const Date& d,
                                   bool extrapolate) const {
        checkRange(timeFromReference(d), extrapolate);
    }

    void TermStructure::checkRange(Time t,
                                   bool extrapolate) const {
        QL_REQUIRE(t >= 0.0,
                   "negative time (" << t << ") given");
        QL_REQUIRE(extrapolate || allowsExtrapolation()
                   || t <= maxTime(),
                   "time (" << t
                   << ") is past max curve time ("
                   << maxTime() << ")");
    }

Depending on how the reference date is to be calculated, different constructors must be called. All such constructors set two boolean data members. The first is called moving_; it is set to true if the reference date moves when today's date changes, or to false if the date is fixed. The second, updated_, specifies whether the value of another data member (referenceDate_, storing the latest calculated value of the reference date) is currently up to date or should be recalculated.

Three constructors are available. One simply takes a day counter (used for time calculations, as we will see later) but no arguments related to reference-date calculation. Of course, the resulting term structure can't calculate such date; therefore, derived classes calling this constructor must take care of the calculation by overriding the virtual referenceDate method. The implementation sets moving_ to false and updated_ to true to inhibit calculations in the base class.

Another constructor takes a date, as well as an optional calendar and a day counter. When this one is used, the reference date is assumed to be fixed and equal to the given date. Accordingly, the implementation sets referenceDate_ to the passed date, moving_ to false, and updated_ to true.

Finally, a third constructor takes a number of days and a calendar. When this one is used, the reference date will be calculated as today's date advanced by the given number of business days according to the given calendar. Besides copying the passed data to the corresponding data members, the implementation sets moving_ to true and updated_ to false (since no calculation is performed at this time). However, that's not the full story; if today's date changes, the term structure must be notified so that it can update its reference date. The Settings class (described elsewhere) provides global access to the current evaluation date, with which the term structure registers as an observer. When a change is notified, the update method is executed. If the reference date is moving, the body of the method sets updated_ to false before forwarding the notification to the term structure's own observers.

Apart from trivial inspectors such as the calendar method, the implementation of the first task is completed with the referenceDate method. If the reference date needs to be calculated, it does so by retrieving the current evaluation date, advancing it as specified, and storing the result in the referenceDate_ data member before returning it.

The second task is much simpler, since the conversion of dates into times can be delegated entirely to a DayCounter instance. Such a day counter is usually passed to the term structure as a constructor argument and stored in the dayCounter_ data member. The conversion is handled by the timeFromReference method, which asks the day counter for the number of years between the reference date and the passed date. Note that, in the body of the method, both the day counter and the reference date are accessed by means of the corresponding methods rather than the data members. This is necessary, since—as I mentioned earlier—the referenceDate method can be overridden entirely and thus disregard the data member; the same applies to the dayCounter method.

You might object that this is, to use the term coined by Kent Beck [1], a code smell. A term-structure instance might store a day counter or a reference date (or likely both) that don't correspond to the actual ones used by its methods. This disturbs me as well; and indeed, earlier versions of the class declared the dayCounter method as pure virtual and did not include the data member. However, it is a necessary evil in the case of the reference date, since we need a data member to cache its calculated value. Due to the broken-window effect [2], the day counter, calendar, and settlement days followed (after a period in which we developed a number of derived term structures, all of which had to define the same data members).

What day counter should be used for a given term structure? Fortunately, it doesn't matter much. If one is only working with dates (i.e., provides dates as an input for the construction of the term structure and uses dates as arguments to retrieve values) the effects of choosing a specific day counter will cancel out as long as the day counter is sufficiently well behaved: for instance, if it is homogeneous (by which I mean that the time T(d1,d2) between two dates d1 and d2 equals the time T(d3,d4) between d3 and d4 if the two pairs of dates differ by the same number of days) and additive (by which I mean that T(d1,d2) + T(d2,d3) equals T(d1,d3) for all choices of the three dates). Two such day counters are the actual/360 and the actual/365-fixed ones. Similarly, if one is only working with times, the day counter will not be used at all.

Onwards with the third and final task. The job of defining the valid date range is delegated to derived classes, which must define the maxDate method (here declared as purely virtual). The corresponding time range is calculated by the maxTime method, which simply converts the latest valid date to time by means of the timeFromReference method; this, too, can be overridden. Finally, the two checkRange methods implement the actual range checking and throw an exception if the passed argument is not in the valid range; the one that takes a date does so by forwarding the request to the other after converting the given date to a time. The check can be overridden by a request to extrapolate outside the domain of the term-structure; this can be done either by passing an optional boolean argument to checkRange or by using the facilities provided by the Extrapolator class from which TermStructure inherits. Extrapolation is only allowed beyond the maximum date; requests for dates before the reference date are always rejected.

#### Aside: evaluation date tricks.

If no evaluation date is set, the Settings class defaults to returning today's date. Unfortunately, the latter will change silently (that is, without notifying its observers) at the strike of midnight, causing mysterious errors. If you run overnight calculations, you'll have to perform the same feat as Hiro Nakamura in Heroes—freeze time. Explicitly setting today's date as the evaluation date will keep it fixed, even when today becomes tomorrow.

Another trick worth knowing: if all your term structures are moving, setting the evaluation date to tomorrow and recalculating the value of your instruments while keeping everything else unchanged will give you the daily theta of your portfolio.

#### Bibliography

[1] M. Fowler, K. Beck, J. Brant, W. Opdyke and D. Roberts, Refactoring: Improving the Design of Existing Code. Addison-Wesley, 1999.
[2] A. Hunt and D. Thomas, The Pragmatic Programmer: From Journeyman to Master. Addison-Wesley, 1999.

## Monday, September 16, 2013

### Odds and ends: basic types

Hello again.

Today's post is about the basic types we're using in the library. Most of its content was prompted by comments made on a previous post; so thank you, Matt.

In my last post, I mentioned that I'm looking into publishing Implementing QuantLib as an ebook, but I'm not sure if there's any interest; please go read the post for details, if you haven't already, and leave your feedback.


## Odds and ends: basic types

The library interfaces don't use built-in types; instead, a number of typedefs are provided such as Time, Rate, Integer, or Size. They are all mapped to basic types (we talked about using full-featured types, possibly with range checking, but we dumped the idea). Furthermore, all floating-point types are defined as Real, which in turn is defined as double. This makes it possible to change all of them consistently by just changing Real.

In principle, this would allow one to choose the desired level of accuracy; but to this, the test suite answers "Fiddlesticks!", since it shows a few failures when Real is defined as float or long double. The value of the typedefs is really in making the code clearer—and in allowing dimensional analysis for those who, like me, were used to it in a previous life as a physicist; for instance, expressions such as exp(r) or r+s*t can be immediately flagged as fishy if they are preceded by Rate r, Spread s, and Time t.

Of course, all those fancy types are only aliases to double, and the compiler doesn't really distinguish between them. It would be nice if they had stronger typing, so that, for instance, one could overload a method based on whether it is passed a price or a volatility.

One possibility would be the BOOST_STRONG_TYPEDEF macro, which is one of the bazillion utilities provided by Boost. It is used as, say,
    BOOST_STRONG_TYPEDEF(double, Time)
    BOOST_STRONG_TYPEDEF(double, Rate)

and creates a corresponding proper class with appropriate conversions to and from the underlying type. This would allow overloading methods, but it has the drawback that not all conversions are implicit. This would break backward compatibility and make things generally awkward. For instance, a simple expression like Time t = 2.0; wouldn't compile. You'd also have to write f(Time(1.5)) instead of just f(1.5), even if f wasn't overloaded.

Also, the classes defined by the macro overload all operators: you can happily add a time to a rate, even though it doesn't make sense (yes, dimensional analysis again). It would be nice if the type system prevented this from compiling, while still allowing, for instance, to add a spread to a rate yielding another rate or to multiply a rate by a time yielding a pure number.

How to do this in a generic way, and ideally with no run-time costs, was shown first by Barton and Nackman [1]; a variation of their idea is implemented in the Boost::Units library, and a simpler one was implemented once by yours truly while still working in Physics. (I won't explain it here, but go look for it. It's almost insanely cool.) However, that might be overkill here; we don't have to deal with all possible combinations of length, mass, time and so on.

The ideal compromise for a future library might be to implement wrapper classes (à la Boost strong typedef) and to define explicitly which operators are allowed for which types. As usual, we're not the first ones to have this problem: the idea has been floating around for a while, and a proposal was put forward [2] to add to the next version of C++ a new feature, called opaque typedefs, which would make it easier to define this kind of types.

A final note: among these types, there is at least one which is not determined on its own (like Rate or Time) but depends on other types. The volatility of a price and the volatility of a rate have different dimensions, and thus should have different types. In short, Volatility should be a template type.

#### Bibliography

[1] J. Barton and L. R. Nackman, Dimensional Analysis, C++ Report, January 1995.
[2] W. E. Brown, Toward Opaque Typedefs for C++1Y, v2. C++ Standard Committee Paper N3741, 2013.

## Monday, September 9, 2013

### Intermission: LaTeX style file and some musings.

Welcome back.

After the series of posts on chapter 5 (you'll find them under August and September in the blog archive on the right side of the page), a short intermission about the PDF version of the book.

First: in the past, a few people asked me what LaTeX style file I was using to produce it. I finally managed to clean the file up a bit and I added it to the book page for download. Go ahead and do whatever you want with it.

Second: I've updated the PDF version of chapter 2 with the changes I've made before posting it on the blog in July. Again, see the book page and get it while it's hot.

As for the general state of the thing: for the time being, it could easily be worse. The latest posts gave me a first version of chapter 5, which was one of the two stinkers I've mentioned in this post. The other (chapter 8) will probably take a while. In the meantime, I'll probably revise and post one or two of the chapters I already have; chapter 3, for instance, which would give me an excuse to finally write its last section.

Of course, it could all go the way of the best laid schemes of mice and men instead. If I'm lucky.

A final note: PDF files are ok to produce actual physical books (which I'll probably do at some point using something like Lulu or CreateSpace) but they're not optimal on all those newfangled tablets and thingamabobs you people are using. I'm looking into a couple of possibilities for producing an ebook version—such as using LeanPub, for instance, that would also allow one to optionally donate a few bucks when downloading the book. However, it would need some work for me to get the content in the required format, so please let me know if there's any interest in that. If it's of no use to anybody, I won't even begin.

I guess that's all for today. Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

## Monday, September 2, 2013

### Chapter 5, part 5 of 5: Model example

Hello everybody.

This is the final post in a series of five covering the newly written chapter 5 of my book. The previous posts are here, here, here and here. I'll be grateful for any feedback.

As I publish this, I'm in London for my Introduction to QuantLib Development course. Drop me a line if you want to be notified when a new one is scheduled; my contact info is at this link.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

## Parameterized models and calibration

### Example: the Heston model, continued

Time for the second part of the example I started in this post. The code for the HestonModel class is shown in listing 5.7.

Listing 5.7: Implementation of the HestonModel class.
    class HestonModel : public CalibratedModel {
      public:
        HestonModel(const shared_ptr<HestonProcess>& process)
        : CalibratedModel(5), process_(process) {
            arguments_[0] = ConstantParameter(process->theta(),
                                              PositiveConstraint());
            arguments_[1] = ConstantParameter(process->kappa(),
                                              PositiveConstraint());
            arguments_[2] = ConstantParameter(process->sigma(),
                                              PositiveConstraint());
            arguments_[3] = ConstantParameter(process->rho(),
                                              BoundaryConstraint(-1.0, 1.0));
            arguments_[4] = ConstantParameter(process->v0(),
                                              PositiveConstraint());
            generateArguments();
            registerWith(process_->riskFreeRate());
            registerWith(process_->dividendYield());
            registerWith(process_->s0());
        }

        Real theta() const { return arguments_[0](0.0); }
        Real kappa() const { return arguments_[1](0.0); }
        Real sigma() const { return arguments_[2](0.0); }
        Real rho()   const { return arguments_[3](0.0); }
        Real v0()    const { return arguments_[4](0.0); }

        shared_ptr<HestonProcess> process() const {
            return process_;
        }
      protected:
        void generateArguments() {
            process_.reset(
                new HestonProcess(process_->riskFreeRate(),
                                  process_->dividendYield(),
                                  process_->s0(),
                                  v0(), kappa(), theta(),
                                  sigma(), rho()));
        }
        shared_ptr<HestonProcess> process_;
    };

As you might know, the model has five parameters: theta, kappa, sigma, rho and v0. The process it describes for the underlying also depends on the risk-free rate, on the current value of the underlying, and possibly on a dividend yield. For reasons that will become clearer in chapter 6, the library groups all of those in a separate HestonProcess class (for brevity, I'm not showing its interface here; we're just using it as a container for the model parameters).

The HestonModel constructor takes an instance of the process class, stores it, and defines the parameters to calibrate. First it passes their number (5) to its base class constructor, then it builds each of them. They are all constant parameters; rho is constrained to be between -1 and 1, while the others must all be positive. Their initial values are taken from the process. The class defines the inspectors theta, kappa, sigma, rho and v0 to retrieve their current values; each of them returns the value of the corresponding Parameter instance at time t=0 (which is as good as any other time, since the parameters are constant).

After the parameters are built, the generateArguments method is called. This will also be called each time the parameters change during calibration, and replaces the stored process instance with another one containing the same term structures and quote as the old one but with the new parameters. The reason for this is that the new process would be ready if any engine were to require it from the model; but I wonder if the process inspector should not build the process on demand instead. If the actual process is not required except as a holder of parameters and curves, we could define inspectors for all of them instead, have the engine use them directly, and save ourselves the creation of a new complex object at each step of the calibration. You're welcome to do the experiment and time the new implementation against the current one.
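For what it's worth, the on-demand alternative might look something like the following sketch. The names here are simplified stand-ins made up for illustration (a bare `Process` struct instead of HestonProcess, a single `setV0` setter instead of the calibration machinery); the point is only that a parameter change just drops the cached process, which is rebuilt lazily when someone asks for it.

```cpp
#include <cassert>
#include <memory>

// Simplified stand-in for the process class: just a bag of parameters.
struct Process {
    double v0, kappa, theta, sigma, rho;
    Process(double v0, double kappa, double theta,
            double sigma, double rho)
    : v0(v0), kappa(kappa), theta(theta),
      sigma(sigma), rho(rho) {}
};

// Hypothetical alternative to generateArguments: the model stores the
// parameters and builds (and caches) the process only when asked.
class LazyModel {
  public:
    LazyModel(double v0, double kappa, double theta,
              double sigma, double rho)
    : v0_(v0), kappa_(kappa), theta_(theta),
      sigma_(sigma), rho_(rho) {}
    // called when the optimizer moves a parameter: no new process is
    // built here, we just drop the cached one
    void setV0(double v0) { v0_ = v0; process_.reset(); }
    // build the process on demand, then reuse it until invalidated
    std::shared_ptr<Process> process() const {
        if (!process_)
            process_ = std::make_shared<Process>(v0_, kappa_, theta_,
                                                 sigma_, rho_);
        return process_;
    }
  private:
    double v0_, kappa_, theta_, sigma_, rho_;
    mutable std::shared_ptr<Process> process_;
};
```

If an engine only needed the curves and the parameter values, the model could expose them directly and the process might never be built at all during calibration; timing that variant against the current one is the experiment I mentioned above.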

Finally, the constructor registers with the relevant observables. The process instance will be replaced by generateArguments, so there's no point in registering with it. Instead, we register directly with the contained handles, that will be moved inside each new process instance.

Together with the inspectors I already mentioned, this completes the implementation of the model. The calibration machinery is inherited from the CalibratedModel class, and the only thing that's needed to make it work is an engine that takes a HestonModel instance and uses it to price the VanillaOption instances contained in the calibration helpers.

You'll forgive me for not showing here the AnalyticHestonEngine class provided by QuantLib: it implements a closed formula for European options under the Heston model, cites as many as five papers and books in its comments, and goes on for about 650 lines of code. For the non-mathematically-minded, it is Cthulhu fhtagn stuff. If you're interested, the full code is available in the library.