## Monday, March 30, 2015

### Chapter 7, part 3 of 6: binomial trees

Welcome back.

This week, part 3 of the series on the QuantLib tree framework that started a couple of posts ago. But first, do you see anything different in the address bar?

Yes, this blog has its own domain now. It's at http://implementingquantlib.com. It's not that different, of course, but it's easier to remember (and it might make it possible in future to switch to other blogging options without inconveniencing you, dear reader). You don't need to change your bookmarks or your feeds right away: the old address is still working and is being redirected to the new one.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

### Trees and tree-based lattices

In order to use the assets I described in the previous post, we need a lattice. The first question we have to face is at the very beginning of its design: should the tree itself be the lattice, or should we have a separate lattice class using a tree?

Even if the first alternative can be argued (a tree can very well be seen as a kind of lattice, so an is-a relationship would be justified), the library currently implements the second. I'm afraid the reasons for the choice are lost in time; one might be that having a class for the lattice and one for the tree allows us to separate different concerns. The tree has the responsibility for maintaining the structure and the probability transitions between nodes; the lattice uses the tree structure to roll the asset back and adds some additional features such as discounting.

#### The Tree class template

As I just wrote, a tree provides information about its structure. It can be described by the number of nodes at each level (each level corresponding to a different time), the nodes that can be reached from each other node at the previous level, and the probabilities for each such transition.

The interface chosen to represent this information (shown in the listing below) was an index-based one. Nodes were not represented as instances of a Node class or structure; instead, the tree was described—not implemented, mind you—as a series of vectors of increasing size, and the nodes were identified by their indexes into the vectors. This gives more flexibility in implementing different trees; one tree class might store the actual vectors of nodes, while another might just calculate the required indexes as needed. In the library, trinomial trees are implemented in the first way and binomial trees in the second. I'll describe both later on.

    class Tree {
      public:
        Tree(Size columns);
        virtual ~Tree();
        Size columns() const;
        virtual Size size(Size i) const = 0;
        virtual Real underlying(Size i, Size j) const = 0;
        virtual Size branches() const = 0;
        virtual Size descendant(Size i, Size j,
                                Size branch) const = 0;
        virtual Real probability(Size i, Size j,
                                 Size branch) const = 0;
    };

The methods shown in the interface fully describe the tree structure. The columns method returns the number of levels in the tree; evidently, the original designer visualized the time axis as going from left to right, with the tree growing in the same direction. If we were to change the interface, I'd probably go for a more neutral name, such as size.

The size name was actually taken for another method, which returns the number of nodes at the i-th level of the tree; the underlying method takes two indexes i and j and returns the value of the variable underlying the tree at the j-th node of the i-th level.

The remaining methods deal with branching. The branches method returns the number of branches for each node (2 for binomial trees, 3 for trinomial ones and so on). It takes no arguments, which means that we're assuming such a number to be constant; we'll have no trees with a variable number of branches. The descendant identifies the nodes towards which a given node is branching: it takes the index i of the level, the index j of the node and the index of a branch, and returns the index of the corresponding node on level i+1 as exemplified in the following figure. Finally, the probability method takes the same arguments and returns the probability to follow a given branch.
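To make the index-based interface more concrete, here is a small self-contained sketch (a toy of my own, not the QuantLib classes) of how a lattice might use size, descendant and probability to perform one step of backward induction:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// A toy tree exposing the same index-based interface described
// above; illustrative only.
struct ToyBinomialTree {
    double p;   // probability of the "up" branch
    std::size_t size(std::size_t i) const { return i + 1; }
    std::size_t descendant(std::size_t, std::size_t index,
                           std::size_t branch) const {
        return index + branch;   // recombining binomial branching
    }
    double probability(std::size_t, std::size_t,
                       std::size_t branch) const {
        return branch == 1 ? p : 1.0 - p;
    }
};

// One step of rollback: given the values at level i+1, compute the
// (undiscounted) expected values at level i.
std::vector<double> stepBack(const ToyBinomialTree& tree,
                             std::size_t i,
                             const std::vector<double>& next) {
    std::vector<double> values(tree.size(i), 0.0);
    for (std::size_t j = 0; j < values.size(); ++j)
        for (std::size_t b = 0; b < 2; ++b)
            values[j] += tree.probability(i, j, b)
                       * next[tree.descendant(i, j, b)];
    return values;
}
```

Note the access pattern: descendant and probability are called for every node and branch at every step, which is why the cost of those calls matters so much.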

The original implementation actually sported a base Tree class as shown in the listing above. It worked, but it committed what amounts to a mortal sin in numerical code: it declared as virtual some methods (such as descendant or probability) that were meant to be called thousands of times inside tight nested loops. (If you're wondering why this is bad, see for instance [1] for a short explanation—along with a lot more information on using C++ for numerical work.) After a while, we started to notice that small glaciers were disappearing in the time it took us to calibrate a short-rate model.

Eventually, this led to a reimplementation. We used the Curiously Recurring Template Pattern (CRTP) to replace run-time with compile-time polymorphism (see the aside below if you need a refresher) which led to the Tree class template shown in the next listing. The virtual methods are gone, and only the non-virtual columns method remains; the machinery for the pattern is provided by the CuriouslyRecurringTemplate class template, which avoids repetition of boilerplate code by implementing the const and non-const versions of the impl method used in the pattern.

    template <class T>
    class Tree : public CuriouslyRecurringTemplate<T> {
      public:
        Tree(Size columns) : columns_(columns) {}
        Size columns() const { return columns_; }
      private:
        Size columns_;
    };

This was not a redesign; the existing class hierarchy was left unchanged. Switching to CRTP yielded slightly more complex code, as you'll see when I describe binomial and trinomial trees. However, it paid off in speed: with the new implementation, we saw performance increase by up to 10x. Seeing the results, I didn't stop to ask myself many questions (yes, I'm the guilty party here) and I missed an opportunity to simplify the code. With a cooler head, I might have noticed that the base tree classes don't call methods defined in derived classes, so I could have simplified the code further by dropping CRTP for simple templates. (If a new branch of the hierarchy were to need CRTP, it could be added locally to that branch.) Yet another thing to remember when we decide to redesign parts of the library.

#### Aside: curiouser and curiouser.

The Curiously Recurring Template Pattern (CRTP) was named and popularized by James Coplien in 1995 [2] but it predates his description. It is a way to achieve a static version of the Template Method pattern (see [3], of course), in which a base class implements common behavior and provides hooks for customization by calling methods that will be implemented in derived classes.

The idea is the following:

    template <class T>
    class Base {
      public:
        T& impl() {
            return static_cast<T&>(*this);
        }
        void foo() {
            ...
            this->impl().bar();
            ...
        }
    };

    class Derived : public Base<Derived> {
      public:
        void bar() { ... }
    };

Now if we instantiate a Derived and call its foo method, it will in turn call the correct bar implementation: that is, the one defined in Derived (note that Base doesn't declare bar at all).

We can see how this works by tracing the expansion made by the compiler. The Derived class doesn't define a foo method, so it inherits it from its base class, that is, Base<Derived>. The foo method calls impl, which casts *this to a reference to the template argument T, i.e., Derived itself. We called the method on a Derived instance to begin with, so the cast succeeds and we're left with a reference to our object that has the correct type. At this point, the compiler resolves the call to bar on the reference as Derived::bar, and thus the correct method is called. All this happens at compile time, and enables the compiler to inline the call to bar and perform any optimization it sees fit.

Why go through all this, and not just call bar from foo instead? Well, it just wouldn't work. If we didn't declare bar in Base, the compiler wouldn't know what method we're talking about; and if we did declare it, the compiler would always resolve the call to the Base version of the method. CRTP gives us a way out: we can define the method in the derived class and use the template argument to pass its type to the base class, so that the static cast can close the circle.
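To see the static dispatch at work, here is a minimal compilable version of the pattern, with a trivial bar chosen purely for illustration:

```cpp
#include <cassert>

template <class T>
class Base {
  public:
    T& impl() { return static_cast<T&>(*this); }
    int foo() {
        // resolved at compile time; no virtual call is involved,
        // so the compiler is free to inline bar
        return this->impl().bar() * 2;
    }
};

class Derived : public Base<Derived> {
  public:
    int bar() { return 21; }
};
```

Calling foo on a Derived instance returns twice the value of Derived::bar, even though Base never declares bar.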

#### Binomial trees

As I mentioned earlier, a tree might implement its required interface by storing actual nodes or by just calculating their indexes on demand. The BinomialTree class template, shown in the listing below, takes the second approach. In a way, the nodes are the indexes; they don't exist as separate entities.

    template <class T>
    class BinomialTree : public Tree<T> {
      public:
        enum Branches { branches = 2 };
        BinomialTree(
            const boost::shared_ptr<StochasticProcess1D>& process,
            Time end, Size steps)
        : Tree<T>(steps+1) {
            x0_ = process->x0();
            dt_ = end/steps;
            driftPerStep_ = process->drift(0.0, x0_) * dt_;
        }
        Size size(Size i) const {
            return i+1;
        }
        Size descendant(Size, Size index, Size branch) const {
            return index + branch;
        }
      protected:
        Real x0_, driftPerStep_;
        Time dt_;
    };

The class template models a rather strict subset of the possible binomial trees, that is, recombining trees with constant drift and diffusion for the underlying variable and with constant time steps. (The library contains, in its "experimental" folder, an extension of this class to non-constant parameters; you're welcome to have a look at it and send us feedback. For illustration purposes, though, I'll stick to the constant version here.)

Even with these constraints, there are quite a few different possibilities for the construction of the tree, so most of the implementation is left to derived classes. This class acts as a base and provides the common facilities; first of all, a branches enumeration whose value is defined to be 2 and which replaces the corresponding method in the old Tree class. Nowadays, we'd use a static const Size data member for the purpose; the enumeration is a historical artifact from a time when compilers were less standard-compliant.

The class constructor takes a one-dimensional stochastic process providing the dynamics of the underlying, the end time of the tree (with the start time assumed to be $$0$$, another implicit constraint), and the number of time steps. It calculates and stores a few simple quantities: the initial value of the underlying, obtained by calling the x0 method of the process; the size of a time step, obtained by dividing the total time by the number of steps; and the amount of underlying drift per step, once again provided by the process (since the parameters are constant, the drift is calculated at $$t=0$$). The class declares corresponding data members for each of these quantities.

Unfortunately, there's no way to check that the given process is actually one with constant parameters. On the one hand, the library doesn't declare a specific type for that kind of process; it would be orthogonal to the declaration of different process classes, and it would lead to multiple inheritance—the bad kind—if one wanted to define, say, a constant-parameter Black-Scholes process (it would have to be both a constant-parameter process and a Black-Scholes process). Therefore, we can't enforce the property by restricting the type to be passed to the constructor. On the other hand, we can't write run-time tests for that: we might check a few sample points, but there's no guarantee that the parameters don't change elsewhere.

One option that remains is to document the behavior and trust the user not to abuse the class. Another (which we won't pursue now, for backward compatibility, but which might suit a future version of the library) would be for the constructor to forgo the process and take the constant parameters instead. However, this would leave the user with the chore of extracting the parameters from the process at each constructor invocation. As usual, we'll have to find some kind of balance.

The last two methods define the structure of the tree. The size method returns the number of nodes at the $$i$$-th level; the code assumes that there's a single node (the root of the tree) at level $$0$$, which gives $$i+1$$ nodes at level $$i$$ because of recombination. This seems reasonable enough, until we realize that we also assumed that the tree starts at $$t=0$$. Put together, these two assumptions prevent us from implementing techniques such as, say, having three nodes at $$t=0$$ in order to calculate Delta and Gamma by finite differences. Luckily enough, this problem might be overcome without losing backward compatibility: if one were to try implementing the technique, it could be enabled by adding additional constructors to the tree, allowing us to relax the assumption about the start time.

Finally, the descendant method describes the links between the nodes. Due to the simple recombining structure of the tree, the node at index $$0$$ on one level connects to the two nodes at index $$0$$ and $$1$$ on the next level, the node at index $$1$$ to those at index $$1$$ and $$2$$, and in general the node at index $$j$$ to those at index $$j$$ and $$j+1$$. Since the chosen branch is passed as an index ($$0$$ or $$1$$ for a binomial tree), the method just has to add the index of the node to that of the branch to return the correct result. Neither index is range-checked, since the method will almost always be called inside a loop; checks would be not only costly (probably; I haven't measured the effect, so I'll leave it at that) but actually redundant, since the indexes will already have been checked while setting up the loop.

The implementation of the BinomialTree class template misses the methods that map the dynamics of the underlying to the tree nodes: that is, the underlying and probability methods. They are left to derived classes, since there are several ways in which they can be written. In fact, there are families of ways: the two class templates in the following listing are meant to be base classes for two such families.

    template <class T>
    class EqualProbabilitiesBinomialTree : public BinomialTree<T> {
      public:
        EqualProbabilitiesBinomialTree(
            const boost::shared_ptr<StochasticProcess1D>& process,
            Time end,
            Size steps)
        : BinomialTree<T>(process, end, steps) {}
        Real underlying(Size i, Size index) const {
            int j = 2*int(index) - int(i);
            return this->x0_*std::exp(i*this->driftPerStep_
                                      + j*this->up_);
        }
        Real probability(Size, Size, Size) const {
            return 0.5;
        }
      protected:
        Real up_;
    };

    template <class T>
    class EqualJumpsBinomialTree : public BinomialTree<T> {
      public:
        EqualJumpsBinomialTree(
            const boost::shared_ptr<StochasticProcess1D>& process,
            Time end,
            Size steps)
        : BinomialTree<T>(process, end, steps) {}
        Real underlying(Size i, Size index) const {
            int j = 2*int(index) - int(i);
            return this->x0_*std::exp(j*this->dx_);
        }
        Real probability(Size, Size, Size branch) const {
            return (branch == 1 ? pu_ : pd_);
        }
      protected:
        Real dx_, pu_, pd_;
    };

The first, EqualProbabilitiesBinomialTree, can be used for those trees in which, from each node, there's the same probability of going to either branch. This obviously fixes the implementation of the probability method, which identically returns 1/2, but it also sets constraints on the underlying method. Equal probability to go along each branch means that the spine of the tree (that is, the center nodes of each level) is the expected place to be; in turn, this means that such nodes must be placed at the forward value of the underlying for the corresponding time (see the figure below for an illustration of the idea). The class defines an underlying method that implements the resulting formula for the logarithm of the underlying (you can tell that the whole thing was written with the Black-Scholes process in mind). A data member up_ is defined and used to hold the half-displacement from the forward value for each move away from the center, represented by $$\Delta x$$ in the figure; however, the constructor of the class does not give it a value, leaving this small freedom to derived classes.
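In formulas, writing $$\mu$$ for the drift per unit time, $$\Delta t$$ for the time step, and $$\Delta x$$ for the half-displacement stored in up_, the underlying method above computes

$$x_{i,j} = x_0 \exp\left( i\,\mu\,\Delta t + j\,\Delta x \right), \qquad j = 2\,\mathrm{index} - i \in \{-i, -i+2, \ldots, i\},$$

so that $$j$$ measures the distance from the spine; with probability 1/2 on each branch, the expected value of $$j$$ stays at zero on every level, which places the expected log-value on the drifted center nodes.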

The second template, EqualJumpsBinomialTree, is used for those trees in which going to either branch implies an equal amount of movement in opposite directions for the logarithm of the underlying (again, see the figure below); since one of the directions goes with the drift and the other against it, the probabilities are different. The template defines the underlying method accordingly, leaving to derived classes the task of filling the value of the dx_ data member; and it also provides an implementation for the probability method that returns one of two values (pu_ or pd_) which must also be specified by derived classes.
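The probabilities follow from matching the mean of the one-step increment of the log-variable: with an up-move of $$+\Delta x$$ and a down-move of $$-\Delta x$$,

$$p_u\,\Delta x - (1 - p_u)\,\Delta x = \mu\,\Delta t \quad\Longrightarrow\quad p_u = \frac{1}{2} + \frac{\mu\,\Delta t}{2\,\Delta x}, \qquad p_d = 1 - p_u,$$

which is exactly the expression you'll find in the CoxRossRubinstein constructor below.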

Finally, the three classes in the following listing are examples of actual binomial trees, and as such are no longer class templates. The first implements the Jarrow-Rudd tree; it inherits from EqualProbabilitiesBinomialTree (note the application of CRTP) and its constructor sets the up_ data member to the required value, namely, the standard deviation of the underlying distribution after one time step. The second implements the Cox-Ross-Rubinstein tree, inherits from EqualJumpsBinomialTree, and its constructor calculates the required probabilities as well as the displacement. The third one implements the Tian tree, and provides an example of a class that doesn't belong to either family; as such, it inherits from BinomialTree directly and provides all required methods.

    class JarrowRudd
        : public EqualProbabilitiesBinomialTree<JarrowRudd> {
      public:
        JarrowRudd(
            const boost::shared_ptr<StochasticProcess1D>& process,
            Time end, Size steps, Real strike)
        : EqualProbabilitiesBinomialTree<JarrowRudd>(process,
                                                     end, steps) {
            up_ = process->stdDeviation(0.0, x0_, dt_);
        }
    };

    class CoxRossRubinstein
        : public EqualJumpsBinomialTree<CoxRossRubinstein> {
      public:
        CoxRossRubinstein(
            const boost::shared_ptr<StochasticProcess1D>& process,
            Time end, Size steps, Real strike)
        : EqualJumpsBinomialTree<CoxRossRubinstein>(process,
                                                    end, steps) {
            dx_ = process->stdDeviation(0.0, x0_, dt_);
            pu_ = 0.5 + 0.5*driftPerStep_/dx_;
            pd_ = 1.0 - pu_;
        }
    };

    class Tian : public BinomialTree<Tian> {
      public:
        Tian(const boost::shared_ptr<StochasticProcess1D>& process,
             Time end, Size steps, Real strike)
        : BinomialTree<Tian>(process, end, steps) {
            // sets up_, down_, pu_, and pd_
        }
        Real underlying(Size i, Size index) const {
            return x0_ * std::pow(down_, Real(int(i)-int(index)))
                       * std::pow(up_, Real(index));
        }
        Real probability(Size, Size, Size branch) const {
            return (branch == 1 ? pu_ : pd_);
        }
      protected:
        Real up_, down_, pu_, pd_;
    };
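As a sanity check of the equal-jumps parameterization, here is a self-contained toy pricer (deliberately independent of QuantLib; the function and its structure are mine) that builds a tree with $$\Delta x = \sigma\sqrt{\Delta t}$$ and $$p_u = 1/2 + \mu\Delta t/2\Delta x$$ as above, and rolls back a European call; with enough steps the result approaches the Black-Scholes value.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy equal-jumps tree for a European call; standalone, so the
// process is hard-coded as Black-Scholes with constant r and sigma
// (the drift of the log-process is then r - sigma^2/2).
double crrCall(double s0, double strike, double r, double sigma,
               double maturity, std::size_t steps) {
    double dt = maturity / steps;
    double dx = sigma * std::sqrt(dt);              // cf. stdDeviation
    double drift = (r - 0.5 * sigma * sigma) * dt;  // cf. driftPerStep_
    double pu = 0.5 + 0.5 * drift / dx;
    double discount = std::exp(-r * dt);

    // payoffs at the last level; as in the text, j = 2*index - i
    std::vector<double> values(steps + 1);
    for (std::size_t index = 0; index <= steps; ++index) {
        int j = 2 * int(index) - int(steps);
        values[index] = std::max(s0 * std::exp(j * dx) - strike, 0.0);
    }
    // backward induction: the descendants of index are index, index+1
    for (std::size_t i = steps; i > 0; --i)
        for (std::size_t index = 0; index < i; ++index)
            values[index] = discount * ((1.0 - pu) * values[index]
                                        + pu * values[index + 1]);
    return values[0];
}
```

For instance, with $$S_0 = K = 100$$, $$\sigma = 0.2$$, $$r = 0$$ and one year to maturity, a thousand steps land within a few cents of the analytic value (about 7.97).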


#### Bibliography

[1] T. Veldhuizen, Techniques for Scientific C++. Indiana University Computer Science Technical Report TR542.

[2] J.O. Coplien, A Curiously Recurring Template Pattern. In S.B. Lippman, editor, C++ Gems. Cambridge University Press, 1996.

[3] E. Gamma, R. Helm, R. Johnson and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.

## Monday, March 23, 2015

### Chapter 7, part 2 of 6: examples of discretized assets

Hello everybody.

This week, the second part of a series on the QuantLib tree framework that started in the previous post. As usual, it's taken from my book.

In case you missed it: we set the dates for my next Introduction to QuantLib Development course, which will be in London from June 29th to July 1st. More info at this link. You can get an early-bird discount until April 30th.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

### Example: discretized bonds

Vanilla bonds (zero-coupon, fixed-rate and floating-rate) are simple enough to work as first examples, but at the same time provide a range of features large enough for me to make a few points.

Well, maybe not the zero-coupon bond. It's probably the simplest possible asset (bar one with no payoff) so it's not that interesting on its own; however, it's going to be useful as a helper class in the examples that follow, since it provides a way to estimate discount factors on the lattice.

Its implementation is shown in the following listing.

    class DiscretizedDiscountBond : public DiscretizedAsset {
      public:
        DiscretizedDiscountBond() {}
        void reset(Size size) {
            values_ = Array(size, 1.0);
        }
        std::vector<Time> mandatoryTimes() const {
            return std::vector<Time>();
        }
    };

It defines no mandatory times (because it will be happy to be initialized at any time you choose), it performs no adjustments (because nothing ever happens during its life), and its reset method simply fills each value in the array with one unit of currency. Thus, if you initialize an instance of this class at a time $$T$$ and roll it back on a lattice until an earlier time $$t$$, the array values will equal the discount factors from $$T$$ to $$t$$ as seen from the corresponding lattice nodes.
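A toy numeric check (my own standalone code, not QuantLib's): on a recombining binomial lattice with a flat short rate, so that every node applies the same one-step discount $$e^{-r\,\Delta t}$$, rolling an array of ones back from $$T$$ to $$0$$ must reproduce the discount factor $$e^{-rT}$$ at the root.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Roll an array of ones (the "reset" state of a discount bond at
// maturity) back through a flat-rate recombining binomial lattice.
double rolledBackDiscountFactor(double r, double maturity,
                                std::size_t steps) {
    double dt = maturity / steps;
    double discount = std::exp(-r * dt);
    std::vector<double> values(steps + 1, 1.0);  // "reset" at T
    for (std::size_t i = steps; i > 0; --i) {
        std::vector<double> previous(i);
        for (std::size_t j = 0; j < i; ++j)
            previous[j] = discount * (0.5 * values[j]
                                      + 0.5 * values[j + 1]);
        values = previous;
    }
    return values[0];   // equals exp(-r * maturity)
}
```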

Things start to get more interesting when we turn to fixed-rate bonds. QuantLib doesn't provide discretized fixed-rate bonds at this time; the listing below shows a simple implementation which works as an example, but should still be improved to enter the library.

    class DiscretizedFixedRateBond : public DiscretizedAsset {
      public:
        DiscretizedFixedRateBond(const vector<Time>& paymentTimes,
                                 const vector<Real>& coupons,
                                 Real redemption)
        : paymentTimes_(paymentTimes), coupons_(coupons),
          redemption_(redemption) {}
        vector<Time> mandatoryTimes() const {
            return paymentTimes_;
        }
        void reset(Size size) {
            values_ = Array(size, redemption_);
            adjustValues();
        }
      private:
        void postAdjustValuesImpl() {
            for (Size i=0; i<paymentTimes_.size(); i++) {
                Time t = paymentTimes_[i];
                if (t >= 0.0 && isOnTime(t))
                    addCoupon(i);
            }
        }
        void addCoupon(Size i) {
            values_ += coupons_[i];
        }
        vector<Time> paymentTimes_;
        vector<Real> coupons_;
        Real redemption_;
    };


The constructor takes and stores a vector of times holding the payment schedule, the vector of the coupon amounts, and the amount of the redemption. Note that the type of the arguments is different from what the corresponding instrument class is likely to store (say, a vector of CashFlow instances); this implies that the pricing engine will have to perform some conversion work before instantiating the discretized asset.

The presence of a payment schedule implies that, unlike the zero-coupon bond above, this bond cannot be instantiated at just any time; and in fact, this class implements the mandatoryTimes method by returning the vector of the payment times. Rollback on the lattice will have to stop at each such time, and initialization will have to be performed at the maturity time of the bond, i.e., the latest of the returned times. When one does so, the reset method will fill each value in the array with the redemption amount and then call the adjustValues method, which will take care of the final coupon.

In order to enable adjustValues to do so, this class overrides the virtual postAdjustValuesImpl method. (Why the post- version of the method and not the pre-, you say? Bear with me a bit longer: all will become clear.) The method loops over the payment times and checks, by means of the probably poorly named isOnTime method, whether any of them equals the current asset time. If this is the case, we add the corresponding coupon amount to the asset values. For readability, the actual work is factored out in the addCoupon method. The coupon amounts will be automatically discounted as the asset is rolled back on the lattice.
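The mechanics can be mimicked with a stripped-down deterministic example (a single node per level and a flat rate, so no tree is needed; names here are illustrative, not the library's): roll back from maturity, and whenever the current time is a payment time, add the coupon. Coupons added earlier are discounted by the later rollback steps, reproducing the usual sum of discounted cash flows.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// One-node "lattice" with flat continuously-compounded rate r.
double fixedRateBondValue(const std::vector<double>& paymentTimes,
                          const std::vector<double>& coupons,
                          double redemption, double r) {
    // "reset": initialize at maturity with the redemption amount
    double value = redemption;
    double t = paymentTimes.back();
    // roll back, stopping at each mandatory time; on a payment
    // time, adjust the value by adding the coupon (cf. addCoupon)
    for (std::size_t i = paymentTimes.size(); i-- > 0; ) {
        value *= std::exp(-r * (t - paymentTimes[i]));
        t = paymentTimes[i];
        value += coupons[i];
    }
    value *= std::exp(-r * t);   // final rollback to t = 0
    return value;
}
```

With two annual coupons of 5 and a redemption of 100 at a flat 5% rate, the result matches the analytic sum of discounted cash flows.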

Finally, let's turn to the floating-rate bond, shown in the listing that follows; this, too, is a simplified implementation.

    class DiscretizedFloatingRateBond : public DiscretizedAsset {
      public:
        DiscretizedFloatingRateBond(const vector<Time>& paymentTimes,
                                    const vector<Time>& fixingTimes,
                                    Real notional)
        : paymentTimes_(paymentTimes), fixingTimes_(fixingTimes),
          notional_(notional) {}
        vector<Time> mandatoryTimes() const {
            vector<Time> times = paymentTimes_;
            std::copy(fixingTimes_.begin(), fixingTimes_.end(),
                      back_inserter(times));
            return times;
        }
        void reset(Size size) {
            values_ = Array(size, notional_);
            adjustValues();
        }
      private:
        void preAdjustValuesImpl() {
            for (Size i=0; i<fixingTimes_.size(); i++) {
                Time t = fixingTimes_[i];
                if (t >= 0.0 && isOnTime(t))
                    addCoupon(i);
            }
        }
        void addCoupon(Size i) {
            DiscretizedDiscountBond bond;
            bond.initialize(method(), paymentTimes_[i]);
            bond.rollback(time_);
            for (Size j=0; j<values_.size(); j++) {
                values_[j] += notional_ * (1.0 - bond.values()[j]);
            }
        }
        vector<Time> paymentTimes_;
        vector<Time> fixingTimes_;
        Real notional_;
    };

The constructor takes and stores the vector of payment times, the vector of fixing times, and the notional of the bond; there are no coupon amounts, since they will be estimated during the calculation. For simplicity of implementation, we'll assume that the accrual time for the i-th coupon equals the time between its fixing time and its payment time.

The mandatoryTimes method returns the union of payment times and fixing times, since we'll need to work on both during the calculations. The reset method is similar to the one for fixed-rate bonds, and fills the array with the redemption value (which equals the notional of the bond) before calling adjustValues.

Adjustment is performed in the overridden preAdjustValuesImpl method. (Yes, the pre- version. Patience.) It loops over the fixing times, checks whether any of them equals the current time, and if so calls the addCoupon method.

Now, like the late Etta James in one of her hits, you'd be justified in shouting "Stop the wedding". Of course the coupon should be added at the payment date, right? Well, yes; but the problem is that we're going backwards in time. In general, at the payment date we don't have enough information to add the coupon; it can only be estimated based on the value of the rate at an earlier time that we haven't yet reached. Therefore, we have to keep rolling back on the lattice until we get to the fixing date, at which point we can calculate the coupon amount and add it to the bond. In this case, we'll have to take care ourselves of discounting from the payment date, since we passed that point already.

That's exactly what the addCoupon method does. First of all, it instantiates a discount bond at the payment time $$T$$ and rolls it back to the current time, i.e., the fixing date $$t$$, so that its value at the $$j$$-th node equals the discount factor $$D_j$$ between $$t$$ and $$T$$. From those, we could estimate the floating rates $$r_j$$ (since it must hold that $$1 + r_j(T-t) = 1/D_j$$) and then the coupon amounts; but with a bit of algebra, we can find a simpler calculation. The coupon amount $$C_j$$ is given by $$N r_j(T-t)$$, with $$N$$ being the notional; and since the relation above tells us that $$r_j(T-t) = 1/D_j - 1$$, we can substitute it to find that $$C_j = N(1/D_j - 1)$$. Now, remember that we already rolled back to the fixing date, so if we add the amount here we also have to discount it ourselves, because it won't be rolled back from the payment date. This means that we'll have to multiply it by $$D_j$$, and thus the amount to be added to the $$j$$-th value in the array is simply $$C_j D_j = N(1/D_j - 1)D_j = N(1-D_j)$$. The final expression is the one that can be seen in the implementation of addCoupon.
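Collecting the algebra in one line:

$$1 + r_j(T-t) = \frac{1}{D_j} \quad\Longrightarrow\quad C_j = N\,r_j(T-t) = N\left(\frac{1}{D_j}-1\right) \quad\Longrightarrow\quad C_j\,D_j = N\,(1 - D_j).$$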

As you probably noted, the above hinges on the assumption that the accrual time equals the time between payment and fixing time. If this were not the case, the calculation would no longer simplify and we'd have to change the implementation; for instance, we might instantiate a first discount bond at the accrual end date to estimate the floating rate and the coupon amount, and a second one at the payment date to calculate the discount factors to be used when adding the coupon amount to the bond value. Of course, the increased accuracy would cause the performance to degrade since addCoupon would roll back two bonds, instead of one. You can choose either implementation based on your requirements.

### Example: discretized option

Sorry to have kept you waiting, folks. Here is where I finally explain the pre- vs post-adjustment choice, after the previous example helped me put my ducks in a row. I'll do so by showing an example of an asset class (the DiscretizedOption class, shown in the listing below) that can be used to wrap an underlying asset and obtain an option to enter the same: for instance, it could take a swap and yield a swaption. The implementation shown here is a slightly simplified version of the one provided by QuantLib, since it assumes a Bermudan exercise (or European, if one passes a single exercise time). Like the implementation in the library, it also assumes that there's no premium to pay in order to enter the underlying deal.

    class DiscretizedOption : public DiscretizedAsset {
      public:
        DiscretizedOption(
            const shared_ptr<DiscretizedAsset>& underlying,
            const vector<Time>& exerciseTimes)
        : underlying_(underlying), exerciseTimes_(exerciseTimes) {}
        vector<Time> mandatoryTimes() const {
            vector<Time> times = underlying_->mandatoryTimes();
            for (Size i=0; i<exerciseTimes_.size(); ++i)
                if (exerciseTimes_[i] >= 0.0)
                    times.push_back(exerciseTimes_[i]);
            return times;
        }
        void reset(Size size) {
            QL_REQUIRE(method() == underlying_->method(),
                       "option and underlying were initialized on "
                       "different lattices");
            values_ = Array(size, 0.0);
            adjustValues();
        }
      private:
        void postAdjustValuesImpl() {
            underlying_->partialRollback(time());
            underlying_->preAdjustValues();
            for (Size i=0; i<exerciseTimes_.size(); i++) {
                Time t = exerciseTimes_[i];
                if (t >= 0.0 && isOnTime(t))
                    applyExerciseCondition();
            }
            underlying_->postAdjustValues();
        }
        void applyExerciseCondition() {
            for (Size i=0; i<values_.size(); i++)
                values_[i] = std::max(underlying_->values()[i],
                                      values_[i]);
        }
        shared_ptr<DiscretizedAsset> underlying_;
        vector<Time> exerciseTimes_;
    };

Onwards. The constructor takes and, as usual, stores the underlying asset and the exercise times; nothing much to write about.

The mandatoryTimes method takes the vector of times required by the underlying and adds the option's exercise times. Of course, this is done so that both the underlying and the option can be priced on the same lattice; the sequence of operations to get the option price will be something like:

    underlying = shared_ptr<DiscretizedAsset>(...);
    option = shared_ptr<DiscretizedAsset>(
        new DiscretizedOption(underlying, exerciseTimes));
    grid = TimeGrid(..., option->mandatoryTimes());
    lattice = shared_ptr<Lattice>(new SomeLattice(..., grid, ...));
    underlying->initialize(lattice, T1);
    option->initialize(lattice, T2);
    option->rollback(t0);
    NPV = option->presentValue();
in which, first, we instantiate both underlying and option and retrieve the mandatory times from the latter; then, we create the lattice and initialize both assets (usually at different times, e.g., the maturity date for a swap and the latest exercise date for the corresponding swaption); and finally, we roll back the option and get its price. As we'll see in a minute, the option also takes care of rolling the underlying back as needed.

Back to the class implementation. The reset method performs the sanity check that underlying and option were initialized on the same lattice, fills the values with zeroes (what you end up with if you don't exercise), and then calls adjustValues to take care of a possible exercise.

Which brings us to the crux of the matter, i.e., the postAdjustValuesImpl method. The idea is simple enough: if we're on an exercise time, we check whether keeping the option is worth more than entering the underlying asset. To do so, we roll the underlying asset back to the current time, compare values at each node, and set the option value to the maximum of the two; this latest part is abstracted out in the applyExerciseCondition method.

The tricky part of the problem is that the underlying might need to perform an adjustment of its own when rolled back to the current time. Should this be done before or after the option looks at the underlying values?

It depends on the particular adjustment. Let's look at the bonds in the previous example. If the underlying is a discretized fixed-rate bond, and if the current time is one of its payment times, it needs to adjust its values by adding a coupon. This coupon, though, is being paid now and thus is no longer part of the asset if we exercise and enter it. Therefore, the decision to exercise must be based on the bond value without the coupon; i.e., we must call the applyExerciseCondition method before adjusting the underlying.

The discretized floating-rate bond is another story. It adjusts the values if the current time is one of its fixing times; but in this case the corresponding coupon is just starting and will be paid at the end of the period, and so must be added to the bond value before we decide about exercise. Thus, the conclusion is the opposite: we must call applyExerciseCondition after adjusting the underlying.
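The two behaviors might be sketched as follows (hypothetical, simplified types; the library's actual discretized bonds carry quite a bit more state):

```cpp
#include <vector>

// A fixed-rate coupon is added on its payment date, after any exercise
// decision made at the same time; a floating-rate coupon is added on its
// fixing date, before it.
struct FixedBondSketch {
    std::vector<double> values;
    double coupon;
    bool onPaymentTime;
    void preAdjustValuesImpl() {}   // nothing to do before the exercise check
    void postAdjustValuesImpl() {   // the coupon is paid now, so it's not
        if (onPaymentTime)          // part of the bond we'd exercise into
            for (double& v : values) v += coupon;
    }
};

struct FloatingBondSketch {
    std::vector<double> values;
    double coupon;                  // coupon fixing now, paid at period end
    bool onFixingTime;
    void preAdjustValuesImpl() {    // the coupon belongs to whoever enters
        if (onFixingTime)           // the bond now, so add it beforehand
            for (double& v : values) v += coupon;
    }
    void postAdjustValuesImpl() {}
};
```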

What should the option do? It can't distinguish between the two cases, since it doesn't know the specific behavior of the asset it was passed; therefore, it lets the underlying itself sort it out. First, it rolls the underlying back to the current time, but without performing the final adjustment (that's what the partialRollback method does); instead, it calls the underlying's preAdjustValues method. Then, if we're on an exercise time, it performs its own adjustment; and finally, it calls the underlying's postAdjustValues method.
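The whole sequence might look like the following standalone sketch (again with simplified stand-ins for the option and its underlying, not the actual library code):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct UnderlyingSketch {
    std::vector<double> values;
    void partialRollbackTo(double) {} // roll back without the final adjustment
    void preAdjustValues() {}         // e.g., a floating coupon is added here
    void postAdjustValues() {}        // e.g., a fixed coupon is added here
};

struct OptionSketch {
    UnderlyingSketch* underlying;
    std::vector<double> values;
    double time;
    bool onExerciseTime;

    // keep, at each node, the larger of the continuation value
    // and the value of entering the underlying
    void applyExerciseCondition() {
        for (std::size_t i = 0; i < values.size(); ++i)
            values[i] = std::max(underlying->values[i], values[i]);
    }

    void postAdjustValuesImpl() {
        underlying->partialRollbackTo(time); // bring the underlying here...
        underlying->preAdjustValues();       // ...let it pre-adjust...
        if (onExerciseTime)
            applyExerciseCondition();        // ...decide about exercise...
        underlying->postAdjustValues();      // ...and let it post-adjust
    }
};
```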

This is the reason the DiscretizedAsset class has both a preAdjustValues and a postAdjustValues method; they're there so that, in case of asset composition, the underlying can choose on which side of the fence to be when some other adjustment (such as an exercise) is performed at the same time. In the case of our previous example, the fixed-rate bond will add its coupon in postAdjustValues and have it excluded from the future bond value, while the floating-rate bond will add its coupon in preAdjustValues and have it included.

Unfortunately, this solution is not very robust. For instance, if the exercise dates were a week or two before the coupon dates (as is often the case) the option would break for fixed-rate coupons, since it would have no way to stop them from being added before the adjustment. The problem can be solved: in the library, this is done for discretized interest-rate swaps by adding fixed-rate coupons on their start dates, much in the same way as floating-rate coupons. Another way to fix the issue would be to roll the underlying back only to the date when it's actually entered, then to make a copy of it and roll the copy back to the exercise date without performing any adjustment. Both solutions are somewhat clumsy at this time; it would be better if QuantLib provided some means to do it more naturally.

## Monday, March 16, 2015

### Chapter 7, part 1 of 6: the tree framework

Welcome back.

This week, some content from my book (which, as I announced last week, is now available as an ebook from Leanpub; thanks to all those who bought it so far). This post is the first part of a series that will describe the QuantLib tree framework.

But first, some news: with the help and organization of the good Jacob Bettany from MoneyScience, I'll hold a new edition of my Introduction to QuantLib Development course. It's an intensive three-day course in which I explain the architecture of the library and guide attendees through exercises that let them build new financial instruments based on the frameworks I describe. It will be in London from June 29th to July 1st, and you can find more info and a brochure at this link. An early-bird discount is available until April 30th.

And in case you can't make it to London, the March sale at QuantsHub continues, so you might consider the workshop I recorded for them instead. (Oh, and the intro is available on YouTube now.)

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

### The tree framework

Together with Monte Carlo simulations, trees are among the most commonly used tools in quantitative finance. As usual, the dual challenge for a framework is to implement a number of reusable (and possibly composable) pieces and to provide customization hooks for injecting new behavior. The QuantLib tree framework has gone through a few revisions, and the current version is a combination of object-oriented and generic programming that does the job without losing too much performance in the process.

#### The Lattice and DiscretizedAsset classes

The two main classes of the framework are the Lattice and DiscretizedAsset classes. Our intention was that the Lattice class (shown in the listing below) would model the generic concept of a discrete lattice, which might be a tree as well as a finite-difference grid. This never happened; the finite-difference framework went its separate way and is unlikely to come back any time soon. However, the initial design helped keep the Lattice class clean: to this day, it contains almost no implementation details and is not tied to trees.
    class Lattice {
      public:
        Lattice(const TimeGrid& timeGrid) : t_(timeGrid) {}
        virtual ~Lattice() {}

        const TimeGrid& timeGrid() const { return t_; }

        virtual void initialize(DiscretizedAsset&,
                                Time time) const = 0;
        virtual void rollback(DiscretizedAsset&,
                              Time to) const = 0;
        virtual void partialRollback(DiscretizedAsset&,
                                     Time to) const = 0;
        virtual Real presentValue(DiscretizedAsset&) const = 0;

        virtual Disposable<Array> grid(Time) const = 0;
      protected:
        TimeGrid t_;
    };

Its constructor takes a TimeGrid instance and stores it (its only concession to implementation inheritance, together with an inspector that returns the time grid). All other methods are pure virtual. The initialize method must set up a discretized asset so that it can be put on the lattice at a given time. (I do realize this is mostly hand-waving right now. It will become clear as soon as we get to a concrete lattice.) The rollback and partialRollback methods roll the asset backwards in time on the lattice down to the desired time (with a difference I'll explain later); and the presentValue method returns what its name says.

Finally, the grid method returns the values of the discretized quantity underlying the lattice. This is a bit of a smell. The information was required in other parts of the library, and we didn't have any better solution. However, this method has obvious shortcomings. On the one hand, it constrains the return type, which either leaks implementation or forces a type conversion; and on the other hand, it simply makes no sense when the lattice has more than one factor, since the grid should be a matrix or a cube in that case. In fact, two-factor lattices implement it by having it throw an exception. All in all, this method is up for some serious improvement in future versions of the library.

The DiscretizedAsset class is the base class for the other side of the tree framework—the Costello to Lattice's Abbott, as it were. It models an asset that can be priced on a lattice: it works hand in hand with the Lattice class to provide generic behavior, and has hooks that derived classes can use to add behavior specific to the instrument they implement.

As can be seen from the listing below, it's not nearly as abstract as the Lattice class. Most of its methods are concrete, with a few virtual ones that use the Template Method pattern to inject behavior.
    class DiscretizedAsset {
      public:
        DiscretizedAsset()
        : latestPreAdjustment_(QL_MAX_REAL),
          latestPostAdjustment_(QL_MAX_REAL) {}
        virtual ~DiscretizedAsset() {}

        Time time() const { return time_; }
        Time& time() { return time_; }
        const Array& values() const { return values_; }
        Array& values() { return values_; }
        const shared_ptr<Lattice>& method() const {
            return method_;
        }

        void initialize(const shared_ptr<Lattice>& method,
                        Time t) {
            method_ = method;
            method_->initialize(*this, t);
        }
        void rollback(Time to) {
            method_->rollback(*this, to);
        }
        void partialRollback(Time to) {
            method_->partialRollback(*this, to);
        }
        Real presentValue() {
            return method_->presentValue(*this);
        }

        virtual void reset(Size size) = 0;
        void preAdjustValues();
        void postAdjustValues();
        virtual void adjustValues() {
            preAdjustValues();
            postAdjustValues();
        }

        virtual std::vector<Time> mandatoryTimes() const = 0;

      protected:
        bool isOnTime(Time t) const {
            const TimeGrid& grid = method()->timeGrid();
            return close_enough(grid[grid.index(t)], time());
        }
        virtual void preAdjustValuesImpl() {}
        virtual void postAdjustValuesImpl() {}

        Time time_;
        Real latestPreAdjustment_, latestPostAdjustment_;
        Array values_;
      private:
        shared_ptr<Lattice> method_;
    };

Its constructor takes no argument, but initializes a couple of internal variables. The main inspectors return the data comprising its state, namely, the time $$t$$ of the lattice nodes currently occupied by the asset and its values on the same nodes; both inspectors give both read and write access to the data to allow the lattice implementation to modify them. A read-only inspector returns the lattice on which the asset is being priced.

The next bunch of methods implements the common behavior that is inherited by derived classes and provides the interface to be called by client code. The body of a tree-based engine will usually contain something like the following after instantiating the tree and the discretized asset:
    asset.initialize(lattice, T);
    asset.rollback(t0);
    results_.value = asset.presentValue();

The initialize method stores the lattice that will be used for pricing and sets the initial values of the asset (or rather its final values, since the time T passed at initialization is the maturity time); the rollback method rolls the asset backwards on the lattice until the time t0; and the presentValue method extracts the value of the asset as a single number.

The three method calls above seem simple, but their implementation triggers a good deal of behavior in both the asset and the lattice and involves most of the other methods of DiscretizedAsset as well as those of Lattice. The interplay between the two classes is not nearly as funny as Who's on First, but it's almost as complex to follow; thus, you might want to refer to the sequence diagram shown below.

The initialize method sets the calculation up by placing the asset on the lattice at its maturity time (the asset's, I mean) and preparing it for rollback. This means that on the one hand, the vector holding the asset values on each lattice node must be dimensioned correctly; and on the other hand, that it must be filled with the correct values. Like most of the DiscretizedAsset methods, initialize does this by delegating part of the actual work to the passed lattice; after storing it in the corresponding data member, it simply calls the lattice's initialize method passing the maturity time and the asset itself.

Now, the Lattice class doesn't implement initialize, which is left as purely virtual; but any sensible implementation in derived classes will do the dance shown in the sequence diagram. It might perform some housekeeping step of its own, not shown here; but first, it will determine the size of the lattice at the maturity time (that is, the number of nodes) probably by calling a corresponding method; then, it will store the time into the asset (as you remember, the asset provides read/write access to its state) and pass the size to the asset's reset method. The latter, implemented in derived classes, will resize the vector of values accordingly and fill it with instrument-specific values (for instance, a bond might store its redemption and the final coupon, if any).
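A concrete lattice's side of the dance might look like the following standalone sketch (the names and the binomial node count are assumptions for illustration, not the library's interfaces):

```cpp
#include <cstddef>
#include <vector>

struct AssetSketch {
    double time;
    std::vector<double> values;
    // reset is instrument-specific; here, a bond-like payoff that
    // stores its redemption at every node
    void reset(std::size_t size) { values.assign(size, 100.0); }
};

struct LatticeSketch {
    std::vector<double> times;                // the time grid
    // number of nodes at level i; a recombining binomial tree has i+1
    std::size_t size(std::size_t i) const { return i + 1; }
    std::size_t index(double t) const {
        std::size_t i = 0;
        while (times[i] < t) ++i;
        return i;
    }
    void initialize(AssetSketch& asset, double t) const {
        std::size_t i = index(t);  // locate the level for time t
        asset.time = t;            // store the time into the asset...
        asset.reset(size(i));      // ...and let it size and fill its values
    }
};
```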

Next is the rollback method. Again, it calls the corresponding method in the Lattice class, which performs the heavy-machinery work of stepping through the tree (or whatever kind of lattice it models). It reads the current time from the asset, so it knows on what nodes it is currently sitting; then, a close interplay begins.

The point is, the lattice can't simply roll the asset back to the desired time, since there might be all kinds of events occurring: coupon payments, exercises, you name it. Therefore, it just rolls it back one short step, modifies the asset values accordingly—which includes both combining nodes and discounting—and then pauses to ask the asset if there's anything that needs to be done; that is, to call the asset's adjustValues method.

The adjustment is done in two steps, calling first the preAdjustValues and then the postAdjustValues method. This is done so that other assets have a chance to perform their own adjustment between the two; I'll show an example of this in a later post. Each of the two methods performs a bit of housekeeping (namely, they store the time of the latest adjustment so that the asset is not adjusted twice at the same time; this might happen when composing assets, and would obviously wreak havoc on the results) and then calls a virtual method (preAdjustValuesImpl or postAdjustValuesImpl, respectively) which makes the instrument-specific adjustments.
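The housekeeping guard can be sketched in isolation (simplified names of my own; the idea is just a marker recording the time of the latest adjustment):

```cpp
#include <cmath>
#include <limits>

struct GuardedAssetSketch {
    double time = 0.0;
    // "never adjusted yet"
    double latestPreAdjustment = std::numeric_limits<double>::max();
    int implCalls = 0;

    void preAdjustValuesImpl() { ++implCalls; }  // instrument-specific hook

    void preAdjustValues() {
        if (std::fabs(time - latestPreAdjustment) > 1e-12) {
            preAdjustValuesImpl();
            latestPreAdjustment = time;  // so that composing assets can't
        }                                // trigger a second adjustment now
    }
};
```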

When this is done, the ball is back in the lattice's court. The lattice rolls back another short step, and the whole thing repeats again and again until the asset reaches the required time.
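The loop can be sketched on a bare-bones recombining binomial lattice (flat probabilities and no discounting for brevity; in this simplified version the adjustment is performed at every step, and the names are assumptions rather than the library's):

```cpp
#include <cstddef>
#include <vector>

struct RollbackAssetSketch {
    double time;
    std::vector<double> values;
    int adjustments = 0;
    void adjustValues() { ++adjustments; }  // instrument-specific hook
};

struct BinomialLatticeSketch {
    std::vector<double> times;   // level i corresponds to times[i]
    double p = 0.5;              // up probability
    double discount = 1.0;       // one-step discount factor

    std::size_t index(double t) const {
        std::size_t i = 0;
        while (times[i] < t) ++i;
        return i;
    }

    // combine each pair of child nodes into their parent, discounting
    void stepback(const std::vector<double>& values,
                  std::vector<double>& newValues) const {
        for (std::size_t j = 0; j < newValues.size(); ++j)
            newValues[j] = discount * (p * values[j+1] + (1.0-p) * values[j]);
    }

    void rollback(RollbackAssetSketch& asset, double to) const {
        for (std::size_t i = index(asset.time); i > index(to); --i) {
            std::vector<double> newValues(i);  // level i-1 has i nodes
            stepback(asset.values, newValues);
            asset.time = times[i-1];
            asset.values = newValues;
            asset.adjustValues();  // pause and ask the asset
        }
    }
};
```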

Finally, the presentValue method returns, well, the present value of the asset. It is meant to be called by client code after rolling the asset back to the earliest interesting time—that is, to the time for which further rolling back to today's date only involves discounting and not adjustments of any kind: e.g., it might be the earliest exercise time of an option or the first coupon time of a bond. As usual, the asset delegates the calculation by passing itself to the lattice's presentValue method; a given lattice implementation might simply roll the asset back to the present time (if it isn't already there, of course) and read its value from the root node, whereas another lattice might be able to take a shortcut and calculate the present value as a function of the values on the nodes at the current time.
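On a tree, one such shortcut is to combine the values at the current nodes with the corresponding state (Arrow-Debreu) prices; a standalone sketch:

```cpp
#include <numeric>
#include <vector>

// present value as the dot product of the asset values with the
// state prices at the asset's current level
double presentValueSketch(const std::vector<double>& assetValues,
                          const std::vector<double>& statePrices) {
    return std::inner_product(assetValues.begin(), assetValues.end(),
                              statePrices.begin(), 0.0);
}
```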

As usual, there's room for improvement. Some of the steps are left as an implicit requirement; for instance, the fact that the lattice's initialize method should call the asset's reset method, or that reset must resize the array stored in the asset. (In hindsight, the array should be resized in the lattice's initialize method before calling reset; this would minimize repetition, since we'll write many more assets than lattices.) However, this is what we have at this time. In the next post, we'll move on and build some assets.

## Monday, March 9, 2015

### Implementing QuantLib available from Leanpub

Hello everybody.

Big news: Implementing QuantLib is available on Leanpub.

For those of you that are not familiar with Leanpub, it is a publishing platform for ebooks that specializes in lean publishing; that is, publishing books which are still in progress and can thus benefit from reader feedback while they're still being completed. When you buy a book in progress, you get the current version now and you'll get all further updates for free.

Long story short: if you are interested in an Epub or Mobi version of my book, go get it (you can pay as much as you want). The drafts of the PDF version remain available for free on this site.

Of course, I'll be very grateful for all feedback (that's the whole point...) I'm particularly curious to know if the ebook looks ok on different devices, and if code listings wrap correctly—meaning that they shouldn't wrap; if they do, try decreasing the font size.

All in all, I think this will make for an interesting experiment. If it goes well, The QuantLib Notebooks will be next.

Ok, that's all for this post. I'll be back soon with some news on my next course. Oh, and the March sale at QuantsHub continues.


## Monday, March 2, 2015

### QuantLib notebook: numerical Greeks calculation

Hello everybody.

Here is the screencast of a new QuantLib notebook (in case you're new to this, the whole series is on both YouTube and Vimeo; choose the one that suits you best). In this one, I show how quotes can be used for the numerical calculation of Greeks.

And so that I don't waste a good segue into self-promotion, this was also one of the notebooks that I used in the workshop I recorded for QuantsHub. I hear there's a March sale going on, so this might be a good time to have a look.

That's all for this week. Turn the video below to full screen, sit back and enjoy.
