Monday, April 27, 2015

Chapter 7, part 6 of 6: an example of tree-based engine

Hello everybody.

A bit of trivia: one week ago, on April 20th, the QuantLib repository was forked for the 400th time (and they keep coming: we're up to 407 already as I write this post). Kudos to GitHub user donglijiujiu, who doesn't win any prize apart from this shout-out.

This week: the final part of the series on trees that started a few weeks ago.

And did I mention that the recording of my Quants Hub workshop was discounted to £99? And that you're still in time for an early-bird discount for my next course?

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Tree-based engines

As you might have guessed, a tree-based pricing engine will perform few actual computations; its main job will rather be to instantiate and drive the needed discretized asset and lattice. (If you're pattern-minded, you can have your pick here. This implementation has suggestions of the Adapter, Mediator, or Facade pattern, even though it doesn't match any of them exactly.)

Example: callable fixed-rate bonds

As an example, I'll sketch the implementation of a tree-based pricing engine for callable fixed-rate bonds. For the sake of brevity, I'll skip the description of the CallableBond class. (To be specific, the class name should be CallableFixedRateBond; but that would get old very quickly here, so please allow me to use the shorter name.) Instead, I'll just show its inner arguments and results classes, which act as its interface with the pricing engine and which you can see in the listing below together with the corresponding engine class. If you're interested in a complete implementation, you can look for it in QuantLib's experimental folder.
    class CallableBond::arguments : public PricingEngine::arguments {
      public:
        std::vector<Date> couponDates;
        std::vector<Real> couponAmounts;
        Date redemptionDate;
        Real redemptionAmount;
        std::vector<Callability::Type> callabilityTypes;
        std::vector<Date> callabilityDates;
        std::vector<Real> callabilityPrices;
        void validate() const;
    };

    class CallableBond::results : public Instrument::results {
      public:
        Real settlementValue;
    };

    class CallableBond::engine
        : public GenericEngine<CallableBond::arguments,
                               CallableBond::results> {};
Now, let's move into engine territory. In order to implement the behavior of the instrument, we'll need a discretized asset; namely, the DiscretizedCallableBond class, shown in the next listing.
    class DiscretizedCallableBond : public DiscretizedAsset {
      public:
        DiscretizedCallableBond(const CallableBond::arguments& args,
                                const Date& referenceDate,
                                const DayCounter& dayCounter)
        : arguments_(args) {
            redemptionTime_ =
                dayCounter.yearFraction(referenceDate,
                                        args.redemptionDate);

            couponTimes_.resize(args.couponDates.size());
            for (Size i=0; i<couponTimes_.size(); ++i)
                couponTimes_[i] =
                    dayCounter.yearFraction(referenceDate,
                                            args.couponDates[i]);
            // same for callability times
        }
        std::vector<Time> mandatoryTimes() const {
            std::vector<Time> times;

            Time t = redemptionTime_;
            if (t >= 0.0)
                times.push_back(t);
            // also add non-negative coupon times and callability times

            return times;
        }
        void reset(Size size) {
            values_ = Array(size, arguments_.redemptionAmount);
            adjustValues();
        }
      protected:
        void preAdjustValuesImpl();
        void postAdjustValuesImpl();
      private:
        CallableBond::arguments arguments_;
        Time redemptionTime_;
        std::vector<Time> couponTimes_;
        std::vector<Time> callabilityTimes_;
        void applyCallability(Size i);
        void addCoupon(Size i);
    };
To prevent much aggravation, its constructor takes and stores an instance of the arguments class. This avoids having to spell out the list of needed data in at least three places: the declaration of the data members, the constructor, and the client code that instantiates the discretized asset. Besides the arguments instance, the constructor is also passed a reference date and a day counter, which are used in its body to convert the various bond dates into the corresponding times. (The conversion is somewhat verbose, which suggests that we might be missing an abstraction here; however, "time converter" sounds a bit too vague. If you find the right one, please do let me know. The thing has been bugging me for a while.)
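If we ever find that abstraction, it might look something like the sketch below. Everything here is made up for illustration (the TimeFromReference name, the serial-number dates, the hard-coded Actual/365-like convention); it's not QuantLib code.

```cpp
#include <vector>

// A minimal sketch of the missing "time converter" abstraction mentioned
// above; all names are hypothetical, and dates are simplified to plain
// serial day numbers with an Actual/365-style convention instead of the
// QuantLib Date and DayCounter classes.
struct TimeFromReference {
    long referenceSerial;   // serial number of the reference date

    // convert a single date to a time in years
    double operator()(long dateSerial) const {
        return (dateSerial - referenceSerial) / 365.0;
    }

    // convert a whole schedule of dates at once, which is where the
    // verbosity in the constructor body would go away
    std::vector<double> operator()(const std::vector<long>& serials) const {
        std::vector<double> times;
        times.reserve(serials.size());
        for (long s : serials)
            times.push_back((*this)(s));
        return times;
    }
};
```

With something like this, the constructor body would shrink to a couple of calls, one per schedule of dates.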

Next comes the required DiscretizedAsset interface. The mandatoryTimes method collects the redemption time, the coupon times, and the callability times, filtering out the negative ones; the reset method resizes the array of values, sets each element to the redemption amount, and proceeds to perform the needed adjustments, which are the more interesting part of the class.

Since this class is rather specialized, it is pretty unlikely to be composed with others; therefore, it doesn't really matter in this case whether the adjustments go into preAdjustValuesImpl or postAdjustValuesImpl. However, for the sake of illustration, I'll separate the callability from the coupon payments and manage them as pre- and post-adjustments, respectively.
    void DiscretizedCallableBond::preAdjustValuesImpl() {
        for (Size i=0; i<callabilityTimes_.size(); i++) {
            Time t = callabilityTimes_[i];
            if (t >= 0.0 && isOnTime(t)) {
                applyCallability(i);
            }
        }
    }

    void DiscretizedCallableBond::postAdjustValuesImpl() {
        for (Size i=0; i<couponTimes_.size(); i++) {
            Time t = couponTimes_[i];
            if (t >= 0.0 && isOnTime(t)) {
                addCoupon(i);
            }
        }
    }
The preAdjustValuesImpl method loops over the callability times, checks whether any of them equals the current time, and calls the applyCallability method if this is the case. The postAdjustValuesImpl method does the same, but checks the coupon times and calls the addCoupon method instead.
    void DiscretizedCallableBond::applyCallability(Size i) {
        switch (arguments_.callabilityTypes[i]) {
          case Callability::Call:
            for (Size j=0; j<values_.size(); j++) {
                values_[j] =
                    std::min(arguments_.callabilityPrices[i],
                             values_[j]);
            }
            break;
          case Callability::Put:
            for (Size j=0; j<values_.size(); j++) {
                values_[j] =
                    std::max(arguments_.callabilityPrices[i],
                             values_[j]);
            }
            break;
          default:
            QL_FAIL("unknown callability type");
        }
    }

    void DiscretizedCallableBond::addCoupon(Size i) {
        values_ += arguments_.couponAmounts[i];
    }
The applyCallability method is passed the index of the callability being exercised; it checks its type (both callable and puttable bonds are supported) and sets the value at each node to the value after exercise. The logic is simple enough: at each node, given the estimated value of the rest of the bond (that is, the current asset value) and the exercise premium, the issuer will choose the lesser of the two values while the holder will choose the greater. The addCoupon method is much simpler, and just adds the coupon amount to each of the values.

As you might have noticed, this class assumes that the exercise dates coincide with the coupon dates; it won't work if an exercise date falls a few days before a coupon payment (the coupon amount would be added to the asset values before the exercise condition is checked). Of course, this case does occur in practice, and it should be accounted for. Currently, the library implementation sidesteps the problem by adjusting each exercise date so that it equals the nearest coupon date. A better choice would be to detect which coupons are affected; each of them would be put into a separate asset, rolled back until the relevant exercise time, and added after the callability adjustment.

Finally, the listing below shows the TreeCallableBondEngine class.
    class TreeCallableBondEngine : public CallableBond::engine {
      public:
        TreeCallableBondEngine(
                       const Handle<ShortRateModel>& model,
                       const Size timeSteps,
                       const Date& referenceDate = Date(),
                       const DayCounter& dayCounter = DayCounter())
        : model_(model), timeSteps_(timeSteps),
          referenceDate_(referenceDate), dayCounter_(dayCounter) {
            registerWith(model_);
        }
        void calculate() const {
            Date referenceDate;
            DayCounter dayCounter;

            // try to extract the reference date and the day counter
            // from the model, use the stored ones otherwise.

            DiscretizedCallableBond bond(arguments_,
                                         referenceDate,
                                         dayCounter);

            std::vector<Time> times = bond.mandatoryTimes();
            TimeGrid grid(times.begin(), times.end(), timeSteps_);
            boost::shared_ptr<Lattice> lattice = model_->tree(grid);

            Time redemptionTime =
                dayCounter.yearFraction(referenceDate,
                                        arguments_.redemptionDate);
            bond.initialize(lattice, redemptionTime);
            bond.rollback(0.0);
            results_.value = bond.presentValue();
        }
      private:
        Handle<ShortRateModel> model_;
        Size timeSteps_;
        Date referenceDate_;
        DayCounter dayCounter_;
    };
Its constructor takes and stores a handle to a short-rate model that will provide the lattice, the total number of time steps we want the lattice to have, and an optional reference date and day counter; its body just registers the engine with the handle.

The calculate method is where everything happens. By the time it is called, the engine arguments have been filled by the instrument, so that base is covered; the other data we need are a date and a day counter for time conversion. Not all short-rate models can provide them, so, in a boring few lines of code not shown here, the engine tries to downcast the model to some specific class that does; if it fails, it falls back to using the ones optionally passed to the constructor.
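The few lines I'm not showing boil down to a dynamic cast with a fallback. Here is a self-contained sketch of the pattern, with simplified stand-in classes rather than the real QuantLib ones (in particular, the date is reduced to a serial number):

```cpp
#include <memory>

// Simplified stand-ins for the model classes: only some models can
// provide a reference date for time conversion.
struct ShortRateModel {
    virtual ~ShortRateModel() {}
};
struct TermStructureConsistentModel : ShortRateModel {
    long referenceSerial;   // the date the model can provide
    explicit TermStructureConsistentModel(long s) : referenceSerial(s) {}
};

// The engine tries to extract the reference date from the model; if the
// downcast fails, it falls back to the one stored at construction.
long referenceSerial(const std::shared_ptr<ShortRateModel>& model,
                     long storedSerial) {
    std::shared_ptr<TermStructureConsistentModel> specific =
        std::dynamic_pointer_cast<TermStructureConsistentModel>(model);
    return specific ? specific->referenceSerial : storedSerial;
}
```

The day counter is handled the same way, with the same downcast providing both pieces of information.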

At that point, the actual calculations can begin. The engine instantiates the discretized bond, asks it for its mandatory times, and uses them to build a time grid; then, the grid is passed to the model which returns a corresponding lattice based on the short-rate dynamics. All that remains is to initialize the bond at its redemption time (which in the current code is recalculated explicitly, but could be retrieved as the largest of the mandatory times), roll it back to the present time, and read its value.

Monday, April 20, 2015

Chapter 7, part 5 of 6: tree-based lattices

Welcome back.

This week's content is the fifth part of the series on the QuantLib tree framework which started in this post.

A bit of self promotion: Quants Hub has modified its pricing structure so that all workshops are now sold at £99. This includes my A Look at QuantLib Usage and Development workshop, which I hope is now affordable for a lot more people. It's six hours of videos, so it should be a decent value for that money. (But if you want to see me in person instead, come to London from June 29th to July 1st for my Introduction to QuantLib Development course. Details are at this link. You're still in time for an early-bird discount.)

The TreeLattice class template

After the last couple of posts on trees, we're now back to lattices. The TreeLattice class, shown in the listing below, inherits from Lattice and is used as a base class for lattices that are implemented in terms of one or more trees. Of course, it should be called TreeBasedLattice instead; the shorter name was probably chosen before we discovered the marvels of automatic completion—or English grammar.
    template <class Impl>
    class TreeLattice : public Lattice,
                        public CuriouslyRecurringTemplate<Impl> {
     public:
       TreeLattice(const TimeGrid& timeGrid, Size n);
       void initialize(DiscretizedAsset& asset, Time t) const {
           Size i = t_.index(t);
           asset.time() = t;
           asset.reset(this->impl().size(i));
       }
       void rollback(DiscretizedAsset& asset, Time to) const {
           partialRollback(asset,to);
           asset.adjustValues();
       }
       void partialRollback(DiscretizedAsset& asset, Time to) const {
           Integer iFrom = Integer(t_.index(asset.time()));
           Integer iTo = Integer(t_.index(to));
           for (Integer i=iFrom-1; i>=iTo; --i) {
               Array newValues(this->impl().size(i));
               this->impl().stepback(i, asset.values(), newValues);
               asset.time() = t_[i];
               asset.values() = newValues;
               if (i != iTo) // skip the very last adjustment
                   asset.adjustValues();
           }
       }
       void stepback(Size i, const Array& values,
                             Array& newValues) const {
           for (Size j=0; j<this->impl().size(i); j++) {
               Real value = 0.0;
               for (Size l=0; l<n_; l++) {
                   value += this->impl().probability(i,j,l) *
                            values[this->impl().descendant(i,j,l)];
               }
               value *= this->impl().discount(i,j);
               newValues[j] = value;
           }
       }
       Real presentValue(DiscretizedAsset& asset) const {
           Size i = t_.index(asset.time());
           return DotProduct(asset.values(), statePrices(i));
       }
       const Array& statePrices(Size i) const;
    };
This class template acts as an adapter between the Lattice class, from which it inherits the interface, and the Tree class template which will be used for the implementation. Once again, we used the Curiously Recurring Template Pattern (which wasn't actually needed for trees, but it is in this case); the behavior of the lattice is written in terms of a number of methods that must be defined in derived classes. For greater generality, there is no mention of trees in the TreeLattice class. It's up to derived classes to choose what kind of trees they should contain and how to use them.

The TreeLattice constructor is simple enough: it takes and stores the time grid and an integer n specifying the order of the tree (2 for binomial, 3 for trinomial and so on). It also performs a check or two, and initializes a couple of data members used for caching data; but I'll gloss over that here.

The interesting part is the implementation of the Lattice interface, which follows the outline I gave back in the first post on this framework. The initialize method calculates the index of the passed time on the stored grid, sets the asset time, and finally passes the number of nodes on the corresponding tree level to the asset's reset method. The number of nodes is obtained by calling the size method through CRTP; this is one of the methods that derived classes will have to implement, and (like all other such methods) has the same signature as the corresponding method in the tree classes.

The rollback and partialRollback methods perform the same work, the only difference being that rollback performs the adjustment at the final time while partialRollback doesn't. Therefore, it's only to be expected that the one is implemented in terms of the other: rollback performs a call to partialRollback, followed by another to the asset's adjustValues method.

The rollback procedure is spelled out in partialRollback: it finds the indexes of the current time and of the target time on the grid, and it loops from one to the other. At each step, it calls the stepback method, which implements the actual numerical work of calculating the asset values on the i-th level of the tree from those on level i+1; then it updates the asset and, at all steps except the last, calls its adjustValues method.

The implementation of the stepback method defines, by using it, the interface that derived classes must implement (that's currently the only sane way, since concepts were left out of C++11). It determines the value of the asset at each node by combining the values at each descendant node, weighed by the corresponding transition probability; the result is further adjusted by discounting it. All in all, the required interface includes the size method, which I've already shown; the probability and descendant methods, with the same signature as the tree methods of the same name; and the discount method, which takes the indexes of the desired level and node and returns the discount factor between that node and its descendants (assumed to be independent of the particular descendant).
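If you want to convince yourself of what a single step does, here is a stand-alone version of the calculation for one node, with the probabilities, descendants and discount factor passed in by hand rather than obtained from a tree; the numbers in the usage below are made up.

```cpp
#include <cstddef>
#include <vector>

// One step of backward induction for a single node, as in stepback:
// combine the values at the descendant nodes on the next level, weighted
// by the branch probabilities, and discount the result.
double stepBackOneNode(const std::vector<double>& nextValues,
                       const std::vector<std::size_t>& descendants,
                       const std::vector<double>& probabilities,
                       double discount) {
    double value = 0.0;
    for (std::size_t l = 0; l < probabilities.size(); ++l)
        value += probabilities[l] * nextValues[descendants[l]];
    return discount * value;
}
```

For instance, a node branching to values 0, 10 and 20 with probabilities 1/4, 1/2 and 1/4 and no discounting rolls back to 10.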

Finally, the presentValue method is implemented by returning the dot product of the asset values and the state prices at the current time on the grid. I'll cheerfully ignore the way the state prices are calculated; suffice it to say that using them is somewhat more efficient than rolling the asset all the way back to \( t=0 \).
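To see why the two give the same answer, consider a one-step binomial toy lattice: the state prices at the first level are just the discounted transition probabilities from the root, so their dot product with the asset values reproduces the rollback. A self-contained sketch, with made-up numbers in the test:

```cpp
#include <vector>

// Full rollback to the root on a one-step binomial lattice.
double rollbackToRoot(double up, double down, double p, double discount) {
    return discount * (p * up + (1.0 - p) * down);
}

// Same value obtained as the dot product of the asset values and the
// state prices at level 1, which here are the discounted probabilities.
double valueViaStatePrices(double up, double down, double p, double discount) {
    std::vector<double> statePrices;
    statePrices.push_back(discount * p);
    statePrices.push_back(discount * (1.0 - p));
    std::vector<double> values;
    values.push_back(up);
    values.push_back(down);
    double dot = 0.0;
    for (std::vector<double>::size_type i = 0; i < values.size(); ++i)
        dot += values[i] * statePrices[i];
    return dot;
}
```

On a multi-level lattice the state prices fold all the intermediate discounting and probabilities into a single array per level, which is where the efficiency comes from.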

Now, why does the TreeLattice implementation call methods with the same signature as those of a tree (thus forcing derived classes to define them) instead of just storing a tree and calling its methods directly? Well, that's the straightforward thing to do if you have just one underlying tree; and in fact, most one-dimensional lattices will just forward the calls to the tree they store. However, it wouldn't work for other lattices (say, two-dimensional ones); and in that case, the wrapper methods used in the implementation of stepback allow us to adapt the underlying structure, whatever it is, to their common interface.

The library contains instances of both kinds of lattices. Most—if not all—of those of the straightforward kind inherit from the TreeLattice1D class template, shown in the listing below.
    template <class Impl>
    class TreeLattice1D : public TreeLattice<Impl> {
      public:
        TreeLattice1D(const TimeGrid& timeGrid, Size n);
        Disposable<Array> grid(Time t) const;
    };
It doesn't define any of the methods required by TreeLattice; and the method it does implement (the grid method, defined as pure virtual in the Lattice base class) actually requires another one, namely, the underlying method. All in all, this class does little besides providing a useful categorization; the storage and management of the underlying tree is, again, left to derived classes. (One might argue for including a default implementation of the required methods in TreeLattice1D. This would probably make sense; it would make it easier to implement derived classes in the most common cases, and could be overridden if a specific lattice needed it.)

One such class is the inner OneFactorModel::ShortRateTree class, shown in the next listing.
    class OneFactorModel::ShortRateTree
        : public TreeLattice1D<OneFactorModel::ShortRateTree> {
      public:
        ShortRateTree(
              const boost::shared_ptr<TrinomialTree>& tree,
              const boost::shared_ptr<ShortRateDynamics>& dynamics,
              const TimeGrid& timeGrid)
        : TreeLattice1D<OneFactorModel::ShortRateTree>(timeGrid, 3),
          tree_(tree), dynamics_(dynamics) {}
        Size size(Size i) const {
            return tree_->size(i);
        }
        Real underlying(Size i, Size index) const {
            return tree_->underlying(i, index);
        }
        Size descendant(Size i, Size index, Size branch) const {
            return tree_->descendant(i, index, branch);
        }
        Real probability(Size i, Size index, Size branch) const {
            return tree_->probability(i, index, branch);
        }
        DiscountFactor discount(Size i, Size index) const {
            Real x = tree_->underlying(i, index);
            Rate r = dynamics_->shortRate(timeGrid()[i], x);
            return std::exp(-r*timeGrid().dt(i));
        }
      private:
        boost::shared_ptr<TrinomialTree> tree_;
        boost::shared_ptr<ShortRateDynamics> dynamics_;
    };
Its constructor takes a trinomial tree, built by any specific short-rate model according to its dynamics; an instance of the ShortRateDynamics class, which I'll gloss over; and a time grid, which could have been extracted from the tree, so I can't figure out why we pass it separately. The grid is passed to the base-class constructor, together with the order of the tree (which is 3, of course); the tree and the dynamics are stored as data members.

As is to be expected, most of the required interface is implemented by forwarding the calls to the corresponding tree methods. The only exception is the discount method, which doesn't have a corresponding tree method; it is implemented by asking the tree for the value of its underlying variable at the relevant node, retrieving the short rate from the dynamics, and calculating the corresponding discount factor between the time of the node and the next time on the grid.

Note that, by modifying the dynamics, it is possible to change the value of the short rate at each node while maintaining the structure of the tree unchanged. This is done in a few models in order to fit the tree to the current interest-rate term structure; the ShortRateTree class provides another constructor, not shown here, that takes additional parameters to perform the fitting procedure.


As an example of the second kind of lattice, have a look at the TreeLattice2D class template, shown in the listing below. It acts as base class for lattices with two underlying variables, and implements most of the methods required by TreeLattice. (In this, it differs from TreeLattice1D which didn't implement any of them. We might have had a more symmetric hierarchy by leaving TreeLattice2D mostly empty and moving the implementation to a derived class. At this time, though, it would sound a bit like art for art's sake.)
    template <class Impl, class T = TrinomialTree>
    class TreeLattice2D : public TreeLattice<Impl> {
      public:
        TreeLattice2D(const boost::shared_ptr<T>& tree1,
                      const boost::shared_ptr<T>& tree2,
                      Real correlation)
        : TreeLattice<Impl>(tree1->timeGrid(),
                            T::branches*T::branches),
          tree1_(tree1), tree2_(tree2), m_(T::branches,T::branches),
          rho_(std::fabs(correlation)) { ... }
        Size size(Size i) const {
            return tree1_->size(i)*tree2_->size(i);
        }
        Size descendant(Size i, Size index, Size branch) const {
            Size modulo = tree1_->size(i);

            Size index1 = index % modulo;
            Size index2 = index / modulo;
            Size branch1 = branch % T::branches;
            Size branch2 = branch / T::branches;

            modulo = tree1_->size(i+1);
            return tree1_->descendant(i, index1, branch1) +
                   tree2_->descendant(i, index2, branch2)*modulo;
        }
        Real probability(Size i, Size index, Size branch) const {
            Size modulo = tree1_->size(i);

            Size index1 = index % modulo;
            Size index2 = index / modulo;
            Size branch1 = branch % T::branches;
            Size branch2 = branch / T::branches;

            Real prob1 = tree1_->probability(i, index1, branch1);
            Real prob2 = tree2_->probability(i, index2, branch2);
            return prob1*prob2 + rho_*(m_[branch1][branch2])/36.0;
        }
      protected:
        boost::shared_ptr<T> tree1_, tree2_;
        Matrix m_;
        Real rho_;
    };
The two variables are modeled by correlating the respective trees. Now, I'm sure that any figure I might draw would only add to the confusion. However, the idea is that the state of the two variables is expressed by a pair of nodes from the respective trees; that the transitions to be considered are those from pair to pair; and that all the possibilities are enumerated so that they can be retrieved by means of a single index and thus can match the required interface.

For instance, let's take the case of two trinomial trees. Let's say we're at level \( i \) (the two trees must have the same time grid, or all bets are off). The first variable has a value that corresponds to node \( j \) on its tree, while the second sits on node \( k \). The structure of the first tree tells us that on the next level, the first variable might go to nodes \( j'_0 \), \( j'_1 \) or \( j'_2 \) with different probabilities; the second tree gives us \( k'_0 \), \( k'_1 \) and \( k'_2 \) as target nodes for the second variable. Seen as transitions between pairs, this means that we're at \( (j,k) \) on the current level and that on the next level we might go to any of \( (j'_0,k'_0) \), \( (j'_1,k'_0) \), \( (j'_2,k'_0) \), \( (j'_0,k'_1) \), \( (j'_1,k'_1) \), and so on until \( (j'_2,k'_2) \) for a grand total of \( 3 \times 3 = 9 \) possibilities. By enumerating the pairs in lexicographic order like I just did, we can give \( (j'_0,k'_0) \) the index \( 0 \), \( (j'_1,k'_0) \) the index \( 1 \), and so on until we give the index \( 8 \) to \( (j'_2,k'_2) \). In the same way, if on a given level there are \( n \) nodes on the first tree and \( m \) on the second, we get \( n \times m \) pairs that, again, can be enumerated in lexicographic order: the pair \( (j,k) \) is given the index \( k \times n + j \).
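The encoding and decoding of pair indices is just integer multiplication, division and modulus; here's a tiny stand-alone check (the helper names are made up for this sketch, they don't appear in the library):

```cpp
#include <cstddef>
#include <utility>

// Lexicographic enumeration of node pairs: the pair (j,k), with n nodes
// on the first tree, gets the single index k*n + j.
std::size_t pairToIndex(std::size_t j, std::size_t k, std::size_t n) {
    return k * n + j;
}

// Decoding is a modulus and an integer division, exactly as done at the
// top of the descendant and probability methods.
std::pair<std::size_t, std::size_t> indexToPair(std::size_t index,
                                                std::size_t n) {
    return std::make_pair(index % n, index / n);  // (j, k)
}
```

With two trinomial trees (n = 3 for the branches), the pair of branches \( (2,2) \) gets the index 8, matching the enumeration above.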

At this point, the implementation starts making sense. The constructor of TreeLattice2D takes and stores the two underlying trees and the correlation between the two variables; the base-class TreeLattice constructor is passed the time grid, taken from the first tree, and the order of the lattice, which equals the product of the orders of the two trees; for two trinomial trees, this is \( 3 \times 3 = 9 \) as above. (The current implementation assumes that the two trees are of the same type, but it could easily be made to work with two trees of different orders.) The constructor also initializes a matrix m_ that will be used later on.

The size of the lattice at a given level is the product \( n \times m \) of the sizes of the two trees, which translates in a straightforward way into the implementation of the size method.

Things get more interesting with the two following methods. The descendant method takes the level i of the tree; the index of the lattice node, which is actually the index of a pair \( (j,k) \) among all those available; and the index of the branch to take, which by the same token is a pair of branches. The first thing it does is to extract the actual pairs. As I mentioned, the passed index equals \( k \times n + j \), which means that the two underlying indexes can be retrieved as index%n and index/n. The same holds for the two branches, with \( n \) being replaced by the order of the first tree. Having all the needed indexes and branches, the code calls the descendant method on the two trees, obtaining the indexes \( j' \) and \( k' \) of the descendant nodes; then it retrieves the size \( n' \) of the first tree at the next level; and finally returns the combined index \( k' \times n' + j' \).

Up to a certain point, the probability method performs the same calculations; that is, until it retrieves the two probabilities from the two underlying trees. If the two variables were not correlated, the probability of the transition would simply be the product of the two probabilities. Since this is not the case, a correction term is added which depends on the passed correlation (of course) and also on the chosen branches. I'll gloss over the value of the correction as I already did for several other formulas.

This completes the implementation. Even though it contains a lot more code than its one-dimensional counterpart, TreeLattice2D is still an incomplete class. Actual lattices will have to inherit from it, close the CRTP loop, and implement the missing discount method.

I'll close this post by mentioning one feature that is currently missing from lattices, but would be nice to have. If you turn back to the implementation of the stepback method, you'll notice that it assumes that the values we're rolling back are cash values; that is, it always discounts them. It could also be useful to roll values back without discounting. For instance, the probability that an option will be exercised could be calculated by rolling back an indicator value on the tree without discounting, while adjusting it to \( 1 \) on the nodes where the exercise condition holds.
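To make the idea concrete, here is a stand-alone single-step sketch: the values being rolled back are exercise indicators, and leaving out the discount factor turns the weighted combination into a probability. The function name and the explicit probabilities are made up for illustration.

```cpp
#include <vector>

// Rolling back exercise indicators without discounting: the values at
// the descendant nodes are 1 where the exercise condition holds and 0
// elsewhere, so combining them with the branch probabilities (and no
// discount factor) yields the exercise probability seen from the
// parent node.
double exerciseProbability(const std::vector<int>& exercised,
                           const std::vector<double>& probabilities) {
    double p = 0.0;
    for (std::vector<double>::size_type l = 0; l < probabilities.size(); ++l)
        p += probabilities[l] * exercised[l];
    return p;
}
```

Repeating this step level by level, while resetting the value to 1 on exercise nodes, would roll the probability all the way back to the root.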

Monday, April 13, 2015

Chapter 7, part 4 of 6: trinomial trees

Welcome back, and I hope you all had a good Easter.

A few things happened in the QuantLib world during the past couple of weeks. First, a couple of blogs started publishing QuantLib-related posts: one by Matthias Groncki and another by Gouthaman Balaraman. They both look interesting (and they both use IPython notebooks, which I like, too).

Then, Klaus Spanderen started playing with Adjoint Algorithmic Differentiation, thus joining the already impressive roster of people working on it.

Finally, Jayanth R. Varma and Vineet Virmani from the Indian Institute of Management Ahmedabad have published a working paper that introduces QuantLib for teaching derivative pricing. We've had little feedback from academia so far, so I was especially glad to hear from them (and if you, too, use QuantLib in the classroom, drop me a line).

This week's content is the fourth part of the series on the tree framework that started in this post. And of course, a reminder: you're still in time for an early-bird discount on my next course. Details are at this link.

Trinomial trees

In the last post, I described binomial trees. As you might remember, they have a rather regular structure, which allows one to implement them as mere calculations on indices.

Trinomial trees are a different beast entirely, and have a lot more leeway in connecting nodes. The way they're built (which is explained in greater detail in Brigo and Mercurio [1]) is sketched in the figure below: on each level of the tree, nodes are placed at equal distances from each other, centered around a node with the same underlying value as the root of the tree; in the figure, the center nodes are \( A_3 \) and \( B_3 \), placed on the same vertical line. As you can see, the distance can differ from level to level.


Once the nodes are in place, we build the links between them. Each node on a level corresponds, of course, to an underlying value \( x \) at the given time \( t \). For each of them, the process gives us the expectation value at the next time conditional on starting from \( (x, t) \); this is represented by the dotted lines. For each forward value, we determine the node which is closest and use that node for the middle branch. For instance, let's look at the node \( A_4 \) in the figure. The dynamics of the underlying gives us a forward value corresponding to the point \( F_4 \) on the next level, which is closest to the node \( B_3 \). Therefore, \( A_4 \) branches out to \( B_3 \) in the middle and to its nearest siblings, \( B_2 \) on the left and \( B_4 \) on the right.

As you see from the figure, it can well happen that two nodes on one level have forward values which are closest to the same node on the next level; see, for instance, nodes \( A_3 \) and \( A_4 \) going both to \( B_3 \) in the middle. This means that, while it's guaranteed by construction that three branches start from each node, there's no telling beforehand how many branches go to a given node; here we range from \( B_5 \) being the end node of just one branch to \( B_3 \) being the end node of five. There is also no telling how many nodes we'll need at each level: in this case, five \( B \) nodes are enough to receive all the branches starting from five \( A \) nodes, but depending on the dynamics of the process we might have needed more nodes or fewer (you can try it: modify the distances in the figure and see what happens).

The logic I just described is implemented by QuantLib in the TrinomialTree class, shown in the listing below.
    class TrinomialTree : public Tree<TrinomialTree> {
        class Branching;
      public:
        enum Branches { branches = 3 };
        TrinomialTree(
               const boost::shared_ptr<StochasticProcess1D>& process,
               const TimeGrid& timeGrid,
               bool isPositive = false);
        Real dx(Size i) const;
        const TimeGrid& timeGrid() const;
        Size size(Size i) const  {
            return i==0 ? 1 : branchings_[i-1].size();
        }
        Size descendant(Size i, Size index, Size branch) const;
        Real probability(Size i, Size index, Size branch) const;
        Real underlying(Size i, Size index) const {
            return i==0 ? x0_ :
                x0_ + (branchings_[i-1].jMin() + index)*dx(i);
        }
      protected:
        std::vector<Branching> branchings_;
        Real x0_;
        std::vector<Real> dx_;
        TimeGrid timeGrid_;
    };
Its constructor takes a one-dimensional process, a time grid specifying the times corresponding to each level of the tree (which don't need to be at regular intervals) and a boolean flag which, if set, constrains the underlying variable to be positive.

I'll get to the construction in a minute, but first I need to show how the information is stored; that is, I need to describe the inner Branching class, shown in the next listing. Each instance of the class stores the information for one level of the tree; e.g., one such instance could encode the figure shown above (to which I'll keep referring).
    class TrinomialTree::Branching {
      public:
        Branching()
        : probs_(3), jMin_(QL_MAX_INTEGER), jMax_(QL_MIN_INTEGER) {}
        Size size() const {
            return jMax_ - jMin_ + 1;
        }
        Size descendant(Size i, Size branch) const {
            return k_[i] - jMin_ + branch - 1;
        }
        Real probability(Size i, Size branch) const {
            return probs_[branch][i];
        }
        Integer jMin() const;
        Integer jMax() const;
        void add(Integer k, Real p1, Real p2, Real p3) {
            k_.push_back(k);
            probs_[0].push_back(p1);
            probs_[1].push_back(p2);
            probs_[2].push_back(p3);

            jMin_ = std::min(jMin_, k-1);
            jMax_ = std::max(jMax_, k+1);
        }
      private:
        std::vector<Integer> k_;
        std::vector<std::vector<Real> > probs_;
        Integer jMin_, jMax_;
    };
As I mentioned, nodes are placed based on a center node corresponding to the initial value of the underlying. That's the only available reference point, since we don't know how many nodes we'll use on either side. Therefore, the Branching class uses an index system that assigns the index \( j=0 \) to the center node and works outwards from there. For instance, on the lower level in the figure we'd have \( j=0 \) for \( A_3 \), \( j=1 \) for \( A_4 \), \( j=-1 \) for \( A_2 \) and so on; on the upper level, we'd start from \( j=0 \) for \( B_3 \). These indexes will have to be translated to those used by the tree interface, which start at \( 0 \) for the leftmost node (\( A_1 \) in the figure).

To hold the tree information, the class declares as data members a vector of integers, storing for each lower-level node the index (in the branching system) of the corresponding mid-branch node on the upper level (there's no need to store the indexes of the left-branch and right-branch nodes, as they are always the neighbors of the mid-branch one); three vectors, declared for convenience of access as a vector of vectors, storing the probabilities for each of the three branches; and two integers storing the minimum and maximum node index used on the upper level (again, in the branching system).

Implementing the tree interface requires some care with the several indexes we need to juggle. For instance, let's go back to TrinomialTree and look at the size method. It must return the number of nodes at level \( i \); and since each branching holds information on the nodes of its upper level, the correct figure must be retrieved from branchings_[i-1] (except for the case \( i=0 \), for which the result is \( 1 \) by construction). To allow this, the Branching class provides a size method that returns the number of nodes on the upper level; since jMin_ and jMax_ store the indexes of the leftmost and rightmost nodes, respectively, the number to return is jMax_-jMin_+1. In the figure, indexes go from \( -2 \) to \( 2 \) (corresponding to \( B_1 \) and \( B_5 \)), yielding \( 5 \) as the number of nodes.

The descendant and probability methods of the tree both delegate to corresponding methods in the Branching class. The first returns the descendant of the \( i \)-th lower-level node on the given branch (specified as \( 0 \), \( 1 \) or \( 2 \) for the left, mid and right branch, respectively). To do so, it first retrieves the index \( k \) of the mid-branch node in the internal index system; then it subtracts jMin, which transforms it into the corresponding external index; and finally it takes the branch into account by adding branch-1 (that is, \( -1 \), \( 0 \) or \( 1 \) for the left, mid and right branch). The probability method is easy enough: it selects the correct vector based on the branch and returns the probability for the \( i \)-th node. Since the vector indexes are zero-based, no conversion is needed.

Finally, the underlying method is implemented by TrinomialTree directly, since the branching doesn't store the relevant process information. The Branching class only needs to provide an inspector jMin, which the tree uses to determine the offset of the leftmost node from the center; it also provides a jMax inspector, as well as an add method which is used to build the branching. This method should probably have been called push_back instead; it takes the data for a single node (that is, mid-branch index and probabilities), adds them to the back of the corresponding vectors, and updates the information on the minimum and maximum indexes.

What remains now is for me to show (with the help of the listing below and, again, of the diagram at the beginning) how a TrinomialTree instance is built.
    TrinomialTree::TrinomialTree(
               const boost::shared_ptr<StochasticProcess1D>& process,
               const TimeGrid& timeGrid,
               bool isPositive)
    : Tree<TrinomialTree>(timeGrid.size()), dx_(1, 0.0),
      timeGrid_(timeGrid) {
        x0_ = process->x0();
        Size nTimeSteps = timeGrid.size() - 1;
        Integer jMin = 0, jMax = 0;

        for (Size i=0; i<nTimeSteps; i++) {
            Time t = timeGrid[i];
            Time dt = timeGrid.dt(i);

            Real v2 = process->variance(t, 0.0, dt);
            Volatility v = std::sqrt(v2);
            dx_.push_back(v*std::sqrt(3.0));

            Branching branching;
            for (Integer j=jMin; j<=jMax; j++) {
                Real x = x0_ + j*dx_[i];
                Real f = process->expectation(t, x, dt);
                Integer k = std::floor((f-x0_)/dx_[i+1] + 0.5);

                if (isPositive)
                    while (x0_+(k-1)*dx_[i+1]<=0)
                        k++;

                Real e = f - (x0_ + k*dx_[i+1]);
                Real e2 = e*e, e3 = e*std::sqrt(3.0);

                Real p1 = (1.0 + e2/v2 - e3/v)/6.0;
                Real p2 = (2.0 - e2/v2)/3.0;
                Real p3 = (1.0 + e2/v2 + e3/v)/6.0;

                branching.add(k, p1, p2, p3);
            }
            branchings_.push_back(branching);
            jMin = branching.jMin();
            jMax = branching.jMax();
        }
    }
The constructor takes the stochastic process for the underlying variable, a time grid, and a boolean flag. The number of times in the grid corresponds to the number of levels in the tree, so it is passed to the base Tree constructor; also, the time grid is stored and a vector dx_, which will store the distances between nodes at each level, is initialized. The first level has just one node, so there's no corresponding distance to speak of; thus, the first element of the vector is just set to \( 0 \). Other preparations include storing the initial value x0_ of the underlying and the number of steps; and finally, declaring two variables jMin and jMax to hold the minimum and maximum node index. At the initial level, they both equal \( 0 \).

After this introduction, the tree is built iteratively, one level at a time. For each step, we take the initial time t and the time step dt from the time grid. They're used as inputs to retrieve the variance of the process over the step, which is assumed to be independent of the value of the underlying (as for binomial trees, we have no way to enforce this). Based on the variance, we calculate the distance to be used at the next level and store it in the dx_ vector (the value of \( \sqrt{3} \) times the standard deviation is suggested by Hull and White as giving the best stability) and after this, we finally build a Branching instance.

To visualize the process, let's refer again to the figure. The code cycles over the nodes on the lower level, whose indexes range between the current values of jMin and jMax: in our case, that's \( -2 \) for \( A_1 \) and \( 2 \) for \( A_5 \). For each node, we can calculate the underlying value x from the initial value x0_, the index j, and the distance dx_[i] between the nodes. From x, the process can give us the forward value f of the variable after dt; and having just calculated the distance dx_[i+1] between nodes on the upper level, we can find the index k of the node closest to f. As usual, k is an internal index; for the node \( A_4 \) in the figure, whose forward value is \( F_4 \), the index k would be \( 0 \) corresponding to the center node \( B_3 \).

If the boolean flag isPositive is true, we have to make sure that no node corresponds to a negative or null value of the underlying; therefore, we check the value at the left target node (that would be the one with index k-1, since k is the index of the middle one) and if it's not positive, we increase k and repeat until the desired condition holds. In the figure, if the underlying were negative at node \( B_1 \) then node \( A_1 \) would branch to \( B_2 \), \( B_3 \) and \( B_4 \) instead.

Finally, we calculate the three transition probabilities p1, p2 and p3 (the formulas are derived in [1]) and store them in the current Branching instance together with k. When all the nodes are processed, we store the branching and update the values of jMin and jMax so that they range over the nodes on the upper level; the new values will be used for the next step of the main for loop (the one over time steps) in which the current upper level will become the lower level.

Bibliography

[1] D. Brigo and F. Mercurio, Interest Rate Models — Theory and Practice, 2nd edition. Springer, 2006.
