Monday, May 26, 2014

Chapter 6, part 3 of 8: the Black-Scholes process

Welcome back.

This post is the third in a series (started here) that covers chapter 6 of my book. This time, an example: the Black-Scholes process (in which I'll point out all the flaws in our code, thus showing that I'm not cut out for marketing).

You still have a few days (until the end of this month) to get an early-bird discount for my Introduction to QuantLib Development course, which will be on September 22nd to 24th in London. I've talked about it at length in a previous post, so I'll shut up now.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Example: the Black-Scholes process

And now, an example or three. Not surprisingly, the first is the one-dimensional Black-Scholes process. You've already briefly met the corresponding class (which goes by the unwieldy name of GeneralizedBlackScholesProcess) in this post, where it was passed as an argument to a pricing engine for a European option; I'll go back to that code later on, when I discuss a few shortcomings of the current design and possible future solutions. For the time being, let's have a look at the current implementation, sketched in listing 6.6.

Listing 6.6: Partial implementation of the GeneralizedBlackScholesProcess class.
    class GeneralizedBlackScholesProcess
        : public StochasticProcess1D {
      public:
        GeneralizedBlackScholesProcess(
            const Handle<Quote>& x0,
            const Handle<YieldTermStructure>& dividendTS,
            const Handle<YieldTermStructure>& riskFreeTS,
            const Handle<BlackVolTermStructure>& blackVolTS,
            const boost::shared_ptr<discretization>& d =
                                boost::shared_ptr<discretization>());
        Real x0() const {
            return x0_->value();
        }
        Real drift(Time t, Real x) const {
            Real sigma = diffusion(t,x);
            Time t1 = t + 0.0001;
            return riskFreeRate_->forwardRate(t,t1,Continuous,...)
                 - dividendYield_->forwardRate(t,t1,Continuous,...)
                 - 0.5 * sigma * sigma;
        }
        Real diffusion(Time t, Real x) const;
        Real apply(Real x0, Real dx) const {
            return x0 * std::exp(dx);
        }
        Real expectation(Time t0, Real x0, Time dt) const {
            QL_FAIL("not implemented");
        }
        Real evolve(Time t0, Real x0, Time dt, Real dw) const {
            return apply(x0, discretization_->drift(*this,t0,x0,dt) +
                             stdDeviation(t0,x0,dt)*dw);
        }
        const Handle<Quote>& stateVariable() const;
        const Handle<YieldTermStructure>& dividendYield() const;
        const Handle<YieldTermStructure>& riskFreeRate() const;
        const Handle<BlackVolTermStructure>& blackVolatility() const;
        const Handle<LocalVolTermStructure>& localVolatility() const;
    };

Well, first of all, I should probably explain the length of the class name. The "generalized" bit doesn't refer to some extension of the model, but to the fact that this class was meant to cover different specific processes (such as the Black-Scholes process proper, the Black-Scholes-Merton process, the Black process and so on); the shorter names were reserved for the specializations.

The constructor of this class takes handles to the full set of arguments needed for the generic formulation: a term structure for the risk-free rate, a quote for the varying market value of the underlying, another term structure for its dividend yield, and a term structure for its implied Black volatility. In order to implement the StochasticProcess interface, the constructor also takes a discretization instance. All finance-related inputs are stored in the constructed class instance and can be retrieved by means of inspectors.

You might have noticed that the constructor takes a Black volatility $$\sigma_B(t,k)$$, which is not what is needed to return the $$\sigma(t,x)$$ diffusion term. The process performs the required conversion internally, using different algorithms depending on the type of the passed Black volatility (constant, depending on time only, or with a smile). The type is checked at run-time with a dynamic cast, which is not pretty but in this case is simpler than a Visitor pattern. At this time, the conversion algorithms are built into the process and can't be changed by the user; nor is it possible to provide a precomputed local volatility. The class uses the Observer pattern to determine when to convert the Black volatility into the local one. Recalculation is lazy, meaning that it happens only when the local volatility is actually used, not immediately upon notification from an observable.

The next methods, drift and diffusion, show the crux of this class: it's actually a process for the logarithm of the underlying that masquerades as a process for the underlying itself. Accordingly, the term returned by drift is the one for the logarithm, namely, $$r(t) - q(t) - \frac{1}{2}\sigma^2(t,x)$$, where the rates and the local volatility are retrieved from the corresponding term structures (this is another problem in itself, to which I'll return). The diffusion method returns the local volatility.
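(In case you want to check the math: starting from the usual risk-neutral dynamics of the underlying and applying Ito's lemma to $$\log(x)$$, one gets

$$\mathrm{d}x = \left(r(t) - q(t)\right)\, x\, \mathrm{d}t + \sigma(t,x)\, x\, \mathrm{d}w$$

$$\mathrm{d}\log(x) = \left(r(t) - q(t) - \frac{1}{2}\sigma^2(t,x)\right) \mathrm{d}t + \sigma(t,x)\, \mathrm{d}w$$

which is the drift implemented above.)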

The choice of $$\log(x)$$ as the actual stochastic variable brings quite a few consequences. First of all, the variable we're using is not the one we're exposing. The quote we pass to the constructor holds the market value of the underlying; the same value is returned by the x0 method; and the values passed to and returned from other methods such as evolve are for the same quantity. In retrospect, this was a bad idea (yes, I can hear you say "D'oh.") We were led to it by the design we had developed, but it should have been a hint that some piece was missing.

Other consequences can be seen all over the listing. The choice of $$\log(x)$$ as the working variable is the reason why the apply method is virtual, and indeed why it exists at all: applying to $$x$$ a drift $$\Delta$$ meant for $$\log(x)$$ is not done by adding them, but by returning $$x\exp(\Delta)$$. The expectation method is affected, too: its default implementation would apply the drift as above, thus returning $$\exp(E[\log(x)])$$ instead of $$E[x]$$. To prevent it from returning the wrong value, the method is currently disabled (it raises an exception). In turn, the evolve method, whose implementation relied on expectation, had to be overridden to work around it. (The evolve method might have been overridden for efficiency, even if expectation were available. The default implementation in StochasticProcess1D would return $$x\exp(\mu dt)\exp(\sigma dw)$$, while the overridden one calculates $$x\exp(\mu dt + \sigma dw)$$.)

On top of the above smells, we have a performance problem—one known to all those who have tried a Monte Carlo engine from the library, as well as to all the people on the Wilmott forums to whom the results were colorfully described. Using apply, the evolve method performs an exponentiation at each step; but above all, the drift and diffusion methods repeatedly ask term structures for values. If you remember this post, this means going through at least a couple of levels of virtual method calls, sometimes (if we're particularly unlucky) retrieving a rate from discount factors that were obtained from the same rate to begin with.

Finally, there's another design problem. As I mentioned previously, we already used the Black-Scholes process in the AnalyticEuropeanEngine shown as an example in this post. However, if you look at the engine code in the library, you'll see that it doesn't use the process by means of the StochasticProcess interface; it just uses it as a bag of quotes and term structures. This suggests that the process is a jack of too many trades.

How can we fix it, then? We'll probably need a redesign. It's all hypothetical, of course; there's no actual code yet, just a few thoughts I had while I was reviewing the current code for this chapter. But it could go as follows.

First of all (and at the risk of being told once again that I'm overengineering) I'd separate the stochastic-process part of the class from the bag-of-term-structures part. For the second one, I'd create a new class, probably called something like BlackScholesModel. Instances of this class would be the ones passed to pricing engines, and would contain the full set of quotes and term structures; while we're at it, the interface should also allow the user to provide a local volatility, or to specify new ways to convert the Black volatility with some kind of Strategy pattern.

From the model, we could build processes. Depending on the level of coupling and the number of classes we prefer, we might write factory classes that take a model and return processes; we might add factory methods to the model class that return processes; or we might have the process constructors take a model. In any case, this could allow us to write process implementations that could be optimized for the given simulation. For instance, the factory (whatever it might be) might take as input the time nodes of the simulation and return a process that precomputed the rates $$r(t_i)$$ and $$q(t_i)$$ at those times, since they don't depend on the underlying value; if a type check told the factory that the volatility doesn't have a smile, the process could precompute the $$\sigma(t_i)$$, too.

As for the $$x$$ vs $$\log(x)$$ problem, I'd probably rewrite the process so that it's explicitly a process for the logarithm of the underlying: the x0 method would return a logarithm, and so would the other methods such as expectation or evolve. The process would gain in consistency and clarity; however, since it would no longer be guaranteed that a process generates paths for the underlying, client code taking a generic process would also need a function or function object that converts the value of the process variable into a value of the underlying. Such a function might be added to the StochasticProcess interface, or passed to client code along with the process.

Monday, May 19, 2014

Chapter 6, part 2 of n: stochastic processes

Hello everybody.

This post is the second in a series that covers chapter 6 of my book, that is, the Monte Carlo framework in QuantLib. The first post is here. In this post: stochastic processes.

Also, as I mentioned last week, you can now register for my next Introduction to QuantLib Development course (which is going to be in London, September 22nd to 24th). You can get an early-bird discount until the end of May.


Stochastic processes

The whole point of using RNGs is, of course, to use the generated random numbers to drive a stochastic process. The interface of the StochasticProcess class (displayed in listing 6.4) was designed with this application in mind—and it shows.

Listing 6.4: Partial implementation of the StochasticProcess class.
    class StochasticProcess : public Observer,
                              public Observable {
      public:
        class discretization {
          public:
            virtual ~discretization() {}
            virtual Disposable<Array> drift(const StochasticProcess&,
                                            Time t0, const Array& x0,
                                            Time dt) const = 0;
            // same for diffusion etc.
        };

        virtual ~StochasticProcess() {}

        virtual Size size() const = 0;
        virtual Size factors() const {
            return size();
        }
        virtual Disposable<Array> initialValues() const = 0;

        virtual Disposable<Array> drift(Time t,
                                        const Array& x) const = 0;
        virtual Disposable<Matrix> diffusion(Time t,
                                             const Array& x) const = 0;
        virtual Disposable<Array> expectation(Time t0,
                                              const Array& x0,
                                              Time dt) const {
            return apply(x0, discretization_->drift(*this,t0,x0,dt));
        }
        virtual Disposable<Matrix> stdDeviation(Time t0,
                                                const Array& x0,
                                                Time dt) const {
            return discretization_->diffusion(*this,t0,x0,dt);
        }
        virtual Disposable<Matrix> covariance(Time t0,
                                              const Array& x0,
                                              Time dt) const {
            return discretization_->covariance(*this,t0,x0,dt);
        }
        virtual Disposable<Array> evolve(Time t0,
                                         const Array& x0,
                                         Time dt,
                                         const Array& dw) const {
            return apply(expectation(t0,x0,dt),
                         stdDeviation(t0,x0,dt)*dw);
        }
        virtual Disposable<Array> apply(const Array& x0,
                                        const Array& dx) const {
            return x0 + dx;
        }
    };

The concept modeled by the class is a somewhat generic n-dimensional stochastic process described by the equation
$$\mathrm{d}\mathbf{x} = \mathbf{\mu}(t, \mathbf{x}) \mathrm{d}t + \mathbf{\sigma}(t, \mathbf{x}) \cdot \mathrm{d}\mathbf{w}$$
However, the interface of the class deals more with the discretization of the process over the time steps of a sampled path.

Straight away, the class defines a polymorphic inner class discretization. It is an instance of the Strategy pattern, whose purpose is to make it possible to change the way a process is discretely sampled. Its methods take a StochasticProcess instance, a starting point $$(t,\mathbf{x})$$, and a time step $$\Delta t$$, and return a discretization of the process drift and diffusion over the step. (By this time, you'll have surely noticed that there's no provision for jumps. Yes, I know. Hold that thought; I'll have more to say on this later.)

Back to the StochasticProcess interface. The class defines a few inspectors. The size method returns the number of variables modeled by the process. For instance, it would return 1 for a regular Black-Scholes process with a single underlying, 2 for a stochastic-volatility model, or N for the joint process of N correlated underlying assets each following a Black-Scholes process. The factors method returns the number of random variates used to drive the variables; it has a default implementation returning the same number as size, although on second thought I'm not sure that was a great idea; we should probably have required derived classes to specify it explicitly. To round up the inspectors, the initialValues method returns the values of the underlying variables at t=0; in other words, the present values, in which no randomness is involved. As in most StochasticProcess methods, the result is wrapped in a Disposable class template. This is a way to avoid a few copies when returning arrays or matrices; if you're interested in the details, you can have a look at appendix A. When we can rely on compilers supporting the C++11 standard, we'll be able to replace those with r-value references [1]; but as usual, it will be a few years before we can do this (where "this" also includes being able to drop support for older compilers with a clear conscience, and without leaving users outside in the rain).

The next pair of methods (also inspectors, in a way) return more detailed information on the process. The drift method returns the $$\mu(t,\mathbf{x})$$ term in the previous formula, while the diffusion method returns the $$\sigma(t,\mathbf{x})$$ term. Obviously, for dimensions to match, the drift term must be an N-dimensional array and the diffusion term an N × M matrix, with N and M being the dimensions returned by the size and factors methods, respectively. Also, the diffusion matrix must include correlation information.

The last set of methods deals with the discretization of the process over a path. The expectation method returns the expectation values of the process variables at time $$t+\Delta t$$, assuming they had values $$\mathbf{x}$$ at time $$t$$. Its default implementation asks the stored discretization instance for the integrated drift of the variables over $$\Delta t$$ and applies it to the starting values. (apply is another virtual method that takes a value $$\mathbf{x}$$ and a variation $$\Delta\mathbf{x}$$ and, unsurprisingly, returns $$\mathbf{x} + \Delta\mathbf{x}$$. Why did we need a polymorphic method to do this, you ask? It's a sad story that will be told in a short while.) The stdDeviation method returns the diffusion term integrated from $$t$$ to $$t+\Delta t$$, starting at $$(t, \mathbf{x})$$, while the covariance method returns its square; in their default implementations, they both delegate the calculation to the discretization instance. Finally, the evolve method provides some kind of higher-level interface; it takes a starting point $$(t, \mathbf{x})$$, a time step $$\Delta t$$, and an array $$\mathbf{w}$$ of random Gaussian variates, and returns the end point $$\mathbf{x'}$$. Its default implementation asks for the standard-deviation matrix, multiplies it by the random variates to yield the random part of the simulated path, and applies the resulting variations to the expectation values of the variables at $$t+\Delta t$$ (which include the deterministic part of the step).

As you've seen, the methods in this last set are all given a default implementation, sometimes calling methods like drift or diffusion by way of the discretization strategy and sometimes calling directly other methods in the same set; it's a kind of multiple-level Template Method pattern, as it were. This makes it possible for classes inheriting from StochasticProcess to specify their behavior at different levels.

At the very least, such a class must implement the drift and diffusion methods, which are purely virtual in the base class. This specifies the process at the innermost level, and is enough to have a working stochastic process; the other methods would use their default implementation and delegate the needed calculations to the contained discretization instance. The derived class can prescribe a particular discretization, suggest one by setting it as a default, or leave the choice entirely to the user.

At a somewhat outer level, the process class can also override the expectation and stdDeviation methods; this might be done, for instance, when it's possible to write a closed formula for their results. In this case, the developer of the class can decide either to prevent the constructor from taking a discretization instance, thus forcing the use of the formula; or to allow the constructor to take one, in which case the overriding method should check for its presence and possibly forward to the base-class implementation instead of using its own. The tasks of factoring in the random variates and of composing the integrated drift and diffusion terms are still left to the evolve method.

At the outermost level, the derived class can override the evolve method and do whatever it needs—even disregard the other methods in the StochasticProcess interface and rely instead on ones declared in the derived class itself. This can serve as a Hail Mary pass for those processes that don't fit the interface as currently defined. For instance, a developer wanting to add stochastic jumps to a process might add them into evolve, provided that a random-sequence generator can generate the right mixture of Gaussian and Poisson variates to pass to the method.

Wouldn't it be better to support jumps in the interface? Well, yes, in the sense that it would be better if a developer could work with the interface rather than around it. However, there are two kinds of problems that I see about adding jumps to the StochasticProcess class at this time.

On the one hand, I'm not sure what the right interface should be. For instance, can we settle on a single generalized formula including jumps? I wouldn't want to release an interface just to find out later that it's not the correct one; which is likely, given that at this time we don't have enough code to generalize from.

On the other hand, what happens when we try to integrate the next feature? Do we add it to the interface, too? Or should we try instead to generalize by not specializing; that is, by declaring fewer methods in the base-class interface, instead of more? For instance, we could keep evolve or something like it, while moving drift, diffusion and their likes into subclasses instead. (By the way, this might be a way to work on a jump interface without committing too early to it: writing some experimental classes that define the new methods and override evolve to call them. Trying to make them work should provide some useful feedback.)

But I went on enough already. In short: yes, Virginia, the lack of jumps in the interface is a bad thing. However, I think we'd be likely to make a poor job if we were to add them now. I, for one, am going to stay at the window and see if anybody comes along with some working code. If you wanted to contribute, those experimental classes would be most appreciated.

One last detour before we tackle a few examples. When implementing a one-dimensional process, the Arrays and their likes tend to get in the way; they make the code more tiresome to write and less clear to read. Ideally, we'd want an interface like the one declared by StochasticProcess, but with methods that take and return simple real numbers instead of arrays and matrices. However, we can't declare such methods as overloads in StochasticProcess, since they don't apply to a generic process; nor did we want a separate hierarchy for 1-D processes, since they belong with all the others.

The library solves this problem by providing the StochasticProcess1D class, shown in listing 6.5. On the one hand, this class declares an interface that mirrors the StochasticProcess interface exactly, down to the default implementations, but uses the desired Real type. On the other hand, it inherits from StochasticProcess (establishing that a 1-D process "is-a" stochastic process, as we wanted) and implements its interface in terms of the new one by boxing and unboxing Reals into and from Arrays, so that the developer of a 1-D process need not care about it: in short, a straightforward application of the Adapter pattern. Because of the 1:1 correspondence between scalar and vectorial methods (as can be seen in the listing), the 1-dimensional process will work the same way when accessed from either interface; it will also have the same possibilities for customization provided by the StochasticProcess class and described earlier in this section.

Listing 6.5: Partial implementation of the StochasticProcess1D class.
    class StochasticProcess1D : public StochasticProcess {
      public:
        class discretization {
          public:
            virtual ~discretization() {}
            virtual Real drift(const StochasticProcess1D&,
                               Time t0, Real x0, Time dt) const = 0;
            // same for diffusion etc.
        };
        virtual Real x0() const = 0;
        virtual Real drift(Time t, Real x) const = 0;
        virtual Real diffusion(Time t, Real x) const = 0;
        virtual Real expectation(Time t0, Real x0, Time dt) const {
            return apply(x0, discretization_->drift(*this, t0,
                                                    x0, dt));
        }
        virtual Real stdDeviation(Time t0, Real x0, Time dt) const {
            return discretization_->diffusion(*this, t0, x0, dt);
        }
        virtual Real variance(Time t0, Real x0, Time dt) const;
        virtual Real evolve(Time t0, Real x0,
                            Time dt, Real dw) const {
            return apply(expectation(t0,x0,dt),
                         stdDeviation(t0,x0,dt)*dw);
        }
      private:
        Size size() const { return 1; }
        Disposable<Array> initialValues() const {
            return Array(1, x0());
        }
        Disposable<Array> drift(Time t, const Array& x) const {
            return Array(1, drift(t, x[0]));
        }
        // ...the rest of the StochasticProcess interface...
        Disposable<Array> evolve(Time t0, const Array& x0,
                                 Time dt, const Array& dw) const {
            return Array(1, evolve(t0,x0[0],dt,dw[0]));
        }
    };


Bibliography

[1] H.E. Hinnant, B. Stroustrup and B. Kozicki, A Brief Introduction to Rvalue References, C++ Standards Committee Paper N2027, 2006.

Monday, May 12, 2014

Chapter 6, part 1 of n: random-number generation

Welcome back.

Today's post is the first of a new series that covers chapter 6 of my book. The subject is the Monte Carlo framework of QuantLib, and it's going to take a few posts (probably interspersed with other content).

A plea for help: last month, Richard Stanton reported on the mailing list that compilation of the Python bindings fails on Mac OS X Mavericks. If you had any success doing that, or if you have a Mac and want to help, please post to the mailing list at quantlib-users@lists.sourceforge.net.

In other news, I think I mentioned that you can register for a new edition of my Introduction to QuantLib Development course, and with an early-bird discount to boot. If you haven't been on this blog recently, it's the course that I teach once or twice a year based on the contents of my book. A good part of the learning experience is doing a number of exercises (which you don't get to do on the blog) so you'll have to bring your laptop and get ready to code. It's going to be in London on September 22nd to 24th, and I look forward to seeing you there.

And finally: in next week's post, I'll need to display some math. I think I'll go with MathJax, which is supposed to work in all browsers; but if bits of math like $$(\mathbf{x},t)$$ or $$\Delta t$$ here don't display correctly in yours, let me know.


The Monte Carlo framework

The concept behind Monte Carlo methods is deceptively simple—generate random paths, price the desired instrument over such paths, and average the results. However (and even assuming that the tasks I just enumerated were as simple as they seem, which they're not), several choices can be made regarding the implementation of the model. Pseudo-random or quasi-random numbers? Which ones? When should the simulation stop? What results should be returned? How about more advanced techniques such as likelihood-ratio or pathwise Greeks?

The mission of a library writer, should he accept it, is to make most (possibly all) such choices independently available to the user; to make it possible to model the underlying stochastic process in a polymorphic way; and on top of it all, to provide at least part of the scaffolding needed to put everything together and run the simulation.

The posts in this series will describe the design of our Monte Carlo framework, in what measure it meets (or falls short of) the above requirements, and how it can be integrated with the pricing-engine framework described in chapter 2.

Random-number generation

Among the basic building blocks for our task are random-number generators (RNG), the most basic being uniform RNGs. The library provides quite a few of them, the most notable being the Mersenne Twister among pseudo-random generators and Sobol among low-discrepancy ones. Here, I'll gloss over their implementation, as well as over when and how you should use either kind of generator; such information (as well as everything else related to Monte Carlo methods) is covered in literature much better than I ever could, for instance in Glasserman [1] or Jäckel [2].

The interface of uniform RNGs, shown in listing 6.1, doesn't make for very exciting reading. Its methods are the ones you would expect: pseudo-random generators sport a constructor that takes an initialization seed and a method, next, that returns the next random value; low-discrepancy generators take an additional constructor argument that specifies their dimensionality, and return a vector of values from their nextSequence method. (On second thought, we might have called the nextSequence method simply next. Including the "sequence" token in the method name reminds me of Hungarian notation, which is up there with goto in the list of things considered harmful.) The only feature of note is the Sample struct, used to return the generated values, which also contains a weight. This was done to make it possible for generators to sample values with some kind of bias (e.g., when using importance sampling). However, at this time the library doesn't contain any such generator; all available RNGs return samples with unit weight.

Listing 6.1: Basic random-number generator classes.
    template <class T>
    struct Sample {
        typedef T value_type;
        T value;
        Real weight;
    };

    class MersenneTwisterUniformRng {
      public:
        typedef Sample<Real> sample_type;
        explicit MersenneTwisterUniformRng(unsigned long seed = 0);
        sample_type next() const;
    };

    class SobolRsg {
      public:
        typedef Sample<std::vector<Real> > sample_type;
        SobolRsg(Size dimensionality,
                 unsigned long seed = 0);
        const sample_type& nextSequence() const;
        Size dimension() const { return dimensionality_; }
    };


So much for uniform RNGs. However, they're only the rawest material; we need to refine them further. This is done by means of classes (wrappers, or combinators, if you will; or again, you might see them as implementations of the Decorator pattern) that take an instance of a uniform RNG and yield a different generator. We need two kinds of decorators. On the one hand, we need tuples of random numbers rather than single ones: namely, M × N numbers in order to evolve M variables for N steps. Low-discrepancy generators return tuples already (of course they need to do so, for their low-discrepancy property to hold at the required dimensionality). Since we want a generic interface, we'll make the pseudo-random generators return tuples as well, by simply drawing from them repeatedly; the class template that does so, called RandomSequenceGenerator, is shown in listing 6.2.

Listing 6.2: Sketch of the RandomSequenceGenerator class template.
    template <class RNG>
    class RandomSequenceGenerator {
      public:
        typedef Sample<std::vector<Real> > sample_type;
        RandomSequenceGenerator(Size dimensionality,
                                const RNG& rng)
        : dimensionality_(dimensionality), rng_(rng),
          sequence_(std::vector<Real>(dimensionality), 1.0) {}
        const sample_type& nextSequence() const {
            sequence_.weight = 1.0;
            for (Size i=0; i<dimensionality_; i++) {
                typename RNG::sample_type x(rng_.next());
                sequence_.value[i] = x.value;
                sequence_.weight *= x.weight;
            }
            return sequence_;
        }
      private:
        Size dimensionality_;
        RNG rng_;
        mutable sample_type sequence_;
    };


On the other hand, we usually want Gaussian random numbers, not uniform ones. There are a number of ways to obtain them, but not all of them are equally suited to any given situation; for instance, those consuming M uniform random numbers to yield N Gaussian ones don't work with low-discrepancy generators (especially if M varies from draw to draw, as in rejection techniques). If you want to stay generic, the simplest method is probably to feed your uniform numbers to the inverse-cumulative Gaussian function, which yields one Gaussian per uniform; this is also the default choice in the library. The corresponding class template, InverseCumulativeRsg, is shown in listing 6.3. It takes the implementation of the inverse-cumulative function as a template argument, since there are a few approximations and no closed formula for it; most of the time you'll be fine with the one provided by the library, but at times one might choose a different one in order to trade accuracy for speed (or the other way around).

Listing 6.3: Sketch of the InverseCumulativeRsg class template.
    template <class USG, class IC>
    class InverseCumulativeRsg {
      public:
        typedef Sample<std::vector<Real> > sample_type;
        InverseCumulativeRsg(const USG& uniformSequenceGenerator,
                             const IC& inverseCumulative);
        const sample_type& nextSequence() const {
            typename USG::sample_type sample =
                uniformSequenceGenerator_.nextSequence();
            x_.weight = sample.weight;
            for (Size i = 0; i < dimension_; i++) {
                x_.value[i] = ICD_(sample.value[i]);
            }
            return x_;
        }
      private:
        USG uniformSequenceGenerator_;
        Size dimension_;
        mutable sample_type x_;
        IC ICD_;
    };


At this point, you might have started to have bad feelings about this framework. I have only covered random numbers, and yet we already have two or three choices to make in order to instantiate a generator. By the time we're done, we might have a dozen template arguments on our hands. Well, bear with me for a while. In a later post, I'll describe how the library tries to mitigate the problem.

A final note: as you might have noticed, the sequence generators shown in the listings keep a mutable sequence as a data member and fill it with the new draws in order to avoid copies. Of course, this prevents instances of such classes from being shared by different threads—although I'd be a very happy camper if all the problems QuantLib had with multithreading were like this one.