## Monday, August 26, 2013

### Chapter 5, part 4 of 5: Models and calibration

Hello again.

This is the fourth in a series of five posts (yes, I found out how many there are) covering chapter 5 of my book. The previous posts are here, here and here.

As I already mentioned (more than once, I seem to remember) during the first half of next week I'll be in London to teach my Introduction to QuantLib Development course. Drop me a line if you want to meet over a pint; probably it won't turn into a QuantLib user group, but it would be nice to meet some of you and chat about what you do with the library. I'll try and tweet the location. And yes, there are still places available for the course; click on this link if you're interested.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

## Parameterized models and calibration

### The CalibratedModel class

The implementation of the CalibratedModel class is shown in listing 5.5. Its core is the calibrate method, with most other features being there in order to support its execution.  (In fact, there are a couple of public methods that should be used, directly or indirectly, by calibrate alone and thus should belong to the protected section. I'm not sure that, in true Ellery Queen tradition, you have all the clues you need; but you can try looking at the code and guessing which ones.)

Listing 5.5: Implementation of the CalibratedModel class.
    class CalibratedModel : public virtual Observer,
                            public virtual Observable {
      public:
        CalibratedModel(Size nArguments)
        : arguments_(nArguments),
          constraint_(new PrivateConstraint(arguments_)),
          shortRateEndCriteria_(EndCriteria::None) {}

        void update() {
            generateArguments();
            notifyObservers();
        }
        Disposable<Array> params() const;
        virtual void setParams(const Array& params);
        void calibrate(
            const vector<shared_ptr<CalibrationHelper> >&,
            OptimizationMethod& method,
            const EndCriteria& endCriteria,
            const Constraint& constraint = Constraint(),
            const vector<Real>& weights = vector<Real>());
        EndCriteria::Type endCriteria();
      protected:
        virtual void generateArguments() {}
        vector<Parameter> arguments_;
        shared_ptr<Constraint> constraint_;
        EndCriteria::Type shortRateEndCriteria_;
      private:
        class PrivateConstraint;
        class CalibrationFunction;
    };

    Disposable<Array> CalibratedModel::params() const {
        Size size = 0, i;
        for (i=0; i<arguments_.size(); i++)
            size += arguments_[i].size();
        Array params(size);
        Size k = 0;
        for (i=0; i<arguments_.size(); i++) {
            for (Size j=0; j<arguments_[i].size(); j++, k++) {
                params[k] = arguments_[i].params()[j];
            }
        }
        return params;
    }

    void CalibratedModel::setParams(const Array& params) {
        Array::const_iterator p = params.begin();
        for (Size i=0; i<arguments_.size(); ++i) {
            for (Size j=0; j<arguments_[i].size(); ++j, ++p) {
                QL_REQUIRE(p!=params.end(), "too few parameters");
                arguments_[i].setParam(j, *p);
            }
        }
        QL_REQUIRE(p==params.end(), "too many parameters");
        generateArguments();
        notifyObservers();
    }

    void CalibratedModel::calibrate(
            const vector<shared_ptr<CalibrationHelper> >& instruments,
            OptimizationMethod& method,
            const EndCriteria& endCriteria,
            const Constraint& constraint,
            const vector<Real>& weights) {

        Constraint c = constraint.empty() ?
            *constraint_ :
            Constraint(/* combine the two constraints */);
        CalibrationFunction f(this, instruments, weights);
        Problem prob(f, c, params());
        shortRateEndCriteria_ = method.minimize(prob, endCriteria);
        setParams(prob.currentValue());
        notifyObservers();
    }

Instances of this class store a vector of Parameter instances, which for some reason are called arguments here; a constraint, to be applied to the set of their underlying parameters; and a member of the EndCriteria::Type enumeration, which tells us how the latest calibration ended (say, because it succeeded, or for reaching the maximum number of evaluations) and whose name still shows the original use of this class for short-rate models. As I mentioned, we gave this class little attention for quite a while.

The constructor takes a single argument specifying the number of model parameters and initializes the data members: the vector of parameters is given the passed size, the constraint is set to an instance of an inner PrivateConstraint class that I'll describe later, and the end criterion is set to None since the model is not yet calibrated.

Now, we have an interesting glitch here. This constructor is obviously meant to be used by derived classes, but is declared as public (probably an oversight). This, together with the fact that the class doesn't define any pure virtual function, makes it possible to create instances of CalibratedModel directly; however, such instances are unusable since they don't provide a way to set their parameters to anything useful (the stored Parameter instances are default-constructed and thus lack any behavior). Technically, fixing this glitch would break backward compatibility; but it might be argued that programs using this feature were broken anyway. We'll think about it in one of the next versions.

When the model receives a notification, the update method notifies the model's own observers after performing any needed calculation. These will be implemented by overriding the virtual generateArguments method. The name might be misleading, since it seems to suggest that parameter instances should be created here; but this can't be, since we don't want to override parameters that we might have already calibrated. (We don't want to recalibrate at each notification, either.) Instead, this method is used either to create parameters that don't need calibration (e.g., the term-structure parameter in some short-rate models, which follows the risk-free rate) or to perform some housekeeping, as we'll see in the continuation of the Heston model example.

The params and setParams methods are used to read and write the underlying parameters. To return them, the params method asks each of the stored Parameter instances for the number of underlying parameters it provides, creates an Array instance that can hold all of them, and collects their values (it can't add them to the array in a single loop because Array doesn't provide a push_back operation). In a similar way, the setParams method reads from the passed array and writes the required number of values in the stored Parameter instances.
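The flattening logic can be sketched without any QuantLib types. This is a simplified standalone version (plain std::vector instead of Array and Parameter, and a vector here does have insert, unlike Array) just to show the mechanics of collecting and distributing the underlying values:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Simplified stand-in for a Parameter: just its underlying values.
using Parameter = std::vector<double>;

// Mimics CalibratedModel::params: concatenate all underlying values.
std::vector<double> flatten(const std::vector<Parameter>& arguments) {
    std::vector<double> params;
    for (const Parameter& p : arguments)
        params.insert(params.end(), p.begin(), p.end());
    return params;
}

// Mimics CalibratedModel::setParams: write the values back into each
// parameter, checking that the count matches exactly.
void unflatten(std::vector<Parameter>& arguments,
               const std::vector<double>& params) {
    std::size_t k = 0;
    for (Parameter& p : arguments)
        for (double& x : p) {
            if (k == params.size())
                throw std::runtime_error("too few parameters");
            x = params[k++];
        }
    if (k != params.size())
        throw std::runtime_error("too many parameters");
}
```

A model with a one-value parameter and a two-value parameter flattens into a three-element array; round-tripping through unflatten restores each piece in order.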

Now, if you guessed that params and setParams are the methods that I'd rather have in the protected section, you can go and pour yourself the alcoholic beverage of your choice. (Drink responsibly. Also, don't drink and code.) These two methods should only be called from inside the calibrate method; client code isn't even able to know the number of the underlying parameters or which ones belong to each Parameter instance (unless its programmer reads the source code of the particular model used; that's cheating, though), so it shouldn't be able to modify them, and it has very little use for reading them, either.

As I said, the calibrate method is the focus of the class; and in true managerial fashion, it delegates most of the work to other objects. Simply put, it sets up a minimization problem so that solving it yields the calibrated set of parameters. The ingredients of the problem are: an instance of a class derived from OptimizationMethod (it might implement, for instance, the simplex method or Levenberg-Marquardt; more details on those are in appendix A), which is passed to the method by the calling code; the function to minimize, or rather a function object, which is an instance of the inner CalibrationFunction class and returns a measure of the calibration error (more on that shortly); and a constraint on the parameter values, which defaults to the instance of PrivateConstraint stored at construction and can optionally be combined with an additional constraint passed as an argument.

The calibrate method collects all of the above, instantiates the problem, and starts the minimization. When that is done, it saves the end criterion, sets the parameter values to those that minimize the error (that is, those returned by prob.currentValue()), and notifies any observers that something has changed. The end criterion (which might be that the minimization succeeded, or that it failed for a number of different reasons) can be retrieved by means of the endCriteria method. If I were to write this class now, I'd probably return it from calibrate instead; but I'm ambivalent about it, since it might also make sense to store it in the model.

As it is now, the calibrate method provides little exception safety; if an exception were thrown at some point during the calibration, the model would be left with the last parameter values tried by the minimizer (the very same values that probably caused the exception to be thrown). We might provide the strong guarantee by storing the parameter values before starting the minimization, catching any exceptions, and restoring the old parameter values before re-throwing; but it's probably best to set an appropriate end criterion instead, which is what happens when the calibration fails for other reasons.
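The save/restore version of the strong guarantee could be sketched like this. This is hypothetical code, not what the library does; the model and the minimization are reduced to the bare minimum needed to show the rollback:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Hypothetical model, reduced to the parameter array alone.
struct Model {
    std::vector<double> params;
};

// Run a calibration step with the strong guarantee: save the parameters
// first, and restore them if the minimization throws.
template <class Minimize>
void calibrateWithRollback(Model& model, Minimize minimize) {
    const std::vector<double> saved = model.params;  // copy before starting
    try {
        minimize(model);  // may leave model.params half-updated and throw
    } catch (...) {
        model.params = saved;  // roll back, then let the caller see the error
        throw;
    }
}
```

If the minimizer throws after having scribbled over the parameters, the caller still sees the exception, but the model is back in its pre-calibration state.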

The last pieces of functionality are implemented in the PrivateConstraint and CalibrationFunction inner classes, shown in listing 5.6.

Listing 5.6: Inner classes of the CalibratedModel class.
    class CalibratedModel::PrivateConstraint : public Constraint {
      private:
        class Impl : public Constraint::Impl {
            const vector<Parameter>& arguments_;
          public:
            Impl(const vector<Parameter>& arguments);
            bool test(const Array& params) const {
                for (Size i=0; i<arguments_.size(); i++) {
                    Array testParams(/* select the correct subset */);
                    if (!arguments_[i].testParams(testParams))
                        return false;
                }
                return true;
            }
        };
      public:
        PrivateConstraint(const vector<Parameter>& arguments);
    };

    class CalibratedModel::CalibrationFunction
        : public CostFunction {
      public:
        CalibrationFunction(
            CalibratedModel* model,
            const vector<shared_ptr<CalibrationHelper> >& instruments,
            const vector<Real>& weights)
        : model_(model, no_deletion), instruments_(instruments),
          weights_(weights) {}

        virtual Disposable<Array> values(const Array& params) const {
            model_->setParams(params);
            Array values(instruments_.size());
            for (Size i=0; i<instruments_.size(); i++) {
                values[i] = instruments_[i]->calibrationError()
                            *sqrt(weights_[i]);
            }
            return values;
        }
        virtual Real value(const Array& params) const;
      private:
        shared_ptr<CalibratedModel> model_;
        const vector<shared_ptr<CalibrationHelper> >& instruments_;
        vector<Real> weights_;
    };


The PrivateConstraint class works by collecting the constraints set to the stored Parameter instances. The logic is similar to that of params or setParams; it loops over the stored parameters, determines the subset of underlying parameters that belong to each one, and asks the Parameter instance to check them by calling its testParams method. The composite constraint is satisfied if and only if all the inner constraints are.
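The subset-dispatch logic can be shown in isolation. In this standalone sketch, plain std::vector stands in for Array and each parameter is just a size plus a predicate over its own values:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Each "parameter" is described by the number of underlying values it
// owns and a predicate testing those values against its own constraint.
struct ParamSpec {
    std::size_t size;
    std::function<bool(const std::vector<double>&)> test;
};

// Mimics PrivateConstraint::Impl::test: slice the full array into
// per-parameter subsets and require every individual test to pass.
bool testAll(const std::vector<ParamSpec>& arguments,
             const std::vector<double>& params) {
    std::size_t k = 0;
    for (const ParamSpec& a : arguments) {
        std::vector<double> subset(params.begin() + k,
                                   params.begin() + k + a.size);
        k += a.size;
        if (!a.test(subset))
            return false;
    }
    return true;
}
```

The composite test fails as soon as any single parameter's constraint is violated, just as in the listing.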

Finally, the CalibrationFunction class provides an estimate of the calibration error for a given set of parameters. Its constructor takes and stores a pointer to the model being calibrated, the set of quoted instruments used for the calibration, and a set of weights. For some reason, the pointer to the model is stored in a shared_ptr instance with a no-op deleter, so that the model is not deleted along with the calibration function; storing a raw pointer would have been enough.

The class inherits from CostFunction, which requires derived classes to implement both a values method returning a set of errors (in this case, one per quoted instrument) and a value method returning a single error estimate; a given optimization method might use the one or the other. The two work in a similar way, so I'm showing the implementation of just the first in the listing: they set the given parameters into the model, and then ask the stored helpers for the calibration error. For this to work, the helpers must be given a pricing engine that uses the model being calibrated. The setup of the entire thing would be something like this:
    shared_ptr<HestonModel> model(...);
    vector<shared_ptr<CalibrationHelper> > helpers(...);
    shared_ptr<PricingEngine> engine =
        make_shared<AnalyticHestonEngine>(model, ...);
    for (Size i=0; i<helpers.size(); ++i)
        helpers[i]->setPricingEngine(engine);
    model->calibrate(helpers, ...);
That is, the helpers use the engine to calculate their model prices, and in turn the engine uses the model. Thus, after the call to setParams inside the values method the model prices change and so do the corresponding calibration errors. The values method returns the set of distinct errors, while the value method returns the sum of their squares.
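The relation between the two methods boils down to a few lines. In this standalone sketch, the errors vector stands for whatever the helpers' calibrationError would return, and the weighting follows the listing above:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// One weighted error per instrument, as in CalibrationFunction::values.
std::vector<double> weightedErrors(const std::vector<double>& errors,
                                   const std::vector<double>& weights) {
    std::vector<double> v(errors.size());
    for (std::size_t i = 0; i < errors.size(); ++i)
        v[i] = errors[i] * std::sqrt(weights[i]);
    return v;
}

// The scalar version: the sum of the squared weighted errors.
double totalError(const std::vector<double>& errors,
                  const std::vector<double>& weights) {
    double sum = 0.0;
    for (std::size_t i = 0; i < errors.size(); ++i)
        sum += errors[i] * errors[i] * weights[i];
    return sum;
}
```

Note that each element of the vector version carries the square root of its weight, so that squaring and summing the elements gives back exactly the scalar version.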

A final note: currently, the CalibrationFunction class is declared as a friend of CalibratedModel. This is actually not necessary, since it only accesses the public setParams method; and if we were using C++11, it wouldn't be necessary even if setParams were protected. According to the new standard, inner classes have access to all members of their enclosing class, public or not.

Next post: an example of a model.

## Monday, August 19, 2013

### Chapter 5, part 3 of n: Model parameters

Welcome back.

This is the third in a series of posts on chapter 5 of my book; parts 1 and 2 are here and here. It is still being written as you read this, which brings novelty, excitement, and the possibility that I fall behind schedule. We'll see how this turns out.

From the 2nd to the 4th of September I'll be in London to teach my Introduction to QuantLib Development course. One of those nights I'll go grab a pint or two with a few people; drop me a line if you want to join us. Assuming I find a data connection, I'll try and tweet the place. (And yes, places for the course are still available; go to this link if you're interested).

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

## Parameterized models and calibration

### Parameters

There is an ambiguity when we say that a model has a given number of parameters. If they are constant, all is well; for instance, we can safely say that the Hull-White model has two parameters alpha and sigma. What if one of the two was time-dependent, though? In turn, it would have some kind of parametric form. Conceptually, it would still be one single model parameter; but it might add several numbers to the set to be calibrated.

The Parameter class, shown in listing 5.3, takes the above into account—and, unfortunately, embraces the ambiguity: it uses the term "parameter" for both the instances of the class (that represent a model parameter, time-dependent or not) and for the numbers underlying their parametric forms. Our bad: you'll have to be careful not to get confused in the discussion that follows.

Listing 5.3: Sketch of the Parameter class.
    class Parameter {
      protected:
        class Impl {
          public:
            virtual ~Impl() {}
            virtual Real value(const Array& params, Time) const = 0;
        };
        boost::shared_ptr<Impl> impl_;
      public:
        Parameter();
        const Array& params() const;
        void setParam(Size i, Real x) { params_[i] = x; }
        bool testParams(const Array& params) const;
        Size size() const { return params_.size(); }
        Real operator()(Time t) const {
            return impl_->value(params_, t);
        }
      protected:
        Parameter(Size size,
                  const boost::shared_ptr<Impl>& impl,
                  const Constraint& constraint);
        Array params_;
        Constraint constraint_;
    };


Specialized behavior for different parameters will be implemented in derived classes (I'll show you a few of those shortly). However, the way we go about it is somewhat unusual: instead of declaring a virtual method directly, the Parameter class is given an inner class Impl, which declares a pure virtual value method. There's method in this madness; but let me gloss over it for now. (Someone on the Wilmott forums suggested that the reason is job security: if we obfuscate the code enough, nobody else will be able to maintain it. I can see his point, but this is not the reason.) The idiom is used in other classes, and will be explained in appendix A.

Instances of Parameter represent, in principle, a time-dependent parameter, and store an array params_ which contains the values of the parameters used to describe its functional form. Most of the interface deals with these underlying parameters: the params method returns the whole array, the setParam method allows one to change any of the values, and the size method returns their number.

Parameter instances also store a constraint that limits the range of values that the underlying parameters can take; the testParams method provides the means to check their current values against the constraint. Finally, operator() returns the value of the represented parameter (I'm not sure how I should call it. The main parameter? The outer parameter?) as a function of time, given the current values of the underlying parameters; as I mentioned, the actual implementation is delegated to the stored instance of the inner Impl class.
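Stripped of QuantLib specifics, the delegation idiom boils down to something like the following standalone sketch, using a constant parameter as the example implementation:

```cpp
#include <cassert>
#include <memory>
#include <utility>
#include <vector>

// Reduced version of the Parameter idiom: the outer class owns the
// underlying values, the inner Impl defines the time dependence.
class Parameter {
  public:
    struct Impl {
        virtual ~Impl() = default;
        virtual double value(const std::vector<double>& params,
                             double t) const = 0;
    };
    Parameter(std::vector<double> params, std::shared_ptr<Impl> impl)
    : params_(std::move(params)), impl_(std::move(impl)) {}
    // operator() delegates to the stored implementation.
    double operator()(double t) const { return impl_->value(params_, t); }
  private:
    std::vector<double> params_;
    std::shared_ptr<Impl> impl_;
};

// A constant parameter: one underlying value, returned at any time.
struct ConstantImpl : Parameter::Impl {
    double value(const std::vector<double>& params, double) const override {
        return params[0];
    }
};
```

Swapping in a different Impl changes the time dependence without touching the outer class or its clients.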

Finally, the class declares a couple of constructors. One is protected, and allows derived classes to initialize their own instances. The other is public; it creates instances without behavior (and therefore useless) but allows us to use Parameter with containers such as std::vector (I think this is no longer necessary in C++11, but we are still living in the past).

Listing 5.4 shows a few examples of actual parameters; they inherit from the Parameter class and declare inner Impl classes that inherit from Parameter::Impl and implement the required behavior.

Listing 5.4: Sketch of a few classes inherited from Parameter.
    class ConstantParameter : public Parameter {
        class Impl : public Parameter::Impl {
          public:
            Real value(const Array& params, Time) const {
                return params[0];
            }
        };
      public:
        ConstantParameter(const Constraint& constraint)
        : Parameter(1, /* ... */) {}
    };

    class NullParameter : public Parameter {
        class Impl : public Parameter::Impl {
          public:
            Real value(const Array&, Time) const {
                return 0.0;
            }
        };
      public:
        NullParameter() : Parameter(0, /* ... */) {}
    };

    class PiecewiseConstantParameter : public Parameter {
        class Impl : public Parameter::Impl {
          public:
            Impl(const std::vector<Time>& times);
            Real value(const Array& params, Time t) const {
                for (Size i=0; i<times_.size(); i++) {
                    if (t<times_[i])
                        return params[i];
                }
                return params.back();
            }
          private:
            std::vector<Time> times_;
        };
      public:
        PiecewiseConstantParameter(const std::vector<Time>& times,
                                   const Constraint& constraint)
        : Parameter(times.size()+1, /* ... */) {}
    };


The first represents a parameter which is constant in time; this is what we usually think about when we talk of a parameter, and is possibly the most used. The array of internal parameters has just one element (as seen by the 1 passed to the Parameter constructor), and that's what the implementation returns independently of the passed time.

The second is a null parameter; its value is supposed to be 0 and must not be calibrated. In this case, the stored array has no elements (again, see the 0 passed to the Parameter constructor) since nothing will move during calibration. This could probably be a specific case of a more general FixedParameter class, whose fixed value could be different from 0.

Finally, the third class represents a time-dependent parameter. It is modeled as piecewise constant between any two consecutive times in a given set; thus, if the set has n times, we'll have n + 1 different values (including the one before the first time and the one after the last). The implementation is a simple linear search, as we don't expect the times to be too many or we'd be likely to over-calibrate.
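A quick standalone check of the lookup (same logic as in the listing, with plain vectors): given n times, the values before the first time, between consecutive times, and after the last come from the n + 1 stored parameters.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Same search as PiecewiseConstantParameter::Impl::value: the i-th
// parameter applies up to times[i], the last one after all the times.
double piecewiseValue(const std::vector<double>& times,
                      const std::vector<double>& params,
                      double t) {
    for (std::size_t i = 0; i < times.size(); ++i)
        if (t < times[i])
            return params[i];
    return params.back();
}
```

With two times there are three values: one before the first time, one between the two, and one after the last.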

The library implements other parameter classes (and we could define others; for instance, using other parameterizations of time dependence) but I won't keep you any longer. At this point (that is, in next post) we need to turn to the class that—I don't really have a less awkward way to say it—models a calibrated model.

## Monday, August 12, 2013

### Chapter 5, part 2 of n: Example

This is the second in a series of posts covering new content from my book, namely, chapter 5. Part 1 is here.

Registration for the next Introduction to QuantLib Development course is still open: it is the three-day course that I teach based on the contents of this blog and of my book (plus several exercises; bring your compiler) and you can find more information, a brochure and a booking form by clicking on this link.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

## Parameterized models and calibration

### Example: the Heston model

In this chapter, I'll use the Heston model as an example. Here, I'll describe the helper class; the model class will follow after the discussion of the CalibratedModel class in one of the next posts.

The HestonModelHelper class is shown in listing 5.2. It models a European option, and right here we have a code smell; the name says nothing of the nature of the instrument, and instead it refers to a Heston model that is nowhere to be seen in the implementation of the helper. I, for one, didn't have the issue clear when this class was added.

Listing 5.2: Implementation of the HestonModelHelper class.
    class HestonModelHelper : public CalibrationHelper {
      public:
        HestonModelHelper(
            const Period& maturity,
            const Calendar& calendar,
            const Real s0,
            const Real strikePrice,
            const Handle<Quote>& volatility,
            const Handle<YieldTermStructure>& riskFreeRate,
            const Handle<YieldTermStructure>& dividendYield,
            CalibrationHelper::CalibrationErrorType errorType
                              = CalibrationHelper::RelativePriceError)
        : CalibrationHelper(volatility, riskFreeRate, errorType),
          dividendYield_(dividendYield),
          exerciseDate_(calendar.advance(
              riskFreeRate->referenceDate(), maturity)),
          tau_(riskFreeRate->dayCounter().yearFraction(
              riskFreeRate->referenceDate(), exerciseDate_)),
          s0_(s0), strikePrice_(strikePrice) {
            boost::shared_ptr<StrikedTypePayoff> payoff(
                new PlainVanillaPayoff(Option::Call, strikePrice_));
            boost::shared_ptr<Exercise> exercise(
                new EuropeanExercise(exerciseDate_));
            option_ = boost::shared_ptr<VanillaOption>(
                new VanillaOption(payoff, exercise));
            marketValue_ = blackPrice(volatility->value());
        }
        Real modelValue() const {
            option_->setPricingEngine(engine_);
            return option_->NPV();
        }
        Real blackPrice(Real volatility) const {
            return blackFormula(Option::Call,
                /* ...volatility, stored parameters... */);
        }
      private:
        boost::shared_ptr<VanillaOption> option_;
        // other data members, not shown
    };


Anyway: the implementation is not complex. The constructor takes information on the underlying contract, such as maturity and strike, as well as the quoted volatility and other needed market data; it also allows one to choose how to calculate the calibration error. Some data are passed to the base-class constructor, while others are stored in data members; finally, the data are used to instantiate a VanillaOption instance and to calculate its market price, both of which are stored in the corresponding data members.

It's not easy to spot it at first sight, but there's a small problem in the initialization of the instance. The time to maturity tau_ is calculated in the constructor as the time between today's date and the exercise date. Unfortunately, this doesn't take into account the fact that today's date might change. In order to work correctly even in this case, this class should recalculate the values of tau_ and marketValue_ each time they might have changed—that is, inside an override of the performCalculations method.

The rest is simple. As I mentioned before, the modelValue method is implemented by setting the model-based engine to the instrument and asking it for its NPV (this would work for any model, not just the Heston one, which explains my discontent at the class name). The blackPrice method just returns the result of the Black-Scholes formula based on the value of the passed volatility and of the other stored parameters. (The price could also be obtained by setting a Black engine to the stored instrument and asking for its NPV; this is the route chosen by other helpers in the library.) Finally, the addTimesTo method does nothing; this is actually the only Heston-specific part, since we don't have a tree-based Heston model. To be fully generic, we should return here the exercise time (that would be the tau_ data member discussed above) so that it can be used by some other model.

That's all. The machinery of the base class will use these methods and provide the functionality that is needed for the calibration. But before going into that, I need to make a short detour in next post.

#### Aside: breaking assumptions

What happens if the assumptions baked in the code don't hold—for instance, because the volatility is not quoted by means of a Black model, or the instrument is not quoted in terms of its volatility at all?

In this case, the machinery still works, but we're left with misnomers that will make the code difficult to understand. For instance, if the formula used to pass from volatility to price doesn't come from a Black model, we can still implement it in the blackPrice method, but the next one that reads the code will be left scratching her head. And if the instrument price is quoted directly instead of the volatility, we'll have to store the price in the volatility_ data member and implement the blackPrice method rather puzzlingly as:
    Real blackPrice(Real volatility) const {
        return volatility;
    }


## Monday, August 5, 2013

### Chapter 5, part 1 of n: Parameterized models and calibration

Hello everybody.

New series of posts, this time on chapter 5. This is content that I haven't yet published in book form: in fact, I'm still writing it (as you might have guessed from the title of the post: I'm not sure how many posts this series will last). I look forward to your feedback.

Registration for the next Introduction to QuantLib Development course is still open: it is the three-day course that I teach based on the contents of this blog and of my book (plus several exercises; bring your compiler) and you can find more information, a brochure and a booking form by clicking on this link.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

## Parameterized models and calibration

Critics of the practice of calibration argue that its very existence is a sign of a problem. After all, if physicists had to recalibrate the universal constant of gravitation yearly, it would probably mean that the formula is invalid (or that there's something wrong with the idea of natural laws altogether, which is way scarier). This doesn't seem to be the case for physics; the jury is still out for quantitative finance.

For better or for worse, QuantLib supports calibration to market data because, well, that's what people need to do. Much like C++ or Smith & Wesson, we might add some safety but we assume that users know what they're doing, even if this gives them the possibility to shoot their foot off.

The calibration framework is one of the oldest parts of the library and has received little attention in the last few years; so it's likely that, as I write this chapter, I'll find and describe a number of things that could be improved—by breaking backward compatibility, I'm afraid, so they'll have to wait. In the meantime, you can learn from our blunders.

Onwards. The framework enables us to write code such as, for instance,
    HullWhite model(termStructure);
    Simplex optimizer(0.01);
    model.calibrate(marketSwaptions,
                    optimizer,
                    EndCriteria(maxIterations, ...));
    check(model.endCriteria());
    // go on using the model

The above works because of the interplay of two classes called CalibratedModel and CalibrationHelper; the HullWhite class inherits from the former, while the elements of marketSwaptions are instances of a class that inherits from the latter (we're also using accessory classes, such as Simplex, that implement optimization methods; but I'll postpone their description to appendix A). Since they work together, describing either class before the other will cause some vagueness and hand-waving. Bear with me as I try to minimize the inconvenience (pun not intended).

### The CalibrationHelper class

I'll describe the CalibrationHelper class first, since it depends only indirectly on the model (in fact, it doesn't use the model interface directly at all). Its implementation is shown in listing 5.1.

Listing 5.1: Implementation of the CalibrationHelper class.
    class CalibrationHelper : public LazyObject {
      public:
        enum CalibrationErrorType {
            RelativePriceError, PriceError, ImpliedVolError };

        CalibrationHelper(
            const Handle<Quote>& volatility,
            const Handle<YieldTermStructure>& termStructure,
            CalibrationErrorType calibrationErrorType
                              = RelativePriceError);

        void performCalculations() const {
            marketValue_ = blackPrice(volatility_->value());
        }
        virtual Real blackPrice(Volatility volatility) const = 0;
        Real marketValue() const {
            calculate(); return marketValue_;
        }

        virtual Real modelValue() const = 0;
        virtual Real calibrationError();
        void setPricingEngine(
                const shared_ptr<PricingEngine>& engine) {
            engine_ = engine;
        }

        Volatility impliedVolatility(Real targetValue,
                                     Real accuracy,
                                     Size maxEvaluations,
                                     Volatility minVol,
                                     Volatility maxVol) const;
        virtual void addTimesTo(list<Time>& times) const = 0;

      protected:
        mutable Real marketValue_;
        Handle<Quote> volatility_;
        Handle<YieldTermStructure> termStructure_;
        shared_ptr<PricingEngine> engine_;
      private:
        class ImpliedVolatilityHelper;
        const CalibrationErrorType calibrationErrorType_;
    };

    class CalibrationHelper::ImpliedVolatilityHelper {
      public:
        ImpliedVolatilityHelper(const CalibrationHelper& helper,
                                Real value);
        Real operator()(Volatility x) const {
            return value_ - helper_.blackPrice(x);
        }
        ...
    };

    Volatility CalibrationHelper::impliedVolatility(
            Real targetValue, Real accuracy, Size maxEvaluations,
            Volatility minVol, Volatility maxVol) const {
        ImpliedVolatilityHelper f(*this, targetValue);
        Brent solver;
        solver.setMaxEvaluations(maxEvaluations);
        return solver.solve(f, accuracy, volatility_->value(),
                            minVol, maxVol);
    }

    Real CalibrationHelper::calibrationError() {
        Real error;
        switch (calibrationErrorType_) {
          case RelativePriceError:
            error = fabs(marketValue()-modelValue())/marketValue();
            break;
          case PriceError:
            error = marketValue() - modelValue();
            break;
          case ImpliedVolError: {
              const Real modelPrice = modelValue();
              // check for bounds, not shown
              Volatility implied = this->impliedVolatility(
                  modelPrice, 1e-12, 5000, 0.001, 10);
              error = implied - volatility_->value();
            }
            break;
          default:
            QL_FAIL("unknown Calibration Error Type");
        }
        return error;
    }


The purpose of the class is similar—the name is a giveaway, isn't it?—to that of the BootstrapHelper class, described in chapter 3 (which, unfortunately, didn't show up as posts yet). It models a single quoted instrument (a "node" of the model, whatever that might be) and provides the means to calculate the instrument value according to the model and to check how far off it is from the market value. Actually, the value isn't the only possibility; we'll get to this in a bit.

CalibrationHelper inherits from LazyObject, which you know by now. The reason is that it might need some preliminary calculation: the target value of the optimization (say, the price of the instrument) might not be available directly, for instance because the market quotes the corresponding implied volatility instead. The calculation to go from the one to the other must be done just once before the calibration, and is redone lazily when the market quote changes.

The constructor takes three arguments—each one maybe a bit less generic than I'd like, even though I only have minor complaints. The first argument is a handle to the quoted volatility; the assumption here is that, whatever the model is, that's how the market quotes the relevant instruments.

The second argument is a handle to a term structure, which we assume is needed for the calculations. That's probably true, but it's only ever used by derived classes, not here; so I would have preferred it to be declared there, along with any other data they might need.

Finally, the third argument specifies how the calibration error is defined. It's an enumeration that can take one of three values, meaning that the error is, respectively, the relative difference between the market price and the model price, the difference between the prices themselves, or the difference between the quoted volatility and the (Black) volatility implied by the model price. In principle, we might have used a Strategy pattern instead; but I'm not sure the generalization would be worth the added complexity, especially as I don't have a possible fourth case in mind.

The body of the constructor is not shown here for brevity, but it does the usual things: it stores its arguments in the corresponding data members and registers with those that might change.

As I said, the LazyObject machinery is used when market data change; accordingly, the required performCalculations method transforms the quoted volatility into a market price and stores it. The actual calculation depends on the particular instrument, so it's delegated to a purely virtual blackPrice method; the evident assumption is that a Black model was used to quote the market volatility. Finally, a marketValue method exposes the calculated price.

Note that, unfortunately, we need all three of the above methods. Yes, I know, it bugs me too. Obviously, performCalculations is required by the LazyObject interface; but what about blackPrice and marketValue? Can't we collapse them into one? Well, no. We want the marketValue inspector to be lazy, and therefore it must call the calculate method, which in turn triggers performCalculations; thus, it can't be the same as blackPrice, which is called from inside performCalculations. (They also have different interfaces, since blackPrice takes the volatility as an argument; but that could have been managed by giving it a default argument falling back to the stored volatility value.)
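
To make the interplay concrete, here is a minimal standalone sketch, not the library code: the class name, the stand-in price function, and the simplified caching logic are all made up for illustration, with calculate standing in for the full LazyObject machinery.

```cpp
#include <cassert>
#include <cmath>

// Standalone sketch (not QuantLib code) of the lazy market-value logic.
class HelperSketch {
  public:
    explicit HelperSketch(double quotedVol) : quotedVol_(quotedVol) {}

    // Pure virtual in the real class; here, a made-up monotonic
    // function stands in for the Black formula.
    double blackPrice(double vol) const { return 100.0 * std::tanh(vol); }

    // Lazy inspector: triggers the calculation if needed,
    // then returns the cached result.
    double marketValue() const { calculate(); return marketValue_; }

    // Simulates a quote update: invalidates the cached price.
    void setQuotedVol(double vol) { quotedVol_ = vol; calculated_ = false; }

  private:
    // Stands in for the LazyObject machinery: recompute only when needed.
    void calculate() const {
        if (!calculated_) { performCalculations(); calculated_ = true; }
    }
    // Required by the LazyObject interface in the real class.
    void performCalculations() const { marketValue_ = blackPrice(quotedVol_); }

    double quotedVol_;
    mutable bool calculated_ = false;
    mutable double marketValue_ = 0.0;
};
```

Calling marketValue twice in a row performs the conversion only once; changing the quote invalidates the cache, so the next call recomputes it.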

The next set of methods deals with the model-based calculations which are executed during calibration. The purely virtual modelValue, when implemented in derived classes, must return the value of the instrument according to the model; the calibrationError method, which I'll describe in more detail later, returns some kind of difference between the market and model values; and the setPricingEngine method brings the model into play.

The idea here is that the engine we're storing has a pointer to the model, and can be used to price the instrument represented by the helper, thus giving us a model value. All the helpers currently in the library implement the modelValue method as a straightforward translation of this idea: they set the given engine to the instrument they store, and ask it for its NPV (you'll see it spelled out in the next post).

In fact, the implementations are so similar that I wonder if we could have provided a common one in the base class. Had we added a pointer to an Instrument instance as a data member, the method would just be:
    Real CalibrationHelper::modelValue() const {
        instrument_->setPricingEngine(engine_);
        return instrument_->NPV();
    }

and with a bit more care, we could have set the pricing engine just once, at the beginning of the calibration, instead of each time the model value is recalculated. The downside would have been that the results of the methods depended on the order in which they were called; for instance, a call to modelValue right after a call to marketValue might have returned the Black price, if the latter had set a different engine to the instrument. According to Murphy's law, this would have bitten us sooner or later.

A last remark about the setPricingEngine method: when I saw it, I first thought that there was a bug in it, and that it should register with the engine. Actually, it shouldn't: the engine is used for the model value only, and must not trigger the LazyObject machinery that recalculates the market value.

The last two methods are utilities that can be used to help the calibration. The impliedVolatility method uses a one-dimensional solver to invert the blackPrice method; that is, to find the volatility that yields the corresponding Black price. Its implementation is shown in the listing along with the sketch of an accessory inner class ImpliedVolatilityHelper that provides the objective function for the solver.
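
For the curious, the inversion can be sketched in isolation. The following standalone toy is not the library code: it uses plain bisection where the library uses a Brent solver, and a made-up monotonic blackPriceStub in place of the helper's blackPrice method.

```cpp
#include <cassert>
#include <cmath>
#include <stdexcept>

// Made-up monotonic stand-in for the helper's blackPrice method.
inline double blackPriceStub(double vol) { return 100.0 * std::tanh(vol); }

// Standalone sketch of the inversion performed by impliedVolatility:
// find the volatility whose Black price matches the target value.
double impliedVolSketch(double targetValue, double accuracy,
                        double minVol, double maxVol) {
    double lo = minVol, hi = maxVol;
    if ((blackPriceStub(lo) - targetValue)
            * (blackPriceStub(hi) - targetValue) > 0.0)
        throw std::runtime_error("target price not bracketed");
    while (hi - lo > accuracy) {
        double mid = 0.5 * (lo + hi);
        // keep the half-interval whose endpoints still bracket the target
        if ((blackPriceStub(lo) - targetValue)
                * (blackPriceStub(mid) - targetValue) <= 0.0)
            hi = mid;
        else
            lo = mid;
    }
    return 0.5 * (lo + hi);
}
```

Feeding it a price produced by blackPriceStub recovers the original volatility to within the requested accuracy, which is all the calibrationError method needs.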

The addTimesTo method is to be used with tree-based models (we'll get to those in chapter 7). It adds to the passed list a set of times that are of importance to the underlying instrument (e.g., payment times, exercise times, or fixing times) and that therefore must be included in any time grid to be used. If I were writing it now, I'd just return the times instead of extending the given list, but that's a minor point. Another point is that, as I said, this method is tied to a particular category of models, namely tree-based ones, and might not make sense for all helpers. Thus, I would provide an empty default implementation so that derived classes aren't forced to override it. However, I wouldn't try to move this method to some other class—say, a derived class meant to model helpers for tree-based models. On the one hand, it would add complexity for almost no upside; on the other hand, we can't even categorize helpers in this way: any given helper could be used with both kinds of models.
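
As a rough standalone sketch of the idea (the class, its times, and the buildGrid function are all made up for illustration), each helper contributes its instrument's times to a shared list, which the caller then sorts and de-duplicates to form the grid:

```cpp
#include <cassert>
#include <list>

// Standalone sketch (not QuantLib code) of the addTimesTo idea.
struct HelperWithTimesSketch {
    // Made-up mandatory times for some hypothetical instrument.
    std::list<double> exerciseTimes{1.0};
    std::list<double> paymentTimes{1.5, 2.0, 2.5, 3.0};
    // Extends the passed list, as the real method does.
    void addTimesTo(std::list<double>& times) const {
        times.insert(times.end(), exerciseTimes.begin(), exerciseTimes.end());
        times.insert(times.end(), paymentTimes.begin(), paymentTimes.end());
    }
};

// Collects the times from all helpers into a single grid.
std::list<double> buildGrid(const std::list<HelperWithTimesSketch>& helpers) {
    std::list<double> times;
    for (const auto& h : helpers)
        h.addTimesTo(times);
    times.sort();    // mandatory times in increasing order...
    times.unique();  // ...without duplicates
    return times;
}
```

Two helpers quoting the same instrument would contribute the same times twice, but the final sort-and-unique pass leaves each time in the grid only once.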

Finally, the listing shows the implementation of the calibrationError method. It is called by the calibration routine every time new parameters are set to the model, and returns an error that tells us how far we are from the market data. The definition of "how far" is given by the stored enumeration value; it can be the relative difference between the model and market prices, their difference, or the difference between the quoted volatility and the one implied by the model price.

In the next post: an example of a calibration helper.