Forum Replies Created
Peter Caspers (Keymaster)
Hi, that’s correct. At the moment the calibration of the JY model in the scripted trade pricing engine is limited to using ATM CPI caps/floors to determine the CPI process volatility, together with a hardcoded real rate volatility. It is generally easy to add more calibration strategies. What exactly do you have in mind, i.e. which calibration instruments do you want to use for the joint calibration of the CPI and real rate volatilities?
Peter Caspers (Keymaster)
Hi, they are in the log file (set the log level to 255, let me know if you can’t find them). At the moment we don’t write out a dedicated report. This would make sense though!
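For reference, in the Setup section of ore.xml this is controlled by the logMask parameter. A minimal sketch along the lines of the ORE example files (file name and the other parameters depend on your setup):

    <Setup>
      <Parameter name="logFile">log.txt</Parameter>
      <!-- bitmask: 255 enables all log levels -->
      <Parameter name="logMask">255</Parameter>
    </Setup>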
Peter Caspers (Keymaster)
Hi – apologies, I don’t quite get what you mean. Can you elaborate a bit on what you want to do?
Peter Caspers (Keymaster)
Yes, we do support CC_BASIS_SWAPs on RFRs.
Peter Caspers (Keymaster)
Hi FiveEights,
thank you. I agree we should add a section to the user guide explaining all this in more detail and with examples.
It is correct that we do not have stocks in ORE. For the purposes of valuation, sensitivity and VaR calculation it seems sensible, though, to use forwards (with maturity = reference date) instead, as sketched below.
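Schematically, such a proxy position could look like the following trade, using the EquityForward trade type (the exact schema is described in the user guide; the values here are placeholders):

    <Trade id="SP5_StockProxy">
      <TradeType>EquityForward</TradeType>
      <Envelope>...</Envelope>
      <EquityForwardData>
        <LongShort>Long</LongShort>
        <!-- maturity = reference date and zero strike, so NPV = quantity * spot -->
        <Maturity>2023-01-31</Maturity>
        <Name>SP5</Name>
        <Currency>USD</Currency>
        <Strike>0</Strike>
        <Quantity>1000</Quantity>
      </EquityForwardData>
    </Trade>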
Best Regards
Peter

Peter Caspers (Keymaster)
Hi,
let’s look at Example_15, which provides a VaR calculation for a portfolio of trades, and focus on EQ_CALL_SP5, an equity option corresponding to the case you are looking at. This trade has sensitivities to
– the USD discount curve
– the SP5 equity forecast curve
– the SP5 equity spot
– the SP5 equity volatility surface
– and finally the USDEUR FX spot rate, since the reporting currency is EUR.

To compute a parametric VaR you can provide a covariance matrix for all of the risk factors that influence the trade’s NPV. Notice that the covariance matrix contains the variances of the single risk factors on the diagonal, so even if we only consider one single risk factor it makes sense to provide a (1×1) covariance matrix, the only entry being the variance of this factor. You do not need to provide each cell of the covariance matrix; missing values are assumed to be zero. If the variance (diagonal element) is zero for a risk factor with a non-zero sensitivity, a warning is logged, though, indicating that the covariance matrix specification is incomplete.
As an example, let’s take the equity spot sensitivity (from the output file sensitivity.csv)
EQ_CALL_SP5,EquitySpot/SP5/0/spot,21.475600,179736.37,7599.69,156.28
for which in covariance.csv we have the variance
EquitySpot/SP5/0/spot EquitySpot/SP5/0/spot 100
which has the following interpretation: Since we are computing equity spot sensitivities by applying a 1% relative shift, as can be seen in the input file sensitivity.xml
    <!-- Equity spot shifts -->
    ...
    <EquitySpot equity="SP5">
      <ShiftType>Relative</ShiftType>
      <ShiftSize>0.01</ShiftSize>
    </EquitySpot>
the variance is expected to be consistent with this shift type, i.e. it is the variance of relative movements of the equity spot expressed in percent. In other words, you could estimate the variance from a historical time series by computing the sample variance of the daily percentage relative changes 100.0 * (Spot(t+1) – Spot(t)) / Spot(t) of the equity spot. If you plug this value into covariance.csv, the resulting value at risk will be a 1-day value at risk w.r.t. the given confidence level.
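As a minimal sketch of this estimation (plain C++; the function name is hypothetical and not part of ORE):

    #include <cstddef>
    #include <numeric>
    #include <vector>

    // Unbiased sample variance of the daily percentage relative changes
    // 100.0 * (Spot(t+1) - Spot(t)) / Spot(t), i.e. the number one would
    // put on the diagonal of covariance.csv for a 1-day horizon.
    // Assumes at least two spot observations.
    double spotVariancePercent(const std::vector<double>& spots) {
        std::vector<double> changes;
        for (std::size_t t = 0; t + 1 < spots.size(); ++t)
            changes.push_back(100.0 * (spots[t + 1] - spots[t]) / spots[t]);
        double mean = std::accumulate(changes.begin(), changes.end(), 0.0) / changes.size();
        double sumSq = 0.0;
        for (double c : changes)
            sumSq += (c - mean) * (c - mean);
        return sumSq / (changes.size() - 1);
    }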
If you instead want to compute a, say, 10-day value at risk, you could for example
– estimate the variance of 10-day changes 100.0 * (Spot(t+10) – Spot(t)) / Spot(t) directly on your time series, using an overlapping or non-overlapping 10-day window,
– use your 1-day estimate for the variance and scale (multiply) it by 10, following the square-root-of-time rule (notice we scale a variance here, not a standard deviation, so no square root shows up), or
– use another method to arrive at an estimate for the 10-day variance.
This is what is meant by “no scaling is applied” in the user guide, i.e. you directly provide the variance consistent with the horizon of the value at risk calculation. In the example covariance.csv we have
EquitySpot/SP5/0/spot EquitySpot/SP5/0/spot 100
which means that the variance of the equity spot risk factor is 100, i.e. the standard deviation of relative equity spot moves is 10%. If we wanted to specify a correlation with another equity, say “Lufthansa”, assume first that the variance of the Lufthansa spot is 200, i.e.
EquitySpot/Lufthansa/0/spot EquitySpot/Lufthansa/0/spot 200
Then if the correlation between the two spots’ relative movements is 30% we would add a line
EquitySpot/SP5/0/spot EquitySpot/Lufthansa/0/spot 42.4264
because the covariance is the correlation times the standard deviations of SP5 and Lufthansa respectively, i.e. 0.3 * sqrt(100) * sqrt(200) ≈ 42.4264, as the small check below illustrates. Notice that in the covariance file of Example_15 no non-zero correlations are specified.
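A quick check of that arithmetic (plain C++; variable names are hypothetical):

    #include <cmath>
    #include <cstdio>

    int main() {
        double varSP5 = 100.0; // variance of SP5 relative moves, in percent squared
        double varLHA = 200.0; // variance of Lufthansa relative moves
        double rho = 0.3;      // correlation of the two spots' relative moves
        // covariance = correlation * stdev(SP5) * stdev(Lufthansa)
        std::printf("%.4f\n", rho * std::sqrt(varSP5) * std::sqrt(varLHA)); // 42.4264
    }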
Does that make sense to you?
Best Regards
Peter
Peter Caspers (Keymaster)
Hi Miro,
yes, you are correct: to calibrate both a constant reversion and a stepwise volatility you will need two different swaptions with the same expiry to satisfy the requirements of the LM optimiser. And yes, maybe we should add a check and throw an error message that is clearer than the one coming from the optimiser itself.
However, ORE also allows you to use an exogenous mean reversion and only bootstrap the model volatility; in fact, this is the way the examples calibrate their LGM models, and in a sense it is also the more natural approach: since the mean reversion can be seen as a parameter determining the inter-temporal correlation structure of the model, and Bermudan swaptions are sensitive to this (while Europeans are not), one can argue that the reversion should be implied from Bermudan swaption premiums. If no information on Bermudan swaption premiums is available, the reversion is, in my opinion, a parameter that is not easy to determine in a reasonable way.
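For illustration, an LGM configuration with a fixed reversion and a bootstrapped piecewise volatility could look roughly like the following, modelled on the LGM sections in the example simulation.xml files (grid and values are placeholders):

    <LGM ccy="EUR">
      <CalibrationType>Bootstrap</CalibrationType>
      <Volatility>
        <Calibrate>Y</Calibrate>
        <VolatilityType>Hagan</VolatilityType>
        <ParamType>Piecewise</ParamType>
        <TimeGrid>1.0, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0</TimeGrid>
        <InitialValue>0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01</InitialValue>
      </Volatility>
      <Reversion>
        <Calibrate>N</Calibrate>
        <ReversionType>HullWhite</ReversionType>
        <ParamType>Constant</ParamType>
        <TimeGrid/>
        <InitialValue>0.03</InitialValue>
      </Reversion>
    </LGM>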
We very much appreciate contributions to the ORE project, so feel free to open a ticket on GitHub and/or provide a solution for the issues you observe.
Best Regards
Peter

Peter Caspers (Keymaster)
Hi Miro,
in ORE we work around these issues by a) interpolating fixings backward flat between simulation dates and b) moving exercise dates effectively to the next simulation date. If you are interested in the details you can look at
a) https://github.com/OpenSourceRisk/Engine/blob/master/OREAnalytics/orea/simulation/fixingmanager.cpp
b) https://github.com/OpenSourceRisk/Engine/blob/master/OREData/ored/portfolio/optionwrapper.cpp

Of course this method introduces a bias in both cases, and the simulation grid has to be fine enough to control the resulting error. In the context of exposure simulation using regression techniques (a.k.a. American Monte Carlo), which you will probably resort to for callable exotics exposure simulation anyway, interpolation using a Brownian bridge seems to be the most straightforward approach. However, we do not provide our AMC engine as part of the open source libraries.
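To illustrate a) with a minimal sketch (plain C++, not the actual FixingManager interface): a fixing requested between two simulation dates is taken from the next simulation date on or after the requested date, i.e. backward flat:

    #include <map>
    #include <stdexcept>

    // Fixings are only generated on simulation dates (the map keys, e.g.
    // serial day numbers). A fixing requested in between is taken from the
    // *next* simulation date, i.e. interpolated backward flat.
    double interpolatedFixing(const std::map<int, double>& simulatedFixings, int date) {
        auto it = simulatedFixings.lower_bound(date); // first simulation date >= date
        if (it == simulatedFixings.end())
            throw std::runtime_error("no simulation date on or after the requested date");
        return it->second;
    }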
Best Regards
Peter

Peter Caspers (Keymaster)
Hi Ben,
the cross asset model is designed to allow for that. However, currently only one model type per asset class is available in the open source library, so effectively you can only use e.g. the LGM1F (= HW1F) model for IR unless someone (or we) adds an alternative model type.
Best Regards
Peter

Peter Caspers (Keymaster)
Hi John,
if the cashflow is deterministic you can use a cashflow leg for this, see the sketch below. Otherwise it is currently not covered, I am afraid.
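Schematically, such a leg could look like this (the exact schema is described in the user guide; the date and amount are placeholders):

    <LegData>
      <LegType>Cashflow</LegType>
      <Payer>false</Payer>
      <Currency>EUR</Currency>
      <CashflowData>
        <Cashflow>
          <Amount date="2024-06-30">1000000</Amount>
        </Cashflow>
      </CashflowData>
    </LegData>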
Best Regards
Peter

Peter Caspers (Keymaster)
Hi,
the next opportunity would be the QuantLib User Meeting in Düsseldorf next week.
Any chance you make it there?
Kind Regards
Peter

Peter Caspers (Keymaster)
Hi Francois,
this way we are independent of the QuantLib release cycle and can introduce missing pieces whenever we need them. Another reason is that some classes in QuantExt may seem too specialised for QuantLib, like the InterpolatedDiscountCurve variant, which was optimised for the XVA simulation run.
But it’s not impossible that the two libraries will be merged in the future; let’s see.
Kind Regards
Peter