Option Pricing with Fourier Transform and Exponential Lévy Models [pdf] (maxmatsuda.com)
96 points by Cieplak on Oct 26, 2018 | 19 comments



Being a geophysicist, I got sucked into this by the mention of the Fourier Transform.

I found this paper to be very wordy and littered with mathematics that was at times impenetrable. Thank God for the Appendix to help decipher some of the non-geophysical industry symbology. I'm familiar with the derivations of the equations used, but it's been several decades since I took differential equations, so there is a lot left for me to absorb.

I understand this is an old paper but I am cursed with an eye for detail and some of the little things ended up grabbing my attention more than they should have. I found myself wondering if, prior to publication, anyone had bothered reading the full text with an eye for identifying simple spelling errors or whether they had used a spell checker or other tool to maintain consistency of spelling of uncommon words or terms. I think not. Just in my own reading these things popped out at me:

p. 92 in the sentence just after Figure 6.1 sock is used instead of stock.

p. 153 just after Eq 9.8 is defined Meron is used instead of Merton.

p. 166 the page has all but one mention of Black-Scholes spelled as Black-Shoels.

p. 244 the third sentence uses Bronwian instead of Brownian.

Also, they made a typical math funny on pages 190-191. On p.190 the second sentence reads:

>Because the CF of VG process cannot be obtained by simply substituting α = 0 in (11.23), we need to do this step-by-step.

So now they promise me some interesting step-by-step derivations in their algebra. Instead I get the standard upper-level math statement on p. 191 after Eq. 11.28:

>After tedious algebra:

You're almost to the appendix and you find the first mention that some of the math got hairy. Fun stuff.

I expected the last page to read "This page intentionally left blank."


>I found this paper to be very wordy and littered with mathematics that was at times impenetrable.

I don't understand this comment. If someone asked you to read a QFT paper, would you claim it was "littered" with impenetrable mathematics? Or vice versa: if a cond-mat physicist read a geophysics paper, would they be justified in claiming the same? You're reading a paper outside your domain of expertise; expect to be challenged.


>> The answer is that these three models are special cases of more general exponential Lévy models. Options cannot be priced with general exponential Lévy models using the traditional approach of the use of the risk-neutral density of the terminal stock price because it is not available.

Does this mean there is no hedging strategy in these general exponential models? My understanding is that Black-Scholes gives the price, such that if the price were different, there would be an arbitrage strategy (under some assumptions on the variance). And this arbitrage strategy is used for hedging.


That's right: in the Black-Scholes model, for any option you can essentially construct a portfolio of money market + stock that replicates the option. So if you sell the option and buy and track the replicating portfolio, there is no hedging error. But you can't hedge perfectly when there are jumps; there is no replicating portfolio, only ways to minimize the hedging error.
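
If it helps to see it concretely, here's a toy sketch (my own, nothing from the paper): sell a call, hold the Black-Scholes delta in stock plus a money market account, and rebalance daily along simulated GBM paths. The residual hedging error is small and shrinks as you rebalance more often; with jumps in the paths it wouldn't vanish.

    # Toy sketch: discrete delta hedging of a sold call under Black-Scholes dynamics.
    import numpy as np
    from scipy.stats import norm

    S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0
    n_steps, n_paths = 252, 20000
    dt = T / n_steps
    rng = np.random.default_rng(0)

    def bs_call(S, tau):
        tau = np.maximum(tau, 1e-12)
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
        d2 = d1 - sigma * np.sqrt(tau)
        price = S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
        return price, norm.cdf(d1)          # price and delta

    S = np.full(n_paths, S0)
    price0, delta = bs_call(S, T)
    cash = price0 - delta * S               # premium received, minus stock bought
    for i in range(1, n_steps + 1):
        S = S * np.exp((r - 0.5 * sigma**2) * dt
                       + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))
        cash *= np.exp(r * dt)              # money market accrues
        _, new_delta = bs_call(S, T - i * dt)
        cash -= (new_delta - delta) * S     # rebalance the stock position
        delta = new_delta
    error = cash + delta * S - np.maximum(S - K, 0.0)
    print("mean hedging error %.4f, std %.4f" % (error.mean(), error.std()))

With more rebalancing dates the spread of the error keeps dropping; in a jump-diffusion simulation it plateaus, which is exactly the "no replicating portfolio" point.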


From the abstract

>Merton jump-diffusion model (1976) which is an exponential Lévy model with finite arrival rate of jumps,

This is funny because my graduate thesis is about models based on Hawkes processes, that is, jumps with a path-dependent, self-exciting intensity process. Instead of taking the arrival rate of jumps (often the parameter lambda) to be constant, in a Hawkes process a jump increases the probability of another jump (positive feedback), leading to the clusters of jumps we often see in crises.
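
For anyone curious what that looks like, here's a tiny toy simulation (my own, not from my thesis or the paper) of a Hawkes process via Ogata's thinning algorithm: each jump adds alpha to the intensity, which then decays at rate beta, so jumps arrive in clusters.

    # Toy Hawkes simulation (Ogata thinning): intensity
    # lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    import numpy as np

    def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, T=100.0, seed=0):
        rng = np.random.default_rng(seed)
        t, jumps = 0.0, []
        while True:
            # intensity now; it only decays until the next jump, so it's an upper bound
            lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in jumps)
            t += rng.exponential(1.0 / lam_bar)
            if t > T:
                break
            lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in jumps)
            if rng.uniform() <= lam_t / lam_bar:   # accept with prob lambda(t)/lam_bar
                jumps.append(t)
        return np.array(jumps)

    times = simulate_hawkes()
    print(len(times), "jumps, mean inter-arrival %.2f" % np.diff(times).mean())

(The stationarity condition is alpha/beta < 1; a constant-intensity Poisson process is the special case alpha = 0.)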

I love how this paper might seem like "magic" and voodoo and, most importantly, "true" to people who don't know much about mathematical finance. The point is that the models here are most likely wrong and have GLARING flaws in them, yet they are still used to price REAL things in REAL life. All models are wrong; some are less wrong than others. The question is, how wrong do our models have to be before we get some really bad consequences (2008 financial crisis, anyone)?

P.S. Please rewrite this in LaTeX; posting a 250-page document written in Word makes me 90% less likely to read it (compared to something written in LaTeX).


I'm not sure who's voting up such a technical paper. I could understand if the subject were control theory or robotics, but economics with engineering-level math seems outside most of our purviews.

I'm not complaining - I'm happy to see more math on HN. I'm just wondering about HN's demographics.


Folks are interested in this but find it too long, so they're waiting for someone to summarize/explain it, maybe :)


There are a lot of engineers here who would have ended up in finance (possibly as quants), myself included. This sort of thing is right up our street.


Especially since the result is not new (it was written in 2004). Even though the topic is within my interests I'm confused that this is on the front page of HN.


I guess it's interesting to see an alternative to Black-Scholes. The Fourier transform (FFT) is a common topic in advanced algorithms.


Interesting that this got voted up. Anyway, here are a few notes:

1. Mostly, the goal is not to "price options". There's a liquid market for basic calls/puts, and those prices are used to calibrate a model and then interpolate/extrapolate as well as price more exotic things. So the goal is to "fit the market".

2. Black-Scholes is a well-defined bijection between a call price C(K,T) and a BS vol sigma: C(K,T) = BS(K,T,F,df,sigma) (see the toy sketch at the end of these notes). However, prices are such that calls at different strikes K have different BS vols, so the vol can't be a description of the underlying stock price. BS is just "the wrong formula to plug in the wrong number (BS vol) to get the right price". In particular, you can't evolve a stock (in Monte Carlo, going forward, or in a PDE, going backward) using BS vol and reprice all options correctly.

2b. But already, BS gives you a means to interpolate and hedge.

3. The next huge step forward was local vol (Dupire), LV. Instead of assuming a fixed vol, it assumes that the vol is a deterministic function of the stock price S and time t. Now you can evolve a stock (in MC or PDE) and reprice vanilla options correctly, by and large. However, two problems remained:

4a. Forward smile. Take prices as they look today, fit a local vol model, and evolve it forward 2 years. You've hit all the 2yr option prices correctly, and you'll hit all the 3yr option prices. However, the 1yr options IN 2 YEARS will look all wrong (in particular, the smile will have decayed unrealistically).

4b. Very short term smile. A Gaussian will basically never go more than 3 std devs from its mean, right? So short-term, out-of-the-money options should be essentially worthless. But they aren't, because stock prices in the real world do jump (or move 10 std devs). So we require enormously high "lognormal" BS or local vols to reproduce observed option prices correctly.

4a. is solved with stochastic vol models, SV. Mix SV and LV and you reprice options perfectly, and go a few years forward, and your forward smile still looks reasonable.

4b. is solved incorporating jumps, JD (jump diffusion).

Mix SV, JD, LV and you get a nice model that fits the market, and evolves reasonably.

5. Most exotic products you price have additional features that preclude closed-form pricing. If there's path dependency, you often just use Monte Carlo. If there's callability, you try to use PDEs. If there's both, you have to use advanced methods: either carry state variables with you in the PDE, or use Longstaff-Schwartz-like Monte Carlo methods.

6. However, in the last decade or so, after the financial crisis, all the fancy stuff receded into the background, and there was more focus on the basics: rates. Different counterparties have different credit risk; different currencies have different credit, giving rise to cross-currency basis; different LIBOR maturities are at different levels, giving rise to intra-currency basis; etc. All that stuff needs to be captured properly.
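
To make point 2 concrete, here's a minimal sketch (my own, assuming no dividends): Black-Scholes maps a vol to a price, and a root-find inverts it to recover the implied vol from a quoted price.

    # Point 2 in code: the Black-Scholes price is a monotone function of sigma,
    # inverted numerically to get the implied vol of a quoted call price.
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    def implied_vol(price, S, K, T, r):
        # vega > 0, so the price is monotone in sigma and the root is unique
        return brentq(lambda sig: bs_call(S, K, T, r, sig) - price, 1e-6, 5.0)

    quote = 9.35   # hypothetical market quote for a 105-strike, 1y call
    print("implied vol %.4f" % implied_vol(quote, S=100.0, K=105.0, T=1.0, r=0.02))

Doing this strike by strike gives you the smile; the fact that you get a different sigma per strike is exactly why BS vol can't be a property of the stock itself.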


Naive question from a computational physicist with no quant experience: if you forgo closed-form pricing anyway, there doesn't seem to be any obstacle to including all of the things you mention in your last paragraph (and more) and just building more sophisticated numerical models to fit the market. There's certainly no shortage of data or computational power at the scale we're talking about. Given that, why has the "fancy stuff" receded into the background?


Two things, really.

The models needed to be updated to incorporate all the rates stuff. That takes time. So the focus wasn't so much on innovation on the product front, but on getting all existing models and products to play along with the new reality. (That's a huge undertaking, btw... you need the market data, someone responsible for marking it, put it in the databases, have it flow through the infrastructure, take it into account in the models, etc.)

Second, previously there were many fancy products that allowed you to trade, say, vol, mean reversion, correlation, etc.

However, trading this rates stuff is fairly straightforward (in particular, you don't need optionality/convexity; linear products (such as forwards or swaps) are enough).

Maybe I shouldn't say "recede" - but in the earlier decade the focus was more on fancy exotic products (with huge margins), while in the more recent decade the focus was on simpler products, but modelling them really precisely.

(There was also some focus on systems and ops and front-to-back processing.)


This is a great post. I'm curious whether these techniques are widely used in hedge funds/prop shops today. I was an equity options trader many moons ago, and the Chicago prop shop I traded at generally had every trader using a simple vol arb approach. Apart from real-time vol calculations per strike in the trading tools, using Black-Scholes and historical vol for the equity, nothing more complex was used to execute trades (e.g. no stochastic vol or jump diffusion). The concept of smiles was understood, so people would generally trade at-the-money options exclusively. Positions were delta hedged overnight, generally at market close. Deep out-of-the-money options were used essentially for gambles. Vega was the main source of edge, and as you can imagine, most traders exited positions once their options drifted in or out of the money, since our vol model didn't really account for it.


Good old days :-)

I should say that it's always amazing to me how good traders are often quite ahead of the models: they use them and have an intuitive feel for them, but are also aware of their limitations, idiosyncrasies, and where the real world diverges from them.

(On the flip side, btw, that means that you can't blame the financial crisis on "oh, the models were bad". They were (some of them), but everyone knew. That things went on as they did was more due to systemic/political factors, incentives, etc.)


The paper mentions (page 117) that there are accuracy concerns with the FT quadrature method used for equation 8.21 (and analogous ones for others) for "near maturity deep OTM and ITM calls (puts)". This is attributed to the highly oscillatory nature of the Fourier Transform kernel. Is performance important here? Like, would a better quadrature rule which didn't suffer due to the oscillatory kernel help in an important way?

I ask, of course, because there are better ways to treat Fourier-type integrals than trapezoidal rule (I even wrote a numerical analysis paper on such a method).
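
As a quick illustration of the difference (on a toy integral, not the paper's pricing formula): a plain trapezoidal rule on a highly oscillatory integrand needs a very fine grid, while a rule that handles the cos factor analytically (QAWO, exposed through scipy's weight='cos') gets it essentially exactly.

    # Toy comparison on int_0^1 exp(-x) * cos(omega * x) dx with large omega.
    import numpy as np
    from scipy.integrate import quad

    omega = 200.0
    exact = (1.0 + np.exp(-1.0) * (omega * np.sin(omega) - np.cos(omega))) / (1.0 + omega**2)

    x = np.linspace(0.0, 1.0, 257)              # modest trapezoidal grid
    y = np.exp(-x) * np.cos(omega * x)
    trap = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

    osc, _ = quad(lambda t: np.exp(-t), 0.0, 1.0, weight='cos', wvar=omega)

    print("exact %.3e, trapezoid error %.1e, QAWO error %.1e"
          % (exact, abs(trap - exact), abs(osc - exact)))

The same idea (Filon- or Levin-type rules, or any rule aware of the oscillation) is what I had in mind for the pricing integrals.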


I've rarely used FT methods in the real world (one reason being the wrap-around boundary conditions), and not really followed them. And, not sure whether that paper reflects the current state of the art.


FT methods are different from Fourier-spectral methods which typically have periodic BC requirements. That was not what my question was about.

The method in the paper exactly represents the solution as a Fourier-type integral which must be computed numerically. The error in the quadrature rule is the sole source of pricing error.

I guess the question is whether the extra computational cost of computing those integrals exactly, compared with the method the author used, would carry a corresponding financial benefit.


Does anyone know if techniques like these are applicable to betting on baseball games?



