The SVI volatility model is the most successful parametric implied volatility model in the public domain, bridging the gap between academia and industry. In the SVI-JW parameterization, there are five parameters for each maturity:

ATM vol, ATM skew, ATM curvature, left-wing slope, and right-wing slope.

The model has some good properties from the point of view of Q quants:

1. The SVI model is a limit case of the most famous stochastic volatility model, the Heston model, as maturity goes to infinity.
2. The SVI model has well-researched arbitrage-free conditions.
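For concreteness, here is a sketch of the closely related raw SVI parameterization (the SVI-JW parameters above are a reparameterization of it); the parameter values are illustrative, not calibrated to any market:

```python
import numpy as np

def svi_total_variance(k, a, b, rho, m, sigma):
    """Raw SVI total implied variance at log-moneyness k:
    w(k) = a + b * (rho * (k - m) + sqrt((k - m)^2 + sigma^2))."""
    return a + b * (rho * (k - m) + np.sqrt((k - m) ** 2 + sigma ** 2))

# illustrative parameters for one maturity slice
k = np.linspace(-1.0, 1.0, 201)
w = svi_total_variance(k, a=0.04, b=0.1, rho=-0.4, m=0.0, sigma=0.2)
iv = np.sqrt(w / 1.0)  # implied vol for T = 1 year
```

With a negative `rho`, the left (put) wing sits above the right wing, producing the familiar equity skew.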

Suppose there is a data scientist who recently…

From 1970 to now, we have seen the development of volatility modeling, from the constant-volatility Black-Scholes model to stochastic volatility and local volatility models. Before we ask the question “what is the next-generation model,” we should ask ourselves: what new good properties should the model have, and what puzzles are not well handled by current models that we want the new model to solve. …

Skew in the implied volatility surface usually means that implied volatility is negatively correlated with the option strike. We can explain it from many angles:

- leverage effect: when the stock price drops, the company has a greater leverage ratio, hence greater volatility
- spot-vol correlation effect: the volatility process is negatively correlated with the stock price process
- jump effect: big jumps tend to be downward rather than upward
- risk of default: there is a positive probability that the company defaults
- bucket correlation: when stocks drop, the correlation between stocks increases, and when stocks rise, the correlation decreases. A high correlation between individual stocks results in greater index volatility
- …
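The bucket-correlation point can be checked with a small calculation. The function below (weights and vols are illustrative) assumes a single pairwise correlation across all stocks:

```python
import numpy as np

def index_vol(weights, vols, rho):
    """Index volatility assuming a single pairwise correlation rho
    between all constituent stocks."""
    w, v = np.asarray(weights, float), np.asarray(vols, float)
    cov = rho * np.outer(v, v)      # off-diagonal covariances
    np.fill_diagonal(cov, v ** 2)   # each stock's own variance
    return float(np.sqrt(w @ cov @ w))

# four equal-weight stocks, each with 30% vol:
# raising the correlation raises index volatility
low = index_vol([0.25] * 4, [0.3] * 4, 0.2)
high = index_vol([0.25] * 4, [0.3] * 4, 0.8)
```

Raising `rho` from 0.2 to 0.8 lifts the index volatility substantially even though every single-stock vol is unchanged.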

Pair trading is a simple trading idea that was famous 15 years ago. The two-step methodology is simple. First, you find a pair of stocks that move together. Then you monitor the spread, and when the spread exceeds a threshold, you bet that it will shrink. Profits have decayed over the recent decade, after researchers tried all possible functions to extract information from prices and all cointegration measures to quantify the co-movement between two stocks. It is unlikely you can pick up the easy money now, but we can review the latent risk factor behind the pair-trading alpha…
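The two steps above can be sketched in a few lines. This is a toy illustration, not a production strategy: the OLS hedge ratio is a crude cointegration proxy, and the entry threshold is arbitrary:

```python
import numpy as np

def pairs_signal(px_a, px_b, lookback=60, entry_z=2.0):
    """Step 2 of pair trading: z-score the spread and trade when stretched."""
    la, lb = np.log(px_a), np.log(px_b)
    beta = np.polyfit(lb, la, 1)[0]          # OLS hedge ratio on log prices
    spread = la - beta * lb
    recent = spread[-lookback:]
    z = (spread[-1] - recent.mean()) / recent.std()
    # short the spread when it is stretched wide, long when stretched low
    return -1 if z > entry_z else (1 if z < -entry_z else 0)

# toy cointegrated pair whose spread blows out on the last day
px_b = np.exp(np.linspace(0.0, 1.0, 100))
px_a = px_b * np.exp(np.concatenate([np.zeros(99), [0.5]]))
sig = pairs_signal(px_a, px_b)
```

On this synthetic pair the final-day spread is many standard deviations wide, so the signal shorts the spread (sell A, buy beta units of B).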

The quantitative finance domain has already accepted some concepts from data science. For example, researchers usually limit the number of free parameters in their models because of overfitting. Another case is the use of the out-of-sample test as a reliable estimate of strategy performance. Notwithstanding, the vol surface modeling community focuses more on elegant physical models and is reluctant to accept new data science concepts. One apparent defect is that quants often solve paper problems by developing an ever more general framework, which in fact increases the degrees of freedom, while solving practical problems usually requires decreasing the degrees of freedom.

If…

Jumps and stochastic volatility are two helpful refinements to Black-Scholes's arbitrage-free pricing paradigm. Mastering them requires advanced calculus and probability theory. Still, we will give an intuitive explanation of why we need jumps, without any formula. In this blog, we will focus only on jumps and talk about stochastic volatility in the next blog.

1. **There are jumps in the real world**, such as the 20% SPX drop on Oct 19, 1987. The pricing Q measure should be consistent with the historical measure P. …

Front-running is a broker trading a stock with inside knowledge that a pending client transaction is about to affect its price. It is called front-running because the broker trades before the client. It is illegal, and the broker's profits come from the clients' losses.

What if the broker trades after the client's order? Suppose the broker knows that the client's trade contains no information. Then the broker can make use of this knowledge and earn profits by behind-running.

After a trade, there are permanent effects and temporary effects. The temporary effects should decay with time, and the price…
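One common way to formalize this split (a standard modeling assumption, not necessarily the author's exact model) is a permanent shift plus a temporary component that decays exponentially:

```python
import math

def post_trade_mid(p0, permanent, temporary, tau, t):
    """Mid-price t seconds after a buy trade: the permanent impact persists,
    while the temporary impact decays exponentially with time constant tau."""
    return p0 + permanent + temporary * math.exp(-t / tau)

# immediately after the trade the full impact is visible
just_after = post_trade_mid(100.0, 0.02, 0.05, 30.0, 0.0)
# long after, only the permanent part remains
much_later = post_trade_mid(100.0, 0.02, 0.05, 30.0, 1e6)
```

The gap between `just_after` and `much_later` is exactly the temporary impact that has decayed away; behind-running tries to capture that reversion.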

One factor might be useful only under some specific conditions and become random noise under other circumstances. The high alpha happens in a short period, but it is diluted by the long useless stretches, so the alpha is insignificant on average. Furthermore, it is also possible that some signals are good at predicting catastrophes but useless in a quiet environment.
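The dilution effect can be demonstrated with synthetic data. In this hypothetical setup the signal predicts returns only 10% of the time, and its all-weather information ratio is far below its conditional one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
regime = rng.random(n) < 0.1              # the signal "works" only 10% of the time
signal = rng.standard_normal(n)
returns = np.where(regime, 0.5 * signal, 0.0) + rng.standard_normal(n)

pnl = signal * returns                    # always-on use of the signal

def info_ratio(x):
    """Per-period information ratio: mean PnL over its standard deviation."""
    return x.mean() / x.std()

ir_all = info_ratio(pnl)                  # diluted by the 90% useless periods
ir_regime = info_ratio(pnl[regime])       # the signal's conditional strength
```

Judged all-weather, this signal looks weak; conditioned on its regime, it is strong. That is exactly the unfairness discussed above.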

Under the linear regression methodology, we use the information ratio to measure signal performance in all weathers. This is unfair to the weak signals mentioned above. For example, there are two famous signals in the volatility surface shape: volatility surface skew and…

Volatility surface data usually lives in a sparse, high-dimensional space with limited data points. For example, suppose we have 500 underlying stocks, each single-stock option has 20 maturities, and each slice has 5 parameters (an SVI curve). Then we have 50,000 parameters per observation. If we calibrate to the market five times a day, we have 250,000 parameters per day. Given the fixed amount of information generated by the market per day, if we estimate too many parameters, each parameter's estimate is unreliable and the parameters are unstable.

The reduction of dimension…
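The dimension-reduction idea can be sketched with PCA on a synthetic parameter panel. Everything here is hypothetical: a small panel secretly driven by a few latent factors, where a handful of principal components recover nearly all the variance:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical panel: 250 daily observations of surface parameters
# (500 kept small for the demo), driven by only 3 latent market factors
n_obs, n_params, n_factors = 250, 500, 3
loadings = rng.standard_normal((n_factors, n_params))
factors = rng.standard_normal((n_obs, n_factors))
X = factors @ loadings + 0.01 * rng.standard_normal((n_obs, n_params))

# PCA via SVD of the centered panel
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
top3 = explained[:3].sum()   # variance captured by the first 3 components
```

If real surfaces behave even roughly like this, estimating a few factor loadings instead of 250,000 raw parameters is a far better use of the market's limited daily information.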

The implied volatility surface describes implied volatility as a function of option strike and time to maturity. It is calibrated from option prices via the famous Black-Scholes formula. There are three typical use cases of implied volatility: exotic option pricing, option market making, and alpha research. Furthermore, option data is much noisier than stock prices, so we need data-cleaning methods. Because these users' reasons for using the surface differ, we see great differences in their volatility surface building methodologies.
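The calibration step is just an inversion of the Black-Scholes formula. A minimal self-contained sketch, using bisection since a call price is increasing in volatility:

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert Black-Scholes for sigma by bisection
    (valid because the call price is monotone increasing in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# round-trip check: price at 20% vol, then recover the vol
iv = implied_vol(bs_call(100.0, 100.0, 1.0, 0.02, 0.2), 100.0, 100.0, 1.0, 0.02)
```

Repeating this inversion across all quoted strikes and maturities yields the raw implied volatility surface, before any cleaning or smoothing.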

1. exotic option pricing

On the sell side, investment banks sell exotic options (or structured products) to investors and use stocks…