Building a Model for Returns

Published August 22, 2024

The St. Petersburg Stock Exchange - Ivan Aivazovsky (1847)

Volatility drag in Gaussian stochastic return models


Who needs $\sigma$ anyways?

Quantitative finance follows in the tradition set by Bachelier in his 1900 paper "Théorie de la spéculation" (Bachelier 1900), in which he models asset prices as fundamentally random. In quant finance, we throw up our hands and leave the business of trying to forecast prices to the value investors and the day-traders. Once we acquiesce to a nondeterministic universe¹, we ask "ok, now what can we do?"

Bachelier essentially started with an arithmetic binomial model and considered the transition probability between timesteps to obtain a stochastic diffusion process:

$$c^2 \frac{\partial \mathscr{P}}{\partial t} - \frac{\partial^2 \mathscr{P}}{\partial x^2} = 0$$

Into the 60s and 70s, however, a preference emerged to model asset prices as stochastic processes which were geometric, both in drift and diffusion. This is certainly an improvement, but it leads to subtleties which are not present in Bachelier’s arithmetic model.

Daily asset price data for SPY is chaotic at every time scale. Fractals, anyone?

In this probabilistic setting, descriptive statistics can grant us an initial foothold in describing the jagged, chaotic time series of asset prices. Of course, efficient markets strip these statistics of any useful information, but their properties are instructive.

We begin by considering a discrete time series indexed by time.

$$\{S_t\} = \{S_{k \, \Delta t}\}_{k \in \mathbb{N}_0} = \{S_0, S_{\Delta t}, S_{2 \, \Delta t}, \dots\}$$

We might consider the entire sequence $\{S_t\}$ to be a random variable, i.e. $\{S_t\} \equiv S(t, \omega)$, but it’s not clear how we would extend this definition or sample it.

The individual prices $S_t$ might be another choice, but each value seems to depend on the previous value. Bachelier considered the sequence of absolute differences $\{\Delta S_t\}$ to solve this problem, but we can do better.

To proceed, we make an empirical observation.

The average change in $S_t$ over an interval $k \, \Delta t$ is proportional to $S_t$ and the length of the interval.

This suggests that the stock’s movement has something to do with exponential growth, and it’s why a geometric stochastic process model is a better choice.

A few important points:

  • It’s important to recognize the above as a purely empirical fact. This is not generally true for an arbitrary sequence of quantities.

  • The noise in real-world market data can make it difficult to convince yourself of this unless you look at sufficiently small intervals across sufficiently large timescales, and I wonder if that’s why Bachelier didn’t bother with this mathematical framing. At small timescales with the resolution of price data Bachelier had access to, the relationship would have appeared linear.

  • Even so, the statement is a simplification which does not survive statistical scrutiny. It is a restatement of the classic fallacy that $S_t$ is lognormally distributed – in reality, this is only an approximation.

This also means that, while studying the properties of the $\Delta S_t$ indeed leads to the same answers, the math gets a bit cluttered, as the distribution of $\Delta S_t$ depends on $S_t$. Thus, a natural choice is to reframe the model in terms of the star of this essay: returns.

Returns are noisy and somewhat Gaussian, and since annual return is simply the compounding of daily returns, it’s fair to think of the asset price path as "growth plus noise." The subtle interplay of this growth and noise is the subject of this essay.

Histogram of simple returns. If you blur your eyes and shake your head back and forth, you might convince yourself that it resembles a Gaussian distribution.

A tale of two growth rates

Let’s start by formalizing our choice of random variable. Empirically, we approximated that

$$\overline{\Delta S_t} \propto S_t \, \Delta t$$

so a natural first step is to model a sequence of simple returns:

$$R_t^{(\Delta t)} := \frac{\Delta S_t}{S_t} = \frac{S_{t + \Delta t} - S_t}{S_t}$$

Here, I introduce the notation $R_t^{(\Delta t)}$ to indicate that this return is defined across a timestep $t_{i+1} - t_i = \Delta t$. The notation may seem cumbersome, but it will prove quite useful later on.

$$S_{t + \Delta t} = S_t \left(1 + R_t^{(\Delta t)}\right) \qquad\qquad S_{n \, \Delta t} = S_0 \prod_{i=0}^{n-1} \left(1 + R_{i \, \Delta t}^{(\Delta t)}\right)$$

However, we can eliminate our dependence on $S_t$ another way, by considering log returns.

$$\left(R_{\log}^{(\Delta t)}\right)_t := \log \frac{S_{t + \Delta t}}{S_t} = \log \left(1 + R_t^{(\Delta t)}\right)$$

Using this view, the sequence becomes

$$S_{n \, \Delta t} = S_0 \exp \left( \sum_{i=0}^{n-1} \left(R_{\log}^{(\Delta t)}\right)_{i \, \Delta t} \right)$$

Again, the implication of modeling the asset price as random is to claim that it could have ended differently. So what is the event space? Recall the empirical observation from earlier, this time in terms of returns:

$$\overline{R^{(k \, \Delta t)}} \propto k \, \Delta t \qquad\qquad \overline{R_{\log}^{(k \, \Delta t)}} \propto k \, \Delta t$$

We’ll label these proportionality constants $\mu$ and $\tilde{\mu}$ respectively, so that

$$\overline{R^{(\Delta t)}} = \mu \, \Delta t \qquad\qquad \overline{R_{\log}^{(\Delta t)}} = \tilde{\mu} \, \Delta t$$

Now that we have chosen $R^{(\Delta t)}$ or $R_{\log}^{(\Delta t)}$ as the engine of the price sequence, we can reason about the event space. Both approaches have the effect of turning the geometric problem into a linear one. Furthermore, we can see that, under our assumptions, $\overline{R^{(\Delta t)}}$ and $\overline{R_{\log}^{(\Delta t)}}$ do not depend on time. Thus, if we have a sufficiently large number of returns, we can replace our sample averages with expectations.

$$\mathbb{E}\left[R^{(\Delta t)}\right] = \mu \, \Delta t \qquad\qquad \mathbb{E}\left[R_{\log}^{(\Delta t)}\right] = \tilde{\mu} \, \Delta t$$
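In code, the two estimators are each one line. Here’s a minimal sketch, with a synthetic price array standing in for real SPY closes (the data loading is assumed):

```python
import numpy as np

# Synthetic daily closes standing in for a real price series like SPY.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0004, 0.01, 2520)))

dt = 1.0  # Δt = one trading day
simple_returns = prices[1:] / prices[:-1] - 1     # R^(Δt)
log_returns = np.log(prices[1:] / prices[:-1])    # R_log^(Δt)

mu = simple_returns.mean() / dt       # estimate of μ
mu_tilde = log_returns.mean() / dt    # estimate of μ̃
print(mu, mu_tilde)                   # μ comes out larger than μ̃
```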

This is a good start, but we have a mystery on our hands. If we think of a stock price as "growth plus noise", which growth rate is the correct one? And what does the other one represent?

The growth rate associated with simple returns, $\mu$, feels more legitimate. After all, simple returns are all that really matters in finance – they are the actual amount of money gained or lost.

For kicks, let’s superimpose these growth rates atop the same SPY time series.

The growth rate obtained from simple returns overestimates total return, whereas the mean log return does not. Note that $\mu \, \Delta t$ is small enough that $\exp(\mu t) \approx (1 + \mu \, \Delta t)^{t / \Delta t}$. The difference is in the growth rate itself.

In some sense, this view makes $\tilde{\mu}$ seem more legitimate, since it preserves total return! If indeed our stock price is "growth plus noise," surely this growth curve is the ideal deterministic basis to compose with the noise.

We can rewrite these random variables equivalently as discrete random growth factors (i.e. price ratios).

$$\begin{aligned} \mathbb{E}\left[1 + R^{(\Delta t)}\right] &= \frac{1}{n} \sum_i \left(1 + R_i^{(\Delta t)}\right) \\ \mathbb{E}\left[1 + R_{\log}^{(\Delta t)}\right] &= 1 + \frac{1}{n} \sum_{i=0}^{n-1} \log \frac{S_{i \, \Delta t + \Delta t}}{S_{i \, \Delta t}} \\ &= 1 + \frac{1}{n} \sum_{i=0}^{n-1} \log \left(1 + R_i^{(\Delta t)}\right) \\ &= 1 + \log \left( \prod_i \left(1 + R_i^{(\Delta t)}\right) \right)^{\frac{1}{n}} \\ &\approx \left( \prod_i \left(1 + R_i^{(\Delta t)}\right) \right)^{\frac{1}{n}} \end{aligned}$$

We can safely ignore terms smaller than $O\left((\mu \, \Delta t)^2\right) = O\left((\tilde{\mu} \, \Delta t)^2\right)$. But be careful – this is not the same as terms of order $O\left(\left(R^{(\Delta t)}\right)^2\right)$, which we cannot ignore!

Interestingly, these expectations are the arithmetic and geometric mean of price ratios, respectively. The AM-GM inequality tells us that $\mu$ will always be larger than $\tilde{\mu}$ (with equality only if every return is identical).
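We can even estimate the size of this gap. Writing $\mathbb{V}\left[R^{(\Delta t)}\right] = \sigma^2 \, \Delta t$ (a definition we’ll make official later on), a second-order Taylor expansion of $\log(1 + R)$ sketches the relationship:

$$\tilde{\mu} \, \Delta t = \mathbb{E}\left[\log\left(1 + R^{(\Delta t)}\right)\right] \approx \mathbb{E}\left[R^{(\Delta t)}\right] - \frac{1}{2} \mathbb{E}\left[\left(R^{(\Delta t)}\right)^2\right] = \mu \, \Delta t - \frac{1}{2}\left(\sigma^2 \, \Delta t + (\mu \, \Delta t)^2\right)$$

Dropping the $(\mu \, \Delta t)^2$ term per the note above, $\tilde{\mu} \approx \mu - \frac{\sigma^2}{2}$: to leading order, the gap between the two growth rates is half the variance of returns. Keep this in your back pocket – it is a preview of the volatility drag we’re hunting.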

This view also explains why log returns recover the total return.

$$\mathbb{E}\left[1 + R_{\log}^{(\Delta t)}\right]^n = (1 + \tilde{\mu} \, \Delta t)^n = \prod_i \left(1 + R_i^{(\Delta t)}\right) = \prod_i \frac{S_{i \, \Delta t + \Delta t}}{S_{i \, \Delta t}} = \frac{S_{n \, \Delta t}}{S_0}$$

The mean log return simply answers the question "what growth rate, if compounded $n = t / \Delta t$ times, would result in the correct total growth?"
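A quick numerical check of this statement, again with a synthetic series standing in for SPY:

```python
import numpy as np

# Synthetic daily closes; a stand-in for a real series like SPY.
rng = np.random.default_rng(42)
closes = 100 * np.exp(np.cumsum(rng.normal(0.0004, 0.01, 252)))

ratios = closes[1:] / closes[:-1]
n = len(ratios)
mu_dt = (ratios - 1).mean()          # μ Δt, mean simple return
mu_tilde_dt = np.log(ratios).mean()  # μ̃ Δt, mean log return

print(closes[-1] / closes[0])   # realized total growth S_n / S_0
print(np.exp(mu_tilde_dt * n))  # matches exactly: μ̃ compounds back to it
print((1 + mu_dt) ** n)         # overshoots, per AM-GM
```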

You might not think that this difference is a big deal; for this sampling of SPY prices, we have only a difference of about 3.6%.

$$\mu = 0.000984 \qquad \tilde{\mu} = 0.000951$$

But consider pathological cases where these two quantities have different signs. As it turns out, in markets, this happens all the time. I tested 252-day windows for 15 stocks, and for most of them, a whopping 5% of their year-long windows exhibit this behavior – positive mean daily returns but a negative total return! This should convince you that understanding the difference between these two growth rates is worthwhile.

A smattering of 15 stocks was chosen, and their mean daily returns were calculated over a sliding 252-day window. Random periods are plotted which have positive mean daily returns but negative total return across the period. Each ticker is displayed with the percentage of its tested windows which have this characteristic.
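Here’s a minimal sketch of that windowed test. It assumes `closes` is a NumPy array of daily closing prices for a single ticker; sourcing the data is left out.

```python
import numpy as np

def pathological_fraction(closes: np.ndarray, window: int = 252) -> float:
    """Fraction of sliding windows with a positive mean daily simple
    return but a negative total return across the window."""
    hits = total = 0
    for start in range(len(closes) - window):
        w = closes[start : start + window + 1]  # window+1 closes -> window returns
        daily = w[1:] / w[:-1] - 1
        total += 1
        if daily.mean() > 0 and w[-1] < w[0]:
            hits += 1
    return hits / total if total else float("nan")
```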

Think about the implication:

  • An investor who buys the asset each morning and sells it the following morning, immediately rebuying at the same price, makes money.

  • An investor who buys and holds loses money.

This exercise also gives us a clue – more volatile stocks have a higher frequency of these pathological windows, and for any single stock, the windows cluster around volatile stretches. Whatever this thing is, it has something to do with volatility.

Indeed, this phenomenon is called volatility drag. To understand it better, we need to make a stop at the casino.

Great average returns do not a winner make

An American roulette table has 38 squares: 2 squares are green, 18 are red, and 18 are black. The game starts by betting money on red or black. A ball is then spun around the wheel, randomly landing on a square. A correct guess doubles your money, and an incorrect guess loses the entire bet.

$$\Delta S = \begin{cases} +\$1 & p = \frac{18}{38} \\ -\$1 & 1 - p = \frac{20}{38} \end{cases}$$

The expected winnings $\Delta S$ on a $1 bet are trivial to calculate:

$$\mathbb{E}[\Delta S] = p \cdot 1 + (1 - p) \cdot (-1) = 2p - 1 = 2 \cdot \frac{18}{38} - 1 = -\$0.053$$

Indeed, after $k$ games, the expected P&L is $\mathbb{E}[R] = \mathbb{E}\left[\sum_i R_i\right] = \sum_i \mathbb{E}[R_i] = k \cdot (-\$0.053)$.

So far so good. This is why no mathematicians play roulette. We can make roulette bets analogous to stock investing by simulating a single-period "buy and hold" strategy for some finite number of plays – we let our bet ride for $n$ rounds. Unsurprisingly, the expected final payout is abysmal:

$$\begin{aligned} \frac{S_n}{S_0} &= 1 + R^{(n)} = \prod_i \left(1 + R_i\right) \\ \mathbb{E}\left[\frac{S_n}{S_0}\right] &= \mathbb{E}\left[\prod_i \left(1 + R_i\right)\right] = \prod_i \mathbb{E}\left[1 + R_i\right] = (0.947)^n \end{aligned}$$

But something quite interesting happens when we change the game slightly. Let’s imagine a roulette table which has inverted the odds in the gambler’s favor. At this table, the green squares are freebies – they always count as a win.

Now the expected return for a single play is positive.

$$\begin{gathered} R = \begin{cases} +1 & p = \frac{20}{38} \\ -1 & 1 - p = \frac{18}{38} \end{cases} \\ \mathbb{E}[\Delta S] = \mathbb{E}[R] = 2 \cdot \frac{20}{38} - 1 = +\$0.053 \end{gathered}$$

Not only that, but the expected growth in wealth after $n$ plays is also positive, guaranteeing that the casino loses money in the long run.

$$\mathbb{E}\left[\frac{S_n}{S_0}\right] = (1.053)^n$$

Should we play this game?

We’ve just calculated the expected payout, but that number would only be realized after a sufficiently large number of $n$-round plays. We might also wonder about the typical outcome of a single play. Because individual rounds multiply the starting wealth, the order doesn’t matter. Thus, the final wealth is fully determined by the number of wins. Each win occurs with probability $p$, so the expected number of wins in an $n$-round play is simply

$$\mathbb{E}[\text{num wins}] = n p$$

Then the final wealth ratio in this case is

$$\frac{S_n}{S_0} = 2^{n p} \cdot 0^{n - n p} = \left(2^p \cdot 0^{1 - p}\right)^n = 0$$

So it’s most likely that you will walk away with nothing after each play! Sure, you could walk out the door with $2^{10} = \$1024$ after 10 rounds, but that happens only with a $\left(\frac{20}{38}\right)^{10} = 0.16\%$ chance!

On the other hand, if you play a 10-round game 400 times, the odds are close to even (just under 50%) that you’ll have won $2^{10} = \$1024$ at some point, only sacrificing $399 for the privilege.
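A Monte Carlo sketch of this favorable table (the round count and sample size are arbitrary) makes the gap between the mean payout and the typical payout concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
n_rounds, n_plays, p = 10, 100_000, 20 / 38

# Each play: bet $1 and let it ride; a single losing round wipes out the stake.
wins = rng.random((n_plays, n_rounds)) < p
payouts = np.where(wins.all(axis=1), 2.0 ** n_rounds, 0.0)

print(payouts.mean())        # ≈ 1.053^10 ≈ 1.67, the expected payout (noisy)
print(np.median(payouts))    # 0.0 – the typical play ends in ruin
print((payouts > 0).mean())  # ≈ (20/38)^10 ≈ 0.0016
```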

We can also see a cartoon example of someone with positive average returns who still ends up a loser. Consider the unlucky soul who won 9 rounds in a row and lost the 10th. Their average return was a whopping 80% per round!

$$\bar{R} = \frac{1}{10} \left(1 \cdot 9 + (-1) \cdot 1\right) = 0.8$$

But of course, they ended up with a total return of -100%. The bigger the rise, the harder the fall.

Whether or not you would take this bet given finite money (i.e. finite number of plays) depends on your level of risk tolerance.

This is a variation on the St. Petersburg Paradox, which highlights the difference between the mathematical value of a game and a rational individual’s willingness to play it. It was also explored in a setting similar to this one by Samuelson (1971).

Now, take a look at the quantity $2^p \cdot 0^{1 - p}$. Taking the log of both sides of the above equation,

$$\begin{aligned} \log \frac{S_n}{S_0} &= n \log \left(2^p \cdot 0^{1 - p}\right) \\ \frac{1}{n} \log \frac{S_n}{S_0} &= p \log 2 + (1 - p) \log 0 = \mathbb{E}\left[R_{\log}\right] \end{aligned}$$

This is the log return!

To summarize,

  • Simple returns tell us about the dynamics of the expected value of the price across many possible realizations of the path.

  • Log returns tell us about the dynamics of the expected path that the asset price takes.

This tracks with our discussion of the SPY time series. By accepting that time series data as our sample and declaring that it perfectly models the population statistics, we are not saying "this sample has the mean growth rate;" rather, we are saying "this sample is the most likely path."

So, we’ve tracked down the difference in these two growth rates, but we’ll get a better feel for the machinery by building a bridge between the stock price and the casino. So far we’ve only been concerned with the first moment of our random returns, but in order to develop our model, we need to consider the second moment as well.

The binomial model

The binomial model provides the perfect setting for a random walk. It simplifies the math while providing some key ingredients:

  • a natural notion of exponential growth

  • sufficient degrees of freedom to tune distributions to our liking

A geometric version of the binomial model was introduced by Cox, Ross, and Rubinstein (1979) to price options. I’ll refer to it as the Geometric Binomial Random Walk (GBRW). They also demonstrated that this option pricing method converges on the Black-Scholes-Merton solution. We won’t be pricing options here, but we’ll use a similar convergence technique to study and extend our stock price model.

In the GBRW model, the asset price $S_t$ rises to $u \cdot S_t$ with probability $p$ or falls to $v \cdot S_t$ in a single time step².

This is simply a generalization of the Roulette game from above.

One time step is not enough for our needs – we need to develop a procedure for extending the model to timescales larger and smaller than $\Delta t$ so we can examine the statistics.

We begin by examining a simpler case: the Arithmetic Binomial Random Walk (ABRW).

Arithmetic binomial random walk

The arithmetic binomial random walk is characterized by step sizes that do not change with position or time. Here, I’ve kept the model general by allowing for asymmetric step sizes and probabilities.

There are many ways to frame and study this setting mathematically, but as a computer scientist by trade, I like the framing that facilitates simulation (i.e. one that allows repeated sampling). As it turns out, this approach also builds a nice bridge to stochastic calculus.

We will analyze the binomial model via the properties of a random walk along its tree-like structure. The engine of the random walk is the random variable $\Delta y^{(\Delta t)}$.

$$\Delta y^{(\Delta t)} = \begin{cases} \Delta y_1^{(\Delta t)} & p \\ \Delta y_2^{(\Delta t)} & 1 - p \end{cases} \qquad\qquad y_{t + k \, \Delta t} := y_t + \sum_{i=1}^{k} \Delta y_i^{(\Delta t)}$$

This is already enough setup to do some interesting math, but the real meat comes from extending the definition of our model.

We recognize that daily returns for assets do not tell the full story. Rather, they are a snapshot at a resolution of $\Delta t = 1\ \text{day}$. Thus, daily returns or log returns are not the atomic random variables! Just as annual return is a random quantity constructed from random daily returns, so too are daily returns merely a summary of higher frequency trading. In the limit, we consider trading to occur continuously.

Thus, in order for the binomial model to be useful to us, we need to be able to reason about its properties at timescales both larger and smaller than the timescale $\Delta t$ over which it is initially defined.

I like to think of it this way: by defining the sample space and probability measure at a particular time horizon, what we’re really doing is setting the parameters of a distribution at that time horizon. Then, we analyze the properties of the model at discrete interval lengths $k \, \Delta t$ to find a continuous map $M$ from time to the distribution.

$$M : t \to \mathcal{D}_t = \mathcal{D}(\boldsymbol{\theta}(t))$$

Think of $\Delta t$ as the resolution of the grid.

Now, consider the process

$$Z_t = \log S_t$$

The logarithm turns the geometric binomial grid above into an arithmetic one.

$$\Delta Z^{(\Delta t)} = \Delta \left(\log S\right)^{(\Delta t)} = \begin{cases} \log u & p \\ \log v & 1 - p \end{cases} \qquad\qquad \Delta Z^{(k \, \Delta t)} = \log \frac{S_{k \, \Delta t}}{S_0} = \sum_{i=1}^{k} \left(\Delta Z\right)_i^{(\Delta t)}$$

Equivalently, one could also consider the random variable to be the number of up-moves, analogous to the number of wins in the Roulette example. Across one timestep, this is a boolean random variable.

$$\mathrm{Bool}(\text{up-move})^{(\Delta t)} = \begin{cases} 1 & p \\ 0 & 1 - p \end{cases}$$

At this point, the grid is only defined for times $t + k \, \Delta t,\ k \in \{0, 1, 2, \dots\}$. So we look at how the expectation and variance evolve for $k > 1$. It’s relatively easy to work this out using the binomial distribution, or you can simply recognize that we are accumulating independent Bernoulli trials.

$$\mathbb{E}\left[\Delta Z^{(k \, \Delta t)}\right] = k \, \mathbb{E}\left[\Delta Z^{(\Delta t)}\right] \qquad\qquad \mathbb{V}\left[\Delta Z^{(k \, \Delta t)}\right] = k \, \mathbb{V}\left[\Delta Z^{(\Delta t)}\right]$$

Because the quantities $\frac{\mathbb{E}\left[\Delta Z^{(\Delta t)}\right]}{\Delta t}$ and $\frac{\mathbb{V}\left[\Delta Z^{(\Delta t)}\right]}{\Delta t}$ are invariant for any $t = k \, \Delta t$, this gives us a natural procedure to extend the binomial grid definition to times between $t$ and $t + \Delta t$, i.e. to increase the resolution: iteratively partition the interval $\Delta t$ while preserving these two invariants³.

With this framing in mind, we’ll indicate the hard-coded parameters for the $\Delta t$-resolution grid by defining constants.

$$\mathbb{E}\left[\Delta Z^{(\Delta t)}\right] = \tilde{\mu} \, \Delta t \qquad\qquad \mathbb{V}\left[\Delta Z^{(\Delta t)}\right] = \tilde{\sigma}^2 \, \Delta t$$

We did a similar substitution at the beginning of this essay for the SPY time series. But here, this is a direct result of how we’ve defined the binomial grid, not an empirical observation. Writing $\tilde{\mu} \, \Delta t$ is useful because later on we will manipulate $\Delta t$ like a variable. It is justified because, as shown above, $\mathbb{E}\left[\Delta Z^{(\cdot)}\right]$ is linear in time, insofar as you consider $\Delta Z^{(\cdot)}$ a discrete function of time:

$$\Delta Z^{(k \, \Delta t)} := Z_{k \, \Delta t} - Z_0 = \sum_{i=1}^k \left(\Delta Z^{(\Delta t)}\right)_i$$

This essentially recognizes $\tilde{\mu}$ and $\tilde{\sigma}^2$ as the growth rates per unit time of the expectation and variance of $Z_{k \, \Delta t}$ respectively.

If the goal is to simulate a random walk on the binomial grid (e.g. with a computer), all we really need is a formula for the binomial random variable from which to sample to construct a random walk with timesteps of arbitrary length. We simply extend $k$ to $\mathbb{R}^+$ and define $n = 1/k$. Let

$$\delta t := \frac{\Delta t}{n} \qquad\qquad \delta Z := \Delta Z^{(\delta t)}$$

Then, we can simply increase $n$ until we have sufficient precision.

$$\begin{aligned} \mathbb{E}\left[\delta Z\right] &= p \, \delta Z_1 + (1 - p) \, \delta Z_2 = \frac{1}{n} \tilde{\mu} \, \Delta t = \tilde{\mu} \, \delta t \\ \mathbb{V}\left[\delta Z\right] &= p (1 - p) \left(\delta Z_1 - \delta Z_2\right)^2 = \frac{1}{n} \tilde{\sigma}^2 \, \Delta t = \tilde{\sigma}^2 \, \delta t \end{aligned}$$

These two constraints in two unknowns give a unique solution (up to a reflection about the first moment):

$$\begin{aligned} \delta Z_1 &= \tilde{\mu} \, \delta t \pm \tilde{\sigma} \sqrt{\delta t} \sqrt{\frac{1 - p}{p}} \\ \delta Z_2 &= \tilde{\mu} \, \delta t \mp \tilde{\sigma} \sqrt{\delta t} \sqrt{\frac{p}{1 - p}} \end{aligned}$$
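A simulation sketch of the refinement procedure, using my reading of the parameters from the figure below: at every subdivision level $n$, the sampled mean and variance at $t = \Delta t$ should come out unchanged.

```python
import numpy as np

# The Δt-resolution grid from the figure below; Δt = 1 w.l.o.g.
p, z1, z2, dt = 0.6, 0.4, -0.9, 1.0
mean_dt = p * z1 + (1 - p) * z2        # μ̃ Δt = -0.12
var_dt = p * (1 - p) * (z1 - z2) ** 2  # σ̃² Δt = 0.4056

rng = np.random.default_rng(7)
for n in (1, 4, 16, 64):               # grid resolutions
    ddt = dt / n
    dz1 = mean_dt * ddt / dt + np.sqrt(var_dt * ddt / dt) * np.sqrt((1 - p) / p)
    dz2 = mean_dt * ddt / dt - np.sqrt(var_dt * ddt / dt) * np.sqrt(p / (1 - p))
    steps = np.where(rng.random((200_000, n)) < p, dz1, dz2)
    z = steps.sum(axis=1)              # Z at t = Δt, one per sampled path
    print(n, z.mean(), z.var())        # ≈ -0.12 and ≈ 0.4056 at every n
```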

Here I’ve defined a linear binomial model by $p = 0.6$, $\Delta Z_1^{(\Delta t)} = 0.4$, $\Delta Z_2^{(\Delta t)} = -0.9$. The expected value and variance at time $t = \Delta t$ are invariant as the resolution of the binomial grid increases. $\Delta t$ is set to 1 w.l.o.g., and the mean path is interpolated to make the graph easier to read. The expectation of $Z_t$ is linear in time.

What’s more, we can let $n$ become large, and with a healthy bit of applied math hand-waving, let

$$\delta t \to dt \qquad\qquad \delta Z_t \to dZ_t \qquad\qquad Z_t := Z_0 + \int_0^t dZ_s$$

Voilà! We’ve discovered Brownian motion and stochastic increments!

$$dZ_t = \begin{cases} \tilde{\mu} \, dt \pm \tilde{\sigma} \sqrt{dt} \sqrt{\frac{1 - p}{p}} & p \\ \tilde{\mu} \, dt \mp \tilde{\sigma} \sqrt{dt} \sqrt{\frac{p}{1 - p}} & 1 - p \end{cases}$$

It should be clear that we aren’t in Kansas anymore, and that the normal rules of calculus don’t apply here. In traditional calculus, a differential is defined via the derivative as the linear part of the function’s increment:

$$dZ(t) := Z'(t) \, dt$$

But at no point (including in the limit) is $Z_t$ differentiable!

Here we’ve approached $dZ_t$ through a limiting process of smaller and more frequent Bernoulli trials, so $dZ_t$ takes the form of a Bernoulli random variable. The more common (functionally equivalent) form is to write $dZ_t$ as a Gaussian random variable.

$$dZ_t = d(\log S_t) = \tilde{\mu} \, dt + \tilde{\sigma} \phi \sqrt{dt}$$

with $\phi \sim \mathcal{N}(0, 1)$.

All that’s left to do is integrate and exponentiate to obtain our simulated path.

$$\frac{S_t}{S_0} = \exp\left(\int_0^t dZ_s\right)$$
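As a closing sketch for this section, that recipe in code: sample Gaussian increments, integrate them with a cumulative sum, and exponentiate. The parameter values are arbitrary.

```python
import numpy as np

def simulate_paths(mu_tilde, sigma_tilde, t, n_steps, n_paths, seed=0):
    """Sample S_t / S_0 by integrating Gaussian log-increments dZ."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dz = mu_tilde * dt + sigma_tilde * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    return np.exp(dz.cumsum(axis=1))  # S / S_0 along each path

paths = simulate_paths(mu_tilde=0.05, sigma_tilde=0.2, t=1.0, n_steps=252, n_paths=100_000)
final = paths[:, -1]
# The median path grows like exp(μ̃ t); the mean sits higher, at exp((μ̃ + σ̃²/2) t).
print(np.median(final), final.mean())
```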

Geometric binomial random walk

Now let’s find out what happens when we apply the same procedure for the geometric case.

Start by choosing a random variable which drives the random walker across the grid.

$$\Delta S_{k \, \Delta t}^{(\Delta t)} := \begin{cases} (u - 1) \, S_{k \, \Delta t} & p \\ (v - 1) \, S_{k \, \Delta t} & 1 - p \end{cases} \quad\implies\quad S_{k \, \Delta t} = S_0 + \sum_{i=0}^{k-1} \Delta S_{i \, \Delta t}^{(\Delta t)}$$

We would rather not have our random variable depend on $k$ (via $S_{k \, \Delta t}$, such that its distribution changes after each trial), so instead, we express the problem geometrically.

$$R^{(\Delta t)} \equiv \frac{\Delta S_{k \, \Delta t}^{(\Delta t)}}{S_{k \, \Delta t}} = \begin{cases} u - 1 & p \\ v - 1 & 1 - p \end{cases} \quad\implies\quad S_{k \, \Delta t} := S_0 \prod_{i=1}^{k} \left(1 + R_i^{(\Delta t)}\right)$$

Whether you consider $\Delta S_{k \, \Delta t}^{(\Delta t)}$, $S_{t + k \, \Delta t}$ itself, or even a growth factor like

$$G^{(\Delta t)} := \begin{cases} u & p \\ v & 1 - p \end{cases} \quad\implies\quad S_{k \, \Delta t} = S_0 \prod_{i=1}^{k} G_i^{(\Delta t)}$$

to be the random variable, it makes no difference.

The key characteristic is that we are sampling an independent sequence of binomial random variables and accumulating them geometrically to walk along the binomial tree. These forms are all equivalent and lead to the same results below. Using $R^{(\Delta t)}$ or $G^{(\Delta t)}$ makes the math a bit cleaner (we don’t get $S_t$ popping up everywhere).

The expectation and variance across $\Delta t$ are

$$\begin{aligned} \mathbb{E}\left[1 + R^{(\Delta t)}\right] &= p u + (1 - p) v \\ \mathbb{V}\left[R^{(\Delta t)}\right] = \mathbb{V}\left[1 + R^{(\Delta t)}\right] &= \mathbb{E}\left[\left(1 + R^{(\Delta t)}\right)^2\right] - \mathbb{E}\left[1 + R^{(\Delta t)}\right]^2 \\ &= u^2 p + v^2 (1 - p) - (u p + v (1 - p))^2 \\ &= p (1 - p) \left(u - v\right)^2 \end{aligned}$$

Because these values are just constants, we’ll write them as such.

$$\mathbb{E}\left[R^{(\Delta t)}\right] = \mu \, \Delta t \qquad \mathbb{V}\left[R^{(\Delta t)}\right] = \sigma^2 \, \Delta t$$

Again, we write them this way to create quantities $\mu$ and $\sigma^2$ which are expressed per unit time, in anticipation of scaling the time interval of our trials later on. But at this point, $\Delta t$ is just a number, not a variable!

The next step is to find the first and second moments of the derived random variable, $R^{(k \, \Delta t)}$. Recall from the linear example that the goal is to establish a relationship between $R^{(\Delta t)}$ and $R^{(k \, \Delta t)}$ which can be extended to all values of $t$.

Across $k$ timesteps of length $\Delta t$, the expectation is binomial (below, $i$ counts the number of up-moves).

$$\begin{aligned} \mathbb{E}\left[1 + R^{(k \, \Delta t)}\right] = \mathbb{E}\left[\frac{S_{t + k \, \Delta t}}{S_t}\right] &= \mathbb{E}\left[u^i v^{k - i}\right] \\ &= \sum_i u^i v^{k - i} \Pr[i] \\ &= \sum_i \binom{k}{i} (u p)^i (v (1 - p))^{k - i} \\ &= (p u + (1 - p) v)^k \\ &= \mathbb{E}\left[1 + R^{(\Delta t)}\right]^k \end{aligned}$$

This also follows from the fact that expectations multiply across independent variates.

$$\begin{aligned} 1 + R^{(k \, \Delta t)} = \frac{S_{t + k \, \Delta t}}{S_t} &= \prod_{i=1}^{k} \left(1 + R_i^{(\Delta t)}\right) \\ \mathbb{E}\left[1 + R^{(k \, \Delta t)}\right] = \mathbb{E}\left[\prod_{i=1}^{k} \left(1 + R_i^{(\Delta t)}\right)\right] &= \prod_{i=1}^{k} \mathbb{E}\left[1 + R_i^{(\Delta t)}\right] \end{aligned}$$

This confirms that the expected total growth, and thus the expected value of $S_t$, grows exponentially, which is unsurprising.

The variance isn’t quite as clean:

$$\begin{aligned} \mathbb{V}\left[1 + R^{(k \, \Delta t)}\right] &= \mathbb{E}\left[\left(1 + R^{(k \, \Delta t)}\right)^2\right] - \mathbb{E}\left[1 + R^{(k \, \Delta t)}\right]^2 \\ &= \left( \sum_i u^{2i} v^{2 (k - i)} \Pr[i] \right) - \mathbb{E}\left[1 + R^{(\Delta t)}\right]^{2k} \\ &= \left(u^2 p + v^2 (1 - p)\right)^k - \mathbb{E}\left[1 + R^{(\Delta t)}\right]^{2k} \\ &= \left( \mathbb{V}\left[R^{(\Delta t)}\right] + \mathbb{E}\left[1 + R^{(\Delta t)}\right]^2 \right)^k - \mathbb{E}\left[1 + R^{(\Delta t)}\right]^{2k} \\ &= \left( \mathbb{V}\left[R^{(\Delta t)}\right] + \mathbb{E}\left[1 + R^{(\Delta t)}\right]^2 \right)^k - \left( \mathbb{E}\left[1 + R^{(\Delta t)}\right]^2 \right)^k \end{aligned}$$
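These closed forms are easy to sanity-check by enumerating the $k$-step binomial distribution directly; a sketch with arbitrary $u$, $v$, $p$, $k$:

```python
import math

u, v, p, k = 1.1, 0.95, 0.55, 12

# k-step binomial distribution over i up-moves; growth factor is u^i v^(k-i).
pmf = [math.comb(k, i) * p**i * (1 - p) ** (k - i) for i in range(k + 1)]
growth = [u**i * v ** (k - i) for i in range(k + 1)]

mean = sum(pr * g for pr, g in zip(pmf, growth))
second = sum(pr * g * g for pr, g in zip(pmf, growth))

print(mean, (p * u + (1 - p) * v) ** k)          # expectation formula agrees
print(second - mean**2,
      (u**2 * p + v**2 * (1 - p)) ** k
      - (p * u + (1 - p) * v) ** (2 * k))        # variance formula agrees
```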

Nonetheless, we have the necessary formulae to determine our binomial random variables for any grid resolution. Again, extend $k$ to $\mathbb{R}^+$, with $n = 1/k$.

$$\delta t := \frac{\Delta t}{n} \qquad\qquad \delta S := \Delta S^{(\delta t)}$$

$$\begin{aligned} \mathbb{E}\left[1 + R^{(\delta t)}\right] &= 1 + p R_1^{(\delta t)} + (1 - p) R_2^{(\delta t)} = (1 + \mu \, \Delta t)^{\delta t / \Delta t} \\ \mathbb{V}\left[1 + R^{(\delta t)}\right] &= p (1 - p) \left(R_1^{(\delta t)} - R_2^{(\delta t)}\right)^2 = \left( \sigma^2 \, \Delta t + \left(1 + \mu \, \Delta t\right)^2 \right)^{\delta t / \Delta t} - \left(1 + \mu \, \Delta t\right)^{2 \delta t / \Delta t} \end{aligned}$$

Since $\delta t$ becomes small, we can simplify using $a^\varepsilon = e^{\varepsilon \log a} \approx 1 + \varepsilon \log a$ for small $\varepsilon$.

$$\begin{aligned} \mathbb{E}\left[1 + R^{(\delta t)}\right] &= (1 + \mu \, \Delta t)^{\delta t / \Delta t} \\ &\approx 1 + \frac{\delta t}{\Delta t} \log (1 + \mu \, \Delta t) \\ \mathbb{V}\left[1 + R^{(\delta t)}\right] &= \left( \sigma^2 \, \Delta t + \left(1 + \mu \, \Delta t\right)^2 \right)^{\delta t / \Delta t} - \left(1 + \mu \, \Delta t\right)^{2 \delta t / \Delta t} \\ &\approx \frac{\delta t}{\Delta t} \log \left( \sigma^2 \, \Delta t + \left(1 + \mu \, \Delta t\right)^2 \right) - 2 \frac{\delta t}{\Delta t} \log (1 + \mu \, \Delta t) \\ &= \frac{\delta t}{\Delta t} \log \left( 1 + \frac{\sigma^2 \, \Delta t}{\left(1 + \mu \, \Delta t\right)^2} \right) \end{aligned}$$

Now we notice something very interesting. Just like in the linear case, we have the scaling of the random variables proportional to $\delta t$. But instead of the $\mu$ and $\sigma$ which describe the final distribution, we have new quantities.

$$\mu_\dagger = \frac{1}{\Delta t} \log(1 + \mu \, \Delta t) \qquad\qquad \sigma_\dagger^2 = \frac{1}{\Delta t} \log \left( 1 + \frac{\sigma^2 \, \Delta t}{\left(1 + \mu \, \Delta t\right)^2} \right)$$

What’s more, we can see that these values represent an expected value and variance of some other distribution on the $\Delta t$ timescale.

$$\mu_\dagger \, \Delta t = \log(1 + \mu \, \Delta t) \qquad\qquad \sigma_\dagger^2 \, \Delta t = \log \left( 1 + \frac{\sigma^2 \, \Delta t}{\left(1 + \mu \, \Delta t\right)^2} \right)$$
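In code, the transformation is a pair of one-liners, and a Monte Carlo sketch (arbitrary parameters) confirms that compounding many small "ghost" Gaussian returns reproduces the moments of the original $\Delta t$-scale model:

```python
import numpy as np

mu, sigma, dt = 0.10, 0.20, 1.0   # Δt-scale simple-return parameters

mu_dag = np.log(1 + mu * dt) / dt
sigma2_dag = np.log(1 + sigma**2 * dt / (1 + mu * dt) ** 2) / dt

# Compound n small ghost-Gaussian simple returns across one Δt.
n = 1_000
ddt = dt / n
rng = np.random.default_rng(3)
r = mu_dag * ddt + np.sqrt(sigma2_dag * ddt) * rng.standard_normal((100_000, n))
growth = (1 + r).prod(axis=1)     # S_Δt / S_0 for each simulated path

print(growth.mean(), 1 + mu * dt)   # both ≈ 1.10: expected growth recovered
print(growth.var(), sigma**2 * dt)  # both ≈ 0.04: variance recovered
```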

Now, let’s write our random variable using the new quantities.

$$R^{(\delta t)} = \frac{\delta S_t}{S_t} = \begin{cases} \mu_\dagger \, \delta t \pm \sigma_\dagger \sqrt{\delta t} \sqrt{\frac{1-p}{p}} & p \\ \mu_\dagger \, \delta t \mp \sigma_\dagger \sqrt{\delta t} \sqrt{\frac{p}{1-p}} & 1 - p \end{cases}$$

What we’ve done here is no small feat – we’ve created a "ghost" distribution $\mathcal{N}\left(\mu_\dagger \, \delta t, \sigma_\dagger^2 \, \delta t\right)$ from which to sample simple returns, which nonetheless results in the correct behavior for $S_t$! Again, we can convert this to an equivalent stochastic differential equation.

$$\begin{aligned} \frac{dS_t}{S_t} &= \mu_\dagger \, dt + \sigma_\dagger \phi \sqrt{dt} \\ dS_t &= \mu_\dagger S_t \, dt + \sigma_\dagger S_t \, dW_t \end{aligned}$$

The presence of $\Delta t$ in the distribution at any grid resolution means that simple returns result in growth rates which are tethered to the original sampling resolution. In other words, defining the problem starting at any particular timescale results in a different return growth rate, which is compensated for in $\mu_\dagger$ and $\sigma_\dagger$.

Indeed, this technique works:

Here I’ve defined a geometric binomial model by $p = 0.55$, $R_1^{(\Delta t)} = 0.11$, $R_2^{(\Delta t)} = -0.21$. $\Delta t$ is set to 1 w.l.o.g., and the mean path is interpolated to make the graph easier to read. The expectation of $S_t$ is exponential in time.

References

Bachelier, Louis. 1900. “Théorie de la spéculation.” Annales Scientifiques de l’École Normale Supérieure 17: 21–88.

Cox, John C., Stephen A. Ross, and Mark Rubinstein. 1979. “Option Pricing: A Simplified Approach.” Journal of Financial Economics 7 (3): 229–63. https://doi.org/10.1016/0304-405X(79)90015-1.

Samuelson, Paul A. 1971. “The "Fallacy" of Maximizing the Geometric Mean in Long Sequences of Investing or Gambling.” Proceedings of the National Academy of Sciences of the United States of America 68 (10): 2493–96. http://www.jstor.org/stable/61075.


  1. We’re in good company; the physicists are all here.
  2. $u$, $v$, and $\Delta t$ are real-world, non-infinitesimal quantities like 1.1 and "5 days."
  3. Due to the CLT, the binomial distribution becomes Gaussian in the limit, so it makes sense that we end up with two parameters at each timestep.

© 2018-2024 Brendan Schlaman. All rights reserved.