How Not to Visualize Martingales

Published February 14, 2024


A cautionary tale about the subtleties of stochastic process simulation


TL;DR

While exploring the properties of various continuous-time stochastic processes, I had a bit of fun simulating them in Python using discrete approximations generated via the Euler-Maruyama method.

In order to check my work when manipulating various stochastic random variables using Itô calculus, I built a Martingale Detector - a crude tool which tests a drift hypothesis via Monte Carlo methods.

When the naive detection scheme yielded unexpected results for certain processes, I discovered that I had implicitly baked a faulty assumption into my code.

I believe that the bug described below is an instructive example of the consequences of eschewing rigor when designing simulations. This essay is equal parts mathematical candy and postmortem.

While my use for the tool was rather pedestrian (checking my work on some practice problems), testing for signals like Martingale and Markov properties in stochastic time series is actually of great interest in fields like finance and physics.

The usual disclaimer: The purpose of this essay is to document my experience and present some food for thought to the reader. The procedures described below should not be relied upon for anything important.

A brief review of Martingales

Assume a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathbb{P})$ is given. An $\mathcal{F}_t$-adapted process $Y_t$ is a continuous-time martingale if and only if $\mathbb{E}\left[|Y_t|\right] < \infty$ and

$$\mathbb{E}\left[Y_t | \mathcal{F}_s\right] = Y_s, \quad \forall \; 0 \le s \le t$$

Proving the martingale conditions for simple stochastic processes which can be expressed in closed form often requires little more than some massaging using the tools of Itô calculus and basic probability theory. For example, for processes $F_t = F(W_t, t)$ which can be expressed as functions of a standard Wiener process $W_t$ and time $t$, it is sufficient to prove

$$\frac{\partial F}{\partial t} + \frac{1}{2} \frac{\partial^2 F}{\partial W^2} = 0$$
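As a quick sanity check of this condition, a few lines of sympy suffice (a minimal sketch, assuming sympy is available; the process $e^{t/2} \cos W_t$ reappears later in this essay):

```python
import sympy as sp

W, t = sp.symbols("W t")
F = sp.exp(t / 2) * sp.cos(W)  # candidate Martingale, revisited below

# The drift term from Itô's lemma must vanish identically.
drift = sp.diff(F, t) + sp.Rational(1, 2) * sp.diff(F, W, 2)
print(sp.simplify(drift))  # prints 0
```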

Still, I wanted a quick-and-dirty way to check my work, preferably with a method that makes few assumptions about the underlying dynamics.

The Naive Martingale Detector

The problem domain

I didn’t bother formalizing requirements for the types of processes which the tool should handle. The general form of stochastic differential equations describes far too broad a domain of processes and admits many pathological examples that could evade any numerical drift detector I throw at them [1].

$$dX_t = \mu(t, X_t) \, dt + \sigma(t, X_t) \, dW_t$$

I essentially restricted $\mu(t, X_t)$ and $\sigma(t, X_t)$ to analytic functions or Itô integrals of analytic functions. I may revisit this topic in the future to further generalize the revised tool presented at the end of this essay.

While fancy non-parametric techniques like bootstrapping might be a more mathematically sound tool for the job, Monte Carlo will essentially give me a population standard deviation estimate ($\hat{\sigma} = s$) to any degree of precision my CPU can stand, thus making a $z$-test a reasonable choice. CLT, take the wheel!

A better name for this tool might be the Drift Detector [2], since we will be evaluating a Martingale null hypothesis.

$$H_0: \mathbb{E}\left[Y_t\right] - Y_0 = 0 \quad \forall \, 0 \le t \le T$$

Spoiler alert:

The above null is implied by the Martingale property but is NOT equivalent to it; the nature of this mistake is the primary subject of this essay.

Aiming for simplicity, I ran a $z$-test for each timestep $t = i \, \delta t$ and rejected the null upon discovery of any breaches.

Detection procedure

For the following, let $t_i \equiv i \, \delta t$. [3]

My naive (indeed faulty!) scheme for drift detection is essentially a series of hypothesis tests driven by Monte Carlo simulation. The details:

  1. Sample $n \times T$ stochastic increments $dW_t \equiv \delta W_{t_i}, \ i \in \{0, \dots, T - 1\}$, from a Gaussian [4]; i.e. $\delta W_{t_i} \sim \mathcal{N}(0, \delta t)$.

  2. Perform discrete integration of the $dW_t$ to obtain $W_t$, ensuring that $W_0 = 0$.

  3. Shift the $dW_t$ to the left such that any vectorized operations on $W_t$ and $dW_t$ are properly aligned (I may describe this step in further detail later, but it is largely unimportant for this discussion).

  4. Construct $Y_t = f(W_t, t)$ via Euler-Maruyama. At this point, path realizations of $Y_t$ can be rendered to get a sense of their dynamics.

  5. Drift detection step: Compute $z$-statistics for each timestep [5] and test the null; a code sketch of the full procedure follows this list.

    $$z_t = \frac{\bar{Y}_t - Y_0}{\hat{\sigma}_t / \sqrt{n}} \approx \frac{\frac{1}{n} \sum_{i=1}^n Y_t^{(i)} - Y_0}{s_t / \sqrt{n}}$$
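To make the procedure concrete, here is a minimal sketch of the naive detector (the function name and parameter choices are mine, not production code). For simplicity, it applies a closed-form $f(W_t, t)$ directly to the simulated paths rather than stepping through Euler-Maruyama:

```python
import numpy as np
from scipy.stats import norm

def naive_martingale_detector(f, T=1.0, n_steps=500, n_paths=20_000,
                              alpha=0.05, seed=0):
    """Naive drift test: z-test of E[Y_t] - Y_0 = 0 at each timestep."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)

    # Steps 1-2: sample Gaussian increments, integrate to W_t with W_0 = 0.
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])

    # Step 4: construct the process of interest from the Brownian paths.
    Y = f(W, t)

    # Step 5: z-statistic at each timestep t > 0, tested against Y_0.
    Y0 = Y[:, 0].mean()
    se = Y[:, 1:].std(axis=0, ddof=1) / np.sqrt(n_paths)
    z = (Y[:, 1:].mean(axis=0) - Y0) / se
    breaches = np.abs(z) > norm.ppf(1 - alpha / 2)
    return t[1:][breaches]  # timesteps where the null is rejected

# The Martingale e^{t/2} cos(W_t): expect few breaches (some false alarms
# are inevitable, since we run one test per timestep at level alpha).
print(naive_martingale_detector(lambda W, t: np.exp(t / 2) * np.cos(W)).size)
```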

Below are some examples of the detector working as designed.

Figure 1: A Monte Carlo path simulation with significance level $\alpha = 0.05$ shown as black lines. The test correctly fails to reject the null for Martingale $e^{t/2} \cos W_t$ and correctly rejects the Martingale null for near-Martingale $e^{t/1.9} \cos W_t$ (breaches shown as red circles).

Notice the growing critical region boundaries (in black) as the processes diffuse.

For good measure, I included the Wiener process, which passes unsurprisingly. I also cooked up some more exotic test cases, like processes which are locally driftless at the boundaries of the interval. This test will catch these types of processes, up to the resolution of the discretization.

Figure 2: This process satisfies the Martingale properties locally but not globally.

Alas, it was too good to be true!

I quickly noticed the detector making Type 2 errors for certain processes, some of which are shown below.

Figure 3: Examples of non-Martingales which caused Type 2 errors in the detector.

In the case of $Y_t = (W_t - t) e^{W_t + 2t}$, we do indeed see a few breaches. However, they do not appear on all seeds, and when they do, the $p$-values are very close to the significance level. We will treat this example as a false negative.

I suspect this is a sampling error which is easily resolved by generating more paths. Nevertheless, I have kept this seed in the essay to demonstrate that getting these types of tests right is tricky!

In order to diagnose the problem, we need to analyze the two non-Martingales shown above [6].

  1. $X_t = e^{\alpha t} \cos W_t$ (correctly identified)

  2. $Y_t = (W_t - t) e^{W_t + k t}$ (false negative)

I set out to build a tool to sanity-check my math; now we must rely on the math to debug the tool!

Non-Martingale examples

Non-Martingale 1: correctly identified

Consider the process

$$X_t = e^{\alpha t} \cos W_t; \quad \alpha \in \mathbb{R}$$

Then $\mathbb{E}\left[X_t | \mathcal{F}_s\right] = e^{\alpha t} \mathbb{E}\left[\cos W_t | \mathcal{F}_s\right]$. To proceed further, define $f(W_t, t) = e^{t/2} \cos W_t$. By Itô’s lemma,

$$df_t = -e^{t/2} \sin W_t \, dW_t$$

This stochastic differential equation is equivalent to the more flexible integral form:

$$f_t - f_s = - \int_s^t e^{u/2} \sin W_u \, dW_u$$

Rearranging,

$$\begin{gathered} \cos W_t = e^{-(t - s)/2} \cos W_s - \int_s^t e^{(u - t)/2} \sin W_u \, dW_u \\ \therefore \mathbb{E}\left[\cos W_t | \mathcal{F}_s\right] = e^{-(t - s)/2} \cos W_s \end{gathered}$$

Finally, we have

$$\mathbb{E}\left[X_t | \mathcal{F}_s\right] = e^{\alpha t} e^{-(t - s)/2} \cos W_s = e^{\left(\alpha - \frac{1}{2}\right) (t - s)} X_s$$

So $X_t$ is not a martingale for any $\alpha \ne 1/2$.
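This conditional expectation is easy to sanity-check numerically: fix a value of $W_s$ and simulate only the increment $W_t - W_s$. (A quick sketch; the values of $s$, $t$, and $W_s$ are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
s, t, Ws = 1.0, 2.0, 0.7  # condition on an observed value of W_s

# Given F_s, W_t is Gaussian around W_s with variance t - s.
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), size=1_000_000)

print(np.cos(Wt).mean())                  # Monte Carlo estimate
print(np.exp(-(t - s) / 2) * np.cos(Ws))  # closed form: e^{-(t-s)/2} cos(W_s)
```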

Non-Martingale 2: false negative

Consider the following process:

$$Y_t = (W_t - t) e^{W_t + k t}; \quad k \in \mathbb{R}$$

The conditional expectation is relatively easy to calculate. There may be more elegant methods, but I computed the expectation by brute force, taking advantage of the normality of the quantity $W_t - W_s$.

$$\begin{aligned} \mathbb{E}\left[Y_t | \mathcal{F}_s\right] &= e^{kt} \mathbb{E}\left[(W_t - t) e^{W_t} | \mathcal{F}_s\right] \\ &= e^{kt} \left( -t \mathbb{E}\left[e^{W_t} | \mathcal{F}_s\right] + \mathbb{E}\left[W_t e^{W_t} | \mathcal{F}_s\right] \right) \\ &= e^{kt} \left( -t \mathbb{E}\left[e^{W_t} | \mathcal{F}_s\right] + \mathbb{E}\left[(W_t - W_s) e^{W_t - W_s} e^{W_s} | \mathcal{F}_s\right] + \mathbb{E}\left[W_s e^{W_t - W_s} e^{W_s} | \mathcal{F}_s\right] \right) \\ &= e^{kt} \left( -t \mathbb{E}\left[e^{W_t} | \mathcal{F}_s\right] + e^{W_s} \mathbb{E}\left[(W_t - W_s) e^{W_t - W_s} | \mathcal{F}_s\right] + W_s e^{W_s} \mathbb{E}\left[e^{W_t - W_s} | \mathcal{F}_s\right] \right) \end{aligned}$$

There are then just two functions of Gaussian quantities for which the expectation must be found. Let $x \equiv W_t - W_s$.

$$\begin{aligned} \mathbb{E}\left[e^{W_t - W_s} | \mathcal{F}_s\right] &= \frac{1}{\sqrt{2 \pi (t - s)}} \int_\mathbb{R} e^x e^{-\frac{x^2}{2 (t - s)}} \, dx \\ \mathbb{E}\left[(W_t - W_s) e^{W_t - W_s} | \mathcal{F}_s\right] &= \frac{1}{\sqrt{2 \pi (t - s)}} \int_\mathbb{R} x e^x e^{-\frac{x^2}{2 (t - s)}} \, dx \end{aligned}$$

It is trivial to show that

$$\begin{aligned} \mathbb{E}\left[e^{W_t - W_s} | \mathcal{F}_s\right] &= e^{(t - s)/2} \\ \mathbb{E}\left[(W_t - W_s) e^{W_t - W_s} | \mathcal{F}_s\right] &= (t - s) e^{(t - s)/2} \end{aligned}$$
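For completeness, both identities follow from completing the square in the exponent:

$$e^x e^{-\frac{x^2}{2(t - s)}} = e^{(t - s)/2} \, e^{-\frac{\left(x - (t - s)\right)^2}{2(t - s)}}$$

so the first integrand is $e^{(t - s)/2}$ times a $\mathcal{N}(t - s,\, t - s)$ density (which integrates to one), and the second is $e^{(t - s)/2}$ times that density’s mean, $t - s$.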

Finally,

$$\begin{aligned} \mathbb{E}\left[Y_t | \mathcal{F}_s\right] &= e^{kt} \left( -t e^{(t - s)/2} e^{W_s} + e^{W_s} (t - s) e^{(t - s)/2} + W_s e^{W_s} e^{(t - s)/2} \right) \\ &= e^{kt} e^{(t - s)/2} e^{W_s} \left( -t + (t - s) + W_s \right) \\ &= e^{kt - ks} e^{(t - s)/2} (W_s - s) e^{W_s + k s} \\ &= e^{\left(k + \frac{1}{2}\right) (t - s)} Y_s \end{aligned}$$

So $Y_t$ is not a martingale for any $k \ne -1/2$.

This bears a striking resemblance to the result for $\mathbb{E}\left[X_t | \mathcal{F}_s\right]$. So why does the martingale detection method work for $X_t = e^{\alpha t} \cos W_t$, but not for $Y_t = (W_t - t) e^{W_t + k t}$?

So, what went wrong?

The Monte Carlo techniques used by the Naive Martingale Detector work by path averaging, i.e.

$$\bar{Y}_t = \frac{1}{n} \sum_{i=1}^n Y_t^{(i)} \xrightarrow{P} \mathbb{E}\left[Y_t\right]$$

where convergence in probability is guaranteed by the Weak Law of Large Numbers.

Spot the mistake? $\bar{Y}_t$ converges on the unconditional expectation of $Y_t$, whereas the martingale property is defined by the expectation of $Y_t$ conditional on the filtration $\{\mathcal{F}_s\}_{0 \le s \le t}$!

For stochastic processes whose expectation conditioned on $\mathcal{F}_0$ is constant, the Naive Martingale Detector will erroneously fail to reject the null: a false negative!

The originally proposed null hypothesis is invalid because it constrains only the expectation conditional on $\mathcal{F}_0$ and on no other information. It is not equivalent to the Martingale drift property.
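A small simulation makes the failure mode vivid for $Y_t = (W_t - t) e^{W_t + 2t}$: the path average is statistically indistinguishable from zero, yet conditioning on an observed $W_s$ exposes the drift. (A sketch with arbitrary parameter choices; the closed form used for comparison is the one derived above, with $k = 2$.)

```python
import numpy as np

rng = np.random.default_rng(0)
k, t = 2.0, 1.0

# Unconditional expectation: the quantity the naive detector estimates.
W = rng.normal(0.0, np.sqrt(t), size=2_000_000)
print(((W - t) * np.exp(W + k * t)).mean())  # ~0: no drift to detect!

# Conditional expectation given F_s: what the Martingale property demands.
s, Ws = 0.5, 1.0  # suppose we observed W_s = 1.0 at time s
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), size=2_000_000)
Ys = (Ws - s) * np.exp(Ws + k * s)
print(((Wt - t) * np.exp(Wt + k * t)).mean())  # Monte Carlo estimate
print(np.exp((k + 0.5) * (t - s)) * Ys)        # closed form agrees (~12.9)
```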

Hopefully it’s clear now why the Naive Martingale Detector failed or succeeded for the processes discussed earlier.

  • The process $X_t = e^{\alpha t} \cos W_t$ has unconditional expectation $\mathbb{E}\left[X_t\right] = e^{\left(\alpha - \frac{1}{2}\right) t}$.
    NMD correctly identified the drift.

  • The process $Y_t = (W_t - t) e^{W_t + k t}$ has unconditional expectation $\mathbb{E}\left[Y_t\right] = 0 \; \forall t$.
    NMD false negative.

  • The process $Z_t = \frac{W_t^2}{t}$ has unconditional expectation $\mathbb{E}\left[Z_t\right] = 1 \; \forall t$.
    NMD false negative.

Fixing the Naive Martingale Detector

There are many ways to fix the Naive Martingale Detector; I present two below. The first is my own solution and admittedly a bit hacky; the second is the test used in most papers on this topic.

Solution 1: Mind the filtration

This solution is a bit crude but allows us to keep most of the code from the Naive Martingale Detector.

After implementing this, I was pleased to find that a more refined version of this strategy was used successfully in Park and Whang (2005).

Modify the hypothesis:

$$H_0(s, x): \mathbb{E}\left[Y_t - Y_s | Y_s > x\right] = 0 \quad \forall \, s \le t \le T$$

for some $0 \le s \le T$ and some $x \in \mathbb{R}$.

The conditioning is the key here; there may be many expressions which work. I found that a simple threshold filter yields good results.

The best results are obtained by setting test parameters $s$ and $x$ manually for each process; luckily, this is not hard to do. A filter at time $s = \left\lfloor \frac{T}{4 \delta t} \right\rfloor$ (about 25% along the $t$ axis) works for all processes, but doesn’t always lead to good visuals. At first, I somewhat arbitrarily used $x = 2 \hat{\sigma}(s)$ but switched to filtering the upper quartile to accommodate processes with highly clustered distributions.
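Concretely, the modification amounts to a few lines on top of the naive detector: keep only the paths in the conditioning event $\{Y_s > x\}$, then run the same $z$-tests on $Y_t - Y_s$ over the survivors. A rough sketch, assuming a path matrix `Y` of shape `(n_paths, n_steps + 1)` and timestamps `t` as produced by the earlier snippet:

```python
import numpy as np
from scipy.stats import norm

def filtered_drift_test(Y, t, alpha=0.05):
    """Solution 1 sketch: test E[Y_t - Y_s | Y_s > x] = 0 for t >= s."""
    s_idx = (Y.shape[1] - 1) // 4       # s at ~25% of the horizon
    x = np.quantile(Y[:, s_idx], 0.75)  # filter the upper quartile

    survivors = Y[Y[:, s_idx] > x]      # paths in the conditioning event
    diffs = survivors[:, s_idx + 1:] - survivors[:, [s_idx]]

    se = diffs.std(axis=0, ddof=1) / np.sqrt(survivors.shape[0])
    z = diffs.mean(axis=0) / se
    return t[s_idx + 1:][np.abs(z) > norm.ppf(1 - alpha / 2)]
```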

Here, we revisit the false negatives from above, this time armed with the new detector.

Figure 4: Corrected Detector with unconditional expectation (red) and filtered expectation (orange).

Success! Now, we must ensure there is no regression for the first examples in this essay.

Figure 5: Qualitatively, these Martingales exhibit much less drift after filtering when compared to their non-Martingale cousins above.

Unfortunately, we have indeed regressed here, erroneously rejecting the null hypothesis for the process $Y_t = e^{t/2} \cos W_t$. On the bright side, it’s not due to a flaw in logic, but it does expose a weakness of this approach. I believe sampling error is the culprit in this instance, since we filter out 75% of the samples before testing. It’s trivial to compensate for this with a larger initial sample size.

I am confident that one could construct a clever process which could systematically break this revised procedure. If you can find one, I’d love to hear about it!

Solution 2: Autoregressive model

Up to this point, every solution discussed (valid or otherwise) just throws CPU power at the problem quite brutishly. This approach is often tempting (and incredibly powerful), especially when time is of the essence.

But I would be doing the reader a great disservice if I did not mention the right way to solve this problem. This approach is not my own; full credit goes to the many people who have contributed to its development.

Instead of analyzing cross-sections of samples, model the sequence and test for the strength of the drift term in an AR(p) model. For AR(1), model

$$X_t = \mu + \theta X_{t-1} + \varepsilon_t$$

To make this useful, assume a unit root ($\theta = 1$) so that

$$X_t = \mu t + \sum_{s=1}^{t} \varepsilon_s + X_0$$

Thus, we have our Martingale null hypothesis:

$$H_0 : \mathbb{E}\left[\Delta X_t - \mu \;\big\vert\; \mathcal{F}_{t-1}\right] = 0$$

Clever indeed! For more, see Phillips and Jin (2014).
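To give a flavor of this family of tests, here is a deliberately simplified sketch: under the null, the demeaned increments $\Delta X_t - \mu$ should be unpredictable from the past, so a regression of them on the lagged level $X_{t-1}$ should have zero slope. (This is my illustrative stand-in, not the actual procedure of Phillips and Jin (2014), and it ignores the nonstandard limit theory that proper unit-root inference requires.)

```python
import numpy as np
from scipy import stats

def mds_regression_check(X):
    """Crude martingale-difference check: regress demeaned increments
    on the lagged level; a nonzero slope suggests predictable drift."""
    dX = np.diff(X)
    u = dX - dX.mean()                    # remove the mu drift term
    result = stats.linregress(X[:-1], u)  # slope should be ~0 under H0
    return result.slope, result.pvalue

# A pure random walk should pass (slope ~0, large p-value).
rng = np.random.default_rng(1)
print(mds_regression_check(np.cumsum(rng.normal(size=10_000))))
```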

Final thoughts

I’d like to document some of my takeaways here, written as advice to my future self.

Just because it works in practice, doesn’t mean it works in theory!

Wikipedia lists the 4 defining properties of the Wiener process $W_t$. I designed the path generation procedure described in this essay to preserve properties 1, 3, and 4 (obviously sacrificing continuity). The path generation itself worked well.

However, the mere existence of a functioning discretization mechanism should not lead you to assume that some continuous analog of that mechanism even exists, much less is the fundamental engine of the object of study!

For a while, I operated with the mental model that a realization of the Wiener process was constructed by some mysterious mathematical supercomputer running my same algorithm but with an uncountably infinite array of $dW_t$. But of course, this is the wrong mathematical framing! Instead, entire paths (on some interval) are indexed by samples $\omega \in \Omega$, and we observe the properties of these objects through the $t$-indexed lenses provided by a filtration (a proxy for real-world ignorance) on the underlying probability space.

Notice that $dW_t$ appears nowhere in the 4 defining properties of the Wiener process. $dW_t$ really comes from Itô calculus [7], which emerged when Itô generalized the Riemann-Stieltjes integral to stochastic settings in the 1940s, two decades after Wiener’s formalization.

More lessons: lightning round

  1. There is a place for rigor when designing simulations, and it can be rewarding to nail down seemingly abstract mathematical objects.

  2. When designing and debugging simulations, be explicit about your assumptions; own and challenge them. Understand what you lose when discretizing.

  3. Use caution when switching between mathematical contexts. Brownian motion and its cousins are interesting objects of study because they are fundamentally random and fundamentally continuous.

    Simulations always make compromises. But simulations like this have additional (sometimes quite subtle) sources of potential error (e.g. discretization error, sampling error, even accumulating floating point errors).

  4. Quality testing of code is critical; focus on edge cases and combinations of edge cases.

References

Park, Joon Y., and Yoon-Jae Whang. 2005. “A Test of the Martingale Hypothesis.” Studies in Nonlinear Dynamics & Econometrics 9 (2): 1–32. https://doi.org/10.2202/1558-3708.1163.

Phillips, Peter C. B., and Sainan Jin. 2014. “Testing the Martingale Hypothesis.” Journal of Business & Economic Statistics 32 (4): 537–54. https://doi.org/10.1080/07350015.2014.908780.


  1. For example, hide drift at some irrational number $t$ that the discretization $i \, \delta t$ will miss.
  2. but Martingale Detector is better for branding!
  3. In order to simplify the notation throughout, I often switch between continuous-time and discrete-time notation without much justification. I’ll hand wave here and appeal to the strong convergence of order $p = 1/2$ of Euler-Maruyama; discretization does not play a major role in the arguments herein.
  4. Of course, this need not be a Gaussian so long as the first and second moments are correct!
  5. I’m sure there is some kind of fancy aggregation statistic or early-stopping method that would be more efficient, but this worked fine for a first attempt.
  6. I’ve added constants ($\alpha$ and $k$) to $X_t$ and $Y_t$ to make the math more general. As you will see, only particular values of these constants make the processes Martingales, and those values were not used above.
  7. I’m not sure if stochastic increments appeared before this; the point is that Wiener’s original model did not use them.

© 2018-2024 Brendan Schlaman. All rights reserved.