June 25, 2012

**The Setup**

In my last commentary, Randomly Trading, I presented the execution of a randomly generated trading strategy over randomly generated stock prices. The original intent was to answer someone on a LinkedIn forum on how to build a payoff matrix: Σ(**H**.*Δ**P**). So model 1 (a very basic Excel file) was provided to show how to set up all 4 of the needed matrices: **P**, Δ**P**, **H** and **H**.*Δ**P**. Each matrix deals with an aspect of the payoff matrix used to simulate a portfolio of 10 stocks over 250 trading days (about a 1-year trading interval).

Closing daily price variations for the **P** matrix were generated using a simple random function: (Rand()-0.5)*4, which had prices vary within +/- $2.00 per day. The Δ**P** matrix is simply the difference of closing prices from day to day (row to row) of the **P** matrix; an element-wise subtraction. The price variations in the Δ**P** matrix would, as expected, have a mean tending to zero or close to it.

The **H** matrix, the trading strategy itself, contained the running stock inventory for all 10 stocks over the entire holding period. In that Excel file (model 1), the trading strategy was designed to be totally random and defined as **H** = **B** – **S**; the running inventory level of shares held. It holds the number of shares bought minus the number of shares sold or shorted over time. The **B** and **S** matrices are of the sparse type, having mostly zero elements, and represent the trading decisions taken to increase or reduce the inventory as prices evolved from day to day.

As for the **H**.*Δ**P** matrix, it is the result of applying the strategy matrix **H** to the Δ**P** matrix; an element-wise multiplication. So the Δ**P** and **H**.*Δ**P** matrices hold little interest in themselves, as one is a subtraction and the other a multiplication; they only serve in the bean-counting process.
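The bookkeeping described above can be sketched in a few lines of Python. This is only a minimal illustration of the matrix setup, not the original spreadsheet: the $100 starting price, 100-share lots and 5% trade frequency are assumed values for demonstration purposes.

```python
import random

DAYS, STOCKS = 250, 10          # ~1 trading year, 10-stock portfolio
P0 = 100.0                      # hypothetical starting price for every stock (assumption)

random.seed(42)

# P: closing prices; each daily variation is (rand() - 0.5) * 4, i.e. within +/- $2.00
P = [[P0] * STOCKS]
for day in range(1, DAYS):
    P.append([P[day - 1][s] + (random.random() - 0.5) * 4 for s in range(STOCKS)])

# dP: day-to-day price differences (element-wise row subtraction)
dP = [[P[d][s] - P[d - 1][s] for s in range(STOCKS)] for d in range(1, DAYS)]

# B, S: sparse trade matrices -- on ~5% of days, buy or sell 100 shares at random
B = [[100 if random.random() < 0.05 else 0 for _ in range(STOCKS)] for _ in range(DAYS - 1)]
S = [[100 if random.random() < 0.05 else 0 for _ in range(STOCKS)] for _ in range(DAYS - 1)]

# H: the running inventory implied by B - S, accumulated over time
H = []
inv = [0] * STOCKS
for d in range(DAYS - 1):
    inv = [inv[s] + B[d][s] - S[d][s] for s in range(STOCKS)]
    H.append(inv)

# Payoff matrix H .* dP (element-wise product) and its grand total Sum(H .* dP)
HdP = [[H[d][s] * dP[d][s] for s in range(STOCKS)] for d in range(DAYS - 1)]
total_payoff = sum(sum(row) for row in HdP)
print(round(total_payoff, 2))
```

Pressing F9 in the spreadsheet corresponds to re-running this script with a new seed: new prices, new trades, new payoff.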

The two matrices **P** and **H** are the only ones of significance: the first because it contains all the price history, the prices at which trades can occur; the second because it contains the trading history of what was done over the whole trading interval; it holds the record of all the trading decisions taken. And it is those trading decisions that, in the end, matter.

Each time F9 was pressed in the Excel spreadsheet, a new randomly generated portfolio of 10 stocks would result. It would be the same as picking 10 stocks at random from an infinite stock universe. No two price series would be the same, within a single portfolio or from one portfolio to the next. The worst kind of trading environment for a wannabe trader.

You could analyze the past at any point in time and nothing in it could be useful to predict the future. You could do data mining, pattern recognition, run any technical indicator, set up Fibonacci lines, use stochastics or anything else, and nothing would help determine what to do next.

Naturally, with such a trading environment, using model 1, you could not win except by luck alone. You were, after all, using randomly generated trading decisions over randomly generated prices. It would be akin to flipping a coin to bet on someone playing heads or tails. Sometimes you would win and sometimes you would lose. The expected outcome of the game would tend to zero. And if a payoff matrix tends to zero, then you have a zero-sum game where your expected value is the same as everyone else's: zero. There is no free lunch to be had. The best you could do would be to achieve the same average as anyone else, and that is zero profits. And thereby no alpha generation; no over-performance except by luck alone.
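This zero-expectation claim is easy to check by Monte Carlo: repeat the random-trades-over-random-prices experiment many times and average the total payoffs. The trade sizes and frequencies below are illustrative assumptions, not values from model 1.

```python
import random

random.seed(7)

def one_random_run(days=250, stocks=10):
    """One model-1-style run: random +/- $2 price moves, random 100-share trades.
    Trade frequency (5%) and lot size (100) are assumptions for illustration."""
    payoff = 0.0
    inventory = [0] * stocks
    for _ in range(days):
        for s in range(stocks):
            dp = (random.random() - 0.5) * 4      # random daily price change
            payoff += inventory[s] * dp           # inventory held captures the move
            if random.random() < 0.05:            # random buy, independent of price
                inventory[s] += 100
            if random.random() < 0.05:            # random sell/short
                inventory[s] -= 100
    return payoff

runs = [one_random_run() for _ in range(300)]
mean_payoff = sum(runs) / len(runs)
print(round(mean_payoff, 2))   # small relative to the spread of individual outcomes
```

Individual runs swing widely positive and negative, but the average across runs hovers near zero, exactly the zero-sum behavior described above.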

You switch to model 2, where prices are allowed to fluctuate even more than in model 1. You add more random components, including “fat tails”: low-probability events of higher magnitude. Instantly, prices become more erratic and even more unpredictable than in model 1; you have introduced rare events, “outliers”, into your already randomly generated price structure. You are introducing improbable directional “gaps” in the price data series. Again, nothing will help “forecast” what the next step may be.

Therefore, in model 2, the price series for any of the stocks has become: an initial price to which is added the sum of random-like price variations of varying magnitude and varying probability of occurrence. But this is the same as having a random process with drift, diffusion and jump components (a Lévy process).
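Such a drift-diffusion-jump price generator can be sketched as follows. All parameter values (volatility, jump probability, jump size) are illustrative assumptions, not the ones used in model 2.

```python
import random

random.seed(11)

def jump_diffusion_path(days=250, p0=100.0,
                        drift=0.0, sigma=1.0,
                        jump_prob=0.02, jump_scale=8.0):
    """Price path with drift, diffusion and rare jump components (Levy-style).
    Parameter values here are assumptions for illustration only."""
    prices = [p0]
    for _ in range(days - 1):
        step = drift + random.gauss(0.0, sigma)       # drift + diffusion components
        if random.random() < jump_prob:               # rare, larger "fat tail" event
            step += random.choice((-1, 1)) * random.expovariate(1.0 / jump_scale)
        prices.append(prices[-1] + step)
    return prices

path = jump_diffusion_path()
print(len(path), round(path[-1], 2))
```

Most days the path diffuses quietly; roughly once every 50 days it gaps up or down by a much larger amount, producing the unpredictable "outliers" described above.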

By introducing added randomness to the generated price series, you have not simplified the trading process but have definitely added another level of complexity to the trading decision process itself. How could you now trade, knowing that your trading risk has greatly increased? An unpredictable major gap down could result in an immediate haircut.

But even with the added risk, a randomly generated trading strategy would still have a zero expected outcome. All you would see would be more unexpected price variations, more volatility, that you still could not predict, and with no way to knowingly establish a position before a price gap up in order to profit, or to save yourself from the loss of a random gap down. All you could do would be to speculate, guess or place your bet based on whatever concept might suit your fancy. You would be left trying to outguess randomness by whatever method of your choice. And therefore, your methodology, whatever it may be, would be as good as anyone else's. You would all be playing in a quasi-random trading room. Anyone could have a series of lucky moves up or disastrous gaps down.

What was shown in the Randomly Trading research note led to the design of model 2, where the whole trading process was modified. It was still randomly generated but acquired some conditionals: some “do it only if”s. And it is those trading rules that make the difference. In model 2, you pressed F9 to generate a totally new portfolio, not only in prices but also in the conditionally generated strategy, and you would win almost all the time (over 98% of the time). Model 2 was very easy to construct; it is what led, after a lot of improvements, to my first publication in 2007: Alpha Power. In that publication, the payoff matrix was for 50 stocks by 1,000 trading weeks. That paper was followed by the Jensen Modified Sharpe in 2008, where the payoff matrix had a size of 100 stocks by 2,000 weeks. Each time you pressed Enter, the payoff matrix would show exponential growth; it was generating exponential alpha.
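The general shape of such a conditional random strategy, random triggers gated by "do it only if" rules, can be sketched as below. The specific conditions used here (buy only after an up day, trim only after a down day while holding shares) are hypothetical stand-ins for illustration; they are NOT the actual rules of model 2, and this sketch makes no claim about that model's profitability.

```python
import random

random.seed(3)

def conditional_strategy_run(days=250, stocks=10, lot=100):
    """Random prices with conditionally gated random trades.
    The gating rules below are hypothetical stand-ins, NOT model 2's rules."""
    payoff = 0.0
    inventory = [0] * stocks
    last_dp = [0.0] * stocks
    for _ in range(days):
        for s in range(stocks):
            dp = (random.random() - 0.5) * 4      # random daily price change
            payoff += inventory[s] * dp
            # Still randomly triggered, but only under a condition ("do it only if"):
            if last_dp[s] > 0 and random.random() < 0.10:
                inventory[s] += lot               # buy only if the prior day was up
            elif last_dp[s] < 0 and random.random() < 0.05 and inventory[s] >= lot:
                inventory[s] -= lot               # trim only if down day and long
            last_dp[s] = dp
    return payoff

result = conditional_strategy_run()
print(round(result, 2))
```

The point of the sketch is structural: the **B** and **S** matrices remain sparse and random, but each entry is now conditional on the state of the portfolio and of prices, which is what distinguishes model 2 from model 1.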

The whole financial investment literature says that you can't win over randomly generated prices, and most certainly not by using randomly generated trades (conditional or not). But here, the Randomly Trading article says yes, you can, and with relative ease.

One could switch to model 3, which has a better trading philosophy (meaning higher profits), and win 100% of the time. Not that one would win on every stock in the portfolio or on every trade, but one would still win on the overall portfolio. Because of the randomly generated price series, about half of the price variations would still see red on any given day, and the portfolio would see the same relative losses and gains. Look at the model 2 charts, where red appears to be randomly distributed in the **P** and **H**.*Δ**P** matrices.

What are the implications of models 2 and 3? The main output of the payoff matrix Σ(**H**.*Δ**P**) is an exponential function, and this leads to exponential alpha generation. It should, on average, have tended to zero just like model 1, but even if I run it a thousand times, both models' (2 and 3) payoff matrices remain positive and exponential. There is a need for a reasonable explanation for models 2 and 3; a need to find a mathematical model that can explain the phenomenon. And if the explanation is reasonable, backed by simulations on these randomly generated prices and conditional random trading decisions, then the foundation of a “new” trading methodology might be settling in.

… to be continued...

Created... June 25, 2012 © Guy R. Fleury. All rights reserved