July 6, 2019

Over the past 2 years, I have covered a lot of the inner workings of my trading methodology on my website and in posts on the Quantopian forums. I find the methodology relatively simple and hope that, from what has been presented, anyone could reengineer their own strategies and make them fly. That way, everyone remains responsible for whatever they do.

Nonetheless, what I do is based on a different understanding of the game, since I accept a lot of randomness and its implications in the evolution of stock prices. I would recommend looking especially at my recent work, but you will find the same approach covered in different ways in my free papers dating back to 2011 and earlier.

I consider that I have only scratched the surface of the possibilities and that there is a multitude of ways of finding acceptable compromises in this profit-generating endeavor.

I find looking for the ultimate trading strategy simply utopian. Just find one you like, or a dozen if you wish, and then live with it. You will not be able to know which would have been the best until you reach the end-game anyway, and that is 20 to 30+ years from now.

All we can do is design decent trading strategies, do our homework as best we can, test under the most realistic market conditions, account for all frictional costs, and try, in some way, to forecast for the long term.

It is why I got interested in the cited strategy in the first place. It dealt with a niche market that is poised to continue to prosper over the coming years. It is like trying to answer the question: will we need more computing power in the years to come? Having been in computing and software since the '70s, I definitely answer yes. We will need a tremendous amount of it, and the companies supplying that power will benefit just as we will from using more powerful machines distributed all over the world, especially with the advent of 5G and autonomous almost everything.

De Prado, in a recent lecture for Quantopian, said: “we look at past stock prices, but we also erase their history when doing so,” or words to that effect. I think he is right, just as he was when he showed that the Sharpe ratio tends to increase the more strategy simulations we run, and therefore becomes unreliable as a forecaster of what is coming our way.

But the reasons he gives for the phenomenon might not be the right ones. The equation for an asset's Sharpe ratio is \\( SR_i = E[r_i - r_f] / \sigma_i \\), which has a long-term historical value of about 0.40 when considering the average for the whole market of traded stocks.
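As a quick sanity check on that 0.40 figure (using illustrative numbers of my own, not De Prado's): with a long-term excess return of about 6% and an annualized volatility of about 15%, we get \\( SR \approx 0.06 / 0.15 = 0.40 \\).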

An average is just that, an average. But the problem changes when you add some alpha into that picture, as in: \\( SR_i = (E[r_i - r_f] + \alpha_i) / \sigma_i \\). It transforms the return equation into \\( F(t)_i = p_{0,i} \cdot (1 + E[r_i - r_f] + \alpha_i)^t \\), or \\( F(t)_i = p_{0,i} \cdot (1 + \beta \cdot E[r_i - r_f] + \alpha_i)^t \\). If you play the market average, the beta will be one.
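A minimal sketch of that compounding effect, using assumed figures (an 8% market return and a 2% alpha over 30 years; none of these numbers come from the strategy discussed here):

    # Compound a market-average return with and without an added alpha.
    # All figures are assumed for illustration only.
    p0 = 100.0                 # initial capital (assumed)
    r, alpha, years = 0.08, 0.02, 30

    f_no_alpha = p0 * (1 + r) ** years            # F(t) = p0 * (1 + r)^t
    f_with_alpha = p0 * (1 + r + alpha) ** years  # F(t) = p0 * (1 + r + alpha)^t

    print(f"Without alpha: {f_no_alpha:10,.0f}")    # about 1,006
    print(f"With alpha:    {f_with_alpha:10,.0f}")  # about 1,745

Even a 2% alpha, compounded over the long horizons discussed above, ends up dominating the outcome.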

The consequence is that the Sharpe ratio is not rising, almost like a sigmoid, because you are running more Monte Carlo simulations; it is due to the very nature of the average price movement itself. It is a compounding return game, and the general market does have a long-term upward bias.

De Prado demonstrated in one chart the behavior of the Sharpe ratio as the number of tests increased. I accept the curve, but not necessarily the reason. The Sharpe ratio should look more like a sigmoid function due to the law of diminishing returns.
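To illustrate the shape of that curve, here is a small Monte Carlo sketch of my own construction (not De Prado's code), assuming each trial's Sharpe ratio is pure noise drawn from a standard normal distribution:

    # Expected best Sharpe ratio found after N independent trials of pure
    # noise: it keeps rising with N, but with diminishing returns.
    import numpy as np

    rng = np.random.default_rng(0)
    for n_trials in (1, 10, 100, 1_000, 10_000):
        best = [rng.standard_normal(n_trials).max() for _ in range(500)]
        print(f"{n_trials:>6} trials -> E[max SR] ~ {np.mean(best):.2f}")

The expected maximum grows roughly like \\( \sqrt{2 \ln N} \\): fast at first, then flattening out, which fits a law-of-diminishing-returns reading of the curve.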

But, because we erase the long-term history (price memory) in our short-term calculations, we are erasing one of the components of another representation of security prices, the stochastic equation: \\(dp_t = \mu \cdot p_t \cdot dt + \sigma \cdot p_t \cdot dW_t \\), where we technically and practically annihilate the drift component \\( \mu \cdot p_t \cdot dt \\). Doing this removes the underlying long-term trend and leaves us with a Wiener process.
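To make that drift-annihilation point concrete, here is a minimal simulation sketch (the parameter values are assumptions of mine): the same random shocks produce an upward-trending path when the drift term is present and a trendless one when it is removed.

    # Simulate dp = mu*p*dt + sigma*p*dW with and without its drift term,
    # using the exact GBM solution and assumed parameters.
    import numpy as np

    rng = np.random.default_rng(42)
    mu, sigma = 0.07, 0.15                    # assumed drift and volatility
    dt, steps = 1 / 252, 252 * 20             # daily steps over 20 years
    t = np.arange(1, steps + 1) * dt
    W = np.cumsum(rng.standard_normal(steps) * np.sqrt(dt))  # Wiener path

    # p_t = p0 * exp((mu - sigma^2 / 2) * t + sigma * W_t)
    with_drift = 100.0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)
    no_drift = 100.0 * np.exp(-0.5 * sigma**2 * t + sigma * W)  # mu = 0

    print(f"Final price with drift:    {with_drift[-1]:8.1f}")
    print(f"Final price drift removed: {no_drift[-1]:8.1f}")

With \\( \mu = 0 \\), the process has no expected growth; what remains is the randomness carried by \\( dW_t \\).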

This can be viewed in a simple chart too. I took the Sharpe ratio chart from the last test presented.

[Chart: Sharpe Ratio, from the last test presented]

What the above chart shows is that the Sharpe ratio was relatively contained. Not knowing better, we might assume it operated within boundaries as an erratic cyclical function with a random, noisy signal superimposed on it, when it should have been increasing with time as described. Where did the expected rise go, since there was alpha generation?

We need to better understand the math of the game and, if necessary, even question old premises that might have applied to Buy & Hold scenarios and the efficient frontier but are ill-suited to describing dynamic trading systems with compounding alpha generation.


Created: July 6, 2019, © Guy R. Fleury. All rights reserved.