June 28, 2019

The following was posted on the Quantopian website.

I got interested in Stefan's trading strategy after seeing the “Cumulative Return on Logarithmic Scale” chart in a tearsheet. It showed alpha generation, represented by the steady widening of the spread between the algo and its benchmark.

I understand that this is a niche trading strategy specifically oriented toward cloud and AI computing. Nonetheless, we should look at the stock market from a long-term perspective, and forecasting that we will need more from our machines should be considered an understatement. With the advent of 5G, this trend will accelerate and enable all new kinds of devices (IoT) requiring even more storage and services. Therefore, such a niche market should continue to prosper over the years.

Stefan's analysis and disclaimer are accepted.

Nonetheless, I opted to reengineer the strategy as I did in the Robot Advisor thread. I started by appraising a Buy & Hold scenario for the same stocks over a slightly different period, from 2003-09-12 to 2018-09-14, in order to compare the program's modifications over the same duration. I also upped the initial capital to $10M.
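
For reference, a Buy & Hold run like this takes only a few lines in the Quantopian Algorithm API. The sketch below is my own minimal reconstruction, not Stefan's code; the symbols are placeholders for the strategy's actual stock list, and the $10M initial capital and the 2003-09-12 to 2018-09-14 dates are set in the backtester, not in code.

```python
# Minimal Buy & Hold sketch (Quantopian Algorithm API).
# My own reconstruction for illustration -- not Stefan's code.

def initialize(context):
    # Placeholder symbols; substitute the strategy's actual stock list.
    context.stocks = symbols('AMZN', 'MSFT', 'NVDA')
    context.invested = False

def handle_data(context, data):
    # Buy once, equal weight, then hold for the entire backtest.
    if not context.invested:
        weight = 1.0 / len(context.stocks)
        for stock in context.stocks:
            if data.can_trade(stock):
                order_target_percent(stock, weight)
        context.invested = True
```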

This scenario produced the following results:

[Chart: B&H]

However, this should be compared to letting the strategy do its own trading. This answers the question: was the Buy & Hold more or less productive than the original trading strategy with the added capital?

[Chart: Original program version]

As can be observed, the Buy & Hold outperformed the 13-month rebalancing. It should be noted that no leverage was being applied and that all Q default frictional costs were included.

This trading strategy sets up a trading environment with periodic rebalancing as its main course of action. As Stefan explained, the equal weights force the selling of part of the rising stocks to increase positions in the laggards. This also explains why the Buy & Hold outperformed the strategy.
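
The mechanics are easy to see in code. A periodic equal-weight rebalance amounts to something like the sketch below (my own illustration, not the original program): every position is pulled back to a 1/N target, which mechanically sells part of whatever ran above target and buys whatever fell below it.

```python
def rebalance(context, data):
    # Wired up in initialize() via schedule_function(rebalance, ...).
    weight = 1.0 / len(context.stocks)  # equal weight: 1/N per stock
    for stock in context.stocks:
        if data.can_trade(stock):
            # Winners drift above 1/N and get trimmed; laggards drift
            # below 1/N and get topped up -- which is exactly why a
            # trending market favors Buy & Hold over this rebalance.
            order_target_percent(stock, weight)
```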

Now the reengineering part.

Even if the rising equity curve and the outperformance of the benchmark were interesting, I still did not find them enough, especially since the strategy did not outperform a simple Buy & Hold in a niche market that was made to prosper and will continue to do so.

I'll skip some of the steps since, just by reengineering the code, you have to run quite a few simulations to verify that the code does not crash, does what it is supposed to do, and has no bugs. I reengineered it twice since I was not satisfied with the limitations of the first method, even if it provided more than interesting results.

Whatever trading strategy you have, you need to have it financed and scalable. The code was there to make the strategy scalable, but financing was still limited to the $10M of initial capital. In order to increase the impact, some leveraging would be needed. I know some hate leveraging, but it has its use in scenarios where you can outperform the averages. In a way, it puts more pressure on, and improves, one's trading edge, which was already built into the stock selection itself.

So, I put in some downside protection and added some leveraging to the mix. I changed the orientation of the strategy itself and allowed it to go short at times. The payoff matrix of a trading strategy can be summarized in two numbers: the number of trades over the trading interval and the average net profit per trade. Both numbers are provided when using the round_trips=True option of the tearsheets.
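
Both numbers come straight out of pyfolio's round-trip analysis. A typical call looks like this (the variable names are mine, and results stands for the backtest result object):

```python
import pyfolio as pf

# Extract the inputs from a zipline/Quantopian backtest result object.
returns, positions, transactions = pf.utils.extract_rets_pos_txn_from_zipline(results)

# round_trips=True appends the round-trip stats to the tearsheet,
# including the total number of trades and the average net profit per trade.
pf.create_full_tear_sheet(returns,
                          positions=positions,
                          transactions=transactions,
                          round_trips=True)
```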

Therefore, the objective is to increase either or both of these numbers in order to increase overall CAGR performance. Other numbers might not have an impact on final results, and in a compounding game, the final outcome, the endgame, is what really matters. Evidently, you will try to stay within acceptable volatility and drawdown constraints, making them as low as you can without destroying your strategy's potential. To reduce volatility and drawdowns, I increased the number of stocks to 25, thereby reducing the bet size.
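
In equation form, the payoff reduces to F(T) = F_0 + n∙x̄, where n is the number of trades and x̄ is the average net profit per trade, which makes CAGR = (F(T)/F_0)^(1/T) – 1. A minimal helper (with made-up numbers) shows how these two numbers drive the endgame:

```python
def cagr(f0, n_trades, avg_net_profit, years):
    # Total profit is just: number of trades * average net profit per trade.
    ft = f0 + n_trades * avg_net_profit
    return (ft / f0) ** (1.0 / years) - 1.0

# Example with made-up numbers: 10,000 trades averaging $5,000 net each
# on $10M over 15 years.
print(cagr(10e6, 10000, 5000, 15))  # -> about 0.127, i.e. ~12.7% CAGR
```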

With the first method, it generated the following:

[Chart: Mod1_10M]

Interesting, but not enough. Not enough trades:

[Chart: Mod1_10M_stats]

I wanted more.

I changed the trading methods in order to accelerate the process. It also meant increasing the use of leverage in order to supply more funds to the methodology, and accepting shorts.
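
I won't reproduce the modified code here, but schematically the allocation moved from long-only 1/N weights to something of this form (a sketch under my own assumptions, not the actual program; context.signals is a hypothetical long/short signal dictionary):

```python
def compute_target_weights(signals, gross_leverage=1.5):
    # signals: dict mapping stock -> +1 (long) or -1 (short).
    # Gross exposure sums to the target: sum(|w|) == gross_leverage.
    unit = gross_leverage / len(signals)
    return {stock: sign * unit for stock, sign in signals.items()}

def rebalance(context, data):
    weights = compute_target_weights(context.signals)
    for stock, w in weights.items():
        if data.can_trade(stock):
            order_target_percent(stock, w)
```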

[Chart: Mod2_10M]

It generated more trades and provided a higher overall return. Beta decreased, and both Max Drawdown and volatility slightly increased. The average net profit per trade decreased, but this was compensated by the higher number of trades, resulting in more generated profits. The average gross leverage came in at 1.48.

[Chart: Mod2_metrics]

I opted to be more aggressive. It raised the gross leverage from 1.48 to 1.57. At the same time, I was lowering the max drawdown, the volatility, and the average beta, which came in at 0.48. An overall improvement.

[Chart: Mod3_10M]

This was counterintuitive. To achieve it, I allowed a lot more shorts into the picture in a rising market, to the point where shorts dominated the number of trades. It was as if allowing more shorts enabled the longs to profit more.

[Chart: Mod3_stats]

Not only did the number of trades increase considerably, but the average net profit per trade also went up, resulting in much higher profits.

[Chart: Mod3_metrics]

There is the cost of leveraging that has to be addressed, but I will keep that for later on. Nonetheless, for the last test, the formula would be: 1.57∙F_0∙(1 + 0.629 – 0.57∙0.08)^15, or 1.57∙10M∙(1 + 0.5834)^15, on the basis that leveraging fees would be at 8%. It will make a difference over the long term, but it should be considered as some kind of frictional cost. If you want more, you have to do more than your peers, and there is a cost to it.
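
As a quick sanity check on that arithmetic, here is the same formula in code (the 8% financing rate being the stated assumption, applied as a drag on the growth rate of the borrowed fraction):

```python
def terminal_value(f0, cagr, leverage, borrow_rate, years):
    # Financing cost applies to the borrowed fraction (leverage - 1)
    # and is treated as a drag on the annual growth rate.
    net_rate = cagr - (leverage - 1.0) * borrow_rate
    return leverage * f0 * (1.0 + net_rate) ** years

# Last test: $10M, 62.9% CAGR, 1.57x gross leverage, 8% borrow rate, 15 years.
print(terminal_value(10e6, 0.629, 1.57, 0.08, 15))
```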

By accepting more in the gross leveraging department, you could push for even more without that big a deterioration in the volatility, beta, and max drawdown figures, as illustrated in the following:

[Chart: Mod4_10M]

The portfolio metrics for the above came in at:

[Chart: Mod4_metrics]

Gross leverage went from 1.57 to 1.62. The average beta went from 0.48 to 0.53. At the same time, max drawdown deepened from -30.5% to -31.4%. All these measures were more than acceptable for the added overall performance. Nonetheless, some might not like operating at these levels, but it should be noted that even if we started with an average beta of 1.07, we still ended with an average beta of 0.53, meaning that the overall strategy swings less than the market as a whole. The statistics for this version of the program came in at:

[Chart: Mod4_stats]

The point being made in this demonstration is that a trading strategy can start with a specified architecture and then be modified to do even more by putting the emphasis on two numbers: the number of trades and the average net profit per trade. A strategy can evolve to do more. It is up to us to make it do so. And as said before: if you want more, you will have to do more.

I would point out that it was not the numbers that were the most important; it was the process itself: the trading methods used to survive in a sea of variance. I hope it was instructive just to study the process described in this post.


Created: June 28, 2019. © Guy R. Fleury. All rights reserved.