May 18, 2020
My last series of articles (The Portfolio Rebalancing Gambit, I, II, III) covered a trading strategy that treated its long-term payoff matrix as a game in which randomness appeared to prevail, and a lot of it did. Even in that kind of trading environment, the strategy performed remarkably well.
A stock trading strategy operates quite differently from a long-term investment strategy. The latter awaits capital appreciation from reasonable investments over periods of 20-30+ years, almost assuring itself of winning simply by holding most of its stock positions for long periods of time. As an example, see Berkshire Hathaway.
Holding stock positions for 30+ years would tend to give you about the same appreciation as the general market, just by making a reasonable initial stock selection and replacing some of those stocks over time for whatever reason. You can expect your portfolio's growth rate over the period to tend toward the market average: g_avg → g_spy. Therefore, the following portfolio equation would provide an expected long-term estimate: F(t) = F0 ∙ (1 + E[g_spy])^30.
The initial capital is the element making the difference here. If you buy KO and hold it for 30 years, you get the exact same percentage return as anybody else who has done the same thing, whether you bought 1,000 shares or 400,000,000. The size of the initial stake F0 matters, and it matters a lot, since it will be compounding for 30+ years.
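As a quick sanity check on that compounding equation, the sketch below (plain Python, with an assumed 10% long-term growth rate purely for illustration) shows that doubling F0 doubles the terminal dollar amount while the percentage return stays identical:

```python
def terminal_value(f0: float, g: float, years: int) -> float:
    """Compound an initial stake f0 at annual growth rate g for a number of years."""
    return f0 * (1.0 + g) ** years

# Assumed numbers, for illustration only: 10% expected annual growth over 30 years.
g, years = 0.10, 30
small = terminal_value(25_000, g, years)
large = terminal_value(50_000, g, years)

# Same growth rate => same multiple of the initial stake...
assert abs(small / 25_000 - large / 50_000) < 1e-9
# ...but the ending dollar amount scales linearly with F0.
assert abs(large - 2 * small) < 1e-6
```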
In an automated trading strategy environment, you are expected to trade, and the number of trades will be much larger than if you were just investing over the long term. As a result, trade slicing and dicing becomes more of an issue. And depending on the number of trades, market statistics and the structure of your program become more important, as does risk exposure.
Doing thousands and thousands of trades means you will not win on every trade, and you will not necessarily know in advance which trades will be profitable. You need to attach probabilities to everything you do, even when you do not know what those probabilities really are with any certainty at any one time. Also, an automated program will only do what it was told to do. It has no secret agenda, no mood, no sentiments, no psychology, no "I will be nice to this guy". It will just execute the given code, nothing else. It becomes your responsibility to make that program do what you want it to do. It will not correct your misassumptions, misinterpretations, or misguided logic for you either.
The Rebalancing Program
The structure of the program used is relatively simple. At the start of every week, it rebalanced its selected 400 stocks from a 3,000-stock universe (Q3000US()), using market data from 2003 (with one year of prior data) to May 1, 2020 (some 17.16 years).
The program used fundamental data to make its stock selection: return on investment, long-term debt to equity, cash return, and free cash flow. These factors were combined, ranked, and scored, and from there, the top 400 highest-momentum stocks became trade candidates. Each week, part of the list would change due to the new rebalanced selection.
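The article does not show the selection code itself; a minimal sketch of that combine-rank-score step might look like the following, where the factor column names, the equal rank weighting, and the half-universe shortlist are my assumptions for illustration, not the author's actual recipe:

```python
import pandas as pd

def select_candidates(df: pd.DataFrame, top_n: int = 400) -> pd.DataFrame:
    """Rank each factor across the universe, sum the ranks into a score,
    then keep the top_n names by momentum among the best-scored half.
    Factor names and weights here are illustrative only."""
    factors = ["roi", "lt_debt_to_equity", "cash_return", "fcf"]
    ranks = df[factors].rank(ascending=False)  # rank 1 = best (highest value)
    # For debt, lower is better, so rank it in ascending order instead.
    ranks["lt_debt_to_equity"] = df["lt_debt_to_equity"].rank(ascending=True)
    scored = df.assign(score=ranks.sum(axis=1))
    shortlist = scored.nsmallest(len(scored) // 2, "score")  # best combined ranks
    return shortlist.nlargest(top_n, "momentum")             # then highest momentum
```

With real pipeline data, this would run once per week to produce the new candidate list before weights are assigned.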
The initial weights were set at 1/400, or 0.25%, of total equity. This fixed-fraction weight would increase or decrease the actual bet size amount as the portfolio grew, or shrank during downturns. This meant making bigger bets on the way up and smaller bets on the way down, and in the process, logically, taking advantage of the general upward trend.
Even so, the program touched some 2,547 stocks from its 3,000-stock universe over the course of the simulation. That is, 85% of the considered stock universe participated in the game in one way or another, producing single or multiple profits and losses. No stock in the process accounted for more than 1% of total profits or losses. Many produced near-zero profit or near-zero loss, as should also have been expected. That kind of strategy cannot, by its very nature, architecture, and structure, win all trades. You do not need a PhD to determine that; common sense is more than sufficient.
The Program's Structure
It is from here that it gets complicated, but not that complicated, since the program goes long and short depending on its trend declaration. It was more important to find something to make that declaration and then stick to it. In other words, to play and be consistent with the trend definition. If you declare the general market trend as up, then you should buy stocks and not short them without some other kind of justification.
Meaning that if you defined the current trend as going up, you bought shares and liquidated shorts if there were any, whereas if the trend was down, you could go to cash and/or short, but not buy longs without some other valid consideration.
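That long/short switching rule can be stated compactly in code. The sketch below is my own rendering of the rule as described, not the author's program:

```python
from enum import Enum

class Trend(Enum):
    UP = 1
    DOWN = -1

def allowed_actions(trend: Trend) -> dict:
    """Map the declared trend to what the program may do, per the rule above."""
    if trend is Trend.UP:
        # Uptrend: buy longs, liquidate any remaining shorts, no new shorts.
        return {"open_longs": True, "close_shorts": True, "open_shorts": False}
    # Downtrend: go to cash and/or short, but no new longs.
    return {"open_longs": False, "go_to_cash": True, "open_shorts": True}

assert allowed_actions(Trend.UP)["open_longs"] is True
assert allowed_actions(Trend.DOWN)["open_longs"] is False
```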
There were not that many major overall trend switches. As the program progressed and the portfolio grew in value, more time was allocated to the switching and fewer shorts were permitted. This modulated the strategy to the market's price swings, with a delayed reaction, mind you, but still close enough to profit from those moves most of the time. There were, evidently, some false positives, just as there were some false negatives. You would see, at times, the program progressively trying to go to safety only to reverse course on minor price swings, but it would catch the major downturns, which was more important.
The portfolio's payoff matrix equation used was presented as:
F(t) = F0 + Σ (H ∙ ΔP) = F0 + y ∙ rb ∙ j ∙ E[tr] ∙ u(t) ∙ E[PT] = F0 ∙ (1 + g)^t
where y is the number of years the rebalancing is applied, rb the number of rebalances per year, j the number of stocks in the portfolio, and tr the expected turnover rate. u(t) is the betting function and E[PT] the expected average profit margin. The portfolio had its equivalent growth rate g. The equation said that there was math to this game, and that no matter what you did or how you did it, the equation would still prevail.
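Plugging the article's own numbers into the trade-count part of that equation is a useful cross-check. With y ≈ 17.16 years, rb = 52 weekly rebalances, j = 400 positions, and the 143,610 trades reported in the series, the implied expected turnover rate comes out to roughly 40%, consistent with the share of positions adjusted at each rebalance:

```python
y = 17.16           # years traded
rb = 52             # weekly rebalances per year
j = 400             # positions in the portfolio
n_trades = 143_610  # total trades reported in the series

# n_trades = y * rb * j * E[tr]  =>  solve for the expected turnover rate E[tr]
implied_turnover = n_trades / (y * rb * j)
assert 0.38 < implied_turnover < 0.42  # about 40% of positions touched per rebalance
```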
In the last iteration (see The Portfolio Rebalancing Gambit III), the strategy made 143,610 trades over its 17.16 years of trading. That is more than enough to declare the total number of trades representative and statistically significant by about any measure. And since the strategy traded 85% of its potential stock universe, it was also representative of that universe, even though it was a single choice from the gazillions of possible combinations available in that tradable universe.
Even more, because of the methodology used, you did not know in advance which stocks would be picked, in which order, or whether they would make a profit before the next rebalance. Being totally in the dark about what is coming your way is the same as not knowing what is coming, and this makes it more of a random process than anything else. Otherwise, you would know what is coming your way and would be able to profit from it even more.
So, Where Were The Profits Coming From?
The answer is simple: from the compounding of the bet size, which remained a fixed fraction of equity throughout. This redesigned process automatically reinvested all generated profits back into the trading system. Every penny made in profitable trades would go toward making slightly bigger bets, generating slightly larger profit amounts from trade to trade, as long as the previous trade was positive. Where there was a loss, the bet size would be slightly reduced on the next trade.
The money was not coming from predicting which stocks would go up or down. It simply came from the stocks moving about, up and down. There is a big distinction here. The changes in the trend declaration were not that numerous, and they were not all right, as should be expected. Nonetheless, they were still followed until there was another declared change in the average price direction. This means that while the trend was declared as up, holdings were long for the 400 stocks, or fewer, that satisfied the weekly selection criteria.
Every week, the program checked the portfolio weights and made adjustments to bring them back to 0.25% when 400 stocks were selected. If fewer stocks were selected, the weights would go up slightly, something you could not know beforehand. This explicitly says that the predictive abilities of the stock selection process might not have been the reason behind the rise in some or most of the stocks. The general upward market trend was sufficient to make you profit.
What appears to have been important was the fact that there was a stock selection in the first place. During the 17.16-year period, there were only 5 significant downturn declarations where stocks were either allowed to go short or the portfolio went entirely to cash. There was no swinging to bonds as a safe haven during periods of market turmoil. The safety measures were quite simple: go to cash or short the thing. Sometimes it would be right and sometimes wrong. But overall, you were playing averages, and there, the strategy did pretty well on average. At least, better than the market averages for sure.
While playing for the long term, the strategy ended up spending most of its time under an uptrend declaration. In fact, the 5 most significant downtrend declarations only occupied 12.5% of the entire trading interval.
Where did the money come from? Well, that seems to have been relatively easy. At each rebalancing, about 40% of the stocks had some kind of weight adjustment. Stocks that rose saw part of their holdings reduced and the profits returned to the trading account, while stocks that dropped in weight saw partial acquisitions to restore their weights. This is the same as selling part of the winners on the way up and buying a little more on the dips. But this did not require that you predicted correctly, only that you participated in the process, that you reacted, after the fact, to a situation which was not of your doing.
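A minimal sketch of that weekly weight-restoration step (my own illustration; the real program also handled the trend declaration and leverage) would be:

```python
def rebalance_orders(holdings: dict, equity: float, n_positions: int = 400) -> dict:
    """For each held stock, return the dollar amount to trade so its weight
    returns to 1/n_positions of equity: negative = trim a winner back to weight,
    positive = buy a dip back to weight."""
    target = equity / n_positions  # equal dollar weight per position
    return {sym: target - value for sym, value in holdings.items()}

# Toy portfolio: $1,000,000 equity at 1/400 weight => $2,500 target per position.
holdings = {"AAA": 3_000.0, "BBB": 2_000.0, "CCC": 2_500.0}
orders = rebalance_orders(holdings, 1_000_000)
assert orders["AAA"] == -500.0  # winner trimmed, profit back to the account
assert orders["BBB"] == 500.0   # dip bought back to weight
assert orders["CCC"] == 0.0     # already on target, no trade
```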
The protective measures were required to alleviate the impact of drawdowns, but the uptrend/downtrend declarations were a major part of how the strategy would trade going forward, and it spent most of its time (87.5%) under an uptrend definition.
The protective measures were needed because the strategy did use leverage at times. Not all the time, but most of the time. Not at a constant level, but as a modulated value based on what phase the long-term trend was in. For instance, after a prolonged downtrend, the leverage was pushed up, while near market tops it was reduced, even to the point of disappearing. It is not a good idea to be highly leveraged at the top of a market swing, whether it was anticipated or not.
The strategy used, on average, about 1.5-1.6x leverage, at times going a bit higher and at others much lower, even down to 0. During shorting periods, margin was used under about the same principles. Switching from long to short or short to long was done progressively, and more so as the equity rose.
Leverage, even in the 1.5-1.6x average range, had a major impact on the final results. You had 143,610 trades with growing size as the strategy evolved. The bet size might have kept to its 0.25% of equity on average, but the bet size amount would grow considerably. 0.25% of the initial $10 million stake is $25,000 per position, whereas at an equity value of $100 million, the bet size would be $250,000 per position. And since the average net profit per trade is based on an average percent profit per trade, we can easily see that the average net profit per trade, as a dollar amount, would also become 10 times higher. And the proceeds from trading would go even higher as the bet size increased further.
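The bet-size arithmetic in that paragraph is easy to verify, and it also shows why the average dollar profit per trade scales with equity when the percentage edge stays constant. The 1% profit margin below is an assumed number, purely for illustration:

```python
def dollar_bet(equity: float, n_positions: int = 400) -> float:
    """Fixed-fraction bet: 1/400 of equity, i.e. 0.25% per position."""
    return equity / n_positions

def avg_profit_per_trade(equity: float, margin: float = 0.01) -> float:
    """Assumed constant 1% average percentage edge; only the dollars scale."""
    return dollar_bet(equity) * margin

assert dollar_bet(10_000_000) == 25_000     # at the initial $10M stake
assert dollar_bet(100_000_000) == 250_000   # at $100M: 10x the bet amount
# Same percentage edge, 10x the equity => 10x the dollar profit per trade.
assert avg_profit_per_trade(100_000_000) == 10 * avg_profit_per_trade(10_000_000)
```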
There is no mystery in this trading strategy. It might not operate as you might expect, but it is just a program designed to do what it was told, no more, no less. It might require someone to change their mindset to appreciate the architecture or engineering of such a program, but that is how innovation works. Someone, somewhere, will design a better mousetrap. And I would add, anybody can design a better one and do even better. There is much to learn from this trading strategy. It is part of a much larger family of possible solutions.
The series of articles also showed that you could increase or decrease the initial stake at will. All that would be required is to remain consistent with what you were trying to do. For instance, going for the $1 million scenario, I would reduce the number of stocks to something like 200 in order to make the bet size $5,000 instead of just $2,500, and change the stock selection process to better accommodate the smaller bet sizing. I would also have reduced the average leverage factor in the beginning and further limited its impact. Overall, it would be a reduced scenario, but still proportional to the initial stake put on the table. Regardless, it would have followed the payoff matrix equation given above. If you put 10 times less on the table, do not expect 10 times more when you have a portfolio equation that says: F(t) = F0 ∙ (1 + g)^t. It is simple math.
May 18, 2020, © Guy R. Fleury. All rights reserved.