Jan. 8, 2020
My previous article showed 17 simulation results of a stock trading strategy found on the Quantopian website. To that basic template, I added optional functions in order to increase and control performance.
This intermediary step is part of my analysis of the strategy's worthiness since I am still exploring its capabilities: limits, strengths, and weaknesses. I present 12 new simulations using 160 stocks.
The program modifications are shown to have quite an impact on overall performance. Nonetheless, the outcome might not be for everyone; it always remains a matter of choices, trading methods, preferences, and risk aversion.
The above-cited strategy can be copied, is relatively simple, and appears to be like so many others using a similar theme. Its structure mostly dictates how it will behave over time (refer to my prior post for more details).
Once you have such a structure, the question becomes: what can you do with it? Reengineering its trading procedures might be the way to go.
The Basic Portfolio Equation
The basic portfolio equation you have to deal with is: F(t) = F0 + Σn (H ∙ ΔP), where Σn (H ∙ ΔP) is the portfolio's payoff matrix and H is the ongoing stock inventory matrix (a.k.a. the strategy).
The original strategy (at the top of the cited forum) makes a ranked selection of 20 stocks based on fundamentals and rebalances monthly using equal portfolio weights. The SPY 140-day momentum determines its trend, and it will switch to bonds for downside protection. All the trading is due to this monthly rebalancing, which is quite common for trading strategies on the Quantopian website.
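For concreteness, the basic payoff equation can be sketched numerically. A minimal example with made-up numbers (two stocks over two periods, not the strategy's actual data):

```python
import numpy as np

# Toy illustration of F(t) = F0 + Σn (H ∙ ΔP): two stocks over two
# periods, with hypothetical holdings and price changes.
F0 = 10_000_000.0                 # initial capital, as in the tests
H = np.array([[100.0, 200.0],     # shares held, period 1
              [120.0, 180.0]])    # shares held, period 2
dP = np.array([[2.0, -1.0],       # price changes, period 1
               [1.5, 0.5]])       # price changes, period 2

payoff = float((H * dP).sum())    # the payoff matrix, summed
F_t = F0 + payoff
print(F_t)                        # 10000270.0
```

Whatever the trading method, it ends up expressed in H: who held what, when, and in what quantity.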
Even with my modifications, this is not what I consider a final or tradable version. I still have other functions and safeguards to add to better control the thing. It is more a study of trading behavior, feasibility, and finding a strategy's limits than anything else.
Nonetheless, even in its current state, it can show some interesting properties that could also be applied to other strategies. At this stage, my interest was to see how far it could go to then scale back to something I would consider more within my own limits.
It is once you know the limits you do not want to cross
that you know how to stay somewhat away from them.
Reengineering For More
I added some features like a Do_More option, as in: you want more, then you will have to do more than the other guy (as often referred to in my books). This implied managing the ongoing stock inventory differently.
To the basic payoff matrix equation above, I added a generalized growth function: F(t) = F0 + Σn (H ∙ (1 + g(t))^t ∙ ΔP), which would gradually raise the ongoing inventory further. Doing so would require more capital, which could come from reinvesting ongoing trading profits as you improve the strategy's edge, from better predictive functions, or partly from leveraging, or from all of these at the same time, as long as performance improved even with the added costs.
The part H ∙ (1 + g(t))^t, where g(t) is positive (g(t) > 0), becomes the new core of the portfolio's long-term trading strategy. It says that you gradually, at an exponential rate, grow your bet size as the strategy progresses in time. If you are the one setting g(t), then in a way you are somewhat controlling the outcome of your own trading strategy. You would be reengineering its future.
Of note, the basic payoff matrix Σn (H ∙ ΔP) is by itself generally on an exponential path due to the sought-after compounding of returns. You are now giving it a shove higher with the compounding function (1 + g(t))^t. It does not take much to raise the portfolio's long-term outcome.
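A rough sketch of that shove, assuming for simplicity a constant g and a flat per-period base payoff (both hypothetical figures, not from the simulations):

```python
# Sketch of the boost from (1 + g)^t: a constant g = 2% per period
# applied to an assumed flat base payoff of $50,000 per period.
g = 0.02
periods = 5
base = 50_000.0

plain = base * periods                                    # no boost
boosted = sum(base * (1 + g) ** t for t in range(1, periods + 1))
print(plain, round(boosted, 2))   # the boosted sum compounds higher
```

Even a small g, applied consistently, widens the gap over time; that is the whole point of the exponent.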
The above equation could be applied to a large number of trading strategies, not just the one presented here. I have done so in over 2 dozen such strategies, each different, based on different premises and trading methods. See my books, free papers and articles on my website to get a better idea of how you could use those trading methods and adapt them to your own strategies.
Financing Your Trading Strategy
Whatever trading strategy you have, you need to finance its operations, and this can be done from within. You want your trading strategy to trade profitably and recycle those profits to make more profits. Being in a compounding return game, the longer you can do this at the highest possible rate, the better, evidently within your own trading limits and constraints.
The first requirement is to have the trading strategy last, not just a few months or a few years, but over the long term. In this case, the strategy starts in 2003 and ends in 2019. It goes through all the market's ups and downs during those 16.7 years.
The strategy used in the following simulations is a much-modified version of the cited program above. It kept some of its structure. I added optional and variable amplifiers that can be enabled or not. These amplifiers could even be controlled from outside the program and read from a file, as if giving the strategy administrative directives: do more of this or less of that going forward. I also added a leverage cost estimator to get a better idea of its impact on overall performance. I even included an optional covered call estimate in order to study its impact, but this option was not activated for these 12 simulations.
Some Fundamentals Might Not Do What You Think
One thing I wanted to demonstrate was the impact of overriding the strategy's preset ranking system by shifting its priorities. This changed the ranking of a group of stocks within the selection relative to the others in the portfolio, giving each simulation a different set of stocks to deal with. The overall result was an increase in performance.
This is seen in the chart below (tests #1 to #8), where only this relative ranking was affected. It gradually changed the composition of the 160 stocks that would be traded, as if some of the lower-ranking stocks that would have been picked in prior simulations were abandoned in favor of new ones that were on the fringe of being selected but were not, due to the strategy's limited number of stocks (160).
[Chart: results for tests #1 to #8]
All the tests were done using 160 stocks with an initial capital of $10M. The 160-stock scenario is an arbitrary number (chosen from test #34 in my prior post). These tests could have been done using some other number of stocks. The general trading behavior would have been the same since the trading is done due to the structure of the program and its periodic rebalancing.
Nowhere does the program ask you: what do you think? It just rebalances and will catch whatever average drift is out there (up or down). A way of saying that the periodic rebalancing makes it a statistical, inventory, or variance problem.
The strategy is scalable by design; therefore, reducing the initial capital by a factor of 10 would simply reduce the outcome by a factor of 10, and likewise should you want to go higher by a factor of 10. Increasing the number of stocks would tend to slightly reduce the overall outcome, as illustrated in my prior post where the number of stocks was increased from 5 to 240.
Wanting to isolate other influences is the main reason for running all the tests with 160 stocks: more than enough to show portfolio diversification, and enough trades to make the results statistically significant (62-65k trades). This made the initial bet size 0.625% of the portfolio, or $62,500 per position (for example, this bet size initially buys 625 shares of a $100 stock).
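The bet-size arithmetic can be checked directly:

```python
# The article's bet-size arithmetic: 160 equal-weight positions on $10M.
initial_capital = 10_000_000
n_stocks = 160

bet_size = initial_capital / n_stocks      # dollars per position
bet_fraction = bet_size / initial_capital  # fraction of the portfolio
shares_at_100 = bet_size / 100.0           # shares of a $100 stock

print(bet_size, bet_fraction, shares_at_100)  # 62500.0 0.00625 625.0
```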
The ending bet size column shows that near the end it is much harder to execute those trades without having a significant market impact. However, there are ways to alleviate that problem. Nonetheless, the program in its current state would have some difficulty executing its rebalancing orders near the termination date of its simulation as the test number increases.
Overriding The Ranking System
Overriding part of the ranking system is the same as downgrading the merit of the factors used in the stock selection process itself. It is like telling a stock that ranked high: get in the back anyway. In effect, it declassifies the selected top-ranked stocks (as a group) to a lower level, to the point where more and more of them drop out of the portfolio entirely. The more you declassified these stocks, the fewer remained in the portfolio, and those that did stayed as lower-ranked stocks at best.
This would mean that the premises on which the original stock selection process was based were not on such solid ground, even though it all appeared quite reasonable and logical. There is a historical positive correlation between value and price. Yet the first 8 simulations, just like the rest, said otherwise.
The more you devalued the group of highly ranked value stocks, the more profits increased, when MPT says profits should have decreased. If you take your best prospects for profit out of your stock selection, it is counterintuitive to expect a rise in profit; the others should have dragged you down. Or, on the other hand, what you thought were the best prospects were not, and therefore your short-term value-price correlation premise was not well founded, whether based on fundamentals or not.
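A minimal sketch of this declassification idea, using a hypothetical `declassify` helper with made-up tickers, scores, and penalty (the actual program works on ranked fundamentals and differs in its details):

```python
# Demote the top-ranked group by a penalty so fringe candidates move up
# and former leaders drop back. All names and numbers are illustrative.
def declassify(ranked, group_size, penalty):
    """ranked: list of (stock, score), best first; returns a new ranking."""
    demoted = [(s, sc - penalty) if i < group_size else (s, sc)
               for i, (s, sc) in enumerate(ranked)]
    return sorted(demoted, key=lambda pair: pair[1], reverse=True)

ranked = [("AAA", 10), ("BBB", 9), ("CCC", 8), ("DDD", 7), ("EEE", 6)]
new_order = declassify(ranked, group_size=2, penalty=5)
print([s for s, _ in new_order])   # ['CCC', 'DDD', 'EEE', 'AAA', 'BBB']
```

With a large enough penalty, the former leaders fall behind every fringe candidate, which is the "get in the back anyway" effect described above.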
The program had no control over which stocks would
be traded or when or in what quantity.
All it did was rebalance, on schedule, the 160 selected stocks based on their ranked fundamentals. A mini wisdom-of-crowds thing.
The program I used was in a more advanced state than the original version since I wanted to use some leverage and have some control over it. I also included a cost estimate of that leveraging and displayed the net liquidation value after paying for leveraging and frictional costs. Also displayed is the leveraging cost as a percent of equity (est_lev).
[Chart: simulation metrics for the 12 tests]
The first 8 tests kept 3 of the controlling factors the same in order to better evaluate the impact of the value declassifying process described above. The portfolio metrics slightly increased from test to test. For a little more volatility, drawdown, and beta, you got better performance results only due to the changes in the declassified group of stocks.
Of note, the first 8 strategies used about the same level of leverage (1.21), which had to be paid for. The leveraging cost column gives that estimate, while the net liquidation value gives the net after costs.
Therefore, the strategy did start with some leveraging in place; however, it was implemented in stages. Three of these controlling factors are displayed (kappa, amp_1, and amp_2). All three were kept constant for tests #1 to #8, meaning that they were not the reason for the change in performance. An alpha_booster option was also set but remained constant for all the tests.
The increase had only one origin: the shift in the ranked priority of the fundamental factors, which changed the stock selection for each of those 8 tests by downgrading stocks' initial rank, even placing them out of selection reach after they had been selected as valid candidates. The overall impact was a gradual change in volatility, which gave the rebalancing the opportunity to extract more in average net profit per trade, as seen in the actual x_bar column.
Due to the program's structure, it executed about the same number of trades (near 65,000, ±500), trading marginally less as the test number increased. It is only on test #9 that the Do_More option was activated. This resulted in slightly higher leverage (1.26) but also a higher net liquidation value.
At that level (test #9), leveraging costs represented about 5% of equity. Technically saying: maybe worth the added expenses.
Test #10 saw all three controlling factors increase, resulting again in another jump in performance. The net CAGR column is not that different from the strategy's CAGR column, indicating that the impact of leveraging over the entire 16.7 years had only minimal CAGR consequences. For instance, on test #10, the strategy's CAGR went from 46.05% before leveraging expenses to 45.82% after expenses. Not much of a CAGR difference, even if moneywise the leveraging costs could not be considered small change, but rather an added cost of doing business.
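Using the test #10 figures cited above, the terminal dollar impact of that 0.23-point CAGR difference can be estimated directly:

```python
# Terminal-value impact of test #10's leveraging costs: 46.05% CAGR
# gross vs. 45.82% net, over 16.7 years on $10M (figures from the text).
F0 = 10_000_000.0
years = 16.7

gross = F0 * 1.4605 ** years   # before leveraging expenses
net = F0 * 1.4582 ** years     # after leveraging expenses
gap = gross - net              # small CAGR difference, large dollar gap
print(round(gap))
```

A fraction of a percentage point in CAGR, compounded over 16.7 years, still translates into a sizable dollar amount, which is the "not small change" point being made.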
Tests #11 and #12 showed that you could push the system even further just by doing more of the same and being more aggressive. The consequences were higher leveraging costs but also even higher performance. We can see the progression as the tests were being made.
All 12 tests were done in succession, changing the variables so that the strategy performed in the wanted direction. At each step, it was not a matter of: let's change this and see what it does, but rather: change this and it will do that, knowing very well in advance that the overall performance would increase. The controls could be extracted from the trading strategy and set as external controls, where management policies could determine their orientation based on other outside factors.
This kind of method can be applied to numerous other trading strategies as well. It is the reason why these particular tests were made: to show that anyone can control or at least direct their own trading strategies to do more. Evidently, there will be some added costs.
Either you get better at predicting or you can force your trading strategy to deliver more, and this will cost more too. It is the general theme you will find in my books: if you want more, you will have to do more. And there is more than one way to achieve these results.
The strategy has a self-defined and arbitrarily set trend definition: it goes long if the SPY 140-day momentum is up. Therefore, using variable levels of leverage in those periods becomes a way of amplifying profits. You drop the leveraging, or partially reverse it, if the 140-day SPY momentum is not going up. An expression of common sense, really: simply push in the same direction as the underlying trend, whether it be up or down, even if that trend is self-defined. However, one should nonetheless use a more sophisticated trend definition.
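A sketch of that common-sense rule, with assumed leverage levels (1.3 when the 140-day momentum is up, none otherwise); these numbers are for illustration only, not the strategy's actual settings:

```python
# Lever up when SPY's 140-day momentum is positive, stand down otherwise.
def target_leverage(closes, lookback=140, lev_up=1.3, lev_down=0.0):
    """closes: list of SPY closing prices, most recent last."""
    if len(closes) <= lookback:
        return lev_down                        # not enough history: defensive
    momentum = closes[-1] - closes[-1 - lookback]
    return lev_up if momentum > 0 else lev_down

rising = [float(p) for p in range(300)]        # steadily rising series
falling = [float(p) for p in range(300, 0, -1)]  # steadily falling series
print(target_leverage(rising), target_leverage(falling))  # 1.3 0.0
```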
An Extended Strategy Payoff Matrix
It all goes back to the equation at the top of the post: F(t) = F0 + Σn (H ∙ (1 + g(t) + Δφ(t))^t ∙ ΔP), where Δφ(t) > 0 is that little boost you add from test to test to your generalized bet sizing function. A periodically scheduled rebalancing trading system will trade whether you like it or not; it is its own raison d'être. Therefore, we can roughly anticipate the number of trades that will be undertaken over the investment period, as was illustrated in my prior post.
The average holding period for those 65,000 positions was about 100 trading days or about 5 months' time. It is part of the reason for the higher performance. Trades are recycled, on average, every 5 months with larger and larger bet sizes since the fixed-fraction of equity bets are used throughout.
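The fixed-fraction recycling can be illustrated with made-up numbers: each cycle re-enters positions at 1/160 of a growing equity base, so the dollar bet grows with the portfolio (the 10% per-cycle equity growth below is an assumption, not a simulation result):

```python
# Fixed-fraction bet recycling: positions re-entered every ~5 months
# at 1/160 of current equity, which compounds between cycles.
equity = 10_000_000.0
cycles = 6
bets = []
for _ in range(cycles):
    bets.append(equity / 160)   # bet sized on current equity
    equity *= 1.10              # assumed equity growth between cycles

print(round(bets[0]), round(bets[-1]))  # bets grow with equity
```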
There is so much to be said about those 12 simulations. Technically, my program version stayed about the same except for the controlling variables, which were incremented step by step to show their respective impact on the overall outcome of the trading strategy. It is all part of the iterated equation expressed above: F(t) = F0 + Σn (H ∙ (1 + g(t) + ΔT#φ(t))^t ∙ ΔP), where ΔT#φ(t) is increased as the test number (T#) increases.
The trading strategy could have been stopped at any time. However, because the bet sizing is on an exponential function, stopping early would have the same effect as reducing the compounding time interval, which evidently would produce less. Three charts follow where the time interval was reduced by stopping the strategy in 2010, 2014, and 2016.
[Charts: strategy stopped in 2010, 2014, and 2016]
At any of those 3 termination dates, you would have been more than ahead, and it would have been your decision to continue or not, or to adjust the degree of trade aggressiveness you would accept going forward. You could also have chosen any other termination time in the first chart above and would still have been ahead. It appears as if it is always a matter of choice, not mine, but yours. I can certainly settle my own choices without your input.
Jan. 8, 2020, © Guy R. Fleury. All rights reserved.