June 6, 2021

This is another continuation of the last few articles found on my website dealing with the freely available In & Out stock trading strategy. This one is about gaining a better understanding of its trade mechanics. Without that understanding, how could you determine what is really going on, and maybe more importantly, how could you "control" what it does? Or even better, what it will do going forward?

Forward, that is the keyword; that is where the money is. There is no real money to be made on a simulation over past market data. A simulation can only serve as a kind of feasibility study sampled out of the gazillions of other choices that could be made. Why is this trading strategy profitable? How could you make it even more so going forward? Those are questions in need of answers.

In a number of posts on QuantConnect over the past few months, the emphasis was put on the signal as the driving force behind this money-printing machine. The signal was rarely changed from version to version. Often, posts cited the signal as "overfitted" due to its uncanny ability to "predict" market turns, but never gave any data to justify the claim other than: "if it makes more profits than average market performance, then it must be overfitted". That argument does not hold without corroborating evidence, and since none was provided...

 

The Trading Signal

The composite of 5 ETFs was used in the original strategy's trend definition. They acted like a single sensitive signal, a kind of Granger-causality proxy, that would declare the general market trend as up or down. The strategy would switch entirely to stocks the moment the trend was declared up, and switch back to bonds and stay there when the trend was pronounced down. A reasonable proposition that also served as the strategy's protection for when the markets were declared in decline.

In my simulations, I reduced that signal to -UUP only (one of the 5 ETFs). It produced much better results than the composite of 5, meaning that the other 4 "signals" were more like dampers: their use reduced overall returns. There had to be a reason for this. And there were a few.

Note that the inverse of UUP was used. I was looking for a "curve" that was mostly going up, year over year, and close to a straight line. One mostly going down for the past 12 years would fit the bill, if reversed. This signal was very sensitive to market fluctuations, to the extent that it flipped sides some 70 times over the 10-year or so period. That meant 35 times all stocks would be liquidated. A new set would be bought, on average, every 36 trading days (about 7 weeks), giving the strategy some "holding" time. None of this forecast when the signal would turn. The program only reacted to the "signal" and its changes of state. Another way of saying this: the strategy was often late to the party, and there was a cost to being on the wrong side.
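Here is a minimal Python sketch of that flip arithmetic. The 20-day moving-average rule and the synthetic price series are stand-ins of my own, not the strategy's actual signal code; only the counting logic matters here:

```python
import numpy as np

def count_signal_flips(inv_uup, lookback=20):
    """Count how often a simple trend signal flips sides.

    inv_uup:  daily series of inverted UUP prices (mostly rising).
    lookback: moving-average length, a stand-in trend rule.
    Returns (number of flips, average trading days between flips).
    """
    ma = np.convolve(inv_uup, np.ones(lookback) / lookback, mode="valid")
    trend_up = inv_uup[lookback - 1:] > ma      # True: in stocks, False: in bonds
    flips = int(np.sum(trend_up[1:] != trend_up[:-1]))
    avg_hold = len(trend_up) / flips if flips else float(len(trend_up))
    return flips, avg_hold

# ~10 years of synthetic daily data (2,520 bars) with a slight upward drift.
rng = np.random.default_rng(0)
series = 100.0 + np.cumsum(rng.normal(0.02, 1.0, 2520))
flips, avg_hold = count_signal_flips(series)
print(flips, "flips, one every", round(avg_hold, 1), "trading days")
```

Every flip to the downside liquidates the whole stock inventory; every flip back up buys a new set. That is the trade volume the signal alone imposes, before any rebalancing.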

Position Sizing

To readjust weights, individual shareholdings were reduced on rising prices and increased on falling ones. The question was not asked: why? It did not matter. The strategy did what it did by the very definition of rebalancing.

Rebalancing only saw price change (expressed as Δᵢwⱼ, where i is the trade ID and j the stock). Rebalancing was a sufficient reason for executing those partial trades. Rebalancing, by itself, does not liquidate a position or create a new one. That is performed elsewhere, in the stock selection process or in the trading logic that runs prior to rebalancing.

Whether we classify day-to-day stock price variations as almost random to totally random, or slightly predictable to highly predictable, rebalancing will do the same job regardless of how we perceive those price variations. If rebalancing sees enough price change to justify the purchase or sale of a single share, it will do it.
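A minimal sketch of what each rebalance actually computes, with made-up tickers, prices, and holdings (the actual strategy does the equivalent inside QuantConnect's framework):

```python
def rebalance_orders(prices, holdings):
    """Shares to trade to bring every position back to equal weight.

    Winners get trimmed, losers get topped up, by the very definition
    of rebalancing. A position trades only if its weight drift is
    worth at least one whole share.
    """
    value = sum(holdings[s] * prices[s] for s in prices)  # F(t), fully invested
    target_value = value / len(prices)                    # pz_i = F(t) / N
    orders = {}
    for symbol, price in prices.items():
        delta = int(target_value / price) - holdings[symbol]  # whole shares only
        if delta != 0:                                    # the one-share barrier
            orders[symbol] = delta                        # > 0 buy, < 0 sell
    return orders

# Hypothetical 3-stock portfolio after a day of price moves:
prices = {"AAA": 50.0, "BBB": 102.0, "CCC": 98.5}
holdings = {"AAA": 660, "BBB": 330, "CCC": 330}
print(rebalance_orders(prices, holdings))   # {'AAA': 1, 'BBB': -6, 'CCC': 5}
```

Note that the stock that rose (BBB) gets trimmed while the laggards get topped up; no forecast is involved anywhere in the computation.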

Each rebalance returned individual position sizes to pzᵢ = F(t) / N, where N = 100 was the total number of securities in the portfolio. This makes the average position size, or bet size, grow at the same rate as the portfolio: F(t) = F₀ ∙ (1 + g)^t. It implies the average position size will also grow exponentially, F(t) = Ʃⱼ (F₀ / N) ∙ (1 + gⱼ)^t (summing over the N positions), rising at the average portfolio return g. Setting N, the number of stocks to be traded, is not a program prerogative; it is an administrative one, or if you prefer, it is your choice, determined before the programs even start running.
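A few lines of Python make that compounding concrete; the starting capital and the 20% growth rate are assumptions for illustration only:

```python
F0, g, N = 1_000_000.0, 0.20, 100     # assumed: $1M start, 20% CAGR, 100 stocks

for t in (0, 5, 10, 15, 20):
    F_t = F0 * (1.0 + g) ** t         # F(t) = F0 * (1 + g)^t
    print(f"year {t:2d}: portfolio ${F_t:>12,.0f}, average bet ${F_t / N:>10,.0f}")
```

The average bet size is always 1/N of the portfolio, so it follows the same exponential curve as F(t).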

This relationship, F(t) / N, might look trivial in the beginning, but, as the portfolio grows, it changes the trade dynamics. To maintain the weights, position sizes will vary up or down with accumulating profits or losses. Position sizes will rise as the portfolio grows and decline when markets go down, thereby inherently reducing the amount of downside risk. Most of the time, holdings will oscillate near their fixed weights of 1 / N. This program structure, with its flusher (the wholesale switch to bonds), precludes the portfolio from going bankrupt. It could go down in value, but not go bankrupt. A kind of structural safety net.

If the portfolio grows 10-fold, then the position size will too. You need to answer the question: how will you handle the higher stakes then, and how will the market handle the higher volume of those trades?

Taking the same example as in my last article: 100 stocks, 10+ years, and rebalancing every day (~252 trading days per year) would yield a potential 252,000 trades over the period. We need to determine a participation rate since not all the stocks will rebalance every day. Only stocks where the price change was sufficient to generate at least a partial trade of one share will participate.

Whatever the stock selection, there might be a few stocks not changing value from day to day. This might reduce the participation rate to something between 90% and 98%, thereby scaling down the number of potential trades to the range of 226,800 to 246,960 over the 10-year period.
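The trade-count estimate is a one-line multiplication; here it is as a small helper using the participation range assumed above:

```python
def expected_trades(n_stocks=100, years=10, days=252, participation=(0.90, 0.98)):
    """Potential rebalancing trades over the portfolio's life."""
    raw = n_stocks * years * days                 # 252,000 potential trades
    return raw, [int(raw * p) for p in participation]

print(expected_trades())   # (252000, [226800, 246960])
```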

The program has not even started, and over 200,000 trades are already scheduled to be executed at whatever the price might be. You do not know how many shares will be bought or sold at any one time. But you do have an estimate of the number of trades that might be executed, even if you do not know when, on which stocks, or at what price they will be executed.

It is not that you really planned them. They are a consequence of the structure of the program. Technically, you had nothing to do with it except issuing a scheduled rebalancing command and setting the number of stocks to be traded.

This did not require skills. They were "administrative" choices. You embark on this long-term portfolio journey and are bound to make some 200,000+ trades over the next 10+ years entirely due to a single generic line of code.

Therefore, efforts should concentrate on one question: what will be the average net profit per trade? Whatever the answer, you are still the person who has to make it worthwhile or not. Evidently, you want the average net profit per trade to be positive and as large as possible. Otherwise, why would you have programmed the thing to automatically lose your money for no good reason at all? Losing is not a desirable or even acceptable scenario in this game. If you cannot do better than losing, then get out of that game.

What will that trading be like?

We have an estimate: F(t) = 226,800 ∙ x_avg, where x_avg is the average net profit per trade. Each dollar of average gain will represent $226,800. And this puts you right back at N, the number of stocks in the portfolio. If you pick only 10 stocks, your estimate shrinks to $22,680 per average dollar gained. It should be easier to get; the ease will be proportional to the bet size. The expected total number of trades is E[TT] = 10 y ∙ 252 d ∙ 0.9 N = 10 ∙ 252 ∙ 90 = 226,800.

You get $226,800 for every dollar of "average net profit" you make. If your strategy averages $100 or $1,000 net profit per trade, or even more, you should get the picture. Implicitly, your job is to make sure you get this positive average net profit per trade: x_avg > 0. And therefore, you will need x_avg to represent your long-term trading edge.
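In code form, with the same assumed 90% participation rate (the function name and defaults are mine):

```python
def expected_total_profit(x_avg, n_stocks=100, years=10, days=252,
                          participation=0.90):
    """E[TT] * x_avg: trades over the period times average net profit per trade."""
    n_trades = int(years * days * participation * n_stocks)
    return n_trades, n_trades * x_avg

print(expected_total_profit(1.0))               # (226800, 226800.0)
print(expected_total_profit(1.0, n_stocks=10))  # (22680, 22680.0)
print(expected_total_profit(100.0))             # (226800, 22680000.0)
```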

A World Less Than Ideal

The above puts the expected number of trades in the range of 200,000 to 250,000 over a 10-year period. But the real number will be lower; by how much depends on many factors. For one, it is not 98% of the 100 stocks that will participate in each rebalancing. The weight needs to diverge by more than enough to trigger a trade of at least 1 share. Therefore, the price move must be more than marginal; it must represent enough money to buy (or sell) at least a single share. This puts a small barrier to entry around zero. Any stock within this barrier will not trade, thereby reducing the number of potential candidates.

The following chart is the same chart as in the previous article: The In & Out Trading Strategy - Analysis. 

Price Variations with Barriers

[Chart: daily price variations with a small no-trade barrier on either side of the zero line]

I added this small barrier near the zero line. It is intended to represent the move above or below the zero line required to generate a buy or a sell of at least one share; otherwise, there is no trade. As can be observed, a lot of the price moves were inside the barriers, creating a no-trade zone. Every element of that chart should be viewed as stochastic, just like its main curve. Each day the chart would be different, but it would maintain the same general shape.

This small barrier (trading only when |Δᵢwⱼ| > ϵ) will reduce the potential number of trades executed over the life of the portfolio. There could be somewhere between 30% and 60% less trading: something in the range of 75,000 to 150,000 transactions having at least 1 share traded. This would depend on the overall volatility of the selected stocks and on the volatility over the trading intervals. The more erratic the price movements, the more trades there would be, since more price moves would exceed the barrier limits on either side.
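A rough Monte Carlo sketch of the barrier's thinning effect. The position size, share price, and Gaussian daily relative moves are my assumptions, and treating a position's dollar drift as its value times its relative move versus the portfolio average is only a first-order approximation:

```python
import numpy as np

def barrier_participation(position_value=10_000.0, share_price=50.0,
                          daily_vol=0.005, n_samples=100_000, seed=1):
    """Fraction of daily rebalances where the weight drift is worth >= 1 share."""
    rng = np.random.default_rng(seed)
    drift = position_value * rng.normal(0.0, daily_vol, n_samples)  # $ deviation
    return float(np.mean(np.abs(drift) >= share_price))

for vol in (0.002, 0.005, 0.01, 0.02):
    rate = barrier_participation(daily_vol=vol)
    print(f"relative daily vol {vol:.1%}: trades on {rate:.0%} of rebalances")
```

As the assumed volatility rises, a larger fraction of the daily drifts clears the one-share barrier, which is the whole point: more erratic prices mean more rebalancing trades.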

There might be a minimum move required to make a trade, but to maximize average net profit you should be seeking higher volatility and larger price changes. This implies looking for Δᵢwⱼ → max, where the max has no preset fixed limit.

Selecting stocks that move little from day to day (e.g., low-beta stocks) would appear counterproductive. Choosing price variations tending to zero (Δᵢwⱼ → 0 ⟹ x_avg → 0) will not increase your trading edge unless you are ready to trade even more, and a lot more.

Rebalancing every day will generate a lot of quasi-random trades. If you cannot predict what the price will be tomorrow, at least not with enough confidence to bet the house on it, then you have to design a probabilistic approach to the problem. Nonetheless, rebalancing solves a big chunk of that. The strategy does not care why or how the price moved; it just rebalances to the prescribed weights, whatever they are or might be.

Winning The Game

We should all want to win this game over the long term. Period. Otherwise, whatever we did would be worthless. If your portfolio cannot reach the finish line because it failed to make money along the way, you simply lose. There were a lot of alternatives that would have let you succeed. Even buying low-cost index funds would have done a better job.

Here, you have a trading strategy that can make you win, not because it is wiser than other trading methods but because it is structured in a way that will make you win. And if you repair its flaws, it will make you win even more. That will be through no merit of its own; it is how the program is structured, its mechanics, and its administrative procedures, which are choices you can make even before you start the game.

The signal might be wrong most of the time, but nonetheless, the flushing side of the equation has some interesting properties. At the very least, its flushes coincided with the market drops of the past 10 years or so, where it might have counted the most. Check out the trading behavior, then add correcting code to alleviate weaknesses and have the strategy plan for a future that will be different from the simulated trading environment of the past decade.

You design trading rules that are meant to apply in an automated setting where sentiment, shades, nuances, and intensities are often ill-defined. You do this in a binary world where up and down need to be determined in order to take proper action. Because the price movements of the underlying stocks tend to be stochastic in nature, your decision-making process becomes fuzzy at best. You need to deal with averages: long-term averages for the system as a whole, and short-term averages to identify probabilistic momentum plays. When you do over 100,000 trades over the life of your portfolio, any single trade becomes unimportant. It is the average behavior that counts: F(t) = n ∙ x_avg, where the number of trades n has a major role to play, and your long-term edge even more.


June 6, 2021, © Guy R. Fleury. All rights reserved.