February 2, 2024

In late 2011, I wrote an interesting article titled: **Trend Or No Trend**. It tried to answer the question: Do we need a trend to define the direction of our trade decision process? What if we did not use any?

The article compared eleven different trading strategies. Each has its own procedures and trading rules. All the simulations used the same 43 stocks over the same time interval, from December 2005 to October 2011 (1,500 trading days or 5.83 years). The period includes the 2007-2008 financial crisis. None of the strategies escaped that market meltdown.

The 1,500 trading days were enough to gather information and worthwhile statistics. These simulations are more than ten years old. So, what do they have to teach us today?

For one, we can compare their trading methods, procedures, and general trading behaviors. Testing 43 stocks using the same initial capital over the same trading interval should show each strategy's comparative strengths, weaknesses, worthiness, and resilience.

The primary concern should be to have these strategies survive over the long term. If you do not see your trade decision proxy sufficiently profitable over past market data, how well will it handle its future?

We can sort these strategies according to the outcome of their respective payoff matrices:

Ʃ(**H**_{1} ∙ Δ**P**) > Ʃ(**H**_{2} ∙ Δ**P**) > ∙∙∙ > Ʃ(**H**_{n-1} ∙ Δ**P**) > Ʃ(**H**_{n} ∙ Δ**P**)

This gives a set ordered by outcome, for *i* = 1, …, *n*. It also answers the question: How much profit did each generate?

Whether you have 11 trading strategies or a thousand, they will follow the above relation. In any group, there will always be some better than others. Each strategy's CAGR is given by: CAGR_{i} = [(F_{0i} + Ʃ(**H**_{i} ∙ Δ**P**)) / F_{0i}]^(1/t) – 1. And with it, you can always reconstruct the equation to get each outcome: F_{i}(t) = F_{0i} + Ʃ(**H**_{i} ∙ Δ**P**) = F_{0i} ∙ (1 + CAGR_{i})^t.
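
As a quick sanity check on these two formulas, here is a minimal Python sketch. The capital and profit figures are made-up placeholders, not numbers from the article:

```python
def cagr(f0, total_profit, years):
    """CAGR_i = ((F0_i + sum(H_i * dP)) / F0_i) ** (1 / t) - 1"""
    return ((f0 + total_profit) / f0) ** (1.0 / years) - 1.0

def final_value(f0, rate, years):
    """F_i(t) = F0_i * (1 + CAGR_i) ** t"""
    return f0 * (1.0 + rate) ** years

f0 = 100_000.0       # initial capital (made-up placeholder)
profit = 850_000.0   # total payoff sum(H * dP) (made-up placeholder)
t = 5.83             # the 1,500-trading-day interval, in years

r = cagr(f0, profit, t)                     # ~0.47, i.e., ~47% CAGR
# The reconstruction recovers the same ending value:
assert abs(final_value(f0, r, t) - (f0 + profit)) < 1e-6
```

The round trip between the two equations is exact by construction: the CAGR is just the profit restated as a constant compounding rate over the interval.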

As you look at the eleven tested strategies, you see their CAGRs range from 47% to an outstanding 206%. These trading strategies were all tested under the same initial conditions.

They also had to deal with the same price series matrix (**P**). All eleven simulations have linked articles providing more details on procedures and trading rules used (including 43 charts each). Follow the links in the table below:

| Strategy | Trend Definition & Objective | # Trades | CAGR |
| --- | --- | --- | --- |
| Gyro's Trend Checker II | Strong | n/a | 47.24% |
| Gyro's Trend Checker II Improved | Strong | 4,790 | 54.07% |
| QQQ and QID Trader | Accumulate shares | 41,249 | 91.07% |
| Livermore Challenge Response | Long & Short Trends | 19,275 | 86.62% |
| After Livermore Challenge | Long & Short Trends | 19,164 | 109.76% |
| Turtles V 3.2 | Trend Channel | 321,226 | 127.11% |
| ADD3 V 1.7 | Price Relative Trades | 86,962 | 137.47% |
| Trend Study II | Accumulate shares | 347,453 | 120.43% |
| Myst's XDev Modified | Accumulate shares | 111,953 | 133.34% |
| Momentum Trader | DEVX8 (No Trends) | 58,605 | 27.76% |
| One Minute Bollinger Band | Accumulate shares | 127,706 | 206.93% |

All the strategies used next-day market orders at the open. Therefore, all trades had the same price series (**P**). Comparing strategies was a simple comparison of outcomes.

The difference in performance will have to come from the holding matrix **H**_{i}, which holds the ongoing inventory in each of the 43 stocks over the period.

Each holding matrix will result from the purchases **B**_{i} and sales **S**_{i} made: **H**_{i} = **H**_{0i} + **B**_{i} – **S**_{i}. All these matrices, **P**, Δ**P**, **H**, **B**, and **S**, will be the same size: 1,500 rows by 43 columns, with 64,500 data elements each. The price matrix **P** comprises only opening prices, while Δ**P** is the day-to-day difference in opening prices. By construction, a stock can only trade at a price in **P**. All trade entries and exits are next-day market orders at the open, thereby also subject to slippage and commissions.
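
A minimal NumPy sketch of these matrix relations, using random stand-in data rather than the article's actual prices and trades. One reading note: the article's **B**_{i} and **S**_{i} can be taken as cumulative purchase and sale matrices; here, daily trade counts are accumulated through time to the same effect:

```python
import numpy as np

rng = np.random.default_rng(7)
days, stocks = 1_500, 43                  # dimensions from the article

# Stand-in opening prices (a random walk), NOT the article's data.
P = 100.0 + np.cumsum(rng.normal(0.05, 1.0, (days, stocks)), axis=0)
dP = np.diff(P, axis=0, prepend=P[:1])    # day-to-day change in opening prices

H0 = np.zeros((days, stocks))             # initial inventory (none)
B = rng.integers(0, 2, (days, stocks))    # toy daily buys, in shares
S = rng.integers(0, 2, (days, stocks))    # toy daily sells, in shares

# Ongoing inventory: accumulate buys minus sells through time.
H = H0 + np.cumsum(B - S, axis=0)

profit = np.sum(H * dP)                   # the payoff-matrix total sum(H * dP)
print(P.size)                             # 64500 elements, as stated above
```

Every matrix shares the same 1,500 × 43 shape, so the payoff total reduces to one element-wise multiply and one sum.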

Whatever their respective trading procedures, you would evaluate each strategy's ensemble of trading routines. Are the procedures in **H**_{1} better than those in **H**_{2}? Or, is Ʃ(**H**_{1} ∙ Δ**P**) > Ʃ(**H**_{2} ∙ Δ**P**)? If yes, then **H**_{1} should look more favorable. Nonetheless, return is not the only consideration one should have.

That is what **Trend Or No Trend** is showing. The equations above could deal with thousands upon thousands of trading strategies. The only way to qualify their outcomes would be to differentiate their trading procedures, software routines, and trade-triggering processes.

In essence, we would be analyzing their behavioral differences. **Trend Or No Trend** compares only eleven trading strategies, each dealing with the same 43 stocks. The first strategies in the list start with a strong trend definition, and gradually, that definition is relaxed from simulation to simulation to end up with no trends defined. It showed that even having random-like functions as trade decision proxies can work.

There were no fundamental criteria for the selected stocks. At the time, it was what others on the forum looked at. It was an effortless way to make a selection that might appear almost random. Nonetheless, it might appear as a variation of a survivorship bias theme.

You did not know why a stock was selected, but it did not matter. What was important was that there was a selection. And all the programs would have to survive that particular selection. It could have been the worst test for a set of trading strategies. The strategies were debugged using one stock and then fed 42 unknown stocks that might not have been relatable to those trading procedures. And that was the objective, to see how, on average, a strategy would handle those other 42 stocks.

It put the eleven simulations on the same footing. They faced the same price series over the same time interval and could only trade on the same opening prices (**P**).

Only the trading methods used were responsible for the difference in outcomes. Looking at all the strategies from the point of view of their generated profits is sufficient to order them by their respective outcomes.

The trade decision process in any of these strategies did not have that many choices: either you bought or sold at the given opening price, or you held your position (including holding no shares: h_{i,j} = 0). Holding on to an existing position was: h_{i,j} = h_{i-1,j}, meaning having the same inventory on hand in stock *j* as the day before.
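
That three-way choice can be written as a tiny per-day update function. A toy sketch (the function name and the no-shorting assumption are mine, not from the article):

```python
def next_holding(prev_shares, decision, qty):
    """One day's inventory update for a single stock j:
    a hold keeps h[i][j] = h[i-1][j]; a buy or sell adjusts it."""
    if decision == "buy":
        return prev_shares + qty
    if decision == "sell":
        return max(prev_shares - qty, 0)   # assumes no short positions
    return prev_shares                     # hold: same inventory as yesterday
```

A strategy that issues no order for stock *j* on day *i* simply carries yesterday's inventory forward, which is exactly the h_{i,j} = h_{i-1,j} case.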

In the eleven-chart series, we start with a strong trend definition, meaning the trend is hard-coded. It is either long or not, with no in-between. Gradually, from simulation to simulation, the trend definition becomes blurry and even starts to include some random-like entries. As the simulations progress, the number of random entries increases. The series terminates with a strategy whose trading (entries and exits) is ~95% random-like, with no trend definition to determine a trade's direction. Nonetheless, the strategies made a bet to the upside since they were designed with a bias to accumulate shares over the long term.

The overall return from these strategies rises as the trend definition is gradually relaxed, even toward nonexistent. It says that a trading strategy could prosper at a high level even without a trend definition.

Each chart made its point, showing the power behind those trading procedures and assumptions and saying a particular set of trading procedures could be better than another. In such a case, why not use the most productive ones? Why not mix the best traits of these procedures and trading rules?

It is not by doing the same thing as everybody else that you will get different results. You have to make it unique, even innovative. Each presented strategy has some parts with an unusual look at the trading problem. Each trying to explore: “What if we did this or that?” Something outside what might be expected, like having the trade decision process governed by multiple random-like functions.

It is not because you have a debugged and workable trading strategy that it is the best, far from it. There are other considerations to have.

First, the quest for the highest performance possible is never-ending. You will never reach that optimum strategy that supersedes all others. An estimate for this might be one chance in 10^{400+}. That is a massive number. There is nothing you could do that would move that needle. That is a one followed by 400+ zeros before the decimal point. No amount of computing power could even make a dent in a million years. Make it trillions and trillions of years.
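
To get a feel for the scale of that strategy space, here is a back-of-the-envelope count: if each of the 64,500 day-stock cells allows just three decisions (buy, sell, or hold), the number of distinct decision tables already dwarfs the 10^{400+} estimate above. This is a rough illustrative count of my own, not the article's calculation:

```python
import math

days, stocks = 1_500, 43   # the test's dimensions
choices = 3                # buy, sell, or hold per day per stock (a simplification)

# log10 of the number of distinct decision tables over the whole test.
log10_count = days * stocks * math.log10(choices)
print(round(log10_count))  # ~30774: a 1 followed by over thirty thousand zeros
```

Even this crude count shows why exhaustively searching for "the" optimal strategy is hopeless; real strategies only ever sample a vanishing sliver of that space.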

So, first off, you will never find or prove that you programmed the ultimate trading strategy. Period. There are too many possibilities. And even if you found it, it would only apply to past market data. It would still have to prove itself on future price data, meaning handling its unknown future. Furthermore, to find the ultimate strategy, you would need to know how the other 10^{400+} - 1 behaved. So, kiss that dream goodbye.

In the future, you will always be at the right edge of a price chart. What matters is what follows that point and what you will do about it. A simulation does not produce real money. You cannot make money by simulating past market data. The future will not be a simulation with a “let's start over” button.

Nonetheless, we should concentrate on things that seem to have worked on past market data. Then, stress-test these strategies to find and enhance their strengths while seeking the weaknesses to correct or at least alleviate their potential negative impact.

Also, we cannot expect all the stocks we picked for our simulations to be there in 10 or 20 years. The stock selection process should be better than the 43 stocks in these simulations. The selection should be more dynamic and adaptable, removing underperformers and replacing them with better prospects. None of the eleven simulations tried to adapt in this way; they used the same 43 stocks over the entire period. Some tickers might disappear due to mergers, acquisitions, or bankruptcies. Make a better stock selection and take a long-term approach to your choices.

I have come to appreciate quasi-random-like entries and exits because they cannot predict what is coming next, nor can I.

The **Momentum Trader **strategy made that point relatively clear. It had ~95% of its entries and exits based on random-like functions (DEVX8). The strategy divided price movements into three zones: a buy zone, a no-trade zone, and a sell zone. Each zone defines which trading procedures could be applied.

The sell zone only allowed selling existing positions if there was a sufficient profit. Even then, with the paper profit in hand, the trade had to win its exit lottery ticket (for example: if rand() > 0.95, then ok_to_exit, giving the trade one chance in 20 to win its exit on any given day). A trade could miss its exit, or gain even more, simply due to the delayed exit. The strategy was making more money using this delayed gratification; all it needed to do was kick the can forward. Postponing the exit of a trade in a generally rising market should, on average, generate more profits.
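
A toy sketch of that sell-zone rule, assuming a hypothetical 10% minimum profit target. The threshold, constants, and function name are illustrative stand-ins, not taken from the original program:

```python
import random

random.seed(42)           # reproducible toy run

PROFIT_TARGET = 0.10      # assumed minimum profit before an exit is even allowed
LOTTERY_THRESHOLD = 0.95  # if rand() > 0.95, the trade wins its exit lottery

def try_exit(entry_price, current_price):
    """Sell-zone rule: require the profit target first, then the exit lottery."""
    if (current_price / entry_price - 1.0) < PROFIT_TARGET:
        return False                              # not enough paper profit: no exit
    return random.random() > LOTTERY_THRESHOLD    # one chance in 20 per day

# A profitable position waits, on average, about 20 days for its exit ticket.
days_waited = 0
while not try_exit(100.0, 115.0):
    days_waited += 1
```

The waiting time is geometric with a mean of 20 days, which is what turns the exit lottery into a systematic delay, and, in a rising market, into extra profit on average.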

This procedure was consistent with the strategy's premises. The objective was to build a core position over which trading could occur, thereby creating a positive feedback loop while recycling profits from price swing to price swing. The strategy took advantage of market dips and resold on rising prices under the condition that trades met minimum profit targets. You could push the strategy, if you wanted to, by providing more capital and increasing the size of its trading unit. These are administrative decisions you could make even before you started trading, rather than something the program decided. Such scaling could have been programmed in, but none of the eleven programs did so.

Unfortunately, these strategies are based on a now-dead programming language, and I have been unable to get market data after Yahoo stopped providing it in a WL-compatible format. It leaves the strategies dead in the water, with no way to run any of them again except by rewriting those programs in another programming language.

It represents an enormous lost opportunity. The last simulation was in early 2017. Redoing that simulation would represent a 7-year walk-forward test. It would undoubtedly teach me a few things, mainly how they would have dealt with their future. It also would emphasize that the stock selection process should have been more dynamic and adapted with time. An easy solution is to take the top 50 stocks in the QQQ ETF (see **QQQ To The Rescue**).

What matters in a trading strategy is its long-term outcome. What it does next year or in its tenth year is unimportant; what the overall outcome will be in 20 or 30 years is what matters. And that is not easily answered.

We often simulate trading strategies based on whatever criteria we like but rarely compare those strategies under the same testing conditions. Using the same datasets, initial capital, and time horizons allows us to compare trading procedures and their merits. It should help determine which underlying trading procedures have more long-term value.

Created: February 2, 2024, © Guy R. Fleury. All rights reserved.