Dec. 8, 2018
My latest book (Beyond the Efficient Frontier) demonstrated that the market's average secular trend could be considered sufficient to explain most of the general underlying long-term movement in stock prices. To make the demonstration, price series were not decomposed into various factors but simply reconstructed from scratch using a stochastic model that has been in use for decades.
Even as a rough approximation to the real thing, the model was sufficient to compare millions of simulated portfolios, built from randomly generated price series, with portfolios composed of live market data, thereby providing some statistics on their similarities.
The stochastic equation used is simple: F(t) = F(0)(1 + µdt + σdW)^t. It states that the funds put to work start at their initial value, to which is applied a compounding long-term rate of return that is allowed to fluctuate randomly over time.
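As a minimal sketch of how this equation translates into a simulated price path, the compounding can be discretized into day-equivalent steps. The parameter values below are my own illustrative assumptions, not the book's:

```python
import numpy as np

# Minimal sketch of F(t) = F(0)(1 + µdt + σdW)^t, discretized into daily
# steps: each period's return is the drift µdt plus a random shock σdW.
# All parameter values are illustrative assumptions.
f0 = 100.0            # initial fund value F(0)
mu_dt = 0.0001        # daily drift (the long-term trend component)
sigma = 0.01          # daily volatility of the random component
n_days = 252 * 30     # 30 years of day-equivalent observations

dW = np.random.randn(n_days)                   # random fluctuations, N(0, 1)
f = f0 * np.cumprod(1.0 + mu_dt + sigma * dW)  # compounded fund value path
print(f"F(0) = {f0:.2f}  ->  F(T) = {f[-1]:.2f}")
```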
When you take off the stochastic part, which tends to zero over the long term anyway, you can reduce the equation to: F(t) = F(0)(1 + r_m)^t, which gives the long-term market CAGR. For the past 240+ years, the average US market trend (r_m) amounted to a little less than a 10% CAGR, dividends included. That is, if you participated for 30 years or more. Therefore, we should not be surprised if it becomes the long-term expectancy for fund indexers and active players alike. It is like saying that achieving less should not even be an option.
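To put a number on it: $10,000 compounded at 10% for 30 years gives 10,000 × 1.10^30 ≈ $174,500. That is the kind of outcome the reduced equation describes.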
To reach that conclusion, the book rebuilt tons of portfolios from randomly generated price series. Using the supplied program, millions of portfolios could be generated with a single Python one-liner: return_vec = np.random.randn(n_assets, n_obs) (with np the usual NumPy alias), which generated a normally distributed return series for each asset over the desired number of observations, calibrated for simulation purposes into day-equivalent price movements.
Thereby, you could recreate portfolios of any size for any duration, enabling the study of their characteristics and idiosyncrasies. All price series were considered on the same level since they were constructed on the basis of percent change from period to period. A 10% move in any one stock is still a 10% move, no matter its price. Doing this normalized all price series.
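A runnable sketch of that process might look as follows. The book's one-liner is quoted verbatim; the array sizes and the 1% daily volatility scaling are illustrative assumptions on my part:

```python
import numpy as np

n_assets, n_obs = 100, 252 * 20   # e.g., 100 stocks over 20 years of daily data

# The book's one-liner: a normally distributed return series for each asset.
return_vec = np.random.randn(n_assets, n_obs)

# Calibrate to day-equivalent moves (here, an assumed 1% daily std. dev.),
# then rebuild prices as compounded percent changes. Every series starts
# at 1.0, so all price series are normalized: a 10% move is a 10% move.
prices = np.cumprod(1.0 + 0.01 * return_vec, axis=1)
```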
Also, since no seed was used in the random number generator, all price series would be unique as if taken from an infinite stock universe, thereby making each portfolio unique with no chance of ever having a duplicate price series or portfolio. The interest would be in their averaged behavior over time and their expected outcomes.
The No Secular Trend Scenario
First, consider the scenario without a trend (µdt = 0). You would be left with the random-like component as the only source of return: F(t) = F(0)(1 + σdW)^t. We might think that the first property of such portfolios would be their unpredictability, but we already know their average expectancy and could therefore predict, almost surely, that it would tend to F(0). This was by construction: since the sum of the random fluctuations (σdW) tended to zero, no long-term profits were to be expected: F(t) = F(0)(1 + 0)^t = F(0).
By design, the mean of each return series would tend to zero with a standard deviation of one. This implicitly meant that the average portfolio return (r_m) would also tend to zero. There was no profit to be expected from such a contraption. The sum of many zeros is still zero.
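A quick simulation makes the point concrete. This is a sketch with assumed path counts, horizon, and volatility: averaging thousands of zero-drift portfolios, the mean terminal value stays pinned near its starting point.

```python
import numpy as np

# No-trend scenario: F(t) = F(0)(1 + σdW)^t with µdt = 0. Averaging many
# independent paths, the mean terminal value should hover near F(0).
# Path count, horizon, and volatility are illustrative assumptions.
f0, sigma = 100.0, 0.01
n_paths, n_days = 10_000, 252 * 5

terminal = f0 * np.prod(1.0 + sigma * np.random.randn(n_paths, n_days), axis=1)
print(f"mean terminal value: {terminal.mean():.2f}  (vs F(0) = {f0})")
```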
How could this be accomplished in real-life trading? Simple: detrend all price series; what is left is the random-like component. A trader deals all the time with the right edge of a price chart, and this forces them to adapt their game to the circumstances. As was explained in the book, at the right edge of a price chart, the trader is facing odds close to 50/50.
This also implied that none of the conventional tools we might use to predict what comes next could be of any use. If you generated 50,000 portfolios at a time, with hundreds of stocks each, it would not change the general picture. The expected outcome would still be zero. You could not even predict, better than a 50/50 proposition, whether it would end on the positive or negative side of zero.
There is not much you can say about randomly generated time series, especially normally distributed return series. All you have is a Gaussian distribution with zero mean and a standard deviation of one. And yes, 99.7% of all the data will fall within 3 standard deviations.
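That three-sigma figure is easy to verify empirically on such simulated data (a quick check, not from the book):

```python
import numpy as np

# Empirical check of the three-sigma rule on simulated return data
z = np.random.randn(1_000_000)
print(f"within 3 standard deviations: {np.mean(np.abs(z) < 3):.2%}")  # ~99.73%
```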
There are no seasonalities, no periodicities, no patterns, no candle formations, no cycles or regime changes, and no certainties that you can rely on as repetitive or otherwise. Evidently, there is no fundamental data, no bellwether series, and no indicator that could provide a hint of predictability. You could always try to find such patterns and factors and, at times, find some, but they would be, at best, coincidental occurrences of random events with, again, no real predictive power.
There is no artificial intelligence, no deep learning, no signal processing, or other process that could uncover any kind of predictability in a normally distributed return series. No wavelet thing, no Fibonacci, no fractals, no golden ratios, no moon cycles or Kondratieff waves. You could not count on Elliott waves or spectral analysis either. None of it has ever been of predictive usefulness on normally distributed return series. The reason is simple: none could be.
It would be sufficient to carry out a few simulations in succession, using whatever trading method you want, to show that merely changing the randomly generated price matrix would be enough to make all those methods break down. Running a new simulation only requires rerunning the program. One more run, and your well-intended trading strategy, like a house of cards, would collapse.
Whether on randomly generated price series like those used in the book or on real future market data, both are close to a continuous chain of random-like events ready to unfold. In trading, what is important is knowing what to do at the right edge of a price chart.
The Long-Term Trend
It is only when you add a long-term drift to the portfolio creation process that the picture changes. Refer to the first equation: F(t) = F(0)(1 + µdt + σdW)^t.
The simulations showed it was sufficient to recreate portfolios with an average long-term trend to mimic the market average (r_m). It had predictability since what was added was just a straight line (µdt), which was given values such as µdt = 0.0001 or 0.0002 per day-equivalent period. A value of 0.0001 was equivalent to a long-term upside bias of about 2.5% average return per year.
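Assuming the day-equivalent calibration used above and roughly 252 trading days per year, the arithmetic checks out: (1.0001)^252 ≈ 1.0255, or about 2.5% per year, while 0.0002 compounds to about 5.2%.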
The long-term drift appeared sufficient to explain what was out there. As if saying: this is what is available. In a way, it makes statistical sense. A stock portfolio is like a sample taken from the whole available market. And for randomly generated portfolios drawn from an infinite stock universe, any selection could be viewed as a sample from that infinite universe.
If you want to play for the long term, then you should expect the market average for your efforts since it is the most probable outcome. But even if indexing is big business, you can do better than market averages simply by adding trading to your portfolio-building process. Adding skills and expertise (positive alpha) to your trading strategies can lead to even higher returns: F(t) = F(0)(1 + µdt + αdt + σdW)^t, since this alpha is also compounding.
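A short sketch shows the effect of that extra compounding term. The drift and alpha values here are assumptions for illustration only; both paths share the same random shocks so the difference is attributable to αdt alone:

```python
import numpy as np

# Compounding with added alpha: F(t) = F(0)(1 + µdt + αdt + σdW)^t.
# Illustrative assumptions: ~10%/yr market drift, ~2%/yr alpha, daily steps.
f0 = 100.0
mu_dt, alpha_dt, sigma = 0.10 / 252, 0.02 / 252, 0.01
n_days = 252 * 30

dW = np.random.randn(n_days)   # same shocks for both paths, isolating alpha
f_market = f0 * np.prod(1.0 + mu_dt + sigma * dW)
f_alpha = f0 * np.prod(1.0 + mu_dt + alpha_dt + sigma * dW)
print(f"market only: {f_market:,.0f}   with alpha: {f_alpha:,.0f}")
```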
The long-term trend is acquired by being fully exposed to the market, and the skills (alpha) will come from what you put into the trading procedures you design.
Created... December 8, 2018, © Guy R. Fleury. All rights reserved.