October 5, 2013
 
The prior two sections, A Basic View and A Basic View II, were, just as this part is, a necessary introduction to the reasoning needed to look at the trading/investing problem from the perspective of portfolio optimization under long-term uncertainty.
 
When considering the graphic presented in the previous section, one soon realizes that it is stating the obvious: we know the past to the penny, we know the now for what it is, and the future remains almost a complete unknown. 
 
And yet, for the future, we can draw almost a mirror image of the past data presented, even though none of those curves (for t > 0) could have been predicted in advance.
 
Objective: Compounding Trading Strategies
 
Again, using the payoff matrix notation, let a trading strategy matrix H be the result of a set of decisions to increase or decrease an ongoing stock inventory over an extended period of time. All you can do is decide the action to take: get in, get out, and determine the size and direction of each bet. But there is also an overlooked decision process that is just as important, if not more so: deciding to stay in or stay out of any position, which in turn will affect market exposure and/or the participation level. A lot more time is spent on the holding functions than on the decision triggers to get in or out of a trade.
 
The inventory level could change following any type of rule-based information set that one thinks might be relevant to the task (proven beneficial or not). These decisions don't even have to be related to the immediate market data. But one thing is sure: the inventory can only change following buy and/or sell operations (H = B0 + (B – S)). This expression states that to an initial position (B0, which could be zero) in each of the stocks in the portfolio are added all the purchases B and subtracted all the sales S, resulting in the inventory matrix H. The B and S matrices could be huge depending on the number of stocks and the trading interval under consideration.
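To make the identity concrete, here is a minimal NumPy sketch, reading H row by row as the running inventory (so the buys and sells are accumulated over time); the positions and trades below are entirely hypothetical:

```python
import numpy as np

# Hypothetical example: 5 trading days, 3 stocks (rows = days, columns = stocks).
B0 = np.array([100.0, 0.0, 50.0])   # initial positions B0 (could all be zero)
B = np.zeros((5, 3))                # buy matrix: shares bought per day, per stock
S = np.zeros((5, 3))                # sell matrix: shares sold per day, per stock
B[1, 0] = 100                       # buy 100 shares of stock 0 on day 1
S[3, 2] = 50                        # sell 50 shares of stock 2 on day 3

# Running inventory: initial position plus cumulative buys minus cumulative sales.
H = B0 + np.cumsum(B - S, axis=0)
print(H)                            # shares held in each stock, day by day
```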
 
Trades can be taken in fixed amounts of shares, fixed equity ratios, or fixed dollar amounts, or as part of an averaging or scaling process to get in and out of trades; they could be time or volume sliced, or combinations of all these. One can even include randomly positioned trades using random time scaling and/or randomly sliced volume.
 

The Payoff Matrix

Whatever the trading strategy used, it can, at all times, be summarized by its payoff matrix: Σt(H.*ΔP), where ΔP is the matrix of price variations between entry and exit prices, including all variations in between over the trading interval, and where H is the applied trading strategy matrix. For example, a 20-year EOD (End-of-Day) trading strategy H for something like the 100 stocks of the S&P100 would result in a ΔP matrix of some 500,000 daily price variations, while going down to the minute level would require both the ΔP and H matrices to have 195,000,000 data elements. Now, just imagine the size of the ΔP and H matrices at the millisecond level!
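Here is a minimal NumPy sketch of that payoff computation; the `.*` of the notation is NumPy's element-wise `*`, and both H and ΔP are randomly generated, purely hypothetical matrices used only to show the mechanics and the order of magnitude:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_stocks = 20 * 252, 100                  # ~20 years of EOD data on 100 stocks

dP = rng.normal(0.0, 1.0, (n_days, n_stocks))     # hypothetical daily price variations ($/share)
H = rng.integers(0, 2, (n_days, n_stocks)) * 100  # hypothetical inventory: 0 or 100 shares held

print(dP.size)                                    # 504,000 entries, the order of magnitude cited
total_profit = (H * dP).sum()                     # Σt(H.*ΔP): element-wise product, then summed
print(total_profit)                               # the strategy's total generated profit
```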
 
One should expect the B and S matrices to be of the sparse type, with most entries having a value of 0 (H = B0 + B – S). This would result in no inventory change for each of the zeros encountered in the buy B and sell S matrices. Over 90% of the entries in the B and S matrices might be zero, which would be equivalent to holding existing positions, whatever they were. This naturally all depends on the trading strategy used.
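For illustration only, the degree of sparsity is easy to measure on simulated buy and sell matrices; the 5% order frequency below is an assumption, not a measured figure:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (20 * 252, 100)

# Hypothetical sparse buy/sell matrices: orders placed on only ~5% of (day, stock) cells.
B = rng.integers(1, 100, shape) * (rng.random(shape) < 0.05)
S = rng.integers(1, 100, shape) * (rng.random(shape) < 0.05)

no_trade = np.mean((B == 0) & (S == 0))   # fraction of cells producing no inventory change
print(f"{no_trade:.1%} of entries correspond to simply holding the existing position")
```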
 
I think a basic point being made here is that the stock market game is a long-term endeavor; it is not a weekday junket to a “Wall Street” casino with an “I feel lucky” attitude. Sure, anyone can play that way from one junket to the next until they realize that they are not really winning the game.
 
The main reason won't be that they were unlucky; it will be that they played the game based on wrong or invalid assumptions and with a total disregard for market risks, not to mention the very nature of the game itself. They'll fail because of poorly designed trading strategies that were doomed from the start, no matter what happened. This is probably why some 80%+ of wannabe traders disappear from the ranks, mostly from having lost too much, if not all, of their stake, never to be heard from or seen again.
 
There might be some 95 million (direct and indirect) investors in the US stock market alone. Most of them participate through mutual funds, ETFs, and pension plans. Some, say about 10 to 15%, opt to manage their own portfolios, with most of them doing up to 5 trades a month and having relatively small trading accounts. I don't have exact or even approximate numbers, but I expect most short-term discretionary traders to make fewer than 10 trades a day. The top 20% of market participants should account for some 80% of all trades, with the remaining 20% of trades being spread out over the 80% considered relatively smaller traders.
 
There are so many silly trading ideas going around, and it is not surprising to find so many people ready to accept, promote, or defend such concepts. How could anyone defend that astrology has anything to do with playing the market? How could one promote that there is a “universal law” to price movements? How could anyone advance that they have found the “technical indicator” that supersedes all others?
 
Why don't they look at the math of the game, evaluate opportunities, and determine beforehand if “their” trading strategy has any chance of providing long-term profits? Simply backtesting their concepts over extended periods of time and over many assets of the same type might, or should, be sufficient to show whether they have any worth.
 
Playing a thousand trades, only to lose it all on the thousand and first, will still leave a zero count at the end of the game. It is not an “all in” on a pair of deuces kind of game. When endowed with limited predictive powers, one has to design reasonable trading strategies in order to better adapt to wildly fluctuating stock prices.
 
If any trading strategy H can be summarized using the payoff matrix Σt(H.*ΔP), then all the efforts being deployed should be concentrated on how much, when, and which positions should be taken in which direction. There are not that many ingredients in the payoff matrix. On one side, you have the historical inventory H, and on the other, the corresponding matrix of price variations ΔP of all the stocks ever held in the portfolio's inventory over time.
 
Stuff not seen in the payoff matrix Σt(H.*ΔP) includes the length of the trading interval, the actual trading strategy H used, and the stock selection P itself. Also not shown are all the closed or still open positions; all you have is the end result: how much profit was generated over the entire portfolio's life at end time t. This could also have been expressed as:
 
A(t) = A(0) + ∫₀ᵗ H(u) dP(u)
 
which says exactly the same thing using integrals. This expression also says: your asset value A(t) at time t  is equal to your original asset value at time t=0 plus all the profits generated by the applied trading strategy over the integration interval. 
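In discrete form, that integral is nothing more than the cumulative sum already present in the payoff matrix. A minimal sketch of the equivalence, on made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
dP = rng.normal(0.0, 1.0, (2520, 10))     # hypothetical price variations, 10 stocks, ~10 years
H = np.full_like(dP, 100.0)               # hypothetical constant inventory of 100 shares each

A0 = 1_000_000.0                          # initial asset value A(0)
A = A0 + np.cumsum((H * dP).sum(axis=1))  # running A(t): A(0) plus the accumulated H.*ΔP
print(A[-1])                              # terminal asset value at end time t
```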
 

Trading Decisions

To the payoff matrix should be added the information set needed not only to select the stocks to be traded but also to regulate the whole trading process, making the payoff matrix something more like Σt(H.*DIset.*ΔP.*SIset), where you have a trading decision information set (DIset) and a stock selection information set (SIset), both of which try to control the evolution and the output of the payoff matrix.
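The text does not specify what these information sets look like; one simple way to picture them, assumed here purely for illustration, is as 0/1 gating matrices applied element-wise to the payoff:

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_stocks = 2520, 100
dP = rng.normal(0.0, 1.0, (n_days, n_stocks))   # hypothetical price variations
H = np.full((n_days, n_stocks), 100.0)          # hypothetical base inventory

# Purely illustrative gates: SIset keeps a subset of stocks, DIset keeps the
# (day, stock) cells on which the strategy chooses to be exposed.
SIset = (rng.random(n_stocks) < 0.30).astype(float)
DIset = (rng.random((n_days, n_stocks)) < 0.60).astype(float)

gated_payoff = (H * DIset * dP * SIset).sum()   # Σt(H.*DIset.*ΔP.*SIset)
print(gated_payoff)
```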
 
However, whatever the information set used to improve overall performance, it might be difficult to predict or identify what will work best. Nonetheless, one can design the mathematical model that will explain past results, or the framework that will apply just as well to future results. From the payoff matrix Σt(H.*ΔP), a single price variation ΔP could be expressed as P0(1 + r)^t – P0, or more simply: ΔP = P(t) – P(0).
 
Whatever the price variation ΔP, it could be represented as a compounded rate of return over the trading interval and, without loss of generality, it would provide the same profit or loss as the difference between the exit and entry prices. Viewed this way, improving the stock selection and its exposure window should improve overall performance in proportion to this degree of expertise. This would translate into: P0(1 + r + α1)^t – P0, where r is the long-term average rate of return over the exposure period while α1 is the added return generated by the better stock selection process.
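A small numeric illustration of what even a couple of points of α1 compound to; the price, rates, and holding period below are assumptions chosen only for the example:

```python
# Hypothetical numbers: a $100 stock, an 8% long-term rate, 2% added alpha, held 20 years.
P0, r, alpha1, t = 100.0, 0.08, 0.02, 20

base_profit = P0 * (1 + r) ** t - P0             # ΔP from the average rate of return alone
alpha_profit = P0 * (1 + r + alpha1) ** t - P0   # ΔP with the better selection: r + α1
print(round(base_profit, 2), round(alpha_profit, 2))   # ≈ 366.10 vs ≈ 572.75 per share
```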
 
Long-term exposure to a selected list of US stocks, like the 100 stocks of the S&P100, should have an average performance about the same as the long-term S&P100 average. In fact, Σt(H(B&H).*ΔP) → Σt(H(S&P100).*ΔP) in probability. This has been demonstrated in numerous academic papers over the years. Then, if the most expected outcome of a trading strategy H on the S&P100 stocks is to tend to the long-term performance of the index's average, H(S&P100), how is one to cross over this barrier in order to outperform this expected long-term limiting average?
 

The Problem

There is a need to look at the problem from a different angle. You know that you can improve performance levels by selecting (on average) better-performing stocks, selecting better trading windows than average, or developing better predictive trading methods. I see most efforts concentrated in this area: the ΔP side of the equation. It is understandable that better forecasts lead to better overall performance. 
 
However, when you look at historical results, the case for over-performance through better selection is hard to make since 3 out of 4 professional money managers fail to exceed the long-term averages. One simple question should be: why? How can they be highly “professional” and not outperform the averages? Sorry about this one. It is not intended to hurt anyone, but the question remains.
 
It's as if most realized some time ago that maybe the best way to go about it was not necessarily predicting price movements but emulating the averages, which would explain the popularity of index funds, which are, by design, made to closely follow an index average. Then, also by design, one should not be surprised if, indeed, an index fund mimics an index's long-term performance level.
 
It should be observed that when playing the index fund game, one puts an upward limit on long-term performance, and this limit is that the portfolio will tend to approach the index's long-term return. There will be no alpha generation, as none is expected. It is by jumping over these barriers to higher returns that one can generate alpha over the long haul, not by staying within self-imposed limits.
 

The IBM Case Story

The chart below shows IBM over a 13-year period starting in January 2000. It shows a linear as well as a polynomial regression. Over the interval, performance has been about 6% compounded annually, dividends included, which could be considered within average secular market trends. However, starting in January 2000, no one could have predicted, on a daily basis, what the price variations would be over the period.
 
[Chart: IBM (13 years)]
The same data as presented in the above chart could also be viewed as daily price variations (ΔP: the pink seismograph line about zero, left scale).  
 
[Chart: IBM Price Variations (13 years)]
The slope of the regression line on these price variations is close to zero, with an r² value of 0.0007. The average daily price variation over the entire 3,317 trading days was a mere $0.029, even though 3,307 of those trading days had a price variation of more than $0.50. Almost 45% of trading days (some 1,492) had price variations in excess of $2.00. Some 5.9% (163 days) had moves of $5.00 or more. The 20-day moving average of price variations (blue line, right scale) shows some periods of high volatility.
 
The underlying long-term trend ($0.029/day) is only a small fraction of the average daily range of $2.25 per day. This amounts to saying that the underlying signal, within all that noise, represents a mere 1.27% of the average daily price range, leaving the rest, 98.73%, as just noise. I know it is just one stock. Nonetheless, it could be representative of many.
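For anyone wanting to reproduce that kind of tally, here is a sketch of how the statistics might be computed from a daily price series; the file name and column names are assumptions, not the actual data source used for the charts:

```python
import pandas as pd

# Assumed: a daily OHLC file for IBM covering 2000-2013 (hypothetical file/column names).
ibm = pd.read_csv("IBM_daily.csv", parse_dates=["Date"], index_col="Date")

daily_change = ibm["Close"].diff().dropna()      # ΔP: close-to-close daily variation
avg_drift = daily_change.mean()                  # underlying long-term trend, in $/day
avg_range = (ibm["High"] - ibm["Low"]).mean()    # average daily price range, in $

print(avg_drift, avg_range, avg_drift / avg_range)   # the signal as a fraction of the daily range
print((daily_change.abs() > 2.00).mean())            # fraction of days moving more than $2.00
```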
 
Even from this limited data, some observations could be made. Detrending the 13 years of IBM data results in the following chart:
 
[Chart: IBM Detrended (13 years)]
Compared to the non-detrended version, one cannot help but notice that in the above chart, IBM did not double over the trading interval. There seem to be only two major trends in the series: one going down for about two-thirds of the trading interval and one going up for the last third.
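A common way to produce such a detrended series is to subtract a fitted linear trend from the price. A minimal sketch, under the same hypothetical data assumptions as above (not necessarily the exact method used for the charts):

```python
import numpy as np
import pandas as pd

# Assumed: the same hypothetical IBM daily close series as above.
close = pd.read_csv("IBM_daily.csv", parse_dates=["Date"], index_col="Date")["Close"]

x = np.arange(len(close))
slope, intercept = np.polyfit(x, close.to_numpy(), 1)   # linear regression on the price series
detrended = close.to_numpy() - slope * x                # remove the linear drift, keep the level

# The charts also show a polynomial regression; a low-degree fit (degree chosen here
# arbitrarily) can be obtained the same way on either the original or detrended series.
poly_fit = np.polyval(np.polyfit(x, detrended, 3), x)
print(slope)                                            # the removed drift, in $/day
```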
 
The following chart shows IBM's original and detrended price series side by side, each with its own polynomial regression line.
 
 
[Chart: IBM Detrended Regression Lines (13 years)]
Observe that all the generated profits over the 13-year period resulted from the underlying long-term trend. In fact, the difference between the two polynomials' linear terms is -0.0972x – (-0.1257x) = 0.0285x, or about $0.029 per day, as was expected. This positive drift of about 3 cents per day would have started to generate a positive return only after holding on for at least the first 10 years.
 
The real question is: are there trading methods one can implement that will generate better long-term results than just holding the bag?
 
The above leads to the methods used in generating the IBM backtest found in the following article: 
 
 
which shows that the trading strategy used performed much better than the Buy & Hold and easily exceeded whatever should have been the expected long-term return. All this by jumping over the limiting barriers to outstanding performance.
 
The main idea was to find a transforming function such that the payoff matrix: Σt(H.*ΔP) would be changed into Σt(H.*T(t).*ΔP), which in turn would be much greater than its original unmodified version: Σt(H.*ΔP).
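As an illustration of the mechanics only — the T(t) below is a made-up, time-increasing scaling of exposure, not the actual transforming function used in the backtest — applying such a transform element-wise looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)
n_days, n_stocks = 2520, 10
dP = rng.normal(0.05, 1.0, (n_days, n_stocks))  # hypothetical variations with a small upward drift
H = np.full((n_days, n_stocks), 100.0)          # hypothetical base inventory

# Purely illustrative T(t): exposure scaled up gradually over time, as if profits
# were being progressively reinvested.
t = np.arange(n_days)[:, None]
T = 1.0 + t / n_days                            # grows from 1.0 toward 2.0 over the interval

original = (H * dP).sum()                       # Σt(H.*ΔP)
transformed = (H * T * dP).sum()                # Σt(H.*T(t).*ΔP)
print(original, transformed)
```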
Created: October 5, 2013. © Guy R. Fleury. All rights reserved.