October 5th, 2013

A Basic View III

The prior two sections, A Basic View and A Basic View II, were, just as this part is, a necessary introduction to the reasoning needed to look at the trading/investing problem from the perspective of portfolio optimization under long-term uncertainty.

When considering the graphic presented in the previous section, one soon realizes that it states the obvious: we know the past to the penny, we know the present for what it is, and the future remains almost a complete unknown. And yet, for future data, we can draw almost a mirror image of the past data presented, even though none of those curves (for t > 0) could have been predicted in advance.

Again, using the payoff matrix notation, let a trading strategy matrix **H** be the result of a set of decisions to increase or decrease an ongoing stock inventory over an extended period of time. All you can do is decide which actions to take: get in, get out, and set the size and direction of each bet. But there is also an overlooked decision process which is as important, if not more so: whether to stay in or stay out of positions, which in turn will affect market exposure and/or participation level. A lot more time is spent on the holding functions than on the decision triggers that get in or out of a trade.

The inventory level could change following almost any type of rule-based information set that one thinks might be relevant to the task (proven beneficial or not). These decisions don't even have to be related to the immediate market data. But one thing is sure: the inventory can only change through buy and sell operations: **H** = B_{0} + (**B** – **S**). This expression states that to an initial position B_{0} (which could be zero) in each of the stocks in the portfolio are added all the purchases **B** and subtracted all the sales **S**, resulting in the inventory matrix **H**. The **B** and **S** matrices could be huge, depending on the number of stocks and the trading interval under consideration.
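As a minimal sketch of this bookkeeping (with made-up share quantities, and NumPy assumed as the tool), the running inventory **H** = B_{0} + (**B** – **S**) is just the initial position plus the cumulative sum of buys minus sells:

```python
import numpy as np

# Hypothetical illustration: 3 stocks over 5 trading days.
# Rows are days, columns are stocks; entries are share quantities.
B0 = np.array([100, 0, 50])      # initial positions (could be zero)

B = np.zeros((5, 3))             # buy matrix: mostly zeros
S = np.zeros((5, 3))             # sell matrix: mostly zeros
B[1, 0] = 50                     # day 1: buy 50 more shares of stock 0
B[2, 1] = 200                    # day 2: open a 200-share position in stock 1
S[3, 0] = 150                    # day 3: close out stock 0 entirely

# Running inventory: initial position plus cumulative buys minus sells.
H = B0 + np.cumsum(B - S, axis=0)
print(H[-1])                     # final inventory per stock: [0. 200. 50.]
```

Each all-zero row of **B** – **S** simply carries the previous day's inventory forward, which is the "hold" case discussed below.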

Trades can be taken in fixed amounts of shares, fixed equity ratios, or fixed dollar amounts, as well as part of an averaging or scaling process to get in and out of trades; they could be time- or volume-sliced, or combinations of all these. One can even include randomly positioned trades using random time scaling and/or randomly sliced volume.

Whatever the trading strategy used, it can, at all times, be summarized by its payoff matrix: Σ_{t}(**H**.*Δ**P**), where Δ**P** is the matrix of entry and exit prices, including all variations in between over the trading interval, and **H** is the applied trading strategy matrix. For example, a 20-year EOD (End of Day) trading strategy **H** on, say, the 100 stocks of the S&P100 would result in a Δ**P** matrix of some 500,000 daily price variations, while going down to the minute level would require both the Δ**P** and **H** matrices to hold some 195,000,000 data elements each. Now, just imagine the size of the Δ**P** and **H** matrices at the millisecond level!
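A small sketch of that payoff computation, using simulated price variations (the element-wise product and the EOD dimensions follow the text; the price changes themselves are random placeholders, not market data):

```python
import numpy as np

rng = np.random.default_rng(42)

# 100 stocks over 20 years of EOD bars (~252 trading days/year).
n_days, n_stocks = 20 * 252, 100
dP = rng.normal(0.0, 1.0, size=(n_days, n_stocks))  # daily price changes ($)
H = np.full((n_days, n_stocks), 100.0)              # e.g., hold 100 shares each

# Total strategy profit: element-wise product H .* dP, summed over all entries.
payoff = np.sum(H * dP)
print(dP.size)  # 504000 data elements, the "some 500,000" in the text
```

At the minute level the same 20-year span would need roughly 390 bars/day × 5,040 days × 100 stocks ≈ 196 million entries per matrix, which is where the text's 195,000,000 figure comes from.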

One should expect the **B** and **S** matrices to be of the sparse type, with most entries having a value of 0 (**H** = B_{0} + **B** – **S**). Each zero encountered in the buy **B** and sell **S** matrices results in no inventory change. Over 90% of entries in the **B** and **S** matrices might have zero for value, which is equivalent to holding existing positions, whatever they were. This naturally all depends on the trading strategy used.

I think a basic point being made here is that the stock market game is a long-term endeavor; it is not a weekday junket to a “Wall Street” casino with an “I feel lucky” attitude. Sure, anyone can play that way from one junket to the next, until they realize that they are not really winning the game.

The main reason won't be that they were unlucky; it will be that they played the game based on wrong or invalid assumptions and with near-total disregard for market risks, not to mention the very nature of the game itself. They'll fail because of poorly designed trading strategies that were doomed to fail from the start no matter what happened. This is probably why some 80%+ of wannabe traders disappear from the ranks, mostly from having lost too much, if not all, of their stake, never to be heard of or seen again.

There might be some 95 million (direct and indirect) investors in the US stock market alone, most of them participating through mutual funds, ETFs, pension plans, and the like. Some, say about 10 to 15%, opt to manage their own portfolios, with most of them doing up to 5 trades a month and having relatively small trading accounts. I don't have exact or even approximate numbers, but I expect most short-term discretionary traders to make fewer than 10 trades a day. The top 20% of market participants should account for some 80% of all trades, with the remaining 20% of trades spread out over the 80% considered relatively smaller traders.

There are so many silly trading ideas going around, and it is not surprising to find so many people ready to accept, promote, or defend such concepts. How could anyone defend that astrology has anything to do with playing the market? How could one promote the idea that there is a “universal law” of price movements? How could anyone claim to have found the “technical indicator” that supersedes all others?

Why don't they look at the math of the game, evaluate opportunities, and determine beforehand if “their” trading strategy has any chance of providing long-term profits? Simply backtesting any of their concepts over extended periods of time, over many assets of the same type, might, and should, be sufficient to show worthiness or not.

Playing a thousand trades only to lose it all on trade one thousand and one will still leave a count of zero as the end-of-game result. It is not an “all in” on a pair of deuces kind of game. When endowed with limited predictive powers, one has to design reasonable trading strategies in order to better adapt to wildly fluctuating stock prices.

If any trading strategy **H** can be summarized using the payoff matrix Σ_{t}(**H**.*Δ**P**), then all the efforts being deployed should concentrate on how much, when, and which positions should be taken, and in which direction. There are not that many ingredients in the payoff matrix. On one side you have the historical inventory **H**, and on the other the corresponding matrix of price variations Δ**P** of all the stocks ever held in the portfolio's inventory over time.

Things not seen in the payoff matrix Σ_{t}(**H**.*Δ**P**) include the length of the trading interval, the actual trading strategy **H** used, and the stock selection **P** itself. Also not shown are all the closed or still open positions; all you have is the end result: how much profit was generated over the entire portfolio's life at end time t. This could also have been expressed as:

A(t) = A(0) + ∫ **H**(t)d**P**

which says exactly the same thing using integrals. This expression also says: your asset value A(t) at time t is equal to your original asset value at time t=0 plus all the profits generated by the applied trading strategy over the integration interval.

To the payoff matrix should be added the information set needed not only to select the stocks to be traded but also whatever else will regulate the whole trading process, making the payoff matrix something more like Σ_{t}(**H.*DI**_{set}.*Δ**P.*SI**_{set}), where you have a trading decision information set (**DI**_{set}) and a stock selection information set (**SI**_{set}), both of which try to control the evolution and the output of the payoff matrix.
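One simple way to read this augmented expression is to treat **DI**_{set} and **SI**_{set} as element-wise gates on the base payoff. The 0/1 masks below are an assumed, deliberately crude representation (the article does not specify the form of either information set):

```python
import numpy as np

rng = np.random.default_rng(7)
n_days, n_stocks = 252, 10

dP = rng.normal(0.0, 1.0, size=(n_days, n_stocks))  # simulated price changes
H = np.full((n_days, n_stocks), 100.0)              # share inventory

# Hypothetical information sets as 0/1 element-wise gates:
# DI gates when trading decisions allow exposure, SI gates which stocks.
DI = (rng.random((n_days, n_stocks)) > 0.5).astype(float)  # decision set
SI = np.zeros((n_days, n_stocks))
SI[:, :5] = 1.0                                            # select 5 of 10 stocks

gated = np.sum(H * DI * dP * SI)  # Σ_t(H .* DI .* ΔP .* SI)
base = np.sum(H * dP)             # Σ_t(H .* ΔP), for comparison
```

In this reading, zeroed entries suppress a position's contribution to the payoff, which is exactly how the information sets "control the evolution and the output" of the matrix.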

However, whatever the information set used to improve overall performance, it might be difficult to predict or identify what will work best; nonetheless, one can design the mathematical model that will explain either past results or the framework that will apply as well to future results. From the payoff matrix Σ_{t}(**H**.*Δ**P**), a single price variation ΔP could be expressed as P_{0}(1+r)^{t} – P_{0}, or more simply: ΔP = P(t) – P(0).

Whatever the price variation ΔP, it could be represented as a compounded rate of return over the trading interval and, without loss of generality, it would provide the same profit or loss as the difference between the exit and entry prices. Viewed this way, improving on the stock selection, over its exposure window, should improve overall performance by that degree of expertise. This would translate into: P_{0}(1 + r + α_{1})^{t} – P_{0}, where r is the long-term average rate of return over the exposure period while α_{1} is the added return generated by the better stock selection process.
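The equivalence of the two ΔP expressions is easy to check numerically. In this sketch, the entry price, exit price, holding period, and alpha are all made-up values chosen only for illustration:

```python
# Any price change can be restated as a compounded rate over the holding
# period: dP = P0 * (1 + r)**t - P0. Solving for r given entry/exit prices:
P0, Pt, t = 100.0, 180.0, 13.0           # hypothetical 13-year hold

r = (Pt / P0) ** (1.0 / t) - 1.0         # implied compounded annual rate
assert abs(P0 * (1 + r) ** t - Pt) < 1e-9  # same profit as Pt - P0

# Adding a selection-skill alpha on top of the base rate:
alpha = 0.01
Pt_improved = P0 * (1 + r + alpha) ** t  # P0*(1 + r + alpha)**t
print(round(r, 4))
```

Because alpha compounds over the whole exposure window, even a small α_{1} widens the gap from the base outcome as t grows.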

A long-term exposure to a selected list of US stocks, like say the 100 stocks of the S&P100, should have an average performance about the same as the long-term S&P100 average. In fact, Σ_{t}(**H**(B&H).*Δ**P**) → Σ_{t}(**H**(S&P100).*Δ**P**) in probability. This has been demonstrated in numerous academic papers over the years. Then, if the most expected outcome of a trading strategy **H** on the S&P100 stocks is to tend to the long-term average performance of the index, **H**(S&P100), how is one to cross over this barrier in order to outperform this expected long-term limiting average?

There is a need to look at the problem from a different angle. You know that you can improve performance levels by selecting (on average) better-performing stocks, selecting better-than-average trading windows, or developing better predictive trading methods. I see most efforts concentrated in this area: the Δ**P** side of the equation. It is understandable: better forecasts lead to better overall performance.

However, when you look at historical results, the case for over-performance through better selection is hard to make, since 3 out of 4 professional money managers fail to exceed the long-term averages. One simple question should be: why? How can they be highly “professional” and still not outperform the averages? Sorry about this one, it is not intended to hurt anyone, but the question remains.

It's as if most realized some time ago that maybe the best way to go at it was not necessarily predicting price movements but emulating the averages, which would explain the popularity of index funds, which are, by design, made to closely follow an index average. Then, by design, one should not be surprised if indeed an index fund mimics an index's long-term performance level.

It should be observed that by playing the index fund game, one puts an upper limit on long-term performance: the portfolio will tend to approach the index's long-term return. There will be no alpha generation, as none is expected. It is by jumping over these barriers to higher returns that one can generate alpha over the long haul, not by staying within self-imposed limits.

The chart below shows IBM over a 13-year period starting in January 2000. It shows a linear as well as a polynomial regression. Over the interval, performance has been about 6% compounded annually, dividends included, which could be considered within average secular market trends. However, starting in January 2000, no one could predict, on a daily basis, what the price variations would be over the period.

The same data as presented in the above chart could also be viewed as daily price variations (ΔP: the pink seismograph line about zero, left scale).

The slope of the regression line on these price variations is close to zero, with an r² value of 0.0007. The average daily price variation for the entire 3,317 trading days was a mere $0.029, while 3,307 of those trading days had a price variation of more than $0.50. Almost 45% of trading days (some 1,492) had price variations in excess of $2.00. Some 5.9% (163 days) had moves of $5.00 or more. The 20-day moving average of price variations (blue line, right scale) shows some periods of high volatility.

The underlying long-term trend ($0.029/day) is only a small fraction of the average daily range of $2.25 per day. This amounts to saying that the underlying signal, within all the noise, represents a mere 1.27% of the average daily price range, leaving the rest, 98.73%, as just noise. I know it is just one stock; nonetheless, it could be representative of many.
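The signal-to-noise measurement above can be sketched as follows. Note that this uses simulated price variations, not the actual IBM series behind the charts; only the day count, drift, and average range are taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the 13-year daily price-variation series.
n = 3317                               # trading days in the span
drift = 0.029                          # underlying trend, $/day (from the text)
noise = rng.normal(0.0, 1.5, size=n)   # assumed noise level around the trend
dP = drift + noise                     # daily price variations

est_drift = dP.mean()                  # a noisy estimate of the $0.029 drift
avg_range = 2.25                       # average daily range, $ (from the text)
signal_fraction = drift / avg_range    # signal as a share of the daily range
```

With these inputs the ratio lands a little under 1.3% (the article's 1.27% presumably comes from an unrounded drift figure), leaving the overwhelming remainder of the daily range as noise.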

Even from this limited data, some observations could be made. Detrending the 13-year IBM data results in the following chart:

Compared to the un-detrended version, one cannot help but notice that in the above chart, IBM did not double over the trading interval. There seem to be only two major trends in the series: one going down for about two-thirds of the trading interval and one going up for the last third.

The following chart shows IBM's original and detrended price series side by side, each with its own polynomial regression line.

Observe that all the generated profits over the 13-year period resulted from the underlying long-term trend. In fact, the difference in the slopes of the two polynomials is: -0.0972x – (-0.1257x) ≈ 0.029x, as was expected: a positive drift of about 3 cents per day, which would have started to generate a positive return only after holding on for at least the first 10 years.

The real question is: are there trading methods that one can implement that will generate better results long term than just holding the bag long term?

The above leads to the methods used in generating the IBM backtest found in the following article:

which shows that the trading strategy used performed much better than the Buy & Hold, and easily exceeded whatever should have been the expected long-term return. All this by jumping over the limiting barriers to outstanding performance.

The main idea was to find a transforming function such that the payoff matrix Σ_{t}(**H**.*Δ**P**) would be changed into Σ_{t}(**H**.***T**(t).*Δ**P**), which in turn would be much greater than its original, unmodified version: Σ_{t}(**H**.*Δ**P**).
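As a sketch of what such a transformation could look like, the snippet below scales inventory over time. The exponential form of **T**(t) is purely an assumption for illustration; the article does not specify the transforming function's actual shape:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_stocks = 252, 5

dP = rng.normal(0.05, 1.0, size=(n_days, n_stocks))  # simulated price changes
H = np.full((n_days, n_stocks), 100.0)               # base share inventory

# Hypothetical transforming function T(t): gradually scale holdings up,
# one of many possible choices for reweighting the payoff over time.
t = np.arange(n_days, dtype=float)
T = (1.0 + 0.001) ** t        # grows the weight ~0.1% per bar
T = T[:, None]                # column vector, broadcast across stocks

base = np.sum(H * dP)         # Σ_t(H .* ΔP)
transformed = np.sum(H * T * dP)  # Σ_t(H .* T(t) .* ΔP)
```

Whether the transformed payoff exceeds the base one depends on the interaction of **T**(t) with the price series; the point here is only the mechanics of inserting **T**(t) into the element-wise product.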

Created... October 5th, 2013 © Guy R. Fleury. All rights reserved.