March 4, 2012

All I have written on this website is dedicated to a single equation, which translates into a single concept. This equation has evolved over the years, but not in what it represents or in its governing trading philosophy. The concept has remained the same throughout: trade over a long-term share-accumulation program. In its latest iteration, it can serve as an explanation of the whole process. Its payoff matrix representation looks like this:

 Σ [ h_i0 (1 + g_i + T_i + CC_i + NP_i)^(t-1) .* P_i0 ((1 + r_i)^t - 1) ]

where g_i is the average reinvestment policy rate matrix, T_i is the trading strategy contribution rate matrix, CC_i is the covered call contribution rate for each individual stock, NP_i is the naked put program contribution rate matrix, and r_i is each stock's average rate of return. All these procedures combine to generate alpha at an exponential rate (see On Growth Optimal Portfolio I-VI for more details).
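A minimal numerical sketch of this equation in Python, with purely illustrative rates (none of the numbers below come from the research notes; they only show how the sum is evaluated):

```python
import numpy as np

# Sketch of: Σ [ h_i0 (1 + g_i + T_i + CC_i + NP_i)^(t-1) .* P_i0 ((1 + r_i)^t - 1) ]
# All rates are illustrative assumptions, not values from the papers.
n_stocks, t = 5, 20                       # 5 stocks, 20-year horizon

h0 = np.full(n_stocks, 1_000.0)           # initial holdings h_i0 (shares)
P0 = np.full(n_stocks, 25.0)              # initial prices P_i0
g  = np.full(n_stocks, 0.02)              # reinvestment policy rate g_i
T  = np.full(n_stocks, 0.03)              # trading strategy contribution T_i
CC = np.full(n_stocks, 0.01)              # covered call contribution CC_i
NP = np.full(n_stocks, 0.01)              # naked put contribution NP_i
r  = np.full(n_stocks, 0.10)              # average rate of return r_i

payoff = np.sum(h0 * (1 + g + T + CC + NP) ** (t - 1) * P0 * ((1 + r) ** t - 1))
buy_and_hold = np.sum(h0 * P0 * ((1 + r) ** t - 1))   # same, with no enhancements

print(f"enhanced payoff: {payoff:,.0f}   Buy & Hold payoff: {buy_and_hold:,.0f}")
```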

It can all be summarized in the following payoff matrix:

 Σ (H+ .* ΔP)

An enhanced inventory holding function matrix H+ is applied to a price differential matrix ΔP. In Excel, it would be represented as the sum of a column-wise multiplication. The enhanced holding function should evidently be made to outperform the Buy & Hold investment strategy: Σ(H+ .* ΔP) > Σ(H .* ΔP). That is the primary objective of any trading system. If your trading methods cannot outperform a long-term Buy & Hold strategy, then why in the world would you want anyone else to adopt them?
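A small sketch of these mechanics in Python: holdings element-wise times price differences, summed. The enhanced matrix below is a random stand-in used only to show the computation, not an actual outperforming strategy:

```python
import numpy as np

# Payoff matrix Σ(H .* ΔP): holdings times price differences, summed over
# all stocks and all periods. The data here is random, for shape only.
rng = np.random.default_rng(0)
n_stocks, n_days = 10, 250

P  = 50 + rng.normal(0, 1, (n_days + 1, n_stocks)).cumsum(axis=0)  # toy prices
dP = np.diff(P, axis=0)                        # ΔP: period-to-period differences

H_bh  = np.full((n_days, n_stocks), 100.0)     # Buy & Hold: constant holdings
H_enh = H_bh * (1 + rng.uniform(0, 1, H_bh.shape))   # stand-in for H+

print("Σ(H  .* ΔP) =", round(np.sum(H_bh  * dP)))    # Buy & Hold payoff
print("Σ(H+ .* ΔP) =", round(np.sum(H_enh * dP)))    # enhanced-holdings payoff
```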

A more detailed description of my payoff matrix can be found in the ITRADE formula, which is explained in my research note On Seeking Alpha III. A recent simulation using these trading principles can be found in my latest On Trade Slicing article.

There is a need to be able to express a trading methodology in mathematical form. It is a way to represent the intricacies of the various trading procedures used to implement a long-term, portfolio-level investment program. If you ever want to trade big, you will find that the bigger you get, the more you have to hold on to some shares for the long term; otherwise, you are bound to try flipping huge volumes on a daily basis. And even with the aid of computers, this task could be daunting. But then again, it might be doable using a different set of techniques than the ones I apply in my methods.

Random Data Portfolio Testing

From the original concept in 2006 to the first simulations on real market data in April 2011, all my tests were done on randomly generated stock prices. These tests on random data forced me to look for solutions that were not curve-fitted or over-optimized, for the simple reason that no two price series would behave the same or could ever be duplicated.

Every single stock in every single portfolio test would be different from whatever was produced before or would be produced after. There was no way to know what the future would bring for any of the stocks, which stocks would behave better than others, or which would go bankrupt (up to 28% could fail). It was like selecting 50 or 100 stocks at random from an infinite stock universe. This was not like reshuffling a data series as in a Monte Carlo simulation. It was creating all new and unique data series for each stock each time you ran a test. No knowledge gained from a previous test could be transferred to the next. There were indeed many lessons learned using randomly generated data series (see my first 2007 Alpha Power paper).
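A minimal sketch of the procedure, assuming a simple log-random-walk generator (the drift, volatility, and failure proxy below are illustrative assumptions, not the generator from my papers):

```python
import numpy as np

# Each run creates brand-new, unique price series; some may decay toward
# zero, a crude stand-in for bankruptcy. The generator is left unseeded on
# purpose: no two tests can ever use the same data.
rng = np.random.default_rng()

def random_stock(n_days=1500):
    drift = rng.normal(0.0003, 0.0004)    # each stock gets its own drift...
    vol   = rng.uniform(0.01, 0.04)       # ...and its own volatility
    return 30 * np.exp(np.cumsum(rng.normal(drift, vol, n_days)))

portfolio = [random_stock() for _ in range(50)]
failed = sum(p[-1] < 5.0 for p in portfolio)      # crude failure count
print(f"{failed} of 50 stocks ended below $5 in this run")
```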

Real Data Portfolio Testing

In April 2011, I started testing on real market data. Having designed a long-term, trend-following trading methodology, I naturally started with a script having a strong trend definition: the Gyro's Trend Checker II (the original version has been taken down and is no longer available on the old Wealth-Lab 4 website). After a lot of code modification to better suit my purpose, I released the first portfolio results based on real market data for a portfolio of 43 stocks over a testing period of 1,500 trading days (5.83 years).

It was a first draft. It used crude trading methods and looked as if I had taken a bulldozer to the original chartscript code. Like all first drafts, it left much room for improvement. But even in this first iteration on real market data, performance was remarkable: every single stock produced a profit, and on average, the portfolio generated some 47% CAGR (compound annual growth rate). Since my simulations on random data had projected about a 50% CAGR, I concluded that by designing better trading procedures, I could improve performance even further.

Right from the start, even on a theoretical level, I knew no one would believe such performance results. They were too good to be true. They went beyond acceptable norms. And yet, they were the result of a simple trading philosophy: trading over a stock accumulation process. I was designing trading procedures almost blind, seeing only about 220 of the 1,500 trading days and oftentimes having only a mental picture of how the trading procedures would react over the other 1,280. But still, performance levels exceeded what most would consider “reasonable”. It was almost as if everyone was saying: I cannot do this right now; I cannot reach those performance levels; therefore, no one else can.

Most people did not accept the theoretical functions provided in my papers using randomly generated data and, therefore, were not ready to accept the simulation results, even when performed on real market data.

The underlying implications of this trading philosophy should force anyone to reconsider some fundamental precepts of modern portfolio theory, and I guess almost no one was ready to do that, either. I don't recall anyone ever challenging the portfolio efficient frontier or the capital market line over the last 50 years. And there I was, doing just that. I was, in a way, proving that Jensen was right: there was some alpha to be gained, and it was positive; not only that, it could be exponential.

I opted from the start to display every single chart produced by the WL4 simulator (see Gyro's Trend Checker II) as a form of proof of what was presented in the portfolio tables (generated profits on the charts would, in fact, correspond to generated profits in the portfolio tables). And since all the charts were the result of my scripts being run on the Wealth-Lab 4 site, using their computers, their simulation software, and their market data, I thought that no one would question what was on the charts...

Since 2008, I have chronicled on Wealth-Lab, almost in real time, what I was doing as my research evolved. My latest research summary can be found in Trend or No Trend, where most of the simulation tests of the last year are summarized and where one can see the progression as performance improved, using various scripts as a basis for more elaborate trading procedures. It was all part of my ongoing search for the limits of my trading equations, and part of the overall effort to better understand the trading procedures.

A Milestone

In June 2011, I issued the Livermore Challenge. It did not last very long since, within a few hours of improving on this non-productive strategy, I had already reached the 100% CAGR barrier. For me, it was a milestone: I was going much further than the theoretical expectations in my research papers using random data. I was using real market data and pushing performance levels much higher than anticipated. In retrospect, it should not have been a surprise, as the randomly generated data in my papers was less volatile than real market data.

Every simulation built on the previous ones, trying to reach better and better performance levels. I was searching for the limits: where it would break down and, most importantly, why. On the premise that no one would believe the improving performance results anyway, I kept pushing the limits, designing new administrative procedures and adding more features and routines to the enhanced holding matrix. I was looking for procedures that would enhance performance across the board, not just on a single stock but on a whole group of stocks. Each time I showed a chart somewhere that would really outperform expectations, it was met, as expected, with more than just skepticism.

For me, a remarkable output from all these simulations was the realization that:

  • even when using a trend-following methodology, a trend definition might not be a necessary requirement;
  • even if you need some form of technical indicator as a decision surrogate, it too might not be needed;
  • even if random entries might not be considered a respectable way to scale in positions, they were not detrimental to the overall performance; on the contrary, they could even add value.

My latest simulations (although not the highest performers) showed some interesting properties (see Interesting Properties and On Trade Slicing). The strategy was designed to sprinkle random entries in an open trading window as a scaling-in method, which had the side effect of having some of the entries fall, by coincidence, at the best entry points. Some of the trades caught the cycle bottom with no knowledge of doing so, or even trying to do so: a remarkable side effect of a simple random time slicing procedure helping to improve performance.
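A toy sketch of such a random time-slicing entry procedure (the price series and the 60-day window are arbitrary assumptions for illustration):

```python
import numpy as np

# Sprinkle entries at random inside an open trading window instead of
# trying to time them; some fills land near the cycle low by coincidence.
rng = np.random.default_rng(42)

prices  = 50 + 5 * np.sin(np.arange(300) / 15)   # toy cyclical price series
window  = np.arange(100, 160)                    # an open 60-day trading window
entries = np.sort(rng.choice(window, size=5, replace=False))

print("entry days:", entries)
print("fills     :", np.round(prices[entries], 2))
print("window low:", window[np.argmin(prices[window])])  # best possible entry day
```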

Breaking the Barriers

Because you dared to cross limiting barriers by combining what appear to be elementary notions, like trading over a stock accumulation process, you become unbelievable and unrealistic, while all you did was put forward trading equations with a different view of the problem. I wanted a compromise of sorts, part of a total solution, that could perform well while remaining feasible over the whole time spectrum under which a portfolio needed to thrive. The methodology was producing positive exponential alpha in a world where modern portfolio theory states that long-term alpha, if not zero, would tend to zero. Even Jensen, in his seminal 1968 paper, showed that if there was some alpha, it was negative (about -1.1). And there I was, advocating not only that there was some positive alpha, but that it could be exponential.

I have worked so long with these formulas that I now consider them almost elementary, a simple expression of a trading philosophy. It's like I could dictate what I want as output: this parameter will increase the stock accumulation rate while this other one will try to take more short-term positions, and this other one will hold out for more profits. I still would not know what the future would bring, but the equations would be ready for whatever was coming.

My trading methodology is designed for big portfolios to get much, much bigger with time. It is designed to accumulate shares, and while the process is evolving, it is also designed to trade over the accumulative process. It is not a simple strategy that is picking a trade here and there. It's a portfolio-level investment strategy that is made to span the entire long-term time horizon using over-diversification as a protection measure and exponential alpha for profit generation. It is designed to trade not just thousands of trades but hundreds of thousands, if not millions. It can adapt to the original capital available and can go as far as one wants to go. It is totally scalable, up or down.

Compounding: the Name of the Game

We play a compounded rate of return game. Depending only on our own capital to implement our trading strategies should be considered a real waste of potential (on the assumption that you do have a worthwhile trading strategy with positive expectancy).

We spend years learning, understanding, and researching to ultimately develop and debug our own trading systems. Then we simulate for years over past data to make sure that the trading strategy to be used makes sense and is profitable under a wide range of market conditions. But we seem to forget some very basic notions like compounding, trade size, and market impact.

Let's assume for an instant that you are good at portfolio management, and your system was designed and backtested to produce a 30% CAGR over a 20-year period: Initial Capital * (1 + 0.30)^20. This would produce 190 times your original 100k capital. But it would take 20 years to get there (over 7,300 days, 21,900 meals, and time enough to smoke 182,000 cigarettes for a 25-a-day smoker). The point is that it would take a long time just sitting on your butt. The reward sounds interesting since, after 20 years, you would end up with 19M, with half of it being earned in the last 2.65 years.

But you opt to delay 2 years because, in the beginning, your system is not producing enough cash, and therefore, you want to improve on your design or take time to find more capital before you start. The delay does not seem to cost much in lost opportunity. But in reality, you push your whole 20-year sequence back by 2 years: you will reach your 20th year two years later. The cost: 131 times your original capital, or 13.1M. Equation: (1 + 0.30)^22 - (1 + 0.30)^20 = 131. Therefore, delaying the implementation of your trading strategy can indeed be very expensive.

Not only that, but not searching for additional capital can be even more expensive. Say you find 10M to manage using your trading strategy. Now, over the same 20-year period, the portfolio would grow to 1.90B. Your opportunity cost for not finding the additional funds: 1.88B. How much effort should you put into finding the additional funds? And if you wait 2 years before doing it, then the opportunity cost amounts to 1.31B. Very expensive indeed! The equivalent of about 65M per year in opportunity cost, just because you were not in a rush to implement your trading strategy.

So once you have a worthwhile trading strategy, one that can be backtested on, say, over 100 stocks over more than 5 years while generating thousands of trades, then I would strongly suggest that you stop wasting time, get out there, and find the capital required to implement it. Should you ever want to wait 5 years to find the additional 10M before starting your own fund, then be prepared to accept an opportunity cost in the order of 5.2B. The longer you wait, the more expensive the delay gets.

More Compounding and Doubling Time

The whole picture changes if you can achieve a 50% CAGR over a 20-year investment period. Your original 100k would grow to 332M, not a bad return considering. But then again, the 10M that you solicited here and there, using the same strategy producing the 50% CAGR, would reach 33.2B. And delaying the implementation of your strategy by 2 years would represent a staggering opportunity cost of about 41.6B.

However you look at it, not getting that additional capital to implement your trading strategy should be considered very expensive, opportunity-wise. It all depends on your ability to produce a 30% or a 50% CAGR over the 20-year investment period. These calculations are very easy to make; just plug your rate of return into the following: Initial Capital * ( (1 + r)^22 - (1 + r)^20 ) = x (where r equals your expected rate of return over the investment period). To estimate a 5-year delay, use: Initial Capital * ( (1 + r)^25 - (1 + r)^20 ) = x.
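The same arithmetic in a few lines of Python, reproducing the figures above:

```python
# Opportunity cost of starting `delay` years late at a compounded rate r.
def delay_cost(capital, r, horizon=20, delay=2):
    return capital * ((1 + r) ** (horizon + delay) - (1 + r) ** horizon)

print(f"{delay_cost(100_000, 0.30):,.0f}")              # ~13.1M on 100k at 30%
print(f"{delay_cost(10_000_000, 0.30):,.0f}")           # ~1.31B on 10M at 30%
print(f"{delay_cost(10_000_000, 0.30, delay=5):,.0f}")  # ~5.2B for a 5-year wait
```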

No matter how you slice it, once you have a worthwhile back-tested trading strategy that produces a 30% CAGR or more, not getting that extra capital is very expensive as an opportunity cost. It is not the first years that are important in achieving your goals. It is the last few years of your long-term investment plan that matter the most.

On a 30% CAGR, your doubling time is 2.65 years. Doubling 100k in your first 2.65 years is not that dramatic. But each doubling after your 20th year will represent huge amounts, as you will make as much in those 2.65 years as in all the previous years combined. After doubling 8 times, your portfolio would have increased 256-fold (21.2 years). Your next doubling, 2.65 years later (year 23.85), would see your portfolio at 512 times its original value. And that is the main reason to play the game. The rewards go to those who can last the longest and who can compound their performance over a long-term horizon.

Every exponential game has a doubling time: the time required to do as much as all previous doubling times combined: initial capital * (…, 64, 128, 256, 512, 1024, ...). And that is what we are playing for. Doubling, in the beginning, does not have that high a value; it is not where it counts. It is after many doubling times that it really matters. Technically, the job of a strategy developer is to reduce this doubling time as much as possible, and this can only be done by increasing the CAGR. Therefore, every alpha point you can add to your CAGR has the effect of shortening your doubling time.
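The doubling time itself is simply t2 = ln(2) / ln(1 + CAGR); a short sketch over a few illustrative rates:

```python
import math

# Doubling time shrinks as alpha points are added to the CAGR.
for cagr in (0.10, 0.30, 0.40, 0.50):
    t2 = math.log(2) / math.log(1 + cagr)
    print(f"CAGR {cagr:4.0%}: doubles every {t2:4.2f} years")
```

At 30% CAGR this gives the 2.65-year doubling time cited above; at the market's 10% average, a doubling takes about 7.3 years.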

Between the 9th and 10th doubling, you add 512 times your initial capital to the 512 times already accumulated over the prior 9 doublings. And whatever capital you can add to your initial capital will more than reward all the added efforts or trouble you may have over the life of the portfolio.

The most expected long-term CAGR for playing the market is about 10%, based on the past 200 years (including dividends). A trading strategy developer's job is to transform the above-cited equation into: (my initial + outside capital) * (1 + CAGR + alpha_t)^t. There are many ways to do this.

You can do like Mr. Buffett and be a better stock picker (2 – 5 alpha points). But that is not enough. You can do like Mr. Buffett again by reinvesting the accumulating profits in buying more shares or outright companies (10 – 15 alpha points). Over the past 40 years, Mr. Buffett has averaged 12 alpha points by accumulating more and more shares of companies as his portfolio grew.

What I propose is to trade over the whole process, which can propel you to much higher alpha by taking the proceeds from your trading operations to feed the stock accumulation process at a higher rate. The output will be an exponential alpha, which will gradually reduce the doubling time. And it is the doubling times, the last few of them, that can take on huge proportions when compared to the first ones. For example, after your 10th doubling, your portfolio stands at 1,024 times its original value. So the real prize is not at the beginning; it is at the end. All my research notes, equations, and simulations deal with this unique problem: how to increase alpha points and, as a side effect, reduce doubling time.
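A quick sketch of what those alpha points mean in the equation above, over a 20-year span and a 100k start (rates illustrative):

```python
# (initial capital) * (1 + CAGR + alpha)^t, with the market's ~10% as base.
capital, years = 100_000, 20
for alpha in (0.00, 0.02, 0.12, 0.20):
    total = capital * (1 + 0.10 + alpha) ** years
    print(f"10% + {alpha:4.0%} alpha: {total:>12,.0f} after {years} years")
```

The last line, 10% plus 20 alpha points, is the 30% CAGR case that turns 100k into 19M.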

The future only happens once. Plan to use it the best you can.

Portfolio Testing

We play an uncertain game with an uncertain future. We are ready to try anything with a positive expectancy. That’s why we all test so many different trading methods, over short-, medium-, and long-term horizons. We try to find strategies that worked in the past and hope the same thing will prevail in the future…  Even though there could be 11 dimensions to the future, you will be able to exercise only one.

The future is always new. What prevailed in the past has little resemblance to what will happen 10 to 20 years down the road. Who knows which inventions, disasters, or constraints will drive our Darwinian planetary economy? Forecasting short to medium-term stock prices is not that easy. And when you study all those who try forecasting, you find out that in the long term, they have a hard time beating the market averages, and you do observe that most don’t. 

Portfolio management has seen many methods for trying to optimize performance: the Kelly number, Optimal-f, fixed ratio, fixed amount, variable ratio, and many others. But most have one deficiency or another. The Kelly number and Optimal-f presume that your win rate will remain constant in a Gaussian price environment, and that is not the case. The fixed ratio and variable ratio tend to get too risky, as all trades are not created equal and should not be treated as such. The fixed amount will underperform as the portfolio grows.

Optimal-f works on the grounds that you know your future profit distribution (based on your backtests!) and that this distribution is Gaussian in nature, which the market's is not. Therein lies the weakness of Optimal-f, and it is the same problem faced by the Kelly number: I do not know what my hit rate will be in the future, and I have no way of finding out. Moreover, the Optimal-f method becomes unfeasible as the portfolio grows. One cannot flip a million shares or more on a daily basis and hope to make consistent profits. New problems would emerge that would limit performance.
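A small sketch of that sensitivity, using the standard Kelly fraction f = p - (1 - p)/b (the hit rates and payoff ratio below are assumptions for illustration):

```python
# Kelly bet fraction for win probability p and win/loss payoff ratio b.
def kelly_fraction(p, b):
    return p - (1 - p) / b

for p in (0.50, 0.55, 0.60):          # hit rates a backtest might suggest
    print(f"hit rate {p:.0%}: bet {kelly_fraction(p, b=1.5):.1%} of capital")
# A few points of error in the future hit rate moves the "optimal" bet a lot:
# here, 50% vs 60% doubles the recommended bet size (16.7% vs 33.3%).
```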

With what I think is one small “innovation” – the trade and volume accelerators – you solve all these questions in one swoop. You let the market decide which stocks will survive and thrive. And using this Darwinian approach, where you feed the strong and starve the weak, you get a performance-reinforcement methodology that will outperform the market itself.

Small Bets

You place small bets because the market has a tendency to throw you a curve ball here and there. There is always a Lehman or a Madoff somewhere. There is always a WorldCom, an Enron, a Refco, or an MF Global cooking the books in the background, and you never know when one of those will turn out to be your preferred dip-buying candidate holding a high percentage of your portfolio. And having a big bet on one of those can destroy your portfolio and put you out of the game.

So, you place smaller bets as the most basic preservation measure and portfolio protection. It’s the same reason you accept stop-losses as a kind of portfolio insurance. It is preferable to pay the insurance fee in order to avoid the big drawdown on the big bet, with no other recourse than to accept a portfolio wipeout. I cannot stress this enough: we play a treacherous game where, over a hundred trades, we can make a profit and then, on a single trade, lose 80% of the portfolio. The risk is too high. I’ve seen people blow up their entire account on a single trade in a single day.

The more secure your trading edge, meaning the better it holds long term, the more you can increase the volume (the bet size) as you go along. And whatever constitutes your edge, should you participate in only a fraction of the occasions when this edge occurs, then you can increase your participation by taking more of such trades.

Should you deceive yourself in backtests by doing over-optimization, curve-fitting, or outright peeking, you will find out, at your own expense, that the market does not fool around that much. From my observation, it has always been ready to squash any delusion one might have. One needs first to be honest with oneself.

Removing Limiting Barriers

Where most academic papers elaborate on efficient markets, growth optimal portfolios, and efficient frontiers, my papers emphasize that you can jump over these limitations by reinforcing your positions in the best performers of your selected assets while starving your worst performers.

Whenever someone uses the market's efficient frontier as a limiting barrier and puts in all the effort needed to keep the portfolio within these barriers, why should he or she be surprised that the portfolio did, in fact, remain within the barriers and performed within the expected averages? Isn't it like buying an index fund and then hoping to outperform the index?

One should strive first to remove these limiting factors, these barriers to higher performance levels. But whatever trading method is used, it must at least beat the Buy & Hold.

My methods do not know the future price of stocks, or the future hit rate for that matter, and do not care what the future distribution will be. They operate on a relatively simple formula (given in my papers) where all the stress is put on position sizing using positive reinforcements.

The result will be that, whatever your selected assets, your portfolio weights will end up in the same order as their overall relative performances. This, in turn, means that, in the end, you will have made your biggest bets on the assets with the highest returns and your smallest bets on the losers.
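A minimal sketch of this positive-reinforcement sizing; the update rule below is an illustrative stand-in, not the exact formula from my papers:

```python
import numpy as np

# Feed the strong, starve the weak: scale weights by relative performance,
# then renormalize. Weights end up ordered like the performances themselves.
rng = np.random.default_rng(1)
perf = rng.normal(0.08, 0.20, 10)          # relative performance of 10 assets

weights = np.ones(10) / 10                 # start equal-weighted
weights *= np.clip(1 + perf, 0.5, None)    # reinforce winners, starve losers
weights /= weights.sum()                   # renormalize to 100%

best, worst = np.argmax(perf), np.argmin(perf)
print(f"best performer weight : {weights[best]:.3f}")
print(f"worst performer weight: {weights[worst]:.3f}")
```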

Building a portfolio is a multi-period, multi-decision process where it is necessary to determine which stocks to trade, the entry points, the bet size, the duration of positions, and the management of inventory levels. You can use a decision surrogate to determine, for each stock, the best course of action in relation to all the others in your portfolio. By having controlling functions, you determine what you want to get out of the market, and on your terms.

You want to reward the market (by putting more money in it) when it rewards you first. If you look closely at the capital requirement functions, you will notice that one of the requirements (a side effect) is that you are requesting the market, in the end, to pay for it all (see Jensen Modified Sharpe Ratio).

The enhanced holding function H+ is a matrix of size, say, 100 by 5,000 or 10,000 (some 20 or 40 years of daily data). Using 5-minute data, the matrix size would be 100 stocks by 390,000 or 780,000, again for 20 or 40 years of data. This means that, using daily data, there would be 500,000 decision points over a 20-year period and some 1,000,000 over the 40-year trading interval.

Each period would require a trading decision for each of the stocks in the portfolio: hold, buy, or buy more; sell, sell more, or sell all. And it is this sequence of trading decisions that will determine the outcome of any portfolio. It is not a single bet from one period to the next; it is a continuous bet over the entire trading interval, where a trading decision needs to be made for every single interval, not just for the present one, but for every position in the portfolio that is still open. This means that a decision needs to be made all the time over the entire inventory on hand, not just on the last trade.
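A structural sketch of that decision sequence, with a placeholder rule standing in for the actual trading procedures:

```python
import numpy as np

# One decision per stock per period over the whole inventory: the holding
# matrix H is built row by row, 100 stocks x 5,000 days = 500,000 decisions.
rng = np.random.default_rng(7)
n_stocks, n_days = 100, 5_000

H = np.zeros((n_days, n_stocks))
holdings = np.zeros(n_stocks)

for day in range(n_days):
    signal = rng.normal(size=n_stocks)            # placeholder decision surrogate
    holdings = np.where(signal > 1.0, holdings + 100,    # buy / buy more
               np.where(signal < -1.5, 0.0, holdings))   # sell all / else hold
    H[day] = holdings

print("decision points:", n_days * n_stocks)      # 500,000 over ~20 years
```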

In the end, one thing is sure: the future is still there to unfold.


Created on ... March 4, 2012,   © Guy R. Fleury. All rights reserved.