Basic Portfolio Math III

Whatever type of stock portfolio we would like to build, especially one intended to last for decades, we need to make plans: estimates of where we want to go and how we are going to get there. Maybe more to the point, we need the willingness to uncover our own abilities and limitations, given what we can do with our time and capital resources in the face of uncertainty.

Even a ballpark figure is better than none. It is better than just hoping, from period to period, that in the end it will all turn out OK. That kind of hope can be fine for the long-term investor, where a diversified portfolio will give an almost certain positive outcome. But in trading, it does not work so well. No one can plan that far ahead when operating on a day-to-day basis; there is no vision of where it will all end, especially when most of those days could be viewed as approaching the flip of a coin.

History has shown that just holding on to a diversified portfolio, a.k.a. "investing" in the future economic prosperity of good companies, will elevate this hope of ending well to a near certainty.

There is a lot we can learn from past performance and historical data, even if future performance might be somewhat unpredictable. History has shown that the US stock market has had a tendency, on average, to move up over the long term. We can take advantage of such a notion.

Also, we can look at the past and average just about anything that left behind numbers. In turn, this can provide some information on general market behavior, as well as help guide our trading methods.

$\textbf{Basic Portfolio Math II}$ ended with a portfolio's long-term CAGR time function: $$CAGR_{n \bar x}(t) = \displaystyle{\left(\frac{F_0 + \bar n \cdot t \cdot \bar x}{F_0 }\right)^{1/t}}-1$$which behaves as a decaying return function: the more time you give it, the more the $CAGR$ will tend to decrease. Even if a trading strategy's signature shows that it is sustainable, its $CAGR$ will nevertheless decline with time.

For $\bar x > 0$, the expression $\bar n \cdot t \cdot \bar x$ is monotonically increasing. The number of trades $\,n\,$ can only go up, $\,t\,$ has only one way to go, and $\,\bar x\,$ needs to be greater than zero. And, being an average, $\,\bar x\,$ will tend to a constant as $\,n\,$ gets large.

Since any trading strategy can be expressed as $\,F(t) = F_0 + n \cdot \bar x,\,$ there is no loss of generality in representing $\,n,\,$ the total number of trades, as an average number of trades per period over the trading interval: $n = \bar n \cdot \Delta t.$ As a consequence, for every $\,\bar x > 0\,$, the above CAGR function will degrade with time.
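
As a quick illustration of this decay, here is a minimal sketch in Python. The figures used ($F_0 = \$100,000$, $\bar n = 100$ trades per year, $\bar x = \$500$ per trade) are hypothetical placeholders, not strategy data.

```python
# A minimal sketch of the CAGR decay described above.
# F0, N_BAR and X_BAR are hypothetical assumptions, not actual strategy data.

def cagr_linear_strategy(f0: float, n_bar: float, x_bar: float, t: float) -> float:
    """CAGR_{n x_bar}(t) = ((F0 + n_bar * t * x_bar) / F0)^(1/t) - 1."""
    return ((f0 + n_bar * t * x_bar) / f0) ** (1.0 / t) - 1.0

F0, N_BAR, X_BAR = 100_000.0, 100.0, 500.0   # hypothetical strategy signature

for years in (1, 5, 10, 20, 30):
    # Linear profit accumulation makes the compounded rate shrink with time.
    print(f"t = {years:2d} years -> CAGR = {cagr_linear_strategy(F0, N_BAR, X_BAR, years):6.2%}")
```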

It should be no surprise if, in a walk-forward test, we see a trading strategy's CAGR degrade when going live. As a matter of fact, it should be expected if this behavior is not compensated for.

Having put forward that our trading strategies will degrade with time, shouldn't we offer countermeasures to this phenomenon? Shouldn't our strategies compensate for this inherent return degradation?

The Market's CAGR

The US stock market has had, on average, a long-term upward drift over the past 240+ years. Buying an index tracker like $SPY$ for the long term is just one trade ($n = 1$). Even if $\bar x$ varies with time, $\,\bar x = x(t)$, it will give the right answer. The $SPY$ portfolio might be better expressed in terms of its CAGR: $$CAGR_{SPY}(t) = \displaystyle{\left(\frac{F_0 \cdot (1 + g)^t}{F_0 }\right)^{1/t}}-1 \,= \;g\; \to \bar g$$where $\,g\,$ is the long-term $SPY$ return. We could view $\,g\,$ as $SPY$'s long-term average: $CAGR_{SPY} \approx \bar g $. This historical long-term average has been a little less than $10\%$, including reinvested dividends. As such, it could be used as an approximation to make long-term estimates.

For instance, someone investing some funds $\,F_0\,$ in $SPY$ for the long term should have as expectation: $\mathsf{E}[F(t)] = F_0 \cdot (1 + \hat g)^t $ for some time horizon $\,t > 20$ years. This expectation should tend to the market average, $\hat g \to \bar g,\,$ over the long term. It should be evident to the trader that the goal is to exceed this expected market average $\hat g$ over the same time interval. De facto, this makes the game not only a long-term trading game, but one with a minimum target that has to be not just reached, but exceeded.
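
A minimal sketch of that expectation, using the roughly $10\%$ long-term average as $\hat g$; it is an estimate, not a forecast, and the starting capital below is a hypothetical placeholder.

```python
# A minimal sketch of the indexer's long-term expectation E[F(t)] = F0 * (1 + g_hat)^t.
# The 10% rate is the long-term historical average cited above (dividends reinvested).

def expected_index_value(f0: float, g_hat: float, years: float) -> float:
    return f0 * (1.0 + g_hat) ** years

F0, G_HAT = 100_000.0, 0.10        # hypothetical starting capital, historical average rate
for years in (10, 20, 30):
    print(f"E[F({years})] = ${expected_index_value(F0, G_HAT, years):,.0f}")
```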

Estimates can provide ballpark figures which can be viewed as minimum objectives to be met, even if we are still looking at an uncertain future. Still, our trading strategies have to exceed these market averages performance-wise. Otherwise, why trade, since we could buy $SPY$, or the like, and get the market average with practically no additional effort?

If we design a portfolio composed of the stocks in $SPY$ so that it tracks $SPY$, would we not get $\, \hat g\,$ as the long-term return? Meaning something close to the average long-term expected market return. To do better than $SPY$, one would need to time-slice $SPY$ in such a way that the sum of the parts is larger than the whole: $\,n \cdot \bar x > q_{SPY} \cdot (p_T - p_0)$, where $\,q_{SPY}\,$ is the quantity of $SPY$ shares held and $\,p_0,\, p_T\,$ its initial and final prices. If the slices you take are not numerous enough, or not of sufficient magnitude, to overcome $q_{SPY} \cdot (p_T - p_0)$, then you will underperform, making buying $SPY$ the better choice profit-wise.

Evidently, if you want to play this game by trading, you should play to win.

You should aim to, at the very least, exceed the long-term market averages. Otherwise, why not just buy $SPY$ and be done with it? It is a lot less work. It should be part of your requirements that, over the long haul, your trading strategy outperforms the equivalent of buying $SPY$; otherwise, again, why do it? Why trade at all? Why waste all that time just to underperform?

The point is that, presently, you do not know what $SPY$'s growth rate will turn out to be 20+ years from now. Nor do you have any means of knowing your own portfolio's future value. All you can do is make estimates, guesses of what might be: $\,\hat g.$ There are no predictive methods that can say, with any kind of probability measure, what the value of your future portfolio will be.

Nonetheless, there are still generalities that can be made, now.

One of which is: you cannot expect $\,\hat g$ to be $10\%$ next year. However, over a 20-year period, you could expect $\,\hat g \to \bar g \approx 10\%$, dividends included. Just as you could put a high probability on the assumption that the average market return will be positive, since the probability of $\,\hat g$ being positive approaches 1 as the holding period extends beyond 20 years: $\,\mathsf{P} [\,\hat g > 0\,] \to 1$. At least, it has for the past 240+ years.

We can use past numerical data to compute averages of this and that, and even say what trades such and such trading procedures would have generated. We might expect a strategy to keep doing about the same in its future, as if the averages gathered from all that past trading activity might somewhat replicate going forward. Not as a prediction, but as a general estimate of what a trading program is designed to do. And, in trading, this tends to apply since, as $\,n\,$ gets larger and larger, $\,\bar x\,$ tends to a constant. As $\,n\,$ increases, both $\,\bar n\,$ and $\,\bar x\,$ become more and more representative of the whole.

Averages

Then, what should be of real concern in trading? The trader and the investor have different perspectives on things, even if the same motions are required to play the game. One looks to the short term for trade resolution, while the other should have little interest in day-to-day fluctuations and will look closely at a company's long-term prospects. Two methods of play that require different sets of skills and a different outlook on price appreciation.

There are market dynamics at work where players might influence each other into action, either side viewing the changing price as a confirmation of their own valuation, or as a threat to their holdings. Both have to deal with "on average", and each will look at it differently.

Whatever each player does, all trades have a starting and an ending point in time. You trade or invest over a time interval. For the investor, the end time is not necessarily defined; he/she will be holding for as long as it is reasonable to do so. Whereas the trader might see a relatively close endpoint for almost every trade: the closing of the position over the short term.

Ultimately, the question will be: $\textsf{is the trading side producing more than the investment side}$?

Does the following relationship hold over some time interval $\,t\,$? Results need to be compared over the same time period. $$\mathsf{Is:} \quad F_0 + n \cdot \bar x \; > \; F_0 \cdot (1 + \hat g)^t \quad \quad ?$$Whatever you do trading, when compared to an index, it should result in: $\; n \cdot \bar x \, > \, F_0 \cdot ((1 + \hat g)^t -1) $. Meaning that the profits from all the trading activity should exceed the profits generated by the average index investor.
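
A minimal sketch of that comparison; the trade count, the average net profit per trade, and the rate used below are hypothetical placeholders.

```python
# A minimal sketch: does the trading payoff n * x_bar exceed the indexer's
# profit F0 * ((1 + g_hat)^t - 1) over the same interval?
# All inputs are hypothetical, for illustration only.

def trading_vs_indexing(f0, n, x_bar, g_hat, years):
    trading_profit = n * x_bar
    indexing_profit = f0 * ((1.0 + g_hat) ** years - 1.0)
    return trading_profit, indexing_profit

trade_p, index_p = trading_vs_indexing(f0=100_000, n=2_000, x_bar=400.0, g_hat=0.10, years=20)
print(f"trading profit : ${trade_p:,.0f}")
print(f"indexing profit: ${index_p:,.0f}")
print(f"trading wins   : {trade_p > index_p}")
```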

Otherwise, why deliberately volunteer to underperform the average indexer?

All the trader has is $\,n \cdot \bar x \,$. He might wish for something more, but whatever trading strategy he/she devises, based on whatever principles or methodology, it will always come down to those two numbers.

How trades are distributed or sequenced might be secondary. Large and small profits and losses will be added up together anyway, then averaged over the trading interval. It is the total ending amount that matters, and it will bear a dollar sign, nothing else.

As $\,n\,$ increases, the particularities of a single trade tend to disappear into the averages. That you made $\$100$ on a $\$1,000$ bet might look significant if standing alone, but not so much on a $\$1,000,000$ portfolio.

The indexer knows what to expect over the long term: $\, \mathsf{E}[F(t)] = F_0 \cdot (1 + \hat g)^t,\,$ a way of saying about the same as in the past. An expected average return $\,\hat g\,$ which can be achieved without much effort. The trader has no means to ascertain such an outlook. He knows that, whatever he might do, the outcome of all the trading activity needs to exceed the investor's long-term expected outcome. But this does not say how it will be done, only that it has to be done. And it certainly does not say that it will be done, not even remotely.

Whereas, the investor already knows, and it is by sitting tight, for a long time, on a diversified portfolio of prospering companies.

If a trader's methodology has only $\, n \cdot \bar x \,$ to work with, or work for, shouldn't all the effort be concentrated there? Finding an edge $\, \bar x \,$ that can be repeated $\, n \,$ times over a long-term trading interval, while trying to ascertain that the strategy will behave about the same going forward.

The trading methodology itself could be anything, as long as it can produce more than what the indexer, on average, is expected to get: $\; n \cdot \bar x \, > \, F_0 \cdot ((1 + \hat g)^t -1) $. If all the work involved in supervising an automated trading strategy does not produce better results than indexing, then trading was the worse scenario and a waste of time and resources.

In a typical year, there are $252$ trading days. Over a 20-year period, this gives $5,040$ tradable days. We could slice the $SPY$ time series into $5,040$ pieces to get daily price changes from close to close. Summing these price variations would give us back the original price series. If we skipped every other day, this would leave us in $SPY$ for some $2,520$ days, but it would still take 20 years to get there.
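
A minimal sketch of that slicing, on a synthetic price series rather than actual $SPY$ data: the daily changes telescope back to the full move, while keeping only every other day does not.

```python
import numpy as np

# A minimal sketch: close-to-close changes sum back to the total price move.
# The price series is synthetic (random walk), not actual SPY data.
rng = np.random.default_rng(0)
prices = 250.0 + np.cumsum(rng.normal(0.05, 2.0, size=5_040))   # ~20 years of daily closes

daily_changes = np.diff(prices)                                  # p_d - p_{d-1}
print(np.isclose(daily_changes.sum(), prices[-1] - prices[0]))   # True: the sum telescopes

# Being in the market only every other day keeps only half of those changes,
# and there is no reason the kept half should reproduce the full move.
half_exposure = daily_changes[::2].sum()
print(f"full move: {prices[-1] - prices[0]:9.2f}   every-other-day move: {half_exposure:9.2f}")
```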

However, one should not expect the same outcome as if one had stayed fully invested for the duration. The same applies if you skip every other week or every other month. You would be in $SPY$ half the time, and still have to be there for the full $5,040$ trading days, most probably producing half, profit-wise.

If you are in the market half the time, then you can only expect about half of the exposure benefits. However, in CAGR terms, it is not half: $(1 + 0.10000)^{30} \approx 2 \cdot (1 + 0.07488)^{30} $. The CAGR drops by about 2.5 percentage points, from $10.00\%$ to roughly $7.49\%$, a decline of about $25\%$ in CAGR terms, and not the $50.00\%$ decline that might naively be expected.

Therefore, you could make half as much money overall following a roughly $2.51\%$ reduction in your expected rate of return: $0.10000 - 0.07488 = 0.02512.$ This shows how sensitive a CAGR can be over time. These relatively small rate differences are compounded, and with time they can amount to something considerable.

Adding $2.512\%$ to the original $10.00\%$ example changes the picture: $(1 + 0.12512)^{30} \approx 2 \cdot (1 + 0.10000)^{30} $, thereby generating about twice as much as with $\,\hat g \,$ alone. Yet, you only added $\,2.512\%$ to $\,\hat g: \;$ $\, 0.12512 = 0.10000 + 0.02512$. It was sufficient to roughly double the outcome on the initial capital $\, F_0\,$ over the same time interval.
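
The compounding arithmetic above is easy to verify; a minimal sketch:

```python
# A minimal sketch verifying the numbers above: halving the 30-year outcome at 10%
# corresponds to roughly a 2.5-point CAGR reduction, and adding that spread back
# to 10% roughly doubles the outcome.

g, t = 0.10, 30
full = (1 + g) ** t

g_half = (0.5 * full) ** (1 / t) - 1          # rate producing half the outcome
print(f"g_half = {g_half:.5f}  (reduction of {g - g_half:.5f})")

g_plus = g + (g - g_half)                     # add the same spread above 10%
print(f"outcome ratio = {(1 + g_plus) ** t / full:.3f}  (about 2x)")
```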

This "little extra" already has a name, it is called alpha. Once added to the expected market return equation, it will give: $\, F_0 \cdot (1 + \hat g + \alpha)^t$. In this case, the alpha ($\, \alpha \,$) was $0.02512$ on the condition you stayed in the game for 30 years and found a way to generate it since it is above the long-term expected market return: $\,\hat g $.

The alpha is usually not given away the way $\,\hat g \,$ might be. The alpha ($\, \alpha \,$) has to be found, mined, and extracted. In other words, it might have to be earned.

Here is a reasonable thought. If you want more than $\,\hat g $, then you will have to work for it. You will have to generate this $\, \alpha \,$ on your own, or find someone that can do it for you. But, always hold that you can do it all yourself. Now, that was not so hard to say.

The majority of professional portfolio managers have a hard time beating long-term market averages. This implies that they do not even reach $\,\hat g $, or that their alpha is negative which would produce: $\, F_0 \cdot (1 + \hat g - \alpha)^t$.

It is what Jensen found in the late '60s, and it still applies today. That a fund manager outperforms the market next year is almost unimportant; what really counts is the end result. There is no replay here. There is a $\,t\,$ in that equation, and if it takes you 30 years to realize you underperformed, those 30 years were wasted years when you could have done better simply by buying $\,\hat g \,$ instead.

I have heard some say: "No one can do what Mr. Buffett does."

Wrong. Everyone can do that, or at least do as well. The solution is simple too: have Mr. Buffett personally manage your account by buying Berkshire Hathaway shares! You will have Mr. Buffett taking care of your retirement fund just as if it were his own. You will get exactly the same return he does. No effort and no management fees.

One would have to concede that a positive alpha is not that hard to get: $\,F_0 \cdot (1 + \hat g + \alpha)^t$. It could be as trouble-free as buying an index. In some ways, the sheer size and complexity of Berkshire Hathaway make it a market averager. Berkshire Hathaway has achieved a long-term alpha of approximately $\,10\%,\,$ giving it an average long-term CAGR of about $\,20\%.\,$ Berkshire Hathaway's initial stockholders achieved $\, F_0 \cdot (1 + 0.20)^{50}$: enough to turn $\$10,000$ into $\$91,000,000+$ over the investment interval.

However, I imagine some would still complain because it took time to get there. Then they should compare it with $\, F_0 \cdot (1 + 0.10)^{50}$, which takes the initial $\$10,000$ and produces $\$1,173,909$, again waiting 50 years to get there.
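
A quick check of those two compounding figures:

```python
# A minimal check of the compounding figures quoted above.
f0 = 10_000
print(f"20% over 50 years: ${f0 * 1.20 ** 50:,.0f}")   # roughly $91 million
print(f"10% over 50 years: ${f0 * 1.10 ** 50:,.0f}")   # about $1,173,909
```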

If buying Berkshire Hathaway stock is a solution to a higher CAGR, why are people satisfied with less? Or even worse, why go for an expected negative alpha when they could at least get $\,\hat g\, $?

To outperform, we need some positive alpha, and it is up to our trading strategy to supply it. Meaning that it is really up to us to supply it: $$\; n \cdot \bar x \, > \, F_0 \cdot (\,(1 + \hat g + \alpha \,)^t -1) $$This will require that the payoff matrix generate even more profits than expected.

A particular trading strategy cannot do more than what it was designed to do. It cannot jump to this alpha thing without being programmed to do so. In fact, modern financial literature is filled with statements that alpha, if there is any, will tend to zero ($\alpha \to 0$) over the long term. Look up, for instance, the "no free lunch" hypothesis in financial markets.

Doing better than average might require that we also outperform Mr. Buffett's long-term CAGR. It is not just a positive alpha that might be needed, but more likely an alpha greater than 10$\%$. This does make the job much more demanding. As previously stated, a trading strategy has a signature. It is designed to do things in a specific way. It has hard coded trading rules and idiosyncrasies built in.

We could scale a trading strategy, but it would require more capital and would still give the same CAGR, most probably less. Some trading strategies are not even scalable, and this by design. Any attempt at scaling them might destroy whatever potential they might have had.

One could also leverage a strategy, which is the same as leveraging the portfolio. But there too, some strategies do not support leveraging that well, or the added expenses associated with it, not to mention the added volatility and drawdowns.

To get the added alpha will require a strategy redesign, better adapted to its new mission. The code will have to reflect this perceptual change, this higher goal setting. Regardless, the strategy, whatever its composition, will still have to contend with the same two numbers: $\; n \cdot \bar x.$ Understandably, the strategy has to behave differently to reach this higher performance level.

This does not say how it should be done, but that it HAS to be done, otherwise... you might miss out. It does not say either which methodology would be better, only that you will need one, whatever it is.

Nor does it say that the alpha could be the product of a single procedure. You could add to a strategy more than one source of positive alpha: $$\; n \cdot \bar x \, > \, F_0 \cdot (\,(1 + \hat g + \alpha_1 + \alpha_2 + \alpha_3 \,)^t -1) $$Evidently, the payoff matrix, $\, n \cdot \bar x \,$ will have to adapt.

This is our advantage. We can add to our strategy design whatever we think might generate this added alpha, whatever its source. For a trader, in the end, only those two numbers will matter: $\, n \cdot \bar x.\, $ Each additional bit of positive alpha will count since this alpha is compounding over time. Which will tend to make a huge difference in the end. It is all built on the premise that your trading strategy can last, and prosper for a long time.

This gets us right back to finding some $\, n \cdot \bar x\, $ that could meet our new objectives. The following chart shows the value of $\, n \cdot \bar x\, $ for a fixed outcome: all the points on the blue line give $\$10,000,000$ in profits. This chart could be scaled, and the relationship would hold; the curve would stay the same, only the vertical scale would change.
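
Since every point on that curve delivers the same total profit, a minimal sketch of the relationship $\bar x = \$10,000,000 \,/\, n$ is enough to reproduce it:

```python
# A minimal sketch of the fixed-outcome curve: every (n, x_bar) pair below
# yields the same total profit n * x_bar = $10,000,000.
target_profit = 10_000_000

for n in (100, 1_000, 10_000, 100_000, 1_000_000):
    x_bar = target_profit / n          # required average net profit per trade
    print(f"n = {n:>9,d}  ->  x_bar = ${x_bar:>12,.2f}")

# Scaling the target only rescales x_bar; the shape of the curve stays the same.
```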

This could suggest that it might not matter that much how it is achieved, as long as it is achieved.

Any point on that chart can produce the desired outcome, any one of them. We can extend the x-axis as far as we wish; the relation would still hold. It is as if what your trading strategy is trying to do is pick its $\,n \cdot \bar x.\,$ And with a large $\,n,\,$ $\,\bar x\,$ will tend to a constant, meaning it will take big moves to nudge it even a little.

This changes the perception we might have of a trading strategy. We are looking at two numbers and the ways to affect them. If what we do in our automated trading programs does not have some impact on $\,n \cdot \bar x$, then we might as well call it cute code or something, for it is not producing a single penny.

If we could double $\,n\,$, it would evidently give $\, 2 \cdot n \cdot \bar x,\, $ should we be able to maintain $\,\bar x\,$ at the same level. The same goes should we be able to double $\,\bar x\,$: the result, $\, n \cdot 2 \cdot \bar x,\, $ would produce the same thing. However, doing both at the same time would give $\, 2 \cdot n \cdot 2 \cdot \bar x = 4 \cdot n \cdot \bar x $. And, if we could do better than doubling, it would be even better.

The task becomes finding this edge: $\, \bar x, \,$ and finding ways to repeat it as often as possible within the limited capital resources available while facing uncertainty.

Trading is a different management problem than investing. Both will adhere to the above equations. However, trading will require more effort. Hopefully, it could generate more. Or, could it really?

Trading might generate more, but on the condition that the payoff matrix achieves more than just average: $$ \displaystyle{ \sum_1^n \mathbf{\tilde H} \cdot \Delta \mathbf{P} \; - \sum_1^n \mathbf{\bar H} \cdot \Delta\mathbf{P} > 0}$$ where $\mathbf{\tilde H}$ is the projected strategy's inventory matrix and $\mathbf{\bar H}$ is the inventory matrix that delivers the expected market average over the long-term trading interval, which can serve as a benchmark.

Just averaging leads to: $\; F_0 + \sum_1^n \mathbf{\bar H} \cdot \Delta\mathbf{P} \; \geq \; F_0 \cdot (1 + \bar g)^t$. Again, the trading payoff matrix needs to outperform long-term averages.

The projected trading strategy ($\mathbf{\tilde H}$) inherits this requirement: $\; F_0 + \sum_1^n \mathbf{\tilde H} \cdot \Delta\mathbf{P} \; \geq \; F_0 \cdot (1 + \bar g)^t.\,$ Whatever its trading environment might be, it will have to exceed market averages, which are relatively easy to obtain. This means it will have to generate some alpha, otherwise it might not make it. Explicitly: $\; F_0 + \sum_1^n \mathbf{\tilde H} \cdot \Delta\mathbf{P} \; \geq \; F_0 \cdot (1 + \bar g + \alpha)^t.\,$

Transforming a trading strategy from $\mathbf{H}$ to $\mathbf{\tilde H}$ will require long-term planning. What could be useful is knowing how a trading strategy behaved in the past under normal market conditions. A simulation can provide that. Portfolio simulation software should be able to give $\,n \cdot \bar x\,$ over any chosen set of stocks and over any trading interval $\,\Delta t.$

Transforming a Trading Strategy

You design "a" stock trading strategy $\,\mathbf{H}.\,$ It does not matter at the moment what it is, only that it is there and of sufficient size. For example, take the 100 stocks in $SPX$ with daily opening or closing prices over the last 20 years. We store this data in a price matrix $\,\mathbf{P}\,$ to which might be added all the trade executed prices while keeping everything in chronological order. This will give an Excel-like table composed of at least 100 columns (stocks) having 5,040 rows (days).

To simplify things, we will let all trades be market orders at the opening price the day after a trade decision is made; in essence, an ordinary end-of-day (EOD) trading strategy. There is no loss of generality in such a construct since the trades could have been made at market at the opening bell on these highly liquid stocks.

This strategy will have an inventory matrix $\,\mathbf{H}\,$ holding $504,000$ data elements (5,040 trading days by 100 stocks).
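
A minimal sketch of that data layout, using random placeholder prices instead of the actual 20 years of opening prices:

```python
import numpy as np

# A minimal sketch of the matrices described above, with placeholder prices
# instead of the actual 20 years of SPX opening prices.
days, stocks = 5_040, 100

rng = np.random.default_rng(1)
P = 50.0 + np.cumsum(rng.normal(0.01, 1.0, size=(days, stocks)), axis=0)  # price matrix P
H = np.zeros((days, stocks))   # inventory matrix H: shares held, per day, per stock

print(P.shape, H.shape, H.size)   # (5040, 100) (5040, 100) 504000 decision points
```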

Over its 20-year trading interval, this strategy will have to decide what to do $504,000$ times. It is not that the strategy needs to make a few hundred trading decisions, it is making hundreds of thousands of trading decisions, and each one might have an impact on the outcome. Even those that say: stay put.

It is not only about getting in and out of a trade; staying in or staying out also needs to be considered. That is why, at every time period, you have to decide what to do, or what could or should be done: buy, sell, or hold your positions.

Looking back at the $SPX$ opening price matrix $\,\mathbf{P},\,$ the first thing we should notice is that it is the same for everyone. The operative words here are "the same". The matrix will have at least 5,040 rows (days) with 100 columns (stocks). Inserting rows with the prices at which trades were executed does not change any of the opening prices.

Each price in the matrix is identified by its $d, j$ indices: $p_{_{d,\,j}}$. The $SPX$ price matrix over those 20 years is a matter of historical record, and there was only one iteration of it, only one. Therefore, whatever simulation is done on this matrix, by whoever on the planet, will have to deal with the same price matrix: $\mathbf{P}.\,$ Each new day or each new trade adds a new row to the price matrix $\,\mathbf{P},\,$ and each new day is of the uncertainty type.

With that said, it should be evident that the day-to-day price difference matrix is also the same for everyone: $\Delta \mathbf{P}$. Its composition is simple: $\Delta p_{d,\,j} = p_{d,\,j} - p_{d-1,\,j}$. It is the difference in price between today and the day prior for each of the 100 stocks in the matrix.

A simple trading strategy would be: $\sum_1^n \mathbf{H} \cdot \Delta\mathbf{P} = \sum_1^n ( h_0 \cdot \mathbf{I} \cdot \Delta\mathbf{P} )\,$ where $\mathbf{{I}}$ is a matrix all filled with ones, and $h_0$ the initial position taken in each of the 100 stocks.

This strategy has a name: it is a Buy $\&$ Hold. As such, we already know its outcome: $\; F_0 + \sum_1^n h_0 \cdot \mathbf{I} \cdot \Delta\mathbf{P} \; = \; F_0 \cdot (1 + \bar g_{SPX})^t.$ If we could buy $SPX$ this way, it would generate about the same average return as $SPX$ itself. This is not a guess; it comes with a mathematical "almost surely" kind of statement.
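
A minimal sketch of that Buy $\&$ Hold payoff, again with placeholder prices, showing that $\sum h_0 \cdot \mathbf{I} \cdot \Delta\mathbf{P}$ telescopes to $h_0 \cdot (p_T - p_0)$ summed over the stocks:

```python
import numpy as np

# A minimal sketch of the Buy & Hold payoff sum(h0 * I * dP) with placeholder prices.
days, stocks, h0 = 5_040, 100, 10           # h0 shares of each stock, held throughout
rng = np.random.default_rng(2)
P = 50.0 + np.cumsum(rng.normal(0.01, 1.0, size=(days, stocks)), axis=0)

dP = np.diff(P, axis=0)                     # delta_p[d, j] = p[d, j] - p[d-1, j]
H = np.full_like(dP, h0)                    # h0 * I : constant inventory matrix

payoff = (H * dP).sum()                     # sum of H * dP over all days and stocks
buy_and_hold = h0 * (P[-1] - P[0]).sum()    # the same thing, computed directly
print(np.isclose(payoff, buy_and_hold))     # True: the sum telescopes
```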

It is not a sure thing. It is an expected outcome, since the average rate of return over the period will tend to $SPX$'s average return: $\hat g \to \bar g_{SPX}$. This portfolio would get about the same outcome as having invested in the index itself. One could do the equivalent by buying and holding $QQQ.\,$ An estimate for the long-term outcome would be: $\, F_0 \cdot (1 + \mathsf{E}[\,\bar g_{QQQ}]\,)^t \to F_0 \cdot (1 + \mathsf{E}[\,\bar g_{SPX}]\,)^t.$

The above is a one-decision portfolio. It is not designed to outperform the averages, but it will do about the same as the averages. In the majority of cases, it will end up doing better than what professionals would have been able to do. Imagine paying someone to make a single decision when, had you made it yourself, you would most probably have done better, and not paid all those management fees and expenses.

One real advantage of this scenario is that you need no one to do the job. You can do it yourself. You have one decision to make, followed by $5,039$ other decisions that can all be summed up as: wait, hold, and do nothing more. You do not have to worry about the day-to-day price fluctuations since your outlook is for those 20$^+$ years. You know your investment is compounding and should outperform some 75$\%$ of professionals over that period.

It is only if you want more than averages that you might need to take another course of action.

And this will translate into finding ways to accelerate the process by increasing your expected return $\, \mathsf{E}[\,\hat g]\,$ as in: $\, F_0 \cdot (1 + \mathsf{E}[\,\hat g]\,)^t.\,$ It will also have to be done in such a way that your expectations exceed market averages: $\, \mathsf{E}[\,\hat g]\, > \mathsf{E}[\,\bar g_{SPX}].$

A Buy $\&$ Hold strategy using $\,\mathsf{QQQ} \,$ has the following formula: $F_0 + \sum_1^n h_0 \cdot \mathbf{I} \cdot \Delta\mathbf{P}\, \to F_0 \cdot (1 + \bar g_{QQQ})^t.\,$ It is equivalent to: $F_0 + h_0 \cdot (p_t - p_0).\, $ You can make a projection, a guess, as to $\,p_t$'s expected value in 20 years time: $\,\hat p_t = p_0 \cdot (1 + \hat g_{QQQ})^t$, using the long-term historical average. This would be the same as if investing in an index tracking fund. $\,\mathsf{QQQ} \,$ is not the only available low-cost index tracker.

Yes, there is a big difference between what is expected and what will be. Nonetheless, we can make the expected call based on historical data. The one-year standard deviation based on the same historical data is about $16\%$, and, on average, it applies to the final year of the interval as well.

Therefore we could rewrite the estimate as: $\,\hat p_t = (1 \pm 0.16) \cdot p_0 \cdot (1 + \hat g_{QQQ})^t$ to be within one standard deviation. Or, within 2 standard deviations: $\,\hat p_t = (1 \pm 0.32) \cdot p_0 \cdot (1 + \hat g_{QQQ})^t$. This starts to be quite a wide range.

For every 1M dollars of initial capital ($F_0$), we have, within one standard deviation: $\, (1 - 0.16) \cdot (1 + 0.10)^{20} < F(t)/F_0 < (1 + 0.16) \cdot (1 + 0.10)^{20}$, which makes the estimate $\,5.6M < F(t) < 7.8M$. In CAGR terms, it is the equivalent of having: $ 0.09045 < \hat g_{QQQ} < 0.10819$. For two standard deviations, it would give: $ 0.07899 < \hat g_{QQQ} < 0.11537$, which is the same as having: $4.6M < F(t) < 8.9M$.
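
These ranges are easy to reproduce; a minimal sketch:

```python
# A minimal sketch reproducing the ranges above: F0 = $1M, g_hat = 10%,
# t = 20 years, one-year standard deviation of about 16%.
F0, g_hat, t, sd = 1_000_000, 0.10, 20, 0.16

growth = (1 + g_hat) ** t
for k in (1, 2):                                    # 1 and 2 standard deviations
    lo, hi = (1 - k * sd) * F0 * growth, (1 + k * sd) * F0 * growth
    lo_cagr, hi_cagr = (lo / F0) ** (1 / t) - 1, (hi / F0) ** (1 / t) - 1
    print(f"{k} sd: ${lo:,.0f} .. ${hi:,.0f}   CAGR {lo_cagr:.4%} .. {hi_cagr:.4%}")
```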

As you add zeros to the initial capital, all it does is add zeros to the outcome $F(t)$. Starting with 1B dollars (adding 3 zeros) puts the width of the 2-standard-deviation range in the vicinity of 4.3B. Therefore, some effort should be made to be on the upper end of this estimate. Evidently, this goes on the premise that there was 1B to start with and that your trading strategy could show some long-term positive alpha. The high-end estimate would require an alpha of 0.01537, or 1.537$\%$. Not that big a task.

Subtracting zeros, like starting with a $\$$10k stake (subtracting 2 zeros), gives a projected one-standard-deviation range of about $\$$21.5k. The 2-standard-deviation outcome after those 20 years would be something like: $\$45.7k < F(t) < \$88.8k$.

I find it a doomed scenario to start with a $\$$10k stake: putting in all that time (20 years) to get less than $\$$88k. In 20 years' time, $\$$88k will not be worth that much. It should be a foregone conclusion that starting too small is a complete waste of time and resources.

It is probably why most poor people can not make it out of poverty even if they try.

In all these cases, the expected CAGR was the same, executed over the same time interval. The only thing that changed was $F_0$, the initial stake. Thus, in the formula $ F_0 \cdot (1 + \bar g_{QQQ})^t,\,$ $F_0$ occupies an important role. There is a minimum required to make it worthwhile. Having a positive return over a 20-year period is not enough. Your portfolio needs to exceed expectations, and you should start with the highest stake possible, certainly one reasonably larger than $\$$10k, in order to make the effort worthwhile.

Grouping a number of small accounts together does not change a thing. It could reduce costs, but not improve returns. And when you take out your share of the account, you will get about the same as if you had done it yourself and obtained the expected market return: $\, \hat g.$ What you need is to do better than the expected averages. If you cannot get at least $\$$100k as a starting point, my suggestion would be: forget the whole thing and buy an index fund. Then again, you could go out there and find more financing.

$©$ May 2018 Guy R. Fleury