Today’s topic is optimization. Some system developers don’t believe in optimization. First, I’m going to show you why you should really learn how to optimize, and second, I’m going to demonstrate that even if you don’t believe in optimization, “not optimizing” is not the solution. “Not optimizing” is not the opposite of optimization!
First, we’re going to assume you want supernormal returns. Let’s think about what that implies for both the strategy and the market. If a strategy can deliver supernormal returns without optimization, that suggests the strategy is robust. That’s a good thing! But what does it say about the market? It suggests the market is not very efficient, and we know that’s not the typical case.
The “opposite of optimization” is not “not optimizing”! Merely looking at a single performance report won’t tell you how robust a strategy is or what its typical return is likely to be. When you “don’t optimize,” you are really making an arbitrary decision, or a random one: you are picking some random values, and that tells you nothing conclusive about what the typical result looks like. Now, if you were to randomly pick values for your variables and do that over and over again, you’d start to get some real insight into the strategy. That’s the idea behind Monte Carlo methods. An optimization algorithm is just a more intelligent take on the same theme: instead of selecting values at random, the optimizer steps through the reasonable values one by one (or uses genetic or other advanced methods), and goes one step further by highlighting the very best input values for you. You see, the true opposite of optimization is diversification. Diversification, in this case, means trading multiple combinations of parameters. Instead of trading a strategy with optimal parameters at greater size, you could trade a basket or “class” of similar strategies, all running with slightly different parameters. In a future post, I’ll share an advanced technique for achieving both a higher degree of optimization and diversification, without requiring a larger account.
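The random-sampling idea can be sketched in a few lines. Here `backtest` is a hypothetical stand-in for whatever function returns a strategy’s net profit for one parameter value; the point is only that sampling it repeatedly tells you far more than one arbitrary pick would:

```python
import random
import statistics

def backtest(lookback):
    # Hypothetical stand-in for a real backtest: returns net profit ($)
    # for one parameter value, here faked as a noisy peaked curve.
    return 25000 - 1000 * (lookback - 10) ** 2 + random.gauss(0, 2000)

random.seed(42)

# Monte Carlo: sample the parameter at random many times instead of
# trusting a single arbitrary choice.
samples = [backtest(random.uniform(5, 15)) for _ in range(500)]

print(f"typical (median) return: ${statistics.median(samples):,.0f}")
print(f"spread (stdev):          ${statistics.stdev(samples):,.0f}")
```

The median and spread of the samples describe the strategy’s typical behavior, which no single backtest report can.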
But here’s a simple way to optimize sensibly. Run your optimization and look over all the returns. Do they look random, or do you see clear relationships? Look at the values above and below the best return. Are they still reasonable? This is called sensitivity analysis, and it is important. Next, take the returns from, say, 25% above and below your optimal value and average them to get a more realistic outlook for your system. You can also average the top 25% of all returns and the bottom 25% of all considered returns. You can do this for drawdowns and other system measures too, and you can compute the min/max of these values, which provides more information than a single performance report. By going through these processes, you’ll build up a more complete perspective on your system’s performance metrics.
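The top/bottom-25% averaging is just sorting and slicing. A minimal sketch, assuming `results` is a hypothetical optimization output mapping each tested input to its backtested return:

```python
# Hypothetical optimization output: parameter value -> net return ($)
results = {12: 18000, 11: 23000, 10: 29000, 9: 17000, 8: 15000}

returns = sorted(results.values(), reverse=True)
quarter = max(1, len(returns) // 4)  # top/bottom 25%, at least one value

top_avg = sum(returns[:quarter]) / quarter
bottom_avg = sum(returns[-quarter:]) / quarter

print(f"best return:        ${max(returns):,}")
print(f"avg of top 25%:     ${top_avg:,.0f}")
print(f"avg of bottom 25%:  ${bottom_avg:,.0f}")
print(f"worst return:       ${min(returns):,}")
```

The same sorting-and-slicing applies unchanged to drawdowns or any other per-run metric.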
Examples follow:
Sensitivity analysis reveals that the returns are random. Don’t trade this system.
| Input | Return ($) |
|---|---|
| 12 | -3,000 |
| 11 | 2,000 |
| 10 | 25,000 |
| 9 | -8,000 |
| 8 | -15,000 |
Sensitivity analysis looks good. Even though the best return was $29,000, the average of all the returns was only about $20,000. Going forward, we might hope to get $29,000 over the same length of time, but we know that $20,000 is a more realistic optimistic return.
| Input | Return ($) |
|---|---|
| 12 | 18,000 |
| 11 | 23,000 |
| 10 | 29,000 |
| 9 | 17,000 |
| 8 | 15,000 |
What’s a pessimistic return for the strategy below over the same length of time? The tricky part is determining what a “reasonable” value is for a parameter, but let’s assume all the values shown are reasonable. We take the average of the bottom third of the table (the last three rows), which yields approximately $400. One benefit of analyzing your systems this way is that it gives you a more complete and realistic view.
Best Case Return: $24,000 (MAX)
Optimistic/Target Return: $19,000 (Average of top 3rd)
Average: $10,000 (Average)
Pessimistic: $400 (Average of bottom 3rd)
Worst Case: -$1,500 (MIN)
All values are rounded to avoid giving a false sense of precision.
| Input | Return ($) |
|---|---|
| 10 | 24,000 |
| 9 | 17,000 |
| 8 | 15,000 |
| 7 | 12,000 |
| 6 | 10,000 |
| 5 | 11,000 |
| 4 | 2,500 |
| 3 | -1,500 |
| 2 | -700 |
| 1 | 3,500 |
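The five summary figures can be recomputed directly from the table. A short sketch (note the optimistic and pessimistic numbers average the first and last three table rows, which, after rounding, match the figures above):

```python
# Table of (input, return) pairs from the example above.
rows = [(10, 24000), (9, 17000), (8, 15000), (7, 12000), (6, 10000),
        (5, 11000), (4, 2500), (3, -1500), (2, -700), (1, 3500)]

returns = [r for _, r in rows]
third = len(rows) // 3  # top/bottom third of the table rows

best = max(returns)
optimistic = sum(returns[:third]) / third    # first three rows
average = sum(returns) / len(returns)
pessimistic = sum(returns[-third:]) / third  # last three rows
worst = min(returns)

print(f"best case:   ${best:,}")           # $24,000
print(f"optimistic:  ${optimistic:,.0f}")  # ~$18,667, rounded to $19,000
print(f"average:     ${average:,.0f}")     # ~$9,280, rounded to $10,000
print(f"pessimistic: ${pessimistic:,.0f}") # ~$433, rounded to $400
print(f"worst case:  ${worst:,}")          # -$1,500
```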
-- Curtis from blog Beyond Backtesting
The problem with the ‘sensitivity test’ you describe is that it doesn’t tell you anything about the distribution of the results around the optimal value.
For example, the average return in both these cases is the same:
System 1:

1: -10
2: 23
3: 4
4: 19
5: -19
6: 32
7: 8
8: 11
9: -16
10: 30

System 2:

1: -19
2: -10
3: 4
4: 23
5: 30
6: 32
7: 19
8: 11
9: 8
10: -16
The first system you wouldn’t touch – the second looks more promising.
The returns of the second are distributed in a bell-curve-like manner about the optimal value. In fact, if you had enough values (let’s say 200 instead of 10), you could start making some inferences about the probabilities of the system performing at a certain rate of return.
With the first system, it’s pot-luck whether you get a good return or a poor one.