Recap of the Optimization Sequence
In the last post I described the sequence of actions required to generate new permutations, run the optimizer, and select the best permutation. This is repeated over and over until you decide the performance can't be improved any further. Before going any further, I would like to recap the iterative optimization process.
In the spreadsheet, start by clicking on the manual calculate button. This generates new randomized data from the reference weights. Then block-select the randomized data (8 nodes x 10 iterations) and copy it.
At Portfolio123, select Edit Permutations if you are still on the last optimizer results page. Otherwise skip this step.
Click on the small plus sign button to modify the ranking system weights.
Block-select the permutations that are already there, right-click, and overwrite the existing permutations with the new iterations. Save the new permutations.
Generate the permutations.
Click on Run and then click on Toggle Charts. Wait for the optimizer to process all of the permutations.
Choose what you consider to be the best permutation. In this example, I chose permutation #8.
Now return to the spreadsheet, block select iteration #8 from the randomized data array, and copy it.
Paste iteration #8 into the reference node array.
This completes one optimization cycle.
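The cycle above can be sketched in code. This is a hypothetical reconstruction, not the spreadsheet's actual formula: I'm assuming each randomized weight is the reference weight perturbed by a uniform random factor and then renormalized so every iteration sums to 100, and `evaluate()` is a stand-in for manually running the Portfolio123 optimizer and reading off the top-bucket performance.

```python
import random

def randomize_weights(reference, iterations=10, sensitivity=0.5):
    """Build the randomized data array (one row per iteration).

    Assumed behavior of the spreadsheet's manual-calculate step:
    perturb each reference weight by up to +/- `sensitivity`
    and renormalize so each row of weights sums to 100.
    """
    table = []
    for _ in range(iterations):
        raw = [w * random.uniform(1 - sensitivity, 1 + sensitivity)
               for w in reference]
        total = sum(raw)
        table.append([100.0 * w / total for w in raw])
    return table

def optimization_cycle(reference, evaluate, iterations=10):
    """One cycle: randomize, score every permutation, then promote
    the best iteration's weights to become the new reference weights."""
    randomized = randomize_weights(reference, iterations)
    scores = [evaluate(weights) for weights in randomized]
    best = scores.index(max(scores))   # e.g. permutation #8
    return randomized[best]            # paste back as the new reference
```

In practice `evaluate` is you, eyeballing the optimizer charts; the cycle repeats until the top bucket stops improving.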
Hitting The Brick Wall
This seems pretty easy, although a bit monotonous. But you will find that it doesn't take long before you run into a brick wall: the performance of the top bucket stops increasing no matter how many times you generate new permutations.
At this point you may need to shake things up a little. It is like being in a maze: you run into a dead end and have to backtrack before you can go forward again.
Try using a different tactic for deciding on the "best" permutation. I find that selecting the lowest first bucket while ignoring the top bucket for one or two iterations often works. You might also consider upping the sensitivity to 70% until you find a new "best" permutation.
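As a rough intuition for why raising the sensitivity helps escape a dead end, here is a small simulation. The interpretation of sensitivity as an equal plus-or-minus percentage band around the reference weight is my assumption, not the spreadsheet's documented behavior:

```python
import random
import statistics

def perturb(weight, sensitivity):
    # One randomized weight drawn around a reference value, assuming
    # sensitivity sets an equal +/- percentage band around it.
    return weight * random.uniform(1 - sensitivity, 1 + sensitivity)

random.seed(42)
narrow = [perturb(10.0, 0.30) for _ in range(5000)]
wide = [perturb(10.0, 0.70) for _ in range(5000)]

# Higher sensitivity spreads the candidate weights over a wider band,
# so the optimizer samples permutations farther from the current
# reference -- more chances to step out of a local dead end.
print(statistics.stdev(narrow) < statistics.stdev(wide))
```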
One of the interesting aspects of this optimization process is that you can eliminate nodes (stock factors) without sacrificing performance. Think of this as pruning a plant, cutting out the dead leaves and branches so the plant's energy can be focused on the healthy parts.
The time to prune is when a reference weight decreases to about 1 (from the original 10).
As you can see from the example above, node 5 has a reference weight of 0.1763. It is time to eliminate this node. This is done by zeroing out the reference weight and the corresponding randomized data. Note that Nodes 2 and 3 are almost ready to be pruned as well, but not this time around.
Once the node is zeroed out, continue with the same steps as before. The optimizer can handle the zero weights.
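The pruning step itself is mechanical and can be sketched as follows. Only node 5's weight of 0.1763 comes from the example above; the other weights here are illustrative values I made up:

```python
def prune_node(reference, randomized, node_index):
    """Zero out a nearly-dead node in the reference weights and in
    every row of the randomized data array. The optimizer tolerates
    zero weights, so the remaining steps of the cycle are unchanged."""
    reference = list(reference)
    reference[node_index] = 0.0
    pruned = [
        [0.0 if i == node_index else w for i, w in enumerate(row)]
        for row in randomized
    ]
    return reference, pruned

# Illustrative weights; only node 5's 0.1763 is from the example above.
reference = [12.0, 1.5, 1.8, 20.0, 0.1763, 25.0, 21.0, 18.5]
randomized = [[w * 1.1 for w in reference], [w * 0.9 for w in reference]]
reference, randomized = prune_node(reference, randomized, node_index=4)
```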
Pruning nodes often results in a performance setback, with the last bucket dropping in value, but the loss is usually recovered fairly quickly over subsequent optimization iterations.
When you do achieve a new high for the last bucket, it is a good idea to set aside the reference weights for future use. Simply copy and paste them into an unused section of the spreadsheet. You can always come back to these numbers later if need be.
So that is the optimization process. In the next post I'll outline how to finish it off, create the final ranking system and, as a bonus, create ten (or more) noisy ranking systems for robustness testing of future models.