CSS Wave Theory Building Blocks Part 2
In Part 1 we discussed some of the elements required for a testable framework for wave theories such as Elliott Wave. In this post we will introduce a very simple framework for testing and research. One of the most important aspects of a good classification system is the ability to categorize using broad general principles and to compress information into manageable chunks. This information compression is what permits humans to excel at certain tasks that machines struggle with: game-theory applications that involve incomplete information, and abstract pattern recognition in the presence of random noise.
There are two major components that describe a movement in time series data: 1) price and 2) time. Other elements will be discussed in Part 3. For the most part, however, a move in the market from its starting point can be described by its duration (the length of time since the move started) and its magnitude (the cumulative gain/loss in volatility units since the move started). As discussed in Part 1, it is important to employ normalization in order to improve the ability to generalize out of sample. Normalization allows us to neatly sort moves into evenly distributed categories, each containing a range of examples, instead of having to search for a highly specific example that may or may not have occurred in the past. I would recommend a lookback of at least 7-15 years for building the distribution used for categorization.

As a consequence, waves can be coded based on the schematic below using a two-part descriptor: (time code, magnitude code). Once we can neatly categorize waves, we can employ classical data-mining techniques to find relevant patterns. For example, a move in the market can be described as T6, M6, which stands for a very large move that has lasted a very long time. There are more nuances to this topic, and I will discuss them in Part 3.
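To make the idea concrete, here is a minimal sketch of the coding scheme described above: measure a move's duration in bars and its magnitude in volatility units, then map each against an empirical reference distribution (built from a long lookback) into evenly populated percentile bins. The function names, the choice of seven bins, and the percentile-rank binning rule are my own illustrative assumptions, not a specification from the post.

```python
import numpy as np

def categorize(value, reference, n_bins=7):
    """Map a value to a 1..n_bins code via its percentile rank
    in an empirical reference distribution (assumed binning rule)."""
    pct = np.mean(reference <= value)            # percentile rank in [0, 1]
    return min(int(pct * n_bins) + 1, n_bins)    # evenly spaced percentile bins

def wave_code(prices, start, end, ref_durations, ref_magnitudes, n_bins=7):
    """Describe the move from bar `start` to bar `end` as a
    (time code, magnitude code) string, e.g. 'T6, M6'."""
    duration = end - start                       # duration in bars
    rets = np.diff(np.log(prices))
    # Volatility estimated from data before the move starts, so the
    # magnitude is expressed in historical volatility units.
    vol = np.std(rets[:start]) if start > 1 else np.std(rets)
    magnitude = (np.log(prices[end]) - np.log(prices[start])) / vol
    t_code = categorize(duration, ref_durations, n_bins)
    m_code = categorize(abs(magnitude), ref_magnitudes, n_bins)
    return f"T{t_code}, M{m_code}"

# Example with synthetic data; in practice the reference distributions
# would come from historical waves over a 7-15 year lookback.
rng = np.random.default_rng(0)
prices = np.cumprod(1 + rng.normal(0.0005, 0.01, 3000))
ref_durations = np.arange(1, 201)                # placeholder duration history
ref_magnitudes = np.linspace(0, 6, 200)          # placeholder magnitude history
print(wave_code(prices, 2800, 2950, ref_durations, ref_magnitudes))
```

Because the codes are percentile bins rather than raw values, the same descriptor covers a range of similar historical moves, which is exactly what makes classical data-mining across past waves feasible.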