# Time series—looking for a needle in a haystack

We continue to share key insights into time series analysis methods. This time, let me outline another popular tool—the autocorrelation function (ACF). Unlike the smoothing methods highlighted in my previous post, the ACF approaches a series from a different angle. Below we will also illustrate how autocorrelation is applied to specific problems.

### Autocorrelation

We can easily construct the autocorrelation function. To do so, we need to:

- Take two copies of the given series.
- For each copy, subtract the average value of the series from each value in the series.
- Align copies. Multiply together each pair of values from both copies that correspond to the same point in time.
- Sum up the products. The result (not yet normalized) is the correlation coefficient *corr(0)* for *lag 0*.
- Shift the copies one time step relative to each other and repeat the calculation to obtain the result for lag 1.
- Repeat the calculation for the full length of the series, increasing the lag each time.

The resulting set of correlation coefficients for different lags is the autocorrelation function. Finally, all coefficients are divided by *corr(0)* to normalize the function so that the coefficient for *lag 0* equals 1.
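As a minimal illustration, the steps above can be sketched in plain Python (the function name `acf` is my own):

```python
# A minimal sketch of the recipe above in plain Python.
# acf(series) returns the correlation coefficient for every lag,
# normalized so that the coefficient for lag 0 equals 1.

def acf(series):
    n = len(series)
    mean = sum(series) / n
    centered = [x - mean for x in series]      # subtract the average value
    corr = []
    for lag in range(n):
        # multiply the pairs that still overlap after shifting, then sum up
        corr.append(sum(centered[i] * centered[i + lag] for i in range(n - lag)))
    return [c / corr[0] for c in corr]         # normalize by corr(0)
```

For an alternating series such as `[1, -1, 1, -1]`, this yields `[1.0, -0.75, 0.5, -0.25]`: strong negative correlation at lag 1, positive again at lag 2.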

Concisely, it can be written as follows:
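The formula that belonged here appears to have been lost in conversion; spelled out, the steps above amount to the standard normalized autocorrelation (a reconstruction, using the usual notation with series values *x*, mean *x̄*, and lag *k*):

```latex
\operatorname{corr}(k)
  = \frac{\sum_{i=1}^{n-k} (x_i - \bar{x})\,(x_{i+k} - \bar{x})}
         {\sum_{i=1}^{n} (x_i - \bar{x})^2},
  \qquad k = 0, 1, \dots, n-1
```

so that *corr(0)* = 1 by construction.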

How do we interpret the ACF? At lag 0 the two copies are fully aligned, and the correlation equals 1. As the lag grows, the copies go out of sync and the correlation falls. The rate of decay shows how well the series retains its previous values: if the correlation falls rapidly with increasing lag, the series “forgets” its past quickly; if it falls slowly, we are dealing with a process that is relatively stable over time. The function may also decrease rapidly and then rise again, forming one or more peaks. This means the copies line up again when shifted relative to each other by a sufficiently large lag—in other words, the series exhibits periodicity (or seasonality). The lag at which the peak of the autocorrelation function occurs corresponds to the period.

That is, the function describes the similarity between the two series as a function of time lag between them.

Using autocorrelation, let’s try to determine a period for our first series, which describes the volume of passenger air traffic (http://robjhyndman.com/tsdldata/data/airpass.dat source: Time Series Data Library, R. J. Hyndman). Note that it makes sense to apply autocorrelation only to series that have no trend and whose average value is zero. If a series has a trend and a non-zero average (for example, if all its values are large and grow constantly, as in our case), the autocorrelation function will pick up these peculiarities as the “memory” of the series and show correlation at almost all lags.

There are two ways to remove the trend from our data:

- Detrend the data by subtracting the trend component from the data.
- Instead of the given series, build a series of differences of neighboring data points.

It’s quite simple to subtract the trend, but first we need to estimate it. Building a series of differences *x*<sub>*i*+1</sub> − *x*<sub>*i*</sub> also removes the trend from the data. Once the trend is removed, we subtract the average value point by point.
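Both options can be sketched in a few lines of plain Python (the helper names are mine):

```python
# Two ways to remove a linear trend, as described above.

def difference(series):
    # Option 2: the series of differences x[i+1] - x[i].
    return [series[i + 1] - series[i] for i in range(len(series) - 1)]

def detrend(series):
    # Option 1: fit a least-squares line and subtract it point by point.
    # The residuals of a least-squares fit automatically have zero mean.
    n = len(series)
    t_mean = (n - 1) / 2
    x_mean = sum(series) / n
    slope = (sum((i - t_mean) * (x - x_mean) for i, x in enumerate(series))
             / sum((i - t_mean) ** 2 for i in range(n)))
    intercept = x_mean - slope * t_mean
    return [x - (intercept + slope * i) for i, x in enumerate(series)]
```

For a perfectly linear series such as `[1, 3, 5, 7]`, differencing gives the constant series `[2, 2, 2]`, and detrending gives a series of zeros.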

We can do all these manipulations in the Prognoz Platform using a smart tool for series calculation. The Series Calculator lets us handle series as vectors of values: add and subtract them, add constants, multiply by coefficients, and apply various calculation methods. The beauty of the Series Calculator is that we can combine functions so that, after these nondestructive transformations, the given series remains intact.

Let’s estimate a linear trend in our series, and then subtract it. To do so, let’s take the series of international airline passenger traffic that we’ve used earlier. Using the Series Calculator of the Time Series Analysis tool, build the expression as shown below:

First, use the Extrapolate function to calculate a series that has a linear trend, then subtract the trend from the given series. The Fitted function returns the trend component, while the Extrapolate function additionally returns a number of residuals, as many other methods do. Below I have provided charts for the given series, the trend, and the detrended series.

To estimate the average value of the resulting series, let’s invoke the Statistics command:

That is, after subtracting the trend, the average value of our series is effectively zero.

Now we can calculate the autocorrelation function. To do this, we’ll use another tool of the Prognoz Platform—the Modeling & Forecasting tool—since the autocorrelation function is not accessible from the Time Series Analysis tool. In the Modeling & Forecasting section, invoke the Descriptive Statistics command, choose the ACF option, specify the series for which the calculation is required, and select linear detrending as the desired series transformation. We’ll get the following function after the calculation:

As we can see, the function falls rapidly, and the highest peak is at lag 12. This means that the series has a periodicity of 12 months. For instance, if the analysts at an airline company observed a similar series of passenger air traffic and detected a 12-month periodicity, they could conclude that the distribution of passengers by month in the coming year would match that of the current year. This, in turn, would affect the costs and revenues of the airport or airline in each period. With exponential smoothing, it is possible to build an even more accurate forecast. Note that the methods are quite simple.
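The same peak-hunting can be sketched on a synthetic series (a sketch only, with my own helper names; the real calculation above was done in the Modeling & Forecasting tool):

```python
# Sketch: find the seasonal period as the lag of the highest ACF peak
# (lag 0 excluded), demonstrated on a synthetic zero-mean monthly series.
import math

def acf(series):
    n = len(series)
    mean = sum(series) / n
    c = [x - mean for x in series]
    corr = [sum(c[i] * c[i + k] for i in range(n - k)) for k in range(n)]
    return [v / corr[0] for v in corr]

def seasonal_period(series):
    r = acf(series)
    lags = range(1, len(r) // 2)   # ignore the noisy longest lags
    return max(lags, key=lambda k: r[k])

# ten years of a clean 12-step cycle
monthly = [math.sin(2 * math.pi * i / 12) for i in range(120)]
```

Here `seasonal_period(monthly)` returns 12, mirroring the 12-month peak found above.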

### Looking for a needle in a haystack

The autocorrelation concept can also be used to look for a needle in a haystack. Suppose our needle is a particular audio fragment, and the haystack is a long track containing the desired piece somewhere inside.

An audio signal can be described as amplitude changing over time. In other words, it is also a time series, only the amplitude values are read at a very high frequency: tens or even hundreds of thousands of values per second. Suppose we have two audio fragments *u* and *v*:

*u*: 1, -2, 3, 5, -6, 3, 1, 0, 6, 8

*v*: 2, -1, 2, 3, 0, 3, 4, 2, 3, 1

The easy way to compare them is to calculate their scalar product: multiply the values pairwise and sum the products. The more similar the two vectors are, the greater the product will be; that is, nearly identical audio fragments will yield a greater value than clearly different ones.

So, we can compare the desired fragment with the full track and get the scalar product. After that, we can shift the fragments relative to one another and recalculate the product.

track: 1, 4, 3, 6, -7, 2, -1, -2, 4, 5, 7, 8, 0, -2, -1, 3, 4, 0, 4, 6, 7, 8

fragment: 2, 0, 1, -5, 6, -4, 3, 2, 1, 9, 0

The position corresponding to the greatest value denotes the start of the fragment that is most similar to the desired one.
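A direct implementation of this sliding comparison (a sketch; `best_match` is my own name):

```python
# Slide the fragment along the track; at each offset take the scalar
# product of the overlapping values, and keep the offset with the
# greatest product -- that is where the track most resembles the fragment.

def best_match(track, fragment):
    m = len(fragment)
    best_offset, best_score = 0, float("-inf")
    for offset in range(len(track) - m + 1):
        score = sum(track[offset + j] * fragment[j] for j in range(m))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset, best_score
```

For example, if the fragment `[5, -6, 7, 8]` is embedded in the track `[1, 0, -1, 5, -6, 7, 8, 0, 1]`, the search returns offset 3 with score 174—exactly where the fragment sits.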

Now suppose that our haystack contains millions of samples and our needle tens of thousands. That means millions of scalar products to calculate, which is too resource-intensive to do head-on. There are, however, methods (based on the fast Fourier transform) that compute all these products at once, though they are rather sophisticated.

### Autocorrelation and modern pop music

An interesting example of autocorrelation at work is correcting a singer’s voice with the AutoTune algorithm. The idea is that, when we hear music and singing, we perceive the sound as harmonious if the tones of all instruments and voices coincide. Unskilled singers, however, often miss the pitch and sing out of tune. AutoTune software is used to reduce the number of takes when recording vocals: it corrects the pitch of a singer’s voice to the nearest semitone (the interval between two adjacent keys on a keyboard, such as a piano) almost imperceptibly. Some singers use AutoTune in live performances as well.

In order to correct the output voice tone, we must first calculate the pitch. To do this, we use one of these two options:

- Follow the needle-in-a-haystack principle and compare the sound of the voice with each of the notes in turn. This method is not as fast as we would like.
- Use autocorrelation to calculate the period of vibration, and from the period, the pitch. This is the method used in commercial products.
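As a rough illustration of the second option (a sketch only; real pitch trackers are considerably more refined, and the function name is mine), the period of vibration can be read off the autocorrelation of the waveform:

```python
# Sketch: estimate pitch from the autocorrelation of an audio buffer.
# The lag of the strongest peak after the initial decline is the period
# of vibration; pitch = sample_rate / period.
import math

def detect_pitch(samples, sample_rate):
    n = len(samples)
    corr = [sum(samples[i] * samples[i + k] for i in range(n - k))
            for k in range(n // 2)]
    k = 1
    while k < len(corr) and corr[k] > 0:   # skip past the peak around lag 0
        k += 1
    if k == len(corr):
        return None                        # no periodicity found
    # the period is the lag of the highest remaining peak
    period = max(range(k, len(corr)), key=lambda j: corr[j])
    return sample_rate / period

# demo: a pure 100 Hz tone sampled at 8000 Hz (period = 80 samples)
tone = [math.sin(2 * math.pi * 100 * i / 8000) for i in range(400)]
```

On this synthetic tone, `detect_pitch(tone, 8000)` recovers 100.0 Hz from just 400 samples, i.e. a twentieth of a second of audio.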

One vendor of such equipment states in its specification that the algorithm can determine the period of vibration from just a couple of cycles of the tone—within about a hundredth of a second, before the audience has even begun to perceive the note. The accuracy of pitch determination reaches 0.0001 Hz, and the error of correction to the nearest semitone does not exceed 1%. That is very good performance, to my mind.

Finally, I’d like you to listen to the first commercially released song to use AutoTune, which Cher put out in 1998. (Cher’s publicists claim that she used AutoTune for artistic effect, not to correct performance mistakes.) In the passages where the pitch is modulated rapidly, the voice acquires an unnatural, metallic quality; the effect can be heard repeatedly during the last half-minute. This is an extreme use of the algorithm: typically it works imperceptibly. It’s up to you to judge whether Cher succeeded, so check it out!