Method for determining the average deviation of data values over time?


I've recorded my weight every day since 1 January 2012 and plotted the data in an Excel spreadsheet.

For convenience, I've set the minimum and maximum values on the y-axis to the weights that correspond with the underweight and obese BMI cutoff values for a person of my height. The resulting graphs display some surprisingly consistent trends; a quick visual inspection of the data suggests that my weight on any given day typically differs by less than 1 kg from weights recorded on surrounding days.

I'm wondering whether there is a more precise or standard way to determine the average deviation of individual data points (from the surrounding data) in a data set whose overall trend cannot be accurately represented by any of the typical linear/exponential/power models. (I've considered splitting the data set into smaller pieces and analyzing those using regressions, but such a method seems rather inefficient and arbitrary.)

Yes. If your data points are $x_1,x_2,\dots,x_n$, you can compute $n-1$ new data points $y_1,y_2,\dots,y_{n-1}$, which are the absolute differences between consecutive $x_i$'s. Specifically, let

$$y_i=|x_i-x_{i+1}|$$

Now you can simply compute the average of these $y_i$ to find the average difference from one day to the next (you can also compute their standard deviation, etc., for more information about how these day-to-day differences vary).
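As a quick sketch, the computation above might look like this in Python; the weight values here are made up purely for illustration:

```python
# Consecutive-difference approach: y_i = |x_i - x_{i+1}|
# (weights below are illustrative placeholder values, in kg)
weights = [82.4, 82.1, 82.6, 82.3, 81.9, 82.0, 81.5]

# Absolute differences between consecutive days
diffs = [abs(a - b) for a, b in zip(weights, weights[1:])]

# Average day-to-day deviation
mean_diff = sum(diffs) / len(diffs)

# Standard deviation of the differences, for a sense of their spread
var = sum((d - mean_diff) ** 2 for d in diffs) / len(diffs)
std_diff = var ** 0.5

print(f"mean |difference| = {mean_diff:.3f} kg, std = {std_diff:.3f} kg")
```

Because this looks only at adjacent points, it measures local day-to-day variation without needing any global model (linear, exponential, or otherwise) for the overall trend.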