Determining Significance in Trends


I am responsible for metrics reporting within our IT organization. This involves monthly tracking of various metrics - for this post I will use one example: Incident Count (a.k.a. Ticket Count). Each metric has a monthly count and a yearly average (the average of all of the months YTD). Here is a real-life example:

Jan 2018: 4881

Feb 2018: 4502

Mar 2018: 5255

Apr 2018: 5310

May 2018: 5350

Jun 2018: 4576

Jul 2018: 4999

2018 YTD Monthly Avg: 4982
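For concreteness, here is the same series as an R vector, with the YTD average computed the way our report does (a simple mean of the months to date):

```r
# Monthly incident counts, Jan-Jul 2018
incidents <- c(4881, 4502, 5255, 5310, 5350, 4576, 4999)

# YTD monthly average: mean of all months reported so far
ytd_avg <- round(mean(incidents))
ytd_avg
# 4982
```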

We want to add some up/down arrow indicators to show the trend direction. My boss has asked that these indicate some sort of significant trend. In other words, he doesn't want an arrow showing an upward trend just because we went from 4,999 incidents one month to 5,000 the next. I thought about using +/- one standard deviation, or maybe a certain percent change, but I keep thinking there is a better way. I'd like to use the same methodology across all of our metrics, so the stats/math do not change but still work for metrics with any magnitude of count or variance.
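Here is a minimal sketch of the standard-deviation idea I mentioned, in R. The function name and the 1-SD threshold are my own placeholder choices, not a settled method - it just flags the latest month only when it falls more than k standard deviations away from the mean of the earlier months:

```r
# Sketch of the +/- standard deviation idea: compare the latest month to the
# mean of the prior months, and only show an arrow if the deviation exceeds
# k standard deviations. k = 1 is an arbitrary threshold, not a formal test.
trend_arrow <- function(counts, k = 1) {
  latest <- tail(counts, 1)
  prior  <- head(counts, -1)
  m <- mean(prior)
  s <- sd(prior)
  if (latest > m + k * s) "up"
  else if (latest < m - k * s) "down"
  else "flat"
}

incidents <- c(4881, 4502, 5255, 5310, 5350, 4576, 4999)
trend_arrow(incidents)
# "flat" - July's 4999 is well within one SD of the Jan-Jun mean
```

This handles the 4,999-to-5,000 case (no arrow), but the threshold k is arbitrary, which is why I suspect there is a better way.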

Some things I can point out: since we are reporting current-year monthly numbers, our sample size is small (1-12 months), and some metrics have larger counts (4,000+) while others have smaller ones (~100 or so). I feel like there should be a way to determine statistically significant change for a data set like this. Keep in mind, all of our metrics will have the same sample size (1-12 months). Are there any basic methods I can use to determine whether my trend is truly going up or down in a significant way, other than just comparing two months? I am coding this up in R, and just want something standard I can use across the board that is reproducible. Thanks!
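One candidate I have been experimenting with (not settled on, and I may be misusing it given the small sample): regress the monthly count on the month index and check whether the slope differs significantly from zero. It is scale-free in the sense that it works for counts of ~100 or 4,000+ alike, though with only 1-12 points the power is low. A sketch, with my own placeholder function name:

```r
# Sketch: fit a simple linear regression of count on month index and use the
# slope's p-value to decide whether the trend is significantly up or down.
# alpha = 0.05 is a conventional default, not something I've validated.
significant_trend <- function(counts, alpha = 0.05) {
  month <- seq_along(counts)
  fit   <- lm(counts ~ month)
  p     <- summary(fit)$coefficients["month", "Pr(>|t|)"]
  slope <- coef(fit)["month"]
  if (p < alpha && slope > 0) "up"
  else if (p < alpha && slope < 0) "down"
  else "flat"
}

incidents <- c(4881, 4502, 5255, 5310, 5350, 4576, 4999)
significant_trend(incidents)
# "flat" - the slope is small relative to the month-to-month noise
```

I am not sure whether a regression slope test is appropriate for as few as 1-3 months, which is part of what I am asking about.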