I'm collecting data on a battery that is being repeatedly discharged and recharged. The battery consists of 4 cells in series, and I record the fully-charged voltage of each cell at the end of every 10 cycles. Two things to note: first, the cells do not start at exactly the same voltage (though they are close); second, there is some uncertainty in the recorded fully-charged voltage due to equipment accuracy. Example:
+-------+--------+--------+--------+--------+
| Cycle | Cell0  | Cell1  | Cell2  | Cell3  |
+-------+--------+--------+--------+--------+
|   0   | 4.149  | 4.1745 | 4.1715 | 4.1475 |
|  10   | 4.1205 | 4.1565 | 4.158  | 4.1325 |
|  20   | 4.155  | 4.179  | 4.191  | 4.1505 |
+-------+--------+--------+--------+--------+
I'm thinking about how to prove or disprove (statistically) this hypothesis:
The voltage difference between cells is increasing relative to the original difference (i.e., at cycle 0).
I know that the standard deviation gives me an idea of how spread out the cells are at a given cycle. I could compute it for every cycle and then fit a line to the resulting series of standard deviations. But does that even make sense? If, for instance, the resulting slope is tiny, how can I be confident that the result is significant (i.e., not just a product of noise)?
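For what it's worth, here's a rough sketch of what I have in mind, using NumPy and SciPy's `linregress` on the example data above. The reported p-value tests the null hypothesis that the true slope of spread vs. cycle is zero, though I realize that with only a handful of cycles such a test has very little power:

```python
import numpy as np
from scipy.stats import linregress

# Fully-charged voltages per cycle (rows: cycles 0, 10, 20; columns: Cell0..Cell3)
cycles = np.array([0, 10, 20])
voltages = np.array([
    [4.149,  4.1745, 4.1715, 4.1475],
    [4.1205, 4.1565, 4.158,  4.1325],
    [4.155,  4.179,  4.191,  4.1505],
])

# Spread across the four cells at each cycle (sample standard deviation, ddof=1)
spread = voltages.std(axis=1, ddof=1)

# Fit a line to spread vs. cycle; the p-value is for the null hypothesis
# that the true slope is zero (i.e., the spread is not trending)
fit = linregress(cycles, spread)
print(f"slope = {fit.slope:.3e} V/cycle, p = {fit.pvalue:.3f}")
```

With many more cycles recorded, the same fit would at least give a defensible slope estimate and standard error, which is the part I'm unsure how to interpret.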