I'm not sure what kind of mean value the following is; I hope you can help me out.
What it's about:
I have a test series of about 30 elements that each delay the boot sequence of an embedded system. I measured them separately, so I can say, e.g., that element #3 adds 0.4 s to the boot sequence (let's call this time t_add).
Now, when I want to know how long the system takes to boot with all 30 elements active, I can estimate this by adding the t_add of every element to a base time t_base (the time the system takes to boot without any of these elements). Let's say I get an estimate t_assume = 30 s. Then I measure the system with all elements enabled and get t_measured = 29.5 s.
Subtracting one from the other tells me how good my assumption was:
t_accuracy = t_assume - t_measured = 30 s - 29.5 s = 0.5 s
Now to the question:
What kind of mean value do I get if I divide t_accuracy = 0.5 s by the number of elements used?
--> 0.5 s / 30 ≈ 0.017 s
Is it the mean error of each element? Does such a quantity even exist, and if so, what is it called?
Thanks for your support!
This would be the average error per element: in other words, on average, how big a mistake you made in your assumption for each element of the sequence. Note that positive and negative per-element errors can cancel out in this figure, so it is a mean signed error (an estimate of your per-element bias), not a mean absolute error; to get the latter you would need the individual error of each element.
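The whole calculation can be sketched in a few lines. The per-element delays and the base time below are made-up placeholders (the post only gives the totals), chosen so they reproduce the numbers in the question:

```python
# Placeholder inputs: the post only states t_assume = 30 s and
# t_measured = 29.5 s, so these individual values are invented.
t_base = 18.0                # boot time without any extra elements, in s
t_add = [0.4] * 30           # per-element added delays, in s (placeholder)
t_measured = 29.5            # measured boot time with all elements, in s

# Predicted boot time: base time plus every element's contribution.
t_assume = t_base + sum(t_add)            # 18 + 30 * 0.4 = 30 s

# Total estimation error, then the average (signed) error per element.
t_accuracy = t_assume - t_measured        # 0.5 s
mean_error = t_accuracy / len(t_add)      # ~0.0167 s per element
```

Because `t_accuracy` is a difference of totals, `mean_error` is a signed average: an element that was over-estimated and one that was under-estimated by the same amount would cancel each other here.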