TL;DR: Is there a name for an error term having more significance when it's a larger portion of the range or the sample size is smaller?
(That's an awful question - let me explain some and maybe you'll figure out what I'm actually trying to ask.)
I have some timeseries data that I'm partitioning into buckets. The timeseries data generator produces a flat line (that is, every sample has the same value). In theory, no matter how I bucket/aggregate the data, if I plot the average value as sum / count I should get a flat line. In practice, the way I'm bucketing the data produces only an approximately flat line.
The wiggle in the line comes from aggregation error that makes some values larger or smaller than anticipated. (Tracking down the source of that error is what I'm trying to do.)
I have two of these sets, one where the number of samples per bucket is 100x the other; the version with the larger number of samples is a better approximation of the flat line (less wiggle).
Here's a screenshot of an approximately flat line for the smaller set:

and here's the line for the larger set

(The error term in the second chart is hard to see in the line itself, but I left in the computed standard deviation to show that there is error in the points.)
Just because the error is smaller doesn't mean it isn't still wrong, but it's a harder argument to make in that case. What I'm looking for is a term/name/theorem/... (think "law of large numbers" as the kind of term I mean) that describes why the impact is more significant on the smaller sample set.
Does this effect (smaller samples more sensitive to variance) actually exist? And does it have a name?
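In case it helps, here's a minimal sketch of what I suspect is happening, modeling the aggregation error as independent per-sample noise (the noise model and the specific numbers are my assumptions, not my actual pipeline): with 100x more samples per bucket, the bucket means wiggle much less around the true flat value.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0  # the flat line the generator produces
NOISE = 1.0         # per-sample aggregation error, modeled as Gaussian noise

def bucket_means(samples_per_bucket, n_buckets=200):
    """Average each bucket of noisy samples; return the per-bucket means."""
    return [
        statistics.fmean(
            TRUE_VALUE + random.gauss(0, NOISE)
            for _ in range(samples_per_bucket)
        )
        for _ in range(n_buckets)
    ]

small = bucket_means(10)    # few samples per bucket
large = bucket_means(1000)  # 100x more samples per bucket

# "Wiggle" = how much the bucket means scatter around the flat line.
wiggle_small = statistics.stdev(small)
wiggle_large = statistics.stdev(large)
print(wiggle_small, wiggle_large)
```

With 100x the samples per bucket, the wiggle shrinks by roughly a factor of 10 (i.e., roughly as 1/sqrt(n)), which matches what I'm seeing in the two charts.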
Thanks for your help!