NB: I recently posted pretty much the same question over at Cross Validated, but since I got no answer, I wanted to try here. If it is not appropriate, just let me know in the comments and I will delete this question. Thank you.
I'm investigating the scenario seen in the GIF below: We see $n$ (in this case $10$) randomly placed circles expand on a grid.
In the graph (also below), we see how quickly the square grid is covered for $n=1,2,\ldots,10$ (blue being $n=1$).
Now, this is just for a single run of my script and I'd like to represent the data from multiple runs in a single graph, so that the noise is a bit diminished and one can see the most dominating tendencies more clearly.
My problem is that I do not know what kind of manipulation would be appropriate here. I thought about simply letting the $i$th entry of the desired vector (for one of the graphs, say $n=1$) be the mean of the $i$th entries across all runs (for that particular $n$, of course). But first, it feels wrong to completely break up the individual runs like this, and second (and most importantly), the vectors for each run (and each $n$) are not of the same length: they simply end when the grid is filled!
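For reference, the entry-wise mean I describe above can be sketched as follows (the coverage vectors here are made up for illustration; each value is the fraction of the grid covered at that step):

```python
import numpy as np

# Hypothetical coverage curves from three runs for the same n.
# Runs end at different times, so the vectors differ in length.
runs = [
    np.array([0.1, 0.4, 0.8, 1.0]),
    np.array([0.2, 0.5, 0.7, 0.9, 1.0]),
    np.array([0.1, 0.3, 1.0]),
]

# The naive entry-wise mean only works if every run has the same length:
try:
    naive_mean = np.mean(np.stack(runs), axis=0)
except ValueError as err:
    print("stacking fails:", err)  # shapes differ, so this raises
```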
Q: What is a standard way of achieving an averaging-like effect when the sequences to be averaged are not the same length?


Here are some things you could do:
You could build an interactive D3 visualization that adds more detail to each curve, making it easier to see what is going on across runs.
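Beyond visualization, one common way to put unequal-length runs on a common footing is either to pad each run out to the longest length with its final value (meaningful here, since once the grid is fully covered it stays covered), or to resample every run onto a shared normalized time axis and average there. A minimal sketch, using made-up coverage vectors (fraction of the grid covered per step):

```python
import numpy as np

# Hypothetical coverage curves from three runs for the same n;
# every run ends at full coverage (1.0) but at a different step.
runs = [
    np.array([0.1, 0.4, 0.8, 1.0]),
    np.array([0.2, 0.5, 0.7, 0.9, 1.0]),
    np.array([0.1, 0.3, 1.0]),
]

# (a) Pad to a common length with each run's final value, then
# take the entry-wise mean, which is now well defined.
T = max(len(r) for r in runs)
padded = np.stack(
    [np.pad(r, (0, T - len(r)), constant_values=r[-1]) for r in runs]
)
mean_curve = padded.mean(axis=0)

# (b) Resample each run onto a normalized time axis in [0, 1],
# comparing runs by relative progress rather than absolute steps.
grid = np.linspace(0.0, 1.0, 50)
resampled = np.stack(
    [np.interp(grid, np.linspace(0.0, 1.0, len(r)), r) for r in runs]
)
mean_norm = resampled.mean(axis=0)
```

Padding (a) preserves the absolute time scale, which matters if you want to compare how fast different $n$ fill the grid; normalizing (b) instead compares the shapes of the curves. Which one is appropriate depends on what the x-axis of your combined graph should mean.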