I did some performance testing on one of our pipelines at work, feeding in batches of records whose sizes increased by factors of 10. At each tenfold increase in input size, the growth rate of the runtime approximately doubled. This is easier to see with some sample data:
| Input size ($X$) | Runtime ($Y$) | Runtime Growth ($Y_{n+1}/Y_n$) |
|---|---|---|
| 10 | 1 | 2 |
| 100 | 2 | 4 |
| 1000 | 8 | 8 |
| 10000 | 64 | 16 |
| 100000 | 1024 | 32 |
| 1000000 | 32768 | null (would be 64) |
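As a quick sanity check (runtimes hard-coded from the table above), the successive ratios can be computed directly, and each one is indeed double the previous:

```python
# Runtimes from the table above (input sizes 10^1 through 10^6).
runtimes = [1, 2, 8, 64, 1024, 32768]

# Successive growth ratios Y_{n+1} / Y_n.
ratios = [b / a for a, b in zip(runtimes, runtimes[1:])]
print(ratios)  # [2.0, 4.0, 8.0, 16.0, 32.0]
```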
I noticed that this recurrence relation holds:
Define the growth rate $G_n = Y_{n+1}/Y_n$; then
$G_n^{1/\log_{10}(X_n)} = 2$, or equivalently $Y_{n+1} = 2^{\log_{10}(X_n)} \cdot Y_n$.
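Here's a short script (numbers hard-coded from the table) that verifies the relation numerically on every row that has a successor; `math.isclose` is just there to absorb floating-point error:

```python
import math

# Hard-coded from the table above: X_n = 10^n and the measured runtimes Y_n.
sizes = [10, 100, 1000, 10000, 100000, 1000000]
runtimes = [1, 2, 8, 64, 1024, 32768]

# zip truncates at runtimes[1:], so only rows with a known next runtime are checked.
for x, y_cur, y_next in zip(sizes, runtimes, runtimes[1:]):
    g = y_next / y_cur                  # growth rate G_n = Y_{n+1} / Y_n
    root = g ** (1 / math.log10(x))     # G_n^(1 / log10(X_n))
    assert math.isclose(root, 2.0)      # the relation: this is 2 for every row
print("relation holds for all rows")
```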
I'm really curious how I could represent $Y$ in terms of $X$, or at the very least get a better sense of the runtime complexity.