This may be a stupid question, but I don't feel my answer to it is good enough.
My mother asked me what the percentage increase is when 1.6 million grows to 16 million. The answer should be a 900% increase. When calculating, I disregarded the factor of a million, since I knew that, relatively, it is the same as 1.6 growing to 16. This also follows from the familiar, old-school relation:
$$ \text{Percentage increase} = \frac{\text{Final} - \text{Initial}}{\text{Initial}} \times 100 $$
- Oh, by the way, this quantity seems to rear its head in various places throughout physics and maths. Can someone explain why I see the general concept of $\frac{\text{final} - \text{initial}}{\text{initial}}$ so often, and not just for percentage increase?
Obviously, according to this relation, the factor of $10^6$ can be ignored, because both the numerator and the denominator of the fraction contain it, so it cancels. However, one thing still bugs me. I have two questions overall, so here are the final ones:
- While the magnitude of these numbers demonstrably doesn't matter, the absolute distance from initial to final is much larger on the scale of millions than on the scale of single digits. I know the formula reconciles this, but, intuitively, why doesn't it really matter?
- Where does the percentage increase formula come from? How is it derived?
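To sanity-check the cancellation claim above, here is a small Python sketch (my own illustration) computing the percentage increase at both scales; any common factor multiplying both values divides out of the ratio, so the result is the same:

```python
def percent_increase(initial, final):
    """Percentage increase from initial to final: (final - initial) / initial * 100."""
    return (final - initial) / initial * 100

# The common factor of 10**6 cancels in the ratio,
# so both scales give the same answer.
print(percent_increase(1.6, 16))      # 900.0
print(percent_increase(1.6e6, 16e6))  # 900.0
```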