I need to find the total margin of error for a calculated velocity, given margins of error for time and distance. The two margins happen to be the same (both measurements were GPS-based, but that is not important here) and are given as 1/365000 (20 cm for 730 km).
So, I spent quite some time studying various sources, most of which discussed standard deviation (which I understand only vaguely), and found here (but not only here) that the formula I should use is:
if $S=A\times B$ or $S=A/B$, then $\sigma_S/S=\sqrt{(\sigma_A/A)^2+(\sigma_B/B)^2}$
Well, let's take an easy example. Say I have two measurements: a distance of 40 m and a time of 4 s; both relative margins are the same and equal 0.2 (huuuuge, I know). The above formula (if I am interpreting it correctly) would give me a margin of $\sigma_S/S=\sqrt{(0.2)^2+(0.2)^2}=\sqrt{0.04+0.04}\approx 0.2828$ (for the calculated velocity of 10 m/s). Therefore the lowest actual velocity could be $v = 7.172$ m/s.
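As a quick sanity check, the quadrature formula can be evaluated in a few lines of Python (my own sketch; the function name is made up):

```python
import math

def relative_error_quadrature(rel_a, rel_b):
    """Propagated relative error for S = A*B or S = A/B."""
    return math.sqrt(rel_a**2 + rel_b**2)

distance, time = 40.0, 4.0        # measured values
rel_d = rel_t = 0.2               # relative margins of error
velocity = distance / time        # 10 m/s

rel_v = relative_error_quadrature(rel_d, rel_t)
print(round(rel_v, 4))                      # 0.2828
print(round(velocity * (1 - rel_v), 3))     # 7.172, the "lowest" velocity on this reading
```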
Now, let's try to calculate the maximum error directly from the numbers for distance and time. The distance may have been measured high, so the actual distance might be as low as $40/1.2 = 33.3333$ m, while the time may have been measured low, which gives a maximum possible actual time of $4/0.8 = 5$ s. The real velocity would then have been $v = 33.3333\ \text{m}/5\ \text{s} = 6.6667$ m/s, which means I was off by a relative error of 0.3333.
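The same worst-case arithmetic, spelled out in Python (a sketch; it assumes, as above, that the 0.2 error is relative to the actual value, which is where dividing by $1.2$ and $0.8$ comes from):

```python
distance_meas, time_meas, margin = 40.0, 4.0, 0.2

# Worst case for a low speed: distance measured high, time measured low.
actual_dist_min = distance_meas / (1 + margin)   # 33.3333... m
actual_time_max = time_meas / (1 - margin)       # 5.0 s

v_meas = distance_meas / time_meas               # 10 m/s
v_min = actual_dist_min / actual_time_max        # 6.6667... m/s
rel_err = (v_meas - v_min) / v_meas

print(round(v_min, 4), round(rel_err, 4))        # 6.6667 0.3333
```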
Obviously, the above calculation shows that the theoretical formula underestimated the margin of error, since it seemed to say the error could not exceed 0.2828.
On the other hand, I found elsewhere, without any explanation (but from a credible source), that in such a case I should calculate the total margin of error as simply the square root of 0.2 (or of 1/365000 in my original problem). In that case the total margin of error equals 0.4472, which, although much higher than what I calculated in my example, at least is not underestimated.
What am I doing wrong, and, if the margin of error in my simple example really is 0.4472 (i.e. the square root of the margin of error for the distance or time), then why is it calculated that way?
What you're doing wrong is assuming too much about the error distribution.
There are two common ways to look at the error in a measurement: normally distributed errors and absolute bounds. Normal (or Gaussian) distributions have a bell shape, with about a 68% chance of the actual value being within one standard deviation of your measured value and about a 95% chance of it being within two standard deviations. With absolute bounds, you assume a 100% chance of the actual value being between the measurement bounds, but little more.
Your first formula gives a relative standard deviation of 0.2828, assuming normally distributed errors. Under that assumption, there is still a roughly 32% chance of the actual value being off by more than that.
Your second formula establishes a relative maximum error of 0.3333, with a 0% chance of the error being more.
These two statements aren't in direct conflict yet - there could be a 32% chance that the error is between 0.2828 and 0.3333. But that's not how it works. The normal distribution also gives a 2.5% chance of the speed being low by at least two standard deviations, i.e. by 0.5656 (so $v = 4.344$ m/s or less), while the absolute-bounds assumption puts the chance of that at 0%.
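The mismatch between the two models is easy to see with a small Monte Carlo experiment (a sketch, assuming independent normal errors with a 20% standard deviation on each measurement, as in the example):

```python
import random

random.seed(0)
N = 100_000
dist, t, sd = 40.0, 4.0, 0.2
hard_min = (dist / 1.2) / (t / 0.8)   # 6.667 m/s, the absolute-bounds minimum

below = 0
for _ in range(N):
    d_sample = random.gauss(dist, sd * dist)
    t_sample = random.gauss(t, sd * t)
    below += (d_sample / t_sample) < hard_min

# A sizeable fraction of the normally-distributed outcomes falls below the
# speed that the absolute-bounds model declares impossible.
print(below / N)
```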
[Edit]
Note that your example is quite misleading. You first quote a standard deviation of roughly 20% for the speed (so a measured speed below 6 m/s would occur in about 2.5% of cases), and then state that the minimum possible speed is 6.67 m/s. Those are quite different distributions. If 6.67 m/s is a truly impossible outcome, you're talking about 5 or 6 standard deviations, not 1.7.
[Edit 2]
So, the 0.2 error is an absolute bound, not a standard deviation, and the first formula is therefore the wrong tool. The resulting bounds on the velocity are:

$$\left[\,10\cdot\frac{1-0.2}{1+0.2},\ 10\cdot\frac{1+0.2}{1-0.2}\,\right] = [6.667,\ 15]$$

Note that the bounds are asymmetric.
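Those bounds can be checked numerically (a sketch; it treats 0.2 as a relative error on each of the two measurements):

```python
v, margin = 10.0, 0.2

v_min = v * (1 - margin) / (1 + margin)   # distance erred high, time erred low
v_max = v * (1 + margin) / (1 - margin)   # distance erred low, time erred high
print(round(v_min, 3), round(v_max, 3))   # 6.667 15.0
```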