I need the precise interval between two time stamps taken on a remote computer, and I want to eliminate any error due to the transmission time of signals travelling to and from my computer. I have an NTP program running to do this, and I think all I need is to correct for the differences in transmission time that might occur. I don't require the absolute time, only the difference between the two readings.
I have been looking at the Wikipedia entry for Network Time Protocol, under the subheading on the clock synchronisation protocol, but find it confusing. (It looks like there is an error in the formula for the time offset, $\theta$, which they give as $$ \theta = \frac{(t_1-t_0)-(t_2-t_3)}{2}, $$ and which I think should be $$ \theta = \frac{(t_1-t_0)-(t_3-t_2)}{2}. $$ In the figure they seem to have labelled $\delta$ as $35$ milliseconds, which surely is $\theta$, while $\delta$ is $2$ ms.)
I don't understand how to use the right combination of these time intervals to eliminate both the error due to the transit delays on the outward and return trips and the difference between the client and server clock values. Can anyone help, please?
You have misquoted the Wikipedia expression: it has a $+$ sign between the terms in the numerator, while you have a $-$ sign. Their expression is equivalent to the one you propose. They list $\delta$ as $65$ ms, which is the result of the computation they give; it is the total time for both message transits. $\theta$ is the amount you should add to the client time to make it match the server's. Another way to write $\theta$ is $$\theta = \frac {t_2+t_1}2-\frac {t_3+t_0}2,$$ which makes it clear that it is the difference between the two clocks at the midpoint of the exchange.
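To make the arithmetic concrete, here is a small sketch in Python (the four timestamps are made-up values chosen so that the server clock is exactly $50$ s ahead and each one-way trip takes about $10$ ms; the function names are my own, not part of any NTP library):

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Standard NTP combination of the four timestamps.

    t0: client clock when the request is sent
    t1: server clock when the request is received
    t2: server clock when the reply is sent
    t3: client clock when the reply is received
    """
    # Offset: difference between the clocks at the midpoint of the exchange.
    theta = ((t1 - t0) + (t2 - t3)) / 2
    # Round-trip delay: total transit time, excluding server processing time.
    delta = (t3 - t0) - (t2 - t1)
    return theta, delta

# Hypothetical example: server 50 s ahead, 10 ms each way, 2 ms processing.
t0 = 100.000            # client sends request
t1 = 150.010            # server receives it (server clock runs 50 s fast)
t2 = 150.012            # server sends reply
t3 = 100.022            # client receives reply

theta, delta = ntp_offset_and_delay(t0, t1, t2, t3)
print(theta)  # offset: 50.0 s to add to the client clock
print(delta)  # round-trip delay: 0.020 s
```

Note that the symmetric form of $\theta$ only removes the transit delay exactly when the outward and return trips take the same time; an asymmetric path contributes an error of up to half the asymmetry.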