I'm working with a radar that gives me range and angle for each of its plots/outputs. The radar has a fixed range resolution of 20 m and an angular resolution of 2 degrees. This means that the farther away a target is, the greater the cross-range uncertainty of the measurement.
I want to convert these polar coordinates into a bivariate probability density function, but I am totally stumped as to how this should be done. I want to keep the calculations as simple as possible (perhaps approximate the resolution cell as a rectangle?).
Any advice would be greatly appreciated.
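To put a number on the "greater variability" point: the range extent of a resolution cell is fixed at 20 m, but its cross-range extent grows linearly with range (arc length ≈ r · Δθ). A quick sketch, with the resolutions from above:

```python
import math

RANGE_RES = 20.0               # fixed range resolution, metres
ANGLE_RES = math.radians(2.0)  # 2-degree angular resolution, in radians

for r in (100.0, 1000.0, 10000.0):  # example ranges in metres
    cross_range = r * ANGLE_RES     # cross-range arc length of one angular cell
    print(f"r = {r:7.0f} m: cell is {RANGE_RES:.0f} m (range) x {cross_range:.1f} m (cross-range)")
```

At 100 m the cell is roughly 20 m × 3.5 m; at 10 km it is roughly 20 m × 349 m, so the Cartesian uncertainty is very elongated at long range.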
I'm using the following approach:
(1) Assume r and theta are independent and normally distributed, centered on the measured values
(2) Set variance = (resolution / 6)^2, so that one resolution cell spans ±3 standard deviations
p(r, theta) = p(r) * p(theta)
            = normal_prob(r) * normal_prob(theta)

p(x, y) = normal_prob(sqrt(x^2 + y^2)) * normal_prob(atan2(y, x))

(using atan2 rather than atan(y/x) so the angle is recovered in all four quadrants)
Does this seem OK?
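In code, the formulas above might look roughly like this. This is only a sketch of the approach as stated (independent Gaussians centered on a measured plot (r0, theta0), with the resolution/6 sigmas); the function names are my own:

```python
import math

RANGE_RES = 20.0               # range resolution, metres
ANGLE_RES = math.radians(2.0)  # angular resolution, radians

SIGMA_R = RANGE_RES / 6.0      # resolution cell spans +/- 3 sigma
SIGMA_T = ANGLE_RES / 6.0

def normal_pdf(x, mu, sigma):
    """Univariate normal density."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def p_xy(x, y, r0, theta0):
    """Density at Cartesian (x, y) for a plot measured at (r0, theta0),
    per the product form p(x, y) = p(r) * p(theta)."""
    r = math.hypot(x, y)        # sqrt(x^2 + y^2)
    theta = math.atan2(y, x)    # four-quadrant angle
    return normal_pdf(r, r0, SIGMA_R) * normal_pdf(theta, theta0, SIGMA_T)
```

Note that the density peaks at the measured plot position and falls off independently in range and angle, which is the behaviour the two-step recipe above describes.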
My next step is to set up a hypothesis test between two points from different sensors to decide whether they are in fact the same target, i.e.:

H0: X1 - X2 = 0

tested at significance level alpha (reject H0 if the observed difference falls outside the (1 - alpha) confidence region).
If the above is correct, this is the next hurdle to surmount...
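One common way to run such a test (not necessarily the only one) is a chi-square gate on the Mahalanobis distance: if the two position estimates are independent Gaussians in Cartesian coordinates, their difference is Gaussian with covariance P1 + P2, and the squared Mahalanobis distance of the difference is chi-square with 2 degrees of freedom under H0. A sketch, with the covariances P1 and P2 assumed given:

```python
import math
import numpy as np

def same_target(x1, P1, x2, P2, alpha=0.05):
    """Accept H0 (same target) if the squared Mahalanobis distance
    between the two 2-D position estimates is inside the chi-square
    gate for significance level alpha."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    S = np.asarray(P1, float) + np.asarray(P2, float)  # covariance of the difference
    m2 = float(d @ np.linalg.solve(S, d))              # squared Mahalanobis distance
    gate = -2.0 * math.log(alpha)                      # chi-square quantile, 2 dof
    return m2 <= gate
```

For 2 degrees of freedom the chi-square CDF is 1 - exp(-x/2), so the (1 - alpha) quantile reduces to -2 ln(alpha), which avoids needing a statistics library for the threshold.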