I posted this yesterday on the physics site, but I think it really boils down to a trig problem more than anything, so I'm reposting here for advice.
I have a navigation device for which I am trying to derive an azimuth algorithm. I am getting stuck with the derivation, as I'm pretty sure it is impossible given the hardware I have access to. I'm looking for a way to prove that a definitive azimuth is impossible to get from the following setup, or, even better, for someone to tell me that I'm missing something obvious here that actually makes it possible.
The sensors in the tool are as follows:
A tri-axial accelerometer, with the $Z$-axis perpendicular to the earth's surface when the device is 'neutral'.
A single-axis gyro that rotates through $X$ and $Y$, but whose orientation with respect to the accelerometer is not known; i.e., it essentially returns the magnitude of the rotation vector's projection onto the $XY$-plane and nothing more. Since the device will be more or less stationary, the only rotation it senses is the earth's, whose magnitude $\Omega$ is known, so we can also derive the $Z$-component via $g_z = \sqrt{\Omega^2 - g_x^2 - g_y^2}$.
I have tried several approaches to this problem, including rotating the device around every axis and through every conceivable angle in a frustrated brute-force approach; none is consistently correct. Intuitively, it seems to me that, for a given tilt angle $(\theta)$, latitude $(\lambda)$, and $g_z$ measurement, there will be four possible azimuthal angles $(\phi)$, one for each quadrant. Is there a way to prove this mathematically that doesn't just boil down to me making a slide show full of arrows and axes and repeatedly shouting "LOOK!"?
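For what it's worth, the ambiguity is easy to exhibit numerically. Below is a quick sketch (with made-up latitude and tilt values, assuming the device's attitude is a yaw $\phi$ about $Z$ followed by a pitch $\theta$ about $Y$, and that the only sensed rotation is the earth's): the magnitude of the rotation vector's projection onto the device's $XY$-plane comes out identical for $\phi$ and $-\phi$, so at least two azimuths are always indistinguishable from that measurement alone.

```python
import numpy as np

OMEGA = 7.292e-5           # earth's rotation rate, rad/s
lat = np.radians(40.0)     # example latitude (lambda)
theta = np.radians(25.0)   # example tilt (pitch) angle

# Earth's rotation vector in the local NED (north-east-down) frame
omega_ned = OMEGA * np.array([np.cos(lat), 0.0, -np.sin(lat)])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def gyro_xy(azimuth):
    """Magnitude of the rotation vector's projection onto the device's
    XY-plane, for a device yawed by `azimuth` and pitched by `theta`."""
    R = Rz(azimuth) @ Ry(theta)     # body-to-NED attitude
    omega_body = R.T @ omega_ned    # sensed rotation, body frame
    return np.hypot(omega_body[0], omega_body[1])

phi = np.radians(50.0)
print(gyro_xy(phi), gyro_xy(-phi))  # same value: phi vs -phi is ambiguous
```

Note that the derived $g_z$ doesn't break the tie either: in this parameterization the body-frame $Z$-component works out to $\Omega(\sin\theta\cos\lambda\cos\phi - \cos\theta\sin\lambda)$, which is also even in $\phi$.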