Let us assume there are $N$ sensors rigidly attached to a rigid body, each measuring the orientation (call it $q_i$) at its corresponding location (call it $p_i$) with respect to a fixed, well-defined coordinate system.
Now, if the rigid body is moved or rotated, the orientation reported by each sensor changes (call the new orientation $Q_i$). How can I relate $p_i$, $q_i$ and $Q_i$ to the translation/rotation of the body? I currently have the orientations in quaternion form.
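For the rotational part alone there is a direct relation: if $q_i$ takes the sensor frame to the world frame and the whole body is rotated by $R$, then $Q_i = R \otimes q_i$, so $R = Q_i \otimes q_i^{-1}$ from any single sensor. A minimal sketch (the specific orientations below are made-up test data, and I'm assuming SciPy's `Rotation` class for the quaternion algebra):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Assumed ground-truth body rotation, and one sensor's initial orientation
# (both arbitrary values chosen for illustration).
R_body = Rotation.from_euler('xyz', [10, 20, 30], degrees=True)
q_i = Rotation.from_euler('xyz', [5, -7, 12], degrees=True)

# If q_i maps the sensor frame to the world frame, rotating the whole body
# by R_body gives the new sensor reading Q_i = R_body * q_i.
Q_i = R_body * q_i

# The body rotation is then recoverable from a single sensor:
R_est = Q_i * q_i.inv()   # equals R_body up to floating-point error
```

Note the translation cannot be recovered this way: orientations are invariant under pure translation, so position information (e.g. the $p_i$) is needed for that part.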
One way the problem could be tackled is to define four points ($p_i$, $p_{ix}$, $p_{iy}$ and $p_{iz}$), all of which are part of the rigid body. Then $q_i$ is defined by those four points (think of them as $(0,0,0)$, $(\delta_x,0,0)$, $(0,\delta_y,0)$ and $(0,0,\delta_z)$, localized to the position of each $p_i$), so $q_i$ can be fully determined from each set of four points. Similarly, the $Q_i$'s can be defined after moving/rotating the $N \times 4$ points.
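This four-point idea can be sketched as follows: the three offset vectors form the columns of a rotation matrix, which gives the quaternion at that sensor, and comparing the frames before and after the move yields both the rotation and the translation. Everything below (the point $p$, the offset $\delta$, the motion, the helper `frame_from_points`) is hypothetical illustration, again assuming SciPy for the rotation algebra:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def frame_from_points(p, px, py, pz):
    """Orientation whose axes are the (normalized) offsets from p."""
    M = np.column_stack([px - p, py - p, pz - p])
    M /= np.linalg.norm(M, axis=0)   # normalize each axis column
    return Rotation.from_matrix(M)   # assumes the offsets stay orthogonal

# A sensor at p with small axis-aligned offsets delta = 0.01.
p = np.array([1.0, 2.0, 0.5])
d = 0.01
base = np.stack([p, p + [d, 0, 0], p + [0, d, 0], p + [0, 0, d]])

# Apply an assumed rigid-body motion: rotate by R_body, then translate by t.
R_body = Rotation.from_euler('zyx', [30, -10, 45], degrees=True)
t = np.array([0.3, -0.2, 1.0])
moved = R_body.apply(base) + t

q_i = frame_from_points(*base)    # identity here (axis-aligned offsets)
Q_i = frame_from_points(*moved)

# Rotation from the two frames, translation from the base point's motion.
R_est = Q_i * q_i.inv()
t_est = moved[0] - R_est.apply(base[0])
```

With noisy measurements and several sensors, the same recovery would typically be done as a least-squares point-set registration (Kabsch/Horn) over all $N \times 4$ points rather than from a single frame.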