I have a computer vision task that I'm struggling with.
In short, I have images from a set of 4 cameras that together capture 360 degrees, but the cameras output rotated images. Camera 0 outputs images rotated +90 degrees clockwise, while cameras 1-3 output images rotated -90 degrees clockwise.
Each camera has a quaternion as follows:
       Cam 0           Cam 1          Cam 2           Cam 3
W      0.554038881     0.7933533473   0.5663966889    0.001525752367
X     -0.436331528     0              0.4305736222   -0.6145368421
Y     -0.4300295689    0.6087614198   0.4262777488   -0.001479600449
Z     -0.5636756921    0             -0.5586487515    0.7888852594
Is there a mathematical method to predict, from the quaternion, whether the image is rotated 90 degrees to the left or to the right?
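One way to get this from the quaternion is to convert it to a rotation matrix and measure the roll about the camera's optical axis: express the world "up" direction in the camera frame and take atan2 of its sideways component versus its vertical component. A roll near +90 degrees means the image is turned one way, near -90 degrees the other. The sketch below is a minimal pure-Python version; the axis conventions (world up = +Y, camera up = +Y, optical axis = +Z) are assumptions that depend on your rig, and `estimated_roll_deg` / `correction_for` are hypothetical names. Since you already know camera 0 is +90 and cameras 1-3 are -90, you can calibrate the sign convention once against a known camera.

```python
import math

def quat_to_matrix(w, x, y, z):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix (camera -> world)."""
    n = math.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def estimated_roll_deg(w, x, y, z, world_up=(0.0, 1.0, 0.0)):
    """Roll of the camera about its optical axis, in degrees.

    Expresses world_up in the camera frame (u = R^T @ world_up) and
    measures how far it leans sideways: atan2(u_x, u_y).
      ~0 deg   -> image upright
      ~+90 deg -> rotated one way, ~-90 deg -> the other
    (map sign to clockwise/counter-clockwise using one known camera).
    """
    R = quat_to_matrix(w, x, y, z)
    u = [sum(R[j][i] * world_up[j] for j in range(3)) for i in range(3)]
    return math.degrees(math.atan2(u[0], u[1]))

def correction_for(w, x, y, z):
    """Suggested corrective rotation: the negative of the measured roll,
    snapped to the nearest multiple of 90 degrees."""
    return -90 * round(estimated_roll_deg(w, x, y, z) / 90)
```

As a sanity check under these conventions: a camera rolled +90 degrees about its optical axis has quaternion (cos 45°, 0, 0, sin 45°), for which `estimated_roll_deg` returns about +90 and `correction_for` returns -90.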
The motive behind this is that the images are passed to an object detection deep learning model. Object detection models are trained on large datasets of correctly oriented images, so when I try to detect assets in a rotated image the model fails: it cannot extract features from rotated images it was never trained on.
My approach is to rotate the images before passing them to the detection model, because the model detects assets when the images are upright but fails to detect anything once an image is rotated by ±90 degrees.
Additionally, I know that all images are rotated by 90 degrees; what I'm missing is the direction (+90 or -90 clockwise). If I blindly rotate every image by +90 degrees, some will end up correctly oriented while others will be rotated 180 degrees, which brings me back to the original problem.
PS: sending two copies of the same image, rotated +90 and -90, is not feasible in my application due to limited processing power and time; it would double the work (O(2n) instead of O(n)).
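Since the rotation is fixed per device, the direction only needs to be determined once per camera (e.g. from its quaternion at startup), not per frame, so the per-image cost stays O(n): one 90-degree rotation each. A sketch with NumPy, assuming images arrive as H x W x C arrays; `np.rot90` with k=1 rotates counter-clockwise (undoing a +90 clockwise image) and k=-1 rotates clockwise:

```python
import numpy as np

# k for np.rot90: +1 = 90 deg counter-clockwise, -1 = 90 deg clockwise.
# Computed once per camera, not per frame; values below follow the
# description above (cam 0 is +90 CW, cams 1-3 are -90 CW).
CORRECTION_K = {0: 1, 1: -1, 2: -1, 3: -1}

def upright(image, cam_id):
    """Undo the fixed per-camera 90-degree rotation in a single pass."""
    return np.rot90(image, k=CORRECTION_K[cam_id])
```

For example, on a 2x2 array [[1, 2], [3, 4]], `upright(a, 0)` gives [[2, 4], [1, 3]] (counter-clockwise) while `upright(a, 1)` gives [[3, 1], [4, 2]] (clockwise).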

