Transforming 2D Laser Profile Data to 3D Coordinates with Angle Considerations: Resolving Discrepancies


General Understanding of what the project does:

I'm building a system that takes images of an object at various rotation positions while a line laser is projected onto the object vertically (perpendicular to the floor plane), and ideally reconstructs the object in 3D space as a point cloud.

[images]

NOTE: Each of these images has a known rotation angle; this is used as slice_angle later.

Point cloud generated from the slices: [image]

How I calculate 3D points from the 2D image:

I do preprocessing and find the brightest column in each row; this gives me an X-Y map of points I want to transform into 3D.
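The per-row extraction can be sketched like this (a minimal version; the function name and the raw row-major grayscale buffer are assumptions, since the original presumably works on an OpenCV image):

```cpp
#include <vector>
#include <cstddef>

// For each image row, find the column with the highest intensity —
// the laser line's horizontal position in that row.
// `image` is assumed to be a row-major 8-bit grayscale buffer.
std::vector<int> brightestColumnPerRow(const std::vector<unsigned char>& image,
                                       int rows, int cols) {
    std::vector<int> peaks(rows, -1); // -1 marks rows with no bright pixel
    for (int r = 0; r < rows; ++r) {
        unsigned char best = 0;
        for (int c = 0; c < cols; ++c) {
            unsigned char v = image[static_cast<size_t>(r) * cols + c];
            if (v > best) { best = v; peaks[r] = c; }
        }
    }
    return peaks;
}
```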

Then I run an undistortion algorithm on those points using the camera matrix and distortion coefficients, which gives me undistorted X and Y values.
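For intuition, the pinhole part of that step (ignoring the distortion terms, which cv::undistortPoints removes iteratively) maps a pixel to normalized camera coordinates like this; fx, fy, cx, cy stand in for the calibrated intrinsics:

```cpp
#include <cmath>

struct Norm2 { double x, y; };

// Convert a pixel (u, v) to normalized camera coordinates using the
// intrinsics only. Distortion correction (k1..k3, p1, p2) is omitted
// here for brevity; cv::undistortPoints handles it.
Norm2 pixelToNormalized(double u, double v,
                        double fx, double fy, double cx, double cy) {
    return { (u - cx) / fx, (v - cy) / fy };
}
```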

From there, I compute Z = undistorted X / tan(laser angle); my laser angle is 45°, so Z = X, essentially.
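That triangulation step can be written as a small helper (a sketch; `laserAngleDeg` stands in for the 45° laser angle):

```cpp
#include <cmath>

// Depth from laser triangulation: with the laser plane at
// `laserAngleDeg` to the optical axis, an undistorted x-coordinate
// maps to depth Z = x / tan(angle). At 45 degrees tan is 1, so Z == x.
double depthFromLaser(double undistortedX, double laserAngleDeg) {
    const double kPi = 3.14159265358979323846;
    return undistortedX / std::tan(laserAngleDeg * kPi / 180.0);
}
```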

After converting all 2D points in a slice to 3D, I rotate every 3D point in that slice by its respective angle.

I do X = X * cos(slice_angle) and Z = Z * sin(slice_angle), so that a slice taken at 45 degrees is rotated accordingly.
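Note that scaling X by cos and Z by sin on their own is not a full rotation; a proper rotation about the vertical axis mixes X and Z, which may be what the UPDATE below refers to. A minimal sketch (the `Vec3` struct and function name are assumptions standing in for glm):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Rotate a point about the vertical (Y) axis by `angleDeg`.
// This is the full 2D rotation in the XZ plane:
//   X' =  X*cos(t) + Z*sin(t)
//   Z' = -X*sin(t) + Z*cos(t)
// Scaling X by cos and Z by sin alone distorts any point that is not
// on the laser plane, which breaks flat-sided objects like a cube.
Vec3 rotateAboutY(const Vec3& p, double angleDeg) {
    const double kPi = 3.14159265358979323846;
    const double t = angleDeg * kPi / 180.0;
    return { p.x * std::cos(t) + p.z * std::sin(t),
             p.y,
            -p.x * std::sin(t) + p.z * std::cos(t) };
}
```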

UPDATE: I started using a rotation matrix; now I'm getting this for the cubic object: [image]

Now I have X, Y, Z coordinates for every point from the 2D image, rotated by its respective slice_angle.

Camera Matrix \begin{pmatrix} focalX & 0 & cX \\ 0 & focalY & cY \\ 0 & 0 & 1 \end{pmatrix}
Distortion Coefficients \begin{pmatrix} k1 \\ k2 \\ p1 \\ p2 \\ k3 \end{pmatrix}

    std::vector<cv::Point2f> srcPoints;
    srcPoints.push_back(cv::Point2f(static_cast<float>(middle), static_cast<float>(row)));

    std::vector<cv::Point2f> dstPoints;

    // Apply the camera calibration and distortion correction
    cv::undistortPoints(srcPoints, dstPoints, cameraMatrix, distCoeffs, cv::noArray(), newCameraMatrix);

    GLfloat offset = 0.05f;

    // Note: X is normalized against n_rows and Y against n_cols here;
    // if normalizeCoordinate expects each axis's own pixel count,
    // these two arguments are swapped.
    GLfloat normalX = normalizeCoordinate(static_cast<float>(dstPoints[0].x), n_rows);
    GLfloat normalY = normalizeCoordinate(static_cast<float>(dstPoints[0].y), n_cols) + offset;

    double result = dstPoints[0].x / tan(45.0 * pi / 180.0); // Z = X / tan(laser angle); at 45 deg, Z == X

    GLfloat normalZ = normalizeCoordinate(static_cast<float>(result), n_cols);

    GLfloat theta = (slice.angle + 90) * pi / 180; // Convert angle to radians

    // Scale X and Z by cos/sin of the slice angle. This is not a full
    // rotation about the Y axis (that would also mix X and Z terms).
    normalX = normalX * cos(theta);
    normalZ = normalZ * sin(theta);

    slice.list_3d_points.push_back(glm::vec3(normalX, normalY, normalZ)); // glm::vec3 works well with OpenGL

The Problem:

This seems to work OK on round objects (like a mug, which is round on its X and Z axes).

But when it comes to a cube, my calculations break. Exhibit:

NOTE: square-esque on the X and Z axes

[image]

Here it is at 0° and 45°: [images]

And its software rendering:
[images]

As you can see, the angled slices don't align perfectly, and things seem to break.

I'm thinking of implementing a system that finds offsets so everything overlays correctly, but since everything is quite relative, it would be hard to determine dynamically whether a point is rendering in its correct position.

Conclusion:

I'm looking for general advice and solutions on how I can solve this problem for non-circular objects, and/or insight on any parts I can improve; I've really just tried to fix this up overall and get as much as possible rendering from the laser slices!
I'd really appreciate any criticism / advice on things I can do better!