metric information from camera ray reconstruction

As explained in a previous article, simulating a camera (which basically means converting 3D objects to 2D projective geometry with specific camera parameters) is one thing, but getting metric information out of a photograph is another thing entirely. There are several ways of achieving this: you can either use a digital terrain model and one oriented image, or a pair of oriented images (stereoscopic). In this article I am going to focus on a stereoscopic "technique". Some "techniques" require good mathematical knowledge and skills, while others are a bit more graphical (meaning you can actually imagine the solution and even depict it graphically). Here I will try to explain how to compute a point's 3D space coordinates from two images using camera ray reconstruction.


I am going to completely skip the pure photogrammetric solution, which involves partial derivatives and other semi-complex mathematics, and focus instead on a more "graphical" solution. So, here it goes:


figure 1 : top view


figure 2 : front view

I love GeoGebra for designing geometrical figures, but unfortunately at the time of writing it only supports 2D figures, which makes explaining some things a bit more challenging.

As we can see in figure 1, the whole idea is to create two lines (the red ones), each passing through a pair of known points, one line for each photograph.

those points are:

A[x0, y0, -f] & E[x, y, 0]
B[x0, y0, -f] & Z[x, y, 0]

with A & B being the principal points of the cameras, and E & Z our corresponding image observations, all expressed in each camera's local coordinate system.
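To make this a bit more concrete, here is a minimal sketch in Python (with numpy) of those two rays in their local camera coordinate systems; the values of f and the image observations are made-up example numbers:

```python
import numpy as np

# interior orientation (example values): principal point offsets and focal length
x0, y0, f = 0.0, 0.0, 0.05          # metres

# photo 1: ray through A (camera point) and E (image observation)
A = np.array([x0, y0, -f])
E = np.array([0.012, -0.003, 0.0])   # hypothetical image coordinates

# photo 2: ray through B and Z
B = np.array([x0, y0, -f])
Z = np.array([-0.010, -0.002, 0.0])  # hypothetical image coordinates

# each ray can be stored as (origin, direction)
ray1_local = (A, E - A)
ray2_local = (B, Z - B)
```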

The next step would be to find the point of intersection of those lines. As it's pretty easy to imagine, two lines in 3D space almost never actually intersect (except by pure luck, given the inevitable lack of accuracy). What we can do instead is find the point of convergence, by finding the shortest segment between the two lines, which is a unique value. Based on the length of that segment we can estimate the precision of our method, and from the segment itself (e.g. its midpoint) an approximate 3D space position of our observation.
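Here is a small sketch of that idea: given two lines, each defined by an origin point and a direction vector, the function below returns the two mutually closest points (the segment joining them is perpendicular to both lines) and the length of that segment. The names and tolerance are my own choices:

```python
import numpy as np

def closest_points_on_lines(p1, d1, p2, d2):
    """Closest points on the lines p1 + t*d1 and p2 + s*d2,
    plus the length of the segment joining them."""
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p2 - p1
    e, g = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("the lines are (nearly) parallel")
    # parameters where the joining segment is perpendicular to both lines
    t = (e * c - b * g) / denom
    s = (b * e - a * g) / denom
    q1, q2 = p1 + t * d1, p2 + s * d2
    return q1, q2, np.linalg.norm(q1 - q2)
```

The midpoint (q1 + q2) / 2 is the estimated point of convergence, and the returned length is the precision indicator mentioned above.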

To do so, though, I skipped a step: we first have to transform the pairs of points defining those lines from the local coordinate system of each camera to real-world coordinates.

So we need to know the position of each camera's principal point in real-world coordinates, as well as its three rotations (ω, φ, κ); only then can we transform the local coordinates into real-world coordinates.

so, for each camera:

X = X₀ + R(ω, φ, κ) · x

where X₀ is the camera's position in real-world coordinates, R(ω, φ, κ) is the rotation matrix built from the three rotations, and x is a point expressed in the camera's local coordinate system.
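A possible sketch of that transformation is below. Note that photogrammetric conventions differ in the order in which the three rotations are applied (and in whether R or its transpose is used), so the order chosen here is only one common option:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix from the three camera rotations, in radians.
    Assumed order: R = Rz(kappa) @ Ry(phi) @ Rx(omega)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def local_to_world(x_local, cam_position, R):
    """X = X0 + R · x : camera-local coordinates to real-world coordinates."""
    return cam_position + R @ x_local
```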




then, by using "The shortest line between two lines in 3D" by Paul Bourke, we can estimate the 3D space position of our expected point of intersection.
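Putting it all together, and reusing the hypothetical points and helper functions from the sketches above (the exterior orientations are again made-up example values):

```python
# exterior orientation of the two cameras: position (m) and rotations (rad)
X0_1, R1 = np.array([10.0, 20.0, 100.0]), rotation_matrix(0.01, -0.02, 0.10)
X0_2, R2 = np.array([35.0, 21.0, 100.0]), rotation_matrix(0.00, 0.03, 0.12)

# transform both defining points of each ray into real-world coordinates
A_w, E_w = local_to_world(A, X0_1, R1), local_to_world(E, X0_1, R1)
B_w, Z_w = local_to_world(B, X0_2, R2), local_to_world(Z, X0_2, R2)

# shortest segment between the two reconstructed rays
q1, q2, gap = closest_points_on_lines(A_w, E_w - A_w, B_w, Z_w - B_w)

point_3d = (q1 + q2) / 2.0   # estimated 3D position of the observed point
print(point_3d, gap)         # 'gap' gives a rough idea of the precision
```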