As explained in a previous article, simulating a camera (essentially converting 3D objects to a 2D projection using specific camera parameters) is one thing, but getting metric information out of a photograph is quite another. There are several ways to achieve this: you can use either a digital terrain model together with one oriented image, or a pair of oriented images (a stereoscopic pair). Some techniques require solid mathematical knowledge and skills, while others are more graphical (meaning you can actually visualize the solution and even depict it graphically). In this article I focus on a stereoscopic technique: computing a point's 3D space coordinates from two images using camera ray reconstruction.
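To make the ray-reconstruction idea concrete, here is a minimal sketch of the core geometric step: given the two rays reconstructed from the two images, find the 3D point. In practice the rays rarely intersect exactly, so the usual choice is the midpoint of the shortest segment between them. The function name and the origin/direction representation are my own illustration, not part of the original article.

```python
import numpy as np

def intersect_rays(p1, d1, p2, d2):
    """Closest point to two (possibly skew) 3D rays.

    Each ray is parameterized as p + t * d. Since the two reconstructed
    camera rays rarely meet exactly, we return the midpoint of the
    shortest segment connecting them.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = d1 @ d2                  # cosine of the angle between the rays
    w = p1 - p2
    denom = 1.0 - b * b          # zero when the rays are parallel
    # Least-squares solution minimizing |(p1 + t1*d1) - (p2 + t2*d2)|^2
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    c1 = p1 + t1 * d1            # closest point on ray 1
    c2 = p2 + t2 * d2            # closest point on ray 2
    return (c1 + c2) / 2.0
```

With perfectly oriented images the two closest points coincide; with real data their distance is a useful quality check for the orientation.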
It's not often that someone asks you to re-invent the wheel. But this was one of those times, and it was mostly necessary.
The task involved projecting already-digitized city-block outlines onto camera images.
A normal approach would be something along the lines of drawing the shapes in a 3D scene (DirectX, OpenGL, etc.) with the image shown in the background. But when you want to accurately simulate a real, existing camera, it seemed easier to me to build the whole construction from scratch.
In order to do that we need to know both the interior and the exterior orientation of the camera:
- the interior orientation consists of the position of the principal point, the focal length, and the radial distortion. Some manufacturers provide these values (at least for photogrammetric cameras), but most do not. There are ways to calibrate a camera yourself, but that is a different topic entirely.
- the exterior orientation is the position (x, y, z) and the rotations (ω, φ, κ) of the camera at the moment of the shot.
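The three exterior-orientation angles are normally combined into a single rotation matrix. As a sketch, assuming the common photogrammetric convention of rotating about x (ω), then y (φ), then z (κ); other conventions exist, so you must match the one your camera or survey data actually uses:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix from the exterior-orientation angles (radians).

    Assumes R = R_kappa @ R_phi @ R_omega (rotate about x, then y, then z).
    This is one common convention; check which one your data follows.
    """
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```

Whatever the convention, the result should be orthonormal with determinant +1, which makes for an easy sanity check.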
So what we need to do is go from city-block coordinates to image coordinates.
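The standard tool for going from object-space coordinates to image coordinates is the collinearity equations. Below is a minimal sketch, assuming a world-to-camera rotation matrix `R`, camera position `X0`, and focal length `f`; radial distortion is ignored for brevity, and sign conventions for the image axes vary between implementations:

```python
import numpy as np

def world_to_image(X, X0, R, f, x0=0.0, y0=0.0):
    """Project a world point to image coordinates (collinearity equations).

    X      : world point, shape (3,)
    X0     : camera position, shape (3,)
    R      : world-to-camera rotation matrix, shape (3, 3)
    f      : focal length (same units as the image coordinates)
    x0, y0 : principal point offset
    Radial distortion is omitted here for brevity.
    """
    u, v, w = R @ (X - X0)       # point in the camera coordinate frame
    x = x0 - f * u / w
    y = y0 - f * v / w
    return np.array([x, y])
```

Applying this to every vertex of a city-block outline yields the polygon to draw on top of the photograph.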
The main idea is described in the following steps: