Have you ever used a pinhole camera before? Maybe you made one to look at a solar eclipse, or as a summer camp project, or in art class. The pinhole camera, or camera obscura, is one of the oldest image-making technologies we have, and here at the IRC we are seeing its potential to improve our photogrammetry work.
The principles behind the pinhole camera go all the way back to Euclid in 300 BCE, but Leonardo da Vinci is widely credited with refining and improving the camera obscura. A camera obscura is a light-proof box (it could actually be an entire room) with one very small hole. The hole lets in light, and directly opposite the hole appears an upside-down projection of whatever is outside. This happens because light travels in straight lines: rays from the top of the scene pass through the hole and land at the bottom of the projection, and rays from the bottom land at the top. Da Vinci realized that this is essentially the same way the human eye works; our brain flips the image right-side up. The smaller the hole, the sharper the image.
What does this have to do with our 94-camera photogrammetry rig? Mark Murnane, the IRC's post-baccalaureate faculty research assistant, is our photogrammetry expert, and over the course of his research he has grappled with two related issues. The first is that when light from a point in the scene hits any given camera, the computer has to figure out which pixel in the photo it corresponds to. That location is a function of the angle at which the light enters the camera. Camera lenses complicate this process, because they add distortion. Lens distortion warps the image in a variety of ways, causing the image of a single point to appear as a larger blob. The shape of this blob is described by the point spread function, which captures how photons entering from a particular angle move through the lens. In the sample image below, each star is the point spread function of a sharp point of light. In an ideal world, each photon would light up only one pixel. With practical pinhole lenses, however, diffraction causes the image of a single point to spread into an Airy disk.
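To make the idea concrete, here is a toy sketch in Python with NumPy of how a point spread function turns a single point of light into a blob: the captured image is the scene convolved with the PSF. The Gaussian kernel here is invented for the demo, not a measured PSF.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Toy point spread function: a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def convolve2d(image, psf):
    """Blur an image with the PSF via FFT-based circular convolution."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, image.shape)))

# A single point of light in an otherwise dark scene...
scene = np.zeros((32, 32))
scene[16, 16] = 1.0

# ...spreads across many pixels after passing through the optics.
blurred = convolve2d(scene, gaussian_psf(7, sigma=1.5))
print(np.count_nonzero(blurred > 1e-3))  # many pixels lit, not just one
```

Note that the convolution redistributes the light but conserves it: the blurred image sums to the same total brightness as the original point.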
Traditional lenses exhibit radial distortion, tangential distortion, skew, and other optical effects. Accounting for all of these can lead to a model with 16 or more parameters. With 94 cameras to solve together, this leads to an incredibly difficult problem.
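As a rough illustration of where those parameters come from, here is a sketch of a Brown-Conrady style distortion model, a common form in calibration software (not necessarily the exact model used at the IRC). Even this simplified version has five coefficients per camera, before adding skew, thin-prism terms, or the intrinsics themselves; the coefficient values below are invented for the example.

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Apply Brown-Conrady style lens distortion to normalized image
    coordinates: three radial coefficients (k1, k2, k3) plus two
    tangential coefficients (p1, p2)."""
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return x_d, y_d

# Even mild, made-up distortion moves a point off its ideal position.
x_d, y_d = distort(0.5, 0.5, k1=-0.1, k2=0.01, k3=0.0, p1=0.001, p2=0.001)
print(x_d, y_d)  # no longer (0.5, 0.5)
```

Calibration has to recover all of these coefficients for every camera at once, which is why the joint 94-camera solve becomes so hard.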
But a pinhole lens, especially one with an extremely small and precise hole (the one Mark tested is 0.2 mm), has much less distortion, and that distortion is also more predictable and regular. Furthermore, with the pinhole, the distortion doesn't vary by location within the image. The sample image from the pinhole (below) shows straighter lines, and the math needed to correct it is correspondingly simpler, resulting in a 3-parameter model. Having fewer free parameters means we can find a more accurate reconstruction in less time, and moving from 16 parameters to 3 makes a world of difference.
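One plausible reading of a 3-parameter model is the ideal pinhole projection, where a focal length and a principal point are all that is needed; the article doesn't spell out the exact parameterization, so treat this sketch (with made-up numbers) as an illustration of how simple the math becomes when there are no distortion terms at all.

```python
def project(X, Y, Z, f, cx, cy):
    """Ideal pinhole projection: a 3-D point (X, Y, Z) maps to image
    coordinates by similar triangles. The only parameters are the
    focal length f and the principal point (cx, cy)."""
    u = f * X / Z + cx
    v = f * Y / Z + cy
    return u, v

# Hypothetical camera: 800-pixel focal length, principal point (320, 240).
u, v = project(1.0, 2.0, 4.0, f=800.0, cx=320.0, cy=240.0)
print(u, v)  # 520.0 640.0
```

Because the mapping is a pure straight-line projection, there is no per-pixel correction to estimate, which is exactly what shrinks the calibration problem.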
Having a predictable point spread function also opens the door to programmatic image sharpening. By deconvolving the point spread function across a captured image, some detail lost in the capturing process may be recovered. This method has become a crucial tool in other fields such as astronomy, where it has been used to sharpen images taken from many observatories.
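A standard way to exploit a known PSF is Wiener deconvolution, sketched below in NumPy. This is a textbook technique, not necessarily the exact method used at the IRC, and the Gaussian PSF and noise level here are invented for the demo.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Wiener deconvolution: invert a known PSF in the frequency
    domain, with a small regularizer so near-zero frequencies of the
    PSF don't blow up the noise."""
    H = np.fft.fft2(psf, blurred.shape)
    G = np.conj(H) / (np.abs(H)**2 + noise_power)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Toy demo: blur a point source with a known Gaussian PSF, then recover it.
ax = np.arange(9) - 4
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / 2.0)
psf /= psf.sum()

scene = np.zeros((32, 32))
scene[16, 16] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, scene.shape)))
restored = wiener_deconvolve(blurred, psf)
print(blurred.max(), restored.max())  # the restored peak is much sharper
```

The recovery is never perfect (frequencies the PSF fully suppressed are gone for good), but a predictable, well-characterized PSF is what makes this kind of sharpening reliable.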
The trick is to figure out how to get a perfect pinhole, without any aberrations: the less precise the pinhole, the less detail in the image. Once we have figured out the optimal pinhole, we will switch all of the cameras in the photogrammetry rig to pinholes, which should in turn allow for more accurate scanning.
There is a trade-off here: is less light on the individual camera sensors worth knowing exactly where the light came from? We think it is.