How to Capture Structure from Photographs for 3D Printing
Very high-resolution models for computer graphics can be produced using 3D scanners, but capturing objects in motion, such as a runner leaving the starting blocks, requires another technique: photogrammetry. Photogrammetry uses multiple 2D photographs to calculate the shape of objects within the field of view. Because multiple photographs can be taken at the same moment, objects in motion can be captured as easily as the still objects that 3D scanners require.
Photogrammetric results are often not as exact as scanned equivalents because all points are calculated based on differences between two photos taken at slightly different locations.
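This accuracy limit can be illustrated with the basic stereo triangulation relationship, in which a point's depth is recovered from its pixel disparity between two views. The sketch below is a simplified illustration; the focal length, baseline, and disparity values are made-up assumptions, not measurements from any real camera rig:

```python
# Stereo triangulation sketch: depth Z from disparity d (pixels), focal
# length f (pixels), and baseline B (meters): Z = f * B / d.
# All numbers below are illustrative assumptions, not a real calibration.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Recover the depth (meters) of a point seen in two horizontally offset views."""
    return focal_px * baseline_m / disparity_px

f_px = 1000.0   # assumed focal length in pixels
b_m = 0.10      # assumed 10 cm baseline between the two camera positions

# A one-pixel matching error shifts the recovered depth, and the shift
# grows with distance -- which is why photogrammetry is less exact than
# a calibrated scanner for distant or finely detailed surfaces.
for d in (100.0, 10.0):
    z = depth_from_disparity(f_px, b_m, d)
    z_off = depth_from_disparity(f_px, b_m, d - 1.0)  # one-pixel mismatch
    print(f"disparity {d:>5.1f} px -> depth {z:.2f} m (1 px error -> {z_off:.2f} m)")
```

At a disparity of 100 pixels, a one-pixel mismatch barely moves the result; at 10 pixels the same mismatch shifts the recovered depth by more than a meter.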
Professional photogrammetry studios can use carefully measured positions for each camera, along with calibrated depth-of-field measurements for the lenses and lighting, to provide the best possible capture of living subjects. Depending on the lenses and the number of cameras used, such systems can capture details down to the individual hairs on a subject's arm.
Advances in computing power, particularly in GPGPU processing on video cards, have made photogrammetry available to home users without high-end supercomputers. Early Structure from Motion techniques have been collected into commercial software packages such as Agisoft's PhotoScan, the same package used to capture 3D models for movie CGI and by video game designers and artists who want full-body, full-motion captures of their subjects.
As computing power continues to increase, photogrammetric applications have become better at locating similar features across several different photos and calculating the relative camera position for each photograph, without the fine calibration a professional studio requires. One hobbyist used his Google Glass camera to snap repeated photos of a museum statue, which he then reconstructed into a 3D model using Autodesk's free 123D Catch application.
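The feature-matching step these applications perform can be sketched in miniature: distinctive points in each photo are summarized as binary descriptors, and points are paired across photos by finding the closest descriptor. The toy example below uses made-up 8-bit descriptors; real tools use much larger ones, such as 256-bit ORB or 128-dimensional SIFT descriptors:

```python
# Toy feature matching: pair keypoints across two photos by choosing, for
# each descriptor in photo A, the descriptor in photo B with the smallest
# Hamming distance (fewest differing bits). The 8-bit descriptors below
# are invented for illustration; real systems use far larger descriptors.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_features(desc_a: list[int], desc_b: list[int]) -> list[tuple[int, int]]:
    """For each descriptor in A, return (index_in_A, index_of_best_match_in_B)."""
    matches = []
    for i, da in enumerate(desc_a):
        j = min(range(len(desc_b)), key=lambda k: hamming(da, desc_b[k]))
        matches.append((i, j))
    return matches

# Made-up descriptors for keypoints detected in two overlapping photos.
photo_a = [0b10110010, 0b01001101, 0b11110000]
photo_b = [0b11110001, 0b10110011, 0b01001100]

print(match_features(photo_a, photo_b))  # -> [(0, 1), (1, 2), (2, 0)]
```

Once the same physical points are matched across photos, the software can solve for the camera positions and triangulate the 3D location of each point, which is the core of Structure from Motion.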
No one was aware that the museum artifact was being captured, because he was using a script that photographed whatever was directly in front of him every time he blinked.
This capability strikes fear into the designers of next year's fashion lines because their unique creations may already have been fully captured by the time a model walks off the runway at the first public showing. As additive manufacturing continues to mature, a fabricator downtown or around the world could be printing copies of the designs in wearable materials before the model even reaches her dressing area.
As part of its cloud-based 3D modeling services, Autodesk's popular 123D Catch application allows tablet and PC users to perform photogrammetry even when their local computer lacks the resources to process all of the detail matches in a reasonable amount of time.
Using an example photographic set, Kirk extracted the model of a warrior's statue at home in a little over five days. The same translation took just under three hours using the free "Create 3D Model" feature built into the browser-based Autodesk 360 interface.
Photogrammetric surface calculations can capture 3D models of statues, moving people, and animals, even when a high-resolution 3D scan would not be quick enough to capture all of the subject's details. The same systems can also stitch together photographs to build models of buildings, and even areas of land for agricultural review.
Using a quadrotor or other camera-equipped UAV, architects can capture a 3D model of a business park simply by flying the vehicle overhead and later building the model in a photogrammetry solution. Using ROV submersibles, researchers in marine archaeology departments are starting to use this capability to map wreckage and debris on the ocean floor and plan their recovery dives.