Getting started with Photogrammetry
Capturing the world in a future-proof data form
With recent advancements in computer vision, spatial computing devices (VR/AR headsets), and aerial capture technology, a 3D scanning technique called photogrammetry is becoming more accessible to game developers, photographers, and VFX artists.
You’ve probably seen all the cool videos, played games with hyper-realistic assets, or even visited photogrammetry locations in VR. So, here’s how to get your feet wet with photogrammetry capture. You’ll need:
- Camera (preferably with a high megapixel count, but I’ll be using an iPhone 7 Plus for this tutorial)
- Photogrammetry software (Reality Capture $40/month, Agisoft Photoscan $179, Autodesk Recap 360 free for students)
- Patience (and lots of it)
Choosing the subject
You can divide photogrammetry into two types of subjects: interior captures and exterior captures.
The difference isn’t in how the subjects get processed into 3D, but in how you take the pictures.
Interior subjects (inside-out photography)
When it comes to capturing the interiors of environments, the best tools at your disposal will be:
- A camera, like a DSLR, that can handle low-light situations (newer smartphones can be decent at this as well)
- A tripod (if the space is really dark and you need longer exposure times)
- TONS of overlapping pictures (more on this later)
Exterior captures (capturing houses or singular objects)
Exterior captures can be somewhat easier to begin with, purely because they’re harder to mess up. You have a subject that you can walk (or fly) around, taking overlapping photos along that path.
The scale of the subjects can vary. You can capture everything from the exterior of a building to something as small as a pinecone. It all depends on what cameras you have access to.
Now that you’ve decided what you want to capture, let’s start with planning the capturing method.
For this example, I’ll be shooting with my iPhone 7 Plus using the stock Camera app (12MP camera with ƒ/1.8).
Walk around your subject and take a photo at every step. Be sure to stop for each photo to increase sharpness and reduce rolling-shutter issues. You can download the 97 images to work with the same data set.
Keep the lighting consistent
Minimize the capture time to keep the lighting even between the first and the last photo for your data set. This won’t be a problem for this example, but can be an issue for much larger projects.
This scan of the Geghard Monastery was done over the course of an hour or so. As you can tell, the sun moved and created two different angles of shade.
Remember, shadows move more slowly when the sun is high overhead (~12:00 PM) than at dawn or dusk. Ideally, a cloudy day works best: shadows are at their most diffuse, and lighting stays consistent throughout your model.
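The intuition behind shooting near midday can be sketched with a little trigonometry: a vertical object’s shadow length is its height divided by the tangent of the sun’s elevation, so the same 10° of sun movement shifts the shadow far less when the sun is high. A minimal sketch (the pole height and elevation angles are made-up illustration values):

```python
import math

def shadow_length(height_m: float, sun_elevation_deg: float) -> float:
    """Length of the shadow cast by a vertical object of the given height."""
    return height_m / math.tan(math.radians(sun_elevation_deg))

# How much a 1 m pole's shadow changes while the sun moves 10 degrees:
near_noon = shadow_length(1, 70) - shadow_length(1, 80)   # high sun
near_dusk = shadow_length(1, 10) - shadow_length(1, 20)   # low sun
print(round(near_noon, 2))  # ~0.19 m
print(round(near_dusk, 2))  # ~2.92 m
```

The same 10° of sun travel moves the shadow roughly fifteen times farther near dusk, which is why low-sun captures go inconsistent so quickly.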
Two important details to remember:
Capturing for alignment: To make sure all your photos align into one 3D model, your pictures need a lot of overlap (at least 50%). It’s best to use a wide-angle lens (10mm–18mm) to reduce the total number of overlapping photos you need. It’s also better to stand back and capture beyond the subject you’re interested in.
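You can estimate how many photos one orbit of a subject takes from the overlap requirement: if the camera always points at the subject, each step around the orbit can cover at most the horizontal field of view times (1 − overlap). A rough sketch (the ~60° field of view is an assumption for a phone-class wide camera; check your own lens):

```python
import math

def photos_per_ring(fov_deg: float, overlap: float) -> int:
    """Rough count of photos for one full orbit of a subject.

    Assumes the camera always points at the subject, so consecutive
    frames keep `overlap` coverage when the camera advances by
    fov_deg * (1 - overlap) degrees around the orbit.
    """
    step_deg = fov_deg * (1 - overlap)
    return math.ceil(360 / step_deg)

print(photos_per_ring(60, 0.5))  # 12 photos per ring
print(photos_per_ring(30, 0.5))  # 24 photos per ring
```

Note how halving the field of view doubles the photo count, which is why a wide-angle lens keeps data sets manageable.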
Capturing for texture: The other important side of the capture method is to ensure you have close-up photos of the subject, to get fine texture and geometry into your scan. It’s important to take intermediate pictures between your overview “alignment” photos and these close-up “texture” photos, so the software can make sense of the camera movement.
To keep this tutorial at an introductory level, I’ll be using Autodesk’s Recap 360, which is a drag-and-drop, cloud-based platform that doesn’t require any processing on a local machine. As a result, you have less control over the overall quality of the scan, but it’s a good place to start!
Also because all Autodesk software is free for students!
Start by uploading your data set (all captured photos) to Recap, and toggling on Smart Cropping, Smart Texture, and Nadir optimization.
Recap will notify you when the processing is done (via email) and you’ll be able to view/download your new model!
As you can tell, although the subject of the scan was only the firepit, Recap reconstructed the entire backyard (despite having the smart crop option toggled).
What you’ll also notice is that vegetation never looks good in scans. Thin strands that move in the wind rarely produce acceptable results. Avoid capturing them if possible.
Since Recap doesn’t support cropping point clouds and meshes before reconstruction to produce just the firepit, you can use Meshlab to clean up the model.
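The core idea behind that cleanup step is simple: keep only the geometry that falls inside a bounding box around your subject and delete the rest. Meshlab does this interactively with its selection tools; here’s a minimal pure-Python sketch of the concept, with made-up coordinates for illustration:

```python
def crop_points(points, box_min, box_max):
    """Return only the points inside an axis-aligned bounding box."""
    return [
        p for p in points
        if all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))
    ]

# Pretend the firepit sits near the origin and the stray backyard
# geometry is farther out (illustrative values, not real scan data):
scan = [(0.1, 0.2, 0.0), (5.0, 9.0, 1.0), (-0.3, 0.4, 0.2)]
firepit_only = crop_points(scan, box_min=(-1, -1, -1), box_max=(1, 1, 1))
print(firepit_only)  # [(0.1, 0.2, 0.0), (-0.3, 0.4, 0.2)]
```

In practice you’d do this on the mesh itself inside Meshlab (select vertices, invert if needed, delete), but the filtering logic is the same.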
You can download my model to compare with your results.
With more advanced software like Agisoft Photoscan and Reality Capture, you can do a lot of editing and adjustments to the meshes after they’ve been reconstructed.
I’ll be comparing the results of Agisoft Photoscan and Reality Capture in a more advanced tutorial article soon!
Az Balabanian is a filmmaker and holographer exploring volumetric capture methods and aerial cinematography.