First photogrammetry result with an iPhone and an M1 Mac
17 Nov 2021

A few days ago I captured a 4K video of a lion sculpture in the Burggarten in Vienna. Taking a video feels more fluid and intuitive than taking individual photos, but it can produce blurry frames and thus bad scans. It was a sunny day, though, so that wasn't a problem here.
A downsampled version of the video.
When I got home I transferred the file to my Mac and opened it with PhotoCatch. The app can process either a video or a set of photos. When opening a video, you get to choose how many frames it should extract per second. I went with 6 per second, giving me 65 pictures in total.
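If you'd rather extract the frames yourself instead of letting PhotoCatch do it, the extraction step can be sketched in Swift with AVFoundation. The file names and the 6 fps rate below are just placeholders matching my setup:

```swift
import AVFoundation
import AppKit

// Hypothetical paths; adjust to your own video and output folder.
let videoURL = URL(fileURLWithPath: "lion.mp4")
let outputDir = URL(fileURLWithPath: "frames", isDirectory: true)

do {
    try FileManager.default.createDirectory(at: outputDir, withIntermediateDirectories: true)

    let asset = AVURLAsset(url: videoURL)
    let generator = AVAssetImageGenerator(asset: asset)
    // Ask for exact frames instead of the nearest keyframe.
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    let fps = 6.0 // the same extraction rate I used in PhotoCatch
    let times = stride(from: 0.0, to: asset.duration.seconds, by: 1.0 / fps)

    for (index, seconds) in times.enumerated() {
        let time = CMTime(seconds: seconds, preferredTimescale: 600)
        let cgImage = try generator.copyCGImage(at: time, actualTime: nil)
        let rep = NSBitmapImageRep(cgImage: cgImage)
        guard let jpeg = rep.representation(using: .jpeg, properties: [:]) else { continue }
        let name = String(format: "frame_%04d.jpg", index)
        try jpeg.write(to: outputDir.appendingPathComponent(name))
    }
} catch {
    print("Frame extraction failed: \(error)")
}
```

Setting both time tolerances to zero matters for photogrammetry: otherwise the generator snaps to the nearest keyframe and you can end up with near-duplicate frames.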
The model above uses the “reduced” quality setting. I exported it as USDZ and converted it to glTF with Blender. I'm using model-viewer to embed it here on the website.
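For reference, embedding the converted file with model-viewer only takes a few lines of HTML; the file name `lion.glb` is a placeholder:

```html
<!-- Load the model-viewer web component -->
<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>

<!-- camera-controls lets visitors rotate and zoom the model -->
<model-viewer src="lion.glb" alt="A lion sculpture" camera-controls auto-rotate></model-viewer>
```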
Some stats I collected when using this video:
| Model Quality | Time to Convert | Number of Polygons |
|---|---|---|
| preview | 1m 51s | 24,999 |
| reduced | 4m 35s | 31,110 |
| medium | 4m 46s | 49,999 |
| full | 5m 20s | 100,000 |
| raw | 4m 27s | 314,780 |
The conversion times are not representative, as I kept using the machine for other things while it was converting.
Overall I’m pretty impressed with the results, especially considering that this is a free tool. So far PhotoCatch covers all of my needs, but I’ll try my own implementation using RealityKit at some point.
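For the curious, here is a minimal sketch of what that RealityKit implementation might look like, using the `PhotogrammetrySession` API introduced in macOS 12. The paths are placeholders, error handling is kept to a minimum, and the detail levels (`.preview`, `.reduced`, `.medium`, `.full`, `.raw`) match the quality settings in the table above:

```swift
import RealityKit
import Foundation

// Hypothetical input/output paths.
let imagesDir = URL(fileURLWithPath: "frames", isDirectory: true)
let modelURL = URL(fileURLWithPath: "lion.usdz")

var config = PhotogrammetrySession.Configuration()
config.featureSensitivity = .normal

do {
    let session = try PhotogrammetrySession(input: imagesDir, configuration: config)

    // Request a model at the same "reduced" detail level I used in PhotoCatch.
    try session.process(requests: [.modelFile(url: modelURL, detail: .reduced)])

    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url.path)")
        case .processingComplete:
            exit(0)
        default:
            break
        }
    }
} catch {
    print("Reconstruction failed: \(error)")
}
```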