This project covers the practical use of photogrammetry and won't explain the underlying algorithms in depth. Where needed, some background information will be given.
This project is a brief tutorial covering everything from acquiring multiple images to a finished 3D print of a real-world object.
I've overcome my problems with flipped normals in Meshlab. I will upload a small program to correct the normals in the PLY files in the coming days; until then I will complete the tutorial.
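Until that tool is ready, here is a minimal sketch of the idea in Python. This is not the promised program, just an illustration: it assumes an ASCII PLY file whose vertex lines start with x y z nx ny nz, and simply negates the three normal components.

```python
# flip_normals.py -- minimal sketch: negate per-vertex normals in an
# ASCII PLY file. Assumes vertex properties are ordered x y z nx ny nz.
import sys

def flip_normals(src, dst):
    with open(src) as f:
        lines = f.readlines()

    # Parse the header for the vertex count and the end of the header.
    vertex_count = 0
    body_start = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            vertex_count = int(line.split()[-1])
        if line.strip() == "end_header":
            body_start = i + 1
            break

    # Negate columns 3..5 (nx, ny, nz) of every vertex line.
    for i in range(body_start, body_start + vertex_count):
        vals = lines[i].split()
        vals[3:6] = [str(-float(v)) for v in vals[3:6]]
        lines[i] = " ".join(vals) + "\n"

    with open(dst, "w") as f:
        f.writelines(lines)

if __name__ == "__main__":
    flip_normals(sys.argv[1], sys.argv[2])
```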
I've been moving to a new apartment this week, so progress is a bit slow. Besides that, I tried a quick scan of my former bathroom.
Source:[Alexander Kroth]
The picture shows the output of CMVS, i.e. the dense point cloud. You can see that only edges are recognized, while walls and other low-texture surfaces produce almost no points. The initial photo set is a series of 116 photos with a resolution of 3008x2000. The room wasn't lit very well, so the individual images had even fewer feature points.
Source:[Alexander Kroth]
On the right you can see the tiles on the wall. They should be flat, but due to the low number of points in the center of the tiles they end up shaped like above. The shiny edge of the bathtub couldn't be captured at all, and there is a hole in the wall in the resulting mesh. Overall the mesh quality is quite low and the mesh fits reality poorly. For a better scan, additional indirect lighting would have been necessary to capture fine details on the wall and on the tiles.
I'm currently struggling to find a good workflow to create the normals for the mesh in Meshlab, but it mostly ends in trial and error... Maybe on Friday I will continue the project. Until then I will collect some sample datasets that can be tested with the tutorial and compared against the results of other users.
As the tutorial grows, I'd be happy to get some feedback to further improve it and to cover areas I haven't explained enough.
I've been working with 3D printers and 3D scanners for over two years now and wanted to share my knowledge about photogrammetry. The contents shown here are also part of a course I give at our local hackerspace in Darmstadt, the Makerspace Darmstadt. The photos in this tutorial were taken in the Felsenmeer:
Source:[http://www.felsenmeer-odenwald.de]
Roughly translated, Felsenmeer means "sea of rocks", which is quite fitting if you've been there.
Like every 3D scanning technique, photogrammetry has certain restrictions that you need to know before you start scanning. The first program we use is VisualSFM. It takes a series of images and creates a 3D point cloud from them. The problem VisualSFM solves while creating the point cloud is called structure from motion.
Source: [openmvg.readthedocs.org/en/latest/_images/structureFromMotion.png]
As shown in the picture above, points on the object are seen by multiple cameras and saved in the individual images as feature points. The camera may be a single camera moved around a still object taking multiple photos, or a group of cameras taking photos at exactly the same time. By recognizing object points in multiple photos, both the positions of the object points and the positions of the cameras at the time the photos were taken can be calculated.
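In textbook notation the problem looks like this (the general formulation, not VisualSFM's exact internals): find camera poses and object points that minimize the total reprojection error over all observed feature points.

```latex
% Structure from motion as a minimization problem: find camera poses P_j
% and 3D object points X_i so that the reprojected points match the
% feature points x_{ij} observed in the images as closely as possible.
\min_{\{P_j\},\,\{X_i\}} \; \sum_{i,j} \bigl\| x_{ij} - \pi(P_j X_i) \bigr\|^2
```

Here the sum runs over all object points i visible in images j, and pi is the perspective projection of a point in camera coordinates onto the image plane.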
This leads to the question: what qualifies as an object point that can be detected as a feature point?
Source:[Alexander Kroth]
The image above shows what the SIFT algorithm, used by VisualSFM, sees in a photo. Each arrow represents a feature point: the SIFT algorithm describes a feature point by a direction and a magnitude. You can see that there are a few strong features and many weak ones. The magnitude indicates how well a feature point can be recognized in a different photo.
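If you want to produce a picture like this yourself, here is a small Python sketch using OpenCV (4.4 or newer). VisualSFM uses its own SiftGPU implementation, so this is only an approximation of what it sees; the file name rock.jpg is just a placeholder.

```python
# sift_features.py -- visualize SIFT keypoints with scale and orientation,
# similar to the feature view shown above.
import cv2

img = cv2.imread("rock.jpg")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)

# DRAW_RICH_KEYPOINTS renders each feature as a circle (its scale) with
# a line indicating its dominant gradient orientation.
out = cv2.drawKeypoints(
    img, keypoints, None,
    flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("rock_features.jpg", out)
print(f"{len(keypoints)} feature points found")
```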
The photo shows a couple of rocks inside a forest with leaves on the ground. A rock is nearly perfect for photogrammetry: it has little to no reflections and is very textured. These are already the key requirements for photogrammetry.
To conclude this chapter here are some examples that work well with photogrammetry and some that might create problems:
Working well: rocks, statues, and other matte, well-textured, stationary objects.
Difficult objects: shiny or reflective surfaces, low-texture surfaces like plain walls and tiles, and anything that moves (trees, people, cars).
As mentioned above, the algorithm doesn't like movement of objects within the scenery you try to scan. Therefore nature and crowded places are hard to scan. Trees and bushes are quite non-stationary, as leaves and branches move. It's OK to capture a tree from a distance, so that small movements aren't visible to the camera, but close-ups are difficult. Like trees, people and cars in public spaces tend to move, and with them the object points on them, so you should avoid capturing pedestrians and moving cars while trying to scan a building, for example.
In the previous chapter I gave a short introduction to the properties necessary for capturing an object. In this chapter I'd like to describe how to shoot photos for later use in VisualSFM.
As in photography, you don't want areas that are overexposed or underexposed. If you can't light the object homogeneously, it's better to underexpose certain areas than to overexpose them. Underexposed areas offer no texture and therefore fewer feature points, while overexposed areas tend to show little sparkles, which might create false positives. False positives are, in this context, object points found by the algorithm that don't exist in the real world. Another important point is to take sharp pictures without motion blur.
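If you want to sort out problematic shots before processing, a quick check like the following can help. This is just my rough idea, not part of VisualSFM, and the thresholds are guesses you'd tune for your own pictures.

```python
# exposure_check.py -- flag images with large clipped (nearly black or
# blown-out) areas before feeding them into the photogrammetry pipeline.
import sys
import cv2
import numpy as np

def clipped_fractions(path, low=5, high=250):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    under = np.mean(gray <= low)   # fraction of nearly black pixels
    over = np.mean(gray >= high)   # fraction of blown-out pixels
    return under, over

for path in sys.argv[1:]:
    under, over = clipped_fractions(path)
    if over > 0.01 or under > 0.05:  # arbitrary example thresholds
        print(f"{path}: {under:.1%} underexposed, {over:.1%} overexposed")
```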
Source:[Alexander Kroth]
The SIFT algorithm used by VisualSFM detects object points in multiple images. To achieve this, object points must still be recognizable despite the following changes from one picture to the next: position/orientation, light, and scale.
Knowing these basic principles, how can we optimize our pictures for the algorithm?
As you can see in the image above, I've moved around the stone bit by bit, taking images with only a small change in perspective from picture to picture. This helps the algorithm by keeping the change in position/orientation small. Even though the algorithm can detect object points in two images with a huge difference in perspective, it is better to create a series of images with only small changes in perspective, to get more correctly recognized object points and therefore a denser point cloud. Mixing horizontal and vertical pictures doesn't seem to be a problem and won't affect the matching process. Matching is the process of comparing two feature points and deciding whether they represent the same object point in the real world.
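To get a feeling for what matching does, here is a sketch of the principle using OpenCV's brute-force matcher with Lowe's ratio test. VisualSFM's SiftGPU matcher works differently internally, and the image names below are placeholders.

```python
# match_pair.py -- illustrate pairwise feature matching: detect SIFT
# features in two images and keep only unambiguous matches.
import cv2

def match_images(path_a, path_b):
    sift = cv2.SIFT_create()
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # For each descriptor in A, find its two nearest neighbours in B.
    matcher = cv2.BFMatcher()
    candidates = matcher.knnMatch(des_a, des_b, k=2)

    # Lowe's ratio test: keep a match only if it is clearly better than
    # the second-best candidate, rejecting ambiguous feature points.
    good = []
    for pair in candidates:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    return good

matches = match_images("img_001.jpg", "img_002.jpg")  # placeholder files
print(f"{len(matches)} matching feature points")
```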
If you have the time, it's always better to take more photos. You might now think of using a movie for structure from motion. It works, but you have to keep a few more things in mind, which I might explain in an additional project.
The light conditions in the pictures above are pretty much perfect for outdoor structure from motion: a foggy sky with indirect sunlight. Clouds tend to move and... you know. The light conditions should be homogeneous with little direct light. I wouldn't recommend using the flash, because it's easy to create small overexposures that produce false positives, and the flash depends on the current position of the camera, so you create quite different light conditions from picture to picture. It's better to use a stationary lamp and think about light before starting to capture photos.
The last point mentioned is scale. With the rock above, you might want to capture the rock as a whole and then capture some details on certain parts of it. Simply taking a series of pictures from a distance and then taking some close-up pictures won't work. As mentioned earlier, your camera might not see the texture from a distance that you can see, so it won't find the object points within the details you're trying to capture from afar. It's better to create a series of pictures moving towards the detail and away from it, giving the algorithm a "guide". To capture multiple parts of an object and even details, it's good to take pictures in a flow: don't interrupt taking pictures, and only make small changes in perspective from picture to picture.
In the steps before, we discussed the principles of taking suitable pictures. Now we create a point cloud based on those pictures. First, download and unzip VisualSFM. If you have a recent Nvidia GPU, choose the 64-bit CUDA version. CUDA is a framework for parallel processing on GPUs and makes the process of matching images notably faster (10x or more...). Now download the CMVS program and unzip it into the same folder as VisualSFM.
Source:[Alexander Kroth]
In this tutorial I will only explain the necessary buttons. VisualSFM offers many options for optimization and different view options, but only the necessary ones will be covered. To enable GPU processing, choose Tools->Enable GPU->Standard Param and Tools->Enable GPU->Match using CUDA. If you don't have CUDA available, you have to check Tools->Enable GPU->Disable SiftGPU and Tools->Enable GPU->Match using GLSL.
Click on File->Open+ Multi Images. Now choose all images you want to process and open them. Note that when opening large numbers of files (500+), VisualSFM sometimes crashes or won't load any images at all. You can now view the pictures by clicking and dragging, or zoom in.
Once all images have been loaded, which can take some time depending on the image size, we want to detect and match the feature points of the given images. First click on SfM->Pairwise Matching->Show Match Matrix. The same view can be accessed by clicking on the colored-tiles symbol while holding Shift. Now all images are shown on both the x- and y-axis. Next, choose SfM->Pairwise Matching->Compute Missing Match.
Source:[Alexander Kroth]
VisualSFM now starts to apply the SIFT algorithm to every image. The picture above shows the output for a series of pictures. The last column is the time needed to find all feature points in the picture; the one before it gives the number of feature points found. A low number of feature points means the SIFT algorithm couldn't find many recognizable object points in the picture. You should aim for a high number of feature points for matching.
After finding all feature points, VisualSFM starts to match the feature points against each other to find object points visible in multiple pictures. The diagonal of the matrix would mean an image is matched against itself; as there is no use in doing that, the diagonal stays white, which means no or very few matches have been found there. A dark red color represents a high number of matching feature points, while yellow and green represent lower numbers. The log window shows the number of matches for a pair of images and the time taken to compute them. Note that both computing the SIFT features and matching them is VERY demanding on your hardware, so you shouldn't be working on the PC while it is matching.
Be patient while matching; depending on the picture size and the number of pictures, it takes minutes to hours to complete. You can stop the matching process with Ctrl+C, but be warned that VisualSFM tends to crash if matching is stopped. If this happens, simply restart the program and open your pictures again; VisualSFM keeps track of the SIFT features and matches already found. The image above shows a finished matching process. You can now see how the pictures are matched: if you draw a horizontal line from one of the pictures on the right side, you can see all pictures that have matching feature points at the bottom of the matrix. If you have small islands inside this picture with nothing horizontally or vertically next to them, these images tend to create separate point clouds not connected to the main one.
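You could also check for such islands programmatically by treating the match matrix as a graph and looking for connected components. The sketch below assumes you've collected the pairwise match counts into a matrix yourself, for example from the log output; the minimum-match threshold is an arbitrary example value.

```python
# islands.py -- detect disconnected groups of images in a pairwise
# match-count matrix; each group would end up as a separate point cloud.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

MIN_MATCHES = 20  # treat pairs below this as effectively unconnected

def find_islands(match_counts):
    adjacency = csr_matrix(match_counts >= MIN_MATCHES)
    n_groups, labels = connected_components(adjacency, directed=False)
    return n_groups, labels

# Toy example: images 0-2 match each other, image 3 matches nothing.
counts = np.array([
    [0, 150, 80, 0],
    [150, 0, 200, 0],
    [80, 200, 0, 5],
    [0, 0, 5, 0],
])
n_groups, labels = find_islands(counts)
print(f"{n_groups} separate model(s); image groups: {labels}")
```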
Additional Information:
If you have a series of pictures that don't form any loops (for example, you walk in a straight line for a while taking pictures, as opposed to walking in circles), you won't need to match a picture taken hundreds of meters away from another picture against said picture. For this occasion you can use SfM->Pairwise Matching->Compute Sequence Match. VisualSFM then only matches pictures within a given range, for example 10 pictures before and after each picture. Keep in mind to name the pictures according to their logical order, e.g. the first picture of a path is 1 and the last one is 900, with increasing numbers in between.
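To illustrate how much work this saves, here is a tiny sketch of the image pairs a sequence match with a window of 10 would cover, compared to exhaustive pairwise matching.

```python
# sequence_pairs.py -- enumerate the index pairs covered by a sequence
# match with a fixed window; this is why the files must be named in
# their logical capture order.
def sequence_pairs(n_images, window=10):
    """Yield all image index pairs within the given window."""
    for i in range(n_images):
        for j in range(i + 1, min(i + window + 1, n_images)):
            yield i, j

# With 900 pictures and a window of 10, only 8945 pairs are matched
# instead of the 900*899/2 = 404550 pairs of exhaustive matching.
pairs = list(sequence_pairs(900, window=10))
print(len(pairs))  # 8945
```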