Levels are instruments that are heavily used in construction. Because they are relatively inexpensive, they are very popular, and it is important for them to be accurate. Because end-of-line inspection is still performed by a person in many cases, I thought this would be an interesting Computer Vision project that could help the industry.
Finding a solution for this problem was complicated by the fact that there is no set of rules to follow and no clear intuition about the best or most efficient algorithm to apply.
I learned that developing a more or less optimal solution would require a high-resolution camera, a telecentric lens, proper lighting, and an accurate level indicator/platform. These items can cost tens of thousands of dollars, so I had to make some compromises.
- Using a camera from my laptop.
There are obvious pitfalls with this, including mediocre resolution, auto-focus behavior, and reflections in the image.
- Establishing "0" using an external reference.
In this case, I used the inclinometer built into my cell phone. While not perfect, I used its reading as the "0" degree reference (perfect level) and as the angle measurement when needed.
- Lighting
This really was one of the most challenging aspects. It took a lot of trial and error to find a setup that produced pictures that were not only clear enough but also free of glare and other unwanted effects. I tried flashlights, cell phone lights, and plain ambient light, in several configurations, to obtain the best image possible.
- Individual pictures versus a "live" image.
A live image would be the solution on a production line; however, this was not possible here. A few live images were taken, but the resolution was poor and the processing time proved excessive. For this reason, I decided to capture the needed images with my cellphone and process them later.
- "Defective" Levels
I acquired two levels of slightly different quality, but I have no reason to believe that either of them is inaccurate. To test my application, I had to create scenarios in which the bubble would appear to have large errors. The working assumption was that the images captured, while not ideal, would provide a sufficient range of situations for my application to work with.
To establish a scale I could compare against, I used an inclinometer application on my phone to establish perfect level. Referencing the inclinometer, the reading with the bubble moved from the center to one edge was 0.200 degrees, and, using a number of other tools and devices, I established that this center-to-edge travel of the bubble was approximately 0.7 mm.
The scale is therefore 0.200 degrees / 0.7 mm, or roughly 0.286 degrees per mm of movement: for every millimeter the bubble moves, the level is tilted by an additional 0.286 degrees.
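Just to make the arithmetic explicit, the scale works out as follows (a small Matlab sketch; the variable names are my own):

    % Scale between bubble displacement and tilt angle (measured values from above).
    tiltAtEdge_deg  = 0.200;    % inclinometer reading with the bubble at one edge
    travelToEdge_mm = 0.7;      % bubble travel from the center to the edge
    scale_degPerMm  = tiltAtEdge_deg / travelToEdge_mm;   % ~0.286 deg per mm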
To take accurate pictures and simulate the conditions I needed, I used shims to bring the bubble to the center while referencing the inclinometer.
Limits were set based on the manufacturer's specification of 0.0005 inches per inch, or 0.029 degrees: the claim is that the level is accurate to within +/- 0.029 degrees when the bubble is read on a level surface.
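As a sanity check on the units, the slope specification converts to degrees with a one-line computation (this assumes the spec is expressed as rise over run):

    % Convert the manufacturer's slope specification to degrees.
    spec_slope = 0.0005;               % 0.0005 in per in (equivalently 0.5 mm per m)
    spec_deg   = atand(spec_slope);    % ~0.0286 deg, quoted as 0.029 deg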
Several images were captured and saved with my cellphone, and the ones that appeared most useful were kept. The test specimen was then slightly adjusted using shims, so that I could "tip" the level just enough to cause a slight movement of the bubble. This simulated a "bad" bubble, i.e., a bubble that was not exactly at 0.
In the end, images were taken of "good" bubble locations, "bad" bubble locations, and some in between. All of this was done to verify that my software could detect how far the bubble had moved from the center, calculate the resulting "angle", and compare it to a pass/fail standard, in this case 0.029 degrees.
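Putting the scale and the limit together, the pass/fail decision reduces to a simple comparison; here is a sketch with an example displacement (the numbers and variable names are illustrative only):

    % Pass/fail decision for one measurement (example numbers).
    scale_degPerMm  = 0.200 / 0.7;     % ~0.286 deg of tilt per mm of bubble travel
    tolerance_deg   = 0.029;           % manufacturer's accuracy claim
    displacement_mm = 0.08;            % example: measured distance of the bubble from center
    error_deg       = displacement_mm * scale_degPerMm;    % ~0.023 deg
    isPass          = abs(error_deg) <= tolerance_deg;     % true means the level passes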
Application Development
After calculating all measurements needed and finding the best method to acquire satisfactory images, I was finally ready to write the application.
I had to reconsider the programming language for this project and decided to use Matlab. Even though I still had a hard time finding the right way to accomplish my goals, Matlab proved very advantageous because it allowed me to try methods and algorithms that are already built in, without investing too much time.
After first trying template matching without promising results, I decided that, of the methods we studied in class, the combination most likely to give good results would be line detection using the Hough Transform, followed by Active Contour to find the bubble once its approximate position was located.
The application takes all images placed in a certain directory and applies the following steps to each:
- I learned that lines/edges are detected more reliably when they are not perfectly horizontal or vertical, so the image of the level is first rotated by 30° and a filter is applied to smooth out noise.
- The image is converted to grayscale, a binary image is obtained from it, and the Canny edge detection algorithm is applied.
- The Hough Transform is then used to find the exact locations of the edges of interest (see the first sketch after this list).
- My idea was that, to locate the bubble, I could start by finding the longest line in the image.
From there, using various calculated measurements and sizes, I could detect the perpendicular edges corresponding to the markers painted on the vial.
- Having the rectangular area that surrounds the bubble allows me to create a mask. This process proved to be very challenging, mostly due to imperfections in the images acquired using the cellphone camera.
- Using the grayscale image and the mask, Active Contour is applied and a segmented image of the bubble is obtained (see the second sketch after this list).
- Matlab has another useful function called 'regionprops'. It measures various properties of the regions it finds in an image, such as area, bounding box, centroid, and major and minor axes.
For each bubble, 'regionprops' provides the center and the angle of rotation of the bubble.
- Using these results and the rest of the physical measurements I calculated, the axes and the bubble's movement from center are drawn on the image of the bubble, along with the calculated error information.
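The two sketches below illustrate how these steps can be strung together with standard Image Processing Toolbox functions. They are simplified versions of the idea, not my exact code: the file name, the Hough parameters, the mask rectangle, and the pixel-to-mm conversion are placeholders that would come from the actual measurements.

First sketch, locating the longest edge with the Hough Transform:

    % Sketch: find the longest edge in the rotated, smoothed image of the level.
    I     = imread('level_01.jpg');            % placeholder file name
    I     = imrotate(I, 30, 'bilinear');       % avoid purely horizontal/vertical lines
    gray  = rgb2gray(I);
    gray  = imgaussfilt(gray, 2);              % smooth out noise
    bw    = imbinarize(gray);
    edges = edge(bw, 'canny');                 % Canny edge detection

    % Standard Hough transform to find strong straight-line candidates.
    [H, theta, rho] = hough(edges);
    peaks = houghpeaks(H, 20);
    lines = houghlines(edges, theta, rho, peaks, 'FillGap', 10, 'MinLength', 40);

    % Keep the longest detected line as the reference edge of the vial.
    lens     = arrayfun(@(s) norm(s.point2 - s.point1), lines);
    [~, idx] = max(lens);
    refLine  = lines(idx);                     % struct with fields point1, point2, theta, rho

Second sketch, segmenting the bubble with Active Contour and measuring it with 'regionprops':

    % Sketch: segment the bubble inside a rectangular mask and measure it.
    gray = rgb2gray(imrotate(imread('level_01.jpg'), 30, 'bilinear'));

    % Rectangular mask around the vial; in the real application this rectangle
    % comes from the Hough step above (hard-coded here only for illustration).
    mask = false(size(gray));
    mask(200:260, 300:500) = true;             % placeholder region of interest

    % Active Contour (Chan-Vese) segmentation of the bubble.
    bubble = activecontour(gray, mask, 300, 'Chan-Vese');

    % 'regionprops': area, centroid, orientation, and axes of the segmented region.
    stats = regionprops(bubble, 'Area', 'Centroid', 'Orientation', ...
                        'MajorAxisLength', 'MinorAxisLength');
    [~, biggest] = max([stats.Area]);          % keep the largest region found
    bubbleCenter = stats(biggest).Centroid;    % [x y] in pixels

    % Convert the pixel offset from the vial center into degrees of tilt.
    vialCenter_px  = [400, 230];               % placeholder: midpoint between the vial markers
    mmPerPixel     = 0.05;                     % placeholder: from the known marker spacing
    scale_degPerMm = 0.200 / 0.7;              % measured earlier
    offset_mm = (bubbleCenter(1) - vialCenter_px(1)) * mmPerPixel;
    error_deg = offset_mm * scale_degPerMm;    % compared against the 0.029 deg tolerance
    fprintf('offset = %.3f mm, error = %.3f deg\n', offset_mm, error_deg);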
The application then moves the processed images to a temporary folder and saves the data obtained in a file.
I chose to save the data as a .txt file because it can easily be opened by any application.
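For this bookkeeping step, the move and the save can be as simple as the following sketch (the file and folder names are placeholders, and the measurement values are examples):

    % Sketch: move the processed image and append the measurements to a text file.
    offset_mm = 0.08;  error_deg = 0.023;  isPass = true;    % example values
    movefile('level_01_processed.png', 'processed_tmp/');    % temporary folder (placeholder)

    fid = fopen('results.txt', 'a');                          % plain .txt, opens anywhere
    fprintf(fid, '%s  offset_mm=%.3f  error_deg=%.3f  pass=%d\n', ...
            'level_01.jpg', offset_mm, error_deg, isPass);
    fclose(fid);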
Please see "Test Results" on the next page for several examples.