We humans learn to recognize patterns from the day we first open our eyes. That’s a bird! That’s an airplane! Those are the letters A, B and C! If we couldn’t pick out specific objects from our surroundings and categorize them, our sense of sight would be of little use.
Machine vision systems also need to be able to find and recognize patterns. The first step in any machine vision task is pattern matching, i.e. locating an object within the field of view based on an expected arrangement of shape attributes like edges. How does this process work?
As it turns out, there are numerous ways to perform pattern matching; the task involves a significant amount of number crunching, so it’s crucial to optimize the process. At the most basic level, it uses the intensities of individual pixels within an image to determine where the contours (or edges) of a shape lie. If the arrangement of those contours is statistically similar enough to that of a template pattern, the vision system reports a match.
Contrast Is King
Since the detection of edges is based upon the varying intensities of individual pixels, good contrast is very important. This requirement has led to all sorts of innovations in lighting arrangements, such as on-axis illumination that shines light perpendicular to a surface and dome illumination that provides consistent lighting from all directions. It’s also necessary to minimize ambient light and the glare coming from reflective surfaces. Good illumination maximizes the detection of relevant edges and minimizes the perception of edges that don’t really exist.
There are also a number of image processing algorithms that can bring out certain features of images when good lighting alone isn’t enough. One example is thresholding, in which a histogram of the number of pixels at each intensity is computed to determine an intensity threshold. The algorithm then works through the image pixel by pixel, setting every pixel below the threshold to black and every pixel above it to white.
Example of thresholding in Visionscape Machine Vision Software.
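To make this concrete, here is a minimal pure-Python sketch of histogram-based thresholding. It picks the threshold with Otsu’s method, a common histogram-based choice that maximizes the separation between the dark and bright pixel classes; the article doesn’t say which method Visionscape uses, so treat this as one illustrative option.

```python
def otsu_threshold(pixels):
    """Pick the 8-bit intensity threshold that maximizes the between-class
    variance of the histogram (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0   # running intensity sum of the "background" (dark) class
    w_bg = 0       # running pixel count of the background class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, t):
    """Set pixels at or below the threshold to black (0), the rest to white (255)."""
    return [0 if p <= t else 255 for p in pixels]
```

For a strongly bimodal image (say, a dark part on a bright background), the computed threshold falls between the two intensity clusters and the binarized result cleanly separates part from background.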
In Search of a Pattern
Before a machine vision system can play the industrial version of “Where’s Waldo?” on the product line, it needs to know what it’s looking for. (Probably not Wizard Whitebeard.) The operators of the system tell it what to look for by providing a template image. The system then memorizes the shape, including where the edges are and how far apart they are relative to the overall size of the template pattern. It’s important for the vision system to be able to find the same pattern in various sizes and orientations.
Once the system knows what it wants to find, it starts analyzing the pixel data of a captured image. In an 8-bit grayscale image, each pixel has an intensity from 0 (black) to 255 (white), and these values can be used in calculations. To find the edges of an object, the software looks for large changes in contrast by subtracting neighboring pixel values. If the result of a subtraction is close to zero, the two pixels don’t form an edge. A large difference in intensity – either positive or negative – indicates that an edge is likely present.
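The subtraction step can be sketched in a few lines of Python. This scans one row of pixels, subtracts each pixel from its right-hand neighbor, and reports a candidate edge wherever the signed difference is large; the `min_step` contrast threshold is an illustrative value, not one taken from the article.

```python
def horizontal_edges(row, min_step=40):
    """Scan a row of 8-bit pixel intensities and return (position, signed
    contrast change) for each neighbor pair whose difference is large.
    A difference near zero means no edge between the two pixels."""
    edges = []
    for x in range(len(row) - 1):
        diff = row[x + 1] - row[x]
        if abs(diff) >= min_step:
            edges.append((x, diff))
    return edges
```

Running this on a row such as `[20, 22, 21, 200, 198, 60, 58]` flags two edges: a large positive jump (dark to bright) after index 2 and a large negative drop (bright to dark) after index 4, while the small pixel-to-pixel noise elsewhere is ignored.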
The results of pixel subtractions are stored in a data structure, and the software scans both horizontally and vertically across the modified image representation to find contours. It keeps tabs on the relative distances between the contours and then looks for a statistical similarity between the reference image and any discovered shapes. It’s possible to vary the system’s statistical tolerance to make matches more or less likely: a high tolerance makes false positives more common, while a low tolerance makes the system more likely to miss real matches.
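As a rough illustration of matching on relative edge spacing, the sketch below normalizes each shape’s edge positions by its overall extent (so the comparison is scale-invariant, as the template discussion above requires), scores the similarity, and applies an adjustable tolerance. This is a simplified stand-in for the statistical comparison described above, not the actual algorithm in any commercial system; the function names and the scoring formula are assumptions.

```python
def match_score(template_edges, found_edges):
    """Score shape similarity in [0, 1] by comparing edge positions
    normalized to each shape's overall span (scale-invariant).
    Illustrative sketch only, not a production matching algorithm."""
    def normalized(edges):
        span = edges[-1] - edges[0]
        return [(e - edges[0]) / span for e in edges]
    if len(template_edges) != len(found_edges):
        return 0.0
    t, f = normalized(template_edges), normalized(found_edges)
    # mean absolute deviation between normalized edge positions
    err = sum(abs(a - b) for a, b in zip(t, f)) / len(t)
    return max(0.0, 1.0 - err)

def is_match(template_edges, found_edges, tolerance=0.05):
    """Higher tolerance accepts looser matches (more false positives);
    lower tolerance rejects more candidates (more missed real matches)."""
    return match_score(template_edges, found_edges) >= 1.0 - tolerance
```

A shape whose edges sit at twice the template’s spacing (for example, a 2x-scaled, shifted copy) still scores a perfect match because only relative distances are compared, while a shape with different edge spacing falls below the tolerance and is rejected.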
The Future of Machine Vision: Deep Learning
In traditional machine vision, you need to tell the system exactly what you want it to look for. This can require a significant amount of programming effort. Wouldn’t it be great if the system could figure out what to look for on its own? That’s the impetus behind efforts to incorporate deep learning algorithms into industrial machine vision systems.
Deep learning makes use of advanced algorithms such as convolutional neural networks (CNNs) to allow computers to learn what they need to do from a set of training data. Instead of specifying the exact contours of an object, operators present the system with a series of images, each labeled as either containing or not containing the object of interest. During this training period, the algorithms discover for themselves which attributes of an image indicate that the object is likely present. This leads to more robust recognition rates because the vision system’s ability to crunch large amounts of data makes it better than humans at identifying the defining features of objects.
Machine vision is rapidly advancing, and the engineers who specialize in it are constantly looking towards the future. The next several years are likely to bring numerous innovations in lighting, camera equipment and machine vision algorithms to make industrial automation more streamlined and flexible than ever before.
Here are a couple of useful resources to learn more about machine vision software:
- Learn more about AutoVISION - an easy-to-use machine vision software for basic to mid-range applications.
- Learn more about Visionscape - a comprehensive machine vision software for multi-platform use.
- Feel free to try out either software option - download now!