How CMOS Sensors and FPGAs Upped the Tempo of Smart Camera Progress

You never know where the next breakthrough in machine vision will come from. Sometimes, it will be the result of years of academic study. Other times, it will come from a simple decision to start incorporating new-and-improved technologies from regular consumer devices into high-end smart cameras. Two technologies that have taken the machine vision world by storm recently are the CMOS sensor and the FPGA. Let’s look at how these exciting technologies have supercharged the development of smart cameras.
CMOS Sensors: From Smartphones to Smart Cameras
Once left out of the machine vision world due to their low sensitivity and high levels of noise relative to charge-coupled devices (CCDs), complementary metal-oxide semiconductor (CMOS) sensors have suddenly grown up. Now they’re an essential component of smart cameras, thanks to their rapid increase in sensitivity and overall quality.
CMOS image sensor
Both CMOS sensors and CCDs start the same way: they convert incoming light into electrons inside an array of tiny light-sensitive cells. However, the next step – reading out the charge accumulated within each cell – is performed differently by each. A CCD shifts the charge, with almost no loss, across the array of cells and then reads the result at one corner of the array. To keep this charge transfer distortion-free, these sensors must be manufactured using a special (and relatively expensive) process. CMOS sensors, on the other hand, use several transistors at each pixel to amplify and transport the charge in a less specialized way. The manufacturing cost is kept down, but the readout is no longer free of distortion.
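The readout difference can be sketched in a toy simulation. Everything here is invented for illustration (the grid size, the gain values, the noise level), but it shows why per-pixel amplifiers historically gave CMOS sensors a fixed-pattern distortion that CCDs avoided:

```python
import random

random.seed(0)
# Electrons collected in a tiny 4x4 grid of photosites (made-up values).
scene = [[random.uniform(0, 1000) for _ in range(4)] for _ in range(4)]

# CCD-style readout: charge packets are shifted, nearly loss-free,
# to a single output amplifier, so one gain applies to every pixel.
ccd_gain = 1.0
ccd_image = [[q * ccd_gain for q in row] for row in scene]

# CMOS-style readout: each pixel has its own amplifier; small gain
# mismatches between amplifiers appear as fixed-pattern noise.
cmos_image = [[q * random.gauss(1.0, 0.02) for q in row] for row in scene]

# The mismatch is the "distortion" described above; modern sensors
# largely calibrate it out, which is what closed the quality gap.
worst = max(
    abs(c - r) / r
    for c_row, r_row in zip(cmos_image, ccd_image)
    for c, r in zip(c_row, r_row)
)
print(f"worst per-pixel gain error: {worst:.1%}")
```

Real sensor behavior is far richer than this (dark current, shot noise, readout architecture), but the core trade-off – one carefully engineered amplifier versus millions of cheap ones – is what the sketch captures.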
The obstacle presented by noise and distortion was eventually overcome, but this didn’t occur within the confines of industrial imaging. Instead, CMOS sensors found their niche as a component of digital cameras for the average consumer and then made their way into smartphones. Constantly under pressure to keep prices down while upping resolution and quality, smartphone manufacturers embraced the lower-cost option while keeping an eye out for ways to make it a little better. Eventually, the sensitivity of CMOS sensors reached a point where they could give CCDs a run for their money. That was when the machine vision industry welcomed the more cost-effective sensors into the fold and smart camera development became less costly overall. As costs came down, progress took off.
FPGAs: An Adaptable Solution for Real-Time Action
Just like computers, smartphones and other digital appliances, smart cameras use integrated circuits to process inputs and generate outputs. Integrated circuits can be customized for special uses, in which case they are known as application-specific integrated circuits (ASICs). These have been used in smart cameras, but one of their major downsides is that the logic is completely set in stone (or rather, in silicon) once they’re manufactured. They’re not ideal for a rapidly evolving technology where customizability is key.
This is where the field-programmable gate array, or FPGA, comes in. As the name suggests, FPGAs can be reprogrammed even after they have been manufactured and shipped off to the end user. They consist of numerous logic blocks that can be reconfigured using a hardware description language (HDL). This flexibility allows engineers more freedom in testing and developing new smart camera models.
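To make the "reprogrammable logic block" idea concrete, here is a minimal Python sketch that models a logic block as a lookup table (LUT). This is a deliberate simplification: real FPGA logic blocks are LUTs of typically four to six inputs, configured through an HDL toolchain, not through anything resembling this code.

```python
# A highly simplified model of one FPGA logic block: a 2-input
# lookup table (LUT). The LUT's "program" is just its truth table,
# so reconfiguring the chip means loading new bits, not new silicon.

def make_lut(truth_table):
    """Return a 2-input logic function defined by a 4-entry truth table."""
    def lut(a, b):
        return truth_table[(a << 1) | b]  # index 0..3 built from the input bits
    return lut

# Configure the same block first as AND, then "reprogram" it as XOR.
and_gate = make_lut([0, 0, 0, 1])
xor_gate = make_lut([0, 1, 1, 0])

print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([xor_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

An ASIC is, by analogy, a truth table baked permanently into the chip; the FPGA's ability to swap the table after shipping is exactly the flexibility the article describes.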
The HAWK MV-4000 High-Performance smart camera builds upon the previous generation by quadrupling processing power and achieving real-time trigger response using an FPGA.
FPGAs first came into their own in the telecommunications and networking industries. When industrial automation systems started requiring smart cameras to capture parts on fast-moving conveyor belts, FPGAs came into the picture as a means for real-time control of the camera hardware. One example is the coordination of the input trigger and sensor acquisition. With FPGA control, the camera achieves near real-time response to the trigger input and control over the sensor, ensuring that it acquires an image of the part in a repeatable position. Without this technology, the input-sensor coordination would have to pass through a much slower labyrinth of general-purpose processing circuits. The effect of this delay is that the object shifts by varying amounts in the field of view, depending on the conveyor speed, before the camera gets the message that it was supposed to take a picture. With an FPGA, the picture is taken nearly the instant the trigger fires. Incorporating FPGAs as this real-time pseudo-hardware layer has been one of the key factors in the recent explosion in smart camera capability.
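A quick back-of-the-envelope calculation shows why that latency matters. The latency figures below are purely illustrative assumptions, not specifications of any particular camera:

```python
# How far does a part travel between the trigger firing and the
# actual exposure? shift = conveyor speed x trigger latency.

def position_shift_mm(conveyor_speed_m_s, trigger_latency_s):
    """Distance a part moves during the trigger-to-exposure delay."""
    return conveyor_speed_m_s * trigger_latency_s * 1000  # metres -> mm

conveyor_speed = 0.5  # m/s, an assumed belt speed

# FPGA-controlled trigger: near-instant, deterministic response.
fpga_shift = position_shift_mm(conveyor_speed, 20e-6)      # assume 20 us
# Trigger routed through general-purpose processing: slower, with jitter.
software_shift = position_shift_mm(conveyor_speed, 10e-3)  # assume 10 ms

print(f"FPGA path:     part shifts {fpga_shift:.3f} mm")   # 0.010 mm
print(f"Software path: part shifts {software_shift:.1f} mm")  # 5.0 mm
```

Under these assumed numbers, the software-triggered image is off by millimetres while the FPGA-triggered one is off by hundredths of a millimetre, and, just as importantly, the FPGA's delay is repeatable rather than varying from shot to shot.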
Thanks to the addition of CMOS sensors and FPGAs, smart camera development is going through an exciting phase right now. Costs are going down while capabilities are on the rise, and more breakthroughs could be just around the corner. Perhaps another as-yet-overlooked technology is about to find its niche in the world of machine vision. As we said at the start, you never know where the next breakthrough will come from!
Learn more about our FPGA-powered HAWK MV-4000 Smart Camera here.