Alex Maslakow – Applications Engineer
May 2018
To form an image, a camera collects light for a set amount of time; this is the exposure time of the camera. When inspecting a moving object, the image must be captured quickly enough that it is not blurred by the motion of the object.
An image is formed by a camera when light passes through the lens and hits the sensor. CCD and CMOS are two types of camera sensors. They vary in their design, but both convert light into an analog value. The camera then converts that analog value into a digital value that represents the light intensity. The sensor is made up of a grid of square elements called pixels. Each pixel has a specific position and intensity within the image.
The size of each pixel in the image depends on various factors:
- Number of pixels in camera sensor
- Physical size of each pixel in the sensor
- Camera lens
- Distance from the lens to the object
These factors combine to determine how large each square pixel element appears within the acquired image.
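As a minimal sketch of the idea above, the size each pixel covers on the object can be computed from the field of view and the sensor resolution along the same axis (the function name and parameters here are illustrative, not from the original text):

```python
def pixel_size(fov_inches: float, sensor_pixels: int) -> float:
    """How many inches of the scene each pixel covers along one axis."""
    return fov_inches / sensor_pixels

# Example: a 15 inch field of view imaged onto a 1600 pixel sensor axis
print(pixel_size(15.0, 1600))  # 0.009375 inches per pixel
```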
If your object is moving at the time of image acquisition, it is important to understand how that motion will affect your image. If the acquisition parameters are not set up correctly, the result can be a blurry image. To acquire an image without motion blur, the exposure time must be set so that the object moves the width of one pixel or less during the exposure. If the exposure time is too long, a discrete point on the object will move through multiple pixel locations in the image, causing the image to be blurred as shown in the images below.
To calculate the required exposure time, the following system details need to be known:
- Object or Conveyor speed (ft/min, mm/sec, etc.)
- Field of View - FOV (length or width of the area seen in the image)
- Sensor size in terms of pixels (640x480, 1600x1200, etc.)
First, the pixel size of the image is determined by dividing the FOV by the sensor size to get the size of each pixel.
Pixel Size (inches/pixel) = Field of View (inches) / Sensor Size (pixels)
The pixel size is then divided by the object speed. After some simple unit conversions and cancellations, the result is the maximum exposure time in milliseconds per pixel of object travel.
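The calculation above can be sketched as a small helper. This assumes speed is given in ft/min and the field of view in inches, as in the text; the function name and unit-conversion factors are my own:

```python
def max_exposure_ms(fov_inches: float, sensor_pixels: int,
                    speed_ft_per_min: float) -> float:
    """Longest exposure (ms) that keeps object motion under one pixel."""
    pixel_size_in = fov_inches / sensor_pixels            # inches per pixel
    speed_in_per_ms = speed_ft_per_min * 12 / 60 / 1000   # ft/min -> inches/ms
    return pixel_size_in / speed_in_per_ms                # ms per pixel of travel

print(round(max_exposure_ms(15, 1600, 20), 2))  # 2.34 ms
```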
As an example, I have a system where a widget is placed onto a conveyor by a robot. It has been determined that I need a 2MP camera to do an inspection as the parts pass by. The camera being used is an In-Sight 7802, and it will be set up to view the entire 15 inch width of the conveyor. The conveyor is moving at 20 ft/min.
The In-Sight 7802 has a 1600x1200 pixel sensor. If the long (1600 pixel) axis of the image spans the width of the 15 inch conveyor, the image will be 11.25 inches tall. That image height is determined by cross-multiplying from the sensor's aspect ratio:

1600 pixels / 15 inches = 1200 pixels / X inches
X = (1200 x 15) / 1600 = 11.25 inches
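The aspect-ratio relationship can be checked with a short sketch (the function name is mine, assuming the long sensor axis spans the conveyor width):

```python
def fov_height(fov_width_in: float, sensor_w_px: int, sensor_h_px: int) -> float:
    """Field-of-view height implied by the sensor's aspect ratio."""
    return fov_width_in * sensor_h_px / sensor_w_px

print(fov_height(15, 1600, 1200))  # 11.25 inches
```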
To determine the pixel size, the 15 inch field of view is divided by the 1600 pixel sensor width:

Pixel Size = 15 inches / 1600 pixels = 0.009375 inches/pixel
Now determine the maximum exposure time that the camera can be set to without any motion blur in the image. The conveyor speed of 20 ft/min converts to 4 inches/second, or 0.004 inches/ms:

Exposure Time = 0.009375 inches/pixel / 0.004 inches/ms ≈ 2.34 ms/pixel
So, if the camera exposure time is set to less than 2.34 ms, the widget will move less than the width of a single pixel during the exposure. For all intents and purposes, the part is stationary in the image, resulting in a clean, blur-free image.
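To see how blur scales with exposure, here is a minimal sketch using the example numbers from the text (15 inch FOV over 1600 pixels gives 0.009375 in/pixel; 20 ft/min gives 0.004 in/ms); the function name is illustrative:

```python
def blur_pixels(exposure_ms: float,
                speed_in_per_ms: float = 0.004,
                pixel_size_in: float = 0.009375) -> float:
    """How many pixels a point on the object travels during the exposure."""
    return exposure_ms * speed_in_per_ms / pixel_size_in

print(blur_pixels(2.0))   # ~0.85 px: under one pixel, no visible blur
print(blur_pixels(10.0))  # ~4.27 px: smeared across several pixels
```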