Abstract:
A system and method for robustly calibrating a vision system and a robot is provided. The system and method enable a plurality of cameras to be calibrated into a robot base coordinate system, so that a machine vision/robot control system can accurately identify the location of objects of interest in robot base coordinates.
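A minimal sketch of the final step this implies: once calibration has produced a rigid transform from a camera's coordinate frame to the robot base frame, points measured by that camera can be mapped into robot base coordinates. The function name and transform values below are hypothetical stand-ins for whatever the calibration procedure actually produces.

```python
import numpy as np

def camera_to_robot_base(points_cam, T_base_from_cam):
    """Map 3D points from a camera frame into the robot base frame.

    points_cam      : (N, 3) array of points in camera coordinates.
    T_base_from_cam : (4, 4) rigid transform produced by calibration.
    Returns an (N, 3) array in robot base coordinates.
    """
    pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])  # homogeneous coordinates
    return (T_base_from_cam @ pts_h.T).T[:, :3]

# Example: one calibrated camera, a single observed point (hypothetical values).
T = np.eye(4)
T[:3, 3] = [0.50, 0.20, 0.75]   # assumed camera offset from the robot base, in meters
print(camera_to_robot_base(np.array([[0.0, 0.0, 0.30]]), T))
```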
Abstract:
A method and apparatus for decoding codes applied to objects, for use with an image sensor that includes a two-dimensional field of view (FOV). The method comprises providing a processor programmed to perform the steps of obtaining an image of the FOV and applying different decode algorithms to code candidates in the obtained image to attempt to decode the code candidates, wherein the decode algorithm applied to each candidate is a function of the location of the code candidate in the FOV.
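As a rough illustration of the idea, the hypothetical selector below chooses between two placeholder decode routines based on where the candidate sits in the FOV, favoring a more tolerant routine near the FOV edges; the edge-fraction heuristic and function names are assumptions for illustration, not the patented method.

```python
def decode_fast(roi):
    """Placeholder for a fast, strict decode routine used near the FOV center."""
    ...

def decode_tolerant(roi):
    """Placeholder for a slower, distortion-tolerant routine used near the FOV edge."""
    ...

def select_decoder(candidate_center, fov_size, edge_fraction=0.2):
    """Pick a decode routine as a function of the candidate's location in the FOV.

    Candidates within `edge_fraction` of any FOV border get the tolerant
    decoder; central candidates get the fast one.
    """
    x, y = candidate_center
    w, h = fov_size
    near_edge = (x < edge_fraction * w or x > (1 - edge_fraction) * w or
                 y < edge_fraction * h or y > (1 - edge_fraction) * h)
    return decode_tolerant if near_edge else decode_fast

# Example: a candidate near the left edge of a 1280x960 FOV.
decoder = select_decoder((60, 480), (1280, 960))
print(decoder.__name__)   # decode_tolerant
```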
Abstract:
Systems and methods are described for reading machine-readable symbols. The systems and methods capture multiple images of the symbol and can locate symbol data region(s) in an image even when the symbol data is corrupted and not decodable. Binary matrices of the symbol data regions obtained from the multiple images are generated and can be accumulated to generate a decodable image. A correspondence can be established among multiple images acquired of the same symbol when the symbol has moved from one image to the next.
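A minimal sketch of the accumulation idea, assuming the data regions from each image have already been located, aligned, and sampled into module grids; unreadable modules are marked NaN and a per-module majority vote combines the reads. The function name and example values are hypothetical.

```python
import numpy as np

def accumulate_symbol_matrices(binary_matrices):
    """Combine binary module matrices extracted from several images of the same symbol.

    Each matrix is (rows, cols) with values in {0, 1}; modules that could not
    be read are marked NaN.  A per-module majority vote yields a single matrix
    that may be decodable even if no individual image was.
    """
    stack = np.stack(binary_matrices).astype(float)
    votes = np.nanmean(stack, axis=0)          # fraction of readable images where the module was "1"
    return (votes >= 0.5).astype(np.uint8)

# Three noisy reads of a 4x4 data region (NaN = unreadable module).
a = np.array([[1, 0, np.nan, 1], [0, 1, 1, 0], [1, np.nan, 0, 1], [0, 0, 1, 1]])
b = np.array([[1, 0, 1, 1], [np.nan, 1, 1, 0], [1, 1, 0, np.nan], [0, 0, 1, 1]])
c = np.array([[1, np.nan, 1, 1], [0, 1, np.nan, 0], [1, 1, 0, 1], [np.nan, 0, 1, 1]])
print(accumulate_symbol_matrices([a, b, c]))
```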
Abstract:
A method is presented for processing an image of a two-dimensional (2D) matrix symbol having a plurality of data modules and a discontinuous finder pattern, each distorted by “donut effects”. A resulting processed image contains an image of the 2D matrix symbol having a continuous finder pattern suitable for conventional 2D matrix symbol locating techniques, and having a plurality of data modules, each data module having a center more truly representative of intended data, and suitable for conventional 2D matrix symbol sampling and decoding. The method includes sharpening the distorted image of the 2D matrix symbol to increase a difference between low frequency and high frequency image feature magnitudes, thereby providing a sharpened image, and smoothing the sharpened image using a moving window over the sharpened image so as to provide a smoothed image, the moving window and a module of the 2D matrix code being of substantially similar size.
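A hedged sketch of a sharpen-then-smooth pipeline in this spirit, using an unsharp mask followed by a box filter whose window matches the module size; the filter choices, parameters, and function name are illustrative assumptions rather than the patented processing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def correct_donut_effects(image, module_size_px, sharpen_amount=1.5, sigma=1.0):
    """Two-stage filtering sketch for 'donut'-distorted 2D matrix symbols.

    1. Unsharp masking boosts high-frequency content relative to low-frequency
       content, accentuating module interiors versus their 'donut' rims.
    2. A box (uniform) filter with a module-sized moving window averages each
       module region, pulling every module center toward its intended
       dark/light value and helping close gaps in the finder pattern.
    """
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma)
    sharpened = img + sharpen_amount * (img - blurred)         # unsharp mask
    smoothed = uniform_filter(sharpened, size=module_size_px)  # module-sized moving window
    return np.clip(smoothed, 0, 255).astype(np.uint8)
```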
Abstract:
A method is provided for reading distorted optical symbols using known locating and decoding methods, without requiring a separate and elaborate camera calibration procedure, without excessive computational complexity, and without compromised burst noise handling. The invention exploits a distortion-tolerant method for locating and decoding 2D code symbols to provide a correspondence between a set of points in an acquired image and a set of points in the symbol. A coordinate transformation is then constructed using the correspondence, and run-time images are corrected using the coordinate transformation. Each corrected run-time image provides a distortion-free representation of a symbol that can be read by traditional code readers that normally cannot read distorted symbols. The method can handle both optical distortion and printing distortion. The method is applicable to “portable” readers when an incident angle with the surface is maintained, the reader being disposed at any distance from the surface.
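One way to realize the transform-and-correct step, assuming the located symbol yields point correspondences and the dominant distortion is projective: fit a homography from image points to ideal symbol-grid points, then warp each run-time image through it so a standard reader sees an undistorted symbol. The OpenCV-based sketch and the corner values below are illustrative assumptions.

```python
import numpy as np
import cv2

def build_unwarp(image_pts, symbol_pts, out_size):
    """Fit a projective transform from image coordinates to the symbol's own
    grid coordinates, and return a function that rectifies run-time images.

    image_pts  : (N, 2) points located in the acquired image.
    symbol_pts : (N, 2) corresponding ideal points in the symbol grid.
    out_size   : (width, height) of the corrected output image.
    """
    H, _ = cv2.findHomography(np.float32(image_pts), np.float32(symbol_pts))
    return lambda runtime_img: cv2.warpPerspective(runtime_img, H, out_size)

# Four corner correspondences from a located symbol (hypothetical values).
img_corners = [(112, 80), (415, 95), (430, 388), (98, 370)]
sym_corners = [(0, 0), (300, 0), (300, 300), (0, 300)]
unwarp = build_unwarp(img_corners, sym_corners, out_size=(300, 300))
# corrected = unwarp(acquired_image)   # corrected image can then go to a traditional reader
```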
Abstract:
This invention provides a system and method to validate the accuracy of camera calibration in a single or multiple-camera embodiment, utilizing either 2D cameras or 3D imaging sensors. It relies upon an initial calibration process that generates and stores camera calibration parameters and residual statistics based upon images of a first calibration object. A subsequent validation process (a) acquires images of the first calibration object or a second calibration object having a known pattern and dimensions; (b) extracts features of the images of the first calibration object or the second calibration object; (c) predicts positions expected of features of the first calibration object or the second calibration object using the camera calibration parameters; and (d) computes a set of discrepancies between positions of the extracted features and the predicted positions of the features. The validation process then uses the computed set of discrepancies in a decision process that determines whether at least one of the discrepancies exceeds a predetermined threshold value. If so, recalibration is required. Where multiple cameras are employed, extrinsic parameters are determined and used in conjunction with the intrinsic parameters.
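A minimal numerical sketch of the decision step, assuming feature extraction and prediction have already been performed elsewhere: compare extracted and predicted feature positions and flag recalibration if any discrepancy exceeds a threshold. The function name, threshold, and coordinate values are hypothetical.

```python
import numpy as np

def needs_recalibration(extracted_pts, predicted_pts, threshold_px=0.5):
    """Compare extracted calibration-target feature positions with positions
    predicted from stored calibration parameters.

    Returns (recalibrate, discrepancies), where recalibrate is True if any
    per-feature discrepancy exceeds the threshold.
    """
    discrepancies = np.linalg.norm(
        np.asarray(extracted_pts) - np.asarray(predicted_pts), axis=1)
    return bool(np.any(discrepancies > threshold_px)), discrepancies

# Hypothetical check: three target features, positions in pixels.
extracted = [(101.2, 54.9), (150.7, 55.1), (200.4, 55.0)]
predicted = [(101.0, 55.0), (150.5, 55.0), (200.1, 55.2)]
print(needs_recalibration(extracted, predicted, threshold_px=0.5))
```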
Abstract:
Systems and methods are described for acquiring and decoding a plurality of images. First images are acquired and then processed to attempt to decode a symbol. Contributions of the first images to the decoding attempt are identified. An updated acquisition-settings order is determined based at least partly upon the contributions of the first images to the decoding attempt. Second images are acquired or processed based at least partly upon the updated acquisition-settings order.
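A small sketch of one way the reordering could work, assuming each acquisition setting carries an accumulated contribution score from prior decode attempts; the scoring scheme and setting identifiers here are assumptions for illustration.

```python
def reorder_settings(settings, contributions):
    """Reorder acquisition settings so those that contributed most to recent
    decode attempts are tried first on subsequent images.

    settings      : list of setting identifiers (e.g., exposure/gain presets).
    contributions : dict mapping setting id -> accumulated contribution score.
    """
    return sorted(settings, key=lambda s: contributions.get(s, 0.0), reverse=True)

# Example: three exposure presets; the "mid" preset contributed most to the last decode.
order = reorder_settings(["short", "mid", "long"],
                         {"short": 0.1, "mid": 0.9, "long": 0.3})
print(order)   # ['mid', 'long', 'short']
```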
Abstract:
This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. A 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. 3D points are computed for each pair of cameras to derive a 3D point cloud. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found runtime HLGS are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the set of candidate poses. The remaining candidate poses are then subjected to a further, more refined scoring process. The surviving candidate poses are then verified, with the closest match taken as the best refined three-dimensional pose.
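A hedged sketch of the coarse candidate-pose scoring and pruning idea, using a nearest-neighbor test between transformed model points and the runtime point cloud; the scoring metric, tolerance, and function names are illustrative assumptions rather than the patented procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def coarse_score(pose, model_pts, scene_pts, tol=2.0):
    """Score a candidate pose (4x4 rigid transform) by the fraction of model
    points that land within `tol` of some scene point after transformation.
    Low-scoring candidates can be pruned before refined scoring.
    """
    pts_h = np.hstack([model_pts, np.ones((len(model_pts), 1))])
    transformed = (pose @ pts_h.T).T[:, :3]
    dists, _ = cKDTree(scene_pts).query(transformed)
    return float(np.mean(dists < tol))

def prune_poses(candidates, model_pts, scene_pts, keep=10):
    """Keep only the `keep` best-scoring candidate poses for further refinement."""
    ranked = sorted(candidates,
                    key=lambda T: coarse_score(T, model_pts, scene_pts),
                    reverse=True)
    return ranked[:keep]
```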