Abstract:
This invention provides a system and method for runtime determination (self-diagnosis) of camera miscalibration (accuracy), typically related to camera extrinsics, based on historical statistics of runtime alignment scores for objects acquired in the scene. The scores are defined by matching observed image data against the expected image data of trained object models. This arrangement avoids the need to cease runtime operation of the vision system, and/or to stop the production line that the vision system serves, in order to diagnose whether the system's camera(s) remain calibrated. Under the assumption that the objects or features inspected by the vision system over time are substantially the same, the vision system accumulates statistics of part-alignment results and stores intermediate results for use as an indicator of current system accuracy. For multi-camera vision systems, cross validation is illustratively employed to identify individual problematic cameras. The system and method allow for faster, less expensive, and more straightforward diagnosis of vision system failures related to deteriorating camera calibration.
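The accumulation of alignment-score statistics described above can be sketched as a simple drift monitor: a baseline score distribution is learned during known-good operation, and miscalibration is suspected when recent runtime scores deviate strongly from it. The class name, window size, and z-score threshold below are illustrative assumptions, not taken from the abstract.

```python
# Sketch of runtime self-diagnosis from historical alignment-score
# statistics. All names and the z-score threshold are illustrative
# assumptions; the abstract does not specify a particular test.
from collections import deque
from statistics import mean, stdev

class CalibrationMonitor:
    def __init__(self, baseline_scores, window=50, z_threshold=3.0):
        # Historical statistics from a period of known-good calibration.
        self.mu = mean(baseline_scores)
        self.sigma = stdev(baseline_scores)
        self.recent = deque(maxlen=window)   # sliding window of runtime scores
        self.z_threshold = z_threshold

    def add_score(self, score):
        """Record one runtime alignment score (e.g., a 0..1 match quality)."""
        self.recent.append(score)

    def likely_miscalibrated(self):
        """True when the recent mean score deviates strongly from baseline."""
        if len(self.recent) < self.recent.maxlen:
            return False                     # not enough evidence yet
        z = abs(mean(self.recent) - self.mu) / max(self.sigma, 1e-9)
        return z > self.z_threshold
```

For a multi-camera system, one monitor per camera would support the cross-validation idea: a single camera whose monitor trips while the others stay nominal is the likely problem.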
Abstract:
The disclosure relates to weighing moving objects on a weighing platform functionally coupled to a computer-vision tracking platform. The objects can translate, rotate, or both translate and rotate. Weighing of the objects can be accomplished through a combination of object imaging and upstream weighing. Object imaging can permit tracking, through computer vision, a logical object moving in a trajectory from a first location to a second location, where a logical object is a formal representation of one or more physical objects. Upstream weighing can permit updating a record indicative of the weight of the one or more physical objects associated with the tracked logical object. As part of weighing termination, data integrity check(s) can be performed on a plurality of records indicative of the weight of a single physical object. Based on the outcome of the data integrity check(s), a record indicative of the weight of the single physical object can be supplied.
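The data-integrity step above can be illustrated as reconciling several weight records that accumulate for one tracked physical object: a single value is supplied only when the records agree. The tolerance and the choice of the median are assumptions made for this sketch; the abstract does not specify the form of the check.

```python
# Illustrative sketch of the data-integrity check: several weight records
# may exist for one physical object, and a consistent value is supplied
# only when they agree. Tolerance and median are example assumptions.
from statistics import median

def reconcile_weight_records(records, rel_tolerance=0.02):
    """Return a single weight if records agree within tolerance, else None."""
    if not records:
        return None
    m = median(records)
    if all(abs(r - m) <= rel_tolerance * m for r in records):
        return m
    return None  # integrity check failed; caller may re-weigh or flag
```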
Abstract:
This invention provides a system and method for determining the position of a viewed object in three dimensions by employing 2D machine vision processes on each of a plurality of planar faces of the object, and thereby refining the location of the object. First, a rough pose estimate of the object is derived. This rough pose estimate can be based upon predetermined pose data, or can be derived by acquiring a plurality of planar face poses of the object (using, for example, multiple cameras) and correlating the corners of the trained image pattern, which have known coordinates relative to the origin, to the acquired patterns. Once the rough pose is achieved, it is refined by defining the pose as a quaternion (a, b, c, and d) for rotation and three variables (x, y, z) for translation, and by employing an iterative, weighted least-squares error calculation to minimize the error between the edgelets of the trained model image and the acquired runtime edgelets. The overall refined/optimized pose estimate incorporates data from each camera's acquired image; the estimate thereby minimizes the total error between the edgelets of each camera's/view's trained model image and the associated camera's/view's acquired runtime edgelets. A final transformation of the trained features relative to the runtime features is derived from the iterative error computation.
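The iterative weighted least-squares refinement can be sketched in a deliberately reduced form: the real method optimizes a quaternion plus a 3D translation over multi-camera edgelets, but the same reweighting pattern is visible when estimating only a 2D translation between model and observed points, down-weighting outlier correspondences on each pass. Everything below is a simplification for illustration, not the patented method.

```python
# Highly simplified sketch of iteratively reweighted least squares (IRLS):
# correspondences with large residuals receive smaller weights on each
# iteration. Only a 2D translation is estimated here, as an illustration
# of the reweighting idea; the abstract's method fits rotation too.
def refine_translation(model_pts, observed_pts, iterations=10):
    """Estimate (tx, ty) mapping model points onto observed points."""
    tx, ty = 0.0, 0.0
    weights = [1.0] * len(model_pts)
    for _ in range(iterations):
        # For a pure translation, the weighted least-squares solution is
        # the weighted mean of the residual vectors.
        wsum = sum(weights)
        tx = sum(w * (o[0] - m[0])
                 for w, m, o in zip(weights, model_pts, observed_pts)) / wsum
        ty = sum(w * (o[1] - m[1])
                 for w, m, o in zip(weights, model_pts, observed_pts)) / wsum
        # Re-weight: larger residual -> smaller weight (robustness).
        weights = []
        for (mx, my), (ox, oy) in zip(model_pts, observed_pts):
            r = ((mx + tx - ox) ** 2 + (my + ty - oy) ** 2) ** 0.5
            weights.append(1.0 / (1.0 + r))
    return tx, ty
```

With clean correspondences the estimate is exact in one pass; with an outlier present, successive reweighting pulls the estimate back toward the inlier consensus.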
Abstract:
Inspection of solder paste on a printed circuit board using a before-printing image (pre-image) to normalize an after-printing image (post-image) of the printed circuit board. The existing lighting and optics used for aligning the screen-printing stencil to the printed circuit board are reused for the solder paste inspection. A stencil in the screen-printing process is also inspected using a before-printing image (pre) to normalize an after-printing image (post) of the stencil.
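The pre/post normalization idea can be sketched as a pixelwise ratio: dividing the after-printing image by the before-printing image cancels lighting and board-reflectance variation, leaving mainly the deposited paste. Representing images as nested lists and using a simple ratio are assumptions for this sketch; a real system would use arrays and may normalize differently.

```python
# Minimal sketch of pre/post normalization: the post-print image is
# divided pixelwise by the pre-print image so that fixed lighting and
# reflectance effects cancel. eps guards against division by zero.
def normalize_post_by_pre(post, pre, eps=1e-6):
    """Pixelwise ratio of post-print to pre-print intensities."""
    return [[p / max(q, eps) for p, q in zip(post_row, pre_row)]
            for post_row, pre_row in zip(post, pre)]
```

In the normalized image, unchanged regions sit near 1.0 while paste deposits (or missing paste) stand out as deviations from 1.0.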
Abstract:
The invention can be used to find the orientation of a semiconductor wafer without wafer handling, i.e., in a non-contact manner. The invention uses knowledge of the position of a semiconductor wafer, and the position of an orientation feature of the wafer, to find the orientation of the wafer. According to the invention, a curved band image is formed that includes an image of an orientation feature. The curved band image is then transformed into a straight band image. The longitudinal position of the orientation feature is then determined in the coordinate system of the straight band image, and that longitudinal position is converted into an angular displacement in the coordinate system of the curved band image to provide the orientation of the wafer.
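The final conversion above is linear: once the curved band around the wafer edge has been unwrapped into a straight band, the column index of the orientation feature maps directly back to an angle. The feature detector below (darkest column, as for a notch) is an illustrative assumption; the abstract does not prescribe how the feature is located.

```python
# Sketch of the straight-band step: the longitudinal (column) position of
# the orientation feature converts linearly to an angular displacement in
# the original curved band. Finding the feature as the minimum-intensity
# column is an example assumption (e.g., a wafer notch imaged dark).
def feature_angle_degrees(straight_band):
    """Locate the orientation feature and return its angular position."""
    width = len(straight_band[0])
    # Column sums; the notch shows up as the minimum-intensity column.
    col_sums = [sum(row[x] for row in straight_band) for x in range(width)]
    x_feature = col_sums.index(min(col_sums))
    # Longitudinal position -> angular displacement in the curved band.
    return 360.0 * x_feature / width
```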
Abstract:
Carbon-bonded refractory brick containing 1 to 6% by weight of a liquid thermosetting resin binder, consisting of polyhydroxydiphenyl resin and a curing agent, with the balance being refractory aggregate.
Abstract:
A vision system is provided to determine a positional relationship between a photovoltaic device wafer on a platen and a printing element, such as a printing screen, on the side of the photovoltaic device wafer remote from the platen. A source emits ultraviolet light along a path that is transverse to a longitudinal axis of an aperture through the platen, and a diffuser panel is located along that path. A reflector directs the light from the diffuser panel toward the aperture. A video camera is located along the longitudinal axis of the aperture and produces an image using light received from the platen aperture, wherein some of that received light has been reflected by the wafer. A band-pass filter is placed in front of the camera to block ambient light. The use of diffused ultraviolet light enhances contrast in the image between the wafer and the printing element.
Abstract:
During statistical training and automated inspection of objects by a machine vision system, a General Affine Transform is advantageously employed to improve system performance. During statistical training, the affine poses of a plurality of training images are determined with respect to an alignment model image. Following filtering to remove high-frequency content, the training images and their corresponding affine poses are applied to an affine transformation. The resulting transformed images are accumulated to compute template and threshold images to be used for run-time inspection. During run-time inspection, the affine pose of the run-time image relative to the alignment model image is determined. Following filtering of the run-time image, the run-time image is affine-transformed by its affine pose. The resulting transformed image is compared with the template and threshold images computed during statistical training to determine object status. In this manner, automated training and inspection are less demanding on system storage, and system speed and accuracy are improved.
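The accumulation of transformed training images into template and threshold images can be sketched as per-pixel statistics: the template is the mean image and the threshold is derived from the per-pixel spread. The k-sigma factor below is an assumption; the abstract does not specify how the threshold image is computed.

```python
# Sketch of the statistical-training step: aligned (affine-transformed)
# training images are accumulated into a mean "template" image and a
# per-pixel "threshold" image from the standard deviation. The k-sigma
# factor is an example assumption.
from statistics import mean, pstdev

def train_template_and_threshold(aligned_images, k=3.0):
    """aligned_images: same-size images as nested lists of floats."""
    rows, cols = len(aligned_images[0]), len(aligned_images[0][0])
    template = [[mean(img[r][c] for img in aligned_images)
                 for c in range(cols)] for r in range(rows)]
    threshold = [[k * pstdev([img[r][c] for img in aligned_images])
                  for c in range(cols)] for r in range(rows)]
    return template, threshold

def inspect(runtime_image, template, threshold):
    """Flag the part if any pixel deviates from the template by more
    than its per-pixel threshold (runtime image already aligned)."""
    return any(abs(p - t) > th
               for prow, trow, throw in zip(runtime_image, template, threshold)
               for p, t, th in zip(prow, trow, throw))
```

Because only two images (template and threshold) are retained rather than the full training set, storage demands stay low, matching the storage advantage noted in the abstract.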
Abstract:
The invention provides methods and apparatus for reducing the bandwidth of a multichannel image. In one aspect, the methods and apparatus call for acquiring a multichannel training image representing a training scene. Weighting factors for the respective channels are determined based on the contrast at corresponding locations in that multichannel training image. A reduced-bandwidth runtime image is then generated as a function of (i) the weighting factors determined from the training image and (ii) a multichannel image representing the runtime scene.
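The weighting idea can be sketched with a global per-channel simplification: channels that showed more contrast in the training image contribute more to the reduced runtime image. Note the abstract describes weights at corresponding locations, i.e., potentially per-pixel; the global weights and the (max - min) contrast measure below are simplifying assumptions.

```python
# Sketch of contrast-based channel weighting for bandwidth reduction.
# Global per-channel weights and (max - min) contrast are simplifying
# assumptions; the abstract allows per-location weighting.
def channel_weights_from_contrast(training_channels):
    """One weight per channel, proportional to that channel's contrast."""
    contrasts = [max(ch) - min(ch) for ch in training_channels]
    total = sum(contrasts) or 1.0          # avoid divide-by-zero
    return [c / total for c in contrasts]

def reduce_channels(runtime_channels, weights):
    """Collapse a multichannel image (flattened pixel lists, one list per
    channel) into a single channel as the weighted sum."""
    return [sum(w * ch[i] for w, ch in zip(weights, runtime_channels))
            for i in range(len(runtime_channels[0]))]
```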