Abstract:
A super resolution bore imaging system is disclosed for imaging a cylindrical bore. The system includes a controller, a photodetector configuration having a known pixel geometry, and an imaging arrangement that images bore surface segments onto the photodetector. In one embodiment, the controller is configured to acquire respective combinable sets of raw bore segment image data using the pixels of the photodetector configuration positioned, relative to the bore segment, at respective imaging-Z coordinates which are separated along the bore axial direction by a subpixel shift. In some embodiments, the pixel geometry is configured to provide super resolution along the circumferential direction without a change in position along the circumferential direction between acquiring the respective sets of image data. The controller combines the sets of raw image data to form super resolution image data for the bore segment.
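The abstract does not specify how the combinable sets of raw image data are merged, but a minimal sketch of the underlying idea is shown below: raw frames acquired at positions separated by a subpixel shift along the bore axis are interleaved row-by-row to increase the sampling density along that axis. The function name `combine_subpixel_shifted` and the frame sizes are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def combine_subpixel_shifted(frames):
    """Interleave raw frames acquired at sub-pixel offsets along the axial
    (row) direction to increase the sampling density along that axis.

    frames: list of n 2-D arrays (rows = axial/Z direction, cols =
            circumferential direction), each acquired after moving the
            detector by 1/n of a pixel along the bore axis.
    """
    n = len(frames)
    rows, cols = frames[0].shape
    # Output has n times as many rows: row k*n + i comes from frame i,
    # since frame i samples positions offset by i/n of a pixel along Z.
    sr = np.empty((rows * n, cols), dtype=float)
    for i, frame in enumerate(frames):
        sr[i::n, :] = frame
    return sr

# Two raw frames of one bore segment, the second shifted by half a pixel
# along the bore axis (hypothetical sizes for illustration).
frame_a = np.random.rand(480, 640)
frame_b = np.random.rand(480, 640)
sr_image = combine_subpixel_shifted([frame_a, frame_b])
print(sr_image.shape)  # (960, 640): doubled sampling along the axial direction
```

In this sketch the circumferential (column) direction is left untouched, consistent with the abstract's statement that super resolution along that direction comes from the pixel geometry itself rather than from motion between acquisitions.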
Abstract:
A method for improving repeatability in edge location measurement results of a machine vision inspection system comprises: placing a workpiece in a field of view of the machine vision inspection system; providing an edge measurement video tool comprising an edge-referenced alignment compensation defining portion; operating the edge measurement video tool to define a region of interest of the video tool which includes an edge feature of the workpiece; operating the edge measurement video tool to automatically perform scan line direction alignment operations such that the scan line direction of the edge measurement video tool is aligned along a first direction relative to the edge feature, wherein the first direction is defined by predetermined alignment operations of the edge-referenced alignment compensation defining portion; and performing edge location measurement operations with the region of interest in that position.
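The abstract describes aligning the tool's scan line direction relative to the edge feature before measuring. A minimal sketch of one way such an alignment could work is given below, assuming the edge orientation is estimated from image gradients in the region of interest and the edge is then located along each scan line at the strongest intensity transition. The function names and the sub-pixel refinement comment are assumptions for illustration only.

```python
import numpy as np

def estimate_edge_angle(roi):
    """Estimate the dominant gradient direction (radians) inside a region of
    interest; the edge runs perpendicular to this direction, so scan lines
    oriented along it cross the edge roughly at a right angle."""
    gy, gx = np.gradient(roi.astype(float))
    return np.arctan2(gy.mean(), gx.mean())

def locate_edge_along_scanline(profile):
    """Return the index of the strongest intensity transition along a 1-D
    intensity profile (one scan line). A refined tool would interpolate
    around this index for sub-pixel results."""
    grad = np.gradient(profile.astype(float))
    return int(np.argmax(np.abs(grad)))

# Example: a step edge; the mean gradient points along +x, so scan lines are
# taken along x and cross the edge perpendicularly.
roi = np.tile(np.concatenate([np.zeros(20), np.ones(20)]), (30, 1))
print(estimate_edge_angle(roi))            # ~0 rad
print(locate_edge_along_scanline(roi[15])) # edge located near index 20
```

Sampling scan lines along a direction defined relative to the edge itself, rather than relative to however the tool was initially drawn, is what makes repeated edge location measurements less sensitive to operator placement, which is the repeatability improvement the abstract refers to.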
Abstract:
A user interface for setting parameters for an edge location video tool is provided. The user interface includes a multi-dimensional parameter space representation that allows a user to adjust a single parameter combination indicator in order to adjust multiple edge detection parameters at once. One or more edge feature representation windows may be provided which indicate the edge features detectable by the current configuration of the edge detection parameters.
Abstract:
A user interface for setting parameters for an edge location video tool is provided. In one implementation, the user interface includes a multi-dimensional parameter space representation with edge zones that allows a user to adjust a single parameter combination indicator in a zone in order to adjust multiple edge detection parameters for detecting a corresponding edge. The edge zones indicate the edge features that are detectable when the parameter combination indicator is placed within the edge zones. In another implementation, representations of multiple edge features that are detectable by different possible combinations of the edge detection parameters are automatically provided in one or more windows. When a user selects one of the edge feature representations, the corresponding combination of edge detection parameters is set as the parameters for the edge location video tool.
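The abstracts do not define the specific edge detection parameters, but a minimal sketch of the single-indicator idea is shown below, assuming a two-dimensional parameter space whose axes are an intensity-step threshold and a gradient-strength threshold (hypothetical parameter names `th` and `ths`). Moving one indicator updates both parameters at once, and the edges returned for a given combination are what an edge zone or edge feature representation window would display.

```python
import numpy as np

def indicator_to_parameters(indicator_xy, th_range=(0, 255), ths_range=(0, 50)):
    """Map a single indicator position in a normalized 2-D parameter space
    (0..1 on each axis) to a combined (th, ths) parameter setting."""
    x, y = indicator_xy
    th = th_range[0] + x * (th_range[1] - th_range[0])
    ths = ths_range[0] + y * (ths_range[1] - ths_range[0])
    return th, ths

def detectable_edges(scanline, th, ths):
    """Return indices along a scan line where an edge is detectable under the
    current parameter combination: intensity step >= th AND local gradient
    magnitude >= ths (illustrative criteria, not the patented ones)."""
    profile = scanline.astype(float)
    grad = np.gradient(profile)
    edges = []
    for i in range(1, len(profile) - 1):
        step = abs(profile[i + 1] - profile[i - 1])
        if step >= th and abs(grad[i]) >= ths:
            edges.append(i)
    return edges

# Dragging the indicator once adjusts both parameters together.
th, ths = indicator_to_parameters((0.4, 0.2))  # th = 102.0, ths = 10.0
profile = np.concatenate([np.full(30, 10.0), np.full(30, 200.0)])
print(detectable_edges(profile, th, ths))      # prints [29, 30]
```

Under this sketch, each edge zone in the parameter space representation would correspond to the set of indicator positions for which a particular edge appears in the output of `detectable_edges`, and selecting an edge feature representation would simply set the indicator to a position inside that edge's zone.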