Abstract:
A user interface for setting parameters for an edge location video tool is provided. In one implementation, the user interface includes a multi-dimensional parameter space representation with edge zones that allows a user to adjust a single parameter combination indicator within a zone in order to adjust multiple edge detection parameters for detecting a corresponding edge. The edge zones indicate the edge features that are detectable when the parameter combination indicator is placed within them. In another implementation, representations of multiple edge features that are detectable by different possible combinations of the edge detection parameters are automatically provided in one or more windows. When a user selects one of the edge feature representations, the corresponding combination of edge detection parameters is set as the parameters for the edge location video tool.
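As an illustration only, and not the claimed implementation, the following Python sketch shows how a single parameter-combination indicator position in a two-dimensional parameter space might be mapped to two edge detection parameters, and how rectangular edge zones could report which edges remain detectable at that position. The parameter names (TH, THS), the zone geometry, and all values are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class EdgeZone:
    """Region of the parameter space in which a given edge feature is detectable."""
    edge_id: int
    th_range: tuple    # (min, max) for an assumed edge strength threshold TH
    ths_range: tuple   # (min, max) for an assumed gradient strength threshold THS

def indicator_to_parameters(x, y):
    """Map the indicator position (x, y), each in [0, 1], to two detection parameters."""
    return {"TH": 255.0 * x, "THS": 100.0 * y}

def detectable_edges(x, y, zones):
    """Return the ids of edges whose zones contain the current indicator position."""
    params = indicator_to_parameters(x, y)
    hits = []
    for zone in zones:
        in_th = zone.th_range[0] <= params["TH"] <= zone.th_range[1]
        in_ths = zone.ths_range[0] <= params["THS"] <= zone.ths_range[1]
        if in_th and in_ths:
            hits.append(zone.edge_id)
    return hits

# Moving the single indicator changes both parameters at once; the zones
# report which edges are detectable for the resulting parameter combination.
zones = [EdgeZone(1, (20, 120), (5, 40)), EdgeZone(2, (80, 200), (30, 90))]
print(detectable_edges(0.3, 0.3, zones))  # -> [1]
```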
Abstract:
A method of automatically adjusting lighting conditions improves the results of points-from-focus (PFF) 3D reconstruction. Multiple lighting levels are automatically found based on brightness criteria, and an image stack is acquired at each lighting level. In some embodiments, the number of light levels and their respective light settings may be determined based on trial exposure images acquired at a single global focus height, which is the best focus height for the entire region of interest rather than for just the darkest or brightest image pixels in the region of interest. The results of 3D reconstruction at each selected light level are combined using a Z-height quality metric. In one embodiment, the PFF data point Z-height value that is to be associated with an X-Y location is selected based on that PFF data point having the best corresponding Z-height quality metric value at that X-Y location.
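By way of a hedged illustration, the sketch below (Python with NumPy) combines PFF results from several light levels, assuming that each light level has already produced a per-pixel Z-height map and a per-pixel Z-height quality metric map, and that a larger quality value is better. The function name, array shapes, and toy data are assumptions for this example only.

```python
import numpy as np

def combine_pff_results(z_maps, quality_maps):
    """For each X-Y location, keep the Z-height from the light level whose
    quality metric is best (largest) at that location."""
    z_stack = np.stack(z_maps)        # shape: (num_light_levels, H, W)
    q_stack = np.stack(quality_maps)  # shape: (num_light_levels, H, W)
    best_level = np.argmax(q_stack, axis=0)  # (H, W) index of the best light level
    return np.take_along_axis(z_stack, best_level[None, ...], axis=0)[0]

# Two light levels over a 2x2 region of interest (toy data):
z = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([[1.1, 2.1], [3.1, 4.1]])]
q = [np.array([[0.9, 0.2], [0.8, 0.1]]), np.array([[0.3, 0.7], [0.4, 0.6]])]
print(combine_pff_results(z, q))  # -> [[1.0, 2.1], [3.0, 4.1]]
```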
Abstract:
A reliable method for discriminating between a plurality of edges in a region of interest of an edge feature video tool in a machine vision system comprises determining a scan direction and an intensity gradient threshold value, and defining associated gradient prominences. The gradient threshold value may be required to fall within a maximum range that is based on certain characteristics of an intensity gradient profile derived from an image of the region of interest. Gradient prominences are defined by limits at sequential intersections between the intensity gradient profile and the gradient threshold value. A single prominence is allowed to include gradient extrema corresponding to a plurality of respective edges. A gradient prominence-counting parameter is automatically determined that is indicative of the location of a selected edge in relation to the defined gradient prominences. The gradient prominence-counting parameter may correspond to the scan direction.
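As a non-authoritative sketch of the prominence-counting idea, the following Python example assumes that a prominence is a contiguous run of the intensity gradient profile whose magnitude exceeds the gradient threshold (bounded by sequential threshold crossings), and that the counting parameter is the 1-based index, in the scan direction, of the prominence containing a selected edge. All names, values, and the treatment of the scan direction are illustrative.

```python
import numpy as np

def find_prominences(gradient_profile, threshold):
    """Return (start, end) index limits of each run where |gradient| exceeds the threshold."""
    above = np.abs(np.asarray(gradient_profile)) > threshold
    transitions = np.diff(above.astype(int))
    starts = list(np.flatnonzero(transitions == 1) + 1)
    ends = list(np.flatnonzero(transitions == -1))
    if above[0]:
        starts.insert(0, 0)            # profile already above threshold at the first sample
    if above[-1]:
        ends.append(len(above) - 1)    # profile still above threshold at the last sample
    return list(zip(starts, ends))

def prominence_count_for_edge(gradient_profile, threshold, edge_index, reverse_scan=False):
    """1-based index, counted along the scan direction, of the prominence containing the edge."""
    prominences = find_prominences(gradient_profile, threshold)
    if reverse_scan:
        prominences = prominences[::-1]
    for count, (start, end) in enumerate(prominences, start=1):
        if start <= edge_index <= end:
            return count
    return None  # the selected edge does not lie within any defined prominence

profile = [0, 1, 6, 8, 2, 0, 5, 7, 9, 6, 1, 0]   # gradient magnitude along the scan line
print(prominence_count_for_edge(profile, threshold=4.0, edge_index=8))  # -> 2
```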