Abstract:
A method of image processing in a structured light imaging device is provided that includes receiving a captured image of a scene, wherein the captured image is captured by a camera of a projector-camera pair in the structured light imaging device, and wherein the captured image includes a pre-determined hierarchical binary pattern projected into the scene by the projector, wherein the pre-determined hierarchical binary pattern was formed by iteratively scaling a lower resolution binary pattern to multiple successively higher resolutions, rectifying the captured image to generate a rectified captured image, extracting a binary image from the rectified captured image at full resolution and at each resolution used to generate the pre-determined hierarchical binary pattern, and using the binary images to generate a depth map of the captured image.
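The following is a minimal sketch of two steps implied by this abstract: binarizing a rectified captured image at full resolution and at successively lower resolutions, and converting a disparity estimate into depth by triangulation. It is not the claimed method itself; the rectification step is assumed to have been done already, and names such as `baseline_mm` and `focal_px` are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_binary_pyramid(rectified, num_levels=3):
    """Binarize the rectified image at full resolution and at each
    successively lower resolution, mirroring the hierarchy used to
    build the projected pattern."""
    pyramid = []
    img = rectified
    for _ in range(num_levels):
        # Adaptive thresholding separates lit pattern pixels from dark ones.
        binary = cv2.adaptiveThreshold(
            img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
            cv2.THRESH_BINARY, blockSize=31, C=0)
        pyramid.append(binary)
        img = cv2.pyrDown(img)  # halve resolution for the next level
    return pyramid

def disparity_to_depth(disparity_px, baseline_mm, focal_px):
    """Standard triangulation: depth = f * B / d, valid where d > 0."""
    with np.errstate(divide="ignore"):
        depth = (focal_px * baseline_mm) / disparity_px
    depth[~np.isfinite(depth)] = 0.0
    return depth

if __name__ == "__main__":
    captured = cv2.imread("captured.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    if captured is not None:
        binaries = extract_binary_pyramid(captured, num_levels=3)
        print([b.shape for b in binaries])
```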
Abstract:
A method of generating an alignment matrix for a camera-radar system includes: receiving radar data originated by a radar subsystem and representative of an area of interest within a field of view for the radar subsystem; receiving image data originated by a camera subsystem and representative of the area of interest within a field of view for the camera subsystem; processing the radar data to detect features within the area of interest and to determine a reflected radar point with three dimensions relating to the camera-radar system; processing the image data to detect features within the area of interest and to determine a centroid with two dimensions relating to the camera-radar system; and computing an alignment matrix for radar and image data from the camera-radar system based on a functional relationship between the three dimensions for the reflected radar point and the two dimensions for the centroid.
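As one possible reading of the "functional relationship" in this abstract, the alignment matrix could be modeled as a 3x4 projective mapping from 3D radar points to 2D image centroids and estimated by a direct linear transform (DLT). The sketch below is an illustrative assumption, not the specific computation the abstract claims.

```python
import numpy as np

def estimate_alignment_matrix(radar_pts, image_pts):
    """radar_pts: (N, 3) reflected radar points; image_pts: (N, 2) centroids.
    Returns a 3x4 matrix P with [u, v, 1]^T ~ P @ [X, Y, Z, 1]^T."""
    assert radar_pts.shape[0] == image_pts.shape[0] >= 6
    rows = []
    for (X, Y, Z), (u, v) in zip(radar_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=np.float64)
    # The least-squares solution is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)

def project(P, radar_pt):
    """Map one radar point into the image using the alignment matrix."""
    X = np.append(radar_pt, 1.0)
    u, v, w = P @ X
    return np.array([u / w, v / w])
```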
Abstract:
A method of misalignment correction in a structured light device is provided that includes extracting features from a first captured image of a scene, wherein the first captured image is captured by an imaging sensor component of the structured light device, and wherein the first captured image includes a pattern projected into the scene by a projector component of the structured light device, matching the features of the first captured image to predetermined features of a pattern image corresponding to the projected pattern to generate a dataset of matching features, determining values of alignment correction parameters of an image alignment transformation model using the dataset of matching features, and applying the image alignment transformation model to a second captured image using the determined alignment correction parameter values.
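A minimal sketch of this flow follows, assuming ORB features and a homography as the image alignment transformation model; the abstract does not specify the feature type or the model, so both are illustrative choices.

```python
import cv2
import numpy as np

def estimate_alignment(captured, pattern):
    """Match features between a first captured image and the known pattern
    image, then fit the alignment correction parameters (a homography)."""
    orb = cv2.ORB_create(2000)
    kp_c, des_c = orb.detectAndCompute(captured, None)
    kp_p, des_p = orb.detectAndCompute(pattern, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_c, des_p)
    src = np.float32([kp_c[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_p[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def apply_alignment(image, H):
    """Apply the previously estimated correction to a second captured image."""
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```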
Abstract:
A method of automatically focusing a projector in a projection system is provided that includes projecting, by the projector, a binary pattern on a projection surface, capturing an image of the projected binary pattern by a camera synchronized with the projector, computing a depth map from the captured image, and adjusting focus of the projector based on the computed depth map.
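The control loop described here could look roughly like the sketch below. The hardware hooks (`project_pattern`, `capture_frame`, `set_focus_position`) and the depth-to-focus calibration curve are hypothetical placeholders, not a real projector or camera API, and the depth computation is delegated to a structured-light decoder such as the one outlined above.

```python
import numpy as np

def depth_to_focus_position(depth_mm, focus_curve):
    """focus_curve: (depth_mm, focus_step) pairs from a one-time calibration.
    Interpolates the focus motor position for the measured depth."""
    depths, steps = zip(*focus_curve)
    return float(np.interp(depth_mm, depths, steps))

def autofocus_once(projector, camera, compute_depth_map, focus_curve):
    projector.project_pattern("binary")     # hypothetical projector call
    frame = camera.capture_frame()          # camera synchronized with the projector
    depth_map = compute_depth_map(frame)    # e.g., structured-light decoding
    # Use the median depth of the projection surface as the focus target,
    # ignoring invalid (zero) depth values.
    valid = depth_map[depth_map > 0]
    target_depth = float(np.median(valid))
    projector.set_focus_position(depth_to_focus_position(target_depth, focus_curve))
    return target_depth
```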