Abstract:
A method for displaying a surround view on a single display screen is disclosed. A plurality of image frames for a particular time may be received from a corresponding plurality of cameras. A viewpoint warp map corresponding to a predetermined first virtual viewpoint may be selected, wherein the viewpoint warp map defines a source pixel location in the plurality of image frames for each output pixel location in the display screen. The viewpoint warp map may be predetermined offline and stored for later use. An output image may be synthesized for the display screen by selecting pixel data for each pixel of the output image from the plurality of image frames in accordance with the viewpoint warp map. The synthesized image may then be displayed on the display screen.
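A minimal sketch of the warp-map lookup described above, assuming the precomputed map stores, for each output pixel, a source camera index and integer source coordinates (the array names and layout are illustrative, not taken from the disclosure):

import numpy as np

def synthesize_view(frames, cam_idx, src_rows, src_cols):
    """Select output pixels from the camera frames per the viewpoint warp map.

    frames   : list of HxWx3 uint8 arrays, one per camera, same time instant
    cam_idx  : HoxWo int array, camera that supplies each output pixel
    src_rows : HoxWo int array, source row in that camera's frame
    src_cols : HoxWo int array, source column in that camera's frame
    """
    out = np.zeros(cam_idx.shape + (3,), dtype=np.uint8)
    for c, frame in enumerate(frames):
        mask = cam_idx == c                      # output pixels drawn from camera c
        out[mask] = frame[src_rows[mask], src_cols[mask]]
    return out

In practice the stored map would typically carry fractional coordinates and blending weights so that seams between adjacent cameras can be interpolated rather than hard-switched.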
Abstract:
A method for generating a surround view (SV) image for display in a view port of an SV processing system is provided that includes capturing, by at least one processor, corresponding images of video streams from each camera of a plurality of cameras and generating, by the at least one processor, the SV image using ray tracing for lens remapping to identify coordinates of pixels in the images corresponding to pixels in the SV image.
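A hedged illustration of ray tracing for lens remapping: for one output pixel, the viewing ray from the virtual viewpoint is intersected with a world surface, and the resulting world point is projected through a fisheye lens model to find the source pixel. The equidistant lens model and flat ground plane used here are assumptions for the sketch, not limits of the method:

import numpy as np

def trace_output_pixel(vp, ray_dir, cam_pos, cam_R, focal_px, center_px):
    """Map one output ray to (u, v) pixel coordinates in a physical fisheye camera."""
    # 1. Intersect the viewing ray with the ground plane z = 0.
    if abs(ray_dir[2]) < 1e-9:
        return None                      # ray never reaches the ground
    t = -vp[2] / ray_dir[2]
    if t <= 0:
        return None                      # intersection lies behind the viewpoint
    world_pt = vp + t * ray_dir

    # 2. Express the world point in the physical camera's frame.
    pc = cam_R @ (world_pt - cam_pos)
    if pc[2] <= 0:
        return None                      # point is behind the camera

    # 3. Equidistant fisheye projection: image radius grows linearly with angle.
    rho = np.hypot(pc[0], pc[1])
    theta = np.arctan2(rho, pc[2])
    if rho < 1e-9:
        return center_px                 # point on the optical axis
    r = focal_px * theta
    return (center_px[0] + r * pc[0] / rho,
            center_px[1] + r * pc[1] / rho)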
Abstract:
A method of image processing in a structured light imaging device is provided that includes capturing a plurality of images of a scene into which a structured light pattern is projected by a projector in the structured light imaging device, extracting features in each of the captured images, finding feature matches between a reference image of the plurality of captured images and each of the other images in the plurality of captured images, rectifying each of the other images to align with the reference image, wherein each image of the other images is rectified based on feature matches between the image and the reference image, combining the rectified other images and the reference image using interpolation to generate a high resolution image, and generating a depth image using the high resolution image.
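The align-and-combine step can be pictured with the rough sketch below, which assumes planar alignment via a homography estimated from ORB feature matches and substitutes simple averaging for the interpolation onto a higher-resolution grid; both choices are simplifications for illustration:

import cv2
import numpy as np

def align_and_combine(reference, others):
    """Rectify each captured image onto the reference and average the results."""
    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)

    accum = reference.astype(np.float32)
    count = 1
    for img in others:
        kp, des = orb.detectAndCompute(img, None)
        matches = matcher.match(des, des_ref)        # image -> reference matches
        if len(matches) < 4:
            continue                                 # too few matches to estimate a warp
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        if H is None:
            continue
        warped = cv2.warpPerspective(img, H, (reference.shape[1], reference.shape[0]))
        accum += warped.astype(np.float32)
        count += 1
    return (accum / count).astype(np.uint8)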
Abstract:
A method for computing a depth map of a scene in a structured light imaging system including a time-of-flight (TOF) sensor and a projector is provided that includes capturing a plurality of high frequency phase-shifted structured light images of the scene using a camera in the structured light imaging system, generating, concurrently with the capturing of the plurality of high frequency phase-shifted structured light images, a TOF depth image of the scene using the TOF sensor, and computing the depth map from the plurality of high frequency phase-shifted structured light images, wherein the TOF depth image is used for phase unwrapping.
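A compact sketch of the unwrapping idea, assuming N equally phase-shifted fringe images and a coarse phase already predicted from the TOF depth image (that depth-to-phase conversion is assumed, not shown):

import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N equally phase-shifted fringe images (N >= 3)."""
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    s = sum(img * np.sin(sh) for img, sh in zip(images, shifts))
    c = sum(img * np.cos(sh) for img, sh in zip(images, shifts))
    return np.arctan2(-s, c)                 # wrapped into (-pi, pi]

def unwrap_with_tof(phi_wrapped, phi_coarse):
    """Resolve the 2*pi ambiguity of the high frequency phase using TOF data."""
    k = np.round((phi_coarse - phi_wrapped) / (2 * np.pi))   # integer fringe order
    return phi_wrapped + 2 * np.pi * k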
Abstract:
An image classification system includes a convolutional neural network, a confidence predictor, and a fusion classifier. The convolutional neural network is configured to assign a plurality of probability values to each pixel of a first image of a scene and a second image of the scene. Each of the probability values corresponds to a different feature that the convolutional neural network is trained to identify. The confidence predictor is configured to assign a confidence value to each pixel of the first image and to each pixel of the second image. The confidence values correspond to the greatest of the probability values generated by the convolutional neural network for each pixel. The fusion classifier is configured to assign, to each pixel of the first image, a feature that corresponds to the higher of the confidence values assigned to that pixel in the first image and the corresponding pixel in the second image.
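A hedged sketch of the per-pixel fusion rule, operating on two HxWxC probability volumes that the convolutional neural network would produce for the two images (shapes and names are assumptions):

import numpy as np

def fuse_labels(probs_a, probs_b):
    """Pick, per pixel, the class from whichever image was classified more confidently.

    probs_a, probs_b : HxWxC softmax outputs for the first and second image
    Returns an HxW array of class indices.
    """
    conf_a = probs_a.max(axis=-1)            # confidence = largest class probability
    conf_b = probs_b.max(axis=-1)
    labels_a = probs_a.argmax(axis=-1)
    labels_b = probs_b.argmax(axis=-1)
    return np.where(conf_a >= conf_b, labels_a, labels_b)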
Abstract:
A method of depth map optimization using an adaptive structured light pattern is provided that includes capturing, by a camera in a structured light imaging device, a first image of a scene into which a pre-determined structured light pattern is projected by a projector in the structured light imaging device, generating a first disparity map based on the captured first image and the structured light pattern, adapting the structured light pattern based on the first disparity map to generate an adaptive pattern, wherein at least one region of the structured light pattern is replaced by a different pattern, capturing, by the camera, a second image of the scene into which the adaptive pattern is projected by the projector, generating a second disparity map based on the captured second image and the adaptive pattern, and generating a depth image using the second disparity map.
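One way to picture the adaptation step is the sketch below, which assumes the rule is simply to swap in an alternative pattern wherever the first disparity map has too many holes; the block size and hole test are illustrative assumptions, not the claimed adaptation criterion:

import numpy as np

def adapt_pattern(pattern, alt_pattern, disparity, block=32, min_valid=0.5):
    """Replace regions of the projected pattern where disparity coverage is poor."""
    adapted = pattern.copy()
    h, w = disparity.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            region = disparity[r:r + block, c:c + block]
            valid_ratio = np.count_nonzero(region > 0) / region.size
            if valid_ratio < min_valid:      # too many holes: use the alternative pattern here
                adapted[r:r + block, c:c + block] = alt_pattern[r:r + block, c:c + block]
    return adapted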
Abstract:
A method of image processing in a structured light imaging system is provided that includes receiving a captured image of a scene, wherein the captured image is captured by a camera of a projector-camera pair and includes a binary pattern projected into the scene by the projector, rectifying the captured image, applying a filter to the rectified captured image to generate a local threshold image, wherein the local threshold image includes a local threshold value for each pixel in the rectified captured image, and extracting a binary image from the rectified captured image, wherein a value of each location in the binary image is determined based on a comparison of a value of a pixel in a corresponding location in the rectified captured image to a local threshold value in a corresponding location in the local threshold image.
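A short sketch of the local-threshold extraction, assuming the filter is a simple normalized box (mean) filter; the disclosure does not restrict the filter to this choice:

import cv2
import numpy as np

def extract_binary(rectified, ksize=31, bias=0):
    """Binarize a rectified structured light capture against a per-pixel threshold."""
    # Local threshold image: mean intensity over a ksize x ksize neighborhood.
    local_thresh = cv2.boxFilter(rectified, -1, (ksize, ksize))
    # The binary value is 1 where the pixel is brighter than its local threshold.
    return (rectified.astype(np.int16) > local_thresh.astype(np.int16) + bias).astype(np.uint8)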
Abstract:
An apparatus and method for geometrically correcting an arbitrarily shaped input frame and generating an undistorted output frame are disclosed. The method includes capturing arbitrarily shaped input images with multiple optical devices and processing the images, identifying redundant blocks and valid blocks in each of the images, allocating an output frame with an output frame size and dividing the output frame into rectangular regions, programming the apparatus and disabling processing for invalid blocks in each of the regions, fetching data corresponding to each of the valid blocks and storing the data in an internal memory, interpolating data for each of the regions with stitching and composing the valid blocks for the output frame, and displaying the output frame on a display module.
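The block-wise correction loop might look like the sketch below, assuming a precomputed remap table per optical device and a per-block validity mask; the block size and helper names are illustrative assumptions:

import cv2
import numpy as np

def correct_blocks(src, map_x, map_y, block_valid, block=64):
    """Fill only the valid rectangular regions of the undistorted output frame.

    src         : distorted input image from one optical device
    map_x/map_y : float32 arrays giving, per output pixel, source coordinates in src
    block_valid : 2D bool array with one entry per block, False for disabled blocks
    """
    h, w = map_x.shape
    out = np.zeros((h, w) + src.shape[2:], dtype=src.dtype)
    for br in range(block_valid.shape[0]):
        for bc in range(block_valid.shape[1]):
            if not block_valid[br, bc]:
                continue                              # processing disabled for this block
            rs, cs = br * block, bc * block
            re, ce = min(rs + block, h), min(cs + block, w)
            mx = np.ascontiguousarray(map_x[rs:re, cs:ce])
            my = np.ascontiguousarray(map_y[rs:re, cs:ce])
            # Fetch and bilinearly interpolate the source data for this region.
            out[rs:re, cs:ce] = cv2.remap(src, mx, my, cv2.INTER_LINEAR)
    return out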