Abstract:
Techniques including obtaining a first location of a vehicle, the vehicle having one or more cameras disposed about the vehicle, and wherein each camera is associated with a physical camera pose indicating where that camera is located with respect to the vehicle, capturing, by a first camera, a first image of a first area, associating the first image with the first location of the vehicle when the first image was captured, moving the vehicle in a direction so that the first area is no longer within a field of view of the first camera, obtaining a second location of the vehicle, determining a temporal camera pose based on the physical camera pose of the first camera and the second location of the vehicle, and rendering a view of the first area based on the temporal camera pose and the first image.
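A minimal sketch of the temporal-camera-pose computation this abstract describes, assuming 4x4 homogeneous transforms and a planar (SE(2)) vehicle trajectory; all function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def se2_pose(x, y, yaw):
    """Build a 4x4 world<-vehicle transform from a planar vehicle location."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, 0.0]
    return T

def temporal_camera_pose(T_world_vehicle_t1, T_world_vehicle_t2, T_vehicle_cam):
    """Pose of the first camera at capture time t1, expressed in the
    vehicle frame at the current time t2: camera@t1 -> world -> vehicle@t2."""
    return np.linalg.inv(T_world_vehicle_t2) @ T_world_vehicle_t1 @ T_vehicle_cam

# Example: front camera image captured at t1; vehicle then drove 3 m forward.
T1 = se2_pose(0.0, 0.0, 0.0)                        # first vehicle location
T2 = se2_pose(3.0, 0.0, 0.0)                        # second vehicle location
T_cam = np.eye(4); T_cam[:3, 3] = [1.8, 0.0, 0.6]   # physical camera pose
T_temporal = temporal_camera_pose(T1, T2, T_cam)
# A renderer can now warp the stored first image from T_temporal to
# synthesize a view of the first area, which is no longer in the camera's FOV.
print(T_temporal[:3, 3])   # camera sits 1.2 m behind the new vehicle origin
```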
Abstract:
Disclosed examples include three-dimensional imaging systems and methods to reconstruct a three-dimensional scene from first and second image data sets obtained from a single camera at first and second times, including computing feature point correspondences between the image data sets, computing an essential matrix that characterizes relative positions of the camera at the first and second times, computing pairs of first and second projective transforms that individually correspond to regions of interest that exclude an epipole of the captured scene, as well as computing first and second rectified image data sets in which the feature point correspondences are aligned on a spatial axis by respectively applying the corresponding first and second projective transforms to corresponding portions of the first and second image data sets, and computing disparity values of a stereo disparity map according to the rectified image data sets to construct the three-dimensional scene.
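A rough approximation of this pipeline using standard OpenCV building blocks (ORB correspondences, essential matrix, uncalibrated projective rectification, SGBM disparity). This sketch does not handle the case the abstract addresses where the epipole lies inside the image, and the intrinsic matrix K is assumed known:

```python
import cv2
import numpy as np

def disparity_from_two_frames(img1, img2, K):
    # 1. Feature point correspondences between the two single-camera frames.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # 2. Essential matrix characterizing the relative camera positions.
    E, inl = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    p1, p2 = p1[inl.ravel() == 1], p2[inl.ravel() == 1]
    F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)   # fundamental matrix

    # 3. A pair of projective transforms (homographies) that rectify the
    #    images so correspondences align on the horizontal axis.
    h, w = img1.shape[:2]
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(p1, p2, F, (w, h))
    r1 = cv2.warpPerspective(img1, H1, (w, h))
    r2 = cv2.warpPerspective(img2, H2, (w, h))

    # 4. Disparity values of a stereo disparity map from the rectified pair.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    return sgbm.compute(r1, r2).astype(np.float32) / 16.0
```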
Abstract:
Techniques including obtaining a first location of a vehicle, the vehicle having two or more cameras disposed about the vehicle, each camera associated with a physical camera pose, capturing, by a first camera, a first image of a first area in a first field of view, associating the first image with the first location of the vehicle when the first image was captured, moving the vehicle in a direction so that the first area is in an expected second field of view of a second camera, wherein the second camera is not capturing images, obtaining a second location of the vehicle, determining a temporal camera pose based on a first physical camera pose, a second physical camera pose, and the second location of the vehicle, and rendering a view of the first area from the expected second field of view of the second camera based on the first image.
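One concrete way to render the expected second-camera view from the stored first image is a plane-induced homography over the ground plane; this is an illustrative choice under stated assumptions, not necessarily the patented rendering method. With X2 = R X1 + t and the plane n.X1 = d in first-camera coordinates, the mapping is H = K2 (R + t n^T / d) K1^-1:

```python
import numpy as np
import cv2

def expected_second_view(img1, K1, K2, T_wv1, T_wv2, T_v_cam1, T_v_cam2,
                         n, d, out_size):
    """Warp the first camera's image into the expected field of view of the
    (non-capturing) second camera for points on the ground plane."""
    # Temporal pose: the first camera at capture time, expressed in the
    # second camera's frame at the second vehicle location.
    T_w_cam1 = T_wv1 @ T_v_cam1
    T_w_cam2 = T_wv2 @ T_v_cam2
    T_21 = np.linalg.inv(T_w_cam2) @ T_w_cam1
    R, t = T_21[:3, :3], T_21[:3, 3:4]
    # Plane-induced homography for the plane n.X = d in first-camera coords
    # (derivation: X2 = R X1 + t with n.X1 = d gives X2 = (R + t n^T/d) X1).
    H = K2 @ (R + t @ n.reshape(1, 3) / d) @ np.linalg.inv(K1)
    return cv2.warpPerspective(img1, H / H[2, 2], out_size)
```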
Abstract:
A method for generating a surround view (SV) image for display in a view port of an SV processing system is provided that includes capturing, by at least one processor, corresponding images of video streams from each camera of a plurality of cameras and generating, by the at least one processor, the SV image using ray tracing for lens remapping to identify coordinates of pixels in the images corresponding to pixels in the SV image.
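A brute-force sketch of ray tracing for lens remapping: for each surround-view pixel, trace a ray from the virtual camera, intersect a world surface (a flat ground plane here, where production systems typically use a bowl mesh), and project the hit point through an assumed equidistant fisheye lens model to obtain source-image coordinates. The resulting lookup table can then be applied with cv2.remap; the loop is deliberately naive for clarity:

```python
import numpy as np

def fisheye_project(X_cam, K):
    """Equidistant fisheye projection r = f * theta (illustrative lens model)."""
    x, y, z = X_cam
    theta = np.arctan2(np.hypot(x, y), z)        # angle from optical axis
    phi = np.arctan2(y, x)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return (cx + fx * theta * np.cos(phi), cy + fy * theta * np.sin(phi))

def remap_lut(virt_K, T_world_virt, T_world_cam, cam_K, out_w, out_h):
    """Per-pixel LUT: for each SV-image pixel, trace a ray from the virtual
    camera, intersect the ground plane z=0, and project the hit point
    through the physical fisheye camera to get source coordinates."""
    lut = np.full((out_h, out_w, 2), -1.0)
    R_v, c_v = T_world_virt[:3, :3], T_world_virt[:3, 3]
    T_cam_world = np.linalg.inv(T_world_cam)
    Kinv = np.linalg.inv(virt_K)
    for v in range(out_h):
        for u in range(out_w):
            ray = R_v @ (Kinv @ np.array([u, v, 1.0]))  # ray in world frame
            if abs(ray[2]) < 1e-9:
                continue                                # parallel to ground
            s = -c_v[2] / ray[2]                        # hit ground plane z=0
            if s <= 0:
                continue
            X_w = c_v + s * ray
            X_c = T_cam_world[:3, :3] @ X_w + T_cam_world[:3, 3]
            if X_c[2] <= 0:
                continue          # simplification: ignore >180-degree FOV
            lut[v, u] = fisheye_project(X_c, cam_K)
    return lut                    # feed to cv2.remap to render the SV image
```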
Abstract:
Described examples include an integrated circuit having a point identifier configured to receive a stream of input frames and to identify point pairs on objects in the input frames. A ground plane converter transposes a position of the point pairs to a ground plane, the ground plane having a fixed relationship in at least one dimension relative to a source of the input frames. A motion estimator estimates a motion of the source of the input frames by comparing a plurality of point pairs between at least two input frames as transposed to the ground plane, in which the motion estimator compares a motion estimate determined from the plurality of point pairs and then determines a refined motion estimate from the plurality of point pairs after excluding outliers.
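A simplified sketch of the refinement step, assuming the point pairs have already been transposed onto the ground plane (in metric coordinates) via the fixed camera-to-ground relationship; the planar rigid fit and the residual threshold below are illustrative choices, not the patented circuit:

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares 2-D rigid transform with B ~ A @ R.T + t (Kabsch)."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cb - R @ ca

def estimate_motion(prev_pts, curr_pts, thresh=0.05, iters=2):
    """Estimate planar ego-motion (R, t) from ground-plane point pairs,
    then refine by excluding outlier pairs and re-fitting."""
    pts_a, pts_b = prev_pts, curr_pts
    for _ in range(iters):
        R, t = rigid_fit(pts_a, pts_b)
        resid = np.linalg.norm((pts_a @ R.T + t) - pts_b, axis=1)
        keep = resid < thresh          # drop point pairs that disagree
        if keep.all():
            break                      # no outliers left; estimate is refined
        pts_a, pts_b = pts_a[keep], pts_b[keep]
    return R, t
```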
Abstract:
An apparatus comprising a memory and one or more processing circuits is provided. The memory stores a blend table having blend weights. The processing circuits, for each partition of the blend table, determine whether a subset of the pixels associated with the partition includes pixels associated with seamlines defined in a three-dimensional surface representation of the scene. If none of the subset of the pixels are associated with the seamlines, the processing circuits populate a region of the virtual image corresponding to the partition with pixel values from an image captured by one of the plurality of image capture devices. If one or more pixels of the subset are associated with the seamlines, the processing circuits populate the region of the virtual image associated with the partition with blended pixel values from two or more images captured by two or more of the plurality of image capture devices.
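A compact sketch of the per-partition fast path versus seamline blending, assuming the source images have already been remapped into the virtual-view coordinate frame and that the blend table stores, per pixel, two contributing camera indices and a weight (all illustrative assumptions):

```python
import numpy as np

def compose_virtual_image(warped, cam_a, cam_b, weight, seam_mask, part=32):
    """Compose the virtual (stitched) image partition by partition.
    warped: (n_cams, h, w, 3) images already remapped to the virtual view;
    cam_a/cam_b/weight: per-pixel blend table; seam_mask: seamline pixels."""
    n, h, w, _ = warped.shape
    out = np.empty((h, w, 3), warped.dtype)
    for y in range(0, h, part):
        for x in range(0, w, part):
            ys, xs = np.mgrid[y:min(y + part, h), x:min(x + part, w)]
            if not seam_mask[ys, xs].any():
                # No seamline touches this partition: copy pixel values
                # straight from the single contributing camera (all pixels
                # in such a partition share one camera by construction).
                out[ys, xs] = warped[cam_a[y, x], ys, xs]
            else:
                # Partition contains seamline pixels: blend the two
                # contributing cameras with the stored blend weights.
                a = warped[cam_a[ys, xs], ys, xs].astype(np.float32)
                b = warped[cam_b[ys, xs], ys, xs].astype(np.float32)
                wgt = weight[ys, xs][..., None]
                out[ys, xs] = (wgt * a + (1 - wgt) * b).astype(warped.dtype)
    return out
```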