Abstract:
A method for displaying a surround view on a single display screen is disclosed. A plurality of image frames for a particular time may be received from a corresponding plurality of cameras. A viewpoint warp map corresponding to a predetermined first virtual viewpoint may be selected, wherein the viewpoint warp map defines a source pixel location in the plurality of image frames for each output pixel location in the display screen. The warp map may be predetermined offline and stored for later use. An output image is synthesized for the display screen by selecting pixel data for each pixel of the output image from the plurality of image frames in accordance with the viewpoint warp map. The synthesized image is then displayed on the display screen.
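
As a rough illustration of the lookup-based synthesis, the following C sketch copies each output pixel from the camera frame and source location named by a precomputed warp table. The WarpEntry layout, the grayscale pixel format, and the nearest-neighbor fetch are assumptions made for illustration; the abstract does not prescribe them.

#include <stdint.h>

/* Hypothetical warp-map entry: for one output pixel, which camera and which
 * source pixel to sample. The layout is illustrative only. */
typedef struct {
    uint8_t  cam;    /* index of the source camera */
    uint16_t src_x;  /* source column in that camera's frame */
    uint16_t src_y;  /* source row in that camera's frame */
} WarpEntry;

/* Synthesize one grayscale output frame by pure table lookup.
 * frames[c] points to camera c's image with row stride in_w bytes;
 * warp holds out_w * out_h entries, precomputed offline for one viewpoint. */
static void synthesize_view(const uint8_t *const *frames, int in_w,
                            const WarpEntry *warp,
                            uint8_t *out, int out_w, int out_h)
{
    for (int y = 0; y < out_h; ++y) {
        for (int x = 0; x < out_w; ++x) {
            const WarpEntry *e = &warp[y * out_w + x];
            /* Nearest-neighbor fetch; a real system would interpolate and
             * blend in the overlap regions between cameras. */
            out[y * out_w + x] = frames[e->cam][e->src_y * in_w + e->src_x];
        }
    }
}
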
Abstract:
A method for automatic generation of calibration parameters for a surround view (SV) camera system is provided that includes capturing a video stream from each camera comprised in the SV camera system, wherein each video stream captures two calibration charts in a field of view of the camera generating the video stream; displaying the video streams in a calibration screen on a display device coupled to the SV camera system, wherein a bounding box is overlaid on each calibration chart; detecting feature points of the calibration charts; displaying the video streams in the calibration screen with the bounding box overlaid on each calibration chart and detected feature points overlaid on the respective calibration charts; computing calibration parameters based on the feature points and platform-dependent parameters comprising data regarding the size and placement of the calibration charts; and storing the calibration parameters in the SV camera system.
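
The per-camera calibration flow can be sketched in C as below. The helpers detect_chart_corners() and solve_extrinsics() are hypothetical stand-ins for the chart feature detection and pose-estimation steps, and the parameter structures are illustrative rather than the disclosed data formats.

#include <stdio.h>

#define NUM_CAMERAS       4
#define CORNERS_PER_CHART 4

typedef struct { double x, y; } Point2D;
typedef struct { double yaw, pitch, roll, tx, ty, tz; } CalibParams;

typedef struct {
    double  chart_size_m;   /* physical edge length of a calibration chart */
    Point2D chart_pos[2];   /* known placement of the two charts per camera */
} PlatformParams;

/* Stub: a real system locates the chart corner points in the live frame. */
static int detect_chart_corners(const unsigned char *frame,
                                Point2D corners[2 * CORNERS_PER_CHART])
{
    (void)frame; (void)corners;
    return 2 * CORNERS_PER_CHART;   /* pretend all corners were found */
}

/* Stub: a real system solves camera pose from image points vs. known chart
 * placement taken from the platform-dependent parameters. */
static CalibParams solve_extrinsics(const Point2D *img_pts, int n,
                                    const PlatformParams *platform)
{
    (void)img_pts; (void)n; (void)platform;
    CalibParams p = {0};
    return p;
}

int main(void)
{
    PlatformParams platform = { 0.5, { {1.0, 2.0}, {1.0, -2.0} } };
    static unsigned char frame[640 * 480];   /* placeholder captured frame */
    CalibParams table[NUM_CAMERAS];

    for (int cam = 0; cam < NUM_CAMERAS; ++cam) {
        Point2D corners[2 * CORNERS_PER_CHART];
        int n = detect_chart_corners(frame, corners);
        table[cam] = solve_extrinsics(corners, n, &platform);
    }
    /* table[] would then be stored in the SV camera system. */
    printf("calibrated %d cameras\n", NUM_CAMERAS);
    return 0;
}
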
Abstract:
An apparatus and method for geometrically correcting an arbitrarily shaped input frame and generating an undistorted output frame. The method includes capturing arbitrarily shaped input images with multiple optical devices and processing the images, identifying redundant blocks and valid blocks in each of the images, allocating an output frame with an output frame size and dividing the output frame into rectangular regions, programming the apparatus and disabling processing for invalid blocks in each of the regions, fetching data corresponding to each of the valid blocks and storing it in an internal memory, interpolating data for each of the regions while stitching and composing the valid blocks into the output frame, and displaying the output frame on a display module.
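
The block-skipping idea can be illustrated with the C sketch below, assuming a hypothetical per-block validity mask and a fixed block size; the processing of a valid block is reduced to a placeholder fill here, and the output dimensions are assumed to be multiples of the block size.

#include <stdint.h>
#include <string.h>

#define BLK 32   /* assumed block edge length in pixels */

/* Placeholder for the fetch-and-interpolate work done on one valid block. */
static void process_block(uint8_t *out, int out_w, int bx, int by)
{
    for (int y = by * BLK; y < (by + 1) * BLK; ++y)
        memset(&out[y * out_w + bx * BLK], 0x80, BLK);
}

/* Walk the rectangular regions of the output frame and skip the blocks
 * flagged invalid (e.g. blocks outside the warped image content). */
static void render_frame(uint8_t *out, int out_w, int out_h,
                         const uint8_t *valid /* 1 = valid, 0 = disabled */)
{
    int bw = out_w / BLK, bh = out_h / BLK;
    for (int by = 0; by < bh; ++by)
        for (int bx = 0; bx < bw; ++bx)
            if (valid[by * bw + bx])
                process_block(out, out_w, bx, by);
}
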
Abstract:
An apparatus and method for geometrically correcting a distorted input frame and generating an undistorted output frame. The apparatus includes an external memory block that stores the input frame, a counter block to compute output coordinates of the output frame for a region based on a block size of the region, a back mapping block to generate input coordinates corresponding to each of the output coordinates, a bounding module to compute input blocks corresponding to each of the input coordinates, a buffer module to fetch data corresponding to each of the input blocks, an interpolation module to interpolate data from the buffer module, and a display module that receives the interpolated data for each of the regions and stitches an output image. The method includes determining the size of the output block based on magnification data.
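
A minimal C sketch of the per-region processing path follows: the coordinate loop plays the role of the counter block, back_map() stands in for the back mapping block (here an identity placeholder rather than a real lens-distortion model), and bilinear() stands in for the interpolation module. The function names and the single-channel pixel format are assumptions for illustration.

#include <stdint.h>
#include <math.h>

typedef struct { float x, y; } InCoord;

/* Placeholder back mapping: identity. A real mapping would apply the
 * inverse lens-distortion model to produce fractional input coordinates. */
static InCoord back_map(int ox, int oy)
{
    InCoord c = { (float)ox, (float)oy };
    return c;
}

/* Bilinear interpolation of one output sample from the fetched input data. */
static uint8_t bilinear(const uint8_t *in, int in_w, int in_h, InCoord c)
{
    int x0 = (int)floorf(c.x), y0 = (int)floorf(c.y);
    float fx = c.x - x0, fy = c.y - y0;
    if (x0 < 0 || y0 < 0 || x0 + 1 >= in_w || y0 + 1 >= in_h)
        return 0;
    const uint8_t *p = &in[y0 * in_w + x0];
    float top = p[0]    * (1 - fx) + p[1]        * fx;
    float bot = p[in_w] * (1 - fx) + p[in_w + 1] * fx;
    return (uint8_t)(top * (1 - fy) + bot * fy + 0.5f);
}

/* Correct one rectangular output region of size rw x rh at (rx, ry). */
static void correct_region(const uint8_t *in, int in_w, int in_h,
                           uint8_t *out, int out_w,
                           int rx, int ry, int rw, int rh)
{
    for (int oy = ry; oy < ry + rh; ++oy)
        for (int ox = rx; ox < rx + rw; ++ox)
            out[oy * out_w + ox] = bilinear(in, in_w, in_h, back_map(ox, oy));
}
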
Abstract:
A method includes reading a composite video descriptor data structure and a plurality of window descriptor data structures. The composite video descriptor data structure defines a width and height of a composite video frame, and each window descriptor data structure defines the starting X and Y coordinates, width, and height of each constituent video window to be rendered in the composite video frame. The method further includes determining top and bottom Y coordinates for each constituent video window, as well as determining left and right X coordinates for each constituent video window. The method also includes dividing each constituent video window using the top and bottom Y coordinates to obtain Y-divided sub-windows, dividing each Y-divided sub-window using the left and right X coordinates to obtain X- and Y-divided sub-windows, and storing the X, Y coordinates of opposing corners of each X- and Y-divided sub-window in a storage device.
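
The division into sub-windows can be sketched in C as below, assuming simplified descriptor layouts and cut lists (the top/bottom Y and left/right X edges gathered from all window descriptors) sorted in ascending order; the field names and the SubWindow corner representation are illustrative, not the disclosed formats.

typedef struct { int width, height; } CompositeDesc;     /* composite frame */
typedef struct { int x, y, width, height; } WindowDesc;  /* one video window */
typedef struct { int x0, y0, x1, y1; } SubWindow;        /* opposing corners */

/* Split one constituent window at the given Y cuts, then at the given X cuts,
 * storing the corner coordinates of every resulting sub-window. Returns the
 * total number of sub-windows; only the first max_out are written to out. */
static int split_window(const WindowDesc *w,
                        const int *ycuts, int ny,
                        const int *xcuts, int nx,
                        SubWindow *out, int max_out)
{
    int count = 0;
    int y0 = w->y;
    for (int i = 0; i <= ny; ++i) {
        int y1 = (i < ny && ycuts[i] < w->y + w->height) ? ycuts[i]
                                                         : w->y + w->height;
        if (y1 <= y0)
            continue;                         /* cut above the current strip */
        int x0 = w->x;
        for (int j = 0; j <= nx; ++j) {
            int x1 = (j < nx && xcuts[j] < w->x + w->width) ? xcuts[j]
                                                            : w->x + w->width;
            if (x1 <= x0)
                continue;                     /* cut left of the current cell */
            if (count < max_out)
                out[count] = (SubWindow){ x0, y0, x1, y1 };
            ++count;
            x0 = x1;
            if (x0 >= w->x + w->width)
                break;
        }
        y0 = y1;
        if (y0 >= w->y + w->height)
            break;
    }
    return count;
}
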