Abstract:
A pair of cameras having an overlapping field of view is aligned based on images captured by image sensors of the pair of cameras. A pixel shift is identified between the images. Based on the identified pixel shift, a calibration is applied to one or both of the pair of cameras. To determine the pixel shift, the camera applies correlation methods including edge matching. Calibrating the pair of cameras may include adjusting a read window on an image sensor. The pixel shift can also be used to determine a time lag, which can be used to synchronize subsequent image captures.
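The pixel-shift estimation described above can be illustrated with a small sketch. The abstract does not specify the exact correlation method beyond "edge matching," so the sketch below uses edge maps (circular gradient magnitude) followed by phase correlation as one plausible instance; the function name and details are hypothetical, not the patented method.

```python
import numpy as np

def estimate_pixel_shift(img_a, img_b):
    """Estimate the (row, col) pixel shift between two overlapping
    grayscale images by correlating their edge maps.
    Illustrative sketch only; names and method details are assumptions."""
    def edges(img):
        # Circular forward differences emphasize edges for matching.
        f = img.astype(float)
        gy = np.roll(f, -1, axis=0) - f
        gx = np.roll(f, -1, axis=1) - f
        return np.hypot(gx, gy)

    a, b = edges(img_a), edges(img_b)
    # Phase correlation: the inverse FFT of the normalized cross-power
    # spectrum peaks at the shift that maps img_a onto img_b.
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

The returned shift could then drive the calibration step (e.g., a read-window adjustment) or, for a moving scene, be converted into a time lag between the two sensors' exposures.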
Abstract:
In a video capture system, a virtual lens is simulated when applying a crop or zoom effect to an input video. An input video frame is received from the input video that has a first field of view and an input lens distortion. A selection of a sub-frame representing a portion of the input video frame is obtained that has a second field of view smaller than the first field of view. The sub-frame is processed to remap the input lens distortion to a desired lens distortion in the sub-frame. The processed sub-frame is output.
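The remapping step can be sketched as an inverse warp: for each output pixel, find the source pixel whose radius under the input lens model corresponds to the same undistorted ray as the target model. The single-coefficient polynomial model r' = r(1 + k r²) and nearest-neighbor sampling below are simplifying assumptions, not the models used by the system.

```python
import numpy as np

def remap_distortion(frame, k_in, k_out):
    """Resample a grayscale sub-frame so its radial distortion follows a
    target polynomial model (k_out) instead of the input model (k_in).
    Hypothetical one-term models; a sketch, not the actual implementation."""
    h, w = frame.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dx, dy) / max(cy, cx)          # normalized radius
    # Ratio of input-model to target-model radial scaling maps output
    # coordinates back into input coordinates.
    scale = (1 + k_in * r**2) / (1 + k_out * r**2)
    src_y = np.clip(np.rint(cy + dy * scale), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(cx + dx * scale), 0, w - 1).astype(int)
    return frame[src_y, src_x]
```

When the input and target models coincide the warp reduces to the identity, and the image center is always a fixed point, which makes the sketch easy to sanity-check.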
Abstract:
Hyper-hemispherical images may be combined to generate a rectangular projection of a spherical image having an equatorial stitch line along a line of lowest distortion in the two images. First and second circular images are received representing respective hyper-hemispherical fields of view. A video processing device may project each circular image to a respective rectangular image by mapping an outer edge of the circular image to a first edge of the rectangular image and mapping a center point of the circular image to a second edge of the respective rectangular image. The rectangular images may be stitched together along the edges corresponding to the outer edges of the original circular images.
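The center-to-one-edge, rim-to-the-other-edge mapping described above is essentially a polar unwrap. A minimal sketch, assuming a centered circular image and nearest-neighbor sampling (the actual projection and interpolation are not specified by the abstract):

```python
import numpy as np

def unwrap_circular(img, out_h, out_w):
    """Project a circular grayscale image to a rectangle: the circle's
    center maps to the rectangle's first row, the outer rim to its last
    row, and angle maps across the width. Illustrative sketch only."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    radius = min(cy, cx)
    rows = np.arange(out_h)
    cols = np.arange(out_w)
    r = (rows / (out_h - 1))[:, None] * radius      # row 0 -> center
    theta = (cols / out_w)[None, :] * 2 * np.pi     # angle across width
    src_y = np.clip(np.rint(cy + r * np.sin(theta)), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(cx + r * np.cos(theta)), 0, w - 1).astype(int)
    return img[src_y, src_x]
```

Two such rectangles, one per hemisphere, could then be stitched along the rows that came from the circles' outer edges, which is where the lenses' fields of view overlap.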
Abstract:
An image capture device has multiple image sensors with overlapping fields of view and aligns the image sensors based on the images they capture. A pixel shift is identified between the images. Based on the identified pixel shift, a calibration is applied to one or more of the image sensors. To determine the pixel shift, a processor applies correlation methods including edge matching. Calibrating the image sensors may include adjusting a read window on an image sensor. The pixel shift can also be used to determine a time lag, which can be used to synchronize subsequent image captures.
Abstract:
A system receives an encoded image representative of the 2D projection of a cubic image, the encoded image generated from two overlapping hemispherical images separated along a longitudinal plane of a sphere. The system decodes the encoded image to produce a decoded 2D projection of the cubic image and performs a stitching operation on portions of the decoded 2D projection representative of overlapping portions of the hemispherical images to produce stitched overlapping portions. The system combines the stitched overlapping portions with portions of the decoded 2D projection representative of the non-overlapping portions of the hemispherical images to produce a stitched 2D projection of the cubic image, and encodes the stitched 2D projection to produce an encoded cubic projection of the stitched hemispherical images.
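The stitching operation on the decoded overlap portions can be sketched as a simple blend of the two strips that image the same field of view. A linear ramp is one minimal choice; real stitching would typically also warp for parallax, which this sketch omits.

```python
import numpy as np

def stitch_overlap(strip_a, strip_b):
    """Blend two same-shape overlap strips (both imaging the same field
    of view) with a linear ramp across the overlap width.
    A minimal sketch of the stitching step, not the actual algorithm."""
    h, w = strip_a.shape
    ramp = np.linspace(1.0, 0.0, w)[None, :]   # full weight on strip_a at left
    return strip_a * ramp + strip_b * (1.0 - ramp)
```

The blended strip would then be recombined with the non-overlapping portions of the decoded projection before re-encoding.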
Abstract:
A system captures a first hemispherical image and a second hemispherical image, each hemispherical image including an overlap portion, the overlap portions capturing a same field of view, the two hemispherical images collectively spanning a spherical field of view and separated along a longitudinal plane. The system maps a modified first hemispherical image to a first portion of the 2D projection of a cubic image, the modified first hemispherical image including a non-overlap portion of the first hemispherical image, and maps a modified second hemispherical image to a second portion of the 2D projection of the cubic image, the modified second hemispherical image also including a non-overlap portion. The system maps the overlap portions of the first hemispherical image and the second hemispherical image to the 2D projection of the cubic image, and encodes the 2D projection of the cubic image to generate an encoded image representative of the spherical field of view.
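The mapping of non-overlap and overlap portions into one encodable 2D frame can be sketched as a packing step. The side-by-side/stacked layout below is a hypothetical stand-in; the abstract does not detail the actual cubic-projection face arrangement.

```python
import numpy as np

def pack_frame(non_overlap_a, non_overlap_b, overlap_a, overlap_b):
    """Pack the non-overlap portions of two hemispherical images and
    both copies of the shared overlap region into one 2D frame for
    encoding. Hypothetical layout; a sketch, not the patented mapping."""
    top = np.hstack([non_overlap_a, non_overlap_b])
    bottom = np.hstack([overlap_a, overlap_b])
    if bottom.shape[1] != top.shape[1]:
        raise ValueError("overlap strips must pack to the same width")
    return np.vstack([top, bottom])
```

Keeping both copies of the overlap region in the encoded frame is what lets a downstream decoder perform the stitch itself, as in the preceding abstract.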
Abstract:
An underwater housing comprises a laterally offset back-to-back dome configuration. A dual-lens camera having laterally offset back-to-back lenses is mounted within the housing such that the optical axes of the camera lenses align with the optical axes of the domes. This configuration beneficially minimizes effects introduced by the dome on field of view and focus.