Abstract:
Hyper-hemispherical images may be combined to generate a rectangular projection of a spherical image having an equatorial stitch line along a line of lowest distortion in the two images. First and second circular images are received representing respective hyper-hemispherical fields of view. A video processing device may project each circular image to a respective rectangular image by mapping an outer edge of the circular image to a first edge of the rectangular image and mapping a center point of the circular image to a second edge of the rectangular image. The rectangular images may be stitched together along the edges corresponding to the outer edges of the original circular images.
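The circular-to-rectangular projection described above can be sketched as a polar unrolling: one edge of the rectangle receives the circle's outer edge, the opposite edge collapses to the center point, and the columns sweep the full angle. The function below is a minimal illustration under those assumptions (nearest-neighbor sampling, square input, invented names), not the patented method.

```python
import numpy as np

def circular_to_rect(circ, out_h, out_w):
    """Unroll a circular (hyper-hemispherical) image into a rectangle.

    Row 0 of the output receives the circle's outer edge; the last row
    collapses to the circle's center point, and columns sweep 360 degrees.
    Nearest-neighbor sampling; a sketch only, not the patented method.
    """
    h, w = circ.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)                        # radius of the image circle
    rows = np.arange(out_h)[:, None]
    cols = np.arange(out_w)[None, :]
    r = max_r * (1.0 - rows / (out_h - 1))     # outer edge -> center
    theta = 2.0 * np.pi * cols / out_w
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, w - 1)
    return circ[ys, xs]
```

Stitching two such rectangles along the rows that correspond to the circles' outer edges then places the seam on the equator, consistent with the abstract's low-distortion stitch line.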
Abstract:
In a video capture system, a virtual lens is simulated when applying a crop or zoom effect to an input video. An input video frame is received from the input video; it has a first field of view and an input lens distortion caused by the lens used to capture the input video frame. A selection of a sub-frame representing a portion of the input video frame is obtained, the sub-frame having a second field of view smaller than the first field of view. The sub-frame is processed to remap the input lens distortion to a desired lens distortion in the sub-frame. The processed sub-frame is then outputted.
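The distortion remap at the heart of this abstract can be illustrated with a simple one-term polynomial radial lens model, r_d = r * (1 + k * r^2). The sketch below is an assumption-laden stand-in: the model, the first-order inversion, and all names are illustrative, not the patent's actual formulation.

```python
import numpy as np

def remap_distortion(sub, k_in, k_out):
    """Remap a sub-frame from one radial distortion profile to another.

    Illustrative sketch using the one-term polynomial model
    r_d = r * (1 + k * r**2): k_in describes the capture lens, k_out
    the desired virtual lens. Model and inversion are assumptions.
    """
    h, w = sub.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    u = (xx - cx) / cx                    # normalized output coordinates
    v = (yy - cy) / cy
    r = np.sqrt(u * u + v * v)
    r_und = r / (1.0 + k_out * r * r)     # approx. undistort (desired lens)
    r_src = r_und * (1.0 + k_in * r_und * r_und)  # re-distort (input lens)
    scale = r_src / np.maximum(r, 1e-8)
    xs = np.clip(np.round(cx + u * scale * cx).astype(int), 0, w - 1)
    ys = np.clip(np.round(cy + v * scale * cy).astype(int), 0, h - 1)
    return sub[ys, xs]
```

With k_in equal to k_out the remap degenerates to the identity, which matches the intuition that no virtual-lens change means no resampling.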
Abstract:
A spherical content capture system captures spherical video and audio content. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. Particularly, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest. For each sub-frame, a corresponding portion of an audio track is generated that includes a directional audio signal having a directionality based on the selected sub-frame.
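One piece of the pipeline above, deriving a directionality for the audio track from the selected sub-frame, can be sketched as mapping the sub-frame's horizontal position in an equirectangular spherical frame to a yaw angle. The function and its convention are hypothetical, not taken from the patent.

```python
def subframe_audio_direction(frame_w, sub_x, sub_w):
    """Map a sub-frame's horizontal position in an equirectangular
    spherical frame to a yaw angle in degrees, in [-180, 180).

    A hypothetical basis for steering the directional audio signal
    toward the selected sub-frame; names and convention are assumed.
    """
    center = sub_x + sub_w / 2.0
    return (center / frame_w * 360.0) % 360.0 - 180.0
```

A beamformer or stereo panner could then be steered to the returned angle for each output frame, so the audio follows the tracked subject along with the video.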
Abstract:
Apparatus and methods for stitching images, or re-stitching previously stitched images. Specifically, the disclosed systems in one implementation save stitching information and/or original overlap source data during an original stitching process. During subsequent retrieval, rendering, and/or display of the stitched images, the originally stitched image can be flexibly augmented, and/or re-stitched to improve the original stitch quality. Practical applications of the disclosed solutions enable, among other things, a user to create and stitch a wide field of view (FOV) panorama from multiple source images on a device with limited processing capability (such as a mobile phone or other capture device). Moreover, post-processing stitching allows for the user to convert from one image projection to another without fidelity loss (or with an acceptable level of loss).
Abstract:
An underwater housing includes a mounting plate, a first dome attached to a first surface of the mounting plate, and a second dome attached to a second surface of the mounting plate in a back-to-back configuration. A camera mount for a dual-lens camera is oriented at a tilt angle relative to the plane of the mounting plate. The dual-lens camera has laterally offset back-to-back lenses. The tilt angle is set such that the optical axes of the dual-lens camera intersect the center points of the respective domes.
Abstract:
A spherical content capture system captures spherical video content. A spherical video sharing platform enables users to share the captured spherical content and enables users to access spherical content shared by other users. In one embodiment, captured metadata provides proximity information indicating which cameras were in proximity to a target device during a particular time frame. The platform can then generate an output video from spherical video captured from those cameras. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. Particularly, relevant sub-frames having a reduced field of view may be extracted from frames of one or more spherical videos to generate an output video that tracks a particular individual or object of interest.
Abstract:
A pair of cameras having an overlapping field of view is aligned based on images captured by image sensors of the pair of cameras. A pixel shift is identified between the images. Based on the identified pixel shift, a calibration is applied to one or both of the pair of cameras. To determine the pixel shift, the camera applies correlation methods including edge matching. Calibrating the pair of cameras may include adjusting a read window on an image sensor. The pixel shift can also be used to determine a time lag, which can be used to synchronize subsequent image captures.
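The pixel-shift estimation described above can be illustrated with normalized cross-correlation of two overlapping scanlines. This is a simple stand-in for the edge-matching correlation the abstract mentions; the function name and sign convention are assumptions.

```python
import numpy as np

def estimate_pixel_shift(a, b):
    """Estimate the horizontal pixel shift between two overlapping
    scanlines via normalized cross-correlation, a simple stand-in
    for the edge-matching correlation described in the abstract.

    Returns a positive lag when `b` is shifted right relative to `a`.
    """
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    corr = np.correlate(a, b, mode="full")
    return (len(b) - 1) - int(np.argmax(corr))
```

The estimated shift could then feed either correction described in the abstract: adjusting the sensor read window (spatial calibration) or, when the shift tracks a moving scene, inferring a time lag for synchronization.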
Abstract:
A system receives an encoded image representative of a 2D projection of a cubic image, the encoded image generated from two overlapping hemispherical images separated along a longitudinal plane of a sphere. The system decodes the encoded image to produce a decoded 2D projection of the cubic image, and performs a stitching operation on portions of the decoded 2D projection representative of overlapping portions of the hemispherical images to produce stitched overlapping portions. The system combines the stitched overlapping portions with portions of the decoded 2D projection representative of the non-overlapping portions of the hemispherical images to produce a stitched 2D projection of the cubic image, and encodes the stitched 2D projection of the cubic image to produce an encoded cubic projection of the stitched hemispherical images.
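The stitching operation applied to the overlapping portions can be illustrated, in its simplest form, as a linear cross-fade of the two decoded overlap strips. This blend is an illustrative assumption; the patent does not specify the blending method, and the names here are invented.

```python
import numpy as np

def blend_overlap(left, right):
    """Cross-fade two equally sized single-channel overlap strips with
    a linear ramp, a simple stand-in for the stitching operation
    applied to the overlapping portions of the hemispherical images.
    """
    w = left.shape[1]
    alpha = np.linspace(1.0, 0.0, w)[None, :]   # full left -> full right
    return left * alpha + right * (1.0 - alpha)
```

The blended strip would then be recombined with the non-overlapping cube-face portions before the stitched projection is re-encoded, as the abstract describes.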