Abstract:
Images captured by multi-camera arrays with overlap regions can be stitched together using image stitching operations. An image stitching operation can be selected for use in stitching images based on a number of factors. An image stitching operation can be selected based on a view window location of a user viewing the images to be stitched together. An image stitching operation can also be selected based on a type, priority, or depth of image features located within an overlap region. Finally, an image stitching operation can be selected based on a likelihood that a particular image stitching operation will produce visible artifacts. Once a stitching operation is selected, the images corresponding to the overlap region can be stitched using the stitching operation, and the stitched image can be stored for subsequent access.
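A minimal sketch of this selection logic in Python. The operation names, the box-overlap test, and the 0.5 artifact-risk threshold are illustrative assumptions, not the claimed method:

```python
def boxes_overlap(a, b):
    """True if two (x0, y0, x1, y1) boxes in panorama coordinates intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def select_stitch_operation(view_window, overlap, artifact_risk):
    """Pick a stitching operation for one overlap region.

    Decision order mirrors the abstract: viewer attention first, then
    feature type/priority/depth, then expected artifact visibility.
    """
    if boxes_overlap(view_window, overlap["region"]):
        return "depth_aware_stitch"       # viewer is looking at the seam
    if any(f["priority"] == "high" for f in overlap["features"]):
        return "feature_aligned_stitch"   # e.g. a face crosses the seam
    if artifact_risk > 0.5:
        return "seam_optimized_stitch"    # a cheap blend would ghost
    return "multiband_blend"              # default low-cost operation
```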
Abstract:
A spherical content capture system captures spherical video content. A spherical video sharing platform enables users to share the captured spherical content and enables users to access spherical content shared by other users. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. Particularly, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest.
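As a rough sketch of the sub-frame extraction step, the following Python crops a reduced-FOV window from an equirectangular frame around a tracked direction. The flat crop (rather than a true rectilinear reprojection) and all parameter names are simplifying assumptions:

```python
import numpy as np

def extract_subframe(equi_frame, yaw_deg, pitch_deg, fov_deg=90):
    """Crop a reduced-field-of-view sub-frame from an equirectangular
    spherical frame, centered on a tracked direction."""
    h, w = equi_frame.shape[:2]
    cx = int((yaw_deg % 360) / 360 * w)         # column of target direction
    cy = int((90 - pitch_deg) / 180 * h)        # row of target direction
    half_w = int(fov_deg / 360 * w) // 2
    half_h = int(fov_deg / 180 * h) // 2
    cols = np.arange(cx - half_w, cx + half_w) % w            # wrap at 360°
    rows = np.clip(np.arange(cy - half_h, cy + half_h), 0, h - 1)
    return equi_frame[np.ix_(rows, cols)]
```

Running this per frame, with per-frame (yaw_deg, pitch_deg) supplied by a tracker, yields an output video that follows the individual or object of interest.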
Abstract:
Apparatus and methods for stitching images, or re-stitching previously stitched images. Specifically, the disclosed systems in one implementation save stitching information and/or original overlap source data during an original stitching process. During subsequent retrieval, rendering, and/or display of the stitched images, the originally stitched image can be flexibly augmented, and/or re-stitched to improve the original stitch quality. Practical applications of the disclosed solutions enable, among other things, a user to create and stitch a wide field of view (FOV) panorama from multiple source images on a device with limited processing capability (such as a mobile phone or other capture device). Moreover, post-processing stitching allows the user to convert from one image projection to another without fidelity loss (or with an acceptable level of loss).
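One way to picture the retained data, as a minimal Python sketch. The sidecar-file layout (.stitch.json for stitch parameters, .npz for lossless overlap crops) is our own assumption; the abstract only requires that stitching information and/or original overlap source data be saved:

```python
import json
import numpy as np

def save_restitch_data(path, stitch_params, overlap_crops):
    """Persist stitching information and original overlap source data
    next to the stitched output so the stitch can be redone later."""
    with open(path + ".stitch.json", "w") as f:
        json.dump(stitch_params, f)             # e.g. seam positions, warps
    np.savez_compressed(path + ".overlaps.npz",
                        **{f"crop{i}": c for i, c in enumerate(overlap_crops)})

def load_restitch_data(path):
    """Recover what is needed to re-stitch at higher quality, or to
    re-project without a second generation of stitching loss."""
    with open(path + ".stitch.json") as f:
        params = json.load(f)
    with np.load(path + ".overlaps.npz") as z:
        crops = [z[k] for k in sorted(z.files, key=lambda k: int(k[4:]))]
    return params, crops
```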
Abstract:
A pair of cameras having an overlapping field of view is aligned based on images captured by image sensors of the pair of cameras. A pixel shift is identified between the images. Based on the identified pixel shift, a calibration is applied to one or both of the pair of cameras. To determine the pixel shift, correlation methods, including edge matching, are applied to the captured images. Calibrating the pair of cameras may include adjusting a read window on an image sensor. The pixel shift can also be used to determine a time lag, which can be used to synchronize subsequent image captures.
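A compact illustration of one correlation method consistent with this description: phase correlation over gradient (edge) images. The exact edge-matching procedure in the disclosure may differ:

```python
import numpy as np

def pixel_shift(img_a, img_b):
    """Estimate the (dy, dx) translation between two grayscale overlap
    crops by phase correlation on their edge maps."""
    ea = np.hypot(*np.gradient(img_a.astype(float)))   # gradient magnitude
    eb = np.hypot(*np.gradient(img_b.astype(float)))
    fa, fb = np.fft.fft2(ea), np.fft.fft2(eb)
    cross = fa * np.conj(fb)                           # cross-power spectrum
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Map wrapped peak indices to signed shifts (sign convention may need
    # flipping depending on which image is taken as the reference).
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```

The resulting shift could then drive the calibration, for example by offsetting the sensor read window by (dx, dy).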
Abstract:
Apparatus and methods for the stitch zone calculation of a generated projection of a spherical image. In one embodiment, a computing device is disclosed which includes logic configured to: obtain a plurality of images; map the plurality of images onto a spherical image; re-orient the spherical image in accordance with a desired stitch line and a desired projection for the desired stitch line; and map the spherical image to the desired projection having the desired stitch line. In a variant, the desired stitch line is mapped onto an optimal stitch zone, characterized as a set of points that defines a single line on the desired projection, the points lying closest to the spherical image in a mean square sense.
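One plausible formalization of the "closest in a mean square sense" criterion (our reading; the symbols below are ours, not the claims'): with P the set of points on the desired projection surface and r the radius of the spherical image,

\[
\mathcal{Z}^{*} \;=\; \underset{\mathcal{Z} \subset \mathcal{P}}{\arg\min}\;
\frac{1}{\lvert \mathcal{Z} \rvert} \sum_{p \in \mathcal{Z}}
\bigl( \lVert p \rVert - r \bigr)^{2},
\]

subject to the constraint that Z forms a single line on the projection.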
Abstract:
A unified image processing algorithm results in better post-processing quality for combined images that are made up of multiple single-capture images. To ensure that each single-capture image is processed in the context of the entire combined image, the combined image is analyzed to determine portions of the image (referred to as "zones") that should be processed with the same parameters for various image processing algorithms. These zones may be determined based on the content of the combined image. Alternatively, these zones may be determined based on the position of each single-capture image with respect to the entire combined image or the other single-capture images. Once zones and their corresponding image processing parameters are determined for the combined image, they are translated to corresponding zones in each of the single-capture images. Finally, the image processing algorithms are applied to each of the single-capture images using the zone-specified parameters.
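The zone-translation step could look like the following Python sketch; the box-based data layout is an illustrative assumption:

```python
def translate_zones(zones, placements):
    """Map processing zones defined on the combined image onto each
    single-capture image.

    zones:      list of (x0, y0, x1, y1, params) in combined-image coords
    placements: per-source (ox, oy, w, h) footprints in combined-image coords
    Returns, per source image, its overlapping zones in local coordinates.
    """
    per_image = []
    for ox, oy, w, h in placements:
        local = []
        for x0, y0, x1, y1, params in zones:
            ix0, iy0 = max(x0, ox), max(y0, oy)          # intersect zone
            ix1, iy1 = min(x1, ox + w), min(y1, oy + h)  # with footprint
            if ix0 < ix1 and iy0 < iy1:
                local.append((ix0 - ox, iy0 - oy, ix1 - ox, iy1 - oy, params))
        per_image.append(local)
    return per_image
```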
Abstract:
In a video capture system, a virtual lens is simulated when applying a crop or zoom effect to an input video. An input video frame is received from the input video that has a first field of view and an input lens distortion caused by a lens used to capture the input video frame. A selection of a sub-frame representing a portion of the input video frame is obtained that has a second field of view smaller than the first field of view. The sub-frame is processed to remap the input lens distortion to a desired lens distortion in the sub-frame. The processed sub-frame is then outputted.
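A minimal numerical sketch of the remapping step. The simple radial model r_d = r_u(1 + k·r_u²), its coefficients, and the nearest-neighbor sampling are all illustrative assumptions (the model must be monotone over the frame for the inversion below to hold):

```python
import numpy as np

def virtual_lens_remap(subframe, k_in=-0.2, k_out=0.0):
    """Resample a sub-frame so its radial distortion changes from the
    input lens model to a desired virtual-lens model."""
    h, w = subframe.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    x = (xx - w / 2) / (w / 2)              # normalized output coords
    y = (yy - h / 2) / (h / 2)
    r_o = np.hypot(x, y) + 1e-12            # radius under the desired lens
    # Numerically invert the desired model r_o = r_u * (1 + k_out * r_u^2).
    ru_grid = np.linspace(0, 2, 2048)
    r_u = np.interp(r_o, ru_grid * (1 + k_out * ru_grid**2), ru_grid)
    r_i = r_u * (1 + k_in * r_u**2)         # radius under the input lens
    sx = np.clip((x * r_i / r_o + 1) * w / 2, 0, w - 1).astype(int)
    sy = np.clip((y * r_i / r_o + 1) * h / 2, 0, h - 1).astype(int)
    return subframe[sy, sx]                 # nearest-neighbor resample
```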
Abstract:
A spherical content capture system captures spherical video and audio content. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. Particularly, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest. For each sub-frame, a corresponding portion of an audio track is generated that includes a directional audio signal having a directionality based on the selected sub-frame.
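For the directional audio step, one common approach consistent with the abstract is to steer a virtual microphone within an ambisonic recording toward the sub-frame's view direction. The first-order B-format input and the cardioid pickup pattern are assumptions; gain conventions vary across ambisonic formats:

```python
import numpy as np

def directional_audio(W, X, Y, Z, yaw, pitch):
    """Derive a signal focused toward the sub-frame's view direction from
    first-order ambisonic channels via a virtual cardioid microphone."""
    ux = np.cos(pitch) * np.cos(yaw)        # unit view vector
    uy = np.cos(pitch) * np.sin(yaw)        # (yaw/pitch in radians)
    uz = np.sin(pitch)
    # Cardioid pickup: equal parts omni (W) and figure-of-eight (X, Y, Z)
    # projected onto the view direction.
    return 0.5 * W + 0.5 * (ux * X + uy * Y + uz * Z)
```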