Abstract:
A spherical content capture system captures spherical video content. A spherical video sharing platform enables users to share the captured spherical content and to access spherical content shared by other users. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may have a non-spherical, reduced field of view such as the fields of view commonly associated with conventional camera systems. In particular, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest.
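As a rough illustration of the sub-frame extraction described above, the sketch below renders a reduced-field-of-view view from a spherical frame, pointed at a direction of interest. It assumes the spherical frame is stored in an equirectangular projection and that NumPy and OpenCV are available; the function name extract_subframe and its parameters are illustrative and not drawn from the abstract.

    import numpy as np
    import cv2

    def extract_subframe(equi_frame, yaw_deg, pitch_deg, fov_deg=90.0, out_size=(1280, 720)):
        # Render a conventional (rectilinear) sub-frame from an equirectangular
        # spherical frame, aimed at the given yaw/pitch direction of interest.
        out_w, out_h = out_size
        in_h, in_w = equi_frame.shape[:2]

        # Focal length (in pixels) for the requested horizontal field of view.
        f = (out_w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)

        # Rays through each output pixel in the virtual camera's frame.
        xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                             np.arange(out_h) - out_h / 2.0)
        zs = np.full_like(xs, f)
        rays = np.stack([xs, ys, zs], axis=-1)
        rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

        # Rotate rays to the chosen viewing direction (pitch about x, yaw about y).
        yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(pitch), -np.sin(pitch)],
                       [0, np.sin(pitch), np.cos(pitch)]])
        Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                       [0, 1, 0],
                       [-np.sin(yaw), 0, np.cos(yaw)]])
        rays = rays @ (Ry @ Rx).T

        # Convert rays to longitude/latitude, then to equirectangular pixel coordinates.
        lon = np.arctan2(rays[..., 0], rays[..., 2])
        lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))
        map_x = ((lon / np.pi + 1.0) / 2.0 * in_w).astype(np.float32)
        map_y = ((lat / (np.pi / 2) + 1.0) / 2.0 * in_h).astype(np.float32)

        return cv2.remap(equi_frame, map_x, map_y, cv2.INTER_LINEAR)

Calling a function like this once per frame, with a per-frame yaw/pitch track for the subject, would produce the kind of tracking output video the abstract describes.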
Abstract:
Apparatus and methods for stitching images, or re-stitching previously stitched images. Specifically, the disclosed systems in one implementation save stitching information and/or original overlap source data during an original stitching process. During subsequent retrieval, rendering, and/or display of the stitched images, the originally stitched image can be flexibly augmented and/or re-stitched to improve the original stitch quality. Practical applications of the disclosed solutions enable, among other things, a user to create and stitch a wide field of view (FOV) panorama from multiple source images on a device with limited processing capability (such as a mobile phone or other capture device). Moreover, post-processing stitching allows the user to convert from one image projection to another without fidelity loss (or with an acceptable level of loss).
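One way to realize the idea of saving stitching information and/or original overlap source data is to serialize, next to the stitched panorama, the warp and the raw overlap pixels for each seam so a later pass can refine or redo the stitch. The following is a minimal, hypothetical layout using NumPy; the SeamRecord fields and the file format are assumptions, not the disclosed format.

    import numpy as np
    from dataclasses import dataclass, asdict

    @dataclass
    class SeamRecord:
        # Hypothetical per-seam record kept alongside the stitched output.
        left_index: int                 # indices of the two source images
        right_index: int
        homography: np.ndarray          # 3x3 warp used by the original stitch
        overlap_left: np.ndarray        # raw overlapping pixels from each source
        overlap_right: np.ndarray
        seam_mask: np.ndarray           # per-pixel seam/blend decision

    def save_stitch(path, panorama, seams, projection="equirectangular"):
        # Store the stitched image together with everything a later pass needs
        # to augment or re-stitch it (possibly on a more capable device).
        arrays = {"panorama": panorama, "projection": np.array(projection)}
        for i, seam in enumerate(seams):
            for key, value in asdict(seam).items():
                arrays[f"seam{i}_{key}"] = np.asarray(value)
        np.savez_compressed(path, **arrays)

    def load_stitch(path):
        # Reload the panorama and the saved stitching information for re-stitching.
        data = np.load(path)
        return data["panorama"], {k: data[k] for k in data.files if k.startswith("seam")}

Because the overlap pixels travel with the panorama, a re-stitch or projection change could recompute seams without returning to the original captures, which is the benefit the abstract highlights for limited-capability capture devices.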
Abstract:
In a video capture system, a virtual lens is simulated when applying a crop or zoom effect to an input video. An input video frame is received from the input video that has a first field of view and an input lens distortion caused by a lens used to capture the input video frame. A selection of a sub-frame representing a portion of the input video frame is obtained that has a second field of view smaller than the first field of view. The sub-frame is processed to remap the input lens distortion to a desired lens distortion in the sub-frame. The processed sub-frame is then output.
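A minimal sketch of the distortion remapping described above: each pixel of the cropped sub-frame is traced through a desired (virtual) lens model to a viewing angle, then through the input lens model back to source coordinates. It assumes an equidistant-fisheye input lens and a rectilinear virtual lens as stand-ins for calibrated lens profiles; the function simulate_virtual_lens and its parameters are illustrative, not the disclosed method.

    import numpy as np
    import cv2

    def simulate_virtual_lens(frame, center, f_in, f_out, out_size):
        # Re-render a cropped sub-frame as if captured by a different (virtual) lens.
        # Input lens model (assumed): equidistant fisheye, r = f_in * theta.
        # Virtual lens model (assumed): rectilinear, r = f_out * tan(theta).
        out_w, out_h = out_size
        cx_in, cy_in = center

        # Pixel grid of the output sub-frame, centred on its principal point.
        xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                             np.arange(out_h) - out_h / 2.0)
        r_out = np.hypot(xs, ys)

        # Invert the virtual (desired) lens: output radius -> viewing angle theta.
        theta = np.arctan2(r_out, f_out)

        # Apply the input lens model: theta -> radius in the captured frame.
        r_in = f_in * theta

        # Preserve the azimuth, rescale the radius, and sample the input frame.
        scale = np.divide(r_in, r_out, out=np.ones_like(r_out), where=r_out > 0)
        map_x = (cx_in + xs * scale).astype(np.float32)
        map_y = (cy_in + ys * scale).astype(np.float32)
        return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)

Varying f_out (or the sub-frame size) per frame would emulate zooming through the virtual lens while keeping the rendered distortion consistent, which is the effect the abstract attributes to the simulated lens.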