Abstract:
An image capture accelerator performs accelerated processing of image data. In one embodiment, the image capture accelerator includes accelerator circuitry including a pre-processing engine and a compression engine. The pre-processing engine is configured to perform accelerated processing on received image data, and the compression engine is configured to compress processed image data received from the pre-processing engine. In one embodiment, the image capture accelerator further includes a demultiplexer configured to receive image data captured by an image sensor array implemented within, for example, an image sensor chip. The demultiplexer may output the received image data to an image signal processor when the image data is captured by the image sensor array in a standard capture mode, and may output the received image data to the accelerator circuitry when the image data is captured by the image sensor array in an accelerated capture mode.
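Below is a minimal Python sketch of the mode-based routing this abstract describes. Everything concrete here is an assumption for illustration: the class and function names, the placeholder pre-processing (a 10-bit to 8-bit shift), and the byte-packing stand-in for the compression engine. The source specifies only that a demultiplexer sends sensor data to the image signal processor in the standard capture mode and to the accelerator circuitry (pre-processing engine, then compression engine) in the accelerated capture mode.

```python
from enum import Enum, auto

class CaptureMode(Enum):
    STANDARD = auto()     # route to the image signal processor (ISP)
    ACCELERATED = auto()  # route to the accelerator circuitry

class Accelerator:
    """Stand-in for the accelerator circuitry: pre-process, then compress."""
    def pre_process(self, frame):
        return [p >> 2 for p in frame]   # placeholder: 10-bit -> 8-bit

    def compress(self, frame):
        return bytes(frame)              # placeholder compression engine

def demux(frame, mode, isp, accel):
    """Demultiplexer: choose the processing path by capture mode."""
    if mode is CaptureMode.STANDARD:
        return isp(frame)
    return accel.compress(accel.pre_process(frame))

def isp(frame):
    """Dummy image signal processor for the standard path."""
    return bytes(p >> 2 for p in frame)

# One sensor read-out through the accelerated path.
print(demux([512, 768, 1023], CaptureMode.ACCELERATED, isp, Accelerator()))
```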
Abstract:
Images captured by multi-camera arrays with overlap regions can be stitched together using image stitching operations. An image stitching operation can be selected for use in stitching images based on a number of factors. An image stitching operation can be selected based on a view window location of a user viewing the images to be stitched together. An image stitching operation can also be selected based on a type, priority, or depth of image features located within an overlap region. Finally, an image stitching operation can be selected based on the likelihood that a given operation will produce visible artifacts. Once a stitching operation is selected, the images corresponding to the overlap region can be stitched using the selected operation, and the stitched image can be stored for subsequent access.
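As a rough illustration of the selection logic, here is a Python sketch that folds the listed factors into a single choice. The operation names, the thresholds, and the rule that any one factor can force the higher-quality operation are assumptions; the abstract names the factors but not how they combine.

```python
def select_stitch_op(view_overlaps_region, feature_priority,
                     min_feature_depth_m, artifact_likelihood):
    """Pick a stitching operation for one overlap region.

    Returns "high_quality" when the region is in the viewer's window,
    contains high-priority or close features, or is likely to show
    artifacts under a cheap operation; otherwise "low_cost".
    """
    if view_overlaps_region:
        return "high_quality"
    if feature_priority >= 0.7 or min_feature_depth_m < 2.0:
        return "high_quality"   # close features expose parallax errors
    if artifact_likelihood > 0.5:
        return "high_quality"
    return "low_cost"

# Region out of view, low-priority distant features, low artifact risk.
print(select_stitch_op(False, 0.2, 10.0, 0.1))  # -> "low_cost"
```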
Abstract:
Video and corresponding metadata are accessed. Events of interest within the video are identified based on the corresponding metadata, and best scenes are identified based on the identified events of interest. In one example, best scenes are identified based on motion values associated with frames or portions of frames of a video. Motion values are determined for each frame, and portions of the video including the frames with the most motion are identified as best scenes. Best scenes may also be identified based on the motion profile of a video, a measure of global or local motion within frames throughout the video. For example, best scenes are identified from portions of the video exhibiting steady global motion. A video summary can be generated including one or more of the identified best scenes.
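A compact Python sketch of the motion-value variant: score every fixed-length window of frames by its summed motion and keep the top non-overlapping windows as best scenes. The window length, the summation score, and the non-overlap rule are illustrative assumptions.

```python
def best_scenes(motion, window, k):
    """Return start indices of the k highest-motion, non-overlapping
    windows of `window` consecutive frames."""
    scores = [(sum(motion[i:i + window]), i)
              for i in range(len(motion) - window + 1)]
    scores.sort(reverse=True)
    picked = []
    for _, start in scores:
        if all(abs(start - p) >= window for p in picked):
            picked.append(start)
        if len(picked) == k:
            break
    return sorted(picked)

# Per-frame motion values; find the two best 3-frame scenes.
motion = [0.1, 0.9, 0.8, 0.7, 0.1, 0.1, 0.6, 0.9, 0.5, 0.1]
print(best_scenes(motion, window=3, k=2))  # -> [1, 6]
```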
Abstract:
Blurring is simulated in post-processing for captured images. A 3D image is received from a 3D camera, and depth information in the 3D image is used to determine the relative distances of objects in the image. One object is chosen as the subject of the image, and an additional object in the image is identified. Image blur is applied to the identified additional object based on the distance between the 3D camera and the subject object, the distance between the subject object and the additional object, and a virtual focal length and virtual f-number.
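The abstract names the inputs (subject distance, object distance, virtual focal length, virtual f-number) but not the blur formula. A natural stand-in, shown below, is the textbook thin-lens circle-of-confusion diameter; treat it as an assumption rather than the patented method.

```python
def coc_diameter_mm(f_mm, f_number, d_subject_mm, d_object_mm):
    """Thin-lens circle-of-confusion diameter for an object at
    d_object when the virtual lens is focused at d_subject (mm)."""
    return (abs(d_object_mm - d_subject_mm) / d_object_mm
            * f_mm ** 2 / (f_number * (d_subject_mm - f_mm)))

# Virtual 50 mm f/1.8 lens focused at 2 m; background object at 5 m.
print(round(coc_diameter_mm(50, 1.8, 2000, 5000), 3))  # ~0.427 mm spot
```

The resulting diameter, scaled to pixels, would set the radius of the blur kernel applied to the identified additional object.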
Abstract:
A spherical content capture system captures spherical video content. A spherical video sharing platform enables users to share the captured spherical content and to access spherical content shared by other users. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may have a non-spherical, reduced field of view, such as that commonly associated with conventional camera systems. In particular, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest.
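A small sketch of the sub-frame extraction step, assuming an equirectangular spherical frame. The linear angle-to-pixel mapping is a simplification that is only accurate near the equator; a real extractor would reproject. The function name and the example field of view are illustrative.

```python
def subframe_rect(frame_w, frame_h, yaw_deg, pitch_deg, h_fov_deg, v_fov_deg):
    """Pixel rectangle (x, y, w, h) of a reduced field of view centered
    on (yaw, pitch) in an equirectangular spherical frame."""
    cx = (yaw_deg % 360.0) / 360.0 * frame_w
    cy = (90.0 - pitch_deg) / 180.0 * frame_h
    w = h_fov_deg / 360.0 * frame_w
    h = v_fov_deg / 180.0 * frame_h
    return (int(cx - w / 2), int(cy - h / 2), int(w), int(h))

# Track a subject at yaw 90°, pitch 0° in a 3840x1920 spherical frame.
print(subframe_rect(3840, 1920, 90.0, 0.0, 120.0, 67.5))
# -> (320, 600, 1280, 720)
```

Running this per frame with the tracked subject's (yaw, pitch) yields the sequence of sub-frames for the output video.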
Abstract:
A device includes a processor that is configured to obtain first facets of a first wide field-of-view image. An object is identified in a facet of the first facets. A second wide field-of-view image is obtained. A location of the object is identified in the second wide field-of-view image. Using the location of the object, the second wide field-of-view image is partitioned into second facets such that no boundary of any of the second facets overlaps the object. The second facets are then encoded in a compressed bitstream.
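The abstract requires only that no facet boundary overlap the object; one simple way to satisfy that, sketched below in one dimension, is to snap any default boundary falling inside the object's extent to the object's nearer edge. The snapping rule and the 512-pixel default grid are assumptions.

```python
def adjust_boundaries(boundaries, obj_start, obj_end):
    """Move any facet boundary that would cross [obj_start, obj_end]
    to the nearer edge of the object (1-D, one image axis)."""
    snapped = []
    for b in boundaries:
        if obj_start < b < obj_end:
            b = obj_start if (b - obj_start) <= (obj_end - b) else obj_end
        snapped.append(b)
    return sorted(set(snapped))

# Default facet edges every 512 px; detected object spans [900, 1200).
print(adjust_boundaries([0, 512, 1024, 1536, 2048], 900, 1200))
# -> [0, 512, 900, 1536, 2048]   (the 1024 edge moved off the object)
```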
Abstract:
A method includes obtaining visual content comprising spatial portions; determining respective spatial qualities of the spatial portions, wherein the respective spatial qualities are based on locations of the spatial portions within the visual content; and encoding the spatial portions of the visual content based on the respective spatial qualities. An apparatus includes a camera, a display, and a processor. The processor is configured to identify, using facial recognition, a face of a user of the apparatus; identify a distance from the face of the user to the display; and render visual content on the display using a quality that is based on that distance.
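For the display-side behavior, here is a guess at the distance-to-quality mapping in Python. The abstract does not fix the direction of the relationship; this sketch assumes a nearby viewer can resolve more detail, so quality peaks at close range and falls off linearly. The range constants are invented.

```python
def quality_from_distance(d_m, near_m=0.3, far_m=1.5):
    """Map face-to-display distance (meters) to a render-quality
    factor in [0, 1]; 1.0 at or inside near_m, 0.0 at or past far_m."""
    if d_m <= near_m:
        return 1.0
    if d_m >= far_m:
        return 0.0
    return (far_m - d_m) / (far_m - near_m)

# Face detected about 0.6 m from the display.
print(round(quality_from_distance(0.6), 2))  # -> 0.75
```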
Abstract:
Methods and apparatus are described for encoding and decoding image data based on one or more parameters. In one embodiment, various spatial portions or regions of image data (e.g., a still or moving image) are weighted according to their perceived or measured quality. Processing of these weighted regions can be selectively altered or adjusted so as to optimize one or more operational parameters, including, for example, processing and/or memory requirements, or speed.
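One concrete way such weights could steer an encoder, shown below, is to map each region's importance weight to a quantization parameter, with heavier regions quantized more finely. The linear mapping and the constants are illustrative assumptions, not the patented scheme.

```python
def region_qps(weights, qp_base=32, qp_swing=8):
    """Map per-region weights in [0, 1] to encoder quantization
    parameters: higher weight -> lower QP (finer quantization)."""
    return {region: int(round(qp_base - (w - 0.5) * 2 * qp_swing))
            for region, w in weights.items()}

# A face region weighted high, background sky weighted low.
print(region_qps({"face": 0.9, "sky": 0.2}))  # -> {'face': 26, 'sky': 37}
```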