Abstract:
An image capture accelerator performs accelerated processing of image data. In one embodiment, the image capture accelerator includes accelerator circuitry including a pre-processing engine and a compression engine. The pre-processing engine is configured to perform accelerated processing on received image data, and the compression engine is configured to compress processed image data received from the pre-processing engine. In one embodiment, the image capture accelerator further includes a demultiplexer configured to receive image data captured by an image sensor array implemented within, for example, an image sensor chip. The demultiplexer may output the received image data to an image signal processor when the image data is captured by the image sensor array in a standard capture mode, and may output the received image data to the accelerator circuitry when the image data is captured by the image sensor array in an accelerated capture mode.
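The mode-based routing the demultiplexer performs can be sketched as a simple switch. This is an illustrative sketch only; the names (`CaptureMode`, `route_image_data`) and the string destinations are assumptions for illustration, not taken from the abstract.

```python
from enum import Enum


class CaptureMode(Enum):
    STANDARD = "standard"
    ACCELERATED = "accelerated"


def route_image_data(image_data, mode):
    """Route sensor data to the ISP (standard mode) or to the
    accelerator circuitry (accelerated mode)."""
    if mode is CaptureMode.STANDARD:
        return ("image_signal_processor", image_data)
    return ("accelerator_circuitry", image_data)
```

In a real device this selection would happen in hardware; the sketch only captures the two-way routing decision.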
Abstract:
Encoded content is accessed. The encoded content includes an encoded first centrally located tile corresponding to a first centrally located tile of a first image, an encoded first peripherally located tile of the first image, and an encoded second peripherally located tile of a second image. The encoded first peripherally located tile is decoded to obtain a decoded first peripherally located tile. The encoded second peripherally located tile is decoded to obtain a decoded second peripherally located tile. The decoded first peripherally located tile and the decoded second peripherally located tile are stitched to obtain a stitched image portion. The stitched image portion is encoded to obtain an encoded stitched image portion. An encoded stitched image of the first image and the second image is obtained by combining the encoded first centrally located tile and the encoded stitched image portion.
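The decode/stitch/re-encode flow above can be sketched with toy stand-ins. All names here are hypothetical, and `encode`/`decode`/`stitch` are trivial placeholders rather than a real codec; the point the sketch illustrates is that only the peripheral (seam) tiles go through a decode/encode round trip, while the central tile is copied as-is.

```python
def encode(tile):
    # placeholder "codec": wrap the pixel list in a tagged tuple
    return ("enc", tuple(tile))


def decode(enc_tile):
    tag, data = enc_tile
    assert tag == "enc"
    return list(data)


def stitch(left, right):
    # naive stitch: blend the one overlapping column, keep the rest
    return left[:-1] + [(left[-1] + right[0]) // 2] + right[1:]


def restitch_encoded(content):
    """content: dict with hypothetical keys 'center1', 'periph1', 'periph2'."""
    p1 = decode(content["periph1"])
    p2 = decode(content["periph2"])
    seam = stitch(p1, p2)
    # the central tile is reused without being decoded or re-encoded
    return {"center": content["center1"], "seam": encode(seam)}
```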
Abstract:
A processing device generates composite images from a sequence of images. The composite images may be used as frames of video. A foreground/background segmentation is performed at selected frames to extract a plurality of foreground object images depicting a foreground object at different locations as it moves across a scene. The foreground object images are stored to a foreground object list. The foreground object images in the foreground object list are overlaid onto subsequent video frames that follow the respective frames from which they were extracted, thereby generating a composite video.
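The trail-style compositing described above can be sketched with one-dimensional "frames" of pixel values. The segmentation is a toy difference against a known background, and all names are assumptions for illustration; a real implementation would operate on 2-D images with a learned or statistical background model.

```python
def segment_foreground(frame, background):
    """Return a {index: pixel} map of pixels that differ from the background."""
    return {i: p for i, (p, b) in enumerate(zip(frame, background)) if p != b}


def composite_video(frames, background, keyframe_interval=2):
    object_list = []  # foreground object images extracted so far
    out = []
    for t, frame in enumerate(frames):
        if t % keyframe_interval == 0:
            # extract the foreground object at this selected frame
            object_list.append(segment_foreground(frame, background))
        composed = list(frame)
        # overlay every previously extracted object onto the current frame
        for obj in object_list:
            for i, p in obj.items():
                composed[i] = p
        out.append(composed)
    return out
```

Each output frame thus shows the moving object at its current position plus every earlier sampled position, producing the composite "trail" effect.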
Abstract:
Systems and methods for providing panoramic image and/or video content using multi-resolution stitching. Panoramic content may include stitched spherical (360-degree) images and/or VR video. In some implementations, multi-resolution stitching functionality may be embodied in a spherical image capture device that may include two lenses configured to capture pairs of hemispherical images. The capture device may obtain images (e.g., representing left and right hemispheres) that may be characterized by a 180-degree (or greater) field of view. Source images may be combined using a multi-resolution stitching methodology. Source images may be transformed to obtain multiple image components characterized by two or more image resolutions. The stitched image may be encoded using a selective encoding methodology including: partitioning source images into low-resolution/low-frequency and high-resolution/high-frequency components; stitching the low-frequency (LF) components using a coarse stitching operation; stitching the high-frequency (HF) components using a refined stitching operation; and combining the stitched LF components and stitched HF components.
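The band-split-and-recombine idea can be sketched on toy 1-D "images". The split, coarse-stitch, and refined-stitch operators below are simplified stand-ins (assumptions), not the device's actual pipeline; the sketch only shows the structure of splitting into LF/HF bands, stitching each band separately, and recombining.

```python
def split_bands(img):
    """Split into a smoothed low-frequency band and its high-frequency residual."""
    lf = [(a + b) // 2 for a, b in zip(img, img[1:] + img[-1:])]
    hf = [a - l for a, l in zip(img, lf)]  # residual, so lf + hf == img
    return lf, hf


def coarse_stitch(a, b):
    # coarse operation for the low band: simple concatenation
    return a + b


def refined_stitch(a, b):
    # placeholder: a real pipeline would do fine seam alignment here
    return a + b


def multires_stitch(img1, img2):
    lf1, hf1 = split_bands(img1)
    lf2, hf2 = split_bands(img2)
    lf = coarse_stitch(lf1, lf2)
    hf = refined_stitch(hf1, hf2)
    # recombine the stitched LF and HF components
    return [l + h for l, h in zip(lf, hf)]
```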
Abstract:
Systems and methods for providing panoramic image and/or video content using spatially selective encoding and/or decoding. Panoramic content may include stitched spherical (360-degree) images and/or VR video. In some implementations, selective encoding functionality may be embodied in a spherical image capture device that may include two lenses configured to capture pairs of hemispherical images. Encoded source images may be decoded and stitched to obtain a combined image characterized by a greater field of view than the source images. The stitched image may be encoded using a selective encoding methodology including: partitioning the stitched image into multiple portions and determining whether each portion is to be re-encoded. If an image portion is to be re-encoded, the portion is re-encoded; otherwise, the previously encoded image portion is copied in lieu of re-encoding.
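The copy-or-re-encode decision can be sketched as follows. All names are assumptions; `dirty` stands in for whatever test decides that a portion's pixels changed during stitching (e.g., portions crossing the seam) and therefore must be re-encoded.

```python
def selectively_encode(portions, encoded_portions, dirty, encode):
    """portions: decoded image portions; encoded_portions: their prior
    encodings; dirty[i]: whether portion i changed during stitching."""
    out = []
    for i, portion in enumerate(portions):
        if dirty[i]:
            out.append(encode(portion))          # re-encode only where needed
        else:
            out.append(encoded_portions[i])      # copy the previous encoding
    return out
```

The saving is that untouched portions (typically most of the frame) skip the encode step entirely.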
Abstract:
A spherical content capture system captures spherical video and audio content. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. In particular, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest. For each sub-frame, a corresponding portion of an audio track is generated that includes a directional audio signal having a directionality based on the selected sub-frame.
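The sub-frame extraction and the derivation of an audio direction from it can be sketched as below. The spherical frame is modeled as a plain 2-D pixel grid, and the pan formula mapping the sub-frame's center column to an azimuth is an illustrative assumption, not the system's actual spatial-audio method.

```python
def extract_subframe(frame, x, y, w, h):
    """Crop a reduced-field-of-view sub-frame from a frame (list of rows)."""
    return [row[x:x + w] for row in frame[y:y + h]]


def audio_pan(x, w, frame_width):
    """Map the sub-frame's center column to an azimuth in degrees,
    with 0.0 meaning straight ahead (hypothetical convention)."""
    center = x + w / 2
    return (center / frame_width) * 360.0 - 180.0
```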