Abstract:
In the video encoders described herein, blocks of pixels from a video frame may be encoded (e.g., using CAVLC encoding) in a block processing pipeline using wavefront ordering (e.g., in knight's order). Each of the encoded blocks may be written to a particular one of multiple DMA buffers such that the encoded blocks written to each of the buffers represent consecutive blocks of the video frame in scan order. A transcode pipeline may operate in parallel with (or at least overlapping) the operation of the block processing pipeline. The transcode pipeline may read encoded blocks from the buffers in scan order and merge them into a single bit stream (in scan order). A transcoder core of the transcode pipeline may decode the encoded blocks and encode them using a different encoding process (e.g., CABAC). In some cases, the transcoder may be bypassed.
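For illustration, a minimal Python sketch of the buffer striping and scan-order merge described above. The wavefront schedule, the round-robin row-to-buffer mapping, and the block payloads are assumptions for illustration, not the claimed design:

```python
NUM_BUFFERS = 4
BLOCKS_PER_ROW = 8
NUM_ROWS = 6

def encode_block(row, col):
    # Stand-in for CAVLC-encoding one block of pixels.
    return f"blk[{row},{col}]".encode()

# The pipeline emits blocks along wavefronts: blocks with equal
# 2*row + col are mutually independent, and knight's order is one
# schedule that walks such wavefronts.
wavefront = sorted(
    ((r, c) for r in range(NUM_ROWS) for c in range(BLOCKS_PER_ROW)),
    key=lambda rc: (2 * rc[0] + rc[1], rc[0]),
)

# Each block row is owned by one DMA buffer (round-robin here), so the
# blocks written to a buffer are consecutive in scan order within each
# of its rows.
buffers = [[] for _ in range(NUM_BUFFERS)]
for r, c in wavefront:
    buffers[r % NUM_BUFFERS].append((r, c, encode_block(r, c)))

# The transcode pipeline reads the buffers in scan order and merges the
# blocks into a single bit stream; a real transcoder core would decode
# and re-encode (e.g., CAVLC -> CABAC) here, or be bypassed entirely.
bitstream = bytearray()
for row in range(NUM_ROWS):
    for r, c, payload in buffers[row % NUM_BUFFERS]:
        if r == row:
            bitstream += payload

print(bytes(bitstream)[:27])
```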
Abstract:
Some embodiments provide a method of operating a device to capture an image of a high dynamic range (HDR) scene. Upon the device entering an HDR mode, the method captures and stores a plurality of images at a first image exposure level. Upon receiving a command to capture the HDR scene, the method captures a first image at a second image exposure level. The method selects a second image from the captured plurality of images. The method composites the first and second images to produce a composite image that captures the HDR scene. In some embodiments, the method captures multiple images at multiple different exposure levels.
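For illustration, a minimal Python sketch of this capture flow. The ring-buffer depth, exposure values, selection rule, and 50/50 blend are illustrative assumptions, not the claimed method:

```python
from collections import deque

RING_SIZE = 4

def capture(ev):
    # Stand-in for the camera: returns exposure plus fake pixel data.
    return {"ev": ev, "pixels": [0.2 * (ev + 1)] * 16}

# While in HDR mode: keep capturing at the first exposure level and
# retain only the most recent few frames.
ring = deque(maxlen=RING_SIZE)
for _ in range(10):
    ring.append(capture(ev=0))      # first (e.g., normal) exposure

# On the capture command: take one frame at the second exposure level...
overexposed = capture(ev=2)         # second (e.g., longer) exposure

# ...select a stored frame (here simply the most recent; a real selector
# might pick the sharpest), and composite the two into an HDR result.
selected = ring[-1]
composite = [0.5 * a + 0.5 * b      # naive 50/50 blend for illustration
             for a, b in zip(overexposed["pixels"], selected["pixels"])]
print(composite[:4])
```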
Abstract:
Systems and methods are provided for collecting image statistics using a pixel mask. In one example, statistics collection logic of an image signal processor may include a pixel weighting mask and accumulation logic. The pixel weighting mask may receive a first representation of a pixel that includes a luma and chroma representation of the pixel, and may output a pixel weighting value using the first and second chroma components of that representation. The accumulation logic may receive the first representation of the pixel or a second representation of the pixel, along with the pixel weighting value. Using these, the accumulation logic may weight the received representation of the pixel by the pixel weighting value to obtain a weighted pixel value and add the weighted pixel value to a statistics count.
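For illustration, a minimal Python sketch of such a weighting mask and accumulator, assuming a coarse two-dimensional weight table indexed by the two chroma components and a luma value as the accumulated representation (both assumptions):

```python
BINS = 4    # coarse chroma grid; weights[cb_bin][cr_bin] is in [0, 1]
weights = [[1.0 if (cb + cr) < BINS else 0.25 for cr in range(BINS)]
           for cb in range(BINS)]

def pixel_weight(cb, cr):
    # The mask maps the two chroma components of the first (YCbCr)
    # representation of the pixel to a weighting value.
    return weights[min(cb * BINS // 256, BINS - 1)][min(cr * BINS // 256, BINS - 1)]

stat_sum = 0.0
stat_count = 0.0
for y, cb, cr in [(200, 30, 40), (90, 180, 200), (120, 60, 70)]:
    w = pixel_weight(cb, cr)
    stat_sum += w * y        # weighted pixel value added to the statistic
    stat_count += w          # weighted count used for normalization

print(stat_sum / stat_count)  # e.g., a chroma-masked average luma
```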
Abstract:
A system may include a display for displaying an image frame that is divided into regions having respective resolutions based on display image data. The system may also include image processing circuitry to generate the display image data based on multi-resolution image data of the image frame. Generating the display image data may include determining an enhancement to be applied to a portion of the multi-resolution image data and adjusting the determined enhancement to be applied to the portion of the multi-resolution image data based on boundary data associated with locations of boundaries between the regions.
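For illustration, a minimal Python sketch of adjusting an enhancement near region boundaries, assuming a one-dimensional region layout, per-region sampling factors, and a linear fade of a sharpening gain (all illustrative assumptions):

```python
regions = [          # (start_col, end_col, pixels_per_sample)
    (0, 40, 4),      # lower-resolution periphery
    (40, 80, 1),     # full-resolution region
    (80, 120, 4),    # lower-resolution periphery
]
# Boundary data: the column locations where adjacent regions meet.
boundaries = [end for (_, end, _) in regions[:-1]]
RAMP = 6             # columns over which the enhancement fades out

def enhancement_gain(col, base_gain=1.0):
    # Determine the enhancement, then attenuate it near a boundary so
    # the seam between resolution regions is not amplified.
    dist = min(abs(col - b) for b in boundaries)
    return base_gain * min(1.0, dist / RAMP)

for col in (0, 37, 40, 43, 60):
    print(col, round(enhancement_gain(col), 2))
```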
Abstract:
A mixed reality system that includes a device and a base station that communicate via a wireless connection. The device may include sensors that collect information about the user's environment and about the user. The information collected by the sensors may be transmitted to the base station via the wireless connection. The base station renders frames or slices based at least in part on the sensor information received from the device, encodes the frames or slices, and transmits the encoded frames or slices to the device for decoding and display. The base station may provide more computing power than conventional stand-alone systems, and the wireless connection does not tether the device to the base station as in conventional tethered systems. The system may implement methods and apparatus to maintain a target frame rate over the wireless link and to minimize latency in frame rendering, transmittal, and display.
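For illustration, a minimal Python sketch of the base-station side of this loop, assuming slice-level pipelining and a simple quality-step heuristic to hold the target frame rate (the timing values, stub functions, and rate-control rule are assumptions):

```python
import time

TARGET_FPS = 90
FRAME_BUDGET = 1.0 / TARGET_FPS
SLICES = 4

quality = 30    # stand-in encoder quality parameter

def render_slice(pose, i):
    return b"rendered-slice-data"       # would use the device sensor data

def encode_slice(data, q):
    return data[: max(1, len(data) - q // 10)]  # fake compression

def send(payload):
    pass                                # stand-in wireless transmit

for frame in range(3):
    pose = {"frame": frame}             # sensor info received from the device
    start = time.monotonic()
    for i in range(SLICES):             # slice-level pipelining cuts latency:
        send(encode_slice(render_slice(pose, i), quality))
    elapsed = time.monotonic() - start
    # Simple rate control: reduce quality when over budget, recover slowly.
    quality = quality + 5 if elapsed > FRAME_BUDGET else max(10, quality - 1)
print("final quality parameter:", quality)
```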
Abstract:
Support for additional components may be specified in a coding scheme for image data. A layer of a coding scheme that specifies color components may also specify additional components. Characteristics of the components may be specified in the same layer or a different layer of the coding scheme. An encoder or decoder may identify the specified components and determine the respective characteristics to perform encoding and decoding of image data.
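For illustration, a minimal Python sketch of such a component specification, assuming simple per-component characteristics (the field names and values are illustrative, not the coding scheme's actual syntax):

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str          # e.g., "Y", "Cb", "Cr", "A"
    bit_depth: int     # characteristic, possibly specified in another layer
    subsampling: str   # e.g., "4:4:4", "4:2:0"

# One layer specifies the color components plus additional components;
# characteristics may come from the same layer or a different layer.
layer = [
    Component("Y", 10, "4:4:4"),
    Component("Cb", 10, "4:2:0"),
    Component("Cr", 10, "4:2:0"),
    Component("A", 8, "4:4:4"),    # additional component: alpha
]

def plan_coding(components):
    # Encoder/decoder identifies each specified component and resolves
    # its characteristics before encoding or decoding the image data.
    return {c.name: (c.bit_depth, c.subsampling) for c in components}

print(plan_coding(layer))
```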
Abstract:
Systems and methods for down-scaling are provided. In one example, a method for processing image data includes determining a plurality of output pixel locations using a current position value stored by a position register, using the current position value to select a center input pixel from the image data and to select an index value, selecting a set of source input pixels adjacent to the center input pixel, selecting a set of filtering coefficients from a filter coefficient lookup table using the index value, filtering the set of source input pixels by applying a respective one of the set of filtering coefficients to each of the set of source input pixels to determine an output value for the current output pixel at the current position value, and correcting chromatic aberrations in the set of source input pixels.
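For illustration, a minimal Python sketch of this position-register-driven polyphase filter, assuming a 1.5x step, five-tap kernels, and four fractional phases (the table values are arbitrary, and the chromatic aberration correction step is omitted):

```python
PHASES = 4
# Filter coefficient lookup table: one five-tap kernel per fractional phase.
TAPS = [[0.10, 0.20, 0.40, 0.20, 0.10],
        [0.05, 0.15, 0.40, 0.25, 0.15],
        [0.15, 0.25, 0.40, 0.15, 0.05],
        [0.10, 0.25, 0.30, 0.25, 0.10]]

def downscale(src, step=1.5):
    out, pos = [], 0.0        # pos plays the role of the position register
    while pos <= len(src) - 1:
        center = int(pos)                        # center input pixel
        index = int((pos - center) * PHASES)     # index into the table
        # Set of source input pixels around the center (edge-clamped).
        window = [src[min(max(center + k, 0), len(src) - 1)]
                  for k in range(-2, 3)]
        out.append(sum(c * p for c, p in zip(TAPS[index], window)))
        pos += step           # advance the position register by the step
    return out

print(downscale([float(i) for i in range(12)]))
```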
Abstract:
Systems, apparatuses, and methods for implementing a timestamp-based display update mechanism. A display control unit includes a timestamp queue for storing timestamps, wherein each timestamp indicates when a corresponding frame configuration set should be fetched from memory. At pre-defined intervals, the display control unit may compare the timestamp of the topmost entry of the timestamp queue to a global timer value. If the timestamp is earlier than the global timer value, the display control unit may pop the timestamp entry and fetch the next frame configuration set from memory. The display control unit may then apply the updates of the frame configuration set to its pixel processing elements. After applying the updates, the display control unit may fetch and process the source pixel data and then drive the pixels of the next frame to the display.
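For illustration, a minimal Python sketch of the timestamp check. The queue contents, tick values, and configuration format are illustrative assumptions:

```python
from collections import deque

timestamp_queue = deque([
    (1000, {"scale": 1}),    # (timestamp, frame configuration set)
    (1166, {"scale": 2}),
    (1333, {"scale": 1}),
])

def apply_to_pixel_pipeline(config):
    # Stand-in for updating the pixel processing elements; the unit
    # would then fetch/process source pixels and drive the display.
    print("applying", config)

def tick(global_timer):
    # At each pre-defined interval, compare the topmost timestamp to the
    # global timer value; pop and apply the configuration once the
    # timestamp is earlier than the timer.
    while timestamp_queue and timestamp_queue[0][0] < global_timer:
        ts, config = timestamp_queue.popleft()
        apply_to_pixel_pipeline(config)

for timer in (900, 1100, 1400):
    tick(timer)
```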
Abstract:
Adaptive video processing for a target display panel may be implemented in or by a server/encoding pipeline. The adaptive video processing methods may obtain and take into account video content and display panel-specific information, including display characteristics and environmental conditions (e.g., ambient lighting and viewer location), when processing and encoding video content to be streamed to the target display panel in its ambient setting or environment. The server-side adaptive video processing methods may use this information to adjust one or more video processing functions as applied to the video data to generate video content in the color gamut and dynamic range of the target display panel that is adapted to the display panel characteristics and ambient viewing conditions.
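For illustration, a minimal Python sketch of server-side adaptation: choosing tone-mapping parameters from the target panel's characteristics and ambient conditions before encoding. The parameter names, values, and formula are assumptions for illustration:

```python
panel = {"max_nits": 600, "gamut": "P3"}           # display characteristics
ambient = {"lux": 250, "viewer_distance_m": 0.45}  # environmental conditions

def tone_map_params(panel, ambient):
    # Brighter surroundings -> lift mid-tones; cap output at panel peak.
    surround_boost = 1.0 + min(ambient["lux"] / 1000.0, 0.5)
    return {
        "peak_nits": panel["max_nits"],     # dynamic range of the panel
        "mid_gain": surround_boost,         # adapted to ambient lighting
        "target_gamut": panel["gamut"],     # color gamut of the panel
    }

params = tone_map_params(panel, ambient)
print(params)   # applied to the video before encoding and streaming
```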
Abstract:
A context switching method for video encoders that enables higher priority video streams to interrupt lower priority video streams. A high priority frame may be received for processing while another frame is being processed. The pipeline may be signaled to perform a context stop for the current frame. The pipeline stops processing the current frame at an appropriate place, and propagates the stop through the stages of the pipeline and to a transcoder through DMA. The stopping location is recorded. The video encoder may then process the higher-priority frame. When done, a context restart is performed and the pipeline resumes processing the lower-priority frame beginning at the recorded location. The transcoder may process data for the interrupted frame while the higher-priority frame is being processed in the pipeline, and similarly the pipeline may begin processing the lower-priority frame after the context restart while the transcoder completes processing the higher-priority frame.
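For illustration, a minimal Python sketch of the context stop/restart flow. The single-counter pipeline model and the per-frame bookmark are assumptions standing in for the pipeline stages, DMA, and transcoder:

```python
class EncodeContext:
    def __init__(self, name, num_blocks):
        self.name, self.num_blocks, self.next_block = name, num_blocks, 0

def run(ctx, stop_at=None):
    # Process blocks until done, or perform a context stop at the given
    # block; next_block records the stopping location for the restart.
    end = ctx.num_blocks if stop_at is None else min(stop_at, ctx.num_blocks)
    while ctx.next_block < end:
        ctx.next_block += 1   # stand-in for the pipeline stages and DMA
    return ctx.next_block == ctx.num_blocks

low = EncodeContext("low-priority", 100)
run(low, stop_at=37)          # context stop: location 37 is recorded

high = EncodeContext("high-priority", 20)
run(high)                     # high-priority frame runs to completion;
                              # the transcoder may drain "low" meanwhile

run(low)                      # context restart resumes at the recorded block
print(low.name, "finished at block", low.next_block)
```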