Abstract:
Devices and methods for error diffusion and spatiotemporal dithering are provided. By way of example, a method of operating a display includes receiving a pixel input, a set of pixel coordinates, and a current frame number. A kernel, and a particular kernel bit of that kernel, are selected from a set of kernels based upon the pixel input, the pixel coordinates, the frame number, or any combination thereof. A dithered output is determined based at least in part upon the kernel bit. When the display is in a diamond pixel configuration, the dithered output is applied in accordance with a diamond pattern formed by red, blue, or red and blue pixel channels.
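As a rough illustration of the kernel-bit selection described above, the following Python sketch chooses a 4x4 binary kernel by frame number and a kernel bit by pixel coordinates, then uses that bit to decide how a pixel value is rounded when reducing bit depth. The kernel contents, bit depths, and selection rule are assumptions for illustration, not the patented method.

```python
# Minimal sketch of spatiotemporal kernel-bit selection, assuming 4x4 binary
# kernels and a simple coordinate/frame indexing scheme; names and the exact
# selection rule are illustrative, not taken from the source.
import numpy as np

KERNELS = np.array([
    [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]],  # checkerboard
    [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]],  # inverted checkerboard
], dtype=np.uint8)


def dither_pixel(pixel_input, x, y, frame, in_bits=10, out_bits=8):
    """Quantize pixel_input to out_bits, using a kernel bit chosen from
    (x, y, frame) to decide whether the truncated residue rounds up."""
    kernel = KERNELS[frame % len(KERNELS)]          # kernel chosen per frame
    kernel_bit = kernel[y % 4, x % 4]               # kernel bit chosen per pixel
    shift = in_bits - out_bits
    truncated = pixel_input >> shift
    residue = pixel_input & ((1 << shift) - 1)
    # Round up when the residue is large enough and the kernel bit is set.
    if kernel_bit and residue >= (1 << (shift - 1)):
        truncated = min(truncated + 1, (1 << out_bits) - 1)
    return truncated


print(dither_pixel(pixel_input=515, x=3, y=7, frame=12))
```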
Abstract:
System and method for improving operational efficiency of a video encoding pipeline used to encode image data. In embodiments, the video encoding pipeline includes bit-rate statistics generation that is useful for controlling subsequent bit rates and/or determining encoding operational modes.
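A bit-rate statistics feedback loop of the kind referred to above might, in simplified form, look like the following Python sketch, where per-block bit counts nudge a quantization parameter toward a budget. The statistics and the update rule are assumptions for illustration, not the pipeline's actual rate control.

```python
# Illustrative sketch of per-block bit-rate statistics feeding a rate
# controller; the statistics and the QP update rule are assumptions, not the
# pipeline described in the source.
def update_qp(block_bits, target_bits_per_block, qp, qp_min=0, qp_max=51):
    """Nudge the quantization parameter toward the per-block bit budget."""
    if block_bits > 1.1 * target_bits_per_block:
        qp = min(qp + 1, qp_max)      # spending too many bits: quantize harder
    elif block_bits < 0.9 * target_bits_per_block:
        qp = max(qp - 1, qp_min)      # under budget: allow more fidelity
    return qp


qp = 30
for block_bits in [1200, 800, 950, 1500]:   # bits produced per encoded block
    qp = update_qp(block_bits, target_bits_per_block=1000, qp=qp)
    print(qp)
```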
Abstract:
System and method for improving operational efficiency of a video encoding pipeline used to encode image data. The video encoding pipeline includes a motion estimation setup block, which dynamically adjusts a setup configuration of the motion estimation block based at least in part on operational parameters of the video encoding pipeline and selects an initial candidate inter-frame prediction mode based at least on the setup configuration; a full-pel motion estimation block, which determines an intermediate candidate inter-frame prediction mode by performing a motion estimation search based on the initial candidate inter-frame prediction mode; a sub-pel motion estimation block, which determines a final candidate inter-frame prediction mode by performing a motion estimation search based on the intermediate candidate inter-frame prediction mode; and a mode decision block, which determines a rate-distortion cost associated with the final candidate inter-frame prediction mode and determines a prediction mode used to prediction encode the image data.
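The staged search and the rate-distortion comparison can be pictured with the Python sketch below; the SAD distortion metric, search radius, and lambda value are illustrative assumptions, and sub-pel refinement is omitted.

```python
# Hedged sketch of a full-pel motion search followed by a rate-distortion
# comparison of the kind a mode decision stage performs; block size, SAD, and
# lambda are illustrative assumptions.
import numpy as np

def sad(block, ref):
    return int(np.abs(block.astype(int) - ref.astype(int)).sum())

def full_pel_search(block, ref_frame, center, radius=4):
    """Return the integer motion vector with the lowest SAD around `center`."""
    h, w = block.shape
    best_mv, best_cost = center, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = center[0] + dy, center[1] + dx
            cand = ref_frame[y:y + h, x:x + w]
            cost = sad(block, cand)
            if cost < best_cost:
                best_mv, best_cost = (y, x), cost
    return best_mv, best_cost

def rd_cost(distortion, bits, lam=10.0):
    """Mode decision compares candidates by J = D + lambda * R."""
    return distortion + lam * bits


ref = np.arange(16 * 16, dtype=np.uint8).reshape(16, 16)
blk = ref[6:10, 6:10]
mv, dist = full_pel_search(blk, ref, center=(5, 5), radius=2)
print(mv, rd_cost(dist, bits=6))
```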
Abstract:
A knight's order processing method for block processing pipelines in which the next block input to the pipeline is taken from the row below and one or more columns to the left in the frame. The knight's order method may provide spacing between adjacent blocks in the pipeline to facilitate feedback of data from a downstream stage to an upstream stage. The rows of blocks in the input frame may be divided into sets of rows that constrain the knight's order method to maintain locality of neighbor block data. Invalid blocks may be input to the pipeline at the left of the first set of rows and at the right of the last set of rows, and the sets of rows may be treated as if they are horizontally arranged rather than vertically arranged, to maintain continuity of the knight's order algorithm.
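The stepping rule can be illustrated with a small simulation, assuming 4-row sets, a down-one/left-two step, and a wrap of up-three/right-seven at the bottom of a set; out-of-frame columns stand in for the invalid blocks mentioned above. The exact offsets are assumptions for illustration.

```python
# Minimal simulation of a knight's-order scan over one set of rows, assuming
# 4-row sets, a "down one row, left two columns" step, and a wrap of "up three
# rows, right seven columns" at the bottom of the set; negative or overflowing
# columns stand in for the invalid blocks fed to the pipeline. The exact
# offsets are illustrative assumptions.
def knights_order(cols, rows_per_set=4):
    r, c = 0, 0
    total = rows_per_set * cols
    visited = 0
    while visited < total:
        if 0 <= c < cols:
            yield (r, c)            # valid block in the frame
            visited += 1
        else:
            yield (r, None)         # invalid (padding) block
        if r < rows_per_set - 1:
            r, c = r + 1, c - 2     # down one row, two columns to the left
        else:
            r, c = 0, c + 2 * rows_per_set - 1   # wrap to the top of the set


for blk in knights_order(cols=6):
    print(blk)
```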
Abstract:
A video encoder may include a context-adaptive binary arithmetic coding (CABAC) encode component that converts each syntax element of a representation of a block of pixels to binary code, serializes it, and codes it mathematically with its probability model, after which the resulting bit stream is output. When the probability of a bin being coded with one of two possible symbols is one-half, the bin may be coded using bypass bin coding mode rather than a more compute-intensive regular bin coding mode. The CABAC encoder may code multiple consecutive bypass bins in a series of cascaded processing units during a single processing cycle (e.g., a regular bin coding cycle). Intermediate outputs of each processing unit may be coupled to inputs of the next processing unit. A resolver unit may accept intermediate outputs of the processing units and generate final output bits for the bypass bins.
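A much-simplified model of cascaded bypass bin coding is sketched below: each stage doubles the coder's low register and conditionally adds the range, and a resolver emits any carried-out bit. The register width and renormalization details are simplified assumptions, not the hardware design described above.

```python
# Simplified sketch of coding several bypass bins "in cascade": each stage
# doubles the coder's low value and conditionally adds the current range, and a
# final resolver emits the carried-out bits. Register widths and the
# renormalization details are simplified assumptions, not the actual CABAC
# hardware.
def code_bypass_bins(bins, low, rng, bits=9):
    """Cascade one bypass step per bin; return updated low and resolved bits."""
    out = []
    for b in bins:                      # each iteration models one cascaded unit
        low = (low << 1) + (rng if b else 0)
        # Resolver: any bit carried above the register width is final output.
        if low >> bits:
            out.append(low >> bits)
            low &= (1 << bits) - 1
        else:
            out.append(0)
    return low, out


low, out_bits = code_bypass_bins([1, 0, 1, 1], low=0x0AB, rng=0x1C0)
print(hex(low), out_bits)
```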
Abstract:
Systems and methods are provided for using an optical crosstalk compensation (OXTC) block to compensate for optical crosstalk resulting from a combination of viewing angle change across the field of view (FoV), color filter (CF) crosstalk, and OLED various angle color shift (VACS) of a foveated electronic display. One or more two-dimensional (2D) OXTC factor maps are used to determine OXTC factors for input image data of the OXTC block, and the OXTC factors are updated on a per-frame basis. Offset values are determined using a parallel architecture and used to determine the OXTC factors. Compensation weights are used to determine weighted OXTC factors to improve processing efficiency. Output image data are obtained by applying the weighted OXTC factors to the input image data.
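In simplified form, applying weighted OXTC factors might look like the following sketch, where a coarse 2D factor map is upsampled per pixel, scaled by a compensation weight, and used to subtract estimated channel-to-channel leakage. The map resolution, the weight, and the mixing rule are illustrative assumptions.

```python
# Hedged sketch of applying weighted optical-crosstalk-compensation factors:
# a coarse 2D factor map is sampled per pixel, scaled by a compensation
# weight, and used to subtract estimated crosstalk between color channels.
# The map resolution, the weight, and the mixing rule are illustrative
# assumptions.
import numpy as np

def apply_oxtc(image, factor_map, weight=0.8):
    """image: HxWx3 float array; factor_map: hxwx3 crosstalk factors in [0, 1)."""
    h, w, _ = image.shape
    # Nearest-neighbor upsample of the coarse factor map to the image size.
    ys = np.arange(h) * factor_map.shape[0] // h
    xs = np.arange(w) * factor_map.shape[1] // w
    factors = factor_map[ys][:, xs] * weight          # weighted OXTC factors
    # Subtract estimated leakage from the two neighboring channels.
    leak = np.stack([
        factors[..., 0] * (image[..., 1] + image[..., 2]) / 2,
        factors[..., 1] * (image[..., 0] + image[..., 2]) / 2,
        factors[..., 2] * (image[..., 0] + image[..., 1]) / 2,
    ], axis=-1)
    return np.clip(image - leak, 0.0, 1.0)


img = np.random.rand(8, 8, 3)
fmap = np.full((2, 2, 3), 0.05)
print(apply_oxtc(img, fmap).shape)
```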
Abstract:
An electronic device may include an electronic display to display an image based on processed image data. The electronic device may also include image processing circuitry to determine a hierarchical grid having multiple grid points divided into grid partitions. A first set of grid points associated with a first set of grid partitions may include a first set of mappings to corresponding coordinates of input image data in a source frame. The image processing circuitry may also interpolate between the first set of grid points to determine a second set of grid points having a second set of mappings to corresponding coordinates of the input image data based on the first set of mappings. The image processing circuitry may also generate the processed image data by applying the first set of mappings and the second set of mappings to the input image data.
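The interpolation from a coarse set of grid-point mappings to a finer set can be sketched as a bilinear refinement, as below; the grid sizes and the warp itself are illustrative assumptions.

```python
# Minimal sketch of deriving a finer set of grid-point mappings by bilinear
# interpolation between a coarse set, as a stand-in for the hierarchical grid
# described in the abstract. Grid sizes and the warp values are illustrative
# assumptions.
import numpy as np

def refine_grid(coarse, factor=2):
    """coarse: (H, W, 2) array of (x, y) source coordinates at coarse grid
    points; returns a ((H-1)*factor+1, (W-1)*factor+1, 2) refined grid."""
    H, W, _ = coarse.shape
    out_h, out_w = (H - 1) * factor + 1, (W - 1) * factor + 1
    fy = np.linspace(0, H - 1, out_h)
    fx = np.linspace(0, W - 1, out_w)
    y0 = np.clip(fy.astype(int), 0, H - 2)
    x0 = np.clip(fx.astype(int), 0, W - 2)
    wy = (fy - y0)[:, None, None]
    wx = (fx - x0)[None, :, None]
    c00 = coarse[y0][:, x0]
    c01 = coarse[y0][:, x0 + 1]
    c10 = coarse[y0 + 1][:, x0]
    c11 = coarse[y0 + 1][:, x0 + 1]
    return (c00 * (1 - wy) * (1 - wx) + c01 * (1 - wy) * wx
            + c10 * wy * (1 - wx) + c11 * wy * wx)


coarse = np.dstack(np.meshgrid(np.arange(3) * 10.0, np.arange(3) * 10.0))
print(refine_grid(coarse).shape)   # (5, 5, 2)
```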
Abstract:
Methods and systems include neural network-based image processing and blending circuitry that blends an output of the neural network(s) to compensate for potential artifacts from the neural network-based image processing. The one or more neural networks apply image processing to image data to generate processed data. Enhancement circuitry enhances the image data in scaling circuitry to generate enhanced data. Blending circuitry receives the processed data and the enhanced data along with an image plane of the processed data. The blending circuitry also determines whether the image processing using the one or more neural networks has applied a change to the image data greater than a threshold amount. The blending circuitry then, based at least in part on the change being greater than the threshold amount and/or edge information of the image data, blends the processed data with the enhanced data.
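A simplified picture of the blending decision is sketched below: where the neural-network processing changed the image by more than a threshold, or near edges, the output is blended toward the conventionally enhanced data. The threshold, the per-pixel blend weight, and the edge term are assumptions for illustration.

```python
# Hedged sketch of the blending decision: where the neural-network processing
# changed the image by more than a threshold, the processed data is blended
# toward a conventionally enhanced version. The threshold, the blend weight,
# and the edge term are illustrative assumptions.
import numpy as np

def blend(processed, enhanced, original, threshold=0.1, edge_weight=0.5):
    change = np.abs(processed - original)            # how much the NN altered each pixel
    gy, gx = np.gradient(original)                   # crude edge information
    edges = np.clip(np.hypot(gx, gy), 0.0, 1.0)
    # Fall back toward the enhanced data where the change is large or edges are strong.
    alpha = np.clip((change > threshold) * 1.0 + edge_weight * edges, 0.0, 1.0)
    return alpha * enhanced + (1.0 - alpha) * processed


orig = np.random.rand(16, 16)
proc = orig + np.random.normal(0, 0.05, orig.shape)  # stand-in NN output
enh = np.clip(orig * 1.1, 0, 1)                      # stand-in enhanced data
print(blend(proc, enh, orig).shape)
```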
Abstract:
A device may include an electronic display to display an image frame based on blended image data and image processing circuitry to generate the blended image data by combining first image data and second image data via a blend operation. The blend operation may include receiving graphics alpha data indicative of a transparency factor to be applied to the first image data to generate a first layer of the blend operation. The blend operation may also include overlaying the first layer onto a second layer that is based on the second image data. Overlaying the first layer onto the second layer may include adding first pixel values of the first image data that include negative pixel values and are augmented by the transparency factor to second pixel values of the second image data to generate blended pixel values of the blended image data.
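The described blend can be sketched as scaling the (possibly negative) graphics pixel values by the transparency factor and adding them to the underlying layer, as below; value ranges and the premultiply convention are illustrative assumptions.

```python
# Minimal sketch of the described blend: the graphics layer (which may carry
# negative pixel values) is scaled by the transparency factor from the alpha
# data and then added to the underlying layer. Value ranges and the premultiply
# convention are illustrative assumptions.
import numpy as np

def blend_layers(first_image, graphics_alpha, second_image):
    """first_image: signed values (may be negative); graphics_alpha in [0, 1]."""
    first_layer = graphics_alpha * first_image          # transparency applied to layer 1
    blended = second_image + first_layer                # negative values subtract from layer 2
    return np.clip(blended, 0.0, 1.0)


gfx = np.array([[-0.2, 0.3], [0.0, -0.5]])              # signed graphics contribution
alpha = np.array([[1.0, 0.5], [0.25, 1.0]])
video = np.full((2, 2), 0.6)
print(blend_layers(gfx, alpha, video))
```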
Abstract:
A mixed reality system that includes a device and a base station that communicate via a wireless connection. The device may include sensors that collect information about the user's environment and about the user. The information collected by the sensors may be transmitted to the base station via the wireless connection. The base station renders frames or slices based at least in part on the sensor information received from the device, encodes the frames or slices, and transmits the compressed frames or slices to the device for decoding and display. The base station may provide more computing power than conventional stand-alone systems, and the wireless connection does not tether the device to the base station as in conventional tethered systems. The system may implement methods and apparatus to maintain a target frame rate through the wireless link and to minimize latency in frame rendering, transmittal, and display.
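One simplified way to picture frame-rate maintenance over the wireless link is a pacing loop that compares each frame's render, encode, and transmit time to the frame budget and adjusts a quality level, as in the sketch below; the control rule and the quality knob are assumptions, not the system's actual method.

```python
# Hedged sketch of frame pacing over the wireless link: each frame's render,
# encode, and transmit time is measured against the frame budget, and a quality
# level is adjusted to hold the target frame rate. The control rule and the
# quality knob are illustrative assumptions.
def pace(frame_times_ms, target_fps=90, quality=1.0):
    budget = 1000.0 / target_fps
    for t in frame_times_ms:
        if t > budget:
            quality = max(0.25, quality - 0.1)   # over budget: drop quality
        elif t < 0.8 * budget:
            quality = min(1.0, quality + 0.05)   # spare headroom: raise quality
        yield quality


for q in pace([9.5, 12.0, 13.1, 8.0], target_fps=90):
    print(round(q, 2))
```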