Abstract:
Systems and methods for decoding are described. A decoder system is configured to retrieve, from a grain array stored within on-chip memory, noise pixel data associated with at least one reconstructed video pixel. The decoder system is configured to apply noise data to the at least one reconstructed video pixel to generate at least one output video pixel. The noise data is determined based on the noise pixel data.
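A minimal sketch of the noise-application step described above, assuming the grain array holds pre-generated noise values addressed by block position; the function name, the block coordinates, and the simple add-and-clip blending model are illustrative assumptions, not the codec-defined film-grain procedure.

    import numpy as np

    def apply_film_grain(reconstructed, grain_array, block_y, block_x, scale=1.0, bit_depth=8):
        """Apply noise pixel data from a grain array to reconstructed pixels (sketch)."""
        h, w = reconstructed.shape
        # Retrieve the noise pixels corresponding to this block's position in the grain array.
        noise = grain_array[block_y:block_y + h, block_x:block_x + w]
        # Blend the noise into the reconstructed pixels and clip to the valid sample range.
        out = reconstructed.astype(np.int32) + np.round(scale * noise).astype(np.int32)
        return np.clip(out, 0, (1 << bit_depth) - 1).astype(reconstructed.dtype)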
Abstract:
Video decoding systems and techniques are described. A decoder reads first video data from a first block of a video frame. The decoder retrieves neighboring video data from a line buffer. The neighboring video data is from a neighboring block that neighbors the first block in the video frame. The decoder processes the first video data and the retrieved neighboring video data using a constrained directional enhancement filter (CDEF) to generate filtered first video data. The decoder upscales the filtered first video data using an upscaler to generate upscaled filtered first video data. The decoder upscales the retrieved neighboring video data using the upscaler to generate upscaled neighboring video data, for instance after generating the filtered first video data. The decoder processes the upscaled filtered first video data and the upscaled neighboring video data using a loop restoration (LR) filter to generate output video data.
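A sketch of the filter ordering described above, assuming placeholder callables cdef, upscale, and loop_restoration and a line_buffer object with a read method; none of these names are a real codec API.

    def decode_block(block, line_buffer, block_index, cdef, upscale, loop_restoration):
        """Order of operations: CDEF with neighbor data, upscale both, then LR filter."""
        neighbor = line_buffer.read(block_index)           # neighboring video data from the line buffer
        filtered = cdef(block, neighbor)                   # CDEF uses the block and its neighbor
        up_filtered = upscale(filtered)                    # upscale the CDEF-filtered data
        up_neighbor = upscale(neighbor)                    # then upscale the retrieved neighbor data
        return loop_restoration(up_filtered, up_neighbor)  # LR filter consumes both upscaled inputs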
Abstract:
Video decoding systems and techniques are described. A decoder applies a deblocking (DB) filter to a plurality of sub-blocks of a block of video data to generate a DB-filtered plurality of sub-blocks. The decoder applies the DB filter to one or more lines (e.g., columns) of pixels in an additional sub-block of the block to generate a DB-filtered portion of the additional sub-block. The one or more lines of pixels in the additional sub-block are filtered without filtering an entirety of the additional sub-block using the DB filter. The additional sub-block is adjacent to at least one of the plurality of sub-blocks. The decoder applies a constrained directional enhancement filter (CDEF) to the DB-filtered plurality of sub-blocks and the DB-filtered portion of the additional sub-block to generate a CDEF-filtered plurality of sub-blocks.
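An illustrative sketch of the partial-deblocking idea, assuming 2-D arrays for sub-blocks and placeholder callables db_filter and cdef; the column count and function names are assumptions.

    def db_then_cdef(sub_blocks, extra_sub_block, num_cols, db_filter, cdef):
        """Deblock full sub-blocks plus only a few columns of the adjacent sub-block, then run CDEF."""
        db_blocks = [db_filter(sb) for sb in sub_blocks]
        # Filter only the first num_cols columns of the adjacent sub-block,
        # so CDEF can proceed without waiting for the whole sub-block to be deblocked.
        db_partial = db_filter(extra_sub_block[:, :num_cols])
        return cdef(db_blocks, db_partial)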
Abstract:
A device for upscale filtering video data includes one or more processors configured to generate a number of upscaled pixels that is an integer multiple of one half of a size of a filter used to upscale filter the video data. The device may then store any remaining pixels to be used when upscale filtering a subsequent, right-neighboring block. In this manner, the device may avoid generating upscaled pixels that would be unaligned with the block and that, due to being unaligned, might not be output.
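The alignment rule reduces to simple arithmetic: round the per-block output count down to a multiple of half the filter size and carry the remainder to the right-neighboring block. A small sketch with hypothetical names:

    def split_upscaled_output(total_pixels, filter_size):
        """Return (pixels_to_emit, pixels_to_carry) for one block (illustrative sketch)."""
        half = filter_size // 2
        emit = (total_pixels // half) * half   # largest integer multiple of half the filter size
        carry = total_pixels - emit            # stored for the subsequent, right-neighboring block
        return emit, carry

    # Example: a 6-tap filter and 17 candidate upscaled pixels -> emit 15, carry 2.
    print(split_upscaled_output(17, 6))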
Abstract:
As the quality and quantity of shared video content increases, video encoding standards and techniques are being developed and improved to reduce bandwidth consumption over telecommunication and other networks. One such technique for compressing videos involves transforming image data into an alternate, encoding-friendly domain (e.g., by a two-dimensional discrete cosine transform). Transform modules may be implemented to perform these transformations, which may occur during both video encoding and decoding processes. Provided are exemplary techniques for improving the efficiency and performance of transform module implementations.
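A minimal example of the two-dimensional discrete cosine transform mentioned above, using SciPy as a stand-in for a hardware transform module; the 8x8 block contents are toy data.

    import numpy as np
    from scipy.fft import dctn, idctn

    # A 2-D DCT concentrates most of a block's energy into a few low-frequency
    # coefficients, which is what makes the transformed domain encoding-friendly.
    block = np.arange(64, dtype=np.float64).reshape(8, 8)   # an 8x8 residual block (toy data)
    coeffs = dctn(block, norm='ortho')                       # forward transform (encoder side)
    reconstructed = idctn(coeffs, norm='ortho')              # inverse transform (decoder side)
    assert np.allclose(block, reconstructed)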
Abstract:
As the quality and quantity of shared video content increases, video encoding standards and techniques are being developed and improved to reduce bandwidth consumption over telecommunication and other networks. One technique to reduce bandwidth consumption is intra-prediction, which exploits spatial redundancies within video frames. Each video frame may be segmented into blocks, and intra-prediction may be applied to the blocks. However, intra-prediction of some blocks may rely upon the completion (e.g., reconstruction) of other blocks, which can make parallel processing challenging. Provided are exemplary techniques for improving the efficiency and throughput associated with the intra-prediction of multiple blocks.
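A simplified sketch of why intra-prediction creates block-to-block dependencies: predicting a block reads already-reconstructed pixels from its above and left neighbors. DC prediction is shown; edge handling and other modes are omitted, and the function name is illustrative.

    import numpy as np

    def intra_dc_predict(recon, y, x, size):
        """DC intra prediction for the block at (y, x) using reconstructed neighbor pixels."""
        above = recon[y - 1, x:x + size] if y > 0 else np.array([])   # row above must already be reconstructed
        left = recon[y:y + size, x - 1] if x > 0 else np.array([])    # column to the left likewise
        neighbors = np.concatenate([above, left])
        dc = neighbors.mean() if neighbors.size else 128              # mid-level value when no neighbor exists
        return np.full((size, size), dc)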
Abstract:
Certain aspects of the present disclosure provide techniques for scalably and efficiently converting linear image data into multi-dimensional image data for multimedia applications. In one example, a method for managing image data includes receiving a line of image data in a linear format via a system bus of width T, wherein the image data's native format is a tile format of H lines per tile; forming H subsets of image data from the line of image data in the linear format; writing the H subsets of image data to a memory comprising BN=H banks of BW=T/BN pixel width, wherein each subset of the H subsets is written to a different bank of the BN banks; and outputting the H subsets of image data in the tile format.
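A sketch of the linear-to-bank write described above, for a line of T pixels split into H subsets of width BW = T/BN with BN = H banks; the rotation of the starting bank by line index is an assumption made so that each subset of a line lands in a different bank, since the exact mapping is not specified here.

    def write_line_to_banks(banks, line, line_idx, H):
        """Write one linear line of T pixels into H banks of T/H-pixel width (sketch)."""
        T = len(line)
        BW = T // H                                 # bank width in pixels: BW = T / BN, BN = H
        for i in range(H):
            subset = line[i * BW:(i + 1) * BW]      # one of the H subsets of the line
            bank = (i + line_idx) % H               # assumed rotation: each subset to a different bank
            banks[bank].append(subset)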
Abstract:
Systems and techniques are provided for processing video data. For example, an apparatus may obtain one or more first sets of collocated motion vector data and one or more second sets of collocated motion vector data, associated respectively with a first block and a second block of video data included in a current frame of video data. The apparatus may project the one or more first sets of collocated motion vector data into a first projected motion field associated with a first buffer and project the one or more second sets of collocated motion vector data into the first projected motion field associated with the first buffer. Based on projecting the one or more first sets and the one or more second sets of collocated motion vector data, the apparatus may decode the first block of video data using the first projected motion field associated with the first buffer.
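A simplified sketch of projecting one collocated motion vector into a projected motion field: the vector is scaled by a ratio of temporal distances and stored at the current-frame position it passes through. The scaling model, clamping, and tie-breaking rules are assumptions omitted here, and all names are illustrative.

    def project_mv(proj_field, mv, ref_pos, d_collocated, d_current):
        """Project one collocated motion vector (mv) into the projected motion field (sketch)."""
        scale = d_current / d_collocated
        proj_mv = (mv[0] * scale, mv[1] * scale)
        # Position in the current frame that this motion vector trajectory passes through.
        cur_y = int(ref_pos[0] + mv[0] * scale)
        cur_x = int(ref_pos[1] + mv[1] * scale)
        if 0 <= cur_y < len(proj_field) and 0 <= cur_x < len(proj_field[0]):
            proj_field[cur_y][cur_x] = proj_mv
        return proj_field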