Abstract:
An apparatus for decoding frames of a compressed video data stream having at least one frame divided into partitions includes a memory and a processor configured to execute instructions stored in the memory to read partition data information indicative of a partition location for at least one of the partitions, decode a first partition of the partitions that includes a first sequence of blocks, and decode a second partition of the partitions that includes a second sequence of blocks identified from the partition data information using decoded information of the first partition.
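A minimal sketch of this decoding flow, in Python with hypothetical helpers (`decode_frame_partitions`, `decode_partition`) and an illustrative representation of the partition data information as byte offsets; it is not the patented implementation, only one way the described steps could be arranged:

    # Hypothetical sketch: decode a first partition, then use the partition
    # data information to locate a second partition and decode it using
    # decoded information from the first partition as context.

    def decode_frame_partitions(bitstream, partition_offsets):
        """bitstream: bytes of the compressed frame.
        partition_offsets: byte offsets read from the partition data information."""
        first_start, second_start = partition_offsets[0], partition_offsets[1]
        # Decode the first partition's sequence of blocks.
        first_blocks = decode_partition(bitstream[first_start:second_start], context=None)
        # Decode the second partition, located via the partition data information,
        # reusing decoded information (e.g. modes, motion vectors) from the first.
        second_blocks = decode_partition(bitstream[second_start:], context=first_blocks)
        return first_blocks + second_blocks

    def decode_partition(data, context):
        """Placeholder: entropy-decode a sequence of blocks from one partition."""
        blocks = []
        # ... per-block entropy decoding would go here, optionally conditioned
        # on `context` from a previously decoded partition ...
        return blocks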
Abstract:
Adaptive directional loop filtering can reduce the number of blocking artifacts produced by coding a non-perpendicular picture edge in a frame of a video sequence. A directional filter is selected from a set of directional filters based on one of an orientation of the non-perpendicular picture edge or filter data included as part of an encoded video sequence in association with the frame. The selection can include selecting a directional filter based on a directional intra prediction mode used for encoding a block of the frame, a filter angle most closely matching an angle explicitly signaled as part of the video sequence, the incremental reduction of the number of blocking artifacts, a threshold value for blocking artifacts, or a frequency of filter use. Each directional filter of the set of directional filters can have a filter angle between 0 and 180 degrees, exclusive.
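A small sketch of the closest-angle selection criterion, with an illustrative (assumed, not specified) set of filter angles strictly between 0 and 180 degrees:

    # Hypothetical sketch: pick the directional filter whose angle most closely
    # matches either an angle inferred from a directional intra prediction mode
    # or an angle explicitly signaled in the bitstream.

    FILTER_ANGLES = [22.5, 45.0, 67.5, 90.0, 112.5, 135.0, 157.5]  # illustrative set

    def select_directional_filter(edge_angle_degrees):
        """Return the filter angle closest to the signaled or inferred edge angle."""
        return min(FILTER_ANGLES, key=lambda a: abs(a - edge_angle_degrees))

    # Example: an edge at roughly 60 degrees selects the 67.5-degree filter.
    print(select_directional_filter(60.0))  # -> 67.5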
Abstract:
Encoding or decoding blocks of video frames using multiple reference frames with adaptive temporal filtering can include generating one or more candidate reference frames by applying temporal filtering to one or more frames of a video sequence according to relationships between respective ones of the one or more frames and a current frame of the video sequence. A reference frame to use for predicting the current frame can be selected from the one or more candidate reference frames, and a prediction block can be generated using the selected reference frame. During an encoding operation, the prediction block can be used to encode a block of a current frame of the video sequence. During a decoding operation, the prediction block can be used to decode a block of a current frame of the video sequence.
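A minimal sketch of generating one candidate reference frame by temporal filtering, assuming (as an illustration only) that the relationship to the current frame is its temporal distance and that frames are NumPy arrays:

    import numpy as np

    # Hypothetical sketch: temporally filter a set of frames into a candidate
    # reference frame, weighting each frame by its distance to the current frame.

    def temporally_filtered_reference(frames, indices, current_index):
        """frames: dict mapping frame index -> 2D numpy array of pixel values.
        indices: frame indices in the filter set.
        current_index: index of the current frame being predicted."""
        weights = np.array([1.0 / (1 + abs(i - current_index)) for i in indices])
        weights /= weights.sum()
        stacked = np.stack([frames[i].astype(np.float64) for i in indices])
        # The weighted average across the filter set yields one candidate reference.
        return np.tensordot(weights, stacked, axes=1)

    # Several candidates could be produced from different filter sets, and the
    # codec would select the one that best predicts the current frame.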
Abstract:
Decoding an encoded video stream may include generating, by a processor in response to instructions stored on a non-transitory computer readable medium, a decoded video for presentation to a user, and outputting the decoded video. Generating the decoded video may include receiving an encoded video stream, generating a decoded constructed reference frame by decoding an encoded constructed reference frame from the encoded video stream, generating a decoded current frame by decoding an encoded current frame from the encoded video stream using the decoded constructed reference frame as a reference frame, and including the decoded current frame in the decoded video such that the decoded constructed reference frame is omitted from the decoded video.
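A minimal sketch of a decoding loop in which the constructed reference frame is decoded and retained for prediction but omitted from the output; the frame-record field `is_constructed_reference` and the `decode_frame` callable are assumptions for illustration:

    # Hypothetical sketch: decode each frame; store a constructed reference frame
    # for use in predicting later frames, but do not include it in the output video.

    def decode_stream(encoded_frames, decode_frame):
        """encoded_frames: iterable of encoded frame records (dict-like).
        decode_frame: callable(encoded_frame, reference) -> decoded frame."""
        constructed_reference = None
        output_video = []
        for encoded in encoded_frames:
            decoded = decode_frame(encoded, constructed_reference)
            if encoded.get("is_constructed_reference"):
                # Kept as a reference for prediction only; never presented to the user.
                constructed_reference = decoded
            else:
                output_video.append(decoded)
        return output_video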
Abstract:
A prediction block is determined for a current block of a current frame of a video stream using a template having pixel locations that conform to a subset of the pixel locations of the current block. A first portion of the prediction block having the same pattern of pixel locations as the template is populated by inter-predicted pixel values, and the remaining portion of the prediction block is populated by intra-predicted pixel values. The intra-predicted pixel values may be determined using inter-predicted pixel values of the first portion, pixel values of pixels adjacent to the current block, or both.
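A small sketch of combining the two portions of the prediction block, assuming (for illustration) that the template is represented as a boolean mask and that inter- and intra-predicted values are already available as arrays:

    import numpy as np

    # Hypothetical sketch: pixel locations covered by the template take
    # inter-predicted values; the remaining locations take intra-predicted
    # values (which could themselves be derived from the inter-predicted
    # portion and from pixels adjacent to the current block).

    def combined_prediction(inter_pred, intra_pred, template_mask):
        """inter_pred, intra_pred: 2D arrays of the block size.
        template_mask: boolean 2D array, True where the template covers the block."""
        return np.where(template_mask, inter_pred, intra_pred)

    # Example: the template covers the left half of a 4x4 block.
    mask = np.zeros((4, 4), dtype=bool)
    mask[:, :2] = True
    block = combined_prediction(np.full((4, 4), 100), np.full((4, 4), 50), mask)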
Abstract:
Implementations of the teachings herein include coding video data with an alternate reference frame generated using a temporal filter. The alternate reference frame is generated by determining a first weighting factor, for each corresponding block of a respective frame of a filter set, that represents a temporal correlation of the block with the corresponding block, determining a second weighting factor, for each pixel of each corresponding block of the respective frame of the filter set, that represents a temporal correlation of the pixel to a spatially-correspondent pixel in the block, determining a filter weight for each pixel in the block and for each spatially-correspondent pixel in each corresponding block based on the first weighting factor and the second weighting factor, and generating a weighted average pixel value for each pixel position in the block to form a block of the alternate reference frame based on the filter weights.
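A minimal sketch of this two-level weighting, with illustrative (assumed) correlation measures based on squared differences; the exact weighting functions are not specified by the abstract:

    import numpy as np

    # Hypothetical sketch: the per-pixel filter weight combines a block-level
    # weighting factor (temporal correlation of the whole corresponding block)
    # with a pixel-level weighting factor (temporal correlation of the
    # individual pixel), and the alternate reference block is the weighted
    # average over the filter set.

    def filter_block(current_block, corresponding_blocks):
        """current_block: 2D array; corresponding_blocks: list of 2D arrays,
        one motion-aligned block per frame in the filter set."""
        acc = current_block.astype(np.float64).copy()
        weight_sum = np.ones_like(acc)  # current pixel contributes with weight 1
        for corr in corresponding_blocks:
            diff = corr.astype(np.float64) - current_block
            block_weight = 1.0 / (1.0 + np.mean(diff ** 2))   # first weighting factor
            pixel_weight = 1.0 / (1.0 + diff ** 2)             # second weighting factor
            filter_weight = block_weight * pixel_weight        # combined filter weight
            acc += filter_weight * corr
            weight_sum += filter_weight
        # Weighted average pixel values form one block of the alternate reference frame.
        return acc / weight_sum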
Abstract:
Video data streams can be encoded and decoded using inter or intra prediction. The blocks of a frame can be processed based on depth, from the lowest-level sub-blocks to the highest-level large blocks, and divided into groups: blocks to be inter predicted, blocks having some sub-blocks to be inter predicted and other sub-blocks to be intra predicted, and blocks to be intra predicted. The blocks to be inter predicted are encoded first, the blocks having both inter-predicted and intra-predicted sub-blocks are encoded second, and the blocks to be intra predicted are encoded last. Because data from the inter-predicted blocks is already available, more pixel data can be used for intra prediction of some blocks, improving intra prediction performance relative to processing the blocks in scan order.
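A minimal sketch of this group ordering, with an assumed block representation (a `mode` field of `inter`, `mixed`, or `intra`) and a hypothetical `encode_block` callable:

    # Hypothetical sketch: encode blocks group by group, so that reconstructed
    # pixels from inter-coded blocks exist before intra-coded blocks are processed.

    def encode_in_groups(blocks, encode_block):
        """blocks: list of dicts with a 'mode' key of 'inter', 'mixed', or 'intra';
        'mixed' marks blocks whose sub-blocks use both inter and intra prediction."""
        for group in ('inter', 'mixed', 'intra'):
            for block in (b for b in blocks if b['mode'] == group):
                # Intra-coded blocks are handled last, so they can reference
                # reconstructed neighboring inter-coded pixels that would not
                # yet exist under plain raster-scan processing.
                encode_block(block)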