Abstract:
In an example, a method of coding video data includes determining a partition mode for coding a block of video data, where the partition mode indicates a division of the block of video data for predictive coding. The method also includes determining whether to code a weighting factor for an inter-view residual prediction process based on the partition mode, where, when the weighting factor is not coded, the inter-view residual prediction process is not applied to predict a residual for the block. The method also includes coding the block of video data with the determined partition mode.
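The gating described above can be sketched as follows. This is a minimal illustration, not the codec's actual syntax: the partition-mode names and the rule that the weighting factor is signaled only for the whole-block partition are assumptions.

```python
# Illustrative partition modes (names and values are assumptions, not taken
# from any particular codec specification).
PART_2Nx2N = 0  # single whole-block partition
PART_2NxN = 1   # two horizontal partitions
PART_Nx2N = 2   # two vertical partitions
PART_NxN = 3    # four square partitions

def weighting_factor_is_coded(partition_mode):
    # Assume the weighting factor is signaled only for 2Nx2N blocks; for
    # finer partitions it is skipped entirely.
    return partition_mode == PART_2Nx2N

def parse_weighting_factor(partition_mode, read_value):
    # read_value stands in for parsing the syntax element from the bitstream.
    if weighting_factor_is_coded(partition_mode):
        return read_value()
    # Not coded: a weighting factor of 0 means inter-view residual
    # prediction is not applied for this block.
    return 0
```

A weighting factor of 0 then disables the inter-view residual prediction process downstream, matching the behavior the abstract describes for the not-coded case.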
Abstract:
A method of coding video data includes deriving prediction weights for illumination compensation of luma samples of a video block partition once for the video block partition, such that the video block partition has a common set of prediction weights for performing illumination compensation of the luma samples regardless of a transform size for the video block partition; calculating a predicted block for the video block partition using illumination compensation with the prediction weights; and coding the video block partition using the predicted block.
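A once-per-partition weight derivation can be sketched as below. The least-squares fit of a scale and offset from neighboring samples is a common illumination-compensation formulation and is used here as an illustrative assumption; the point is that the weights are derived once and reused for the whole partition.

```python
def derive_ic_weights(cur_neighbors, ref_neighbors):
    # Derive one (scale, offset) pair per partition from neighboring
    # reconstructed samples via a least-squares fit (illustrative rule).
    n = len(cur_neighbors)
    sum_c = sum(cur_neighbors)
    sum_r = sum(ref_neighbors)
    sum_rr = sum(r * r for r in ref_neighbors)
    sum_rc = sum(r * c for r, c in zip(ref_neighbors, cur_neighbors))
    denom = n * sum_rr - sum_r * sum_r
    if denom == 0:
        return 1.0, 0.0  # degenerate neighborhood: identity weights
    w = (n * sum_rc - sum_r * sum_c) / denom
    o = (sum_c - w * sum_r) / n
    return w, o

def predict_with_ic(ref_block, w, o):
    # The same (w, o) applies to every sample of the partition,
    # regardless of how the partition is split into transforms.
    return [[w * s + o for s in row] for row in ref_block]
```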
Abstract:
For a depth block in a depth view component, a video coder derives a motion information candidate comprising motion information of a corresponding texture block in a decoded texture view component, adds the motion information candidate to a candidate list for use in a motion vector prediction operation, and codes the depth block based on a candidate in the candidate list.
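The candidate-list construction can be sketched as follows. Placing the texture-derived candidate first and pruning duplicates are simplifying assumptions; the abstract specifies only that the candidate is added to the list.

```python
def build_depth_candidate_list(texture_motion, spatial_candidates,
                               max_candidates=6):
    # texture_motion: motion info (e.g., a motion-vector tuple) taken from
    # the co-located block in the decoded texture view component.
    # Assumed ordering: texture candidate first, then spatial candidates,
    # with duplicates pruned (illustrative merge-list construction).
    candidates = [texture_motion]
    for cand in spatial_candidates:
        if len(candidates) == max_candidates:
            break
        if cand not in candidates:
            candidates.append(cand)
    return candidates
```

The depth block is then coded using whichever list entry the coded candidate index selects.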
Abstract:
A video encoder generates a bitstream that includes a reference picture list modification (RPLM) command. The RPLM command belongs to a type of RPLM commands for inserting short-term reference pictures into reference picture lists, and instructs a video decoder to insert a synthetic reference picture into a reference picture list. The video decoder decodes, based at least in part on syntax elements parsed from the bitstream, one or more view components and generates, based at least in part on the one or more view components, the synthetic reference picture. The video decoder modifies, in response to the RPLM command, the reference picture list to include the synthetic reference picture. The video decoder may use one or more pictures in the reference picture list as reference pictures to perform inter prediction on one or more video blocks of a picture.
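A simplified model of insertion-type list modification is sketched below. The running-insertion-point behavior mirrors how insertion commands typically reorder a list, but the exact semantics here are an assumption; the synthesized picture is stored and referenced like any other picture.

```python
def apply_rplm_insertions(ref_list, commands, picture_store):
    # commands: sequence of picture ids, one per insertion-type RPLM command.
    # A synthetic reference picture generated from decoded view components
    # sits in picture_store like any other picture (illustrative only).
    out = list(ref_list)
    insert_pos = 0  # each command inserts at the next position
    for pic_id in commands:
        pic = picture_store[pic_id]
        if pic in out:
            out.remove(pic)  # avoid duplicating an already-listed picture
        out.insert(insert_pos, pic)
        insert_pos += 1
    return out
```

After modification, the decoder performs inter prediction using entries of the modified list, so the synthesized picture can serve as a prediction reference.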
Abstract:
An example device for filtering a decoded block of video data includes one or more processing units configured to construct filters for classes of blocks of a current picture of video data. To construct filters for each of the classes, the processing units are configured to determine a value of a flag that indicates whether a fixed filter is used to predict a set of filter coefficients of the class, and in response to the fixed filter being used to predict the set of filter coefficients, determine an index value into a set of fixed filters and predict the set of filter coefficients of the class using a fixed filter of the set of fixed filters identified by the index value.
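The flag-gated coefficient prediction can be sketched as follows. Treating the coded values as residuals added to the indexed fixed filter is a simplifying assumption consistent with the prediction described above.

```python
def reconstruct_filter_coefficients(use_fixed_flag, fixed_index,
                                    fixed_filters, coded_values):
    # use_fixed_flag: the per-class flag indicating whether a fixed filter
    # predicts this class's coefficients.
    # fixed_filters: the set of fixed filters; fixed_index selects one.
    # coded_values: values parsed from the bitstream, interpreted as
    # residuals when prediction is used (assumed signaling model).
    if use_fixed_flag:
        predictor = fixed_filters[fixed_index]
        return [p + r for p, r in zip(predictor, coded_values)]
    # No prediction: the coded values are the coefficients themselves.
    return list(coded_values)
```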
Abstract:
An apparatus for wireless communication is provided. The apparatus may be a receiver device that includes an error correction decoder, such as a low-density parity check (LDPC) decoder. The apparatus may achieve power savings and/or operation cycle savings by disabling the error correction decoder in scenarios where bits of a codeword in a signal transmission are received without errors. The apparatus obtains a first set of bits of a codeword, wherein the codeword includes the first set of bits and a second set of bits, and wherein the second set of bits is punctured. The apparatus recovers the second set of bits based on at least the first set of bits and determines whether to operate the error correction decoder based on a result of an error detection operation performed on the codeword using the first set of bits and the second set of bits.
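The decision flow can be sketched as below. The XOR-based recovery rule is a stand-in: in a real LDPC code the punctured bits are recovered from the code's parity structure, and the error check would be a CRC rather than the toy parity predicate used here.

```python
def recover_punctured_bits(first_bits, num_punctured):
    # Hypothetical recovery rule: each punctured bit equals the XOR of the
    # received bits. The real relation depends on the LDPC parity-check
    # structure; this merely stands in for it.
    parity = 0
    for b in first_bits:
        parity ^= b
    return [parity] * num_punctured

def decoder_should_run(first_bits, recovered_bits, error_check):
    # error_check returns True when the reassembled codeword passes error
    # detection. Skip the (expensive) LDPC decoder on a pass, saving power
    # and operation cycles; run it only on a failure.
    return not error_check(first_bits + recovered_bits)
```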
Abstract:
An example device for filtering a decoded block of video data includes one or more processors implemented in circuitry and configured to decode a current block of a current picture of the video data, select a filter (such as an adaptive loop filter) to be used to filter pixels of the current block, calculate a gradient of at least one pixel for the current block, select a geometric transform to be performed on one of a filter support region or coefficients of the selected filter, wherein the one or more processors are configured to select the geometric transform that corresponds to an orientation of the gradient of the at least one pixel, perform the geometric transform on either the filter support region or the coefficients of the selected filter, and filter the at least one pixel of the current block using the selected filter after performing the geometric transform.
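The orientation-to-transform mapping can be sketched as below. The four transforms and the comparison rule are simplified assumptions; the key idea from the abstract survives: the gradient orientation selects a geometric transform, and transforming the coefficients is equivalent to transforming the filter support region.

```python
def select_geometric_transform(g_h, g_v, g_d1, g_d2):
    # g_h, g_v: horizontal/vertical gradient magnitudes; g_d1, g_d2: the two
    # diagonal gradient magnitudes. The exact comparisons are an assumption.
    if g_d2 < g_d1:
        return "none" if g_h < g_v else "diagonal"
    return "vertical_flip" if g_h < g_v else "rotation"

def transform_coefficients(coeffs, transform):
    # coeffs: square 2-D list of filter taps. Applying the transform to the
    # coefficients mirrors applying it to the filter support region.
    if transform == "none":
        return [row[:] for row in coeffs]
    if transform == "diagonal":
        return [list(col) for col in zip(*coeffs)]        # transpose
    if transform == "vertical_flip":
        return [row[:] for row in reversed(coeffs)]
    if transform == "rotation":
        return [list(col)[::-1] for col in zip(*coeffs)]  # rotate 90 degrees
    raise ValueError(transform)
```

One filter shape can thereby serve several edge orientations, which is the motivation for selecting the transform from the pixel gradient.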
Abstract:
A device for decoding video data includes a memory configured to store the video data; and one or more processors configured to decode syntax information that indicates a selected intra prediction mode for a block of the video data from among a plurality of intra prediction modes. The one or more processors apply an N-tap intra interpolation filter to neighboring reconstructed samples of the block of video data according to the selected intra prediction mode, wherein N is greater than 2. The one or more processors reconstruct the block of video data based on the filtered neighboring reconstructed samples according to the selected intra prediction mode.
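An N-tap filtering step (here N = 4, versus the 2-tap bilinear case) can be sketched as below. The specific coefficients, the fixed-point normalization to 256, and the edge clamping are illustrative assumptions.

```python
def ntap_intra_interpolate(ref, i, taps):
    # ref: neighboring reconstructed reference samples; i: sample position
    # the selected intra prediction mode points at; taps: N coefficients
    # (N > 2) assumed to sum to 256 (8-bit fixed point, illustrative).
    acc = 0
    half = len(taps) // 2
    for k, c in enumerate(taps):
        idx = min(max(i + k - (half - 1), 0), len(ref) - 1)  # clamp at edges
        acc += c * ref[idx]
    return (acc + 128) >> 8  # round and renormalize
```

With a flat reference row the filter is a no-op, a quick sanity check that the taps are normalized correctly.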
Abstract:
An apparatus (e.g., receive chain) for wireless communications may perform de-interleaving, de-rate matching, and hybrid automatic repeat request (HARQ) combining in a single step. The apparatus may include a HARQ data pool configured to store HARQ log likelihood ratio (LLR) data from previous transmissions. The apparatus may include a HARQ onload controller configured to load HARQ LLR data from the HARQ data pool into a HARQ buffer. The apparatus may include an LLR buffer configured to store received demodulated, interleaved, and rate matched LLR data. The apparatus may include a plurality of processing engines configured to, starting at different locations of the LLR buffer: receive new input data from the LLR buffer; combine the HARQ LLR data from the HARQ buffer with the new input data to generate de-interleaved, de-rate matched, and HARQ combined data; and write the de-interleaved, de-rate matched, and HARQ combined data into the HARQ buffer.
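The single-step combine can be sketched as below. A precomputed read-index map stands in for the de-interleaving and de-rate-matching address calculation, and the contiguous per-engine spans are an assumed work split; both are illustrative.

```python
def harq_onload_and_combine(llr_buffer, harq_llrs, read_index, num_engines=2):
    # llr_buffer: received (interleaved, rate-matched) LLRs.
    # harq_llrs: LLRs loaded from the HARQ data pool into the HARQ buffer.
    # read_index: maps each output (systematic-order) position to its
    # location in llr_buffer, folding de-interleave + de-rate-match into
    # one lookup (illustrative mapping).
    n = len(read_index)
    combined = list(harq_llrs)
    chunk = (n + num_engines - 1) // num_engines
    for e in range(num_engines):  # each engine starts at a different location
        for out_pos in range(e * chunk, min((e + 1) * chunk, n)):
            combined[out_pos] += llr_buffer[read_index[out_pos]]
    return combined  # written back into the HARQ buffer
```

Because every output position is touched exactly once, the de-interleave, de-rate-match, and combine cost collapses into a single pass over the data.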
Abstract:
In an example, a method of processing video data includes splitting a current block of video data into a plurality of sub-blocks for deriving motion information of the current block, where the motion information indicates motion of the current block relative to reference video data. The method also includes deriving motion information separately for each respective sub-block of the plurality of sub-blocks, the deriving comprising performing a motion search for a first set of reference data that corresponds to a second set of reference data outside of each respective sub-block. The method also includes decoding the plurality of sub-blocks based on the derived motion information and without decoding syntax elements representative of the motion information.
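A per-sub-block decoder-side search can be sketched as below. Representing the search as a SAD comparison between a template of already-reconstructed samples outside the sub-block and candidate reference templates is an illustrative assumption.

```python
def sad(a, b):
    # Sum of absolute differences: the matching cost for the motion search.
    return sum(abs(x - y) for x, y in zip(a, b))

def derive_subblock_motion(template, ref_templates_by_mv):
    # template: the second set of reference data -- reconstructed samples
    # outside the current sub-block.
    # ref_templates_by_mv: for each candidate motion vector, the first set
    # of reference data at the corresponding reference location.
    # The best-matching motion vector is derived at the decoder, so no
    # motion syntax elements need to be decoded.
    best_mv, best_cost = None, float("inf")
    for mv, ref_template in ref_templates_by_mv.items():
        cost = sad(template, ref_template)
        if cost < best_cost:
            best_mv, best_cost = mv, cost
    return best_mv
```

Running this independently per sub-block yields the sub-block-granular motion field the abstract describes, with the bitstream carrying no motion vectors.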