Abstract:
A video coder may determine reference samples based on a location of a current block of a current picture of 360-degree video data and a packing arrangement that defines an arrangement of a plurality of regions in the current picture. The current picture is in a projected domain, and each respective region of the plurality of regions is a respective face defined by a projection of the 360-degree video data. The regions are arranged in the current picture according to the packing arrangement. Based on the location of the current block being at a border of a first region of the plurality of regions that is adjacent to a second region of the plurality of regions, and there being a discontinuity at the border due to the packing arrangement, the reference samples are samples of the current picture that spatially neighbor the current block in a spherical domain but not in the projected domain.
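A minimal sketch of the decision described above, assuming a 3x2 cube-map packing with 256x256 faces; the DISCONTINUOUS_BELOW table and the spherical_neighbor_position() stub are illustrative assumptions, not details from the abstract.

```python
FACE_W = FACE_H = 256

# Hypothetical flags: True means the packing places a face below this face
# that is NOT its neighbor on the sphere, so the border is discontinuous.
DISCONTINUOUS_BELOW = {0: True, 1: True, 2: True, 3: False, 4: False, 5: False}

def face_index(x, y):
    """Cube-face index of a sample position in the 3x2 packed picture."""
    return (y // FACE_H) * 3 + (x // FACE_W)

def spherical_neighbor_position(face, x, y):
    """Stub: map (x, y) to the sample that neighbors it on the sphere.
    The real mapping depends on the projection and packing arrangement;
    a fixed re-indexing stands in for it here."""
    return ((x + FACE_W) % (3 * FACE_W), y % FACE_H)

def below_reference_position(block_x, block_y, block_h):
    """Position of the reference sample directly below a block.  At a
    discontinuous face border, the spherical-domain neighbor is used
    instead of the projected-domain neighbor."""
    below_y = block_y + block_h
    face = face_index(block_x, block_y)
    at_border = (below_y % FACE_H) == 0
    if at_border and DISCONTINUOUS_BELOW[face]:
        return spherical_neighbor_position(face, block_x, below_y)
    return (block_x, below_y)   # ordinary projected-domain neighbor

# Example: a 16-sample-tall block touching the bottom border of face 0.
print(below_reference_position(block_x=100, block_y=240, block_h=16))
```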
Abstract:
A device for decoding 360-degree video data is configured to store a decoded picture of 360-degree video as a reference frame; derive an extended reference frame from the stored reference frame based on a padding amount by extending a first cube face in the reference frame; inter-predict a block of a current picture from a block of the extended reference frame by determining a motion vector for the block of the current picture; in response to a determination that the motion vector points to a cube face in the extended reference frame other than the first cube face, clip the motion vector such that it points to a location in the first cube face; and locate a prediction block for the block of the current picture using the clipped motion vector.
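A sketch of the clipping step, assuming square cube faces and a single padding amount applied on every side of the extended face; the function name clip_mv_to_face and the face layout are illustrative.

```python
def clip(v, lo, hi):
    return max(lo, min(hi, v))

def clip_mv_to_face(block_x, block_y, block_w, block_h, mv_x, mv_y,
                    face_x0, face_y0, face_size, pad):
    """Clip a motion vector so the prediction block stays inside the first
    cube face as extended by `pad` samples on each side (the extended
    reference frame), rather than crossing into another face."""
    # Allowed range for the top-left corner of the prediction block.
    min_x, max_x = face_x0 - pad, face_x0 + face_size + pad - block_w
    min_y, max_y = face_y0 - pad, face_y0 + face_size + pad - block_h
    ref_x = clip(block_x + mv_x, min_x, max_x)
    ref_y = clip(block_y + mv_y, min_y, max_y)
    return ref_x - block_x, ref_y - block_y   # clipped motion vector

# Example: a block in face 0 (origin (0, 0), 256x256, padded by 8) whose
# motion vector points 300 samples to the right would land in another face,
# so it is clipped back into the extended first face.
print(clip_mv_to_face(100, 100, 16, 16, 300, 0, 0, 0, 256, 8))  # -> (148, 0)
```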
Abstract:
A video coder is configured to determine a split type of a block of video data from an intra prediction mode associated with a neighboring block. The video coder may determine an intra prediction mode associated with a neighboring block of a current block of video data, determine a split type of the current block based on the intra prediction mode associated with the neighboring block, split the current block into a plurality of sub-partitions based on the determined split type, and code the plurality of sub-partitions.
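One plausible realization of the mapping, assuming VVC-style intra mode indices (mode 18 horizontal, mode 50 vertical); the specific rule that near-horizontal neighbor modes select a horizontal split is an illustrative assumption, not the abstract's exact rule.

```python
HOR_SPLIT, VER_SPLIT = "horizontal", "vertical"
PLANAR, DC = 0, 1
HOR_IDX, VER_IDX = 18, 50          # VVC-style angular anchors (assumption)

def split_type_from_neighbor(neighbor_mode):
    """Derive the current block's split type from the neighboring block's
    intra prediction mode.  Angular modes closer to horizontal pick a
    horizontal split, those closer to vertical pick a vertical split, and
    non-angular modes fall back to a horizontal split."""
    if neighbor_mode in (PLANAR, DC):
        return HOR_SPLIT
    if abs(neighbor_mode - HOR_IDX) <= abs(neighbor_mode - VER_IDX):
        return HOR_SPLIT
    return VER_SPLIT

def split_block(x, y, w, h, split, n_parts=4):
    """Split the current block into n_parts equal sub-partitions."""
    if split == HOR_SPLIT:
        ph = h // n_parts
        return [(x, y + i * ph, w, ph) for i in range(n_parts)]
    pw = w // n_parts
    return [(x + i * pw, y, pw, h) for i in range(n_parts)]

# Example: neighbor coded with mode 60 (near vertical) -> vertical split.
print(split_type_from_neighbor(60))            # -> 'vertical'
print(split_block(0, 0, 16, 16, VER_SPLIT))    # four 4x16 sub-partitions
```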
Abstract:
Techniques are described for using Position Dependent Intra Prediction Combination (PDPC) with multiple reference lines. For example, a video coder (e.g., an encoder and/or decoder) can predict an initial prediction sample value for a sample of a current block using an intra-prediction mode. The initial prediction sample value can be predicted from a first neighboring block and/or a second neighboring block of the current block. One or more reference sample values can be determined from at least one line of multiple lines of reference samples from the first neighboring block and/or the second neighboring block. At least one of the lines from the multiple lines used for determining the reference sample value(s) is not adjacent to the current block. A final prediction sample value can be determined for the sample of the current block, such as by modifying the initial prediction sample value using the one or more reference sample values.
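A sketch of the final-sample computation, assuming the VVC-style planar/DC PDPC weighting; the ref_line parameter selects which of the multiple reference lines supplies the left and top reference samples, and all function and argument names here are illustrative.

```python
def pdpc_sample(pred, left_refs, top_refs, x, y, width, height, ref_line=1):
    """Modify an initial prediction sample with PDPC-style weighting.

    pred       -- initial prediction value for sample (x, y)
    left_refs  -- left reference lines, indexed left_refs[line][row]
    top_refs   -- top reference lines, indexed top_refs[line][col]
    ref_line   -- reference line index; 0 is adjacent to the block, values
                  greater than 0 select a non-adjacent line (multi-line case)
    """
    log2_w = width.bit_length() - 1
    log2_h = height.bit_length() - 1
    scale = (log2_w + log2_h - 2) >> 1
    wT = 32 >> min(31, (y << 1) >> scale)     # weight of the top reference
    wL = 32 >> min(31, (x << 1) >> scale)     # weight of the left reference
    L = left_refs[ref_line][y]
    T = top_refs[ref_line][x]
    # Weighted combination of the initial prediction and the reference samples.
    return (wL * L + wT * T + (64 - wL - wT) * pred + 32) >> 6

# Example: 8x8 block, reference samples taken from the second (non-adjacent) line.
left = [[100] * 8, [110] * 8]   # two left reference lines
top = [[120] * 8, [130] * 8]    # two top reference lines
print(pdpc_sample(128, left, top, x=0, y=0, width=8, height=8, ref_line=1))
```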
Abstract:
Techniques and systems are described for mapping 360-degree video data to a truncated square pyramid shape. A 360-degree video frame can include 360-degrees' worth of pixel data, and thus be spherical in shape. By mapping the spherical video data to the planes provided by a truncated square pyramid, the total size of the 360-degree video frame can be reduced. The planes of the truncated square pyramid can be oriented such that the base of the truncated square pyramid represents a front view and the top of the truncated square pyramid represents a back view. In this way, the front view can be captured at full resolution, the back view can be captured at reduced resolution, and the left, right, up, and bottom views can be captured at decreasing resolutions. Frame packing structures can also be defined for 360-degree video data that has been mapped to a truncated square pyramid shape.
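A rough sketch, under assumed proportions, of how per-view resolutions might be allocated in such a packing: the front view keeps full resolution, the back view is scaled down, and the four side planes taper between them. The back_scale factor and the plane descriptions are illustrative, not taken from the abstract.

```python
from pprint import pprint

def tsp_plane_sizes(base_size, back_scale=0.5):
    """Illustrative resolution allocation for a truncated-square-pyramid (TSP)
    packing: the base (front view) keeps full resolution, the truncated top
    (back view) is scaled by back_scale, and the four side planes (left,
    right, up, bottom) are trapezoids tapering from full resolution at the
    edge shared with the front view down to the back view's resolution."""
    front = {"width": base_size, "height": base_size}
    back_size = int(base_size * back_scale)
    back = {"width": back_size, "height": back_size}
    side = {"long_edge": base_size,        # edge adjoining the front view
            "short_edge": back_size,       # edge adjoining the back view
            "depth": (base_size - back_size) // 2}
    return {"front": front, "back": back,
            "left": dict(side), "right": dict(side),
            "up": dict(side), "bottom": dict(side)}

# Example: a 1024x1024 front view with the back view at half resolution.
pprint(tsp_plane_sizes(1024))
```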
Abstract:
Techniques described herein are related to harmonizing the signaling of coding modes and filtering in video coding. In one example, a method of decoding video data is provided that includes decoding a first syntax element to determine whether PCM coding mode is used for one or more video blocks, wherein the PCM coding mode refers to a mode that codes pixel values as PCM samples. The method further includes decoding a second syntax element to determine whether in-loop filtering is applied to the one or more video blocks. Responsive to the first syntax element indicating that the PCM coding mode is used, the method further includes applying in-loop filtering to the one or more video blocks based at least in part on the second syntax element and decoding the one or more video blocks based at least in part on the first and second syntax elements.
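A small sketch of the decoder-side decision, with hypothetical syntax element names and a toy flag reader standing in for the entropy decoder; only the ordering of the two flags and the resulting filtering decision are taken from the abstract.

```python
class FlagReader:
    """Toy flag reader standing in for a real entropy decoder."""
    def __init__(self, flags):
        self._flags = iter(flags)
    def read_flag(self):
        return next(self._flags)

def decode_block_filtering(reader):
    """Decode the two syntax elements described above.  The first flag signals
    whether PCM coding mode is used; the second signals whether in-loop
    filtering is applied to the block(s)."""
    pcm_flag = reader.read_flag()            # first syntax element
    filtering_applied = reader.read_flag()   # second syntax element
    if pcm_flag:
        apply_filter = bool(filtering_applied)   # PCM block: honor the second flag
    else:
        apply_filter = True                      # non-PCM block: normal in-loop filtering
    return bool(pcm_flag), apply_filter

# Example: a PCM-coded block with in-loop filtering signaled as applied.
print(decode_block_filtering(FlagReader([1, 1])))   # -> (True, True)
```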
Abstract:
In an example, a method of processing data includes determining, by a receiver device, an allowable excess delay parameter based on a difference between a time at which received data is received by the receiver device and a time at which the received data is scheduled to be played out, where the allowable excess delay parameter indicates an amount of delay that is supportable by a channel between a sender device and the receiver device. The method also includes determining, by the receiver device, a sender bit rate increase for increasing a bit rate at which data is to be sent from the sender device to the receiver device based on the determined allowable excess delay parameter, and transmitting an indication of the sender bit rate increase to the sender device.
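A worked sketch of the arithmetic at the receiver; the rule used to convert the allowable excess delay into a bit rate increase is an assumption for illustration, not the abstract's exact formula.

```python
def allowable_excess_delay(arrival_time_s, scheduled_playout_time_s):
    """Allowable excess delay: how much earlier data arrives than it is needed
    for playout.  That slack is extra delay the channel could absorb."""
    return max(0.0, scheduled_playout_time_s - arrival_time_s)

def sender_rate_increase(current_rate_bps, excess_delay_s, window_s=1.0):
    """Illustrative rule (assumption): allow the sender to add, per window,
    as many extra bits as could be buffered during the allowable excess
    delay without missing the playout deadline."""
    return current_rate_bps * excess_delay_s / window_s

# Example: data arrives 40 ms before its playout deadline while receiving at 2 Mbps.
slack = allowable_excess_delay(arrival_time_s=0.100, scheduled_playout_time_s=0.140)
print(slack)                                   # 0.04 s of allowable excess delay
print(sender_rate_increase(2_000_000, slack))  # 80,000 bps suggested increase
```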
Abstract:
In an example, a method of processing data includes transmitting data over a network at a first bit rate, identifying a reduction in a network link rate of the network from a first network link rate to a second network link rate, and in response to identifying the reduction in the network link rate, determining a recovery bit rate at which to transmit the data over the network, where the recovery bit rate is less than the second network link rate. The method also includes determining a buffering duration based on a difference between a time of the identification of the reduction in the network link rate and an estimated actual time of the reduction in the network link rate, and determining a recovery rate duration during which to transmit the data at the recovery bit rate based on the recovery bit rate and the buffering duration.
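A sketch of the recovery calculation; the backlog-draining model (data queued while the sender was unaware of the link-rate drop is drained by sending below the new link rate) is stated as an assumption.

```python
def recovery_plan(old_send_rate_bps, new_link_rate_bps, recovery_rate_bps,
                  t_identified_s, t_actual_est_s):
    """While the sender was unaware of the link-rate drop it kept sending at
    old_send_rate_bps over a link carrying only new_link_rate_bps, so a
    backlog built up over the buffering duration.  Sending at a recovery
    rate below the new link rate drains that backlog; the recovery rate
    duration is how long that takes."""
    buffering_duration = t_identified_s - t_actual_est_s      # time spent unaware
    backlog_bits = max(0.0, (old_send_rate_bps - new_link_rate_bps) * buffering_duration)
    drain_rate = new_link_rate_bps - recovery_rate_bps        # spare link capacity
    recovery_duration = backlog_bits / drain_rate if drain_rate > 0 else float("inf")
    return buffering_duration, recovery_duration

# Example: the link drops from 6 Mbps to 4 Mbps; the drop is identified 0.5 s
# after it is estimated to have occurred; the recovery rate is 3 Mbps (< 4 Mbps).
print(recovery_plan(6e6, 4e6, 3e6, t_identified_s=10.5, t_actual_est_s=10.0))
# -> (0.5, 1.0): 1 Mbit of backlog drained at 1 Mbps of spare capacity in 1 s
```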
Abstract:
This disclosure describes techniques for signaling deblocking filter parameters for a current slice of video data with reduced bitstream overhead. Deblocking filter parameters may be coded in one or more of a picture layer parameter set and a slice header. The techniques reduce a number of bits used to signal the deblocking filter parameters by coding a first syntax element that indicates whether deblocking filter parameters are present in both the picture layer parameter set and the slice header, and only coding a second syntax element in the slice header when both sets of deblocking filter parameters are present. Coding the second syntax element is eliminated when deblocking filter parameters are present in only one of the picture layer parameter set or the slice header. The second syntax element indicates which set of deblocking filter parameters to use to define a deblocking filter applied to a current slice.
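A minimal sketch of the parsing rule, with hypothetical flag and function names; read_flag is a callable standing in for the entropy decoder, and the bit savings come from eliding the second syntax element whenever only one set of parameters exists.

```python
def parse_deblocking_source(params_in_both_flag, params_in_pps, read_flag):
    """Decide which set of deblocking filter parameters applies to the slice.
    params_in_both_flag models the first syntax element (parameters present
    in both the picture layer parameter set and the slice header); the second
    syntax element is coded only when that flag indicates both sets exist."""
    if params_in_both_flag:
        use_slice = bool(read_flag())        # second syntax element is coded
    else:
        use_slice = not params_in_pps        # only one set exists: flag not coded
    return "slice header" if use_slice else "picture layer parameter set"

# Both sets present: the flag must be read.  Only the PPS set present: flag skipped.
print(parse_deblocking_source(True, True, read_flag=lambda: 1))   # -> 'slice header'
print(parse_deblocking_source(False, True, read_flag=lambda: 0))  # -> 'picture layer parameter set'
```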
Abstract:
In one example, an apparatus for processing video data comprises a video coder configured to, for each of one or more chrominance components, calculate a chrominance quantization parameter for a common edge between a first block of video data and a second block of video data based on a first luminance quantization parameter for the first block, a second luminance quantization parameter for the second block, and a chrominance quantization parameter offset value for the chrominance component. The video coder is further configured to determine a strength for a deblocking filter for the common edge based on the chrominance quantization parameter for the chrominance component, and apply the deblocking filter according to the determined strength to deblock the common edge.
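A sketch of the edge-QP computation: the averaging of the two blocks' luminance quantization parameters plus the chrominance offset follows the description above, while the luma-to-chroma QP mapping and the strength lookup are left as placeholders (the real tables are codec-specific).

```python
def chroma_qp_for_edge(qp_luma_p, qp_luma_q, chroma_qp_offset,
                       luma_to_chroma_qp=lambda qp: min(qp, 51)):
    """Chrominance quantization parameter for a common edge between blocks P
    and Q, computed from the two blocks' luminance QPs and the per-component
    chrominance QP offset.  The luma-to-chroma mapping here (identity with a
    cap) is an assumption standing in for the codec's mapping table."""
    avg_luma_qp = (qp_luma_p + qp_luma_q + 1) >> 1     # average of the two luma QPs
    return luma_to_chroma_qp(avg_luma_qp + chroma_qp_offset)

def tc_index(chroma_qp, boundary_strength=2):
    """Illustrative strength lookup: a larger chroma QP yields a larger
    clipping-threshold index, i.e., stronger deblocking.  The exact table is
    codec-specific and omitted here."""
    return min(53, chroma_qp + 2 * (boundary_strength - 1))

# Example: blocks P and Q with luma QPs 30 and 34 and a Cb offset of +2.
qp_c = chroma_qp_for_edge(30, 34, 2)
print(qp_c, tc_index(qp_c))   # -> 34 36
```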