Abstract:
In one implementation, a method codes video pictures, in which each of the video pictures is partitioned into LCUs (largest coding units). The method operates by receiving a current LCU, partitioning the current LCU adaptively into multiple leaf CUs, determining whether a current leaf CU has at least one nonzero quantized transform coefficient according to both Prediction Mode (PredMode) and Coded Block Flag (CBF), and incorporating quantization parameter information for the current leaf CU in a video bitstream if the current leaf CU has at least one nonzero quantized transform coefficient. If the current leaf CU has no nonzero quantized transform coefficient, the method excludes the quantization parameter information for the current leaf CU from the video bitstream.
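The signaling decision described above can be sketched as follows. This is a minimal illustration, not the patented syntax: the flag names, the dictionary layout of a leaf CU, and the list-based bitstream are hypothetical simplifications.

```python
def has_nonzero_coeff(pred_mode, cbf_luma, cbf_cb, cbf_cr):
    # Hypothetical check: a skipped CU carries no residual; otherwise any
    # set coded block flag indicates nonzero quantized coefficients.
    if pred_mode == "SKIP":
        return False
    return bool(cbf_luma or cbf_cb or cbf_cr)

def write_leaf_cu_qp(bitstream, leaf_cu):
    # Signal delta-QP only when the leaf CU carries nonzero coefficients;
    # otherwise no QP information is written for this leaf CU.
    if has_nonzero_coeff(leaf_cu["pred_mode"], leaf_cu["cbf_luma"],
                         leaf_cu["cbf_cb"], leaf_cu["cbf_cr"]):
        bitstream.append(("delta_qp", leaf_cu["delta_qp"]))
```

A decoder would mirror this test to decide whether a delta-QP element is present for the leaf CU.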
Abstract:
A method and apparatus for processing of coded video using in-loop processing are disclosed. Input data to the in-loop processing is received, and the input data corresponds to reconstructed or reconstructed-and-deblocked coding units of the picture. The input data is divided into multiple filter units, and each filter unit includes one or more boundary-aligned reconstructed or reconstructed-and-deblocked coding units. A candidate filter is then selected from a candidate filter set for the in-loop processing. The candidate filter set comprises at least two candidate filters, and said in-loop processing corresponds to adaptive loop filter (ALF), adaptive offset (AO), or adaptive clipping (AC). The in-loop processing is then applied to one of the filter units to generate a processed filter unit by applying the selected candidate filter to all boundary-aligned reconstructed or reconstructed-and-deblocked coding units in said one of the filter units.
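The per-filter-unit selection can be sketched as below, assuming for illustration that a candidate filter is any per-sample callable and that selection minimizes sum of squared error against reference samples (one plausible criterion; the patent does not fix one).

```python
def select_candidate_filter(filter_unit, candidate_filters, reference):
    # Pick the candidate (e.g. an ALF / AO / AC variant) whose output over
    # the filter unit has the smallest squared error vs. the reference.
    def sse(filt):
        return sum((filt(s) - r) ** 2
                   for s, r in zip(filter_unit, reference))
    return min(candidate_filters, key=sse)

def apply_filter_to_unit(filter_unit, filt):
    # Apply the selected filter to every sample of every boundary-aligned
    # coding unit gathered in this filter unit.
    return [filt(s) for s in filter_unit]
```

The key point of the abstract is the grouping: one filter choice covers all boundary-aligned coding units inside a filter unit.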
Abstract:
A method and apparatus for loop processing of reconstructed video in an encoder system are disclosed. The loop processing comprises an in-loop filter and one or more adaptive filters. The filter parameters for the adaptive filter are derived from the pre-in-loop video data so that the adaptive filter processing can be applied to the in-loop processed video data without waiting for completion of the in-loop filter processing for a picture or an image unit. In another embodiment, two adaptive filters derive their respective adaptive filter parameters based on the same pre-in-loop video data. In yet another embodiment, a moving window is used for an image-unit-based coding system incorporating an in-loop filter and one or more adaptive filters. The in-loop filter and the adaptive filter are applied to a moving window of pre-in-loop video data comprising one or more sub-regions from one or more corresponding image units.
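The latency-hiding idea, deriving the adaptive filter's parameters from pre-in-loop data while applying the filter to in-loop processed data, can be sketched with a simple least-squares gain/offset model standing in for the real adaptive filter (an assumed simplification; actual adaptive loop filters use multi-tap kernels).

```python
def derive_filter_params(pre_inloop, original):
    # Fit chroma-agnostic gain/offset (a, b) minimising squared error
    # between pre-in-loop samples and original samples. Because only
    # pre-in-loop data is used, derivation can run before the in-loop
    # filter has finished the picture or image unit.
    n = len(pre_inloop)
    mx = sum(pre_inloop) / n
    my = sum(original) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(pre_inloop, original))
    var = sum((x - mx) ** 2 for x in pre_inloop)
    a = cov / var if var else 1.0
    return a, my - a * mx

def apply_adaptive_filter(inloop_processed, params):
    # The derived parameters are then applied to in-loop processed data.
    a, b = params
    return [a * s + b for s in inloop_processed]
```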
Abstract:
A method for video decoding includes receiving a video frame reconstructed based on data received from a bitstream. The method further includes extracting, from the bitstream, a first syntax element indicating whether a spatial partition for partitioning the video frame is active. The method also includes, responsive to the first syntax element indicating that the spatial partition for partitioning the video frame is active, determining a configuration of the spatial partition for partitioning the video frame, determining a plurality of parameter sets of a neural network, and applying the neural network to the video frame. The video frame is spatially divided based on the determined configuration of the spatial partition for partitioning the video frame into a plurality of portions, and the neural network is applied to the plurality of portions in accordance with the determined plurality of parameter sets.
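The decoder-side flow can be sketched as follows; the grid-based partition, the list of per-portion parameter sets, and the per-sample `nn` callable are hypothetical simplifications of the signaled configuration and the neural network.

```python
def apply_nn_partitioned(frame, active, grid, param_sets, nn):
    # If the first syntax element signals the spatial partition as
    # inactive, the frame is returned unchanged in this sketch.
    if not active:
        return frame
    rows, cols = grid                      # determined partition config
    h = len(frame) // rows
    w = len(frame[0]) // cols
    out = [row[:] for row in frame]
    for r in range(rows):
        for c in range(cols):
            # Each spatial portion gets its own parameter set.
            params = param_sets[r * cols + c]
            for y in range(r * h, (r + 1) * h):
                for x in range(c * w, (c + 1) * w):
                    out[y][x] = nn(frame[y][x], params)
    return out
```

A real system would run a convolutional model per portion; the structure shown (flag, configuration, per-portion parameters) follows the abstract.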
Abstract:
A video coding system that uses multiple models to predict chroma samples is provided. The video coding system receives data for a block of pixels to be encoded or decoded as a current block of a current picture of a video. The video coding system derives multiple prediction linear models based on luma and chroma samples neighboring the current block. The video coding system constructs a composite linear model based on the multiple prediction linear models. The video coding system applies the composite linear model to incoming or reconstructed luma samples of the current block to generate a chroma predictor of the current block. The video coding system uses the chroma predictor to reconstruct chroma samples of the current block or to encode the current block.
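A minimal sketch of the multi-model idea follows. The composition rule assumed here, splitting neighboring samples by their mean luma value and fitting one linear model per group, is one plausible instance; the abstract does not fix how the composite model is constructed.

```python
def derive_linear_model(luma, chroma):
    # Least-squares fit chroma ≈ a * luma + b over neighbouring samples.
    n = len(luma)
    mx, my = sum(luma) / n, sum(chroma) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(luma, chroma))
    var = sum((x - mx) ** 2 for x in luma)
    a = cov / var if var else 0.0
    return a, my - a * mx

def composite_chroma_predictor(neigh_luma, neigh_chroma, block_luma):
    # Derive two prediction linear models from the neighbours, then
    # select per sample by a luma threshold (assumed composition rule).
    thr = sum(neigh_luma) / len(neigh_luma)
    lo = [(l, c) for l, c in zip(neigh_luma, neigh_chroma) if l <= thr]
    hi = [(l, c) for l, c in zip(neigh_luma, neigh_chroma) if l > thr]
    m_lo = derive_linear_model(*zip(*lo)) if lo else (0.0, 0.0)
    m_hi = derive_linear_model(*zip(*hi)) if hi else (0.0, 0.0)
    def predict(l):
        a, b = m_lo if l <= thr else m_hi
        return a * l + b
    return [predict(l) for l in block_luma]
```

The returned values form the chroma predictor that an encoder subtracts from, or a decoder adds residuals to, when reconstructing chroma samples.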
Abstract:
A method and apparatus for a video coding system that utilizes low-latency template-matching motion-vector refinement are disclosed. According to this method, input data associated with a current block of a video unit in a current picture are received. Motion compensation is then applied to the current block according to an initial motion vector (MV) to obtain initial motion-compensated predictors of the current block. After applying the motion compensation to the current block, template-matching MV refinement is applied to the current block to obtain a refined MV for the current block. The current block is then encoded or decoded using information including the refined MV. The method may further comprise determining gradient values of the initial motion-compensated predictors. The initial motion-compensated predictors can be adjusted by taking into consideration the gradient values and/or the MV difference between the refined and initial MVs.
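The template-matching refinement step can be sketched as a small SAD search around the initial MV. The search range, the integer-pel grid, and the 2-D list picture representation are illustrative assumptions (real codecs use sub-pel refinement and an L-shaped template of neighboring reconstructed samples).

```python
def refine_mv_template_matching(ref, template, init_mv, search_range=2):
    # Search a small window around the initial MV for the offset whose
    # reference patch best matches the current template (SAD cost).
    h, w = len(template), len(template[0])
    def sad(mv):
        dx, dy = mv
        return sum(abs(ref[dy + y][dx + x] - template[y][x])
                   for y in range(h) for x in range(w))
    best = init_mv
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            cand = (init_mv[0] + dx, init_mv[1] + dy)
            if sad(cand) < sad(best):
                best = cand
    return best
```

The low-latency aspect in the abstract is ordering: motion compensation runs first with the initial MV, and the refinement (plus an optional gradient-based adjustment of the predictors) follows, rather than blocking motion compensation on the refined MV.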
Abstract:
Video encoding or decoding methods and apparatuses include receiving input data associated with a current block in a current picture, determining a preload region in a reference picture shared by two or more coding configurations of affine prediction or motion compensation or by two or more affine refinement iterations, loading reference samples in the preload region, generating predictors for the current block, and encoding or decoding the current block according to the predictors. The predictors associated with the affine refinement iterations or coding configurations are generated based on some of the reference samples in the preload region.
Abstract:
Video encoding methods and apparatuses for frequency domain mode decision include receiving residual data of a current block, testing multiple coding modes on the residual data, calculating a distortion associated with each of the coding modes in a frequency domain, performing a mode decision to select a best coding mode from the tested coding modes according to the distortion calculated in the frequency domain, and encoding the current block based on the best coding mode.
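A sketch of the frequency-domain distortion measure follows. It assumes a uniform quantizer and measures squared quantization error directly on transform coefficients; for an orthonormal transform, Parseval's relation makes this equal to the spatial-domain SSE, which is what makes the frequency-domain shortcut attractive.

```python
def freq_domain_distortion(coeffs, qstep):
    # Squared quantisation error summed over transform coefficients,
    # i.e. distortion computed without an inverse transform.
    err = 0
    for c in coeffs:
        level = round(c / qstep)
        err += (c - level * qstep) ** 2
    return err

def mode_decision(residual_coeffs_per_mode, qstep):
    # Select the tested coding mode with the smallest frequency-domain
    # distortion (a rate term would normally be added for full RDO).
    return min(residual_coeffs_per_mode,
               key=lambda mode: freq_domain_distortion(
                   residual_coeffs_per_mode[mode], qstep))
```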
Abstract:
Low-latency video coding methods and apparatuses include receiving input data associated with a current Intra slice composed of Coding Tree Units (CTUs), where each CTU includes luma and chroma Coding Tree Blocks (CTBs), partitioning each CTB into non-overlapping pipeline units, and encoding or decoding the CTUs in the current Intra slice by performing processing of chroma pipeline units after beginning processing of luma pipeline units in at least one pipeline stage. Each of the pipeline units is processed by one pipeline stage after another pipeline stage, and different pipeline stages process different pipeline units simultaneously. The pipeline stage in the low-latency video coding methods and apparatuses simultaneously processes one luma pipeline unit and at least one previous chroma pipeline unit within one pipeline unit time interval.
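The interleaving described above can be sketched as a simple schedule in which, in each pipeline-unit time interval, a stage processes one luma unit together with a previous chroma unit; the fixed one-unit lag is an assumed parameter, not taken from the abstract.

```python
def pipeline_schedule(num_units, chroma_lag=1):
    # Each tuple is (luma_unit, chroma_unit) processed in the same time
    # slot. Chroma work starts after luma work has begun rather than
    # after the whole luma pass has finished, reducing latency.
    schedule = []
    for t in range(num_units + chroma_lag):
        luma = t if t < num_units else None
        chroma = t - chroma_lag if 0 <= t - chroma_lag < num_units else None
        schedule.append((luma, chroma))
    return schedule
```

For three pipeline units the schedule shows the overlap: chroma unit 0 is processed alongside luma unit 1, and only one extra slot is needed at the tail.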
Abstract:
A method and apparatus for video coding are disclosed. According to one method, for a current block, a maximum side of the transform block of the current block corresponds to 64. A scaling matrix is derived from elements of an 8×8 base scaling matrix, where the elements in a bottom-right 4×4 region of the 8×8 base scaling matrix are skipped, i.e., either not signaled or set to zero. According to another method, a current block belongs to a current picture in a first color format that has only a first color component. A first scaling matrix is signaled at the video encoder side or parsed at the video decoder side for the first color component of the current block. Signaling any second scaling matrix is disabled at the video encoder side, or parsing any second scaling matrix is disabled at the video decoder side, for a second or third color component of the current block.
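The first method can be sketched as a replication up-sampling of the 8×8 base matrix, with the bottom-right 4×4 base region treated as zero. Replication as the up-sampling rule is an assumption for illustration.

```python
def derive_scaling_matrix(base8, size):
    # Up-sample an 8x8 base scaling matrix to size x size by replication.
    # Base elements in the bottom-right 4x4 region are skipped (not
    # signalled) and treated as zero here; for a 64-side transform they
    # map to high frequencies that are zeroed out anyway.
    step = size // 8
    out = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            by, bx = y // step, x // step
            out[y][x] = 0 if (by >= 4 and bx >= 4) else base8[by][bx]
    return out
```

Skipping those 16 base elements saves signaling bits without affecting any coefficient the decoder will actually scale.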