Abstract:
A method and apparatus for luma-based chroma intra prediction for a current chroma block are disclosed. The chroma intra predictor is derived from reconstructed luma pixels of a current luma block according to the chroma sampling format. Depending on the chroma sampling format, sub-sampling, down-sampling, or no processing is applied to the reconstructed luma pixels in the horizontal or vertical direction. The information associated with the chroma sampling format can be incorporated in the sequence parameter set (SPS), the picture parameter set (PPS), the adaptation parameter set (APS), or the slice header of a video bitstream.
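As a rough illustration of the sampling-format-dependent processing described above, the sketch below maps reconstructed luma pixels onto the chroma grid for the 4:2:0, 4:2:2, and 4:4:4 formats. The averaging filters, the even-dimension assumption, and all function names are illustrative assumptions, not the filters defined by any particular codec.

```python
import numpy as np

def derive_colocated_luma(rec_luma: np.ndarray, chroma_format: str) -> np.ndarray:
    """Map reconstructed luma pixels onto the chroma sampling grid.

    rec_luma      -- 2-D array of reconstructed luma samples (even dimensions assumed)
    chroma_format -- one of '4:2:0', '4:2:2', '4:4:4'
    """
    if chroma_format == '4:4:4':
        # Chroma has the same resolution as luma: no processing is needed.
        return rec_luma.copy()
    if chroma_format == '4:2:2':
        # Chroma is halved horizontally only: down-sample by averaging column pairs.
        return ((rec_luma[:, 0::2].astype(np.int64) + rec_luma[:, 1::2] + 1) >> 1).astype(rec_luma.dtype)
    if chroma_format == '4:2:0':
        # Chroma is halved in both directions: vertical down-sampling (averaging)
        # followed by horizontal sub-sampling (one simple variant).
        vert = (rec_luma[0::2, :].astype(np.int64) + rec_luma[1::2, :] + 1) >> 1
        return vert[:, 0::2].astype(rec_luma.dtype)
    raise ValueError('unsupported chroma sampling format')
```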
Abstract:
A method and apparatus for video coding including an Intra transform Skip mode are disclosed. When the transform Skip mode is ON for a transform unit, embodiments according to the present invention apply different coding processes to the transform unit. The coding process with the transform Skip mode ON uses a different scan pattern from the coding process with the transform Skip mode OFF. According to various embodiments, the transform Skip mode is enabled when the transform unit size is 4×4, when the prediction unit and the transform unit have the same size, or when the prediction unit uses an INTRA_N×N mode. When the transform Skip mode is enabled, a flag can be signaled in the bitstream to indicate the transform Skip mode selection. Furthermore, the flag can be incorporated in a picture level, a slice level, or a sequence level of the video bitstream.
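A minimal sketch of the enabling conditions and scan selection described above, assuming toy 4×4 scan orders; the actual conditions, scan tables, and flag syntax are codec-specific, and every identifier here is illustrative.

```python
def transform_skip_enabled(tu_size: int, pu_size: int, intra_part_mode: str) -> bool:
    # Transform skip may be enabled when the TU is 4x4, when the PU and TU
    # have the same size, or when the PU uses an INTRA_NxN partition.
    return tu_size == 4 or pu_size == tu_size or intra_part_mode == 'INTRA_NxN'

def select_scan(transform_skip_flag: bool, default_scan, skip_scan):
    # With transform skip ON, the residual is coded with a different scan
    # pattern than the one used when transform skip is OFF.
    return skip_scan if transform_skip_flag else default_scan

# Illustrative 4x4 scan orders: diagonal (skip OFF) vs. row-by-row (skip ON).
diag_scan = [(y, x) for s in range(7) for y in range(4) for x in range(4) if x + y == s]
raster_scan = [(y, x) for y in range(4) for x in range(4)]

if transform_skip_enabled(tu_size=4, pu_size=8, intra_part_mode='INTRA_NxN'):
    scan = select_scan(transform_skip_flag=True, default_scan=diag_scan, skip_scan=raster_scan)
```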
Abstract:
A method and apparatus for deriving a motion vector predictor (MVP) candidate set for motion vector coding of a current block are disclosed. Embodiments according to the present invention determine a redundancy-removed spatial MVP candidate set by removing any redundant MVP candidate from the spatial MVP candidate set; the redundancy-removal process does not apply to the temporal MVP candidate. In another embodiment of the present invention, a redundancy-removed spatial-temporal MVP candidate set is determined, and the number of candidates in the redundancy-removed spatial-temporal MVP candidate set is checked to determine whether it is smaller than a threshold. If the number of candidates is smaller than the threshold, a zero motion vector is added to the redundancy-removed spatial-temporal MVP candidate set. The redundancy-removed spatial-temporal MVP candidate set is then provided for encoding or decoding of the motion vector of the current block.
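The sketch below loosely combines the two embodiments: duplicates are removed only among the spatial candidates, the temporal candidate is appended untouched, and a zero motion vector pads the set up to a threshold. The candidate representation, the ordering, and the threshold value are assumptions made for illustration.

```python
def build_mvp_candidate_set(spatial_mvps, temporal_mvp, threshold=2):
    # Remove redundant (duplicate) candidates from the spatial set only,
    # preserving the original candidate order.
    reduced = []
    for mv in spatial_mvps:
        if mv is not None and mv not in reduced:
            reduced.append(mv)

    # The temporal MVP candidate bypasses the redundancy-removal process.
    candidates = reduced + ([temporal_mvp] if temporal_mvp is not None else [])

    # Pad with a zero motion vector if the set is smaller than the threshold.
    while len(candidates) < threshold:
        candidates.append((0, 0))
    return candidates

# Example: two identical spatial MVPs collapse to one; a zero MV fills the set.
print(build_mvp_candidate_set([(3, -1), (3, -1)], temporal_mvp=None))  # [(3, -1), (0, 0)]
```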
Abstract:
A method and apparatus for clipping a transform coefficient are disclosed. Embodiments according to the present invention avoid overflow of the quantized transform coefficient by adaptively clipping the quantization level after quantization. In one embodiment, the method comprises generating the quantization level for the transform coefficient of a transform unit by quantizing the transform coefficient according to a quantization matrix and a quantization parameter. The clipping condition is determined and the quantization level is clipped according to the clipping condition to generate a clipping-processed quantization level. The clipping condition includes a null clipping condition. For the null clipping condition, the quantization level is clipped to a fixed range represented in n bits, where n corresponds to 8, 16, or 32. The quantization level may also be clipped within a range from −m to m−1 for the null clipping condition, where m may correspond to 128, 32768, or 2147483648.
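A minimal sketch of quantization followed by the null-clipping step, assuming a bare scale-and-shift quantizer in place of the full quantization-matrix and quantization-parameter machinery; the clip range is the signed n-bit range [−m, m−1] with m = 2^(n−1), and all names and constants are illustrative.

```python
def quantize_and_clip(coeff: int, scale: int, shift: int, n_bits: int = 16) -> int:
    # Forward quantization (simplified): scale the transform coefficient and round.
    sign = -1 if coeff < 0 else 1
    level = sign * ((abs(coeff) * scale + (1 << (shift - 1))) >> shift)

    # Null clipping condition: clip to the fixed range representable in n bits,
    # i.e. [-m, m - 1] with m = 2**(n_bits - 1) (e.g. m = 32768 for 16 bits).
    m = 1 << (n_bits - 1)
    return max(-m, min(level, m - 1))

# Example: a large coefficient that would overflow 16 bits is clipped to 32767.
print(quantize_and_clip(coeff=10_000_000, scale=26214, shift=14, n_bits=16))
```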
Abstract:
A method and an apparatus for decoding a video bitstream are disclosed. In one embodiment, the method comprises decoding a first coded block flag (cbf) of a color component indicating whether a current coding unit (CU) of the color component has at least one non-zero transform coefficient (830). According to the first cbf of the color component, the method further comprises decoding four second cbfs, each indicating whether one of four sub-blocks in the current CU of the color component has at least one non-zero transform coefficient (850). The residual quad-tree (RQT) of the current CU of the color component is determined based on the first cbf of the color component (870), or based on the first cbf and the second cbfs of the color component if the second cbfs exist (860). In another embodiment, the method comprises decoding a cbf associated with a transform unit (TU) and determining the RQT of the TU based on the cbf, wherein said determining the RQT of the TU based on the cbf is the same for a luma component and a chroma component, and the cbf is recovered from the video bitstream.
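The sketch below shows one way the cbf-driven parsing described above could look for a single color component, assuming a toy flag reader and a one-level residual quad-tree; the same routine would be reused for luma and chroma. The syntax-element order and depth handling are assumptions, not the normative syntax.

```python
def decode_rqt_for_component(reader):
    """Return a dict describing the RQT of the current CU for one color component."""
    # First cbf: does the current CU have any non-zero coefficient at all?
    cu_cbf = reader.read_flag()
    if not cu_cbf:
        # No residual for this component; the RQT carries no further splits.
        return {'cu_cbf': 0, 'sub_cbfs': None}

    # Second-level cbfs: one flag per quarter-size sub-block of the CU.
    sub_cbfs = [reader.read_flag() for _ in range(4)]
    return {'cu_cbf': 1, 'sub_cbfs': sub_cbfs}

class BitReader:
    """Toy flag reader used only to keep the sketch self-contained."""
    def __init__(self, flags):
        self.flags = list(flags)
    def read_flag(self):
        return self.flags.pop(0)

# Example: CU-level cbf = 1, followed by four sub-block cbfs.
print(decode_rqt_for_component(BitReader([1, 0, 1, 1, 0])))
```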
Abstract:
A method and apparatus for chroma intra prediction based on reconstructed luma pixels and chroma pixels are disclosed. The chroma intra prediction is based on a linear model of derived co-located current luma pixels of the current luma block scaled by a scaling factor. The scaling factor comprises a product term of a division factor and a scaled covariance-like value associated with neighboring reconstructed luma and chroma pixels of a current block. The division factor is related to a first data range divided with rounding by a scaled variance-like value associated with the neighboring reconstructed luma pixels of the current block. According to an embodiment of the present invention, the scaled covariance-like value, the first data range, or both are dependent on the internal bit depth with which the chroma signal is processed during the video coding process.
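To make the scaling-factor derivation concrete, the sketch below computes an integer linear-model slope from covariance-like and variance-like sums over the neighboring samples, with a bit-depth-dependent down-shift standing in for the spec-defined scaling; the data-range constant, the shift rule, and all names are illustrative assumptions rather than the normative derivation.

```python
def lm_scaling_factor(neigh_luma, neigh_chroma, bit_depth=8):
    # Sums over the neighboring reconstructed luma/chroma samples of the block.
    n = len(neigh_luma)
    sum_l = sum(neigh_luma)
    sum_c = sum(neigh_chroma)
    sum_lc = sum(l * c for l, c in zip(neigh_luma, neigh_chroma))
    sum_ll = sum(l * l for l in neigh_luma)

    # Covariance-like and variance-like values (slope ~ cov(L, C) / var(L)).
    cov_like = n * sum_lc - sum_l * sum_c
    var_like = n * sum_ll - sum_l * sum_l

    # Scale both terms down as the internal bit depth grows (illustrative rule).
    shift = max(0, bit_depth - 8)
    cov_scaled = cov_like >> shift
    var_scaled = max(1, var_like >> shift)

    # Division factor: a fixed data range divided, with rounding, by the scaled
    # variance-like value; its product with the scaled covariance-like value
    # gives the integer scaling factor of the linear model.
    data_range = 1 << 15                       # first data range (example value)
    div_factor = (data_range + (var_scaled >> 1)) // var_scaled
    return cov_scaled * div_factor             # to be normalized by a later down-shift

# Example: neighboring samples with a roughly linear luma-to-chroma relation.
print(lm_scaling_factor([100, 120, 140, 160], [60, 70, 80, 90], bit_depth=10))
```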