TRANSFORM COEFFICIENT CODING USING LEVEL MAPS

    Publication No.: US20190215533A1

    Publication Date: 2019-07-11

    Application No.: US16299436

    Filing Date: 2019-03-12

    Applicant: GOOGLE LLC

    Abstract: Encoding a transform block includes decomposing transform coefficients of the transform block into binary level maps arranged in a tier and a residual transform map, the binary level maps formed by breaking down a value of a respective transform coefficient into a series of binary decisions; and encoding, using a context model, a to-be-encoded binary decision that is at a scan location in a scan order, the to-be-encoded binary decision being a value of a binary level map at a level k. The context model is selected using first neighboring binary decisions of the binary level map at the level k that precede the to-be-encoded binary decision; and second neighboring binary decisions of a binary level map at a level (k−1), the second neighboring binary decisions including values that precede and values that follow, in the scan order, a co-located binary decision of the to-be-encoded binary decision.
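    The decomposition described above can be sketched as follows. This is a minimal illustration, not the patent's exact scheme: the number of levels and the semantics chosen here (map k flags whether a coefficient magnitude exceeds k, with the residual map carrying the remainder) are assumptions for clarity.

    ```python
    def decompose(coeffs, num_levels=3):
        """Break each coefficient magnitude into a tier of binary level maps
        plus a residual map. Map k holds 1 wherever |coeff| exceeds k."""
        level_maps = [[1 if abs(c) > k else 0 for c in coeffs]
                      for k in range(num_levels)]
        # The residual map carries whatever magnitude the binary maps
        # cannot express on their own.
        residual = [max(abs(c) - num_levels, 0) for c in coeffs]
        return level_maps, residual
    ```

    Under this convention, summing a coefficient's entries across all level maps and adding its residual recovers the coefficient's magnitude, so each map entry is a single binary decision that can be entropy coded with its own context.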

    Entropy coding motion vector residuals obtained using reference motion vectors

    Publication No.: US10142652B2

    Publication Date: 2018-11-27

    Application No.: US15147053

    Filing Date: 2016-05-05

    Applicant: GOOGLE LLC

    Abstract: Techniques are described to code motion vectors using reference motion vectors to reduce the amount of bits needed. One method includes determining, for a current block of the video bitstream, a reference motion vector from a varying number of candidate reference motion vectors, wherein the reference motion vector is associated with a reference block and includes a predicted portion and a residual portion; selecting a probability context model for the current block by evaluating the residual portion of the reference motion vector with one or more thresholds; and entropy decoding, for the current block using a processor, a motion vector residual associated with the current block using the probability context model.
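    The context-selection step can be sketched as below. The threshold values and the use of an L1 magnitude are illustrative assumptions; the patent only states that the residual portion is evaluated against one or more thresholds.

    ```python
    def select_context(residual_mv, thresholds=(4, 16)):
        """Map the reference MV's residual magnitude to a probability-context
        index by comparing it against thresholds (values are illustrative)."""
        magnitude = abs(residual_mv[0]) + abs(residual_mv[1])
        for index, threshold in enumerate(thresholds):
            if magnitude < threshold:
                return index
        return len(thresholds)
    ```

    A small residual on the reference motion vector suggests the predictor is accurate, so the current block's residual is likely small too; binning by residual magnitude lets the entropy coder use a sharper probability model in that case.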

    Dual filter type for motion compensated prediction in video coding

    Publication No.: US10116957B2

    Publication Date: 2018-10-30

    Application No.: US15266400

    Filing Date: 2016-09-15

    Applicant: GOOGLE LLC

    Abstract: Inter-prediction using a dual filter type is described. To decode a video frame, a block location within a reference frame is determined using a motion vector and a location of a current block to be decoded. Rows of pixel values of a temporal pixel block, or columns of pixel values of the temporal pixel block, are generated by applying a first interpolation filter to pixels corresponding to the block location along a first axis. Columns of pixel values, or rows of pixel values, for a first prediction block are generated by applying a second interpolation filter to the pixel values of the temporal pixel block along a second axis perpendicular to the first axis. The first and second interpolation filters are different. An encoded residual block is decoded to generate a residual block, and combining the residual block with the first prediction block reconstructs the current block.
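    The two-pass separable filtering can be sketched as follows. The edge-replication padding and the specific filter taps are assumptions for illustration; real codecs use longer, precisely specified sub-pixel kernels.

    ```python
    def filter_1d(vals, taps):
        """Apply a 1-D filter; pad by edge replication (illustrative choice)."""
        half = len(taps) // 2
        padded = [vals[0]] * half + list(vals) + [vals[-1]] * half
        return [sum(t * padded[i + j] for j, t in enumerate(taps))
                for i in range(len(vals))]

    def dual_filter_predict(ref, h_taps, v_taps):
        """Filter along rows first (producing the temporal pixel block), then
        along columns with a different filter, as the abstract describes."""
        temp = [filter_1d(row, h_taps) for row in ref]
        cols = [filter_1d([temp[r][c] for r in range(len(temp))], v_taps)
                for c in range(len(temp[0]))]
        # Transpose the filtered columns back into row-major order.
        return [[cols[c][r] for c in range(len(cols))] for r in range(len(temp))]
    ```

    Keeping the two passes separable means the intermediate (temporal) block only has to be computed once per axis, while still allowing the horizontal and vertical kernels to differ.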

    Selective reference block generation without full reference frame generation

    Publication No.: US12244818B2

    Publication Date: 2025-03-04

    Application No.: US18542997

    Filing Date: 2023-12-18

    Applicant: GOOGLE LLC

    Abstract: A motion vector for a current block of a current frame is decoded from a compressed bitstream. A location of a reference block within an un-generated reference frame is identified. The reference block is generated using a forward reference frame and a backward reference frame without generating the un-generated reference frame. The reference block is generated by identifying an extended reference block by extending the reference block at each boundary of the reference block by a number of pixels related to a filter length of a filter used in sub-pixel interpolation; and generating pixel values of only the extended reference block by performing a projection using the forward reference frame and the backward reference frame without generating the whole of the un-generated reference frame. The current block is then decoded based on the reference block and the motion vector.
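    The extended-block computation can be sketched as below. The patent says only that each boundary is extended by a number of pixels related to the filter length; assuming half the filter length per side (a common choice for symmetric interpolation kernels) is this sketch's own simplification.

    ```python
    def extended_block_bounds(x, y, width, height, filter_length):
        """Extend a reference block at each boundary by the pixels a
        sub-pixel interpolation filter needs (half the filter length per
        side is assumed here, not taken from the patent)."""
        ext = filter_length // 2
        return (x - ext, y - ext, width + 2 * ext, height + 2 * ext)
    ```

    Generating pixel values only inside these bounds, rather than for the whole interpolated reference frame, is what lets the decoder skip full reference-frame generation.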

    EFFICIENT CONTEXT MODEL COMPUTATION DESIGN IN TRANSFORM COEFFICIENT CODING

    Publication No.: US20240276015A1

    Publication Date: 2024-08-15

    Application No.: US18641482

    Filing Date: 2024-04-22

    Applicant: GOOGLE LLC

    CPC classification number: H04N19/60 H04N19/129 H04N19/13 H04N19/184 H04N19/88

    Abstract: An encoded bitstream is decodable by a processor configured to execute instructions to store, in a first line buffer, first values of a first scan-order diagonal line scanned immediately before a current scan-order diagonal line of a transform block; and store, in a second line buffer, second values of a second scan-order diagonal line scanned immediately before the first scan-order diagonal line. The first values of the first line buffer and the second values of the second line buffer are interleaved in a destination buffer. Using the destination buffer, a probability distribution is selected for coding a current value of the current scan-order diagonal line. The current value is entropy decoded from the bitstream using the probability distribution. One of the second line buffer or the first line buffer is replaced with current values of the current scan-order diagonal line for coding values of an immediately subsequent scan-order diagonal line.
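    The interleaving of the two line buffers can be sketched as follows; the pairing of same-index entries is an assumption about what "interleaved" means here, chosen because it places each position's two previous-diagonal neighbors contiguously.

    ```python
    def interleave(first_line, second_line):
        """Merge two line buffers into a destination buffer so that, for each
        position, the value from the previous diagonal and the one before it
        sit next to each other in memory."""
        dest = []
        for a, b in zip(first_line, second_line):
            dest.extend((a, b))
        return dest
    ```

    With the neighbors adjacent in the destination buffer, the context (probability distribution) lookup for each value on the current diagonal becomes a single contiguous read; after the diagonal is coded, one of the two line buffers is overwritten with the current values, rotating the buffers for the next diagonal.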

    MOTION FIELD ESTIMATION BASED ON MOTION TRAJECTORY DERIVATION

    Publication No.: US20240171733A1

    Publication Date: 2024-05-23

    Application No.: US18424445

    Filing Date: 2024-01-26

    Applicant: GOOGLE LLC

    CPC classification number: H04N19/105 H04N19/139 H04N19/172 H04N19/573

    Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame. During decoding, the motion field estimate may be determined using motion vectors signaled within a bitstream and without additional side information, thereby improving prediction coding efficiency.
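    The trajectory bookkeeping can be sketched as below. Representing motion vectors as simple (dx, dy) pairs and concatenating them by vector addition is an illustrative simplification; the patent's trajectories also carry temporal scaling that this sketch omits.

    ```python
    def concatenate_trajectory(mv_a, mv_b):
        """Concatenate two motion vectors along a motion trajectory
        (vector sum of the per-hop displacements)."""
        return (mv_a[0] + mv_b[0], mv_a[1] + mv_b[1])

    def landing_location(start, concatenated_mv):
        """Location in the current frame at which a concatenated MV points."""
        return (start[0] + concatenated_mv[0], start[1] + concatenated_mv[1])
    ```

    Recording both the concatenated vector and where it lands in the current frame is what allows the decoder to rebuild the motion field from signaled motion vectors alone, without extra side information.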

    DEBLOCKING FILTERING

    Publication No.: US20240155121A1

    Publication Date: 2024-05-09

    Application No.: US18406816

    Filing Date: 2024-01-08

    Applicant: Google LLC

    CPC classification number: H04N19/117 H04N19/176 H04N19/186 H04N19/46

    Abstract: A bitstream that stores encoded image data is described. In addition to the compressed data for the color planes of the image, signals identifying respective deblocking filters are included for the different color planes. The deblocking filters may include those having different lengths for a luma plane as compared to one or more chroma planes of the image. One or more of the color planes, such as the luma plane, may have different filters for filtering reconstructed pixels vertically as compared to filtering the reconstructed pixels horizontally.
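    The per-plane, per-direction filter signaling might be modeled as a lookup like the one below. The table shape follows the abstract (luma vs. chroma, vertical vs. horizontal), but the specific filter lengths are invented for illustration and are not taken from the patent.

    ```python
    # Illustrative filter-length table; in practice the bitstream would
    # signal these choices per plane, and these lengths are assumptions.
    DEBLOCK_FILTER_LENGTH = {
        ("luma", "vertical"): 14,
        ("luma", "horizontal"): 7,
        ("chroma", "vertical"): 5,
        ("chroma", "horizontal"): 5,
    }

    def deblock_filter_length(plane, direction):
        """Look up the signaled deblocking filter length for a plane and
        filtering direction."""
        return DEBLOCK_FILTER_LENGTH[(plane, direction)]
    ```

    Allowing the luma plane a longer filter than chroma, and the vertical pass a different length than the horizontal pass, matches the degrees of freedom the abstract describes.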

    CONSTRAINED MOTION FIELD ESTIMATION FOR HARDWARE EFFICIENCY

    Publication No.: US20220377364A1

    Publication Date: 2022-11-24

    Application No.: US17868011

    Filing Date: 2022-07-19

    Applicant: GOOGLE LLC

    Abstract: Decoding a current block of a current frame includes obtaining motion trajectories between the current frame and at least one previously coded frame by projecting motion vectors from the at least one previously coded frame onto the current frame. A motion field is obtained between the current frame and a reference frame used for coding the current frame. The motion field is obtained by extending the motion trajectories from the current frame towards the reference frame. A motion vector for the current block is identified based on the motion field. A prediction block is obtained for the current block using a reference block of the reference frame identified using the motion vector.
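    Both the projection of motion vectors onto the current frame and the extension of trajectories toward the reference frame reduce to temporal rescaling under a constant-velocity assumption, which can be sketched as:

    ```python
    def scale_mv(mv, dist_from, dist_to):
        """Linearly rescale a motion vector by temporal distance, assuming
        constant-velocity motion along the trajectory (a simplification;
        the patent's constrained estimation adds hardware-oriented limits
        this sketch does not model)."""
        return (mv[0] * dist_to / dist_from, mv[1] * dist_to / dist_from)
    ```

    Projecting a vector from a previously coded frame onto the current frame uses one target distance; extending the resulting trajectory from the current frame toward the reference frame reuses the same scaling with a signed distance in the opposite temporal direction.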
