Motion Vector Dependent Spatial Transformation in Video Coding

    Publication Number: US20180213239A1

    Publication Date: 2018-07-26

    Application Number: US15935301

    Application Date: 2018-03-26

    Applicant: GOOGLE LLC

    Abstract: Coding efficiency may be improved by subdividing a block into smaller sub-blocks for prediction. A first rate distortion value of a block optionally partitioned into smaller prediction sub-blocks of a first size is calculated using respective inter prediction modes and transforms of the first size. The residuals are used to encode the block using a transform of a second size smaller than the first size, generating a second rate distortion value. The values are compared to determine whether coding efficiency gains may result from inter predicting the smaller, second size sub-blocks. If so, the block is encoded by generating prediction residuals for the second size sub-blocks, and neighboring sub-blocks are grouped, where possible, based on common motion information. Each resulting composite residual block is transformed by a transform of the same size to generate another rate distortion value. The encoded block with the lowest rate distortion value is used.
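
    A minimal sketch of the rate-distortion comparison described above is given below in Python. The Lagrangian cost, the fixed lambda, and the trial-encoding numbers are illustrative assumptions, not the encoder's actual search.

```python
# Minimal sketch of rate-distortion-based selection between a large transform
# and smaller sub-block transforms. All numbers and helpers are hypothetical;
# this is not the patented encoder logic.

LAMBDA = 0.1  # Lagrange multiplier trading off rate against distortion


def rd_cost(distortion, rate_bits):
    """Classic Lagrangian cost: D + lambda * R."""
    return distortion + LAMBDA * rate_bits


def choose_encoding(candidates):
    """Return the candidate label with the lowest rate-distortion cost.

    `candidates` maps a label to a (distortion, rate_bits) pair produced by
    trial-encoding the block with that configuration.
    """
    return min(candidates, key=lambda k: rd_cost(*candidates[k]))


if __name__ == "__main__":
    # Hypothetical trial-encoding results for one 16x16 block.
    candidates = {
        "16x16 transform": (1200.0, 350),          # one transform of the first size
        "8x8 sub-block transforms": (950.0, 420),  # four transforms of the second size
        "8x8 grouped by motion": (970.0, 380),     # neighbors with common MVs share a transform
    }
    print("selected:", choose_encoding(candidates))
```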

    Motion field estimation based on motion trajectory derivation

    Publication Number: US12206842B2

    Publication Date: 2025-01-21

    Application Number: US18424445

    Application Date: 2024-01-26

    Applicant: GOOGLE LLC

    Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame. During decoding, the motion field estimate may be determined using motion vectors signaled within a bitstream and without additional side information, thereby improving prediction coding efficiency.
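
    The trajectory concatenation described above can be pictured with the following Python sketch. The linear rescaling by frame distance, the block-grid layout, and the function names are illustrative assumptions rather than the claimed derivation.

```python
# Minimal sketch of deriving a motion field for a current frame by concatenating
# motion vectors of its reference frames along a motion trajectory. The frame
# indexing and linear scaling are illustrative assumptions.

def concatenate_trajectory(mv_ref, ref_index, ref_of_ref_index, cur_index):
    """Rescale a motion vector stored at a reference frame toward the current frame.

    mv_ref points from frame `ref_index` back to frame `ref_of_ref_index`; it is
    linearly rescaled to span the distance between the current frame and `ref_index`.
    """
    d_ref = ref_index - ref_of_ref_index
    d_cur = cur_index - ref_index
    scale = d_cur / d_ref
    return (mv_ref[0] * scale, mv_ref[1] * scale)


def build_motion_field(ref_motion, ref_index, ref_of_ref_index, cur_index, width, height):
    """Fill a per-block motion field for the current frame from one reference frame.

    `ref_motion` maps a block position in the reference frame to its motion vector.
    Each vector is projected along its trajectory; the block it lands on in the
    current frame inherits the concatenated vector. Positions never hit stay None
    (to be interpolated from neighbors).
    """
    field = [[None] * width for _ in range(height)]
    for (bx, by), mv in ref_motion.items():
        cmv = concatenate_trajectory(mv, ref_index, ref_of_ref_index, cur_index)
        tx, ty = int(round(bx + cmv[0])), int(round(by + cmv[1]))
        if 0 <= tx < width and 0 <= ty < height:
            field[ty][tx] = cmv
    return field
```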

    BLOCK-BASED OPTICAL FLOW ESTIMATION FOR MOTION COMPENSATED PREDICTION IN VIDEO CODING

    Publication Number: US20240195979A1

    Publication Date: 2024-06-13

    Application Number: US18542997

    Application Date: 2023-12-18

    Applicant: GOOGLE LLC

    Abstract: A motion vector for a current block of a current frame is decoded from a compressed bitstream. A location of a reference block within an un-generated reference frame is identified. The reference block is generated using a forward reference frame and a backward reference frame without generating the un-generated reference frame. The reference block is generated by identifying an extended reference block by extending the reference block at each boundary of the reference block by a number of pixels related to a filter length of a filter used in sub-pixel interpolation; and generating pixel values of only the extended reference block by performing a projection using the forward reference frame and the backward reference frame without generating the whole of the un-generated reference frame. The current block is then decoded based on the reference block and the motion vector.
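
    The sizing of the extended reference block can be illustrated with the short Python sketch below; the half-filter-length margin is an assumption used for intuition, not the codec's exact boundary handling.

```python
# Sketch of sizing the extended reference block described above: the block is
# grown at each boundary by a margin tied to the sub-pixel interpolation filter
# length, so that only those pixels need to be projected from the forward and
# backward reference frames. The margin formula here is an assumption.

def extended_block_bounds(x, y, width, height, filter_length):
    """Return (x0, y0, x1, y1) of the extended reference block.

    A symmetric interpolation filter of length L needs roughly L // 2 extra
    pixels on every side of the block to produce sub-pixel samples.
    """
    margin = filter_length // 2
    return (x - margin, y - margin, x + width + margin, y + height + margin)


if __name__ == "__main__":
    # Hypothetical 16x16 block at (64, 32) with an 8-tap interpolation filter.
    print(extended_block_bounds(64, 32, 16, 16, filter_length=8))
    # -> (60, 28, 84, 52): a 24x24 region is generated instead of a full frame.
```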

    Mapping-aware coding tools for 360 degree videos

    Publication Number: US11924467B2

    Publication Date: 2024-03-05

    Application Number: US17527590

    Application Date: 2021-11-16

    Applicant: GOOGLE LLC

    Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere. These mapping-aware coding tools contemplate changes to video information by the mapping of 360 degree video data into a conventional video format.
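
    The plane-to-sphere-and-back motion vector mapping can be sketched as follows, assuming an equirectangular projection and a rotation about the vertical (yaw) axis only; the actual tools handle general rotation parameters.

```python
# Rough sketch of the plane -> sphere -> plane motion vector mapping described
# above, assuming an equirectangular projection and a yaw-only rotation. This is
# illustrative, not the patented tool.

import math


def plane_to_sphere(px, py, frame_w, frame_h):
    """Map an equirectangular pixel to (longitude, latitude) in radians."""
    lon = (px / frame_w) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (py / frame_h) * math.pi
    return lon, lat


def sphere_to_plane(lon, lat, frame_w, frame_h):
    """Map (longitude, latitude) back to equirectangular pixel coordinates."""
    px = (lon + math.pi) / (2.0 * math.pi) * frame_w
    py = (math.pi / 2.0 - lat) / math.pi * frame_h
    return px, py


def mapped_motion_vector(px, py, yaw, frame_w, frame_h):
    """Motion vector for a pixel implied by a yaw rotation of the sphere."""
    lon, lat = plane_to_sphere(px, py, frame_w, frame_h)
    qx, qy = sphere_to_plane(lon + yaw, lat, frame_w, frame_h)
    return qx - px, qy - py


if __name__ == "__main__":
    # A 1 degree yaw shifts equirectangular pixels horizontally by a constant
    # amount; latitude-dependent effects appear with pitch/roll rotations.
    print(mapped_motion_vector(1920, 540, math.radians(1.0), 3840, 1080))
```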

    Motion field estimation based on motion trajectory derivation

    Publication Number: US11917128B2

    Publication Date: 2024-02-27

    Application Number: US17090094

    Application Date: 2020-11-05

    Applicant: GOOGLE LLC

    CPC classification number: H04N19/105 H04N19/139 H04N19/172 H04N19/573

    Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame. During decoding, the motion field estimate may be determined using motion vectors signaled within a bitstream and without additional side information, thereby improving prediction coding efficiency.
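
    The neighbor-based interpolation of unavailable motion vectors mentioned in the abstract can be pictured as a simple average over the motion field grid; the 4-connected averaging below is an illustrative simplification, not the claimed method.

```python
# Illustrative fill-in of missing motion field entries by averaging available
# 4-connected neighbors. Entries are (mv_x, mv_y) tuples or None.

def interpolate_missing(field):
    """Replace None entries in a 2-D motion field with the mean of known neighbors."""
    h, w = len(field), len(field[0])
    out = [row[:] for row in field]
    for y in range(h):
        for x in range(w):
            if field[y][x] is not None:
                continue
            neighbors = [
                field[ny][nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < h and 0 <= nx < w and field[ny][nx] is not None
            ]
            if neighbors:
                out[y][x] = (
                    sum(mv[0] for mv in neighbors) / len(neighbors),
                    sum(mv[1] for mv in neighbors) / len(neighbors),
                )
    return out
```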

    Deblocking filtering
    Invention Grant

    Publication Number: US11870983B2

    Publication Date: 2024-01-09

    Application Number: US16995078

    Application Date: 2020-08-17

    Applicant: Google LLC

    CPC classification number: H04N19/117 H04N19/176 H04N19/186 H04N19/46

    Abstract: Techniques for encoding and decoding image data are described. An image is reconstructed and deblocked. A respective deblocking filter is identified for different color planes of the image. The deblocking filters may include those having different lengths for a luma plane as compared to one or more chroma planes of the image. One or more of the color planes, such as the luma plane, may have different filters for filtering reconstructed pixels vertically as compared to filtering the reconstructed pixels horizontally.
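
    A toy example of per-plane, per-direction filter selection is sketched below; the tap counts in the table are hypothetical and do not reflect the codec's actual filter set.

```python
# Sketch of per-plane, per-direction deblocking filter selection in the spirit
# of the abstract: the luma plane may use longer filters than the chroma planes,
# and vertical and horizontal edges may use different lengths. The table below
# is hypothetical.

FILTER_TAPS = {
    # (plane, direction): number of filter taps
    ("luma", "vertical"): 13,
    ("luma", "horizontal"): 7,
    ("chroma_u", "vertical"): 5,
    ("chroma_u", "horizontal"): 5,
    ("chroma_v", "vertical"): 5,
    ("chroma_v", "horizontal"): 5,
}


def select_deblocking_filter(plane, direction):
    """Return the filter length used to smooth a block edge on the given plane."""
    return FILTER_TAPS[(plane, direction)]


if __name__ == "__main__":
    for key, taps in FILTER_TAPS.items():
        print(key, "->", taps, "taps")
```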

    MOTION PREDICTION CODING WITH COFRAME MOTION VECTORS

    Publication Number: US20230308679A1

    Publication Date: 2023-09-28

    Application Number: US18323613

    Application Date: 2023-05-25

    Applicant: GOOGLE LLC

    CPC classification number: H04N19/52 H04N19/176 H04N19/577

    Abstract: Video coding using motion prediction coding with coframe motion vectors includes generating a reference coframe that is spatiotemporally concurrent with a current frame from a sequence of input frames, where each frame in the sequence has a respective sequential location and the current frame has a current sequential location. An encoded frame is generated by encoding the current frame using the reference coframe, the encoded frame is included in an encoded bitstream, and the encoded bitstream is output.
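
    The coding flow in the abstract can be summarized as a short skeleton; the helper functions passed in (generate_reference_coframe, encode_frame, write_to_bitstream) are hypothetical placeholders, not an actual codec API.

```python
# Skeleton of the coding flow described above. The three callables are
# hypothetical placeholders supplied by the caller.

def encode_sequence(input_frames, generate_reference_coframe, encode_frame, write_to_bitstream):
    """Encode each frame against a reference coframe aligned with its sequential location."""
    bitstream = []
    for index, frame in enumerate(input_frames):
        # The reference coframe is synthesized to be spatiotemporally concurrent
        # with the current frame (same sequential location in the sequence).
        coframe = generate_reference_coframe(input_frames, index)
        encoded = encode_frame(frame, coframe)
        write_to_bitstream(bitstream, encoded)
    return bitstream
```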

    Probability Estimation for Video Coding

    Publication Number: US20230007260A1

    Publication Date: 2023-01-05

    Application Number: US17775565

    Application Date: 2020-11-09

    Applicant: Google LLC

    Abstract: Entropy coding a sequence of symbols is described. A first probability model for entropy coding is selected. At least one symbol of the sequence is coded using a probability determined using the first probability model. The probability according to the first probability model is combined with an estimate from a second probability model to entropy code a subsequent symbol. The combination may be fixed or adaptive.
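
    The model mixing can be illustrated with a small Python sketch; the exponential-smoothing estimators and the fixed mixing weight are assumptions chosen for clarity, not the claimed update rule.

```python
# Sketch of blending two probability models for entropy coding a binary symbol
# stream. The adaptation rates and the fixed mixing weight are illustrative.

def make_adaptive_model(step):
    """Return a simple adaptive model: update(prob, bit) moves prob toward bit."""
    def update(prob, bit):
        target = 1.0 if bit else 0.0
        return prob + (target - prob) * step
    return update


def code_sequence(bits, weight=0.5):
    """Track a mixed probability while 'coding' a bit sequence.

    Model 1 adapts slowly, model 2 adapts quickly; the probability actually used
    for each symbol is a fixed linear combination of the two (an adaptive weight
    is also possible).
    """
    slow, fast = make_adaptive_model(0.02), make_adaptive_model(0.2)
    p1 = p2 = 0.5
    used = []
    for bit in bits:
        p_mixed = weight * p1 + (1.0 - weight) * p2  # probability handed to the coder
        used.append(p_mixed)
        p1, p2 = slow(p1, bit), fast(p2, bit)
    return used


if __name__ == "__main__":
    probs = code_sequence([1, 1, 1, 0, 1, 1, 1, 1])
    print([round(p, 3) for p in probs])
```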

    BLOCK-BASED OPTICAL FLOW ESTIMATION FOR MOTION COMPENSATED PREDICTION IN VIDEO CODING

    Publication Number: US20220264109A1

    Publication Date: 2022-08-18

    Application Number: US17738105

    Application Date: 2022-05-06

    Applicant: GOOGLE LLC

    Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
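
    The block-level generation without materializing the full optical flow reference frame can be sketched as below; the midpoint projection and plain averaging are simplifying assumptions, and real interpolation works at sub-pixel precision.

```python
# Sketch of producing an optical flow reference block directly from the forward
# and backward reference frames, without materializing the whole interpolated
# frame. The half-flow projection and plain averaging are simplifications.

def generate_reference_block(forward, backward, x0, y0, size, flow):
    """Average forward/backward samples displaced by +/- half the flow vector.

    `forward` and `backward` are 2-D pixel arrays (lists of lists), (x0, y0) is
    the block's top-left corner, and `flow` maps (dy, dx) offsets within the
    block to a per-pixel (fy, fx) flow vector.
    """
    block = []
    for dy in range(size):
        row = []
        for dx in range(size):
            fy, fx = flow.get((dy, dx), (0, 0))
            fy_h, fx_h = fy // 2, fx // 2  # integer half-flow for simplicity
            fwd = forward[y0 + dy - fy_h][x0 + dx - fx_h]
            bwd = backward[y0 + dy + fy_h][x0 + dx + fx_h]
            row.append((fwd + bwd) // 2)
        block.append(row)
    return block


if __name__ == "__main__":
    # Two tiny 8x8 "frames" with a constant gradient; zero flow everywhere.
    fwd = [[x + y for x in range(8)] for y in range(8)]
    bwd = [[x + y + 2 for x in range(8)] for y in range(8)]
    print(generate_reference_block(fwd, bwd, 2, 2, 4, flow={}))
```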
