Motion prediction coding with coframe motion vectors

    Publication number: US11665365B2

    Publication date: 2023-05-30

    Application number: US16131133

    Filing date: 2018-09-14

    Applicant: GOOGLE LLC

    CPC classification number: H04N19/52 H04N19/176 H04N19/577

    Abstract: Video coding may include generating, by a processor executing instructions stored on a non-transitory computer-readable medium, an encoded frame by encoding a current frame from an input bitstream. Encoding the current frame includes generating a reference coframe that spatiotemporally corresponds to the current frame, where the current frame is a frame from a sequence of input frames and each frame in that sequence has a respective sequential location, and encoding the current frame using the reference coframe. Video coding may also include placing the encoded frame in an output bitstream and outputting the output bitstream.
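The coframe-based pipeline in the abstract above can be sketched roughly as follows. This is a toy model only: the abstract does not specify how the reference coframe is synthesized, so midpoint interpolation of two neighboring reconstructed frames stands in for it, and all function names are illustrative.

```python
# Hypothetical sketch of the coframe-based encoding loop. A "reference
# coframe" is modeled as a per-pixel average of two nearby reconstructed
# frames; the actual synthesis method is not given by the abstract.

def make_coframe(prev_recon, next_recon):
    """Synthesize a reference coframe spatiotemporally aligned with the
    current frame (toy model: midpoint interpolation)."""
    return [[(a + b) // 2 for a, b in zip(r0, r1)]
            for r0, r1 in zip(prev_recon, next_recon)]

def encode_frame(current, coframe):
    """Encode the current frame as a residual against its coframe."""
    return [[c - p for c, p in zip(cr, pr)]
            for cr, pr in zip(current, coframe)]

def decode_frame(residual, coframe):
    """Invert encode_frame: add the coframe prediction back."""
    return [[r + p for r, p in zip(rr, pr)]
            for rr, pr in zip(residual, coframe)]
```

Encoding then decoding against the same coframe round-trips the frame, which is the invariant any such prediction scheme must satisfy.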

    Multi-Frame Motion Compensation Synthesis For Video Coding

    Publication number: US20250150574A1

    Publication date: 2025-05-08

    Application number: US18836951

    Filing date: 2022-03-07

    Applicant: Google LLC

    Abstract: A motion vector for a current block of a current frame is decoded. The motion vector for the current block refers to a first reference block in a first reference frame. A first prediction block of two or more prediction blocks is identified in the first reference frame and using the first reference block. A first grid-aligned block is identified based on the first reference block. A second reference block is identified using a motion vector of the first grid-aligned block in a second reference frame. A second prediction block of the two or more prediction blocks is identified in the second reference frame and using the second reference block. The two or more prediction blocks are combined to obtain a prediction block for the current block.
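The two-prediction synthesis described above can be sketched as below. All helper names are illustrative, not from the patent; a motion vector is a (dy, dx) displacement, and the two prediction blocks are combined by a simple average (the patent may use a different weighting).

```python
# Sketch of multi-frame motion compensation synthesis: one prediction
# block comes from the decoded MV into the first reference frame, a
# second from the grid-aligned block's MV into the second reference
# frame, and the two are combined.

def block_at(frame, top, left, size):
    """Extract a size x size block at (top, left)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def predict(frame1, frame2, top, left, size, mv_cur, mv_grid):
    dy1, dx1 = mv_cur    # decoded MV for the current block -> frame1
    p1 = block_at(frame1, top + dy1, left + dx1, size)
    dy2, dx2 = mv_grid   # MV of the grid-aligned block -> frame2
    p2 = block_at(frame2, top + dy2, left + dx2, size)
    # Combine the two prediction blocks (simple average here).
    return [[(a + b) // 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(p1, p2)]
```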

    Block-based optical flow estimation for motion compensated prediction in video coding

    Publication number: US11876974B2

    Publication date: 2024-01-16

    Application number: US17738105

    Filing date: 2022-05-06

    Applicant: GOOGLE LLC

    Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
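The decode-side flow above can be sketched as follows. The key point is that the optical flow reference block is produced directly from the forward and backward reference frames, without ever allocating the full optical flow reference frame. The simple per-pixel averaging stands in for the actual projection, and all names are illustrative.

```python
# Sketch of the optical-flow decoding path: only the needed reference
# block is generated from the forward/backward reference frames; the
# optical flow reference frame itself is never materialized.

def of_reference_block(fwd, bwd, top, left, size):
    """Build the optical-flow reference block pixel by pixel by
    projecting (here: averaging) the two reference frames."""
    return [[(fwd[top + y][left + x] + bwd[top + y][left + x]) // 2
             for x in range(size)]
            for y in range(size)]

def decode_block(use_of_flag, mv, fwd, bwd, top, left, size):
    """If the decoded flag selects optical flow, locate the block via
    the decoded MV and generate it on demand."""
    if not use_of_flag:
        raise NotImplementedError("regular inter path not sketched")
    dy, dx = mv  # MV locates the block within the (virtual) OF frame
    return of_reference_block(fwd, bwd, top + dy, left + dx, size)
```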

    MAPPING-AWARE CODING TOOLS FOR 360 DEGREE VIDEOS

    Publication number: US20230156221A1

    Publication date: 2023-05-18

    Application number: US17527590

    Filing date: 2021-11-16

    Applicant: GOOGLE LLC

    Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere. These mapping-aware coding tools contemplate changes to video information by the mapping of 360 degree video data into a conventional video format.
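The motion vector mapping step (plane to sphere, rotate, sphere back to plane) can be sketched as below. This assumes an equirectangular projection and a pure yaw rotation; the patent's projection and rotation-parameter model may differ, and all names are illustrative.

```python
import math

# Sketch of mapping-aware motion vector derivation: map the pixel onto
# the sphere, apply the rotation, map the rotated point back to the
# frame plane, and read off the implied motion vector.

def to_sphere(x, y, w, h):
    """Equirectangular plane (x, y) -> (longitude, latitude)."""
    lon = (x / w) * 2 * math.pi - math.pi      # [-pi, pi)
    lat = math.pi / 2 - (y / h) * math.pi      # [pi/2, -pi/2]
    return lon, lat

def to_plane(lon, lat, w, h):
    """Inverse of to_sphere."""
    x = (lon + math.pi) / (2 * math.pi) * w
    y = (math.pi / 2 - lat) / math.pi * h
    return x, y

def mapped_motion_vector(x, y, w, h, yaw):
    """Predict the pixel's plane location after a yaw rotation on the
    sphere; return the implied motion vector (dx, dy)."""
    lon, lat = to_sphere(x, y, w, h)
    lon2 = (lon + yaw + math.pi) % (2 * math.pi) - math.pi  # wrap
    x2, y2 = to_plane(lon2, lat, w, h)
    return x2 - x, y2 - y
```

For an equirectangular frame, a pure yaw rotation shifts every pixel horizontally by the same number of columns, which makes the mapping easy to sanity-check.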

    Motion field estimation based on motion trajectory derivation

    Publication number: US12206842B2

    Publication date: 2025-01-21

    Application number: US18424445

    Filing date: 2024-01-26

    Applicant: GOOGLE LLC

    Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame. During decoding, the motion field estimate may be determined using motion vectors signaled within a bitstream and without additional side information, thereby improving prediction coding efficiency.
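The trajectory derivation above can be sketched as follows: motion vectors of the reference frames are concatenated, the concatenated vector is projected onto the current frame by linear temporal scaling, and a missing motion-field entry is filled from its available neighbors. The linear scaling and averaging are assumptions; names are illustrative.

```python
# Sketch of motion field estimation via motion trajectory derivation.

def concatenate(mv_r1_to_r2, mv_r2_to_r3):
    """Chain two reference-frame motion vectors into one trajectory."""
    return (mv_r1_to_r2[0] + mv_r2_to_r3[0],
            mv_r1_to_r2[1] + mv_r2_to_r3[1])

def project_to_current(mv_concat, t_r1, t_r3, t_cur):
    """Linearly scale the concatenated vector (spanning t_r1..t_r3) to
    the current frame's temporal distance from r1."""
    s = (t_cur - t_r1) / (t_r3 - t_r1)
    return (mv_concat[0] * s, mv_concat[1] * s)

def fill_missing(field, y, x):
    """Interpolate an unavailable motion vector from its available
    4-neighbors, as the abstract's gap-filling step suggests."""
    nbrs = [field[j][i]
            for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            if 0 <= j < len(field) and 0 <= i < len(field[0])
            and field[j][i] is not None]
    return (sum(v[0] for v in nbrs) / len(nbrs),
            sum(v[1] for v in nbrs) / len(nbrs))
```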

    BLOCK-BASED OPTICAL FLOW ESTIMATION FOR MOTION COMPENSATED PREDICTION IN VIDEO CODING

    Publication number: US20240195979A1

    Publication date: 2024-06-13

    Application number: US18542997

    Filing date: 2023-12-18

    Applicant: GOOGLE LLC

    Abstract: A motion vector for a current block of a current frame is decoded from a compressed bitstream. A location of a reference block within an un-generated reference frame is identified. The reference block is generated using a forward reference frame and a backward reference frame without generating the un-generated reference frame. The reference block is generated by identifying an extended reference block by extending the reference block at each boundary of the reference block by a number of pixels related to a filter length of a filter used in sub-pixel interpolation; and generating pixel values of only the extended reference block by performing a projection using the forward reference frame and the backward reference frame without generating the whole of the un-generated reference frame. The current block is then decoded based on the reference block and the motion vector.
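The extended-block sizing described above can be sketched as below: the reference block is enlarged at every boundary by a margin tied to the interpolation filter length, so that sub-pixel interpolation never needs pixels that were not generated. The half-taps margin formula is an assumption for illustration.

```python
# Sketch of extended reference block sizing for sub-pixel interpolation.
# Padding each side by half the filter taps (an assumed margin rule)
# keeps every filter read inside the generated pixels.

def extended_block_dims(block_w, block_h, filter_len):
    """Return (width, height) of the extended reference block."""
    margin = filter_len // 2
    return block_w + 2 * margin, block_h + 2 * margin
```

Only this extended region is projected from the forward and backward reference frames, which is what lets the decoder skip generating the whole reference frame.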

    Mapping-aware coding tools for 360 degree videos

    Publication number: US11924467B2

    Publication date: 2024-03-05

    Application number: US17527590

    Filing date: 2021-11-16

    Applicant: GOOGLE LLC

    Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere. These mapping-aware coding tools contemplate changes to video information by the mapping of 360 degree video data into a conventional video format.

    Motion field estimation based on motion trajectory derivation

    Publication number: US11917128B2

    Publication date: 2024-02-27

    Application number: US17090094

    Filing date: 2020-11-05

    Applicant: GOOGLE LLC

    CPC classification number: H04N19/105 H04N19/139 H04N19/172 H04N19/573

    Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame. During decoding, the motion field estimate may be determined using motion vectors signaled within a bitstream and without additional side information, thereby improving prediction coding efficiency.
