DIRECTIONAL DEBLOCKING FILTER
    Invention Patent Application

    Publication No.: US20190052912A1

    Publication Date: 2019-02-14

    Application No.: US15844894

    Filing Date: 2017-12-18

    Applicant: GOOGLE LLC

    Abstract: Multiple directional filters are applied against lines of pixels associated with a video block to determine filtered noise values. Each directional filter uses a different direction for filtering lines of pixels. For example, for each pixel value of the video block along a line of pixels having a direction corresponding to a directional filter, a difference can be determined between the pixel value and a corresponding pixel value along the line of pixels and outside of the video block. A value for the line of pixels is determined as the sum of the absolute values of those differences, and a filtered noise value is determined as the sum of the values for the lines of pixels. The directional filter that yields the lowest filtered noise value for the video block is then selected, and the video block is filtered using the selected directional filter.
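
    The selection step described here amounts to computing a sum of absolute differences per candidate direction and keeping the direction with the smallest total. A minimal Python sketch of that selection follows; the direction set, the choice of "corresponding pixel outside the block" (taken one block-length farther along the line), and all function names are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

# Candidate filter directions as (row_step, col_step) per pixel step along the line.
# This particular direction set is illustrative, not taken from the patent.
DIRECTIONS = {
    "horizontal": (0, 1),
    "vertical": (1, 0),
    "diagonal_down": (1, 1),
    "diagonal_up": (-1, 1),
}

def directional_noise(frame, top, left, size, direction):
    """Filtered noise value for one direction: for every pixel in the block,
    take the absolute difference against the pixel one block-length farther
    along the line (which lies outside the block), and sum over all lines."""
    dr, dc = direction
    h, w = frame.shape
    total = 0
    for r in range(top, top + size):
        for c in range(left, left + size):
            rr, cc = r + dr * size, c + dc * size  # corresponding pixel outside the block
            if 0 <= rr < h and 0 <= cc < w:
                total += abs(int(frame[r, c]) - int(frame[rr, cc]))
    return total

def select_directional_filter(frame, top, left, size):
    """Return the direction whose filtered noise value is lowest for the block.
    `frame` is assumed to be a 2-D numpy array of pixel values."""
    noise = {name: directional_noise(frame, top, left, size, d)
             for name, d in DIRECTIONS.items()}
    return min(noise, key=noise.get)
```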

    Multi-Frame Motion Compensation Synthesis For Video Coding

    Publication No.: US20250150574A1

    Publication Date: 2025-05-08

    Application No.: US18836951

    Filing Date: 2022-03-07

    Applicant: Google LLC

    Abstract: A motion vector for a current block of a current frame is decoded. The motion vector for the current block refers to a first reference block in a first reference frame. A first prediction block of two or more prediction blocks is identified in the first reference frame using the first reference block. A first grid-aligned block is identified based on the first reference block. A second reference block is identified using a motion vector of the first grid-aligned block in a second reference frame. A second prediction block of the two or more prediction blocks is identified in the second reference frame using the second reference block. The two or more prediction blocks are combined to obtain a prediction block for the current block.
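
    The abstract chains two motion hops: the decoded motion vector reaches a first reference block, a grid-aligned block covering it supplies a second motion vector into another reference frame, and the two resulting prediction blocks are combined. A minimal Python sketch of that flow follows; the grid-snapping rule, the `mv_field_ref1` lookup, the simple averaging, and all names are illustrative assumptions rather than the patent's actual design.

```python
import numpy as np

def grid_align(r, c, block_size):
    """Snap a position to the coding-block grid (illustrative snapping rule)."""
    return (r // block_size) * block_size, (c // block_size) * block_size

def synthesize_prediction(cur_pos, mv1, ref1, ref2, mv_field_ref1, block_size=8):
    """Build a compound prediction for the current block from two reference frames.

    mv_field_ref1[(row, col)] is assumed to hold the motion vector that the
    grid-aligned block at (row, col) in ref1 used toward ref2; this storage
    scheme is an assumption for illustration."""
    r, c = cur_pos
    # First prediction block: apply the decoded motion vector into ref1.
    r1, c1 = r + mv1[0], c + mv1[1]
    pred1 = ref1[r1:r1 + block_size, c1:c1 + block_size]

    # Grid-aligned block covering the first reference block, and its own MV.
    gr, gc = grid_align(r1, c1, block_size)
    mv2 = mv_field_ref1.get((gr, gc), (0, 0))

    # Second prediction block: follow that motion vector into ref2.
    r2, c2 = r1 + mv2[0], c1 + mv2[1]
    pred2 = ref2[r2:r2 + block_size, c2:c2 + block_size]

    # Combine the two prediction blocks (simple average here).
    return ((pred1.astype(np.int32) + pred2.astype(np.int32)) // 2).astype(ref1.dtype)
```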

    Video coding using constructed reference frames

    Publication No.: US12184901B2

    Publication Date: 2024-12-31

    Application No.: US17834972

    Filing Date: 2022-06-08

    Applicant: GOOGLE LLC

    Abstract: Video coding using constructed reference frames may include generating, by a processor in response to instructions stored on a non-transitory computer readable medium, a reconstructed video. Generating the reconstructed video may include receiving an encoded bitstream. Video coding using constructed reference frames may include generating a reconstructed non-showable reference frame. Generating the reconstructed non-showable reference frame may include decoding a first encoded frame from the encoded bitstream. Video coding using constructed reference frames may include generating a reconstructed frame. Generating the reconstructed frame may include decoding a second encoded frame from the encoded bitstream using the reconstructed non-showable reference frame as a reference frame. Video coding using constructed reference frames may include including the reconstructed frame in the reconstructed video and outputting the reconstructed video.
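
    The decode loop this abstract describes keeps the first decoded frame as a usable reference without ever outputting it, then decodes the shown frames against it. A minimal Python sketch follows, assuming each bitstream entry carries a showable flag and that `decode_frame` stands in for an actual frame decoder; both are assumptions for illustration.

```python
def reconstruct_video(encoded_bitstream, decode_frame):
    """Minimal sketch of the decode loop described in the abstract.

    `encoded_bitstream` is assumed to be an iterable of (payload, showable)
    pairs, and `decode_frame(payload, refs)` a caller-supplied frame decoder."""
    reference_buffer = []
    reconstructed_video = []
    for payload, showable in encoded_bitstream:
        frame = decode_frame(payload, reference_buffer)
        reference_buffer.append(frame)         # usable as a reference either way
        if showable:
            reconstructed_video.append(frame)  # non-showable reference frames are never output
    return reconstructed_video
```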

    Block-based optical flow estimation for motion compensated prediction in video coding

    Publication No.: US11876974B2

    Publication Date: 2024-01-16

    Application No.: US17738105

    Filing Date: 2022-05-06

    Applicant: GOOGLE LLC

    Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
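
    The abstract specifies a gating condition (the reference buffer must hold both a forward and a backward reference), a per-block flag, and block-level synthesis that avoids building the full optical flow reference frame. A minimal Python sketch of that control flow follows; the `RefFrame` structure, the reader interface, and the injected `synthesize_of_block` and `decode_conventional` callables are assumptions for illustration, not the codec's actual API.

```python
from dataclasses import dataclass

@dataclass
class RefFrame:
    display_idx: int   # position in display order
    pixels: object     # frame data (opaque in this sketch)

def optical_flow_available(ref_buffer, cur_idx):
    """Optical flow prediction is available only when the buffer holds both a
    forward reference (earlier in display order) and a backward reference
    (later in display order) relative to the current frame."""
    return (any(f.display_idx < cur_idx for f in ref_buffer)
            and any(f.display_idx > cur_idx for f in ref_buffer))

def decode_inter_block(reader, ref_buffer, cur_idx, block_pos,
                       synthesize_of_block, decode_conventional):
    """Per-block decode flow from the abstract. `reader`, `synthesize_of_block`,
    and `decode_conventional` are supplied by the caller."""
    if not optical_flow_available(ref_buffer, cur_idx) or not reader.read_flag():
        return decode_conventional(reader, ref_buffer, block_pos)

    mv = reader.read_motion_vector()        # motion vector for the current block
    fwd = max((f for f in ref_buffer if f.display_idx < cur_idx),
              key=lambda f: f.display_idx)  # nearest forward reference
    bwd = min((f for f in ref_buffer if f.display_idx > cur_idx),
              key=lambda f: f.display_idx)  # nearest backward reference

    # Locate the block inside the conceptual optical flow reference frame, then
    # synthesize only that block from the two references; the full optical flow
    # reference frame is never generated.
    loc = (block_pos[0] + mv[0], block_pos[1] + mv[1])
    return synthesize_of_block(fwd, bwd, loc)
```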

    Transform Kernel Selection and Entropy Coding

    Publication No.: US20220353534A1

    Publication Date: 2022-11-03

    Application No.: US17866612

    Filing Date: 2022-07-18

    Applicant: Google LLC

    Abstract: Transform kernel candidates including a vertical transform type associated with a vertical motion and a horizontal transform type associated with a horizontal motion can be encoded or decoded. During a decoding operation, a probability model for decoding encoded bitstream video data associated with a transform kernel candidate for an encoded transform block is identified based on one or both of a first transform kernel candidate selected for an above neighbor transform block of the encoded transform block or a second transform kernel candidate selected for a left neighbor transform block of the encoded transform block. The encoded bitstream video data associated with the transform kernel candidate is decoded using the probability model.
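
    The entropy-coding step conditions the probability model on the transform kernel candidates already chosen for the above and left neighbor transform blocks. A minimal Python sketch of such a context lookup follows; the kernel names, the context table, and the fallback rule are illustrative assumptions, not the codec's actual tables.

```python
# A transform kernel candidate pairs a vertical transform type (for vertical
# motion) with a horizontal transform type (for horizontal motion); the type
# names below are placeholders.
DEFAULT_KEY = (None, None)

def select_probability_model(context_models, above_kernel=None, left_kernel=None):
    """Pick the probability model used to entropy-code or decode the transform
    kernel candidate of the current transform block, conditioned on the kernel
    candidates of its above and left neighbors (either may be absent)."""
    key = (above_kernel, left_kernel)
    return context_models.get(key, context_models[DEFAULT_KEY])

# Example usage with a toy context table:
contexts = {
    DEFAULT_KEY: "ctx_default",
    (("DCT", "DCT"), ("DCT", "DCT")): "ctx_both_dct",
    (("ADST", "DCT"), None): "ctx_above_adst_dct",
}
model = select_probability_model(contexts,
                                 above_kernel=("DCT", "DCT"),
                                 left_kernel=("DCT", "DCT"))  # -> "ctx_both_dct"
```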

    VIDEO CODING USING CONSTRUCTED REFERENCE FRAMES

    Publication No.: US20220303583A1

    Publication Date: 2022-09-22

    Application No.: US17834972

    Filing Date: 2022-06-08

    Applicant: GOOGLE LLC

    Abstract: Video coding using constructed reference frames may include generating, by a processor in response to instructions stored on a non-transitory computer readable medium, a reconstructed video. Generating the reconstructed video may include receiving an encoded bitstream. Video coding using constructed reference frames may include generating a reconstructed non-showable reference frame. Generating the reconstructed non-showable reference frame may include decoding a first encoded frame from the encoded bitstream. Video coding using constructed reference frames may include generating a reconstructed frame. Generating the reconstructed frame may include decoding a second encoded frame from the encoded bitstream using the reconstructed non-showable reference frame as a reference frame. Video coding using constructed reference frames may include including the reconstructed frame in the reconstructed video and outputting the reconstructed video.

    DYNAMIC MOTION VECTOR REFERENCING FOR VIDEO CODING

    Publication No.: US20210112270A1

    Publication Date: 2021-04-15

    Application No.: US17132065

    Filing Date: 2020-12-23

    Applicant: GOOGLE LLC

    Abstract: Dynamic motion vector referencing is used to predict motion within video blocks. A motion trajectory is determined for a current frame including a video block to encode or decode based on a reference motion vector used for encoding or decoding one or more reference frames of the current frame. One or more temporal motion vector candidates are then determined for predicting motion within the video block based on the motion trajectory. A motion vector is selected from a motion vector candidate list including the one or more temporal motion vector candidates and used to generate a prediction block. The prediction block is then used to encode or decode the video block. The motion trajectory is based on an order of video frames indicated by frame offset values encoded to a bitstream. The motion vector candidate list may include one or more spatial motion vector candidates.
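
    The candidate derivation here follows a motion trajectory implied by a reference frame's motion vector and the frame offsets, then merges the projected temporal candidates with spatial ones into a candidate list. A minimal Python sketch follows, assuming simple linear trajectory scaling and an arbitrary list cap; neither is taken from the patent.

```python
def temporal_mv_candidate(ref_mv, d_ref, d_cur):
    """Project a reference frame's motion vector along its motion trajectory to
    the current frame by scaling with the ratio of frame-offset distances."""
    if d_ref == 0:
        return (0, 0)
    scale = d_cur / d_ref
    return (round(ref_mv[0] * scale), round(ref_mv[1] * scale))

def build_candidate_list(spatial_candidates, temporal_candidates, max_len=4):
    """Merge spatial and temporal motion vector candidates into one list,
    dropping duplicates; ordering and the length cap are illustrative."""
    merged = []
    for mv in spatial_candidates + temporal_candidates:
        if mv not in merged:
            merged.append(mv)
    return merged[:max_len]

# Example: a reference frame two offsets away used a motion vector of (8, -4);
# the current frame is one offset away along the same trajectory.
candidates = build_candidate_list(
    spatial_candidates=[(6, -2)],
    temporal_candidates=[temporal_mv_candidate((8, -4), d_ref=2, d_cur=1)],
)
# candidates -> [(6, -2), (4, -2)]
```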
