WAVEFRONT PARALLEL PROCESSING FOR VIDEO CODING
    11.
    Invention application (in force)

    Publication No.: US20130272370A1

    Publication Date: 2013-10-17

    Application No.: US13776071

    Filing Date: 2013-02-25

    CPC classification number: H04N19/436 H04N19/17 H04N19/174

    Abstract: In one example, a video coder may be configured to determine that a slice of a picture of video data begins in a row of coding tree units (CTUs) in the picture at a position other than a beginning of the row. Based on the determination, the video coder may be further configured to determine that the slice ends within the row of CTUs. The video coder may be further configured to code the slice based on the determination that the slice ends within the row of CTUs.
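
    The constraint this abstract describes can be sketched as a simple check on CTU addresses: a slice that begins at a position other than the start of a CTU row must also end within that row. The function names and the raster-order CTU addressing below are illustrative assumptions, not the patent's own notation.

```python
def slice_must_end_in_row(slice_start_ctu, ctus_per_row):
    """Return True if a slice starting at this raster-order CTU address
    begins mid-row and therefore must end within its CTU row."""
    return slice_start_ctu % ctus_per_row != 0

def slice_obeys_wpp_constraint(slice_start_ctu, slice_end_ctu, ctus_per_row):
    """Check the constraint described above: a slice that begins
    mid-row may not cross into the next CTU row."""
    if not slice_must_end_in_row(slice_start_ctu, ctus_per_row):
        return True  # slice starts at a row boundary; it may span rows
    # Mid-row start: the last CTU must lie in the same row as the first.
    return slice_start_ctu // ctus_per_row == slice_end_ctu // ctus_per_row
```

    With 4 CTUs per row, a slice covering CTUs 5..7 stays inside row 1 and passes, while a slice covering CTUs 5..9 spills into row 2 and fails.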

    MACHINE LEARNING BASED FLOW DETERMINATION FOR VIDEO CODING

    Publication No.: US20220272355A1

    Publication Date: 2022-08-25

    Application No.: US17676510

    Filing Date: 2022-02-21

    Abstract: Systems and techniques are described herein for processing video data. In some aspects, a method can include obtaining, by a machine learning system, input video data. The input video data includes one or more luminance components for a current frame. The method can include determining, by the machine learning system, motion information for the luminance component(s) of the current frame and motion information for one or more chrominance components of the current frame using the luminance component(s) for the current frame. In some cases, the method can include determining the motion information for the luminance component(s) based on the luminance component(s) of the current frame and at least one reconstructed luminance component of a previous frame. In some cases, the method can further include determining the motion information for the chrominance component(s) of the current frame using the motion information determined for the luminance component(s) of the current frame.
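
    One way to derive chroma motion from a luma motion field, assuming 4:2:0 subsampling, is to pool the luma flow down to chroma resolution and rescale the vectors to chroma-sample units. The 2x2 average pooling and the halving convention below are illustrative assumptions; the patent's machine learning system determines this mapping rather than fixing it by rule.

```python
import numpy as np

def chroma_motion_from_luma(luma_flow):
    """Derive a chroma-resolution motion field from a dense luma motion
    field (shape H x W x 2), assuming 4:2:0 subsampling: 2x2 average
    pooling of the luma flow, then halving the vectors so they are
    expressed in chroma-sample units."""
    h, w, _ = luma_flow.shape
    pooled = luma_flow.reshape(h // 2, 2, w // 2, 2, 2).mean(axis=(1, 3))
    return pooled / 2.0
```
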

    ADAPTATION PARAMETER SETS (APS) FOR ADAPTIVE LOOP FILTER (ALF) PARAMETERS

    Publication No.: US20200344473A1

    Publication Date: 2020-10-29

    Application No.: US16842343

    Filing Date: 2020-04-07

    Abstract: Techniques are described for adaptation parameter sets (APS) for adaptive loop filter (ALF) parameters. One example involves obtaining an APS identifier (ID) value and an APS type value associated with a network abstraction layer (NAL) unit from a bitstream. A first APS associated with at least a portion of at least one picture is identified, with the first APS being uniquely identified by the combination of the APS type value and the APS ID value, and the APS ID value of the first APS falling in a range determined by the APS type value. The portion of the at least one picture is then reconstructed using an adaptive loop filter with parameters defined by the first APS uniquely identified by the APS type value and the APS ID value.
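
    The identification scheme can be sketched as a table keyed by the (type, ID) pair, with a per-type ID range enforced on storage. The type names and ranges below are illustrative assumptions (VVC uses comparable ranges, e.g. ALF APS IDs 0-7 and LMCS APS IDs 0-3); the class and method names are hypothetical.

```python
# Hypothetical APS types with per-type ID ranges.
APS_ID_RANGE = {"ALF": range(0, 8), "LMCS": range(0, 4)}

class ApsTable:
    """Store APSs keyed by (type, ID): the pair uniquely identifies an
    APS, so the same numeric ID can coexist under different types."""
    def __init__(self):
        self._table = {}

    def store(self, aps_type, aps_id, params):
        # The valid ID range depends on the APS type value.
        if aps_id not in APS_ID_RANGE[aps_type]:
            raise ValueError(f"APS ID {aps_id} out of range for {aps_type}")
        self._table[(aps_type, aps_id)] = params

    def fetch(self, aps_type, aps_id):
        return self._table[(aps_type, aps_id)]
```

    Keying on the pair is what lets an ALF APS and an LMCS APS share the same numeric ID without colliding.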

    BLOCK-BASED QUANTIZED RESIDUAL DOMAIN PULSE CODE MODULATION ASSIGNMENT FOR INTRA PREDICTION MODE DERIVATION

    Publication No.: US20200344469A1

    Publication Date: 2020-10-29

    Application No.: US16854720

    Filing Date: 2020-04-21

    Abstract: Techniques are described for improving video coding. For example, a first block of a picture included in an encoded video bitstream can be obtained. A second block of the picture can be determined as being coded (e.g., encoded) using a type of block-based quantized residual domain pulse code modulation (BDPCM) mode, such as vertical BDPCM mode or horizontal BDPCM mode. In the event the second block is coded using the vertical BDPCM mode, a vertical intra-prediction mode can be determined for an intra-prediction mode list for the first block. The vertical intra-prediction mode can be added to the intra-prediction mode list for the first block. In the event the second block is coded using the horizontal BDPCM mode, a horizontal intra-prediction mode can be determined for the intra-prediction mode list for the first block and the horizontal intra-prediction mode can be added to the intra-prediction mode list.
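
    The mapping described above (vertical BDPCM neighbor contributes a vertical intra mode, horizontal contributes a horizontal one) can be sketched as follows. The mode numbers follow the familiar HEVC/VVC convention (18 = horizontal, 50 = vertical in VVC's 67-mode scheme) but are assumptions here, as are the function names.

```python
# Assumed mode numbers, per the VVC convention.
HOR_IDX, VER_IDX = 18, 50

def intra_mode_for_bdpcm_neighbor(bdpcm_direction):
    """Map a neighboring block's BDPCM direction to the intra-prediction
    mode it contributes to the current block's mode list."""
    return VER_IDX if bdpcm_direction == "vertical" else HOR_IDX

def build_mode_list(neighbor_bdpcm_directions):
    """Collect candidate intra modes from BDPCM-coded neighbors,
    skipping duplicates while preserving insertion order."""
    modes = []
    for direction in neighbor_bdpcm_directions:
        mode = intra_mode_for_bdpcm_neighbor(direction)
        if mode not in modes:
            modes.append(mode)
    return modes
```
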

    PREDICTION USING A COMPRESSION NETWORK
    18.
    Published invention application

    Publication No.: US20240308505A1

    Publication Date: 2024-09-19

    Application No.: US18183867

    Filing Date: 2023-03-14

    CPC classification number: B60W30/0953 G06T7/215

    Abstract: A device includes one or more processors configured to obtain encoded data associated with one or more motion values. The one or more processors are also configured to obtain conditional input of a compression network, wherein the conditional input is based on one or more first predicted motion values. The one or more processors are further configured to process, using the compression network, the encoded data and the conditional input to generate one or more second predicted motion values.

    MOTION COMPENSATION USING SIZE OF REFERENCE PICTURE

    Publication No.: US20230283769A1

    Publication Date: 2023-09-07

    Application No.: US18170909

    Filing Date: 2023-02-17

    CPC classification number: H04N19/105 H04N19/503 H04N19/176

    Abstract: A video coder is configured to determine a reference block of a reference picture for prediction of a current block of a current picture using motion information and to generate a set of reference samples for the current block of the current picture. To generate the set of reference samples, the video coder is configured to perform reference sample clipping on the reference block of the reference picture based on a size of the reference picture. The video coder is further configured to generate a prediction block for the current block of the current picture based on the set of reference samples.
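
    The core operation, clamping reference-sample coordinates to the bounds of the reference picture (whose size may differ from the current picture's), can be sketched as below. The function name and the list-of-rows picture representation are illustrative assumptions.

```python
def clipped_reference_sample(ref_picture, x, y):
    """Fetch a reference sample, clipping the (x, y) coordinates to the
    reference picture's own width and height, which may differ from the
    current picture's size."""
    h = len(ref_picture)
    w = len(ref_picture[0])
    xc = min(max(x, 0), w - 1)  # clip horizontally to [0, w-1]
    yc = min(max(y, 0), h - 1)  # clip vertically to [0, h-1]
    return ref_picture[yc][xc]
```

    Clipping against the reference picture's size, rather than the current picture's, is the point of the technique: with reference picture resampling the two sizes need not match.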

    FRONT-END ARCHITECTURE FOR NEURAL NETWORK BASED VIDEO CODING

    Publication No.: US20220191523A1

    Publication Date: 2022-06-16

    Application No.: US17643383

    Filing Date: 2021-12-08

    Abstract: Techniques are described herein for processing video data using a neural network system. For instance, a process can include generating, by a first convolutional layer of an encoder sub-network of the neural network system, output values associated with a luminance channel of a frame. The process can include generating, by a second convolutional layer of the encoder sub-network, output values associated with at least one chrominance channel of the frame. The process can include generating, by a third convolutional layer based on the output values associated with the luminance channel of the frame and the output values associated with the at least one chrominance channel of the frame, a combined representation of the frame. The process can further include generating encoded video data based on the combined representation of the frame.
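
    The branch structure alone (a layer on the full-resolution luma channel, a layer on the half-resolution chroma channels, then a layer combining them) can be illustrated with a toy sketch. The stride-2 subsampling standing in for the luma convolution and the channel concatenation standing in for the combining layer are assumptions for illustration; the real layers are learned convolutions.

```python
import numpy as np

def frontend_combine(luma, chroma):
    """Toy sketch of the front-end branch structure: a stride-2 'layer'
    brings the H x W luma channel to chroma resolution, an identity
    'layer' passes the H/2 x W/2 x 2 chroma channels through, and the
    combining 'layer' concatenates them into one representation."""
    luma_feat = luma[::2, ::2][..., np.newaxis]  # H/2 x W/2 x 1
    chroma_feat = chroma                          # H/2 x W/2 x 2
    return np.concatenate([luma_feat, chroma_feat], axis=-1)
```
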
