Spatial proximity context entropy coding
    41.
    Invention grant
    Spatial proximity context entropy coding (In force)

    Publication number: US09179151B2

    Publication date: 2015-11-03

    Application number: US14057554

    Filing date: 2013-10-18

    Applicant: Google Inc.

    CPC classification number: H04N19/13 H04N19/129 H04N19/91

    Abstract: Encoding and decoding using spatial proximity context entropy coding may include identifying a plurality of transform coefficients for a current block of a current frame of a video stream. The plurality of transform coefficients may be ordered based on a scan order. A current transform coefficient may be identified from the plurality of transform coefficients. A plurality of context coefficients may be identified from the plurality of transform coefficients. Each context coefficient may be spatially proximate to the current transform coefficient and may be available for entropy coding the current transform coefficient. An entropy coding probability for the current transform coefficient may be identified based on the scan order and the plurality of context coefficients. The current transform coefficient may be entropy coded based on the entropy coding probability. The entropy coded current transform coefficient may be included in an output bitstream, which may be stored or transmitted.

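    Illustrative sketch (not taken from the patent): the short Python example below shows one way the described context selection could work, assuming a zig-zag scan order, an above/left/above-left neighbourhood, and a small context-indexed probability table; all of these specifics are assumptions introduced here for illustration.

```python
# A minimal, hypothetical sketch of the idea (not the patented implementation):
# for a transform block traversed in a zig-zag scan order, gather spatially
# proximate coefficients that have already been coded and use them to pick an
# entropy-coding probability from an assumed context-indexed table.
import numpy as np

def zigzag_order(n):
    """Return (row, col) positions of an n x n block in zig-zag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def context_coefficients(coeffs, scan, index):
    """Collect the above/left/above-left neighbours of the current coefficient
    that appear earlier in the scan order (i.e. are available to the coder)."""
    coded = set(scan[:index])
    r, c = scan[index]
    candidates = [(r - 1, c), (r, c - 1), (r - 1, c - 1)]
    return [coeffs[p] for p in candidates if p in coded]

def entropy_coding_probability(context, prob_table):
    """Map the context (here simply the count of non-zero neighbours) to a
    probability used to entropy code the current coefficient."""
    nonzero = sum(1 for v in context if v != 0)
    return prob_table[min(nonzero, len(prob_table) - 1)]

block = np.array([[12, 4, 0, 0],
                  [ 3, 0, 1, 0],
                  [ 0, 0, 0, 0],
                  [ 0, 0, 0, 0]])
scan = zigzag_order(4)
prob_table = [0.9, 0.6, 0.4]  # assumed probabilities, one per context class
for i in range(6):
    r, c = scan[i]
    ctx = context_coefficients(block, scan, i)
    p = entropy_coding_probability(ctx, prob_table)
    print((r, c), block[r, c], ctx, p)
```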

    RATE-DISTORTION-COMPLEXITY OPTIMIZATION OF VIDEO ENCODING GUIDED BY VIDEO DESCRIPTION LENGTH
    42.
    Invention application
    RATE-DISTORTION-COMPLEXITY OPTIMIZATION OF VIDEO ENCODING GUIDED BY VIDEO DESCRIPTION LENGTH (In force)

    Publication number: US20150036740A1

    Publication date: 2015-02-05

    Application number: US14516349

    Filing date: 2014-10-16

    Applicant: Google Inc.

    Abstract: A system and method provide a video description length (VDL) guided, constant-quality video encoding strategy with a bitrate constraint, and a video coding system for optimizing the encoding bitrate, distortion, and complexity of an input video. The method obtains the overall VDL, temporal VDL, and spatial VDL of the input video and compares them with a reference overall VDL, temporal VDL, and spatial VDL. Based on the comparison, the method adjusts the encoding bitrate, overall encoding complexity, temporal encoding complexity, and spatial encoding complexity of the input video, and encodes the input video with the adjusted values.

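    Illustrative sketch (the VDL values and the simple proportional scaling rule below are assumptions for illustration, not the claimed method): the example compares the input video's overall, temporal, and spatial VDLs against reference values and scales the bitrate and complexity settings accordingly.

```python
# A hedged sketch of the general strategy: compare input VDLs with reference
# VDLs and adjust encoder settings in proportion to the ratios.
from dataclasses import dataclass

@dataclass
class VDL:
    overall: float   # description length of the whole clip
    temporal: float  # contribution of motion / temporal detail
    spatial: float   # contribution of texture / spatial detail

def adjust_encoding(input_vdl, reference_vdl, base_bitrate_kbps, base_complexity):
    """Compare the input video's VDLs with the reference VDLs and scale the
    bitrate and complexity settings in proportion to the ratios."""
    overall_ratio = input_vdl.overall / reference_vdl.overall
    temporal_ratio = input_vdl.temporal / reference_vdl.temporal
    spatial_ratio = input_vdl.spatial / reference_vdl.spatial
    return {
        "bitrate_kbps": base_bitrate_kbps * overall_ratio,
        "overall_complexity": base_complexity * overall_ratio,
        "temporal_complexity": base_complexity * temporal_ratio,  # e.g. motion-search effort
        "spatial_complexity": base_complexity * spatial_ratio,    # e.g. partition/mode search depth
    }

settings = adjust_encoding(VDL(overall=1.2e6, temporal=5.0e5, spatial=7.0e5),
                           VDL(overall=1.0e6, temporal=4.0e5, spatial=6.0e5),
                           base_bitrate_kbps=2000, base_complexity=1.0)
print(settings)
```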

    DEPTH-MAP GENERATION FOR AN INPUT IMAGE USING AN EXAMPLE APPROXIMATE DEPTH-MAP ASSOCIATED WITH AN EXAMPLE SIMILAR IMAGE
    43.

    Publication number: US20190037197A1

    Publication date: 2019-01-31

    Application number: US15295944

    Filing date: 2016-10-17

    Applicant: Google Inc.

    Abstract: A two-dimensional image to be converted to a first three-dimensional image may be received. A second three-dimensional image that is visually similar to the two-dimensional image that is to be converted may be identified. A feature-to-depth mapping function may be computed for the first three-dimensional image by using an approximate depth map of the second three-dimensional image that is visually similar to the two-dimensional image that is to be converted. The feature-to-depth mapping function may be applied to a plurality of pixels of the two-dimensional image to determine a depth value for the plurality of pixels of the two-dimensional image. The first three-dimensional image may be generated based on the depth values for the plurality of pixels of the two-dimensional image.
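
    Illustrative sketch (the choice of luminance as the feature and a per-bin average as the mapping are assumptions for illustration, not prescribed by the patent): learn a feature-to-depth mapping from the visually similar example image and its approximate depth map, then apply it to the pixels of the two-dimensional input image.

```python
# A hedged sketch: depth as a per-luminance-bin average learned from the
# example image, then applied to every pixel of the target 2-D image.
import numpy as np

def fit_feature_to_depth(example_image, example_depth, bins=32):
    """Learn depth as a function of luminance from the example image and its
    approximate depth map."""
    feat = example_image.reshape(-1).astype(float)
    depth = example_depth.reshape(-1).astype(float)
    edges = np.linspace(0, 255, bins + 1)
    idx = np.clip(np.digitize(feat, edges) - 1, 0, bins - 1)
    mapping = np.full(bins, 0.5)
    for b in range(bins):
        mask = idx == b
        if mask.any():
            mapping[b] = depth[mask].mean()
    return edges, mapping

def apply_feature_to_depth(image, edges, mapping):
    """Assign each pixel of the 2-D input image a depth value."""
    idx = np.clip(np.digitize(image.reshape(-1).astype(float), edges) - 1,
                  0, len(mapping) - 1)
    return mapping[idx].reshape(image.shape)

rng = np.random.default_rng(0)
example = rng.integers(0, 256, (64, 64))        # visually similar example image
example_depth = example / 255.0                 # its approximate depth map (toy data)
target = rng.integers(0, 256, (64, 64))         # 2-D image to convert
edges, mapping = fit_feature_to_depth(example, example_depth)
depth_map = apply_feature_to_depth(target, edges, mapping)
print(depth_map.shape, float(depth_map.min()), float(depth_map.max()))
```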

    Super-transform video coding
    44.
    Invention grant

    Publication number: US10104398B2

    Publication date: 2018-10-16

    Application number: US15700238

    Filing date: 2017-09-11

    Applicant: GOOGLE INC.

    Abstract: Super-transform coding may include identifying a plurality of sub-blocks for prediction coding a current block, determining whether to encode the current block using a super-transform, and super-prediction coding the current block. Super-prediction coding may include generating a super-prediction block for the current block by generating a prediction block for each unpartitioned sub-block of the current block, generating a super-prediction block for each partitioned sub-block of the current block by super-prediction coding the sub-block, and including the prediction blocks and super-prediction blocks for the sub-blocks in a super-prediction block for the current block. Including the prediction blocks and super-prediction blocks for the sub-blocks in a super-prediction block for the current block may include filtering at least a portion of each prediction block and each super-prediction block based on a spatially adjacent prediction block. Super-transform coding may include transforming the super-prediction block for the current block using a corresponding super-transform.
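
    Illustrative sketch (the partition tree, the boundary-averaging filter, and the use of a DCT are assumptions for illustration, not the claimed coder): assemble a super-prediction block recursively from sub-block predictions, filter the internal sub-block boundaries using spatially adjacent prediction samples, and apply one transform over the whole block.

```python
# A simplified, hypothetical sketch of super-prediction plus a super-transform.
import numpy as np
from scipy.fft import dctn

def super_prediction(node, size):
    """node is either a 2-D prediction array (unpartitioned sub-block) or a
    list of four child nodes in raster order (partitioned sub-block)."""
    if isinstance(node, np.ndarray):
        return node.astype(float)
    half = size // 2
    block = np.zeros((size, size))
    for i, child in enumerate(node):
        r, c = (i // 2) * half, (i % 2) * half
        block[r:r + half, c:c + half] = super_prediction(child, half)
    # Filter the internal boundaries based on spatially adjacent predictions.
    block[half - 1:half + 1, :] = block[half - 1:half + 1, :].mean(axis=0)
    block[:, half - 1:half + 1] = block[:, half - 1:half + 1].mean(axis=1, keepdims=True)
    return block

def leaf(value, n):
    """A flat per-sub-block prediction, standing in for intra/inter prediction."""
    return np.full((n, n), float(value))

# 16x16 block: three 8x8 sub-blocks plus one 8x8 sub-block split into 4x4s.
tree = [leaf(10, 8), leaf(20, 8),
        [leaf(30, 4), leaf(40, 4), leaf(50, 4), leaf(60, 4)], leaf(70, 8)]
prediction = super_prediction(tree, 16)
source = np.full((16, 16), 32.0)                  # toy source block
residual = source - prediction
coeffs = dctn(residual, norm="ortho")             # one super-transform over the block
print(prediction.shape, coeffs.shape)
```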

    Restoration for Video Coding with Self-guided Filtering and Subspace Projection
    45.

    Publication number: US20180131968A1

    Publication date: 2018-05-10

    Application number: US15719918

    Filing date: 2017-09-29

    Applicant: GOOGLE INC.

    Abstract: Restoring a degraded frame resulting from reconstruction of a source frame is described. A method includes generating, using first restoration parameters, a first guide tile for a degraded tile of the degraded frame, determining a projection parameter for a projection operation, and encoding, in an encoded bitstream, the first restoration parameters and the projection parameter. The projection operation relates differences between a source tile of the source frame and the degraded tile to differences between the first guide tile and the degraded tile.
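
    Illustrative sketch (a box blur stands in for the self-guided filter, and a single least-squares parameter stands in for the projection operation; both are assumptions for illustration): build a guide tile from the degraded tile, then relate the source/degraded difference to the guide/degraded difference with a projection parameter of the kind that would be encoded in the bitstream.

```python
# A condensed sketch of the projection step with hypothetical stand-ins.
import numpy as np

def guide_tile(degraded, radius=2):
    """Stand-in guided restoration: a box blur whose radius plays the role of
    a restoration parameter."""
    k = 2 * radius + 1
    padded = np.pad(degraded, radius, mode="edge")
    out = np.zeros_like(degraded, dtype=float)
    rows, cols = degraded.shape
    for dr in range(k):
        for dc in range(k):
            out += padded[dr:dr + rows, dc:dc + cols]
    return out / (k * k)

def projection_parameter(source, degraded, guide):
    """Least-squares alpha so that alpha * (guide - degraded) best matches
    (source - degraded); this is the parameter the encoder would signal."""
    a = (guide - degraded).ravel()
    b = (source - degraded).ravel()
    return float(a @ b) / float(a @ a)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 255.0, 32)
source = np.add.outer(x, x) / 2.0                       # smooth toy source tile
degraded = source + rng.normal(0.0, 8.0, source.shape)  # tile after lossy coding
guide = guide_tile(degraded, radius=2)
alpha = projection_parameter(source, degraded, guide)
restored = degraded + alpha * (guide - degraded)        # decoder-side reconstruction
print(round(alpha, 3),
      float(np.mean((degraded - source) ** 2)),
      float(np.mean((restored - source) ** 2)))
```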
