-
Publication No.: US10225573B1
Publication Date: 2019-03-05
Application No.: US15420564
Filing Date: 2017-01-31
Applicant: GOOGLE INC.
Inventor: Debargha Mukherjee , James Bankoski , Yue Chen , Sarah Parker
IPC: H04N19/105 , H04N19/172 , H04N19/513 , H04N19/176 , H04N19/182
Abstract: A current block of a video frame can be encoded or decoded using parameterized motion models. First and second parameterized motion models are identified. The first parameterized motion model corresponds to a first motion model type, and the second parameterized motion model corresponds to a second motion model type. The first and second parameterized motion models are associated with one or more reference frames. One of the first or second parameterized motion models is selected along with an associated reference frame, such as based on a lowest prediction error. A motion vector is generated between the current block and the selected reference frame by warping pixels of the current block to a warped patch of the selected reference frame according to the selected parameterized motion model. A prediction block is generated using the motion vector, and the current block is encoded or decoded using the prediction block.
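The selection step the abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: it assumes a parameterized motion model is a 2x3 affine parameter tuple, measures prediction error as SAD against the warped patch, and picks the candidate (model, reference) pair with the lowest error. All function names are illustrative.

```python
import numpy as np

def warp_coords(coords, model):
    """Apply a 2x3 affine motion model (a, b, tx, c, d, ty) to (x, y) coords."""
    a, b, tx, c, d, ty = model
    x, y = coords[:, 0], coords[:, 1]
    return np.stack([a * x + b * y + tx, c * x + d * y + ty], axis=1)

def prediction_error(block, ref, origin, model):
    """SAD between the block and the warped patch it maps to in `ref`."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel() + origin[1], ys.ravel() + origin[0]], axis=1)
    warped = np.rint(warp_coords(coords, model)).astype(int)
    wx = np.clip(warped[:, 0], 0, ref.shape[1] - 1)
    wy = np.clip(warped[:, 1], 0, ref.shape[0] - 1)
    pred = ref[wy, wx].reshape(h, w)
    return np.abs(block.astype(int) - pred.astype(int)).sum(), pred

def select_model(block, origin, candidates):
    """Pick the (model, reference) pair with the lowest prediction error."""
    return min(candidates,
               key=lambda mr: prediction_error(block, mr[1], origin, mr[0])[0])
```

A real codec would also entropy-code which model type and reference frame were chosen; the sketch only covers the lowest-error selection.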
-
Publication No.: US10165283B1
Publication Date: 2018-12-25
Application No.: US15470140
Filing Date: 2017-03-27
Applicant: Google Inc.
Inventor: Yue Chen , Debargha Mukherjee
IPC: H04N19/50 , H04N19/107 , H04N19/159 , H04N19/51 , H04N19/176 , H04N19/182 , H04N19/184
Abstract: Combining intra-frame and inter-frame prediction is described. A first combined prediction block for a first block is formed by combining weighted pixel values of a first inter prediction block and a first intra prediction block. A second combined prediction block is formed by combining pixel values of a second intra prediction block and a second inter prediction block. The first intra prediction block and the second intra prediction block have pixel dimensions corresponding to the first block. The pixel values of the second inter prediction block have pixel locations corresponding to a first partitioned area formed by an oblique line extending across the first block, and the pixel values of the second intra prediction block used in forming the second combined prediction block have pixel locations corresponding to a second partitioned area formed by the oblique line. One of the combined prediction blocks is selected to encode the first block.
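The two combination modes in the abstract can be sketched as below. This is an illustrative simplification, not the claimed method: the first mode blends the inter and intra predictions with per-pixel weights, and the second fills the two areas cut by an oblique line from the inter and intra predictions respectively. The fixed weight and the diagonal line position are assumptions.

```python
import numpy as np

def combine_weighted(inter, intra, w_inter=0.5):
    """First combined prediction: per-pixel weighted average of the
    inter and intra prediction blocks."""
    return np.rint(w_inter * inter + (1 - w_inter) * intra).astype(inter.dtype)

def combine_wedge(inter, intra):
    """Second combined prediction: an oblique line splits the block;
    inter pixels fill one partitioned area, intra pixels the other."""
    h, w = inter.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = xs + ys < (h + w) / 2  # pixels on one side of the oblique line
    return np.where(mask, inter, intra)
```

An encoder following the abstract would compute both combined blocks and keep whichever gives the lower rate-distortion cost.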
-
Publication No.: US10110914B1
Publication Date: 2018-10-23
Application No.: US15266480
Filing Date: 2016-09-15
Applicant: GOOGLE INC.
Inventor: Yue Chen , Debargha Mukherjee
IPC: H04N19/13 , H04N19/52 , H04N19/176 , H04N19/182 , H04N19/184
Abstract: Encoding or decoding blocks of video frames using locally adaptive warped motion compensation can include determining projection samples for predicting a warped motion of a current block to be encoded or decoded based on a warping model of a neighbor block adjacent to the current block. Parameters of a projection model can be determined based on the projection samples. A prediction block can be generated by projecting pixels of the current block to a warped patch within a reference frame using the parameters of the projection model. The warped patch can be a non-rectangular patch having a shape and a position in the reference frame indicated by the parameters of the projection model.
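The parameter-fitting step can be sketched as a least-squares affine fit: given projection samples (pairs of source coordinates and their positions under the neighbor's warping model), solve for the 2x3 projection-model matrix. This is a generic illustration of fitting a projection model from samples, not the specific estimator in the patent.

```python
import numpy as np

def fit_projection_model(src, dst):
    """Least-squares fit of a 2x3 affine projection model so that
    dst ~= M @ [x, y, 1] for each (src, dst) projection sample pair."""
    A = np.column_stack([src, np.ones(len(src))])  # homogeneous coordinates
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T  # 2x3 model matrix
```

With the fitted matrix, each pixel of the current block can be projected into the (generally non-rectangular) warped patch of the reference frame to form the prediction block.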
-
Publication No.: US20170353735A1
Publication Date: 2017-12-07
Application No.: US15174223
Filing Date: 2016-06-06
Applicant: GOOGLE INC.
Inventor: Debargha Mukherjee , Yue Chen
IPC: H04N19/567 , H04N19/182 , H04N19/176
CPC classification number: H04N19/567 , H04N19/107 , H04N19/119 , H04N19/157 , H04N19/176 , H04N19/182 , H04N19/583
Abstract: Encoding frames of a video stream may include encoding a current block of a current frame, generating a base prediction block for the current block based on current prediction parameters associated with the current block, identifying adjacent prediction parameters used for encoding previously encoded adjacent blocks that are adjacent to the current block. At least one side of the current block is adjacent to two or more of the previously encoded adjacent blocks. The encoding may include determining overlap regions in the current block, each of the overlap regions corresponding to a respective previously encoded adjacent block, generating an overlapped prediction of pixel values for each of the overlap regions according to a weighted function of the base prediction and a prediction based on the adjacent prediction parameters. The weighted function may be based on a difference between the current prediction parameters and the adjacent prediction parameters.
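The weighted function in this abstract makes the neighbor's contribution depend on how far its prediction parameters differ from the current block's. A minimal sketch, with an assumed linear falloff (the actual weighting in the patent may differ):

```python
import numpy as np

def overlapped_prediction(base, neighbor_pred, param_diff, max_diff=16.0):
    """Blend the base prediction with the neighbor-parameter prediction in
    an overlap region; the neighbor's weight shrinks as its prediction
    parameters diverge from the current block's (param_diff)."""
    w_neighbor = 0.5 * max(0.0, 1.0 - param_diff / max_diff)
    return np.rint((1 - w_neighbor) * base + w_neighbor * neighbor_pred)
```

When the parameters match (difference zero), the two predictions are blended equally; when they differ by `max_diff` or more, the base prediction is used unchanged.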
-
Publication No.: US20180184118A1
Publication Date: 2018-06-28
Application No.: US15387797
Filing Date: 2016-12-22
Applicant: GOOGLE INC.
Inventor: Debargha Mukherjee , Yue Chen
IPC: H04N19/583 , H04N19/167 , H04N19/176
CPC classification number: H04N19/583 , H04N19/167 , H04N19/176
Abstract: A method for processing a selected portion of a video, the selected portion of the video having a plurality of blocks. The method includes obtaining current prediction parameters for all of a plurality of adjacent blocks from the plurality of blocks that are adjacent to a current block from the plurality of blocks in the selected portion of the video, generating a base prediction for the current block from the plurality of blocks using the current prediction parameters associated with the current block, identifying adjacent prediction parameters from the current prediction parameters for a first adjacent block from the plurality of adjacent blocks, determining an overlap region within the current block and adjacent to the first adjacent block, and generating, for each pixel within the overlap region, an overlapped prediction for the pixel as a function of the base prediction and a prediction based on the adjacent prediction parameters.
-
Publication No.: US20180109811A1
Publication Date: 2018-04-19
Application No.: US15297603
Filing Date: 2016-10-19
Applicant: Google Inc.
Inventor: Debargha Mukherjee , Yue Chen , Aamir Anis
IPC: H04N19/65 , H04N19/184 , H04N19/172 , H04N19/182
CPC classification number: H04N19/65 , H04N19/117 , H04N19/136 , H04N19/154 , H04N19/172 , H04N19/174 , H04N19/176 , H04N19/182 , H04N19/184 , H04N19/46 , H04N19/82
Abstract: Reducing error in a reconstructed frame is described. Pixels of the frame are classified into classes based on a classification scheme. Offset values for each class of at least some of the classes are determined, and a respective offset value for a class is applied to each pixel of the class, resulting in offset-adjusted pixels for the class. For the classes, a respective error rate reduction in using the respective offset value for a class as compared to omitting the respective offset value is determined, where the respective error rate reduction is based on the pixels of the class in the reconstructed frame, the offset-adjusted pixels of the class, and co-located source pixels in a source frame decoded to generate the reconstructed frame. A subset of classes is selected for reducing error in the reconstructed frame based on the error rate reductions.
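The classify-offset-select loop can be sketched as follows, assuming intensity bands as the classification scheme (one of several possibilities) and sum-of-squared-error as the error measure. Only classes whose offset actually reduces error against the co-located source pixels are kept, as in the abstract.

```python
import numpy as np

def band_offsets(recon, source, n_bands=4):
    """Classify reconstructed pixels into intensity bands; per band, the
    best additive offset is the mean (source - recon) error, and a band is
    selected only if applying its offset reduces the squared error."""
    bands = np.minimum((recon.astype(int) * n_bands) // 256, n_bands - 1)
    chosen = {}
    for b in range(n_bands):
        mask = bands == b
        if not mask.any():
            continue
        err = source[mask].astype(int) - recon[mask].astype(int)
        offset = int(np.rint(err.mean()))
        reduction = (err ** 2).sum() - ((err - offset) ** 2).sum()
        if reduction > 0:  # keep only classes that reduce error
            chosen[b] = offset
    return chosen
```

The selected subset of classes and their offsets would then be signaled so the decoder can apply the same adjustment.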
-
Publication No.: US09769499B2
Publication Date: 2017-09-19
Application No.: US14823269
Filing Date: 2015-08-11
Applicant: Google Inc.
Inventor: Debargha Mukherjee , Yue Chen , Shunyao Li
IPC: H04N19/61 , H04N19/59 , H04N19/176 , H04N19/122 , H04N19/583 , H04N19/184
CPC classification number: H04N19/61 , H04N19/122 , H04N19/176 , H04N19/184 , H04N19/583 , H04N19/59
Abstract: Super-transform coding may include identifying a plurality of sub-blocks for prediction coding a current block, determining whether to encode the current block using a super-transform, and super-prediction coding the current block. Super-prediction coding may include generating a super-prediction block for the current block by generating a prediction block for each unpartitioned sub-block of the current block, generating a super-prediction block for each partitioned sub-block of the current block by super-prediction coding the sub-block, and including the prediction blocks and super-prediction blocks for the sub-blocks in a super-prediction block for the current block. Including the prediction blocks and super-prediction blocks for the sub-blocks in a super-prediction block for the current block may include filtering at least a portion of each prediction block and each super-prediction block based on a spatially adjacent prediction block. Super-transform coding may include transforming the super-prediction block for the current block using a corresponding super-transform.
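The recursion in the abstract (predict each sub-block, stitch the results into one super-prediction block, then apply a single transform over the whole block) can be sketched as below. This is a structural illustration only: the edge filtering between adjacent sub-block predictions is omitted, the partition is assumed to be a quadtree dict, and `np.fft.fft2` stands in for the codec's actual super-transform.

```python
import numpy as np

def super_predict(block_tree, predict):
    """Recursively assemble a super-prediction block: unpartitioned leaves
    get an ordinary prediction; partitioned nodes stitch together the
    super-prediction blocks of their four children."""
    if "children" not in block_tree:  # unpartitioned sub-block
        return predict(block_tree)
    tl, tr, bl, br = (super_predict(c, predict) for c in block_tree["children"])
    return np.block([[tl, tr], [bl, br]])

def super_transform_encode(block, block_tree, predict):
    """Apply one transform over the whole residual instead of one
    transform per sub-block (np.fft.fft2 is a stand-in transform)."""
    residual = block.astype(float) - super_predict(block_tree, predict)
    return np.fft.fft2(residual)
```

The point of the super-transform is that a single large transform over the stitched prediction residual can compact energy better than four small per-sub-block transforms when the residual is smooth across sub-block boundaries.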
-
Publication No.: US20180014031A1
Publication Date: 2018-01-11
Application No.: US15700238
Filing Date: 2017-09-11
Applicant: GOOGLE INC.
Inventor: Debargha Mukherjee , Yue Chen , Shunyao Li
IPC: H04N19/61 , H04N19/184 , H04N19/122 , H04N19/176 , H04N19/59 , H04N19/583
CPC classification number: H04N19/61 , H04N19/122 , H04N19/176 , H04N19/184 , H04N19/583 , H04N19/59
Abstract: Super-transform coding may include identifying a plurality of sub-blocks for prediction coding a current block, determining whether to encode the current block using a super-transform, and super-prediction coding the current block. Super-prediction coding may include generating a super-prediction block for the current block by generating a prediction block for each unpartitioned sub-block of the current block, generating a super-prediction block for each partitioned sub-block of the current block by super-prediction coding the sub-block, and including the prediction blocks and super-prediction blocks for the sub-blocks in a super-prediction block for the current block. Including the prediction blocks and super-prediction blocks for the sub-blocks in a super-prediction block for the current block may include filtering at least a portion of each prediction block and each super-prediction block based on a spatially adjacent prediction block. Super-transform coding may include transforming the super-prediction block for the current block using a corresponding super-transform.
-
Publication No.: US10104398B2
Publication Date: 2018-10-16
Application No.: US15700238
Filing Date: 2017-09-11
Applicant: GOOGLE INC.
Inventor: Debargha Mukherjee , Yue Chen , Shunyao Li
IPC: H04N19/61 , H04N19/59 , H04N19/176 , H04N19/122 , H04N19/583 , H04N19/184
Abstract: Super-transform coding may include identifying a plurality of sub-blocks for prediction coding a current block, determining whether to encode the current block using a super-transform, and super-prediction coding the current block. Super-prediction coding may include generating a super-prediction block for the current block by generating a prediction block for each unpartitioned sub-block of the current block, generating a super-prediction block for each partitioned sub-block of the current block by super-prediction coding the sub-block, and including the prediction blocks and super-prediction blocks for the sub-blocks in a super-prediction block for the current block. Including the prediction blocks and super-prediction blocks for the sub-blocks in a super-prediction block for the current block may include filtering at least a portion of each prediction block and each super-prediction block based on a spatially adjacent prediction block. Super-transform coding may include transforming the super-prediction block for the current block using a corresponding super-transform.
-
Publication No.: US20170353733A1
Publication Date: 2017-12-07
Application No.: US15173881
Filing Date: 2016-06-06
Applicant: GOOGLE INC.
Inventor: Debargha Mukherjee , Yue Chen
IPC: H04N19/50 , H04N19/176 , H04N19/182 , H04N19/44
CPC classification number: H04N19/50 , G06T7/20 , G06T9/00 , H04N19/107 , H04N19/119 , H04N19/157 , H04N19/176 , H04N19/182 , H04N19/44 , H04N19/583
Abstract: Decoding a current block of an encoded video stream may include generating a base prediction block for the current block based on current prediction parameters associated with the current block, identifying adjacent prediction parameters used for decoding a previously decoded adjacent block that is adjacent to the current block, and determining an overlap region within the current block and adjacent to the adjacent block. The overlap region has a size being determined as a function of a difference between the first prediction parameters and the adjacent prediction parameters. For each pixel within the overlap region, an overlapped prediction of a pixel value may be generated as a function of the base prediction and a prediction based on the adjacent prediction parameters.