-
Publication Number: US11665365B2
Publication Date: 2023-05-30
Application Number: US16131133
Application Date: 2018-09-14
Applicant: GOOGLE LLC
Inventor: Bohan Li , Yaowu Xu , Jingning Han
IPC: H04N19/52 , H04N19/176 , H04N19/577
CPC classification number: H04N19/52 , H04N19/176 , H04N19/577
Abstract: Video coding may include generating, by a processor executing instructions stored on a non-transitory computer-readable medium, an encoded frame by encoding a current frame from an input bitstream. The current frame is a frame from a sequence of input frames in which each frame has a respective sequential location, and the current frame has a current sequential location in that sequence. Encoding the current frame includes generating a reference coframe spatiotemporally corresponding to the current frame and encoding the current frame using the reference coframe. Video coding may further include placing the encoded frame in an output bitstream and outputting the output bitstream.
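As a rough illustration of the idea in this abstract, the sketch below encodes each input frame against a reference coframe generated to align with it spatiotemporally. The helper names, the averaging used to synthesize the coframe, and the residual "entropy coding" placeholder are assumptions made for clarity, not the patented method.

```python
import numpy as np

def generate_reference_coframe(reconstructed_frames):
    """Illustrative stand-in: synthesize a frame spatiotemporally aligned with
    the current frame from previously reconstructed frames (simple average)."""
    return np.mean(np.stack(reconstructed_frames), axis=0).astype(np.int16)

def encode_sequence(input_frames):
    bitstream, reconstructed = [], []
    for current in input_frames:
        frame = current.astype(np.int16)
        if reconstructed:
            coframe = generate_reference_coframe(reconstructed)
            residual = frame - coframe          # predict from the coframe
        else:
            coframe = np.zeros_like(frame)      # first frame: intra-style fallback
            residual = frame
        bitstream.append(residual)              # stand-in for entropy coding
        reconstructed.append(np.clip(residual + coframe, 0, 255).astype(np.uint8))
    return bitstream

# Usage: three synthetic 8x8 grayscale frames.
frames = [np.full((8, 8), v, dtype=np.uint8) for v in (100, 110, 120)]
encoded = encode_sequence(frames)
```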
-
Publication Number: US10659788B2
Publication Date: 2020-05-19
Application Number: US15817369
Application Date: 2017-11-20
Applicant: GOOGLE LLC
Inventor: Yaowu Xu , Bohan Li , Jingning Han
IPC: H04N19/00 , H04N19/139 , H04N19/105 , H04N19/577 , H04N19/573 , H04N19/172 , H04N19/537
Abstract: An optical flow reference frame portion (e.g., a block or an entire frame) is generated that can be used for inter prediction of blocks of a current frame in a video sequence. A forward reference frame and a backward reference frame are used in an optical flow estimation that produces a respective motion field for pixels of a current frame. The motion fields are used to warp some or all pixels of the reference frames to the pixels of the current frame. The warped reference frame pixels are blended to form the optical flow reference frame portion. The inter prediction may be performed as part of encoding or decoding portions of the current frame.
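A minimal sketch of the warp-and-blend step described here, assuming a per-pixel motion field is already available from the optical flow estimation; the nearest-neighbor warp and 50/50 blend are illustrative simplifications rather than the patent's exact operations.

```python
import numpy as np

def warp_frame(frame, motion_field):
    """Shift each pixel by its (dy, dx) motion vector (nearest-neighbor warp)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + np.rint(motion_field[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(xs + np.rint(motion_field[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def optical_flow_reference(forward_ref, backward_ref, flow_fwd, flow_bwd):
    warped_fwd = warp_frame(forward_ref, flow_fwd).astype(np.uint16)
    warped_bwd = warp_frame(backward_ref, flow_bwd).astype(np.uint16)
    return ((warped_fwd + warped_bwd) // 2).astype(np.uint8)  # simple blend

# Usage with toy 4x4 frames and zero motion fields.
f_fwd = np.arange(16, dtype=np.uint8).reshape(4, 4)
f_bwd = f_fwd + 10
flow = np.zeros((4, 4, 2))
ref_portion = optical_flow_reference(f_fwd, f_bwd, flow, flow)
```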
-
Publication Number: US20250150574A1
Publication Date: 2025-05-08
Application Number: US18836951
Application Date: 2022-03-07
Applicant: Google LLC
Inventor: Bohan Li , Yaowu Xu , Jingning Han
IPC: H04N19/105 , H04N19/137 , H04N19/172 , H04N19/176
Abstract: A motion vector for a current block of a current frame is decoded. The motion vector for the current block refers to a first reference block in a first reference frame. A first prediction block of two or more prediction blocks is identified in the first reference frame and using the first reference block. A first grid-aligned block is identified based on the first reference block. A second reference block is identified using a motion vector of the first grid-aligned block in a second reference frame. A second prediction block of the two or more prediction blocks is identified in the second reference frame and using the second reference block. The two or more prediction blocks are combined to obtain a prediction block for the current block.
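The sketch below traces the decoding flow in this abstract with integer-pixel motion and square blocks: the decoded motion vector yields the first prediction block, the grid-aligned block identified from the first reference block supplies a second motion vector into the second reference frame, and the two prediction blocks are combined. The block size, the lookup table of grid-block motion vectors, and the averaging combine step are assumptions.

```python
import numpy as np

BLOCK = 4  # assumed block size

def block_at(frame, y, x):
    h, w = frame.shape
    y = int(np.clip(y, 0, h - BLOCK))
    x = int(np.clip(x, 0, w - BLOCK))
    return frame[y:y + BLOCK, x:x + BLOCK]

def grid_align(y, x):
    """Snap a reference-block position onto the coding-block grid."""
    return (y // BLOCK) * BLOCK, (x // BLOCK) * BLOCK

def compound_predict(cur_y, cur_x, mv, ref1, ref2, grid_mvs):
    # First prediction block: follow the decoded motion vector into reference 1.
    p1 = block_at(ref1, cur_y + mv[0], cur_x + mv[1])
    # The grid-aligned block near the first reference block supplies a second
    # motion vector pointing into reference 2.
    gy, gx = grid_align(cur_y + mv[0], cur_x + mv[1])
    mv2 = grid_mvs[(gy, gx)]
    p2 = block_at(ref2, gy + mv2[0], gx + mv2[1])
    # Combine the two prediction blocks (simple average here).
    return ((p1.astype(np.uint16) + p2) // 2).astype(np.uint8)

# Usage: 8x8 references with zero motion stored in the grid-MV table.
ref1 = np.arange(64, dtype=np.uint8).reshape(8, 8)
ref2 = ref1[::-1].copy()
grid_mvs = {(y, x): (0, 0) for y in range(0, 8, BLOCK) for x in range(0, 8, BLOCK)}
pred = compound_predict(0, 0, (0, 4), ref1, ref2, grid_mvs)
```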
-
Publication Number: US11876974B2
Publication Date: 2024-01-16
Application Number: US17738105
Application Date: 2022-05-06
Applicant: GOOGLE LLC
Inventor: Yaowu Xu , Bohan Li , Jingning Han
IPC: H04N19/00 , H04N19/139 , H04N19/105 , H04N19/577 , H04N19/573 , H04N19/172 , H04N19/537
CPC classification number: H04N19/139 , H04N19/105 , H04N19/172 , H04N19/537 , H04N19/573 , H04N19/577
Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
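A decoder-side sketch of the flow in this abstract: the per-block flag controls whether the reference block is synthesized on demand from the forward and backward reference frames, and only the block's own footprint is generated rather than a full optical flow reference frame. The averaging stand-in for the per-block generation and the in-bounds motion assumption are simplifications.

```python
import numpy as np

def generate_flow_block(fwd_ref, bwd_ref, y, x, size):
    """Stand-in for on-demand generation: blend the co-located windows of the
    forward and backward reference frames for just this block's footprint."""
    win_f = fwd_ref[y:y + size, x:x + size].astype(np.uint16)
    win_b = bwd_ref[y:y + size, x:x + size].astype(np.uint16)
    return ((win_f + win_b) // 2).astype(np.uint8)

def decode_block(uses_flow, mv, fwd_ref, bwd_ref, residual, cur_y, cur_x):
    size = residual.shape[0]
    if uses_flow:
        # Flag indicates optical flow: locate the reference block via the
        # decoded motion vector and generate only that block, never the frame.
        ref_block = generate_flow_block(fwd_ref, bwd_ref,
                                        cur_y + mv[0], cur_x + mv[1], size)
    else:
        ref_block = fwd_ref[cur_y:cur_y + size, cur_x:cur_x + size]
    return np.clip(ref_block.astype(np.int16) + residual, 0, 255).astype(np.uint8)

# Usage: a 4x4 block at (0, 0) with a small residual and in-bounds motion.
fwd = np.full((8, 8), 100, dtype=np.uint8)
bwd = np.full((8, 8), 140, dtype=np.uint8)
res = np.ones((4, 4), dtype=np.int16)
block = decode_block(True, (2, 2), fwd, bwd, res, 0, 0)
```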
-
Publication Number: US20230156221A1
Publication Date: 2023-05-18
Application Number: US17527590
Application Date: 2021-11-16
Applicant: GOOGLE LLC
Inventor: Bohan Li , Ching-Han Chiang , Jingning Han , Yao Yao
IPC: H04N19/597 , H04N19/80 , H04N19/139 , H04N19/167 , H04N19/182 , H04N19/176 , H04N19/57 , H04N19/52 , H04N19/583
CPC classification number: H04N19/597 , H04N19/80 , H04N19/139 , H04N19/167 , H04N19/182 , H04N19/176 , H04N19/57 , H04N19/52 , H04N19/583
Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere. These mapping-aware coding tools account for the changes to video information introduced by mapping 360 degree video data into a conventional video format.
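The motion vector mapping step can be pictured with the sketch below, which assumes an equirectangular projection and a rotation about the vertical axis (yaw); the projection choice and rotation parameterization are assumptions used only to make the plane-to-sphere-to-plane round trip concrete.

```python
import numpy as np

def plane_to_sphere(px, py, width, height):
    """Map an equirectangular pixel to (longitude, latitude) in radians."""
    lon = (px / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (py / height) * np.pi
    return lon, lat

def sphere_to_plane(lon, lat, width, height):
    px = (lon + np.pi) / (2.0 * np.pi) * width
    py = (np.pi / 2.0 - lat) / np.pi * height
    return px, py

def mapped_motion_vector(px, py, yaw, width, height):
    """Rotate the pixel's position on the sphere by `yaw` and express the
    displacement back on the 2-D frame as a per-pixel motion vector."""
    lon, lat = plane_to_sphere(px, py, width, height)
    new_px, new_py = sphere_to_plane(lon + yaw, lat, width, height)
    return new_px - px, new_py - py

# Usage: a 5-degree yaw on a 1024x512 equirectangular frame.
mv = mapped_motion_vector(256.0, 128.0, np.deg2rad(5.0), 1024, 512)
```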
-
Publication Number: US11284107B2
Publication Date: 2022-03-22
Application Number: US15683684
Application Date: 2017-08-22
Applicant: GOOGLE LLC
Inventor: Yaowu Xu , Bohan Li , Jingning Han
IPC: H04N19/58 , H04N19/587 , H04N19/80 , H04N19/176 , H04N19/182 , H04N19/44 , H04N19/59 , H04N19/527 , H04N19/105 , H04N19/567 , H04N19/54 , H04N19/56 , H04N19/523 , H04N19/172
Abstract: An optical flow reference frame is generated that can be used for inter prediction of blocks of a current frame in a video sequence. A first (e.g., forward) reference frame and a second (e.g., backward) reference frame are used in an optical flow estimation that produces a respective motion field for pixels of the current frame. The motion fields are used to warp the reference frames to the current frame. The warped reference frames are blended to form the optical flow reference frame.
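A hedged illustration of the blending step only, here weighting each warped reference by its temporal distance to the current frame; the distance-based weights are an illustrative choice, not necessarily the blend used in the patent.

```python
import numpy as np

def blend_warped_references(warped_fwd, warped_bwd, d_fwd, d_bwd):
    """Weight each warped reference inversely to its temporal distance from
    the current frame, then blend into the optical flow reference frame."""
    w_fwd = d_bwd / (d_fwd + d_bwd)
    w_bwd = d_fwd / (d_fwd + d_bwd)
    blended = w_fwd * warped_fwd.astype(np.float64) + w_bwd * warped_bwd.astype(np.float64)
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)

# Usage: current frame one interval after the forward reference and two
# intervals before the backward reference.
a = np.full((4, 4), 90, dtype=np.uint8)
b = np.full((4, 4), 150, dtype=np.uint8)
ref = blend_warped_references(a, b, d_fwd=1, d_bwd=2)
```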
-
Publication Number: US12206842B2
Publication Date: 2025-01-21
Application Number: US18424445
Application Date: 2024-01-26
Applicant: GOOGLE LLC
Inventor: Yaowu Xu , Bohan Li , Jingning Han
IPC: H04N19/105 , H04N19/139 , H04N19/172 , H04N19/573
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame. During decoding, the motion field estimate may be determined using motion vectors signaled within a bitstream and without additional side information, thereby improving prediction coding efficiency.
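A coarse sketch of the motion field estimation described here: reference-frame motion vectors are scaled to point at the current frame, scattered onto the current frame's block grid, and unavailable entries are interpolated from available neighbors. The block-grid data layout, linear scaling, and neighbor-mean interpolation are assumptions.

```python
import numpy as np

def estimate_motion_field(ref_mvs, scale, grid_shape):
    """ref_mvs maps a reference-frame block (row, col) to its motion vector
    (dy, dx) in block units. Scaling a vector by `scale` projects it onto the
    current frame; the block it lands on inherits the scaled vector, and
    unfilled blocks are interpolated from filled neighbors."""
    field = np.full(grid_shape + (2,), np.nan)
    for (r, c), (dy, dx) in ref_mvs.items():
        tr, tc = int(round(r + dy * scale)), int(round(c + dx * scale))
        if 0 <= tr < grid_shape[0] and 0 <= tc < grid_shape[1]:
            field[tr, tc] = (dy * scale, dx * scale)
    # Fill unavailable entries with the mean of available neighbors.
    for r in range(grid_shape[0]):
        for c in range(grid_shape[1]):
            if np.isnan(field[r, c, 0]):
                neighbors = [field[nr, nc] for nr, nc in
                             ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                             if 0 <= nr < grid_shape[0] and 0 <= nc < grid_shape[1]
                             and not np.isnan(field[nr, nc, 0])]
                if neighbors:
                    field[r, c] = np.mean(neighbors, axis=0)
    return field

# Usage: a 4x4 block grid with two known reference motion vectors.
mvs = {(1, 1): (1.0, 0.0), (2, 3): (0.0, -1.0)}
motion_field = estimate_motion_field(mvs, scale=0.5, grid_shape=(4, 4))
```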
-
Publication Number: US20240195979A1
Publication Date: 2024-06-13
Application Number: US18542997
Application Date: 2023-12-18
Applicant: GOOGLE LLC
Inventor: Yaowu Xu , Bohan Li , Jingning Han
IPC: H04N19/139 , H04N19/105 , H04N19/172 , H04N19/537 , H04N19/573 , H04N19/577
CPC classification number: H04N19/139 , H04N19/105 , H04N19/172 , H04N19/537 , H04N19/573 , H04N19/577
Abstract: A motion vector for a current block of a current frame is decoded from a compressed bitstream. A location of a reference block within an un-generated reference frame is identified. The reference block is generated using a forward reference frame and a backward reference frame without generating the un-generated reference frame. The reference block is generated by identifying an extended reference block by extending the reference block at each boundary of the reference block by a number of pixels related to a filter length of a filter used in sub-pixel interpolation; and generating pixel values of only the extended reference block by performing a projection using the forward reference frame and the backward reference frame without generating the whole of the un-generated reference frame. The current block is then decoded based on the reference block and the motion vector.
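The padding arithmetic in this abstract can be sketched as follows: the reference block is extended at each boundary by an amount tied to the interpolation filter length, and only that extended footprint is projected from the forward and backward references. The 8-tap filter length, half-length padding, edge clamping, and averaging projection are assumptions.

```python
import numpy as np

FILTER_TAPS = 8  # assumed sub-pixel interpolation filter length

def extended_block_window(y, x, block_h, block_w, taps=FILTER_TAPS):
    """Return (top, left, height, width) of the extended reference block."""
    pad = taps // 2
    return y - pad, x - pad, block_h + 2 * pad, block_w + 2 * pad

def generate_extended_block(fwd_ref, bwd_ref, y, x, block_h, block_w):
    top, left, h, w = extended_block_window(y, x, block_h, block_w)
    ys = np.clip(np.arange(top, top + h), 0, fwd_ref.shape[0] - 1)
    xs = np.clip(np.arange(left, left + w), 0, fwd_ref.shape[1] - 1)
    # Project only this footprint from the two references (a simple average
    # stands in for the projection), never the whole un-generated frame.
    win_f = fwd_ref[np.ix_(ys, xs)].astype(np.uint16)
    win_b = bwd_ref[np.ix_(ys, xs)].astype(np.uint16)
    return ((win_f + win_b) // 2).astype(np.uint8)

# Usage: an 8x8 block near the top-left corner of 32x32 references.
fwd = np.zeros((32, 32), dtype=np.uint8)
bwd = np.full((32, 32), 200, dtype=np.uint8)
ext_block = generate_extended_block(fwd, bwd, 2, 2, 8, 8)
```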
-
Publication Number: US11924467B2
Publication Date: 2024-03-05
Application Number: US17527590
Application Date: 2021-11-16
Applicant: GOOGLE LLC
Inventor: Bohan Li , Ching-Han Chiang , Jingning Han , Yao Yao
IPC: H04N19/597 , H04N19/139 , H04N19/167 , H04N19/176 , H04N19/182 , H04N19/52 , H04N19/57 , H04N19/583 , H04N19/80
CPC classification number: H04N19/597 , H04N19/139 , H04N19/167 , H04N19/176 , H04N19/182 , H04N19/52 , H04N19/57 , H04N19/583 , H04N19/80
Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere. These mapping-aware coding tools account for the changes to video information introduced by mapping 360 degree video data into a conventional video format.
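To make the "density distortion" idea concrete, the sketch below models horizontal sample density in an equirectangular frame as 1/cos(latitude) and scales a motion-search step and an interpolation filter choice by it; the density model, thresholds, and filter lengths are illustrative assumptions, not the patent's rules.

```python
import numpy as np

def horizontal_density(py, height):
    """Relative horizontal sample density at frame row `py` (equirectangular)."""
    lat = np.pi / 2.0 - (py / height) * np.pi
    return 1.0 / max(np.cos(lat), 1e-3)

def adaptive_search_step(py, height, base_step=1):
    # Near the poles one sphere step spans many frame pixels, so widen the
    # horizontal motion-search step proportionally.
    return max(base_step, int(round(base_step * horizontal_density(py, height))))

def adaptive_filter_taps(py, height):
    # Illustrative rule: choose a longer filter where pixels are stretched.
    return 8 if horizontal_density(py, height) > 2.0 else 6

# Usage on a 512-row equirectangular frame.
step_near_pole = adaptive_search_step(py=10, height=512)
taps_at_equator = adaptive_filter_taps(py=256, height=512)
```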
-
Publication Number: US11917128B2
Publication Date: 2024-02-27
Application Number: US17090094
Application Date: 2020-11-05
Applicant: GOOGLE LLC
Inventor: Bohan Li , Yaowu Xu , Jingning Han
IPC: H04N19/105 , H04N19/172 , H04N19/573 , H04N19/139
CPC classification number: H04N19/105 , H04N19/139 , H04N19/172 , H04N19/573
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame. During decoding, the motion field estimate may be determined using motion vectors signaled within a bitstream and without additional side information, thereby improving prediction coding efficiency.
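A small sketch of the trajectory concatenation step in this abstract: a motion vector between two reference frames is chained with the next reference frame's motion vector, and the concatenated vector is scaled by frame distance to find where it points in the current frame. The frame indices and linear scaling are assumptions for illustration.

```python
def concatenate_trajectory(mv_r1_to_r2, mv_r2_to_r3, idx_r1, idx_r3, idx_cur):
    """Chain two reference-frame motion vectors into one concatenated vector
    and return the offset it implies at the current frame's position."""
    cat = (mv_r1_to_r2[0] + mv_r2_to_r3[0], mv_r1_to_r2[1] + mv_r2_to_r3[1])
    span = idx_r3 - idx_r1
    scale = (idx_cur - idx_r1) / span if span else 0.0
    return cat, (cat[0] * scale, cat[1] * scale)

# Usage: chain vectors across frames 4 -> 6 -> 8 and project onto frame 5.
cat_mv, offset_at_current = concatenate_trajectory((2.0, -1.0), (1.5, 0.5), 4, 8, 5)
```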
-