Abstract:
An exemplary video processing method includes: receiving omnidirectional content corresponding to a sphere; obtaining a plurality of projection faces from the omnidirectional content of the sphere according to a pyramid projection; creating at least one padding region; and generating a projection-based frame by packing the projection faces and the at least one padding region in a pyramid projection layout. The projection faces packed in the pyramid projection layout include a first projection face. The at least one padding region packed in the pyramid projection layout includes a first padding region. The first padding region connects with at least the first projection face, and forms at least a portion of one boundary of the pyramid projection layout.
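The padding step can be pictured with a minimal sketch: the hypothetical `pack_with_padding` below packs one projection face next to a padding region that replicates the face's edge pixels, so the padding connects with the face and forms part of a layout boundary. The face data, padding width, and edge-replication rule are illustrative assumptions, not the patented pyramid layout.

```python
# Hypothetical sketch: pack a projection face plus a left-edge padding
# region into one frame. Face size, padding width, and the edge-pixel
# replication rule are illustrative assumptions.
def pack_with_padding(face, pad_width):
    """Return the face with a padding region that replicates its leftmost
    column; the padding connects with the face and forms part of the
    packed layout's left boundary."""
    packed = []
    for row in face:
        padding = [row[0]] * pad_width  # replicate the face's edge pixel
        packed.append(padding + row)
    return packed

frame = pack_with_padding([[1, 2], [3, 4]], pad_width=2)
# frame == [[1, 1, 1, 2], [3, 3, 3, 4]]
```

Replicating edge pixels (rather than filling a constant) keeps the padding continuous with the face content, which is the point of placing padding along a layout boundary.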
Abstract:
A video processing method includes receiving a bitstream, processing the bitstream to obtain at least one syntax element from the bitstream, and decoding the bitstream to generate a current decoded frame having a rotated 360-degree image/video content represented in a 360-degree Virtual Reality (360 VR) projection format. The at least one syntax element signaled via the bitstream indicates rotation information of content-oriented rotation that is involved in generating the rotated 360-degree image/video content, and includes a first syntax element. When the content-oriented rotation is enabled, the first syntax element indicates a rotation degree along a specific rotation axis.
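A minimal decoder-side sketch of the signaling: the hypothetical `parse_rotation_info` below reads already-decoded syntax elements and returns a rotation degree per axis when content-oriented rotation is enabled. The element names, the three axes, and the dict representation are illustrative assumptions, not the actual bitstream syntax.

```python
# Hypothetical sketch of decoder-side handling of the rotation syntax
# elements; element names, axes, and the dict form are assumptions.
def parse_rotation_info(syntax):
    """Return per-axis rotation degrees when content-oriented rotation is
    enabled, otherwise None."""
    if not syntax.get("rotation_enabled_flag", 0):
        return None
    return {
        "yaw":   syntax.get("rotation_yaw_degree", 0),
        "pitch": syntax.get("rotation_pitch_degree", 0),
        "roll":  syntax.get("rotation_roll_degree", 0),
    }

info = parse_rotation_info({"rotation_enabled_flag": 1,
                            "rotation_yaw_degree": 90})
# info == {"yaw": 90, "pitch": 0, "roll": 0}
```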
Abstract:
A video processing method includes: receiving a current input frame having a 360-degree image/video content represented in a 360-degree Virtual Reality (360 VR) projection format, applying content-oriented rotation to the 360-degree image/video content in the current input frame to generate a content-rotated frame having a rotated 360-degree image/video content represented in the 360 VR projection format, encoding the content-rotated frame to generate a bitstream, and signaling at least one syntax element via the bitstream, wherein the at least one syntax element is set to indicate rotation information of the content-oriented rotation.
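On the encoder side, a yaw-only content rotation of an equirectangular-style frame amounts to a horizontal circular shift of its columns. The sketch below is a hypothetical illustration; the frame representation and the degrees-to-columns mapping are assumptions, not the claimed method.

```python
# Hypothetical sketch: a yaw rotation of equirectangular-style content is
# a horizontal circular shift of its columns. The frame representation
# and the degrees-to-columns mapping are illustrative assumptions.
def rotate_yaw(frame, degrees):
    width = len(frame[0])
    shift = (degrees * width // 360) % width
    return [row[shift:] + row[:shift] for row in frame]

rotated = rotate_yaw([[0, 1, 2, 3]], 90)  # 4 columns -> 90 deg per column
# rotated == [[1, 2, 3, 0]]
```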
Abstract:
A video processing method includes receiving a projection-based frame, and encoding, by a video encoder, the projection-based frame to generate a part of a bitstream. The projection-based frame has a 360-degree content represented by projection faces packed in a 360-degree Virtual Reality (360 VR) projection layout, and there is at least one image content discontinuity boundary resulting from packing of the projection faces. The step of encoding the projection-based frame includes performing a prediction operation upon a current block in the projection-based frame, including: checking if the current block and a spatial neighbor are located at different projection faces and are on opposite sides of one image content discontinuity boundary; and when a checking result indicates that the current block and the spatial neighbor are located at different projection faces and are on opposite sides of one image content discontinuity boundary, treating the spatial neighbor as non-available.
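The availability check can be sketched as follows: the hypothetical `neighbor_available` returns `False` when the current block and its spatial neighbor sit in different projection faces whose shared packing edge is an image content discontinuity. The face map, block labels, and set of discontinuous face pairs are illustrative assumptions.

```python
# Hypothetical sketch of the availability check; the face map, block
# labels, and the set of discontinuous face pairs are assumptions.
def neighbor_available(face_of, discontinuous_pairs, cur_block, nbr_block):
    """Treat the spatial neighbor as non-available when it lies in a
    different projection face across an image content discontinuity."""
    cur_face, nbr_face = face_of[cur_block], face_of[nbr_block]
    if cur_face != nbr_face and frozenset((cur_face, nbr_face)) in discontinuous_pairs:
        return False
    return True

face_of = {"A": 0, "B": 1, "C": 0}
discontinuous = {frozenset((0, 1))}
# neighbor_available(face_of, discontinuous, "A", "B") -> False
# neighbor_available(face_of, discontinuous, "A", "C") -> True
```

Treating such a neighbor as non-available prevents prediction from leaking across content that is adjacent only in the packed layout, not on the sphere.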
Abstract:
A video processing method includes: receiving omnidirectional image/video content corresponding to a viewing sphere, generating a sequence of projection-based frames according to the omnidirectional image/video content and an octahedron projection layout, and encoding, by a video encoder, the sequence of projection-based frames to generate a bitstream. Each projection-based frame has a 360-degree image/video content represented by triangular projection faces packed in the octahedron projection layout. The omnidirectional image/video content of the viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. An equator of the viewing sphere is not mapped along any side of each of the triangular projection faces.
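The claimed property, that the equator is mapped along no triangle side, can be checked numerically on octahedron edges: an edge lies along the equator only when both of its endpoints sit at latitude zero (z = 0 on the unit sphere). The sketch below rotates a standard octahedron by 45 degrees about the x-axis, an illustrative choice, and counts equator-aligned edges before and after.

```python
import math

# Hypothetical sketch: verify that, after rotating the octahedron, no
# triangle edge lies along the equator. The 45-degree rotation about the
# x-axis is an illustrative assumption.
def rotate_x(p, deg):
    x, y, z = p
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return (x, y * c - z * s, y * s + z * c)

def edges_on_equator(vertices, edges, eps=1e-9):
    """An octahedron edge lies along the equator only when both of its
    endpoints sit at z == 0 on the unit sphere."""
    return [e for e in edges
            if abs(vertices[e[0]][2]) < eps and abs(vertices[e[1]][2]) < eps]

# Regular octahedron: two poles plus a ring of four equatorial vertices.
verts = [(0, 0, 1), (0, 0, -1), (1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
edges = ([(0, i) for i in range(2, 6)] + [(1, i) for i in range(2, 6)]
         + [(2, 3), (3, 4), (4, 5), (5, 2)])

rotated = [rotate_x(v, 45) for v in verts]
# len(edges_on_equator(verts, edges)) == 4   (ring edges on the equator)
# len(edges_on_equator(rotated, edges)) == 0 (no edge on the equator)
```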
Abstract:
A method of encoding a frame to generate an output bitstream has the following steps: dividing the frame into partitions; dividing each of the partitions into blocks, wherein each of the blocks is composed of pixels; assigning a first segmentation identifier to each of first blocks located at partition boundaries each between two adjacent partitions within the frame, wherein the first blocks belong to a first segment, and the first segmentation identifier is signaled per first block; and encoding each of the blocks. The step of encoding each of the blocks includes: generating reconstructed blocks for the blocks, respectively; and configuring an in-loop filter by a predetermined in-loop filtering setting in response to the first segmentation identifier, wherein the in-loop filter with the predetermined in-loop filtering setting does not apply in-loop filtering to each reconstructed block corresponding to the first segment.
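A one-dimensional sketch of the segmentation and filter configuration, with all names and the vertical-boundary-only geometry as illustrative assumptions: blocks adjacent to an internal partition boundary receive the first segmentation identifier, and the in-loop filter is configured to skip reconstructed blocks carrying it.

```python
# Hypothetical 1-D sketch: blocks next to an internal partition boundary
# get the first segmentation identifier, and the in-loop filter skips
# reconstructed blocks carrying it. Names and geometry are assumptions.
FIRST_SEGMENT = 1

def segment_id(block_col, part_width, num_cols):
    """Identifier 1 for blocks on either side of an internal vertical
    partition boundary, identifier 0 elsewhere."""
    at_right = block_col % part_width == part_width - 1 and block_col != num_cols - 1
    at_left = block_col % part_width == 0 and block_col != 0
    return FIRST_SEGMENT if (at_right or at_left) else 0

def apply_in_loop_filter(recon_blocks, seg_ids, filt):
    """Leave first-segment blocks unfiltered; filter the rest."""
    return [b if s == FIRST_SEGMENT else filt(b)
            for b, s in zip(recon_blocks, seg_ids)]

# Six block columns, partitions three columns wide: one internal boundary
# between columns 2 and 3, so those two columns form the first segment.
ids = [segment_id(c, part_width=3, num_cols=6) for c in range(6)]
# ids == [0, 0, 1, 1, 0, 0]
```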
Abstract:
A memory pool management method includes: allocating a plurality of memory pools in a memory device according to information about a plurality of computing units, wherein the computing units are independently executed on a same processor; and assigning one of the memory pools to one of the computing units, wherein at least one of the memory pools is shared among different computing units of the computing units.
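A minimal sketch of such a pool manager, assuming sequential (non-overlapping) execution of the computing units so that pool sharing is safe; the first-fit reuse policy and all class, field, and unit names are illustrative assumptions.

```python
# Hypothetical sketch of a pool manager; first-fit reuse and all names
# are illustrative assumptions. Sharing is safe because the computing
# units run independently (not concurrently) on the same processor.
class PoolManager:
    def __init__(self, unit_requirements):
        """unit_requirements maps each computing unit to its byte need."""
        self.pools = []        # pool index -> pool size in bytes
        self.assignment = {}   # unit name  -> pool index
        for unit, size in unit_requirements.items():
            for idx, pool_size in enumerate(self.pools):
                if pool_size >= size:      # reuse: pool shared by units
                    self.assignment[unit] = idx
                    break
            else:                          # no fit: allocate a new pool
                self.pools.append(size)
                self.assignment[unit] = len(self.pools) - 1

mgr = PoolManager({"decode": 100, "render": 80, "audio": 200})
# "decode" and "render" share pool 0; "audio" gets its own pool 1.
```

Sharing a pool among units whose requirements fit keeps the total allocation below the sum of per-unit peaks, which is the benefit the method targets.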
Abstract:
A video processing method includes: receiving omnidirectional image/video content corresponding to a viewing sphere, generating a sequence of projection-based frames according to the omnidirectional image/video content and a viewport-based cube projection layout, and encoding the sequence of projection-based frames to generate a bitstream. Each projection-based frame has a 360-degree image/video content represented by rectangular projection faces packed in the viewport-based cube projection layout. The rectangular projection faces include a first rectangular projection face, a second rectangular projection face, a third rectangular projection face, a fourth rectangular projection face, a fifth rectangular projection face, and a sixth rectangular projection face split into partial rectangular projection faces. The first rectangular projection face corresponds to the user's viewport, and is enclosed by a surrounding area composed of the second rectangular projection face, the third rectangular projection face, the fourth rectangular projection face, the fifth rectangular projection face, and the partial rectangular projection faces.
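The packing can be pictured as a 3x3 grid of cells: the viewport face in the centre cell, the four other full faces in the edge-adjacent cells, and the sixth face split into four partial faces occupying the corners. This grid model and its cell labels are illustrative assumptions, not the exact claimed geometry.

```python
# Hypothetical 3x3 grid model of the layout; cell labels are assumptions.
def viewport_layout():
    """Centre cell: viewport face. Edge cells: four full faces. Corner
    cells: the four partial faces of the split sixth (back) face."""
    return [
        ["back_tl", "top",      "back_tr"],
        ["left",    "viewport", "right"],
        ["back_bl", "bottom",   "back_br"],
    ]

layout = viewport_layout()
# The viewport face is enclosed by the surrounding eight cells.
```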