Methods and apparatus for volumetric video transport

    Publication (Announcement) No.: US11483536B2

    Publication (Announcement) Date: 2022-10-25

    Application No.: US17259852

    Filing Date: 2019-06-21

    Abstract: A method and a device provide for transmitting information representative of a viewpoint in a 3D scene represented with a set of volumetric video contents, and for receiving a first volumetric video content of the set, the first volumetric video content being associated with a range of points of view comprising the viewpoint. The first volumetric video content is represented with a set of first patches, each first patch corresponding to a 2D parametrization of a first group of points in a 3D part of the 3D scene associated with the first volumetric video content. At least one first patch refers to an area of a second patch corresponding to a 2D parametrization of a second group of points in another 3D part of the 3D scene associated with a second volumetric video content of the set.
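
    A minimal Python sketch of the patch-referencing structure the abstract describes: a first patch carries a 2D parametrization of a group of points and may refer to an area of a second patch belonging to another volumetric video content of the set. All class and field names (PatchRef, Patch, VolumetricContent, view_range) are illustrative assumptions, not terminology from the patent.

        # Hypothetical data model for patches that can reference an area of a
        # patch in another volumetric video content of the set.
        from dataclasses import dataclass
        from typing import Optional, Tuple, List, Dict

        @dataclass
        class PatchRef:
            content_id: int                      # second volumetric video content in the set
            patch_id: int                        # second patch inside that content
            area: Tuple[int, int, int, int]      # (x, y, width, height) area of the second patch

        @dataclass
        class Patch:
            params_2d: Dict                      # parameters of the 2D parametrization of the point group
            ref: Optional[PatchRef] = None       # set when this first patch refers to a second patch

        @dataclass
        class VolumetricContent:
            view_range: Tuple[float, float]      # range of points of view covered by this content
            patches: List[Patch]                 # patches representing the associated 3D part

        def select_content(contents, viewpoint):
            """Return the first content whose range of points of view contains the viewpoint."""
            for c in contents:
                lo, hi = c.view_range
                if lo <= viewpoint <= hi:
                    return c
            return None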

    Method, apparatus and stream for immersive video format

    Publication (Announcement) No.: US10891784B2

    Publication (Announcement) Date: 2021-01-12

    Application No.: US16477620

    Filing Date: 2018-01-08

    Abstract: A method and a device for generating a data stream representative of a 3D point cloud. The 3D point cloud is partitioned into a plurality of 3D elementary parts. A set of two-dimensional (2D) parametrizations is determined, each 2D parametrization representing one 3D part of the point cloud with a set of parameters, so that each 3D part is represented as a 2D pixel image. For each 3D part, a depth map and a color map are determined and packed into a first patch atlas and a second patch atlas, respectively. The data stream is generated by combining and/or coding the parameters of the 2D parametrizations, the first patch atlas, the second patch atlas and mapping information that links each 2D parametrization with its associated depth map and color map in the first and second patch atlas, respectively.
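
    The pipeline in this abstract (partition the point cloud, parametrize each 3D part in 2D, build per-part depth and color maps, pack them into two patch atlases, and combine them with mapping information into a stream) can be sketched as follows. The orthographic projection along z and the simple packing strategy are assumptions for illustration only, not the patented method.

        # Hedged sketch of the stream-generation pipeline described in the abstract.
        import numpy as np

        def parametrize_part(points, colors, resolution=64):
            """Project one 3D part (points Nx3, colors Nx3) onto a 2D grid (orthographic along z)."""
            xy = points[:, :2]
            mins, maxs = xy.min(axis=0), xy.max(axis=0)
            scale = (resolution - 1) / np.maximum(maxs - mins, 1e-9)
            u, v = ((xy - mins) * scale).astype(int).T
            depth_map = np.zeros((resolution, resolution), dtype=np.float32)
            color_map = np.zeros((resolution, resolution, 3), dtype=np.uint8)
            depth_map[v, u] = points[:, 2]
            color_map[v, u] = colors
            params = {"origin": mins.tolist(), "scale": scale.tolist(), "axis": "z"}
            return params, depth_map, color_map

        def build_stream(parts):
            """parts: list of (points, colors) per 3D elementary part. Returns a dict standing in for the stream."""
            depth_atlas, color_atlas, params_list, mapping = [], [], [], []
            for idx, (pts, cols) in enumerate(parts):
                params, dmap, cmap = parametrize_part(pts, cols)
                params_list.append(params)
                depth_atlas.append(dmap)      # patch idx in the first (depth) atlas
                color_atlas.append(cmap)      # patch idx in the second (color) atlas
                mapping.append({"parametrization": idx, "depth_patch": idx, "color_patch": idx})
            return {
                "parameters": params_list,
                "first_patch_atlas": np.stack(depth_atlas),
                "second_patch_atlas": np.stack(color_atlas),
                "mapping": mapping,
            }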

    Method and apparatus for depth encoding and decoding

    Publication (Announcement) No.: US12143633B2

    Publication (Announcement) Date: 2024-11-12

    Application No.: US17440151

    Filing Date: 2020-03-17

    Abstract: Methods, devices and a data stream format are disclosed for encoding, formatting and decoding depth information representative of a 3D scene. Compression and decompression of quantized values by a video codec introduce value errors, and depth encoding is particularly sensitive to such errors. The present invention proposes to encode and decode depth with a quantization function that minimizes the angle error arising when a value error on the quantized depth creates a location delta between the projected point and the de-projected point. As such functions are not analytically tractable, the inverse of the quantization function is encoded in metadata associated with the 3D scene, for example as a look-up table (LUT), to be retrieved at decoding.
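
    A sketch of the quantize-then-LUT mechanism follows. The abstract does not disclose the angle-error-minimizing quantization function itself, so an inverse-depth quantizer and the depth range Z_NEAR/Z_FAR/LEVELS below are stand-in assumptions; only the idea of carrying the inverse function as a LUT in metadata and looking it up at decoding is illustrated.

        # Stand-in quantizer plus inverse-as-LUT metadata, as a mechanism sketch.
        import numpy as np

        Z_NEAR, Z_FAR, LEVELS = 0.5, 50.0, 1024   # assumed depth range and number of quantization levels

        def quantize(depth):
            """Map depth in [Z_NEAR, Z_FAR] to an integer code in [0, LEVELS-1] (inverse-depth stand-in)."""
            t = (1.0 / depth - 1.0 / Z_FAR) / (1.0 / Z_NEAR - 1.0 / Z_FAR)
            return np.clip(np.round(t * (LEVELS - 1)), 0, LEVELS - 1).astype(np.uint16)

        def build_inverse_lut():
            """LUT carried as metadata with the 3D scene: code -> reconstructed depth."""
            t = np.arange(LEVELS) / (LEVELS - 1)
            inv = t * (1.0 / Z_NEAR - 1.0 / Z_FAR) + 1.0 / Z_FAR
            return 1.0 / inv

        def dequantize(codes, lut):
            """Decoder side: recover depth by looking the codes up in the transmitted LUT."""
            return lut[codes]

        # usage
        depth = np.array([0.7, 3.2, 40.0])
        lut = build_inverse_lut()                # encoded in metadata alongside the 3D scene
        codes = quantize(depth)                  # quantized depth carried by the video codec
        recovered = dequantize(codes, lut)       # retrieved at decoding via the LUT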

    Method and apparatus for encoding and decoding three-dimensional scenes in and from a data stream

    Publication (Announcement) No.: US11375235B2

    Publication (Announcement) Date: 2022-06-28

    Application No.: US16962157

    Filing Date: 2019-01-04

    Abstract: Methods and devices are provided to encode and decode a data stream carrying data representative of a three-dimensional scene, the data stream comprising color pictures packed in a color image, depth pictures packed in a depth image, and a set of patch data items comprising de-projection data, data for retrieving a color picture in the color image, and geometry data. Two types of geometry data are possible. The first type describes how to retrieve a depth picture in the depth image. The second type comprises an identifier of a 3D mesh; the vertex coordinates and faces of this mesh are used to retrieve the location of points in the de-projected scene.
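
    The two kinds of patch data items can be sketched as a small tagged union, as below; the field names and the rectangle convention are illustrative assumptions, not the stream syntax defined by the patent.

        # Hypothetical patch data item carrying de-projection data, a color-picture
        # location, and one of two geometry data types (depth picture vs. mesh id).
        from dataclasses import dataclass
        from typing import Dict, Tuple, Union

        @dataclass
        class DepthGeometry:                       # first type: where to find the depth picture
            depth_rect: Tuple[int, int, int, int]  # (x, y, width, height) in the depth image

        @dataclass
        class MeshGeometry:                        # second type: identifier of a 3D mesh
            mesh_id: int                           # vertices/faces of this mesh locate the points

        @dataclass
        class PatchDataItem:
            deprojection: Dict                     # de-projection parameters
            color_rect: Tuple[int, int, int, int]  # where to find the color picture in the color image
            geometry: Union[DepthGeometry, MeshGeometry]

        def locate_points(item, depth_image, meshes):
            """Decoder-side dispatch on the geometry type (depth_image: 2D NumPy-like array)."""
            if isinstance(item.geometry, DepthGeometry):
                x, y, w, h = item.geometry.depth_rect
                return depth_image[y:y + h, x:x + w]   # depth samples to de-project
            return meshes[item.geometry.mesh_id]       # vertex coordinates and faces of the mesh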
