Abstract:
An encoding device, a decoding device, and a method for mesh decoding are disclosed. The method for mesh decoding includes receiving a compressed bitstream. The method also includes separating, from the compressed bitstream, a first bitstream and a second bitstream. The method further includes decoding, from the second bitstream, connectivity information of a three-dimensional (3D) mesh. The method additionally includes decoding, from the first bitstream, a first frame and a second frame that include patches. The patches included in the first frame represent vertex coordinates of the 3D mesh, and the patches included in the second frame represent a vertex attribute of the 3D mesh. The method also includes reconstructing a point cloud based on the first and second frames. Additionally, the method includes applying the connectivity information to the point cloud to reconstruct the 3D mesh.
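The decoding flow above can be illustrated with a minimal Python sketch. The MeshBitstream container, its fields, and the patch layout below are illustrative assumptions rather than the disclosed codec's syntax; the sketch only mirrors the sequence of steps: separate the bitstreams, decode connectivity and the two patch frames, reconstruct a point cloud, then apply connectivity.

```python
# Toy stand-in for the compressed bitstream; field names are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MeshBitstream:
    geometry_frame: List[Tuple[float, float, float]]   # patch pixels -> vertex xyz
    attribute_frame: List[Tuple[int, int, int]]         # patch pixels -> vertex attribute
    connectivity: List[Tuple[int, int, int]]            # triangle vertex indices

def separate(bitstream: MeshBitstream):
    # First bitstream: the two patch frames; second bitstream: connectivity.
    return (bitstream.geometry_frame, bitstream.attribute_frame), bitstream.connectivity

def decode_mesh(bitstream: MeshBitstream):
    (geom, attr), connectivity = separate(bitstream)
    # Reconstruct an intermediate point cloud: one point per decoded pixel pair.
    points = [{"xyz": g, "attr": a} for g, a in zip(geom, attr)]
    # Apply connectivity to the point cloud to recover the 3D mesh.
    return {"vertices": points, "faces": connectivity}

if __name__ == "__main__":
    bs = MeshBitstream(
        geometry_frame=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
        attribute_frame=[(255, 0, 0), (0, 255, 0), (0, 0, 255)],
        connectivity=[(0, 1, 2)],
    )
    print(decode_mesh(bs)["faces"])   # [(0, 1, 2)]
```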
Abstract:
A method for generating content includes receiving information regarding electronic devices respectively capturing content of an event. The method also includes identifying, based on the received information, one or more parameters for the electronic devices to use in capturing the content, the one or more parameters identified to assist in generating multi-view content for the event from the captured content. The method further includes identifying, based on the received information, a common resolution for the electronic devices to use in capturing the content. Additionally, the method includes identifying, based on the received information, a common frame rate for the electronic devices to use in capturing the content. The method also includes sending information indicating the one or more parameters, the common resolution, and the common frame rate to the electronic devices.
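A minimal sketch of the coordination step described above, assuming the devices report their capabilities as simple dictionaries; the field names, the example shared parameters, and the selection rule (take the lowest capability common to all devices) are illustrative assumptions, not the disclosed method.

```python
# Given capability reports from several capturing devices, pick a common
# resolution and frame rate every device can support, plus shared parameters.
def select_common_settings(device_reports):
    # The lowest supported resolution/frame rate is guaranteed common to all.
    common_resolution = min((r["max_resolution"] for r in device_reports),
                            key=lambda wh: wh[0] * wh[1])
    common_frame_rate = min(r["max_frame_rate"] for r in device_reports)
    parameters = {"sync_source": "ntp", "exposure_mode": "locked"}  # assumed examples
    return {"parameters": parameters,
            "resolution": common_resolution,
            "frame_rate": common_frame_rate}

if __name__ == "__main__":
    reports = [
        {"device": "A", "max_resolution": (3840, 2160), "max_frame_rate": 60},
        {"device": "B", "max_resolution": (1920, 1080), "max_frame_rate": 30},
    ]
    # -> resolution (1920, 1080), frame rate 30, shared parameters
    print(select_common_settings(reports))
```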
Abstract:
A method for operating a decoding device for point cloud decoding includes receiving a compressed bitstream. The method also includes decoding the compressed bitstream into two-dimensional (2-D) frames that represent a three-dimensional (3-D) point cloud. Each of the 2-D frames includes a set of patches, and each patch includes a cluster of points of the 3-D point cloud. The cluster of points corresponds to an attribute associated with the 3-D point cloud. One patch of the set of patches, the set of patches, and the 2-D frames correspond to respective access levels representing the 3-D point cloud. The method also includes identifying a first flag and a second flag. In response to identifying the first flag and the second flag, the method includes reading metadata from the compressed bitstream. The method further includes generating, based on the metadata and using the 2-D frames, the 3-D point cloud.
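A simplified sketch of the flag-gated metadata parsing, assuming a toy dictionary stands in for the compressed bitstream; the flag and field names below are illustrative rather than actual bitstream syntax, and the three level names loosely mirror the patch / patch-set / frame access levels described above.

```python
# Metadata is read only when both the enabling flag and the presence flag
# for a given access level are set.
def read_metadata_if_flagged(bitstream):
    decoded = {}
    for level in ("patch", "patch_set", "frame"):
        enabled_flag = bitstream.get(f"{level}_metadata_enabled_flag", 0)
        present_flag = bitstream.get(f"{level}_metadata_present_flag", 0)
        if enabled_flag and present_flag:
            decoded[level] = bitstream.get(f"{level}_metadata", {})
    return decoded

if __name__ == "__main__":
    bs = {"patch_metadata_enabled_flag": 1,
          "patch_metadata_present_flag": 1,
          "patch_metadata": {"scale": 0.5}}
    print(read_metadata_if_flagged(bs))   # {'patch': {'scale': 0.5}}
```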
Abstract:
An encoding device and a method for point cloud encoding are disclosed. The method includes segmenting an area including points representing a three-dimensional (3D) point cloud into multiple voxels. The method also includes generating patch information for each of the multiple voxels that include at least one of the points of the 3D point cloud. The method further includes assigning the patch information of the multiple voxels to the points included in each respective voxel, to generate patches that represent the 3D point cloud based on the patch information of the multiple voxels. Additionally, the method includes generating frames that include pixels that represent the patches. The method also includes encoding the frames to generate a bitstream and transmitting the bitstream.
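The voxel-based step can be sketched as follows; the voxel size, the use of a dominant-axis projection index as a stand-in for real patch information, and the data layout are assumptions made only to keep the example self-contained.

```python
import numpy as np

# Assign each point to a voxel, compute one piece of "patch information"
# per occupied voxel, and propagate it to every point in that voxel.
def voxelize_and_assign(points, voxel_size=1.0):
    points = np.asarray(points, dtype=float)
    voxel_ids = np.floor(points / voxel_size).astype(int)
    patch_info = {}
    for vid in map(tuple, voxel_ids):
        if vid not in patch_info:
            # Assumed stand-in for real patch information: the axis along
            # which this voxel's points would be projected.
            patch_info[vid] = int(np.argmax(np.abs(vid))) if any(vid) else 0
    # Every point inherits the patch information of its voxel.
    return [patch_info[tuple(v)] for v in voxel_ids]

if __name__ == "__main__":
    pts = [[0.2, 0.1, 0.3], [5.0, 0.2, 0.1], [0.1, 4.8, 0.0]]
    print(voxelize_and_assign(pts, voxel_size=1.0))   # [0, 0, 1]
```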
Abstract:
A decoding device, an encoding device, and a method for point cloud decoding are disclosed. The method includes decoding a compressed bitstream into a first set and a second set of 2-D frames. The first set of 2-D frames includes regular patches representing geometry of a 3-D point cloud, and the second set of 2-D frames includes regular patches representing texture of the 3-D point cloud. The method includes identifying, in the first set of 2-D frames, a missed points patch representing geometry of points of the 3-D point cloud not included in the regular patches, and, in the second set of 2-D frames, a missed points patch that represents texture of the points of the 3-D point cloud not included in the regular patches. The method also includes generating, using the sets of 2-D frames and the missed points patches, the 3-D point cloud.
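A rough sketch of combining the regular patches with the missed points patches during reconstruction; the dictionary layout of the decoded frames is an assumed simplification, not the disclosed frame format.

```python
# Regular patches reconstruct most points; the missed points patch carries
# the points the regular patches could not capture.
def reconstruct_with_missed_points(geometry_frames, texture_frames):
    points = []
    for geo, tex in zip(geometry_frames, texture_frames):
        # Regular patches: points recoverable by inverse projection.
        for xyz, rgb in zip(geo["regular_patches"], tex["regular_patches"]):
            points.append({"xyz": xyz, "rgb": rgb})
        # Missed points patch: coordinates/colors stored explicitly.
        for xyz, rgb in zip(geo["missed_points_patch"], tex["missed_points_patch"]):
            points.append({"xyz": xyz, "rgb": rgb})
    return points

if __name__ == "__main__":
    geo = [{"regular_patches": [(0, 0, 0)], "missed_points_patch": [(9, 9, 9)]}]
    tex = [{"regular_patches": [(255, 255, 255)], "missed_points_patch": [(0, 0, 0)]}]
    print(len(reconstruct_with_missed_points(geo, tex)))   # 2 points
```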
Abstract:
An electronic device includes a receiver that receives a compressed bitstream and metadata. The electronic device also includes at least one processor that generates an HDR image by decoding the compressed bitstream, identifies viewpoint information based on an orientation of the electronic device, maps the HDR image onto a surface, and renders a portion of the HDR image based on the metadata and the viewpoint information. A display displays the portion of the HDR image.
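A hedged sketch of the rendering step, assuming an equirectangular mapping of the HDR image onto the viewing surface and a fixed field of view; the tone scaling by a peak-luminance value stands in for whatever use of the metadata the device actually makes.

```python
import numpy as np

# Select the viewport of an equirectangular HDR image from device
# orientation (yaw/pitch), then scale by an assumed peak luminance.
def render_viewport(hdr_image, yaw_deg, pitch_deg, fov_deg=90, peak_nits=1000):
    h, w, _ = hdr_image.shape
    cx = int((yaw_deg % 360) / 360.0 * w)          # viewport center column
    cy = int((90 - pitch_deg) / 180.0 * h)         # viewport center row
    half_w = int(w * fov_deg / 360.0 / 2)
    half_h = int(h * fov_deg / 180.0 / 2)
    # Crop with horizontal wrap-around, clamp vertically, then tone-scale.
    cols = [(cx + dx) % w for dx in range(-half_w, half_w)]
    rows = np.clip(np.arange(cy - half_h, cy + half_h), 0, h - 1)
    viewport = hdr_image[np.ix_(rows, cols)]
    return np.clip(viewport / peak_nits, 0.0, 1.0)

if __name__ == "__main__":
    img = np.random.rand(180, 360, 3) * 1000.0     # toy HDR frame in nits
    print(render_viewport(img, yaw_deg=45, pitch_deg=0).shape)   # (90, 90, 3)
```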
Abstract:
An encoding device, a decoding device, and a method for mesh decoding are disclosed. The method for mesh decoding includes decoding, from a bitstream, a frame that includes pixels. A portion of the pixels of the frame represent geometric locations of vertices of a 3D mesh that are organized into overlapped patches. The method further includes decoding connectivity information from the bitstream. Additionally, the method includes identifying triangles associated with the overlapped patches. The triangles represented in an overlapped patch of the overlapped patches are allocated to a projection direction based on a normal vector associated with each of the triangles of the overlapped patch. The method also includes reconstructing the 3D mesh based on the connectivity information and the overlapped patches.
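The allocation step can be sketched as follows: each triangle in an overlapped patch is assigned to the axis-aligned projection direction most aligned with its normal vector. The six-direction (+/-X, +/-Y, +/-Z) choice and the data layout are assumptions for illustration; the mesh reconstruction itself is omitted.

```python
import numpy as np

# Allocate each triangle to a projection direction based on its normal.
def allocate_triangles(vertices, triangles):
    vertices = np.asarray(vertices, dtype=float)
    allocation = []
    for a, b, c in triangles:
        # Triangle normal from the cross product of two edges.
        n = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        axis = int(np.argmax(np.abs(n)))        # dominant axis: 0=X, 1=Y, 2=Z
        sign = "+" if n[axis] >= 0 else "-"
        allocation.append(f"{sign}{'XYZ'[axis]}")
    return allocation

if __name__ == "__main__":
    verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
    print(allocate_triangles(verts, [(0, 1, 2)]))   # ['+Z']
```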
Abstract:
A method for real-time multi-frame super resolution (SR) of video content is provided. The method includes receiving a bitstream including an encoded video, motion metadata for a plurality of blocks of a frame of video content, and parameters. The motion metadata is estimated from the original video before downsampling and encoding, and is averaged over consecutive blocks. The method includes upscaling the motion metadata for the plurality of blocks. The method also includes upscaling the decoded video using the upscaled motion metadata. The method further includes deblurring and denoising the upscaled video.
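A rough, receiver-side sketch of these steps, assuming per-block motion vectors carried as metadata and a factor-of-two upscaler; nearest-neighbour upscaling, whole-frame motion compensation, and a simple blend stand in for the real upscaling, deblurring, and denoising filters.

```python
import numpy as np

def upscale_motion(motion_vectors, scale):
    # Motion was estimated on the original (pre-downsampling) video, so
    # scaling the vectors aligns them with the upscaled frame grid.
    return [(dx * scale, dy * scale) for dx, dy in motion_vectors]

def upscale_frame(frame, scale):
    # Nearest-neighbour upscaling as a stand-in for the real upscaler.
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def super_resolve(prev_sr, lr_frame, motion_vectors, scale=2):
    mvs = upscale_motion(motion_vectors, scale)
    upscaled = upscale_frame(lr_frame, scale)
    # Motion-compensate the previous super-resolved frame with the average
    # motion vector (a simplification of per-block warping).
    dx, dy = np.mean(mvs, axis=0).astype(int)
    compensated = np.roll(np.roll(prev_sr, dy, axis=0), dx, axis=1)
    # "Deblur/denoise" stand-in: blend current and compensated frames.
    return 0.5 * upscaled + 0.5 * compensated

if __name__ == "__main__":
    lr = np.random.rand(8, 8)
    prev = np.random.rand(16, 16)
    print(super_resolve(prev, lr, [(1, 0), (1, 0)]).shape)   # (16, 16)
```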
Abstract:
A method includes identifying an optimal backlight value for at least one quality level of a first video segment. The method also includes transmitting data for the first video segment. The transmitted data for the first video segment includes a message containing a first set of display adaptation information. The first set of display adaptation information includes the optimal backlight value for the at least one quality level of the first video segment. The method further includes identifying a backlight value for the at least one quality level of a second video segment. The method also includes determining a maximum backlight value change threshold between successive video segments. In addition, the method includes applying temporal smoothing between the optimal backlight value and the backlight value based on the maximum backlight value change threshold.
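The temporal smoothing step can be illustrated with a few lines of Python: the backlight value for the next segment is clamped so it never differs from the previous segment's value by more than the maximum change threshold. The numbers in the example are arbitrary.

```python
def smooth_backlight(prev_value, next_value, max_change):
    # Clamp the change between successive segments to at most max_change,
    # avoiding abrupt backlight jumps.
    delta = next_value - prev_value
    delta = max(-max_change, min(max_change, delta))
    return prev_value + delta

if __name__ == "__main__":
    print(smooth_backlight(prev_value=200, next_value=120, max_change=30))  # 170
```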
Abstract:
An encoding device and methods for point cloud encoding are disclosed. The method for encoding includes generating, using a processor of an encoder, a first frame and a second frame that include patches representing a cluster of points of a three-dimensional (3D) point cloud; identifying a patch to segment in the patches of the first frame and the second frame; determining, in response to identifying the patch, a path representing a boundary between segmented regions within the patch; segmenting the patch along the path into two patches for the first frame and the second frame; encoding the first frame and the second frame to generate a compressed bitstream; and transmitting, using a communication interface operably coupled to the processor, the compressed bitstream.
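A minimal sketch of the split itself, assuming a patch is a set of (u, v) pixel positions and the path is represented as a per-row boundary column; both representations are illustrative, chosen only to show how pixels on either side of the path form the two new patches.

```python
# Split one patch into two along a boundary path: boundary[v] gives the
# u coordinate of the path in row v; pixels left of the path go to one
# new patch, the rest to the other.
def split_patch(patch_pixels, boundary):
    left, right = [], []
    for u, v in patch_pixels:
        (left if u < boundary[v] else right).append((u, v))
    return left, right

if __name__ == "__main__":
    pixels = [(u, v) for v in range(4) for u in range(4)]
    path = {0: 2, 1: 2, 2: 1, 3: 1}            # a jagged boundary
    a, b = split_patch(pixels, path)
    print(len(a), len(b))                       # 6 10
```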