Abstract:
A method and a device for transmitting and obtaining dynamic 3-dimensional (3D) avatar data are provided. The method for transmitting dynamic 3D avatar data may include generating a plurality of data elements configuring the dynamic 3D avatar data; performing first encoding and transmission for a first data element of the plurality of data elements; and performing second encoding and transmission for a second data element of the plurality of data elements. At least one of the first data element and the second data element may be divided into a plurality of sub-data elements.
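
A minimal Python sketch of the element-wise pipeline described above; the element names (geometry, texture), the placeholder encoders, and the byte-range sub-element split are illustrative assumptions rather than the disclosed codecs or partitioning.

# Sketch: per-element encoding/transmission of dynamic 3D avatar data.
# Element names, encoders, and the sub-element split are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    payload: bytes

def encode_first(element: DataElement) -> bytes:
    # Placeholder for the "first encoding" (e.g., a mesh codec).
    return b"ENC1" + element.payload

def encode_second(element: DataElement) -> bytes:
    # Placeholder for the "second encoding" (e.g., a video codec for textures).
    return b"ENC2" + element.payload

def split_into_sub_elements(element: DataElement, n: int) -> list[DataElement]:
    # Divide one data element into n sub-data elements (byte-range split here).
    step = max(1, len(element.payload) // n)
    chunks = [element.payload[i:i + step] for i in range(0, len(element.payload), step)]
    return [DataElement(f"{element.name}/sub{i}", c) for i, c in enumerate(chunks)]

def transmit(bitstream: bytes) -> None:
    print(f"transmitting {len(bitstream)} bytes")

# Example: geometry is sent as a whole, texture is split into sub-elements first.
geometry = DataElement("geometry", b"\x00" * 1024)
texture = DataElement("texture", b"\xff" * 4096)

transmit(encode_first(geometry))
for sub in split_into_sub_elements(texture, 4):
    transmit(encode_second(sub))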
Abstract:
A method of encoding a dynamic mesh includes creating a base mesh through mesh decimation, subdividing the base mesh, extracting displacement information for the subdivided mesh, and encoding the base mesh and the displacement information. In this instance, mesh subdivision information for subdivision of the base mesh is encoded and signaled.
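
A rough numpy sketch of the subdivide-and-displace idea: the base mesh is refined by midpoint subdivision and a displacement vector is stored for each refined vertex. The nearest-original-vertex lookup is a stand-in for the surface fitting an actual encoder would perform, and the subdivision information is represented only as a level count here.

# Sketch: midpoint subdivision of a base mesh and displacement extraction.
# The nearest-vertex displacement is a stand-in for real surface fitting.
import numpy as np

def midpoint_subdivide(vertices, faces):
    """One level of midpoint subdivision: each triangle becomes four."""
    verts = list(map(tuple, vertices))
    edge_mid = {}
    new_faces = []
    for a, b, c in faces:
        mids = []
        for u, v in ((a, b), (b, c), (c, a)):
            key = (min(u, v), max(u, v))
            if key not in edge_mid:
                edge_mid[key] = len(verts)
                verts.append(tuple((np.asarray(verts[u]) + np.asarray(verts[v])) / 2))
            mids.append(edge_mid[key])
        m_ab, m_bc, m_ca = mids
        new_faces += [(a, m_ab, m_ca), (m_ab, b, m_bc), (m_ca, m_bc, c), (m_ab, m_bc, m_ca)]
    return np.asarray(verts), new_faces

def displacements(subdivided_vertices, original_vertices):
    """Displacement of each subdivided vertex toward its nearest original vertex."""
    diffs = original_vertices[None, :, :] - subdivided_vertices[:, None, :]
    nearest = np.argmin((diffs ** 2).sum(-1), axis=1)
    return original_vertices[nearest] - subdivided_vertices

# Toy example: a single-triangle base mesh approximating a denser original surface.
base_v = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
base_f = [(0, 1, 2)]
orig_v = np.array([[0., 0., .1], [1., 0., .1], [0., 1., .1], [.5, .5, .2]])

sub_v, sub_f = midpoint_subdivide(base_v, base_f)
disp = displacements(sub_v, orig_v)
subdivision_info = {"levels": 1}  # signaled alongside the encoded base mesh
print(len(sub_f), "faces,", disp.shape[0], "displacement vectors,", subdivision_info)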
Abstract:
The present invention relates to a method and apparatus for encoding a displacement video using image tiling. A method for encoding multi-dimensional data according to an embodiment of the present disclosure may comprise: converting the multi-dimensional data into one or more frames with two-dimensional characteristics; generating one or more frame groups by grouping the one or more frames in units of a pre-configured number; reconstructing the frames belonging to each frame group into a tiled frame; and generating a bitstream by encoding the tiled frame. Here, the tiled frame may be constructed with one or more blocks, and each block may be constructed by rearranging pixels existing at the same location in the frames.
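
The block construction lends itself to a short numpy illustration: assuming a group of four single-channel frames, the co-located pixels from the four frames are gathered into one 2x2 block of the tiled frame. The group size and the 2x2 block layout are illustrative choices, not values fixed by the disclosure.

# Sketch: rearranging a group of frames into one tiled frame.
# Each 2x2 block of the tiled frame holds the co-located pixel from each of the
# four frames in the group (group size 4 is an illustrative choice).
import numpy as np

def tile_frame_group(frames: np.ndarray) -> np.ndarray:
    """frames: (4, H, W) -> tiled frame (2H, 2W) with one block per pixel position."""
    n, h, w = frames.shape
    assert n == 4, "this sketch assumes a group of four frames"
    # Block entry (a, b) at pixel position (i, j) comes from frame 2*a + b at (i, j).
    return frames.reshape(2, 2, h, w).transpose(2, 0, 3, 1).reshape(2 * h, 2 * w)

group = np.arange(4 * 3 * 3).reshape(4, 3, 3).astype(np.uint8)
tiled = tile_frame_group(group)
print(tiled.shape)          # (6, 6)
print(tiled[:2, :2])        # pixel (0, 0) of frames 0..3 gathered into one block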
Abstract:
Disclosed herein is a method of creating an image stitching workflow including acquiring 360-degree virtual reality (VR) image parameters necessary to make a request for image stitching and create the image stitching workflow, acquiring a list of functions applicable to the image stitching workflow, creating the image stitching workflow based on functions selected from the list of functions, determining the number of media processing entities necessary to perform tasks configuring the image stitching workflow and generating a plurality of media processing entities according to the determined number, and allocating the tasks configuring the image stitching workflow to the plurality of media processing entities.
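
A small Python sketch of the workflow-and-allocation flow; the function names, the one-task-per-function workflow, and the round-robin assignment are assumptions for illustration, not the actual signaling, function repository, or entity provisioning.

# Sketch: building a stitching workflow from selected functions and allocating
# its tasks across media processing entities (round-robin is an assumption).
from dataclasses import dataclass, field

@dataclass
class MediaProcessingEntity:
    entity_id: int
    tasks: list = field(default_factory=list)

def create_workflow(selected_functions):
    # One task per selected function; ordering follows the selection.
    return [{"task": f, "order": i} for i, f in enumerate(selected_functions)]

def allocate(workflow, num_entities):
    entities = [MediaProcessingEntity(i) for i in range(num_entities)]
    for i, task in enumerate(workflow):
        entities[i % num_entities].tasks.append(task)
    return entities

# Hypothetical stitching functions picked from an available-function list.
function_list = ["decode", "project", "seam_estimation", "blend", "encode"]
selected = [f for f in function_list if f != "decode"]

workflow = create_workflow(selected)
for mpe in allocate(workflow, num_entities=2):
    print(mpe.entity_id, [t["task"] for t in mpe.tasks])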
Abstract:
Disclosed is a 360 virtual reality (VR) video encoding method. A 360 VR video encoding method according to the present disclosure includes: dividing the 360 VR video into a plurality of regions based on a division structure of the 360 VR video; generating a region sequence using the divided plurality of regions; generating a bitstream for the generated region sequence; and transmitting the generated bitstream, wherein the region sequence comprises regions having the same position in one or more frames included in the 360 VR video.
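
A numpy sketch of forming one region sequence under an assumed uniform grid division: the same grid cell is cut out of every frame and concatenated into a sequence that would then be encoded into its own bitstream. The uniform grid and the frame dimensions are illustrative assumptions.

# Sketch: extracting a region sequence (co-located regions across frames)
# under an assumed uniform grid division of each 360 VR frame.
import numpy as np

def region_sequence(frames: np.ndarray, grid: tuple, region: tuple) -> np.ndarray:
    """frames: (T, H, W, C); grid: (rows, cols); region: (row, col) index."""
    t, h, w, c = frames.shape
    rows, cols = grid
    rh, rw = h // rows, w // cols
    r, s = region
    return frames[:, r * rh:(r + 1) * rh, s * rw:(s + 1) * rw, :]

video = np.zeros((30, 960, 1920, 3), dtype=np.uint8)  # 30 equirectangular frames
seq = region_sequence(video, grid=(4, 8), region=(1, 3))
print(seq.shape)  # (30, 240, 240, 3): one bitstream would be generated per such sequence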
Abstract:
A method of receiving content in a client is provided. The method may include receiving, from a server, a spatial set identifier (ID) corresponding to a tile group including at least one tile, sending, to the server, a request for first content corresponding to metadata, and receiving, from the server, the first content corresponding to the request.
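
A minimal client-side sketch of that exchange; the message shapes and the simulated transport (plain dictionaries instead of real network I/O) are assumptions for illustration rather than formats taken from the disclosure.

# Sketch: a client that receives a spatial set ID for a tile group and then
# requests the corresponding content. Message shapes are illustrative assumptions.
def receive_spatial_set_id(server_message: dict) -> str:
    # The server associates a spatial set ID with a group of one or more tiles.
    return server_message["spatial_set_id"]

def build_content_request(spatial_set_id: str, metadata: dict) -> dict:
    # Request the first content described by the metadata for that spatial set.
    return {"spatial_set_id": spatial_set_id, "metadata": metadata}

def handle_response(response: dict) -> bytes:
    return response["content"]

# Simulated exchange (no real network I/O in this sketch).
announced = {"spatial_set_id": "set-7", "tiles": [12, 13, 14]}
set_id = receive_spatial_set_id(announced)
request = build_content_request(set_id, metadata={"viewport": (90, 0)})
response = {"content": b"tile-group bitstream"}
print(request, len(handle_response(response)), "bytes received")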
Abstract:
Provided is a parallax minimization stitching method and apparatus using control points in an overlapping region. A parallax minimization stitching method may include defining a plurality of control points in an overlapping region of a first image and a second image received from a plurality of cameras, performing a first geometric correction by applying a homography to the control points, defining a plurality of patches based on the control points, and performing a second geometric correction by mapping the patches.
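
A condensed numpy sketch of the two-stage correction: control points in the overlapping region are first moved by a global homography, and the residuals at the control points then drive a second, patch-wise correction. The homography values, the matched target coordinates, and the residual-based patch model are placeholders for the actual estimation and mapping.

# Sketch: two-stage geometric correction for stitching.
# Stage 1: global homography applied to the control points in the overlap.
# Stage 2: per-patch correction around each control point (the residual-based
# model here is a simplifying assumption, not the disclosed mapping).
import numpy as np

def apply_homography(points: np.ndarray, H: np.ndarray) -> np.ndarray:
    """points: (N, 2) -> (N, 2) after the projective transform H (3x3)."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    mapped = homogeneous @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def patch_corrections(mapped: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Residual per control point; each residual drives its surrounding patch."""
    return targets - mapped

# Control points detected in the overlapping region (illustrative coordinates).
control_pts = np.array([[100., 50.], [160., 52.], [130., 120.]])
targets = np.array([[102., 48.], [161., 53.], [128., 119.]])  # matches in the other image

H = np.array([[1.0, 0.01, 2.0],
              [0.0, 1.00, -1.0],
              [0.0, 0.00, 1.0]])   # placeholder global homography

after_first = apply_homography(control_pts, H)        # first geometric correction
residuals = patch_corrections(after_first, targets)   # drives the second, patch-wise correction
print(after_first.round(2))
print(residuals.round(2))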
Abstract:
The present invention proposes a method and apparatus for correcting a motion of panorama video captured by a plurality of cameras. The method of the present invention includes performing global motion estimation for estimating smooth motion trajectories from the panorama video, performing global motion correction for correcting a motion in each frame based on the estimated smooth motion trajectories, performing local motion correction for correcting a motion of each of the plurality of cameras on the globally corrected results, and performing warping on the results of the local motion correction.
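
A compact numpy sketch of the global stage: per-frame translations are accumulated into a trajectory, smoothed with a moving average, and the difference is applied back to each frame as a correction shift. The translation-only motion model and the moving-average smoother are simplifying assumptions, and the per-camera local correction and final warping are not shown.

# Sketch: global motion estimation and correction for a panorama sequence.
# A translation-only model and moving-average smoothing are simplifying assumptions;
# the per-camera local correction and final warping are not shown.
import numpy as np

def smooth_trajectory(per_frame_motion: np.ndarray, window: int = 5):
    """Return the cumulative trajectory and its moving-average smoothing."""
    trajectory = np.cumsum(per_frame_motion, axis=0)
    kernel = np.ones(window) / window
    smoothed = np.stack([np.convolve(trajectory[:, d], kernel, mode="same")
                         for d in range(trajectory.shape[1])], axis=1)
    return trajectory, smoothed

def correction_shifts(trajectory: np.ndarray, smoothed: np.ndarray) -> np.ndarray:
    """Shift to apply to each frame so it follows the smooth trajectory."""
    return smoothed - trajectory

rng = np.random.default_rng(0)
motion = rng.normal(0, 1, size=(60, 2))           # estimated (dx, dy) per frame
trajectory, smoothed = smooth_trajectory(motion)
shifts = correction_shifts(trajectory, smoothed)  # applied per frame before warping
print(shifts.shape, shifts[:3].round(2))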