Abstract:
Provided is a sensory effect adaptation method performed by an adaptation engine, the method including identifying first metadata associated with an object in a virtual world and used to describe the object, and converting the identified first metadata into second metadata to be applied to a sensory device in a real world, wherein the second metadata is obtained by converting the first metadata based on a scene determined by a gaze of a user in the virtual world.
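As a minimal sketch of the conversion described above, the adaptation engine can be modeled as a function that keeps only effects attached to objects inside the gaze-determined scene. The function and field names (`adapt_sensory_metadata`, `object`, `effect`, `intensity`) are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical sketch: convert object-level effect metadata (first
# metadata) into device commands (second metadata), keeping only the
# effects whose objects fall in the scene determined by the user's gaze.
def adapt_sensory_metadata(first_metadata, visible_objects):
    second_metadata = []
    for entry in first_metadata:
        if entry["object"] in visible_objects:
            second_metadata.append({
                "device": entry["effect"],      # e.g. "wind", "vibration"
                "intensity": entry["intensity"],
            })
    return second_metadata
```

For example, a wind effect bound to a fan object is emitted only while the fan is in view; effects on off-screen objects are dropped from the second metadata.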
Abstract:
A method and apparatus for generating immersive media and a mobile terminal using the method and apparatus are disclosed. An apparatus for generating immersive media comprises an image generation unit generating image data based on image signals, a sensory effect data generation unit generating sensory effect data by obtaining information related to a sensory effect to be provided in conjunction with the image data, and an immersive media generation unit generating immersive media based on the sensory effect data.
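The three units above can be sketched as a small pipeline; the class and function names below are invented for illustration, and the record layout of the sensory effect data is an assumption.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical sketch of the three-unit architecture described above.

@dataclass
class ImmersiveMedia:
    image_data: List[bytes]
    sensory_effects: List[Dict]  # e.g. {"frame": 0, "effect": "wind", "level": 0.7}

def generate_image_data(image_signals):
    """Image generation unit: wrap raw signals as per-frame byte data."""
    return [bytes(sig) for sig in image_signals]

def generate_sensory_effect_data(effect_info):
    """Sensory effect data generation unit: normalize effect records so
    they can be synchronized with frames of the image data."""
    return [{"frame": f, "effect": e, "level": lvl} for f, e, lvl in effect_info]

def generate_immersive_media(image_signals, effect_info):
    """Immersive media generation unit: bundle both streams together."""
    return ImmersiveMedia(generate_image_data(image_signals),
                          generate_sensory_effect_data(effect_info))
```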
Abstract:
Disclosed is a method of compressing a video frame using dual object extraction and object trajectory information in a video encoding and decoding process, the method including: segmenting a background and an object from a reference frame in a video to extract the object; extracting and encoding motion information of the object; determining, in a decoding process, whether a frame is a reference frame based on the encoded video; if the frame is determined to be the reference frame, generating background information of a prediction frame based on the reference frame; and generating the prediction frame by extracting an object of the reference frame and referring to header information to reflect the motion information of the object.
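The decoding side of the scheme above can be illustrated on a toy frame model: reuse the reference background and paste the extracted object shifted by its motion vector. The 2-D grid representation, single-object mask, and function names are simplifying assumptions for the sketch.

```python
# Illustrative sketch: reconstruct a prediction frame from a reference
# frame plus the object's motion information (dr, dc).

def extract_object(frame, background_value=0):
    """Segment the object as all non-background cells, returning
    {(row, col): value} for the object and the background grid."""
    obj = {(r, c): v for r, row in enumerate(frame)
           for c, v in enumerate(row) if v != background_value}
    bg = [[background_value if (r, c) in obj else v
           for c, v in enumerate(row)] for r, row in enumerate(frame)]
    return obj, bg

def predict_frame(reference, motion, background_value=0):
    """Generate a prediction frame: reuse the reference background and
    paste the extracted object shifted by the motion vector."""
    obj, bg = extract_object(reference, background_value)
    pred = [row[:] for row in bg]
    dr, dc = motion
    for (r, c), v in obj.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(pred) and 0 <= nc < len(pred[0]):
            pred[nr][nc] = v
    return pred
```

Only the reference frame and the small motion vector need to be encoded for each predicted frame, which is the source of the compression gain described above.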
Abstract:
A method for generating a super resolution image may comprise up-scaling an input low resolution image; determining a directivity for each patch included in the up-scaled image; selecting an orientation-specified neural network or an orientation-non-specified neural network according to the directivity of the patch; applying the selected neural network to the patch; and obtaining a super resolution image by combining one or more patches output from the orientation-specified neural network and the orientation-non-specified neural network.
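The patch-routing step above can be sketched with a structure-tensor style coherence measure: patches whose gradients agree on one dominant orientation go to the orientation-specified network, the rest to the non-specified one. The coherence formula and the threshold value are illustrative choices, not the disclosed directivity test.

```python
import math

# Hypothetical sketch of directivity-based routing for one patch.

def patch_directivity(patch):
    """Return gradient coherence in [0, 1]; 1 means one dominant direction."""
    gx, gy = [], []
    for r in range(len(patch) - 1):
        for c in range(len(patch[0]) - 1):
            gx.append(patch[r][c + 1] - patch[r][c])  # horizontal gradient
            gy.append(patch[r + 1][c] - patch[r][c])  # vertical gradient
    # Structure-tensor coherence: (lam1 - lam2) / (lam1 + lam2).
    jxx = sum(x * x for x in gx)
    jyy = sum(y * y for y in gy)
    jxy = sum(x * y for x, y in zip(gx, gy))
    tr = jxx + jyy
    if tr == 0:
        return 0.0
    return math.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / tr

def route_patch(patch, oriented_net, generic_net, threshold=0.5):
    """Apply the orientation-specified network to strongly directional
    patches and the orientation-non-specified network otherwise."""
    net = oriented_net if patch_directivity(patch) > threshold else generic_net
    return net(patch)
```

A vertical edge yields coherence near 1 and is sent to the oriented model; a flat or texture-less patch yields coherence 0 and takes the generic path.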
Abstract:
A method for removing compressed Poisson noises in an image, based on deep neural networks, may comprise generating a plurality of block-aggregation images by performing block transform on low-frequency components of an input image; obtaining a plurality of restored block-aggregation images by inputting the plurality of block-aggregation images into a first deep neural network; generating a low-band output image from which noises for the low-frequency components are removed by performing inverse block transform on the plurality of restored block-aggregation images; and generating an output image from which compressed Poisson noises are removed by adding the low-band output image to a high-band output image from which noises for high-frequency components of the input image are removed.
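The low-band path above can be shown as a skeleton with 2x2 blocks: same-position samples of each block are gathered into block-aggregation images, each is restored, and the inverse scatter rebuilds the image. The first deep neural network is replaced by a pluggable callable, and the block size is an assumption; only the aggregation bookkeeping follows the text.

```python
# Skeleton of the low-band denoising path with B x B block aggregation.

B = 2  # block size (illustrative assumption)

def block_aggregate(image):
    """Group same-position samples of each BxB block into B*B
    block-aggregation images."""
    h, w = len(image), len(image[0])
    return [[[image[r + dr][c + dc] for c in range(0, w, B)]
             for r in range(0, h, B)]
            for dr in range(B) for dc in range(B)]

def inverse_block_aggregate(aggs, h, w):
    """Scatter the block-aggregation images back into an h x w image."""
    image = [[0] * w for _ in range(h)]
    for i, agg in enumerate(aggs):
        dr, dc = divmod(i, B)
        for br, row in enumerate(agg):
            for bc, v in enumerate(row):
                image[br * B + dr][bc * B + dc] = v
    return image

def denoise_low_band(image, network):
    """Restore each block-aggregation image with `network`, then invert
    the block transform to obtain the low-band output image."""
    restored = [network(agg) for agg in block_aggregate(image)]
    return inverse_block_aggregate(restored, len(image), len(image[0]))
```

With an identity `network` the pipeline is a round trip, which is a convenient sanity check that the aggregation and its inverse match.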
Abstract:
A method and an apparatus for authoring machine learning-based immersive media are provided. The apparatus determines an immersive effect type of an original image of image contents to be converted into immersive media by using an immersive effect classifier trained on existing immersive media in which an immersive effect has already been added to an image, detects an immersive effect section of the original image based on the immersive effect type determination result, and generates metadata for the detected immersive effect section.
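The section-detection step above can be sketched as grouping consecutive frames that the classifier labels with the same immersive effect type, then emitting metadata per run. The label values, the `None` convention for "no effect", and the metadata field names are assumptions for illustration.

```python
# Illustrative sketch: turn per-frame classifier labels into
# immersive effect section metadata.

def detect_effect_sections(frame_labels, fps=30):
    """frame_labels: classifier output per frame, e.g. "motion",
    "wind", or None. Returns one metadata dict per contiguous section."""
    sections = []
    i, n = 0, len(frame_labels)
    while i < n:
        label = frame_labels[i]
        j = i
        while j < n and frame_labels[j] == label:
            j += 1  # extend the run of identical labels
        if label is not None:
            sections.append({"type": label,
                             "start_sec": i / fps,
                             "end_sec": j / fps})
        i = j
    return sections
```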
Abstract:
A method for providing a coping service based on context-aware information includes: recognizing a context through interworking with devices provided in a space, and generating context-aware information; and searching for a service ID corresponding to the context-aware information from an awareness information and service mapping table in which awareness information occurrence times, occurrence place codes, and service IDs are stored according to multiple awareness information IDs. Further, the method includes searching for workflow information corresponding to the searched service ID from a service workflow table in which workflow information is stored according to service IDs; and providing a service corresponding to the context awareness in accordance with the searched workflow.
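The two table lookups above can be reduced to a pair of dictionaries keyed as the abstract describes. The table contents, key tuple, and step names below are invented for illustration.

```python
# Hypothetical sketch of the mapping-table and workflow-table lookups.

MAPPING_TABLE = {
    # (awareness info ID, occurrence time slot, occurrence place code) -> service ID
    ("FALL_DETECTED", "night", "P-101"): "SVC-EMERGENCY-CALL",
}

WORKFLOW_TABLE = {
    # service ID -> ordered workflow steps
    "SVC-EMERGENCY-CALL": ["turn_on_lights", "notify_guardian", "call_center"],
}

def provide_coping_service(awareness_id, time_slot, place_code):
    """Map context-aware information to a service ID, fetch the
    service's workflow, and return the ordered steps to execute."""
    service_id = MAPPING_TABLE.get((awareness_id, time_slot, place_code))
    if service_id is None:
        return None  # no registered service for this context
    return WORKFLOW_TABLE[service_id]
```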
Abstract:
An apparatus and a method for predicting an error probability are provided, including: generating a first annotation for training input data by using an algorithm; performing machine learning for an annotation evaluation model based on the first annotation and a correction history for the first annotation; generating a second annotation for evaluation input data by using the algorithm; and predicting the error probability of the second annotation based on the annotation evaluation model.
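The loop above can be sketched under a deliberately simplified assumption: the annotation evaluation model is reduced to a per-label error rate learned from which first annotations were corrected. The real apparatus would use a machine-learned model; this frequency table only illustrates the data flow.

```python
from collections import Counter

# Hedged sketch: learn an error-rate "model" from the correction
# history, then predict the error probability of a second annotation.

def learn_evaluation_model(first_annotations, correction_history):
    """first_annotations: label per item; correction_history: set of
    indices whose first annotation was corrected by a reviewer.
    Returns label -> observed error rate."""
    total, wrong = Counter(), Counter()
    for i, label in enumerate(first_annotations):
        total[label] += 1
        if i in correction_history:
            wrong[label] += 1
    return {label: wrong[label] / total[label] for label in total}

def predict_error_probability(second_annotation, model):
    """Predict the error probability of a new (second) annotation;
    unseen labels fall back to 0.5 (an assumed uncertainty default)."""
    return model.get(second_annotation, 0.5)
```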
Abstract:
Disclosed is a multi-point connection control apparatus and method for a video conference service. The apparatus may include a front end processor configured to receive video streams and audio streams from user terminals of participants using the video conference service, and generate screen configuration information for providing the video conference service based on the received video streams and the received audio streams, and a back end processor configured to receive at least one of the video streams, at least one of the audio streams, and the screen configuration information from the front end processor, and generate a mixed video for the video conference service based on the received at least one of the video streams, at least one of the audio streams, and the screen configuration information.
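The front-end/back-end split above can be illustrated with one plausible policy: the front end derives screen configuration information by picking the loudest participant as the main tile, and the back end composes the mixed video from that configuration. The active-speaker policy, stream representation, and layout names are assumptions, not the disclosed design.

```python
# Hypothetical sketch of the two processors in the multi-point
# connection control apparatus.

def front_end_process(video_streams, audio_streams):
    """Generate screen configuration information from the received
    streams: loudest participant becomes the main tile."""
    energies = {pid: sum(s * s for s in samples)
                for pid, samples in audio_streams.items()}
    main = max(energies, key=energies.get)
    config = {"main": main,
              "thumbnails": sorted(p for p in video_streams if p != main)}
    return video_streams, audio_streams, config

def back_end_process(video_streams, audio_streams, config):
    """Generate the mixed video for the conference: here, an ordered
    list of (tile role, participant) pairs standing in for composition."""
    return ([("main", config["main"])] +
            [("thumb", p) for p in config["thumbnails"]])
```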
Abstract:
Provided are a method of providing a video conference service and apparatuses performing the same, the method including determining contributions of a plurality of participants to a video conference based on first video signals and first audio signals of devices of the plurality of participants participating in the video conference, and generating a second video signal and a second audio signal to be transmitted to the devices of the plurality of participants based on the contributions.
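The contribution step above can be sketched with invented proxies: score each participant from simple first-signal statistics (audio magnitude plus video activity), normalize the scores, and weight the second audio signal by the resulting contributions. The scoring formula and the normalization to a sum of 1 are assumed choices.

```python
# Illustrative sketch: contributions from first signals, then a
# contribution-weighted second audio signal.

def compute_contributions(first_video, first_audio):
    """Return participant -> contribution in [0, 1], summing to 1.
    first_video holds per-frame activity counts (an invented proxy)."""
    scores = {pid: sum(abs(s) for s in first_audio[pid]) +
                   sum(first_video[pid])
              for pid in first_audio}
    total = sum(scores.values()) or 1  # avoid division by zero
    return {pid: score / total for pid, score in scores.items()}

def mix_second_audio(first_audio, contributions):
    """Generate the second audio signal as a contribution-weighted sum
    of the first audio signals."""
    n = min(len(s) for s in first_audio.values())
    return [sum(contributions[pid] * first_audio[pid][i]
                for pid in first_audio) for i in range(n)]
```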