Abstract:
A method and an apparatus for authoring machine learning-based immersive media are provided. The apparatus determines an immersive effect type of an original image of image contents to be converted into immersive media by using an immersive effect classifier trained on existing immersive media in which immersive effects have already been added to images, detects an immersive effect section of the original image based on the result of the immersive effect type determination, and generates metadata for the detected immersive effect section.
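The abstract does not specify how sections are detected or how metadata is structured; the following is only an illustrative sketch, assuming a per-frame classifier, a hypothetical set of effect types, and a simple timestamp-based metadata format:

# Illustrative sketch (not the patented implementation): classify each frame of
# an original video into an assumed set of immersive effect types, group
# consecutive frames with the same predicted type into effect sections, and
# emit simple metadata for each section. classify_frame is a stand-in for a
# classifier trained on existing immersive media.

from dataclasses import dataclass
from typing import Callable, List

EFFECT_TYPES = ["none", "motion", "wind", "vibration"]  # assumed example types

@dataclass
class EffectSection:
    effect_type: str
    start_frame: int
    end_frame: int

def detect_effect_sections(
    frame_count: int,
    classify_frame: Callable[[int], str],  # stand-in for the learned classifier
) -> List[EffectSection]:
    sections: List[EffectSection] = []
    current_type, start = None, 0
    for i in range(frame_count):
        effect = classify_frame(i)
        if effect != current_type:
            if current_type not in (None, "none"):
                sections.append(EffectSection(current_type, start, i - 1))
            current_type, start = effect, i
    if current_type not in (None, "none"):
        sections.append(EffectSection(current_type, start, frame_count - 1))
    return sections

def sections_to_metadata(sections: List[EffectSection], fps: float = 30.0) -> list:
    # Express each detected section as timestamped metadata.
    return [
        {
            "effect": s.effect_type,
            "start_sec": round(s.start_frame / fps, 3),
            "end_sec": round(s.end_frame / fps, 3),
        }
        for s in sections
    ]

if __name__ == "__main__":
    # Toy classifier: frames 30-89 look like "motion", everything else "none".
    toy = lambda i: "motion" if 30 <= i < 90 else "none"
    print(sections_to_metadata(detect_effect_sections(120, toy)))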
Abstract:
Disclosed is a sensory information providing apparatus. The sensory information providing apparatus may comprise a learning model database storing a plurality of learning models related to sensory effect information for a plurality of videos; and a video analysis engine generating the plurality of learning models by analyzing the plurality of videos and their sensory effect meta information to extract sensory effect association information, and extracting sensory information corresponding to an input video stream by analyzing the input video stream based on the plurality of learning models.
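A minimal sketch of the two described components follows, under assumed interfaces: learning models are plain callables keyed by effect type, and the analysis engine applies each stored model to an input stream; in the described apparatus the models would instead be trained from videos and their sensory effect meta information.

from typing import Callable, Dict, List

Frame = List[float]  # stand-in for decoded frame features
Model = Callable[[List[Frame]], dict]

class LearningModelDatabase:
    def __init__(self) -> None:
        self._models: Dict[str, Model] = {}

    def store(self, effect_type: str, model: Model) -> None:
        self._models[effect_type] = model

    def all_models(self) -> Dict[str, Model]:
        return dict(self._models)

class VideoAnalysisEngine:
    def __init__(self, db: LearningModelDatabase) -> None:
        self.db = db

    def extract_sensory_information(self, stream: List[Frame]) -> List[dict]:
        # Apply every stored model to the input stream and keep the hits.
        results = []
        for effect_type, model in self.db.all_models().items():
            info = model(stream)
            if info:
                results.append({"effect": effect_type, **info})
        return results

if __name__ == "__main__":
    db = LearningModelDatabase()
    # Toy "model": flag wind when the mean per-frame feature sum exceeds a threshold.
    db.store("wind", lambda s: {"intensity": 0.7} if sum(map(sum, s)) / max(len(s), 1) > 1.0 else {})
    engine = VideoAnalysisEngine(db)
    print(engine.extract_sensory_information([[0.6, 0.9], [0.8, 0.7]]))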
Abstract:
A visualizing apparatus for social network elements collects a user's social network relationship information, community information, and content information, generates relationship data among the user, the contents, and the community from the collected information, and visualizes the associations among the user, the contents, and the community using that relationship data.
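The abstract leaves the relationship-data format open; as a sketch only, the collected information could be turned into typed edges (the input shapes and relation names below are assumed) that a renderer would then lay out:

from typing import Dict, List, Tuple

Edge = Tuple[str, str, str]  # (source, relation, target)

def build_relationship_data(
    user: str,
    friends: List[str],
    communities: List[str],
    contents: Dict[str, str],  # content id -> owning community (assumed shape)
) -> List[Edge]:
    # Build user-user, user-community, and user-content edges for visualization.
    edges: List[Edge] = []
    edges += [(user, "knows", f) for f in friends]
    edges += [(user, "member_of", c) for c in communities]
    for content_id, community in contents.items():
        edges.append((user, "authored", content_id))
        edges.append((content_id, "posted_in", community))
    return edges

if __name__ == "__main__":
    print(build_relationship_data(
        user="alice",
        friends=["bob"],
        communities=["photography"],
        contents={"post-42": "photography"},
    ))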
Abstract:
Disclosed are a multi-point connection control apparatus and method for a video conference service. The apparatus may include a front end processor configured to receive video streams and audio streams from user terminals of participants in the video conference service and to generate screen configuration information for providing the service based on the received video and audio streams, and a back end processor configured to receive at least one of the video streams, at least one of the audio streams, and the screen configuration information from the front end processor, and to generate a mixed video for the video conference service based on the received streams and screen configuration information.
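Purely as an illustration of the split between the two processors, the sketch below assumes simplified stream objects: the front end derives screen configuration information (here, a grid layout ordered by audio level), and the back end composes a mixed-video description from the streams it is handed plus that configuration; actual audio/video mixing is beyond the sketch.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Stream:
    participant: str
    audio_level: float  # assumed per-participant loudness estimate

class FrontEndProcessor:
    def build_screen_configuration(self, streams: List[Stream]) -> Dict:
        # Order tiles by audio level so the loudest speaker comes first.
        ordered = sorted(streams, key=lambda s: s.audio_level, reverse=True)
        return {"layout": "grid", "tiles": [s.participant for s in ordered]}

class BackEndProcessor:
    def mix(self, streams: List[Stream], screen_config: Dict) -> Dict:
        # Compose only the participants that both appear in the configuration
        # and were actually forwarded to the back end.
        by_name = {s.participant: s for s in streams}
        return {
            "layout": screen_config["layout"],
            "composited": [p for p in screen_config["tiles"] if p in by_name],
        }

if __name__ == "__main__":
    streams = [Stream("alice", 0.2), Stream("bob", 0.9), Stream("carol", 0.5)]
    config = FrontEndProcessor().build_screen_configuration(streams)
    print(BackEndProcessor().mix(streams, config))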
Abstract:
The present disclosure relates to a method and apparatus for a thing collaboration service based on a social community. More specifically, the method includes: forming a social community among users who share the purpose of detecting and preventing risks in a predetermined place and environment; managing the social community so that the thing terminals owned by the individual users are connected to one another on the basis of the formed social community; collecting thing status information from the thing terminal of each user; detecting an occurrence of a risk, by analyzing the collected thing status information, so as to predict an accident or to detect the point in time at which an accident occurs; and sharing information about the detected risk with the users of the social community.
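The disclosure does not state how the status information is analyzed; as a sketch only, a simple threshold rule per sensor type (the sensors and thresholds below are assumed) stands in for the analysis, followed by sharing the detected risk with every community member:

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ThingStatus:
    owner: str
    sensor: str
    value: float

RISK_THRESHOLDS = {"smoke": 0.3, "temperature": 60.0}  # assumed example rules

def detect_risks(reports: List[ThingStatus]) -> List[Dict]:
    # Flag a risk whenever a reading meets or exceeds its assumed threshold.
    risks = []
    for r in reports:
        limit = RISK_THRESHOLDS.get(r.sensor)
        if limit is not None and r.value >= limit:
            risks.append({"owner": r.owner, "sensor": r.sensor, "value": r.value})
    return risks

def share_with_community(members: List[str], risks: List[Dict]) -> Dict[str, List[Dict]]:
    # Every community member receives every detected risk event.
    return {member: list(risks) for member in members}

if __name__ == "__main__":
    reports = [ThingStatus("alice", "smoke", 0.45), ThingStatus("bob", "temperature", 21.0)]
    print(share_with_community(["alice", "bob", "carol"], detect_risks(reports)))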
Abstract:
A thing cooperation service system and method, and a modeling tool therefor, are provided. The thing cooperation service system includes a storage manager configured to store and manage thing control specifications for a plurality of things; and a modeling tool configured to present to a user, based on information about the user and the thing control specifications, those things among the plurality of things that are in a specific social relationship with the user, and to generate an application providing a thing cooperation service using a thing selected from the things in that specific social relationship.
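The data shapes below are assumed for illustration: the storage manager holds per-thing control specifications, and the modeling tool filters things that stand in a given social relationship with a user and wraps a selected thing's specification into a simple application description.

from typing import Dict, List

class StorageManager:
    def __init__(self) -> None:
        self._specs: Dict[str, Dict] = {}

    def store_spec(self, thing_id: str, spec: Dict) -> None:
        self._specs[thing_id] = spec

    def get_spec(self, thing_id: str) -> Dict:
        return self._specs[thing_id]

class ModelingTool:
    def __init__(self, storage: StorageManager, relationships: Dict[str, List[str]]) -> None:
        self.storage = storage
        # Assumed shape: user -> ids of things in the specific social relationship.
        self.relationships = relationships

    def things_for_user(self, user: str) -> List[str]:
        return self.relationships.get(user, [])

    def generate_application(self, user: str, thing_id: str) -> Dict:
        if thing_id not in self.things_for_user(user):
            raise ValueError("thing is not in the required social relationship")
        return {"user": user, "thing": thing_id, "controls": self.storage.get_spec(thing_id)}

if __name__ == "__main__":
    storage = StorageManager()
    storage.store_spec("lamp-1", {"commands": ["on", "off", "dim"]})
    tool = ModelingTool(storage, {"alice": ["lamp-1"]})
    print(tool.generate_application("alice", "lamp-1"))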
Abstract:
Provided herein is an education service system including a user device, which reproduces provided learning content and generates device input information from user input; a learning situation recognition unit, which calculates user state information based on the device input information and selects recommended content according to the user state information; and a learning content providing unit, which provides learning content corresponding to the recommended content, from among a plurality of pieces of pre-stored learning content, to the user device.
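As a sketch only, under assumed signals: the recognition step turns device input (answer correctness and response times here, purely illustrative) into a coarse user state, and the providing step picks a piece of pre-stored learning content matching that state.

from typing import Dict, List

LEARNING_CONTENT: Dict[str, List[str]] = {  # assumed pre-stored content pools
    "struggling": ["review-basics", "worked-examples"],
    "on_track": ["next-lesson"],
    "advanced": ["challenge-problems"],
}

def recognize_user_state(correct: int, total: int, avg_response_sec: float) -> str:
    # Map device input information to a coarse user state (assumed rules).
    accuracy = correct / total if total else 0.0
    if accuracy < 0.5 or avg_response_sec > 30.0:
        return "struggling"
    if accuracy > 0.9 and avg_response_sec < 10.0:
        return "advanced"
    return "on_track"

def recommend_content(state: str) -> str:
    # Return the first available item for the recognized state.
    return LEARNING_CONTENT[state][0]

if __name__ == "__main__":
    state = recognize_user_state(correct=4, total=10, avg_response_sec=18.0)
    print(state, "->", recommend_content(state))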