Abstract:
Disclosed is an apparatus and method for tagging a topic to content. The apparatus may include an unstructured data-based topic generator configured to generate a topic model including an unstructured data-based topic based on content and unstructured data, a viewer group analyzer configured to analyze a characteristic of a viewer group including a viewer of the content based on a social network of the viewer and viewing situation information of the viewer, a multifaceted topic generator configured to generate a multifaceted topic based on the topic model and the characteristic of the viewer group, a content divider configured to divide the content into a plurality of scenes, and a tagger configured to tag the multifaceted topic to the scenes.
Abstract:
Provided is a method of authoring a video scene and metadata, and of providing a GUI screen to a user for authoring the video scene and the metadata. The method includes generating a GUI screen configuration for an input of data including a video, sound, subtitles, and a script; generating a GUI screen configuration for extracting and editing shots from the data; generating a GUI screen configuration for generating and editing scenes based on the shots; generating a GUI screen configuration for automatically generating and editing metadata of the scenes; and generating a GUI screen configuration for storing the scenes and the metadata in a database.
Abstract:
Disclosed is a method and apparatus for generating a title and a keyframe of a video. According to an embodiment of the present disclosure, the method includes: selecting a main subtitle by analyzing subtitles of the video; selecting the keyframe corresponding to the main subtitle; extracting content information of the keyframe by analyzing the keyframe; generating the title of the video using metadata of the video, the main subtitle, and the content information of the keyframe; and outputting the title and the keyframe of the video.
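The pipeline above (main subtitle selection, then title composition from metadata and keyframe content) can be sketched as follows. This is a minimal illustration, not the patented method: the scoring function, the `series` metadata field, and the keyframe content string are all assumptions made for the example.

```python
from collections import Counter

def select_main_subtitle(subtitles, metadata_keywords):
    """Score each subtitle by overlap with video metadata keywords and by
    average term frequency across the subtitle track; return the top one.
    (Illustrative heuristic, not the disclosed analysis.)"""
    tf = Counter(w for s in subtitles for w in s.lower().split())

    def score(s):
        words = s.lower().split()
        overlap = sum(1 for w in words if w in metadata_keywords)
        freq = sum(tf[w] for w in words) / max(len(words), 1)
        return overlap * 2 + freq

    return max(subtitles, key=score)

def make_title(metadata, main_subtitle, keyframe_info):
    # Combine video metadata, the main subtitle, and keyframe content info
    return f"{metadata['series']}: {main_subtitle} ({keyframe_info})"

subs = [
    "the storm hits the coast",
    "weather report begins",
    "the storm flood damages homes",
]
main = select_main_subtitle(subs, {"storm", "flood"})
title = make_title({"series": "Evening News"}, main, "flooded street")
```

Here the third subtitle wins because it matches two metadata keywords; the real disclosure would additionally analyze the keyframe image itself to extract the content information.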
Abstract:
There are provided an apparatus and method for tracking temporal variation of a video content context using dynamically generated metadata, wherein the method includes generating static metadata on the basis of internal data held during an initial publication of video content and tagging the generated static metadata to the video content, collecting external data related to the video content generated after the video content is published, generating dynamic metadata related to the video content on the basis of the collected external data and tagging the generated dynamic metadata to the video content, repeating regeneration and tagging of the dynamic metadata with an elapse of time, tracking a change in content of the dynamic metadata, and generating and providing a trend analysis report corresponding to a result of tracking the change in the content.
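The repeated regeneration and tracking step can be pictured as comparing the dynamic tag set between rounds. The sketch below is a simplified assumption of what a trend report might contain (tags gained and lost per round); the tag names and report structure are invented for illustration.

```python
def track_trend(history):
    """history: list of dynamic-metadata tag sets, one per regeneration
    round. Report which tags appeared or disappeared between consecutive
    rounds, approximating the disclosed change tracking."""
    report = []
    for prev, cur in zip(history, history[1:]):
        report.append({"gained": cur - prev, "lost": prev - cur})
    return report

# Three rounds of dynamic metadata for one video (illustrative tags)
rounds = [{"drama", "actorA"}, {"drama", "actorA", "award"}, {"drama", "award"}]
trend = track_trend(rounds)
```

A trend analysis report would then summarize, for example, that the "award" topic emerged after publication while interest in "actorA" faded.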
Abstract:
Provided are a method and apparatus for video data augmentation that automatically construct a large amount of learning data using video data. An apparatus for augmenting video data according to an embodiment of this disclosure includes: a feature information check unit checking feature information including a content feature, a flow feature, and a class feature of a sub video of a predetermined unit constituting an original video; a section check unit selecting a video section including at least one sub video on the basis of the feature information of the sub video; and a video augmentation unit extracting at least one substitute sub video corresponding to the selected video section from multiple pre-stored sub videos, and applying the extracted at least one substitute sub video to the selected video section to generate an augmented video.
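The substitution step can be sketched as follows, representing each sub video by a (content, flow, class) feature tuple. The feature values and the match-count similarity are assumptions for the example; the disclosed apparatus would derive these features from the video itself.

```python
def similarity(a, b):
    # Count matching feature fields between two sub-video descriptors
    return sum(x == y for x, y in zip(a, b))

def augment(original, section, pool):
    """Replace the sub videos in `section` (an index range) with the most
    similar pre-stored substitutes to produce an augmented video."""
    out = list(original)
    for i in range(*section):
        feats = original[i]
        # Pick the closest substitute that differs from the original
        best = max((p for p in pool if p != feats),
                   key=lambda p: similarity(p, feats))
        out[i] = best
    return out

original = [
    ("indoor", "static", "dialog"),
    ("outdoor", "pan", "action"),
    ("indoor", "static", "dialog"),
]
pool = [
    ("indoor", "static", "monolog"),
    ("outdoor", "zoom", "action"),
    ("beach", "pan", "action"),
]
aug = augment(original, (1, 2), pool)
```

Only the selected section is replaced, so the augmented video keeps the original's surrounding flow while varying its content.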
Abstract:
The present invention relates to an apparatus and method for providing a content map service using a story graph of video content and a user structure query. The apparatus according to an embodiment of the present invention includes: a story graph generating apparatus configured to extract video entities contained in video content and entity relations between the entities, and generate a story graph on the basis of the extracted entity relations; a story graph database configured to store the generated story graph; a structure query input apparatus configured to receive a user structure query in the form of a graph; a story graph matching apparatus configured to calculate a similarity between the story graph and the input user structure query from a similar sub-structure, and select a matching video on the basis of the calculated similarity; and a visualization apparatus configured to visualize the input user structure query and the video matching the user structure query in the story graph and provide a visualization result to a user.
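The matching step can be approximated with a simple sub-structure overlap between the query graph and each story graph. The graphs, entity names, and Jaccard-style score below are illustrative assumptions; the patented matcher computes similarity from similar sub-structures in a more elaborate way.

```python
def edge_set(graph):
    # graph: dict mapping an entity to the set of entities it relates to
    return {frozenset((a, b)) for a, nbrs in graph.items() for b in nbrs}

def similarity(story_graph, query_graph):
    """Fraction of the query's relations that also appear in the story
    graph (a crude stand-in for sub-structure matching)."""
    story, query = edge_set(story_graph), edge_set(query_graph)
    return len(story & query) / max(len(query), 1)

def best_match(videos, query_graph):
    # videos: dict mapping a video id to its story graph
    return max(videos, key=lambda v: similarity(videos[v], query_graph))

videos = {
    "ep1": {"Alice": {"Bob"}, "Bob": {"Carol"}},
    "ep2": {"Alice": {"Carol"}},
}
query = {"Alice": {"Bob"}}  # user structure query as a graph
```

The video whose story graph best contains the queried relation pattern would then be selected and visualized alongside the query.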
Abstract:
An apparatus and method for verifying broadcast content object identification based on web data. The apparatus includes: a web data processor configured to collect and process web data related to broadcast content and create content knowledge information by tagging the web data to the broadcast content; a content knowledge information storage portion configured to store the content knowledge information; and an object identification verifier configured to verify a result of identifying an object contained in the broadcast content, using the content knowledge information.
Abstract:
A technology is provided that allows anyone to easily create interactive media capable of recognizing a user interaction by using stored images. A system according to the present invention includes an image reconstruction server, an image ontology, and an image repository. The image reconstruction server includes an image reconstruction controller, a natural language processing module, and an image search module. The image reconstruction controller of the image reconstruction server receives a scenario based on a natural language from a user and searches for images desired by the user by using the natural language processing module, the image search module, and the image repository. The natural language processing module of the image reconstruction server performs a morphological analysis and a syntax analysis on the scenario input by the user as a preliminary operation for the search of the image ontology. The image search module of the image reconstruction server automatically generates an ontology search query, for example in SPARQL, by using a result of the natural language processing, and searches the image ontology by using the generated query.
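The query-generation step can be sketched as assembling a SPARQL SELECT query from subject–predicate–object triples produced by the natural language analysis. The triple strings and prefixes below are illustrative placeholders, not a real ontology vocabulary.

```python
def build_sparql(triples):
    """Assemble a SPARQL SELECT query from (subject, predicate, object)
    string triples. The vocabulary here is invented for illustration;
    a real system would map parsed terms onto the image ontology."""
    body = " .\n  ".join(f"{s} {p} {o}" for s, p, o in triples)
    return f"SELECT ?image WHERE {{\n  {body}\n}}"

# Triples a parser might derive from "a beach at sunset" (assumption)
query = build_sparql([
    ("?image", ":depicts", ":Beach"),
    ("?image", ":timeOfDay", ":Sunset"),
])
print(query)
```

The generated query string would then be run against the image ontology to retrieve candidate images for the scenario.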
Abstract:
Provided are a clustering method using broadcast content and broadcast related data and a user terminal to perform the method, the clustering method including creating a story graph with respect to each of a plurality of scenes associated with broadcast content based on the broadcast content and broadcast related data, and creating a cluster of a scene based on the created story graph.
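The scene clustering can be sketched by representing each scene's story graph as a set of entities and grouping scenes whose graphs overlap. The greedy strategy, the Jaccard threshold, and the entity names are all assumptions for illustration, not the disclosed method.

```python
def cluster_scenes(scene_graphs, threshold=0.5):
    """Greedy clustering: a scene joins the first cluster whose
    representative story graph shares at least `threshold` Jaccard
    entity overlap; otherwise it starts a new cluster."""
    clusters = []
    for sid, entities in scene_graphs.items():
        for cluster in clusters:
            rep = scene_graphs[cluster[0]]
            jaccard = len(entities & rep) / len(entities | rep)
            if jaccard >= threshold:
                cluster.append(sid)
                break
        else:
            clusters.append([sid])
    return clusters

# Story graphs reduced to entity sets per scene (illustrative)
scenes = {
    "s1": {"Alice", "Bob"},
    "s2": {"Alice", "Bob", "Cafe"},
    "s3": {"Dog"},
}
clusters = cluster_scenes(scenes)
```

Scenes s1 and s2 share most of their entities and cluster together, while s3 forms its own cluster.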
Abstract:
Provided is an object detecting method and apparatus, the apparatus configured to extract a frame image and a motion vector from a video, generate an integrated feature vector based on the frame image and the motion vector, and detect an object included in the video based on the integrated feature vector.
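The integrated feature vector can be sketched as a concatenation of an appearance descriptor from the frame image with a motion descriptor from the motion vectors. The specific descriptors below (mean color, mean displacement, mean magnitude) are assumptions for the example; the disclosed apparatus would feed such a vector to a detector.

```python
import math

def integrated_feature(frame_pixels, motion_vectors):
    """frame_pixels: list of (r, g, b) tuples; motion_vectors: list of
    (dx, dy) tuples. Returns mean color concatenated with mean motion
    and mean motion magnitude (an illustrative integrated vector)."""
    n = len(frame_pixels)
    appearance = [sum(p[c] for p in frame_pixels) / n for c in range(3)]
    m = len(motion_vectors)
    mean_dx = sum(v[0] for v in motion_vectors) / m
    mean_dy = sum(v[1] for v in motion_vectors) / m
    mean_mag = sum(math.hypot(*v) for v in motion_vectors) / m
    return appearance + [mean_dx, mean_dy, mean_mag]

pixels = [(255, 0, 0)] * 4        # a solid red frame patch
motions = [(1, 1)] * 4            # uniform diagonal motion
vec = integrated_feature(pixels, motions)
```

Combining the two modalities lets a detector use both what a region looks like and how it moves when deciding whether it contains an object.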