Abstract:
Disclosed is an apparatus and method for tagging a topic to content. The apparatus may include an unstructured data-based topic generator configured to generate a topic model including an unstructured data-based topic based on content and unstructured data, a viewer group analyzer configured to analyze a characteristic of a viewer group including a viewer of the content based on a social network of the viewer and viewing situation information of the viewer, a multifaceted topic generator configured to generate a multifaceted topic based on the topic model and the characteristic of the viewer group, a content divider configured to divide the content into a plurality of scenes, and a tagger configured to tag the multifaceted topic to the scenes.
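The claimed pipeline (divide into scenes, generate a multifaceted topic, tag it to each scene) could be sketched roughly as follows. All names, the fixed-length scene split, and the dictionary topic representation are illustrative assumptions, not the patent's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    start: float          # scene start time in seconds
    end: float            # scene end time in seconds
    topics: list = field(default_factory=list)

def divide_content(duration, scene_length):
    """Divide content of the given duration into fixed-length scenes
    (a stand-in for the content divider)."""
    scenes, t = [], 0.0
    while t < duration:
        scenes.append(Scene(start=t, end=min(t + scene_length, duration)))
        t += scene_length
    return scenes

def multifaceted_topic(base_topic, viewer_group_traits):
    """Combine an unstructured-data-based topic with viewer-group
    characteristics into one multifaceted topic."""
    return {"topic": base_topic, "facets": sorted(viewer_group_traits)}

def tag_scenes(scenes, topic):
    """Tag the multifaceted topic to every scene."""
    for scene in scenes:
        scene.topics.append(topic)
    return scenes
```

A real system would detect scene boundaries from the video itself rather than using a fixed length, and would derive the base topic from a trained topic model.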
Abstract:
Provided is a method of authoring a video scene and metadata, which provides a GUI screen to a user for authoring the video scene and the metadata. The method includes generating a GUI screen configuration for an input of data including a video, sound, subtitles, and a script, generating a GUI screen configuration for extracting and editing shots from the data, generating a GUI screen configuration for generating and editing scenes based on the shots, generating a GUI screen configuration for automatically generating and editing metadata of the scenes, and generating a GUI screen configuration for storing the scenes and the metadata in a database.
Abstract:
Disclosed is an apparatus for detecting a malicious app. The apparatus may include a collector to collect a mobile app, a static analyzer to extract basic information from the collected mobile app, analyze the extracted basic information, and generate a call flow graph (CFG) of the mobile app, a dynamic analyzer to execute the collected mobile app, expand the CFG of the mobile app generated by the static analyzer to a dynamic action-based CFG, and determine a similarity between the expanded CFG and a flow graph that performs a malicious action, and a malicious app determiner to determine whether the collected mobile app is malicious by analyzing the basic information, the expanded CFG, and the similarity.
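The core idea of comparing an expanded CFG against a known malicious flow graph could be sketched as below. Representing a CFG as a set of (caller, callee) edges and using Jaccard similarity are simplifying assumptions; the patent does not specify the similarity measure:

```python
def cfg_edges(call_pairs):
    """Build a call-flow graph as a set of (caller, callee) edges."""
    return set(call_pairs)

def expand_cfg(static_edges, dynamic_edges):
    """Expand the statically built CFG with edges observed at run time
    (the dynamic action-based CFG)."""
    return static_edges | dynamic_edges

def jaccard_similarity(g1, g2):
    """Edge-set Jaccard similarity between two call-flow graphs."""
    if not g1 and not g2:
        return 0.0
    return len(g1 & g2) / len(g1 | g2)

def is_malicious(app_cfg, malicious_cfg, threshold=0.5):
    """Flag the app when its CFG is sufficiently similar to a flow graph
    known to perform a malicious action."""
    return jaccard_similarity(app_cfg, malicious_cfg) >= threshold
```

In practice the determiner would also weigh the extracted basic information (permissions, signatures) rather than rely on graph similarity alone.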
Abstract:
The present disclosure provides a method of determining precise positioning. A method of determining precise positioning according to an embodiment of the present disclosure includes: determining at least one piece of image positioning information of at least one image object detected from at least one image; determining at least one piece of wireless positioning information of at least one wireless object on the basis of signal strength of a wireless signal; performing mapping for the at least one piece of image positioning information and the at least one piece of wireless positioning information; and determining final positioning information on the basis of the at least one piece of image positioning information, and the at least one piece of wireless positioning information for which mapping is performed.
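The mapping and fusion steps could be sketched as follows. The log-distance path-loss model, nearest-neighbour mapping, and fixed-weight averaging are all illustrative assumptions; the abstract does not state how the mapping or the final fusion is computed:

```python
import math

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: estimate distance (m) from the
    received signal strength (dBm). tx_power and n are assumed constants."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def map_objects(image_positions, wireless_positions):
    """Greedily pair each image-derived position with the nearest
    wireless-derived position."""
    pairs = []
    for img in image_positions:
        nearest = min(wireless_positions, key=lambda w: math.dist(img, w))
        pairs.append((img, nearest))
    return pairs

def fuse(pairs, image_weight=0.7):
    """Weighted average of each mapped (image, wireless) position pair,
    giving more weight to the (usually more precise) image estimate."""
    w = image_weight
    return [tuple(w * i + (1 - w) * r for i, r in zip(img, rad))
            for img, rad in pairs]
```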
Abstract:
Disclosed is a method and apparatus for generating a title and a keyframe of a video. According to an embodiment of the present disclosure, the method includes: selecting a main subtitle by analyzing subtitles of the video; selecting the keyframe corresponding to the main subtitle; extracting content information of the keyframe by analyzing the keyframe; generating the title of the video using metadata of the video, the main subtitle, and the content information of the keyframe; and outputting the title and the keyframe of the video.
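The steps above could be sketched as below. The word-frequency scoring for the main subtitle, the midpoint keyframe choice, and the title template are all stand-in assumptions; the patent leaves the analysis methods unspecified:

```python
from collections import Counter

def select_main_subtitle(subtitles):
    """Pick the subtitle whose words occur most often across the whole
    video -- a simple stand-in for subtitle analysis."""
    freq = Counter(w for s in subtitles for w in s["text"].lower().split())
    return max(subtitles,
               key=lambda s: sum(freq[w] for w in s["text"].lower().split()))

def select_keyframe(main_subtitle):
    """Take the frame at the midpoint of the main subtitle's time span
    as the keyframe corresponding to it."""
    return (main_subtitle["start"] + main_subtitle["end"]) / 2

def generate_title(metadata, main_subtitle, content_info):
    """Combine video metadata, the main subtitle, and keyframe content
    information into a title string."""
    return f'{metadata["series"]}: {main_subtitle["text"]} ({", ".join(content_info)})'
```

A real implementation would analyze the keyframe with an image model to extract the content information; here it is passed in directly.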
Abstract:
There are provided an apparatus and method for tracking temporal variation of a video content context using dynamically generated metadata, wherein the method includes generating static metadata on the basis of internal data held during an initial publication of video content and tagging the generated static metadata to the video content, collecting external data related to the video content generated after the video content is published, generating dynamic metadata related to the video content on the basis of the collected external data and tagging the generated dynamic metadata to the video content, repeating regeneration and tagging of the dynamic metadata with an elapse of time, tracking a change in content of the dynamic metadata, and generating and providing a trend analysis report corresponding to a result of tracking the change in the content.
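The repeated regeneration of dynamic metadata and the tracking of its changes could be sketched as below. Representing metadata as tag sets and a timestamped history is an illustrative assumption:

```python
def tag_static_metadata(content, internal_data):
    """Tag metadata derived from data held at initial publication."""
    content["static"] = sorted(internal_data)
    content["dynamic_history"] = []
    return content

def tag_dynamic_metadata(content, timestamp, external_data):
    """Regenerate dynamic metadata from newly collected external data
    and append it to the content's history."""
    content["dynamic_history"].append((timestamp, sorted(external_data)))
    return content

def trend_report(content):
    """Track which dynamic tags appeared or disappeared between
    consecutive snapshots -- the basis of the trend analysis report."""
    report = []
    history = content["dynamic_history"]
    for (t0, prev), (t1, cur) in zip(history, history[1:]):
        report.append({
            "from": t0, "to": t1,
            "added": sorted(set(cur) - set(prev)),
            "removed": sorted(set(prev) - set(cur)),
        })
    return report
```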
Abstract:
Provided are an apparatus and method for real-time bi-directional sign language/speech translation that may automatically translate a sign into a speech or a speech into a sign in real time by separately performing an operation of recognizing a speech externally made through a microphone and outputting a sign corresponding to the speech, and an operation of recognizing a sign sensed through a camera and outputting a speech corresponding to the sign.
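The two independent operations could be dispatched as sketched below. The lexicon lookup with fingerspelling fallback is an illustrative assumption standing in for the actual speech and sign recognizers:

```python
def speech_to_sign(speech_text, sign_lexicon):
    """Map recognized speech words to sign glosses; unknown words fall
    back to fingerspelling (marked 'FS:')."""
    return [sign_lexicon.get(w, f"FS:{w.upper()}")
            for w in speech_text.lower().split()]

def sign_to_speech(glosses, speech_lexicon):
    """Map recognized sign glosses back to spoken words."""
    return " ".join(speech_lexicon.get(g, g.lower()) for g in glosses)

def translate(direction, payload, sign_lexicon, speech_lexicon):
    """Dispatch the two separately performed pipelines: microphone
    speech -> sign output, or camera-sensed sign -> speech output."""
    if direction == "speech->sign":
        return speech_to_sign(payload, sign_lexicon)
    if direction == "sign->speech":
        return sign_to_speech(payload, speech_lexicon)
    raise ValueError(direction)
```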
Abstract:
Provided are a method and apparatus for video data augmentation that automatically construct a large amount of learning data using video data. An apparatus for augmenting video data according to an embodiment of this disclosure includes: a feature information check unit checking feature information including a content feature, a flow feature, and a class feature of a sub video of a predetermined unit constituting an original video; a section check unit selecting a video section including at least one sub video on the basis of the feature information of the sub video; and a video augmentation unit extracting at least one substitute sub video corresponding to the selected video section from multiple pre-stored sub videos, and applying the extracted at least one substitute sub video to the selected video section to generate an augmented video.
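The three units could interact as sketched below. Modelling sub videos as dictionaries with `class`, `content`, and `flow` fields, and matching substitutes by exact feature equality, are simplifying assumptions:

```python
def select_section(sub_videos, target_class):
    """Section check unit: select the span of sub videos whose class
    feature matches, returned as (first_index, last_index)."""
    idx = [i for i, sv in enumerate(sub_videos) if sv["class"] == target_class]
    return (idx[0], idx[-1]) if idx else None

def find_substitutes(store, section_subs):
    """Pick pre-stored sub videos whose content and flow features match
    any sub video in the selected section."""
    return [s for s in store
            if any(s["content"] == t["content"] and s["flow"] == t["flow"]
                   for t in section_subs)]

def augment(original, section, substitutes):
    """Video augmentation unit: apply the substitutes to the selected
    section to generate an augmented video."""
    lo, hi = section
    return original[:lo] + substitutes + original[hi + 1:]
```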
Abstract:
The present invention relates to an apparatus and method for providing a content map service using a story graph of video content and a user structure query. The apparatus according to an embodiment of the present disclosure includes: a story graph generating apparatus configured to extract video entities contained in video content and entity relations between the entities, and generate a story graph on the basis of the extracted entity relations; a story graph database configured to store the generated story graph; a structure query input apparatus configured to receive a user structure query in the form of a graph; a story graph matching apparatus configured to calculate a similarity between the story graph and the input user structure query from a similar sub-structure, and select a matching video on the basis of the calculated similarity; and a visualization apparatus configured to visualize the input user structure query and the video matching the user structure query in the story graph and provide a visualization result to a user.
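The matching step could be sketched as below. Storing a story graph as (entity, relation, entity) triples and scoring the query by the fraction of its edges found as a sub-structure are illustrative assumptions; the patent does not define the similarity measure:

```python
def story_graph(triples):
    """Store a story graph as a set of (entity, relation, entity) triples."""
    return set(triples)

def query_similarity(story, query):
    """Fraction of the user structure query's edges that appear as a
    sub-structure of the story graph."""
    if not query:
        return 0.0
    return len(story & query) / len(query)

def best_match(graphs, query, threshold=0.5):
    """Select the video whose story graph best matches the query,
    or no video if nothing clears the threshold."""
    scored = {vid: query_similarity(g, query) for vid, g in graphs.items()}
    vid = max(scored, key=scored.get)
    return (vid, scored[vid]) if scored[vid] >= threshold else (None, 0.0)
```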
Abstract:
Provided are an apparatus and method for verifying broadcast content object identification based on web data. The apparatus includes: a web data processor configured to collect and process web data related to broadcast content and create content knowledge information by tagging the web data to the broadcast content; a content knowledge information storage portion configured to store the content knowledge information; and an object identification verifier configured to verify a result of identifying an object contained in the broadcast content, using the content knowledge information.
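The verification step could be sketched as below. Representing content knowledge as per-content entity sets and filtering detections by a confidence threshold plus knowledge membership are illustrative assumptions:

```python
def build_content_knowledge(web_snippets):
    """Web data processor: tag entities mentioned in collected web data
    to the broadcast content they relate to."""
    knowledge = {}
    for content_id, entities in web_snippets:
        knowledge.setdefault(content_id, set()).update(entities)
    return knowledge

def verify_identification(content_id, detections, knowledge, min_score=0.6):
    """Object identification verifier: keep a detection only if it is
    confident enough and is supported by the content knowledge for
    that broadcast."""
    known = knowledge.get(content_id, set())
    return [(label, score) for label, score in detections
            if score >= min_score and label in known]
```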