Abstract:
A method and apparatus for video data augmentation automatically construct a large amount of learning data from video data. An apparatus for augmenting video data according to an embodiment of this disclosure includes: a feature information check unit for checking feature information including a content feature, a flow feature, and a class feature of a sub video of a predetermined unit constituting an original video; a section check unit for selecting a video section including at least one sub video on the basis of the feature information of the sub video; and a video augmentation unit for extracting at least one substitute sub video corresponding to the selected video section from multiple pre-stored sub videos and applying the extracted at least one substitute sub video to the selected video section to generate an augmented video.
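The abstract only names the three units, so the following is a minimal sketch of how such a pipeline could fit together, assuming each sub video is represented by hypothetical content, flow, and class features; every name below (SubVideo, select_section, find_substitute, augment) and the cosine-similarity matching rule are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical sketch of the sub-video substitution flow; not the disclosed design.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class SubVideo:
    frames: np.ndarray          # (T, H, W, C) clip of a predetermined unit
    content_feat: np.ndarray    # content feature of the clip
    flow_feat: np.ndarray       # flow (motion) feature of the clip
    class_feat: int             # class feature (e.g., an action label)

def select_section(sub_videos: List[SubVideo], target_class: int) -> List[int]:
    """Select the video section: indices of sub videos whose class feature matches."""
    return [i for i, sv in enumerate(sub_videos) if sv.class_feat == target_class]

def find_substitute(section_sv: SubVideo, pool: List[SubVideo]) -> SubVideo:
    """Pick the pre-stored sub video of the same class whose content/flow features
    are closest (cosine similarity is an assumption; no metric is specified)."""
    candidates = [sv for sv in pool if sv.class_feat == section_sv.class_feat] or pool
    query = np.concatenate([section_sv.content_feat, section_sv.flow_feat])
    def sim(sv: SubVideo) -> float:
        key = np.concatenate([sv.content_feat, sv.flow_feat])
        return float(query @ key / (np.linalg.norm(query) * np.linalg.norm(key) + 1e-8))
    return max(candidates, key=sim)

def augment(original: List[SubVideo], pool: List[SubVideo], target_class: int) -> List[SubVideo]:
    """Generate an augmented video by substituting the selected section's sub videos."""
    augmented = list(original)
    for i in select_section(original, target_class):
        augmented[i] = find_substitute(original[i], pool)
    return augmented
```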
Abstract:
A method for generating a personal profile in a user device is provided. The user device extracts meaningful data from the user's daily data, extracts semantic information by analyzing the meaningful data, generates a current user profile in the form of a single vector using the meaningful data and the semantic information, and stores the current user profile in a data storage unit.
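As a rough illustration of the extract-analyze-vectorize-store sequence, the sketch below assumes the daily data is a list of text records, that "meaningful" means the record mentions a term from a small vocabulary, and that the single-vector profile is a normalized term-frequency vector; the vocabulary, filter rule, and storage are all placeholders.

```python
# Illustrative profile pipeline under the stated assumptions; not the disclosed method.
from collections import Counter
from typing import Dict, List

VOCAB = ["coffee", "run", "music", "work", "travel"]   # hypothetical semantic vocabulary

def extract_meaningful(daily_data: List[str]) -> List[str]:
    """Keep only records that mention a vocabulary term (stand-in filter)."""
    return [r for r in daily_data if any(w in r.lower() for w in VOCAB)]

def extract_semantics(records: List[str]) -> Counter:
    """Count vocabulary terms as simple 'semantic information'."""
    counts: Counter = Counter()
    for r in records:
        for w in VOCAB:
            if w in r.lower():
                counts[w] += 1
    return counts

def build_profile(daily_data: List[str]) -> List[float]:
    """Produce the current user profile as a single normalized vector."""
    counts = extract_semantics(extract_meaningful(daily_data))
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in VOCAB]

profile_store: Dict[str, List[float]] = {}      # stand-in for the data storage unit
profile_store["user-1"] = build_profile(["Morning run in the park", "Listened to music at work"])
```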
Abstract:
Disclosed herein is an apparatus for analyzing a video shot. The apparatus includes at least one program, memory in which the program is recorded, and a processor for executing the program. The program may include a frame extraction unit for extracting at least one frame from a video shot, a shot composition and camera position recognition unit for predicting a shot composition and a camera position for the extracted at least one frame based on a previously trained shot composition recognition model, a place and time information extraction unit for predicting a shot location and a shot time for the extracted at least one frame based on a previously trained shot location recognition model and a previously trained shot time recognition model, and an information combination unit for combining the pieces of information respectively predicted for the at least one frame for each video shot and tagging the video shot with the combined pieces of information.
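The sketch below illustrates one way the per-frame predictions could be combined into per-shot tags, assuming the previously trained recognizers are exposed as callables that map a frame to a label and that the combination rule is a simple majority vote; both assumptions are for illustration only, as the abstract does not specify the model interface or the combination rule.

```python
# Hypothetical per-shot combination of per-frame predictions (majority vote assumed).
from collections import Counter
from typing import Callable, Dict, Sequence

def analyze_shot(
    frames: Sequence,                               # frames extracted from one video shot
    composition_model: Callable[[object], str],     # previously trained recognizers,
    camera_model: Callable[[object], str],          # modeled here as frame -> label
    location_model: Callable[[object], str],
    time_model: Callable[[object], str],
) -> Dict[str, str]:
    """Predict per-frame labels, then tag the shot with the majority label per attribute."""
    def vote(model: Callable[[object], str]) -> str:
        return Counter(model(f) for f in frames).most_common(1)[0][0]
    return {
        "shot_composition": vote(composition_model),
        "camera_position": vote(camera_model),
        "shot_location": vote(location_model),
        "shot_time": vote(time_model),
    }

# Usage with trivial stand-in models:
tags = analyze_shot(
    frames=[0, 1, 2],
    composition_model=lambda f: "full shot",
    camera_model=lambda f: "eye level",
    location_model=lambda f: "indoor",
    time_model=lambda f: "night",
)
```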
Abstract:
Disclosed are an object tracking method and an object tracking apparatus performing the object tracking method. The object tracking method may include extracting locations of objects in an object search area using a global camera, identifying an interest object selected by a user from among the objects, and determining an error in tracking the identified interest object and correcting the determined error.
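The following sketch shows one simple reading of that loop, assuming the global camera yields 2-D object locations per frame, that a tracking error is declared when the tracked position jumps farther than a threshold between frames, and that the correction simply holds the last confident position; the threshold and correction rule are assumptions, not the disclosed error-handling scheme.

```python
# Hypothetical nearest-neighbor tracking with a simple error check and correction.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def nearest(target: Point, detections: List[Point]) -> Point:
    return min(detections, key=lambda d: math.dist(d, target))

def track_interest_object(
    detections_per_frame: List[List[Point]],   # per-frame object locations in the search area
    selected: Point,                           # interest object the user selected in frame 0
    max_jump: float = 50.0,                    # hypothetical error threshold (pixels)
) -> List[Point]:
    track = [nearest(selected, detections_per_frame[0])]
    for dets in detections_per_frame[1:]:
        candidate = nearest(track[-1], dets) if dets else None
        # Error determination: no detection, or an implausibly large jump between frames.
        if candidate is None or math.dist(candidate, track[-1]) > max_jump:
            candidate = track[-1]              # simple correction: hold the last position
        track.append(candidate)
    return track

path = track_interest_object(
    detections_per_frame=[[(10, 10), (200, 5)], [(12, 11), (198, 6)], [(400, 300)]],
    selected=(11, 10),
)
```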
Abstract:
Provided are an object detecting method and apparatus, the apparatus being configured to extract a frame image and a motion vector from a video, generate an integrated feature vector based on the frame image and the motion vector, and detect an object included in the video based on the integrated feature vector.
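A minimal sketch of the integrated-feature idea follows, assuming the frame image and the motion-vector field are each reduced to small fixed-size feature vectors and concatenated before a stand-in linear detector; the feature extractors and the detector are placeholders for illustration, not the disclosed networks.

```python
# Hypothetical appearance + motion feature integration for detection.
import numpy as np

def frame_features(frame: np.ndarray) -> np.ndarray:
    """Stand-in appearance features: per-channel mean intensities of the frame image."""
    return frame.reshape(-1, frame.shape[-1]).mean(axis=0)

def motion_features(motion_vectors: np.ndarray) -> np.ndarray:
    """Stand-in motion features: mean and std of the (dx, dy) motion-vector field."""
    return np.concatenate([motion_vectors.mean(axis=(0, 1)), motion_vectors.std(axis=(0, 1))])

def integrated_feature(frame: np.ndarray, motion_vectors: np.ndarray) -> np.ndarray:
    """Integrated feature vector combining appearance and motion cues."""
    return np.concatenate([frame_features(frame), motion_features(motion_vectors)])

def detect_object(feature: np.ndarray, weights: np.ndarray, bias: float = 0.0) -> bool:
    """Placeholder linear detector applied to the integrated feature."""
    return float(feature @ weights + bias) > 0.0

frame = np.random.rand(224, 224, 3)          # decoded frame image
mv = np.random.randn(28, 28, 2)              # coarse motion-vector field (e.g., from the codec)
feat = integrated_feature(frame, mv)         # length 3 + 4 = 7
detected = detect_object(feat, np.ones(7))
```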