Abstract:
Systems and methods to generate a motion attention model of a video data sequence are described. In one aspect, a motion saliency map B is generated to precisely indicate motion attention areas for each frame in the video data sequence. The motion saliency maps are each based on intensity I, spatial coherence Cs, and temporal coherence Ct values. These values are computed for each block or pixel of the motion fields extracted from the video data sequence. Brightness values of detected motion attention areas in each frame are accumulated to generate, with respect to time, the motion attention model.
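As an illustration of the kind of computation this abstract describes, the sketch below derives a per-block intensity value, an entropy-based spatial coherence value, and an entropy-based temporal coherence value from precomputed motion fields, combines them into a candidate saliency map, and accumulates saliency over attended areas frame by frame. The function names, window sizes, threshold, and the combination B = I * Ct * (1 - I * Cs) are assumptions made for this sketch, not the claimed formulation.

```python
import numpy as np

def intensity(mv):
    """Normalized motion magnitude per block; mv has shape (H, W, 2)."""
    mag = np.hypot(mv[..., 0], mv[..., 1])
    return mag / (mag.max() + 1e-8)

def phase_entropy(phases, bins=8):
    """Normalized entropy of motion-vector angles (0 = coherent, 1 = incoherent)."""
    hist, _ = np.histogram(phases, bins=bins, range=(-np.pi, np.pi))
    p = hist / (hist.sum() + 1e-8)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

def saliency_map(motion_fields, t, window=3, nbhd=2):
    """Candidate motion saliency map B for frame t, combining intensity I with
    spatial (Cs) and temporal (Ct) coherence of the motion field."""
    mv = motion_fields[t]
    H, W, _ = mv.shape
    I = intensity(mv)
    phase = np.arctan2(mv[..., 1], mv[..., 0])
    B = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            # spatial coherence: angle consistency within a local neighborhood
            patch = phase[max(0, y - nbhd):y + nbhd + 1, max(0, x - nbhd):x + nbhd + 1]
            Cs = 1.0 - phase_entropy(patch.ravel())
            # temporal coherence: angle consistency of the same block over a sliding window
            history = [np.arctan2(f[y, x, 1], f[y, x, 0])
                       for f in motion_fields[max(0, t - window):t + window + 1]]
            Ct = 1.0 - phase_entropy(np.array(history))
            # assumed combination: intense, temporally consistent motion that is not
            # part of a spatially uniform (camera-like) field scores highest
            B[y, x] = I[y, x] * Ct * (1.0 - I[y, x] * Cs)
    return B

def motion_attention_curve(motion_fields, thresh=0.3):
    """Accumulate saliency inside attended areas, frame by frame, over time."""
    return [float(B[B > thresh].sum())
            for B in (saliency_map(motion_fields, t) for t in range(len(motion_fields)))]
```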
Abstract:
An implementation of a technology, described herein, for relevance-feedback, content-based image retrieval facilitates accurate and efficient retrieval by minimizing the number of user-feedback iterations regarding the semantic relevance of exemplary images while maximizing the relevance gained from each iteration. One technique for accomplishing this is to use a Bayesian classifier that treats positive and negative feedback examples with different strategies. In addition, query refinement techniques are applied to pinpoint users' intended queries with respect to their feedback. These techniques further enhance the accuracy and usability of relevance feedback. This abstract itself is not intended to limit the scope of this patent. The scope of the present invention is pointed out in the appended claims.
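A minimal sketch of one feedback round is given below, assuming each image is represented by a precomputed feature vector. Treating positive examples generatively (fitting a Gaussian query model) while using negative examples only as local penalties is one illustrative way to apply different strategies to the two kinds of feedback; the function refine_and_rank, its parameters, and the scoring rule are hypothetical and not the patented Bayesian formulation.

```python
import numpy as np

def refine_and_rank(features, positive_ids, negative_ids, penalty=2.0):
    """Rank images after one round of feedback; features is an (N, D) array."""
    X = np.asarray(features, dtype=float)
    pos = X[positive_ids]
    # Positive feedback: refine the query as a diagonal Gaussian over positives.
    mu = pos.mean(axis=0)
    var = pos.var(axis=0) + 1e-6
    log_lik = -0.5 * (((X - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
    # Negative feedback, handled differently: only images close to a negative
    # example are pushed down, leaving the rest of the ranking intact.
    score = log_lik.copy()
    for n in negative_ids:
        dist = np.linalg.norm(X - X[n], axis=1)
        score -= penalty * np.exp(-dist ** 2)
    return np.argsort(-score)  # image indices, most relevant first

# Example: refine_and_rank(feature_matrix, positive_ids=[3, 17], negative_ids=[42])
```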
Abstract:
Methods and systems for generic adaptive multimedia content delivery are described. In one embodiment, a novel framework features an abstract content model and an abstract adaptive delivery decision engine. The abstract content model recognizes important aspects of contents while hiding their physical details from other parts of the framework. The decision engine then makes content adaptation plans based on the abstracted model of the contents and needs little knowledge of any physical details of the actual contents. Thus, under the same framework, adaptive delivery of generic contents is possible.
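The sketch below illustrates the separation this abstract describes: content items are exposed to the decision engine only through an abstract description (modality, size, importance), and the engine plans adaptations from that description plus a client profile without touching the raw media. The class names, fields, and planning heuristic are assumptions for illustration, not the framework's actual interfaces.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AbstractContent:
    """Abstract description of one content component; no raw media here."""
    content_id: str
    modality: str        # e.g. "video", "image", "text"
    size_bytes: int
    importance: float    # relative importance within the presentation

@dataclass
class ClientProfile:
    bandwidth_bps: int
    supports: List[str]  # modalities the client can render

def plan_delivery(items: List[AbstractContent],
                  client: ClientProfile,
                  budget_s: float = 5.0) -> List[Tuple[str, str]]:
    """Decide, per item, whether to send, adapt, or scale it down, using only
    the abstract model and the client profile."""
    budget_bytes = int(client.bandwidth_bps / 8 * budget_s)
    plan = []
    for item in sorted(items, key=lambda i: i.importance, reverse=True):
        if item.modality not in client.supports:
            plan.append((item.content_id, "transcode to a supported modality"))
        elif item.size_bytes <= budget_bytes:
            plan.append((item.content_id, "send as-is"))
            budget_bytes -= item.size_bytes
        else:
            plan.append((item.content_id, "scale down to fit remaining budget"))
    return plan
```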
Abstract:
Methods and systems for identifying black frames within a sequence of frames are provided. In one embodiment, the detection system detects black frames within a sequence of frames by fully decoding base frames and then partially decoding non-base frames in a way that ensures the blackness of each frame can be determined. The detection system decodes base frames before decoding dependent frames, which is referred to as processing frames in reverse order of dependency since a frame is processed before the frames that depend on it are processed. In another embodiment, the detection system determines the blackness of frames within a sequence of frames by processing the frames in order of their dependency and following chains of block dependency to decode and determine the blackness of blocks.
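As a rough illustration of following chains of block dependency, the sketch below models each frame as a list of blocks, where an intra block carries its own decoded mean luma and a reference block points at a block of an earlier frame; a frame is reported black only if every block resolves to black. The data model, luma threshold, and function names are assumptions, not the actual decoder interfaces.

```python
BLACK_LUMA = 16  # assumed threshold on mean luma below which a block counts as black

def block_is_black(frames, f, b, memo):
    """Resolve one block, following its reference chain when necessary."""
    key = (f, b)
    if key in memo:
        return memo[key]
    kind, *rest = frames[f][b]
    if kind == "intra":
        result = rest[0] <= BLACK_LUMA            # decoded mean luma of the block
    else:                                         # ("ref", ref_frame, ref_block)
        result = block_is_black(frames, rest[0], rest[1], memo)
    memo[key] = result
    return result

def black_frames(frames):
    """Report, per frame, whether every one of its blocks resolves to black."""
    memo = {}
    return [all(block_is_black(frames, f, b, memo) for b in range(len(frames[f])))
            for f in range(len(frames))]

# Example: frame 0 is a fully intra-coded base frame, frame 1 references it.
# frames = [[("intra", 10), ("intra", 12)], [("ref", 0, 0), ("intra", 200)]]
# black_frames(frames) -> [True, False]
```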
Abstract:
Systems and methods for smart media content thumbnail extraction are described. In one aspect, program metadata is generated from recorded video content. The program metadata includes one or more key-frames from one or more corresponding shots. An objectively representative key-frame is identified from among the key-frames as a function of shot duration and frequency of appearance of key-frame content across multiple shots. The objectively representative key-frame is an image frame representative of the recorded video content. A thumbnail is created from the objectively representative key-frame.
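A possible reading of the selection criterion is sketched below: each shot's key-frame is summarized by a coarse color histogram, scored by the product of its shot's duration and the number of shots whose key-frames look similar, and the highest-scoring key-frame is chosen for the thumbnail. The histogram signature, similarity threshold, and scoring product are illustrative assumptions.

```python
import numpy as np

def color_signature(frame_rgb, bins=8):
    """Coarse normalized color histogram used as a cheap content signature."""
    hist, _ = np.histogramdd(frame_rgb.reshape(-1, 3).astype(float),
                             bins=bins, range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def pick_representative(key_frames, shot_durations, sim_thresh=0.8):
    """key_frames: one RGB array per shot; shot_durations: seconds per shot.
    Returns the index of the key-frame chosen for the thumbnail."""
    sigs = [color_signature(f) for f in key_frames]
    scores = []
    for i, sig in enumerate(sigs):
        # frequency of appearance: shots whose key-frame is similar to this one
        # (histogram intersection close to 1 means very similar content)
        freq = sum(float(np.minimum(sig, other).sum()) >= sim_thresh for other in sigs)
        scores.append(shot_durations[i] * freq)
    return int(np.argmax(scores))
```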
Abstract:
A system and process for video characterization that facilitates video classification and retrieval applications, as well as motion detection applications. This involves characterizing a video sequence with a gray scale image having pixel levels that reflect the intensity of motion associated with a corresponding region in the sequence of video frames. The intensity of motion is defined using any of three characterizing processes: a perceived motion energy spectrum (PMES) characterizing process that represents object-based motion intensity over the sequence of frames; a spatio-temporal entropy (STE) characterizing process that represents the intensity of motion based on color variation at each pixel location; and a motion vector angle entropy (MVAE) characterizing process that represents the intensity of motion based on the variation of motion vector angles.
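Of the three processes, the spatio-temporal entropy idea is the simplest to sketch: for each pixel location, the entropy of its gray-level history over a window of frames is mapped to a gray-scale motion image, so that locations with more variation appear brighter. The window handling, bin count, and normalization below are illustrative choices rather than the patented STE computation.

```python
import numpy as np

def ste_image(frames_gray, bins=16):
    """frames_gray: (T, H, W) uint8 stack; returns an (H, W) uint8 motion image
    where brighter pixels mark locations with more temporal variation."""
    T, H, W = frames_gray.shape
    edges = np.linspace(0, 256, bins + 1)
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            hist, _ = np.histogram(frames_gray[:, y, x], bins=edges)
            p = hist[hist > 0] / T
            out[y, x] = -(p * np.log2(p)).sum()
    out /= np.log2(bins)                    # normalize entropy to [0, 1]
    return (out * 255).astype(np.uint8)
```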
Abstract:
A system and methods analyze music to detect musical beats and to rectify beats that are out of sync with the actual beat phase of the music. The music analysis includes onset detection, tempo/meter estimation, and beat analysis, which includes the rectification of out-of-sync beats.
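The rectification step can be sketched as follows, assuming beat times and a beat period (60 divided by the estimated tempo) have already been obtained from onset detection and tempo estimation: a beat grid is anchored on the median phase of the detected beats, and only beats that drift beyond a tolerance are snapped back onto the grid. The tolerance and grid-fitting choices are assumptions for this sketch.

```python
import numpy as np

def rectify_beats(beat_times, period, tol=0.15):
    """beat_times in seconds; period = 60 / estimated tempo in BPM.
    Returns the rectified beat times and a mask of the beats that were moved."""
    beats = np.asarray(beat_times, dtype=float)
    # Anchor a beat grid on the median phase of the detected beats.
    phase = np.median(beats % period)
    grid = phase + np.round((beats - phase) / period) * period
    out_of_sync = np.abs(beats - grid) > tol * period
    rectified = np.where(out_of_sync, grid, beats)   # move only out-of-sync beats
    return rectified, out_of_sync

# Example: rectify_beats([0.50, 1.01, 1.70, 2.00], period=0.5) pulls the 1.70 s
# beat back toward the half-second grid and leaves the others untouched.
```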
Abstract:
A face recognition system and process for identifying a person depicted in an input image and their face pose. This system and process entails locating and extracting face regions belonging to known people from a set of model images, and determining the face pose for each of the face regions extracted. All the extracted face regions are preprocessed by normalizing, cropping, categorizing and finally abstracting them. More specifically, the images are normalized and cropped to show only a person's face, categorized according to the face pose of the depicted person's face by assigning them to one of a series of face pose ranges, and abstracted preferably via an eigenface approach. The preprocessed face images are preferably used to train a neural network ensemble having a first stage made up of a bank of face recognition neural networks, each of which is dedicated to a particular pose range, and a second stage constituting a single fusing neural network that is used to combine the outputs from each of the first stage neural networks. Once trained, the input of a face region which has been extracted from an input image and preprocessed (i.e., normalized, cropped and abstracted) will cause just one of the output units of the fusing portion of the neural network ensemble to become active. The active output unit indicates either the identity of the person whose face was extracted from the input image and the associated face pose, or that the identity of the person is unknown to the system.
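The two-stage wiring can be sketched as follows, with the pose-specific recognizers and the fusing classifier represented as stand-in callables (trained networks in practice): the first-stage outputs are concatenated, the fusing stage activates a single output unit, and that unit is decoded into an identity and pose range or into "unknown". The class, its output-unit layout, and the decoding rule are assumptions made for illustration.

```python
import numpy as np

class FaceRecognitionEnsemble:
    """Two-stage ensemble: a bank of pose-specific recognizers feeding a fusing stage."""

    def __init__(self, pose_nets, fusing_net, people):
        self.pose_nets = pose_nets    # one callable per face-pose range
        self.fusing_net = fusing_net  # callable on the concatenated first-stage outputs
        self.people = people          # known identities

    def identify(self, face_coeffs):
        """face_coeffs: abstracted face (normalized, cropped, eigenface-projected)."""
        first_stage = np.concatenate([net(face_coeffs) for net in self.pose_nets])
        second_stage = self.fusing_net(first_stage)
        active = int(np.argmax(second_stage))      # the single active output unit
        if active == len(second_stage) - 1:        # last unit reserved for "unknown"
            return None, None
        # assumed layout: one unit per (pose range, person) pair, pose-major order
        person = self.people[active % len(self.people)]
        pose_range = active // len(self.people)
        return person, pose_range

# Example with stub networks (2 pose ranges, 2 known people, 1 "unknown" unit):
# pose_nets = [lambda x: np.random.rand(4), lambda x: np.random.rand(4)]
# fusing_net = lambda v: np.random.rand(2 * 2 + 1)
# FaceRecognitionEnsemble(pose_nets, fusing_net, ["alice", "bob"]).identify(np.zeros(20))
```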