Abstract:
A method of extracting audio excerpts comprises: segmenting audio data into a plurality of audio data segments; setting a fitness criterion for the plurality of audio data segments; analyzing the plurality of audio data segments based on the fitness criterion; and selecting one of the plurality of audio data segments that satisfies the fitness criterion. In various exemplary embodiments, the method of extracting audio excerpts further comprises associating the selected one of the plurality of audio data segments with video data. In such embodiments, associating the selected one of the plurality of audio data segments with video data may comprise associating the selected one of the plurality of audio data segments with a keyframe.
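A minimal sketch of the claimed flow in Python, assuming NumPy and treating RMS energy above a threshold as a stand-in for the unspecified fitness criterion; the names extract_excerpt, segment_sec, and min_rms are illustrative only:

```python
import numpy as np

def extract_excerpt(audio, sr, segment_sec=5.0, min_rms=0.05):
    # Segment the audio data into fixed-length segments.
    seg_len = int(segment_sec * sr)
    segments = [audio[i:i + seg_len] for i in range(0, len(audio), seg_len)]
    # Analyze each segment against the fitness criterion (RMS energy here)
    # and select one that satisfies it.
    for seg in segments:
        if len(seg) == seg_len and np.sqrt(np.mean(seg ** 2)) >= min_rms:
            return seg
    return None

sr = 16000                                      # 16 kHz sample rate
audio = np.random.uniform(-0.1, 0.1, 30 * sr)   # 30 s of synthetic audio
excerpt = extract_excerpt(audio, sr)
```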
Abstract:
Embodiments of the present invention provide a method for producing a summary of a digital file on one or more computers. The method includes segmenting the digital file into a plurality of segments, clustering said segments into a plurality of clusters, and selecting a cluster from said plurality of clusters, wherein said selected cluster includes segments representative of said digital file. Upon selection of a cluster, a segment of the cluster is provided as a summary of said digital file.
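A rough sketch of that pipeline, assuming per-segment feature vectors are already available and using scikit-learn's KMeans; reading "representative" as the cluster whose centroid is closest to the global feature mean is my assumption, not the patent's definition:

```python
import numpy as np
from sklearn.cluster import KMeans

def summarize(features, n_clusters=4):
    # Cluster the per-segment feature vectors.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    # Pick the cluster whose centroid is closest to the global mean,
    # treating it as "representative" of the whole file.
    global_mean = features.mean(axis=0)
    best = np.argmin(np.linalg.norm(km.cluster_centers_ - global_mean, axis=1))
    # Return the member segment nearest that centroid as the summary.
    members = np.where(km.labels_ == best)[0]
    dists = np.linalg.norm(features[members] - km.cluster_centers_[best], axis=1)
    return members[np.argmin(dists)]

features = np.random.rand(50, 12)   # e.g., 50 segments x 12 MFCC-like features
summary_segment = summarize(features)
```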
Abstract:
A stream of ordered information, such as, for example, audio, video and/or text data, can be windowed and parameterized. A similarity matrix for the parameterized and windowed stream of ordered information can be determined, and a probabilistic decomposition or probabilistic matrix factorization, such as non-negative matrix factorization, can be applied to the similarity matrix. The component matrices resulting from the decomposition indicate major components or segments of the ordered information. Excerpts can then be extracted from the stream of ordered information based on the component matrices to generate a summary of the stream of ordered information.
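A small illustration with scikit-learn's NMF standing in for the probabilistic decomposition, applied to a cosine similarity matrix built from made-up window features:

```python
import numpy as np
from sklearn.decomposition import NMF

# Parameterized windows of the stream (random stand-in features here).
X = np.abs(np.random.rand(200, 20))           # 200 windows x 20 features
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
S = Xn @ Xn.T                                 # non-negative similarity matrix

# Non-negative matrix factorization of the similarity matrix; each column
# of W scores one major component/segment across the windows.
model = NMF(n_components=3, init="nndsvda", max_iter=500)
W = model.fit_transform(S)
labels = W.argmax(axis=1)                     # dominant component per window

# Excerpt selection: for each component, take its highest-scoring window.
excerpt_windows = [int(W[:, k].argmax()) for k in range(W.shape[1])]
```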
Abstract:
Methods and systems for classifying images, such as photographs, allow a user to incorporate subjective judgments regarding photograph qualities when making classification decisions. A slide-show interface allows a user to classify and advance photographs with a one-key action or a single interaction event. The interface presents information relevant to the displayed photograph being classified, such as contiguous photographs, similar photographs, and other versions of the same photograph. The methods and systems provide an overview interface which allows a user to review and refine classification decisions in the context of the original sequence of photographs.
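A console-only stand-in for the one-key interaction, with made-up keys and labels; the actual interface is a graphical slide show, which this sketch does not attempt to reproduce:

```python
# A single keystroke both records the decision and advances to the next photo.
ACTIONS = {"k": "keep", "d": "drop", "u": "unsure"}

def classify(photos):
    decisions = {}
    for photo in photos:
        key = input(f"{photo} [k]eep/[d]rop/[u]nsure: ").strip().lower()
        decisions[photo] = ACTIONS.get(key, "unsure")  # one action per photo
    return decisions

# decisions = classify(["img001.jpg", "img002.jpg"])
```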
Abstract:
Embodiments of the present invention provide a system and method for discriminatively selecting keyframes that are representative of segments of a source digital media and at the same time distinguishable from other keyframes representing other segments of the digital media. The method and system, in one embodiment, include pre-processing the source digital media to obtain feature vectors for frames of the media, then discriminatively selecting a keyframe as a representative for each segment of the source digital media, wherein said discriminative selection includes determining a similarity measure for each candidate keyframe, determining a dissimilarity measure for each candidate keyframe, and selecting the keyframe with the highest goodness value computed from the similarity and dissimilarity measures.
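One way to realize this selection in Python, assuming cosine similarity between frame feature vectors and taking goodness as within-segment similarity minus cross-segment similarity; the abstract does not fix the formula, so this combination is an assumption:

```python
import numpy as np

def select_keyframes(features, segment_ids):
    # Cosine similarity between every pair of frames.
    Xn = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = Xn @ Xn.T
    keyframes = {}
    for seg in np.unique(segment_ids):
        inside = segment_ids == seg
        # Goodness: similar to the frame's own segment, dissimilar to others.
        goodness = sim[:, inside].mean(axis=1) - sim[:, ~inside].mean(axis=1)
        goodness[~inside] = -np.inf      # candidates come from the segment only
        keyframes[seg] = int(goodness.argmax())
    return keyframes

features = np.random.rand(120, 16)           # 120 frames x 16 features
segment_ids = np.repeat(np.arange(4), 30)    # four 30-frame segments
print(select_keyframes(features, segment_ids))
```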
Abstract:
The present invention provides a system and method for automatically combining image and audio data to create a multimedia presentation. In one embodiment, audio and image data are received by the system. The audio data includes a list of events that correspond to points of interest in an audio file. The audio data may also include an audio file or audio stream. The received images are then matched to the audio file or stream using the times of the listed events. In one embodiment, the events represent times within the audio file or stream at which there is a certain feature or characteristic in the audio file. The audio events list may be processed to remove, sort, predict, or otherwise generate audio events. Image processing may also occur, and may include image analysis to match images to the event list, deleting images, and processing images to incorporate effects. Image effects may include cropping, panning, zooming and other visual effects.
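A sketch of the time-based matching step only, pairing each event time with the nearest image timestamp; all values here are invented:

```python
import bisect

def match_images_to_events(event_times, image_times):
    # For each event time, find the image timestamp closest to it.
    image_times = sorted(image_times)
    matches = []
    for t in event_times:
        i = bisect.bisect_left(image_times, t)
        candidates = image_times[max(i - 1, 0):i + 1]
        matches.append(min(candidates, key=lambda x: abs(x - t)))
    return list(zip(event_times, matches))

events = [3.2, 7.9, 15.0]           # seconds into the audio file (made up)
images = [0.0, 5.0, 10.0, 14.5]     # image timestamps (made up)
print(match_images_to_events(events, images))
```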
Abstract:
Systems and methods generate a video for virtual reality wherein the video is both panoramic and spatially indexed. In embodiments, a video system includes a controller, a database including spatial data, and a user interface in which a video is rendered in response to a specified action. The video includes a plurality of images retrieved from the database. Each of the images is panoramic and spatially indexed in accordance with a predetermined position along a virtual path in a virtual environment.
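A toy spatial index, assuming each panoramic frame is keyed by its distance along the virtual path so that rendering a requested motion reduces to a range query; the class and method names are illustrative:

```python
import bisect

class PathIndex:
    def __init__(self, frames):
        # frames: list of (position_along_path_in_meters, image_id)
        self.frames = sorted(frames)
        self.positions = [p for p, _ in self.frames]

    def frames_between(self, start, end):
        # Retrieve all panoramic frames indexed within [start, end].
        lo = bisect.bisect_left(self.positions, start)
        hi = bisect.bisect_right(self.positions, end)
        return [img for _, img in self.frames[lo:hi]]

index = PathIndex([(0.0, "pano_000"), (2.5, "pano_001"), (5.0, "pano_002")])
print(index.frames_between(1.0, 5.0))   # frames along the requested path segment
```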
Abstract:
A method and apparatus for providing multi-resolution video to multiple users under hybrid human and automatic control. Initial environment and close-up images are captured using a first camera and a PTZ camera. The initial images are then stored in memory. Current environment and close-up images are captured, and estimated differences between the initial and current images and the true image are determined. The estimated differences are weighted and compared, and the stored images are updated. A close-up image is then provided to each user of the system. The close-up camera is then directed to a portion of the environment image having high distortion, and current environment and close-up images are captured again.
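A sketch of just the targeting step, assuming a per-pixel map of weighted difference estimates is already available; block-averaging over a grid and aiming the PTZ camera at the worst cell is one plausible reading of "a portion of the environment image having high distortion":

```python
import numpy as np

def pick_ptz_target(estimated_error, grid=(4, 4)):
    # Average the per-pixel error estimates over a coarse grid of cells.
    h, w = estimated_error.shape
    gh, gw = h // grid[0], w // grid[1]
    cells = (estimated_error[:gh * grid[0], :gw * grid[1]]
             .reshape(grid[0], gh, grid[1], gw).mean(axis=(1, 3)))
    # Return the (row, col) of the cell with the highest accumulated error.
    return np.unravel_index(cells.argmax(), cells.shape)

error_map = np.random.rand(480, 640)    # stand-in weighted difference map
print(pick_ptz_target(error_map))       # e.g., (2, 1): aim PTZ at that region
```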
Abstract:
Methods for segmenting audio-video recordings of meetings containing slide presentations by one or more speakers are described. These segments serve as indexes into the recorded meeting. If an agenda is provided for the meeting, the segments can be labeled using information from the agenda. The system automatically detects intervals of video that correspond to presentation slides. Under the assumption that only one person is speaking during an interval when slides are displayed in the video, candidate speaker intervals are extracted from the audio soundtrack during those slide intervals. Since the same speaker may talk across multiple slide intervals, the acoustic data from these intervals is clustered to yield an estimate of the number of distinct speakers and their order. Merged clustered audio intervals corresponding to a single speaker are then used as training data for a speaker segmentation system. Using speaker identification techniques, the full video is then segmented into individual presentations based on the extent of each presenter's speech. The speaker identification system optionally includes the construction of a hidden Markov model trained on the audio data from each slide interval. A Viterbi assignment then segments the audio according to speaker.
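A sketch of the clustering step alone, assuming one acoustic feature vector per slide interval and using scikit-learn's agglomerative clustering with a distance threshold in place of whatever estimator the patent uses for the speaker count:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# One feature vector per detected slide interval (e.g., mean MFCCs);
# random stand-in values here.
interval_features = np.random.rand(10, 13)     # 10 intervals x 13 features

# Cluster without fixing the number of clusters; the threshold implicitly
# estimates how many distinct speakers there are.
clust = AgglomerativeClustering(n_clusters=None, distance_threshold=1.0)
labels = clust.fit_predict(interval_features)

print(f"estimated speakers: {len(set(labels))}, order: {list(labels)}")
# Intervals sharing a label would then be merged as training data for the
# per-speaker models (the abstract uses HMMs plus a Viterbi pass for the
# full segmentation).
```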
Abstract:
A system and method for detecting useful images and for ranking images in order of usefulness based on a vignette score describing how closely each one resembles a “vignette,” that is, a central object or image surrounded by a featureless or deemphasized background. Several methods for determining an image's vignette score are disclosed as examples. Variance ratio analysis entails calculating the ratio of the variance in the edge region of the image to the variance of the entire image. Statistical model analysis entails training a statistical classifier that builds a statistical model of each image class from pre-entered training data. Spatial frequency analysis involves estimating the energy at different spatial frequencies in the central and edge regions and in the image as a whole; the vignette score is then calculated as the ratio of the mid-frequency energy in the edge region to the mid-frequency energy of the entire image.
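A minimal version of the variance-ratio method, assuming a grayscale image array and an edge band covering a fixed fraction of each side; whether a low or high score ranks as "useful" is left to the ranking policy:

```python
import numpy as np

def variance_ratio_vignette_score(gray, border=0.15):
    # Build a mask covering the edge band of the image.
    h, w = gray.shape
    bh, bw = int(h * border), int(w * border)
    mask = np.zeros_like(gray, dtype=bool)
    mask[:bh, :] = True
    mask[-bh:, :] = True
    mask[:, :bw] = True
    mask[:, -bw:] = True
    # Ratio of edge-region variance to whole-image variance; a featureless
    # border drives the score toward zero.
    return gray[mask].var() / (gray.var() + 1e-12)

img = np.random.rand(240, 320)
img[60:180, 80:240] += 2.0   # synthetic bright central object
print(variance_ratio_vignette_score(img))
```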