Abstract:
This invention relates to a force-feedback apparatus which includes a stylus equipped with an electromagnetic device or a freely rotating ball. The stylus is functionally coupled to a controller capable of applying a magnetic field to the electromagnetic device or to the rotating ball, which creates a force between the stylus and a surface. This invention also relates to a method of using a force-feedback stylus, including moving the force-feedback stylus over a surface, controlling a force-feedback device via a controller coupled to the force-feedback stylus, and applying a force to the force-feedback stylus via the force-feedback device, the force being determined at least in part by features on the surface.
Abstract:
A stream of ordered information, such as, for example, audio, video and/or text data, can be windowed and parameterized. A similarity matrix can be computed between the windows of the parameterized stream of ordered information, and a probabilistic decomposition or probabilistic matrix factorization, such as non-negative matrix factorization, can be applied to the similarity matrix. The component matrices resulting from the decomposition indicate major components or segments of the ordered information. Excerpts can then be extracted from the stream of ordered information based on the component matrices to generate a summary of the stream of ordered information.
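A minimal sketch of the decomposition step, assuming cosine similarity over window feature vectors and a basic multiplicative-update NMF; the abstract fixes neither choice, and all function names here are illustrative:

```python
import numpy as np

def similarity_matrix(features):
    """Pairwise cosine similarity, clipped at zero so the matrix is
    non-negative as non-negative matrix factorization requires."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)
    return np.maximum(unit @ unit.T, 0.0)

def nmf(S, n_components, n_iter=300, seed=0):
    """Basic multiplicative-update NMF: S is approximated by W @ H,
    with W and H kept element-wise non-negative."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    W = rng.random((n, n_components)) + 1e-3
    H = rng.random((n_components, n)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ S) / (W.T @ W @ H + 1e-12)
        W *= (S @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def segment_labels(features, n_segments):
    """Label each window by its dominant NMF component (a coarse segment)."""
    W, _ = nmf(similarity_matrix(features), n_segments)
    return np.argmax(W, axis=1)
```

Each column of W then acts as an activation profile for one major component of the stream, and the strongest windows per component are candidates for summary excerpts.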
Abstract:
Methods and systems for classifying images, such as photographs, allow a user to incorporate subjective judgments regarding photograph qualities when making classification decisions. A slide-show interface allows a user to classify and advance photographs with a one-key action or a single interaction event. The interface presents related information relevant to a displayed photograph that is to be classified, such as contiguous photographs, similar photographs, and other versions of the same photograph. The methods and systems provide an overview interface which allows a user to review and refine classification decisions in the context of the original sequence of photographs.
Abstract:
Methods for interactively selecting video queries, consisting of training images from a video, for a video similarity search and for displaying the results of the similarity search are disclosed. The user selects a time interval in the video as a query definition of training images for training an image class statistical model. Time intervals can be as short as one frame or consist of disjoint segments or shots. A statistical model of the image class defined by the training images is calculated on-the-fly from feature vectors extracted from transforms of the training images. For each frame in the video, a feature vector is extracted from the transform of the frame, and a similarity measure is calculated using the feature vector and the image class statistical model. The similarity measure is derived from the likelihood of a Gaussian model producing the frame. The similarity is then presented graphically, which allows the time structure of the video to be visualized and browsed. Similarity can be rapidly calculated for other video files as well, which enables content-based retrieval by example. A content-aware video browser featuring interactive similarity measurement is presented. A method for selecting training segments involves mouse click-and-drag operations over a time bar representing the duration of the video; similarity results are displayed as shades in the time bar. Another method involves selecting periodic frames of the video as endpoints for the training segment.
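The similarity measure described above, the likelihood of a Gaussian model producing a frame, can be sketched as follows, assuming a diagonal-covariance Gaussian fit to the training feature vectors; the variance floor and function names are illustrative, not taken from the patent:

```python
import numpy as np

def fit_gaussian(train_features):
    """Fit a diagonal-covariance Gaussian to training feature vectors
    (one row per training frame)."""
    mean = train_features.mean(axis=0)
    var = train_features.var(axis=0) + 1e-6  # floor avoids division by zero
    return mean, var

def log_likelihood(frames, mean, var):
    """Per-frame log-likelihood under the Gaussian image-class model;
    higher values mean the frame is more similar to the query class."""
    d = frames - mean
    return -0.5 * (np.sum(d * d / var, axis=1)
                   + np.sum(np.log(2.0 * np.pi * var)))
```

The per-frame scores can then be normalized and rendered as shades in a time bar to visualize the time structure of the video.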
Abstract:
A system in accordance with one embodiment of the present invention comprises a device for facilitating video communication between a remote participant and another location. The device can comprise a screen adapted to display the remote participant, the screen having a posture adapted to be controlled by the remote participant. A camera can be mounted adjacent to the screen, and can allow the remote participant to view a selected conference participant or a desired location such that when the camera is trained on the selected participant or desired location a gaze of the remote participant displayed by the screen appears substantially directed at the selected participant or desired location.
Abstract:
Embodiments of the present invention provide a system and method for discriminatively selecting keyframes that are representative of segments of a source digital media and at the same time distinguishable from other keyframes representing other segments of the digital media. The method and system, in one embodiment, include pre-processing the source digital media to obtain feature vectors for frames of the media, and discriminatively selecting a keyframe as a representative for each segment of the source digital media, wherein the discriminative selection includes determining a similarity measure for each candidate keyframe, determining a dis-similarity measure for each candidate keyframe, and selecting the keyframe with the highest goodness value computed from the similarity and dis-similarity measures.
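One plausible reading of the goodness computation, assuming cosine similarity between feature vectors and goodness defined as within-segment similarity minus cross-segment similarity; the abstract does not give the exact formula, so this is a sketch under those assumptions:

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two frame feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def pick_keyframe(segment, other_segments):
    """Return the index of the frame in `segment` with the highest goodness:
    mean similarity to its own segment minus mean similarity to frames of
    the other segments (the dis-similarity requirement)."""
    best, best_goodness = 0, -np.inf
    for i, f in enumerate(segment):
        within = np.mean([cos(f, g) for g in segment])
        cross = np.mean([cos(f, g) for o in other_segments for g in o])
        goodness = within - cross
        if goodness > best_goodness:
            best, best_goodness = i, goodness
    return best
```

A frame that is typical of its own segment but resembles another segment is penalized, so the chosen keyframes stay mutually distinguishable.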
Abstract:
The present invention provides a system and method for automatically combining image and audio data to create a multimedia presentation. In one embodiment, audio and image data are received by the system. The audio data includes a list of events that correspond to points of interest in an audio file. The audio data may also include an audio file or audio stream. The received images are then matched to the audio file or stream using timing information. In one embodiment, the events represent times within the audio file or stream at which there is a certain feature or characteristic in the audio file. The audio events list may be processed to remove, sort, predict, or otherwise generate audio events. Image processing may also occur, and may include image analysis to determine image matching to the event list, deleting images, and processing images to incorporate effects. Image effects may include cropping, panning, zooming, and other visual effects.
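A minimal sketch of the matching step, assuming a simple one-to-one ordered pairing of images to event times in which each image is displayed from its event until the next one; the abstract leaves the exact matching policy open, and the function name is illustrative:

```python
def schedule_images(event_times, image_ids):
    """Pair each audio event with an image in order; each image is shown
    from its event time until the next event (None = until the audio ends).
    Assumes one image per event and event times sorted ascending."""
    schedule = []
    for i, (t, img) in enumerate(zip(event_times, image_ids)):
        end = event_times[i + 1] if i + 1 < len(event_times) else None
        schedule.append((img, t, end))
    return schedule
```

Effects such as panning or zooming would then be applied to each image over its (start, end) display interval.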
Abstract:
A camera array captures plural component images which are combined into a single scene. In one embodiment, each camera of the array is a fixed digital camera. The images from each camera are warped to a common coordinate system and the disparity between overlapping images is reduced using disparity estimation techniques.
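The warp to a common coordinate system can be illustrated with a planar homography, one common choice for stitching images from fixed cameras; the abstract does not specify the transform, and the names here are illustrative:

```python
import numpy as np

def warp_points(H, points):
    """Map 2-D points into the common coordinate system with a 3x3
    homography H, using homogeneous coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # append w = 1
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the w component
```

Applying each camera's homography to its image grid places all component images in one frame, after which overlapping regions can be compared to estimate and reduce disparity.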
Abstract:
A method, system, and apparatus for easily creating a video collage from a video are provided. By segmenting the video into a set number of video segments and providing an interface for a user to select images which represent the video segments and insert the selected images into a video collage template, a video collage may be created easily and in a short amount of time. The system is designed to assign values to the video inserted into a video collage and compact the video based on these values, thereby creating a small file which may be easily stored or transmitted.
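The segmentation into a set number of video segments can be sketched as an even split over frame indices; this even-split rule is an assumption, since the abstract does not specify how the segment boundaries are chosen:

```python
import numpy as np

def split_segments(n_frames, n_segments):
    """Split frame indices 0..n_frames into a set number of roughly
    equal, contiguous (start, end) segments."""
    bounds = np.linspace(0, n_frames, n_segments + 1, dtype=int)
    return [(int(bounds[i]), int(bounds[i + 1])) for i in range(n_segments)]
```

Each segment then contributes one user-selected representative image to a slot of the collage template.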