Abstract:
A spatial-color Gaussian mixture model (SCGMM) technique for segmenting the images of an image sequence. The SCGMM image segmentation technique specifies foreground objects in the first frame of the image sequence, either manually or automatically. From this initial segmentation, the SCGMM segmentation system learns two spatial-color Gaussian mixture models, one for the foreground objects and one for the background. These models are built into a first-order Markov random field (MRF) energy function. Minimizing this energy function yields a binary segmentation of the images in the image sequence, and the minimization can be performed efficiently using a conventional graph cut procedure.
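The unary (data) term of such an energy function compares each pixel's likelihood under the two mixtures. A minimal sketch in Python/NumPy, assuming diagonal-covariance mixtures over (x, y, r, g, b) features; the pairwise MRF term and the graph cut step are omitted, so labels here come from the data term alone, and the function names and parameter layout are illustrative:

```python
import numpy as np

def gmm_log_likelihood(feats, weights, means, variances):
    """Log-likelihood of each feature row under a diagonal-covariance GMM."""
    n, d = feats.shape
    comp = np.empty((len(weights), n))
    for k in range(len(weights)):
        diff = feats - means[k]
        log_norm = -0.5 * (d * np.log(2 * np.pi) + np.sum(np.log(variances[k])))
        comp[k] = (np.log(weights[k]) + log_norm
                   - 0.5 * np.sum(diff ** 2 / variances[k], axis=1))
    m = comp.max(axis=0)  # log-sum-exp over mixture components
    return m + np.log(np.exp(comp - m).sum(axis=0))

def segment(image, fg_model, bg_model):
    """Label each pixel foreground (True) or background (False) by
    comparing its (x, y, r, g, b) likelihood under the two SCGMMs."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([xs.ravel() / w, ys.ravel() / h,
                             image.reshape(-1, 3)])
    fg = gmm_log_likelihood(feats, *fg_model)
    bg = gmm_log_likelihood(feats, *bg_model)
    return (fg > bg).reshape(h, w)
```

In the full technique these per-pixel likelihood ratios become the terminal-edge weights of the graph on which the cut is computed.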
Abstract:
Systems and methods are disclosed that facilitate real-time information exchange in a multimedia conferencing environment. Data Client(s) facilitate data collaboration between users and are maintained separately from audio/video (AV) Clients that provide real-time communication functionality. Data Clients can be remotely located with respect to one another and with respect to a server. A remote user Stand-in Device can be provided that comprises a display to present a remote user to local users, a digital automatic pan/tilt/zoom camera to capture imagery in, for example, a conference room and provide real-time information to an AV Client in a remote office, and a microphone array that can similarly provide real-time audio information from the conference room to an AV Client in the remote office. The invention further facilitates file transfer and presentation broadcast between Data Clients in a single location or in a plurality of disparate locations.
Abstract:
An automated system and method for broadcasting meetings over a computer network. The meeting is filmed using an omni-directional camera system and can be presented to a viewer both live and on-demand. The system of the present invention includes an automated camera management system for controlling the camera system and an analysis module for determining the location of meeting participants in the meeting environment. The method of the present invention includes using the system of the present invention to broadcast an event to a viewer over a computer network. In particular, the method includes filming the event using an omni-directional camera system. Next, the method determines the location of each event participant in the event environment. Finally, a viewer is provided with a user interface for viewing the broadcast event. This user interface allows the viewer to choose which event participant to view.
Abstract:
A “virtual video studio”, as described herein, provides a highly portable real-time capability to automatically capture, record, and edit a plurality of video streams of a presentation, such as, for example, a speech, lecture, seminar, classroom instruction, talk-show, teleconference, etc., along with any accompanying exhibits, such as a corresponding slide presentation, using a suite of one or more unmanned cameras controlled by a set of videography rules. The resulting video output may then either be stored for later use, or broadcast in real-time to a remote audience. This real-time capability is achieved by using an abstraction of “virtual cameramen” and physical cameras in combination with a scriptable interface to the aforementioned videography rules for capturing and editing the recorded video to create a composite video of the presentation in real-time under the control of a “virtual director.”
Abstract:
A system and method for teleconferencing and recording of meetings. The system uses a variety of capture devices (a novel 360° camera, a whiteboard camera, a presenter view camera, a remote view camera, and a microphone array) to provide a rich experience for people who want to participate in a meeting from a distance. The system is also combined with speaker clustering, spatial indexing, and time compression to provide a rich experience for people who miss a meeting and want to watch it afterward.
Abstract:
An automated camera management system and method for capturing presentations using videography rules. The system and method use technology components and aesthetic components represented by the videography rules to capture a presentation. In general, the automated camera management method captures a presentation using videography rules to determine camera positioning, camera movement, and switching or transition between cameras. The videography rules depend on the type of presentation room and the number of audio-visual camera units used to capture the presentation. The automated camera management system of the invention uses the above method to capture a presentation in a presentation room. The system includes at least one audio-visual (A-V) camera unit for capturing and tracking a subject based on vision or sound. Each A-V camera unit includes any combination of the following components: (1) a pan-tilt-zoom (PTZ) camera; (2) a fixed camera; and (3) a microphone array.
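A minimal sketch of one such rule, assuming a hypothetical two-camera setup: cut to the speaker-tracking camera when a speaker is detected, but enforce a minimum shot duration so the output does not cut too rapidly (an aesthetic rule; the camera names and threshold below are illustrative, not the patented rule set):

```python
class VirtualDirector:
    """Toy rule-based camera switcher: prefer the speaker-tracking
    PTZ camera, but never cut before MIN_SHOT_SECONDS have elapsed."""

    MIN_SHOT_SECONDS = 4.0  # aesthetic rule: avoid rapid cuts

    def __init__(self):
        self.current = "overview"
        self.last_switch = 0.0

    def choose_camera(self, t, speaker_visible):
        """Return the camera to put on air at time t (seconds)."""
        desired = "speaker-ptz" if speaker_visible else "overview"
        if desired != self.current and t - self.last_switch >= self.MIN_SHOT_SECONDS:
            self.current = desired
            self.last_switch = t
        return self.current
```

A real rule set would also encode framing preferences, transition styles, and per-room camera placement, which is why the rules are parameterized by room type and camera count.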
Abstract:
A system and process is described for estimating the location of a speaker using signals output by a microphone array characterized by multiple pairs of audio sensors. The location of a speaker is estimated by first determining whether the signal data contains human speech components and filtering out noise attributable to stationary sources. The location of the person speaking is then estimated using a time-delay-of-arrival (TDOA) based sound source localization (SSL) technique on those parts of the data determined to contain human speech components. A consensus location for the speaker is computed from the individual location estimates associated with each pair of microphone array audio sensors, taking into consideration the uncertainty of each estimate. A final consensus location is also computed from the individual consensus locations computed over a prescribed number of sampling periods using a temporal filtering technique.
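For a single microphone pair, the time-delay-of-arrival step can be sketched as a cross-correlation peak search. This is a simplification: practical SSL systems typically apply generalized cross-correlation weighting before the peak search, and the function name and sampling rate below are assumptions:

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Delay (in seconds) of sig_b relative to sig_a, taken as the
    lag that maximizes the cross-correlation of the two signals."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)  # shift to signed lag
    return lag / fs
```

The per-pair delays, combined with the known sensor geometry, each constrain the speaker to a surface; the consensus step fuses these constraints weighted by their uncertainties.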
Abstract:
A unique system and method that facilitates multi-user collaborative interactions is provided. Multiple users can provide input to an interactive surface at or about the same time without yielding control of the surface to any one user. The multiple users can share control of the surface and perform operations on various objects displayed on the surface. The objects can undergo a variety of manipulations and modifications depending on the particular application in use. Objects can be moved or copied between the interactive surface (a public workspace) and a more private workspace where a single user controls the workspace. The objects can also be grouped as desired.
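Moving and copying objects between the public and private workspaces can be sketched as set operations on simple workspace containers (the class and function names are illustrative assumptions, not the patented API):

```python
class Workspace:
    """A named collection of displayed objects (public surface or
    a single user's private workspace)."""

    def __init__(self, name):
        self.name = name
        self.objects = set()

def move(obj, src, dst):
    """Remove obj from src and place it in dst."""
    src.objects.discard(obj)
    dst.objects.add(obj)

def copy(obj, src, dst):
    """Place a copy of obj in dst, leaving src unchanged."""
    if obj in src.objects:
        dst.objects.add(obj)
```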
Abstract:
A computer network-based distributed presentation system and process is presented that controls the display of one or more video streams output by multiple video cameras located across multiple presentation sites on display screens located at each presentation site. The distributed presentation system and process provides the ability for a user at a site to customize the screen configuration (i.e., which video streams are displayed at any one time and in what format) for that site via a two-layer display director module. In the design layer of the module, a user interface is provided for a user to specify display priorities dictating what video streams are to be displayed on the screen over time. These display priorities are then provided to the execution layer of the module, which translates them into probabilistic timed automata and uses the automata to control what is displayed on the display screen.
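The execution layer's automaton can be sketched as a state machine whose states are screen configurations and whose transition probabilities are derived from the user's display priorities; the states, weights, and dwell time below are illustrative assumptions:

```python
import random

class DisplayAutomaton:
    """Toy probabilistic timed automaton: hold each screen
    configuration for `dwell` seconds, then draw the next one
    from the configured transition distribution."""

    def __init__(self, transitions, dwell, seed=None):
        self.transitions = transitions  # state -> [(next_state, probability)]
        self.dwell = dwell              # seconds to hold a configuration
        self.rng = random.Random(seed)

    def step(self, state):
        """Sample the next screen configuration from `state`."""
        nexts, weights = zip(*self.transitions[state])
        return self.rng.choices(nexts, weights=weights)[0]
```

Biasing the transition weights toward a given stream is how a priority such as "mostly show the active speaker, occasionally cut to the room overview" would be expressed.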
Abstract:
A system and process is presented for muting the audio transmission from the location of a participant engaged in a multi-party, computer network-based teleconference when that participant is working on a keyboard. The audio is muted because it is assumed that a participant who is typing is doing something other than actively participating in the meeting. If left un-muted, the sound of typing would distract the other participants in the teleconference.
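The core logic can be sketched as a small state holder that keeps the audio muted for a short hold time after each keystroke (the hold time and class name are assumptions, not from the patent):

```python
class TypingMuter:
    """Mute the participant's audio while keystrokes are recent."""

    HOLD_SECONDS = 1.0  # assumed: stay muted this long after the last keystroke

    def __init__(self):
        self.last_key = float("-inf")

    def on_keystroke(self, t):
        """Record a keystroke observed at time t (seconds)."""
        self.last_key = t

    def is_muted(self, t):
        """True while within HOLD_SECONDS of the last keystroke."""
        return (t - self.last_key) < self.HOLD_SECONDS
```

The hold time smooths over the gaps between keystrokes so the channel does not rapidly toggle between muted and un-muted mid-sentence.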