Abstract:
A system and method for optimizing the visual fidelity of a presentation for a plurality of audience members and a plurality of display devices, comprising: modeling the quality of view available to the plurality of audience members based on: one or more properties of the display devices, a distribution of the display devices, a distribution of the plurality of audience members, and the visual presentation, wherein the visual presentation comprises one or more h-slides; and determining an optimal mapping of the one or more h-slides to the plurality of display devices based on the modeling.
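The determining step can be read as a search over candidate assignments of h-slides to displays, scored by the modeled quality of view. Below is a minimal Python sketch under stated assumptions: a hypothetical view_quality(slide, display) function that already sums the modeled quality over all audience members, no more h-slides than displays, and an exhaustive search shown purely for illustration, since the abstract does not specify the optimization method.

from itertools import permutations

def best_mapping(h_slides, displays, view_quality):
    # view_quality(slide, display) is a hypothetical scoring function that
    # returns the modeled quality of view summed over all audience members.
    # Assumes len(h_slides) <= len(displays) and one h-slide per display.
    best_score, best = float("-inf"), None
    for choice in permutations(displays, len(h_slides)):
        score = sum(view_quality(s, d) for s, d in zip(h_slides, choice))
        if score > best_score:
            best_score, best = score, dict(zip(h_slides, choice))
    return best, best_score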
Abstract:
Systems and methods in accordance with the present invention can be applied to generate a personal media library of media segments from a media stream. A method in accordance with one embodiment can comprise receiving the media stream, identifying one or more novelty points within the media stream, and creating a plurality of media segments based on said one or more novelty points. The method can further be applied to compile a playlist or a substitute media stream, organizing such a stream as desired, eliminating redundant media clips, and discarding advertisements.
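The abstract does not define how novelty points are identified, so the following sketch assumes a simple frame-difference heuristic: grayscale frames as NumPy arrays scaled to [0, 1], a hypothetical difference threshold, and segments cut at the detected points.

import numpy as np

def find_novelty_points(frames, threshold=0.3):
    # Mark a novelty point wherever consecutive frames differ strongly.
    # The metric and threshold are illustrative assumptions only.
    points, prev = [], None
    for i, frame in enumerate(frames):
        if prev is not None and np.mean(np.abs(frame - prev)) > threshold:
            points.append(i)
        prev = frame
    return points

def segment_stream(frames, novelty_points):
    # Split the frame list into media segments at the novelty points.
    bounds = [0] + list(novelty_points) + [len(frames)]
    return [frames[a:b] for a, b in zip(bounds, bounds[1:]) if b > a]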
Abstract:
Systems and methods in accordance with embodiments of the present invention can include a convertible podium having a compact and lightweight design that can provide multiple functionalities by converting its form. A system in accordance with one embodiment of the present invention can convert from an interactive podium to other presentation devices including (but not limited to) an imaging device, a remote avatar for a presenter, an interactive whiteboard, and an information board. The system includes one or more configurable controls for controlling one or both of a presentation and a presentation environment.
Abstract:
A system for providing a dynamic audio-visual environment using an eSurface situated in a room environment; a projector situated for projecting images onto the eSurface; a camera situated to image the room environment; and a central processor coupled to the eSurface, the projector, and the camera. The processor receives pictures from the camera, detects the location of the eSurface from those pictures, and controls the projector to aim its projection beam onto the eSurface. The eSurface is a sheet-like surface having the property of accepting an optically projected image when powered and retaining the projected image after the power is turned off.
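One way the processor could detect the eSurface in the camera pictures is to look for the largest bright quadrilateral in the image; this is only an assumed detection method, since the abstract does not specify one. The sketch below uses OpenCV with an illustrative brightness threshold and returns the four corner points that a steering controller could use to aim the projection beam.

import cv2
import numpy as np

def locate_esurface(camera_frame_bgr):
    # Threshold on brightness (assumed heuristic) and keep the largest
    # four-cornered contour as the candidate eSurface.
    gray = cv2.cvtColor(camera_frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and (best is None or cv2.contourArea(approx) > cv2.contourArea(best)):
            best = approx
    return None if best is None else best.reshape(4, 2)  # corner pixel coordinates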
Abstract:
Video recordings of meetings and scanned paper documents are natural digital documents that come out of a meeting. These can be placed on the Internet for easy access, with links generated between them by matching each scanned document to the segment of the video that references it. Furthermore, annotations made on the paper documents during the meeting can be extracted and used as indexes into the video. An orthonormal transform, such as the Discrete Cosine Transform (DCT), is used to compare scanned documents to video frames.
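A rough illustration of the DCT-based comparison follows, assuming grayscale images as NumPy arrays; the fixed resize grid, the 8x8 block of low-frequency coefficients, and the correlation threshold are all illustrative choices, since the abstract only states that an orthonormal transform such as the DCT is used.

import numpy as np
from scipy.fft import dctn

def dct_signature(image, size=32, keep=8):
    # Nearest-neighbour resize to a fixed grid, then keep the low-frequency
    # corner of the 2-D DCT as a compact signature of the image.
    ys = np.linspace(0, image.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, image.shape[1] - 1, size).astype(int)
    small = image[np.ix_(ys, xs)].astype(float)
    return dctn(small, norm="ortho")[:keep, :keep].ravel()

def frame_matches_document(video_frame, scanned_doc, threshold=0.9):
    # Declare a match when the DCT signatures correlate strongly
    # (the threshold value is an assumption, not taken from the abstract).
    corr = np.corrcoef(dct_signature(video_frame), dct_signature(scanned_doc))[0, 1]
    return corr >= threshold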
Abstract:
Systems and methods for providing a status of a teleconference are provided, in which an approximate delay time is determined and a status signal is provided in view of the determined approximate delay time. The approximate delay time is approximately the amount of time that elapses between a first time, at which an occurrence is captured into an occurrence signal by a source unit, and a second time, at which the occurrence is experienced after the occurrence signal has been received by at least one receiving unit.
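As a small worked example of the idea, the approximate delay is simply the difference between the capture time at the source unit and the time the occurrence is experienced at the receiving unit, and the status signal is derived from that delay; the threshold values below are illustrative assumptions.

def approximate_delay(capture_time_s, experience_time_s):
    # Time between capture of the occurrence at the source unit and the
    # moment it is experienced at a receiving unit, in seconds.
    return experience_time_s - capture_time_s

def delay_status(delay_s, ok_below_s=0.3, warn_below_s=1.0):
    # Map the approximate delay to a coarse status signal
    # (threshold values are assumptions, not taken from the abstract).
    if delay_s < ok_below_s:
        return "normal"
    if delay_s < warn_below_s:
        return "delayed"
    return "severely delayed"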
Abstract:
A camera array captures plural component images which are combined into a single scene within which “panning” and “zooming” are performed. In one embodiment, each camera of the array is a fixed digital camera. The images from each camera are warped and blended such that the combined image is seamless with respect to each of the component images. Warping of the digital images is performed via pre-calculated, non-dynamic equations that are calculated based on a registration of the camera array. The process of registering each camera in the array is performed either manually, by selecting corresponding points or sets of points in two or more images, or automatically, by introducing a source object (a laser light source, for example) into a scene being captured by the camera array and registering the positions of the source object as it appears in each of the images. The warping equations are calculated from the registration data, and each scene captured by the camera array is warped and combined using those same equations. A scene captured by the camera array can be zoomed, or selectively steered to an area of interest. This zooming or steering, being done in the digital domain, is performed nearly instantaneously compared to cameras with mechanical zoom and steering functions.
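As one concrete reading of the pre-calculated warping equations, a planar homography can be estimated once per camera from the manually selected corresponding points and then reused for every captured scene; the sketch below works under that assumption, using OpenCV, rather than reproducing the patent's exact formulation.

import cv2
import numpy as np

def register_camera(ref_points, cam_points):
    # Estimate a fixed homography from one camera's image plane to the
    # reference plane, using at least four corresponding points selected
    # during registration of the camera array.
    H, _ = cv2.findHomography(np.float32(cam_points), np.float32(ref_points), cv2.RANSAC)
    return H

def warp_component(image, H, output_size):
    # Warp a component image into the combined scene using the same
    # pre-calculated equations (here, the homography) for every frame.
    return cv2.warpPerspective(image, H, output_size)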
Abstract:
A system is provided for locating a target, such as a person, relative to a projection screen. The system includes two infrared light sources for casting separate shadows of the target on a translucent screen of the type commonly used for back-projection displays. A sensitive video camera with an infrared filter over the lens that blocks all visible light is located behind the screen. This video camera captures a crisp silhouette for each of the shadows of the target. Image processing techniques detect the person's location as well as typical gestures, such as indicating or pointing to an area of the screen. This allows natural interaction with the display, for example, controlling a pointer or cursor on the screen by pointing at the desired area.
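The silhouette processing might look like the following sketch: threshold the infrared camera image to isolate the two dark shadows against the back-lit screen, take the topmost point of each silhouette as a rough proxy for an outstretched hand, and treat near-coincidence of the two tips as pointing at the screen. The threshold, the topmost-point heuristic, and the coincidence tolerance are illustrative assumptions, not details from the abstract.

import cv2
import numpy as np

def shadow_tips(ir_frame_gray, dark_thresh=60):
    # Shadows appear dark against the back-lit translucent screen, so an
    # inverted threshold isolates them; keep the two largest silhouettes.
    _, mask = cv2.threshold(ir_frame_gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    shadows = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    tips = []
    for c in shadows:
        pts = c.reshape(-1, 2)
        tips.append(tuple(pts[np.argmin(pts[:, 1])]))  # topmost point of the silhouette
    return tips

def pointing_location(tips, max_gap_px=30):
    # When the hand is close to the screen the two shadow tips nearly
    # coincide; report their midpoint as the pointed-at position.
    if len(tips) == 2:
        (x1, y1), (x2, y2) = tips
        if abs(x1 - x2) <= max_gap_px and abs(y1 - y2) <= max_gap_px:
            return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    return None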