Abstract:
Embodiments of the present invention introduce a user navigation interface that allows a user to monitor and navigate video streams captured from multiple cameras. The interface integrates the video streams with a semantic layout of the environment into a 3-D immersive scene and renders the streams on multiple displays. It conveys the spatial distribution of the cameras as well as their fields of view and allows a user to navigate freely or switch among preset views. This description is not intended to be a complete description of, or limit the scope of, the invention. Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims.
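To give a concrete sense of how a display can convey a camera's position and field of view within such a 3-D scene, here is a minimal geometric sketch (Python/NumPy), not the patented rendering pipeline; the function name display_quad and all numeric values are illustrative assumptions.

    import numpy as np

    def display_quad(cam_pos, view_dir, up, h_fov_deg, aspect=4.0 / 3.0, distance=2.0):
        """Corners of a video billboard placed along a camera's view direction,
        sized to match its horizontal field of view at the given distance."""
        view_dir = view_dir / np.linalg.norm(view_dir)
        right = np.cross(view_dir, up)
        right = right / np.linalg.norm(right)
        true_up = np.cross(right, view_dir)
        half_w = distance * np.tan(np.radians(h_fov_deg) / 2.0)  # half-width spanned by the FOV
        half_h = half_w / aspect
        center = np.asarray(cam_pos, dtype=float) + distance * view_dir  # centered on the view axis
        return [center + sx * half_w * right + sy * half_h * true_up
                for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1))]

    # Example: a camera 1.5 m above the origin looking along +x with a 60-degree horizontal FOV.
    corners = display_quad(np.array([0.0, 0.0, 1.5]), np.array([1.0, 0.0, 0.0]),
                           np.array([0.0, 0.0, 1.0]), 60.0)

Rendering a live video stream as a texture on such a quad lets the viewer read off both where the camera sits and what it can see.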
Abstract:
Embodiments of the present invention introduce a novel technique to analyze and monitor video streams captured from multiple cameras. It highlights the foreground region of the video streams via local alpha blending and displays the videos in an immersive 3-D environment. The spatial arrangement of the displays can be generated by multi-dimensional scaling of the amount of simultaneous motion across different video streams. This description is not intended to be a complete description of, or limit the scope of, the invention. Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims.
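As a rough illustration of the layout step, the following sketch (Python, using scikit-learn's MDS rather than any particular implementation from the invention) derives 2-D display positions from the amount of simultaneous motion across streams; it assumes per-camera motion magnitudes have already been extracted, and the function name layout_from_motion is hypothetical.

    import numpy as np
    from sklearn.manifold import MDS

    def layout_from_motion(motion, n_dims=2):
        """motion: (n_cameras, n_frames) array of per-frame motion magnitudes."""
        # Binarize each camera's activity and count how often pairs of cameras move together.
        active = (motion > motion.mean(axis=1, keepdims=True)).astype(float)
        co_motion = active @ active.T
        co_motion = co_motion / co_motion.max()
        # More simultaneous motion means the corresponding displays should sit closer together.
        dissimilarity = 1.0 - co_motion
        np.fill_diagonal(dissimilarity, 0.0)
        mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=0)
        return mds.fit_transform(dissimilarity)

Cameras whose views tend to capture the same activity at the same time end up near each other in the resulting arrangement.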
Abstract:
Embodiments of the present invention enable an image-based controller to control and manipulate objects with simple point-and-capture operations via images captured by a camera-enhanced mobile device. Powered by this technology, a user is able to complete many complicated control tasks via guided control of objects without utilizing laser pointers, IR transmitters, or mini-projectors; bar code tagging and customized wallpaper are likewise not needed for environment control. This description is not intended to be a complete description of, or limit the scope of, the invention. Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims.
Abstract:
Embodiments of the present invention describe a collaborative framework for mining surveillance videos to detect abnormal events, which introduces a two-stage training process to alleviate the problem of high false alarm rates. In the first stage, unsupervised clustering is performed on segments of the video streams, and the resulting set of candidate abnormal events is combined with user feedback to generate a clean training set. In the second stage, the clean training set is used to train a more precise model for the analysis of normal events, and the motion detection results from multiple cameras can be cross-validated and combined. This description is not intended to be a complete description of, or limit the scope of, the invention. Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims.
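The two-stage training can be pictured with the following minimal sketch (Python), which assumes a feature vector per video segment and substitutes generic scikit-learn components (KMeans, GaussianMixture) for whatever models the embodiments actually use; two_stage_train and user_confirms_abnormal are hypothetical names.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture

    def two_stage_train(features, n_clusters=8, user_confirms_abnormal=None):
        # Stage 1: unsupervised clustering; rare clusters are candidate abnormal events.
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
        counts = np.bincount(labels, minlength=n_clusters)
        candidates = np.where(counts < 0.05 * len(features))[0]
        # User feedback removes confirmed abnormal segments, leaving a clean training set.
        keep = np.ones(len(features), dtype=bool)
        for c in candidates:
            idx = np.where(labels == c)[0]
            if user_confirms_abnormal is None or user_confirms_abnormal(idx):
                keep[idx] = False  # without feedback, treat every candidate as abnormal
        # Stage 2: train a more precise model of normal activity on the clean set.
        model = GaussianMixture(n_components=n_clusters, random_state=0)
        model.fit(features[keep])
        return model

Segments that the second-stage model scores as unlikely can then be flagged, and flags from multiple cameras cross-validated before an alarm is raised.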
Abstract:
A method for exchanging information in a shared interactive environment, comprising selecting a first physical device in a first live video image wherein the first physical device has information associated with it, causing the information to be transferred to a second physical device in a second live video image wherein the transfer is brought about by manipulating a visual representation of the information, wherein the manipulation includes interacting with the first live video image and the second live video image, wherein the first physical device and the second physical device are part of the shared interactive environment, and wherein the first physical device and the second physical device are not the same.
Abstract:
Video recording technology is utilized to enable business process investigation in an unobtrusive manner. Several cameras are situated, each having a defined field of view. For each camera, a region of interest (ROI) within the field of view is defined, and a background image is determined for each ROI. Motion within the ROI is detected by comparing each frame to the background image. The video recording can then be segmented and indexed according to the motion detection.
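A minimal sketch of this style of ROI motion detection and segmentation (Python/OpenCV), assuming a first-frame background that is slowly updated; the threshold values and the function name detect_motion_segments are illustrative assumptions.

    import cv2
    import numpy as np

    def detect_motion_segments(frames, roi, threshold=25, min_fraction=0.01):
        """frames: iterable of BGR frames; roi: (x, y, w, h) region of interest.
        Returns (start_frame, end_frame) index pairs where motion was detected."""
        x, y, w, h = roi
        background = None
        segments, start = [], None
        for i, frame in enumerate(frames):
            gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
            if background is None:
                background = gray.astype(np.float32)  # first frame seeds the background
                continue
            diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
            moving = (diff > threshold).mean() > min_fraction  # fraction of changed ROI pixels
            cv2.accumulateWeighted(gray, background, 0.01)     # slowly adapt to lighting changes
            if moving and start is None:
                start = i                         # a motion segment begins
            elif not moving and start is not None:
                segments.append((start, i))       # index the recording by this segment
                start = None
        if start is not None:
            segments.append((start, i))
        return segments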
Abstract:
Systems and methods are provided for indicating the status of a teleconference by determining an approximate delay time and providing a status signal in view of the determined approximate delay time. An approximate delay time is approximately the amount of time that elapses between an occurrence at a first time, which is captured into an occurrence signal by a source unit, and the experiencing of that occurrence at a second time, after the occurrence signal is received by at least one receiving unit.
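As a simple illustration (not the claimed method), an approximate delay time could be assembled from component latencies and then mapped to a status signal; every function name, latency component, and threshold below is an assumed example value.

    def approximate_delay(encode_s, network_one_way_s, jitter_buffer_s, decode_render_s):
        """Approximate time, in seconds, between an occurrence at the source and
        its being experienced at a receiving unit."""
        return encode_s + network_one_way_s + jitter_buffer_s + decode_render_s

    def status_signal(delay_s, ok=0.3, warn=1.0):
        # A status the interface can show so participants know how "live" the feed is.
        if delay_s <= ok:
            return "near real-time"
        if delay_s <= warn:
            return "slight delay"
        return "significant delay"

    # Example: 50 ms encode, 120 ms one-way network, 100 ms jitter buffer, 40 ms decode.
    print(status_signal(approximate_delay(0.05, 0.12, 0.10, 0.04)))  # -> "slight delay" (0.31 s)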
Abstract:
A system and method for authoring a media presentation including a media presentation environment representation having a portion defined as a hot spot associated with a media presentation device. Various embodiments include a hyper-slide listing portion, a media presentation authoring portion, and/or a media presentation device listing portion. Various embodiments include an integrated presentation authoring preview environment. The method includes selecting a physical device for a presentation unit in the media presentation environment, manipulating a visual representation of the presentation unit, recording a display of the presentation unit, and previewing the presentation in an augmented reality environment, a virtual reality environment, or both. Various embodiments operate with a plurality of types of media presentation devices and a plurality of each type of device.