Abstract:
A method for monitoring a display monitors data to be output by the monitored display. The monitored data is analyzed to generate one or more content identifiers. The content identifiers are compared to a set of rules to determine whether the monitored data should be blocked from being output or whether an alert should be transmitted to a supervisor device. One or more supervisor devices may be used to respond to alerts and may also be used to control the output of the monitored display.
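As an illustration only, the following Python sketch shows the kind of rule check this abstract describes; the Rule structure, identifier strings, and action names are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    identifier: str   # content identifier the rule matches, e.g. "violence"
    action: str       # "block" or "alert"

def evaluate(content_identifiers, rules):
    """Compare content identifiers against the rule set.

    Returns the set of actions to take: block the output,
    alert a supervisor device, or both.
    """
    actions = set()
    for rule in rules:
        if rule.identifier in content_identifiers:
            actions.add(rule.action)
    return actions

# Hypothetical example: identifiers extracted from the monitored data.
identifiers = {"violence", "social_media"}
rules = [Rule("violence", "block"), Rule("social_media", "alert")]
print(evaluate(identifiers, rules))   # e.g. {'block', 'alert'} (set order may vary)
```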
Abstract:
A method and apparatus for providing an opportunistic crowd based service platform is disclosed. A mobile sensor device is identified based on a current location and/or other qualities, such as intrinsic properties, previous sensor data, or demographic data of an associated user of the mobile sensor device. Data is collected from the mobile sensor device. The data collected from the mobile sensor device is aggregated with data collected from other sensor devices, and content generated based on the aggregated data is delivered to a user device.
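A minimal sketch, assuming sensor devices are represented as simple records with hypothetical 'lat', 'lon', and 'reading' fields, of the select-then-aggregate step the abstract describes; the distance heuristic and the averaging are illustrative stand-ins, not the patented method.

```python
from statistics import mean

def select_sensors(sensors, target_location, max_distance_km=1.0):
    """Pick mobile sensor devices near the location of interest."""
    def close_enough(s):
        # Crude planar distance check; a real system would use geodesics.
        dlat = s["lat"] - target_location[0]
        dlon = s["lon"] - target_location[1]
        return (dlat ** 2 + dlon ** 2) ** 0.5 * 111.0 <= max_distance_km

    return [s for s in sensors if close_enough(s)]

def aggregate(selected):
    """Aggregate readings collected from the selected sensor devices."""
    return mean(s["reading"] for s in selected) if selected else None

# Hypothetical mobile sensor devices and a target location.
sensors = [
    {"id": "a", "lat": 40.710, "lon": -74.000, "reading": 21.5},
    {"id": "b", "lat": 40.712, "lon": -74.001, "reading": 22.0},
    {"id": "c", "lat": 41.000, "lon": -73.500, "reading": 19.0},
]
nearby = select_sensors(sensors, (40.711, -74.0005))
print(aggregate(nearby))   # average of readings from devices 'a' and 'b'
```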
Abstract:
A method, apparatus, and computer readable medium for identifying a person in an image includes an image analyzer. The image analyzer determines the content of an image such as a person, location, and object shown in the image. A person in the image may be identified based on the content and event data stored in a database. Event data includes information concerning events and related people, locations, and objects determined from other images and information. Identification metadata is generated and linked to each analyzed image and comprises information determined during image analysis. Tags for images are generated based on identification metadata. The event database can be queried to identify particular people, locations, objects, and events depending on a user's request.
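The matching step this abstract describes could look roughly like the following Python sketch; the image-content dictionary, event-record fields, and tag format are hypothetical placeholders for whatever the image analyzer and event database actually produce.

```python
def identify_people(image_content, event_db):
    """Match detected image content against stored event data.

    'image_content' is a hypothetical dict produced by an image analyzer,
    e.g. {"location": "beach", "objects": ["surfboard"]}.
    'event_db' is a list of event records relating people, locations, and objects.
    """
    candidates = set()
    for event in event_db:
        if (event["location"] == image_content.get("location")
                and set(event["objects"]) & set(image_content.get("objects", []))):
            candidates.update(event["people"])
    return candidates

def build_metadata(image_id, image_content, people):
    """Generate identification metadata and tags linked to the analyzed image."""
    return {"image": image_id,
            "content": image_content,
            "people": sorted(people),
            "tags": sorted(people) + image_content.get("objects", [])}

# Hypothetical event database and analyzed image content.
event_db = [{"event": "birthday", "location": "beach",
             "objects": ["surfboard"], "people": ["Alice", "Bob"]}]
content = {"location": "beach", "objects": ["surfboard", "cake"]}
print(build_metadata("img_001.jpg", content, identify_people(content, event_db)))
```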
Abstract:
A brokering device manages multimedia information and includes an interface device having access to a network and a multimedia service provider. The interface device enables selection of multimedia information from the network and provides the selected multimedia information to a plurality of locations without requiring the user to specify a protocol associated with the multimedia information.
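One way to picture the protocol-hiding behavior is the sketch below, in which a hypothetical catalog maps each multimedia item to the protocol it requires, so the user only names the item and the destinations; the protocol handlers here are illustrative and not part of the disclosure.

```python
# Hypothetical protocol handlers; the abstract does not name concrete protocols.
def deliver_http(content, location): print(f"HTTP -> {location}: {content}")
def deliver_rtsp(content, location): print(f"RTSP -> {location}: {content}")

HANDLERS = {"http": deliver_http, "rtsp": deliver_rtsp}

def broker(selection, locations, catalog):
    """Deliver the selected multimedia item to every location.

    The catalog records which protocol each item needs, so the caller
    never has to specify one.
    """
    content, protocol = catalog[selection]
    for location in locations:
        HANDLERS[protocol](content, location)

catalog = {"news": ("news_stream", "rtsp"), "photos": ("album.zip", "http")}
broker("news", ["living_room_tv", "kitchen_tablet"], catalog)
```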
Abstract:
A computer implemented method is disclosed, the method including but not limited to detecting an event of interest in video conference data for a plurality of video conference participants and notifying an end user of the event of interest. A computer readable medium is also disclosed containing a computer program for performing the method. A computer implemented method is also disclosed for receiving, at an end user device, a notification of an event of interest in a video teleconference, the method including but not limited to receiving at the end user device, from a server, a notification indicating detection of the event of interest in video conference data from the video teleconference for a plurality of video conference participants; and sending data from the end user device to the server requesting a transcription of comments from the speaker in the video teleconference.
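A rough Python sketch of the detect-and-notify flow, assuming conference data arrives as hypothetical per-frame records with 'speaker' and 'text' fields and that an event of interest is simply a keyword match; the actual detection logic is not specified by the abstract.

```python
def detect_events(frames, keywords):
    """Scan per-frame conference data for events of interest.

    Each frame is a hypothetical dict like {"speaker": "Alice", "text": "..."}.
    An event fires when a keyword appears in a participant's speech.
    """
    events = []
    for i, frame in enumerate(frames):
        if any(k in frame["text"].lower() for k in keywords):
            events.append({"frame": i, "speaker": frame["speaker"]})
    return events

def notify(events, send):
    """Push a notification for each detected event to the end user device."""
    for event in events:
        send(f"Event of interest: {event['speaker']} at frame {event['frame']}")

frames = [{"speaker": "Alice", "text": "Let's review the budget."},
          {"speaker": "Bob", "text": "The deadline moved to Friday."}]
notify(detect_events(frames, ["deadline"]), print)
```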
Abstract:
Disclosed herein are systems, methods, and computer-readable media for transmedia video bookmarks, the method comprising receiving a first place marker and a second place marker for a segment of video media, extracting metadata from the video media between the first and second place markers, normalizing the extracted metadata, storing the normalized metadata, first place marker, and second place marker as a video bookmark, and retrieving the media represented by the video bookmark upon request from a user. Systems can aggregate video bookmarks from multiple sources and refine the first place marker and second place marker based on the aggregated video bookmarks. Metadata can be extracted by analyzing text or audio annotations. Metadata can be normalized by generating a video thumbnail representing the video media between the first place marker and the second place marker. Multiple video bookmarks may be searchable by metadata or visually by video thumbnail.
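The following sketch illustrates, under the assumption that place markers are plain timestamps in seconds, how bookmarks aggregated from multiple sources might have their markers refined; the median here is an illustrative choice, not the refinement the disclosure actually uses.

```python
from dataclasses import dataclass, field
from statistics import median

@dataclass
class Bookmark:
    start: float                 # first place marker (seconds)
    end: float                   # second place marker (seconds)
    metadata: dict = field(default_factory=dict)

def refine(bookmarks):
    """Refine the place markers from bookmarks aggregated across sources."""
    return Bookmark(start=median(b.start for b in bookmarks),
                    end=median(b.end for b in bookmarks))

# Hypothetical bookmarks for the same segment collected from three users.
aggregated = [Bookmark(10.0, 42.0, {"note": "goal"}),
              Bookmark(11.5, 41.0),
              Bookmark(9.5, 43.5)]
print(refine(aggregated))   # Bookmark(start=10.0, end=42.0, metadata={})
```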
Abstract:
Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for representing media assets. The method includes receiving an original media asset and derivative versions of the original media asset and associated descriptors, determining a lineage to each derivative version that traces to the original media asset, generating a version history tree of the original media asset representing the lineage to each derivative version and associated descriptors from the original media asset, and presenting at least part of the version history tree to a user. In one aspect, the method further includes receiving a modification to one associated descriptor and updating associated descriptors for related derivative versions with the received modification. The original media asset and the derivative versions of the original media asset can share a common identifying mark. Descriptors can include legal documentation, licensing information, creation time, creation date, actors' names, director, producer, lens aperture, and position data.
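A small Python sketch of a version history tree, assuming descriptors are key-value pairs; propagating a modified descriptor to derivative versions mirrors the update step the abstract mentions, while the class and method names are hypothetical.

```python
class AssetNode:
    """One version of a media asset in the version history tree."""

    def __init__(self, name, descriptors=None):
        self.name = name
        self.descriptors = dict(descriptors or {})
        self.children = []

    def derive(self, name, descriptors=None):
        """Create a derivative version whose lineage traces to this node."""
        child = AssetNode(name, {**self.descriptors, **(descriptors or {})})
        self.children.append(child)
        return child

    def update_descriptor(self, key, value):
        """Apply a descriptor modification and propagate it to derivatives."""
        self.descriptors[key] = value
        for child in self.children:
            child.update_descriptor(key, value)

original = AssetNode("master.mov", {"director": "J. Doe"})
web_cut = original.derive("web_cut.mp4", {"resolution": "720p"})
original.update_descriptor("license", "CC-BY")
print(web_cut.descriptors)   # includes the propagated 'license' descriptor
```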
Abstract:
Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for synthesizing a virtual window. The method includes receiving an environment feed, selecting video elements of the environment feed, displaying the selected video elements on a virtual window in a window casing, selecting non-video elements of the environment feed, and outputting the selected non-video elements coordinated with the displayed video elements. Environment feeds can include synthetic and natural elements. The method can further toggle the virtual window between displaying the selected elements and being transparent. The method can track user motion and adapt the displayed selected elements on the virtual window based on the tracked user motion. The method can further detect a user in close proximity to the virtual window, receive an interaction from the detected user, and adapt the displayed selected elements on the virtual window based on the received interaction.
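As a loose illustration, the sketch below composes one update of a virtual window from an environment feed containing video and non-video elements and shifts the view by a tracked user offset; the feed format and output devices are assumptions, not details from the disclosure.

```python
def render_virtual_window(feed, transparent=False, user_offset=0.0):
    """Compose one update of the virtual window from an environment feed.

    'feed' is a hypothetical dict such as
    {"video": ["skyline", "clouds"], "audio": ["birdsong"], "wind": 0.3}.
    'user_offset' stands in for tracked user motion and shifts the view.
    """
    if transparent:
        # Toggled to transparent: show nothing and output no non-video elements.
        return {"display": None, "outputs": []}

    video = [f"{element} (shifted {user_offset:+.1f})" for element in feed["video"]]
    non_video = [("audio", a) for a in feed.get("audio", [])]
    if "wind" in feed:
        non_video.append(("fan", feed["wind"]))
    return {"display": video, "outputs": non_video}

feed = {"video": ["skyline", "clouds"], "audio": ["birdsong"], "wind": 0.3}
print(render_virtual_window(feed, user_offset=0.5))
```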