Abstract:
A system and method for editing media content by embedding graphics, text, images, icons and the like into it. An HTML overlay graphics layer is used to view the additional content in relation to the original media content. Once the user has finished adding content, the additions are embedded into the original media content through a rendering process. The rendering may be carried out on multiple servers.
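A minimal sketch of such a workflow, assuming a Pillow-based compositor: the names OverlayItem, render_frame and dispatch are invented here, not taken from the abstract, and the split of frame ranges across render servers is only a placeholder for real job dispatch.

# Hypothetical sketch: composite user-added overlay items onto a frame
# and split the per-frame rendering work across several render servers.
from dataclasses import dataclass
from PIL import Image, ImageDraw

@dataclass
class OverlayItem:          # an element the user added on the HTML overlay layer
    kind: str               # "text" or "image"
    payload: str            # text string, or path to an image/icon file
    position: tuple         # (x, y) in frame coordinates

def render_frame(frame: Image.Image, items: list) -> Image.Image:
    """Embed the overlay items into one frame of the original media."""
    out = frame.copy()
    draw = ImageDraw.Draw(out)
    for item in items:
        if item.kind == "text":
            draw.text(item.position, item.payload, fill="white")
        elif item.kind == "image":
            icon = Image.open(item.payload).convert("RGBA")
            out.paste(icon, item.position, icon)
    return out

def dispatch(frames, servers):
    """Assign frame indices to render servers round-robin (stand-in for RPC)."""
    return {i: servers[i % len(servers)] for i in range(len(frames))}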
Abstract:
A system, method and computer program for processing at least one video sequence are provided, e.g. for transforming a video sequence to a different format, wherein the at least one video sequence comprises a plurality of time-successive image frames. The system is configured to provide a predetermined set of at least one feature, and to associate a weighted value with each feature. The system is further configured to provide a predetermined set of at least one imaging process, and to provide a processed video sequence in which the one or more imaging processes have been applied to the video sequence as a function of features detected in the video sequence.
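An illustrative sketch, under the assumption that features are scored per frame and the weighted score selects an imaging process; the detector and process functions below are placeholders, not the claimed algorithms.

# Illustrative only: weighted features decide which imaging process is applied.
import numpy as np

FEATURE_WEIGHTS = {"faces": 0.7, "motion": 0.3}       # predetermined feature set

def detect(feature: str, frame: np.ndarray) -> float:
    """Return a 0..1 score for a feature; real detectors would go here."""
    return float(frame.mean() / 255.0)                 # placeholder score

def sharpen(frame):   return np.clip(frame * 1.1, 0, 255)
def stabilize(frame): return frame                     # placeholder process

IMAGING_PROCESSES = {"sharpen": sharpen, "stabilize": stabilize}

def process_sequence(frames):
    out = []
    for frame in frames:
        score = sum(w * detect(f, frame) for f, w in FEATURE_WEIGHTS.items())
        # choose an imaging process as a function of the weighted feature score
        name = "sharpen" if score > 0.5 else "stabilize"
        out.append(IMAGING_PROCESSES[name](frame))
    return out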
Abstract:
A computing device for facilitating access to items of content is configured to enable communication with a companion device. The companion device includes a user interface with a touch panel. The computing device is configured to determine whether touch event data received from the companion device corresponds to a particular gesture. The computing device causes a guide to be presented on the display upon determining that the received touch event data corresponds to a swipe gesture, and causes a transition from the selected item of content to another item of content upon determining that the received touch event data corresponds to a perpendicular swipe gesture.
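A rough sketch of the gesture handling, assuming touch event data arrives as (x, y) samples from the companion touch panel; the pixel threshold and the ui object are invented for illustration and are not the device's actual logic.

# Classify companion-device touch samples into orthogonal swipes and act on them.
def classify(touch_events, threshold=50):
    """touch_events: list of (x, y) samples from the companion touch panel."""
    dx = touch_events[-1][0] - touch_events[0][0]
    dy = touch_events[-1][1] - touch_events[0][1]
    if abs(dx) >= abs(dy) and abs(dx) > threshold:
        return "swipe"                  # e.g. horizontal swipe
    if abs(dy) > abs(dx) and abs(dy) > threshold:
        return "perpendicular_swipe"    # e.g. vertical swipe
    return None

def handle(gesture, ui):                # ui is a hypothetical display controller
    if gesture == "swipe":
        ui.show_guide()                 # present the content guide
    elif gesture == "perpendicular_swipe":
        ui.next_item()                  # transition to another item of content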
Abstract:
A computing device for facilitating access to items of content is configured to enable communication with a companion device. The companion device includes a user interface with a touch panel. The computing device is configured to determine whether touch event data received from the companion device corresponds to a particular gesture. The computing device causes a transition from a selected item of content to another item of content accessible through a selected media application upon determining that the touch event data corresponds to a first level associated with a particular gesture, and causes a transition from the selected media application to another media application upon determining that the touch event data corresponds to a second level associated with the particular gesture.
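A hedged illustration of the two-level variant, assuming the gesture "level" is derived from its travel distance; the thresholds and device methods below are assumptions, not the claimed implementation.

# Map the level of a gesture to an in-app change or a switch of media application.
LEVEL_1, LEVEL_2 = 100, 300             # assumed pixel thresholds

def gesture_level(touch_events):
    dist = abs(touch_events[-1][0] - touch_events[0][0])
    if dist >= LEVEL_2:
        return 2
    if dist >= LEVEL_1:
        return 1
    return 0

def dispatch(level, device):            # device is a hypothetical controller
    if level == 1:
        device.next_content_item()      # another item in the selected media app
    elif level == 2:
        device.next_media_application() # another media application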
Abstract:
A computer implemented method of crowd-sourced video generation, comprising: by a server computer in communication with a plurality of remote client devices, receiving a feed of video captured by a camera, storing at least a portion of the received video feed on a memory of the server computer, receiving at least one tag from a respective one of the client devices, determining an occurrence of an event type based on at least one of the received tags, and forwarding a sub-portion of the stored video feed portion for further processing, the forwarded sub-portion having a video length predefined for the event type of the determined occurrence.
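A simplified server-side sketch; the buffer size, event types and clip lengths are invented for illustration and are not taken from the abstract.

# Roll a buffer of the incoming feed, accept client tags, and forward a clip
# whose length is predefined for the detected event type.
from collections import deque

CLIP_SECONDS = {"goal": 30, "foul": 15}        # length predefined per event type
FPS = 25

class CrowdSourcedServer:
    def __init__(self):
        self.buffer = deque(maxlen=5 * 60 * FPS)   # rolling store of the feed
        self.tags = []

    def on_frame(self, frame):
        self.buffer.append(frame)

    def on_tag(self, client_id, tag):
        self.tags.append((client_id, tag))
        event = self.determine_event(tag)
        if event:
            return self.forward(event)

    def determine_event(self, tag):
        return tag if tag in CLIP_SECONDS else None

    def forward(self, event):
        n = CLIP_SECONDS[event] * FPS
        return list(self.buffer)[-n:]     # sub-portion with the predefined length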
Abstract:
A method for the sequential presentation of images with enhanced functionality, and an apparatus therefor, are described herein, which allow an audio track to be associated with one or more images in such a way that the association can be reused whenever the user wants to create a presentation. A distinct feature of the method and/or apparatus is that the step of associating a first audio track with at least one first image and the step of associating a second audio track with at least one second image provide for storing a first identification code of the first audio track in first metadata of said at least one first image, and storing a second identification code of the second audio track in second metadata of said at least one second image.
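A minimal sketch of the association step, using an in-memory metadata dictionary; an actual implementation would write the identification code into the image file's metadata (e.g. EXIF or XMP), and the field names here are assumptions.

# Store the audio track's identification code in each image's metadata,
# then reuse the stored codes when building a presentation.
def associate(image_meta: dict, track_id: str) -> dict:
    image_meta["audio_track_id"] = track_id
    return image_meta

first_image  = associate({"file": "IMG_0001.jpg"}, "track-001")
second_image = associate({"file": "IMG_0002.jpg"}, "track-002")

def build_presentation(images):
    """The stored codes let any later presentation re-use the association."""
    return [(img["file"], img["audio_track_id"]) for img in images]

print(build_presentation([first_image, second_image]))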
Abstract:
A real-time video targeting and exploration system allows users to pause, step into, and explore modeled worlds of scenes in video. The system may leverage network-based computation resources to render and stream new video content from the models to clients with low latency. A user may pause a video, step into a scene, and interactively change viewing positions and angles to move through or explore the scene, as well as interactively explore objects associated with the scene. The user may step into and explore the scene within the scope of the model to discover parts of the scene that are not visible in the original video, as well as objects within the scene that may not have been readily observable in the original video. In addition, at least some content of a video may be replaced with content targeted at particular viewers according to viewers' profiles or preferences.
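A purely illustrative client request, assuming a hypothetical HTTP endpoint on the network-based render service; the URL shape and payload fields are assumptions, not the system's real API.

# Pause a video, then ask the render service for a frame of the modeled scene
# from a new viewpoint chosen by the viewer.
import json, urllib.request

def request_view(server_url, video_id, timestamp, position, angle):
    payload = json.dumps({
        "video_id": video_id,
        "paused_at": timestamp,        # where the viewer paused
        "camera_position": position,   # new viewpoint inside the modeled scene
        "camera_angle": angle,
    }).encode()
    req = urllib.request.Request(server_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()             # rendered frame or stream chunk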
Abstract:
A method is disclosed comprising causation of capture of a stream of visual information, sending of at least a portion of the stream of visual information to a separate apparatus, receipt of information indicative of a stream segment deletion input that identifies a segment of the stream of visual information for deletion, and sending of a stream segment deletion directive to the separate apparatus based, at least in part, on the stream segment deletion input.
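A hedged sketch of the message flow between the capturing device and the separate apparatus; the message formats and the FakeLink transport are invented for illustration.

# Stream chunks to the separate apparatus; on a deletion input, send a directive
# that identifies the segment of the stream to delete.
def send_stream_portion(link, chunk):
    link.send({"type": "stream_chunk", "data": chunk})

def on_deletion_input(link, segment_start, segment_end):
    """segment_start/segment_end identify the stream segment marked for deletion."""
    link.send({
        "type": "stream_segment_deletion_directive",
        "start": segment_start,
        "end": segment_end,
    })

class FakeLink:                          # stand-in transport to the separate apparatus
    def send(self, msg): print("->", msg)

link = FakeLink()
send_stream_portion(link, b"\x00" * 16)
on_deletion_input(link, segment_start=12.0, segment_end=18.5)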
Abstract:
This document describes techniques and apparatuses for small-screen movie-watching using a viewport. These techniques enable viewers to experience movies and other media programs using a small screen as a viewport to the movie rather than dramatically compressing or cropping the movie to fit the small screen. A viewer may select whatever portion of the movie imagery he or she desires to experience through the small screen at a size sufficient to perceive details of plot elements and an environment in which the plot elements interact. Thus, the viewer may follow plot elements central to the plot while also exploring the environment that provides context for these plot elements.
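An illustrative viewport calculation, assuming the viewer pans a small-screen window over a full-resolution movie frame instead of watching a downscaled or cropped version; the sizes in the example are arbitrary.

# Return the crop box for the region the viewer has panned to, clamped to the frame.
def viewport(frame_width, frame_height, view_w, view_h, pan_x, pan_y):
    left = max(0, min(pan_x, frame_width - view_w))
    top  = max(0, min(pan_y, frame_height - view_h))
    return (left, top, left + view_w, top + view_h)

# e.g. a 4K frame viewed through a 640x360 phone viewport panned to the right
print(viewport(3840, 2160, 640, 360, pan_x=3600, pan_y=900))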