Abstract:
Certain embodiments teach a variety of experience or “sentio” codecs, as well as methods and systems for enabling an experience platform, together with a Quality of Experience (QoE) engine that allows the sentio codec to select a suitable encoding engine or device. The sentio codec is capable of encoding and transmitting data streams that correspond to participant experiences with a variety of different dimensions and features. As will be appreciated, the following description provides one paradigm for understanding the multi-dimensional experience available to the participants, as implemented using a sentio codec. There are many suitable ways of describing, characterizing and implementing the sentio codec and experience platform contemplated herein.
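The QoE-driven selection described above can be illustrated with a minimal sketch. All names, fields, and the selection heuristic here are assumptions for illustration, not the patented method: the idea is simply that the engine filters candidate encoders by measured network conditions and prefers the highest-fidelity one that the link can sustain.

```python
# Hypothetical sketch of a QoE engine choosing among candidate encoding
# engines. Field names and the ranking rule are illustrative assumptions.

def select_encoder(candidates, bandwidth_kbps, latency_ms):
    """Return the highest-fidelity encoder the measured link can sustain."""
    viable = [c for c in candidates
              if c["min_bandwidth_kbps"] <= bandwidth_kbps
              and c["max_latency_ms"] >= latency_ms]
    if not viable:
        return None
    return max(viable, key=lambda c: c["fidelity"])

# Illustrative candidate engines, from rich video down to a command stream.
engines = [
    {"name": "video_hd", "min_bandwidth_kbps": 4000, "max_latency_ms": 100, "fidelity": 3},
    {"name": "video_sd", "min_bandwidth_kbps": 1000, "max_latency_ms": 150, "fidelity": 2},
    {"name": "commands", "min_bandwidth_kbps": 100, "max_latency_ms": 300, "fidelity": 1},
]

choice = select_encoder(engines, bandwidth_kbps=1500, latency_ms=120)
# On a 1.5 Mbps link with 120 ms latency, the SD video engine is chosen.
```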
Abstract:
A method and system for providing computer-generated output, in particular graphical output. The system includes a network configured to carry digital information, and a server in communication with the network, the server configured to execute an application and an output capturing and encoding engine module. The application provides a graphical output. The output capturing and encoding engine module is configured to intercept the graphical output from the application on the server, to convert the graphical output into at least one of graphical commands and video codec data, and to transmit the converted output over the network. The system also includes a client in communication with the server over the network, the client configured to execute a graphics and video decoding and rendering engine module. The graphics and video decoding and rendering engine module is configured to, responsive to receiving the transmitted converted output, render the graphical output. The graphics and video decoding and rendering engine module is further configured to intercept user inputs at the client and to transmit the intercepted user inputs to the output capturing and encoding engine module.
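The server/client split described above can be sketched as two cooperating modules: one that intercepts and converts graphical output on the server, and one that renders it and forwards user input on the client. Class names, the packet format, and the command-versus-video decision rule below are illustrative assumptions, not the claimed implementation.

```python
# Minimal sketch of the capture/encode and decode/render architecture.
# All names and data shapes are hypothetical.

class OutputCapturingEncoder:
    """Server side: intercepts graphical output and converts it for transport."""

    def __init__(self, prefer_commands=True):
        self.prefer_commands = prefer_commands

    def intercept(self, graphical_output):
        # Convert to graphical commands when available, else fall back to
        # (placeholder) video codec data.
        if self.prefer_commands and graphical_output.get("commands"):
            return {"type": "commands", "payload": graphical_output["commands"]}
        return {"type": "video", "payload": graphical_output.get("frame", b"")}


class DecodingRenderer:
    """Client side: renders received output and forwards intercepted user input."""

    def __init__(self):
        self.rendered = []
        self.sent_inputs = []

    def receive(self, packet):
        # Dispatch to a command-replay or video-decode path by packet type.
        self.rendered.append(packet["type"])

    def intercept_input(self, user_input, send):
        # Capture the input locally, then transmit it back to the server.
        self.sent_inputs.append(user_input)
        send(user_input)


encoder = OutputCapturingEncoder()
renderer = DecodingRenderer()

packet = encoder.intercept({"commands": ["draw_rect", "fill"]})
renderer.receive(packet)
renderer.intercept_input({"event": "click", "x": 10, "y": 20}, send=lambda i: None)
```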
Abstract:
The present invention contemplates an interactive event experience capable of coupling and strategically synchronizing multiple (and varying) venues with live events happening at one or more venues. For example, the system equalizes between local participants and remote ones, and between local shared screens and remote ones, thus keeping the experience of events synchronized. In one event, a host participant creates and initiates the event, which involves inviting participants from the host participant's social network and programming the event either by selecting a predefined event or by defining the specific aspects of an event. In one specific instance, an event may have: a first layer with live audio and video dimensions; a video chat layer with interactive, graphics and ensemble dimensions; a group rating layer with interactive, ensemble, and I/O commands dimensions; a panoramic layer with 360-degree pan and I/O commands dimensions; an ad/gaming layer with game mechanics, interaction, and I/O commands dimensions; and a chat layer with interactive and ensemble dimensions. In addition to the primary portion of the event experience, the event can have pre-event and post-event activities.
Abstract:
The present invention contemplates a variety of improved methods and systems for providing an experience platform, as well as sentio or experience codecs, and experience agents for supporting the experience platform. The experience platform may be provided by a service provider to enable an experience provider to compose and direct a participant experience. The service provider monetizes the experience by charging the experience provider and/or the participants for services. The participant experience can involve one or more experience participants. The experience provider can create an experience with a variety of dimensions and features. As will be appreciated, the following description provides one paradigm for understanding the multi-dimensional experience available to the participants. There are many suitable ways of describing, characterizing and implementing the experience platform contemplated herein.
Abstract:
The techniques discussed herein contemplate methods and systems for providing, for example, interactive virtual experiences that are initiated or controlled using user gestures. In embodiments, the techniques provide for gestures performed by users holding devices to be recognized and processed in a cloud computing environment such that the gestures produce a predefined desired result. According to one embodiment, a server communicates with a first device in a cloud computing environment, wherein the first device can detect surrounding devices, and an application program is executable by the server, wherein the application program is controlled by the first device and the output of the application program is directed by the server to one of the devices detected by the first device.
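The embodiment above can be illustrated with a small sketch in which a server tracks the devices detected by a controlling device and, on a gesture, directs the application's output to one of them. The class, gesture name, and routing rule are hypothetical illustrations, not the patented technique.

```python
# Illustrative sketch: a server routes application output to one of the
# devices detected by the controlling device. All names are hypothetical.

class GestureServer:
    def __init__(self):
        self.detected = {}       # controller id -> surrounding device ids
        self.output_target = {}  # controller id -> device receiving output

    def register_detected(self, controller, devices):
        # The first device reports the surrounding devices it has detected.
        self.detected[controller] = list(devices)

    def handle_gesture(self, controller, gesture):
        # As an assumed example, a "flick" gesture directs the application's
        # output to the first detected device.
        if gesture == "flick" and self.detected.get(controller):
            self.output_target[controller] = self.detected[controller][0]
        return self.output_target.get(controller)


server = GestureServer()
server.register_detected("phone-1", ["tv-livingroom", "tablet-7"])
target = server.handle_gesture("phone-1", "flick")
```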
Abstract:
Described herein are the architecture of such a system; algorithms for time synchronization during a multiway conferencing session; methods to mitigate network imperfections, such as jitter, in order to improve synchronization; methods of introducing buffering delays to create handicaps for players with faster connections; methods that help players stay synchronized (such as a synchronized metronome during a music conferencing session); and methods for synchronized recording and live delivery of synchronized data to an audience watching the distributed interaction live over the Internet.
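The buffering-delay handicap mentioned above can be sketched simply: if every participant's path is padded up to the latency of the slowest one, all streams arrive at the same effective time. The function name and data shapes are assumptions for illustration.

```python
# Sketch of delay equalization: add buffering to faster links so every
# participant experiences the same effective latency. Illustrative only.

def equalizing_delays(link_latencies_ms, extra_ms=0):
    """Return per-participant buffering that pads all paths to the slowest."""
    target = max(link_latencies_ms.values()) + extra_ms
    return {who: target - lat for who, lat in link_latencies_ms.items()}

delays = equalizing_delays({"alice": 20, "bob": 80, "carol": 50})
# Alice (fastest link) is buffered the most; Bob (slowest) not at all,
# so all three streams line up at an effective 80 ms.
```

An `extra_ms` margin could additionally absorb jitter, at the cost of overall latency.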