Abstract:
A computer-implemented method used in conjunction with mixed reality gear (e.g., a headset) includes imaging a real scene encompassing a user wearing a mixed reality output apparatus. The method includes determining data describing a real context of the real scene based on the imaging, for example by identifying or classifying objects, lighting, sounds, or persons in the scene. The method includes selecting, from a content library, a set of content enabling rendering of at least one virtual object, based on the data describing the real context, using various selection algorithms. The method includes rendering the virtual object in a mixed reality session by the mixed reality output apparatus, optionally based on the data describing the real context (“context parameters”). An apparatus is configured to perform the method using hardware, firmware, and/or software.
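As a rough illustration of the selection step, the minimal sketch below ranks library items by how well their tags overlap the objects classified in the real scene. All names (ContextParams, ContentItem, select_content) and the scoring rule are hypothetical; the abstract does not specify a data model or a particular selection algorithm.

```python
# Hypothetical sketch of context-driven content selection.
from dataclasses import dataclass, field

@dataclass
class ContextParams:
    """Data describing a real context, derived from imaging the scene."""
    objects: set[str] = field(default_factory=set)  # classified objects
    lighting: str = "daylight"                       # e.g. "daylight", "dim"
    persons: int = 0                                 # persons detected

@dataclass
class ContentItem:
    """Library entry enabling rendering of one virtual object."""
    name: str
    tags: set[str]  # real-scene contexts the item is suited for

def select_content(library: list[ContentItem], ctx: ContextParams) -> list[ContentItem]:
    """One possible selection algorithm: rank items by how many of their
    tags match objects classified in the real scene."""
    scored = [(len(item.tags & ctx.objects), item) for item in library]
    return [item for score, item in sorted(scored, key=lambda s: -s[0]) if score > 0]

# Usage: a scene containing a table selects the "vase" object, not the "ghost".
library = [ContentItem("vase", {"table", "shelf"}), ContentItem("ghost", {"hallway"})]
ctx = ContextParams(objects={"table", "window"}, lighting="daylight", persons=1)
print([item.name for item in select_content(library, ctx)])  # ['vase']
```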
Abstract:
An augmented reality (AR) output device or virtual reality (VR) output device (an immersive output device) is worn by a user and includes one or more sensors positioned to detect actions performed by the user. A processor provides a data signal configured for the AR or VR output device, causing the immersive output device to provide AR output or VR output via a stereoscopic display device. The data signal encodes audio-video data. The processor controls the pace of scripted events defined by a narrative in the AR output or the VR output, based on output from the one or more sensors indicating actions performed by the user of the AR or VR output device. The audio-video data may be packaged in a non-transitory computer-readable medium with additional content that is coordinated with the defined narrative and is configured for providing an alternative output, such as 2D video output or stereoscopic 3D output.
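A minimal sketch of the pacing idea follows: scripted narrative events advance only when sensor output indicates the user has performed the expected action. The event names and the poll_sensors stand-in are illustrative assumptions, not part of the abstract.

```python
# Hypothetical sketch: pacing scripted events by detected user actions.
import random
import time
from collections import deque

# Ordered narrative: (expected user action, scripted event to release).
script = deque([
    ("open_door", "Play: door creaks open"),
    ("look_left", "Play: shadow crosses the hall"),
    ("step_forward", "Play: floorboard snaps"),
])

def poll_sensors() -> str | None:
    """Stand-in for headset sensor output; a real device would report
    head pose, gaze direction, or controller events."""
    return random.choice([None, "open_door", "look_left", "step_forward"])

while script:
    expected_action, event = script[0]
    if poll_sensors() == expected_action:  # user performed the action
        print(event)                        # release the next scripted event
        script.popleft()
    time.sleep(0.1)                         # pacing loop tick
```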
Abstract:
A smart media player configures interactive media customized for passengers temporarily sharing a common conveyance (e.g., an automobile or other vehicle), using programmed methods. The methods include identifying profile data for each of the passengers and trip data for the common conveyance. The smart media player may then select an interactive media title for the passengers as a group, based on at least one of the profile data or the trip data; the title may include one or more of a variety of games to be played with the other passengers in the vehicle. The interactive media title may be configured for output by the smart media player while the passengers are sharing the common conveyance. The media player plays the interactive media title in the common conveyance, enabling interaction with the passengers during the period of shared travel.
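The sketch below shows one way the group selection could work, under stated assumptions: profiles carry genre preferences, trip data carries a duration, and the player picks a title every passenger can enjoy that fits the ride. The scoring rule and field names are illustrative; the abstract only says selection uses profile and/or trip data.

```python
# Hypothetical sketch: selecting one interactive title for a group of passengers.
def select_title(profiles: list[dict], trip: dict, catalog: list[dict]) -> dict:
    """Pick one interactive media title for all passengers as a group."""
    # Genres every passenger in the group likes.
    group_genres = set.intersection(*(set(p["genres"]) for p in profiles))
    candidates = [
        t for t in catalog
        if t["genre"] in group_genres                  # suits every passenger
        and t["duration_min"] <= trip["duration_min"]  # fits the trip
        and t["min_players"] <= len(profiles)          # playable by the group
    ]
    # Prefer the longest title that still fits the ride.
    return max(candidates, key=lambda t: t["duration_min"])

profiles = [{"genres": ["trivia", "karaoke"]}, {"genres": ["trivia", "puzzle"]}]
trip = {"duration_min": 35}
catalog = [
    {"name": "Road Trivia", "genre": "trivia", "duration_min": 30, "min_players": 2},
    {"name": "Epic Quest",  "genre": "trivia", "duration_min": 90, "min_players": 2},
]
print(select_title(profiles, trip, catalog)["name"])  # Road Trivia
```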
Abstract:
An entertainment system provides data to a common screen (e.g., a cinema screen) and to personal immersive reality devices. For example, a cinematic data distribution server communicates with multiple immersive output devices, each configured to provide immersive output (e.g., virtual reality output) based on a data signal. Each of the multiple immersive output devices is present within eyesight of a common display screen. The server configures the data signal based on digital cinematic master data that includes immersive reality data. The server transmits the data signal to the multiple immersive output devices contemporaneously with each other, and optionally contemporaneously with providing a coordinated audio-video signal for output via the common display screen and a shared audio system.
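One way to picture the contemporaneous distribution is a server-side loop that fans each frame of immersive data out to every connected headset while the same tick drives the common screen. The device class, frame structure, and threading approach below are assumptions for illustration only.

```python
# Hypothetical sketch: contemporaneous distribution to headsets and a common screen.
from concurrent.futures import ThreadPoolExecutor

class ImmersiveDevice:
    def __init__(self, device_id: str):
        self.device_id = device_id

    def receive(self, frame: bytes) -> None:
        print(f"{self.device_id}: rendering immersive frame ({len(frame)} bytes)")

def distribute(master_frames: list[bytes], devices: list[ImmersiveDevice]) -> None:
    with ThreadPoolExecutor() as pool:
        for frame in master_frames:
            # Fan the immersive portion out to every headset at once ...
            futures = [pool.submit(d.receive, frame) for d in devices]
            # ... while the same tick drives the coordinated common-screen output.
            print("common screen: showing coordinated audio-video frame")
            for f in futures:
                f.result()  # wait so all devices stay in step

devices = [ImmersiveDevice("headset-1"), ImmersiveDevice("headset-2")]
distribute([b"frame-0-data", b"frame-1-data"], devices)
```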
Abstract:
A method or apparatus for delivering audio programming such as music to listeners may include identifying, capturing, and applying a listener's audiometric profile to transform audio content so that the listener hears the content similarly to how it was originally heard by the creative producer of the content. An audio testing tool may be implemented as a software application to identify and capture the listener's audiometric profile. A signal processor may operate an algorithm for processing source audio content, obtaining an identity and an audiometric reference profile of the creative producer from metadata associated with the content. The signal processor may then provide audio output based on the difference between the listener's and the creative producer's audiometric profiles.
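A rough sketch of the compensation idea: apply a per-band gain equal to the difference between the producer's and listener's hearing thresholds. Treating audiometric profiles as dB thresholds per frequency band is an assumption; the abstract does not specify the profile format or the processing algorithm.

```python
# Hypothetical sketch: per-band compensation from two audiometric profiles.
FREQ_BANDS_HZ = [250, 500, 1000, 2000, 4000, 8000]

def compensation_gains(listener_db: list[float], producer_db: list[float]) -> list[float]:
    """Gain (dB) to apply per band so the listener hears the content
    closer to how the creative producer heard it."""
    return [lis - pro for lis, pro in zip(listener_db, producer_db)]

# Example: the listener has a 20 dB loss at 4 kHz relative to the producer,
# so the signal processor would boost that band by 20 dB.
listener = [5, 5, 10, 15, 25, 30]  # listener's hearing thresholds (dB HL)
producer = [5, 5, 5, 5, 5, 10]     # producer's audiometric reference profile (dB HL)
for hz, gain in zip(FREQ_BANDS_HZ, compensation_gains(listener, producer)):
    print(f"{hz:>5} Hz: +{gain} dB")
```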
Abstract:
A processor provides a simulated three-dimensional (3D) environment for a game or virtual reality (VR) experience, including controlling a characteristic parameter of a 3D object or character based on at least one of: an asynchronous event in a second game, feedback from multiple synchronous users of the VR experience, or a function driven by one or more variables reflecting a current state of at least one of the 3D environment, the game, or the VR experience. In another aspect, a sensor coupled to an AR/VR headset detects an eye convergence distance. A processor adjusts a focus distance for a virtual camera that determines rendering of a 3D object for a display device of the headset, based on at least one of the eye convergence distance or a directed focus of attention for at least one of the VR content or the AR content.
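The focus-adjustment aspect can be sketched with simple vergence geometry: estimate the convergence distance from the vergence angle the eye sensor reports, then blend it with any directed focus of attention. The blend weight, the fixed interpupillary distance, and the function names are illustrative assumptions.

```python
# Hypothetical sketch: virtual-camera focus from eye convergence.
import math

IPD_M = 0.063  # assumed typical interpupillary distance, in meters

def convergence_distance(vergence_angle_rad: float) -> float:
    """Distance at which the two gaze rays cross, from the vergence angle."""
    return (IPD_M / 2) / math.tan(vergence_angle_rad / 2)

def camera_focus(vergence_angle_rad: float,
                 directed_focus_m: float | None,
                 blend: float = 0.5) -> float:
    """Focus distance for the virtual camera that renders the 3D object,
    blending the measured convergence with a directed focus of attention."""
    measured = convergence_distance(vergence_angle_rad)
    if directed_focus_m is None:
        return measured
    return blend * measured + (1 - blend) * directed_focus_m

# Eyes converged on a point ~2 m away, director cue at 1 m -> ~1.5 m focus.
angle = 2 * math.atan((IPD_M / 2) / 2.0)
print(f"{camera_focus(angle, directed_focus_m=1.0):.2f} m")  # ~1.50 m
```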