Abstract:
A processor provides a simulated three-dimensional (3D) environment for a game or virtual reality (VR) experience, including controlling a characteristic parameter of a 3D object or character based on at least one of: an asynchronous event in a second game, feedback from multiple synchronous users of the VR experience, or a function driven by one or more variables reflecting a current state of at least one of the 3D environment, the game, or the VR experience. In another aspect, a sensor coupled to an AR/VR headset detects an eye convergence distance. A processor adjusts a focus distance for a virtual camera that determines rendering of a three-dimensional (3D) object for a display device of the headset, based on at least one of the eye convergence distance or a directed focus of attention for at least one of the VR content or the AR content.
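The focus-adjustment aspect can be illustrated with a short sketch. Everything below (the triangulation model, function names, and the smoothing step) is an assumption for illustration, not detail taken from the abstract:

```python
import math
from typing import Optional
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    focus_distance_m: float = 2.0  # depth at which rendering is kept in sharp focus

def convergence_distance(ipd_m: float, vergence_angle_rad: float) -> float:
    """Distance at which the two gaze rays intersect (hypothetical sensor model).

    Uses a simple isosceles-triangle model, d = (ipd / 2) / tan(angle / 2),
    where ipd is the interpupillary distance and angle is the total
    convergence angle reported by the headset's eye-tracking sensor.
    """
    return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

def update_focus(camera: VirtualCamera, sensed_m: float,
                 directed_focus_m: Optional[float] = None,
                 smoothing: float = 0.2) -> None:
    """Move the virtual camera's focus toward the sensed convergence distance,
    unless the content directs the user's attention to a specific depth."""
    target = directed_focus_m if directed_focus_m is not None else sensed_m
    camera.focus_distance_m += smoothing * (target - camera.focus_distance_m)

cam = VirtualCamera()
d = convergence_distance(ipd_m=0.063, vergence_angle_rad=math.radians(3.0))
update_focus(cam, d)
print(f"sensed {d:.2f} m -> focus {cam.focus_distance_m:.2f} m")
```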
Abstract:
An entertainment system provides data to a common screen (e.g., cinema screen) and personal immersive reality devices. For example, a cinematic data distribution server communicates with multiple immersive output devices each configured for providing immersive output (e.g., a virtual reality output) based on a data signal. Each of the multiple immersive output devices is present within eyesight of a common display screen. The server configures the data signal based on digital cinematic master data that includes immersive reality data. The server transmits the data signal to the multiple immersive output devices contemporaneously with each other, and optionally contemporaneously with providing a coordinated audio-video signal for output via the common display screen and shared audio system.
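As one possible realization of the contemporaneous transmission, the sketch below broadcasts timestamped immersive frames over UDP multicast so every headset in the theater receives each frame at roughly the presentation time of the matching common-screen frame. The multicast address, packet format, and frame rate are assumptions, not details from the abstract:

```python
import socket
import struct
import time

MULTICAST_GROUP = ("239.0.0.1", 5004)  # hypothetical multicast address/port

def broadcast_immersive_frames(frames, fps=24.0):
    """Send each immersive-data frame to all subscribed output devices at once,
    stamped with the presentation time of the matching common-screen frame."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    start = time.monotonic()
    for i, payload in enumerate(frames):
        pts = i / fps  # presentation timestamp shared with the cinema projector
        # Sleep until this frame's presentation time so the headsets and the
        # common screen stay contemporaneous.
        delay = start + pts - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        sock.sendto(struct.pack("!d", pts) + payload, MULTICAST_GROUP)

broadcast_immersive_frames([b"frame-0", b"frame-1", b"frame-2"])
```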
Abstract:
A computer-generated scene is generated as background for a live action set, for display on a panel of light emitting diodes (LEDs). Characteristics of light output by the LED panel are controlled such that the computer-generated scene rendered on the LED panel, when captured by a motion picture camera, has high fidelity to the original computer-generated scene. Consequently, the scene displayed on the screen more closely simulates the rendered scene from the viewpoint of the camera, and the image captured by the camera appears more realistic and/or truer to the creative intent.
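One common way to control the panel's light output for camera fidelity is to pre-correct the rendered image with the inverse of a measured panel-to-camera color transform. The sketch below assumes a single 3x3 matrix (made-up values) captures that transform; a production pipeline would instead use measured emission spectra, per-panel calibration, and camera response curves:

```python
import numpy as np

# Hypothetical 3x3 matrix modeling how the LED panel's emitted light maps
# through the motion-picture camera's color filters (values are made up;
# in practice this would be measured with test charts on the panel).
PANEL_TO_CAMERA = np.array([
    [0.95, 0.08, 0.02],
    [0.05, 0.90, 0.06],
    [0.01, 0.07, 0.93],
])
# Pre-correct the render with the inverse, so panel emission followed by
# camera capture reproduces the original computer-generated scene.
CORRECTION = np.linalg.inv(PANEL_TO_CAMERA)

def precorrect(rendered_rgb: np.ndarray) -> np.ndarray:
    """rendered_rgb: (H, W, 3) linear-light image destined for the LED wall."""
    return np.clip(rendered_rgb @ CORRECTION.T, 0.0, 1.0)

# Midtone test frame (extreme saturated colors would clip and break fidelity).
frame = 0.15 + 0.7 * np.random.rand(4, 4, 3)
captured = precorrect(frame) @ PANEL_TO_CAMERA.T   # what the camera records
print(np.allclose(captured, frame, atol=1e-6))     # True: high fidelity
```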
Abstract:
A computer-implemented method used in conjunction with mixed reality gear (e.g., a headset) includes imaging a real scene encompassing a user wearing a mixed reality output apparatus. The method includes determining data describing a real context of the real scene based on the imaging, for example, by identifying or classifying objects, lighting, sound, or persons in the scene. The method includes selecting, from a content library and based on the data describing the real context, a set of content enabling rendering of at least one virtual object, using any of various selection algorithms. The method includes rendering the virtual object in the mixed reality session by the mixed reality output apparatus, optionally based on the data describing the real context (“context parameters”). An apparatus is configured to perform the method using hardware, firmware, and/or software.
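A toy version of the selection step might score library items against the sensed context parameters. The tags, items, and overlap-count scoring below are illustrative assumptions, not the patent's algorithm:

```python
from dataclasses import dataclass

@dataclass
class LibraryItem:
    name: str
    tags: frozenset  # contexts the virtual object suits, e.g. {"indoor", "dim"}

# Hypothetical content library; names and tags are illustrative only.
LIBRARY = [
    LibraryItem("candle_ghost", frozenset({"indoor", "dim", "quiet"})),
    LibraryItem("sun_dragon", frozenset({"outdoor", "bright"})),
    LibraryItem("street_drummer", frozenset({"outdoor", "crowd", "loud"})),
]

def select_content(context_params: set, top_n: int = 1):
    """Rank library items by overlap with the sensed context parameters
    (object classes, lighting, sound, persons) and return the best matches."""
    scored = sorted(LIBRARY, key=lambda it: len(it.tags & context_params),
                    reverse=True)
    return scored[:top_n]

# Context parameters derived from imaging the scene around the headset wearer.
context = {"indoor", "dim", "quiet", "one_person"}
print([item.name for item in select_content(context)])  # ['candle_ghost']
```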
Abstract:
Methods, apparatus, and systems for geometric matching of virtual reality (VR) or augmented reality (AR) output contemporaneously with video output formatted for display on a 2D screen include determining value sets that, when used in image processing, cause the off-screen angular field of view of an AR or VR output object to have a fixed relationship to the angular field of view of a corresponding on-screen object, of the 2D screen, or both. The AR/VR output object is output to an AR/VR display device, and the user experience is improved by the geometric matching between objects observed on the AR/VR display device and corresponding objects appearing on the 2D screen.
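The fixed angular relationship can be sketched with the standard angular-size relation θ = 2·atan(w / 2d). The function names and the fixed-ratio parameterization below are assumptions for illustration:

```python
import math

def angular_size(width_m: float, distance_m: float) -> float:
    """Angle subtended at the viewer by an object of the given width."""
    return 2.0 * math.atan(width_m / (2.0 * distance_m))

def match_offscreen_width(onscreen_width_m: float, screen_distance_m: float,
                          render_distance_m: float, ratio: float = 1.0) -> float:
    """Width to give the AR/VR object, rendered at render_distance_m, so its
    angular field of view keeps a fixed ratio to the on-screen object's."""
    target_angle = ratio * angular_size(onscreen_width_m, screen_distance_m)
    return 2.0 * render_distance_m * math.tan(target_angle / 2.0)

# A 2 m character on a screen 10 m away, continued off-screen 1.5 m from the
# viewer with the same angular size:
w = match_offscreen_width(2.0, 10.0, 1.5)
print(f"render width: {w:.3f} m")  # ~0.300 m
```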
Abstract:
Methods for digital content production and playback of an immersive stereographic video work provide or enhance interactivity of immersive entertainment using various playback and production techniques. “Immersive stereographic” may refer to virtual reality, augmented reality, or both. The methods may be implemented using specialized equipment for immersive stereographic playback or production. Aspects of the methods may be encoded as instructions in a computer memory, executable by one or more processors of the equipment.