Abstract:
A system is provided for generating video content with hue preservation in virtual production. The system comprises a memory for storing instructions and a processor configured to execute the instructions. Based on the executed instructions, the processor is configured to control the saturation of scene linear data based on a mapping of a first color gamut, corresponding to a first encoding format of raw data, to a second color gamut corresponding to a defined color space. The processor is further configured to determine standard dynamic range (SDR) video content in the defined color space based on the scene linear data. The hue of the SDR video content is preserved based on a scaling factor applied to the three primary color values that describe the first color gamut.
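As a concrete illustration of the scaling idea, the minimal sketch below scales all three primaries by one common factor, so the R:G:B ratios, and therefore the hue, are unchanged. The function name, the max-component rule, and the SDR peak value are assumptions for illustration, not details taken from the abstract.

```python
# Minimal sketch of hue-preserving scaling, assuming scene-linear RGB input.
# All names and the max-component rule are illustrative assumptions.

def hue_preserving_sdr(rgb, peak=1.0):
    """Scale all three primaries by one common factor so their
    ratios (and hence the hue) are unchanged."""
    r, g, b = rgb
    m = max(r, g, b)
    if m <= peak or m == 0.0:
        return rgb                      # already within the SDR range
    s = peak / m                        # single scaling factor
    return (r * s, g * s, b * s)        # same R:G:B ratio, hue preserved

print(hue_preserving_sdr((2.0, 1.0, 0.5)))  # -> (1.0, 0.5, 0.25)
```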
Abstract:
A system is provided for generating video content with hue preservation in virtual production. A processor determines data in a scene-based encoding format based on raw data received in a pre-defined format. The raw data includes a computer-generated background rendered on a rendering panel and a foreground object. Scene linear data is determined based on the data in the scene-based encoding format. The saturation of the scene linear data is controlled when a first color gamut corresponding to the pre-defined format is mapped to a second color gamut corresponding to a display-based encoding color space. Standard dynamic range (SDR) video content in the display-based encoding color space is determined based on the scene linear data. The hue of the SDR video content is preserved, whether the rendering panel is in focus or out of focus, based on a scaling factor applied to the three primary color values that describe the first color gamut.
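The following sketch shows one common way gamut mapping and saturation control can interact: a 3x3 matrix maps scene-linear RGB into the display gamut, and out-of-gamut results are pulled toward the achromatic axis along constant-hue lines. The matrix below is an illustrative stand-in, not a real camera-to-Rec.709 matrix, and the approach is a generic sketch rather than the described system's method.

```python
# Hedged sketch: map scene-linear RGB into a display gamut with a 3x3
# matrix, then reduce saturation toward the achromatic (luma) axis when
# the result falls outside [0, 1]. The matrix is illustrative only.

M = [
    [ 1.20, -0.15, -0.05],
    [-0.10,  1.15, -0.05],
    [ 0.00, -0.20,  1.20],
]

def mat_mul(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def map_with_saturation_control(rgb):
    out = mat_mul(M, rgb)
    y = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]  # Rec.709 luma
    # Find the largest saturation factor that keeps every channel in [0, 1];
    # moving along the line toward the luma value leaves the hue unchanged.
    k = 1.0
    for c in out:
        if c > 1.0 and c != y:
            k = min(k, (1.0 - y) / (c - y))
        elif c < 0.0 and c != y:
            k = min(k, (0.0 - y) / (c - y))
    return [y + k * (c - y) for c in out]

print(map_with_saturation_control([0.9, 0.1, 0.05]))
```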
Abstract:
An entertainment system provides data to a common screen (e.g., a cinema screen) and to personal immersive reality devices. For example, a cinematic data distribution server communicates with multiple immersive output devices, each configured to provide immersive output (e.g., a virtual reality output) based on a data signal. Each of the multiple immersive output devices is present within eyesight of a common display screen. The server configures the data signal based on digital cinematic master data that includes immersive reality data. The server transmits the data signal to the multiple immersive output devices contemporaneously with each other, and optionally contemporaneously with providing a coordinated audio-video signal for output via the common display screen and a shared audio system.
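A minimal sketch of contemporaneous delivery, assuming the server holds a handle to every connected immersive output device and pushes the same configured payload to all of them at once; asyncio and all names below stand in for whatever transport the actual system uses.

```python
import asyncio

async def send_to_device(device_id: int, payload: bytes):
    # stand-in for the real transport (e.g., a socket write to a headset)
    await asyncio.sleep(0)               # yield to the event loop
    print(f"device {device_id}: {len(payload)} bytes")

async def broadcast(device_ids, payload: bytes):
    # gather() dispatches every send together, so all devices receive the
    # data signal contemporaneously rather than one after another
    await asyncio.gather(*(send_to_device(d, payload) for d in device_ids))

asyncio.run(broadcast(range(3), b"immersive-frame"))
```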
Abstract:
Methods, apparatus, and systems geometrically match virtual reality (VR) or augmented reality (AR) output, contemporaneously with video output formatted for display on a 2D screen, by determining value sets that, when used in image processing, cause the off-screen angular field of view of at least one of an AR output object or a VR output object to have a fixed relationship to the angular field of view of the on-screen object or of the 2D screen. The AR/VR output object is output to an AR/VR display device, and the user experience is improved by the geometric matching between objects observed on the AR/VR display device and corresponding objects appearing on the 2D screen.
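The underlying geometry can be sketched with the standard angular-size formula θ = 2·atan(w / 2d). The sketch below, using assumed names rather than anything from the claims, chooses an AR object's width so that its angular field of view keeps a fixed ratio to that of the on-screen object.

```python
import math

def angular_size(width: float, distance: float) -> float:
    # angular field of view subtended by an object of the given width
    return 2.0 * math.atan(width / (2.0 * distance))

def match_ar_width(screen_obj_width, screen_distance, ar_distance, k=1.0):
    """Choose the AR object's width so its angular field of view keeps a
    fixed ratio k to the on-screen object's angular field of view."""
    target = k * angular_size(screen_obj_width, screen_distance)
    return 2.0 * ar_distance * math.tan(target / 2.0)

# a 1 m wide on-screen object seen from 5 m, rendered 2 m away in AR:
print(match_ar_width(1.0, 5.0, 2.0))   # ~0.4 m keeps the same angular size
```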
Abstract:
A specification defining allowable luma and chroma code-values is applied in a region-of-interest encoding method of a mezzanine compression process. The method may include analyzing an input image to determine regions within each image frame that contain code-values near the limits allowed by the specification. The region-of-interest method may then compress those regions with higher precision than the remaining regions of the image, whose code-values are not close to the legal limits.
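A hedged sketch of the region-of-interest test follows, assuming 8x8 blocks and 10-bit legal-range luma limits (64..940); the margin and block size are illustrative parameters, not values from the method, and a real encoder would then assign those flagged blocks higher-precision compression.

```python
import numpy as np

def near_limit_blocks(luma, lo=64, hi=940, margin=16, block=8):
    # flag 8x8 blocks whose code-values approach the allowable limits;
    # the encoder would compress flagged blocks with higher precision
    h, w = luma.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            tile = luma[by*block:(by+1)*block, bx*block:(bx+1)*block]
            mask[by, bx] = (tile.min() < lo + margin) or (tile.max() > hi - margin)
    return mask

frame = np.random.randint(64, 941, size=(16, 16))
print(near_limit_blocks(frame))
```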
Abstract:
Methods for digital content production and playback of an immersive stereographic video work provide or enhance interactivity of immersive entertainment using various playback and production techniques. “Immersive stereographic” may refer to virtual reality, augmented reality, or both. The methods may be implemented using specialized equipment for immersive stereographic playback or production. Aspects of the methods may be encoded as instructions in a computer memory, executable by one or more processors of the equipment to perform the aspects.
Abstract:
A sensor coupled to an AR/VR headset detects an eye convergence distance. A processor adjusts a focus distance for a virtual camera that determines rendering of a three-dimensional (3D) object for a display device of the headset, based on at least one of the eye convergence distance or a directed focus of attention for the VR content or the AR content.
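Under a simple symmetric-gaze assumption, the convergence distance follows from the interpupillary distance and the convergence angle, and can then drive the virtual camera's focus distance. The sketch below uses hypothetical names and a plain dictionary in place of any real headset API.

```python
import math

def convergence_distance(ipd_m: float, convergence_angle_rad: float) -> float:
    # distance to the point where the two gaze rays cross, assuming the
    # eyes converge symmetrically toward a point straight ahead
    return (ipd_m / 2.0) / math.tan(convergence_angle_rad / 2.0)

def update_virtual_camera_focus(camera: dict, ipd_m: float, angle_rad: float):
    camera["focus_distance"] = convergence_distance(ipd_m, angle_rad)
    return camera

cam = update_virtual_camera_focus({}, ipd_m=0.063, angle_rad=math.radians(3.0))
print(round(cam["focus_distance"], 2), "m")   # ~1.2 m
```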
Abstract:
A processor provides a simulated three-dimensional (3D) environment for a game or virtual reality (VR) experience, including controlling a characteristic parameter of a 3D object or character based on at least one of: an asynchronous event in a second game, feedback from multiple synchronous users of the VR experience, or a function driven by one or more variables reflecting a current state of at least one of the 3D environment, the game, or the VR experience. In another aspect, a sensor coupled to an AR/VR headset detects an eye convergence distance, and a processor adjusts a focus distance for a virtual camera that determines rendering of a 3D object for a display device of the headset, based on at least one of the eye convergence distance or a directed focus of attention for the VR content or the AR content.
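A minimal sketch of the parameter-control idea, assuming a hypothetical "aggression" parameter driven by an asynchronous cross-game event, averaged feedback from synchronous users, and a state-variable term; every name and weight here is illustrative, not from the abstract.

```python
def update_aggression(base, cross_game_event, user_feedback, state):
    value = base
    if cross_game_event:                       # asynchronous event in a second game
        value += 0.2
    if user_feedback:                          # synchronous user feedback in [-1, 1]
        value += 0.1 * (sum(user_feedback) / len(user_feedback))
    value += 0.05 * state.get("threat_level", 0.0)  # current-state variable term
    return max(0.0, min(1.0, value))           # clamp the characteristic parameter

print(update_aggression(0.5, True, [0.8, -0.2, 0.4], {"threat_level": 2.0}))
```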