LATENCY REDUCTION FOR IMMERSIVE CONTENT PRODUCTION SYSTEMS

    Publication Number: US20240096035A1

    Publication Date: 2024-03-21

    Application Number: US18371402

    Filing Date: 2023-09-21

    IPC Classification: G06T19/00 G06T1/20 G06T7/20

    CPC Classification: G06T19/006 G06T1/20 G06T7/20

    Abstract: A method of content production may include receiving tracking information for a camera with a frustum configured to capture images of a subject in an immersive environment. A first image of a virtual environment corresponding to the frustum may be rendered, using a first rendering process based on the tracking information, so as to be perspective-correct when displayed on the displays and viewed through the camera. A second image of the virtual environment may also be rendered using a second rendering process for a specific display. The first image and the second image may be rendered in parallel. The second image and a portion of the first image may be composited together to generate a composite image, where the portion of the first image may correspond to a portion of the display captured by the frustum.
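
    The parallel-render-then-composite flow described above can be pictured with a minimal sketch. The helper names (render_frustum_view, render_display_view, composite), the rectangular frustum region, and the use of Python threads are assumptions made for illustration, not the patented implementation:

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    H, W = 1080, 1920  # hypothetical resolution of one LED display panel

    def render_frustum_view(tracking):
        # Stand-in for the first rendering process: the perspective-correct
        # view for the camera frustum, driven by the camera tracking data.
        return np.full((H, W, 3), 0.8, dtype=np.float32)

    def render_display_view():
        # Stand-in for the second rendering process: the image rendered for a
        # specific display, independent of the camera frustum.
        return np.full((H, W, 3), 0.2, dtype=np.float32)

    def composite(display_img, frustum_img, region):
        # Paste the frustum render over the display render inside the region of
        # the display that the camera frustum actually covers.
        y0, y1, x0, x1 = region
        out = display_img.copy()
        out[y0:y1, x0:x1] = frustum_img[y0:y1, x0:x1]
        return out

    tracking = {"position": (0.0, 1.7, -3.0), "rotation": (0.0, 0.0, 0.0)}
    with ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(render_frustum_view, tracking)   # first rendering process
        second = pool.submit(render_display_view)            # second rendering process
        frame = composite(second.result(), first.result(), region=(200, 880, 400, 1500))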

    Light capture device
    Invention Grant

    Publication Number: US11762481B2

    Publication Date: 2023-09-19

    Application Number: US17716474

    Filing Date: 2022-04-08

    Abstract: In some implementations, an apparatus may include a housing enclosing circuitry that may include a processor and a memory, the housing forming a handgrip. In addition, the apparatus may include a plurality of light sensors arranged in a particular configuration, each of the plurality of light sensors coupled to an exterior of the housing via a sensor arm. Also, the apparatus may include one or more controls mounted on the exterior of the housing and electrically coupled to the circuitry. The apparatus can include one or more antennas mounted on the exterior of the housing, and a transmitter connected to the circuitry and electrically connected to the one or more antennas to send data from the apparatus via a wireless protocol. The apparatus can also include a mount for mounting an electronic device to the housing, the electronic device configured to execute an application for an immersive content generation system.
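
    The abstract above describes hardware, but a rough data-structure sketch may help picture what such a handheld device could stream to an immersive content generation system. Every class, field, and method name below (LightSample, CapturePacket, to_wire) is invented for illustration and is not taken from the patent:

    import json
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class LightSample:
        sensor_id: int                    # which of the arm-mounted light sensors
        rgb: Tuple[float, float, float]
        intensity: float                  # hypothetical unit choice (e.g. lux)

    @dataclass
    class CapturePacket:
        device_id: str
        timestamp_ms: int
        samples: List[LightSample]
        controls: dict                    # states of the housing-mounted controls

        def to_wire(self) -> bytes:
            # Serialize for transmission over the device's wireless link.
            payload = {
                "device": self.device_id,
                "t": self.timestamp_ms,
                "samples": [(s.sensor_id, s.rgb, s.intensity) for s in self.samples],
                "controls": self.controls,
            }
            return json.dumps(payload).encode("utf-8")

    packet = CapturePacket("light-wand-01", 1234,
                           [LightSample(0, (0.9, 0.8, 0.7), 420.0)],
                           {"trigger": True})
    wire = packet.to_wire()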

    SYSTEM FOR DELIVERABLES VERSIONING IN AUDIO MASTERING

    Publication Number: US20220345234A1

    Publication Date: 2022-10-27

    Application Number: US17236817

    Filing Date: 2021-04-21

    Abstract: Some implementations of the disclosure relate to using a model trained on mixing console data of sound mixes to automate the process of sound mix creation. In one implementation, a non-transitory computer-readable medium has executable instructions stored thereon that, when executed by a processor, cause the processor to perform operations comprising: obtaining a first version of a sound mix; extracting first audio features from the first version of the sound mix; obtaining mixing metadata; automatically calculating, with a trained model, mixing console features using at least the mixing metadata and the first audio features; and deriving a second version of the sound mix using at least the mixing console features calculated by the trained model.
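
    As a rough illustration of the pipeline above, the sketch below extracts toy audio features, asks a stand-in "trained model" for mixing console features, and applies them to derive a second version. The feature choices, the single-gain console output, and all function names are assumptions made for illustration only:

    import numpy as np

    def extract_audio_features(mix, sr):
        # Toy features for the first mix version: RMS level and spectral centroid.
        rms = float(np.sqrt(np.mean(mix ** 2)))
        spectrum = np.abs(np.fft.rfft(mix))
        freqs = np.fft.rfftfreq(mix.size, 1.0 / sr)
        centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
        return np.array([rms, centroid])

    def trained_model(audio_features, mixing_metadata):
        # Stand-in for the trained model: maps features plus metadata to
        # mixing-console features (here reduced to one broadband gain in dB).
        gain_db = 20.0 * np.log10(mixing_metadata["target_rms"] / (audio_features[0] + 1e-9))
        return {"gain_db": gain_db}

    def derive_second_version(mix, console_features):
        # Apply the predicted console features to produce the second version.
        return mix * (10.0 ** (console_features["gain_db"] / 20.0))

    sr = 48_000
    first_version = 0.1 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
    features = extract_audio_features(first_version, sr)
    console = trained_model(features, {"target_rms": 0.25})
    second_version = derive_second_version(first_version, console)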

    RENDERING IMAGES FOR NON-STANDARD DISPLAY DEVICES

    Publication Number: US20210407174A1

    Publication Date: 2021-12-30

    Application Number: US16917636

    Filing Date: 2020-06-30

    Abstract: A method of rendering an image includes receiving information of a virtual camera, including a camera position and a camera orientation defining a virtual screen; receiving information of a target screen, including a target screen position and a target screen orientation defining a plurality of pixels, each respective pixel corresponding to a respective UV coordinate on the target screen; for each respective pixel of the target screen: determining a respective XY coordinate of a corresponding point on the virtual screen based on the camera position, the camera orientation, the target screen position, the target screen orientation, and the respective UV coordinate; tracing one or more rays from the virtual camera through the corresponding point on the virtual screen toward a virtual scene; and estimating a respective color value for the respective pixel based on incoming light from virtual objects in the virtual scene that intersect the one or more rays.
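
    The per-pixel mapping above lends itself to a small geometric sketch. The version below simplifies the abstract's two-step mapping (UV on the target screen to an XY point on the virtual screen, then a ray) by tracing straight through the world-space point on the target screen, and uses a single sphere as the virtual scene; all names and numbers are illustrative assumptions:

    import numpy as np

    def uv_to_world(uv, screen_pos, screen_right, screen_up, size):
        # Map a pixel's UV coordinate on the target screen to a 3D point on that screen.
        u, v = uv
        return screen_pos + (u - 0.5) * size[0] * screen_right + (v - 0.5) * size[1] * screen_up

    def trace(cam_pos, point, sphere_center, sphere_radius, sphere_color):
        # Trace one ray from the virtual camera through the screen point and
        # return a flat color if it hits the sphere, black otherwise.
        d = point - cam_pos
        d = d / np.linalg.norm(d)
        oc = cam_pos - sphere_center
        b = 2.0 * np.dot(oc, d)
        c = np.dot(oc, oc) - sphere_radius ** 2
        return sphere_color if b * b - 4.0 * c >= 0.0 else np.zeros(3)

    cam_pos = np.array([0.0, 0.0, 0.0])
    screen_pos = np.array([0.0, 0.0, 2.0])                      # target screen 2 m ahead
    right, up = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    sphere = (np.array([0.0, 0.0, 5.0]), 1.0, np.array([1.0, 0.4, 0.2]))

    width, height = 64, 36
    image = np.zeros((height, width, 3))
    for j in range(height):
        for i in range(width):
            uv = ((i + 0.5) / width, (j + 0.5) / height)
            p = uv_to_world(uv, screen_pos, right, up, size=(3.2, 1.8))
            image[j, i] = trace(cam_pos, p, *sphere)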

    Color correction for immersive content production systems

    Publication Number: US11200752B1

    Publication Date: 2021-12-14

    Application Number: US16999893

    Filing Date: 2020-08-21

    IPC Classification: G06T15/50 G06T19/20 G06T7/90

    Abstract: In at least one embodiment, an immersive content generation system may receive a first user input that defines a three-dimensional (3D) volume within a performance area. In at least one embodiment, the system may capture a plurality of images of an object in the performance area using a camera, wherein the object is at least partially surrounded by one or more displays presenting images of a virtual environment. In at least one embodiment, the system may receive a second user input to adjust a color value of a virtual image of the object as displayed in the images of the virtual environment. In at least one embodiment, the system may perform a color correction pass for the displayed images of the virtual environment. In at least one embodiment, the system may generate content based on the plurality of captured images that are corrected via the color correction pass.
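
    A minimal sketch of the color-correction pass described above, with the user-defined 3D volume reduced to a 2D mask over the displayed frame; the per-channel gain/offset model and all names are illustrative assumptions rather than the patented pipeline:

    import numpy as np

    def color_correction_pass(frame, mask, gain, offset):
        # Apply a per-channel gain and offset only where the mask is set,
        # leaving the rest of the displayed virtual environment untouched.
        corrected = frame.astype(np.float32)
        corrected[mask] = corrected[mask] * gain + offset
        return np.clip(corrected, 0.0, 1.0)

    frame = np.random.rand(720, 1280, 3).astype(np.float32)  # captured display image
    mask = np.zeros((720, 1280), dtype=bool)
    mask[200:500, 300:900] = True                             # projection of the user-defined volume
    graded = color_correction_pass(frame, mask,
                                   gain=np.array([1.1, 1.0, 0.9]),  # second user input
                                   offset=0.02)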

    Simulation stream pipeline
    Invention Grant

    Publication Number: US10825225B1

    Publication Date: 2020-11-03

    Application Number: US16359732

    Filing Date: 2019-03-20

    Abstract: Some implementations of the disclosure are directed to a pipeline that enables real-time engines such as gaming engines to leverage high-quality simulations generated offline via film-grade simulation systems. In one implementation, a method includes: obtaining simulation data and skeletal mesh data of a character, the simulation data and skeletal mesh data including the character in the same rest pose; importing the skeletal mesh data into a real-time rendering engine; and using at least the simulation data and the imported skeletal mesh data to derive from the simulation data a transformed simulation vertex cache that is usable by the real-time rendering engine during runtime to be skinned in place of the rest pose.
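
    One way to picture the vertex-cache step described above is sketched below, under the assumption that the offline simulation frames and the skeletal mesh share the same rest pose, so per-frame positions can be stored as offsets from that pose and substituted for it at runtime; the function names and data shapes are illustrative, not the patented method:

    import numpy as np

    def build_vertex_cache(sim_frames, rest_pose):
        # Re-express each simulation frame as offsets from the shared rest pose
        # (frames x vertices x 3), forming the transformed vertex cache.
        return np.stack([frame - rest_pose for frame in sim_frames])

    def apply_at_runtime(rest_pose, vertex_cache, frame_index):
        # Engine-side use: substitute the cached simulation result for the rest
        # pose before the mesh is skinned.
        return rest_pose + vertex_cache[frame_index]

    rest_pose = np.random.rand(1000, 3).astype(np.float32)            # shared rest pose
    sim_frames = [rest_pose + 0.01 * f * np.random.rand(1000, 3)      # offline, film-grade sim
                  for f in range(24)]
    cache = build_vertex_cache(sim_frames, rest_pose)
    posed = apply_at_runtime(rest_pose, cache, frame_index=12)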

    Constrained virtual camera control

    Publication Number: US10762599B2

    Publication Date: 2020-09-01

    Application Number: US14194253

    Filing Date: 2014-02-28

    Inventor: Steve Sullivan

    IPC Classification: G06T3/20 G06F3/0481 G06T19/00

    Abstract: A method is described that includes receiving, from a first device, input used to select a first object in a computer-generated environment. The first device has at least two degrees of freedom with which to control the selection of the first object. The method also includes removing, in response to the selection of the first object, at least two degrees of freedom previously available to a second device used to manipulate a second object in the computer-generated environment. The removed degrees of freedom correspond to the at least two degrees of freedom of the first device and specify an orientation of the second object relative to the selected first object. Additionally, the method includes receiving, from the second device, input including movements within the reduced degrees of freedom used to manipulate a position of the second object while maintaining the specified orientation relative to the selected first object.
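
    The degree-of-freedom bookkeeping described above might look roughly like the sketch below, where selecting an object with the first device strips two rotational degrees of freedom from the second device so that orientation stays fixed while position remains controllable; the DOF labels and class layout are assumptions for illustration only:

    from dataclasses import dataclass, field

    FULL_DOF = {"tx", "ty", "tz", "rx", "ry", "rz"}

    @dataclass
    class ManipulatedObject:
        position: tuple = (0.0, 0.0, 0.0)
        orientation: tuple = (0.0, 0.0, 0.0)          # held fixed relative to the selection
        allowed_dof: set = field(default_factory=lambda: set(FULL_DOF))

    def on_first_device_selection(second_object, locked_dof=frozenset({"rx", "ry"})):
        # Remove two degrees of freedom, mirroring the two DOF of the first device,
        # so the second object keeps its orientation relative to the selected object.
        second_object.allowed_dof -= locked_dof

    def apply_second_device_input(obj, deltas):
        # Accept only movements that fall within the remaining degrees of freedom.
        tx, ty, tz = obj.position
        moves = {k: v for k, v in deltas.items() if k in obj.allowed_dof}
        obj.position = (tx + moves.get("tx", 0.0),
                        ty + moves.get("ty", 0.0),
                        tz + moves.get("tz", 0.0))

    obj = ManipulatedObject()
    on_first_device_selection(obj)                          # selection locks rx and ry
    apply_second_device_input(obj, {"tx": 0.5, "rx": 30.0})  # rx is ignored; position updates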