VOLUMETRIC IMMERSIVE EXPERIENCE WITH MULTIPLE VIEWS

    Publication No.: US20250148699A1

    Publication Date: 2025-05-08

    Application No.: US18834191

    Filing Date: 2023-01-30

    Abstract: A multi-view input image covering multiple sampled views is received. A multi-view layered image stack is generated from the multi-view input image. A target view of a viewer into the image space depicted by the multi-view input image is determined based on user pose data. The target view is used to select user-pose-selected sampled views from among the multiple sampled views. Layered images for the user-pose-selected sampled views, along with alpha maps and beta scale maps for those views, are encoded into a video signal to cause a recipient device of the video signal to generate a display image for rendering on an image display.
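
    A minimal sketch of one plausible view-selection step, not the patent's actual algorithm: the viewer's target view direction (derived from user pose data) is compared against the directions of the sampled views, and the closest sampled views are chosen for encoding. The nearest-by-angle rule, the parameter k, and all names below are illustrative assumptions.

    import numpy as np

    def select_sampled_views(target_view_dir, sampled_view_dirs, k=4):
        """Pick the k sampled views whose viewing directions are closest
        (by angle) to the viewer's target view direction."""
        target = target_view_dir / np.linalg.norm(target_view_dir)
        dirs = sampled_view_dirs / np.linalg.norm(sampled_view_dirs, axis=1, keepdims=True)
        cos_sim = dirs @ target          # cosine similarity to the target view
        return np.argsort(-cos_sim)[:k]  # indices of the k most similar views

    # Example: 8 sampled views spaced around a circle, viewer looking near view 0.
    angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
    sampled = np.stack([np.cos(angles), np.sin(angles), np.zeros(8)], axis=1)
    print(select_sampled_views(np.array([1.0, 0.1, 0.0]), sampled, k=2))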

    DUAL STREAM DYNAMIC GOP ACCESS BASED ON VIEWPORT CHANGE

    Publication No.: US20230300426A1

    Publication Date: 2023-09-21

    Application No.: US18019729

    Filing Date: 2021-08-03

    CPC classification number: H04N21/816

    Abstract: A multi-view image stream encoded with primary and secondary image streams is accessed. Each primary image stream comprises groups of pictures (GOPs). Each secondary image stream comprises I-frames generated from a corresponding primary image stream. Viewpoint data collected in real time is received from a recipient decoding device and indicates that the viewer's viewpoint has changed as of a specific time point. A camera is selected based on the viewer's changed viewpoint. It is determined whether the specific time point corresponds to a non-I-frame in a GOP of the primary image stream of the selected camera. If so, an I-frame from the secondary image stream corresponding to that primary image stream is transmitted to the recipient decoding device.
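
    A minimal sketch of the switching decision, not the patent's streaming logic: when the viewpoint change lands mid-GOP in the new camera's primary stream, the co-timed I-frame from the secondary stream is served instead, because a P/B frame cannot be decoded without its preceding frames. The GOP size and the frame-index arithmetic are illustrative assumptions.

    GOP_SIZE = 30  # assumed frames per group of pictures in each primary stream

    def frame_to_send(switch_frame_index: int) -> str:
        """Return which stream serves the first frame after a viewpoint
        switch to a new camera at switch_frame_index."""
        position_in_gop = switch_frame_index % GOP_SIZE
        if position_in_gop == 0:
            # The switch lands exactly on an I-frame of the new camera's primary stream.
            return "primary I-frame"
        # Mid-GOP switch: the primary stream only has a dependent frame here,
        # so serve the co-timed I-frame from the secondary stream instead.
        return "secondary I-frame"

    for idx in (0, 7, 30, 45):
        print(idx, "->", frame_to_send(idx))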

    SINGLE DEPTH TRACKED ACCOMMODATION-VERGENCE SOLUTIONS

    Publication No.: US20210264631A1

    Publication Date: 2021-08-26

    Application No.: US17191642

    Filing Date: 2021-03-03

    Abstract: While a viewer is viewing a first stereoscopic image comprising a first left image and a first right image, a left vergence angle of the left eye of the viewer and a right vergence angle of the right eye of the viewer are determined. A virtual object depth is determined based at least in part on (i) the left vergence angle of the left eye of the viewer and (ii) the right vergence angle of the right eye of the viewer. A second stereoscopic image comprising a second left image and a second right image for the viewer is rendered on one or more image displays. The second stereoscopic image is subsequent to the first stereoscopic image. The second stereoscopic image is projected from the one or more image displays to a virtual object plane at the virtual object depth.
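
    A minimal sketch of one way to turn tracked vergence angles into a virtual object depth, by simple triangulation of the two gaze rays. The angle convention (inward rotation from straight ahead) and the 63 mm interpupillary distance are assumptions, not taken from the patent.

    import math

    def virtual_object_depth(left_vergence_rad, right_vergence_rad, ipd_m=0.063):
        """Depth (metres) of the fixation point implied by the two vergence
        angles, for eyes separated by ipd_m along the x axis."""
        return ipd_m / (math.tan(left_vergence_rad) + math.tan(right_vergence_rad))

    # A symmetric fixation 1 m away: each eye rotates inward by atan((ipd/2) / 1 m).
    theta = math.atan(0.0315 / 1.0)
    print(round(virtual_object_depth(theta, theta), 3))  # ~1.0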

    Multi-Resolution Multi-View Video Rendering

    Publication No.: US20200288114A1

    Publication Date: 2020-09-10

    Application No.: US16809397

    Filing Date: 2020-03-04

    Abstract: A device and method for video rendering. The device includes a memory and an electronic processor. The electronic processor is configured to receive, from a source device, video data including multiple reference viewpoints; determine a target image plane corresponding to a target viewpoint; determine, within the target image plane, one or more target image regions; and determine, for each target image region, a proxy image region larger than the corresponding target image region. The electronic processor is further configured to determine, for each target image region, a plurality of reference pixels that fit within the corresponding proxy image region; project, for each target image region, those reference pixels to the target image region, producing a rendered target region from each target image region; and composite one or more of the rendered target regions to create the video rendering.
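
    A minimal sketch of the proxy-region idea: each target image region is paired with a slightly larger proxy region so that reference pixels just outside the target's footprint are still gathered and can land inside the target after projection. The fixed margin, the toy fixed-shift "projection", and all names are illustrative assumptions rather than the patent's method.

    from dataclasses import dataclass

    @dataclass
    class Region:
        x0: int  # left (inclusive)
        y0: int  # top (inclusive)
        x1: int  # right (exclusive)
        y1: int  # bottom (exclusive)

        def contains(self, x, y):
            return self.x0 <= x < self.x1 and self.y0 <= y < self.y1

    def proxy_region(target: Region, margin: int = 8) -> Region:
        """Grow a target region by a fixed margin on every side."""
        return Region(target.x0 - margin, target.y0 - margin,
                      target.x1 + margin, target.y1 + margin)

    def render_region(target: Region, reference_pixels, shift=(4, 0)):
        """Gather reference pixels inside the proxy region, apply a toy
        'projection' (a fixed shift standing in for real reprojection),
        and keep the pixels that land inside the target region."""
        proxy = proxy_region(target)
        rendered = []
        for (x, y, colour) in reference_pixels:
            if not proxy.contains(x, y):
                continue
            px, py = x + shift[0], y + shift[1]
            if target.contains(px, py):
                rendered.append((px, py, colour))
        return rendered

    # A pixel just outside the target still contributes after projection,
    # which is why the proxy region is made larger than the target region.
    pixels = [(-3, 5, "red"), (5, 5, "green"), (40, 40, "blue")]
    print(render_region(Region(0, 0, 16, 16), pixels))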

    Passive Multi-Wearable-Devices Tracking

    Publication No.: US20180293752A1

    Publication Date: 2018-10-11

    Application No.: US15949536

    Filing Date: 2018-04-10

    Abstract: At a first time point, a first light capturing device at a first spatial location in a three-dimensional (3D) space captures first light rays from light sources located at designated spatial locations on a viewer device in the 3D space. At the first time point, a second light capturing device at a second spatial location in the 3D space captures second light rays from the same light sources. Based on the first light rays captured by the first light capturing device and the second light rays captured by the second light capturing device, at least one of a spatial position and a spatial direction of the viewer device at the first time point is determined.
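
    A minimal sketch of the underlying geometry: two light-capturing devices at known positions each observe a ray toward the same light source on the viewer device, and the source's 3D position is estimated as the point closest to both rays. The least-squares midpoint method is an assumption, not the patent's specific procedure.

    import numpy as np

    def triangulate(p1, d1, p2, d2):
        """Point closest to the two rays p1 + t*d1 and p2 + s*d2 (least squares)."""
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        # Solve for t, s minimising ||(p1 + t d1) - (p2 + s d2)||^2.
        a = np.array([[d1 @ d1, -d1 @ d2], [d1 @ d2, -d2 @ d2]])
        b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
        t, s = np.linalg.solve(a, b)
        return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

    # Capturing devices 2 m apart on the x axis, both seeing a marker at (1, 0, 3).
    marker = np.array([1.0, 0.0, 3.0])
    c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
    print(triangulate(c1, marker - c1, c2, marker - c2))  # ~[1, 0, 3]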

    Tiled Assemblies for a High Dynamic Range Display Panel

    Publication No.: US20170235041A1

    Publication Date: 2017-08-17

    Application No.: US15501400

    Filing Date: 2015-08-03

    Abstract: Techniques are provided for a high dynamic range panel that includes an array of light sources (202, 203) illuminating a corresponding array of light guides (204, 206). A light source (202) of the array illuminates a first light guide (204). The light source directly underlies, such as in a cavity (208), a second light guide (206) that is adjacent to the first light guide. The light source (202) does not extend below a bottom side (214) of either the first light guide or the second light guide, to reduce thickness of the panel. The light source (202) and the first light guide (204) can be integrated as a tile assembly. Alternatively, the light source (202) and the second light guide (206) can be an integrated tile assembly. In a specific embodiment, the light source emits a blue or ultraviolet light, which is converted by quantum dots to a different color.

    AUGMENTED REALITY AND SCREEN IMAGE RENDERING COORDINATION

    Publication No.: US20240272712A1

    Publication Date: 2024-08-15

    Application No.: US18693504

    Filing Date: 2022-09-22

    Inventor: Ajit NINAN

    CPC classification number: G06F3/013 G06T15/20 G06T19/00 G06V10/44 G06V20/20

    Abstract: A first image for rendering on a first image display in a combination of a stationary image display and a non-stationary image display is received. A visual object depicted in the first image is identified. A corresponding image portion in a second image is generated for rendering on a second image display in the combination of the stationary image display and the non-stationary image display. The corresponding image portion in the second image as rendered on the second image display overlaps, in a vision field of a viewer, with the visual object depicted in the first image as rendered on the first image display, to modify one or more visual characteristics of the visual object. The second image is caused to be rendered on the second image display concurrently while the first image is being rendered on the first image display.
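
    A minimal sketch of the coordination step: finding where, on a head-worn (non-stationary) display, the corresponding image portion must be drawn so that it overlaps the visual object shown on the stationary screen in the viewer's vision field. The flat-screen geometry, the viewer positioned on the screen's optical axis, and the pinhole model for the head-worn display are illustrative assumptions.

    import math

    def overlay_pixel(object_on_screen_m, screen_distance_m,
                      hmd_focal_px=1000.0, hmd_center_px=(960.0, 540.0)):
        """Map a point on the stationary screen (metres from screen centre,
        x right / y up) to the pixel on the head-worn display where the
        modifying image portion should be rendered."""
        x_m, y_m = object_on_screen_m
        # Viewing angles from the viewer toward the object on the screen.
        yaw = math.atan2(x_m, screen_distance_m)
        pitch = math.atan2(y_m, screen_distance_m)
        # Pinhole projection onto the head-worn display's image plane.
        u = hmd_center_px[0] + hmd_focal_px * math.tan(yaw)
        v = hmd_center_px[1] - hmd_focal_px * math.tan(pitch)
        return u, v

    # Object 0.3 m right of screen centre, viewer 2 m from the screen.
    print(overlay_pixel((0.3, 0.0), 2.0))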

    DEVICE AND RENDERING ENVIRONMENT TRACKING

    Publication No.: US20230283976A1

    Publication Date: 2023-09-07

    Application No.: US18161645

    Filing Date: 2023-01-30

    CPC classification number: H04S7/301 H04S7/304 H04S7/305

    Abstract: Images of an actual rendering environment are acquired through image sensors operating in conjunction with a media consumption system. The acquired images of the actual rendering environment are used to predict audio characteristics of objects present in the actual rendering environment. Spatial audio rendered to a user in the actual rendering environment by audio speakers operating in conjunction with the media consumption system is adjusted or modified based at least in part on the audio characteristics of the objects present in the actual rendering environment.
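
    A minimal sketch of the adjustment step: detected objects in the rendering environment are mapped to rough sound-absorption coefficients, and the rendered reverberation level is scaled accordingly. The object classes, the coefficient table, and the averaging rule are illustrative assumptions, not values from the patent.

    # Assumed, illustrative absorption coefficients per detected object class.
    ABSORPTION = {"curtain": 0.55, "sofa": 0.45, "bookshelf": 0.30,
                  "bare_wall": 0.05, "window": 0.10}

    def reverb_gain(detected_objects, default=0.15):
        """Lower the reverb send gain as the room gets more absorptive."""
        if not detected_objects:
            return 1.0 - default
        coeffs = [ABSORPTION.get(obj, default) for obj in detected_objects]
        mean_absorption = sum(coeffs) / len(coeffs)
        return max(0.0, 1.0 - mean_absorption)

    print(reverb_gain(["sofa", "curtain", "bare_wall"]))  # more absorptive room -> less reverb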
