PRODUCING THREE-DIMENSIONAL IMAGES USING A VIRTUAL 3D MODEL

    Publication number: WO2018187743A1

    Publication date: 2018-10-11

    Application number: PCT/US2018/026555

    Application date: 2018-04-06

    Abstract: A method of producing vertically projecting three-dimensional images using virtual 3D models (22), wherein said 3D models (22) are created by the simultaneous localization and depth-mapping of the physical features of real objects. A camera (17) is used to take a first image (22A) from a first perspective, and a subsequent image (22N) from a subsequent perspective, wherein the autofocus system (18) provides a first set of depth mapping data and a subsequent set of depth mapping data. The first set of depth mapping data and the subsequent set of depth mapping data are used to generate a disparity mapping (21). A virtual 3D model (32) is created from the disparity mapping (21). The virtual 3D model (32) is imaged to obtain images that can be viewed as three-dimensional. Enhanced 3D effects are added to the virtual 3D model (32).
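
    A minimal numpy sketch of the kind of pipeline this abstract describes: two depth maps (standing in for the autofocus-derived depth mapping data from the two perspectives) are combined into a disparity mapping, which is then back-projected into a point cloud that plays the role of the virtual 3D model. The pinhole-camera relation, focal length, baseline, and helper names are illustrative assumptions, not details taken from the patent.

        import numpy as np

        def disparity_from_depth(depth_first, depth_next, focal_px, baseline_m):
            # Assumed stereo relation: disparity = f * B / Z, applied to the
            # averaged depth mapping data from the two perspectives.
            mean_depth = 0.5 * (depth_first + depth_next)
            return focal_px * baseline_m / np.maximum(mean_depth, 1e-6)

        def model_from_disparity(disparity, focal_px, baseline_m, cx, cy):
            # Back-project every pixel to a 3D point; the resulting (h, w, 3)
            # array stands in for the virtual 3D model built from the disparity.
            h, w = disparity.shape
            z = focal_px * baseline_m / np.maximum(disparity, 1e-6)
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            x = (u - cx) * z / focal_px
            y = (v - cy) * z / focal_px
            return np.stack([x, y, z], axis=-1)

        # Toy usage with synthetic depth maps from the two camera perspectives.
        depth_first = np.full((480, 640), 2.0)   # metres, first perspective
        depth_next = np.full((480, 640), 2.1)    # metres, subsequent perspective
        disp = disparity_from_depth(depth_first, depth_next, focal_px=800.0, baseline_m=0.05)
        model = model_from_disparity(disp, focal_px=800.0, baseline_m=0.05, cx=320.0, cy=240.0)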

    3. HOLOGRAPHIC VIDEO CAPTURE AND TELEPRESENCE SYSTEM
    Invention application, under examination (published)

    Publication number: WO2017127832A1

    Publication date: 2017-07-27

    Application number: PCT/US2017/014616

    Application date: 2017-01-23

    Abstract: The invention is directed to recording, transmitting, and displaying a three-dimensional image of a user's face in a video stream. Reflected light from a curved or geometrically shaped screen is employed to provide multiple perspective views of the user's face, which are transformed into the image that is communicated to other, remotely located users. A head mounted projection display system is employed to capture the reflected light. The system includes a frame that, when worn by a user, wraps around and grips the user's head. Also, at least two separate image capture modules are included on the frame, generally positioned adjacent to the left and right eyes of the user when the system is worn. Each module includes one or more sensor components, such as cameras, arranged to detect at least reflected non-visible light from a screen positioned in front of the user.
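
    The capture side lends itself to a simple data structure: one frame per head-mounted capture module, paired by timestamp before the two perspectives are fused and streamed. The sketch below only illustrates that pairing step; the field names and skew threshold are hypothetical, and the actual fusion into a transmitted 3D face image is outside its scope.

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class CaptureFrame:
            module_id: str        # e.g. "left" or "right" capture module on the frame
            timestamp_s: float
            image: np.ndarray     # reflected non-visible-light image of the user's face

        def pair_frames(left: CaptureFrame, right: CaptureFrame, max_skew_s: float = 0.005):
            # Pair the per-eye captures into one multi-perspective sample that a
            # downstream stage could transform into the transmitted 3D image.
            if abs(left.timestamp_s - right.timestamp_s) > max_skew_s:
                return None                      # drop badly skewed pairs
            return {
                "t": 0.5 * (left.timestamp_s + right.timestamp_s),
                "views": {left.module_id: left.image, right.module_id: right.image},
            }

        # Toy usage with two synthetic frames.
        f_left = CaptureFrame("left", 0.000, np.zeros((120, 160)))
        f_right = CaptureFrame("right", 0.002, np.zeros((120, 160)))
        sample = pair_frames(f_left, f_right)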

    4. SYSTEM AND METHOD FOR REAL-TIME DEPTH MODIFICATION OF STEREO IMAGES OF A VIRTUAL REALITY ENVIRONMENT
    Invention application, under examination (published)

    Publication number: WO2017031117A1

    Publication date: 2017-02-23

    Application number: PCT/US2016/047174

    Application date: 2016-08-16

    Applicant: LEGEND3D, INC.

    CPC classification number: H04N13/128 H04N13/275 H04N2213/003

    Abstract: Enables real-time depth modifications to stereo images of a 3D virtual reality environment to be made locally, for example without an iterative workflow in which region designers or depth artists re-render these images from the original 3D model. Embodiments generate a spherical translation map from the 3D model of the virtual environment; this spherical translation map is a function of the pixel shifts between the left and right stereo images for each point on the sphere surrounding the viewer of the virtual environment. Modifications may be made directly to the spherical translation map and applied directly to the stereo images, without requiring re-rendering of the scene from the complete 3D model. This process enables depth modifications to be viewed in real time, greatly improving the efficiency of the 3D model creation, review, and update cycle.
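
    A minimal sketch of applying an edited translation map directly to an already-rendered stereo pair, assuming equirectangular images and a nearest-neighbour horizontal warp with wrap-around; the patent's actual warp and map representation are not specified here. Each eye is shifted by half of the per-pixel disparity change, so a depth edit becomes visible without re-rendering the 3D scene.

        import numpy as np

        def apply_translation_edit(left, right, delta_shift):
            # delta_shift: per-pixel change (in pixels) to the left/right shift
            # stored in the spherical translation map.
            h, w = delta_shift.shape
            u = np.tile(np.arange(w), (h, 1))
            rows = np.arange(h)[:, None]
            src_left = np.round(u - delta_shift / 2.0).astype(int) % w
            src_right = np.round(u + delta_shift / 2.0).astype(int) % w
            return left[rows, src_left], right[rows, src_right]

        # Toy usage: change the disparity of one rectangular region by a few pixels.
        left = np.random.rand(512, 1024, 3)
        right = np.random.rand(512, 1024, 3)
        edit = np.zeros((512, 1024))
        edit[200:300, 400:600] = -4.0       # pixels of disparity change
        new_left, new_right = apply_translation_edit(left, right, edit)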

    5. CONSTRUCTING A USER'S FACE MODEL USING PARTICLE FILTERS
    Invention application, under examination (published)

    Publication number: WO2016140666A1

    Publication date: 2016-09-09

    Application number: PCT/US2015/018800

    Application date: 2015-03-04

    Inventor: SURKOV, Sergey

    Abstract: Constructing a user's face model using particle filters is disclosed, including: using a first particle filter to generate a new plurality of sets of extrinsic camera information particles corresponding to respective ones of a plurality of images based at least in part on a selected face model particle; selecting a subset of the new plurality of sets of extrinsic camera information particles corresponding to respective ones of the plurality of images; and using a second particle filter to generate a new plurality of face model particles corresponding to the plurality of images based at least in part on the selected subset of the new plurality of sets of extrinsic camera information particles.
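
    A compact sketch of the two-filter structure the abstract describes, with particles reduced to plain parameter vectors and a placeholder likelihood; the particle counts, subset size, and scoring function are illustrative assumptions only.

        import numpy as np

        rng = np.random.default_rng(0)

        def resample(particles, weights):
            # Resampling step shared by both particle filters.
            p = weights / weights.sum()
            return particles[rng.choice(len(particles), size=len(particles), p=p)]

        def toy_likelihood(image, cam, face):
            # Placeholder score; a real system would measure how well the face
            # model, posed with the camera extrinsics, reprojects into the image.
            return float(np.exp(-abs(image.mean() - cam.mean() - face.mean())))

        # Hypothetical setup: 3 images, 50 extrinsic-camera particles per image,
        # 50 face model particles (plain parameter vectors for illustration).
        images = [rng.random((8, 8)) for _ in range(3)]
        camera_particles = [rng.random((50, 6)) for _ in images]
        face_particles = rng.random((50, 10))
        selected_face = face_particles[0]

        # First particle filter: new sets of extrinsic camera particles, one set
        # per image, conditioned on the selected face model particle.
        camera_particles = [
            resample(cams, np.array([toy_likelihood(img, c, selected_face) for c in cams]))
            for img, cams in zip(images, camera_particles)
        ]

        # Select a subset of each new set of extrinsic camera particles.
        camera_subsets = [cams[:10] for cams in camera_particles]

        # Second particle filter: new face model particles conditioned on the
        # selected subsets of extrinsic camera particles.
        face_weights = np.array([
            np.mean([toy_likelihood(img, c, f)
                     for img, cams in zip(images, camera_subsets) for c in cams])
            for f in face_particles
        ])
        face_particles = resample(face_particles, face_weights)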

    6. STEREO IMAGE RECORDING AND PLAYBACK
    Invention application, under examination (published)

    Publication number: WO2016038240A1

    Publication date: 2016-03-17

    Application number: PCT/FI2014/050684

    Application date: 2014-09-09

    Abstract: The invention relates to forming a scene model and determining a first group of scene points, the first group of scene points being visible from a rendering viewpoint, determining a second group of scene points, the second group of scene points being at least partially obscured by the first group of scene points viewed from the rendering viewpoint, forming a first render layer using the first group of scene points and a second render layer using the second group of scene points, and providing the first and second render layers for rendering a stereo image. The invention also relates to receiving a first render layer and a second render layer comprising pixels, the first render layer comprising pixels corresponding to first parts of a scene viewed from a rendering viewpoint and the second render layer comprising pixels corresponding to second parts of the scene viewed from the rendering viewpoint, wherein the second parts of the scene are obscured by the first parts viewed from the rendering viewpoint, placing pixels of the first render layer and pixels of the second render layer in a rendering space, associating a depth value with the pixels, and rendering a stereo image using said pixels and said depth values.
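
    A sketch of playback from two render layers, assuming a simple horizontal pixel shift proportional to inverse depth and a z-buffer that lets occluded-layer pixels show through where the front layer leaves gaps; the patent's actual warp and hole handling are not reproduced here.

        import numpy as np

        def render_eye(layer1_rgb, layer1_depth, layer2_rgb, layer2_depth, eye_shift):
            # layer1: scene points visible from the rendering viewpoint;
            # layer2: scene points (partially) obscured by layer1.
            h, w, _ = layer1_rgb.shape
            out = np.zeros((h, w, 3))
            zbuf = np.full((h, w), np.inf)
            u = np.tile(np.arange(w), (h, 1))
            rows = np.tile(np.arange(h)[:, None], (1, w))
            for rgb, depth in ((layer2_rgb, layer2_depth), (layer1_rgb, layer1_depth)):
                # Disparity grows as depth shrinks (assumed warp model).
                shift = np.round(eye_shift / np.maximum(depth, 1e-3)).astype(int)
                dst = np.clip(u + shift, 0, w - 1)
                closer = depth < zbuf[rows, dst]
                out[rows[closer], dst[closer]] = rgb[closer]
                zbuf[rows[closer], dst[closer]] = depth[closer]
            return out

        # Toy usage: render left and right eyes with opposite shifts.
        rgb1, d1 = np.random.rand(240, 320, 3), np.full((240, 320), 2.0)
        rgb2, d2 = np.random.rand(240, 320, 3), np.full((240, 320), 5.0)
        left_eye = render_eye(rgb1, d1, rgb2, d2, eye_shift=+8.0)
        right_eye = render_eye(rgb1, d1, rgb2, d2, eye_shift=-8.0)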

    7. THREE DIMENSIONAL MOVING PICTURES WITH A SINGLE IMAGER AND MICROFLUIDIC LENS
    Invention application, under examination (published)

    Publication number: WO2015175907A1

    Publication date: 2015-11-19

    Application number: PCT/US2015/031026

    Application date: 2015-05-15

    Abstract: A method and system for determining the depth of an image using a single imager and a lens having a variable focal length is provided. The system comprises a microfluidic lens having a variable focal length controlled by a lens controller; an imager receiving an image of an object from the lens, wherein the imager is configured to receive a first image comprising a first plurality of pixels from the lens at a first focal length and a second image comprising a second plurality of pixels from the lens at a second focal length, the second focal length being different from the first focal length; non-volatile memory, wherein the first image and the second image are stored in the non-volatile memory; and a depth module configured to determine a distance between the lens and the object based on a comparison of the first image of the object and the second image of the object.
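
    A sketch of the comparison step, assuming a local-variance focus measure and only two focus settings: each pixel is assigned whichever focal distance makes it look sharper. A real depth module would interpolate over many lens states; the focus measure and the two-setting decision are assumptions, not the patent's method.

        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view

        def local_sharpness(img, k=3):
            # Variance over a (2k+1) x (2k+1) neighbourhood as a focus measure.
            pad = np.pad(img, k, mode="edge")
            windows = sliding_window_view(pad, (2 * k + 1, 2 * k + 1))
            return windows.var(axis=(-1, -2))

        def depth_from_two_focus_settings(img_a, img_b, z_focus_a, z_focus_b):
            # Pick, per pixel, the focal distance at which the object looks sharper.
            sharp_a = local_sharpness(img_a)
            sharp_b = local_sharpness(img_b)
            return np.where(sharp_a >= sharp_b, z_focus_a, z_focus_b)

        # Toy usage: two captures of the same scene at different focal lengths.
        img_a = np.random.rand(120, 160)     # lens focused near (e.g. 0.3 m)
        img_b = np.random.rand(120, 160)     # lens focused far (e.g. 2.0 m)
        depth_map = depth_from_two_focus_settings(img_a, img_b, 0.3, 2.0)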

    10. METHODS FOR FULL PARALLAX COMPRESSED LIGHT FIELD 3D IMAGING SYSTEMS
    Invention application, under examination (published)

    Publication number: WO2015106031A2

    Publication date: 2015-07-16

    Application number: PCT/US2015/010696

    Application date: 2015-01-08

    Abstract: A compressed light field imaging system is described. The light field 3D data is analyzed to determine an optimal subset of light field samples to be acquired (rendered), while the remaining samples are generated using multi-reference depth-image based rendering. The light field is encoded and transmitted to the display. The 3D display directly reconstructs the light field and avoids the data expansion that usually occurs in conventional imaging systems. The present invention enables the realization of a full parallax 3D compressed imaging system that achieves high compression performance while minimizing memory and computational requirements.
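
    A minimal sketch of the sampling-plus-synthesis idea: keep a subset of views as references and synthesize the rest by depth-image based rendering, here reduced to a single reference per synthesized view and a horizontal pixel shift. The sampling pattern, focal length, and hole handling are illustrative assumptions; encoding and display-side reconstruction are not shown.

        import numpy as np

        def synthesize_view(ref_rgb, ref_depth, dx, focal_px=500.0):
            # Forward-warp one reference view by disparity = f * dx / depth.
            # Real multi-reference DIBR would fill the holes from other references.
            h, w, _ = ref_rgb.shape
            disp = np.round(focal_px * dx / np.maximum(ref_depth, 1e-3)).astype(int)
            u = np.tile(np.arange(w), (h, 1))
            rows = np.tile(np.arange(h)[:, None], (1, w))
            dst = np.clip(u + disp, 0, w - 1)
            out = np.zeros_like(ref_rgb)
            out[rows, dst] = ref_rgb
            return out

        # Toy usage: keep every fourth view as a reference, synthesize the others.
        views = [np.random.rand(64, 64, 3) for _ in range(8)]
        depths = [np.full((64, 64), 2.0) for _ in range(8)]
        light_field = {}
        for i in range(8):
            ref = i - (i % 4)                  # nearest kept reference to the left
            light_field[i] = views[i] if i % 4 == 0 else synthesize_view(
                views[ref], depths[ref], dx=0.01 * (i % 4))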
