Virtual eyeglass set for viewing actual scene that corrects for different location of lenses than eyes

    Publication number: US10715791B2

    Publication date: 2020-07-14

    Application number: US15140738

    Filing date: 2016-04-28

    Applicant: Google Inc.

    Abstract: A virtual eyeglass set may include a frame, a first virtual lens and second virtual lens, and a processor. The frame may mount onto a user's head and hold the first virtual lens in front of the user's left eye and the second virtual lens in front of the user's right eye. A first side of each lens may face the user and a second side of each lens may face away from the user. Each of the first virtual lens and the second virtual lens may include a light field display on the first side, and a light field camera on the second side. The processor may construct, for display on each of the light field displays based on image data received via each of the light field cameras, an image from a perspective of the user's respective eye.
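The key step in the abstract above is constructing the image from the perspective of the user's eye, which sits behind the lens, rather than from the lens surface where the light field camera is. A minimal sketch of that idea using a two-plane light field and a nearest-sample lookup (the array layout, the `view_from_eye` name, and the nearest-ray heuristic are illustrative assumptions, not the patented reconstruction):

```python
import numpy as np

def view_from_eye(light_field, eye_uv):
    """Synthesize the scene as seen from the eye's position behind the
    lens, given a two-plane light field captured on the lens's outer side.

    `light_field` has shape (U, V, S, T): radiance indexed by a sample
    position (u, v) on the lens plane and a direction (s, t). Selecting
    the (u, v) sample nearest the eye's projection approximates the view
    from the eye rather than from the lens surface. This is a toy
    nearest-ray sketch, not the patented method.
    """
    u, v = eye_uv  # eye position normalized to [0, 1] on the lens plane
    U, V = light_field.shape[:2]
    u_idx = int(np.clip(round(u * (U - 1)), 0, U - 1))
    v_idx = int(np.clip(round(v * (V - 1)), 0, V - 1))
    return light_field[u_idx, v_idx]

# Toy light field: 3x3 lens-plane samples, each a 2x2 directional image.
lf = np.arange(3 * 3 * 2 * 2, dtype=float).reshape(3, 3, 2, 2)
center_view = view_from_eye(lf, (0.5, 0.5))  # view from the lens center
```

A real implementation would interpolate between neighboring (u, v) samples and account for the eye-to-lens distance; the nearest-sample lookup is only the simplest possible stand-in.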

    Touchscreen hover detection in an augmented and/or virtual reality environment

    Publication number: US10338673B2

    Publication date: 2019-07-02

    Application number: US15251573

    Filing date: 2016-08-30

    Applicant: Google Inc.

    Abstract: A system for detecting and tracking a hover position of a manual pointing device, such as finger(s), on a handheld electronic device may include overlaying a rendered mono-chromatic keying screen, or green screen, on a user interface, such as a keyboard, of the handheld electronic device. A position of the finger(s) relative to the keyboard may be determined based on the detection of the finger(s) on the green screen and a known arrangement of the keyboard. An image of the keyboard and the position of the finger(s) may be rendered and displayed, for example, on a head mounted display, to facilitate user interaction via the keyboard with a virtual immersive experience generated by the head mounted display.
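The green-screen idea in the abstract above can be sketched as chroma-key segmentation: any pixel that is not the keying color must be an occluding finger. A minimal illustration (the threshold values, the centroid heuristic, and the function name are assumptions for the sketch, not the patented detection method):

```python
import numpy as np

def finger_position_on_green_screen(frame, green_threshold=0.6):
    """Locate a fingertip over a rendered mono-chromatic (green) keying
    screen by finding pixels that are *not* green.

    `frame` is an HxWx3 float RGB image in [0, 1]. Returns the (x, y)
    pixel centroid of the occluding region, or None if nothing occludes
    the screen. Thresholds are illustrative, not from the patent.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    # A pixel belongs to the green screen if green clearly dominates.
    is_green = (g > green_threshold) & (g > r + 0.2) & (g > b + 0.2)
    occluded = ~is_green
    if not occluded.any():
        return None
    ys, xs = np.nonzero(occluded)
    return float(xs.mean()), float(ys.mean())

# A toy 4x4 all-green frame with one "finger" pixel at (x=2, y=1).
frame = np.zeros((4, 4, 3))
frame[..., 1] = 1.0            # pure green everywhere
frame[1, 2] = (0.8, 0.5, 0.4)  # skin-toned occluding pixel
print(finger_position_on_green_screen(frame))  # -> (2.0, 1.0)
```

The detected position would then be mapped onto the known keyboard arrangement and re-rendered inside the head mounted display, as the abstract describes.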

    System and Method for Dynamically Adjusting Rendering Parameters Based on User Movements

    Status: Pending (application published)

    Publication number: US20160189423A1

    Publication date: 2016-06-30

    Application number: US14583889

    Filing date: 2014-12-29

    Applicant: Google Inc.

    CPC classification numbers: G06T15/20; G06F3/012; G06F3/04815; G06T2210/36

    Abstract: A computer-implemented method for dynamically adjusting rendering parameters based on user movements may include determining viewpoint movement data for a user viewing a rendering of a 3D model at a first time, determining a first level-of-detail at which to render the 3D model based at least in part on the viewpoint movement data at the first time and rendering the 3D model at the first level-of-detail. The method may also include determining viewpoint movement data for the user at a second time, wherein the viewpoint movement data at the second time differs from the viewpoint movement data at the first time. In addition, the method may include determining a second level-of-detail at which to render the 3D model based at least in part on the viewpoint movement data at the second time and rendering the 3D model at the second level-of-detail, wherein the second level-of-detail differs from the first level-of-detail.

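The method in the abstract maps viewpoint movement at each sample time to a level-of-detail: fast movement tolerates a coarser model because fine detail is not perceptible mid-motion. A minimal sketch of such a mapping (the speed thresholds and the function name are illustrative assumptions, not values from the patent):

```python
def choose_level_of_detail(viewpoint_speed, thresholds=(0.5, 2.0, 8.0)):
    """Map viewpoint movement speed (scene units/s) to an LOD index.

    Index 0 is the finest detail; each threshold crossed moves the
    renderer to a coarser representation. Threshold values are
    illustrative assumptions.
    """
    for lod, limit in enumerate(thresholds):
        if viewpoint_speed < limit:
            return lod
    return len(thresholds)

# First sample time: the viewer is nearly still -> finest detail.
print(choose_level_of_detail(0.1))  # -> 0
# Second sample time: the viewer pans quickly -> coarser detail.
print(choose_level_of_detail(5.0))  # -> 2
```

Re-evaluating this mapping at each sample time yields the two differing levels-of-detail the claim describes.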

    Systems and Methods to Transition Between Viewpoints in a Three-Dimensional Environment

    Status: Granted (in force)

    Publication number: US20170046875A1

    Publication date: 2017-02-16

    Application number: US14825384

    Filing date: 2015-08-13

    Applicant: Google Inc.

    Abstract: Systems and methods to transition between viewpoints in a three-dimensional environment are provided. One example method includes obtaining data indicative of an origin position and a destination position of a virtual camera. The method includes determining a distance between the origin position and the destination position of the virtual camera. The method includes determining a peak visible distance based at least in part on the distance between the origin position and the destination position of the virtual camera. The method includes identifying a peak position at which the viewpoint of the virtual camera corresponds to the peak visible distance. The method includes determining a parabolic camera trajectory that traverses the origin position, the peak position, and the destination position. The method includes transitioning the virtual camera from the origin position to the destination position along the parabolic camera trajectory. An example system includes a user computing device and a geographic information system.

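The trajectory step in the abstract is a parabola constrained to pass through the origin, peak, and destination positions. One way to sketch it is quadratic (Lagrange) interpolation through those three points, parameterized over the transition; the (ground distance, altitude) representation, the node placement at t = 0, 0.5, 1, and all numeric values are illustrative assumptions:

```python
def parabolic_trajectory(origin, peak, destination, steps=4):
    """Sample a camera path whose position follows the parabola through
    the origin, peak, and destination.

    Positions are (ground_distance, altitude) pairs; in the patented
    method the peak altitude would come from the peak visible distance.
    Uses quadratic Lagrange basis functions for nodes t = 0, 0.5, 1.
    """
    (x0, y0), (xp, yp), (x1, y1) = origin, peak, destination
    path = []
    for i in range(steps + 1):
        t = i / steps
        # Each basis function is 1 at its own node and 0 at the others.
        b0 = 2 * (t - 0.5) * (t - 1)
        bp = -4 * t * (t - 1)
        b1 = 2 * t * (t - 0.5)
        path.append((b0 * x0 + bp * xp + b1 * x1,
                     b0 * y0 + bp * yp + b1 * y1))
    return path

path = parabolic_trajectory((0.0, 100.0), (500.0, 5000.0), (1000.0, 100.0))
# path[0] is the origin, path[2] the peak, path[-1] the destination.
```

The camera climbs toward the peak before descending, which is what makes long geographic transitions legible: the high midpoint keeps both endpoints within the visible distance.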

    System and method for dynamically adjusting rendering parameters based on user movements

    Publication number: US10255713B2

    Publication date: 2019-04-09

    Application number: US14583889

    Filing date: 2014-12-29

    Applicant: Google Inc.

    Abstract: A computer-implemented method for dynamically adjusting rendering parameters based on user movements may include determining viewpoint movement data for a user viewing a rendering of a 3D model at a first time, determining a first level-of-detail at which to render the 3D model based at least in part on the viewpoint movement data at the first time and rendering the 3D model at the first level-of-detail. The method may also include determining viewpoint movement data for the user at a second time, wherein the viewpoint movement data at the second time differs from the viewpoint movement data at the first time. In addition, the method may include determining a second level-of-detail at which to render the 3D model based at least in part on the viewpoint movement data at the second time and rendering the 3D model at the second level-of-detail, wherein the second level-of-detail differs from the first level-of-detail.
