-
Publication No.: US20140240351A1
Publication Date: 2014-08-28
Application No.: US13779614
Filing Date: 2013-02-27
Applicants: Michael Scavezze , Nicholas Gervase Fajt , Arnulfo Zepeda Navratil , Jason Scott , Adam Benjamin Smith-Kipnis , Brian Mount , John Bevis , Cameron Brown , Tony Ambrus , Phillip Charles Heckinger , Dan Kroymann , Matthew G. Kaplan , Aaron Krauss
Inventors: Michael Scavezze , Nicholas Gervase Fajt , Arnulfo Zepeda Navratil , Jason Scott , Adam Benjamin Smith-Kipnis , Brian Mount , John Bevis , Cameron Brown , Tony Ambrus , Phillip Charles Heckinger , Dan Kroymann , Matthew G. Kaplan , Aaron Krauss
IPC Classes: G06T19/00
CPC Classes: G06T19/006 , G02B27/0093 , G02B27/017 , G06F3/011 , G06F3/012 , G06F3/013 , G06F3/0346 , G06F3/04815
Abstract: Embodiments that relate to providing motion amplification to a virtual environment are disclosed. For example, in one disclosed embodiment a mixed reality augmentation program receives from a head-mounted display device motion data that corresponds to motion of a user in a physical environment. The program presents via the display device the virtual environment in motion in a principal direction, with the principal direction motion being amplified by a first multiplier as compared to the motion of the user in a corresponding principal direction. The program also presents the virtual environment in motion in a secondary direction, where the secondary direction motion is amplified by a second multiplier as compared to the motion of the user in a corresponding secondary direction, and the second multiplier is less than the first multiplier.
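The per-axis amplification described above can be sketched in a few lines. This is an illustrative sketch, not the patented implementation: the multiplier values and the mapping of axes to principal/secondary directions are assumptions for demonstration.

```python
# Sketch of two-multiplier motion amplification: motion along the principal
# direction is scaled by a larger multiplier than motion along the secondary
# direction, and vertical motion is left unscaled (an assumption).

def amplify_motion(user_delta, principal_multiplier=4.0, secondary_multiplier=1.5):
    """Map a user's physical motion delta (principal, secondary, vertical)
    to virtual-environment motion, amplifying the principal axis most."""
    p, s, v = user_delta
    assert secondary_multiplier < principal_multiplier  # per the abstract
    return (p * principal_multiplier, s * secondary_multiplier, v)

# A 1.0 m step in the principal direction with 0.5 m of sideways drift:
virtual = amplify_motion((1.0, 0.5, 0.0))
print(virtual)  # (4.0, 0.75, 0.0)
```

Keeping the secondary multiplier smaller preserves the user's sense of heading: lateral sway is not exaggerated as much as forward progress.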
-
Publication No.: US20140320389A1
Publication Date: 2014-10-30
Application No.: US13872861
Filing Date: 2013-04-29
Applicants: Michael Scavezze , Jonathan Steed , Stephen Latta , Kevin Geisner , Daniel McCulloch , Brian Mount , Ryan Hastings , Phillip Charles Heckinger
Inventors: Michael Scavezze , Jonathan Steed , Stephen Latta , Kevin Geisner , Daniel McCulloch , Brian Mount , Ryan Hastings , Phillip Charles Heckinger
IPC Classes: G06F3/01
CPC Classes: G06T19/006 , G02B27/0093 , G02B27/0172 , G02B2027/0138 , G02B2027/014 , G06F3/011 , G06K9/6267 , G06T19/20
Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
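The profile-query-then-select flow above can be sketched as follows. All names here (the profile table, contexts, and modes) are hypothetical, invented for illustration; the patent does not specify this data layout.

```python
# Sketch of context-driven interaction-mode selection: an identified
# physical object's profile lists supported interaction modes, and one
# is chosen programmatically from the current interaction context.

OBJECT_PROFILES = {
    "wall": {"at_home": "display_board", "at_work": "whiteboard"},
    "table": {"at_home": "game_surface", "at_work": "document_surface"},
}

def select_interaction_mode(object_id, interaction_context):
    """Query the object's profile and pick the mode matching the context."""
    profile = OBJECT_PROFILES.get(object_id, {})
    return profile.get(interaction_context, "default")

print(select_interaction_mode("wall", "at_work"))  # whiteboard
print(select_interaction_mode("lamp", "at_home"))  # default
```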
-
Publication No.: US20170372457A1
Publication Date: 2017-12-28
Application No.: US15195918
Filing Date: 2016-06-28
Applicants: Roger Sebastian Kevin Sylvan , Phillip Charles Heckinger , Arthur Tomlin , Nikolai Michael Faaland
Inventors: Roger Sebastian Kevin Sylvan , Phillip Charles Heckinger , Arthur Tomlin , Nikolai Michael Faaland
CPC Classes: G06T5/003 , G02B27/017 , G02B27/0179 , G02B2027/0187 , G06F1/163 , G06F3/011 , G06F3/012 , G06F3/013 , G06F3/048 , G06T1/20 , G06T11/001 , G06T15/04 , G06T15/20 , G06T19/006 , G09G5/28 , G09G2340/0407 , G09G2360/18
Abstract: A computing device is provided that includes an input device, a display device, and a processor. At a rendering stage of a rendering pipeline, the processor renders visual scene data to a frame buffer and generates a signed distance field of edges of vector graphic data. At a reprojection stage of the rendering pipeline, prior to displaying the rendered visual scene, the processor receives post-rendering user input via the input device that updates the user perspective, reprojects the rendered visual scene data in the frame buffer based on the updated user perspective, reprojects data of the signed distance field based on the updated user perspective, evaluates the signed distance field to generate reprojected vector graphic data, generates a composite image including the reprojected rendered visual scene data and the reprojected graphic data, and displays the composite image on the display device.
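The key idea in the abstract is that a signed distance field can be resampled at reprojected coordinates and re-thresholded, so vector-graphic edges stay crisp after a late perspective update. The toy sketch below illustrates that with a circle "glyph" and a simple translation standing in for the perspective shift; it is not the patented pipeline, and all parameters are assumptions.

```python
# Sketch of SDF evaluation after reprojection: sample the field at shifted
# coordinates, then threshold to recover a crisp binary glyph.

import math

def circle_sdf(x, y, cx=4.0, cy=4.0, r=3.0):
    """Signed distance to a circle's edge: negative inside, positive outside."""
    return math.hypot(x - cx, y - cy) - r

def reproject_and_evaluate(shift_x, shift_y, size=9):
    """Sample the SDF at reprojected (shifted) coordinates and threshold it
    into a binary glyph (1 = inside the reprojected circle)."""
    return [[1 if circle_sdf(x - shift_x, y - shift_y) < 0 else 0
             for x in range(size)]
            for y in range(size)]

grid = reproject_and_evaluate(1, 0)
# The circle's center has moved one sample to the right, from x=4 to x=5:
print(grid[4][5])  # 1 (inside); the edge is re-thresholded, not smeared
```

Reprojecting a rasterized frame buffer would blur or alias thin edges; re-evaluating the distance field regenerates them sharply at the new perspective, which is why the abstract treats the two data paths separately before compositing.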
-