-
1.
Publication No.: US20170161939A1
Publication Date: 2017-06-08
Application No.: US15412888
Filing Date: 2017-01-23
Applicant: Microsoft Technology Licensing, LLC
Inventor: Ben Sugden , Darren Bennett , Brian Mount , Sebastian Sylvan , Arthur Tomlin , Ryan Hastings , Daniel McCulloch , Kevin Geisner , Robert Crocco, Jr.
CPC classification number: G06T15/60 , G02B27/017 , G02B27/0172 , G02B2027/0112 , G02B2027/0118 , G02B2027/014 , G02B2027/0178 , G02B2027/0187 , G06F3/011 , G06F3/012 , G06F3/04815 , G06T7/11 , G06T7/60 , G06T7/73 , G06T19/006 , G06T2215/16
Abstract: A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.
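As an illustration only, and not the patented implementation: the sketch below models ambient lighting as an average colour and intensity taken from a camera frame and uses it to tint a virtual object's colour before it is drawn on the see-through display. The frame format, the AmbientLight type, and the shading rule are assumptions made for this example.

from dataclasses import dataclass

@dataclass
class AmbientLight:
    r: float
    g: float
    b: float
    intensity: float

def estimate_ambient_light(frame):
    # frame: list of (r, g, b) camera pixels in 0-255; average them into a
    # crude ambient-light estimate for the physical environment.
    n = len(frame)
    r = sum(p[0] for p in frame) / n
    g = sum(p[1] for p in frame) / n
    b = sum(p[2] for p in frame) / n
    return AmbientLight(r / 255, g / 255, b / 255, (r + g + b) / (3 * 255))

def shade_virtual_color(base_rgb, light):
    # Tint the virtual object's base colour by the estimated ambient light so
    # the rendered content better matches the real scene behind the display.
    return tuple(min(1.0, c * tint * light.intensity + 0.05)
                 for c, tint in zip(base_rgb, (light.r, light.g, light.b)))

frame = [(200, 180, 160), (190, 170, 150), (210, 200, 180)]  # toy camera pixels
print(shade_virtual_color((0.2, 0.6, 0.9), estimate_ambient_light(frame)))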
-
2.
Publication No.: US09584766B2
Publication Date: 2017-02-28
Application No.: US14730061
Filing Date: 2015-06-03
Applicant: Microsoft Technology Licensing, LLC
Inventor: Vivek Pradeep , Stephen G. Latta , Steven Nabil Bathiche , Kevin Geisner , Alice Jane Bernheim Brush
Abstract: Techniques for implementing an integrative interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space.
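Purely as a sketch of the synchronization step, under assumed data shapes (timestamped frame lists) rather than the system described in the abstract: frames captured at two locations are paired by nearest capture time so the two feeds can be composited into one integrated interactive space.

from bisect import bisect_left

def synchronize(feed_a, feed_b, max_skew=0.040):
    # feed_a / feed_b: lists of (timestamp_sec, frame) sorted by timestamp.
    # Pair each frame in feed_a with the nearest feed_b frame captured within
    # max_skew seconds, so both views can be combined into one shared space.
    times_b = [t for t, _ in feed_b]
    pairs = []
    for t_a, frame_a in feed_a:
        i = bisect_left(times_b, t_a)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(feed_b)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(times_b[k] - t_a))
        if abs(times_b[j] - t_a) <= max_skew:
            pairs.append((frame_a, feed_b[j][1]))
    return pairs

feed_a = [(0.000, "A0"), (0.033, "A1"), (0.066, "A2")]
feed_b = [(0.010, "B0"), (0.045, "B1"), (0.080, "B2")]
print(synchronize(feed_a, feed_b))  # -> [('A0', 'B0'), ('A1', 'B1'), ('A2', 'B2')]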
-
3.
Publication No.: US09524024B2
Publication Date: 2016-12-20
Application No.: US14160423
Filing Date: 2014-01-21
Applicant: Microsoft Technology Licensing, LLC
Inventor: Relja Markovic , Gregory N. Snook , Stephen Latta , Kevin Geisner , Johnny Lee , Adam Jethro Langridge
CPC classification number: G06F3/017 , A63F13/213 , A63F13/428 , G06F3/011 , G06F3/016 , G06T3/20 , G06T3/40 , G06T11/60
Abstract: Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.
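A minimal sketch of the filter-based recognizer idea, with hypothetical filter names and a toy pose format (this is not the patented recognizer engine): each filter scores the captured pose for one gesture, and the strongest score above a threshold is mapped to a perspective-control command.

def lean_left_filter(pose):
    # Fires when the head is displaced to the left of the hips (smaller x).
    return max(0.0, pose["hip"][0] - pose["head"][0])

def lean_right_filter(pose):
    # Fires when the head is displaced to the right of the hips (larger x).
    return max(0.0, pose["head"][0] - pose["hip"][0])

FILTERS = {
    "pan_camera_left": lean_left_filter,
    "pan_camera_right": lean_right_filter,
}

def recognize_perspective_control(pose, threshold=0.1):
    # Run every filter over the captured pose and return the perspective
    # command whose filter fires most strongly, or None below the threshold.
    scores = {command: f(pose) for command, f in FILTERS.items()}
    command, score = max(scores.items(), key=lambda kv: kv[1])
    return command if score >= threshold else None

pose = {"head": (0.35, 1.6), "hip": (0.50, 1.0)}  # toy capture-device sample
print(recognize_perspective_control(pose))        # -> pan_camera_left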
-
4.
Publication No.: US09292083B2
Publication Date: 2016-03-22
Application No.: US14290749
Filing Date: 2014-05-29
Applicant: Microsoft Technology Licensing, LLC
Inventor: Jeffrey Evertt , Joel Deaguero , Darren Bennett , Dylan Vance , David Galloway , Relja Markovic , Stephen Latta , Oscar Omar Garza Santos , Kevin Geisner
Abstract: Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
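As an assumed, simplified version of the mapping step only (not the disclosed method): the hand position inside an interaction box in front of the person is normalized and scaled to display pixels, which is the coordinate an avatar cursor could mirror on screen.

def map_to_screen(hand_xy, box_origin, box_size, screen_size):
    # hand_xy: (x, y) in metres from the depth camera; box_origin/box_size
    # define the interaction volume in front of the person; screen_size is
    # the display resolution in pixels (width, height).
    nx = (hand_xy[0] - box_origin[0]) / box_size[0]
    ny = (hand_xy[1] - box_origin[1]) / box_size[1]
    nx, ny = max(0.0, min(1.0, nx)), max(0.0, min(1.0, ny))
    # Flip y because screen coordinates grow downward.
    return int(nx * (screen_size[0] - 1)), int((1.0 - ny) * (screen_size[1] - 1))

# Hand 30 cm right and 20 cm up inside a 60 cm x 40 cm box -> screen pixel.
print(map_to_screen((0.30, 0.20), box_origin=(0.0, 0.0),
                    box_size=(0.60, 0.40), screen_size=(1920, 1080)))  # -> (959, 539)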
-
5.
Publication No.: US10691216B2
Publication Date: 2020-06-23
Application No.: US15183246
Filing Date: 2016-06-15
Applicant: Microsoft Technology Licensing, LLC
Inventor: Kevin Geisner , Stephen Latta , Relja Markovic , Gregory N. Snook
IPC: G06F3/01 , G06F3/16 , G06F3/0346 , G06F3/00 , A63B24/00
Abstract: Systems, methods and computer readable media are disclosed for gesture input beyond skeletal tracking. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.
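To make the fusion idea concrete, a hedged sketch with a made-up "throw" gesture: skeletal data (hand velocity) is combined with non-user-position data (a controller trigger) and the input only registers when both signals agree. The Snapshot fields and threshold are assumptions for this example, not values from the patent.

from dataclasses import dataclass

@dataclass
class Snapshot:
    hand_velocity: float   # m/s, from the capture device (user-position data)
    trigger_pressed: bool  # from a held controller (non-user-position data)

def detect_throw(snapshot, velocity_threshold=2.0):
    # Register a throw only when the arm is swung fast enough *and* the
    # controller trigger confirms the user meant it as input to the system.
    return snapshot.hand_velocity >= velocity_threshold and snapshot.trigger_pressed

print(detect_throw(Snapshot(hand_velocity=2.6, trigger_pressed=True)))   # True
print(detect_throw(Snapshot(hand_velocity=2.6, trigger_pressed=False)))  # False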
-
6.
Publication No.: US09977882B2
Publication Date: 2018-05-22
Application No.: US14805685
Filing Date: 2015-07-22
Applicant: Microsoft Technology Licensing, LLC
Inventor: Mike Scavezze , Jason Scott , Jonathan Steed , Ian McIntyre , Aaron Krauss , Daniel McCulloch , Stephen Latta , Kevin Geisner , Brian Mount
CPC classification number: G06F21/31 , G06F3/013 , G06F21/34 , G06F21/36 , G10L15/22 , H04L63/08 , H04W12/06
Abstract: Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.
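A minimal sketch of the order-based check, assuming feature selections arrive as string identifiers and that the stored credential is an HMAC of the selection sequence rather than the raw order; none of this is taken from the patent itself.

import hashlib
import hmac

def sequence_digest(feature_ids, key=b"per-user-secret"):
    # Hash the selection order so the stored credential is not the raw sequence.
    return hmac.new(key, ",".join(feature_ids).encode(), hashlib.sha256).hexdigest()

def authenticate(observed_selections, stored_digest, key=b"per-user-secret"):
    # The user is authenticated only if the augmented reality features were
    # selected in the predefined order linked to this user.
    return hmac.compare_digest(sequence_digest(observed_selections, key), stored_digest)

stored = sequence_digest(["red_sphere", "blue_cube", "green_cone"])
print(authenticate(["red_sphere", "blue_cube", "green_cone"], stored))  # True
print(authenticate(["blue_cube", "red_sphere", "green_cone"], stored))  # False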
-
7.
Publication No.: US09836889B2
Publication Date: 2017-12-05
Application No.: US15446967
Filing Date: 2017-03-01
Applicant: Microsoft Technology Licensing, LLC
Inventor: Ben Sugden , John Clavin , Ben Vaught , Stephen Latta , Kathryn Stone Perez , Daniel McCulloch , Jason Scott , Wei Zhang , Darren Bennett , Ryan Hastings , Arthur Tomlin , Kevin Geisner
CPC classification number: G06T19/006 , G02B27/01 , G02B27/017 , G02B27/0172 , G02B2027/0138 , G02B2027/014 , G02B2027/0141 , G02B2027/0178 , G02B2027/0187 , G06F3/011 , G06F3/013 , G06F3/017 , G06F3/1407 , G06F3/167 , G06F17/30241 , G06T7/12 , G06T7/194 , G06T7/215
Abstract: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.
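As a hypothetical sketch of the lookup-and-intent flow (the registry contents, gaze-dwell rule, and threshold are invented for illustration, not drawn from the disclosure): a real object recognised near the user is checked for an associated executable virtual object, and that object is executed only when gaze dwell suggests intent to interact.

VIRTUAL_OBJECTS = {
    "coffee_shop_sign": lambda: print("Showing today's menu overlay"),
    "bus_stop": lambda: print("Showing next-arrival times"),
}

def maybe_interact(nearby_real_objects, gazed_object, gaze_dwell_sec, dwell_threshold=1.5):
    # nearby_real_objects: labels recognised at the user's current location.
    # Execute the associated virtual object only when the user has gazed at
    # the real object long enough to indicate intent to interact.
    for label in nearby_real_objects:
        if label in VIRTUAL_OBJECTS and label == gazed_object and gaze_dwell_sec >= dwell_threshold:
            VIRTUAL_OBJECTS[label]()
            return True
    return False

maybe_interact(["bus_stop", "mailbox"], gazed_object="bus_stop", gaze_dwell_sec=2.0)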
-
8.
Publication No.: US09201243B2
Publication Date: 2015-12-01
Application No.: US14602746
Filing Date: 2015-01-22
Applicant: Microsoft Technology Licensing, LLC
Inventor: Ben Sugden , John Clavin , Ben Vaught , Stephen Latta , Kathryn Stone Perez , Daniel McCulloch , Jason Scott , Wei Zhang , Darren Bennett , Ryan Hastings , Arthur Tomlin , Kevin Geisner
CPC classification number: G06T19/006 , G02B27/01 , G02B27/017 , G02B27/0172 , G02B2027/0138 , G02B2027/014 , G02B2027/0141 , G02B2027/0178 , G02B2027/0187 , G06F3/011 , G06F3/013 , G06F3/017 , G06F3/1407 , G06F3/167 , G06F17/30241 , G06T7/12 , G06T7/194 , G06T7/215
Abstract: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.
-
9.
Publication No.: US10510190B2
Publication Date: 2019-12-17
Application No.: US15694476
Filing Date: 2017-09-01
Applicant: Microsoft Technology Licensing, LLC
Inventor: Michael Scavezze , Jonathan Steed , Stephen Latta , Kevin Geisner , Daniel McCulloch , Brian Mount , Ryan Hastings , Phillip Charles Heckinger
Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
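A sketch of the profile/context selection flow under assumed profile and context values (not the disclosed program): the recognised object's profile lists its interaction modes, the current mixed-reality context picks one, and a user input is then interpreted as a virtual action on the paired virtual object.

OBJECT_PROFILES = {
    # Hypothetical profile: maps an interaction context to an interaction mode.
    "wall_clock": {"home": "alarm_setting", "office": "meeting_timer"},
}

def select_interaction_mode(object_id, context):
    # Query the physical object's profile and pick the mode for this context.
    return OBJECT_PROFILES.get(object_id, {}).get(context)

def interpret_input(mode, user_input):
    # Map a user input to a virtual action on the associated virtual object.
    actions = {
        ("alarm_setting", "tap"): "toggle_alarm_overlay",
        ("meeting_timer", "tap"): "start_countdown_overlay",
    }
    return actions.get((mode, user_input), "no_op")

mode = select_interaction_mode("wall_clock", context="office")
print(interpret_input(mode, "tap"))  # -> start_countdown_overlay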
-
10.
Publication No.: US09754420B2
Publication Date: 2017-09-05
Application No.: US15262973
Filing Date: 2016-09-12
Applicant: Microsoft Technology Licensing, LLC
Inventor: Michael Scavezze , Jonathan Steed , Stephen Latta , Kevin Geisner , Daniel McCulloch , Brian Mount , Ryan Hastings , Phillip Charles Heckinger
CPC classification number: G06T19/006 , G02B27/0093 , G02B27/0172 , G02B2027/0138 , G02B2027/014 , G06F3/011 , G06K9/6267 , G06T19/20
Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.