-
Publication No.: US20170132839A1
Publication Date: 2017-05-11
Application No.: US14936480
Filing Date: 2015-11-09
CPC Classification: G06T19/003 , G02B27/0172 , G02B2027/0138 , G02B2027/014 , G02B2027/0141 , G06T7/2033 , G06T7/246 , G06T7/60 , G06T19/006 , G06T2210/21
Abstract: In various embodiments, computerized methods and systems for identifying object paths to navigate objects in scene-aware device environments are provided. An object path identification mechanism supports identifying object paths. In operation, a guide path for navigating an object from a start point to an end point in a scene-aware device environment is identified. A guide path can be predefined or recorded in real time. A visibility check, such as a look-ahead operation, is performed based on the guide path. Based on the visibility check, a path segment to advance the object from the start point toward the end point is determined. The path segment can optionally be modified or refined based on several factors. The object is caused to advance along the path segment. Iteratively performing visibility checks and traversal actions moves the object from the start point to the end point. The path segments define the object path.
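The iterative look-ahead traversal the abstract describes can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the `visible` predicate, the 2-D point representation, and the greedy farthest-visible-waypoint rule are all assumptions standing in for the unspecified visibility check.

```python
# Hypothetical sketch of iterative guide-path traversal: repeatedly look ahead
# along the guide path, take the farthest waypoint in line of sight as the next
# path segment, and advance until the end point is reached.

from typing import Callable, List, Tuple

Point = Tuple[float, float]

def identify_object_path(
    guide_path: List[Point],
    visible: Callable[[Point, Point], bool],
) -> List[Point]:
    """Return the segment endpoints from the start to the end of guide_path."""
    path = [guide_path[0]]
    i = 0
    while i < len(guide_path) - 1:
        # Visibility check: look ahead for the farthest visible waypoint.
        j = i + 1
        for k in range(len(guide_path) - 1, i, -1):
            if visible(guide_path[i], guide_path[k]):
                j = k
                break
        path.append(guide_path[j])  # advance the object along this segment
        i = j
    return path

# With no obstacles, the look-ahead collapses the path into one direct segment.
waypoints = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (3.0, 1.0)]
print(identify_object_path(waypoints, lambda a, b: True))
# → [(0.0, 0.0), (3.0, 1.0)]
```

With an occlusion-aware `visible` predicate, intermediate waypoints survive wherever the line of sight is blocked, which is how the iterated checks yield a multi-segment object path.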
-
Publication No.: US20180004308A1
Publication Date: 2018-01-04
Application No.: US15199742
Filing Date: 2016-06-30
Applicant: Daniel Joseph McCulloch , Nicholas Gervase Fajt , Adam G. Poulos , Christopher Douglas Edmonds , Lev Cherkashin , Brent Charles Allen , Constantin Dulu , Muhammad Jabir Kapasi , Michael Grabner , Michael Edward Samples , Cecilia Bong , Miguel Angel Susffalich , Varun Ramesh Mani , Anthony James Ambrus , Arthur C. Tomlin , James Gerard Dack , Jeffrey Alan Kohler , Eric S. Rehmeyer , Edward D. Parker
Inventor: Daniel Joseph McCulloch , Nicholas Gervase Fajt , Adam G. Poulos , Christopher Douglas Edmonds , Lev Cherkashin , Brent Charles Allen , Constantin Dulu , Muhammad Jabir Kapasi , Michael Grabner , Michael Edward Samples , Cecilia Bong , Miguel Angel Susffalich , Varun Ramesh Mani , Anthony James Ambrus , Arthur C. Tomlin , James Gerard Dack , Jeffrey Alan Kohler , Eric S. Rehmeyer , Edward D. Parker
IPC Classification: G06F3/03 , G06F3/038 , G06F3/01 , G06F3/0346
Abstract: In embodiments of a camera-based input device, the input device includes an inertial measurement unit that collects motion data associated with velocity and acceleration of the input device in an environment, such as in three-dimensional (3D) space. The input device also includes at least two visible light cameras that capture images of the environment. A positioning application is implemented to receive the motion data from the inertial measurement unit, and to receive the images of the environment from the at least two visible light cameras. The positioning application can then determine positions of the input device based on the motion data and the images correlated with a map of the environment, and track a motion of the input device in the environment based on the determined positions of the input device.
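The fusion of inertial motion data with camera-derived position fixes can be sketched as below. This is an illustrative assumption: the patent does not specify its correlation method, so a simple 1-D predict/correct (complementary filter) step stands in for it, and all state shapes and numbers are hypothetical.

```python
# Hypothetical sketch of IMU + camera positioning: dead-reckon from inertial
# acceleration, then blend the prediction toward a camera-derived position fix.

from dataclasses import dataclass

@dataclass
class State:
    position: float  # 1-D for simplicity; a real device tracks full 3D pose
    velocity: float

def predict(state: State, accel: float, dt: float) -> State:
    """Dead-reckon one time step from the IMU acceleration sample."""
    velocity = state.velocity + accel * dt
    position = state.position + velocity * dt
    return State(position, velocity)

def correct(state: State, camera_position: float, gain: float = 0.2) -> State:
    """Blend the IMU prediction toward the camera/map position fix."""
    blended = (1 - gain) * state.position + gain * camera_position
    return State(blended, state.velocity)

state = State(position=0.0, velocity=0.0)
state = predict(state, accel=1.0, dt=0.1)     # IMU step: drifts over time
state = correct(state, camera_position=0.02)  # camera fix: bounds the drift
```

The design point the abstract implies is complementary strengths: the IMU is high-rate but drifts, while the camera fixes are absolute against the map but lower-rate, so the filter leans on each where it is reliable.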
-
Publication No.: US20170371432A1
Publication Date: 2017-12-28
Application No.: US15192329
Filing Date: 2016-06-24
Applicant: Anatolie Gavriliuc , Shawn Crispin Wright , Jeffrey Alan Kohler , Quentin Simon Charles Miller , Scott Francis Fullam , Sergio Paolantonio , Michael Edward Samples , Anthony James Ambrus
Inventor: Anatolie Gavriliuc , Shawn Crispin Wright , Jeffrey Alan Kohler , Quentin Simon Charles Miller , Scott Francis Fullam , Sergio Paolantonio , Michael Edward Samples , Anthony James Ambrus
IPC Classification: G06F3/038 , G06F3/0346 , G06F3/01 , G06T19/00 , G06F3/0354
CPC Classification: G06F3/0383 , G06F3/011 , G06F3/016 , G06F3/017 , G06F3/0304 , G06F3/0346 , G06F3/03542 , G06F3/03545 , G06F3/038 , G06F2203/0381 , G06T19/006 , G08C17/02 , G08C2201/32 , G08C2201/71
Abstract: In various embodiments, methods and systems for implementing integrated free space and surface inputs are provided. An integrated free space and surface input system includes a mixed-input pointing device for interacting with and controlling interface objects using free space inputs and surface inputs, along with trigger buttons, pressure sensors, and haptic feedback associated with the mixed-input pointing device. Free space movement data and surface movement data are tracked and determined for the mixed-input pointing device. An interface input is detected for the mixed-input pointing device transitioning from a first input to a second input, such as from a free space input to a surface input, or vice versa. The interface input is processed based on accessing the free space movement data and the surface movement data. An output for the interface input is communicated from the mixed-input pointing device to interact with and control an interface.
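The mode transition the abstract describes can be sketched as a small routing step. This is a hypothetical illustration: inferring surface contact from a tip pressure-sensor reading, the threshold value, and the movement-data shapes are all assumptions, not details from the patent.

```python
# Hypothetical sketch of mixed-input processing: infer the current input mode
# from a pressure-sensor reading, then route the matching movement data
# (free-space or surface) to the interface.

FREE_SPACE = "free_space"
SURFACE = "surface"

def detect_mode(pressure: float, threshold: float = 0.1) -> str:
    """Surface contact is inferred when tip pressure exceeds the threshold."""
    return SURFACE if pressure > threshold else FREE_SPACE

def process_input(pressure, free_space_delta, surface_delta):
    """Select the movement data that matches the detected input mode."""
    mode = detect_mode(pressure)
    delta = surface_delta if mode == SURFACE else free_space_delta
    return {"mode": mode, "delta": delta}

# Lifted off the surface: pressure is zero, so 3D free-space motion is used.
print(process_input(0.0, (0.3, 0.1, -0.2), (0.0, 0.0)))
# → {'mode': 'free_space', 'delta': (0.3, 0.1, -0.2)}
```

Calling `process_input` once per sensor sample means a lift-off or touch-down event shows up as a change in the returned `mode` between consecutive samples, which is one simple way to detect the first-input-to-second-input transition the abstract mentions.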
-
Publication No.: US09928648B2
Publication Date: 2018-03-27
Application No.: US14936480
Filing Date: 2015-11-09
CPC Classification: G06T19/003 , G02B27/0172 , G02B2027/0138 , G02B2027/014 , G02B2027/0141 , G06T7/2033 , G06T7/246 , G06T7/60 , G06T19/006 , G06T2210/21
Abstract: In various embodiments, computerized methods and systems for identifying object paths to navigate objects in scene-aware device environments are provided. An object path identification mechanism supports identifying object paths. In operation, a guide path for navigating an object from a start point to an end point in a scene-aware device environment is identified. A guide path can be predefined or recorded in real time. A visibility check, such as a look-ahead operation, is performed based on the guide path. Based on the visibility check, a path segment to advance the object from the start point toward the end point is determined. The path segment can optionally be modified or refined based on several factors. The object is caused to advance along the path segment. Iteratively performing visibility checks and traversal actions moves the object from the start point to the end point. The path segments define the object path.
-
Publication No.: US20170358138A1
Publication Date: 2017-12-14
Application No.: US15182490
Filing Date: 2016-06-14
CPC Classification: G06T19/006 , G06F3/011
Abstract: In various embodiments, methods and systems for rendering augmented reality objects based on user heights are provided. Height data of a user of an augmented reality device can be determined. The height data relates to a viewing perspective from an eye level of the user. Placement data for an augmented reality object is generated based on a constraint configuration that is associated with the augmented reality object for user-height-based rendering. The constraint configuration includes rules that support generating placement data for rendering augmented reality objects based on the user height data. The augmented reality object is rendered based on the placement data. Augmented reality objects are rendered in a real-world scene such that each object is personalized for the user during an augmented reality experience. In shared experiences, with multiple users viewing a single augmented reality object, the object can be rendered based on a particular user's height.
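One way the constraint configuration could map height data to placement data is sketched below. The rule names (`offset_from_eye_level`, `min_height`, `max_height`) and all numeric values are illustrative assumptions; the patent does not publish its rule set.

```python
# Hypothetical sketch of user-height-based placement: a constraint
# configuration turns the user's eye-level height into a render height
# for an augmented reality object, clamped to an allowed band.

def placement_height(eye_level: float, constraints: dict) -> float:
    """Apply constraint rules to derive the object's render height (meters)."""
    h = eye_level + constraints.get("offset_from_eye_level", 0.0)
    # Clamp so the object stays within a comfortable viewing band
    # regardless of how tall or short the user is.
    return max(constraints["min_height"], min(constraints["max_height"], h))

config = {"offset_from_eye_level": -0.25, "min_height": 0.8, "max_height": 2.0}
print(placement_height(1.75, config))  # → 1.5 (25 cm below eye level)
```

In a shared experience, each device could evaluate the same constraint configuration against its own user's eye level, which matches the abstract's note that a single shared object may still be rendered per a particular user's height.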
-