Abstract:
Methods, systems, computer-readable media, and apparatuses for generating an Augmented Reality (AR) object are presented. The method may include capturing an image of one or more target objects, wherein the one or more target objects are positioned on a pre-defined background. The method may also include segmenting the image into one or more areas corresponding to the one or more target objects and one or more areas corresponding to the pre-defined background. The method may additionally include converting the one or more areas corresponding to the one or more target objects to a digital image. The method may further include generating one or more AR objects corresponding to the one or more target objects, based at least in part on the digital image.
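As a rough illustration only, not the claimed implementation, the sketch below segments a captured frame against an assumed solid-color (green-screen) background and cuts out the target pixels as a separate digital image. The function names, the background color, and the tolerance value are hypothetical choices for the sketch.

```python
import numpy as np

def segment_target(image: np.ndarray, background_rgb=(0, 255, 0), tol=40):
    """Split an RGB image into a target-object mask and a background mask,
    assuming the target sits on a known solid-color background."""
    diff = np.abs(image.astype(int) - np.array(background_rgb)).sum(axis=-1)
    background_mask = diff <= tol        # pixels close to the known background color
    target_mask = ~background_mask       # everything else is treated as the target
    return target_mask, background_mask

def extract_target(image: np.ndarray, target_mask: np.ndarray) -> np.ndarray:
    """Produce a digital image of only the target: background pixels zeroed out."""
    cutout = image.copy()
    cutout[~target_mask] = 0
    return cutout

# Example: a synthetic 4x4 "green screen" frame with a 2x2 red target object.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :] = (0, 255, 0)                   # pre-defined green background
frame[1:3, 1:3] = (200, 30, 30)             # the target object
t_mask, _ = segment_target(frame)
print(extract_target(frame, t_mask)[1, 1])  # -> [200  30  30]
```

The extracted cutout stands in for the "digital image" from which an AR object would then be generated.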
Abstract:
A method for spatial interaction in Augmented Reality (AR) includes displaying an AR scene that includes an image of a real-world scene, a virtual target object, and a virtual cursor. A position of the virtual cursor is provided according to a first coordinate system within the AR scene. A user device tracks a pose of the user device relative to a user hand according to a second coordinate system. The second coordinate system is mapped to the first coordinate system to control movements of the virtual cursor. In a first mapping mode, virtual cursor movement is controlled to change a distance between the virtual cursor and the virtual target object. In a second mapping mode, virtual cursor movement is controlled to manipulate the virtual target object. User input is detected to control which of the first mapping mode or the second mapping mode is used.
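The mapping between the two coordinate systems and the two mapping modes could be pictured roughly as below. The `CursorController` class, the 3x3 device-to-scene transform, and the explicit mode-switch call are assumptions made for illustration, not the patented method.

```python
import numpy as np

class CursorController:
    """Illustrative sketch: map hand motion tracked in a device coordinate
    system into the AR scene coordinate system, with two mapping modes."""

    MODE_MOVE_CURSOR = 1   # first mapping mode: change cursor-to-target distance
    MODE_MANIPULATE = 2    # second mapping mode: manipulate the target object

    def __init__(self, device_to_scene: np.ndarray):
        self.device_to_scene = device_to_scene   # assumed 3x3 mapping between systems
        self.cursor = np.zeros(3)
        self.target = np.array([0.0, 0.0, 1.0])
        self.mode = self.MODE_MOVE_CURSOR

    def on_user_input(self, mode: int):
        """Detected user input selects which mapping mode is active."""
        self.mode = mode

    def on_hand_motion(self, delta_device: np.ndarray):
        """Apply a hand displacement measured in the device coordinate system."""
        delta_scene = self.device_to_scene @ delta_device
        if self.mode == self.MODE_MOVE_CURSOR:
            self.cursor += delta_scene           # moves cursor toward/away from target
        else:
            self.target += delta_scene           # manipulates the target object

ctrl = CursorController(np.eye(3))
ctrl.on_hand_motion(np.array([0.0, 0.0, 0.5]))   # cursor approaches the target
ctrl.on_user_input(CursorController.MODE_MANIPULATE)
ctrl.on_hand_motion(np.array([0.1, 0.0, 0.0]))   # now the target itself is moved
print(ctrl.cursor, ctrl.target)
```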
Abstract:
Methods, systems, computer-readable media, and apparatuses for generating an Augmented Reality (AR) object are presented. The apparatus can include memory and one or more processors coupled to the memory. The one or more processors can be configured to receive an image of at least a portion of a real-world scene including a target object. The one or more processors can also be configured to generate an AR object corresponding to the target object and including a plurality of parts. The one or more processors can further be configured to receive a user input associated with a designated part of the plurality of parts and manipulate the designated part based on the received user input.
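A minimal sketch of an AR object composed of named parts, where user input manipulates only the designated part, might look like the following. The `ARObject` and `Part` structures and the translation-only manipulation are illustrative assumptions.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Part:
    name: str
    offset: np.ndarray            # position of the part relative to the object origin

@dataclass
class ARObject:
    parts: dict = field(default_factory=dict)

    def add_part(self, part: Part):
        self.parts[part.name] = part

    def manipulate(self, part_name: str, translation: np.ndarray):
        """Apply a user-driven translation to the one designated part only."""
        self.parts[part_name].offset = self.parts[part_name].offset + translation

# Build a toy AR object with two parts and move only the designated one.
obj = ARObject()
obj.add_part(Part("lid", np.array([0.0, 0.1, 0.0])))
obj.add_part(Part("body", np.array([0.0, 0.0, 0.0])))
obj.manipulate("lid", np.array([0.0, 0.05, 0.0]))   # user input targets the "lid" part
print(obj.parts["lid"].offset)                       # -> [0.   0.15 0.  ]
```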
Abstract:
Techniques are presented for constructing a digital representation of a physical environment. In some embodiments, a method includes obtaining image data indicative of the physical environment; receiving gesture input data from a user corresponding to at least one location in the physical environment, based on the obtained image data; detecting at least one discontinuity in the physical environment near the at least one location corresponding to the received gesture input data; and generating a digital surface corresponding to a surface in the physical environment, based on the received gesture input data and the at least one discontinuity.
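One way to picture the discontinuity step is sketched below with a toy depth map: a simple gradient threshold stands in for discontinuity detection, and the mean height of the smooth patch around the gesture location stands in for the generated digital surface. Both simplifications are assumptions, not the disclosed technique.

```python
import numpy as np

def find_discontinuities(depth: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Mark pixels where depth changes abruptly between neighbors
    (a simple stand-in for discontinuity detection)."""
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1]))
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    return (gx > threshold) | (gy > threshold)

def surface_near_gesture(depth: np.ndarray, point: tuple, radius: int = 2) -> float:
    """Return the mean depth of the smooth region around a gesture point,
    excluding detected discontinuities (a very rough 'digital surface')."""
    r, c = point
    edges = find_discontinuities(depth)
    r0, r1 = max(r - radius, 0), min(r + radius + 1, depth.shape[0])
    c0, c1 = max(c - radius, 0), min(c + radius + 1, depth.shape[1])
    patch, patch_edges = depth[r0:r1, c0:c1], edges[r0:r1, c0:c1]
    return float(patch[~patch_edges].mean())

# Toy depth map: a tabletop at depth 1.0 with a step (discontinuity) at column 5.
depth = np.full((8, 8), 1.0)
depth[:, 5:] = 2.0
print(surface_near_gesture(depth, point=(4, 2)))   # -> 1.0 (surface left of the step)
```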
Abstract:
A user device receives an image stream from a user side of the user device and an image stream from a target side of the user device. The user device acquires a coordinate system for the user, acquires its own coordinate system, and relates the two coordinate systems to a global coordinate system. The user device then determines whether the user has moved and/or whether the user device has moved. Movement of the user and/or the user device is used as an input modality to control the user's interactions in the augmented reality environment.
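A hedged sketch of relating the two tracked coordinate systems to a global one, and of turning movement into an input event, is shown below. The homogeneous transforms and the displacement threshold are hypothetical values chosen for the example.

```python
import numpy as np

def to_global(point_in_local: np.ndarray, local_to_global: np.ndarray) -> np.ndarray:
    """Express a homogeneous point given in a local coordinate system
    (user-side or device-side) in the global coordinate system."""
    return local_to_global @ point_in_local

def movement_event(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.05):
    """Treat a displacement above a threshold as an input modality."""
    displacement = np.linalg.norm(curr[:3] - prev[:3])
    return displacement > threshold, displacement

# Hypothetical transforms: user-facing camera frame and device frame to global.
user_to_global = np.eye(4)
device_to_global = np.eye(4)
device_to_global[0, 3] = 0.2          # device offset 20 cm along global x

user_prev = to_global(np.array([0.0, 0.0, 0.5, 1.0]), user_to_global)
user_curr = to_global(np.array([0.0, 0.1, 0.5, 1.0]), user_to_global)
moved, dist = movement_event(user_prev, user_curr)
print(moved, round(dist, 3))          # -> True 0.1 : user movement drives interaction
```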