Abstract:
A method, an apparatus, and a computer program product provide feedback to a user of an augmented reality (AR) device having an optical see-through head mounted display (HMD). The apparatus obtains a location on the HMD corresponding to a user interaction with an object displayed on the HMD. The object may be an icon on the HMD, and the user interaction may be an attempt by the user to select the icon through an eye gaze or gesture. The apparatus determines whether a spatial relationship between the location of user interaction and the object satisfies a criterion, and outputs a sensory indication (e.g., a visual display, a sound, or a vibration) when the criterion is satisfied. The apparatus may be configured to output a sensory indication when the user interaction is successful (e.g., the icon was selected). Alternatively, the apparatus may be configured to output a sensory indication when the user interaction fails.
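A minimal sketch of the criterion check described above, assuming a circular selection boundary around the icon and caller-supplied feedback callbacks; both are illustrative choices, not the disclosed implementation.

```python
# Sketch of the spatial-criterion check: the interaction point is compared
# against a circular boundary around the displayed icon, and success/failure
# feedback is emitted. Boundary shape and callbacks are assumptions.
from dataclasses import dataclass
import math

@dataclass
class Icon:
    x: float        # screen-space center of the displayed object (pixels)
    y: float
    radius: float   # hypothetical selection radius around the icon

def interaction_feedback(icon: Icon, px: float, py: float,
                         on_success, on_failure) -> bool:
    """Check the spatial criterion and emit the configured sensory feedback."""
    hit = math.hypot(px - icon.x, py - icon.y) <= icon.radius
    (on_success if hit else on_failure)()   # e.g., flash the icon or vibrate
    return hit

# Example: a gaze point ~11 px from the icon center selects a 30 px icon.
interaction_feedback(Icon(100, 80, 30), 110, 85,
                     lambda: print("selected"), lambda: print("missed"))
```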
Abstract:
An apparatus for calibrating an augmented reality (AR) device having an optical see-through head mounted display (HMD) obtains eye coordinates in an eye coordinate system corresponding to a location of an eye of a user of the AR device, and obtains object coordinates in a world coordinate system corresponding to a location of a real-world object in the field of view of the AR device, as captured by a scene camera having a scene camera coordinate system. The apparatus calculates screen coordinates in a screen coordinate system corresponding to a display point on the HMD, where the calculating is based on the obtained eye coordinates and the obtained object coordinates. The apparatus calculates calibration data based on the screen coordinates, the object coordinates, and a transformation from the world coordinate system to the scene camera coordinate system. The apparatus then derives subsequent screen coordinates for displaying AR content in relation to other real-world object points based on the calibration data.
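A minimal geometric sketch of the screen-coordinate calculation, assuming the display point can be modeled as the intersection of the eye-to-object ray with the HMD screen plane; the plane parameters are illustrative, and the world-to-scene-camera transform used for the calibration data is omitted.

```python
# Sketch: the display point is where the ray from the eye through the
# real-world object point crosses the screen plane. The plane model is an
# assumption for illustration, not the patent's calibration model.
import numpy as np

def screen_point(eye_w: np.ndarray, obj_w: np.ndarray,
                 plane_point: np.ndarray, plane_normal: np.ndarray) -> np.ndarray:
    """Intersect the eye-to-object ray with the screen plane."""
    direction = obj_w - eye_w
    t = np.dot(plane_point - eye_w, plane_normal) / np.dot(direction, plane_normal)
    return eye_w + t * direction   # 3-D point on the screen plane

# Example: eye at the origin, screen plane at z = 1, object ahead at z = 5.
eye = np.array([0.0, 0.0, 0.0])
obj = np.array([0.5, 0.2, 5.0])
p = screen_point(eye, obj, plane_point=np.array([0.0, 0.0, 1.0]),
                 plane_normal=np.array([0.0, 0.0, 1.0]))
print(p)   # -> [0.1  0.04 1.  ]
```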
Abstract:
A method, an apparatus, and a computer program product render a graphical user interface (GUI) on an optical see-through head mounted display (HMD). The apparatus obtains a location on the HMD corresponding to a user interaction with a GUI object displayed on the HMD. The GUI object may be an icon on the HMD, and the user interaction may be an attempt by the user to select the icon through an eye gaze or gesture. The apparatus determines whether a spatial relationship between the location of user interaction and the GUI object satisfies a criterion, and adjusts a parameter of the GUI object when the criterion is not satisfied. The parameter may be one or more of a size of the GUI object, a size of a boundary associated with the GUI object, or a location of the GUI object.
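A minimal sketch of the adaptive adjustment described above, growing the object's selection boundary when an interaction misses it; the growth factor and cap are illustrative assumptions.

```python
# Sketch: when the interaction point falls outside the GUI object's boundary
# (criterion not satisfied), one listed parameter (boundary size) is adjusted.
from dataclasses import dataclass
import math

@dataclass
class GuiObject:
    x: float
    y: float
    radius: float   # selection boundary around the object (pixels)

def adjust_on_miss(obj: GuiObject, px: float, py: float,
                   grow: float = 1.2, max_radius: float = 120.0) -> GuiObject:
    """Grow the object's boundary when the criterion is not satisfied."""
    if math.hypot(px - obj.x, py - obj.y) > obj.radius:   # selection missed
        obj.radius = min(obj.radius * grow, max_radius)   # adjust parameter
    return obj

# Example: a gaze 50 px from a 20 px icon misses, so the boundary grows to 24 px.
print(adjust_on_miss(GuiObject(100, 80, 20), 150, 80).radius)
```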
Abstract:
Techniques are described for conducting online visual searches through an augmented reality (AR) device having an optical see-through head mounted display (HMD). An apparatus identifies a portion of an object in a field of view of the HMD based on user interaction with the HMD. The portion includes searchable content, such as a barcode. The user interaction may be an eye gaze or a gesture. A user interaction point in relation to the HMD screen is tracked to locate a region of the object that includes the portion, and the portion is detected within that region. The apparatus captures an image of the portion. Because the identified portion does not encompass the entirety of the object, the size of the image is less than the size of the object in the field of view. The apparatus transmits the image to a visual search engine.
Abstract:
A method, an apparatus, and a computer program product conduct online visual searches through an augmented reality (AR) device having an optical see-through head mounted display (HMD). An apparatus identifies a portion of an object in a field of view of the HMD based on user interaction with the HMD. The portion includes searchable content, such as a barcode. The user interaction may be an eye gaze or a gesture. A user interaction point in relation to the HMD screen is tracked to locate a region of the object that includes the portion, and the portion is detected within that region. The apparatus captures an image of the portion. Because the identified portion does not encompass the entirety of the object, the size of the image is less than the size of the object in the field of view. The apparatus transmits the image to a visual search engine.
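A minimal sketch of the capture step shared by the two abstracts above: crop a region around the tracked interaction point so the transmitted image covers only the portion of interest (e.g., a barcode) rather than the whole object. The frame layout and region size are illustrative assumptions; barcode detection and the upload to a search engine are left as comments.

```python
# Sketch: crop the camera frame around the user interaction point so the
# image sent for visual search is smaller than the object in the field of view.
import numpy as np

def crop_portion(frame: np.ndarray, px: int, py: int, half: int = 64) -> np.ndarray:
    """Return a crop of the camera frame centered on the interaction point."""
    h, w = frame.shape[:2]
    y0, y1 = max(py - half, 0), min(py + half, h)
    x0, x1 = max(px - half, 0), min(px + half, w)
    return frame[y0:y1, x0:x1]   # smaller than the full object in view

# The crop would then be scanned for the searchable content (e.g., a barcode)
# and, if found, encoded and transmitted to a visual search engine.
```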
Abstract:
A method performed by a wearable device is described. The method includes receiving geometric information from a controller. The geometric information includes a point cloud and a key frame of the controller. The method also includes receiving first six degree of freedom (6DoF) pose information from the controller. The method further includes synchronizing a coordinate system of the wearable device with a coordinate system of the controller based on the point cloud and the key frame of the controller. The method additionally includes rendering content in an application based on the first 6DoF pose information.
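A rough sketch of the synchronization step, assuming the alignment between the two coordinate systems reduces to a rigid transform estimated from matched 3-D point pairs (the Kabsch algorithm); how the point cloud and key frame are matched is not specified by the abstract and is omitted here.

```python
# Sketch: estimate a rigid transform aligning the controller's point cloud
# (src) with the wearable's map (dst), then map controller-frame pose data
# into the wearable's coordinates. Correspondence search is assumed done.
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t such that dst ~= R @ src + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

def controller_to_wearable(R: np.ndarray, t: np.ndarray,
                           position: np.ndarray) -> np.ndarray:
    """Map a controller-frame 6DoF pose position into wearable coordinates."""
    return R @ position + t
```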
Abstract:
The disclosure generally relates to a low-cost, low-power smart parking system, and in particular to forming a multi-hop wireless mesh network that can be used to estimate an occupancy map at a parking facility. The mesh network may be formed according to messages broadcast from wireless identity transceivers corresponding to vehicles parked at the parking facility, where each message includes the unique identifier assigned to the broadcasting wireless identity transceiver along with the unique identifiers contained in any messages that the broadcasting transceiver receives. An occupancy map at the parking facility can then be estimated according to the formed mesh network and a known physical layout associated with the parking facility. Furthermore, the broadcast messages can be used to provide various other parking functions (e.g., contacting vehicle owners, directing drivers to available spaces, assisting with locating parked vehicles, etc.).
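A rough sketch of how the rebroadcast identifiers could yield an occupancy estimate: the messages define a connectivity graph, which is compared against the facility's known layout. The message format and the crude degree-based matching rule below are illustrative assumptions, not the disclosed estimation method.

```python
# Sketch: build a connectivity graph from broadcast messages, then narrow
# down each transceiver's possible parking space using the known layout.
from collections import defaultdict

def build_mesh(messages):
    """messages: iterable of (sender_id, ids_heard) tuples from broadcasts."""
    neighbors = defaultdict(set)
    for sender, heard in messages:
        for other in heard:
            neighbors[sender].add(other)
            neighbors[other].add(sender)   # treat radio links as symmetric
    return neighbors

def candidate_spaces(neighbors, layout):
    """layout: space_id -> number of other spaces within radio range.

    A transceiver that hears k neighbors can only occupy a space with at
    least k spaces in radio range, which narrows each vehicle's location.
    """
    return {tid: {s for s, k in layout.items() if k >= len(seen)}
            for tid, seen in neighbors.items()}
```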
Abstract:
Aspects of the disclosed technology relate to an apparatus including a memory and at least one processor. The at least one processor can obtain at least one image of a scene and determine a portion of interest within the scene based on a first input. The first input can include a non-touch input. The at least one processor can output, in response to the first input, content associated with the portion of interest and receive a second input from the user. The second input can include a non-eye-gaze input and be associated with the content. An action can be initiated by the at least one processor based on the second input.
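An illustrative sketch of the two-input flow described above: a non-touch first input (here, eye gaze) selects a portion of the scene, content for that portion is output, and a non-eye-gaze second input (e.g., a voice command) triggers an action. All names and the crop logic are hypothetical.

```python
# Sketch of the interaction flow: gaze selects the portion of interest,
# content is output, and a non-eye-gaze second input initiates the action.
from dataclasses import dataclass
from typing import Callable

@dataclass
class UserInput:
    kind: str      # "eye_gaze", "voice", "gesture", ...
    command: str   # e.g., "read_aloud"

def portion_of_interest(image, gaze_xy, half=50):
    """Crop a region of the scene image (list of rows) around the gaze point."""
    x, y = gaze_xy
    return [row[max(x - half, 0):x + half]
            for row in image[max(y - half, 0):y + half]]

def handle_scene(image, gaze_xy, lookup: Callable, second: UserInput,
                 actions: dict) -> None:
    region = portion_of_interest(image, gaze_xy)   # first input: non-touch
    content = lookup(region)                       # content for the portion
    print(content)                                 # output in response to it
    if second.kind != "eye_gaze":                  # second input: non-eye-gaze
        actions[second.command](content)           # initiate the action
```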