Abstract:
Methods and apparatus relating to enabling augmented reality applications using eye gaze tracking are disclosed. An exemplary method according to the disclosure includes displaying an image to a user of a scene viewable by the user, receiving information indicative of an eye gaze of the user, determining an area of interest within the image based on the eye gaze information, determining an image segment based on the area of interest, initiating an object recognition process on the image segment, and displaying results of the object recognition process.
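A minimal sketch of the gaze-driven recognition flow described above. The gaze representation, crop radius, and recognizer callable are illustrative placeholders, not the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float  # normalized horizontal gaze position (0..1)
    y: float  # normalized vertical gaze position (0..1)

def area_of_interest(gaze: GazeSample, width: int, height: int, radius: int = 100):
    """Map a gaze sample onto pixel coordinates and return a bounding box."""
    cx, cy = int(gaze.x * width), int(gaze.y * height)
    return (max(cx - radius, 0), max(cy - radius, 0),
            min(cx + radius, width), min(cy + radius, height))

def recognize_at_gaze(image, gaze: GazeSample, recognizer):
    """Crop the image segment around the gaze point and run recognition on it.

    `image` is assumed to be a NumPy-style array of shape (H, W, C);
    `recognizer` is any callable, e.g. a CNN classifier or keypoint matcher.
    """
    h, w = image.shape[:2]
    x0, y0, x1, y1 = area_of_interest(gaze, w, h)
    segment = image[y0:y1, x0:x1]   # image segment determined from the area of interest
    return recognizer(segment)
```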
Abstract:
Systems and methods for performing localization and mapping with a mobile device are disclosed. In one embodiment, a method for performing localization and mapping with a mobile device includes identifying geometric constraints associated with a current area at which the mobile device is located, obtaining at least one image of the current area captured by at least a first camera of the mobile device, obtaining data associated with the current area via at least one of a second camera of the mobile device or a sensor of the mobile device, and performing localization and mapping for the current area by applying the geometric constraints and the data associated with the current area to the at least one image.
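A minimal sketch of one way a geometric constraint could be applied during mapping, assuming a Manhattan-world style rule (indoor surfaces aligned with three orthogonal axes). The constraint choice and NumPy representation are assumptions for illustration; feature extraction, sensor fusion, and the pose solver are omitted.

```python
import numpy as np

def enforce_orthogonal_planes(plane_normals):
    """Snap estimated plane normals to the nearest signed coordinate axis,
    constraining the mapped surfaces to an axis-aligned layout."""
    axes = np.eye(3)
    constrained = []
    for n in plane_normals:
        n = n / np.linalg.norm(n)
        best = max(range(3), key=lambda i: abs(float(np.dot(n, axes[i]))))
        constrained.append(np.sign(np.dot(n, axes[best])) * axes[best])
    return np.array(constrained)

# Example: a wall normal estimated from image features and depth-sensor data
# is pulled onto the nearest axis before being added to the map.
print(enforce_orthogonal_planes(np.array([[0.95, 0.05, 0.30]])))
```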
Abstract:
A method of performing context inference is described. The method includes collecting ambient light at a spectrometer sensor integrated in a portable device, characterizing the collected light to obtain optical information, comparing the optical information to optical data predetermined to match one or more contexts, inferring at least one characteristic of a specific context based on the comparison, and determining a probability that the portable device is in the specific context.
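A minimal sketch of the spectral-matching step, assuming each context is represented by a stored reference spectrum (e.g. "outdoor daylight", "office fluorescent"). The reference values, number of bands, and distance-to-probability mapping are assumptions; the abstract does not specify them.

```python
import numpy as np

REFERENCE_SPECTRA = {
    "outdoor_daylight":    np.array([0.9, 0.8, 0.7, 0.6]),  # hypothetical band energies
    "indoor_fluorescent":  np.array([0.2, 0.9, 0.3, 0.1]),
    "indoor_incandescent": np.array([0.1, 0.3, 0.6, 0.9]),
}

def infer_context(measured_spectrum):
    """Compare the measured optical information against stored context spectra
    and return per-context probabilities via a softmax over negative distances."""
    names = list(REFERENCE_SPECTRA)
    dists = np.array([np.linalg.norm(measured_spectrum - REFERENCE_SPECTRA[n])
                      for n in names])
    weights = np.exp(-dists)
    probs = weights / weights.sum()
    return dict(zip(names, probs))

# Example: a measurement dominated by the second band suggests fluorescent light.
print(infer_context(np.array([0.25, 0.85, 0.30, 0.15])))
```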
Abstract:
Systems and methods share context information on a neighbor aware network. In one aspect, a context providing device receives a plurality of responses to a discovery query from a context consuming device, and tailors services it offers to the context consuming device based on the responses. In another aspect, a context providing device indicates in its response to a discovery query which services or local context information it can provide to the context consuming device, and also a cost associated with providing the service or the local context information. In some aspects, the cost is in units of monetary currency. In other aspects, the cost is in units of user interface display made available to an entity associated with the context providing device in exchange for the services or local context information offered to the context consuming device.
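A minimal sketch of a discovery response that advertises available context services together with an associated cost, as in the second aspect above. The field names, cost units, and message layout are assumptions for illustration only, not the neighbor aware network frame format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceOffer:
    name: str       # e.g. "indoor_location" or "ambient_noise_level"
    cost: float     # cost of providing the service or local context information
    cost_unit: str  # e.g. "USD" or "ad_impressions" (display made available)

@dataclass
class DiscoveryResponse:
    provider_id: str
    offers: List[ServiceOffer] = field(default_factory=list)

def build_response(provider_id: str, available: dict) -> DiscoveryResponse:
    """Assemble a discovery response advertising each locally available
    service together with its cost."""
    offers = [ServiceOffer(name, cost, unit)
              for name, (cost, unit) in available.items()]
    return DiscoveryResponse(provider_id, offers)

# Example: a device offering local context in exchange for ad impressions.
resp = build_response("device-42", {"indoor_location": (3.0, "ad_impressions")})
```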
Abstract:
A multi-user augmented reality (AR) system operates without a previously acquired common reference by generating a reference image on the fly. The reference image is produced by capturing at least two images of a planar object and using the images to determine a pose (position and orientation) of a first mobile platform with respect to the planar object. Based on the orientation of the mobile platform, an image of the planar object, which may be one of the initial images or a subsequently captured image, is warped to produce the reference image of a front view of the planar object. The reference image may be produced by the mobile platform or by, e.g., a server. Other mobile platforms may determine their pose with respect to the planar object using the reference image to perform a multi-user augmented reality application.
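A minimal sketch of producing a frontal reference image of the planar object by warping a captured view with a homography. OpenCV is used purely for illustration, and deriving the homography from four known corner correspondences (rather than from the decomposed camera pose) is an assumption.

```python
import cv2
import numpy as np

def make_reference_image(captured, src_corners, ref_size=(640, 480)):
    """Warp the captured view of the plane to a fronto-parallel reference image.

    src_corners: 4x2 array of the plane's corner pixels in the captured image,
                 ordered top-left, top-right, bottom-right, bottom-left.
    """
    w, h = ref_size
    dst_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H, _ = cv2.findHomography(np.float32(src_corners), dst_corners)
    return cv2.warpPerspective(captured, H, (w, h))
```

Other mobile platforms could then match features against this reference image to estimate their own pose relative to the plane.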
Abstract:
A mobile device, such as a smartphone or a tablet computer, can execute functionality for configuring a network device in a communication network and for subsequently controlling the operation of the network device with little manual input. The mobile device can detect sensor information from a network device. The mobile device can determine device configuration information based, at least in part, on decoding the sensor information. The mobile device can provide the device configuration information to an access point of a network. The mobile device can receive communication link information from the access point. The mobile device can provide the communication link information to the network device. The mobile device can receive a message indicating a communication link between the network device and the access point is established.
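A minimal sketch of the configuration flow described above, with each step as a placeholder call on duck-typed `mobile`, `access_point`, and `network_device` objects. All method names and the sensor/transport choices (e.g. a QR code or blinking-LED pattern) are hypothetical, used only to lay out the message sequence.

```python
def onboard_network_device(mobile, access_point, network_device):
    # 1. Detect and decode sensor information from the new device
    #    (e.g. a QR code or light pattern carrying its identity).
    sensor_info = mobile.detect_sensor_info(network_device)
    device_config = mobile.decode(sensor_info)

    # 2. Provide the device configuration information to the access point.
    access_point.register_device(device_config)

    # 3. Receive communication link information (e.g. SSID and credentials)
    #    and relay it to the network device.
    link_info = access_point.issue_link_info(device_config)
    mobile.send_to_device(network_device, link_info)

    # 4. Wait for the message indicating the communication link between the
    #    network device and the access point is established.
    return mobile.await_link_established(network_device)
```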
Abstract:
In several aspects, an electronic device and method index a repository of N documents by W words without storing, between queries, the N*W numbers specific to each word i and each document j that are normally used to compute each document j's relevance score for a query. Instead, the electronic device and method generate the N*W word-specific, document-specific numbers dynamically at query time, based on a set of W numbers corresponding to the W words and one or more sets (e.g. x sets) of N numbers corresponding to the N documents. Query-time generation of the word-specific, document-specific numbers reduces the memory otherwise required to store them. Hence, in certain aspects only W+xN numbers are maintained between queries, and these numbers are changed incrementally when a new document is added to the repository or an existing document is removed. Maintaining W+xN numbers also reduces the processing otherwise required to recompute them from scratch.
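A minimal sketch of generating a word-specific, document-specific number at query time from W word-level numbers and one set (x = 1) of N document-level numbers, rather than storing all N*W values. The particular formula (an IDF-style word weight divided by a document-length normalizer) is an assumption for illustration; the abstract does not specify the scoring function.

```python
import math

# W numbers maintained between queries: one weight per word i.
word_weight = {"gaze": 2.1, "camera": 0.7, "spectrum": 3.0}
# One set (x = 1) of N numbers maintained between queries: a normalizer per document j.
doc_norm = [120.0, 85.0, 300.0, 42.0]

def word_doc_number(word, j):
    """Generate the word-specific, document-specific number on the fly,
    instead of reading it from N*W stored values."""
    return word_weight[word] / math.sqrt(doc_norm[j])

def score(query_terms, j):
    """Relevance of document j to the query, summing query-time numbers."""
    return sum(word_doc_number(w, j) for w in query_terms if w in word_weight)

# Example: adding a document only appends one entry to doc_norm, so the
# maintained state grows incrementally rather than being rebuilt from scratch.
print(score(["gaze", "camera"], j=1))
```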
Abstract:
Methods, systems, and techniques to enhance computer vision application processing are disclosed. In particular, the methods, systems, and techniques may reduce power consumption and improve processing efficiency for computer vision applications.