Abstract:
Aspects relate to detecting gestures that correspond to a desired action, wherein the detected gestures are common across users and/or devices within a surface computing environment. Inferred intentions and goals based on context, history, affordances, and objects are employed to interpret gestures. Where there is uncertainty in the intention of the gestures for a single device or across multiple devices, the uncertainty can be communicated independently or in a coordinated manner, or users can be engaged through signaling and/or information gathering.
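As a rough illustration of the interpretation step described in this abstract, the Python sketch below scores candidate intentions from context and interaction history and, when no candidate is sufficiently certain, falls back to asking the user for clarification. The scoring weights, the confidence threshold, and names such as score_intents and interpret are assumptions for illustration, not the patented method.

```python
# Hypothetical sketch: gesture interpretation with uncertainty handling.
# Scoring weights and the clarification fallback are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff for acting without clarification

@dataclass
class Gesture:
    name: str      # e.g. "swipe", "pinch"
    context: dict  # surrounding objects, device state, etc.

def score_intents(gesture: Gesture, history: list[str]) -> dict[str, float]:
    """Assign a plausibility score to each candidate intention.

    This toy version favors intentions that recur in the interaction history;
    a real system could also weigh context, affordances, and nearby objects.
    """
    candidates = {"move_object": 0.4, "share_object": 0.3, "delete_object": 0.3}
    for intent in history:
        if intent in candidates:
            candidates[intent] += 0.2
    total = sum(candidates.values())
    return {k: v / total for k, v in candidates.items()}

def interpret(gesture: Gesture, history: list[str]) -> str:
    scores = score_intents(gesture, history)
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score < CONFIDENCE_THRESHOLD:
        # Uncertain: signal the user and gather more information.
        return f"clarify:{best_intent}"
    return best_intent

if __name__ == "__main__":
    g = Gesture(name="swipe", context={"object": "photo"})
    print(interpret(g, history=["share_object", "share_object"]))
```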
Abstract:
Some examples include transitioning between an individual mode and a collaborative mode in response to an orientation change of a device. Further, some implementations include identifying data to be shared with one or more other devices (e.g., co-located devices) in the collaborative mode. In some examples, the individual mode may be associated with an individual search, the collaborative mode may be associated with a collaborative search, and the devices may transition between the individual mode and the collaborative mode in response to orientation changes.
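A minimal Python sketch of the mode-transition behavior described above follows. The class and method names, the mapping of orientations to modes, and the sharing policy are assumptions chosen for illustration; the abstract does not prescribe them.

```python
# Sketch: switch between individual and collaborative modes on an
# orientation change, and identify data to share while collaborative.
from enum import Enum, auto

class Mode(Enum):
    INDIVIDUAL = auto()     # e.g. an individual search session
    COLLABORATIVE = auto()  # e.g. a collaborative search with co-located devices

class Device:
    def __init__(self):
        self.mode = Mode.INDIVIDUAL
        self.search_results: list[str] = []

    def on_orientation_change(self, orientation: str) -> None:
        """Flip modes when the device orientation changes.

        Here we assume 'landscape' maps to collaborative use and
        'portrait' to individual use.
        """
        self.mode = Mode.COLLABORATIVE if orientation == "landscape" else Mode.INDIVIDUAL

    def data_to_share(self) -> list[str]:
        """Identify data to share with co-located devices in collaborative mode."""
        if self.mode is Mode.COLLABORATIVE:
            return self.search_results  # could be narrowed by a sharing policy
        return []

if __name__ == "__main__":
    d = Device()
    d.search_results = ["result A", "result B"]
    d.on_orientation_change("landscape")
    print(d.mode, d.data_to_share())
```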
Abstract:
The systems and techniques described herein implement an improved gaze-based on-screen keyboard that provides dynamically variable dwell times to increase throughput and reduce errors. Utilizing a language model, the probability that each key of the on-screen keyboard will be the subsequential key can be determined, and based at least in part on this determined probability, a dwell time can be assigned to each key. When applied as an iterative process, a minimum dwell time may be gradually reduced as confidence in the subsequential key increases, providing a cascading minimum dwell time.
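The sketch below illustrates the idea of probability-weighted dwell times in Python. The toy "language model", the linear mapping from probability to dwell time, and the constants are assumptions; the abstract does not specify a particular formula.

```python
# Sketch: shorter dwell times for keys the language model deems likely,
# plus a minimum dwell time that cascades down as confidence grows.
MAX_DWELL_MS = 600   # dwell time for very unlikely keys (assumed)
MIN_DWELL_MS = 150   # baseline floor on dwell time (assumed)

def next_key_probabilities(prefix: str) -> dict[str, float]:
    """Toy stand-in for a language model: probability that each key is next."""
    if prefix.endswith("q"):
        return {"u": 0.9, "a": 0.05, "e": 0.05}
    return {"e": 0.4, "a": 0.3, "t": 0.3}

def dwell_times(prefix: str, min_dwell_ms: float = MIN_DWELL_MS) -> dict[str, float]:
    """Assign shorter dwell times to more probable keys."""
    probs = next_key_probabilities(prefix)
    return {
        key: max(min_dwell_ms, MAX_DWELL_MS * (1.0 - p))
        for key, p in probs.items()
    }

def cascade_min_dwell(confidence: float, floor_ms: float = 60.0) -> float:
    """Gradually lower the minimum dwell time as confidence in the next key grows."""
    return max(floor_ms, MIN_DWELL_MS * (1.0 - confidence))

if __name__ == "__main__":
    print(dwell_times("q"))                   # 'u' receives the shortest dwell time
    print(cascade_min_dwell(confidence=0.8))  # minimum dwell shrinks with confidence
```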
Abstract:
A virtual reality system is described herein. The virtual reality system includes a cane controller and a computing system. The cane controller comprises a rod, a sensor, and a brake mechanism, wherein the sensor is configured to generate a signal that is indicative of position, direction of movement, and velocity of the rod, and wherein the brake mechanism is configured to apply a force to the rod. The computing system receives the signal, computes a position, direction of movement, and velocity of a virtual rod in a virtual space, and outputs a control signal to the brake mechanism based upon such computation. The brake mechanism applies the force to the rod in a direction and with a magnitude indicated in the control signal, thereby preventing the user from causing the virtual rod to penetrate a virtual barrier in the virtual space.
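The following Python sketch outlines the sensor-to-brake control loop described in this abstract, reduced to one dimension for clarity. The data structures, the collision test against the virtual barrier, and the proportional braking-force formula are illustrative assumptions, not the system's actual implementation.

```python
# Sketch: map the virtual rod's state to a brake command so the rod
# cannot penetrate a virtual barrier.
from dataclasses import dataclass

@dataclass
class RodState:
    position: float   # 1-D position along the cane's sweep, for simplicity
    velocity: float   # signed velocity; its sign is the direction of movement

@dataclass
class BrakeCommand:
    direction: int    # -1, 0, or +1: direction in which the brake resists motion
    magnitude: float  # braking force to apply

BARRIER_POSITION = 1.0   # assumed location of a virtual wall
BRAKE_GAIN = 5.0         # assumed proportional gain for the braking force

def compute_brake(rod: RodState) -> BrakeCommand:
    """If the rod is moving toward the barrier and has reached it, oppose the motion."""
    at_barrier_and_approaching = rod.velocity > 0 and rod.position >= BARRIER_POSITION
    if at_barrier_and_approaching:
        return BrakeCommand(direction=-1, magnitude=BRAKE_GAIN * abs(rod.velocity))
    return BrakeCommand(direction=0, magnitude=0.0)

if __name__ == "__main__":
    # The rod has reached the virtual wall while still moving toward it.
    print(compute_brake(RodState(position=1.02, velocity=0.4)))
```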
Abstract:
Described herein are various technologies pertaining to presenting search results to a user, wherein the search results are messages generated by way of social networking applications. An interactive graphical object is presented together with retrieved messages, and messages are filtered responsive to interactions with the interactive graphical object. Additionally, a graphical object that is indicative of credibility of a message is presented together with the message.
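As a hypothetical sketch of the filtering and credibility-indication behavior described above, the Python code below models the interactive graphical object as an author facet that filters retrieved messages, and attaches a simple credibility score to each message. The fields and the credibility heuristic are assumptions for illustration only.

```python
# Sketch: filter social-network search results in response to an
# interaction, and compute a per-message credibility indicator.
from dataclasses import dataclass

@dataclass
class Message:
    author: str
    text: str
    author_followers: int
    reposts: int

def credibility(msg: Message) -> float:
    """Toy credibility score in [0, 1] based on simple popularity signals."""
    raw = 0.6 * min(msg.author_followers / 10_000, 1.0) + 0.4 * min(msg.reposts / 100, 1.0)
    return round(raw, 2)

def filter_messages(messages: list[Message], selected_author: str | None) -> list[Message]:
    """Filter results in response to an interaction with a graphical object
    (modeled here as selecting an author facet)."""
    if selected_author is None:
        return messages
    return [m for m in messages if m.author == selected_author]

if __name__ == "__main__":
    results = [
        Message("alice", "Earthquake downtown?", 12_000, 80),
        Message("bob", "Definitely felt shaking", 300, 2),
    ]
    for m in filter_messages(results, selected_author=None):
        print(f"{m.author}: {m.text} (credibility={credibility(m)})")
```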