Abstract:
A communal computing device, like an interactive digital whiteboard, can detect the start and end of user sessions with the device. When a communal computing device detects the end of a user session, it can determine whether a personal device that was connected at the start of the user session, or during the user session, was still connected at the end of the user session. If so, the device can initiate actions based on the session start or end signals such as, but not limited to, transmitting a message to an organizer of a meeting scheduled during the time of the user session, transmitting a message to a participant of a meeting scheduled during the time of the user session, transmitting a message to an administrator, or generating a notification, such as a user interface reminding a user to take their personal device.
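The decision described above can be sketched as a simple check at session end: any personal device connected at the start (or during) the session that is still connected at the end likely belongs to a departing user. The function and action names below are illustrative, not the patent's API.

```python
def end_of_session_actions(connected_at_start, connected_at_end, meeting=None):
    """Decide which actions to initiate when a session ends.

    connected_at_start / connected_at_end: sets of personal-device IDs.
    meeting: optional dict with an 'organizer' key, if a meeting was
    scheduled during the time of the user session.
    """
    actions = []
    # A device still connected at session end suggests it was left behind.
    left_behind = connected_at_start & connected_at_end
    for device in sorted(left_behind):
        actions.append(("notify_user", device))  # e.g. on-screen reminder
        if meeting:
            actions.append(("message_organizer", meeting["organizer"]))
    return actions
```

A device that disconnected before the session ended (its user walked away with it) produces no action, which matches the "if so" condition in the abstract.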
Abstract:
A technique is described herein for defining at least some characteristics of a digital pen in a global manner across plural applications, such that the pen exhibits the same characteristics across two or more applications. In one implementation, the technique involves: receiving a pen activation signal in response to a user's activation of an input mechanism provided by a particular digital pen; identifying a location on a user interface (UI) presentation that is readily accessible to the user; generating a pen configuration presentation; presenting the pen configuration presentation on the UI presentation at the location that has been identified; receiving a configuration input from the user in response to the user's interaction with the pen configuration presentation; and, in response to the configuration input, storing a global configuration setting that governs a characteristic of ink strokes produced by the particular digital pen across at least two different applications.
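The key idea above is that the stored configuration setting is global rather than per-application. A minimal sketch of such a store follows; the class and key names are assumptions for illustration only.

```python
class GlobalPenConfig:
    """Global store of pen characteristics shared across applications."""

    def __init__(self):
        self._settings = {}  # pen_id -> {characteristic: value}

    def on_configuration_input(self, pen_id, characteristic, value):
        # Store a global setting governing ink strokes for this pen,
        # e.g. after the user interacts with the pen configuration
        # presentation.
        self._settings.setdefault(pen_id, {})[characteristic] = value

    def stroke_style(self, pen_id, app_name):
        # Every application reads the same setting, so the pen exhibits
        # the same characteristics across two or more applications.
        return dict(self._settings.get(pen_id, {}))
```

Because `stroke_style` ignores `app_name` when looking up the setting, a color chosen in one application is in effect in all others.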
Abstract:
A method for distributing video in a display system equipped with at least one camera. The video is distributed among multiple display zones, which are movable with respect to each other. The method includes acquiring optically, with the camera, a calibration image of a display zone of a display-enabled electronic device. The method includes computing, based on the calibration image, a coordinate transform responsive to the dimensions, position, and orientation of the display zone relative to the camera, the coordinate transform being usable to effect video rendering on the display zone. The method includes transmitting, to the display-enabled electronic device, one or more of the coordinate transform and video rendered for display on the display zone based on the coordinate transform.
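A coordinate transform of this kind can be estimated from point correspondences between the calibration image and the display zone's own coordinates. The sketch below fits a 2D affine transform from three correspondences; a real system of this sort would more likely fit a full perspective homography from four or more points, so treat this as a simplified stand-in.

```python
import numpy as np

def affine_transform(src, dst):
    """Fit a 2x3 affine transform mapping src points onto dst points.

    src, dst: 3x2 arrays of corresponding (x, y) points, e.g. display-zone
    corners detected in the calibration image and their known positions
    in the zone's native coordinates.
    """
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, (x, y) in enumerate(src):
        A[2 * i] = [x, y, 1, 0, 0, 0]      # x' = a*x + b*y + c
        A[2 * i + 1] = [0, 0, 0, x, y, 1]  # y' = d*x + e*y + f
        b[2 * i], b[2 * i + 1] = dst[i]
    # Solve for the six affine parameters and shape them as a 2x3 matrix.
    return np.linalg.solve(A, b).reshape(2, 3)
```

Once fitted, the transform can either be applied locally before rendering or transmitted to the display-enabled device, matching the two options in the last step of the method.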
Abstract:
A computing platform for presenting contextual content based on detected user confusion is described. In at least one example, sensor data can be received from at least one sensor. The sensor data can be associated with measurements of at least one physiological attribute of a user. Based at least in part on the sensor data, an occurrence of an event corresponding to a confused mental state of the user can be determined. In at least one example, contextual data associated with the event can be determined. The contextual data can identify at least an application being executed at a time corresponding to the occurrence of the event. The contextual data can be leveraged to access content data for mitigating the confused mental state of the user and the content data can be presented via an output interface associated with a device corresponding to the user.
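The pipeline above reduces to: detect a confusion event from sensor data, capture the contextual data (here, the active application), and use it to select mitigating content. The threshold and the content table below are invented for the example.

```python
CONFUSION_THRESHOLD = 0.8  # assumed normalized physiological signal level

# Hypothetical content store keyed by the application identified in the
# contextual data.
HELP_CONTENT = {
    "spreadsheet_app": "Tip: press F1 for formula help.",
}

def detect_confusion(signal):
    """Return indices where the signal suggests a confused mental state."""
    return [i for i, v in enumerate(signal) if v > CONFUSION_THRESHOLD]

def contextual_content(signal, active_application):
    """Select content for mitigating confusion, or None if no event."""
    if detect_confusion(signal):
        return HELP_CONTENT.get(active_application)
    return None
```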
Abstract:
A user may be authenticated to access an account, computing device, or other resource based on the user's gaze pattern and neural or other physiological response(s) to one or more images or other stimuli. When the user attempts to access the resource, a computing device may obtain login gaze tracking data and measurement of a physiological condition of the user at the time that the user is viewing an image or other stimulus. Based on comparison of the login gaze tracking data and the measurement of the physiological condition to a model, the computing device can determine whether to authenticate the user to access the resource.
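The comparison step can be sketched as matching a login-time feature vector (gaze tracking data plus the physiological measurement) against an enrolled model. A real system would use a trained statistical model; a stored template with a distance threshold stands in for it here, and the threshold value is an assumption.

```python
import math

def authenticate(login_features, enrolled_template, threshold=0.5):
    """Decide whether to authenticate the user to access the resource.

    login_features / enrolled_template: equal-length vectors combining
    gaze-tracking data and the physiological measurement, captured while
    the user views the image or other stimulus.
    """
    distance = math.dist(login_features, enrolled_template)
    return distance <= threshold
```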
Abstract:
Computer systems, methods, and storage media for tailoring a user interface to a user according to a determined user state and a determined interface context corresponding to the determined user state. The user interface is tailored by modifying the format of at least a portion of the interface, including modifying the content, layout of the content, presentation sequence, or visual display of the interface. A user interface includes a selectable formatting object for controlling the formatting of the user interface and for generating feedback data for training an ensemble learning component to enable more effective predictive formatting changes.
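The selectable formatting object plays a dual role: it applies the user's chosen format and it records a labeled example for training the ensemble learning component. That feedback path can be sketched as follows; the field names are illustrative.

```python
class FormattingFeedback:
    """Collects training examples from the selectable formatting object."""

    def __init__(self):
        self.training_data = []  # examples for the ensemble learner

    def on_format_selected(self, user_state, interface_context, chosen_format):
        # Pair the determined user state and interface context with the
        # format the user actually chose, so future formatting changes
        # can be predicted more effectively.
        self.training_data.append({
            "state": user_state,
            "context": interface_context,
            "label": chosen_format,
        })
        return chosen_format  # the format applied to the user interface
```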
Abstract:
Computer vision systems for segmenting scenes into semantic components identify a differential within physiological readings from a user. The differential corresponds to a semantic boundary associated with the user's gaze. Based upon data gathered by a gaze tracking device, the computer vision system identifies a relative location of the user's gaze at the time of the identified differential. The computer vision system then associates the relative location of the user's gaze with a semantic boundary.
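The core association can be sketched as: find the first large jump (the differential) in the time-aligned physiological readings, then mark the gaze location sampled at that moment as a semantic boundary. The threshold is an assumption for illustration.

```python
def semantic_boundary(readings, gaze_locations, threshold=1.0):
    """Associate a gaze location with a semantic boundary.

    readings and gaze_locations are parallel, time-aligned lists; a
    differential larger than threshold marks the boundary moment.
    """
    for i in range(1, len(readings)):
        if abs(readings[i] - readings[i - 1]) > threshold:
            return gaze_locations[i]  # gaze location at the differential
    return None  # no differential identified
```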
Abstract:
Technologies are described herein for modifying a user interface ("UI") provided by a computing device based upon a user's brain activity and gaze. A machine learning classifier is trained using data that identifies the state of a UI provided by a computing device, data identifying brain activity of a user of the computing device, and data identifying the location of the user's gaze. Once trained, the classifier can select a state for the UI provided by the computing device based upon brain activity and gaze of the user. The UI can then be configured based on the selected state. An API can also expose an interface through which an operating system and programs can obtain data identifying the UI state selected by the machine learning classifier. Through the use of this data, a UI can be configured for suitability with a user's current mental state and gaze.
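As a toy stand-in for the trained classifier, the selection step can be sketched as nearest-centroid matching over combined brain-activity and gaze features, with each centroid labeled by a UI state. A production system would use a proper machine learning model trained as described above.

```python
import math

def select_ui_state(features, centroids):
    """Select a UI state from brain-activity and gaze features.

    centroids: {ui_state: feature_vector} learned during training.
    Returns the UI state whose centroid is closest to the observation;
    the UI is then configured based on the selected state.
    """
    return min(centroids, key=lambda state: math.dist(features, centroids[state]))
```

An operating system or program could expose this selection through an API, as the abstract describes, by returning the chosen state identifier to callers.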
Abstract:
Techniques and systems for identifying objects using gaze tracking techniques are described. A computing system may determine or infer that an individual is requesting to identify an object that is unknown to the individual based at least partly on images of the individual, images of a scene including the object, or both. In some cases, images of the individual may be used to determine a gaze path of the individual and the unknown object may be within the gaze path of the individual. Additionally, a computing system may send a request to identify the object to at least one individual. One or more of the responses received from the at least one individual may be provided in order to identify the object.
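Determining that the unknown object lies within the gaze path can be sketched as intersecting gaze points with object bounding boxes detected in the scene images. Box format and names are assumptions for the example.

```python
def object_in_gaze_path(gaze_points, object_boxes):
    """Find the scene object lying on the individual's gaze path.

    gaze_points: list of (x, y) gaze coordinates in the scene image.
    object_boxes: {object_name: (x0, y0, x1, y1)} bounding boxes.
    Returns the first object a gaze point falls inside, else None;
    a request to identify that object can then be sent out.
    """
    for x, y in gaze_points:
        for name, (x0, y0, x1, y1) in object_boxes.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
    return None
```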
Abstract:
A system for contextual loading of an operating system is described. A context module forms a dynamic context of a user login at a host. A context mapper identifies a container corresponding to the dynamic context and determines whether the container is present in a local container cache of the host. In response to the container being present in the local container cache, the container is presented at the host. In response to the container being absent from the local container cache, the container is retrieved from a container store and presented at the host.
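The cache logic in the last two sentences reduces to a lookup with a fallback fetch. The context-to-container mapping below is a placeholder for whatever the context mapper actually computes.

```python
def load_container(context, local_cache, container_store):
    """Present the container for a dynamic context at the host.

    local_cache and container_store both map container name -> container.
    """
    name = f"container-for-{context}"  # assumed context-to-container mapping
    if name not in local_cache:
        # Absent from the local container cache: retrieve from the
        # container store, then cache it for future logins.
        local_cache[name] = container_store[name]
    return local_cache[name]
```

Subsequent logins with the same dynamic context hit the local container cache and skip the container store entirely.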