Abstract:
A system and method are provided for performing eye gaze tracking. The system is configured for and the method comprises optimizing illumination of a scene for a single on-axis imaging apparatus, capturing an image using the single on-axis imaging apparatus under the optimized illumination, and processing the captured image to perform a gaze estimation. Dynamic illumination control, eye candidate detection and filtering, and gaze estimation techniques are also provided.
Abstract:
Systems and associated methods are provided for monitoring a user while operating a computing device and providing active feedback to said user regarding health and safety best practices associated with operating said computing device. The methods comprise obtaining user biometric data; converting said biometric data into actionable instances of health and safety use cases associated with device operation; and interacting with the user based on said actionable instances in order to improve or remedy any deviations from recommended health and safety practices for device operation.
Abstract:
A method is provided for estimating the orientation of a user's eyes in a scene in a system-agnostic manner. The method approximates pose-invariant, user-independent feature vectors by transforming the input coordinates to a pose-invariant coordinate system and then normalizing the data according to the statistical distributions of previously collected data used to create a learned mapping method. It then uses the learned mapping method to estimate the orientation of the user's eyes in the pose-invariant coordinate system, and finally transforms this estimate to a world coordinate system.
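The transform-normalize-map-transform sequence described in this abstract might be sketched as follows. All function names, the rotation-matrix representation of pose, and the z-score normalization are illustrative assumptions, not details taken from the patent text:

```python
import numpy as np

def normalize_features(features, head_pose_rotation, train_mean, train_std):
    """Map raw eye features into a pose-invariant frame, then z-score
    normalize using statistics of the previously collected training data.
    The rotation-matrix pose model is an illustrative assumption."""
    # Rotate features into the head-centric (pose-invariant) frame.
    pose_invariant = head_pose_rotation.T @ features
    # Normalize against the distribution used to fit the learned mapping.
    return (pose_invariant - train_mean) / train_std

def estimate_gaze_world(features, head_pose_rotation, train_mean, train_std, mapping):
    """Estimate gaze in the pose-invariant frame via the learned mapping,
    then rotate the result back into the world coordinate system."""
    x = normalize_features(features, head_pose_rotation, train_mean, train_std)
    gaze_pose_invariant = mapping(x)
    return head_pose_rotation @ gaze_pose_invariant
```

With an identity head pose and an identity mapping, the estimate reduces to the normalized input, which makes the two coordinate transforms easy to sanity-check in isolation.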
Abstract:
A system and method are provided that use eye gaze as a pointing or selection tool, which enables hands-free operation of a display such as a television, wherein the use of eye gaze as an input can also lead to easier and faster interactions when compared to traditional remote controls. A system and method are also provided that use eye tracking on displays such as televisions to determine what content was viewed and, by association, what content was of most interest to the user. Systems and methods are also described that enable interaction with elements displayed in an augmented reality environment using gaze tracking and for controlling gaze tracking on a portable electronic device.
Abstract:
A system and associated methods are provided for enhancing videoconferencing interactions between users. The methods comprise obtaining user biometric data, converting said biometric data into a virtual representation of a user and embedding said virtual representation of the user into remotely shared media content.
Abstract:
A system and methods are provided to manage gestures and positional data from a pointing device, considering an attention sensing device with known accuracy characteristics. The method uses the state of the user's attention and the pointing device data as input, mapping them against predefined regions on the device's screen(s). It then uses both the mapping results and raw inputs to affect the device, such as sending instructions or moving the pointing cursor.
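The mapping step described in this abstract might be sketched as follows. The region format, the confidence flag standing in for "the state of the user's attention," and the warp-to-region-centre behaviour are all illustrative assumptions:

```python
def resolve_pointer(gaze_point, cursor, regions, gaze_confident):
    """Combine attention (gaze) state with raw pointing-device data by
    testing the gaze point against predefined screen regions.
    regions: {name: (x, y, width, height)} -- an assumed format."""
    if gaze_confident:
        for name, (x, y, w, h) in regions.items():
            if x <= gaze_point[0] < x + w and y <= gaze_point[1] < y + h:
                # Gaze is reliable and lands in a known region:
                # warp the cursor to that region's centre.
                return name, (x + w // 2, y + h // 2)
    # Otherwise fall back to the raw pointing-device position.
    return None, cursor
```

The fallback branch reflects the abstract's use of both the mapping results and the raw inputs: when the attention sensor's accuracy cannot be trusted, the raw positional data drives the cursor unchanged.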
Abstract:
A system and method are provided for object tracking in a scene over time. The method comprises obtaining tracking data from a tracking device, the tracking data comprising information associated with at least one point of interest being tracked; obtaining position data from a scene information provider, the scene being associated with a plurality of targets, the position data corresponding to targets in the scene; applying a probabilistic graphical model to the tracking data and the position data to predict a target of interest associated with an entity being tracked; and performing at least one of: using the target of interest to determine a refined point of interest; and outputting at least one of the refined point of interest and the target of interest.
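A minimal stand-in for the probabilistic prediction step might look as follows. This is not the patent's graphical model; it is a simple Gaussian likelihood over gaze-to-target distance with a uniform prior, chosen only to illustrate how a posterior over targets can refine a noisy point of interest:

```python
import math

def predict_target(gaze_point, targets, sigma=50.0):
    """Assign a tracked point of interest to candidate targets.
    targets: {name: (x, y)}; sigma models tracker noise in pixels.
    Both the Gaussian model and the parameter values are assumptions."""
    likelihoods = {}
    for name, (tx, ty) in targets.items():
        d2 = (gaze_point[0] - tx) ** 2 + (gaze_point[1] - ty) ** 2
        likelihoods[name] = math.exp(-d2 / (2 * sigma ** 2))
    total = sum(likelihoods.values())
    posterior = {k: v / total for k, v in likelihoods.items()}
    # The most probable target; its known position can then serve as
    # a refined point of interest in place of the raw tracking data.
    best = max(posterior, key=posterior.get)
    return best, posterior
```

Snapping the output to the predicted target's position is one way to produce the "refined point of interest" the abstract mentions, since target positions from the scene information provider are typically more precise than raw tracker output.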
Abstract:
A system and method are provided that use point of gaze information to determine what portions of 3D media content are actually being viewed, to enable a 3D media content viewing experience to be improved. Point of gaze information obtained by tracking the eye movements of viewers is used to control characteristics of the 3D media content during consumption of that media, and/or to improve or otherwise adjust or refine the 3D media content during creation thereof by a media content provider. Outputs may be generated to illustrate what in the 3D media content was viewed at incorrect depths. Such outputs may then be used in subsequent or offline analysis, e.g., by editors for media content providers when generating the 3D media itself, in order to gauge the 3D effects. A quality metric can be computed based on the point of gaze information, which can be used to analyze the interactions between viewers and the 3D media content being displayed. The quality metric may also be calibrated in order to accommodate offsets and other factors and/or to allow for aggregation of results obtained for multiple viewers.
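One simple form the quality metric described above could take is a depth-error average over gaze samples. The sample fields and the mean-absolute-error formulation are illustrative assumptions, not the patent's definition:

```python
def depth_quality(samples):
    """Quality metric sketch for 3D content: mean absolute error between
    the depth a viewer's gaze converged at and the depth the content
    intended at that screen position. Lower is better.
    samples: [{"viewed_depth": float, "intended_depth": float}, ...]
    -- an assumed record format."""
    errors = [abs(s["viewed_depth"] - s["intended_depth"]) for s in samples]
    return sum(errors) / len(errors)
```

Per-viewer calibration (subtracting a constant vergence offset before computing the error) would be one way to accommodate the offsets the abstract mentions and make results from multiple viewers comparable before aggregation.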
Abstract:
A system and an apparatus are provided for gaze tracking within a defined operating range. The apparatus includes at least one optical system capable of capturing radiation in a wavelength range produced by a composite illumination source. The apparatus also includes at least one set of illumination sources creating the composite illumination source, wherein: at least one of the illumination sources is positioned relative to the optical system such that it ensures a bright pupil response from the user at the beginning of the apparatus's operating range; and the size of the composite illumination source is such that it creates a Purkinje image on a user's eye capable of being distinguished by the optical system at the end of the apparatus's operating range. The apparatus also includes an illumination controller for activating and deactivating the composite illumination source, and a signal processing controller for transmitting generated images from the at least one optical system.
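A common way an illumination controller exploits the bright pupil response is to alternate on-axis and off-axis source sets between frames and difference the resulting images so that pupils dominate. The abstract does not specify this scheme; the alternating two-set design below is an illustrative assumption:

```python
def illumination_schedule(frame_index):
    """Alternate the composite source between an on-axis set (which
    elicits the bright pupil response) and an off-axis set on
    successive frames. An assumed scheme, not the patent's claim."""
    return "on_axis" if frame_index % 2 == 0 else "off_axis"

def difference_pupil_image(on_axis_frame, off_axis_frame):
    """Bright-pupil minus dark-pupil frame, clamped at zero: pupil
    regions stand out in the difference image. Frames are assumed to
    be equal-sized 2D lists of intensity values."""
    return [[max(a - b, 0) for a, b in zip(row_on, row_off)]
            for row_on, row_off in zip(on_axis_frame, off_axis_frame)]
```

The signal processing controller would then transmit the paired frames (or their difference) downstream for pupil-candidate detection.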