-
Publication Number: US20240211038A1
Publication Date: 2024-06-27
Application Number: US18534388
Filing Date: 2023-12-08
Applicant: Apple Inc.
Inventor: Hao Qin , Hua Gao , Tom Sengelaub
CPC classification number: G06F3/013 , G02B27/017 , G06F3/017 , G06T19/006
Abstract: Methods are described for triggering eye enrollment without requiring good gaze interaction, allowing a guest user of a device to initiate partial or full eye enrollment even though their eye model is not known and conventional gaze-based interactions therefore do not work well. A gaze tracking system collects gaze data in the background. At any time (or within an interval after the user puts on the device), eye enrollment can be triggered by detecting a gaze gesture, for example rolling the eyes in a large circle or moving the eyes randomly for a time that exceeds a threshold. Depending on the coverage of the gaze/cornea data collected in the background, either a full eye enrollment or only a visual axis enrollment may be performed in response to the gesture.
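A minimal sketch of the gesture-triggered enrollment decision described above, assuming gaze samples as yaw/pitch angles; the gesture detector, coverage metric, and thresholds (GazeSample, looks_like_circle, the 0.75 coverage cutoff) are illustrative assumptions, not the patented implementation:

import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GazeSample:
    yaw: float    # horizontal gaze angle in radians
    pitch: float  # vertical gaze angle in radians
    t: float      # timestamp in seconds

def angular_coverage(samples: List[GazeSample], bins: int = 8) -> float:
    """Fraction of angular sectors (around the mean gaze) covered by the samples."""
    if not samples:
        return 0.0
    cy = sum(s.yaw for s in samples) / len(samples)
    cp = sum(s.pitch for s in samples) / len(samples)
    hit = set()
    for s in samples:
        ang = math.atan2(s.pitch - cp, s.yaw - cy)
        hit.add(int((ang + math.pi) / (2 * math.pi) * bins) % bins)
    return len(hit) / bins

def looks_like_circle(recent: List[GazeSample]) -> bool:
    """Crude 'eyes rolled in a large circle' test: nearly all angular sectors visited."""
    return angular_coverage(recent, bins=12) > 0.9

def maybe_trigger_enrollment(recent: List[GazeSample],
                             background: List[GazeSample]) -> Optional[str]:
    """Decide whether the gesture triggers full enrollment or only visual axis enrollment."""
    if not looks_like_circle(recent):
        return None
    # Full enrollment only if the background gaze/cornea data already covers enough of the eye.
    return "full_enrollment" if angular_coverage(background) > 0.75 else "visual_axis_enrollment"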
-
Publication Number: US20240104889A1
Publication Date: 2024-03-28
Application Number: US18469670
Filing Date: 2023-09-19
Applicant: Apple Inc.
Inventor: Alper Yildirim , Chia-Yin Tsai , Hao Qin , Hua Gao , Tom Sengelaub , Martin Subert , Petr Bour
CPC classification number: G06V10/60 , G06V10/25 , G06V10/758 , G06V10/761 , G06V2201/07
Abstract: A system for detecting a proximate object includes one or more cameras and one or more illuminators. The proximate object is detected by obtaining image data captured by the one or more cameras while at least one of the illuminators is illuminated, determining brightness statistics from the image data, and determining whether the brightness statistics satisfy a predetermined threshold. The proximate object is determined to be detected when the brightness statistics satisfy the predetermined threshold.
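The brightness-statistics check lends itself to a short sketch. The statistics chosen here (mean, 95th percentile, fraction of bright pixels) and the thresholds are assumptions for illustration; the claims only require some brightness statistic compared against a predetermined threshold:

import numpy as np

def brightness_stats(frame: np.ndarray) -> dict:
    """Brightness statistics over a grayscale frame captured while an illuminator is on."""
    f = frame.astype(np.float32)
    return {"mean": float(f.mean()),
            "p95": float(np.percentile(f, 95)),
            "bright_fraction": float((f > 200).mean())}

def object_is_proximate(frame: np.ndarray,
                        mean_threshold: float = 120.0,
                        bright_fraction_threshold: float = 0.3) -> bool:
    """A nearby object reflects the illuminator strongly, raising overall image brightness."""
    stats = brightness_stats(frame)
    return (stats["mean"] >= mean_threshold or
            stats["bright_fraction"] >= bright_fraction_threshold)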
-
Publication Number: US20240103618A1
Publication Date: 2024-03-28
Application Number: US18470359
Filing Date: 2023-09-19
Applicant: Apple Inc.
Inventor: Julia Benndorf , Qichao Fan , Julian K. Shutzberg , Paul A. Lacey , Hua Gao
IPC: G06F3/01 , H04N13/344
CPC classification number: G06F3/013 , H04N13/344
Abstract: Methods and apparatus for correcting the gaze direction and the origin (entrance pupil) in gaze tracking systems. During enrollment, after an eye model is obtained, the pose of the eye when looking at a target prompt is determined. This information is used to estimate the true visual axis of the eye. The visual axis may then be used to correct the point of view (PoV) with respect to the display during use. If a clip-on lens is present, a corrected gaze axis may be calculated based on the known optical characteristics and pose of the clip-on lens. A clip-on corrected entrance pupil may then be estimated by firing two or more virtual rays through the clip-on lens and determining the intersection between the rays and the corrected gaze axis.
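A hedged geometry sketch of the final step, estimating the clip-on-corrected entrance pupil from the corrected gaze axis and virtual rays that have already been traced through the clip-on lens; the refraction itself is abstracted away and all names are hypothetical:

import numpy as np

def closest_point_on_axis_to_ray(axis_origin, axis_dir, ray_origin, ray_dir):
    """Point on the corrected gaze axis closest to a virtual ray, via a least-squares solve."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    # Minimize |axis_origin + s * axis_dir - (ray_origin + t * ray_dir)| over (s, t).
    A = np.stack([axis_dir, -ray_dir], axis=1)
    b = ray_origin - axis_origin
    (s, _t), *_ = np.linalg.lstsq(A, b, rcond=None)
    return axis_origin + s * axis_dir

def estimate_corrected_entrance_pupil(axis_origin, axis_dir, refracted_rays):
    """Average the closest points for two or more rays fired through the clip-on lens."""
    points = [closest_point_on_axis_to_ray(axis_origin, axis_dir, o, d)
              for o, d in refracted_rays]
    return np.mean(points, axis=0)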
-
Publication Number: US20240105046A1
Publication Date: 2024-03-28
Application Number: US18470748
Filing Date: 2023-09-20
Applicant: Apple Inc.
Inventor: Chia-Yin Tsai , Kai Benjamin Quack , Hua Gao , Tom Sengelaub , Alper Yildirim
CPC classification number: G08B21/182 , G02B27/0172 , G06F3/013 , G06T7/70
Abstract: Systems and methods are disclosed for performing a lens distance test in head-mounted displays (HMDs) to determine the distance between a user's eye and a lens of the HMD (e.g., the display lens in a virtual or augmented reality device). In embodiments, the HMD is configured to determine a current pose of the eye based on a series of captured eye images. The pose information is used to determine the distance from the apex of the cornea to the closest point on the lens. If the determined distance is too small or too large, an alert or notification is generated instructing the user to adjust the HMD or change the light seal to achieve a better distance, in order to reduce the risk of eye injury and/or improve the user experience. In embodiments, the lens distance test may be repeated during a user session to reevaluate and/or monitor the lens distance.
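A minimal sketch of the distance check, assuming the cornea apex (from the current eye pose) and a sampled lens surface are available as 3D points in millimeters; the 8 mm / 40 mm bounds and the alert strings are placeholders, not values from the disclosure:

import numpy as np

def closest_point_on_lens(cornea_apex: np.ndarray, lens_points: np.ndarray) -> np.ndarray:
    """Nearest vertex of a sampled lens surface (N x 3 array, mm) to the cornea apex."""
    dists = np.linalg.norm(lens_points - cornea_apex, axis=1)
    return lens_points[np.argmin(dists)]

def lens_distance_test(cornea_apex: np.ndarray, lens_points: np.ndarray,
                       min_mm: float = 8.0, max_mm: float = 40.0) -> str:
    """Compare the cornea-apex-to-lens distance against comfort/safety bounds."""
    d = float(np.linalg.norm(closest_point_on_lens(cornea_apex, lens_points) - cornea_apex))
    if d < min_mm:
        return "too_close: adjust the HMD or use a thicker light seal"
    if d > max_mm:
        return "too_far: adjust the HMD or use a thinner light seal"
    return "ok"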
-
Publication Number: US11710350B2
Publication Date: 2023-07-25
Application Number: US17499205
Filing Date: 2021-10-12
Applicant: Apple Inc.
Inventor: Tom Sengelaub , Hua Gao , Hao Qin , Julia Benndorf
CPC classification number: G06V40/19 , G06F3/013 , G06T7/50 , G06T7/70 , G06T7/74 , G06T7/90 , G06V40/165 , G06T2200/04 , G06T2207/10024 , G06T2207/10028 , G06T2207/10048 , G06T2207/30041 , G06T2207/30201
Abstract: Some implementations of the disclosure involve, at a device having one or more processors, one or more image sensors, and an illumination source, detecting a first attribute of an eye based on pixel differences associated with different wavelengths of light in a first image of the eye. These implementations next determine a first location associated with the first attribute in a three-dimensional (3D) coordinate system based on depth information from a depth sensor. Various implementations detect a second attribute of the eye based on a glint resulting from light of the illumination source reflecting off a cornea of the eye. These implementations next determine a second location associated with the second attribute in the 3D coordinate system based on the depth information from the depth sensor, and determine a gaze direction in the 3D coordinate system based on the first location and the second location.
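A simplified sketch of the last two steps, assuming the two attributes have already been detected in the image and that pixel coordinates plus depth are back-projected with a pinhole model; the attribute semantics and camera intrinsics are assumptions:

import numpy as np

def pixel_to_3d(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a pixel (u, v) with a depth value into camera coordinates (pinhole model)."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def gaze_direction(first_location: np.ndarray, second_location: np.ndarray) -> np.ndarray:
    """Unit gaze direction in the 3D coordinate system from the two attribute locations."""
    v = first_location - second_location
    n = np.linalg.norm(v)
    if n < 1e-9:
        raise ValueError("attribute locations coincide; gaze direction is undefined")
    return v / n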
-
Publication Number: US20240104958A1
Publication Date: 2024-03-28
Application Number: US18470367
Filing Date: 2023-09-19
Applicant: Apple Inc.
Inventor: Hao Qin , Hua Gao , Tom Sengelaub , Jie Zhong
CPC classification number: G06V40/197 , G06T7/75 , G06T17/00 , G06T2207/10048 , G06V40/50
Abstract: Methods and apparatus for providing eye model matching in a device are disclosed. When a user activates a device and the presence of the user's eye is detected, an image of the user's eye is captured. An eye model matching process is then implemented to determine the stored eye model (e.g., an eye model stored after enrollment of the eye on the device) that best matches the eye in the captured image. Determination of the best-matching eye model may be based on matching between properties of the user's eye in the captured image (such as cornea and pupil features) and properties of the user's eye determined by the eye model. The best-matching eye model may then be used in, for example, an eye gaze tracking process. In certain instances, the best-matching eye model must satisfy a matching threshold before being used in the downstream process.
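A hypothetical sketch of the matching step: score each stored eye model against cornea/pupil measurements from the captured image and accept the best match only if it clears a threshold. The feature set, weights, and threshold are illustrative, not the actual matching criteria:

def match_score(measured: dict, model: dict) -> float:
    """Lower is better: weighted difference of cornea/pupil properties (in mm)."""
    return (abs(measured["cornea_radius"] - model["cornea_radius"]) +
            0.5 * abs(measured["pupil_diameter"] - model["pupil_diameter"]))

def best_matching_eye_model(measured: dict, stored_models: list, max_score: float = 0.5):
    """Return the best-matching stored eye model, or None if no model is close enough."""
    if not stored_models:
        return None
    score, model = min(((match_score(measured, m), m) for m in stored_models),
                       key=lambda sm: sm[0])
    return model if score <= max_score else None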
-
Publication Number: US20240272709A1
Publication Date: 2024-08-15
Application Number: US18568256
Filing Date: 2022-06-08
Applicant: Apple Inc.
Inventor: Hao Qin , Hua Gao , Tom Sengelaub , Chia-Yin Tsai
IPC: G06F3/01
CPC classification number: G06F3/013
Abstract: Methods and apparatus for generating user-aware eye models. During an enrollment process, images of a user's eye are captured by one or more cameras when the eye is in two or more different orientations and at two or more different levels of display brightness. The captured images are processed to generate a 3-dimensional, user-aware eye model, for example a model of at least the eye's cornea and pupil features. The generated user-aware eye model may be used in other processes, for example in a gaze tracking process. The enrollment process may be an iterative process to optimize the eye model, or a continuous process performed while the user is using the system.
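The iterative variant of the enrollment could be sketched as a small nonlinear fit over captures taken at different orientations and brightness levels. The parameterization and residual below are placeholders chosen only to show the optimization loop, not the actual eye model:

import numpy as np
from scipy.optimize import least_squares

def residuals(params, observations):
    """Difference between predicted and observed 2D feature positions across all captures."""
    cornea_radius, pupil_x, pupil_y = params
    res = []
    for obs in observations:
        # obs: {"orientation": unit gaze vector (3,), "brightness": display level,
        #       "feature": observed 2D feature (2,)} -- all illustrative placeholders.
        predicted = obs["orientation"][:2] * cornea_radius + np.array([pupil_x, pupil_y])
        res.extend(predicted - obs["feature"])
    return res

def fit_user_aware_eye_model(observations, initial=(7.8, 0.0, 0.0)):
    """Iteratively refine (cornea radius, pupil offset) from the enrollment captures."""
    fit = least_squares(residuals, x0=np.array(initial), args=(observations,))
    return {"cornea_radius": float(fit.x[0]),
            "pupil_offset": (float(fit.x[1]), float(fit.x[2]))}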
-
Publication Number: US20240211039A1
Publication Date: 2024-06-27
Application Number: US18534398
Filing Date: 2023-12-08
Applicant: Apple Inc.
Inventor: Hao Qin , Hua Gao , Tom Sengelaub
CPC classification number: G06F3/013 , G06F1/163 , G06T7/75 , G06T2207/30196
Abstract: In an unobtrusive visual axis enrollment process, a line of text or other content is displayed at a known vertical location and virtual depth, which the user then reads. This line of text may be content that the user needs to read as part of the normal enrollment process. As the user reads the line of text, eye tracking cameras may capture images of the eye. This data may then be used to estimate a stimulus plane. The error between the estimated stimulus plane and the ground-truth stimulus plane (the actual location of the line of text in virtual space) may then be used to estimate the kappa angle.
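A rough sketch of the kappa estimate under strong simplifications: intersect each optical-axis gaze ray with the plane at the known virtual depth of the text line and convert the mean vertical miss into an angle. The real method fits a full stimulus plane; the coordinate conventions and names here are assumptions:

import math
import numpy as np

def kappa_from_reading(gaze_origins, gaze_dirs, line_depth_m, line_height_m):
    """Vertical kappa angle (radians) from gaze samples recorded while the user reads the line."""
    errors = []
    for o, d in zip(gaze_origins, gaze_dirs):
        d = d / np.linalg.norm(d)
        t = (line_depth_m - o[2]) / d[2]       # ray/plane intersection at z = line_depth_m
        hit_y = o[1] + t * d[1]
        errors.append(hit_y - line_height_m)   # vertical miss relative to the displayed line
    mean_err = float(np.mean(errors))
    return math.atan2(mean_err, line_depth_m)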
-
Publication Number: US20240104967A1
Publication Date: 2024-03-28
Application Number: US18470364
Filing Date: 2023-09-19
Applicant: Apple Inc.
Inventor: Rene Heideklang , Hua Gao , Hao Qin , Tom Sengelaub
CPC classification number: G06V40/50 , G06F3/013 , G06T7/70 , G06V40/19 , H04L9/0866 , G06T2207/30201
Abstract: A personalized eye model is used to generate synthetic gaze features at ground-truth eye poses Gg. Corresponding synthetic gaze poses Gs are estimated from the synthetic gaze features using an average eye model. A linear regression is applied between Gg and Gs to generate a gaze correction function. The gaze correction function represents differences between the synthetic gaze Gs of the subject eye at the display and that of the average eye model Gg at the display, but does not contain security- or privacy-sensitive information. Further, the personalized eye model cannot be recovered from the gaze correction function, and thus the gaze correction function can be stored unencrypted and available for use during a cold boot of a device prior to login. On a cold boot of the device, the gaze correction function may be accessed and used with an average eye model to improve gaze-based interactions.
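The linear-regression step maps directly to a small least-squares fit. The sketch below assumes Gs and Gg are N x 2 arrays of on-display gaze points and fits an affine correction; the actual feature space and regression form may differ:

import numpy as np

def fit_gaze_correction(Gs: np.ndarray, Gg: np.ndarray) -> np.ndarray:
    """Fit W (3 x 2) so that [gs_x, gs_y, 1] @ W approximates [gg_x, gg_y]."""
    X = np.hstack([Gs, np.ones((Gs.shape[0], 1))])  # N x 3 design matrix with bias column
    W, *_ = np.linalg.lstsq(X, Gg, rcond=None)      # contains no personalized eye model data
    return W

def apply_gaze_correction(W: np.ndarray, gaze_xy: np.ndarray) -> np.ndarray:
    """Correct a gaze point estimated with the average eye model (e.g., on a cold boot)."""
    return np.append(gaze_xy, 1.0) @ W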
-
Publication Number: US20220027621A1
Publication Date: 2022-01-27
Application Number: US17499205
Filing Date: 2021-10-12
Applicant: Apple Inc.
Inventor: Tom Sengelaub , Hua Gao , Hao Qin , Julia Benndorf
Abstract: Some implementations of the disclosure involve, at a device having one or more processors, one or more image sensors, and an illumination source, detecting a first attribute of an eye based on pixel differences associated with different wavelengths of light in a first image of the eye. These implementations next determine a first location associated with the first attribute in a three-dimensional (3D) coordinate system based on depth information from a depth sensor. Various implementations detect a second attribute of the eye based on a glint resulting from light of the illumination source reflecting off a cornea of the eye. These implementations next determine a second location associated with the second attribute in the 3D coordinate system based on the depth information from the depth sensor, and determine a gaze direction in the 3D coordinate system based on the first location and the second location.
-