CALIBRATION METHOD OF A SYSTEM COMPRISING AN EYE TRACKING DEVICE AND A COMPUTING DEVICE COMPRISING ONE OR MULTIPLE SCREENS

    Publication No.: WO2023275669A1

    Publication Date: 2023-01-05

    Application No.: PCT/IB2022/055739

    Application Date: 2022-06-21

    Abstract: The present invention relates to a calibration method for a system (10) comprising an eye-tracking device (16) and a computing device (12) for capturing the gaze of a user (P) on at least one screen (13a; 13b) of the computing device. The calibration method comprises: a. displaying on one screen (13a; 13b) of the computing device (12) one or more specific patterns (20a; 20b; 20c; 20d) or arbitrary content; b. capturing with a camera (18) of the eye-tracking device (16) at least one image of said one or more specific patterns (20a; 20b; 20c; 20d) or said arbitrary content when said eye-tracking device is in an initial 3D pose (ECS0); c. computing a 3D pose (SCS0) of said one screen (13a; 13b) with respect to the eye-tracking device (16) at said initial 3D pose (ECS0) as a function of said one or more specific patterns (20a; 20b; 20c; 20d) or said arbitrary content firstly defined in terms of pixel coordinates in the screen's display coordinate system (DCS), and secondly detected in the image coordinate system (ICS) of the camera (18) of the eye-tracking device (16); d. moving and placing the eye-tracking device (16) to a resting position convenient for a user (P) corresponding to a final 3D pose (ECS) of the eye-tracking device (16); and e. computing the final 3D pose (SCS) of said at least one screen (13a; 13b) with respect to the eye-tracking device coordinate system (ECS) when the eye-tracking device (16) is in said final 3D pose as a function of the 3D pose (SCS0) of said at least one screen (13a; 13b) with respect to the eye-tracking device (16) at said initial 3D pose (ECS0) and the final 3D pose (ECS) of the eye-tracking device (16).
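
    A minimal geometric sketch of steps c and e, not the claimed implementation: it assumes the displayed pattern corners are known in screen pixels (DCS), that the screen pixel pitch and the eye-tracker camera intrinsics are available, and that the rigid motion of the eye-tracking device from its initial pose ECS0 to its final pose ECS is given as a homogeneous transform; all names below are hypothetical.

```python
# Illustrative sketch: estimate the screen pose from a displayed pattern (step c)
# and transfer it after the eye-tracking device has been moved (step e).
import numpy as np
import cv2

def screen_pose_from_pattern(corners_px, corners_img, pixel_pitch_m, K, dist):
    """Return SCS0: the screen pose w.r.t. the eye tracker at its initial pose ECS0.

    corners_px    : (N, 2) pattern corners in screen pixel coordinates (DCS).
    corners_img   : (N, 2) the same corners detected in the camera image (ICS).
    pixel_pitch_m : metres per screen pixel (assumed known, e.g. from the display EDID).
    K, dist       : intrinsics and distortion of the eye-tracker camera.
    """
    corners_px = np.asarray(corners_px, dtype=np.float64)
    # The pattern lies on the screen plane: convert pixels to metres and set z = 0.
    obj = np.hstack([corners_px * pixel_pitch_m, np.zeros((len(corners_px), 1))])
    ok, rvec, tvec = cv2.solvePnP(obj, np.asarray(corners_img, np.float64), K, dist)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T

def transfer_screen_pose(T_screen_in_ecs0, T_ecs0_to_ecs):
    """Step e: express the screen pose in the eye tracker's final frame ECS."""
    return T_ecs0_to_ecs @ T_screen_in_ecs0
```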

    METHOD AND SYSTEM FOR GAZE ESTIMATION
    Invention Application

    Publication No.: WO2020044180A2

    Publication Date: 2020-03-05

    Application No.: PCT/IB2019/057068

    Application Date: 2019-08-22

    Abstract: The invention concerns a method for estimating the gaze at which a user is looking. The method comprises a step of retrieving an input image and a reference image of an eye of the user and/or an individual. The method then comprises a step of processing the input image and the reference image so as to estimate a gaze difference between the gaze of the eye within the input image and the gaze of the eye within the reference image. The gaze of the user is then retrieved using the estimated gaze difference and the known gaze of the reference image. The invention also concerns a system for enabling this method.
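
    A minimal sketch of the differential idea described above: the user's gaze is recovered by adding an estimated gaze difference to the known gaze of the reference image. The `gaze_difference_model` below is a hypothetical pre-trained regressor; the abstract does not specify how the difference is estimated.

```python
# Illustrative only: combine a predicted gaze difference with a known reference gaze.
import numpy as np

def estimate_gaze(input_eye_img, ref_eye_img, ref_gaze_pitch_yaw, gaze_difference_model):
    # Predict the (pitch, yaw) offset between the input and reference eye images.
    delta = gaze_difference_model(input_eye_img, ref_eye_img)        # shape (2,)
    # Apply the offset to the known gaze of the reference image.
    return np.asarray(ref_gaze_pitch_yaw, dtype=float) + np.asarray(delta, dtype=float)
```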

    METHOD AND SYSTEM FOR GAZE ESTIMATION
    Invention Application

    Publication No.: WO2020044180A3

    Publication Date: 2020-03-05

    Application No.: PCT/IB2019/057068

    Application Date: 2019-08-22

    Abstract: The invention concerns a method for estimating the gaze at which a user is looking. The method comprises a step of retrieving an input image and a reference image of an eye of the user and/or an individual. The method then comprises a step of processing the input image and the reference image so as to estimate a gaze difference between the gaze of the eye within the input image and the gaze of the eye within the reference image. The gaze of the user is then retrieved using the estimated gaze difference and the known gaze of the reference image. The invention also concerns a system for enabling this method.

    METHOD FOR GAZE TRACKING CALIBRATION WITH A VIDEO CONFERENCE SYSTEM

    Publication No.: WO2023275670A1

    Publication Date: 2023-01-05

    Application No.: PCT/IB2022/055740

    Application Date: 2022-06-21

    Abstract: The present invention relates to a method for calibrating an eye tracking device (16) for capturing the gaze (g) of a specific video conferee (P) on the screen (13) of a computing device, by means of a video conference system having a user interface (11) to display at least one other video conferee (P1, P2, P3) on the screen. Calibration of the eye tracking device comprises the steps of: i) retrieving the position (s1, s2, s3) of the other video conferee or each other video conferee (P1, P2, P3) as displayed in said user interface, in relation to said screen (13); ii) retrieving the activity state (a1, a2, a3) corresponding to the other video conferee or to each other video conferee (P1, P2, P3); iii) generating an attention model as a function of said position (s1, s2, s3) and said corresponding activity state (a1, a2, a3) per other video conferee (P1, P2, P3), which indicates the areas most likely to be looked at by an arbitrary video conferee, and iv) computing the calibration parameters for the eye tracking device (16) to output gaze (g) which best overlaps with said areas according to said attention model. Capturing the gaze (g) of the specific video conferee (P) on the screen of the computing device (13) comprises: a) configuring said eye tracking device (16) with said computed calibration parameters, and b) retrieving in real-time said gaze (g) of said specific video conferee (P).
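
    The sketch below is one simple way to realise such an attention-based calibration, not the patented algorithm: conferee tiles are weighted by their activity state (speaking vs. idle, with assumed weights), and a single constant gaze offset is fitted so that the corrected gaze output best overlaps the attended area. The names, the weights, and the offset-only calibration model are all assumptions.

```python
# Illustrative sketch: attention model from tile positions and activity states,
# plus a constant-offset calibration fitted against it.
import numpy as np

def attention_model(positions, activity, speaking_weight=0.9, idle_weight=0.1):
    """positions: (K, 2) tile centres on screen; activity: (K,) 1 if active/speaking else 0.
    Returns the tile centres and normalised attention weights."""
    positions = np.asarray(positions, dtype=float)
    w = np.where(np.asarray(activity) > 0, speaking_weight, idle_weight)
    return positions, w / w.sum()

def fit_gaze_offset(raw_gaze, positions, weights):
    """raw_gaze: (N, 2) uncalibrated gaze-on-screen samples.
    Returns a 2D offset so that corrected gaze lands, on average, on the attended area."""
    expected = (weights[:, None] * positions).sum(axis=0)   # attention-weighted target
    return expected - np.asarray(raw_gaze, dtype=float).mean(axis=0)
```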

    METHOD FOR SENSING AND COMMUNICATING VISUAL FOCUS OF ATTENTION IN A VIDEO CONFERENCE

    Publication No.: WO2023275781A1

    Publication Date: 2023-01-05

    Application No.: PCT/IB2022/056053

    Application Date: 2022-06-29

    Abstract: The invention relates to a method for sensing and communicating visual focus of attention in a video conference between at least a first and a second video conferee (P1, P2, P3) located in a first and a second location, respectively. The method uses a video conference system comprising a user interface (U) displaying the video conferees (P1, P2, P3) on a screen (13) in each of said first and second locations. The method comprises: i) retrieving the point of regard (g) of the first video conferee (P1) located in the first location, in relation to the user interface (U) displayed on the screen (13) to said first video conferee (P1) in said first location, by means of an eye tracking device (16), and ii) determining the likelihood that the first video conferee (P1) is looking at each of one or more video conferees (P2, P3) as a function of said point of regard (g) and the spatial regions of said one or more video conferees (P2, P3) as rendered in said user interface (U) in said first location. Communicating the visual focus of attention of the first video conferee (P1) in said first location comprises: generating and displaying a visual cue (40a; 40b; 40c; 40d) in said user interface (U) displayed in said second location to the second video conferee (P2), indicating the likelihood that the first video conferee (P1) in said first location is looking at the second video conferee (P2), and/or a third video conferee (P3) in the same second location or in a third location.
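
    A minimal sketch of step ii under an assumed model: the likelihood that the first video conferee is looking at each displayed conferee is taken as a soft assignment over the distances between the point of regard and the tile centres. The Gaussian form and the sigma value are assumptions, not taken from the abstract.

```python
# Illustrative only: per-conferee "being looked at" likelihoods from a point of regard.
import numpy as np

def look_likelihoods(gaze_xy, tile_centres, sigma_px=150.0):
    """gaze_xy: (2,) point of regard in pixels; tile_centres: (K, 2) tile centres in pixels."""
    d2 = ((np.asarray(tile_centres, float) - np.asarray(gaze_xy, float)) ** 2).sum(axis=1)
    scores = np.exp(-d2 / (2.0 * sigma_px ** 2))
    return scores / scores.sum()    # normalised likelihoods, e.g. to drive visual cues
```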

    METHOD FOR CAPTURING IN REAL-TIME THE GAZE OF A USER (P) ON A SCREEN

    Publication No.: WO2022149094A1

    Publication Date: 2022-07-14

    Application No.: PCT/IB2022/050114

    Application Date: 2022-01-07

    Abstract: The invention relates to a method for capturing in real-time the gaze of a user (P) on a screen (13) of a computing device (12a; 12b; 12c), using an external device (16) with a camera and a computing device (12a; 12b; 12c). The calibration of the system (10) comprises: retrieving at least one gaze estimate (g) of at least one eye of the user from the external device (16); retrieving the corresponding point-of-regard (s) on the screen of the computing device as a 2D pixel coordinate (s) on the screen; retrieving an initial guess on the screen coordinate system (SCS_prior), and computing parameters of a screen coordinate system (SCS) with an iterative process in which said screen coordinate system (SCS) parameters are initialized by said initial guess on the screen coordinate system (SCS_prior). Capturing the gaze of a user (P) on the screen of the computing device in real-time comprises: retrieving a gaze ray (d) of the user (P) with the external device defined in the coordinate system (ECS) of said external device, and intersecting the gaze ray (d) of the user (P) with the screen of the computing device, as a function of the ECS and SCS parameters, to capture the gaze-on-screen in real-time.
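
    A minimal sketch of the real-time step, under assumed conventions: the gaze ray (origin and direction in ECS) is intersected with the screen plane described by the estimated SCS (rotation R, translation t), and the intersection is converted to screen pixels. The assumption that the SCS x/y axes span the screen plane in metres, and the pixel pitch, are illustrative.

```python
# Illustrative only: gaze-ray / screen-plane intersection.
import numpy as np

def gaze_on_screen(o, d, R_scs, t_scs, pixel_pitch_m):
    """o, d: gaze-ray origin and direction in ECS; R_scs, t_scs: screen pose in ECS."""
    n = R_scs[:, 2]                      # screen-plane normal (third SCS axis) in ECS
    denom = n @ d
    if abs(denom) < 1e-9:
        return None                      # ray is parallel to the screen plane
    lam = n @ (t_scs - o) / denom
    if lam < 0:
        return None                      # the screen lies behind the gaze origin
    p = o + lam * d                      # 3D intersection point in ECS
    p_scs = R_scs.T @ (p - t_scs)        # same point expressed in screen coordinates
    return p_scs[:2] / pixel_pitch_m     # point-of-regard in screen pixels
```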

    AUTOMATED CALIBRATION METHOD OF A SYSTEM COMPRISING AN EXTERNAL EYE TRACKING DEVICE AND A COMPUTING DEVICE

    Publication No.: WO2022084803A1

    Publication Date: 2022-04-28

    Application No.: PCT/IB2021/059318

    Application Date: 2021-10-12

    Abstract: The present invention relates to a method for calibrating a system (10) comprising an external eye-tracking device (16) and a computing device (12a; 12b) and for capturing the gaze of a user (P) on the screen (13) of the computing device in real-time. The calibration of the system (10) comprises: capturing with one or more cameras (17, 18) of the eye-tracking device (16) at least one image of landmarks (f1, f2, f3...fn) of the face of the user (P) to identify the 3D position of each landmark in the coordinate system (ECS) of said eye-tracking device; capturing with a camera (15) of the computing device (12a; 12b) the same landmarks (F1, F2, F3) of the face of the user (P) in the image coordinate system (ICS) of the computing device camera (15) to identify the 2D position of each landmark in the image coordinate system (ICS); computing the 3D pose of the camera (15) of the computing device (12a; 12b), defined as the camera coordinate system (CCS), as a function of the 3D position and 2D position of each landmark (f1, f2, f3...fn) respectively in the coordinate system (ECS) and in the coordinate system (ICS), and computing the 3D pose of the screen of the computing device, defined as the screen coordinate system (SCS), as a function of the camera coordinate system and mechanical parameters describing how the screen (13) is positioned with respect to the camera (15) of the computing device. Capturing the gaze of a user (P) on the screen (13) of the computing device in real-time comprises: retrieving a gaze ray (d) of the user (P) with the eye-tracking device (16), and intersecting the gaze ray (d) of the user (P) with the plane of the screen of the computing device, as a function of the ECS and SCS parameters, to capture the gaze-on-screen in real-time.
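
    A minimal sketch of the two pose computations, under assumed OpenCV-style conventions: the webcam pose (CCS) is obtained by solving a PnP problem between the 3D landmarks given in ECS and their 2D detections in ICS, and the screen pose (SCS) follows by composing it with a mechanical camera-to-screen transform. That transform, and all names below, are illustrative.

```python
# Illustrative sketch: webcam pose from facial landmarks, then screen pose from
# mechanical parameters.
import numpy as np
import cv2

def webcam_pose_in_ecs(landmarks_3d_ecs, landmarks_2d_ics, K_webcam, dist_webcam):
    """CCS: pose of the computing-device camera expressed in the eye-tracker frame ECS."""
    ok, rvec, tvec = cv2.solvePnP(np.asarray(landmarks_3d_ecs, np.float64),
                                  np.asarray(landmarks_2d_ics, np.float64),
                                  K_webcam, dist_webcam)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)
    # solvePnP maps ECS points into the webcam frame; invert to get the webcam in ECS.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R.T, (-R.T @ tvec).ravel()
    return T

def screen_pose_in_ecs(T_webcam_in_ecs, T_cam_to_screen):
    """SCS: screen pose in ECS, from the webcam pose and the mechanical parameters
    describing how the screen sits relative to its camera (device-specific assumption)."""
    return T_webcam_in_ecs @ T_cam_to_screen
```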

    A METHOD FOR CAPTURING AND DISPLAYING A VIDEO STREAM

    Publication No.: WO2021053604A1

    Publication Date: 2021-03-25

    Application No.: PCT/IB2020/058718

    Application Date: 2020-09-18

    Abstract: The present invention relates to a method for capturing and displaying a video stream, comprising: capturing with one or a plurality of cameras a plurality of video streams of a scene, said scene comprising at least one person; reconstructing from said plurality of video streams a virtual environment representing the scene, determining the gaze direction of said person using at least one of said plurality of video streams; projecting said virtual environment onto a plane normal to said gaze direction for generating a virtual representation corresponding to what that person is looking at and from the point of view of that person; displaying said virtual representation on a display.
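
    A minimal sketch of the projection step, under assumed graphics conventions: once the person's gaze direction is known, a view matrix whose forward axis is that gaze direction lets the reconstructed virtual environment be projected onto a plane normal to it, i.e. rendered from that person's point of view. The right-handed, look-down-negative-z convention and the up vector are assumptions.

```python
# Illustrative only: view matrix looking along a given gaze direction.
import numpy as np

def view_matrix(eye_pos, gaze_dir, up=(0.0, 1.0, 0.0)):
    f = np.asarray(gaze_dir, dtype=float)
    f /= np.linalg.norm(f)                       # forward axis = gaze direction
    r = np.cross(f, up)                          # assumes gaze is not parallel to `up`
    r /= np.linalg.norm(r)
    u = np.cross(r, f)                           # recomputed orthogonal up vector
    V = np.eye(4)
    V[:3, :3] = np.stack([r, u, -f])             # world-to-view rotation (rows)
    V[:3, 3] = -V[:3, :3] @ np.asarray(eye_pos, dtype=float)
    return V
```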
