FACIAL FEATURES TRACKER WITH ADVANCED TRAINING FOR NATURAL RENDERING OF HUMAN FACES IN REAL-TIME

    Publication No.: US20200334853A1

    Publication Date: 2020-10-22

    Application No.: US16921723

    Filing Date: 2020-07-06

    IPC Classes: G06T7/73 G06T13/40 G06K9/00

    Abstract: Tracking units for facial features with advanced training for natural rendering of human faces in real-time are provided. An example device receives a video stream, and upon detecting a visual face, selects a 3D model from a comprehensive set of head orientation classes. The device determines modifications to the selected 3D model to describe the face, then projects a 2D model of tracking points of facial features based on the 3D model, and controls, actuates, or animates hardware based on the facial feature tracking points. The device can switch among an example comprehensive set of 35 different head orientation classes for each video frame, based on suggestions computed from a previous video frame or from yaw and pitch angles of the visual head orientation. Each class of the comprehensive set is trained separately based on a respective collection of automatically marked images for that head orientation class.
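
    A minimal Python sketch of the class-selection step described above, assuming a hypothetical 7 x 5 grid of yaw/pitch class centers to obtain the 35 head orientation classes; the actual class layout and switching heuristics are defined by the patent's trained models, not by this code.

import numpy as np

# Hypothetical class centers: a 7 x 5 yaw/pitch grid giving 35 classes (assumed layout).
YAW_BINS = np.array([-90, -60, -30, 0, 30, 60, 90])   # degrees (assumed)
PITCH_BINS = np.array([-60, -30, 0, 30, 60])          # degrees (assumed)

def select_orientation_class(yaw_deg: float, pitch_deg: float) -> int:
    """Return an index in [0, 34] for the closest (yaw, pitch) class center."""
    yaw_idx = int(np.argmin(np.abs(YAW_BINS - yaw_deg)))
    pitch_idx = int(np.argmin(np.abs(PITCH_BINS - pitch_deg)))
    return yaw_idx * len(PITCH_BINS) + pitch_idx

# Per-frame switching: reuse the previous frame's pose estimate as the suggestion.
prev_yaw, prev_pitch = 25.0, -10.0
print(select_orientation_class(prev_yaw, prev_pitch))  # 4 * 5 + 2 = 22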

    Facial features tracker with advanced training for natural rendering of human faces in real-time

    Publication No.: US20190279393A1

    Publication Date: 2019-09-12

    Application No.: US15912946

    Filing Date: 2018-03-06

    IPC Classes: G06T7/73 G06K9/00 G06T13/40

    Abstract: Tracking units for facial features with advanced training for natural rendering of human faces in real-time are provided. An example device receives a video stream, and upon detecting a visual face, selects a 3D model from a comprehensive set of head orientation classes. The device determines modifications to the selected 3D model to describe the face, then projects a 2D model of tracking points of facial features based on the 3D model, and controls, actuates, or animates hardware based on the facial feature tracking points. The device can switch among an example comprehensive set of 35 different head orientation classes for each video frame, based on suggestions computed from a previous video frame or from yaw and pitch angles of the visual head orientation. Each class of the comprehensive set is trained separately based on a respective collection of automatically marked images for that head orientation class.
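
    This continuation publication shares the abstract above; the sketch below illustrates a different step it mentions, projecting a 2D model of tracking points from the selected 3D model. The weak-perspective camera, rotation convention, scale, and landmark coordinates are illustrative assumptions, not the claimed method.

import numpy as np

def rotation_matrix(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Compose R = Rz(roll) @ Rx(pitch) @ Ry(yaw); angles in radians (assumed convention)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Rx @ Ry

def project_points(points_3d, yaw, pitch, roll, scale=1.0, tx=0.0, ty=0.0):
    """Weak-perspective projection of Nx3 model points to Nx2 image tracking points."""
    rotated = points_3d @ rotation_matrix(yaw, pitch, roll).T
    return scale * rotated[:, :2] + np.array([tx, ty])

# Hypothetical 3D landmarks of a head model (nose tip, left/right eye corners).
landmarks = np.array([[0.0, 0.0, 1.0],
                      [-0.3, 0.2, 0.8],
                      [0.3, 0.2, 0.8]])
print(project_points(landmarks, yaw=np.radians(15), pitch=np.radians(-5), roll=0.0,
                     scale=100.0, tx=320.0, ty=240.0))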

    HUMAN MONITORING SYSTEM INCORPORATING CALIBRATION METHODOLOGY

    Publication No.: US20190102638A1

    Publication Date: 2019-04-04

    Application No.: US16150225

    Filing Date: 2018-10-02

    Abstract: A method for monitoring eyelid opening values. In one embodiment, video image data representative of a person engaged in an activity is acquired with a camera, where the activity may be driving a vehicle, operating industrial equipment, or performing a monitoring or control function. When the person's head undergoes a change in yaw angle such that the eyelids of both eyes are captured by the camera but one eye is closer to the camera than the other, a weighting factor that varies as a function of the yaw angle is applied, and a value representative of eyelid opening, based on both eyes, is calculated.
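
    The abstract only states that the weighting factor varies as a function of the yaw angle. The sketch below shows one plausible blending of left- and right-eye opening measurements, assuming a sinusoidal weight and millimetre-scale opening values; both assumptions are illustrative rather than taken from the patent.

import math

def weighted_eyelid_opening(left_mm: float, right_mm: float, yaw_deg: float) -> float:
    """Blend left/right eyelid openings; positive yaw is assumed to bring the
    right eye closer to the camera."""
    yaw = math.radians(max(-60.0, min(60.0, yaw_deg)))  # clamp to a plausible range
    w_right = 0.5 * (1.0 + math.sin(yaw))               # grows as the head turns right
    w_left = 1.0 - w_right
    return w_left * left_mm + w_right * right_mm

print(weighted_eyelid_opening(left_mm=8.5, right_mm=7.9, yaw_deg=25.0))  # ~8.1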

    Method for generating a set of annotated images

    Publication No.: US20180357819A1

    Publication Date: 2018-12-13

    Application No.: US15621848

    Filing Date: 2017-06-13

    Inventor: Florin OPREA

    Abstract: A method for generating a set of annotated images comprises acquiring a set of images of a subject, each acquired from a different point of view, and generating a 3D model of at least a portion of the subject, the 3D model comprising a set of mesh nodes defined by respective locations in 3D model space, a set of edges connecting pairs of mesh nodes, and texture information for the surface of the model. A set of 2D renderings is generated from the 3D model, each rendering generated from a different point of view in 3D model space, and each rendering is provided with a mapping of x,y locations within the rendering to a respective 3D mesh node. A legacy detector is applied to each rendering to identify locations for a set of detector model points in that rendering. The locations of the detector model points in each rendering and the mapping of x,y locations provided with each rendering are analysed to determine a candidate 3D mesh node corresponding to each model point. A set of annotated images is then generated from the 3D model by adding meta-data to the images identifying the respective x,y locations of the model points within the annotated images.
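
    A short sketch of the association and voting step described above: for each detector landmark, look up the 3D mesh node rendered at that pixel, then vote across renderings to pick a consensus node per model point. The dense per-pixel node map and integer node ids are assumptions made for illustration, not the patent's data structures.

from collections import Counter

def nodes_for_rendering(detector_points, pixel_to_node):
    """detector_points: list of (x, y); pixel_to_node: dict mapping (x, y) pixel
    coordinates in this rendering to the 3D mesh node id rendered there."""
    return [pixel_to_node.get((int(round(x)), int(round(y)))) for x, y in detector_points]

def consensus_nodes(per_rendering_nodes):
    """Vote across renderings for a stable mesh node per detector model point."""
    num_points = len(per_rendering_nodes[0])
    consensus = []
    for i in range(num_points):
        votes = Counter(nodes[i] for nodes in per_rendering_nodes if nodes[i] is not None)
        consensus.append(votes.most_common(1)[0][0] if votes else None)
    return consensus

# Two tiny renderings, each with two detector points and its own pixel-to-node map.
r1 = nodes_for_rendering([(10.2, 7.9), (40.0, 12.1)], {(10, 8): 101, (40, 12): 230})
r2 = nodes_for_rendering([(11.0, 9.0), (39.6, 13.4)], {(11, 9): 101, (40, 13): 231})
print(consensus_nodes([r1, r2]))  # [101, 230] (second point is tied; first vote wins)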