-
Publication No.: US20250005851A1
Publication Date: 2025-01-02
Application No.: US18345843
Filing Date: 2023-06-30
Applicant: QUALCOMM Incorporated
Inventor: Arpit BHATNAGAR , Chiranjib CHOUDHURI , Anupama S , Avani RAO , Ajit Deepak GUPTE
Abstract: Systems and techniques are described herein for generating models of faces. For instance, a method for generating models of faces is provided. The method may include obtaining one or more images of one or both eyes of a face of a user; obtaining audio data based on utterances of the user; and generating, using a machine-learning model, a three-dimensional model of the face of the user based on the one or more images and the audio data.
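For illustration, below is a minimal PyTorch-style sketch of the kind of multimodal pipeline this abstract describes: eye-region images and speech audio are encoded separately and fused to regress 3D face parameters. The module names, feature dimensions, and parameter count are assumptions for illustration, not details from the application.

```python
# Hypothetical sketch: eye crops plus audio fused to predict 3D face parameters.
import torch
import torch.nn as nn

class EyeAudioFaceModel(nn.Module):
    def __init__(self, image_feat_dim=128, audio_feat_dim=64, num_face_params=257):
        super().__init__()
        # Lightweight CNN over one or both eye crops.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, image_feat_dim),
        )
        # 1D CNN over a mel-spectrogram of the user's utterances.
        self.audio_encoder = nn.Sequential(
            nn.Conv1d(80, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, audio_feat_dim),
        )
        # Fused features regress morphable-model-style face parameters.
        self.head = nn.Linear(image_feat_dim + audio_feat_dim, num_face_params)

    def forward(self, eye_images, mel_spectrogram):
        img_feat = self.image_encoder(eye_images)       # (B, image_feat_dim)
        aud_feat = self.audio_encoder(mel_spectrogram)  # (B, audio_feat_dim)
        fused = torch.cat([img_feat, aud_feat], dim=-1)
        return self.head(fused)                         # (B, num_face_params)
```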
-
Publication No.: US20230410447A1
Publication Date: 2023-12-21
Application No.: US17845884
Filing Date: 2022-06-21
Applicant: QUALCOMM Incorporated
Inventor: Ke-Li CHENG , Anupama S , Kuang-Man HUANG , Chieh-Ming KUO , Avani RAO , Chiranjib CHOUDHURI , Michel Adib SARKIS , Ajit Deepak GUPTE , Ning BI
CPC classification number: G06T19/20 , G06T7/75 , G06T17/00 , G06V40/171 , G06T2207/20081 , G06T2207/30201 , G06T2200/08 , G06T2219/2021
Abstract: Systems and techniques are provided for generating a three-dimensional (3D) facial model. For example, a process can include obtaining at least one input image associated with a face. In some aspects, the process can include obtaining a pose for a 3D facial model associated with the face. In some examples, the process can include generating, by a machine learning model, the 3D facial model associated with the face. In some cases, one or more parameters associated with a shape component of the 3D facial model are conditioned on the pose. In some implementations, the 3D facial model is configured to vary in shape based on the pose for the 3D facial model associated with the face.
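Below is a minimal sketch, assuming a 3DMM-style parameterization, of how a shape component could be conditioned on pose in the way this abstract describes; the concatenation scheme, module names, and all dimensions are illustrative assumptions.

```python
# Hypothetical sketch: shape coefficients regressed from image features and pose,
# so the predicted shape can vary with the pose of the 3D facial model.
import torch
import torch.nn as nn

class PoseConditionedShapeHead(nn.Module):
    def __init__(self, image_feat_dim=256, pose_dim=6, num_shape_coeffs=100):
        super().__init__()
        self.shape_head = nn.Sequential(
            nn.Linear(image_feat_dim + pose_dim, 128), nn.ReLU(),
            nn.Linear(128, num_shape_coeffs),
        )

    def forward(self, image_features, pose):
        # image_features: (B, image_feat_dim); pose: (B, pose_dim), e.g. rotation + translation.
        conditioned = torch.cat([image_features, pose], dim=-1)
        return self.shape_head(conditioned)  # (B, num_shape_coeffs)
```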
-
Publication No.: US20240104180A1
Publication Date: 2024-03-28
Application No.: US17932897
Filing Date: 2022-09-16
Applicant: QUALCOMM Incorporated
Inventor: Anupama S , Chiranjib CHOUDHURI , Avani RAO , Ajit Deepak GUPTE
CPC classification number: G06F21/32 , G06V10/82 , G06V40/171
Abstract: Systems and techniques are provided for performing user authentication. For example, a process can include obtaining a plurality of images associated with a face and a facial expression of a user, wherein each respective image of the plurality of images includes a different portion of the face. An encoder neural network can be used to generate one or more predicted three-dimensional (3D) facial modeling parameters, wherein the encoder neural network generates the one or more predicted 3D facial modeling parameters based on the plurality of images. A reference 3D facial model associated with the face and the facial expression can be obtained. An error can be determined between the one or more predicted 3D facial modeling parameters and the reference 3D facial model, and the user can be authenticated based on the error being less than a pre-determined authentication threshold.
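A hedged sketch of the authentication decision this abstract outlines follows: an encoder predicts 3D facial modeling parameters from the partial-face images, and the user is authenticated when the error against an enrolled reference falls below a threshold. The encoder interface, the aggregation across images, and the threshold value are placeholders, not details from the application.

```python
# Hypothetical sketch of threshold-based authentication from predicted 3D face parameters.
import torch

def authenticate(encoder, partial_face_images, reference_params, threshold=0.05):
    """partial_face_images: (N, C, H, W) crops, each covering a different face portion."""
    with torch.no_grad():
        predicted_params = encoder(partial_face_images)   # (N, P) per-image predictions
        predicted_params = predicted_params.mean(dim=0)   # aggregate over the image set
        error = torch.nn.functional.mse_loss(predicted_params, reference_params)
    # Authenticate only if the prediction error is below the pre-determined threshold.
    return error.item() < threshold
```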
-
Publication No.: US20240029354A1
Publication Date: 2024-01-25
Application No.: US17813556
Filing Date: 2022-07-19
Applicant: QUALCOMM Incorporated
Inventor: Ke-Li CHENG , Anupama S , Kuang-Man HUANG , Chieh-Ming KUO , Avani RAO , Chiranjib CHOUDHURI , Michel Adib SARKIS , Ning BI , Ajit Deepak GUPTE
CPC classification number: G06T17/10 , G06T15/04 , G06V40/171 , G06V10/7715 , G06T2200/08
Abstract: Systems and techniques are provided for generating a texture for a three-dimensional (3D) facial model. For example, a process can include obtaining a first frame, the first frame including a first portion of a face. In some aspects, the process can include generating a 3D facial model based on the first frame and generating a first facial feature corresponding to the first portion of the face. In some examples, the process includes obtaining a second frame, the second frame including a second portion of the face. In some cases, the second portion of the face at least partially overlaps the first portion of the face. In some aspects, the process can include generating a second facial feature corresponding to the second portion of the face. In some examples, the process includes combining the first facial feature with the second facial feature to generate an enhanced facial feature, wherein the combining is performed to enhance an appearance of select areas of the enhanced facial feature.
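The sketch below illustrates one plausible form of the feature-combining step, assuming the per-frame facial features live in a shared UV texture space and are blended with per-pixel visibility weights; the weighting scheme and tensor layout are assumptions for illustration only.

```python
# Hypothetical sketch: blend two partial facial features so overlapping,
# better-observed regions dominate the enhanced result.
import torch

def combine_facial_features(first_feature, second_feature,
                            first_visibility, second_visibility, eps=1e-6):
    """Each feature: (C, H, W) in a shared UV space; each visibility map: (1, H, W) in [0, 1]."""
    total = first_visibility + second_visibility + eps
    w1 = first_visibility / total
    w2 = second_visibility / total
    # Weighted blend favors whichever frame observed a given region more directly.
    return w1 * first_feature + w2 * second_feature
```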
-