-
Publication number: US12033259B2
Publication date: 2024-07-09
Application number: US17796399
Filing date: 2021-01-29
Applicant: Google LLC
Inventor: Vivek Kwatra , Christian Frueh , Avisek Lahiri , John Lewis
CPC classification number: G06T13/205 , G06T13/40 , G06T17/20
Abstract: Provided is a framework for generating photorealistic 3D talking faces conditioned only on audio input. In addition, the present disclosure provides associated methods to insert generated faces into existing videos or virtual environments. We decompose faces from video into a normalized space that decouples 3D geometry, head pose, and texture. This allows separating the prediction problem into regressions over the 3D face shape and the corresponding 2D texture atlas. To stabilize temporal dynamics, we propose an auto-regressive approach that conditions the model on its previous visual state. We also capture face illumination in our model using audio-independent 3D texture normalization.
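The abstract's pipeline (decoupled 3D geometry and 2D texture atlas, each regressed from audio with auto-regressive conditioning on the previous visual state) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: all dimensions, the linear stand-ins for the learned regressors, and the function names are hypothetical; only the structure mirrors the text.

```python
# Sketch of the audio-to-face decomposition: separate regressions for
# 3D shape and texture atlas, each conditioned on the previous state.
import numpy as np

rng = np.random.default_rng(0)

AUDIO_DIM, N_VERTS, ATLAS_DIM = 64, 468, 256  # hypothetical sizes

# Hypothetical "learned" weights: (audio + previous state) -> prediction.
W_geom = rng.normal(size=(AUDIO_DIM + N_VERTS * 3, N_VERTS * 3)) * 0.01
W_tex = rng.normal(size=(AUDIO_DIM + ATLAS_DIM, ATLAS_DIM)) * 0.01

def predict_frame(audio_feat, prev_verts, prev_atlas):
    """Regress 3D face shape and 2D texture atlas from audio features,
    conditioning each on the previous visual state (auto-regression)."""
    g_in = np.concatenate([audio_feat, prev_verts.ravel()])
    verts = (g_in @ W_geom).reshape(N_VERTS, 3)
    t_in = np.concatenate([audio_feat, prev_atlas])
    atlas = t_in @ W_tex
    return verts, atlas

# Roll the model forward over a short audio clip (five frames).
verts = np.zeros((N_VERTS, 3))
atlas = np.zeros(ATLAS_DIM)
for _ in range(5):
    audio = rng.normal(size=AUDIO_DIM)
    verts, atlas = predict_frame(audio, verts, atlas)

print(verts.shape, atlas.shape)  # (468, 3) (256,)
```

The point of the decomposition is that head pose and illumination are factored out of both regression targets, so the audio-driven model only has to explain shape and appearance changes that correlate with speech.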
-
Publication number: US10514818B2
Publication date: 2019-12-24
Application number: US15092102
Filing date: 2016-04-06
Applicant: Google LLC
Inventor: Sergey Ioffe , Vivek Kwatra , Matthias Grundmann
IPC: G06F3/0481 , G06F16/58 , G06F16/438
Abstract: A computer-implemented method, computer program product, and computing system are provided for interacting with images having similar content. In an embodiment, a method may include identifying a plurality of photographs as including a common characteristic. The method may also include generating a flipbook media item including the plurality of photographs. The method may further include associating one or more interactive control features with the flipbook media item.
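The flow in the abstract — group photographs that share a common characteristic, wrap each group in a flipbook item, and attach interactive controls — can be sketched as below. The shared tag standing in for the "common characteristic", the control names, and the data layout are all hypothetical illustrations, not the patented method.

```python
# Toy flipbook builder: group photos by a shared tag, then wrap each
# sufficiently large group in a "flipbook" with interactive controls.
from collections import defaultdict

photos = [
    {"id": 1, "tag": "beach"},
    {"id": 2, "tag": "beach"},
    {"id": 3, "tag": "city"},
    {"id": 4, "tag": "beach"},
]

def build_flipbooks(photos, min_size=2):
    groups = defaultdict(list)
    for p in photos:
        groups[p["tag"]].append(p)  # "common characteristic" = shared tag
    return [
        {"frames": g, "controls": ["play", "pause", "scrub"]}
        for g in groups.values()
        if len(g) >= min_size
    ]

books = build_flipbooks(photos)
print(len(books), len(books[0]["frames"]))  # 1 3
```

Only the three beach photos form a group large enough to become a flipbook; the lone city photo is left out, matching the idea that a flipbook needs a plurality of similar images.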
-
Publication number: US20230343010A1
Publication date: 2023-10-26
Application number: US17796399
Filing date: 2021-01-29
Applicant: Google LLC
Inventor: Vivek Kwatra , Christian Frueh , Avisek Lahiri , John Lewis
CPC classification number: G06T13/205 , G06T13/40 , G06T17/20
Abstract: Provided is a framework for generating photorealistic 3D talking faces conditioned only on audio input. In addition, the present disclosure provides associated methods to insert generated faces into existing videos or virtual environments. We decompose faces from video into a normalized space that decouples 3D geometry, head pose, and texture. This allows separating the prediction problem into regressions over the 3D face shape and the corresponding 2D texture atlas. To stabilize temporal dynamics, we propose an auto-regressive approach that conditions the model on its previous visual state. We also capture face illumination in our model using audio-independent 3D texture normalization.
-
Publication number: US11042729B2
Publication date: 2021-06-22
Application number: US15831823
Filing date: 2017-12-05
Applicant: Google LLC
Inventor: Avneesh Sud , Steven Hickson , Vivek Kwatra , Nicholas Dufour
IPC: G06K9/00 , A61B5/16 , G06N3/08 , G06N3/04 , H04N5/232 , G06K9/62 , G06F3/01 , A61B5/00 , G01S3/00 , G06K9/46 , H04N13/344
Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
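The train-then-infer loop described in the abstract can be sketched with a toy classifier. The nearest-centroid model below is an illustrative stand-in for the patent's machine-learnt algorithm, and the 32-dimensional "eye-sensor features", label names, and synthetic data are all assumptions for the sketch.

```python
# Toy stand-in for the expression pipeline: train a classifier on
# features from eye-tracking images labeled with expressions, then
# infer the label of a "live" frame.
import numpy as np

rng = np.random.default_rng(1)
FEAT = 32
LABELS = ["neutral", "smile", "surprise"]

# Synthetic training set: features captured while users evince each expression.
centers = rng.normal(size=(3, FEAT)) * 3
X = np.vstack([c + rng.normal(size=(20, FEAT)) for c in centers])
y = np.repeat(np.arange(3), 20)

def train(X, y):
    """Nearest-centroid "training": one mean feature vector per label."""
    return np.stack([X[y == k].mean(axis=0) for k in range(3)])

def infer(model, live_feat):
    """Label a live frame by its closest expression centroid."""
    d = np.linalg.norm(model - live_feat, axis=1)
    return LABELS[int(d.argmin())]

model = train(X, y)
live = centers[1] + rng.normal(size=FEAT) * 0.1  # a "live" smile frame
print(infer(model, live))  # smile
```

The personalization step in the abstract would correspond to adapting such a model with a few extra images of the specific user evincing a subset of the expressions.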
-
Publication number: US20180314881A1
Publication date: 2018-11-01
Application number: US15831823
Filing date: 2017-12-05
Applicant: Google LLC
Inventor: Avneesh Sud , Steven Hickson , Vivek Kwatra , Nicholas Dufour
Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
-
Publication number: US20240320892A1
Publication date: 2024-09-26
Application number: US18734327
Filing date: 2024-06-05
Applicant: Google LLC
Inventor: Vivek Kwatra , Christian Frueh , Avisek Lahiri , John Lewis
CPC classification number: G06T13/205 , G06T13/40 , G06T17/20
Abstract: Provided is a framework for generating photorealistic 3D talking faces conditioned only on audio input. In addition, the present disclosure provides associated methods to insert generated faces into existing videos or virtual environments. We decompose faces from video into a normalized space that decouples 3D geometry, head pose, and texture. This allows separating the prediction problem into regressions over the 3D face shape and the corresponding 2D texture atlas. To stabilize temporal dynamics, we propose an auto-regressive approach that conditions the model on its previous visual state. We also capture face illumination in our model using audio-independent 3D texture normalization.
-
Publication number: US12053301B2
Publication date: 2024-08-06
Application number: US17339128
Filing date: 2021-06-04
Applicant: Google LLC
Inventor: Avneesh Sud , Steven Hickson , Vivek Kwatra , Nicholas Dufour
IPC: A61B5/00 , A61B5/16 , G01S3/00 , G06F3/01 , G06F18/214 , G06F18/2413 , G06N3/04 , G06N3/045 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/82 , G06V20/20 , G06V40/16 , H04N23/60 , H04N23/611 , G06V40/19 , H04N13/344
CPC classification number: A61B5/6803 , A61B5/165 , G01S3/00 , G06F3/013 , G06F18/214 , G06F18/2413 , G06N3/04 , G06N3/045 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V20/20 , G06V40/174 , H04N23/60 , H04N23/611 , A61B5/163 , A61B5/7267 , A61B5/7275 , G06V40/19 , H04N13/344
Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
-
Publication number: US20210166072A1
Publication date: 2021-06-03
Application number: US17120581
Filing date: 2020-12-14
Applicant: Google LLC
Inventor: Vivek Kwatra , Ullas Gargi , Mehmet Emre Sargin , Henry Hao Tang
Abstract: A highlight learning technique is provided to detect and identify highlights in sports videos. A set of event models is calculated from low-level frame information of the sports videos to identify recurring events within the videos. The event models are used to characterize videos by detecting events within the videos and using the detected events to generate an event vector. The event vector is used to train a classifier to identify the videos as highlight or non-highlight.
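The abstract's pipeline — detect recurring events per video, count them into an event vector, and train a classifier on those vectors — can be sketched as below. The event names, the tiny labeled corpus, and the simple threshold rule are illustrative stand-ins for the learned event models and classifier, not the patented technique.

```python
# Toy highlight classifier: videos are characterized by counts of
# detected events (the "event vector"), and a threshold separating
# highlight from non-highlight examples is fit on a labeled corpus.
from collections import Counter

EVENTS = ["crowd_cheer", "whistle", "replay_cut"]  # hypothetical event types

def event_vector(detected_events):
    """Characterize a video by the count of each detected event type."""
    c = Counter(detected_events)
    return [c[e] for e in EVENTS]

# Tiny labeled corpus: (events detected in the video, is_highlight).
corpus = [
    (["crowd_cheer", "replay_cut", "crowd_cheer"], True),
    (["whistle"], False),
    (["crowd_cheer", "whistle", "replay_cut"], True),
    ([], False),
]

# "Training": fit a threshold on total event count separating the classes.
vectors = [(event_vector(ev), hl) for ev, hl in corpus]
threshold = (
    min(sum(v) for v, hl in vectors if hl)
    + max(sum(v) for v, hl in vectors if not hl)
) / 2

def classify(events):
    return sum(event_vector(events)) > threshold

print(classify(["crowd_cheer", "crowd_cheer", "replay_cut"]))  # True
```

A real implementation would detect events from low-level frame features and use a learned classifier over the full event vector rather than a scalar count, but the characterize-then-classify structure is the same.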
-
Publication number: US20250032045A1
Publication date: 2025-01-30
Application number: US18792056
Filing date: 2024-08-01
Applicant: Google LLC
Inventor: Avneesh Sud , Steven Hickson , Vivek Kwatra , Nicholas Dufour
IPC: A61B5/00 , A61B5/16 , G01S3/00 , G06F3/01 , G06F18/214 , G06F18/2413 , G06N3/04 , G06N3/045 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/82 , G06V20/20 , G06V40/16 , G06V40/19 , H04N13/344 , H04N23/60 , H04N23/611
Abstract: Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
-
Publication number: US11556743B2
Publication date: 2023-01-17
Application number: US17120581
Filing date: 2020-12-14
Applicant: Google LLC
Inventor: Vivek Kwatra , Ullas Gargi , Mehmet Emre Sargin , Henry Hao Tang
Abstract: A highlight learning technique is provided to detect and identify highlights in sports videos. A set of event models is calculated from low-level frame information of the sports videos to identify recurring events within the videos. The event models are used to characterize videos by detecting events within the videos and using the detected events to generate an event vector. The event vector is used to train a classifier to identify the videos as highlight or non-highlight.