DISPLAY METHOD AND ELECTRONIC DEVICE
    Invention Publication

    Publication No.: EP4321994A1

    Publication Date: 2024-02-14

    Application No.: EP22814852.4

    Filing Date: 2022-04-06

    Abstract: This application provides a display method and an electronic device, and relates to the field of terminal technologies. In this application, the icon animation and the decoding of the starting window image may be performed in parallel: the application launch animation can begin as soon as the user's launch operation is detected. This reduces waiting latency and improves user experience. The method includes: in response to an application launch operation, starting to draw a starting window and, before the drawing is completed, starting to display an application icon launch animation. After the drawing of the starting window is completed, a starting window animation is displayed to complete the application launch animation.
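The parallel flow the abstract describes can be sketched with Python threading. This is a hypothetical illustration, not the patented implementation: `draw_starting_window`, `play_icon_animation`, and `play_window_animation` are assumed placeholder callbacks, and the point is only the ordering (icon animation starts without waiting for drawing; the window animation runs once drawing finishes).

```python
import threading

def launch_app(draw_starting_window, play_icon_animation, play_window_animation):
    """Sketch of the parallel launch flow: the starting window is drawn on a
    background thread while the icon launch animation plays immediately."""
    drawing_done = threading.Event()

    def draw():
        draw_starting_window()      # e.g. decode the starting-window image
        drawing_done.set()

    worker = threading.Thread(target=draw)
    worker.start()                  # drawing proceeds in parallel...
    play_icon_animation()           # ...while the icon animation starts at once
    drawing_done.wait()             # once drawing completes,
    play_window_animation()         # the starting-window animation finishes the launch
    worker.join()
```

The key design point is that the user-visible animation is never blocked on image decoding; the `Event` only gates the final starting-window animation.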

    METHODS AND SYSTEMS FOR EMOTION-CONTROLLABLE GENERALIZED TALKING FACE GENERATION

    Publication No.: EP4270391A1

    Publication Date: 2023-11-01

    Application No.: EP23154135.0

    Filing Date: 2023-01-31

    IPC Classification: G10L21/00 G06T13/40

    Abstract: This disclosure relates generally to methods and systems for emotion-controllable generalized talking face generation from an arbitrary face image. Most conventional techniques for realistic talking face generation cannot efficiently control the emotion on the face and generalize poorly to an arbitrary unknown target face. The present disclosure proposes a graph convolutional network that uses a speech content feature along with an independent emotion input to generate emotion- and speech-induced motion on a facial geometry-aware landmark representation. This landmark representation is then used by an optical flow-guided texture generation network to produce the texture. The two-branch optical flow-guided texture generation network, with separate motion and texture branches, is designed to handle the motion and texture content independently. The network then renders an emotional talking face animation from a single image of any arbitrary target face.
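The two-stage pipeline in the abstract (speech + emotion → landmark motion, then flow-guided texture warping) might be sketched in outline as below. This is a toy stand-in under loud assumptions: the learned graph convolutional network is replaced by fixed random projections, and the texture branch by a simple integer backward warp; only the data flow and tensor shapes mirror the description.

```python
import numpy as np

rng = np.random.default_rng(0)

def landmark_motion(speech_feat, emotion_emb, n_landmarks=68):
    """Toy stand-in for the motion generator: a speech content feature and an
    independent emotion embedding jointly drive 2-D landmark displacements.
    (Fixed random projections here; the disclosure's network is learned.)"""
    w_speech = rng.standard_normal((speech_feat.size, n_landmarks * 2)) * 0.01
    w_emotion = rng.standard_normal((emotion_emb.size, n_landmarks * 2)) * 0.01
    disp = speech_feat @ w_speech + emotion_emb @ w_emotion
    return disp.reshape(n_landmarks, 2)   # per-landmark (dx, dy)

def warp_texture(image, flow):
    """Toy stand-in for the optical flow-guided texture branch: backward-warp
    the single source image by an integer-rounded dense flow field."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    return image[src_y, src_x]
```

In the disclosed system the motion branch would produce the dense flow from the landmark displacements and the texture branch would inpaint what warping cannot recover; here the two stages are only shape-compatible placeholders.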