Providing a network for sharing and viewing artificial intelligence characters

    Publication No.: US12131000B2

    Publication Date: 2024-10-29

    Application No.: US18401383

    Filing Date: 2023-12-30

    Applicant: Theai, Inc.

    Abstract: Systems and methods for providing a network for generating, sharing, and viewing artificial intelligence (AI) characters are provided. An example method includes providing a web-based interface enabling a user to generate and modify a template associated with an AI character generated by an AI character model, where the template includes parameters of the AI character model; receiving at least one value for at least one parameter of the AI character model; receiving a first request to store the template in a data store; storing the template in the data store and attributing the template to an account associated with the user; receiving, from the user, a second request to allow access to the template by at least one further user; and providing the access to the template to the at least one further user to enable the at least one further user to view and interact with the AI character.
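The store/attribute/share/access flow claimed above can be sketched as a small in-memory data store. This is a hypothetical illustration of the claimed steps, not Theai's implementation; all class and method names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    """An AI-character template: model parameters attributed to an owner account."""
    owner: str
    parameters: dict
    shared_with: set = field(default_factory=set)

class TemplateStore:
    def __init__(self):
        self._store = {}

    def save(self, template_id, owner, parameters):
        # First request: store the template and attribute it to the user's account.
        self._store[template_id] = Template(owner, parameters)

    def share(self, template_id, owner, other_user):
        # Second request: the owner allows access by at least one further user.
        t = self._store[template_id]
        if t.owner != owner:
            raise PermissionError("only the owner can share a template")
        t.shared_with.add(other_user)

    def get(self, template_id, user):
        # Access is granted to the owner and to users the template was shared with.
        t = self._store[template_id]
        if user != t.owner and user not in t.shared_with:
            raise PermissionError("access denied")
        return t.parameters
```

In this toy version, a further user can retrieve the template's parameters only after the owner's explicit share request, mirroring the two-request sequence in the claim.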

    METHOD AND APPARATUS FOR GENERATING LIVE VIDEO

    Publication No.: US20240355027A1

    Publication Date: 2024-10-24

    Application No.: US18551308

    Filing Date: 2022-04-11

    Abstract: The method includes acquiring first feature data of a physical object in real time, where the physical object includes a torso, a main arm, an elbow joint, and a forearm, the first feature data represents rotation angles of multiple parts of the forearm around an axial direction, and the rotation angles of the multiple parts of the forearm around the axial direction are positively correlated with the distances from those parts to the elbow joint of the physical object; controlling rotation angles of multiple parts of the forearm skin of a virtual model around the axial direction based on the first feature data acquired in real time, where the rotation angles of the multiple parts of the forearm skin around the axial direction are positively correlated with the distances from those parts to the elbow joint of the virtual model; and generating an image frame of the live video according to the virtual model and the rotation angles of the multiple parts of the forearm skin around the axial direction.
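The "positively correlated with distance from the elbow" rule resembles the common twist-bone technique, where the wrist's axial twist is distributed along the forearm so skin near the elbow barely rotates. A minimal sketch of that idea, assuming a simple linear distribution (the function and parameter names are illustrative, not from the patent):

```python
def forearm_twist_angles(wrist_twist_deg, segment_distances, forearm_length):
    """Distribute the wrist's axial twist linearly along the forearm.

    segment_distances: distance of each forearm-skin segment from the elbow.
    Returns one rotation angle per segment; angles grow with distance from
    the elbow, so they are positively correlated with that distance.
    """
    return [wrist_twist_deg * (d / forearm_length) for d in segment_distances]
```

For a 90-degree wrist twist on a 0.25 m forearm, a segment at the elbow gets 0 degrees, the midpoint 45, and the wrist the full 90; the claimed method drives these angles per frame from the tracked physical arm.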

    Face animation synthesis
    Granted Patent

    Publication No.: US12125147B2

    Publication Date: 2024-10-22

    Application No.: US17663176

    Filing Date: 2022-05-12

    Applicant: Snap Inc.

    Abstract: A methodology for training a machine learning model to generate color-neutral embeddings from input face images is described. For each training face image from the training dataset, the training system generates an input face image that has the color and lighting of a randomly selected image from a set of color source images, and the facial features and expression of the face object from the training face image. Because the machine learning model is "confused" during training by this random re-coloring and re-lighting of each training face image, the trained model generates a color-neutral embedding representing the facial features of the training face image.
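The re-coloring step can be approximated with a simple Reinhard-style mean/std color transfer. This is a minimal sketch of the idea, not Snap's actual pipeline: each training image's per-channel statistics are replaced with those of a randomly chosen color-source image, so the embedding network cannot rely on color or lighting cues.

```python
import numpy as np

def transfer_color(content, color_source, eps=1e-6):
    """Give `content` the per-channel color statistics of `color_source`.

    Both inputs are float arrays of shape (H, W, 3). A stand-in for the
    patent's color-and-lighting transfer step: facial structure comes from
    `content`, color/lighting statistics from `color_source`.
    """
    c_mean = content.mean(axis=(0, 1))
    c_std = content.std(axis=(0, 1)) + eps
    s_mean = color_source.mean(axis=(0, 1))
    s_std = color_source.std(axis=(0, 1)) + eps
    # Normalize away the content image's color, then apply the source's.
    return (content - c_mean) / c_std * s_std + s_mean
```

After the transfer, the output's per-channel mean and standard deviation match the color source while the spatial structure (facial features, expression) still comes from the content image.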

    GENERATING HUMAN MOTION SEQUENCES UTILIZING UNSUPERVISED LEARNING OF DISCRETIZED FEATURES VIA A NEURAL NETWORK ENCODER-DECODER

    Publication No.: US20240346737A1

    Publication Date: 2024-10-17

    Application No.: US18756135

    Filing Date: 2024-06-27

    Applicant: Adobe Inc.

    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing unsupervised learning of discrete human motions to generate digital human motion sequences. The disclosed system utilizes an encoder of a discretized motion model to extract a sequence of latent feature representations from a human motion sequence in an unlabeled digital scene. The disclosed system also determines sampling probabilities from the sequence of latent feature representations in connection with a codebook of discretized feature representations associated with human motions. The disclosed system converts the sequence of latent feature representations into a sequence of discretized feature representations by sampling from the codebook based on the sampling probabilities. Additionally, the disclosed system utilizes a decoder to reconstruct a human motion sequence from the sequence of discretized feature representations. The disclosed system also utilizes a reconstruction loss and a distribution loss to learn parameters of the discretized motion model.
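The codebook step described above resembles a stochastic variant of vector quantization (as in VQ-VAE-style models). A hypothetical numpy sketch of just that step, assuming sampling probabilities come from a softmax over negative squared distances to the codebook entries (the encoder and decoder networks are omitted, and all shapes and names are assumptions):

```python
import numpy as np

def discretize(latents, codebook, rng):
    """Replace each latent vector with a sampled codebook entry.

    latents: (T, D) sequence of latent feature representations.
    codebook: (K, D) discretized feature representations.
    Returns the (T, D) discretized sequence and the (T,) sampled indices.
    """
    # Squared distances between each latent and each codebook entry.
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    # Sampling probabilities: softmax over negative distances
    # (closer entries are more likely to be sampled).
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # Sample one codebook index per time step from the probabilities.
    idx = np.array([rng.choice(len(codebook), p=p) for p in probs])
    return codebook[idx], idx
```

A decoder would then reconstruct motion from the discretized sequence, with the reconstruction loss driving the codebook and encoder/decoder parameters during training.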

    GRAPH SIMULATION FOR FACIAL MICRO FEATURES WITH DYNAMIC ANIMATION

    Publication No.: US20240346733A1

    Publication Date: 2024-10-17

    Application No.: US18628602

    Filing Date: 2024-04-05

    IPC Classes: G06T13/40 G06T15/04 G06T17/00

    CPC Classes: G06T13/40 G06T15/04 G06T17/00

    Abstract: The present invention sets forth a technique for simulating wrinkles under dynamic facial expression. This technique includes sampling a plurality of nodes from a three-dimensional (3D) representation of a facial structure, wherein each node represents a pore in the facial structure. The technique also generates one or more edges, with each of the one or more edges connecting a node of the plurality of nodes to a different node selected from the plurality of nodes. The technique further generates a wrinkle graph comprising the plurality of nodes, the one or more edges, and a plurality of edge weights associated with the edges included in the wrinkle graph. The technique may also modify the 3D representation of the facial structure based on the wrinkle graph and one or more dynamic expressions associated with the 3D representation.
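The node/edge/weight construction above can be sketched as a nearest-neighbor graph over sampled pore positions. This is an illustrative toy version under assumptions of my own (k-nearest-neighbor edges, inverse-distance weights); the patent does not specify how edges or weights are chosen, and the real technique operates on a 3D facial mesh rather than a point list.

```python
import math

def build_wrinkle_graph(points, k=2):
    """Build an undirected graph over sampled pore positions.

    points: list of (x, y, z) pore positions sampled from the facial
    surface. Each node is connected to its k nearest neighbors, and each
    edge is weighted inversely by its length.
    Returns (edges, weights) with edges as (i, j) index pairs, i < j.
    """
    edges, weights = [], {}
    for i, p in enumerate(points):
        # Distances from node i to every other node, nearest first.
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        for d, j in dists[:k]:
            e = (min(i, j), max(i, j))  # canonical undirected edge
            if e not in weights:
                edges.append(e)
                weights[e] = 1.0 / (d + 1e-6)
    return edges, weights
```

In the claimed technique, such a graph (nodes, edges, edge weights) would then drive modification of the 3D facial representation as dynamic expressions deform it.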