TASK EXECUTION METHOD FOR LARGE MODEL, DEVICE, AND MEDIUM

    Publication (Announcement) No.: US20250094792A1

    Publication (Announcement) Date: 2025-03-20

    Application No.: US18968790

    Application Date: 2024-12-04

    Abstract: A task execution method for a large model, an electronic device, and a storage medium are provided, which relate to the field of artificial intelligence technology, particularly to the fields of deep learning technology and large model technology. The method includes: executing a modality routing task by using a target computing unit based on a target feature to be processed to obtain a modality recognition result; executing a field routing task by using the target computing unit based on the target feature to be processed and a target field gating model parameter to obtain a field recognition result; and executing a feedforward task by using the target computing unit based on the target feature to be processed and a target feedforward task model parameter to obtain a task execution result.
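The three-stage pipeline in this abstract (modality routing, then field routing, then feedforward) can be sketched in plain Python. This is an illustrative toy only; every function name, the gating dictionary, and the feedforward parameter are assumptions, not the patent's actual implementation.

```python
def recognize_modality(feature):
    """Toy modality router: classify the feature as 'text' or 'image' by its element type."""
    return "text" if isinstance(feature[0], str) else "image"

def recognize_field(feature, field_gate):
    """Toy field router: pick the field whose gating weight is highest."""
    return max(field_gate, key=field_gate.get)

def feedforward(feature, ff_params):
    """Toy feedforward task: scale numeric features by a feedforward weight."""
    w = ff_params["weight"]
    return [x * w for x in feature]

def execute_task(feature, field_gate, ff_params):
    """Run the three routed subtasks in sequence on one target feature."""
    modality = recognize_modality(feature)
    field = recognize_field(feature, field_gate)
    result = feedforward(feature, ff_params)
    return modality, field, result
```

In the abstract all three subtasks run on the same target computing unit; here that is mirrored by running them sequentially in one process.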

    TEXT PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

    Publication (Announcement) No.: US20230030471A1

    Publication (Announcement) Date: 2023-02-02

    Application No.: US17698242

    Application Date: 2022-03-18

    Abstract: The present disclosure provides a text processing method and apparatus, an electronic device and a storage medium, and relates to the field of artificial intelligence technologies such as deep learning and natural language processing. The method may include: configuring, for a to-be-processed text, attention patterns corresponding to heads in a Transformer model using a multi-head-attention mechanism respectively, wherein at least one head corresponds to a different attention pattern from the other N−1 heads, and N denotes a number of heads and is a positive integer greater than 1; and processing the text by using the Transformer model. Model performance and a corresponding text processing effect can be improved by using the solutions according to the present disclosure.
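The key idea above, giving at least one attention head a different attention pattern from the other N−1 heads, can be sketched as per-head boolean masks. The specific patterns ("full", "causal", "local") are illustrative assumptions, not the patterns claimed by the patent.

```python
def make_mask(pattern, n):
    """Build an n-by-n boolean attention mask for one head."""
    if pattern == "full":    # every token attends to every token
        return [[True] * n for _ in range(n)]
    if pattern == "causal":  # token i attends only to positions <= i
        return [[j <= i for j in range(n)] for i in range(n)]
    if pattern == "local":   # token i attends to a +/-1 window
        return [[abs(i - j) <= 1 for j in range(n)] for i in range(n)]
    raise ValueError(pattern)

def head_masks(patterns, n):
    """One mask per head; heads may be configured with distinct patterns."""
    return [make_mask(p, n) for p in patterns]
```

Each mask would be applied to its head's attention scores before the softmax, so heads with different patterns attend over different position sets.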

    DOCUMENT IMAGE UNDERSTANDING
    Invention Publication

    Publication (Announcement) No.: US20230177821A1

    Publication (Announcement) Date: 2023-06-08

    Application No.: US18063564

    Application Date: 2022-12-08

    CPC classification number: G06V10/82 G06V30/19147 G06V30/1444

    Abstract: A neural network training method and a document image understanding method are provided. The neural network training method includes: acquiring text comprehensive features of a plurality of first texts in an original image; replacing at least one original region in the original image to obtain a sample image including a plurality of first regions and a ground truth label for indicating whether each first region is a replaced region; acquiring image comprehensive features of the plurality of first regions; inputting the text comprehensive features of the plurality of first texts and the image comprehensive features of the plurality of first regions into a neural network model together to obtain text representation features of the plurality of first texts; determining a predicted label based on the text representation features of the plurality of first texts; and training the neural network model based on the ground truth label and the predicted label.
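The sample-construction step above (replace some regions, record which were replaced as the ground-truth label) can be sketched as follows. The data shapes and the replacement pool are illustrative assumptions.

```python
import random

def make_sample(regions, replacement_pool, k, rng):
    """Replace k randomly chosen regions and return the sample image
    (as a list of regions) plus per-region ground-truth labels:
    1 = replaced region, 0 = original region."""
    idx = rng.sample(range(len(regions)), k)
    sample = list(regions)
    labels = [0] * len(regions)
    for i in idx:
        sample[i] = rng.choice(replacement_pool)
        labels[i] = 1
    return sample, labels
```

During training, the model's predicted labels for each region would be compared against these ground-truth labels to compute the loss.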

    METHOD AND APPARATUS FOR GENERATING NODE REPRESENTATION, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM

    Publication (Announcement) No.: US20230004774A1

    Publication (Announcement) Date: 2023-01-05

    Application No.: US17578683

    Application Date: 2022-01-19

    Abstract: The present disclosure provides a method and apparatus for generating a node representation, an electronic device and a readable storage medium, and relates to the field of deep learning technologies. The method for generating a node representation includes: acquiring a heterogeneous graph to be processed; performing a sampling operation in the heterogeneous graph to be processed according to a first meta path, so as to obtain at least one first walk path; obtaining an initial node representation of each node in the heterogeneous graph to be processed according to the at least one first walk path; and generating the final node representation of each node according to the initial node representation of each node and initial node representations of neighbor nodes of each node. With the present disclosure, accuracy of the generated node representation may be improved.
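The two core steps above, sampling a walk constrained by a meta path and then aggregating neighbor representations into a final node representation, can be sketched as below. The graph encoding (adjacency dict plus a node-type dict) and the mean aggregation are assumptions for illustration.

```python
import random

def meta_path_walk(adj, node_type, start, meta_path, rng):
    """Sample one walk from `start` whose node types follow `meta_path`
    (e.g. ["U", "P", "U"]); stops early if no neighbor of the required type exists."""
    path = [start]
    cur = start
    for want in meta_path[1:]:
        candidates = [n for n in adj[cur] if node_type[n] == want]
        if not candidates:
            break
        cur = rng.choice(candidates)
        path.append(cur)
    return path

def aggregate(init_repr, adj, node):
    """Final representation of `node`: mean of its own initial representation
    and those of its neighbors (a simple stand-in for the patent's aggregation)."""
    reprs = [init_repr[node]] + [init_repr[n] for n in adj[node]]
    dim = len(init_repr[node])
    return [sum(r[d] for r in reprs) / len(reprs) for d in range(dim)]
```

In the method as described, the walks would first be used (e.g. via skip-gram-style training) to produce the initial representations that `aggregate` then combines.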

    TRAINING METHOD AND APPARATUS FOR DOCUMENT PROCESSING MODEL, DEVICE, STORAGE MEDIUM AND PROGRAM

    Publication (Announcement) No.: US20220382991A1

    Publication (Announcement) Date: 2022-12-01

    Application No.: US17883908

    Application Date: 2022-08-09

    Abstract: The present disclosure provides a training method and apparatus for a document processing model, a device, a storage medium and a program, which relate to the field of artificial intelligence, and in particular, to technologies such as deep learning, natural language processing and text recognition. The specific implementation is: acquiring a first sample document; determining element features of a plurality of document elements in the first sample document and positions corresponding to M position types of each document element according to the first sample document; where the document element corresponds to a character or a document area in the first sample document; and performing training on a basic model according to the element features of the plurality of document elements and the positions corresponding to the M position types of each document element to obtain the document processing model.
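The feature-extraction step above, determining positions under M position types for each document element, can be sketched as follows. Here M = 2 is assumed (a 1-D reading-order index and a 2-D box center); the patent does not specify these particular position types.

```python
def element_positions(elements):
    """For each document element (a dict with an axis-aligned bounding `box`),
    compute positions under two assumed position types:
    a 1-D reading-order index and a 2-D box-center coordinate."""
    out = []
    for order, el in enumerate(elements):
        x0, y0, x1, y1 = el["box"]
        out.append({"order": order, "center": ((x0 + x1) / 2, (y0 + y1) / 2)})
    return out
```

These per-element positions, together with the element features, would form the training input to the basic model.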

    DATA PROCESSING
    Invention Application

    Publication (Announcement) No.: US20250028958A1

    Publication (Announcement) Date: 2025-01-23

    Application No.: US18908380

    Application Date: 2024-10-07

    Abstract: A data processing method, and a data processing model and a training method therefor are provided, and relate to the field of artificial intelligence, and specifically, to natural language processing, deep learning technologies, and large model technologies. An implementation solution includes: determining input data, where the input data includes a plurality of tokens; determining a correlation between each of the plurality of tokens and each of a plurality of expert networks based on a gating matrix, where the plurality of expert networks are used to reinforce the plurality of tokens; allocating the plurality of tokens to the plurality of expert networks in a uniform manner based on the correlation and a preset capacity of each expert network, to reinforce the plurality of tokens; and determining a data processing result based on the plurality of reinforced tokens.
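The allocation step above, assigning tokens to expert networks based on correlation scores while respecting each expert's preset capacity, can be sketched with a greedy policy. The greedy tie-breaking and the score layout are illustrative assumptions; the patent only specifies that allocation is uniform up to the capacity.

```python
def allocate_tokens(scores, capacity):
    """scores[t][e]: correlation of token t with expert e (e.g. from a gating matrix).
    Send each token to its highest-scoring expert that still has room, so no
    expert's load exceeds `capacity`. Returns one expert index per token
    (None if every expert is full)."""
    n_experts = len(scores[0])
    load = [0] * n_experts
    assign = []
    for row in scores:
        ranked = sorted(range(n_experts), key=lambda e: -row[e])
        chosen = next((e for e in ranked if load[e] < capacity), None)
        if chosen is not None:
            load[chosen] += 1
        assign.append(chosen)
    return assign
```

Each allocated token would then be reinforced by its expert network, and the reinforced tokens combined into the data processing result.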
