Neural network computing-oriented modeling method and apparatus for distributed data routing

    Publication No.: US11805025B1

    Publication Date: 2023-10-31

    Application No.: US17848048

    Filing Date: 2022-06-23

    Applicant: ZHEJIANG LAB

    CPC classification number: H04L41/145 H04L41/16 H04L45/44

    Abstract: The present disclosure provides a neural network computing-oriented modeling method and apparatus for distributed data routing. The method includes the following steps: S1, designing the distributed attribute of a physical tensor: abstracting the mapping relationship between a logic tensor and the physical tensor into three distributed attributes, namely a broadcast attribute, a scatter attribute and a local reduction attribute; S2, deducing the distributed attribute of an output tensor: specifying the distributed attribute of an input tensor, and then deducing the legal distributed attribute of the output tensor from the known distributed attribute of the input tensor; and S3, judging, according to the distributed attributes, whether an intermediate communication primitive needs to be inserted to obtain the distributed attribute of the local physical tensor. The method reduces the difficulty of distributed design and development and promotes the application of large deep neural network models.
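
    The attribute design and deduction in steps S1-S3 can be pictured with a small sketch. The following Python fragment is a minimal, hypothetical illustration of the three distributed attributes (broadcast, scatter/split, local reduction as a partial sum) and of deducing a legal output attribute for a matrix multiplication; the class names, the rule table, and the communication check are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical sketch of the three distributed attributes described in the abstract
# and of output-attribute deduction for C = A @ B. Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Broadcast:          # every device holds the full logic tensor
    pass

@dataclass(frozen=True)
class Split:              # the logic tensor is scattered along one axis
    axis: int

@dataclass(frozen=True)
class PartialSum:         # each device holds a partial result still to be reduced
    pass

def deduce_matmul(attr_a, attr_b):
    """Deduce the distributed attribute of C = A @ B from the attributes of A and B."""
    if isinstance(attr_a, Split) and attr_a.axis == 0 and isinstance(attr_b, Broadcast):
        return Split(axis=0)          # row-parallel: C is split along rows
    if isinstance(attr_a, Broadcast) and isinstance(attr_b, Split) and attr_b.axis == 1:
        return Split(axis=1)          # column-parallel: C is split along columns
    if isinstance(attr_a, Split) and attr_a.axis == 1 \
            and isinstance(attr_b, Split) and attr_b.axis == 0:
        return PartialSum()           # inner-dim split: each device holds a partial sum
    raise ValueError("no legal output attribute for this input combination")

def needs_communication(produced, consumed):
    """If the produced attribute differs from what the consumer expects, an
    intermediate communication primitive (e.g. an all-reduce) must be inserted."""
    return produced != consumed

# Example: an inner-dimension split produces a partial sum; if the next operator
# expects a broadcast tensor, a reduction primitive is required in between.
out = deduce_matmul(Split(axis=1), Split(axis=0))
print(out, needs_communication(out, Broadcast()))   # PartialSum() True
```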

    Method and apparatus for constructing three-dimensional data set of pedestrian re-identification based on neural radiation field

    Publication No.: US12272177B2

    Publication Date: 2025-04-08

    Application No.: US17950033

    Filing Date: 2022-09-21

    Applicant: ZHEJIANG LAB

    Abstract: Disclosed are a method and apparatus for constructing a three-dimensional data set for pedestrian re-identification based on a neural radiation field. The method includes the following steps: S1: capturing images of the pedestrians to be entered by a group of cameras at different viewing angles; S2: generating a three-dimensional spatial position point set by sampling along camera rays in the scene, and converting the observation directions of the cameras corresponding to the three-dimensional spatial position point set into three-dimensional Cartesian unit vectors; and S3: inputting, into a multi-layer perceptron, the three-dimensional spatial position point set and the observation directions converted into three-dimensional Cartesian unit vectors, to output the corresponding densities and colors. The method and apparatus of the present disclosure give a brand-new way of constructing a pedestrian re-identification data set and provide a new approach to data set construction.
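
    Steps S2 and S3 can be sketched in a few lines of NumPy: sampling 3D points along camera rays and normalizing the viewing directions into Cartesian unit vectors before they are passed to the perceptron. The array shapes and the uniform depth sampling below are assumptions for illustration, not the patent's exact procedure.

```python
# Minimal sketch of ray sampling and direction normalization (assumed scheme).
import numpy as np

def sample_points_along_rays(origins, directions, near, far, n_samples):
    """origins, directions: (n_rays, 3). Returns (n_rays, n_samples, 3) sample points."""
    t = np.linspace(near, far, n_samples)                      # depths along each ray
    return origins[:, None, :] + t[None, :, None] * directions[:, None, :]

def to_unit_vectors(directions):
    """Convert viewing directions into three-dimensional Cartesian unit vectors."""
    return directions / np.linalg.norm(directions, axis=-1, keepdims=True)

# Toy example: two rays observed from slightly different viewing angles.
origins = np.zeros((2, 3))
directions = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.0]])
pts = sample_points_along_rays(origins, directions, near=1.0, far=4.0, n_samples=8)
dirs = to_unit_vectors(directions)
# `pts` and `dirs` would then be fed into a multi-layer perceptron that outputs the
# density and color of every sampled point.
print(pts.shape, dirs.shape)   # (2, 8, 3) (2, 3)
```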

    Intermediate representation method and apparatus for parallel execution of graph computation

    Publication No.: US11782723B1

    Publication Date: 2023-10-10

    Application No.: US17992830

    Filing Date: 2022-11-22

    Applicant: ZHEJIANG LAB

    CPC classification number: G06F9/3885 G06F8/433 G06F8/443

    Abstract: Disclosed are an intermediate representation method and apparatus for parallel execution of graph computation. The method includes the following steps: S1: compiling a neural network into a computational graph on a computer; S2: defining branch states of tensor variables in the computational graph; S3: defining a data dependency relationship of the tensor variables in the computational graph; S4: defining a control dependency relationship of the tensor variables in the computational graph; S5: building a data dependency relationship graph of the tensor variables in the computational graph; S6: building a control dependency relationship graph of the tensor variables in the computational graph; and S7: transforming control dependencies into data dependencies. Based on these dependency relationships, the present application derives a parallel computing method that can execute branch threads in parallel in the global computational graph and improves the compilation efficiency of the computational graph.
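
    Steps S5-S7 can be illustrated with a small sketch: build data- and control-dependency maps over tensor variables, then rewrite each control dependency as a data dependency so a single dataflow scheduler can order branch threads. The tiny graph and helper names below are illustrative assumptions, not the disclosed intermediate representation.

```python
# Hypothetical sketch: dependency graphs over tensor variables and lowering of
# control dependencies into data dependencies.
from collections import defaultdict

data_deps = defaultdict(set)      # variable -> variables it reads (data dependencies)
ctrl_deps = defaultdict(set)      # variable -> conditions that guard its execution

def add_data_dep(consumer, producer):
    data_deps[consumer].add(producer)

def add_ctrl_dep(var, condition):
    ctrl_deps[var].add(condition)

def lower_control_to_data():
    """Transform control dependencies into data dependencies: a guarded variable is
    made to 'read' its condition, so an ordinary dataflow scheduler orders it correctly."""
    for var, conds in ctrl_deps.items():
        data_deps[var] |= conds
    ctrl_deps.clear()

# y = f(x); z = g(y) is executed only when branch condition c holds.
add_data_dep("y", "x")
add_data_dep("z", "y")
add_ctrl_dep("z", "c")
lower_control_to_data()
print(dict(data_deps))   # e.g. {'y': {'x'}, 'z': {'y', 'c'}}
```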

    Labeling method and apparatus for named entity recognition of legal instrument

    Publication No.: US11615247B1

    Publication Date: 2023-03-28

    Application No.: US17830786

    Filing Date: 2022-06-02

    Applicant: ZHEJIANG LAB

    Abstract: Disclosed are a labeling method and apparatus for named entity recognition of a legal instrument. The method includes the following steps: step S1: acquiring a legal text and transforming the legal text into an index table; step S2: outputting a sentence feature encoding result; step S3: performing training and prediction; step S4: obtaining a set; step S5: obtaining a multi-head score transfer matrix; step S6: obtaining a score transfer matrix corresponding to the legal text; step S7: determining a recognized nested entity; and step S8: constructing an entity labeling template by using the recognized nested entity. According to the present disclosure, a user attempts to complete nested entity labeling and recognition by changing the input of the BERT model, and the multi-head selection matrix labeling approach of the present disclosure is used to largely relieve the difficulty of recognizing long texts and nested entities in an NER task.
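
    The multi-head span-scoring idea can be pictured with a short sketch: every (label, start, end) triple receives its own score, so overlapping and nested spans can be recognized independently of one another. The random encodings and the bilinear scoring form below are assumptions for illustration, not the disclosed model.

```python
# Hypothetical sketch of a multi-head (start-end) score matrix for nested NER.
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden, n_labels = 6, 16, 3

# Sentence encodings as they might come out of a BERT-style encoder (assumed shapes).
h = rng.normal(size=(seq_len, hidden))
# One bilinear scoring matrix per entity label (one "head" per label).
W = rng.normal(size=(n_labels, hidden, hidden)) * 0.1

# scores[l, i, j] = score of the span starting at token i and ending at token j for label l.
scores = np.einsum("ih,lhk,jk->lij", h, W, h)

def decode_nested(scores, threshold=1.0):
    """Return all (label, start, end) spans above threshold; nested spans may overlap."""
    labels, starts, ends = np.where(scores > threshold)
    return [(int(l), int(i), int(j)) for l, i, j in zip(labels, starts, ends) if i <= j]

print(decode_nested(scores))
```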

    Graph optimization method and apparatus for neural network computation

    Publication No.: US11915135B2

    Publication Date: 2024-02-27

    Application No.: US17950028

    Filing Date: 2022-09-21

    Applicant: ZHEJIANG LAB

    CPC classification number: G06N3/08 G06F18/29 G06N20/00

    Abstract: The disclosure discloses a graph optimization method and apparatus for neural network computation. The graph optimization method includes the following steps: S1: converting a computation graph; S2: allocating a register; S3: defining a route selector for a redefined variable; S4: solving the route selector for the redefined variable; S5: defining a criterion for inserting the route selector for the redefined variable into a node; S6: analyzing a dominating edge set of the node for the redefined variable; S7: inserting the route selector for the redefined variable; and S8: renaming the redefined variable. The disclosure solves the problem of selecting the correct definition of a redefined variable when a node containing the redefined variable in the compile-time computation graph flows through multiple computation paths, reduces the memory cost, and promotes the practical application of deep neural network models.
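
    The route-selector insertion criterion of steps S5-S7 can be sketched compactly: when a variable is redefined on more than one incoming computation path, a selector node is inserted at the join point to pick the correct reaching definition (analogous to a phi function in SSA form). The tiny graph and the simplified criterion below are assumptions for illustration, not the patented procedure.

```python
# Hypothetical sketch of route-selector insertion at join nodes.

# Join node -> list of predecessor nodes in a small computation graph.
predecessors = {
    "merge": ["branch_true", "branch_false"],
    "exit": ["merge"],
}

# Node -> set of variables (re)defined in that node.
definitions = {
    "branch_true": {"x"},
    "branch_false": {"x"},
    "merge": set(),
    "exit": set(),
}

def insert_route_selectors(predecessors, definitions):
    """Insert a route selector for variable v at node n if v is redefined
    in more than one of n's predecessors."""
    selectors = {}
    for node, preds in predecessors.items():
        for var in set().union(*(definitions.get(p, set()) for p in preds)):
            defining_preds = [p for p in preds if var in definitions.get(p, set())]
            if len(defining_preds) > 1:
                selectors.setdefault(node, []).append((var, defining_preds))
    return selectors

print(insert_route_selectors(predecessors, definitions))
# {'merge': [('x', ['branch_true', 'branch_false'])]}
```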

    Methods and apparatuses for executing tasks, storage mediums, and electronic devices

    Publication No.: US12039361B1

    Publication Date: 2024-07-16

    Application No.: US18494002

    Filing Date: 2023-10-25

    Applicant: ZHEJIANG LAB

    CPC classification number: G06F9/48

    Abstract: The present disclosure discloses a method for executing a task. The method includes: a master computing device node in a computing cluster system receives a task code of a to-be-executed task; the master computing device node divides the to-be-executed task into subtasks and, for each of the subtasks, determines the operators required to execute the subtask based on the task code; and the master computing device node distributes the subtasks to computing nodes in the computing cluster system, such that each computing node generates an executable task subgraph based on the operators required to execute its assigned subtask and the data transmission relationships between those operators, and runs the executable task subgraph to execute the to-be-executed task.
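
    The master/worker flow can be illustrated with a short sketch: the master splits a task into subtasks, each computing node assembles an executable subgraph from its operators and the data-transmission edges between them, and then runs it. The class names, fields, and the toy "division" policy below are illustrative assumptions, not the disclosed system.

```python
# Hypothetical sketch of subtask distribution and per-node subgraph construction.
from dataclasses import dataclass, field

@dataclass
class Subtask:
    name: str
    operators: list                      # operator names required by this subtask
    edges: list                          # (producer, consumer) data-transmission pairs

@dataclass
class ComputingNode:
    node_id: int
    subgraph: dict = field(default_factory=dict)

    def build_executable_subgraph(self, subtask):
        # Adjacency list: operator -> downstream operators it feeds.
        self.subgraph = {op: [] for op in subtask.operators}
        for producer, consumer in subtask.edges:
            self.subgraph[producer].append(consumer)

    def run(self):
        # Placeholder for actually executing the subgraph.
        return f"node {self.node_id} executed {list(self.subgraph)}"

def master_distribute(task_code, nodes):
    # Toy "division": one subtask per node, operators taken directly from the task code.
    subtasks = [Subtask(f"sub{i}", ops, [(ops[j], ops[j + 1]) for j in range(len(ops) - 1)])
                for i, ops in enumerate(task_code)]
    for node, subtask in zip(nodes, subtasks):
        node.build_executable_subgraph(subtask)
    return [node.run() for node in nodes]

nodes = [ComputingNode(0), ComputingNode(1)]
print(master_distribute([["load", "matmul", "relu"], ["load", "softmax"]], nodes))
```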

    Method and apparatus of executing dynamic graph for neural network computation

    Publication No.: US11861505B2

    Publication Date: 2024-01-02

    Application No.: US17833088

    Filing Date: 2022-06-06

    Applicant: ZHEJIANG LAB

    CPC classification number: G06N3/10 G06N3/04 G06N7/01

    Abstract: The disclosure discloses a method of executing a dynamic graph for neural network computation and an apparatus thereof. The method includes the following steps: S1: constructing and distributing an operator and a tensor; S2: deducing an operator executing process by an operator interpreter; S3: constructing an instruction of a virtual machine at runtime by the operator interpreter; S4: sending the instruction to the virtual machine at runtime by the operator interpreter; S5: scheduling the instruction by the virtual machine; and S6: releasing an executed instruction by the virtual machine. According to the method and apparatus provided by the disclosure, the runtime is abstracted into a virtual machine; the virtual machine acquires, in real time through the interpreter, the sub-graph constructed by the user at each step, and schedules, issues, and executes each sub-graph.
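
    The interpreter/virtual-machine split in steps S2-S6 can be pictured with a small sketch: an operator interpreter turns each eagerly issued operator into a runtime instruction, the virtual machine schedules and executes pending instructions, and finished instructions are released. The instruction format and the FIFO scheduling policy below are illustrative assumptions, not the disclosed design.

```python
# Hypothetical sketch of an operator interpreter feeding a runtime virtual machine.
from collections import deque
from dataclasses import dataclass

@dataclass
class Instruction:
    op_name: str
    fn: callable
    args: tuple

class VirtualMachine:
    def __init__(self):
        self.pending = deque()

    def receive(self, instruction):
        self.pending.append(instruction)          # interpreter sends instructions here

    def schedule(self):
        results = []
        while self.pending:
            instr = self.pending.popleft()        # schedule in issue order
            results.append(instr.fn(*instr.args)) # execute the instruction
            del instr                             # release the executed instruction
        return results

class OperatorInterpreter:
    def __init__(self, vm):
        self.vm = vm

    def interpret(self, op_name, fn, *args):
        # Deduce the execution and construct a runtime instruction for the VM.
        self.vm.receive(Instruction(op_name, fn, args))

vm = VirtualMachine()
interp = OperatorInterpreter(vm)
interp.interpret("add", lambda a, b: a + b, 1, 2)
interp.interpret("mul", lambda a, b: a * b, 3, 4)
print(vm.schedule())   # [3, 12]
```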
