-
11.
Publication Number: US11941507B2
Publication Date: 2024-03-26
Application Number: US17954109
Filing Date: 2022-09-27
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Guang Chen
Abstract: Disclosed are a data flow method and apparatus for neural network computation. The data flow method for neural network computation includes initializing the lifecycle of a variable in a computational graph, and defining a propagation rule by which a variable in use flows through a node: when a definition of the variable is produced at a predecessor node, the input set of valid variables flowing through the node contains the variable. The method may be used for neural network computation in a deep learning training system.
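As a rough illustration of that propagation rule (not the patented implementation), the sketch below iterates a classic forward data-flow pass to a fixed point: a variable joins a node's valid-input set when some predecessor defines it or already carries it. The graph encoding and helper names are assumptions.

```python
from collections import defaultdict

def propagate_valid_variables(nodes, preds, defs):
    """nodes: node ids; preds: node -> predecessor ids;
    defs: node -> set of variables defined (produced) at that node."""
    valid_in = defaultdict(set)          # variables valid on entry to each node
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for n in nodes:
            # a variable flows into n if a predecessor defines it
            # or already has it valid on entry (its lifecycle continues)
            incoming = set()
            for p in preds.get(n, []):
                incoming |= defs.get(p, set()) | valid_in[p]
            if incoming - valid_in[n]:
                valid_in[n] |= incoming
                changed = True
    return valid_in

# toy computational graph: a -> b -> c
nodes = ["a", "b", "c"]
preds = {"b": ["a"], "c": ["b"]}
defs = {"a": {"x"}, "b": {"y"}}
print(dict(propagate_valid_variables(nodes, preds, defs)))
# {'a': set(), 'b': {'x'}, 'c': {'x', 'y'}}
```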
-
12.
Publication Number: US11861505B2
Publication Date: 2024-01-02
Application Number: US17833088
Filing Date: 2022-06-06
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Hujun Bao , Guang Chen
Abstract: The disclosure provides a method and an apparatus for executing a dynamic graph for neural network computation. The method includes the following steps: S1: constructing and distributing an operator and a tensor; S2: deducing the operator execution process by an operator interpreter; S3: constructing a runtime instruction of a virtual machine by the operator interpreter; S4: sending the instruction to the virtual machine at runtime by the operator interpreter; S5: scheduling the instruction by the virtual machine; and S6: releasing the executed instruction by the virtual machine. In the provided method and apparatus, the runtime is abstracted as a virtual machine: the virtual machine acquires, through the interpreter and in real time, the sub-graph constructed by the user at each step, and then schedules, issues, and executes each sub-graph.
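A minimal sketch of the S1-S6 flow, under the assumption of a FIFO scheduler and in-process "instructions"; the class and method names are illustrative, not the patented design. The interpreter turns each operator call into an instruction and submits it; the virtual machine schedules, executes, and releases it.

```python
from collections import deque

class Instruction:
    def __init__(self, op_name, fn, inputs):
        self.op_name, self.fn, self.inputs = op_name, fn, inputs
        self.output = None

class OperatorInterpreter:
    """S2-S4: deduce the execution, build the instruction, send it on."""
    def __init__(self, vm):
        self.vm = vm
    def dispatch(self, op_name, fn, *inputs):
        self.vm.submit(Instruction(op_name, fn, inputs))

class VirtualMachine:
    """S5-S6: schedule each instruction, then release it after execution."""
    def __init__(self):
        self.queue = deque()
    def submit(self, instruction):
        self.queue.append(instruction)
    def run(self):
        while self.queue:
            instr = self.queue.popleft()            # S5: FIFO scheduling
            instr.output = instr.fn(*instr.inputs)
            print(f"executed {instr.op_name} -> {instr.output}")
            # S6: no reference survives this iteration, so the executed
            # instruction object is reclaimed

vm = VirtualMachine()
interp = OperatorInterpreter(vm)
interp.dispatch("add", lambda a, b: a + b, 2, 3)
interp.dispatch("mul", lambda a, b: a * b, 4, 5)
vm.run()   # executed add -> 5, executed mul -> 20
```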
-
13.
Publication Number: US11526774B2
Publication Date: 2022-12-13
Application Number: US17564071
Filing Date: 2021-12-28
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Haijun Shan , Jiaqing Fu
Abstract: Disclosed are a method for automatically compressing a multi-task-oriented pre-trained language model and a platform therefor. In the method, a meta-network serving as a structure generator is designed; a knowledge distillation coding vector is constructed based on a knowledge distillation method of Transformer-layer sampling; and the distillation structure model corresponding to the currently input coding vector is generated by the structure generator. At the same time, a Bernoulli-distribution sampling method is provided for training the structure generator: in each iteration, each encoder unit is sampled via the Bernoulli distribution to form the corresponding coding vector. By varying the coding vector input to the structure generator together with a small batch of training data, the structure generator and the corresponding distillation structure are jointly trained, so that a structure generator capable of generating weights for different distillation structures can be obtained.
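The layer-sampling idea can be sketched as below, assuming a toy setup: each Transformer encoder layer is kept or dropped by a Bernoulli draw, producing a coding vector, and a small "structure generator" maps that vector to per-layer weights. The generator architecture and shapes are illustrative assumptions, not the patented meta-network.

```python
import random

def sample_coding_vector(num_layers, keep_prob=0.5):
    # one Bernoulli draw per encoder layer (1 = layer kept in the student)
    return [1 if random.random() < keep_prob else 0 for _ in range(num_layers)]

class StructureGenerator:
    """Toy meta-network: a linear map from coding vector to layer weights."""
    def __init__(self, num_layers):
        self.w = [[random.gauss(0, 0.1) for _ in range(num_layers)]
                  for _ in range(num_layers)]
    def __call__(self, coding):
        # weight_i = sum_j w[i][j] * coding[j]; in training, these weights
        # and the sampled distillation structure are updated jointly
        return [sum(wij * cj for wij, cj in zip(row, coding)) for row in self.w]

coding = sample_coding_vector(num_layers=12)    # one distillation structure
gen = StructureGenerator(num_layers=12)
print(coding, gen(coding)[:3])
```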
-
14.
Publication Number: US20220138414A1
Publication Date: 2022-05-05
Application Number: US17531813
Filing Date: 2021-11-22
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Haijun Shan , Shengjian Hu
Abstract: Disclosed are a meta-knowledge fine-tuning method and platform for a multi-task language model. The method obtains highly transferable shared knowledge, that is, meta-knowledge, across different data sets for tasks of the same category, and makes the learning processes of same-category tasks that correspond to different data sets and domains interrelate and mutually reinforce one another. This improves the fine-tuning effect of downstream tasks of the same category on data sets of different domains when applying the language model, and improves the parameter-initialization ability and the generalization ability of a general language model for tasks of that category.
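One plausible reading of the cross-domain reinforcement is interleaving mini-batches from same-category data sets of different domains into one shared fine-tuning loop, so no single domain dominates. The sketch below shows only that interleaving; the loop, loaders, and `model_step` are placeholders, not the patented procedure.

```python
from itertools import cycle, islice

def interleave_domains(domain_loaders, steps):
    """Round-robin over per-domain batch iterators for `steps` updates."""
    robin = cycle(domain_loaders.items())
    for domain, loader in islice(robin, steps):
        yield domain, next(loader)

def fine_tune(model_step, domain_loaders, steps=4):
    for domain, batch in interleave_domains(domain_loaders, steps):
        loss = model_step(batch)   # one shared model sees every domain
        print(f"domain={domain} loss={loss:.3f}")

# toy per-domain "loaders" for one task category (e.g. text classification)
loaders = {
    "news":    cycle([[0.2, 0.4], [0.1, 0.3]]),
    "reviews": cycle([[0.5, 0.6]]),
}
fine_tune(lambda batch: sum(batch) / len(batch), loaders)
```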
-
15.
Publication Number: US11941532B2
Publication Date: 2024-03-26
Application Number: US17726563
Filing Date: 2022-04-22
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Wei Hua , Hujun Bao , Fei Yang
Abstract: Disclosed is a method for adapting a deep learning framework to a hardware device based on a unified backend engine, which comprises the following steps: S1, adding the unified backend engine to the deep learning framework; S2, adding the unified backend engine to the hardware device; S3, converting a computational graph, wherein the computational graph compiled and generated by the deep learning framework is converted into an intermediate representation of the unified backend engine; S4, compiling the intermediate representation, wherein the unified backend engine compiles the intermediate representation on the hardware device to generate an executable object; S5, running the executable object, wherein the deep learning framework runs the executable object on the hardware device; and S6, managing the memory of the unified backend engine.
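A hedged sketch of the S3-S6 portion of the pipeline (S1-S2 are integration steps with no natural code form): the framework's graph is lowered to a backend IR, compiled into an executable object for the device, run, and backed by an engine-managed memory pool. All class and method names here are assumptions for illustration.

```python
class UnifiedBackendEngine:
    def __init__(self, device):
        self.device = device
        self.pool = {}                            # S6: simple buffer pool

    def to_ir(self, framework_graph):             # S3: graph -> backend IR
        return [("op", node) for node in framework_graph]

    def compile(self, ir):                        # S4: IR -> executable object
        def executable(inputs):
            out = inputs
            for _, node in ir:                    # run each lowered op in order
                out = node(out)
            return out
        return executable

    def alloc(self, key, nbytes):                 # S6: reuse freed buffers
        return self.pool.setdefault(key, bytearray(nbytes))

engine = UnifiedBackendEngine(device="accelerator0")
graph = [lambda x: x + 1, lambda x: x * 2]        # toy "computational graph"
exe = engine.compile(engine.to_ir(graph))
print(exe(3))                                     # S5: framework runs it -> 8
```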
-
16.
Publication Number: US11934887B1
Publication Date: 2024-03-19
Application Number: US18466384
Filing Date: 2023-09-13
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Fei Wu , Guang Chen , Feng Lin
CPC classification number: G06F9/5072 , G06F8/41 , G06F9/5066 , G06F2209/5016 , G06F2209/5017
Abstract: The present disclosure discloses a distributed model compilation system. A master node of the system determines the logic calculation graph of a model based on model information, divides the logic calculation graph into multiple logic calculation sub-graphs, generates a distributing message for each logic calculation sub-graph, and transmits each distributing message to a slave node. Each slave node allocates local computing resources to compile its logic calculation sub-graph based on the received distributing message, and transmits compilation-completion information to the master node. The master node determines that model compilation is complete based on the compilation-completion information returned by each slave node, and executes the target work based on the compiled model.
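A minimal sketch of the master/slave protocol, with threads standing in for remote slave nodes and a round-robin split standing in for the real graph partitioner; the message format and function names are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def split_graph(logic_graph, num_parts):
    """Toy partition: deal ops out round-robin into sub-graphs."""
    return [logic_graph[i::num_parts] for i in range(num_parts)]

def slave_compile(message):
    # each slave allocates local resources and compiles its sub-graph,
    # then reports completion back to the master
    compiled = [f"compiled({op})" for op in message["sub_graph"]]
    return {"node": message["node"], "status": "done", "result": compiled}

logic_graph = ["matmul", "relu", "matmul", "softmax"]
messages = [{"node": i, "sub_graph": sg}                 # distributing messages
            for i, sg in enumerate(split_graph(logic_graph, 2))]
with ThreadPoolExecutor() as pool:                        # "slave nodes"
    replies = list(pool.map(slave_compile, messages))
assert all(r["status"] == "done" for r in replies)        # master checks completion
print("model compilation finished:", replies)
```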
-
17.
Publication Number: US11810366B1
Publication Date: 2023-11-07
Application Number: US18072002
Filing Date: 2022-11-30
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Guang Chen
Abstract: Disclosed are a joint modeling method and apparatus for enhancing local features of pedestrians. The method includes the following steps: S1: acquiring an original surveillance-video image data set and dividing it proportionally into a training set and a test set; S2: cutting the images of the training set into blocks to obtain image-block vector sequences. In the present disclosure, local features of pedestrians in video images are extracted by a multi-head attention neural network; weight parameters of image channels are learned by channel convolution kernels; spatial features of the images are scanned through spatial convolution; and local features of pedestrians are enhanced to improve the pedestrian recognition rate. A feed-forward neural network and an activation function are adopted to realize pedestrian re-identification, thereby obtaining usable face images.
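Step S2 resembles a vision-transformer front end; a minimal sketch of cutting one frame into a patch-vector sequence is given below. Patch size and array layout are illustrative assumptions, and the attention and convolution stages described above would consume these vectors.

```python
import numpy as np

def to_patch_vectors(image, patch=16):
    """image: H x W x C array -> (num_patches, patch*patch*C) sequence."""
    h, w, c = image.shape
    rows, cols = h // patch, w // patch
    blocks = (image[:rows * patch, :cols * patch]       # crop to a multiple
              .reshape(rows, patch, cols, patch, c)
              .transpose(0, 2, 1, 3, 4))                # rows, cols, ph, pw, c
    return blocks.reshape(rows * cols, patch * patch * c)

frame = np.random.rand(128, 64, 3)                      # one surveillance frame
seq = to_patch_vectors(frame)
print(seq.shape)                                        # (32, 768)
```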
-
18.
Publication Number: US12272177B2
Publication Date: 2025-04-08
Application Number: US17950033
Filing Date: 2022-09-21
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Guang Chen , Hujun Bao
Abstract: Disclosed are a method and apparatus for constructing a three-dimensional data set for pedestrian re-identification based on a neural radiation field. The method includes the following steps: S1: capturing images of the pedestrians to be entered with a group of cameras at different viewing angles; S2: generating a three-dimensional spatial position point set by sampling along camera rays in the scenario, and converting the observation directions of the cameras corresponding to the point set into three-dimensional Cartesian unit vectors; and S3: inputting the three-dimensional spatial position point set and the converted observation directions into a multi-layer perceptron, which outputs the corresponding densities and colors. The method and apparatus of the present disclosure give a brand-new way of constructing a pedestrian re-identification data set and provide a new idea for data set construction.
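A hedged sketch of step S2 under a simplified pinhole-camera assumption: sample 3D points at several depths along one camera ray and normalize the observation direction to a Cartesian unit vector; these (position, direction) pairs are what the perceptron would map to density and color. The sampling scheme and parameter values are assumptions.

```python
import numpy as np

def sample_along_ray(origin, direction, near=0.5, far=4.0, n=8):
    d = direction / np.linalg.norm(direction)     # 3D Cartesian unit vector
    t = np.linspace(near, far, n)                 # depths along the ray
    points = origin[None, :] + t[:, None] * d[None, :]
    return points, d

origin = np.array([0.0, 0.0, 0.0])                # camera center
direction = np.array([0.3, -0.2, 1.0])            # one pixel's view ray
pts, unit_dir = sample_along_ray(origin, direction)
print(pts.shape, unit_dir)                        # (8, 3) and a unit vector
```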
-
19.
Publication Number: US11823053B2
Publication Date: 2023-11-21
Application Number: US17714454
Filing Date: 2022-04-06
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Wei Hua , Weiqiang Jia , Hujun Bao
Abstract: The disclosure provides a computation-oriented intermediate representation method for neural network models and an apparatus thereof. The method includes the following steps: S1, parsing an input model file to acquire the topological structure information of the neural network; S2, constructing a logical computation graph; S21, inferring the physical layout information of each operator in the logical computation graph; S22, inferring the meta attributes of each operator in the logical computation graph; S23, inferring the description information of the input and output logical tensors of each operator in the logical computation graph; S3, constructing a physical computation graph; S31, generating the physical computation graph; etc.
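The per-operator inference in S21-S23 can be pictured with a small record type, as sketched below: each logical-graph node carries an inferred physical layout (device placement), a meta attribute (how its tensor is split, broadcast, or partially summed across devices), and the description of its output logical tensor. All field names are assumptions, loosely echoing SBP-style attributes rather than the patented representation.

```python
from dataclasses import dataclass, field

@dataclass
class TensorDesc:
    shape: tuple
    dtype: str = "float32"

@dataclass
class LogicalOp:
    name: str
    placement: str                  # S21: physical layout, i.e. which devices
    meta_attr: str                  # S22: e.g. "split(0)" | "broadcast" | "partial_sum"
    inputs: list = field(default_factory=list)

    def infer_output(self):         # S23: toy inference, elementwise ops
        return TensorDesc(self.inputs[0].shape)   # keep the input's shape

x = TensorDesc((64, 128))
op = LogicalOp("relu", placement="devices[0-3]", meta_attr="split(0)", inputs=[x])
print(op.infer_output())            # TensorDesc(shape=(64, 128), dtype='float32')
```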
-
20.
Publication Number: US11782723B1
Publication Date: 2023-10-10
Application Number: US17992830
Filing Date: 2022-11-22
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Guang Chen , Lingfang Zeng , Aimin Pan
CPC classification number: G06F9/3885 , G06F8/433 , G06F8/443
Abstract: Disclosed are an intermediate representation method and apparatus for the parallel execution of graph computation. The method includes the following steps: S1: compiling a neural network into a computational graph on a computer; S2: defining branch states of tensor variables in the computational graph; S3: defining a data dependency relationship of the tensor variables in the computational graph; S4: defining a control dependency relationship of the tensor variables in the computational graph; S5: building a data dependency graph of the tensor variables in the computational graph; S6: building a control dependency graph of the tensor variables in the computational graph; and S7: transforming control dependencies into data dependencies. Based on these dependency relationships, the present application derives a parallel computing method that can execute branch threads in parallel within the global computational graph, and improves the compilation efficiency of the computational graph.
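One common way to realize S7, sketched below under stated assumptions: each control edge is rewritten as a data edge by routing the controlled variable through an identity-like guard node that also consumes the branch predicate, so consumers depend on data alone and branches can be scheduled like ordinary parallel work. The edge-list representation and the guard-node naming are my assumptions, not the patented transformation.

```python
def transform_control_to_data(data_deps, control_deps):
    """Both arguments are lists of (src, dst) edges over tensor-variable names."""
    new_data = list(data_deps)
    for pred, var in control_deps:
        gated = f"identity({var}, guard={pred})"   # guard node for this branch
        new_data.append((pred, gated))             # predicate becomes a data input
        new_data.append((var, gated))
        # consumers of `var` now read the gated value instead
        new_data = [(gated if s == var and d != gated else s, d)
                    for s, d in new_data]
    return new_data

data_deps = [("x", "y"), ("y", "z")]
control_deps = [("cond", "y")]                     # y only runs if cond holds
for edge in transform_control_to_data(data_deps, control_deps):
    print(edge)   # cond and y both feed the guard node, which feeds z
```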