-
Publication No.: US11868869B1
Publication Date: 2024-01-09
Application No.: US18215784
Application Date: 2023-06-28
Applicant: ZHEJIANG LAB
Inventor: Gang Huang , Wei Hua , Yongfu Li
CPC classification number: G06N3/049
Abstract: The present invention relates to the field of smart grids and provides a non-intrusive load monitoring method and device based on a temporal attention mechanism. The method comprises the following steps: obtaining the total load data, equipment load data, and corresponding sampling times of a building during a certain period of time; integrating the total load data and the equipment load data with the corresponding sampling times to obtain enhanced total load data and enhanced equipment load data; using a sliding-window method to segment the enhanced total load data and the enhanced equipment load data and constructing a deep learning training dataset; and constructing a neural network model based on a deep learning training framework and training the model using the training dataset. The present invention can effectively extract the working-time patterns of the load and its inherent dependencies, thereby improving the accuracy of load monitoring.
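As a rough illustration of the data-enhancement and sliding-window steps described in this abstract, the sketch below fuses load readings with cyclically encoded sampling times and cuts them into fixed-length training windows. The array names, window length, and time encoding are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def build_windows(total_load, device_load, timestamps, window=64, stride=8):
    """Fuse load readings with their sampling times and cut sliding windows.

    total_load, device_load: 1-D sequences of the same length
    timestamps: 1-D sequence of POSIX seconds aligned with the readings
    Returns (X, Y): enhanced aggregate windows and matching device windows.
    """
    total_load = np.asarray(total_load, dtype=float)
    device_load = np.asarray(device_load, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)

    # Encode time-of-day cyclically so a model can learn working-time patterns.
    frac_of_day = (timestamps % 86400) / 86400.0
    time_feat = np.stack([np.sin(2 * np.pi * frac_of_day),
                          np.cos(2 * np.pi * frac_of_day)], axis=1)

    # "Enhanced" series: each sample carries its reading plus its time features.
    enhanced_total = np.concatenate([total_load[:, None], time_feat], axis=1)
    enhanced_device = np.concatenate([device_load[:, None], time_feat], axis=1)

    X, Y = [], []
    for start in range(0, len(total_load) - window + 1, stride):
        X.append(enhanced_total[start:start + window])
        Y.append(enhanced_device[start:start + window])
    return np.asarray(X), np.asarray(Y)
```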
-
Publication No.: US11436494B1
Publication Date: 2022-09-06
Application No.: US17717121
Application Date: 2022-04-10
Applicant: Zhejiang Lab
Inventor: Gang Huang , Longfei Liao , Wei Hua
Abstract: An optimal power flow computation method based on multi-task deep learning is provided, which is related to the field of smart power grids. The method includes: acquiring state data of a power grid at a certain dispatching moment, and amplifying the collected data samples by means of sampling to acquire training data; applying an optimization method to acquire dispatching solutions of the power grid in different sampling states, and acquiring labels; designing a deep learning neural network model that learns the feasibility and the optimal solution of the optimal power flow computation problem separately, and outputs a feasibility determination and an optimal solution prediction; simultaneously training the feasibility determination and optimal solution prediction tasks of the optimal power flow computation problem; and determining whether there is a feasible dispatching solution, and outputting an optimal dispatching solution or an early warning.
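The multi-task design described above (one network jointly producing a feasibility determination and an optimal-solution prediction, trained simultaneously) could be sketched as follows. Layer sizes, the loss weighting, and the zeroing of the regression loss on infeasible samples are assumptions, not details taken from the patent.

```python
import torch
import torch.nn as nn

class MultiTaskOPFNet(nn.Module):
    """Shared trunk with two heads: feasibility (classification) and
    dispatch solution (regression). Layer sizes are illustrative only."""

    def __init__(self, n_state, n_dispatch, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_state, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.feasibility_head = nn.Linear(hidden, 1)        # logit: feasible or not
        self.solution_head = nn.Linear(hidden, n_dispatch)  # predicted dispatch

    def forward(self, x):
        h = self.trunk(x)
        return self.feasibility_head(h), self.solution_head(h)

def joint_loss(feas_logit, sol_pred, feas_label, sol_label, alpha=1.0):
    """Weighted sum of both task losses so the tasks train simultaneously;
    infeasible samples are zeroed out of the regression term."""
    bce = nn.functional.binary_cross_entropy_with_logits(
        feas_logit.squeeze(-1), feas_label)
    mask = feas_label.unsqueeze(-1)
    mse = nn.functional.mse_loss(sol_pred * mask, sol_label * mask)
    return bce + alpha * mse
```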
-
Publication No.: US12117829B1
Publication Date: 2024-10-15
Application No.: US18505068
Application Date: 2023-11-08
Applicant: ZHEJIANG LAB
Inventor: Yuntao Liu , Yongdong Zhu , Zhifeng Zhao , Wei Hua , Qian Huang , Shuyuan Zhao , Daoxun Li , Zimian Wu
CPC classification number: G05D1/0022 , B60W60/00 , H04L67/12 , B60W2556/45
Abstract: The present disclosure provides an autonomous vehicle remote control apparatus and method based on heterogeneous networks. The apparatus comprises a vehicle information acquisition module, a first message sending module, a first message receiving module, and a first remote control module. By bypassing areas whose network quality cannot support remote control when planning the vehicle path, the possibility of remote-control failure is avoided or greatly reduced; heterogeneous network resources are used efficiently along the driving path, the timeliness with which the remote control terminal obtains vehicle-related information is improved, and the availability and reliability of remote control and the safety of vehicle driving are effectively enhanced.
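To illustrate the idea of bypassing areas whose network quality cannot support remote control during path planning, here is a minimal sketch: a Dijkstra search over a road graph that simply skips segments whose measured network quality falls below a threshold. The graph representation, quality scale, and threshold are assumptions for illustration only, not the patented planner.

```python
import heapq

def plan_path(graph, net_quality, start, goal, min_quality=0.6):
    """Shortest path that avoids road segments whose network quality
    cannot support remote control.

    graph: {node: [(neighbor, distance), ...]}
    net_quality: {(node, neighbor): quality in [0, 1]}
    """
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            # Skip segments where the heterogeneous-network quality is too low.
            if net_quality.get((u, v), 0.0) < min_quality:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if goal not in dist:
        return None  # no route with adequate coverage; warn the operator instead
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```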
-
Publication No.: US11941532B2
Publication Date: 2024-03-26
Application No.: US17726563
Application Date: 2022-04-22
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Wei Hua , Hujun Bao , Fei Yang
Abstract: Disclosed is a method for adapting a deep learning framework to a hardware device based on a unified backend engine, which comprises the following steps: S1, adding the unified backend engine to the deep learning framework; S2, adding the unified backend engine to the hardware device; S3, converting a computational graph, wherein the computational graph compiled and generated by the deep learning framework is converted into an intermediate representation of the unified backend engine; S4, compiling the intermediate representation, wherein the unified backend engine compiles the intermediate representation on the hardware device to generate an executable object; S5, running the executable object, wherein the deep learning framework runs the executable object on the hardware device; S6, managing memory of the unified backend engine.
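A highly simplified sketch of the convert/compile/run/memory-management pipeline (steps S3 through S6) is shown below. All class and method names are hypothetical, and a real unified backend engine would emit device code rather than dispatch Python callables; this only illustrates the shape of the flow.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class IRNode:
    op: str
    inputs: List[str]
    output: str

@dataclass
class BackendIR:
    nodes: List[IRNode] = field(default_factory=list)

class UnifiedBackend:
    """Hypothetical unified backend engine: convert, compile, run, manage memory."""

    def __init__(self, kernels: Dict[str, Callable]):
        self.kernels = kernels                 # op name -> device kernel
        self.buffers: Dict[str, object] = {}   # memory pool managed by the engine (S6)

    def convert(self, framework_graph) -> BackendIR:        # S3: graph -> IR
        ir = BackendIR()
        for op, inputs, output in framework_graph:
            ir.nodes.append(IRNode(op, list(inputs), output))
        return ir

    def compile(self, ir: BackendIR):                        # S4: IR -> executable
        # "Compilation" here just resolves each IR node to a registered kernel.
        return [(self.kernels[n.op], n.inputs, n.output) for n in ir.nodes]

    def run(self, executable, feeds: Dict[str, object]):     # S5: execute on device
        self.buffers.update(feeds)
        for kernel, inputs, output in executable:
            self.buffers[output] = kernel(*[self.buffers[i] for i in inputs])
        return self.buffers
```

A framework would then drive it roughly as `engine.run(engine.compile(engine.convert(graph)), feeds)`.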
-
Publication No.: US11900618B2
Publication Date: 2024-02-13
Application No.: US18338328
Application Date: 2023-06-20
Applicant: ZHEJIANG LAB
Inventor: Yechi Ma , Wei Hua , Quan Feng , Shun Zhang
IPC: G06T7/246
CPC classification number: G06T7/251 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084
Abstract: A system and a method for detecting a moving target based on multi-frame point clouds. The system comprises a voxel feature extraction module; a transformer module used for matching and fusing the feature tensor sequence, fusing the first feature tensor with the second feature tensor and then fusing the running result with each subsequent feature tensor in turn to obtain a final fused feature tensor; and an identification module used for extracting features from the final fused feature tensor and outputting detection information of a target. The method comprises the following steps: S1, constructing each system module; S2, training the model on the data in a training set; S3, predicting with the trained model.
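The step-by-step fusion of the feature tensor sequence can be pictured with the sketch below. For brevity it replaces the patent's transformer fusion module with a simple concatenate-and-project block; the tensor layout and module names are assumptions, not the patented design.

```python
import torch
import torch.nn as nn

class PairwiseFusion(nn.Module):
    """Fuses two equally shaped feature tensors into one (illustrative:
    concatenate along the channel axis, then project back)."""

    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Linear(2 * channels, channels)

    def forward(self, a, b):
        return torch.relu(self.proj(torch.cat([a, b], dim=-1)))

def fuse_sequence(fuse: PairwiseFusion, feature_tensors):
    """Fuse frame 1 with frame 2, then fold every later frame into the
    running result, mirroring the step-by-step fusion described above."""
    fused = feature_tensors[0]
    for nxt in feature_tensors[1:]:
        fused = fuse(fused, nxt)
    return fused
```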
-
Publication No.: US11714995B2
Publication Date: 2023-08-01
Application No.: US17739205
Application Date: 2022-05-09
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Hujun Bao , Wei Hua , Weiqiang Jia
CPC classification number: G06N3/0454 , G06F8/36 , G06F9/4881 , G06F9/545
Abstract: Disclosed are a method and an apparatus for distributed training adaptation in a deep learning framework and an AI accelerator card. The method includes the following steps: S1: the deep learning framework supports single-card configuration of the newly added AI accelerator card, with the following sub-steps: S11: the deep learning framework supports the new hardware; S12: the deep learning framework supports a device thread for the new hardware; S13: the deep learning framework supports memory operations of the new hardware; and S14: the deep learning framework supports operator kernel functions of the new hardware; S2: the deep learning framework supports multi-card configuration of the newly added AI accelerator card; S3: the deep learning framework supports tensor segmentation and multi-card distribution; and S4: the deep learning framework supports multi-card collective communication in the newly added AI accelerator card.
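As an illustration of what "supporting" a newly added accelerator card entails (device thread, memory operations, operator kernels, tensor segmentation, collective communication), the abstract interface below loosely mirrors steps S11 through S14, S3, and S4. It is a hypothetical sketch, not the framework's actual adaptation API.

```python
from abc import ABC, abstractmethod

class AcceleratorDevice(ABC):
    """Hypothetical per-card interface a framework would need to drive."""

    @abstractmethod
    def launch_worker_thread(self):                        # S12: device thread
        ...

    @abstractmethod
    def malloc(self, nbytes: int):                         # S13: memory operations
        ...

    @abstractmethod
    def free(self, handle):
        ...

    @abstractmethod
    def run_kernel(self, op_name: str, inputs, output):    # S14: operator kernels
        ...

    @abstractmethod
    def all_reduce(self, tensor_handle):                   # S4: multi-card collective communication
        ...

def shard_tensor(shape, n_cards, axis=0):
    """S3: naive even split of one tensor axis across cards (remainder ignored)."""
    per_card = shape[axis] // n_cards
    return [(card * per_card, (card + 1) * per_card) for card in range(n_cards)]
```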
-
Publication No.: US11836966B2
Publication Date: 2023-12-05
Application No.: US17896055
Application Date: 2022-08-25
Applicant: ZHEJIANG LAB
Inventor: Wei Hua , Yechi Ma , Shun Zhang
CPC classification number: G06V10/761 , G06V10/82
Abstract: An efficient cross-camera target re-identification method based on similarity, which obtains a plurality of matching pairs and their similarity scores from two groups of targets to be matched. Among the matching pairs whose targets are both still unmatched, only a portion with higher similarity scores is selected each time; these pairs are traversed in descending order of similarity score, and the matching pairs and their similarity scores are output as the matching result. When any target in a matching pair already appears in the matching result, that pair cannot be output as a matching result. The unmatched matching pairs are repeatedly traversed until the matching result reaches the expectation. The method solves the similarity-based multi-target matching problem while greatly reducing time complexity and improving efficiency.
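The traversal described above can be condensed into a greedy matching sketch: candidate pairs are visited in descending order of similarity, and a pair is accepted only if neither of its targets already appears in the result. This collapses the batched selection of higher-scoring pairs into a single sorted pass and assumes a simple data layout, so it is an illustration rather than the patented procedure.

```python
def greedy_cross_camera_match(pairs, expected_matches):
    """Greedy matching by similarity score.

    pairs: list of (target_a, target_b, score) candidate matching pairs
    expected_matches: stop once this many matches have been accepted
    Targets that already appear in the result are never matched again.
    """
    matched_a, matched_b, result = set(), set(), []
    # Visit candidate pairs from the highest similarity score to the lowest.
    for a, b, score in sorted(pairs, key=lambda p: p[2], reverse=True):
        if a in matched_a or b in matched_b:
            continue                      # one of the targets is already matched
        result.append((a, b, score))
        matched_a.add(a)
        matched_b.add(b)
        if len(result) >= expected_matches:
            break
    return result
```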
-
Publication No.: US11823053B2
Publication Date: 2023-11-21
Application No.: US17714454
Application Date: 2022-04-06
Applicant: ZHEJIANG LAB
Inventor: Hongsheng Wang , Wei Hua , Weiqiang Jia , Hujun Bao
Abstract: The disclosure provides a neural network model computation-oriented intermediate representation method and an apparatus thereof. The method includes the following steps: S1, parsing an input model file to acquire the topological structure information of a neural network; S2, constructing a logical computation graph; S21, inferring the physical layout information of each operator in the logical computation graph; S22, inferring the meta attributes of each operator in the logical computation graph; S23, inferring the description information of the input and output logical tensors of each operator in the logical computation graph; S3, constructing a physical computation graph; S31, generating a physical computation graph, etc.
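The intermediate-representation structures named in steps S2 through S31 (operators carrying physical layout, meta attributes, and logical tensor descriptions, later expanded into a physical graph) might look roughly like the dataclasses below. All field names and the naive expansion rule are assumptions for illustration, not the patented representation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TensorDesc:                 # S23: description of a logical tensor
    shape: Tuple[int, ...]
    dtype: str

@dataclass
class LogicalOp:                  # node of the logical computation graph (S2)
    name: str
    op_type: str
    placement: str                                  # S21: physical layout, e.g. "gpu:0,gpu:1"
    meta_attrs: Dict[str, str] = field(default_factory=dict)  # S22: e.g. split/broadcast
    inputs: List[str] = field(default_factory=list)
    outputs: Dict[str, TensorDesc] = field(default_factory=dict)

@dataclass
class LogicalGraph:
    ops: List[LogicalOp] = field(default_factory=list)

def to_physical(graph: LogicalGraph) -> List[LogicalOp]:
    """S3/S31: expand each logical op into one physical op per placed device.
    Real systems also insert transfer/communication ops; omitted for brevity."""
    physical = []
    for op in graph.ops:
        for device in op.placement.split(","):
            physical.append(LogicalOp(f"{op.name}@{device}", op.op_type, device,
                                      dict(op.meta_attrs), list(op.inputs),
                                      dict(op.outputs)))
    return physical
```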