-
Publication No.: US11748250B2
Publication Date: 2023-09-05
Application No.: US17536475
Filing Date: 2021-11-29
Inventors: Jianjun Li, Meng Yao, Zhenjiang Wang, Yu Zhou
Abstract: This application discloses a data processing method and apparatus, an electronic device, and a storage medium. When an operation layer of a neural network model is executed, based on a pre-stored buffer allocation relationship, a first address range for cyclic addressing is set for a first buffer corresponding to the input data, and a second address range for cyclic addressing is set for a second buffer corresponding to the output result. Subsequently, cyclic addressing can be performed in the first buffer based on the first address range, to read the input data for the operation layer; and cyclic addressing can be performed in the second buffer based on the second address range, to write the output result of the operation layer into the second buffer. In this way, buffer utilization is made more efficient, and the operation efficiency of the model is further improved.
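The cyclic-addressing idea above can be illustrated with a minimal ring-buffer sketch. This is not the patented implementation; the address range, fold-back rule, and class interface are illustrative assumptions.

```python
class CyclicBuffer:
    """A buffer whose addresses wrap around within a fixed address range,
    so a small on-chip region can stream a layer's input or output data."""

    def __init__(self, start, size):
        self.start = start          # first address of the cyclic range
        self.size = size            # length of the cyclic address range
        self.mem = [0] * size

    def _index(self, addr):
        # Fold any address back into [start, start + size).
        return (addr - self.start) % self.size

    def write(self, addr, value):
        self.mem[self._index(addr)] = value

    def read(self, addr):
        return self.mem[self._index(addr)]

# Addresses past the end of the range wrap to the beginning,
# so sequential writes reuse the same physical storage.
buf = CyclicBuffer(start=100, size=4)
buf.write(100, "a")
buf.write(104, "b")   # 104 wraps onto address 100, overwriting "a"
```

Because reads fold addresses the same way as writes, a layer can stream through an address space far larger than the buffer itself.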
-
Publication No.: US20220197786A1
Publication Date: 2022-06-23
Application No.: US17536475
Filing Date: 2021-11-29
Inventors: Jianjun Li, Meng Yao, Zhenjiang Wang, Yu Zhou
Abstract: This application discloses a data processing method and apparatus, an electronic device, and a storage medium. When an operation layer of a neural network model is executed, based on a pre-stored buffer allocation relationship, a first address range for cyclic addressing is set for a first buffer corresponding to the input data, and a second address range for cyclic addressing is set for a second buffer corresponding to the output result. Subsequently, cyclic addressing can be performed in the first buffer based on the first address range, to read the input data for the operation layer; and cyclic addressing can be performed in the second buffer based on the second address range, to write the output result of the operation layer into the second buffer. In this way, buffer utilization is made more efficient, and the operation efficiency of the model is further improved.
-
Publication No.: US11328521B2
Publication Date: 2022-05-10
Application No.: US16808780
Filing Date: 2020-03-04
Inventors: Siyu Zeng, Degang Yang
Abstract: A map construction method, an electronic device and a readable storage medium are disclosed. The map construction method includes: obtaining three-dimensional point cloud data corresponding to observed objects in current target frame data, the observed objects including a feature object and a non-feature object; obtaining a feature parameter corresponding to the feature object based on three-dimensional point cloud data corresponding to the feature object, the feature parameter being used to indicate a position and a size of the feature object; and adjusting a point cloud density of three-dimensional point cloud data corresponding to the non-feature object to a preset value, to construct a map according to the feature parameter and the three-dimensional point cloud data of which the point cloud density is adjusted.
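The density-adjustment step for non-feature points can be sketched as a voxel-grid downsampling pass, reducing the cloud to roughly one point per cell. The voxel size and the keep-first rule are illustrative assumptions, not the patented method.

```python
def downsample(points, voxel=1.0):
    """Reduce point density by keeping one representative point per voxel.

    points: iterable of (x, y, z) tuples for the non-feature object.
    voxel:  edge length of a cubic voxel cell (illustrative preset value).
    """
    kept = {}
    for x, y, z in points:
        # Map each point to the integer voxel cell containing it.
        cell = (int(x // voxel), int(y // voxel), int(z // voxel))
        kept.setdefault(cell, (x, y, z))   # first point in the cell wins
    return list(kept.values())

# Two points share a voxel, so the cloud shrinks from 3 points to 2.
cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.0), (1.5, 0.2, 0.0)]
sparse = downsample(cloud, voxel=1.0)
```

Feature objects would bypass this pass entirely, being stored as compact position/size parameters instead of raw points.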
-
Publication No.: US20210318879A1
Publication Date: 2021-10-14
Application No.: US17224473
Filing Date: 2021-04-07
Inventors: Yitong ZHAO, Xing WEI
Abstract: The present disclosure provides an instruction execution method, a device, and electronic equipment. In the instruction execution method, after obtaining an exceptional signal generated by a neural network processor during an operation, the electronic equipment determines an exception processing instruction corresponding to the exceptional signal, determines a first instruction queue to be executed by the neural network processor, generates a second instruction queue based on the exception processing instruction and the first instruction queue, and controls the neural network processor to execute the second instruction queue. In this way, errors encountered by the neural network processor can be processed in a timely manner, thereby shortening the error processing delay and improving the data processing efficiency of the hardware system in the electronic equipment.
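The queue-rebuild flow above can be sketched as a lookup-and-splice: the exception signal selects handler instructions, which are placed ahead of the pending first queue to form the second queue. The signal names, handler table, and instruction strings are all illustrative assumptions.

```python
# Hypothetical mapping from exceptional signals to handler instructions.
EXCEPTION_HANDLERS = {
    "overflow": ["saturate", "log_error"],
    "timeout": ["reset_unit", "log_error"],
}

def build_second_queue(signal, first_queue):
    """Splice the handler for `signal` ahead of the pending instruction
    queue, so the error is processed before normal execution resumes."""
    handler = EXCEPTION_HANDLERS.get(signal, [])
    return handler + first_queue

# An overflow is handled first; the original layer instructions follow.
queue = build_second_queue("overflow", ["conv", "relu"])
```

Executing the second queue in place of the first is what lets the processor recover without waiting for an external restart, which is the stated source of the shortened error-processing delay.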
-
Publication No.: US20200250542A1
Publication Date: 2020-08-06
Application No.: US16746887
Filing Date: 2020-01-19
Inventors: Zhichao LI, Yushu GAO, Yifeng GENG, Heng LUO
Abstract: Disclosed are a neural network training method, a neural network training device, and an electronic device. The neural network training method includes: training a first neural network by using sample data; determining an indicator parameter of the first neural network in the current training process; determining an update manner corresponding to a preset condition if the indicator parameter meets the preset condition; and updating a parameter of a batch normalization layer in the first neural network based on the update manner. In this way, sparsification of the feature maps output by the neural network is achieved, thereby reducing the amount of data to be transmitted and improving the computation speed of a chip.
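One plausible instance of the update rule above: when an indicator (here, measured feature-map sparsity) has not reached a target, shrink the batch-norm scale parameters toward zero so that more post-activation values become exactly zero. The sparsity target and shrink factor are illustrative stand-ins, not the patented rule.

```python
def update_bn_scale(gamma, sparsity, target=0.5, shrink=0.5):
    """Shrink every batch-norm scale parameter while the measured
    feature-map sparsity is still below the target.

    gamma:    list of BN scale parameters for one layer.
    sparsity: fraction of zero entries in the layer's output feature map.
    """
    if sparsity < target:            # the preset condition is met
        return [g * shrink for g in gamma]
    return gamma                     # sparse enough; leave the layer alone

# Sparsity 0.3 < 0.5, so the scales are shrunk toward zero.
gamma = update_bn_scale([1.0, 2.0, 4.0], sparsity=0.3)
```

Smaller scales compress activations toward the BN shift, so more of them fall below zero and are clipped by a following ReLU, which is what makes the transmitted feature maps sparse.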
-
Publication No.: US11856379B2
Publication Date: 2023-12-26
Application No.: US17434208
Filing Date: 2019-11-27
Inventor: Changbao Zhu
CPC classification number: H04R3/12 , G10L15/25 , G10L15/30 , H04S7/302 , H04R2420/03
Abstract: Disclosed are a method, an apparatus, and an electronic device for controlling audio playback of multiple loudspeakers, wherein the method comprises: determining the location information of each speaker and the voice signal issued by each speaker; determining the area where each speaker is located according to the location information; determining the voice instruction corresponding to each voice signal; and controlling the multiple loudspeakers to play, for the area where the issuer of each voice instruction is located, the audio indicated by that voice instruction. According to the method, apparatus, and/or electronic device in an embodiment of the present disclosure, different audios can be played for different areas in a preset space.
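The control flow above can be sketched as a small routing table: each speaker's location is mapped to a zone of the preset space, and the zone's loudspeaker is assigned the audio named by that speaker's instruction. The zone boundaries, one-dimensional locations, and instruction strings are purely illustrative assumptions.

```python
# Hypothetical zones of a preset space, keyed by a 1-D location test.
ZONES = {"front": lambda x: x < 5.0, "rear": lambda x: x >= 5.0}

def zone_of(x):
    """Return the name of the zone containing location x."""
    return next(name for name, inside in ZONES.items() if inside(x))

def route(speakers):
    """Map each (location, voice instruction) pair to a per-zone playback
    plan: the loudspeaker serving that zone plays the requested audio."""
    return {zone_of(x): command for x, command in speakers}

# Two people in different zones each get their own audio.
plan = route([(2.0, "play jazz"), (7.5, "play news")])
```

A real system would use multi-microphone localization and speech recognition for the two "determining" steps; the dictionary here only shows how the per-area playback decision composes from them.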
-
Publication No.: US20230409886A1
Publication Date: 2023-12-21
Application No.: US18247408
Filing Date: 2022-02-10
Inventors: Zhuoran ZHAO, Kai YU, Chang HUANG, Zhenjiang WANG, Jianjun LI, Delin LI, Yinan ZHANG
IPC: G06N3/0464
CPC classification number: G06N3/0464
Abstract: The present disclosure provides a method and apparatus for deconvolving feature data using convolution hardware. The method includes: reading a feature map and a deconvolution kernel into on-chip memory, and padding the feature map with zeroes; determining convolution kernels based on the deconvolution kernel; removing each row and/or column of a convolution kernel whose elements are all invalid weights, to obtain an optimized convolution kernel, and removing the corresponding row and/or column in the zero-padded feature map to obtain a corresponding optimized feature map; convolving each optimized convolution kernel with the corresponding optimized feature map using the multiply-add array, to obtain convolutional outputs; and interleaving and synthesizing the convolutional outputs to obtain an interleaved synthetic output including at least the deconvolutional output corresponding to the feature map and the deconvolution kernel. The method reduces hardware complexity, chip area and power consumption, eliminates many invalid operations, and improves the operating efficiency of the convolution hardware.
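The decomposition behind this approach can be shown in one dimension: a stride-2 transposed convolution equals several ordinary convolutions with phase sub-kernels, whose outputs are interleaved. This is a standard identity, shown here as a sketch rather than the patented hardware flow, and it assumes the kernel length is divisible by the stride.

```python
def conv_full(x, k):
    """Plain full convolution: out[t] = sum_i x[i] * k[t - i]."""
    out = [0.0] * (len(x) + len(k) - 1)
    for i in range(len(x)):
        for j in range(len(k)):
            out[i + j] += x[i] * k[j]
    return out

def deconv1d_via_conv(x, kernel, stride=2):
    """Transposed convolution computed on convolution hardware:
    split the deconvolution kernel into per-phase sub-kernels,
    convolve each with the input, then interleave the outputs."""
    subs = [kernel[p::stride] for p in range(stride)]   # phase sub-kernels
    phases = [conv_full(x, k) for k in subs]
    out = [0.0] * (stride * len(phases[0]))
    for p, ph in enumerate(phases):
        out[p::stride] = ph                             # interleave
    return out

# Output position i collects kernel taps with index congruent to i mod 2,
# which is exactly what the phase split captures.
y = deconv1d_via_conv([1, 2, 3], [1, 0, 2, 1])
```

The patent's row/column pruning corresponds to dropping sub-kernel entries that are all invalid weights, so the multiply-add array never touches them.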
-
Publication No.: US11645537B2
Publication Date: 2023-05-09
Application No.: US16746887
Filing Date: 2020-01-19
Inventors: Zhichao Li, Yushu Gao, Yifeng Geng, Heng Luo
Abstract: Disclosed are a neural network training method, a neural network training device, and an electronic device. The neural network training method includes: training a first neural network by using sample data; determining an indicator parameter of the first neural network in the current training process; determining an update manner corresponding to a preset condition if the indicator parameter meets the preset condition; and updating a parameter of a batch normalization layer in the first neural network based on the update manner. In this way, sparsification of the feature maps output by the neural network is achieved, thereby reducing the amount of data to be transmitted and improving the computation speed of a chip.
-
Publication No.: US11244473B2
Publication Date: 2022-02-08
Application No.: US16716801
Filing Date: 2019-12-17
Inventor: Shuai Yang
Abstract: A positioning method of a mobile device includes: determining a first position and orientation parameter of a mobile device when a current frame image is captured, and determining a straight line corresponding to a preset sign in the current frame image; determining a plurality of second position and orientation parameters based on the first position and orientation parameter; determining, in a high-definition map, point cloud data within a preset range of a geographic location when the current frame image is captured; converting the point cloud data within the preset range into a pixel plane-coordinate system to obtain a plurality of second image coordinate sets; determining, based on distances from image coordinates in the plurality of second image coordinate sets to the straight line, a position and orientation parameter of the mobile device when the current frame image is captured among the plurality of second position and orientation parameters.
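The final selection step above can be sketched as a cost comparison: each candidate pose projects the map points into the image, and the pose whose projections lie closest to the detected sign line wins. Projection is reduced to a 1-D shift here purely for illustration; the line form and candidate values are assumptions.

```python
def point_line_distance(pt, line):
    """Distance from point (x, y) to the line a*x + b*y + c = 0,
    assuming (a, b) is already unit-normalized."""
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c)

def best_pose(candidates, points, line):
    """Pick the candidate pose (here, a horizontal image shift) whose
    projected map points fall closest to the detected sign line."""
    def cost(dx):
        return sum(point_line_distance((x + dx, y), line) for x, y in points)
    return min(candidates, key=cost)

# Detected sign line: x = 2, i.e. 1*x + 0*y - 2 = 0.
# Map points project near x = 1, so a shift of 1.0 fits best.
pose = best_pose([0.0, 1.0, 2.0],
                 [(1.0, 0.0), (1.0, 5.0)],
                 (1.0, 0.0, -2.0))
```

In the full method each "candidate" is a second position-and-orientation parameter and the projection goes through the camera model, but the distance-to-line scoring is the same shape.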
-
Publication No.: US11195098B2
Publication Date: 2021-12-07
Application No.: US16666344
Filing Date: 2019-10-28
Inventors: Yukang Chen, Qian Zhang, Chang Huang
Abstract: Disclosed are a method for generating a neural network, an apparatus thereof, and an electronic device. The method includes: obtaining an optimal neural network and a worst neural network from a neural network framework by using an evolutionary algorithm; obtaining an optimized neural network from the optimal neural network by using a reinforcement learning algorithm; updating the neural network framework by adding the optimized neural network into the neural network framework and deleting the worst neural network from the neural network framework; and determining an ultimately generated neural network from the updated neural network framework. In this way, a neural network is optimized and updated from a neural network framework by combining the evolutionary algorithm and the reinforcement learning algorithm, thereby automatically generating a neural network structure rapidly and stably.
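The update loop described above can be sketched with a toy search space: pick the best and worst candidates, refine the best (standing in for the reinforcement-learning step), add the refined candidate, and drop the worst. The fitness function and refinement rule are illustrative stand-ins, not the actual algorithms of the disclosure.

```python
import random

def evolve(population, fitness, refine, steps=10, seed=0):
    """Evolutionary loop with an RL-style refinement of the best member."""
    random.seed(seed)
    pop = list(population)
    for _ in range(steps):
        best = max(pop, key=fitness)     # optimal network (evolutionary pick)
        worst = min(pop, key=fitness)    # worst network
        pop.remove(worst)                # delete the worst from the framework
        pop.append(refine(best))         # add the refined (optimized) network
    return max(pop, key=fitness)         # ultimately generated network

# Toy stand-in: a "network" is a single score, and refinement nudges
# the best score upward by a random amount.
result = evolve([1.0, 2.0, 3.0],
                fitness=lambda n: n,
                refine=lambda n: n + random.uniform(0.0, 1.0))
```

Because the worst member is always replaced by a refinement of the best, the population's top fitness is monotonically non-decreasing, which is the stability property the abstract alludes to.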
-