-
1.
Publication No.: US20220366259A1
Publication Date: 2022-11-17
Application No.: US17765711
Filing Date: 2020-10-30
Applicant: CANON KABUSHIKI KAISHA
Inventor: Deyu Wang , Tse-wei Chen , Dongchao Wen , Junjie Liu , Wei Tao
Abstract: Provided are a method, an apparatus and a system for training a neural network, and a storage medium storing instructions. The neural network comprises a first neural network, whose training has not yet been completed, and a second neural network, whose training has not yet started. The method comprises: obtaining a first output by passing a sample image through the current first neural network, and obtaining a second output by passing the same sample image through the current second neural network; updating the current first neural network according to a first loss function value; and updating the current second neural network according to a second loss function value. In this way, the performance of the second neural network can be improved, and the overall training time of the two networks can be reduced.
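As an illustration of the joint-training scheme described in this abstract, the following is a minimal sketch. It assumes (the abstract does not specify this) that both networks are classifiers, that the first network's loss is a plain task loss, and that the second loss adds a distillation-style term pulling the second output toward the first; the helper name `joint_training_step` is illustrative only.

```python
# A minimal sketch of jointly training a partially trained first network and
# a not-yet-trained second network in the same iteration. Loss forms are
# assumptions, not the patented method.
import torch
import torch.nn.functional as F

def joint_training_step(first_net, second_net, opt1, opt2, image, label):
    # Obtain a first output from the current first network and a second
    # output from the current second network for the same sample image.
    out1 = first_net(image)
    out2 = second_net(image)

    # First loss: the first network's own task loss (assumed cross-entropy).
    loss1 = F.cross_entropy(out1, label)

    # Second loss: task loss plus an assumed distillation-style term that
    # pulls the second output toward the (detached) first output.
    loss2 = F.cross_entropy(out2, label) + F.mse_loss(out2, out1.detach())

    # Update both networks in the same iteration, so the second network's
    # training overlaps with the remainder of the first network's training.
    opt1.zero_grad(); loss1.backward(); opt1.step()
    opt2.zero_grad(); loss2.backward(); opt2.step()
    return loss1.item(), loss2.item()
```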
-
2.
Publication No.: US20200151514A1
Publication Date: 2020-05-14
Application No.: US16670940
Filing Date: 2019-10-31
Applicant: CANON KABUSHIKI KAISHA
Inventor: Junjie Liu , Tse-Wei Chen , Dongchao Wen , Hongxing Gao , Wei Tao
Abstract: A training and application method for a neural network model is provided. The training method determines a first network model to be trained and sets a downscaling layer for at least one layer in the first network model, wherein the number of filters and the filter kernel of the downscaling layer are identical to those of the layers to be trained in a second network model. The filter parameters of the downscaling layer are transmitted to the second network model as training information. With this training method, training can be performed even when the scale of the layer used for training in the first network model differs from that of the layers to be trained in the second network model, and the amount of lost data is small.
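A minimal sketch of the idea follows, assuming the "downscaling layer" is an auxiliary convolution attached to a layer of the first model and sized to match the target layer of the second model, and that "transmitting filter parameters as training information" amounts to a parameter copy. Both function names are illustrative, not from the patent.

```python
# A sketch of attaching a downscaling layer that mirrors the second model's
# layer geometry, then transferring its filter parameters.
import torch
import torch.nn as nn

def attach_downscaling_layer(first_layer: nn.Conv2d, second_layer: nn.Conv2d) -> nn.Conv2d:
    # The downscaling layer copies the second model's layer geometry: the
    # same number of filters and the same kernel size.
    return nn.Conv2d(
        in_channels=first_layer.out_channels,
        out_channels=second_layer.out_channels,
        kernel_size=second_layer.kernel_size,
        padding=second_layer.padding,
    )

def transfer_training_info(down: nn.Conv2d, second_layer: nn.Conv2d) -> None:
    # Transmit the downscaling layer's filter parameters to the second
    # model as training information (here assumed to be a direct copy).
    with torch.no_grad():
        second_layer.weight.copy_(down.weight)
        if down.bias is not None and second_layer.bias is not None:
            second_layer.bias.copy_(down.bias)
```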
-
3.
Publication No.: US20250013870A1
Publication Date: 2025-01-09
Application No.: US18893578
Filing Date: 2024-09-23
Applicant: CANON KABUSHIKI KAISHA
Inventor: Hongxing Gao , Wei Tao , Tse-Wei Chen , Dongchao Wen , Junjie Liu
Abstract: The present disclosure provides a training and application method for a multi-layer neural network model, an apparatus, and a storage medium. In the forward propagation of the multi-layer neural network model, the number of input feature maps is expanded and the data computation is performed using the expanded input feature maps.
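The abstract does not say how the input feature maps are expanded; the sketch below assumes a simple replication along the channel dimension for illustration, with the function name being hypothetical.

```python
# A minimal sketch of expanding the input feature maps before computation.
import torch

def forward_with_expanded_inputs(x: torch.Tensor, conv: torch.nn.Conv2d,
                                 factor: int = 2) -> torch.Tensor:
    # Expand the number of input feature maps (channels) by `factor` ...
    x_expanded = x.repeat(1, factor, 1, 1)  # (N, C, H, W) -> (N, C*factor, H, W)
    # ... then perform the data computation with the expanded feature maps.
    # `conv` must have been built with in_channels = C * factor.
    return conv(x_expanded)
```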
-
4.
Publication No.: US20210334622A1
Publication Date: 2021-10-28
Application No.: US17230577
Filing Date: 2021-04-14
Applicant: CANON KABUSHIKI KAISHA
Inventor: Wei Tao , Tsewei Chen , Dongchao Wen , Junjie Liu , Deyu Wang
Abstract: A method for generating a multilayer neural network comprises: acquiring a multilayer neural network that includes at least convolutional layers and quantization layers; generating, for each quantization layer in the multilayer neural network, quantization threshold parameters based on a quantization bit parameter and a learnable quantization interval parameter of that layer; and updating the multilayer neural network into a fixed-point neural network based on the generated quantization threshold parameters and the operation parameters of each layer.
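As a worked illustration of deriving thresholds from a bit parameter and a learnable interval, the sketch below places thresholds at the midpoints of a uniform grid; this particular formula is an assumption, as the abstract does not fix one.

```python
# A sketch of generating quantization thresholds from a quantization bit
# parameter and a learnable quantization interval parameter.
import torch

def quantization_thresholds(bits: int, interval: torch.Tensor) -> torch.Tensor:
    # A b-bit uniform quantizer has 2^b levels and 2^b - 1 decision
    # thresholds, here taken midway between neighbouring levels.
    num_levels = 2 ** bits
    k = torch.arange(num_levels - 1, dtype=interval.dtype)
    return (k + 0.5) * interval  # thresholds t_k = (k + 0.5) * interval

# Example: 2-bit quantization with a learned interval of 0.7
thresholds = quantization_thresholds(2, torch.tensor(0.7))  # tensor([0.35, 1.05, 1.75])
```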
-
5.
Publication No.: US20210279574A1
Publication Date: 2021-09-09
Application No.: US17189014
Filing Date: 2021-03-01
Applicant: CANON KABUSHIKI KAISHA
Inventor: Junjie Liu , Tsewei Chen , Dongchao Wen , Wei Tao , Deyu Wang
Abstract: A method of generating a quantized neural network comprises: determining, based on the floating-point weights in a neural network to be quantized, networks that correspond to the floating-point weights and directly output quantized weights; quantizing, using each determined network, the floating-point weight corresponding to that network to obtain a quantized neural network; and updating, based on a loss function value obtained via the quantized neural network, the determined networks, the floating-point weights, and the quantized weights in the quantized neural network.
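The sketch below illustrates one plausible reading: each "network for directly outputting quantized weights" is a small learnable mapping applied to the floating-point weights, kept trainable via a straight-through estimator. The module design is an assumption, not the patented architecture.

```python
# A sketch of a learnable quantizer module that maps floating-point weights
# directly to quantized weights and can itself be updated from the loss.
import torch
import torch.nn as nn

class WeightQuantizer(nn.Module):
    def __init__(self):
        super().__init__()
        # A learnable scale: the quantizer maps a float weight to the
        # nearest point on a learned grid (an assumed parameterization).
        self.scale = nn.Parameter(torch.tensor(0.1))

    def forward(self, w_float: torch.Tensor) -> torch.Tensor:
        # Round to the learned grid; the straight-through trick keeps the
        # mapping differentiable, so the quantizer, the floating-point
        # weights, and the quantized weights can all be updated from the
        # quantized network's loss.
        q = torch.round(w_float / self.scale) * self.scale
        return w_float + (q - w_float).detach()
```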
-
6.
Publication No.: US20210065011A1
Publication Date: 2021-03-04
Application No.: US17003384
Filing Date: 2020-08-26
Applicant: CANON KABUSHIKI KAISHA
Inventor: Junjie Liu , Tsewei Chen , Dongchao Wen , Wei Tao , Deyu Wang
Abstract: A training and application method, an apparatus, a system, and a storage medium for a neural network model are provided. The training method determines a constraint threshold range according to the number of training iterations and the calculation accuracy of the neural network model, and constrains the gradient of each weight to lie within that range. When the gradient of a low-accuracy weight is distorted by quantization error, the constraint corrects the distortion, so that the trained network model achieves the expected performance.
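A minimal sketch of the constraint follows. The abstract only says the range depends on the iteration count and the calculation accuracy; the specific schedule below (range widening for smaller bit widths and shrinking over iterations) is an invented placeholder for illustration.

```python
# A sketch of constraining a weight gradient to a threshold range derived
# from the training iteration and the calculation accuracy (bit width).
import torch

def constrain_gradient(grad: torch.Tensor, iteration: int, bits: int) -> torch.Tensor:
    # Assumed rule: the permitted range shrinks as training progresses and
    # widens for lower-accuracy (smaller bit-width) computation.
    threshold = (1.0 / bits) / (1.0 + 0.001 * iteration)
    # Clamp the gradient into [-threshold, threshold] so a gradient distorted
    # by quantization error is corrected by the constraint.
    return torch.clamp(grad, -threshold, threshold)
```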
-
7.
Publication No.: US12147901B2
Publication Date: 2024-11-19
Application No.: US16721624
Filing Date: 2019-12-19
Applicant: CANON KABUSHIKI KAISHA
Inventor: Hongxing Gao , Wei Tao , Tsewei Chen , Dongchao Wen , Junjie Liu
Abstract: The present disclosure provides a training and application method for a multi-layer neural network model, an apparatus, and a storage medium. In the forward propagation of the multi-layer neural network model, the number of input feature maps is expanded and the data computation is performed using the expanded input feature maps.
-
8.
Publication No.: US11847569B2
Publication Date: 2023-12-19
Application No.: US16721606
Filing Date: 2019-12-19
Applicant: CANON KABUSHIKI KAISHA
Inventor: Wei Tao , Hongxing Gao , Tsewei Chen , Dongchao Wen , Junjie Liu
Abstract: The present disclosure provides a training and application method for a multi-layer neural network model, an apparatus, and a storage medium. The number of channels of a filter in at least one convolutional layer of the multi-layer neural network model is expanded, and the convolution computation is performed using the filter with the expanded channel count, so that the performance of the network model does not degrade while the model is simplified.
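The sketch below illustrates one way to expand a filter's channel count: tiling the existing channels and rescaling so the output is unchanged. The tiling scheme is an assumption; the abstract does not specify the expansion method.

```python
# A sketch of expanding the channels of a convolutional filter and
# convolving with the expanded filter.
import torch
import torch.nn.functional as F

def conv_with_expanded_filter(x: torch.Tensor, weight: torch.Tensor,
                              factor: int = 2) -> torch.Tensor:
    # weight: (out_ch, in_ch, kH, kW) -> (out_ch, in_ch * factor, kH, kW);
    # dividing by `factor` keeps the output scale equal to the original conv.
    w_expanded = weight.repeat(1, factor, 1, 1) / factor
    # The input must supply the matching expanded channel count.
    x_expanded = x.repeat(1, factor, 1, 1)
    return F.conv2d(x_expanded, w_expanded, padding=weight.shape[-1] // 2)
```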
-
9.
Publication No.: US11106945B2
Publication Date: 2021-08-31
Application No.: US16670940
Filing Date: 2019-10-31
Applicant: CANON KABUSHIKI KAISHA
Inventor: Junjie Liu , Tsewei Chen , Dongchao Wen , Hongxing Gao , Wei Tao
Abstract: A training and application method for a neural network model is provided. The training method determines a first network model to be trained and sets a downscaling layer for at least one layer in the first network model, wherein the number of filters and the filter kernel of the downscaling layer are identical to those of the layers to be trained in a second network model. The filter parameters of the downscaling layer are transmitted to the second network model as training information. With this training method, training can be performed even when the scale of the layer used for training in the first network model differs from that of the layers to be trained in the second network model, and the amount of lost data is small.
-
10.
Publication No.: US20200210843A1
Publication Date: 2020-07-02
Application No.: US16721606
Filing Date: 2019-12-19
Applicant: CANON KABUSHIKI KAISHA
Inventor: Wei Tao , Hongxing Gao , Tse-Wei Chen , Dongchao Wen , Junjie Liu
Abstract: The present disclosure provides a training and application method for a multi-layer neural network model, an apparatus, and a storage medium. The number of channels of a filter in at least one convolutional layer of the multi-layer neural network model is expanded, and the convolution computation is performed using the filter with the expanded channel count, so that the performance of the network model does not degrade while the model is simplified.