-
11.
Publication No.: US20220222111A1
Publication Date: 2022-07-14
Application No.: US17707895
Application Date: 2022-03-29
Inventor: Haifeng Wang , Xiaoguang HU , Dianhai YU , Yanjun MA , Tian WU
Abstract: A scheduling method for a deep learning framework, a scheduling apparatus, an electronic device, a storage medium, and a program product are provided, and can be used in the field of artificial intelligence, especially in fields such as machine learning and deep learning. The method includes: receiving a processing request for processing a plurality of tasks by using a dedicated processing unit, the processing request including scheduling requirements for the plurality of tasks, and each of the plurality of tasks being associated with execution of multi-batch data processing; and scheduling, based on the scheduling requirements for the plurality of tasks, the dedicated processing unit to process the plurality of tasks in batches of data.
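The claimed scheduling works at the granularity of data batches rather than whole tasks. Below is a minimal plain-Python sketch of that idea, assuming each task's scheduling requirement is expressed as a number of batches it may run per turn (the task tuple layout and the round-robin policy are illustrative assumptions, not the patented algorithm):

```python
from collections import deque

def schedule_on_dedicated_unit(tasks):
    """Round-robin over tasks at batch granularity on one dedicated unit.

    tasks: list of (name, batches, batches_per_turn), where batches_per_turn
    stands in for the task's scheduling requirement (an assumption).
    """
    queue = deque((name, deque(batches), share) for name, batches, share in tasks)
    while queue:
        name, batches, share = queue.popleft()
        for _ in range(min(share, len(batches))):
            print(f"unit <- {name}: {batches.popleft()}")  # stand-in for a device launch
        if batches:                      # task still has batches: back of the queue
            queue.append((name, batches, share))

schedule_on_dedicated_unit([
    ("taskA", ["a0", "a1", "a2"], 2),   # may run 2 batches per turn
    ("taskB", ["b0", "b1"], 1),         # may run 1 batch per turn
])
```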
-
12.
Publication No.: US20220004930A1
Publication Date: 2022-01-06
Application No.: US17480292
Application Date: 2021-09-21
Inventor: Qingqing DANG , Kaipeng DENG , Lielin JIANG , Sheng GUO , Xiaoguang HU , Chunyu ZHANG , Yanjun MA , Tian WU , Haifeng WANG
Abstract: Embodiments of the present disclosure provide a method and apparatus for training a model, an electronic device, a storage medium and a development system, which relate to the field of deep learning. The method may include calling a training preparation component to set at least a loss function and an optimization function for training the model, in response to determining that a training preparation instruction is received. The method further includes calling a training component to set a first data reading component, in response to determining that a training instruction is received. The first data reading component is configured to load a training data set for training the model. In addition, the method may further include training the model based on the training data set from the first data reading component, by using the loss function and the optimization function through the training component.
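The abstract describes a component-style training API: a preparation component binds the loss and optimizer, and a training component binds a data reader and runs the loop. A minimal NumPy sketch of that flow (the component names, the linear model, and plain SGD are illustrative assumptions):

```python
import numpy as np

class TrainingPreparation:
    """Set up on receipt of a training preparation instruction."""
    def __init__(self, loss_fn, optimizer):
        self.loss_fn = loss_fn          # e.g. mean squared error
        self.optimizer = optimizer      # e.g. an SGD step function

class Trainer:
    """Set up on receipt of a training instruction; binds a data reader."""
    def __init__(self, prep, data_reader):
        self.prep = prep
        self.data_reader = data_reader  # the first data reading component

    def fit(self, w, epochs=50):
        for _ in range(epochs):
            for x, y in self.data_reader():
                pred = x @ w
                loss = self.prep.loss_fn(pred, y)
                grad = x.T @ (2 * (pred - y)) / len(y)   # gradient of MSE
                w = self.prep.optimizer(w, grad)
        print(f"final loss: {loss:.4f}")
        return w

def reader():                           # loads the training data set
    rng = np.random.default_rng(0)
    x = rng.normal(size=(32, 3))
    yield x, x @ np.array([1.0, -2.0, 0.5])

prep = TrainingPreparation(
    loss_fn=lambda p, y: float(np.mean((p - y) ** 2)),
    optimizer=lambda w, g, lr=0.1: w - lr * g,
)
w = Trainer(prep, reader).fit(np.zeros(3))
print(np.round(w, 2))                   # approaches [ 1. -2.  0.5]
```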
-
13.
Publication No.: US20220004811A1
Publication Date: 2022-01-06
Application No.: US17479061
Application Date: 2021-09-20
Inventor: Ruoyu GUO , Yuning DU , Weiwei LIU , Xiaoting YIN , Qiao ZHAO , Qiwen LIU , Ran BI , Xiaoguang HU , Dianhai YU , Yanjun MA
IPC: G06K9/62
Abstract: A method and apparatus for training a model, a device, and a medium are provided, which relate to artificial intelligence, and in particular to deep learning and image processing technology. The method may include: determining a plurality of augmented sample sets associated with a plurality of original samples; determining a first constraint according to a first model based on the plurality of augmented sample sets; determining a second constraint according to the first model and a second model based on the plurality of augmented sample sets, wherein the second constraint is associated with a difference between outputs of the first model and the second model for one augmented sample, and the first model has a complexity lower than that of the second model; and training the first model based on at least the first constraint and the second constraint, so as to obtain a trained first model.
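The two constraints amount to a distillation-style objective: a supervised term for the lower-complexity first model on the augmented samples, plus a term penalizing the output gap between the first and second models on the same augmented sample. A worked NumPy sketch of the combined objective, assuming cross-entropy for the first constraint and a KL divergence for the second (the patent does not fix the exact loss forms):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def first_constraint(student_logits, labels):
    """Supervised cross-entropy of the first (smaller) model on augmented samples."""
    p = softmax(student_logits)
    return -float(np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12)))

def second_constraint(student_logits, teacher_logits):
    """KL(teacher || student): output gap between the two models per augmented sample."""
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    return float(np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)))

# Logits for a batch of augmented samples (3 samples, 4 classes), illustrative values.
student = np.array([[2.0, 0.1, -1.0, 0.3], [0.2, 1.5, 0.0, -0.5], [0.0, 0.1, 2.2, 0.4]])
teacher = np.array([[2.5, 0.0, -1.2, 0.1], [0.1, 2.0, 0.2, -0.8], [-0.2, 0.0, 2.8, 0.3]])
labels = np.array([0, 1, 2])

total = first_constraint(student, labels) + second_constraint(student, teacher)
print(f"training loss for the first model: {total:.4f}")
```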
-
14.
Publication No.: US20230206075A1
Publication Date: 2023-06-29
Application No.: US17991077
Application Date: 2022-11-21
Inventor: Ji LIU , Zhihua WU , Danlei FENG , Minxu ZHANG , Xinxuan WU , Xuefeng YAO , Beichen MA , Dejing DOU , Dianhai YU , Yanjun MA
Abstract: A method for distributing network layers in a neural network model includes: acquiring a to-be-processed neural network model and a computing device set; generating a target number of distribution schemes according to the network layers in the to-be-processed neural network model and the computing devices in the computing device set, the distribution schemes including corresponding relationships between the network layers and the computing devices; according to the device types of the computing devices, combining the network layers corresponding to the same device type in each distribution scheme into one stage, to obtain a combination result of each distribution scheme; obtaining an adaptive value of each distribution scheme according to the combination result of each distribution scheme; and determining a target distribution scheme from the distribution schemes according to their respective adaptive values, and taking the target distribution scheme as the distribution result of the network layers in the to-be-processed neural network model.
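The flow is effectively a search over layer-to-device assignments scored by an adaptive (fitness) value. A self-contained sketch under two stated assumptions: schemes are generated randomly, and the adaptive value is the negated bottleneck stage time (the patent does not pin down either choice):

```python
import random
from itertools import groupby

LAYER_COST = [4.0, 2.0, 3.0, 5.0, 1.0]                 # assumed per-layer compute cost
DEVICES = [("gpu", 2.0), ("gpu", 2.0), ("cpu", 1.0)]   # (device type, speed)

def make_scheme(n_layers):
    """One distribution scheme: layer index -> device index."""
    return [random.randrange(len(DEVICES)) for _ in range(n_layers)]

def stages(scheme):
    """Combine consecutive layers mapped to the same device type into one stage."""
    typed = [(DEVICES[d][0], i, d) for i, d in enumerate(scheme)]
    return [list(g) for _, g in groupby(typed, key=lambda t: t[0])]

def adaptive_value(scheme):
    """Fitness: negative of the slowest (bottleneck) stage time in the pipeline."""
    times = [sum(LAYER_COST[i] / DEVICES[d][1] for _, i, d in st) for st in stages(scheme)]
    return -max(times)

random.seed(0)
schemes = [make_scheme(len(LAYER_COST)) for _ in range(20)]   # the target number of schemes
best = max(schemes, key=adaptive_value)                       # target distribution scheme
print("best scheme:", best, "stages:", len(stages(best)))
```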
-
15.
Publication No.: US20210374490A1
Publication Date: 2021-12-02
Application No.: US17400693
Application Date: 2021-08-12
Inventor: Yuning DU , Yehua YANG , Shengyu WEI , Ruoyu GUO , Qiwen LIU , Qiao ZHAO , Ran BI , Xiaoguang HU , Dianhai YU , Yanjun MA
Abstract: The present disclosure provides a method and apparatus of processing an image, a device and a medium, which relate to the field of artificial intelligence, and in particular to the fields of deep learning and image processing. The method includes: determining a background image of the image, wherein the background image describes the background relative to the characters in the image; determining a property of the characters corresponding to a selected character section of the image; replacing the selected character section with the corresponding section in the background image, so as to obtain an adjusted image; and combining acquired target characters with the adjusted image based on the property.
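A minimal Pillow sketch of the replace-then-recompose step: the selected character section is overwritten with the corresponding section of an estimated background image, and the target characters are drawn back using the recorded character property. The box coordinates, the flat-background estimate, and the default font are illustrative assumptions:

```python
from PIL import Image, ImageDraw

# Build a demo "document" image with a character section on a plain background.
img = Image.new("RGB", (240, 80), color=(230, 225, 210))
draw = ImageDraw.Draw(img)
box = (20, 25, 140, 55)                       # selected character section
draw.text((box[0], box[1]), "OLD TEXT", fill=(20, 20, 20))

# Step 1: estimate the background for the box (assumed flat: sample a nearby pixel).
bg_color = img.getpixel((box[0], box[1] - 10))
background = Image.new("RGB", img.size, color=bg_color)

# Step 2: replace the selected section with the corresponding background section.
patch = background.crop(box)
img.paste(patch, box[:2])

# Step 3: combine the target characters with the adjusted image, reusing the
# character property recorded from the original section (fill color here).
ImageDraw.Draw(img).text((box[0], box[1]), "NEW TEXT", fill=(20, 20, 20))
img.save("replaced.png")
```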
-
16.
Publication No.: US20230206668A1
Publication Date: 2023-06-29
Application No.: US18170902
Application Date: 2023-02-17
Inventor: Ruoyu GUO , Yuning DU , Chenxia LI , Qiwen LIU , Baohua LAI , Yanjun MA , Dianhai YU
CPC classification number: G06V30/19147 , G06V30/19173 , G06V30/18 , G06V30/16
Abstract: The present disclosure provides a vision processing and model training method, device, storage medium and program product. A specific implementation solution is as follows: establishing an image classification network with the same backbone network as the vision model, and performing self-supervised training on the image classification network by using an unlabeled first data set; initializing a weight of the backbone network of the vision model according to a weight of the backbone network of the trained image classification network to obtain a pre-training model, the structure of the pre-training model being consistent with that of the vision model, so that the weight of the backbone network can be further optimized by using a real data set in the current computer vision task scenario and become more suitable for the current computer vision task; and then training the pre-training model by using a labeled second data set to obtain a trained vision model.
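The mechanical core is the weight transfer: the backbone trained inside the image classification network initializes the structurally identical backbone of the vision model. A framework-neutral sketch with weights held as plain arrays (the training functions are stubs; only the two-network layout and the key-by-key copy mirror the abstract):

```python
import numpy as np

def new_backbone(rng):
    """Shared backbone architecture: identical shapes in both networks."""
    return {"conv1": rng.normal(size=(3, 3)), "conv2": rng.normal(size=(3, 3))}

rng = np.random.default_rng(0)
classifier = {"backbone": new_backbone(rng), "cls_head": rng.normal(size=(3,))}
vision_model = {"backbone": new_backbone(rng), "task_head": rng.normal(size=(3,))}

def self_supervised_train(net, unlabeled_set):
    """Stub for self-supervised training on the unlabeled first data set."""
    for w in net["backbone"].values():
        w += 0.01 * len(unlabeled_set)     # stand-in for real updates

self_supervised_train(classifier, unlabeled_set=range(100))

# Initialize the vision model's backbone from the trained classifier's backbone;
# the structures are consistent, so this is a key-by-key copy (the "pre-training model").
for name, w in classifier["backbone"].items():
    vision_model["backbone"][name] = w.copy()

def supervised_train(net, labeled_set):
    """Stub for training the pre-training model on the labeled second data set."""
    net["task_head"] += 0.01 * len(labeled_set)

supervised_train(vision_model, labeled_set=range(10))
print("backbones match:", np.allclose(classifier["backbone"]["conv1"],
                                      vision_model["backbone"]["conv1"]))
```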
-
17.
Publication No.: US20230185702A1
Publication Date: 2023-06-15
Application No.: US17856091
Application Date: 2022-07-01
Inventor: Tian WU , Yanjun MA , Dianhai YU , Yehua YANG , Yuning DU
CPC classification number: G06F11/3688 , G06N3/08
Abstract: A method and apparatus are provided for generating and applying a deep learning model based on a deep learning framework, and relate to the field of computers. A specific implementation solution includes: a basic operating environment is established on a target device, where the basic operating environment provides environment preparation for the overall generation process of a deep learning model; a basic function of the deep learning model is generated in the basic operating environment according to at least one of a service requirement and a hardware requirement, to obtain a first processing result; an extended function of the deep learning model is generated in the basic operating environment based on the first processing result, to obtain a second processing result; and a preset test script is used to perform a function test on the second processing result, to output a test result.
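The generation flow is a staged pipeline in which each stage consumes the previous stage's processing result and the pipeline ends at a scripted function test. A plain-Python sketch of that control flow (the stage contents and names are placeholders; only the staging and the final test gate come from the abstract):

```python
def establish_environment(target_device):
    """Environment preparation for the overall model generation process."""
    return {"device": target_device, "deps_ready": True}

def generate_basic_function(env, service_req=None, hardware_req=None):
    """First processing result: the model's basic function for the given requirements."""
    return {"env": env, "basic": {"service": service_req, "hardware": hardware_req}}

def generate_extended_function(first_result):
    """Second processing result: extended function built on the first result."""
    return {**first_result, "extended": ["quantization", "serving"]}   # placeholder extras

def run_test_script(second_result):
    """Preset test script: function test on the second processing result."""
    ok = second_result["env"]["deps_ready"] and "extended" in second_result
    return "PASS" if ok else "FAIL"

env = establish_environment(target_device="edge-board")
first = generate_basic_function(env, service_req="ocr", hardware_req="arm64")
second = generate_extended_function(first)
print("test result:", run_test_script(second))
```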
-
18.
Publication No.: US20230164446A1
Publication Date: 2023-05-25
Application No.: US17885035
Application Date: 2022-08-10
Inventor: Shengyu WEI , Yuning DU , Cheng CUI , Ruoyu GUO , Shuilong DONG , Bin LU , Tingquan GAO , Qiwen LIU , Xiaoguang HU , Dianhai YU , Yanjun MA
CPC classification number: H04N5/2353 , G06T7/11 , G06T7/80 , G02F1/13306 , G06T2207/20081
Abstract: An imaging exposure control method and apparatus, a device and a storage medium, which relate to the field of artificial intelligence technologies, such as machine learning technologies and intelligent imaging technologies, are disclosed. An implementation includes performing semantic segmentation on a preformed image to obtain semantic segmentation images of at least two semantic regions; estimating an exposure duration of each semantic region based on the semantic segmentation image and the preformed image; and controlling exposure of each semantic region during imaging based on the exposure duration of each semantic region.
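A worked NumPy example of the per-region estimate: given a semantic segmentation of the preformed image, each region's exposure duration is scaled from a base exposure by how far the region's mean luminance sits from a target value. The linear target/mean rule is an assumption; the patent only states that the duration is estimated from the segmentation images and the preformed image:

```python
import numpy as np

preformed = np.array([[40,  40, 200, 200],
                      [40,  40, 200, 200],
                      [90,  90, 120, 120],
                      [90,  90, 120, 120]], dtype=float)   # luminance, 0..255

labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])       # semantic segmentation: four regions

BASE_EXPOSURE_MS = 10.0
TARGET_LUMA = 118.0

for region in np.unique(labels):
    mean_luma = preformed[labels == region].mean()
    exposure = BASE_EXPOSURE_MS * TARGET_LUMA / mean_luma   # darker region -> longer exposure
    print(f"region {region}: mean {mean_luma:5.1f} -> expose {exposure:5.1f} ms")
```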
-
19.
Publication No.: US20230031579A1
Publication Date: 2023-02-02
Application No.: US17938457
Application Date: 2022-10-06
Inventor: Guanghua YU , Qingqing DANG , Haoshuang WANG , Guanzhong WANG , Xiaoguang HU , Dianhai YU , Yanjun MA , Qiwen LIU , Can WEN
IPC: G06V10/77 , G06V10/82 , G06V10/764 , G06V10/80
Abstract: A method for detecting an object in an image includes: obtaining an image to be detected; generating a plurality of feature maps based on the image to be detected by a plurality of feature extracting networks in a neural network model trained for object detection, in which the plurality of feature extracting networks are connected sequentially, and input data of a latter feature extracting network in the plurality of feature extracting networks is based on output data and input data of a previous feature extracting network; and generating an object detection result based on the plurality of feature maps by an object detecting network in the neural network model.
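The distinctive wiring is that each latter feature extracting network receives the previous network's output together with its input. A toy NumPy sketch of that chaining, assuming channel-wise concatenation as the way of combining the two and a single linear map per network:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_network(weight):
    """One feature extracting network: a linear map standing in for conv blocks."""
    return lambda x: np.tanh(x @ weight)

C = 8
# Networks connected sequentially; the i-th consumes i+1 blocks of C channels.
stages = [feature_network(rng.normal(scale=0.5, size=((i + 1) * C, C)))
          for i in range(3)]

x = rng.normal(size=(4, C))             # features of the image to be detected
feature_maps = []
inp = x
for stage in stages:
    out = stage(inp)
    feature_maps.append(out)
    inp = np.concatenate([out, inp], axis=1)   # next input: output + previous input

# Object detecting network: consumes all feature maps (placeholder head).
fused = np.concatenate(feature_maps, axis=1)
print("detection head input shape:", fused.shape)   # (4, 24)
```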
-
20.
Publication No.: US20220374704A1
Publication Date: 2022-11-24
Application No.: US17558355
Application Date: 2021-12-21
Inventor: Danlei FENG , Long LIAN , Dianhai YU , Xuefeng YAO , Xinxuan WU , Zhihua WU , Yanjun MA
Abstract: The disclosure provides a neural network training method and apparatus, an electronic device, a medium and a program product, and relates to the field of artificial intelligence, in particular to the fields of deep learning and distributed learning. The method includes: acquiring a neural network for deep learning, the neural network including a plurality of network layers; constructing a deep reinforcement learning model for the neural network; and determining, through the deep reinforcement learning model, a processing unit selection for the plurality of network layers based on a duration for training each of the network layers by each of a plurality of types of processing units, and a cost of each type of processing unit. The processing unit selection comprises the type of processing unit to be used for each of the plurality of network layers, and is used for keeping the total cost of the processing units used by the neural network below a cost threshold while the duration for pipelining parallel computing for training the neural network is shorter than a preset duration.
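What the deep reinforcement learning model ultimately searches for is an assignment of a processing unit type to each network layer that keeps training time under the preset duration at the lowest total cost. To make the objective concrete, here is a brute-force version over a tiny case (exhaustive enumeration and the additive duration model are illustrative stand-ins for the patented RL search):

```python
from itertools import product

UNIT_TYPES = {"cpu": {"cost": 1.0, "layer_time": [9.0, 6.0, 7.0]},
              "gpu": {"cost": 4.0, "layer_time": [2.0, 1.5, 2.5]}}
PRESET_DURATION = 12.0                   # bound on pipeline training time

best = None
for selection in product(UNIT_TYPES, repeat=3):          # one unit type per layer
    duration = sum(UNIT_TYPES[t]["layer_time"][i] for i, t in enumerate(selection))
    cost = sum(UNIT_TYPES[t]["cost"] for t in selection)
    if duration < PRESET_DURATION and (best is None or cost < best[0]):
        best = (cost, duration, selection)

print("cheapest feasible selection:", best)   # (9.0, 10.5, ('gpu', 'cpu', 'gpu'))
```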
-