-
21.
Publication No.: US20240378077A1
Publication Date: 2024-11-14
Application No.: US18782617
Filing Date: 2024-07-24
Inventor: Guoxia WANG , Jinle ZENG , Xiyuan XIAO , Jiabin YANG , Dianhai YU , Haifeng WANG
Abstract: A method of executing a task for a large language model, a device, and a storage medium are provided, which relate to the field of artificial intelligence technology, and in particular to the fields of deep learning, large language models, natural language processing and computer vision technologies. The method includes: determining, by using a determination unit, a target attention task from a plurality of attention tasks to be processed, based on a sparse representation corresponding to a feature to be processed, where the target attention task is a task corresponding to a non-fully masked region of the feature, the sparse representation represents a mask position of the feature, and the mask position represents mask endpoint positions in at least two non-intersecting intervals in a mask matrix corresponding to the feature; and executing the target attention task by using a computing unit, so as to obtain an attention feature.
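The core idea of skipping attention work for fully masked regions can be illustrated with a minimal NumPy sketch. This is not the patented implementation: the block partitioning, the `sparse_attention` name, and the dense mask input are all illustrative assumptions standing in for the interval-endpoint sparse representation described in the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sparse_attention(q, k, v, mask, block=2):
    """Masked attention that skips score blocks whose mask region is all
    zero, so no computation is spent on fully masked regions."""
    n, d = q.shape
    scores = np.full((n, n), -np.inf)
    for i in range(0, n, block):
        for j in range(0, n, block):
            m = mask[i:i+block, j:j+block]
            if not m.any():  # fully masked region: no attention task launched
                continue
            s = q[i:i+block] @ k[j:j+block].T / np.sqrt(d)
            scores[i:i+block, j:j+block] = np.where(m, s, -np.inf)
    return softmax(scores, axis=-1) @ v
```

For a causal (lower-triangular) mask, every block strictly above the diagonal is fully masked and is skipped entirely, while the result matches a dense masked-attention computation.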
-
22.
Publication No.: US20230206668A1
Publication Date: 2023-06-29
Application No.: US18170902
Filing Date: 2023-02-17
Inventor: Ruoyu GUO , Yuning DU , Chenxia LI , Qiwen LIU , Baohua LAI , Yanjun MA , Dianhai YU
CPC classification number: G06V30/19147 , G06V30/19173 , G06V30/18 , G06V30/16
Abstract: The present disclosure provides a vision processing and model training method, device, storage medium and program product. A specific implementation solution is as follows: establishing an image classification network with the same backbone network as the vision model, and performing self-supervised training of the image classification network by using an unlabeled first data set; initializing a weight of a backbone network of the vision model according to a weight of the backbone network of the trained image classification network to obtain a pre-training model, where the structure of the pre-training model is consistent with that of the vision model, and optimizing the weight of the backbone network by using a real data set in a current computer vision task scenario, so as to be more suitable for the current computer vision task; and then training the pre-training model by using a labeled second data set to obtain a trained vision model.
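The weight-initialization step can be sketched as follows. The dict-of-weights "state" format and the `backbone.` name prefix are assumptions for illustration, not the disclosure's API: only parameters belonging to the shared backbone are copied from the pre-trained classifier into the vision model, leaving task-specific heads untouched.

```python
def transfer_backbone(classifier_state, vision_state, prefix="backbone."):
    """Initialize the vision model's backbone weights from a trained
    image classification network that shares the same backbone."""
    out = dict(vision_state)
    for name, weight in classifier_state.items():
        # copy only shared backbone parameters; skip classifier-specific heads
        if name.startswith(prefix) and name in out:
            out[name] = weight
    return out
```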
-
23.
Publication No.: US20230185702A1
Publication Date: 2023-06-15
Application No.: US17856091
Filing Date: 2022-07-01
Inventor: Tian WU , Yanjun MA , Dianhai YU , Yehua YANG , Yuning DU
CPC classification number: G06F11/3688 , G06N3/08
Abstract: A method and apparatus are provided for generating and applying a deep learning model based on a deep learning framework, which relate to the field of computers. A specific implementation solution includes that a basic operating environment is established on a target device, where the basic operating environment is used for providing environment preparation for an overall generation process of a deep learning model; a basic function of the deep learning model is generated in the basic operating environment according to at least one of a service requirement and a hardware requirement, to obtain a first processing result; an extended function of the deep learning model is generated in the basic operating environment based on the first processing result, to obtain a second processing result; and a preset test script is used to perform a function test on the second processing result, to output a test result.
-
24.
Publication No.: US20230164446A1
Publication Date: 2023-05-25
Application No.: US17885035
Filing Date: 2022-08-10
Inventor: Shengyu WEI , Yuning DU , Cheng CUI , Ruoyu GUO , Shuilong DONG , Bin LU , Tingquan GAO , Qiwen LIU , Xiaoguang HU , Dianhai YU , Yanjun MA
CPC classification number: H04N5/2353 , G06T7/11 , G06T7/80 , G02F1/13306 , G06T2207/20081
Abstract: An imaging exposure control method and apparatus, a device and a storage medium, which relate to the field of artificial intelligence technologies, such as machine learning technologies and intelligent imaging technologies, are disclosed. An implementation includes performing semantic segmentation on a preformed image to obtain semantic segmentation images of at least two semantic regions; estimating an exposure duration of each semantic region based on the semantic segmentation images and the preformed image; and controlling exposure of each semantic region during imaging based on the exposure duration of each semantic region.
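A toy version of the per-region exposure estimation step might look like the sketch below. The brightness-target heuristic, the `region_exposures` name, and the clipping range are illustrative assumptions; the disclosure's actual estimator is not specified at this level of detail.

```python
import numpy as np

def region_exposures(image, seg, base_exposure=1.0, target=0.5):
    """For each semantic region in the segmentation map `seg`, scale the
    base exposure so the region's mean brightness would reach `target`,
    clipped to a sane range."""
    exposures = {}
    for label in np.unique(seg):
        mean = image[seg == label].mean()
        scale = target / max(mean, 1e-6)  # dark regions get longer exposure
        exposures[int(label)] = float(np.clip(base_exposure * scale, 0.1, 10.0))
    return exposures
```

Each semantic region then receives its own exposure duration during imaging, rather than one global value for the whole frame.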
-
25.
Publication No.: US20230031579A1
Publication Date: 2023-02-02
Application No.: US17938457
Filing Date: 2022-10-06
Inventor: Guanghua YU , Qingqing DANG , Haoshuang WANG , Guanzhong WANG , Xiaoguang HU , Dianhai YU , Yanjun MA , Qiwen LIU , Can WEN
IPC: G06V10/77 , G06V10/82 , G06V10/764 , G06V10/80
Abstract: A method for detecting an object in an image includes: obtaining an image to be detected; generating a plurality of feature maps based on the image to be detected by a plurality of feature extracting networks in a neural network model trained for object detection, in which the plurality of feature extracting networks are connected sequentially, and input data of a latter feature extracting network in the plurality of feature extracting networks is based on output data and input data of a previous feature extracting network; and generating an object detection result based on the plurality of feature maps by an object detecting network in the neural network model.
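The wiring between feature extracting networks (each one's input based on both the output and the input of the previous one) can be sketched with plain callables. The combination by addition is an assumption for illustration; the abstract does not specify how the two tensors are combined.

```python
import numpy as np

def run_feature_networks(x, blocks):
    """Run sequentially connected feature extracting networks, where each
    later block consumes the previous block's output combined with the
    previous block's input, and collect all feature maps."""
    feature_maps = []
    prev_in, prev_out = None, x
    for f in blocks:
        # first block has no predecessor, so it takes the raw input
        inp = prev_out if prev_in is None else prev_out + prev_in
        out = f(inp)
        feature_maps.append(out)
        prev_in, prev_out = inp, out
    return feature_maps
```

The collected `feature_maps` would then feed the object detecting network that produces the final detection result.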
-
26.
Publication No.: US20220374704A1
Publication Date: 2022-11-24
Application No.: US17558355
Filing Date: 2021-12-21
Inventor: Danlei FENG , Long LIAN , Dianhai YU , Xuefeng YAO , Xinxuan WU , Zhihua WU , Yanjun MA
Abstract: The disclosure provides a neural network training method and apparatus, an electronic device, a medium and a program product, and relates to the field of artificial intelligence, in particular to the fields of deep learning and distributed learning. The method includes: acquiring a neural network for deep learning, the neural network including a plurality of network layers; constructing a deep reinforcement learning model for the neural network; and determining, through the deep reinforcement learning model, a processing unit selection for the plurality of network layers based on a duration for training each of the network layers by each of a plurality of types of processing units, and a cost of each of the plurality of types of processing units, where the processing unit selection includes the type of processing unit to be used for each of the plurality of network layers, and the processing unit selection is used for keeping a total cost of the processing units used by the neural network below a cost threshold, in response to a duration of pipelined parallel computing for training the neural network being shorter than a preset duration.
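The optimization target (pick one processing-unit type per layer, keep total training duration under a preset budget, minimize total cost) can be made concrete with a brute-force stand-in. This exhaustive search is only a toy substitute for the deep reinforcement learning model described in the abstract, and the table shapes are assumptions.

```python
from itertools import product

def select_units(durations, costs, max_duration):
    """Brute-force stand-in for the RL search: choose one unit type per
    layer so the total duration stays under `max_duration`, at the lowest
    total cost. `durations[l][u]` is the duration of layer l on unit type
    u; `costs[u]` is the cost of unit type u."""
    n_layers, n_types = len(durations), len(costs)
    best = None
    for choice in product(range(n_types), repeat=n_layers):
        d = sum(durations[l][u] for l, u in enumerate(choice))
        c = sum(costs[u] for u in choice)
        if d < max_duration and (best is None or c < best[1]):
            best = (choice, c)
    return best  # (unit type per layer, total cost), or None if infeasible
```

Exhaustive search is exponential in the number of layers, which is precisely why the disclosure resorts to a learned policy instead.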
-
27.
Publication No.: US20220035614A1
Publication Date: 2022-02-03
Application No.: US17500779
Filing Date: 2021-10-13
Inventor: Liujie ZHANG , Xiang LAN , Huihuang ZHENG , Hongyu LIU , Wei ZHOU , Yanjun MA , Dianhai YU , Haifeng WANG
Abstract: The present disclosure discloses a method, an apparatus and an electronic device for deploying an operator in a deep learning framework, and relates to the field of artificial intelligence technology such as deep learning. The solution includes: acquiring a source file of the operator; compiling the source file of the operator to form a dynamic link library of the operator; generating an interface file transferred from the dynamic link library of the operator; generating an installable library file according to the dynamic link library and the interface file; and installing the installable library file into a target programming language library.
-
28.
Publication No.: US20220004526A1
Publication Date: 2022-01-06
Application No.: US17480294
Filing Date: 2021-09-21
Inventor: Liujie ZHANG , Yamei LI , Huihuang ZHENG , Hongyu LIU , Xiang LAN , Dianhai YU , Yanjun MA , Tian WU , Haifeng WANG
Abstract: According to exemplary embodiments of the present disclosure, there is provided a method and apparatus of converting a schema in a deep learning framework, and a computer storage medium. The method of converting the schema in the deep learning framework includes: updating a first schema, based on first syntax elements in the first schema and a context relationship between the first syntax elements in the first schema, so as to obtain an updated first schema; generating second syntax elements corresponding to updated first syntax elements in the updated first schema, based on a mapping relationship between the updated first syntax elements in the updated first schema and second syntax elements in a second schema system; and combining the second syntax elements according to a context relationship between the updated first syntax elements, so as to generate a second schema.
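The map-then-recombine idea can be reduced to a toy sketch. Representing a schema as an ordered list of syntax elements, and the particular mapping table below, are hypothetical simplifications; real schema conversion operates on a structured syntax tree rather than a flat token list.

```python
# hypothetical mapping between first-schema and second-schema syntax elements
MAPPING = {"if": "IF", "print": "ECHO", "end": "END"}

def convert_schema(elements, mapping=MAPPING):
    """Map each first-schema syntax element to its second-schema
    counterpart, then combine them following the original context
    (here, simply the original order)."""
    return [mapping.get(e, e) for e in elements]
```

Elements without a counterpart in the mapping pass through unchanged, preserving the context relationship between the original syntax elements.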
-