-
Publication No.: US20210318878A1
Publication Date: 2021-10-14
Application No.: US16607087
Filing Date: 2019-10-12
Inventors: Zhibiao ZHAO, Jian OUYANG, Hefei ZHU, Qingshu CHEN, Wei QI
Abstract: According to various embodiments, methods and systems are provided to accelerate artificial intelligence (AI) model training with advanced interconnect communication technologies and systematic zero-value compression over a distributed training system. According to an exemplary method, during each iteration of a Scatter-Reduce process performed on a cluster of processors arranged in a logical ring to train a neural network model, a processor receives a compressed data block from a prior processor in the logical ring, performs an operation on the received compressed data block and a compressed data block generated on the processor to obtain a calculated data block, and sends the calculated data block to a following processor in the logical ring. A compressed data block calculated from corresponding data blocks from the processors can be identified on each processor, distributed to each other processor, and decompressed there for use in the AI model training.
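The iterative Scatter-Reduce loop described in the abstract can be sketched in plain Python. This is a minimal single-process simulation under stated assumptions, not the patented implementation: the function names (`compress`, `add_compressed`, `ring_scatter_reduce`) and the chunk-indexing scheme are illustrative, and zero-value compression is modeled as a sparse index-to-value map so blocks can be reduced without decompression.

```python
def compress(block):
    """Zero-value compression: keep only (index, value) pairs of non-zeros."""
    return {i: v for i, v in enumerate(block) if v != 0}

def decompress(comp, length):
    """Expand a compressed block back into a dense list."""
    return [comp.get(i, 0) for i in range(length)]

def add_compressed(a, b):
    """Reduce (element-wise sum) two compressed blocks without decompressing."""
    out = dict(a)
    for i, v in b.items():
        out[i] = out.get(i, 0) + v
    return out

def ring_scatter_reduce(grads):
    """grads[p][c]: dense chunk c held by processor p (n processors, n chunks).

    After n - 1 iterations, processor p holds the fully reduced chunk
    (p + 1) % n, which it could then distribute in an Allgather phase.
    """
    n = len(grads)
    comp = [[compress(chunk) for chunk in row] for row in grads]
    for step in range(n - 1):
        # Snapshot sends first: processor p forwards its current copy of
        # chunk (p - step) % n to the next processor in the logical ring.
        sends = [(p, (p - step) % n, comp[p][(p - step) % n]) for p in range(n)]
        for p, c, block in sends:
            nxt = (p + 1) % n
            comp[nxt][c] = add_compressed(comp[nxt][c], block)
    return {p: decompress(comp[p][(p + 1) % n], len(grads[p][(p + 1) % n]))
            for p in range(n)}
```

Keeping every block compressed end to end is the point of the scheme: the reduction operates directly on the sparse representation, so bandwidth and compute both scale with the number of non-zero values rather than the full gradient size.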
-
Publication No.: US20210174174A1
Publication Date: 2021-06-10
Application No.: US16622789
Filing Date: 2019-11-15
Inventors: Hefei ZHU, Jian OUYANG, Zhibiao ZHAO, Xiaozhang GONG, Qingshu CHEN
Abstract: A data processing system includes a central processing unit (CPU) and accelerator cards coupled to the CPU over a bus, each of the accelerator cards having a plurality of data processing (DP) accelerators to receive DP tasks from the CPU and to perform the received DP tasks. At least two of the accelerator cards are coupled to each other via an inter-card connection, and at least two of the DP accelerators are coupled to each other via an inter-chip connection. Each of the inter-card connection and the inter-chip connection can be dynamically activated or deactivated, such that in response to a request received from the CPU, any one of the accelerator cards, or any one of the DP accelerators within any one of the accelerator cards, can be enabled or disabled to process any one of the DP tasks received from the CPU.
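A toy model of the described topology might look as follows. All names here (`DPAccelerator`, `AcceleratorSystem`, `set_link`, `dispatch`) are hypothetical illustrations, not the patent's API: links and accelerators carry an on/off flag that can be flipped at runtime, and the CPU's dispatch falls through to any accelerator that is currently enabled.

```python
class DPAccelerator:
    """One data-processing accelerator chip on a card (hypothetical model)."""
    def __init__(self, card_id, chip_id):
        self.card_id, self.chip_id = card_id, chip_id
        self.enabled = True   # can be toggled in response to a CPU request
        self.tasks = []

class AcceleratorSystem:
    """CPU-side view: cards of DP accelerators plus dynamically switchable
    inter-card and inter-chip connections."""
    def __init__(self, num_cards, chips_per_card):
        self.cards = [[DPAccelerator(c, i) for i in range(chips_per_card)]
                      for c in range(num_cards)]
        self.links = {}  # frozenset of two endpoints -> active flag

    def set_link(self, a, b, active):
        """Dynamically activate or deactivate an inter-card/inter-chip link."""
        self.links[frozenset((a, b))] = active

    def set_enabled(self, card_id, chip_id, enabled):
        """Enable or disable a single DP accelerator."""
        self.cards[card_id][chip_id].enabled = enabled

    def dispatch(self, task):
        """Assign a DP task to any currently enabled accelerator."""
        for card in self.cards:
            for acc in card:
                if acc.enabled:
                    acc.tasks.append(task)
                    return (acc.card_id, acc.chip_id)
        raise RuntimeError("no enabled accelerator available")
```

The design choice worth noting is that enablement is per-link and per-chip rather than per-system, which is what lets the same hardware be partitioned differently from one workload to the next.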
-
Publication No.: US20210072996A1
Publication Date: 2021-03-11
Application No.: US16729989
Filing Date: 2019-12-30
Inventors: Qingshu CHEN, Zhibiao ZHAO, Hefei ZHU, Xiaozhang GONG, Yong WANG, Jian OUYANG
Abstract: Methods, apparatuses, devices, and storage media for performing a processing task are provided. One of a plurality of portions of the processing task can include a group of operations to be performed at one of a plurality of processing units. The group of operations can include operations of a first type and operations of a second type. In the method, a first queue for performing the operations of the first type and a second queue for performing the operations of the second type are built, respectively. Based on a definition of the processing task, a dependency relationship between the group of operations to be performed at the processing unit and groups of operations to be performed at other processing units in the plurality of processing units can be obtained. Operations in the first queue and operations in the second queue are then performed based on the dependency relationship.
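The dual-queue scheduling idea can be sketched for a single processing unit. This is an illustrative sketch, not the patented method: the function name, the two operation kinds ("compute" and "comm" standing in for the first and second types), and the `completed` parameter, which stands in for operations already finished on other processing units, are all assumptions.

```python
from collections import deque

def run_two_queues(ops, deps, completed=()):
    """Execute one unit's operations from two per-type FIFO queues.

    ops: list of (name, kind) pairs, kind in {"compute", "comm"}.
    deps: name -> set of operation names (possibly on other units) that
          must finish before this operation may start.
    completed: names of operations already done elsewhere (cross-unit deps).
    Returns the order in which this unit executed its operations.
    """
    q_first = deque(n for n, k in ops if k == "compute")
    q_second = deque(n for n, k in ops if k == "comm")
    done, order = set(completed), []
    while q_first or q_second:
        progressed = False
        for q in (q_first, q_second):
            # Each queue is FIFO; its head waits until every dependency
            # (local or cross-unit) has completed.
            if q and deps.get(q[0], set()) <= done:
                op = q.popleft()
                done.add(op)
                order.append(op)
                progressed = True
        if not progressed:
            raise RuntimeError("blocked: a cross-unit dependency never completed")
    return order
```

Splitting the two operation types into separate queues lets one type keep making progress while the head of the other queue is blocked on a dependency, which is the usual motivation for overlapping, say, computation with communication.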
-