METHOD AND SYSTEM FOR ACCELERATING AI TRAINING WITH ADVANCED INTERCONNECT TECHNOLOGIES

    Publication No.: US20210318878A1

    Publication Date: 2021-10-14

    Application No.: US16607087

    Filing Date: 2019-10-12

    Abstract: According to various embodiments, methods and systems are provided to accelerate artificial intelligence (AI) model training with advanced interconnect communication technologies and systematic zero-value compression over a distributed training system. According to an exemplary method, during each iteration of a Scatter-Reduce process performed on a cluster of processors arranged in a logical ring to train a neural network model, a processor receives a compressed data block from a prior processor in the logical ring, performs an operation on the received compressed data block and a compressed data block generated on the processor to obtain a calculated data block, and sends the calculated data block to a following processor in the logical ring. A compressed data block calculated from corresponding data blocks from the processors can be identified on each processor and distributed to each other processor and decompressed therein for use in the AI model training.
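
    As a rough illustration of the ring-based Scatter-Reduce over compressed blocks described above, the following is a minimal single-process sketch of the pattern, assuming a simple zero-value (indices, values) encoding. The compression format, helper names, and the simulated ring are illustrative assumptions, not the patented implementation.

        # Minimal sketch of a ring Scatter-Reduce over zero-value-compressed
        # blocks. The sparse (indices, values) encoding and all helper names
        # are assumptions; the patent does not prescribe a compression format
        # or transport.
        import numpy as np

        def compress(block):
            """Zero-value compression: keep only the nonzero entries."""
            idx = np.flatnonzero(block)
            return idx, block[idx], block.size

        def decompress(comp):
            idx, vals, n = comp
            out = np.zeros(n, dtype=vals.dtype)
            out[idx] = vals
            return out

        def reduce_compressed(a, b):
            """Sum two compressed blocks, returning a compressed result."""
            return compress(decompress(a) + decompress(b))

        def ring_scatter_reduce(blocks):
            """blocks[p][b]: compressed block b held by processor p in the logical ring."""
            p = len(blocks)
            for step in range(p - 1):
                for rank in range(p):
                    b = (rank - step) % p      # block this processor forwards now
                    nxt = (rank + 1) % p       # following processor in the ring
                    blocks[nxt][b] = reduce_compressed(blocks[nxt][b], blocks[rank][b])
            # after p-1 steps, processor r holds the fully reduced block (r+1) % p
            return blocks

        # Example: 3 "processors", each holding a sparse gradient split into 3 blocks.
        grads = [np.where(np.random.rand(9) > 0.5, np.random.randn(9), 0.0) for _ in range(3)]
        blocks = [[compress(g[3 * b:3 * b + 3]) for b in range(3)] for g in grads]
        ring_scatter_reduce(blocks)
        print(decompress(blocks[1][2]))        # fully reduced block 2 lives on processor 1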

    DISTRIBUTED AI TRAINING TOPOLOGY BASED ON FLEXIBLE CABLE CONNECTION

    Publication No.: US20210174174A1

    Publication Date: 2021-06-10

    Application No.: US16622789

    Filing Date: 2019-11-15

    IPC Classes: G06N3/063 G06F9/50 G06F13/36

    Abstract: A data processing system includes a central processing unit (CPU) and accelerator cards coupled to the CPU over a bus, each of the accelerator cards having a plurality of data processing (DP) accelerators to receive DP tasks from the CPU and to perform the received DP tasks. At least two of the accelerator cards are coupled to each other via an inter-card connection, and at least two of the DP accelerators are coupled to each other via an inter-chip connection. Each of the inter-card connection and the inter-chip connection is capable of being dynamically activated or deactivated, such that in response to a request received from the CPU, any one of the accelerator cards or any one of the DP accelerators within any one of the accelerator cards can be enabled or disabled to process any one of the DP tasks received from the CPU.
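
    As a rough illustration of the reconfigurable topology described above, the sketch below models accelerator cards, DP accelerators, and inter-card/inter-chip links that the host CPU can activate or deactivate before dispatching a DP task. All class and method names are hypothetical; the patent does not publish an API.

        # Schematic model of the reconfigurable accelerator topology: cards hold
        # DP accelerators, and both inter-chip and inter-card links can be toggled.
        # Names are illustrative only.
        from dataclasses import dataclass

        @dataclass
        class Link:
            endpoints: tuple     # two (card_id, chip_id) pairs; chip_id None for card-level links
            active: bool = True

        @dataclass
        class Accelerator:
            card_id: int
            chip_id: int
            enabled: bool = True

        class Topology:
            def __init__(self):
                self.accelerators = {}
                self.links = []

            def add_accelerator(self, card_id, chip_id):
                self.accelerators[(card_id, chip_id)] = Accelerator(card_id, chip_id)

            def connect(self, a, b):
                self.links.append(Link(endpoints=(a, b)))

            def set_link(self, a, b, active):
                for link in self.links:
                    if set(link.endpoints) == {a, b}:
                        link.active = active

            def set_accelerator(self, card_id, chip_id, enabled):
                self.accelerators[(card_id, chip_id)].enabled = enabled

            def dispatch(self, task):
                """CPU-side scheduling: send the task to any enabled accelerator."""
                for acc in self.accelerators.values():
                    if acc.enabled:
                        return f"task '{task}' -> card {acc.card_id}, chip {acc.chip_id}"
                raise RuntimeError("no enabled accelerator available")

        # Two cards with two chips each, inter-chip links within each card,
        # and one inter-card link; the CPU then disables an accelerator and a link.
        topo = Topology()
        for card in (0, 1):
            for chip in (0, 1):
                topo.add_accelerator(card, chip)
            topo.connect((card, 0), (card, 1))        # inter-chip connection
        topo.connect((0, None), (1, None))            # inter-card connection
        topo.set_accelerator(0, 0, enabled=False)     # dynamically disable one accelerator
        topo.set_link((0, None), (1, None), False)    # dynamically deactivate the inter-card link
        print(topo.dispatch("matmul"))                # lands on a still-enabled accelerator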

    METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM FOR PERFORMING PROCESSING TASK

    Publication No.: US20210072996A1

    Publication Date: 2021-03-11

    Application No.: US16729989

    Filing Date: 2019-12-30

    Abstract: Methods, apparatuses, devices, and storage media for performing a processing task are provided. One portion of a plurality of portions of the processing task can include a group of operations to be performed at one of a plurality of processing units. The group of operations can include operations of a first type and operations of a second type. In the method, a first queue for performing the operations of the first type and a second queue for performing the operations of the second type can be built, respectively. Based on a definition of the processing task, a dependency relationship between the group of operations to be performed at the processing unit and groups of operations to be performed at the other processing units in the plurality of processing units can be obtained. Operations in the first queue and operations in the second queue can then be performed based on the dependency relationship.
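
    The two-queue scheme in the abstract can be pictured with the small scheduler sketch below: each processing unit holds one queue per operation type (labelled "compute" and "comm" here purely as an assumption), and an operation runs only after the operations it depends on, possibly located on other units, have completed.

        # Sketch of per-unit dual queues executed under a cross-unit dependency
        # relationship. Operation types and the dependency encoding are assumptions.
        from collections import deque

        def run_unit_queues(queues, deps, done):
            """queues: {(unit, kind): deque of op names}; deps: {op: set of prerequisite ops}."""
            progress = True
            while progress:
                progress = False
                for (unit, kind), q in queues.items():
                    if q and deps.get(q[0], set()) <= done:
                        op = q.popleft()
                        done.add(op)
                        print(f"unit {unit} ran {kind} op {op}")
                        progress = True

        # Unit 1's second-type op m1 must wait for unit 0's second-type op m0.
        queues = {
            (0, "compute"): deque(["c0"]), (0, "comm"): deque(["m0"]),
            (1, "compute"): deque(["c1"]), (1, "comm"): deque(["m1"]),
        }
        deps = {"m0": {"c0"}, "m1": {"c1", "m0"}}
        run_unit_queues(queues, deps, done=set())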

    METHOD FOR VIRTUAL MACHINE MIGRATION WITH CHECKPOINT AUTHENTICATION IN VIRTUALIZATION ENVIRONMENT

    Publication No.: US20220214902A1

    Publication Date: 2022-07-07

    Application No.: US17142933

    Filing Date: 2021-01-06

    Applicant: Baidu USA LLC

    Abstract: Systems and methods are disclosed for migrating a virtual machine (VM) having a virtual function that maps resources of an artificial intelligence (AI) accelerator to the VM. A driver for the AI accelerator can generate a checkpoint of VM processes that make calls to the AI accelerator, and the checkpoint can include a list and configuration of resources mapped to the AI accelerator by the virtual function. The driver can also access the code, data, and memory of the AI accelerator to generate a checkpoint of the AI accelerator status. When the VM is migrated to a new host, either or both of these checkpoint frames can be used to ensure that the VM can be successfully resumed on a new host having appropriate AI accelerator resources. One or both checkpoint frames can be captured based upon an event, in anticipation of the need to migrate the VM.
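
    A hypothetical host-side sketch of the two checkpoint frames mentioned above follows: one frame for the accelerator resources that the virtual function maps into the VM processes, and one for the accelerator's code, data, and memory state. Every field, function name, and the resource-matching check are illustrative assumptions, not the patented driver interface.

        # Two checkpoint frames plus a resume-feasibility check; all names and
        # formats here are hypothetical.
        import hashlib, json, time

        def checkpoint_vm_processes(mapped_resources):
            """Frame 1: list and configuration of resources mapped by the virtual function."""
            return {"type": "vm_processes", "time": time.time(), "resources": mapped_resources}

        def checkpoint_accelerator(code, data, memory_snapshot):
            """Frame 2: AI accelerator status (code, data, memory) plus an integrity digest."""
            blob = json.dumps({"code": code, "data": data, "memory": memory_snapshot}, sort_keys=True)
            return {"type": "accelerator_status", "time": time.time(),
                    "state": blob, "digest": hashlib.sha256(blob.encode()).hexdigest()}

        def can_resume_on(host_resources, vm_frame):
            """Check that the target host offers every resource the VM had mapped."""
            return all(r in host_resources for r in vm_frame["resources"])

        # Event-driven capture in anticipation of migration, then a resume check.
        vm_frame = checkpoint_vm_processes(["vf0:queue0", "vf0:mem:4GB"])
        acc_frame = checkpoint_accelerator(code="kernel_v1", data="weights_v3",
                                           memory_snapshot="0x0000")
        print(can_resume_on(["vf0:queue0", "vf0:mem:4GB", "vf1:queue0"], vm_frame))  # True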

    ASYMMETRIC QUANTIZATION FOR COMPRESSION AND FOR ACCELERATION OF INFERENCE FOR NEURAL NETWORKS

    Publication No.: US20210004679A1

    Publication Date: 2021-01-07

    Application No.: US16877582

    Filing Date: 2020-05-19

    Applicant: Baidu USA, LLC

    IPC Classes: G06N3/08 G06N3/04

    Abstract: Presented herein are embodiments of an improved asymmetric quantization, which may generally be referred to as improved asymmetric quantization (IAQ) embodiments. IAQ embodiments combine the benefits of conventional asymmetric quantization and symmetric quantization while also providing additional computational efficiencies. Embodiments of IAQ adopt an asymmetric range for the weights of a neural network layer, thereby circumventing the symmetric-range limitation of symmetric quantization. Moreover, because the offset value of each layer is itself quantized, the inference process of a neural network quantized by an IAQ embodiment is much faster than that of a neural network quantized by conventional asymmetric quantization.
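
    The following numeric sketch illustrates asymmetric quantization with a quantized per-layer offset, which is the gist of the IAQ description above. The bit width, rounding scheme, and where the offset is folded in are assumptions made for illustration.

        # Asymmetric quantization of a weight tensor with the per-layer offset
        # itself stored on the integer grid; parameters are illustrative.
        import numpy as np

        def asymmetric_quantize(w, bits=8):
            """Map weights in [w_min, w_max] to unsigned integers and quantize the offset."""
            w_min, w_max = float(w.min()), float(w.max())
            scale = (w_max - w_min) / (2 ** bits - 1)
            q = np.round((w - w_min) / scale).astype(np.uint8)
            q_offset = int(round(w_min / scale))     # offset kept as an integer
            return q, scale, q_offset

        def dequantize(q, scale, q_offset):
            # Integer add of the quantized offset, then a single float multiply.
            return (q.astype(np.int32) + q_offset) * scale

        w = np.random.randn(4, 4).astype(np.float32)
        q, scale, q_offset = asymmetric_quantize(w)
        print(np.max(np.abs(w - dequantize(q, scale, q_offset))))   # error stays within ~scale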

    METHOD FOR VIRTUAL MACHINE MIGRATION WITH ARTIFICIAL INTELLIGENCE ACCELERATOR STATUS VALIDATION IN VIRTUALIZATION ENVIRONMENT

    Publication No.: US20220214903A1

    Publication Date: 2022-07-07

    Application No.: US17142946

    Filing Date: 2021-01-06

    Applicant: Baidu USA LLC

    IPC Classes: G06F9/455 G06F9/48 G06F9/445

    Abstract: Systems and methods are disclosed for migrating a virtual machine (VM) having a virtual function that maps resources of an artificial intelligence (AI) accelerator to the VM. A driver for the AI accelerator can generate a checkpoint of VM processes that make calls to the AI accelerator, and the checkpoint can include a list and configuration of resources mapped to the AI accelerator by the virtual function. The driver can also access the code, data, and memory of the AI accelerator to generate a checkpoint of the AI accelerator status. When the VM is migrated to a new host, either or both of these checkpoint frames can be used to ensure that the VM can be successfully resumed on a new host having appropriate AI accelerator resources. One or both checkpoint frames can be captured based upon an event, in anticipation of the need to migrate the VM.
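
    As a companion to the checkpoint sketch under the related application above, the snippet below illustrates one way the accelerator-status checkpoint could be validated on the target host before the migrated VM is resumed. The frame layout and the use of a SHA-256 digest are assumptions, not the patent's specified mechanism.

        # Recompute the digest of the accelerator-status frame on the target host
        # and resume only if it matches; format and digest choice are hypothetical.
        import hashlib, json

        def make_status_frame(code, data, memory_snapshot):
            blob = json.dumps({"code": code, "data": data, "memory": memory_snapshot}, sort_keys=True)
            return {"state": blob, "digest": hashlib.sha256(blob.encode()).hexdigest()}

        def validate_status_frame(frame):
            """Resume is allowed only when the recomputed digest matches the stored one."""
            return hashlib.sha256(frame["state"].encode()).hexdigest() == frame["digest"]

        frame = make_status_frame("kernel_v1", "weights_v3", "0x0000")
        print(validate_status_frame(frame))            # True: untampered frame validates
        frame["state"] = frame["state"].replace("v3", "v4")
        print(validate_status_frame(frame))            # False: accelerator state mismatch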