Abstract:
A method is provided for implementing a deep neural network on a server component that includes a host component including a CPU and a hardware acceleration component coupled to the host component. The deep neural network includes a plurality of layers. The method includes partitioning the deep neural network into a first segment and a second segment, the first segment including a first subset of the plurality of layers, the second segment including a second subset of the plurality of layers, configuring the host component to implement the first segment, and configuring the hardware acceleration component to implement the second segment.
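
The following is a minimal software sketch of the partitioning step described in the abstract above: an ordered list of layers is split at an index, the first segment is executed on the host and the second segment is handed off to the acceleration component. The function names, the split_index parameter, and the use of plain Python callables as layers are illustrative assumptions, not the patented implementation.

# Sketch only: partition a layer list into a host-executed segment and an
# accelerator-executed segment. Names here are illustrative assumptions.

def partition_network(layers, split_index):
    """Split the ordered layer list into two segments at split_index."""
    first_segment = layers[:split_index]    # runs on the host component (CPU)
    second_segment = layers[split_index:]   # runs on the hardware acceleration component
    return first_segment, second_segment

def run_on_host(segment, activations):
    # Placeholder: each layer is modeled as a callable taking and returning activations.
    for layer in segment:
        activations = layer(activations)
    return activations

def run_on_accelerator(segment, activations):
    # Placeholder: in practice this would dispatch work to the acceleration
    # component rather than execute in Python.
    for layer in segment:
        activations = layer(activations)
    return activations

def forward(layers, split_index, inputs):
    first, second = partition_network(layers, split_index)
    host_out = run_on_host(first, inputs)
    return run_on_accelerator(second, host_out)

# Example: four toy "layers", the first two on the host, the last two offloaded.
layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3, lambda v: v ** 2]
print(forward(layers, split_index=2, inputs=5))   # ((5 + 1) * 2 - 3) ** 2 = 81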
Abstract:
A hardware acceleration component is provided for implementing a convolutional neural network. The hardware acceleration component includes an array of N rows and M columns of functional units, an array of N input data buffers configured to store input data, and an array of M weights data buffers configured to store weights data. Each of the N input data buffers is coupled to a corresponding one of the N rows of functional units. Each of the M weights data buffers is coupled to a corresponding one of the M columns of functional units. Each functional unit in a row is configured to receive a same set of input data from the input data buffer coupled to the row. Each functional unit in a column is configured to receive a same set of weights data from the weights data buffer coupled to the column. Each of the functional units is configured to perform a convolution of the received input data and the received weights data, and the M columns of functional units are configured to provide M planes of output data.
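
The array described above can be modeled in software as an N x M grid in which the unit at row n, column m convolves input plane n with the kernel held in column m's weights buffer. The sketch below assumes each column sums its N per-row results into a single output plane; the abstract only states that the M columns provide M planes, so that reduction, along with all function names, is an assumption.

import numpy as np

def conv2d_valid(x, w):
    """Plain 2-D valid cross-correlation, standing in for whatever convolution
    each functional unit implements."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def functional_unit_array(input_buffers, weight_buffers):
    """Software model of the N x M functional-unit array.

    input_buffers  : list of N input planes, one per row buffer.
    weight_buffers : list of M kernels, one per column buffer.
    The unit at (row n, column m) convolves input plane n with kernel m; each
    column's N results are summed here into one output plane (an assumption).
    """
    N, M = len(input_buffers), len(weight_buffers)
    output_planes = []
    for m in range(M):
        col_results = [conv2d_valid(input_buffers[n], weight_buffers[m]) for n in range(N)]
        output_planes.append(np.sum(col_results, axis=0))
    return output_planes   # M planes of output data

# Example usage with illustrative sizes: N = 3 input planes, M = 2 kernels.
rng = np.random.default_rng(0)
inputs = [rng.standard_normal((8, 8)) for _ in range(3)]
kernels = [rng.standard_normal((3, 3)) for _ in range(2)]
planes = functional_unit_array(inputs, kernels)
print(len(planes), planes[0].shape)   # 2 planes, each 6 x 6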
Abstract:
A method is provided for processing on an acceleration component a deep neural network. The method includes configuring the acceleration component to perform forward propagation and backpropagation stages of the deep neural network. The acceleration component includes an acceleration component die and a memory stack disposed in an integrated circuit package. The memory stack has a memory bandwidth greater than about 50 GB/sec and a power efficiency of greater than about 20 MB/sec/mW.
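
For orientation only, the sketch below shows the forward-propagation and backpropagation stages the abstract says are mapped onto the acceleration component, using a single fully connected layer and a squared-error loss. The layer choice, the loss, and the learning rate are illustrative assumptions; the abstract does not specify the network or the training algorithm.

import numpy as np

def forward(x, W, b):
    return W @ x + b                      # forward propagation: one dense layer

def backward(x, W, y_pred, y_true):
    grad_out = 2.0 * (y_pred - y_true)    # d(loss)/d(output) for squared error
    grad_W = np.outer(grad_out, x)        # gradient w.r.t. weights
    grad_b = grad_out                     # gradient w.r.t. bias
    grad_x = W.T @ grad_out               # gradient passed to the previous layer
    return grad_W, grad_b, grad_x

# Example usage with illustrative shapes.
rng = np.random.default_rng(0)
x, y_true = rng.standard_normal(8), rng.standard_normal(4)
W, b = rng.standard_normal((4, 8)), np.zeros(4)
y_pred = forward(x, W, b)
grad_W, grad_b, _ = backward(x, W, y_pred, y_true)
W -= 0.01 * grad_W
b -= 0.01 * grad_b                        # one gradient-descent update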
Abstract:
A smart NIC (Network Interface Card) is provided with features to enable the smart NIC to operate as an in-line NIC between a host's NIC and a network. The smart NIC provides pass-through transmission of network flows for the host. Packets sent to and from the host pass through the smart NIC. As a pass-through point, the smart NIC is able to accelerate the performance of the pass-through network flows by analyzing packets, inserting packets, dropping packets, inserting or recognizing congestion information, and so forth. In addition, the smart NIC provides a lightweight transport protocol (LTP) module that enables it to establish connections with other smart NICs. The LTP connections allow the smart NICs to exchange data without passing network traffic through their respective hosts.
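
A minimal sketch of the two data paths described above follows: an in-line pass-through path between the host's NIC and the network, with hooks for dropping packets and marking congestion, and a separate NIC-to-NIC channel standing in for the lightweight transport protocol (LTP). All class and method names are illustrative assumptions; the abstract does not define the LTP framing or API.

from collections import deque

class Link:
    """Tiny in-memory stand-in for a physical port/link."""
    def __init__(self):
        self.queue = deque()
    def send(self, data):
        self.queue.append(data)

class PassThroughPath:
    """Models the in-line forwarding path between the host's NIC and the network."""
    def __init__(self, host_link, network_link):
        self.host_link = host_link          # link toward the host's NIC
        self.network_link = network_link    # link toward the network

    def forward(self, packet, toward_network=True):
        if self._should_drop(packet):       # the smart NIC may drop packets
            return None
        packet = self._mark_congestion(packet)   # insert/recognize congestion info
        (self.network_link if toward_network else self.host_link).send(packet)
        return packet

    def _should_drop(self, packet):
        return False                        # placeholder policy

    def _mark_congestion(self, packet):
        return packet                       # placeholder

class LtpChannel:
    """NIC-to-NIC connection that bypasses both hosts (illustrative stand-in for LTP)."""
    def __init__(self, peer_link):
        self.peer_link = peer_link
    def send(self, payload):
        self.peer_link.send(payload)        # data goes directly to the peer smart NIC

# Example usage: pass a host packet toward the network, then push a payload
# directly to a peer smart NIC without involving either host.
host, net, peer = Link(), Link(), Link()
PassThroughPath(host, net).forward(b"host->network packet")
LtpChannel(peer).send(b"nic-to-nic payload")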