Invention Publication
- Patent Title: OUTPUT DRAIN PATH FACILITATING FLEXIBLE SCHEDULE-BASED DEEP NEURAL NETWORK ACCELERATOR
- Application No.: US18474464
- Application Date: 2023-09-26
- Publication No.: US20240013040A1
- Publication Date: 2024-01-11
- Inventors: Arnab Raha, Deepak Abraham Mathaikutty, Umer Iftikhar Cheema, Dinakar Kondru
- Applicant: Intel Corporation
- Applicant Address: Santa Clara, CA, US
- Assignee: Intel Corporation
- Current Assignee: Intel Corporation
- Current Assignee Address: Santa Clara, CA, US
- Main IPC: G06N3/063
- IPC: G06N3/063; G06N3/048; G06N3/0464

Abstract:
A drain module may drain activations in an output tensor of a convolution from a processing element (PE) array that performs the convolution. The drain module may extract activations generated in a collection of PE columns. Activations generated in the PE columns of the collection may be concatenated, e.g., activations generated in the first PE column of the collection may be followed by activations generated in the second PE column, and so on. The activations in the output tensor may be rearranged into activation vectors. Each activation vector may include activations in different output channels of the deep learning operation. The activations in each activation vector may share the same (X, Y) coordinate in the output tensor. The drain module may determine a memory address for an activation based on the activation's (X, Y, Z) coordinate in the output tensor and write the activation to that memory address.
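The rearrangement and address computation described in the abstract can be illustrated with a minimal sketch. The sketch below is not the patented implementation: the helper names (`concat_column_outputs`, `activation_address`), the channel-contiguous memory layout, and the one-byte-per-activation assumption are all hypothetical choices made for illustration.

```python
import numpy as np

def concat_column_outputs(column_outputs):
    """Concatenate activations drained from a collection of PE columns,
    preserving column order (first column's activations, then the second's, ...).
    Hypothetical helper; the actual drain logic is hardware-specific."""
    return np.concatenate(column_outputs, axis=0)

def activation_address(x, y, z, width, num_channels, base=0, bytes_per_act=1):
    """Compute a memory address for the activation at coordinate (x, y, z),
    assuming activations that share an (X, Y) position are stored contiguously
    across output channels (an assumed channel-contiguous layout)."""
    return base + ((y * width + x) * num_channels + z) * bytes_per_act

# Example: rearrange a toy output tensor (H=2, W=3, C=4) into activation
# vectors, one vector per (X, Y) position spanning all output channels.
output = np.arange(2 * 3 * 4).reshape(2, 3, 4)
vectors = output.reshape(-1, output.shape[-1])   # each row: one (X, Y) position
print(activation_address(x=1, y=0, z=2, width=3, num_channels=4))  # -> 6
```

Under these assumptions, the address of an activation is fully determined by its (X, Y, Z) coordinate, which is the property the drain module relies on when writing activations to memory.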