Multi-port stream switch for stream interconnect network

    Publication No.: US11323391B1

    Publication Date: 2022-05-03

    Application No.: US16833029

    Filing Date: 2020-03-27

    Applicant: Xilinx, Inc.

    Abstract: Some examples described herein relate to multi-port stream switches of data processing engines (DPEs) of an electronic device, such as a programmable device. In an example, a programmable device includes a plurality of DPEs. Each DPE includes a hardened processor core and a stream switch. The stream switch is connected to the stream switches of the DPEs that neighbor the respective DPE in respective directions. The stream switch has input ports and output ports associated with each of the directions. For each direction, each input port associated with that direction is selectively connectable to one of the output ports associated with that direction.
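The per-direction routing described in the abstract can be sketched in software. This is a hypothetical behavioral model (class and port names are illustrative, not the patented RTL): for each direction, any input port can be selectively connected to one output port of that same direction.

```python
# Hypothetical behavioral model of a multi-port stream switch:
# per direction, each input port is selectively routed to one
# output port associated with that same direction.

DIRECTIONS = ("north", "south", "east", "west")

class StreamSwitch:
    def __init__(self, ports_per_direction=2):
        self.n = ports_per_direction
        # route[direction][input_port] -> output_port index, or None if unrouted
        self.route = {d: [None] * ports_per_direction for d in DIRECTIONS}

    def connect(self, direction, in_port, out_port):
        """Selectively connect an input port to an output port of the same direction."""
        assert direction in DIRECTIONS
        assert 0 <= in_port < self.n and 0 <= out_port < self.n
        self.route[direction][in_port] = out_port

    def forward(self, direction, in_port, word):
        """Forward a data word arriving on (direction, in_port) to its routed output."""
        out = self.route[direction][in_port]
        if out is None:
            raise RuntimeError("input port is not routed")
        return (direction, out, word)

# Route north input 0 to north output 1, then forward a word through it.
sw = StreamSwitch()
sw.connect("north", 0, 1)
routed = sw.forward("north", 0, 0xAB)
```

The model captures the key constraint from the abstract: routing is confined to ports of the same direction, which keeps the crossbar small compared with a full any-to-any switch.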

    Data selection network for a data processing engine in an integrated circuit

    Publication No.: US11061673B1

    Publication Date: 2021-07-13

    Application No.: US15944393

    Filing Date: 2018-04-03

    Applicant: Xilinx, Inc.

    Abstract: An example core for a data processing engine (DPE) includes a first register file configured to provide a first plurality of output lanes and a processor coupled to the register file. The processor includes a multiply-accumulate (MAC) circuit, a first permute circuit coupled between the first register file and the MAC circuit, and a second permute circuit coupled between the first register file and the MAC circuit. The first permute circuit is configured to generate a first vector by selecting a first set of output lanes from the first plurality of output lanes, and the second permute circuit is configured to generate a second vector by selecting a second set of output lanes from the first plurality of output lanes.
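The data path above can be illustrated with a small functional sketch. This is an assumption-laden model (lane counts and selections are invented for illustration): two permute stages each pick lanes from the register file's output, and the resulting vectors feed a multiply-accumulate.

```python
# Hypothetical sketch of the data selection path: two permute circuits each
# select lanes from the register file output, and a MAC combines the vectors.

def permute(lanes, selection):
    """Build a vector by selecting lanes (by index) from the output lanes."""
    return [lanes[i] for i in selection]

def mac(vec_a, vec_b, acc=0):
    """Multiply-accumulate: acc + dot(vec_a, vec_b)."""
    return acc + sum(a * b for a, b in zip(vec_a, vec_b))

lanes = [1, 2, 3, 4, 5, 6, 7, 8]    # register file output lanes (illustrative)
v1 = permute(lanes, [0, 2, 4, 6])   # first permute circuit: even lanes
v2 = permute(lanes, [1, 3, 5, 7])   # second permute circuit: odd lanes
result = mac(v1, v2)                # 1*2 + 3*4 + 5*6 + 7*8 = 100
```

Selecting lanes in hardware rather than reshuffling data in memory lets the MAC consume operands in the order an algorithm (e.g. a filter kernel) needs them, without extra load/store traffic.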

    Cascade streaming between data processing engines in an array

    Publication No.: US11016822B1

    Publication Date: 2021-05-25

    Application No.: US15944578

    Filing Date: 2018-04-03

    Applicant: Xilinx, Inc.

    Abstract: Examples herein describe techniques for communicating directly between cores in an array of data processing engines. In one embodiment, the array is a 2D array where each of the data processing engines includes one or more cores. In addition to the cores, the data processing engines can include a memory module (with memory banks for storing data) and an interconnect which provides connectivity between the cores. Using the interconnect, however, can add latency when transmitting data between the cores. In the embodiments herein, the array includes core-to-core communication links that directly connect one core in the array to another core. The cores can use these communication links to bypass the interconnect and the memory module to transmit data directly.
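The cascade idea can be sketched as a chain of cores, each handing its partial result directly to the next. This is a simplified software analogy (the class and the accumulation step are hypothetical): the point is that the value flows over a dedicated core-to-core link, bypassing the interconnect and memory module.

```python
# Hypothetical sketch: cores chained by direct cascade links, so a partial
# result flows core-to-core without touching the interconnect or memory.

class Core:
    def __init__(self, coeff):
        self.coeff = coeff
        self.next_core = None   # direct cascade link to a neighboring core

    def process(self, cascade_in):
        # One stage of a pipelined accumulation (illustrative computation).
        partial = cascade_in + self.coeff
        if self.next_core is not None:
            # Hand off directly to the neighbor -- no interconnect latency.
            return self.next_core.process(partial)
        return partial

# Build a three-core cascade and push a value through it.
cores = [Core(c) for c in (1, 2, 3)]
for a, b in zip(cores, cores[1:]):
    a.next_core = b
total = cores[0].process(0)
```

In hardware the "link" is a wide dedicated bus between adjacent cores, which is why the abstract emphasizes avoiding the latency of the streaming interconnect.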

    Event-based debug, trace, and profile in device with data processing engine array

    Publication No.: US11567881B1

    Publication Date: 2023-01-31

    Application No.: US15944602

    Filing Date: 2018-04-03

    Applicant: Xilinx, Inc.

    Abstract: A device may include an array of data processing engines (DPEs) on a die and an event broadcast network. Each of the DPEs includes a core, a memory module, event logic in at least one of the core or the memory module, and an event broadcast circuitry coupled to the event logic. The event logic is capable of detecting an occurrence of one or more events in the core or the memory module. The event broadcast circuitry is capable of receiving an indication of a detected event detected by the event logic. The event broadcast network includes interconnections between the event broadcast circuitry of the DPEs. Detected events can trigger or initiate various responses, such as debugging, tracing, and profiling.
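A minimal model of the event broadcast network, assuming a simple flood-style propagation with loop suppression (the propagation policy and event names here are invented for illustration): event logic in one DPE detects an occurrence, and the broadcast circuitry forwards the indication to its neighbors, where it can trigger debug, trace, or profile actions.

```python
# Hypothetical model: each DPE's event broadcast circuitry forwards a
# detected event to its neighbors, so one occurrence can trigger trace
# or profile responses array-wide.

class EventBroadcast:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # interconnections to other DPEs' broadcast circuitry
        self.log = []         # stand-in for debug/trace/profile responses

    def detect(self, event):
        """Event logic in the core or memory module detects an occurrence."""
        self._receive(event, seen=set())

    def _receive(self, event, seen):
        if self.name in seen:
            return            # suppress re-broadcast around loops
        seen.add(self.name)
        self.log.append(event)   # e.g. start a trace, bump a profile counter
        for n in self.neighbors:
            n._receive(event, seen)

# A three-DPE chain: an event detected at 'a' reaches 'c' through 'b'.
a, b, c = EventBroadcast("a"), EventBroadcast("b"), EventBroadcast("c")
a.neighbors, b.neighbors = [b], [c]
a.detect("core_stall")
```

The useful property this models is that a single detected event can initiate coordinated responses (e.g. halting or tracing) across many DPEs without software polling.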

    Communicating between data processing engines using shared memory

    Publication No.: US11379389B1

    Publication Date: 2022-07-05

    Application No.: US15944179

    Filing Date: 2018-04-03

    Applicant: Xilinx, Inc.

    Abstract: Examples herein describe techniques for transferring data between data processing engines in an array using shared memory. In one embodiment, certain engines in the array have connections to the memory in neighboring engines. For example, each engine may have its own assigned memory module which can be accessed directly (e.g., without using a streaming or memory mapped interconnect). In addition, the surrounding engines (referred to herein as the neighboring engines) may also include direct connections to the memory module. Using these direct connections, the cores can load and/or store data in the neighboring memory modules.
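The shared-memory scheme can be sketched as follows. This is a hypothetical model (sizes and names are illustrative): each engine owns a memory module, neighboring engines hold direct references to it, and a producer core stores data straight into a consumer's memory without an interconnect hop.

```python
# Hypothetical sketch: each DPE owns a memory module, and neighboring DPEs
# have direct connections to it, so a core can load/store a neighbor's
# memory without using the streaming or memory-mapped interconnect.

class MemoryModule:
    def __init__(self, size):
        self.banks = [0] * size

class DPE:
    def __init__(self, mem_size=16):
        self.memory = MemoryModule(mem_size)  # this engine's own memory module
        self.neighbor_mems = []               # direct connections to neighbors' memory

    def store(self, mem, addr, value):
        mem.banks[addr] = value               # direct access, no interconnect hop

    def load(self, mem, addr):
        return mem.banks[addr]

# A producer writes into its neighbor's memory module directly.
producer, consumer = DPE(), DPE()
producer.neighbor_mems.append(consumer.memory)
producer.store(producer.neighbor_mems[0], 0, 42)
```

This direct-neighbor sharing is what lets adjacent engines exchange buffers by pointer-like handoff rather than by copying data through the interconnect.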
