APPLICATION IMPLEMENTATION AND BUFFER ALLOCATION FOR A DATA PROCESSING ENGINE ARRAY

    Publication Number: US20230185548A1

    Publication Date: 2023-06-15

    Application Number: US17643622

    Filing Date: 2021-12-10

    Applicant: Xilinx, Inc.

    CPC classification number: G06F8/433

    Abstract: Implementing an application can include generating, from the application, a compact data flow graph (DFG) including load nodes, inserting, in the compact DFG, a plurality of virtual buffer nodes (VBNs) for each of a plurality of buffers of a data processing engine (DPE) array to be allocated to nets of the application, and forming groups of one or more load nodes of the compact DFG based on shared buffer requirements of the loads on a per-net basis. Virtual driver nodes (VDNs) that map to drivers of nets can be added to the compact DFG, where each group of the compact DFG is driven by a dedicated VDN. Connections between VDNs and load nodes through selected ones of the VBNs are created according to a plurality of constraints. The plurality of buffers are allocated to the nets based on the compact DFG as connected.
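    The abstract describes the graph-construction flow only in claim language. As a rough illustration (not the patented implementation), the following Python sketch builds a toy "compact DFG" under several assumptions: a net is modeled as a driver plus a map of loads to a single numeric buffer requirement, one virtual buffer node (VBN) is created per physical buffer, loads of a net are grouped by identical requirement, each group gets its own virtual driver node (VDN), and the only "constraint" enforced is that a buffer serves one net. Names such as build_compact_dfg and CompactDFG are hypothetical.

        from collections import defaultdict

        class CompactDFG:
            """Toy compact data flow graph: nodes are strings, edges are (src, dst) pairs."""
            def __init__(self):
                self.nodes = set()
                self.edges = set()

            def add_node(self, name):
                self.nodes.add(name)

            def add_edge(self, src, dst):
                self.edges.add((src, dst))

        def _pick_free_vbn(vbns, g):
            # Hypothetical constraint: each VBN (physical buffer) may be used once.
            for vbn in vbns:
                if not any(dst == vbn for _, dst in g.edges):
                    return vbn
            raise RuntimeError("not enough buffers for the requested nets")

        def build_compact_dfg(nets, dpe_buffers):
            """nets: {net_name: {"driver": str, "loads": {load_name: requirement}}}
            dpe_buffers: iterable of physical buffer names in the DPE array."""
            g = CompactDFG()

            # 1. Load nodes come straight from the application's nets.
            for net, info in nets.items():
                for load in info["loads"]:
                    g.add_node(f"load:{net}:{load}")

            # 2. One virtual buffer node (VBN) per physical buffer to be allocated.
            vbns = [f"vbn:{buf}" for buf in dpe_buffers]
            for vbn in vbns:
                g.add_node(vbn)

            # 3. Per net, group loads sharing a buffer requirement and drive each
            #    group with a dedicated virtual driver node (VDN).
            for net, info in nets.items():
                groups = defaultdict(list)
                for load, req in info["loads"].items():
                    groups[req].append(load)
                for idx, (req, loads) in enumerate(sorted(groups.items())):
                    vdn = f"vdn:{net}:{idx}"
                    g.add_node(vdn)
                    # 4. Connect VDN -> VBN -> load, subject to the toy capacity constraint.
                    for load in loads:
                        vbn = _pick_free_vbn(vbns, g)
                        g.add_edge(vdn, vbn)
                        g.add_edge(vbn, f"load:{net}:{load}")
            return g

    For example, build_compact_dfg({"n0": {"driver": "k0", "loads": {"k1": 2, "k2": 2}}}, ["b0", "b1"]) yields a single VDN for net n0 (both loads share the requirement 2) whose connections run through the two buffers.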

    Optimizing hardware design throughput by latency aware balancing of re-convergent paths

    Publication Number: US11604751B1

    Publication Date: 2023-03-14

    Application Number: US17316584

    Filing Date: 2021-05-10

    Applicant: XILINX, INC.

    Abstract: Embodiments herein describe techniques for preventing a stall when transmitting data between a producer and a consumer in the same integrated circuit (IC). A stall can occur when there is a split point and a convergence point between the producer and consumer. To prevent the stall, the embodiments herein adjust the latency of one of the paths (or both paths) such that the maximum latency of the shorter path is greater than or equal to the minimum latency of the longer path. When this condition is met, the shorter path has sufficient buffers (e.g., a sufficient number of FIFOs and registers) to queue/store packets along its length, so that a packet can travel along the longer path and reach the convergence point before the buffers in the shorter path are completely full (or just become completely full).
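    The key condition in the abstract reduces to a simple inequality: the shorter path must be able to buffer at least as many cycles of data as the longer path's minimum latency. The sketch below restates that check in Python; the function names and the sizing rule (add exactly the missing number of cycles of FIFO/register capacity) are illustrative assumptions, not language from the patent.

        def needs_rebalancing(short_path_max_latency, long_path_min_latency):
            """True when a stall is possible: the shorter re-convergent path cannot
            hold enough packets to cover the longer path's minimum latency."""
            return short_path_max_latency < long_path_min_latency

        def extra_buffering_needed(short_path_max_latency, long_path_min_latency):
            """Hypothetical sizing rule: add enough FIFO/register stages to the
            shorter path to close the gap; zero when the condition already holds."""
            return max(0, long_path_min_latency - short_path_max_latency)

        if __name__ == "__main__":
            # The shorter branch can absorb 4 cycles of data, but the longer branch
            # takes at least 10 cycles, so 6 more cycles of buffering are needed
            # before the convergence point stops back-pressuring the split point.
            print(needs_rebalancing(4, 10))       # True
            print(extra_buffering_needed(4, 10))  # 6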

    Application implementation and buffer allocation for a data processing engine array

    Publication Number: US11733980B2

    Publication Date: 2023-08-22

    Application Number: US17643622

    Filing Date: 2021-12-10

    Applicant: Xilinx, Inc.

    CPC classification number: G06F8/433

    Abstract: Implementing an application can include generating, from the application, a compact data flow graph (DFG) including load nodes, inserting, in the compact DFG, a plurality of virtual buffer nodes (VBNs) for each of a plurality of buffers of a data processing engine (DPE) array to be allocated to nets of the application, and forming groups of one or more load nodes of the compact DFG based on shared buffer requirements of the loads on a per-net basis. Virtual driver nodes (VDNs) that map to drivers of nets can be added to the compact DFG, where each group of the compact DFG is driven by a dedicated VDN. Connections between VDNs and load nodes through selected ones of the VBNs are created according to a plurality of constraints. The plurality of buffers are allocated to the nets based on the compact DFG as connected.
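    This granted publication shares its abstract with the application above. To round out the earlier sketch, the snippet below shows how a buffer-to-net allocation could be read off the connected compact DFG: every VBN reached by a VDN is treated as allocated to that VDN's net. It again follows the toy node-naming convention assumed above, not the patented allocation procedure.

        def extract_allocation(g):
            """Map each net to the physical buffers allocated to it, given a graph
            whose edges follow the vdn:<net>:<idx> -> vbn:<buffer> convention."""
            allocation = {}
            for src, dst in g.edges:
                if src.startswith("vdn:") and dst.startswith("vbn:"):
                    net = src.split(":")[1]
                    allocation.setdefault(net, set()).add(dst.split(":", 1)[1])
            return allocation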
