SPATIAL DISTRIBUTION IN A 3D DATA PROCESSING UNIT

    Publication Number: US20230359577A1

    Publication Date: 2023-11-09

    Application Number: US18224859

    Filing Date: 2023-07-21

    Applicant: XILINX, INC.

    Inventor: Jaideep DASTIDAR

    CPC classification number: G06F13/4208 G06F21/602

    Abstract: The embodiments herein describe a 3D SmartNIC that spatially distributes compute, storage, or network functions in three dimensions using a plurality of layers. That is, unlike current SmartNICs, which perform acceleration functions in two dimensions, a 3D SmartNIC can distribute these functions across multiple stacked layers, where each layer can communicate directly or indirectly with the other layers.

    HARDWARE COHERENT COMPUTATIONAL EXPANSION MEMORY

    Publication Number: US20220100523A1

    Publication Date: 2022-03-31

    Application Number: US17035484

    Filing Date: 2020-09-28

    Applicant: XILINX, INC.

    Abstract: Embodiments herein describe transferring ownership of data (e.g., cachelines or blocks of data comprising multiple cachelines) from a host to hardware in an I/O device. In one embodiment, the host and I/O device (e.g., an accelerator) are part of a cache-coherent system where ownership of data can be transferred from a home agent (HA) in the host to a local HA in the I/O device—e.g., a computational slave agent (CSA). That way, a function on the I/O device (e.g., an accelerator function) can request data from the local HA without these requests having to be sent to the host HA. Further, the accelerator function can indicate whether the local HA tracks the data on a cacheline-basis or by a data block (e.g., multiple cachelines). This provides flexibility that can reduce overhead from tracking the data, depending on the function's desired use of the data.
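    As an illustrative aside (not part of the patent text), the granularity tradeoff described above can be sketched in Python. All names here (`LocalHomeAgent`, `BLOCK_LINES`, etc.) are hypothetical; the point is that block-granularity tracking covers many cachelines with a single tracking entry:

```python
from enum import Enum

CACHELINE_BYTES = 64
BLOCK_LINES = 8  # assumed block size: 8 cachelines per block


class Granularity(Enum):
    CACHELINE = "cacheline"
    BLOCK = "block"


class LocalHomeAgent:
    """Toy model of a local HA (CSA) in the I/O device.

    Once ownership is transferred from the host HA, accelerator
    requests for the owned range can be serviced locally.
    """

    def __init__(self):
        self.owned = {}  # base address of tracked unit -> Granularity

    def acquire(self, addr, granularity):
        unit = (CACHELINE_BYTES if granularity is Granularity.CACHELINE
                else CACHELINE_BYTES * BLOCK_LINES)
        self.owned[addr - addr % unit] = granularity

    def owns(self, addr):
        # A cacheline is owned if it is tracked directly, or if its
        # enclosing block was acquired at block granularity.
        line = addr - addr % CACHELINE_BYTES
        block = addr - addr % (CACHELINE_BYTES * BLOCK_LINES)
        return line in self.owned or self.owned.get(block) is Granularity.BLOCK
```

    Acquiring one block needs a single tracking entry, whereas tracking the same range per cacheline needs eight, which is the overhead reduction the abstract alludes to.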

    HOST ENDPOINT ADAPTIVE COMPUTE COMPOSABILITY

    Publication Number: US20240061805A1

    Publication Date: 2024-02-22

    Application Number: US17892955

    Filing Date: 2022-08-22

    Applicant: XILINX, INC.

    CPC classification number: G06F15/7896

    Abstract: Embodiments herein describe a processor system that includes an integrated, adaptive accelerator. In one embodiment, the processor system includes multiple core complex chiplets that each contain one or more processing cores for a host CPU. In addition, the processor system includes an accelerator chiplet. The processor system can assign one or more of the core complex chiplets to the accelerator chiplet to form an IO device while the remaining core complex chiplets form the CPU for the host. In this manner, rather than the accelerator and the CPU having independent compute resources, the accelerator can be integrated into the processor system of the host so that hardware resources can be divided between the CPU and the accelerator depending on the needs of the particular application(s) executed by the host.
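    The chiplet-partitioning idea can be illustrated with a minimal sketch (hypothetical names, not the patent's implementation): whatever chiplets are lent to the accelerator are subtracted from the host CPU's pool.

```python
class ProcessorSystem:
    """Toy model of composing core complex chiplets into a host CPU
    and an accelerator-backed IO device."""

    def __init__(self, num_chiplets):
        self.all_chiplets = frozenset(range(num_chiplets))
        self.accelerator_chiplets = set()

    def assign_to_accelerator(self, *chiplet_ids):
        ids = set(chiplet_ids)
        if not ids <= self.all_chiplets - self.accelerator_chiplets:
            raise ValueError("chiplet unknown or already assigned")
        self.accelerator_chiplets |= ids

    @property
    def host_cpu_chiplets(self):
        # Whatever is not assigned to the accelerator remains the host CPU.
        return self.all_chiplets - self.accelerator_chiplets
```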

    ZONED ACCELERATOR EMBEDDED PROCESSING

    Publication Number: US20230222082A1

    Publication Date: 2023-07-13

    Application Number: US17574342

    Filing Date: 2022-01-12

    Applicant: XILINX, INC.

    CPC classification number: G06F13/4265 G06F13/1684

    Abstract: Embodiments herein describe end-to-end bindings to create zones that extend between different components in a SoC, such as an I/O gateway, a processor subsystem, a NoC, storage and data accelerators, programmable logic, etc. Each zone can be assigned to a different domain that is controlled by a tenant such as an external host, or software executing on that host. Embodiments herein create end-to-end bindings between acceleration engines, I/O gateways, and embedded cores in SoCs. Instead of treating these components as disparate monolithic blocks, the bindings divide the hardware and memory resources of the components that make up the SoC into different zones. Those zones in turn can have unique bindings to multiple tenants. The bindings can be configured in bridges between components to divide resources into the zones, enabling the tenants of those zones to have dedicated available resources that are secure from the other tenants.
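    A minimal sketch of the binding bookkeeping (illustrative only; `ZoneBindings` and the resource keys are invented names): resources from several components are grouped into a zone, the zone is bound to one tenant, and access by any other tenant is denied.

```python
class ZoneBindings:
    """Toy model of end-to-end zone bindings across SoC components."""

    def __init__(self):
        self.zone_of = {}    # (component, resource_id) -> zone
        self.tenant_of = {}  # zone -> tenant

    def bind(self, zone, tenant, resources):
        # resources: iterable of (component, resource_id) pairs spanning,
        # e.g., the I/O gateway, the NoC, and programmable logic.
        if self.tenant_of.setdefault(zone, tenant) != tenant:
            raise ValueError("zone already bound to another tenant")
        for key in resources:
            if key in self.zone_of:
                raise ValueError(f"{key} already assigned to a zone")
            self.zone_of[key] = zone

    def may_access(self, tenant, component, resource_id):
        zone = self.zone_of.get((component, resource_id))
        return zone is not None and self.tenant_of[zone] == tenant
```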

    CACHE COHERENT ACCELERATION FUNCTION VIRTUALIZATION

    Publication Number: US20230004442A1

    Publication Date: 2023-01-05

    Application Number: US17903084

    Filing Date: 2022-09-06

    Applicant: XILINX, INC.

    Abstract: The embodiments herein describe a virtualization framework for cache coherent accelerators where the framework incorporates a layered approach for accelerators in their interactions between a cache coherent protocol layer and the functions performed by the accelerator. In one embodiment, the virtualization framework includes a first layer containing the different instances of accelerator functions (AFs), a second layer containing accelerator function engines (AFEs) in each of the AFs, and a third layer containing accelerator function threads (AFTs) in each of the AFEs. Partitioning the hardware circuitry using multiple layers in the virtualization framework allows the accelerator to be quickly re-provisioned in response to requests made by guest operating systems or virtual machines executing in a host. Further, using the layers to partition the hardware permits the host to re-provision sub-portions of the accelerator while the remaining portions of the accelerator continue to operate as normal.
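    The AF/AFE/AFT layering can be sketched as follows (a toy model with assumed names, not the patent's hardware design): provisioning is tracked per AFE, so one engine can be released and handed to a new guest while sibling engines keep their assignments.

```python
class CoherentAccelerator:
    """Toy model of the AF -> AFE -> AFT hierarchy."""

    def __init__(self, layout):
        # layout: {af_name: {afe_name: num_threads}}
        self.threads = {(af, afe): n
                        for af, afes in layout.items()
                        for afe, n in afes.items()}
        # Each AFE is provisioned to at most one guest at a time.
        self.guest_of = {key: None for key in self.threads}

    def provision(self, guest, af, afe):
        if self.guest_of[(af, afe)] is not None:
            raise RuntimeError("AFE in use; release it first")
        self.guest_of[(af, afe)] = guest

    def release(self, af, afe):
        self.guest_of[(af, afe)] = None
```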

    FINE-GRAINED MULTI-TENANT CACHE MANAGEMENT

    Publication Number: US20220292024A1

    Publication Date: 2022-09-15

    Application Number: US17826074

    Filing Date: 2022-05-26

    Applicant: XILINX, INC.

    Abstract: The embodiments herein describe a multi-tenant cache that implements fine-grained allocation of the entries within the cache. Each entry in the cache can be allocated to a particular tenant—i.e., fine-grained allocation—rather than having to assign all the entries in a way to a particular tenant. If the tenant does not currently need those entries (which can be tracked using counters), the entries can be invalidated (i.e., deallocated) and assigned to another tenant. Thus, fine-grained allocation provides a flexible allocation of entries in a hardware cache that permits an administrator to reserve any number of entries for a particular tenant, but also permit other tenants to use this bandwidth when the reserved entries are not currently needed by the tenant.
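    To make the allocation policy concrete, here is a minimal sketch (invented names; the patent's hardware tracks usage with counters, which the `used_by` count stands in for): a tenant first takes a free entry, and otherwise invalidates an entry held by a tenant that has exceeded its own reservation.

```python
class MultiTenantCache:
    """Toy model of fine-grained per-entry tenant allocation."""

    def __init__(self, num_entries):
        self.entry_tenant = [None] * num_entries  # per-entry owner
        self.reservation = {}  # tenant -> number of reserved entries

    def reserve(self, tenant, count):
        if sum(self.reservation.values()) + count > len(self.entry_tenant):
            raise ValueError("reservations exceed cache capacity")
        self.reservation[tenant] = count

    def used_by(self, tenant):
        return sum(1 for t in self.entry_tenant if t == tenant)

    def allocate(self, tenant):
        # Prefer a free (invalid) entry.
        for i, t in enumerate(self.entry_tenant):
            if t is None:
                self.entry_tenant[i] = tenant
                return i
        # Otherwise reclaim from a tenant borrowing beyond its
        # reservation: invalidate (deallocate) and reassign the entry.
        for i, t in enumerate(self.entry_tenant):
            if t != tenant and self.used_by(t) > self.reservation.get(t, 0):
                self.entry_tenant[i] = tenant
                return i
        raise RuntimeError("no entry available")
```

    In the example below, tenant B borrows idle capacity while A does not need it, and A later reclaims its reserved entries by invalidating B's extras.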

    ADAPTIVE INTEGRATED PROGRAMMABLE DATA PROCESSING UNIT

    Publication Number: US20240061799A1

    Publication Date: 2024-02-22

    Application Number: US17892949

    Filing Date: 2022-08-22

    Applicant: XILINX, INC.

    CPC classification number: G06F13/4027 G06F13/28

    Abstract: An integrated circuit device includes multiple heterogeneous functional circuit blocks and interface circuitry that permits the heterogeneous functional circuit blocks to exchange data with one another using communication protocols of the respective heterogeneous functional circuit blocks. The IC device includes fixed-function circuitry, user-configurable circuitry (e.g., programmable logic), and/or embedded processors/cores. A functional circuit block may be configured in fixed-function circuitry or in the user-configurable circuitry (i.e., as a plug-in). The interface circuitry includes a network-on-a-chip (NoC), an adaptor configured in the user-configurable circuitry, and/or memory. The memory may be accessible to the functional circuit blocks through an adaptor configured in the user-configurable circuitry and/or through the NoC. The IC device may be configured as a SmartNIC, DPU, or other type of system-on-a-chip (SoC).

    MULTI-TENANT AWARE DATA PROCESSING UNITS

    Publication Number: US20240061796A1

    Publication Date: 2024-02-22

    Application Number: US17892989

    Filing Date: 2022-08-22

    Applicant: XILINX, INC.

    CPC classification number: G06F13/20 G06F2213/40

    Abstract: Embodiments herein describe creating tag bindings that can be used to assign tags to data corresponding to different tenants using a data processing unit (DPU) such as a SmartNIC, Artificial Intelligence Unit, Network Storage Unit, Database Acceleration Units, and the like. In one embodiment, the DPUs include tag gateways at the interface between a host and network element (e.g., a switch) that recognize and tag the data corresponding to the tenants. These tags are then recognized by data processing engines (DPEs) in the DPU such as AI engines, cryptographic engines, encryption engines, Direct Memory Access (DMA) engines, and the like. These DPEs can be configured to perform tag policies that provide security isolation and performance isolation between the tenants.
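    The gateway-tags-then-engines-enforce flow can be sketched like this (illustrative names only, e.g. the `vf0`/`vf1` source identifiers are invented): the gateway stamps each packet with its tenant's tag at ingress, and a downstream engine refuses work for tags outside its policy.

```python
from dataclasses import dataclass


@dataclass
class Packet:
    src: str        # e.g. the host function or flow the data arrived on
    payload: bytes
    tag: int = None


class TagGateway:
    """Sits at the host/network interface and tags tenant traffic."""

    def __init__(self, bindings):
        self.bindings = bindings  # src -> tenant tag

    def ingress(self, pkt):
        pkt.tag = self.bindings[pkt.src]
        return pkt


class DataProcessingEngine:
    """Stand-in for a DPE (AI, crypto, DMA, ...) enforcing a tag policy."""

    def __init__(self, allowed_tags):
        self.allowed_tags = allowed_tags

    def process(self, pkt):
        if pkt.tag not in self.allowed_tags:
            raise PermissionError("tag not permitted on this engine")
        return pkt.payload  # placeholder for the engine's real work
```

    Rejecting foreign tags at each engine is what gives the security isolation between tenants; performance isolation would additionally rate-limit per tag, which this sketch omits.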

    ADAPTIVE INTEGRITY LEVELS IN ELECTRONIC AND PROGRAMMABLE LOGIC SYSTEMS

    Publication Number: US20230222217A1

    Publication Date: 2023-07-13

    Application Number: US17571288

    Filing Date: 2022-01-07

    Applicant: XILINX, INC.

    Inventor: Jaideep DASTIDAR

    CPC classification number: G06F21/57 G06F2221/034

    Abstract: Methods and apparatus for adaptive integrity levels in electronic and programmable logic systems. In one example, an interface for communication between a first component and a second component is provided. The interface includes logic configured to change an integrity level for a communication from the first component to the second component during operation of the first component and the second component.

    DISAGGREGATED SWITCH CONTROL PATH WITH DIRECT-ATTACHED DISPATCH

    Publication Number: US20210382838A1

    Publication Date: 2021-12-09

    Application Number: US16894446

    Filing Date: 2020-06-05

    Applicant: XILINX, INC.

    Abstract: Embodiments herein describe techniques for separating data transmitted between I/O functions in an integrated component and a host into separate data paths. In one embodiment, data packets are transmitted using a direct data path that bypasses a switch in the integrated component. In contrast, configuration packets (e.g., hot-swap, hot-add, hot-remove data, some types of descriptors, etc.) are transmitted to the switch which then forwards the configuration packets to their destination. The direct path for the data packets does not rely on switch connectivity (and its accompanying latency) to transport bandwidth sensitive traffic between the host and the I/O functions, and instead avoids (e.g., bypasses) the bandwidth, resource, store/forward, and latency properties of the switch. Meanwhile, the software compatibility attributes, such as hot plug attributes (which are not latency or bandwidth sensitive), continue to be supported by using the switch to provide a configuration data path.
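    The routing decision itself reduces to a small dispatch rule, sketched below with invented packet-type names: latency-sensitive data packets take the direct path that bypasses the switch, while configuration traffic, which tolerates the switch's store-and-forward latency, is sent through it.

```python
# Assumed set of latency-tolerant configuration packet types.
CONFIG_TYPES = {"hot_add", "hot_remove", "hot_swap", "config_descriptor"}


def dispatch(pkt_type, payload, direct_path, switch_path):
    """Route a packet to the direct data path or the switch path.

    direct_path / switch_path are callables standing in for the two
    hardware paths between the host and the I/O functions.
    """
    if pkt_type in CONFIG_TYPES:
        return switch_path(payload)  # store/forward via the switch
    return direct_path(payload)      # bypasses the switch entirely
```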
