Adaptive acceleration of transport layer security

    Publication No.: US11983264B2

    Publication Date: 2024-05-14

    Application No.: US17457839

    Application Date: 2021-12-06

    Applicant: XILINX, INC.

    CPC classification number: G06F21/54 G06F21/6209 G06F21/85

    Abstract: Embodiments herein describe offloading encryption activities to a network interface controller/card (NIC) (e.g., a SmartNIC), which frees up server compute resources to focus on executing customer applications. In one embodiment, the SmartNIC includes a system on a chip (SoC) implemented on an integrated circuit (IC) that includes an embedded processor. Instead of executing a transport layer security (TLS) stack entirely in the embedded processor, the embodiments herein offload certain TLS tasks, such as generating public-private key pairs, to a Public Key Infrastructure (PKI) accelerator.
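    Below is a minimal Python sketch of the split the abstract describes, assuming a hypothetical PkiAccelerator interface; the actual SmartNIC driver API and hardware queues are not part of the abstract, and the key material here is only placeholder bytes.

```python
# Hypothetical sketch: route TLS public/private key-pair generation to a PKI
# accelerator while the rest of the TLS stack stays on the embedded processor.
# PkiAccelerator and EmbeddedTlsStack are illustrative stand-ins, not the
# actual SmartNIC driver interface.
import secrets
from dataclasses import dataclass


@dataclass
class KeyPair:
    private_key: bytes
    public_key: bytes


class PkiAccelerator:
    """Stand-in for the hardware PKI block reachable from the embedded CPU."""

    def generate_keypair(self, bits: int = 256) -> KeyPair:
        # Placeholder: real hardware would perform the EC/RSA math; here we
        # only return random bytes of the requested size.
        return KeyPair(secrets.token_bytes(bits // 8), secrets.token_bytes(bits // 8))


class EmbeddedTlsStack:
    """TLS stack on the SoC's embedded processor, minus the offloaded step."""

    def __init__(self, pki: PkiAccelerator):
        self.pki = pki

    def start_handshake(self) -> KeyPair:
        # Offloaded step: key-pair generation goes to the PKI accelerator.
        ephemeral = self.pki.generate_keypair()
        # Remaining handshake processing (state machine, record layer, ...)
        # stays on the embedded processor.
        return ephemeral


if __name__ == "__main__":
    stack = EmbeddedTlsStack(PkiAccelerator())
    print(stack.start_handshake().public_key.hex())
```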

    Compressed tag coherency messaging
    Invention Grant

    Publication No.: US11271860B1

    Publication Date: 2022-03-08

    Application No.: US16686067

    Application Date: 2019-11-15

    Applicant: XILINX, INC.

    Abstract: An example cache-coherent packetized network system includes: a home agent; a snooped agent; and a request agent configured to send, to the home agent, a request message for a first address, the request message having a first transaction identifier of the request agent; where the home agent is configured to send, to the snooped agent, a snoop request message for the first address, the snoop request message having a second transaction identifier of the home agent; and where the snooped agent is configured to send a data message to the request agent, the data message including a first compressed tag generated using a function based on the first address.
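    The following Python sketch illustrates how a compressed tag derived from the address could let the request agent match the snooped agent's data message to its outstanding request; the hash function and 12-bit tag width are assumptions, as the abstract does not specify the tag function.

```python
# Illustrative compressed-tag matching, assuming a simple hash-based tag
# function shared by both agents; the real function and tag width are not
# specified in the abstract.
import hashlib

TAG_BITS = 12  # assumed tag width for the sketch


def compressed_tag(address: int) -> int:
    """Derive a short tag from a cache-line address."""
    digest = hashlib.blake2b(address.to_bytes(8, "little"), digest_size=2).digest()
    return int.from_bytes(digest, "little") & ((1 << TAG_BITS) - 1)


# Request agent: record outstanding requests keyed by the expected tag.
addr = 0x8000_1240
outstanding = {compressed_tag(addr): {"address": addr, "txn_id": 7}}

# Snooped agent: build the data message carrying only the compressed tag.
data_message = {"tag": compressed_tag(addr), "payload": b"\x00" * 64}

# Request agent: match the returning data to the original request.
match = outstanding.get(data_message["tag"])
assert match is not None and match["address"] == addr
```

    Since the tag is shorter than the address, both agents must compute the same function, and concurrently outstanding addresses that collide on a tag would need to be disambiguated; the abstract leaves that to the protocol.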

    PERIPHERAL I/O DEVICE WITH ASSIGNABLE I/O AND COHERENT DOMAINS

    Publication No.: US20200327089A1

    Publication Date: 2020-10-15

    Application No.: US16380860

    Application Date: 2019-04-10

    Applicant: Xilinx, Inc.

    Abstract: Examples herein describe a peripheral I/O device with a hybrid gateway that permits the device to have both I/O and coherent domains. That is, the I/O device can benefit from a traditional I/O model, where the I/O device driver manages some of the compute resources in the I/O device, as well as from adding other compute resources in the I/O device to the same coherent domain used by the hardware in the host computing system. As a result, the compute resources in the coherent domain of the peripheral I/O device can communicate with the host in a similar manner as, e.g., CPU-to-CPU communication in the host. At the same time, the compute resources in the I/O domain can benefit from the advantages of the traditional I/O device model, which provides efficiencies when doing large memory transfers between the host and the I/O device (e.g., DMA).
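    A hypothetical configuration sketch of the assignable-domain idea: each compute resource on the device is bound to either the I/O domain or the coherent domain. The resource names and dictionary layout are invented for illustration.

```python
# Hypothetical assignment of a peripheral device's compute resources to either
# the I/O domain (driver-managed, DMA-style transfers) or the coherent domain
# (shared with the host CPUs). Names are illustrative only.
from enum import Enum


class Domain(Enum):
    IO = "io"               # managed by the I/O device driver
    COHERENT = "coherent"   # participates in the host's coherent domain


device_resources = {
    "dma_engine_0": Domain.IO,         # bulk transfers via the traditional model
    "accelerator_0": Domain.COHERENT,  # shares the host's coherent address space
    "accelerator_1": Domain.COHERENT,
}


def resources_in(domain: Domain) -> list:
    return [name for name, d in device_resources.items() if d is domain]


print(resources_in(Domain.COHERENT))  # ['accelerator_0', 'accelerator_1']
```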

    Domain assist processor-peer for coherent acceleration

    Publication No.: US10698842B1

    Publication Date: 2020-06-30

    Application No.: US16380856

    Application Date: 2019-04-10

    Applicant: Xilinx, Inc.

    Abstract: Examples herein describe a peripheral I/O device with a domain assist processor (DAP) and a domain specific accelerator (DSA) that are in the same coherent domain as CPUs and memory in a host computing system. Peripheral I/O devices were previously unable to participate in a cache-coherent shared-memory multiprocessor paradigm with hardware resources in the host computing system. As a result, domain assist processing for lightweight processor functions (e.g., open source functions such as gzip, open source crypto libraries, open source network switches, etc.) is performed either using CPU resources in the host or by provisioning a special processing system in the peripheral I/O device (e.g., using programmable logic in an FPGA). The embodiments herein use a DAP in the peripheral I/O device to perform the lightweight processor functions that would otherwise be performed by hardware resources in the host or by a special processing system in the peripheral I/O device.
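    A rough Python sketch of the dispatch decision the abstract implies, routing a lightweight function (gzip-style compression) to the DAP instead of a host CPU; the DomainAssistProcessor class is a stand-in, and zlib is used only as an illustrative payload.

```python
# Illustrative sketch: prefer the domain assist processor (DAP) for lightweight
# functions so host CPU cycles stay available for applications. The classes
# below are stand-ins; the real DAP sits in the device's coherent domain.
import zlib
from typing import Optional


class DomainAssistProcessor:
    def compress(self, data: bytes) -> bytes:
        # On real hardware this would run on the DAP; here it is just zlib.
        return zlib.compress(data)


class HostCpu:
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)


def compress(data: bytes, dap: Optional[DomainAssistProcessor]) -> bytes:
    target = dap if dap is not None else HostCpu()
    return target.compress(data)


print(len(compress(b"example payload " * 100, DomainAssistProcessor())))
```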

    Transparent port aggregation in multi-chip transport protocols

    Publication No.: US10664422B1

    Publication Date: 2020-05-26

    Application No.: US16527504

    Application Date: 2019-07-31

    Applicant: XILINX, INC.

    Abstract: Various implementations of a multi-chip system operable according to a predefined transport protocol are disclosed. In one embodiment, a system comprises a first IC comprising a processing element communicatively coupled with first physical ports. The system further comprises a second IC comprising second physical ports communicatively coupled with a first set of the first physical ports via first physical links, and one or more memory devices that are communicatively coupled with the second physical ports and accessible by the processing element via the first physical links. The first IC further comprises a data structure describing a first level of port aggregation to be applied across the first set. The second IC comprises a first distribution function configured to provide ordering to data communicated using the second physical ports. The first distribution function is based on the first level of port aggregation.
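    One way to picture the distribution function is sketched below, assuming an address-based selection so that transactions to the same address always use the same physical link and therefore stay ordered; the actual function and aggregation policy are not given in the abstract.

```python
# Sketch of a distribution function for transparent port aggregation. The
# modulo-on-address choice is an assumption made so the example preserves
# per-address ordering; it is not the patent's actual function.
AGGREGATED_PORTS = [0, 1, 2, 3]  # first level of aggregation: 4 ports act as one


def select_port(address: int, ports=AGGREGATED_PORTS) -> int:
    # Same address -> same port -> transactions stay ordered on that link.
    return ports[(address >> 6) % len(ports)]  # drop the 64 B line offset


reads = [0x1000, 0x1040, 0x1000, 0x2080]
print([select_port(a) for a in reads])  # repeated 0x1000 lands on one port
```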

    Transparent port aggregation in multi-chip transport protocols

    Publication No.: US10409743B1

    Publication Date: 2019-09-10

    Application No.: US16024500

    Application Date: 2018-06-29

    Applicant: Xilinx, Inc.

    Abstract: Various implementations of a multi-chip system operable according to a predefined transport protocol are disclosed. In one embodiment, a system comprises a first IC comprising a memory controller communicatively coupled with first physical ports. The system further comprises a second IC comprising second physical ports communicatively coupled with a first set of the first physical ports via first physical links, and one or more memory devices that are communicatively coupled with the second physical ports and accessible by the memory controller via the first physical links. The first IC further comprises an identification map table describing a first level of port aggregation to be applied across the first set. The second IC comprises a first distribution function configured to provide ordering to data communicated using the second physical ports. The first distribution function is based on the first level of port aggregation.
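    A hypothetical shape for the identification map table is sketched below: a lookup from an aggregation group to the set of physical ports it spans and its aggregation level. The field names and layout are assumptions, not the patent's table format.

```python
# Hypothetical identification map table describing which physical ports are
# aggregated together and at what level. Layout and field names are invented
# for the example.
id_map_table = {
    # aggregation_group_id: {"ports": [...], "level": ...}
    0: {"ports": [0, 1], "level": 2},        # two ports aggregated as one link
    1: {"ports": [2, 3, 4, 5], "level": 4},  # four ports aggregated as one link
}


def ports_for_group(group_id: int) -> list:
    entry = id_map_table[group_id]
    assert len(entry["ports"]) == entry["level"]
    return entry["ports"]


print(ports_for_group(1))  # [2, 3, 4, 5]
```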

    Adaptive integrated programmable device platform

    Publication No.: US12261603B2

    Publication Date: 2025-03-25

    Application No.: US18320168

    Application Date: 2023-05-18

    Applicant: Xilinx, Inc.

    Abstract: A System-on-Chip includes a data processing engine array. The data processing engine array includes a plurality of data processing engines organized in a grid. The plurality of data processing engines are partitioned into at least a first partition and a second partition. The first partition includes one or more first data processing engines of the plurality of data processing engines. The second partition includes one or more second data processing engines of the plurality of data processing engines. Each partition is configured to implement an application that executes independently of the other partition.
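    A small Python sketch of the partitioning idea, with an assumed 8x4 engine grid split into two partitions that each host an independent application; the grid size, partition boundaries, and application names are invented for illustration.

```python
# Illustrative partitioning of a grid of data processing engines (DPEs) into
# two partitions that each run their own application.
GRID_COLS, GRID_ROWS = 8, 4
all_engines = [(c, r) for c in range(GRID_COLS) for r in range(GRID_ROWS)]

partitions = {
    "partition_a": [(c, r) for (c, r) in all_engines if c < 4],   # left half
    "partition_b": [(c, r) for (c, r) in all_engines if c >= 4],  # right half
}

applications = {"partition_a": "video_pipeline", "partition_b": "ml_inference"}

# Each application is configured only onto its own partition's engines, so the
# two applications execute independently of one another.
for name, engines in partitions.items():
    print(f"{applications[name]} -> {len(engines)} engines")
```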

    Spatial distribution in a 3D data processing unit

    Publication No.: US12147369B2

    Publication Date: 2024-11-19

    Application No.: US18224859

    Application Date: 2023-07-21

    Applicant: XILINX, INC.

    Inventor: Jaideep Dastidar

    Abstract: The embodiments herein describe a 3D SmartNIC that spatially distributes compute, storage, or network functions in three dimensions using a plurality of layers. That is, unlike current SmartNICs, which perform acceleration functions in a single 2D layer, a 3D SmartNIC can distribute these functions across multiple stacked layers, where each layer can communicate directly or indirectly with the other layers.
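    A hypothetical layer map for such a device is sketched below; the assignment of network, compute, and storage functions to particular layers is invented for illustration and is not taken from the abstract.

```python
# Hypothetical 3D SmartNIC layer map: functions are spatially distributed
# across stacked layers rather than a single 2D die.
layers = {
    0: ["network_mac", "packet_parser"],  # layer closest to the network ports
    1: ["crypto", "compression"],         # compute functions
    2: ["buffer_memory", "flow_tables"],  # storage functions
}


def layer_of(function: str) -> int:
    return next(layer for layer, fns in layers.items() if function in fns)


# A packet's path may cross layers; adjacent layers talk directly, others
# indirectly through an intermediate layer.
path = ["packet_parser", "crypto", "flow_tables"]
print([layer_of(f) for f in path])  # [0, 1, 2]
```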

    Multi-tenant aware data processing units

    Publication No.: US12086083B2

    Publication Date: 2024-09-10

    Application No.: US17892989

    Application Date: 2022-08-22

    Applicant: XILINX, INC.

    CPC classification number: G06F13/20 G06F2213/40

    Abstract: Embodiments herein describe creating tag bindings that can be used to assign tags to data corresponding to different tenants using a data processing unit (DPU) such as a SmartNIC, Artificial Intelligence Unit, Network Storage Unit, Database Acceleration Unit, and the like. In one embodiment, the DPUs include tag gateways at the interface between a host and network element (e.g., a switch) that recognize and tag the data corresponding to the tenants. These tags are then recognized by data processing engines (DPEs) in the DPU such as AI engines, cryptographic engines, encryption engines, Direct Memory Access (DMA) engines, and the like. These DPEs can be configured to perform tag policies that provide security isolation and performance isolation between the tenants.
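    The sketch below illustrates the tag-binding and policy idea: a gateway stamps each tenant's data with its tag, and a DPE enforces a per-tag admission policy for performance isolation. The tag values, the max_inflight policy field, and the function names are assumptions for the example.

```python
# Illustrative sketch: a tag gateway binds each tenant to a tag and stamps data
# entering the DPU; a data processing engine (DPE) then applies per-tag policies.
tag_bindings = {"tenant_a": 0x1, "tenant_b": 0x2}                # tenant -> tag
tag_policies = {0x1: {"max_inflight": 8}, 0x2: {"max_inflight": 2}}


def gateway_ingress(tenant: str, payload: bytes) -> dict:
    """Tag data at the host/network interface of the DPU."""
    return {"tag": tag_bindings[tenant], "payload": payload}


def dpe_admit(work_item: dict, inflight_by_tag: dict) -> bool:
    """A DPE enforces performance isolation by limiting in-flight work per tag."""
    tag = work_item["tag"]
    return inflight_by_tag.get(tag, 0) < tag_policies[tag]["max_inflight"]


item = gateway_ingress("tenant_b", b"request")
print(dpe_admit(item, {0x2: 2}))  # False: tenant_b has hit its in-flight limit
```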
