REMOTE STORAGE FOR HARDWARE MICROSERVICES HOSTED ON XPUS AND SOC-XPU PLATFORMS

    Publication Number: US20220113911A1

    Publication Date: 2022-04-14

    Application Number: US17558268

    Filing Date: 2021-12-21

    Abstract: Methods, apparatus, and software for remote storage of hardware microservices hosted on other processing units (XPUs) and SOC-XPU platforms. The apparatus may be a platform including a System on Chip (SOC) and an XPU, such as a Field Programmable Gate Array (FPGA). Software, via execution on the SOC, enables the platform to pre-provision storage space on a remote storage node and assign the storage space to the platform, wherein the pre-provisioned storage space includes one or more container images to be implemented as one or more hardware (HW) microservice front-ends. The XPU/FPGA is configured to implement one or more accelerator functions used to accelerate HW microservice backend operations that are offloaded from the one or more HW microservice front-ends. The platform is also configured to pre-provision a remote storage volume containing worker node components, and to access and persistently store those worker node components.
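The pre-provisioning flow the abstract describes can be sketched as follows. This is a minimal illustration only; the class and method names (`RemoteStorageNode`, `SocXpuPlatform`, `pre_provision`, etc.) and the image name are assumptions, not identifiers from the patent.

```python
# Hypothetical sketch: a remote storage node pre-provisions a volume seeded
# with HW-microservice front-end container images, assigns it to a platform,
# and the SOC-side software on the platform mounts and reads it.

class RemoteStorageNode:
    """Remote storage node that hands out pre-provisioned volumes."""
    def __init__(self):
        self.volumes = {}

    def pre_provision(self, volume_id, container_images):
        # Reserve storage space and seed it with container images
        # before the platform requests them.
        self.volumes[volume_id] = {
            "images": list(container_images),
            "assigned_to": None,
        }
        return volume_id

    def assign(self, volume_id, platform_id):
        # Bind the pre-provisioned space to one SOC-XPU platform.
        self.volumes[volume_id]["assigned_to"] = platform_id


class SocXpuPlatform:
    """SOC-XPU platform whose SOC-side software mounts its assigned volume."""
    def __init__(self, platform_id):
        self.platform_id = platform_id
        self.mounted = None

    def mount(self, node, volume_id):
        # Access the remote volume; the returned images would be launched
        # as HW-microservice front-ends on the SOC.
        self.mounted = node.volumes[volume_id]
        return self.mounted["images"]


node = RemoteStorageNode()
vid = node.pre_provision("vol-0", ["hw-msvc-frontend:1.0"])
node.assign(vid, "platform-A")
platform = SocXpuPlatform("platform-A")
images = platform.mount(node, vid)
```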

    TECHNOLOGIES FOR HARDWARE MICROSERVICES ACCELERATED IN XPU

    Publication Number: US20230185760A1

    Publication Date: 2023-06-15

    Application Number: US17549727

    Filing Date: 2021-12-13

    CPC classification number: G06F15/7889 G06F15/7821 G06F15/7871 G06F2015/768

    Abstract: Methods, apparatus, and software for hardware microservices accelerated in other processing units (XPUs). The apparatus may be a platform including a System on Chip (SOC) and an XPU, such as a Field Programmable Gate Array (FPGA). The FPGA is configured to implement one or more Hardware (HW) accelerator functions associated with HW microservices. Execution of microservices is split between a software front-end that executes on the SOC and a hardware backend comprising the HW accelerator functions. The software front-end offloads a portion of a microservice and/or associated workload to the HW microservice backend implemented by the accelerator functions. An XPU or FPGA proxy is used to provide the microservice front-ends with shared access to HW accelerator functions, and schedules/multiplexes access to the HW accelerator functions using, e.g., telemetry data generated by the microservice front-ends and/or the HW accelerator functions. The platform may be an infrastructure processing unit (IPU) configured to accelerate infrastructure operations.
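The proxy role described above can be sketched in a few lines. This is an assumption-laden illustration: the `XpuProxy` name, the "outstanding offloads" telemetry signal, and the least-loaded scheduling policy are all invented for the example, not taken from the patent.

```python
# Hypothetical sketch: an XPU/FPGA proxy gives multiple microservice
# front-ends shared access to HW accelerator functions, multiplexing
# requests using a simple telemetry signal (in-flight offloads per function).

class XpuProxy:
    def __init__(self, accel_fns):
        self.accel_fns = accel_fns                  # shared accelerator functions
        self.inflight = {n: 0 for n in accel_fns}   # telemetry: current load

    def offload(self, payload):
        # Schedule: route the request to the least-loaded accelerator function.
        name = min(self.inflight, key=self.inflight.get)
        self.inflight[name] += 1
        try:
            return name, self.accel_fns[name](payload)
        finally:
            self.inflight[name] -= 1


# Two instances of the same accelerator function behind one proxy.
proxy = XpuProxy({"accel0": lambda d: d * 2, "accel1": lambda d: d * 2})
which, result = proxy.offload(21)
```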

    PACKET BASED IN-LINE PROCESSING FOR DATA CENTER ENVIRONMENTS

    Publication Number: US20230344894A1

    Publication Date: 2023-10-26

    Application Number: US18216524

    Filing Date: 2023-06-29

    CPC classification number: H04L67/025

    Abstract: An apparatus is described. The apparatus includes a host side interface to couple to one or more central processing units (CPUs) that support multiple microservice endpoints. The apparatus includes a network interface to receive from a network a packet having multiple frames that belong to different streams, the multiple frames formatted according to a text transfer protocol. The apparatus includes circuitry to: process the frames according to the text transfer protocol and build content of a microservice function call embedded within a message that one of the frames transports; and execute the microservice function call.
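The in-line path above can be illustrated with a toy demultiplexer. The frame layout (a `(stream_id, end_of_message, payload)` tuple), the `method:argument` message encoding, and the `echo` endpoint are all hypothetical stand-ins for the text transfer protocol (e.g., HTTP/2-style frames and streams) and the CPU-hosted endpoints the abstract refers to.

```python
# Hypothetical sketch: frames arriving in one packet are grouped by stream,
# reassembled into a message, and the embedded microservice function call
# is built and executed.

from collections import defaultdict

# Stand-in for microservice endpoints supported by the host CPUs.
ENDPOINTS = {"echo": lambda arg: arg}

def process_packet(frames):
    """frames: iterable of (stream_id, end_of_message, payload) tuples."""
    streams = defaultdict(list)   # per-stream reassembly buffers
    results = {}
    for stream_id, end, payload in frames:
        streams[stream_id].append(payload)
        if end:
            message = "".join(streams.pop(stream_id))
            method, _, arg = message.partition(":")   # build the function call
            results[stream_id] = ENDPOINTS[method](arg)  # execute it
    return results
```

A usage example with interleaved frames from two streams in a single packet:

```python
process_packet([
    (1, False, "echo:he"),
    (3, False, "echo:wor"),
    (1, True, "llo"),
    (3, True, "ld"),
])
```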

    EXTENDED INTER-KERNEL COMMUNICATION PROTOCOL FOR THE REGISTER SPACE ACCESS OF THE ENTIRE FPGA POOL IN NON-STAR MODE

    Publication Number: US20220382944A1

    Publication Date: 2022-12-01

    Application Number: US17327210

    Filing Date: 2021-05-21

    Abstract: Methods and apparatus for an extended inter-kernel communication protocol for discovery of accelerator pools configured in a non-star mode. Under a discovery algorithm, discovery requests are sent from a root node to non-root nodes in the accelerator pool using an inter-kernel communication protocol comprising a data transmission protocol built over a Media Access Control (MAC) layer and transported over links coupled between IO ports on accelerators. The discovery requests are used to discover each of the nodes in the accelerator pool and determine the topology of the nodes. During this process, MAC address table entries are generated at the various nodes comprising (key, value) pairs of MAC IO port addresses identifying the destination nodes that may be reached by each node and the shortest path to reach those destination nodes. The discovery algorithm may also be used to discover storage related information for the accelerators. The accelerators may comprise FPGAs or other processing units, such as GPUs and Vector Processing Units (VPUs).
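The discovery pass from the root node can be sketched as a breadth-first traversal that fills in the root's MAC address table. This is only an illustration: the topology, the MAC address strings, and the representation of a table entry as a `destination -> shortest path` pair are assumptions; the patent's protocol runs over actual IO-port links rather than an in-memory adjacency map.

```python
# Hypothetical sketch: flood discovery requests from the root over
# inter-accelerator links (non-star topology) and record, for each reachable
# destination, a (key, value) MAC-table entry of destination -> shortest path.

from collections import deque

def discover(root, links):
    """links: {mac: [neighbor_mac, ...]} describing IO-port connectivity.

    Returns the root's MAC address table: {dest_mac: shortest_path}.
    """
    table = {}
    queue = deque([(root, [root])])
    seen = {root}
    while queue:
        mac, path = queue.popleft()
        if mac != root:
            table[mac] = path  # (key, value) pair for the MAC address table
        for nbr in links.get(mac, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, path + [nbr]))
    return table


# Non-star example: "dd" is only reachable from the root "aa" through "bb".
links = {"aa": ["bb", "cc"], "bb": ["aa", "dd"], "cc": ["aa"], "dd": ["bb"]}
table = discover("aa", links)
```

Because the traversal is breadth-first, the first path recorded for each destination is a shortest path in hops, matching the shortest-path property the abstract attributes to the table entries.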
