ADVANCED INTERLEAVING TECHNIQUES FOR FABRIC BASED POOLING ARCHITECTURES

    Publication Number: US20220222010A1

    Publication Date: 2022-07-14

    Application Number: US17710657

    Application Date: 2022-03-31

    Abstract: Methods and apparatus for advanced interleaving techniques for fabric-based pooling architectures. The method is implemented in an environment in which a switch is connected to host servers and to pooled memory nodes or memory servers hosting memory pools. Memory is interleaved across the memory pools using interleaving units, with the interleaved memory mapped into a global memory address space. Applications running on the host servers access data stored in the memory pools via memory read and write requests that specify address endpoints within the global memory address space. The switch generates multicast or multiple unicast messages associated with the memory read and write requests and sends them to the pooled memory nodes or memory servers. For memory reads, the data returned from multiple memory pools is aggregated at the switch and returned to the application in one or more packets as a single response.
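
    The interleaving arithmetic implied by this abstract can be pictured with a short sketch. The Python below is a minimal illustration, assuming a fixed interleaving unit and round-robin placement across a fixed number of pools; the unit size, pool count, and function names are hypothetical, not taken from the patent.

        # Minimal sketch: round-robin interleaving of a global address space
        # across memory pools. INTERLEAVE_UNIT and NUM_POOLS are assumptions.
        INTERLEAVE_UNIT = 4096   # bytes per interleaving unit (assumed)
        NUM_POOLS = 4            # pooled memory nodes behind the switch (assumed)

        def locate(global_addr: int) -> tuple[int, int]:
            """Map a global-address-space byte address to (pool_id, local_offset)."""
            unit = global_addr // INTERLEAVE_UNIT
            pool_id = unit % NUM_POOLS            # round-robin across pools
            local_unit = unit // NUM_POOLS        # unit index within that pool
            local_off = local_unit * INTERLEAVE_UNIT + global_addr % INTERLEAVE_UNIT
            return pool_id, local_off

        def fan_out(global_addr: int, length: int) -> dict[int, list[tuple[int, int]]]:
            """Split one read/write into the per-pool (offset, length) requests
            the switch would issue as multicast or multiple unicast messages."""
            requests: dict[int, list[tuple[int, int]]] = {}
            while length > 0:
                pool_id, offset = locate(global_addr)
                chunk = min(length, INTERLEAVE_UNIT - offset % INTERLEAVE_UNIT)
                requests.setdefault(pool_id, []).append((offset, chunk))
                global_addr += chunk
                length -= chunk
            return requests

        # A 10 KiB read starting mid-unit touches three pools; the switch would
        # aggregate the three responses into a single reply to the application.
        print(fan_out(2048, 10 * 1024))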

    REMOTE STORAGE FOR HARDWARE MICROSERVICES HOSTED ON XPUS AND SOC-XPU PLATFORMS

    Publication Number: US20220113911A1

    Publication Date: 2022-04-14

    Application Number: US17558268

    Application Date: 2021-12-21

    Abstract: Methods, apparatus, and software for remote storage of hardware microservices hosted on other processing units (XPUs) and SOC-XPU platforms. The apparatus may be a platform including a System on Chip (SOC) and an XPU, such as a Field Programmable Gate Array (FPGA). Software executing on the SOC enables the platform to pre-provision storage space on a remote storage node and assign the storage space to the platform, wherein the pre-provisioned storage space includes one or more container images to be implemented as one or more hardware (HW) microservice front-ends. The XPU/FPGA is configured to implement one or more accelerator functions used to accelerate HW microservice back-end operations offloaded from the HW microservice front-ends. The platform is also configured to pre-provision a remote storage volume containing worker node components and to access and persistently store those components.
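
    As a rough illustration of the pre-provisioning flow described above, the Python sketch below models a remote storage node on which a volume holding container images and worker node components is provisioned before the platform consumes it. The StorageNode/Volume classes and method names are hypothetical placeholders, not an API from the patent.

        # Minimal sketch of pre-provisioning a remote volume; all names are
        # illustrative assumptions, not the patent's actual interfaces.
        from dataclasses import dataclass, field

        @dataclass
        class Volume:
            name: str
            contents: list[str] = field(default_factory=list)

        @dataclass
        class StorageNode:
            volumes: dict[str, Volume] = field(default_factory=dict)

            def create_volume(self, name: str) -> Volume:
                self.volumes[name] = Volume(name)
                return self.volumes[name]

        def pre_provision(node: StorageNode, platform_id: str) -> Volume:
            """Provision a volume on the remote node before the platform boots:
            container images become HW microservice front-ends; worker node
            components are stored persistently and fetched when attached."""
            vol = node.create_volume(f"{platform_id}-boot")
            vol.contents += ["hw-microservice-frontend:latest",   # container image
                             "worker-node-components.tar"]        # persisted components
            return vol

        node = StorageNode()
        print(pre_provision(node, "soc-xpu-01").contents)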

    INTELLIGENT RESOURCE SELECTION FOR RECEIVED CONTENT

    Publication Number: US20200259763A1

    Publication Date: 2020-08-13

    Application Number: US16859792

    Application Date: 2020-04-27

    Abstract: Examples described herein relate to a device configured to allocate memory resources for packets received by a network interface based on received configuration settings. In some examples, the device is a network interface. Received configuration settings can include one or more of: latency, memory bandwidth, timing of when the content is expected to be accessed, or encryption parameters. In some examples, memory resources include one or more of: a cache, a volatile memory device, a storage device, or persistent memory. In some examples, when the configuration settings cannot be satisfied, the network interface is to perform one or more of: drop a received packet, store the received packet in a buffer that does not meet the configuration settings, or indicate an error. In some examples, configuration settings are conditional, whereby the settings are applied only if one or more conditions are met.
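
    The selection logic this abstract describes can be sketched as a tier-matching function with a fallback policy. In the Python below, the tier latency/bandwidth figures, the flow requirements, and the fallback names are illustrative assumptions.

        # Minimal sketch: pick a memory resource for a received packet based on
        # configuration settings; numbers and policy names are assumptions.
        TIERS = {                 # resource -> (latency_ns, bandwidth_GBps)
            "cache":   (10,    400),
            "dram":    (80,    100),
            "pmem":    (300,    40),
            "storage": (10000,   3),
        }

        def select_tier(max_latency_ns: int, min_bw_gbps: int) -> str | None:
            """Return the first memory resource meeting the configuration settings."""
            for name, (lat, bw) in TIERS.items():
                if lat <= max_latency_ns and bw >= min_bw_gbps:
                    return name
            return None

        def place_packet(pkt: bytes, max_latency_ns: int, min_bw_gbps: int,
                         fallback: str = "buffer") -> str:
            tier = select_tier(max_latency_ns, min_bw_gbps)
            if tier is not None:
                return f"stored in {tier}"
            # Settings unavailable: drop, store in a non-conforming buffer, or error.
            if fallback == "drop":
                return "dropped"
            if fallback == "error":
                raise RuntimeError("no memory resource satisfies configuration")
            return "stored in fallback buffer (does not meet settings)"

        print(place_packet(b"payload", max_latency_ns=100, min_bw_gbps=50))  # cache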

    SCALABLE AND ACCELERATED FUNCTION AS A SERVICE CALLING ARCHITECTURE

    Publication Number: US20200226009A1

    Publication Date: 2020-07-16

    Application Number: US16836650

    Application Date: 2020-03-31

    Abstract: Examples described herein relate to requesting execution of a workload by a next function, with the data transport overhead tailored based on memory sharing capability with the next function. In some examples, the data transport overhead is one or more of: sending a memory address pointer, sending a virtual memory address pointer, or sending the data to the next function. In some examples, the memory sharing capability with the next function is based on one or more of: whether the next function shares an enclave with the sender function, shares a physical memory domain with the sender function, or shares a virtual memory domain with the sender function. In some examples, selection of the next function from among multiple instances of the next function is based on one or more of: sharing of memory domain, throughput performance, latency, cost, load balancing, or service level agreement (SLA) requirements.
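
    The transport decision can be pictured as a small dispatch on the sharing level. In the Python sketch below, the Sharing enum and the three transport kinds are illustrative assumptions about how such a policy might be coded, not the patent's exact mechanism.

        # Minimal sketch: choose the cheapest data transport for a call to the
        # next function, given its memory sharing level with the sender.
        from enum import Enum, auto

        class Sharing(Enum):
            ENCLAVE = auto()      # shares an enclave with the sender
            PHYSICAL = auto()     # shares a physical memory domain
            VIRTUAL = auto()      # shares a virtual memory domain
            NONE = auto()         # no sharing: the data itself must be sent

        def transport(sharing: Sharing, addr: int, data: bytes) -> tuple[str, object]:
            """Pick the lowest-overhead transport the sharing level allows."""
            if sharing in (Sharing.ENCLAVE, Sharing.PHYSICAL):
                return ("memory_pointer", addr)      # pass a raw address
            if sharing is Sharing.VIRTUAL:
                return ("virtual_pointer", addr)     # pass a virtual address
            return ("copy", data)                    # highest overhead: ship bytes

        print(transport(Sharing.PHYSICAL, 0x7f00_0000_1000, b"workload"))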

    TECHNOLOGIES FOR HARDWARE MICROSERVICES ACCELERATED IN XPU

    Publication Number: US20230185760A1

    Publication Date: 2023-06-15

    Application Number: US17549727

    Application Date: 2021-12-13

    CPC classification number: G06F15/7889 G06F15/7821 G06F15/7871 G06F2015/768

    Abstract: Methods, apparatus, and software for hardware microservices accelerated in other processing units (XPUs). The apparatus may be a platform including a System on Chip (SOC) and an XPU, such as a Field Programmable Gate Array (FPGA). The FPGA is configured to implement one or more hardware (HW) accelerator functions associated with HW microservices. Execution of a microservice is split between a software front-end that executes on the SOC and a hardware back-end comprising the HW accelerator functions. The software front-end offloads a portion of the microservice and/or an associated workload to the HW microservice back-end implemented by the accelerator functions. An XPU or FPGA proxy provides the microservice front-ends with shared access to the HW accelerator functions and schedules/multiplexes access to them using, e.g., telemetry data generated by the microservice front-ends and/or the HW accelerator functions. The platform may be an infrastructure processing unit (IPU) configured to accelerate infrastructure operations.
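
    The proxy's scheduling/multiplexing role can be sketched as a least-loaded dispatcher driven by a telemetry counter. In the Python below, the queue-depth heuristic, class names, and accelerator identifiers are illustrative assumptions.

        # Minimal sketch: an XPU/FPGA proxy multiplexing front-end requests onto
        # shared accelerator functions, routing by a telemetry-derived load score.
        import heapq
        from dataclasses import dataclass, field

        @dataclass(order=True)
        class Accelerator:
            load: int                     # telemetry: outstanding offloaded ops
            name: str = field(compare=False)

        class XpuProxy:
            def __init__(self, accels: list[str]):
                self._heap = [Accelerator(0, a) for a in accels]
                heapq.heapify(self._heap)

            def offload(self, frontend: str, work: str) -> str:
                """Route a front-end's back-end work to the least-loaded accelerator."""
                acc = heapq.heappop(self._heap)
                acc.load += 1             # telemetry update on dispatch
                heapq.heappush(self._heap, acc)
                return f"{frontend}/{work} -> {acc.name} (load={acc.load})"

        proxy = XpuProxy(["fpga-afu-0", "fpga-afu-1"])
        for i in range(3):
            print(proxy.offload("microservice-frontend", f"op{i}"))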
