ADVANCED INTERLEAVING TECHNIQUES FOR FABRIC BASED POOLING ARCHITECTURES

    Publication Number: US20220222010A1

    Publication Date: 2022-07-14

    Application Number: US17710657

    Filing Date: 2022-03-31

    Abstract: Methods and apparatus for advanced interleaving techniques for fabric-based pooling architectures. The method is implemented in an environment including a switch connected to host servers and to pooled memory nodes or memory servers that host memory pools. Memory is interleaved across the memory pools using interleaving units, with the interleaved memory mapped into a global memory address space. Applications running on the host servers access data stored in the memory pools via memory read and write requests that specify address endpoints within the global memory address space. The switch generates multicast or multiple unicast messages associated with the memory read and write requests and sends them to the pooled memory nodes or memory servers. For memory reads, the data returned from multiple memory pools is aggregated at the switch and returned to the application as a single response in one or more packets.
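
    For illustration, a minimal Python sketch of how such interleaving could work: a deterministic mapping from a global address to a (pool, local offset) pair lets a switch split one application read into per-pool requests and aggregate the replies. The 4 KiB interleaving unit, the pool count, and all names are assumptions for illustration, not details taken from the patent.

        # Minimal sketch (assumed parameters): round-robin interleaving of a
        # global address space across pooled memory nodes behind a switch.
        INTERLEAVE_UNIT = 4096  # bytes per interleaving unit (assumption)
        NUM_POOLS = 4           # pooled memory nodes (assumption)

        def global_to_pool(addr: int) -> tuple[int, int]:
            """Map a global address to (pool index, local byte offset)."""
            unit = addr // INTERLEAVE_UNIT
            pool = unit % NUM_POOLS            # round-robin across pools
            local_unit = unit // NUM_POOLS     # unit index within that pool
            return pool, local_unit * INTERLEAVE_UNIT + addr % INTERLEAVE_UNIT

        def split_read(addr: int, length: int) -> dict[int, list[tuple[int, int]]]:
            """Split one application read into per-pool (offset, length)
            requests, as a switch would when emitting multiple unicast
            messages; the per-pool replies would then be aggregated into
            a single response to the application."""
            requests: dict[int, list[tuple[int, int]]] = {}
            while length > 0:
                pool, offset = global_to_pool(addr)
                span = min(length, INTERLEAVE_UNIT - addr % INTERLEAVE_UNIT)
                requests.setdefault(pool, []).append((offset, span))
                addr, length = addr + span, length - span
            return requests

        # A 10 KiB read starting at 0x1800 fans out to three pools.
        print(split_read(0x1800, 10 * 1024))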

    DYNAMIC LOAD BALANCING FOR POOLED MEMORY

    Publication Number: US20220197819A1

    Publication Date: 2022-06-23

    Application Number: US17691743

    Filing Date: 2022-03-10

    Abstract: Examples described herein relate to a memory controller that allocates an address range for a process among multiple memory pools based on service level parameters associated with the address range and on the performance capabilities of the multiple memory pools. In some examples, the service level parameters include one or more of: latency, network bandwidth, amount of memory allocation, memory bandwidth, data encryption use, type of encryption to apply to stored data, use of data encryption to transport data to a requester, memory technology, and/or durability of a memory device.
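
    A minimal sketch of the kind of placement decision described: filter the pools by the hard service level parameters, then rank the survivors. The field names and the filter-then-rank policy are illustrative assumptions, not the claimed controller logic.

        from dataclasses import dataclass

        @dataclass
        class PoolCaps:                 # per-pool performance capabilities
            name: str
            latency_ns: int
            bandwidth_gbs: float
            encrypts_at_rest: bool
            free_bytes: int

        @dataclass
        class RangeSLA:                 # service level parameters of a range
            max_latency_ns: int
            min_bandwidth_gbs: float
            needs_encryption: bool
            size_bytes: int

        def pick_pool(sla: RangeSLA, pools: list[PoolCaps]) -> PoolCaps:
            # Keep only pools that satisfy every hard requirement...
            ok = [p for p in pools
                  if p.latency_ns <= sla.max_latency_ns
                  and p.bandwidth_gbs >= sla.min_bandwidth_gbs
                  and (p.encrypts_at_rest or not sla.needs_encryption)
                  and p.free_bytes >= sla.size_bytes]
            if not ok:
                raise RuntimeError("no pool satisfies the SLA")
            # ...then pick the slowest qualifying pool, keeping faster
            # pools free for more demanding address ranges.
            return max(ok, key=lambda p: p.latency_ns)

        pools = [PoolCaps("local-DDR", 100, 40.0, True, 1 << 34),
                 PoolCaps("far-pool", 400, 20.0, True, 1 << 38)]
        print(pick_pool(RangeSLA(500, 10.0, True, 1 << 30), pools).name)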

    TECHNOLOGIES FOR PRE-CONFIGURING ACCELERATORS BY PREDICTING BIT-STREAMS

    Publication Number: US20210334138A1

    Publication Date: 2021-10-28

    Application Number: US17365898

    Filing Date: 2021-07-01

    Abstract: Technologies for pre-configuring accelerators by predicting bit-streams include communication circuitry and a compute device. The compute device includes a compute engine to determine one or more bit-streams registered on each accelerator of multiple accelerators. The compute engine is further to predict a next job to be requested for acceleration from an application of at least one compute sled of multiple compute sleds, predict a bit-stream from a bit-stream library that is to execute the predicted next job, and determine whether the predicted bit-stream is already registered on one of the accelerators. In response to a determination that the predicted bit-stream is not registered on one of the accelerators, the compute engine is to select an accelerator from the multiple accelerators that satisfies the characteristics of the predicted bit-stream and register the predicted bit-stream on the selected accelerator.
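
    A minimal sketch of the predict/check/register flow, assuming a toy frequency-based job predictor and a hypothetical bit-stream library keyed by job type; none of the names, capacity model, or prediction policy come from the patent.

        from dataclasses import dataclass, field

        @dataclass
        class Accelerator:
            name: str
            free_slices: int                  # reconfigurable capacity (assumed)
            registered: set[str] = field(default_factory=set)

        # Hypothetical library: job type -> (bit-stream id, slices needed)
        BITSTREAM_LIBRARY = {"crypto": ("bs-aes", 2), "compress": ("bs-lz4", 1)}

        def predict_next_job(history: list[str]) -> str:
            """Toy predictor: assume the most frequent recent job repeats."""
            return max(set(history), key=history.count)

        def preconfigure(history: list[str], accels: list[Accelerator]) -> str:
            job = predict_next_job(history)
            bitstream, needed = BITSTREAM_LIBRARY[job]
            # Already registered on some accelerator? Nothing to do.
            for a in accels:
                if bitstream in a.registered:
                    return f"{bitstream} already on {a.name}"
            # Otherwise register it on an accelerator that can satisfy
            # the bit-stream's capacity characteristics.
            for a in accels:
                if a.free_slices >= needed:
                    a.registered.add(bitstream)
                    a.free_slices -= needed
                    return f"registered {bitstream} on {a.name}"
            raise RuntimeError("no accelerator fits the predicted bit-stream")

        accels = [Accelerator("fpga0", 4), Accelerator("fpga1", 2)]
        print(preconfigure(["compress", "crypto", "crypto"], accels))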

    RESOURCE MANAGEMENT FOR COMPONENTS OF A VIRTUALIZED EXECUTION ENVIRONMENT

    Publication Number: US20210258265A1

    Publication Date: 2021-08-19

    Application Number: US17169073

    Filing Date: 2021-02-05

    Abstract: Examples described herein relate to at least one processor that is to perform a command to build a container using multiple routines and to allocate resources to at least one routine based on a service level agreement (SLA) specification associated with each such routine. In some examples, the container is compatible with one or more of: Docker containers, Rkt containers, LXD containers, OpenVZ containers, Linux-VServer, Windows Containers, Hyper-V Containers, unikernels, or Java containers. In some examples, a service level is to specify one or more of: time to completion of a routine or resource allocation to the routine. In some examples, the resources include one or more of: cache allocation, memory allocation, memory bandwidth, network interface bandwidth, or accelerator allocation.
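
    A minimal sketch of deriving a per-routine resource allocation from its SLA, assuming a deadline-driven heuristic in which CPU shares are sized so the estimated work completes before the target time to completion; the heuristic, the scaling factors, and all field names are invented for illustration.

        from dataclasses import dataclass

        @dataclass
        class RoutineSLA:
            name: str
            deadline_s: float     # target time to completion of the routine
            work_cpu_s: float     # estimated CPU-seconds the routine needs

        @dataclass
        class Allocation:
            cpu_shares: float     # fraction of a core (or number of cores)
            cache_mb: int
            mem_bw_gbs: float

        def allocate(sla: RoutineSLA) -> Allocation:
            # Size CPU shares so the estimated work completes before the
            # deadline; cache and memory bandwidth scale with the shares
            # (both scaling factors are invented for this sketch).
            shares = sla.work_cpu_s / sla.deadline_s
            return Allocation(cpu_shares=shares,
                              cache_mb=max(1, int(8 * shares)),
                              mem_bw_gbs=2.0 * shares)

        for routine in [RoutineSLA("fetch-layers", 10.0, 2.0),
                        RoutineSLA("compile", 30.0, 90.0)]:
            print(routine.name, allocate(routine))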

    PROACTIVE DATA PREFETCH WITH APPLIED QUALITY OF SERVICE

    Publication Number: US20200004685A1

    Publication Date: 2020-01-02

    Application Number: US16568048

    Filing Date: 2019-09-11

    Abstract: Examples described herein relate to prefetching content from a remote memory device into a memory tier local to a higher-level cache or memory. An application or device can indicate a time by which data is to be available in the higher-level cache or memory. A prefetcher used by a network interface can allocate resources in any intermediary network device on the data path from the remote memory device to the local memory tier. Memory access bandwidth, egress bandwidth, and memory space in any intermediary network device can be allocated for prefetching content. In some examples, proactive prefetch can occur for content that is expected to be needed but has not been explicitly requested for prefetch.
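
    A minimal sketch of admission along the data path: the prefetch carries the application's time availability, and each intermediary hop must reserve enough bandwidth to land the data by the deadline. The hop names and the reservation policy are assumptions; a real system would enforce this in NIC and switch hardware, not in Python.

        from dataclasses import dataclass

        @dataclass
        class Hop:                    # an intermediary device on the data path
            name: str
            free_gbs: float           # uncommitted egress bandwidth

        def reserve_path(hops: list[Hop], size_gb: float, deadline_s: float) -> float:
            """Reserve just enough bandwidth at every hop to deliver the
            prefetched data before the deadline; raise if any hop cannot
            sustain the required rate."""
            rate = size_gb / deadline_s
            for hop in hops:
                if hop.free_gbs < rate:
                    raise RuntimeError(f"{hop.name} cannot sustain {rate:.2f} GB/s")
                hop.free_gbs -= rate  # admit the prefetch at this hop
            return rate

        path = [Hop("remote-node", 12.0), Hop("tor-switch", 8.0), Hop("local-nic", 10.0)]
        print(f"reserved {reserve_path(path, 4.0, 1.0):.1f} GB/s per hop")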
