22. TECHNOLOGIES FOR KERNEL SCALE-OUT
    Invention Application

    Publication No.: US20190065260A1

    Publication Date: 2019-02-28

    Application No.: US15858316

    Filing Date: 2017-12-29

    Abstract: Technologies for scaling provisioning of kernel instances in a system as a function of a topology of accelerated kernels include a compute device having a compute engine. The compute engine receives, from a sled, a kernel configuration request to provision a kernel on an accelerator device. The sled is to execute a workload, and the kernel accelerates a task in the workload. The compute engine determines, as a function of one or more requirements of the workload, a topology of kernels to service the request. The topology maps data communication between kernels. The compute engine configures the kernel on the accelerator device according to the determined topology. Other embodiments are also described and claimed.
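The flow the abstract describes — receive a kernel request, derive a kernel topology from workload requirements, then configure kernels according to that topology — can be sketched as follows. This is a minimal illustration, not the patent's implementation; every name (`KernelRequest`, `Topology`, `determine_topology`, the `parallelism` requirement) is an assumption for the example.

```python
from dataclasses import dataclass, field

@dataclass
class KernelRequest:
    """Hypothetical kernel configuration request sent by a sled."""
    sled_id: str
    task: str
    requirements: dict  # e.g. {"parallelism": 3}

@dataclass
class Topology:
    # Maps each kernel instance to the downstream kernels it sends data to.
    edges: dict = field(default_factory=dict)

def determine_topology(req: KernelRequest) -> Topology:
    """Scale out: one kernel instance per requested unit of parallelism,
    chained so each instance forwards its output to the next."""
    n = req.requirements.get("parallelism", 1)
    kernels = [f"{req.task}-{i}" for i in range(n)]
    edges = {kernels[i]: [kernels[i + 1]] for i in range(n - 1)}
    edges[kernels[-1]] = []  # last kernel returns results to the sled
    return Topology(edges=edges)

def configure(req: KernelRequest) -> Topology:
    topo = determine_topology(req)
    for kernel, downstream in topo.edges.items():
        # A real system would program an accelerator device here
        # (e.g. load a bitstream into an FPGA slot) and wire its data paths.
        print(f"provision {kernel} -> {downstream}")
    return topo
```

A pipeline is only one possible topology; the same request/topology split would let the compute engine emit, say, a fan-out tree instead, without changing the sled-facing interface.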

    23. TECHNOLOGIES FOR MIGRATING VIRTUAL MACHINES
    Invention Application

    Publication No.: US20190065231A1

    Publication Date: 2019-02-28

    Application No.: US15859388

    Filing Date: 2017-12-30

    Abstract: Technologies for migrating virtual machines (VMs) include a plurality of compute sleds and a memory sled, each communicatively coupled to a resource manager server. The resource manager server is configured to identify a compute sled for a VM instance, allocate a first set of resources of the identified compute sled for the VM instance, associate a region of memory in a memory pool of a memory sled with the compute sled, and create the VM instance on the compute sled. The resource manager server is further configured to migrate the VM instance to another compute sled, associate the region of memory in the memory pool with the other compute sled, and start up the VM instance on the other compute sled. Other embodiments are described herein.
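The key point of the abstract is that the VM's memory lives in a pooled memory sled, so migration re-associates the memory region with the destination sled rather than copying guest memory. A minimal sketch of that bookkeeping, with all names (`ResourceManager`, `create_vm`, `migrate_vm`) assumed for illustration:

```python
class ResourceManager:
    """Hypothetical resource manager tracking which compute sled owns
    each pooled memory region and where each VM instance runs."""

    def __init__(self):
        self.region_owner = {}   # memory region -> compute sled
        self.vm_placement = {}   # VM instance  -> compute sled

    def create_vm(self, vm, sled, region):
        # Associate the pooled memory region with the chosen sled,
        # then create the instance there.
        self.region_owner[region] = sled
        self.vm_placement[vm] = sled

    def migrate_vm(self, vm, new_sled, region):
        # No bulk memory copy: only the region-to-sled association
        # changes, then the instance starts up on the new sled.
        self.region_owner[region] = new_sled
        self.vm_placement[vm] = new_sled
```

Because the region mapping is the only state that moves, migration cost is independent of the VM's memory footprint in this model.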

    Proactive data prefetch with applied quality of service

    Publication No.: US11573900B2

    Publication Date: 2023-02-07

    Application No.: US16568048

    Filing Date: 2019-09-11

    Abstract: Examples described herein relate to prefetching content from a remote memory device to a memory tier local to a higher-level cache or memory. An application or device can indicate a time by which data must be available in the higher-level cache or memory. A prefetcher used by a network interface can allocate resources in any intermediary network device in the data path from the remote memory device to the memory tier local to the higher-level cache. Memory access bandwidth, egress bandwidth, and memory space in any intermediary network device can be allocated for prefetch of content. In some examples, proactive prefetch can occur for content expected to be prefetched but not yet requested.
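The abstract's deadline-driven reservation can be illustrated with a small sketch: given a data size and a deadline, derive the bandwidth needed, check every hop on the path from the remote memory device, and reserve that bandwidth on each hop only if all of them can meet the QoS target. The classes, the linear bandwidth model, and all names here are assumptions for the example, not the patent's interface.

```python
class Hop:
    """One intermediary device on the path (e.g. a switch)."""
    def __init__(self, name, egress_gbps):
        self.name = name
        self.free_gbps = egress_gbps  # unreserved egress bandwidth

class Prefetcher:
    def __init__(self, path):
        # Ordered hops: remote memory device -> ... -> local memory tier.
        self.path = path

    def prefetch(self, size_gb, deadline_s):
        """Reserve bandwidth so size_gb arrives within deadline_s,
        or refuse if any hop cannot meet the QoS target."""
        need_gbps = (size_gb * 8) / deadline_s  # GB -> Gb, over the deadline
        if any(hop.free_gbps < need_gbps for hop in self.path):
            return False  # admission control: some hop is oversubscribed
        for hop in self.path:
            hop.free_gbps -= need_gbps  # reserve along the whole path
        return True
```

Checking every hop before reserving on any of them is an all-or-nothing admission decision, which avoids stranding partial reservations on upstream devices when a downstream hop is the bottleneck.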
