Abstract:
A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.”
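For illustration only, the sketch below models the described near/far arrangement as a small write-back, LRU "near memory" cache in front of a larger "far memory" store. The class names (NearMemory, FarMemory), the dictionary-backed storage, and the capacity of four lines are assumptions for the sketch, not details from the abstract.

```python
# Minimal sketch of a two-tier memory hierarchy: a small, fast "near memory"
# (e.g., DRAM) caching a larger, slower "far memory" (e.g., PCMS).
# All class and method names are illustrative, not taken from the abstract.
from collections import OrderedDict


class FarMemory:
    """Large, slower tier backed by non-volatile media (modeled as a dict)."""

    def __init__(self):
        self._cells = {}

    def read(self, addr):
        return self._cells.get(addr, 0)

    def write(self, addr, value):
        self._cells[addr] = value


class NearMemory:
    """Small, fast tier acting as a write-back cache in front of far memory."""

    def __init__(self, far, capacity=4):
        self.far = far
        self.capacity = capacity
        self._lines = OrderedDict()  # addr -> (value, dirty)

    def read(self, addr):
        if addr in self._lines:
            self._lines.move_to_end(addr)        # hit: refresh LRU order
            return self._lines[addr][0]
        value = self.far.read(addr)              # miss: fill from far memory
        self._insert(addr, value, dirty=False)
        return value

    def write(self, addr, value):
        self._insert(addr, value, dirty=True)    # write-back: defer far write

    def _insert(self, addr, value, dirty):
        if addr in self._lines:
            dirty = dirty or self._lines[addr][1]
        self._lines[addr] = (value, dirty)
        self._lines.move_to_end(addr)
        if len(self._lines) > self.capacity:     # evict the LRU line
            old_addr, (old_val, old_dirty) = self._lines.popitem(last=False)
            if old_dirty:
                self.far.write(old_addr, old_val)


near = NearMemory(FarMemory())
near.write(0x100, 42)
print(near.read(0x100))  # 42, served from near memory
```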
Abstract:
Technologies for efficiently managing the allocation of memory in a shared memory pool include a memory sled. The memory sled includes a memory pool of byte-addressable memory devices. The memory sled also includes a memory pool controller coupled to the memory pool. The memory pool controller receives a request to provision memory to a compute sled. Further, the memory pool controller maps, in response to the request, each of the memory devices of the memory pool to the compute sled. The memory pool controller additionally assigns access rights to the compute sled as a function of one or more memory characteristics of the compute sled. The memory characteristics are indicative of an amount of memory in the memory pool to be used by the compute sled and the access rights are indicative of access permissions to one or more memory address ranges associated with the one or more memory devices.
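The following sketch shows one way the described provisioning could look in software, assuming a simple bump-allocator over a flat pool address space. MemoryPoolController, AccessRight, and the provision() method are invented names for illustration, not the claimed implementation.

```python
# Illustrative sketch of a memory pool controller that maps byte-addressable
# devices to a compute sled and assigns access rights over an address range
# sized to the sled's requested amount of pooled memory.
from dataclasses import dataclass, field


@dataclass
class MemoryDevice:
    device_id: int
    size: int  # bytes


@dataclass
class AccessRight:
    start: int
    end: int          # exclusive upper bound of the permitted range
    permission: str   # e.g. "rw" or "ro"


@dataclass
class MemoryPoolController:
    devices: list
    mappings: dict = field(default_factory=dict)  # sled_id -> list[AccessRight]
    next_free: int = 0                            # next unassigned pool offset

    def provision(self, sled_id, requested_bytes, permission="rw"):
        """Map the pool to the sled and grant rights to a range sized to the request."""
        total = sum(d.size for d in self.devices)
        if self.next_free + requested_bytes > total:
            raise MemoryError("memory pool exhausted")
        right = AccessRight(self.next_free, self.next_free + requested_bytes, permission)
        self.mappings.setdefault(sled_id, []).append(right)
        self.next_free += requested_bytes
        return right


pool = MemoryPoolController(devices=[MemoryDevice(0, 1 << 30), MemoryDevice(1, 1 << 30)])
print(pool.provision(sled_id="compute-sled-7", requested_bytes=256 << 20))
```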
Abstract:
Technologies for managing errors in a remotely accessible memory pool include a memory sled. The memory sled includes a memory pool having one or more byte-addressable memory devices and a memory pool controller coupled to the memory pool. The memory sled is to write test data to a byte-addressable memory region in the memory pool. The memory region is to be accessed by a remote compute sled. The memory sled is also to read data from the memory region to which the test data was written, compare the read data to the test data to determine whether a threshold number of errors are present in the read data, and send, in response to a determination that the threshold number of errors are present in the read data, a notification to the remote compute sled that the memory region is faulty.
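Below is a minimal sketch of the described test sequence, under the assumptions that the memory region can be modeled as a bytearray and that notify_compute_sled is a hypothetical callback standing in for the notification sent to the remote compute sled.

```python
# Sketch of the described check: write a known test pattern to a pooled memory
# region, read it back, count mismatched bytes, and notify the compute sled
# using that region if the error count reaches the threshold.
import os


def test_memory_region(region: bytearray, threshold: int, notify_compute_sled) -> bool:
    """Return True if the region is considered faulty."""
    test_data = os.urandom(len(region))

    # Write the test pattern into the region.
    region[:] = test_data

    # Read back and count byte-level mismatches.
    read_back = bytes(region)
    errors = sum(1 for a, b in zip(read_back, test_data) if a != b)

    if errors >= threshold:
        notify_compute_sled("memory region is faulty", errors)
        return True
    return False


# Usage: a healthy (simulated) region produces no notification.
faulty = test_memory_region(bytearray(4096), threshold=4,
                            notify_compute_sled=lambda msg, n: print(msg, n))
print("faulty:", faulty)
```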
Abstract:
A rework grid array interposer with direct power is described. The interposer has a foundation layer mountable between a motherboard and a package. A heater is embedded in the foundation layer to provide local heat to reflow solder to enable at least one of attachment or detachment of the package. A connector is mounted on the foundation layer and coupled to the heater and to the package to provide a connection path directly with the power supply and not via the motherboard. One type of interposer interfaces with a package having a solderable extension. Another interposer has a plurality of heater zones embedded in the foundation layer.
Abstract:
In an embodiment, a computing device may include a control unit. The control unit may acquire a request from a central processing unit (CPU), contained in the computing device, that may be executing a basic input/output system (BIOS) associated with the computing device. The request may include a request for a value that may represent a maximum authorized storage size for a storage contained in the computing device. The control unit may generate the value and send the value to the CPU. The CPU may generate a system address map based on the value. The CPU may send the system address map to the control unit which may acquire the system address map and configure an address decoder, contained in the computing device, based on the acquired system address map.
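The sketch below walks through the described handshake under simplifying assumptions: ControlUnit, get_max_authorized_storage_size(), and the one-entry address map are illustrative stand-ins, not the actual BIOS or hardware interfaces.

```python
# Sketch of the handshake between BIOS code running on the CPU and a control
# unit: the BIOS requests the maximum authorized storage size, builds a system
# address map from the returned value, and hands the map back so the control
# unit can configure its address decoder. All names are illustrative.


class ControlUnit:
    def __init__(self, max_authorized_size: int):
        self._max_authorized_size = max_authorized_size
        self.decoder_ranges = []

    def get_max_authorized_storage_size(self) -> int:
        # Generate and return the value requested by the BIOS.
        return self._max_authorized_size

    def configure_address_decoder(self, system_address_map):
        # Program decoder ranges from the acquired system address map.
        self.decoder_ranges = list(system_address_map)


def bios_init(control_unit: ControlUnit):
    """BIOS-side flow: request the size, build the address map, send it back."""
    size = control_unit.get_max_authorized_storage_size()
    # A trivial one-entry map: storage occupies [0, size).
    system_address_map = [("storage", 0, size)]
    control_unit.configure_address_decoder(system_address_map)
    return system_address_map


cu = ControlUnit(max_authorized_size=64 << 30)  # 64 GiB
print(bios_init(cu))
```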
Abstract:
In one embodiment, an apparatus includes an interface to couple a plurality of devices of a system, the interface to enable communication according to a Compute Express Link (CXL) protocol, and a power management circuit coupled to the interface. The power management circuit may: receive, from a first device of the plurality of devices, a request according to the CXL protocol for updated power credits; identify at least one other device of the plurality of devices to provide at least some of the updated power credits; and communicate with the first device and the at least one other device to enable the first device to increase power consumption according to the at least some of the updated power credits. Other embodiments are described and claimed.
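As a rough illustration only (not the CXL protocol's actual credit messages), the sketch below shows a power management circuit that satisfies a credit request by pulling spare credits from another device and granting them to the requester; Device, min_credits, and handle_credit_request() are assumed names.

```python
# Illustrative power-credit rebalancing: a requesting device asks for more
# credits, a donor with headroom is identified, and both sides are updated so
# the requester can raise its power consumption by the granted amount.


class Device:
    def __init__(self, name: str, credits: int, min_credits: int = 1):
        self.name = name
        self.credits = credits
        self.min_credits = min_credits  # credits the device cannot give up

    def grant(self, amount: int):
        self.credits += amount

    def relinquish(self, amount: int) -> int:
        give = max(0, min(amount, self.credits - self.min_credits))
        self.credits -= give
        return give


class PowerManagementCircuit:
    def __init__(self, devices):
        self.devices = devices

    def handle_credit_request(self, requester: Device, wanted: int) -> int:
        """Move up to `wanted` credits from other devices to the requester."""
        granted = 0
        for donor in self.devices:
            if donor is requester or granted >= wanted:
                continue
            granted += donor.relinquish(wanted - granted)
        requester.grant(granted)
        return granted


accel, nic = Device("accelerator", credits=4), Device("nic", credits=6)
pmc = PowerManagementCircuit([accel, nic])
print(pmc.handle_credit_request(accel, wanted=3), accel.credits, nic.credits)
```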
Abstract:
Systems and methods are described for implementing server architectures that can facilitate the servicing of memory components in computer systems. The systems and methods employ nonvolatile memory/storage modules that include nonvolatile memory (NVM) that can be used for system memory and mass storage, as well as firmware memory. The respective NVM/storage modules can be received in front- or rear-loading bays of the computer systems. The systems and methods further employ single, dual, or quad socket processors, in which each processor is communicably coupled to at least some of the NVM/storage modules disposed in the front- or rear-loading bays by one or more memory and/or input/output (I/O) channels. By employing NVM/storage modules that can be received in front- or rear-loading bays of computer systems, the systems and methods provide memory component serviceability heretofore unachievable in computer systems implementing conventional server architectures.
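Purely as an illustration of the described topology, the sketch below models sockets, channels, and bay-mounted NVM/storage modules with invented dataclass names, and records which bay holds the module behind each socket's channel, which is the information needed to service a module.

```python
# Simple topology sketch of the described server layout: each processor socket
# is coupled over memory and/or I/O channels to NVM/storage modules seated in
# front- or rear-loading bays. Names and capacities are illustrative.
from dataclasses import dataclass


@dataclass
class NvmStorageModule:
    bay: str          # e.g. "front-3" or "rear-1"
    capacity_gib: int


@dataclass
class Channel:
    kind: str         # "memory" or "io"
    module: NvmStorageModule


@dataclass
class ProcessorSocket:
    socket_id: int
    channels: list


# A dual-socket example: each socket reaches two front-loading modules.
sockets = [
    ProcessorSocket(0, [Channel("memory", NvmStorageModule("front-0", 512)),
                        Channel("io", NvmStorageModule("front-1", 512))]),
    ProcessorSocket(1, [Channel("memory", NvmStorageModule("front-2", 512)),
                        Channel("io", NvmStorageModule("front-3", 512))]),
]

# Find which bay to open to service a given socket's memory-channel module.
for s in sockets:
    for ch in s.channels:
        if ch.kind == "memory":
            print(f"socket {s.socket_id}: memory module in bay {ch.module.bay}")
```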
Abstract:
Technologies for migrating virtual machines (VMs) include a plurality of compute sleds and a memory sled, each communicatively coupled to a resource manager server. The resource manager server is configured to identify a compute sled for a VM instance, allocate a first set of resources of the identified compute sled for the VM instance, associate a region of memory in a memory pool of a memory sled with the compute sled, and create the VM instance on the compute sled. The resource manager server is further configured to migrate the VM instance to another compute sled, associate the region of memory in the memory pool with the other compute sled, and start up the VM instance on the other compute sled. Other embodiments are described herein.
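The sketch below outlines the described migration flow under the assumption that pooled memory is remapped rather than copied between sleds; ResourceManager, MemorySled, and associate() are invented names, not the claimed system's interfaces.

```python
# Illustrative flow: create a VM on a compute sled whose memory comes from a
# shared pool, then migrate it by re-associating the same pooled memory region
# with the destination sled before starting the VM there.


class MemorySled:
    def __init__(self):
        self.region_owner = {}  # region_id -> compute sled name

    def associate(self, region_id, sled_name):
        self.region_owner[region_id] = sled_name


class ResourceManager:
    def __init__(self, memory_sled, compute_sleds):
        self.memory_sled = memory_sled
        self.compute_sleds = compute_sleds
        self.vm_placement = {}  # vm_id -> compute sled name

    def create_vm(self, vm_id, region_id):
        sled = self.compute_sleds[0]            # identify a sled for the VM
        self.memory_sled.associate(region_id, sled)
        self.vm_placement[vm_id] = sled         # allocate resources, create VM
        return sled

    def migrate_vm(self, vm_id, region_id, target_sled):
        self.memory_sled.associate(region_id, target_sled)  # remap pooled memory
        self.vm_placement[vm_id] = target_sled               # start VM on target
        return target_sled


rm = ResourceManager(MemorySled(), ["sled-a", "sled-b"])
rm.create_vm("vm-1", region_id=0)
print(rm.migrate_vm("vm-1", region_id=0, target_sled="sled-b"))
```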