Abstract:
Examples described herein relate to a memory controller that, when connected to at least one memory device in a multi-tiered memory system comprising a near memory and a far memory, is to allocate a region of the near memory to a requester based on receipt of a request. In some examples, the memory controller includes circuitry to transmit at least one memory read command and address information to the multi-tiered memory system to read data from the multi-tiered memory system and circuitry to transmit at least one memory write command and address information to the multi-tiered memory system to write data to the multi-tiered memory system, wherein the near memory comprises at least one memory connected to the memory controller via a memory interface and the far memory comprises at least one memory connected to the memory controller via a network.
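A minimal sketch of the two-tier idea, assuming illustrative names and a dictionary-backed stand-in for the network-attached far tier (TieredMemoryController, allocate_near_region, and the capacity constant are hypothetical, not the claimed design):

```python
# Hypothetical sketch: not the claimed controller design.
NEAR_CAPACITY = 1 << 20  # assumed near-memory size (1 MiB) for illustration

class TieredMemoryController:
    def __init__(self):
        self.near = bytearray(NEAR_CAPACITY)  # near tier: local memory interface
        self.far = {}                         # far tier: network-attached, modeled as a dict
        self.next_free = 0
        self.regions = {}                     # requester -> (base, size)

    def allocate_near_region(self, requester, size):
        """Allocate a near-memory region to a requester on receipt of a request."""
        if self.next_free + size > NEAR_CAPACITY:
            raise MemoryError("near tier exhausted")
        base = self.next_free
        self.next_free += size
        self.regions[requester] = (base, size)
        return base

    def write(self, addr, data):
        """Route a write command: near tier if the address is in range, else far."""
        if addr < NEAR_CAPACITY:
            self.near[addr:addr + len(data)] = data  # memory-interface path
        else:
            self.far[addr] = bytes(data)             # network path (modeled)

    def read(self, addr, length):
        """Route a read command to the tier that owns the address."""
        if addr < NEAR_CAPACITY:
            return bytes(self.near[addr:addr + length])
        return self.far.get(addr, b"\x00" * length)[:length]

mc = TieredMemoryController()
base = mc.allocate_near_region("requester-0", 4096)
mc.write(base, b"hello")
print(mc.read(base, 5))  # b'hello'
```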
Abstract:
Examples described herein relate to circuitry to utilize a proportional-integral-derivative neural network (PIDNN) controller to adjust one or more parameters allocated to a first group of one or more workloads based on one or more target parameters for a second group of one or more workloads. In some examples, the second group of one or more workloads is at a same, lower, or higher priority level than that of the first group of one or more workloads.
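A minimal sketch, substituting a plain PID loop for the full PID neural network and using an assumed toy model in which the second group's latency rises with the first group's allocation (PIDController, the gains, and the latency model are all illustrative):

```python
# Hypothetical sketch: a plain PID loop, not the PIDNN described above.
class PIDController:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured            # proportional input
        self.integral += error               # integral accumulates history
        derivative = error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDController(kp=0.5, ki=0.05, kd=0.1)
allocation = 8.0        # parameter allocated to the first group (e.g., cache ways)
target_latency = 10.0   # target parameter for the second group

for _ in range(50):
    latency = 5.0 + allocation      # assumed plant: contention raises latency
    adjustment = pid.step(target_latency, latency)
    allocation = max(1.0, allocation + adjustment)  # shrink when latency is high

print(f"allocation={allocation:.2f}, second-group latency={5.0 + allocation:.2f}")
```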
Abstract:
Technologies for generating an analytical model for a workload of a data center include an analytics server to receive raw data from components of a data center. The analytics server retrieves a workbook that includes analytical algorithms from a workbook marketplace server and uses the analytical algorithms to analyze the raw data to generate the analytical model for the workload. The analytics server further generates an optimization trigger, which may be based on the analytical model and one or more previously generated analytical models, to be transmitted to a controller component of the data center. The workbook marketplace server may include a plurality of workbooks, each of which may include one or more analytical algorithms from which to generate a different analytical model for the workload of the data center.
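A minimal sketch of the workbook flow, assuming a workbook is simply a named bundle of analytical functions and that the optimization trigger fires on drift from prior models (the marketplace dictionary, threshold, and metric are illustrative assumptions):

```python
# Hypothetical sketch: stand-ins for the workbook marketplace and trigger logic.
from statistics import mean, pstdev

WORKBOOK_MARKETPLACE = {
    "cpu-utilization": {"algorithms": [mean, pstdev]},  # a workbook bundles algorithms
}

def build_model(workbook_name, raw_samples):
    """Retrieve a workbook and apply its algorithms to the raw data."""
    workbook = WORKBOOK_MARKETPLACE[workbook_name]
    return [algorithm(raw_samples) for algorithm in workbook["algorithms"]]

def optimization_trigger(model, previous_models, threshold=0.2):
    """Fire a trigger for the controller when the new model drifts more than
    `threshold` (fractional) from previously generated models."""
    if not previous_models:
        return False
    baseline = mean(m[0] for m in previous_models)
    return abs(model[0] - baseline) / baseline > threshold

history = []
for raw in ([40, 42, 41], [43, 44, 42], [60, 65, 62]):  # raw data per interval
    model = build_model("cpu-utilization", raw)
    if optimization_trigger(model, history):
        print(f"optimization trigger -> controller: model={model}")
    history.append(model)
```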
Abstract:
Technologies for datacenter management include one or more computing racks, each including a rack controller. The rack controller may receive system, performance, or health metrics for the components of the computing rack. The rack controller generates regression models to predict component lifespans and may predict logical machine lifespans based on the lifespans of the included hardware components. The rack controller may generate notifications or schedule maintenance sessions based on remaining component or logical machine lifespans. The rack controller may compose logical machines using components having similar remaining lifespans. In some embodiments, the rack controller may validate a service level agreement prior to executing an application based on the probability of component failure. A management interface may generate an interactive visualization of the system state and optimize the datacenter schedule based on optimization rules derived from human input in response to the visualization. Other embodiments are described and claimed.
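A minimal sketch of the lifespan prediction, assuming a per-component linear regression over a declining health metric and taking a logical machine's lifespan as the minimum over its components (telemetry values and names are illustrative):

```python
# Hypothetical sketch: linear-regression wear model, not the claimed method.
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def remaining_lifespan(hours, health):
    """Extrapolate when a declining health metric reaches zero."""
    slope, intercept = fit_line(hours, health)
    if slope >= 0:
        return float("inf")              # no measured degradation
    return -intercept / slope - hours[-1]

components = {   # health telemetry: (operating hours, health metric)
    "ssd-0": remaining_lifespan([0, 100, 200], [100, 95, 91]),
    "ssd-1": remaining_lifespan([0, 100, 200], [100, 80, 61]),
    "dimm-0": remaining_lifespan([0, 100, 200], [100, 99, 98]),
}

def logical_machine_lifespan(parts):
    """A logical machine lives only as long as its shortest-lived component."""
    return min(components[p] for p in parts)

# Compose a logical machine from components with similar remaining lifespans.
ordered = sorted(components, key=components.get)
machine = ordered[:2]
print(f"composed machine {machine}, lifespan {logical_machine_lifespan(machine):.0f} h")
```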
Abstract:
Technologies for pre-configuring accelerators by predicting bit-streams include communication circuitry and a compute device. The compute device includes a compute engine to determine one or more bit-streams registered on each accelerator of multiple accelerators. The compute engine is further to predict a next job to be requested for acceleration from an application of at least one compute sled of multiple compute sleds, predict a bit-stream from a bit-stream library that is to execute the predicted next job, and determine whether the predicted bit-stream is already registered on one of the accelerators. In response to a determination that the predicted bit-stream is not registered on one of the accelerators, the compute engine is to select an accelerator from the multiple accelerators that satisfies characteristics of the predicted bit-stream and register the predicted bit-stream on the selected accelerator.
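A minimal sketch, assuming a simple frequency-based predictor stands in for the job prediction and that a bit-stream's characteristics reduce to a gate-count requirement (the library, accelerator table, and admission rule are illustrative assumptions):

```python
# Hypothetical sketch: frequency-based job prediction and gate-count matching.
from collections import Counter

BITSTREAM_LIBRARY = {            # job type -> (bit-stream id, required gates)
    "compress": ("bs-compress-v2", 30_000),
    "encrypt": ("bs-aes-v1", 20_000),
}

accelerators = [                 # per-accelerator free capacity and registrations
    {"id": "fpga-0", "free_gates": 25_000, "registered": {"bs-aes-v1"}},
    {"id": "fpga-1", "free_gates": 50_000, "registered": set()},
]

def predict_next_job(recent_jobs):
    """Predict the next acceleration request as the most frequent recent job."""
    return Counter(recent_jobs).most_common(1)[0][0]

def preconfigure(recent_jobs):
    job = predict_next_job(recent_jobs)
    bitstream, gates_needed = BITSTREAM_LIBRARY[job]
    if any(bitstream in acc["registered"] for acc in accelerators):
        return f"{bitstream} already registered"
    # Select an accelerator that satisfies the bit-stream's characteristics.
    for acc in accelerators:
        if acc["free_gates"] >= gates_needed:
            acc["registered"].add(bitstream)
            acc["free_gates"] -= gates_needed
            return f"registered {bitstream} on {acc['id']}"
    return "no accelerator satisfies the bit-stream"

print(preconfigure(["encrypt", "compress", "compress"]))  # -> registers on fpga-1
```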
Abstract:
Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine, as a function of the obtained availability data, whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and, in response to a determination to establish the logical communication path, establish that path between the kernel of the present accelerator device and the other accelerator device kernel.
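A minimal sketch, assuming the availability data is a table of remote kernels with candidate physical paths and that the decision rule is simply to establish over the lowest-latency path when the kernel is available (all names and metrics are illustrative):

```python
# Hypothetical sketch: availability table and a lowest-latency decision rule.
availability = {  # remote kernel -> availability and candidate physical paths
    "kernel-b": {"available": True,
                 "paths": [("pcie-switch", 2.0), ("inter-fpga-link", 0.5)]},
    "kernel-c": {"available": False, "paths": [("network", 10.0)]},
}

logical_channels = {}  # local kernel -> (remote kernel, chosen physical path)

def establish_logical_channel(local_kernel, remote_kernel):
    """Decide, as a function of availability data, whether to establish a
    logical communication path, and pick the lowest-latency physical path."""
    info = availability.get(remote_kernel)
    if not info or not info["available"]:
        return None                                    # decide against a path
    path_name, _latency = min(info["paths"], key=lambda p: p[1])
    logical_channels[local_kernel] = (remote_kernel, path_name)
    return path_name

print(establish_logical_channel("kernel-a", "kernel-b"))  # inter-fpga-link
print(establish_logical_channel("kernel-a", "kernel-c"))  # None: unavailable
```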
Abstract:
Examples described herein relate to prefetching content from a remote memory device to a memory tier local to a higher-level cache or memory. An application or device can indicate a time availability by which data is to be available in the higher-level cache or memory. A prefetcher used by a network interface can allocate resources in any intermediary network device in a data path from the remote memory device to the memory tier local to the higher-level cache. Memory access bandwidth, egress bandwidth, and memory space in any intermediary network device can be allocated for prefetch of content. In some examples, proactive prefetch can occur for content that is expected to be needed but has not been requested for prefetch.
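A minimal sketch of deadline-driven prefetch admission, assuming each intermediary device must reserve egress bandwidth and buffer space for the transfer (the path, capacities, and admission rule are illustrative assumptions):

```python
# Hypothetical sketch: deadline-driven admission along the prefetch data path.
path = [  # intermediary network devices between remote memory and local tier
    {"id": "switch-0", "egress_gbps": 10.0, "buffer_mb": 64},
    {"id": "switch-1", "egress_gbps": 4.0, "buffer_mb": 32},
]

def prefetch(size_mb, deadline_s):
    """Admit a prefetch only if every hop can reserve the egress bandwidth and
    buffer space needed to land the data within its time availability."""
    required_gbps = size_mb * 8 / 1000 / deadline_s   # rate to meet the deadline
    if any(d["egress_gbps"] < required_gbps or d["buffer_mb"] < size_mb
           for d in path):
        return False
    for d in path:                                    # reserve along the path
        d["egress_gbps"] -= required_gbps
        d["buffer_mb"] -= size_mb
    return True

print(prefetch(16, deadline_s=0.1))    # needs 1.28 Gb/s -> admitted
print(prefetch(16, deadline_s=0.001))  # needs 128 Gb/s -> rejected
```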