Abstract:
Examples may include techniques to distribute queries in a fabric of nodes configured to process the queries. A load-balancing switch coupled to the nodes can receive indications of resource metrics from the nodes and can schedule and distribute the queries based on those resource metrics and on network metrics identified by the switch. The switch can include programmable circuitry to receive selected resource metrics, identify selected network metrics, and distribute queries to the nodes based on the metrics and distribution logic.
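The distribution step might look like the following minimal sketch, assuming a weighted score over node-reported resource metrics (CPU utilization, queue depth) and switch-observed network metrics (link latency). The NodeState class, the weights, and the scoring formula are illustrative, not taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    node_id: str
    cpu_util: float         # resource metric reported by the node (0.0-1.0)
    queue_depth: int        # resource metric reported by the node
    link_latency_ms: float  # network metric observed by the switch

def score(node: NodeState) -> float:
    # Lower is better: combine node-reported resource metrics with
    # switch-observed network metrics (weights are illustrative).
    return (0.5 * node.cpu_util
            + 0.3 * (node.queue_depth / 100)
            + 0.2 * (node.link_latency_ms / 10))

def distribute(query: str, nodes: list[NodeState]) -> str:
    # Distribution logic: send the query to the lowest-scoring node.
    target = min(nodes, key=score)
    target.queue_depth += 1  # account for the newly scheduled query
    return target.node_id

nodes = [
    NodeState("node-a", cpu_util=0.80, queue_depth=40, link_latency_ms=2.0),
    NodeState("node-b", cpu_util=0.35, queue_depth=10, link_latency_ms=6.0),
]
print(distribute("SELECT ...", nodes))  # -> node-b
```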
Abstract:
A processor or system may include a memory controller to store, in a pre-allocated portion of bit-addressable, random access persistent memory (PM), a relationship between a group of addresses stored in the PM according to a set of instructions when executed. The memory controller is further to retrieve the relationship when accessing an address from the group of addresses.
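As a rough illustration of the store/retrieve behavior, the sketch below models the pre-allocated PM portion as a per-address map to a group ID; the MemoryController class and the idea that the relationship is a group ID are assumptions for illustration, not the patent's actual design.

```python
class MemoryController:
    def __init__(self, pm_size: int):
        self.pm = bytearray(pm_size)  # bit-addressable persistent memory (simulated)
        self.meta = {}                # pre-allocated portion: address -> group ID

    def store_group(self, addresses: list[int], group_id: int) -> None:
        # Record that these addresses were stored together by one
        # executing instruction sequence.
        for addr in addresses:
            self.meta[addr] = group_id

    def access(self, addr: int) -> tuple[int, int | None]:
        # On access, retrieve the stored relationship alongside the data,
        # e.g. so the rest of the group can be prefetched.
        return self.pm[addr], self.meta.get(addr)

mc = MemoryController(pm_size=4096)
mc.store_group([0x10, 0x18, 0x20], group_id=7)
print(mc.access(0x18))  # -> (0, 7): data byte plus the group relationship
```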
Abstract:
Technologies for software testing include a computing device having persistent memory, a platform simulator, and an application or other code module to be tested. The computing device generates a checkpoint for the application at a test location using the platform simulator. The computing device executes the application from the test location to an end location and traces all writes to persistent memory using the platform simulator. The computing device generates permutations of the persistent memory writes that are allowed by the hardware specification of the computing device simulated by the platform simulator. The computing device replays each permutation from the checkpoint, simulates a power failure, and then invokes a user-defined test function using the platform simulator. The computing device may test different permutations of memory writes until the application's use of persistent memory is validated. Other embodiments are described and claimed.
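The replay loop could be sketched as follows, with the simplifying assumption that any ordering of the traced writes is allowed (a real platform simulator would emit only the orderings the simulated hardware specification permits). All names here are illustrative.

```python
import copy
import itertools

def test_permutations(checkpoint: dict, traced_writes: list[tuple[str, int]],
                      validate) -> bool:
    # Replay each ordering of the traced persistent-memory writes, cut it
    # short to model a power failure, and invoke the user test function.
    for perm in itertools.permutations(traced_writes):
        for failure_point in range(len(perm) + 1):
            pm = copy.deepcopy(checkpoint)        # restore the checkpoint
            for addr, value in perm[:failure_point]:
                pm[addr] = value                  # writes that reached PM
            # power fails here: the remaining writes are lost
            if not validate(pm):
                print("inconsistent state:", pm)
                return False
    return True

# Example: a two-word update that is only consistent if 'flag' never
# becomes visible before 'data'.
ok = test_permutations(
    checkpoint={"data": 0, "flag": 0},
    traced_writes=[("data", 42), ("flag", 1)],
    validate=lambda pm: not (pm["flag"] == 1 and pm["data"] != 42),
)
print(ok)  # -> False: the flag-before-data ordering fails the check
```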
Abstract:
A processor of an aspect includes a plurality of packed data registers and a decode unit to decode a vector cache line write back instruction. The vector cache line write back instruction is to indicate a source packed memory indices operand that is to include a plurality of memory indices. The processor also includes a cache coherency system coupled with the packed data registers and the decode unit. The cache coherency system, in response to the vector cache line write back instruction, is to cause any dirty cache lines, in any caches in a coherency domain, that store data for any of a plurality of memory addresses indicated by the memory indices of the source packed memory indices operand, to be written back toward one or more memories. Other processors, methods, and systems are also disclosed.
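A behavioral sketch of the instruction's effect is given below, assuming a scaled base-plus-index addressing mode and a dictionary-based cache model; both are assumptions for illustration, since the abstract does not specify addressing or cache structure.

```python
LINE = 64  # cache line size in bytes (illustrative)

def vclwb(indices: list[int], base: int, scale: int,
          caches: list[dict], memory: dict) -> None:
    # For every memory index in the source packed operand, write back any
    # dirty line holding that address from any cache in the coherency domain.
    for idx in indices:
        addr = base + idx * scale
        line_addr = addr & ~(LINE - 1)
        for cache in caches:
            entry = cache.get(line_addr)
            if entry is not None and entry["dirty"]:
                memory[line_addr] = entry["data"]  # write back toward memory
                entry["dirty"] = False             # line may remain resident, clean

memory = {}
l1 = {0x1000: {"data": b"x" * LINE, "dirty": True}}
vclwb(indices=[0, 8], base=0x1000, scale=8, caches=[l1], memory=memory)
print(sorted(hex(a) for a in memory))  # -> ['0x1000']
```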
Abstract:
A processor includes a decoder to receive an instruction that indicates first and second source packed data operands and at least one shift count. An execution unit is operable, in response to the instruction, to store a result packed data operand. Each result data element includes a first least significant bit (LSB) portion of a first data element of a corresponding pair of data elements in a most significant bit (MSB) portion, and a second MSB portion of a second data element of the corresponding pair in an LSB portion. One of the first LSB portion of the first data element and the second MSB portion of the second data element has a corresponding shift count number of bits. The other has a number of bits equal to the size of a data element of the first source packed data operand minus the corresponding shift count.
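The per-element operation reads like a funnel shift right over each pair of elements; the worked sketch below assumes 16-bit elements and illustrative operand values.

```python
WIDTH = 16
MASK = (1 << WIDTH) - 1

def funnel_shift_right_packed(src1: list[int], src2: list[int],
                              count: int) -> list[int]:
    result = []
    for a, b in zip(src1, src2):
        # `count` LSBs of the first element land in the MSB portion of the
        # result; (WIDTH - count) MSBs of the second element land in the
        # LSB portion.
        result.append((((a << WIDTH) | b) >> count) & MASK)
    return result

src1 = [0x1234, 0xABCD]
src2 = [0x5678, 0xEF01]
print([hex(x) for x in funnel_shift_right_packed(src1, src2, 4)])
# -> ['0x4567', '0xdef0']
```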
Abstract:
There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
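A highly simplified stand-in for the predict-and-tune loop is sketched below, where a linear trend extrapolation substitutes for the AI circuit and a worker count stands in for the tuned service parameter; both substitutions, and all names, are assumptions for illustration only.

```python
def predict_demand(samples: list[float]) -> float:
    # Extrapolate one step ahead from recent flow-level telemetry samples.
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    trend = samples[-1] - samples[-2]
    return samples[-1] + trend

def tune_edge_node(telemetry: list[float], capacity_per_worker: float) -> int:
    # Predict future service-level demand, then tune a service parameter
    # (here, the number of workers) to meet it.
    demand = predict_demand(telemetry)
    return max(1, round(demand / capacity_per_worker))

# Flow-level samples (requests/s) for a flow diverted to the edge node:
print(tune_edge_node([100.0, 140.0, 180.0], capacity_per_worker=50.0))  # -> 4
```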
Abstract:
Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
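The detect/notify/react cycle might be sketched as below, assuming a single utilization threshold as the throttling condition and a transmit-rate reduction as the throttling action; the HostFabricInterface class, its threshold, and the message format are illustrative.

```python
class HostFabricInterface:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.peers: list["HostFabricInterface"] = []
        self.tx_rate = 1.0  # normalized transmit rate for this node's resources

    def monitor(self, qos_levels: dict[str, float]) -> None:
        # Detect a throttling condition from monitored QoS levels.
        if any(util > 0.9 for util in qos_levels.values()):
            self.broadcast_throttle(reduce_by=0.5)

    def broadcast_throttle(self, reduce_by: float) -> None:
        # Generate and transmit a throttling message to interconnected nodes.
        for peer in self.peers:
            peer.on_throttle_message(self.node_id, reduce_by)

    def on_throttle_message(self, sender: str, reduce_by: float) -> None:
        # Throttling action: back off this node's transmit rate.
        self.tx_rate *= (1.0 - reduce_by)
        print(f"{self.node_id}: throttled to {self.tx_rate:.2f} (from {sender})")

a, b = HostFabricInterface("node-a"), HostFabricInterface("node-b")
a.peers = [b]
a.monitor({"rx_queue": 0.95, "memory_bw": 0.60})  # node-b: throttled to 0.50
```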
Abstract:
An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to process memory operation requests from a memory controller and provide a front end interface to remote pooled memory hosted at a near edge device. An embodiment of another electronic apparatus may include local memory and logic communicatively coupled to the local memory, the logic to allocate a range of the local memory as remote pooled memory and provide a back end interface to the remote pooled memory for memory requests from a far edge device. Other embodiments are disclosed and claimed.
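A minimal sketch of the front end/back end split follows, with an in-process method call standing in for the fabric link between the far edge and near edge devices; the PoolFrontEnd and PoolBackEnd names and the request layout are assumptions.

```python
class PoolBackEnd:
    """Near edge device: allocates a range of local memory as remote pooled memory."""
    def __init__(self, pool_size: int):
        self.pool = bytearray(pool_size)  # range of local memory exposed as the pool

    def handle(self, op: str, offset: int, data: bytes | None = None):
        # Back end interface: service memory requests from a far edge device.
        if op == "write":
            self.pool[offset:offset + len(data)] = data
            return None
        return bytes(self.pool[offset:offset + 8])  # read (fixed size, illustrative)

class PoolFrontEnd:
    """Far edge device: presents the remote pool to local memory operation requests."""
    def __init__(self, backend: PoolBackEnd, base: int):
        self.backend, self.base = backend, base

    def write(self, addr: int, data: bytes) -> None:
        self.backend.handle("write", addr - self.base, data)

    def read(self, addr: int) -> bytes:
        return self.backend.handle("read", addr - self.base)

fe = PoolFrontEnd(PoolBackEnd(pool_size=4096), base=0x8000)
fe.write(0x8010, b"edgedata")
print(fe.read(0x8010))  # -> b'edgedata'
```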
Abstract:
Technologies for providing deduplication of data in an edge network include a compute device having circuitry to obtain a request to write a data set. The circuitry is also to apply, to the data set, an approximation function to produce an approximated data set. Additionally, the circuitry is to determine whether the approximated data set is already present in a shared memory and to write, to a translation table and in response to a determination that the approximated data set is already present in the shared memory, an association between a local memory address and the location, in the shared memory, where the approximated data set is already present. Additionally, the circuitry is to increase a reference count associated with that location in the shared memory.
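The write path could be sketched as follows, assuming the approximation function quantizes byte values and that a content hash serves as the shared-memory location; the quantization step and all names are illustrative.

```python
import hashlib

shared_memory: dict[str, bytes] = {}  # location (content hash) -> approximated data
ref_counts: dict[str, int] = {}       # location -> reference count
translation: dict[int, str] = {}      # local memory address -> shared location

def approximate(data: bytes, step: int = 8) -> bytes:
    # Round each byte down to a multiple of `step` so near-identical
    # data sets collapse to one shared representation.
    return bytes(b - (b % step) for b in data)

def dedup_write(local_addr: int, data: bytes) -> str:
    approx = approximate(data)
    loc = hashlib.sha256(approx).hexdigest()[:16]
    if loc not in shared_memory:
        shared_memory[loc] = approx   # first copy: store it in shared memory
    translation[local_addr] = loc     # map the local address to the location
    ref_counts[loc] = ref_counts.get(loc, 0) + 1
    return loc

a = dedup_write(0x1000, bytes([100, 101, 102]))
b = dedup_write(0x2000, bytes([101, 102, 103]))  # approximates to the same set
print(a == b, ref_counts[a])  # -> True 2
```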