Abstract:
Technologies for using fabric-supported sequencers in fabric architectures include a network switch communicatively coupled to a plurality of computing nodes. The network switch is configured to receive a sequencer access message from one of the plurality of computing nodes that includes an identifier of a sequencing counter corresponding to a sequencer session and one or more operation parameters. The network switch is additionally configured to perform an operation on a value associated with the identifier of the sequencing counter as a function of the one or more operation parameters, increment the identifier of the sequencing counter, and associate a result of the operation with the incremented identifier of the sequencing counter. The network switch is further configured to transmit an acknowledgment of successful access to the computing node that includes the result of the operation and the incremented identifier of the sequencing counter. Other embodiments are described herein.
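As a loose illustration of the message flow above, the following C sketch models the switch-side handling of a sequencer access: apply an operation parameter to the value tied to a sequencing-counter identifier, increment the identifier, and return both in the acknowledgment. All names (seq_msg, seq_ack, handle_seq_access) and the choice of an additive operation are assumptions for illustration, not details from the patent.

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SEQUENCERS 16

    /* Hypothetical wire formats for the access message and acknowledgment. */
    typedef struct { uint64_t counter_id; int64_t operand; } seq_msg;
    typedef struct { int64_t result; uint64_t counter_id; } seq_ack;

    static int64_t seq_values[NUM_SEQUENCERS];  /* one value per sequencer session */

    /* Apply the operation to the counter's value, increment the counter
     * identifier, and associate the result with the incremented identifier. */
    static seq_ack handle_seq_access(seq_msg m)
    {
        uint64_t slot = m.counter_id % NUM_SEQUENCERS;
        int64_t result = (seq_values[slot] += m.operand);  /* assumed additive op */
        return (seq_ack){ .result = result, .counter_id = m.counter_id + 1 };
    }

    int main(void)
    {
        seq_ack a = handle_seq_access((seq_msg){ .counter_id = 3, .operand = 5 });
        printf("result=%lld next_id=%llu\n",
               (long long)a.result, (unsigned long long)a.counter_id);
        return 0;  /* in the abstract, the ack is transmitted back over the fabric */
    }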
Abstract:
Technology for an apparatus is described. The apparatus can receive a command to copy data. The command can indicate a first address, a second address, and an offset value. The apparatus can determine a first non-uniform memory access (NUMA) domain ID for the first address and a second NUMA domain ID for the second address. The apparatus can identify a first computing node with memory that corresponds to the first NUMA domain ID and a second computing node with memory that corresponds to the second NUMA domain ID. The apparatus can generate an instruction for copying data in a first memory range of the first computing node to a second memory range of the second computing node. The first memory range can be defined by the first address and the offset value, and the second memory range can be defined by the second address and the offset value.
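The address-to-node resolution lends itself to a compact sketch. The C below is a minimal model, assuming a fixed address-to-domain stride and a trivial domain-to-node mapping; numa_domain_of, node_of_domain, and copy_instr are invented names, and a real platform would consult firmware tables instead.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical: map an address to a NUMA domain ID by a fixed 1 GiB stride. */
    static uint32_t numa_domain_of(uint64_t addr) { return (uint32_t)(addr >> 30); }

    /* Hypothetical: map a NUMA domain ID to the node owning that memory. */
    static uint32_t node_of_domain(uint32_t domain) { return domain / 2; }

    typedef struct {
        uint32_t src_node, dst_node;
        uint64_t src_addr, dst_addr, len;
    } copy_instr;

    /* Build the copy instruction from the command (first, second, offset). */
    static copy_instr handle_copy_cmd(uint64_t first, uint64_t second, uint64_t offset)
    {
        copy_instr c = {
            .src_node = node_of_domain(numa_domain_of(first)),
            .dst_node = node_of_domain(numa_domain_of(second)),
            .src_addr = first,   /* first range:  [first,  first  + offset) */
            .dst_addr = second,  /* second range: [second, second + offset) */
            .len      = offset,
        };
        return c;
    }

    int main(void)
    {
        copy_instr c = handle_copy_cmd(0x40000000ULL, 0x180000000ULL, 4096);
        printf("copy %llu bytes: node %u -> node %u\n",
               (unsigned long long)c.len, c.src_node, c.dst_node);
        return 0;
    }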
Abstract:
A processor is described having an interface to non-volatile random access memory and logic circuitry. The logic circuitry is to identify cache lines modified by a transaction that views the non-volatile random access memory as the transaction's persistent storage. The logic circuitry is also to identify cache lines modified by a software process other than a transaction that likewise views the non-volatile random access memory as persistent storage.
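One way to picture the identification logic is as bookkeeping over modified cache lines, with each entry tagged by whether the writer was a transaction or another software process. The C sketch below is purely a conceptual model; line_entry, mark_modified, and the fixed-size table are inventions here, not the patented circuitry.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TRACKED_LINES 8

    /* Hypothetical record for a modified cache line and what modified it. */
    typedef struct {
        uint64_t line_addr;
        bool     dirty;
        bool     from_transaction;  /* true: transactional write; false: other */
    } line_entry;

    static line_entry table[TRACKED_LINES];

    /* Record a cache-line modification, tagged by its origin. */
    static void mark_modified(int slot, uint64_t addr, bool in_txn)
    {
        table[slot] = (line_entry){ addr, true, in_txn };
    }

    int main(void)
    {
        mark_modified(0, 0x1000, true);   /* modified inside a transaction */
        mark_modified(1, 0x2040, false);  /* modified by a non-transactional process */
        for (int i = 0; i < 2; i++)
            printf("line 0x%llx: %s write\n",
                   (unsigned long long)table[i].line_addr,
                   table[i].from_transaction ? "transactional" : "non-transactional");
        return 0;
    }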
Abstract:
Embodiments of systems, apparatuses, and methods are described for performing, in a computer processor, vector packed unary encoding using masks in response to a single vector packed unary encoding using masks instruction that includes a source vector register operand, a destination writemask register operand, and an opcode.
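The abstract does not define the encoding itself, so the following C model adopts one plausible reading purely for illustration: each source element value n is unary-encoded as n set bits, packed into a fixed-width lane of the destination writemask. vpunary_encode and the 16-bit lane layout are assumptions, not the instruction's documented semantics.

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed semantics: each 8-bit source element value n becomes n ones,
     * placed in a 16-bit lane of the destination writemask. */
    static uint64_t vpunary_encode(const uint8_t src[4])
    {
        uint64_t mask = 0;
        for (int i = 0; i < 4; i++) {
            uint64_t ones = (src[i] >= 16) ? 0xFFFFu : ((1u << src[i]) - 1u);
            mask |= ones << (16 * i);  /* one 16-bit mask lane per element */
        }
        return mask;
    }

    int main(void)
    {
        uint8_t src[4] = { 3, 0, 5, 16 };
        printf("writemask = 0x%016llx\n", (unsigned long long)vpunary_encode(src));
        return 0;  /* prints 0xffff001f00000007 under the assumed encoding */
    }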
Abstract:
Methods, apparatus, systems, and articles of manufacture are disclosed to estimate workload complexity. An example apparatus includes processor circuitry to perform at least one of first, second, or third operations to instantiate: payload interface circuitry to extract workload objective information and service level agreement (SLA) criteria corresponding to a workload; and acceleration circuitry to select a pre-processing model based on (a) the workload objective information and (b) feedback corresponding to workload performance metrics of at least one prior workload execution iteration, execute the pre-processing model to calculate a complexity metric corresponding to the workload, and select candidate resources based on the complexity metric.
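The pipeline reads as: extract objectives and SLA criteria, pick a pre-processing model from the objectives plus prior-iteration feedback, compute a complexity metric, then map the metric to candidate resources. The C sketch below strings those stages together with invented heuristics (select_model, complexity_metric, and select_resources are all hypothetical stand-ins).

    #include <stdio.h>

    /* Hypothetical inputs extracted by the payload interface circuitry. */
    typedef struct { int objective; double sla_latency_ms; } workload_info;

    /* Select a pre-processing model from the objective and prior-iteration
     * feedback (assumed here to be a single score in [0,1]). */
    static int select_model(workload_info w, double feedback_score)
    {
        return (feedback_score < 0.5 || w.sla_latency_ms < 10.0) ? 1 : 0;
    }

    /* Execute the (stub) pre-processing model to get a complexity metric. */
    static double complexity_metric(int model, workload_info w)
    {
        double base = (w.objective == 0) ? 1.0 : 2.5;
        return (model == 1) ? base * 1.5 : base;  /* model 1 weighs latency harder */
    }

    /* Pick candidate resources from the complexity metric. */
    static const char *select_resources(double metric)
    {
        return (metric > 2.0) ? "GPU + 8 cores" : "4 cores";
    }

    int main(void)
    {
        workload_info w = { .objective = 1, .sla_latency_ms = 5.0 };
        int model = select_model(w, 0.7);
        double m = complexity_metric(model, w);
        printf("model=%d complexity=%.2f resources=%s\n",
               model, m, select_resources(m));
        return 0;
    }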
Abstract:
Methods and apparatus for platform ambient data management schemes for tiered architectures. A platform including one or more CPUs coupled to multiple tiers of memory comprising various types of DIMMs (e.g., DRAM, hybrid, DCPMM) is powered by a battery subsystem receiving input energy harvested from one or more green energy sources. Energy threshold conditions are detected, and associated memory reconfiguration is performed. The memory reconfiguration may include, but is not limited to, copying data between DIMMs (or between memory ranks on the DIMMs) in the same tier, copying data from a first type of memory to a second type of memory on a hybrid DIMM, and flushing dirty lines in a DIMM in a first memory tier being used as a cache for a second memory tier. Following the data copy and flushing operations, the DIMMs and/or their memory devices are powered down and/or deactivated. In one aspect, machine learning models trained on historical data are employed to project harvested energy levels, which are used in detecting energy threshold conditions.
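A minimal model of the threshold-driven reconfiguration might look like the C below: when projected harvested energy falls under a threshold, tier-1 DIMMs acting as a cache are flushed and then powered down. The 20% threshold, the dimm record, and the flush-then-power-down ordering are assumptions for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical per-DIMM state in a two-tier platform. */
    typedef struct { int tier; bool has_dirty_lines; bool powered; } dimm;

    /* On a low-energy condition, flush tier-1 cache DIMMs and power them down. */
    static void on_energy_threshold(dimm d[], int n, double projected_pct)
    {
        if (projected_pct >= 20.0)
            return;  /* enough projected harvested energy; nothing to do */
        for (int i = 0; i < n; i++) {
            if (d[i].tier == 1 && d[i].powered) {
                if (d[i].has_dirty_lines) {
                    printf("DIMM %d: flushing dirty lines to tier 2\n", i);
                    d[i].has_dirty_lines = false;
                }
                d[i].powered = false;  /* deactivate after the flush completes */
                printf("DIMM %d: powered down\n", i);
            }
        }
    }

    int main(void)
    {
        dimm dimms[3] = { {1, true, true}, {1, false, true}, {2, false, true} };
        on_energy_threshold(dimms, 3, 12.5);  /* projection below the threshold */
        return 0;
    }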
Abstract:
An apparatus and/or system is described including a memory device including a memory range and a temporal data management unit (TDMU) coupled to the memory device to receive, from an interface, the memory range and a temporal range corresponding to the validity of data in the memory range; check the temporal range against a time and/or date value provided by a timer or clock to identify the data in the memory range as expired; and invalidate the expired data in the memory device. In some embodiments, the TDMU includes hardware logic that resides on a memory module with the memory device and is coupled to invalidate expired data when the memory module is decoupled from the interface. Other embodiments may be disclosed and claimed.
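The TDMU's core check is easy to model: compare a region's temporal range against the current time and invalidate the region when it has expired. In the C sketch below, region and tdmu_scrub are hypothetical names, and the printf stands in for the hardware invalidation the abstract describes.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical descriptor handed to the TDMU: a memory range plus the
     * temporal range over which its data is valid. */
    typedef struct {
        uint64_t base, len;
        time_t   valid_until;  /* end of the temporal range */
        bool     valid;
    } region;

    /* Compare the temporal range against the clock; invalidate expired data. */
    static void tdmu_scrub(region *r, time_t now)
    {
        if (r->valid && now > r->valid_until) {
            r->valid = false;  /* in hardware: invalidate/erase the range */
            printf("range [0x%llx, +%llu) expired; invalidated\n",
                   (unsigned long long)r->base, (unsigned long long)r->len);
        }
    }

    int main(void)
    {
        region r = { .base = 0x1000, .len = 4096, .valid_until = 1000, .valid = true };
        tdmu_scrub(&r, 2000);  /* "now" is past the temporal range */
        return 0;
    }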
Abstract:
Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
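The lookup path reduces to: check the local key cache, and on a miss request the key from an appliance one tier inward. The C sketch below models just that path; the per-tenant accelerated eviction and pre-fetch logic is omitted, and local_cache, fetch_from_inner_tier, and get_key are invented names.

    #include <stdio.h>
    #include <string.h>

    #define CACHE_SLOTS 4

    /* Hypothetical local key cache for one edge appliance. */
    static const char *local_cache[CACHE_SLOTS] = { "tenantA/k1", NULL, NULL, NULL };

    /* Stand-in for requesting the key from an appliance in an inner tier. */
    static const char *fetch_from_inner_tier(const char *key_id)
    {
        printf("miss: requesting %s from inner tier\n", key_id);
        return key_id;  /* assume the inner tier returns the key */
    }

    /* Serve a key request: check the local cache first, then go inward. */
    static const char *get_key(const char *key_id)
    {
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (local_cache[i] && strcmp(local_cache[i], key_id) == 0) {
                printf("hit: %s served from local cache\n", key_id);
                return local_cache[i];
            }
        return fetch_from_inner_tier(key_id);
    }

    int main(void)
    {
        get_key("tenantA/k1");  /* local hit */
        get_key("tenantB/k9");  /* miss -> request from inner tier */
        return 0;
    }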
Abstract:
A fabric interface apparatus, including: fabric interface logic to communicatively couple to a fabric; a data interface to communicatively couple to a compute platform including memory resources in at least two memory tiers; and a tier-aware read/write engine (TARWE) to: receive an incoming packet via the fabric; parse a header of the incoming packet to identify a hint for directing the incoming packet to a preferred memory tier; and write the incoming packet to the preferred memory tier.
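A TARWE's steering decision can be sketched in a few lines of C: read the tier hint out of the packet header and append the payload to the hinted tier's memory. pkt_header, tarwe_receive, and the two-tier byte arrays below are illustrative assumptions, not the apparatus itself.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical packet header carrying a memory-tier hint. */
    typedef struct { uint8_t tier_hint; uint16_t payload_len; } pkt_header;

    static char tier_mem[2][256];  /* two memory tiers (e.g., DRAM, pooled) */
    static int  tier_off[2];

    /* Parse the header hint and write the packet to the preferred tier. */
    static void tarwe_receive(const pkt_header *h, const char *payload)
    {
        int t = (h->tier_hint < 2) ? h->tier_hint : 0;  /* default to tier 0 */
        memcpy(&tier_mem[t][tier_off[t]], payload, h->payload_len);
        tier_off[t] += h->payload_len;
        printf("wrote %u bytes to tier %d\n", (unsigned)h->payload_len, t);
    }

    int main(void)
    {
        pkt_header h = { .tier_hint = 1, .payload_len = 5 };
        tarwe_receive(&h, "hello");  /* hint steers the write to tier 1 */
        return 0;
    }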
Abstract:
Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network for performance of a function of a service by an entity hosting the service. The edge gateway device is to identify one or more input data needed to fulfill the request by the service and request the one or more input data from an edge resource identified as providing the input data. The edge gateway device is to provide the input data to the entity associated with the request.
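The gateway's role is a small orchestration loop: for each input the requested function needs, ask the edge resource identified as its provider, then hand the collected inputs to the hosting entity. The C sketch below models that flow with invented names (resource_for, fulfill) and a toy provider mapping.

    #include <stdio.h>

    /* Hypothetical mapping from an input-data name to the edge resource
     * identified as its provider. */
    static const char *resource_for(const char *input)
    {
        return (input[0] == 's') ? "sensor-hub-3" : "storage-node-7";
    }

    /* Gateway sketch: request each needed input from its providing edge
     * resource, then provide the inputs to the entity hosting the service. */
    static void fulfill(const char *service, const char *inputs[], int n)
    {
        for (int i = 0; i < n; i++)
            printf("requesting %s from %s\n", inputs[i], resource_for(inputs[i]));
        printf("providing %d inputs to entity hosting %s\n", n, service);
    }

    int main(void)
    {
        const char *inputs[] = { "sensor-feed", "map-tiles" };
        fulfill("route-planning", inputs, 2);
        return 0;
    }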