Abstract:
Various embodiments are generally directed to techniques for improving the efficiency of exchanging packets between pairs of VMs within a communications server. An apparatus may include a processor component; a network interface to couple the processor component to a network; a virtual switch to analyze contents of at least one packet of a set of packets to be exchanged between endpoint devices through the network and the communications server, and to route the set of packets through one or more of multiple virtual servers based on the contents; and a transfer component of a first virtual server of the multiple virtual servers to determine, based on a routing rule, whether to route the set of packets to the virtual switch or to transfer the set of packets to a second virtual server of the multiple virtual servers in a manner that bypasses the virtual switch.
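A minimal sketch of the transfer component's decision, assuming hypothetical names (VirtualServer, TransferComponent, a dictionary-keyed routing rule); the abstract names the rule but not its form.

    class VirtualServer:
        def __init__(self, name):
            self.name = name
            self.inbox = []

        def receive(self, packet):
            self.inbox.append(packet)

    class VirtualSwitch:
        def route(self, packet):
            print(f"vswitch routes packet toward {packet['dst']}")

    class TransferComponent:
        """Applies a routing rule: bypass the virtual switch when the
        destination VM is co-resident on the same communications server."""
        def __init__(self, local_servers, vswitch):
            self.local_servers = local_servers  # routing rule: dst -> co-resident VM
            self.vswitch = vswitch

        def forward(self, packet):
            peer = self.local_servers.get(packet["dst"])
            if peer is not None:
                peer.receive(packet)        # direct VM-to-VM transfer, no vswitch hop
            else:
                self.vswitch.route(packet)  # fall back to the virtual switch

    vm_b = VirtualServer("vm-b")
    tc = TransferComponent({"10.0.0.2": vm_b}, VirtualSwitch())
    tc.forward({"dst": "10.0.0.2", "payload": b"hi"})  # bypasses the switch
    tc.forward({"dst": "8.8.8.8", "payload": b"hi"})   # goes via the switch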
Abstract:
Examples may include techniques to securely provision, configure, and de-provision virtual network functions for software-defined network or cloud infrastructure elements. A policy for a virtual network function may be received at a secure execution partition of circuitry, and the virtual network function may be configured by the secure execution partition of the circuitry to implement the policy. The secure execution partition may connect to the virtual network function through a virtual switch and may cause the virtual network function to implement a network function based on the policy.
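A hedged sketch of that provisioning flow; the class names and the direct method call standing in for the virtual-switch channel are illustrative assumptions, not the patented design.

    class Policy:
        def __init__(self, name, rules):
            self.name = name
            self.rules = rules  # e.g. firewall rules the VNF should enforce

    class VirtualNetworkFunction:
        def __init__(self):
            self.policy = None

        def apply(self, policy):
            self.policy = policy

    class SecureExecutionPartition:
        """Receives policies and configures VNFs; in the described system
        this exchange would traverse a virtual switch and be authenticated,
        here it is modeled as a direct call."""
        def provision(self, vnf, policy):
            vnf.apply(policy)

        def deprovision(self, vnf):
            vnf.apply(None)  # clear the policy when the VNF is torn down

    sep = SecureExecutionPartition()
    fw = VirtualNetworkFunction()
    sep.provision(fw, Policy("edge-firewall", ["drop tcp/23"]))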
Abstract:
Embodiments of apparatuses and methods for adaptive data compression and associated contextual information are described. In various embodiments, an apparatus may include a context monitoring module to gather contextual information for transmission of data and a policy module to gather user preferences regarding the cost associated with transmission of data. The apparatus may further include an analysis module to determine whether to compress data prior to transmission, based at least in part on the contextual information and the user preferences. Other embodiments may be described and/or claimed.
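One plausible form of the analysis module's decision, sketched under assumptions: contextual information supplies a per-byte transmit cost (a metered cellular link costs more than Wi-Fi), and the user's preference can scale that cost before the comparison. The function name and cost model are hypothetical.

    def should_compress(size_bytes, est_ratio, tx_cost_per_byte,
                        compress_cost_per_byte):
        """Return True when compress-then-send is expected to cost less."""
        cost_raw = size_bytes * tx_cost_per_byte
        cost_compressed = (size_bytes * compress_cost_per_byte
                           + size_bytes * est_ratio * tx_cost_per_byte)
        return cost_compressed < cost_raw

    print(should_compress(1_000_000, est_ratio=0.4,
                          tx_cost_per_byte=5.0, compress_cost_per_byte=1.0))
    # True: 1.0 + 0.4 * 5.0 = 3.0 per byte beats 5.0 per byte uncompressed.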
Abstract:
Technologies for secure inter-virtual-machine shared memory communication include a computing device with hardware virtualization support. A virtual machine monitor (VMM) authenticates a view switch component of a target virtual machine. The VMM configures a secure memory view to access a shared memory segment. The shared memory segment may include memory pages of a source virtual machine or the VMM. The view switch component switches to the secure memory view without generating a virtual machine exit event, using the hardware virtualization support. The view switch component may switch to the secure memory view by modifying an extended page table (EPT) pointer. The target virtual machine accesses the shared memory segment via the secure memory view. The target virtual machine and the source virtual machine may coordinate ownership of memory pages using a secure view control structure stored in the shared memory segment. Other embodiments are described and claimed.
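An illustrative software model of the view switch, not real EPT hardware: each "view" stands in for an extended page table, and selecting a view index mimics switching the EPT pointer from guest mode without a VM exit. All names and checks here are hypothetical.

    class GuestViews:
        def __init__(self, default_view, secure_view, authenticated=False):
            self.views = {0: default_view, 1: secure_view}
            self.active = 0
            self.authenticated = authenticated  # set after the VMM verifies
                                                # the view switch component

        def switch_view(self, index):
            # Analogous to EPT-pointer switching in guest mode: once the VMM
            # has set up the view list, no exit to the VMM is required.
            if index == 1 and not self.authenticated:
                raise PermissionError("view switch component not authenticated")
            self.active = index

        def read(self, page):
            view = self.views[self.active]
            if page not in view:
                raise MemoryError(f"page {page} not mapped in active view")
            return view[page]

    shared = {"shm0": b"data from source VM"}
    g = GuestViews(default_view={}, secure_view=shared, authenticated=True)
    g.switch_view(1)        # enter the secure memory view
    print(g.read("shm0"))   # access the shared segment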
Abstract:
Computer-readable storage media, computing devices and methods associated with file cache management are discussed herein. In embodiments, a computing device may include a file cache and a file cache manager coupled with the file cache. The file cache manager may be configured to implement a context-aware eviction policy to identify a candidate file for deletion from the file cache, from a plurality of individual files contained within the file cache, based at least in part on file-level context information associated with the individual files. In embodiments, the file-level context information may include an indication of access recency and access frequency associated with the individual files. In such embodiments, identifying the candidate file for deletion from the file cache may be based, at least in part, on both the access recency and the access frequency of the individual files. Other embodiments may be described and/or claimed.
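A sketch of one possible context-aware eviction score combining access recency and access frequency; the abstract names these signals but not a formula, so the scoring function below is an assumption for illustration.

    import time

    class FileCacheManager:
        def __init__(self):
            self.meta = {}  # path -> {"last_access": t, "count": n}

        def record_access(self, path):
            m = self.meta.setdefault(path, {"last_access": 0.0, "count": 0})
            m["last_access"] = time.monotonic()
            m["count"] += 1

        def eviction_candidate(self):
            """Pick the file with the worst recency/frequency score."""
            now = time.monotonic()
            def score(m):
                age = now - m["last_access"]    # larger = colder
                return m["count"] / (1.0 + age)  # frequent and recent = keep
            return min(self.meta, key=lambda p: score(self.meta[p]))

    mgr = FileCacheManager()
    mgr.record_access("/cache/a.bin")
    mgr.record_access("/cache/a.bin")
    mgr.record_access("/cache/b.bin")
    print(mgr.eviction_candidate())  # "/cache/b.bin": accessed less often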
Abstract:
In accordance with embodiments disclosed herein, there are provided systems and methods for providing a virtual shared cache mechanism. A processing device includes a plurality of clusters allocated into a virtual private shared cache. Each of the clusters includes a plurality of cores and a plurality of cache slices co-located with the plurality of cores. The processing device also includes a virtual shared cache including the plurality of clusters such that the cache data in the plurality of cache slices is shared among the plurality of clusters.
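An illustrative sketch of slices distributed across clusters and shared through an address hash that picks the owning slice; the hash, topology, and class names are assumptions chosen only to show the idea.

    class CacheSlice:
        def __init__(self):
            self.lines = {}

    class VirtualSharedCache:
        def __init__(self, clusters, slices_per_cluster):
            # One flat pool of slices spanning every cluster: a line cached
            # in any slice is visible to cores in all clusters.
            self.slices = [CacheSlice()
                           for _ in range(clusters * slices_per_cluster)]

        def slice_for(self, addr):
            return self.slices[hash(addr) % len(self.slices)]

        def write(self, addr, line):
            self.slice_for(addr).lines[addr] = line

        def read(self, addr):
            return self.slice_for(addr).lines.get(addr)

    vsc = VirtualSharedCache(clusters=4, slices_per_cluster=8)
    vsc.write(0x1000, b"line")  # written by a core in one cluster
    print(vsc.read(0x1000))     # readable by cores in any other cluster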
Abstract:
In an embodiment, a processor includes multiple tiles, each including a core and a tile cache hierarchy. This tile cache hierarchy includes a first level cache, a mid-level cache (MLC) and a last level cache (LLC), and each of these caches is private to the tile. A controller coupled to the tiles includes a cache power control logic to receive utilization information regarding the core and the tile cache hierarchy of a tile and to cause the LLC of the tile to be independently power gated, based at least in part on this information. Other embodiments are described and claimed.
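A minimal sketch of the controller's power-gating decision, assuming a hypothetical per-tile utilization report; the threshold is illustrative, not from the abstract.

    LLC_GATE_THRESHOLD = 0.05  # gate the tile's LLC below 5% utilization

    class Tile:
        def __init__(self, tile_id):
            self.tile_id = tile_id
            self.llc_powered = True

        def power_gate_llc(self):
            # The LLC is private to this tile, so gating it affects no
            # other tile and can be done independently.
            self.llc_powered = False

        def power_up_llc(self):
            self.llc_powered = True

    def cache_power_control(tile, core_util, llc_util):
        if (tile.llc_powered and core_util < LLC_GATE_THRESHOLD
                and llc_util < LLC_GATE_THRESHOLD):
            tile.power_gate_llc()
        elif not tile.llc_powered and core_util >= LLC_GATE_THRESHOLD:
            tile.power_up_llc()

    t = Tile(0)
    cache_power_control(t, core_util=0.01, llc_util=0.02)
    print(t.llc_powered)  # False: the idle tile's LLC has been gated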
Abstract:
A network interface device (NID) may determine whether data units received from the computer system are to be compressed before the NID transmits them. The NID may determine a compression energy value consumed to compress the first K1 data units of a first data stream and a second transmission energy value consumed to transmit the compressed first K1 data units. The NID may then estimate, using the second transmission energy value, a first transmission energy value that would be consumed by the NID to transmit the first K1 data units uncompressed. The NID may then use the first and second transmission energy values and the compression energy value to determine whether the remaining (N−K1) data units of the first data stream are to be compressed.
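A hedged sketch of that decision: the NID measures the cost of handling the first K1 units the compressed way, then extrapolates. Estimating the uncompressed transmit energy by scaling the measured compressed-transmit energy with the compression ratio is an assumption for illustration.

    def compress_rest(k1_raw_bytes, k1_compressed_bytes,
                      e_compress, e_tx_compressed):
        """Return True if the remaining (N-K1) units should be compressed."""
        ratio = k1_compressed_bytes / k1_raw_bytes
        # Estimated energy to send the first K1 units *uncompressed*:
        # scale the measured compressed-transmit energy by 1/ratio.
        e_tx_raw = e_tx_compressed / ratio
        # Compress the rest only if compression plus compressed transmission
        # costs less energy than raw transmission would.
        return (e_compress + e_tx_compressed) < e_tx_raw

    # Example: 1 MiB shrank to ~400 KiB; compressing cost 0.2 J and sending
    # the compressed data cost 0.5 J, versus an estimated 1.25 J raw.
    print(compress_rest(1_048_576, 419_430,
                        e_compress=0.2, e_tx_compressed=0.5))  # True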
Abstract:
Systems and methods of managing break events may provide for detecting a first break event from a first event source and detecting a second break event from a second event source. In one example, the event sources can include devices coupled to a platform as well as active applications on the platform. Issuance of the first and second break events to the platform can be coordinated based at least in part on runtime information associated with the platform.
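A small sketch of coordinated issuance: rather than waking the platform per event, events are deferred and released together. The coordinator name, the coalescing window, and the per-source latency tolerance standing in for "runtime information" are hypothetical.

    import heapq

    class BreakEventCoordinator:
        def __init__(self, window):
            self.window = window  # max time any event may be deferred
            self.pending = []     # (deadline, source, event) min-heap

        def post(self, now, source, event, latency_tolerance):
            # Runtime info (here, a per-source latency tolerance) bounds
            # how long the event may be held before it must be issued.
            deadline = now + min(latency_tolerance, self.window)
            heapq.heappush(self.pending, (deadline, source, event))

        def due(self, now):
            # When the earliest deadline arrives, issue every pending event
            # so the platform wakes once instead of once per event.
            if self.pending and self.pending[0][0] <= now:
                batch, self.pending = self.pending, []
                return [event for _, _, event in sorted(batch)]
            return []

    c = BreakEventCoordinator(window=5.0)
    c.post(0.0, "nic", "rx-irq", latency_tolerance=4.0)
    c.post(1.0, "app", "timer", latency_tolerance=10.0)
    print(c.due(4.0))  # both events issued in a single wake-up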