Abstract:
Technologies for identifying service functions that may be performed in parallel in a service function chain include a computing device that runs one or more virtual machines for each of a plurality of service functions once a preferred service function chain has been selected. To identify which service functions may be performed in parallel, the computing device may determine which service functions are not required to be performed on a critical path of the service function chain and/or which service functions are not required to be performed in real time. Additionally, the preferred service function chain may be selected based on selection criteria.
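By way of illustration only, the sketch below applies the two criteria named above to a toy chain; the `sf_t` structure, the example function names, and the boolean flags are assumptions made for the sketch, not the patented implementation.

```c
/* Sketch: flag service functions that could run in parallel because they are
 * neither on the chain's critical path nor required to run in real time.
 * The sf_t fields and the example chain are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *name;
    bool on_critical_path;   /* must stay in the serialized chain order   */
    bool real_time;          /* must process traffic inline, in real time */
} sf_t;

static bool parallel_eligible(const sf_t *sf)
{
    return !sf->on_critical_path && !sf->real_time;
}

int main(void)
{
    sf_t chain[] = {
        { "firewall",         true,  true  },
        { "nat",              true,  true  },
        { "lawful-intercept", false, false },
        { "traffic-logger",   false, false },
    };
    for (size_t i = 0; i < sizeof(chain) / sizeof(chain[0]); i++)
        printf("%-16s -> %s\n", chain[i].name,
               parallel_eligible(&chain[i]) ? "parallel candidate" : "serialized");
    return 0;
}
```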
Abstract:
Methods and systems may provide for determining a status of a mobile platform, wherein the status indicates whether the mobile platform is stationary, and adapting a detection schedule of one or more location sensors on the mobile platform based at least in part on whether the mobile platform is stationary. Additionally, one or more location updates may be generated based at least in part on information from the one or more location sensors. In one example, a location request is received, wherein the detection schedule is adapted further based on quality of service (QoS) information associated with the location request, and wherein the one or more location updates are generated in response to the location request.
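A minimal sketch of such schedule adaptation follows, assuming illustrative polling intervals and a `qos_deadline_ms` parameter standing in for the QoS information attached to a location request; none of these names come from the abstract.

```c
/* Sketch: stretch the location-sensor polling interval while the platform is
 * stationary, and tighten it again when a pending request carries a strict
 * QoS deadline. Interval values and the qos_deadline_ms field are assumptions. */
#include <stdbool.h>
#include <stdio.h>

#define FAST_POLL_MS   1000   /* platform moving: poll every second       */
#define SLOW_POLL_MS  60000   /* platform stationary: poll once a minute  */

static unsigned next_poll_interval_ms(bool stationary, unsigned qos_deadline_ms)
{
    unsigned interval = stationary ? SLOW_POLL_MS : FAST_POLL_MS;
    /* A pending location request with a tight deadline overrides the
     * stationary relaxation so the update arrives in time. */
    if (qos_deadline_ms && qos_deadline_ms < interval)
        interval = qos_deadline_ms;
    return interval;
}

int main(void)
{
    printf("moving, no request      : %u ms\n", next_poll_interval_ms(false, 0));
    printf("stationary, no request  : %u ms\n", next_poll_interval_ms(true, 0));
    printf("stationary, 5 s deadline: %u ms\n", next_poll_interval_ms(true, 5000));
    return 0;
}
```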
Abstract:
Methods and systems may provide for determining quality of service (QoS) information for a job associated with an application, and determining a condition prediction for a wireless channel of a mobile platform. Additionally, the job may be scheduled for communication over the wireless channel based at least in part on the QoS information and the condition prediction. In one example, scheduling the job includes imposing a delay in the communication if the condition prediction indicates that a throughput of the wireless channel is below a threshold and the delay complies with a latency constraint of the QoS information.
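The scheduling rule in the example lends itself to a short sketch; the throughput threshold, the proposed delay, and the `job_ctx_t` fields below are assumptions used only to illustrate the described condition.

```c
/* Sketch of the scheduling rule above: defer a transmission when the predicted
 * channel throughput is poor, but only if the deferral still meets the job's
 * latency budget. Threshold and field names are illustrative assumptions. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double   predicted_throughput_mbps;  /* channel condition prediction      */
    unsigned max_latency_ms;             /* QoS latency constraint of the job */
} job_ctx_t;

/* Returns the delay (ms) to impose before sending; 0 means send immediately. */
static unsigned schedule_delay_ms(const job_ctx_t *job,
                                  double throughput_threshold_mbps,
                                  unsigned proposed_delay_ms)
{
    bool channel_poor  = job->predicted_throughput_mbps < throughput_threshold_mbps;
    bool delay_allowed = proposed_delay_ms <= job->max_latency_ms;
    return (channel_poor && delay_allowed) ? proposed_delay_ms : 0;
}

int main(void)
{
    job_ctx_t video_upload = { .predicted_throughput_mbps = 1.5, .max_latency_ms = 2000 };
    printf("delay imposed: %u ms\n", schedule_delay_ms(&video_upload, 5.0, 1500));
    return 0;
}
```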
Abstract:
An intelligent cloud-aware computing distribution architecture for a device. A network conditions monitor is to observe and identify decision impact factors of tasks in a runtime environment. A dynamic profiler, coupled to the network conditions monitor, is to receive runtime information regarding the decision impact factors identified by the network conditions monitor and produce a profile based on the decision impact factors. Runtime offload decision-making logic is to process the profile produced by the dynamic profiler, based on the received decision impact factors, according to a predetermined policy, and to determine final offloading decisions based on the predetermined policy and the processed decision impact factors. The runtime offload decision-making logic is to provide the final offloading decisions to the applications on the device, which execute the tasks locally or remotely based on the final offloading decisions.
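A hedged sketch of one possible cost-based offload policy follows; the profiled fields, the transfer-time model, and the `min_speedup` threshold are assumptions, since the abstract does not specify the policy's form.

```c
/* Sketch of a runtime offload decision: combine profiled factors (network
 * round-trip time, estimated local and remote execution cost) with a simple
 * policy. Field names, cost model, and threshold are assumptions. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double rtt_ms;            /* observed by the network conditions monitor */
    double local_exec_ms;     /* profiled cost of running the task locally  */
    double remote_exec_ms;    /* profiled cost of running the task remotely */
    double payload_kb;        /* data that must be shipped for offloading   */
    double bandwidth_kbps;    /* current uplink bandwidth estimate          */
} task_profile_t;

/* Policy: offload only when the remote path (transfer + remote execution)
 * is expected to be meaningfully faster than local execution. */
static bool offload(const task_profile_t *p, double min_speedup)
{
    double transfer_ms     = (p->payload_kb / p->bandwidth_kbps) * 1000.0 + p->rtt_ms;
    double remote_total_ms = transfer_ms + p->remote_exec_ms;
    return p->local_exec_ms > remote_total_ms * min_speedup;
}

int main(void)
{
    task_profile_t face_detect = { 40.0, 900.0, 120.0, 256.0, 2000.0 };
    printf("decision: %s\n", offload(&face_detect, 1.5) ? "offload" : "run locally");
    return 0;
}
```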
Abstract:
Methods and apparatus implementing hardware/software co-optimization to improve performance and energy for inter-VM communication for NFVs and other producer-consumer workloads. The apparatus includes multi-core processors with multi-level cache hierarchies, including L1 and L2 caches for each core and a shared last-level cache (LLC). One or more machine-level instructions are provided for proactively demoting cachelines from lower cache levels to higher cache levels, including demoting cachelines from L1/L2 caches to an LLC. Techniques are also provided for implementing hardware/software co-optimization in multi-socket NUMA architecture systems, wherein cachelines may be selectively demoted and pushed to an LLC in a remote socket. In addition, techniques are disclosed for implementing early snooping in multi-socket systems to reduce latency when accessing cachelines on remote sockets.
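On recent x86 parts this kind of proactive demotion is exposed as the CLDEMOTE hint (the `_mm_cldemote` intrinsic). The producer sketch below only illustrates issuing that hint after publishing a cacheline; the ring-buffer layout is an assumption, the demo is single-threaded, and it assumes a GCC/Clang toolchain invoked with `-mcldemote` (on CPUs without the feature the hint executes as a NOP).

```c
/* Sketch: a producer uses the CLDEMOTE hint to push a just-written cacheline
 * from its L1/L2 toward the shared LLC so a consumer core can fetch it without
 * a cross-core snoop. Buffer layout is an illustrative assumption. */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

#define SLOT_BYTES 64   /* one cacheline per slot */

struct slot {
    uint64_t payload[7];
    uint64_t seq;                       /* consumer would poll this field */
} __attribute__((aligned(SLOT_BYTES))); /* GCC/Clang alignment attribute  */

static void produce(struct slot *s, uint64_t value, uint64_t seq)
{
    s->payload[0] = value;
    s->seq = seq;        /* publish the slot                  */
    _mm_cldemote(s);     /* hint: demote the line toward LLC  */
}

int main(void)
{
    static struct slot ring[4];
    for (uint64_t i = 0; i < 4; i++)
        produce(&ring[i], i * 100, i + 1);
    printf("produced %d slots and demoted their cachelines\n", 4);
    return 0;
}
```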
Abstract:
Various embodiments may be generally directed to full-duplex (FDX) communications on a wireless channel. More specifically, in various embodiments described herein, FDX communications may occur on a wireless channel between an FDX-capable device, such as an access point (AP), and two or more half-duplex (HDX) capable devices, such as a plurality of stations (STAs). For instance, the AP may transmit information to a first station (STA) via a wireless channel at the same time as receiving information from a second STA via the wireless channel. In some embodiments, the AP may arrange the FDX communications.
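As a rough illustration, an AP might pair stations for one full-duplex exchange as in the sketch below; the station table and the "most pending bytes" heuristic are assumptions, since the abstract only states that the AP may arrange the FDX communications.

```c
/* Sketch: pick one STA to receive downlink data while a *different* STA
 * (each STA is only half-duplex) transmits uplink on the same channel. */
#include <stdio.h>

typedef struct {
    int id;
    unsigned downlink_bytes;  /* queued at the AP for this STA  */
    unsigned uplink_bytes;    /* reported by the STA as pending */
} sta_t;

static void pick_fdx_pair(const sta_t *stas, int n, int *dl_sta, int *ul_sta)
{
    *dl_sta = *ul_sta = -1;
    unsigned best_dl = 0, best_ul = 0;
    for (int i = 0; i < n; i++)
        if (stas[i].downlink_bytes > best_dl) { best_dl = stas[i].downlink_bytes; *dl_sta = i; }
    for (int i = 0; i < n; i++) {
        if (i == *dl_sta) continue;   /* an HDX STA cannot send and receive at once */
        if (stas[i].uplink_bytes > best_ul) { best_ul = stas[i].uplink_bytes; *ul_sta = i; }
    }
}

int main(void)
{
    sta_t stas[] = { {1, 12000, 300}, {2, 500, 9000}, {3, 800, 200} };
    int dl, ul;
    pick_fdx_pair(stas, 3, &dl, &ul);
    if (dl >= 0 && ul >= 0)
        printf("downlink to STA %d while receiving uplink from STA %d\n",
               stas[dl].id, stas[ul].id);
    return 0;
}
```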
Abstract:
Technologies to monitor and manage platform, device, processor and power characteristics throughout a system utilizing a remote entity such as a controller node. By remotely monitoring and managing system operation and performance over time, future system performance requirements may be anticipated, allowing system parameters to be adjusted proactively in a more coordinated way. The controller node may monitor, control and predict traffic flows in the system and provide performance modification instructions to any of the computer nodes and a network switch to better optimize performance. The target systems collaborate with the controller node by monitoring internal resources, such as resource availability and performance requirements, to provide the resources needed to optimize the system's operating parameters. The controller node may collect local system information from one or all of the computer nodes to dynamically steer traffic to a specific set of computers for processing to meet desired performance and power requirements.
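A simplified sketch of such traffic steering follows; the per-node metrics, the 90% utilization guard, and the power-headroom check are illustrative assumptions rather than the claimed mechanism.

```c
/* Sketch: the controller node collects per-node utilization and power headroom,
 * then steers a new traffic flow to the least-loaded node that can absorb it
 * within its power budget. Metrics and scoring rule are assumptions. */
#include <stdio.h>

typedef struct {
    const char *name;
    double cpu_util;       /* 0.0 - 1.0, reported by the node       */
    double power_headroom; /* watts left under the node's power cap */
} node_t;

static int steer_flow(const node_t *nodes, int n, double est_load, double est_watts)
{
    int best = -1;
    double best_util = 1.0;
    for (int i = 0; i < n; i++) {
        if (nodes[i].cpu_util + est_load > 0.9) continue;   /* keep perf headroom */
        if (nodes[i].power_headroom < est_watts) continue;  /* keep power budget  */
        if (nodes[i].cpu_util < best_util) { best_util = nodes[i].cpu_util; best = i; }
    }
    return best;  /* -1 means no node can take the flow right now */
}

int main(void)
{
    node_t nodes[] = { {"node-a", 0.85, 30.0}, {"node-b", 0.40, 5.0}, {"node-c", 0.55, 25.0} };
    int target = steer_flow(nodes, 3, 0.2, 15.0);
    printf("steer flow to: %s\n", target >= 0 ? nodes[target].name : "none (defer)");
    return 0;
}
```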
Abstract:
In embodiments, apparatuses, methods and storage media (transitory and non-transitory) are described that are associated with end-to-end datacenter performance control. In various embodiments, an apparatus for computing may receive a datacenter performance target, determine an end-to-end datacenter performance level based at least in part on quality of service data collected from a plurality of nodes, and send a mitigation command based at least in part on a result of a comparison of the end-to-end datacenter performance level determined to the datacenter performance target. In various embodiments, the apparatus for computing may include one or more processors, a memory, a datacenter performance monitor to receive a datacenter performance target corresponding to a service level agreement, and a mitigation module to send a mitigation command based at least in part on a result of a comparison of an end-to-end datacenter performance level to a datacenter performance target.
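The compare-and-mitigate loop described above can be sketched briefly; summing per-hop latencies as the end-to-end figure and the specific mitigation message are assumptions made for illustration.

```c
/* Sketch: aggregate per-node QoS samples into an end-to-end figure, compare it
 * against the SLA-derived target, and emit a mitigation command on a miss.
 * The aggregation rule and the command text are illustrative assumptions. */
#include <stdio.h>

static double end_to_end_latency_ms(const double *per_node_ms, int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += per_node_ms[i];     /* request traverses each node once */
    return total;
}

int main(void)
{
    const double samples_ms[]  = { 1.8, 4.2, 0.9, 3.5 };  /* collected from 4 nodes        */
    const double sla_target_ms = 8.0;                     /* datacenter performance target */

    double e2e = end_to_end_latency_ms(samples_ms, 4);
    printf("end-to-end latency: %.1f ms (target %.1f ms)\n", e2e, sla_target_ms);
    if (e2e > sla_target_ms)
        printf("mitigation: throttle best-effort tenants on the slowest node\n");
    else
        printf("within target: no mitigation command sent\n");
    return 0;
}
```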
Abstract:
Systems and methods may provide for determining an absolute energy break-even time for a first low power state with respect to a current state of a system. A relative energy break-even time may also be determined for the first low power state with respect to a second low power state, based at least in part on the absolute energy break-even time. In addition, an operating state may be selected for the system based at least in part on the relative energy break-even time.
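A worked sketch of break-even arithmetic under common definitions follows; the power and transition-energy numbers, and the particular relative-break-even formula, are assumptions, as the abstract does not give the exact relationship.

```c
/* Sketch: the absolute break-even time of a low-power state is the minimum
 * residency for which entering it saves energy versus staying in the current
 * state, t_abs = E_transition / (P_current - P_state). The relative figure
 * compares the deeper state against the shallower one the same way.
 * All numeric values below are illustrative assumptions. */
#include <stdio.h>

typedef struct {
    const char *name;
    double power_w;       /* power drawn while resident in the state */
    double transition_j;  /* energy cost to enter and exit the state */
} pstate_t;

static double breakeven_s(double transition_j, double power_high_w, double power_low_w)
{
    return transition_j / (power_high_w - power_low_w);
}

int main(void)
{
    double current_power_w = 2.0;               /* active/current state */
    pstate_t shallow = { "C1", 0.8, 0.006 };
    pstate_t deep    = { "C6", 0.1, 0.120 };

    double abs_shallow = breakeven_s(shallow.transition_j, current_power_w, shallow.power_w);
    double abs_deep    = breakeven_s(deep.transition_j, current_power_w, deep.power_w);
    /* Relative break-even: extra idle time needed before the deep state beats the shallow one. */
    double rel_deep    = breakeven_s(deep.transition_j - shallow.transition_j,
                                     shallow.power_w, deep.power_w);

    printf("absolute break-even %s: %.1f ms\n", shallow.name, abs_shallow * 1e3);
    printf("absolute break-even %s: %.1f ms\n", deep.name, abs_deep * 1e3);
    printf("relative break-even %s vs %s: %.1f ms\n", deep.name, shallow.name, rel_deep * 1e3);
    return 0;
}
```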