Abstract:
Embodiments of the invention are generally directed to methods, apparatuses, and systems for the dynamic evaluation and delegation of network access control. In an embodiment, a platform includes a switch to control a network connection and an endpoint enforcement engine coupled with the switch. The endpoint enforcement engine may be capable of dynamically switching among a number of network access control modes responsive to an instruction received from the network connection.
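A minimal sketch of that arrangement follows; the mode names and the instruction format are assumptions for illustration, since the abstract only says the engine switches among a number of network access control modes on instruction from the network connection.

```python
# Hypothetical mode names; illustration only, not the claimed implementation.
from enum import Enum


class NacMode(Enum):
    SELF_ENFORCED = "self_enforced"        # endpoint enforces its own policy
    NETWORK_ENFORCED = "network_enforced"  # enforcement delegated to the network
    QUARANTINE = "quarantine"              # link restricted to remediation traffic


class Switch:
    """Stand-in for the switch that controls the network connection."""
    def apply_policy(self, mode: NacMode) -> None:
        print(f"switch enforcing policy for mode: {mode.value}")


class EndpointEnforcementEngine:
    """Toy enforcement engine coupled with the switch."""
    def __init__(self, switch: Switch):
        self.switch = switch
        self.mode = NacMode.SELF_ENFORCED

    def handle_instruction(self, instruction: str) -> None:
        # Dynamically switch modes in response to an instruction received
        # over the network connection; unknown instructions are ignored.
        try:
            self.mode = NacMode(instruction)
        except ValueError:
            return
        self.switch.apply_policy(self.mode)


engine = EndpointEnforcementEngine(Switch())
engine.handle_instruction("quarantine")
```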
Abstract:
Embodiments of the invention are generally directed to systems, methods, and apparatuses for controlling a network connection based, at least in part, on dual-switching. In an embodiment, a tunnel proxy is coupled with a host execution environment. The tunnel proxy includes logic to provide a security protocol client and logic to provide a security protocol server. In one embodiment, the tunnel proxy provides a proxy for a policy decision point to the host execution environment. Other embodiments are described and claimed.
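As a hedged sketch only (the class names and the posture/decision exchange below are assumptions, not the claimed protocol), the proxy can be pictured as running a security-protocol server toward the host execution environment and a security-protocol client toward the real policy decision point, so the host sees the proxy as its PDP.

```python
# Illustrative only: the proxy terminates the host-facing leg of the security
# protocol and originates the network-facing leg toward the policy decision
# point (PDP).
class SecurityProtocolServer:
    """Host-facing side of the security protocol."""
    def receive_posture(self, host) -> dict:
        return host.report_posture()


class SecurityProtocolClient:
    """Network-facing side toward the policy decision point."""
    def forward_posture(self, pdp, posture: dict) -> str:
        return pdp.evaluate(posture)


class TunnelProxy:
    def __init__(self, pdp):
        self.server = SecurityProtocolServer()
        self.client = SecurityProtocolClient()
        self.pdp = pdp

    def broker(self, host) -> str:
        # Collect posture from the host, relay it to the real PDP, and
        # return the access decision to the host side.
        posture = self.server.receive_posture(host)
        return self.client.forward_posture(self.pdp, posture)


class DemoHost:
    def report_posture(self):
        return {"av_enabled": True, "patch_level": 42}


class DemoPDP:
    def evaluate(self, posture):
        return "allow" if posture.get("av_enabled") else "quarantine"


print(TunnelProxy(DemoPDP()).broker(DemoHost()))  # -> allow
```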
Abstract:
Various systems and methods for providing intent-based workload orchestration are described herein. A data center system may include a plurality of compute nodes and an orchestration node. The orchestration node may be configured to identify a workload for execution on the plurality of compute nodes; identify intents that define requirements for the execution of the workload on the plurality of compute nodes; monitor the execution of the workload to produce monitoring data; and control the execution of the workload based on the intents and the monitoring data, to dynamically adapt to changed conditions during the execution of the workload.
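One way to read that control loop, sketched below with invented intent names and thresholds (the abstract does not specify them): each iteration checks the monitoring data against the declared intents and adapts the placement only when an intent is violated.

```python
# Invented intents and metrics; the abstract only says execution is controlled
# based on intents and monitoring data to adapt to changed conditions.
intents = {"p99_latency_ms": 50, "max_node_utilization": 0.8}


def violates(intents: dict, monitoring: dict) -> bool:
    return (monitoring["p99_latency_ms"] > intents["p99_latency_ms"]
            or monitoring["node_utilization"] > intents["max_node_utilization"])


def orchestrate_step(placement, intents, monitoring, rebalance):
    # One control-loop iteration: adapt the placement only when monitoring
    # shows an intent violation; otherwise leave the workload where it is.
    if violates(intents, monitoring):
        return rebalance(placement, monitoring)
    return placement


# Example iteration with a deliberately trivial rebalance policy.
placement = {"workload-1": "node-a"}
monitoring = {"p99_latency_ms": 75, "node_utilization": 0.6}
rebalance = lambda p, m: {w: "node-b" for w in p}  # toy: move everything
print(orchestrate_step(placement, intents, monitoring, rebalance))
```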
Abstract:
Various systems and methods for providing secure and reliable node lifecycle in elastic workloads are described herein. A compute node may be configured to: receive data describing a first elastic workload of a plurality of elastic workloads, the first elastic workload to execute on a first virtual execution environment, the first virtual execution environment associated with a first security context; determine a common resource that is used by the plurality of elastic workloads; store the common resource in a memory accessible by the first virtual execution environment; and execute the first elastic workload, wherein the first elastic workload has access to the common resource, and wherein the plurality of elastic workloads is executed in isolation from one another based on respective security contexts.
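A minimal sketch of one possible reading, with invented class and field names (the abstract does not name the execution environment or say how the security context is represented): the node caches one copy of the common resource, and each workload runs against it inside its own execution environment and security context.

```python
from dataclasses import dataclass, field


@dataclass
class VirtualExecutionEnvironment:
    security_context: str
    memory: dict = field(default_factory=dict)

    def run(self, workload, common_resource):
        # The workload can read the common resource, but it stays isolated
        # from other workloads by its own security context and memory.
        self.memory["common"] = common_resource
        return workload(self.memory)


def execute_elastic_workloads(workloads, load_common_resource):
    common = load_common_resource()            # resource shared by all workloads
    results = []
    for i, workload in enumerate(workloads):
        env = VirtualExecutionEnvironment(security_context=f"ctx-{i}")
        results.append(env.run(workload, common))
    return results


# Example: two isolated workloads reading the same shared resource.
workloads = [lambda mem: len(mem["common"]), lambda mem: mem["common"][:6]]
print(execute_elastic_workloads(workloads, lambda: "shared-model-bytes"))
```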
Abstract:
Various systems and methods for providing Monte Carlo as a service are described herein. A networked computing device may be configured to receive data describing an elastic workload that is partitioned among multiple nodes, execute a Monte Carlo simulation using at least a portion of the data describing the elastic workload to obtain a workload configuration that distributes the elastic workload over a plurality of nodes, and present the workload configuration.
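As a toy sketch only (the objective and cost model below are assumptions; the abstract does not define them), a Monte Carlo pass can sample random assignments of workload partitions to nodes and keep the cheapest sampled configuration.

```python
import random


def monte_carlo_placement(partitions, nodes, cost, trials=10_000):
    # Sample random partition-to-node assignments and keep the lowest-cost one.
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {p: random.choice(nodes) for p in partitions}
        c = cost(cfg)
        if c < best_cost:
            best_cfg, best_cost = cfg, c
    return best_cfg


# Example cost: minimize the number of partitions on the busiest node.
partitions = [f"part-{i}" for i in range(8)]
nodes = ["node-a", "node-b", "node-c"]
cost = lambda cfg: max(sum(1 for n in cfg.values() if n == node) for node in nodes)
print(monte_carlo_placement(partitions, nodes, cost, trials=1000))
```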
Abstract:
Various systems and methods for providing consensus-based named function execution are described herein. A system is configured to access an interest packet received from a user device, the interest packet including a function name of a function and a data payload; broadcast the interest packet to a plurality of compute nodes, wherein the plurality of compute nodes are configured to execute a respective instance of the function; receive a plurality of responses from the plurality of compute nodes, the plurality of responses including respective results of the execution of the respective instances of the function; analyze the plurality of responses using a consensus protocol to identify a consensus result; and transmit the consensus result to the user device.
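A minimal sketch of that flow, assuming a simple majority vote in place of whatever consensus protocol a deployment would actually use:

```python
# The interest packet is modeled as a dict; node callables stand in for the
# compute nodes that each execute an instance of the named function.
from collections import Counter


def execute_with_consensus(interest, compute_nodes):
    responses = [node(interest["function_name"], interest["payload"])
                 for node in compute_nodes]
    winner, votes = Counter(responses).most_common(1)[0]
    if votes <= len(responses) // 2:
        raise RuntimeError("no majority among node responses")
    return winner


# Example: three replicas of a "square" function, one of them faulty.
nodes = [lambda name, x: x * x, lambda name, x: x * x, lambda name, x: x + 1]
print(execute_with_consensus({"function_name": "square", "payload": 4}, nodes))  # 16
```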
Abstract:
The present disclosure is related to managing a caching system based on object fetch costs, where the fetch costs are based on the access latency, cache misses, and time to reuse of individual objects. The caching system may be a multi-tiered caching system that includes multiple storage tiers, where an object management system determines whether to retain or evict an object from a cache of a particular storage tier based on the object's fetch cost. Additionally, eviction can include moving objects from their current storage tier to another storage tier based on the current storage tier and fetch costs.
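The weighting below is an assumption (the abstract names the inputs to the fetch cost, not a formula); the sketch ranks objects by fetch cost and demotes the cheapest-to-refetch ones to the next storage tier rather than deleting them.

```python
def fetch_cost(access_latency_ms, miss_rate, time_to_reuse_s):
    # Illustrative weighting: higher latency and miss rate raise the cost of
    # losing the object; a long time to reuse lowers it.
    return (access_latency_ms * (1.0 + miss_rate)) / max(time_to_reuse_s, 1.0)


def evict(tier_objects, stats, capacity, next_tier):
    """Demote lowest-fetch-cost objects from this tier into next_tier."""
    ranked = sorted(tier_objects, key=lambda o: fetch_cost(*stats[o]))
    while len(tier_objects) > capacity:
        victim = ranked.pop(0)
        tier_objects.remove(victim)
        next_tier.add(victim)     # moved to another tier, not dropped outright
    return tier_objects, next_tier


# Example: (access_latency_ms, miss_rate, time_to_reuse_s) per object.
tier1 = ["obj-a", "obj-b", "obj-c"]
stats = {"obj-a": (2, 0.1, 10), "obj-b": (200, 0.5, 5), "obj-c": (20, 0.2, 60)}
print(evict(tier1, stats, capacity=2, next_tier=set()))
```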
Abstract:
Systems and techniques for fault-tolerant telemetry of distributed devices are described herein. A node includes a hardware component that receives telemetry from an entity resident on the node. The hardware component signs the telemetry with a cryptographic key to create signed telemetry and stores the signed telemetry in memory of the hardware component. Then, upon request from a remote entity, the hardware component provides the signed telemetry.
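An illustrative sketch only, using an HMAC in place of whatever key and signature scheme the hardware component would actually use: telemetry is signed as it arrives, retained in the component's memory, and handed out on request from a remote entity.

```python
import hashlib
import hmac
import json


class TelemetryComponent:
    """Stand-in for the hardware component holding the cryptographic key."""

    def __init__(self, key: bytes):
        self._key = key
        self._store = []          # signed telemetry kept in component memory

    def record(self, telemetry: dict) -> None:
        blob = json.dumps(telemetry, sort_keys=True).encode()
        signature = hmac.new(self._key, blob, hashlib.sha256).hexdigest()
        self._store.append({"telemetry": telemetry, "signature": signature})

    def provide(self) -> list:
        # Served upon request from a remote entity.
        return list(self._store)


comp = TelemetryComponent(key=b"device-secret")
comp.record({"cpu_temp_c": 61, "uptime_s": 12345})
print(comp.provide())
```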