Abstract:
Technologies for offloading data object replication and service function chain management include a switch communicatively coupled to one or more computing nodes capable of executing virtual machines and storing data objects. The switch is configured to determine metadata of a service function chain and transmit a network packet to a service function of the service function chain, executed by one or more of the computing nodes, for processing of the network packet. The switch is further configured to receive feedback from the service function, update the metadata based on the feedback, and transmit the network packet to the next service function of the service function chain. Additionally or alternatively, the switch is configured to identify a plurality of computing nodes (i.e., storage nodes) at which to store a received data object, replicate the data object based on the number of storage nodes, and transmit the received data object and each replicated data object to a different corresponding storage node. Other embodiments are described and claimed.
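A minimal sketch of the chain walk described in the abstract, assuming the service functions are exposed to the switch as callables that return the processed packet together with a feedback record; the function names, the feedback fields, and the early-termination flag are illustrative assumptions rather than the patented interface.

def forward_through_chain(packet, chain, metadata):
    """Walk the service function chain, updating chain metadata from feedback."""
    for service_fn in chain:
        packet, feedback = service_fn(packet)   # service function runs on a compute node
        metadata.update(feedback)               # fold feedback into the chain metadata
        if metadata.get("terminate"):           # feedback may end the chain early
            break
    return packet, metadata

# Placeholder service functions standing in for VMs on the computing nodes.
def firewall(pkt):
    return pkt, {"fw_checked": True}

def nat(pkt):
    return pkt, {"nat_applied": True}

if __name__ == "__main__":
    pkt, meta = forward_through_chain({"dst": "10.0.0.5"}, [firewall, nat], {})
    print(pkt, meta)

The replication path described in the second half of the abstract would follow the same pattern: pick the set of storage nodes, then fan the received object and its replicas out to them.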
Abstract:
Devices and techniques for hardware accelerated packet processing are described herein. A device can communicate with one or more hardware switches. The device can detect characteristics of a plurality of packet streams. The device may distribute the plurality of packet streams between the one or more hardware switches and software data plane components based on the detected characteristics of the plurality of packet streams, such that at least one packet stream is designated to be processed by the one or more hardware switches. Other embodiments are also described.
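A minimal sketch of the distribution decision, assuming flow statistics have already been collected; the StreamStats fields, the packets-per-second cutoff, and the switch names are assumptions chosen for illustration.

from dataclasses import dataclass

@dataclass
class StreamStats:
    flow_id: str
    packets_per_sec: float
    avg_packet_bytes: int

HW_OFFLOAD_PPS = 50_000   # assumed cutoff; a real policy could weigh several characteristics

def distribute_streams(streams, hw_switches):
    """Return {flow_id: target}, spreading heavy flows over the hardware switches."""
    placement, hw_index = {}, 0
    for s in streams:
        if hw_switches and s.packets_per_sec >= HW_OFFLOAD_PPS:
            placement[s.flow_id] = hw_switches[hw_index % len(hw_switches)]
            hw_index += 1
        else:
            placement[s.flow_id] = "software-data-plane"
    return placement

if __name__ == "__main__":
    streams = [StreamStats("flow-1", 120_000, 1400), StreamStats("flow-2", 800, 256)]
    print(distribute_streams(streams, ["hw-switch-0", "hw-switch-1"]))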
Abstract:
Technologies are disclosed for identifying a cache line of a network packet for eviction from an on-processor cache of a network device communicatively coupled to a network controller. The network device is configured to determine whether a cache line of the cache corresponding to the network packet is to be evicted from the cache, based on a determination that the network packet is not needed subsequent to processing of the network packet, and to provide an indication that the cache line is to be evicted from the cache based on an eviction policy received from the network controller.
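One way to picture the eviction hint is as a per-packet check run after processing; the policy fields and the mark_for_eviction call below are illustrative placeholders rather than a real driver or cache interface.

def should_evict(packet, policy):
    """Evict when the policy says packets of this class are not needed after processing."""
    return packet.get("traffic_class") in policy.get("evict_after_processing", set())

def process_packet(packet, policy, cache):
    handle(packet)                                      # normal packet processing
    if should_evict(packet, policy):
        cache.mark_for_eviction(packet["cache_line"])   # hint to the on-processor cache

def handle(packet):
    pass                                                # application-specific processing

class Cache:
    def mark_for_eviction(self, line):
        print(f"cache line {line:#x} marked for eviction")

if __name__ == "__main__":
    policy = {"evict_after_processing": {"forward-only"}}   # e.g. pushed by the network controller
    process_packet({"traffic_class": "forward-only", "cache_line": 0x7F00}, policy, Cache())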
Abstract:
Generally discussed herein are systems, devices, and methods for routing interests and/or content in an information centric network. A router can include a memory and routing circuitry coupled to the memory, the routing circuitry configured to receive a packet, receive one or more attributes including at least one of (1) a network attribute, (2) a platform attribute, and (3) a content attribute, determine which neighbor node is to receive the packet next based on the received one or more attributes, and forward the packet to the determined neighbor node.
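A minimal sketch of attribute-driven next-hop selection, assuming each neighbor is scored from network, platform, and content attributes; the weights and attribute names are assumptions for illustration.

def score_neighbor(neighbor, attrs, weights=(0.4, 0.3, 0.3)):
    w_net, w_plat, w_content = weights
    return (w_net * attrs["network"].get(neighbor, 0.0)        # e.g. link quality
            + w_plat * attrs["platform"].get(neighbor, 0.0)    # e.g. spare CPU or cache
            + w_content * attrs["content"].get(neighbor, 0.0)) # e.g. cache-hit likelihood

def next_hop(neighbors, attrs):
    return max(neighbors, key=lambda n: score_neighbor(n, attrs))

if __name__ == "__main__":
    attrs = {
        "network":  {"n1": 0.9, "n2": 0.6},
        "platform": {"n1": 0.2, "n2": 0.8},
        "content":  {"n1": 0.1, "n2": 0.9},
    }
    print(next_hop(["n1", "n2"], attrs))   # n2 wins on platform and content attributes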
Abstract:
Methods and systems may provide for determining a next active window for a platform and notifying one or more of a plurality of devices of the platform of the next active window being determined. Additionally, one or more of the plurality of devices may be notified of an onset of the next active window. In one example, a pre-warm message is issued to notify one or more of the plurality of devices of the next active window being determined.
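A small sketch of the two notifications, assuming the platform already knows when the next active window starts: devices receive a pre-warm message as soon as the window is determined and a second message at its onset. The message names and the scheduling helper are illustrative.

import threading
import time

def schedule_active_window(devices, window_start):
    for dev in devices:
        dev.notify("pre-warm", window_start)          # window has been determined
    delay = max(0.0, window_start - time.time())
    threading.Timer(delay, lambda: [d.notify("active-window-onset", window_start)
                                    for d in devices]).start()

class Device:
    def __init__(self, name):
        self.name = name
    def notify(self, message, when):
        print(f"{self.name}: {message} (window at t={when:.2f})")

if __name__ == "__main__":
    schedule_active_window([Device("nic"), Device("ssd")], time.time() + 1.0)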
Abstract:
Systems and methods may provide for aggregating a first idle duration from a first device associated with a platform and a second idle duration from a second device associated with the platform. Additionally, an idle state may be selected for the platform based at least in part on the first idle duration and the second idle duration. In one example, the idle durations are classified as deterministic, estimated or statistical.
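A minimal sketch, assuming each device reports how long it expects to remain idle and each platform idle state has a known transition latency; the state table and the take-the-minimum aggregation are illustrative assumptions.

IDLE_STATES = [        # (name, break-even / transition latency in ms)
    ("C1", 1),
    ("C3", 50),
    ("C6", 200),
]

def select_idle_state(idle_durations_ms):
    """Pick the deepest state whose latency still fits the shortest reported idle duration."""
    budget = min(idle_durations_ms)   # the platform can only idle as long as its busiest device
    chosen = IDLE_STATES[0][0]
    for name, latency in IDLE_STATES:
        if latency <= budget:
            chosen = name
    return chosen

if __name__ == "__main__":
    # first device reports a deterministic 300 ms idle, second an estimated 80 ms
    print(select_idle_state([300, 80]))   # -> "C3"

A statistical duration could be folded in by discounting it before aggregation, e.g. by using a lower confidence bound instead of the reported value.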
Abstract:
A method and apparatus for selectively parking routers used for routing traffic in mesh interconnects. Various router parking (RP) algorithms are disclosed, including an aggressive RP algorithm in which a minimum number of routers is kept active to ensure adequate network connectivity between active and/or intercommunicating nodes, yielding a maximum reduction in static power consumption, and a conservative RP algorithm that favors network latency considerations over static power consumption while still reducing power. An adaptive RP algorithm is also disclosed that combines aspects of the aggressive and conservative RP algorithms to balance power consumption against latency in response to ongoing node utilization and the associated traffic. The techniques may be implemented in internal network structures, such as single-chip computers, as well as external network structures, such as computing clusters and massively parallel computer architectures. Performance modeling has demonstrated that substantial power reduction can be obtained using the router parking techniques while maintaining Quality of Service performance objectives.
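A rough sketch of a conservative parking pass, assuming the mesh is given as an adjacency map: lightly used routers are parked one at a time, and a candidate is skipped if parking it would disconnect the routers that remain active. The connectivity check and the utilization threshold are illustrative, not the patented algorithms.

from collections import deque

def connected(adj, active):
    """True if every router in 'active' is reachable from every other through active routers."""
    if not active:
        return True
    start = next(iter(active))
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr in active and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == active

def conservative_park(adj, utilization, threshold=0.05):
    active, parked = set(adj), set()
    for router in sorted(adj, key=lambda r: utilization.get(r, 0.0)):
        if utilization.get(router, 0.0) < threshold:
            candidate = active - {router}
            if connected(adj, candidate):   # never trade connectivity for static power
                active, parked = candidate, parked | {router}
    return parked

if __name__ == "__main__":
    mesh = {"r0": {"r1", "r2"}, "r1": {"r0", "r3"}, "r2": {"r0", "r3"}, "r3": {"r1", "r2"}}
    util = {"r0": 0.40, "r1": 0.01, "r2": 0.30, "r3": 0.02}
    print(conservative_park(mesh, util))    # r1 and r3 can be parked; r0 and r2 stay active

An aggressive variant would instead keep only the minimum set of routers needed to connect the intercommunicating nodes, and an adaptive variant would move between the two based on observed utilization and traffic.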
Abstract:
Devices and methods for optimizing semi-active workloads are described herein. A network interface device may be configured to offload data packet acknowledgment responsibilities of a host platform by transmitting, to the sender of the packets, acknowledgements of packets received throughout a time duration. Upon completion of the time duration, the network interface device may trigger the host platform to perform batch processing of the data packets received during the time duration.
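A minimal simulation of the offload, in which the "NIC" acknowledges each incoming packet itself and only wakes the host once per time window to batch-process whatever was buffered in that window; all class and method names are illustrative placeholders.

import time

class OffloadingNic:
    def __init__(self, host, window_sec=0.5):
        self.host = host
        self.window_sec = window_sec
        self.buffer = []
        self.window_end = time.monotonic() + window_sec

    def on_packet(self, packet, sender):
        sender.ack(packet["seq"])                 # ACK immediately; the host stays asleep
        self.buffer.append(packet)
        if time.monotonic() >= self.window_end:   # window expired: hand the host a batch
            self.host.process_batch(self.buffer)
            self.buffer = []
            self.window_end = time.monotonic() + self.window_sec

class Host:
    def process_batch(self, packets):
        print(f"host woke up to process {len(packets)} packets")

class Sender:
    def ack(self, seq):
        print(f"ack {seq}")

if __name__ == "__main__":
    nic, sender = OffloadingNic(Host(), window_sec=0.2), Sender()
    for seq in range(5):
        nic.on_packet({"seq": seq}, sender)
        time.sleep(0.06)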
Abstract:
A mechanism is described for facilitating dynamic and remote memory collaboration at computing devices according to one embodiment of the invention. A method of embodiments of the invention includes dynamically classifying a computing device of a plurality of computing devices as a memory server, where the plurality of computing devices are coupled to each other over a network. The method may further include offering, by the memory server, memory to be used by one or more of the plurality of computing devices classified as memory clients, and remotely granting, by the memory server, the memory to the one or more memory clients.
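A minimal sketch of the classification-and-grant flow, assuming nodes advertise their free memory and the node with the largest surplus is classified as the memory server; the bookkeeping below is illustrative and not a wire protocol.

def classify_memory_server(nodes):
    """nodes: {name: free_bytes}. The node with the most spare memory becomes the server."""
    return max(nodes, key=nodes.get)

def grant_memory(free_bytes, requests):
    """Grant each memory client up to what remains of the server's offered memory."""
    grants, remaining = {}, free_bytes
    for client, wanted in requests.items():
        granted = min(wanted, remaining)
        if granted:
            grants[client] = granted
            remaining -= granted
    return grants

if __name__ == "__main__":
    nodes = {"node-a": 8 << 30, "node-b": 1 << 30, "node-c": 2 << 30}
    server = classify_memory_server(nodes)                       # -> "node-a"
    print(server, grant_memory(nodes[server],
                               {"node-b": 3 << 30, "node-c": 2 << 30}))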