Abstract:
Technologies for supporting concurrency of a flow lookup table at a network device. The flow lookup table includes a plurality of candidate buckets, each of which includes one or more entries. The network device includes a flow lookup table write module configured to perform a displacement operation that moves a key/value pair from one bucket to another via an atomic instruction and to increment a version counter associated with each bucket affected by the displacement operation. The network device additionally includes a flow lookup table read module to check the version counters during a lookup operation on the flow lookup table to determine whether a displacement operation is concurrently affecting the buckets being read. Other embodiments are described herein and claimed.
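The version-counter check described here resembles a seqlock-style optimistic read. Below is a minimal sketch, assuming a cuckoo-style table with two candidate buckets and fixed-size entries; the structure layout, field names, and the use of plain copies guarded by version increments (rather than the claimed atomic displacement instruction) are illustrative assumptions, not the patented implementation.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define ENTRIES_PER_BUCKET 4

struct entry {
    uint64_t key;
    uint64_t value;
    bool occupied;
};

struct bucket {
    atomic_uint version;              /* odd while a displacement is in flight */
    struct entry slots[ENTRIES_PER_BUCKET];
};

/* Writer: move one key/value pair from bucket `src` (slot si) to bucket
 * `dst` (slot di), bumping both buckets' version counters before and after
 * so readers can detect the concurrent displacement. */
static void displace(struct bucket *src, int si, struct bucket *dst, int di)
{
    atomic_fetch_add(&src->version, 1);
    atomic_fetch_add(&dst->version, 1);

    dst->slots[di] = src->slots[si];  /* copy the pair into the new bucket */
    src->slots[si].occupied = false;  /* free the old slot */

    atomic_fetch_add(&src->version, 1);
    atomic_fetch_add(&dst->version, 1);
}

/* Reader: search both candidate buckets; retry if either version counter is
 * odd (displacement in flight) or changed while the slots were being read. */
static bool lookup(struct bucket *b0, struct bucket *b1,
                   uint64_t key, uint64_t *value)
{
    for (;;) {
        unsigned v0 = atomic_load(&b0->version);
        unsigned v1 = atomic_load(&b1->version);
        if ((v0 | v1) & 1u)
            continue;                 /* displacement in progress: retry */

        bool found = false;
        for (int i = 0; i < ENTRIES_PER_BUCKET && !found; i++) {
            if (b0->slots[i].occupied && b0->slots[i].key == key) {
                *value = b0->slots[i].value;
                found = true;
            } else if (b1->slots[i].occupied && b1->slots[i].key == key) {
                *value = b1->slots[i].value;
                found = true;
            }
        }

        if (v0 == atomic_load(&b0->version) &&
            v1 == atomic_load(&b1->version))
            return found;             /* versions unchanged: consistent snapshot */
    }
}
```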
Abstract:
Method and apparatus for reliable multicast communication over a wireless network are provided. According to embodiments of the invention, the method includes determining a priority category for a multicast communication to be transmitted. The method also includes designating, for the multicast communication, one of the multicast communication recipients as a leader based on the priority category and on multicast diagnostics information received from the recipients. The leader is assigned to transmit to the multicast communication source an acknowledgment frame indicating receipt of a multicast frame from that source.
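As a rough illustration of how such a leader might be chosen, the sketch below assumes the diagnostics reported by each recipient include a signal-strength and frame-loss figure; the field names, priority categories, and the selection heuristic (weakest receiver for high-priority traffic, strongest otherwise) are assumptions for illustration, not the claimed method.

```c
#include <stddef.h>
#include <stdint.h>

enum priority { PRIO_BEST_EFFORT, PRIO_HIGH };

struct recipient_diag {
    uint16_t sta_id;      /* receiver identifier */
    int8_t   rssi_dbm;    /* reported signal strength */
    uint8_t  loss_pct;    /* reported frame-loss percentage */
};

/* For high-priority traffic, pick the receiver with the weakest link so that
 * its ACK implies the rest of the group very likely received the frame; for
 * best-effort traffic, pick the strongest receiver to minimize retries.
 * Requires n >= 1; returns the chosen leader's sta_id. */
static uint16_t select_leader(const struct recipient_diag *d, size_t n,
                              enum priority prio)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++) {
        int worse = d[i].loss_pct > d[best].loss_pct ||
                    (d[i].loss_pct == d[best].loss_pct &&
                     d[i].rssi_dbm < d[best].rssi_dbm);
        if ((prio == PRIO_HIGH) ? worse : !worse)
            best = i;
    }
    return d[best].sta_id;
}
```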
Abstract:
This disclosure is directed to data prioritization, storage, and protection in a vehicular communication system. A black box (BB) in a vehicle may receive data from an on-board unit (OBU) and a vehicular control architecture (VCA). The OBU may interact with at least one roadside unit (RSU) that is part of an intelligent transportation system (ITS) via at least two channels, at least one of which is reserved for high-priority messages. The OBU may transmit ITS data to the BB via a secure communication channel, and the ITS data may be stored in encrypted form along with vehicular data received from the VCA. In response to a request for data, the BB may authenticate the requesting party, determine at least part of the stored data to which the authenticated party is allowed access, and sign that data before providing it to the authenticated party.
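A hedged skeleton of the authenticate-then-sign release flow is sketched below; the record layout, access levels, and the authenticate/sign helpers are hypothetical placeholders standing in for the disclosure's mechanisms, not an API it defines.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct record {
    uint8_t     min_access_level;   /* level required to read this record */
    const void *ciphertext;         /* data is stored in encrypted form */
    size_t      len;
};

/* Hypothetical stand-in: verify the requester's credential and return its
 * access level, or -1 when authentication fails. */
static int authenticate_party(const void *credential, size_t cred_len)
{
    (void)credential; (void)cred_len;
    return 1;                       /* placeholder result */
}

/* Hypothetical stand-in: sign `buf` with the black box's key. */
static bool sign_buffer(const void *buf, size_t len, uint8_t sig[64])
{
    (void)buf; (void)len; (void)sig;
    return true;                    /* placeholder result */
}

/* Authenticate the requester, select only the records it is allowed to
 * receive, sign each selected record, and report how many were released. */
static size_t release_records(const struct record *rec, size_t n,
                              const void *credential, size_t cred_len,
                              const struct record **out, uint8_t (*sig)[64])
{
    int level = authenticate_party(credential, cred_len);
    if (level < 0)
        return 0;                   /* authentication failed: release nothing */

    size_t released = 0;
    for (size_t i = 0; i < n; i++) {
        if (rec[i].min_access_level <= (uint8_t)level &&
            sign_buffer(rec[i].ciphertext, rec[i].len, sig[released]))
            out[released++] = &rec[i];
    }
    return released;
}
```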
Abstract:
Example apparatus to perform service failover as disclosed herein are to detect a failure condition associated with execution of a service by a first compute platform, the execution of the service responsive to a first request. Disclosed example apparatus are also to send a second request to a second compute platform to execute the service. Disclosed example apparatus are further to monitor a queue of the first compute platform for a response to the first request, the response to indicate that execution of the service by the first compute platform has completed, and, when the response is detected in the queue, discard the response from the queue.
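A compact sketch of the monitor-and-discard step follows; the queue layout, request-identifier field, and helper name are assumptions made for illustration, not elements of the disclosed apparatus.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define QUEUE_DEPTH 64

struct response {
    uint64_t request_id;            /* identifies the request it answers */
    bool     valid;                 /* slot currently holds a response */
};

struct response_queue {
    struct response slots[QUEUE_DEPTH];
};

/* After the service request has been re-issued to the second compute
 * platform, keep scanning the first platform's queue; if the late response
 * to the original request arrives, drop it so it is not acted on twice.
 * Returns true when a stale response was found and discarded. */
static bool discard_stale_response(struct response_queue *q,
                                   uint64_t original_request_id)
{
    for (size_t i = 0; i < QUEUE_DEPTH; i++) {
        if (q->slots[i].valid &&
            q->slots[i].request_id == original_request_id) {
            q->slots[i].valid = false;   /* discard the response */
            return true;
        }
    }
    return false;
}
```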
Abstract:
In the present disclosure, functions associated with the central office of an evolved packet core network are co-located onto a computer platform or its sub-components through virtualized function instances. This reduces or eliminates the physical interfaces between pieces of equipment and permits functional operation of the evolved packet core to occur at the network edge.
Abstract:
At least one machine readable medium comprising a plurality of instructions that, in response to being executed by a system, cause the system to send a unique identifier to a license server, establish a secure channel based on the unique identifier, request a license for activating an appliance from the license server over the secure channel, receive license data from the license server over the secure channel, determine whether the license data is valid, and activate the appliance in response to a determination that the license data is valid.
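For orientation, the activation sequence can be read as the linear flow sketched below; the function names, data types, and channel handle are hypothetical stand-ins for the claimed operations, not an actual activation API.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct secure_channel { int handle; };             /* placeholder channel handle */
struct license { uint8_t blob[256]; size_t len; }; /* placeholder license data */

/* Hypothetical stand-ins for the claimed operations. */
static bool send_unique_id(const uint8_t *id, size_t id_len)
{ (void)id; (void)id_len; return true; }

static bool open_secure_channel(const uint8_t *id, size_t id_len,
                                struct secure_channel *ch)
{ (void)id; (void)id_len; ch->handle = 0; return true; }

static bool request_license(struct secure_channel *ch, struct license *lic)
{ (void)ch; lic->len = 0; return true; }

static bool license_is_valid(const struct license *lic)
{ (void)lic; return true; }

static void activate_appliance(void)
{ /* flip the appliance into its activated state */ }

/* The claimed sequence, read as a linear flow: identify, secure the channel,
 * request and receive the license data, validate it, then activate. */
static bool activate_with_license(const uint8_t *unique_id, size_t id_len)
{
    struct secure_channel ch;
    struct license lic;

    if (!send_unique_id(unique_id, id_len))
        return false;
    if (!open_secure_channel(unique_id, id_len, &ch))
        return false;
    if (!request_license(&ch, &lic))
        return false;
    if (!license_is_valid(&lic))
        return false;

    activate_appliance();
    return true;
}
```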
Abstract:
Methods, apparatus, systems, and articles of manufacture providing tiered elastic cloud storage to increase data resiliency are disclosed. Example instructions cause one or more processors to at least: generate a storage scheme for files based on a categorization of the files and on the resource capabilities of an edge-based device and a cloud-based device, the categorization including a first group of files to be stored locally at an end user computing device, a second group of files to be stored externally at the edge-based device, and a third group of files to be stored externally at the cloud-based device; in response to an acknowledgement from at least one of the edge-based device or the cloud-based device, generate a map corresponding to the locations of the files; store the first group of files in local storage; and cause transmission of the second group of files to the edge-based device and the third group of files to the cloud-based device.
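A small sketch of one way such a three-way categorization and placement map might be built follows; the access-frequency and size thresholds, file attributes, and tier names are illustrative assumptions, not the disclosed scheme.

```c
#include <stddef.h>
#include <stdint.h>

enum tier { TIER_LOCAL, TIER_EDGE, TIER_CLOUD };

struct file_info {
    uint64_t size_bytes;
    uint32_t accesses_per_day;
};

struct placement { enum tier where; };   /* one entry per file: the "map" */

/* Frequently used, modestly sized files stay local, moderately used files go
 * to the nearby edge device, and rarely used files go to cloud storage. */
static void build_storage_scheme(const struct file_info *files, size_t n,
                                 struct placement *map)
{
    for (size_t i = 0; i < n; i++) {
        if (files[i].accesses_per_day >= 10 &&
            files[i].size_bytes < (64u << 20))      /* under 64 MiB */
            map[i].where = TIER_LOCAL;
        else if (files[i].accesses_per_day >= 1)
            map[i].where = TIER_EDGE;
        else
            map[i].where = TIER_CLOUD;
    }
}
```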
Abstract:
Methods and apparatus implementing hardware/software co-optimization to improve performance and energy for inter-VM communication for NFVs and other producer-consumer workloads. The apparatus include multi-core processors with multi-level cache hierarchies comprising an L1 and L2 cache for each core and a shared last-level cache (LLC). One or more machine-level instructions are provided for proactively demoting cachelines from lower cache levels to higher cache levels, including demoting cachelines from L1/L2 caches to the LLC. Techniques are also provided for implementing hardware/software co-optimization in multi-socket NUMA architecture systems, wherein cachelines may be selectively demoted and pushed to an LLC in a remote socket. In addition, techniques are disclosed for implementing early snooping in multi-socket systems to reduce latency when accessing cachelines on remote sockets.
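For context, recent x86 CPUs expose such a demotion hint as the CLDEMOTE instruction, reachable through the `_mm_cldemote` intrinsic. The sketch below shows the producer side of a producer-consumer handoff, assuming a CLDEMOTE-capable CPU and compilation with `-mcldemote`; the buffer layout and loop are illustrative, not the disclosed mechanism.

```c
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

#define CACHELINE 64

/* Producer fills a message buffer, then demotes each written cacheline from
 * its private L1/L2 toward the shared LLC so a consumer on another core can
 * fetch the data from the LLC instead of snooping the producer's caches. */
static void produce_and_demote(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (uint8_t)i;                 /* produce the payload */

    for (size_t off = 0; off < len; off += CACHELINE)
        _mm_cldemote(buf + off);             /* hint: demote this line to LLC */
}
```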