Abstract:
Network entities may indicate and negotiate one or more new protocols for communications using a current protocol. Indications may include one or more protocols which are supported, one or more protocols which are preferred, and the relative strength of those preferences. Indications may further include schedules of times during which certain protocols are supported and/or schedules of functions for which certain protocols are preferred. Indications may be evaluated and acted upon immediately or stored for future reference. Evaluation may include comparison of the relative preference levels and needs of the various entities. Protocols may be messaging protocols, transport protocols, or combinations thereof.
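The comparison of relative preference levels described above can be sketched as a simple scoring step. The function name, the preference-weight encoding, and the additive scoring rule below are illustrative assumptions; the abstract does not fix any particular evaluation policy.

```python
def negotiate_protocol(local_prefs, remote_prefs):
    """Pick a mutually supported protocol.

    Each side supplies a dict mapping protocol name -> preference weight
    (a hypothetical encoding of the 'strength of preference' indication).
    The mutually supported protocol with the highest combined weight wins;
    None means no common protocol exists and the current protocol stays.
    """
    common = set(local_prefs) & set(remote_prefs)
    if not common:
        return None
    return max(common, key=lambda p: local_prefs[p] + remote_prefs[p])
```

A stored indication could simply be replayed through this function later, which matches the abstract's note that indications may be evaluated immediately or kept for future reference.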
Abstract:
The description relates to wireless protocol verification. One example can obtain information relating to a wireless protocol and receive information relating to wireless communications associated with a wireless device. The example can compare the wireless communications with the wireless protocol and generate a verification report that conveys whether the wireless communications comply with the wireless protocol.
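The compare-and-report step can be illustrated as checking observed frames against a rule set. The rule representation (predicates over frame dicts) and the report fields are assumptions for illustration only; the abstract does not specify how the wireless protocol or the verification report is encoded.

```python
def verify(protocol_rules, observed_frames):
    """Compare observed wireless communications with a protocol definition.

    protocol_rules: list of predicates, each returning True if a frame
    complies with one requirement of the protocol (hypothetical encoding).
    Returns a verification report conveying overall compliance and the
    offending frames, if any.
    """
    violations = [
        frame for frame in observed_frames
        if not all(rule(frame) for rule in protocol_rules)
    ]
    return {"compliant": not violations, "violations": violations}
```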
Abstract:
The invention provides a method and system for user-desired delay estimation for mobile-cloud applications, wherein the method comprises the steps of collecting data for a mobile application using at least one of a sensor, an application logger, and a user feedback module of a mobile device, and inferring the quality of experience based on the collected data. The method further comprises the steps of determining the desired delay by taking into account the quality of experience and, optionally, additional statistical data derived from the collected data, and offloading a task to the cloud together with the corresponding desired delay. Furthermore, the invention provides a method and system for resource allocation, wherein the method comprises the steps of receiving a task to be executed along with a corresponding desired delay from a mobile device, determining a resource allocation strategy based on the desired delay and the task to be processed, and executing the task using the allocated resources and sending the result to the mobile device.
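The pipeline above (infer quality of experience, map it to a delay budget, attach the budget to the offload request) can be sketched as follows. The mean-of-ratings inference, the linear QoE-to-delay mapping, and the request fields are all illustrative assumptions; the abstract leaves the inference model and the mapping open.

```python
def infer_qoe(ratings):
    """Stand-in for QoE inference: mean of user feedback ratings on a
    1-5 scale (the real system may also use sensor and logger data)."""
    return sum(ratings) / len(ratings)

def desired_delay_ms(qoe, base_ms=200.0):
    """Illustrative mapping: a lower QoE means users are dissatisfied,
    so the delay budget for offloaded tasks is tightened proportionally."""
    return base_ms * qoe / 5.0

def offload_request(task_id, ratings):
    """Package a task for the cloud together with its desired delay."""
    return {"task": task_id, "desired_delay_ms": desired_delay_ms(infer_qoe(ratings))}
```

On the cloud side, the received `desired_delay_ms` value would drive the resource allocation strategy, e.g. assigning more workers to tasks with tighter budgets.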
Abstract:
A multi-ring reliable messaging system is formed by interconnecting a plurality of token rings via a pair of gateways that includes an active gateway that is configured to communicate with the token rings and a standby gateway that also is configured to communicate with the token rings. The active gateway receives an original message via a first token ring, generates an associated message for a second token ring based on the original message, and propagates the associated message toward the second token ring. The active gateway supports total order delivery of messages within the token rings and causal-order delivery of messages between the token rings. The standby gateway monitors for original and associated messages received via the token rings in a manner for preventing loss of messages when the active gateway fails.
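The active/standby division of labor can be sketched as two small classes: the active gateway stamps each associated message with a gateway sequence number (one way to preserve causal order across rings), while the standby gateway tracks which originals have already produced an associated message so none are lost on failover. Class names, message fields, and the keying scheme are assumptions for illustration.

```python
class ActiveGateway:
    """Receives originals from one ring and generates associated
    messages for another (sequence-stamped, an illustrative ordering)."""
    def __init__(self):
        self.gw_seq = 0

    def forward(self, original, dest_ring):
        self.gw_seq += 1
        return {"gw_seq": self.gw_seq, "src_ring": original["ring"],
                "dest_ring": dest_ring, "payload": original["payload"]}

class StandbyGateway:
    """Monitors both originals and associated messages on the rings."""
    def __init__(self):
        self.originals = {}   # (ring, seq) -> original message
        self.forwarded = set()  # (ring, seq) keys already propagated

    def observe_original(self, original):
        self.originals[(original["ring"], original["seq"])] = original

    def observe_associated(self, key):
        self.forwarded.add(key)

    def pending_on_failover(self):
        """Originals the failed active gateway never propagated; the
        standby re-forwards these to prevent message loss."""
        return [m for k, m in self.originals.items() if k not in self.forwarded]
```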
Abstract:
Disclosed herein are methods, systems, and software for handling secure transport of data between end users and content serving devices. In one example, a method of operating a content server includes identifying a content request from an end user device. The method further includes, responsive to the content request, determining a transmission control protocol window size and a secure layer protocol block size. The method also includes scaling the secure layer protocol block size to match the transmission control protocol window size, and transferring secure layer protocol packets to the end user device using the scaled secure layer protocol block size.
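One plausible reading of "scaling the block size to match the window size" is choosing the record size so that a whole number of records exactly fills the TCP window, avoiding a partially filled record that would stall delivery. The policy below is one such sketch, not the method the disclosure necessarily uses; the 16384-byte ceiling reflects the common TLS record-size limit.

```python
import math

def scale_block_size(tcp_window, max_block=16384):
    """Scale the secure-layer record (block) size to the TCP window:
    pick the smallest number of whole records that covers the window,
    then size each record so the records tile the window evenly."""
    records = math.ceil(tcp_window / max_block)
    return math.ceil(tcp_window / records)
```

For a 20000-byte window this yields two 10000-byte records rather than one full 16384-byte record plus a small remainder.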
Abstract:
A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. A lookup front-end receives lookup requests from a host, and processes these lookup requests to generate key requests for forwarding to the lookup engines. As a result of the rule matching, the lookup engine returns a response message indicating whether a match is found. The lookup front-end further processes the response message and provides a corresponding response to the host.
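The request/response flow of the lookup engine can be illustrated with a minimal rule-matching function. The rule representation (an ID paired with a field-pattern dict) and the response fields are illustrative stand-ins for the hardware's key requests and response messages.

```python
def lookup(rules, packet_key):
    """Match a packet key against an ordered rule table.

    rules: list of (rule_id, pattern) pairs, where a pattern is a dict of
    field -> required value (hypothetical encoding of the stored rules).
    Returns a response message indicating whether a match was found.
    """
    for rule_id, pattern in rules:
        if all(packet_key.get(field) == value for field, value in pattern.items()):
            return {"match": True, "rule_id": rule_id}
    return {"match": False, "rule_id": None}
```

In the described architecture this work is distributed: the lookup front-end would derive `packet_key` from the host's lookup request, and each lookup engine would run the matching loop against rules held in its own on-chip memory.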
Abstract:
Apparatus, computer program products and methods allow inclusion and segmentation of multiple SDUs within a PDU, and generate information (such as header information) necessary to identify, e.g., the position of the segmented SDUs within the PDU. Prior knowledge of the typically used (i.e., predetermined) SDU size is used to perform segmentation operations. In one exemplary variant, apparatus, methods and computer program products determine if a particular one of a set of input data units (SDUs) is to be segmented in order to fit a portion of the set, including a segment of the particular input data unit, into an output data unit (PDU); segment the input data unit in response to determining the particular input data unit is to be segmented; add the portion of the set to a data portion of the output data unit; and add into the output data unit an indication of a position of the segment in the data portion. In another variant, apparatus, computer program products and methods receive first data units, each of the first data units including a plurality of data portions having a plurality of second data units, each of at least two of the first data units including an indication of a position of a segment of a second data unit in associated ones of the data portions; combine, using at least the indications, the segments to create a complete second data unit; and output the complete second data unit.
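The two variants above (segmenting SDUs into position-tagged PDUs, and reassembling complete SDUs from those position indications) can be sketched together. The per-segment header fields (`sdu`, `offset`) are an assumed layout; the actual header format described in the disclosure may differ.

```python
def segment(sdus, pdu_size):
    """Pack byte-string SDUs into PDUs of at most pdu_size payload bytes.

    Each segment carries its SDU id and byte offset, standing in for the
    'indication of a position of the segment in the data portion'."""
    pdus, current, room = [], [], pdu_size
    for sdu_id, data in enumerate(sdus):
        offset = 0
        while offset < len(data):
            take = min(room, len(data) - offset)
            current.append({"sdu": sdu_id, "offset": offset,
                            "bytes": data[offset:offset + take]})
            offset += take
            room -= take
            if room == 0:  # PDU full: emit it and start a new one
                pdus.append(current)
                current, room = [], pdu_size
    if current:
        pdus.append(current)
    return pdus

def reassemble(pdus):
    """Combine segments, using the position indications, into complete SDUs."""
    parts = {}
    for pdu in pdus:
        for seg in pdu:
            parts.setdefault(seg["sdu"], []).append((seg["offset"], seg["bytes"]))
    return [b"".join(b for _, b in sorted(chunks))
            for _, chunks in sorted(parts.items())]
```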
Abstract:
A network system which provides asymmetrical processing for networking functions and data path offload. A network interface unit is operably connected to a plurality of processing entities and a plurality of memory units that define a shared memory space. The network interface unit further comprises a memory access module that includes a plurality of memory access channels, a packet classifier, and a plurality of scheduling control modules that are operable to control processing of data transported by the network. In various embodiments of the invention, predetermined subsets of the plurality of processing entities are operably associated with predetermined subsets of the plurality of memory units thereby defining a plurality of asymmetrical data processing partitions. The packet classifier is operable to provide an association between packets and the plurality of asymmetrical data processing partitions. In various embodiments of the invention, the asymmetrical data processing partitions can comprise a plurality of processor cores, a single processor core, a combination of strands of an individual processor core or a single strand of an individual processor core. The asymmetrical data processing partitions are scalable by adding additional processing entities.
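The packet classifier's role (associating each packet with one of the asymmetrical partitions) can be sketched as a flow-keyed hash dispatch, which keeps all packets of a flow on the same partition. The flow-key choice and CRC32 hashing are illustrative assumptions; the disclosure does not specify the association policy.

```python
import zlib

def classify(packet, num_partitions):
    """Map a packet to an asymmetrical data processing partition index.

    Hashes a (src, dst) flow key so every packet of a flow lands on the
    same partition (one hypothetical classification policy)."""
    flow_key = f"{packet['src']}|{packet['dst']}".encode()
    return zlib.crc32(flow_key) % num_partitions
```

Because partitions may be whole cores, strand groups, or single strands, scaling out means only increasing `num_partitions`; the classifier itself is unchanged, consistent with the abstract's note that partitions scale by adding processing entities.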