Abstract:
A statistical transition map is built from mobile wireless device user mobility history data. This data is useful for assisting various wireless local area network applications. Received signal strength and location trace information associated with the movements of mobile wireless devices in a wireless network is collected. The received signal strength and location trace information is converted to a sequence of natural language pseudo-location word labels representing the pseudo-locations of each mobile wireless device as it moves about with respect to a plurality of wireless access point devices in the wireless network. A statistical transition map is generated for each mobile wireless device from its sequence of natural language pseudo-location word labels using a natural language model. A probability of a next pseudo-location for a particular mobile wireless device is computed based on its current location and its statistical transition map.
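A first-order Markov (bigram) model is one simple way to realize such a statistical transition map; the abstract does not specify which natural language model is used. The sketch below, with a hypothetical trace and label names, builds per-device transition probabilities from a sequence of pseudo-location word labels and queries the distribution over the next pseudo-location.

```python
from collections import defaultdict

def build_transition_map(pseudo_locations):
    """Build a per-device transition map from a sequence of
    pseudo-location word labels (e.g. labels derived from RSSI traces)."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(pseudo_locations, pseudo_locations[1:]):
        counts[current][nxt] += 1
    # Normalize the transition counts into probabilities.
    transition_map = {}
    for current, nexts in counts.items():
        total = sum(nexts.values())
        transition_map[current] = {loc: n / total for loc, n in nexts.items()}
    return transition_map

def next_location_probabilities(transition_map, current_location):
    """Return the probability distribution over the next pseudo-location."""
    return transition_map.get(current_location, {})

# Hypothetical trace of pseudo-location word labels for one device.
trace = ["lobby", "hall", "cafe", "hall", "lobby", "hall", "cafe"]
tmap = build_transition_map(trace)
print(next_location_probabilities(tmap, "hall"))  # cafe ~0.67, lobby ~0.33
```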
Abstract:
Techniques for improving the performance of flow control mechanisms such as Pause are provided. The techniques maintain a fair distribution of available bandwidth while reducing packet drops and maximizing link utilization in a distributed system. For example, in one embodiment, techniques are provided for achieving a fair share allocation of an egress port's bandwidth across a plurality of ingress ports contending for the same egress port.
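Max-min fairness is one classical notion of a fair share of an egress port's bandwidth across contending ingress ports; the abstract does not say which fair-share definition is used. The sketch below, with hypothetical port names and rates, illustrates the idea.

```python
def max_min_fair_share(capacity, demands):
    """Max-min fair allocation of an egress port's bandwidth across
    ingress ports, given their offered loads (same bandwidth units)."""
    allocation = {port: 0.0 for port in demands}
    remaining = dict(demands)
    cap = capacity
    while remaining and cap > 0:
        share = cap / len(remaining)
        satisfied = {p: d for p, d in remaining.items() if d <= share}
        if not satisfied:
            # No remaining port can be fully satisfied: split the rest equally.
            for p in remaining:
                allocation[p] += share
            return allocation
        for p, d in satisfied.items():
            allocation[p] += d
            cap -= d
            del remaining[p]
    return allocation

# Hypothetical example: a 10 Gb/s egress port and three contending ingress ports.
print(max_min_fair_share(10.0, {"in0": 2.0, "in1": 6.0, "in2": 6.0}))
# in0 gets its full 2.0 Gb/s; in1 and in2 split the remaining 8.0 -> 4.0 each
```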
Abstract:
An example method includes sending a virtual output queue (VOQ) length of a VOQ to an egress chip. The VOQ relates to a flow routed through an egress port associated with the egress chip. The method also includes receiving fair share information for the VOQ from the egress chip, and enforcing a control action on the incoming packets based on the fair share information. An ingress chip and the egress chip can be provided in a VOQ switch. The control action is a selected one of a group of actions, the group consisting of: (a) dropping packets, (b) pausing packets, and (c) marking packets. The method can further include receiving VOQ lengths of corresponding VOQs from respective ingress chips, where the VOQs relate to the flow. The method can also include calculating respective fair share information for each VOQ, and sending the fair share information to the respective ingress chips.
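The sketch below mirrors the described exchange with hypothetical class and message names: each ingress chip reports its VOQ length, the egress chip computes and returns fair share information, and the ingress chip enforces a drop, pause, or mark action on incoming packets. The actual fair-share calculation is not specified in the abstract, so an equal split among active VOQs is used as a placeholder.

```python
from enum import Enum

class ControlAction(Enum):
    DROP = "drop"
    PAUSE = "pause"
    MARK = "mark"

class EgressChip:
    """Receives VOQ lengths from ingress chips and computes fair shares."""
    def __init__(self, port_bandwidth):
        self.port_bandwidth = port_bandwidth
        self.voq_lengths = {}  # ingress_id -> reported VOQ length

    def report_voq_length(self, ingress_id, length):
        self.voq_lengths[ingress_id] = length

    def fair_shares(self):
        # Placeholder: equal split among ingress chips with a non-empty VOQ.
        active = [i for i, length in self.voq_lengths.items() if length > 0]
        if not active:
            return {}
        share = self.port_bandwidth / len(active)
        return {i: share for i in active}

class IngressChip:
    """Enforces a control action on incoming packets based on fair share info."""
    def __init__(self, ingress_id, action=ControlAction.PAUSE):
        self.ingress_id = ingress_id
        self.action = action
        self.fair_share = None

    def receive_fair_share(self, share):
        self.fair_share = share

    def admit(self, arrival_rate):
        # Apply the configured control action when the flow exceeds its fair share.
        if self.fair_share is not None and arrival_rate > self.fair_share:
            return self.action
        return None

egress = EgressChip(port_bandwidth=10.0)
ingress = IngressChip("in0", action=ControlAction.MARK)
egress.report_voq_length("in0", 1500)
egress.report_voq_length("in1", 3000)
ingress.receive_fair_share(egress.fair_shares()["in0"])
print(ingress.admit(arrival_rate=7.0))  # ControlAction.MARK (7.0 > fair share of 5.0)
```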
Abstract:
In one embodiment, a universal programming module on a first device collects context and state information from a local application executing on the first device, and provides the context and state information to a context mobility agent on the first device. The context mobility agent establishes a peer-to-peer connection with a second device, and transfers the context and state information to the second device, such that a remote application may be configured to execute according to the transferred context and state information from the first device. In another embodiment, the context mobility agent receives remote context and remote state information from the second device, wherein the remote application had been executing according to the remote context and remote state information, and provides the remote context and remote state information to the universal programming module to configure the local application to execute according to the remote context and remote state information.
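The sketch below follows the described flow with hypothetical class, method, and wire-format choices (JSON over a TCP socket standing in for the peer-to-peer connection): the programming module collects context and state from the local application, the context mobility agent transfers it to a peer device, and the receiving side configures its application from the transferred data.

```python
import json
import socket

class UniversalProgrammingModule:
    """Collects context/state from a local application (hypothetical app API)."""
    def __init__(self, app):
        self.app = app

    def collect(self):
        # Assumes the application exposes JSON-serializable context and state.
        return {"context": self.app.get_context(), "state": self.app.get_state()}

    def apply(self, remote):
        # Configure the local application to resume from remote context/state.
        self.app.set_context(remote["context"])
        self.app.set_state(remote["state"])

class ContextMobilityAgent:
    """Transfers context/state to a peer device over a point-to-point socket."""
    def __init__(self, module):
        self.module = module

    def send_to_peer(self, host, port):
        payload = json.dumps(self.module.collect()).encode()
        with socket.create_connection((host, port)) as sock:
            sock.sendall(payload)

    def receive_from_peer(self, listen_port):
        with socket.create_server(("", listen_port)) as server:
            conn, _ = server.accept()
            with conn:
                payload = conn.recv(65536)  # single recv, for brevity
        self.module.apply(json.loads(payload))
```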
Abstract:
The present invention provides methods and devices for implementing a Low Latency Ethernet (“LLE”) solution, also referred to herein as a Data Center Ethernet (“DCE”) solution, which simplifies the connectivity of data centers and provides a high-bandwidth, low-latency network for carrying Ethernet and storage traffic. Some aspects of the invention involve transforming FC frames into a format suitable for transport on an Ethernet. Some preferred implementations of the invention implement multiple virtual lanes (“VLs”) in a single physical connection of a data center or similar network. Some VLs are “drop” VLs, with Ethernet-like behavior, and others are “no-drop” VLs with FC-like behavior. Some preferred implementations of the invention provide guaranteed bandwidth based on credits and VLs. Active buffer management allows for both high reliability and low latency while using small frame buffers. Preferably, the rules for active buffer management are different for drop and no-drop VLs.
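As a rough illustration of the per-virtual-lane behavior described above, the sketch below models drop and no-drop VLs sharing one physical connection, with hypothetical names and buffer sizes: a no-drop VL only transmits when the receiver has granted credit (FC-like), while a drop VL tail-drops when its small frame buffer fills (Ethernet-like). The actual credit and active-buffer-management rules are not given in the abstract.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class VirtualLane:
    """One VL on a shared physical link; 'drop' VLs behave like classical
    Ethernet, 'no-drop' VLs behave like Fibre Channel (credit-based)."""
    name: str
    no_drop: bool
    buffer_frames: int            # small per-VL frame buffer
    credits: int = 0              # only meaningful for no-drop VLs
    queue: deque = field(default_factory=deque)

    def can_send(self):
        # A no-drop VL may only transmit when the receiver has granted credit.
        return not self.no_drop or self.credits > 0

    def receive(self, frame):
        if len(self.queue) < self.buffer_frames:
            self.queue.append(frame)
            return True
        # Drop VLs tail-drop under pressure; a no-drop VL should never reach
        # this point because its sender is throttled by credits.
        return False

    def grant_credit(self, n=1):
        if self.no_drop:
            self.credits += n

# Hypothetical configuration: two VLs on one physical connection.
vls = [
    VirtualLane(name="ethernet-traffic", no_drop=False, buffer_frames=64),
    VirtualLane(name="storage-traffic", no_drop=True, buffer_frames=64, credits=8),
]
```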
Abstract:
A method for communicating optically between nodes of an optical network, including forming, between a first node and a second node of the network, a set of lightpaths, each of the set of lightpaths having a respective configuration, and transferring communication traffic between the first and second nodes via the set of lightpaths. The method also includes forming a determination for the set of lightpaths that a communication traffic level associated therewith is less than a predetermined threshold, and in response to the determination, removing a lightpath having a given configuration from the set of lightpaths to form a reduced set of lightpaths. The method further includes transferring the communication traffic between the first and second nodes via the reduced set of lightpaths, while reducing a level of power consumption in the removed lightpath and while maintaining the given configuration of the removed lightpath.
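A minimal control-loop sketch of the described behavior, with hypothetical thresholds and units: when aggregate traffic between the two nodes falls below the threshold, one lightpath is powered down while its configuration (wavelength and route here) is retained, so it can later be re-activated without re-provisioning.

```python
from dataclasses import dataclass

@dataclass
class Lightpath:
    """One lightpath between two nodes; the configuration (wavelength and
    route here) is kept even while the lightpath is powered down."""
    wavelength_nm: float
    route: tuple
    powered: bool = True

def reduce_power_if_idle(lightpaths, traffic_gbps, threshold_gbps):
    """If traffic over the set is below the threshold, power down one
    lightpath while maintaining its configuration, and return the
    reduced set that continues to carry the traffic."""
    active = [lp for lp in lightpaths if lp.powered]
    if traffic_gbps < threshold_gbps and len(active) > 1:
        active[-1].powered = False   # lower power consumption on this lightpath
    return [lp for lp in lightpaths if lp.powered]

# Hypothetical set of two lightpaths between a node pair.
paths = [Lightpath(1550.12, ("A", "X", "B")), Lightpath(1550.92, ("A", "Y", "B"))]
reduced = reduce_power_if_idle(paths, traffic_gbps=3.0, threshold_gbps=5.0)
print(len(reduced))  # 1: the removed lightpath keeps its configuration unpowered
```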
Abstract:
A scalable method and apparatus for detecting frequent and dispersed invariants are disclosed. More particularly, the application discloses a system that can simultaneously track the frequency rates and dispersion criteria of unknown invariants. In other words, the application discloses an invariant detection system, implemented in hardware (and/or software), that allows detection of invariants (e.g., byte sequences) that are highly prevalent (e.g., repeating with a high frequency) and dispersed (e.g., originating from many sources and destined to many destinations).
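The patent describes a hardware (and/or software) system; the sketch below is a simplified software analogue with hypothetical thresholds. It tracks both the frequency of a byte sequence and its dispersion (distinct sources and destinations) using exact counters and sets, where a truly scalable implementation would typically substitute counting sketches or Bloom-filter-style structures.

```python
from collections import defaultdict

class InvariantTracker:
    """Tracks how often a byte sequence is seen (frequency) and how many
    distinct sources/destinations carry it (dispersion)."""
    def __init__(self, freq_threshold, dispersion_threshold):
        self.freq_threshold = freq_threshold
        self.dispersion_threshold = dispersion_threshold
        self.counts = defaultdict(int)
        self.sources = defaultdict(set)
        self.destinations = defaultdict(set)

    def observe(self, byte_seq, src, dst):
        key = bytes(byte_seq)
        self.counts[key] += 1
        self.sources[key].add(src)
        self.destinations[key].add(dst)

    def suspicious_invariants(self):
        # Report sequences that are both highly prevalent and widely dispersed.
        return [
            key for key, n in self.counts.items()
            if n >= self.freq_threshold
            and len(self.sources[key]) >= self.dispersion_threshold
            and len(self.destinations[key]) >= self.dispersion_threshold
        ]
```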