Abstract:
In general, techniques are described for utilizing anonymous cookies within computer networks to protect customer identities. In particular, a network device is configured to communicate with an edge router of a service provider network that provides access to a public network having network destinations. The network device includes a control unit and at least one interface. The control unit executes a content delivery layer and a privacy services layer. The content delivery layer receives a network communication sent from one of a plurality of customer devices to the public network. The privacy services layer replaces a destination-specified cookie within the network communication with an anonymous cookie, both of which conform to an application layer protocol. The anonymous cookie also specifies a pseudonym for the one of the customer devices that originated the network communication. The at least one interface then forwards the network communication, including the anonymous cookie, to the network destination.
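A minimal sketch in C of the cookie-replacement step described above, assuming the privacy services layer rewrites an HTTP Cookie header; the pseudonym table, device identifier, and function names are hypothetical placeholders, not the claimed implementation.

#include <stdio.h>
#include <string.h>

/* Hypothetical mapping from a customer device to a pseudonym; a real
 * privacy services layer would maintain this table on the network device. */
static const char *pseudonym_for_device(const char *device_id) {
    /* Illustrative only: a fixed pseudonym per device. */
    return strcmp(device_id, "device-42") == 0 ? "anon-7f3a" : "anon-0000";
}

/* Replace the destination-specified cookie value with an anonymous cookie
 * that carries only the pseudonym, keeping the header well-formed HTTP. */
static void replace_cookie(char *cookie_header, size_t len, const char *device_id) {
    snprintf(cookie_header, len, "Cookie: anon_id=%s", pseudonym_for_device(device_id));
}

int main(void) {
    char header[128] = "Cookie: session=alice-secret-token";
    replace_cookie(header, sizeof header, "device-42");
    printf("%s\n", header);   /* forwarded toward the network destination */
    return 0;
}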
Abstract:
In one embodiment, a method includes producing a first policy vector based on a first portion of a data packet received at a multi-stage switch. The method also includes producing a second policy vector based on a second portion of the data packet different from the first portion. A third policy vector is produced based on a combination of at least the first policy vector and at least the second policy vector. The third policy vector includes a combination of bit values configured to trigger an element at the multi-stage switch to process the data packet.
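A rough illustration of combining per-portion policy vectors; the 64-bit width, the classification functions, the bitwise OR, and the trigger bit are assumptions chosen only to make the idea concrete.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical 64-bit policy vectors; each set bit would select a
 * processing element at the multi-stage switch. */
static uint64_t classify_l2(const uint8_t *pkt) { return (uint64_t)pkt[0] << 1; }
static uint64_t classify_l3(const uint8_t *pkt) { return (uint64_t)pkt[14] << 8; }

int main(void) {
    uint8_t packet[64] = { 0x01, [14] = 0x02 };

    uint64_t v1 = classify_l2(packet);   /* first portion of the packet   */
    uint64_t v2 = classify_l3(packet);   /* second, different portion     */
    uint64_t v3 = v1 | v2;               /* combined (third) policy vector */

    if (v3 & (1u << 9))                  /* a bit that triggers an element */
        printf("trigger processing element 9\n");
    return 0;
}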
Abstract:
A method and system for detecting a pattern derived from or related to a data signature in data packets is provided. An intrusion detection module accepts a data packet and compares all or portions of the data packet with a set of data patterns. One or more data patterns may be related to, indicate the existence of, or be derived from a virus or other data structure, software code, a software program, a portion of the content of a data packet, a uniform resource locator, and/or a traffic classification indicator.
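A naive sketch of the comparison step; the signature strings and the linear scan are illustrative assumptions, since a real intrusion detection module would use far larger pattern sets and faster multi-pattern algorithms.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical signature set standing in for the stored data patterns. */
static const char *signatures[] = { "EVIL_PAYLOAD", "/etc/passwd", "cmd.exe" };

/* Return the index of the first signature found in the packet, or -1. */
static int match_packet(const unsigned char *pkt, size_t len) {
    for (size_t i = 0; i < sizeof signatures / sizeof signatures[0]; i++) {
        size_t slen = strlen(signatures[i]);
        if (slen > len) continue;
        for (size_t off = 0; off + slen <= len; off++)
            if (memcmp(pkt + off, signatures[i], slen) == 0)
                return (int)i;
    }
    return -1;
}

int main(void) {
    const unsigned char pkt[] = "GET /etc/passwd HTTP/1.1";
    int hit = match_packet(pkt, sizeof pkt - 1);
    if (hit >= 0)
        printf("pattern %d matched: possible intrusion\n", hit);
    return 0;
}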
Abstract:
A compute engine includes a central processing unit coupled to a coprocessor. The coprocessor includes a media access controller engine and a data transfer engine. The media access controller engine couples the compute engine to a communications network. The data transfer engine couples the media access controller engine to a set of cache memory. In further embodiments, a compute engine includes two media access controller engines. A reception media access controller engine receives data from the communications network. A transmission media access controller engine transmits data to the communications network. The compute engine also includes two data transfer engines. A streaming output engine stores network data from the reception media access controller engine in cache memory. A streaming input engine transfers data from cache memory to the transmission media access controller engine. In one implementation, the compute engine performs different network services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.
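A hypothetical structural sketch of how such a compute engine might be modeled in software; the hardware blocks are represented as plain structs, and every field name is invented for illustration.

#include <stdio.h>

/* Hypothetical handles; the real engines are hardware blocks, not C objects. */
struct mac_engine      { int port; };          /* couples to the network        */
struct transfer_engine { void *cache_base; };  /* couples a MAC engine to cache */

/* One compute engine as described: reception/transmission MAC engines paired
 * with streaming output/input engines, alongside the CPU (not modeled here). */
struct compute_engine {
    struct mac_engine      rx_mac;        /* receives data from the network */
    struct mac_engine      tx_mac;        /* transmits data to the network  */
    struct transfer_engine streaming_out; /* stores received data in cache  */
    struct transfer_engine streaming_in;  /* feeds cached data to tx_mac    */
};

int main(void) {
    struct compute_engine ce = {
        .rx_mac = { .port = 0 }, .tx_mac = { .port = 1 },
        .streaming_out = { 0 }, .streaming_in = { 0 },
    };
    /* Receive path:  network -> rx_mac -> streaming_out -> cache.
     * Transmit path: cache -> streaming_in -> tx_mac -> network. */
    printf("rx port %d, tx port %d\n", ce.rx_mac.port, ce.tx_mac.port);
    return 0;
}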
Abstract:
A multi-processor unit includes a set of processing clusters. Each processing cluster is coupled to a data ring and a snoop ring. The unit also includes a snoop controller adapted to process memory requests from each processing cluster. The data ring enables clusters to exchange requested information. The snoop ring is coupled to the snoop controller, enabling the snoop controller to forward each cluster's memory requests to the other clusters in the form of snoop requests.
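A simplified model of the snoop controller's forwarding behavior; the cluster count, request format, and delivery function are assumptions standing in for the snoop ring hardware.

#include <stdio.h>

#define NUM_CLUSTERS 4

struct snoop_request { int origin; unsigned long addr; };

static void snoop_deliver(int cluster, const struct snoop_request *req) {
    printf("cluster %d snoops address 0x%lx (from cluster %d)\n",
           cluster, req->addr, req->origin);
}

/* The snoop controller forwards one cluster's memory request to every
 * other cluster as a snoop request. */
static void snoop_forward(const struct snoop_request *req) {
    for (int c = 0; c < NUM_CLUSTERS; c++)
        if (c != req->origin)              /* skip the requesting cluster */
            snoop_deliver(c, req);
    /* Any cluster holding the line would answer over the data ring. */
}

int main(void) {
    struct snoop_request req = { .origin = 1, .addr = 0x1000 };
    snoop_forward(&req);
    return 0;
}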
Abstract:
A set of cache memory includes a set of first tier cache memory and a second tier cache memory. In the set of first tier cache memory, each first tier cache memory is coupled to a compute engine in a set of compute engines. The second tier cache memory is coupled to each first tier cache memory in the set of first tier cache memory. The second tier cache memory includes a data ring interface and a snoop ring interface.
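A toy lookup illustrating the two-tier arrangement; the single-entry tag arrays and the miss-handling message are assumptions, not the cache's actual organization.

#include <stdbool.h>
#include <stdio.h>

#define NUM_ENGINES 4

/* Hypothetical tag arrays standing in for the two cache tiers. */
static unsigned long tier1_tags[NUM_ENGINES];   /* one first-tier cache per compute engine */
static unsigned long tier2_tags[1];             /* shared second-tier cache                */

/* Look up an address for one compute engine: its first-tier cache first,
 * then the shared second tier, which would otherwise use its data ring and
 * snoop ring interfaces to fetch the line. */
static bool cache_lookup(int engine, unsigned long addr) {
    if (tier1_tags[engine] == addr) { puts("first-tier hit");  return true; }
    if (tier2_tags[0] == addr)      { puts("second-tier hit"); return true; }
    puts("miss: fetch via data ring, snoop via snoop ring interface");
    return false;
}

int main(void) {
    tier2_tags[0] = 0x2000;
    cache_lookup(0, 0x2000);   /* second-tier hit */
    cache_lookup(0, 0x3000);   /* miss            */
    return 0;
}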
Abstract:
A processor includes at least one execution unit generating out-of-order results and out-of-order condition codes. Precise architectural state of the processor is maintained by providing a results buffer having a number of slots and providing a condition code buffer having the same number of slots as the results buffer, each slot in the condition code buffer being in one-to-one correspondence with a slot in the results buffer. Each live instruction in the processor is assigned a slot in the results buffer and the condition code buffer. Each speculative result produced by the execution units is stored in the assigned slot in the results buffer. When an instruction is retired, the results for that instruction are transferred to an architectural result register and any condition codes generated by that instruction are transferred to an architectural condition code register.
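A small sketch of the paired-buffer idea, assuming eight slots and single architectural registers; the functions model out-of-order completion followed by in-order retirement, not the processor's actual pipeline.

#include <stdint.h>
#include <stdio.h>

#define NUM_SLOTS 8   /* same number of slots in both buffers */

/* Speculative state: one results slot and one condition-code slot per
 * live instruction, in one-to-one correspondence. */
static uint64_t result_buf[NUM_SLOTS];
static uint8_t  cc_buf[NUM_SLOTS];

/* Architectural state, updated only at retirement. */
static uint64_t arch_result;
static uint8_t  arch_cc;

static void execute(int slot, uint64_t value, uint8_t cc) {
    result_buf[slot] = value;   /* out-of-order result         */
    cc_buf[slot]     = cc;      /* out-of-order condition code */
}

static void retire(int slot) {
    arch_result = result_buf[slot];   /* commit the result           */
    arch_cc     = cc_buf[slot];       /* commit the matching cc slot */
}

int main(void) {
    execute(3, 42, 0x1);   /* instruction assigned slot 3 completes early */
    retire(3);             /* retired in program order                    */
    printf("arch result=%llu cc=0x%x\n",
           (unsigned long long)arch_result, (unsigned)arch_cc);
    return 0;
}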
Abstract:
A system, apparatus, and method for ensuring program correctness in an out-of-order processor in spite of younger load instructions being boosted past an older store, utilizing a memory disambiguation buffer ("MDB"). The memory disambiguation buffer stores all memory operations that have not yet been retired. Each entry has several fields, among which are the data and the addresses of the memory operations. An incoming load checks its address against the addresses of all the stores. If there is a match against an older store, then the load must have received old data from the data cache, and the load operation is replayed to obtain its data from the memory disambiguation buffer on the replay. If, on the other hand, there are no matches against any older store, the load is assumed to have received the correct data from the data cache (assuming a data cache hit). An incoming store checks its address against the addresses of all younger loads. If there is a match against any younger load, then the younger load is replayed along with all of its dependents.
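A schematic version of the MDB address checks, assuming a small array of entries ordered by an age field (smaller means older); the replay of dependents and the cache interaction are reduced to boolean results for illustration.

#include <stdbool.h>
#include <stdio.h>

#define MDB_SIZE 16

/* Hypothetical MDB entry; the data field is omitted from this sketch. */
struct mdb_entry { bool valid; bool is_store; int age; unsigned long addr; };
static struct mdb_entry mdb[MDB_SIZE];

/* Incoming load: if any older store in the MDB matches its address, the load
 * received old data from the data cache and must be replayed against the MDB. */
static bool load_needs_replay(int age, unsigned long addr) {
    for (int i = 0; i < MDB_SIZE; i++)
        if (mdb[i].valid && mdb[i].is_store && mdb[i].age < age && mdb[i].addr == addr)
            return true;
    return false;
}

/* Incoming store: any younger load that matches this address is replayed
 * (along with its dependents, not modeled here). */
static bool store_squashes_younger_load(int age, unsigned long addr) {
    for (int i = 0; i < MDB_SIZE; i++)
        if (mdb[i].valid && !mdb[i].is_store && mdb[i].age > age && mdb[i].addr == addr)
            return true;
    return false;
}

int main(void) {
    mdb[0] = (struct mdb_entry){ true, true, 5, 0x100 };   /* older store  */
    mdb[1] = (struct mdb_entry){ true, false, 9, 0x200 };  /* younger load */
    printf("load replay:  %d\n", load_needs_replay(7, 0x100));            /* 1 */
    printf("store squash: %d\n", store_squashes_younger_load(6, 0x200));  /* 1 */
    return 0;
}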
Abstract:
A system and method for thermal overload detection and protection for a processor allows the processor to run at near maximum potential for the vast majority of its execution life. This is effectuated by providing circuitry that detects when the processor has exceeded its thermal thresholds and then causes the processor to automatically reduce the clock rate to a fraction of the nominal clock while execution continues. When the thermal condition has stabilized, the clock may be raised in a stepwise fashion back to the nominal clock rate. Throughout the period of cycling the clock frequency from nominal to minimum and back, the program continues to be executed. Also provided are a queue activity rise time detector and a method to control the rate of acceleration of a functional unit from idle to full throttle by a localized stall mechanism at the boundary of each stage in the pipeline. This mechanism can detect when an idle queue is suddenly overwhelmed with input such that, over a short period of approximately 10-20 machine cycles, the queue activity rate has increased from idle to near the stall threshold.
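A minimal sketch of the throttle-and-recover behavior, assuming a clock divisor register and periodic temperature samples; the threshold, divisors, and step schedule are invented, and the queue rise-time detector is not modeled.

#include <stdio.h>

#define NOMINAL_DIVISOR 1   /* full clock rate               */
#define MAX_DIVISOR     8   /* fraction of the nominal clock */

static int clock_divisor = NOMINAL_DIVISOR;

/* Called periodically with the current temperature reading: drop the clock
 * to a fraction of nominal on thermal overload, then step it back up as the
 * part cools. Execution continues at every step. */
static void thermal_tick(int temp_c, int limit_c) {
    if (temp_c > limit_c)
        clock_divisor = MAX_DIVISOR;     /* immediate throttle           */
    else if (clock_divisor > NOMINAL_DIVISOR)
        clock_divisor /= 2;              /* stepwise return to nominal   */
}

int main(void) {
    int samples[] = { 70, 95, 88, 84, 80, 78 };   /* illustrative readings */
    for (int i = 0; i < 6; i++) {
        thermal_tick(samples[i], 90);
        printf("temp=%dC clock=nominal/%d\n", samples[i], clock_divisor);
    }
    return 0;
}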