Abstract:
A network content service apparatus includes a set of compute elements adapted to perform a set of network services; and a switching fabric coupling compute elements in said set of compute elements. The set of network services includes firewall protection, Network Address Translation, Internet Protocol forwarding, bandwidth management, Secure Sockets Layer operations, Web caching, Web switching, and virtual private networking. Code operable on the compute elements enables the network services, and the compute elements are provided on blades which further include at least one input/output port.
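As a rough structural illustration of the arrangement described above, the following C sketch models compute elements grouped on blades and coupled by a switching fabric. All type and field names (network_service_t, blade_t, switching_fabric_t, and so on) are assumptions made for illustration; the abstract defines no API.

/* Minimal structural sketch of the described apparatus.
 * All identifiers are illustrative, not taken from the abstract. */
#include <stdint.h>
#include <stdio.h>

/* The set of network services named in the abstract, modeled as bit flags. */
typedef enum {
    SVC_FIREWALL       = 1 << 0,
    SVC_NAT            = 1 << 1,
    SVC_IP_FORWARDING  = 1 << 2,
    SVC_BANDWIDTH_MGMT = 1 << 3,
    SVC_SSL            = 1 << 4,
    SVC_WEB_CACHING    = 1 << 5,
    SVC_WEB_SWITCHING  = 1 << 6,
    SVC_VPN            = 1 << 7
} network_service_t;

/* A compute element runs code enabling some subset of the services. */
typedef struct {
    int      id;
    uint32_t enabled_services;   /* bitwise OR of network_service_t values */
} compute_element_t;

/* A blade carries compute elements plus at least one input/output port. */
typedef struct {
    compute_element_t elements[4];
    int               num_elements;
    int               io_ports;   /* >= 1 per the abstract */
} blade_t;

/* The switching fabric couples the compute elements on the blades. */
typedef struct {
    blade_t *blades;
    int      num_blades;
} switching_fabric_t;

int main(void) {
    blade_t blade = { .num_elements = 2, .io_ports = 1 };
    blade.elements[0] = (compute_element_t){ 0, SVC_FIREWALL | SVC_NAT };
    blade.elements[1] = (compute_element_t){ 1, SVC_SSL | SVC_WEB_CACHING };
    switching_fabric_t fabric = { .blades = &blade, .num_blades = 1 };
    printf("fabric couples %d blade(s)\n", fabric.num_blades);
    return 0;
}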
Abstract:
An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
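The per-core attachment points described above (data switch interconnect reached through the data cache, messaging network reached through a message station) can be pictured with the C sketch below. The structure and function names, queue depth, and cache sizes are assumptions for illustration only.

/* Illustrative model of per-core attachment points: the data cache hangs off
 * the data switch interconnect, the message station off the messaging network.
 * All identifiers and sizes are hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define NUM_CORES 8

typedef struct { uint8_t lines[32 * 1024]; } data_cache_t;        /* size illustrative */
typedef struct { uint8_t lines[32 * 1024]; } instruction_cache_t;
typedef struct { uint64_t tx_queue[16]; int tx_head, tx_tail; } message_station_t;

typedef struct {
    data_cache_t        dcache;   /* attachment point to the data switch interconnect */
    instruction_cache_t icache;
    message_station_t   station;  /* attachment point to the messaging network */
} core_t;

/* Enqueue a message from one core toward a communication port. */
static int msg_send(core_t *core, uint64_t payload) {
    message_station_t *st = &core->station;
    int next = (st->tx_tail + 1) % 16;
    if (next == st->tx_head) return -1;   /* station queue full */
    st->tx_queue[st->tx_tail] = payload;
    st->tx_tail = next;
    return 0;
}

int main(void) {
    static core_t cores[NUM_CORES];
    if (msg_send(&cores[0], 0xCAFEu) == 0)
        printf("core 0 queued a message at its message station\n");
    return 0;
}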
Abstract:
A faucet supplies water at a computer-controlled temperature. Hot and cold water valves are connected to hot and cold water supplies. A mixing connection is attached between the valves for mixing the hot and cold water together and supplying it at a faucet discharge. Each valve has a movable valve member which can be moved toward and away from a valve seat to control the flow of hot or cold water. A stepper motor is connected to each of the valve members and can be controlled by a digital error signal to rotate, in steps, either to increase or decrease the flow of hot or cold water. A temperature sensor is provided at the faucet outlet for sensing the actual temperature. A microcomputer receives signals corresponding to the actual temperature. The actual temperature is compared to a selected set point temperature which is programmed into the microcomputer. If an error exists between the actual and set point temperatures, control signals are supplied to the stepper motors for changing the flow of hot or cold water to move the actual temperature toward the set point temperature.
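The control behavior described reads naturally as a simple feedback loop: read the outlet sensor, compare against the set point, and step the hot and cold valves to reduce the error. The C sketch below is a minimal illustration; the hardware hooks, the deadband, and the fixed step size are assumptions, with the sensor and motors stubbed out so the loop can be run.

/* Minimal feedback-loop sketch of the described faucet controller.
 * The hardware hooks (read_temperature, step_valve) are stubbed out here;
 * all names, the deadband, and the step size are illustrative assumptions. */
#include <stdio.h>

#define HOT        0
#define COLD       1
#define DEADBAND_C 0.5            /* tolerated error before stepping, illustrative */

static double simulated_temp = 20.0;   /* stand-in for the outlet sensor reading */

static double read_temperature(void) { return simulated_temp; }

static void step_valve(int valve, int steps) {
    /* Stand-in for pulsing a stepper motor; nudge the simulated temperature. */
    simulated_temp += (valve == HOT ? 0.4 : -0.4) * steps;
}

/* One pass of the control loop: compare actual to set point, step the motors. */
static void control_step(double setpoint_c) {
    double error = setpoint_c - read_temperature();   /* positive: water too cold */
    if (error > DEADBAND_C)       { step_valve(HOT, +1); step_valve(COLD, -1); }
    else if (error < -DEADBAND_C) { step_valve(HOT, -1); step_valve(COLD, +1); }
    /* within the deadband: leave both valves where they are */
}

int main(void) {
    for (int i = 0; i < 40; i++) control_step(38.0);
    printf("outlet temperature settles near %.1f C\n", read_temperature());
    return 0;
}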
Abstract:
Advanced processors for executing software applications on different operating systems are presented including: a number of processor cores each configured to execute multiple threads, wherein each of the number of processor cores includes a data cache and an instruction cache; a data switch interconnect ring arrangement directly coupled with the data cache of each of the number of processor cores and configured to pass memory related information among the number of processor cores; a messaging network directly coupled with the instruction cache of each of the number of processor cores and a number of communication ports; and a memory management unit (MMU) coupled with each of the number of processor cores, the MMU having a first translation-lookaside buffer (TLB) portion, a second TLB portion, and a third TLB portion, wherein each TLB portion is operable in several modes, wherein each TLB portion includes a number of entries.
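The MMU arrangement above (three TLB portions, each holding a number of entries and operable in several modes) can be sketched structurally as below. The mode enum, entry count, field names, and fully associative lookup are assumptions made for illustration; the abstract specifies none of them.

/* Structural sketch of an MMU with three TLB portions, each holding a number
 * of entries and operable in one of several modes. Names, sizes, and modes
 * are illustrative. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef enum { TLB_MODE_GLOBAL, TLB_MODE_PARTITIONED } tlb_mode_t;   /* assumed modes */

typedef struct {
    uint64_t vpn;      /* virtual page number */
    uint64_t pfn;      /* physical frame number */
    bool     valid;
} tlb_entry_t;

typedef struct {
    tlb_mode_t  mode;
    tlb_entry_t entries[64];   /* "a number of entries"; 64 chosen arbitrarily */
} tlb_portion_t;

typedef struct {
    tlb_portion_t portions[3]; /* first, second, and third TLB portions */
} mmu_t;

/* Probe all three portions for a translation; fully associative for simplicity. */
static bool mmu_lookup(const mmu_t *mmu, uint64_t vpn, uint64_t *pfn_out) {
    for (int p = 0; p < 3; p++)
        for (int e = 0; e < 64; e++) {
            const tlb_entry_t *t = &mmu->portions[p].entries[e];
            if (t->valid && t->vpn == vpn) { *pfn_out = t->pfn; return true; }
        }
    return false;   /* miss: a real MMU would walk page tables or raise a fault */
}

int main(void) {
    static mmu_t mmu;
    mmu.portions[1].entries[0] = (tlb_entry_t){ .vpn = 0x42, .pfn = 0x1000, .valid = true };
    uint64_t pfn;
    if (mmu_lookup(&mmu, 0x42, &pfn))
        printf("vpn 0x42 -> pfn 0x%lx\n", (unsigned long)pfn);
    return 0;
}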
Abstract:
An apparatus includes a compute engine coupled to a first tier cache memory including a data array. The first tier cache receives memory access requests from the compute engine. A second tier cache memory is coupled to the first tier cache to receive memory access requests for memory locations not owned by the first tier cache. To avoid storing stale data, the first tier cache does not load the data array with data returned by the second tier cache under the following condition: the second tier cache returns the data in response to a cacheable load operation from a memory location after the compute engine has issued a subsequent store operation to the same memory location.
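The fill-suppression rule stated above amounts to checking, when the second tier cache returns load data, whether the compute engine has stored to the same location since the load was issued. The C sketch below illustrates that check with hypothetical names and a single-entry bookkeeping structure; it is a simplification, not the described hardware.

/* Illustrative sketch of the stale-fill check: when the second tier cache
 * returns data for a cacheable load, the first tier cache skips filling its
 * data array if a later store targeted the same address. Names are hypothetical. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    uint64_t addr;
    uint64_t issue_time;    /* monotonically increasing per request */
    bool     valid;
} store_record_t;

static store_record_t last_store;   /* single-entry bookkeeping for the sketch */

static void note_store(uint64_t addr, uint64_t t) {
    last_store = (store_record_t){ .addr = addr, .issue_time = t, .valid = true };
}

/* Called when the second tier cache returns data for a load issued at load_time. */
static bool should_fill_data_array(uint64_t addr, uint64_t load_time) {
    /* Suppress the fill if a store to the same location was issued after the load. */
    if (last_store.valid && last_store.addr == addr && last_store.issue_time > load_time)
        return false;
    return true;
}

int main(void) {
    uint64_t load_time = 10;
    note_store(0x80, 12);   /* store to 0x80 issued after the load */
    printf("fill 0x80? %s\n", should_fill_data_array(0x80, load_time) ? "yes" : "no");
    printf("fill 0x90? %s\n", should_fill_data_array(0x90, load_time) ? "yes" : "no");
    return 0;
}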
Abstract:
A pipelined data processor has instructions at different stages of execution. Some of the instructions specify virtual addresses into a file of registers having physical addresses. A speculative translator maps the virtual registers of an instruction at one pipeline stage into physical registers for speculative use by the instruction at a later pipeline stage. The registers have multiple differently translated regions. If speculative renaming fails, the processor reverts to an archived copy of the renaming data.
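The renaming scheme described (a speculative virtual-to-physical map, differently translated register regions, and a fall-back archive copy) might be illustrated as in the C sketch below. The table sizes, the two-region split, and all identifiers are assumptions for illustration, not the patented mechanism.

/* Sketch of speculative register renaming with an archive copy for recovery.
 * Identifiers, sizes, and the two-region split are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define NUM_VREGS    32
#define REGION_SPLIT 16   /* registers 0..15 and 16..31 translated differently */

typedef struct {
    uint8_t map[NUM_VREGS];   /* virtual register -> physical register */
} rename_table_t;

static rename_table_t speculative;  /* working map used by later pipeline stages */
static rename_table_t archive;      /* known-good copy used on misspeculation */

/* Speculatively map a virtual register to a new physical register. */
static void rename_speculative(uint8_t vreg, uint8_t new_phys) {
    speculative.map[vreg] = new_phys;
}

/* Commit: the speculative mapping becomes the new archive copy. */
static void rename_commit(void) { archive = speculative; }

/* Misspeculation: revert the working map to the archive copy. */
static void rename_revert(void) { speculative = archive; }

int main(void) {
    /* Initialize the two register regions with different translation offsets. */
    for (int v = 0; v < NUM_VREGS; v++)
        archive.map[v] = (uint8_t)(v < REGION_SPLIT ? v : v + 32);
    speculative = archive;

    rename_speculative(3, 77);   /* speculative use by a later pipeline stage */
    rename_revert();             /* renaming failed: restore the archived mapping */
    printf("v3 maps back to p%d\n", speculative.map[3]);
    (void)rename_commit;         /* commit path unused in this tiny example */
    return 0;
}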