Abstract:
Parallel compression is performed on an input data stream by processing circuitry. The processing circuitry includes hashing circuitry, match engines, pipeline circuitry, and a match selector. The hashing circuitry identifies multiple locations in one or more history buffers to search for target data in the input data stream. The match engines perform multiple searches in parallel for the target data in the one or more history buffers. The pipeline circuitry performs pipelined searches for multiple sequential target data in the input data stream in consecutive clock cycles. The match selector then selects a result from the multiple searches and pipelined searches to compress the input data stream.
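As a rough software analogue of this arrangement, the C++ sketch below hashes the next few bytes of the target data to select candidate history positions, searches each candidate (standing in for the parallel match engines), and keeps the longest match (the match selector's role). The class, hash function, and table sizes are illustrative assumptions rather than details from the abstract.

```cpp
// Software analogue of hash-guided, multi-candidate match search for an
// LZ77-style compressor. Names, the hash function, and table sizes are
// illustrative assumptions, not taken from the abstract.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Match { size_t offset = 0; size_t length = 0; };

class MatchFinder {
public:
    MatchFinder() : table_(kBuckets) {}

    // Hash the next three bytes of the target data to select a bucket of
    // candidate history positions (the locations the hashing circuitry
    // would identify).
    static uint32_t hash3(const uint8_t* p) {
        return (p[0] * 2654435761u ^ (p[1] << 11) ^ (p[2] << 3)) & (kBuckets - 1);
    }

    // Search every candidate in the bucket; in hardware these searches run
    // concurrently in separate match engines, here in a simple loop.
    Match find(const std::vector<uint8_t>& data, size_t pos) const {
        Match best;
        if (pos + 3 > data.size()) return best;
        for (size_t cand : table_[hash3(&data[pos])]) {
            size_t len = 0;
            while (pos + len < data.size() && data[cand + len] == data[pos + len]) ++len;
            if (len > best.length) best = {pos - cand, len};   // match selection
        }
        return best;
    }

    // Record a history position so later targets can match against it.
    void insert(const std::vector<uint8_t>& data, size_t pos) {
        if (pos + 3 <= data.size()) table_[hash3(&data[pos])].push_back(pos);
    }

private:
    static constexpr uint32_t kBuckets = 1u << 15;
    std::vector<std::vector<size_t>> table_;
};
```

A compressor would call insert() for each position it advances past and find() for the next target position; the pipelined searches in the abstract correspond to overlapping such lookups for consecutive positions across consecutive clock cycles.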
Abstract:
An embodiment may include first circuitry and second circuitry. The first circuitry may compress, at least in part, based at least in part upon a first set of statistics, input to produce first output exhibiting a first compression ratio. If the first compression ratio is less than a desired compression ratio, the second circuitry may compress, at least in part, based at least in part upon a second set of statistics, the first output to produce second output. The first set of statistics may be based, at least in part, after an initial compression, upon other data that has been previously compressed and is associated, at least in part, with the input. The second set of statistics may be based at least in part upon the input. Many alternatives, variations, and modifications are possible.
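The control flow can be sketched as follows, assuming generic compression and statistics-gathering callbacks; compress_two_stage and all parameter names are hypothetical and only illustrate the ordering described above.

```cpp
// Illustrative control-flow sketch only; the codec and statistics types are
// hypothetical stand-ins, not an API from the abstract.
#include <cstdint>
#include <functional>
#include <vector>

using Bytes = std::vector<uint8_t>;
struct Stats { /* e.g. symbol frequencies for an entropy coder */ };

using Codec     = std::function<Bytes(const Bytes&, const Stats&)>;
using StatMaker = std::function<Stats(const Bytes&)>;

// Two-stage compression policy: a first pass driven by statistics from
// previously compressed, associated data; a second pass over the first
// output, driven by statistics from the input itself, only when the first
// pass misses the desired ratio.
Bytes compress_two_stage(const Bytes& input, const Stats& history_stats,
                         double desired_ratio,
                         const Codec& compress, const StatMaker& gather_stats) {
    Bytes first = compress(input, history_stats);          // first circuitry
    double ratio = first.empty() ? 0.0
                                 : static_cast<double>(input.size()) / first.size();
    if (ratio < desired_ratio) {
        Stats second_stats = gather_stats(input);           // based on the input
        return compress(first, second_stats);               // second circuitry recompresses the first output
    }
    return first;
}
```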
Abstract:
A method and apparatus for providing a low power mode for a processor while maintaining snoop throughput are disclosed. In one embodiment, an apparatus includes a cache, a processor, and a frequency controller. The frequency controller is to operate the apparatus in a low power mode in which the operating frequency of the cache is higher than the operating frequency of the processor.
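A toy software model of the frequency-control policy, with made-up frequency values, might look like this; the point is only that the cache domain remains faster than the core domain in the low power mode so snoops can still be serviced promptly.

```cpp
// Toy model of the frequency-control policy only; the frequencies and the
// controller structure are hypothetical, not taken from the abstract.
#include <cstdint>
#include <iostream>

struct ClockDomain { uint32_t freq_mhz; };

struct FrequencyController {
    ClockDomain core  {2000};   // processor core clock
    ClockDomain cache {2000};   // cache (snoop-servicing) clock

    // In low power mode the core is slowed aggressively, while the cache is
    // kept at a higher frequency so snoops can still be serviced quickly.
    void enter_low_power() {
        core.freq_mhz  = 200;
        cache.freq_mhz = 800;   // cache frequency > processor frequency
    }
    void exit_low_power() { core.freq_mhz = cache.freq_mhz = 2000; }
};

int main() {
    FrequencyController fc;
    fc.enter_low_power();
    std::cout << "core " << fc.core.freq_mhz << " MHz, cache "
              << fc.cache.freq_mhz << " MHz\n";   // core 200 MHz, cache 800 MHz
}
```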
Abstract:
Elements of a computer system are tested by generating harassing transactions on a bus. A first transaction is detected on the bus; the first transaction includes a first data request to a first address. In response to, and based upon, detecting the first address, a second data request is generated to a second address. The second data request is issued on the bus as a second transaction while the first transaction is pending on the bus.
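In software terms, the behavior can be sketched as below; the Bus interface and the rule used to derive the second address (here, the next cache line) are hypothetical.

```cpp
// Illustrative sketch of the stress-test idea in software terms; the Bus
// interface and the address-derivation rule are hypothetical.
#include <cstdint>
#include <optional>

struct Transaction { uint64_t address; };

class Bus {
public:
    virtual std::optional<Transaction> observe() = 0;        // snoop a posted transaction
    virtual void issue(const Transaction& t) = 0;            // drive a new transaction
    virtual bool pending(const Transaction& t) const = 0;    // still outstanding?
    virtual ~Bus() = default;
};

// Watch for a first transaction; while it is still pending, issue a second
// "harassing" request whose address is derived from the first (here, the
// next cache line) to provoke contention and corner cases.
void harass_once(Bus& bus, uint64_t line_bytes = 64) {
    if (auto first = bus.observe()) {
        Transaction second{first->address + line_bytes};
        if (bus.pending(*first)) bus.issue(second);
    }
}
```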
Abstract:
A processor includes first and second execution cores that operate in an FRC mode, an FRC check unit to compare results from the first and second execution cores, and an error check unit to detect recoverable errors in the first and second cores. The FRC check unit temporarily stores results from the first or second core, and a timer is activated if a mismatch is detected. If the error check unit detects a recoverable error before the timer interval expires, a recovery routine is activated. If the timer interval expires first, a reset routine is activated.
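A compact sketch of that decision logic, with hypothetical interfaces, might look like this:

```cpp
// Sketch of the decision logic only; the core result and error-detector
// interfaces are hypothetical.
#include <chrono>
#include <cstdint>
#include <functional>

enum class Action { None, Recover, Reset };

// Compare lock-stepped results from two cores. On a mismatch, wait up to
// `window` for the error detector to report a recoverable error; recovery is
// chosen if it does, reset if the timer expires first.
Action frc_check(uint64_t result_core0, uint64_t result_core1,
                 const std::function<bool()>& recoverable_error_seen,
                 std::chrono::steady_clock::duration window) {
    if (result_core0 == result_core1) return Action::None;
    auto deadline = std::chrono::steady_clock::now() + window;   // start the timer
    while (std::chrono::steady_clock::now() < deadline) {
        if (recoverable_error_seen()) return Action::Recover;    // error found in time
    }
    return Action::Reset;                                        // timer expired first
}
```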
Abstract:
Cache coherency is maintained between the dedicated caches of a chip multiprocessor by writing back data from one dedicated cache to another without routing the data off-chip. Various specific embodiments are described, using write buffers, fill buffers, and multiplexers, respectively, to achieve the on-chip transfer of data between dedicated caches.
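One way to picture the on-chip transfer is the simplified model below; the cache structures and the specific downgrade policy are illustrative assumptions, not details from the abstract.

```cpp
// Simplified software model of on-chip cache-to-cache writeback; the cache
// structures and protocol details are hypothetical.
#include <cstdint>
#include <unordered_map>

struct Line { uint64_t data = 0; bool modified = false; };
using Cache = std::unordered_map<uint64_t, Line>;   // address -> line

// When the requesting core misses on an address that another core holds
// modified, transfer the line directly from the owner's dedicated cache to
// the requester's (conceptually through an on-chip write or fill buffer)
// instead of writing it back to off-chip memory.
bool intervene_on_chip(Cache& owner, Cache& requester, uint64_t addr) {
    auto it = owner.find(addr);
    if (it == owner.end() || !it->second.modified) return false;
    requester[addr] = it->second;   // fill the requester's cache with the dirty line
    owner.erase(it);                // or downgrade to shared, depending on the protocol
    return true;
}
```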
Abstract:
A validation FUB is a hardware system within the agent that can place a computer system in a stress condition. A validation FUB may monitor transactions posted on an external bus and generate other transactions in response to the monitored transactions. The validation FUB may be a programmable element whose response may be defined by an external input. Accordingly, the validation FUB may test a wide variety of system events.
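A software analogue of such a programmable responder is sketched below; the rule table stands in for the external programming input, and all types and names are hypothetical.

```cpp
// Sketch of a table-driven, programmable bus responder; types and rules are
// hypothetical stand-ins for the validation FUB's programmable behavior.
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

struct BusTransaction { uint32_t type; uint64_t address; };

// A rule pairs a predicate over observed transactions with a generator for
// the transaction to inject in response. Loading different rule sets is the
// software analogue of programming the unit through an external input.
struct Rule {
    std::function<bool(const BusTransaction&)>           matches;
    std::function<BusTransaction(const BusTransaction&)> respond;
};

class ValidationUnit {
public:
    void program(std::vector<Rule> rules) { rules_ = std::move(rules); }

    // Monitor one posted transaction and emit any programmed responses.
    template <typename IssueFn>
    void on_observed(const BusTransaction& t, IssueFn issue) const {
        for (const auto& r : rules_)
            if (r.matches(t)) issue(r.respond(t));
    }

private:
    std::vector<Rule> rules_;
};
```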
Abstract:
A cache memory element, implemented within a computer system, has a cache hierarchy including a first level cache and at least a second level cache. In the event that a processor core requests a copy of a selected cache line, intends to modify the contents of the selected cache line, and the selected cache line cannot be supplied by the first or second level cache, only tag information is written into the second level cache and higher level caches. This preserves data bus bandwidth and enhances performance of the computer system.
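As a simplified illustration of the allocation policy, the sketch below records only a tag in the second level and higher levels on a request-for-ownership miss; the structures and the 64-byte line size are assumptions, not details from the abstract.

```cpp
// Simplified model of tag-only allocation in outer cache levels; all
// structures and sizes are hypothetical.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct CacheEntry { bool tag_only = false; std::vector<uint8_t> data; };
using CacheLevel = std::unordered_map<uint64_t, CacheEntry>;   // address -> entry

// The core requests a line with intent to modify it. If neither L1 nor L2 can
// supply the line, the second level and higher levels record only the tag;
// the line data is not moved over the data bus, preserving its bandwidth.
void rfo_miss_allocate(uint64_t addr, CacheLevel& l1, CacheLevel& l2,
                       const std::vector<CacheLevel*>& higher_levels) {
    if (l1.count(addr) || l2.count(addr)) return;            // hit: normal handling
    l1[addr] = CacheEntry{false, std::vector<uint8_t>(64)};  // line the core will modify (illustrative)
    l2[addr] = CacheEntry{true, {}};                         // tag only, no data transferred
    for (CacheLevel* lvl : higher_levels)
        (*lvl)[addr] = CacheEntry{true, {}};                 // tag only in higher levels too
}
```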