Abstract:
A method, an apparatus, and a computer program product are provided for handling write mask operations in an XDR™ DRAM memory system. This invention eliminates the need for a two-port array because the mask is generated as the data is received. Less logic is needed for the mask calculation because only 144 of the 256 possible byte values are decoded. The mask value is generated and stored in a mask array. Independently, the write data is stored in a write buffer. The mask value is utilized to generate a write mask command. Once the write mask command is issued, the write data and the mask value are transmitted to a multiplexer. The multiplexer masks the write data using the mask value so that the masked data can be stored in the XDR DRAMs.
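The data path above can be pictured with a small software model. The following is a minimal sketch in Python, assuming a per-byte mask predicate (is_masked_byte), dict-backed write buffer and mask array, and a flat dram list; these names and the mask encoding are illustrative assumptions rather than the patented circuit.

```python
def is_masked_byte(byte):
    # Placeholder decode that flags a byte as "masked" (illustrative only; the
    # real design decodes just 144 of the 256 possible byte values).
    return byte == 0x00

def receive_write_data(write_buffer, mask_array, entry, data_bytes):
    # The write data goes into the write buffer while the mask is generated in
    # the same pass and stored independently in the mask array, so the data
    # never has to be re-read through a second array port.
    write_buffer[entry] = list(data_bytes)
    mask_array[entry] = [is_masked_byte(b) for b in data_bytes]

def issue_write_mask(write_buffer, mask_array, entry, dram, base_addr):
    # On the write mask command, the buffered data and stored mask feed a
    # multiplexer; only unmasked bytes are driven into the DRAM.
    for i, (byte, masked) in enumerate(zip(write_buffer[entry], mask_array[entry])):
        if not masked:
            dram[base_addr + i] = byte

# Example usage with dict-backed structures and a 16-byte DRAM image:
wbuf, marr, dram = {}, {}, [0xFF] * 16
receive_write_data(wbuf, marr, entry=0, data_bytes=[0x11, 0x00, 0x22, 0x00])
issue_write_mask(wbuf, marr, entry=0, dram=dram, base_addr=4)
print(dram[4:8])   # -> [17, 255, 34, 255]; the 0x00 bytes stayed masked
```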
Abstract:
Disclosed is an apparatus which operates to substantially evenly distribute commands and/or data packets issued from a managed program or other entity over a given time period. The even distribution of these commands or data packets minimizes congestion in critical resources such as memory, I/O devices, and/or the bus for transferring data between source and destination. Any unmanaged commands or data packets are handled as in conventional technology.
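A minimal sketch of the even-distribution idea, assuming a software shaper that spaces managed commands uniformly across a base period measured in cycles; the names issue_evenly and period_cycles are illustrative and not taken from the disclosure. Unmanaged commands would simply bypass this step.

```python
def issue_evenly(managed_commands, period_cycles):
    # Return (cycle, command) pairs that spread the commands across the period
    # instead of letting them burst at the start.
    n = len(managed_commands)
    if n == 0:
        return []
    spacing = period_cycles / n
    return [(int(i * spacing), cmd) for i, cmd in enumerate(managed_commands)]

# Example: 4 commands over a 100-cycle period issue at cycles 0, 25, 50, 75.
print(issue_evenly(["rd0", "wr1", "rd2", "wr3"], 100))
```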
Abstract:
A system and method for using a data-only transfer protocol to store atomic cache line data in a local storage area is presented. A processing engine includes an atomic cache and a local storage. When the processing engine encounters a request to transfer cache line data from the atomic cache to the local storage (e.g., a GETTLAR command), the processing engine utilizes a data-only transfer protocol to pass cache line data through the external bus node and back to the processing engine. The data-only transfer protocol comprises a data phase and does not include a prior command phase or snoop phase, because the processing engine communicates with the bus node, rather than with the entire computer system, when it sends a data request to transfer data to itself.
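The phase difference between the two paths can be illustrated with a toy bus model. The sketch below assumes a BusNode class with one method per path and dict-backed atomic cache and local storage; the class, method names, and phase labels are assumptions used only to show that the self-directed transfer runs a data phase alone.

```python
class BusNode:
    # Toy model of the external bus node attached to the processing engine.

    def coherent_transfer(self, line):
        # Conventional path: command and snoop phases involve every bus agent
        # before the data phase moves the line.
        return ["command", "snoop", "data"], line

    def data_only_transfer(self, line):
        # Data-only path: the engine moves its own cache line through the bus
        # node and back, so no command or snoop phase is broadcast.
        return ["data"], line

def copy_atomic_line_to_local_store(bus, atomic_cache, local_store, tag):
    phases, data = bus.data_only_transfer(atomic_cache[tag])
    local_store[tag] = data
    return phases

# Example: the line reaches local storage after a single data phase.
bus = BusNode()
print(copy_atomic_line_to_local_store(bus, {0x80: b"line-data"}, {}, 0x80))  # -> ['data']
```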
Abstract:
An apparatus, a method, and a computer program product are provided for reducing the time of an array read access control consisting of a bitcell with logic gating and a pull-down device included therein. To reduce gate delay, this design implements the gating logic inside the bitcell. The multiplex select gating signals are brought into the bitcell and are gated with the array data. The gating logic controls the pull-down device, and MUX select signals can be produced as a readout of the bitcell. This design reduces gate delay because the dependency upon the gating logic is overridden and the number of stages is reduced.
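A minimal logic-level sketch (not a circuit netlist) of placing the gating inside the cell: the mux-select signal is combined with the stored bit in each cell, and the wired-OR of the per-cell pull-downs yields the selected readout without a separate downstream mux stage. Signal names and the readout polarity are illustrative assumptions.

```python
def cell_pull_down(stored_bit, mux_select):
    # Gating logic inside the bitcell: the pull-down device is enabled only
    # when this cell is selected and stores a '1'.
    return stored_bit & mux_select

def column_readout(stored_bits, mux_selects):
    # Wired-OR of the per-cell pull-downs models the shared readout line, so
    # the mux selection happens in the same stage as the cell read.
    return any(cell_pull_down(b, s) for b, s in zip(stored_bits, mux_selects))

# Example: a 4-entry column with entry 2 selected reads out that entry's bit.
print(column_readout([1, 0, 1, 0], [0, 0, 1, 0]))   # -> True
```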
Abstract:
The present invention provides for a plurality of partitioned ways of an associative cache. A pseudo-least recently used binary tree is provided, as is a way partition binary tree, and signals are derived from the way partition binary tree as a function of a mapped partition. Signals from the way partition binary tree and the pseudo-least recently used binary tree are combined. A cache line replacement signal is employable to select one way of a partition as a function of the pseudo-least recently used binary tree and the signals derived from the way partition binary tree.
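A minimal sketch of combining the two trees, assuming a 4-way cache and a partition expressed as a set of allowed ways; the way-partition tree is modeled here by forcing each pseudo-LRU node toward a subtree that still contains an allowed way, and the names and bit convention (1 means descend right) are illustrative assumptions.

```python
def plru_victim(plru_bits, allowed_ways, num_ways=4):
    # Walk the pseudo-LRU binary tree, but only descend into subtrees that
    # still hold a way belonging to the mapped partition.
    node, lo, hi = 0, 0, num_ways
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left_ok = any(w in allowed_ways for w in range(lo, mid))
        right_ok = any(w in allowed_ways for w in range(mid, hi))
        # The partition-tree signal overrides the PLRU bit whenever only one
        # side of this node contains an allowed way.
        go_right = plru_bits[node] if (left_ok and right_ok) else right_ok
        if go_right:
            node, lo = 2 * node + 2, mid
        else:
            node, hi = 2 * node + 1, mid
    return lo

# Example: the PLRU bits alone would point outside the partition, but the
# combined signals select a way inside the mapped partition {0, 1}.
print(plru_victim([1, 0, 1], allowed_ways={0, 1}))   # -> 0
```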
Abstract:
Disclosed is an electronic chip containing a plurality of electronic circuit partitions distributed over the area of the chip, each including a processor core and a clock phase domain different from the cores in other partitions of the chip. A source of same-frequency but different-phase clock signals, representing different clock domains, provides different phase signals to adjacent partitions for the purpose of reducing the instantaneous magnitude of switching currents. Intra-chip communication circuitry distributes control and data signals between partitions.
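A minimal sketch of one possible phase assignment, assuming a rectangular grid of partitions and a set of equally spaced clock phases at the same frequency; the grid layout and phase count are illustrative assumptions. Giving neighboring partitions different phases spreads their clock edges, and therefore their peak switching currents, across the cycle.

```python
def assign_phases(rows, cols, num_phases):
    # Assign each partition a phase index 0..num_phases-1 so that no two
    # horizontally or vertically adjacent partitions share a phase.
    return [[(r + c) % num_phases for c in range(cols)] for r in range(rows)]

# Example: a 2x3 chip with 3 phases; adjacent partitions switch on different
# clock edges, interleaving their instantaneous current demands.
for row in assign_phases(2, 3, 3):
    print(row)
```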
Abstract:
A cache memory having a mechanism for managing cache lines replacement is disclosed. The cache memory comprises multiple cache lines partitioned into a first group and a second group. The number of cache lines in the second group is preferably larger than the number of cache lines in the first group. A replacement logic block selectively chooses a cache line from one of the two groups of cache lines for replacement during an allocation cycle.
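A minimal sketch of the two-group selection, assuming the groups are identified by way indices and a simple policy bit decides which group supplies the victim on an allocation cycle; the class name, group sizes, and the random choice within a group are illustrative assumptions rather than the patented replacement logic.

```python
import random

class TwoGroupReplacement:
    def __init__(self, first_group_ways, second_group_ways):
        self.first = list(first_group_ways)      # smaller first group
        self.second = list(second_group_ways)    # preferably larger second group

    def choose_victim(self, prefer_first):
        # Replacement logic block: selectively pick a cache line from one of
        # the two groups during the allocation cycle.
        group = self.first if prefer_first else self.second
        return random.choice(group)

# Example: an 8-way set partitioned into groups of 2 and 6 ways.
repl = TwoGroupReplacement(first_group_ways=[0, 1], second_group_ways=range(2, 8))
print(repl.choose_victim(prefer_first=False))
```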
Abstract:
An interleaved data cache array which is divided into two subarrays is provided for utilization within a data processing system. Each subarray includes a plurality of cache lines, wherein each cache line includes a selected block of data, a parity field, a first content addressable field containing a portion of an effective address for the selected block of data, a second content addressable field containing a portion of the real address for the selected block of data, and a data status field. By utilizing two separate content addressable fields for the effective address and the real address, offset and alias problems may be efficiently resolved. A virtual address aliasing condition is identified by searching each cache line for a match between a portion of a desired effective address and the content of the first content addressable field. The desired effective address is translated into a desired real address, and a portion of the desired real address is then utilized to search each cache line for a match with the content of the second content addressable field if no match was found within the first content addressable field during the previous cycle. An offset condition is identified by comparing the translated real address to the content of the second content addressable field in a cache line when a match has occurred between the desired effective address and the content of the first content addressable field within that cache line.
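A minimal sketch of the two-field lookup, assuming cache lines are records with ea_field and ra_field tags and that address translation is available as a callback; the field names, tag slices, and the labels returned for the offset and aliasing cases are illustrative assumptions.

```python
def ea_portion(ea):
    # Assumed tag slice of the effective address held in the first CAM field.
    return ea & 0xFFF

def ra_portion(ra):
    # Assumed tag slice of the real address held in the second CAM field.
    return ra & 0xFFF

def lookup(lines, desired_ea, translate):
    desired_ra = translate(desired_ea)
    for line in lines:
        if line["ea_field"] == ea_portion(desired_ea):
            # Effective-address match: compare the translated real address with
            # the second CAM field; a mismatch flags an offset condition.
            if line["ra_field"] == ra_portion(desired_ra):
                return "hit", line
            return "offset", line
    # No effective-address match anywhere: search the second (real-address)
    # CAM field in a following cycle to catch the aliasing case.
    for line in lines:
        if line["ra_field"] == ra_portion(desired_ra):
            return "alias", line
    return "miss", None

# Example: two lines, one matching both the effective- and real-address tags.
lines = [{"ea_field": 0x123, "ra_field": 0x456},
         {"ea_field": 0xABC, "ra_field": 0x789}]
print(lookup(lines, desired_ea=0x7123, translate=lambda ea: 0x9456))  # -> ('hit', ...)
```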
Abstract:
Systems and methods for controlling access by a set of agents to a resource, where the agents have corresponding priorities associated with them, and where a monitor associated with the resource controls accesses by the agents to the resource based on the priorities. One embodiment is implemented in a computer system having multiple processors that are connected to a processor bus. The processor bus includes a shaping monitor configured to control access by the processors to the bus. The shaping monitor attempts to distribute the accesses from each of the processors throughout a base period according to priorities assigned to the processors. The shaping monitor allocates slots to the processors in accordance with their relative priorities. Priorities are initially assigned according to the respective bandwidth needs of the processors, but may be modified based upon comparisons of actual to expected accesses to the bus.
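A minimal sketch of the slot allocation, assuming the base period is divided into discrete slots and each processor's priority acts as a relative bandwidth weight; the weighted round-robin below and its names are illustrative assumptions, not the patented shaping monitor, and the feedback that adjusts priorities from actual versus expected accesses is omitted.

```python
def allocate_slots(priorities, slots_in_period):
    # Map each slot in the base period to a processor id, spreading each
    # processor's accesses through the period in proportion to its priority.
    total = sum(priorities.values())
    schedule, credit = [], {p: 0.0 for p in priorities}
    for _ in range(slots_in_period):
        for p, w in priorities.items():
            credit[p] += w / total            # accumulate fractional entitlement
        winner = max(credit, key=credit.get)  # most-entitled processor gets the slot
        credit[winner] -= 1.0
        schedule.append(winner)
    return schedule

# Example: cpu0 receives twice the bus slots of cpu1, distributed across the period.
print(allocate_slots({"cpu0": 2, "cpu1": 1}, slots_in_period=6))
```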