Abstract:
A dynamically configured memory controller to prevent speculative memory accesses to non-well-behaved memory. Such dynamic configuration of a memory controller may be accomplished by providing to the memory controller guard information associated with memory requests. The memory controller may then prevent speculative memory accesses when the guard information indicates that the memory requests are to non-well-behaved memory. The guard information provided to the memory controller may include guard information associated with each memory request provided to the memory controller.
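The guard check itself is simple to model. Below is a minimal C sketch, using illustrative names (`mem_request_t`, `speculative`, `guarded`, `may_issue`) that are not taken from the disclosure, of a controller-side gate that holds back speculative accesses when the per-request guard information marks the target memory as non-well-behaved.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative request descriptor: each memory request arrives at the
 * controller together with guard information indicating whether the
 * target memory is well behaved. */
typedef struct {
    uint64_t address;
    bool     speculative;  /* access issued ahead of confirmation    */
    bool     guarded;      /* guard info: target is non-well-behaved */
} mem_request_t;

/* Returns true if the controller may issue the access now.  Speculative
 * accesses to guarded (non-well-behaved) memory are held until the
 * request is no longer speculative. */
static bool may_issue(const mem_request_t *req)
{
    if (req->speculative && req->guarded)
        return false;   /* block the speculative access */
    return true;
}
```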
Abstract:
A communication system and method of communicating including a slave function connected to a master function by a single address bus, a write data bus and a read data bus so as to allow for overlapping multiple cycle read and write operations between the master function and the slave function. Preferably the communication system includes a plurality of slave functions connected to a master function by the single address bus, the write data bus and the read data bus. A plurality of master functions may be connected to the slave functions through a bus arbiter connected to the plurality of master functions by an address bus, a write data bus and a read data bus for each master function. The bus arbiter receives requests for communication operations from the plurality of master functions and selectively transmits the communication operations to the slave functions. In a preferred embodiment of the present invention, the master function and the slave function are further connected by a plurality of transfer qualifier signals which may specify whether the operation is a read or a write operation, the size of the transfer, the direction of the transfer or the type of transfer so as to further facilitate multiple cycle transfers with a single address specified on the single address bus.
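As an informal illustration only, the command structure described above might be modeled in C as a single address-bus value accompanied by transfer qualifiers; the field names below are assumptions of this sketch, not the signal names used in the disclosure.

```c
#include <stdint.h>
#include <stdbool.h>

/* Transfer qualifiers driven alongside the single address bus. */
typedef struct {
    bool    is_write;   /* read or write operation               */
    uint8_t size;       /* size of each data beat in bytes       */
    uint8_t direction;  /* direction of the transfer             */
    uint8_t type;       /* type of transfer (single, burst, ...) */
    uint8_t beats;      /* data cycles covered by one address    */
} xfer_qual_t;

/* One command occupies the shared address bus for a single cycle; the
 * qualifiers then describe a multiple-cycle data phase on the separate
 * read or write data bus.  Because read data and write data travel on
 * different buses, a read burst and a write burst can overlap after
 * their address cycles. */
typedef struct {
    uint32_t    address;  /* value placed on the single address bus */
    xfer_qual_t qual;
} bus_command_t;
```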
Abstract:
A process and system for transferring data including at least one slave device connected to at least one master device through an arbiter device. The master and slave devices are connected by a single address bus, a write data bus and a read data bus. The arbiter device receives requests for data transfers from the master devices and selectively transmits the requests to the slave devices. The master devices and the slave devices are further connected by a plurality of transfer qualifier signals which may specify predetermined characteristics of the requested data transfers. Control signals are also communicated between the arbiter device and the slave devices to allow appropriate slave devices to latch addresses of requested second transfers during the pendency of current or primary data transfers so as to obviate an address transfer latency typically required for the second transfer. The design is configured to advantageously function in mixed systems which may include address-pipelining and non-address-pipelining slave devices.
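A minimal C sketch of the slave-side latch, using hypothetical names (`slave_t`, `offer_secondary`) not found in the disclosure, shows how a pipelining slave can accept the address of a secondary request while its primary data transfer is still in progress, while a non-pipelining slave simply declines.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative slave-side state for address pipelining. */
typedef struct {
    bool     busy;             /* primary data transfer in progress    */
    bool     supports_pipeline;
    bool     secondary_valid;  /* a secondary address has been latched */
    uint32_t secondary_addr;
} slave_t;

/* The arbiter offers the address of the next (secondary) request.  A
 * pipelining slave latches it during the current transfer, hiding the
 * address latency of the second transfer; a non-pipelining slave
 * leaves the request with the arbiter, so both kinds coexist on the
 * same buses. */
static bool offer_secondary(slave_t *s, uint32_t addr)
{
    if (s->busy && s->supports_pipeline && !s->secondary_valid) {
        s->secondary_addr  = addr;
        s->secondary_valid = true;
        return true;
    }
    return false;
}
```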
Abstract:
Bus performance in a computer system having multiple devices accessing a common shared bus may be improved by increasing throughput and decreasing latency while accounting for dynamic changes in bus usage. Devices submit a priority level along with a bus request to a bus controller. Upon receiving multiple requests, an arbiter of the bus controller compares the priority levels associated with the different bus requests and grants control of the bus to the device having the highest priority level. During each cycle that a device has control of the bus, a feedback logic circuit of the bus controller determines whether other bus requests are pending, and if so, determines the highest pending request priority level. Signals corresponding to the results of these determinations are fed back to each device. The device having control of the bus uses the combination of the currently pending request priority level and the device's own latency timer to determine whether it should maintain control of the bus or relinquish control of the bus.
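The decision made by the bus owner can be sketched in a few lines of C; the names (`bus_owner_t`, `should_relinquish`) and the exact tie-breaking rule are assumptions of this sketch rather than the patented logic.

```c
#include <stdint.h>
#include <stdbool.h>

/* State kept by the device that currently owns the bus. */
typedef struct {
    uint8_t  my_priority;     /* priority submitted with its own request */
    uint32_t latency_timer;   /* cycles of ownership remaining           */
} bus_owner_t;

/* Each cycle the bus controller feeds back whether another request is
 * pending and the highest pending priority; the owner combines that
 * with its latency timer to decide whether to keep or give up the bus. */
static bool should_relinquish(const bus_owner_t *dev,
                              bool pending, uint8_t pending_priority)
{
    if (!pending)
        return false;                           /* nobody is waiting      */
    if (dev->latency_timer == 0)
        return true;                            /* ownership budget spent */
    return pending_priority > dev->my_priority; /* yield to higher prio   */
}
```

Feeding back only a pending flag plus the highest pending priority keeps the per-device logic small while still letting each owner react to dynamic changes in bus usage.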
Abstract:
A communication system is provided for use in processing systems and the like for carrying out data transfer operations between a first data bus and a peripheral device associated with a second data bus, wherein the first data bus operates at a first clock speed and wherein the second data bus operates at a second clock speed which is different from the first clock speed and is a 1/N integer multiple of the first clock speed. A sample signal associated with the second clock speed is received, and the speed of operation of a state machine of a peripheral controller is dynamically adjusted in response to the sample signal such that the state machine of the peripheral controller operates at the second clock speed and causes operations on the second data bus to occur synchronously at the second clock speed.
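One way to picture the dynamic adjustment is a state machine that is evaluated on the fast first-bus clock but advances only when the sample signal is asserted; the state names and the `periph_tick` helper below are assumptions of this C sketch.

```c
#include <stdbool.h>

/* Illustrative peripheral-controller state machine. */
typedef enum { P_IDLE, P_ADDR, P_DATA } pstate_t;

typedef struct {
    pstate_t state;
} periph_ctrl_t;

/* Called on every first-bus clock cycle.  The state machine steps only
 * when the sample signal marks an edge of the second (1/N) clock, so
 * operations on the second data bus occur synchronously at the slower
 * clock speed. */
static void periph_tick(periph_ctrl_t *c, bool sample, bool start)
{
    if (!sample)
        return;                 /* hold state until the next slow edge */
    switch (c->state) {
    case P_IDLE: if (start) c->state = P_ADDR; break;
    case P_ADDR: c->state = P_DATA;            break;
    case P_DATA: c->state = P_IDLE;            break;
    }
}
```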
Abstract:
A method and apparatus cause a data line to be fetched in an order consistent with the data structure of a processor's modified little endian mode or big endian mode of operation when the processor requests a particular word that is not currently stored in cache memory. The request includes an address of the particular word, and an indication is provided as to whether the processor is operating in the modified little endian mode or the big endian mode. A memory manager, upon receiving the request, retrieves a line of data from memory (storage device) based on the address and the mode of operation. For example, when the big endian mode is used, the line of data is retrieved using a target word first ordering, and when the modified little endian mode is used, the line of data is retrieved using a reverse target word first ordering.
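The two fetch orders can be illustrated with a short C helper; the line size, the wrap-around behavior, and the interpretation of "reverse target word first" below are assumptions of this sketch, not the exact hardware sequencing.

```c
#include <stdbool.h>

#define WORDS_PER_LINE 8   /* illustrative cache-line size in words */

/* Fill 'order' with the sequence of word indices in which to fetch a
 * line containing 'target_word'.  In big endian mode the target word
 * is fetched first and the sequence wraps forward; in modified little
 * endian mode the sequence runs in the reverse target-word-first
 * order. */
static void line_fetch_order(unsigned target_word, bool big_endian,
                             unsigned order[WORDS_PER_LINE])
{
    for (unsigned i = 0; i < WORDS_PER_LINE; i++) {
        if (big_endian)
            order[i] = (target_word + i) % WORDS_PER_LINE;
        else
            order[i] = (target_word + WORDS_PER_LINE - i) % WORDS_PER_LINE;
    }
}
```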
Abstract:
Systems and methods for memory management units (MMUs) configured to automatically pre-fill a translation lookaside buffer (TLB) with address translation entries expected to be used in the future, thereby reducing the TLB miss rate and improving performance. The TLB may be pre-filled with translation entries whose addresses are selected based on predictions. Predictions may be derived from external devices or based on stride values, where the stride values may be a predetermined constant or dynamically altered based on access patterns. Pre-filling the TLB may effectively remove the latency of determining address translations for TLB misses from the critical path.
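A toy software model helps make the stride-based case concrete; everything below (the direct-mapped table, the identity page walk, the function names) is an assumption of this C sketch and stands in for hardware structures.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64

/* Toy direct-mapped TLB, used only to illustrate the pre-fill idea. */
typedef struct { bool valid; uint64_t vpn, ppn; } tlb_entry_t;
static tlb_entry_t tlb[TLB_ENTRIES];

/* Page-table walk stub: identity mapping, purely for the sketch. */
static uint64_t walk_page_table(uint64_t vpn) { return vpn; }

static void tlb_insert(uint64_t vpn, uint64_t ppn)
{
    tlb_entry_t *e = &tlb[vpn % TLB_ENTRIES];
    e->valid = true; e->vpn = vpn; e->ppn = ppn;
}

static bool tlb_lookup(uint64_t vpn, uint64_t *ppn)
{
    tlb_entry_t *e = &tlb[vpn % TLB_ENTRIES];
    if (e->valid && e->vpn == vpn) { *ppn = e->ppn; return true; }
    return false;
}

/* On a miss, fill the missing translation and also pre-fill the entry
 * predicted by the stride value, so the predicted translation is ready
 * before it is needed. */
static uint64_t translate_with_prefill(uint64_t vpn, int64_t stride)
{
    uint64_t ppn, dummy;
    if (!tlb_lookup(vpn, &ppn)) {
        ppn = walk_page_table(vpn);                  /* demand fill    */
        tlb_insert(vpn, ppn);
        uint64_t next = vpn + (uint64_t)stride;      /* predicted page */
        if (!tlb_lookup(next, &dummy))
            tlb_insert(next, walk_page_table(next)); /* pre-fill       */
    }
    return ppn;
}
```

In this model, a call such as `translate_with_prefill(vpn, 1)` pre-fills the entry for the next sequential page on every miss, which mirrors the constant-stride case described above.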
Abstract:
The disclosure is directed to a weakly-ordered processing system and method for enforcing strongly-ordered memory access requests in a weakly-ordered processing system. The processing system includes a plurality of memory devices and a plurality of processors. Each of the processors is configured to generate memory access requests to one or more of the memory devices, with each of the memory access requests having an attribute that can be asserted to indicate a strongly-ordered request. The processing system further includes a bus interconnect configured to interface the processors to the memory devices, the bus interconnect being further configured to enforce ordering constraints on the memory access requests based on the attributes.
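A minimal C sketch of the interconnect-side check, with hypothetical names (`access_req_t`, `may_forward`) and a deliberately simplified stall rule, illustrates how an asserted strongly-ordered attribute can gate a request.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative memory access request as seen by the bus interconnect. */
typedef struct {
    uint32_t processor_id;
    uint64_t address;
    bool     strongly_ordered;  /* attribute asserted by the processor */
} access_req_t;

/* The interconnect forwards a strongly-ordered request only after all
 * earlier requests from the same processor have completed, enforcing
 * the ordering constraint in an otherwise weakly-ordered system.  The
 * caller supplies the count of that processor's requests still in
 * flight at the memory devices. */
static bool may_forward(const access_req_t *req,
                        unsigned outstanding_from_same_processor)
{
    if (req->strongly_ordered && outstanding_from_same_processor > 0)
        return false;   /* stall until earlier accesses complete */
    return true;        /* weakly-ordered requests flow freely   */
}
```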
Abstract:
A first processing unit and a second processing unit can access a system memory that stores a common page table that is common to the first processing unit and the second processing unit. The common page table can store mappings of virtual memory addresses to physical memory addresses for memory chunks accessed by a job of an application. A page entry, within the common page table, can include a first set of attribute bits that defines accessibility of the memory chunk by the first processing unit, a second set of attribute bits that defines accessibility of the same memory chunk by the second processing unit, and physical address bits that define a physical address of the memory chunk.
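For illustration, such a page entry might be pictured as a 64-bit word with two small attribute fields and the physical address bits; the bit positions and widths below are assumptions of this C sketch, not the actual entry format.

```c
#include <stdint.h>

/* Assumed layout of one common page-table entry (sketch only). */
#define PU1_ATTR_SHIFT 0            /* accessibility bits, first processing unit  */
#define PU2_ATTR_SHIFT 4            /* accessibility bits, second processing unit */
#define ATTR_MASK      0xFULL
#define PADDR_MASK     (~0xFFFULL)  /* physical address bits of the memory chunk  */

static uint64_t pu1_attr(uint64_t pte)  { return (pte >> PU1_ATTR_SHIFT) & ATTR_MASK; }
static uint64_t pu2_attr(uint64_t pte)  { return (pte >> PU2_ATTR_SHIFT) & ATTR_MASK; }
static uint64_t phys_addr(uint64_t pte) { return pte & PADDR_MASK; }

/* Each processing unit reads the same entry but applies only its own
 * attribute field when deciding whether the chunk is accessible. */
static int accessible_by_first_unit(uint64_t pte)  { return pu1_attr(pte) != 0; }
static int accessible_by_second_unit(uint64_t pte) { return pu2_attr(pte) != 0; }
```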
Abstract:
Configuring a surrogate memory accessing agent using an instruction for translating and storing a data value is described. In one embodiment, the instruction is received that includes a first operand specifying a data value to be translated and a second operand specifying a virtual address associated with a location of a surrogate memory accessing agent register in which to store the data value. The data value can be translated to a first physical address. The virtual address can be translated to a second physical address. The first physical address is stored in the surrogate memory accessing agent register based on the second physical address.
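A small C model, with a stub translation function and a stub register file standing in for the MMU and the surrogate memory accessing agent (both assumptions of this sketch), shows the two translations and the final store described above.

```c
#include <stdint.h>

typedef uint64_t vaddr_t;
typedef uint64_t paddr_t;

/* Stand-in for the MMU: identity translation, purely for the sketch. */
static paddr_t translate(vaddr_t va) { return va; }

/* Stand-in for the surrogate memory accessing agent's register file. */
static paddr_t agent_regs[16];
static void agent_reg_store(paddr_t reg_pa, paddr_t value)
{
    agent_regs[reg_pa % 16] = value;
}

/* Model of the described instruction: translate the data-value operand
 * to a first physical address, translate the virtual-address operand
 * to a second physical address, and store the first physical address
 * in the agent register located by the second. */
static void translate_and_store(vaddr_t data_value, vaddr_t reg_vaddr)
{
    paddr_t data_pa = translate(data_value);  /* first operand  -> PA */
    paddr_t reg_pa  = translate(reg_vaddr);   /* second operand -> PA */
    agent_reg_store(reg_pa, data_pa);
}
```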