Abstract:
A transfer request bus and transfer request bus node are described that are suitable for use in a data transfer controller processing multiple concurrent transfer requests despite the collisions that result when conflicting transfer requests occur. Transfer requests are passed from an upstream transfer request node to a downstream transfer request node and then to a transfer request controller with a queue. At each node a local transfer request can also be inserted to be passed on to the transfer controller queue. Collisions at each transfer request node are resolved using a token passing scheme wherein a transfer request node possessing the token inserts its local request in preference to the upstream request.
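The abstract does not specify the node interface, so the following is only a minimal Python sketch of one plausible reading of the token passing arbitration: on a collision the token holder forwards its local request and buffers the upstream one, while a node without the token lets the upstream request through. All class, attribute, and method names are illustrative, not taken from the patent.

```python
from collections import deque

class TransferRequestNode:
    """One node on the transfer request bus (illustrative sketch)."""

    def __init__(self, has_token=False):
        self.has_token = has_token
        self.local = deque()   # locally generated transfer requests
        self.held = deque()    # upstream requests buffered after losing arbitration

    def arbitrate(self, upstream):
        """Resolve one bus cycle: return the single request passed downstream."""
        if upstream is not None:
            self.held.append(upstream)
        if self.local and self.held:                  # collision
            winner = self.local if self.has_token else self.held
            return winner.popleft()                   # loser stays queued
        if self.local:
            return self.local.popleft()
        if self.held:
            return self.held.popleft()
        return None
```

A chain of such nodes would feed the transfer controller queue, with the token circulating so that every node eventually gets priority for its local requests.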
Abstract:
A transfer controller with hub and ports is viewed as a communication hub between the various locations of a global memory map. A request queue manager serves as a crucial part of the transfer controller. The request queue manager receives data transfer request packets from plural transfer request nodes, sorts the transfer request packets by their priority level, and stores them in the queue manager memory. The request queue manager dispatches transfer request packets to a free data channel based upon priority level and first-in-first-out order within each priority level.
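The dispatch policy described here, highest priority first and first-in-first-out within a priority level, is a stable priority queue. A minimal Python sketch follows; the class and method names are illustrative, and an arrival counter serves as the tie-breaker that preserves FIFO order within a level.

```python
import heapq
from itertools import count

class RequestQueueManager:
    """Illustrative sketch: priority dispatch, FIFO within a priority level."""

    def __init__(self):
        self._heap = []          # entries: (priority, arrival order, packet)
        self._arrival = count()  # monotonic tie-breaker for FIFO within a level

    def receive(self, priority, packet):
        # Lower numbers model higher priority levels.
        heapq.heappush(self._heap, (priority, next(self._arrival), packet))

    def dispatch(self, free_channels):
        """Assign queued packets to the given free data channels."""
        assignments = []
        while self._heap and free_channels:
            _, _, packet = heapq.heappop(self._heap)
            assignments.append((free_channels.pop(0), packet))
        return assignments

qm = RequestQueueManager()
qm.receive(1, "req-A")               # lower priority, arrived first
qm.receive(0, "req-B")               # higher priority, arrived second
print(qm.dispatch(["ch0", "ch1"]))   # [('ch0', 'req-B'), ('ch1', 'req-A')]
```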
Abstract:
A data processing apparatus includes a central processing unit and a memory configurable as cache memory and directly addressable memory. The memory is selectively configurable as cache memory and directly addressable memory by configuring a selected number of ways as directly addressable memory and configuring the remaining ways as cache memory. Control logic inhibits both the indication that tag bits match address bits and the indication that a cache entry is the least recently used entry for cache eviction if the corresponding way is configured as directly addressable memory. In an alternative embodiment, the memory is selectively configurable as cache memory and directly addressable memory by configuring a selected number of sets equal to 2^M, where M is an integer, as cache memory and configuring the remaining sets as directly addressable memory.
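To make the way-based inhibit logic concrete, here is a Python sketch of a single set of an N-way cache in which some ways are configured as directly addressable memory: those ways are skipped during tag comparison (so they never signal a hit) and excluded from LRU victim selection. The structure and names are illustrative assumptions, not the patent's implementation.

```python
class WayConfigurableCache:
    """Illustrative sketch of one set of a way-configurable cache."""

    def __init__(self, num_ways, ram_ways):
        self.ram_ways = set(ram_ways)          # ways used as direct RAM
        self.tags = [None] * num_ways
        # LRU order is tracked only over ways still operating as cache.
        self.lru = [w for w in range(num_ways) if w not in self.ram_ways]

    def lookup(self, tag):
        for way, stored in enumerate(self.tags):
            if way in self.ram_ways:
                continue                       # inhibit tag match for RAM ways
            if stored == tag:
                self.lru.remove(way)           # mark as most recently used
                self.lru.append(way)
                return way                     # cache hit
        return None                            # cache miss

    def fill(self, tag):
        victim = self.lru.pop(0)               # LRU among cache ways only
        self.tags[victim] = tag
        self.lru.append(victim)
        return victim
```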
Abstract:
A data processing system having a central processing unit, at least one level one cache, a level two unified cache, a directly addressable memory and a direct memory access unit includes a snoop unit generating snoop accesses to the at least one level one cache upon a direct memory access to the directly addressable memory. The snoop unit generates a write snoop access to both level one caches upon either a direct memory access write to or a direct memory access read from the directly addressable memory. Upon a snoop hit, the level one cache invalidates the cache entry and writes back a dirty cache entry to the directly addressable memory. The level two memory is selectively configurable as part level two unified cache and part directly addressable memory.
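A minimal Python sketch of this coherence behavior follows: any DMA access to the directly addressable memory, read or write, snoops the level one caches; a snoop hit writes back dirty data and invalidates the entry, so the DMA sees (or does not destroy) the most recent value. The `Line`, `L1Cache`, and `SnoopUnit` names and the dict-based memory model are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Line:
    data: int
    dirty: bool = False

class L1Cache:
    def __init__(self):
        self.lines = {}                          # address -> Line

class SnoopUnit:
    """Illustrative sketch of DMA-triggered snooping of the L1 caches."""

    def __init__(self, l1_caches, ram):
        self.l1_caches = l1_caches               # e.g. [L1 instruction, L1 data]
        self.ram = ram                           # directly addressable memory

    def dma_access(self, address, write_data=None):
        for cache in self.l1_caches:
            line = cache.lines.get(address)
            if line is not None:                 # snoop hit
                if line.dirty:
                    self.ram[address] = line.data  # write back dirty entry
                del cache.lines[address]         # invalidate the cache entry
        if write_data is not None:               # DMA write
            self.ram[address] = write_data
        return self.ram.get(address)             # DMA read returns current value
```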
Abstract:
This invention is a data processing system including a central processing unit executing program instructions to manipulate data, at least one level one cache, a level two unified cache, a directly addressable memory and a direct memory access unit adapted for connection to an external memory. A superscalar memory transfer controller schedules plural non-interfering memory movements to and from the level two unified cache and the directly addressable memory each memory cycle in accordance with a predetermined priority of operation. The level one cache preferably includes a level one instruction cache and a level one data cache. The superscalar memory transfer controller is capable of scheduling plural cache tag memory read accesses and one cache tag memory write access in a single memory cycle. The superscalar memory transfer controller is capable of scheduling plural cache access state machines in a single memory cycle. The superscalar memory transfer controller is capable of scheduling plural memory accesses to non-interfering memory banks of the level two unified cache in a single memory cycle.
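The bank-level scheduling rule can be sketched in a few lines of Python: walk the pending accesses in priority order and issue each one whose target bank has not already been claimed this cycle, deferring the rest. The function signature and the `bank_of` mapping are assumptions for illustration only.

```python
def schedule_cycle(pending, bank_of):
    """Issue as many pending accesses as possible in one memory cycle.

    Two accesses interfere if they target the same memory bank.
    `pending` is assumed to be ordered by the predetermined priority.
    """
    busy_banks = set()
    issued, deferred = [], []
    for access in pending:
        bank = bank_of(access)
        if bank in busy_banks:
            deferred.append(access)   # interferes; retry next cycle
        else:
            busy_banks.add(bank)
            issued.append(access)     # non-interfering; issue this cycle
    return issued, deferred

# Example: four banks selected by the low address bits.
print(schedule_cycle([0x100, 0x104, 0x200], lambda addr: (addr >> 2) % 4))
```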
Abstract:
A method generates a list of allowed states in a cache design by applying each input transaction sequentially to all legal cache states found so far. If application of an input transaction to a current search cache state results in a new cache state, then this new cache state is added to the list of legal cache states and to a list of search cache states. This is repeated for all input transactions and all legal cache states so found. At the same time, a sequence of input transactions reaching each new cache state is formed; this new sequence is the sequence of input transactions reaching the prior cache state followed by the current input transaction. The method generates a series of test sequences from the list of allowed states and their corresponding sequences of input transactions, which are applied to the cache design control logic and to a reference memory. If the response of the cache design control logic fails to match the response of the reference memory, then a design fault is detected.
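The state enumeration described here is a breadth-first search over cache states, carrying along the transaction sequence that reaches each state. The following Python sketch assumes hashable states and a caller-supplied `apply_txn` transition function; those names are illustrative.

```python
from collections import deque

def enumerate_legal_states(initial_state, transactions, apply_txn):
    """Illustrative sketch: find all legal cache states and a reaching
    sequence of input transactions for each one."""
    sequences = {initial_state: ()}          # legal state -> reaching sequence
    frontier = deque([initial_state])        # search cache states to expand
    while frontier:
        state = frontier.popleft()
        for txn in transactions:
            new_state = apply_txn(state, txn)
            if new_state not in sequences:   # a newly found legal state
                # Prior state's sequence plus the current transaction.
                sequences[new_state] = sequences[state] + (txn,)
                frontier.append(new_state)
    return sequences
```

Each recorded sequence can then be replayed against both the cache control logic design and a reference memory; any divergence between their responses flags a design fault, as the abstract describes.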