Abstract:
This invention hides the page miss translation latency for program fetches. Whenever the CPU requests an access that crosses a memory page boundary, the L1I cache controller requests the next page translation along with the current page. This pipelines requests to the μTLB without waiting for the L1I cache controller to begin processing the next page request, making the second page translation request a deterministic prefetch. The translation information for the second page is stored locally in the L1I cache controller and used when the access crosses the next page boundary.
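As a rough illustration, the following C sketch models the idea under stated assumptions: the 4 KiB page, the 64-byte fetch packet, and all names (utlb_lookup, translate_fetch, and so on) are hypothetical, not taken from the disclosure. When a fetch touches the last packet of a page, the controller pipelines a translation request for the next page and holds the result locally for the boundary crossing.

```c
/* Minimal sketch (not the patented hardware) of next-page translation
 * prefetch: when a fetch touches the last fetch packet of a page, the
 * controller also asks the uTLB for the next page's translation and
 * caches it locally for use when execution crosses the boundary. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u
#define FETCH_BYTES 64u                      /* assumed fetch-packet width */

static uint64_t utlb_lookup(uint64_t vpage)  /* stand-in for the uTLB */
{
    return vpage ^ 0xABCD0000u;              /* fake virtual->physical map */
}

static uint64_t cached_vpage = UINT64_MAX;   /* locally stored next-page... */
static uint64_t cached_ppage;                /* ...translation */

static uint64_t translate_fetch(uint64_t vaddr)
{
    uint64_t vpage = vaddr / PAGE_SIZE;
    uint64_t ppage;

    if (vpage == cached_vpage) {
        ppage = cached_ppage;                /* boundary crossed: use the
                                                prefetched translation, so no
                                                uTLB miss latency is exposed */
    } else {
        ppage = utlb_lookup(vpage);          /* normal current-page lookup */
    }

    /* Deterministic prefetch: if this fetch packet is the last one in the
     * page, pipeline a request for the next page's translation now. */
    if (vaddr % PAGE_SIZE >= PAGE_SIZE - FETCH_BYTES) {
        cached_vpage = vpage + 1;
        cached_ppage = utlb_lookup(cached_vpage);
    }
    return ppage * PAGE_SIZE + vaddr % PAGE_SIZE;
}

int main(void)
{
    /* Walk fetches across a page boundary; the second page's translation
     * is already cached when the crossing packet arrives. */
    for (uint64_t a = PAGE_SIZE - 2 * FETCH_BYTES;
         a < PAGE_SIZE + FETCH_BYTES; a += FETCH_BYTES)
        printf("va=%#llx -> pa=%#llx\n",
               (unsigned long long)a,
               (unsigned long long)translate_fetch(a));
    return 0;
}
```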
Abstract:
Disclosed embodiments include an electronic device having a processor core, a memory, a register, and a data load unit to receive a plurality of data elements stored in the memory in response to an instruction. All of the data elements have the same data size, which is specified by one or more coding bits. The data load unit includes an address generator to generate addresses corresponding to locations in the memory at which the data elements are located, and a formatting unit to format the data elements. The register is configured to store the formatted data elements, and the processor core is configured to receive the formatted data elements from the register.
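A minimal C sketch of that flow, assuming a hypothetical two-bit size encoding, a four-lane destination register, and illustrative names (data_load, vreg_t) not taken from the disclosure: the address generator steps through memory by a stride, and formatting here is simple zero-extension into 32-bit lanes.

```c
/* Hedged sketch of a data load unit: an address generator produces the
 * element addresses, the elements (all the same size, selected by coding
 * bits) are gathered from memory and formatted (here: zero-extended and
 * packed) before landing in a register. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

enum elem_size { SZ_BYTE = 0, SZ_HALF = 1, SZ_WORD = 2 };  /* coding bits */

/* Formatted destination "register": four 32-bit lanes. */
typedef struct { uint32_t lane[4]; } vreg_t;

static vreg_t data_load(const uint8_t *mem, uint32_t base,
                        uint32_t stride, enum elem_size sz)
{
    uint32_t bytes = 1u << sz;           /* decode size from coding bits */
    vreg_t r = {0};
    for (int i = 0; i < 4; i++) {
        uint32_t addr = base + (uint32_t)i * stride;  /* address generator */
        uint32_t v = 0;
        memcpy(&v, mem + addr, bytes);   /* little-endian host assumed */
        r.lane[i] = v;                   /* formatting: zero-extend */
    }
    return r;
}

int main(void)
{
    uint8_t mem[64];
    for (int i = 0; i < 64; i++) mem[i] = (uint8_t)i;

    vreg_t r = data_load(mem, 4, 8, SZ_HALF);  /* four halfwords, stride 8 */
    for (int i = 0; i < 4; i++)
        printf("lane%d = %#x\n", i, r.lane[i]);
    return 0;
}
```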
Abstract:
One example includes an integrated circuit (IC). The IC includes non-volatile memory and logic. The logic is configured to receive repair code associated with a memory instance and assign a compression parameter to the repair code based on a configuration of the memory instance. The logic is also configured to compress the repair code based on the compression parameter to produce compressed repair code and to provide compressed repair data that includes the compressed repair code and compression control data that identifies the compression parameter. A non-volatile memory controller is coupled between the non-volatile memory and the logic. The non-volatile memory controller is configured to transfer the compressed repair data to and/or from the non-volatile memory.
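The following C sketch shows only the shape of such a scheme; the encoding chosen here (the compression parameter as a significant-bit width derived from the instance's configuration) and all names are assumptions for illustration, not the disclosed format.

```c
/* Illustrative-only sketch: each memory instance's repair code is stored
 * using a per-instance compression parameter (here, simply the bit width
 * implied by the instance's configuration), and the control data that
 * identifies the parameter travels with the compressed code so the code
 * can be expanded again later. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  param;       /* compression control data: bit width used */
    uint32_t bits;        /* compressed repair code, 'param' bits valid */
} repair_rec_t;

static repair_rec_t compress_repair(uint32_t code, uint8_t cfg_width)
{
    repair_rec_t rec = { cfg_width, code & ((1u << cfg_width) - 1u) };
    return rec;           /* store only the bits the instance can use */
}

static uint32_t decompress_repair(repair_rec_t rec)
{
    return rec.bits;      /* control data says how many bits are valid */
}

int main(void)
{
    /* A small instance needs only 6 repair bits; a larger one needs 20. */
    repair_rec_t small = compress_repair(0x2A, 6);
    repair_rec_t large = compress_repair(0xBEEF, 20);
    printf("small: %u bits -> %#x\n",
           (unsigned)small.param, decompress_repair(small));
    printf("large: %u bits -> %#x\n",
           (unsigned)large.param, decompress_repair(large));
    return 0;
}
```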
Abstract:
A queuing requester for access to a memory system is provided. Transaction requests are received from two or more requesters for access to the memory system, each with an associated priority value. A request queue of the received transaction requests is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of the oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue is then provided to the memory system with the selected priority value. An arbitration contest with other requesters for access to the memory system is performed using the selected priority value.
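A minimal C sketch of the selection rule, with hypothetical structure names and the convention that a larger value means higher priority: the oldest entry is always the one issued, but it carries the highest priority pending anywhere in the queue whenever that exceeds its own, so an urgent newcomer is never stuck behind older low-priority traffic.

```c
/* Sketch of the priority-elevation rule: the oldest queued request is
 * issued, but arbitrates with the highest priority currently pending
 * anywhere in the queue. */
#include <stdio.h>

#define QDEPTH 4

typedef struct { int id; int prio; } txn_t;  /* larger prio = more urgent */

typedef struct {
    txn_t q[QDEPTH];
    int head, count;
} req_queue_t;

static int select_priority(const req_queue_t *rq)
{
    int oldest  = rq->q[rq->head].prio;
    int highest = oldest;
    for (int i = 0; i < rq->count; i++) {
        int p = rq->q[(rq->head + i) % QDEPTH].prio;
        if (p > highest) highest = p;
    }
    /* Elevate only if something newer is more urgent than the oldest. */
    return (highest > oldest) ? highest : oldest;
}

int main(void)
{
    req_queue_t rq = { .q = { {1, 2}, {2, 1}, {3, 7}, {4, 3} },
                       .head = 0, .count = 4 };
    /* Oldest request (id 1, prio 2) issues with elevated priority 7,
     * inherited from the pending id-3 request behind it. */
    printf("issue id=%d with priority %d\n",
           rq.q[rq.head].id, select_priority(&rq));
    return 0;
}
```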
Abstract:
This invention involves a cache system in a digital data processing apparatus including: a central processing unit core; a level one instruction cache; and a level two cache. The cache lines in the level two cache are twice the size of the cache lines in the level one instruction cache. The central processing unit core requests additional program instructions when needed via a request address. Upon a miss in the level one instruction cache that causes a hit in the upper half of a level two cache line, the level two cache supplies the upper half of the level two cache line to the level one instruction cache. On a following level two cache memory cycle, the level two cache supplies the lower half of the cache line to the level one instruction cache. This cache technique thus prefetches the lower half of the level two cache line while employing fewer resources than an ordinary prefetch.
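A short C sketch of the two-beat fill this implies, assuming illustrative 64-byte L1 and 128-byte L2 line sizes and a hypothetical l2_service_miss name: the demanded upper half returns first, and the lower half follows on the next L2 cycle as the low-cost prefetch.

```c
/* Sketch of the two-beat fill: when an L1I miss hits the upper half of an
 * L2 line (L2 lines being twice the L1 line size), the demanded upper half
 * returns first and the lower half streams on the following L2 cycle as a
 * cheap prefetch. */
#include <stdint.h>
#include <stdio.h>

#define L1_LINE 64u
#define L2_LINE (2u * L1_LINE)

static void l2_service_miss(uint32_t fetch_addr)
{
    uint32_t l2_base   = fetch_addr & ~(L2_LINE - 1u);
    int      upper_hit = (fetch_addr & L1_LINE) != 0;  /* which half? */

    if (upper_hit) {
        printf("cycle n  : supply demanded upper half @%#x\n",
               l2_base + L1_LINE);
        printf("cycle n+1: prefetch lower half        @%#x\n", l2_base);
    } else {
        printf("cycle n  : supply demanded lower half @%#x\n", l2_base);
    }
}

int main(void)
{
    l2_service_miss(0x10040);  /* falls in the upper half of its L2 line */
    l2_service_miss(0x10000);  /* falls in the lower half */
    return 0;
}
```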
Abstract:
Disclosed embodiments provide a technique in which a memory controller determines whether a fetch address is a miss in an L1 cache and, when a miss occurs, allocates a way of the L1 cache, determines whether the allocated way matches a scoreboard entry of pending service requests, and, when such a match is found, determines whether the request address of the matching scoreboard entry matches the fetch address. When the matching scoreboard entry also has a request address matching the fetch address, the scoreboard entry is modified to a demand request.
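The following C sketch illustrates the check under assumed field names (way, req_addr, is_demand): a pending entry that matches both the allocated way and the fetch address is upgraded in place from a speculative request to a demand request instead of issuing a duplicate fetch.

```c
/* Sketch of the scoreboard check: on an L1 miss the controller allocates
 * a way, then scans the scoreboard of pending service requests; an entry
 * that already targets the same way AND the same request address is
 * upgraded in place to a demand request. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SB_ENTRIES 4

typedef struct {
    bool     valid;
    int      way;          /* L1 way the pending fill will land in */
    uint32_t req_addr;     /* address of the pending service request */
    bool     is_demand;    /* false = speculative prefetch */
} sb_entry_t;

static bool upgrade_to_demand(sb_entry_t sb[], uint32_t fetch_addr, int way)
{
    for (int i = 0; i < SB_ENTRIES; i++) {
        if (sb[i].valid && sb[i].way == way &&
            sb[i].req_addr == fetch_addr) {
            sb[i].is_demand = true;  /* modify entry to a demand request */
            return true;
        }
    }
    return false;                    /* no match: issue a new request */
}

int main(void)
{
    sb_entry_t sb[SB_ENTRIES] = {
        { true, 2, 0x4000, false },  /* pending prefetch for 0x4000 */
    };
    if (upgrade_to_demand(sb, 0x4000, 2))
        printf("pending prefetch upgraded to demand, no duplicate fetch\n");
    return 0;
}
```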