Abstract:
A memory system wherein data retrieval is simultaneously initiated in both the L2 cache and main memory, which allows the memory latency associated with arbitration, DRAM address translation, and the like to be minimized in the event that the data sought by the processor is not in the L2 cache (a miss). The invention allows any memory access to be interrupted in the storage control unit before any memory signals are activated. The L2 and memory access controls reside in a single component, i.e., the storage control unit (SCU). The L2 and the memory each have a dedicated port into the CPU, which allows data to be transferred directly. This eliminates the overhead associated with storing the data in an intermediate device, such as a cache or memory controller.
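A minimal Python sketch of the initiate-both, cancel-on-hit behavior described above; the SCU class, its dictionary-backed L2 and DRAM, and the read flow are illustrative stand-ins for the hardware, not the patented implementation:

```python
class SCU:
    """Storage control unit: single point of control for L2 and memory."""
    def __init__(self, l2, dram):
        self.l2 = l2      # dict: address -> data, models the L2 cache
        self.dram = dram  # dict: address -> data, models main memory

    def read(self, addr):
        # Both accesses are conceptually initiated here, in parallel.
        if addr in self.l2:
            # Hit: the memory access is cancelled before any DRAM
            # signals are driven; data returns over the L2's own port.
            return self.l2[addr]
        # Miss: the memory access is already under way, so arbitration
        # and address-translation latency have been overlapped.
        data = self.dram[addr]
        self.l2[addr] = data  # fill the L2 for subsequent reads
        return data

scu = SCU(l2={}, dram={0x100: "payload"})
assert scu.read(0x100) == "payload"  # miss: served by main memory
assert scu.read(0x100) == "payload"  # hit: DRAM access cancelled early
```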
Abstract:
A request is received from a first node over a communication fabric, the request being to acquire an access right to a cache line for accessing data stored in a memory location of a memory, the first node being one of a plurality of nodes sharing the memory. In response to the request, a second node that has cached a copy of the cache line's data in its local memory is identified based on the cache line. A first message is transmitted to the second node over the communication fabric requesting that the second node invalidate the cache line. In response to a response received from the second node indicating that the cache line has been invalidated, a second message is transmitted to the first node over the communication fabric to grant the access right to the cache line to the first node.
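The invalidate-then-grant exchange can be sketched as a toy directory protocol; the Directory and Node classes and their method names are hypothetical, and synchronous calls stand in for messages over the fabric:

```python
class Directory:
    def __init__(self):
        self.owner = {}  # cache line -> node currently holding a copy

    def acquire(self, requester, line, nodes):
        holder = self.owner.get(line)       # second node found via the line
        if holder is not None and holder != requester:
            nodes[holder].invalidate(line)  # first message: invalidate
        self.owner[line] = requester
        return "GRANT"                      # second message: grant right

class Node:
    def __init__(self):
        self.cache = set()

    def invalidate(self, line):
        self.cache.discard(line)            # returning models the ack

nodes = {"A": Node(), "B": Node()}
d = Directory()
nodes["B"].cache.add(0x40)
d.owner[0x40] = "B"
assert d.acquire("A", 0x40, nodes) == "GRANT"
assert 0x40 not in nodes["B"].cache         # B's copy was invalidated
```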
Abstract:
A system and method to access data from a portion of a level two memory or from a level one memory are disclosed. In a particular embodiment, the system includes a level one cache and a level two memory. A first portion of the level two memory is coupled to an input port and is addressable in parallel with the level one cache.
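A minimal sketch of the parallel lookup, with dictionaries standing in for the L1 cache, the fast L2 portion on the shared input port, and the rest of the L2 (all names hypothetical):

```python
def parallel_lookup(addr, l1, l2_fast_portion, l2_rest):
    # L1 and the first L2 portion are addressed in parallel (same step).
    if addr in l1:
        return l1[addr]
    if addr in l2_fast_portion:
        return l2_fast_portion[addr]   # hit here avoids a serialized probe
    return l2_rest.get(addr)           # only the remaining L2 needs a
                                       # second, later access

l1 = {0x10: "a"}
l2_fast = {0x20: "b"}
l2_rest = {0x30: "c"}
assert parallel_lookup(0x20, l1, l2_fast, l2_rest) == "b"
```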
Abstract:
Techniques for block-based indexing are described. In one embodiment, for example, an apparatus may comprise a multicore processor element, an assignment component for execution by the multicore processor element to generate a plurality of block-attribute pairs, each block-attribute pair corresponding to an attribute value and one of a plurality of data blocks, and an indexing component for execution by the multicore processor element to generate an index block for the plurality of data blocks based on the plurality of block-attribute pairs, the indexing component to perform parallel indexing of the plurality of block-attribute pairs using multiple indexing instances. Other embodiments are described and claimed.
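A rough sketch of the two stages described above, using a thread pool in place of the multiple indexing instances; the block names and attribute extractor are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def index_blocks(blocks, attribute_of, workers=4):
    # Stage 1: generate (block, attribute) pairs, one per data block,
    # across several indexing instances in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pairs = list(pool.map(lambda b: (b, attribute_of(b)), blocks))
    # Stage 2: fold the pairs into an index block: attribute -> blocks.
    index = {}
    for block, attr in pairs:
        index.setdefault(attr, []).append(block)
    return index

blocks = ["blk0", "blk1", "blk2", "blk3"]
index = index_blocks(blocks, attribute_of=lambda b: int(b[-1]) % 2)
assert index == {0: ["blk0", "blk2"], 1: ["blk1", "blk3"]}
```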
Abstract:
A method and system are described for improving memory access. The invention improves memory access (400, 500) in systems where program code and data stored in memory (150, 160) have low locality. The invention builds on the fact that access to at least some addresses of the memory takes longer than access to other addresses, as is the case with, for example, page-type memory.
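One way to exploit the property the abstract builds on, sketched under the assumption that data can be placed at chosen addresses; the fast/slow address split and the frequency-based placement policy are illustrative assumptions, not necessarily the patented method:

```python
FAST_ADDRS = range(0, 8)    # e.g. addresses within the currently open page
SLOW_ADDRS = range(8, 32)   # addresses that incur a page open/activate cost

def place(items):
    """Assign the most frequently accessed items to the fast addresses."""
    ranked = sorted(items, key=lambda kv: kv[1], reverse=True)
    fast, slow = iter(FAST_ADDRS), iter(SLOW_ADDRS)
    return {name: (next(fast) if i < len(FAST_ADDRS) else next(slow))
            for i, (name, _) in enumerate(ranked)}

layout = place([("hot", 900), ("warm", 50), ("cold", 3)])
assert layout["hot"] in FAST_ADDRS  # the hottest item got a fast address
```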
Abstract:
Efficient use of a cache memory in a computer system is achieved, the system comprising a processor (12), a local bus comprising local address (110) and local data (111) buses coupled to the processor, a cache memory (16) coupled to the local bus, a bus interface (20) coupled to the local bus for coupling the processor to a main memory via an external bus (141, 142), and a transparent write cache policy (TWCP) controller (14) functionally coupled between the processor and the bus interface. The TWCP controller watches for a data write operation initiated by the processor and signals the processor that the data write is complete before its actual completion, freeing the processor to engage in one or more subsequent operations that do not require the external bus. The TWCP controller causes the bus interface to complete the data write to main memory in parallel with the one or more subsequent operations.
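A minimal sketch of the posted-write behavior, with a thread standing in for the TWCP controller's parallel bus write; class and method names are hypothetical:

```python
import threading
import time

class TWCPController:
    def __init__(self, main_memory):
        self.main_memory = main_memory

    def write(self, addr, value):
        # Complete the external-bus write in parallel with later CPU work.
        t = threading.Thread(target=self._bus_write, args=(addr, value))
        t.start()
        return t          # "write complete" is signalled before it really is

    def _bus_write(self, addr, value):
        time.sleep(0.01)  # models external-bus latency
        self.main_memory[addr] = value

mem = {}
twcp = TWCPController(mem)
pending = twcp.write(0x2000, 42)  # returns at once; CPU keeps working here
pending.join()                    # the posted write eventually lands
assert mem[0x2000] == 42
```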
Abstract:
Cache memory mapping techniques are presented. A cache may contain an index configuration register. The register may configure the locations of an upper index portion and a lower index portion within a memory address. The portions may be combined to create a combined index. The configurable split-index addressing structure may be used, among other applications, to reduce the rate of cache conflicts between multiple processors decoding a video frame in parallel.
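A minimal sketch of combining a configurable upper and lower index field into one cache set index; the bit positions below are made-up examples, not values from the patent:

```python
def split_index(addr, lo_shift, lo_bits, hi_shift, hi_bits):
    lower = (addr >> lo_shift) & ((1 << lo_bits) - 1)  # lower index portion
    upper = (addr >> hi_shift) & ((1 << hi_bits) - 1)  # upper index portion
    return (upper << lo_bits) | lower                  # combined index

# These two addresses differ only at bit 16; a conventional index taken
# from bits 6..13 would map them to the same set and conflict. Mixing in
# a configurable upper field spreads them across different sets.
a = split_index(0x12345678, lo_shift=6, lo_bits=4, hi_shift=16, hi_bits=4)
b = split_index(0x12355678, lo_shift=6, lo_bits=4, hi_shift=16, hi_bits=4)
assert a != b
```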
Abstract:
A speculative read request is received from a host device over a buffered memory access link for data associated with a particular address. A read request is sent for the data to a memory device. The data is received from the memory device in response to the read request and the received data is sent to the host device as a response to a demand read request received subsequent to the speculative read request.
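A minimal sketch of the speculative-read flow, with dictionaries standing in for the DRAM and the buffer of speculatively fetched data; all names are hypothetical:

```python
class BufferedMemory:
    def __init__(self, dram):
        self.dram = dram
        self.speculative = {}     # address -> data fetched ahead of demand

    def speculative_read(self, addr):
        # Start the memory-device read early; hold the result, send nothing.
        self.speculative[addr] = self.dram[addr]

    def demand_read(self, addr):
        # The earlier speculative result satisfies the demand immediately.
        if addr in self.speculative:
            return self.speculative.pop(addr)
        return self.dram[addr]    # no speculation: ordinary read

mem = BufferedMemory(dram={0x80: "cacheline"})
mem.speculative_read(0x80)                    # host speculates over the link
assert mem.demand_read(0x80) == "cacheline"   # demand read is served at once
```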
Abstract:
A data handling system includes a memory that includes a cache memory (120) and a main memory (130). The memory further includes a controller (140) for simultaneously initiating two data access operations to the cache memory and to the main memory by providing a main memory access address (M_Add) with a time-delay increment added to a cache memory access address (C_Add), based on the difference in initial data access time between the main memory and the cache memory. The main memory further includes a plurality of data access paths divided into a plurality of propagation stages interconnected between a plurality of memory arrays in the main memory, wherein each of the propagation stages further implements a local clock for asynchronously propagating a plurality of data access signals to access data stored in a plurality of memory cells in each of the main memory arrays. The data handling system further requests a plurality of sets of data from the memory, wherein the cache memory is provided with a capacity for storing only the first few data of each of the plurality of sets of data, with the remainder stored in the main memory, and the main memory and the cache memory have substantially the same cycle time for completing a data access operation.
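A minimal sketch of the head-in-cache data flow implied above: the cache holds only the first few words of a set and covers main memory's longer initial access time, both accesses having been initiated together; the word counts and cycle figures are illustrative assumptions:

```python
CACHE_HEAD_WORDS = 3   # "first few data" of each set kept in the cache
MEMORY_STARTUP = 3     # extra cycles of initial access time for main memory

def read_set(cache_head, memory_tail):
    # Both accesses start at cycle 0; the cache streams one word per cycle
    # while the main memory performs its longer initial access.
    assert len(cache_head) == CACHE_HEAD_WORDS == MEMORY_STARTUP
    stream = list(cache_head)     # cycles 0..2: words from the cache
    stream.extend(memory_tail)    # cycle 3 onward: main memory, no gap
    return stream

full = read_set(["w0", "w1", "w2"], ["w3", "w4"])
assert full == ["w0", "w1", "w2", "w3", "w4"]  # seamless burst
```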
Abstract:
A memory controller controls a buffer which stores the most recently used addresses and associated data, but the data stored in the buffer is only a portion of a row of data (termed row head data) stored in main memory. In a memory access initiated by the CPU, both the buffer and main memory are accessed simultaneously. If the buffer contains the address requested, the buffer immediately begins to provide the associated row head data in a burst to the cache memory. Meanwhile, the same row address is activated in the main memory bank corresponding to the requested address found in the buffer. After the buffer provides the row head data, the remainder of the burst of requested data is provided by the main memory to the CPU.
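A minimal sketch of the row-head scheme, with dictionaries standing in for the buffer and the DRAM rows; the names and the head size are hypothetical:

```python
HEAD_WORDS = 2   # portion of a row kept in the buffer (the "row head")

class RowHeadController:
    def __init__(self, dram_rows):
        self.dram_rows = dram_rows               # row address -> full row
        self.buffer = {}                         # row address -> row head

    def burst_read(self, row_addr):
        row = self.dram_rows[row_addr]           # row activation starts now
        if row_addr in self.buffer:
            head = self.buffer[row_addr]         # buffer answers immediately
            return head + row[HEAD_WORDS:]       # DRAM supplies the rest
        self.buffer[row_addr] = row[:HEAD_WORDS] # remember this row's head
        return row                               # full burst from DRAM

ctl = RowHeadController({0x7: ["d0", "d1", "d2", "d3"]})
ctl.burst_read(0x7)                  # first access: buffer learns the head
assert ctl.burst_read(0x7) == ["d0", "d1", "d2", "d3"]  # hit: head + tail
```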