Abstract:
A communication device includes a data source that generates data for transmission over a bus and a data encoder that receives and encodes the outgoing data. An encoder system receives outgoing data from the data source and stores it in a first queue. An encoder encodes the outgoing data with a header type that is based upon a header type indication from a controller and stores the encoded data, which may be a packet or a data word with at least one layered header, in a second queue for transmission. The device is configured to receive, at a payload extractor, a packet protocol change command from the controller, to remove the encoded data, to re-encode the data to create a re-encoded data packet, and to place the re-encoded data packet in the second queue for transmission.
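The two-queue encode and re-encode flow above can be sketched in a few lines of Python. This is a minimal illustration, not the patented design; the class and method names (PacketEncoder, change_protocol, and so on) are hypothetical, and the "encoding" here is simply pairing a payload with a header tag.

    from collections import deque

    class PacketEncoder:
        # Illustrative two-queue encoder: raw payloads wait in a first
        # queue, encoded packets wait in a second queue for transmission.
        def __init__(self, header_type):
            self.header_type = header_type   # set via the controller's indication
            self.raw_queue = deque()         # first queue: outgoing data
            self.tx_queue = deque()          # second queue: encoded packets

        def enqueue(self, payload):
            self.raw_queue.append(payload)

        def encode_pending(self):
            # Encode each payload with the controller-selected header type.
            while self.raw_queue:
                self.tx_queue.append((self.header_type, self.raw_queue.popleft()))

        def change_protocol(self, new_header_type):
            # Payload-extractor role: pull encoded data back out of the
            # second queue, strip the old header, and re-encode it under
            # the new protocol before queuing it again for transmission.
            self.header_type = new_header_type
            self.tx_queue = deque((new_header_type, payload)
                                  for _, payload in self.tx_queue)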
Abstract:
Systems, apparatuses, and methods for sorting memory pages in a multi-level heterogeneous memory architecture. The system may classify pages into a first “hot” category or a second “cold” category. The system may attempt to place the “hot” pages into the memory level(s) closest to the system's processor cores. The system may track parameters associated with each page, including the number of accesses, the types of accesses, the power consumed per access, temperature, wearability, and/or other parameters. Based on these parameters, the system may generate a score for each page. Then, the system may compare the score of each page to a threshold. If the score of a given page is greater than the threshold, the given page may be designated as “hot”. If the score of the given page is less than the threshold, the given page may be designated as “cold”.
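A minimal sketch of the scoring step follows; the parameter names, weights, and threshold are assumptions for illustration, since the abstract does not fix a scoring formula.

    def classify_pages(pages, weights, threshold):
        # Score each page from its tracked parameters and label it
        # "hot" or "cold" by comparing the score to the threshold.
        labels = {}
        for page_id, params in pages.items():
            score = sum(weights.get(name, 0.0) * value
                        for name, value in params.items())
            labels[page_id] = "hot" if score > threshold else "cold"
        return labels

    pages = {
        0x1000: {"accesses": 120, "power_per_access": 0.2},
        0x2000: {"accesses": 3,   "power_per_access": 0.2},
    }
    print(classify_pages(pages, {"accesses": 1.0, "power_per_access": -5.0}, 50))
    # -> {4096: 'hot', 8192: 'cold'}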
Abstract:
To transfer data efficiently from a cache to a memory, it is desirable that more of the data correspond to the memory page currently loaded in the row buffer. Writing data to a memory page that is not currently loaded in a row buffer requires closing an old page and opening a new page. Both operations consume energy and clock cycles and can delay more critical memory read requests. Hence it is desirable to have more than one write going to the same DRAM page, to amortize the cost of opening and closing DRAM pages. A desirable approach is to batch write-backs to the same DRAM page by retaining modified blocks in the cache until a sufficient number of modified blocks belonging to the same memory page are ready for write-back.
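The batching idea can be sketched as follows; the 4 KiB page size and the batch threshold are illustrative assumptions, and the flush simply prints where a real controller would issue the burst of writes.

    from collections import defaultdict

    class WriteBackBatcher:
        # Retain dirty blocks until enough of them map to the same DRAM
        # page, then write them back together so the page is opened and
        # closed only once.
        def __init__(self, page_size=4096, batch_size=4):
            self.page_size = page_size
            self.batch_size = batch_size
            self.pending = defaultdict(list)   # page number -> dirty blocks

        def mark_dirty(self, block_addr, data):
            page = block_addr // self.page_size
            self.pending[page].append((block_addr, data))
            if len(self.pending[page]) >= self.batch_size:
                self.flush_page(page)

        def flush_page(self, page):
            # One page open, a burst of writes, one page close.
            blocks = self.pending.pop(page, [])
            print(f"open page {page}, write {len(blocks)} blocks, close page")

    b = WriteBackBatcher(batch_size=2)
    b.mark_dirty(0x1000, b"a")
    b.mark_dirty(0x1040, b"b")   # same page -> one batched flush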
Abstract:
A memory module is responsive to control signaling for a random access memory (RAM) module, and performs translation of received memory addresses so that it can map a relatively small address space of an operating system to a larger physical address space of its storage arrays. The memory module can therefore be employed in systems requiring a large amount of memory, such as systems using many processors, without requiring specialized operating systems for addressing the larger physical address space.
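A sketch of the translation step, assuming a simple segment-granular table (the segment size and map contents are hypothetical):

    class TranslatingModule:
        # Map a small OS-visible address space onto a larger physical
        # array using a per-segment translation table.
        def __init__(self, segment_size, segment_map):
            self.segment_size = segment_size
            self.segment_map = segment_map   # OS segment -> physical segment

        def translate(self, os_addr):
            segment, offset = divmod(os_addr, self.segment_size)
            return self.segment_map[segment] * self.segment_size + offset

    mod = TranslatingModule(segment_size=0x1000,
                            segment_map={0: 7, 1: 42})  # sparse placement
    print(hex(mod.translate(0x0008)))   # -> 0x7008

Because the table can point anywhere in the module's storage arrays, the operating system keeps its small, contiguous view while the module spreads data across a much larger physical space.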
Abstract:
In one form, a computer system includes a central processing unit, a memory controller coupled to the central processing unit and capable of accessing non-volatile random access memory (NVRAM), and an NVRAM-aware operating system. The NVRAM-aware operating system causes the central processing unit to selectively execute selected ones of a plurality of application programs, and is responsive to a predetermined operation to cause the central processing unit to execute a memory persistence procedure using the memory controller to access the NVRAM.
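One way to picture the persistence procedure, with every name here (NvramAwareOS, dirty_volatile_state, the power-loss event) a hypothetical stand-in rather than anything specified by the abstract:

    class NvramAwareOS:
        # On a predetermined operation (here, a power-loss notification),
        # run a persistence procedure through the memory controller.
        def __init__(self, memory_controller):
            self.mc = memory_controller

        def on_event(self, event):
            if event == "power_loss_imminent":
                self.persist()

        def persist(self):
            for addr, value in self.mc.dirty_volatile_state():
                self.mc.write_nvram(addr, value)
            self.mc.flush()   # ensure writes reach persistent media

    class StubController:
        def dirty_volatile_state(self):
            return [(0x100, b"state")]
        def write_nvram(self, addr, value):
            print(f"persist {value!r} at {hex(addr)}")
        def flush(self):
            print("flushed")

    NvramAwareOS(StubController()).on_event("power_loss_imminent")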
Abstract:
A type of conditional probability fetcher prefetches data from a second memory into a first memory, such as a cache, by maintaining information relating to memory elements in a group of memory elements fetched from the second memory. The information may be an aggregate number of memory elements that have been fetched for different memory segments in the group, and is maintained in response to fetching one or more memory elements from a segment of memory elements in the group. One or more remaining memory elements in a particular segment are prefetched from the second memory into the first memory when the information relating to the memory elements in the group indicates that a prefetching condition has been satisfied.
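The per-segment counting and trigger can be sketched like this; the segment size and the two-fetch trigger are illustrative stand-ins for the "prefetching condition".

    from collections import defaultdict

    class SegmentPrefetcher:
        # Count demand fetches per segment of a group; once a segment's
        # count reaches the threshold, prefetch its remaining elements.
        def __init__(self, segment_size=8, threshold=2):
            self.segment_size = segment_size
            self.threshold = threshold
            self.fetched = defaultdict(set)   # segment id -> fetched offsets

        def on_fetch(self, element_addr):
            seg, off = divmod(element_addr, self.segment_size)
            self.fetched[seg].add(off)
            if len(self.fetched[seg]) == self.threshold:
                return self.prefetch_rest(seg)
            return []

        def prefetch_rest(self, seg):
            missing = set(range(self.segment_size)) - self.fetched[seg]
            self.fetched[seg] |= missing
            return [seg * self.segment_size + off for off in sorted(missing)]

    pf = SegmentPrefetcher()
    pf.on_fetch(16)         # first touch of segment 2
    print(pf.on_fetch(17))  # second touch -> prefetch [18, 19, 20, 21, 22, 23]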
Abstract:
The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism performs a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the identified resource is not available for performing the operation, the selection mechanism identifies the next resource in the table and selects that resource for performing the operation if it is available, continuing through the table until an available resource is selected.
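A minimal sketch of the lookup-then-walk selection, assuming the table holds resource identifiers and an availability predicate stands in for whatever check the device actually performs:

    def select_resource(tables, table_index, key, is_available):
        # Look up the preferred resource in the chosen table; if it is
        # busy, walk forward through the table until one is available.
        table = tables[table_index]
        start = key % len(table)
        for step in range(len(table)):
            candidate = table[(start + step) % len(table)]
            if is_available(candidate):
                return candidate
        return None   # no resource currently available

    banks = ["bank0", "bank1", "bank2", "bank3"]
    print(select_resource([banks], 0, key=6,
                          is_available=lambda r: r != "bank2"))  # -> bank3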
Abstract:
Methods, systems, and computer-readable storage media for more efficient and flexible scheduling of tasks on an asymmetric processing system having at least one host processor and one or more slave processors are disclosed. An example embodiment includes determining a data access requirement of a task, comparing the data access requirement to the respective local memories of the one or more slave processors, selecting a slave processor from the one or more slave processors based upon the comparing, and running the task on the selected slave processor.
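The comparison-and-selection step might look like the following sketch, where the best-fit policy and the field names are assumptions rather than the claimed method:

    def pick_slave(task_footprint, slaves):
        # Choose the slave whose local memory fits the task's data
        # footprint: the smallest local memory that is still large enough.
        fitting = [s for s in slaves if s["local_mem"] >= task_footprint]
        if not fitting:
            return None   # e.g. fall back to the host processor
        return min(fitting, key=lambda s: s["local_mem"])

    slaves = [{"id": "gpu0", "local_mem": 4096},
              {"id": "dsp0", "local_mem": 512}]
    print(pick_slave(1024, slaves)["id"])   # -> gpu0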
Abstract:
A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses and internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the one or more memory layers. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller, coupled to the I/O port, the memory map, and the memory layers, configured to store data received from external devices to internal memory locations.
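A compact model of the logic layer's role, with the map contents and the two-layer stack purely illustrative:

    class LogicLayer:
        # The I/O port hands read/write commands to a controller, which
        # consults the memory map to turn an external address into an
        # internal (layer, row) location.
        def __init__(self, memory_map, layers):
            self.memory_map = memory_map   # external addr -> (layer, row)
            self.layers = layers           # per-layer storage

        def write(self, ext_addr, data):
            layer, row = self.memory_map[ext_addr]
            self.layers[layer][row] = data

        def read(self, ext_addr):
            layer, row = self.memory_map[ext_addr]
            return self.layers[layer].get(row)

    stack = LogicLayer({0x10: (0, 5), 0x11: (1, 5)}, [{}, {}])
    stack.write(0x10, b"hello")
    print(stack.read(0x10))   # -> b'hello'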
Abstract:
A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty. Also in response to the write request, a second cacheline is installed that duplicates the first cacheline, as modified in accordance with the write request, at a second location in the cache memory.
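A small sketch of the duplicate-on-write behavior; the choice of second location (halfway across the cache) is an arbitrary illustration, since the abstract leaves placement open:

    class DuplicatingCache:
        # On a write, modify the target cacheline, mark it dirty, and
        # install a duplicate of the modified line at a second location.
        def __init__(self, capacity=16):
            self.capacity = capacity
            self.lines = {}   # location -> (tag, data, dirty)

        def install(self, location, tag, data, dirty=False):
            self.lines[location] = (tag, data, dirty)

        def write(self, location, tag, new_data):
            self.install(location, tag, new_data, dirty=True)
            duplicate = (location + self.capacity // 2) % self.capacity
            self.install(duplicate, tag, new_data, dirty=True)

    cache = DuplicatingCache()
    cache.install(3, tag=0xABC, data=b"old")
    cache.write(3, tag=0xABC, new_data=b"new")
    print(cache.lines[3], cache.lines[11])   # two dirty copies of the line

Keeping a second dirty copy gives the cache a redundant version of the modified data, which can, for example, protect it against errors before the line is written back.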