Abstract:
Methods and systems are provided for fork-safe memory allocation from memory-mapped files. A child process may be provided a memory mapping at the same virtual address as a parent process, but the mapping may map that virtual address to a different location within the file than it does for the parent process.
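The scheme above can be sketched on POSIX systems: the same virtual address is kept (via MAP_FIXED) while the backing file offset changes. This is a minimal single-process model, not the patented implementation; the function name and file layout are invented for illustration.

```c
/* Sketch (assumes POSIX mmap/mkstemp; names are illustrative).
   Models the abstract: keep the SAME virtual address but back it
   with a DIFFERENT offset in the same file. Returns 0 on success. */
#define _DEFAULT_SOURCE
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int fork_safe_remap_demo(void) {
    long pg = sysconf(_SC_PAGESIZE);
    char path[] = "/tmp/forksafe_XXXXXX";
    int fd = mkstemp(path);
    if (fd < 0 || ftruncate(fd, 2 * pg) != 0) return 1;

    char *buf = malloc((size_t)pg);
    memset(buf, 'P', (size_t)pg);                  /* parent's file region */
    if (pwrite(fd, buf, (size_t)pg, 0) != pg) return 1;
    memset(buf, 'C', (size_t)pg);                  /* child's file region  */
    if (pwrite(fd, buf, (size_t)pg, (off_t)pg) != pg) return 1;
    free(buf);

    /* "Parent" mapping: some virtual address, backed by file offset 0. */
    char *addr = mmap(NULL, (size_t)pg, PROT_READ, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED || addr[0] != 'P') return 2;

    /* What the child would do after fork: re-map the SAME virtual
       address (MAP_FIXED) onto a DIFFERENT location in the file. */
    char *same = mmap(addr, (size_t)pg, PROT_READ,
                      MAP_SHARED | MAP_FIXED, fd, (off_t)pg);
    if (same != addr) return 3;                    /* same virtual address */
    if (addr[0] != 'C') return 4;                  /* different file data  */

    munmap(addr, (size_t)pg);
    close(fd);
    unlink(path);
    return 0;
}
```

A real fork-safe allocator would perform the remap in the child after fork(); Python-style address pinning is not possible, which is why MAP_FIXED carries the "same virtual address" requirement here.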
Abstract:
A system and method for efficiently performing user storage virtualization for data stored in a storage system that includes a plurality of solid-state storage devices. A data storage subsystem supports multiple mapping tables. Records within a mapping table are arranged in multiple levels, each storing pairs of a key value and a pointer value. The levels are sorted by time: new records are inserted into a newly created youngest level, and no edits are performed in place. All levels other than the youngest may be read-only. The system may further include an overlay table that identifies keys within the mapping table that are invalid.
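The level discipline described above can be modeled minimally: inserts go only to the youngest level, lookups walk youngest-to-oldest so newer records shadow older ones, and an overlay set forces keys invalid. All structure and field names below are invented for illustration, not taken from the patent.

```c
/* Sketch (illustrative names). recs[0] is the youngest and the only
   writable level; an overlay set marks keys treated as invalid. */
#include <stddef.h>

#define MAX_LEVELS 4
#define LEVEL_CAP  8

typedef struct { unsigned key; unsigned ptr; } Record;

typedef struct {
    Record   recs[MAX_LEVELS][LEVEL_CAP];
    size_t   count[MAX_LEVELS];
    size_t   nlevels;              /* levels sorted by time, 0 = youngest */
    unsigned overlay[LEVEL_CAP];   /* keys overridden as invalid          */
    size_t   noverlay;
} MapTable;

/* New records go only into the youngest level: no in-place edits. */
int map_insert(MapTable *t, unsigned key, unsigned ptr) {
    if (t->count[0] >= LEVEL_CAP) return -1;  /* would trigger a new level */
    t->recs[0][t->count[0]++] = (Record){ key, ptr };
    return 0;
}

/* Lookup walks levels youngest-to-oldest (and newest-first within a
   level), so a newer record for a key shadows an older one; keys in
   the overlay are reported invalid without consulting the levels. */
int map_lookup(const MapTable *t, unsigned key, unsigned *ptr_out) {
    for (size_t i = 0; i < t->noverlay; i++)
        if (t->overlay[i] == key) return -1;  /* key marked invalid */
    for (size_t lv = 0; lv < t->nlevels; lv++)
        for (size_t i = t->count[lv]; i-- > 0; )
            if (t->recs[lv][i].key == key) {
                *ptr_out = t->recs[lv][i].ptr;
                return 0;
            }
    return -1;
}
```

In the patented system each level would be a sorted, possibly disk-resident structure; the linear scans here only stand in for that search.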
Abstract:
Systems and methods pertain to memory management. Gaps are unused portions of a physical memory within sections of the physical memory that are mapped to virtual addresses by entries of a translation look-aside buffer (TLB). The sizes and alignment of the sections in the physical memory may be based on the number of entries in the TLB, which is what produces the gaps. One or more gaps identified in the physical memory are reclaimed or reused: the gaps are collected to form a dynamic buffer by mapping the physical addresses of the gaps to virtual addresses of the dynamic buffer.
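The gap-collection step can be sketched as follows, assuming each TLB-mapped section has a fixed aligned size and a smaller "used" prefix; every name here is invented for illustration.

```c
/* Sketch (illustrative names). Each TLB-mapped section has a fixed,
   aligned size; the tail beyond what is actually used is a gap. The
   gaps' physical ranges are collected to back one dynamic buffer. */
typedef struct { unsigned long phys_base; unsigned long used; } Section;
typedef struct { unsigned long phys; unsigned long len; } GapChunk;

/* Collect every section's unused tail; return total reclaimed bytes. */
unsigned long collect_gaps(const Section *s, int nsec,
                           unsigned long section_size,
                           GapChunk *out, int *nout) {
    unsigned long total = 0;
    int n = 0;
    for (int i = 0; i < nsec; i++) {
        unsigned long gap = section_size - s[i].used;
        if (gap == 0) continue;
        out[n].phys = s[i].phys_base + s[i].used; /* gap starts after use */
        out[n].len  = gap;
        total += gap;
        n++;
    }
    *nout = n;  /* these chunks would then be mapped to contiguous
                   virtual addresses to form the dynamic buffer */
    return total;
}
```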
Abstract:
This invention hides the page miss translation latency for program fetches. Whenever an access is requested by the CPU, the L1I cache controller performs an a priori lookup of whether the virtual address plus the fetch packet count of expected program fetches crosses a page boundary. If the access crosses a page boundary, the L1I cache controller requests the second page translation along with the first, pipelining requests to the μTLB without waiting for the L1I cache controller to begin processing the second page requests. This becomes a deterministic prefetch of the second page translation. The translation information for the second page is stored locally in the L1I cache controller and used when the access crosses the page boundary.
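The a priori boundary check reduces to comparing the page numbers of the first and last bytes of the expected fetch run. A minimal sketch, with an assumed 4 KB page and an assumed 32-byte fetch packet (the patent does not fix these values here):

```c
/* Sketch (assumed constants). The L1I controller checks up front
   whether the requested fetch run crosses a page, and if so asks the
   uTLB for the second page's translation alongside the first. */
#define PAGE_SHIFT         12   /* assumed 4 KB pages          */
#define FETCH_PACKET_BYTES 32   /* assumed fetch-packet size   */

/* Returns 1 when [vaddr, vaddr + packets*FETCH_PACKET_BYTES - 1]
   spans two pages, i.e. a second translation should be prefetched. */
int needs_second_page(unsigned long vaddr, unsigned fetch_packets) {
    unsigned long last = vaddr
        + (unsigned long)fetch_packets * FETCH_PACKET_BYTES - 1;
    return (vaddr >> PAGE_SHIFT) != (last >> PAGE_SHIFT);
}
```

Because the check depends only on the request, the second-page translation request is deterministic and can be pipelined to the μTLB immediately.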
Abstract:
A data access system including a processor and a final level cache module. The processor is configured to generate a request to access a first physical address. The final level cache module includes a dynamic random access memory (DRAM), a final level cache controller, and a DRAM controller. The final level cache controller is configured to (i) receive the request from the processor, and (ii) convert the first physical address to a first virtual address. The DRAM controller is configured to (i) convert the first virtual address to a second physical address, and (ii) access the DRAM based on the second physical address.
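The two-stage conversion described above (processor physical address → cache-internal virtual address → DRAM physical address) can be modeled with toy linear mappings; the window bases and function names below are invented for illustration and are not from the patent.

```c
/* Sketch (assumed constants). Stage 1 is the final level cache
   controller; stage 2 is the DRAM controller. */
#define FLC_VIRT_BASE 0x80000000UL  /* assumed cache-internal window */
#define DRAM_BASE     0x10000000UL  /* assumed DRAM placement        */

/* Stage 1: convert the processor's first physical address (PA1)
   into the cache module's internal virtual address. */
unsigned long flc_translate(unsigned long pa1) {
    return FLC_VIRT_BASE + (pa1 & 0x0FFFFFFFUL);
}

/* Stage 2: convert the internal virtual address into the second
   physical address (PA2) used to access the DRAM. */
unsigned long dram_translate(unsigned long va) {
    return DRAM_BASE + (va - FLC_VIRT_BASE);
}

/* End-to-end access path from the abstract: PA1 -> VA -> PA2. */
unsigned long access_path(unsigned long pa1) {
    return dram_translate(flc_translate(pa1));
}
```

The indirection lets the DRAM controller relocate data inside the DRAM without the processor's address map changing.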
Abstract:
A memory region stores a data structure containing a mapping between a virtual address space and a physical address space of a memory. A portion of the mapping is cached in a cache memory. In response to a miss in the cache memory on a lookup of a virtual address of a request, an indication is sent to a buffer device. In response to the indication, a hardware controller on the buffer device performs a lookup of the data structure in the memory region to find the physical address corresponding to the virtual address.
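The hit/miss split can be modeled minimally: a hit is served from the cached portion of the mapping, while a miss raises an indication and the buffer device's controller walks the full in-memory table. All structure and field names are invented for illustration.

```c
/* Sketch (illustrative structures). The small cache holds a portion
   of the mapping; the full table lives in the memory region that the
   buffer device's hardware controller can walk itself. */
typedef struct { unsigned long va_page; unsigned long pa_page; } Entry;

typedef struct {
    Entry cache[4];  int ncache;   /* cached portion of the mapping */
    Entry table[64]; int ntable;   /* full mapping in memory region */
} BufferDevice;

/* Hardware-controller walk of the full table (the miss path). */
long device_walk(const BufferDevice *d, unsigned long va_page) {
    for (int i = 0; i < d->ntable; i++)
        if (d->table[i].va_page == va_page)
            return (long)d->table[i].pa_page;
    return -1;
}

/* Host-side lookup: hit in the cached portion, or send an indication
   to the buffer device and let its controller resolve the address. */
long translate(const BufferDevice *d, unsigned long va_page, int *missed) {
    for (int i = 0; i < d->ncache; i++)
        if (d->cache[i].va_page == va_page) {
            *missed = 0;
            return (long)d->cache[i].pa_page;
        }
    *missed = 1;                   /* indication sent to the device */
    return device_walk(d, va_page);
}
```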
Abstract:
The invention relates to a multi-core processor system, in particular a single-package multi-core processor system, comprising at least two processor cores, preferably at least four, each having a local LEVEL-1 cache. A tree communication structure combines the multiple LEVEL-1 caches, the tree having at least one node, preferably at least three nodes for a four-core processor. TAG information is associated with data managed within the tree and is usable in the treatment of that data.
Abstract:
Systems and methods relate to performing address translations in a multithreaded memory management unit (MMU). Two or more address translation requests can be received by the multithreaded MMU and processed in parallel to retrieve translations to addresses of a system memory. If the address translations are present in a translation cache of the multithreaded MMU, they can be retrieved from the translation cache and scheduled for access of the system memory using the translated addresses. If there is a miss in the translation cache, the two or more address translation requests can be dispatched as two or more translation table walks performed in parallel.
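The dispatch decision above can be modeled without real threading: a translation-cache hit is scheduled to memory immediately, while a miss launches one more table walk that may proceed alongside earlier ones. All names and the counter-based model are invented for illustration.

```c
/* Sketch (illustrative model, no actual threads). Counts how many
   requests were scheduled straight to memory versus dispatched as
   parallel table walks. */
#define TCACHE_SIZE 4

typedef struct { unsigned long va; unsigned long pa; int valid; } TcEntry;

typedef struct {
    TcEntry tc[TCACHE_SIZE];
    int walks_in_flight;   /* table walks that may run in parallel */
    int scheduled;         /* translations scheduled to memory     */
} Mmu;

/* Handle one of several requests arriving at the multithreaded MMU:
   a translation-cache hit is scheduled immediately; a miss launches
   another table walk that can run alongside the outstanding ones. */
int mmu_handle(Mmu *m, unsigned long va) {
    for (int i = 0; i < TCACHE_SIZE; i++)
        if (m->tc[i].valid && m->tc[i].va == va) {
            m->scheduled++;
            return 1;              /* hit: scheduled for access */
        }
    m->walks_in_flight++;          /* miss: one more parallel walk */
    return 0;
}
```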