Address hashing in a multiple memory controller system

    Publication Number: US12236130B2

    Publication Date: 2025-02-25

    Application Number: US18318672

    Application Date: 2023-05-16

    Applicant: Apple Inc.

    Abstract: In an embodiment, a system may support programmable hashing of address bits at a plurality of levels of granularity to map memory addresses to memory controllers and ultimately at least to memory devices. The hashing may be programmed to distribute pages of memory across the memory controllers, and consecutive blocks of the page may be mapped to physically distant memory controllers. In an embodiment, address bits may be dropped from each level of granularity, forming a compacted pipe address to save power within the memory controller. In an embodiment, a memory folding scheme may be employed to reduce the number of active memory devices and/or memory controllers in the system when the full complement of memory is not needed.
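
    The hashing described above can be pictured with a short sketch. The snippet below is only an illustration, not the patented design: it assumes an XOR-fold over programmable groups of address bits, where each group contributes one bit of the memory controller index, plus a helper that drops the consumed bits to form a compacted "pipe" address. The function names, bit positions, and controller count are hypothetical.

        # Illustrative sketch only. Bit groups, address width, and the
        # four-controller setup below are assumptions, not the patent's values.

        def hash_bit(addr, bit_positions):
            """XOR-fold one programmed group of address bits into a single bit."""
            h = 0
            for b in bit_positions:
                h ^= (addr >> b) & 1
            return h

        def select_controller(addr, bit_groups):
            """Each programmed bit group yields one bit of the controller index."""
            index = 0
            for i, group in enumerate(bit_groups):
                index |= hash_bit(addr, group) << i
            return index

        def compact_address(addr, drop_bits, width=40):
            """Drop the consumed bits so downstream logic carries a narrower address."""
            out, pos = 0, 0
            for b in range(width):
                if b in drop_bits:
                    continue
                out |= ((addr >> b) & 1) << pos
                pos += 1
            return out

        addr = 0x12345678
        ctrl = select_controller(addr, bit_groups=[[6, 12, 18], [7, 13, 19]])  # 4 controllers
        pipe_addr = compact_address(addr, drop_bits={6, 7})
        print(ctrl, hex(pipe_addr))

    Drawing the hash inputs from widely separated address bits is one way consecutive blocks of a page could land on different controllers, in line with the distribution goal the abstract describes.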

    ACCELERATING WINDOWS FAST STARTUP FOR DRAM-LESS SSD

    Publication Number: US20250036424A1

    Publication Date: 2025-01-30

    Application Number: US18359520

    Application Date: 2023-07-26

    Abstract: Methods, systems, and devices for providing computer implemented services are disclosed. To provide the computer implemented services while managing limited hardware resources necessary to provide the services, a hibernation may be performed. To do so, a hibernation manager may facilitate management and storage of hibernation data for use during hibernation and startup of a system. To manage and store the hibernation data, the hibernation manager may identify an allocation of high-performance storage for the hibernation data, obtain a compression pipeline based on the allocation, and stream the hibernation data through the compression pipeline. By doing so, the speed at which the hibernation data is written and read may be increased. Thus, hibernation and startup of the system may be enhanced.
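
    As a rough picture of how hibernation data might be streamed through a compression stage in bounded chunks, the sketch below uses Python's zlib with a fixed 1 MiB chunk size; the level-selection rule, the chunk size, and all names are assumptions for illustration, not the claimed hibernation manager.

        # Illustrative sketch only: stream data through a compression stage in
        # fixed-size chunks before writing it out. zlib, the chunk size, and the
        # level-selection rule are assumptions, not the patent's design.
        import io
        import zlib

        CHUNK = 1 << 20  # 1 MiB chunks keep memory use bounded while streaming

        def choose_level(allocated_bytes, image_bytes):
            """Compress harder only when the allocation is smaller than the image."""
            return 1 if allocated_bytes >= image_bytes else 6

        def stream_compress(src, dst, level):
            """Read src in chunks, compress, write to dst; return bytes written."""
            comp = zlib.compressobj(level)
            written = 0
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                out = comp.compress(chunk)
                dst.write(out)
                written += len(out)
            tail = comp.flush()
            dst.write(tail)
            return written + len(tail)

        # Example: compress a 4 MiB in-memory "hibernation image" into a buffer.
        image, out = io.BytesIO(b"\x00" * (4 * CHUNK)), io.BytesIO()
        level = choose_level(allocated_bytes=2 * CHUNK, image_bytes=4 * CHUNK)
        print(stream_compress(image, out, level))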

    Accelerating windows fast startup for dram-less SSD

    Publication Number: US12210883B1

    Publication Date: 2025-01-28

    Application Number: US18359520

    Application Date: 2023-07-26

    Abstract: Methods, systems, and devices for providing computer implemented services are disclosed. To provide the computer implemented services while managing limited hardware resources necessary to provide the services, a hibernation may be performed. To do so, a hibernation manager may facilitate management and storage of hibernation data for use during hibernation and startup of a system. To manage and store the hibernation data, the hibernation manager may identify an allocation of high-performance storage for the hibernation data, obtain a compression pipeline based on the allocation, and stream the hibernation data through the compression pipeline. By doing so, the speed at which the hibernation data is written and read may be increased. Thus, hibernation and startup of the system may be enhanced.

    Cache memory that supports tagless addressing

    Publication Number: US12124382B2

    Publication Date: 2024-10-22

    Application Number: US17992443

    Application Date: 2022-11-22

    Applicant: Rambus Inc.

    CPC classification number: G06F12/1063 G06F12/0802 G06F12/1009 G06F12/1054

    Abstract: The disclosed embodiments relate to a computer system with a cache memory that supports tagless addressing. During operation, the system receives a request to perform a memory access, wherein the request includes a virtual address. In response to the request, the system performs an address-translation operation, which translates the virtual address into both a physical address and a cache address. Next, the system uses the physical address to access one or more levels of physically addressed cache memory, wherein accessing a given level of physically addressed cache memory involves performing a tag-checking operation based on the physical address. If the access to the one or more levels of physically addressed cache memory fails to hit on a cache line for the memory access, the system uses the cache address to directly index a cache memory, wherein directly indexing the cache memory does not involve performing a tag-checking operation and eliminates the tag storage overhead.
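
    A minimal sketch of the dual-translation idea, under assumptions not taken from the patent: a single table entry yields both a physical address and a cache address, the physically addressed level performs a tag compare, and a miss falls through to a direct, tagless index into the large cache. The page size, line size, and every structure name here are hypothetical.

        # Illustrative sketch only: one translation step returns both addresses;
        # the tagless path indexes the large cache directly, with no tag compare.
        PAGE_SHIFT = 12   # assumed 4 KiB pages
        LINE_SHIFT = 6    # assumed 64-byte cache lines

        def translate(vaddr, page_table):
            """Return (physical address, cache address) for one virtual address."""
            vpn, offset = vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)
            ppn, cache_page = page_table[vpn]   # both mappings live in one entry
            return (ppn << PAGE_SHIFT) | offset, (cache_page << PAGE_SHIFT) | offset

        def lookup(vaddr, page_table, l1_tags, big_cache):
            paddr, caddr = translate(vaddr, page_table)
            line = paddr >> LINE_SHIFT
            if l1_tags.get(line % 512) == line:     # tag check in the physical L1
                return "L1 hit"
            return big_cache[caddr >> LINE_SHIFT]   # direct index, no tag check

        page_table = {0x1: (0x40, 0x7)}             # VPN 1 -> PPN 0x40, cache page 7
        big_cache = {}
        _, caddr = translate(0x1234, page_table)
        big_cache[caddr >> LINE_SHIFT] = b"line data"
        print(lookup(0x1234, page_table, {}, big_cache))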

    System for Memory Resident Data Movement Offload and Associated Methods

    Publication Number: US20240289281A1

    Publication Date: 2024-08-29

    Application Number: US18115607

    Application Date: 2023-02-28

    CPC classification number: G06F12/1063 G06F12/0882 G06F13/1642

    Abstract: A memory access engine is configured to receive a request comprising a command and to determine whether the command comprises an atomic command. If the command comprises the atomic command, the memory access engine determines whether the command includes a virtual address or a physical address. Based on determining that the command includes a virtual address, the memory access engine translates the virtual address to a corresponding physical address. The memory access engine determines an opcode included in the command and, based on the opcode, adds the command and the physical address to a particular queue of a plurality of queues. While a central processing unit (CPU) performs processing tasks, the memory access engine, based on the command, operates a memory fabric and, after receiving a message from the memory fabric indicating that the memory command has been completed, updates a status associated with the command to a completed status.
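
    The dispatch flow in the abstract can be sketched roughly as below. The field names (opcode, is_atomic, is_virtual), the page-table shape, and the queue layout are assumptions made for illustration, not the disclosed engine's interfaces.

        # Illustrative sketch only: submit translates a virtual address if needed,
        # routes the command to a queue by opcode/atomicity, and a fabric-completion
        # callback later flips the status. All names are hypothetical.
        from collections import deque
        from dataclasses import dataclass

        @dataclass
        class Command:
            opcode: str          # e.g. "copy", "fill", "atomic_add"
            addr: int
            is_virtual: bool
            is_atomic: bool
            status: str = "pending"

        class MemoryAccessEngine:
            def __init__(self, page_table):
                self.page_table = page_table
                self.queues = {"copy": deque(), "fill": deque(), "atomic": deque()}

            def translate(self, vaddr):
                page, offset = vaddr >> 12, vaddr & 0xFFF
                return (self.page_table[page] << 12) | offset

            def submit(self, cmd):
                paddr = self.translate(cmd.addr) if cmd.is_virtual else cmd.addr
                queue = "atomic" if cmd.is_atomic else cmd.opcode
                self.queues[queue].append((cmd, paddr))   # CPU stays free meanwhile

            def on_fabric_complete(self, cmd):
                cmd.status = "completed"                  # fabric signalled completion

        engine = MemoryAccessEngine(page_table={0x1: 0x80})
        cmd = Command(opcode="copy", addr=0x1ABC, is_virtual=True, is_atomic=False)
        engine.submit(cmd)
        engine.on_fabric_complete(cmd)
        print(cmd.status)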
