HOST-ASSISTED MEMORY-SIDE PREFETCHER
    Invention Application

    Publication No.: WO2021257281A1

    Publication Date: 2021-12-23

    Application No.: PCT/US2021/035535

    Application Date: 2021-06-02

    Abstract: Methods, apparatuses, and techniques related to a host-assisted memory-side prefetcher are described herein. In general, prefetchers monitor the pattern of memory-address requests by a host device and use the pattern information to determine or predict future memory-address requests and fetch data associated with those predicted requests into a faster memory. In many cases, prefetchers that can make predictions with high performance use appreciable processing and computing resources, power, and cooling. Generally, however, producing a prefetching configuration that the prefetcher uses involves more resources than making predictions. The described host-assisted memory-side prefetcher uses the greater computing resources of the host device (102) to produce at least an updated prefetching configuration (404). The memory-side prefetcher uses the prefetching configuration to predict the data to prefetch into the faster memory, which allows a higher-performance prefetcher to be implemented in the memory device with a reduced resource burden on the memory device (202).
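The division of labor in the abstract, where the host derives the prefetching configuration and the memory device only applies it, can be illustrated with a minimal Python sketch. The stride-based configuration, the function names, and the `degree` parameter are illustrative assumptions, not the patented implementation.

```python
from collections import Counter

def host_build_config(addresses):
    """Host side (resource-rich): derive a prefetching configuration from
    the observed request stream -- here, simply the dominant stride."""
    strides = [b - a for a, b in zip(addresses, addresses[1:])]
    stride, _ = Counter(strides).most_common(1)[0]
    # Configuration pushed down to the memory-side prefetcher.
    return {"stride": stride, "degree": 2}

def memory_side_predict(config, last_address):
    """Memory side (resource-constrained): cheap prediction that only
    applies the host-produced configuration."""
    return [last_address + config["stride"] * i
            for i in range(1, config["degree"] + 1)]

stream = [0x1000, 0x1040, 0x1080, 0x10C0]
cfg = host_build_config(stream)
preds = memory_side_predict(cfg, stream[-1])
print([hex(a) for a in preds])  # ['0x1100', '0x1140']
```

The expensive step (pattern analysis) runs once on the host; the per-request step on the device is a handful of additions, which is the resource asymmetry the abstract describes.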

    DATA DEFINED CACHES FOR SPECULATIVE AND NORMAL EXECUTIONS

    Publication No.: WO2021021444A1

    Publication Date: 2021-02-04

    Application No.: PCT/US2020/042167

    Application Date: 2020-07-15

    Abstract: A cache system, having: a first cache; a second cache; a configurable data bit; and a logic circuit coupled to a processor to control the caches based on the configurable bit. When the configurable bit is in a first state, the logic circuit is configured to: implement commands for accessing a memory system via the first cache, when an execution type is a first type; and implement commands for accessing the memory system via the second cache, when the execution type is a second type. When the configurable data bit is in a second state, the logic circuit is configured to: implement commands for accessing the memory system via the second cache, when the execution type is the first type; and implement commands for accessing the memory system via the first cache, when the execution type is the second type.
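The routing rule in the abstract, where a single configurable bit decides which cache serves which execution type, can be modeled in a few lines of Python. The class and method names are illustrative assumptions; the patent describes a hardware logic circuit, not software.

```python
class DualCacheSystem:
    """Sketch of the configurable-bit cache routing described above."""
    def __init__(self):
        self.cache_a = {}    # "first cache"
        self.cache_b = {}    # "second cache"
        self.config_bit = 0  # configurable data bit, first state

    def _select(self, execution_type):
        # First state: normal execution uses cache A, speculative uses B.
        # Second state: the mapping between caches and types is swapped.
        normal_uses_a = (self.config_bit == 0)
        use_a = (execution_type == "normal") == normal_uses_a
        return self.cache_a if use_a else self.cache_b

    def load(self, execution_type, address, memory):
        cache = self._select(execution_type)
        if address not in cache:
            cache[address] = memory[address]  # fill on miss
        return cache[address]

memory = {0x10: "payload"}
cs = DualCacheSystem()
cs.load("speculative", 0x10, memory)  # first state: fills cache B only
cs.config_bit = 1                     # second state: mapping swaps
cs.load("speculative", 0x10, memory)  # now fills cache A
```

Keeping speculative fills in their own cache means mis-speculated loads never perturb the normal cache's contents, which is the isolation the swap mechanism preserves in either bit state.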

    APPARATUSES AND METHODS FOR AN OPERATING SYSTEM CACHE IN A SOLID STATE DEVICE
    Invention Application (Pending-Published)

    Publication No.: WO2018075290A1

    Publication Date: 2018-04-26

    Application No.: PCT/US2017/055845

    Application Date: 2017-10-10

    Inventor: JUNG, Juyoung

    Abstract: The present disclosure includes apparatuses and methods for an operating system cache in a solid state device (SSD). An example apparatus includes the SSD, which includes an In-SSD volatile memory, a non-volatile memory, and an interconnect that couples the non-volatile memory to the In-SSD volatile memory. The SSD also includes a controller configured to receive a request for performance of an operation and to direct that a result of the performance of the operation is accessible in the In-SSD volatile memory as an In-SSD main memory operating system cache.
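A minimal Python sketch of the controller behavior the abstract describes: the operation's result is made accessible in the SSD's own volatile memory, acting as an In-SSD main memory operating system cache. The interface and data names are assumptions for illustration.

```python
class SolidStateDevice:
    """Sketch of an SSD whose controller exposes operation results in
    In-SSD volatile memory instead of returning them over the bus."""
    def __init__(self):
        self.nonvolatile = {"page0": b"stored-data"}  # non-volatile memory
        self.in_ssd_volatile = {}  # In-SSD main memory OS cache

    def handle_request(self, operation, key):
        """Controller: perform the requested operation and direct that the
        result is accessible in the In-SSD volatile memory."""
        if operation == "read":
            self.in_ssd_volatile[key] = self.nonvolatile[key]
            return ("cached", key)
        raise ValueError(f"unsupported operation: {operation}")

ssd = SolidStateDevice()
ssd.handle_request("read", "page0")  # result now lives in the In-SSD cache
```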


    FILE SYSTEM AND HOST PERFORMANCE BOOSTER FOR FLASH MEMORY

    Publication No.: WO2022205161A1

    Publication Date: 2022-10-06

    Application No.: PCT/CN2021/084657

    Application Date: 2021-03-31

    Abstract: Disclosed herein are system, method, and computer program product aspects for managing a storage system. In an aspect, a host device may generate a configuration corresponding to a file and transmit the configuration to a memory device, such as 3D NAND memory. The configuration instructs the memory device to refrain from transmitting a logic-to-physical (L2P) dirty entry notification to the host device. The L2P dirty entry notification corresponds to the file. The host device may also generate a second configuration corresponding to the file and transmit the second configuration to the memory device. The second configuration instructs the memory device to resume transmitting the L2P dirty entry notification corresponding to the file to the host device.
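The suppress/resume flow for per-file L2P dirty-entry notifications can be sketched as follows. The message and method names are assumptions; the abstract does not specify the wire protocol.

```python
class MemoryDevice:
    """Sketch of per-file suppression of L2P dirty-entry notifications."""
    def __init__(self):
        self.suppressed = set()  # files whose notifications are withheld
        self.sent = []           # notifications delivered to the host

    def configure(self, file_id, notify):
        """Apply a host-generated configuration for one file."""
        if notify:
            self.suppressed.discard(file_id)  # second config: resume
        else:
            self.suppressed.add(file_id)      # first config: refrain

    def l2p_entry_dirtied(self, file_id, lba):
        """An L2P entry for this file changed on the device side."""
        if file_id not in self.suppressed:
            self.sent.append((file_id, lba))  # tell host to refresh its copy

dev = MemoryDevice()
dev.configure("fileA", notify=False)
dev.l2p_entry_dirtied("fileA", 100)  # suppressed while the file is hot
dev.configure("fileA", notify=True)
dev.l2p_entry_dirtied("fileA", 101)  # delivered after resume
print(dev.sent)  # [('fileA', 101)]
```

Withholding notifications while a file is being written in bulk spares the host a flood of invalidations for mappings it would immediately invalidate again.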

    SYSTEM AND METHOD FOR LOCAL CACHE SYNCHRONIZATION

    Publication No.: WO2022129992A1

    Publication Date: 2022-06-23

    Application No.: PCT/IB2020/062033

    Application Date: 2020-12-16

    Applicant: COUPANG CORP.

    Abstract: A computer-implemented method for synchronizing local caches is disclosed. The method may include receiving a content update which is an update to a data entry stored in local caches of each of a plurality of remote servers. The method may include transmitting the content update to a first remote server to update a corresponding data entry in a local cache of the first remote server. Further, the method may include generating an invalidation command, indicating the change in the corresponding data entry. The method may include transmitting the invalidation command from the first remote server to the message server. The method may include generating, by the message server, a plurality of partitions based on the received invalidation command. The method may include transmitting, from the message server to each of the remote servers, the plurality of partitions, so that the remote servers update their respective local caches.
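The message-server side of this flow, where invalidation commands are split into partitions that fan out to every remote server, can be sketched in Python. Hashing by data-entry key is an assumption; the abstract does not specify the partitioning rule.

```python
def partition_invalidations(commands, num_partitions):
    """Message server: split invalidation commands into partitions,
    keyed here by the invalidated entry's key (an assumed scheme)."""
    partitions = [[] for _ in range(num_partitions)]
    for cmd in commands:
        partitions[hash(cmd["key"]) % num_partitions].append(cmd)
    return partitions

def apply_partitions(local_cache, partitions):
    """Remote server: drop invalidated entries from its local cache."""
    for part in partitions:
        for cmd in part:
            local_cache.pop(cmd["key"], None)
    return local_cache

cache = {"item1": "stale", "item2": "ok"}
parts = partition_invalidations([{"key": "item1"}], num_partitions=3)
print(apply_partitions(cache, parts))  # {'item2': 'ok'}
```

Partitioning lets the message server deliver invalidations in parallel streams, so every remote server converges on the updated entry without each one contacting the origin.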

    CACHE MANAGEMENT METHOD AND APPARATUS, STORAGE MEDIUM, AND SOLID-STATE NON-VOLATILE STORAGE DEVICE

    Publication No.: WO2021232743A1

    Publication Date: 2021-11-25

    Application No.: PCT/CN2020/132910

    Application Date: 2020-11-30

    Inventor: 向雄

    Abstract: The present disclosure provides a cache management method and apparatus, a storage medium, and a solid-state non-volatile storage device. The cache management method includes: determining a uniform compression ratio according to the compression ratios of the first L2P mapping-table sub-layer units, where a first L2P mapping-table sub-layer unit is an L2P mapping-table sub-layer unit in compressed format; determining a split count for the cache space according to the uniform compression ratio and the cache capacity, where the cache space is the storage space of the cache memory; splitting the cache space by the split count to obtain multiple subspaces; and using the subspaces to store the first L2P mapping-table sub-layer units. By determining a uniform compression ratio from the compression ratios of the first L2P mapping-table sub-layer units, and splitting the storage space into a suitable number of subspaces according to that ratio and the cache capacity, the subspaces can hold more first L2P mapping-table sub-layer units, improving the utilization of the cache memory's cache space.
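The split calculation in this method can be sketched numerically. Choosing the largest per-unit ratio as the uniform ratio is an assumption (it guarantees every compressed unit fits in a subspace); the abstract does not state how the uniform ratio is picked.

```python
def plan_cache_split(unit_ratios, raw_unit_size, cache_capacity):
    """Derive the uniform compression ratio, the subspace size, and the
    number of subspaces the cache can be split into."""
    uniform_ratio = max(unit_ratios)  # conservative: fits every unit
    subspace_size = int(raw_unit_size * uniform_ratio)
    split_count = cache_capacity // subspace_size
    return uniform_ratio, subspace_size, split_count

# Three compressed L2P sub-layer units with ratios 0.40-0.50, a 4 KiB raw
# unit, and a 64 KiB cache yield 32 subspaces of 2 KiB each.
ratio, size, count = plan_cache_split([0.40, 0.50, 0.45],
                                      raw_unit_size=4096,
                                      cache_capacity=65536)
print(ratio, size, count)  # 0.5 2048 32
```

With uncompressed 4 KiB units the same cache holds only 16 entries, so sizing subspaces to the compressed unit doubles the number of resident L2P sub-layer units in this example.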

    ADAPTIVE CONTEXT METADATA MESSAGE FOR OPTIMIZED TWO-CHIP PERFORMANCE

    Publication No.: WO2021262260A1

    Publication Date: 2021-12-30

    Application No.: PCT/US2021/020095

    Application Date: 2021-02-26

    Abstract: Aspects of a storage device including a master chip controller and a slave chip processor and memory including a plurality of memory locations are provided which allow for simplified processing of descriptors associated with host commands in the slave chip based on an adaptive context metadata message from the master chip. When the controller receives a host command, the controller in the master chip provides to the processor in the slave chip a descriptor associated with a host command, an instruction to store the descriptor in the one of the memory locations, and the adaptive context metadata message mapping a type of the descriptor to the one of the memory locations. The processor may then process the descriptor stored in the one of the memory locations based on the message, for example, by refraining from identifying certain information indicated in the descriptor. Reduced latency in command execution may thereby result.
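The master-to-slave flow can be sketched as follows: the adaptive context metadata message maps a descriptor type to a memory location and tells the slave which descriptor fields it may refrain from identifying. The field and method names are assumptions for illustration.

```python
class SlaveChip:
    """Sketch of descriptor processing driven by an adaptive context
    metadata message from the master chip."""
    def __init__(self, num_slots):
        self.slots = [None] * num_slots  # plurality of memory locations
        self.type_map = {}     # descriptor type -> memory location
        self.skip_fields = {}  # descriptor type -> fields not to parse

    def receive_metadata(self, desc_type, slot, skip_fields):
        """Adaptive context metadata message from the master chip."""
        self.type_map[desc_type] = slot
        self.skip_fields[desc_type] = set(skip_fields)

    def store_descriptor(self, desc_type, descriptor):
        """Master's instruction to store the descriptor in its location."""
        self.slots[self.type_map[desc_type]] = descriptor

    def process(self, desc_type):
        """Process the stored descriptor, refraining from identifying the
        fields the metadata message marked as already known."""
        descriptor = self.slots[self.type_map[desc_type]]
        return {k: v for k, v in descriptor.items()
                if k not in self.skip_fields[desc_type]}

slave = SlaveChip(num_slots=4)
slave.receive_metadata("read", slot=1, skip_fields=["queue_id"])
slave.store_descriptor("read", {"lba": 42, "len": 8, "queue_id": 0})
print(slave.process("read"))  # {'lba': 42, 'len': 8}
```

Because the slave skips fields the master has already contextualized, each descriptor needs less per-command parsing, which is the latency reduction the abstract claims.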

    LOGICAL-TO-PHYSICAL MAPPING OF DATA GROUPS WITH DATA LOCALITY

    Publication No.: WO2021127349A1

    Publication Date: 2021-06-24

    Application No.: PCT/US2020/065870

    Application Date: 2020-12-18

    Abstract: A system includes integrated circuit (IC) dies having memory cells and a processing device coupled to the IC dies. The processing device performs operations including storing, within a zone map data structure, zones of a logical block address (LBA) space sequentially mapped to physical address space of the IC dies. A zone map entry in the zone map data structure corresponds to a data group written to one or more of the IC dies. The operations further include storing, within a block set data structure indexed by a block set identifier of the zone map entry, a die identifier and a block identifier for each data block of multiple data blocks of the data group, and writing multiple data groups, which are sequentially mapped across the zones, sequentially across the IC dies. Each data block can correspond to a media (or erase) block of the IC dies.
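The two data structures in the abstract, a zone map whose entries carry a block set identifier and a block set table of (die, block) pairs, can be sketched in Python. The field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ZoneMapEntry:
    start_lba: int      # first LBA of the zone
    block_set_id: int   # index into the block set data structure

@dataclass
class ZoneMapping:
    """Sketch of the zone map and block set structures described above."""
    zone_map: list = field(default_factory=list)
    block_sets: dict = field(default_factory=dict)

    def write_data_group(self, start_lba, die_blocks):
        """Record a data group written sequentially across the IC dies;
        die_blocks is a list of (die_id, block_id) pairs."""
        bsid = len(self.block_sets)
        self.block_sets[bsid] = die_blocks
        self.zone_map.append(ZoneMapEntry(start_lba, bsid))
        return bsid

    def locate(self, zone_index):
        """Resolve a zone to the (die, block) pairs holding its data."""
        entry = self.zone_map[zone_index]
        return self.block_sets[entry.block_set_id]

zm = ZoneMapping()
zm.write_data_group(0, [(0, 10), (1, 10)])     # group striped over dies 0-1
zm.write_data_group(1024, [(2, 10), (3, 10)])  # next zone, next dies
print(zm.locate(1))  # [(2, 10), (3, 10)]
```

Because zones are sequentially mapped, the per-zone entry plus its block set replaces a per-LBA mapping table, shrinking the mapping metadata for sequential workloads.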

    EFFICIENT OBLIVIOUS PERMUTATION
    Invention Application

    Publication No.: WO2018200046A1

    Publication Date: 2018-11-01

    Application No.: PCT/US2018/013136

    Application Date: 2018-01-10

    Applicant: GOOGLE LLC

    Abstract: A method (700) for obliviously moving N data blocks (102) stored in memory hardware (114) includes organizing memory locations (118) of the memory hardware into substantially formula (I) data buckets (350) each containing formula (I) data blocks, and allocating substantially formula (I) buffer buckets (360) associated with new memory locations in the memory hardware. Each buffer bucket is associated with a corresponding cache slot (370) allocated at the client (104) for storing cached permutated data blocks. The method further includes iteratively providing the substantially formula (I) data blocks to the client. The client is configured to apply a random permutation on the substantially formula (I) data blocks within each corresponding received data bucket to generate permutated data blocks and determine a corresponding buffer bucket and a corresponding cache slot for each permutated data block.
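A toy Python sketch of the bucketed flow: organize the N blocks into data buckets, stream each bucket to the client, permute it there, and scatter the permuted blocks into buffer buckets at new locations. Reading the abstract's unrendered "formula (I)" as roughly √N (as in square-root-style oblivious shuffles) is an assumption, as are all names below.

```python
import math
import random

def oblivious_permute_sketch(blocks, num_buffer_slots=None, seed=0):
    """Toy model of the data-bucket / buffer-bucket shuffle; it preserves
    the multiset of blocks but is NOT a secure oblivious permutation."""
    rng = random.Random(seed)
    n = len(blocks)
    b = max(1, math.isqrt(n))  # assumed sqrt(N) bucket size
    data_buckets = [blocks[i * b:(i + 1) * b]
                    for i in range(math.ceil(n / b))]
    buffers = [[] for _ in range(num_buffer_slots or len(data_buckets))]
    for bucket in data_buckets:            # iteratively provided to client
        permuted = rng.sample(bucket, len(bucket))  # client-side permutation
        for blk in permuted:               # choose a buffer bucket per block
            buffers[rng.randrange(len(buffers))].append(blk)
    return buffers

out = oblivious_permute_sketch(list(range(16)))
print(sum(len(b) for b in out))  # 16: every block lands in some buffer
```

The point of the bucketing is bandwidth: the client only ever holds about √N blocks at once instead of all N, at the cost of √N round trips.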
