CACHE MANAGEMENT APPARATUS AND METHOD (캐시 관리 장치 및 방법)
    1.
    Invention Application

    Publication No.: WO2022191622A1

    Publication Date: 2022-09-15

    Application No.: PCT/KR2022/003337

    Filing Date: 2022-03-10

    Inventors: 이상원, 안미진

    Abstract: A cache management apparatus according to the present invention comprises: a main buffer using volatile memory; an additional buffer using non-volatile memory and having the same processing speed as the main buffer; storage using non-volatile memory with a larger capacity than the main buffer; and a buffer controller that selectively stores victim pages that cannot be kept in the main buffer in either the additional buffer or the storage, based on temporal locality, write asymmetry, and cyclic access.
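
    For illustration only, a minimal Python sketch of the kind of victim-page placement decision the abstract describes; the page statistics, thresholds, and names below are invented, not taken from the patent.

```python
# Hypothetical victim-page router: weighs temporal locality, write asymmetry,
# and cyclic-access hints when deciding whether an evicted page should go to
# the fast non-volatile additional buffer or to the larger, slower storage.
from dataclasses import dataclass


@dataclass
class PageStats:
    last_access: int   # logical clock of the most recent access
    write_count: int   # writes observed while the page was cached
    read_count: int    # reads observed while the page was cached
    cyclic_hits: int   # times the page reappeared at a regular interval


def place_victim(stats: PageStats, now: int, recency_window: int = 128) -> str:
    """Return 'additional_buffer' or 'storage' for a page evicted from the main buffer."""
    recent = (now - stats.last_access) < recency_window   # temporal locality
    write_heavy = stats.write_count > stats.read_count    # write asymmetry
    cyclic = stats.cyclic_hits >= 2                        # cyclic (looping) access

    # Pages likely to return soon, or whose write-back to storage would be
    # expensive, are kept in the additional buffer; the rest go to storage.
    if recent or write_heavy or cyclic:
        return "additional_buffer"
    return "storage"


if __name__ == "__main__":
    print(place_victim(PageStats(last_access=900, write_count=5,
                                 read_count=1, cyclic_hits=0), now=1000))
```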

    EVICTION OF A CACHE LINE BASED ON A MODIFICATION OF A SECTOR OF THE CACHE LINE
    2.
    Invention Application

    Publication No.: WO2020176832A1

    Publication Date: 2020-09-03

    Application No.: PCT/US2020/020294

    Filing Date: 2020-02-28

    Abstract: An indication to perform an eviction operation on a cache line in a cache can be received. A determination can be made as to whether at least one sector of the cache line is associated with invalid data. In response to determining that at least one sector of the cache line is associated with invalid data, a read operation can be performed to retrieve valid data associated with the at least one sector. The at least one sector of the cache line that is associated with the invalid data can be modified based on the valid data. Furthermore, the eviction operation can be performed on the cache line with the modified at least one sector.
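
    A rough sketch, under stated assumptions, of the flow the abstract outlines: sectors flagged invalid are refreshed with valid data before the whole line is evicted. The sector size and the read_sector/write_line callables are hypothetical stand-ins for the cache's backing-store interface.

```python
# Illustrative only: patch invalid sectors of a cache line, then evict it.
SECTOR_SIZE = 64


def evict_line(line: bytearray, valid: list, base_addr: int,
               read_sector, write_line) -> None:
    """Fill invalid sectors with valid data, then write the full line back."""
    for i, is_valid in enumerate(valid):
        if not is_valid:
            # Read operation retrieves the valid copy of this sector.
            fresh = read_sector(base_addr + i * SECTOR_SIZE)
            line[i * SECTOR_SIZE:(i + 1) * SECTOR_SIZE] = fresh
            valid[i] = True
    # Eviction proceeds on the cache line with the modified sector(s).
    write_line(base_addr, bytes(line))


if __name__ == "__main__":
    valid_copy = {64: b"B" * SECTOR_SIZE}                        # backing store for sector 1
    line = bytearray(b"A" * SECTOR_SIZE + b"?" * SECTOR_SIZE)    # sector 1 holds invalid data
    evict_line(line, [True, False], base_addr=0,
               read_sector=lambda addr: valid_copy[addr],
               write_line=lambda addr, data: print("evicted", len(data), "bytes"))
```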

    STORAGE DEVICE (ストレージ装置)
    3.
    Invention Application

    Publication No.: WO2020095583A1

    Publication Date: 2020-05-14

    Application No.: PCT/JP2019/038973

    Filing Date: 2019-10-02

    Abstract: The storage device is connected to a host device. The storage device comprises: non-volatile memory that stores data; a logical area management unit that divides the non-volatile memory into a plurality of logical areas and manages them; a logical information storage unit that stores information on the plurality of logical areas managed by the logical area management unit; an access pattern management unit that manages access patterns, specified by the host device, corresponding to the plurality of divided logical areas; an access pattern storage unit that stores information on the access patterns; an access pattern processing management unit that, when the host device accesses any of the plurality of logical areas, selects from the access pattern information the access pattern corresponding to the accessed area; and an access processing management/execution unit that performs processing on the non-volatile memory based on the access pattern identified by the access pattern processing management unit and transfers the data to the host device.
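
    An illustrative sketch of per-region access-pattern dispatch along the lines of the abstract; the region boundaries, pattern names, and handler signatures are made up.

```python
# Hypothetical storage device that selects a host-specified access pattern
# for whichever logical region an access falls into, then serves the data.
from typing import Callable, Dict, Tuple


class PatternStorage:
    def __init__(self) -> None:
        # Logical regions: (start LBA, end LBA) -> pattern name.
        self.regions: Dict[Tuple[int, int], str] = {}
        # Pattern name -> processing routine run before transferring data.
        self.handlers: Dict[str, Callable[[int, int], bytes]] = {
            "default": lambda lba, length: bytes(length),
        }

    def set_pattern(self, start: int, end: int, pattern: str) -> None:
        """Host-specified access pattern for one logical region."""
        self.regions[(start, end)] = pattern

    def read(self, lba: int, length: int) -> bytes:
        # Pick the pattern of the region containing the accessed LBA.
        for (start, end), pattern in self.regions.items():
            if start <= lba < end:
                return self.handlers[pattern](lba, length)
        return self.handlers["default"](lba, length)


if __name__ == "__main__":
    dev = PatternStorage()
    dev.handlers["sequential"] = lambda lba, length: bytes(length)  # e.g. with read-ahead
    dev.set_pattern(0, 1024, "sequential")
    print(len(dev.read(10, 4096)))
```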

    REDUCING PROBABILISTIC FILTER QUERY LATENCY
    4.
    Invention Application

    Publication No.: WO2019045961A1

    Publication Date: 2019-03-07

    Application No.: PCT/US2018/045602

    Filing Date: 2018-08-07

    Abstract: Systems and techniques for reducing probabilistic filter query latency are described herein. A query for a probabilistic filter that is stored on a first media may be received from a caller. In response to receiving the query, cached segments of the probabilistic filter stored on a second media may be obtained. Here, the probabilistic filter provides a set membership determination that is conclusive in a determination that an element is not in a set. The query may be executed on the cached segments, resulting in a partial query result. Retrieval of the remaining data of the probabilistic filter from the first media to the second media may be initiated without intervention from the caller. Here, the remaining data corresponds to the query and to data that is not in the cached segments. The partial query result may then be returned to the caller.
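
    A hedged sketch of answering a probabilistic-filter query from cached segments first while prefetching the rest without caller intervention; the segment layout, bit addressing, and threading model are assumptions, not details from the patent.

```python
# Illustrative segmented Bloom-style filter: query what is cached, return a
# partial result, and pull the missing segments in the background.
import threading
from typing import List, Optional, Tuple


class SegmentedFilter:
    def __init__(self, segments: List[bytes], cached: set) -> None:
        self.segments = segments                          # full filter (slow media)
        self.cache = {i: segments[i] for i in cached}     # fast-media copies

    @staticmethod
    def _bit(seg: bytes, bit: int) -> bool:
        return bool((seg[bit // 8] >> (bit % 8)) & 1)

    def query(self, bits: List[Tuple[int, int]]) -> Optional[bool]:
        """bits = (segment index, bit offset) pairs derived from the element's hashes.
        Returns False (definitely absent), True (possibly present), or None when
        only a partial answer is available from the cached segments."""
        missing = []
        for seg_idx, bit in bits:
            seg = self.cache.get(seg_idx)
            if seg is None:
                missing.append(seg_idx)
            elif not self._bit(seg, bit):
                return False          # conclusive: the element is not in the set
        if not missing:
            return True
        # Partial result: fetch the remaining segments without the caller's help.
        threading.Thread(target=self._prefetch, args=(missing,), daemon=True).start()
        return None

    def _prefetch(self, indices: List[int]) -> None:
        for i in indices:
            self.cache[i] = self.segments[i]


if __name__ == "__main__":
    f = SegmentedFilter([bytes([0b10]), bytes([0b01])], cached={0})
    print(f.query([(0, 1), (1, 0)]))   # None: partial result, segment 1 is prefetching
```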

    METHOD OF OPERATING A CACHE
    5.
    Invention Application

    Publication No.: WO2019029793A1

    Publication Date: 2019-02-14

    Application No.: PCT/EP2017/070026

    Filing Date: 2017-08-08

    Inventor: IBAYAN, Ariel

    Abstract: The invention relates to a method (100) of operating a cache module (400) comprising cache lines which are the smallest memory blocks of the cache module (400), wherein the method (100) comprises a step (110) of receiving an incoming message for storing, the step (110) of receiving comprising: determining (112) the size of the message to in turn determine the number of cache lines required for the message; finding (116) available cache lines for the determined number of cache lines, wherein the step (116) of finding comprises: i. utilizing (116i) an algorithm using a de Bruijn sequence to find an available first cache line by determining the location of a least significant bit of value 1; ii. storing (116ii) the message or, if more than one cache line is required, part of the message in the first cache line in the cache module (400); iii. storing (116iii) the location of the first cache line in a lookup table (300) indexing details of the stored message; iv. repeating steps i to iii if more than one cache line is required for the message. The invention further relates to a computer program product and to an electronic control unit comprising, among others, a processor configured to perform the method (100). The invention further relates to a vehicle control unit comprising a plurality of the electronic control units in electronic communication with each other by way of a data bus system.
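
    The de Bruijn technique mentioned in step i is a well-known constant-time way to locate the least-significant 1 bit in a free-line bitmap; a 32-bit version is sketched below using the standard published constant, which is not necessarily the one used in the patent.

```python
# Find the index of the lowest set bit (first available cache line) with a
# de Bruijn multiply-and-lookup; runs in O(1) with a 32-entry table.
DEBRUIJN32 = 0x077CB531          # a de Bruijn sequence B(2, 5) packed into a 32-bit word

# Precompute the lookup table: shifting the sequence left by i places a
# unique 5-bit value in the top bits, which maps back to bit position i.
INDEX32 = [0] * 32
for i in range(32):
    INDEX32[((DEBRUIJN32 << i) & 0xFFFFFFFF) >> 27] = i


def first_free_line(bitmap: int) -> int:
    """Return the position of the lowest 1 bit in a 32-bit bitmap, or -1 if none."""
    if bitmap == 0:
        return -1
    isolated = bitmap & -bitmap                            # keep only the lowest 1 bit
    return INDEX32[((isolated * DEBRUIJN32) & 0xFFFFFFFF) >> 27]


if __name__ == "__main__":
    print(first_free_line(0b1011000))   # 3: cache line 3 is the first free one
```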

    EXTERNAL STORAGE DEVICE WITH INFORMATION PROCESSING CAPABILITY
    6.
    Invention Application

    Publication No.: WO2017082822A1

    Publication Date: 2017-05-18

    Application No.: PCT/SG2016/050552

    Filing Date: 2016-11-08

    IPC Classification: G06F3/06 G06F12/0866

    Abstract: An external storage device is disclosed, the external storage device comprising a controller, a non-volatile medium for storing firmware comprising analytical programs, and a storage memory medium comprising a storage partition and a reserve partition. The storage partition is accessible by a host device and configured to store data files, while the reserve partition is inaccessible by the host device. The controller is configured to execute the analytical programs and perform file processing actions on the data files to generate information files. The controller is configured to store the information files in the reserve partition, and subsequently in the storage partition when the external storage device is idle. In embodiments, the external storage device further comprises an inbuilt power source so that the file processing actions can be performed and the information files can be stored in the storage partition even when the external storage device is decoupled from the host device.
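
    A very loose sketch of the idle-time flow described above: analysis output is parked in a host-invisible reserve area and published to the host-visible partition when the device goes idle. Every name below is invented for illustration.

```python
# Hypothetical external drive: the controller runs a trivial stand-in
# "analytical program" and defers publishing results until the device is idle.
class ExternalDrive:
    def __init__(self) -> None:
        self.storage_partition = {}   # host-visible data files
        self.reserve_partition = {}   # host-invisible information files

    def analyze(self, name: str) -> None:
        """File-processing action: derive an information file from a data file."""
        data = self.storage_partition[name]
        info = f"{name}: {len(data)} bytes".encode()   # stand-in analysis result
        self.reserve_partition[name + ".info"] = info

    def on_idle(self) -> None:
        """When idle, move information files into the host-visible partition."""
        self.storage_partition.update(self.reserve_partition)
        self.reserve_partition.clear()


if __name__ == "__main__":
    drive = ExternalDrive()
    drive.storage_partition["photo.jpg"] = b"\x00" * 2048
    drive.analyze("photo.jpg")
    drive.on_idle()
    print(sorted(drive.storage_partition))   # ['photo.jpg', 'photo.jpg.info']
```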


    DISTRIBUTED CACHE LIVE MIGRATION
    7.
    Invention Application

    Publication No.: WO2017069648A1

    Publication Date: 2017-04-27

    Application No.: PCT/RU2015/000696

    Filing Date: 2015-10-21

    Abstract: A system, comprising: a first host comprising a first cache and associated with a virtual machine, VM; and a second host comprising a second cache; wherein the first host is adapted to send cache data of the first cache to the second host in response to a notification, said cache data associated with the VM and said notification indicating that the VM is to be migrated from the first host to the second host, and wherein the first host is adapted to send write operations associated with the VM to the second host in response to receiving the notification; and wherein the second host is adapted to apply, in response to receiving the notification, read operations associated with cache data of the VM to the first cache if said cache data is not present in the second cache, and to apply write operations associated with cache data of the VM to the second cache.
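
    A sketch, with invented class and method names, of the read/write split described for the migration window: writes for the VM are forwarded to the destination host, and destination reads fall back to the source cache on a miss.

```python
# Illustrative two-host cache hand-off during a VM live migration.
from typing import Dict, Optional


class SourceHost:
    def __init__(self) -> None:
        self.cache: Dict[int, bytes] = {}
        self.dest: Optional["DestinationHost"] = None

    def notify_migration(self, dest: "DestinationHost") -> None:
        """Migration announced: push the VM's cache data to the destination."""
        self.dest = dest
        dest.cache.update(self.cache)

    def write(self, block: int, data: bytes) -> None:
        # After the notification, VM writes are sent to the destination host.
        if self.dest is not None:
            self.dest.cache[block] = data
        else:
            self.cache[block] = data


class DestinationHost:
    def __init__(self, source: SourceHost) -> None:
        self.cache: Dict[int, bytes] = {}
        self.source = source

    def read(self, block: int) -> Optional[bytes]:
        # Serve from the local cache; on a miss, read through the source cache.
        return self.cache.get(block, self.source.cache.get(block))


if __name__ == "__main__":
    src = SourceHost()
    src.cache[7] = b"warm"
    dst = DestinationHost(src)
    src.notify_migration(dst)
    src.write(8, b"new")
    print(dst.read(7), dst.read(8))   # b'warm' b'new'
```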


    SSD ADDRESS TABLE CACHE MANAGEMENT
    9.
    Invention Application

    Publication No.: WO2021221772A1

    Publication Date: 2021-11-04

    Application No.: PCT/US2021/019828

    Filing Date: 2021-02-26

    Abstract: Aspects of a storage device including a cache having a logical-to-physical (L2P) mapping table, a scratchpad buffer, and a controller are provided to optimize cache storage of L2P mapping information. A controller receives a random pattern of logical addresses and identifies each logical address within one of multiple probability distributions. Based on a frequency of occurrence of each logical address, the controller stores a control page including the logical address either within a partition of the L2P mapping table which is associated with the corresponding probability distribution, or in the scratchpad buffer. The frequency of occurrence of each logical address is determined based on whether the logical address is within one or more standard deviations from the mean of each probability distribution. As a result, frequently occurring control pages are stored in cache, while infrequently occurring control pages are stored in the scratchpad buffer.
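
    An illustrative sketch of the placement rule the abstract describes, with hypothetical distribution parameters: a control page whose logical address lies within k standard deviations of some distribution's mean is treated as frequently occurring and kept in the cached L2P partition; otherwise it goes to the scratchpad buffer.

```python
# Hypothetical control-page placement based on per-distribution frequency.
from dataclasses import dataclass
from typing import List


@dataclass
class Distribution:
    mean: float
    stddev: float


def place_control_page(lba: int, dists: List[Distribution], k: float = 1.0) -> str:
    """Return 'l2p_cache' for frequently occurring addresses, else 'scratchpad'."""
    for d in dists:
        if abs(lba - d.mean) <= k * d.stddev:
            # Inside a high-probability band: keep the control page cached.
            return "l2p_cache"
    return "scratchpad"


if __name__ == "__main__":
    dists = [Distribution(mean=10_000, stddev=500),
             Distribution(mean=80_000, stddev=2_000)]
    print(place_control_page(10_200, dists))   # l2p_cache
    print(place_control_page(50_000, dists))   # scratchpad
```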