FLEXIBLE CACHE STRUCTURE FOR CACHING COMPRESSED AND UNCOMPRESSED DATA

    Publication No.: US20240320156A1

    Publication Date: 2024-09-26

    Application No.: US18588532

    Filing Date: 2024-02-27

    IPC Classification: G06F12/0877

    CPC Classification: G06F12/0877

    Abstract: A device in which each field in a first RAM, together with a respective field in a second RAM, forms a respective entry of a cache RAM. Caching circuitry is operable to use the respective field in the first RAM to hold a first portion of a single cacheline, and the respective field in the second RAM to hold the corresponding tag of the single cacheline and a remaining portion of the single cacheline. The caching circuitry is further arranged so that, upon a cache hit by a subsequent memory access operation requesting data for which a corresponding cacheline has already been cached, it retrieves the corresponding tag and the remaining portion of the respective cacheline from the second RAM in a first one of a sequence of clock cycles.
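    The split-entry layout in the abstract can be modelled in a few lines. The field widths below (a 64-byte line with 48 bytes in the first RAM) are illustrative assumptions, as are the class and method names; the abstract specifies only that the second RAM holds the tag plus the remainder, read in the first clock cycle of a hit.

```python
class SplitRamCache:
    """Toy model of a cache entry split across two RAMs (sizes assumed)."""

    def __init__(self, num_entries, first_bytes=48, line_bytes=64):
        self.first_bytes = first_bytes      # bytes of the line kept in RAM 1
        self.ram1 = [None] * num_entries    # first portion of each cacheline
        self.ram2 = [None] * num_entries    # (tag, remaining portion) pairs

    def fill(self, index, tag, line):
        self.ram1[index] = line[:self.first_bytes]
        self.ram2[index] = (tag, line[self.first_bytes:])

    def lookup(self, index, tag):
        """On a hit, the tag and remainder come from RAM 2 first (the
        abstract's first clock cycle); the first portion from RAM 1 follows."""
        entry = self.ram2[index]
        if entry is None or entry[0] != tag:
            return None                     # miss
        _, remainder = entry                # cycle 1: tag check + remainder
        first = self.ram1[index]            # later cycle: first portion
        return first + remainder
```

    A filled entry then reassembles the full line on a matching tag and reports a miss otherwise.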

    FLEXIBLE CACHE STRUCTURE FOR CACHING COMPRESSED AND UNCOMPRESSED DATA

    Publication No.: US20240320155A1

    Publication Date: 2024-09-26

    Application No.: US18588289

    Filing Date: 2024-02-27

    IPC Classification: G06F12/0877 G06F12/0895

    CPC Classification: G06F12/0877 G06F12/0895

    Abstract: A device in which each field in a first RAM, together with a respective field in a second RAM, forms a respective entry of a cache RAM. Caching circuitry is operable to select between applying a first mode and a second mode in at least one entry in the cache RAM. In the first mode, the respective field in the first RAM is used to hold a first portion of a single cacheline in a first format, and the respective field in the second RAM is used to hold the corresponding tag of the single cacheline and a remaining portion of the single cacheline. In the second mode, the first RAM is used to hold plural cachelines in a second format shorter than the first format, and the corresponding entry in the second RAM is used to hold the corresponding tags of the plural cachelines.
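    The two modes can be sketched as one function that produces the contents of a RAM-1 field and its paired RAM-2 field. The concrete sizes (64-byte uncompressed lines, 16-byte compressed lines, a 48-byte RAM-1 field) are assumptions for illustration; the abstract states only that the second format is shorter than the first.

```python
UNCOMPRESSED = 64   # first-format cacheline size (assumed)
COMPRESSED = 16     # second-format, compressed cacheline size (assumed)
RAM1_FIELD = 48     # capacity of one RAM-1 field in bytes (assumed)

def store_entry(tag_or_tags, data):
    """Build the (ram1_field, ram2_field) pair for one cache entry.

    A list of short lines selects the second mode (plural compressed
    cachelines in RAM 1, their tags in RAM 2); a single long line selects
    the first mode (first portion in RAM 1, tag + remainder in RAM 2).
    """
    if isinstance(data, list):                   # second mode
        assert all(len(d) == COMPRESSED for d in data)
        ram1 = b"".join(data)                    # plural short lines in RAM 1
        ram2 = list(tag_or_tags)                 # their tags in RAM 2
    else:                                        # first mode
        ram1 = data[:RAM1_FIELD]                 # first portion in RAM 1
        ram2 = (tag_or_tags, data[RAM1_FIELD:])  # tag + remainder in RAM 2
    return ram1, ram2
```

    Under these assumed sizes, one RAM-1 field holds either 48 bytes of a single uncompressed line or three whole compressed lines.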

    Data Storage Device with Memory Services for Storage Access Queues

    Publication No.: US20240264944A1

    Publication Date: 2024-08-08

    Application No.: US18432518

    Filing Date: 2024-02-05

    Inventor: Luca Bert

    Abstract: A computing device having a Compute Express Link (CXL) connection between a memory sub-system and a host system, and having storage access queues configured at least in part in the memory sub-system. The memory sub-system can attach, as a memory device, a portion of its fast random access memory over the connection to the host system. One or more storage access queues can be configured in the memory device. The host system can use a cache-coherent memory access protocol to communicate storage access messages over the connection to the random access memory of the memory sub-system. Optionally, the host system can have a memory with second storage access queues usable to access the storage services of the memory sub-system over the connection using a storage access protocol.
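    The queue placement can be illustrated with a minimal model in which the sub-system's attached RAM holds the storage access queue and the host enqueues messages by plain memory writes. The class names and the message format are invented for this sketch; the abstract does not define them.

```python
from collections import deque

class AttachedMemory:
    """Portion of the sub-system's fast RAM attached to the host (over CXL
    in the abstract; modelled here as a shared Python object)."""
    def __init__(self):
        self.queue = deque()    # storage access queue living in the memory device

class Host:
    def __init__(self, mem):
        self.mem = mem

    def submit_read(self, lba, count):
        # A cache-coherent memory write stands in for the real protocol:
        # the host simply places a message in the sub-system's RAM.
        self.mem.queue.append({"op": "read", "lba": lba, "count": count})

class MemorySubSystem:
    def __init__(self, mem):
        self.mem = mem

    def service_one(self):
        msg = self.mem.queue.popleft()
        return f"serviced {msg['op']} of {msg['count']} blocks at LBA {msg['lba']}"
```

    The point of the arrangement is that submission costs the host only a memory write into the attached region, not a separate storage-protocol round trip.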

    Caching data based on greenhouse gas data

    Publication No.: US11966336B2

    Publication Date: 2024-04-23

    Application No.: US17521707

    Filing Date: 2021-11-08

    Applicant: SAP SE

    IPC Classification: G06F12/0837 G06F12/0877

    CPC Classification: G06F12/0837 G06F12/0877

    Abstract: Some embodiments provide a program that receives a first set of data and a first greenhouse gas emission value. The program stores, in a cache, the first set of data and the first greenhouse gas emission value. The program receives a second set of data and a second greenhouse gas emission value. The program stores, in the cache, the second set of data and the second greenhouse gas emission value. The program receives a third set of data and a third greenhouse gas emission value. The program determines one of the first and second sets of data to remove from the cache based on the first and second greenhouse gas emission values. The program replaces, in the cache, one of the first and second sets of data and the corresponding first or second greenhouse gas emission value with the third set of data and the third greenhouse gas emission value.
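    The eviction step reduces to a small replacement policy. The abstract does not state which direction the comparison goes; the sketch below assumes the cached set with the lower emission value is evicted first (keeping data whose production was more carbon-costly), and the function name is invented for illustration.

```python
def insert_with_ghg_eviction(cache, capacity, key, data, ghg_value):
    """Insert (data, ghg_value) into cache (a dict: key -> (data, ghg_value)).

    When the cache is full, evict the entry with the lowest stored
    greenhouse gas emission value (an assumed policy direction).
    """
    if len(cache) >= capacity and key not in cache:
        victim = min(cache, key=lambda k: cache[k][1])  # lowest emission value
        del cache[victim]
    cache[key] = (data, ghg_value)
    return cache
```

    With a capacity of two, inserting a third set then replaces whichever of the first two carried the smaller emission value, matching the replace-one-of-two flow in the abstract.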

    Hybrid allocation of data lines in a streaming cache memory

    Publication No.: US11934311B2

    Publication Date: 2024-03-19

    Application No.: US17736557

    Filing Date: 2022-05-04

    Abstract: Various embodiments include a system for managing cache memory in a computing system. The system includes a sectored cache memory that provides a mechanism for sharing the sectors of a cache line among multiple cache line allocations. Traditionally, different cache line allocations are assigned to different cache lines in the cache memory. Further, a cache line allocation may not use all of the sectors of its cache line, leading to low utilization of the cache memory. With the present techniques, multiple cache line allocations share the same cache line, leading to improved cache memory utilization relative to prior techniques. Further, the sectors of cache line allocations can be assigned so as to reduce data bank conflicts when accessing cache memory. Reducing such data bank conflicts can result in improved memory access performance, even when cache lines are shared among multiple allocations.
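    The sector-sharing idea can be sketched as a per-line ownership map. The sector count, the one-sector-per-bank mapping, and the `start_bank` placement hint are assumptions introduced for this sketch, not details from the abstract.

```python
SECTORS_PER_LINE = 8    # assumed; one sector per data bank in this model

class SharedCacheLine:
    """One cache line whose sectors can belong to different allocations."""

    def __init__(self):
        self.owner = [None] * SECTORS_PER_LINE   # allocation id per sector

    def allocate(self, alloc_id, num_sectors, start_bank=0):
        """Claim free sectors, scanning from a chosen starting bank so that
        different allocations land in different banks, which is how this
        sketch models the bank-conflict reduction described above."""
        claimed = []
        for i in range(SECTORS_PER_LINE):
            s = (start_bank + i) % SECTORS_PER_LINE
            if self.owner[s] is None:
                self.owner[s] = alloc_id
                claimed.append(s)
                if len(claimed) == num_sectors:
                    return claimed
        for s in claimed:                        # roll back: not enough room
            self.owner[s] = None
        return None
```

    Two four-sector allocations then coexist in one eight-sector line, occupying disjoint banks, and a further request fails cleanly once the line is full.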

    INFORMATION PROCESSING APPARATUS AND MEMORY ACCESS CONTROL METHOD

    Publication No.: US20230281129A1

    Publication Date: 2023-09-07

    Application No.: US18066061

    Filing Date: 2022-12-14

    Applicant: Fujitsu Limited

    Inventor: Ken IIZAWA

    Abstract: An information processing apparatus includes: calculation circuits that each execute deep learning processing; a shared memory that is shared by the calculation circuits; an access information memory that holds, for each of the calculation circuits, a write request for writing data generated in forward propagation processing by the calculation circuits to the shared memory, a read request for reading the data used in backward propagation processing by the calculation circuits from the shared memory, and a start time of backward propagation processing; and a processor that schedules data transfer between the calculation circuits and the shared memory based on the write request, the read request, and the start time of backward propagation processing, such that the data is transferred from the shared memory to a calculation circuit that executes backward propagation processing by the start time of backward propagation processing, and that accesses the shared memory based on a scheduling result.
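    The scheduling constraint, that each circuit's data must leave the shared memory before that circuit's backward-propagation start time, resembles earliest-deadline-first ordering. The sketch below assumes a single shared transfer channel of fixed bandwidth and a simple request format; neither is specified in the abstract.

```python
def schedule_transfers(read_requests):
    """Order read requests earliest deadline first.

    Each request is a dict with 'circuit', 'bytes', and 'bp_start'
    (the circuit's backward-propagation start time, arbitrary units).
    """
    return sorted(read_requests, key=lambda r: r["bp_start"])

def is_feasible(order, bandwidth):
    """Check that every transfer in the given order finishes by its
    circuit's bp_start, assuming one shared channel of this bandwidth."""
    t = 0.0
    for r in order:
        t += r["bytes"] / bandwidth       # transfer duration on the channel
        if t > r["bp_start"]:
            return False                  # data would arrive too late
    return True
```

    Ordering by deadline is the natural first choice here because, on a single channel, earliest-deadline-first meets every deadline whenever any ordering can.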