DYNAMIC ALLOCATION OF CACHE MEMORY AS RAM
    1.
    Invention Application

    Publication No.: WO2023033955A1

    Publication Date: 2023-03-09

    Application No.: PCT/US2022/038644

    Filing Date: 2022-07-28

    Applicant: APPLE INC.

    Abstract: An apparatus includes a cache controller circuit and a cache memory circuit that further includes cache memory having a plurality of cache lines. The cache controller circuit may be configured to receive a request to reallocate a portion of the cache memory circuit that is currently in use. This request may identify an address region corresponding to one or more of the cache lines. The cache controller circuit may be further configured, in response to the request, to convert the one or more cache lines to directly-addressable, random-access memory (RAM) by excluding the one or more cache lines from cache operations.
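The conversion described in the abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class names, the 64-byte line size, and the index-range request format are all assumptions:

```python
class CacheLine:
    """One cache line; `as_ram` marks it excluded from cache operations."""
    def __init__(self, index, line_size=64):  # 64-byte lines are an assumed size
        self.index = index
        self.as_ram = False
        self.data = bytearray(line_size)

class CacheController:
    def __init__(self, num_lines):
        self.lines = [CacheLine(i) for i in range(num_lines)]

    def convert_to_ram(self, first, last):
        """Handle a request to reallocate lines [first, last): the lines
        become directly-addressable RAM by being excluded from fills,
        lookups, and evictions."""
        for line in self.lines[first:last]:
            line.as_ram = True
        return list(range(first, last))

    def cacheable_indices(self):
        """Lines still participating in cache operations."""
        return [l.index for l in self.lines if not l.as_ram]

ctrl = CacheController(8)
converted = ctrl.convert_to_ram(2, 4)  # reallocate two in-use lines as RAM
```

The key idea is that conversion is a bookkeeping change in the controller: the excluded lines keep their storage but stop being candidates for cache operations.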

    ELIMINATING WRITE DISTURB FOR SYSTEM METADATA IN A MEMORY SUB-SYSTEM
    2.

    Publication No.: WO2023028166A1

    Publication Date: 2023-03-02

    Application No.: PCT/US2022/041406

    Filing Date: 2022-08-24

    Abstract: A plurality of memory units residing in a first location of a memory device is identified, wherein the first location of the memory device corresponds to a first layer of a plurality of layers of the memory device. It is determined whether a write disturb capability associated with the first location of the memory device satisfies a threshold criterion. Responsive to determining that the write disturb capability associated with the first location of the memory device satisfies the threshold criterion, a plurality of logical addresses associated with the plurality of memory units is remapped to a second location of the memory device, wherein the second location of the memory device corresponds to a second layer of the plurality of layers of the memory device, and wherein a write disturb capability associated with the second location of the memory device does not satisfy the threshold criterion.
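The remapping step can be sketched as below. The dict-based address map, the direction of the threshold comparison, and all names are assumptions for illustration; the patent leaves the exact criterion abstract:

```python
def remap_for_write_disturb(mapping, addrs, loc_a, loc_b,
                            disturb_capability, threshold):
    """If the write-disturb capability of loc_a (first layer) satisfies
    the threshold criterion while loc_b (second layer) does not, remap
    the given logical addresses from loc_a to loc_b.

    mapping: logical address -> (location, offset)
    disturb_capability: location -> measured capability value
    """
    if (disturb_capability[loc_a] >= threshold
            and disturb_capability[loc_b] < threshold):
        for off, addr in enumerate(addrs):
            mapping[addr] = (loc_b, off)
    return mapping

mapping = {0x10: ("layer0", 0), 0x11: ("layer0", 1)}
cap = {"layer0": 9, "layer1": 3}  # layer0 exceeds the assumed threshold of 5
remap_for_write_disturb(mapping, [0x10, 0x11], "layer0", "layer1", cap, 5)
```

Note that only the logical-to-physical mapping changes; the metadata itself moves to a layer that is less exposed to write disturb.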

    MEMORY MANAGEMENT METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM
    3.

    Publication No.: WO2022062524A1

    Publication Date: 2022-03-31

    Application No.: PCT/CN2021/102857

    Filing Date: 2021-06-28

    Inventor: 周轶刚

    Abstract: Embodiments of the present invention disclose a memory management method, apparatus, device, and storage medium. A target PTE is obtained from memory, the target PTE including first information that characterizes the data-access heat of a target memory page; based on the first information, a target storage location for the target memory page is determined; and the target memory page is swapped to the target storage location. Because the target PTE carries first information capable of characterizing data-access heat, the CPU can use the target PTE directly to swap the corresponding target memory page to a storage location that matches the page's access heat, without having to count page accesses by polling the page tables in memory through software. This makes the CPU's management of memory more efficient, reduces the CPU's workload, and improves the effectiveness of memory management and overall system performance.

    CACHE MEDIA MANAGEMENT
    4.
    发明申请

    公开(公告)号:WO2021142334A1

    公开(公告)日:2021-07-15

    申请号:PCT/US2021/012794

    申请日:2021-01-08

    Abstract: An exempt portion of a data cache of a memory sub-system is identified. The exempt portion includes a first set of data blocks comprising first data written by a host system to the data cache. A collected portion of the data cache of the memory sub-system is identified. The collected portion includes a second set of data blocks comprising second data written by the host system. A media management operation is performed on the collected portion of the data cache to relocate the second data to a storage area of the memory sub-system that is at a higher data density than the data cache, wherein the exempt portion of the data cache is exempt from the media management operation.

    OPERATIONS IN MEMORY
    5.
    发明申请

    公开(公告)号:WO2021041109A1

    公开(公告)日:2021-03-04

    申请号:PCT/US2020/046959

    申请日:2020-08-19

    Abstract: Apparatuses and methods can be related to performing operations in memory. Operations can be performed in the background while the memory is performing different operations. For example, comparison operations can be performed by the memory device while the memory device is reading data. The results of the comparison operations can be stored in registers of the memory device. The registers can be made accessible externally to the memory device.

    SYSTEMS AND METHODS FOR EFFICIENTLY MAPPING NEURAL NETWORKS TO PROGRAMMABLE LOGIC DEVICES

    公开(公告)号:WO2020073910A1

    公开(公告)日:2020-04-16

    申请号:PCT/CN2019/110069

    申请日:2019-10-09

    Abstract: Systems and methods are provided for efficiently mapping neural networks to programmable logic devices (PLDs). A method for mapping a neural network to an FPGA may include receiving a data structure defining an architecture of the PLD; receiving a data structure defining an architecture of the neural network; partitioning the architecture of the PLD into a plurality of layers, each layer having a starting primitive adjacent to a first off-chip buffer and an ending primitive adjacent to a second off-chip buffer; mapping the architecture of the neural network onto one or more of the plurality of layers such that a data transfer size is at least locally minimized; scheduling the mapped architecture of the neural network for execution on the one or more of the plurality of layers; and outputting an execution sequence based on the scheduled and mapped architecture of the neural network.

Patent Agency Ranking