-
Publication number: US20240354254A1
Publication date: 2024-10-24
Application number: US18759068
Application date: 2024-06-28
Applicant: Lodestar Licensing Group LLC
Inventor: Richard C. Murphy
IPC: G06F12/0864 , G06F9/30 , G06F12/0811 , G06F12/084 , G06F12/0895 , G11C7/10 , G11C11/4091 , G11C11/4096 , G11C11/4094 , G11C19/00
CPC classification number: G06F12/0864 , G06F9/30036 , G06F12/0811 , G06F12/084 , G06F12/0895 , G11C7/1006 , G11C11/4091 , G11C11/4096 , G06F2212/1012 , G06F2212/1044 , G06F2212/283 , G06F2212/6032 , G11C11/4094 , G11C19/00
Abstract: The present disclosure includes apparatuses and methods for a compute enabled cache. An example apparatus comprises a compute component, a memory, and a controller coupled to the memory. The controller is configured to operate on a block select and a subrow select as metadata to a cache line to control placement of the cache line in the memory to allow for a compute enabled cache.
-
Publication number: US11921642B2
Publication date: 2024-03-05
Application number: US17986781
Application date: 2022-11-14
Applicant: Rambus Inc.
Inventor: Trung Diep , Hongzhong Zheng
IPC: G06F12/1009 , G06F12/0811 , G06F12/0864 , G11C7/10
CPC classification number: G06F12/1009 , G06F12/0811 , G06F12/0864 , G11C7/1072 , G06F2212/283 , G06F2212/656
Abstract: A cache memory includes cache lines to store information. The stored information is associated with physical addresses that include first, second, and third distinct portions. The cache lines are indexed by the second portions of respective physical addresses associated with the stored information. The cache memory also includes one or more tables, each of which includes respective table entries that are indexed by the first portions of the respective physical addresses. The respective table entries in each of the one or more tables are to store indications of the second portions of respective physical addresses associated with the stored information.
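The three-portion address split and the tag-indexed tables described in this abstract can be sketched as follows. The bit widths and class names are illustrative assumptions for the sketch, not values taken from the patent:

```python
# Illustrative field widths (assumed, not from the patent):
OFFSET_BITS = 6   # third portion: byte offset within a cache line
INDEX_BITS = 7    # second portion: selects the cache set
# first portion: the remaining upper address bits (the tag)

def split_address(addr: int) -> tuple[int, int, int]:
    """Split a physical address into (first, second, third) portions."""
    third = addr & ((1 << OFFSET_BITS) - 1)
    second = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    first = addr >> (OFFSET_BITS + INDEX_BITS)
    return first, second, third

class PortionTable:
    """Table indexed by the first portion; each entry records which
    second portions (set indices) currently hold lines for that tag."""
    def __init__(self) -> None:
        self.entries: dict[int, set[int]] = {}

    def record(self, addr: int) -> None:
        first, second, _ = split_address(addr)
        self.entries.setdefault(first, set()).add(second)

    def second_portions(self, first: int) -> set[int]:
        return self.entries.get(first, set())
```

A consumer can then query the table by tag alone to learn which sets may hold matching lines, without probing every set.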
-
Publication number: US11809322B2
Publication date: 2023-11-07
Application number: US17472977
Application date: 2021-09-13
Applicant: Advanced Micro Devices, Inc.
Inventor: Vydhyanathan Kalyanasundharam , Kevin M. Lepak , Amit P. Apte , Ganesh Balakrishnan , Eric Christopher Morton , Elizabeth M. Cooper , Ravindra N. Bhargava
IPC: G06F12/0817 , G06F12/128 , G06F12/0811 , G06F12/0871 , G06F12/0831
CPC classification number: G06F12/0817 , G06F12/0811 , G06F12/0831 , G06F12/0871 , G06F12/128 , G06F2212/283 , G06F2212/604 , G06F2212/621
Abstract: Systems, apparatuses, and methods for maintaining a region-based cache directory are disclosed. A system includes multiple processing nodes, with each processing node including a cache subsystem. The system also includes a cache directory to help manage cache coherency among the different cache subsystems of the system. In order to reduce the number of entries in the cache directory, the cache directory tracks coherency on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. Accordingly, the system includes a region-based cache directory to track regions which have at least one cache line cached in any cache subsystem in the system. The cache directory includes a reference count in each entry to track the aggregate number of cache lines that are cached per region. If a reference count of a given entry goes to zero, the cache directory reclaims the given entry.
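The reference-count bookkeeping this abstract describes can be sketched in a few lines. The region size and line size below are illustrative assumptions; the patent does not fix these values:

```python
# Assumed geometry for the sketch (not from the patent):
LINES_PER_REGION = 64
LINE_SIZE = 64  # bytes

def region_of(line_addr: int) -> int:
    """Map a cache-line address to its region number."""
    return line_addr // (LINES_PER_REGION * LINE_SIZE)

class RegionDirectory:
    """Tracks, per region, the aggregate count of cached lines across all
    cache subsystems; an entry is reclaimed when its count reaches zero."""
    def __init__(self) -> None:
        self.ref_count: dict[int, int] = {}

    def on_line_cached(self, line_addr: int) -> None:
        r = region_of(line_addr)
        self.ref_count[r] = self.ref_count.get(r, 0) + 1

    def on_line_evicted(self, line_addr: int) -> None:
        r = region_of(line_addr)
        self.ref_count[r] -= 1
        if self.ref_count[r] == 0:
            del self.ref_count[r]  # reclaim the directory entry
```

Tracking one entry per 64-line region rather than one per line is what shrinks the directory; the count lets the directory know when no line of a region remains cached anywhere.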
-
Publication number: US20230205701A1
Publication date: 2023-06-29
Application number: US18117820
Application date: 2023-03-06
Applicant: Micron Technology, Inc.
Inventor: Laurent Isenegger , Robert M. Walker , Cagdas Dirik
IPC: G06F12/0862 , G06F12/0831 , G06F3/06
CPC classification number: G06F12/0862 , G06F12/0835 , G06F3/0683 , G06F3/061 , G06F3/0658 , G06F2212/283
Abstract: A method includes receiving, at a direct memory access (DMA) controller of a memory device, a first command from a first cache controller coupled to the memory device to prefetch first data from the memory device and sending the prefetched first data, in response to receiving the first command, to a second cache controller coupled to the memory device. The method can further include receiving a second command from a second cache controller coupled to the memory device to prefetch second data from the memory device, and sending the prefetched second data, in response to receiving the second command, to a third cache controller coupled to the memory device.
-
Publication number: US20190251032A1
Publication date: 2019-08-15
Application number: US16266997
Application date: 2019-02-04
Applicant: Linear Algebra Technologies Limited
Inventor: Richard Richmond
IPC: G06F12/0884 , G06F12/0895 , G06F12/0875 , G06F12/0842 , G06F12/0804 , G06F12/0811
CPC classification number: G06F12/0884 , G06F12/0804 , G06F12/0811 , G06F12/0842 , G06F12/0875 , G06F12/0895 , G06F2212/283 , G06F2212/455
Abstract: Cache memory mapping techniques are presented. A cache may contain an index configuration register. The register may configure the locations of an upper index portion and a lower index portion of a memory address. The portions may be combined to create a combined index. The configurable split-index addressing structure may be used, among other applications, to reduce the rate of cache conflicts occurring between multiple processors decoding a video frame in parallel.
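The split-index combination can be sketched as a pair of bit-field extractions followed by a concatenation. The register field layout (positions and widths passed as parameters here) is an assumption for illustration:

```python
def combined_index(addr: int,
                   lower_pos: int, lower_bits: int,
                   upper_pos: int, upper_bits: int) -> int:
    """Extract the lower and upper index fields from `addr` at the bit
    positions given by the index configuration register, then concatenate
    them into a single combined cache index."""
    lower = (addr >> lower_pos) & ((1 << lower_bits) - 1)
    upper = (addr >> upper_pos) & ((1 << upper_bits) - 1)
    return (upper << lower_bits) | lower
```

Because the upper field can be placed above the bits that vary between adjacent tiles of a frame, two processors working on neighboring tiles can be steered to disjoint cache sets instead of conflicting on the same ones.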
-
Publication number: US20190196986A1
Publication date: 2019-06-27
Application number: US15850247
Application date: 2017-12-21
Applicant: International Business Machines Corporation
Inventor: Shakti Kapoor
IPC: G06F13/16 , G06F12/0811 , G06F12/0888
CPC classification number: G06F13/161 , G06F3/061 , G06F3/0622 , G06F3/064 , G06F12/0802 , G06F12/0811 , G06F12/0888 , G06F13/1673 , G06F2212/283 , G06F2212/6046 , G06F2212/621
Abstract: Disclosed is a method, apparatus, and/or computer program product for reducing latency in a processor with regard to the execution of noncacheable operations. The method includes receiving noncacheable operations from one or both of a level 2 cache and a level 3 cache, sending the noncacheable operations to a noncacheable unit (NCU) associated with a core of the processor, executing the noncacheable operations by the NCU, and sending results of the executed noncacheable operations to a host bridge for output to an input/output device. The noncacheable operations bypass the core of the processor.
-
Publication number: US20190188137A1
Publication date: 2019-06-20
Application number: US15846008
Application date: 2017-12-18
Applicant: Advanced Micro Devices, Inc.
Inventor: Vydhyanathan Kalyanasundharam , Kevin M. Lepak , Amit P. Apte , Ganesh Balakrishnan , Eric Christopher Morton , Elizabeth M. Cooper , Ravindra N. Bhargava
IPC: G06F12/0817 , G06F12/128 , G06F12/0811 , G06F12/0831 , G06F12/0871
CPC classification number: G06F12/0817 , G06F12/0811 , G06F12/0831 , G06F12/0871 , G06F12/128 , G06F2212/283 , G06F2212/604 , G06F2212/621
Abstract: Systems, apparatuses, and methods for maintaining a region-based cache directory are disclosed. A system includes multiple processing nodes, with each processing node including a cache subsystem. The system also includes a cache directory to help manage cache coherency among the different cache subsystems of the system. In order to reduce the number of entries in the cache directory, the cache directory tracks coherency on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. Accordingly, the system includes a region-based cache directory to track regions which have at least one cache line cached in any cache subsystem in the system. The cache directory includes a reference count in each entry to track the aggregate number of cache lines that are cached per region. If a reference count of a given entry goes to zero, the cache directory reclaims the given entry.
-
Publication number: US20190087341A1
Publication date: 2019-03-21
Application number: US15709285
Application date: 2017-09-19
Applicant: INTEL CORPORATION
Inventor: Seth H. Pugsley , Manjunath Shevgoor , Christopher B. Wilkerson
IPC: G06F12/0862 , G06F9/30 , G06F12/0811
CPC classification number: G06F12/0862 , G06F9/30047 , G06F12/0806 , G06F12/0811 , G06F12/0897 , G06F2212/283 , G06F2212/602 , G06F2212/6028
Abstract: In one embodiment, a processor comprises a first prefetcher to generate prefetch requests to prefetch data into a mid-level cache; a second prefetcher to generate prefetch requests to prefetch data into the mid-level cache; and a prefetcher selector to select a prefetcher configuration for the first prefetcher and the second prefetcher based on at least one memory access metric, wherein the prefetcher configuration is to specify whether the first prefetcher is to be enabled to issue, to the mid-level cache, prefetch requests for data of a particular page and whether the second prefetcher is to be enabled to issue, to the mid-level cache, prefetch requests for data of the particular page.
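The per-page enable decision the abstract describes can be sketched as a policy function over a memory access metric. The choice of metric (prefetch accuracy), the thresholds, and the return shape are all assumptions for the sketch; the patent only requires that some metric drive the configuration:

```python
def select_prefetcher_config(accuracy: float) -> dict:
    """Decide, for one page, which of the two prefetchers may issue
    prefetch requests to the mid-level cache, based on an observed
    prefetch-accuracy metric (assumed policy, not from the patent)."""
    if accuracy > 0.75:
        # Metric looks good: let both prefetchers run on this page.
        return {"first_enabled": True, "second_enabled": True}
    if accuracy > 0.40:
        # Middling metric: keep only the first prefetcher active.
        return {"first_enabled": True, "second_enabled": False}
    # Poor metric: suppress prefetching for this page entirely.
    return {"first_enabled": False, "second_enabled": False}
```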
-
Publication number: US20180349029A1
Publication date: 2018-12-06
Application number: US15609569
Application date: 2017-05-31
Applicant: Micron Technology, Inc.
Inventor: Ali Mohammadzadeh , Jung Sheng Hoei , Dheeraj Srinivasan , Terry M. Grunzke
IPC: G06F3/06 , G06F12/0811
CPC classification number: G06F3/0619 , G06F3/0616 , G06F3/0656 , G06F3/0659 , G06F3/0679 , G06F12/0811 , G06F2212/283 , G06F2212/601
Abstract: The present disclosure relates to apparatuses and methods to control memory operations on buffers. An example apparatus includes a memory device and a host. The memory device includes a buffer and an array of memory cells, and the buffer includes a plurality of caches. The host includes a system controller, and the system controller is configured to control performance of a memory operation on data in the buffer. The memory operation is associated with data movement among the plurality of caches.
-
Publication number: US20180321864A1
Publication date: 2018-11-08
Application number: US15585808
Application date: 2017-05-03
Applicant: Western Digital Technologies, Inc.
Inventor: Shay Benisty
IPC: G06F3/06 , G06F1/32 , G06F12/0811
CPC classification number: G06F3/0625 , G06F1/3287 , G06F3/0634 , G06F3/0659 , G06F3/0688 , G06F12/0811 , G06F2212/283 , G06F2212/3042
Abstract: Systems and methods for processing non-contiguous submission and completion queues are disclosed. NVM Express (NVMe) implements a paired submission queue and completion queue mechanism, with host software on the host device placing commands into the submission queue. The submission and completion queues may be contiguous or non-contiguous in host device memory. Non-contiguous queues may be defined by a link to a list on the host device that lists the non-contiguous sections in memory. In practice, the memory device stores the list in one type of memory (such as DRAM) and the link in a different type of memory (such as always on memory or non-volatile memory). In this way, the link may be accessed in various modes (such as low power mode) in order to recreate the list in DRAM.
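How a list of non-contiguous sections stands in for one contiguous queue can be sketched as an index-to-address mapping. The 4 KiB segment size is an assumption; the 64-byte entry size matches an NVMe submission-queue entry:

```python
SEGMENT_SIZE = 4096  # bytes per non-contiguous section (assumed)
ENTRY_SIZE = 64      # bytes per NVMe submission-queue entry

def entry_address(segment_list: list[int], entry_index: int) -> int:
    """Map a logical queue-entry index to a physical address using the
    list of non-contiguous segment base addresses."""
    entries_per_segment = SEGMENT_SIZE // ENTRY_SIZE
    seg = entry_index // entries_per_segment
    off = (entry_index % entries_per_segment) * ENTRY_SIZE
    return segment_list[seg] + off
```

The abstract's point is that only the link to this list need survive in always-on or non-volatile memory; after a low-power exit, the device follows the link to rebuild the full `segment_list` in DRAM.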