-
Publication No.: US20240231655A9
Publication Date: 2024-07-11
Application No.: US17970132
Application Date: 2022-10-20
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts , Haojie Ye
IPC: G06F3/06
CPC classification number: G06F3/0632 , G06F3/0604 , G06F3/0622 , G06F3/0673
Abstract: Disclosed in some examples are systems, devices, machine-readable mediums, and methods for customizing an in-memory versioning mode for each memory location according to a predicted access behavior to optimize memory device performance. Usage data from a previous time period may be utilized along with policy rules to determine whether to configure a particular memory address in a zero-copy or direct-copy mode. For example, memory addresses that are read frequently may be configured in direct-copy mode to reduce the read latency penalty. This improves the functioning of the memory system by reducing read latency for memory addresses that are frequently read but written to less frequently, and by reducing write latency for memory locations that are frequently written to but not read as frequently.
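The mode-selection policy described in this abstract lends itself to a small illustration. The following is a minimal, hypothetical Python sketch of one way such a per-address decision could be expressed; the counter names, the read-bias threshold, and the policy rule itself are assumptions made for illustration, not details taken from the publication.

```python
from dataclasses import dataclass
from enum import Enum

class VersioningMode(Enum):
    ZERO_COPY = "zero_copy"      # cheaper writes, extra indirection on reads
    DIRECT_COPY = "direct_copy"  # cheaper reads, extra copy on writes

@dataclass
class UsageStats:
    reads: int   # reads observed in the previous time period
    writes: int  # writes observed in the previous time period

def select_mode(stats: UsageStats, read_bias: float = 2.0) -> VersioningMode:
    """Pick a versioning mode for one memory address from last-period usage.

    Hypothetical policy rule: addresses read much more often than they are
    written get direct-copy mode (lower read latency); write-heavy addresses
    keep zero-copy mode (lower write latency).
    """
    if stats.reads >= read_bias * max(stats.writes, 1):
        return VersioningMode.DIRECT_COPY
    return VersioningMode.ZERO_COPY

# Example: a read-mostly address vs. a write-mostly address
print(select_mode(UsageStats(reads=120, writes=10)))  # VersioningMode.DIRECT_COPY
print(select_mode(UsageStats(reads=5, writes=40)))    # VersioningMode.ZERO_COPY
```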
-
Publication No.: US20240134545A1
Publication Date: 2024-04-25
Application No.: US17970132
Application Date: 2022-10-19
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts , Haojie Ye
IPC: G06F3/06
CPC classification number: G06F3/0632 , G06F3/0604 , G06F3/0622 , G06F3/0673
Abstract: Disclosed in some examples are systems, devices, machine-readable mediums, and methods for customizing an in-memory versioning mode for each memory location according to a predicted access behavior to optimize memory device performance. Usage data from a previous time period may be utilized along with policy rules to determine whether to configure a particular memory address in a zero-copy or direct-copy mode. For example, memory addresses that are read frequently may be configured in direct-copy mode to reduce the read latency penalty. This improves the functioning of the memory system by reducing read latency for memory addresses that are frequently read but written to less frequently, and by reducing write latency for memory locations that are frequently written to but not read as frequently.
-
Publication No.: US11829627B2
Publication Date: 2023-11-28
Application No.: US17403366
Application Date: 2021-08-16
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts , Aliasger Tayeb Zaidy
CPC classification number: G06F3/0647 , G06F3/0625 , G06F3/0685 , G06N3/04 , G06N3/08
Abstract: Various embodiments provide for one or more processor instructions and memory instructions that enable a memory sub-system to predict a schedule for migrating data between memory devices, which can be part of a memory sub-system.
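As an illustration only, the sketch below shows one simple way a migration schedule could be derived from recent access counts. The CPC codes suggest a neural-network predictor, but this hypothetical example substitutes plain thresholds; all names and values are assumptions rather than the publication's method.

```python
# Hypothetical sketch: score each page's recent access rate and schedule
# migration between a fast memory device and a slow one at the next epoch.
def plan_migrations(access_counts, fast_pages, hot_threshold=100, cold_threshold=10):
    """Return (promote, demote) page sets for the next scheduling epoch.

    access_counts: dict page_id -> accesses observed in the last epoch
    fast_pages: set of page_ids currently resident in the fast memory device
    """
    promote = {p for p, c in access_counts.items()
               if c >= hot_threshold and p not in fast_pages}
    demote = {p for p in fast_pages
              if access_counts.get(p, 0) <= cold_threshold}
    return promote, demote

promote, demote = plan_migrations(
    access_counts={1: 250, 2: 4, 3: 80},
    fast_pages={2, 3},
)
print(promote, demote)  # {1} {2}
```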
-
Publication No.: US11797439B2
Publication Date: 2023-10-24
Application No.: US17931541
Application Date: 2022-09-12
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts
IPC: G06F12/02 , G06F12/0802
CPC classification number: G06F12/0292 , G06F12/0802
Abstract: Described apparatuses and methods balance memory-portion accessing. Some memory architectures are designed to accelerate memory accesses using schemes that may be at least partially dependent on memory access requests being distributed roughly equally across multiple memory portions of a memory. Examples of such memory portions include cache sets of cache memories and memory banks of multibank memories. Some code, however, may execute in a manner that concentrates memory accesses in a subset of the total memory portions, which can reduce memory responsiveness in these memory types. To account for such behaviors, described techniques can shuffle memory addresses based on a shuffle map to produce shuffled memory addresses. The shuffle map can be determined based on a count of the occurrences of a reference bit value at bit positions of the memory addresses. Using the shuffled memory address for memory requests can substantially balance the accesses across the memory portions.
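A minimal sketch of the general idea, assuming a simple bit-permutation shuffle map chosen from per-bit-position counts of a reference bit value; the function names, bit widths, and ranking heuristic are illustrative assumptions rather than the publication's actual scheme.

```python
def build_shuffle_map(addresses, addr_bits=16, index_bits=4, ref_bit=1):
    """Build a bit-permutation 'shuffle map' from observed addresses.

    Counts how often the reference bit value appears at each bit position,
    then places the most balanced bits (count closest to an even split) into
    the low-order positions that select the memory portion (bank/set index).
    """
    half = len(addresses) / 2
    counts = [sum((a >> b) & 1 == ref_bit for a in addresses) for b in range(addr_bits)]
    ranked = sorted(range(addr_bits), key=lambda b: abs(counts[b] - half))
    index_src = ranked[:index_bits]                       # most balanced bit positions
    rest = [b for b in range(addr_bits) if b not in index_src]
    return index_src + rest                               # shuffle_map[new_pos] = old_pos

def shuffle_address(addr, shuffle_map):
    """Apply the shuffle map: bit old_pos of addr moves to position new_pos."""
    out = 0
    for new_pos, old_pos in enumerate(shuffle_map):
        out |= ((addr >> old_pos) & 1) << new_pos
    return out

# Example: addresses that differ only in high bits get spread across the index bits.
addrs = [i << 8 for i in range(16)]
smap = build_shuffle_map(addrs)
print([shuffle_address(a, smap) % 16 for a in addrs])  # varied bank/set indices
```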
-
Publication No.: US11775370B2
Publication Date: 2023-10-03
Application No.: US18054019
Application Date: 2022-11-09
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts
CPC classification number: G06F11/073 , G06F11/1068 , G06N3/08
Abstract: Methods, systems, and apparatuses related to a memory fault map for an accelerated neural network. An artificial neural network can be accelerated by operating memory outside of the memory's baseline operating parameters. Doing so, however, often increases the amount of faulty data locations in the memory. Through creation and use of the disclosed fault map, however, artificial neural networks can be trained more quickly and using less bandwidth, which reduces the neural networks' sensitivity to these additional faulty data locations. Hardening a neural network to these memory faults allows the neural network to operate effectively even when using memory outside of that memory's baseline operating parameters.
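For illustration, the sketch below shows one possible form a fault map could take and one possible use of it (steering allocations away from known-faulty words). The publication's approach of hardening the network during training is not reproduced here, and every name and address in the sketch is hypothetical.

```python
# Hypothetical sketch: a fault map as a set of faulty word addresses, used to
# avoid placing neural-network weights in locations known to be unreliable
# when the memory is run outside its baseline operating parameters.
class FaultMap:
    def __init__(self, faulty_addresses):
        self.faulty = set(faulty_addresses)

    def is_faulty(self, addr):
        return addr in self.faulty

    def allocate(self, n_words, region_start, region_end):
        """Return n_words non-faulty addresses from [region_start, region_end)."""
        good = [a for a in range(region_start, region_end) if not self.is_faulty(a)]
        if len(good) < n_words:
            raise MemoryError("not enough reliable locations in region")
        return good[:n_words]

fmap = FaultMap(faulty_addresses={0x1002, 0x1007})
print([hex(a) for a in fmap.allocate(4, 0x1000, 0x1010)])  # skips the known-faulty words
```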
-
Publication No.: US11755488B2
Publication Date: 2023-09-12
Application No.: US17491119
Application Date: 2021-09-30
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts
IPC: G06F12/0862 , G06F3/06
CPC classification number: G06F12/0862 , G06F3/064 , G06F3/0611 , G06F3/0622 , G06F3/0649 , G06F3/0688 , G06F2212/6024 , G06F2212/6026
Abstract: Systems, apparatuses, and methods for predictive memory access are described. Memory control circuitry instructs a memory array to read a data block from, or write the data block to, a location targeted by a memory access request. It determines memory access information including a data value correlation parameter, determined based on data bits used to indicate a raw data value in the data block, and/or an inter-demand delay correlation parameter, determined based on a demand time of the memory access request. Based on the data value correlation parameter and/or the inter-demand delay correlation parameter, it predicts that read access to another location in the memory array will subsequently be demanded by another memory access request, and instructs the memory array to output another data block stored at the other location to a different memory level that provides faster data access speed before the other memory access request is received.
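The following is a minimal, hypothetical sketch of a correlation-table prefetcher in the spirit of this abstract: it keys on an observed data value and a coarse inter-demand delay bucket and returns an address that could be prefetched into a faster memory level. The class name, table structure, and bucketing are assumptions made for illustration.

```python
# Hypothetical sketch: remember which address tended to be demanded after a
# given (data value, coarse inter-demand delay) observation, and suggest it
# for prefetching the next time the same observation recurs.
class CorrelationPrefetcher:
    def __init__(self, delay_bucket_us=100):
        self.table = {}             # (value, delay_bucket) -> address demanded next
        self.last_key = None
        self.last_time = None
        self.delay_bucket_us = delay_bucket_us

    def on_demand(self, addr, value, now_us):
        delay = 0 if self.last_time is None else now_us - self.last_time
        bucket = delay // self.delay_bucket_us
        if self.last_key is not None:
            self.table[self.last_key] = addr     # learn: previous observation -> this addr
        prediction = self.table.get((value, bucket))
        self.last_key = (value, bucket)
        self.last_time = now_us
        return prediction                        # address to prefetch, or None

pf = CorrelationPrefetcher()
pf.on_demand(0x100, value=7, now_us=0)    # nothing learned yet
pf.on_demand(0x200, value=9, now_us=60)   # learns: (7, bucket 0) is followed by 0x200
print(hex(pf.on_demand(0x100, value=7, now_us=130)))  # 0x200 -> prefetch candidate
```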
-
Publication No.: US11693593B2
Publication Date: 2023-07-04
Application No.: US17082947
Application Date: 2020-10-28
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts , Sean Stephen Eilert
CPC classification number: G06F3/0659 , G06F3/065 , G06F3/0619 , G06F3/0656 , G06F3/0673 , G06F12/06 , G06F2212/1008
Abstract: Various embodiments enable versioning of data stored on a memory device, where the versioning allows the memory device to maintain different versions of data within a set of physical memory locations (e.g., a row) of the memory device. In particular, some embodiments provide for a memory device or a memory sub-system that uses versioning of stored data to facilitate a rollback operation/behavior, a checkpoint operation/behavior, or both. Additionally, some embodiments provide for a transactional memory device or a transactional memory sub-system that uses versioning of stored data to enable rollback of a memory transaction, commitment of a memory transaction, or handling of a read or write command associated with a memory transaction.
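As a rough illustration of versioning a row to support rollback and checkpoint behavior, the sketch below keeps a committed copy alongside a working copy. The class and method names are hypothetical, and the two-copy layout is only one possible realization of the behavior the abstract describes.

```python
# Hypothetical sketch: a physical row keeping two versions of its data so the
# device can either commit (checkpoint) the working copy or roll it back.
class VersionedRow:
    def __init__(self, data: bytes):
        self.committed = bytearray(data)   # last checkpointed version
        self.working = bytearray(data)     # version seen by normal reads/writes

    def write(self, offset: int, value: int):
        self.working[offset] = value

    def read(self, offset: int) -> int:
        return self.working[offset]

    def checkpoint(self):
        """Commit: the working version becomes the new recovery point."""
        self.committed[:] = self.working

    def rollback(self):
        """Abort: discard the working version and restore the checkpoint."""
        self.working[:] = self.committed

row = VersionedRow(b"\x00" * 4)
row.write(0, 0xAA)
row.rollback()
print(row.read(0))   # 0 -- the uncommitted write was discarded
row.write(1, 0xBB)
row.checkpoint()
print(row.read(1))   # 187 -- committed
```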
-
Publication No.: US20230114921A1
Publication Date: 2023-04-13
Application No.: US18054019
Application Date: 2022-11-09
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts
Abstract: Methods, systems, and apparatuses related to a memory fault map for an accelerated neural network. An artificial neural network can be accelerated by operating memory outside of the memory's baseline operating parameters. Doing so, however, often increases the amount of faulty data locations in the memory. Through creation and use of the disclosed fault map, however, artificial neural networks can be trained more quickly and using less bandwidth, which reduces the neural networks' sensitivity to these additional faulty data locations. Hardening a neural network to these memory faults allows the neural network to operate effectively even when using memory outside of that memory's baseline operating parameters.
-
Publication No.: US11507516B2
Publication Date: 2022-11-22
Application No.: US16997811
Application Date: 2020-08-19
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts , Joseph Thomas Pawlowski
IPC: G06F12/00 , G06F13/00 , G06F13/28 , G06F12/0895 , G06F12/0862
Abstract: Described apparatuses and methods partition a cache memory based, at least in part, on a metric indicative of prefetch performance. The amount of cache memory allocated for metadata related to prefetch operations versus cache storage can be adjusted based on operating conditions. Thus, the cache memory can be partitioned into a first portion allocated for metadata pertaining to an address space (prefetch metadata) and a second portion allocated for data associated with the address space (cache data). The amount of cache memory allocated to the first portion can be increased under workloads that are suitable for prefetching and decreased otherwise. The first portion may include one or more cache units, cache lines, cache ways, cache sets, or other resources of the cache memory.
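A minimal sketch of how the partition between prefetch metadata and cache data might be adjusted from a prefetch-performance metric; the thresholds, way counts, and function name are illustrative assumptions, not values from the publication.

```python
# Hypothetical sketch: repartition a set-associative cache's ways between
# prefetch metadata and ordinary cache data based on measured prefetch accuracy.
def repartition(total_ways, metadata_ways, prefetch_accuracy,
                grow_above=0.6, shrink_below=0.3,
                min_metadata_ways=1, max_metadata_ways=None):
    """Return the number of ways to reserve for prefetch metadata next interval."""
    if max_metadata_ways is None:
        max_metadata_ways = total_ways // 2
    if prefetch_accuracy > grow_above:
        metadata_ways = min(metadata_ways + 1, max_metadata_ways)  # workload prefetches well
    elif prefetch_accuracy < shrink_below:
        metadata_ways = max(metadata_ways - 1, min_metadata_ways)  # give space back to cache data
    return metadata_ways

ways = 2
for accuracy in (0.7, 0.8, 0.1):
    ways = repartition(total_ways=16, metadata_ways=ways, prefetch_accuracy=accuracy)
    print(ways)   # 3, 4, 3
```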
-
Publication No.: US11507443B2
Publication Date: 2022-11-22
Application No.: US16846259
Application Date: 2020-04-10
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts
Abstract: Methods, systems, and apparatuses related to a memory fault map for an accelerated neural network. An artificial neural network can be accelerated by operating memory outside of the memory's baseline operating parameters. Doing so, however, often increases the amount of faulty data locations in the memory. Through creation and use of the disclosed fault map, however, artificial neural networks can be trained more quickly and using less bandwidth, which reduces the neural networks' sensitivity to these additional faulty data locations. Hardening a neural network to these memory faults allows the neural network to operate effectively even when using memory outside of that memory's baseline operating parameters.