-
21.
Publication No.: US20230229556A1
Publication Date: 2023-07-20
Application No.: US17897048
Application Date: 2022-08-26
Applicant: Micron Technology, Inc.
Inventor: Patrick Estep , Steve Pawlowski , Emanuele Confalonieri , Nicola Del Gatto , Paolo Amato
IPC: G06F11/10
CPC classification number: G06F11/1076 , G06F2211/1009
Abstract: There are provided methods and systems for improving reliability, availability, and serviceability (RAS) features of a memory device. For example, there is provided a system that includes a memory and a memory side cache. The system further includes a processor that is configured to minimize accesses to the memory by executing certain operations. The operations can include computing a new parity based on old data, new data, and an old parity in response to data from the memory side cache being written to the memory.
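The read-modify-write parity update described in the abstract can be sketched in a few lines. The sketch below assumes byte-wise XOR parity (the abstract does not name the parity scheme), and all names are illustrative:

```python
def update_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Compute the new parity for a partial-stripe write.

    With XOR parity, the new parity follows from the old data, the new data,
    and the old parity alone, so the other data blocks in the stripe never
    have to be read from memory.
    """
    assert len(old_data) == len(new_data) == len(old_parity)
    return bytes(od ^ nd ^ op for od, nd, op in zip(old_data, new_data, old_parity))


# Writing back a dirty cache line: only the old data, old parity, and the
# new data are touched, minimizing accesses to the memory device.
old_data   = bytes([0x12, 0x34])
new_data   = bytes([0x56, 0x78])
old_parity = bytes([0xAB, 0xCD])
new_parity = update_parity(old_data, new_data, old_parity)
```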
-
22.
Publication No.: US20230056665A1
Publication Date: 2023-02-23
Application No.: US17405217
Application Date: 2021-08-18
Applicant: Micron Technology, Inc.
Inventor: Patrick Estep , Tony M. Brewer
Abstract: Devices and techniques for providing receipts for event messages in a processor are described herein. A system includes multiple memory-compute nodes coupled to one another over a scale fabric; a set of registers; and event manager hardware circuitry to: receive an event message corresponding to an event, the event associated with an event mode; track a counter value representing a number of received event messages related to the event, the counter value stored in the set of registers; compare the number of received event messages to a trigger value; and in response to the number of received event messages equaling the trigger value: use an atomic operation to reset the counter value in the set of registers while maintaining the event mode; and alert a thread of the event.
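As a software model of the counter/trigger behavior (the patent describes hardware circuitry), the sketch below uses a lock to stand in for the atomic operation; all names are illustrative:

```python
import threading

class EventManager:
    """Count event messages and alert a waiting thread once the count
    reaches the trigger value, then reset the count but keep the mode."""

    def __init__(self, trigger_value: int):
        self.trigger_value = trigger_value
        self.counter = 0                    # stands in for the register holding the count
        self.event_mode = "count"           # preserved across resets
        self._lock = threading.Lock()       # models the atomic read-modify-write
        self.triggered = threading.Event()  # models alerting the waiting thread

    def receive_event_message(self):
        with self._lock:                    # update, compare, and reset as one step
            self.counter += 1
            if self.counter == self.trigger_value:
                self.counter = 0            # reset the counter, keep the event mode
                self.triggered.set()        # alert the thread waiting on this event


mgr = EventManager(trigger_value=3)
for _ in range(3):
    mgr.receive_event_message()
print(mgr.triggered.is_set())   # True; counter is back to 0, mode unchanged
```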
-
23.
Publication No.: US20220318162A1
Publication Date: 2022-10-06
Application No.: US17240492
Application Date: 2021-04-26
Applicant: Micron Technology, Inc.
Inventor: Bryan Hornung , Tony M. Brewer , Douglas Vanesko , Patrick Estep
Abstract: Linear interpolation is performed within a memory system. The memory system receives a floating-point index into an integer-indexed memory array. The memory system accesses the two values of the two adjacent integer indices, performs the linear interpolation, and provides the resulting interpolated value. In many system architectures, the critical limitation on system performance is the data transfer rate between memory and processing elements. Accordingly, reducing the amount of data transferred improves overall system performance and reduces power consumption.
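A minimal sketch of the interpolation the abstract describes, with a Python list standing in for the integer-indexed array in the memory system; names are illustrative:

```python
def interpolated_read(array, index: float) -> float:
    """Read array[index] for a non-integer index by linearly interpolating
    between the two adjacent integer-indexed elements.

    Performing this inside the memory system returns a single value instead
    of transferring both neighboring elements to the processor.
    """
    lo = int(index)                       # lower adjacent integer index
    hi = min(lo + 1, len(array) - 1)      # upper adjacent integer index
    frac = index - lo                     # fractional distance between the two
    return array[lo] * (1.0 - frac) + array[hi] * frac


# Example: index 2.25 blends elements 2 and 3 with weights 0.75 and 0.25.
samples = [10.0, 20.0, 30.0, 40.0]
value = interpolated_read(samples, 2.25)   # 32.5
```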
-
24.
Publication No.: US20220206804A1
Publication Date: 2022-06-30
Application No.: US17405371
Application Date: 2021-08-18
Applicant: Micron Technology, Inc.
Inventor: Douglas Vanesko , Bryan Hornung , Patrick Estep
IPC: G06F9/30
Abstract: Various examples are directed to systems and methods for executing a loop in a reconfigurable compute fabric. A first flow controller may initiate a first thread at a first synchronous flow to execute a first portion of a first iteration of the loop. A second flow controller may receive a first asynchronous message instructing the second flow controller to initiate a first thread at a second synchronous flow to execute a second portion of the first iteration. The second flow controller may determine that the first iteration of the loop is the last iteration of the loop to be executed and initiate the first thread at the second synchronous flow with a last iteration flag set.
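As a rough software analogy only (the patent concerns hardware flow controllers in a reconfigurable fabric), the last-iteration handling might look like the toy model below; all class, field, and message names are hypothetical:

```python
from queue import Queue

class SecondFlowController:
    """Receives asynchronous messages and starts the second portion of each
    iteration, flagging the thread that runs the final iteration."""

    def __init__(self, loop_count: int):
        self.loop_count = loop_count
        self.inbox = Queue()                 # asynchronous message channel

    def run(self):
        while True:
            msg = self.inbox.get()           # async message for one iteration
            iteration = msg["iteration"]
            last = iteration == self.loop_count - 1
            self.start_thread(iteration, last_iteration=last)
            if last:
                break

    def start_thread(self, iteration: int, last_iteration: bool):
        # Stand-in for dispatching a thread on the second synchronous flow;
        # the last-iteration flag lets that flow perform its wind-down work.
        print(f"iteration {iteration}, last={last_iteration}")


class FirstFlowController:
    """Starts the first portion of each iteration and notifies the second
    flow controller with an asynchronous message."""

    def __init__(self, loop_count: int, second: SecondFlowController):
        self.loop_count = loop_count
        self.second = second

    def run(self):
        for i in range(self.loop_count):
            # the first synchronous flow would execute the first portion here
            self.second.inbox.put({"iteration": i})


second = SecondFlowController(loop_count=4)
FirstFlowController(loop_count=4, second=second).run()
second.run()
```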
-
25.
Publication No.: US20240192955A1
Publication Date: 2024-06-13
Application No.: US18426237
Application Date: 2024-01-29
Applicant: Micron Technology, Inc.
Inventor: Douglas Vanesko , Bryan Hornung , Patrick Estep
CPC classification number: G06F9/30065 , G06F9/30072 , G06F9/30087 , G06F9/3009 , G06F15/7867 , G06F15/825
Abstract: Various examples are directed to systems and methods for executing a loop in a reconfigurable compute fabric. A first flow controller may initiate a first thread at a first synchronous flow to execute a first portion of a first iteration of the loop. A second flow controller may receive a first asynchronous message instructing the second flow controller to initiate a first thread at a second synchronous flow to execute a second portion of the first iteration. The second flow controller may determine that the first iteration of the loop is the last iteration of the loop to be executed and initiate the first thread at the second synchronous flow with a last iteration flag set.
-
26.
Publication No.: US20240192892A1
Publication Date: 2024-06-13
Application No.: US18531267
Application Date: 2023-12-06
Applicant: Micron Technology, Inc.
Inventor: Patrick Estep , Sean S. Eilert , Ameen D. Akel
IPC: G06F3/06
CPC classification number: G06F3/0659 , G06F3/0604 , G06F3/0689
Abstract: Systems, apparatuses, and methods related to data reconstruction based on queue depth comparison are described. To avoid accessing the "congested" channel, a read command directed to the "congested" channel can be executed by accessing the other, relatively "idle" channels and utilizing data read from the "idle" channels to reconstruct the data corresponding to the read command.
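One way the reconstruction could work, assuming RAID-style XOR parity across channels (the abstract does not specify the coding scheme); names and values below are illustrative:

```python
def reconstruct_read(channels, congested: int, offset: int) -> int:
    """Rebuild the value held by the congested channel by XOR-ing the
    corresponding values from every other channel, parity included.

    Assumes XOR parity across channels, so any single channel's data is
    the XOR of all of the others.
    """
    value = 0
    for ch, data in enumerate(channels):
        if ch != congested:
            value ^= data[offset]   # only the relatively "idle" channels are accessed
    return value


# Queue-depth comparison: channel 2 has the deepest command queue, so its
# data is reconstructed instead of queuing another read behind it.
channels = [
    [0x11, 0x22],                                  # channel 0
    [0x33, 0x44],                                  # channel 1
    [0x55, 0x66],                                  # channel 2 ("congested")
    [0x11 ^ 0x33 ^ 0x55, 0x22 ^ 0x44 ^ 0x66],      # parity channel
]
queue_depths = [1, 0, 9, 2]
congested = max(range(len(queue_depths)), key=queue_depths.__getitem__)
assert reconstruct_read(channels, congested, offset=0) == 0x55
```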
-
27.
Publication No.: US11907718B2
Publication Date: 2024-02-20
Application No.: US17405371
Application Date: 2021-08-18
Applicant: Micron Technology, Inc.
Inventor: Douglas Vanesko , Bryan Hornung , Patrick Estep
CPC classification number: G06F9/30065 , G06F9/3009 , G06F9/30072 , G06F9/30087 , G06F15/7867 , G06F15/825
Abstract: Various examples are directed to systems and methods for executing a loop in a reconfigurable compute fabric. A first flow controller may initiate a first thread at a first synchronous flow to execute a first portion of a first iteration of the loop. A second flow controller may receive a first asynchronous message instructing the second flow controller to initiate a first thread at a second synchronous flow to execute a second portion of the first iteration. The second flow controller may determine that the first iteration of the loop is the last iteration of the loop to be executed and initiate the first thread at the second synchronous flow with a last iteration flag set.
-
28.
Publication No.: US20230280940A1
Publication Date: 2023-09-07
Application No.: US17684129
Application Date: 2022-03-01
Applicant: Micron Technology, Inc.
Inventor: Nicola Del Gatto , Emanuele Confalonieri , Paolo Amato , Patrick Estep , Stephen S. Pawlowski
IPC: G06F3/06 , G06F12/0864
CPC classification number: G06F3/0659 , G06F3/0656 , G06F3/0689 , G06F3/0622 , G06F3/0619 , G06F12/0864
Abstract: A memory controller can include a front end portion configured to interface with a host, a central controller portion configured to manage data, and a back end portion configured to interface with memory devices. The memory controller can include interface management circuitry coupled to a cache and a memory device. The memory controller can receive, by the interface management circuitry, a first signal indicative of data associated with a memory access request from a host. The memory controller can transmit a second signal indicative of the data to cache the data in a first location in the cache. The memory controller can transmit a third signal indicative of the data to cache the data in a second location in the cache.
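As a very loose software stand-in for the signal flow (the patent describes controller hardware; the location-selection scheme and all names below are purely illustrative):

```python
class CacheModel:
    """Data arriving with a host memory access request is cached twice,
    in a first and a second cache location."""

    def __init__(self):
        self.lines = {}   # cache location -> data

    def handle_host_request(self, address: int, data: bytes):
        # first signal: data associated with the memory access request
        first_location = ("first", address)
        second_location = ("second", address)
        self.lines[first_location] = data    # second signal: first cache location
        self.lines[second_location] = data   # third signal: second cache location


cache = CacheModel()
cache.handle_host_request(0x1000, b"\xde\xad\xbe\xef")
```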
-
29.
Publication No.: US20230068168A1
Publication Date: 2023-03-02
Application No.: US17405738
Application Date: 2021-08-18
Applicant: Micron Technology, Inc.
Inventor: Patrick Estep
Abstract: Devices and techniques for neural network transpose layer removal are described herein. A neural network model that includes matrices of synaptic weights arranged in several layers is obtained. The neural network model is inspected to determine whether a transposition of a matrix to a fully connected layer exists. If there is a matrix transposition, then a modified neural network model is created by changing values of the fully connected layer to correspond to values in the matrix prior to the transposition and eliminating the transposition. The modified neural network model can then be provided to computer hardware to perform inference operations.
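A numpy sketch of one way a transpose could be folded into the fully connected layer's weights, assuming a row-major flatten between the transpose and the layer; the permutation logic is an illustration of the idea, not the patented method itself:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols, out_features = 3, 4, 5

# Original model: the input X is transposed, flattened, and fed to a
# fully connected layer with weight matrix W.
X = rng.random((rows, cols))
W = rng.random((out_features, rows * cols))
original_output = W @ X.T.flatten()

# Fold the transpose into the layer: perm[k] is the position in the direct
# flatten of X of the element that the transposed flatten places at k.
# Reordering W's columns by the inverse permutation lets the layer consume
# the direct flatten, so the transposition can be eliminated.
perm = np.arange(rows * cols).reshape(rows, cols).T.flatten()
W_folded = W[:, np.argsort(perm)]
modified_output = W_folded @ X.flatten()

# The modified layer produces the same result with the transpose removed.
assert np.allclose(original_output, modified_output)
```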
-
30.
Publication No.: US20230058935A1
Publication Date: 2023-02-23
Application No.: US17405646
Application Date: 2021-08-18
Applicant: Micron Technology, Inc.
Inventor: Tony Brewer , Patrick Estep , Skyler Arron Windh
Abstract: A hybrid threading processor (HTP) supports thread creation by executing an instruction that indicates an amount of storage space to reserve for return values. Before a thread is created, the indicated amount of space is reserved. The newly created child thread sends a return packet back to the parent thread when the child thread completes. The child thread writes its return information into the reserved space and waits for the parent thread to execute a thread join instruction. The thread join instruction takes the returned information from the reserved space and transfers it to the parent thread's register state. The reserved space is released once the child thread is joined. Using a configurable amount of space for each child thread may allow for more child threads to be executed simultaneously.
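A toy software model of the reserve/write/join flow (the HTP is hardware; Python threading here is only an analogy), with all names hypothetical:

```python
import threading

class ThreadReturnArea:
    """Return-value space is reserved before a child thread is created,
    the child writes its results there, and a join transfers them to the
    parent and releases the space."""

    def __init__(self):
        self.next_id = 0
        self.slots = {}    # child id -> list reserved for return values
        self.done = {}     # child id -> event standing in for the return packet

    def create_thread(self, target, num_return_values: int):
        child_id = self.next_id
        self.next_id += 1
        self.slots[child_id] = [None] * num_return_values  # reserve before creation
        self.done[child_id] = threading.Event()
        threading.Thread(target=self._run_child, args=(child_id, target)).start()
        return child_id

    def _run_child(self, child_id, target):
        self.slots[child_id][:] = target()   # child writes into its reserved space
        self.done[child_id].set()            # signals the parent, like the return packet

    def join_thread(self, child_id):
        self.done[child_id].wait()           # the parent's thread-join instruction
        values = list(self.slots[child_id])  # move results into the parent's state
        del self.slots[child_id]             # release the reserved space for reuse
        del self.done[child_id]
        return values


area = ThreadReturnArea()
child = area.create_thread(lambda: [7, 42], num_return_values=2)
print(area.join_thread(child))   # [7, 42]
```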