CLOSED-LOOP EQUALIZATION METHODS
    Invention Publication

    Publication Number: US20240273016A1

    Publication Date: 2024-08-15

    Application Number: US18439673

    Application Date: 2024-02-12

    CPC classification number: G06F12/0246

    Abstract: Methods, systems, and devices for closed-loop equalization methods are described. A memory device may receive, from a host device, a request to perform an equalization operation on a signal. The signal may include a pattern corresponding to the equalization operation. The memory device may receive the signal from the host device. The memory device may perform the equalization operation on the signal to determine one or more filter parameters for filtering the signal. The equalization operation may include filtering the signal and measuring one or more quality metrics for the filtered signal. The memory device may transmit, to the host device, an indication of the one or more quality metrics for the filtered signal. The memory device may determine whether the one or more filter parameters are stored at the memory device, the one or more filter parameters associated with a first mode corresponding to a first speed for communications.
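The closed-loop exchange the abstract describes can be sketched as follows. The filter structure (a short FIR), the quality metric (mean squared error against the known training pattern), and every name below are illustrative assumptions; the patent does not specify them:

```python
# Hypothetical sketch: filter a received training pattern with candidate
# tap sets and report the best taps by a quality metric (MSE).

def fir_filter(signal, taps):
    """Apply a simple FIR filter to a sampled signal."""
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, t in enumerate(taps):
            if i - j >= 0:
                acc += t * signal[i - j]
        out.append(acc)
    return out

def quality_metric(filtered, reference):
    """Mean squared error between the filtered signal and the known pattern."""
    return sum((f - r) ** 2 for f, r in zip(filtered, reference)) / len(reference)

def equalize(received, reference, candidate_taps):
    """Try each candidate tap set; return the best taps and their metric."""
    best = None
    for taps in candidate_taps:
        mse = quality_metric(fir_filter(received, taps), reference)
        if best is None or mse < best[1]:
            best = (taps, mse)
    return best

reference = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0]  # training pattern
received = [0.5 * r for r in reference]  # attenuated channel, no ISI
taps, mse = equalize(received, reference, [[1.0], [2.0], [0.5]])
```

In a real device the loop would iterate: the host sends the pattern, the memory device reports the metric, and the next candidate taps are chosen from that feedback.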

    WORKLOAD-BASED SCAN OPTIMIZATION
    Invention Publication

    Publication Number: US20240248646A1

    Publication Date: 2024-07-25

    Application Number: US18623881

    Application Date: 2024-04-01

    Abstract: A method performed by a processing device receives a plurality of write operation requests, where each of the write operation requests specifies a respective one of the memory units; identifies one or more operating characteristic values, where each operating characteristic value reflects one or more memory access operations performed on a memory device; and determines whether the operating characteristic values satisfy one or more threshold criteria. Responsive to determining that the operating characteristic values satisfy the one or more threshold criteria, the method performs a plurality of write operations, where each of the write operations writes data to the respective one of the memory units, and performs a multiple-read scan operation subsequent to the plurality of write operations, where the multiple-read scan operation reads data from each of the memory units.
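The gating logic above can be sketched in a few lines. The characteristic (program/erase cycles), the threshold value, and all names are hypothetical placeholders, not details from the patent:

```python
# Illustrative sketch: perform queued writes, then a multiple-read scan
# over every written unit, only when threshold criteria are satisfied.

def thresholds_met(characteristics, thresholds):
    """True if every observed characteristic meets its threshold criterion."""
    return all(characteristics[k] >= thresholds[k] for k in thresholds)

def handle_writes(memory, write_requests, characteristics, thresholds):
    """Write each requested unit, then read each unit back as the scan."""
    scanned = []
    if thresholds_met(characteristics, thresholds):
        for unit, data in write_requests:
            memory[unit] = data              # one write operation per request
        for unit, _ in write_requests:       # multiple-read scan afterwards
            scanned.append((unit, memory[unit]))
    return scanned

memory = {}
requests = [(0, b"a"), (1, b"b")]
result = handle_writes(memory, requests,
                       {"program_erase_cycles": 1200},
                       {"program_erase_cycles": 1000})
```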

    TECHNIQUES FOR CONCURRENT HOST SYSTEM ACCESS AND DATA FOLDING

    Publication Number: US20240220144A1

    Publication Date: 2024-07-04

    Application Number: US18534363

    Application Date: 2023-12-08

    CPC classification number: G06F3/064 G06F3/0611 G06F3/0673 G06F12/0292

    Abstract: Methods, systems, and devices for techniques for concurrent host system access and data folding are described. A memory system may determine to transfer (e.g., fold) data from a set of source data blocks to a set of destination data blocks. The memory system may receive a command to access a first source data block of the set of source data blocks concurrent with the data transfer. The memory system may generate a first order for transferring respective portions of the data that is based on a second order associated with a sequential read of the data from the set of destination data blocks. Based on the accessing the first source data block being concurrent with the data transfer, the first order may exclude a first portion of the data from the first source data block such that the data transfer and the accessing may be concurrently performed.
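The ordering idea above — follow the destination's sequential-read order but skip portions from a source block the host is accessing — can be sketched as below. The portion/block mapping and all names are assumptions for illustration:

```python
# Hypothetical sketch: derive the fold-transfer order (first order) from
# the destination's sequential-read order (second order), excluding
# portions that come from the concurrently accessed source block.

def fold_order(sequential_read_order, portion_source, accessed_block):
    """Return the transfer order, skipping portions from the accessed block."""
    return [p for p in sequential_read_order
            if portion_source[p] != accessed_block]

# Four data portions; portions 1 and 3 originate in source block "B".
read_order = [0, 1, 2, 3]                      # second order
source_of = {0: "A", 1: "B", 2: "A", 3: "B"}
first_order = fold_order(read_order, source_of, accessed_block="B")
```

The excluded portions would be folded later, once the host access to block "B" completes, which is what lets the transfer and the access run concurrently.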

    ENHANCED READ PERFORMANCE FOR MEMORY DATA WORD DECODING USING POWER ALLOCATION BASED ON ERROR PATTERN DETECTION

    Publication Number: US20240176701A1

    Publication Date: 2024-05-30

    Application Number: US18519458

    Application Date: 2023-11-27

    CPC classification number: G06F11/1068 G06F11/0757 G06F11/0772

    Abstract: Methods, systems, and devices to enhance read performance for memory data word decoding using power allocation based on error pattern detection in both QLC and TLC products are described. A plurality of data words may be processed using a first decoder engine of a decoder of a memory device according to a first power setting. The decoder may detect a pattern of errors in the plurality of data words. The decoder may further communicate a status signal based on detecting the pattern of errors. A resource manager may allocate, based on the status signal, a second amount of power credits to the decoder. The decoder may process a portion of the plurality of data words using a second decoder engine according to the second amount of power credits.
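The escalation path in the abstract can be sketched as follows. What counts as an "error pattern" (here, consecutive words exceeding an error-count threshold), the engine names, and the credit amounts are all illustrative assumptions:

```python
# Hypothetical sketch: start decoding on a low-power engine; when an
# error pattern is detected, signal the resource manager, receive more
# power credits, and switch to the stronger second engine.

def detect_error_pattern(error_counts, threshold):
    """One assumed pattern: two consecutive words above the error threshold."""
    return any(a > threshold and b > threshold
               for a, b in zip(error_counts, error_counts[1:]))

def decode_words(error_counts, threshold, base_credits, boosted_credits):
    """Return which engine handles the words and the granted power credits."""
    if detect_error_pattern(error_counts, threshold):
        # Status signal -> resource manager grants the second credit amount.
        return ("engine2", boosted_credits)
    return ("engine1", base_credits)

engine, credits = decode_words([2, 9, 11, 3], threshold=8,
                               base_credits=4, boosted_credits=10)
```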

    PEAK POWER MANAGEMENT WITH DYNAMIC DATA PATH OPERATION CURRENT BUDGET MANAGEMENT

    Publication Number: US20240152295A1

    Publication Date: 2024-05-09

    Application Number: US18503246

    Application Date: 2023-11-07

    CPC classification number: G06F3/0659 G06F3/0604 G06F3/0631 G06F3/0679

    Abstract: A memory device includes a plurality of memory dies, each memory die of the plurality of memory dies including a memory array and control logic, operatively coupled with the memory array, to perform operations including identifying a data path operation with respect to the memory die. The memory die is associated with a channel. The operations further include determining, based on at least one value derived from a current budget ready status and a cache ready status, whether the channel is ready for the memory die to handle the data path operation, and in response to determining that the channel is ready for the memory die to handle the data path operation, causing the data path operation to be handled by the memory die.
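The readiness check described above combines two statuses before dispatching a data path operation. A minimal sketch, with all names assumed for illustration:

```python
# Hypothetical sketch: a data path operation is handled by the die only
# when both the current-budget ready status and the cache ready status
# indicate the channel is ready; otherwise it is deferred.

def channel_ready(budget_ready, cache_ready):
    """Derive the single channel-ready value from the two statuses."""
    return budget_ready and cache_ready

def handle_data_path_op(op, budget_ready, cache_ready, handled):
    """Dispatch the operation when ready; report whether it was handled."""
    if channel_ready(budget_ready, cache_ready):
        handled.append(op)
        return True
    return False  # deferred until both statuses become ready

handled = []
ok = handle_data_path_op("program_page", budget_ready=True,
                         cache_ready=True, handled=handled)
deferred = handle_data_path_op("read_page", budget_ready=True,
                               cache_ready=False, handled=handled)
```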

    Queue management for a memory system

    Publication Number: US11940874B2

    Publication Date: 2024-03-26

    Application Number: US17883051

    Application Date: 2022-08-08

    Abstract: Methods, systems, and devices for queue management for a memory system are described. The memory system may include a first decoder associated with a first error control capability and a second decoder associated with a second error control capability. The memory system may receive a command and identify an expected latency for performing an error control operation on the command. The memory system may determine whether to assign the command to a first queue associated with the first decoder or a second queue associated with the second decoder based at least in part on the expected latency for processing the command using the first decoder. Upon assigning the command to a decoder, the command may be processed by the first queue or the second queue.
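The queue-assignment decision above can be sketched as a simple latency test. The threshold parameter and all names are assumptions; the patent does not give concrete values:

```python
# Hypothetical sketch: route a command to the fast first-decoder queue
# when its expected error-control latency is acceptable, otherwise to
# the queue of the higher-capability second decoder.
from collections import deque

def assign_command(command, expected_latency, threshold,
                   fast_queue, strong_queue):
    """Return which decoder's queue received the command."""
    if expected_latency <= threshold:
        fast_queue.append(command)
        return "first"
    strong_queue.append(command)
    return "second"

fast, strong = deque(), deque()
a = assign_command("read0", expected_latency=3, threshold=5,
                   fast_queue=fast, strong_queue=strong)
b = assign_command("read1", expected_latency=9, threshold=5,
                   fast_queue=fast, strong_queue=strong)
```

Commands in each queue would then be drained by the corresponding decoder, which is the final processing step the abstract describes.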
