INTERNAL PROCESSOR BUFFER
    Invention patent application (in force)

    Publication No.: US20130318294A1

    Publication Date: 2013-11-28

    Application No.: US13960634

    Filing Date: 2013-08-06

    Inventor: Robert Walker

    Abstract: One or more of the present techniques provide a compute engine buffer configured to maneuver data and increase the efficiency of a compute engine. One such compute engine buffer is connected to a compute engine which performs operations on operands retrieved from the buffer, and stores results of the operations to the buffer. Such a compute engine buffer includes a compute buffer having storage units which may be electrically connected or isolated, based on the size of the operands to be stored and the configuration of the compute engine. The compute engine buffer further includes a data buffer, which may be a simple buffer. Operands may be copied to the data buffer before being copied to the compute buffer, which may save additional clock cycles for the compute engine, further increasing the compute engine efficiency.
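
    To make the arrangement concrete, here is a minimal software model of the idea in the abstract. It is an illustrative sketch only, not the patented hardware: the class names, the 16-unit size, and the byte-wide storage units are all assumptions. A ComputeBuffer groups ("connects") consecutive fixed-width storage units to hold wider operands, and a simple DataBuffer stages operands before they are copied into the compute buffer.

        class ComputeBuffer:
            """Storage units that can be grouped to hold operands of varying width."""

            def __init__(self, num_units=16, unit_width_bytes=1):
                self.unit_width = unit_width_bytes
                self.units = [0] * num_units                 # one value per storage unit

            def store_operand(self, start_unit, value, operand_bytes):
                """Spread one operand across ("connect") consecutive storage units."""
                bits = 8 * self.unit_width
                for i in range(operand_bytes // self.unit_width):
                    self.units[start_unit + i] = (value >> (bits * i)) & ((1 << bits) - 1)

            def load_operand(self, start_unit, operand_bytes):
                """Reassemble an operand from consecutive storage units."""
                bits = 8 * self.unit_width
                return sum(self.units[start_unit + i] << (bits * i)
                           for i in range(operand_bytes // self.unit_width))

        class DataBuffer:
            """Simple staging buffer; operands land here before the compute buffer."""

            def __init__(self):
                self.slots = {}

            def write(self, tag, value):
                self.slots[tag] = value

            def read(self, tag):
                return self.slots[tag]

        # Stage a 2-byte operand, then copy it into the compute buffer as two units.
        data_buf, compute_buf = DataBuffer(), ComputeBuffer()
        data_buf.write("op0", 0x1234)
        compute_buf.store_operand(start_unit=0, value=data_buf.read("op0"), operand_bytes=2)
        assert compute_buf.load_operand(start_unit=0, operand_bytes=2) == 0x1234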

    Memory sub-system for increasing bandwidth for command scheduling

    Publication No.: US11586390B2

    Publication Date: 2023-02-21

    Application No.: US17498415

    Filing Date: 2021-10-11

    Abstract: Initialization is performed based on the commands received at the command queue. To perform initialization, a bank touch count list is updated; the list contains the banks being accessed by the commands and a bank touch count for each bank, which identifies the number of commands accessing that bank. The bank touch count list is updated by assigning each bank a priority rank based on its bank touch count. Once initialized, the commands in the command queue are scheduled by inserting each command into priority queues based on the bank touch count list.
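
    A small sketch of that bookkeeping may help; it is only an approximation of the scheme described in the abstract, with invented command records and bank numbers, not the claimed implementation. It counts how many queued commands touch each bank, ranks the banks by that count, and uses the rank to place each command into a priority queue.

        from collections import Counter, defaultdict

        # Hypothetical command records: (command_id, bank) pairs standing in for the
        # commands received at the command queue.
        command_queue = [("c0", 2), ("c1", 0), ("c2", 2), ("c3", 1), ("c4", 2), ("c5", 0)]

        # Initialization: build the bank touch count list -- how many commands access
        # each bank -- and assign each bank a priority rank from its touch count.
        bank_touch_count = Counter(bank for _, bank in command_queue)
        ranked_banks = sorted(bank_touch_count, key=bank_touch_count.get, reverse=True)
        bank_priority_rank = {bank: rank for rank, bank in enumerate(ranked_banks)}

        # Scheduling: insert each command into a priority queue chosen by the rank of
        # the bank it accesses (rank 0 = most heavily touched bank).
        priority_queues = defaultdict(list)
        for cmd_id, bank in command_queue:
            priority_queues[bank_priority_rank[bank]].append(cmd_id)

        print(dict(bank_touch_count))     # {2: 3, 0: 2, 1: 1}
        print(bank_priority_rank)         # {2: 0, 0: 1, 1: 2}
        print(dict(priority_queues))      # {0: ['c0', 'c2', 'c4'], 1: ['c1', 'c5'], 2: ['c3']}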

    Memory sub-system for increasing bandwidth for command scheduling

    Publication No.: US11144240B2

    Publication Date: 2021-10-12

    Application No.: US16111974

    Filing Date: 2018-08-24

    Abstract: Initialization is performed based on the commands received at the command queue. To perform initialization, a bank touch count list is updated; the list contains the banks being accessed by the commands and a bank touch count for each bank, which identifies the number of commands accessing that bank. The bank touch count list is updated by assigning each bank a priority rank based on its bank touch count. Once initialized, the commands in the command queue are scheduled by inserting each command into priority queues based on the bank touch count list.

    MEMORY SUB-SYSTEM FOR DECODING NON-POWER-OF-TWO ADDRESSABLE UNIT ADDRESS BOUNDARIES

    Publication No.: US20210227361A1

    Publication Date: 2021-07-22

    Application No.: US17204522

    Filing Date: 2021-03-17

    Abstract: A system generates, using a first addressable unit address decoder, a first addressable unit address based on an input address, an interleaving factor, and a number of first addressable units. The system then generates, using an internal address decoder, an internal address based on the input address, the interleaving factor, and the number of first addressable units. Generating the internal address includes: determining a lower address value by extracting lower bits of the internal address, determining an upper address value by extracting upper bits of the internal address, and adding the lower address value to the upper address value to generate the internal address. Using an internal power-of-two address boundary decoder and the internal address, the system then generates a second addressable unit address, a third addressable unit address, a fourth addressable unit address, and a fifth addressable unit address.
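
    For orientation, the sketch below shows the general problem the abstract addresses: splitting a flat input address into a unit index and an internal address when data is interleaved across a non-power-of-two number of addressable units. It uses plain division and modulo rather than the bit-extraction-and-add scheme the abstract outlines, so it should be read as a baseline illustration under assumed parameter names (num_units, interleave_bytes), not the claimed decoder.

        def decode(input_address, num_units=3, interleave_bytes=64):
            """Baseline non-power-of-two interleave decode (illustrative only).

            Consecutive interleave_bytes-sized chunks of the flat address space
            are dealt out round-robin across num_units addressable units.
            """
            chunk = input_address // interleave_bytes       # which chunk overall
            offset = input_address % interleave_bytes       # offset within the chunk
            unit_address = chunk % num_units                # which unit owns the chunk
            internal_address = (chunk // num_units) * interleave_bytes + offset
            return unit_address, internal_address

        # Chunks 0, 1, 2 land on units 0, 1, 2; chunk 3 wraps back to unit 0.
        assert decode(0) == (0, 0)
        assert decode(64) == (1, 0)
        assert decode(128) == (2, 0)
        assert decode(192) == (0, 64)
        assert decode(200) == (0, 72)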

    Conditional operation in an internal processor of a memory device

    Publication No.: US10394753B2

    Publication Date: 2019-08-27

    Application No.: US15395602

    Filing Date: 2016-12-30

    Inventor: Robert Walker

    Abstract: An internal processor of a memory device is configured to, for example, selectively execute instructions in parallel. One such internal processor includes a plurality of arithmetic logic units (ALUs), each connected to conditional masking logic, and each configured to process conditional instructions. A condition instruction may be received by a sequencer of the memory device. Once the condition instruction is received, the sequencer may enable the conditional masking logic of the ALUs. The sequencer may toggle a signal to the conditional masking logic such that the masking logic masks certain instructions if a condition of the condition instruction has been met, and masks other instructions if the condition has not been met. In one embodiment, each ALU in the internal processor may selectively perform instructions in parallel.
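
    The masking behavior can be pictured with a short, purely illustrative sketch. Nothing below comes from the patent itself: the lane values, the condition, and the two stand-in "instructions" are invented. Every lane evaluates the condition on its own operand and keeps only the result of the instruction that its condition outcome leaves unmasked.

        alu_lanes = [3, -7, 12, 0, -1, 5]        # each "lane" holds one ALU's operand

        def condition(x):                        # condition carried by the condition instruction
            return x >= 0

        def if_true_op(x):                       # instruction left unmasked when the condition holds
            return x + 100

        def if_false_op(x):                      # instruction left unmasked otherwise
            return -x

        # Conditional masking: every lane sees both instructions, but each lane keeps
        # only the result that its own condition outcome leaves unmasked.
        results = [if_true_op(x) if condition(x) else if_false_op(x) for x in alu_lanes]
        print(results)                           # [103, 7, 112, 100, 1, 105]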

    Systems and methods for accessing memory
    Granted invention patent (in force)

    Publication No.: US09183057B2

    Publication Date: 2015-11-10

    Application No.: US13746141

    Filing Date: 2013-01-21

    Inventor: Robert Walker

    Abstract: Methods of mapping memory cells to applications, methods of accessing memory cells, systems, and memory controllers are described. In some embodiments, a memory system including multiple physical channels is mapped into regions, such that any region spans each physical channel of the memory system. Applications are allocated memory in the regions, and performance and power requirements of the applications are associated with the regions. Additional methods and systems are also described.
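
    A few lines of bookkeeping illustrate the mapping; the Region class, the attribute names, and the four-channel count below are assumptions for the example, not details from the patent. Every region spans all physical channels, and each application is allocated memory from a region that carries its performance and power requirements.

        from dataclasses import dataclass, field

        NUM_CHANNELS = 4   # assumed number of physical channels

        @dataclass
        class Region:
            name: str
            performance: str                                # requirement tied to the region
            power: str
            channels: tuple = tuple(range(NUM_CHANNELS))    # every region spans all channels
            applications: list = field(default_factory=list)

        regions = {
            "fast": Region("fast", performance="high", power="normal"),
            "idle": Region("idle", performance="low", power="low"),
        }

        def allocate(app_name, region_name):
            """Allocate an application's memory from a region, inheriting its attributes."""
            regions[region_name].applications.append(app_name)

        allocate("video_decoder", "fast")
        allocate("background_sync", "idle")
        print(regions["fast"])   # spans channels (0, 1, 2, 3) and now holds video_decoder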

    Control of page access in memory
    Granted invention patent (in force)

    Publication No.: US08738837B2

    Publication Date: 2014-05-27

    Application No.: US13750560

    Filing Date: 2013-01-25

    Inventor: Robert Walker

    Abstract: The present techniques provide systems and methods of controlling access to more than one open page in a memory component, such as a memory bank. Several components may request access to the memory banks. A controller can receive the requests and open or close the pages in the memory bank in response to the requests. In some embodiments, the controller assigns priority to some components requesting access, and assigns a specific page in a memory bank to the priority component. Further, additional available pages in the same memory bank may also be opened by other priority components, or by components with lower priorities. The controller may conserve power, or may increase the efficiency of processing transactions between components and the memory bank by closing pages after time outs, after transactions are complete, or in response to a number of requests received by masters.
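
    The page-management policy can be modeled roughly as follows. This is an assumption-heavy toy (the class name, single-bank scope, page limit, and timeout constant are all invented): the controller opens pages on request, keeps a page pinned for a priority component, and closes other pages on timeout or when their transactions complete.

        class BankPageController:
            """Toy controller for the open pages of a single memory bank."""

            MAX_OPEN_PAGES = 4        # assumed limit on simultaneously open pages
            TIMEOUT = 10              # assumed idle time (in ticks) before a page closes

            def __init__(self):
                self.open_pages = {}          # page -> last-access tick
                self.priority_page = None     # page reserved for a priority component

            def assign_priority(self, page, tick):
                """Pin a specific page for a component granted priority access."""
                self.priority_page = page
                self.access(page, tick)

            def access(self, page, tick):
                """Open the page (if needed), evicting an idle non-priority page when full."""
                if page not in self.open_pages and len(self.open_pages) >= self.MAX_OPEN_PAGES:
                    victim = min((p for p in self.open_pages if p != self.priority_page),
                                 key=self.open_pages.get)
                    del self.open_pages[victim]
                self.open_pages[page] = tick

            def transaction_complete(self, page):
                """Close a page once its transaction finishes (saves power)."""
                if page != self.priority_page:
                    self.open_pages.pop(page, None)

            def expire(self, tick):
                """Close non-priority pages that have been idle past the timeout."""
                for page, last in list(self.open_pages.items()):
                    if page != self.priority_page and tick - last > self.TIMEOUT:
                        del self.open_pages[page]

        ctrl = BankPageController()
        ctrl.assign_priority(page=7, tick=0)
        ctrl.access(page=3, tick=1)
        ctrl.expire(tick=20)                  # page 3 times out, priority page 7 stays open
        print(sorted(ctrl.open_pages))        # [7]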

    Write request buffer capable of responding to read requests

    Publication No.: US12254213B2

    Publication Date: 2025-03-18

    Application No.: US17558465

    Filing Date: 2021-12-21

    Abstract: Described apparatuses and methods relate to a write request buffer for a memory system that may support a nondeterministic protocol. A host device and connected memory device may include a controller with a read queue and a write queue. A controller includes a write request buffer to buffer write addresses and write data associated with write requests directed to the memory device. The write request buffer can include a write address buffer that stores unique write addresses and a write data buffer that stores most-recent write data associated with the unique write addresses. Incoming read requests are compared with the write requests stored in the write request buffer. If a match is found, the write request buffer can service the requested data without forwarding the read request downstream to backend memory. Accordingly, the write request buffer can improve the latency and bandwidth in accessing a memory device over an interconnect.
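
    The read-forwarding behavior resembles classic store-to-load forwarding, and the sketch below models it in that spirit; the class name, the dict-based backend stand-in, and the drain step are assumptions rather than details from the patent. Reads that hit a buffered write address are served from the buffer, and misses are forwarded downstream.

        class WriteRequestBuffer:
            """Buffers writes and answers reads that hit a buffered write address."""

            def __init__(self, backend_memory):
                self.backend = backend_memory
                self.pending = {}   # unique write address -> most-recent write data

            def write(self, address, data):
                # A repeated address overwrites its entry, so the buffer keeps one
                # slot per unique address holding the most recent data.
                self.pending[address] = data

            def read(self, address):
                # Service the read from the buffer on a match; otherwise forward it
                # downstream to backend memory.
                if address in self.pending:
                    return self.pending[address]
                return self.backend.get(address, 0)

            def drain(self):
                # Flush buffered writes to backend memory (e.g. when the buffer fills).
                self.backend.update(self.pending)
                self.pending.clear()

        backend = {0x10: 0xAA}
        wbuf = WriteRequestBuffer(backend)
        wbuf.write(0x20, 0x11)
        wbuf.write(0x20, 0x22)                 # most-recent data wins for address 0x20
        assert wbuf.read(0x20) == 0x22         # hit: served from the write request buffer
        assert wbuf.read(0x10) == 0xAA         # miss: forwarded to backend memory
        wbuf.drain()
        assert backend[0x20] == 0x22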
