Partitionable memory interfaces
    Granted invention patent (in force)

    Publication number: US09417816B2

    Publication date: 2016-08-16

    Application number: US14146618

    Application date: 2014-01-02

    Inventor: David A. Roberts

    CPC classification number: G06F3/0659 G06F3/061 G06F3/0673 G06F13/16 Y02D10/14

    Abstract: A memory device receives a plurality of read commands and/or write commands in parallel. The memory device transmits data corresponding to respective read commands on respective portions of a data bus and receives data corresponding to respective write commands on respective portions of the data bus. The memory device includes I/O logic to receive the plurality of read commands in parallel, to transmit the data corresponding to the respective read commands on respective portions of the data bus, and to receive the data corresponding to the respective write commands on respective portions of the data bus.
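    As a rough illustration of the idea in the abstract, the sketch below models a data bus whose portions independently carry the data for commands issued in parallel. The class and method names (PartitionedBus, issue_parallel) are hypothetical and chosen for illustration only; this is not the claimed hardware design.

```python
# Illustrative model of a data bus split into independent portions, each
# carrying the data for one of several read/write commands issued in parallel.
# All names are hypothetical; this is not the patented hardware design.

class PartitionedBus:
    def __init__(self, total_width_bits=64, num_portions=4):
        self.portion_width = total_width_bits // num_portions
        self.num_portions = num_portions
        self.memory = {}  # address -> data, stands in for the memory array

    def issue_parallel(self, commands):
        """Service up to num_portions commands at once.

        Each command is ('read', addr) or ('write', addr, data); the i-th
        command uses the i-th portion of the bus.
        """
        assert len(commands) <= self.num_portions
        results = []
        for portion, cmd in enumerate(commands):
            if cmd[0] == 'read':
                _, addr = cmd
                results.append((portion, 'read_data', self.memory.get(addr, 0)))
            else:  # write
                _, addr, data = cmd
                self.memory[addr] = data
                results.append((portion, 'write_ack', addr))
        return results


bus = PartitionedBus()
bus.issue_parallel([('write', 0x10, 0xAB), ('write', 0x20, 0xCD)])
print(bus.issue_parallel([('read', 0x10), ('read', 0x20)]))
```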

    Method and apparatus for controlling cache line storage in cache memory

    Publication number: US12124373B2

    Publication date: 2024-10-22

    Application number: US18185058

    Application date: 2023-03-16

    Inventor: David A. Roberts

    Abstract: A method and apparatus physically partition clean and dirty cache lines into separate memory partitions, such as one or more banks, so that during low-power operation a cache memory controller reduces the power consumption of the cache memory containing only clean data. The cache memory controller controls refresh operation so that data refresh does not occur, or occurs at a reduced rate, for clean-data-only banks. Partitions that store dirty data can also store clean data; however, other partitions are designated for storing only clean data so that those partitions can have their refresh rate reduced or their refresh stopped for periods of time. When multiple DRAM dies or packages are employed, the partitioning can occur at the die or package level rather than at the bank level within a die.
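    A rough software model of the refresh policy described above is sketched below, assuming a per-bank clean-only designation and a controller that slows or skips refresh for such banks in low-power operation. All names (Bank, RefreshController) and the interval values are hypothetical.

```python
# Illustrative sketch of refresh control over clean-only vs. dirty-capable
# banks. Names and intervals are hypothetical, not taken from the patent.

class Bank:
    def __init__(self, clean_only):
        self.clean_only = clean_only   # designated to hold only clean lines
        self.lines = {}                # tag -> (data, dirty_bit)

    def insert(self, tag, data, dirty):
        if dirty and self.clean_only:
            raise ValueError("dirty line routed to a clean-only bank")
        self.lines[tag] = (data, dirty)


class RefreshController:
    def __init__(self, banks, normal_interval_ms=64, slow_interval_ms=256):
        self.banks = banks
        self.normal = normal_interval_ms
        self.slow = slow_interval_ms

    def refresh_interval(self, bank, low_power):
        # In low-power operation, clean-only banks are refreshed less often
        # (or not at all), since their contents can be refetched if lost.
        if low_power and bank.clean_only:
            return self.slow           # or None to stop refresh entirely
        return self.normal


banks = [Bank(clean_only=True), Bank(clean_only=False)]
banks[1].insert(tag=0x40, data=0xFF, dirty=True)  # dirty lines go to the dirty-capable bank
ctrl = RefreshController(banks)
print([ctrl.refresh_interval(b, low_power=True) for b in banks])  # [256, 64]
```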

    Sorting Instances of Input Data for Processing through a Neural Network

    Publication number: US20200174748A1

    Publication date: 2020-06-04

    Application number: US16206879

    Application date: 2018-11-30

    Inventor: David A. Roberts

    Abstract: An electronic device including a neural network processor and a presorter is described. The presorter determines a sorted order to be used by the neural network processor for processing a set of instances of input data through the neural network; the determining includes rearranging an initial order of some or all of the instances of input data so that instances of input data that have specified similarities are located nearer to one another in the sorted order. The presorter provides, to the neural network processor, the sorted order to be used for controlling the order in which instances of input data from among the set of instances of input data are processed through the neural network. A controller in the electronic device adjusts operation of the presorter based on efficiencies of the presorter and the neural network processor.
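    The sketch below illustrates the presorting idea in plain Python, assuming an arbitrary placeholder similarity key (the rounded mean of each input vector) and a stub in place of the neural network; none of these names or choices come from the application.

```python
# Illustrative presorter: reorder input instances so that similar ones are
# adjacent before they are processed. The similarity key used here (rounded
# mean of the input vector) is an arbitrary placeholder.

def presort(instances, key=lambda x: round(sum(x) / len(x), 1)):
    """Return indices of `instances` in a similarity-grouped order."""
    return sorted(range(len(instances)), key=lambda i: key(instances[i]))


def process(instances, order, run_network=lambda batch: [sum(x) for x in batch]):
    """Feed instances to a (stub) network in the presorted order."""
    return [run_network([instances[i]])[0] for i in order]


inputs = [[0.9, 1.1], [0.1, 0.2], [1.0, 1.0], [0.0, 0.3]]
order = presort(inputs)
print(order)                  # similar instances end up next to each other
print(process(inputs, order))
```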

    Method and apparatus for reducing memory access latency

    Publication number: US10515671B2

    Publication date: 2019-12-24

    Application number: US15272894

    Application date: 2016-09-22

    Inventor: David A. Roberts

    Abstract: Logic such as a memory controller writes primary data from an incoming write request, as well as corresponding replicated primary data (a copy of the primary data), to one or more different memory banks of random access memory in response to determining a memory access contention condition for the address (including a range of addresses) corresponding to the incoming write request. When the memory bank containing the primary data is busy servicing a write request, such as to another row of memory in the bank, an incoming read request for the primary data is serviced by reading the replicated primary data from the different memory bank of the random access memory.
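    The sketch below is a simplified software model of the replication scheme described above, assuming a hypothetical controller that writes a replica to a neighboring bank when contention is detected and reads from the replica when the home bank is busy. The names and the bank-selection rule are illustrative only.

```python
# Illustrative model: on a contended address, the controller writes the data
# to its home bank and a replica to a different bank; a read can then be
# served from whichever bank is not busy. Names are hypothetical.

class ReplicatingController:
    def __init__(self, num_banks=4):
        self.banks = [dict() for _ in range(num_banks)]
        self.busy = [False] * num_banks
        self.replicas = {}            # addr -> replica bank index

    def _home_bank(self, addr):
        return addr % len(self.banks)

    def write(self, addr, data, contended=False):
        home = self._home_bank(addr)
        self.banks[home][addr] = data
        if contended:                 # replicate to a different bank
            alt = (home + 1) % len(self.banks)
            self.banks[alt][addr] = data
            self.replicas[addr] = alt

    def read(self, addr):
        home = self._home_bank(addr)
        if self.busy[home] and addr in self.replicas:
            return self.banks[self.replicas[addr]][addr]  # serve from replica
        return self.banks[home][addr]


mc = ReplicatingController()
mc.write(0x4, 123, contended=True)
mc.busy[mc._home_bank(0x4)] = True    # home bank busy with another request
print(mc.read(0x4))                   # 123, read from the replica bank
```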

    FAST THREAD WAKE-UP THROUGH EARLY LOCK RELEASE

    Publication number: US20190317832A1

    Publication date: 2019-10-17

    Application number: US15952143

    Application date: 2018-04-12

    Abstract: A thread holding a lock notifies a sleeping thread that is waiting on the lock that the lock-holding thread is "about" to release the lock. In response to the notification, the waiting thread is woken up. While the waiting thread is woken up, the lock-holding thread completes other operations prior to actually releasing the lock and then releases the lock. The notification to the waiting thread hides the latency associated with waking up the waiting thread by allowing the operations that wake up the waiting thread to occur while the lock-holding thread is performing the other operations prior to releasing the lock.
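    The sketch below approximates the early-notification idea with ordinary Python threading primitives: an event stands in for the "about to release" hint, so the waiter's wake-up overlaps with the holder's remaining work. This is only an analogy, not the processor- or OS-level mechanism described in the application.

```python
# Illustrative analogy of fast wake-up through an early "about to release"
# hint, using standard threading primitives. Timings are placeholders.

import threading
import time

lock = threading.Lock()
about_to_release = threading.Event()

def holder():
    with lock:
        time.sleep(0.05)             # critical-section work
        about_to_release.set()       # hint: release is imminent
        time.sleep(0.05)             # remaining work before the actual release
    # lock released here

def waiter():
    about_to_release.wait()          # start waking up early
    # ...wake-up latency is hidden behind the holder's remaining work...
    with lock:
        print("waiter acquired the lock")

t_wait = threading.Thread(target=waiter)
t_hold = threading.Thread(target=holder)
t_wait.start(); t_hold.start()
t_hold.join(); t_wait.join()
```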

    TECHNIQUES FOR IMPROVED LATENCY OF THREAD SYNCHRONIZATION MECHANISMS

    Publication number: US20190317831A1

    Publication date: 2019-10-17

    Application number: US15952149

    Application date: 2018-04-12

    Abstract: A memory fence or other similar operation is executed with reduced latency. An early fence operation is executed and acts as a hint to the processor executing the thread that executes the fence. This hint causes the processor to begin performing sub-operations for the fence earlier than if no such hint were executed. Examples of sub-operations for the fence include operations to make data written to by writes prior to the fence operation available to other threads. A resolving fence, which occurs after the early fence, performs the remaining sub-operations for the fence. By triggering some or all of the sub-operations for a memory fence that will occur in the future, the early fence operation reduces the amount of latency associated with that memory fence operation.
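    As a loose software analogy (not the processor-level mechanism described in the application), the sketch below splits a fence into an early phase that starts draining a write buffer in the background and a resolving phase that waits only for whatever remains. The WriteBuffer class and its methods are invented for illustration.

```python
# Illustrative two-phase fence: an "early fence" starts making prior writes
# visible in the background, and the later "resolving fence" waits only for
# whatever has not yet completed. A software analogy with invented names.

import threading
import time

class WriteBuffer:
    def __init__(self):
        self.pending = []
        self.visible = []
        self._drain_thread = None

    def write(self, value):
        self.pending.append(value)

    def early_fence(self):
        # Hint: begin draining pending writes now, overlapping with other work.
        def drain():
            while self.pending:
                self.visible.append(self.pending.pop(0))
                time.sleep(0.01)     # stand-in for the cost of making a write visible
        self._drain_thread = threading.Thread(target=drain)
        self._drain_thread.start()

    def resolving_fence(self):
        # Complete the fence: wait only for whatever draining remains.
        if self._drain_thread is not None:
            self._drain_thread.join()


buf = WriteBuffer()
for v in (1, 2, 3):
    buf.write(v)
buf.early_fence()        # start fence sub-operations early
time.sleep(0.02)         # unrelated work overlaps with the drain
buf.resolving_fence()    # little or nothing left to wait for
print(buf.visible)       # [1, 2, 3]
```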

    PREEMPTIVE SIGNAL INTEGRITY CONTROL
    Published invention application

    Publication number: US20190286513A1

    Publication date: 2019-09-19

    Application number: US15921489

    Application date: 2018-03-14

    Abstract: Techniques are provided herein for pre-emptively reinforcing one or more buses of a computing device against the effects of signal noise that could cause a reduction in signal integrity. The techniques generally include detecting an event (or “trigger”) that would tend to indicate that a reduction in signal integrity will occur, examining a reinforcement action policy and system status to determine what reinforcement action to take, and performing the reinforcement action.
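    The sketch below illustrates the trigger, policy-lookup, and reinforcement-action flow in plain Python; the trigger names, actions, and policy table are invented for illustration and are not taken from the application.

```python
# Illustrative trigger -> policy lookup -> reinforcement flow. All event and
# action names are hypothetical placeholders.

REINFORCEMENT_POLICY = {
    # trigger event             -> reinforcement action
    "memory_burst_scheduled":      "lower_bus_frequency",
    "voltage_droop_predicted":     "increase_drive_strength",
    "neighbor_bus_activity_high":  "enable_extra_ecc",
}

def system_allows(action, system_status):
    # e.g. do not lower frequency while a latency-critical transfer is active
    if action == "lower_bus_frequency" and system_status.get("latency_critical"):
        return False
    return True

def handle_trigger(event, system_status):
    """Pick and apply a reinforcement action before signal integrity degrades."""
    action = REINFORCEMENT_POLICY.get(event)
    if action and system_allows(action, system_status):
        return f"applied: {action}"
    return "no action taken"

print(handle_trigger("memory_burst_scheduled", {"latency_critical": False}))
print(handle_trigger("memory_burst_scheduled", {"latency_critical": True}))
```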
