Distributed gather/scatter operations across a network of memory nodes

    Publication (Announcement) Number: US10805392B2

    Publication (Announcement) Date: 2020-10-13

    Application Number: US15221554

    Application Date: 2016-07-27

    Abstract: Devices, methods, and systems for distributed gather and scatter operations in a network of memory nodes. A responding memory node includes a memory; a communications interface having circuitry configured to communicate with at least one other memory node; and a controller. The controller includes circuitry configured to receive a request message from a requesting node via the communications interface. The request message indicates a gather or scatter operation, and instructs the responding node to retrieve data elements from a source memory data structure and store the data elements to a destination memory data structure. The controller further includes circuitry configured to transmit a response message to the requesting node via the communications interface. The response message indicates that the data elements have been stored into the destination memory data structure.
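The request/response exchange this abstract describes can be sketched as a small simulation. The class and message-field names below (`RespondingNode`, `src`, `dst`) are illustrative assumptions, not the patented interface; a real node would move data over the communications interface rather than within one dictionary.

```python
class RespondingNode:
    """A memory node that services gather/scatter request messages."""

    def __init__(self, memory):
        self.memory = memory  # address -> data element

    def handle_request(self, msg):
        # msg indicates the operation plus source/destination layout.
        # Gather: sparse source addresses -> contiguous destination;
        # scatter: contiguous source -> sparse destination addresses.
        for src, dst in zip(msg["src"], msg["dst"]):
            self.memory[dst] = self.memory[src]
        # Response message: the data elements have been stored.
        return {"op": msg["op"], "status": "stored", "count": len(msg["dst"])}
```

For example, a gather request `{"op": "gather", "src": [0, 4], "dst": [100, 101]}` pulls two scattered elements into a contiguous destination region and returns a response confirming the store.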

    METHOD AND APPARATUS FOR CONTROLLING CACHE LINE STORAGE IN CACHE MEMORY

    Publication (Announcement) Number: US20190205253A1

    Publication (Announcement) Date: 2019-07-04

    Application Number: US15857837

    Application Date: 2017-12-29

    Inventor: David A. Roberts

    Abstract: A method and apparatus physically partition clean and dirty cache lines into separate memory partitions, such as one or more banks, so that during low-power operation a cache memory controller reduces power consumption of the cache memory that holds only clean data. The cache memory controller controls the refresh operation so that data refresh either does not occur for the clean-only banks or occurs at a reduced rate. Partitions that store dirty data can also store clean data; other partitions, however, are designated for storing only clean data so that their refresh rate can be reduced or their refresh stopped for periods of time. When multiple DRAM dies or packages are employed, the partitioning can occur at the die or package level rather than at the bank level within a die.
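A minimal sketch of the partitioning idea, assuming a simple bank model: dirty lines must land in a dirty-capable partition, while clean-only partitions can be skipped during low-power refresh because their contents can be refetched from memory if lost. Names and the placement policy are illustrative.

```python
class Bank:
    def __init__(self, clean_only):
        self.clean_only = clean_only  # partition may hold only clean lines
        self.lines = {}               # tag -> (data, dirty_bit)

class RefreshController:
    def __init__(self, dirty_banks, clean_only_banks):
        self.banks = ([Bank(clean_only=False) for _ in range(dirty_banks)] +
                      [Bank(clean_only=True) for _ in range(clean_only_banks)])

    def install(self, tag, data, dirty):
        # Dirty lines go to a dirty-capable partition; clean lines prefer
        # a clean-only partition so that it can be refresh-throttled.
        pool = [b for b in self.banks if not (dirty and b.clean_only)]
        target = pool[0] if dirty else pool[-1]
        target.lines[tag] = (data, dirty)
        return target

    def refresh_targets(self, low_power):
        # In low-power operation, skip (or slow) refresh for clean-only
        # banks; dirty-capable banks must always be refreshed.
        return [b for b in self.banks if not (low_power and b.clean_only)]
```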

    System and method for dynamically allocating memory to hold pending write requests

    Publication (Announcement) Number: US10310997B2

    Publication (Announcement) Date: 2019-06-04

    Application Number: US15273013

    Application Date: 2016-09-22

    Abstract: A processing system employs a memory module as a temporary write buffer to store write requests when a write buffer at a memory controller reaches a threshold capacity, and de-allocates the temporary write buffer when the write buffer capacity falls below the threshold. Upon receiving a write request, the memory controller stores the write request in a write buffer until the write request can be written to main memory. The memory controller can temporarily extend the memory controller's write buffer to the memory module, thereby accommodating temporary periods of high memory activity without requiring a large permanent write buffer at the memory controller.
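The spill/de-allocate behavior can be sketched as follows. The class names, the FIFO drain order, and the exact de-allocation point are assumptions for illustration; the abstract specifies only that the temporary buffer is allocated at a threshold and de-allocated when occupancy falls back below it.

```python
class MemoryController:
    def __init__(self, threshold):
        self.threshold = threshold
        self.local_buffer = []    # permanent write buffer at the controller
        self.spill_buffer = None  # temporary buffer allocated in the module

    def enqueue(self, write_req):
        if len(self.local_buffer) >= self.threshold:
            # Threshold reached: extend the write buffer into the memory
            # module by allocating a temporary buffer there.
            if self.spill_buffer is None:
                self.spill_buffer = []
            self.spill_buffer.append(write_req)
        else:
            self.local_buffer.append(write_req)

    def drain_one(self):
        # Retire one write to main memory; refill from the spill buffer,
        # de-allocating it once it empties.
        req = self.local_buffer.pop(0) if self.local_buffer else None
        if self.spill_buffer:
            self.local_buffer.append(self.spill_buffer.pop(0))
        if self.spill_buffer == []:
            self.spill_buffer = None  # temporary buffer de-allocated
        return req
```

This accommodates a burst of writes without permanently enlarging the controller's own buffer, at the cost of a round trip to the memory module for spilled requests.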

    HIGH PERFORMANCE CONTEXT SWITCHING FOR VIRTUALIZED FPGA ACCELERATORS

    Publication (Announcement) Number: US20190146829A1

    Publication (Announcement) Date: 2019-05-16

    Application Number: US15809940

    Application Date: 2017-11-10

    Abstract: A hardware context manager in a field-programmable gate array (FPGA) device includes configuration logic configured to program one or more programming regions in the FPGA device based on configuration data for implementing a target configuration of those regions. Context management logic in the hardware context manager is coupled with the configuration logic and saves a first context corresponding to the target configuration by retrieving first state information from the one or more programming regions, where the first state information is generated based on the target configuration, and storing the retrieved state information in a context memory. The context management logic restores the first context by transferring the first state information from the context memory back to the one or more programming regions and causing the configuration logic to program those regions based on the configuration data.
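The save/restore flow can be modeled in a few lines. Everything below (the `HardwareContextManager` class, representing region state as a dictionary, reprogramming before state restore) is an illustrative assumption about how such a context switch might be sequenced, not the actual FPGA mechanism.

```python
class HardwareContextManager:
    def __init__(self, config_logic):
        self.config_logic = config_logic  # programs the regions from config data
        self.context_memory = {}          # context id -> saved state information
        self.region_state = {}            # live state of the programming regions

    def program(self, config_data):
        # Configuration logic programs the regions for a target configuration.
        self.region_state = self.config_logic(config_data)

    def save_context(self, ctx_id):
        # Retrieve state information from the programming regions and
        # store it in the context memory.
        self.context_memory[ctx_id] = dict(self.region_state)

    def restore_context(self, ctx_id, config_data):
        # Reprogram the regions for the target configuration, then
        # transfer the saved state information back into them.
        self.program(config_data)
        self.region_state.update(self.context_memory[ctx_id])
```

A virtualization layer could use this to time-share one physical programming region among several accelerator contexts.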

    METHOD AND APPARATUS FOR PROVIDING THERMAL WEAR LEVELING

    Publication (Announcement) Number: US20190051576A1

    Publication (Announcement) Date: 2019-02-14

    Application Number: US15674607

    Application Date: 2017-08-11

    Abstract: Exemplary embodiments provide thermal wear spreading among a plurality of thermal die regions in an integrated circuit, or among dies, by using die region wear-out data that represents the cumulative amount of time each thermal die region in one or more dies has spent at a particular temperature level. In one example, the die region wear-out data is stored in persistent memory and accrued over the life of each thermal region, so that long-term monitoring of temperature levels across the die regions can be used to spread thermal wear among them. In one example, thermal wear is spread by controlling task execution, such as thread execution among one or more processing cores or dies, and/or data access operations for a memory.
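A minimal sketch of wear-leveling driven by accumulated time-at-temperature: track cumulative hot time per region and schedule the next task on the least-worn region. The single-threshold wear metric and least-worn policy are assumptions; the patent tracks time per temperature level.

```python
class ThermalWearLeveler:
    def __init__(self, region_ids):
        # Persistent wear-out data: cumulative seconds each thermal die
        # region has spent at an elevated temperature level.
        self.wear = {r: 0.0 for r in region_ids}

    def record(self, region, seconds, hot):
        # Accrue wear-out data over the life of each thermal region.
        if hot:
            self.wear[region] += seconds

    def pick_region(self):
        # Spread thermal wear: place the next task on the least-worn region.
        return min(self.wear, key=self.wear.get)
```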

    METHOD AND APPARATUS FOR MEMORY VULNERABILITY PREDICTION

    Publication (Announcement) Number: US20190034251A1

    Publication (Announcement) Date: 2019-01-31

    Application Number: US15662524

    Application Date: 2017-07-28

    Abstract: Described herein are a method and apparatus for memory vulnerability prediction. A memory vulnerability predictor predicts the reliability of a memory region when it is first accessed, based on past program history. The predictor uses a table to store reliability predictions and predicts the reliability needs of a new memory region. A memory management module uses the reliability information to make decisions, such as guiding memory placement policies in a heterogeneous memory system.
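The table-based prediction loop can be sketched as below. Keying the table by an allocation-site signature and the two-tier ECC/plain-DRAM placement are assumptions added for illustration; the abstract does not specify the table index or the placement targets.

```python
class VulnerabilityPredictor:
    def __init__(self, default="unknown"):
        self.table = {}      # site signature -> learned reliability prediction
        self.default = default

    def predict(self, site):
        # First access to a new region: predict from past program history.
        return self.table.get(site, self.default)

    def train(self, site, observed_need):
        # Update the table once the region's actual reliability need is known.
        self.table[site] = observed_need

def place(predictor, site):
    # A memory management module might use the prediction to guide
    # placement in a heterogeneous memory system (hypothetical tiers).
    return "ecc_dram" if predictor.predict(site) == "high" else "plain_dram"
```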

    Modifying Carrier Packets based on Information in Tunneled Packets

    Publication (Announcement) Number: US20180337863A1

    Publication (Announcement) Date: 2018-11-22

    Application Number: US15600048

    Application Date: 2017-05-19

    Inventor: David A. Roberts

    Abstract: The described embodiments include an electronic device that handles network packets. During operation, the electronic device receives a carrier packet that includes a tunneled packet in its payload; the tunneled packet carries a packet priority, as does the carrier packet. The electronic device then updates the packet priority of the carrier packet based on the packet priority of the tunneled packet.
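One plausible update rule is to raise the carrier's priority to at least that of the tunneled packet, so the network treats the outer packet as urgently as the inner one. The dict layout and the max rule are assumptions; the abstract states only that the carrier priority is updated based on the tunneled priority.

```python
def update_carrier_priority(carrier):
    # The tunneled packet travels inside the carrier packet's payload.
    tunneled = carrier["payload"]
    # Illustrative policy: never let the carrier be lower priority than
    # the packet it tunnels (higher number = higher priority here).
    carrier["priority"] = max(carrier["priority"], tunneled["priority"])
    return carrier
```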
