Method and apparatus for providing thermal wear leveling

    Publication Number: US11551990B2

    Publication Date: 2023-01-10

    Application Number: US15674607

    Application Date: 2017-08-11

    Abstract: Exemplary embodiments provide thermal wear spreading among a plurality of thermal die regions in an integrated circuit or among dies by using die region wear-out data that represents a cumulative amount of time each of a number of thermal die regions in one or more dies has spent at a particular temperature level. In one example, die region wear-out data is stored in persistent memory and is accrued over a life of each respective thermal region so that a long term monitoring of temperature levels in the various die regions is used to spread thermal wear among the thermal die regions. In one example, spreading thermal wear is done by controlling task execution such as thread execution among one or more processing cores, dies and/or data access operations for a memory.
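
    A minimal C++ sketch of the wear-spreading decision described above. The region count, the RegionWear structure, and pick_least_worn_region are illustrative names, not taken from the patent; only the idea of steering work toward the least-worn thermal die region follows the abstract.

        // Hypothetical sketch: the wear metric is cumulative seconds spent at or
        // above a temperature threshold, accrued over the life of the part (the
        // abstract stores this data in persistent memory).
        #include <algorithm>
        #include <array>
        #include <cstdint>
        #include <cstdio>

        struct RegionWear {
            uint64_t seconds_at_high_temp = 0;  // lifetime time-at-temperature
        };

        constexpr int kRegions = 4;  // assumed number of thermal die regions

        // Steer new work (a thread, or a stream of memory accesses) to the
        // thermal die region with the least accumulated wear.
        int pick_least_worn_region(const std::array<RegionWear, kRegions>& wear) {
            auto it = std::min_element(wear.begin(), wear.end(),
                [](const RegionWear& a, const RegionWear& b) {
                    return a.seconds_at_high_temp < b.seconds_at_high_temp;
                });
            return static_cast<int>(it - wear.begin());
        }

        int main() {
            std::array<RegionWear, kRegions> wear{{{900}, {120}, {457}, {301}}};
            std::printf("schedule next thread on region %d\n",
                        pick_least_worn_region(wear));
        }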

    Preemptive signal integrity control

    Publication Number: US11487605B2

    Publication Date: 2022-11-01

    Application Number: US15921489

    Application Date: 2018-03-14

    Abstract: Techniques are provided herein for pre-emptively reinforcing one or more buses of a computing device against the effects of signal noise that could cause a reduction in signal integrity. The techniques generally include detecting an event (or “trigger”) that would tend to indicate that a reduction in signal integrity will occur, examining a reinforcement action policy and system status to determine what reinforcement action to take, and performing the reinforcement action.
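
    A hedged sketch of the pre-emptive reinforcement flow. The trigger names, the policy table contents, and the reinforcement actions below are assumptions for illustration, not the patent's actual policy; only the detect-trigger, consult-policy-and-status, then act structure mirrors the abstract.

        #include <cstdio>
        #include <map>

        enum class Trigger { PowerStateChange, HighActivityBurst, VoltageDroopWarning };
        enum class Action  { LowerBusFrequency, RaiseDriveStrength, InsertIdleCycles };

        // A reinforcement action policy maps an event that predicts a future drop
        // in signal integrity to an action taken before the noise actually occurs.
        const std::map<Trigger, Action> kPolicy = {
            {Trigger::PowerStateChange,    Action::RaiseDriveStrength},
            {Trigger::HighActivityBurst,   Action::InsertIdleCycles},
            {Trigger::VoltageDroopWarning, Action::LowerBusFrequency},
        };

        void reinforce(Trigger t, bool bus_is_saturated) {
            Action a = kPolicy.at(t);
            // System status can downgrade an action, e.g. avoid lowering the bus
            // frequency while the bus is already saturated with traffic.
            if (a == Action::LowerBusFrequency && bus_is_saturated)
                a = Action::RaiseDriveStrength;
            std::printf("applying reinforcement action %d\n", static_cast<int>(a));
        }

        int main() { reinforce(Trigger::PowerStateChange, /*bus_is_saturated=*/false); }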

    SELF-REGULATING POWER MANAGEMENT FOR A NEURAL NETWORK SYSTEM

    Publication Number: US20220229712A1

    Publication Date: 2022-07-21

    Application Number: US17712380

    Application Date: 2022-04-04

    Abstract: A neural network runs a known input data set using an error-free power setting and using an error-prone power setting. The differences in the outputs of the neural network under the two power settings determine a high-level error rate associated with the output of the neural network using the error-prone power setting. If the high-level error rate is excessive, the error-prone power setting is adjusted to reduce errors by changing the voltage and/or clock frequency utilized by the neural network system. If the high-level error rate is within bounds, the error-prone power setting can remain, allowing the neural network to operate with an acceptable error tolerance and improved efficiency. The error tolerance can be specified by the neural network application.
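
    A short C++ sketch of the compare-and-adjust loop described above. The error threshold, the voltage/frequency step sizes, and the synthetic output vectors are assumptions; only the structure (compare outputs from the two power settings, then back off the error-prone setting if the error rate is excessive) follows the abstract.

        #include <cmath>
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        struct PowerSetting { double voltage_v; double freq_mhz; };

        // Fraction of network outputs that differ (beyond a tolerance) between the
        // error-free power setting and the error-prone power setting.
        double high_level_error_rate(const std::vector<double>& safe_out,
                                     const std::vector<double>& risky_out,
                                     double tol) {
            std::size_t errors = 0;
            for (std::size_t i = 0; i < safe_out.size(); ++i)
                if (std::fabs(safe_out[i] - risky_out[i]) > tol) ++errors;
            return static_cast<double>(errors) / safe_out.size();
        }

        // If the observed error rate exceeds the application's tolerance, back off
        // the error-prone voltage/frequency point; otherwise keep it and bank the
        // efficiency gain.
        void self_regulate(PowerSetting& risky, double error_rate, double budget) {
            if (error_rate > budget) {
                risky.voltage_v += 0.025;  // illustrative step sizes
                risky.freq_mhz  -= 50.0;
            }
        }

        int main() {
            std::vector<double> safe  = {0.91, 0.04, 0.05};   // known input, safe power
            std::vector<double> risky = {0.88, 0.07, 0.05};   // same input, lower power
            PowerSetting ps{0.70, 1400.0};
            self_regulate(ps, high_level_error_rate(safe, risky, 0.05), 0.10);
            std::printf("voltage=%.3f V, frequency=%.0f MHz\n", ps.voltage_v, ps.freq_mhz);
        }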

    Instructions for performing multi-line memory accesses

    Publication Number: US11023410B2

    Publication Date: 2021-06-01

    Application Number: US16127607

    Application Date: 2018-09-11

    Abstract: A system is described that performs memory access operations. The system includes a processor in a first node, a memory in a second node, a communication interconnect coupled to the processor and the memory, and an interconnect controller in the first node coupled between the processor and the communication interconnect. Upon executing a multi-line memory access instruction, the processor prepares a memory access operation for accessing, in the memory, a block of data including at least some of each of at least two lines of data. The processor then causes the interconnect controller to use a single remote direct memory access memory transfer to perform the memory access operation for the block of data via the communication interconnect.
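
    An illustrative C++ sketch of the block computation behind a multi-line access. rdma_transfer is a stand-in for the interconnect controller's single remote DMA operation, not a real API; the point shown is that one transfer covers every line the requested range touches.

        #include <cstddef>
        #include <cstdint>
        #include <cstdio>
        #include <cstring>
        #include <vector>

        constexpr std::size_t kLineBytes = 64;  // assumed line size

        // Stand-in for one remote direct memory access transfer across the
        // communication interconnect (locally modeled as a memcpy).
        void rdma_transfer(void* dst, const void* src, std::size_t bytes) {
            std::memcpy(dst, src, bytes);
        }

        // A multi-line read: compute a block that includes at least some of each
        // line the request touches, then move it in a single transfer rather than
        // one interconnect transaction per line.
        void multi_line_read(void* local_buf, const uint8_t* remote_base,
                             std::size_t offset, std::size_t length) {
            std::size_t first_line = offset / kLineBytes;
            std::size_t last_line  = (offset + length - 1) / kLineBytes;
            rdma_transfer(local_buf,
                          remote_base + first_line * kLineBytes,
                          (last_line - first_line + 1) * kLineBytes);
        }

        int main() {
            std::vector<uint8_t> remote(4096, 0xAB), local(4096, 0);
            multi_line_read(local.data(), remote.data(), /*offset=*/100, /*length=*/200);
            std::printf("first fetched byte: %#x\n", local[0]);
        }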

    Method and apparatus for memory vulnerability prediction

    Publication Number: US10684902B2

    Publication Date: 2020-06-16

    Application Number: US15662524

    Application Date: 2017-07-28

    Abstract: Described herein are a method and apparatus for memory vulnerability prediction. A memory vulnerability predictor predicts the reliability of a memory region when it is first accessed, based on past program history. The memory vulnerability predictor uses a table to store reliability predictions and to predict the reliability needs of a new memory region. A memory management module uses the reliability information to make decisions, such as guiding memory placement policies in a heterogeneous memory system.
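
    A hypothetical C++ sketch of such a predictor. Indexing the table by the program counter of the allocating instruction and the two-level reliability scale are assumptions for illustration; the abstract only states that a table of reliability predictions, built from past program history, is consulted when a region is first accessed.

        #include <cstdint>
        #include <cstdio>
        #include <unordered_map>

        enum class Reliability { Low, High };

        struct VulnerabilityPredictor {
            // Past program history: how vulnerable data in regions created at a
            // given program point turned out to be.
            std::unordered_map<uint64_t, Reliability> table;

            // Predict the reliability needs of a region on its first access;
            // default to the conservative prediction when untrained.
            Reliability predict(uint64_t alloc_pc) const {
                auto it = table.find(alloc_pc);
                return it == table.end() ? Reliability::High : it->second;
            }

            // Update the table once the region's actual behaviour is known.
            void train(uint64_t alloc_pc, Reliability seen) { table[alloc_pc] = seen; }
        };

        // A memory management module might use the prediction to guide placement
        // in a heterogeneous memory system (e.g. ECC-protected vs. plain DRAM).
        const char* place(Reliability r) {
            return r == Reliability::High ? "ECC-protected memory"
                                          : "unprotected fast memory";
        }

        int main() {
            VulnerabilityPredictor p;
            p.train(0x401a2c, Reliability::Low);
            std::printf("region allocated at PC 0x401a2c -> %s\n",
                        place(p.predict(0x401a2c)));
        }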

    High-performance on-module caching architectures for non-volatile dual in-line memory module (NVDIMM)

    Publication Number: US10672474B2

    Publication Date: 2020-06-02

    Application Number: US16533278

    Application Date: 2019-08-06

    Abstract: A high-performance on-module caching architecture for hybrid memory modules is provided. A hybrid memory module includes a cache controller, a first volatile memory coupled to the cache controller, a first multiplexing data buffer coupled to the first volatile memory and the cache controller, and a first non-volatile memory coupled to the first multiplexing data buffer and the cache controller, wherein the first multiplexing data buffer multiplexes data between the first volatile memory and the first non-volatile memory and wherein the cache controller enables a tag checking operation to occur in parallel with a data movement operation. The hybrid memory module includes a volatile memory tag unit coupled to the cache controller, wherein the volatile memory tag unit includes a line connection that allows the cache controller to store a plurality of tags in the volatile memory tag unit and retrieve the plurality of tags from the volatile memory tag unit.
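
    A minimal sketch of the parallel tag-check idea, assuming std::async stands in for the cache controller overlapping its tag lookup with speculative data movement from the volatile memory; the hardware interfaces named in the abstract (multiplexing data buffer, volatile memory tag unit) are not modeled.

        #include <cstddef>
        #include <cstdint>
        #include <cstdio>
        #include <functional>
        #include <future>
        #include <vector>

        struct TagEntry { uint64_t tag; bool valid; };

        bool tag_check(const std::vector<TagEntry>& tags, std::size_t set, uint64_t tag) {
            return tags[set].valid && tags[set].tag == tag;
        }

        // Speculatively read the cached line from the volatile memory (DRAM) while
        // the tag check runs; the value is only used if the tag check hits.
        uint64_t speculative_dram_read(const std::vector<uint64_t>& dram, std::size_t set) {
            return dram[set];
        }

        int main() {
            std::vector<TagEntry> tags = {{0x12, true}, {0x34, false}};
            std::vector<uint64_t> dram = {0xDEADBEEF, 0};
            std::size_t set = 0;
            uint64_t want = 0x12;

            // Launch the tag check and the data movement in parallel.
            auto hit_f  = std::async(std::launch::async, tag_check,
                                     std::cref(tags), set, want);
            auto data_f = std::async(std::launch::async, speculative_dram_read,
                                     std::cref(dram), set);

            uint64_t data = data_f.get();
            if (hit_f.get())
                std::printf("hit: %#llx served from the DRAM cache\n",
                            static_cast<unsigned long long>(data));
            else
                std::printf("miss: fall back to the non-volatile memory\n");
        }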

    Dynamic cache bypassing

    Publication Number: US10599578B2

    Publication Date: 2020-03-24

    Application Number: US15377537

    Application Date: 2016-12-13

    Abstract: A processing system fills a memory access request for data from a processor core by bypassing a cache when a write congestion condition is detected, and when transferring the data to the cache would cause eviction of a dirty cache line. The cache is bypassed by transferring the requested data to the processor core or to a different cache. Accordingly, the processing system can temporarily bypass the cache storing the dirty cache line when filling a memory access request, thereby avoiding the eviction and write back to main memory of a dirty cache line when a write congestion condition exists.
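
    A sketch of the fill decision under stated assumptions: the write-congestion test (a queue-depth threshold) and the victim lookup are placeholders, while the bypass condition itself, congestion plus a dirty victim, follows the abstract.

        #include <cstddef>
        #include <cstdint>
        #include <cstdio>

        struct Line { uint64_t addr; bool dirty; };

        struct Cache {
            std::size_t pending_writebacks = 0;
            static constexpr std::size_t kCongestionThreshold = 8;

            // Write congestion: too many write-backs already queued toward memory.
            bool write_congested() const {
                return pending_writebacks >= kCongestionThreshold;
            }

            // The line that would be evicted if this fill were inserted here.
            Line victim(uint64_t /*fill_addr*/) const { return {0x1000, /*dirty=*/true}; }
        };

        enum class FillTarget { ThisCache, Bypass };

        // Bypass this cache only when both conditions hold, so a dirty victim is
        // not written back to main memory while writes are congested; the fill is
        // sent to the processor core or to a different cache instead.
        FillTarget choose_fill_target(const Cache& c, uint64_t fill_addr) {
            if (c.write_congested() && c.victim(fill_addr).dirty)
                return FillTarget::Bypass;
            return FillTarget::ThisCache;
        }

        int main() {
            Cache c;
            c.pending_writebacks = 9;
            std::printf("fill goes to %s\n",
                        choose_fill_target(c, 0x2000) == FillTarget::ThisCache
                            ? "this cache" : "the core (bypass)");
        }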

    Nondeterministic memory access requests to non-volatile memory

    Publication Number: US10482043B2

    Publication Date: 2019-11-19

    Application Number: US15663403

    Application Date: 2017-07-28

    Abstract: A memory module includes a memory, a cache to cache copies of information stored in the memory, and a controller. The controller is configured to access first data from the memory or the cache in response to receiving a read request from a processor. The controller is also configured to transmit a first signal a first nondeterministic time interval after receiving the read request. The first signal indicates that the first data is available. The controller is further configured to transmit a second signal a first deterministic time interval after receiving a first transmit request from the processor in response to the first signal. The second signal includes the first data. The memory module also includes a buffer to store a write request until completion and a counter that is incremented in response to receiving the write request and decremented in response to completing the write request.
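
    A hedged sketch of the write-buffer counter and the two-signal read handshake; the signal names used here (READ_READY, SEND) are invented for illustration, and only the increment/decrement behaviour of the counter is modeled in code.

        #include <cstdint>
        #include <cstdio>
        #include <deque>

        struct WriteRequest { uint64_t addr; uint64_t data; };

        struct ModuleController {
            // Read path (not modeled): data becomes available a nondeterministic
            // time after the read request, at which point a READ_READY signal is
            // sent; the data itself follows a deterministic time after the
            // processor's SEND (transmit) request.
            std::deque<WriteRequest> write_buffer;  // holds writes until completion
            int outstanding_writes = 0;             // the counter from the abstract

            void accept_write(const WriteRequest& w) {
                write_buffer.push_back(w);
                ++outstanding_writes;               // incremented on receipt
            }
            void complete_one_write() {
                if (!write_buffer.empty()) {
                    write_buffer.pop_front();
                    --outstanding_writes;           // decremented on completion
                }
            }
        };

        int main() {
            ModuleController mc;
            mc.accept_write({0x40, 7});
            mc.accept_write({0x80, 9});
            mc.complete_one_write();
            std::printf("outstanding writes: %d\n", mc.outstanding_writes);
        }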

    SELF-REGULATING POWER MANAGEMENT FOR A NEURAL NETWORK SYSTEM

    Publication Number: US20190235940A1

    Publication Date: 2019-08-01

    Application Number: US15884638

    Application Date: 2018-01-31

    CPC classification number: G06F11/076 G06K9/03 G06N3/02 G06N5/04 G06N20/00

    Abstract: A neural network runs a known input data set using an error-free power setting and using an error-prone power setting. The differences in the outputs of the neural network under the two power settings determine a high-level error rate associated with the output of the neural network using the error-prone power setting. If the high-level error rate is excessive, the error-prone power setting is adjusted to reduce errors by changing the voltage and/or clock frequency utilized by the neural network system. If the high-level error rate is within bounds, the error-prone power setting can remain, allowing the neural network to operate with an acceptable error tolerance and improved efficiency. The error tolerance can be specified by the neural network application.
