INTER-NODE MESSAGING CONTROLLER
    Invention Publication

    Publication No.: EP4020226A1

    Publication Date: 2022-06-29

    Application No.: EP21195477.1

    Application Date: 2021-09-08

    Applicant: Intel Corporation

    Abstract: A processor package comprises a first core, a local cache in the first core, and an inter-node messaging controller (INMC) in the first core. The INMC is to receive an inter-node message from a sender thread executing on the first core, wherein the message is directed to a receiver thread executing on a second core. In response, the INMC is to store a payload from the inter-node message in a local message queue in the local cache of the first core. After storing the payload, the INMC is to use a remote atomic operation to reserve a location at a tail of a shared message queue in a local cache of the second core. After reserving the location, the INMC is to use an inter-node-put operation to write the payload directly to the local cache of the second core. Other embodiments are described and claimed.
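The three-step send path in the abstract (stage locally, reserve a tail slot atomically, then put the payload directly into the receiver's queue) can be sketched as a software model. All class and function names below are illustrative assumptions, not Intel's actual interfaces; a lock stands in for the hardware remote atomic operation.

```python
import threading

class SharedQueue:
    """Receiver-side message queue, standing in for the queue held
    in the second core's local cache."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.tail = 0
        self._lock = threading.Lock()  # models the remote atomic operation

    def reserve_tail(self):
        # Atomic fetch-and-increment: reserve exactly one slot at the tail.
        with self._lock:
            slot = self.tail
            self.tail += 1
            return slot

def send_message(local_queue, shared_queue, payload):
    # 1. Stage the payload in the sender core's local message queue.
    local_queue.append(payload)
    # 2. Remote atomic operation reserves a location at the shared tail.
    slot = shared_queue.reserve_tail()
    # 3. Inter-node put writes the payload directly into the reserved slot.
    shared_queue.slots[slot] = payload
    return slot

q = SharedQueue(capacity=8)
staging = []
send_message(staging, q, "hello")
send_message(staging, q, "world")
print(q.slots[:q.tail])  # -> ['hello', 'world']
```

Reserving the slot atomically before the put is what lets many sender cores append concurrently without overwriting each other's slots.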

    APPARATUSES, METHODS, AND SYSTEMS TO ACCELERATE STORE PROCESSING

    Publication No.: EP3719655A1

    Publication Date: 2020-10-07

    Application No.: EP20159442.1

    Application Date: 2020-02-26

    Applicant: Intel Corporation

    Inventors: Pham, Binh Dan, Chen

    Abstract: Systems, methods, and apparatuses relating to circuitry to accelerate store processing are described. In one embodiment, a processor includes a (e.g., L1) cache, a fill buffer, a store buffer, and a cache controller to allocate a first entry of a plurality of entries in the fill buffer to store a first storage request when the first storage request misses in the cache, send a first request for ownership to another cache corresponding to the first storage request, detect a hit in the cache for a second storage request, update a globally observable buffer to indicate the first entry in the fill buffer for the first storage request is earlier in program order than the second storage request in the store buffer, allocate, before the second storage request is removed from the store buffer, a second entry of the plurality of entries in the fill buffer to store a third storage request when the third storage request misses in the cache, send a second request for ownership to another cache corresponding to the third storage request, and update the globally observable buffer to indicate the second entry in the fill buffer for the third storage request is later in program order than the second storage request in the store buffer.
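The key idea above is that a second fill-buffer entry can be allocated for a later miss before an intervening hit has drained from the store buffer, with a globally observable buffer recording program order across the two structures. A toy software model, with all names assumed for illustration:

```python
class StoreUnit:
    """Toy model of the store path: hits go to the store buffer, misses
    allocate fill-buffer entries (triggering a request for ownership),
    and a globally observable buffer (gob) records program order across
    both structures. Names are illustrative, not Intel's."""
    def __init__(self):
        self.seq = 0
        self.store_buffer = []   # (seq, addr): hits waiting to retire
        self.fill_buffer = []    # (seq, addr): misses awaiting ownership
        self.gob = []            # program-order log: ('fill'|'store', seq)

    def store(self, addr, cache):
        self.seq += 1
        if addr in cache:
            self.store_buffer.append((self.seq, addr))
            self.gob.append(("store", self.seq))
        else:
            # Allocate a fill-buffer entry; a request for ownership
            # would be sent to the other cache here.
            self.fill_buffer.append((self.seq, addr))
            self.gob.append(("fill", self.seq))
        return self.seq

cache = {0xA0: b"line"}
u = StoreUnit()
u.store(0x40, cache)  # miss -> first fill-buffer entry
u.store(0xA0, cache)  # hit  -> store buffer
u.store(0x80, cache)  # miss -> second fill entry, before the hit retires
print(u.gob)  # -> [('fill', 1), ('store', 2), ('fill', 3)]
```

The `gob` log is what lets the second miss proceed early while still making the stores globally observable in program order.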

    IMPLEMENTING COHERENCY WITH REFLECTIVE MEMORY

    Publication No.: EP2979192B1

    Publication Date: 2018-05-30

    Application No.: EP13879778.2

    Application Date: 2013-03-28

    IPC Classification: G06F12/0804 G06F12/0837

    Abstract: Techniques for updating data in a reflective memory region of a first memory device are described herein. In one example, a method for updating data in a reflective memory region of a first memory device includes receiving an indication that data is to be flushed from a cache device to the first memory device. The method also includes detecting that a memory address corresponding to the data is within the reflective memory region of the first memory device and sending data from the cache device to the first memory device with a flush operation. Additionally, the method includes determining that the data received by the first memory device is modified data. Furthermore, the method includes sending the modified data to a second memory device in a second computing system.
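The flush-and-mirror flow above reduces to a simple check: if the flushed line's address falls in the reflective region and the data is modified, forward it to the second system's memory. A minimal sketch, with the region bounds and function names assumed for illustration:

```python
# Assumed bounds of the reflective memory region (illustrative only).
REFLECTIVE_BASE, REFLECTIVE_LIMIT = 0x1000, 0x2000

def flush(addr, data, modified, first_memory, second_memory):
    """Flush a line from the cache device to the first memory device,
    mirroring modified data in the reflective region to a second
    memory device in a second computing system."""
    first_memory[addr] = data
    if REFLECTIVE_BASE <= addr < REFLECTIVE_LIMIT and modified:
        # Forward the modified data to the second computing system.
        second_memory[addr] = data

m1, m2 = {}, {}
flush(0x1800, b"dirty", True, m1, m2)   # inside region  -> mirrored
flush(0x3000, b"local", True, m1, m2)   # outside region -> not mirrored
print(0x1800 in m2, 0x3000 in m2)  # -> True False
```

Only modified data inside the region crosses to the second system, so unrelated flush traffic never generates inter-system writes.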

    METHODS FOR BIAS MODE MANAGEMENT IN MEMORY SYSTEMS

    Publication No.: EP4109282A1

    Publication Date: 2022-12-28

    Application No.: EP22179209.6

    Application Date: 2022-06-15

    IPC Classification: G06F12/0837 G06F12/02

    Abstract: A method for managing a memory system may include monitoring one or more accesses of a page of memory, determining, based on the monitoring, an access pattern of the page of memory, and selecting, based on the access pattern, a coherency bias for the page of memory. The monitoring may include maintaining an indication of the one or more accesses. The determining may include comparing the indication to a threshold. Maintaining the indication may include changing the indication in a first manner based on an access of the page of memory by a first apparatus. Maintaining the indication may include changing the indication in a second manner based on an access of the page of memory by a second apparatus. The first manner may counteract the second manner.
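The counteracting "first manner" and "second manner" in the abstract suggest a per-page saturating counter: one apparatus's accesses push the indication up, the other's push it down, and comparison to a threshold selects the bias. A minimal sketch under that assumption; the class name, threshold value, and "host"/"device" labels are all illustrative:

```python
THRESHOLD = 4  # assumed tuning value, not from the source

class PageBiasTracker:
    """Per-page access indication: accesses by the first apparatus
    raise it, accesses by the second apparatus lower it (each manner
    counteracts the other); comparison to a threshold selects the
    coherency bias for the page."""
    def __init__(self):
        self.indication = 0

    def first_apparatus_access(self):   # e.g. the host
        self.indication += 1

    def second_apparatus_access(self):  # e.g. the device
        self.indication -= 1

    def select_bias(self):
        return "host" if self.indication >= THRESHOLD else "device"

t = PageBiasTracker()
for _ in range(6):
    t.first_apparatus_access()
t.second_apparatus_access()
print(t.indication, t.select_bias())  # -> 5 host
```

Because the two manners cancel, a page touched mostly by one side drifts decisively toward that side's bias instead of flapping on every access.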

    ADAPTIVE REMOTE ATOMICS
    Invention Publication

    Publication No.: EP4020225A1

    Publication Date: 2022-06-29

    Application No.: EP21198178.2

    Application Date: 2021-09-22

    Applicant: Intel Corporation

    Abstract: Disclosed embodiments relate to atomic memory operations. In one example, an apparatus includes multiple processor cores, a cache hierarchy, a local execution unit, a remote execution unit, and an adaptive remote atomic operation unit. The cache hierarchy includes a local cache at a first level and a shared cache at a second level. The local execution unit is to perform an atomic operation at the first level if the local cache is storing a cache line including data for the atomic operation. The remote execution unit is to perform the atomic operation at the second level. The adaptive remote atomic operation unit is to determine whether to perform the atomic operation at the first level or at the second level and whether to copy the cache line from the shared cache to the local cache.
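The adaptive decision described above can be modeled as a three-way choice: execute locally when the line is already cached, otherwise either execute remotely at the shared level or first copy the line up and execute locally. The function name and the reuse-count heuristic below are assumptions for illustration; the patent does not specify the policy.

```python
def execute_atomic(addr, delta, local_cache, shared_cache, reuse_count):
    """Perform an atomic add at the level an adaptive unit would choose:
    - local execution unit, if the local cache holds the line;
    - copy the line up first, if the line looks hot (assumed heuristic);
    - remote execution unit at the shared cache, otherwise."""
    if addr in local_cache:
        local_cache[addr] += delta          # local execution unit
        return "local"
    if reuse_count >= 2:                    # assumed hotness threshold
        local_cache[addr] = shared_cache.pop(addr)
        local_cache[addr] += delta          # execute locally after copy
        return "copied-then-local"
    shared_cache[addr] += delta             # remote execution unit
    return "remote"

local, shared = {}, {0x10: 0, 0x20: 0}
print(execute_atomic(0x10, 1, local, shared, reuse_count=0))  # -> remote
print(execute_atomic(0x20, 1, local, shared, reuse_count=3))  # -> copied-then-local
print(execute_atomic(0x20, 1, local, shared, reuse_count=3))  # -> local
```

Executing cold, contended lines remotely avoids bouncing cache lines between cores, while hot lines earn a local copy so repeated atomics stay cheap.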

    OBJECT MEMORY DATA FLOW INSTRUCTION EXECUTION

    Publication No.: EP4012548A1

    Publication Date: 2022-06-15

    Application No.: EP22155662.4

    Application Date: 2016-01-20

    Applicant: Ultrata LLC

    Abstract: Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. More specifically, embodiments of the present invention are directed to an instruction set of an object memory fabric. This object memory fabric instruction set can be used to provide a unique instruction model based on triggers defined in metadata of the memory objects. This model represents a dynamic dataflow method of execution in which processes are performed based on actual dependencies of the memory objects. This provides a high degree of memory and execution parallelism, which in turn provides tolerance of variations in access delays between memory objects. In this model, sequences of instructions are executed and managed based on data access. These sequences can be of arbitrary length, but short sequences are more efficient and provide greater parallelism.
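The trigger-driven dataflow model above (instruction sequences fire when the memory objects they depend on are written) can be sketched as a toy event graph. This is a loose software analogy under assumed names, not Ultrata's actual instruction set:

```python
class ObjectMemory:
    """Toy dataflow model: each memory object may carry triggers in its
    metadata; writing the object runs the triggered instruction
    sequences, so execution follows actual data dependencies."""
    def __init__(self):
        self.objects = {}
        self.triggers = {}  # object name -> list of triggered sequences

    def on_write(self, name, sequence):
        # Attach a trigger (modeled as a callback) to an object's metadata.
        self.triggers.setdefault(name, []).append(sequence)

    def write(self, name, value):
        self.objects[name] = value
        # Fire every sequence whose dependency just became available.
        for sequence in self.triggers.get(name, []):
            sequence(self, value)

mem = ObjectMemory()
mem.on_write("a", lambda m, v: m.write("b", v * 2))  # b depends on a
mem.on_write("b", lambda m, v: m.write("c", v + 1))  # c depends on b
mem.write("a", 10)
print(mem.objects)  # -> {'a': 10, 'b': 20, 'c': 21}
```

Nothing runs on a program counter here: writing `a` is what causes `b` and then `c` to be computed, mirroring the abstract's claim that sequences are executed and managed based on data access.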

    IMPLEMENTING COHERENCY WITH REFLECTIVE MEMORY
    Invention Publication (Granted)

    Publication No.: EP2979192A4

    Publication Date: 2016-11-16

    Application No.: EP13879778

    Application Date: 2013-03-28

    IPC Classification: G06F12/0804 G06F12/0837

    Abstract: Techniques for updating data in a reflective memory region of a first memory device are described herein. In one example, a method for updating data in a reflective memory region of a first memory device includes receiving an indication that data is to be flushed from a cache device to the first memory device. The method also includes detecting that a memory address corresponding to the data is within the reflective memory region of the first memory device and sending data from the cache device to the first memory device with a flush operation. Additionally, the method includes determining that the data received by the first memory device is modified data. Furthermore, the method includes sending the modified data to a second memory device in a second computing system.