LIGHT-WEIGHT CACHE COHERENCE FOR DATA PROCESSORS WITH LIMITED DATA SHARING

    Publication No.: US20180074958A1

    Publication Date: 2018-03-15

    Application No.: US15264804

    Filing Date: 2016-09-14

    Abstract: A data processing system includes a plurality of processors, local memories associated with a corresponding processor, and at least one inter-processor link. In response to a first processor performing a load or store operation on an address of a corresponding local memory that is not currently in the local cache, a local cache allocates a first cache line and encodes a local state with the first cache line. In response to a load operation from an address of a remote memory that is not currently in the local cache, the local cache allocates a second cache line and encodes a remote state with the second cache line. The first processor performs subsequent loads and stores on the first cache line in the local cache in response to the local state, and subsequent loads from the second cache line in the local cache in response to the remote state.
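    The coherence scheme the abstract describes can be sketched as a cache that tags each allocated line as LOCAL or REMOTE based on the address, permitting both loads and stores to LOCAL lines but only loads to REMOTE lines. This is a minimal illustrative model, not the patented implementation; all class and method names (`Cache`, `State`, `load`, `store`) are assumptions for the sake of the sketch.

    ```python
    from enum import Enum

    class State(Enum):
        LOCAL = "local"    # line backed by this processor's own local memory
        REMOTE = "remote"  # line loaded over the inter-processor link

    class Cache:
        def __init__(self, local_range):
            self.local_range = local_range  # (start, end) of local memory addresses
            self.lines = {}                 # address -> [state, value]

        def is_local(self, addr):
            start, end = self.local_range
            return start <= addr < end

        def load(self, addr, memory):
            # On a miss, allocate a line and encode its state from the address.
            if addr not in self.lines:
                state = State.LOCAL if self.is_local(addr) else State.REMOTE
                self.lines[addr] = [state, memory[addr]]
            return self.lines[addr][1]

        def store(self, addr, value):
            # Stores are only performed on lines in the LOCAL state.
            if addr not in self.lines:
                if not self.is_local(addr):
                    raise PermissionError("store to remote address not modeled")
                self.lines[addr] = [State.LOCAL, value]
            elif self.lines[addr][0] is State.REMOTE:
                raise PermissionError("store to REMOTE line not modeled")
            else:
                self.lines[addr][1] = value
    ```

    In this sketch, subsequent loads and stores to a LOCAL line hit in the cache with no coherence traffic, which is the "light-weight" property the title refers to: coherence state is reduced to two cases keyed on whether the backing memory is local.
    
    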

    Asynchronous cache flushing
    3.
    Invention Grant

    Publication No.: US10049044B2

    Publication Date: 2018-08-14

    Application No.: US15181415

    Filing Date: 2016-06-14

    Abstract: Proactive flush logic in a computing system is configured to perform a proactive flush operation to flush data from a first memory in a first computing device to a second memory in response to execution of a non-blocking flush instruction. Reactive flush logic in the computing system is configured to, in response to a memory request issued prior to completion of the proactive flush operation, interrupt the proactive flush operation and perform a reactive flush operation to flush requested data from the first memory to the second memory.
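    The interplay the abstract describes (a proactive background flush that a later memory request can interrupt, triggering a reactive flush of just the requested data) can be sketched as a simple drain loop. This is an illustrative model under assumed names (`drain`, the dict-based memories), not the claimed hardware logic.

    ```python
    def drain(dirty, backing, requests):
        """Flush dirty lines from a first memory (dirty) to a second (backing).

        dirty:    dict addr -> value, modified data awaiting flush
        backing:  dict, the second memory being flushed to
        requests: list of addresses issued before the flush completes
        """
        while dirty:
            if requests:
                # A memory request interrupts the proactive sweep:
                # reactively flush the requested data first.
                addr = requests.pop(0)
                if addr in dirty:
                    backing[addr] = dirty.pop(addr)
                continue
            # Otherwise continue the proactive flush, one line at a time.
            addr, val = next(iter(dirty.items()))
            backing[addr] = val
            del dirty[addr]
    ```

    The key behavior is that the requested line reaches the second memory ahead of the rest of the proactive sweep, so the requester never has to wait for the full non-blocking flush to finish.
    
    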

    Light-weight cache coherence for data processors with limited data sharing

    Publication No.: US10042762B2

    Publication Date: 2018-08-07

    Application No.: US15264804

    Filing Date: 2016-09-14

    Abstract: A data processing system includes a plurality of processors, local memories associated with a corresponding processor, and at least one inter-processor link. In response to a first processor performing a load or store operation on an address of a corresponding local memory that is not currently in the local cache, a local cache allocates a first cache line and encodes a local state with the first cache line. In response to a load operation from an address of a remote memory that is not currently in the local cache, the local cache allocates a second cache line and encodes a remote state with the second cache line. The first processor performs subsequent loads and stores on the first cache line in the local cache in response to the local state, and subsequent loads from the second cache line in the local cache in response to the remote state.
