DATA FAST PATH IN HETEROGENEOUS SOC
    Invention Application

    Publication Number: US20200097421A1

    Publication Date: 2020-03-26

    Application Number: US16200622

    Application Date: 2018-11-26

    Abstract: According to one general aspect, an apparatus may include a processor coupled with a memory controller via a first path and a second path. The first path may traverse a coherent interconnect that couples the memory controller with a plurality of processors, including the processor. The second path may bypass the coherent interconnect and may have a lower latency than the first path. The processor may be configured to send the memory controller a memory access request that includes a path request to employ either the first path or the second path. The apparatus may include the memory controller configured to fulfill the memory access request and, based at least in part upon the path request, send at least part of the results of the memory access to the processor via either the first path or the second path.
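The two-path scheme in this abstract can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class and field names, and the specific latency values, are assumptions; the abstract only states that the second path bypasses the coherent interconnect and has lower latency than the first.

```python
from dataclasses import dataclass
from enum import Enum

class Path(Enum):
    COHERENT = "first path (through coherent interconnect)"
    FAST = "second path (bypasses coherent interconnect)"

# Hypothetical latencies in cycles; only the ordering (FAST < COHERENT)
# is implied by the abstract.
LATENCY = {Path.COHERENT: 40, Path.FAST: 15}

@dataclass
class MemoryAccessRequest:
    address: int
    path_request: Path  # the request itself carries the desired return path

class MemoryController:
    def __init__(self, memory):
        self.memory = memory

    def fulfill(self, req: MemoryAccessRequest):
        """Fulfill the access and return the data over the requested path,
        modeled here as returning that path's latency alongside the data."""
        data = self.memory.get(req.address)
        return data, LATENCY[req.path_request]

mem = {0x1000: 42}
mc = MemoryController(mem)
data, latency = mc.fulfill(MemoryAccessRequest(0x1000, Path.FAST))
```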

    SYSTEM AND METHOD FOR EARLY DRAM PAGE-ACTIVATION

    Publication Number: US20200210337A1

    Publication Date: 2020-07-02

    Application Number: US16289650

    Application Date: 2019-02-28

    Abstract: A system and a method provide a memory-access technique that effectively parallelizes DRAM operations and coherency operations to reduce memory-access latency. The system may include a memory controller, an interconnect, and a processor. The interconnect may be coupled to the memory controller. The processor may be coupled to the memory controller through a first path and a second path, in which the first path runs through the interconnect and the second path bypasses the interconnect. The processor may be configured to send, substantially concurrently, a memory access request to the memory controller via the first path and a page-activation request or a hint request to the memory controller via the second path, so that the DRAM access operations appear to be masked, or hidden, by the coherency operations.
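The latency benefit of the early page activation described above can be illustrated with a simple timeline model. This is a sketch under assumed cycle counts (the abstract gives no numbers): the point is only that overlapping the DRAM activation with coherency resolution turns an additive cost into a max.

```python
# Assumed cycle costs for illustration only.
COHERENCY_CYCLES = 30   # coherency operations on the first path
ACTIVATE_CYCLES = 25    # DRAM page activation (row open)
CAS_CYCLES = 15         # column access after the page is open

def serial_access():
    """Without the hint: activation starts only after coherency resolves."""
    return COHERENCY_CYCLES + ACTIVATE_CYCLES + CAS_CYCLES

def early_activation_access():
    """With the hint sent via the second (bypass) path, page activation
    overlaps the coherency operations, so the activation is 'masked'."""
    return max(COHERENCY_CYCLES, ACTIVATE_CYCLES) + CAS_CYCLES
```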

    BYPASS PREDICTOR FOR AN EXCLUSIVE LAST-LEVEL CACHE

    Publication Number: US20200210347A1

    Publication Date: 2020-07-02

    Application Number: US16289645

    Application Date: 2019-02-28

    Abstract: A system and a method to allocate data to a first cache increment a first counter if a reuse indicator for the data indicates that the data is likely to be reused, and decrement the counter if the reuse indicator indicates that the data is not likely to be reused. A second counter is incremented upon eviction of the data from a second cache, which is a higher-level cache than the first cache. The data is allocated to the first cache if the value of the first counter is equal to or greater than a first predetermined threshold or the value of the second counter equals zero, and the data bypasses the first cache if the value of the first counter is less than the first predetermined threshold and the value of the second counter is not equal to zero.
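The counter logic in this abstract can be sketched directly. This is an illustrative model, not the claimed implementation: the class name, the threshold value, and the choice to saturate the reuse counter at zero are assumptions; the increment/decrement events and the allocate/bypass condition follow the abstract.

```python
class BypassPredictor:
    """Counter-based allocate/bypass decision for an exclusive last-level
    cache. Threshold value is an assumption for illustration."""

    def __init__(self, threshold=4):
        self.threshold = threshold
        self.reuse_counter = 0   # first counter: tracks reuse hints
        self.evict_counter = 0   # second counter: tracks upper-cache evictions

    def on_reuse_hint(self, likely_reused: bool):
        """Increment on a likely-reuse hint, decrement otherwise
        (saturating at zero, an assumption)."""
        if likely_reused:
            self.reuse_counter += 1
        elif self.reuse_counter > 0:
            self.reuse_counter -= 1

    def on_upper_cache_eviction(self):
        """Incremented upon eviction from the second (higher-level) cache."""
        self.evict_counter += 1

    def should_allocate(self) -> bool:
        """Allocate if the first counter meets the threshold or the second
        counter is zero; otherwise bypass the first cache."""
        return self.reuse_counter >= self.threshold or self.evict_counter == 0
```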

    BYPASS PREDICTOR FOR AN EXCLUSIVE LAST-LEVEL CACHE

    Publication Number: US20210374064A1

    Publication Date: 2021-12-02

    Application Number: US17402492

    Application Date: 2021-08-13

    Abstract: A system and a method to allocate data to a first cache increment a first counter if a reuse indicator for the data indicates that the data is likely to be reused, and decrement the counter if the reuse indicator indicates that the data is not likely to be reused. A second counter is incremented upon eviction of the data from a second cache, which is a higher-level cache than the first cache. The data is allocated to the first cache if the value of the first counter is equal to or greater than a first predetermined threshold or the value of the second counter equals zero, and the data bypasses the first cache if the value of the first counter is less than the first predetermined threshold and the value of the second counter is not equal to zero.

    SPECULATIVE DRAM READ, IN PARALLEL WITH CACHE LEVEL SEARCH, LEVERAGING INTERCONNECT DIRECTORY

    Publication Number: US20200301838A1

    Publication Date: 2020-09-24

    Application Number: US16424452

    Application Date: 2019-05-28

    Abstract: According to one general aspect, an apparatus may include a processor configured to issue a first request for a piece of data from a cache memory and a second request for the piece of data from a system memory. The apparatus may include the cache memory configured to temporarily store a subset of data, and a memory interconnect. The memory interconnect may be configured to receive the second request for the piece of data from the system memory, determine whether the piece of data is stored in the cache memory, and, if the piece of data is determined to be stored in the cache memory, cancel the second request for the piece of data from the system memory.
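The cancellation decision described above can be sketched as a directory lookup at the interconnect. This is a minimal illustrative model under assumptions: the directory is represented as a set of cached addresses and the return values are hypothetical; the abstract specifies only that the interconnect cancels the speculative system-memory read when the line is found to be cached.

```python
class MemoryInterconnect:
    """Sketch of the interconnect-directory check: a speculative DRAM read,
    issued in parallel with the cache-level search, is cancelled if the
    directory shows the line is already held in a cache."""

    def __init__(self, directory):
        self.directory = directory  # set of addresses present in cache memory

    def handle_speculative_read(self, address):
        if address in self.directory:
            # The cache hierarchy will supply the data; drop the DRAM read.
            return "cancelled"
        return "forwarded_to_dram"
```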
