    GLOBALLY ADDRESSABLE MEMORY FOR DEVICES LINKED TO HOSTS

    Publication number: US20190042455A1

    Publication date: 2019-02-07

    Application number: US16136036

    Filing date: 2018-09-19

    Abstract: Systems, methods, and devices can include ports comprising hardware to support a multi-lane link, wherein the multi-lane link comprises a first set of bundled lanes configured in a first direction and a second set of bundled lanes configured in a second direction opposite the first, and the first set of bundled lanes comprises the same number of lanes as the second set. Input/output (I/O) bridge logic implemented at least partially in hardware can receive, across the multi-lane link, a cache invalidation request on a port compliant with an I/O protocol. Memory controller logic implemented at least partially in hardware can invalidate a cache line based on receiving the cache invalidation request on the I/O protocol. The memory controller can transmit, across the multi-lane link, a memory invalidation response message on a port compliant with a device-attached memory access protocol.
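
    The flow this abstract describes (an invalidation arriving on an I/O protocol and being answered on a device-attached memory protocol) can be pictured with a small behavioral model. The C++ sketch below is a hypothetical illustration, not code from the patent: the types IoInvalidateRequest, MemInvalidateResponse, MemoryController, the bridge_forward function, and the single-map cache are all assumptions made for clarity.

        // Hypothetical behavioral sketch of the described flow; names and types
        // are illustrative assumptions, not the patent's implementation.
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        // Request arriving on the port compliant with the I/O protocol.
        struct IoInvalidateRequest { uint64_t cache_line_addr; };

        // Response sent back on the device-attached memory access protocol port.
        struct MemInvalidateResponse { uint64_t cache_line_addr; bool invalidated; };

        // Stand-in for the memory controller logic that owns the cached lines.
        class MemoryController {
        public:
            void cache_fill(uint64_t addr) { lines_[addr] = true; }

            // Invalidate the line named by the I/O-protocol request and build the
            // memory-protocol response that travels back across the multi-lane link.
            MemInvalidateResponse invalidate(const IoInvalidateRequest& req) {
                bool hit = lines_.erase(req.cache_line_addr) > 0;
                return MemInvalidateResponse{req.cache_line_addr, hit};
            }

        private:
            std::unordered_map<uint64_t, bool> lines_;  // addr -> valid
        };

        // Stand-in for the I/O bridge logic: forwards the request it received on
        // the I/O-protocol port to the memory controller.
        MemInvalidateResponse bridge_forward(MemoryController& mc,
                                             const IoInvalidateRequest& req) {
            return mc.invalidate(req);
        }

        int main() {
            MemoryController mc;
            mc.cache_fill(0x1000);
            MemInvalidateResponse rsp = bridge_forward(mc, IoInvalidateRequest{0x1000});
            std::cout << "line 0x" << std::hex << rsp.cache_line_addr
                      << (rsp.invalidated ? " invalidated\n" : " not cached\n");
        }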

    CHANGING CACHE OWNERSHIP IN CLUSTERED MULTIPROCESSOR

    Publication number: US20170255553A1

    Publication date: 2017-09-07

    Application number: US15602603

    Filing date: 2017-05-23

    CPC classification number: G06F12/084 G06F2212/2542 G06F2212/271

    Abstract: A chip multiprocessor may include a first cluster and a second cluster, each having multiple cores of a processor, multiple co-located cache slices, and a memory controller. The processor stores directory information in a memory to indicate cluster cache ownership of a first address space to the first cluster. In response to a request to change the cluster cache ownership of the first address space to a second address space of the second cluster, the processor provides a quiesce period during which it blocks new read or write requests to the first cluster and the second cluster, drains read or write requests already issued on the first cluster and the second cluster, and then removes the block on new read or write requests. The processor may also update the directory information to change the cluster cache ownership of the first address space to the second address space of the second cluster.
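
    As a rough illustration of the quiesce sequence described above (block, drain, update the directory, unblock), the following C++ sketch models clusters and a directory as plain structs; Cluster, ClusterDirectory, change_ownership, and the slice-id granularity are illustrative assumptions, not the patent's implementation.

        // Hypothetical sketch of the block/drain/remap/unblock sequence.
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        // Directory entry: which cluster currently owns the cached copies of an
        // address-space slice.
        struct ClusterDirectory {
            std::unordered_map<uint64_t, int> owner;  // slice id -> owning cluster
        };

        struct Cluster {
            bool blocked = false;
            int  inflight = 0;   // read/write requests already issued

            void block_new_requests()   { blocked = true; }
            void unblock_new_requests() { blocked = false; }
            void drain()                { inflight = 0; }  // wait out issued requests
        };

        // Quiesce both clusters, drain issued requests, move ownership in the
        // directory, then lift the block, mirroring the described sequence.
        void change_ownership(ClusterDirectory& dir, uint64_t slice,
                              Cluster& from, Cluster& to, int new_owner) {
            from.block_new_requests();
            to.block_new_requests();
            from.drain();
            to.drain();
            dir.owner[slice] = new_owner;
            from.unblock_new_requests();
            to.unblock_new_requests();
        }

        int main() {
            ClusterDirectory dir;
            Cluster c0, c1;
            dir.owner[0] = 0;                     // slice 0 owned by cluster 0
            change_ownership(dir, 0, c0, c1, 1);  // hand slice 0 to cluster 1
            std::cout << "slice 0 now owned by cluster " << dir.owner[0] << "\n";
        }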

    Method and apparatus for distributed snoop filtering

    Publication number: US09727475B2

    Publication date: 2017-08-08

    Application number: US14497740

    Filing date: 2014-09-26

    CPC classification number: G06F12/0875 G06F12/0831 G06F2212/452

    Abstract: An apparatus and method are described for distributed snoop filtering. For example, one embodiment of a processor comprises: a plurality of cores to execute instructions and process data; first snoop logic to track a first plurality of cache lines stored in a mid-level cache (“MLC”) accessible by one or more of the cores, the first snoop logic to allocate entries for cache lines stored in the MLC and to deallocate entries for cache lines evicted from the MLC, wherein at least some of the cache lines evicted from the MLC are retained in a level 1 (L1) cache; and second snoop logic to track a second plurality of cache lines stored in a non-inclusive last level cache (NI LLC), the second snoop logic to allocate entries in the NI LLC for cache lines evicted from the MLC and to deallocate entries for cache lines stored in the MLC, wherein the second snoop logic is to store and maintain a first set of core valid bits to identify cores containing copies of the cache lines stored in the NI LLC.
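
    The two snoop filters described above can be pictured as simple lookup structures: one tracking MLC-resident lines, and one tracking NI LLC entries with per-core valid bits identifying cores that may still hold copies. The C++ below is a hypothetical model; MlcSnoopFilter, NiLlcSnoopFilter, the eight-core count, and the event names are assumptions made for illustration.

        // Hypothetical sketch of the two-level snoop filtering structures.
        #include <bitset>
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>
        #include <unordered_set>

        constexpr int kCores = 8;  // assumed core count, for illustration only

        // First snoop logic: tracks lines currently held in the mid-level cache.
        struct MlcSnoopFilter {
            std::unordered_set<uint64_t> lines;
            void allocate(uint64_t addr)   { lines.insert(addr); }
            void deallocate(uint64_t addr) { lines.erase(addr); }
        };

        // Second snoop logic: tracks lines in the non-inclusive LLC together with
        // core-valid bits naming the cores that may still hold copies in their L1s.
        struct NiLlcSnoopFilter {
            std::unordered_map<uint64_t, std::bitset<kCores>> core_valid;

            // A line evicted from an MLC is allocated here; the evicting core may
            // retain an L1 copy, so its core-valid bit is set.
            void on_mlc_eviction(uint64_t addr, int core) {
                core_valid[addr].set(core);
            }
            // A line pulled back into an MLC no longer needs NI LLC tracking.
            void on_mlc_fill(uint64_t addr) { core_valid.erase(addr); }

            // Which cores must be snooped for this address?
            std::bitset<kCores> snoop_targets(uint64_t addr) const {
                auto it = core_valid.find(addr);
                return it == core_valid.end() ? std::bitset<kCores>{} : it->second;
            }
        };

        int main() {
            MlcSnoopFilter mlc;
            NiLlcSnoopFilter llc;
            mlc.allocate(0x40);
            mlc.deallocate(0x40);          // evicted from the MLC...
            llc.on_mlc_eviction(0x40, 3);  // ...but core 3 may retain an L1 copy
            std::cout << "snoop mask for 0x40: " << llc.snoop_targets(0x40) << "\n";
        }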

    Techniques to provide cache coherency based on cache type

    Publication number: US11016894B2

    Publication date: 2021-05-25

    Application number: US15670171

    Filing date: 2017-08-07

    Abstract: Techniques and apparatus to manage cache coherency for different types of cache memory are described. In one embodiment, an apparatus may include at least one processor, at least one cache memory, and logic, at least a portion of which is implemented in hardware, the logic to receive a memory operation request associated with the at least one cache memory, determine a cache status of the memory operation request, the cache status indicating one of a giant cache status or a small cache status, perform the memory operation request via a small cache coherence process responsive to the cache status being a small cache status, and perform the memory operation request via a giant cache coherence process responsive to the cache status being a giant cache status. Other embodiments are described and claimed.
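
    A minimal sketch of the dispatch the abstract describes, choosing between a small-cache and a giant-cache coherence process based on the cache status of a memory operation: the enum, struct, and function names in the C++ below are hypothetical, and the two coherence paths are placeholders rather than the patent's actual protocols.

        // Hypothetical dispatch on cache status; coherence paths are placeholders.
        #include <iostream>

        // The abstract distinguishes only two statuses; names are illustrative.
        enum class CacheStatus { Small, Giant };

        struct MemoryOp { unsigned long long addr; CacheStatus status; };

        void small_cache_coherence(const MemoryOp& op) {
            std::cout << "small-cache path for 0x" << std::hex << op.addr << "\n";
        }
        void giant_cache_coherence(const MemoryOp& op) {
            std::cout << "giant-cache path for 0x" << std::hex << op.addr << "\n";
        }

        // Determine the cache status of the request, then perform it via the
        // matching coherence process, as the abstract describes.
        void perform(const MemoryOp& op) {
            if (op.status == CacheStatus::Small) small_cache_coherence(op);
            else                                 giant_cache_coherence(op);
        }

        int main() {
            perform(MemoryOp{0x1000, CacheStatus::Small});
            perform(MemoryOp{0x2000, CacheStatus::Giant});
        }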

    HIGH BANDWIDTH LINK LAYER FOR COHERENT MESSAGES

    Publication number: US20190095363A1

    Publication date: 2019-03-28

    Application number: US16141729

    Filing date: 2018-09-25

    Abstract: Systems, methods, and devices can include link layer logic to identify, by a link layer device, first data received from a memory in a first protocol format; identify, by the link layer device, second data received from a cache in a second protocol format; multiplex, by the link layer device, a portion of the first data and a portion of the second data to produce multiplexed data; and generate, by the link layer device, a flow control unit (flit) that includes the multiplexed data.
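
    The multiplexing step can be illustrated with a small C++ sketch that packs a portion of memory-protocol data and a portion of cache-protocol data into one fixed-size flit; the 64-byte flit size, the even split, and the names Flit and build_flit are assumptions for illustration, not details from the patent.

        // Hypothetical sketch of multiplexing two protocol streams into one flit.
        #include <algorithm>
        #include <cstddef>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        constexpr std::size_t kFlitBytes = 64;  // assumed flit size

        struct Flit { std::vector<uint8_t> payload; };

        // Take a portion of the memory-protocol data and a portion of the
        // cache-protocol data and pack them into a single flit.
        Flit build_flit(const std::vector<uint8_t>& mem_data,
                        const std::vector<uint8_t>& cache_data) {
            Flit flit;
            flit.payload.reserve(kFlitBytes);
            std::size_t half = kFlitBytes / 2;
            flit.payload.insert(flit.payload.end(), mem_data.begin(),
                                mem_data.begin() + std::min(half, mem_data.size()));
            flit.payload.insert(flit.payload.end(), cache_data.begin(),
                                cache_data.begin() + std::min(half, cache_data.size()));
            flit.payload.resize(kFlitBytes, 0);  // pad to the fixed flit size
            return flit;
        }

        int main() {
            std::vector<uint8_t> mem(48, 0xAA);    // data arriving in the memory protocol
            std::vector<uint8_t> cache(48, 0x55);  // data arriving in the cache protocol
            Flit f = build_flit(mem, cache);
            std::cout << "flit payload bytes: " << f.payload.size() << "\n";
        }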

    Changing cache ownership in clustered multiprocessor

    Publication number: US09940238B2

    Publication date: 2018-04-10

    Application number: US15602603

    Filing date: 2017-05-23

    CPC classification number: G06F12/084 G06F2212/2542 G06F2212/271

    Abstract: A chip multiprocessor may include a first cluster and a second cluster, each having multiple cores of a processor, multiple co-located cache slices, and a memory controller. The processor stores directory information in a memory to indicate cluster cache ownership of a first address space to the first cluster. In response to a request to change the cluster cache ownership of the first address space to a second address space of the second cluster, the processor provides a quiesce period during which it blocks new read or write requests to the first cluster and the second cluster, drains read or write requests already issued on the first cluster and the second cluster, and then removes the block on new read or write requests. The processor may also update the directory information to change the cluster cache ownership of the first address space to the second address space of the second cluster.
