Cache allocation policy
    Invention Grant

    Publication Number: US11755477B2

    Publication Date: 2023-09-12

    Application Number: US17563675

    Application Date: 2021-12-28

    CPC classification number: G06F12/0802 G06F2212/604

    Abstract: A cache includes an upstream port, a downstream port, a cache memory, and a control circuit. The control circuit temporarily stores memory access requests received from the upstream port, and checks for dependencies for a new memory access request with older memory access requests temporarily stored therein. If one of the older memory access requests creates a false dependency with the new memory access request, the control circuit drops an allocation of a cache line to the cache memory for the older memory access request while continuing to process the new memory access request.
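
As a rough illustration of the mechanism this abstract describes, the sketch below models a control circuit that queues pending requests and drops an older request's cache-line allocation when it forms a false dependency (same cache set, different line) with a new request, rather than stalling the new request. All names and the set-indexing scheme are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: dropping a cache-line allocation on a false dependency.
from dataclasses import dataclass

CACHE_LINE = 64   # assumed line size in bytes
NUM_SETS = 1024   # assumed number of cache sets

def cache_set(addr):
    # Set index derived from the line address (illustrative mapping).
    return (addr // CACHE_LINE) % NUM_SETS

@dataclass
class Request:
    addr: int
    allocate: bool = True  # whether this request still allocates a cache line

class ControlCircuit:
    def __init__(self):
        self.pending = []  # temporarily stored older requests, oldest first

    def accept(self, new: Request):
        for older in self.pending:
            same_line = older.addr // CACHE_LINE == new.addr // CACHE_LINE
            same_set = cache_set(older.addr) == cache_set(new.addr)
            if same_set and not same_line:
                # False dependency: the requests conflict only on cache
                # index, not on data. Drop the older request's allocation
                # so the new request can continue without waiting.
                older.allocate = False
        self.pending.append(new)
        return new  # the new request continues to be processed
```

With this model, a request to a different line in the same set causes the older request to lose its allocation while both remain in flight.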

    LAST LEVEL CACHE ACCESS DURING NON-CSTATE SELF REFRESH

    Publication Number: US20230195644A1

    Publication Date: 2023-06-22

    Application Number: US17556617

    Application Date: 2021-12-20

    CPC classification number: G06F12/0897 G06F2212/60

    Abstract: A data processor includes a data fabric, a memory controller, a last level cache, and a traffic monitor. The data fabric is for routing requests between a plurality of requestors and a plurality of responders. The memory controller is for accessing a volatile memory. The last level cache is coupled between the memory controller and the data fabric. The traffic monitor is coupled to the last level cache and operable to monitor traffic between the last level cache and the memory controller, and based on detecting an idle condition in the monitored traffic, to cause the memory controller to command the volatile memory to enter self-refresh mode while the last level cache maintains an operational power state and responds to cache hits over the data fabric.
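
A minimal sketch of the traffic-monitor behavior described above: when traffic between the last level cache and the memory controller stays idle past a threshold, the monitor signals the memory controller to put DRAM into self-refresh, while the LLC stays powered and keeps serving hits (a hit generates no memory traffic, so self-refresh can persist; a miss wakes memory). Class names and the cycle-count heuristic are assumptions for illustration only.

```python
# Illustrative model: DRAM self-refresh gated by LLC-to-memory traffic.
class TrafficMonitor:
    def __init__(self, idle_threshold=100):
        self.idle_threshold = idle_threshold
        self.idle_cycles = 0
        self.self_refresh = False

    def tick(self, llc_to_mc_traffic: bool):
        if llc_to_mc_traffic:
            self.idle_cycles = 0
            self.self_refresh = False  # memory needed again; exit self-refresh
        else:
            self.idle_cycles += 1
            if self.idle_cycles >= self.idle_threshold:
                self.self_refresh = True  # command DRAM into self-refresh

class LastLevelCache:
    def __init__(self, monitor):
        self.monitor = monitor
        self.lines = {}  # tag -> data

    def read(self, tag):
        hit = tag in self.lines
        # A hit is answered from the LLC with no downstream traffic,
        # so DRAM may remain in self-refresh; a miss generates traffic.
        self.monitor.tick(llc_to_mc_traffic=not hit)
        return self.lines.get(tag)
```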

    BANDWIDTH MATCHED SCHEDULER
    Invention Application

    Publication Number: US20190319891A1

    Publication Date: 2019-10-17

    Application Number: US15951844

    Application Date: 2018-04-12

    Abstract: A computing system uses a memory for storing data, one or more clients for generating network traffic, and a communication fabric with network switches. The network switches include centralized storage structures, rather than separate input and output storage structures. The network switches store particular metadata corresponding to received packets in a single, centralized collapsing queue where the age of the packets corresponds to a queue entry position. The payload data of the packets is stored in a separate memory, so the relatively large amount of data is not shifted during the lifetime of the packet in the network switch. The network switches select sparse queue entries in the collapsing queue, deallocate the selected queue entries, and shift remaining allocated queue entries toward a first end of the queue with a delay proportional to the radix of the network switches.
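
A simplified model of the collapsing queue idea in this abstract: entry position encodes age (index 0 is oldest), packet payloads are kept in a separate store so only small metadata entries ever move, and after sparse entries are selected and deallocated, the survivors shift toward the front. This is a software sketch under assumed names; the patent describes hardware with timing tied to switch radix, which is not modeled here.

```python
# Illustrative collapsing queue: metadata shifts, payloads stay put.
class CollapsingQueue:
    def __init__(self):
        self.entries = []   # packet ids only; position reflects age order
        self.payloads = {}  # packet id -> payload, stored separately

    def enqueue(self, pkt_id, payload):
        self.payloads[pkt_id] = payload
        self.entries.append(pkt_id)  # newest entries at the tail

    def deallocate(self, selected):
        # Remove the selected (possibly sparse) entries, then collapse the
        # remaining entries toward the head, preserving relative age.
        chosen = set(selected)
        out = [(pid, self.payloads.pop(pid)) for pid in selected]
        self.entries = [pid for pid in self.entries if pid not in chosen]
        return out
```

Because only ids shift during the collapse, the per-packet payload is written once and read once regardless of how long the packet waits in the queue.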

    Direct mapping mode for associative cache

    Publication Number: US11422935B2

    Publication Date: 2022-08-23

    Application Number: US17033287

    Application Date: 2020-09-25

    Abstract: A method of controlling a cache is disclosed. The method comprises receiving a request to allocate a portion of memory to store data. The method also comprises directly mapping a portion of memory to an assigned contiguous portion of the cache memory when the request to allocate a portion of memory to store the data includes a cache residency request that the data continuously resides in cache memory. The method also comprises mapping the portion of memory to the cache memory using associative mapping when the request to allocate a portion of memory to store the data does not include a cache residency request that data continuously resides in the cache memory.
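
The allocation decision the abstract describes can be sketched as a simple branch: a request carrying a cache residency flag gets a contiguous, directly mapped slice of the cache; any other request falls back to ordinary associative mapping. The class, method, and bookkeeping below are hypothetical, assuming a bump-pointer reservation of the direct-mapped region.

```python
# Sketch: direct mapping for residency requests, associative otherwise.
class CacheController:
    def __init__(self, cache_size):
        self.cache_size = cache_size
        self.direct_regions = []  # (mem_base, size, cache_base) reservations
        self.next_direct = 0      # next free offset in the direct region

    def allocate(self, mem_base, size, residency_requested=False):
        if residency_requested and self.next_direct + size <= self.cache_size:
            # Direct mapping: pin a contiguous slice of the cache so the
            # data continuously resides there.
            cache_base = self.next_direct
            self.direct_regions.append((mem_base, size, cache_base))
            self.next_direct += size
            return ("direct", cache_base)
        # Associative mapping: no reservation; lines compete normally.
        return ("associative", None)
```

In this model, a residency request that no longer fits in the reserved region also degrades gracefully to associative mapping.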
