Enhanced virtual channel switching

    Publication Number: US11888751B2

    Publication Date: 2024-01-30

    Application Number: US17672481

    Filing Date: 2022-02-15

    Abstract: A system for facilitating enhanced virtual channel switching in a node of a distributed computing environment is provided. During operation, the system can allocate flow control credits for a first virtual channel to an upstream node in the distributed computing environment. The system can receive, via a message path comprising the upstream node, a message on the first virtual channel based on the allocated flow control credits. The system can then store the message in a queue associated with an input port and determine whether the message is a candidate for changing the first virtual channel at the node based on a mapping rule associated with the input port. If the message is a candidate, the system can associate the message with a second virtual channel indicated in the mapping rule in the queue. Subsequently, the system can send the message from the queue on the second virtual channel.
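
    The abstract describes credit-based receipt at an input port, a per-port mapping rule, and in-queue remapping to a second virtual channel. The C++ sketch below illustrates that flow under simplifying assumptions (one mapping rule per input port, a single credit counter per port); the names InputPort, MappingRule, and forward are illustrative and not taken from the patent.

        // A minimal sketch of virtual-channel switching at an input port, assuming
        // one mapping rule per port and a simple credit counter. Illustrative only.
        #include <cstdio>
        #include <queue>

        struct Message { int id; int vc; };             // payload omitted for brevity
        struct MappingRule { int from_vc; int to_vc; };

        class InputPort {
        public:
            InputPort(MappingRule rule, int credits) : rule_(rule), credits_(credits) {}

            // Accept a message from the upstream node if a flow-control credit is available.
            bool receive(Message m) {
                if (credits_ == 0) return false;        // no credit allocated upstream
                --credits_;
                // Candidate check: does the message's VC match the port's mapping rule?
                if (m.vc == rule_.from_vc) m.vc = rule_.to_vc;   // switch VC while queued
                queue_.push(m);
                return true;
            }

            // Send the oldest queued message on its (possibly remapped) VC.
            void forward() {
                if (queue_.empty()) return;
                Message m = queue_.front(); queue_.pop();
                ++credits_;                             // return the credit to the upstream node
                std::printf("msg %d sent on VC%d\n", m.id, m.vc);
            }

        private:
            MappingRule rule_;
            int credits_;                               // credits advertised upstream
            std::queue<Message> queue_;                 // input-port queue
        };

        int main() {
            InputPort port({/*from_vc=*/1, /*to_vc=*/2}, /*credits=*/4);
            port.receive({/*id=*/7, /*vc=*/1});         // arrives on VC1, remapped to VC2
            port.receive({/*id=*/8, /*vc=*/3});         // not a candidate, stays on VC3
            port.forward();
            port.forward();
            return 0;
        }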

    DIRTY CACHE LINE WRITE-BACK TRACKING

    Publication Number: US20220138110A1

    Publication Date: 2022-05-05

    Application Number: US17085321

    Filing Date: 2020-10-30

    Inventor: Frank R. Dropps

    Abstract: A cache system may include a cache to store a plurality of cache lines in a write-back mode; dirty cache line counter circuitry to store a count of dirty cache lines in the cache, increment the count when a new dirty cache line is added to the cache, and decrement the count when an old dirty cache line is written back from the cache; dirty cache line write-back tracking circuitry to store an ordering of the dirty cache lines in a write-back order; mapping circuitry to map the dirty cache lines into the ordering; and controller circuitry to use the mapping circuitry to identify an evicted dirty cache line in the ordering and remove the evicted dirty cache line from the ordering.
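
    One way to realize the counter, ordering, and mapping described above in software is a FIFO list of dirty lines plus a hash map from line address to its position in the list. The C++ sketch below assumes a simple FIFO write-back order; the structure and names (DirtyTracker, mark_dirty, remove_evicted) are illustrative stand-ins for the circuitry, not the patented design.

        // Minimal dirty-line write-back tracking, assuming FIFO write-back order.
        #include <cstdint>
        #include <cstdio>
        #include <iterator>
        #include <list>
        #include <unordered_map>

        class DirtyTracker {
        public:
            // A line becomes dirty: bump the count and append it to the write-back order.
            void mark_dirty(std::uint64_t addr) {
                if (pos_.count(addr)) return;               // already tracked
                order_.push_back(addr);
                pos_[addr] = std::prev(order_.end());       // "mapping circuitry": addr -> position
                ++dirty_count_;
            }

            // Write back the oldest dirty line and decrement the count.
            bool write_back_oldest(std::uint64_t* addr_out) {
                if (order_.empty()) return false;
                *addr_out = order_.front();
                pos_.erase(order_.front());
                order_.pop_front();
                --dirty_count_;
                return true;
            }

            // An evicted dirty line is located via the mapping and removed from the order.
            void remove_evicted(std::uint64_t addr) {
                auto it = pos_.find(addr);
                if (it == pos_.end()) return;
                order_.erase(it->second);
                pos_.erase(it);
                --dirty_count_;
            }

            unsigned dirty_count() const { return dirty_count_; }

        private:
            unsigned dirty_count_ = 0;                      // dirty cache line counter
            std::list<std::uint64_t> order_;                // write-back ordering
            std::unordered_map<std::uint64_t,
                               std::list<std::uint64_t>::iterator> pos_;
        };

        int main() {
            DirtyTracker t;
            t.mark_dirty(0x100); t.mark_dirty(0x140); t.mark_dirty(0x180);
            t.remove_evicted(0x140);                        // evicted line leaves the ordering
            std::uint64_t a;
            while (t.write_back_oldest(&a))
                std::printf("write back 0x%llx\n", (unsigned long long)a);
            std::printf("dirty lines remaining: %u\n", t.dirty_count());
            return 0;
        }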

    Dynamic bandwidth sharing on a fiber loop using silicon photonics

    Publication Number: US11184103B1

    Publication Date: 2021-11-23

    Application Number: US16882411

    Filing Date: 2020-05-22

    Abstract: A fiber loop includes a plurality of processors coupled to each other and a controller coupled to each of the plurality of processors. The controller is configured to: assign to each of the plurality of processors a number of wavelengths for interconnect communications between the plurality of processors; receive, from a first processor of the plurality of processors, a request for one or more additional wavelengths; determine whether an interconnect bandwidth utilization on the fiber loop is less than a threshold; and in response to determining that the interconnect bandwidth utilization on the fiber loop is less than the threshold, reassign, to the first processor, one or more wavelengths that are assigned to a second processor of the plurality of processors.
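
    A software analogue of the controller's decision is sketched below in C++, assuming wavelengths are tracked as simple per-processor counts, the current utilization is supplied by the caller, and the donor processor is chosen externally; the names and parameters (LoopController, request_wavelengths, threshold) are illustrative assumptions, not part of the patent.

        // Minimal sketch of threshold-gated wavelength reassignment on a fiber loop.
        #include <cstdio>
        #include <vector>

        class LoopController {
        public:
            LoopController(int processors, int wavelengths_each, double threshold)
                : assigned_(processors, wavelengths_each), threshold_(threshold) {}

            // Processor `requester` asks for `n` more wavelengths; they are taken from
            // `donor` only while loop utilization stays below the threshold.
            int request_wavelengths(int requester, int donor, int n, double utilization) {
                int granted = 0;
                while (granted < n && utilization < threshold_ && assigned_[donor] > 1) {
                    --assigned_[donor];                 // reassign one wavelength
                    ++assigned_[requester];
                    ++granted;
                }
                return granted;
            }

            int assigned(int processor) const { return assigned_[processor]; }

        private:
            std::vector<int> assigned_;                 // wavelengths currently held per processor
            double threshold_;                          // interconnect-utilization limit
        };

        int main() {
            LoopController ctl(/*processors=*/4, /*wavelengths_each=*/8, /*threshold=*/0.7);
            int got = ctl.request_wavelengths(/*requester=*/0, /*donor=*/2, /*n=*/3,
                                              /*utilization=*/0.4);
            std::printf("granted %d; proc0=%d proc2=%d\n", got, ctl.assigned(0), ctl.assigned(2));
            return 0;
        }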

    Ternary content addressable memory-enhanced cache coherency acceleration

    Publication Number: US11169921B2

    Publication Date: 2021-11-09

    Application Number: US16408346

    Filing Date: 2019-05-09

    Inventor: Frank R. Dropps

    Abstract: A system and method for cache coherency within multiprocessor environments is provided. Each node controller of a plurality of nodes within a multiprocessor system receives a cache coherency protocol request from local processor sockets and other node controller(s). A ternary content addressable memory (TCAM) accelerator in the node controller determines whether the cache coherency protocol request comprises a snoop request and, if so, searches the TCAM based on an address within the cache coherency protocol request. In response to detecting only one match between an entry of the TCAM and the received snoop request, a response is sent to the requesting local processor without having to access a coherency directory.
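
    The sketch below models the TCAM as a list of value/mask entries and answers a snoop directly only when exactly one entry matches, falling back to the coherency directory otherwise. It is a minimal C++ illustration of that match-count rule; the names (SnoopAccelerator, handle_snoop) and the stubbed directory path are assumptions.

        // Minimal TCAM-style snoop filter; bits where mask==0 are "don't care".
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        struct TcamEntry { std::uint64_t value; std::uint64_t mask; };

        class SnoopAccelerator {
        public:
            void add_entry(std::uint64_t value, std::uint64_t mask) {
                tcam_.push_back({value & mask, mask});
            }

            // Handle a snoop request for `addr`. Returns true if answered from the
            // TCAM alone (exactly one match), without touching the coherency directory.
            bool handle_snoop(std::uint64_t addr) {
                int matches = 0;
                for (const auto& e : tcam_)
                    if ((addr & e.mask) == e.value) ++matches;
                if (matches == 1) {
                    std::printf("addr 0x%llx: answered from TCAM\n", (unsigned long long)addr);
                    return true;
                }
                std::printf("addr 0x%llx: fall back to coherency directory\n",
                            (unsigned long long)addr);
                return false;                           // zero or multiple matches
            }

        private:
            std::vector<TcamEntry> tcam_;
        };

        int main() {
            SnoopAccelerator acc;
            acc.add_entry(0x1000, ~0xFFull);            // covers addresses 0x1000-0x10FF
            acc.handle_snoop(0x1040);                   // single match -> fast response
            acc.handle_snoop(0x2000);                   // no match -> directory lookup
            return 0;
        }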

    Management of coherency directory cache entry ejection

    Publication Number: US10997074B2

    Publication Date: 2021-05-04

    Application Number: US16399378

    Filing Date: 2019-04-30

    Inventor: Frank R. Dropps

    Abstract: In exemplary aspects of managing the ejection of entries of a coherence directory cache, the directory cache includes directory cache entries that can store copies of respective directory entries from a coherency directory. Each of the directory cache entries is configured to include state and ownership information of respective memory blocks. Information is stored, which indicates if memory blocks are in an active state within a memory region of a memory. A request is received and includes a memory address of a first memory block. Based on the memory address in the request, a cache hit in the directory cache is detected. The request is determined to be a request to change the state of the first memory block to an invalid state. The ejection of a directory cache entry corresponding to the first memory block is managed based on ejection policy rules.
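
    As a rough illustration of ejection handling, the C++ sketch below keeps a per-region count of active memory blocks and applies one hypothetical policy rule: an invalidated entry is ejected only when its region has no other active blocks. The region size, the rule, and the names are illustrative assumptions, not the patented policy.

        // Minimal coherency-directory-cache ejection sketch with one hypothetical rule.
        #include <cstdint>
        #include <cstdio>
        #include <unordered_map>

        enum class State { Invalid, Shared, Exclusive };
        struct DirCacheEntry { State state; int owner; };        // state and ownership info

        class DirectoryCache {
        public:
            void install(std::uint64_t block, State s, int owner) {
                entries_[block] = {s, owner};
                ++active_in_region_[region_of(block)];
            }

            // Handle a request that changes `block` to the invalid state and decide,
            // per the policy rule, whether the cached directory entry is ejected.
            void invalidate(std::uint64_t block) {
                auto it = entries_.find(block);
                if (it == entries_.end()) return;                // directory-cache miss
                it->second.state = State::Invalid;
                int remaining = --active_in_region_[region_of(block)];
                if (remaining == 0) {                            // rule: region now idle
                    entries_.erase(it);                          // eject the entry
                    std::printf("block 0x%llx ejected\n", (unsigned long long)block);
                } else {
                    std::printf("block 0x%llx kept (region still active)\n",
                                (unsigned long long)block);
                }
            }

        private:
            static std::uint64_t region_of(std::uint64_t block) { return block >> 12; } // 4 KiB regions
            std::unordered_map<std::uint64_t, DirCacheEntry> entries_;
            std::unordered_map<std::uint64_t, int> active_in_region_;   // active blocks per region
        };

        int main() {
            DirectoryCache dc;
            dc.install(0x1000, State::Shared,    /*owner=*/1);
            dc.install(0x1040, State::Exclusive, /*owner=*/2);
            dc.invalidate(0x1000);                               // region still active -> keep
            dc.invalidate(0x1040);                               // last active block -> eject
            return 0;
        }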

    Packet-based adaptive forward error correction

    Publication Number: US10225045B2

    Publication Date: 2019-03-05

    Application Number: US15427941

    Filing Date: 2017-02-08

    Abstract: A system, method, and storage medium provide dynamic, packet-based adaptive forward error correction over a lossy bidirectional data communication medium that couples a transmitting device to a receiving device. The transmitting device repeatedly transmits encoded data packets formed by applying, to unencoded data, a forward error correction (FEC) algorithm having a level N that indicates a number of correctable errors. The receiving device attempts to decode the encoded data packets using the FEC algorithm, requesting retransmission of a packet if there are too many errors to correct. The transmitting device decreases the level N when it does not receive such a request within a given duration. By contrast, the transmitting device increases the level N when it receives a sequence of such requests having a threshold length, each request being received less than the given duration after the previous request.
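
    The transmitter-side adaptation lends itself to a small state machine: lower the level after a quiet period with no retransmission requests, raise it after a sufficiently long burst of closely spaced requests. The C++ sketch below is a minimal model of that logic, assuming millisecond timestamps, a fixed quiet period, and a burst-length threshold; the numbers and names are illustrative.

        // Minimal transmitter-side FEC level adaptation.
        #include <cstdio>

        class AdaptiveFecLevel {
        public:
            AdaptiveFecLevel(int level, long quiet_ms, int burst_threshold)
                : level_(level), quiet_ms_(quiet_ms), burst_threshold_(burst_threshold) {}

            // The receiver asked for a retransmission at time `now_ms`.
            void on_retransmit_request(long now_ms) {
                // Count requests that arrive within the quiet period of the previous one.
                if (last_request_ms_ >= 0 && now_ms - last_request_ms_ < quiet_ms_) ++burst_len_;
                else burst_len_ = 1;
                last_request_ms_ = now_ms;
                if (burst_len_ >= burst_threshold_) {   // sustained losses: strengthen the code
                    ++level_;
                    burst_len_ = 0;
                }
            }

            // Called periodically by the transmitter.
            void on_timer(long now_ms) {
                // No retransmission request for a full quiet period: weaken the code.
                if (level_ > 0 && (last_request_ms_ < 0 || now_ms - last_request_ms_ >= quiet_ms_)) {
                    --level_;
                    last_request_ms_ = now_ms;          // restart the observation window
                }
            }

            int level() const { return level_; }        // number of correctable errors per packet

        private:
            int level_;
            long quiet_ms_;
            int burst_threshold_;
            int burst_len_ = 0;
            long last_request_ms_ = -1;
        };

        int main() {
            AdaptiveFecLevel fec(/*level=*/2, /*quiet_ms=*/100, /*burst_threshold=*/3);
            fec.on_retransmit_request(10);
            fec.on_retransmit_request(50);
            fec.on_retransmit_request(90);              // third closely spaced request -> level rises
            std::printf("level after burst: %d\n", fec.level());
            fec.on_timer(400);                          // quiet period elapsed -> level drops
            std::printf("level after quiet period: %d\n", fec.level());
            return 0;
        }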

    Management of coherency directory cache entry ejection

    Publication Number: US11625326B2

    Publication Date: 2023-04-11

    Application Number: US17301949

    Filing Date: 2021-04-20

    Inventor: Frank R. Dropps

    Abstract: In exemplary aspects of managing the ejection of entries of a coherence directory cache, the directory cache includes directory cache entries that can store copies of respective directory entries from a coherency directory. Each of the directory cache entries is configured to include state and ownership information of respective memory blocks. Information is stored, which indicates if memory blocks are in an active state within a memory region of a memory. A request is received and includes a memory address of a first memory block. Based on the memory address in the request, a cache hit in the directory cache is detected. The request is determined to be a request to change the state of the first memory block to an invalid state. The ejection of a directory cache entry corresponding to the first memory block is managed based on ejection policy rules.

    CACHE COHERENCY MANAGEMENT FOR MULTI-CATEGORY MEMORIES

    Publication Number: US20200349076A1

    Publication Date: 2020-11-05

    Application Number: US16399230

    Filing Date: 2019-04-30

    Abstract: In exemplary aspects of cache coherency management, a first request is received and includes an address of a first memory block in a shared memory. The shared memory includes memory blocks of memory devices associated with respective processors. Each of the memory blocks are associated with one of a plurality of memory categories indicating a protocol for managing cache coherency for the respective memory block. A memory category associated with the first memory block is determined and a response to the first request is based on the memory category of the first memory block. The first memory block and a second memory block are included in one of the same memory devices, and the memory category of the first memory block is different than the memory category of the second memory block.
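
    A minimal C++ model of category-based dispatch is sketched below, assuming a per-region category table and three hypothetical categories (directory-coherent, snoop-coherent, non-coherent); the categories, the region size, and the per-category actions are assumptions for illustration and are not drawn from the patent.

        // Minimal per-block memory-category dispatch for coherency handling.
        #include <cstdint>
        #include <cstdio>
        #include <unordered_map>

        enum class MemCategory { DirectoryCoherent, SnoopCoherent, NonCoherent };

        class CoherencyManager {
        public:
            // Blocks inherit the category of their (hypothetical) 4 KiB region.
            void set_category(std::uint64_t region, MemCategory c) { categories_[region] = c; }

            void handle_request(std::uint64_t block_addr) {
                std::uint64_t region = block_addr >> 12;
                MemCategory c = categories_.count(region) ? categories_[region]
                                                          : MemCategory::NonCoherent;
                switch (c) {
                    case MemCategory::DirectoryCoherent:
                        std::printf("0x%llx: consult coherency directory\n",
                                    (unsigned long long)block_addr);
                        break;
                    case MemCategory::SnoopCoherent:
                        std::printf("0x%llx: broadcast snoop to sharers\n",
                                    (unsigned long long)block_addr);
                        break;
                    case MemCategory::NonCoherent:
                        std::printf("0x%llx: serve from memory, no coherency action\n",
                                    (unsigned long long)block_addr);
                        break;
                }
            }

        private:
            std::unordered_map<std::uint64_t, MemCategory> categories_;   // region -> category
        };

        int main() {
            CoherencyManager mgr;
            // Two blocks in the same memory device can carry different categories.
            mgr.set_category(0x1000 >> 12, MemCategory::DirectoryCoherent);
            mgr.set_category(0x2000 >> 12, MemCategory::NonCoherent);
            mgr.handle_request(0x1040);
            mgr.handle_request(0x2040);
            return 0;
        }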

    Network source arbitration

    Publication Number: US10476810B1

    Publication Date: 2019-11-12

    Application Number: US15963296

    Filing Date: 2018-04-26

    Abstract: Example implementations relate to arbitrating access to a shared resource for a plurality of data streams. An example implementation includes selecting a data stream from the plurality of data streams according to an arbitration scheme. A data packet of the selected data stream may be granted access to the shared resource. A source count associated with a source of the data packet may be adjusted, and the arbitration scheme may be blocked from selecting the data stream where the source count exceeds a threshold.
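
    The sketch below models the described arbitration in C++ with a round-robin scheme over the streams: each grant increments a per-source count, a stream whose source count has reached the threshold is skipped, and a completion decrements the count again. The round-robin choice, the decrement-on-completion step, and the names are illustrative assumptions.

        // Minimal source-count-gated round-robin arbiter for a shared resource.
        #include <cstddef>
        #include <cstdio>
        #include <queue>
        #include <vector>

        struct Packet { int source; int id; };

        class SourceArbiter {
        public:
            SourceArbiter(int streams, int sources, int threshold)
                : streams_(streams), counts_(sources, 0), threshold_(threshold) {}

            void enqueue(int stream, Packet p) { streams_[stream].push(p); }

            // Round-robin over streams, skipping any whose head packet's source has
            // reached its count threshold. Returns true if a packet was granted.
            bool grant_next() {
                for (std::size_t i = 0; i < streams_.size(); ++i) {
                    std::size_t s = (next_ + i) % streams_.size();
                    if (streams_[s].empty()) continue;
                    int src = streams_[s].front().source;
                    if (counts_[src] >= threshold_) continue;   // stream blocked for now
                    Packet p = streams_[s].front();
                    streams_[s].pop();
                    ++counts_[src];                             // adjust the source count
                    next_ = (s + 1) % streams_.size();
                    std::printf("granted packet %d from source %d\n", p.id, p.source);
                    return true;
                }
                return false;
            }

            // Called when the shared resource completes work for a source.
            void release(int source) { if (counts_[source] > 0) --counts_[source]; }

        private:
            std::vector<std::queue<Packet>> streams_;
            std::vector<int> counts_;                           // outstanding grants per source
            int threshold_;
            std::size_t next_ = 0;
        };

        int main() {
            SourceArbiter arb(/*streams=*/2, /*sources=*/2, /*threshold=*/2);
            arb.enqueue(0, {/*source=*/0, /*id=*/1});
            arb.enqueue(0, {/*source=*/0, /*id=*/2});
            arb.enqueue(0, {/*source=*/0, /*id=*/3});
            arb.enqueue(1, {/*source=*/1, /*id=*/4});
            while (arb.grant_next()) {}                         // source 0 blocks after two grants
            arb.release(0);                                     // a completion unblocks it
            arb.grant_next();
            return 0;
        }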
