NODE CONTROLLER TO MANAGE ACCESS TO REMOTE MEMORY

    Publication number: US20190114275A1

    Publication date: 2019-04-18

    Application number: US15786098

    Application date: 2017-10-17

    Inventor: Frank R. Dropps

    Abstract: A node controller to manage access to and provide responses from a remote memory for a plurality of processor nodes. A learning block monitors requests to a given data block in the remote memory and monitors parameters associated with the requests. The learning block updates a respective weighting value for each of the parameters associated with the requests to the given data block. Event detection circuitry stores the parameters and the weighting values for each of the parameters associated with an address for the given data block to determine a subsequent memory action for the prospective data block in the remote memory.
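The weighting scheme in this abstract can be sketched as follows. This is an illustrative sketch only; the class, the decay/threshold parameters, and the "prefetch" action are assumptions, not terms taken from the patent.

```python
class LearningBlock:
    """Tracks per-parameter weights for requests to each remote data block."""

    def __init__(self, threshold=3.0, step=1.0, decay=0.5):
        self.threshold = threshold   # weight needed to trigger a memory action
        self.step = step             # increment when a parameter is observed
        self.decay = decay           # decay applied to unobserved parameters
        self.weights = {}            # block address -> {parameter: weight}

    def observe(self, address, parameters):
        """Monitor a request and update the weighting value of each parameter."""
        entry = self.weights.setdefault(address, {})
        for p in entry:
            if p not in parameters:
                entry[p] *= self.decay
        for p in parameters:
            entry[p] = entry.get(p, 0.0) + self.step

    def next_action(self, address):
        """Return a speculative action (e.g. prefetch) if any weight is strong."""
        entry = self.weights.get(address, {})
        if any(w >= self.threshold for w in entry.values()):
            return "prefetch"
        return None
```

Repeated requests with a recurring parameter drive that parameter's weight past the threshold, at which point a subsequent memory action is suggested for the block.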

    Virtual Channel and Resource Assignment
    Invention Application

    Publication number: US20180293184A1

    Publication date: 2018-10-11

    Application number: US15483880

    Application date: 2017-04-10

    Abstract: A high-performance computing system, method, and storage medium manage accesses to multiple memory modules of a computing node, the modules having different access latencies. The node allocates its resources into pools according to pre-determined memory access criteria. When another computing node requests a memory access, the node determines whether the request satisfies any of the criteria. If so, the associated pool of resources is selected for servicing the request; if not, a default pool is selected. The node then services the request if the pool of resources is sufficient. Otherwise, various error handling processes are performed. Each memory access criterion may relate to a memory address range assigned to a memory module, a type of request, a relationship between the nodes, a configuration of the requesting node, or a combination of these.
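The pool-selection flow above can be illustrated with a minimal sketch. The criterion shape (predicates over a request), pool sizes, and names are hypothetical simplifications, not the patent's definitions.

```python
def select_pool(request, criteria):
    """Pick the resource pool whose access criterion matches, else 'default'."""
    for name, matches in criteria.items():
        if matches(request):
            return name
    return "default"

def service(request, pools, criteria, cost=1):
    """Service a memory access from the selected pool, or raise on exhaustion."""
    pool = select_pool(request, criteria)
    if pools[pool] >= cost:
        pools[pool] -= cost        # consume resources to service the access
        return pool
    raise RuntimeError("insufficient resources in pool %r" % pool)
```

Here a criterion keyed on an address range routes low-address accesses to a dedicated pool, while everything else falls through to the default pool.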

    HYPERVISOR-ASSISTED SCALABLE DISTRIBUTED SYSTEMS

    Publication number: US20240345857A1

    Publication date: 2024-10-17

    Application number: US18133895

    Application date: 2023-04-12

    CPC classification number: G06F9/45558 G06F2009/45579 G06F2009/45583

    Abstract: A first hypervisor running on a first processor cluster is provided. During operation, the first hypervisor can determine a first set of processing nodes and a first memory unit of the first processor cluster in response to the booting up of a first Basic Input/Output System (BIOS) of the first processor cluster. The first hypervisor can discover a second hypervisor running on a second processor cluster comprising a second set of processing nodes and a second memory unit. The first hypervisor can operate, with the second hypervisor, a distributed system comprising the first and second sets of processing nodes and the first and second memory units. The first hypervisor can then operate, with the second hypervisor, a global virtual machine on the distributed system. The virtual memory space of the global virtual machine can be mapped to respective memory spaces of the first and second processor clusters.
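The final mapping step, from the global virtual machine's memory space onto per-cluster memory units, can be sketched with a simple linear layout. The linear concatenation of cluster memories is an assumption made for illustration.

```python
def map_global_address(addr, cluster_sizes):
    """Translate a global VM address into (cluster index, local offset)."""
    base = 0
    for idx, size in enumerate(cluster_sizes):
        if base <= addr < base + size:
            return idx, addr - base
        base += size
    raise ValueError("address out of range")
```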

    ENHANCED VIRTUAL CHANNEL SWITCHING
    Invention Publication

    Publication number: US20230262001A1

    Publication date: 2023-08-17

    Application number: US17672481

    Application date: 2022-02-15

    Abstract: A system for facilitating enhanced virtual channel switching in a node of a distributed computing environment is provided. During operation, the system can allocate flow control credits for a first virtual channel to an upstream node in the distributed computing environment. The system can receive, via a message path comprising the upstream node, a message on the first virtual channel based on the allocated flow control credits. The system can then store the message in a queue associated with an input port and determine whether the message is a candidate for changing the first virtual channel at the node based on a mapping rule associated with the input port. If the message is a candidate, the system can associate the message with a second virtual channel indicated in the mapping rule in the queue. Subsequently, the system can send the message from the queue on the second virtual channel.
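The per-port remapping described above can be sketched as follows; the mapping-rule shape (a dict from old to new virtual channel) and the message fields are assumptions for illustration.

```python
from collections import deque

class InputPort:
    def __init__(self, mapping_rule):
        self.queue = deque()
        self.rule = mapping_rule      # e.g. {1: 2}: remap VC1 -> VC2

    def receive(self, message, vc):
        """Store a message received on a virtual channel in the port's queue."""
        self.queue.append({"payload": message, "vc": vc})

    def remap(self):
        """A queued message is a candidate if its VC appears in the rule."""
        for msg in self.queue:
            if msg["vc"] in self.rule:
                msg["vc"] = self.rule[msg["vc"]]

    def send(self):
        """Forward the oldest queued message on its (possibly remapped) VC."""
        return self.queue.popleft()
```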

    DYNAMIC BANDWIDTH SHARING ON A FIBER LOOP USING SILICON PHOTONICS

    Publication number: US20210367699A1

    Publication date: 2021-11-25

    Application number: US16882411

    Application date: 2020-05-22

    Abstract: A fiber loop includes a plurality of processors coupled to each other and a controller coupled to each of the plurality of processors. The controller is configured to: assign to each of the plurality of processors a number of wavelengths for interconnect communications between the plurality of processors; receive, from a first processor of the plurality of processors, a request for one or more additional wavelengths; determine whether an interconnect bandwidth utilization on the fiber loop is less than a threshold; and in response to determining that the interconnect bandwidth utilization on the fiber loop is less than the threshold, reassign, to the first processor, one or more wavelengths that are assigned to a second processor of the plurality of processors.
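The controller's reassignment decision can be sketched in a few lines. Treating utilization as a fraction against a fixed threshold, and keeping the donor at least one wavelength, are illustrative assumptions.

```python
def reassign_wavelengths(assignments, requester, donor, utilization,
                         threshold=0.8, count=1):
    """Move `count` wavelengths from donor to requester if the loop is idle enough."""
    if utilization >= threshold:
        return False                   # loop too busy to rebalance
    if assignments[donor] <= count:
        return False                   # donor keeps at least one wavelength
    assignments[donor] -= count
    assignments[requester] += count
    return True
```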

    MANAGEMENT OF COHERENCY DIRECTORY CACHE ENTRY EJECTION

    Publication number: US20210240625A1

    Publication date: 2021-08-05

    Application number: US17301949

    Application date: 2021-04-20

    Inventor: Frank R. Dropps

    Abstract: In exemplary aspects of managing the ejection of entries of a coherence directory cache, the directory cache includes directory cache entries that can store copies of respective directory entries from a coherency directory. Each of the directory cache entries is configured to include state and ownership information of respective memory blocks. Information is stored, which indicates if memory blocks are in an active state within a memory region of a memory. A request is received and includes a memory address of a first memory block. Based on the memory address in the request, a cache hit in the directory cache is detected. The request is determined to be a request to change the state of the first memory block to an invalid state. The ejection of a directory cache entry corresponding to the first memory block is managed based on ejection policy rules.
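One possible ejection-policy check for a directory cache hit on an invalidating request is sketched below. The specific rule shown, ejecting only when the block's memory region is no longer active, is one plausible policy, not necessarily the patent's.

```python
def handle_request(directory_cache, active_regions, addr, new_state):
    """On a hit that invalidates a block, eject its entry per a simple policy."""
    entry = directory_cache.get(addr)
    if entry is None:
        return "miss"
    if new_state == "invalid":
        # Example policy rule: eject only when the block's memory region is
        # no longer active, so hot regions keep their entries cached.
        region = addr >> 12
        if region not in active_regions:
            del directory_cache[addr]
            return "ejected"
    entry["state"] = new_state
    return "updated"
```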

    Packet-Based Adaptive Forward Error Correction

    Publication number: US20180227078A1

    Publication date: 2018-08-09

    Application number: US15427941

    Application date: 2017-02-08

    CPC classification number: H04L1/0042 H04L1/0009 H04L1/0058 H04L1/18 H04L45/72

    Abstract: A system, method, and storage medium provide dynamic, packet-based adaptive forward error correction over a lossy bidirectional data communication medium that couples a transmitting device to a receiving device. The transmitting device repeatedly transmits encoded data packets formed by applying, to unencoded data, a forward error correction (FEC) algorithm having a level N that indicates a number of correctable errors. The receiving device attempts to decode the encoded data packets using the FEC algorithm, requesting retransmission of a packet if there are too many errors to correct. The transmitting device decreases the level N when it does not receive such a request within a given duration. By contrast, the transmitting device increases the level N when it receives a sequence of such requests having a threshold length, each request being received less than the given duration after the previous request.
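The transmitter-side level adjustment described above can be sketched as a small state machine. Modeling time as explicit timestamps and the particular default parameters are assumptions for illustration.

```python
class AdaptiveFec:
    def __init__(self, level=4, min_level=0, duration=1.0, run_threshold=3):
        self.level = level               # FEC level N (correctable errors)
        self.min_level = min_level
        self.duration = duration         # quiet period before decreasing N
        self.run_threshold = run_threshold
        self.last_request = None
        self.run_length = 0

    def on_retransmit_request(self, now):
        """Count requests arriving within `duration` of the previous one."""
        if self.last_request is not None and now - self.last_request < self.duration:
            self.run_length += 1
        else:
            self.run_length = 1
        self.last_request = now
        if self.run_length >= self.run_threshold:
            self.level += 1              # channel is lossy: correct more errors
            self.run_length = 0

    def on_quiet_period(self):
        """No retransmit request within `duration`: channel looks clean."""
        self.level = max(self.min_level, self.level - 1)
        self.run_length = 0
```

A run of closely spaced retransmission requests raises N; a quiet interval lowers it, trading coding overhead against loss rate.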

    Dirty cache line write-back tracking

    Publication number: US11556478B2

    Publication date: 2023-01-17

    Application number: US17085321

    Application date: 2020-10-30

    Inventor: Frank R. Dropps

    Abstract: A cache system may include a cache to store a plurality of cache lines in a write-back mode; dirty cache line counter circuitry to store a count of dirty cache lines in the cache, increment the count when a new dirty cache line is added to the cache, and decrement the count when an old dirty cache line is written-back from the cache; dirty cache line write-back tracking circuitry to store an ordering of the dirty cache lines in a write-back order; mapping circuitry to map the dirty lines into the ordering; and controller circuitry to use the mapping circuitry to identify an evicted dirty cache line in the ordering and remove the evicted dirty cache line from the ordering.
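The counter, ordering, and mapping can be sketched together; here an ordered dictionary stands in for the tracking and mapping circuitry, which is a software simplification of the hardware described.

```python
from collections import OrderedDict

class DirtyLineTracker:
    def __init__(self):
        self.count = 0
        self.order = OrderedDict()       # address -> None, in write-back order

    def mark_dirty(self, addr):
        if addr not in self.order:
            self.order[addr] = None
            self.count += 1              # new dirty line added

    def write_back_oldest(self):
        addr, _ = self.order.popitem(last=False)
        self.count -= 1                  # dirty line written back
        return addr

    def evict(self, addr):
        # The mapping lets an evicted dirty line be removed mid-ordering.
        if addr in self.order:
            del self.order[addr]
            self.count -= 1
```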

    Cache coherency management for multi-category memories

    Publication number: US11556471B2

    Publication date: 2023-01-17

    Application number: US16399230

    Application date: 2019-04-30

    Abstract: In exemplary aspects of cache coherency management, a first request is received and includes an address of a first memory block in a shared memory. The shared memory includes memory blocks of memory devices associated with respective processors. Each of the memory blocks are associated with one of a plurality of memory categories indicating a protocol for managing cache coherency for the respective memory block. A memory category associated with the first memory block is determined and a response to the first request is based on the memory category of the first memory block. The first memory block and a second memory block are included in one of the same memory devices, and the memory category of the first memory block is different than the memory category of the second memory block.
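Category-based dispatch of a coherency request can be sketched as a lookup table over memory blocks. The category names and handler functions are illustrative placeholders, not the patent's protocols.

```python
BLOCK_SIZE = 64

def respond(category_table, handlers, addr):
    """Dispatch a coherency request by the memory category of its block."""
    block = addr // BLOCK_SIZE
    category = category_table.get(block, "uncached")
    return handlers[category](addr)

handlers = {
    "directory": lambda a: ("directory-lookup", a),
    "snoop": lambda a: ("broadcast-snoop", a),
    "uncached": lambda a: ("direct-access", a),
}
```

Note that two adjacent blocks of the same memory device can carry different categories, matching the abstract's final point.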
