Hash operation manipulations
    52.
    Granted Patent

    Publication No.: US12019606B1

    Publication Date: 2024-06-25

    Application No.: US17061104

    Filing Date: 2020-10-01

    Applicant: Innovium, Inc.

    Abstract: Certain hash-based operations in network devices and other devices, such as mapping and/or lookup operations, are improved by manipulating a hash key prior to executing a hash function on the hash key and/or by manipulating outputs of a hash function. A device may be configured to manipulate hash keys and/or outputs using manipulation logic based on one or more predefined manipulation values. A similar hash-based operation may be performed by multiple devices within a network of computing devices. Different devices may utilize different predefined manipulation values for their respective implementations of the manipulation logic. For instance, each device may assign itself a random mask value for key transformation logic as part of an initialization process when the device powers up and/or each time the device reboots. In an embodiment, described techniques may increase the entropy of hashing function outputs in certain contexts, thereby increasing the effectiveness of certain hashing functions.
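
    The key-transformation idea can be illustrated with a minimal sketch: each device XORs a self-assigned random mask into the hash key before hashing, so identical keys map to different outputs on different devices. SHA-256 and the function names below are illustrative stand-ins, not the patented implementation; a real device would use its hardware hash function and pick the mask (e.g. via a random number source) at power-up.

```python
import hashlib

def transform_key(key: bytes, mask: bytes) -> bytes:
    """Key transformation logic: XOR the hash key with a device-specific
    mask, cycling the mask to the key's length. XOR is an involution, so
    applying the same mask twice recovers the original key."""
    return bytes(b ^ mask[i % len(mask)] for i, b in enumerate(key))

def device_hash(key: bytes, mask: bytes, buckets: int) -> int:
    """Hash a transformed key into one of `buckets` bins. SHA-256 stands
    in for whatever hash function the device actually implements."""
    digest = hashlib.sha256(transform_key(key, mask)).digest()
    return int.from_bytes(digest[:4], "big") % buckets
```

    Because each device uses its own mask, two devices applying the same lookup to the same flow key land in uncorrelated buckets, which is the entropy-increasing effect the abstract describes.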

    Delay-based automatic queue management and tail drop

    Publication No.: US11784932B2

    Publication Date: 2023-10-10

    Application No.: US17091916

    Filing Date: 2020-11-06

    Applicant: Innovium, Inc.

    Abstract: Approaches, techniques, and mechanisms are disclosed for improving operations of a network switching device and/or network-at-large by utilizing queue delay as a basis for measuring congestion for the purposes of Automated Queue Management (“AQM”) and/or other congestion-based policies. Queue delay is an exact or approximate measure of the amount of time a data unit waits at a network device as a consequence of queuing, such as the amount of time the data unit spends in an egress queue while the data unit is being buffered by a traffic manager. Queue delay may be used as a substitute for queue size in existing AQM, Weighted Random Early Detection (“WRED”), Tail Drop, Explicit Congestion Notification (“ECN”), reflection, and/or other congestion management or notification algorithms. Or, a congestion score calculated based on the queue delay and one or more other metrics, such as queue size, may be used as a substitute.
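
    As a rough illustration of substituting queue delay for queue size in WRED, the sketch below maps a measured delay to a drop probability that ramps linearly between a minimum and a maximum delay threshold, with full tail drop above the maximum. The threshold values, units, and names are assumptions chosen for illustration, not figures from the patent.

```python
import random

def wred_drop_probability(queue_delay_us: float,
                          min_delay_us: float = 50.0,
                          max_delay_us: float = 500.0,
                          max_p: float = 0.1) -> float:
    """WRED-style curve with queue delay in place of average queue size:
    0 below min, linear ramp up to max_p between min and max, and
    certain drop (tail-drop region) above max."""
    if queue_delay_us <= min_delay_us:
        return 0.0
    if queue_delay_us >= max_delay_us:
        return 1.0
    frac = (queue_delay_us - min_delay_us) / (max_delay_us - min_delay_us)
    return frac * max_p

def should_drop(queue_delay_us: float, rng=random.random) -> bool:
    """Randomized early-drop decision for one data unit."""
    return rng() < wred_drop_probability(queue_delay_us)
```

    The same curve could instead drive ECN marking rather than discard, which is one of the substitutions the abstract mentions.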

    Handling interface clock rate mismatches between network devices

    Publication No.: US11671281B1

    Publication Date: 2023-06-06

    Application No.: US17224081

    Filing Date: 2021-04-06

    Applicant: Innovium, Inc.

    Abstract: The performance of a switch or other network device is improved by adjusting the number of idle bytes transmitted between data units—that is, the size of the interpacket gap—to increase the bandwidth of a network interface. In some embodiments, the adjustments may be made in a manner designed to compensate for potential mismatches between the clock rate of the network interface and clock rates of interfaces of other network devices when retransmitting data received from those other network devices. In yet other embodiments, the adjustments may be designed to increase available bandwidth for other purposes. In an embodiment, the idle reduction logic is in a Media Access Control (“MAC”) layer of a network interface. The idle reduction logic may be enabled or disabled based on user preference, or programmatically based on factors such as a transmission utilization level for the MAC layer, buffer fill level, and so forth.
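
    One way to picture the idle reduction is as a per-frame calculation: estimate how many extra bytes a faster upstream clock delivers per frame interval, expressed in parts per million, and shave that many idle bytes from the standard 12-byte Ethernet interpacket gap. This is a hedged sketch of the compensation idea only; the ppm model, the minimum-gap floor, and all names are assumptions, not the patented MAC-layer logic.

```python
import math

STANDARD_IPG_BYTES = 12  # nominal Ethernet interpacket gap

def adjusted_ipg(frame_len: int, clock_offset_ppm: float,
                 min_ipg: int = 8) -> int:
    """Shrink the interpacket gap to absorb data arriving from an
    upstream interface whose clock runs faster by clock_offset_ppm
    parts per million. The surplus is rounded up so the retransmitting
    interface never falls behind, and the gap is floored at min_ipg."""
    extra = math.ceil((frame_len + STANDARD_IPG_BYTES)
                      * clock_offset_ppm / 1e6)
    return max(min_ipg, STANDARD_IPG_BYTES - extra)
```

    With a matched clock the gap stays at 12 bytes; a 100 ppm mismatch on a full-size frame costs about one idle byte per frame.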

    Auto load balancing
    55.
    Granted Patent

    Publication No.: US11483232B1

    Publication Date: 2022-10-25

    Application No.: US17192819

    Filing Date: 2021-03-04

    Applicant: Innovium, Inc.

    Abstract: Automatic load-balancing techniques in a network device are used to select, from a multipath group, a path to assign to a flow based on observed state attributes such as path state(s), device state(s), port state(s), or queue state(s) of the paths. A mapping of the path previously assigned to a flow or group of flows (e.g., on account of having then been optimal in view of the observed state attributes) is maintained, for example, in a table. So long as the flow(s) are active and the path is still valid, the mapped path is selected for subsequent data units belonging to the flow(s), which may, among other effects, avoid or reduce packet re-ordering. However, if the flow(s) go idle, or if the mapped path fails, a new optimal path may be assigned to the flow(s) from the multipath group.
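
    The flow-to-path mapping described above can be sketched as a small table keyed by flow: the mapped path is reused while the flow stays active and the path remains valid, and a new best-looking path is assigned only when the flow goes idle or the path disappears. The class name, idle timeout, and path-quality callback are hypothetical illustrations, not the device's actual data structures.

```python
import time

class AutoLoadBalancer:
    """Sticky path assignment: keep a flow on its mapped path while the
    flow is active and the path is in the multipath group; otherwise
    reassign to whichever path currently scores best."""

    def __init__(self, paths, idle_timeout_s=0.5, now=time.monotonic):
        self.paths = set(paths)          # valid members of the multipath group
        self.idle_timeout = idle_timeout_s
        self.now = now                   # injectable clock for testing
        self.table = {}                  # flow_id -> (path, last_seen)

    def select(self, flow_id, path_quality):
        t = self.now()
        entry = self.table.get(flow_id)
        if entry:
            path, last_seen = entry
            if path in self.paths and t - last_seen < self.idle_timeout:
                self.table[flow_id] = (path, t)   # refresh activity
                return path                       # sticky: avoids re-ordering
        # New flow, idle flow, or invalid path: pick the best path now.
        best = max(self.paths, key=path_quality)
        self.table[flow_id] = (best, t)
        return best
```

    Keeping an active flow pinned to its path is what avoids packet re-ordering; rebalancing only at idle gaps means later data units cannot overtake earlier ones in flight.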

    Network switch with integrated gradient aggregation for distributed machine learning

    Publication No.: US11328222B1

    Publication Date: 2022-05-10

    Application No.: US16409703

    Filing Date: 2019-05-10

    Applicant: Innovium, Inc.

    Abstract: Distributed machine learning systems and other distributed computing systems are improved by embedding compute logic at the network switch level to perform collective actions, such as reduction operations, on gradients or other data processed by the nodes of the system. The switch is configured to recognize data units that carry data associated with a collective action that needs to be performed by the distributed system, referred to herein as “compute data,” and process that data using a compute subsystem within the switch. The compute subsystem includes a compute engine that is configured to perform various operations on the compute data, such as “reduction” operations, and forward the results back to the compute nodes. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the network switch may take over some or all of the processing of the distributed system during the collective phase.
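
    The in-switch reduction can be sketched as an accumulator that sums each worker's gradient chunk as it arrives and emits the element-wise mean once every worker has contributed, which is when the switch would forward the result back to the compute nodes. This is an illustrative software model of the compute-engine behavior under assumed names, not the hardware design.

```python
class GradientAggregator:
    """Model of one reduction slot in the switch's compute subsystem:
    accumulate per-worker gradient vectors, return the averaged result
    when the collective is complete."""

    def __init__(self, num_workers: int, vector_len: int):
        self.num_workers = num_workers
        self.acc = [0.0] * vector_len    # running element-wise sum
        self.seen = set()                # workers that have contributed

    def receive(self, worker_id: int, gradient):
        """Fold in one worker's chunk; return the reduced (averaged)
        vector when all workers have reported, else None."""
        assert worker_id not in self.seen, "duplicate contribution"
        self.seen.add(worker_id)
        self.acc = [a + g for a, g in zip(self.acc, gradient)]
        if len(self.seen) == self.num_workers:
            return [a / self.num_workers for a in self.acc]
        return None  # collective still in progress
```

    Swapping the averaging step for a bitwise OR or a plain sum would model the other reduction operations the abstract lists.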

    Automatic flow management
    57.
    Granted Patent

    Publication No.: US11245632B2

    Publication Date: 2022-02-08

    Application No.: US16927683

    Filing Date: 2020-07-13

    Applicant: Innovium, Inc.

    Abstract: Packet-switching operations in a network device are managed based on the detection of excessive-rate traffic flows. A network device receives a data unit, determines the traffic flow to which the data unit belongs, and updates flow tracking information for that flow. The network device utilizes the tracking information to determine when a rate at which the network device is receiving data belonging to the flow exceeds an excessive-rate threshold and is thus an excessive-rate flow. The network device may enable one or more excessive-rate policies on an excessive-rate traffic flow. Such a policy may include any number of features that affect how the device handles data units belonging to the flow, such as excessive-rate notification, differentiated discard, differentiated congestion notification, and reprioritization. Memory and other resource optimizations for such flow tracking and management are also described.
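
    Excessive-rate detection as described can be modeled with a token-bucket-style tracker per flow: each arriving data unit adds its size to the flow's bucket, the bucket drains at the threshold rate, and a flow whose bucket overflows its burst allowance is flagged so excessive-rate policies (discard, notification, reprioritization) can be applied. The class, rates, and parameter names are illustrative assumptions, not the patented tracking structures.

```python
class FlowRateTracker:
    """Per-flow rate tracking: a bucket that drains at the excessive-rate
    threshold and fills with arriving bytes. Overflow marks the flow as
    an excessive-rate flow."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s     # excessive-rate threshold
        self.burst = burst_bytes         # tolerated burst above the rate
        self.flows = {}                  # flow_id -> (bucket_level, last_time)

    def on_data_unit(self, flow_id, size_bytes, now) -> bool:
        """Update tracking for one data unit; True means the flow is
        currently exceeding the excessive-rate threshold."""
        level, last = self.flows.get(flow_id, (0.0, now))
        level = max(0.0, level - (now - last) * self.rate) + size_bytes
        self.flows[flow_id] = (level, now)
        return level > self.burst
```

    A real device would keep this state in a compact table (the abstract notes memory optimizations for exactly this tracking), but the drain-and-fill arithmetic is the same.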

    AUTOMATIC FLOW MANAGEMENT
    58.
    Patent Application

    Publication No.: US20220014473A1

    Publication Date: 2022-01-13

    Application No.: US16927683

    Filing Date: 2020-07-13

    Applicant: Innovium, Inc.

    Abstract: Packet-switching operations in a network device are managed based on the detection of excessive-rate traffic flows. A network device receives a data unit, determines the traffic flow to which the data unit belongs, and updates flow tracking information for that flow. The network device utilizes the tracking information to determine when a rate at which the network device is receiving data belonging to the flow exceeds an excessive-rate threshold and is thus an excessive-rate flow. The network device may enable one or more excessive-rate policies on an excessive-rate traffic flow. Such a policy may include any number of features that affect how the device handles data units belonging to the flow, such as excessive-rate notification, differentiated discard, differentiated congestion notification, and reprioritization. Memory and other resource optimizations for such flow tracking and management are also described.

    Auto load balancing
    59.
    Granted Patent

    Publication No.: US11128561B1

    Publication Date: 2021-09-21

    Application No.: US16524575

    Filing Date: 2019-07-29

    Applicant: Innovium, Inc.

    Abstract: Automatic load-balancing techniques in a network device are used to select, from a multipath group, a path to assign to a flow based on observed state attributes such as path state(s), device state(s), port state(s), or queue state(s) of the paths. A mapping of the path previously assigned to a flow or group of flows (e.g., on account of having then been optimal in view of the observed state attributes) is maintained, for example, in a table. So long as the flow(s) are active and the path is still valid, the mapped path is selected for subsequent data units belonging to the flow(s), which may, among other effects, avoid or reduce packet re-ordering. However, if the flow(s) go idle, or if the mapped path fails, a new optimal path may be assigned to the flow(s) from the multipath group.

    Network switch with integrated compute subsystem for distributed artificial intelligence and other applications

    Publication No.: US10931588B1

    Publication Date: 2021-02-23

    Application No.: US16409695

    Filing Date: 2019-05-10

    Applicant: Innovium, Inc.

    Abstract: Distributed machine learning systems and other distributed computing systems are improved by embedding compute logic at the network switch level to perform collective actions, such as reduction operations, on gradients or other data processed by the nodes of the system. The switch is configured to recognize data units that carry data associated with a collective action that needs to be performed by the distributed system, referred to herein as “compute data,” and process that data using a compute subsystem within the switch. The compute subsystem includes a compute engine that is configured to perform various operations on the compute data, such as “reduction” operations, and forward the results back to the compute nodes. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the network switch may take over some or all of the processing of the distributed system during the collective phase.