Shared buffer memory architecture

    Publication No.: US10884829B1

    Publication Date: 2021-01-05

    Application No.: US16867490

    Filing Date: 2020-05-05

    Applicant: Innovium, Inc.

    Abstract: An improved buffer for networking devices and other computing devices comprises multiple memory instances, each having a distinct set of entries. Transport data units (“TDUs”) are divided into storage data units (“SDUs”), and each SDU is stored within a separate entry of a separate memory instance in a logical bank. A grid of the memory instances is organized into overlapping horizontal logical banks and vertical logical banks. A memory instance may be shared between horizontal and vertical logical banks. When overlapping logical banks are accessed concurrently, the memory instance that they share may be inaccessible to one of the logical banks. Accordingly, when writing a TDU, a parity SDU may be generated for the TDU and also stored within its logical bank. The TDU's content within the shared memory instance may then be reconstructed from the parity SDU without having to read the shared memory instance.
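
    The parity mechanism sketched in this abstract can be illustrated with a small, hypothetical model (not the patented implementation): a TDU is split into SDUs, an XOR parity SDU is stored alongside them, and the SDU held in a memory instance that is temporarily inaccessible is reconstructed from the parity and the remaining SDUs.

```python
# Minimal sketch (assumed model): three data SDUs plus one XOR-parity SDU per
# logical bank. If the shared memory instance is busy, its SDU is recovered by
# XOR-ing the parity SDU with the other data SDUs instead of reading it.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def write_tdu(tdu: bytes, sdu_size: int):
    """Split a TDU into fixed-size SDUs and compute an XOR parity SDU."""
    sdus = [tdu[i:i + sdu_size].ljust(sdu_size, b"\x00")
            for i in range(0, len(tdu), sdu_size)]
    parity = sdus[0]
    for sdu in sdus[1:]:
        parity = xor_bytes(parity, sdu)
    return sdus, parity

def read_tdu(sdus, parity, busy_index: int) -> bytes:
    """Read a TDU back when one memory instance (busy_index) is unavailable."""
    recovered = parity
    for i, sdu in enumerate(sdus):
        if i != busy_index:
            recovered = xor_bytes(recovered, sdu)
    out = list(sdus)
    out[busy_index] = recovered
    return b"".join(out)

sdus, parity = write_tdu(b"example transport data unit!", sdu_size=10)
assert read_tdu(sdus, parity, busy_index=1).startswith(b"example transport")
```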

    Memory-based power stabilization in a network device

    Publication No.: US11307773B1

    Publication Date: 2022-04-19

    Application No.: US16374530

    Filing Date: 2019-04-03

    Applicant: Innovium, Inc.

    Abstract: According to an embodiment, power demands of a computing device or component thereof may be stabilized by performing redundant operations during periods of otherwise low power demand. In so doing, the current load of the device/component remains relatively stable, potentially greatly reducing voltage droops and overshoots. This can reduce the peak voltage and peak power rating of the device/component. In certain embodiments, such as in network switches and routers, the redundant operations may include queries against a content addressable memory (CAM), such as a ternary content addressable memory (TCAM). Moreover, in an embodiment the queries may be designed to always, or at least be highly likely to, miss the entries in the CAM, thereby ensuring maximum power usage. In another embodiment, the redundant operations include read operations on a random access memory (RAM). In other embodiments, redundant operations may be performed with respect to other power-intensive subsystems.
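
    A toy sketch of the idea, with entirely hypothetical keys and sizes: when the lookup pipeline has idle slots, they are filled with dummy queries chosen to miss every installed CAM entry, so the number of lookups per cycle, and hence the power draw, stays roughly constant.

```python
# Illustrative sketch (hypothetical parameters): pad each cycle's real lookups
# with guaranteed-miss queries so per-cycle activity stays flat.

TCAM_ENTRIES = {0x0A000000, 0x0A000001, 0xC0A80001}   # toy "installed" keys
MISS_KEY = 0xFFFFFFFF                                 # reserved key, never installed
LOOKUPS_PER_CYCLE = 4

def lookup(key: int) -> bool:
    return key in TCAM_ENTRIES

def run_cycle(real_keys):
    """Perform the real lookups for this cycle, then pad with dummy misses."""
    work = list(real_keys)[:LOOKUPS_PER_CYCLE]
    while len(work) < LOOKUPS_PER_CYCLE:
        work.append(MISS_KEY)              # redundant query, guaranteed miss
    return [lookup(k) for k in work]

# A lightly loaded cycle still issues LOOKUPS_PER_CYCLE lookups.
print(run_cycle([0x0A000000]))             # [True, False, False, False]
```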

    Network chip yield improvement architectures and techniques

    Publication No.: US11481350B1

    Publication Date: 2022-10-25

    Application No.: US16940003

    Filing Date: 2020-07-27

    Applicant: Innovium, Inc.

    IPC Classification: G06F13/40 H03M9/00 G06F13/42

    Abstract: Network chip utility is improved using multi-core architectures with auxiliary wiring between cores to permit cores to utilize components from otherwise inactive cores. The architectures permit, among other advantages, the re-purposing of functional components that reside in defective or otherwise non-functional cores. For instance, a four-core network chip with certain defects in three or even four cores could still, through operation of the techniques described herein, be utilized in a two or even three-core capacity. In an embodiment, the auxiliary wiring may be used to redirect data from a Serializer/Deserializer (“SerDes”) block of a first core to packet-switching logic on a second core, and vice-versa. In an embodiment, the auxiliary wiring may be utilized to circumvent defective components in the packet-switching logic itself. In an embodiment, a core may utilize buffer memories, forwarding tables, or other resources from other cores instead of or in addition to its own.
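
    A hypothetical sketch of a core-remapping table for the SerDes case described above: each working SerDes block is steered, over auxiliary wiring, to whichever core's packet-switching logic is healthy, so partially defective dies can still be operated at a reduced core count.

```python
# Hypothetical sketch: pair working SerDes blocks with working packet-switching
# blocks, preferring each core's own logic and borrowing another core's otherwise.

from dataclasses import dataclass

@dataclass
class Core:
    serdes_ok: bool
    switching_ok: bool

def build_remap(cores):
    """Map SerDes-block index -> index of the core whose switching logic it uses."""
    good_serdes = [i for i, c in enumerate(cores) if c.serdes_ok]
    good_switch = [i for i, c in enumerate(cores) if c.switching_ok]
    remap = {}
    for s in good_serdes:                  # first, keep traffic on-core where possible
        if s in good_switch:
            remap[s] = s
            good_switch.remove(s)
    for s in good_serdes:                  # then borrow switching logic from other cores
        if s not in remap and good_switch:
            remap[s] = good_switch.pop(0)
    return remap

# Core 1 has defective switching logic, core 3 has a defective SerDes block:
cores = [Core(True, True), Core(True, False), Core(True, True), Core(False, True)]
print(build_remap(cores))                  # {0: 0, 2: 2, 1: 3} -> three usable cores
```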

    Efficient buffer utilization for network data units

    Publication No.: US11470016B1

    Publication Date: 2022-10-11

    Application No.: US16924340

    Filing Date: 2020-07-09

    Applicant: Innovium, Inc.

    Abstract: Approaches, techniques, and mechanisms are disclosed for efficiently buffering data units within a network device. A traffic manager or other network device component receives Transport Data Units (“TDUs”), which are sub-portions of Protocol Data Units (“PDUs”). Rather than buffer an entire TDU together, the component divides the TDU into multiple Storage Data Units (“SDUs”) that can fit in SDU buffer entries within physical memory banks. A TDU-to-SDU Mapping (“TSM”) memory stores TSM lists that indicate which SDU entries store SDUs for a given TDU. Physical memory banks in which the SDUs are stored may be grouped together into logical SDU banks that are accessed together as if a single bank. The TSM memory may include a number of distinct TSM banks, with each logical SDU bank having a corresponding TSM bank. Techniques for maintaining inter-packet and intra-packet linking data compatible with such buffers are also disclosed.
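
    A minimal sketch of the TDU-to-SDU mapping, using assumed data structures and sizes rather than the patented design: a TDU is split into SDUs, the SDUs are written into free entries of a logical SDU bank, and the entry addresses are recorded as a TSM list keyed by the TDU.

```python
# Minimal sketch (assumed structures): write a TDU as SDUs into free entries of
# a logical SDU bank and record the entry addresses as that TDU's TSM list.

SDU_SIZE = 16

class LogicalSduBank:
    def __init__(self, num_entries: int):
        self.entries = [None] * num_entries
        self.free = list(range(num_entries))
        self.tsm = {}                      # TDU id -> list of SDU entry indexes

    def write_tdu(self, tdu_id: int, tdu: bytes):
        sdus = [tdu[i:i + SDU_SIZE] for i in range(0, len(tdu), SDU_SIZE)]
        addrs = []
        for sdu in sdus:
            addr = self.free.pop(0)        # allocate a free SDU entry
            self.entries[addr] = sdu
            addrs.append(addr)
        self.tsm[tdu_id] = addrs           # the TSM list for this TDU

    def read_tdu(self, tdu_id: int) -> bytes:
        return b"".join(self.entries[a] for a in self.tsm[tdu_id])

bank = LogicalSduBank(num_entries=8)
bank.write_tdu(1, b"a transport data unit spanning several SDUs")
print(bank.tsm[1], bank.read_tdu(1))
```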

    High-throughput multi-node integrated circuits

    Publication No.: US11265268B1

    Publication Date: 2022-03-01

    Application No.: US16780214

    Filing Date: 2020-02-03

    Applicant: Innovium, Inc.

    IPC Classification: H04L12/935 H04L49/00 H04L1/00

    Abstract: The technology described in this document can be embodied in an integrated circuit device comprising a first data processing unit, which includes one or more input ports for receiving incoming data, one or more inter-unit data links that couple the first data processing unit to one or more other data processing units, a first ingress management module connected to the one or more inter-unit data links and configured to store the incoming data and forward the stored data to the one or more inter-unit data links as multiple data packets, and a first ingress processing module. The integrated circuit device also comprises a second data processing unit that includes one or more output ports for transmitting outgoing data and a second ingress management module connected to the one or more inter-unit data links.
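
    A skeletal, hypothetical model of the arrangement described above: data arriving at the first processing unit is stored by its ingress management module and forwarded over an inter-unit link, as packets, toward the unit that owns the output ports.

```python
# Skeletal sketch (hypothetical model): an ingress management module buffers
# incoming data and forwards it over an inter-unit data link as packets.

from collections import deque

class IngressManagementModule:
    def __init__(self):
        self.store = deque()

    def accept(self, data: bytes):
        self.store.append(data)            # store the incoming data

    def forward(self, link, packet_size: int):
        while self.store:
            data = self.store.popleft()
            for i in range(0, len(data), packet_size):
                link.append(data[i:i + packet_size])   # forward as packets

inter_unit_link = deque()                  # models one inter-unit data link
unit1_ingress = IngressManagementModule()  # on the first data processing unit
unit1_ingress.accept(b"payload destined for a port on the second unit")
unit1_ingress.forward(inter_unit_link, packet_size=16)

# The second unit drains the link and transmits the data via its output ports.
print(len(inter_unit_link), b"".join(inter_unit_link))
```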

    Programmable delay-based power stabilization

    Publication No.: US11567560B1

    Publication Date: 2023-01-31

    Application No.: US16399652

    Filing Date: 2019-04-30

    Applicant: Innovium, Inc.

    Abstract: Power demands of a computing system, such as a network device and/or a component thereof, are stabilized by introducing a programmable delay into identical or substantially similar subsystems within an integrated circuit. Each subsystem reads a potentially different delay value from an associated storage, memory, or input, and waits for some time indicated by the delay value before beginning execution. For example, in a group of identical subsystems that process data concurrently, some or all of the subsystems begin processing their respective data after a different amount of delay, thus staggering their respective executions and lowering the risk of aligned edges when some or all of the subsystems concurrently step their power demands up or down. This, in turn, reduces peak power and voltage. In an embodiment, rather than being fixed at the design stage, each subsystem's delay value is programmable at some point after fabrication.
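
    A toy software analogue of the staggering idea, with hypothetical delay values: each identical subsystem reads its programmed delay and waits that long before starting, so their start edges (and thus their power steps) do not align.

```python
# Toy sketch (hypothetical): subsystems read per-instance delay values and
# stagger their starts. In hardware the delays would come from registers or
# fuses programmed after fabrication; here they are plain constants.

import threading
import time

PROGRAMMED_DELAYS_MS = [0, 3, 6, 9]        # one delay value per subsystem

def subsystem(index: int, delay_ms: int, log: list):
    time.sleep(delay_ms / 1000.0)          # staggered start
    log.append((index, time.monotonic()))  # stand-in for the power-hungry work

log = []
threads = [threading.Thread(target=subsystem, args=(i, d, log))
           for i, d in enumerate(PROGRAMMED_DELAYS_MS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Start times are spread out rather than coincident.
print(sorted(log))
```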

    Reducing power consumption in an electronic device

    Publication No.: US11159455B1

    Publication Date: 2021-10-26

    Application No.: US16234697

    Filing Date: 2018-12-28

    Applicant: Innovium, Inc.

    Abstract: Ingress packet processors in a device receive network packets from ingress ports. A crossbar in the device receives, from the ingress packet processors, packet data of the packets and transmits information about the packet data to a plurality of traffic managers in the device. Each traffic manager computes a total amount of packet data to be written to buffers across the plurality of traffic managers, where each traffic manager manages one or more buffers that store packet data. Each traffic manager compares the total amount of packet data to one or more threshold values. Upon determining that the total amount of packet data is equal to or greater than a threshold value, each traffic manager drops a portion of the packet data, and writes a remaining portion of the packet data to the buffers managed by the traffic manager.
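
    A simplified sketch of the threshold comparison, with hypothetical byte counts and drop fraction: each traffic manager sums the packet data destined for all traffic managers this cycle and, if the total meets or exceeds a threshold, buffers only a remaining portion.

```python
# Simplified sketch (hypothetical parameters): drop a portion of packet data
# when the device-wide total to be written meets or exceeds a threshold.

THRESHOLD_BYTES = 1000
DROP_FRACTION = 0.25       # portion dropped when the threshold is crossed

def admit(per_tm_bytes: dict) -> dict:
    """Return, per traffic manager, how many bytes get written to its buffers."""
    total = sum(per_tm_bytes.values())     # total across all traffic managers
    written = {}
    for tm, nbytes in per_tm_bytes.items():
        if total >= THRESHOLD_BYTES:
            written[tm] = int(nbytes * (1.0 - DROP_FRACTION))   # drop a portion
        else:
            written[tm] = nbytes
    return written

print(admit({"tm0": 300, "tm1": 200, "tm2": 250}))   # under threshold: all written
print(admit({"tm0": 600, "tm1": 500, "tm2": 400}))   # over threshold: 25% dropped
```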

    Instantaneous garbage collection of network data units

    Publication No.: US10999223B1

    Publication Date: 2021-05-04

    Application No.: US16378220

    Filing Date: 2019-04-08

    Applicant: Innovium, Inc.

    IPC Classification: H04L12/883 G06F12/02

    Abstract: Approaches, techniques, and mechanisms are disclosed for reutilizing discarded link data in a buffer space for buffering data units in a network device. Rather than wasting resources on garbage collection of such link data when a data unit is dropped, the link data is used as a free list that indicates buffer entries in which new data may be stored. In an embodiment, operations of the buffer may further be enhanced by re-using the discarded link data as link data for a new data unit. The link data for a formerly buffered data unit may be assigned exclusively to a new data unit, which uses the discarded link data to determine where to store its constituent data. As a consequence, the discarded link data actually serves as valid link data for the new data unit, and new link data need not be generated for the new data unit.
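
    An illustrative sketch of the reuse idea, with assumed structures: the link chain of a dropped data unit already enumerates buffer entries that are now free, so it is handed as-is to the next data unit, which writes its cells along the same chain and keeps the old links as its own link data, with no garbage-collection pass.

```python
# Illustrative sketch (assumed structures): a dropped data unit's link chain
# doubles as the free list and as link data for the next data unit.

class Buffer:
    def __init__(self, size: int):
        self.entries = [None] * size
        self.links = [None] * size         # entry -> next entry in the chain

    def write_along_chain(self, head: int, cells):
        """Store cells by following an existing link chain starting at head."""
        addr = head
        for cell in cells:
            self.entries[addr] = cell
            addr = self.links[addr]
        return head                        # the old chain is the new link data

buf = Buffer(size=6)
# A previously buffered (now dropped) data unit occupied entries 4 -> 2 -> 5.
buf.links[4], buf.links[2], buf.links[5] = 2, 5, None
dropped_head = 4

# The discarded chain serves directly as the free list / link data for the
# next data unit's cells; no new link data is generated.
new_head = buf.write_along_chain(dropped_head, [b"cell0", b"cell1", b"cell2"])
print(new_head, buf.entries[4], buf.entries[2], buf.entries[5])
```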

    Efficient buffer utilization for network data units

    Publication No.: US10938739B1

    Publication Date: 2021-03-02

    Application No.: US16186349

    Filing Date: 2018-11-09

    Applicant: Innovium, Inc.

    Abstract: Approaches, techniques, and mechanisms are disclosed for efficiently buffering data units within a network device. A traffic manager or other network device component receives Transport Data Units (“TDUs”), which are sub-portions of Protocol Data Units (“PDUs”). Rather than buffer an entire TDU together, the component divides the TDU into multiple Storage Data Units (“SDUs”) that can fit in SDU buffer entries within physical memory banks. A TDU-to-SDU Mapping (“TSM”) memory stores TSM lists that indicate which SDU entries store SDUs for a given TDU. Physical memory banks in which the SDUs are stored may be grouped together into logical SDU banks that are accessed together as if a single bank. The TSM memory may include a number of distinct TSM banks, with each logical SDU bank having a corresponding TSM bank. Techniques for maintaining inter-packet and intra-packet linking data compatible with such buffers are also disclosed.
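
    This abstract matches the related US11470016B1 entry above, where a sketch of the TDU-to-SDU mapping is given; as a complement, the following hypothetical sketch illustrates the intra-packet linking also mentioned here, chaining the buffered TDUs of one PDU so the whole packet can be read back in order from its head pointer.

```python
# Complementary sketch (assumed structures): intra-packet linking chains the
# TDUs of a single PDU in arrival order.

class IntraPacketLinks:
    def __init__(self):
        self.next_tdu = {}                 # TDU buffer address -> next TDU address
        self.head = {}                     # PDU id -> first TDU address
        self.tail = {}                     # PDU id -> last TDU address

    def link(self, pdu_id: int, tdu_addr: int):
        """Append a newly buffered TDU to its PDU's chain."""
        if pdu_id not in self.head:
            self.head[pdu_id] = tdu_addr
        else:
            self.next_tdu[self.tail[pdu_id]] = tdu_addr
        self.tail[pdu_id] = tdu_addr

    def walk(self, pdu_id: int):
        addr = self.head.get(pdu_id)
        while addr is not None:
            yield addr
            addr = self.next_tdu.get(addr)

links = IntraPacketLinks()
for addr in (17, 3, 42):                   # TDU buffer addresses, in arrival order
    links.link(pdu_id=7, tdu_addr=addr)
print(list(links.walk(7)))                 # [17, 3, 42]
```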

    Posted operation data control

    Publication No.: US10789001B1

    Publication Date: 2020-09-29

    Application No.: US15357464

    Filing Date: 2016-11-21

    Applicant: Innovium, Inc.

    IPC Classification: G06F13/36 G06F3/06 G06F13/42

    Abstract: Methods, systems, and apparatus, including a managed device comprising memory storage, one or more control registers, and circuitry to perform operations of receiving, from a control system, one or more posted write operations directed to the one or more control registers; based on the one or more posted write operations, storing, in the one or more control registers, data specifying at least a system address of a memory of the control system, where the system address corresponds to a starting address of a predetermined section of the memory; and transferring managed device data from the memory storage to the predetermined section of the memory of the control system by issuing, to the control system and based on the system address of the memory, one or more posted write operations to write the managed device data to the predetermined section of the memory.
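
    A rough sketch of the posted-write exchange, with all names and sizes hypothetical: the control system posts writes that load the managed device's control registers with the starting system address of a memory section, and the device then transfers its data by issuing posted writes into that section.

```python
# Rough sketch (hypothetical names): control registers receive a destination
# system address via posted writes; the managed device then pushes its data to
# that predetermined memory section with further posted writes.

class ControlSystem:
    def __init__(self, mem_size: int):
        self.memory = bytearray(mem_size)

    def posted_write(self, address: int, data: bytes):
        self.memory[address:address + len(data)] = data   # fire-and-forget write

class ManagedDevice:
    def __init__(self, storage: bytes):
        self.storage = storage
        self.control_regs = {"dest_addr": None}

    def posted_write_to_reg(self, reg: str, value: int):
        self.control_regs[reg] = value     # register loaded by the control system

    def transfer(self, host: ControlSystem, chunk: int = 8):
        """Push device data to the host memory section named by the registers."""
        base = self.control_regs["dest_addr"]
        for off in range(0, len(self.storage), chunk):
            host.posted_write(base + off, self.storage[off:off + chunk])

host = ControlSystem(mem_size=64)
device = ManagedDevice(storage=b"telemetry+status block")
device.posted_write_to_reg("dest_addr", 16)    # starting address of the section
device.transfer(host)
print(bytes(host.memory[16:16 + len(device.storage)]))
```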