Abstract:
A memory controller system includes a memory command storage module to store commands for a plurality of memory banks. The system includes a plurality of control mechanisms, each of which includes first and second pointers, to provide, in combination with a next field in each module location, a linked list of commands for a given one of the plurality of memory banks.
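As an illustration only (none of these names appear in the abstract), the following Python sketch models one plausible reading: a shared command store whose entries carry a next index, plus per-bank head and tail pointers that thread a linked list of pending commands through that store.

    class CommandStore:
        # Shared storage module: each location holds a command and a next field.
        def __init__(self, size):
            self.cmd = [None] * size
            self.next = [None] * size
            self.free = list(range(size))   # pool of unused locations

    class BankControl:
        # Per-bank control mechanism: first (head) and second (tail) pointers.
        def __init__(self):
            self.head = None
            self.tail = None

    def enqueue(store, bank, command):
        slot = store.free.pop()
        store.cmd[slot] = command
        store.next[slot] = None
        if bank.tail is None:               # list for this bank is empty
            bank.head = slot
        else:
            store.next[bank.tail] = slot    # link the new command onto the tail
        bank.tail = slot

    def dequeue(store, bank):
        slot = bank.head
        if slot is None:
            return None                     # no pending commands for this bank
        bank.head = store.next[slot]
        if bank.head is None:
            bank.tail = None
        store.free.append(slot)
        return store.cmd[slot]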
Abstract:
A context forwarding bus efficiently communicates control and data between processing elements in a processor unit having a plurality of processing elements. Control and data information is transferred over a first bus from processing element to processing element.
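A minimal Python sketch of the forwarding idea, with all class and field names assumed for illustration: each processing element passes a (control, data) context along to the next element on the bus.

    from collections import deque

    class ProcessingElement:
        # One element on the chain; holds a small inbound queue of contexts.
        def __init__(self, name):
            self.name = name
            self.inbox = deque()
            self.downstream = None          # next element on the first bus

        def forward(self, control, data):
            # Transfer control and data information to the next element.
            if self.downstream is not None:
                self.downstream.inbox.append((control, data))

    # Chain three elements and push one context from element to element.
    pes = [ProcessingElement("pe%d" % i) for i in range(3)]
    for a, b in zip(pes, pes[1:]):
        a.downstream = b
    pes[0].forward({"op": "classify"}, b"packet payload")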
Abstract:
A network device comprises an ingress processor (400) and an egress processor (450) connected via a flow control bus (485). Flow control messages received by a traffic manager module (489) included in the ingress processor are sent to a transmit path (498) and a control path (497).
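One way to picture the message handling, sketched in Python with assumed names (TrafficManager, on_flow_control) rather than anything stated in the abstract: the traffic manager fans each received flow control message out to a transmit-path queue and a control-path queue.

    import queue

    class TrafficManager:
        # Illustrative model of the traffic manager module on the ingress side.
        def __init__(self):
            self.transmit_path = queue.Queue()   # toward the transmit path
            self.control_path = queue.Queue()    # toward the control path

        def on_flow_control(self, message):
            # Each flow control message is sent to both paths.
            self.transmit_path.put(message)
            self.control_path.put(message)

    tm = TrafficManager()
    tm.on_flow_control({"port": 3, "action": "pause"})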
Abstract:
A method of forming bare silicon substrates is described. A bare silicon substrate is measured, wherein measuring is performed by a non-contact capacitance measurement device to obtain a signal at a point on the substrate. The signal, or a thickness indicated by the signal, is communicated to a controller. An adjusted polishing parameter is determined according to the signal or the thickness indicated by the signal. The bare silicon substrate is then polished on a polisher using the adjusted polishing parameter.
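The measure, adjust, and polish sequence can be read as a simple control step. The Python sketch below is purely illustrative: the conversion constants, the linear thickness model, and the choice of polish time as the adjusted parameter are assumptions, not details from the abstract.

    NOMINAL_UM = 725.0      # assumed target thickness in micrometres
    GAIN_S_PER_UM = 0.02    # assumed controller gain (seconds per micrometre)

    def signal_to_thickness(signal_pf):
        # Placeholder conversion from the capacitance signal to a thickness.
        return 1000.0 / signal_pf

    def adjusted_polish_time(signal_pf, base_time_s=60.0):
        # Controller step: derive a thickness, then adjust the polish time.
        thickness_um = signal_to_thickness(signal_pf)
        return base_time_s + GAIN_S_PER_UM * (thickness_um - NOMINAL_UM)

    # Example: a reading of 1.35 pF implies extra material, so polish longer.
    print(adjusted_polish_time(1.35))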
Abstract:
Method and apparatus to support expansion of compute engine code space by sharing adjacent control stores using interleaved addressing schemes. Instructions corresponding to an original instruction thread are partitioned into multiple interleaved sequences that are stored in respective control stores. During thread execution, instructions are retrieved from the control stores in a repeated order based on the interleaving scheme. For example, in one embodiment two compute engines share two control stores. Thus, instructions for a given thread are sequentially loaded from the control stores in an alternating manner. In another embodiment, four control stores are shared by four compute engines. In this case, the instructions in a thread are interleaved across the four stores, and each store is accessed every fourth instruction in the code sequence. Schemes are also provided for handling branching operations to maintain synchronized access to the control stores.
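A compact way to see the interleaving, as a hedged Python sketch rather than the patent's actual addressing hardware: with N shared control stores, instruction number pc of a thread sits in store pc % N at local address pc // N, so consecutive instructions rotate through the stores (the branch-synchronization schemes mentioned above are not modeled here).

    def fetch(control_stores, pc):
        # Interleaved addressing: store index is pc % N, local address is pc // N.
        n = len(control_stores)
        return control_stores[pc % n][pc // n]

    # Two engines sharing two stores: even instructions land in store 0,
    # odd instructions in store 1; four stores work the same way with pc % 4.
    thread = ["insn%d" % i for i in range(8)]
    stores = [thread[0::2], thread[1::2]]
    assert [fetch(stores, pc) for pc in range(8)] == thread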
Abstract:
A buffer circuit (31), for example a repeater or receiver circuit for a signal wire of an on-chip bus, receives an input signal and produces an output signal. The buffer circuit (31) comprises a first inverting stage (7) and a second inverting stage (9). The second inverting stage (9) provides the drive for the output (5). The first inverting stage (7) has additional circuitry (15, 17, 19, 21, 23, 25, 27, 29) for controlling the strengths of the pull-up path and the pull-down path. The pull-up/pull-down paths are dynamically controlled according to the status of one or more aggressor signals. In one embodiment the switching threshold is lowered only in the worst-case delay scenario, i.e. when the signal wire (3) is at a different logic level from the aggressor signals. In another embodiment, the switching threshold is raised when the signal wire and aggressor signals are all at the same logic level, thereby reducing crosstalk.
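A behavioural Python sketch of the threshold adjustment, combining both embodiments in one function for illustration; the normalized threshold values (0.35, 0.5, 0.65) and the function name are assumptions, not figures from the circuit.

    def switching_threshold(victim_level, aggressor_levels,
                            nominal=0.5, lowered=0.35, raised=0.65):
        # Worst-case delay: victim wire differs from every aggressor -> lower it.
        if all(a != victim_level for a in aggressor_levels):
            return lowered
        # All wires at the same logic level -> raise it to reduce crosstalk.
        if all(a == victim_level for a in aggressor_levels):
            return raised
        return nominal

    print(switching_threshold(0, [1, 1]))   # worst case -> 0.35
    print(switching_threshold(1, [1, 1]))   # all same   -> 0.65
    print(switching_threshold(0, [0, 1]))   # mixed      -> 0.5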