Abstract:
A method of compiling code includes assigning an endian type to data. An endian flip operation is performed based on the endian type of the data and a target system. Other embodiments are described and claimed.
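A minimal C sketch of the idea described above, assuming a compiler-style helper that records an endian tag per datum and emits a byte swap only when the data's assigned endianness differs from the target system's; the type and function names are illustrative, not the claimed implementation.

```c
#include <stdint.h>

/* Illustrative endian tags that could be assigned to data. */
typedef enum { ENDIAN_LITTLE, ENDIAN_BIG } endian_t;

/* Unconditional 32-bit byte swap. */
static uint32_t byte_swap32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

/* Perform an endian flip only when the endian type assigned to the
 * data differs from the endianness of the target system. */
uint32_t endian_flip(uint32_t value, endian_t data_endian, endian_t target_endian) {
    return (data_endian == target_endian) ? value : byte_swap32(value);
}
```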
Abstract:
A network device comprises an ingress processor (400) and an egress processor (450) connected via a flow control bus (485). Flow control messages received by a traffic manager module (489) included in the ingress processor are sent to a transmit path (498) and a control path (497).
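A rough C sketch of the message routing described above, assuming simple array-backed queues standing in for the transmit path and control path; the traffic manager forwards each received flow control message to both. All structures and names are illustrative assumptions, not the device's actual interfaces.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical flow control message and path queue; illustrative only. */
typedef struct { uint32_t port; uint32_t credits; } flow_ctrl_msg_t;

typedef struct {
    flow_ctrl_msg_t slots[64];
    size_t count;
} msg_queue_t;

static void msg_queue_push(msg_queue_t *q, const flow_ctrl_msg_t *m) {
    if (q->count < 64)
        q->slots[q->count++] = *m;   /* drop silently when full */
}

/* Traffic manager in the ingress processor: each flow control message
 * received over the flow control bus is forwarded to both the
 * transmit path and the control path. */
void traffic_manager_dispatch(const flow_ctrl_msg_t *msg,
                              msg_queue_t *transmit_path,
                              msg_queue_t *control_path) {
    msg_queue_push(transmit_path, msg);
    msg_queue_push(control_path, msg);
}
```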
Abstract:
Method and apparatus to support expansion of compute engine code space by sharing adjacent control stores using interleaved addressing schemes. Instructions corresponding to an original instruction thread are partitioned into multiple interleaved sequences that are stored in respective control stores. During thread execution, instructions are retrieved from the control stores in a repeated order based on the interleaving scheme. For example, in one embodiment two compute engines share two control stores, so instructions for a given thread are sequentially loaded from the two stores in an alternating manner. In another embodiment, four control stores are shared by four compute engines; in this case the instructions in a thread are interleaved across the four stores, and each store is accessed every fourth instruction in the code sequence. Schemes are also provided for handling branching operations to maintain synchronized access to the control stores.
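A simplified C sketch of interleaved instruction fetch under the assumption of N adjacent control stores modeled as flat arrays: instruction i of the original thread is placed in store i mod N at local address i / N, so sequential execution cycles through the stores in a fixed order. The array representation and all names are illustrative, not the hardware addressing logic.

```c
#include <stdint.h>

#define NUM_STORES   2      /* e.g., two compute engines sharing two control stores */
#define STORE_DEPTH  4096   /* instructions per control store (illustrative) */

typedef uint64_t instr_t;

/* Adjacent control stores holding the interleaved instruction sequences. */
static instr_t control_store[NUM_STORES][STORE_DEPTH];

/* Place instruction 'pc' of the original thread into the interleaved stores. */
void store_instruction(uint32_t pc, instr_t instr) {
    control_store[pc % NUM_STORES][pc / NUM_STORES] = instr;
}

/* Fetch during execution: consecutive program-counter values alternate
 * between stores, so each store is accessed every NUM_STORES-th instruction. */
instr_t fetch_instruction(uint32_t pc) {
    return control_store[pc % NUM_STORES][pc / NUM_STORES];
}
```

Under this scheme a branch target simply re-derives both the store index and the local address from the new program counter, which is one way the per-store accesses could remain synchronized.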
Abstract:
This invention relates to a fluid dispenser in which a siphon tube is positively directed to the desired/optimum location at the base of the dispenser's reservoir by a directing device that retains a portion of the siphon tube. Directing the siphon tube in this manner allows the user to dispense the fluid contained within the reservoir regardless of how much fluid remains, while preventing contamination of the user or the environment.
Abstract:
Methods and apparatus, including computer program products, implementing techniques for monitoring a state of a device of a switched fabric network, the device including on-chip queues to store queue descriptors and a data buffer to store data packets, each queue descriptor having a corresponding data packet; detecting a first trigger condition to transition the device from a first state to a second state; and recovering space in the data buffer in response to detecting the first trigger condition, the recovering comprising selecting one or more of the on-chip queues for discard and removing from the data buffer the data packets corresponding to the queue descriptors in the selected on-chip queues.
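A condensed C sketch of the recovery step, under assumed data structures: when the trigger condition fires, queues selected for discard are walked and the data packets referenced by their queue descriptors are released from the data buffer. The structure layout, selection flag, and free routine are illustrative assumptions, not the claimed design.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative structures; not the actual on-chip layout. */
typedef struct {
    uint32_t packet_offset;   /* location of the packet in the data buffer */
    uint32_t packet_len;
} queue_descriptor_t;

typedef struct {
    queue_descriptor_t descs[128];
    size_t count;
    bool selected_for_discard;
} onchip_queue_t;

/* Stub: in hardware this would return buffer space to the free pool. */
static void data_buffer_free(uint32_t offset, uint32_t len) {
    (void)offset; (void)len;
}

/* On the first trigger condition (first state -> second state), recover
 * data buffer space by removing the packets that belong to the queues
 * selected for discard. */
void recover_buffer_space(onchip_queue_t *queues, size_t num_queues) {
    for (size_t q = 0; q < num_queues; ++q) {
        if (!queues[q].selected_for_discard)
            continue;
        for (size_t d = 0; d < queues[q].count; ++d)
            data_buffer_free(queues[q].descs[d].packet_offset,
                             queues[q].descs[d].packet_len);
        queues[q].count = 0;   /* its queue descriptors are now empty */
    }
}
```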