Abstract:
A window processing system is disclosed for fabricating window frames. A welding station has welding heads to weld or fuse vinyl frame parts together. The frames are taken to a cleaning station having a number of cleaning heads that are independently actuated to move into position relative to selected portions of the window frame to clean off burrs, weld beads, and the like from the welded window frame. The cleaning process involves both training of a controller to recognize certain frame profiles and a compensation process for adjusting the cleaning operation for individual variations in the frame that occur during fabrication. Real-time cleaning involves coupling a visual sensor to a moving support that also supports a cleaning tool.
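A minimal sketch of the training-plus-compensation idea above, assuming a trained nominal cleaning path per frame profile and a per-frame offset reported by the visual sensor; the names (FrameProfile, compensate_path) and the simple translational offset are illustrative assumptions, not the disclosed implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # nominal tool-path point (x, y) in mm


@dataclass
class FrameProfile:
    """Trained reference for one frame profile: the nominal cleaning path."""
    name: str
    nominal_path: List[Point]


def compensate_path(profile: FrameProfile,
                    measured_offset: Tuple[float, float]) -> List[Point]:
    """Shift the trained nominal path by the offset the visual sensor
    reports for this individual frame (fabrication variation)."""
    dx, dy = measured_offset
    return [(x + dx, y + dy) for (x, y) in profile.nominal_path]


if __name__ == "__main__":
    profile = FrameProfile("casement_corner", [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)])
    # Suppose the sensor sees the weld bead 0.4 mm right and 0.1 mm up of nominal.
    print(compensate_path(profile, (0.4, 0.1)))
```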
Abstract:
A system comprising a host and a network interface card or host bus adapter. The host is configured to perform transport protocol processing. The network interface card is configured to directly place data from a network into a buffer memory in the host.
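A minimal sketch of the division of labor described above, assuming the host keeps the transport protocol state while the adapter copies payload bytes straight into a host buffer; HostBuffer, place(), and host_transport_processing() are hypothetical names used only for illustration.

```python
class HostBuffer:
    """Buffer memory in the host, pre-arranged with the adapter."""

    def __init__(self, size: int):
        self.mem = bytearray(size)

    def place(self, offset: int, payload: bytes) -> None:
        """Direct data placement: the adapter copies payload straight into
        host memory at the offset the transport layer expects."""
        self.mem[offset:offset + len(payload)] = payload


def host_transport_processing(segment_offset: int, payload: bytes,
                              buf: HostBuffer) -> None:
    """The host does the transport protocol work (sequencing, acking) and
    tells the adapter where the payload belongs; here the copy stands in
    for the adapter's placement into host memory."""
    buf.place(segment_offset, payload)
    # ...the host would go on to update sequence numbers, generate ACKs, etc.


if __name__ == "__main__":
    buf = HostBuffer(64)
    host_transport_processing(0, b"hello", buf)
    print(bytes(buf.mem[:5]))  # b'hello'
```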
Abstract:
Long-acting injectable formulations are formed from a) a therapeutic agent selected from insecticides, acaricides, parasiticides, growth enhancers or oil-soluble NSAIDs, b) hydrogenated castor oil, and c) a hydrophobic carrier comprising i) triacetin, benzyl benzoate, ethyl oleate, or a combination thereof, and ii) acylated monoglycerides, propyl dicaprylates/dicaprates, caprylic/capric acid triglycerides or a combination thereof.
Abstract:
There is disclosed a topical multiple-point-application formulation containing a solution of a polymeric material and an avermectin compound (active ingredient) which has been discovered to provide superior efficacy against ectoparasites, such as fleas and ticks, and endoparasites, such as nematodes and heartworms, when compared to conventional formulations. The formulation contains the avermectin active ingredient and up to 50% of the polymeric material.
Abstract:
An AC to DC converter comprises a bridge rectifier followed by a boost circuit. The boost circuit includes an inductor, diode and load capacitor in series and a shunting switch connected to shunt the diode and load capacitor. The control circuit for switching the shunting switch comprises a differential circuit, a multiplier and a duty cycle generator in a feedback loop which maintains a constant output voltage on the capacitor. To eliminate the response to ripple on the output voltage, the differential circuit does not respond to voltages within a dead band.
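A minimal sketch of the dead-band behaviour in the control loop described above; the gain, the 0.5 V band, and the duty-cycle clamp are assumed illustrative values, not parameters from the disclosure.

```python
def dead_band_error(v_out: float, v_ref: float, band: float) -> float:
    """Return the output-voltage error, but ignore it (return 0) while it
    stays inside the dead band, so output ripple does not drive the loop."""
    err = v_ref - v_out
    return 0.0 if abs(err) <= band else err


def duty_cycle(v_out: float, v_ref: float, rectified_line: float,
               gain: float = 0.02, band: float = 0.5) -> float:
    """Multiply the dead-banded error by the rectified line sample (the
    'multiplier' stage) and clamp to a usable switch duty cycle."""
    cmd = gain * dead_band_error(v_out, v_ref, band) * rectified_line
    return max(0.0, min(0.95, cmd))


if __name__ == "__main__":
    # Ripple of +/-0.3 V around a 380 V reference produces no correction...
    print(duty_cycle(380.3, 380.0, rectified_line=1.0))   # 0.0
    # ...but a 5 V sag does.
    print(duty_cycle(375.0, 380.0, rectified_line=1.0))   # 0.1
```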
Abstract:
A method and system comprising a host system and a host bus adapter (HBA). The HBA is configured to handle a Virtual Interface and Transmission Control Protocol (TCP)/Internet Protocol (IP) processing for applications running on the host system.
Abstract:
A system includes a plurality of computers interconnected by a network including one or more switching nodes. The computers transfer messages over virtual circuits established thereamong. A computer, as a source computer for one or more virtual circuits, schedules transmission of messages on a round-robin basis as among the virtual circuits for which it is the source computer. Each switching node which forms part of a path for respective virtual circuits also forwards messages for virtual circuits in a round-robin manner, and a computer, as a destination computer for one or more virtual circuits, schedules processing of received messages in a round-robin manner. Round-robin transmission, forwarding and processing at the destination provide a degree of fairness in message transmission as among the virtual circuits established over the network. In addition, messages are transmitted in one or more cells, with the round-robin transmission being on a cell basis, so as to reduce delays which may occur for short messages if a long message were transmitted in full for one virtual circuit before beginning transmission of a short message for another virtual circuit. For each virtual circuit, the destination computer and each switching node along the path for the virtual circuit can generate a virtual circuit flow control message for transmission to the source computer to temporarily limit transmission over the virtual circuit if the amount of resources being taken up by messages for the virtual circuit exceeds predetermined thresholds, further providing fairness as among the virtual circuits. In addition, each switching node or computer can generate link flow control messages for transmission to neighboring devices in the network to temporarily limit transmission thereto if the amount of resources taken up by all virtual circuits exceeds predetermined thresholds, so as to reduce the likelihood of message loss.
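A minimal sketch of the per-cell round-robin transmission described above, assuming fixed-size cells and per-virtual-circuit queues; the 48-byte cell size and the scheduler structure are illustrative, and the flow-control messages are omitted.

```python
from collections import deque
from typing import Deque, Dict, List, Tuple

CELL_SIZE = 48  # illustrative cell payload size


def cells(message: bytes) -> List[bytes]:
    """Split a message into fixed-size cells."""
    return [message[i:i + CELL_SIZE] for i in range(0, len(message), CELL_SIZE)]


def round_robin_transmit(vc_queues: Dict[int, Deque[bytes]]) -> List[Tuple[int, bytes]]:
    """Emit one cell per virtual circuit per pass, so a short message on one
    circuit is not held up behind a long message on another."""
    schedule = []
    while any(vc_queues.values()):
        for vc_id, queue in vc_queues.items():
            if queue:
                schedule.append((vc_id, queue.popleft()))
    return schedule


if __name__ == "__main__":
    queues = {1: deque(cells(b"A" * 120)),   # long message on virtual circuit 1
              2: deque(cells(b"B" * 40))}    # short message on virtual circuit 2
    for vc_id, cell in round_robin_transmit(queues):
        print(vc_id, len(cell))              # cells from VC 1 and VC 2 interleave
```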
Abstract:
A system comprises a plurality of devices which communicate over a network. At least one of the devices transmits information to at least one other of the devices in information messages over a virtual circuit established therebetween using the network. The other device can transmit information concerning, for example, predetermined conditions in the other device in connection with the virtual circuit using signalling messages, which are transmitted by the other device over the virtual circuit to the one device. The one device includes a plurality of mailboxes associated with the virtual circuit, and the other device, that is, the device that is to transmit signalling messages, includes a transmit signal queue including a plurality of entries each associated with one of the mailboxes. A processor on the other device, to enable transmission of a signalling message including the condition information to be transferred to a mailbox, loads the condition information to be transferred into the transmit signal queue entry associated with the mailbox. The other device, in turn, transmits the signalling message to the one device, which loads the condition information in the appropriate mailbox. A processor on the one device retrieves the condition information from the mailbox to determine the condition information as communicated thereto by the other device.
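A minimal sketch of the mailbox / transmit-signal-queue pairing described above; the class and method names are hypothetical stand-ins for the two devices and their structures.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Tuple


@dataclass
class ReceiverDevice:
    """The 'one device': holds a set of mailboxes for the virtual circuit."""
    mailboxes: Dict[int, Any] = field(default_factory=dict)

    def deliver(self, mailbox_id: int, condition: Any) -> None:
        # The arriving signalling message loads the condition info into the mailbox.
        self.mailboxes[mailbox_id] = condition

    def read(self, mailbox_id: int) -> Any:
        # The receiver's processor retrieves the condition info from the mailbox.
        return self.mailboxes.get(mailbox_id)


@dataclass
class SenderDevice:
    """The 'other device': entries in its transmit signal queue are each
    associated with one of the receiver's mailboxes."""
    transmit_signal_queue: List[Tuple[int, Any]] = field(default_factory=list)

    def post_condition(self, mailbox_id: int, condition: Any) -> None:
        # The sender's processor loads the condition info into the queue entry.
        self.transmit_signal_queue.append((mailbox_id, condition))

    def transmit(self, receiver: ReceiverDevice) -> None:
        # Each queued entry becomes a signalling message over the virtual circuit.
        while self.transmit_signal_queue:
            mailbox_id, condition = self.transmit_signal_queue.pop(0)
            receiver.deliver(mailbox_id, condition)


if __name__ == "__main__":
    rx, tx = ReceiverDevice(), SenderDevice()
    tx.post_condition(0, "receive-buffer-low")
    tx.transmit(rx)
    print(rx.read(0))
```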
Abstract:
A memory controller receives reads, memory writes, and cache writes. A pending read is selected and issued to memory. When a response is received from memory, all cache writes are checked to determine whether any correspond to the pending read. If there is a corresponding cache write, the data from the corresponding cache write is used to respond to the pending read. Otherwise, prior memory writes are checked to determine whether any correspond to the pending read. If there is a corresponding prior memory write, the data from the corresponding prior memory write is used to respond to the pending read. A coherency check from associated caches may also be performed, and the appropriate data returned to the processor that requested the read. Three queues may control the order in which memory access is performed. A read queue that contains read requests is typically given highest priority, and therefore reads are generally serviced first. A wait queue contains read requests and memory write requests, and is incremented to the pending read before the pending read is completed. As the wait queue is incremented, memory writes from the wait queue are entered onto a ready queue. Each request retrieved from the wait queue is checked against pending requests in the ready queue. Cache writes are entered directly onto the ready queue. When either a conflict is detected for the pending read or the ready queue contains a certain number of requests, the ready queue is flushed.
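A minimal sketch of the read-response ordering described above, assuming the controller can match writes to a read by address; the queue management and flush policy are omitted, and all names are illustrative.

```python
from typing import Dict, List, Optional, Tuple


def respond_to_read(addr: int,
                    cache_writes: List[Tuple[int, bytes]],
                    memory_writes: List[Tuple[int, bytes]],
                    memory: Dict[int, bytes]) -> Optional[bytes]:
    # 1. A cache write to the same address supplies the read data.
    for write_addr, data in reversed(cache_writes):
        if write_addr == addr:
            return data
    # 2. Otherwise a prior memory write to the same address supplies it.
    for write_addr, data in reversed(memory_writes):
        if write_addr == addr:
            return data
    # 3. Otherwise the response from memory itself is used.
    return memory.get(addr)


if __name__ == "__main__":
    memory = {0x100: b"old"}
    print(respond_to_read(0x100, [], [(0x100, b"mw")], memory))   # b'mw'
    print(respond_to_read(0x100, [(0x100, b"cw")], [], memory))   # b'cw'
    print(respond_to_read(0x100, [], [], memory))                 # b'old'
```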
Abstract:
A fault tolerant power supply comprises a first rectifier and a second rectifier with a boost circuit for correcting line current harmonics. The second rectifier, the boost circuit and output diodes are connected in parallel with the first rectifier. A capacitor circuit is charged through the second rectifier and boost converter when a 240 volt line input is present, but the circuit is charged through the first rectifier when a 120 volt line input is present. With failure of the boost converter circuit or with removal of the boost converter circuit from the system, the capacitor circuit is charged through the first rectifier.
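A minimal sketch of the charging-path selection described above, assuming a simple line-voltage threshold stands in for distinguishing a 240 volt input from a 120 volt input; in the disclosure this behaviour follows from the circuit topology rather than from software.

```python
def charging_path(line_voltage: float, boost_ok: bool) -> str:
    """Pick which rectifier path charges the capacitor circuit."""
    if line_voltage >= 200.0 and boost_ok:
        # Nominal 240 V input with a healthy boost converter: the second
        # rectifier and boost converter charge the capacitor circuit.
        return "second rectifier + boost converter"
    # 120 V input, or boost converter failed or removed: the capacitor
    # circuit is charged through the first rectifier.
    return "first rectifier"


if __name__ == "__main__":
    print(charging_path(240.0, boost_ok=True))    # second rectifier + boost converter
    print(charging_path(240.0, boost_ok=False))   # first rectifier
    print(charging_path(120.0, boost_ok=True))    # first rectifier
```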