Abstract:
A computer system having a plurality of processing resources, including a subsystem for scheduling and dispatching processing jobs to a plurality of hardware accelerators. The subsystem further comprises a job requestor for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to the plurality of hardware accelerators; and multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head-of-queue entry, dynamically sharing a pool of queue entries, having configurable queue depth limits, and means for removing one or more jobs across all queues.
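To make the queue organization concrete, the following is a minimal C sketch of the dispatching scheme described above, not the patented implementation: each of several hardware queues owns one dedicated head-of-queue entry, additional entries are drawn dynamically from a shared pool subject to a configurable per-queue depth limit, and a job can be removed across all queues. All names, sizes, and the array-based layout are illustrative assumptions.

#include <stdio.h>
#include <string.h>

#define NUM_QUEUES       4   /* hypothetical: one queue per accelerator engine */
#define SHARED_POOL_SIZE 16  /* entries shared dynamically among all queues */

typedef struct {
    int jobs[1 + SHARED_POOL_SIZE]; /* slot 0 is the dedicated head-of-queue entry */
    int depth;                      /* jobs queued; slots beyond 0 are drawn from the shared pool */
    int depth_limit;                /* configurable per-queue depth limit */
} hw_queue;

static hw_queue queues[NUM_QUEUES];
static int pool_free = SHARED_POOL_SIZE;

/* Enqueue a job request. The dedicated head slot never needs a pool entry,
 * so an empty queue can always accept one job; deeper queuing competes for
 * the shared pool and is bounded by the configurable depth limit. */
static int enqueue(int q, int job)
{
    hw_queue *hq = &queues[q];
    if (hq->depth == 0) {
        hq->jobs[0] = job;
        hq->depth = 1;
        return 0;
    }
    if (hq->depth >= hq->depth_limit || pool_free == 0)
        return -1;                  /* back-pressure the job requestor */
    hq->jobs[hq->depth++] = job;
    pool_free--;
    return 0;
}

/* Pop and return the head job for dispatch to the accelerator engine. */
static int dispatch(int q)
{
    hw_queue *hq = &queues[q];
    if (hq->depth == 0)
        return -1;
    int job = hq->jobs[0];
    if (hq->depth > 1)
        pool_free++;                /* the promoted entry's pool slot is freed */
    memmove(&hq->jobs[0], &hq->jobs[1], (hq->depth - 1) * sizeof(int));
    hq->depth--;
    return job;
}

/* Remove a job (e.g. a cancelled request) from every queue it appears in. */
static void remove_job(int job)
{
    for (int q = 0; q < NUM_QUEUES; q++) {
        hw_queue *hq = &queues[q];
        for (int i = hq->depth - 1; i >= 0; i--) {
            if (hq->jobs[i] != job)
                continue;
            if (hq->depth > 1)
                pool_free++;        /* queue now holds one fewer pool entry */
            memmove(&hq->jobs[i], &hq->jobs[i + 1],
                    (hq->depth - 1 - i) * sizeof(int));
            hq->depth--;
        }
    }
}

int main(void)
{
    for (int q = 0; q < NUM_QUEUES; q++)
        queues[q].depth_limit = 1 + SHARED_POOL_SIZE / NUM_QUEUES;
    enqueue(0, 100); enqueue(0, 101); enqueue(1, 200);
    remove_job(101);
    printf("q0 dispatches job %d\n", dispatch(0)); /* 100 */
    printf("q1 dispatches job %d\n", dispatch(1)); /* 200 */
    return 0;
}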
Abstract:
A technique for maintaining input/output (I/O) command ordering on a bus includes assigning a channel identifier to I/O commands of an I/O stream. In this case, the channel identifier indicates the I/O commands belong to the I/O stream. A command location indicator is assigned to each of the I/O commands. The command location indicator provides an indication of which one of the I/O commands is a start command in the I/O stream and which of the I/O commands are continue commands in the I/O stream. The I/O commands are issued in a desired completion order. When a first one of the I/O commands does not complete successfully, the I/O commands in the I/O stream are reissued on the bus starting at the first one of the I/O commands that did not complete successfully.
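The ordering and recovery scheme can be illustrated with a simplified, purely sequential C sketch; a real bus would pipeline commands, and the names and structures here are assumptions rather than the patented design. Each command carries a channel identifier and a start/continue location indicator, the stream is issued in the desired completion order, and on the first unsuccessful command the stream is reissued from that command onward.

#include <stdio.h>

enum cmd_loc { CMD_START, CMD_CONTINUE };   /* command location indicator */

typedef struct {
    int          channel_id;  /* identifies the I/O stream the command belongs to */
    enum cmd_loc location;    /* start vs. continue within the stream */
    int          tag;         /* payload identifier, purely illustrative */
} io_cmd;

/* Stand-in for the bus transaction; returns 0 on success, -1 on failure.
 * The command with tag 2 fails once here to exercise the reissue path. */
static int issue_on_bus(const io_cmd *c)
{
    static int injected_failure = 1;
    if (c->tag == 2 && injected_failure) {
        injected_failure = 0;
        return -1;
    }
    printf("bus: channel %d %s cmd tag %d completed\n",
           c->channel_id, c->location == CMD_START ? "start" : "cont.", c->tag);
    return 0;
}

/* Issue the stream in the desired completion order. On the first failure,
 * reissue starting at the command that did not complete successfully. */
static void issue_stream(io_cmd *cmds, int n)
{
    int i = 0;
    while (i < n) {
        if (issue_on_bus(&cmds[i]) == 0) {
            i++;
        } else {
            printf("cmd tag %d failed; reissuing stream from that command\n",
                   cmds[i].tag);
            /* the loop continues at index i: everything from the failed
             * command onward goes back onto the bus in order */
        }
    }
}

int main(void)
{
    io_cmd stream[] = {
        { .channel_id = 7, .location = CMD_START,    .tag = 1 },
        { .channel_id = 7, .location = CMD_CONTINUE, .tag = 2 },
        { .channel_id = 7, .location = CMD_CONTINUE, .tag = 3 },
    };
    issue_stream(stream, 3);
    return 0;
}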
Abstract:
A surface preparation device for cleaning and/or preparing a surface has a handle with a head piece attached to the upper end of the handle. The head piece has a substantially rigid core surrounded at least in part by a membrane-covered foam and is capable of focusing pressure on specific locations across a variety of surface areas. A user grasps the handle and places the head piece on the surface area to be cleaned and/or prepared. When the user exerts pressure on the head piece, it conforms to the shape of the rigid core, which results in a significant amount of pressure being exerted on the surface area to be cleaned and/or prepared. The user then moves the head piece while it is still exerting pressure in order to clean and/or prepare the specific surface area.
Abstract:
An improved method of adding food-additive ingredients to a food product, particularly a reduced fat fried snack product, and an ingredient suspension containing a flowable edible fat, preferably a nondigestible fat, and food-additive ingredients. The method consists of suspending the encapsulated or powdered ingredients in the flowable edible fat and applying the suspension in a controlled amount to the surface of a food product. The preferred food product is a fabricated reduced fat or fat-free potato chip, which is a fried snack made by frying a dough in a nondigestible fat to a moisture content of less than 5%. The ingredient suspension is applied to the surface of the fried snack soon after it emerges from the fryer. The food product has a light, crispy texture with improved crunchiness and flavor, has a fat content of from about 20% to about 38% nondigestible fat, and is fortified with food-additive ingredients.
Abstract:
A method for transferring data between non-contiguous buffers in a memory and an I/O device via a system I/O bus uses a descriptor queue stored in memory. Each descriptor points to a buffer and includes the length of the buffer. The I/O device is provided with the base address of the queue, the length of the queue, and a current address which at initialization is the same as the base address. When data is to be transferred, a device driver located in the processor sends the number of available descriptors (DescrEnq) to the I/O device, which accesses the descriptors individually or in burst mode to gain access to the data buffers identified by the descriptors.
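A minimal C sketch of this descriptor-queue handshake follows; the structure names, queue length, and the ring-style wraparound are assumptions for illustration rather than the patented interface. The driver publishes a count of newly available descriptors (DescrEnq), and the device walks the queue from its current address to reach the non-contiguous buffers.

#include <stdio.h>
#include <stdint.h>

#define QUEUE_LEN 8          /* number of descriptors in the queue (assumed) */

typedef struct {
    void    *buf_addr;       /* points to a (possibly non-contiguous) data buffer */
    uint32_t buf_len;        /* length of that buffer in bytes */
} descriptor;

/* State the device driver programs into the I/O device at initialization. */
typedef struct {
    descriptor *base;        /* base address of the descriptor queue */
    uint32_t    len;         /* length of the queue */
    uint32_t    current;     /* current index; equals the base at init time */
    uint32_t    avail;       /* DescrEnq: descriptors made available by the driver */
} io_device;

/* Driver side: tell the device how many new descriptors are ready. */
static void descr_enq(io_device *dev, uint32_t count)
{
    dev->avail += count;
}

/* Device side: walk the available descriptors (one by one here; a real
 * device could also fetch them in burst mode) and "transfer" each buffer. */
static void device_process(io_device *dev)
{
    while (dev->avail > 0) {
        descriptor *d = &dev->base[dev->current];
        printf("device: transferring %u bytes from buffer %p\n",
               (unsigned)d->buf_len, d->buf_addr);
        dev->current = (dev->current + 1) % dev->len;  /* wrap at queue end */
        dev->avail--;
    }
}

int main(void)
{
    static char buf_a[64], buf_b[128];               /* non-contiguous buffers */
    static descriptor queue[QUEUE_LEN];
    io_device dev = { .base = queue, .len = QUEUE_LEN, .current = 0, .avail = 0 };

    queue[0] = (descriptor){ .buf_addr = buf_a, .buf_len = sizeof buf_a };
    queue[1] = (descriptor){ .buf_addr = buf_b, .buf_len = sizeof buf_b };
    descr_enq(&dev, 2);                              /* driver: 2 descriptors ready */
    device_process(&dev);
    return 0;
}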
Abstract:
A LAN adapter for transferring data frames from a LAN to memory buffers in a processor in which the LAN driver follows either the ODI or the NDIS specification. The adapter accumulates the frame length and compares this to the storage capacity of the buffer. If the frame length does not exceed the buffer capacity and the LAN driver implements the ODI specification, the adapter will indicate good status to the driver. If the frame length exceeds the buffer capacity the adapter will either send bad status to the ODI driver or reuse the buffer and send no status. If the driver follows NDIS, status is sent at the end of the frame.
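The status decision the adapter makes for each received frame can be summarized in a small C sketch; the enum names and the buffer-reuse flag are illustrative assumptions, not the patented logic. An ODI driver gets good status when the accumulated frame length fits the buffer, and either bad status or no status (with buffer reuse) when it does not, while an NDIS driver always gets status at the end of the frame.

#include <stdio.h>

typedef enum { DRIVER_ODI, DRIVER_NDIS } driver_spec;
typedef enum { STATUS_NONE, STATUS_GOOD, STATUS_BAD, STATUS_END_OF_FRAME } status_t;

/* Decide what status (if any) the adapter reports for one received frame.
 * frame_len is accumulated by the adapter as the frame is received;
 * buf_capacity is the size of the host memory buffer the frame goes into;
 * reuse_buffer models the ODI option of reusing the buffer and sending no status. */
static status_t adapter_status(driver_spec spec, unsigned frame_len,
                               unsigned buf_capacity, int reuse_buffer)
{
    if (spec == DRIVER_NDIS)
        return STATUS_END_OF_FRAME;       /* NDIS: status sent at end of frame */

    /* ODI driver */
    if (frame_len <= buf_capacity)
        return STATUS_GOOD;               /* frame fits: indicate good status */
    return reuse_buffer ? STATUS_NONE     /* overflow: reuse buffer, no status */
                        : STATUS_BAD;     /* overflow: report bad status */
}

int main(void)
{
    printf("ODI, 1200B frame into 1536B buffer -> %d (good)\n",
           adapter_status(DRIVER_ODI, 1200, 1536, 0));
    printf("ODI, 2000B frame into 1536B buffer -> %d (bad)\n",
           adapter_status(DRIVER_ODI, 2000, 1536, 0));
    printf("ODI, 2000B frame, buffer reused    -> %d (none)\n",
           adapter_status(DRIVER_ODI, 2000, 1536, 1));
    printf("NDIS frame                         -> %d (end of frame)\n",
           adapter_status(DRIVER_NDIS, 2000, 1536, 0));
    return 0;
}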
Abstract:
A modular server rack cooling structure for cooling at least one server in at least one server rack of a data center assembly includes at least a first supporting member and at least a first heat exchanger. The first heat exchanger is coupled to the first supporting member, which is configured to position the first heat exchanger in heat transfer relationship with the at least one server. The first heat exchanger is not attached to the at least one server rack. The modular server rack cooling structure is also applied to a system that includes at least a first rack and at least a second rack disposed opposite from one another to form a hot aisle or a cold aisle. A method is disclosed for installing additional heat exchangers on the support structure of a modular server rack cooling structure to meet increased cooling capacity requirements without requiring additional space.
Abstract:
In pause time based flow control systems having station-level granularity, a station or switch may detect congestion or incipient congestion and send a flow control frame to an upstream station, commanding that upstream station to temporarily stop (pause) sending data for a period of time specified in the flow control frame. The traffic pause gives the downstream station time to empty its buffers of at least some of the excess traffic it has been receiving. Since each downstream station operates independently in generating flow control frames, it is possible for the same upstream station to receive multiple, overlapping pause commands. If an upstream station which is already paused receives subsequent flow control frames from the same downstream station that triggered the pause, the upstream station's pause timer is rewritten using the pause times in the successive flow control frames. If the upstream station receives flow control frames from different downstream stations, the upstream station updates the pause timer only if the pause time in the most recent flow control message is greater than the remaining part of the previously established pause time.
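The timer-update rule can be sketched in a few lines of C; the structure and field names, the tracking of the governing downstream station, and the omission of the timer's countdown over time are simplifying assumptions, not the full mechanism. A flow control frame from the same downstream station that caused the active pause always rewrites the timer, while a frame from a different downstream station updates it only when its pause time exceeds the remaining pause.

#include <stdio.h>

/* Pause state kept by an upstream station for one of its output ports. */
typedef struct {
    unsigned remaining;      /* pause time still to elapse (e.g. in quanta) */
    int      source_station; /* downstream station that triggered the active pause */
} pause_state;

/* Apply a newly received flow control frame.
 * Same downstream station as the active pause: always rewrite the timer.
 * Different downstream station: update only if the new pause time exceeds
 * what remains of the previously established pause. */
static void on_flow_control_frame(pause_state *ps, int from_station,
                                  unsigned pause_time)
{
    if (ps->remaining == 0 || from_station == ps->source_station ||
        pause_time > ps->remaining) {
        ps->remaining = pause_time;
        ps->source_station = from_station;
    }
    printf("pause frame from station %d (%u): remaining pause is now %u\n",
           from_station, pause_time, ps->remaining);
}

int main(void)
{
    pause_state ps = { 0, -1 };
    on_flow_control_frame(&ps, 1, 100);  /* station 1 pauses us for 100 quanta   */
    on_flow_control_frame(&ps, 1, 40);   /* same station: timer rewritten to 40  */
    on_flow_control_frame(&ps, 2, 20);   /* other station, 20 < 40: ignored      */
    on_flow_control_frame(&ps, 2, 90);   /* other station, 90 > 40: updated      */
    return 0;
}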