Abstract:
A method of microwave assisted nucleic acid amplification by PCR is disclosed. The method includes denaturing, annealing, and extending a nucleic acid sample, with at least the denaturing and extension steps being carried out under the influence of microwave radiation, while preventing the temperature of the sample from varying more than 40° C. from start to finish, and while maintaining the temperature of the sample from start to finish at no more than 60° C.
Abstract:
A method is disclosed for carrying out microwave assisted chemical reactions. The method includes the steps of placing reactants in a microwave-transparent vessel, placing the vessel and its contents into a microwave cavity, and applying microwave radiation within the cavity to the vessel and its contents while concurrently cooling the vessel externally by conduction.
Abstract:
An aircraft emergency warning system having a wireless transmitter camouflaged as personal effects. The wireless transmitter is capable of transmitting an alarm signal. A cockpit alarm control system is capable of receiving the transmitted alarm signal. The cockpit alarm control system is capable of outputting an activation signal. An aircraft surveillance system is electrically connected to the cockpit alarm control system. The aircraft surveillance system is capable of responding to the activation signal.
Abstract:
A method and apparatus for rapidly and accurately determining the fat and oil content of a sample using microwave drying and NMR analysis is disclosed. The method and apparatus incorporate a low mass, porous, hydrophilic and lipophilic sample pad that ensures that the entire sample is subjected to NMR analysis. The method and apparatus according to the invention are suitable for rapidly determining the fat and oil content of samples collected during a production process and for process or quality control.
Abstract:
A system and method are provided for integrating corneal topographic data and ocular wavefront data with primary ametropia measurements to create a soft contact lens design. Corneal topographic data is used to design a better fitting soft contact lens by achieving a contact lens back surface which is uniquely matched to a particular corneal topography, or which is an averaged shape based on the particular corneal topography. In the case of a uniquely matched contact lens back surface, the unique back surface design also corrects for the primary and higher order optical aberrations of the cornea. Additionally, ocular wavefront analysis is used to determine the total optical aberration present in the eye. The total optical aberration, less any corneal optical aberration corrected utilizing the contact lens back surface, is corrected via the contact lens front surface design. The contact lens front surface is further designed to take into account the conventional refractive prescription elements required for a particular eye. As a result, the lens produced exhibits an improved custom fit, optimal refractive error correction, and improved vision.
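The division of labor between the two lens surfaces reduces to a simple subtraction: the front surface targets the total ocular aberration minus whatever the back surface already corrects. A minimal sketch, with illustrative Zernike-style coefficient names and values that are not from the disclosure:

```python
# Total ocular aberration measured by wavefront analysis, and the
# corneal portion already corrected by the matched back surface.
# Coefficient names and magnitudes are illustrative only.
total_aberration = {"defocus": -2.50, "coma": 0.30, "spherical": 0.15}
corrected_by_back = {"coma": 0.20, "spherical": 0.10}

# Residual aberration to be corrected by the front surface design:
front_surface_target = {
    term: total_aberration[term] - corrected_by_back.get(term, 0.0)
    for term in total_aberration
}
```

Terms the back surface does not address (here, defocus) pass through to the front surface unchanged, alongside the conventional refractive prescription.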
Abstract:
A computer system includes an adaptive memory arbiter for prioritizing memory access requests, including a self-adjusting, programmable request-priority ranking system. The memory arbiter adapts during every arbitration cycle, reducing the priority of any request which wins memory arbitration. Thus, a memory request initially holding a low priority ranking may gradually advance in priority until that request wins memory arbitration. Such a scheme prevents lower-priority devices from becoming “memory-starved.” Because some types of memory requests (such as refresh requests and memory reads) inherently require faster memory access than other requests (such as memory writes), the adaptive memory arbiter additionally integrates a nonadjustable priority structure into the adaptive ranking system which guarantees faster service to the most urgent requests. Also, the adaptive memory arbitration scheme introduces a flexible method of adjustable priority-weighting which permits selected devices to transact a programmable number of consecutive memory accesses without those devices losing request priority.
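The arbitration scheme combines three ideas: winners lose adaptive priority, losers gain it, a fixed urgency class (refreshes and reads over writes) overrides the adaptive ranking, and a programmable weight lets a device win several consecutive grants before demotion. A minimal sketch under those assumptions (class and field names are illustrative, not from the disclosure):

```python
class Request:
    def __init__(self, device, kind):
        self.device = device
        self.kind = kind  # "refresh", "read", or "write"

# Fixed (non-adjustable) urgency classes: refreshes and memory reads
# inherently require faster service than writes.
KIND_URGENCY = {"refresh": 2, "read": 1, "write": 0}

class AdaptiveArbiter:
    def __init__(self, devices, weights=None):
        # Adaptive per-device priority: higher value wins.
        self.priority = {d: 0 for d in devices}
        # Programmable weighting: consecutive grants a device may win
        # before its adaptive priority is demoted.
        self.weights = weights or {d: 1 for d in devices}
        self.consecutive = {d: 0 for d in devices}

    def arbitrate(self, requests):
        # Winner is chosen first by fixed urgency class, then by
        # adaptive priority ranking.
        winner = max(requests, key=lambda r: (KIND_URGENCY[r.kind],
                                              self.priority[r.device]))
        self.consecutive[winner.device] += 1
        # Demote the winner once it exhausts its weighted allotment
        # of consecutive grants...
        if self.consecutive[winner.device] >= self.weights[winner.device]:
            self.priority[winner.device] = 0
            self.consecutive[winner.device] = 0
        # ...and promote every loser, so a low-priority request
        # gradually advances until it wins (no memory starvation).
        for r in requests:
            if r.device != winner.device:
                self.priority[r.device] += 1
                self.consecutive[r.device] = 0
        return winner
```

With equal weights of one, two devices issuing writes simply alternate; raising one device's weight lets it transact that many consecutive accesses before yielding.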
Abstract:
A computer system includes a CPU and a memory device coupled by a bridge logic unit. CPU to memory write requests (including the data to be written) are temporarily stored in a queue in the bridge logic unit. The bridge logic unit preferably begins a write cycle to the memory device before all of the write data has been stored in the queue and made available to the memory device. By beginning the memory cycle as early as possible, the total amount of time required to store all of the write data in the queue and then de-queue the data from the queue is reduced. Consequently, many CPU to memory write transactions are performed more efficiently and generally with less latency than previously possible.
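The latency saving comes from overlapping queue fill with memory-cycle setup. A back-of-envelope sketch, with cycle counts that are assumed for illustration and not taken from the disclosure:

```python
# Assumed cycle counts (illustrative only):
BEATS = 4        # quad words in the posted write burst
ENQ = 1          # cycles to enqueue one beat into the bridge queue
MEM_START = 3    # cycles for the memory controller to open the cycle

# Waiting for the whole burst to be queued, then opening the memory
# cycle, then draining the queue:
serial = BEATS * ENQ + MEM_START + BEATS

# Opening the memory cycle as soon as the first beat is queued, so
# queue fill and cycle setup proceed in parallel:
overlapped = max(BEATS * ENQ, ENQ + MEM_START) + BEATS
```

Under these numbers the overlapped transaction completes in 8 cycles versus 11 serially; the benefit grows with the memory controller's setup latency.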
Abstract:
A computer system includes a CPU, a memory device, two expansion buses, and a bridge logic unit coupling together the CPU, the memory device and the expansion buses. The CPU couples to the bridge logic unit via a CPU bus and the memory device couples to the bridge logic unit via a memory bus. The bridge logic unit generally routes bus cycle requests from one of the four buses to another of the buses while concurrently routing bus cycle requests between another pair of buses. The bridge logic unit preferably includes four interfaces, one each to the CPU, memory device and the two expansion buses. Each pair of interfaces is coupled by at least one queue; write requests are stored (or “posted”) in write queues and read data are stored in read queues. Because each interface can communicate concurrently with all other interfaces via the read and write queues, the possibility exists that a first interface cannot access a second interface because the second interface is busy processing read or write requests from a third interface, thus starving the first interface for access to the second interface. To remedy this starvation problem, the bridge logic unit prevents the third interface from posting additional write requests to its write queue, thereby permitting the first interface access to the second interface. Further, read cycles may be retried from one interface to allow another interface to complete its bus transactions.
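The starvation remedy amounts to a throttle on posted writes: when one source monopolizes a target's write queue while another interface is waiting, further posts from the monopolizing source are refused until the queue drains. A minimal sketch, assuming a fixed-depth queue and a simple block-until-drained rule (names and the queue depth are illustrative, not from the disclosure):

```python
from collections import deque

class PostedWriteQueue:
    """One target interface's posted-write queue inside the bridge."""
    def __init__(self, depth=4):
        self.depth = depth
        self.q = deque()
        self.blocked = set()   # sources currently barred from posting

    def post(self, source, data, others_waiting=False):
        # Refuse the post if this source is throttled or the queue is full.
        if source in self.blocked or len(self.q) >= self.depth:
            return False
        self.q.append((source, data))
        # If this source now owns the entire queue while another
        # interface is waiting on the target, throttle further posts
        # so the waiter can get through.
        if others_waiting and all(s == source for s, _ in self.q):
            self.blocked.add(source)
        return True

    def drain_one(self):
        # Target interface services one posted write.
        if self.q:
            self.q.popleft()
        if not self.q:
            self.blocked.clear()   # queue empty: lift the throttle
```

The same idea generalizes to the read path via the retry mechanism the abstract mentions: a read is refused now and re-attempted later rather than queued.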
Abstract:
A computer is provided having a bus interface unit coupled between a CPU bus, a PCI bus and/or a graphics bus. The bus interface unit includes controllers linked to the respective buses and further includes a plurality of queues placed within address and data paths linking the various controllers. An interface controller coupled to a peripheral bus (excluding the CPU local bus) determines if an address forwarded from a peripheral device is the first address within a sequence of addresses used to select a set of quad words constituting a cache line. If that address (i.e., target address) is not the first address (i.e., initial address) in that sequence, then the target address is modified so that it becomes the initial address in that sequence. An offset between the target address and the modified address is denoted as a count value. The initial address aligns the reads to a cacheline boundary and stores in successive order the quad words of the cacheline in the queue of the bus interface unit. Quad words arriving in the queue prior to a quad word attributed to the target address are discarded. This ensures the interface controller, and eventually the peripheral device, will read quad words in successive address order, and all subsequently read quad words will also be sent in successive order until the peripheral read transaction is complete.
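The address modification and count value reduce to masking the target address down to a cacheline boundary and counting the quad words skipped. A minimal sketch, assuming a 32-byte cacheline of four 8-byte quad words (the line size is an assumption for illustration, not stated in the abstract):

```python
QUADWORD = 8      # bytes per quad word
LINE_BYTES = 32   # assumed cacheline size: 4 quad words (illustrative)

def align_read(target_addr):
    """Return (initial_addr, count): the cacheline-aligned address the
    read is modified to start at, and how many leading quad words must
    be discarded before the quad word attributed to the target address
    arrives in the queue."""
    initial_addr = target_addr & ~(LINE_BYTES - 1)
    count = (target_addr - initial_addr) // QUADWORD
    return initial_addr, count
```

For example, a target of 0x1038 in a 32-byte-line system is rewound to initial address 0x1020 with a count of 3, so the first three quad words read from the line are discarded.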
Abstract:
An apparatus for monitoring and decoding processor bus cycles and flushing a second level cache upon decoding a special flush acknowledge cycle. The CPU preferably includes an internal cache and a flush input for receiving a signal commanding the CPU to flush its internal cache. After flushing its cache by performing any necessary cycles to write back dirty data to main memory, the CPU performs a special flush acknowledge cycle to inform external devices that the flush procedure has been completed. A cache controller detects the flush acknowledge cycle and provides a flush signal to the second level cache. The cache controller then provides an end of cycle signal to the CPU to indicate that the flush cycle has been acknowledged.
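The handshake is a three-step protocol: flush command to the CPU, write-back and a special flush acknowledge cycle from the CPU, then L2 flush and an end-of-cycle response from the cache controller. A minimal behavioral sketch under those assumptions (class names and cycle labels are illustrative, not from the disclosure):

```python
class CPU:
    """CPU with an internal (L1) cache and a flush input."""
    def __init__(self, l1_dirty):
        self.l1 = list(l1_dirty)   # dirty (address, data) pairs

    def flush(self, memory):
        # Write back all dirty internal-cache data to main memory...
        for addr, data in self.l1:
            memory[addr] = data
        self.l1.clear()
        # ...then run the special flush acknowledge cycle to inform
        # external devices that the flush procedure is complete.
        return "FLUSH_ACK"

class CacheController:
    """Monitors processor bus cycles on behalf of the L2 cache."""
    def __init__(self, l2):
        self.l2 = l2

    def snoop(self, cycle):
        # Decode the special flush acknowledge cycle, flush the
        # second level cache, and return end-of-cycle to the CPU.
        if cycle == "FLUSH_ACK":
            self.l2.clear()
            return "EOC"
```

The ordering matters: the L2 flush is deferred until the acknowledge cycle so that the CPU's write-backs land in memory (or L2) before the L2 contents are invalidated.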