Abstract:
Provided is a wind-powered system for reducing the energy consumption of a power source, such as an internal combustion engine or an electric motor. In one embodiment, the wind-powered system comprises a wind turbine operatively connected to an internal combustion engine, for example via a direct mechanical connection, a hydrostatic drive system, or a pneumatic drive system, in order to reduce the amount of fuel required by the engine to operate an electricity-generating means. A controller may optionally be provided to modulate the load on the wind turbine in order to maximize the extraction of available power according to local wind conditions. In another embodiment, the wind turbine is connected to an air compressor for providing a supply of air in order to offset the energy consumption of a conventional compressed air system.
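As a rough illustration of the optional load-modulating controller, the Python sketch below uses a simple hill-climbing (perturb-and-observe) strategy to seek the load that maximizes extracted power. The `measure_power` and `set_load` callables and the toy power curve are hypothetical stand-ins for the sensing and actuation described in the abstract, not details taken from it.

```python
# Minimal hill-climbing sketch of a load controller that seeks maximum power
# extraction. measure_power() and set_load() are hypothetical stand-ins for
# the turbine's sensing and actuation hardware.

def hill_climb_load_controller(measure_power, set_load,
                               initial_load=0.5, step=0.02, cycles=200):
    """Perturb the load each cycle and keep the direction that raises power."""
    load = initial_load
    set_load(load)
    last_power = measure_power()
    direction = 1
    for _ in range(cycles):
        load = min(max(load + direction * step, 0.0), 1.0)  # clamp to [0, 1]
        set_load(load)
        power = measure_power()
        if power < last_power:       # the perturbation hurt: reverse direction
            direction = -direction
        last_power = power
    return load

if __name__ == "__main__":
    # Toy power curve that peaks at a load of 0.7, standing in for real wind.
    state = {"load": 0.5}
    toy_power = lambda: 1.0 - (state["load"] - 0.7) ** 2
    final = hill_climb_load_controller(toy_power, lambda x: state.update(load=x))
    print(f"converged load: {final:.2f}")
```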
Abstract:
A vertical axis wind turbine comprising at least two overlapping rotor portions, each having a curved or semi-circular horizontal cross-section and an outer leading edge that is angled relative to vertical from bottom to top in the direction of rotation of the wind turbine. The magnitude of the angle is in the range of 5° to 30°. The angled leading edge improves the aerodynamic performance of the wind turbine relative to an otherwise identical turbine without the angle, particularly for turbines with three or more rotor portions.
Abstract:
A method, system and computer program product for instruction fetching within a processor instruction unit, utilizing a loop buffer, one or more virtual loop buffers, and/or an instruction buffer. During instruction fetch, modified instruction buffers coupled to an instruction cache (I-cache) temporarily store the instructions of a single-branch, backwards short loop. The modified instruction buffers may be a loop buffer, one or more virtual loop buffers, and/or an instruction buffer. Instructions are stored in the modified instruction buffers for the length of the loop cycle. During the loop cycle, instruction fetch within the instruction unit of the processor retrieves the instructions for the short loop from the modified buffers rather than from the instruction cache.
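A rough software model of this fetch path is sketched below: the first pass through a single-branch, backwards short loop is served from the I-cache while a loop buffer is filled, and later iterations are served from the loop buffer instead. The dictionary-based I-cache, the 8-entry buffer size, and the toy instruction encoding are assumptions made for illustration only.

```python
# Toy model of fetching a single-branch, backwards short loop. The first pass
# is served from the I-cache and captured in the loop buffer; later iterations
# hit the loop buffer instead of the I-cache. Sizes and encodings are assumed.

LOOP_BUFFER_SIZE = 8  # assumed capacity, in instructions

def fetch_trace(icache, iterations):
    """Return (address, source) pairs showing where each fetch was satisfied.

    icache maps address -> instruction for the short loop body."""
    loop_buffer = {}                  # address -> instruction, filled on pass 1
    trace = []
    body = sorted(icache)             # loop-body addresses in program order
    for _ in range(iterations):
        for addr in body:
            if addr in loop_buffer:
                trace.append((addr, "loop buffer"))
            else:
                if len(loop_buffer) < LOOP_BUFFER_SIZE:
                    loop_buffer[addr] = icache[addr]
                trace.append((addr, "I-cache"))
    return trace

if __name__ == "__main__":
    # Three-instruction loop ending in a backwards branch to its first address.
    icache = {0x100: "lw", 0x104: "add", 0x108: "bne 0x100"}
    for addr, source in fetch_trace(icache, iterations=3):
        print(hex(addr), source)
```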
Abstract:
A method, system and processor for improving the performance of an in-order processor. A processor may include an execution unit with an execution pipeline that includes a backup pipeline and a regular pipeline. The backup pipeline may store a copy of the instructions issued to the regular pipeline. The execution pipeline may include logic that allows instructions to flow from the backup pipeline to the regular pipeline after the instructions younger than the exception detected in the regular pipeline have been flushed. Because a backup copy of the instructions issued to the regular pipeline is maintained, instructions need not be flushed from separate execution pipelines and re-fetched. As a result, the results of the execution units may be completed to the architected state out of order, thereby allowing the completion point to vary among the different execution pipelines.
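One way to picture the backup-pipeline mechanism is the Python sketch below: every issued instruction is copied into a backup queue, and after an exception the affected instructions are restored into the regular pipeline from that copy instead of being re-fetched. The deque-based representation and the instruction names are assumptions made for the example.

```python
# Sketch of keeping a backup copy of issued instructions so that, after an
# exception, the regular pipeline can be refilled from the backup pipeline
# rather than re-fetching. Data structures here are illustrative only.

from collections import deque

class ExecutionPipeline:
    def __init__(self):
        self.regular = deque()   # instructions currently in the regular pipeline
        self.backup = deque()    # backup copies, kept until completion

    def issue(self, insn):
        self.regular.append(insn)
        self.backup.append(insn)            # copy made at issue time

    def complete_oldest(self):
        self.backup.popleft()               # backup copy no longer needed
        return self.regular.popleft()

    def recover(self, faulting_insn):
        """Flush the regular pipeline from the faulting instruction onward and
        let the flushed instructions flow back in from the backup pipeline."""
        idx = self.backup.index(faulting_insn)
        self.regular = deque(list(self.backup)[idx:])

if __name__ == "__main__":
    p = ExecutionPipeline()
    for name in ("i1", "i2", "i3", "i4"):
        p.issue(name)
    p.complete_oldest()          # i1 retires normally
    p.recover("i3")              # exception detected at i3
    print(list(p.regular))       # ['i3', 'i4'] restored without re-fetching
```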
Abstract:
Tracking the order of issued instructions using a counter is presented. In one embodiment, a saturating, decrementing counter is used. The counter is initialized to a value that corresponds to the processor's commit point. Instructions are issued from a first issue queue to one or more execution units and one or more second issue queues. After an instruction is issued by the first issue queue, its counter is decremented during each instruction cycle until the instruction is executed by one of the execution units. Once the counter reaches zero, the instruction is completed by the execution unit. If a flush condition occurs, instructions with counters equal to zero are maintained (i.e., not flushed or invalidated), while the other instructions in the pipeline are invalidated based upon their counter values.
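The counter-based tracking can be sketched in a few lines of Python: each issued instruction carries a saturating counter initialized to an assumed commit-point distance, the counter decrements once per cycle, and a flush keeps only instructions whose counters have reached zero. The four-cycle commit point and the toy issue pattern are assumptions for illustration.

```python
# Sketch of tracking issue order with a saturating, decrementing counter.
# The counter width and commit-point distance are assumed for illustration.

COMMIT_POINT = 4   # assumed number of cycles from issue to the commit point

class IssuedInstruction:
    def __init__(self, name):
        self.name = name
        self.counter = COMMIT_POINT     # initialized at issue time

    def tick(self):
        """Decrement once per instruction cycle, saturating at zero."""
        if self.counter > 0:
            self.counter -= 1

    def at_commit_point(self):
        return self.counter == 0        # at zero the instruction can complete

def flush(pipeline):
    """Keep instructions whose counters reached zero; invalidate the rest."""
    return [insn for insn in pipeline if insn.at_commit_point()]

if __name__ == "__main__":
    # Three instructions issued one cycle apart, then a flush after four cycles.
    pipeline = [IssuedInstruction(f"i{n}") for n in range(3)]
    for cycle in range(COMMIT_POINT):
        for insn in pipeline[: cycle + 1]:   # only already-issued instructions tick
            insn.tick()
    print([insn.name for insn in flush(pipeline)])   # ['i0'] survives the flush
```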
Abstract:
Dynamic power management in a processor design is presented. A pipeline stage's stall detection logic detects a stall condition and sends a signal to idle detection logic to gate off the pipeline stage's register clocks. The stall detection logic also monitors a downstream pipeline stage's stall condition, and instructs the idle detection logic to gate off the pipeline stage's registers when the downstream pipeline stage is in a stall condition as well. In addition, when the pipeline stage's stall detection logic detects a stall condition, either from the downstream pipeline stage or from its own pipeline units, it informs an upstream pipeline stage to gate off its clocks and thus conserve more power.
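A simplified software picture of this stall-driven clock gating is sketched below: a stage gates its register clocks when it stalls locally or when the stage downstream of it is stalled, so the condition propagates upstream. The three-stage topology and class layout are assumptions for the example.

```python
# Sketch of stall-driven clock gating: a stage gates its register clocks when
# it stalls locally or when the stage downstream of it is stalled, so the
# condition propagates upstream. The three-stage topology is assumed.

class PipelineStage:
    def __init__(self, name):
        self.name = name
        self.local_stall = False     # stall raised by this stage's own units
        self.downstream = None       # next stage in the pipeline, if any

    def stalled(self):
        """Stalled if this stage stalls locally or its downstream stage is stalled."""
        down = self.downstream.stalled() if self.downstream else False
        return self.local_stall or down

    def clock_enable(self):
        """Idle-detection logic: gate the register clocks while stalled."""
        return not self.stalled()

if __name__ == "__main__":
    fetch, decode, execute = (PipelineStage(n) for n in ("fetch", "decode", "execute"))
    fetch.downstream, decode.downstream = decode, execute
    execute.local_stall = True       # a stall in a downstream stage...
    for stage in (fetch, decode, execute):
        # ...gates the clocks of that stage and of every stage upstream of it
        print(stage.name, "clock enabled:", stage.clock_enable())
```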
Abstract:
A method, an apparatus and a computer program product are provided for managing SIMD instructions and GP instructions within an instruction pipeline of a processor. The SIMD instructions and the GP instructions share the same “front-end” pipelines within an Instruction Unit. Within the shared pipelines, the Instruction Unit checks the GP instructions for dependencies and resolves those dependencies. At the dispatch point within the pipelines, the Instruction Unit sends valid GP instructions to the GP Unit and SIMD instructions to an SIMD issue queue. In the SIMD issue queue, the Instruction Unit checks the SIMD instructions for dependencies and resolves those dependencies. The SIMD issue queue then dispatches the SIMD instructions to the SIMD Unit. Accordingly, dependencies involving SIMD instructions do not affect GP instructions, because the SIMD dependencies are checked and resolved independently.
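A simplified Python picture of the dispatch point is sketched below: GP and SIMD instructions arrive together from the shared front end, GP dependencies are assumed to have been resolved there, valid GP instructions go to the GP Unit, and SIMD instructions are parked in a SIMD issue queue for separate dependency handling. The (kind, name, sources) tuple format and register names are hypothetical.

```python
# Sketch of the dispatch point: GP instructions whose dependencies resolved in
# the shared front end go to the GP Unit, while SIMD instructions are placed
# in a SIMD issue queue for independent dependency checking. The instruction
# tuples and register names are hypothetical.

from collections import deque

simd_issue_queue = deque()

def dispatch(front_end_instructions, gp_ready_registers):
    """Route instructions leaving the shared front-end pipelines."""
    to_gp_unit = []
    for kind, name, sources in front_end_instructions:
        if kind == "GP":
            # GP dependencies were checked in the shared front-end pipelines
            if all(src in gp_ready_registers for src in sources):
                to_gp_unit.append(name)                 # valid: send to GP Unit
        else:
            simd_issue_queue.append((name, sources))    # resolved later, in the
                                                        # SIMD issue queue
    return to_gp_unit

if __name__ == "__main__":
    insns = [("GP", "add r1,r2,r3", ["r2", "r3"]),
             ("SIMD", "vadd v1,v2,v3", ["v2", "v3"]),
             ("GP", "lw r4,0(r1)", ["r1"])]
    print(dispatch(insns, gp_ready_registers={"r1", "r2", "r3"}))
    print(list(simd_issue_queue))
```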
Abstract:
The present invention provides for a cache-accessing system employing a binary tree with decision nodes. A cache comprising a plurality of sets is provided. A locking or streaming replacement strategy is employed for individual sets of the cache. A replacement management table is also provided. The replacement management table is employable for managing a replacement policy of information associated with the plurality of sets. A pseudo-least-recently-used function is employed to determine the least recently used set of the cache, for purposes such as set replacement. An override signal line is also provided. The override signal is employable to enable an overwrite of a decision node of the binary tree. A value signal is also provided. The value signal is employable to overwrite the decision node of the binary tree.
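The pseudo-LRU walk and the override/value signals can be illustrated with a small decision tree over one congruence class. The 4-way organization, node numbering, and bit convention in the sketch below are assumptions made for the example rather than details from the abstract.

```python
# Sketch of a tree pseudo-LRU victim selector for one congruence class, with
# an override that forces the value of a chosen decision node (mirroring the
# override and value signals). The 4-way layout and bit convention are assumed.

class PseudoLRU:
    """Three decision bits form a binary tree over four ways:
    bits[0] is the root (0 = left pair 0-1, 1 = right pair 2-3);
    bits[1] covers ways 0-1 and bits[2] covers ways 2-3."""

    def __init__(self):
        self.bits = [0, 0, 0]

    def touch(self, way):
        """On an access, point the nodes on the way's path away from it."""
        if way < 2:
            self.bits[0] = 1            # root points away from ways 0-1
            self.bits[1] = 1 - way      # leaf points away from the accessed way
        else:
            self.bits[0] = 0            # root points away from ways 2-3
            self.bits[2] = 3 - way

    def victim(self, override_node=None, override_value=None):
        """Walk toward the pseudo-least-recently-used way. If an override is
        supplied, that node's stored bit is replaced for this decision only."""
        bits = list(self.bits)
        if override_node is not None:
            bits[override_node] = override_value
        if bits[0] == 0:
            return 0 if bits[1] == 0 else 1
        return 2 if bits[2] == 0 else 3

if __name__ == "__main__":
    plru = PseudoLRU()
    for way in (0, 1, 2, 3):
        plru.touch(way)                                    # way 0 is now oldest
    print(plru.victim())                                   # -> 0
    print(plru.victim(override_node=0, override_value=1))  # root forced right -> 2
```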
Abstract:
A headset monitoring system includes a plurality of headsets and a base station. Each headset includes a microphone, a speaker, a transceiver, and a memory device for storing an identification code, and the base station includes a transceiver, a microprocessor, a memory device, and a user interface. The base station is configured to send and receive data to and from the headsets and is further configured to identify headsets that are not functioning properly.
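As a rough sketch of how the base station might flag headsets that are not functioning properly, the Python below polls each registered identification code and reports units that fail to answer. The polling scheme, the ID format, and the `poll_responses` mapping are assumptions, not details from the abstract.

```python
# Hedged sketch of one way the base station could identify malfunctioning
# headsets: each headset is expected to answer a poll addressed to its stored
# identification code, and silent or failing units are reported. The protocol
# and data structures here are illustrative assumptions.

def find_faulty_headsets(registered_ids, poll_responses):
    """registered_ids: identification codes stored in the base station's memory.
    poll_responses: mapping of identification code -> True if the headset
    answered the base station's poll with valid status data."""
    faulty = set()
    for headset_id in registered_ids:
        if not poll_responses.get(headset_id, False):
            faulty.add(headset_id)        # no response or a failed self-check
    return faulty

if __name__ == "__main__":
    registered = {"HS-01", "HS-02", "HS-03"}
    responses = {"HS-01": True, "HS-02": False}   # HS-03 never answered
    print(sorted(find_faulty_headsets(registered, responses)))  # ['HS-02', 'HS-03']
```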