Abstract:
A bruxism detection system is provided that includes a chin-mounted acceleration sensor for detection of teeth grinding and teeth tapping. The system generally includes an acceleration sensor that is adapted to be removably and externally mounted with respect to an individual's chin, a bruxism recording and processing system operable on a local processor, and bruxism analysis software that is operable on the local processor or an adjunct processor (or a combination thereof). The system is designed to be used in any convenient location, including an individual's home, and is generally reusable by multiple people, thus reducing the cost of bruxism diagnosis and bringing a reliable and effective diagnosis tool to the general public.
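
The abstract does not specify how grinding or tapping is detected from the acceleration signal. As a hedged illustration only, the following C sketch classifies a window of acceleration samples with simple peak and RMS thresholds; the function names and threshold parameters are hypothetical and not taken from the patent.

    /* Hedged illustration only: the abstract gives no detection algorithm.
     * This sketch assumes a stream of acceleration magnitudes (gravity removed)
     * and flags candidate grinding (sustained vibration) and tapping (brief
     * spikes) with simple thresholds. All names and thresholds are hypothetical. */
    #include <math.h>
    #include <stddef.h>

    typedef enum { BRUX_NONE, BRUX_TAP, BRUX_GRIND } brux_event;

    /* Classify one window of n acceleration samples. */
    brux_event classify_window(const double *accel, size_t n,
                               double tap_peak, double grind_rms)
    {
        if (n == 0) return BRUX_NONE;
        double peak = 0.0, sumsq = 0.0;
        for (size_t i = 0; i < n; i++) {
            double a = fabs(accel[i]);
            if (a > peak) peak = a;
            sumsq += a * a;
        }
        double rms = sqrt(sumsq / (double)n);
        if (rms >= grind_rms)  return BRUX_GRIND;  /* sustained vibration */
        if (peak >= tap_peak)  return BRUX_TAP;    /* brief impact */
        return BRUX_NONE;
    }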
Abstract:
A digital computer system includes a scalar CPU, a vector processor, and a shared cache memory. The scalar CPU has an execution unit, a memory management unit, and a cache controller unit. The execution unit generates load/store memory addresses for vector load/store instructions. The load/store addresses are translated by the memory management unit, and stored in a write buffer that is also used for buffering scalar write addresses and write data. The cache controller coordinates loads and stores between the vector processor and the shared cache with scalar reads and writes to the cache. Preferably the cache controller permits scalar reads to precede scalar writes and vector load/stores by checking for conflicts with scalar writes and vector load/stores in the write queue, and also permits vector load/stores to precede vector operates by checking for conflicts with vector operate information stored in a vector register scoreboard. Preferably the cache controller includes vector logic which is responsive to vector information written in intra-processor registers by the execution unit. The vector logic keeps track of the vector length and blocks extra memory addresses generated by the execution unit for the vector elements. The vector logic also blocks the memory addresses of masked vector elements so that these addresses are not translated by the memory management unit.
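
As a hedged illustration of the two ordering checks described above, the following C sketch lets a scalar read bypass the write queue only when its address does not conflict with any queued scalar write or vector load/store, and lets a vector load/store bypass pending vector operates only when a register scoreboard shows no conflict. The data structures, names, and cache-line granularity are assumptions, not taken from the patent.

    /* Hedged sketch of the two ordering checks (structures are hypothetical). */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    #define WQ_ENTRIES 8
    #define CACHE_LINE 64u        /* assumed conflict granularity */

    typedef struct {
        uint64_t addr;
        bool     valid;
    } wq_entry;

    typedef struct {
        wq_entry queue[WQ_ENTRIES];   /* pending scalar writes / vector load-stores */
        uint64_t vreg_busy;           /* scoreboard: bit i set => vector reg i in use */
    } cache_ctrl;

    /* May a scalar read at 'addr' be issued ahead of the queued writes? */
    bool scalar_read_may_bypass(const cache_ctrl *c, uint64_t addr)
    {
        for (size_t i = 0; i < WQ_ENTRIES; i++)
            if (c->queue[i].valid &&
                (c->queue[i].addr / CACHE_LINE) == (addr / CACHE_LINE))
                return false;         /* conflict: the read must wait */
        return true;
    }

    /* May a vector load/store touching 'vreg_mask' precede pending vector operates? */
    bool vector_ls_may_bypass(const cache_ctrl *c, uint64_t vreg_mask)
    {
        return (c->vreg_busy & vreg_mask) == 0;
    }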
Abstract:
An apparatus that provides configurable processing includes a fetch module, a decoder, and a dynamic arithmetic logic unit. The fetch module is operable to fetch at least one instruction and provide it to the decoder. The decoder receives the instruction and decodes it. The dynamic arithmetic logic unit receives the decoded instruction and configures at least one configurable arithmetic logic unit to perform an operation contained within the decoded instruction.
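
As a hedged illustration of the fetch, decode, and configure-and-execute flow described above, the following C sketch uses a function pointer to stand in for configuring a configurable arithmetic logic unit; the instruction encoding and operation set are hypothetical.

    /* Hedged sketch: fetch -> decode -> configure a configurable ALU -> execute.
     * The encoding and operations below are stand-ins, not the patent's design. */
    #include <stdint.h>
    #include <stddef.h>

    typedef uint32_t (*alu_fn)(uint32_t, uint32_t);

    static uint32_t op_add(uint32_t a, uint32_t b) { return a + b; }
    static uint32_t op_xor(uint32_t a, uint32_t b) { return a ^ b; }

    typedef struct { uint8_t opcode; uint32_t a, b; } decoded_insn;

    /* Fetch module: return the next raw instruction word. */
    static uint64_t fetch(const uint64_t *imem, size_t pc) { return imem[pc]; }

    /* Decoder: split the raw word into opcode and operands (assumed encoding). */
    static decoded_insn decode(uint64_t raw)
    {
        decoded_insn d = { (uint8_t)(raw >> 56),
                           (uint32_t)(raw >> 28) & 0x0FFFFFFF,
                           (uint32_t)raw & 0x0FFFFFFF };
        return d;
    }

    /* Dynamic arithmetic logic unit: configure a configurable ALU for the
     * decoded operation, then perform it. */
    static uint32_t dynamic_alu(decoded_insn d)
    {
        alu_fn configured = (d.opcode == 0) ? op_add : op_xor;
        return configured(d.a, d.b);
    }

    /* One pipeline pass: fetch, decode, configure, execute. */
    uint32_t run_one(const uint64_t *imem, size_t pc)
    {
        return dynamic_alu(decode(fetch(imem, pc)));
    }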
Abstract:
Information is stored in temporary storage and subsequently transferred to a memory over a bus. The temporary storage is provided with a plurality of entries each of which has a selected size that is smaller than a size of the bus. Information that is designated for a common area of the memory is stored in different entries, and the different entries are linked. Before being transferred to memory, the information from linked entries is assembled. The assembled information is then transferred over the bus to memory. Embodiments of the temporary storage include a write queue and a write buffer.
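
As a hedged illustration of linking and assembling entries before a bus transfer, the following C sketch merges entries that target the same bus-wide block into one outgoing buffer; the entry layout, sizes, and field names are assumptions, not taken from the patent.

    /* Hedged sketch: entries are smaller than the bus width; entries destined
     * for the same bus-wide block are linked and merged into one transfer.
     * Sizes and field names are hypothetical. */
    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    #define BUS_BYTES    16           /* width of one bus transfer */
    #define ENTRY_BYTES  8            /* each entry holds less than a bus width */
    #define QUEUE_DEPTH  8

    typedef struct {
        uint64_t block_addr;          /* bus-wide block this entry targets */
        unsigned offset;              /* byte offset within that block */
        uint8_t  data[ENTRY_BYTES];
        int      link;                /* index of a linked entry, or -1 */
        bool     valid;
    } tmp_entry;

    /* Assemble entry 'i' and everything linked to it into one bus-wide buffer. */
    void assemble_for_transfer(tmp_entry *q, int i, uint8_t out[BUS_BYTES])
    {
        memset(out, 0, BUS_BYTES);
        for (; i >= 0; i = q[i].link) {
            memcpy(out + q[i].offset, q[i].data, ENTRY_BYTES);
            q[i].valid = false;       /* entry retired once merged */
        }
        /* 'out' now holds the assembled block; a real design would drive it
         * onto the bus here. */
    }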
Abstract:
Circuitry and methodology for pulse capture employs S-R latch, precharge, and switch circuitries for quickly sensing and capturing a logic pulse from dynamic logic circuitry. The present invention, while having general application to any dynamic logic circuitry, has particular application to random access memory (RAM), content addressable memory (CAM), and adder circuitries.
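
The pulse-capture circuitry itself is dynamic logic rather than software, so the following C sketch is only a hedged behavioral model of the described behavior: a precharged sense node, a switch through which a brief pulse sets an S-R latch, and an explicit reset. All names are hypothetical.

    /* Hedged behavioral model only; the real design is circuitry, not code. */
    #include <stdbool.h>

    typedef struct {
        bool q;            /* latch output (captured pulse) */
        bool precharged;   /* sense node precharged and armed */
    } pulse_capture;

    void precharge(pulse_capture *pc) { pc->precharged = true; }

    /* One evaluation step: a high pulse while armed sets the latch. */
    void evaluate(pulse_capture *pc, bool pulse)
    {
        if (pc->precharged && pulse) {
            pc->q = true;             /* S input: capture the pulse */
            pc->precharged = false;   /* sense node discharged */
        }
    }

    void reset_latch(pulse_capture *pc) { pc->q = false; }  /* R input */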
Abstract:
In accordance with the present invention, an adder tree structure includes at least two adder stages. In the circuit and method according to the present invention, the first of the two adder stages generates two bits of a common weight and other more significant bits of a weight one bit more significant than the two bits of the common weight. The second of the two adder stages includes an adder that receives the more significant bits generated in the first of the two adder stages. The second adder stage also includes an AND gate which receives and logically ANDs the two bits of the common weight to generate a carry-in bit for the adder in the second stage. The above adder tree structure and adding method have the advantage of permitting more input terminals of adders to contain information about the input values to the adder tree structure. Therefore, the adders are used more efficiently and fewer adders are required to perform a specific function.
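
As a hedged arithmetic illustration of the second stage described above, the following C sketch uses the AND of the two common-weight bits as the second-stage adder's carry-in; treating the XOR of those bits as the retained low-order bit is an added assumption so that the identity 2*H + b0 + b1 = 2*(H + (b0 AND b1)) + (b0 XOR b1) is complete.

    /* Hedged sketch: the abstract specifies only that the AND of the two
     * common-weight bits feeds the second-stage carry input; the XOR term as
     * the low-order output bit is an assumption added here. */
    #include <stdint.h>

    /* h  : more significant bits from the first stage (weight 2^(w+1))
     * b0 : first common-weight bit  (weight 2^w)
     * b1 : second common-weight bit (weight 2^w)
     * returns the combined value in units of 2^w                        */
    uint32_t second_stage(uint32_t h, uint32_t b0, uint32_t b1)
    {
        uint32_t carry_in = b0 & b1;             /* AND gate of the abstract */
        uint32_t sum_hi   = h + carry_in;        /* second-stage adder */
        uint32_t low_bit  = b0 ^ b1;             /* assumed low-order bit */
        return (sum_hi << 1) | low_bit;
    }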
Abstract:
A decoupling capacitor for an integrated circuit and method of forming the same. The decoupling capacitor includes a p-channel device having first and second p-type doped diffusion regions, a device channel region therebetween, a device gate overlying the device channel region, and a gate insulator separating the device gate and channel region. The first and second diffusion regions are electrically connected to a positive power supply, and the device gate is electrically connected to a negative power supply. The decoupling capacitor may be formed proximate a signal driver in the integrated circuit. The decoupling capacitor may be formed without additional, expensive semiconductor fabrication steps and operates to minimize noise in the circuit.
Abstract:
A method that decodes serially received MPEG variable length codes by executing instructions in parallel. The method includes an execution unit which includes multiple pipelined functional units. The functional units execute at least two of the instructions in parallel. The instructions utilize and share general purpose registers. The general purpose registers store information used by at least two of the instructions.
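
As a hedged illustration of variable-length decoding in general (not the patent's execution-unit method, and not an actual MPEG code table), the following C sketch decodes one symbol of a toy prefix code from a serial bitstream; the parallel pipelined execution described in the abstract is not modeled.

    /* Hedged sketch of table-driven variable-length decoding with a toy
     * prefix code; table contents and names are hypothetical. */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct { uint16_t code; uint8_t len; int value; } vlc_entry;

    /* Toy prefix code: 0 -> 0, 10 -> 1, 110 -> 2, 111 -> 3 */
    static const vlc_entry table[] = {
        { 0x0, 1, 0 }, { 0x2, 2, 1 }, { 0x6, 3, 2 }, { 0x7, 3, 3 },
    };

    /* Decode one symbol starting at bit position *pos of a serial bitstream. */
    int vlc_decode(const uint8_t *bits, size_t *pos)
    {
        uint16_t acc = 0;
        for (uint8_t len = 1; len <= 3; len++) {
            size_t p = *pos + len - 1;
            acc = (uint16_t)((acc << 1) | ((bits[p / 8] >> (7 - p % 8)) & 1u));
            for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
                if (table[i].len == len && table[i].code == acc) {
                    *pos += len;
                    return table[i].value;
                }
        }
        return -1;                    /* invalid code */
    }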
Abstract:
A superscalar superpipelined microprocessor having a write buffer located between the core and cache is disclosed. The write buffer is controlled to store the results of write operations to memory until such time as the cache becomes available, such as when no high-priority reads are to be performed. The write buffer includes multiple entries that are split into two circular buffer sections for facilitating the interaction with the two pipelines of the core; cross-dependency tables are provided for each write buffer entry to ensure that the data is written from the write buffer to memory in program order, considering the possibility of prior data present in the opposite section. Non-cacheable reads from memory are also ordered in program order with the writing of data from the write buffer. Features for handling speculative execution, detecting and handling data dependencies and exceptions, and performing special write functions (misaligned writes and gathered writes) are also disclosed.
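
As a hedged illustration of draining a write buffer split into two circular sections in program order, the following C sketch tags each entry with a global sequence number and always drains the oldest head entry; this sequence-number scheme stands in for the cross-dependency tables of the abstract, and all structures are hypothetical.

    /* Hedged sketch: two ring-buffer sections (one per pipeline), drained in
     * program order via per-entry sequence numbers (a simplification of the
     * cross-dependency tables). */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define SECTION_DEPTH 4

    typedef struct {
        uint64_t addr;
        uint64_t data;
        uint64_t seq;                 /* program-order sequence number */
        bool     valid;
    } wb_entry;

    typedef struct {
        wb_entry entry[SECTION_DEPTH];
        unsigned head, tail;          /* circular indices */
    } wb_section;

    typedef struct { wb_section x, y; } write_buffer;  /* one section per pipeline */

    /* Return the section whose head holds the oldest pending write, or NULL. */
    wb_section *next_to_drain(write_buffer *wb)
    {
        wb_entry *hx = &wb->x.entry[wb->x.head];
        wb_entry *hy = &wb->y.entry[wb->y.head];
        if (hx->valid && hy->valid)
            return (hx->seq < hy->seq) ? &wb->x : &wb->y;
        if (hx->valid) return &wb->x;
        if (hy->valid) return &wb->y;
        return NULL;
    }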