Abstract:
Disclosed are various embodiments related to dual-gate transistors and associated calibration circuitry. In one embodiment, dual-gate transistors may be configured in a sense amplifier arrangement, and calibration circuitry can be used to adjust an input offset of the sense amplifier. In another embodiment, a reference voltage level utilized in an amplifier with dual-gate transistors can be adjusted during a calibration sequence, and may be substantially unchanged from its nominal value outside of the calibration sequence. In another embodiment, a calibration sequence can be utilized to determine results from a circuit including dual-gate transistors, and to adjust control gates so that those results more closely coincide with expected or desired results. In yet another embodiment, a semiconductor memory device can include a memory array with amplifiers that include dual-gate transistors, as well as associated calibration circuitry.
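For orientation, the sketch below models one way such a calibration sequence could behave: sweep a control-gate trim code with equal inputs until the sense amplifier's decision flips, which leaves a residual offset of at most one trim step. The class, function names, trim step size, and voltage values are illustrative assumptions, not details from the patent.

```python
# Illustrative only: a behavioral model of the kind of calibration sequence
# described above. SenseAmpModel, calibrate_offset, and all numeric values are
# hypothetical, not taken from the patent.

class SenseAmpModel:
    """Toy sense amplifier: output is 1 when (vin - vref + offset) > 0."""
    def __init__(self, offset_mv: float):
        self.offset_mv = offset_mv          # intrinsic input offset
        self.trim_mv_per_step = 2.0         # shift produced by one control-gate code step
        self.trim_code = 0                  # digital code applied to the control gates

    def compare(self, vin_mv: float, vref_mv: float) -> int:
        effective_offset = self.offset_mv + self.trim_code * self.trim_mv_per_step
        return 1 if (vin_mv - vref_mv + effective_offset) > 0 else 0

def calibrate_offset(amp: SenseAmpModel, vref_mv: float = 500.0,
                     max_code: int = 32) -> int:
    """Sweep the control-gate code with equal inputs until the decision flips.

    The first code at which the output flips to 1 is the one that most nearly
    cancels the intrinsic offset (residual within one trim step).
    """
    for code in range(-max_code, max_code + 1):
        amp.trim_code = code
        if amp.compare(vref_mv, vref_mv) == 1:
            return code
    raise RuntimeError("offset outside trim range")

if __name__ == "__main__":
    amp = SenseAmpModel(offset_mv=-7.3)     # amplifier with a -7.3 mV input offset
    print("selected trim code:", calibrate_offset(amp))   # 4 -> residual +0.7 mV
```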
Abstract:
A content addressable memory (CAM) device (100) may include a number of sub-blocks (102-8 to 102-15) that can generate CAM search results. In a “search beyond” operation, sub-blocks (102-8 to 102-15) may be excluded from a search operation according to criteria, including a sub-block address and a soft-priority value. A CAM device may include a compare circuit (400) that may compare sub-block address values in a time division multiplexed fashion to establish priority from among multiple CAM sub-blocks.
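The following sketch illustrates one plausible reading of the "search beyond" operation and of sub-block priority resolution: exclude sub-blocks at or below a starting address, then pick the winning hit by soft-priority and sub-block address. The exclusion rule, the ordering convention, and the field names are assumptions for the sketch, not taken from the patent.

```python
# Illustrative only: a behavioral sketch of a "search beyond" operation and of
# priority resolution among sub-block results. The exclusion rule and the
# ordering (lower soft-priority wins, then lower sub-block address) are
# assumptions for the sketch.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SubBlockResult:
    sub_block_addr: int      # address of the CAM sub-block producing the result
    soft_priority: int       # programmable soft-priority value for the sub-block
    hit: bool                # whether the sub-block reported a match

def search_beyond(results: List[SubBlockResult],
                  start_addr: int) -> Optional[SubBlockResult]:
    """Exclude sub-blocks at or below start_addr, then pick the winning hit.

    Hardware would compare the address and soft-priority fields of two
    candidates over successive time slots on a shared compare bus; here the
    same ordering is expressed as a tuple comparison.
    """
    candidates = [r for r in results if r.hit and r.sub_block_addr > start_addr]
    if not candidates:
        return None
    return min(candidates, key=lambda r: (r.soft_priority, r.sub_block_addr))

if __name__ == "__main__":
    results = [
        SubBlockResult(sub_block_addr=8,  soft_priority=2, hit=True),
        SubBlockResult(sub_block_addr=11, soft_priority=1, hit=True),
        SubBlockResult(sub_block_addr=14, soft_priority=1, hit=True),
    ]
    # "Search beyond" sub-block 8: sub-block 8 is excluded; 11 wins on priority, then address.
    print(search_beyond(results, start_addr=8))
```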
Abstract:
According to one embodiment, a search engine device (100) may include an input (102), a search portion (106), and a vote portion (108). A vote portion (108) may receive responses to a search request at its inputs. According to precedence information in the received responses, the vote portion (108) may generate an output response having its own precedence information.
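A minimal behavioral sketch of the vote operation follows: among valid responses, the one with the best precedence is selected, and the output carries that precedence so a downstream vote stage can repeat the comparison. The Response fields and the convention that a lower precedence value wins are assumptions.

```python
# Illustrative only: a behavioral sketch of the vote operation. The Response
# fields and the lower-value-wins convention are assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Response:
    valid: bool          # whether this input actually responded to the search request
    precedence: int      # precedence information (assumed: lower value = higher precedence)
    payload: int         # e.g., an index or handle associated with the match

def vote(responses: List[Response]) -> Optional[Response]:
    """Return an output response built from the best-precedence valid input."""
    valid = [r for r in responses if r.valid]
    if not valid:
        return None
    winner = min(valid, key=lambda r: r.precedence)
    # The output carries its own precedence information, so a downstream vote
    # stage can repeat the same comparison on its inputs.
    return Response(valid=True, precedence=winner.precedence, payload=winner.payload)

if __name__ == "__main__":
    print(vote([Response(True, 3, 0x10), Response(False, 0, 0x20), Response(True, 1, 0x30)]))
    # -> Response(valid=True, precedence=1, payload=48)
```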
Abstract:
A content addressable memory (CAM) device (100) may include a number of blocks (102-[n−1, n, n+1]) that each generate CAM search results, and result compare circuits (104-[n−1, n, n+1]) that receive CAM search results from multiple blocks (102-[n−1, n, n+1]) and compare at least a portion of such CAM search results. According to such a comparison result, a compare circuit (104-[n−1, n, n+1]) can generate an output CAM search result for subsequent comparison with a CAM search result in another compare circuit (104-[n−1, n, n+1]).
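The sketch below models the cascading idea: each compare stage examines a portion of two CAM search results and forwards a winner to the next stage. Treating the compared portion as a single priority field is an assumption for the sketch.

```python
# Illustrative only: a behavioral sketch of cascaded result comparison. Each
# stage compares a portion of two CAM search results (here, a priority field)
# and forwards a winner for the next comparison. Field names are assumptions.
from dataclasses import dataclass
from functools import reduce
from typing import List

@dataclass
class CamResult:
    hit: bool
    priority: int        # the portion of the result used for comparison (assumed)
    index: int           # matching entry index carried along with the result

def compare_stage(a: CamResult, b: CamResult) -> CamResult:
    """One compare circuit: output the CAM search result that takes precedence."""
    if a.hit and b.hit:
        return a if a.priority <= b.priority else b
    return a if a.hit else b

def cascade(results: List[CamResult]) -> CamResult:
    """Chain of compare circuits: each output feeds the next compare circuit."""
    return reduce(compare_stage, results)

if __name__ == "__main__":
    blocks = [CamResult(True, 7, 100), CamResult(True, 2, 205), CamResult(False, 0, 0)]
    print(cascade(blocks))   # -> CamResult(hit=True, priority=2, index=205)
```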
Abstract:
A charge pump limits the voltages at nodes internal to the charge pump to reduce the risk of junction breakdown in the charge pump. The charge pump includes a first pump circuit, a second pump circuit, a first clamp and a second clamp. The first clamp limits the voltage level of a well by providing a current path from the well to the output lead when the voltage level of the well reaches a first predetermined limit. The voltage level at a node from which charge is redistributed to the well is limited by the second clamp, which is configured to provide a conductive path from the node to the output lead when the voltage level of the node reaches a second predetermined limit. The pump circuits can each include a logic circuit that is configured, depending on the level of an external supply voltage, to reduce the rate at which the capacitor node is boosted when the external supply voltage is relatively high. The logic circuit can also vary the voltage difference between the capacitor node and the external supply voltage to decrease the voltage level at the capacitor node relative to the level of the external supply voltage. These features also help reduce the risk of junction breakdown in the charge pump.
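As a coarse behavioral approximation of the two protection ideas, the sketch below caps an internal node at its limit and routes the removed charge to the output lead, and reduces the boost step when the external supply is high. Node names, capacitances, thresholds, and step sizes are assumptions of the sketch, not values from the patent.

```python
# Illustrative only: a coarse behavioral approximation of the clamping and
# boost-rate-control ideas described above. All names and values are assumed.
from typing import Tuple

def clamp_node(node_v: float, out_v: float, limit_v: float,
               c_node: float = 1.0, c_out: float = 1.0) -> Tuple[float, float]:
    """Clamp: once the node reaches its limit, hold it there and route the
    removed charge to the output lead (modeled as an ideal charge transfer)."""
    if node_v <= limit_v:
        return node_v, out_v
    excess_charge = (node_v - limit_v) * c_node
    return limit_v, out_v + excess_charge / c_out

def boost_step(cap_node_v: float, vdd: float, high_vdd_threshold: float = 3.0) -> float:
    """Boost-rate control: boost the capacitor node by less when the external
    supply is relatively high, keeping the boosted level closer to VDD."""
    step = 0.5 * vdd if vdd > high_vdd_threshold else 1.0 * vdd
    return cap_node_v + step

if __name__ == "__main__":
    print(clamp_node(node_v=6.2, out_v=4.0, limit_v=5.5))   # -> approximately (5.5, 4.7)
    print(boost_step(cap_node_v=2.5, vdd=3.3))              # smaller boost step at high VDD
```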
Abstract:
The present invention provides a method and apparatus that achieve a high-performance, random read/write SDRAM design by synchronizing the read and write operations at the data line sense amplifier. This enables the design to perform random read and write operations without varying cycle times or unbalanced margins. The data lines are used as bi-directional lines to accomplish high-performance reads and writes with minimal additional wiring overhead. During a read operation, read data is transferred from the memory cells of the device across a series of consecutive pairs of data lines to an input/output port of the memory device. The first pair of data lines is coupled to a data line sense amplifier, and the additional pairs of data lines are coupled to additional amplifiers. During a read operation, data is transferred across the consecutive pairs of data lines according to the timing cycles of the respective amplifiers. To quickly drive the write data up the series of consecutive pairs of data lines during a write operation, the timing signals for each pair of data lines except the first pair are disabled so that those data lines are allowed to float. The data lines are then overdriven with the write data, so that the write data quickly transitions up the series of data lines to the selected data line sense amplifier, arriving at approximately the same time that read data normally arrives during the timing cycle for the data line sense amplifier.
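The toy model below captures only the float-then-overdrive idea for the write path: all pairs except the first are tri-stated, then the whole chain is overdriven from the I/O side. Class and signal names, and the number of pairs, are assumptions of the sketch.

```python
# Illustrative only: a toy model of the bi-directional data-line chain during a
# write; only the float-then-overdrive idea comes from the text above.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DataLinePair:
    name: str
    timing_enabled: bool = True       # amplifier timing signal for this pair
    value: Optional[int] = None       # None models a floating (undriven) pair

def write_path(pairs: List[DataLinePair], write_bit: int) -> None:
    """Disable timing on every pair except the first so those lines float,
    then overdrive the chain so the write data ripples up to the first pair."""
    for pair in pairs[1:]:
        pair.timing_enabled = False   # let this pair float
        pair.value = None
    for pair in reversed(pairs):      # write data travels from the I/O side upward
        pair.value = write_bit        # overdriving flips the floating lines quickly
    # The first pair (at the data line sense amplifier) now carries the write
    # data at roughly the point in its timing cycle where read data would arrive.

if __name__ == "__main__":
    chain = [DataLinePair("DL0 (sense amp)"), DataLinePair("DL1"), DataLinePair("DL2")]
    write_path(chain, write_bit=1)
    for pair in chain:
        print(pair)
```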
Abstract:
A semiconductor memory device with a pair of data lines for reading and writing data signals to and from a matrix of memory cells and an accelerator circuit for accelerating the generation of a data signal on at least one of the data lines is disclosed. Slow signal generation on the data lines is due to the characteristics of NFET pass gates passing high signals, or PFET pass gates passing low signals. In an implementation using NFET pass gates, the accelerator circuit includes a pair of cross-coupled PFET transistors, one of which is activated by the low signal on the opposing data line. The drains of the cross-coupled PFET transistors are coupled to the data lines, such that when the low signal on the opposing data line activates one of the PFETs, it supplies additional current to the data line receiving the high signal, so as to accelerate the generation of the high signal on the data line. Faster signal generation allows the data line latches of the circuit to be set earlier, thus allowing the read cycle of the memory device to be faster. An additional result of the faster signal generation on the data line receiving a high signal is that, at the end of the cycle when the two data lines are coupled together, their average voltage due to charge sharing tends to be closer to a desired mid-level voltage, so less power is required to bring the two data lines to that mid-level voltage at the end of the signal cycle.
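The sketch below models only the cross-coupling mechanism: each PFET's gate is tied to the opposing data line, so the line pulled low turns on the PFET that boosts the rising line. The supply, threshold, and current values are arbitrary placeholders.

```python
# Illustrative only: a behavioral sketch of the cross-coupled PFET accelerator.
# The supply, threshold, and current values are arbitrary; only the mechanism
# (each PFET gated by the opposing data line) is taken from the text above.
from typing import Tuple

VDD = 1.0
PFET_VT = 0.3   # assumed threshold: a PFET conducts when its gate is below VDD - PFET_VT

def accelerator_current(dl_true: float, dl_comp: float) -> Tuple[float, float]:
    """Extra pull-up current each cross-coupled PFET adds to (dl_true, dl_comp).

    Each PFET's gate is tied to the opposing data line, so the line pulled low
    by the NFET pass gate turns on the PFET that helps the rising line go high.
    """
    i_true = 1.0 if dl_comp < (VDD - PFET_VT) else 0.0   # PFET gated by the complement line
    i_comp = 1.0 if dl_true < (VDD - PFET_VT) else 0.0   # PFET gated by the true line
    return i_true, i_comp

if __name__ == "__main__":
    # Complement line held low, true line already above the PFET trip point:
    # only the high-going (true) line receives the extra current.
    print(accelerator_current(dl_true=0.8, dl_comp=0.1))   # -> (1.0, 0.0)
```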
Abstract:
A timing control circuit (10) is disclosed that provides a timing circuit (12) for controlling the operation of an I/O path circuit (14) in a synchronous static random access memory (SRAM). In a read or write operation, the timing circuit (12) sequentially disables bit line equalization circuits (34), enables sense amplifiers (38), disables I/O line equalization circuits (42), and enables secondary sense amplifiers (44). Further, the timing circuit (12) initiates a reset operation prior to the completion of the read or write operation. The reset operation includes sequentially enabling the bit line equalization circuits (34), disabling the sense amplifiers (38), enabling the I/O line equalization circuits (42), and disabling the secondary sense amplifiers (44). The timing circuit (12) includes first, second and third delay circuits (20, 22, and 24) to allow for minimum split times for bit line pairs (32) and I/O line pairs (40), and minimum secondary sense amplifier (44) sensing times.
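The sketch below walks through the access and reset sequencing as a state model, with placeholder delays standing in for the delay circuits (20, 22, 24). The class name, state names, and delay values are assumptions, not values from the patent.

```python
# Illustrative only: a behavioral sketch of the access and reset sequencing
# described above. The delay values stand in for delay circuits (20, 22, 24).
import time

BL_SPLIT_DELAY = IO_SPLIT_DELAY = SSA_SENSE_DELAY = 1e-6   # placeholder delays

class IoPathModel:
    def __init__(self):
        self.bl_equalize = True            # bit line equalization circuits (34)
        self.sense_amps = False            # sense amplifiers (38)
        self.io_equalize = True            # I/O line equalization circuits (42)
        self.secondary_sense_amps = False  # secondary sense amplifiers (44)

    def access_sequence(self):
        """Read/write: split the bit lines, sense, split the I/O lines, sense again."""
        self.bl_equalize = False
        time.sleep(BL_SPLIT_DELAY)         # minimum bit line pair split time
        self.sense_amps = True
        self.io_equalize = False
        time.sleep(IO_SPLIT_DELAY)         # minimum I/O line pair split time
        self.secondary_sense_amps = True
        time.sleep(SSA_SENSE_DELAY)        # minimum secondary sense amplifier sensing time

    def reset_sequence(self):
        """Reset, started before the access completes: undo the steps in order."""
        self.bl_equalize = True
        self.sense_amps = False
        self.io_equalize = True
        self.secondary_sense_amps = False

if __name__ == "__main__":
    path = IoPathModel()
    path.access_sequence()
    print("after access:", vars(path))
    path.reset_sequence()
    print("after reset: ", vars(path))
```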
Abstract:
Disclosed are various embodiments related to stacked memory devices, such as DRAMs, SRAMs, EEPROMs, and CAMs. For example, stack position identifiers (SPIDs) are assigned or otherwise determined, and are used by each memory device to make a number of adjustments. In one embodiment, a self-refresh rate of a DRAM is adjusted based on the SPID of that device. In another embodiment, a latency of a DRAM or SRAM is adjusted based on the SPID. In another embodiment, internal regulation signals are shared with other devices via TSVs. In another embodiment, adjustments to internally regulated signals are made based on the SPID of a particular device. In another embodiment, serially connected signals can be controlled based on a chip SPID (e.g., an even or odd stack position), and whether the signal is an upstream or a downstream type of signal.
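A brief sketch of the SPID-keyed adjustments follows: per-die refresh interval, latency, and serial-signal direction are each derived from the stack position identifier. The scaling factors, base values, and even/odd rule are placeholder assumptions, not values from the patents.

```python
# Illustrative only: a sketch of per-die adjustments keyed off a stack position
# identifier (SPID). All factors, base values, and the even/odd rule are assumed.

def self_refresh_interval_us(spid: int, base_us: float = 64.0) -> float:
    """Placeholder: scale the self-refresh interval with the die's stack position."""
    return base_us * (1.0 + 0.1 * spid)

def read_latency_cycles(spid: int, base_cycles: int = 10) -> int:
    """Placeholder: add latency per stack position, e.g. to cover extra TSV hops."""
    return base_cycles + spid

def drives_signal(spid: int, signal_is_downstream: bool) -> bool:
    """One plausible reading of the even/odd rule: even dies drive downstream
    signals and odd dies drive upstream signals on a serially connected chain."""
    return (spid % 2 == 0) == signal_is_downstream

if __name__ == "__main__":
    for spid in range(4):
        print(f"SPID {spid}: refresh {self_refresh_interval_us(spid):.1f} us, "
              f"latency {read_latency_cycles(spid)} cycles, "
              f"drives downstream: {drives_signal(spid, True)}")
```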