Abstract:
A method of tiling a customer memory design to configurable memory blocks within a standardized memory matrix. A customer memory capacity and a customer memory width are determined for the customer memory design, and a standardized memory capacity and a standardized memory width are determined for the configurable memory blocks. The customer memory capacity and the customer memory width are selectively transformed by inverse factors based at least in part on a comparison of the customer memory capacity and the standardized memory capacity. Case independent blocks are formed within the configurable memory blocks, where the case independent blocks include gate structures formed in a standardized array in a substrate in which the customer memory design is to be implemented. Case dependent blocks are formed within the configurable memory blocks, where the case dependent blocks are electrically conductive routing layers that selectively connect the case independent blocks according to the transformation of the customer memory design.
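As a rough illustration of the capacity/width transformation described above, the following Python sketch assumes that "capacity" refers to the word depth of the memory, that the inverse factors are powers of two, and that the goal is simply to reshape the customer memory until it tiles onto the standardized blocks. The tiling rule, the function, and all names are illustrative, not the patented procedure.

```python
import math

def transform_customer_memory(cust_depth, cust_width, std_depth, std_width):
    """Illustrative sketch: reshape a customer memory (depth x width) by inverse
    factors so it tiles onto standardized blocks of std_depth x std_width.
    The total bit capacity (depth * width) is preserved at every step."""
    depth, width = cust_depth, cust_width
    # If the customer depth exceeds the standardized depth, trade depth for width
    # (and vice versa) by inverse factors of two until the comparison is satisfied.
    while depth > std_depth and width * 2 <= std_width:
        depth //= 2          # halve the depth ...
        width *= 2           # ... and double the width (inverse factors)
    while width > std_width and depth * 2 <= std_depth:
        width //= 2          # halve the width ...
        depth *= 2           # ... and double the depth
    # Number of standardized blocks needed to tile the transformed design.
    blocks = math.ceil(depth / std_depth) * math.ceil(width / std_width)
    return depth, width, blocks

# Example: a 16K x 8 customer memory reshaped to tile onto 4K x 32 standardized blocks.
print(transform_customer_memory(16384, 8, 4096, 32))   # -> (4096, 32, 1)
```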
Abstract:
Methods and apparatus are provided for a fast unbalanced pipeline architecture. A disclosed pipeline buffer comprises a plurality of memory registers connected in series, each of the plurality of memory registers, such as flip-flops, having an enable input and a clock input; and a controlling memory register having an output that drives the enable inputs of the plurality of memory registers, whereby a predefined binary value on an input of the controlling memory register shifts values of the plurality of memory registers on a next clock cycle. A plurality of the disclosed pipeline buffers can be arranged in a multiple-stage configuration. At least one of the plurality of memory registers can comprise a locking memory register that synchronizes the pipeline buffer. The pipeline buffer can optionally include a delay gate to delay a clock signal and an inverter to invert the delayed clock signal. The clock signal can be delayed by the delay gate such that an output of the pipeline buffer is applied to a next stage of a pipeline buffer at a correct time.
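The following Python sketch models the described behavior at a high level: a chain of enable-gated registers whose enables are driven by the output of a controlling register, so a shift request presented to the controlling register causes the chain to shift on the next clock cycle. Timing details (the delay gate, the inverted clock, the locking register) are omitted, and the class and signal names are illustrative assumptions.

```python
class EnabledRegister:
    """Flip-flop-like register with an enable: it captures its input on a clock
    step only when enable is high; otherwise it holds its stored value."""
    def __init__(self):
        self.q = 0
    def clock(self, d, enable):
        if enable:
            self.q = d
        return self.q

class PipelineBuffer:
    """Behavioral sketch: a controlling register whose output drives the enables
    of a chain of registers connected in series, so a '1' presented to the
    controlling register shifts the chain on the next clock cycle."""
    def __init__(self, depth=3):
        self.control = EnabledRegister()
        self.chain = [EnabledRegister() for _ in range(depth)]
    def clock(self, data_in, shift_request):
        enable = self.control.q               # enables come from the controlling register's output
        # Shift the series chain from the last stage backwards so each stage
        # takes the previous stage's old value.
        for i in reversed(range(1, len(self.chain))):
            self.chain[i].clock(self.chain[i - 1].q, enable)
        self.chain[0].clock(data_in, enable)
        self.control.clock(shift_request, True)  # controlling register samples the request
        return self.chain[-1].q

buf = PipelineBuffer(depth=3)
buf.clock(data_in=1, shift_request=1)   # shift requested; the chain holds this cycle
buf.clock(data_in=1, shift_request=0)   # enable is now high, so the chain shifts
```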
Abstract:
A method and system for performing optical proximity correction (OPC) on an integrated circuit (IC) chip design is disclosed. The system and method of the present invention includes dividing the IC chip into a plurality of local task regions, identifying congruent local task regions, classifying congruent local task regions into corresponding groups, and performing OPC for each group of congruent local task regions. By identifying and grouping congruent local task regions in the IC chip, according to the method and system disclosed herein, only one OPC procedure (e.g., evaluation and correction) needs to be performed per group of congruent local task regions. The amount of data to be evaluated and the number of corrections performed are greatly reduced because OPC is not performed on repetitive portions of the IC chip design, thereby resulting in significant savings in computing resources and time.
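A minimal Python sketch of the grouping idea follows. It assumes each local task region can be reduced to a hashable signature (here, a canonical tuple of its contents) so that congruent regions compare equal, and it stands in for the actual OPC evaluation with a caller-supplied function. All names and the data model are illustrative.

```python
from collections import defaultdict

def group_congruent_regions(regions):
    """regions: mapping of region id -> hashable geometric signature
    (e.g., a canonical tuple of the shapes inside the region).
    Regions with identical signatures are treated as congruent."""
    groups = defaultdict(list)
    for region_id, sig in regions.items():
        groups[sig].append(region_id)
    return groups

def run_opc_by_group(regions, correct_region):
    """Run the expensive OPC evaluation/correction once per group of congruent
    regions and reuse the result for every member of the group."""
    corrections = {}
    for sig, members in group_congruent_regions(regions).items():
        result = correct_region(sig)          # one OPC pass per group
        for region_id in members:
            corrections[region_id] = result   # shared by all congruent regions
    return corrections

# Toy example: four regions, two of which are congruent repeats.
regions = {"r0": ("via", "line"), "r1": ("via", "line"), "r2": ("pad",), "r3": ("line",)}
print(run_opc_by_group(regions, correct_region=lambda sig: f"corrected:{sig}"))
```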
Abstract:
A system and method are provided for reducing signal delay skew, according to a variety of embodiments. One illustrative embodiment of the present disclosure is directed to a method. According to one illustrative embodiment, the method includes receiving an initial netlist having components and connection paths among the components; identifying a first connection path in the initial netlist that comprises path fragments for which there are no equivalent path fragments in a second connection path in the initial netlist; generating a skew-corrected netlist wherein the second connection path is re-routed to have path fragments equivalent to the path fragments of the first connection path; and outputting the skew-corrected netlist.
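A small Python sketch of the fragment comparison follows. It assumes each connection path can be represented as a list of hashable fragment descriptors (e.g., layer/length or via tuples) so that "equivalent" reduces to equality; the descriptors and names are illustrative, not the claimed representation.

```python
from collections import Counter

def missing_fragments(first_path, second_path):
    """Report the path fragments of the first path that have no equivalent
    fragment in the second path (multiset difference of descriptors)."""
    deficit = Counter(first_path) - Counter(second_path)
    return list(deficit.elements())

# Toy example: the second path lacks a via transition present in the first.
first = [("M1", 10), ("via12",), ("M2", 40)]
second = [("M1", 10), ("M2", 40)]
print(missing_fragments(first, second))   # -> [('via12',)]
```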
Abstract:
A system for reducing signal delay skew is disclosed, according to a variety of embodiments. One illustrative embodiment of the present disclosure is directed to a method. According to one illustrative embodiment, the method includes receiving an initial netlist comprising components and connection paths among the components. The method further includes identifying one or more skew-influencing features in a first connection path in the initial netlist that lack corresponding skew-influencing features in a second connection path in the initial netlist. The method also includes generating a skew-corrected netlist wherein the second connection path includes one or more added skew-influencing features corresponding to those of the first connection path. The method further includes outputting the skew-corrected netlist.
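Building on the same fragment-descriptor assumption as the previous sketch, the following shows one way a skew-corrected netlist could be produced: the second (target) path gains features corresponding to those found only in the first (reference) path. The dict-based netlist and the feature descriptors are illustrative assumptions.

```python
from collections import Counter

def skew_correct(netlist, ref_path, target_path):
    """Return a corrected copy of the netlist in which target_path gains
    skew-influencing features present only in ref_path."""
    corrected = {name: list(path) for name, path in netlist.items()}
    deficit = Counter(netlist[ref_path]) - Counter(netlist[target_path])
    corrected[target_path].extend(deficit.elements())   # add the missing features
    return corrected

netlist = {"clk_a": [("M1", 10), ("via12",), ("M2", 40)],
           "clk_b": [("M1", 10), ("M2", 40)]}
print(skew_correct(netlist, ref_path="clk_a", target_path="clk_b"))
```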
Abstract:
A built-in self-test circuit in a memory matrix. Memory cells within the matrix are disposed into columns. The circuit has only one memory test controller, adapted to initiate commands and receive results. Transport controllers are paired with the columns of memory cells. The transport controllers receive commands from the memory test controller, test memory cells within their column, receive test results, and provide the results to the memory test controller. The transport controllers operate in three modes. A production testing mode tests the memory cells in different columns, accumulating the results for a given column with the transport controller associated with that column. A results retrieval mode retrieves the accumulated results from the transport controllers. A diagnostic testing mode tests memory cells within one column while retrieving results for that column.
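The following Python sketch models the described arrangement behaviorally: a single memory test controller issues commands to per-column transport controllers, which run tests, hold their results locally, and report back either in bulk (production flow) or immediately for a single column (diagnostic flow). The toy pass/fail check, the mode method names, and the data model are illustrative assumptions.

```python
class TransportController:
    """Per-column controller: tests its column's cells, accumulates the result
    locally, and hands it back to the single memory test controller on request."""
    def __init__(self, column):
        self.column = column
        self.result = None
    def run_test(self, pattern):
        # Toy check: a cell "passes" if it stores and returns the pattern.
        self.result = all(cell == pattern for cell in self.column)
    def report(self):
        return self.result

class MemoryTestController:
    """Single controller that initiates commands and receives results."""
    def __init__(self, columns):
        self.transports = [TransportController(c) for c in columns]
    def production_test(self, pattern):
        for t in self.transports:             # test all columns; results stay local
            t.run_test(pattern)
    def retrieve_results(self):
        return [t.report() for t in self.transports]
    def diagnostic_test(self, column_index, pattern):
        t = self.transports[column_index]     # test one column and read it back at once
        t.run_test(pattern)
        return t.report()

columns = [[1, 1, 1], [1, 0, 1]]              # toy memory contents, by column
mtc = MemoryTestController(columns)
mtc.production_test(pattern=1)                # production testing mode
print(mtc.retrieve_results())                 # results retrieval: [True, False]
print(mtc.diagnostic_test(0, pattern=1))      # diagnostic mode on column 0: True
```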
Abstract:
A method of buffer insertion for a tree network in an integrated circuit design includes steps of: (a) receiving as input an integrated circuit design including a tree network; (b) selecting a buffer type available to the integrated circuit design from a cell library that results in a minimum total delay for a predetermined wire length; (c) identifying each candidate leaf node in the tree network that has a required pin-specific target delay; (d) inserting a buffer between each internal node that is traversed by a path from a candidate leaf node to a root node of the tree network and each leaf node that is not a candidate leaf node; (e) creating a buffer sub-tree in the tree network from an upstream internal node for each internal node that is traversed by a path from a candidate leaf node to a root node of the tree network; (f) re-parenting each internal node that is traversed by a path from a candidate leaf node to a root node of the tree network to a new buffer in the buffer sub-tree; and (g) generating as output a revised integrated circuit design that includes the buffer sub-tree.
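The following Python sketch is a heavily simplified rendering of steps (c) through (f): it marks nodes that lie on a path to a candidate leaf and re-parents the non-candidate leaves hanging off such nodes behind an inserted buffer, forming a small buffer sub-tree. Buffer-type selection from a cell library (step (b)) and all delay calculation are omitted; the node structure and names are illustrative assumptions, not the claimed algorithm.

```python
class Node:
    def __init__(self, name, children=None, target_delay=None):
        self.name = name
        self.children = children or []
        self.target_delay = target_delay     # set only on candidate leaf nodes

def on_candidate_path(node):
    """True if this node lies on a path from the root to a candidate leaf
    (a leaf with a required pin-specific target delay)."""
    if not node.children:
        return node.target_delay is not None
    return any(on_candidate_path(c) for c in node.children)

def insert_buffers(root, buffer_name="BUF"):
    """At each internal node on a candidate path, move its non-candidate leaves
    behind an inserted buffer so the candidate branch is isolated from their load."""
    if not root.children:
        return root
    if on_candidate_path(root):
        shielded = [c for c in root.children
                    if not c.children and not on_candidate_path(c)]
        kept = [c for c in root.children if c not in shielded]
        if shielded:
            kept.append(Node(buffer_name, children=shielded))  # buffer sub-tree
        root.children = [insert_buffers(c, buffer_name) for c in kept]
    else:
        root.children = [insert_buffers(c, buffer_name) for c in root.children]
    return root

# Toy example: one candidate leaf and two ordinary leaves under the root.
root = Node("root", [Node("crit", target_delay=1.2), Node("x"), Node("y")])
insert_buffers(root)
print([c.name for c in root.children])   # -> ['crit', 'BUF']
```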
Abstract:
A method for optimal placement of cells on a surface of an integrated circuit, comprising the steps of comparing a placement of cells to predetermined cost criteria and moving cells to alternate locations on the surface if necessary to satisfy the cost criteria. The cost criteria include a timing criterion based upon interconnect delay, where interconnect delay is modeled as an RC tree expressed as a function of pin-to-pin distance. The method accounts for driver-to-sink interconnect delay at the placement level, a novel aspect resulting from use of the RC tree model, which maximally utilizes available net information to produce an optimal timing estimate. Preferred versions utilize an RC tree interconnect delay model that is consistent with timing models used at design levels above placement, such as synthesis, and below placement, such as routing. Additionally, preferred versions can utilize either a constructive placement or iterative improvement placement method.
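A toy Python sketch of the cost-driven iterative-improvement option follows, with the timing criterion taken as a sum of Elmore-style RC delays computed from pin-to-pin Manhattan distance. The constants, the single-segment RC approximation of the tree, and the greedy accept/reject rule are all illustrative simplifications of the approach described above.

```python
import random

def rc_delay(src, sink, r_per_unit=0.1, c_per_unit=0.2):
    """Toy Elmore-style driver-to-sink delay: a single RC segment whose resistance
    and capacitance scale with pin-to-pin Manhattan distance."""
    d = abs(src[0] - sink[0]) + abs(src[1] - sink[1])
    return 0.5 * (r_per_unit * d) * (c_per_unit * d)

def placement_cost(placement, nets, timing_weight=1.0):
    """Timing criterion: weighted sum of driver-to-sink RC delays over all nets."""
    return timing_weight * sum(rc_delay(placement[drv], placement[snk])
                               for drv, sinks in nets for snk in sinks)

def iterative_improve(placement, nets, sites, iterations=1000, seed=0):
    """Move a random cell to a random free site and keep the move only if it
    does not worsen the cost criterion."""
    rng = random.Random(seed)
    best = placement_cost(placement, nets)
    cells = list(placement)
    for _ in range(iterations):
        cell, site = rng.choice(cells), rng.choice(sites)
        if site in placement.values():
            continue                        # site occupied; skip this move
        old = placement[cell]
        placement[cell] = site
        cost = placement_cost(placement, nets)
        if cost <= best:
            best = cost                     # accept the improving move
        else:
            placement[cell] = old           # reject and restore
    return placement, best

# Toy example: two nets, both driving cell 'c', on a 10x10 site grid.
placement = {"a": (0, 0), "b": (5, 5), "c": (9, 9)}
nets = [("a", ["c"]), ("b", ["c"])]
sites = [(x, y) for x in range(10) for y in range(10)]
print(iterative_improve(placement, nets, sites))
```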