Matthew C. Scott







Copyright © Matthew C. Scott 2004




A Thesis Submitted to the Faculty of the




In Partial Fulfillment of the Requirements

For the Degree of






In the Graduate College










This thesis has been submitted in partial fulfillment of requirements for an advanced degree at the University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the library.


Brief quotations from this thesis are allowable without special permission, provided that accurate acknowledgement of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the head of the major department or the Dean of the Graduate College when in his or her judgment the proposed use of the material is in the interests of scholarship. In all other instances, however, permission must be obtained from the author.




                                                            SIGNED: _________________________________






This thesis has been approved on the date shown below:





__________________________________             _______________________________

      Jo Dale Carothers, PhD.                                                                 Date

      Associate Professor

      Department of Electrical & Computer Engineering

      University of Arizona, Tucson










Dr. J. D. Carothers

Dept of Electrical and Computer Engineering

University of Arizona

Tucson, AZ 85721



Dr. Hugh Barnaby

Dept of Electrical and Computer Engineering

University of Arizona

Tucson, AZ 85721



Dr. Jerzy Rozenblit

Department Head

Dept of Electrical and Computer Engineering

University of Arizona

Tucson, AZ 85721



Dr. David Vaugh

Manager, Electronic Design Automation

Texas Instruments Tucson

Tucson, AZ 85706



Dr. Michael Peralta

Device Characterization and Modeling

Texas Instruments Tucson

Tucson, AZ 85706













1.1  Problem Statement. 6

1.2  Research Objectives and Motivation. 11

1.3  Contributions of Research. 13

1.4  Organization of the Thesis. 14



2.1  Design Environments, Kits and Tools Reviews. 17

2.2  Signal Integrity Reviews. 18

2.3  Parasitic Extraction Validation Reviews. 18

2.4  Regression Management Systems Related Work. 19

2.5  Signal Integrity Design Case Studies. 19



3.1  Historical Basis. 27

3.2  Design Tools and Flows. 31

3.2.1  Physical Design Verification Flows. 35

3.2.2  Digital S.I. Flows. 38

3.2.3  Analog LPE Flows. 45

3.2.4  Mixed Signal and SOC S.I. Flows. 53

3.3  Process Design Kits. 55

3.4 Design Environments. 60

3.4.1 Compute Hardware. 60

3.4.2 Network Backbone. 62

3.4.3 Operating Systems. 63

3.4.4 Standard Directory Systems: PDKs, Projects, Tools. 64

3.5  Design Flow Error Propagations. 65

3.5.1  Device-Level Modeling Error Injection. 67

3.5.2  Design Entry and Simulation Level Error Injection. 71

3.5.3  Physical Design Level Error Injection. 80

3.5.4  Layout Parameter Extraction Error. 88

3.5.5  Yield, Process and Wafer Level Error Injection. 106

3.5.6  Reliability,  Environment and Life-Time Level Error Injection. 113

3.6  Related Work. 113

3.7  Conclusion. 116

CHAPTER 4. 118


4.1  Related Work.. 120

4.2  Survey of Signal Integrity Concerns. 123

4.2.1  Physical Basis: Capacitance, Resistance, Inductance, Impedance Theory. 123

4.2.2 Timing Analysis. 144

4.2.3  Noise Analysis. 150

4.2.4  Thermal analysis. 156

4.2.5  Power Integrity. 158

4.3  Design Integrity Survey. 163

4.3.1 Electromigration. 163

4.3.2 Hot-electron Effects. 164

4.3.3 Wire Self-Heat 165

4.3.4  Process and Lithography technology concerns. 165

CHAPTER 5. 167


5.1  Related Work: Parasitic Extraction Theory and Methods. 169

5.2  Related Work: Parasitic Extraction Tools Benchmarking. 170

5.3  Interconnect Extract Benchmarking Experiment Development. 172

5.3.1  Process Parameters Definition. 173

Figure,  Comparison of Far-C, Nominal, and Field-Solver Extracts. 176

5.3.2  Extractor Program Development and Validation. 179

5.3.3  Layout Test Structures Development 186

5.4  Comparison and Benchmarking of Extractor Toolsets. 208

5.4.1 Benchmarking Experiments Executed. 209

5.5  Future On-chip Silicon Verification Experiments. 215

5.5.1  General Passive Structures. 216

5.5.2  Circuit-base Active Structures. 218

5.5.3  Summary. 224

CHAPTER 6. 225


6.1  Related Work. 227

6.2  The RegMan Architecture and PDK QA System. 229

6.2.1  RegMan Verification Flow. 230

6.2.2  RegMan Modus Operandi and GUI 234

6.2.3  RegMan Architecture and Coding. 238

6.3  Physical Verification Tools Validation (DRC, LVS, LPE) 243

6.3.1  RegMan Verification of DRC. 244

6.3.2  RegMan Verification of LVS. 245

6.3.3  RegMan Verification of LPE Runs. 246

6.4  Parasitic Extraction Tool Benchmarking. 247

6.5  Validation of Parasitics Laden Circuits. 250

6.5.1  Basic Intentional Device Extract Validation. 251

6.5.2  Rempars Script Architecture. 254

6.5.3  SimReg Script Architecture. 255

6.6  General Device Models and Simulators Checks. 258

6.6.1  Measured Vs. Simulation (A). 260

6.6.2  Schematic vs. Simulation (B). 261

6.6.3  SimulatorX vs. SimulatorY (C). 262

6.6.4  Corners min, max vs. nominal, Monte-Carlo (D). 263

6.7  Summary. 265

CHAPTER 7. 266


7.1  Advantages and Benefits of a Systematic Framework for QA of Design Kits. 267

7.1.1  Design Kit Development Acceleration. 268

7.1.2  Layout Assistance Uses. 270

7.1.3  Constraint Management and Error Propagation Uses. 270

7.1.4  Design Flow Management and Design of Experiments. 271

7.1.5  Post-LPE Design Re-Centering. 271

7.2  Future Directions: Distributed Systems for S.I. Convergence. 272

7.3  Post-extract LPE/SI use-ability Experiments. 273

7.4 CONCLUSION. 274










H.1  Structure of Experiments. 321

H.2  Device Level Checks. 321

H.2.1  Resistors. 321

H.2.2  Capacitors. 322

H.2.3  MOSFETs. 322

H.2.4  Diodes. 323

H.2.5  BJTs, PNPs/NPNs. 323

H.2.6  Miscellaneous Devices. 323

H.3  Schematic Integrity Checks. 323

H.4  Layout Design Rule Checks. 324

H.5  General Connectivity. 324

H.6  Layout Parasitic Extraction Checks. 324

H.7  Design Environment Checks. 324






Figure 1.1a, ITRS’03 Technology Node Vs. Design Cost for SoC........................................... 07

Figure 1.1b, ITRS’03 Gate Length Trends and Forecast.......................................................... 08

Figure 3.2a, A 'typical' design flow from Old days.................................................................... 33

Figure 3.2b, ITRS’03 Landscape of Design Technology........................................................... 35

Figure 3.2.1a, Assura™ RCX Flow......................................................................................... 37

Figure, ITRS’03 Delay for Metal 1 and Global Wiring Vs. Feature Size...................... 41

Figure 3.2.3a,  General Analog LPE Flow................................................................................ 47

Figure, Mixed-Signal/Ultra High-Speed Digital Trends................................................ 52

Figure 3.3a,  PDK Elements, From the Cadence Website......................................................... 58

Figure, Overestimation of Adjacent Layer through Neighbors...................................... 92

Figure,  RC Π Network Representation...................................................................... 94

Figure,  Conformal Interconnect Stack....................................................................... 95

Figure, Various Resistance Structures......................................................................... 97

Figure, Various Representation of Via Resistances...................................................... 99

Figure, ITRS Call for Integrated Design Systems........................................................ 104

Figure, OPC Effects, Future-Fab............................................................................... 108

Figure, ITRS and Etec lithography roadmap............................................................... 111

Figure, Diva™ Capacitance Polynomial and Generated Code..................................... 127

Figure, 2.5D Capacitance Topologies........................................................................ 128

Figure, RC Network Reduction Simplified.................................................................. 137

Figure, Assura™ RCX-HF Flow............................................................................... 141

Figure, Assura™ RCX-PL Extraction Flow............................................................... 142

Figure, Transition Delay Calculation............................................................................ 145

Figure, Elmore Delay................................................................................................. 147

Figure, Substrate Noise Injection Model..................................................................... 155

Figure, Equivalent Circuit Model for Interconnect Coupled to Substrate...................... 155

Figure, ITRS’03 Cross Section of Hierarchical Scaling (pg 4, Interconnect) ............... 174

Figure, Comparison of Far-C, Nominal, and Field-Solver Extracts ............................ 176

Figure, Capgen™ Flow............................................................................................. 182

Figure,  SIPPs Validation Arrays................................................................................ 192

Figure,  Raphael™ Parasitic Extraction Flow  ............................................................ 198

Figure, Comparison of Diva™ Extract, Chern’s Equation and Foundry data................ 212

Figure, Snake-Comb Structure................................................................................... 217

Figure, CBCM Circuit............................................................................................... 221

Figure, CBCM Circuit Stimulus.................................................................................. 222

Figure, CBCM Circuit Simulation............................................................................... 223

Figure 6.2.1a,  RegMan Design Kit Validation Flow................................................................. 232

Figure 6.2.2a, RegMan Graphical User Interface...................................................................... 237

Figure 6.5.1a, Extracted netlist vs. Ideal Schematic. ................................................................. 253

Figure 6.5.1b, Ideal vs. LPE Simulations.................................................................................. 253

Figure 6.6.1a, BJT I-V Check and  MOSFET Cbg Check....................................................... 261

Figure 6.6.4a, SimReg Ocean Script Sweep of Parasitic Corners (Min, Max, Nom)................. 264




Table 1.2a, Design Signal Integrity Concerns........................................................................ 12

Table 3.0a, Requirements of Successful Analysis of Parasitics Effects.................................... 22

Table 3.0b, PDK Shortcomings, ISQED ’04 Panel.............................................................. 25

Table, Process Technology Scaling Effects............................................................. 40

Table 3.2.3a, General Analog LPE Flow Elements............................................................... 49

Table 3.2.3b, LVS requirements for LPE Simulations........................................................... 50

Table 3.2.3c, Device Parasitic Extraction Parameters........................................................... 50

Table 3.2.4a, Suggested SOC Design Flow......................................................................... 54

Table 3.2.4b, SOC Noise Prevention Methods.................................................................... 55

Table 3.3a, Minimal Reference Flow.................................................................................... 57

Table 3.3b, PDK Development Considerations.................................................................... 59

Table 3.4.4a, PDK Standards.............................................................................................. 64

Table 3.5a, Design Flow Error Injection Stages.................................................................... 66

Table, LPE Device Simulation Parameters.............................................................. 71

Table, Analog Layout Error Introduction................................................................ 83

Table 3.5.4a, Primary Parasitic Extract Design Signal Integrity Concerns............................... 90

Table, Standard Interconnect Process Parameters (SIPPs)..................................... 90

Table, Non-Interconnect Process Parameters for LPE.......................................... 101

Table, RC Corners Selection................................................................................ 178

Table, Simulation Based LPE Validation Experiment............................................. 185

Table, Basic 2D, 2.5D, 3D Capacitance Extraction Structures.............................. 190

Table,  SIPPs Validation Arrays........................................................................... 191

Table, Raphael™ Tools....................................................................................... 194

Table, Raphael™ Interconnect Technology Format.............................................. 195

Table 5.4a, LPE Benchmarking Metrics of Goodness......................................................... 209

Table, Extraction Versatility Features.................................................................... 214

Table 6.0a, Classes of RMS PDK Tests............................................................................ 226

Table, Cell-List Format for Assura™ LVS run..................................................... 240

Table 6.3.3a, LPE Tool Functionality Tests........................................................................ 247

Table 6.5.3a, SimReg Header File Format......................................................................... 256

Table 6.5.3b, SimReg testType Class Variable and Values................................................. 257

Table 6.5.3c, SimReg Simulation ‘mode’s.......................................................................... 257

Table 6.5.3d, SimReg Evaluators....................................................................................... 257

Table 6.6.a, Device Level Checks...................................................................................... 259

Table 6.6b, Simulation Validation Classes.......................................................................... 260

Table 7.1.1a.  Estimated QA time: Manual vs. RegMan...................................................... 269

Table D.1, Flow-oriented Examples of Error Injection Points.............................................. 310

Table E.1, Example Spreadsheet of Standard Interconnect Process Parameters.................. 312

Table E.2, Manual Calculation of Parasitic Components..................................................... 314

Table G.1., LPE vs. TMA Results Summary...................................................................... 320





This thesis addresses the methodologies, systems and rationale developed and employed in the creation and validation of physical design verification and layout parasitic extraction tools.  These tools address on-chip interconnect parasitics and the related signal and design integrity concerns in the areas of deep sub-micron and mixed-signal design.  The objective has been to facilitate the design of robust, high-performance circuits, with the least over-design, including consideration of integration into a design kit, its multiplicity of interoperating tools and libraries, and their usage in design flows.  To this end, surveys and resulting checklists of process design kits, design flows and signal integrity issues have been compiled, and a system for benchmarking parasitic extraction tools is presented. The system integrates various scripts, including RegMan, into RMS, a Regression Management System. The information gleaned from the design kit analysis, the signal integrity survey and the extraction benchmarking is targeted for follow-on evaluation of representative circuits using the described tools and flows.





Engineering productivity in integrated circuit product design and development today is limited largely by the effectiveness of the CAD tools used.  - Paul R. Gray [1]


Legions of papers and articles have heralded the onslaught of deep sub-micron signal integrity effects resulting from the perpetual drive towards faster, smaller, cheaper, more feature-rich designs under shrinking market windows.  Many of these papers duly acknowledge the complexity of the design automation tools, but most originate from design automation companies in the business of selling extremely expensive tool suites.  Few works present the practical aspects of developing, implementing and validating the tools needed to survive the aforementioned trends.  The work presented here is the result of ten years of practice and research in the field of physical design verification (PDV), layout parasitic extraction (LPE), and the development of tools and methods to validate their correct and consistent behavior.  Throughout this thesis, it helps to keep in mind the overall ‘focus’ objective of the project:

    The development and validation of physical design verification and parasitic extraction tools to facilitate the design of robust, high-performance circuits, with the least over-design. 

Focus Objective

The corollary definition is to provide the best possible ‘silicon to simulation’ equivalence: Does the simulation, with the given device models, physical parasitics and all other design libraries and tools, sufficiently represent the final silicon to guarantee performance over all environmental and lifetime conditions?  This question is directed implicitly at the physical verification link between the front-end simulation and the back-end physical layout implementation.  Between the layout and actual silicon, there are numerous opportunities for fault introduction, many of which may be accounted for in Monte-Carlo statistical simulations. Thus, the silicon-to-simulation equivalence metric is more a question of coverage of the possible response surface of the circuit in question, and regression testing is the means of gaining an acceptable level of confidence.  Similarly, the mapping of a schematic to a physical layout implementation has many possible realizations, including the structure of devices, placement, routing and the topology of shielding and guard-rings, all of which must be validated for isomorphic (network graph) equivalence to the schematic by the PDV tools.  The physical verification tools include ‘Layout vs. Schematic’ (LVS) equivalency checks and Design Rule Checks (DRCs). The LPE tools analyze the post-layout database for parasitic resistance and capacitance, and may include inductance, among other elements. These components are ‘back-annotated’ to the original schematic netlist for a more representative simulation.
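The back-annotation step can be illustrated with a minimal sketch. The netlist cards, element-naming scheme and node names below are hypothetical and purely for illustration, not the output format of any particular LPE tool:

```python
# Minimal sketch of parasitic back-annotation (hypothetical Spice-like
# netlist syntax; element names and node names are illustrative only).

def back_annotate(ideal_netlist, parasitics):
    """Append extracted parasitic R/C elements to an ideal netlist.

    ideal_netlist: list of netlist card strings
    parasitics: list of (kind, node1, node2, value) tuples,
                where kind is 'R' (ohms) or 'C' (farads)
    """
    annotated = list(ideal_netlist)
    for i, (kind, n1, n2, value) in enumerate(parasitics):
        # Prefix element names so they cannot collide with intentional devices.
        annotated.append(f"{kind}par{i} {n1} {n2} {value:g}")
    return annotated

ideal = ["M1 out in vdd vdd pmos W=2u L=0.18u",
         "M2 out in 0   0   nmos W=1u L=0.18u"]
# One wire resistance and one coupling capacitance from a made-up extract;
# the wire node 'out' is assumed to have been split into 'out' and 'out_1'.
extract = [("R", "out", "out_1", 12.5),
           ("C", "out_1", "0", 3.4e-15)]

for card in back_annotate(ideal, extract):
    print(card)
```

The annotated netlist is then handed to the simulator in place of the ideal one, so the same testbench exercises the circuit with its parasitic load.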

It is important to realize that the focus objective has been pursued from the perspective of the Electronic Design Automation (EDA) engineer, whose responsibilities encompass all facets of enabling circuit design, optimization and innovation. This broadly stated EDA mission also incorporates implementing the design environment and the Process Design Kits (PDKs) that enable, through tools, libraries, methodologies and flows, the design of electronic circuits targeted for various semiconductor fabrication processes.  Thus, the solution to be provided (PDV and LPE) was bound by the requirement that it must fit, and enhance, the existing design environment, its PDKs, and design flows.

Signal Integrity herein is considered as those factors of the physical design which affect the end product’s performance or mean time to failure (MTF), and which are not available in the front-end simulation (i.e., pre-layout).  These S.I. parasitic factors include RC cross-talk induced errors, RC induced timing errors and IR drop on power supplies. The design integrity (D.I.) factors, also known as reliability, include electromigration, hot-electron effects, wire self-heat and antenna effects. These are but a few of the primary concerns that must be accounted for in deep sub-micron, high-frequency, and precision analog and mixed-signal design. Various other factors are reviewed, such as Selective Process Bias (SPB), copper low-K, stacked vias and dummy metal fills.

Although the LPE tools do not themselves evaluate many of these S.I. factors, they are the primary means of extracting the data from the layout, making that data available to other tools.  Whether the data provided is correct or optimally usable is another question, which must be addressed through copious regression testing and application to actual designs. The benchmarking of LPE tools is usually done by creating a large suite of test structures that represent the possible interconnect layout topologies, extracting their capacitance and resistance components, and then comparing the results either to a gold-standard tool (such as a 3D field-solver) or against actual silicon.  Many other methods have been described in the literature, including circuit-based test structures, E-beam, direct probing, and forms of built-in self-test.
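The comparison step of such a benchmark can be sketched as follows; the structure names and capacitance values below are illustrative placeholders, not measured data:

```python
# Sketch of the comparison step in LPE benchmarking: per-structure total
# capacitances (farads) from the extractor under test are compared against
# a field-solver reference. All names and values are made up for illustration.

def benchmark(extracted, reference):
    """Return per-structure relative errors plus mean-|error| and worst case."""
    errors = {name: (extracted[name] - reference[name]) / reference[name]
              for name in reference}
    mean_abs = sum(abs(e) for e in errors.values()) / len(errors)
    worst = max(errors, key=lambda n: abs(errors[n]))
    return errors, mean_abs, worst

extracted = {"parallel_lines": 1.02e-15, "crossover": 0.48e-15, "plate": 2.9e-15}
reference = {"parallel_lines": 1.00e-15, "crossover": 0.50e-15, "plate": 3.0e-15}

errors, mean_abs, worst = benchmark(extracted, reference)
```

In practice the suite would contain hundreds of structures, and the summary statistics feed the ‘metrics of goodness’ used to rank extractors.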

The development and validation of the aforementioned PDV and LPE system, within the context of the complete design environment, the design requirements and signal integrity concerns, eventually demanded the development of a regression system to bind and manage the various tests.  It was determined early on to make this system as general and flexible as possible, such that it could be applied to many tools, work equivalently across design kits for different processes, and be easily acquired by other EDA engineers and I.C. mask layout engineers.  A GUI was gradually developed as new capabilities arrived, along with multiple means of analyzing run results.  The core of the system manages jobs submitted to an LSF (Load Sharing Facility™) [2] farm, evaluates the results of each job when completed, and builds a corresponding report.  Primarily, the system is geared towards running Assura™ LVS, DRC and RCX jobs, but it also has the capability of invoking, for example, Spectre™ simulation jobs through the use of a generalized Ocean™ [3] script.  These facets will be described in further detail in Chapter 6, concerning the RMS system.
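A heavily simplified sketch of such a regression loop is shown below. This is not the actual RegMan code: in the real system each job is dispatched to the LSF farm, whereas here a job is a plain Python callable so the sketch stays self-contained:

```python
# Generic sketch of a regression-management loop in the spirit described
# above (not the actual RegMan implementation). A 'job' here is a callable
# standing in for an LSF-submitted tool run; the evaluator judges its result.

def run_regression(jobs):
    """Run each (name, job_fn, evaluator) test; return a pass/fail report.

    job_fn() returns a result object; evaluator(result) returns True on pass.
    """
    report = []
    for name, job_fn, evaluator in jobs:
        try:
            result = job_fn()
            status = "PASS" if evaluator(result) else "FAIL"
        except Exception as exc:          # a crashed job is also a failure
            status = f"ERROR ({exc})"
        report.append((name, status))
    return report

# Two toy 'DRC' jobs: each evaluator expects zero rule violations.
jobs = [
    ("drc_clean_cell", lambda: {"violations": 0}, lambda r: r["violations"] == 0),
    ("drc_dirty_cell", lambda: {"violations": 3}, lambda r: r["violations"] == 0),
]
report = run_regression(jobs)
```

The key design point is that evaluation is decoupled from execution, so the same loop can judge DRC counts, LVS match reports, or simulation measurements.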

As can be seen from this introduction, the scope and breadth of this project is rather large, the technologies are constantly moving, and convergence can be very slow.  Challenges included not only the research into the technologies and the programming of multiple components of the framework, but also a constant barrage of bugs and errors in libraries, EDA tools, and their implementations.  But each bug discovery was a form of success, in the context of the objectives of the project.  Unfortunately, it is not feasible to document the multitude of bugs and discrepancies found, but many will be alluded to.  As a prelude, bugs discovered during the development of the system included mismatches between the models created for a Spice (Simulation Program with Integrated Circuit Emphasis) netlist and those created for LVS netlists, errors in the Pcell (parameterized cell) construction of layout devices, and multiple errors in the LPE extraction of the layout. Overall, it is, of course, impossible to test and track every facet of interaction in the design flows and kits, even when limited to the PDV and LPE domains.  Thus, to simplify the work, and put it into the context of real-world designs, it makes sense to actually run a design through the kits and flows, and evaluate the capability of the PDV/LPE tools to enable the design of robust, high-performance, minimally over-designed circuits.

In summary, this thesis attempts to document the process, systems and rationale developed and employed in order to create and validate PDV and LPE tools. Given the breadth and depth of the endeavor, the presentation, along with the development itself, has been plagued with catch-22 conditions: parasitic-enhanced analysis requires knowledge of S.I. issues, which requires knowledge of design and design-kit issues, which in turn cannot be presented in any reasonably compact form without a focus on parasitics analysis.  Thus, the tone, form and sophistication of the thesis assume an audience of experienced electronic design engineers who are familiar with the topics of EDA and Signal Integrity, but also allow that the more likely audience might be students of electrical engineering. In the latter case, artistic license has been taken in opining on the vagaries of design and EDA, with the intent of presenting the practical aspects of trying to get a design, or even an EDA tool, functional and optimized in a real-world environment.


1.1  Problem Statement

The focus problem stimulating this project is simply the fact that designs are failing due to the increased effects of signal integrity as minimum lithography feature dimensions decrease, circuit frequencies increase, and operating voltages decrease. According to Juan Rey [4], studies show that more than 50% of designs are failing due to functional flaws, with mask costs topping $1 million. As can be seen in the International Technology Roadmap for Semiconductors 2003 (ITRS’03) [5] projection below (Figure 1.1a), the total cost of design is growing exponentially.  The predominance of design failures is partially due to the complexity of the designs, but also due to the complexity of the tools, their validation and their usage within a PDK.  That is, the S.I. tools needed to detect, correct or avoid such failures are, to a degree, inaccessible to the engineer due to the complexity of their usage and to errors and deficiencies in their implementation. The objective to remedy this condition is broadly scoped here as the development and validation of PDV and LPE tools to facilitate the design of robust, high-performance circuits, with the least over-design. But the pursuit of this objective leads to a much broader set of problems and translates partially into a problem of managing complexity: the complexity of designs in terms of signal integrity effects, and the complexity of design kits in terms of physical verification validation. Thus, the problem also includes an analysis of design kits and design flows. This complexity issue is one of the primary grand challenges listed in the ITRS’03 report. In the NSF research on billion-transistor systems [6], the potential ‘show-stoppers’ include “CAD tools, compilers, OS capabilities don’t scale fast enough” and “Complexity management: algorithms, abstractions, data management etc.”

Figure 1.1a, ITRS’03 Technology Node Vs. Design Cost for SoC [5]

(Reprinted with Permission of Sematech)


As stated above, the research, evaluation, development and implementation of layout parasitic extraction and signal integrity (S.I.) solutions fall into the domain of the Electronic Design Automation engineer.  As S.I. is the one facet of EDA that is coupled to all stages of design, it presents the added complexity of requiring tight integration between tools, and means of information sharing between the various levels of tools, to allow for the best optimality at each level.  Thus the resolution of the design-failures problem, in terms of PDV and LPE tools development, is also a problem of considering and balancing requirements in different classes of circuit design, and the various fabrication processes that implement them.


Figure 1.1b, ITRS’03 Gate Length Trends and Forecast [5]
(Reprinted with permission of Sematech)

EDA tools are driven by design needs.  The progress of fabrication technologies, CMOS, analog and BiCMOS processes, and the designs that employ them, has enjoyed exponential success, as predicted by Gordon Moore’s Law.  It can be seen from the ITRS’03 projected gate length trends (Figure 1.1b) that the scaling trend may continue unabated to at least the 6 nm gate-length node.  But it has become increasingly apparent that the EDA tools, and the Process Design Kits upon which they operate, have required a proportional increase in complexity to cope with the designs’ exponential increase in complexity and size.  This complexity eventually hits a bottleneck in the physical verification stage, where all components meet and must eventually be integrated and tested. In particular, the LPE tools are exposed to all PDK, process, design flow and methodology conditions and complexities.

The LPE tools serve as a bridge between the back-end physical implementation of the design and that which is simulated in the front-end of the design kit.  The role of LPE is, in its most simplified definition, that of making the circuit simulation more closely represent the final silicon behavior.  That is, the typical simulation with Spice may have a fairly good model of the behavior of discrete devices, but it cannot, a priori, represent the effects of interconnect parasitic resistances, capacitances and inductances.  The LPE tools attempt to provide this information by analyzing each shape in the layout and determining its parasitic components according to its form and relation to neighboring shapes.  This problem of correlating silicon to simulation is the basic definition of ‘Signal Integrity’, although Signal Integrity may also be considered to include other ‘non-ideal’ (defined here as ‘not in the schematic’) simulation effects such as substrate noise, thermal effects such as wire self-heat, electromigration and hot-electron effects. Thus, LPE validation is inherently and intrinsically dependent on the design, process, flow and kit complexities.
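As a rough illustration of the per-shape analysis, a wire segment’s capacitance might be approximated by an area (parallel-plate) term plus a perimeter (fringe) term. The coefficients below are illustrative placeholders, not process data; production extractors use process-characterized polynomials or 2.5D/3D models rather than this simple form:

```python
# Highly simplified sketch of assigning capacitance to one wire segment:
# an area (parallel-plate) term plus an empirical perimeter (fringe) term.
# The per-area and per-length coefficients are illustrative placeholders.

def wire_cap(width_um, length_um,
             c_area_af_per_um2=40.0,   # assumed area coefficient, aF/um^2
             c_fringe_af_per_um=25.0): # assumed fringe coefficient, aF/um
    """Estimate segment capacitance to the plane below, in attofarads."""
    area = c_area_af_per_um2 * width_um * length_um
    fringe = c_fringe_af_per_um * 2 * (width_um + length_um)
    return area + fringe

c = wire_cap(width_um=0.5, length_um=100.0)  # a 0.5 um x 100 um wire
```

Even this toy model shows why narrow wires are fringe-dominated: here the perimeter term contributes well over half the total.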

These effects, and their eventual impact on the behavior of the final silicon, must be taken into account when developing a design kit and its LPE tools, and ultimately in the design and simulation of electronic circuits.  A primary problem, and a typical objective within an EDA team, is that of validating that the data provided by the LPE tool is accurate, and to what degree.  Knowledge of the degree of accuracy of LPE can be used to define the bounds of statistical simulations so as to guarantee, within, for example, 3 sigma (standard deviations), that the circuit will operate with the added effects of parasitics.  This knowledge may also be used in conjunction with data on the variances and tolerances of device models, the accuracy of the Spice circuit simulator, and other factors such as process drift, voltage shift and temperature effects, to build tighter envelopes of operation for the device. Overall, it can be seen that a primary problem lies in managing the complexity of the factors contributing to a design’s performance.
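The use of a known extraction accuracy to bound a statistical simulation can be sketched as follows, assuming, purely for illustration, a 10% one-sigma uncertainty on extracted capacitance and a simple RC product as the delay metric:

```python
import random
import statistics

# Sketch of folding a known LPE accuracy into a statistical bound: if the
# extractor is believed accurate to ~10% (1 sigma) on capacitance, sample
# the parasitic and report a nominal and 3-sigma delay envelope.
# All numbers are illustrative, not process data.

def delay_envelope(r_ohm, c_nom_farad, c_sigma_frac=0.10, n=20000, seed=42):
    rng = random.Random(seed)
    delays = [r_ohm * rng.gauss(c_nom_farad, c_sigma_frac * c_nom_farad)
              for _ in range(n)]
    mu = statistics.mean(delays)
    sigma = statistics.stdev(delays)
    return mu, mu + 3 * sigma        # nominal delay and 3-sigma worst case

mu, worst = delay_envelope(r_ohm=1000.0, c_nom_farad=1e-13)
```

In a full-sigma analysis the extraction uncertainty would be combined with device-model and simulator tolerances rather than treated alone, but the principle of converting tool accuracy into a design margin is the same.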

The development of PV (Physical Verification) tools such as LVS and DRC, including S.I. parasitic extraction tools, has been fraught with errors and missteps.  This systemic problem has been the basis for the development of a regression system to validate those tools themselves.  Validation of an LVS or DRC rule-set turns out to be much more involved than simply creating a pipecleaner design and giving the tools a spin.  For both the LVS and DRC cases, it does little good to test a ‘clean’ design, as the intent of the PV tools is to detect and flag errors in the design. Thus, many test cases incorporating most, if not all, of the possible design rule and LVS errors must be devised.  Similarly, the quality check (QC) of LVS and LPE, in terms of enhancing silicon-to-simulation equivalence, also requires that the front-end simulations with the parasitics back-annotated perform as expected. Thus, cross-tool tests must be devised, which implies that the testing framework must control multiple tools and have means of evaluating their outcomes.  Of course, given the complexity of the tools, libraries and kits, it follows that the required number of permutations of test runs can be enormous. Thus, the regression system must be able to optimize the use of computer resources through load-sharing process-management tools, and independently analyze the results of each run.
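The deliberate-error test matrix described above can be sketched as follows; the layer and rule names are illustrative stand-ins for a real rule deck:

```python
import itertools

# Sketch of building a deliberate-error test matrix for PV validation:
# each (layer, rule) combination yields a test case whose EXPECTED outcome
# is one flagged violation. A run that reports such a case clean exposes a
# tool or rule-deck bug. Layer and rule names are illustrative.

layers = ["metal1", "metal2", "poly"]
violations = ["min_width", "min_space", "enclosure"]

def build_cases():
    cases = [{"name": "clean_reference", "expect_errors": 0}]
    for layer, rule in itertools.product(layers, violations):
        cases.append({"name": f"{layer}__{rule}", "expect_errors": 1})
    return cases

def check_run(case, reported_errors):
    """A regression 'pass' means the tool reported exactly what was injected."""
    return reported_errors == case["expect_errors"]

cases = build_cases()
```

With real rule decks the product space grows quickly, which is exactly why the permutations must be farmed out and each result judged automatically.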

In restatement, the problem addressed by this work can be encapsulated in the complexity of designs in terms of signal integrity effects, and the complexity of design kits in terms of physical verification validation.  Both problems must be addressed to attain the primary goal: development and validation of PDV and LPE tools to facilitate the design of robust, high-performance circuits with the least over-design.


1.2  Research Objectives and Motivation

            The project’s objectives have been the development of a set of tools, methodologies, and a support system that affords the design engineer the capability to realize accurate and efficient parasitic-enhanced signal-integrity analysis of the design under test.  The target is relative ease-of-use of the tools, with a reasonable expectation of attaining timing/SI closure, and with an estimable confidence level that the simulation-to-silicon error is within the desired envelope of performance.  Inherently related, the parasitic extraction tools must be developed with a view to interoperability and data flow to follow-on S.I. tools concerned with, for example, power analysis, substrate noise, physical reliability and timing closure.  The solution that EDA provides for parasitic extraction and signal integrity must simultaneously strive to be:


·        Rigorous: The analysis must guarantee timing consistency

·        Robust:    The solution must be able to accommodate varying needs and resolutions.

·        Feasible: It must be simulatable within reasonable time and compute resources.

·        Usable:   The tools and methodologies must be integrated and easy to learn.


The parasitic extraction tools must also be developed with a constant eye on the realm of design needs targeted.  Thus, consideration must be given to the various design classes (e.g., op-amps, data converters, DSPs, filters, power converters), the available analysis tools, and the available compute resources.  These can be tabulated as:



1.      Capacitance extraction accuracy needs (compared to other errors, design sensitivity)

2.      RC requirements, RC reduction capabilities, vias, substrates, devices

3.      Inductance requirements, high frequency ranges

4.      Simulation max time, data size, accuracy trade-offs

5.      Digital vs. Analog design requirements

6.      Parasitic/Signal-Integrity analysis tools needs

7.      Fit and integration into existing design flows, PDKs, practices.

Table 1.2a, Design Signal Integrity Concerns


These requirements themselves are in constant flux, as driven by the need to keep up with existing and forthcoming technologies.  The EDA tools change daily, as do the environment conditions and the design requirements.  Although the development of PDV and LPE tools is a small part of electronic design in general, they are, as presented above, integral to all parts of the design flow, and the final check of the physical design resulting from the inputs of all previous stages.  Thus, it is imperative that the development of these tools be done with a fair understanding and accounting of all the aforementioned contributors, which include the form and function of the EDA environment, the contributing signal integrity effects, and the designs and design flows on which they operate.


The overall objective was a set of qualified tools, libraries and a reference design flow which would afford the design engineer the greatest likelihood of achieving first-pass silicon success.


1.3  Contributions of Research

The contributions of this research include a set of checklists and guidelines for the development, analysis and validation of certain physical design verification and parasitic extraction tools for industrial use on leading-edge designs.  These checklists include surveys of:


1.      A Parasitic Extract Validation Project Outline, (Appendix A)

2.      The design environment and design kit development, (Appendix C)

3.      Design-flow error propagation, (Appendix D)

4.      A Signal Integrity Concerns checklist.  (Appendix I)


The end-result contributions are a ‘holistic’ set of tools, experiments, and their reports, which have helped in accelerating and optimizing the development of physical verification tools and their validation through regression tests.  A further, and perhaps primary, contribution of this work has been the development and employment of a process-independent regression management tool to manage layout verification runs, track the stability of the environment, and provide a means of running parasitic-enhanced corner simulations on designs.  Given the successful accomplishment of the above research goals, and the concomitant development of analysis and tools, the future contributions will be demonstration of the benefits of the results through application to simple representative circuits.  This will include comparisons of simulations varying PVT corners, LPE corners, and Monte-Carlo distributions.

This work does not have the benefit and advantage of developing and defending a focused invention, but rather contributes a holistic view of the process necessary for evaluating, developing and implementing parasitic extraction tools.  This work leads to a final appraisal and proposal for a system to globally manage and address signal integrity problems.


1.4  Organization of the Thesis

            The research is roughly organized by data dependency, such that each chapter builds on the results of the preceding one.  This sequence has already been presented in the prior sections, but will be further elucidated here.

Chapter two presents works found that relate to the management of tools in an EDA environment.  The structure and nature of the design environment and PDKs cannot be ignored in the development of LPE and S.I. tools.  Thus, Chapter three attempts to provide a synopsis of the typical analog and digital design flows and the integration of the PDK and tools.  The chapter evaluates the dependencies between tools, data and environment in a process design kit.  The concerns of the design process and its error injection points must be considered when developing simulation and extraction tools in order to neither under- nor over-design the tools.  Eventually, this knowledge will improve corner definitions and Monte-Carlo (MC) simulations.

In Chapter four, a review of the design trends as presented in industry reports and a survey of signal integrity effects are presented. A general knowledge of all contributing factors provides a better understanding of the eventual field that parasitic extraction tools work in. A survey of the issues of physical design effects on the performance of circuits, the methodologies to account for the effects at various points in the design flow, and analysis of the error contributions of various stages of design is presented in order to scope an economical valuation of various degrees of accuracy in parasitic extraction.  Focus is given to the capabilities of existing tools and methods, as the objective is to provide the best parasitic extraction and signal integrity analysis tools within the bounds of development resources, design requirements, tool capabilities and compute capacity.

Chapter five focuses on parasitic extraction validation and benchmarking.  An overview of some related projects in extraction validation is presented.  Then the nature of the test structures is investigated, and the relationship to designs, extractor capabilities, and actual silicon is discussed.  The structure of the experiments is presented, but the means of managing and executing them, and the post-analysis, depend on the following chapter.

In Chapter six, a framework (RMS) for systematically validating the quality of the design tools and libraries through the design flow is presented.  A tool, "RegMan" (Regression Manager), is introduced which encapsulates the regression systems for validation of verification tools, employing the distributed processing of jobs over LSF.  This tool has been developed with the intent to also execute parasitic extraction and evaluation.  It is coupled with others, such as a 3D solver, a distributed load-sharing facility, and a Spice simulation scripting system.  Eventually, the tool should be able to manage the evaluation of various stages of design, and build error-propagation tables that allow for tracking the accumulation of error through a design flow.

In conclusion, Chapter seven summarizes the results of this work and presents a survey of forward-looking requirements of S.I. issues and various means of addressing them.

As this thesis is also a report of a project, its motivations, research and development, a summary of the project plan will be very helpful in understanding the overall composition of the various parts presented below.  This project outline is included as Appendix A, and is left as optional reading for the reader.





This chapter provides an overview of works related to the overall concept of process design kit validation and parasitic extraction.  Given the somewhat broad expanse of this research project, no directly equivalent works were found, although sub-works abound.  Due to the large quantity of prior work in related sub-areas, each of these sub-areas is discussed in a separate chapter that follows.  These sub-areas include signal integrity, parasitic extraction validation, and PDK testing regression systems.  An overview of the related work is first presented here to set the stage for a more detailed discussion of these background materials.


2.1  Design Environments, Kits and Tools Reviews

With regard to the relationship of parasitic extraction to design flows and design kits, related work appears to be largely limited to industry tools presentations.  In [7] and [8], a good overview of the concerns of addressing signal integrity within the framework of design kits and design flows is presented.  Caignet introduces concerns of S.I. with an added awareness of the driving design flows and methods in [9].

The papers [10] and [11] are particularly salient, as they both consider the impact of design tools and flows in the introduction and management of error.

The documents developed by the Open Kit Initiative and the Fabless Semiconductor Association in [12] and [13] attempt to bring order and efficiency to the PDK development process, including quality assurance methods and standards.


2.2  Signal Integrity Reviews

There are numerous papers that give various overviews of signal integrity, its theory, the algorithms employed and the tools developed, but one that stands out as significantly apropos to this work is the ITRS biannual report [5].  Similarly, [14] provides an overview of industry trends and concerns.

Altogether, about 390 papers were found relating to signal integrity and parasitic extraction, although many additional papers have likely been written on this topic.  Two papers, [15] and [16], are particularly useful here due to their brevity and clarity.  Both provide a clear and practical view of everyday S.I. issues as encountered in real designs.

The dissertation in [17] provides a general overview of S.I. issues, and presents a method for current-sensing, as opposed to repeater insertion, for timing management.


2.3  Parasitic Extraction Validation Reviews


In the area of parasitic extraction validation, there are a number of papers providing good surveys of the state of the art, and others with interesting ideas on on-chip test structures.  The two reviewed in this thesis, [18] and [19], appear to be the most popular, based on the frequency of employment of the methods proposed and the frequency of citation.

With regard to the parasitic extraction system benchmarking project, three papers [20, 21, 22] were found which described benchmarking projects on the target 0.35um process.  These projects were each contracted by the respective companies and, although the actual data is not available, the methods and means of acquiring that data are of interest.


2.4  Regression Management Systems Related Work


In the realm of regression management systems for the QC of design kits, tools and flows, the works discovered include [23, 24, 25].  Although none of these papers address regression management directly, they do relate to the problem of tools management through frameworks.  In essence, if one were to present to these frameworks and flow managers a script to run a set of tests, they would in fact be regression management systems.


2.5  Signal Integrity Design Case Studies


A future objective is to exercise a design through the various tools and flows which directly, or indirectly, employ data obtained from parasitic extraction.  Numerous works were found investigating the effects and management of signal integrity in specific designs.  Two will be reviewed here, [26] and [27]: one with a focus on analog, the other on a very large digital design.

Although the above literature is presented as related work, these are by no means the only papers referenced within this work.  They will serve as the foundation for each sub-part, but other papers will be required to establish or elaborate upon certain points.






“There are known knowns … there are known unknowns … and there are unknown unknowns …” - Donald Rumsfeld, on the search for WMD.


In the search for errors in design kits, there are similarly known conditions and errors – which may simply be avoided or fixed.  There are conditions which have not yet been proven or bounded, and are thus known unknowns, but may be accounted for by robust design coverage.  And then there are the unknown gremlins which ultimately destroy your chip.

This chapter presents a review of the general components of a design environment, including the structure and development of a PDK, and the nature of digital and analog design flows.  These facets of design are presented, and considered with respect to the primary objective of optimal development and validation of PDV and LPE tools.  As introduced, this objective has been pursued from the perspective of the Electronic Design Automation (EDA) engineer, whose responsibilities include all facets of enabling circuit design, optimization and innovation.  This broadly stated EDA mission also incorporates implementing the design environment and the Process Design Kits (PDKs) that enable, through tools, libraries, methodologies and flows, the design of electronic circuits targeted for various semiconductor fabrication processes.  The PDV and LPE solution to be provided was bound by the requirements that it must fit, and enhance, the existing design environment, its PDKs, and design flows, which have evolved over time.  Similarly, it must be tailored to the various design type requirements that the target process technology usually carries.  And ultimately, the solution must be able to succeed within the bounds of the simulation and compute resources available.  As there are infinite possible variations on these flows, kits and designs, it would be impossible to specify an ideal, universal environment.  But, as presented in a Mentor Graphics white paper [28], successful analysis of parasitic effects can only take place if:


1.      The parasitics can be modeled accurately

2.      The massive data can be managed and organized efficiently

3.      The extraction results can be effectively used by post-layout analysis tools for reliability, noise, timing and power analysis


Table 3.0a, Requirements of Successful Analysis of Parasitic Effects


To briefly introduce the nature and complexity of the Process Design Kits:  Each process used for design must have a set of libraries, tools, glue-code and documentation to enable circuit design.  Primarily, this includes the device models, schematic capture symbols, device layout generators (parameterized cells), digital cell libraries and layout technology files.  These components are developed with one eye on the foundry requirements and capabilities, and another on the software/hardware environment targeted.  Both, of course, are driven by the design requirements, which set the foundry goals and the tools’ speed, accuracy and data size requirements.  It is also the design requirements that must bind all of the above together to handle the overall picture of Signal Integrity.

An in-depth checklist outline of these components is provided in Appendix C, as gleaned from this research and practical experience.  There is also a new PDK checklist produced by the Fabless Semiconductor Association [13].  As stated by Richard Goering in [29], “The amount of information in a PDK can be huge, and the presentation from one foundry to another is inconsistent.”  The new checklist provides a list of deliverables, and a "proxy" for their quality and maturity.  This can be invaluable in the organization of QC checks and design flow validation.  Similarly, a new initiative, the “Open Kit Initiative” [12], is developing guidelines for standardizing PDK development, nomenclature, usage models, interfaces, quality thresholds, and delivery structures and mechanisms.  As stated in their Design Objectives Document, “the inefficiencies inherent in the custom IC design process are due largely to the fact that design tools require detailed process data”.  And this data has been made tool-specific, with tool-specific models, without regard for interoperability.

The process of electronic design is increasingly dependent on the capabilities and quality of electronic design automation tools and the integrity of their employed libraries and environments.  Design complexity, Signal Integrity and the integrity of the underlying Process Design Kits that facilitate the process of circuit design are all intricately interlinked.  A design’s performance is generally limited by the worst link in the design flow, whether it is in the device models, the simulator, cell library and Pcell layout generation or parasitic-enhanced re-simulation.  Of course, any error can be designed around, but this generally leads to further troubles later on.  Conceivably, the effects of an error in any stage of the design process may propagate and expand further down the design flow, leading to a critical fault.  The existence and nature of such errors may be hidden to the designer, and thus the designer must work on a basis of confidence in the tools and libraries.  Ideally, the circuit designer need only consider the usual process-voltage-temperature (PVT) corners and signal noise in defining the envelope of operation of a design.  That would be the case if the design environment were perfect, the device models very accurate, and the parasitics fully accounted for.  In reality, the increasing demands to optimize power, speed, accuracy, size, yield and time-to-market all conspire to drive the design further into the signal integrity danger zone.  Given these imperatives, it is already prohibitive to obtain design closure on signal integrity issues, not to mention contending with hidden design kit tool and library errors.  An interesting consequence of error propagation is that it makes little sense to spend enormous time and resources to develop LPE tools with 0.1% accuracy to silicon if there exist numerous errors in the design kit that ‘swamp’ any accuracy obtained in LPE.
Cases have been discovered during the course of this work of device models with 1000% errors in some properties, and of layouts constructed with parasitic structures that throw the device operation off from the simulated behavior by margins much greater than the device’s tolerance.  Of course, if the LPE tool could provide 0.1% accuracy to silicon without a large cost in extraction and simulation time, then it leaves plenty of room for improvement in the rest of the tools.
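The ‘swamping’ argument above can be made concrete: if the error contributions are treated as independent, they combine in quadrature, so the largest term dominates almost entirely.  A quick sketch, with illustrative figures only:

```python
import math

def rss(*errors):
    """Combine independent relative errors in quadrature (root-sum-square)."""
    return math.sqrt(sum(e * e for e in errors))

# Illustrative only: a 0.1% LPE error is invisible next to a 10% kit/model error.
total = rss(0.001, 0.10)
print(f"{total:.5f}")  # ≈ 0.10000, i.e. effectively unchanged from 10%
```

Under this assumption, tightening LPE from 1% to 0.1% changes the combined error by well under a part in a thousand while a single 10% kit error remains, which is precisely why design kit integrity must be addressed before extreme extraction accuracy pays off.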

At ISQED 2003, a panel of leading technologists discussed the shortcomings of PDKs [30].  Some of the points raised are paraphrased here:


-Design rules alone cannot determine yield. Designers must collaborate with the fab.

-Processes, and their PDKs, shift rapidly over time.  Changes in kits need fast QC.

-Techfiles from a process keep changing, and the models follow nine months behind.

-Revision control is a big problem. Designs may be built over several revs of a kit.

-Simply fixing the mess that lies beneath the surface of the PDK would be a big help.

-PDK’s are the common interface between foundries, EDA vendors and design teams.

-Design rules are growing in number (i.e. 650 for 90nm). They need to be prioritized in DRC.

Table 3.0b: PDK Shortcomings, ISQED ’03 Panel


The above paragraph states the reliance and limits of the design on the PDK, but it also implies that design kit integrity and design complexity are co-evolutionary.  Increasing demands on design performance and increasing susceptibility to signal integrity issues drive the design kit to increasing levels of complexity.  Likewise, the design kit complexity leads to exponential growth in the design’s susceptibility to kit errors.  Rajeev Madhavan, CEO of Magma Design Automation, has summarized this complexity condition:


“Many EDA companies responded to the challenges by acquiring point tools, each highly optimized for a particular critical area. Designs, therefore, see a significant increase in iterations, as one tool's changes introduce new problems for the next tool in the chain, and back again. Tools and methodologies are so complex that designs can be completed only by massive and unintelligible scripts. In addition, the multiple tools create obstacles to design completion by introducing interoperability and data-translation barriers.” [31]


As another reference of practice, in [4], it is emphasized that the infrastructure for parasitic extraction back-end tools has fallen behind in terms of providing designers accurate silicon modeling. It is noted that traditional tools are specialized, focusing on either analog or digital, and forcing custom flows to fit point tools. Notably, each analysis is stated to require different runs, including power, noise, static and dynamic timing, and general S.I. analysis.  This creates the complexity of having to build various rule decks for the separate tools and flows, and fitting them into the overall design kits and flows. And increasingly, the parasitic extraction requirements are becoming non-linear, or non-pattern matched, due to conformal dielectrics (non-planar), copper wiring and nonrectangular cross sections.  Primarily, it is noted that the solution requires tight integration between tools and the layout environment in order to allow back-annotation of parasitics to the schematics, and integration into mixed-signal and other S.I. analysis tools.

Overall, there exists a need to systematically qualify the various stages and components of a design kit, and to provide cross-stage, cross-tool test-chains to define paths of confidence rather than clouds of errors.  There is also a need to quantify the peak errors of various stages and devices, and to identify opportunities for accuracy and efficiency improvement in both the design kit development process and the design process itself.  The above conditions and requirements have directed this work to include this chapter’s evaluation of the dependencies between tools, data and environments in process design kits, and have motivated a framework for systematically analyzing the quality of the design tools and libraries through the design flow.


Organization of Chapter 3


This chapter first presents a historical background to provide context into the complexity of the real-world task, relating the evolving complexity of designs to that of design tools and environments.  Next, the discussion on EDA environment and design flows is presented.  This is followed by a review of a few works with a focus on projects that investigate S.I. tools in design environments, papers which investigate the quality of EDA tools, and some general discussions on S.I. issues in analog and digital design flows. 


3.1  Historical Basis

A brief review of the history of this project will help elucidate the motivations, scope and breadth of the work included.  This research is a direct result of efforts to implement design automation and physical verification systems at a leading analog semiconductor company.  The work has been progressing for nearly ten years, and thus reflects and addresses the fundamental theoretical and practical problems seen in the development, testing and usage of EDA tools during that span of time.  It is thus interesting to note the relative increase in complexity of the EDA environments, and the increasing impact of LPE tools.  The state of EDA at the representative company was very similar to that of many semiconductor companies, as they are all naturally guided and limited by the technologies available at the time.  As noted in [32], a review of EDA at IBM spanning several decades, the close cooperation between product, technology, and tool development gave rise to many innovations in EDA at IBM, providing it with significant competitive advantages.  It is interesting to note in that paper the strong references given to the design environment, and the eventual tight integration of tools in order to address signal integrity issues.

Ten years ago (circa 1994), when the Physical Verification and LPE projects began, the design environment consisted of a few workstations for layout connected by Ethernet, and a P.C. on most circuit designers’ desks.  The tools used in this environment consisted primarily of Pspice™ and Hspice™ for simulations, with ViewLogic’s WorkView™ for schematic capture on the P.C., Cadence’s Virtuoso™ for layout, and Cadence’s Dracula™ program for DRC and LVS (although, more often than not, colored markers were employed).  The designs developed in this environment were typically limited to an order of ‘hundreds’ of devices (analog), based on several flavors of bipolar processes with 1um minimum features.  Parasitic extraction was not a high priority at the time, as the designs were mostly slow analog; the Dracula™ 2-dimensional LPE (Layout Parasitic Extraction) and PRE (Parasitic Resistance Extraction) extractor sufficed.  Yield was the dominant factor to be considered in a design’s physical implementation.  Complexity, and its cousin, Chaos, on the other hand, were the primary concern for physical verification.  Chaos reigned in terms of a lack of standards and policies.  At the time, there did not exist standardized libraries for the device models, symbols and their layouts.  Rather, they were basically made on the fly by engineers and shared.  There were literally hundreds of variations on resistor, capacitor, diode, and BJT layout construction.  Consequently, there were also quite a lot of device model files floating around for the various devices, and very few of them were equal.  In this environment, development of an LVS program became first a task in managing complexity: standardizing and building structure into the environment, and defining procedures and protocols into a design flow.

Over the ensuing ten years, the design environment expanded to include hundreds of Unix-based workstations connected by multi-gigabit networks, spread across the planet in over 20 sites.  The fabrication processes available increased to include about 30 in-house flavors of CMOS, BiCMOS and the still active ‘legacy’ bipolar processes, not to mention all the foundry processes contracted.  These processes span technology nodes from 2um minimum features down to 90nm, and include all sorts of flavors, including SiGe for high-frequency RF design, low-K Cu interconnects, high-voltage power processes with drain-extended MOSFETs, and a number of substrate-isolation flows.  The tool set complement has increased to include more than a hundred tools, although the vendors have mostly consolidated into several primary players such as Cadence, Synopsys and Mentor Graphics.  In the area of Physical Verification alone, there are at least four major tools in use: Cadence’s Dracula™, Diva™, Assura™ and K2-Ver™.  The good news is that the newer PDKs have, for a couple of years, focused and standardized on a defined set of tools.  Yet, there are still overlaps.  The primary analog circuit simulators are Cadence’s Spectre™ and the company’s in-house version of Berkeley Spice.  The legacy design kits still demand the ability to use Pspice™ and Hspice™, and some designers are pushing for the integration of yet other simulators.  The digital flow is much more advanced and fragmented, yet tends to centralize around Verilog and VHDL with Mentor’s ModelSim™, Synopsys’ synthesis tools and Cadence’s Silicon-Ensemble™ for auto place-and-route.  Yet, this is in constant flux as the demands for Signal Integrity management raise the bar.  For mixed-signal simulations, there is a mixture of Cadence’s Spectre-Verilog™ and AMS™, and Synopsys’ Nanosim™.  Further definition of the general design flow is relegated to the discussion on design kits below.

Overall, the past ten years have seen quite a bit of consolidation, standardization and streamlining built into the design kits and environments.  But this increase in ‘order’ has been overwhelmed by an increase in the complexity of the designs and design kits, and in the amount of legacy baggage to be maintained in terms of older kits (due to I.P. reuse practices, which entail resurrecting and migrating designs from long-past kits).  Given the number of components included in a design kit, and the number of tools that must operate on it, a small revision of the kit could take literally weeks to test and validate.  A central part of the QA sign-off of a design kit is the Physical Verification stage.  The LVS stage brings together the front-end schematics and back-end layout, and the LPE stage provides a means to compare the ideal (schematics-based) simulation to that of the netlist derived from the layout.  But, as presented above, the LPE stage must be able to integrate into a plethora of tools and flows.  Moreover, the LPE stage must be robust and accurate enough to deal with highly sensitive analog designs (ADCs), high-frequency designs (RF), and of course, timing closure for enormously large digital designs and systems on a monolithic chip (commonly known as SoCs).  Thus, these tools must be tested and benchmarked under real design conditions through common design flows.

This brief historical review sets the background for understanding the nature and origin of the design environment complexity, and the progressing impact and complexity of LPE tools.  But, the delineation of this thesis’ research objectives is presented in the form of a project outline (Appendix A).  It should be noted that, as with the history of the design environment and design needs, the requirements of this project started off simple and concise, but gradually expanded to include the six milestones presented above.


3.2  Design Tools and Flows

The general concepts of digital and analog flows are briefly presented here; these will further motivate the definition of the process design kits they require in Section 3.3, and the design environments that enable their use in Section 3.4.  The Process Design Kits (PDKs) consist of a set of tools and libraries which, through defined methodologies and flows, enable the design of electronic circuits targeted at various semiconductor fabrication processes.  But it is the design flow that dictates the requirements of the PDKs.  For example, an auto place-and-route stage is universal to all large digital flows, but the tools employed and the structure of the cell libraries vary.

The design flow evaluated here principally consists of five stages: circuit simulation (analog/digital), physical layout, physical design verification, parasitic extraction, and back-annotated re-simulation.  There are innumerable ways this flow may be constructed and extended.  Figure 3.2a represents one implementation.  There are multiple Spice simulators, multiple layout tools, and multiple DRC, LVS, and LPE tools, and the device and cell libraries must work in all of them.  Thus, the design kit development process is quite involved.  Many concerns must be taken into consideration and made to fit with the total kit objective, which leads to multiple re-works and gradual improvement.  Given the interdependencies between libraries and tools, changes in any part may create faults propagated to others.  A testing and sign-off quagmire results.  As mentioned, the design kit complexity is directly driven by the design complexity and its need for multiple tools and flows, and their interdependencies.  Thus, the design kit development must be thoroughly researched and pre-planned based on design requirements, process technology capabilities, the available tool suite, and planned design flows.


Figure 3.2a,  A 'typical' design flow from earlier days. (By Author)


            The partitioning in the above design flow can be deciphered as follows:  The upper-left block represents the Cadence design framework, which is central to many of the tools used, including schematic capture and layout.  The block on the left below it represents the schematic capture and analog simulation environment.  The block on the upper right represents system-level design, such as signal processing.  Below that are two blocks, one on the left for synthesized digital design, and the other for custom, hand-drawn digital logic entry and simulation.  Below the analog and digital blocks follows the physical layout design, with analog flowing into the left part, and digital auto place-and-route into the right part, which is the central merging point for final chip assembly.  To the lower left and lower right are blocks representing DRC, LVS and LPE, with the Cadence Diva™-based tools on the left, and Dracula™-based tools on the right. 

            Although various other tools have been added, and some have been replaced, this flow still represents the basis of today’s design environment and PDKs.  This design flow and its representative parts are loosely discussed in the following paragraphs.  An exhaustive analysis is beyond the scope of this thesis, but salient considerations with respect to PDV and LPE will be put forth.


Figure 3.2b,  ITRS’03 Landscape of Design Technology, [5] Design chapter, pg. 3.
(Reprinted with permission of Sematech)


A more chronologically oriented flow, representing the overlap of analysis through various stages, is provided in the ITRS’03 Report (Figure 3.2b above).  This representation better depicts the sharing of information and analysis across tools and stages.  In particular, the S.I. analysis of power, noise and timing can be seen to bind the logic, circuit and physical design stages.


3.2.1  Physical Design Verification Flows


Similar to the implicit requirement of LPE tool validation for good simulation of parasitics, the Physical Verification objectives of DRC and LVS validation also have the implicit requirement that the underlying design environment, design kits and methodologies are sound and consistent across tools.  The typical LVS program checks quite a number of conditions, including the equivalency between the layout and schematic in their connectivity, device types, and device sizes, and various ‘electrical rules’, such as the potentials at which P-channel Nwells are held.  The DRC programs typically incorporate on the order of a hundred checks, including size, space, shape, percent coverage, latchup and antenna checks.  These rules are derived from the fabrication process, and from reliability data from failure analysis. 

For the EDA engineer, it is not sufficient to simply code the rules into an LVS or DRC rule-deck and check that they pass a known-good circuit.  They must be validated on every possible and conceivable design-layout condition, as their ultimate purpose is to detect errors in the physical implementation of the design.  This is a complexity monster in and of itself.  But, to make matters much worse, the EDA engineer must also contend with errors in the design kits, their tools, libraries and flows.  For example, the netlist generated from the schematic for LVS use is not guaranteed to be the same as the netlist generated for simulation.  Thus, even if a design is ‘LVS clean’, there is no guarantee that the layout represents what the schematics simulate. 

In Appendix B, a physical design verification flow, based on Dracula™, is mapped out and its steps described.  It can be seen that the validation of LVS and DRC depends critically on the integrity of the PDK as a whole, and the complexity of the design kit exacerbates this condition exponentially.  In particular, step 12 indicates that a ‘clean LVS’ must be obtained in order to attain reasonable extract-based simulations.  What this means is that the connectivity of the layout cells must ‘fit’ into the schematic hierarchy; similarly, the extracted intentional devices should be equivalent to the schematic on a first-order basis, with all affecting parasitics extracted on a second-order basis.

Similarly, the Cadence Assura™ based RCX (or LPE) flow diagram is presented below (in less detail). As would be expected, both LPE flows have the same critical interfaces: Between the verification tools and the design database, between the extract tools and the simulation-design database, and (virtually) between the actual silicon structure and its representation in the RCX tools, device models, and layout device structures (to name a few).


Figure 3.2.1a, Assura RCX Flow, [33]
(Reprinted with permission of Cadence Design Systems Inc.)


These aforementioned complexities in the kits and designs can roughly be partitioned into two primary vertical design flows that are digital and analog centric, as presented in the following sections.  The analog flow may also be extended to a super-set ‘RF’ flow, which adds various capabilities for simulating designs in the RF field.


3.2.2  Digital S.I. Flows


The objective of a digital flow is, roughly, to take a behavioral or RTL specification through synthesis to a gate-level netlist, through functional verification and auto place-and-route, and ultimately through post-layout physical verification, parasitic extraction, back-annotation into the Verilog RTL simulation through SDF (Standard Delay Format), and final timing sign-off.  The nature and form of the digital flow is dominated by the size of the circuit.  Small digital blocks (<50K gates) used to control analog circuits, or those employed in ADCs or DACs, may follow an analog flow.  Medium-sized (>50K gate) designs will require, at a minimum, a fast-transient simulator such as Synopsys’ Nanosim™, and a reasonably robust auto place-and-route tool.  And, of course, large ASICs, MPUs and SOCs demand a fully integrated, heavy-duty RTL-to-GDSII platform such as Cadence’s SoC Encounter™, Synopsys’ Galaxy™, or Magma’s BlastFusion™ and BlastChip™.


Timing Closure Paradigm Shift


In the digital realm, design complexities are being dominated by interconnect parasitics and their signal-integrity degradation effects, which introduce non-linearities into the numerical optimization solutions provided by synthesis, placement and routing tools.  Previously, cell-based and custom design methodologies could successfully predict total delays based on device delays alone.  Synthesis flows could rely on statistical wire-load models (WLMs), which used a pin count to estimate interconnect delays, and were sufficient when the average wire delay was relatively close to the actual.  But, as noted in [11], gate delays depend mostly on the capacitive loads they drive, which are now dominated by interconnect.  Interconnect delays increase due to the narrowing of wires and greater average relative wire length.  The increasing dominance of variable interconnect effects over static cell (gate) delays effectively diverges the synthesized timing estimate from the eventual physical result.  A paradigm shift occurs.  Numerical methods have therefore become less viable, and the designer is often forced into a trial-and-error, pseudo-convergent state of iterations.  That is, crosstalk noise from routing induces a failure that can only be fixed from the synthesis or routing stage, and any change in the routing might fix one problem but introduce any number of others.  Crosstalk noise and its components, functional and delay noise, are further discussed in Section 4.2.3.  Further S.I. concerns of digital designs are presented in Section 4.4.2.


Technology Scaling Effects


In [34], a good summary of technology scaling effects is provided. There are four major delay and coupling effects, itemized as:



1.      Device, or gate, delays decrease due to the thinning of gate oxide

2.      Interconnect resistances increase due to shrinking wire widths

3.      Interconnect heights increase, to reduce resistance, but leading to increased lateral (sidewall) and fringing components of capacitance

4.      Interconnect capacitance dominates total gate loading.


Table  Process Technology Scaling Effects


Also, it is noted that the total delay of a net or path is governed by a simple equation which includes the device delays, device loads, and signal slew rates. That is:


Total Delay = Device Delay + Interconnect Delay + Slew Rate.  Eqn. 3.1


In this equation, device delay has been the dominant factor for processes above one micron.  Previously, device loads could be treated as a ‘lumped capacitance’, and slew rates could be ignored.  As processes shrank below one micron, slew rates became significant due to the reduction of total delays, leading to the introduction of ‘lumped RC’ models.  Somewhere between 0.5 and 0.35um, interconnect delay due to device loading became equal to intrinsic device delay, and the methods of approximating it began to fail.  Also, the increased coupling capacitance between adjacent interconnect wires increases delay and noise, leading to faults.  This led to the advent of ‘distributed RC’ models to improve timing accuracy.  In the era of ultra-deep submicron (UDSM), given as processes below 0.25um, these trends continue, with crosstalk noise becoming critical, and voltage drop and ground bounce in the power rails affecting drive strengths and injecting noise.
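To make the lumped-versus-distributed distinction concrete, the Elmore delay of a uniform RC ladder can be sketched as follows (the wire values are hypothetical, chosen only for illustration):

```python
def elmore_delay(r_per_seg, c_per_seg, n_segments):
    """Elmore delay of a uniform RC ladder: each segment's capacitance
    is charged through all of the resistance upstream of it."""
    return sum(k * r_per_seg * c_per_seg for k in range(1, n_segments + 1))

# Hypothetical wire: total R = 100 ohm, total C = 200 fF.
R_total, C_total = 100.0, 200e-15
lumped = R_total * C_total                       # single lumped-RC estimate
n = 100
distributed = elmore_delay(R_total / n, C_total / n, n)
# distributed = R*C*(n+1)/(2n): roughly half the lumped estimate
```

The distributed model converges to about half the lumped RC product, which is one reason a lumped approximation systematically overestimates wire delay once interconnect resistance matters.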


Figure, ITRS’03 Delay for Metal 1 and Global Wiring Vs. Feature Size [5] (Reprinted with permission of Sematech)


            In the above figure from the ITRS’03 Report, the ‘relative’ delay impact through the 32nm node is predicted.  It can be seen that the gate delay decreases quickly, while the scaling of metal features results in an overall relative increase in global wiring delay.  The importance of S.I. analysis in optimally determining insertion points for buffers can be seen from the difference in relative delay with and without the repeaters.


Timing Closure Methodologies


There are a number of strategies proposed to solve the DSM timing closure problem. Several of these are presented in [11] and are reviewed here.


Custom Wire Load Model

The more popular solution to the ‘timing closure’ problem in commercial tools is to link the front-end behavioral and synthesis flows into the back-end physical flow (commonly called the “Custom Wire Load Model”).  One method attempts to order the ‘placement’ phase in the synthesis process so as to minimize routing length and crosstalk.  After routing, if constraints are not met, the extracted netlist may be back-annotated with the actual wire loads and re-synthesized.  But this often results in an entirely different placement and routing, with different wire loads, and thus different timing.
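A statistical wire-load model of the kind described above can be sketched as a simple fanout-keyed lookup table; the lengths, extrapolation slope, and capacitance-per-micron values below are assumed purely for illustration:

```python
# Hypothetical statistical wire-load table: estimated net length in um,
# keyed by fanout, with linear extrapolation beyond the table.
WLM_LENGTH_UM = {1: 20.0, 2: 45.0, 3: 75.0, 4: 110.0, 5: 150.0}
EXTRAP_SLOPE_UM = 45.0    # assumed um per extra fanout beyond the table
CAP_PER_UM = 0.2e-15      # assumed wire capacitance per um

def estimated_wire_cap(fanout):
    """Estimate a net's wire capacitance from its fanout alone, the way
    a pre-placement statistical WLM must."""
    if fanout in WLM_LENGTH_UM:
        length = WLM_LENGTH_UM[fanout]
    else:
        length = WLM_LENGTH_UM[5] + EXTRAP_SLOPE_UM * (fanout - 5)
    return length * CAP_PER_UM
```

The weakness discussed in the text is visible here: the estimate depends only on pin count, so any net whose routed length deviates from the statistical average is mispredicted.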


Block-assembly Flow

Another strategy is the “Block-assembly flow”, which partitions the design into manageable (~50k cells) blocks such that a statistical wireload model can successfully estimate intra-block delays.  When all blocks have met timing constraints, block-assembly and global routing are performed.


Constant Delay Synthesis Flow

In this method, the delay through a logic stage is expressed as a linear function of the gain, which is defined as the ratio of the capacitance driven by a gate to its input capacitance.  To revise the delay means to revise the gain.  If a fixed gain is given to every logical stage, then timing constraints can be checked and met.  The gains must be preserved during the placement and routing stages.  But, this method ignores many physical effects, such as crosstalk and input slopes.  The gains may be kept constant only by adapting net capacitances and gate sizes through buffer insertions.
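The gain-based delay model described above can be sketched as follows; this mirrors the standard logical-effort formulation (delay = parasitic term plus effort term times gain), and all numeric values are illustrative:

```python
def stage_delay(p, g, c_out, c_in):
    """Normalized delay of one logic stage: d = p + g * h, where the
    'gain' (electrical effort) h is the driven-to-input cap ratio."""
    return p + g * (c_out / c_in)

def size_for_gain(c_out, target_gain):
    """Constant-delay synthesis fixes the gain; the required gate input
    capacitance (i.e., its size) then follows from the load it drives."""
    return c_out / target_gain
```

For example, a gate with g = 1 and p = 1 (normalized units) driving 40 fF at a fixed gain of 4 must present 10 fF of input capacitance, giving a stage delay of 5 units; preserving that gain through placement and routing is what requires the buffer insertions mentioned above.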


Placement Aware Synthesis

A trend in timing closure methods is towards binding the placement with the synthesis.  Thus, the synthesis tool has knowledge of the layout area and assigns placements as gates are synthesized.  The placement provides a good estimate of wire length and congestion, but it may not predict a priori the interactions of nets and signal integrity.  If the speed or sensitivity of nets is above some threshold, this method fails also.


Refinement-Based Flow

Given that routing will eventually determine S.I. effects, and routing depends on placement, which depends on floorplanning, all of these stages must be simultaneously optimized in synthesis.  Furthermore, the optimization of timing/area/power must coincide with the management of congestion, clock-tree synthesis, scan-chain reordering and S.I. effects.  The refinement flow makes a rough guess at all stages (or layers) of the flow, and then gradually refines it towards convergence by propagating information up and down the flow layers.  Thus, at the beginning of the process, there will be only an approximate placement and global routing, with rough estimates of the clock tree and power/ground network.  The end of the process produces a detailed place & route with clock, power/ground and all S.I. issues met.  In a sense, this is a ‘simulated annealing’ method, in that a rough arrangement congeals to a finer one, and if a dead-end is met, the temperature is raised again, moving back up to a rougher state.  Thus, a branch-and-bound search is executed.
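The ‘simulated annealing’ character of such a refinement flow can be sketched with a generic annealing loop applied to a toy one-dimensional placement; the re-heating on dead-ends described above is omitted for brevity, and all parameters are illustrative:

```python
import math
import random

def anneal(cost, neighbor, state, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated-annealing loop: always accept improving moves,
    accept uphill moves with probability exp(-delta/T), and cool T so
    the rough arrangement gradually 'congeals'."""
    rng = random.Random(seed)
    cur, cur_cost = state, cost(state)
    best, best_cost = cur, cur_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cur_cost
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= cooling
    return best, best_cost

# Toy 1-D placement: order five cells to shorten four two-pin nets.
nets = [(0, 1), (1, 2), (2, 3), (3, 4)]

def wirelength(order):
    pos = {c: i for i, c in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

def swap_two(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    new = list(order)
    new[i], new[j] = new[j], new[i]
    return tuple(new)

best, best_cost = anneal(wirelength, swap_two, (4, 2, 0, 3, 1))
```

A production refinement flow optimizes far more than wirelength, but the accept/cool structure is the same.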


Many EDA tool vendors are pursuing highly integrated platform strategies, but many small design houses will not be able to enjoy their costly benefits, and the migration from current design kits to these new systems is slow.  The predominant flow for small and medium-sized circuits continues to include LPE on the final layout.  Unsurprisingly, this presents a huge problem in terms of simulation time and data storage for even smaller (LSI, or ~50K gate) designs.  The remedy typically pursued is that of reducing the RC network through various methods (to be reviewed in a later section).

The relevance of this review is that S.I. effects are most critical in digital/ASIC flows.  Although many ‘platform’ flows have built-in estimation capabilities for S.I. effects, it is still likely that an LPE point-tool will be employed somewhere in the flow.  Some flows will call a stand-alone LPE tool to obtain parasitics on the design in question.  In either case, the results must be validated and benchmarked.  Part of this project strives to provide a means for evaluating and validating LPE tools, so as to bound their accuracy and enable better cell library characterization, better interconnect back-annotation, and thus smoother progress to convergence.


3.2.3  Analog LPE Flows


In the analog realm, the push towards higher frequencies, lower operating voltages and finer signal resolutions has demanded similar improvements in simulation and signal quality, such as signal matching. Analog has a much more diverse bag of tricks than digital, which must be meticulously considered to enable the analog formulae to work.  Usually analog is not limited by the device count complexity of digital, but rather the plethora of constraints and the ability to optimize on those constraints in silicon.  Critical to analog design is the accuracy of models and simulation, and the accounting for parasitic effects on sensitive lines in the layout.  Knowledge of the accuracies of models and knowledge of the accuracy of parasitic extraction can help immensely in determining the degree of simulation coverage necessary to ensure performance.

Some of the concerns in analog include device intrinsic noise and mismatch.  Fundamentally, all signal processing is limited by the achievable Signal-to-Noise Ratio (SNR).  As noted in [35], all elements in a device exhibit random fluctuations in current or voltage due to thermal energies.  These fluctuations combine with the input signal noise and become indistinguishable from it.  A common noise source is 1/f (flicker) noise.  This can be better simulated post-layout if the LPE tool is correct in its extraction of device parameters such as MOSFET drain and source areas.  Device mismatch is an artifact of intra-die deposition gradients.  Mismatch can be statistically accounted for given the distance between devices, which can be provided by an LPE or similar tool.
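As a sketch of the drain-geometry extraction mentioned above, the drain area (AD) and perimeter (PD) of a simple rectangular drain diffusion follow directly from the device width and the diffusion extension; this is a deliberate simplification, since shared or folded diffusions, and differing perimeter conventions, alter the values a real extractor reports:

```python
def drain_geometry(w, ext):
    """AD and PD of a rectangular drain diffusion of width w extending
    'ext' away from the gate edge.  One common convention, used here,
    excludes the gate-side edge from the perimeter."""
    ad = w * ext            # drain area
    pd = w + 2.0 * ext      # drain perimeter (gate edge excluded)
    return ad, pd
```

These are exactly the AD/PD (and by symmetry AS/PS) parameters a SPICE MOSFET model uses to compute junction capacitances, which is why an LPE error here degrades post-layout noise and transient accuracy.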

In the analog flow, some form of Berkeley Spice typically executes front-end simulations, and parasitics are extracted from the layout for back-annotation to a transistor-level re-simulation.  The following figure (3.2.3a) simplifies the description of this process.


Figure 3.2.3a:  General Analog LPE Flow (By Author)


This flow represents much the same concept as depicted in Figure 3.2a above, except that it is generalized to show the flow of data through the simulation and physical verification stages.  An outline table helps define the elements of this flow:


·        Probe, Keithley

o       Input: Silicon Device Test Structures

o       Output: Measured Device Characteristics

o       Results: Device Models Libraries

·        Simulator Engines Xi  (Spectre™, Hspice™, Pspice™ etc.)

o       Input: Circuit Netlists

o       Output: Simulation Waveform

·        Pcell Library

o       Input: Techfile (Process Layers, Device Parameters)

o       Input: Device Construction Definitions

o       Contents:  Device symbols and layouts

·        Standard Cell Library  (Digital, Analog etc.)

o       Inputs: Pcells, Logic descriptions

o       Contents: Schematics, Layouts, Abstracts

·        Schematics Entry Tool

o       Input: Pcells, Standard Cells

o       Output: Netlists to simulators, LVS

·        Layout Entry Tool

o       Input: Pcell Device Layouts

o       Input: Schematic’s circuit description

·        Physical Verification Tools: DRC/LVS

o       Input: Layout Database, Schematic netlist

o       Input:  Verification Rule Decks

o       Output: Error Reports

·        Layout Parasitic Extraction Tool

o       Input: Layout Database

o       Input: Standard Interconnect Process Parameters

o       Output:  Spice netlist of layout


Table 3.2.3a:  General Analog LPE Flow Elements


A primary concern of the analog-LPE flow is seamless integration, such that the netlist extracted from the layout may be easily fit back into the schematic.  For example, a designer may wish to focus on the parasitics of the output stage of an OpAmp, while leaving the rest of the design to simulate in ‘ideal’ mode.  The Cadence Analog Design Environment provides a ‘Hierarchy Editor’ (HED) tool, which enables the designer to choose which ‘view’ of a cell to include in the simulation.  The view may be a normal schematic, a parasitic-enhanced extracted view (which has a built-in netlist), or a Verilog or Verilog-A view.  As the netlister traverses a design hierarchy, the HED tool tells it which view to netlist, and how.  The result should be a fully hierarchical netlist, top-down, with valid signal ports from each master cell to its children cells.  The implicit requirement here is that if an extracted layout is selected at some point, then the netlist embedded in that cell must be consistent with the original schematic, albeit with a large number of parasitics included.  There are several points to consider here:


1.      To get the proper connectivity from the layout, a clean LVS run is required.

2.      The ‘intentional’ devices, i.e. diodes, caps, resistors, etc., should have all of the parameters necessary to reconstruct a simulatable netlist extracted from the layout.

3.      If parasitic resistance is desired on interconnect, the traces must be fractured in such a manner as to provide a known starting port into the net, followed by terminals created between segments of the trace, terminating at each device or pin on the original known port.

4.      If parasitic capacitances are desired on interconnect, the terminals of the caps must fit into the network from (1) above, or the network from (3).

Table 3.2.3b: LVS requirements for LPE Simulations
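The hierarchy-editor view selection described above can be sketched as a recursive netlister that stops descending whenever a binding names a view carrying an embedded netlist; the data structures and view names here are illustrative, not the actual Cadence API:

```python
# Illustrative data model (not the actual Cadence HED/netlister API):
# each cell has a name and a dict of views; the 'schematic' view lists
# child cell dicts, while other views hold a pre-built netlist string.
def netlist(cell, bindings, out):
    """Traverse the hierarchy, emitting the view each binding selects."""
    view = bindings.get(cell["name"], "schematic")
    if view != "schematic":
        out.append(cell["views"][view])  # embedded netlist: stop descending
        return
    out.append("subckt " + cell["name"])
    for child in cell["views"]["schematic"]:
        netlist(child, bindings, out)
    out.append("ends " + cell["name"])

amp = {"name": "amp",
       "views": {"schematic": [],
                 "extracted": "* amp netlist with parasitics"}}
top = {"name": "top", "views": {"schematic": [amp]}}

lines = []
netlist(top, {"amp": "extracted"}, lines)
# -> ["subckt top", "* amp netlist with parasitics", "ends top"]
```

The consistency requirement in the text corresponds here to the embedded string: its ports must match what the parent ‘subckt’ expects, or the assembled netlist is invalid.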


Altogether, getting a valid, simulatable netlist from a mixture of ideal schematics and parasitic-extracted cells is not trivial.  Moreover, there are quite a few permutations on the mode of creation of the parasitic extraction.  The options may include or exclude:


1.      Resistors, at a given threshold length, number of squares, or maximum fracture length

2.      Capacitance: net-to-net coupled, or decoupled to ground; self-capacitance on a trace which has been fractured by the resistance extraction.

3.      Inductance, self-inductance.

4.      Parasitic diodes (i.e. MOSFET Nwell/Substrate or P-Plus resistor body to Nwell)

5.      MOSFET AD/AS and PD/PS (areas and perimeters of drain and source)

Table 3.2.3c: Device Parasitic Extraction Parameters
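As a sketch of option (1) above, resistance extraction with a squares threshold and a maximum fracture length might look like the following; the threshold and fracture values stand in for a real tool’s settings and are assumed:

```python
import math

def extract_resistors(length_um, width_um, sheet_rho,
                      min_squares=2.0, max_frac_um=50.0):
    """Return per-segment resistances for a trace, or [] if the trace
    falls below the extraction threshold.  min_squares and max_frac_um
    play the role of the tool's threshold settings (assumed values)."""
    squares = length_um / width_um
    if squares < min_squares:
        return []                      # below threshold: ideal wire
    n_seg = math.ceil(length_um / max_frac_um)
    seg_len = length_um / n_seg
    return [sheet_rho * seg_len / width_um] * n_seg
```

Each returned segment corresponds to a fracture of the trace, with new terminals created between segments as item (3) of Table 3.2.3b requires; the segment resistances sum to the full sheet-resistance value of the trace.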


As can be seen, the multitude of options available can lead to a rather complex implementation.  For these options to assist in debugging a design, they must, of course, be valid.  But validation of the entire system and its various permutations of use can be excruciatingly arduous and messy.  For example, the intentional devices must be validated to be, indeed, the devices drawn.  Given that each device has an effectively unlimited set of possible constructions, this can only be approximated by creating a large representative set of those devices, and then meticulously checking each one.  Further discussion of the regression testing of LPE and device extraction is relegated to Chapter 5.


Analog RF Flows


Analog RF flows are of particular interest here as they, naturally, generate concerns about distortion, and inductance, which predominate at higher frequencies.  Given the trends in operating frequency and the materials needed to enable them (see figure below), the demands on the EDA tools can only be expected to worsen.




Figure,  Mixed-Signal/Ultra High-Speed Digital Trends.  From pg. 30, RF and AMS Technologies for Wireless Communications, ITRS’03 [5]
(Reprinted with permission of Sematech)


In general, the RF flow is just an extension of a general analog flow, with the added necessity of an RF-capable simulator and an inductance-capable extractor.  In consideration of the LPE concerns with respect to RF flows, a highly relevant parasitics project is presented in [36].  Its stated goal is to: “do in-depth investigations on parasitic effects, to provide methods and algorithms to analyze them, to estimate their impact on circuit performance and find solutions to model the important effects, that they can be considered in the design process.”  Emphasis is placed on the need for a streamlined ‘system architecture through RF IC design implementation’ design flow.  This flow must be able to provide parasitic analysis that works across both the baseband and RF partitions.  Built into the design flow are interfaces to SCA (Cadence’s Substrate Coupling Analysis), and the output of a layout-extracted netlist, which is then sent to PSS (Periodic Steady State), Pdist (Periodic Distortion) and envelope-following analysis.  The PSS employs the LPE data in small-signal analysis and Pnoise (Periodic Noise) analysis.  Hartung further notes that as circuits become more complex, it becomes extremely important to abstract out the behavior and employ a top-down design process.  It is much easier to fit blocks together, tweaking a few parameters on Verilog-A modules, than to build bottom-up from transistors, where a tweak of a parameter might mean days of re-design.


3.2.4  Mixed Signal and SOC S.I. Flows


            Mixed-signal flows have the added complexity of requiring a merging between analog and digital in both the simulation and physical design stages.  In front-end simulation, mixed-signal simulators usually partition the design into analog, mixed and pure digital sections.  The analog and digital sections are simulated with their native engines, with signals that cross between them passed through interface elements.  When going from analog to digital, these elements detect when a signal rises above the upper noise-margin threshold in order to pass a “1”, or falls below the lower noise margin to pass a “0”, to the digital section.  In the digital-to-analog (D2A) interface, a “1” is simply translated into the upper supply voltage, and a “0” to ground.  Such methods provide an acceptable means of simulating digital and analog blocks in the same run, thereby greatly enhancing the designer’s ability to understand the behavior of the system.
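The interface elements described above can be sketched as simple threshold functions; the supply voltage and noise-margin fractions below are assumed values:

```python
def a2d(voltage, vdd=5.0, vih_frac=0.7, vil_frac=0.3):
    """Analog -> digital interface element: '1' above the upper
    noise-margin threshold, '0' below the lower, else unknown 'X'.
    (Supply and threshold fractions are assumed values.)"""
    if voltage >= vih_frac * vdd:
        return "1"
    if voltage <= vil_frac * vdd:
        return "0"
    return "X"

def d2a(bit, vdd=5.0):
    """Digital -> analog interface element: '1' -> VDD, '0' -> ground."""
    return vdd if bit == "1" else 0.0
```

The ‘X’ return for mid-rail voltages reflects the noise-margin region, where the digital engine cannot safely resolve the analog signal to either logic level.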

With respect to parasitic extraction, the digital section must, as in the normal digital flow, be extracted to DSPF (Detailed Standard Parasitic Format), which is translated to SDF (Standard Delay Format), which then enables the digital simulator (Verilog/VHDL) to maintain its speed advantage by simply calculating pin-to-pin delays.  The analog section may be extracted with any precision desired using the normal LPE tool, with back-annotation into the schematic.

In [37], an SOC S.I. management methodology and flow is presented, which suggests that S.I. noise should be handled in three phases: an early prevention phase, a post-route functional repair phase, and a post-route noise-aware timing analysis and repair phase.  The SOC design flow then suggests eight stages:


1.      Creation of Synthesized and Placed Design

2.      Optimized Timing and Slew (pre-route)

3.      Early Noise Prevention

4.      Detailed Routing and Parasitic Extraction

5.      Static Timing Analysis and Timing Fixes

6.      Post-Route Functional Noise Analysis and Repair

a.       Loop to 4 for iterative improvement

7.      Detailed Routing and Extraction

8.      Post-route Noise Aware STA and Repair

a.       Loop back to 7 for iterative improvement

Table 3.2.4a: Suggested SOC Design Flow


Their noise prevention methodology adopts the four following techniques:


·        Limitation of the distance wires travel in parallel

·        Shielding for structured routing such as bus nets

·        Routing with extra spacing – which has a similar effect as shielding

·        Pre-route slew optimization by driver sizing and buffer insertion.

Table 3.2.4b: SOC Noise Prevention Methods
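The first technique, limiting the distance wires travel in parallel, can be sketched as a simple geometric check on pairs of parallel segments; the limit value is an assumed design-rule parameter:

```python
def parallel_run_length(seg_a, seg_b):
    """Overlap of two parallel wire segments on adjacent tracks, each
    given as an (x_start, x_end) interval along the routing direction."""
    return max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))

def violates_parallel_limit(seg_a, seg_b, max_parallel_um=100.0):
    """Flag aggressor/victim pairs whose side-by-side run exceeds the
    limit (the limit is an assumed design-rule parameter)."""
    return parallel_run_length(seg_a, seg_b) > max_parallel_um
```

Since coupling capacitance grows with the shared run length, a router enforcing such a limit bounds the worst-case crosstalk any single aggressor can inject; shielding and extra spacing attack the same coupling term by other means.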


These techniques are fairly universal among S.I. optimization tools, although it is pointed out that statistical early noise analysis may be used to identify critical nets based on floorplan/placement data and estimated routing, per [38].


            In the layout, the digital and analog sections are usually isolated from each other by substrate guard-rings, in order to minimize “ground-bounce” signals from the digital section biasing the analog ground.  Ground-bounce usually occurs when multiple N-channel MOSFETs turn on, draining their current into the substrate and thus raising its potential for a time, i.e., “bouncing” the ground.  This is usually evaluated through a substrate-extraction program, which models the substrate as a resistor network mesh, with ports into it from the actual devices. 


3.3  Process Design Kits

            This section considers the composition of a PDK and its development process.  The information provided here will serve as a basis for the description of design flow error propagation in Section 3.5, and throughout the rest of this document. In particular, emphasis is given to the regression testing of the components of the kit, and their relation to signal and design integrity.

A Process Design Kit is a set of libraries, code and specified flows that fit a given set of EDA tools and allows for circuit design with respect to a specific Foundry Process.  Depending on the Process in question, these may be either Analog-centric, Digital, or Mixed-Mode.  For example, TSMC (Taiwan Semiconductor Manufacturing Corp.) provides a Mixed-Mode reference flow for its 0.35um Mixed-Mode process, and provides some of the elements of a PDK through the Cadence PDK development service.  Although that kit and its flows are proprietary, the flow presented in Figure 3.2a preceded it and is this author’s own artwork. 

            As presented above, the reference design flow can be partitioned into five blocks. Coupled with these stages are a set of tools and the libraries that work within those tools.



1.      Circuit simulation (analog/digital)

        a.      Schematic entry tool

                i.      Device Symbol libraries (Pcells & CDFs)

                ii.     Standard Cell Libraries (schematics)

        b.      Circuit simulation tool (Spectre™, Verilog, Mixed-Mode)

                i.      Device Models

2.      Physical layout (Block level, Top level Chip Assembly)

        a.      Polygon shape layout editor

                i.      Process Technology File (Layer Definitions, Rules)

                ii.     Device construction Pcells (same as above Pcells)

                iii.    Standard Cell Libraries (layouts)

3.      Physical Design Verification

        a.      LVS, DRC Tools

                i.      Rulesets (decks) describing devices and rules

4.      Parasitic extraction

        a.      LPE Tools

                i.      Process Standard Interconnect Parameters

5.      Back-annotated re-simulation

        a.      Schematic/Layout binding tool or scripts


Table 3.3a:  Minimal Reference Flow


The elements of a PDK can be graphically summarized as follows:



Figure 3.3a:  PDK Elements, From the Cadence Website
(Reprinted with permission of Cadence Design Systems Inc.)


            The elements presented in Cadence’s PDK flow above are equivalent to those described in the table 3.3a preceding it.  Not fully represented here is the RCX stage and its hook back into the simulation stage, as represented in Figure 3.2.3a, General Analog LPE Flow.

The process of developing a PDK, its components and their interfaces, is outlined in Appendix C.  The process of systematically qualifying a PDK is arduous.  First, it is unlikely that the kit is correct, or that the specification is complete, while it is being developed.  Also, when errors are found, the whole QA process should be repeated.  A single pass through these checks can take weeks to months if done manually, and if the steps are not specified in a systematic plan, it is likely that some will be missed.

Checks to systematically qualify a kit must include cross-stage, cross-tool test-chains which exercise the transfer of data, and consider the loss of information, or ‘intent’, in those transfers.  A brief overview of error injection points is given below, in Section 3.5.  More specifically, the development of a PDK must consider, and test:


Use of all library components throughout the flow.

Pcells Devices, good coverage of permutations (layouts, netlists)

Simulation modes: AC, DC, Transient, MC, Corners, Sensitivity

PDV through separate tools.  Full LVS/DRC regression testing.

Digital/ESD cell libraries – timing, logic, LVS, DRC etc.

LPE based-simulations (Ocean™ script) comparisons of ideal/extracted

All data transfers, such as Stream-out/in, netlist generators, layout generators

Exercise of ‘real’ designs through various permutations of design flows

Table 3.3b: PDK Development Considerations


Although this is an abbreviated list, it can be concluded that the PDK development process must be well planned, and that the regression testing must be able to exercise the entire kit, all tools, and all reasonable flows.


3.4 Design Environments

            Though seemingly self-evident, the compute environment upon which all of the tools, libraries and code exist plays a critical, fundamental role in their successful use.  For example, a primary concern of most designers is simulation time.  It is generally taken for granted that the compute network, storage space, and the organization and revision control of the PDK, tool, and project directories are ‘sound’.  But in reality, these facets seem to cause much more design delay than raw compute processing resources.  This is easy to defend by simply looking at the load usage of the compute farm, which rarely exceeds 50%, although when a designer does get ‘rolling’, they can easily bog the entire system down with one Monte-Carlo or corners simulation.

Generally, any design environment can be decomposed into the compute hardware, network backbone, operating systems, and the architecture and standardization of the PDK, project and tool directories.


3.4.1 Compute Hardware


            Although, as mentioned in the historical background above, the design environment is constantly evolving, there are general trends seen in most of the major semiconductor design houses, as gleaned from talks and papers of the various ‘Users Group’ conferences, such as ICUG and SNUG.

The Network of Workstations (NOW) is the ubiquitous model these days, although that does not imply the workstations are linked into a distributed resource-sharing system.  In general, nearly any designer can log into any other machine and gobble up its resources.  To remedy the excess requirements of some designers, and to better utilize capital, compute farms are almost mandatory.  A dominant player in this field is the Load Sharing Facility (LSF) [2].  A set of workstations may be allocated such that any task may be submitted to the farm, and LSF will take care of the balancing and scheduling of jobs on those machines.  This is particularly useful in regression testing systems, as the full set of jobs could take days to weeks on a single machine.  Likewise, the regression system may be set up to run on a schedule, late in the evening or on weekends, with a lowered priority.

The LSF paradigm of distributed computing is also becoming central to many EDA tools.  The Assura™ physical verification tools all have ‘hooks’ into LSF to automatically submit jobs to a farm.  Likewise, the simulation engine Spectre™ has an option to submit jobs in ‘distributed’ mode.  In fact, any job can be shipped to the LSF farm through a simple ‘bsub’ command.  Furthermore, some tools are building in native task partitioners, such as Cadence’s NanoRoute.  In [39] it is announced that Cadence’s NanoRoute tool has added super-threaded route acceleration with a 10x productivity gain on routing, timing and signal integrity analysis.  The super-threading is said to combine multi-threaded routing with distributed parallel routing to boost routing performance by 10x on 600 thousand- to 400 million-gate designs, over either an LSF farm or a standard network.
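As an illustration of how a regression system might wrap such submissions, the following Python sketch builds a ‘bsub’ command line for one QA job.  The queue name, log file, priority value and the Assura invocation are illustrative assumptions, not the actual site configuration.

```python
import shlex

def bsub_command(job_name, queue, logfile, cmd, priority=None):
    """Build an LSF 'bsub' invocation for one regression job.
    Queue names and priority values depend on the local LSF setup;
    the ones used below are illustrative only."""
    argv = ["bsub", "-J", job_name, "-q", queue, "-o", logfile]
    if priority is not None:
        argv += ["-sp", str(priority)]   # lowered priority for off-hours runs
    argv += shlex.split(cmd)             # the actual tool command to run
    return argv

# Example: submit one (hypothetical) DRC run of the regression suite.
cmd = bsub_command("pdk_qa_drc", "night", "drc.log",
                   "assura -drc testchip.rul", priority=10)
```

A scheduler (e.g. cron) could emit hundreds of such commands late in the evening, exactly as described above.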

            As important to efficiency as the compute farms are centralized storage systems.  As opposed to private, workstation-resident home directories, these have the advantages of cost-effectiveness, low support, scalability, and simplified management.  The cost-effectiveness is due to the ability to buy large amounts of storage at bulk rates.  Given that the system administrators need not tear down each machine that has a disk failure, the support burden is minimized.  With modern data-center management systems, piling on more disk storage is a routine, hot-swap effort.  As important as disk space is the backup system.  Design directories must be constantly mirrored, duplicated at regular intervals, and must be easily recovered.  As most of the data created by simulation and verification tools is temporary and not significant, it is useful to have a partition available for general dumping.  In particular, such low-grade partitions are perfect for regression testing data.


3.4.2 Network Backbone


            Given that most designs exist on a different machine than the designer sits on, that much of the compute work is again done on some remote farm, and in fact that the designer might be sitting in another city on the other side of the world, the network bandwidth and architecture can often be a primary limiting factor in designer productivity.  In the referenced environment, all workstations are smart terminals, and thus all designers have a constant read/write flow to their designs on the storage farm.  If a network outage occurs, or any particular machine drops out (causing other machines to attempt multiple mounts), the entire design community will scream bloody murder and then run off to coffee.

            In [40], the network is defined as central to, limiting, and defining of the capabilities and evolution of the EDA environment.  It is noted that the network is central to the evaluation, distribution, integration and management of EDA systems.  The network provides the designer access to tools, libraries, design data, and a variety of design and manufacturing services which are critical to productivity.  Furthermore, with the trend towards globalization, the network plays a central role in the integration of design teams through design management software, teleconferencing and plain old email.  The network capabilities have driven EDA methodologies, and indirectly impact the choice of the most effective tools, algorithms and data structures, in that a benchmarking of a tool or algorithm may be limited by the network.  In summary, it is noted that the ‘service’ layer will improve, allowing secure, distributed, collaborative design, and an overall increase in the sharing of knowledge and design development.


3.4.3 Operating Systems


            The operating system (OS) employed is not particularly interesting to this exposition, as it is entirely dominated by Solaris/Unix.  The OS is practically transparent to the designer, and is very mature.  Since all tools necessary for the referenced design environment operate under the same OS, the regression system has a fairly easy job of executing and monitoring tasks.


3.4.4 Standard Directory Systems: PDKs, Projects, Tools


            The organization of the PDKs, the design projects and the design tools must be standardized and globally equivalent in order to develop streamlined flows, consistent usage models and generalized scripts.  The PDKs should be self-contained such that they are easily transported to remote sites or partnering customers.  This implies that there must be well-defined, standardized directory systems across sites, which facilitates the creation of universal tools and glue scripts.  Similarly, the design project directories must be automatically created and populated with configuration scripts and environment variables that fit the standardized PDKs.  All of the following should exist, across processes and across all sites:


Standardized PDK architectures (models – symbols – pcells etc)

Standardized PDK directory systems

Standardized Project directory systems

Standardized Tool and script systems

Standardized or transparent networks, LSF farms, and interfaces

Table 3.4.4a: PDK Standards


            Then it is rather straightforward to create a regression system that will validate the usage and consistency of a PDK in any one of the environments.  If any one of these does not exist, turmoil will ensue.  For example, it is noted in the historical background that a standardized PDK architecture did not exist in the early days – which made it practically impossible to develop a consistent LVS system.  Again, it should be noted that the PDKs, environment and methods are constantly evolving, so there still exists quite a bit of turmoil!  And, turmoil begets error.
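One payoff of such standardization is that project creation itself can be automated.  The following Python sketch creates a project tree with fixed sub-directories and a configuration file pointing at its governing PDK; the directory names are illustrative assumptions, not the actual site standard.

```python
from pathlib import Path

# Sub-directory names are illustrative assumptions, not the site standard.
STD_PROJECT_DIRS = ["cds", "sim", "lvs", "drc", "docs"]

def make_project(root, name, pdk):
    """Create a project tree with standardized sub-directories and a small
    config file naming the governing PDK, so that global glue scripts can
    rely on the same layout at every site."""
    proj = Path(root) / name
    for d in STD_PROJECT_DIRS:
        (proj / d).mkdir(parents=True, exist_ok=True)
    (proj / "project.cfg").write_text(f"PDK={pdk}\n")
    return proj
```

A regression system can then assume, for any project at any site, where simulations, LVS runs and DRC runs live.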


3.5  Design Flow Error Propagations

Although there are many routes to circuit failure, the focus of this thesis centers on the disconnect between the ‘schematics based circuit simulation’ and the actual behavior of the physical silicon.  There are many error injection points between these two points in the design flow, including:


1.        Limits of device characterization (parameters coupled with equation approximations, or curve-fitting),

2.        Simulator’s precision in terms of granularity of data and time-steps,

3.        Layout devices mismatch with characterized devices, the unpredicted interconnect effects,

4.        Process, mask generation and wafer-level variations,

5.        Environment variations and lifetime degradation effects.

Table 3.5a: Design Flow Error Injection Stages


The LPE tools provide a means to ‘reconcile’ the schematic with the eventual silicon.  A primary concern in layout extraction is the realistic optimal degree of accuracy to pursue.  Every stage of the design process introduces a new compounding of variation in process variables, characterization accuracy, simulation tool accuracy, and such factors as layout interconnect effects.  Given the trade-off between extraction time and accuracy, and the subsequent time to simulate, it follows that one should determine the minimal degree of extraction accuracy (or granularity) necessary to guarantee (i.e., within 3-sigma standard deviations) the performance of the circuit under test.  As mentioned above, this knowledge may also be used in conjunction with data on the variances and tolerances of device models, the accuracy of the Spice circuit simulator, and other factors such as process drift, voltage shift and temperature effects to build tighter envelopes of operation for the device through Monte-Carlo and Corners simulations.


            Given the above presentation (Sections 3.1-3.4) on the nature of PDK’s, the design environment, tools and various design flows, it is now easier to discuss the origins and magnitudes of error injected by the components outlined above.  Whenever possible, consideration is given to the means of eliminating bugs, through regression testing, or bounding errors, through analysis.  The objective, in principle, is to quantify the peak-errors of various stages and devices, and to identify opportunities for accuracy and efficiency improvement in both the design kit development process and in the design process itself.  The question of whether a peak-error is further compounded in another stage, or simply absorbed, will likewise be considered.


Much of the basis of this work depends on the base-case rigorous characterization of devices and process skew:  without a precise determination of device characteristics such as Vt, it is rather futile to consider the loss in accuracy due to, for example, parasitic nwell-substrate diodes.  It is arguable that the mechanics of exercising a design through the maze of tools and flows is just as likely to incite fault as any model malfeasance.  Given these two cases, it is useful to categorize errors as:


·        Systemic: Device model errors, inherent simulator errors, process skew.

·        Episodic:  Implementation error, usage error, designed errors.


The following sections provide a rough sketch of possible error injection points. These can be decomposed into device modeling, simulation, physical design, process and mask-generation, and eventually life-time environment level errors.  These considerations are further itemized in Appendix C.


3.5.1  Device-Level Modeling Error Injection


            Device-level errors are of the systemic type. Given that they are built into the models, their contribution of error should be persistent and consistent throughout the design process.  Naturally, a device error will have a compounding effect upon the error of a signal propagating through a circuit.  Device level error contributors include those introduced in device characterization, test-die design and layout and the probe equipment.  The errors are accounted for in design through sampling of the design-space: either through worst-case corners analysis or semi-random Monte-Carlo analysis.

            Device models are simply a set of equations and their inputs which, when evaluated by a simulation engine, provide a good estimate of the behavior of an electrical device.  In general, circuit simulators compute the response of a circuit by formulating a set of equations that represent the circuit and solving them.  The individual devices are represented by their respective behavioral equations, and their interconnections are represented by a matrix formulation derived from Kirchhoff’s voltage and current laws.   A good discussion of the fundamentals of circuit simulators is found in [1].
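The matrix formulation can be illustrated with a toy example.  The following Python sketch applies nodal analysis (KCL) to a two-node resistor network driven by a current source, solving G·v = I by Cramer’s rule; a real simulator would of course apply sparse LU factorization to a far larger, nonlinear system.

```python
# Nodal analysis of: 1 mA source into node 1; R3 from node 1 to ground;
# R1 from node 1 to node 2; R2 from node 2 to ground.
R1, R2, R3 = 1e3, 2e3, 2e3
G = [[1/R1 + 1/R3, -1/R1],          # conductance (KCL) matrix
     [-1/R1,        1/R1 + 1/R2]]
I = [1e-3, 0.0]                     # injected currents (right-hand side)

# Solve the 2x2 system G.v = I by Cramer's rule (toy-sized only).
det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
v1 = (I[0]*G[1][1] - G[0][1]*I[1]) / det   # node 1 voltage: 1.2 V
v2 = (G[0][0]*I[1] - I[0]*G[1][0]) / det   # node 2 voltage: 0.8 V
```

For nonlinear devices, the same matrix solve is simply repeated inside a Newton loop, as described in Section 3.5.2.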

            The parameters of a model are coefficients of an equation which help fit the model to the measured data.  Given that there may be over 200 parameters in a particular model (i.e., 204 for BSIM4 PMOS [41]), many of which may have been manually measured and entered, there is plenty of opportunity for error.

Device Characterization Errors:


            The choice of device parameters to isolate and characterize is usually limited.  Most device modeling shops, it seems, do not characterize for all parameters recognized by the target simulation equations, instead opting to default to educated guesses.  It is not unheard of for the models to be completely off – and go unnoticed due to lack of sufficient regression testing.  Similarly, it is often not feasible to measure over the entire range of a characteristic, but rather to sample points and then curve-fit to derive a close approximation.  From a Device Modeling colleague:


            “Sometimes the region where model parameters are extracted may not be the regions where a device may actually be used. For example, the threshold and beta parameters of a MOSFET are extracted using measurements in the linear region, but the designers typically may use the MOSFET in the saturation region.” – Michael O. Peralta, [41a]


            As noted by Dr. Peralta, communications with designers have revealed that over 80% of design iterations in the wafer fab are a direct result of incomplete or incorrect device models.  He also notes that each design iteration costs on the order of $250,000 or more, given the prevailing technologies.  Both the condition of the modeling techniques and the resultant estimates on loss of revenue may be mitigated by implementation of an automated quality assurance system such as presented herein.

Device Characterization Layout Test Fixtures:


            As with regular designs, the test-die on which sample devices are drawn may introduce error.  It is not altogether impossible that the device being measured is not exactly the device that is intended.  There are likely to be process shift or deposition variations introduced into each device.  Thus, it is normal to take a sampling of a large number of devices, and then determine the mean.  Or, more practically, a desired mean is specified, and the process is tuned to try to keep the parameter centered on that mean, with an acceptable distribution about it.

Probe (Keithley):


            The probe instruments will introduce errors.  Although errors may be cancelled out to some degree, there will always be un-cancelled factors interfering during the process of analyzing a full suite of test fixtures.  Ambient EMI, physical vibrations, and thermal shifts all contribute.  To cancel all effects requires a very clean reference.

Characterization Considerations of LPE:


The considerations for LPE are: 1) to keep the accuracy of LPE in line with the accuracy of the devices; 2) to extract from the layout those parameters which are expected in the device models – and more if possible; 3) to build LPE corners (min, max and nominal scaling factors for parasitic resistors and capacitors) that are in line with the variation of the same devices in worst-case modeling corners.


            The LPE extraction of devices from the layout should be no less complete than that which is input to the device models.  The device simulation parameters typically include, but are not limited to, the following:


MOSFET: W, L, AD, AS, PD, PS, Nwell/Psub Parasitic Diode

Capacitors: Area, Width, Length (or perimeter)

Resistors: W, L, body resistance, terminal resistance, body parasitics (diodes)

Diodes: Area, perimeter  (capacitance is typically built into the model)

BJTs: Emitter area, base width, collector configuration

Table: LPE Device Simulation Parameters


The coding of LPE tools is further addressed in Section 5.3.2 below.  In Section 6.6, Validation of Extraction Circuits, the means of validating device parameter extractions in LPE is presented.  This is basically done by comparison of an ideal schematic simulation against the extracted layout of the same circuit.  The quantification of peak-error in devices corresponds to defining the limits for Monte-Carlo simulation by choice of 3σ standard deviations.
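The comparison idea behind such validation can be sketched in a few lines.  The actual Simreg comparison is an Ocean™ script (Section 6.5.3); the Python stand-in below, with hypothetical waveform data, merely computes the peak relative deviation between two sampled waveforms.

```python
def max_rel_error(ideal, extracted, floor=1e-9):
    """Peak relative deviation between an ideal-schematic waveform and
    the re-simulated extracted-layout waveform, sampled at the same
    time points.  'floor' guards the division near zero crossings."""
    return max(abs(a - b) / max(abs(a), floor)
               for a, b in zip(ideal, extracted))

# Hypothetical sample points from the two simulation runs:
ideal     = [1.00, 0.80, 0.50, 0.20]
extracted = [0.99, 0.78, 0.49, 0.21]
err = max_rel_error(ideal, extracted)   # worst-case point deviates by ~5%
```

A regression system would flag the pair whenever this peak error exceeds the budget chosen for the circuit class under test.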


3.5.2  Design Entry and Simulation Level Error Injection


            The introduction of error in tools may be either systemic or episodic: systemic in the case of built-in approximation errors native to all simulators, or episodic in terms of incorrect programming of, or inputs to, a tool (such as device equations or process parameters).  Tracking or identifying errors introduced in tools is particularly elusive, given that it is very hard to come up with a reference – something that is absolutely known to be correct.  Typically, ‘toy’ designs are created: either templates that exercise all devices through the various tools, or small circuits which elicit specific behavior and can ‘stress’ certain facets of the tools in isolation.  For example, if a switched-capacitor schematic has been known to cause simulator convergence problems (due to floating initial nodes), it may be added to a suite of test designs.

            This section will briefly highlight a few of the more common ‘tool’ error injection points, including schematic entry, simulators, verification tools, design data translators, etc.

Schematic Entry:


The schematic entry tool graphically represents the connectivity of devices and their properties.  The symbols are typically customized to represent various devices specific to a process, and may have built-in ‘call-backs’ which calculate various device properties based upon a few primary input properties.  For example, a resistor symbol may optionally take as input any two of width, length or resistance, calculating the third value.  A typical mode of error is in having the wrong device parameters built into such a Parameterized Cell, or Pcell.  For example, the sheet resistance may be incorrect on a specific type of resistor, such as a poly resistor.  Of course, the more complicated the device construction, the more likely errors will be introduced.  Such cases may include MOSFETs which are coded with the ability to have multiple fingers, or which may be interdigitated with other transistors.
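The width/length/resistance call-back described above might be sketched as follows.  Real Pcell call-backs are written in SKILL inside the design framework; this Python stand-in simply shows the arithmetic, with an illustrative sheet resistance.

```python
def resistor_callback(rsheet, width=None, length=None, resistance=None):
    """Given any two of width, length, resistance, derive the third from
    R = rsheet * L / W.  rsheet is the process sheet resistance in
    ohms/square; the value used below is illustrative only."""
    if resistance is None:
        resistance = rsheet * length / width
    elif length is None:
        length = resistance * width / rsheet
    elif width is None:
        width = rsheet * length / resistance
    return width, length, resistance

# A hypothetical 50 ohm/sq poly resistor, 2 um wide, targeting 1 kohm:
w, l, r = resistor_callback(50.0, width=2.0, resistance=1000.0)
# the call-back derives length = 1000 * 2 / 50 = 40 um
```

A wrong rsheet baked into such a call-back is exactly the Pcell error mode described above: every netlist and layout it drives inherits the mistake.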

The schematic creates a netlist – which may not always represent what was input to the symbols.  If the error is gross, it will easily be detected in the simulation stage; otherwise, the only chance for discovery is by manual inspection of the netlist, or during the LVS stage – if the layout has been manually created.  If the layout is automatically created from the netlist (or driven by an interface to the schematic, such as Cadence’s VXL), then the LVS will be ‘clean’ (meaning no hookup, device type or parameter errors are reported).

            The schematic symbols must encode the simulation parameters and netlist them in a format, or syntax, required by a specific simulator.  Most likely, the schematic symbols will be constructed so as to enable creation of various types of netlist, i.e.: Spectre™, Hspice™, Pspice™, LVS (a different netlist for each LVS program), VHDL, Verilog, VerilogA, etc.  The more netlist targets, the more likely they will somehow trample each other – or simply be wrong of their own accord.

Basic Circuit Simulators:


The simulator, by definition, introduces error.  There are a large number of considerations and settings available to optimize most simulators’ performance under various simulation types and conditions.  The choice of device equations is of import, whether they be BSIM, MEXTRAM, Gummel-Poon, VBIC, etc.  The type of circuit simulated, coupled with the equations in use, the numerical precision of the microprocessor, and the nature of the signal input all conspire to induce numerous incarnations of error.  The following reviews the basic nature of the Spice algorithms as presented in [1].

            The Spice algorithm employs what is termed the ‘direct method’, which formulates the nonlinear ordinary differential equations and then converts them to a system of nonlinear difference equations by a multi-step integration method such as the trapezoid rule.  The nonlinear difference equations are solved using the Newton-Raphson algorithm, which generates a sequence of linear equations that may subsequently be solved using sparse Gaussian elimination.

            In transient analysis, an approximation to the nonlinear difference equations may be made by use of an Euler formulation:


dv/dt |t=ti  ≈  ( v(ti) – v(ti-1) ) / ( ti – ti-1 )

            It is noted that the approximation is accurate only if the time step (ti – ti-1) is small relative to the time constants in the signals.  If time steps are forced small, then efficiency is lost when the signals are quiescent or linear.  It is also pointed out that transient analyses carry history: any error introduced at one time point can further degrade all future time points.
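The dependence of accuracy on time-step size is easy to demonstrate with a minimal sketch.  The following Python fragment integrates the RC-decay equation dv/dt = –v/RC with a backward-Euler update at two step sizes and compares both against the analytic exponential; all values are illustrative.

```python
import math

def euler_rc(v0, rc, dt, nsteps):
    """Backward-Euler integration of the RC-decay equation dv/dt = -v/RC."""
    v = v0
    for _ in range(nsteps):
        v = v / (1.0 + dt / rc)   # implicit update for one time step
    return v

exact  = math.exp(-1.0)                  # v(t=RC)/v0, analytically
coarse = euler_rc(1.0, 1.0, 0.5, 2)      # dt comparable to the time constant
fine   = euler_rc(1.0, 1.0, 0.01, 100)   # dt << RC
# the fine-step result tracks the exponential far more closely
```

The coarse run overestimates the decayed voltage noticeably, while the fine run is within a fraction of a percent, mirroring the time-step trade-off described above.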

            In DC analysis, as with transient analysis, a system of nonlinear algebraic equations is solved via a sequence of linear systems of equations using Newton’s method.  Newton’s method is an iterative process that continues until a stopping condition is met – which thus determines the accuracy achieved.
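A minimal sketch of such a Newton iteration is given below: the DC operating point of a single diode fed through a resistor, with a relative-tolerance stopping test of the kind simulators expose.  The component values, tolerances and initial guess are illustrative assumptions.

```python
import math

def dc_diode(vdd=3.0, r=1e3, i_s=1e-14, vt=0.02585,
             reltol=1e-6, maxiter=100):
    """Newton-Raphson solution for the node voltage of a diode fed
    through a resistor from vdd.  All values are illustrative."""
    v = 0.6                        # initial guess near one diode drop
    for _ in range(maxiter):
        # f(v) = diode current minus resistor current (KCL at the node)
        f  = i_s * (math.exp(v / vt) - 1.0) - (vdd - v) / r
        df = (i_s / vt) * math.exp(v / vt) + 1.0 / r
        dv = f / df
        v -= dv                    # one Newton update
        if abs(dv) < reltol * abs(v):   # stopping test sets the accuracy
            return v
    raise RuntimeError("Newton iteration did not converge")
```

Tightening `reltol` buys accuracy at the cost of more iterations, which is precisely the accuracy/effort trade-off the stopping condition controls.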

            In AC analysis, the circuit is stimulated with small sinusoidal signals and the steady-state solution is calculated.  The circuit can be linearized since the stimulus is small, and the resulting signals are also sinusoidal.  This provides an efficient means to calculate transfer functions without the accuracy problems of transient analysis or the convergence issues of DC analysis.  The AC analysis results in a system of N linear equations, which may be solved by Gaussian elimination or LU factorization.

            Given knowledge of the fundamentals of the simulation algorithms, the designer may be able to tune the simulator to trade off simulation time against accuracy.  As an example, the Spectre™ simulator enables choices of conservative, moderate or liberal simulation settings.  Also available are a number of ‘relative tolerance’ settings and convergence aids.  For example, ‘reltol’ specifies how small a Newton update must be relative to the node voltage, and it works with ‘vabstol’, which is an absolute voltage tolerance for the case where the solution is near zero.  Note that if the required update is smaller than the computer’s round-off error, convergence may never occur.  Methods other than Newton’s are also selectable in order to help convergence or simulation speed.  But as mentioned (pg. 134), other methods, such as higher-order Gear, have not been heavily used and thus there is some risk of ‘tripping over a bug’.

Advanced Analog Simulation Methods:


Beyond the Spectre™ of basic analog simulation lie the advanced methods, which include Monte-Carlo, Corners, statistical worst-case modeling, sensitivity sweeps, and other specialized analyses such as Thermal, Power, and Substrate-Coupling analysis.  These will be briefly visited in the paragraphs below.


PVT Corners Analysis:

            Corners Analysis employs a set of pre-constructed models which define fast, slow and nominal switching process values for transistors, and optionally min, max and ‘typical’ values for resistors and capacitors.  The corners simulation will systematically apply the circuit to a chosen set of corners, which may also include permutations on Temperature and Voltage.  Thus, they are often termed PVT sweeps.  If there are three corners for each of the Pmos, Nmos, Cap and Resistor devices, and three points of sweep on temperature and voltage, then 3^6 = 729 permutations result.  In the ‘hyperspace’ of the design response, these should explore the hyper-corners.  Whether the defined values for the corners are correct or not is a question that should be tested in the regression suite.  A possible method for doing this is to simulate a representative small circuit in each of the corners, and verify that the results are ‘above’ or ‘below’ the nominal as expected.  This should be done for each of the affected devices, through each of the various corners.  Although this form of test is generally done by hand, there are opportunities for automation.
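The permutation count above is easy to reproduce.  The following Python sketch enumerates the full PVT corner set; the corner names and sweep points are illustrative assumptions, not actual process values.

```python
from itertools import product

# Three corners per axis: NMOS, PMOS, resistor and capacitor process
# corners, plus temperature and supply-voltage sweep points.
axes = {
    "nmos": ["slow", "nom", "fast"],
    "pmos": ["slow", "nom", "fast"],
    "res":  ["min", "typ", "max"],
    "cap":  ["min", "typ", "max"],
    "temp": [-40, 27, 125],        # degrees C; illustrative points
    "vdd":  [2.7, 3.0, 3.3],       # volts; illustrative points
}
corners = list(product(*axes.values()))
assert len(corners) == 3**6        # 729 PVT permutations to simulate
```

A regression driver would iterate this list, launching one simulation per tuple (e.g. via the LSF farm) and checking each result against the nominal as described above.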


Monte-Carlo and Statistical Coverage:

            If the device models are augmented with information describing the variation of various key parameters, then a Monte-Carlo (MC) simulation may randomly perturb those values to create sets of models that ‘explore’ the design space.  The collection of this data should be done across multiple lots of wafers, and regularly updated.  The test-die that carries the test-structures is called a Process Control Monitor (PCM).  The considerations presented with respect to the characterization of devices above are equally apropos here.  But MC simulations, by nature, may vary process and device parameters in manners that are not realistic – in that they do not consider the correlations of various parameters.  To address this concern, a ‘statistical’ model generation system may be employed.  But this brings the added complexity of needing to determine and tabulate the correlation coefficients for the set of parameters to be varied – again, another level of complexity and error opportunity.  On the other hand, with MC and statistical simulations, even if there are a number of errors in the system, the greatest likely result is overdesign.
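A minimal sketch of the independent-parameter MC perturbation (and its limitation) follows.  The parameter names, nominals and sigmas are illustrative assumptions, not actual PCM data.

```python
import random

def mc_models(nominal, sigma, runs, seed=0):
    """Generate Monte-Carlo model sets by Gaussian perturbation of each
    parameter about its nominal; in practice the sigmas would come from
    PCM data across wafer lots.  Parameters are varied independently
    here, which ignores correlations -- exactly the limitation noted
    above that statistical modeling addresses."""
    rng = random.Random(seed)     # seeded for reproducible regressions
    return [{p: rng.gauss(nominal[p], sigma[p]) for p in nominal}
            for _ in range(runs)]

# Hypothetical parameters: threshold voltage (V) and oxide thickness (m).
models = mc_models({"vt0": 0.45, "tox": 4.1e-9},
                   {"vt0": 0.015, "tox": 0.05e-9}, runs=200)
```

Each generated dictionary stands in for one perturbed model card; the circuit is then re-simulated once per card to sample the design space.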


Thermal, Power, Substrate Coupling Analysis:

            The simulation of thermal gradients, power supply and substrate coupling are a few examples of Signal-Integrity related analysis.  As with the standard simulators, they depend entirely on their process-description inputs, netlist inputs, internal equations and various factors of numerical precision. 

Thermal analysis requires a knowledge of the coordinate positions of devices on the die (from LPE), their heat dissipation characteristics (from the models), and a fairly exhaustive input test vector to drive the circuit through its possible operation states.  Power analysis follows the same format, although focusing more on IR-drop and leakage.  Device coordinates are not required, but RC networks extracted from the interconnect will increase accuracy. 

Related to thermal and power analysis is Substrate-Coupling Analysis.  This is basically a modeling of the substrate as an RCD (R, C, Diode) network, with ports into the circuit wherever active devices exist.  For example, P-channel devices will have an Nwell-Psubstrate diode interface, N-channel devices have an Nplus-Psubstrate diode, BJTs are similar, and likewise with Nwell, Pplus and Nplus resistors.  As with Thermal analysis, the extraction of the substrate is done in the LPE stage – although it may be a separate sub-tool (e.g., Cadence’s Substrate Storm).  Inputs to the tool include a specification of the doping profile for the process, so as to build a realistic RC network.  Without further digression, it is apparent that considerable work must be done to validate any of these tools.  Since they are generally ‘point-tools’, they do not pass data forward (or backward) to other tools, and thus are not contributors of compounding errors.  There are exceptions, in the case of S.I. platforms, which are tool suites that do allow forward and backward propagation of information between the LPE-level simulations and front-end analysis and/or synthesis tools.


LPE Back-Annotated Re-Simulation:

            In LPE-level simulations, parasitics may expand the total circuit node count on the order of O(N·k), where N is the original number of nets, and k is the average number of parasitics introduced per net.  The increase in matrix size, and the introduction of numerous very small devices, tend to greatly increase matrix reordering complexity.  If there were already convergence problems in the simulation, they will likely be exacerbated.

Often, there are ‘dangling’ devices introduced in the LPE netlist – such as metal traces that extend out from a terminal but are not connected at the other end.  Single-ended capacitors may be introduced where a metal trace runs over, for example, the body of a resistor.  Since the resistor body does not have a ‘net name’, the extractor assigns that end of the cap an arbitrary numbered name.  Overall, the number of ‘odd’ conditions that may be introduced by LPE netlists is rather overwhelming.  Guaranteeing the integrity of the extracted netlist – given any particularly warped layout and similarly twisted circuit (the kind that tend to discover new niches of vermin) – requires a rather unforgiving regression testing system.  Some methods of trimming the parasitic netlist may be used to improve simulations, such as RC reduction, dangling device clipping, or lumping caps to ground.  These methods may also be employed in evaluating the correctness of an extracted device.  For example, in this project, a script called ‘Rempars’ (Section 6.5.2) was created, which selectively removes capacitors and/or resistors above or below a threshold, may remove parasitic diodes, and may reset MOSFET AD/AS/PD/PS to their original model-based estimations.  After various parasitics are removed, the netlist may be re-simulated and compared against an ‘ideal’ reference run, through the use of an Ocean™ script called Simreg (Section 6.5.3).  Both of these scripts are further described below, in Section 6.5, Validation of Parasitic Laden Circuits.
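The flavor of such parasitic trimming can be sketched as follows.  This is not the actual Rempars code (which operates on real Spectre netlists); it is a toy Python filter over (type, value) element tuples, with illustrative thresholds.

```python
def trim_parasitics(netlist, cap_floor=1e-15, res_ceiling=None,
                    drop_diodes=False):
    """Filter a parsed parasitic netlist, keeping only elements worth
    simulating: drop capacitors below cap_floor (farads), optionally
    drop resistors above res_ceiling (ohms) and parasitic diodes.
    'netlist' is a list of (type, value) tuples -- a toy stand-in for
    a real netlist parser."""
    kept = []
    for kind, value in netlist:
        if kind == "C" and value < cap_floor:
            continue                       # sub-femtofarad cap: negligible
        if kind == "R" and res_ceiling is not None and value > res_ceiling:
            continue                       # huge series R: likely dangling
        if kind == "D" and drop_diodes:
            continue                       # parasitic junction diode
        kept.append((kind, value))
    return kept

# Hypothetical extracted elements: two caps, a resistor, a diode.
netlist = [("C", 0.5e-15), ("C", 5e-15), ("R", 12.0), ("D", 1.0)]
slim = trim_parasitics(netlist, cap_floor=1e-15, drop_diodes=True)
```

Re-simulating with progressively more aggressive thresholds, and comparing each run against the ideal reference, localizes which class of parasitic actually moves the circuit’s behavior.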

In summary, it can be seen that the basic simulation system is replete with systemic errors.  The nature of the design, the settings that the designer chooses, and the nature of any parasitic back-annotated netlist all conspire to introduce episodic errors.  Such errors are very hard to isolate after the fact, so any degree of work done in the QA of the system before deployment is likely to save the designer enormous time, silicon and frustration.


3.5.3  Physical Design Level Error Injection


Given satisfactory simulation results, the schematic is translated into a physical layout.  The means of accomplishing the layout depends on the nature of the input design data, the design type, and the available tools.  All signal integrity effects of concern to this project may be seen as deriving from the physical design, insofar as it is assumed that the design input is correct and robust.

            Physical design errors may be: a) episodic – if due to designer error in the use of a tool, or incorrect construction of a device layout; or b) systemic – if due to errors built into the router, the cell libraries (layout or timing characterization), or the Pcell device generator.  In either case, the layout is never perfect.  There is always the possibility of reducing area, shortening total routing, improving matching on matched devices such as current mirrors, improving manufacturability and yield, reducing parasitic noise and substrate coupling noise, improving reliability through electromigration fixes, antenna effect reduction, etc.  This section will briefly touch on some of the more common error inputs, or improvement opportunities, from the digital place-and-route stage, custom analog layout, DRC, LVS, and LPE.

Digital Place and Route Errors:


Digital blocks may either be ‘placed and routed’ by hand, or by an Auto Place and Route (AP&R) engine.  As described in Section 3.2.2 on Digital Flows, the prime S.I. concern in digital flows is the disconnect between the front-end synthesis and static timing analysis, and the back-end routed timing condition.  Thus, the greatest ‘error’ of concern is the difference between the timing expected by the behavioral simulator and synthesis tools, as opposed to the actual post-placement-and-routing timing.  The LPE tools may be employed at the back end of the layout to get a better accounting of the RC-induced delays, which may then be fed back into the behavioral simulator through SDF.  Often, the AP&R tool has an extractor built in – but as will be described under LPE below, the accuracy may only be 2.5D.  The original timing may depend on Wire-Load Models (WLM), which define a statistical capacitance for a net in any given block size, and a Timing Library File (TLF), which is created from characterization (by LPE) of the standard logic cells.  Any errors in these libraries contribute to systemic errors which will make it nearly impossible to differentiate silicon failure due to interconnect from failure due to WLM, TLF or simply design.

This simplified view of the flow's error inputs ignores the enormity of design complexity, the equally complex process of synthesizing behavioral code into Register Transfer Level, and the multifarious means of generating the layout, all of which fall under the episodic error type, and none of which have any chance of remedy through PDK regression testing, other than that correct characterization of LPE and thorough testing of LVS and DRC will at least help contain the error space.

Custom Analog Layout


In any design not of the LSI or VLSI type, custom hand-drawn layouts and routing are still the norm.  The trade-offs in deciding which to use lie in the setup time for AP&R, as opposed to the compactness of custom P&R designs.  Also, in analog CMOS designs, digital cells may be customized to handle multiple power and ground sources, and may use special shielding or isolation methods to protect signals from substrate noise.

Analog (transistor-level) schematics may benefit from Pcells which automatically generate layout devices based on the schematic symbol input properties.  As mentioned in Section 3.5.2, these symbols may need to ‘encode’ the netlisting format for multiple simulators and LVS tools, while also driving the generation of a large number of permutations on layout construction of each device.  There are various benefits of Pcells, which include:


1.      Pre-layout estimated device parasitics based on configuration of the device

2.      Correct-by-construction layouts

3.      Shared parameters for the various netlist generators (i.e., sheet resistance)

4.      Easy generation of test-templates containing all devices for regression


Item 4 is of crucial interest in that if 99.99% of the likely device constructions can be created in a test schematic and layout, and verified against DRC and LVS, then the likelihood of there being errors in the verification flows or Pcell code is significantly reduced.  Unfortunately, this does not cover all possible error, or non-optimal, conditions in the layout, some of which include:


Pcell layout construction grid-snap errors

Errors in the device construction as opposed to its simulation netlist

Device mis-match due to placement, orientation, structure and connectivity

DRC errors introduced by adjacent or overlapping polygons

LVS errors introduced by hookup, soft-connects, Pcell flattening

Non-ideal parasitics from interconnect, devices, substrate, etc.

Table: Analog Layout Error Introduction


In summary, the analog layout is, by nature, very prone to ‘error-by-creativity.’  The advent of Pcells has greatly reduced this condition, as opposed to the old practice of creating every device from scratch, polygon by polygon.

Design Rule Checks Error


            The DRC verification stage attempts to detect and remove layout construction errors that will either lead to faults or loss in yield.  The rules are generally created by Process Engineering, and are based upon statistical yield data from Process Control Monitors (PCM).  From the list of rules, a DRC rule deck is created, which is a set of logical operators that operate on shapes from the layout.  Basic rules range from limits on spacing between layers (e.g., metal1 to metal1, to limit the likelihood of a short due to out-diffusion) to the more complex ‘antenna-check’ and ‘metal-coverage’ rules, which require polygon-based calculations and algorithmic analysis.

            The list of DRCs for any process can range from tens to hundreds, depending on the complexity of the process (number of masks, sensitivity of yield, number of devices).  The rules are given as absolutes ((spacing x,y > n) => {t,f}), although the actual frequency of error in each case is really a distribution, probably of Gaussian form.  Similarly, the magnitude of error incidence (or importance) is not encoded in the DRC ruleset.  Incidentally, not all considerations can be encoded in polygon logic; some are instead left to the designer to implement.  Such items may include the use of ‘dummy’ layers to improve matching.  For example, an integrating capacitor array may require exact values on the capacitors, which may be enhanced by ‘edging’ the outer caps with dummy caps, such that all caps have the same perimeter profile.  Finally, the creation of the DRC rulesets entails that the programmer ‘reconstruct’ the Process and its connectivity, and in many cases recognize devices and the electrical potentials of nets and wells.

            The process of validating the DRC rule-deck typically consists of creating pass-fail templates, running the DRC deck on the template, and visually inspecting that pass cases do not create errors, and that all fail-cases do get flagged with the expected error.  This process can be automated by creating a separate cell for each pass and fail structure, running the DRC check on each cell separately, then checking the results of each.  Of course, if there are already hundreds of rules, then the number of permutations to check will be on the order of thousands.  For example, the metal spacing check should have test cases for parallel lines, angled lines, and corner-to-corner conditions.  For the more esoteric tests (antenna, latchup, metal-coverage), the test cases should likewise include pass and fail cases, but with the added concern for unusual conditions that might break the code.
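The pass/fail template automation described above can be sketched as a tiny harness.  The spacing check, rule value, and template cells below are hypothetical illustrations, not an actual DRC deck:

```python
# Minimal sketch of an automated DRC regression over pass/fail template cells.
# The spacing rule, its 0.4 value, and the cell contents are all hypothetical.

def min_spacing_violations(rects, min_space):
    """Flag pairs of axis-aligned rectangles (x1, y1, x2, y2) on one layer
    that are closer than min_space (a crude 'metal1 spacing' check)."""
    violations = []
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            a, b = rects[i], rects[j]
            dx = max(b[0] - a[2], a[0] - b[2], 0)  # horizontal gap (0 if overlapping)
            dy = max(b[1] - a[3], a[1] - b[3], 0)  # vertical gap
            # Corner-to-corner distance when both gaps are non-zero.
            gap = max(dx, dy) if (dx == 0 or dy == 0) else (dx**2 + dy**2) ** 0.5
            if gap < min_space:
                violations.append((i, j))
    return violations

# One template cell per pass/fail case, with the expected outcome recorded,
# including a corner-to-corner condition as suggested above.
templates = {
    "m1_space_pass":   ([(0, 0, 1, 1), (1.5, 0, 2.5, 1)], False),   # gap 0.5
    "m1_space_fail":   ([(0, 0, 1, 1), (1.2, 0, 2.2, 1)], True),    # gap 0.2
    "m1_corner_fail":  ([(0, 0, 1, 1), (1.3, 1.2, 2, 2)], True),    # ~0.36 diag
}

for cell, (rects, expect_error) in templates.items():
    flagged = bool(min_spacing_violations(rects, min_space=0.4))
    assert flagged == expect_error, f"regression mismatch in {cell}"
```

A real system would launch the DRC tool per cell and parse its report; the point here is only the pass/fail bookkeeping per template cell.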

            In summary, the creation of polygons for mask-making in the layout is prone to error.  The rules provided by Process Engineering are never complete, and do not carry information on distributions or importance.  The DRC rule-sets that attempt to encode the rules are convoluted at best, depending on the programmer to consider all possible cases of connectivity, devices and electrical potentials that may affect the rules.  The DRC programs themselves (e.g., Diva™, Dracula™, Assura™, Calibre™), although usually very mature, are constantly being repaired or augmented by the EDA companies.  Any change in the rules, the DRC program, or the PDK in general warrants a full re-run of the DRC regression tests.

Layout Vs. Schematic Checking Error


            The LVS stage, as with DRC, does not introduce error into the design, but rather may allow error to pass through due to lack of detection.  Basically, LVS checks the network isomorphism (or equivalence) between the schematic and the layout.  This includes the network connectivity, device types at each ‘vertex’, device sizes, and various device-specific ‘electrical rules’ such as the potentials at which P-channel wells are held.  The LVS stage is particularly ‘sensitive’, in that it requires: 1) that the schematic's generated netlist is composed entirely of devices from a defined set; 2) that each device lists known associated parameters; and 3) that the devices created in the layout conform to a very rigid set of rules in order for the tool to ‘recognize’ them.  The LVS program, as with DRC, defines the layers used in the process and their connectivity (e.g., metal2 connects to metal1 through a via), discovers the devices through multiple logical operations on layers, discovers the terminals of those devices and assigns connectivity with the interconnect layers, measures the devices' parameters, and ultimately produces a netlist which includes all the devices, their parameters, and the interconnect.

            The process of running LVS requires a netlist to be generated from the schematic, and similarly a netlist is extracted from the layout.  The two networks are compared by starting at some given node that is known to be equivalent between them (e.g., a pin on a net in the schematic which is equivalently pinned in the layout), and then walking the netlists, comparing devices at each vertex, and each net.  If a device in the schematic has multiple nets extending from it, then the devices or terminations of each are matched as best as possible with the devices on the ends of nets extending from the same device in the layout netlist.  If there are too many discrepancies at some depth down a branch, then the program will back-track and try matching through some other branch, or through some other known starting point.  When all paths have been walked (and marked), the program generates a report of the mismatching components, if any.  The follow-on process of manually tracking down LVS errors is an art in and of itself, very much like the logic game Mastermind.  Through deduction, consideration of known-correct components, and consideration of changes since the last good state, the CAD Layout Engineer tracks down errors.
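A toy version of this equivalence check, assuming simplified device tuples and glossing over the graph-walking and back-tracking a real LVS engine performs, can be sketched by comparing canonical per-net signatures instead of net names (layout net names are machine-generated).  Note this local-signature comparison is weaker than true isomorphism checking; it is an illustration only:

```python
# Toy LVS-style check: two netlists are 'equivalent' if every net sees
# the same multiset of (device type, parameters, terminal) attachments.

def net_signatures(devices):
    """devices: iterable of (type, params, {terminal: net}).
    Returns a sorted list, one entry per net, of that net's attachments."""
    nets = {}
    for dtype, params, terms in devices:
        for term, net in terms.items():
            nets.setdefault(net, []).append(
                (dtype, tuple(sorted(params.items())), term))
    return sorted(tuple(sorted(att)) for att in nets.values())

def lvs_equivalent(schematic, layout):
    return net_signatures(schematic) == net_signatures(layout)

# Schematic: an inverter; layout: same circuit with auto-generated net names.
sch = [("nmos", {"w": 0.5}, {"g": "IN", "d": "OUT", "s": "GND"}),
       ("pmos", {"w": 1.0}, {"g": "IN", "d": "OUT", "s": "VDD"})]
lay = [("nmos", {"w": 0.5}, {"g": "N7", "d": "N12", "s": "N0"}),
       ("pmos", {"w": 1.0}, {"g": "N7", "d": "N12", "s": "N3"})]
bad = [("nmos", {"w": 0.6}, {"g": "N7", "d": "N12", "s": "N0"}),  # wrong width
       ("pmos", {"w": 1.0}, {"g": "N7", "d": "N12", "s": "N3"})]

assert lvs_equivalent(sch, lay)
assert not lvs_equivalent(sch, bad)
```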

            It is important to note that in debugging, the CAD Layout Engineer assumes, at first, that the LVS program is correct.  But, given that the design has likely started on a different version of the PDK, and that the LVS program and rules have gone through at least one revision, there is always cause to wonder whether the layout is correct and perhaps something else is broken.  Experience indicates that the latter is often the case, and an untold number of hours will have been lost trying to ‘match apples to oranges.’

            The remedy for such conditions is to thoroughly test the LVS system on every new release of the PDK, LVS program, or rulesets.  That is, as with the DRC QC, every possible valid device layout configuration, hookup condition, and circuit configuration should be added to a QC battery of tests.  First, an enormous number of permutations of the individual devices should be created, each hooked up directly to pins (in schematic and layout) and LVS run, in order to guarantee that the LVS program recognizes all permissible configurations.  Then, every reasonable fail condition should be tested in separate schematic/layout cases, and the regression system should detect that the expected error report was generated.  For example, a schematic might have an n-channel device with W=0.06um, while the corresponding layout has the same device, correctly hooked up, but with W=0.5um.  The regression system must run the LVS program on the cell, parse the reports, and determine whether it did indeed report an error in Width.  A partial list of the LVS checks to be performed, and the conditions under which the tests must be made, is provided in Appendix H.
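The report-parsing step of such a regression can be sketched as follows; the report text and its format are hypothetical, since each LVS tool has its own report layout:

```python
# Sketch of the regression step: after running the comparison, confirm the
# report flags exactly the expected property (here, a Width mismatch).
# The report format below is invented for illustration.
import re

report = """LVS comparison: FAILED
  Device M1 (nmos): parameter mismatch
    Width: schematic 0.06u  layout 0.5u
"""

def expected_error_reported(report_text, prop):
    """True if the report contains a line flagging the given property."""
    return bool(re.search(rf"^\s*{re.escape(prop)}:", report_text, re.MULTILINE))

assert expected_error_reported(report, "Width")       # the planted error
assert not expected_error_reported(report, "Length")  # no false positives
```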

            In summary, LVS is a critical check needed to guarantee that the layout encodes the network intended by the Designer’s schematic.  Since it must reconcile the front-end to the back-end for any possible admissible circuit, it is imperative that the process generating the schematic and layout fit a set of well-defined standards.  Given these standards, the LVS ruleset may be developed and thoroughly tested.  Furthermore, given knowledge of these regression tests, the design team is much more likely to stick to the set of well-defined standards, knowing it will save them much agony and time as they plow through ‘LVS Hell’ (a commonly used term).


3.5.4  Layout Parameter Extraction Error


            The error, or alternatively, ‘accuracy’ of LPE is the centerpiece of this thesis.  Much has already been said on this topic, and a bit more will be presented in Chapter 5 on Parasitic Extraction Systems and Experiments.  Thus, this sub-section will simply summarize what has been presented above, with some added focus on itemizing the various error or accuracy issues.

As with all stages of pre-silicon design, the schematic-based simulation is just another level of abstraction.  The Layout is in some sense also an abstraction, as there are various effects of etching, diffusion, Selective Process Bias, and photo-lithography that will cause the final physical masks and structures to deviate from the drawn layout.  Given a Layout database, many effects not accounted for in the schematic may be extracted and added to a new netlist, which is then simulated in the same fashion as the ‘Ideal’ schematic (through the same test-benches).  Primarily, these involve interconnect parasitics, which are not accounted for in the analog schematic simulation at all, and are only estimated in behavioral simulations.  The overall intent of LPE is to get the simulation as close to silicon behavior as possible.  Chapter 4 reviews the non-ideal, Signal-Integrity-altering effects.  Here, the major contributors related to LPE will be itemized and discussed, as listed in Table 1.2a, Design Signal Integrity Concerns.  The related items are restated here as:


1.      Interconnect Capacitances (including metal-fill and via sidewall)

2.      Interconnect Resistances (including contacts, vias, Selective Process Bias)

3.      Interconnect Inductances

4.      Intentional Device Parasitic Diodes and Intrinsic Parameters

5.      Simulation max time, data size, accuracy trade-offs

Table 3.5.4a: Primary Parasitic Extraction Design Signal Integrity Concerns

Capacitance Extraction Level Accuracy


            The capacitance extractor tools require a description of the process in order to accurately build equations and models.  This Process description is commonly termed ‘Standard Interconnect Process Parameters’ or SIPPs.  These are the process parameters that allow the calculation of capacitances between various layers, including:


1.      Metal/Poly Thicknesses, (including min widths, min spacings)

2.      Inter-Layer Dielectric (ILD) Thicknesses

3.      Permittivity (k)

4.      Order of Layers, ILDs

5.      Conformal Layer relationships

Table Standard Interconnect Process Parameters (SIPPs)
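The SIPPs above feed directly into the simplest (2D) capacitance estimate: parallel plates separated by the ILD.  The sketch below uses illustrative numbers, not data from any actual process:

```python
# Parallel-plate capacitance from SIPP-style parameters.
# Units: lengths in um, capacitance in fF.  All values illustrative.
EPS0 = 8.854e-3  # vacuum permittivity, in fF/um

def plate_cap_ff(width_um, length_um, ild_um, k):
    """C = k * eps0 * A / d, ignoring fringing fields -- which is
    precisely the component a pure-2D extractor misses."""
    return k * EPS0 * width_um * length_um / ild_um

# A 1um-wide, 100um-long trace over a plane, 0.8um ILD, k = 4.1.
c = plate_cap_ff(1.0, 100.0, 0.8, 4.1)
assert 4.0 < c < 5.0  # roughly 4.5 fF for these illustrative numbers
```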


As with the simulators, the SIPPs are only a model of the Process, and the eventual accuracy of the extraction depends largely on the form of the extractor algorithms and the feasible limits of compute time and storage.  (The SIPPs will be further addressed in Section 5.3.1, Process Parameters Definition.)  With regard to the accuracy of LPE, it should be noted that LPE technology has been improving in speed, accuracy and capacity, as driven by the requirements of design.  Early on there was a predominance of 2D (two-dimensional) extractors, which simply measured the overlap of metal traces to estimate parasitic capacitance.  Later, this was improved by the advent of the so-called 2.5D extractors, which employ look-up tables to match patterns of traces in the layout.  The tables are built from various patterns, such as two lines crossing each other, that map to closed-form equations with variables for spacings and widths.  Thus the extractor, when scanning the layout and finding a pattern, matches the layers and pattern to the table, provides the parameters, and gets a capacitance value in return.  The “2.5” descriptor means that the extractor can calculate side-wall fringing capacitance to traces on layers above or below.  The 2.5D extractors are still in major force, although the need for full 3D extractors is being accelerated by feature sizes in the sub-0.13um range, where interconnect delays begin to dominate over device delays.  As can be seen in the figures below from the Raphael™ Tutorial, 2D extractors will overestimate capacitances between shapes that have intervening shapes, as they only operate on pairs.


Figure: Overestimation of Adjacent-Layer Capacitance through Neighbors, and of Lateral-Layer Capacitance under Neighbors, from [42], Raphael™ Tutorial
(Reprinted with permission of Synopsys Inc.)
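The table-lookup mechanism of a 2.5D extractor described above can be sketched as a keyed table with interpolation between pre-solved points.  The layer names, spacings, and capacitance values are purely illustrative:

```python
# Sketch of the 2.5D idea: a table of pre-solved patterns, keyed by layer
# pair, mapping spacing to coupling capacitance per unit length; the
# extractor interpolates between table points.  All values illustrative.
from bisect import bisect_left

# (layer_a, layer_b) -> sorted [(spacing_um, cap_fF_per_um), ...]
table = {("m1", "m1"): [(0.2, 0.080), (0.4, 0.045), (0.8, 0.020), (1.6, 0.008)]}

def coupling_cap(layers, spacing, length):
    """Coupling capacitance (fF) for a parallel-run pattern of the given
    length, by linear interpolation in the pattern table."""
    pts = table[layers]
    xs = [s for s, _ in pts]
    i = bisect_left(xs, spacing)
    if i == 0:
        return pts[0][1] * length           # clamp below the table
    if i == len(pts):
        return pts[-1][1] * length          # clamp beyond the table
    (x0, y0), (x1, y1) = pts[i - 1], pts[i]
    c = y0 + (y1 - y0) * (spacing - x0) / (x1 - x0)
    return c * length

c = coupling_cap(("m1", "m1"), 0.3, 50.0)   # midway between 0.2 and 0.4
assert abs(c - 0.0625 * 50.0) < 1e-9
```

The accuracy limits discussed next follow directly from this structure: any geometry not in the table must be clamped, decomposed, or mis-matched.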


It should be noted that a primary concern of accuracy in pattern-matching extractors is the extent of the pattern set.  If limited to orthogonal relationships (i.e., directly above, below, and on the same plane), the set will be fairly small, extraction time will be small, and relative error large.  Given a pattern set that sees all degrees around, but is limited in halo radius and assumes all shapes are rectilinear (formed from straight vertical and horizontal lines), the table may be a couple of orders of magnitude larger, the accuracy similarly improved, and the extraction time enlarged.  But this still leaves room for error, as in the case of non-rectilinear shapes (e.g., 45-degree interconnect traces, or tapering on certain shapes to prevent charge crowding).  Given that it is not always known what form of pattern matching an LPE program uses, or whether it is even complete in the type advertised, it is always a good idea to run a battery of disparate tests to gain some confidence.

Although the dimensionality of the extractor determines the baseline accuracy, the actual manner in which ‘R/C/L’ circuits are constructed has been increasing in complexity.  In [43], it is noted that, originally, lumping all caps to ground was sufficient, and considerably reduced the simulation overhead.  With the advent of RC needs, the network was modeled as a lumped RC network, then as a more realistic distributed RC circuit.  Distributed RC may be structured as ‘T’ or ‘Π’ networks (see Figure below).  Eventually, full RLC transmission-line models have been required.  Today, basically any combination or topology is available to the designer, at their discretion.  The question of which level of accuracy or complexity is merited is a topic of much research.


Figure: RC Π Network Representation, from [33], Assura™ User’s Guide (Reprinted with permission of Cadence Design Systems Inc.)
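A distributed line reduced to n cascaded Π sections can be sketched as netlist generation: each section carries R/n in series, with C/(2n) to ground at each end.  The element-line format below only loosely follows SPICE, and the node-naming scheme is illustrative:

```python
# Build a cascaded-pi RC netlist for one net.  Sketch only; the element
# lines loosely follow SPICE syntax (name, node, node, value).

def pi_network(net, r_total, c_total, n):
    """Return element lines for n pi sections of a net with total R and C."""
    lines = []
    for i in range(n):
        a = net if i == 0 else f"{net}:{i}"   # section input node
        b = f"{net}:{i + 1}"                  # section output node
        lines.append(f"C{net}_{i}a {a} 0 {c_total / (2 * n):.4g}")
        lines.append(f"R{net}_{i} {a} {b} {r_total / n:.4g}")
        lines.append(f"C{net}_{i}b {b} 0 {c_total / (2 * n):.4g}")
    return lines

netlist = pi_network("N123", r_total=120.0, c_total=0.3e-12, n=3)
assert len(netlist) == 9  # 3 sections x (2 caps + 1 resistor)
```

Note the trade-off the text describes: larger n tracks the distributed line more closely, but every added section is three more elements for the simulator.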


Also, as mentioned above, parasitic extraction requirements are becoming non-linear, or non-pattern-matched, due to non-conformal (non-planar) dielectrics, copper wiring, and non-rectangular cross-sections.  Such geometries are not amenable to pattern-matched lookup tables.  A 3D extractor is required, which employs a field-solver to analyze a set of surfaces based on finite-element methods.  Of course, the extraction time is significantly higher than with the lookup-table method, and its complexity leaves much room for error to sneak in.

            As noted, some processes maintain non-conformal layers, meaning that they are leveled through Chemical Mechanical Polishing (CMP).  But if the layers are not leveled, the height between some layer such as metal4 and poly1 may vary due to the particular combination of layers between them, or the devices sitting on the substrate.  The existence of such non-linearities in the interconnect stack requires a specification of how much a layer will differ due to the conditions below it.  This effect can further confound the extractor due to the slopes between plateaus.  Fortunately, most contemporary Processes do employ polishing, for the primary reason that it keeps metal traces from stretching as they run up and down hills.  To assist in keeping the layers flat, dummy metal fill is added to open spaces after routing is complete.  The idea is that by filling the voids, less sagging will occur in the oxide during CMP.  The dummy metal is usually placed in a pattern of small, disjoint shapes for each metal layer.  Assuming that the plates do not collect charge, if a dummy plate lies above a trace it will tend to induce drag on any signal traversing the net.  Similarly, if a dummy plate lies between two nets, it will tend to serve as a shield, or capacitance divider.
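The capacitance-divider behavior of a floating dummy plate reduces to two capacitances in series, whose equivalent is always smaller than either alone.  This sketch assumes an isolated, floating plate and illustrative values:

```python
# Floating dummy plate between two nets: the net-plate and plate-net
# capacitances combine in series, C_eq = C1*C2 / (C1 + C2).
# Values in fF, purely illustrative.

def series_cap(c1, c2):
    return c1 * c2 / (c1 + c2)

c_net_to_plate = 1.0
c_plate_to_net = 1.0
via_plate = series_cap(c_net_to_plate, c_plate_to_net)

assert via_plate == 0.5                       # half of either capacitance
assert via_plate < min(c_net_to_plate, c_plate_to_net)
```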


Figure: Conformal Interconnect Stack, from the Assura™ Developer’s Guide, pg. 171 (Reprinted with permission of Cadence Design Systems Inc.)


Overall, the accuracy required by the design dictates the type of extraction run.  On the other hand, the time of extraction and, more importantly, the time of simulation dictate that the least accurate mode of extraction sufficient for the design should be used.

Interconnect and Via Resistance Extraction Accuracy


            The extraction of the resistance of lines is fairly simple: just insert resistors in the netlist with terminal names based on a numbering of the original net name.  For example, if a metal trace between two devices has the net name N123, then the resistors will have terminal names N123, N123:1, … N123:n.  A trace may be ‘fractured’ to any degree of granularity, from infinite (meaning a straight piece is not fractured at all) down to a minimum of one square per resistor.  Naturally, the finer the fractures, the more accurate the distributed RC network will be, and the longer the simulation will run.  Similar to the fracture control, a designer may choose to limit the extraction of resistors to only those greater than a certain threshold, i.e., 1u-ohm.  But if a line is fractured into a number of sub-1u-ohm segments, the result will be zero resistors in the final netlist.

It should be noted that bends in a trace are usually represented as ½ square of resistance, and a fracture is forced at that point.  Also, Steiner trees are fractured at junctions, and large, odd shapes may be fractured into disjoint rectangles, with arbitrary assignment of which sides get to be the terminals.  Accounting for all of the variations on interconnect structures can get rather messy, and error-prone.  The following figure from the Diva™ Reference Manual provides a good summary of the conditions to be accounted for.

Figure: Various Resistance Structures, from the Cadence Diva™ Reference Manual (Reprinted with permission of Cadence Design Systems Inc.)
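The squares-based resistance calculation described above, including the half-square-per-bend convention, can be sketched as follows.  The sheet resistance and trace geometry are illustrative, not process data:

```python
# Resistance of a fractured trace from sheet resistance: R = Rs * L / W
# ('squares' of the trace), with each bend counted as half a square.
# Rs (ohms/square) and the segment geometry below are illustrative.

def trace_resistance(rs_ohm_sq, segments, bends):
    """segments: list of (length_um, width_um) straight pieces."""
    squares = sum(l / w for l, w in segments) + 0.5 * bends
    return rs_ohm_sq * squares

# An L-shaped metal trace: 10um and 6um legs, 0.5um wide, one corner.
r = trace_resistance(0.07, [(10.0, 0.5), (6.0, 0.5)], bends=1)
assert abs(r - 0.07 * 32.5) < 1e-9  # (20 + 12 + 0.5) squares
```

Each (length, width) pair corresponds to one fractured resistor in the extracted netlist; the threshold filtering described above would simply drop any segment whose individual resistance falls below the cutoff.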


The introduction of copper wiring, to reduce resistance, also introduces non-linearity in the resistance itself, as its resistance is dependent on line width.

Recently, via and contact resistance have begun to contribute a significant portion of a net's overall resistance.  In general, a contact may add 3 to 4 ohms, while each square of metal adds about 0.05 ohms.  Of course, adding multiple contacts creates parallel resistors, thus reducing their effect, and simultaneously improving the reliability of the chip.  The trade-off comes in area and uniformity of the routes: contacts and vias typically force a wider pitch between traces, due to the metal overlap requirement.  The extraction of vias can be represented by a simple point-to-point resistor, or by various finer-mesh models.  For example, in the figure below from [44], the Cadence Diva™ Manual, the lateral, vertical-area and side-wall edge components all contribute to via resistance, and of course provide varying levels of accuracy and complexity.

Figure: Various Representations of Via Resistance, from [44]
(Reprinted with permission of Cadence Design Systems Inc.)
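The multiple-contact trade-off noted above reduces to elementary parallel-resistor arithmetic; the 3.5-ohm per-contact figure below is an illustrative value within the 3-to-4-ohm range quoted in the text:

```python
# Parallel contacts/vias combine as 1/R_total = sum(1/R_i): quadrupling
# the contact count cuts the contact contribution to a quarter, at the
# cost of routing area.  Per-contact value illustrative.

def parallel(resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

single = parallel([3.5])
quad = parallel([3.5] * 4)

assert abs(single - 3.5) < 1e-12
assert abs(quad - 0.875) < 1e-12  # four contacts: one quarter the resistance
```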


            The extraction and representation of resistance can be messy, and usually leads to undesired behavior in the simulation.  Insertion of an R or RC extracted network into an original netlist can be very error-prone.  Recent extractors now extract the entire circuit netlist from the layout, thereby sidestepping the need to fit an RC network into an Ideal schematic network.  Even with a well-defined circuit, simulation times grow by orders of magnitude.

Interconnect Inductance Extraction Accuracy


            As noted in [43], inductance increases in importance with faster signal rise times.  Contributing factors are wide power lines and copper interconnect, which reduce overall resistance.  But it is not always clear whether inductance is a major contributing factor to signal propagation delay and signal dispersion, as compared to RC effects.  A full RLC, or transmission-line, model will be able to capture such effects as signal reflection, but such a model increases extraction and computation time; thus, the minimal model required should be used.  As noted in [45], the selection of the model may be made based on the values of the line parameters, the driver parameters, and the frequency bandwidth of the signal being transmitted on the line.  It is found that there is a range of interconnect lengths for which inductance effects are significant, and others for which they are minimal.  Servel, et al. note that simulation of high-speed analog and digital circuits requires the analysis of frequency-dependent transmission lines, while the determination of transmission-line delays and reflections requires time-domain simulation.  Most analysis programs use numerical-transform techniques to alternate between frequency- and time-domain analysis.  The interconnection can be characterized by its electrical parameters, calculated from the values of the complex propagation factor and the complex characteristic impedance.  Such RC values may be used to build the distributed pi RC models (without inductance) in an electrical simulation, which may then be compared with electromagnetic analysis.  This topic will be addressed further in the Section on Inductance Theory.

            The major problem with inductance extraction is determination of the return path, for which there may be many possibilities.  Similarly, the switching characteristics of neighboring traces will also contribute.  But as noted, extraction and simulation of such conditions can be long, messy, and unrevealing as to the contributors.

Device Parasitic Diodes and Intrinsic Parameters


To extract various other non-ideal parameters of the layout (items 2-5), the following are required in addition to the SIPPs:


1.      Definition of Standard Devices – for parameter extraction (from LVS)

2.      Layer sheet resistances and variations

3.      Via and contact resistances

4.      Substrate doping profiles

Table: Non-Interconnect Process Parameters for LPE


The parameters listed above are derived from characterization of the Process, through on-chip test structures, circuits, and possibly E-beam measurements.  Their accuracy is bound by the same constraints listed above for the characterization of ‘intentional’ devices.  The device-characterization needs of LPE are outlined above in the Section on Characterization Considerations of LPE, and have been itemized in the Table of LPE Device Simulation Parameters.

It has been noted that a clean LVS run leads to a much more representative LPE-based simulation.  The LVS run in essence extracts the devices and their parameters (this is what it matches against), and provides this information to the post-LPE netlisting engine.  But it should also be noted that, interestingly, a schematic is not really needed at all: if the LVS program extracts the devices, and LPE extracts the interconnect, what results is just the circuit that the layout represents.  (This feature is used extensively in the creation of test-benches for LPE, without needing to bother with making the schematics.)

Importantly, when designing the device symbols and models, consideration must be given to what parameters the LPE program is able to extract exactly, so that they may be provided directly, instead of estimated in the models.  Thus, if a device model is to estimate, for example, MOSFET source and drain area based on channel width, then there needs to be a means for LPE to override the estimate with the exact value extracted from the layout.  Moreover, if a device parameter needs to be mutable by the designer or simulator, then it is easier to have it defined in the device symbol, and netlisted as a parameter to be passed to the model.  In such a case, the LPE program must also provide a device netlisting with the value provided, such that the full specification that the model expects will be available.  In the end, care must be taken that all expected values are specified, and that no values are double-counted.  For example, if the models or ideal schematic provide an Nwell-Psub diode for each P-channel device, and if the LPE program also extracts these diodes exactly, then the final LPE netlist must somehow exclude the estimated diode so as to prevent double-counting.

Benchmarking and Calibration


Whether the data extracted from the layout is accurate or optimally usable is a question that must be addressed through rigorous regression testing on representative structures, and benchmarking on actual designs.  The characterization and validation of LPE tools is usually completed through the creation of a large battery of test structures that represent the more prevalent interconnect layout topologies.  The capacitance and resistance components are extracted, and then compared to either a gold-standard 3D field-solver or actual silicon.  Other methods described in the literature include circuit-based test structures, E-beam, direct probing, and forms of built-in self-test.  The validation methods used in this project are presented in Chapter 5.

When the accuracy (or error) of the LPE system has been determined, there is often cause to go back and tweak the SIPPs and other input data to the extraction tool, and then run the regression tests again.  It might be noted that such a multi-parametric experiment requires ‘Design of Experiment’ techniques, such as Principal Component Analysis, in order to sort out which parameters are contributing to the error.

Error Due to LPE Flows


            Given an LPE tool that has been benchmarked and calibrated, the next major concern regarding error introduction is its use within a design flow.  An outline of these concerns has been presented above in Section 3.2.2 – Digital S.I. Flows, and Section 3.2.3, Analog LPE Flows.

In review of the Digital S.I. flow introduction, it has been noted that synthesis-based timing analysis and WLM use in routing are no longer sufficient in DSM.  Methods for statistical estimation of the capacitive coupling of nets, and for prioritization of net fixing based on these estimates, have been proposed.  Some commercial S.I. platforms have built-in constraint management between the front-end synthesis and back-end routing.  The idea is either to route nets without S.I. faults, or to make the front and back end simultaneously converge to a solution.  This is the direction called for in the ITRS’03 Report, and exemplified in the following graph:


Figure: ITRS’03 Call for Integrated Design Systems [5]
(Reprinted with permission of Sematech)


Overall, the critical concern of Digital flows is getting a design to converge to timing closure, without spending an eternity in extraction and simulation.  A full 3D extract will be a monster to deal with in simulation, so there needs to be either a means to reduce it (RC reduction), or to bypass simulation completely by analyzing net paths for the likelihood of timing faults, or net-pairs for the likelihood of crosstalk-induced faults.


In review of the Analog flow, the use of LPE to better model interconnect noise, device noise (e.g., 1/f or jitter), and possibly device matching was discussed.  Also presented were concerns about the integration of the results from LPE into the flow, and the large possibility of error in that process.  Principally, it is stressed that a clean LVS run is required in order to correctly bind the interconnect traces in the layout with the corresponding nets in the schematic.  This also improves the likelihood that the spirit and form of the devices drawn in the layout reflect what was intended in the schematics.  A few of the device parameters extracted by LVS and LPE are presented in Table 3.2.3c, Device Parasitic Extraction Parameters.

Related to the simulation of LPE, there are many non-one-to-one device conditions the LVS and LPE programs must contend with.  For example, when extracting devices from the layout, the LVS program often needs to handle multiple parallel devices, such as parallel-connected resistors, caps, BJTs or MOSFETs.  Each of these may be ‘smashed’ into a singular device, given that the terminals are equivalent and the necessary device parameters match.  In such cases, parallel resistors may be combined if their widths match, MOSFETs require equivalent gate width and length, and BJTs in general should have equivalent emitter areas.  A few other requirements have been itemized in Table 3.2.3b, LVS Requirements for LPE Simulations.

Furthermore, in the Section above on LPE Back-Annotated Re-Simulation, various issues regarding the size and nature of the extracted netlist were presented, including dangling caps and resistors, and the filtering of various types of parasitics in order to structure experiments.

            Another accuracy concern in the LPE flows is the extraction of parasitic corners and their re-simulation.  To build LPE corners, the SIPPs must be scaled to fit the best and worst cases for capacitance and resistance.  The worst case increases the thickness of interconnect, reduces the ILD thickness, and increases the sheet resistances; the best case does the opposite.  These scaling effects should be in line with the adjustments made for the ideal simulation model corners, and similar techniques used in testing model corners can be used to check the LPE corners.  (Scaling LPE is further discussed in Section 5.3.1, Process Parameters Definition.)
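A minimal sketch of this corner scaling, assuming a simple ±10% scale factor and invented SIPPs parameter names: the worst case thickens metal, thins the ILD and raises sheet resistance, while the best case does the opposite.

```python
# Illustrative SIPPs corner scaling; the parameter names and the
# +/-10% factor are assumptions, not real process data.

NOMINAL = {"metal_thickness": 0.50,   # um
           "ild_thickness":   0.80,   # um
           "sheet_res":       0.07}   # ohms/square

def lpe_corner(sipps, case, pct=0.10):
    """Scale nominal SIPPs to a 'worst' or 'best' C/R corner."""
    s = 1 + pct if case == "worst" else 1 - pct
    return {"metal_thickness": sipps["metal_thickness"] * s,      # coupling up
            "ild_thickness":   sipps["ild_thickness"] * (2 - s),  # ILD down
            "sheet_res":       sipps["sheet_res"] * s}            # R up

worst = lpe_corner(NOMINAL, "worst")
best = lpe_corner(NOMINAL, "best")
print(worst)
```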

            In summary, LPE is a means of creating a circuit from a layout, with all the unforeseen parasitics, noisy substrates, and as-is devices mixed into the result.  If the accuracy of the fundamental extraction capabilities has not been verified by simple tests, it is very unlikely that the existence of systemic errors will be detected in the back-annotated simulation.  As noted above, there are literally hundreds of situations and modes to consider in the validation of the LPE tools, so automating the QC process through a regression system is essential.


3.5.5  Yield, Process and Wafer Level Error Injection


            Consideration is given here to certain error-introducing effects of mask and die fabrication that lead to differences from the simulated circuit.  Most of these factors are dealt with through adjustments in the models, in post-layout PG (program generation), and may be accounted for through Corners or Statistical simulations.  Some factors may be dealt with through LPE.

‘Process bias’ is the effect of a process drifting one way or the other, due to gradual changes in equipment, materials and other environmental effects in the Fab.  Usually, a Process Control Die (PCD) is designed which contains sufficient structures to track the pertinent characteristics of devices and the process itself.  Multiple samples of this die are measured, and from the data, distributions are calculated. From the distributions, 3-sigma upper and lower specification limits are derived and coded into statistical models such as worst-case ‘Corners’ and Monte-Carlo simulators.  Pertinent to the LPE process, the PCD can track layer thickness, ILD and K factors.  Given this data, as with the device models, corners can be created for the LPE extractions.
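The derivation of 3-sigma specification limits from PCD measurements can be sketched in a few lines; the sample data below are invented.

```python
# Sketch of deriving 3-sigma specification limits from PCD samples.
import statistics

def spec_limits(samples, n_sigma=3):
    """Return (lower, upper) limits at mean +/- n_sigma * sample stdev."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return mu - n_sigma * sigma, mu + n_sigma * sigma

# e.g. ILD thickness (um) measured across many PCD sites (invented data)
ild = [0.79, 0.81, 0.80, 0.82, 0.78, 0.80, 0.79, 0.81]
lsl, usl = spec_limits(ild)
print(lsl, usl)
```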

Another characteristic of the fabrication process is ‘mismatch’.  Device mismatch is an artifact of intra-die deposition gradients. Mismatch can be statistically accounted for, given the distance between devices – which can be provided by an LPE or similar tool.


Mask Generation and Resolution Enhancement Techniques


            The creation of masks (reticles) in sub-micron processes introduces errors which deviate the final mask from what is intended in the design. To alleviate this, resolution enhancement techniques (RET) and Design for Manufacturing (DFM) are employed to shape the drawn layout such that these effects are cancelled in the final silicon.

              In the finer dimensions under 0.13um, the wavelength of light used is not sufficiently short to define sharp corners and avoid interference from nearby lines.  The majority of today’s exposure systems operate at an exposure wavelength of 248nm, while desired feature sizes are approaching half that wavelength at 150nm and below.  Diffraction at these levels causes differences between dense lines and isolated lines, as well as shortening of line ends and corner rounding.  An example of the effects, courtesy of Future-Fab [46], is included below.


Figure, OPC Effects, Future-Fab [46]
(Reprinted with permission of Future-Fab)


The effects can be seen to result in mismatch between transistor gates and loss of pattern fidelity – which also has the resulting effect of invalidating much of the LPE analysis.  These conditions have led to correction methods such as optical proximity correction (OPC) and phase-shift masking (PSM), which are described here.


Optical and Process Proximity Corrections

            As described in [47], in OPC the flow for physical verification contains a step which predicts the final result on the wafer, and then adapts the layout until convergence between the final image and the originally desired physical design is accomplished.

            In optical proximity effects, low light levels transmitted through multiple regions cause undesired artifacts to appear on the wafer.  This is due to constructive interference from adjacent contacts, leading to an additional bright spot on the wafer, called a ‘sidelobe’. Such sidelobes may be removed by adding an opaque patch to the mask, using the layout software and a DRC function to find them. Specifically, to compensate for line-end shortening, the line may be extended by a hammerhead shape. To compensate for corner rounding, serif shapes may be added to (or subtracted from) corners, resulting in corners that are closer to the ideal layout.

Also discussed is iso-dense bias, an effect where features on the mask with the same linewidth print on the wafer with differing linewidths.  This is due to the difference in diffraction of light through dense arrays of lines versus isolated lines. To alleviate this, small features called sub-resolution assist features (SRAFs) are placed on the mask near isolated or semi-isolated lines to make the diffraction pattern similar to that of a dense line.

In [48], the effects of across-the-chip linewidth variation (ACLV) are described. ACLV is caused mainly by reticle and proximity effects during mask-making lithography, and by local density effects.  This effect increases with each generation of technology, due to wavelength limitations of light and to the increased impact of random errors on smaller features.


Phase-Shift Masking

            Resolution enhancement techniques are designed to improve the resolution and depth of focus of the photolithographic process by using phase-shifting masks (PSM) in place of the conventional binary intensity masks (BIMs).  The PSM systems exploit optical phase to improve the resolution, increase the effective depth and provide a wider process window.  From [49] it is presented that photolithographic systems have evolved through several generations of wavelength of the light source:


·        365 nm (i-line of mercury)

·        257 nm (high-pressure mercury arc lamp)

·        248 nm (KrF laser)

·        193 nm (ArF excimer laser)


The basic concept is to phase-shift light through mask openings such that interference between proximal beams tends to focus the light onto desired bright regions and cancel it in desired dark regions.  There are various modes and means of accomplishing this, one being alternating-aperture PSM, wherein alternating line openings have their beams phase-shifted 180 degrees.
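The 180-degree idea can be checked numerically with idealized unit-amplitude waves: at a point midway between two apertures (equal path lengths), in-phase beams interfere constructively while opposite-phase beams cancel, keeping the gap dark. This is a toy superposition, not an optics simulation.

```python
# Toy two-beam superposition illustrating alternating-aperture PSM.
import cmath

def midpoint_intensity(phase_shift_deg):
    """Intensity midway between two apertures with a relative mask phase."""
    a1 = cmath.exp(1j * 0.0)
    a2 = cmath.exp(1j * cmath.pi * phase_shift_deg / 180.0)
    return abs(a1 + a2) ** 2

print(midpoint_intensity(0))     # ~4.0: beams add, the gap washes out
print(midpoint_intensity(180))   # ~0.0: beams cancel, the gap stays dark
```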

As noted on the website [50], the ITRS report suggests that OPC techniques should be able to satisfy lithography needs through the 50nm node, as depicted below:


Figure  ITRS and Etec lithography roadmap, [50]
(Reprinted with permission of Future-Fab)


Antenna Effects

            Antenna effects are similarly accounted for in the physical verification stages of a design, and due to their post-layout analysis, may lead to artwork modification or complete re-routes of designs.  As presented in [51], the antenna effect derives from the etching of gate poly and oxide sidewall spacers.  During the etching, intense electrical fields are generated to create an ionizing plasma, resulting in charge build-up on floating traces of interconnect or poly.  The resulting voltages may result in gate-oxide breakdown if the charge is not able to bleed into the substrate.  The damage is proportional to the gate area and the antenna area. A small gate oxide connected to a large polysilicon geometry will suffer an ESD effect. Similarly, a metal trace that is not connected to a diffusion until later in the process may accumulate charge; when finally connected at some higher level, it will discharge rapidly. Remedies include insertion of jumpers from lower traces to the upper traces, to reduce the area of layers connected to small gate-oxide shapes. Also, reverse-biased diodes (Nmoat/P-epi, Pmoat/N-well) may be added to provide a bleed path, but these are not without their design-altering effects.


Package Effects


As would be expected, packages, bond-pads and their connections contribute significant signal integrity degradation.  The effects of such are the subject of entire books, and are the real basis of many of the S.I. papers found.  In fact, there seems to be a dichotomy in the definition of S.I., as to whether it applies to on-chip or off-chip PCB effects. These effects will not be related here, as they are not significantly addressed by physical verification and LPE tools; that is, the off-chip interconnect and package effects are not analyzed by LPE tools.


3.5.6  Reliability,  Environment and Life-Time Level Error Injection


            The reliability concerns of semiconductors may be addressed by layout parasitic extraction and other physical verification methods.  These effects are generally under the watch of Design Integrity and Failure Analysis engineers.  The several conditions covered here include environmental variations (PVT), thermal gradients, hot-electron degradation, electromigration and electrostatic discharge (ESD).

            The PVT concerns have been discussed above in Section, and the development of LPE corners to compensate is discussed below in Section

            Thermal gradient analysis is a corollary of power analysis, and thus shares its dependency on the extraction of interconnect RC elements and substrate modeling. The S.I. analysis of such will be delved into below in Section 4.2.4.

            The remaining design integrity concerns – hot-electron degradation, electromigration and ESD – are similarly expanded upon ahead in Section 4.3.  It need only be noted here that such effects are primary targets of current S.I. solutions from major EDA vendors, as discussed for example in [7], [8], [34], [28], [52].


3.6  Related Work

A few papers are introduced here, with a focus on projects that investigate S.I. tools in design environments, papers which investigate the quality of EDA tools, and some general discussions of S.I. issues in analog and digital design flows.  With regard to the relationship of parasitic extraction to design flows and design kits, the related work found was limited to industry tool presentations.

            In [7], a white-paper on Deep-Submicron Signal Integrity from Magma Design Automation, an overview of S.I. issues is presented, with depth given to cross-talk noise, electromigration, IR-drop and design for manufacturing, such as antenna rules and metal fill.  Emphasis is given to the ‘central data-model’ concept of binding the front-end to the back-end so as to allow concurrent optimization and convergence.

In [8], another white-paper on signal and design integrity from Cadence Design Systems Inc., the same concepts are presented, with some additional discussion of hot-electron effect and wire self-heat.  Methods of prevention for crosstalk, wire self-heat, hot-electron, and electromigration are presented.  A description of enhanced TLF and library exchange format (LEF) are presented to manage the data.  A composite flow is given for Cadence’s Silicon Ensemble Physically Knowledgeable Synthesis (SE-PKS), which includes:


·        Clock-Tree Generation: clock wire self-heat and hot-electron prevention

·        Placement Optimization: crosstalk, signal wire self-heat, hot-electron prevention

·        Power Analysis: Electromigration analysis/fix, IR-drop analysis/fix

·        Detailed Routing: Crosstalk repair, shielding, wide-space routing

·        RC Extraction: Extraction of RC coupling

·        Timing Analysis: Cross-talk delay and glitch analysis.


Both [7] and [8] provide a good overview of the concerns of addressing signal integrity within the framework of design kits and design flows.   In [9], various concerns of S.I. within CMOS processes are introduced, but with the added awareness of the driving design flows and methods.

In [35], a circuit’s ability to handle signal integrity issues is described with respect to Signal-to-Noise ratio (SNR).  The input equivalent noise source is found by use of the Driving Point Impedance/Signal Flow Graph methodology.  This method is also appropriate to use to model signal noise inputs.

In [10], a rather different form of ‘noise’ is considered in terms of predictability of tool and algorithm behaviors under certain levels of perturbations of usage.  This concept, although somewhat abstract, forms a good model for regression testing of the disjoint components of a design flow.

In [11], a survey of timing-analysis and signal-integrity sign-off flows is presented.  The several variations on digital flows are presented, including custom wire-load model, block-assembly flow, constant-delay synthesis flow and placement-aware synthesis.  The concerns of complexity, capacity and computability are touched upon, as are the usual suspects of cross-talk, electromigration and IR-drop.

These papers are particularly salient, as they each consider the impact of design tools and flows in the introduction and management of error.


3.7  Conclusion

In summary, the presented expansion in complexity, speed, size, range and S.I. sensitivity of circuits is a problem that every engineer is aware of.  The increases in all of these areas are plagued with inter-dependencies which are connected through the signal integrity realm. Each engineer is also acutely aware that there is a vast matrix of economic trade-offs in the optimization of various parts, and likewise that there exists a chain of error introductions and variances within which they must exercise their design to ensure an envelope of operation during its expected lifetime. The typical engineer is not, however, empowered with the information or tools to globally account for all of these factors in design planning and analysis.  They can only set certain parametric goals and then hope that, through use of various point tools and many iterations through the design flow, they might converge to an acceptable solution with reasonable hope that all likely faults have been found. This work is also done with the assumption that the underlying design environment is correct and not changing, thereby allowing for a controlled-experiment environment. This is usually not the case – which may be considered a gross understatement.

Importantly, the design engineer needs to have knowledge of the error levels of different tools, libraries, devices and methods.  A means is necessitated for the management and satisfaction of design constraints across tools and flows.  This must be provided by rigorous analysis of each stage, and by development of an inter-tool constraint management utility. The Regression Management System (RMS) presented in Chapter 6 is integral to the development of such a constraint discovery and management system.

The summation of the error propagation factors presented in this chapter is found in Appendix D, Error Propagation Concerns.






“There are two kinds of designers, those with signal-integrity problems, and those that will have them.” -Unknown [53]


This chapter provides a survey of Signal and Design Integrity issues, with a bent towards the LPE-based reconciliation of those issues.  As an LPE tool must work with various other tools to provide a complete front-to-back solution, the trade-offs and interoperability with these other tools must be taken into account.  The fundamentals that drive those tools, as related to LPE, are the focus of this chapter.

The primary S.I. contributors include parasitic capacitance, resistance, inductance, timing analysis, noise analysis, thermal analysis and power analysis. Design integrity concerns include electromigration, hot-electron effects and wire self-heat. For each, the fundamentals, including their basic theory, analysis and application, will be visited.  This information is targeted to the issues physical design effects have on the performance of circuits, including analog, digital and mixed-signal SoC designs.  The level of investigation, as opposed to the design flows overview of Section 3.2, is more circuit-centric – presenting examples of design-level S.I. analysis.  The results are encapsulated in Appendix I, “Signal Integrity Concerns Checklist”.

As introduced, Signal Integrity is herein considered as those factors of the physical design which affect the end product’s performance or mean time to failure (MTF), and which are not available in the front-end (i.e., pre-layout) simulation.  Although this generally includes package-level and board-level effects, this thesis is focused on the on-chip contributors. It is found, though, that increasingly the same analysis that has been applied to board-level interconnect (i.e., transmission-line analysis) is now apropos at chip level.  For example, in [53], which is board-level oriented, signal integrity is considered to encompass all problems that arise in high-speed products due to the interconnects.  These effects are said to fall into three categories:


1.      Timing  (i.e., RC induced timing errors)

2.      Noise (i.e., RC cross-talk induced noise and faults)

3.      Electromagnetic Interference (EMI)

But, as this work is concerned with ‘all’ of the effects that may be addressed by, or assisted by, LPE, the following applications have been included:


1.      Thermal analysis

2.      Power analysis

3.      Failure analysis (Electromigration and Hot-Electron)

4.      Process and Lithography technology concerns


These are but a few of the primary concerns that must be accounted for in deep sub-micron design, high frequency, and precision analog and mixed-signal design. Of course, differing design styles or circuit architectures will be sensitive to varying effects.  For this reason, some investigation is given to examples of differing circuit types.


Organization of Chapter 4


The organization of this chapter is as follows:  first, a review of selected related work is presented; next, a survey of S.I. concerns, including all those mentioned above, is developed; finally, a brief survey of Design Integrity concerns is compiled.


4.1  Related Work


            Several sources are reviewed here, including the Semiconductor Industry Association (SIA) reports, ITRS reports, and a few conference papers.

It is interesting to review the trends in the ‘Top Ten Challenges’ for design and test as listed by the 1997 SIA Roadmap [54], which includes:


·        Higher accuracy interconnect and power models for synthesis and system level design

·        Constraint driven interconnect synthesis

·        Architectural, design methods to overcome fatal interconnect performance problems

·        Signal integrity and IC Reliability

·        Automated synthesis and layout of analog and mixed-signal designs.


Notably, those issues are still leading in the 2003 ITRS report [5].  In terms of signal integrity checklist building, there is no greater resource than the ITRS, which produces a report about every two years. The 2003 report lists a plethora of ‘Difficult Challenges’, ‘Grand Challenges’ and the fabled ‘Red Brick Wall’; basically, it is a superset of the above challenges.  The ITRS special report on Interconnect presents many challenges to the progress in semiconductor design, including:


·        Many new materials are being introduced at an unprecedented rate

·        Increase in conductor resistivity as line widths approach electron mean free paths

·        Slower than projected low-k dielectric introduction

·        The challenge of increasing complexity and decreased design rules for SoC.


            Managing the rapid rate of new materials introduction and the simultaneous complexity is given as the overall near-term challenge.  The long-term challenge is that scaling will no longer satisfy performance requirements.  This will require interconnect innovation including optical, RF or vertical integration, along with improvements in design and packaging.

The Ph.D. dissertations of [15] and [16] provide broad-based analysis of signal integrity and low power issues in deep submicron VLSI design.  They are, of course, more oriented toward actual circuit analysis.

This chapter follows closely on the work of [18], which presents the current state of the art and some future trends in parasitics extraction.   The latest trends in extraction of capacitance, resistance and inductance are reviewed, with a relevant digression on the fundamental equations and their utility in terms of complexity.

            The work of [55] is reviewed in Section, Resistance, where concerns of copper/low-k processes are presented: non-linear resistance, selective process bias, dummy metal fill, and process variations.

In [31] is a discussion of noise, crosstalk, signal integrity, power, EMC, inductance, and hot electrons.  In addition to size and speed, new concerns in designs include package complexity, power reduction for mobile applications, test, and manufacturability.

In [56], a very useful presentation of the significant factors to consider in the development of parasitic extraction capabilities is given.  Included are an overview of on-chip interconnects with respect to technology scaling and interconnect structure, methods for modeling the R, L and C components, a part on silicon validation and process-variation handling, and finally a part on the model-order reduction of RLC networks.

In [62], reviews of the general numerical methods in parasitic extraction are presented.  These include finite-difference, finite-element, boundary-element and ‘random walk’ methods.  These are evaluated along with the work of [63] in the review of 2.5D capacitance calculations.


4.2  Survey of Signal Integrity Concerns

            The signal integrity concerns reviewed here may not be all-encompassing, but an attempt has been made to assemble the more lamented effects from the literature.   The primary physical contributors (capacitance, resistance, inductance, diodes) are marched out first.  Following, the major analyses resulting from knowledge of the circuit (with or without parasitics included) are reviewed, including timing, noise, thermal and power analysis.


4.2.1  Physical Basis: Capacitance, Resistance, Inductance, Impedance Theory


            The physical implementation of a design generates an infinite number of relations in terms of capacitance between parts, resistance, inductance and diode junctions.  Some of these may be combined to define the actual intentional devices represented and simulated in the schematic; all others are parasitic.  It should also be noted that the modeled intentional devices usually only abstract the behavior of a rather large set of these first-order components (chunks, usually not mutually exclusive), and usually they assume a perfectly homogenous exterior environment.  It is interesting to note that many device-modeling (TCAD) tools use finite-element methods to develop the R, L, C and D components of a structure and simulate it.  A generic equation may then be curve-fit to the simulated data to produce a compact model; it is thus an abstraction drawn from the physical basis.  It is the part of the LVS tool to know the set and form of polygons in a layout which represent these modeled devices, and to partition them from all the other ‘stuff’ that may be found in the layout.  Whatever is left over is then ‘fair-game’ for the LPE tool to harvest and mill into discrete parasitic components.  But, as has been noted, there are yet an infinite number of relations to consider after the intentional devices have been marked and set aside.  It is the arduous job of the LPE tools (like the TCAD tools) to intelligently recognize these forms, focus on the dominant effects, partition and encapsulate them into devices, and generate a simulatable netlist.  That, then, is the focus of the following sections: the theory of the physical basis of the various types of parasitics, their extraction and representation.


Capacitance Theory and Extraction


In Section, ‘Capacitance Extraction Level Accuracy’, the general concepts of capacitance and its extraction were presented.  This section follows on with some of the analytical methods used in calculating capacitance given various levels of visibility (2D … 3D).

            Interconnect capacitance has always been of some concern in designs, but in DSM, coupling capacitance begins to dominate due to the fact that metal lines have begun to have greater height than width.  As the technologies have scaled down, it was deemed necessary to keep the height relatively large in order to reduce resistance.  Thus, simple plate capacitance is no longer sufficient, and multi-element fields come into play.  It can be seen that the accuracy and speed demands of parasitics-laden calculation have increased at about the same pace as the complexity of designs.  This section will further investigate the capacitance calculation, from the basic equations through the use of advanced methods such as ‘random walk’ in extraction.


General 2D Equations:


Capacitance is defined, with respect to charge and voltage as:

C = Q/V                                                          Eqn. 4.1


            C = capacitance in Farads

            Q = total charge, in Coulombs

            V = Voltage between the conductors, in Volts


            Differentiating both sides with respect to time:


            I = dQ/dt = C(dV/dt)                                                   Eqn. 4.2


            defines the I-V behavior of a capacitor.


            In the use of capacitors in design, typically an assumption is made that two infinite plates exist, and:

C = ε0 εr[A/h]                                                               Eqn. 4.3


C : Capacitance in pF

ε0 : permittivity of free space = 0.089 pF/cm

εr : relative dielectric constant (usually 3.9 – 4.1)

A : Area of planes

h : height of separation between planes
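As a worked example of Eqn. 4.3, using the constants listed above (parallel-plate term only; fringing ignored):

```python
# Eqn. 4.3 evaluated numerically; plate approximation only.

E0 = 0.089    # permittivity of free space, pF/cm (as given above)

def plate_cap_pf(area_cm2, h_cm, er=3.9):
    """C = e0 * er * A / h, in pF."""
    return E0 * er * area_cm2 / h_cm

# A 100um x 100um plate over 1um of oxide: A = 1e-4 cm^2, h = 1e-4 cm
print(plate_cap_pf(1e-4, 1e-4))   # ~0.347 pF
```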


Dracula™ Equations

            Cadence’s Dracula™ uses the basic 2D equations described above for area (plate) and line-to-line collinear coupling: Ctotal = CA + 2Cf.  A fringing effect between different layers is available (i.e., M3 to M1), but must be hand-entered for various chosen spacings.


Diva™ Equations

Cadence’s Diva™ employs a tool called the ‘Coefficient Generator’, which takes as input a SIPPs table similar to that used by Assura™.  The table also specifies the spacing increments at which calculations are to be made for the collinear coupling capacitances and the up and down fringing capacitances.  Thus, what results is a set of point samplings. Diva™ [44] creates a curve-fitting model using the following equation and its generated coding.


C = a0 + a1(1/s) + a2(1/s^2)


cap( psub p1_lpe 0.10502 0.0 0.04664 0.0

          fringe( p1_lpe MLlayer_p1_lpe

                vertical( (-0.99986)/((s+2.4)*(s+2.4)*(s+2.4)) + 0.19416/((s+2.4)*(s+2.4)) + (-0.021885)/(s+2.4) )



Figure  Diva™ Capacitance Polynomial and Generated Code (By Author)
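The curve fit above can be mimicked in a few lines: given point samplings of capacitance versus spacing, solve for a0, a1, a2 of the polynomial by least squares (normal equations). The sample data here are generated from known coefficients so the fit can be checked; a real coefficient generator works from field-solver samples.

```python
# Toy least-squares fit of C = a0 + a1*(1/s) + a2*(1/s^2); data invented.

def fit_cap_poly(spacings, caps):
    rows = [(1.0, 1.0 / s, 1.0 / s ** 2) for s in spacings]   # basis terms
    # Normal equations (A^T A) x = A^T c, solved by Gauss-Jordan
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atc = [sum(r[i] * c for r, c in zip(rows, caps)) for i in range(3)]
    m = [ata[i] + [atc[i]] for i in range(3)]
    for i in range(3):
        p = m[i][i]
        m[i] = [v / p for v in m[i]]
        for k in range(3):
            if k != i:
                f = m[k][i]
                m[k] = [a - f * b for a, b in zip(m[k], m[i])]
    return [m[i][3] for i in range(3)]

s_pts = [0.5, 1.0, 2.0, 4.0, 8.0]                         # spacings
c_pts = [0.01 + 0.20 / s + 0.05 / s ** 2 for s in s_pts]  # "sampled" caps
a0, a1, a2 = fit_cap_poly(s_pts, c_pts)
print(a0, a1, a2)    # recovers ~0.01, ~0.20, ~0.05
```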


General 2.5D Equations

The extraction of parasitics in the era of 2D extractors could rely adequately on the general equations above.  As fringe effects and coupling came into play, further refinements were added.  As can be seen in the figure below, even the most basic 2.5D extractions must deal with quite a number of contributors.  The extraction tools typically build look-up tables for the various patterns below to get the related equations.  Then, each equation is evaluated with the corresponding spacings, heights and widths.  The validity and accuracy of the equations, and of the system in general, should always be suspect.

Figure, 2.5D Capacitance Topologies, from [33],
(Reprinted with permission of Cadence Design Systems Inc.)


A couple of closed-form sanity checks that have been used widely include Chern’s and Sakurai’s equations.  These, as noted above, are used in the Raphael™ tool – which leads to their further use in the RegMan tool and the SIPPs preview spreadsheet.  But, as analyzed by [57], their accuracy has since been surpassed by other methods.


Chern’s Equation:


            In [58], Chern et al. provide a general capacitance formula for three-dimensional crossing lines, assuming the same dielectric and wire thicknesses for all layers.  It is claimed that these models have been compared against a 3D tool with RMS errors of less than 2% in the specified range, and 10% maximum.  As noted, these are used by Raphael™ (Appendix F) to form a baseline analysis.  Upon completion of a Raphael™ regression run on a chosen set of structures, the Chern equations for each may be generated.


        Eqn. 4.4


Line-to-Line Capacitance, One ground plane.


                            Eqn. 4.5


Line-to-Ground Capacitance, One ground plane.


The following is an example of the Raphael™ output for the ‘array above ground-plane’ structure, for the poly1-above-substrate permutation.


Structure: arr_above_gp

Actual Structure Name: POLY,above,substrate


Where:  H=0.99,  T=1,  E=4


C_coupling Model Equation Output:



C_bottom_gp Model Equation Output:



Sakurai et al. Equations:

            In [59], Sakurai derived formulas for two- and three-dimensional parallel lines on a plane.  Also, in [60], Sakurai provides equations for RC distributed interconnects. As with the above, the following is an example for the ‘array above ground-plane’ structure, for the poly1-above-substrate permutation.


Structure: arr_above_gp

Actual Structure Name: POLY,above,substrate


C_coupling  Model Equation Output:



C_bottom_gp  Model Equation Output:



Wong et al. Equations

            There are more advanced techniques for closed-form calculation of capacitance.  In [61], equations are presented which allow for input of wire thickness, dielectric thickness, inter-wire spacing and wire width – all of which can be expected to vary.  They focus on two structures: 1) parallel lines over a plane, and 2) parallel wires between two planes.  It is stated that combinations of the two can cover any given layout.  Like the above equations, theirs are valid in a specific range of the parameters T (wire thickness), H (dielectric thickness), S (wire spacing) and W (wire width).  As an example, Ccouple is modeled by the summation of three rational functions which simulate three flux components, and is then obtained explicitly by least-squares fitting.


                    Eqn. 4.6


            Where εox = 3.9*8.85*10^-14 F/cm.


General Numerical Analysis Methods

            In [62] and [63], the most widely used numerical techniques for two-dimensional static field analysis, as employed by Raphael™, are presented, which include:


1.      Boundary Element Method (BEM)

2.      Finite-Difference Method (FD)

3.      Finite-Element Method (FEM)


The three methods may be applied to resistance, capacitance and inductance analysis, but each has its advantages and disadvantages based on the particular application.  It is noted that the FD and FEM methods are more versatile than BEM and can be applied to a greater variety of applications, but they are generally slower than BEM.  Both FD and FEM result in a large sparse matrix, whereas BEM results in a small, dense matrix. Thus, the FD and FEM methods are better for complex 3D problems and BEM is more suitable for 2D and simple 3D problems.  The Raphael™ suite will be presented further in Section
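The finite-difference approach can be sketched end-to-end in miniature: relax Laplace's equation on a small grid (Jacobi iteration), then integrate the normal potential gradient around the conductor (Gauss' theorem) for its charge, and hence its capacitance. The geometry here – a square conductor at 1 V centered in a grounded 2D box – and the grid size are arbitrary choices for illustration; real extractors are far more elaborate.

```python
# Miniature finite-difference capacitance solve (2D, per unit length).

EPS0 = 8.854e-12     # permittivity of free space, F/m
N = 25               # grid points per side

def solve_laplace(inner_v=1.0, iters=2000):
    """Jacobi relaxation with a square conductor held at inner_v."""
    lo, hi = N // 2 - 3, N // 2 + 3
    v = [[0.0] * N for _ in range(N)]
    for i in range(lo, hi + 1):
        for j in range(lo, hi + 1):
            v[i][j] = inner_v
    for _ in range(iters):
        nv = [row[:] for row in v]
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                if not (lo <= i <= hi and lo <= j <= hi):
                    nv[i][j] = 0.25 * (v[i - 1][j] + v[i + 1][j]
                                       + v[i][j - 1] + v[i][j + 1])
        v = nv
    return v, lo, hi

def capacitance_per_m(inner_v=1.0):
    """Gauss' theorem: sum the normal potential drop one cell outside the
    conductor; the grid spacing cancels in 2D (per-unit-length C)."""
    v, lo, hi = solve_laplace(inner_v)
    flux = 0.0
    for k in range(lo, hi + 1):
        flux += (v[lo][k] - v[lo - 1][k]) + (v[hi][k] - v[hi + 1][k])
        flux += (v[k][lo] - v[k][lo - 1]) + (v[k][hi] - v[k][hi + 1])
    return EPS0 * flux / inner_v

print(capacitance_per_m())   # per-unit-length capacitance, F/m
```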

To calculate the charge on conductors with known voltages, Laplace’s equation is numerically solved for the potential distribution, and the normal component of the potential gradient is then integrated around the conductors to obtain the charges Qi, using Gauss' theorem.  There are various means of finding solutions to Laplace’s equation, including:


·        Finite difference method,

·        Finite element method: considered the most reliable and accurate

o       CPU performance is related to the square of the number of unknowns

o       Becoming more critical as circuit extraction complexity increases

·        Boundary element method: the conductor surface is cut into sections of panels.

o       The electrostatic potential on the panels is computed using Green’s function, which is then used to compute the surface charge. Faster than FEM, and requires less memory. MIT has built signal-integrity tools using this method.

·        Fast multipole and precorrected-FFT methods for 3D capacitance, distributed RC, and inductance extraction have also been developed.

·        Krylov-subspace-based methods for automatically generating AWE-style SPICE macromodels directly from 3D structure descriptions.

·        Random walk method:

o       Does not require a mesh

o       Is not deterministic

o       Geometric database is relatively small

o       Can handle complex ICs with thousands of nodes

o       Can operate in full 3D

o       Promising for parallel processing


Further investigation of these topics is deferred to later work.  This should include reviews of Maxwell's equations, Poisson's equation and Green's function in the field modeling of electromagnetics.


Resistance Theory and Extraction


In the section ‘Interconnect Resistance Extraction Accuracy’, the concerns of resistance extraction in an LPE environment were introduced.  This section follows with a general summary of the basis of resistance calculation and some important issues, including skin effect, contact and via resistance, and selective process bias.

            The physical basis of resistance is often overlooked in layout design in favor of the much more workable equation:


R = (l/w)(ρ/t), where ρ = resistivity                               Eqn. 4.7

   = Rs (l/w)                                                       Eqn. 4.8

Where Rs is the sheet rho, or ‘ohms per square’.  Given the often large variation in resistance, this is usually acceptable, as the full physical model would probably not reduce the error by a significant magnitude.
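The 'ohms per square' calculation of Eqn. 4.8 can be sketched as follows (the wire dimensions and sheet resistance below are illustrative values, not from any particular process):

```python
# Illustration of Eqn. 4.8: R = Rs * (l/w), with Rs = rho/t the sheet resistance.
def wire_resistance(length_um, width_um, sheet_rho_ohm_sq):
    """Resistance of a uniform wire segment from its number of 'squares'."""
    squares = length_um / width_um
    return sheet_rho_ohm_sq * squares

# Example: a 100 um long, 0.5 um wide metal line with Rs = 0.08 ohm/sq
# is 200 squares, i.e. 16 ohms.
r = wire_resistance(100.0, 0.5, 0.08)
```

Note that the same wire length in a wider route contributes fewer squares, which is why widening is the usual first remedy for resistive lines.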

            This is the simplistic version; it does not represent the temperature coefficient variation, Weff width adjustments, or the skin effect.

            The extraction of resistance can be pivotal to analog designs in terms of signal matching on differential lines and noise reduction.  In general, precision thin-film resistors are critical in analog and mixed-signal circuits.  Primary attributes are precise resistance control, excellent matching properties, high voltage linearity, low temperature coefficients (TCR), low 1/f noise and low parasitics, resulting in high Q values.  The resistors predominantly used include Si-substrate, poly-Si or poly-silicide types.  These suffer mainly from poor 1/f noise performance and substrate losses.  Key challenges for interconnect are finding materials with moderate and tunable sheet resistance compatible with normal interconnect materials, and having excellent thickness control and good etch selectivity, as noted in the ITRS’03 Interconnect report.


Skin Effect:

            The skin effect is basically current crowding in the cross-section of a conductor.  The effect can be seen by sub-dividing a strip into three sections vertically [64].  Each sub-section has resistance, partial self-inductance, and partial mutual inductance to the other pieces.  The current in the wire is evenly distributed at DC and crowds to the outside of the wire as frequency increases.  Thus, the skin effect always increases resistance with frequency.
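A useful companion to this description is the classical skin-depth formula (an assumption of this sketch; it is not derived in the text above): the depth at which current density falls to 1/e of its surface value is delta = sqrt(rho / (pi * f * mu)).

```python
import math

# Classical skin depth: delta = sqrt(rho / (pi * f * mu)).
# This formula and the copper resistivity below are standard values,
# assumed here for illustration rather than taken from the thesis.
def skin_depth(rho_ohm_m, freq_hz, mu=4e-7 * math.pi):
    return math.sqrt(rho_ohm_m / (math.pi * freq_hz * mu))

# Copper (rho ~ 1.7e-8 ohm*m) at 1 GHz: skin depth is about 2.1 um,
# comparable to typical wire thickness, so AC resistance rises noticeably.
delta = skin_depth(1.7e-8, 1e9)
```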


Contacts and Vias, Selective Process Bias

            Contacts and vias, previously ignored, now add significantly to interconnect resistance.  As presented in the section and figure above, the fracturing of vias can take many forms, with varying degrees of accuracy.  In a related paper [55], Nagaraj et al. present four key concerns of copper/low-k processes: non-linear resistance, Selective Process Bias (SPB), dummy metal fill, and process variations.  The accurate modeling of parasitic RC elements is significantly affected by these phenomena.  In particular, it is shown that metal sheet resistance is not a constant, but varies as a non-linear function of line width in 130nm copper technologies.  The two primary causes of this are the scattering of electrons off the edges of the copper, and the changes in copper cross-sectional area as a function of line width caused by the damascene process flow.  CMP (chemical mechanical polishing)-induced copper dishing leads to increased resistance.  To remedy this problem, metal slotting and dummy fill are often employed.


RC, RLC Reduction Techniques


            The extraction of RC elements in any moderately large circuit will create a huge netlist, which can subsequently overwhelm or crash the simulator.  To remedy this condition, many forms of RC reduction algorithms have been developed and proposed.  The general concept is simply to merge parallel and series resistors and capacitors so as to minimize the difference between the final circuit and the original.  A simple example is provided in the Assura RCX User’s Guide [33].
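The series/parallel merging idea can be sketched as below. This is only the trivial first step; real reducers such as those referenced above use far more sophisticated, error-bounded algorithms:

```python
# Minimal sketch of series/parallel merging for RC network reduction.
# Illustrative only; production reducers bound the waveform error too.
def series_resistance(resistors):
    return sum(resistors)            # resistors in series add

def parallel_capacitance(caps):
    return sum(caps)                 # capacitors in parallel add

def parallel_resistance(resistors):
    return 1.0 / sum(1.0 / r for r in resistors)

# Merge a chain of three 10-ohm segments and their grounded 1 fF caps
r_total = series_resistance([10.0, 10.0, 10.0])   # 30 ohms
c_total = parallel_capacitance([1e-15] * 3)       # 3 fF
```

Collapsing a distributed ladder this way preserves the total R and C but discards some of the delay distribution, which is exactly the accuracy trade-off RC reduction manages.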


Figure  RC Network Reduction Simplified, [33]
(Reprinted with permission of Cadence Design Systems Inc.)


Of course, as with the levels of complexity in capacitance extraction, there are numerous and various means of RC and RLC network reduction.  The following provides a brief chronological outline of the various developments, as gleaned from the state-of-the-art review in [18].  The list includes the date of introduction:


04/90   AWE: Asymptotic Waveform Evaluation

05/93   SWEC: Stepwise Equivalent Conductance Circuit

05/95   Lanczos Process, Pade Process

11/96   Arnoldi Algorithm

06/97   Split Congruent Transformations

            (Preserves passivity that guarantees stability)

11/97   PRIMA: Passive Reduced-Order Interconnect Macromodeling Algorithm

Moment Matching, Model Order Reduction

06/98   Multipoint Algorithm for passive reduced order models

            RICE: Complex Frequency Hopping


            A more substantive review of these methods will have to be deferred.


Inductance Theory and Extraction


Following the introduction of inductance effects in the section ‘Interconnect Inductance Extraction Accuracy’, a brief outline of basic principles is presented here.  Of primary concern is that inductance is meaningless without a loop, or return path.  The determination of that path in a circuit can be an intractable problem, and it is generally unknown before parasitic extraction.  The method of partial inductance using the Partial Element Equivalent Circuit (PEEC), introduced by A. E. Ruehli in [65], works around this problem.  The partial inductance method assumes the return path is located an infinite distance from the wire.  In the LPE (RCX-PL) tools, components of self-inductance and mutual inductance must be considered, along with the frequency of operation.


Inductance Theory

The behavior of an ideal inductor in the time domain is defined as:


V = L (dI/dt)                                                 Eqn. 4.9


            With L the inductance, I the current through the inductor, and V the voltage across it.

            Various means exist to calculate inductance, the simplest being for a cylindrical wire above a plane:


L = (μ0/2π) cosh^-1(2h/d)                              Eqn. 4.10

            Where μ0 = permeability (1.257x10^-8 H/cm), h = height above the plane, d = diameter of the cylindrical wire.

            Similarly, a metal trace above a plane may be defined as:


                                        Eqn. 4.11

            Where w = width of the strip, h = height above the plane.


            The signal integrity concerns of an inductive line include not only the mutual inductance with neighboring lines, but also the self-inductance and ringing effects.

            When a waveform makes transitions over time intervals smaller than the time of flight of a wire trace, the transmission-line behavior of the wire appears. From Khatri et al. [66], the velocity of propagation u is:


u = c/√k                                                   Eqn. 4.12


            Where c is the speed of light in free space and k is the dielectric constant of the enclosing material.  Thus, for a process with k = 2.0, u = 2.121x10^8 m/s, and a line of 10 millimeters length would have a 50 ps time of flight.  Any line longer than that will exhibit transmission line effects.  The velocity of propagation for a transmission line is:


u = 1/√(LC)                                                Eqn. 4.13


            Where L and C are the inductance and capacitance of the line per unit length.  Given a value of C = 51.9 aF/um and u from above, the inductance for the above example calculates to L = 4.283x10^-4 nH/um of wire.
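The numbers quoted above can be checked directly from u = c/√k and u = 1/√(LC):

```python
import math

# Numeric check of the propagation-velocity and inductance values in the text.
c = 2.998e8                    # speed of light in free space, m/s
k = 2.0                        # dielectric constant of the enclosing material
u = c / math.sqrt(k)           # ~2.12e8 m/s, matching the quoted 2.121e8 m/s

tof = 10e-3 / u                # time of flight of a 10 mm line: ~47 ps (~50 ps)

C = 51.9e-18 / 1e-6            # 51.9 aF/um expressed in F/m
L = 1.0 / (u**2 * C)           # from u = 1/sqrt(LC): ~4.29e-7 H/m, i.e. ~4.3e-4 nH/um
```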


            This section deserves considerably more development, but will be limited to this basis for the time being.


Inductance Extraction

In the Assura™ RCX-HF paper [67], Williams et al. present an Assura™ inductance extraction methodology.  Extraction is done in four stages. First, the wire geometry is fractured into small rectangular shapes with maximum lengths equal to one tenth of the wavelength in the case of the slow-wave propagation mode. Next, the fractured geometries are sent to a wire processor to compute the resistance with skin and proximity effects, and the partial inductance (self and mutual) including coupling capacitance between wires and substrate. Then, substrate parasitics are extracted and combined with the wire models. Finally, the complete network is reduced to create a compact netlist in Spice format and syntax.  These stages are graphically represented in the same Cadence paper as:


Figure, Assura™ RCX-HF Flow, from [67]
(Reprinted with permission of Cadence Design Systems Inc.)


            The ‘Substrate Extraction’ step is further described below in the section on Substrate Noise.  The Assura™ RCX-PL extraction tool follows a similar method and flow, except that it uses a method termed ‘Return Limited Inductance Extraction’ to determine the return path, rather than extracting the entire substrate.  That flow is also included here for its relevance to the overall LPE development process:



Figure, Assura™ RCX-PL Extraction Flow
(Reprinted with permission of Cadence Design Systems Inc.)


            Here, it need only be noted that the LPE stage (Assura™ RCX) precedes the self-inductance (L) and mutual inductance (K) extraction stages.


Impedance Theory


Impedance is defined by: Z = V/I.  All forms of signal integrity problems can be defined in terms of impedance.  From [53],


“The manner in which these fundamental quantities, voltage and current, interact with the impedance of the interconnects determines all signal-integrity effects. As a signal propagates down an interconnect, it is constantly probing the impedance of the interconnect and reacting based on the answer.”  -  Eric Bogatin


            As would be expected, the impedance of a purely resistive circuit is Z = R.  The reader is directed to the above resource for an entire volume on the benefits of impedance analysis in signal integrity.


Diode Theory and Extraction


            The presentation of diode theory and behavior usually precedes that of other devices in the textbooks, but in this review there is really not much to say about diodes.  Of course, nearly all junctions create some form of diode (P-N), and thus they likewise exist everywhere.  In the section ‘Device Parasitic Diodes and Intrinsic Parameters’, some of the device-level forms of diodes, including the dominant Nwell–Psub diode, were presented.  Furthermore, every junction produces some degree of leakage current (depending on the biasing of the junction) and a built-in capacitance.  Since the parasitic capacitance extraction tool does not form capacitors for junctions, their extraction (usually by the LVS tool) is particularly important to correctly model this junction leakage and capacitance.  The LVS tool usually recognizes layer overlaps as the definition of diodes, but they can just as well be the coincident edges of laterally abutting layers.  Either way, the LVS tool will recognize and define a diode of a certain type, area and perimeter.  It is up to the device model to correctly define the diode leakage and capacitance components.  The model must also account for the side-wall effects (and must know the surface area of that sidewall).


4.2.2        Timing Analysis


Many techniques have been employed to reduce interconnect delay: interconnect topology optimization, device sizing, wire sizing, buffer insertion, and simultaneous device and interconnect optimization.

            The clock distribution network must drive a large number of devices; therefore its load is huge.  The insertion of buffers is needed to keep clock edges sharp, and to cause the signal to arrive at all registers at approximately the same time.  The difference in arrival times is called clock ‘skew’.


Static Timing


            Static timing is a first-order analysis of a circuit’s logic path delays.  The delays through each logic element in a synchronous path (between successive flip-flops) are summed, and delay times are calculated for each possible path.  The worst-case delay determines the critical path, and thus the maximum operating frequency of the chip.

            Since static timing is ignorant of the input vectors to a circuit, the worst-case delay may never actually occur.

            The propagation delay tp through a gate, an inverter for example, is considered the time between the 50% marks of the input and output waveforms, as shown below:


Figure, Transition Delay Calculation.
Derived From [68] Jan Rabaey


The delays for propagation from low to high, tpLH, and high to low, tpHL, are indicated above as the response time of the gate. The overall delay is defined as the average of both [68],


tp = (tpLH + tpHL)/2                             Eqn. 4.14


The rise and fall times, tr and tf, are defined between the 10% and 90% points of the waveforms.  The propagation delay can be measured by means of a ring oscillator.  The period will be T = 2 * tp * N, where N is the number of inverters in the ring.
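The ring-oscillator relation T = 2·tp·N can be inverted to characterize a gate (the stage count and period below are illustrative values):

```python
# Extract per-stage propagation delay from a ring-oscillator measurement,
# using T = 2 * tp * N from the text.
def tp_from_ring(period_s, n_stages):
    return period_s / (2 * n_stages)

# e.g., a 31-stage ring oscillating with a 3.1 ns period -> tp = 50 ps per stage
tp = tp_from_ring(3.1e-9, 31)
```

An odd stage count is required so the ring actually oscillates; the factor of 2 accounts for the signal traversing the ring twice (once inverted, once restored) per period.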


The RC delay can be calculated by the exponential function:


V(t) = V0(1 - e^(-t/τ)), where τ = RC.                         Eqn. 4.15


            The time to reach the 50% point is t = ln(2)·τ, and the 10%-to-90% transition takes t = ln(9)·τ.


RC Delay

            From [68] again, the time constant of the RC delay of a wire modeled as an infinite number of distributed R and C elements is given as:


τ = RC/2 = rcL²/2                                Eqn. 4.16


            This equation basically states that the delay is quadratic in the length (L) of the line.  A typical delay, tp (V: 0 -> 50%), calculates as T = 0.38RC.  For tr (10% -> 90%), T = 0.9RC.


Elmore Delay

            The simplest form of reduction for an N-stage RC network is given by Elmore [69], for the following circuit:

Figure Elmore Delay,
(Derived from [68] Jan Rabaey )


            The closed-form equation for the first-order time-constant for dropping a node voltage at ‘i’ from Vdd to 0.5Vdd is given as:


τDi = Σk Ck · Rik                             Eqn. 4.17


            Thus, the Elmore delay is simply a sum of RC products along the path between the two points of concern.
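For the common special case of a single RC ladder (a chain, with no branching), the Elmore sum weights each node capacitance by the total resistance between the source and that node:

```python
# Elmore delay for a simple RC ladder: each node's grounded capacitance
# is weighted by the resistance on the path from the source to that node.
def elmore_delay_chain(rs, cs):
    """rs[i]: series resistance into node i; cs[i]: grounded cap at node i."""
    delay = 0.0
    r_path = 0.0
    for r, c in zip(rs, cs):
        r_path += r          # resistance shared between source and node i
        delay += r_path * c  # contribution C_i * R_path(i)
    return delay

# Three identical segments of 10 ohms / 1 fF:
# tau = 10*1f + 20*1f + 30*1f = 60 fs
tau = elmore_delay_chain([10.0] * 3, [1e-15] * 3)
```

The quadratic growth with wire length discussed above falls out directly: doubling the number of identical segments roughly quadruples the sum.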


Logical Effort

In [70], the delay model of Logical Effort is developed, which has served as a basis for a time-constant analysis developed by Carver Mead and others.  This is useful for characterizing relative delays without calculating absolute delays.

Logical effort g (where τ = Rinv·Cinv, the input resistance and capacitance of a reference inverter) is defined as g = RC/τ.  Electrical effort h depends on the input and load capacitance of the cell, such that h = Cout/Cin.  Parasitic delay p depends on the intrinsic capacitance of the cell, such that p = RCp/τ.  The overall delay = effort delay + parasitic delay + nonideal delay, where effort delay = gh.  Thus, a simple formula can be evaluated given the inputs Rinv, Cinv, R, C, and Cout/Cin.
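The delay formula above reduces to d = g·h + p in units of τ. A minimal sketch, using the common textbook values for a 2-input NAND (g ≈ 4/3, p ≈ 2), which are illustrative rather than taken from any characterized library:

```python
# Delay in units of tau via the logical-effort model: d = g*h + p.
# Gate parameters below are standard textbook approximations, not
# values from a real library characterization.
def le_delay(g, h, p):
    return g * h + p     # effort delay plus parasitic delay

# A 2-input NAND (g ~ 4/3, p ~ 2) driving an electrical effort h = 3:
# d = (4/3)*3 + 2 = 6 tau
d = le_delay(4.0 / 3.0, 3.0, 2.0)
```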

            Overall, this is just another form of static timing which is enhanced by the inputs of accurate LPE.


Dynamic Timing


            In dynamic timing conditions, crosstalk noise from an aggressor may either speed up or slow down a victim, and may induce glitches.  Parasitic extraction can provide varying levels of accuracy on the coupling between nodes, but glitches can only be determined through simulation of all possible circuit states, through boundary scan methods using automatic test pattern generation (ATPG) and built-in self-test (BIST).  This section delves a bit into the dynamic timing checks performed.


Setup and Hold Checks

            A ‘setup’ constraint specifies how much time is necessary for  data to be available at the input of a sequential device before the clock edge that captures the data into the device. The constraint defines a maximum delay on the data path relative to the clock path.


            A ‘hold’ constraint specifies how much time is necessary for data to be stable at the input of a sequential device after the clock edge that captures the data in the device. This constraint enforces a minimum delay on the data path relative to the clock path.


            An example figure from the PrimeTime reference manual [80] follows:


Figure, Setup and Hold Conditions [80] (WPO)


            Here, the upper figure shows the signal route and the clock route.  The lower figure indicates the clock edges which begin the setup phase and capture, and the corresponding hold capture edge point.


4.2.3  Noise Analysis


            Two types of noise corrupt signals in integrated circuits: device noise and environmental noise.  The environmental noise originates from the supply lines or the substrate.  The modes of analysis for analog and digital noise are considerably distinct, but the effects of interconnect parasitic noise are generally the same.


            Noise in digital circuits is typically measured against the ‘noise margins’ of the standard inverter.  Given high and low supply levels VH and VL, a logic high requires the driver to produce at least a minimum output voltage VOH (with VOH < VH).  To recognize a logic high, the receiver must accept any voltage greater than its input-high threshold VIH, where VIH < VOH; the difference VOH - VIH constitutes the high-side noise margin [64].


Crosstalk Noise


Crosstalk noise is the change in voltage waveform of a victim net due to signal activity in neighboring nets which are capacitively or inductively coupled to it.  There are two classes of crosstalk noise: functional and delay noise. 


Functional Noise

Functional noise occurs when noise is induced on a signal which is being held at a state by a driver.  The noise pulse may propagate a state change in downstream dynamic logic or latches, which is referred to as a glitch.  Such an erroneous state (glitch) can result in a functional failure. 
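A first-order feel for such a glitch can be had from a simple capacitive-divider estimate, Vglitch ≈ Vdd·Cc/(Cc + Cg). This model is an assumption of this sketch, valid only for a weakly held victim; real noise analyzers also account for driver holding resistance and aggressor waveform shape:

```python
# First-order charge-sharing estimate of a crosstalk glitch on a weakly
# held victim net: a capacitive divider between the coupling cap Cc and
# the victim's grounded cap Cg. Illustrative sketch only.
def glitch_estimate(vdd, c_couple, c_ground):
    return vdd * c_couple / (c_couple + c_ground)

# 20 fF of coupling against 80 fF of grounded cap on a 1.2 V supply
# injects roughly a 0.24 V glitch.
v = glitch_estimate(1.2, 20e-15, 80e-15)
```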


Delay Noise

Delay noise refers to the noise that occurs when two signals switch simultaneously.  Depending on the directions of these transitions, signals on either of the nets may either be boosted or dampened and slowed.


Crosstalk noise has become a critical issue in DSM designs due to several conspiring factors.  Wire and via resistances have increased due to narrowing wires, increasing route densities and relative lengths.  The narrowing of wires has also driven wires to become relatively taller to reduce resistance, which tends to increase the sidewall coupling capacitance between wires as a ratio of total capacitance.  Also, the use of aggressive and less noise-immune circuitry such as dynamic (domino) logic has increased for performance reasons.  Shortened channel lengths have resulted in faster but lower threshold voltage devices; this, combined with lower supply voltages, has resulted in reduced noise margins.  Finally, faster slew rates result in increased injected noise, and higher clock frequencies result in lowered tolerance to delay noise, i.e., setup and hold times shrink.


The challenges to consider in LPE are the number of victim-aggressor pairs and the amount of distributed parasitics to be extracted.  Some remedies [81] have included noise analysis and repair techniques which employ electrical, logical and temporal isolation to reduce the problem space to the significant and realizable cases.  Also, reduced-order modeling techniques are applied to reduce the network complexity and speed up simulation.  Other efforts have been made to address crosstalk noise earlier in the design flow, during routing and post-routing.  These include routing and interconnect optimization by wire spacing, wire widening, limiting wire coupling length, buffer insertion and gate sizing.


Power Supply Collapse

In [32] it is noted that coupling noise also arises from ‘power supply collapse’, which is due to the simultaneous switching of numerous gates.  A remedy often pursued is the use of ‘decoupling capacitors’ placed around the power supply rails.


Device Noise

            The parasitic elements AD, AS, PD, PS and NRD, NRS are typically extracted by LPE/LVS tools.  These elements, of course, considerably affect the capacitance and noise factors of a device.

Thermal Noise

            Thermal noise is considered in terms of resistors and MOSFETs in [82].  Basically, the thermal noise of the source and drain resistances is modeled by the spectral densities:

SRD = 4kT/RD, and SRS = 4kT/RS                       Eqn. 4.18

            Where k is Boltzmann’s constant, T is temperature, and RD and RS are the parasitic drain and source resistances.
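To put numbers on the 4kT scale: the standard Johnson-noise expression for the RMS voltage noise of a resistor over a bandwidth B is sqrt(4·k·T·R·B) (this voltage form is an assumption here, consistent with the 4kT densities above):

```python
# RMS thermal (Johnson) noise voltage of a resistor over a bandwidth:
# v_rms = sqrt(4 * k * T * R * B). Standard formula, used for scale.
k_B = 1.380649e-23      # Boltzmann's constant, J/K

def thermal_noise_vrms(r_ohms, temp_k, bandwidth_hz):
    return (4 * k_B * temp_k * r_ohms * bandwidth_hz) ** 0.5

# A 1 kohm parasitic drain resistance over 1 MHz at 300 K: ~4 uV rms,
# already significant against millivolt-level analog signals.
v_n = thermal_noise_vrms(1e3, 300.0, 1e6)
```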

White (Shot) and Flicker Noise

            The other noise current generators are modeled as current sources from drain to source.  These are characterized in the saturation region by spectral densities of:

                                  Eqn. 4.19


                             Eqn. 4.20

where Kf and Af are definable parameters, gm is the small-signal transconductance at the Q-point, IDQ is the quiescent drain current, Leff is the effective channel length and f is the frequency in Hertz.  Thus, the overall noise spectral density is Sw + Sf.


Substrate Noise


            Substrate noise is generally created by switching devices dumping current into the substrate. When a large number of devices switch simultaneously, an effect called ‘ground-bounce’ is created, wherein the ground may be bounced to some potential due to the finite impedance of silicon.  To evaluate the effect, LPE tools are used to extract the device ports into the substrate.  Given the port locations, and a definition of the doping profiles for the process, a substrate-analysis tool will extract a 3D RC network of the substrate.  The simulator then can estimate current noise flow from the digital section of a mixed-signal design, into the more sensitive analog section – as depicted in the figure below:


Figure, Substrate Current Noise Flow, From [83], Singh
(Reprinted with permission of Author)


            The following figure presents a representation of the network extracted by Assura™ RCX-PL, as reported in Williams et al. [67].


Figure, Equivalent Circuit Model for Interconnect Coupled to Substrate [67] (Reprinted with permission of Cadence Design Systems Inc.)


            In this image, two parallel interconnects, coupled to each other and to the substrate network, are represented.  The ports Vdd and Vss are shown for connections to a well (Cw) and to Psub.  As would be expected, the process of substrate RC network extraction creates a huge RC network; the follow-on step is thus to run the network through an RC-reduction tool.

            The effects of substrate coupling may also induce the ‘Latchup’ effect by de-biasing the substrate.  Also, dynamic logic and other circuits using pre-charge techniques are sensitive to noise. 

One method for addressing substrate coupling noise (SCN) has been presented by Liu et al. [84].  They present an active method to reduce SCN by sampling the noise at the receiver end and pumping it through the input stage of a negative feedback loop.  It is amplified with reverse phase and re-injected into the substrate, thus canceling out up to 83% of the original noise.  The presented test circuit uses inverter rings with intermediate parasitic couplings to the substrate.


4.2.4  Thermal analysis


In [85], thermal effects are presented as an inherent aspect of electrical power distribution and signal transmission through the interconnects in VLSI circuits, due to self-heating caused by the flow of current. Thermal effects impact interconnect design and electromigration reliability. For deep-submicron technologies, thermal effects are noted to be increasing due to aggressive interconnect scaling and the introduction of new dielectric materials with poor thermal properties. Furthermore, thermally accelerated failures in interconnects under high-current short-pulse stress conditions, such as electrostatic discharge, have become a reliability concern.


Negative Bias Temperature Instability (NBTI)


The effect termed NBTI is a shift of the threshold voltage Vth of p-channel (hole-channel) metal–oxide–semiconductor field-effect transistors with ultra-thin gate dielectric layers under negative bias temperature stress.

In [85a], a degradation model is developed that accounts for the generation of bulk oxide defects, created by the tunneling of electrons or holes through the gate dielectric layer during electrical stress. The model predicts that Vth shifts are mainly due to the tunneling of holes at low gate bias, usually below 1.5 V, and that electrons are mainly responsible for these shifts at higher |VG|. The result is that device lifetime at operating voltage, based on Vth shifts, cannot be determined from measurements performed at high gate bias. The impact of nitrogen incorporated at the Si/dielectric interface on Vth shifts is also investigated. The acceleration of device degradation as the amount of nitrogen increases has been associated with the increase in local ‘interfacial strain’, which is said to be induced by the increase in bonding constraints, as well as with the increase in the density of Si–N–Si strained bonds that act as trapping centers for hydrogen species released during the electrical stress.

            In summary, NBTI can cause the threshold voltage in PMOS FETs to shift by as much as 50 to 100 millivolts over a FET's lifetime, a range that most designers do not consider in their design sweeps.


4.2.5  Power Integrity


            Power dissipation in CMOS is due to several causes: Dynamic power loss due to switching current, short-circuit currents when both n-channel and p-channel devices are semi-on, and static power dissipation due to leakage current and subthreshold current.


            Primary concerns are leakage current and IR drop.  Leakage current manifests itself mainly through two anomalies which are categorized into dynamic and static forms.

Figure 4.2.5a, Total Chip Power Trend for SOC-LP PDA Applications

From  ITRS’03 pg. 7, System Drivers (RPO)


            The above figure depicts the power consumption trends for PDA applications as projected out to about 2018.  As would be expected with an exponential increase in device count and operating frequency, the dynamic power likewise increases exponentially.


Dynamic Power


As presented in [70], dynamic power is primarily due to signal switching (also called simultaneous switching noise), or the charging and discharging of capacitances in the signal path.  When the p-channel transistor in an inverter is charging a capacitor C at a frequency f, the current through the transistor is C(dV/dt). The power dissipation is then CV(dV/dt) for half of the period of the input, t = 1/(2f).  The power dissipated in the p-channel transistor is then:


Pp = (1/2)·C·Vdd²·f                      Eqn. 4.21


            When the n-channel transistor discharges the capacitor, the power dissipation is equal, making the total power dissipation:


P = C·Vdd²·f                                                       Eqn. 4.22


            From the above equation, it can be seen that power dissipation increases linearly with frequency.  Similarly, short-circuit current increases with frequency, but is only about 20% of the total power loss.  The short-circuit power loss in an inverter can be represented by:


Psc = (β/12)·(Vdd - 2Vt)³·f·trf                         Eqn. 4.23


where it is assumed that the p- and n-channel devices are sized such that β = μCox(W/L) is the same for both, that the magnitudes of the threshold voltages |Vt| are the same, and trf is the rise and fall time of the input signal.
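The dominant switching term, P = C·Vdd²·f, is easy to evaluate; the sketch below adds an activity factor alpha as a common refinement (the alpha term is an assumption here, not part of the equation in the text):

```python
# Dynamic switching power, P = alpha * C * Vdd^2 * f.
# alpha (fraction of capacitance actually switching each cycle) is a
# common refinement assumed here; alpha = 1 recovers P = C*V^2*f.
def dynamic_power(c_farads, vdd, freq_hz, alpha=1.0):
    return alpha * c_farads * vdd**2 * freq_hz

# 10 pF of switched capacitance at 1.2 V and 1 GHz: 14.4 mW
p = dynamic_power(10e-12, 1.2, 1e9)
```

The quadratic dependence on Vdd is why supply scaling has been the most effective single lever on dynamic power.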

            From [68], the peak power and average power (P=IV) can be found as:

            Ppeak = Ipeak * Vsupply = max[p(t)]                                            Eqn. 4.24

            Pave = (1/T)·∫[0,T] p(t)dt = (Vdd/T)·∫[0,T] isupply(t)dt,  Vdd constant        Eqn. 4.25

            The Power-Delay Product (PDP) is defined as the energy consumed by a gate per switching event, and is a constant for a given gate and supply.


Static Power


            As presented in [86], static power is produced mainly by leakage due to sub-threshold transistor current, when the gate is not able to fully shut the channel off.  Transistor performance has been increased by reducing the gate oxide thickness (Tox), which for reliability also requires a drop in Vdd and thus in Vt (threshold voltage), allowing a thin conducting channel to remain and leak current.  This is compounded by the shortening of the channel length.

            When the VGS of a MOSFET is less than the threshold voltage Vt, the current conducted is:


ID = I0·e^(VGS/(n·VT))                                 Eqn. 4.26


where I0 is a constant, VT = kT/q is the thermal voltage, and the constant n is normally between 1 and 2.
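The exponential dependence on VGS means a modest threshold drop multiplies leakage dramatically. A minimal sketch, assuming a hypothetical n = 1.3 at room temperature (the values are illustrative, not from any process):

```python
import math

# Subthreshold leakage scaling from I ~ I0 * exp(Vgs / (n * kT/q)):
# the multiplier on leakage when the threshold drops by delta_v.
# n = 1.3 and T = 300 K are illustrative assumptions.
def leakage_factor(delta_v, n=1.3, temp_k=300.0):
    vt_thermal = 1.380649e-23 * temp_k / 1.602177e-19   # kT/q ~ 25.9 mV
    return math.exp(delta_v / (n * vt_thermal))

# An ~80 mV threshold drop gives roughly a 10x leakage increase,
# consistent with the order-of-magnitude jump cited in this section.
factor = leakage_factor(0.080)
```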


            Transistor leakage is also due to the very small leakage current that all reverse-biased diodes have, since the sources and drains of every transistor, as well as the junctions between the Nwell and substrate, create parasitic diodes.  These parasitics are strongly dependent on process and temperature.  The ideal parasitic diode current is defined by the basic diode equation:


I = IS·(e^(qV/kT) - 1)                                     Eqn. 4.27


            Another form of leakage occurs from tunneling from the gate through the dielectric to the channel (a quantum mechanical effect).  This effect has been exacerbated by the thinning of the gate oxide made necessary by lower Vt.  When a device’s Vdd is reduced from 1.5V to 1.2V, the threshold voltage must drop by 80 to 85mV, which causes leakage to increase by an order of magnitude.  Solutions to these problems include lengthening the channel, biasing a well under the channel, or creating multiple-gate devices.  Gate leakage may be reduced by high-K dielectric materials and new gate materials.  Some solutions [87] propose creating devices of differing Vt in different partitions of the chip; thus, low-Vt devices are used in high-performance areas, and high-Vt devices everywhere else.

To relate the impact of leakage current: in [88] it is heralded as the number one concern of sub-90nm design.  It is noted that lower threshold voltages, thinner oxides and shorter effective channel lengths have made the transistor behave more like a ‘sieve’ than a switch.


Power Droop (IR Drop)


            Power droop, or IR drop, is basically V=IR loss due to interconnect resistance.  When the voltage drop (or rise in the ground net) becomes large, gate switching slows which may lead to timing violations.  If severe enough, the IR drop may cause unexpected operation due to reduced noise margins.

            The solution usually requires widening the power supply rails and/or carefully routing the supply grids.  But guaranteeing a strict IR-drop bound in the power grid may require substantially more power-planning runs, and thus affects design productivity.
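The cumulative nature of IR drop can be illustrated with a small sketch of a power rail feeding a row of gates, where each rail segment carries the sum of all downstream tap currents.  The resistance, segment length and currents below are made-up illustrative numbers:

```python
def ir_drop_along_rail(r_per_um, seg_len_um, tap_currents):
    """Cumulative IR drop along a power rail feeding evenly spaced taps.
    Each segment carries the sum of all downstream tap currents."""
    r_seg = r_per_um * seg_len_um
    drops = []
    v = 0.0
    remaining = sum(tap_currents)
    for i_tap in tap_currents:
        v += remaining * r_seg      # V = IR drop across this segment
        drops.append(v)
        remaining -= i_tap          # current peels off at the tap
    return drops

# Ten gates drawing 1 mA each, spaced 100 um apart, on a 0.05 ohm/um rail:
drops = ir_drop_along_rail(0.05, 100.0, [1e-3] * 10)
print(f"droop at far end: {drops[-1]*1e3:.0f} mV")
```

The sketch shows why the gates farthest from the supply pad see the worst droop, and why widening the rail (lowering r_per_um) is the usual remedy.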


4.3  Design Integrity Survey

            Design integrity, in this survey, includes reliability concerns such as electromigration, hot-electron effects and wire self-heat.


4.3.1 Electromigration


            With designs containing hundreds of millions of devices and running in the GHz range, the current densities (current per cross-sectional area) in the power and signal lines are increasing.  The resulting electron ‘wind’ in metal lines leads to ion migration, which in turn leaves voids upstream and bumps (hillocks or whiskers) downstream.  The voids, of course, lead to opens, and the hillocks can cause shorts with neighboring wires.  The rate of atomic transport is proportional to the current density, and is a function of temperature, metal-line grain size, the dimensions of the line and the nature of the current.  The Median Time to Failure (MTF) has been quantified in [7] by a modified Black’s equation:


                                    MTF = (A / J^n)·e^(Q/(k·T)),  with T = Tref + ΔT(IRMS)                                    Eqn. 4.28


where J is the average current density, Tref is the substrate reference temperature, ΔT is the self-heating temperature rise as a function of the RMS current, Q is the activation energy, k is the Boltzmann constant, T is the metal temperature and A is a constant dependent on the physical and microstructural properties of the line.  The power ‘n’ in the equation is usually set to 2, but may vary between 1 and 5 depending on the self-heating due to the RMS current in the line.  It can be seen from the equation that the MTF is a function of both the average and RMS values of current, and that the RMS component determines the increase in self-heating temperature.
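The strong sensitivity of MTF to current density in Eqn. 4.28 can be sketched numerically.  The constants A, n and Q below are illustrative placeholders rather than fitted values:

```python
import math

K_BOLTZMANN = 8.617e-5  # Boltzmann constant, eV/K

def mtf_black(j_avg, t_ref, dt_self_heat, a=1.0, n=2.0, q_act=0.7):
    """Modified Black's equation: MTF = A / J^n * exp(Q / (k*T)),
    with metal temperature T = Tref + deltaT(self-heating).
    a, n and q_act are placeholders; n is usually 2 but may range 1..5."""
    t_metal = t_ref + dt_self_heat
    return (a / j_avg ** n) * math.exp(q_act / (K_BOLTZMANN * t_metal))

# With n = 2, doubling the average current density quarters the MTF,
# even before accounting for the extra self-heating it also causes:
base = mtf_black(j_avg=1.0, t_ref=358.0, dt_self_heat=10.0)
doubled = mtf_black(j_avg=2.0, t_ref=358.0, dt_self_heat=10.0)
print(doubled / base)  # 0.25
```

Adding the self-heating term (a larger ΔT at the doubled RMS current) would reduce the lifetime further, through the exponential temperature factor.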

            Power EM can be reduced by increasing wire widths and adding vias, such that the RMS current density through the wires is kept below the threshold for MTF.

            Another form of failure occurs in signal wires, where the charging and discharging due to signal changes creates Joule heating.  This induces mechanical stress and breakdown of the metal structures and creates stress in the surrounding oxide layers.  This again may cause shorting with neighboring wires – and again is preventable by setting wire width to keep the RMS current densities below the MTF threshold.


4.3.2 Hot-electron Effects


            The hot-electron (or short-channel) effect is described in [8]: when a high voltage is applied across the source and drain of a device, the electric field is high and electrons are accelerated in the channel.  The fastest electrons may damage the oxide and the interface near the drain, inducing threshold shift and mobility change over the life of the part.  As the gate is always positive in an N-channel MOSFET, the shift is always in the same direction.  Thus, over time, the threshold eventually reaches a point where the device no longer operates as the design requires.

            The hot-electron problem has worsened as technologies scale down, because device features have scaled proportionally faster than voltage.  This leaves devices with higher field strengths and thinner gate oxides.


4.3.3 Wire Self-Heat


            Wire self-heat failures are due to frequently varying thermal conditions, as described in [8].  These lead to mechanical failure from stress induced by the difference in the thermal constants of the metal and its surrounding oxide; the wire eventually fails after enough stress cycles.  The problem is exacerbated by the introduction of low-K dielectrics, as they are poorer thermal conductors and mechanically weaker than silicon dioxide.  Also, since the failure occurs only after an extended time of use, its prediction and analysis are confounded.


4.3.4  Process and Lithography technology concerns


            Although not directly ‘Design Integrity’ related, the effects of mask-making and process drift have a large impact on the development of LPE tools.  These effects were introduced above in Section 3.5.5, ‘Yield, Process and Wafer Level Error Injection’.


Low-K dielectrics

            The ITRS’03 reports that reliability and yield issues have plagued progress on low-K dielectrics due to, for example, dual-Damascene copper processing.  Fluorine-doped oxide (k = 3.7) was introduced at the 180nm node, but insulating materials with k = 2.6–3.0 were not widely used at the 130nm node, though they should be available for 90nm.


High-k materials for gates

            Given the lowered threshold voltages and thinning tox fields, a push is being made toward high-k materials for gate oxides.

Conformal dielectrics and CMP

As noted above, conformal dielectrics produce nonrectangular cross-sections.  The effect is to severely complicate parasitic capacitance modeling, due to the requirement for field-solver solutions.  The remedy has been found in Chemical Mechanical Polishing (CMP), which simply smooths the oxide surface between metallization layers.  The negative effects are the requirement for dummy metal fill to support the oxide over voids, and the incidence of resistance variation depending on line width.

Copper wiring

            Copper wiring has been introduced to lower the overall resistance, but has a side-effect termed Selective Process Bias (SPB), described above in Section





This chapter focuses on the evaluation, development, validation and benchmarking of parasitic extraction tools. The preceding chapters have developed the groundwork for considerations in defining and developing LPE capabilities that form the basis of the motivations and constraints of this chapter.

In reflection, LPE tools serve as a bridge between the back-end physical implementation of the design and the simulation in the front-end of the design kit.  In that sense, they are exposed to the complexity and errors introduced throughout the design process.  These errors are partly ‘systemic’ – implying that they are built into the design kit, tools and algorithms – and partly ‘episodic’ – implying that they are specific to the particular design, design methods, equations and data derived from it.  The benchmarking of the LPE tools helps to reduce systemic error, while the exercise of those tools through a flow accommodates reduction of user, or episodic, error by ironing out the wrinkles.  This entails that the LPE solution should be driven through the design flow to evaluate and improve its performance with respect to the aforementioned ‘complexities’.  The previous chapters have attempted to lay the groundwork for the concepts and reasoning necessary to understand the impacts of the various contributors.  As the presentation and ‘flow’ of these concerns has been somewhat distributed (and at the risk of recitation ad nauseam), a summation of key items from Chapters 3 and 4 is assembled here, with pertinence to the subject noted.

The gist of Chapter 3 has been to present all the various factors of the tools, flows, kits and design environments which must ‘synergize’ to produce a valid LPE simulation.  The aim of Chapter 4 has been to elucidate the fundamentals of Signal Integrity: their theory, analysis and inception.  Both chapters overlap across the design process and, from a certain vantage, can be seen as having a focal point in LPE.  In particular, Section 3.5, Design Flow Error Propagations, presents a generalized accounting of the various error causes and contributors that must be considered in the development of LPE, which are re-assembled here.

In summary, a plethora of considerations and requirements exists with respect to the development and validation of LPE tools.  The introductions of Chapter 3 concerning design tools, flows, kits and the propagation of error are intended to prepare the uninitiated for the overall motivation, scope and abounding complexities.  The Signal Integrity review of Chapter 4 provides the theory and design background against which LPE tools should be developed.  This chapter employs that information to guide the development of experiments and the evaluation of their results.


Organization of Chapter 5

The organization of this chapter is summarized as follows.  In introduction, a few selected papers on parasitic extraction theory and methods are reviewed in Section 5.1, and in Section 5.2 an overview of several related projects in extraction validation is presented.  Then, in Section 5.3, the nature of the validation system developed for this project is presented, including the form and function of the process parameters (Section 5.3.1), the coding of the LPE tool (Section 5.3.2), and the rationale and substance of the test structures developed (Section 5.3.3), including capacitance, resistance and inductance checks.  Following the experimental development, Section 5.4 presents the experiments executed and a review of the results, with comparisons across multiple extractor tools and ‘sanity check’ analytical results based on closed-form equations for simple structures.  Next, Section 5.5 presents a number of leading on-chip parasitic modeling experiments, including standard ring-oscillators, the Charge-Based Capacitance Method (CBCM), differential comb-caps and methods for implementing BIST for SI.


5.1  Related Work: Parasitic Extraction Theory and Methods

            This section presents a few papers which are highly related to parasitic extraction validation and benchmarking.  The litmus test for inclusion is that the paper should provide a broad coverage of the Design Signal Integrity Concerns, as presented in Table 1.2a.

            The work of [18] is also highly apropos to this chapter, as it has been to the Signal Integrity basics.  The nature of modern 3-D capacitance extractors is presented as consisting of three major steps:

1.      Technology pre-characterization takes as input a description of the process cross-section (SIPPs), and enumerates tens of thousands of test structures.  It then simulates those structures with a 2D or 3D field solver.  The resulting data is collected and either fitted to empirical formulas or used to build look-up tables based on patterns.  The set of patterns may be reduced by reduction techniques.

2.      Next, the geometric pattern extraction stage simulates all the permutations of each pattern.  If a pattern requires ten parameters to describe it, and there are five sample points per parameter, then there will be 5^10 patterns to simulate.  This can be remedied by a geometric parameter reduction technique which uses shielding to negate or merge many patterns.

3.      The final step is to calculate capacitances from the geometric parameters.  The geometric patterns from the layout are matched to entries in the pattern library.  If an exact match is not found, interpolation is necessitated.
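Steps 2 and 3 above can be sketched briefly: the permutation count explodes combinatorially, and layout geometries that fall between characterized sample points must be interpolated.  The one-dimensional interpolation and the sample values below are illustrative; real pattern libraries interpolate in many dimensions.

```python
from bisect import bisect_left

# Step 2: a pattern with 10 geometric parameters, each sampled at 5
# points, yields 5**10 permutations to simulate:
n_params, n_samples = 10, 5
print(n_samples ** n_params)  # 9765625

# Step 3: interpolate when a geometry falls between sampled points.
def interp_cap(spacing, sampled_spacings, sampled_caps):
    """Linear interpolation of capacitance over one geometric parameter."""
    i = bisect_left(sampled_spacings, spacing)
    if i == 0:
        return sampled_caps[0]
    if i == len(sampled_spacings):
        return sampled_caps[-1]
    s0, s1 = sampled_spacings[i - 1], sampled_spacings[i]
    c0, c1 = sampled_caps[i - 1], sampled_caps[i]
    return c0 + (c1 - c0) * (spacing - s0) / (s1 - s0)

# Made-up table: capacitance (fF/um) vs spacing (um)
cap = interp_cap(0.3, [0.2, 0.4, 0.6], [50.0, 30.0, 22.0])
```

This makes the motivation for the shielding-based parameter reduction concrete: each parameter removed divides the simulation count by the number of its sample points.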

Further presentations in the Kao et al. paper [18] consider the mixture of S.I. tools to integrate, including LVS, DRC, substrate analysis and inductance.

In [91], the authors present a system for developing compact models for parasitic capacitances using field-solver simulations.  Additionally, process variations are added to the compact models using principal component analysis (PCA) and performance response surface models (RSM).  The requirements of a good characterization are summarized from this paper in Section 5.3.3 below.


5.2  Related Work: Parasitic Extraction Tools Benchmarking


            It has been found that nearly all projects dealing in the benchmarking of parasitic extract tools on a given process are kept by the originating companies as ‘company confidential’.  There are a few exceptions, primarily consisting of white-papers that originate from companies selling LPE tools.  A few will be touched upon here.

In [19] an on-chip, interconnect capacitance characterization method with sub-femtofarad resolution is presented.  This is the first introduction of the CBCM method.  In [55] Nagaraj et al. present a system for validating LPE-versus-silicon correlation employing the CBCM technique, with enhancements that allow measurement of 12 to 14 structures (by multiplexing) inside a 20-pad saw-line module.  The CBCM is presented in a later section.

One example of a commercial white-paper is [34], in which the authors first present the complexities and concerns involved (see Table  “Process Technology Scaling Effects”).  Furthermore, the nature of geometric extractors is presented, including rules-based extractors, Boolean extractors, edge-based extractors, feature-based extractors, and Context-based extractors.  The last mode, Context-based, is said to utilize a suite of pre-characterized analytical, or parameterized, models.  The extraction of the layout provides parameters which are passed to the models for capacitance calculations.  This method does not suffer from boundary errors since all boundaries are included, and parameterized.

            There are several directly related parasitic extraction benchmarking papers, all developed in the process of evaluating the TSMC 0.13um process.  The papers may not be available to the general public, but most can be obtained by request.  These include [20] Arcadia, [21] Synopsys, and [22] from Sequence.

            A direct example of a parasitic extraction benchmarking project can be found in [92].  Notably, these authors also employ the CBCM method, and go quite a ways in the analysis of process effects such as SPB and copper interconnect resistance variability.


5.3  Interconnect Extract Benchmarking Experiment Development


This section considers the experiment preparations needed to determine the accuracy and consistency of LPE tools within the context of a particular design environment.  This includes evaluations of the form and function of the process parameters (Section 5.3.1), the coding of the LPE tool (Section 5.3.2), and the rationale and substance of the test structures developed (Section 5.3.3), including capacitance, resistance and inductance checks.

In order to provide the best LPE technology possible, a comparison of available extraction tools is necessitated. This implies a system for benchmarking the extraction tools based on comparison to either direct silicon measurements or a pre-qualified gold-standard field-solver extract tool.  Several LPE tools are considered (Dracula™, Diva™, Assura™), with respect to accuracy, capabilities and usability.  As the schedule did not permit completion of on-chip silicon measurements, the possible directions in that respect are presented.  In lieu of actual measurements, the tool ‘Raphael™’ from Technology Modeling Associates (now part of Synopsys) is employed.  This tool has a built-in regression feature capable of generating hundreds of permutations of test structures based on eleven primary topologies.  Equivalent structures are built in the design environment and fed to the LPE tools.  The results are compared and analyzed in Section 5.4.


5.3.1  Process Parameters Definition


            The process parameters, termed SIPPs, define the topology of the process so that the extraction tool can calculate capacitances between variously oriented polygons on various layers.  This standard has been established by the industry through the Silicon Integration Initiative (Si2) [93], to create a common language and specification that allows accurate modeling of interconnect characteristics, yet is general enough to apply to any process or tool.  In a sense, the SIPPs have the same role as BSIM parameters have in describing transistor parameters.  They serve to shape the equations which take as coefficients the form of the structures being evaluated.

Interconnect Stack Parameterization


The process parameters required by the tools Diva™, Dracula™ and Assura™ have been summarized in Table, the “Standard Interconnect Process Parameters”.  It should be noted that these are the minimal parameters needed to describe the interconnect stack, and serve well for the processes typically in use.  That is, the process is assumed to be locally flat, and where there are differences in height and thickness depending on the existence of lower layers, they can be specified as a separate stack.  The typical interconnect stack profile is depicted in the following figure from the ITRS Report:



Figure, ITRS’03 Cross Section of Hierarchical Scaling (pg 4, Interconnect) (Reprinted with permission of Sematech)


This vertical representation (metal thickness, ILD height) is generally sufficient, given that horizontal (or flat) information is depicted in the actual layout (and of course, vertical is not).  Thus, if all vertical conditions are specified, then any shape topology in the layout can be analytically represented.

The SIPPs table also specifies minimum spacing and widths.  These are used to generate bounds on the extent to which the LPE looks out from any particular structure to find coupling neighbors, and the increments at which to generate equations.  For example, if an M1 layer has minimum spacing of 1.0 um, then it may be specified to set increments at every 0.2um up to 2.0 um, and then every 0.5um up to 10 um out.  The result is a piece-wise linear mapping of the actual spacings to specific points – with the advantage being that polynomial equations can represent the capacitance between traces, and the number of equations is finite (but large), and with the disadvantage that accuracy is lost for any spacings which lie between these points.
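The piece-wise linear mapping described above can be sketched directly from the M1 example in the text (0.2 um increments up to 2.0 um, then 0.5 um increments up to 10 um).  The increments are the ones quoted; the snapping function is an illustrative simplification of what a table-driven extractor does:

```python
def build_sample_points():
    """Characterized spacing points for the M1 example in the text:
    every 0.2 um up to 2.0 um, then every 0.5 um up to 10.0 um."""
    pts = [round(0.2 * i, 2) for i in range(1, 11)]         # 0.2 .. 2.0
    pts += [round(2.0 + 0.5 * i, 2) for i in range(1, 17)]  # 2.5 .. 10.0
    return pts

def snap_spacing(spacing, points):
    """Map an actual spacing onto the nearest characterized point.
    Spacings beyond the last point are not seen at all (None)."""
    if spacing > points[-1]:
        return None  # neighbor lies outside the look-out limit
    return min(points, key=lambda p: abs(p - spacing))

pts = build_sample_points()
print(snap_spacing(0.33, pts))   # 0.4  -> accuracy lost between points
print(snap_spacing(12.0, pts))   # None -> coupling extracted as zero
```

The two printed cases correspond exactly to the two drawbacks discussed: quantization error between sample points, and invisible neighbors beyond the 10 um limit.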

            Another drawback of this PWL method is that any traces lying just outside the upper limit will not be seen at all.  If two signals run parallel for a long distance, their coupling capacitance might be significant, yet be extracted as zero.  A work-around is to develop a special set of SIPPs with ‘far-looking’ specifications.  This special set of rules may be used with just the critical nets selected for extraction and analysis since, if the full layout were extracted, the data and extract time would be enormous.   Results of such ‘far-looking’ process parameters, as compared to a field-solver extract and the near-sighted parameters, are depicted in the figure below.

Figure,  Comparison of Far-C, Nominal, and Field-Solver Extracts
(By Author)


            As mentioned in Section, Capacitance Extraction Level Accuracy,  parasitic extraction requirements are becoming non-linear, or non-pattern-matched, due to non-conformal (non-planar) dielectrics, copper wiring and nonrectangular cross-sections.  Conformal stacks can be represented, but the transitions from one locally planar set to another introduce some rather complex relations.  With sufficient analytical analysis, these can still be modeled with the given SIPPs, but their form leaves much to be desired in accuracy and speed, and it is generally better to resort to a 3D field-solver method.  An example of such a configuration is depicted in Figure above.


            Given that this project is limited to the simplified, planarized process, the choice of test structures will focus more on combinations of layers above, below and to each side of a subject shape.

Process Skew Corners


            Another important factor to consider is the skew in the above process parameters, in order to create ‘parasitic corners’.  That is, given that the device models have various corners built in – MOSFET fast/slow conditions, resistors high/low, capacitor high/low conditions – it only makes sense that the interconnect modeling should be able to reflect these corners similarly.  Ideally, the interconnect model corners should be equivalent to the corners of their brethren device models.  For example, the device models should include a model for metal-4 resistors, with min and max corners, which might be based on 3-sigma variance from the mean.  Thus, the LPE tools should similarly be able to extract min, typical and max corners, with metal-4 interconnect resistors having the same skew as the intentional device models.

            The selection of capacitance/resistance corner combinations is an interesting question.  As each corner created may require a very large chunk of disk space, and simulating over a corner will likewise require a large time investment, it follows that the corners should be minimized to those most representative of process skew, and minimal in number.  In the Table below, it is argued that the X’s win in this respect, in that the O’s represent unlikely, or uncorrelated, corners.  For example, Cmin will occur when the average metal trace width or height is a little smaller than normal, which should also correlate to a larger resistance.  On the other hand, Cmax occurs when the metal is somewhat oversized, correlating to a reduced resistance.
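The correlation argument can be illustrated with a first-order sketch of width skew.  The model and its unit values are illustrative only: resistance scales as 1/(width × thickness), while lateral coupling capacitance grows as the spacing to the neighbor shrinks when the trace widens.

```python
def rc_from_width_skew(delta_w, w_nom=1.0, t_nom=1.0, rho=1.0,
                       c_lat_nom=1.0, s_nom=1.0):
    """First-order skew model (illustrative units): widening a trace by
    delta_w lowers its resistance (R ~ rho/(w*t)) and raises its lateral
    coupling capacitance (C ~ 1/spacing, which shrinks as w grows).
    Only width skew is modeled; height skew behaves similarly."""
    w = w_nom + delta_w
    s = s_nom - delta_w
    r = rho / (w * t_nom)
    c = c_lat_nom / s
    return r, c

r_over, c_over = rc_from_width_skew(+0.1)    # oversized metal
r_under, c_under = rc_from_width_skew(-0.1)  # undersized metal
# Oversize -> Cmax pairs with Rmin; undersize -> Cmin pairs with Rmax.
# So (Cmax, Rmax) and (Cmin, Rmin) are the uncorrelated 'O' corners:
print(c_over > c_under and r_over < r_under)  # True
```

This is the physical basis for selecting only the correlated X corners for routine simulation.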


Table RC Corners Selection


In reality, all X’s and O’s have been implemented in the LPE tool – to afford the designer the flexibility of having the extreme corners available, even though they are uncorrelated.  Designers who do use the extreme corners will likely over-design their circuits, leading to extended design time and possibly increased die size.

Spreadsheet Automation of Technology File Generation


To assist in the generation of ‘technology files’ for various tools, the SIPPs data has been accumulated into a spreadsheet, which is presented in Appendix E.  The data presented is used later in the spreadsheet to calculate and format the input files for ‘Capgen’ and ‘Coeffgen’.  Capgen is the tool used by Assura™ to generate the ‘lookup tables’ for pattern matching various interconnect topologies.  Coeffgen is a similar tool used by Diva™ and Silicon Ensemble™ (one of Cadence’s digital place-and-route tools).  For Diva™, a set of equations is generated for 2.5D pattern matching, and this code is inserted into the Diva™ LVS program.  The spreadsheet eliminates much of the data input error, and is easily dumped to CSV format.  This data is also parsed and used by RegMan (Chapter 6) in the evaluation of the data by pure analytic methods (Chern’s and Sakurai’s equations, Section ).

The Table E.1 in Appendix E displays one ‘state’ of the SIPPs spreadsheet.  The data given is not real, or related to any process, due to intellectual property and non-disclosure constraints.  It does, however, still use the layer thickness, ILD and so forth to calculate the capacitance per unit area, populate the tables etc.

The key to the SIPPs spreadsheet is that it can be dumped to comma-separated values (CSV) format, which is then easily parsed by a script.  The script can then compute estimation equations for various structures, for reference and further use in analysis.
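The parse-and-estimate step can be sketched as follows.  The CSV column names and the layer values below are hypothetical stand-ins (the real SIPPs data is withheld in Appendix E for the same non-disclosure reasons noted above); the estimate shown is the simple parallel-plate capacitance per unit area, C/A = ε0·k / ILD height.

```python
import csv
import io

EPS0 = 8.854e-18  # vacuum permittivity, F/um

# Hypothetical CSV dump of the SIPPs spreadsheet (made-up values):
SIPPS_CSV = """layer,thickness_um,ild_below_um,k_below
M1,0.5,0.8,4.1
M2,0.6,0.9,4.1
"""

def area_cap_per_um2(ild_um, k):
    """Parallel-plate estimate: C per unit area = eps0 * k / ILD height."""
    return EPS0 * k / ild_um

for row in csv.DictReader(io.StringIO(SIPPS_CSV)):
    c = area_cap_per_um2(float(row["ild_below_um"]), float(row["k_below"]))
    print(f"{row['layer']}: {c * 1e18:.2f} aF/um^2")
```

More elaborate estimates (Chern’s and Sakurai’s fringing formulas, used by RegMan) follow the same pattern: parse the CSV once, then evaluate closed-form equations per layer pair.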


5.3.2  Extractor Program Development and Validation


In this section, the LPE tool coding requirements, concerns and validation experiments are presented.  Topics covered are LVS coding prerequisites, conductor layer isolation and marking, prevention of device-parasitic double counting, and consideration of interconnect-to-device-body relationships.

     Extractor Program Development


The coding of LPE requires that the LVS coding is complete and has taken into consideration concerns such as those listed in Table 3.2.3b.  And, as presented in Table 3.5.4a, the results of extraction may include not only interconnect RLC parasitics, but also the intrinsic parasitics of devices and various odd relationships such as interconnect-to-device effects.  Some of those parasitics have been listed in Table 3.2.3c, Device Parasitic Extraction Parameters.  Of primary concern in the extraction of interconnect parasitics is the isolation of actual interconnect metal from device intrinsic metal.  Obvious cases include intentional (or designed) metal and poly capacitors, resistors and inductors.  The interconnect shapes for these devices may simply be removed from the shapes passed on to the LPE tool.  But if that is done, then actual interconnect traces that pass over these devices will not ‘see’ the parasitic capacitances and/or inductances they should.  Thus, the LPE tool must be given specific information about such devices in order to see the intentional device shapes, but not extract them as parasitic devices themselves.

A non-obvious case is that of gate resistance.  An assumption may be made that gate poly should not be seen as interconnect by LPE, and that it need only be a ‘target’ for parasitic coupling by passer-by interconnect.  But this may lead to a design failure if the gate width is large, wherein the resistance seen from the input point to the end tab may be considerable; the effect is then more of a sliding door than a uniformly opening gate.  As it stands, no solution is available for this condition, as there seems to be no feasible way to pass the RC fracturing of the gate to the MOSFET model.


All parasitic extraction tools require some minimal form of the SIPPs presented above.  Some, such as 3D field solvers, may then only require a GDSII input of the layers of concern from a given layout or layout area, and then output parasitic values for those layers only.  But most of the ‘integrated’ LPE tools dovetail off the LVS rulesets in order to discriminate and define the intentional devices in a layout, and define the schematic-equivalent network connectivity of traces.  The Assura™ Capgen™ flow (figure below) depicts this relationship between ‘Converted LVS rules’, ‘Process to LVS Mapping’, ‘Process File’ and the RCX extractor development.  As noted above in Section 3.5.4, Layout Parameter Extraction Error, the development of the LPE tool must consider its fit into a given design environment, and must consider the nature of the process devices it must work around.  That is, consideration must be given to each device as to whether certain parasitics (Table, LPE Device Simulation Parameters) are represented in the device model, in a device sub-circuit schematic, or need to be extracted from the layout.  If parasitics are estimated in the device model, then they must either be totally ignored in the layout extract (to avoid double counting), or there needs to be a switch built into the model to tell it whether to use the estimated value or a value passed in from the circuit netlist.  Similarly, if a device’s parasitics are represented in a sub-circuit schematic, then when that same device is extracted from the layout, its element definition in the resulting netlist must call a device sub-circuit definition which excludes the parasitics that have been extracted explicitly from the layout (again to avoid double counting).  The validation of accounting for intrinsic device parasitics is particularly tricky, given the large number of possible device configurations.  The means employed in this project are presented below in Section


Figure, Capgen Flow, Assura™ Developer’s Guide, pg 217
(Reprinted with permission of Cadence Design Systems Inc.)


As listed in Table, there are a few parameters besides the SIPPs which are used by the LVS/LPE pair to accurately extract the layout.  The interconnect sheet resistances, via and contact resistances, and substrate doping profiles all have their respective levels of importance depending on the nature of the design being extracted.  Furthermore, the interaction between the interconnect parasitics and the intentional devices must be carefully addressed.  For example, a metal trace that runs over the body of a resistor has a capacitance to that body.  But, the resistor body does not itself have a net or terminal name.  Thus, special coding is necessary to describe to the LPE tool how to distribute this capacitance between the terminals of the resistor – and for which layer and device combinations this applies.       


            The considerations presented here are not exhaustive, but are indicative of the attention that must be given in defining the nature of parasitics to be extracted.  Often, the EDA engineer must make assumptions about the accuracy that will be required by the designer – and may miss the mark on some unexpected designs.

Extractor Program Validation Experiments


The final test of the LPE extraction must be in the simulation of the extracted data, if simulation accuracy is the objective.  In actuality, LPE tools have many modes of operation – and each may have their own considerations and errors.  For example, the Assura™ tool has the following options for extraction:


·        All nets in a layout

·        Selected nets

·        Selected Paths (all nets between two nets that cross only gates)

·        All nets except Excluded nets


Furthermore, the tool has various modes of extraction of capacitance and resistance as follows:


·        Capacitances coupled between nets

·        Capacitances lumped to ground

·        Capacitances above some threshold

·        Self-Capacitance (i.e., from a net to itself after resistance fracturing)

·        Resistances on traces fractured at a given max segment length

·        Resistance limited to those above a given threshold

·        Inductances on selected nets


There are many more options, but most of them are rarely used.  Also, these options are internal to the Assura™ program itself, and are not driven by the LVS or LPE rulesets.  Nevertheless, they still need to be tested.  But the testing is relatively straightforward: given a design, try each option and check the results manually.

The validation of correct extraction of devices and device parasitics, along with the interconnect variations, requires a more elaborate, simulation-based experiment.  Basically, the extracted layout simulation is compared, waveform to waveform, against the ideal. This is done with the interconnect parasitics included, removed, and with device parasitics included and removed. The process requires the following steps:


1.      Building a template schematic and layout of all devices seen in the process

2.      Simulating each device as driven by independent representative signals

3.      Creating the layout of the schematic, Extracting that layout

4.      Simulating the extracted layout with the same stimuli as from 2 above

5.      Comparing the resultant waveforms from 2 and 4 above.

Table Simulation Based LPE Validation Experiment
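The waveform comparison of step 5 above can be sketched as follows.  The tolerance, sample values and the assumption that both simulations share the same time points are illustrative; a real flow would interpolate both waveforms onto a common time base first.

```python
def waveforms_match(ideal, extracted, v_tol=0.01):
    """Point-by-point comparison of two sampled waveforms (step 5 above).
    Assumes both were simulated on the same time points; returns True
    when every sample agrees within v_tol volts."""
    if len(ideal) != len(extracted):
        return False
    return all(abs(a - b) <= v_tol for a, b in zip(ideal, extracted))

ideal    = [0.0, 0.4, 1.1, 1.2, 1.2]
stripped = [0.0, 0.4, 1.1, 1.2, 1.2]   # parasitics stripped: should match
with_rc  = [0.0, 0.3, 0.9, 1.15, 1.2]  # parasitic RC slows the edge
print(waveforms_match(ideal, stripped))  # True
print(waveforms_match(ideal, with_rc))   # False
```

The ‘stripped’ case corresponds to the sanity check described below, where removing the extracted parasitics should leave a netlist (and waveform) identical to the ideal schematic.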


            The creation of the schematics is usually done manually, but can be automated across multiple processes.  The comparison simulations of schematic to layout can take many forms – including those where the parasitics are stripped from the extracted layout netlist, thereby leaving a netlist which should be identical to the ideal.  There may be variations on the type of simulation, e.g., AC, DC, Transient, Corners etc., which are necessary to glean the behavior of a particular device or circuit.  Thus, overall, there are quite a few permutations on simulations, and a rather large set of experiments which may be repeated on a regular basis.  These factors have motivated the implementation of a general simulation automation script called ‘SimReg’, which employs Cadence’s Ocean™ scripting system to customize, control and execute simulation jobs.  The employment of this script in regression testing is further presented in Chapter 6 on RegMan below; in particular, Section 6.6, ‘Validation of Extraction Circuits’, delves into the use and results of the script.  The architecture of the script is presented in Section 6.5.3, SimReg Script Architecture.


5.3.3  Layout Test Structures Development


As noted above in Section, the regression testing must include a large battery of representative structures to exercise the extract tool.  These will include multiple interconnect topologies such as parallel lines, crossing lines, arrays etc., with variations on widths and spacings.  The structures may be created on silicon and measured directly, or the measurement may be modeled by a ‘gold-standard’ 3D field solver tool (such as Raphael™ or Random Logic’s Quickcap™) extraction of the structures.  The results serve as a reference for the LPE-tool based extraction of the same structures.

Both the extraction of the large suite of test structures and the comparison of the reference to the extracted data must be automated through some sort of scripting system.  The scripting system used here eventually grew into ‘RegMan’, as it was necessary for it to run LVS, LPE and evaluate results.  A minor extension enables it to run DRC or possibly to manage simulation runs.  The form and function of RegMan are further developed in Chapter 6.  At this point, it is sufficient to know that there is a set of scripts and a process for creating the layout structures, running LVS and LPE on them, parsing the output Spice netlists, and extracting the relevant R’s and C’s from those netlists.
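The netlist post-processing step can be sketched as follows. This is a hedged illustration only: the element syntax assumed here ("Cname node1 node2 value", "Rname node1 node2 value") is generic SPICE, and real LPE output may carry model names and extra parameters needing richer parsing.

```python
# Illustrative sketch of pulling parasitic R and C elements out of an
# extracted SPICE file and accumulating them by node pair.
import re

_ELEM = re.compile(r'^([RC]\S*)\s+(\S+)\s+(\S+)\s+([0-9.eE+-]+[a-zA-Z]*)', re.I)
_SCALE = {'f': 1e-15, 'p': 1e-12, 'n': 1e-9, 'u': 1e-6, 'm': 1e-3, 'k': 1e3}

def spice_value(tok):
    """Convert a SPICE value token such as '1.5f' or '2k' to a float."""
    m = re.match(r'([0-9.eE+-]+)([a-zA-Z]?)', tok)
    return float(m.group(1)) * _SCALE.get(m.group(2).lower(), 1.0)

def extract_rc(lines):
    """Return ({(n1, n2): farads}, {(n1, n2): ohms}) keyed by node pair."""
    caps, ress = {}, {}
    for line in lines:
        m = _ELEM.match(line.strip())
        if not m:
            continue
        name, n1, n2, val = m.groups()
        key = tuple(sorted((n1, n2)))       # node order is irrelevant
        table = caps if name[0].upper() == 'C' else ress
        table[key] = table.get(key, 0.0) + spice_value(val)
    return caps, ress
```

RegMan-style hash tables of this form are what the later comparison steps consume.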

In validating the extraction tools, focus should be given to the four major technology scaling effects listed in Table .  The equivalent interconnect length to gate delay assists in defining the level of accuracy needed in the extraction.  The increasing resistance effect leads to increased emphasis on the accuracy of extract, in particular on the extraction of odd-shaped traces which are not easily transformed to two-terminal resistors.  The increased interconnect height, and the resulting increase in lateral coupling capacitance, implies that lateral capacitances to neighbors above and below will also have a significant effect.


            The choice of structures to use in the extraction validation must simultaneously attempt to provide samples of all shapes likely to be seen, as well as provide a good distribution of fundamental shapes so as to allow for cancellation of error and averaging.  From [91], Doganis et al., the following table of considerations is presented:


1.      The structures should be amenable to common measurement equipment, probe cards and switch matrices.  The measurement should be fast, easy, direct and repeatable.

2.      The structures should be generic such that they may be easily adapted to various fabrication processes.

3.      They should consider the limitations of the particular extract tools, whether they be 2D, 2.5D or 3D.  Thus, it is pointless to draw squiggles for a non-field-solver extract tool.

4.      The structure set should include multiple layer combinations, including metal, poly and diffusions.  The suite should also include complex circuits, such as clock nets and ring oscillators, which may be used for the fine-tuning of interconnect models.  Specific structures for measuring cross-talk and inductance should likewise be included.

5.      Due to the statistical variations of the process, and the statistical nature of the input process parameters, a subset of the test structures should be designed for scribe lines in order to monitor process drifts and provide feedback to tune the interconnect models.


Thus, the basis of the validation of the extract process is a set of representative layout structures which, in general, equate to those most commonly seen on silicon.  The silicon structures will form a minimal set whose data provides the fundamentals, such as plate caps and line-to-line couplings.  The layout test structures may go to considerably more depth, with the baseline set of structures correlating to silicon structures, and the rest validating the consistency of the extraction code against calculated (or gold-standard extractor) results.


In the development of such structures, code may be written to auto-generate all permutations of, for example, 6 layers of interconnect (metal1-metal4, poly1, poly2) plus 6 formations of active/nwell/bulk.  From the number of models created in the Coeffgen and Capgen tables (which are cryptic), it appears that these programs are doing just that internally, except that they are also adding permutations on the possible shape relationships (i.e., line parallel to line, line crossing line, elbow to elbow, etc.).  Naturally, attempting to cover all possible shape relationships leads to an enormous lookup table.  On the other hand, any relationship not found in the table will either be dropped or fudged to fit some similar relationship.  The actual extent and content of the relationships included in the LPE tool tables is not known; thus the regression test suite should include a good number of odd shapes just to get an idea of its coverage.  With that in mind, the following sections list the target structures to be analyzed in this sub-project.


Capacitance Validation Structures and Experiments


            The capacitance structures employed included custom-built arrays to focus on accurately defining the SIPPs (layer thicknesses, ILD heights), and auto-generated suites equivalent to those provided by the gold-standard tool Raphael™.  Also included were structures equivalent to those used by the foundry, as a fair amount of measurement data for said structures is provided in the Process Designer’s Manual.  Furthermore, various ‘odd’ structure tests were included, along with circuit-based tests.  These five categories are covered next.  All capacitance analysis follows from the basic equations set forth in Section , Capacitance Theory.  Where necessary, structure-specific equations are provided here.


            In general, the capacitance validation structures should consider the following conditions.  The 2D effects are sufficient to validate basic SIPPs; the 2.5D structures were sufficient for Diva™ and Dracula™ validation (as that is all they are capable of extracting); and the 3D eight-body (8B) and N-body structures represent possible real-life topology conditions seen in a circuit.  Many of these topologies are covered by the Raphael™ suite, but it is generally an intractable problem to consider all permutations.


1)  2D, 2B PlateL2 to PlateL1 caps

2)  2.5D, 2B PlateL2-Sw down-to Infinite-Plate L1

3)  2.5D, 2B PlateL1-Sw up-to Infinite-Plate L2

4)  2D, 2B Lx to Lx collinear coupling, 2x, 4x, 8x pwl

5)  2D, 2B Lx-to-Lx-1 lateral fringing, 2x, 4x, 8x pwl

6)  3D, 8B (3 Layer) Permutations

a)  Use 5x5 grid per layer, permute on squares inclusion, connectivity

b)  Includes L-shapes, T-shapes, U-shapes, etc.

7)  3D, N-Body (All layers, all separations, all shieldings)

Table  Basic 2D, 2.5D, 3D Capacitance Extraction Structures


Custom-Built SIPPs Validation Arrays

            Given that the LPE tool’s primary basis of extraction is the SIPPs (Table ), their precise determination is of foremost concern.  Likewise, these structures were built early on for the validation of the basic 2D extractors.  For that level, anything more advanced would have been a waste of time, due to the extractor’s accuracy limitations (Section , Capacitance Extraction Level Accuracy).  Thus, the SIPPs validation arrays are limited to the following ‘primitives’:


A.     10x10 um L1 over L2

B.     100x100 um L1 over L2

C.     10x L1min over L2

D.     100x L1min over L2

E.      10x L1min over L2, between two L1 at min, 2x-min, and 5x-min spacing

F.      100x L1min over L2, between two L1 at min, 2x-min, and 5x-min spacing

Table  SIPPs Validation Arrays


            Here, L1 is the principal layer, and L2 is the next interconnect layer below.  The concept is fairly simple.  Structures A & B measure pure plate capacitance to determine dielectric thickness; the 10x variation between A and B allows for the cancellation of most of the fringing and corner effects.  Structures C and D are a single strip used to calculate fringe-down capacitance; here, length is varied 10x to cancel corner and end effects.  Structures E & F measure basic line-to-line coupling capacitance, with the spacings varied from minimum to 2x and 5x of minimum, providing a baseline curve.  An image of the combined structures is presented here (but the structures are actually evaluated separately).
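The A/B cancellation scheme amounts to solving a small linear system. A minimal sketch follows, assuming the simple model C = Ca·area + Cf·perimeter; the function name and all numeric values are illustrative, not measured data:

```python
# Separate area (plate) capacitance from perimeter (fringe/corner)
# capacitance using the two square measurements:
#   C_total = Ca * area + Cf * perimeter
def solve_plate_fringe(c_small, c_large, side_small=10.0, side_large=100.0):
    """Return (Ca in F/um^2, Cf in F/um) from two square-cap measurements."""
    a1, p1 = side_small ** 2, 4 * side_small    # 10x10: area 100, perim 40
    a2, p2 = side_large ** 2, 4 * side_large    # 100x100: area 1e4, perim 400
    det = a1 * p2 - a2 * p1                     # Cramer's rule on the 2x2 system
    ca = (c_small * p2 - c_large * p1) / det
    cf = (a1 * c_large - a2 * c_small) / det
    return ca, cf

# Illustrative self-check: assumed Ca = 50 aF/um^2, Cf = 10 aF/um
c_a = 50e-18 * 100 + 10e-18 * 40       # synthetic 10x10 measurement
c_b = 50e-18 * 10000 + 10e-18 * 400    # synthetic 100x100 measurement
ca, cf = solve_plate_fringe(c_a, c_b)  # recovers ~50 aF/um^2 and ~10 aF/um
```

The C/D strip pair separates length-proportional from end-effect terms in exactly the same way.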


Figure,  SIPPs Validation Arrays (By Author)


            Given an example process with four metals and two poly layers, this suite produces 10 structures per layer, or 60 in total.  It should be noted that the large square structures may lead to inaccuracies due to metal warping; in fact, a 100x100um structure is not typically allowed by the DRCs.  Nevertheless, they are still useful for LPE comparison against manual calculation (given that the SIPPs are provided by the foundry).


Raphael™ Regression Suite

            Given a measured set of SIPPs, it follows to employ a gold-standard field-solver tool against which to run regression tests.  Of course, the accuracy of this stage is largely limited by the accuracy of the input SIPPs.  But if validation of the LPE tool is all that is desired, then any reasonable arbitrary set of values will do, so long as the same values are provided to both the regression tool and the LPE tool.  For this report, this is exactly what was done, as the Foundry-derived data is proprietary.


The Technology Modeling Associates (TMA) program 'Raphael™', as introduced in Section 5.3, is considered an industry gold standard against which to compare LPE results.  This utility enables one to specify the parameters of a process and run regression analysis of a set of microstrip topologies.  There are 11 default topologies available, with each topology permuting over sets of layers employed.  For each permutation, the regression system will calculate the capacitance of each capacitance type (i.e., coupling, fringing, etc.) at various values of spacing, thickness and width.  The Raphael™ suite includes several systems for circuit analysis:


·        RC2: 2D Resistance, Capacitance and Inductance calculation

·        RC2-BEM: 2D field solver by Boundary Element Method  (3-2)

·        RC3-BEM: 3D field solver by Boundary Element Method  (5-1)

·        RI3: 3D Resistance and Inductance with Skin Effect (6-1)

·        RIL: Raphael™ Interconnect Library

Table, Raphael™ Tools


            The Raphael™ regression suite structures are presented in Appendix F.  The tool allows the user to select any set of the topologies, and any set of permutations on valid layers.  For example, the first topology, Array Above Ground-Plane, consists of a central measure line bordered by two lines at spacing S, with widths W.  With a four-metal, two-poly process, there will be n(n-1)/2 = 21 permutations on layer combinations; with i variations on spacings and j variations on widths, this leads to i*j*21 permutations.  Similarly, for two-array structures, the TMA Tutorial states (pg 3-9, [63]) that "it is easy to generate huge numbers of simulations, since the number of combinations for those structures is (n widths in top array) * (n spacings in top array) * (n widths in bottom array) * (n spacings in bottom array)".  If a conductor has three widths and 27 spacings, this leads to 6561 simulations; but if the spacings are reduced to min, typ and max, we are confronted with only 81 simulations.
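The permutation counts quoted above can be checked with a few lines of arithmetic (a sketch, assuming simple Cartesian products of the sweep variables):

```python
# Check the combination counts: layer pairs, full two-array sweep,
# and the reduced min/typ/max sweep.
from math import comb

n = 7                          # layer choices, giving n(n-1)/2 = 21 pairs
layer_pairs = comb(n, 2)       # 21

full_sweep = 3 * 27 * 3 * 27   # widths x spacings in each of two arrays = 6561
reduced = 3 * 3 * 3 * 3        # min/typ/max only = 81

print(layer_pairs, full_sweep, reduced)   # 21 6561 81
```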


            The Raphael™ tool requires the user to input the SIPPs into an Interconnect Technology Format (ITF) file.  As with the SIPPs, the ITF builds a definition of the process profile.  The input parameters include:


·        Basic cross-section parameters: H, W, T, S, K.

·        Conformal Dielectrics (3-4) (G-3)

·        Co-Vertical Conductors (G-3)

·        Cladding

·        Dielectric Air-Gaps

·        Layer Etch

·        Metal Fill

·        Anisotropic Dielectric Materials

Table, Raphael™ Interconnect Technology Format


Given this data, a field solver runs through a set of pre-selected structures to generate parameterized equations, and possibly a report of ‘capacitance per micron’ for each component capacitance of a structure.  It is important to note that the structures assume infinitely long strip-lines, and thus all parameters must be given in terms of C/um.  After the regression runs have been completed, a plot of the results of any particular structure may be generated for visual inspection, such as:



Figure Raphael™ Regression, M4/M2 diff_lyrs_above_gp, By Author



After the equations are produced, the tool may be used to pass these to an LPE tool interface (for Diva™, Dracula™, Vampire™ or Xcalibre™), which generates a set of code that may be inserted into a PV rule deck (of the corresponding type).  Then, given a layout database and the rule decks, the actual LPE tool engines scan the layout and output spice models.  This flow is depicted graphically here:


Figure  Raphael™ Parasitic Extraction Flow  [63], Raphael Users Man. (Reprinted with permission of Synopsys Inc.)


            In this project’s exercise, the LPE rule decks have already been produced by other means.  The layout database consists of many cells, each representing one of the permutations that the Raphael™ tool used to generate its parameterized equations.  The LPE tool is run on each layout cell separately; the resulting spice file is parsed, and each component capacitance type is compared back against the reports (capacitance per micron per component) generated by Raphael™.  It should be noted that the strategy of placing one structure per cell is based upon three things: 1) having isolated structures in a layout precludes having to worry about parasitics existing between structures; 2) a ‘Skill’ script may be written to automatically produce each permutation; and 3) the net names assigned to primary and auxiliary structures may be repeated in each cell, which allows LVS to be run for each layout against just one schematic, and allows the script that post-processes the spice netlists to easily determine the primary and auxiliary parasitic components.

            The Skill code that generates the layouts is not particularly remarkable, but it is notable that it can be written at all, given the complexity of the DFII database and the proprietary nature of the environment.  This is made practical by a ‘generate layout Skill code’ utility provided by Cadence, which generates a file of Skill code for a given layout that, when loaded, will reproduce that layout.  This is done on an initial structure, and then modified to simply run through five nested ‘foreach’ loops on the layers (top and bottom), spacings, widths and lengths of variation.  At the beginning of each loop, a new cell is created with a name representative of the structure’s characteristics, e.g. “AAG_poly1_subs_1_1_100”.  The core loop is illustrated here:


procedure( gen_all()
   prog( (gp layer w s l layerlist wlist slist llist)

      ; Defaults (overridden by the loops below)
      layer = "M1"
      gp = "TEXT"      ; Dummy layer for "none"
      w = 1
      s = 1
      l = 100

      ; Sweep lists: layers, widths, spacings, lengths
      layerlist = '("TEXT" "PO1" "PO2" "M1" "M2" "M3" "M4")
      wlist = '( 1 2 4 )
      slist = '( 1 2 4 )
      llist = '( 100 1000 )

      ; Example of a single cell generation:
      ;arr_above_gp( "PO1" "PO2" 1 1 100 )

      ; Generate all two-layer permutations
      gen_arr_above_gp( layerlist wlist slist llist )
   ) ; prog
) ; procedure gen_all


procedure( gen_arr_above_gp( layerlist wlist slist llist )
   prog( (gp layer w s l)
      while( length( layerlist ) > 1
         gp = car( layerlist )
         layerlist = cdr( layerlist )
         foreach( layer layerlist
            foreach( w wlist
               foreach( s slist
                  foreach( l llist
                     printf( "%s %s %d %d %d\n" gp layer w s l )
                     ; Call the general cell creator
                     arr_above_gp( gp layer w s l )
                  ) ; foreach l
               ) ; foreach s
            ) ; foreach w
         ) ; foreach layer
      ) ; while
   ) ; prog
) ; procedure gen_arr_above_gp
Figure  Skill Code Cell Generation Core


This code, given 6 conducting layers, 2 lengths (100u, 1000u), 3 widths and 3 spacings, will produce 21*2*3*3 = 378 permutations of one layer type above another.  The procedure ‘gen_all’ sets up the variable lists, then calls ‘gen_arr_above_gp’, which loops through the variable lists as mentioned.  Each loop calls ‘arr_above_gp’, which actually generates the layout cell with the given layers, etc.


After the layout structures have been generated, LVS may be run on each cell (by RegMan) to bind the layout traces to actual net names.  Following this, LPE is run on each cell (again by RegMan), with the output SPICE file being given a name equivalent to the cell name, e.g. “AAG_poly1_subs_1_1_100.sp”.  At the end of this process, various sub-procedures in RegMan parse the Raphael™ spice database and the LPE-generated spice files and build internal hash tables.  These tables are compared by layer pairs, parasitic components and principal layers, and reports are generated depicting percentage error.  The results of one such run are presented in Appendix G.
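The comparison step can be sketched as follows. This is a hedged illustration of a RegMan-style report; the key structure and values are assumptions, not the tool's actual data format:

```python
# Given two dictionaries keyed by (layer_pair, component) with capacitance
# values, emit the percentage error of every entry present in both tables.
def percent_error_report(reference, extracted):
    """Yield (key, pct_error) pairs, skipping zero-valued references."""
    for key in sorted(reference):
        if key not in extracted or reference[key] == 0:
            continue
        pct = 100.0 * (extracted[key] - reference[key]) / reference[key]
        yield key, pct

# Illustrative entries only
ref = {(('M4', 'PO1'), 'coupling'): 1.00e-15}
lpe = {(('M4', 'PO1'), 'coupling'): 1.04e-15}
for key, pct in percent_error_report(ref, lpe):
    print(key, round(pct, 2))   # ~4% error for this entry
```

Grouping the keys by layer pair and averaging per group yields the per-combination error figures of the kind tabulated in Appendix G.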


            In summary, the above process and the development of the requisite tools and methods are really the foundation and inspiration of this entire project.  The objective here is to validate that the coding and infrastructure of the LPE tool in question really does behave as expected.  As seen from Appendix G, Table G.1, this is certainly not always the case: the errors of various layer combinations run from 3.9% (M4 over PO1) to 37.77% (PO1 above Sub).  The resolution of these errors, by simple inspection or principal component analysis (PCA), will be the follow-on study.


Foundry Measured Equivalent Structures

            In this particular study, the process investigated has Foundry-provided parasitic measurement data for a defined suite of structures, all of one particular topology.  That topology simply consists of a strip-line (long narrow trace) over ground, with two equally spaced grounded lines to either side.  The values measured are C-top (top plate to neighbors), C-bottom (bottom of strip-line to ground), C-fringe (side-wall capacitance from center line to neighbors), and C-total = C-top + C-bottom + 2*C-fringe.
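The decomposition above is simple arithmetic; a one-line check (all values illustrative, in fF per unit length) keeps the spreadsheet comparisons honest:

```python
# Total capacitance of the foundry strip-line structure, per the
# decomposition C-total = C-top + C-bottom + 2 * C-fringe.
def c_total(c_top, c_bottom, c_fringe):
    return c_top + c_bottom + 2 * c_fringe

# e.g. 0.5 + 1.2 + 2 * 0.3 = 2.3 (illustrative values)
total = c_total(0.5, 1.2, 0.3)
```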

            The structures are replicated, extracted, and compared against the published results via a spreadsheet.  The spreadsheet also performs several comparison ‘hand’ calculations based on simple capacitance equations using Chern’s equations and Sakurai’s equations.  The resulting errors are tabulated at the end of each row.  An example of this manual validation by spreadsheet is presented in Table E.2.


Odd-condition Structures

            As mentioned throughout this thesis, extraction of parasitic capacitances and resistances alone will not lead to a valid simulation-vs-silicon correspondence.  There are quite a few ‘other’ parasitics to extract, and odd conditions to be checked.  A few of those are listed here.


A.         Metal trace over or near resistor body:

B.          Variations on MOSFET source/drain diffusions:

C.         Variations on Nwell-Substrate Diode extractions:

D.         Comb-caps with high Q, low R for lateral capacitance measurement:

E.          Dual Serpentine trace structure for across-die metal-width variation measurement:

F.          Non-linear or rectilinear structures


Circuit Simulation Based Structures

            The circuit-simulation-based structures rely on the old industry-standard ring oscillators with varying loads, and on the fairly new Charge-Based Capacitance Method.


A.     Ring-oscillator over Parasitic Corners

To quickly validate the performance of corners-based extractions, large ring oscillators with varying structures are employed.  The expectation is simply that worst-case corners will be slower and best-case corners faster.  Given ROs whose loads consist primarily of one type of structure repeated, it is easy to relate the delta in frequency to the delta in process corner.  This experiment may be tailored for resistance variance measurement, as with (E.) above.  See Section for details.

B.     Charge-Based Capacitance Method (CBCM):

The CBCM method uses two out-of-phase inverters as current mirrors.  One of the inverters is loaded with the test fixture.  The capacitance can be determined from the frequency analysis.  See Section for details.

C.     Built-in Signal-Integrity Analysis Techniques (BISIT)

A method similar to ATPG is presented in [101], except that a modified differential sense amplifier is used to detect noise above a given threshold.  See Section for details.
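For the CBCM technique in (B.) above, the standard charge-based relation is C = ΔI / (Vdd · f), where ΔI is the difference in average supply current between the loaded and reference inverters. A back-of-envelope sketch, with illustrative values:

```python
# CBCM capacitance estimate: the loaded inverter draws extra average
# current proportional to the test capacitance it charges each cycle.
def cbcm_capacitance(i_loaded, i_ref, vdd, freq):
    """C = (I_loaded - I_reference) / (Vdd * f)."""
    return (i_loaded - i_ref) / (vdd * freq)

# 6 uA of extra supply current at Vdd = 1.8 V, f = 10 MHz
# gives roughly 0.33 pF of load capacitance (illustrative numbers).
c = cbcm_capacitance(10e-6, 4e-6, 1.8, 10e6)
```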


Process Variation and Mask Lithography Effects

            The effects of Across-the-Chip Linewidth Variation (ACLV), as presented in Section , are caused mainly by reticle and proximity effects during mask-making lithography, and by local density effects.  Thus, such effects, including OPC, PSM, and out-diffusion effects creating rounded vertices, should be considered in silicon-based measurements and Process Control Die (PCD) design.  Many of the above structures will vary due to these effects, but they do not directly correlate and extract the causative conditions.  Some possible experiments (albeit not included in this work) to determine the range of such effects might include:


A.      Comparison of capacitances from basic structures built in low-density metal fill areas, as opposed to high-density (30%+) metal fill.

B.     Comparison of capacitance values with and without OPC measures.

C.     Comparison of measurements across multiple lots – essentially to determine variance due to process drift.

D.     Comparison of very long dual-snaked serpentine traces to measure ACLV.


Such experiments, and the development of PCD structures and methods, have generally been the domain of the Process Engineer.  But given the enhanced effects due to shrinking process features, their concern and measurement have seeped into the domain of the EDA engineer, and cross-cut teams have resulted.


     Resistor Validation Structures and Experiments


            As has been noted, the extraction of parasitic resistances is not quite so straightforward as chopping the interconnect traces into segments and assigning them resistive values.  Consideration must be given to Steiner-tree effects, contact and via resistances, handling of bends and other non-rectilinear ‘blobs’, and single-terminal tabs.  As noted in Section , “Interconnect Resistance Extraction Accuracy”, the mode of extraction of interconnect resistors varies, and the possible use of RC reduction tools introduces concerns of accuracy and validity.  The ‘theory’ behind resistors is not particularly complicated, as presented in Section , Resistance S.I., but much consideration must be given to delta-W effects and the handling of odd shapes.

            The summation of resistance on a net has become a major stumbling block for at least one vendor of LPE tools.  Since a net from the schematic may have multiple ports into devices or pins (I/O ports), and that same net may have multiple ‘Steiner tree’ vertices in the layout, it is very unlikely that the graph of the schematic matches that of the layout.  Thus, it is not feasible to serially sum resistances between Steiner vertices and assign total values back to the schematic net segments.


            The following lists the various test structures developed.


A.     Ideal resistor Pcell vs Flattened (interconnect resistor)

B.     Ideal resistors vs Flattened and RC-Reduced traces

C.     Permutations on Interconnect Traces, Serpentine

D.     Contact and Via array Resistance Tests

E.      Serpentine resistors to measure bend-effect

F.      Tapering metal traces (W1 != W2) and general ‘blobs’

G.     Un-terminated tabs and traces (dangling resistors)


Inductance Extract Validation Structures and Experiments


            The testing of inductance extraction is somewhat complicated by the fact that a return path may exist in the interconnect, through devices, or through the substrate.

            The measurement of inductance will depend on a number of conditions, as presented in Section .  Notably, the interconnect length and the speed of signal transition are fundamental to inductance effects.  As noted in Section , it has been demonstrated that there is a range of interconnect lengths for which inductance effects are significant, and others for which they are minimal.


A.     Varying length transmission line Inductance Extracts

B.     Spiral Inductor Tests


Intentional Device Extraction Checks


As noted above, the extraction of intentional devices should include all parameters expressed on the schematic (which is validated by LVS), and may also include device-intrinsic parasitics.  As these intrinsic parasitics may already reside within a device model or sub-circuit definition, care must be taken to avoid double-counting.  Similarly, consideration must be given to the fact that extrinsic interconnect will have capacitive coupling effects to the device’s internal geometries; whether the LPE tool correctly sees these effects needs to be tested.  Experiments to validate interconnect-over-device effects include:


A.           Creation of isolated device-level simulations with two conditions: one device is crossed over by a signal net, while an otherwise identical device structure is not.  Equivalent signals are driven through the devices, and another set of equivalent signals is driven through the overlapping nets.  The structures are laid out, extracted and simulated.  Inspection of the netlists should show the differences in parasitics between the structures, as will the simulations.


B.            Similar to the above experiment, two identical devices may be drawn in the layout.  On one of the devices, the recognition layers that define it to LVS as a device, rather than just layers, are removed (or, equivalently, the LVS code is turned off).  An equivalent interconnect trace is run over the top of both.  The layouts are extracted and evaluated as noted above.


For checking the intrinsic parasitics of intentional devices, a simulation-based method is developed.  As noted in Section , LPE Back-annotated Re-simulation, an Ocean™ script (SimReg) is employed which automates the process of running multiple simulations and comparing the signals between the various runs.  This task would be impractical otherwise, as the experimenter would need to manually run each simulation, and manually find, overlay and calculate the differences between signals.  This script also enables the following types of experiments:


A.     Comparison of ideal schematic simulation vs. LPE intentional devices extract

B.     Comparison of Dracula™/Diva™/Assura™/Other extracts

C.     Cross-simulator comparisons: Spice-simulator1 vs Spice-simulator2


These experiments are further addressed in Section 6.6, which delves into the use and results of the script. The architecture of the script is presented in Section 6.6.2 SimReg Script Architecture.


5.4  Comparison and Benchmarking of Extractor Toolsets

This section presents some experiments executed and a review of the results, with comparisons of results from multiple extractor tools, and a form of ‘sanity check’ analytical analysis based on Chern’s and Sakurai’s equations for simple structures.  Included, in sub-Section 5.4.1, is a review of the ‘usability’ concerns.

As related above, the process of characterizing and validating parasitic extract tools is highly sensitive to the input process parameters, the nature of the test structures, the capabilities of the LPE tool itself, and eventually, to the integrity of the underlying PDK.  The Circuit Design Engineer expects that the tool employed, whether it be in an analog flow or a timing-driven digital flow, will provide a suitable level of accuracy with a reasonable measure of ‘usability’.  The EDA Engineer attempts to satisfy those needs by developing the LPE tool with said concerns in mind, and with a good measure of regression testing to determine the tool’s accuracy.  As noted by Mentor Graphics in Table 3.0a, Requirements of Successful Analysis of Parasitic Effects, the utility of an LPE tool involves quite a bit more than simple extraction accuracy.  For reference in this exercise, these basic metrics are more appropriately itemized in the following table:


I.                    Speed of Extraction for small, average and large designs

II.                 Data extent and Data management capabilities

III.               Accuracy of extract, based on various generic and odd tests

IV.              Versatility of extract tool in modes of extraction

V.                 Usability of extracted data in back-annotated simulation

Table 5.4a: LPE Benchmarking Metrics of Goodness


            In the evaluations below, these metrics are based upon the assumption that sufficient resources exist in terms of compute power (e.g., an LSF farm with many high-end workstations), data storage, network bandwidth and license keys for the tools under test.  Since all tools ‘see’ the same resources, their limitations can effectively be ignored.


5.4.1 Benchmarking Experiments Executed


            The metrics from Table 5.4a will be addressed individually in the following sections, with respect to the available tools: Dracula™, Diva™ and Assura™.  This will not be a perfect experiment, given that not all tools have been developed for a single Process and PDK; thus, fairly equivalent structures are built for each tool in its respective Process.  In the tables below, “Assura™” refers to the 2.5D table-lookup mode of Assura™, and “Assura-FS™” refers to the field-solver capability built into Assura™.


Speed of Extraction Benchmarking


            For equivalence across tools and processes, various layouts consisting of simple ring oscillators driving various loads are employed.  A 31-stage ring oscillator is extracted by four tools: Dracula™, Diva™, Assura™ and Assura-FS™, and the log files are checked to determine run-time.  The above cell is then placed ten times in a higher-level cell and the process repeated.  This has been continued up to a level of 31x10x10x10, providing a fairly good sample of the scalability.  The empirical results of the runs have been omitted here due to concerns of NDA (Non-Disclosure Agreement).  In general, it was found that there existed a constant overhead for setup and extract, with a fairly linear increase in extract time correlating to data size.  It was, however, found that the tools could not handle the largest structure presented.  The ROx10 structure used represents a purely digital design.  In that respect, consideration should be given to the possibility of a hierarchical extraction, wherein the interconnect is evaluated but the contents of cells are ignored as black boxes.


Data Extent Benchmarking


            The structures and results of the above tests serve well for the data size and management benchmarking.  The extracts are run and the difference in disk space before and after is determined.  The results again have been omitted here due to concerns of NDA.  It was, however, found that the data scales up slightly super-linearly with respect to the design size.  This is reasonable, as the interconnect of each instance will have parasitics with all neighboring instances within the radius of the extraction halo.

Although not directly measurable, the concept of ‘scalability’ plays an important role in the selection and development of LPE engines.  In [34], it is pointed out that a full extract for a three-million-transistor design might be accomplished overnight if distributed across five CPUs.  But the design team knows that the next design may have seven million transistors, and require 10 CPUs to maintain the overnight extract schedule.  In this project, the scalability of Assura™ is not directly known, but it is known that Diva™ and Dracula™ are limited.


Accuracy Tests Benchmarking


The accuracy tests for the tools should employ the structures developed and presented above in Section 5.3.3.  This would include capacitance, resistance and inductance measurements.  Unfortunately, not all experiments have been completed across all tools.  The empirical results have been omitted for NDA reasons.  The future objective will be to take a simple structure (e.g., an inverter ring oscillator) and extract it with all the available tools.  Analysis should be done with respect to the aforementioned concerns in Chapter 3, and with respect to the analytical foundations of Chapter 4.

            As expected, all three tools performed rather well on the SIPPs structures, as these are dominated by 2D capacitance effects. From the spreadsheet data (Table E.2) representing Diva™ extract values, Chern’s equation calculations, and actual foundry-provided data, the following chart has been generated.



Figure, Comparison of Diva™ Extract, Chern’s Equation, Process data

(Process data from ‘Virtual’ process definition, Figure by Author)


            This chart represents the coupling capacitance of a poly1 trace surrounded on both sides by equivalently long traces of poly1, at varying spacings between the traces.  It is notable that Chern’s equation, the foundry data and the Diva™ equation (from the rule-deck) track fairly well with each other, while the actual LPE results are consistently lower in value.  The cause of this error has not been isolated, but it is surmised that the fringing-down effect is stealing capacitance, which is not accounted for in the other three measures.

Extract Versatility Features Benchmarking


            The extraction versatility benchmarking is somewhat subjective, as the relative importance of the features is a matter of opinion. Nevertheless, various features are listed here and their representation in each tool noted.






Selected Devices Extract




Selected Layer Extraction




Selected Nets Extraction




Excluded Nets Extraction




Via & Contact Resistance




Field-Solver 3D




Table  Extraction Versatility Features


            The significance of the features listed here lies in the development of a layout, and in the debugging of a design that has failed due to parasitics.  The ability to selectively extract nets affords the layout designer the ability to easily match critical nets (for example, on differential pairs). Similarly, the ability to choose which nets are included or excluded affords the circuit designer a means to systematically track down victim or aggressor nets.  The ability to choose which devices to extract comes in handy in the validation of device-intrinsic parasitics.  The notation X* above indicates that selective device or layer extraction is possible, but only through special switching code added to the LVS rule decks.

Post-Extract Usability Features Scoring


            In this case, post-extract usability refers to the management and analysis of the parasitics data.  This includes the ability to detect parasitics particular to any net, to sort that data, to correlate the data back to the schematic, and to fit the data seamlessly into a simulation environment.  Also, the data should be reported in formats amenable to follow-on S.I. tools for reliability, noise, timing and power analysis.





Assura™ (FS)

Extracted Layout Probing




Direct back-annotation




Built-in RC Reduction








Table  Post-Extract Usability Features


            Overall, it is plain to see that the third-generation tool Assura™ is significantly more versatile in its capabilities.  Not clearly represented here is the fact that the netlist generated by Dracula™ needs considerable work to prepare it for simulation.  For this reason, designers would typically resort to extracting only capacitors, since those could simply be inserted into the original Spice netlist and re-simulated.  The Diva™ tool made significant strides with the creation of an ‘extracted’ layout view, which embeds a schematic of the devices, parasitics and their connectivity within it.  In both Diva™ and Dracula™, code could be built into the rule decks such that the user could select which layers would be considered for capacitance and resistance extraction.  Moreover, the actual parameterized equations that performed the extraction could be visibly inspected, whereas the Assura™ code builds a massive table whose essence is cryptic, as is its likely performance on the extraction of all the permutations of shapes likely to be seen.


5.5  Future On-chip Silicon Verification Experiments

            The structure and method of the silicon test structures is the most critical and complex part of true silicon-to-simulation validation.  An understanding of the target tools and the design requirements must be coupled with the device modeling engineer's knowledge of characterization capabilities and processes.  This stage requires several rounds of proposals and debate in order to bring all issues and possibilities to bear.  An eventual on-chip suite of test structures may include any number of the following experiments:


5.5.1  General Passive Structures

            Further investigation will be based on the crosstalk models of [94], which included effective self capacitance and self resistance as derived from the Elmore time constant and effective transmission line coupling. 

The test structures in [95] employ an on-chip sample and hold circuit to probe the voltage directly within the interconnect. This method can measure RC delays, static crosstalk and crosstalk induced delays.  The circuit includes a built-in delay based on a PMOS pass transistor and pulldown NMOS transistor which exhibits a quasi-linear delay dependence on the gate voltage.  These elements are used to control signals on victim and aggressor lines, which are then sampled for induced delays.

Other similar sources of use include the time-domain technique of [96] and the structures presented in [97].

Differential Comb-Caps

            In [98], an extraction method to determine interconnect parasitic parameters based on differential comb-caps is presented.  This method can simultaneously determine the interlayer and intralayer capacitances, line resistance and effective line widths due to cross-chip delta-w.  It is noted that this method of dielectric thickness monitoring improves over off-line scanning electron microscope (SEM) and optical methods.  Previously, intra-wafer line-width variation detection was done by in-line SEM or Van der Pauw structures; these are performed manually, require large structures, and can be destructive.

            Without going too deeply into the details, the test structure basically includes a long serpentine trace, which has on either side a ‘comb-cap’ fitting into the snake bends, as follows:

Figure, Snake-Comb Structure from [98]
(Reprinted with permission of the IEEE)


            The structure is laid out with two different widths W1 and W2.  The line to ground capacitance of the snake-line is measured for both structures as Cw1 and Cw2. Likewise, the resistance for both lines is measured as Rw1 and Rw2. The neighboring comb lines are connected to ground.  Then:

                                   Eqn. 5.1

                           Eqn. 5.2

                                     Eqn. 5.3

            Where ε is the dielectric constant of the interlayer dielectric and L is the length of the snake line. The fringe component of the snake line in both structures is the same and is thus cancelled (hence the comb-caps).  These equations can be used to calculate the variation in tox, line resistance, width reduction and coupling capacitance simultaneously based on just two structures.
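Since the equation images are not reproduced here, the following is one plausible reconstruction of the arithmetic described, assuming a sheet-resistance model for the line resistance and a parallel-plate area term plus a common fringe term for the line capacitance; the exact published forms of Eqns 5.1 through 5.3 are in [98]:

```python
# Plausible sketch of the differential comb-cap arithmetic, NOT the exact
# published equations.  Assumptions: R = Rs*L/(W - dW) for both widths,
# C = eps*L*(W - dW)/tox + Cfringe with Cfringe identical for both widths.

EPS_SIO2 = 3.9 * 8.854e-12  # assumed SiO2 permittivity, F/m

def delta_w(w1, rw1, w2, rw2):
    """Width reduction dW: Rw*(W - dW) = Rs*L is the same for both lines."""
    return (rw1 * w1 - rw2 * w2) / (rw1 - rw2)

def t_ox(w1, cw1, w2, cw2, length, eps=EPS_SIO2):
    """Dielectric thickness: the fringe terms cancel in Cw1 - Cw2."""
    return eps * length * (w1 - w2) / (cw1 - cw2)
```

Because only differences and ratios of the two structures enter, the unknown sheet resistance and the fringe capacitance drop out, which is the point of using two widths.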


5.5.2  Circuit-base Active Structures

            This section presents the active parasitic measurement methods of ring-oscillators, CBCM and built-in signal integrity testing.

Ring-Oscillators


First-order ring-oscillators (ROs) are easy to measure and easy to implement, but not the most accurate.  Generally, the RO period as measured on silicon is compared against that simulated with parasitics included.  The number of stages needs to be large enough that the loop delay is much larger than the delay through a single stage; otherwise chaotic instability occurs.  In this test, a standard 31-stage ring with a representative set of cap structures is tested.  Here, the RO is compared against an extracted version without a test-structure cap to form a reference.
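The comparison rests on the standard textbook relation between ring frequency and per-stage delay; a minimal sketch (this is the generic formula, not code from the thesis):

```python
# An N-stage ring (N odd) oscillates with period 2 * N * t_d, where t_d is
# the per-stage propagation delay: each edge must traverse the ring twice
# to return a node to its starting state.

def stage_delay(n_stages, freq_hz):
    """Per-stage delay implied by a measured ring frequency."""
    return 1.0 / (2.0 * n_stages * freq_hz)

def ring_freq(n_stages, t_d):
    """Ring frequency implied by a per-stage delay."""
    return 1.0 / (2.0 * n_stages * t_d)

# e.g. a 31-stage ring oscillating at 100 MHz implies about 161 ps per stage
```
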


Figure  Ring Oscillator, Ideal Vs. Extract (1.72ns delay) (By Author)


            The figure depicts the waveforms from the ideal schematic simulation overlaid against the extraction-based simulation.  After the first cycle (rising edges), the difference between the two is 1.72 ns.  Given that delay is cumulative, the offset continues to increase on later cycles.  The extraction is the ‘baseline’ ring-oscillator, without a test load.  After a 100 µm² M2-over-M1 cap is added, the resulting waveform (vs. the original extract) is captured.



            Here, the capacitance load is found to be 285.246 fF, which leads to considerable charge and discharge delay and over/undershoot. The offset is about 16.5 ns.

Charge-Based Capacitance Method (CBCM)


As first presented in [19], the CBCM method is a current-based capacitance measurement method whose accuracy is primarily limited by the device matching between inverters.  The method is estimated to have an accuracy of 10 attofarads.

The CBCM method uses two out-of-phase inverters as current mirrors.  One of the inverters is loaded with the test fixture.  The following figure depicts their design:


Figure, CBCM Circuit (from [19]) and Implementation
(Reprinted with permission of the IEEE)


Here, V1 and V2 represent the input ‘clock’ signals, which are out of phase with respect to each other as follows:

Figure, CBCM Circuit Stimulus and Run, from [19]
(Reprinted with permission of the IEEE)


The two non-overlapping signals guarantee that only one of the two transistors in each inverter is conducting at any given time.  Thus, when V2 goes low, the caps charge up; when V1 is high, they discharge.  The capacitance can be determined from frequency analysis.

Inet = C * Vdd * f                                               Eqn. 5.4

C = Inet / (Vdd * f)                                              Eqn. 5.5

where Inet = I − I′ (the DC current difference), Vdd = supply voltage, and f = clock frequency.

Figure, CBCM Circuit Simulation, from [19]
(Reprinted with permission of the IEEE)


From these equations, it can be seen that the slope at any given Vdd or frequency f provides the differential capacitance (C_load = slope / f). Furthermore, through regression testing at different points (Vdd, f), the accuracy can be bounded.  The accuracy can also be improved by subtracting C_reference from C_load to remove capacitance due to the MOS circuit overlap.
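The arithmetic of Eqns 5.4 and 5.5 is simple enough to sketch directly; the function name here is hypothetical:

```python
# Sketch of the CBCM relation C = (I - I') / (Vdd * f) from Eqns 5.4-5.5:
# the net DC current difference between the loaded and reference inverters
# gives the test-fixture capacitance directly.

def cbcm_cap(i_load, i_ref, vdd, freq):
    """Capacitance from CBCM currents (amps), supply (volts), clock (hertz)."""
    return (i_load - i_ref) / (vdd * freq)

# e.g. a 33 uA net current at Vdd = 3.3 V, f = 100 kHz implies C = 100 fF*1000 = 100 pF
```
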

            CBCM has found much popularity in industry and academia. Various EDA companies are noted to use it, as are semiconductor design and manufacturing companies, in the validation of LPE tools.  Further discussion and development by the original authors is presented in [99] [100].

BIST for SI


In [101], a method similar to built-in self test (BIST) with automatic test pattern generation (ATPG) is presented for detection of signals which exceed a specified noise margin. Basically, a modified differential sense amplifier is used to detect noise above a given threshold.  The amplifier, called a Noise-Detect (ND-cell), may then be connected to either a compressor, flip-flop, or counter to signal when, or how often, S.I. events occur.  As with normal fault-coverage, a large set of test patterns is applied to the inputs of the design, and the test circuitry is monitored to determine overall performance.


5.5.3  Summary


In summary, no level of 3D field-solver validation of LPE tools is of any substance until there exist actual silicon measurements against which to compare.  All of the tests presented above may be implemented, but as with all chips, die area and the resulting costs are the primary inhibitors.  If, on the other hand, the primary SIPPs have been measured, then the overall factor of concern is the robustness, consistency and accuracy of the LPE tool.







If you don’t test it, it will fail – guaranteed

Carver Mead [102]


This chapter presents the scripts and framework developed for the quality assurance (QA) of Process Design Kits.  The scripts and methods are bundled under the moniker RMS (Regression Management System), encapsulating an extensible system for developing and managing regression tests for the various stages of the design flow.  These include various simulation and analysis scripts developed to address some of the concerns presented in Chapter 3.  Also, the tool is interfaced to an LPE validation and benchmarking system.  In total, the use of RMS in the validation of a Process Design Kit includes multiple Ocean™-script-based simulations to validate models, models over corners, and simulator-X vs. simulator-Y; regression runs for layout vs. schematic, design rule checks, and layout parasitic extraction; and various ‘loop-back’ Ocean™ script comparisons such as LPE versus ideal schematics and LPE over corners.  These tests are roughly categorized here for reference:


I.    Regression tests on layout versus schematics

II.   Regression tests on design rule checks

III.  Regression tests for parasitic extraction accuracy benchmarking

IV.   Simulations to validate parasitic extracted netlists against the ideal

V.    Simulations to validate models and simulators

Table 6.0a:  Classes of RMS PDK Tests


Organization of Chapter 6

This chapter is organized as follows.  First, Section 6.1 presents several works related to the RMS project, primarily EDA frameworks and design-flow managers; no directly similar work was found. Next, Section 6.2 presents the design-flow integration of RMS, its usage model and philosophy, and a little of its coding architecture.  Section 6.3 then delves into the methods of validating physical verification tools such as DRC and LVS.  Regression naturally fits this role, as there are numerous tests to perform, and the evolutionary nature of the kits demands frequent re-runs of the test suites (regression Classes I & II from above).  Section 6.4 then discusses the use of the RegMan tool to execute various evaluations of a large set of parasitic extractions of layout structures.  Comparisons of the resulting extracted netlists of a large suite of layout structures are made with respect to an industry-standard 3D simulator (Class III from above).  Section 6.5 wraps up the physical verification validation with a system for comparing simulations of extracted netlists to the ideal schematic simulation.  This stage is motivated by the necessity to bind the layout to the original simulation, a more aggressive LVS (Class IV from above).  The sub-system scripts SimReg and Rempars are further described.  Following this, Section 6.6 investigates general device models and simulators, and presents methods for cross-checking models, schematics and simulation.  The RegMan tool is extended to run regression sets of the defined simulations and evaluations through calls to simulation scripts (Class V from above).  Finally, Section 6.7 concludes the chapter with a review of the merits of the system and its results. Also, future directions for the project and tools are presented, including uses in layout assistance, design-flow management and design of experiments, and finally, constraint management and error-propagation analysis.


6.1  Related Work

            The concept of regression systems is certainly not new.  There are entire volumes dedicated to the optimization of regression systems based on prioritization of work, scheduling and intelligent search.  In the arena of EDA, regression systems also exist, but specific literature regarding their implementation and strategies is scarce.  There have, however, been a few papers found related to the encapsulation of EDA tools through frameworks.  It is surmised that if a set of tools can be automated through a framework, it is fairly straightforward to add post-processing utilities to evaluate the results of various jobs.  Similarly, a design-flow manager tool can be particularly useful in controlling regression runs through a set of tools, thereby enabling testing of data flow and error propagation.  With this in mind, several papers related to EDA frameworks and design-flow management are investigated.


            First off, the RegMan tool and its usage in the quality control of PDKs was introduced by this author in a previous paper [103].  That work presented only the basic concepts of the tests run and the general purpose and results of the tool; it is largely expounded upon in this chapter.

            In [23], a layered framework for managing design data and CAD tools is presented as essential for coping with interconnect and packaging modeling and simulation.  This tool assists the designer through a design flow of various tools built into the framework, which include simultaneous switching noise simulation, cross-talk analysis for lossy and coupled transmission lines with linear terminations and more. The designer is assisted in running the tools in the right sequence, in knowing where the tools are and what versions, in run-time environment setups, command-line requirements, interpretation of error conditions, translations, and is assisted in computer resource management (presumably – a form of load sharing).

            The RegMan tool developed in this research and presented herein follows closely with the above philosophy.  The tool has built-in knowledge of the design environment, it can run most of the tools involved in the design flow over a load-sharing facility, has built-in error detection and reporting capabilities, and provides a means of managing designs.  The most apparent difference between the descriptions of the two systems is that RegMan was designed specifically to exercise the physical verification system over a large set of cases.  The fact that it has been extended to run other tools is not entirely un-premeditated, as it has followed on the author's previous research into workflows with Java, CORBA and intelligent agents. 


EDA Frameworks Related Work

            A few works have been found related to the management of EDA frameworks, including [24] [25].  The first presents a scripting system with wrappers to control the flow of tools.  The second presents a flow manager based on an object-oriented development methodology. Both are admirable in their ability to deal with disparate tools in environments which are similarly multi-faceted.  The architecture presented here does not yet sport a flow manager, but one can be added easily enough through hard-coded scripts.


6.2  The RegMan Architecture and PDK QA System

This section presents the design flow integration of RMS, its usage model, GUI, and philosophy, and a little bit on its coding architecture.

RegMan (Regression Manager) is a graphical tool-control framework designed to facilitate the distributed processing (LSF) automation of simulation, physical verification and parasitic extraction runs.  It is generalized to work on multiple design kits and with various sets of physical verification rules by use of wrappers and interfaces. It can be executed in either batch mode (to facilitate scheduled runs), or interactively through a perl/Tk GUI.  Various features have been added over time to enable such tasks as the automated generation of run-lists from design libraries, snapshot recording of the state of the environment a cell was run under, and numerous modes of reporting.

Although the preceding chapters have been developing the motivations and methods for QA of design kits, and physical verification tools in particular, it is the RegMan utility, and moreover the Regression Management System (all the tools and methods combined), which are the centerpiece of this thesis.  Its development required a fully ‘holistic’ concept of the QC of PDKs and the design S.I. issues driving PDK capabilities.  RegMan, in its simplest disguise, may be seen as merely a vehicle to automate and execute jobs on a compute farm. But, as with many ‘EDA frameworks’, it is a conglomeration of procedures that set up the run files for various tools, invoke them, wait for their completion states, and then parse the results files to glean information on the success or failure of each test.  Its actual structure and nature are better defined by the history of its development, and by the motivations presented in the several preceding chapters.


6.2.1  RegMan Verification Flow


As presented in Chapter 3, the various possibilities of error injection into a design include a web of interactions between the design tools and flows, Process Design Kits, and the design environment proper (in terms of data management and processing).  All of these components eventually funnel their results into the physical design, as can be seen in the ‘Merged Layouts’ stage of Figure 3.2a, a typical top-down design flow.  In this sense, DRC, LVS and LPE form the crux upon which the validity of the entire design environment can be determined.  Thus, RegMan is focused on the physical verification basis for QC of the process design kit.  Of course, various stages of the kit have their local inputs and outputs, which are obvious requirements in any tool or library development.  Many of these components are listed in Appendix C, Process Design Kit Development Checklist.  Some have proven more accessible to regression, and in particular can be validated in the process of validating PV, or are sufficiently similar as to require only small extensions to the scripts to accommodate them.  The following ‘Regression Flow’ (Figure 6.2.1a) depicts the manner in which RegMan and its derivative scripts have been fit into the seams of the basic Analog LPE Flow (Figure 3.2.3a) and its PDK, to strategically optimize the likelihood of error or inconsistency detection.




Figure 6.2.1a:  RegMan Design Kit Validation Flow (Figure by Author)


A review of this flow and its background components will expedite the presentation of the RegMan QA strategy and its resulting architecture.  First off, the shaded (cyan) shapes headed by a ‘RegMan’ label indicate the points in the flow at which this tool acts.  The other shaded shapes (yellow) indicate likely points for tools. The rest of the shapes indicate I/O data files and libraries.  The elements are briefly described above in Table 3.2.3a.  There are six points in the flow into which the regression tools fit.  Not coincidentally, these coincide with the regression classes described in Table 6.0a, and with the related sections as presented in the chapter organization.

This chapter is organized into six sections by features of the Regman QA capabilities and which classes of tests they operate on. 

First off, the Regman DRC/LVS capability is reviewed (Class I & II, Section 6.3).  A list of cells, their run modes and settings, and the expected results are fed to RegMan – which then farms the jobs out to LSF, monitors the jobs, and parses the results of each as they finish.  The resulting outputs are compared against the expected outputs, and a report of OK/NOK is generated.

Next, the Regman LPE tool validation is explored (Class III, Section 6.4).  Using the same engine as for LVS/DRC, the LPE program is run on a large set of test structures.  The results are compared to those generated by a 3D field solver on the same set of structures, and to some built-in simple analysis.

Next, Regman Sim vs. LPE is described (Class IV, Section 6.5).  In this section, the ideal simulation of a schematic is compared against its layout-parasitic-extracted simulation. Various modes of extraction (RC, R-only, C-only, no parasitics, max, min and nominal corners, etc.) are tested.  Simple device templates or ring-oscillator circuits may be employed.

Next again, the Regman Sim vs. Measured stage is presented (Class V, Section 6.6.1).  Again using the SimReg Ocean™ script, jobs may be run over simple device-level tests (such as transconductance on MOSFETs) and compared against the measured data from Keithley/Probe.

Finally, the Regman Sim vs. (Sim, Sch, & Corners) operations are reviewed (Class V, Sections 6.6.2 through 6.6.4).  In this stage, using the same system (SimReg) as above, two different simulators are run on one equivalent netlist.  Given that the device model definitions and the netlists themselves will always be slightly different, the question is ‘how different’, or whether they are grossly different.  Also, the simulation may be compared directly to expected results given the schematic inputs (Sim vs. Schematic); this is more of a sanity check.  The simulation of one model corner vs. another should consistently bound the outputs in expected upper and lower ranges.  This test has found that not always to be the case (in fact, they were once inverted).  As an addendum to this section, the Regman LPE extract automation is presented (used in Classes III & V) (Section 6.3.3).  Although not specifically a test, the execution of the LPE jobs must be automated in the same manner as DRCs and LVSs.  Since there are no desired fail conditions (as opposed to desired DRC-error and LVS-error detection), a positive completion of a mode of LPE is a test in and of itself.

This outline summarizes the existing modes of operation of the ‘Regression Management System’, or RMS.  Further tests and capabilities are proposed in section 6.7, Summary and Future Directions.


6.2.2  RegMan Modus Operandi and GUI


RegMan takes as input a comma-separated values (CSV) formatted file describing test cells and their mode of pass/fail expectation.  Outputs for physical verification validation (DRC, LVS) include various reports on OK or NOK (not-OK) tests, matches, mismatches and failed runs.  For the parasitic extraction benchmarking (Chapter 5), reports on the accuracy of the extract tool versus an industry standard, TMA Raphael™, are generated.  For the simulation regression tests, results may be compared to tables of pre-calculated expectations (by hand or automation) and summary reports are generated.  The general operation of the tool is as follows; each paragraph introduces a usage mode, which is then further explained in the sections below.

The basic operation of the RegMan tool begins with reading a cell-list into an internal table.  A cell-list element contains all the information necessary to invoke the specific sort of run it is targeted to.  This is done by ‘tags’ such as ll=”someLayoutLib”, where ‘ll’ is a tag recognized as defining the layoutLib variable in the run job.  Given that any particular required input can be defined by a given or added tag, the regression tool is extensible to just about any tool. More on cell-lists and command-line inputs is presented in the sections below.

The output expected from a run is specified by a tag x=”some_Token”, where some_Token is replaced by a token from a pre-defined list of token/string pairs.  These tokens are held in a file “.tokrc”, such that they may be added to the list by a user. The token given on the cell-list tells the program what to look for in a log file, and the type of file.  More on tokens is presented in the sections below.
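A minimal sketch of how such a tagged cell-list line could be parsed; the helper name is hypothetical, and only the ‘ll’ and ‘x’ tags are taken from the text:

```python
# Hedged sketch (helper name hypothetical) of parsing one cell-list line
# of the form:  DESCRIPTION, CELLNAME, tag0=value, ..., x="TOKEN"

def parse_cell_line(line):
    """Split a comma-delimited cell-list line into a dict of fields/tags."""
    fields = [f.strip() for f in line.split(",")]
    cell = {"description": fields[0], "cellname": fields[1]}
    for field in fields[2:]:
        tag, _, value = field.partition("=")
        cell[tag.strip()] = value.strip().strip('"')
    return cell

row = parse_cell_line('basic inverter, inv1x, ll="someLayoutLib", x="CLS_MATCH"')
# row["ll"] == "someLayoutLib"; row["x"] == "CLS_MATCH"
```
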

After tool startup, cell-list file input and tokens-file input, the tool will either (a) wait for user action in GUI mode, or (b) execute the entire run-list according to the command-line job-control specification.  If in GUI mode, the user must open a ‘Select-Cells’ list-box and select the cells to be run. The selected cells are collated into a run-list hash-table (described below).  Then, the cells are actually shipped via the ‘execute’ button.

The execution consists of wrapping each cell in a command for the indicated tool and passing it to the job-scheduling engine.  The program may also be required to create various setup files in the expected run directories (e.g., run-specific files for Assura™).  The information for these is either hard-coded (where common to all cells), gleaned from the cells-list, or obtained from environment variables and the ‘.regrc’ (RegMan Resource) file.  Most of the run-control and environment variables are kept in environment-control hash tables (described below).

After all cells have been shipped, a monitoring routine is activated, which cycles through the list of cells repeatedly.  If a cell is not marked completed, a check on its run status is made. If it has completed, its results are parsed and compared against the expected output, and a message is added to the cell’s report slot.  The cell is then marked completed.  When all cells are done, the array of reports is printed to a file.  Also, the report is available through interactive ‘list-boxes’, wherein specific cells may be queried for various sub-reports (a display of various files in their run directories).  More on the reporting modes is presented in the sections below.
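The parse-and-compare step of the monitor can be sketched as follows; the function name and dictionary keys are hypothetical, assuming the expected token string is simply searched for in the cell's report file:

```python
# Illustrative sketch (names hypothetical): decide OK vs NOK for one cell
# by scanning its report file for the string bound to its expected token.

def check_report(cell, tokens):
    """Return 'OK' if the cell's expected token string appears in its report.

    cell   : dict with hypothetical keys 'x' (token name) and 'report_path'
    tokens : dict mapping token names to expected report strings
    """
    expected = tokens[cell["x"]]            # e.g. CLS_MATCH -> its string
    with open(cell["report_path"]) as fh:
        return "OK" if expected in fh.read() else "NOK"
```
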

The Graphical User Interface for RegMan is presented below.



Figure 6.2.2a:  RegMan Graphical User Interface (By Author)


The interface enables the user to configure the environment, choose processes, choose various stages (LVS, DRC, LPE, SIM etc) to run on the cells list, choose subsets of cells to run, view the setup files for any cell, and view log files and results for any cell.  The list of basic options and actions available exceeds 160.  Further details of the GUI operation are presented in Appendix J, RegMan Graphical User Interface.



6.2.3  RegMan Architecture and Coding


The tool is written in Perl/Tk and consists of over 11,000 lines of code.  The language Perl was chosen for the framework due to its flexibility, its ease of use in parsing reports, and the extremely large set of modules available.  Of course, were there any serious number-crunching to be done, that would be spun off to a ‘C’ or ‘C++’ based program.  As noted above, the code reads several inputs on startup, including command-line inputs, a cells-list file, the .regrc initialization file, the system environment variables, and the .tokrc file.  Once initialized, it goes into a GUI input hold state.  From there, the user may further configure the run mode through the ‘Env’ menu options.

Cells List and Job-Control Inputs


On startup, RegMan reads a file with a list of cells. Many things can be controlled from the cells file, including library, cellviews, technology corner, switches, ground nodes, etc. Basically, anything that might be set in an RSF control file should be definable on the cell line. In addition, an x="TOKEN" is set, where TOKEN is one of a number of tokens that represent an eXpected string in one of the reports.  A file, ".tokrc", contains all the available tokens and their string bindings.  For example, MATCH = "CLS_MATCH" indicates that the LVS run should match, and the cellname.cls file contains the expected result.  Tokens may be added by the user to the .tokrc file for each new type of condition tested.


Cell-List File Format

            The cell-list file describes the cell to be run (its library, view, etc.), and provides tags describing the run mode.  The following table depicts the format used for an Assura™ LVS run.


FORMAT: (note each field is comma delimited)

DESCRIPTION, CELLNAME, tag0=value, tag1=value, ….,tagn=value



Replace with anything you like


The layout cellname.  If the schematic cellname is different, specify

ll= LayoutLib

Specify whenever the layout Library referenced changes from above cells


This is actually redundant


In case the layout is in some view other than "layout"


May be gd2 or df2, defaults to df2


Defaults to same as layoutLib


Defaults to schematic


Defaults to same as layout Cellname if not specified


Defaults to gds2, may be spice, cdl


Need for RCX.  For example, g=PBKG


Set according to the report expected.


Set to one of the assura_tech.lib corners available.  Also settable in menu


May be spice or av_extracted. RCX will generate av_extracted_$technologyField, to run all corners

Table,  Cell-List Format for Assura™ LVS run


            The Job-Control inputs are used to pre-configure the GUI, or to allow direct execution of the run-list in batch mode.

Message Tokens


The message tokens provide a means to abstract the messages expected in any particular report file out of the program.  The TOKENS may be written to and read from the .tokrc file.  A set of default TOKENS is encoded into hash tables within the program.  The most prevalent token for LVS is x="CLS_MATCH", which of course means a clean LVS run is expected.  In the .tokrc file, this is defined by:

CLS_MATCH= "Schematic and Layout Match"

The standard TOKENS are mostly just various errors that have been seen in the report files.  The format is simple:


TOKEN File format

The following summarizes the form of the error-string tokens:


TYPE_TOKEN="some string you expect to see in file TYPE"

The "TYPE"s are


·        "LOG_" : looks only in the cellname.log file (the Assura™ run status log)

·        "CLS_"  : looks only in the cellname.cls file (the Assura™ brief report)

·        "CSM_" : looks only in the cellname.csm file (the Assura™ summary report)

·        "ERR_"  : looks only in the cellname.err file (this contains the DRC error reports)

·        "RCX_" :  looks only in the rcx.cellname.log file

·        "SIM_"  : looks only in the cellname.out file in the simulation directory
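The token file format and TYPE routing above can be sketched as follows.  This is an illustrative Python sketch (RegMan itself is Perl/Tk); the dictionary mirrors the prefix-to-report mapping listed above, and the cellname used in the comments is hypothetical.

```python
import re

# Which report file each TYPE_ prefix is checked against, per the list
# above ({cell} is replaced with the cellname at run time).
REPORT_FOR_PREFIX = {
    "LOG": "{cell}.log",      # Assura run status log
    "CLS": "{cell}.cls",      # Assura brief report
    "CSM": "{cell}.csm",      # Assura summary report
    "ERR": "{cell}.err",      # DRC error reports
    "RCX": "rcx.{cell}.log",  # RCX run log
    "SIM": "{cell}.out",      # simulation output
}

def load_tokrc(text):
    """Parse TYPE_TOKEN="some string" lines into a token -> string dict."""
    bindings = {}
    for line in text.splitlines():
        m = re.match(r'\s*(\w+)\s*=\s*"([^"]*)"', line)
        if m:
            bindings[m.group(1)] = m.group(2)
    return bindings

def report_file(token, cell):
    """Return the report file a token should be searched in, from its prefix."""
    prefix = token.split("_", 1)[0]
    return REPORT_FOR_PREFIX[prefix].format(cell=cell)

bindings = load_tokrc('CLS_MATCH= "Schematic and Layout Match"')
# bindings["CLS_MATCH"] is the string expected in (hypothetical cell)
# nand2's brief report: report_file("CLS_MATCH", "nand2") -> "nand2.cls"
```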


An example of the tokens list for LVS is presented in Figure J.1 below.


            The Tokens may be dynamically edited through the RegMan GUI and re-integrated into its internal hash-table lists.  An example of the interface is presented in Appendix J, Figure J.1, RegMan Tokens Editor.

Run-List Hash Tables


            The cells-list file is read, and each line is parsed into a separate hash-table object (one per cell).  The object describes everything needed to run that cell, and holds fields to track the cell's run status and run results.  The objects themselves are tied together by a hash table, with the cellname providing the key.

Environment Control Hash Tables


Yet another hash table is built for all the control variables and the system environment variables.  These variables are used to define the various tools' license files, run paths, default conditions, and run directories.  The jobs themselves may use the environment variables as inputs to the tools (via wrappers).  For example, depending on the type of job (simulation, verification, Unix, Linux, etc.), the LSF queue may be specified.  The environment variables and control variables may be displayed through the GUI as a debugging utility.

Reporting Modes and Snapshots


One of the side benefits of RegMan is the creation of a snapshot of the environment for each cell run.  This snapshot enables detection of changes in the environment which may render previous runs indeterminate: it lets users detect whether the design kit, schematics, layouts, models, or verification rules have changed.  The snapshot records the DFII version, Assura™ version, CDK date-time, layout and schematic last-save date-times, and the status and date-times of the DRC, LVS, and RCX runs.  The previous snapshot is saved under a date-time filename.  When a listbox window of cells is requested, RegMan displays the status of a comparison of the current environment's snapshot against the last run's snapshot.  Thus, the user may discern which cells have had changes to their schematics or layouts, and whether any overall environmental changes have occurred since the last run.
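A minimal sketch of the snapshot comparison, in illustrative Python (the snapshot item names in the example are hypothetical stand-ins for the recorded fields):

```python
def changed_items(previous, current):
    """Compare two environment snapshots (dicts of item -> recorded value,
    e.g. tool versions and last-save date-times) and return the sorted
    list of items whose values differ, i.e. the changes that may render
    previous runs indeterminate."""
    keys = set(previous) | set(current)
    return sorted(k for k in keys if previous.get(k) != current.get(k))

# A change to the layout's last-save time is flagged (hypothetical values):
changed_items(
    {"assura_version": "3.1", "layout_saved": "2004-05-01 10:00"},
    {"assura_version": "3.1", "layout_saved": "2004-05-02 09:30"},
)
```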


6.3  Physical Verification Tools Validation (DRC, LVS, LPE)


This section delves into the methods of validation of physical verification tools such as DRC, LVS, and LPE.  Regression naturally fits this role, as there are numerous tests to perform, and the evolutionary nature of the kits demands frequent re-runs of the test suites (Regression Classes I and II from Table 6.0a, Classes of RMS PDK Tests).

As noted early on, the physical verification stage includes DRC, LVS, and the follow-on LPE stages.  The PV process is highly prone to error due to the creative process of transferring a design from schematic to layout.  From the very beginning of this project, RegMan was necessary to run LVS and LPE jobs so that parasitics data could be compared to that from another source.  The requirements of generalizing the running of various verification tools, on various design kits, from various projects were carefully analyzed.  The core of the job-control engine was thus targeted to be tool-independent.  The ability to run and evaluate DRC jobs was a fairly straightforward extension.

As presented above, RegMan operates on a list of cells which defines their run modes and settings, and the expected results.  RegMan then farms the jobs out to LSF, monitors the jobs, and parses the results of each as they finish.  The resulting outputs are compared against the expected outputs (defined by the x=TOKEN tag), and a report of OK/NOK is generated.  The following sections elaborate on each of the test modes: DRC, LVS, and LPE.
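The OK/NOK grading step can be sketched as follows (illustrative Python, not RegMan's Perl; `cells` maps each cellname to its x=TOKEN, `bindings` holds the .tokrc token-to-string bindings, `reports` holds the text of each finished report, and the cellnames in the example are hypothetical):

```python
def grade_cells(cells, reports, bindings):
    """Compare each cell's finished report against the string bound to
    its expected token, returning OK when the string is found and NOK
    otherwise (a missing report also grades NOK)."""
    results = {}
    for cell, token in cells.items():
        expected = bindings[token]
        results[cell] = "OK" if expected in reports.get(cell, "") else "NOK"
    return results

bindings = {"CLS_MATCH": "Schematic and Layout Match"}
grade_cells({"opamp1": "CLS_MATCH"},
            {"opamp1": "... Schematic and Layout Match ..."},
            bindings)
```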


6.3.1  RegMan Verification of DRC


The DRC rules are generated from a long list of historical process-control tests and post-fabrication results in terms of yield impact and reliability.  A typical DRC rule deck may contain hundreds of rules, implemented in something like 4000 to 6000 lines of code.  The rules are made as general as possible to avoid false errors and to allow for layout creativity where warranted.  Given that most rules apply to many permutations of geometry orientations and relations, the number of checks needed to test just the minimum and maximum boundary conditions of the rules is intractable.  In this respect, it becomes imperative to automate the process of rule-deck validation and to accelerate it through distributed processing.  Also, one can get by with the traditional pass/fail quilt, but the creation of layers through Boolean operations within the rule decks leaves too much room for error whenever a rule is modified or added.  DRC regression tests have been automated for years now and have proven successful in QA'ing rule decks.

The biggest problem with DRC QC is that of actually building the test templates.  Given that there may be hundreds of rules to check, and each rule must be tested by at least one pass and one fail example, the task of just building the QC templates leaves much room for error.  Assuming the structures have been built (correctly), whether manually or through some complicated layout generation system (which a co-worker has developed), there still remains the task of defining to the regression engine the expected results of each cell.  This can be significantly automated if the cell names themselves indicate the type of test (rule number) and the expected result.  If so, then the regression system need only scan the library's list of cells and automatically generate the cells-list and its command-line options.  RegMan does not currently sport this ability (lists are hand-generated), but the capability is there, as provided by the aforementioned co-worker's system.
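As a sketch of how such automation could work, suppose (hypothetically — this is not RegMan's current behavior, and the naming convention is invented for illustration) that test cells were named rule&lt;ID&gt;_pass&lt;N&gt; or rule&lt;ID&gt;_fail&lt;N&gt;; the expected result could then be derived directly from the name while scanning the library:

```python
import re

def expected_from_cellname(cellname):
    """Derive (rule id, expect_clean) from a hypothetical naming
    convention rule<ID>_pass<N> / rule<ID>_fail<N>; return None if
    the cellname does not follow the convention."""
    m = re.match(r"rule(\w+?)_(pass|fail)\d*$", cellname)
    if m is None:
        return None
    return m.group(1), m.group(2) == "pass"
```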


6.3.2  RegMan Verification of LVS


Generally, LVS rules are developed to ensure equivalence of the netlists generated from the schematics and layouts.  Primarily, this entails checks of the isomorphic equivalence of the netlist graphs (or hookup), the device types, and the device sizes.  Quite a bit of leeway is given to the actual construction of devices; for example, a MOSFET with W=x and L=y may be generated single-fingered, multi-fingered, inter-digitated with another device, and surrounded by dummy poly.  There are quite a few esoteric practices allowed in the translation from schematics to layout, including the use of parasitic devices in the schematics, multiple-potential substrate regions for analog and digital sections, and smashing of parallel devices in either the schematic or layout.  The list of conceptual tests already created exceeds 200 (see Appendix G, LVS Regression Checklist).  The number of permutations of device constructions is in the hundreds.  The list of token/string pairs by which RegMan evaluates the reports is about 100.  Given that at least one relevant component in the kit is likely to change daily during kit development, this tool is seen as essential to the synchronization of parallel kit development.


6.3.3  RegMan Verification of LPE Runs


Unlike the parasitic extraction accuracy benchmarking, the validation of LPE tool functionality employs methods similar to those of LVS and DRC validation.  In this sense, it may be automated in the same manner as the DRC and LVS tests.  Since there are no desired fail conditions (as opposed to the desired DRC and LVS error detection), a positive completion of a mode of LPE is a test in and of itself.  Some examples of LPE extraction verification tests include:


·        Given a threshold of resistance to extract, does it extract as expected?

·        Given a set of nets to extract, does it extract them, completely and only?

·        Given a set of odd structures to extract, does it successfully extract them?

·        Given various LPE corners, does the tool extract parasitics as expected?

Table 6.3.3a:  LPE Tool Functionality Tests


            As can be seen, the manner of LPE extraction tests can be rather non-specific and non-deterministic.  Where possible, simple LPE runs are set up.  Often, a run will simply crash if it is not configured correctly.  But for most tests, more advanced post-extraction simulation experiments must be employed to determine correctness.  These are more fully addressed in Section 6.5, Validation of Parasitics-Laden Circuits.


6.4  Parasitic Extraction Tool Benchmarking

This section briefly discusses the use of the RegMan tool to execute various evaluations of a large set of parasitic extractions of layout structures.  In this mode, RegMan compares the extracted netlists of a large suite of layout structures against those of an industry-standard 3D simulator (Class III from Table 6.0a).  The previous chapter (5) presented in detail the methods and experiments for benchmarking parasitic extraction tools.  These included running many LVS/RCX jobs over various batteries of layout structures, compiling the results, and comparing against similar results from the Raphael™ regression.  As noted in Section 5.3.3, the actual means to run and manage these many jobs demanded the development of a regression automation system.  This would also require the parsing of the SIPPs data file, of SPICE netlists, and of the Raphael™ parasitics results reports.  The overall process of parasitic tool validation and benchmarking consists of the following basic steps:


1.      Obtain the data for the SIPPs of the Process (Measured from Silicon)

2.      Define the techfile for the Raphael™ and LPE tools

3.      Run Raphael™ regression over the 11 primary topologies

a.       Includes 35 permutations each on layers

b.      Each layer permutes Width, Spacing, L

4.      Generate equivalent layout structures with Skill code

5.      RegMan runs LVS and LPE over all structures

6.      RegMan parses the Raphael™ capacitance database

7.      RegMan parses the LPE extracted spice files

8.      RegMan compares and analyzes the results, and writes a report


The first several steps, regarding SIPPs and techfile development and the Raphael™ regression, have been discussed in Sections 5.3.1 and 5.3.3 above.  The generation of the Raphael™-equivalent layout structures has been presented above near the figure Skill Code Cell Generation Core.  The basic operation of using RegMan to run LVS or LPE has been presented above in 6.3.2 and 6.3.3.

Thus, the only thing left for RegMan to do is to parse the Raphael™ data and the LPE data, and compare the two.  There is nothing amazing about parsing the data – it is just a bunch of files in an ordered directory system.  But it was a bit tricky to determine the corresponding components (top-plate capacitance, side-wall fringing down, line-to-line coupling).  This was done by manually inspecting the files and simply understanding the form of the reports.  As RegMan parses the data, it builds internal hash tables for the structures.  Each structure has variables for its component parts, and all the structures are rolled into an overall hash table.  When the parsing and hashing are completed, the tables are compared by layer pairs, parasitic components, and principal layers, and reports are generated depicting the percentage error.  The results of one such run are presented in Appendix G.  The report consists of the errors for each structure, averages for each primary layer and layer pair, and each class of W, S, L permutations.
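The comparison and averaging steps can be sketched as follows (illustrative Python, not RegMan's Perl; each key pairs a structure name with a parasitic component, and the layer and component names in the example are hypothetical):

```python
def percent_errors(reference, extracted):
    """Compare two tables of parasitic components (dicts keyed by
    (structure, component) -> capacitance) and return the percentage
    error of the extracted value against the reference for each key
    present in both tables."""
    errors = {}
    for key, ref in reference.items():
        if key in extracted and ref != 0:
            errors[key] = 100.0 * (extracted[key] - ref) / ref
    return errors

def averages_by(errors, group):
    """Average the per-structure errors under a grouping function,
    e.g. group=lambda key: key[0] to average by structure, or
    group=lambda key: key[1] to average by component type."""
    sums, counts = {}, {}
    for key, err in errors.items():
        g = group(key)
        sums[g] = sums.get(g, 0.0) + err
        counts[g] = counts.get(g, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}
```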