Ramping up yield: Jean-Marie Brunet, Bill Graupp and Luc Tissot of Mentor Graphics explore the importance of data exchanges across the manufacturing process.
Pattern-related defects from nanometre processes are forcing new design signoffs
Getting acceptable manufacturing yields from the latest semiconductor processes necessitates much closer relationships between design, mask-making and manufacture.
Until recently, designers could do their job by respecting a set of manufacturing rules laid down by the foundry in the design rule manual (DRM). The designer laid out ideal polygons to represent the circuit structure, the mask house produced reticles carrying those structures, and the wafer fab printed the structures on the wafer. Provided the designer performed all the necessary checks to ensure no rule was violated, the layout was guaranteed to be manufacturable.
With the latest processes, this straightforward data exchange between the world of design and the world of fabrication no longer works. Unfortunately for designers, advances in the lithography equipment used in manufacturing have not kept pace with the rapid evolution of semiconductor processes, creating situations where the geometries laid out by designers are at best distorted and at worst cannot be printed on silicon at all. The problem was already present at the 130nm node and has worsened with each new, smaller technology: 90nm, 65nm and now 45nm.
Since the 130nm node, the manufacturing world has used resolution enhancement technology (RET) to compensate for the distortions of design intent involved in printing on silicon. This change in design handoff has remained largely invisible to designers, although most are aware that the masks used in the foundry look very different from their layer drawings. But at 65nm and below, RET implementation has become so complex that it alone cannot guarantee a satisfactory yield under all manufacturing conditions.
Another type of technology is now presenting a second wave of change. This technology focuses on applying new methods and tools in design in an attempt to achieve acceptable yield. As manufacturing technology becomes more complex, taking in the lithographic system, material science and sheer scaling, it is becoming much harder to ramp to an acceptable yield and maintain it. This second wave, termed design for manufacturability (DFM), requires a new level of communication, education and partnership between design companies and foundries. The new method alters, or even departs completely from, the current design and manufacturing flow. Failing to shift methods has a clear consequence: acceptable yield in advanced nanometre technologies will not be achievable.
Like traditional DRC-based signoff, DFM is largely a full-chip problem: data must be made available in its full context. This means having access to DFM yield-limiting issues in a cross-layer and cross-hierarchical sense. The ability to look across hierarchical boundaries, to see how the data in one cell interacts with data outside the cell, is essential. It may be possible to improve the manufacturability of one layer by manipulating another. Similarly, a cell with no known manufacturability issues may significantly impact the manufacturability of the full chip when it is placed into context.
To implement this analysis, a method of defining levels of severity must be in place. This requires a way for the author of a configuration file to define the issues of interest and associate each of them with a quantifiable level of impact. For example, by dividing the number of metal transitions with a single via by the total number of metal transitions in the chip, a designer can get a better feel for the impact this issue may have on chip yield, relative to an acceptable limit set by manufacturing.
Similarly, this issue can be weighted against other DFM-related issues, resulting in a total 'grade' representing how well the design layout can be manufactured.
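The ratio-and-weight scheme described above can be sketched in a few lines. The issue names, counts and weights below are hypothetical illustrations, not real foundry data, and real tools will use richer scoring than this simple weighted average.

```python
def issue_ratio(flagged, total):
    """Fraction of instances that exhibit a yield-limiting pattern."""
    return flagged / total if total else 0.0

def dfm_grade(issues):
    """Weighted-average grade: 1.0 means no instance of any issue was
    flagged, 0.0 means every instance of every issue was flagged.
    `issues` maps an issue name to (flagged_count, total_count, weight)."""
    total_weight = sum(w for _, _, w in issues.values())
    score = sum(w * (1.0 - issue_ratio(f, t)) for f, t, w in issues.values())
    return score / total_weight

issues = {
    # issue: (flagged, total, weight) -- illustrative numbers only
    "single_via_transitions": (1200, 48000, 3.0),
    "min_width_wires":        (300,  52000, 1.0),
}
print(round(dfm_grade(issues), 4))  # prints 0.9798
```

A foundry could tune the weights to reflect which patterns dominate its observed yield loss, leaving the designer with a single comparable number per layout.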
Fortunately, the core technologies required to build this design-for-yield flow exist now. Tools capable of reading and analysing layout topography, in a manner that preserves hierarchy for upstream and downstream analysis, are currently available, with integrated links to many existing design environments. The only new requirement, underway at advanced semiconductor companies, is a more robust communication mechanism that allows designers to be fully informed in order to make decisions about yield.
A new feature emerging with 65nm development is the ability for designers to simulate the process variation effects of lithography and etch over various circuit layout configurations. Using these simulation outputs, a SPICE netlist can be extracted for each of several combinations of process conditions. Simulation of the desired circuit with these extracted SPICE netlists allows the designer to accurately perform timing and power analysis across the silicon process window. Using the simulated circuit performance results in conjunction with the actual fab process control distributions, it is possible to evaluate the relative impact on yield of multiple process layout configurations due to systematic process variation.
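The corner sweep this implies can be sketched as follows. The focus/dose parameterisation and the `evaluate` callback are illustrative assumptions; in a real flow that callback would invoke the litho/etch simulator, extract a SPICE netlist and return a timing or power figure for that corner.

```python
from itertools import product

def sweep_process_window(focus_points, dose_points, evaluate):
    """Evaluate a figure of merit (e.g. worst path delay from an
    extracted netlist) at every focus/dose combination, and report
    the corner with the worst (largest) result."""
    results = {(f, d): evaluate(f, d)
               for f, d in product(focus_points, dose_points)}
    worst_corner = max(results, key=results.get)
    return results, worst_corner

# Toy evaluate(): delay degrades with distance from nominal (0, 0)
results, worst = sweep_process_window(
    [-1, 0, 1], [-1, 0, 1], lambda f, d: abs(f) + abs(d))
```

Weighting each corner's result by the fab's measured probability of landing there is what turns this sweep into the yield-impact estimate described above.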
Designers can now view the physical shape changes of the original layout resulting from the simulation of lithography and etch processing, based on foundry models. Design rules can be created and used to flag potential weak points in the resulting simulated layout. Lithography and etch processing distributions representing the extent of actual foundry variations can be added to the simulator as well, allowing designers to view the entire range of physical shape changes to the layout that result from the full range of foundry process variation. Designers can then improve the robustness of the layout against physical shape alteration across the silicon process window by iteratively changing the layout of the flagged regions and re-simulating until those regions are no longer flagged as weak.
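Only the control flow of that flag-and-fix iteration is shown below; `simulate`, `find_weak_regions` and `adjust` are placeholder callbacks standing in for the litho/etch simulator, the rule-based checker and the layout editor, which are assumptions of this sketch rather than any particular tool's API.

```python
def harden_layout(layout, simulate, find_weak_regions, adjust, max_iters=10):
    """Re-simulate and locally modify flagged regions until none remain
    weak across the modelled process window, or the iteration limit hits.
    Returns the final layout and whether it converged."""
    for _ in range(max_iters):
        contours = simulate(layout)          # litho/etch contours per corner
        weak = find_weak_regions(contours)   # rule-based flagging of weak spots
        if not weak:
            return layout, True              # nothing flagged: done
        layout = adjust(layout, weak)        # local edits to flagged regions
    return layout, False                     # gave up before converging
```

The iteration cap matters in practice: a fix in one region can create a new weak spot nearby, so the loop is not guaranteed to converge.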
Another use of these tools is to simulate discrete and parasitic devices extracted from the layout. These device extractions can also be done to correspond to specific choices of process parameters. Using the device extractions, the designer can simulate functional performance parameters of the circuit such as timing and power over the expected silicon process window during the design phase of a product. Current functional simulation techniques (not utilising layout simulation) assume that all discrete components will vary uniformly across the design, which does not represent what actually happens during silicon processing. Running layout simulation will take into account spatial context variation in lithography and etch processes, and so the subsequent device extractions will be spatial context dependent.
The accuracy of this type of analysis is limited by the quality of the model inputs. The process models affect the contours generated, which are used to extract physical parameters. The equations used to reduce the contours to SPICE model input parameters must be validated in silicon. SPICE models must be altered to remove the constants used to adjust for process variation, to prevent double counting.
The scoring structure is constrained by the number of coverage points in each process variable dimension. The more points used, the more accurate the response curve, but at the cost of the run time needed to calculate more process conditions. If one wants to accelerate the analysis by constraining the run to just the critical timing paths, those paths must first be identified in the layout.
For random defect yield characterisation, probabilistic models based on Poisson, Murphy and negative binomial distributions have been developed. These models capture the cumulative effect of each process layer to account for the entire process flow. The output of each model is a yield-related score based on defect density estimates from the foundry.
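For reference, the three named models can be written down directly. This is a textbook sketch for a single layer with critical area A (cm²) and defect density D (defects/cm²); a full-flow score simply multiplies the per-layer yields together.

```python
import math

def poisson_yield(A, D):
    """Poisson model: Y = exp(-A * D)."""
    return math.exp(-A * D)

def murphy_yield(A, D):
    """Murphy model: Y = ((1 - exp(-A*D)) / (A*D))**2."""
    x = A * D
    return ((1.0 - math.exp(-x)) / x) ** 2 if x > 0 else 1.0

def neg_binomial_yield(A, D, alpha=2.0):
    """Negative binomial model: Y = (1 + A*D/alpha)**(-alpha).
    alpha captures defect clustering; large alpha recovers Poisson."""
    return (1.0 + A * D / alpha) ** -alpha

def full_flow_yield(layers, model=poisson_yield):
    """Cumulative yield over the whole process flow, one (A, D) per layer."""
    y = 1.0
    for A, D in layers:
        y *= model(A, D)
    return y
```

For the same A·D, Murphy and negative binomial predict higher yield than Poisson, reflecting the fact that real defects cluster rather than falling independently.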
The original methodology was crude, since sensitivity to the design was not accounted for. The current technology can take the design layout into account through the use of Critical Area Analysis (CAA) tools. These CAA tools use line spacings, line widths and defect size distributions (provided by foundries) to improve the yield model calculations. In addition, layouts can be modified based on feedback from this scoring system to reduce susceptibility to random particle defects.
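The CAA weighting step can be illustrated as a numerical integral of the critical area over the defect size distribution. The classic 1/x³ size distribution and the toy critical-area callback below are standard textbook assumptions, not data from any specific foundry or tool.

```python
def defect_size_pdf(x, x0):
    """Classic k/x**3 defect size distribution, normalised over [x0, inf)."""
    return 2.0 * x0 ** 2 / x ** 3 if x >= x0 else 0.0

def weighted_critical_area(critical_area, x0, x_max, steps=10_000):
    """Integrate A(x) * f(x) over defect sizes with the midpoint rule.
    `critical_area(x)` is the layout area sensitive to a defect of size x."""
    dx = (x_max - x0) / steps
    total = 0.0
    for i in range(steps):
        x = x0 + (i + 0.5) * dx       # midpoint of each integration slice
        total += critical_area(x) * defect_size_pdf(x, x0) * dx
    return total
```

Multiplying the result by the foundry's defect density gives the expected fault count that feeds the yield models above, which is how CAA makes those models layout-aware.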
Additional DFM tools that automatically modify layouts to improve manufacturability are also becoming available. Improved density fill tools for CMP control, via enclosure tools, wire modification tools and via doubling tools are just some of the new EDA offerings that harden layouts against both the systematic and random yield-limiting effects of manufacturing. However, an important problem with these automated layout tools is that there is no scoring system to determine the return on investment of using them; users are limited to the CAA results to estimate the potential gain.
As geometries continue to shrink, the ramp-up time to reach stabilised yield is increasing while the yield at maturity is declining. This is primarily down to systematic pattern-related defects, which need to be addressed more effectively. New methodologies are required to improve manufacturability by reducing the number of systematic defects.
To develop these new techniques, all players in the chain from design through mask to fabrication are having to team up to ensure that layouts possess sufficient upfront quality to permit cost-efficient mask making and smoother manufacturing. EDA tools must incorporate sufficient knowledge of the complete RET flow employed by the mask shop and the foundry to perform the actual corrections and take into account the specific optical characteristics of the stepper.