Enhancing Manufacturing Test and Yield in the 90nm Era
Catching, pinpointing and correcting functional and layout errors, from initial modeling through manufacturing, should, in theory, result in maximum yield. But in the real world, and in spite of the best efforts of everyone from layout designers to yield engineers, chips still fail to yield as expected. Even if yield seems acceptable at the time of manufacturing, chips with “test escapes” can fail after shipping, creating liability for both the design house and the manufacturer. The problem becomes especially severe at 130nm process technology and below, the point at which tried-and-true methods of detection and correction hit a wall. With 90nm manufacturing ramping up and 65nm on the near horizon, yield issues are urgent and a solution is critical. Chip failures will only increase if new methods of process-aware design, testing and diagnosis are not applied, and designers and manufacturers alike will need to address the issue.
When designs fail at first silicon, it is necessary to determine the root cause of the problem (physical defect, design error, etc.) as quickly as possible. But in the nanometer era, the reasons for failure are as complex as the chips themselves. Antenna effects, planarity, the copper manufacturing process, resistive vias and bridges, and the pattern-dependent defects inherent in dense embedded memories all have physical impact and create unintended consequences in silicon.
Adding to the complexity are post-layout applications, such as resolution enhancement technologies (RET), which became an issue at 180nm; at 130nm, RET became mandatory. RET can greatly affect yield depending on the specifications of the full chip design, as can other post-layout modifications, such as metal fill, slotting and redundant via insertion.
With yields dropping and new failure mechanisms emerging, effective methods must be developed to increase the accuracy and capability of manufacturing test for the complex SoC designs that will be prevalent at 90nm. But testing for more types of defects means larger test data volumes and longer test times: test sets created by traditional methods are so large that running a comprehensive set of tests is cost prohibitive. While much attention has been paid to reducing the cost of test, the real struggle semiconductor manufacturers face as they move to 90nm process technology is controlling cost while maintaining or improving the quality and effectiveness of the testing they do.
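To see why the cost question dominates, a rough back-of-envelope estimate helps. The short Python sketch below computes flat-scan test data volume and shift time; the scan cell count, chain count, pattern counts and shift frequency are illustrative assumptions, not figures from any particular design.

# Rough back-of-envelope estimate of scan test data volume and test time.
# All numbers below are illustrative assumptions, not measured silicon data.

scan_cells    = 1_000_000   # total scan cells in a hypothetical 90nm SoC
scan_chains   = 32          # external scan chains driven by the tester
stuck_at_pats = 10_000      # stuck-at pattern count (assumed)
at_speed_pats = 100_000     # at-speed pattern count, roughly 10X larger (assumed)
shift_freq_hz = 20e6        # scan shift frequency (assumed)

def scan_cost(patterns):
    """Return (data volume in bits, shift time in seconds) for flat scan."""
    chain_len = scan_cells / scan_chains           # cells per chain
    bits      = patterns * scan_cells * 2          # stimulus plus response
    seconds   = patterns * chain_len / shift_freq_hz
    return bits, seconds

for name, pats in [("stuck-at", stuck_at_pats), ("at-speed", at_speed_pats)]:
    bits, secs = scan_cost(pats)
    print(f"{name:8s}: {bits / 8e9:6.1f} GB of test data, {secs:7.1f} s shift time")

Even with these modest assumptions the at-speed test set runs to tens of gigabytes and minutes of shift time per device, which is what makes compression unavoidable.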
The test perspective
Other issues also became a concern when process technology shifted from 180nm to 130nm. Defect testing methods known as “stuck-at” became stuck. Higher failure rates made it obvious to yield and test engineers that new test methods were needed to detect new, subtle defect types. Case studies revealed that speed-related failures increased by as much as 20X when moving from 180nm to 130nm processes (Fig. 1). As a result, “at-speed” testing became necessary to detect these variations in signal timing. Unfortunately, at-speed tests come at a price: very large data files, which in turn create a secondary problem. Standard compression methods cannot handle the exploding data file sizes at smaller geometries. At 90nm, the amount of test data and the test times will be unmanageable without advanced compression tools.
Defects such as resistive bridges or vias are more prevalent and manifest themselves as speed-related failures. But the cost of at-speed test is larger pattern volumes and longer test times, by as much as 10X. That additional test data volume cannot be processed efficiently and creates a serious data bottleneck. To make at-speed test effective in a production flow, a new methodology was required to handle this step-function increase in test data efficiently.
Integrated results
To significantly reduce the amount of test data required, and therefore the test time, an advanced on-chip compression solution is essential. With this technique, highly compressed pattern sets (up to 100X smaller than the original test set) are created. These compressed test sets are delivered to the device from the tester just as traditional scan patterns are. The difference is that an on-chip decompressor expands the highly compressed pattern into a fully specified pattern and delivers it through a large number of internal scan chains. Once the pattern is delivered and the responses are captured, an on-chip compactor compresses the response as it is shifted back out to the tester. Using this embedded deterministic approach, test data volume and test time can be reduced by up to 100X, making it cost effective to add and run the additional tests required to improve quality. (Fig. 2)
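The principle can be illustrated with a toy Python sketch: a small linear state machine (an LFSR standing in for the on-chip decompressor), continuously seeded with a few compressed bits per shift cycle, fans out through an XOR network to many internal scan chains, and an XOR tree compacts the outputs back down to a few tester channels. The channel counts, tap choices and structure are invented for illustration and are far simpler than production compression logic.

# Toy illustration of on-chip test data decompression and compaction: a small
# LFSR, continuously seeded with a few compressed bits per shift cycle, fans
# out through an XOR network to many internal scan chains; an XOR tree then
# compacts chain outputs back down to a few tester channels. All sizes and
# tap choices are invented; production compression logic is far more complex.

import random

TESTER_CHANNELS = 2    # compressed bits delivered by the tester per shift cycle (assumed)
INTERNAL_CHAINS = 64   # internal scan chains fed by the decompressor (assumed)
LFSR_BITS = 16         # size of the toy linear state machine

class ToyDecompressor:
    def __init__(self, seed=1):
        self.state = [(seed >> i) & 1 for i in range(LFSR_BITS)]
        rng = random.Random(0)   # fixed XOR taps mapping state bits to each chain
        self.taps = [rng.sample(range(LFSR_BITS), 3) for _ in range(INTERNAL_CHAINS)]

    def shift(self, compressed_bits):
        """Inject compressed bits, advance the LFSR, emit one bit per internal chain."""
        feedback = self.state[-1] ^ self.state[2] ^ self.state[3] ^ self.state[5]
        self.state = [feedback] + self.state[:-1]
        for i, bit in enumerate(compressed_bits):   # continuous seeding from the tester
            self.state[i] ^= bit
        return [sum(self.state[t] for t in taps) % 2 for taps in self.taps]

def compact(chain_bits):
    """XOR space compactor: many chain outputs reduced to a few tester channels."""
    group = INTERNAL_CHAINS // TESTER_CHANNELS
    return [sum(chain_bits[c * group:(c + 1) * group]) % 2 for c in range(TESTER_CHANNELS)]

# One shift cycle: 2 tester bits expand to 64 internal scan-chain bits on chip;
# here the same 64 bits stand in for captured responses being compacted back to 2.
decompressor = ToyDecompressor()
expanded = decompressor.shift([1, 0])
print(len(expanded), "internal chain bits generated from", TESTER_CHANNELS, "tester bits")
print("compacted response bits:", compact(expanded))

The compression ratio in this scheme comes from the ratio of internal scan chains to tester channels: the tester ships only the few bits needed to steer the linear expansion, while the chip itself generates and collects the full-width scan data.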
Using an embedded deterministic test (EDT) approach, manufacturers can apply all of the tests needed and still shrink test data below the stuck-at volumes of larger designs. An added benefit of EDT is that it requires no changes to the functional (or core) part of the design and does not disrupt the workflow. The overall result is less cost, less time, more capability and better performance. For designers and manufacturers, detecting errors is only half the battle. Pinpointing and correcting the root cause is the other half. Tools, and combinations of tools, are available now that can help both parties positively impact even the most puzzling yield issues.
By pairing sophisticated design-for-test tools with a robust, full-chip layout viewing and debugging tool, failure analysis and yield engineers can more quickly analyse failure data to pinpoint systematic yield-limiting issues. The tools operate in much the same way as when a designer iterates in the physical verification phase of design. (Fig. 3) This integrated flow ensures that a high number of failures are not only identified, but also located in the layout for further analysis and, ultimately, corrective action.
A combined approach can analyse the failure information from manufacturing test and, based on this information, determine the nets in the design most likely to have caused the observed failure. The failure information is then passed to a results-viewing environment where the results can be linked between schematic and layout views. From there, the physical layout can be examined to further identify the most likely failure points.
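Under simplified assumptions, the matching step can be pictured as follows: each candidate net has a signature of (pattern, failing scan cell) observations that a fault on it would produce, and candidates are ranked by how many of the observed failures they explain. The net names, signatures and tester datalog below are hypothetical; commercial diagnosis engines derive this information from the netlist and the ATPG patterns and use much richer fault models.

# Simplified sketch of scan-failure diagnosis: rank candidate nets by how many
# failing (pattern, scan cell) observations a fault on that net could explain.
# The fault signatures and failure log below are hypothetical examples.
from collections import Counter

# Observations a fault on each net would cause: (pattern number, failing scan cell).
fault_signatures = {
    "u_core/alu/n1234": {(7, "sc_0812"), (7, "sc_0813"), (12, "sc_0044")},
    "u_core/alu/n1301": {(7, "sc_0812")},
    "u_mem/bist/n0077": {(3, "sc_1990"), (9, "sc_1991")},
}

# Failing observations reported in the tester datalog.
observed_failures = {(7, "sc_0812"), (7, "sc_0813"), (12, "sc_0044")}

scores = Counter()
for net, signature in fault_signatures.items():
    scores[net] = len(signature & observed_failures)

# Best-matching nets first; these are the layout locations to inspect next.
for net, hits in scores.most_common():
    if hits:
        print(f"{net}: explains {hits}/{len(observed_failures)} failing observations")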
The design perspective
At larger process technologies, handing off a design that was verified Design Rule Check (DRC) clean guaranteed the manufacturer acceptable yield. At 90nm, DRC clean does not ensure yield; in fact, the design could still fail entirely. Traditional design constraints, based on the pass/fail of minimum specifications, are no longer sufficient. For instance, a layout characteristic given a “fail” may, in reality, achieve adequate yield, or, with very little effort, be turned into a “pass.” Conversely, a characteristic given a “pass” may be so close to failure that it could cause the entire chip to fail within a short time. To affect yield, designers must be able to make judgments about minimum specifications that result in greater yield than the traditional pass/fail methods of larger process technologies allow. This will require a new method of communication for defining and relaying manufacturing constraints, verifying IC layouts and addressing manufacturing-related issues during design.
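A hypothetical illustration of the difference between the two mindsets: rather than a single pass/fail spacing rule, a yield-aware check can score each measured spacing against a recommended value, so a “pass” sitting right at the minimum is flagged as a yield risk while a marginal “fail” is distinguished from a gross one. The rule values and spacings below are invented for illustration and do not correspond to any real process.

# Illustrative contrast between a binary DRC spacing check and a yield-aware
# score. All rule values and measured spacings below are invented examples.

MIN_SPACING = 0.14          # hypothetical minimum metal spacing, in microns
RECOMMENDED_SPACING = 0.20  # hypothetical yield-recommended spacing, in microns

def drc_check(spacing_um):
    """Traditional binary rule: pass/fail against the minimum."""
    return "pass" if spacing_um >= MIN_SPACING else "fail"

def yield_score(spacing_um):
    """0.0 at (or below) the minimum, 1.0 at or beyond the recommended value."""
    span = RECOMMENDED_SPACING - MIN_SPACING
    return max(0.0, min(1.0, (spacing_um - MIN_SPACING) / span))

for s in (0.13, 0.141, 0.17, 0.25):
    print(f"spacing {s:.3f} um: DRC {drc_check(s):4s}, yield score {yield_score(s):.2f}")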
For designers to make yield trade-offs through cost/yield analysis, design data must be available in its full context. This means having access to yield-limiting issues in a cross-layer and cross-hierarchical sense. The ability to look across such boundaries, to see how the data in one cell interacts with data outside the cell, is essential: it may be possible to improve the manufacturability of one layer by manipulating another. Similarly, a cell with no known manufacturability issues may significantly impact the manufacturability of the full chip when placed into context.
Full-chip data must also include post-layout applications. At 90nm, RET cannot be considered independently of yield-limiting considerations. Designers must become aware of, and design for, post-layout applications such as phase shift mask (PSM) and off-axis illumination (OAI), both of which impose requirements on pitch. (A pitch is essentially the width of the polygon in question plus the spacing to the adjacent polygon.) With poly gate transistors placed at many different spacings throughout a design, manufacturing can be difficult, if not impossible, because the available RETs are significantly constrained. But if a design has only a handful of pitches, it becomes much easier to manufacture in a manner that results in acceptable yield.
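A small sketch makes the point concrete: given the (invented) center positions of poly gates in a row, count the distinct gate-to-gate pitches. A design with only a few distinct pitches is far easier to tune PSM and OAI settings for than one with dozens of arbitrary pitches.

# Counting distinct poly gate pitches in a hypothetical row of transistors.
# Pitch here is the center-to-center distance between adjacent gates, i.e.
# gate width plus the space to the next gate. All coordinates are invented.

gate_x_centers_nm = [0, 280, 560, 840, 1260, 1680]   # hypothetical gate centers

pitches = {
    round(b - a)                                     # adjacent center-to-center pitch
    for a, b in zip(gate_x_centers_nm, gate_x_centers_nm[1:])
}

print(f"{len(pitches)} distinct pitch(es): {sorted(pitches)} nm")
# Few distinct pitches: off-axis illumination and phase assignment can be tuned
# to those pitches. Many arbitrary pitches: the available RETs are constrained.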
Gaining full-chip and post-layout information from manufacturers can be a sensitive matter. For IDMs it is much easier, as design and manufacturing teams work within the walls of a single company and privacy issues are not at stake. But between fabless companies and manufacturers, protection of intellectual property (IP) is a concern. Yet as achieving yield becomes more difficult for manufacturers, more responsibility will fall to the designer. This situation will necessitate a communication feedback loop in order to optimize yield in a cost-efficient manner. This loop is at the core of a successful design-for-manufacture (DFM) methodology.
Although DFM is in its infancy and yet to be fully defined, 90nm is here now, and a solution for improving yield is urgently needed. While manufacturers grapple with the proprietary issues surrounding DFM methodology, and designers begin the educational process of understanding post-layout effects and designing for yield, tools are already in place, easily accessed by manufacturers and designers alike, that can effectively detect, pinpoint and correct defects and design errors. A majority of design companies and foundries now use DFM tools, and combined test and diagnostic tools have been widely adopted across many segments of the industry. By combining these best-in-class solutions, designers and manufacturers can work together toward acceptable yields at 90nm.
Fig. 1. At-speed tests revealed a 12X-20X increase in detection of defects from 180nm to 130nm. Advanced test and diagnostic tools will enable defect detection at 90nm.
Fig. 2. Case studies show that embedded compression enables dramatic reductions in test data volume and test time.
Fig. 3. Scan diagnostic flow using Calibre, FastScan and TestKompress. |
Author bios:
Greg Aldrich is the director of product marketing for the Design-for-Test (DFT) product group at Mentor Graphics. In this role, Aldrich is responsible for managing the direction of the DFT product line, including the popular TestKompress® and FastScan™ tools.
In the last several years Aldrich has held a variety of technical and product marketing positions at Mentor Graphics, most recently as a product marketing manager within the DFT group. Prior to joining Mentor, Aldrich served as an applications engineer at Sunrise Test Systems in San Jose, California and, previous to his work with Sunrise, spent 10 years as a systems design engineer and engineering manager at Amdahl Corporation in Sunnyvale, California.
Aldrich holds a bachelor’s degree in Electrical Engineering from the University of Illinois.
John Ferguson, Ph.D.
John Ferguson received a BS degree in Physics from McGill University in 1991, an MS in Applied Physics from the University of Massachusetts in 1993, and a PhD in Electrical Engineering from the Oregon Graduate Institute of Science and Technology in 2000. For the past five years he has worked extensively in the area of physical design verification. John is a Product Marketing Manager at Mentor Graphics in Wilsonville, Oregon, managing the Calibre DRC and LVS product line.