
Live and let die

The need for known quality die in system-in-packages is placing new demands on automated test equipment. Peter O'Neill and Tom Vana of Agilent Technologies explain how new testing approaches can weed out unreliable devices without pushing up prices.

System-in-a-package (SiP) technology is being adopted to help reduce the size of end-use devices such as mobile phones, personal digital assistants (PDAs) and digital cameras while increasing system power efficiency, ruggedness and signal integrity.

This shift to integrating analogue and RF IC technologies with logic and memory ICs in a SiP is allowing for smaller end use devices with lower overall system current draw, at price points demanded by these markets.

By 2007, the SiP market is projected to grow to about 9.1 billion units. In order to meet this rapid growth, manufacturers will need to achieve high manufacturing yield and reliability for SiPs. Here, we take a look at how automated test equipment (ATE) can help increase the yield and lower the costs of SiPs.

SiPs contain multiple heterogeneous and homogeneous ICs, along with passives and, in some cases such as automotive applications, microelectromechanical systems (MEMS), all packaged together in a planar, stacked or combination topology.

The ICs can range from DRAM to digital signal processors (DSPs) to systems on a chip (SOCs), and can include analogue and RF parts. These ICs can take the form of bare dice or packaged parts that are then integrated into the SiP.

The testing strategies for SiPs depend on the number of bare dice that have been integrated and on the compound-yield problem: because the yield of the assembled SiP is the product of the yields of its component dice, a single low-yielding die can drag down the whole package. Prior to testing, it is important to have known quality die (KQD) for each of the IC components.
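As a rough illustration of compound yield, the short sketch below multiplies hypothetical per-die yields; the die mix and figures are ours, not drawn from any particular SiP:

```python
# Illustrative compound-yield arithmetic (figures are hypothetical):
# the assembled SiP is only as good as the product of its component die yields.

die_yields = {
    "baseband": 0.995,        # 99.5% of dice defect-free after probe
    "media_processor": 0.98,
    "dram": 0.99,
    "flash": 0.99,
}

sip_yield = 1.0
for name, y in die_yields.items():
    sip_yield *= y

print(f"Compound SiP yield: {sip_yield:.3f}")  # ~0.956
# Four dice at 98-99.5% each already scrap ~4.4% of finished SiPs,
# which is why each die must be of known quality before assembly.
```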

The IC component suppliers must provide the SiP integrator with a breakdown of what has and has not been tested and the outgoing defect rates. The SiP integrator can then determine if any further testing is needed to achieve the quality target for the intended market, balanced against the costs of testing, scrapped material and effort.

As KQD in consumer markets replaces the known good die (KGD) approach used in military/aerospace applications, manufacturers now require new ATE innovations and test techniques to improve wafer test effectiveness.

One example of innovation in the ATE industry is the shift to supplying single scalable platforms. Today ATE suppliers offer platforms that scale digital pin-count, digital speed, and analogue and RF test capabilities. The single scalable platform concept ensures upgradability, allowing test capability to match the volumes and test requirements of the product mix, including bare die and package.

For individually packaged die, the cost of ATE for wafer probe can range from one-half to one-third the cost of ATE for packaged functional test. For new designs or new IC processes that are ramping into production for a SiP project, meeting the KQD requirements may require upfront test capability at wafer probe similar to that typically used for package test.

Once the design has fully ramped or the IC process has stabilised, the test deployed will have excess capability, resulting in an excess cost of test. Using an over-configured tester (excess capability) that adds a 20% purchase price premium raises the cost of test by approximately 12%. Given the cost-competitive markets these ICs and SiPs serve, cost of test is a major issue. Here we take a look at a cell phone SiP case study. Let's assume the initial wafer probe test for the media processor die requires package-like test capability. The ATE capital cost would be as much as two times that of a standard wafer probe test.



And let's assume the baseband die has been in volume production for six months; the design is therefore known to be solid, the fab process is stable and the SiP integrator has accepted KQD using standard wafer probe tests.

If the same test capability were applied to both the media processor and the baseband die, the baseband cost of test would increase by about 60%.
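One simple model that reconciles these figures is to assume tester depreciation makes up roughly 60% of the total cost of test, with probe cards, labour and floor space unaffected by tester price. The sketch below is ours, not a published cost model, but it reproduces both the ~12% and ~60% numbers:

```python
# Hypothetical cost-of-test model consistent with the figures quoted above.
# Assumption (ours, not the article's): tester depreciation is ~60% of the
# total cost of test; everything else is unaffected by tester purchase price.

CAPITAL_SHARE = 0.60  # assumed fraction of cost of test due to tester depreciation

def cost_of_test_increase(price_premium: float, capital_share: float = CAPITAL_SHARE) -> float:
    """Fractional rise in cost of test for a given tester price premium."""
    return price_premium * capital_share

print(cost_of_test_increase(0.20))  # 0.12 -> the ~12% rise quoted for a 20% premium
print(cost_of_test_increase(1.00))  # 0.60 -> a 2x-price tester raises cost of test ~60%
```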

Bringing the cost of test back in line with a given die's test requirements will require additional platform innovations. A flexible performance library allows per-pin licensing of digital vector speed and depth that is not locked to a given tester pin or to a given tester in the enterprise.

This innovation allows for targeting very specific channels on classes of die that require a level of digital performance that would be too expensive to deploy widely across all tester pins and die tested.

A base level performance (and cost of test) can be established for given classes of die. Any tester in the enterprise can be reconfigured easily and economically, through software licensing, for die test requiring higher digital performance, including speed and vector memory.
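As a purely hypothetical sketch of how such a floating performance library might be used (the class names, data rates and licence counts below are illustrative, not Agilent's actual software interface):

```python
# Hypothetical sketch of the flexible performance library idea: licences
# for digital speed and vector depth float across the tester fleet and are
# bound to specific channels only for the dice that need them.

from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    speed_mbps: int = 200      # base digital data rate (illustrative)
    vector_depth_m: int = 16   # base vector memory in Mvectors (illustrative)

@dataclass
class PerformanceLibrary:
    floating_licences: int     # pool shared across every tester in the enterprise

    def upgrade(self, channel: Channel, speed_mbps: int, depth_m: int) -> None:
        """Bind one floating licence to a channel needing higher performance."""
        if self.floating_licences <= 0:
            raise RuntimeError("no floating licences left in the enterprise pool")
        self.floating_licences -= 1
        channel.speed_mbps, channel.vector_depth_m = speed_mbps, depth_m

# Only the channels driving the media processor's at-speed pins are upgraded;
# baseband channels stay at the cheaper base configuration.
lib = PerformanceLibrary(floating_licences=8)
clk = Channel("media_proc_clk")
lib.upgrade(clk, speed_mbps=800, depth_m=64)
```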

There are two basic approaches to producing die of high known quality. The obvious approach is to move the package (final) test to wafer (probe). This, however, is challenged by the lower signal and power integrity connection usually provided by a probe compared to a socket.

It also usually relies on the direct measurement of the functional specifications. The alternative is to detect the same defects found by the package tests with indirect tests that do not require the high signal integrity connection or expensive instruments of the package test. This approach is called defect-based testing (DBT).

The most widely used DBT method is quiescent power supply current (Iddq) testing. Iddq is a DC measurement that relies on simpler connections and instruments than at-speed testing. Iddq can replace at-speed scan or functional testing to a reasonable quality level because digital logic speed failures are often caused by resistive bridging or capacitively coupled open defects, which also elevate Iddq.

Since Iddq is a parametric measurement, unlike the pass/fail nature of most digital tests, it also detects latent or reliability defects, which may not cause a failure until later in the product's life.

Normal field-effect transistor (FET) off-state leakage rises relative to defect current with each process shrink. As a result, it is mandatory to measure Iddq at multiple vectors and undertake "signal processing" to extract the defect "signal" from the background "noise" and so improve this test method.

Another consideration with the Iddq method is that defects cause unexpected values called "outliers".

At deep sub-micron (DSM) technologies, fault-free and faulty Iddq distributions overlap because FET off-state current increases faster than on-state current (creating a current-measurement dynamic-range problem). This pass/fail overlap requires changing the sequence from marking each die as good/bad in the test flow to testing enough dice on the wafer to find a pattern, and then making fault-free/faulty decisions.

With this Iddq method, signal processing and contrasting conditions are used for pattern recognition and to establish fault-free/fault limits. This requires a large number of measurements and calculations.

To determine a distribution within a die, a vector pattern is repeated to establish an Iddq max/min ratio. Die-to-die vector comparison can also be carried out by calculating figures of merit such as nearest neighbour residuals (NNR), in which the mean or median Iddq of neighbouring dice is used to estimate the centre die's Iddq, and neighbour current ratios (NCR), in which the Iddq of one die is compared as a ratio to that of its neighbour.

For fault-free dice, the ratio should approach one for the same vector. Wafer spatial patterns can also be examined for high-frequency spatial variation, an indicator of a faulty die. Techniques like these will assure Iddq's viability, at least for battery-operated chips.
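The sketch below illustrates NNR and NCR on a wafer map of per-vector Iddq readings; the array shapes, seed data and pass/fail limits are illustrative assumptions, not production values:

```python
# Minimal sketch of the die-to-die Iddq statistics described above,
# assuming every die on the wafer has already been measured per vector.

import numpy as np

def nnr(wafer_iddq: np.ndarray, row: int, col: int) -> np.ndarray:
    """Nearest neighbour residual: centre-die Iddq minus the median Iddq
    of its immediate neighbours, computed per vector."""
    r0, r1 = max(row - 1, 0), min(row + 2, wafer_iddq.shape[0])
    c0, c1 = max(col - 1, 0), min(col + 2, wafer_iddq.shape[1])
    block = wafer_iddq[r0:r1, c0:c1, :].astype(float)
    block[row - r0, col - c0, :] = np.nan  # exclude the centre die itself
    neighbour_median = np.nanmedian(block.reshape(-1, block.shape[2]), axis=0)
    return wafer_iddq[row, col, :] - neighbour_median

def ncr(wafer_iddq: np.ndarray, a: tuple, b: tuple) -> np.ndarray:
    """Neighbour current ratio: per-vector Iddq of die a over die b;
    close to 1.0 for a fault-free pair on the same vector."""
    return wafer_iddq[a[0], a[1], :] / wafer_iddq[b[0], b[1], :]

# wafer_iddq: (rows, cols, vectors) array of Iddq readings in microamps
rng = np.random.default_rng(0)
wafer_iddq = np.abs(rng.normal(5.0, 0.3, size=(10, 10, 32)))
residuals = nnr(wafer_iddq, 4, 4)
ratios = ncr(wafer_iddq, (4, 4), (4, 5))
suspect = residuals.max() > 1.0 or np.abs(ratios - 1.0).max() > 0.5  # illustrative limits
```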

These Iddq methods in turn continue to drive demand for precision current measurement in the ATE system. This requires a low-noise ATE architecture with high-resolution device power supplies (DPS) that offer fast Iddq measurement capabilities, to meet the throughput requirements of the increasing number of Iddq measurements.

A cell phone SiP could integrate a baseband, media processors and memories with dice manufactured in a 90nm process. According to failure analysis experts, digital yield loss dominance is shifting from hard failures, primarily stuck-at faults, to parametric failures, primarily path delay and transition time faults. Capturing these increasingly common timing faults may require testing at speed.

And as mentioned above, select channels of the ATE that need to work at-speed for either functional or AC-scan can be provisioned for the higher data rates, so other wafers not requiring at-speed test will not be burdened with the cost of higher speed pins.

As more IC designs are targeted at SiPs and portable applications, the output drivers are no longer required to drive the capacitance of a signal path that propagates the signal onto a PCB and then to a receiver.

A test mode for the output driver is therefore required so that it can drive the signal to the ATE pin. At the same time, to screen these delay and edge-rate faults the ATE must deliver the necessary edge placement accuracy (EPA) and rise times at the die contact points.

All defect-based tests, including Iddq, are indirectly related to conventional functional or structural tests. Therefore, it is beneficial to stress the defects to increase their effects on the circuit under test by accelerated ageing. Stress makes both initial (yield) and latent defects easier to detect.

Most semiconductor defects are accelerated by a combination of temperature and voltage. Since probing at high temperature is difficult and since temperature acceleration is slow, requiring hours of stress time, temperature stressing is most practically done on packaged parts in separate burn-in ovens, which is not practical for producing good dice for SiPs.

In contrast, voltage stress can easily be applied by probes, assuming a high voltage can be applied without causing destructive breakdown; it is possible to accelerate many defects at probe. Voltage stress is especially effective for identifying bridging defects.

Since the defects to be stressed are the same ones whose quiescent current is to be measured, the Iddq vectors are also the best stress vectors. The easiest way to apply this stress is to raise the constant power supply and input voltages.

In today's nanometre processes, the voltage required to highly accelerate interconnect defects will cause a toggling FET to break down destructively. Fortunately, a metal oxide semiconductor field-effect transistor's (MOSFET) drain-to-source breakdown voltage is higher when it is off than when it is on, and, when static, every FET in a complementary metal oxide semiconductor (CMOS) gate is in series with an off FET, which will take the stress voltage.

This led to the development of what IBM calls enhanced voltage screen (EVS), in contrast to the older dynamic voltage screen (DVS). The sequence is to clock in a pattern at nominal voltage, stop the clock, bump the supply to the stress voltage and hold it for a period, return the supply to its nominal voltage, measure Iddq, and proceed to the next pattern.

At the end of the vector set, the Iddq measurements can be analysed for a signature that determines if the part contains a defect. Depending on which signatures have been found useful for a given product, Iddq may also be measured at the stress voltage or before the stress bump.
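The EVS sequence can be sketched as follows; apply_pattern, stop_clock, set_supply and measure_iddq stand in for a real tester interface, and the voltages and hold time are illustrative only:

```python
# Sketch of the enhanced voltage screen (EVS) loop described above, written
# against a hypothetical tester API (placeholder calls, not a real ATE library).

import time

V_NOM, V_STRESS = 1.2, 1.8  # illustrative nominal and stress supply voltages (V)
STRESS_HOLD_S = 0.005       # illustrative hold time per pattern

def evs_pass(tester, patterns, iddq_limit_ua: float) -> bool:
    iddq_readings = []
    for pattern in patterns:
        tester.apply_pattern(pattern, vdd=V_NOM)  # clock in pattern at nominal voltage
        tester.stop_clock()                       # hold the circuit static
        tester.set_supply(V_STRESS)               # bump the supply to the stress voltage
        time.sleep(STRESS_HOLD_S)                 # hold the stress for a period
        tester.set_supply(V_NOM)                  # return the supply to nominal
        iddq_readings.append(tester.measure_iddq())
    # Signature analysis: here just a simple limit; production flows use the
    # statistical post-processing described earlier (NNR, NCR and the like).
    return max(iddq_readings) < iddq_limit_ua

# Usage: evs_pass(my_tester, scan_patterns, iddq_limit_ua=50.0)
```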

Once the SiP integrator has packaged all the passives, KQD die and known good memory, some level of package test must be performed to ensure the targeted defect rate in PPM is not exceeded. If defects are found only in the package interconnect, then this might be a simple system-level test. However, the packaging process may create defects in the dice, which would then require testing the internals of the affected dice.

When the defect rates out of wafer probe exceed the target system defect rate, more test coverage will be required at package.

The package test will most likely contain some level of at-speed functional test for digital, as well as RF and/or analogue tests and memory test to capture any false positive die or packaging defects.

Test modules created for the die parts can be leveraged into the package test flow, provided a common platform is used at both wafer probe and package test. Package test ranges from simple interconnect system test to comprehensive DC, SCAN, at-speed functional, analogue, RF and memory test.

Conclusion
The growth in the use of SiPs, driven mainly by the price-sensitive wireless, consumer and automotive markets, is requiring a shift from KGD to delivering product of known quality with KQD. As a result, different test strategies and methods are being deployed at wafer probe and package test.

To meet these requirements cost-effectively, the ATE platform and its underlying architecture must support a wide range of test and reliability screening methods, accommodating the large disparities in test times and resource utilisation across the range of dice integrated into a SiP.
