Despite a number of premature announcements of its demise, the industry-driving prediction known as Moore's Law continues to be fulfilled by the semiconductor industry every couple of years. This time it was immersion lithography that saved the day. David Greenlaw, Director of Process Technology at AMD, discusses how immersion lithography achieved this and looks at the challenges ahead in maintaining Moore's Law's pace.
How long can this continue? It has now been more than forty years since Gordon Moore predicted that computer chip companies would double the number of transistors patterned into each square inch of silicon every two years. All along the way people have asked how long this process can continue, and whether it would be laws of physics or economics that finally slow progress on these integrated circuits. For now, technologists have pulled a trick with the physics of light that will, once again, keep semiconductor progress on track. Immersion lithography, as the trick is known, takes a hint from nature to enhance the resolving power of the machines used to print the tiny circuit patterns on silicon wafers.
Who cares? Everybody cares, because the semiconductor industry sits at the foundation of the information economy. Whether you follow the great strides being made in biology's human-genome era, work in one of the many businesses that now depend on an internal server farm to process ever larger volumes of data for real-time decision making, or simply want the internet to respond faster with more content, you care about whether the semiconductor industry can maintain the relentless pace of technical innovation and capital investment dictated by what is known as Moore's Law. The high-end servers and consumer mobile devices of tomorrow will require silicon chips that deliver more performance at lower power consumption and lower cost than those we can physically build today, and the only way to enable such products is to continue packing more circuitry into the same silicon area. The individual transistors that make up this densely packed circuitry are brought closer together with each biennial shrink, boosting performance and reducing wasteful power consumption, essentially because on-chip signals don't have as far to travel. In addition, a larger number of chips can be packed onto each silicon wafer, reducing the cost of building each chip.
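The cost argument above is simple geometry: a shrink fits more dies on the same wafer, so the roughly fixed cost of processing that wafer is spread over more parts. A minimal sketch, with every figure (wafer cost, die sizes) an illustrative assumption rather than real product data:

```python
import math

# Back-of-the-envelope: shrinking the die fits more chips on a wafer,
# spreading the (roughly fixed) processed-wafer cost over more parts.
# All numbers here are illustrative assumptions, not real product data.
def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    # crude estimate that ignores edge loss and scribe lines
    return int(wafer_area // die_area_mm2)

wafer_cost = 5000.0  # assumed cost of one processed 300 mm wafer, in dollars
for area in (140.0, 70.0):  # a die before and after one full shrink
    n = dies_per_wafer(300, area)
    print(f"{area:5.0f} mm2 die -> {n} dies, ~${wafer_cost / n:.2f} each")
```

Halving the die area roughly doubles the die count, and so roughly halves the silicon cost per chip, before yield effects are considered.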
The difficulty with this “doubling every two years” is that it describes a form of exponential growth, where each additional unit of time multiplies the total by the same factor yet again. So over time, doubling becomes quadrupling, then 8-fold, then 16-fold increases and so on, with twenty such doublings giving a more than million-fold increase altogether. That kind of track record naturally leads to questions about how long this can continue, because whether you look at bacteria multiplying in a petri dish, or a mutual fund hoping to deliver consistent returns forever, systems characterised by exponential growth tend to follow the curve for a while and then eventually bump up against some boundary condition that changes the nature of the game. But so far, every time a major obstacle has appeared in the semiconductor technology roadmap, we've been able to engineer an elegant solution and keep going.
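The arithmetic behind “twenty doublings” is easy to check, and worth seeing once:

```python
# Compound doubling: one doubling per two-year technology node.
density = 1.0  # normalised transistor density at the starting node

for doubling in range(20):  # twenty nodes, i.e. forty years of shrinks
    density *= 2

# 2 ** 20 = 1,048,576: a more than million-fold increase
print(f"After 20 doublings: {density:,.0f}x the original density")
```

The same multiplier that looks modest over one node compounds into six orders of magnitude over a working lifetime, which is why every “boundary condition” matters so much.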
The cost of fabrication
Modern chip factories come with a price tag approaching 3 billion dollars, so a new semiconductor technology cannot just enable smaller structural dimensions to be built in silicon; it must do so in an affordable way. A new approach has to be “manufacturable”, meaning that after a time of introduction and debug, new production processes have to run around the clock, all day and all night for years on end.
Over the last decade, as semiconductor companies began to seriously ask themselves how they would pattern the structures the industry would need in 2007, several options emerged.
All were checked against a list of criteria covering basic technical capability, affordability and manufacturability. And to the surprise of just about everyone, immersion lithography emerged as the winner.
A production lithography machine is essentially an overblown slide projector with the optics hooked up backwards to shrink an image instead of magnify it. Since the colour of light used to project the desired circuit patterns determines the smallest images that can be resolved, the semiconductor industry has spent the last 15 years shrinking the wavelength of light used in lithography.
This means that the colour of the light was blue back then and has now been pushed into and beyond the deepest violet colours.
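The relationship between colour and resolving power that drove this progression is usually written as the Rayleigh criterion, R = k1 × λ / NA: the smallest printable feature scales with the wavelength λ, divided by the lens's numerical aperture NA. The wavelength steps below (mercury g-line at 436 nm through the 193 nm argon-fluoride laser) are the industry's real historical sequence; the k1 and NA values are illustrative assumptions, not any particular tool's specification:

```python
# Rayleigh criterion: minimum half-pitch R = k1 * wavelength / NA.
# k1 and NA below are illustrative assumptions, not real tool specs.
def min_feature(wavelength_nm: float, na: float, k1: float = 0.35) -> float:
    return k1 * wavelength_nm / na

steps = [("g-line (blue)", 436), ("i-line", 365), ("KrF", 248), ("ArF", 193)]
for name, wl in steps:
    print(f"{name:14s} {wl} nm -> ~{min_feature(wl, na=0.9):.0f} nm features")
```

Each wavelength step buys a proportional shrink, which is exactly why the industry kept pushing the light source further past violet.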
This progression to where we are today has not been simple, because each new wavelength of light required the development of new photosensitive materials to do the patterning along with new methods for correcting the light “diffraction” around sharp edges in the printed patterns.
All this has already brought us to the point where transistors in state-of-the-art circuits control current flowing through a tiny region measured in nanometres, more than 1,000 times smaller than the diameter of a human hair.
But the next step, the one we're all taking now into what is known as the “45nm generation” of semiconductor technology, uncovered a really ugly problem, one that has now been solved by immersion lithography, using a hint from the human eye.
This really ugly problem came from the fact that for wavelengths of light too far beyond the violet end of the visible spectrum, glass and many types of plastic stop transmitting light and start absorbing it. In a production environment, this absorption would lead to heating of the lenses, distortion, and a process that is not at all manufacturable.
There was a huge amount of effort, ultimately unsuccessful, spent on trying to solve this and a host of related problems. New materials were used to make lenses, and complicated optical problems with names like “birefringence” were solved. In the end, though, it turned out to be impossible to make all the materials needed for semiconductor lithography compatible with yet another wavelength reduction.
For a while, this problem of light absorption looked like a roadblock for the entire industry. Left unsolved, it would have meant a large break in the scaling described by Moore's law. But in a dramatic plot twist, we looked to the human eye, and added a thin droplet of water between the last lens and the silicon wafer. This provided the needed engineering solution, and allowed the industry to sidestep a showstopper.
How does this work? Human eyes are built with the lens up front, the retina to capture the image at the back, and a water-like fluid in between.
Most of the eye's resolving power doesn't actually come from the lens itself; it comes from having air on the outside and water on the inside. (This is also why it's hard to see underwater without goggles on: having the same material on both sides of the lens gives less resolving power than can be reached when different materials are involved.)
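In lens terms, the fluid raises the numerical aperture: NA = n × sin(θ), where n is the refractive index of the medium between the final lens and the wafer. In air n is essentially 1.0, capping NA below 1; water at 193 nm has n of about 1.44, so NA can climb above 1 without changing the wavelength. A minimal sketch, where the lens half-angle and k1 factor are illustrative assumptions:

```python
import math

# NA = n * sin(theta): immersion raises the refractive index n between
# the final lens and the wafer from ~1.0 (air) to ~1.44 (water at 193 nm).
# The half-angle and k1 values below are illustrative assumptions.
def numerical_aperture(n: float, theta_deg: float) -> float:
    return n * math.sin(math.radians(theta_deg))

theta = 64.0  # assumed maximum lens half-angle, in degrees
dry = numerical_aperture(1.00, theta)
wet = numerical_aperture(1.44, theta)
k1, wavelength = 0.3, 193.0
print(f"dry NA ~{dry:.2f} -> ~{k1 * wavelength / dry:.0f} nm half-pitch")
print(f"wet NA ~{wet:.2f} -> ~{k1 * wavelength / wet:.0f} nm half-pitch")
```

With these assumed numbers the same 193 nm light resolves features roughly 40% smaller, which is the enhancement that carried the industry into the 45nm generation without another wavelength change.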
So a few years ago, some crazy semiconductor technologists wondered whether it would be possible to build production lithography systems with a fluid between the last lens and the wafer. The idea was dubbed “immersion”, even though it was soon clear that the silicon wafer would not be submerged in water; we would just drag a droplet over the part of the wafer being printed at that moment.
Many industry gurus thought it would never work. Sure, simple optics showed that we could get enhanced resolution without a wavelength change by making this step, but what about around-the-clock manufacturability?
How would this little meniscus of water be generated and maintained while a wafer whizzes by underneath for high speed printing? What if water droplets dried on the wafer causing patterning defects? What if bubbles got trapped in the water and ruined the image? All of these questions have been addressed at research institutes and equipment suppliers since the early years of this decade and at chip manufacturers over the last 24 months.
Fast forward to late 2007. The news of the year from the semiconductor industry is that immersion lithography is here, and it is working. The bubbles are under control, defect levels are low, downtime is reasonable. Once again, the next doubling of circuit density will be delivered on time.
Yes, individual companies argue about the best time and place to introduce new production techniques, and often delay or speed up the implementation of a new technology by a year or two. The technical press often covers all this in exhausting play-by-play detail, but the real news is that the rumours of the demise of Moore's Law continue to be greatly exaggerated.
Still, the lingering question remains, because as soon as we've solved the current set of problems, there is always another shrink waiting two years down the road. How long can this continue? The next shrink beyond 45nm is due in 2009 and will bring several ugly problems of its own.
New metallic materials will be needed inside the transistors to keep leakage currents under control.
Lighter materials are planned to surround the copper interconnect wires to reduce “cross-talk” between neighbouring lines and lower power consumption overall. Even the immersion lithography folks have started talking about swapping out the water for a heavier liquid someday to further enhance the immersion effect. But overall, having sidestepped what looked like a true showstopper for the 45nm generation of 2007, it seems that we'll also have what we need to make it to the next shrink in 2009. And then we can start worrying about the one that comes after that, and then the one after that.