
Pondering the future of Moore’s Law

Mar 26, 2013  |  Guest column

In this guest column, Eric Gulliksen, a senior analyst at VDC Research, ponders the future of Moore’s Law: Gordon Moore’s 1965 prediction that the number of transistors on an integrated circuit would double approximately every two years. Despite predictions that the trend can’t continue unabated indefinitely, there’s still cause for optimism.

 

Moore’s Law — Where Do We Go From Here?
by Eric Gulliksen

 

In case you’ve been living under a rock and don’t know this: in 1965, Intel co-founder Gordon E. Moore predicted that the number of transistors on an integrated circuit would double approximately every two years. Empirically based on economic factors as well as technical ones, his observation has proved so accurate that it has been given the title “Moore’s Law.” Certain pundits continually predict that we are reaching the end of the trail, and that the trend cannot continue because miniaturization technology will reach its limit. (It should be noted that Moore’s Law doesn’t specify the size of the IC die; logically one should be able to fit more transistors on a larger die — but that’s another story.)
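To put numbers on that doubling: a two-year period implies exponential growth,

    N(t) = N_0 * 2^(t/2),

where N_0 is the transistor count in some base year and t is the years elapsed. Over two decades this compounds to a factor of 2^10 = 1024, roughly three orders of magnitude every twenty years.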

Intel’s product offerings for 2012 are built on the Ivy Bridge architecture, which uses a 22 nanometer fabrication process. The firm’s next-generation microarchitecture, code-named Haswell, is expected to arrive in 2013 and will continue to use the 22 nm process. In 2014, the process will be shrunk to 14 nm with Broadwell. Down the road, the process is expected to shrink even further, getting down to 10 nm by 2018.
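To a first approximation (ignoring the many caveats of real process nodes, where node names and actual feature dimensions have drifted apart), transistor density scales with the inverse square of the feature size, so a shrink from 22 nm to 14 nm allows roughly

    (22 / 14)^2 ≈ 2.5

times as many transistors in the same die area, consistent with about one doubling per generation.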

How long can this go on?

Well, there is certainly at least one hard limit, which is the size of the transistors themselves. A combined team of researchers from the University of New South Wales, the University of Melbourne and Purdue University has recently created a functional transistor built from a single phosphorus atom. Furthermore, they have also developed a wire made from a combination of phosphorus and silicon, one atom tall and four atoms wide, that behaves like a copper wire. Granted, this technology is far from practical at this point, as it must be maintained at a temperature of minus 391 degrees F, but it does show what is possible.

As circuits get smaller and smaller, other laws of physics come into play, causing additional technical problems. Dr. Michio Kaku (surely you’ve seen him on TV – if not, you should!) of CCNY says that, once transistors shrink to 5 atoms wide (projected for 2020), the Heisenberg Uncertainty Principle will come into play. It states that one cannot simultaneously know both the position and the momentum (loosely, the velocity) of a particle with arbitrary precision. Thus one cannot know precisely where an electron actually is, and therefore cannot confine it to a wire. Since free electrons can’t be allowed to go bouncing about in a logic circuit, where they may cause shorts (or, at least, logical errors), this may prove to be a practical limit.
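A rough, illustrative calculation shows why this bites at these scales. The uncertainty principle says

    Δx · Δp ≥ ħ / 2.

Confining an electron to Δx ≈ 1 nm (about five silicon atoms) forces Δp ≥ ħ / (2Δx) ≈ 5 × 10^-26 kg·m/s, which for an electron of mass 9.1 × 10^-31 kg corresponds to a velocity spread on the order of 6 × 10^4 m/s. An electron that uncertain about its own whereabouts is hard to keep inside a wire.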

Some pundits have theorized, though, that getting down to these sizes may allow the development of true quantum computing, wherein information is processed on a more-than-binary level. This remains to be seen.
 

Interim solutions

So far, I’ve been talking about single-atom transistors. Although one (count it — one) has actually been made, the technology is a long way from being ubiquitous.

However, like global warming and climate change, the single-atom “wall” is real. And we are rapidly approaching it. Use of GPUs for general-purpose computing is a hedge against the wall; these have far more transistors than conventional CPUs and facilitate parallel computing. Intel, NVIDIA and AMD are all pursuing this approach to supercomputing. But this isn’t a long-term solution; GPUs are faced with the same wall.
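To make the GPGPU idea concrete, here is a minimal CUDA sketch (my own illustrative example, not vendor code): a million additions are spread across a million lightweight GPU threads, the data-parallel style that lets a GPU’s thousands of simple cores outpace a CPU’s handful of complex ones on suitable workloads.

    #include <cstdio>
    #include <cuda_runtime.h>

    // One thread per element: the fine-grained data parallelism
    // that a GPU supplies by the thousands.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                  // one million elements
        const size_t bytes = n * sizeof(float);

        // Host-side buffers.
        float *ha = new float[n], *hb = new float[n], *hc = new float[n];
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device-side buffers, with explicit copies to and from the GPU.
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        vecAdd<<<(n + threads - 1) / threads, threads>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %.1f\n", hc[0]);         // expect 3.0
        cudaFree(da); cudaFree(db); cudaFree(dc);
        delete[] ha; delete[] hb; delete[] hc;
        return 0;
    }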

Intel is pushing toward the Moore’s Law limit through cooperative efforts with several outside firms. Intel has invested a staggering US$4.1 billion in ASML, a Dutch semiconductor equipment manufacturer. The investment will ultimately yield Intel a 15% share of ASML, and provides US$3.3 billion for R&D to make extreme ultraviolet lithography, or EUVL (using super-short UV wavelengths for the etching process), practical, and to develop 450-mm wafers (as opposed to today’s 300-mm wafers). The former will enable 10-nm processes, while the latter will reduce manufacturing costs.
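The wafer-size arithmetic is simple: cost per die tracks roughly with dies per wafer, and a 450-mm wafer has

    (450 / 300)^2 = 2.25

times the area of a 300-mm wafer, so each wafer pass yields more than twice as many candidate dies. The economic argument assumes, of course, that per-wafer processing costs rise considerably more slowly than the area does.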

And Intel isn’t the only one; Samsung has followed suit with an investment in ASML, and Taiwan Semiconductor Manufacturing Company, Ltd. (TSMC) has also made a significant investment. TSMC claims to be the world’s largest independent semiconductor foundry and, although it is currently building three 300-mm wafer fabs, its current production is limited to 200-mm.

Increasing density by shrinking transistor size is only one way of battling the approaching wall. TSMC and one of its rivals, GlobalFoundries (GloFo), as well as Intel and the rest of the usual suspects, are actively pursuing 3-D chip technology. 3-D transistors have already shipped: Intel’s Ivy Bridge parts use the company’s “tri-gate” implementation of the FinFET, a 3-D transistor structure that promises to both increase speed and reduce power consumption.
 

3-D ICs

3-D integrated circuits, which will allow far greater transistor density in a given planar footprint, are on their way. However, fabricating them is not a trivial matter. Early versions involved stacking dice atop one another with an insulating layer between, then interconnecting the dice through a rather laborious process. This was called “Chip Stack MCM,” and didn’t produce a “real” 3-D chip. But, by 2008, 3-D IC technology had progressed to the point that four types had been defined, as follows:

  1. Monolithic, wherein components and their interconnections are built in layers on a single wafer, which is then diced into 3-D chips. This technology has been the subject of a DARPA grant, with research conducted at Stanford University.
  2. Wafer-on-Wafer, wherein components are built on separate wafers, which are then aligned, bonded and diced into 3-D ICs. Vertical connections comprise “through-silicon vias” (TSVs) which may either be built into the wafers before bonding or created in the stack after bonding. This process is fraught with technical difficulties, not the least of which is relatively low yield.
  3. Die-on-Wafer, where components are built on two wafers. One is then diced, with the individual dice aligned and bonded onto sites on the second wafer. TSV creation may be done either before or after bonding. Additional layers may be added before the final dicing.
  4. Die-on-Die, where components are built on multiple dice which are then aligned and bonded. TSVs may be created either before or after bonding.

There are obvious technical difficulties and pitfalls, no matter which approach is used. These include yield factors (a single defective die may make an entire stack useless); thermal concerns (caused by the density of components); difficulty of automating manufacture; and a lack of standards.
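The yield factor compounds quickly. As an illustration (the 90 percent figure is hypothetical): if each die independently has yield y, a stack of n dice yields y^n, so four dice at y = 0.90 give

    0.90^4 ≈ 0.66,

meaning a third of the finished stacks would be scrap even though every individual layer is of respectable quality.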

In my layman’s opinion, a new approach to 3-D technology may be needed before it becomes truly viable. Currently, components are built on wafers through the selective removal of material. Construction of 3-D chips could be simplified through selective deposition of material rather than its removal. However, that’s beyond today’s state of the art.

As we look at biological equivalents, though, it’s very clear that brains are 3-D structures. I doubt that true artificial intelligence can be realized in a relatively small package without the development of true 3-D chips, and the physical limits underlying Moore’s Law will ultimately stymie continued development of planar chip technology.
 

Of carbon nanotubes and small groups of atoms

So far, we’ve touched on a variety of new developments that may allow continued miniaturization, despite predictions of doom from some pundits. We’ve talked about some interesting things like single-atom transistors, 3-D ICs and extreme ultraviolet lithography.

Although conventional wisdom places the limit of silicon transistors at about 11 nm, some folks at Intel have said that they have a solution for shrinking silicon down to 10 nm, and think that they may be able to go as far as 5 nm. But the science-fiction nut in me is most fascinated by new developments at IBM and Berkeley.

Terrestrial life as we know it is, of course, based on carbon. SF writers — and, indeed, some scientists — have proposed that, because carbon and silicon share many chemical properties (for example, the ability to form long-chain polymers), it might be possible to not only derive a silicon-based organic chemistry, but to actually have silicon-based life somewhere in this wondrous and infinite universe. Interesting, but as yet still science fiction.

However, what if we were to turn this around, and look at carbon and silicon from another angle? That’s essentially what’s going on at IBM, Berkeley and other research facilities.

As we all know, today’s semiconductor technology is based on silicon. But what if we were to substitute carbon for silicon? Would it be possible to create carbon-based semiconductors? Carbon atoms are far smaller than their silicon counterparts, so this might enable heretofore unimaginable miniaturization.

IBM’s people have successfully fabricated and evaluated a structure comprising an array of 10,000 carbon nanotube transistors on a single substrate. In essence, a carbon nanotube is a single-atom-thick sheet of carbon rolled into a tube. As grown, nanotubes appear as a mix of metallic and semiconducting types but, to create a computing device, the metallic types must be removed. And, as if that weren’t tough enough, the placement and alignment of the tubes on a substrate must be precisely controlled. IBM has been able to accomplish this using ion-exchange chemistry. Researchers at Berkeley have been able to accomplish a similar feat, producing arrays that are both flexible and stretchable, which show great promise for developments such as foldable electronic pads, coatings that can monitor surfaces for cracks and other potential failures, “smart” clothing, and even artificial electronic skin.

It is theorized that carbon nanotube transistor arrays, which can be produced with existing manufacturing processes, have the potential to yield CPU structures that are not only far smaller than their silicon counterparts, but are five to ten times faster than today’s silicon chips.

It’s not clear to me whether a single nanotube can only carry a single transistor, or whether it might be possible to produce many transistors at different locations on the surface of a single nanotube. While the latter may not be “do-able” today, who can say what will be possible tomorrow?

IBM scientists have also determined that only twelve atoms are required to magnetically store a single bit of information. This is accomplished by aligning the atoms antiferromagnetically, so that each twelve-atom group does not interfere with other groups of atoms located nearby. It is projected that this technology could increase magnetic storage density on a hard disk drive by a factor of 100.

I suppose (though this is pure speculation) that it may also be possible to create ultra-dense storage through the use of carbon nanotubes. And, since nanotubes are inherently 3-D structures, they may lend themselves to the fabrication of 3-D chips as well.

Whatever happens, two things are clear to me. First, we are nowhere near the end of miniaturization and, second, the ability to produce computing devices with human, or even superhuman, computing ability may be fairly close. Can the development of truly intelligent machines with the ability to both replicate and evolve be that far away? I certainly hope I’m still around to see this.
 

About the author: Eric Gulliksen has been a lead analyst in VDC’s Embedded Hardware practice for ten years. Prior to joining VDC, he worked in industry for over thirty years, half in engineering and half in sales and marketing, with executive-level responsibilities in both.

(The contents of this post are copyright © 2012-2013 VDC Research Group Inc., and have been reproduced by LinuxGizmos with permission.)
 
