
Moore’s law is getting expensive

Not only taking longer, costing more too—do we care?

Jon Peddie

Gordon Moore’s observation in 1965 about the increasing density of transistors per square millimeter of silicon, at a constant price and with ever-increasing performance, is undergoing a tremendous change this year and looking forward. In 2017 the cost of a 300-mm silicon wafer went up 20% over the price a year earlier, and the silicon wafer suppliers have indicated that they will raise prices another 20% in 2018. This is partially due to the increased demand for higher-quality silicon ingots as transistor feature sizes shrink toward atomic scale, and partially due to demand exceeding supply, driven largely by memory.

These cost increases, while feature size continues to shrink (by 2018 we should be seeing the first 7-nm parts), are one of the reasons many people are declaring the end of Moore’s law. It should be pointed out that Moore’s law is in reality an observation Gordon Moore made about the economics of semiconductor manufacturing: the seemingly unlimited ability to shrink feature size at a regular cadence while holding manufacturing costs steady. Originally the industry stepped down to a smaller transistor feature size every 18 months, doubling the density of transistors. Ten years later Moore revised the doubling period to two years. That was still phenomenal, and the electronics and computer industries enjoyed tremendous growth and new performance levels every cycle. By 2012 the cadence had stretched to 30 months, and predictions are that by 2025 the ability to scale will end, at least with the materials and structures being used today.
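To see why the cadence matters, here is a minimal Python sketch that compounds the density doublings described above; the cadences and the ten-year horizon are illustrative choices for this post, not figures from Moore’s papers.

```python
# Transistor density compounds as 2 ** (elapsed_months / cadence_months).
# Cadences below are illustrative: the original 18-month pace, Moore's
# revised 24-month pace, and the roughly 30-month pace reached by 2012.

def density_multiplier(elapsed_months, cadence_months):
    """Factor by which transistor density grows after elapsed_months."""
    return 2 ** (elapsed_months / cadence_months)

HORIZON = 120  # ten years, in months
for cadence in (18, 24, 30):
    gain = density_multiplier(HORIZON, cadence)
    print(f"{cadence}-month cadence: about {gain:.0f}x density in ten years")

# Prints roughly 102x, 32x, and 16x: the same decade buys far less
# density improvement as the cadence stretches out.
```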

In the meantime, companies like Nvidia are not slowing down. Nvidia is building the largest chip in the world (a temporary distinction until Intel brings out its dGPU), and the cost of a wafer for such a device has to be a significant factor, doesn’t it?

The shrinking feature size of semiconductors (Heinz Nixdorf Museum/Paul Townend)

So how will the semiconductor suppliers deal with that? Will they simply pass the additional costs on to their customers, try to absorb them, or spread them out across other product lines? And do we even care?

A Ford Mustang cost $2,427 in 1965, when Moore made his observation, which is equivalent to $18,326 today. A new Ford Mustang sells for about $25,200 today, nearly a 37.5% increase over the inflation-adjusted price of the original.

When IBM introduced the PC in 1981, it sold for $1,565 (approximately 22% of the cost of a new Mustang at the time), which is the equivalent of $4,150 today. A new desktop PC today, with monitor and keyboard, sells for as little as $600: about 14.4% of the inflation-adjusted price of the original, and roughly 2% of the cost of a new Mustang. And the PC will have more, and faster, memory, a larger and higher-resolution screen, and a processor over a thousand times faster with eight times the word size. That’s what Moore’s law has done for us, and it has done so every year for the past 52 years.
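Those back-of-the-envelope comparisons are easy to reproduce. Here is a small sketch that takes the dollar figures quoted above as given; the inflation-adjusted values are assumptions carried over from the text, not recomputed from CPI tables.

```python
# Price comparisons using the dollar figures quoted in the post; the
# inflation-adjusted values are taken as given, not recomputed from CPI.

mustang_1965_adjusted = 18_326   # $2,427 in 1965, in today's dollars
mustang_today = 25_200           # approximate price of a new Mustang

ibm_pc_1981_adjusted = 4_150     # $1,565 in 1981, in today's dollars
desktop_pc_today = 600           # low-end desktop with monitor and keyboard

mustang_increase = (mustang_today / mustang_1965_adjusted - 1) * 100
pc_share_of_original = desktop_pc_today / ibm_pc_1981_adjusted * 100
pc_share_of_mustang = desktop_pc_today / mustang_today * 100

print(f"Mustang: {mustang_increase:.1f}% above its inflation-adjusted 1965 price")
print(f"Desktop PC: {pc_share_of_original:.1f}% of the inflation-adjusted IBM PC price")
print(f"Desktop PC: {pc_share_of_mustang:.1f}% of a new Mustang")

# Reproduces the roughly 37.5%, 14%, and 2% figures cited above.
```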

So, if things slow down to, say, three years or more, and a new computer costs maybe 1% to 2% more, can we legitimately complain about that, keeping in mind we’ll be getting a bigger, faster PC in the process?

Not only will the cost of the semiconductors not be a significant factor, but when compared to the work product of the new processors, the bang per buck is still very much exponential. If you have any doubts about that, go watch some of the presentations from the Supercomputing 2017 (SC17) conference.
