Way back in 1965, Gordon Moore, the co-founder of Intel, observed that the number of transistors that could be fitted on a silicon chip was doubling every year – a rate he later revised to every two years. Since the transistor count is related to processing power, that meant that computing power was effectively doubling every two years. Thus was born Moore’s law, which for most people working in the computer industry – or at any rate those younger than 40 – has provided the kind of bedrock certainty that Newton’s laws of motion did for mechanical engineers.
There is, however, one difference. Moore’s law is just a statement of an empirical correlation observed over a particular period in history and we are reaching the limits of its application. In 2010, Moore himself predicted that the laws of physics would call a halt to the exponential increases. “In terms of size of transistor,” he said, “you can see that we’re approaching the size of atoms, which is a fundamental barrier, but it’ll be two or three generations before we get that far – but that’s as far out as we’ve ever been able to see. We have another 10 to 20 years before we reach a fundamental limit.”
We’ve now reached 2020 and so the certainty that we will always have sufficiently powerful computing hardware for our expanding needs is beginning to look complacent. Since this has been obvious for decades to those in the business, there’s been lots of research into ingenious ways of packing more computing power into machines – for example, multi-core architectures, in which a CPU has two or more separate processing units, or “cores” – in the hope of postponing the awful day when the silicon chip finally runs out of road. (The new Apple Mac Pro, for example, is powered by a 28-core Intel Xeon processor.) And of course there is also a good deal of frenzied research into quantum computing, which could, in principle, be an epochal development.
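The catch with multi-core chips is that software only benefits if it is written to spread its work across the cores. A minimal sketch of the idea, in Python (the task and function names here are illustrative, not anything from the column – the standard library’s process pool hands each piece of work to a separate core):

```python
# Sketch: spreading independent, CPU-bound work across cores using
# Python's standard-library process pool.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Count primes below `limit` by naive trial division (CPU-bound)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [10_000, 20_000, 30_000, 40_000]
    # By default the pool creates one worker process per CPU core,
    # so the four counts can run simultaneously on a multi-core machine.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(count_primes, limits))
    print(results)
```

A single-core program would do those four counts one after another; the pool version does them in parallel, which is exactly the kind of restructuring that multi-core hardware demands of programmers.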
But computing involves a combination of hardware and software and one of the predictable consequences of Moore’s law is that it made programmers lazier. Writing software is a craft and some people are better at it than others. They write code that is more elegant and, more importantly, leaner, so that it executes faster. In the early days, when the hardware was relatively primitive, craftsmanship really mattered. When Bill Gates was a lad, for example, he and Paul Allen wrote a Basic interpreter for one of the earliest microcomputers, the MITS Altair 8800. Because the machine had so little memory, they had to fit it into just 4 kilobytes. They wrote it in assembly language to increase efficiency and save space; there’s a legend that for years afterwards Gates could recite the entire program by heart.
There are thousands of stories like this from the early days of computing. But as Moore’s law took hold, the need to write lean, parsimonious code gradually disappeared and incentives changed. Programming became industrialised as “software engineering”. The construction of sprawling software ecosystems such as operating systems and commercial applications required large teams of developers; these then spawned associated bureaucracies of project managers and executives. Large software projects morphed into the kind of death march memorably chronicled in Fred Brooks’s celebrated book, The Mythical Man-Month, which was published in 1975 and has never been out of print, for the very good reason that it’s still relevant. And in the process, software became bloated and often inefficient.
But this didn’t matter because the hardware was always delivering the computing power that concealed the “bloatware” problem. Conscientious programmers were often infuriated by this. “The only consequence of the powerful hardware I see,” wrote one, “is that programmers write more and more bloated software on it. They become lazier, because the hardware is fast they do not try to learn algorithms nor to optimise their code… this is crazy!”
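That complaint about algorithms is not abstract: the same job can cost wildly different amounts depending on how it is written. A small illustration (the examples and function names are mine, not the programmer’s quoted above) – two correct ways to find the duplicates in a list, one of which does vastly more work than the other as the list grows:

```python
# Two correct ways to find duplicate values in a list.

def duplicates_quadratic(items):
    """O(n^2): re-scans the whole prefix for every element."""
    dupes = []
    for i, x in enumerate(items):
        if x in items[:i] and x not in dupes:
            dupes.append(x)
    return dupes

def duplicates_linear(items):
    """O(n): a single pass, remembering what has been seen."""
    seen, dupes = set(), set()
    for x in items:
        if x in seen:
            dupes.add(x)
        seen.add(x)
    return sorted(dupes)

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
print(duplicates_linear(data))  # [1, 3, 5]
```

On ten items the difference is invisible; on ten million it is the difference between a blink and a coffee break – which is precisely the gap that ever-faster hardware kept papering over.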
It is. In a lecture in 1997, Nathan Myhrvold, who was once Bill Gates’s chief technology officer, set out his Four Laws of Software. 1: software is like a gas – it expands to fill its container. 2: software grows until it is limited by Moore’s law. 3: software growth makes Moore’s law possible – people buy new hardware because the software requires it. And, finally, 4: software is only limited by human ambition and expectation.
As Moore’s law reaches the end of its dominion, Myhrvold’s laws suggest that we basically have only two options. Either we moderate our ambitions or we go back to writing leaner, more efficient code. In other words, back to the future.
What I’m reading
John Naughton’s recommendations
What just happened?
Writer and researcher Dan Wang has a remarkable review of the year in technology on his blog, including an informed, detached perspective on the prospects for Chinese domination of new tech.
Algorithm says no
There’s a provocative essay by Cory Doctorow on the LA Review of Books blog on the innate conservatism of machine-learning.
Fall of the big beasts
“How to lose a monopoly: Microsoft, IBM and antitrust” is a terrific long-view essay about company survival and change by Benedict Evans on his blog.