Swedish artist Simon Stålenhag paints images of suburban Sweden in the 1980s transformed by some unexplained technological breakthrough.
His collection of illustrations entitled Tales from the Loop is written as a memoir of childhood in an alternate reality where scientific discoveries spurred on massive technological research projects. His landscapes, and the little vignettes that accompany them, hint at revolutions in transport, energy and communications amid the Cold War arms race. But they also depict the way in which experimental facilities and machinery are often abandoned in the rush to improve and modernise. Many of Stålenhag’s machines are hulking, neglected landmarks, decommissioned or partially buried by snow and sand.
But it’s worth considering whether these images should seem so far-fetched. For most of the 20th century the pace of technological progress seemed to be accelerating in every field. New ways of generating and transferring electricity were discovered every few years, and planners and engineers could barely keep up. Some cities were still decommissioning gas streetlamps when nuclear fission was discovered. Meanwhile advances in materials science took us from crystal radios to transistors to silicon microchips, which have adhered to Moore’s Law for the last fifty years.
At the same time medical science accelerated at an astounding rate: in 1900 one of the most common surgical procedures – the Caesarean section – was still one of the most dangerous. Surgeons frequently operated without gloves and the procedure had a 10% mortality rate. By the end of the 1960s the discovery of blood types, advances in transfusion medicine, antibiotics and improvements to surgical equipment had reduced that rate to 0.00013%. Those technologies spurred on more and more ambitious procedures, culminating in the first successful human heart transplant in 1967.
But nowhere was technological progress more visible than in transport and mechanical engineering. Containerisation and the new field of transport logistics quietly kick-started an economic boom that continued for the rest of the 20th century, while improvements to combustion and jet engines propelled cars, trains, ships and aircraft at ever increasing speeds. To put things in perspective: less than 70 years separated Orville Wright’s 40-metre flight on the first powered aircraft from the roughly 1.5-million-kilometre return journey of the crew of Apollo 11.
Stålenhag’s paintings raise the question: what stopped us from maintaining that pace of technological progress?
Each year since 2000 we have been edging closer to, or overtaking, the dates set by science fiction writers for impossible visions of the future. Yet when you look around you see cities and suburbs looking more or less the same as they did thirty or forty years ago. In labs we’re beginning to see bipedal and quadrupedal robots taking their first shaky steps, and some specialised factories and vehicles have begun to resemble their space-age concepts, but the technological advances of the last few decades haven’t really imposed on the landscape. We’ve made massive gains in our understanding of information technology, biology and physics, but the grand transformations promised by science fiction always seem to be just on the horizon.
In an article for The Baffler David Graeber points out that:
In 1968, Stanley Kubrick felt that a moviegoing audience would find it perfectly natural to assume that only thirty-three years later, in 2001, we would have commercial moon flights, city-like space stations, and computers with human personalities maintaining astronauts in suspended animation while traveling to Jupiter…Why did the projected explosion of technological growth everyone was expecting—the moon bases, the robot factories—fail to happen? There are two possibilities. Either our expectations about the pace of technological change were unrealistic (in which case, we need to know why so many intelligent people believed they were not) or our expectations were not unrealistic (in which case, we need to know what happened to derail so many credible ideas and prospects).
Assuming the latter, Graeber offers several diagnoses for this apparent slowing of technological progress. He cites the diminishing urgency of government spending on science and technology following the end of the Cold War. He points to creeping bureaucracy in academia and the privatisation of universities which has seen the focus shift from blue-sky research to more immediate and marketable projects. But the most compelling reason for this apparent technological plateau seems to be economic.
Globalisation and a raft of free-trade agreements in the second half of the 20th century simply starved the more technologically developed nations of any incentive to automate or mechanise their workforces. Access to cheap foreign labour in developing nations meant the standard of living could be boosted in places like Britain and the United States without ever having to invent ways to make life easier on the home front. As Graeber explains, outsourcing to East Asia, Latin America and India:
… allowed manufacturers to employ much less technologically sophisticated production-line techniques than they would have been obliged to employ at home.
If necessity is the mother of invention then globalisation pushed aside any reason for substantial innovation. That issue of necessity is the common thread running through a great deal of recent historical scholarship. Over the last 40 years historians have been putting forward theories to explain ‘the great divergence’ – the point at which European civilisation began to grow economically and technologically much faster than equivalent societies in the Middle East, Asia and the Americas. It’s clear that a number of factors worked in favour of the European nation-states at the time. Legal frameworks, natural resources, culture and trade all helped determine which societies would dominate, but they offer little insight into why other regions remained in a state of arrested development.
Again the most compelling explanation appears to be economic. Historian Mark Elvin coined the phrase ‘high-level equilibrium trap’ to explain why China was not the first civilisation to undergo an industrial revolution despite a clear advantage over Europe in engineering and scientific sophistication. In the 15th century, under the Ming Dynasty, China seemed poised to become the world’s first superpower. With a stable economy supporting a standing army of a million men and a vast fleet of ships engaged in exploration, the Chinese empire had many advantages over the small European states of the time. Among a list of unique Chinese inventions historian Jared Diamond highlights:
Canal lock gates, cast iron, deep drilling, efficient animal harnesses, gunpowder, kites, magnetic compasses, moveable type, paper, porcelain, printing, sternpost rudders, and wheelbarrows
However, Ming China might be considered the high-water mark of Chinese technological and mercantile dominance. At the same time in Western Europe, scholars had begun to make advances in several scientific fields which catalysed further breakthroughs in medicine and technology and started what became known as the Renaissance. Trade and warfare among the early city-states of Europe drove a massive technological transformation. By the 19th century a small fleet of British warships blockading Chinese ports was enough to extort tax concessions and territory from an empire of 300 million people. Architectural historian Gregory Bracken described China as a victim of its own success:
“The efficiency of its pre-industrial economy discouraged any radical shift in production techniques and local shortages, which would usually have driven innovation, were mitigated by resources from other regions because they could be easily brought in”
Go back further into history and you find a similar stasis in ancient Egyptian and Greek civilisations. Throughout ancient Egyptian history incremental improvements were made (especially in hydraulics, shipbuilding and metallurgy), but the day-to-day life of an Egyptian farmer in the Bronze Age would have been familiar to an Egyptian farmer living under Roman rule more than three thousand years later. Classical Greek and Roman civilisations, while far more inventive, often seemed to be right on the verge of a technological revolution before stepping back and resuming their traditional pace of change.
To take one example: in the first century AD the Greek mathematician and engineer Heron of Alexandria refined the design of the aeolipile. This device consisted of a hollow bronze vessel, held in a cradle, with vents projecting in opposite directions. The vessel was filled with water and the whole apparatus was then positioned over a heat source. When the water inside reached boiling point the pressure forced steam from the vents and spun the vessel on its axis. In effect the aeolipile was a prototype steam engine. It seems unlikely that its potential to provide energy went unrecognised at the time, and yet no evidence exists that it was ever put to use in that capacity.
In a series of essays entitled Hellenic History and Culture a number of scholars discuss the technology of ancient Greece and Ptolemaic Egypt. Peter Levi points out that the mathematical theory underpinning the great engineering feats of the time was remarkably sophisticated, but the lack of value placed on workers held back innovation.
What technology is nowadays expected to accomplish is the concentration or the transference of energy. And we know from the raising of obelisks that the practical mathematics were quite highly developed. It’s quite clever to raise a monolithic column or an obelisk. But I take it that what went wrong with the Hellenistic rulers’ exploration of different techniques is that they had too much man power – they had too many slaves. To have slaves is, apart from being wicked, inefficient because you may use a million men where one machine could have done the job.
Professor Peter Green goes further in explaining this reluctance to adopt new technologies:
I think that’s only the beginning of it. If you look away from technology for a moment, what you find throughout antiquity is a paranoid terror of revolution. It’s no accident that the Greek and Latin terms for making a revolution are neoterizein and res novare – that is, just doing something new. . .It’s not so much that slaves were available, which indeed they were. No, the ruling classes were scared, as the Puritans said, of Satan finding work for idle hands to do.
Fellow historian K. D. White offers an anecdote to support this:
There is a famous passage in Suetonius’ Life of Vespasian in which a technician appears before the emperor to advocate some kind of new device, we’re not quite sure what. But the answer of the emperor to an aide is; give him a reward and send him away, and please leave me here to feed my little people. Sine me pascere plebeculam meam. I think this is in line with what you’re saying. Apart from slaves, … there were lots of underemployed free citizens.
This type of state prohibition on new technology was not unique to the ancient world. In the 15th century the Hongxi Emperor ordered a halt to the construction and maintenance of China’s deep-water fleet and permanently ended the age of Chinese exploration. In the 17th century the Tokugawa shogunate in Japan effectively outlawed the importation and manufacture of firearms – part of a range of policies that remained in place almost until the 20th century.
At a much smaller scale, religious communities like the Amish in rural Pennsylvania and Ohio often place strict limits on technology. The popular misconception is that Amish society rejects electricity and all modern conveniences. The reality is that they are simply more selective than the population at large. Senior Amish clergy spend a great deal of time debating the merits of individual devices or machines before deciding whether or not to adopt them. They make their decisions through a thick lens of Christian fundamentalism – aiming to discourage free enquiry and the pursuit of wealth – but also with a mind to preserving the physical and social cohesion of their community. Anything that might grant members of the community a certain level of autonomy is discouraged. The problem with car ownership, for example, is not the car itself but the implication that it might allow a member of the congregation to live beyond the range of a horse and carriage.
In the science fiction novel Seveneves, author Neal Stephenson uses the term ‘Amistics’ to describe this principle of technological deliberation. In the story the characters are not concerned with nanotechnology, rogue AIs or genetic tampering; instead they try to limit the impact of the social media ‘rumour mill’ and the deleterious effects of participating in constant broadcast communications channels. An editorial for New Philosopher summarised the idea as follows:
Amistics, then, is a belated art: the fruit of bitter experience and cries of “never again!” It’s also a neat provocation, and a way in to a troubling set of questions. What does it mean to use technology selectively? How far can whole societies determine which technologies they use – if at all – rather than allowing technology to determine their history?
With open borders, global communication networks and potential advances in 3D printing it seems less and less possible to put the technological genie back into the bottle. The ability of a single ruler or governing body to decide against some disruptive technology has been compromised, perhaps indefinitely. While individual nations strive for any advantage over their neighbours it will be difficult to form a consensus over how to regulate new technologies – even those that might pose a threat to the wider civilisation.
The default instinct in most western societies seems to be one of technological determinism. Most would argue that it is futile to resist the ‘march of progress’. But, when it comes to research and design, it’s worth asking what the end goal actually is. Again from the New Philosopher article:
Progress towards what, and at what cost? Progress in which areas and to whose gain? Unless we can turn to the cumulative insights of other fields – to philosophy, science, politics, and art; to history, psychology, aesthetics, and fiction – we can’t begin to debate such questions. Indeed, we can’t debate anything. We’ve reduced ourselves to the state of automata, and in doing so have failed to do justice to either us or our creations.
But some progress has been made. An international consensus slowly emerged over the 20th century in relation to weapons of war. Until very recently, restrictions on the use of chemical and biological weapons had been agreed upon and adhered to by most nations. The Ottawa Treaty of 1997, which aimed at banning anti-personnel landmines, also received broad support (though not from the USA, China or Russia). A protocol banning blinding laser weapons, attached to the UN Convention on Certain Conventional Weapons, came into force the following year and, in 2006, a UN General Assembly resolution aimed at preventing an arms race in outer space was passed (the USA provided the single dissenting vote).
But the best precedent for successful international cooperation is undoubtedly the ratification of the Montreal Protocol, which was put in place to prevent further depletion of the ozone layer by chlorofluorocarbons (CFCs). The protocol achieved universal ratification and came into effect in 1989 – only 14 years after the initial discovery of the effect that CFCs have on the atmosphere.
The challenge of dealing with new technology will always exist, but only by developing our own art of Amistics can we bring these discussions into the wider consciousness. The Montreal Protocol should give us cause for hope.