A brilliant and groundbreaking argument that innovation and progress are often achieved by revisiting and retooling ideas from the past rather than starting from scratch—from The Guardian columnist and contributor to The Atlantic.
Innovation is not always as innovative as it may seem. This is the story of how old ideas that were mocked or ignored for centuries are now storming back to the cutting edge of science and technology, informing the way we lead our lives. This is the story of Lamarck and the modern-day epigeneticist whose research vindicated his mocked 200-year-old theory of evolution; of the return of cavalry use in the war in Afghanistan; of Tesla’s bringing back the electric car; and of the cognitive scientists who made breakthroughs by turning to ancient Greek philosophy.
Drawing on examples from business to philosophy to science, Rethink shows what we can learn by revisiting old, discarded ideas and considering them from a novel perspective. From within all these rich anecdotes of overlooked ideas come good ones, helping us find new ways to think about ideas in our own time—from out-of-the-box proposals in the boardroom to grand projects for social and political change.
Armed with this picture of the surprising evolution of ideas and their triumphant second lives, Rethink helps you see the world differently. In the bestselling tradition of Malcolm Gladwell, Poole’s new approach to a familiar topic is fun, convincing, and brilliant—and offers a clear takeaway: if you want to affect the future, start by taking a look at the past.
Rethink

1

Introduction: The Age of Rediscovery

An invasion of armies can be resisted, but not an idea whose time has come.
(i) To think about an idea again;
(ii) To change how you think about it.
The electric car is the future. And it has been the future for a very long time. The first known electric car was built in 1837 by an Aberdeen chemist named Robert Davidson. It is now all but forgotten that by the end of the nineteenth century a fleet of electric taxis—known as hummingbirds for their characteristic engine sound—worked the streets of London. The commissioner of the Metropolitan Police approved of their potential to solve the city’s burgeoning traffic problem, since they took up less than half the road space of horse-drawn cabs. Similar taxis also touted for trade in Paris, Berlin, and New York; by the turn of the century, more than thirty thousand electric cars were registered in the United States. They were much more popular than gasoline-powered cars. They were less noisy and had no polluting exhaust. The twentieth century was obviously going to be the electric century.1
Yet in a little more than a decade, production of such vehicles had slowed, and then eventually stopped. The drivers of London’s horse-drawn cabs had mounted a vigorous campaign pointing out breakdowns and accidents in their electrically powered rivals, and so put the London Electric Cab Company out of business.2 (The electric taxis did suffer technical problems, but they were exaggerated by their rivals, just as the taxi drivers of modern Manhattan and London are keen to paint Uber in as bad a light as possible.) Meanwhile, discoveries of large oil reserves brought the price of petroleum tumbling, and Henry Ford began to sell gasoline-powered cars that were half the price of electric ones. The construction of better roads in America encouraged longer journeys, which electric cars could not manage on their limited battery power. So it was the internal-combustion engine, after all, that won the century.
And then, at last, came the tech entrepreneur Elon Musk, who made a fortune from cofounding PayPal and then plowed it all into building elaborate machines in California. In 2004 he became an early funder and chairman of a Silicon Valley start-up called Tesla Motors, when almost everyone still thought that electric cars were a bad idea. “It is frequently forgotten in hindsight that people thought this was the shittiest business opportunity on the planet,” Tesla cofounder J. B. Straubel remembers. “The venture capitalists were all running for the hills.”3 But Musk was able to act as his own venture capitalist. He soon became Tesla’s CEO, and in 2008 Tesla launched the first highway-capable electric car, the $109,000 Roadster. It ran on lithium-ion batteries, of a similar kind to those used in laptops and phones, and it could go more than two hundred miles between charges. Most important, it didn’t look like a clunky eco-vehicle; it looked like a flashy sports car. Musk had delayed the car’s development by insisting that Tesla’s first model should have a carbon-fiber body and be able to accelerate from zero to sixty in less than four seconds. In doing so, he made the electric car desirable. It was a status symbol for the eco-savvy wealthy. George Clooney, Matt Damon, and Google’s Larry Page and Sergey Brin bought themselves Roadsters.4
Tesla’s next product was a slightly more sober-looking car for the mainstream market, the Model S. The S was short for “sedan” or “saloon,” but it also had a hidden historical message. Henry Ford, famously, made a Model T; well, just as S comes before T in the alphabet, so electric cars came before Ford’s gasoline cars. “And in a way we’re coming full circle,” Musk told his biographer, “and the thing that preceded the Model T is now going into production in the twenty-first century.”5 The Model S was a hit: it was also the safest car ever tested by the American highway safety authorities.6 By 2015 Tesla was selling fifty thousand cars per year. In the meantime, established car companies such as Nissan and BMW had begun producing electric vehicles too. In 2016, Tesla announced its Model 3, with a base price of only $35,000. Within twenty-four hours the company had taken preorders worth more than $7 billion. “Future of electric cars looking bright!” Musk exclaimed on Twitter. Maybe the second time around, the idea would stick.
The modern electric car is a great idea, made more viable by new technology—but it is not a new idea. And what is true in the consumer tech industry is true in science and other fields of thinking. The story of human understanding is not a gradual, stately accumulation of facts, a smooth transition from ignorance to knowledge. It’s more exciting than that: a wild roller-coaster ride full of loops and switchbacks. We tend to think of the past as less intellectually evolved than the present, and in many respects it is. Yet what if the past contained not only muddle and error but also startling truths that were never appreciated at the time? Well, it turns out that it does.
This book is about ideas whose time has come. They were born hundreds or thousands of years ago. But their time is now. Many of them spent a lot of time being ridiculed or suppressed, until someone saw them in a new light. They are coming back at the cutting edge of modern technology, biology, cosmology, political thought, business theory, philosophy, and many other fields. They are being rediscovered, and upgraded. Thought of again, and thought about in new ways—rethought. Creativity is often defined as the ability to combine existing ideas from different fields. But it can also be the imaginative power of realizing that a single overlooked idea has something to it after all. We are living in an age of innovation. But it is also an age of rediscovery. Because surprisingly often, it turns out, innovation depends on old ideas.

Just in time

Old is the new new. Many personal trainers have abandoned weight machines and now recommend old-school exercises such as gymnastics and kettlebells. In the food industry there is a trend for cooking elaborate feasts that the aristocracy might have enjoyed five hundred years ago. In the age of music streamed over the Internet, the coolest way to release your new album is on vinyl. Bicycle-powered rickshaws ply the streets of Manhattan and London. Adults buy coloring books. Even airships are back. (The British-made, helium-filled, hundred-meter-long Airlander 10 is designed to compete with helicopters for moving heavy cargo; others foresee a return to passenger airships in our skies, and NASA has developed concepts for airships that could resupply a space station floating in the clouds of Venus.)
Alongside this multifaceted cultural turn to the past, however, there is also a tremendous focus on the new. Smartphones, smartwatches, fitness trackers; start-up culture and a new global skyscraper race; Uber and WhatsApp—the pace of change, we are routinely told, is greater than ever before in human history. The past is just one long roll call of mistakes. We now know better. History is old hat; the future is just around the corner. “Conformity to old ideas is lethal,” says a former executive editor of Time magazine. This is “the age of the unthinkable.”7
But these opinions about our culture—the retro and the futuristic, “old is the new new” versus “the age of innovation”—are themselves very old. And so is the tension that arises from holding them at the same time. The pace of technological change today is impressive, but hardly more so than that of the nineteenth century. The twenty-five years between 1875 and 1900 saw the invention of the refrigerator, the telephone, the electric lightbulb, the automobile, and the wireless telegraph. (Not to mention the paper clip, just squeezing into the century in 1899.) Yet, in the same era, the Arts and Crafts movement was pushing a decidedly romantic return to older ideas of traditional craftsmanship and design; poets were reworking Arthurian myth; and the Renaissance was being rediscovered and hailed as the crucible of modernity. The late nineteenth century was looking both forwards and backwards, in ways that seemed unprecedented then as well.
Perhaps every age imagines itself as having a uniquely complex relationship to the past and fails to recognize that every previous era—at least since, say, the Renaissance itself—has done so too. But what are the consequences in our day when we fail to recognize an idea as an old one? The widespread assumption that it is necessary to start afresh with our uniquely modern wisdom is encapsulated in the “Silicon Valley ideology,” which insists that venerable social institutions (such as higher education) positively need to be “disrupted” by technology companies. The concept of innovation here is reduced to a curiously thin idea: a picture of the maverick young entrepreneur having a flash of inspiration and inventing something from nothing, to change the world. The old ways of doing things must everywhere be replaced. It is the philosophy of Snapchat, the photo and video messaging app whose messages are deleted permanently after a few seconds. What is past cannot help us; it must vanish completely.
This Valley dream of disruptive invention was beautifully satirized by a one-day event at New York University in 2014 called the “Stupid Shit No One Needs and Terrible Ideas Hackathon,” whose entries included a Google Glass app to make the user vomit on command, and a version of Tinder—for babies. If no one’s had an idea before, that might be because it’s an unprecedented stroke of genius—but it also might be because the idea’s just not worth having. Elon Musk, for one, does not consider himself to be in the business of disruption. “I’m often introduced onstage as someone who likes to disrupt,” he has said. “And then the first thing I have to say is, ‘Wait, I don’t actually like to disrupt, that sounds . . . disruptive!’ I’m much more inclined to say, ‘How can we make things better?’ ”8
As we’ll see, innovators can often make things better by resurrecting and improving the past—as with the Tesla electric car. The philosopher of science Paul Feyerabend observed, “No invention is ever made in isolation.”9 Either isolated from other thinking people, or isolated in time. What other forgotten ideas might one day be rediscovered, with the help of a little rethink?
We all love a good idea, but how can we tell whether an idea is good? Is an idea good because it’s useful, or because it will be financially profitable, or because it is morally praiseworthy, or because it inspires other thinkers? Is it an idea that helps others? Or is it an idea that merely revolutionizes one’s picture of the universe? Potentially any or all of the above, it seems. It is appealing to judge ideas primarily on their usefulness, but usefulness can be more narrowly or broadly conceived. Useful for what? To whom? And when? Dividing ideas into good and bad is a blunt approach. We can do better.
For one thing, our perception of an idea will change over time. And that drives rediscovery. The electric car was a good idea given the problems of congestion and (organic) pollution caused by horse-drawn carriages. But it was arguably not such a good idea given what cheap gasoline-powered cars could do. The first electric cars could run only about thirty miles on a single charge, and in the early twentieth century society had no pressing reason to wean itself off fossil fuels. Advances in battery technology, together with modern climate science, have turned a problematic idea into a very good one.
So what is an idea, exactly? Is it the same as a thought, or a proposition? Is it an initial inspiration or a final conclusion? Is it a spark of genius or the outcome of long, pedantic toil? We might not be able to pin down a precise definition of “idea,” but as the judge said about pornography, we know it when we see it. What counts as an idea is itself a subject for rethinking. And if we don’t rethink the way we think about ideas, we might miss out on extraordinary possibilities.
Everybody knows, though, that some past ideas were obviously bad. They were just plain wrong; they were permanently superseded by new discoveries. Now we look on them fondly as just funny mistakes. Of all the discarded ideas in history, perhaps one of the least reputable is alchemy. The notion that you could turn base metals into gold? Ridiculous. We regret that even Isaac Newton, for all his brilliance, was seduced by alchemical researches. To modern eyes, alchemy looks like wishful thinking at best, and esoteric fraud at worst. Alchemy is what people did before they discovered science.
Take the idea of the philosopher’s tree. According to obscure hints and scraps from laboratory notebooks of the seventeenth century, this was a precursor to the more famous philosopher’s stone, which would turn lead into gold. By planting a specially prepared seed of gold, so it was written, you could grow a whole tree of gold. That was the philosopher’s tree. A pretty fiction.
Except that, a decade ago, Lawrence Principe, an American chemist and historian of science, decided to see if it actually worked. He cooked up some philosophical mercury, a special form of mercury for which he found instructions in the secret alchemical writings of Robert Boyle, one of the founders of modern chemistry. Following a recipe that Principe reconstructed from seventeenth-century alchemical treatises and experimental notebooks, he mixed the philosophical mercury with gold and sealed the mixture in a glass egg. As he watched, it began to bubble, then rise like a baking loaf. Then it liquidized. After several days’ more heating, it had turned into what Principe called a “dendritic fractal,” a structure with ramifying branches. There, before his eyes, was a golden tree.10
Alchemy wasn’t so ridiculous after all. Historians now argue, for example, that Robert Boyle essentially “pillaged” the work of alchemists for valuable insights while denouncing the practice as nonsense in public. In other words, he was performing a kind of dishonest rethink, taking ideas from the past and presenting them as new, while ridiculing the earlier thinkers who had had them in the first place. Present-day studies have also confirmed the usefulness of ancient alchemical recipes for pigments and oils. Much of the occult weirdness, it turns out, fades away when investigators manage to translate coded terms in the old texts. According to a 2015 study in Chemical & Engineering News, for example, “Historians have now figured out that dragon’s blood refers to mercury sulfide, and ‘igniting the black dragon’ likely means igniting finely powdered lead.”11 Alchemy was not antiscience superstition; it was the best science anyone could do at the time.
When an old idea that was so obviously wrong turns out to have been right, we may be forced to rethink our ideas about the past—and our ideas about ideas themselves.

Twenty-twenty hindsight

Elon Musk’s electric-car company is named after Nikola Tesla, the wizardly Serbian American engineer and inventor active in the late nineteenth and early twentieth centuries, who pioneered the modern alternating-current (AC) electricity supply against Thomas Edison’s direct current (DC). (Tesla was hauntingly impersonated by none other than David Bowie in Christopher Nolan’s 2006 film, The Prestige.) In 1888, Tesla patented a design for the first AC induction motor—the kind that Tesla Motors engineers designed their first car around more than a century later.
In 1926, Nikola Tesla was asked to imagine how the world might look fifty years in the future. “When wireless is perfectly applied, the whole earth will be converted into a huge brain,” he declared. “Through television and telephony we shall see and hear one another as perfectly as though we were face-to-face, despite intervening distances of thousands of miles; and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket.”12
Tesla also predicted, “Aircraft will travel the skies, unmanned, driven and guided by radio.”
And Tesla foresaw, “International boundaries will be largely obliterated and a great step will be made toward the unification and harmonious existence of the various races inhabiting the globe.”
Oh, well. Two out of three ain’t bad.
Tesla’s vision of the technology of the future was impressively accurate. But this book isn’t about amazing predictions. Once you start looking, it can be tempting to see anticipations of exciting modern ideas everywhere in the past, but we must be careful in our identifications. To conclude complacently that there is nothing new under the sun is to insult those thinkers who really did discover or dream up something unprecedented. On the other hand, to celebrate every possible foreshadowing of a good idea as brilliance would be to risk seeing genius where often mere chance is at work. This point was well expressed by the nineteenth-century German scientist Hermann von Helmholtz. He complained about the number of untested scientific hypotheses in the journals of his time: “Among the great number of such ideas, there must be some which are ultimately found to be partially or wholly correct; it would be a stroke of skill always to guess falsely. In such a happy chance a man can loudly claim his priority for the discovery; if otherwise, a lucky oblivion conceals the false conclusions.”13 Very true. So if someone anticipates a later idea just by guessing randomly, that’s nice. But even roulette players betting on red or black are correct about half the time. So mere predictions won’t constitute the kind of heroic foresight we will celebrate in those thinkers who really got something right.
Science, as we’ll see, is both a field in which a lot of rethinking is currently taking place, and also itself a tool—maybe the best one we have—for rethinking. But it’s a tool whose limitations are too often denied. With the best of intentions, the defenders of science in our time paint a misleading picture of it. This is perhaps understandable, since more than 40 percent of Americans believe that God created humans a mere ten thousand years ago, and the robust findings of climate science are routinely rejected by cynical political operators. As a result, science tends to be portrayed by its advocates as the smooth-running machine by which humanity has gradually become enlightened, and the only reliable source of knowledge. Some even attempt to show that science can be the source of our moral values. Privately, many of science’s champions are no doubt more sophisticated, but in public they often oversimplify.
As this book will show, however, the borders between science and other disciplines (such as philosophy) are porous, and that is a good thing. Human understanding in general can often proceed only by the mistaken promotion of error. Above all, good ideas can be found, but then rejected or ridiculed for decades or centuries or millennia before they are finally rediscovered. Yes, the past is riddled with error and fraud. In other words, it is much like the present. But it also contains surprising gems that lie waiting to be unforgotten.
Some of the ideas in this book have lately reemerged in fields of hard scientific research—the notion that mice can inherit a fear of the smell of cherries, say, or that our universe is just one of an eternally bubbling multitude. Some, on the other hand, might just strike you as loopy—the idea that electrons have free will, or that there are infinitely many donkeys. But all of these ideas are carefully reasoned. To be sure, as the old joke goes, there is no opinion so ridiculous that some philosopher has not held it. But as we’ll see in Part II, what seems ridiculous is often as important as what seems sensible.
Soon we will just be another part of history ourselves—and what then? What of the future of ideas? As advertisements for investment products are obliged to say, the past is not a reliable guide to the future. But it’s the only one we’ve got. Ideas that are ridiculed by the mainstream or by the experts often turn out to be just ahead of their time—but that is not license to believe any old rubbish today. And ideas that seem obviously right today will no doubt be ridiculed in years to come—but that is no reason to abandon all hope in our own judgment. Aristotle thought that slavery was acceptable because some people were slaves by nature. In Part III of this book, I’ll ask, What is our equivalent conventional wisdom that will embarrass future generations on our behalf? And what apparently ridiculous ideas are we not taking seriously enough right now?
The art of rethinking, and rediscovery, lies in questioning our ideas of authority, knowledge, judgment, right and wrong, and the processes of thinking itself. Ideas cannot be pinned down like butterflies: they come from people living and thinking through time, and are passed among us down the centuries. The same idea can be bad at one time and good at another. An idea can be bad in the sense of incorrect, but nonetheless good because it is the necessary stepping-stone to something better. More generally, rethinking suggests that an idea can be good in the sense of useful, even if it is bad in the sense of wrong. It can be a placebo idea. Outrageously, it sometimes might not matter whether an idea is true or false.
The pioneering nineteenth-century psychologist William James, brother of the exquisitely verbose novelist Henry, is often credited (perhaps apocryphally) with this ironic diagnosis of an idea’s faltering progress to acceptance: “When a thing is new, people say: ‘It is not true.’ Later, when its truth becomes obvious, they say: ‘It is not important.’ Finally, when its importance cannot be denied, they say: ‘Anyway, it is not new.’ ”14 That transition can take an awfully long time—and cause an awful lot of sorrow along the way.
The world of ideas is a moving target. What follows is a freeze-frame. It’s easy to picture ideas as static packages of thought that can be definitively judged. But that is not really accurate. Like a shark, an idea needs to keep moving to stay alive. An idea is a process just as much as a thing. And that process is seldom one long, linear march to enlightenment. If we are not constantly rethinking ideas, we are not really thinking. As the French say, reculer pour mieux sauter—if you step back first, you can jump farther. The best way forward can be to go into reverse. And the best new ideas are often old ones. As we’ll see next, that goes even for cutting-edge microsurgery and modern warfare.
Steven Poole is the award-winning author of Rethink, Unspeak, Trigger Happy, You Aren’t What You Eat, and Who Touched Base In My Thought Shower?. He writes a column on language for The Guardian, and his work on ideas and culture also appears in The Wall Street Journal, The New Statesman, The Atlantic, The Baffler, The Point, The Times Literary Supplement, Edge, and many other publications. He was educated at Cambridge, lived for many years in Paris, and is now based in East London.
"A magic carpet ride through the history of thought, viewed from such a height that unexpected patterns and correspondences emerge... It is testament to the author’s narrative skill that this whirlwind of discovery doesn’t end up in a pile of papers scattered across the floor... His powers of orchestration succeed. Among the greatest compliments you can give a book is that it helps you to see things differently. So long as you’re not dazzled by the fireworks, Rethink could do just that."
– The Guardian
"Clever and entertaining... Startling... Fascinating... When it comes to describing a complex idea clearly, Poole is one of the best writers around... A thoughtful and thought-provoking book."
– Sunday Times (UK)
"Entertaining and important... Elegantly written and full of surprises."
– Daily Telegraph (UK)
"Full of fascinating stories about ideas that were thought to be rubbish at the time but whose hour came later... A treasure trove... The pleasure, and it is considerable, is in the examples... Where the arts merely accumulate, science progresses. Yet it does so by swirling back on itself. That is in itself an insight that is well served by a clever and entertaining book which, if it does nothing else, will make me more fun at parties."
– The Times (UK)
"An always entertaining and often eye-opening taxonomy of old ideas that refuse to die... I see Rethink as a kind of post-modern, post-ironic smart-thinking book, undercutting the genre's pretensions by borrowing its old clothes, drawing our attention to how its so-called new ones belong to the emperor. This rises far above satire or parody because what Poole actually says is largely both true and interesting. I don't think anyone has subverted the smart-thinking genre like this before. That's inspired rethinking."
– Financial Times (UK)
“A fascinating compendium of new ideas that aren’t new at all.”
– The New Statesman
"Fascinating... Exciting... Poole invites us to be a bit bolder than we often are, to challenge accepted truths, to revisit old ideas and even to play with some crazy new ones... Rethink makes you, well, rethink... With this book, Poole confirms his standing as one of our liveliest and most thought-provoking writers on science and technology... A stimulating journey that challenges our fixation with ‘winners’, but also with novelty for novelty’s sake... Rethink invites us to be skeptical and to look back, but perhaps just as important, I think, it also encourages us to be more creative when looking ahead."
– The Spectator (UK)
"Entertaining... Remarkably accessible and well-organized. Such a cross-section of material guarantees there is something here for everyone."
– Publishers Weekly
"The stories behind these ideas with their twists and turns through the years make fascinating reading. Poole has written an entertaining and informative book that provides a new appreciation for the past."