Yuval Noah Harari predicts the future—sort of
The name of Yuval Noah Harari came up recently in a conversation I had over the holidays. He seems to have become some sort of guru to a lot of people, and yet I know practically nothing about him or his work. Looking at his Wikipedia profile, I see that he’s 42, a historian, and someone who likes looking at the big picture—for example, questions about free will and consciousness—and making predictions about what the future will bring. Lately Harari’s been interested in “the consequences of a futuristic biotechnological world where intelligent biological organisms are surpassed by their own creations; he has said ‘Homo sapiens as we know them will disappear in a century or so.'”
Well, that’s a mouthful. So when this interview with Harari popped up today in my Pocket recommendations, I decided to read it. I was immediately taken aback by something Harari says almost at the outset:
We began by agreeing that something feels very different about this moment in history. We are on the precipice of a revolution that will change humanity for either our everlasting benefit or destruction—it’s not clear which. “For the first time in history,” Harari said, “we have absolutely no idea how the world will look in 30 years.”
Now, Harari’s a historian and I’m not, but I beg to differ with those statements of his. It seems to me that many generations before ours have had the perception, and even the conviction, that they were “on the precipice of a revolution that will change humanity for either our everlasting benefit or destruction.” Whether it actually pans out that way is never clear to the people who believe it will, because we never see the far-reaching consequences of the trends that develop in our lifetimes.
And is this really the first time in history that “we have absolutely no idea how the world will look in 30 years”? First of all, Harari then proceeds to tell us how he thinks the world might look in 30 years, so he certainly has ideas about it. Although we don’t know whether his ideas are correct, that doesn’t mean people (Harari and many others) don’t have ideas. What’s more, it’s hardly the first time in history we’ve been incorrect in predicting 30 years into the future. In fact, predictions for the future have tended to be very poorly correlated with what actually occurs. History often throws us quite a curve.
For example, did the prognosticators of the 1890s and the turn of the nineteenth century into the twentieth foresee the Great War that occurred less than 30 years later? As far as I know, they did not. In fact, that conflagration was not only generally unanticipated but, as Henry James wrote to a friend the day after war was declared in England (and as I discussed in this 2012 post), a great shock:
The plunge of civilization into this abyss of blood and darkness… is a thing that so gives away the whole long age during which we have supposed the world to be, with whatever abatement, gradually bettering, that to have to take it all now for what the treacherous years were all the while really making for and meaning is too tragic for any words.
Henry James had thought that humanity was going in one direction, and then it made a huge volta unpredicted by most people. It was both startling and disillusioning, actually shocking, and it’s a turn of history from which we have never recovered (as I argue in that post where the quote occurs).
Did Europeans predict, thirty years before it happened, the Black Death that changed Europe and its society enormously? I don’t think so, although there might have been generally apocalyptic thoughts floating around, as there often are and often were, particularly in times when people had no idea what caused such pandemics and no idea how they might be prevented or ameliorated.
Harari goes on to speculate a great deal about the effects of AI on human beings and how our fundamental nature will be changed as a result. I’ve read a lot about AI and find some of these arguments persuasive, but as with all predictions for the future I take them with a large grain of salt. For example:
What worries you about the new cyborgs?
Experiments are already under way to augment the human immune system with an inorganic, bionic system. Millions of tiny nanorobots and sensors monitor what’s happening inside your body. They could discover the beginning of cancer, or some infectious disease, and fight against these dangers for your health. The system can monitor not just what goes wrong. It can monitor your moods, your emotions, your thoughts. That means an external system can get to know you much better than you know yourself. You go to therapy for years to get in touch with your emotions, but this system, whether it belongs to Google or Amazon or the government, can monitor your emotions in ways that neither you nor your therapist can approach in any way…
Combining this with enormous amounts of data collected on you 24 hours a day can provide the best healthcare in history. It can also be the foundation of the worst dictatorial regimes in history.
Much more at the link.
Science fiction writers have been toying with these ideas for a long time. But we still have no idea whether it will happen in the predicted manner, and even Harari notes that such developments can go either way in terms of good or bad effects.
So we’re back to the usual inability to predict much of anything, and that’s not unique or different. But it’s interesting to speculate, as it always has been.
We will be lucky to avoid a revolution in the next ten years. World War I is still with us and has changed history as much as the Thirty Years’ War did. The tech freaks think that they know how things will go. I was a programmer in 1959 on a mainframe. I was an early adopter of the internet. Amazon keeps reminding me that I have been a customer since 1997. Cliff Stoll, in the recent edition of his classic “The Cuckoo’s Egg,” notes how wrong he was about the internet in “Silicon Snake Oil,” where he told his readers the internet would never amount to anything. He readily admits he was completely wrong.
I’m mostly in agreement. Every era thinks its own is a crucial time in history. Certainly the whole of the Cold War was sold as such, and the rise of Fascism before that definitely wasn’t passed off lightly as something that would blow over.
But the First World War was widely predicted, decades out. Europe was a set of factions with long-standing grudges, still thinking that territorial land grabs were acceptable.
That the Balkans would be involved was likely too. The Crimean War was an early precursor, and Austrian and Russian attempts to dismember Turkey had drawn Britain into intervening several times.
Some things really are obvious.
What wasn’t predicted was how encompassing and destructive the new forms of war could be: that the defense gained hugely from machine guns and barbed wire, whereas the offense had no similar gains.
WWI had little long-term effect on Britain and France, as countries and economies (obviously at the individual level it was horrific).
Its biggest effects were the removal of empires and emperors in the East. Not strictly war related, but merely a by-product. So even if the war was predicted, as it was by some, the results were not predictable.
So it will be with technology. We might predict what we will be able to do. We won’t come close at predicting the secondary and tertiary effects.
Chester Draws:
Yes, some sort of war was predicted, but that’s not at all what I meant. Perhaps I didn’t make it clear enough: I’m not referring to the prediction of some sort of war, but to the enormous scope and transformative nature of that war, its effect on society and on people’s psyches, the stalemated nature of trench warfare, and the number of deaths. If you follow the link in the post to my earlier post, you’ll find some discussion of that.
Those things were not predicted, as far as I know. And those were the important things that represented the big change from which we have not recovered.
I was looking at material on the film First Reformed by Paul Schrader. He is a fine screenwriter and it is supposed to be a good film about a minister who is struggling with losing his faith and parishioners, until he meets an environmental activist. He then becomes convinced of the coming climate change apocalypse, with possibly disastrous consequences for the minister’s mental health.
So I read the interview with Schrader in Vox to see if Schrader had really bought into the apocalypse himself; his hero, after all, perhaps loses his mind. The short answer is yes.
The thing that bugged me was Schrader’s comment that he’s sure humanity’s demise is imminent because people now talk about not having children, something that, he said, never happened 50 years ago.
Really! Apparently he missed Malthus writing in 1798 about how we only had a century left. Or the environmental hysteria of the ’70s: Rachel Carson, Barry Commoner, the book Diet for a Small Planet (a good book if you discount the prediction premise). A great many of those predictions were garbage. I also remember exactly that kind of general talk then; who would want to bring children into the toxic famine?
Mike K,
Once again I find a fellow fan. In this case, of Mr. Stoll’s The Cuckoo’s Egg, which I still think is the best computer thriller ever written — and it’s not even fiction. (I’m sad to say I can’t really think of any other good ones, fiction or non-fiction. Maybe I don’t read enough?!)
I also read the “Snake Oil” book, and was sorely disappointed by its downbeat view of computers. But I was a programmer in the heyday of the great IBM mainframes (the 7070, 7094, S/370), and I was so in love with the work; when I moved into datacomm (telecomm), I found it had a mystique that was mind-blowing. So Dan Brown just doesn’t cut it for me. :>)
I am going to have to look around and see what Cliff Stoll says now.
Funny that the title of that linked interview with Harari is “Yuval Noah Harari Is Worried About Our Souls” because he doesn’t believe that humans have souls. From his latest book, 21 Lessons for the 21st Century: “when you begin to explore the manifold ways the world manipulates you, in the end you realize that your core identity is a complex illusion created by neural networks.”
Something else Harari has written — the introduction in the latest edition of Peter Singer’s Animal Liberation.
The plunge of civilization into this abyss of blood and darkness… is a thing that so gives away the whole long age during which we have supposed the world to be, with whatever abatement, gradually bettering, that to have to take it all now for what the treacherous years were all the while really making for and meaning is too tragic for any words.
As I have noted, this event is the one which, IMNSHO, damned the Left into becoming PostModern Liberals while abandoning Classical Liberalism.
In short, the Left metastasized into a social cancer, seeking largely to destroy their own culture and civilization.
Looking carefully at the tenets of PML, you see nothing but constructs aiming to destroy and reject the very foundational components of The West — the twin boons of the Judeo-Christian Ethos combined with the inheritance of Classical Greek thought and ideals. Deconstruction, Moral Relativism, Critical Theory, the rejection of many Enlightenment concepts (freedom, liberty, self-determination), and a general enrapturement with meaningless language forms are all part of an attempt to de-legitimize every form of reasoning or critical thinking.
Its political and economic efforts (the endless promotion of soul-destroying Marxism and Alinskyism) are equally destructive.
Pushing folderol such as “Climate Change” as “settled science,” as well as the general religification of scientific notions in place of actual scientific processes (the gold standard of science is not credentialist-oriented “peer review” but the double-blind study), is a direct and incontrovertible attack on science itself.
And it all started with WWI.
What We Lost In The Great War
http://www.americanheritage.com/content/what-we-lost-great-war
Seventy-five years ago this spring a very different America waded into the seminal catastrophe of the twentieth century. World War I did more than kill millions of people; it destroyed the West’s faith in the very institutions that had made it the hope and envy of the world.
John Steele Gordon
July/August 1992
WWI had little long-term effect on Britain and France, as countries and economies (obviously at the individual level it was horrific).
I disagree. I think it destroyed both countries. Churchill managed to hold off Hitler with our help, but Britain is a shadow of its 19th century self.
I am going to have to look around and see what Cliff Stoll says now.
I just picked up a new edition, and that has the apology for “Snake Oil.”
The other good thriller is “Takedown,” about the detection and capture of Kevin Mitnick. The author was a programmer and IT guy for the UC San Diego supercomputer center.
Speaking of science fiction and future history, I rather enjoyed The War in the Air by H. G. Wells, a book that I found in the town library when I was a kid. It’s rather bizarre, but interesting for the ideas of someone writing in 1907.
Ann:
Ah, Peter Singer.
See this, this, and this.
Chuck:
Please see this.
Yeah, Neo, Peter Singer is a horror, and the fact that Harari aligns himself with anything Singer-related makes him a dope, or worse, in my book.
Just looked at those three posts of yours on Singer that you linked to and saw that I’d commented on all three. I included this on the first one, from a radio interview he did in Europe:
I have read two Harari books – “Sapiens” and “Homo Deus.” I enjoyed both because he’s an excellent writer, a provocative thinker, and I like the subject matter. That said, I disagree with Harari at almost every turn. He takes a dim view of Sapiens’ path to our modern selves. He believes agriculture and animal husbandry made life much worse for us humans. He rightly identifies our ability to imagine such things as God, money, economic systems, and political systems as ways we have managed to improve our standards of living. But he also feels that, because they are products of our imagination, they are somehow not real. He has a very dark belief about the future because he believes so much of civilization is based on what he sees as fiction. He seems to believe that such insights will eventually be recognized as “fake” and things will not end well. In other words, he doesn’t seem to put much value in faith in ideas that have worked as sustenance for the continued existence of humans. Is he right and my mind is too closed to recognize his truth? Well, that is the question, isn’t it?
Harari’s focus upon A.I. is an indication that he’s one of the many obsessed with the predicted coming of the “Singularity”. The Y2K hysteria was the precursor to the Singularity paranoia. I imagine Harari is a big fan of the Terminator movies…
What’s left after belief in a benevolent creator is abandoned? Two options: embrace the hope in the perfectibility of mankind or… “abandon all hope, ye who enter here”…
Attempting to predict the future is a fool’s errand.
But larger events are sometimes fairly predictable. A year before his death in 1898, Otto von Bismarck, grand strategist of the German Empire, predicted: “One day the great European War will come out of some damned foolish thing in the Balkans.”
Though unable to imagine the utter destructiveness that modern warfare would prove to be, Bismarck clearly envisioned that when the peace failed, it would be a “great,” civilization-spanning war.
Mike K:
Chester certainly stepped on his other part with the “WWI had little long term effect on Britain or France …” quip. Especially given the passing of the centenary so recently; France and Britain losing much of a generation of young men. Or only the minor long-term effect of preceding WWII, the two of which combined essentially bankrupted Britain. For some reason rationing of food was a minor effect that persisted in Britain until 1954.
Chester may argue next that WWI had little long-term effect on the USA; fans of WW and FDR may beg to differ.
Wasn’t Peter Singer the guy who said that after they reach a certain age, old people should be left to die? And then, when it was HIS mother, didn’t he do all he could to get the doctors to prolong her life?
Did he ever change his views because of that?
Typical of closed-minded hypocrites.
Of course “do as I say and not as I do” is human nature, and you can find examples of people from all belief systems who act that way.
For example, “Free Speech for me but not for thee!”.
Predictions of the future are made by “wise” fools. Additionally, to suggest that the Great War to End All Wars had no reverberations is idiotic. My best advice to Noah is: we’ll see.
Dennis Prager has written about the urge to save animals, the lesser interest in saving humans, and the willingness to risk life to save an animal.
“Wasn’t Peter Singer the guy who said that after they reach a certain age, old people should be left to die?”
That was probably Ezekiel Emanuel, the brother of Rahm, who was an architect of Obamacare.
Parker,
One of the three posts neo linked to above about Singer talks about his mother and her care.
I meant that last comment to Tuvea.
Damn edit function.
Ezekiel Emanuel espoused the “Social Value” function of individuals to society as a way of allocating health care resources. The elderly and the very young were to receive a disproportionate (negative) impact under his rational plan. He might have considered them “useless eaters.” Ezekiel isn’t one of those dim-bulbs elected by the ignorant masses; he’s more the sort of rational expert this country needs (not).
Harari is a self-inflated windbag. Reminds me of Fukuyama’s claim, and title of his book, “The End of History.”
As to the nanobots, cyborgs, and other blah blah, one can drown in data. It is the uses to which the data are put that matter. Note how quickly and blithely the writer switches from the organic to the psychological, and the latter is foolishland: your cortisol is up (!), which means exactly what? Lots of different and totally unrelated things. Others already know more about us than we know ourselves; our physicians, for one.
J.J.,
Without God there is no hope for something beyond this life. Without money we’re reduced to barter, which leads to mass starvation. Without the economic laws upon which capitalism is based, there is little to no economic progress. Without political systems we are reduced to tribalism and “might makes right”.
These things are so obvious that only cynicism and depression can explain how they escape Harari’s awareness.
It is heartwarming and strokes the ego to believe one can know what is ahead. Fools. One thing that can be counted on is that each of us dies not knowing what the future will bring once we have turned to dust. If WW1 teaches anything, it is not that any event ends anything. We humans, despite the inevitable progress of technology, remain the same through the ages: capable of horrendous actions and also beautiful love, but mostly horrendous actions.
The utopian dreamers of the left believe in the perfection of humanity while they glorify the brutal murder of millions of the unborn. Bottom line. They are the monsters they vilify.
G.B.: “These things are so obvious that only cynicism and depression can explain how they escape Harari’s awareness.”
Yes, I get the feeling that Harari is not a very happy person. Very well educated and PC (all the important things, ya know 🙂 ) but not happy.
He avers that we should study happiness more. That may also be a tip off.
Can a device determine that Deodorant A leaves my pits smelling doable? No, a person must be hired to sniff my pit. Can we stock the store shelves with our new Lascivious Eyeliner, on the basis of an instrumental determination of its non-irritability? Bring in the rabbits.
Yet Mr. Harari claims micro-miniature devices can determine “moods”, “emotions”, and “thoughts”. No device can do more than detect & measure simple proxies, purported to reflect fundamentally inaccessible states … which they can also do with pits and eyeliner … but this does not rise to the level of the goal or the promise.
Futurists love to insist we will ‘upload’ ourselves into RAM. Yeah, well, try it with trained planarian worms first.
It’s *possible*, though unlikely, that 30 years on we won’t be in Kansas anymore … but Harari [and the subculture in which he lives] certainly has no inside track on the matter.
All this talk about AI apocalypse is just hot air. This guy Yuval Harari is either a charlatan or a madman, probably both. Strong AI is mathematically impossible, and weak AI is a misnomer. No algorithmic process can solve the really hard problems of combinatoric optimization (so-called NP-complete problems), no matter how much computational power is applied. No real progress has been achieved in this field in the last 25 years, compared to what it was when I was the scientific editor of “Cybernetic review” (a yearly collection of the best publications on the subject worldwide, in Russian translation). All the successes since then have been essentially trivial and only reflect vastly cheaper and more affordable computing power.
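To make the combinatorial-explosion point concrete, here is a minimal sketch (a hypothetical brute-force subset-sum search in Python; the function name and timing harness are illustrative only, not drawn from any particular AI system). Each additional item roughly doubles the worst-case running time, which is why simply throwing more computing power at such problems buys only a handful of extra items.

```python
from itertools import combinations
import random
import time

def subset_sum_bruteforce(values, target):
    """Try every subset of `values`; the worst case examines all 2^n subsets."""
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

# An unreachable target forces the full 2^n search, so each pair of
# extra items roughly quadruples the running time.
for n in (16, 18, 20, 22):
    vals = [random.randint(1, 10**6) for _ in range(n)]
    start = time.perf_counter()
    subset_sum_bruteforce(vals, -1)
    print(n, "items:", round(time.perf_counter() - start, 3), "seconds")
```

(Heuristic and branch-and-bound solvers do far better on many practical instances; the worst-case blow-up is the point being made above.)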
There is a substantive difference in the predictability of the future today, as opposed to, say, 200 years ago. Back then, technology did not change so fast. The technology a man learned as a young man was still much the same a generation later. Today, we are seeing several quantum changes per decade. The amount of change a man or woman has to assimilate in a lifetime has changed drastically.
Nevertheless, I agree with Neo that the unpredictability of the future has always been a feature of the human experience.
I don’t give much credence to the doom and gloom variety of social prognosticators. When I was a kid, Alvin Toffler predicted “the end of the world as we know it” in a wildly popular book called Future Shock. The technological advances came, and somehow, humans adapted far better than anyone thought. After all, that is human beings’ real “superpower”… Adaptability.
For predictions of future tech, they don’t get any better than Ray Kurzweil. See https://en.m.wikipedia.org/wiki/Ray_Kurzweil.
One US military perspective on AI:
https://warontherocks.com/2018/12/artificial-intelligence-and-the-military-technology-is-only-half-the-battle/
Every new technology has created new problems. The invention of agriculture was, arguably, a disaster. Human health and longevity statistics fell and did not recover to pre-agricultural levels for more than a thousand years.
The only advantage bestowed by agriculture was that it supported more humans per fertile acre than hunter-gathering. Agricultural societies were able to claim and defend territory and defeat the less populous bands of hunter-gatherers. But sedentary living in one place created sanitation problems, and a single-crop diet produced dietary deficiencies. Lifespans fell by an average of ten years.
It took many generations, but we did solve the problems and the ultimate benefit (civilization) ended up being worth all the trouble it took to get there.
In any case, technology is like Pandora’s box. Once you open the box, you can’t return the contents. So, all of the fretting and naysaying is not going to change anything. The future will arrive whether we want it to or not.
Roy:
Is this a parody of serious thought? You might as well have said that civilization beyond oral history has been a wretched mistake, for what else do hunter-gatherers leave to the future? Oh, for the good old days, when we were wholly without resources to deal with the vagaries of nature. Oh, to know nothing of parasites, of the causes of disease, of art beyond your own kin or tribe. For the good old, olden times indeed!