Oh boy, now we have to worry about…
…the destruction of the human race by artificial intelligence:
In 267 brisk pages, Barrat lays out just how the artificial intelligence (AI) that companies like Google and governments like our own are racing to perfect could — indeed, likely will — advance to the point where it will literally destroy all human life on Earth. Not put it out of work. Not meld with it in a utopian fusion. Destroy it…
ASI is unlikely to exterminate us in a bout of Terminator-esque malevolence, but simply as a byproduct of its very existence. Computers, like humans, need energy, and in a competition for resources, ASI would no more seek to preserve our access to vital resources than we worry about where an ant’s next meal will come from. We cannot assume ASI empathy, Barrat writes, nor can we assume that whatever moral strictures we program in will be adhered to. If we do achieve ASI, we will be in completely unknown territory.
The theme of countless science fiction plots, come to life (or death)? Frankenstein, or the Modern Prometheus? Those of you with more scientific acumen than I can read the article and decide for yourself whether such a scenario is likely. Quite a few of the article’s commenters are skeptical.
I thought the more appropriate expansion of ASI would be Artificial Sentient Intelligence, rather than Artificial Superintelligence as used in the article. The author is clearly implying sentience in the initial AGI stage he describes.
This is an old question: when is something sentient? Hofstadter wrote a large book about it, and even Schroedinger (father of Quantum Mechanics) weighed in. The answer probably is the old “you’ll know it when you see it”.
His reason for our demise being the competition for energy is interesting. Now if the ASI is in charge of a bunch of robots that can do the mining, drilling, etc., to extract the energy, then I guess it’s possible. And if it is so superintelligent, I guess it could solve the energy problem anyway.
Other questions: Is the ASI a singularity, or does it produce copies of itself? Do the copies compete? i.e. ASI warfare?
If ASI is the end result of biological evolution, then we are faced with the same question now facing SETI: where are they? If every intelligent biological species ends up creating ASI’s, and they grow exponentially, as the author suggests, then where are they?
The Eloi scenario is the single most likely outcome of AI.
Morlocks could never be.
Just remember that the crowd that is coding Healthcare.gov is a gazillion years away from coding AI, a fantasy that will never come off.
For, at the heart of it, to construct AI one must truly KNOW man.
We’ve been working on that project for about 50,000 years.
One must leave unto God His own work.
“For, at the heart of it, to construct AI one must truly KNOW man. We’ve been working on that project for about 50,000 years. One must leave unto God His own work.”
I’m guessing that it has been at least twice 50,000 years; and we will never ‘know’ who we are because we, as individuals, are not gods. We are mere mortals fumbling along as we gaze at the stars.
We’re at the stage where science fiction is catching up with science.
On the other hand, since AI will be created in our own image, and with Asimov’s Laws of Robotics, it is just as likely that we end up in an “I, Mudd” scenario where they become our “parents” or make us their pets.
Sort of like the Nanny State, but run by intelligent beings and not bureaucrats.
If our enemies create AI, it will be a weapon used to enslave humanity.
If AI is created with free will, various other things will happen instead.
Parker, I’ve heard that Obama offers ascension to godhood; all you have to do is sign up.
Reading Mark Steyn’s latest (the loss of work, the loss of purpose), I see no reason why the useless should not be eliminated.
I for one welcome our new overlords. At least if they’ll give us work. Even make us slaves. Anything instead of this grotesque being provided for!
It is always dangerous to depend too much on something too unpredictable, and this is true even for very simple automatons, like an autopilot or a computer program running a nuclear station. AI is no different in this respect from the clockwork controller of a washing machine. The real problem here is that many complex devices become inherently unreliable when they become too complex. One of them is not electronics or software, but government itself. That is our Frankenstein.
This problem has been dealt with in SF a considerable number of times.
When H.A.R.L.I.E. Was One, by David Gerrold
The Two Faces Of Tomorrow, by James P. Hogan
are probably two of the most seminal works.
Classic works involving AI also are
The Moon Is A Harsh Mistress, by Robert Heinlein
Colossus, by D.F. Jones (there are two little-known sequels, The Fall of Colossus, and Colossus and the Crab, which are very relevant to the whole storyline, mind you)
The Revolution From Rosinante by Alexis Gilliland (which also has two sequels, Long Shot For Rosinante and Pirates of Rosinante)
The Colossus trilogy is the only one of those in which the AI is central to the events of the plot, as opposed to being an additional character of significance.
There are many others but those come directly to mind.
By the way, the first two of those are specifically regarding an emergent intelligence, that is, one developing and being dealt with as it becomes able to affect its environment.
One of the most relevant things to understand is that we still haven’t mastered the problem of “common sense”. This is described fairly well in Hogan’s book. How do you define “common sense”? Not even in the more complex form of “why do liberals fail so badly at it?”, but the kind that even THEY generally don’t fail at.
Hogan’s example:
The cat has fleas. We want to get rid of the fleas. Well, heat kills fleas. Throw the cat in the furnace!
It is boneheaded obvious that that will kill the cat, too… but we didn’t specify that. It was “understood”. So we tell the AI that we don’t want the cat to die, providing a “common sense constraint”.
OK, now the dog has fleas. How do we get rid of them? Well… heat kills fleas….
We didn’t TELL the AI that we wouldn’t want a solution to kill a DOG, only a CAT.
And there you can see the other aspect of the problem, the one that makes simple rule-based answers not work: we need common sense not only to handle a specific rule, but to effectively GENERALIZE from those rules, to realize that when we say “don’t kill the cat”, we almost always also mean “don’t kill the dog, the cow, the horse, the elephant, the giraffe…”
The “generalization” problem is very much outside the scope of anything programmers can currently come close to, and it’s one of the key things preventing us from CREATING an AI intentionally.
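To make the gap concrete, here is a toy Python sketch of my own devising (it is not from Hogan’s book or Barrat’s, and every name and rule in it is hypothetical): a literal-minded rule-plus-constraint planner that checks only the constraints it was explicitly given, so “don’t kill the cat” does nothing at all to save the dog.

```python
# Toy illustration only: a planner that honors explicit constraints
# with no generalization. All names and rules here are made up.

# The single "rule" the system knows: heat kills fleas.
PLANS = {
    "fleas": "put the {host} in the furnace",
}

# The only common-sense constraint anyone thought to state.
FORBIDDEN_HOSTS = {"cat"}  # "don't kill the cat"

def plan(problem: str, host: str) -> str:
    """Propose a fix for `problem` on `host`, rejecting only explicitly forbidden hosts."""
    if host in FORBIDDEN_HOSTS:
        return f"rejected: the plan would harm the {host}"
    return PLANS[problem].format(host=host)

print(plan("fleas", "cat"))  # rejected: the plan would harm the cat
print(plan("fleas", "dog"))  # put the dog in the furnace -- the constraint never generalized
```

The missing piece is exactly the generalization step described above: nothing in a program like this can infer that “cat” stands in for “any animal we care about”.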
Whether or not something like that can ARISE from a disorganized system is unknown to us. It may well be that, when the internet gets 100 billion CPUs dedicated to its own control, something will “magically” happen (and yes, it would BE magic, because we have no idea how it could occur) and some form of self-sustaining AI will be enabled. And that AI would be utterly alien to us, because we’d have no understanding of it, and whatever principles created it would be a mystery. As in Hogan’s book, things could get pretty chaotic until we managed to actually recognize each other’s existence … yes, it would be an ALIEN life form in every sense.