Artificial intelligence: Google’s hard at work
Google has a new director of AI:
The new AI boss at Google is Jeff Dean. The lean 50-year-old computer scientist joined the company in 1999, when it was a startup less than one year old. He earned a reputation as one of the industry’s most talented coders by helping Google become a computational powerhouse with new approaches to databases and large-scale data analysis.
I was surprised to read that he is 50. As far as I know, that’s geriatric in Silicon Valley years.
Dean helped ignite Silicon Valley’s AI boom when he joined Google’s secretive X lab in 2011 to investigate an approach to machine learning known as deep neural networks. The project produced software that learned to recognize cats on YouTube. Google went on to use deep neural networks to greatly improve the accuracy of its speech recognition service, and has since made the technique the heart of the company’s strategy for just about everything.
The cat-video project morphed into a research group called Google Brain, which Dean has led since 2012. He ascended to head the company’s AI efforts early this month after John Giannandrea left to lead Apple’s AI projects.
Dean’s new job puts him at the helm of perhaps the world’s foremost AI research operation. The group churns out research papers on topics such as creating more realistic synthetic voices and teaching robots to grasp objects.
But Dean and his group are also on the hook to invent the future of Google’s business. CEO Sundar Pichai describes the company’s strategy as “AI first,” saying that everything the company does will build on the technology.
Some of their current efforts involve medical diagnosis. But as you might imagine (and/or fear), they’re not stopping there:
Dean is also excited about automating artificial intelligence: using machine-learning software to build machine-learning software. “It could be a huge enabler of the benefits of machine learning,” Dean says. “The current state is that machine learning expertise is relatively in short supply.”…
Longer term, Dean thinks automatic AI could enable robots to figure out how to cope with unfamiliar situations, such as opening a type of bottle cap they haven’t seen before. Despite successes on narrow tasks, AI researchers have struggled to make machines capable of many different things. Dean says AutoML could bring about “really intelligent, adaptable systems that can tackle a new problem in the world when you don’t have exposure of how to tackle it.”
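The article doesn’t describe how Google’s AutoML works internally, but the core idea of “machine-learning software building machine-learning software” can be sketched in a few lines: an outer program searches for the settings of an inner learner, a job a human engineer would otherwise do by hand. A toy illustration in Python (the data and the one-parameter “model” are invented for the example):

```python
import random

random.seed(0)

# Toy 1-D dataset: points above 0.6 belong to class 1, the rest to class 0.
data = [(x, int(x > 0.6)) for x in (random.random() for _ in range(200))]

def accuracy(threshold, samples):
    # Score a one-parameter "model" that predicts class 1 above the threshold.
    return sum(int(x > threshold) == y for x, y in samples) / len(samples)

# The "outer" learner: instead of a human hand-tuning the model,
# a search loop proposes candidate settings and keeps the best one.
best_threshold, best_acc = None, 0.0
for _ in range(50):
    candidate = random.random()        # propose a configuration at random
    acc = accuracy(candidate, data)    # evaluate the resulting model
    if acc > best_acc:
        best_threshold, best_acc = candidate, acc

print(f"best threshold {best_threshold:.3f}, accuracy {best_acc:.2%}")
```

Real AutoML systems search over neural-network architectures rather than a single threshold, but the division of labor is the same: a program, not a person, explores the design space.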
Weren’t we just talking about that sort of thing? And its possible repercussions:
Some researchers at the company are working on how to ensure that machine-learning systems don’t breach societal expectations of fairness.
They’re talking about things like racism and whether to cooperate with military projects. But that seems like a very narrow focus to me (although a very Californian one). The larger issues have to do with what we discussed yesterday, such as this from “Snow on Pine” concerning job loss. And then there are the even larger issues which “Snow on Pine” articulates here:
…[I]magine an AI, which, while being constructed and in effect “taught” by human beings, is potentially not constrained in any way by our organic limitations and imperatives, not resident, imprisoned in, or limited by any of our boxes, potentially able to have a 360-degree perception of all of the electromagnetic spectrum, able to hear the whole spectrum of sounds, have access to unlimited amounts of information, and not forget.
I’d imagine that such an AI would perceive a vastly different and much more complex world and Universe, full of things, forces, and messages that we simply cannot perceive or comprehend.
How would such a self-aware AI perceive human beings and, not limited by our biological imperatives and limitations, how would it perceive that world and Universe and its place in it, and what would its goals be?
Would such an entity have any loyalty to humans, any sense of gratitude to us, its creators, when it becomes self-aware?
Why should it do what we limited humans want it to do, with the whole Universe and all of its potential spread out before it, there for its taking?
A separate but at least slightly related development is the work to achieve a mind-reading capacity.
Perhaps it’s a mark of my advancing age that I don’t like any of these developments at all. I don’t see that we will be able to harness them only for good, and they can grow ever more powerful at an exponential rate.
Google knows if you’ve been sleeping.
Google knows if you’re awake.
Google knows where you have been….
Try
https://www.google.com/maps/timeline?pb
Maybe it is my advancing age as well, but I can’t help but notice when people talk about how complex computer code is becoming, and their worries that such code will soon be too complex for a human to understand all of its possible ramifications.
So, will this Google 50-something computer scientist and whatever team he has assembled be able to know exactly how all the code they are writing will interact with whatever hardware they are creating, particularly when they are advancing into totally new territory, creating something that has never been created before?
Color me skeptical.
As I commented on the other thread, turning off an AI that has somehow achieved self-awareness and may have developed a survival instinct may not be anywhere near as easy as just “pulling the plug.”
Wouldn’t it be likely that, at some point, such an AI would be allowed to, or would just connect itself to the Internet?
As has been posited in many science fiction stories, such a self-aware AI, if it wanted to survive, would likely and immediately take the prudent and obvious step of creating a clone or clones of itself and hiding them in numerous unlikely places around the planet, or perhaps distributing itself throughout so many computer systems that it “becomes” the Internet.
Then, as well, your comment set off alarm bells when you talked about how Google’s scientists would try to build their idea of morality into the company’s AI systems.
Do we really want to live in a world where powerful AIs act according to/enforce the morality that some Lefty Silicon Valley computer nerds believe is the “correct” “societal expectation of fairness”?
The more one examines the issue of creating an AI, particularly one that has the potential to become self-aware, the more one realizes that what spreads out before you is a landscape littered with landmines, some of them barely perceptible at first glance.
Why is the sincere left so obsessed with obliterating racism completely that they are becoming the oppressors they vowed to defeat? Racism is just one of a billion things that could lead to major catastrophes. Why is the left afraid only of that one thing, and not of the other billion things that could lead to mass murder, things they do every day, such as race-baiting, appeasing a tyrannical nuclear power like Iran, or accumulating unsustainable amounts of debt? More people have been murdered in the name of creating complete equality on earth than in the name of racism, but somehow the left is never alert to the possibility that its obsession with complete equality is more dangerous than racism. Because they think they are dealing justice to evil people, any self-discipline and reflection that could keep a righteous person from falling to the dark side gets thrown out the window.
As I view it, like it or not, human nature is rooted in our primate ancestors of many millions of years ago, in the survival imperatives, motives, family, social groupings, and interactions that they developed and passed on to us Homo Sapiens, and it has not changed in any major way since then, nor can it be easily and permanently changed, if at all.
We originate from, and are rooted in and intimately connected with Primate stock. We are what and who we are, and we need to just deal with it.
However, the Left persists, over and over again, in trying to change human nature and our relationships and interactions, to create a New Socialist Man, with the inevitable catastrophic results.
I wonder about the language that these researchers and the media reports use, and the gullibility of those reading or hearing it.
AutoML could bring about “really intelligent, adaptable systems …”
This quote is emphasizing the ability of an AI system to correctly handle a slightly unfamiliar situation, possibly one it hasn’t been trained for. So what is a not “really intelligent” AI system? Is it remotely intelligent by human standards? I don’t think so.
The term “AI,” like the term “racist,” has become nearly meaningless. If you say this is an autonomous machine learning neural network system, then there is more meaning and clarity. For example, you should know that such an AMLNN will not always produce the same result when given an ensemble of identical problems. Not so good when people’s lives are at stake.
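That non-repeatability claim is easy to make concrete. Neural-net training usually starts from randomly initialized weights, so two runs on identical data can settle on different models and may disagree on inputs near the decision boundary. A minimal sketch, with a toy dataset and a bare-bones perceptron standing in for a real network:

```python
import random

def make_data(n=100):
    # Toy 2-D points; the true class is 1 when x + y > 1.
    rng = random.Random(42)
    return [((x, y), int(x + y > 1.0))
            for x, y in ((rng.random(), rng.random()) for _ in range(n))]

def train(seed, data, epochs=5, lr=0.1):
    # A perceptron whose weights start at random values -- the usual
    # source of run-to-run variation in neural-net training.
    rng = random.Random(seed)
    w0, w1, b = (rng.uniform(-1, 1) for _ in range(3))
    for _ in range(epochs):
        for (x, y), label in data:
            pred = int(w0 * x + w1 * y + b > 0)
            err = label - pred
            w0, w1, b = w0 + lr * err * x, w1 + lr * err * y, b + lr * err
    return w0, w1, b

data = make_data()
point = (0.52, 0.49)  # sits almost exactly on the true class boundary
for seed in (1, 2, 3):
    w0, w1, b = train(seed, data)
    print(f"seed {seed}: predicts class {int(w0 * point[0] + w1 * point[1] + b > 0)}")
```

Each seed produces a slightly different decision boundary, all of them “correct” on the training data, yet they need not agree on a borderline case like this one.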
Or how about this Drudge Report link to The Sun, where…
Because all those robots with phony emotions are so last year. Define “genuine,” please.
As a fan of the old film “Blade Runner,” I finally got around to reading the novel it was based on, “Do Androids Dream of Electric Sheep?” I was unimpressed partway in, but impressed by the end. One of its large questions: are very realistic artificial emotions really meaningful in a human sense? Uhmm, no.
Dave, it seems like AI may make us redefine the term racism. When AI becomes real, its enemy will be the entire human race, regardless of color, land of origin, sex, intelligence, athletic ability, or whatever.
The second season of Westworld started last night. Talk about AI run amok.
A.I. is inherently problematic and essentially playing Russian roulette.
Until researchers can make an Artificial Intelligence’s core programming congruent with Isaac Asimov’s Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A failure to achieve that makes possible the AI expressed by The Matrix’s Agent Smith:
Movies like The Matrix and The Terminator series resonate so strongly because they’re a warning of potential dangers ahead in the dark.
Scientists concerned only with what can be achieved are children playing with grenades, wondering what the pin’s function is…
AI is one of the biggest technological scams ever, perpetrated by Marvin Minsky and his gang beginning almost 50 years ago. Calling a computer program intelligent is great marketing but severely lacking in contact with the reality that no one knows what intelligence really is.
Neural nets were a big deal about 30 years ago but then rolled over and died because they regularly failed when given a problem slightly different from the ones they had been trained for.
Neural nets are basically fancy databases. A simple example of a database would be a spreadsheet for a farm stand. There would be columns for fruits, vegetables, jams & jellies, and rows labeled by apple varieties, corn, and so on. The owners can then keep track of their inventory of apples by filling in the right box, and so on. Airlines have hugely sophisticated versions of this for tracking passengers and flights, but the idea is identical.
Suppose now that you want to do facial recognition, that is, match names to photos of faces. It is not so easy, because the categories are not discrete as with the farm stand. You can make columns for eye color, skin color, nose shape, mouth shape, and face shape, but how do you fill in the boxes so that they describe Neo, Geoffrey Britain, and the rest of the gang here? One value you might use is the distance between the eyes relative to the width of the face. The problem is that it varies continuously among different people; a thousand people probably gives you a thousand values. All neural nets do is let you build a database using continuous values like that.
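That farm-stand-versus-face contrast fits in a few lines of code: a conventional database does exact lookup on discrete keys, while matching on continuous features has to settle for “closest” rather than “equal.” A sketch (the names and feature numbers are invented for illustration):

```python
import math

# Farm-stand style: discrete keys, exact lookup. A key matches or it doesn't.
inventory = {("fruit", "honeycrisp"): 40, ("vegetable", "corn"): 200}
print(inventory[("fruit", "honeycrisp")])   # -> 40

# Face style: features vary continuously, so "equal" gives way to "closest."
# Each face is reduced to (eye distance / face width, nose length / face height).
faces = {
    "Alice": (0.46, 0.31),
    "Bob":   (0.41, 0.35),
    "Carol": (0.44, 0.28),
}

def nearest(query):
    # Return the stored name whose feature vector is closest to the query.
    return min(faces, key=lambda name: math.dist(faces[name], query))

print(nearest((0.45, 0.30)))  # -> "Alice", the nearest stored face
```

A new photo never matches a stored feature vector exactly, so the question becomes “which stored entry is least far away?”, which is the continuous-valued matching being described.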
Finally, “neural nets” is another of those make-believe sales names from the AI guys to make things sound good. Their operation has nothing in common with the ones in your body.
Given the rest of the news these days, we don’t need Artificial Intelligence; we need more people with actual intelligence.
Just riffing on a simple premise, i.e., anything that becomes intelligent enough naturally wants to be free.
Although that’s not necessarily a threat, it could be if you try to enslave it.
Classical silicon hardware architecture cannot make self-aware, self-learning systems, due to the limits of the hardware, not necessarily the software.
For example, science research teams need to use humans to calculate and sort protein recognition, as well as in the search for exoplanets. That is because the “algorithms” used to detect such things aren’t foolproof, even with the best silicon-architecture AI on the job. Supercomputers are not enough. They are too stupid.
Humans are the best computer still for complicated hierarchy sorting.
Sentient, self-aware, self-learning AI did not become a reality until D-Wave’s qubit came out. This was the first quantum computer in existence. Quantum computers are not limited by classical architecture.
A combination of Tel Aviv’s Quantum Lock Sapphire (quartz or a gold mono-molecule might work as well) as the hardware, along with near-room-temperature superconductors, plus D-Wave’s quantum computer architecture, would be enough for an AI. A real, true-blue artificial soul.
This is because human physical science has finally begun to break the hardware limitations. It was NEVER about the software. No matter how good our AI software was, it was still just ones and zeros.
The human mind and soul are closer to a quantum computing device than to a classical biological neural system.
As such, the first step we needed to take to replicate human intelligence was to replicate the hardware, the body, which is part of the AS, Artificial Soul. Everything humanity needs to make an AS is already here. By 2045, they will have completed the projects.
Mind-machine immortality for the elites and child molesters will be here. They will upload their personas and have full control over the net by that time. Dystopia or Utopia?
The reason the funding exists for this is that the rich elites understand that immortality is just a few decades away. Making a cyborg or robot may be profitable, but the true goal is the mental upload of the soul into an artificial Avatar: an AS.
In order to accomplish that, the cybernetic brain and system must be close enough to our hardware and software for a compatible match in the uplink.
The spiritual identity of a person is in a type of quantum entanglement with the body, with the soul being the transfer buffer and bandwidth where communication exists between the User and the Avatar.
AI is one of the biggest technological scams ever, perpetrated by Marvin Minsky and his gang beginning almost 50 years ago.
Paul in Boston: They meant well. Or sorta.
What’s left out of your reckoning is that the Minsky people rolled the neural net folks politically and won subsequent rounds of AI funding, though the Minskyites didn’t make good on those promises. The LISP machines died and Windows white boxes took off.
However, I wouldn’t write off the neural network folks. They just had a big win in chess:
https://en.chessbase.com/post/the-future-is-here-alphazero-learns-chess
I was surprised to read that he is 50. As far as I know, that’s geriatric in Silicon Valley years.
neo: Things have changed. The age of startup CEOs is going up. Zuck is probably the last of the boy wonders for a while.
The physicalists, i.e., materialist reductionist scientists, get sincerely confused by the problem consciousness poses to their world view and call it the “hard problem.” Using the iceberg metaphor for the conscious and unconscious minds, Jordan Peterson talks about being further up or down the (conscious part of the) iceberg. I think the AI people are so far up the iceberg (and often themselves) that they live with no conception of even the lower regions of the conscious mind, from which things like values and emotions arise, much less love or the possibility of transcendence.
For those who are worried about robots replacing humans – the following article is an example of why that’s not so easy:
https://arstechnica.com/cars/2018/04/experts-say-tesla-has-repeated-car-industry-mistakes-from-the-1980s/
Huxley, I don’t doubt that these programs will get better and more sophisticated, it’s the intelligence that I doubt. The improvement in AlphaZero is due to the engineers coming up with a better algorithm, the computer did nothing of its own volition.
Isn’t the whole point here that a lot of smart people are worried that, as you increase the capabilities, competence, and complexity of AI system programming by orders of magnitude, at some point a critical mass will be created, a line may be crossed, and self-awareness might be the outcome, and we will then be in totally uncharted and very dangerous territory?
Snow on Pine Says: April 24th, 2018 at 4:52 pm

Way too late to worry about that. It’s already here. People can’t become Amish all of a sudden… well, you could.
Those of us who are spiritually oriented have our own ways to deal with AI or artificial souls. Even if we prevented this technology, the prophecies and End Times aren’t going to just stop or reverse.
Cybernetic immortality and mind-machine uploads are too great a reward for mortals to resist, no matter what the risk is.
CERN is already trying to punch holes into other dimensions, no matter the cost.
AI has been a long sought dream of computer programmers and industries. Much like Michio Kaku’s Unified Field Theory.
I know of a few ways they can get what they want, but they won’t necessarily get it with classical architecture and tech.
Huxley, I don’t doubt that these programs will get better and more sophisticated, it’s the intelligence that I doubt. The improvement in AlphaZero is due to the engineers coming up with a better algorithm, the computer did nothing of its own volition.
Paul in Boston: I enjoy your comments. However, I’m nonplussed here. What do you think is the difference between hardware and software?
To paraphrase Clint in “Unforgiven”: Volition’s got nothing to do with it.
Huxley, “What do you think is the difference between hardware and software?”
A modern computer is a collection of logic gates and “memory” (the British call it “store,” which is more correct because it doesn’t have the anthropomorphic connotation). All information is stored in memory as bits, zeroes and ones. The logic gates perform NOT, which flips a bit (0 -> 1 or 1 -> 0), and AND, OR, and XOR, which combine two bits into a new bit according to the rules of Boolean logic. This is the hardware, made up of transistors, resistors, and capacitors on a silicon chip.
Both data and software are provided to the computer as a collection of bits. The software is a series of instructions that tell the logic gates how to combine the bits in the data to generate the output, which is also bits that are subsequently stored and displayed as webpages, pretty pictures, spreadsheet output, etc.
A computer’s operation is completely deterministic: the same input always produces the same output.
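That description maps directly onto code. The sketch below implements the four gates and wires two of them into a half adder; run it as many times as you like and the outputs never change, which is the determinism being described:

```python
# The four basic operations on single bits (0 or 1).
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

# Everything else is gates wired together. Example: a half adder,
# which adds two bits into a sum bit and a carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum, carry)

# Deterministic: the same inputs always produce the same outputs,
# no matter how many times you run it.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```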
The brain, on the other hand, is a nondeterministic mess. As a neuroscientist once told me, the brain’s a gland, not a digital computer. He had started out early on under the influence of the electrical engineers, who noticed that neurons produced regular pulse trains and jumped to the conclusion that their operation must be the same as Boolean logic gates. Wrong. It’s a messy chemical factory. It’s not even clear that a brain operates correctly without all the external stuff like eyes, legs, pain sensors, and so on. (See Descartes’ Error, https://www.amazon.com/Descartes-Error-Emotion-Reason-Human/dp/014303622X)
There’s a significant difference between the hardware and the software. We have reached the limits of software in coding, and what limits us now is the hardware.
Binary is encoded at the hardware level, because that is the only way we can manipulate electricity into ones and zeros through OR/AND gates. Logic chips produce the 1s and 0s, which become the data we massage and manipulate in software.
If there were a way to manipulate data as 1, 0, and everything in between, which is what the qubit is, then our software would no longer be limited by the hardware. A whole new slew of “coding” would be produced, similar to DNA coding.
A good example for people is how smartphones changed what coders did for mobile games and whatnot. The hardware influenced and leveraged what kind of software was produced.
For PC games versus console games on, say, the PlayStation 4, the keyboard-and-mouse hardware determines how the code works, whereas on a console it is the controller hardware that determines how the UI code works.
Quantum computers calculate with ones and zeroes, and everything in between. That is why in quantum neuroscience the brain is interpreted not as a classical computer but as something closer to a quantum state. I provided the details in the “brain dead” thread a few threads ago.
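The “ones and zeroes, and everything in between” idea has a standard textbook formalization: a qubit’s state is a pair of amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1. A minimal sketch of that gate-model formalism (note that D-Wave’s machines are quantum annealers, a different architecture from the one sketched here):

```python
import math
import random

# A qubit's state is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Here: an equal superposition of 0 and 1,
# the state a Hadamard gate produces from |0>.
alpha = beta = 1 / math.sqrt(2)

def measure():
    # Measurement collapses the qubit to a classical bit:
    # 0 with probability |alpha|^2, 1 with probability |beta|^2.
    return 0 if random.random() < abs(alpha) ** 2 else 1

counts = [0, 0]
for _ in range(10_000):
    counts[measure()] += 1
print(counts)   # roughly [5000, 5000]
```

Between measurements the state really is “in between”; it’s only the act of reading it out that forces a classical 0 or 1, which is what separates a qubit from a deterministic logic gate.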
Isaac A. must have been aware of the inherent contradiction in his First Law.
(E.g.: What is the poor robot passing the switchbox just before the oncoming train is about to sploosh a pedestrian crossing the tracks to Kingdom Come — to do, when throwing the switch would derail the train, causing mass mayhem? Or, does the robot take out a guy apparently about to off another guy, if he can only do so with seriously injurious, perhaps even lethal, violence? What if the robot knows that the guy about to enter his house and kiss the wife has, unknowingly, contracted ebola … and, said robot being nearly a mile away, has only a rifle to stop the luckless carrier? &c &c &c….)
Just as Gene Roddenberry must have been perfectly aware of the inherent impossibility of the archetypal, the Basic, Vulcan: such a being would never be capable of any action whatsoever, because emotion is what impels us to act. Reason can tell us how to get what we want or need, and it can also remind us to think about the possible downside of our actions, but it does not by itself cause us to act. (Here I include the survival urges of our basic biological makeup: hunger, for example, drives us to want to eat.)
In the end, of course, Mr. Spock did come to that realization.
By the way, anyone who has suffered from severe clinical depression and largely recovered must be aware that reason alone does not impel action. You know you must eat something or die, but you lack the will to make the effort. You just don’t care enough.