AIs won’t really rule us, they will be very interested in us: Juergen Schmidhuber
The Hindu
Jacob Koshy
December 20, 2017
The
pioneering computer scientist prophesies that machines smarter than humans will
emerge in the next two decades
Juergen Schmidhuber, 54, is a computer scientist who works on Artificial Intelligence (AI). Considered one of the pioneers of modern neural networks, he developed techniques, the best known being Long Short-Term Memory, that have been incorporated into speech and translation software on smartphones. In this interview, conducted in Berlin, he speaks of developments in AI, why the fear of job loss due to AI is unfounded, and his work.
Excerpts:
What is the most exciting AI
project under way in the world?
I would be quite biased, because I’d say what’s happening in my lab is the most exciting. My goal remains the same as it has been for a very long time: to build a general-purpose AI that can learn to do multiple things. It must learn the learning algorithm itself (one that can help it master chess as well as drive a car, for instance) — true meta-learning, as it’s called. We’ve been at it for 30 years and it’s getting more feasible over time. On this journey, we are producing less sophisticated but more useful stuff, like the AI now in smartphones.
How impressed are you by AlphaGo, a creation of Google DeepMind, which now beats human Go champions?
DeepMind is a company that was heavily shaped by some of my students. Shane (Legg), one of the co-founders, was among those who worked in my lab. It’s great that you can play Go better than any human. On the other hand, the basic techniques (behind AlphaGo) date back to the previous millennium. In the ’90s, there was a self-teaching neural network by IBM that learned to play backgammon by playing against itself. So board games are kind of simple, in the sense that they can use a ‘feed-forward’ network. (These are layered networks of artificial neurons arranged to mimic neurons in the brain. The program makes decisions based on how information moves up through these layers.) There are no feedback connections, so such networks cannot ‘learn’ sequences. These principles were developed when computers were 100,000 times more expensive than today. It’s great that Go (like chess), which is so popular in Asia, is among the games that machines now play better than humans.
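(To make the ‘feed-forward’ idea concrete, here is a minimal sketch in Python with NumPy. The layer sizes, the tanh activation and the 64-number board encoding are illustrative assumptions, not details from the interview.)

    # A tiny feed-forward pass: information moves 'up' through the layers,
    # with no feedback connections, and ends in a single decision value.
    import numpy as np

    def feed_forward(board, weights):
        activation = board
        for w in weights:
            activation = np.tanh(activation @ w)  # each layer transforms the previous one
        return activation  # e.g. a score for the current board position

    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((64, 32)), rng.standard_normal((32, 1))]
    print(feed_forward(rng.standard_normal(64), weights))  # 64 inputs in, 1 score out

With random weights the score is meaningless; a learning algorithm would adjust the weights until the scores favour good moves.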
What is Long Short-Term Memory?
It’s a technique in speech recognition and translation that many major companies — Facebook, Amazon, Samsung — are using, and it is based on work that we did in the early 1990s. It’s a recurrent network, a little bit like the brain. The brain has a hundred billion neurons, and each is connected to some 10,000 others. That’s a million billion connections, and each of them has a ‘strength’ that indicates how much one neuron influences another.
Then there are feedback connections, which make it (the network) like a general-purpose computer. You can feed in videos through the input neurons, acoustics through microphones, tactile information through sensors; and some neurons are output neurons that control, say, finger muscles. Initially, all connections are random and the network, perceiving all this, outputs rubbish. There’s a difference between the rubbish that comes out and the translated sentence that should have come out. We measure the difference and translate it into a change of all these connection strengths, so that they become ‘better’ connections. Through the Long Short-Term Memory algorithm, the network learns to adjust its internal connections to understand the structure of, say, English and Polish, and to translate between them.
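(The training idea described here, where rubbish comes out, the difference from the target is measured, and the connection strengths are changed, can be sketched in a few lines of Python using PyTorch. The sizes, the random data and the learning rate are illustrative assumptions, not from the interview.)

    # Minimal recurrent-network training loop: an LSTM whose random initial
    # connections output rubbish; measuring the error and adjusting the
    # connection strengths (weights) gradually reduces it.
    import torch

    lstm = torch.nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
    readout = torch.nn.Linear(16, 8)  # the 'output neurons'
    opt = torch.optim.SGD(list(lstm.parameters()) + list(readout.parameters()), lr=0.1)

    x = torch.randn(1, 5, 8)       # an input sequence of length 5
    target = torch.randn(1, 5, 8)  # what *should* have come out

    for step in range(200):
        out, _ = lstm(x)
        loss = torch.nn.functional.mse_loss(readout(out), target)  # measure the difference
        opt.zero_grad()
        loss.backward()  # translate it into changes of the connection strengths
        opt.step()
    print(loss.item())  # far smaller than at the start: the connections are 'better'

Real translation systems train on millions of sentence pairs rather than one random sequence, but the loop is the same: forward pass, measure the error, nudge every weight to shrink it.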
Given that self-driving cars are a reality, do we need regulation of AI machines, or do you think that could kill innovation?
Self-driving cars are now so well understood that you can, in certain countries, take some of them out on the roads. It’s old hat. They have been around since the 1980s, when they were demonstrated on the Autobahn. They went at 180 km/h, or thrice the speed of the Google cars, and drove from Munich to Denmark on the highways. Back then, computers were 100,000 times more expensive. Now, thanks to Deep Learning (a way of organising neural networks, and the zeitgeist of the field), pattern recognition has vastly improved since 2011.
But what about self-driven cars that could make mistakes?
The nature of AI is that it doesn’t know everything in advance. Machine learning is all about failing and learning from failure. It’s not like the perfect robots of Isaac Asimov’s stories. Were a car to sense a situation that could potentially lead to an accident, 99% of the time it’s going to brake hard. Flashing lights will warn the people in the cars behind that this car is going to brake hard, so that they do the predictable thing of braking hard too, rather than swerving blindly. In some situations, even that may not be the perfect response… maybe there’s another way to save a life by doing something really complicated. However, even if self-driven cars cannot be fixed to handle every such case, if they lead to, say, one life lost per 100 million per day as opposed to 10 as of today (where manually driven cars are the norm), then lawmakers would move to make it mandatory to have only self-driven cars on the road. There could still be mistakes, but the law of large numbers says that, on average, there will be fewer deaths from self-driven cars. This is provided, of course, that insurance companies and carmakers aren’t driven to bankruptcy. If better traffic is key to the better running of society, then systems will shift accordingly. In that sense, it’s no different from what has happened in the past.
What about AI’s potential to
destroy jobs?
Interestingly, people have predicted similar things for decades — for example, with industrial robots. Volkswagen and other companies had hundreds of thousands of workers who lost jobs to robots. But look at the countries with a high per capita presence of industrial robots — Japan, South Korea, Germany. They all have low unemployment rates. This is because lots of new, unanticipated jobs came up. Who could have thought there would be a job where people make money as YouTube bloggers? Or selling apps? Some make a lot of money, some don’t, but it’s still a lot of new jobs. It’s easy to see which jobs will be lost but harder to predict which new ones will emerge. Societies must think of alternative ways to adjust to these new realities. There was a referendum on universal basic income in Switzerland. It failed, but it still got 30% of the vote. Wait another 20 years and it could be 55%.
Do you think it will be possible
for AI systems to ‘learn’ ethical and moral codes?
Anything that can be taught via demonstration can, in principle, be taught to an AI. How do we teach our kids to be valuable members of society? We let them play around, be curious and explore the world. We punish them, for instance, when they take a magnifying lens and burn ants. And they learn to adopt our ethical and moral values. The more situations they are exposed to, the closer they come to understanding those values. We cannot prove or predict that they are always going to do the right thing, especially if they are smarter than their parents. Einstein’s parents couldn’t predict what he would do, and some of the things he discovered can be used for evil purposes. But this is a known problem. In an artificial neural network, it’s easier to see, in hindsight, what went wrong. After a car crash, for instance, we can find which neuron influenced which. If it’s a huge network, it will take some time, but it’s possible. With humans you can’t do this. You can only ask them, and very often they will lie. Artificial systems, in that sense, are under control.
Do you believe in the concept of superintelligence (when AI evolves to a level that far exceeds human capability)? Is there a date by which, given current progress, machines could ‘rule us’?
I would be very
surprised if, within a few decades, there are no AIs smarter than ourselves.
They won’t really rule us. They will be very interested in us — ‘artificial
curiosity’ as the term goes. That’s among the areas I work on. As long as they
don’t understand life and civilisation, they will be super interested in us and
their origins. In the long run, they will be much more interested in others of
their kind and it will expand to wherever there are resources. There’s a
billion times more sunlight in space than here. They will emigrate and be far
away from humans. They will have very little, if anything, to do with humans.
It won’t be like a (dystopian) Arnold Schwarzenegger movie.
But could we go extinct, or be exterminated the way the Neanderthals were?
No. People are much smarter than, say, frogs, but there are lots of frogs out there, right? Just because you are smarter than them doesn’t mean you have any desire to exterminate them. As humans, we are responsible for the accidental extermination of a lot of species that we don’t even know exist. That is true, but at least you won’t have the silly conflict of the Schwarzenegger movies, or of The Matrix, where bad AIs live off the energy of human brains. That, incidentally, is the stupidest plot ever.
A brain produces about thirty watts, and the power plant needed to keep the human alive consumes much more than that. When should you be afraid of anybody? When you share goals and have to fight for them. That’s why the worst enemy of a man is another man. However, the best friend of a man is also another man, or a woman. You can collaborate or compete. An extreme example of collaboration may be love, that is, shared goals towards having a family. The other extreme could be war. AI will be interested in other AI, like frogs are interested in other frogs.
What’s the limitation to AI now —
code or hardware?
It’s a little bit about code, although the basic ideas are from the previous millennium. We have some breakthroughs, but the dominant theme is that hardware is getting cheaper every year. In 30 years, it’s going to be cheaper by a factor of a million. Soon we will have a small device that computes as much as the human brain. In our lab, we’ve profited a lot from hardware built earlier by companies such as NVIDIA. They didn’t care about Deep Learning then, only about selling graphics processors to the video game industry. But then it turned out that these were exactly what was needed to make neural networks fast.
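(A quick back-of-the-envelope check of that factor of a million: over 30 years it corresponds to cost-performance doubling roughly every 18 months, since 2^20 is about a million. The doubling time here is an assumption for illustration, not a figure from the interview.)

    # Compounding check: doubling every 1.5 years for 30 years
    years, doubling_time = 30, 1.5
    print(2 ** (years / doubling_time))  # 1048576.0, i.e. about a million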