12.29.2017
by Jim Edwards
- Chess grandmaster Garry Kasparov sat down with Business Insider for a long discussion about advances in artificial intelligence since he first lost a match to the IBM chess machine Deep Blue in 1997, 20 years ago.
- He told us how it felt to lose to Deep Blue, and why the human propensity for making mistakes will make it “impossible for humans to compete” against machines in the future.
- We also talked about whether machines could ever be programmed with “intent” or “desire,” to make them capable of doing things independently without human instructions.
- And we discussed his newest obsessions: privacy and security, and whether — in an era of data collection — Google is like the KGB.
LISBON — Garry Kasparov knew as early as 1997 — 20 years ago — that humans were doomed, he says. It was in May of that year, in New York, that he lost a six-game chess match to IBM’s Deep Blue, the most powerful chess computer of its day.
Today, it seems obvious that Kasparov should have lost. A computer’s ability to calculate moves in a game by “brute force” vastly exceeds a human’s.
But people forget that the Deep Blue challenge was a pair of matches, and Kasparov won the first, in 1996, in Philadelphia. Between the two matches, IBM retooled its machine, and Kasparov accused IBM of cheating. (He later retracted some of his accusations.)
In fact, Kasparov could have won the second match had he not made a mistake in game 2, when he failed to see a move that would have forced a draw. Deep Blue also made a mistake, in game 1 — one that Kasparov, at the time, wrongly put down to Deep Blue’s “superior intelligence” giving it the ability to make counterintuitive moves.
Nonetheless, in a conversation with Business Insider at Web Summit in Lisbon this year, Kasparov said that was the point at which he first realised that humans were “doomed” in the field of games.
“As long as a machine can operate in the perimeter knowing what the final goal is, even if this is the only piece of information, that’s enough for machines to reach the level that is impossible for humans to compete.”
“I could see the trend. I could see that it’s, you know, a one-way street. That’s why I was preaching for collaboration with the machines, recognising that in the games environment humans were doomed. So that’s why I’m not surprised to see the success of AlphaGo, or Elon Musk’s Dota player AI [an AI player for the video game Dota 2], because even with limited knowledge that these machines receive, they have the goal. It’s about setting the rules. And setting the rules means that you have the perimeter. And as long as a machine can operate in the perimeter knowing what the final goal is, even if this is the only piece of information, that’s enough for machines to reach the level that is impossible for humans to compete,” he says.
Kasparov has written a book on AI, titled “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins.” He is also currently an ambassador for Avast, the digital security firm.
Our first question was about “brute force,” and whether AI has moved beyond the problem of being reliant on vast databases to make choices instead of real “thinking” or “learning.”
“In the territory of games where the machine prevails because humans make mistakes.”
World chess champion Garry Kasparov studies the board shortly before game two of his May 1997 match against the IBM supercomputer Deep Blue, a six-game re-match of their first contest in 1996. It was only the second time in history that a computer program had defeated a reigning world champion in a classical chess format. The Russian grandmaster, who won game one on May 3, lost game two after 45 moves and 3 hours and 42 minutes of play. Reuters
Jim Edwards: You once said of artificial intelligence, we’re “in the territory of games where the machine prevails because humans make mistakes,” implying that AI’s main advantage was only that humans commit errors and machines do not. Is that still true?
Garry Kasparov: Human nature hasn’t changed since I said it. Humans are prone to make mistakes because even the best of us — in chess, in golf, or in any other game — cannot have the same steady hand as a machine.
JE: Does the AI advantage consist only of mere consistency? That it will not make a mistake?
GK: We have to understand the difference between what [US mathematician Claude] Shannon classified as type A and type B machines. The type A machines are brute force — brute force plus some algorithm, which might be something you may call AI, because it resembles the way humans make decisions. By the way, all the founding fathers of computer science, like Shannon, [Alan] Turing, [Norbert] Wiener, they all believed that real success, the breakthrough, would be achieved by type B machines, human-like machines.
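For readers unfamiliar with Shannon’s taxonomy, a minimal sketch of the type A idea — exhaustive search to a fixed depth plus a hand-written evaluation — might look like the Python below. The game interface (legal_moves, play, material_balance) is hypothetical; this illustrates the approach, not Deep Blue’s actual code.

```python
def evaluate(position):
    """Static assessment of a position from the side to move.
    In chess this would weigh material, king safety, mobility, etc."""
    return position.material_balance()  # hypothetical helper


def search(position, depth):
    """Plain negamax: try every legal move, recurse, keep the best score.
    This is Shannon's type A scheme -- brute force plus an algorithm."""
    if depth == 0 or position.is_terminal():
        return evaluate(position)
    best = float("-inf")
    for move in position.legal_moves():  # hypothetical interface
        best = max(best, -search(position.play(move), depth - 1))
    return best
```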
“Consistency is what is deadly for humans, because even the best of us are not consistent.”
JE: Would those human-like machines make mistakes?
GK: All machines make mistakes, don’t get me wrong, because even the most powerful type A machines, the brute force ones, cannot cope with everything. Deep Blue was a monster in speed in 1997, evaluating 200 million positions per second. But the number of legal positions in a game of chess is on the order of 10 to the 40th power. That’s why it’s not about sheer speed; it’s about certain assessments that the machine has to make just by moving from point A to point B. Machines are not prophets: machines can solve a game or a goal, but it’s not about solving, it’s about winning. And that’s why machines can also make mistakes — but when you look at the average quality of the moves, it’s fairly consistent. So consistency is what is deadly for humans, because even the best of us are not consistent. When you look at the top games played by the best players in a world championship match, you still find in all the games — not blunders or mistakes, but obvious inaccuracies.
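A quick back-of-envelope calculation (ours, not Kasparov’s) makes the point concrete: even at Deep Blue’s speed, enumerating that many positions is hopeless, which is exactly why the machine must assess positions rather than solve the game.

```python
positions = 10 ** 40         # rough count of legal chess positions
per_second = 200_000_000     # Deep Blue's 1997 evaluation speed

seconds = positions / per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")  # ~1.6e+24 years -- about a hundred
                             # trillion times the age of the universe
```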
“It’s, you know, a one-way street. That’s why I was preaching for collaboration with the machines, recognising that in the games environment humans were doomed.”
Garry Kasparov and Business Insider’s Jim Edwards talking at Web Summit in Lisbon, in November 2017. Avast
JE: One example you write about is that in chess, humans really dislike giving up their queen, because it is the most powerful piece on the board, even when doing so is advantageous.
GK: If you’re talking about professional players, they do whatever it takes to win. If we talk about top, top, top-level chess, it’s still not free from inaccuracies, caused by the fact that players can get tired, they can lose their vigilance. Psychologically, when you’re on the winning side, you think OK, the game is over, so you can relax. In a human game that doesn’t matter so much, since the favours are always being returned; facing a machine, you will be out of business quickly. That’s why every closed system — and games are closed systems — automatically gives machines the upper hand.
“Today machines are absolutely monstrous.”
I knew it since 1997. When you look at the absolute strengths of chess computers, Deep Blue was relatively weak by modern standards. Today machines are absolutely monstrous. They are much much stronger than Magnus Carlsen, and a free chess app on your mobile device is probably stronger than Deep Blue. I could see the trend. I could see that it’s, you know, a one-way street. That’s why I was preaching for collaboration with the machines, recognising that in the games environment humans were doomed. So that’s why I’m not surprised to see the success of AlphaGo, or Elon Musk’s Dota player AI [an AI player for the video game Dota 2], because even with limited knowledge that these machines receive, they have the goal. It’s about setting the rules. And setting the rules means that you have the perimeter. And as long as a machine can operate in the perimeter knowing what the final goal is, even if this is the only piece of information, that’s enough for machines to reach the level that is impossible for humans to compete.
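His “perimeter” point can be sketched in code: a self-play learner needs nothing beyond the rules and the goal. The loop below is schematic, with a hypothetical game/policy interface — not AlphaGo’s or OpenAI’s actual training code.

```python
def self_play_episode(game, policy):
    """Play one game against itself; return the move history and winner."""
    position, history = game.initial(), []
    while not position.is_terminal():
        move = policy.choose(position)   # current, still-imperfect policy
        history.append((position, move))
        position = position.play(move)
    return history, position.winner()


def train(game, policy, episodes=10_000):
    """The rules define the perimeter; the goal (winning) drives learning."""
    for _ in range(episodes):
        history, winner = self_play_episode(game, policy)
        policy.update(history, winner)   # reinforce moves that led to wins
```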
“Machines can solve any problem but they just don’t know what problem is the relevant one.”
JE: In terms of deep learning, where the machine has to learn on its own rather than just apply brute force, you’ve talked about how hard it is to give a machine a purpose. Machines don’t have intent — they do what you tell them. The machine itself doesn’t desire anything.
GK: It’s about identifying the goal.
JE: How important is intent in AI in the future? Is it possible to give an artificial intelligence an intent?
GK: Intent starts with a question, and machines don’t ask questions. Or, to be precise, they can ask questions, but they don’t know which questions are relevant. It’s a limitation, because a machine doesn’t understand the concept of diminishing returns; it can go on and on. So the moment there’s an intent, a purpose, you get this wall that the machine reaches and stops at. I see very little chance in the foreseeable future, if any, for machines to come up with an intent. I think it contradicts the way machines operate, because machines these days know the odds, they can work with patterns, but “intent” is what turns open-ended systems into closed systems, and you have to understand what is relevant. [I gave a lecture once] and I was followed by a professor from Cornell who talked about this, and she said, “Machines can solve any problem but they just don’t know what problem is the relevant one.”
“As someone who grew up in the Soviet Union, I know enough about dictators and the way they operate, how do we deal with the fact that we are willingly conceding so much personal data out of convenience?”
JE: Why is security such a big deal for you?
GK: It brings together certain elements of my life — the key interests of my current life. It’s connected to AI and technology, but it’s also about individual rights, and I’ve been warning for years about the danger of modern technology being used by bad guys to undermine our way of life. We are facing a big challenge. Let’s set aside Putin and Kim Jong Un and all these terrorists and dictators. Our society is undergoing a massive change because of technology, and there are many issues that I’m raising in these blogs — for instance, free speech versus hate speech. We used to live in an environment where we could just separate good from bad. Now, unless you regulate heavily, which I don’t like, it’s impossible to stop the flood of negative information, things we don’t want to see. So we have to make some tough choices about how we operate in the era of fake news, troll factories, and malicious players.

There is also a big issue that matters to me as chairman of the Human Rights Foundation, and as someone who grew up in the Soviet Union and knows enough about dictators and the way they operate: how do we deal with the fact that we are willingly conceding so much personal data out of convenience? We want all these benefits, not recognising that if you’re connected to the world, your information will be collected. What will be the outcome of this enormous data collection about individuals, with the information stored within multinational giants, corporations? I’m trying to find a rational — not a solution, but a suggestion: we still have to see the difference between Google’s data collection and the KGB’s. But even if it’s Google, an American company, how do you make sure that this information is not used to harm individuals?

Those are the issues that are important to me. Avast is also actively engaged in using AI to protect individual customers from all sorts of malware, and while I’m not an expert in the technology, I feel strongly that this is what has to be done: protecting individuals against all the threats they face in this exciting, modern, but dangerous world.