Artificial Intelligence Pioneer: We Can Build Robots With Morals
By Jason Koebler
Like it or not, we're moving computers closer to autonomy
Judea Pearl, a pioneer in the field of artificial intelligence, won the Association for Computing Machinery's A.M. Turing award earlier this month, considered the highest honor in the computing world.
Pearl developed two branches of calculus that opened the door for modern artificial intelligence, such as the kind found in voice recognition software and self-driving cars…
The calculus Pearl invented propels probabilistic reasoning, which allows computers to establish the best courses of action given uncertainty, such as a bank's perceived risk in loaning money when given an applicant's credit score.
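The loan example above boils down to Bayes' rule, the engine behind the probabilistic reasoning Pearl pioneered. Here is a minimal sketch of how a bank might update its belief that an applicant will default after seeing a low credit score; the function and all probability values are invented for illustration, not taken from the article.

```python
# A minimal sketch of probabilistic reasoning in the spirit of
# Pearl's work: update a belief after observing evidence.
# All probabilities below are made-up illustrative values.

def posterior(prior, likelihood, likelihood_given_not):
    """Bayes' rule: P(H|E) = P(E|H) P(H) / P(E),
    where P(E) = P(E|H) P(H) + P(E|~H) P(~H)."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: 5% of applicants default; 80% of defaulters
# have a low credit score, versus 20% of non-defaulters.
p = posterior(0.05, 0.8, 0.2)
print(f"P(default | low score) = {p:.3f}")
```

The point of the calculus is exactly this move from hard true/false rules to a graded "maybe": the low score raises the estimated default risk from 5% to roughly 17%, rather than flatly accepting or rejecting the applicant.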
"Before Pearl, most AI systems reasoned with Boolean logic—they understood true or false, but had a hard time with 'maybe,'" Alfred Spector, vice president of research and special initiatives at Google, said of Pearl's work.
The other calculus he invented allows computers to determine cause-and-effect relationships.
At 75, Pearl, who is the father of slain Wall Street Journal reporter Daniel Pearl, is currently working on a branch of calculus that he says will allow computers to consider the moral implications of their decisions.
Artificial intelligence has improved by leaps and bounds over the past few years—what's the greatest hurdle for scientists working on making machines more humanlike?
There are many hurdles: the complexity of being able to generalize, an array of technical problems. But we have an embodiment of intelligence inside the tissues of our skull. It's proof that intelligence is possible; computer scientists just have to emulate the brain in silicon. The principles should be the same, because we have proof that intelligent behavior is possible.
I'm not futuristic, and I won't guess how many years it'll take, but this goal is a driving force that's inspiring for young people. Other disciplines can be pessimistic, but we don't have that in the field of artificial intelligence. Step by step we overcome one problem after the other. We have this vision that miraculous things are feasible and can be emulated in a system that is more understandable than our brain.
What do you think is the most impressive use of artificial intelligence that most people are familiar with?
I think the voice recognition systems that we constantly use, as much as we hate them, are miraculous. They're not flawless, but what we have shows it's feasible and could one day be flawless. There's the chess-playing machine we take for granted. A computer can beat any human chess player. Every success of AI becomes mundane and is removed from AI research. It becomes routine in your job, like a calculator that performs arithmetic, winning in chess—it's no longer intelligence.
So what's next? What are people working on that'll be world changing?
I think there will be computers that acquire free will, that can understand and create jokes. There will be a day when we're able to do it. There will be computers that can send jokes to the New York Times that will be publishable.
I try to avoid watching futuristic movies about super robots, the ones that show machines trying to take over. They don't interest me.
Do you think those movies scare people off? Are they detrimental to the field?
I think they tickle the creativity and interest of young people in AI research. It's good for public interest; they serve a purpose. As for me, I don't have time. I have so many equations to work on.
What are you working on now?
I'm working on a calculus for counterfactuals—sentences that are conditioned on something that didn't happen. If Oswald didn't kill Kennedy, then who did? Sentences like that are the building blocks of scientific and moral behavior. We have a calculus such that, if you give the computer knowledge about the world, it can answer questions of that sort. Had John McCain won the presidency, what would have happened?
Sort of like an alternative reality?
It's kind of like an alternative reality, but you have to give the computer the knowledge. The ability to process that knowledge moves the computer closer to autonomy. It allows them to communicate by themselves and to take responsibility for their actions, a kind of moral sense of behavior. These are interesting issues: we could build a society of robots that are able to communicate with a notion of morals…
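Counterfactuals of the kind Pearl describes are usually formalized as structural causal models, evaluated in three steps: abduction (infer the background facts from what was observed), action (force a variable to a different value), and prediction (re-run the equations). The toy model below uses his well-known firing-squad story; the code, variable names, and `do` mechanism are an illustrative sketch, not Pearl's actual formalism.

```python
# A toy structural causal model: a court order (C) makes the captain
# signal (S), both riflemen fire (A, B) on the signal, and the
# prisoner dies (D) if either rifleman fires.

def model(c, do=None):
    """Evaluate the structural equations, optionally forcing some
    variables to fixed values (a crude stand-in for Pearl's do-operator)."""
    do = do or {}
    s = do.get("S", c)          # captain signals iff the court orders
    a = do.get("A", s)          # rifleman A fires on the signal
    b = do.get("B", s)          # rifleman B fires on the signal
    d = do.get("D", a or b)     # prisoner dies if either fires
    return {"C": c, "S": s, "A": a, "B": b, "D": d}

# Observed world: the court ordered the execution, the prisoner died.
observed = model(c=1)
assert observed["D"] == 1

# Counterfactual query: had rifleman A NOT fired, would the prisoner
# still have died?
#   1. Abduction: from the evidence, the court order C=1 must hold.
#   2. Action: intervene, setting A=0 regardless of the signal.
#   3. Prediction: re-evaluate the remaining equations.
counterfactual = model(c=1, do={"A": 0})
print(counterfactual["D"])  # rifleman B still fired, so the prisoner still dies
```

Intervening higher up the chain changes the answer: forcing the captain's signal off (`do={"S": 0}`) saves the prisoner, which is the kind of cause-and-effect distinction Pearl's second calculus is built to capture.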
North Korea’s dehumanizing treatment of its citizens is hiding in plain sight
With President Obama in Korea this week, we will hear a lot about the dangers of North Korea’s nuclear aspirations.
We’re unlikely to hear about a young man named Shin Dong-hyuk, who was bred, like a farm animal, inside a North Korean prison camp after guards ordered his prisoner-parents to mate. But Shin arguably has as much to teach about Korea’s past and future as about the cycle of negotiation, bluster and broken promises over the nuclear issue.
“Shin was born a slave and raised behind a high-voltage barbed-wire fence.”
So writes Blaine Harden, a former East Asia correspondent for The Post, in a soon-to-be-published account of Shin’s life, “Escape from Camp 14.”
Harden describes a closed world of unimaginable bleakness. We often speak of someone so unfortunate as to grow up “not knowing love.” Shin grew up literally not understanding concepts such as love, trust or kindness. His life consisted of beatings, hunger and labor. His only ethos was to obey guards, snitch on fellow inmates and steal food when he could. At age 14, he watched as his mother and older brother were executed, a display that elicited in him no pity or regret. He was raised to work until he died, probably around age 40. He knew no contemporaries who had experienced life outside Camp 14.
At 23, Shin escaped and managed, over the course of four years, to make his way through a hungry North Korea — a larger, more chaotic version of Camp 14 — into China and, eventually, the United States. He is, as far as is known, the only person born in the North Korean gulag to escape to freedom.
Improbably, his tale becomes even more gripping after his unprecedented journey, after he realizes that he has been raised as something less than human. He gradually, haltingly — and, so far, with mixed success — sets out to remake himself as a moral, feeling human being.
When he watched his teacher beat a six-year-old classmate to death for stealing five grains of corn, Shin says he “didn’t think much about it.”
“I did not know about sympathy or sadness,” he says. “Now that I am out, I am learning to be emotional. I have learned to cry. I feel like I am becoming human.”
But seven years after his escape, Harden writes, Shin does not believe he has reached that goal. “I escaped physically,” he says. “I haven’t escaped psychologically.”