Interview with Stuart Russell – ‘AI community must start doing ethics’


Artificial intelligence expert Stuart Russell thinks it is time for the AI community to mature in terms of ethics and professionalism, he said in an interview.

Russell teaches computer science at the University of California, Berkeley and is the author of the standard work Artificial Intelligence: A Modern Approach, which is used at many universities. During his presentation at the Developers Summit, he positioned himself against Stephen Hawking and Elon Musk, among others, who have said more than once in the past that artificial intelligence could destroy us all or cause a third world war.

It doesn’t have to come to that, said Russell, who noted in passing that Hawking and Musk are not computer scientists or AI experts. According to Russell, it is quite possible to design artificial intelligence that can be controlled and will not turn against its creators. He therefore argues for a different approach and explained what it requires. He bases this on his concept of ‘provably beneficial AI’: artificial intelligence that demonstrably benefits humans, with the proof flowing from mathematical theorems.

Such an AI tries to maximize human preferences, Russell explained. By this he means that we do not give the artificial intelligence fixed goals for the system to optimize, but instead keep it permanently uncertain about our preferences. The AI therefore does not pursue its own goals, but those of its creators. He illustrated this with the phrase: “I can’t make coffee if I’m dead.” Suppose a robot is given only the concrete instruction to make coffee; it then has a reason to disable its own off switch, because a robot that is switched off cannot make coffee. This behavior can lead to problems, for example when the robot is pursuing a goal and its values are not defined correctly.

However, this can be avoided by not setting fixed goals and instead designing the robot to infer human preferences from observing human behavior, according to Russell. A human will then only switch the machine off when doing so is better for achieving the human’s goal, he reasons, so it is also ‘in the machine’s interest’ to allow itself to be shut down.
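The intuition behind this off-switch argument can be made concrete with a toy calculation. The sketch below is my own illustration, not Russell’s formalism or code, and the Gaussian distribution over the action’s true utility is an arbitrary assumption: a robot that acts regardless of the human collects the utility whether it is positive or negative, while a robot that defers to the human’s off switch only ends up acting when the human lets it, so in expectation deferring can never do worse.

```python
import random

# Hypothetical setup: the robot is uncertain about the human's true utility U
# for its planned action; a rational human switches the robot off when U < 0.

def act_directly(utilities):
    # The robot ignores the off switch and always carries out the action.
    return sum(utilities) / len(utilities)

def defer_to_human(utilities):
    # The robot proposes the action and lets the human decide;
    # the human only allows it when the true utility is positive.
    return sum(u if u > 0 else 0.0 for u in utilities) / len(utilities)

random.seed(0)
true_utilities = [random.gauss(0.0, 1.0) for _ in range(100_000)]

print("acting regardless of the human:", round(act_directly(true_utilities), 3))
print("deferring to the human's switch:", round(defer_to_human(true_utilities), 3))
# Deferring never does worse in expectation, because the human only
# intervenes when the action would have had negative utility for them.
```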

How the different individual preferences of all 7.5 billion people should then be weighed against each other is not yet entirely clear, Russell explains in the follow-up interview. “That’s not really the decision of AI people, but we can contribute to it by making clear the consequences of different choices. One of the ways, for example, is to give equal value to all preferences.”

However, Russell also identifies a second problem: some people exhibit “negative altruism”, that is, they derive pleasure from the suffering of others. “I don’t think we should take those people’s preferences into account. I think I can make an exception on this point and say that, as an AI person, I can determine the way things should work. The basic principle remains, however, that the designers of AI systems do not determine the preferences, but that the preferences of humanity are the guiding principle.”
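As a rough illustration of the two points above, and again purely my own construction rather than anything Russell proposes concretely, one could imagine an aggregation rule that weighs everyone’s preference for an outcome equally but simply does not count preferences that only reward someone else’s suffering:

```python
# Hypothetical sketch: equal-weight preference aggregation that excludes
# "negative altruism" (pleasure derived from another person's suffering).

def aggregate(preference_scores):
    """Equal-weight average of per-person scores for one candidate outcome,
    ignoring scores flagged as purely sadistic."""
    counted = [s for s in preference_scores if not s.get("sadistic", False)]
    if not counted:
        return 0.0
    return sum(s["value"] for s in counted) / len(counted)

# Example: three people's scores for one outcome; the third is excluded.
scores = [
    {"value": 0.8},
    {"value": -0.2},
    {"value": 0.9, "sadistic": True},
]
print(aggregate(scores))  # 0.3
```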

That doesn’t alter the fact that someone has to design those systems in this way. “We do, but the question is whether people will actually use these systems this way. Why, for example, would a Dr. Evil, who wants to take over the world, use this ‘good’ software if it is not going to help him? I don’t have a good answer to that question. The same goes for fissile material. We can use it in good and safe reactor designs, but that’s not what Dr. Evil is after. He’s after a dirty bomb. That’s why there’s a whole international infrastructure to prevent such a thing.”

The same may be needed for artificial intelligence, says Russell. “I’m afraid we’ll need something similar for AI, but I don’t know yet what that should look like. Maybe other AI systems could be used for that, constantly looking for breaches of the AI software design code.” To the comment that such a code doesn’t exist yet, and the question of who should draw it up, Russell says: “That would go the same way we set rules for designing nuclear power plants or plugs. So we try things on a small scale, provide mathematical underpinnings, go through the necessary committees, and revise concepts.”

He continues: “I’m sure the designers of the Chernobyl nuclear reactor didn’t intend to blow it up, but it happened anyway. And that’s one of the reasons why I want mathematical theorems. Software is not a substitute for that, because we can come up with rules in advance, but the system can find a way around them and thus bring about the end of humanity.”

AI tools are also a lot easier to obtain than the ingredients for a dirty bomb. Asked whether some kind of Wassenaar Arrangement is needed for these tools, Russell says: “I hope we won’t need it. It will take development, professional codes of conduct and possibly also legislation to make clear that there are certain things you just don’t build.” Citing the example of the FakeApp tool, which uses machine learning to easily swap faces in videos, he says: “Even Pornhub, which may not have the highest moral standards, has moved to block that kind of material.”

“I think it’s fair to say it should be illegal to have someone say something in a video that they didn’t actually say. I don’t see what valid public interest is served by being able to create that kind of lie. I think there are already rules in place that prohibit the use of the image of a public person without permission. We just need to make it clearly illegal and make people think about the morality of it.”

When it comes to how we can agree on this, Russell points to existing examples. “There are certain precedents, such as the genetic modification of humans. The US took the lead with a voluntary ban and the rest of the world followed suit. I think it has to do with the fact that the medical community is different from the community of computer scientists. The former has always taken its ethical and professional responsibility very seriously. I think the AI community needs to mature a bit in that regard.”

One thing Russell says is plain common sense: a ban on the use of autonomous weapons. Last year he therefore co-published a video called Slaughterbots, in which autonomous drones are used to take out targets. His aim was to make clear that this is by no means science fiction, but possible with existing technology.

He concludes: “The AI community really should have thought about this a decade ago and helped shape policy. I proposed a simple code of ethics that says: we shouldn’t make things that can decide to kill people. It is precisely the machine’s ability to decide for itself to kill a human that allows it to act autonomously, which in turn makes it possible to scale up, and that can lead to a catastrophic event.”
