Will AI run the world? Why should technology be ethical? Should we be afraid of machines that learn? Rumman Chowdhury is an Artificial Intelligence expert at Accenture and deals with such questions on a daily basis. At the 3rd Blockchain Summit on 26 November, organised by the Austrian Wirtschaftskammer in cooperation with the Wirtschaftsuniversität Wien, she shared her insights with Trending Topics.
At the Summit in Vienna, which continues today with blockchain workshops, Chowdhury gave a keynote on the topic of "convergence". As a senior principal at Accenture, she consults businesses on artificial intelligence solutions and leads the company's strategic growth initiative on responsible artificial intelligence.
Trending Topics: Why does AI need to be ethical?
Rumman Chowdhury: Artificial Intelligence has a very personalized impact on our lives. If it is not built ethically, it reflects the unequal nature of our existing societies. When an AI is built, it is built with the biases that exist today. So we need to make sure that the way we build it is ethical. It's very different from past technologies.
What is the danger in that?
Instead of giving it straightforward instructions, you give AI guidelines or even just information. It draws its own conclusions. Because of that, it might learn bad things. We need to be able to control what these negative outcomes might be. We often call them unintended consequences, meaning we didn't mean for them to happen, but bad things still happen.
That's why we talk a lot about review boards in the AI world. Sometimes you don't even know: the AI is working in the background, and you don't know if you were denied something or didn't see something. In the cases where you do know, we often don't have ways to fix it. So we need to make sure that we have those safety checks in place.
„We’ll need AI to help us analyze AI.“
Are programmers the best people to set those guidelines?
No. That is why we need ethics review boards. I think that companies need people who are trained or knowledgeable in ethical behavior. You need lawyers who understand legal compliance. For example, a lot of companies have signed on to the UN's Universal Declaration of Human Rights. There is potential for the UN Declaration to impact your company's AI.
When you think about all of this, you need a diverse group of people. The other thing is, you need guidelines set up on top, so that you have consistency. You don't want one set of projects to have standards and another to have none. So it should not be a programmer deciding every time. It should be standardized.
Is there a danger that humankind will no longer be capable of overseeing AI technology?
We need to build intelligent systems that help us oversee it. Why do we have AI? Because we can't analyze that amount of data and information ourselves. We'll need AI to help us analyze AI.
What kind of corporations benefit from ethical guidelines in AI?
Right now, we see a lot of financial companies and health care companies, which makes sense because they impact our lives very directly. More and more, we see citizen protection and consumer protection. That needs to happen. Consumer groups are pushing for AI guidelines. We need to think about increasing political polarisation and how people have been nudged by AI-driven recommendation systems that keep showing you the news media outlets you are used to.
You don't get diverse perspectives. So that may lead to you being more closed-minded as a person. There is the primary, direct harm of being denied a job because you are a woman. But there is also a secondary harm when you only get exposed to one type of information and you don't understand the world around you.
„Sometimes we don’t want to admit that it is what society is.“
Is AI a better ethical actor than a human being?
I think it depends on the situation. AI is painfully objective. It does not know good or bad. So it is not more ethical, but also not less ethical. It is exactly what society is. Sometimes we don't want to admit that it is what society is. Sometimes it is a wake-up moment, and it is a difficult thing to see.
Do you have an example?
AI in hiring. What happened at Amazon, for example. (Editor's note: Amazon scrapped a secret AI recruiting tool that showed bias against women.) That example was positive because they spotted the problem before it harmed anybody. Investing years in a project and then not following through with it is actually a bold move to make. Most companies wouldn't have done that. But we want to believe that we have a merit-based system of hiring and promotion, and clearly we do not. What the AI will do is help us find evidence of our flaws. It is upon us to decide what we want to do with that information.
„The fear that AIs take over is, in some sense, justified.“
What do you tell people who fear AI?
Being sceptical is actually the right way to be. We live in the era of narrow AI. It is actually a very blunt instrument. It is very stupid. It does one thing very, very well. The fear that AIs take over is, in some sense, justified. Not in the „Terminator“ sense, but some companies hold so much power. And they will continue to hold that power. And AI will mold the world around us and will exert control. How do we give people power over their data and information, and over whether it's being used right? What are we consenting to? Those are very important questions to ask.