Ramin Hasani: Our goal is to design learning systems that understand their own internal processes when they make a decision. In other words, we design algorithms that turn a black-box AI model into a more functionally transparent one. Such AI systems are highly desirable in safety-critical tasks, for example, self-driving cars.
AI is used today in several digital products. What is the best use case so far, in your opinion?
AI has influenced many real-life applications. Examples include machine translation, medical diagnosis, discovering new uses for existing drugs, personal assistants, object detection, self-driving cars and many more. In general, AI has revolutionized the way we extract patterns from data.
In the future, which are the areas where AI will be most important?
Education is the first area I can think of. AI systems will help people quickly access relevant learning material and encourage continual learning. AI will also significantly impact medicine and healthcare. Finally, AI will continue to enhance automation across all industries and services.
Recently the AMS announced that algorithms will rate the potential of jobless people. Will that work?
Yes, that is possible, assuming one has access to proper and fair data in that specific domain.
A lot of companies claim to work with AI. What is real AI, and what should not be called AI?
In standard computer-science terms, any machine that can perceive its environment and take actions to maximize the chance of achieving a particular goal is called an "intelligent agent". Such models can be expert systems, which mimic the human decision-making process in solving complex problems by using a set of hand-coded rules. They can also be learning systems: agents that progressively improve their performance on a given task without being explicitly programmed. Any system designed according to these definitions is considered an AI system.
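As a rough illustration of the second kind of system described above, here is a minimal sketch of a learning agent; all names and the toy two-action environment are hypothetical, not anything from the interview. The agent repeatedly acts, receives a reward, and updates its estimates, so its behavior improves without any task-specific hand-coded rules:

```python
import random

class LearningAgent:
    """Toy learning system: improves its action choice from reward
    feedback alone, without being explicitly programmed for the task."""

    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # estimated reward per action
        self.counts = {a: 0 for a in actions}    # how often each was tried

    def act(self, epsilon=0.1):
        # Mostly exploit the best-looking action; occasionally explore.
        if random.random() < epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Incremental running average of the reward for this action.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Hypothetical environment: action "b" pays more than action "a".
random.seed(0)
agent = LearningAgent(["a", "b"])
for _ in range(500):
    a = agent.act()
    agent.learn(a, 1.0 if a == "b" else 0.2)
```

After training, the agent's estimated value for "b" exceeds that for "a", so it picks "b" when exploiting. An expert system, by contrast, would encode the rule "choose b" directly by hand.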
Europe seems to lag behind the US and China in terms of AI development. Should European developers and companies focus on a particular area of AI?
This is not precisely true. In machine learning, many fundamental contributions have originated from European institutions.
As for what EU researchers should focus on: to obtain fair AI algorithms with a high capacity for learning, we should rigorously work to understand our current learning systems better, instead of developing new fancy algorithms.
For companies, too, it is imperative to understand when using AI systems in their infrastructure is actually beneficial, and not to follow a hype just because it sounds cooler to use AI. For this reason, I would encourage EU companies to collaborate closely and consult with AI researchers (as is already a ubiquitous trend in the US) to establish a solid understanding of the benefits AI can bring. There is no doubt that AI will soon become an inseparable part of automated systems and data-processing engines.
Which role can Austria play in the future development of AI?
The first step would be national interdisciplinary collaborations. In Austria, we have many pioneering research groups in computer science, neuroscience, mathematics and physics that could come together to discuss fundamental AI questions. Such environments can presumably yield reciprocal benefits with industrial partners, too. The second step is international collaboration among governments, with the aim of designing new structures for education, especially lifelong learning; enhancing people's digital skills; developing fair AI; and promoting AI for good.
There are a lot of ethical questions around AI. What should AI never be allowed to do?
Well, I think the initial step in answering questions of this form lies in the concept of explainable AI. As long as we understand how an AI system gives rise to its behavior, especially in safety-critical tasks, we are good to go!
Do you think that the algorithms that work with AI should be made transparent to the users?
It is hard to make AI algorithms transparent at the lowest level of implementation. However, at a conceptual level, taking into account the security of the AI system and users' privacy, yes, they should be made transparent to the users.