

Tracking down the security of artificial intelligence as an AI detective

Artificial intelligence (AI) can be used to solve many data-driven problems more efficiently, but it also harbours various risks. Vasilios Danos looks at how these risks can be identified efficiently and at an early stage.

Vasilios Danos

Specialist for Artificial Intelligence

"In my field of work, I find it particularly exciting to delve deep into the basics of machine learning like a kind of detective in order to uncover vulnerabilities and potential risks."
 

Contact: +49 201 8999 560

You say about yourself that you are the “killjoy in the whole AI debate.” Can you explain that? What exactly are you working on?

Essentially, we are not so much interested in how well an artificial intelligence works, but in how poorly, in other words, where the weak points of AI systems lie. In our daily work, we try to identify precisely these weak points. The aim is to "stress" the AI systems to such an extent that they produce incorrect classifications.

So I am a "killjoy" in the sense that I approach AI systems as a rigorous tester: my main concern is what can actually go wrong, and with what probability. In the best case, weak points are uncovered during development so that they can be eliminated at an early stage.
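One standard way to "stress" a classifier in this sense is to search for minimal input perturbations that flip its prediction. The sketch below uses the well-known fast gradient sign method (FGSM) purely for illustration; the toy model, input shapes and epsilon value are assumptions for the example, not the tooling actually used in this work.

```python
# Minimal sketch of an adversarial "stress test" (FGSM): nudge the input
# in the direction that most increases the loss, then check whether the
# model's prediction flips. Model and data below are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_flips_prediction(model: nn.Module, x: torch.Tensor,
                          label: torch.Tensor, epsilon: float = 0.03) -> bool:
    """Return True if an epsilon-sized perturbation changes the prediction."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Move every pixel by +/- epsilon along the sign of the gradient,
    # staying inside the valid [0, 1] image range.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
    with torch.no_grad():
        original = model(x).argmax(dim=1)
        perturbed = model(x_adv).argmax(dim=1)
    return bool((original != perturbed).any())

# Hypothetical usage with a toy linear classifier on a random "image":
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # one RGB image, values in [0, 1]
label = torch.tensor([3])      # assumed ground-truth class
print("prediction flipped:", fgsm_flips_prediction(model, x, label))
```

In practice, such tests are run over whole evaluation sets and across a range of perturbation budgets, but the principle is the same: quantify how easily the system can be pushed into a wrong answer.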

In your opinion, what is the potential of artificial intelligence?

AI ultimately has an impact on many different areas, in fact on every aspect of modern life. One reason for this is that it is developing at a rapid pace: the development steps are becoming ever shorter and the effects ever greater.

A few years ago, many people would probably not have imagined that academic professions, of all things, would be among the first to be affected by AI. Yet the technology is already having an impact on medicine and software development, to name just two areas where it was long assumed that humans were irreplaceable.

The flip side of the coin, however, is that the risks grow along with this spread and automation. The two are directly linked.

Do you have a striking example from practice that shows how problematic AI can be under certain circumstances?

AI has the peculiarity that, while it works very well and reliably on the one hand, it can also behave completely incorrectly in certain situations. A good example is a smart surveillance camera that I own myself: it is designed to differentiate between pets and people so that a cat walking through the picture does not trigger a false alarm. In practice, however, the camera often identifies a person as a cat, depending on the angle. And a person who picks up a cat, as I have tried myself, is suddenly recognised only as a pet.

Of course, there are many more examples in practice. Another very striking one is traffic signs that self-driving cars no longer recognise, or even misrecognise, when they are dirty.
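A simple way to probe exactly this failure mode is an occlusion sweep: cover small parts of the image with a neutral "dirt" patch and record where the classifier changes its mind. The sketch below is a hypothetical illustration; the stand-in model, the 43-class output (as in common traffic-sign benchmarks) and the patch parameters are assumptions, not a real perception stack.

```python
# Minimal sketch of an occlusion robustness check: slide a grey "dirt"
# patch across the image and collect the positions at which the
# classifier's prediction flips. Model and image are placeholders.
import torch
import torch.nn as nn

def occlusion_failures(model: nn.Module, x: torch.Tensor,
                       patch: int = 8, stride: int = 8) -> list:
    """Return (top, left) patch positions where the predicted class changes."""
    model.eval()
    failures = []
    with torch.no_grad():
        baseline = model(x).argmax(dim=1)
        _, _, h, w = x.shape
        for top in range(0, h - patch + 1, stride):
            for left in range(0, w - patch + 1, stride):
                occluded = x.clone()
                occluded[:, :, top:top + patch, left:left + patch] = 0.5  # grey patch
                if (model(occluded).argmax(dim=1) != baseline).any():
                    failures.append((top, left))
    return failures

# Hypothetical usage with a toy stand-in for a traffic-sign classifier:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 43))
x = torch.rand(1, 3, 32, 32)
print("prediction flips at patch positions:", occlusion_failures(model, x))
```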

ChatGPT would like to know the following from you: What international standards or best practices exist or should be developed to promote uniform security standards for AI systems?

There isn't that much at the moment, but thanks to standardisation and regulatory efforts such as the AI Act, more and more AI-specific standards are gradually being published. Of course, there won't be "one" AI standard; rather, more or less uniform standards are being developed for many domains and use cases. These set the framework for how an AI system can be operated safely and reliably, which tests it should pass and which management systems need to be set up.

The aim of all this is to make the characteristics of AI tangible so that they can be taken into account as early as possible in the development process.

What role does artificial intelligence play in your personal life? Are you perhaps more cautious than others when it comes to using it?

I'm not actually any more cautious. I already use a lot of AI in my private life and like to try out new tools and systems, mainly out of personal interest. But in the back of my mind there is always the awareness that you shouldn't blindly trust these systems, and that they ultimately don't have the same understanding of the world as humans. In other words, they may be very good in a certain area, but if the situation changes even slightly, they suddenly no longer know how to react.
