The Verifying Learning AI Systems (VeriLearn) Workshop
ECAI 2023 Workshop, Kraków, Poland

Welcome

The Verifying Learning AI Systems (VeriLearn) workshop will be held in conjunction with the 26th European Conference on Artificial Intelligence (ECAI 2023), which will take place in Kraków, Poland.

Over the past decade, AI has achieved breakthroughs in areas such as self-driving cars, generative modeling, and game playing. Moreover, AI is progressing at a very rapid rate, and major companies are deploying AI in a variety of different applications. However, as AI becomes more deeply integrated into our daily lives, it is clearly affecting us in a multitude of ways, not all of which are positive. Therefore, there are increasing concerns about what AI systems will be able to do, what they should be allowed to do, and what the implications are for society. It is clear that there need to be guardrails in place to ensure that AI is used in a safe manner. This has motivated the AI community to investigate what constitutes appropriate and "safe" AI and whether it is possible to develop such "safe AI" systems that can be trusted.

While there is no uniformly agreed-upon definition of what constitutes safe or trustworthy AI, it is clear that such systems should exhibit certain properties. For example, systems should be robust to minor perturbations of their inputs, and there should be some transparency about how a system arrives at a prediction or decision. More importantly, it is becoming increasingly common for deployed AI models to have to conform to requirements (e.g., legal) or exhibit specific properties (e.g., fairness). That is, it is necessary to verify that a model complies with these requirements. In the software engineering community, verification has long been studied with the goal of assuring that software fully satisfies the expected requirements. Therefore, a key open question in the quest for safe AI is: how can verification and machine learning be combined to provide strong guarantees about software that learns and adapts itself on the basis of past experience? Finally, what are the boundaries of what can be verified, and how can and should system design be complemented by other mechanisms (e.g., statistics on benchmarks, procedural safeguards, accountability) to produce the desired properties? The goal of this workshop is to bring together researchers interested in these questions.

Topics

This workshop solicits papers on the following non-exhaustive list of topics: