Keynote Speakers

Amro Najjar (University of Luxembourg)

Title: Interactive Embodied Explainable AI
Abstract: In the last five years, research on eXplainable AI (XAI) has received growing attention. Initially aiming to investigate the intriguing results of black-box machine learning mechanisms such as deep neural networks, XAI research has since expanded to several other domains, such as explainable planning, explainable recommender systems, and explainable agents. The goal of many of these works is to promote human-centered explanations. Compared to static explanations, engaging the user in interactive (back-and-forth) communication is crucial, as it allows the user to dive into the concept and build a more solid, well-grounded knowledge and awareness of the subject being discussed with the AI system. Moreover, it allows verifying and fixing misunderstandings and elaborating on follow-up questions. Coupling this interactivity with embodied XAI has also proven to be a powerful combination since, in such conversations, embodied agents (e.g., social robots) can improve their communication with the user by relying on facial expressions, emotions, and other body cues. This boosts their understandability and increases their acceptability from the user's standpoint.

Bio: Dr. Amro Najjar is a postdoctoral research associate at the AI Robolab and the ICR research group. His main research interests are social robots, multi-agent systems, agreement technologies, and explainable artificial intelligence. In 2015, Dr. Najjar received a PhD in AI and multi-agent systems from the École des Mines de Saint-Étienne in France. Since then, he has participated in several research projects involving partners from academia and industry. As of June 2020, he has more than 50 peer-reviewed publications in high-ranking conferences, journals, and other venues.



Mehdi Dastani (Utrecht University)

Title: Controllable Artificial Intelligence
Abstract: The increasing presence of intelligent autonomous systems and their interactions in open environments urgently requires control mechanisms that prevent bad outcomes and ensure desirable properties without limiting the autonomy of the systems. In artificial intelligence, social laws, norms, and sanctions have been proposed as flexible means of controlling the behavior of autonomous systems. In this lecture, I will present my research on various norm-based control mechanisms designed to monitor the behavior of autonomous systems and intervene when bad outcomes are expected.

Bio: Computer scientist Mehdi Dastani is Professor and chair of the Intelligent Systems group in the Department of Information and Computing Sciences at Utrecht University and program leader of the master's program in Artificial Intelligence. His research focuses on formal and computational models in artificial intelligence. Inspired by knowledge and insights from other scientific disciplines such as philosophy, psychology, economics, and law, Dastani investigates and develops computational models of autonomous agents whose behavior is decided by reasoning about social and cognitive concepts such as knowledge, desires, norms, responsibility, and emotions.



Dong An (Zhejiang University)

Title: Artificial Reactive Attitudes
Abstract: The reactive attitudes approach to moral responsibility has been a prominent way of making sense of how we are morally responsible for our actions. Reactive attitudes are the attitudes we hold towards each other in everyday interactions: for example, we may feel gratitude for other people's kindness and anger at other people's malicious intentions. Accordingly, to hold people morally responsible is to accept and express these attitudes in moral practice. In this presentation, I explore the implications of applying the reactive attitude theory of moral responsibility to non-human agents, i.e., artificial intelligence. In the current literature on artificial intelligence and moral responsibility, scholars mainly approach the issue from the perspectives of consciousness and intentionality (Chalmers 1996, Searle 2008, Johnston 2006, 2015, Sparrow 2007) or various functional substitutes for these faculties (Floridi & Sanders 2006, Taylor 1996, Himma 2008, Coeckelbergh 2010). I hope to contribute to the discussion and add to these approaches by examining the possibility and feasibility of holding artificial intelligence morally responsible on the grounds that we can legitimately hold reactive attitudes towards it.

Bio: An Dong is a Distinguished Associate Researcher in the Department of Philosophy at Zhejiang University and holds a PhD from Texas A&M University. Her main research areas are the philosophy of emotion, moral psychology, and normative ethics.



Leon van der Torre (University of Luxembourg)

Title: Advanced Intelligent Systems and Reasoning: Standardization, Experimentation, Explanation
Abstract: Our future, as much as it is a projection of the present, is also a reflection of the narratives we create, especially those crafted in the realm of science fiction. This genre, an intriguing amalgamation of philosophy and speculative thinking, serves as a canvas for portraying potential advancements in technology. A paramount example of such advancements is the development of advanced intelligent reasoners: artificial intelligence (AI) systems that encapsulate philosophical concepts such as rationalism and empiricism. In this talk, I will present AI as a research area and explore the historical relationship between AI and logic. As an example, I will present the Jiminy Advisor, an ethical recommendation component that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. I will also discuss the development of a platform for experimental user studies at the Zhejiang University – University of Luxembourg Joint Lab on Advanced Intelligent Systems and REasoning (ZLAIRE).

Bio: Leendert (Leon) van der Torre is a professor of computer science at the University of Luxembourg and head of the Individual and Collective Reasoning (ICR) group, part of the Computer Science and Communication (CSC) Research Unit. He is a prolific researcher in deontic logic and multi-agent systems, a member of the Ethics Advisory Committee of the University of Luxembourg, and founder of the CSC Robotics research laboratory. Since March 2016, he has been head of the CSC Research Unit.