Keynote Speakers

Jan Broersen (Utrecht University)

Title: New directions and considerations for the BOID
Abstract: Twenty years ago, the BOID architecture was put forward as an agent-modelling framework for reasoning about the selection of goals on the basis of Beliefs, Obligations, (previous) Intentions and Desires. I will critically review our original proposal and suggest new ways of thinking about the BOID, also in relation to LLMs.

Bio: Jan Broersen is professor of logical methods in artificial intelligence at the Department of Philosophy and Religious Studies at Utrecht University. Before moving to theoretical philosophy, he worked in computer science (intelligent systems). His doctoral thesis was on modal logics for normative system verification. His main research interests are in logics of agency and norms and in the philosophy of AI.



Mehdi Dastani (Utrecht University)

Title: Causality and Responsibility in Multi-agent Systems
Abstract: Causality and responsibility are intertwined concepts that play an important role in the reasoning of human and artificial intelligent systems in interactive multi-agent environments. These concepts have been extensively studied in the literature, resulting in a plethora of views on and interpretations of these concepts and their relationships. In this presentation, I will introduce a particular view in which a group of individuals is held responsible for an outcome if they caused the outcome while having a strategy to prevent it. To formally instantiate this view, I will propose a systematic approach to modeling interactions in a multi-agent environment based on a given structural causal model. The generated multi-agent model is then used to analyze and reason about the causal effects of agents' strategic decisions and their responsibility.
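
As a rough, hypothetical illustration of this view (a toy example, not the formal model presented in the talk; the agent names, actions and outcome rule below are invented), the following Python sketch checks both conditions in a two-agent scenario: the group's joint action is a but-for cause of the outcome, and the group had a joint strategy guaranteeing the outcome would not occur.

```python
# Toy illustration (hypothetical, not the formal model from the talk) of the view
# that a group is responsible for an outcome if (i) its joint action caused the
# outcome and (ii) the group had a joint strategy to prevent it.
# Two agents each choose "act" or "abstain"; the outcome occurs iff both act.

from itertools import product

AGENTS = ["ann", "bob"]          # invented agent names
ACTIONS = ["act", "abstain"]

def outcome(profile):
    # Structural "equation" of the toy model: the outcome holds iff everyone acts.
    return all(choice == "act" for choice in profile.values())

def caused(group, profile):
    # But-for style check: some alternative joint choice of the group,
    # keeping the other agents fixed, would have flipped the outcome.
    others = {a: c for a, c in profile.items() if a not in group}
    return outcome(profile) and any(
        not outcome({**others, **dict(zip(group, alt))})
        for alt in product(ACTIONS, repeat=len(group)))

def could_prevent(group):
    # The group has a joint strategy that avoids the outcome
    # no matter what the remaining agents do.
    outsiders = [a for a in AGENTS if a not in group]
    return any(
        all(not outcome({**dict(zip(group, alt)), **dict(zip(outsiders, rest))})
            for rest in product(ACTIONS, repeat=len(outsiders)))
        for alt in product(ACTIONS, repeat=len(group)))

def responsible(group, profile):
    return caused(group, profile) and could_prevent(group)

actual = {"ann": "act", "bob": "act"}          # the actual action profile
print(responsible(["ann"], actual))            # True
print(responsible(["ann", "bob"], actual))     # True
```

In this toy case each agent, as well as the group as a whole, both caused the outcome and could have prevented it, so all are held responsible under the stated view.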

Bio: Mehdi Dastani is a computer scientist and chair of the Intelligent Systems group in the Department of Information and Computing Sciences at Utrecht University. His research focuses on formal and computational models in artificial intelligence. Inspired by knowledge and insights from other scientific disciplines such as philosophy, psychology, economics and law, Dastani investigates and develops computer models for autonomous systems whose behavior is determined by reasoning about social and cognitive concepts such as knowledge, desires, norms, responsibility, causality and emotions.



Jun Pang (University of Luxembourg)

Title: Structural Inference of Dynamical Systems: Recent Development and Future Directions
Abstract: Dynamical systems are pervasive and critical in understanding phenomena across various domains, from the majestic dance of celestial bodies governed by gravity to the subtle ballet of chemical reactions. In the quest to unravel the complexities of dynamical systems, the initial imperative is to unveil their inherent structure, a key determinant of system organization. Achieving this necessitates the deployment of structural inference methods capable of deriving this structure from observed system behaviors. In this talk, I will give an overview of recent developments in deep-learning-based methods for structural inference of dynamical systems, in particular methods based on variational auto-encoders (VAEs). Through a comprehensive benchmarking study, I will present some key findings and discuss future research directions.

Bio: Prof. Jun Pang works at the Department of Computer Science of the University of Luxembourg and is a researcher at the university's Institute for Advanced Studies. He has long worked on formal methods, information security, complex networks, and interdisciplinary research. Currently, he focuses on Trustworthy AI (especially security and privacy) and AI-driven scientific research. He has served as programme committee chair and member for several international academic conferences, and on the editorial boards of several leading academic journals. According to Google Scholar, his papers have been cited more than 4000 times (h-index: 34).



Marija Slavkovik (University of Bergen)

Title: Human norms, machine norms and AI value alignment
Abstract: The talk will consider the problem of AI value alignment from the perspective of how to automate moral reasoning and decision making for autonomous systems. Normative reasoning has been a sub-discipline of multi-agent systems research for a few decades. How does that fit in the age of LLMs? We will situate this normative reasoning work within the larger problem of AI alignment and machine ethics by discussing how value alignment sees norms, why value alignment needs norms, and, overall, what the role of logic is in the world of deep data processing.

Bio: Marija Slavkovik is a Professor at the Faculty of Social Sciences of the University of Bergen. Her background is in computer science and artificial intelligence. She has been doing research in machine ethics since 2012. Machine ethics studies how moral reasoning can or should be automated. Marija works on formalising ethical collective decision-making. She has held several seminars, tutorials and graduate courses on AI ethics (http://slavkovik.com/teaching.html). Marija is a vice-chair of the Norwegian AI Association, a board member of the European Association for Artificial Intelligence, a member of the informal advisory group on Ethics, Legal, Social Issues (ELS) of CLAIRE, a member of the editorial board of AI Magazine, and AI and Society track editor of JAIR. She is the current chair of the Department of Information Science and Media Studies, which since 2021 has been offering, in collaboration with the Department of Informatics, the first bachelor program in AI in Norway.



Leendert van der Torre (University of Luxembourg)

Title: Weakest Link, Prioritised Default Logic and Principles in Argumentation
Abstract: In this article, we study procedural and declarative logics for defaults in modular orders. Brewka's prioritised default logic (PDL) and structured argumentation based on weakest link are compared to each other in different variants. This comparison takes place within the framework of attack relation assignments and the axioms (principles) recently proposed for them by Dung. To this end, we study which principles are satisfied by weakest link and disjoint weakest link attacks. With the aim of approximating PDL using argumentation, we identify an attack relation defined from PDL extensions, prove that each such PDL extension is a stable belief set under it, and offer a similar principle-based analysis. We also prove an impossibility theorem for Dung's axioms that covers PDL-inspired attack relation assignments. Finally, a novel variant of PDL with concurrent selection of defaults is proposed and compared to these argumentative approaches. In sum, our contributions fill an important gap in the literature created by Dung's recent methods and open up new research questions on these methods.

Bio: Leendert van der Torre is a professor of computer science at the University of Luxembourg, head of the Individual and Collective Reasoning (ICR) research group, and director of the Joint Laboratory for Higher Intelligent Systems and Reasoning of the University of Luxembourg and Zhejiang University. He is a leading researcher in deontic logic and normative multi-agent systems. He is a Fellow of the European Association for Artificial Intelligence (EurAI), an associate editor of the Journal of Logic and Computation (Oxford University Press), a member of the editorial boards of the Logic Journal of the IGPL and the Journal of Applied Logic, and an editor of the Handbook of Deontic Logic and Normative Systems and the Handbook of Normative Multiagent Systems. His research interests include deontic logic, normative multi-agent systems, and formal argumentation. According to Google Scholar, his papers have an h-index of 57 and have been cited more than 14,500 times.



Yisong Wang (Guizhou University)

Title: Witnesses for Answer Sets of Logic Programs
Abstract: Answer Set Programming (ASP) is a declarative problem solving paradigm that can be used to encode a problem as a logic program whose answer sets correspond to the solutions of the problem. It has been widely applied in various domains in AI and beyond. Given that answer sets are supposed to yield solutions to the original problem, the question of "why a set of atoms is an answer set" becomes important for both semantics understanding and program debugging. It has been well investigated for normal logic programs. However, for the class of disjunctive logic programs, which substantially extends that of normal logic programs, this question has not been addressed much. In this talk, we propose a notion of reduct for disjunctive logic programs and show how it can provide answers to the aforementioned question. First, we show that for each answer set, its reduct provides a resolution proof for each atom in it. We then further consider minimal sets of rules that are sufficient to provide resolution proofs for sets of atoms. Such sets of rules are called witnesses and are the focus of this work. We study complexity issues of computing various witnesses and provide algorithms for computing them. In particular, we show that the problem is tractable for normal and head-cycle-free disjunctive logic programs, but intractable for general disjunctive logic programs. We also conducted experiments and found that for many well-known ASP and SAT benchmarks, computing a minimal witness for an atom of an answer set is often feasible. These results have been published in ACM Transactions on Computational Logic, volume 24(2), 2023.
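
As a rough illustration of the question the talk addresses (a toy sketch, not the authors' implementation, and limited to normal programs rather than the disjunctive case), the Python snippet below shows the classical way of answering "why is a set of atoms an answer set": build the Gelfond-Lifschitz reduct and check that the candidate set is the least model of the reduct.

```python
# Toy sketch (not the authors' implementation, restricted to *normal* programs)
# of "why is a set of atoms an answer set?": compute the Gelfond-Lifschitz
# reduct and check that the candidate set is the least model of the reduct.
# A rule is a triple (head, positive_body, negative_body).

def reduct(program, candidate):
    """Drop rules whose negative body intersects the candidate set,
    then delete the negative bodies of the remaining rules."""
    return [(h, pos) for (h, pos, neg) in program if not set(neg) & candidate]

def least_model(positive_program):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_program:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(program, candidate):
    return least_model(reduct(program, candidate)) == candidate

# Example program:   a :- not b.    b :- not a.    c :- a.
program = [("a", [], ["b"]), ("b", [], ["a"]), ("c", ["a"], [])]

print(is_answer_set(program, {"a", "c"}))   # True
print(is_answer_set(program, {"b", "c"}))   # False: c is not derivable
```

In this example the two reduct rules used to derive {a, c} (a :- not b and c :- a) suffice to prove both atoms; informally, such a sufficient set of rules plays the role that the witnesses of the talk play for (disjunctive) programs.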

Bio: Dr. Yisong Wang is a Professor at Guizhou University. He obtained his PhD degree from Guizhou University in 2007 and completed postdoctoral research at The Hong Kong University of Science and Technology in 2008 and at the University of Alberta, Canada, in 2010. He serves as an Associate Editor of Annals of Mathematics and Artificial Intelligence, and he received the KR-2006 Ray Reiter Best Paper Award. He has been principal investigator of four Chinese national fund projects and has contributed over 50 academic papers to international conferences such as AAAI, IJCAI and KR, and to journals including JAIR, TCS, TOCL and IEEE TFS.



Alessandra Palmigiano (Vrije Universiteit Amsterdam)

Title: Categories & categorization
Abstract: Categories are cognitive tools that humans use to organize their experience, understand and function in the world, and understand and interact with each other, by grouping things together which can be meaningfully compared and evaluated. They are key to the use of language, the construction of knowledge and identity, and the formation of agents' evaluations and decisions. Categorization is the basic operation humans perform e.g. when they relate experiences/actions/objects in the present to ones in the past, thereby recognizing them as instances of the same type. This is what we do when we try to understand what an object is or does, or what a situation means, and when we make judgments or decisions based on experience. The literature on categorization is expanding rapidly in fields ranging from cognitive linguistics to social and management science to AI, and the emerging insights common to these disciplines concern the dynamic essence of categories, and the tight interconnection between the dynamics of categories and processes of social interaction. However, these key aspects are precisely those that both of the extant foundational views on categorization struggle the most to address. In this talk, I will discuss by way of examples how methods, insights, and techniques pertaining to structural proof theory, algebraic logic, duality theory, and category theory in mathematics can be used in synergy with one another to develop an overarching logical theory of categories and categorization, on which a new generation of explainable AI can be based, as well as a principled approach to human-machine interaction.

Bio: Alessandra Palmigiano holds the Chair of Logic and Management Theory at the School of Business and Economics of the VU Amsterdam. In the past ten years (also thanks to a number of grants), her research has focused on categories (understood in the Aristotelian sense) as the most fundamental cognitive tools for both humans and machines. With her group, she is engaged in a research program aimed at building a foundational logical theory of social interaction and its dynamics on the basis of categories and categorization. This research program has direct relevance for a wide range of disciplines, including AI, (computational) linguistics, the cognitive and social sciences, and management science. Working in collaboration with researchers in these disciplines, she is especially interested in using formal tools for representing categorical dynamics to track changes in meaning, to analyse processes of decision-making via persuasion and deliberation, and to study other aspects of agency and cognition.