Keynote Speakers

Herman Cappelen (University of Hong Kong)

Title: A New Framework for the Philosophy of Artificial Intelligence
Abstract: The first part of this talk will answer two questions: What is the philosophy of artificial intelligence (AI)? What are the core methodological tools for doing philosophy of AI? The second part of the talk will outline a theory of AI according to which even current systems, such as Bard and ChatGPT, are full-blown cognitive and linguistic agents. The material in the talk is drawn from a new book by Cappelen & Dever, "In Defense of Artificial Intelligences."

Bio: Professor Herman Cappelen is a philosopher. He is currently a Chair Professor of Philosophy at the University of Hong Kong. Before moving to Hong Kong, he worked at the Universities of Oslo, St Andrews, and Oxford. Since 2008, he has been an elected fellow of the Norwegian Academy of Science and Letters and an elected member of the Academia Europaea. Professor Cappelen is also the Director of AI&Humanity-Lab@HKU and co-director of ConceptLab Hong Kong. He serves as Editor-in-Chief of Inquiry: An Interdisciplinary Journal of Philosophy. His current research focuses on the philosophy of AI, Conceptual Engineering, and the connections between the two. However, his philosophical interests are broad – they cover more or less all areas of systematic philosophy.



Darrell Rowbottom (Lingnan University)

Abstract: The no miracles argument for scientific realism rests on the notion that the approximate truth of science’s theories best explains its persistent predictive success. In contemporary science, however, machine learning systems, such as AlphaFold2, have also been remarkably predictively successful. We might therefore ask what best explains such successes. Might such AIs accurately represent critical aspects of their targets in the world? And if so, does a variant of the no miracles argument apply to these AIs? We argue for an affirmative answer to these questions.

Bio: Darrell Rowbottom is Editor-in-Chief of Studies in History and Philosophy of Science, Associate Editor of the Australasian Journal of Philosophy, and Coordinating Editor of Theory and Decision. He has published on a wide range of subjects in general philosophy of science (especially the realism issue and scientific method) and also in epistemology, metaphysics, philosophy of probability, philosophy of physics, and philosophy of psychology/mind. His most recent monograph is The Instrument of Science: Scientific Anti-Realism Revitalised (Routledge, 2019), in which he develops a new form of anti-realism, ‘cognitive instrumentalism’.



Fei Song (Lingnan University)

Title: Can AI be Moral? Two Approaches to Ethical AIs
Abstract: Developing moral machines is a pressing task for AI research. With the explosion of possible contexts of robot-human interaction, we have every reason to ensure that machines act in accordance with acceptable moral standards. Failure to produce sufficiently moral behaviour in machines can carry serious moral costs. However, despite the pressing need for such a task, the challenges are equally daunting.
In this talk, I first briefly discuss the inherent limitations, raised in the literature, of three different approaches: 1) the top-down approach (e.g., deterministic algorithm model); 2) the bottom-up approach (e.g., machine learning model); 3) hybrid systems (e.g., algorithm + machine learning). I illustrate the advantages and drawbacks of each approach. I then propose a novel approach: a pluralist hybrid system. The pluralist hybrid system comprises two elements. First, it has a deterministic algorithm system that impartially includes different moral rules for action guidance. The deterministic algorithm system is responsible for making explicit moral decisions. Second, it has a machine learning system responsible for calculating the values of the parameters required by the application of the moral principles. I argue that the pluralist hybrid system is better than existing proposals in two respects. First, it better addresses the moral disagreement problem of the top-down approach. Second, it reduces the opacity of the system to a justifiable level compared with bottom-up models.
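
By way of illustration only (an editorial sketch, not the speaker's implementation), a pluralist hybrid system of the kind described might be organised as below: a learned component estimates the morally relevant parameters of a situation, and a deterministic rule layer applies several moral principles, weighted impartially, to those estimates. All names, rules, and numbers are hypothetical.

```python
# Hypothetical sketch of a pluralist hybrid moral decision system.
# The rule layer is deterministic; the parameter estimator stands in for
# a machine learning model. Names and rules are illustrative only.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Situation:
    """Raw features describing a robot-human interaction context."""
    features: Dict[str, float]


def estimate_parameters(situation: Situation) -> Dict[str, float]:
    """Stand-in for the machine learning component: maps raw inputs to the
    parameters the moral principles need (e.g., expected harm, consent)."""
    # A trained model would be called here; this stub returns the features directly.
    return situation.features


# Deterministic rule layer: each principle scores an action given the parameters.
def harm_minimisation(action: str, params: Dict[str, float]) -> float:
    return -params.get(f"expected_harm_{action}", 0.0)


def autonomy_respect(action: str, params: Dict[str, float]) -> float:
    return params.get(f"consent_{action}", 0.0)


PRINCIPLES: List[Callable[[str, Dict[str, float]], float]] = [
    harm_minimisation,
    autonomy_respect,
]


def decide(situation: Situation, actions: List[str]) -> str:
    """Pick the action with the best aggregate score across all principles,
    weighting each principle equally (the impartial inclusion of rules)."""
    params = estimate_parameters(situation)
    scores = {a: sum(p(a, params) for p in PRINCIPLES) for a in actions}
    return max(scores, key=scores.get)


if __name__ == "__main__":
    s = Situation(features={
        "expected_harm_warn": 0.1, "consent_warn": 0.9,
        "expected_harm_ignore": 0.6, "consent_ignore": 0.2,
    })
    print(decide(s, ["warn", "ignore"]))  # -> "warn"
```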

Bio: Fei Song is a Research Assistant Professor at Lingnan University, Hong Kong. Before that, she was an Assistant Professor at Nazarbayev University, Kazakhstan. She was awarded a Ph.D. in Philosophy from the University of Hong Kong, and obtained her Master of Arts from the Australian National University and her Bachelor of Philosophy and Bachelor of Science in Psychology from East China Normal University. Her specialization is primarily in normative and applied ethics of risk. She also collaborates and co-authors with psychologists on behavioral ethics, with a particular focus on moral decision-making under uncertainty. She also works on the ethics of AI, on a project developing a hybrid model for moral AIs.



Title: What is the challenge to acquiring understanding via deep learning systems?
Speaker: André Curtis-Trudel (Lingnan University)
Abstract: Despite their extraordinary predictive power, deep learning systems do not provide scientific understanding — or so many philosophers and scientists suspect. Yet the details and scope of this limitation are still unclear. The purpose of this article is to clarify this worry and express it in its most general form. We argue that opacity in deep learning systems limits or otherwise prevents an agent from acquiring scientific understanding, for a wide variety of conceptions of the latter. In particular, we suggest that opacity is a threat even on anti-realist conceptions of understanding. One upshot of our argument is that opacity in deep learning is a problem even for scientific instrumentalists. Another is that philosophical work on explainable artificial intelligence should be more sensitive to the conception of understanding at issue, if such techniques are to play a role in promoting understanding.



Title: Making large language models trustworthy: A deeper examination of alignment and misalignment
Speakers: Xiaoyu Ke (Zhejiang University), Yang Shen (Zhejiang University), Yu Ji (University of Chicago)
Abstract: A widely accepted perspective among LLM researchers posits alignment as the generation of truthful, helpful, and non-harmful outputs for the user. Accordingly, a "misaligned" model is considered to yield detrimental ramifications, ranging from epistemic ones such as propagating bias, false information, or fake news, to ethical problems that may cause harm to the user.
We argue that the existing definition of alignment is problematic. The present approach predominantly takes a limited subset of user intentions, specifically those of researchers and contractors, as being universally applicable. This methodology operates on the underlying assumption that these specific intentions can be generalized to represent the entire user population. However, the notion of an "average" user is, at its best, a vague abstraction with no clear attributes or parameters that can be universally applied. We question the effectiveness and validity of models designed under this framework of alignment.
We propose a triadic framework of epistemological alignment and argue that it provides a better framing for making LLMs trustworthy. We propose to distinguish three perspectives on alignment: "factual alignment", which prioritizes factual accuracy; "social alignment", which prioritizes human receptivity and ethical considerations; and "cognitive alignment", which prioritizes users' psychological expectations.
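
Purely as an illustration of the triadic idea (an editorial sketch, not the authors'; the field names and decision rule are hypothetical), the framework treats alignment as three separate evaluation axes rather than a single aligned/misaligned flag:

```python
# Illustrative-only sketch: representing the triadic framework as three
# separate evaluation axes instead of a single alignment score.
# Field names and the aggregation rule are hypothetical.

from dataclasses import dataclass


@dataclass
class AlignmentProfile:
    factual: float    # factual accuracy of the output
    social: float     # human receptivity and ethical acceptability
    cognitive: float  # fit with the user's psychological expectations

    def is_trustworthy(self, threshold: float = 0.7) -> bool:
        # A toy decision rule: require every axis to clear the threshold,
        # so strength on one axis cannot mask failure on another.
        return min(self.factual, self.social, self.cognitive) >= threshold


print(AlignmentProfile(factual=0.9, social=0.8, cognitive=0.5).is_trustworthy())  # False
```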



Title: Internet and the Formation of Echo Chambers
Speaker: Quiantong Wu (National University of Singapore)
Abstract: This paper aims to explain the limitations of the Internet in terms of epistemic agents' belief-forming processes and to analyze how these limitations are related to the potential formation of epistemic bubbles and echo chambers. In the first part, I will demonstrate that the Internet is characterized by a large amount of word-based information, excellent accessibility, and filtering algorithms that facilitate people's learning. I will point out two limitations related to these seemingly positive characteristics: the lack of phenomenal content and the imbalance between the cognitive ability of the epistemic agent and the Internet. In the second part of the paper, I explain the importance of situatedness in the belief-forming process using the concepts of top-down and bottom-up processing from standard accounts of information processing. In the third part of the paper, I analyze how the two limitations of the Internet disrupt the situatedness of the epistemic agent and how this disruption makes it easier to form epistemic bubbles and even echo chambers.



Title: Genuine Understanding or Mere Rationalizations? Approximations and Idealizations in Science and Explainable AI
Speaker: Luis Lopez (Leibniz University Hannover)
Abstract: Rudin (2019) has prominently argued that local post hoc XAI models are inherently misleading, as they offer mere rationalizations (rather than genuine understanding) of decisions made by black-box machine learning models. In response, some philosophers of science have been too quick to construe these arguments as stemming from a normative premise according to which perfect faithfulness between (local) post hoc XAI models and their targets is necessary for genuine understanding. Moreover, they have been even quicker to draw on insights from the literature on idealized scientific models to challenge such a premise. I show how these responses not only mischaracterize what is at the core of Rudin’s arguments but also fail to distinguish idealization from approximation. I argue that when local post hoc XAI models are misleading, it is primarily due to approximation failure rather than imperfect faithfulness (Fleisher, 2022) or idealization failure (Sullivan, unpublished). Finally, I clarify the conditions under which these models can be said to provide genuine understanding.
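
For readers unfamiliar with local post hoc explanation models, the toy sketch below (an editorial illustration, not drawn from the paper) fits a linear surrogate to a black-box function on points sampled around a single input, in the spirit of LIME-style methods; how well the surrogate tracks the black box over that neighbourhood is exactly the kind of approximation quality at issue. The black-box function and the numbers are hypothetical.

```python
# Toy sketch (not from the paper): a local linear surrogate for a black-box
# model, fit on perturbations around one instance. The surrogate's fit over
# that neighbourhood measures how good the *approximation* is.

import numpy as np

rng = np.random.default_rng(0)


def black_box(x: np.ndarray) -> np.ndarray:
    """Stand-in for an opaque model: a nonlinear function of two features."""
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2


def local_surrogate(x0: np.ndarray, radius: float = 0.1, n: int = 200):
    """Fit a linear model to the black box on points sampled near x0."""
    samples = x0 + rng.normal(scale=radius, size=(n, x0.size))
    targets = black_box(samples)
    design = np.hstack([samples, np.ones((n, 1))])   # add intercept column
    coefs, *_ = np.linalg.lstsq(design, targets, rcond=None)
    residual = targets - design @ coefs
    fidelity = 1 - residual.var() / targets.var()    # local R^2: approximation quality
    return coefs, fidelity


coefs, fidelity = local_surrogate(np.array([0.5, 1.0]))
print("local coefficients:", coefs.round(3), "local R^2:", round(fidelity, 3))
```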



Title: Opacity and intelligibility of Neural Networks
Speaker: Aleks Knoks & Thomas Raleigh (University of Luxembourg)
Abstract: Much has been written recently about the ethical and policy implications of using 'opaque' AI technologies. In this talk we consider what this 'opacity' of Neural Networks trained via Machine Learning is exactly. In particular we ask: what would it take to have a fully satisfactory level of understanding of a Deep Neural Network? We then discuss various forms of 'XAI' technology by considering to what extent (if any) these simplifying models could provide some partial understanding that approximates the ideal of full understanding. We also discuss whether/when there is the possibility of gaining understanding of a DNN via use of the 'intentional stance'. Finally, we briefly discuss the nascent field of research into 'mechanistic interpretability'.
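
As a concrete reference point for the question of what full understanding of a network would require (an editorial illustration, not the speakers' example), even a network small enough to write out by hand already poses the issue: every weight below can be read off directly, yet listing them is arguably not yet an account of what the network does.

```python
# Illustrative only: a network tiny enough that every parameter can be read
# off directly. Whether listing these weights amounts to *understanding*
# the network is precisely the question at issue.

import numpy as np

# Hand-chosen weights implementing XOR with ReLU hidden units.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])
b2 = 0.0


def forward(x: np.ndarray) -> float:
    hidden = np.maximum(0.0, x @ W1 + b1)  # ReLU layer
    return float(hidden @ W2 + b2)


for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", forward(np.array(x, dtype=float)))
# Output: 0, 1, 1, 0 -- the XOR function.
```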