Keynote Speakers

AIPS is proud to announce the following lineup of keynote speakers, sorted alphabetically by surname.

Mario Alai

Is Science Epistemically Responsible? The Prospects of the Debate

Abstract: According to instrumentalists and constructive empiricists, science should be technologically fruitful, but it need not also be epistemically responsible. That is, it cannot prove that its theories are true (as opposed to merely empirically adequate), hence it cannot be required to provide knowledge, i.e., true and justified descriptions of the unobservable levels of reality. Scientific realists, instead, hold that science can achieve both truth and justification, hence that it has the paramount responsibility of delivering theoretical knowledge. To make their point, realists must propose exceptionless criteria of truth, but none seems to work across the board. They have also tried different "recipes", i.e., versions of realism, such as entity realism, structural realism, deployment realism, or semi-realism. Yet, up to now none of these has been completely immune to the antirealist's objections, especially those concerning the empirical underdetermination of theories and the pessimistic meta-induction from the history of scientific failures. Realists have reacted by refining their initial criteria and proposing new "recipes", but counterexamples have been brought against each of them. While nothing forbids realists from trying still newer versions and refinements, certain critics contend that the debate itself is unending and inconclusive. Thus, they adopt deflationist approaches, like Fine's "natural ontological attitude", Faye's radical naturalism, Stanford's moderate particularism, Saatsi's "exemplar realism", stance voluntarism, etc. On the contrary, I point out that these deflationist positions are unsatisfactory, while the realism-antirealism debate has displayed some progress, achieving substantial clarification in certain areas. Not only has there been some convergence of originally distant positions both among realists and among antirealists, but the two opposite camps have even come closer to each other on certain important issues.
In particular, I would argue that the most discussed realist criterion of truth (the "no miracle" argument based on novel predictions) has been refined to a formulation which stands unchallenged, and a moderate version of deployment realism has emerged, which may well prove unobjectionable.

Bio: Mario Alai (1952), a Philosophy graduate of the University of Bologna, holds postgraduate degrees from the Universities of Urbino and Helsinki, and doctoral degrees from the Universities of Maryland and Florence. After teaching Religion, Psychology, Education, History and Philosophy in high schools in Cesena and Turin (Italy), from 1999 until his retirement in 2022 he worked at the University of Urbino, first as lecturer, then as researcher in Logic and Philosophy of Science, and finally as associate professor of Theoretical Philosophy and of Philosophy of Language. His main research interests concern metaphysical realism and scientific realism. In particular, he has defended sophisticated metaphysical realism against such versions of antirealism as neopositivistic and Dummettian verificationism, Parrini's agnosticism, Goodman's constructionism, and Putnam's "internal" realism. In addition, he has argued for a balanced form of selective scientific realism by discussing the "no miracle argument", the nature and role of "novel predictions", and the best ways to resist the pessimistic meta-induction and to overcome the empirical underdetermination of theories. He has also written on the philosophy of Agazzi, Carnap, Frege, D. Lewis, Popper, Quine, Russell, and Wittgenstein, and published works on the history of philosophy of science, the history of philosophy of language, the theory of truth, the theory of knowledge and justification, skepticism, artificial intelligence in scientific discovery, meaning and reference, and experimental philosophy.

Atocha Aliseda

Automated Diagnostic Reasoning: How Far Can We Go?

Abstract: The aim of this talk is to put forward a philosophical framework for automatic diagnostic systems in medicine. After reviewing some research in the area of oncology, in particular in breast cancer detection, I will address some philosophical questions of an epistemological and ontological nature. I will inquire into what theory of mind a human agent must endorse when interacting with clinical decision support systems, as well as into what type of entities these systems are. I will finally propose some attributes these systems must exhibit, in order to show that responsibility matters most in the medical domain.

Bio: Atocha Aliseda is a full professor at the National Autonomous University of Mexico (UNAM). She has published and edited books and articles on Logic and Philosophy of Science. Her research topics include abductive reasoning, logics of scientific discovery, and clinical reasoning. In 2022 her co-edited book Philosophy of Medicine: Contributions and Discussions from Mexico (in Spanish) was published.

Valentin Bazhanov

Responsibility of a Scientist Through the Lens of Political Philosophy of Science

Abstract: The aim of my presentation is to explore the problem of the scientist's responsibility from the perspective of the political philosophy of science. The political philosophy of science had a special mission during the coronavirus pandemic, when seemingly purely medical and biological problems were elevated to the level of the most complex social problems of the organization and financing of health care. Why is the political responsibility of science particularly relevant in connection with the phenomenon of "post-normal" science, as well as with the problem of realizing the political subjectivity of science and its potential "democratization"? Does the main reason relate to the nature and epistemological status of "post-normal" science and the possible role of scientists in political decision-making in situations of significant uncertainty about the future (which is particularly characteristic of ecology)? Firstly, we are inclined to stress the crucial importance of appealing to traditional criteria of rationality, whose carriers and conductors are scientists, for they work according to the norms and guidelines of "normal" science. Secondly, we argue that, despite the transdisciplinary nature of the problems at the heart of post-normal science, the political subjectivity of modern science is not full-fledged because of the alienation of scientists from important political decisions. Thirdly, science does not participate in politics in an independent way, as an autonomous actor acting on the same plane and on a par with other political actors (like parties or other political communities). Whether science and scientists acquire or lose the status of a political subject depends on the nature of the political milieu in society. Political subjectivity in a certain political climate cannot be the goal, the value of science, or the meaning of the scientist's activity.
Striving for political subjectivity as a norm of existence of "post-normal" science could lead to a radical change in the "self-consciousness" of science, its sociocultural status and political weight, which would certainly affect the position of scientists. However, this aspiration makes theoretical and practical sense only as an integral part of the progressive movement towards civil society and democracy.

Bio: Prof. Valentin A. Bazhanov (born in 1953 in Kazan, Russia, the USSR) received his Kandidat degree (PhD) from Leningrad (now St. Petersburg) University. He was on the faculty of Philosophy at Kazan University from 1979 until 1993, and was a Senior Research Fellow at the Institute of Philosophy (Moscow) in 1987-88. Since 1993 he has been at the Ulyanovsk branch of Moscow State University (now Ulyanovsk State University), where he was Dean of the Humanities (1993 - October 1995); he was also a key researcher at Baltic Federal University (Kantian Rationality Lab, 2020-2022). He won the USSR Academy of Sciences prize for young researchers (1985) and Kazan University's First Prize (1989), and held an International Science Foundation award (1994), a British Academy Fellowship (1998), and a Lakatos Research Fellowship (2007-2008). He was an Associate Editor of THE REVIEW OF MODERN LOGIC (formerly MODERN LOGIC) from 1989 to 2004, and has served as an Associate Editor of PROBLEMS OF PHILOSOPHY (RAS), EPISTEMOLOGY AND PHILOSOPHY OF SCIENCE (RAS), PHILOSOPHY OF SCIENCE AND TECHNOLOGY (RAS), KANT STUDIES (Baltic Federal University), and LOGICAL INVESTIGATIONS (RAS). He has published 15 books and edited 6 books (including scientific biographies of the great geometer N.I. Lobachevsky and the forerunner of non-classical logic N.A. Vasiliev). In addition, he has 625 papers to his credit.

Jesus Zamora-Bonilla

Towards a Theory of Epistemic Public Goods

Abstract: In economic theory, something is a public good if it is non-rival (its consumption by one agent does not diminish the amount available to others) and non-excludable (once it is provided, it is not viable to charge users a price). Knowledge and information have often been understood as public goods in this sense, but different epistemic items may possess these properties to different degrees. Furthermore, though the theory of public goods has been used to answer the normative and political question of how scientific research should be socially organised, little work has been done regarding the possible relevance of the theory to understanding the epistemological nature of knowledge, information, belief, and the like. In this paper, a Brandomian inferentialist model of rationality and knowledge is employed to provide a framework within which to answer these epistemological questions.

Bio: Jesús Zamora-Bonilla (Madrid, 1963) is full professor of Philosophy of Science at UNED (Spain's National Open University). He holds PhDs in both Philosophy and Economics, and has worked mainly on the problems of scientific realism, scientific rationality, the philosophy of the social sciences, and the application of game-theoretic models to the process of scientific research. He has published numerous papers in some of the most important journals of the area, such as Philosophy of Science, Synthese, Erkenntnis, Journal of Economic Methodology, Economics and Philosophy, and Philosophy of the Social Sciences. In Spanish, he has also published several books on philosophical topics for the general public and has often contributed to other popular channels (TV, radio, newspapers, blogs, and social networks).

Bo Chen (陈波)

Why Do We ‘Must’ and ‘Should’? Bridging the Gap from ‘Is’ to ‘Ought’

Abstract: The split and gap between ‘is’ and ‘ought,’ between ‘fact’ and ‘value’ or ‘norm,’ as proposed by Hume, Moore and others, are fictional for two reasons. First, there are no purely objective facts, because facts always involve subjective intervention by cognitive subjects. Second, there are no purely subjective norms, because norms must have objective foundations and theoretical bases. Why do we ‘must’ and ‘should’? This is determined by a combination of the following factors. First, our needs, intentions, and goals – where intentions and goals originate from needs, and the strength of our intentions often depends on the strength of needs. However, needs themselves have objective grounds. Second, the current state of affairs often deviates significantly from our needs and intentions. We therefore strive to change the status quo and create a vision that aligns with our needs and intentions. Third, we rely on relevant broad scientific principles – including those of the natural sciences, social sciences, and humanities – as well as on social consensus, such as cultural traditions and conventions. Fourth, we depend on our capacity for rational thinking: faced with the current situation, and drawing upon relevant theories and social consensus, we rationally deliberate on what we must or should do – how we can satisfy our needs, achieve our goals, and turn our visions into reality. Thus, there exists a common thread linking our needs, interests, intentions, goals, the distant reality, and our rational capacity. This commonality bridges the gap between ‘facts’ on the one side and ‘values’ and ‘norms’ on the other.

Bio: Chen Bo is Chair Professor of Humanities and Social Science and Doctoral Supervisor at the School of Philosophy, Wuhan University. He has been elected a Titular Member of the IIP (Institut International de Philosophie, Paris, 2018) and of AIPS (Académie Internationale de Philosophie des Sciences, Bruxelles, 2021). He was a full Professor for 23 years at the Department of Philosophy, Peking University (1999-2021). His specialty and expertise cover logic and analytic philosophy, especially the philosophy of logic, philosophy of language, history of logic, epistemology, Hume, Frege, Russell, Quine, and Kripke. He is also engaged in the comparative study of Chinese and Western philosophy. His representative publications include the monographs Dialogues, Interaction, and Participation: The Process of Integrating into the International Philosophical Community (Beijing: China Renmin University Press, 2020), Analytic Philosophy: Criticism and Construction (Beijing: China Renmin University Press, 2018), Studies on Paradoxes (Beijing: Peking University Press, 2014; second edition, 2017), Studies in Philosophy of Logic (Beijing: China Renmin University Press, 2013), Studies on Philosophy of Logic (Beijing: China Renmin University Press, 2004; revised and enlarged edition, 2013), and Studies on Quine’s Philosophy: From Logical and Linguistic Points of View (Beijing: SDX Joint Publishing Company, 1998); and the papers “Logical Exceptionalism: Development and Predicaments” (Theoria: A Swedish Journal of Philosophy, vol. 90, 2024, no. 3) and “Social Constructivism of Language and Meaning” (Social Sciences in China [Chinese Edition], 2014, no. 10). E-mail: chen-bo@whu.edu.cn.

Pan Deng (邓盼)

From Teleology to the Illusion of Autonomy: A Philosophical Genealogy of Machine Ethics & AI Agents

Abstract: With the increasing penetration of autonomous systems and artificial intelligence, “machine ethics” has become a central topic in contemporary applied ethics. However, current discussions often bracket the deep philosophical roots of this issue—namely, the millennia-long debate in the history of Western thought surrounding the “tool” (Organon) in relation to teleology and mechanism. This study constructs a historical-philosophical framework to trace the evolution of the “machine/tool” concept from a teleological creation with embedded value to a functionally autonomous agent. The core thesis is that contemporary discourse on machine “autonomy” is conceptually flawed, and that a proper framing of machine ethics must return to the human subject, viewing it as a form of “media ethics.” The inquiry traces the concept from its Aristotelian roots, where a tool’s value is bound to the realization of the “good” (Gut), through its profound rupture in the 17th century, when ethical judgment shifted from objective purpose to subjective utility. This legacy shapes today’s core dilemma, as true “autonomy” stems from self-legislation (αὐτός νόμος)—the capacity to reflect upon and question rules—which machines fundamentally lack. Attributing moral responsibility to them is therefore a category error and a dangerous “deflection of responsibility” (Ablenkungsmanöver). Consequently, this study concludes that the discourse must move beyond the fetishism of “autonomy” and be reframed as a media ethics, shifting focus from creating illusory “machine agents” to critically regulating the human values embedded within technological systems and returning responsibility to their human creators and users.

Bio: Pan Deng is an Associate Research Fellow at the School of Social Sciences, Shenzhen University. Deng holds a Ph.D. in Philosophy (Doctor philosophiae) from Ludwig-Maximilians-Universität München (LMU Munich), where the dissertation focused on German Philosophy with a minor in Sinology. Primary research interests include the historical-critical edition of the works of Marx and Engels (MEGA²), Classical German Philosophy, and the Philosophy of Technology. Deng is the Principal Investigator of a General Program grant from the National Social Science Fund of China (NSSFC) and has authored three books and numerous academic articles. Additionally, Deng serves as an Associate Editor of the journal Technology and Language.

Dennis Dieks

Foundations of Quantum Mechanics: From Pure Science to World-changing Technology

Abstract: For the first two decades after World War II, physicists had little appreciation for work on the foundations and philosophy of quantum mechanics. This began to change in the 1960s with the publication of Bell's work, and the shift accelerated in the 1980s with experimental tests confirming the quantum predictions that violate Bell's inequalities. In the 1990s, these developments merged with fundamental experimental research, giving rise to such fields as quantum information and quantum computing. As these fields gained technical relevance and attracted significant funding, fundamental quantum research turned into a prestigious discipline with clear potential for societal impact. This growing prestige and potential impact raise a question: does the development of the field bring more responsibility? In the early stages of research, the social impact seemed minimal, but the landscape has changed - especially with the combination of quantum technology and AI. At the same time, there remains considerable uncertainty about both the promises and the potential risks of these developments. So, the argument against responsibility based on ignorance, alongside the argument based on the fragmentation of responsibility, should certainly be taken seriously. Yet some potential dangers seem too serious to be ignored. In the talk, we will explore the changing responsibility of scientists in fundamental quantum research, drawing comparisons with other historical cases where scientific advances have posed ethical challenges.

Bio: Dennis Dieks is a professor (em.) of philosophy and foundations of the natural sciences, particularly physics, at Utrecht University. He is one of the discoverers of the no-cloning theorem (1982) and has published extensively on the foundations and interpretation of quantum mechanics, the concept of 'identical particles,' and the philosophy of space and time. He is a member of the Royal Netherlands Academy of Arts and Sciences, the Royal Holland Society of Sciences and Humanities, and the Académie Internationale de Philosophie des Sciences. In 2024, he won the Langerhuizen Oeuvre Prize.

Vincenzo Fano

Explaining Neural Networks as an Ethical Problem for Scientists and Philosophers

Abstract: In recent years, Artificial Intelligence has made remarkable strides, primarily due to the adoption of deep neural networks. However, one of the fundamental challenges of these models remains their lack of explainability. This issue was already recognized before the advent of Large Language Models (LLMs) such as ChatGPT, but their success has further exacerbated the problem. AI-based neural networks excel in solving complex tasks but are notoriously difficult to interpret in terms of decision-making processes. This practical limitation has significant implications: if we do not understand the algorithm that a network follows, we cannot easily adapt it to different contexts, thereby reducing the flexibility and reliability of the system. Drawing from David Marr's well-known framework, modern AI systems seem to lack the algorithmic level. We can identify the task being performed (computational level) and observe its physical implementation (implementational level), yet the actual reasoning process remains opaque. This issue is both practical and ethical: we cannot make informed decisions about AI utilization without a deeper understanding of its mechanisms. The explainability problem can be approached in two primary ways: logical analysis and studying expert discourse. This specialized technical discourse may provide valuable clues for interpreting neural network activity. To make informed decisions regarding AI deployment, we must enhance our understanding of these models' internal mechanisms. Investigating the language researchers use to describe LLMs may offer a novel avenue for interpreting these machines, potentially bridging the gap between the computational and implementational levels. In subsequent analyses, we will delve deeper into the process of discovery that has led to the development of ChatGPT, seeking to unveil the natural language models deployed by researchers.

Bio: Vincenzo Fano is Full Professor of Logic and Philosophy of Science at the University of Urbino "Carlo Bo". His research spans philosophy of physics, epistemology, history of philosophy, and metaphysics, with a special focus on the interplay between science and philosophy. He currently serves as President of the Italian Society for Logic and Philosophy of Science (SILFS), Director of the Urbino Summer School in Epistemology, and Director of the Urbino International School in Philosophy and Foundations of Physics. He is also a member of the Académie Internationale de Philosophie des Sciences. Author of numerous books and articles in leading journals, he has held visiting positions at institutions such as the University of Western Ontario, the University of Pittsburgh, and Helsinki University.

Wenceslao J. Gonzalez

Analysis of Responsibility in Artificial Intelligence as a Science of Design

Abstract: The responsibility of AI can be first approached as a science of design, analyzing it in terms of the objectives sought, the processes selected, and the expected results, from which consequences can be derived. This responsibility can be that of individual or social subjects. The former are researchers, especially when they choose ends and means in their activity to expand human possibilities via AI; and the latter are organizations when they prioritize objectives, by supporting processes and by highlighting certain results. A key factor for individual and social responsibility in AI is the chosen rationality. Herbert Simon distinguished three types of rationality: (i) administrative, (ii) human in general (based on economic decision-making) and (iii) symbolic. It is in this third type, which Simon associated with AI, where what Thomas Nickles calls "alien reasoning" appears, a type of rationality different from human rationality and which we do not control, which raises questions about responsibility for research processes. In recent years in particular, a group of leading researchers, including Geoffrey Hinton and Daniel Kahneman, have drawn attention to the possible extreme risks of achieving "artificial general intelligence." This in turn raises the question of whether or not to accept a "digital mind" and, if so, how to understand it, which raises the question of responsibility in the context of the limits of science.

Bio: Wenceslao J. Gonzalez is Professor of Logic and Philosophy of Science (University of A Coruña). He was a Team Leader of the European Science Foundation program entitled "The Philosophy of Science in a European Perspective" (2008-2013). He received the Research Award in Humanities given in 1995 by the Autonomous Community of Galicia (Spain). In 2009 he was named a Distinguished Researcher by the Main National University of San Marcos in Lima (Peru). He has been a member of the National Commission for the Evaluation of Research Activity (CNEAI) of Spain. He has been the Director of the Research Center for Philosophy of Science and Technology (CIFCYT) at the University of A Coruña since its creation in 2017. He is also the President of the Philosophy of Science and Technology Foundation (FFCYT), recognized by the Xunta de Galicia as of educational interest (Diario Oficial de Galicia of June 18, 2015) and of Galician interest (Diario Oficial de Galicia of July 7, 2015). He has been Vice-Dean of the Faculty of Humanities and was President of the Committee of Doctoral Programs at the University of A Coruña (2002-2004). He was the promoter of the Honorary Doctorate awarded by the UDC to John Worrall (London School of Economics) on March 11, 2020.

Hans-Peter Grosshans

Freedom of Science and Responsibility in Science. A Dialectical Relation

Abstract: With the publication of Hans Jonas's book The Principle of Responsibility in 1979 (German original: Hans Jonas, Das Prinzip Verantwortung, Frankfurt 1979), the category of responsibility became widespread in science and technology. This contribution shows how Jonas conceived responsibility anew by relating it predominantly to the future. This concept of responsibility is then related to the freedom of science. It will be shown how the two principles support each other in science, but also where they come into conflict.

Bio: Hans-Peter Grosshans is a German theologian and philosopher of religion, with a special focus on questions and problems of hermeneutics, methodology and philosophy of science. From 1990 to 2002 he taught at the Faculty of Protestant Theology at the University of Tübingen. He then held teaching positions at the theological faculties of the universities of Hamburg (Germany), Munich (Germany) and Zürich (Switzerland) before taking over, in 2008, his present position as professor (chair) for Systematic Theology and director of the Institute for Ecumenical Theology at the Faculty of Protestant Theology of the University of Münster, Germany. From 2016 to 2020 he was dean of his faculty. At the University of Münster Hans-Peter Grosshans is a member of the Centre for the Theory of Science and of the interdisciplinary Cluster of Excellence on "Religion and Politics". He has served for many years on the Board of the European Society for the Philosophy of Religion and has been president and vice-president of this society. In 2020-21 he was president of the European Academy of Religion (EuARe). He is the current president of the German Society for Philosophy of Religion and a member of the Executive Board of RESILIENCE, a European Research Infrastructure for Religious Studies. He is main editor of the journal "Theologische Rundschau" (Mohr Siebeck, Tübingen) and editor (with Chr. Danz, J. Dierken and F. Nüssel) of the book series "Dogmatik in der Moderne" (Mohr Siebeck, Tübingen).

Shuifa Han (韩水法)

University Education in the Age of AI

Abstract: In the era of artificial intelligence, people have clearly recognized the following facts: (1) The nature of human beings is undergoing change; (2) Appropriate algorithms can process knowledge, discover, reorganize, and present relations between pieces of knowledge with extraordinary efficiency. As a result, education must inevitably undergo corresponding changes. All previous educational reforms are merely updates compared to the revolution we currently face, while today's educational revolution requires a complete redesign of the system. It must redefine the nature, function, and content of education from scratch and establish new institutions. As for research universities, standard, foundational, and general education courses will all be delivered using artificial intelligence methods and will be studied independently by students. In other words, a large amount of foundational, repetitive, and standard-answer-oriented instruction and experimentation will be handed over to AI systems. Teachers will be responsible for exploratory and research-oriented advanced courses; in other words, courses in which teachers are directly involved will be equivalent to small, temporary academic research teams, and face-to-face teaching between teachers and students will become a process of collaborative research. At the same time, courses that develop and strengthen uniquely human abilities—such as willpower and emotional intelligence, integrative ability, and teamwork—will become the most important aspect of university education.

Bio: Shuifa Han is Boya Distinguished Professor at Peking University, Director of the Academic Committee of the Department of Philosophy at Peking University, and Director of the Institute of Foreign Philosophy at Peking University. He is also Deputy Director of the Chinese Society for the History of Foreign Philosophy. His main research fields include Kant's philosophy, political philosophy, Max Weber, modern Chinese thought, and Hanese philosophy. His major works include A Research on Kant's Theory of "Thing-in-itself" (1990), Max Weber (1998), University and Learning (2008), The Horizon of Justice (2009), and Critical Metaphysics (2009); he is also the translator of Critique of Practical Reason (Kant) and The Methodology of the Social Sciences (Weber). His main papers include "The Concept of Bürger in Kant's Philosophy of Law", "The Third Meaning of Enlightenment: Enlightenment Thought in the Critique of Judgment", "Humanism in the Age of AI", "Hanese Philosophy: Methodological Significance", and "Hanese Thought Orders and the Ancient Divine System".

Karl Kraatz

Heidegger’s Critique of Technology and Its Relevance: Insights from His Philosophy of AI

Abstract: In the philosophy of technology, it has become increasingly common to describe the approaches of “classical philosophers of technology,” including Heidegger, as reflections on “Technology” with a capital T and to brand them as “too abstract” or “metaphysical.” Don Ihde, for instance, criticized Heidegger’s approach for being ignorant of the particularities of “concrete technologies.” To improve on classical approaches, many have called for an “empirical turn” within the philosophy of technology, advocating for stronger interdisciplinary work with scientists. I attempt to show that this distinction between “abstract” (Technology) and “concrete” (technologies) is deeply flawed, by demonstrating the “concreteness” of Heidegger’s critique of technology and its relevance for understanding current developments in AI research. To prove this relevance, I will discuss examples from research on “Heidegger’s philosophy of AI” in the areas of ethics, language, ontology, and epistemology. From these discussions the notion of “ontological responsibility” will emerge: the responsibility for one’s underlying assumptions about reality.

Bio: Karl Kraatz is Associate Professor in the School of Philosophy at Zhejiang University. His research spans phenomenology and artificial intelligence, with a special focus on transcendental philosophy, particularly Kant and Heidegger, as well as Heideggerian approaches to AI, encompassing ethics, large language models, and the limitations of generative AI. He is the author of two monographs and numerous articles in leading journals. He is co-founder of the international journal Eksistenz: Journal for Intercultural Philosophy and Hermeneutics and founder of Synopsis, a German publishing house specializing in translations of important philosophical works.

Beishui Liao (廖备水)

Approaches to Explainable and Ethical AI

Abstract: This presentation explores the development of explainable and ethical artificial intelligence (AI). It begins by examining the nature of AI and its primary implementation paradigms—symbolic reasoning and model-based learning—highlighting the distinct ethical and safety challenges that arise from each. These challenges differ between non-embodied AI systems (such as issues of reliability, bias, and transparency in large language models) and embodied AI systems (such as accountability and moral dilemmas in autonomous vehicles). To address these issues, the presentation proposes four key technical pillars for building trustworthy AI: establishing a world model for factual grounding, ensuring logical correctness through formal methods, achieving verifiability and explainability via integrated approaches, and embedding ethical and legal norms with causal traceability. Finally, the “3C” principles—Correctness, Clarity, and Compliance—are introduced as a guiding framework for translating theory into practice in the creation of responsible and trustworthy AI systems.

Bio: Beishui Liao is a Full Professor of Logic and Computer Science at Zhejiang University, where he has held the prestigious Qiushi Distinguished Professorship and Changjiang Distinguished Professorship since 2019. He received his Ph.D. in Computer Science from Zhejiang University in 2006 and has since established himself as a leading scholar in logic, formal argumentation, and their applications to multi-agent systems, explainable AI, and ethical AI. Currently, he leads a major project funded by the National Social Science Fund of China on Logics for New Generation Artificial Intelligence. He serves as Vice Dean of the Faculty of Humanities at Zhejiang University, Director of the Institute of Logic and Cognition, and Co-Director of the Zhejiang University–University of Luxembourg Joint Lab on Advanced Intelligent Systems and Reasoning (ZLAIRE). He has been a Guest Professor at the University of Luxembourg since 2014 and has held visiting positions at the University of Texas at Austin, the University of Brescia, the University of Oxford, and the University of Cambridge. An active member of the international academic community, he serves as an Associate Editor of Annals of Mathematics and Artificial Intelligence, AI Logic Corner Editor of the Journal of Logic and Computation, and an editorial board member of Argument & Computation and Journal of Applied Logics. He is also a steering committee member of DEON and COMMA. In 2015, he co-founded the International Conference on Logic and Argumentation (CLAR), which has since become a leading international conference in the field of logic and argumentation. He has published three monographs and numerous papers in top journals and conferences, including Social Sciences in China, Artificial Intelligence (AIJ), Journal of Artificial Intelligence Research (JAIR), Journal of Logic and Computation (JLC), IJCAI, and KR, among others.

Fenrong Liu

Fenrong Liu (刘奋荣)

Advances in the Logical Reasoning of Large Language Models

Abstract: This talk presents a survey of the current state of research on the logical reasoning capabilities of large language models (LLMs), with particular attention to challenges concerning consistency. I will then outline several methods we have employed to investigate these issues and to enhance the logical reasoning capacity of LLMs. Finally, I will argue that strengthening the logical foundations of LLMs provides a promising paradigm for integrating symbolic and neural approaches, offering new perspectives for the development of the next generation of AI. This line of work also contributes to advancing AI systems that are both safer and more explainable.

Bio: Fenrong Liu is a Changjiang Distinguished Professor at Tsinghua University, the Amsterdam-China Logic Visiting Chair at the University of Amsterdam, and Co-Director of the Tsinghua-UvA Joint Research Centre for Logic. She is a Member of the Institut International de Philosophie (IIP) and a Corresponding Member of the Académie Internationale de Philosophie des Sciences (AIPS). Her research interests include preference logic, social epistemic logic, graph game logic, logical reasoning for large language models (LLMs), and history of logic in China. She held a visiting position at Harvard University and was a Berggruen Fellow at Stanford University. She is a steering committee member of TARK, PRICAI, LAMAS, and AWPL, and serves on the editorial boards of Synthese, Studia Logica, Global Philosophy, Theoria, and Topoi.

Chuang Liu

Chuang Liu (刘闯)

What Do the Fundamental Differences Between Natural Intelligence and Artificial Intelligence Mean to the Future of Art (As an Example)?

Abstract: Animals and humans mainly perceive and change their environment through natural intelligence. Before the advent of AI, despite an abundance of machines, the intelligence that used and controlled them was still human intelligence. From now on, this traditional mode of life may change drastically. With the massive introduction of AI, what challenges will humans encounter in their future interactions with their environment? Recognizing the differences between the principles and mechanisms of natural and artificial intelligence will help us recognize such challenges and their severity. This paper uses artistic creation and application as an example to analyze and explain the fundamental differences between the Bayesian brain (natural intelligence) and the Boltzmann machine (artificial intelligence). The difference between natural kinds and statistical kinds foreshadows a potentially large difference between the creative processes and their products.

Bio: Chuang Liu is a Distinguished Professor in the School of Philosophy at Fudan University, Shanghai, where he is Chair of the Department of Philosophy of Science and Logic and Director of the Center for the Philosophy and Science of Intelligence (Fudan PSI); he is also Academic Director of the Institute of Philosophy, Chinese Academy of Sciences (CASIP), and Professor Emeritus in the Department of Philosophy at the University of Florida. His research interests span the philosophy of physics, scientific methodology, and the philosophy of intelligence, and he has published widely in the top journals in philosophy of science. His recent work engages evolutionary game-theoretic approaches to morality, Markov blankets and active inference in perception and cognition, and the role of teleology in scientific explanation, connecting norms, modeling, and cognition across species.

Dong Luo

Dong Luo (罗栋)

Harmony and Technological Understanding

Abstract: Drawing on perspectives from traditional Chinese thought, this paper explores two distinct dimensions of harmony: non-conflictual coexistence, where differing parties maintain balance without erasing their differences, and organic unity, where dynamic interactions—including tension or competition—contribute to a coherent, self-regulating whole, as seen in natural ecosystems. These understandings of harmony provide a lens for reimagining the human–technology relationship in the age of artificial intelligence (AI). The paper proposes two possible models for a future AI-integrated society, each rooted in one of these conceptions of harmony. The first model envisions AI as a controllable tool, integrated into a human-governed system where stability is maintained through oversight and regulation—even amid underlying tensions. The second model imagines a more de-anthropocentric society in which advanced AI agents are recognized as legitimate participants, coexisting with humans through mutual respect and shared social space. Rather than focusing on AI ethics or risk mitigation alone, this analysis prioritizes what kinds of social configurations are possible and desirable for future societies.

Bio: Dong Luo earned his PhD from the Chinese Academy of Sciences with a dissertation on scientific objectivity and invariance. He is currently an Associate Professor at the Institute for Advanced Studies in Philosophy, Science, and Technology at the South China University of Technology. He has research interests in the history and philosophy of mathematics and physics, philosophy of science, and comparative philosophy. In recent years, he has published dozens of journal articles on these topics. He is also completing a book titled Scientific Objectivity: The Will to Understanding in Science, forthcoming with Routledge. Dong Luo also serves as Secretary General of the Chinese Association for the Philosophy of Physics and is a Council Member of the International Alliance for Social Epistemology. Email: luodong@scut.edu.cn. Website: www.luodong.org.

Renzong Qiu

Renzong Qiu (邱仁宗)

A Philosophical Case against Human Heritable Genome/Polygenic Editing

Abstract: The responsibility of science and technology means that scientific and technological professionals must be responsible, answerable, and accountable for what lies within their power, and must be prepared to stand up on behalf of their stakeholders and to act and speak for them. By the time of the illegal human embryo genome editing conducted by He Jiankui, a number of participants in the debate on human germline genome editing, including He himself, did not realize that future generations should be counted as an important member among the stakeholders. Nor do Julian Savulescu and his colleagues pay attention to responsibility toward future generations when they argue for human heritable polygenic editing (HHPE) as the next frontier of genomic medicine. In this presentation, I will first argue that our generation has obligations toward future generations, which implies that future generations should be among the stakeholders in human heritable genome editing (HHGE) and HHPE. I will then argue that, in practice, both HHGE and HHPE are scientifically untestable. Life expectancy in China is now 79 years, so we would have to observe Lulu, Nana, and Amy, whose genomes were edited by He Jiankui, for 79 years to know whether they would be infected with HIV; and even if they were never infected in their whole lives, that fact could not prove it was due to the HHGE that He conducted. The same holds for HHPE: the fact that a person who undergoes HHPE does not suffer from cancer over a 79-year life in China cannot prove the success of HHPE. Moreover, testing done after He’s embryo editing shows that Lulu’s and Nana’s genomes contain a number of off-target effects, editing errors, and mosaicisms that would have negative health impacts on future generations, as well as many other unknowns.
How could the medical scientists who conduct HHGE/HHPE discharge their responsibility (including answerability and accountability) toward future generations a hundred years after their own death? The conclusion, therefore, is that both HHGE and HHPE are scientifically and ethically unjustifiable.

Bio: Professor Emeritus, Institute of Philosophy and Honorary Director, Centre for Applied Ethics, Chinese Academy of Social Sciences; Professor, Center for Ethics and Moral Studies, and Director, Institute of Bioethics, Renmin University of China; Professor and Honorary Director, Center for Bioethics, School of Medicine, Xiamen University; Member of the Advisory Committee, Advanced Institute of the Humanities and Social Sciences, University of Electronic Science and Technology of China; Honorary President, Chinese Society for Bioethics; Member, Assessment Panel, Public Policy Research Funding Scheme, Hong Kong; Lifetime Member, Kennedy Institute of Ethics, Georgetown University; Fellow, The Hastings Center; Member, International Institute of Philosophy. Major past posts: Vice-President, Ethics Committee, Ministry of Health, China; Vice-President, Chinese Society for Philosophy of Nature, Science and Philosophy; President, Chinese Society for Philosophy of Science; President, Chinese Society for Bioethics; Chair, Chinese Committee, China-UK Summer School of Philosophy; Member, Committee on Ethical, Legal and Social Implications, International HapMap Consortium; Member, Gender Advisory Panel, UNDP/UNFPA/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction; Member, UNESCO International Bioethics Committee; Member, UNAIDS Reference Group on AIDS and Human Rights; Assessor, Executive Committee, Division of Logic, Methodology and Philosophy of Science/International Association of History and Philosophy of Science; Member, Board of Directors, International Association of Bioethics; President, Asian Bioethics Association.
Awards and prizes: 2002 World Network of Technology Awards Ethics; 2009 UNESCO Avicenna Prize of Ethics of Science; 2011 Henry Knowles Beecher Bioethics Award; 2017 Chinese Academy of Social Science Excellent Policy Recommendation; 2020 Chinese Society for Philosophy of Nature, Science and Philosophy Lifetime Achievement Award; 2022 Ministry of Education Policy Study Award, co-winner with Professor Lei Ruipeng. Publications: 25 books, including Realism vs Anti-Realism in Philosophy of Science (co-editor, in English), Bioethics, Reproductive Health and Ethics (3 volumes), Patient’s Rights (co-author), AIDS, Sex and Rights, Biomedical Research Ethics (co-author), Bioethics: Asian Perspectives (in English), Introduction to Bioethics (co-author), Public Health Ethics (co-author), An Introduction to Political Philosophy (co-editor), and Contemporary Studies in Bioethics (co-editor, 2 volumes), as well as nearly 450 articles published in China and other countries, 92 of which are in English.

Jorn Witt

Jorn Witt

Responsibility as Value of Scientific Research

Abstract: In the twenty-first century, scientists work in a research environment that is being transformed by globalization, interdisciplinary research projects, team science, and information technologies. As the scientific enterprise evolves, all stakeholders in the scientific community have an ethical obligation to place a high priority on instilling and championing the highest standards of scientific integrity in these new settings and applications. Besides well-designed study protocols, rigorous and precise data collection, thorough statistical analysis, and a well-defined peer-review process, the responsibility of the scientist has a value of its own, especially in medicine, psychology, and the humanities. In the presentation we will discuss different aspects of responsibility in the scientific process and their value for scientific results, for participants in research protocols, and for researchers themselves.

Bio: Ass. Prof. Dr. J.H. Witt is one of the most experienced robotic surgeons in urology worldwide. With personal experience of almost 10,000 procedures, oversight of more than 25,000 robotic cases over the last 20 years, and a large number of publications in this field, Dr. Witt must be called, without question, one of the founders of robotic surgery in urology. Since April 2024, Dr. Witt has been Head of the Department of Urology and Urologic Surgery (“Urokompetenz – Das Zentrum für Urologie”) at Clinic Bel Etage, a large private hospital in downtown Düsseldorf. Additionally, he is Senior Consultant at the Department of Urology, University Mannheim, a large tertiary referral center for urology. Dr. Witt was formerly Chair of the Department of Urology, Pediatric Urology and Urologic Oncology at St. Antonius-Hospital Gronau (April 2002 to December 2022), where he established one of the biggest urological departments in Germany. There he also founded and led the Prostate Center Northwest, a certified Comprehensive Cancer Center for prostate cancer and a tertiary referral center for surgery in urologic oncology. After leaving Gronau, in 2023 he reorganized, as Director and Chair, the Paracelsus-Klinik Golzheim, a large and renowned hospital for urology in Düsseldorf. Dr. Witt is also Co-Founder of the German Society for Robotic Urology (Deutsche Gesellschaft für roboterassistierte Urologie) and past president of several national and international scientific meetings.