Abstract: The aim of this talk is to put forward a philosophical framework for automatic diagnostic systems in medicine. After reviewing some research in oncology, in particular in breast cancer detection, I will address some philosophical questions of an epistemological and ontological nature. I will inquire into what theory of mind a human agent must endorse when interacting with clinical decision support systems, as well as what type of entities these systems are. I will finally propose some attributes these systems must exhibit, in order to show that responsibility matters most in the medical domain.
Bio: Atocha Aliseda is a full professor at the National Autonomous University of Mexico (UNAM). She has published and edited books and articles on logic and philosophy of science. Her research topics include abductive reasoning, logics of scientific discovery, and clinical reasoning. In 2022 her co-edited book Philosophy of Medicine: Contributions and Discussions from Mexico (in Spanish) was published.
Abstract: The aim of my presentation is to explore the problem of the scientist's responsibility from the perspective of the political philosophy of science. The political philosophy of science had a special mission during the coronavirus pandemic, when seemingly purely medical and biological problems were elevated to the level of the most complex social problems of organizing and financing health care. Why is the political responsibility of science particularly relevant in connection with the phenomenon of "post-normal" science, as well as with the problem of realizing the political subjectivity of science and its potential "democratization"? Does the main reason relate to the nature and epistemological status of "post-normal" science and to the possible role of scientists in political decision-making in situations of significant uncertainty about the future (which is particularly characteristic of ecology)? Firstly, we are inclined to stress the crucial importance of appealing to traditional criteria of rationality, whose carriers and conductors are scientists, since they work according to the norms and guidelines of "normal" science. Secondly, we argue that, despite the transdisciplinary nature of the problems at the heart of post-normal science, the political subjectivity of modern science is not full-fledged because of the alienation of scientists from important political decisions. Thirdly, science does not participate in politics in an independent way, as an autonomous actor acting on the same plane and on a par with other political actors (such as parties or other political communities). Whether science and scientists acquire or lose the status of a political subject depends on the nature of the political milieu in society. Political subjectivity in a certain political climate cannot be the goal, the value of science, or the meaning of the scientist's activity.
Striving for political subjectivity as a norm of existence of "post-normal" science could lead to a radical change in the "self-consciousness" of science, its sociocultural status and political weight, which would certainly affect the position of scientists. However, this aspiration makes theoretical and practical sense only as an integral part of the progressive movement towards civil society and democracy.
Bio: Dr., Prof. Valentin A. Bazhanov (born in 1953 in Kazan, Russia, the USSR) received his Kandidature (PhD) from Leningrad (now St. Petersburg) University. He was on the Philosophy faculty at Kazan University from 1979 until 1993 and was a Senior Research Fellow at the Institute of Philosophy (Moscow) in 1987-88. Since 1993 he has been at the Ulyanovsk branch of Moscow State University (now Ulyanovsk State University), where he served as Dean of the Humanities (1993 - October 1995); he was also a Key Researcher at Baltic Federal University (Kantian Rationality Lab, 2020-2022). He won the Academy of Sciences of the USSR prize for young researchers (1985) and Kazan University's First Prize (1989). He received an International Science Foundation award (1994), was a British Academy Fellow (1998), and held a Lakatos Research Fellowship (2007-2008). Dr. Prof. Bazhanov was an Associate Editor of THE REVIEW OF MODERN LOGIC (formerly MODERN LOGIC) from 1989 to 2004, and has been an Associate Editor of PROBLEMS OF PHILOSOPHY (RAS), EPISTEMOLOGY AND PHILOSOPHY OF SCIENCE (RAS), PHILOSOPHY OF SCIENCE AND TECHNOLOGY (RAS), KANT STUDIES (Baltic Federal University), and LOGICAL INVESTIGATIONS (RAS). He has published 15 books and edited 6 books (including scientific biographies of the great geometer N.I. Lobachevsky and of N.A. Vasiliev, a forerunner of non-classical logic). In addition, he has 625 papers to his credit.
Abstract: Grundgesetze is the book in which Frege vindicates his logicist thesis that arithmetic is a part of logic. The received view in the literature, endorsed by Dummett, Parsons, Burgess, Heck, and essentially all Fregean scholars, is that the system of logic that underlies the book is classical logic. My aim in this talk is twofold. First, I show that the axioms and rules of inference about the horizontal in the subsystem of Grundgesetze without value-ranges fail to capture its intended interpretation. That is, I build a non-standard model in which −a is the True iff a is not the False, thereby confirming a conjecture made by Landini that a → (−a) = a is not provable in this subsystem. This motivates my second and main result, namely, soundness and completeness proofs for the first-order fragment without value-ranges with respect to a new nonclassical semantics of boolean truncation. Simply put, the widespread assumption that Grundgesetze's first-order logic is classical first-order logic is mistaken.
Bio: Bruno Bentzen is an assistant professor of philosophy at Zhejiang University. His work revolves around the philosophy of logic and mathematics, with a particular emphasis on the tradition of mathematical intuitionism and constructive type theory. He has published several works in some of the most prestigious journals in logic and philosophy of mathematics (Review of Symbolic Logic, Notre Dame Journal of Formal Logic, Philosophia Mathematica, History and Philosophy of Logic, Studia Logica, Bulletin of Symbolic Logic, Erkenntnis). His interdisciplinary research can also be found in leading journals from other areas, ranging from phenomenology to computer science (Husserl Studies, Mathematical Structures in Computer Science). Before joining Zhejiang University, he held postdoctoral positions at Carnegie Mellon University and the Czech Academy of Sciences. He is also a member of the Zhejiang University–University of Luxembourg Joint Lab on Advanced Intelligent Systems and Reasoning.
Abstract: The split and gap between ‘is’ and ‘ought,’ between ‘fact’ and ‘value’ or ‘norm,’ as proposed by Hume, Moore and others, are fictional for two reasons. First, there are no purely objective facts, because facts always involve subjective intervention by cognitive subjects. Second, there are no purely subjective norms, because norms must have objective foundations and theoretical bases. Why do we say ‘must’ and ‘should’? This is determined by a combination of the following factors. First, our needs, intentions, and goals – where intentions and goals originate from needs, and the strength of our intentions often depends on the strength of our needs; needs themselves, however, have objective grounds. Second, the current state of affairs often deviates significantly from our needs and intentions. We therefore strive to change the status quo and create a vision that aligns with our needs and intentions. Third, we rely on relevant broad scientific principles – including those of the natural sciences, social sciences, and humanities – as well as on social consensus, such as cultural traditions and conventions. Fourth, we depend on our capacity for rational thinking: faced with the current situation, and drawing upon relevant theories and social consensus, we rationally deliberate on what we must or should do – how we can satisfy our needs, achieve our goals, and turn our visions into reality. Thus, there exists a common thread linking our needs, interests, intentions, goals, the distant reality, and our rational capacity. This commonality bridges the gap between ‘facts’ on the one side and ‘values’ and ‘norms’ on the other.
Bio: Chen Bo is Chair Professor of Humanities and Social Science and Doctoral Supervisor at the School of Philosophy, Wuhan University. He has been elected a Titular Member of the IIP (Institut International de Philosophie, Paris, 2018) and of the AIPS (Académie Internationale de Philosophie des Sciences, Bruxelles, 2021). He was a full Professor for 23 years at the Department of Philosophy, Peking University (1999-2021). His specialty and expertise cover logic and analytic philosophy, especially the philosophy of logic, philosophy of language, history of logic, epistemology, Hume, Frege, Russell, Quine, and Kripke. He is also engaged in the comparative study of Chinese and Western philosophy. His representative publications include the monographs Dialogues, Interaction, and Participation: The Process of Integrating into the International Philosophical Community (Beijing: China Renmin University Press, 2020), Analytic Philosophy: Criticism and Construction (Beijing: China Renmin University Press, 2018), Studies on Paradoxes (Beijing: Peking University Press, 2014; second edition, 2017), Studies in Philosophy of Logic (Beijing: China Renmin University Press, 2013), Studies on Philosophy of Logic (Beijing: China Renmin University Press, 2004; revised and enlarged edition, 2013), and Studies on Quine’s Philosophy: From Logical and Linguistic Points of View (Beijing: SDX Joint Publishing Company, 1998); and the papers “Logical Exceptionalism: Development and Predicaments” (Theoria: A Swedish Journal of Philosophy, vol. 90, 2024, no. 3) and “Social Constructivism of Language and Meaning” (Social Sciences in China [Chinese Edition], 2014, no. 10). E-mail: chen-bo@whu.edu.cn.
Abstract: Contemporary discourse on machine ethics centers on the "autonomy" of AI agents. This study challenges this "illusion of autonomy" through a philosophical genealogy, arguing that Large Language Models (LLMs) are not autonomous agents but the hyperscale materialization of ancient mnemotechnics (ars combinatoria). The paper traces the "tool" concept from Aristotle’s view of memory (mnēmē) as a sensory faculty, through early knowledge-organization machines, to its development as "Conceptual Modelling" by thinkers like Llull, Bruno, and Leibniz. We argue that LLM mechanisms—probabilistic prediction and parametric reduction—are a contemporary extension of these ancient combinatorial techniques. This reveals the non-agentive essence of LLMs as "cultural and social technologies," framing their perceived autonomy as an anthropomorphic projection. Ultimately, this study posits that machine ethics must be "de-anthropomorphized" and reconstituted as a form of media ethics. The ethical focus must shift from fictitious machine subjects to the non-delegable human responsibilities and embedded biases in the design and operation of these "memory machines."
Bio: Pan Deng is an Associate Research Fellow at the School of Social Sciences, Shenzhen University. Deng holds a Ph.D. in Philosophy (Doctor philosophiae) from Ludwig-Maximilians-Universität München (LMU Munich), where the dissertation focused on German Philosophy with a minor in Sinology. Primary research interests include the historical-critical edition of the works of Marx and Engels (MEGA²), Classical German Philosophy, and the Philosophy of Technology. Deng is the Principal Investigator of a General Program grant from the National Social Science Fund of China (NSSFC) and has authored three books and numerous academic articles. Additionally, Deng serves as an Associate Editor for the journal Technology and Language.
Abstract: For the first two decades after World War II, physicists had little appreciation for work on the foundations and philosophy of quantum mechanics. This began to change in the 1960s with the publication of Bell's work, and the shift accelerated in the 1980s with experimental tests confirming quantum predictions that violate Bell's inequalities. In the 1990s, these developments merged with fundamental experimental research, giving rise to such fields as quantum information and quantum computing. As these fields gained technical relevance and attracted significant funding, fundamental quantum research turned into a prestigious discipline with clear potential for societal impact. This growing prestige and potential impact raise a question: does the development of the field bring more responsibility? In the early stages of research, the social impact seemed minimal, but the landscape has changed, especially with the combination of quantum technology and AI. At the same time, there remains considerable uncertainty about both the promises and the potential risks of these developments. So the argument against responsibility based on ignorance, alongside the argument based on the fragmentation of responsibility, should certainly be taken seriously. Yet some potential dangers seem too serious to be ignored. In the talk, we will explore the changing responsibility of scientists in fundamental quantum research, drawing comparisons with other historical cases where scientific advances have posed ethical challenges.
Bio: Dennis Dieks is a professor (em.) of philosophy and foundations of the natural sciences, particularly physics, at Utrecht University. He is one of the discoverers of the no-cloning theorem (1982) and has published extensively on the foundations and interpretation of quantum mechanics, the concept of 'identical particles,' and the philosophy of space and time. He is a member of the Royal Netherlands Academy of Arts and Sciences, the Royal Holland Society of Sciences and Humanities, and the Académie Internationale de Philosophie des Sciences. In 2024, he won the Langerhuizen Oeuvre Prize.
Abstract: In recent years, Artificial Intelligence has made remarkable strides, primarily due to the adoption of deep neural networks. However, one of the fundamental challenges of these models remains their lack of explainability. This issue was already recognized before the advent of Large Language Models (LLMs) such as ChatGPT, but their success has further exacerbated the problem. AI-based neural networks excel in solving complex tasks but are notoriously difficult to interpret in terms of decision-making processes. This practical limitation has significant implications: if we do not understand the algorithm that a network follows, we cannot easily adapt it to different contexts, thereby reducing the flexibility and reliability of the system. Drawing from David Marr's well-known framework, modern AI systems seem to lack the algorithmic level. We can identify the task being performed (computational level) and observe its physical implementation (implementational level), yet the actual reasoning process remains opaque. This issue is both practical and ethical: we cannot make informed decisions about AI utilization without a deeper understanding of its mechanisms. The explainability problem can be approached in two primary ways: logical analysis and studying expert discourse. This specialized technical discourse may provide valuable clues for interpreting neural network activity. To make informed decisions regarding AI deployment, we must enhance our understanding of these models' internal mechanisms. Investigating the language researchers use to describe LLMs may offer a novel avenue for interpreting these machines, potentially bridging the gap between the computational and implementational levels. In subsequent analyses, we will delve deeper into the process of discovery that has led to the development of ChatGPT, seeking to unveil the natural language models deployed by researchers.
Bio: Vincenzo Fano is Full Professor of Logic and Philosophy of Science at the University of Urbino "Carlo Bo". His research spans philosophy of physics, epistemology, history of philosophy, and metaphysics, with a special focus on the interplay between science and philosophy. He currently serves as President of the Italian Society for Logic and Philosophy of Science (SILFS), Director of the Urbino Summer School in Epistemology, and Director of the Urbino International School in Philosophy and Foundations of Physics. He is also a member of the Académie Internationale de Philosophie des Sciences. Author of numerous books and articles in leading journals, he has held visiting positions at institutions such as the University of Western Ontario, the University of Pittsburgh, and Helsinki University.
Abstract: What is the responsibility of a philosopher of science in a societal context infected with fake news and growing distrust in science? In this paper, I begin by examining several key epistemological concepts: truth, belief, justification, and knowledge. I then analyse the characteristics of good and bad arguments supporting belief in scientific claims. Finally, I offer a brief defence of the normative approach in the philosophy of science, contrasting it with naturalistic views.
Bio: Michel Ghins is Professor Emeritus of Philosophy at the Université Catholique de Louvain (UCL). He holds a Licentiate in Theoretical Physics from UCL, an M.A. in Philosophy from the University of Pittsburgh, and a Ph.D. in Philosophy from UCL. He has taught history of science, philosophy of science, and philosophy of nature at several institutions, including the University of Campinas (São Paulo, Brazil), the Catholic University of America (Washington, D.C.), UCL, the Pontifical Gregorian University (Rome), and the University of Turin. Professor Ghins has published several books and numerous articles in international journals on the foundations of space-time theories, the history of science, the philosophy of science, and the metaphysics of laws of nature. His most recent book, Scientific Realism and Laws of Nature: A Metaphysics of Causal Powers, was published in the Synthese Library series by Springer, in 2024. He currently serves as Vice-President of the International Academy of Philosophy of Science (AIPS). CV and publications https://www.researchgate.net/profile/Michel-Ghins
Abstract: The responsibility of AI can first be approached as a science of design, analyzing it in terms of the objectives sought, the processes selected, and the expected results, from which consequences can be derived. This responsibility can be that of individual or of social subjects. The former are researchers, especially when they choose ends and means in their activity to expand human possibilities via AI; the latter are organizations, when they prioritize objectives by supporting processes and by highlighting certain results. A key factor for individual and social responsibility in AI is the chosen rationality. Herbert Simon distinguished three types of rationality: (i) administrative, (ii) human in general (based on economic decision-making), and (iii) symbolic. It is in this third type, which Simon associated with AI, that what Thomas Nickles calls "alien reasoning" appears: a type of rationality different from human rationality and one we do not control, which raises questions about responsibility for research processes. In recent years in particular, a group of leading researchers, including Geoffrey Hinton and Daniel Kahneman, have drawn attention to the possible extreme risks of achieving "artificial general intelligence." This in turn raises the question of whether or not to accept a "digital mind" and, if so, how to understand it, which raises the question of responsibility in the context of the limits of science.
Bio: Wenceslao J. Gonzalez is Professor of Logic and Philosophy of Science (University of A Coruña). He has been a Team Leader of the European Science Foundation program entitled "The Philosophy of Science in a European Perspective" (2008-2013). He received the Research Award in Humanities given in 1995 by the Autonomous Community of Galicia (Spain). In 2009 he was named a Distinguished Researcher by the Main National University of San Marcos in Lima (Peru). He has been a member of the National Commission for the Evaluation of Research Activity (CNEAI) of Spain. He has been the Director of the Research Center for Philosophy of Science and Technology (CIFCYT) at the University of A Coruña since its creation in 2017. He is also the President of the Philosophy of Science and Technology Foundation (FFCYT), recognized by the Xunta de Galicia as of educational interest (Diario Oficial de Galicia of June 18, 2015) and of Galician interest (Diario Oficial de Galicia of July 7, 2015). He has been Vice-Dean of the Faculty of Humanities and was President of the Committee of Doctoral Programs at the University of A Coruña (2002-2004). He was the promoter of the Honorary Doctorate awarded by the UDC to John Worrall (London School of Economics) on March 11, 2020.
Abstract: With the publication of Hans Jonas's book “The Principle of Responsibility” in 1979 (German original: Hans Jonas, Das Prinzip Verantwortung, Frankfurt 1979), the category of responsibility became widespread in science and technology. The article shows how Jonas conceived responsibility anew by relating it predominantly to the future. This concept of responsibility is then related to the freedom of science. It will be shown how the two principles support each other in science, but also where they come into conflict.
Bio: Hans-Peter Grosshans is a German theologian and philosopher of religion, with a special focus on questions and problems of hermeneutics, methodology, and philosophy of science. From 1990 to 2002 he taught at the Faculty of Protestant Theology at the University of Tübingen. He then held teaching positions at the theological faculties of the universities of Hamburg (Germany), Munich (Germany), and Zürich (Switzerland) before taking up, in 2008, his present position as Professor (Chair) of Systematic Theology and Director of the Institute for Ecumenical Theology at the Faculty of Protestant Theology of the University of Münster, Germany. From 2016 to 2020 he was dean of his faculty. At the University of Münster, Hans-Peter Grosshans is a member of the Centre for the Theory of Science and of the interdisciplinary Cluster of Excellence on “Religion and Politics”. He has for many years been on the Board of the European Society for the Philosophy of Religion and has served as its president and vice-president. In 2020-21 he was president of the European Academy of Religion (EuARe). He is currently president of the German Society for Philosophy of Religion and a member of the Executive Board of RESILIENCE, a European Research Infrastructure for Religious Studies. He is the main editor of the journal “Theologische Rundschau” (Mohr Siebeck, Tübingen) and editor (with Chr. Danz, J. Dierken and F. Nüssel) of the book series “Dogmatik in der Moderne” (Mohr Siebeck, Tübingen).
Abstract: I have long argued against the Stalnaker/Lewis ‘similarity’ accounts of counterfactuals. Roughly, they say that the counterfactual ‘if p were the case, q would be the case’ is true if and only if, at the most similar p-worlds, q is true. Most philosophers agree with this. I disagree. I will summarise my main arguments against this entire approach and add some new ones. I will then offer a paradigm shift based on conditional chances: the counterfactual is true iff the chance of q, given p, equals 1 at a time shortly, but not too shortly, before the truth value of p was settled. I will argue that this account has many advantages over the similarity accounts. What are these chances? I will present my version of a propensity account, and I will argue that it avoids the main objections that have been levelled against propensities. In short, I offer a conditional propensity account of counterfactuals.
Bio: Alan Hájek has been Professor of Philosophy in the Philosophy Program at RSSS, ANU, since 2005. He studied statistics and mathematics at the University of Melbourne (B.Sc. (Hons), 1982), where he won the Dwight Prize in Statistics. He took an M.A. in philosophy at the University of Western Ontario (1986) and a Ph.D. in philosophy at Princeton University (1993), winning the Porter Ogden Jacobus fellowship. He has taught at the University of Melbourne (1990) and at Caltech (1992-2004), where he received the Associated Students of California Institute of Technology Teaching Award (2004). He has also spent time as a visiting professor at MIT (1995), Auckland University (2000), and Singapore Management University (2005). He has been a Fellow of the Australian Academy of the Humanities since 2007. He was the President of the Australasian Association of Philosophy, 2009-10. His research interests include the philosophical foundations of probability and decision theory, epistemology, the philosophy of science, metaphysics, and the philosophy of religion. His paper "What Conditional Probability Could Not Be" won the 2004 American Philosophical Association Article Prize for "the best article published in the previous two years" by a "younger scholar". Two of his articles were selected by The Philosopher’s Annual as “one of the ten best articles in philosophy” in the previous year: “Waging War on Pascal’s Wager” (2004), and “Degrees of Commensurability and the Repugnant Conclusion”, with Wlodek Rabinowicz (2022).
Abstract: In the era of artificial intelligence, people have clearly recognized the following facts: (1) The nature of human beings is undergoing change; (2) Appropriate algorithms can process knowledge, discover, reorganize, and present relations between pieces of knowledge with extraordinary efficiency. As a result, education must inevitably undergo corresponding changes. All previous educational reforms are merely updates compared to the revolution we currently face, while today's educational revolution requires a complete redesign of the system. It must redefine the nature, function, and content of education from scratch and establish new institutions. As for research universities, standard, foundational, and general education courses will all be delivered using artificial intelligence methods and will be studied independently by students. In other words, a large amount of foundational, repetitive, and standard-answer-oriented instruction and experimentation will be handed over to AI systems. Teachers will be responsible for exploratory and research-oriented advanced courses; in other words, courses in which teachers are directly involved will be equivalent to small, temporary academic research teams, and face-to-face teaching between teachers and students will become a process of collaborative research. At the same time, courses that develop and strengthen uniquely human abilities—such as willpower and emotional intelligence, integrative ability, and teamwork—will become the most important aspect of university education.
Bio: Shuifa Han is Boya Distinguished Professor at Peking University, Director of the Academic Committee of the Department of Philosophy at Peking University, and Director of the Institute of Foreign Philosophy at Peking University. He is also Deputy Director of the Chinese Society for the History of Foreign Philosophy. His main research fields include Kant's philosophy, political philosophy, Max Weber, modern Chinese thoughts, and Hanese philosophy. His major works include A Research on Kant's Theory of "Thing-in-itself" (1990), Max Weber (1998), University and Learning (2008), The Horizon of Justice (2009), Critical Metaphysics (2009); he is also translator of Critique of Practical Reason (Kant), The Methodology of the Social Sciences (Weber). His main papers include "The Concept of Bürger in Kant's Philosophy of Law", "The Third Meaning of Enlightenment: Enlightenment Thought in the Critique of Judgment", "Humanism in the Age of AI", "Hanese Philosophy: Methodological Significance", "Hanese Thought Orders and the Ancient Divine System".
Abstract: In the philosophy of technology, it has become increasingly common to describe the approaches of “classical philosophers of technology,” including Heidegger, as reflections on “Technology” with a capital T and to brand them as “too abstract” or “metaphysical.” Don Ihde, for instance, criticized Heidegger’s approach for ignoring the particularities of “concrete technologies.” To improve on classical approaches, many have called for an “empirical turn” within the philosophy of technology, advocating stronger interdisciplinary work with scientists. I attempt to show that this distinction between the “abstract” (Technology) and the “concrete” (technologies) is deeply flawed, by demonstrating the “concreteness” of Heidegger’s critique of technology and its relevance for understanding current developments in AI research. To establish this relevance, I will discuss examples from research on “Heidegger’s philosophy of AI” in the areas of ethics, language, ontology, and epistemology. From these discussions there will emerge the notion of “ontological responsibility”: the responsibility for one’s underlying assumptions about reality.
Bio: Karl Kraatz is Associate Professor in the School of Philosophy at Zhejiang University. His research spans phenomenology and artificial intelligence, with a special focus on transcendental philosophy, particularly Kant and Heidegger, as well as Heideggerian approaches to AI, encompassing ethics, large language models, and the limitations of generative AI. He is the author of two monographs and numerous articles in leading journals. He is co-founder of the international journal Eksistenz: Journal for Intercultural Philosophy and Hermeneutics and founder of Synopsis, a German publishing house specializing in translations of important philosophical works.
Abstract: My colleagues and I consistently uphold that it is imperative for philosophers of science to focus on philosophical issues raised by emerging sciences and technologies. Synthetic biology may be one of the scientific disciplines most replete with philosophical issues that force us to reflect on them. These include the ontological issue, e.g. is synthetic life a life comparable with natural life, or a machine created by means of engineering; the epistemological issue, e.g. do we come to know life by creating it (simply put, knowing by doing), and what is the relation of this epistemology to 知行合一, the unity of knowing and doing, an epistemological tradition from Wang Yangming in Chinese philosophy; the methodological issue, e.g. what are the functions and limits of the machine metaphor in the study of life; and the ethical issues, e.g. does synthetic biology play God, violate the natural order or the biocentric principle, and so on. Due to the time limit, I will focus on the ontological issue of “Is synthetic life a life or a machine?” and the ethical debate over “Does synthetic biology play God?”. I will argue that the creation of synthetic life does blur the traditional boundary between life and machine: on the spectrum between life and machine, synthetic life occupies a position shifted somewhat toward the machine end; however, it retains the same basic characteristics that a natural life possesses, and over time a synthetic life in an ecological environment will transform into a natural life. I will then analyze the major argument against synthetic biology, namely the argument that it plays God, specifically that it usurps God's role in creation. I will analyze the meaning of the phrase “playing God” and its ambiguity, the possible ways of creation, and finally the invalidity of the argument.
Bio: Ruipeng Lei has been Professor of Bioethics and Director of the Center for Ethics and Governance of Science and Technology, and Vice Dean of the Advanced Institute of the Humanities and Social Sciences, University of Electronic Science and Technology of China (UESTC), since 2023. She established the Research Center for Bioethics at Central China University of Science and Technology in Wuhan in 2002 as Co-founder and Executive Director, serving until 2022. Her research focuses on ethical and governance issues relevant to emerging technologies (xenotransplantation, human genome editing, neurotechnologies, AI and robotics, etc.). She has published over 100 research, review, and commentary articles and 10 books, written or edited, in English and Chinese. She published the comment article “Reboot ethics governance in China” as first author in Nature in 2019, which had a positive impact on a series of regulations and legislation in China. She has completed one National Key R&D project (2019-2024) and is currently conducting, as PI, two National Key Projects on ethical and regulatory issues raised by synthetic biology, biobanks, and biomedical and health data. She has been actively participating in national and international organizations including The Hastings Center, the WHO COVID-19 Ethics and Governance Working Group, the UNESCO Consultation Group of the Draft Recommendation on the Ethics of Neurotechnologies, the Asian Bioethics Association, the Chinese Society for Bioethics, the Chinese Society for Ethics of Science and Technology, the Chinese Society for Synthetic Biology, etc. Furthermore, she has been involved in ethics-informed policy recommendation, formulation, and enforcement at national and international levels, e.g. 
Asian Task Force on Prohibition of Organ Trafficking, The Declaration of Istanbul Coordination Group, WHO Ethical Guidelines and Policy briefings regarding COVID-19 pandemic, China Regulation on Ethical Review of Biomedical Research involving Human Subjects, China Biosafety and Biosecurity Law, China Regulation on Human Genetic Resources.
Abstract: This presentation explores the development of explainable and ethical artificial intelligence (AI). It begins by examining the nature of AI and its primary implementation paradigms—symbolic reasoning and model-based learning—highlighting the distinct ethical and safety challenges that arise from each. These challenges differ between non-embodied AI systems (such as issues of reliability, bias, and transparency in large language models) and embodied AI systems (such as accountability and moral dilemmas in autonomous vehicles). To address these issues, the presentation proposes four key technical pillars for building trustworthy AI: establishing a world model for factual grounding, ensuring logical correctness through formal methods, achieving verifiability and explainability via integrated approaches, and embedding ethical and legal norms with causal traceability. Finally, the “3C” principles—Correctness, Clarity, and Compliance—are introduced as a guiding framework for translating theory into practice in the creation of responsible and trustworthy AI systems.
Bio: Beishui Liao is a Full Professor of Logic and Computer Science at Zhejiang University, where he has held the prestigious Qiushi Distinguished Professorship and Changjiang Distinguished Professorship since 2019. He received his Ph.D. in Computer Science from Zhejiang University in 2006 and has since established himself as a leading scholar in logic, formal argumentation, and their applications to multi-agent systems, explainable AI, and ethical AI. Currently, he leads a major project funded by the National Social Science Fund of China on Logics for New Generation Artificial Intelligence. He serves as Vice Dean of the Faculty of Humanities at Zhejiang University, Director of the Institute of Logic and Cognition, and Co-Director of the Zhejiang University–University of Luxembourg Joint Lab on Advanced Intelligent Systems and Reasoning (ZLAIRE). He has been a Guest Professor at the University of Luxembourg since 2014 and has held visiting positions at the University of Texas at Austin, the University of Brescia, the University of Oxford, and the University of Cambridge. An active member of the international academic community, he serves as an Associate Editor of Annals of Mathematics and Artificial Intelligence, AI Logic Corner Editor of the Journal of Logic and Computation, and an editorial board member of Argument & Computation and Journal of Applied Logics. He is also a steering committee member of DEON and COMMA. In 2015, he co-founded the International Conference on Logic and Argumentation (CLAR), which has since become a leading international conference in the field of logic and argumentation. He has published three monographs and numerous papers in top journals and conferences, including Social Sciences in China, Artificial Intelligence (AIJ), Journal of Artificial Intelligence Research (JAIR), Journal of Logic and Computation (JLC), IJCAI, and KR, among others.
Abstract: Animals and humans mainly perceive and change their environment in the form of natural intelligence. Before the advent of AI, even with plenty of machines, the intelligence that used and controlled them was still human intelligence. From now on, this traditional mode of life may change drastically. With the massive introduction of AI, what challenges will humans encounter in their future interaction with their environment? Recognizing the differences between the principles and mechanisms of natural and artificial intelligence will help us recognize such challenges and their severity. This paper uses art creation and application as an example to analyze and explain the fundamental differences between the Bayesian brain (natural intelligence) and the Boltzmann machine (artificial intelligence). The difference between natural kinds and statistical kinds foreshadows a possibly large difference between creative processes and their products.
Bio: Chuang Liu is a Distinguished Professor in the School of Philosophy at Fudan University, Shanghai; Chair of the Department of Philosophy of Science and Logic; Director of the Center for the Philosophy and Science of Intelligence (Fudan PSI); and Academic Director of the Institute of Philosophy, Chinese Academy of Sciences (CASIP). He is also professor emeritus in the Department of Philosophy at the University of Florida. His research interests span philosophy of physics, scientific methodology, and the philosophy of intelligence, and he has published widely in top journals in philosophy of science. His recent work engages evolutionary game-theoretic approaches to morality, Markov blankets and active inference in perception and cognition, and the role of teleology in scientific explanation, connecting norms, modeling, and cognition across species.
Abstract: Drawing on insights from Lewis, Halpern, and Pearl, we argue that causality is instrumental in understanding responsibility. Therefore, reasoning about causality seems essential, and this talk presents a new logical framework for doing justice to some of its main aspects. These include the dynamics of updating what we know about the world when making causal judgments, as well as changing current dependencies when performing interventions. The result is a complete and decidable system that can be viewed as a more generic dynamic epistemic extension of earlier work on causal logics which stayed close to particular causal graphs. We conclude with some new topics in causal reasoning that can now be studied rigorously in our setting.
Bio: Fenrong Liu is a Changjiang Distinguished Professor at Tsinghua University, the Amsterdam-China Logic Visiting Chair at the University of Amsterdam, and Co-Director of the Tsinghua-UvA Joint Research Centre for Logic. She is a Member of the Institut International de Philosophie (IIP) and a Corresponding Member of the Académie Internationale de Philosophie des Sciences (AIPS). Her research interests include preference logic, social epistemic logic, graph game logic, logical reasoning for large language models (LLMs), and history of logic in China. She held a visiting position at Harvard University and was a Berggruen Fellow at Stanford University. She is a steering committee member of TARK, PRICAI, LAMAS, and AWPL, and serves on the editorial boards of Synthese, Studia Logica, Global Philosophy, Theoria, and Topoi.
Abstract: Drawing on perspectives from traditional Chinese thought, this paper explores two distinct dimensions of harmony: non-conflictual coexistence, where differing parties maintain balance without erasing their differences, and organic unity, where dynamic interactions, including tension or competition, contribute to a coherent, self-regulating whole, as seen in natural ecosystems. These understandings of harmony provide a lens for reimagining the human–technology relationship in the age of artificial intelligence (AI). The paper proposes two possible models for a future AI-integrated society, each rooted in one of these conceptions of harmony. The first model envisions AI as a controllable tool, integrated into a human-governed system where stability is maintained through oversight and regulation, even amid underlying tensions. The second model imagines a more de-anthropocentric society in which advanced AI agents are recognized as legitimate participants, coexisting with humans through mutual respect and shared social space. Rather than focusing on AI ethics or risk mitigation alone, this analysis prioritizes what kinds of social configurations are possible and desirable for future societies.
Bio: Dong Luo earned his PhD from the Chinese Academy of Sciences with a dissertation on scientific objectivity and invariance. He is currently an Associate Professor at the Institute for Advanced Studies in Philosophy, Science, and Technology at the South China University of Technology. He has research interests in the history and philosophy of mathematics and physics, philosophy of science, and comparative philosophy. In recent years, he has published dozens of journal articles on these topics. He is also completing a book titled Scientific Objectivity: The Will to Understanding in Science, forthcoming with Routledge. Dong Luo also serves as Secretary General of the Chinese Association for the Philosophy of Physics and is a Council Member of the International Alliance for Social Epistemology. Email: luodong@scut.edu.cn. Website: www.luodong.org.
Abstract: Scientific objectivity is often taken as a hallmark of trustworthy inquiry, yet the roles of human agency and responsibility in achieving it remain deeply ambivalent. This paper develops a threefold typology of objectivity—impersonal, laborious, and effortless—integrated within a twofold analytical framework distinguishing between the exercise and the suspension of responsibility in scientific practice. Impersonal objectivity aspires to efface the investigator’s presence, treating facts as if they speak for themselves. Laborious objectivity, by contrast, locates credibility in visible acts of responsibility—discipline, precision, and the painstaking labour that secures impartial results. The third, and most paradoxical, form is effortless objectivity, where responsibility seems to consist precisely in not intervening: in yielding to discovery, remaining open, and allowing phenomena to manifest without strain. By juxtaposing these modes, the paper argues that objectivity in science cannot be reduced to either active control or passive receptivity. Instead, it emerges from a dynamic negotiation between intervention and restraint, between accountability and self-effacement. This interplay reveals that responsibility in science does not always lie in asserting mastery, but can equally reside in the disciplined capacity to let the world show itself.
Bio: James W. McAllister is professor of History and Philosophy of Science, University of Leiden. He gained his PhD at the University of Cambridge in 1989. He is the author of Beauty and Revolution in Science (Cornell University Press, 1996), a study of aesthetic ideals and conceptual change in the sciences. His current research addresses the epistemic and aesthetic dimensions of scientific objectivity, the logic of patterns in empirical data, and the differences between the natural and the human sciences. He also investigates the connections between historical and philosophical approaches to science. He is membre titulaire of the Académie Internationale de Philosophie des Sciences, Brussels.
Abstract: The responsibility of science and technology means that scientific and technological professionals have to be in a state of being responsible, answerable, or accountable for something within their power, and be prepared to stand up on behalf of their stakeholders and act and speak for them. At the time of the illegal human embryo genome editing conducted by He Jiankui, a number of participants in the debate on human germline genome editing, including He himself, did not realize that future generations should count as important stakeholders. Nor did Julian Savulescu and his colleagues pay attention to responsibility for future generations when they argued for human heritable polygenic editing (HHPE) as the next frontier of genomic medicine. In this presentation, I will first argue that our generation has an obligation to future generations, which means that future generations should be among the stakeholders in human heritable genome editing (HHGE) and HHPE. I will then argue that in practice HHGE and HHPE are both scientifically untestable. Life expectancy in China is now 79 years, so we would have to observe Lulu, Nana, and Amy, whose genomes were edited by He Jiankui, for 79 years to know whether they would be infected with HIV; and even if they were never infected in their whole lives, that fact could not prove that this was due to the HHGE that He conducted. The same holds for HHPE: the fact that a person who undergoes HHPE does not suffer from cancer over a 79-year life in China cannot prove the success of HHPE. Testing done after He’s embryo editing shows that in Lulu’s and Nana’s genomes there are a number of off-target effects, editing errors, and mosaicisms that would have negative health impacts on future generations, as well as many other unknowns. 
How could the medical scientists who conduct HHGE/HHPE discharge their responsibility (including answerability and accountability) to future generations a hundred years after their own deaths? The conclusion, therefore, is that both HHGE and HHPE are scientifically and ethically unjustifiable.
Bio: Professor Emeritus, Institute of Philosophy and Honorary Director, Centre for Applied Ethics, Chinese Academy of Social Sciences; Professor, Center for Ethics and Moral Studies, and Director, Institute of Bioethics, Renmin University of China; Professor and Honorary Director, Center for Bioethics, School of Medicine, Xiamen University; Member of the Advisory Committee, Advanced Institute of the Humanities and Social Sciences, University of Electronic Science and Technology of China; Honorary President, Chinese Society for Bioethics; Member, Assessment Panel, Public Policy Research Funding Scheme, Hong Kong; Lifetime Member, Kennedy Institute of Ethics, Georgetown University; Fellow, The Hastings Center; Member, International Institute of Philosophy. Major past posts: Vice-President, Ethics Committee, Ministry of Health, China; Vice-President, Chinese Society for Philosophy of Nature, Science and Philosophy; President, Chinese Society for Philosophy of Science; President, Chinese Society for Bioethics; Chair, Chinese Committee, China-UK Summer School of Philosophy; Member, Committee on Ethical, Legal and Social Implications, International HapMap Consortium; Member, Gender Advisory Panel, UNDP/UNFPA/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction; Member, UNESCO International Bioethics Committee; Member, UNAIDS Reference Group on AIDS and Human Rights; Assessor, Executive Committee, Division of Logic, Methodology and Philosophy of Science/International Association of History and Philosophy of Science; Member, Board of Directors, International Association of Bioethics; President, Asian Bioethics Association. 
Awards and prizes: 2002 World Technology Network Award for Ethics; 2009 UNESCO Avicenna Prize for Ethics in Science; 2011 Henry Knowles Beecher Bioethics Award; 2017 Chinese Academy of Social Sciences Excellent Policy Recommendation; 2020 Chinese Society for Philosophy of Nature, Science and Philosophy Lifetime Achievement Award; 2022 Ministry of Education Policy Study Award, co-winner with Professor Lei Ruipeng. Publications: 25 books, including Realism vs Anti-Realism in Philosophy of Science (co-editor, in English), Bioethics, Reproductive Health and Ethics (3 volumes), Patient’s Rights (co-author), AIDS, Sex and Rights, Biomedical Research Ethics (co-author), Bioethics: Asian Perspectives (in English), Introduction to Bioethics (co-author), Public Health Ethics (co-author), An Introduction to Political Philosophy (co-editor), and Contemporary Studies in Bioethics (co-editor, 2 volumes), and nearly 450 articles published in China and abroad, of which 92 are in English.
Abstract: In the twenty-first century, scientists work in a research environment that is being transformed by globalization, interdisciplinary research projects, team science, and information technologies. As the scientific enterprise evolves, all stakeholders in the scientific community have an ethical obligation to place a high priority on instilling and championing the highest standards of scientific integrity in these new settings and applications. Besides well-designed study protocols, rigorous and precise data collection, thorough statistical analysis, and a well-defined peer-review process, the responsibility of the scientists has a value of its own, especially in medicine, psychology, and the humanities. In the presentation we will discuss different aspects of responsibility in the scientific process and their value for scientific results, for participants in study protocols, and for researchers.
Bio: Assoc. Prof. Dr. J.H. Witt is one of the most experienced robotic surgeons in urology worldwide. With personal experience of almost 10,000 procedures, oversight of more than 25,000 robotic cases over the last 20 years, and a large number of publications in this field, Dr. Witt must be called, without question, one of the founders of robotic surgery in urology. Since April 2024 Dr. Witt has been Head of the Department of Urology and Urologic Surgery (“Urokompetenz – Das Zentrum für Urologie”) at Clinic Bel Etage, a large private hospital in downtown Düsseldorf. Additionally, he is Senior Consultant at the Department of Urology, University Mannheim, a large tertiary referral center for urology. Dr. Witt is the former Chair of the Department of Urology, Pediatric Urology and Urologic Oncology at St. Antonius-Hospital Gronau (April 2002 to December 2022), where he established one of the biggest urological departments in Germany. There he also founded and led the Prostate Center Northwest, a certified Comprehensive Cancer Center for prostate cancer and a tertiary referral center for surgery in urologic oncology. After leaving Gronau, he reorganized the Paracelsus-Klinik Golzheim, a large and well-renowned hospital for urology in Düsseldorf, as its Director and Chair in 2023. Dr. Witt is also Co-Founder of the German Society for Robotic Urology (Deutsche Gesellschaft für roboterassistierte Urologie) and past president of several national and international scientific meetings.
Abstract: The Scientific Revolution of the 17th century was an event of paramount importance in world history. Kant responded to it with his transcendental philosophy, which distinguishes between appearances and things in themselves and maintains that human beings can know only the former, not the latter. Contemporary philosophies such as Thomas Nagel’s realism, Hubert Dreyfus’ robust realism, and Quentin Meillassoux’s speculative materialism have gone far beyond transcendental philosophy, and this can be clearly seen by examining four key issues: the responses to the Scientific Revolution, the reactivation of the theory of primary and secondary qualities, the differentiation of two concepts of things in themselves, and the inquiry into the way in which we access things in themselves. To deepen the inquiry in this direction, it is necessary to foster a new problem consciousness and to attempt to elaborate a switching theory of realism which distinguishes meaning-impoverished things in themselves from meaning-rich things in themselves, and advocates placing the focus on the latter. Based on the switching theory of realism and the concept of perspectival totality it implies, we can better understand man’s place in the universe and the mission of philosophy in the post-metaphysical era.
Bio: Yu Zhenhua, Professor of Philosophy, Department of Philosophy, East China Normal University (ECNU). Ph.D., ECNU, 1998; Ph.D., University of Bergen, Norway, 2006. Prof. Yu’s main fields of interest are epistemology, metaphysics, and comparative philosophy. His Chinese publications include How is Metaphysical Wisdom Possible? (2000, 2015), The Tacit Dimension of Human Knowledge (2012, 2022), A New Inquiry into the Philosophy of Knowledge and Action (2025, forthcoming), and dozens of articles in Chinese academic journals. His English publications appear in journals such as International Philosophical Quarterly, Philosophy Today, Dao: A Journal of Comparative Philosophy. Prof. Yu is a Yangtze River Scholar, Ministry of Education in China; Fulbright research scholar, New York University (2016-2017); visiting scholar, Harvard-Yenching Institute (2006-2007); secretary of International Society for Metaphysics; vice president of Chinese Society of Epistemology (2014-2024); member of board of directors, Polanyi Society, USA (2006-2015); co-director of Knowledge and Action Lab, Joriss, between ECNU and Ecole Normale Superieure/Lyon, France.
Abstract: In economic theory, something is a public good if it is non-rival (its consumption by one agent does not diminish the amount of it available to others) and non-excludable (once it is provided, it is not viable to charge a price to the users). Knowledge and information have often been understood as public goods in this sense, but different epistemic items may have in different degrees the mentioned properties. Furthermore, though the theory of public goods has been used in order to answer the normative and political question of how scientific research has to be socially organised, little work has been done regarding the possible relevance of the theory to the understanding of the epistemological nature of knowledge, information, belief, and the like. In this paper, a Brandomian inferentialist model of rationality and knowledge is employed in order to provide a framework within which to offer an answer to these epistemological questions.
Bio: Jesús Zamora-Bonilla (Madrid, 1963) is full professor of Philosophy of Science at UNED (Spain's National Open University). He holds PhDs both in Philosophy and in Economics, and has worked mainly on the problems of scientific realism, scientific rationality, philosophy of the social sciences, and the application of game-theoretic models to the process of scientific research. He has published numerous papers in some of the most important journals of the area, like Philosophy of Science; Synthese; Erkenntnis; Journal of Economic Methodology; Economics and Philosophy; Philosophy of the Social Sciences; etc. In Spanish, he has also published several books on philosophical topics for the general public and contributed often to other popular channels (like TV, radio, newspapers, blogs, and social networks).
Abstract: In the twenty-first century, scientists work in a research environment that is being transformed by globalization, interdisciplinary research projects, team science, and information technologies. As the scientific enterprise evolves, all stakeholders in the scientific community have an ethical obligation to place a high priority on instilling and championing the highest standards of scientific integrity in these new settings and applications. Besides well-designed study protocols, rigorous and precise data collection, thorough statistical analysis, and a well-defined peer-review process, the responsibility of the scientists has a value of its own, especially in medicine, psychology, and the humanities. In the presentation we will discuss different aspects of responsibility in the scientific process and their value for scientific results, for participants in study protocols, and for researchers.
Bio: Jure Zovko has been a permanent member of the Institut International de Philosophie (Paris) since 2008 and a membre titulaire of the Académie Internationale de Philosophie des Sciences (Brussels) since 2010. Since 2010 he has also been vice-president of the Internationale Hegel-Gesellschaft. He was president of the Comité de cooptation of the Institut International de Philosophie (2015-2018), as well as a Member of the Presidium and assessor of the Academic Council of the Académie Internationale de Philosophie des Sciences (2015-2018). Since 2018 he has been vice-president of the Institut International de Philosophie (Paris). He is a member of the Steering Committee of FISP (Fédération Internationale des Sociétés de Philosophie) and a member of the philosophical associations Deutsche Gesellschaft für Religionsphilosophie, the International Plato Society, the Internationale Hegel-Gesellschaft, and the Schlegel Gesellschaft. Since 2021 he has been president of the Académie Internationale de Philosophie des Sciences (Brussels) and president of the Institut International de Philosophie (Paris - Nancy).