The Sixth International Workshop on Logics and Argumentation for New-Generation Artificial Intelligence will be held on 17 September 2026, co-located with the 11th International Conference on Computational Models of Argument (COMMA 2026) in Barcelona, Spain.
Call for Papers
The Sixth International Workshop on Logics and Argumentation for New-Generation Artificial Intelligence (LNGAI 2026) focuses on the interaction between symbolic and subsymbolic forms of argumentative reasoning in agentic AI systems. On the one hand, it studies how subsymbolic models—such as neural networks and large language models—can generate, approximate, or support argumentation. On the other hand, it examines how formal and computational argumentation can be used to analyse, guide, constrain, and verify reasoning produced by learning-based models and agents. Particular attention is given to hybrid and neuro-symbolic approaches, in which symbolic and subsymbolic components are combined to support reliable, interpretable, and verifiable reasoning in agentic systems. The workshop also welcomes contributions exploring how such integrations can be applied in domains such as machine ethics, explainable AI (XAI), and AI & Law. LNGAI is a well-established workshop series: five previous editions have been successfully organised, with proceedings published as edited volumes and selected contributions appearing in special issues of the AI Logic corner of the Journal of Logic and Computation.
Motivation and Connection to the Main COMMA Conference
LNGAI 2026 addresses a timely challenge for the COMMA community: as agentic and LLM-based systems increasingly produce natural-language rationales, we need principled ways to represent, evaluate, and verify these rationales as arguments under uncertainty, conflict, and normative constraints. The workshop brings together work on computational argumentation, nonmonotonic reasoning, and neuro-symbolic methods to study how learning-based models can support argument construction, and how formal argumentation can guide, constrain, and explain their behaviour. This directly supports COMMA themes around argumentation for agents, argumentation-based explanation, and argumentation for normative reasoning, with concrete applications in XAI, machine ethics, and AI & Law.
Information on Past Iterations
LNGAI is a well-established series of international workshops on the logical foundations of AI, providing a forum for researchers worldwide to exchange ideas on non-monotonic logics, argumentation, causality, knowledge graphs, reasoning about norms and values, and beyond. Originating from the national key project “Research on Logics for New-Generation Artificial Intelligence” (2021–2025), the series has grown into an annual international forum, supported by ZLAIRE (the Zhejiang University–University of Luxembourg Joint Lab on Advanced Intelligent Systems and REasoning).
Topics of interest include, but are not limited to:
- LLM-assisted generation of logical representations, arguments, and reasoning structures
- Logical and argumentation-based analysis of reasoning generated by language models
- Computational argumentation for guiding, constraining, and explaining language-model-based systems
- Hybrid and neuro-symbolic architectures combining language models with symbolic reasoning
- Neuro-symbolic knowledge representation and reasoning methods for language models
- Knowledge injection into, and extraction from, language models
- Commonsense reasoning integrating language models with knowledge representation and reasoning
- Argumentation, deliberation, negotiation, and dialogical reasoning with language-model-based agents
- Reasoning about agency, autonomy, and learning in systems built with language models
- Planning, action, and decision-making in agentic systems and workflows
- Logical and computational models of generative agents
- Cooperation, coordination, and communication in multi-agent systems involving generative agents
- Logic-based verification, safety, and controllability of language-model-based and agentic systems
- Logics for reasoning about knowledge, beliefs, goals, intentions, actions, and plans
- Non-monotonic, defeasible, and uncertain reasoning in language-model-based and agent-based systems
- Verification and formal analysis of agents and multi-agent systems
- Argumentation-based explanation and interpretable reasoning
- Normative reasoning and applications in machine ethics and AI & Law