Keynote Speaker

Fangzhen Lin (Hong Kong University of Science and Technology)

Title: Using Language Models for Knowledge Acquisition in Natural Language Reasoning Problems
Abstract: For a natural language problem that requires some non-trivial reasoning to solve, there are at least two ways to use a large language model (LLM). One is to ask the LLM to solve the problem directly. The other is to use it to extract the facts from the problem text and then use a theorem prover to solve it. In this note, we compare the two methods using ChatGPT and GPT-4 on a series of logic word puzzles, and conclude that the latter is the right approach.
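To make the second approach concrete, here is a minimal sketch of the "LLM extracts facts, theorem prover solves" pipeline. The knights-and-knaves puzzle, its encoding, and the use of the Z3 solver are illustrative assumptions on our part, not the speaker's actual system; the LLM-extraction step is replaced by hand-written constraints.

```python
# Sketch: the facts an LLM would extract from a puzzle's text are encoded
# as constraints and handed to a theorem prover (here, Z3).
from z3 import Bool, Solver, Not, Xor, is_true, sat

# Puzzle (knights always tell the truth, knaves always lie):
#   A says: "B is a knave."
#   B says: "A and I are of different kinds."
# In the pipeline, an LLM would translate these sentences into the
# constraints below; here they are written by hand.
a_knight = Bool("a_knight")   # True iff A is a knight
b_knight = Bool("b_knight")   # True iff B is a knight

s = Solver()
# A's statement ("B is a knave") is true exactly when A is a knight.
s.add(a_knight == Not(b_knight))
# B's statement ("A and I differ") is true exactly when B is a knight.
s.add(b_knight == Xor(a_knight, b_knight))

if s.check() == sat:
    m = s.model()
    print("A is a", "knight" if is_true(m[a_knight]) else "knave")
    print("B is a", "knight" if is_true(m[b_knight]) else "knave")
```

Running the sketch prints that A is a knave and B is a knight, the puzzle's unique consistent assignment; the point is that once the facts are formalized, the reasoning itself is delegated to the prover rather than to the LLM.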

Bio: Prof. Lin is currently a Professor in the Department of Computer Science and Engineering of the Hong Kong University of Science and Technology. Prior to that, he was with the Department of Computer Science of the University of Toronto and with Stanford University. He received his BS degree from Fuzhou University, his MS degree from Beijing University, and his PhD from Stanford University.
His research interests are in artificial intelligence, in particular the principles of knowledge representation, reasoning, and learning and their applications in programming languages, robotics, multiagent systems, game theory and social choice theory, and language understanding. He is an AAAI Fellow and received a Croucher Senior Research Fellowship.