
Robust Knowledge Extraction from Large Language Models using Social Choice Theory

Nico Potyka, Yuqicheng Zhu, Yunjie He, Evgeny Kharlamov, Steffen Staab

Proceedings of the 23rd International Conference on Autonomous Agents and Multi-Agent Systems, 2024.


Abstract

Large language models (LLMs) have the potential to support a wide range of applications like conversational agents, creative writing, text improvement, and general query answering. However, they are ill-suited for query answering in high-stakes domains like medicine because they generate answers non-deterministically and their answers are typically not robust: even the same query can result in different answers when prompted multiple times. In order to improve the robustness of LLM queries, we propose posing ranking queries repeatedly and aggregating the query results using methods from social choice theory. We study ranking queries in diagnostic settings like medical and fault diagnosis and discuss how the Partial Borda Choice function from the literature can be applied to merge multiple query results. We discuss some additional interesting properties in our setting and evaluate the robustness of our approach empirically.
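The aggregation step can be illustrated with a simplified Borda-style procedure. The sketch below is only a hedged illustration assuming complete rankings; the paper itself uses the Partial Borda Choice function, which also handles partial rankings, and the candidate diagnoses and rankings shown are hypothetical examples, not data from the paper.

    from collections import defaultdict

    def borda_aggregate(rankings):
        """Aggregate several rankings of the same candidates using Borda scores.

        Each ranking lists candidates from most to least preferred; a candidate
        at position i among n candidates receives n - 1 - i points. This is a
        simplified sketch that assumes complete rankings, unlike the Partial
        Borda Choice function used in the paper.
        """
        scores = defaultdict(int)
        for ranking in rankings:
            n = len(ranking)
            for position, candidate in enumerate(ranking):
                scores[candidate] += n - 1 - position
        # Return candidates ordered by total Borda score (highest first).
        return sorted(scores, key=scores.get, reverse=True)

    # Hypothetical example: three repeated LLM ranking queries over candidate diagnoses.
    llm_rankings = [
        ["flu", "cold", "allergy"],
        ["cold", "flu", "allergy"],
        ["flu", "allergy", "cold"],
    ]
    print(borda_aggregate(llm_rankings))  # -> ['flu', 'cold', 'allergy']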

Links


BibTeX

@inproceedings{potyka2024robust,
  author    = {Potyka, Nico and Zhu, Yuqicheng and He, Yunjie and Kharlamov, Evgeny and Staab, Steffen},
  booktitle = {Proceedings of the 23rd International Conference on Autonomous Agents and Multi-Agent Systems},
  title     = {Robust Knowledge Extraction from Large Language Models using Social Choice Theory},
  year      = {2024}
}