Training-Free Mixture of Agents for Multi-Document Summarization Leveraging LLMs and Knowledge Graphs
Published in Neural Computing and Applications, 2025
Abstract
Multi-Document Summarization (MDS) plays a critical role in distilling essential information from collections of textual data. Existing approaches often struggle to capture complex inter-document relationships, rely heavily on large amounts of labeled data for supervised training, or exhibit limited generalization across domains and languages. To address these limitations, we introduce the Mixture of Agents (MoA) framework, a novel, training-free, and modular paradigm for MDS. MoA orchestrates three specialized agents operating in parallel across refinement layers: (1) an Extractor Agent that selects key sentences, (2) a KGSum Agent that constructs and summarizes knowledge graphs, and (3) an Abstractor Agent that generates coherent abstractive summaries. Each agent leverages pre-trained Large Language Models (LLMs), enabling MoA to operate without task-specific supervised training. This work introduces a multi-agent architecture for MDS and provides comprehensive evaluations on both English and Vietnamese datasets. Experiments on four benchmarks—Multi-News and Multi-XScience (English), and VN-MDS and ViMs (Vietnamese)—demonstrate that MoA achieves state-of-the-art ROUGE scores on Multi-News and yields competitive or superior results on the remaining datasets. Our findings highlight MoA as a robust, generalizable, and data-efficient approach to MDS.
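The sketch below illustrates the layered, three-agent orchestration described in the abstract. It is a minimal, hedged reconstruction, not the authors' implementation: the `call_llm` helper, the prompt wording, the number of refinement layers, and the final fusion step are all assumptions made for illustration.

```python
# Illustrative sketch of a training-free Mixture-of-Agents (MoA) pipeline for MDS.
# All function names, prompts, and the fusion step are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to any pre-trained LLM (e.g. via an HTTP API)."""
    raise NotImplementedError


def extractor_agent(documents: list[str]) -> str:
    """Select key sentences from the document collection (extractive step)."""
    prompt = "Extract the most important sentences from these documents:\n\n" + "\n\n".join(documents)
    return call_llm(prompt)


def kgsum_agent(documents: list[str]) -> str:
    """Build a knowledge graph of entities and relations, then summarize it."""
    triples = call_llm(
        "List (subject, relation, object) triples found in:\n\n" + "\n\n".join(documents)
    )
    return call_llm("Summarize the facts encoded by these triples:\n\n" + triples)


def abstractor_agent(documents: list[str]) -> str:
    """Generate a coherent abstractive summary directly from the documents."""
    return call_llm("Write a concise abstractive summary of:\n\n" + "\n\n".join(documents))


def moa_summarize(documents: list[str], num_layers: int = 2) -> str:
    """Run the three agents per refinement layer, feeding each layer's outputs forward."""
    context = documents
    for _ in range(num_layers):
        # The three agents operate independently on the current context.
        context = [
            extractor_agent(context),
            kgsum_agent(context),
            abstractor_agent(context),
        ]
    # Fuse the final layer's candidate summaries into a single summary.
    return call_llm("Fuse these candidate summaries into one final summary:\n\n" + "\n\n".join(context))
```

Because every agent relies only on prompting a pre-trained LLM, no task-specific supervised training is required; swapping datasets or languages only changes the input documents, not the pipeline.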
Keywords
Multi-Document Summarization, Large Language Models, Mixture of Agents, Knowledge Graphs
Recommended citation: Vuong Tuan-Cuong, Trang Mai Xuan, Cuong Nguyen Tien, Luong Thien Van. (2025). "Training-Free Mixture of Agents for Multi-Document Summarization Leveraging LLMs and Knowledge Graphs." Neural Computing and Applications, pp. 798-804.
Download Paper