Knowledge Base Question Answering (KBQA) challenges models to bridge the gap between natural language and strict knowledge graph schemas by generating executable logical forms. While Large Language Models (LLMs) have advanced this field, current approaches often fall into one of two failure modes: they either hallucinate queries without verifying that the referenced schema elements exist, or they exhibit rigid, template-based reasoning that imitates synthesized traces without genuine comprehension.
To address these limitations, we present KBQA-R1, a framework that shifts the paradigm from text imitation to interaction optimization via Reinforcement Learning. Treating KBQA as a multi-turn decision process, our model learns to navigate the knowledge base through a defined set of actions, leveraging Group Relative Policy Optimization (GRPO) to refine its strategies based on concrete execution feedback. Furthermore, we introduce Referenced Rejection Sampling (RRS), a data synthesis method that resolves cold-start challenges by strictly aligning reasoning traces with ground-truth action sequences.
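To make the GRPO step concrete, the following is a minimal sketch of group-relative advantage estimation. It assumes `rewards` holds execution-feedback scores for a group of rollouts sampled for the same question; the function and variable names are illustrative, not the authors' implementation.

```python
# Minimal sketch of group-relative advantage estimation (GRPO-style).
# Assumption: each reward scores one rollout (a full action sequence)
# by its execution feedback; names here are illustrative only.
from typing import List


def group_relative_advantages(rewards: List[float], eps: float = 1e-6) -> List[float]:
    """Normalize each rollout's reward against its group's mean and std."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]


# Example: four rollouts for one question, rewarded by whether the
# generated logical form executes and matches the gold answer set.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```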
The KBQA-R1 framework consists of two main stages: (1) Cold-Start Data Synthesis via Referenced Rejection Sampling (RRS), and (2) Reinforcement Learning via Group Relative Policy Optimization (GRPO).
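Below is a minimal sketch of the RRS idea from stage (1), under the assumption that an off-the-shelf sampler produces a reasoning trace together with its action sequence, and that a trace is kept only when its actions exactly match the reference. The helper `sample_trace` and the retry budget are hypothetical details for illustration.

```python
# Minimal sketch of Referenced Rejection Sampling (RRS) for cold-start data.
# Assumption: `sample_trace` calls an LLM to produce a reasoning trace plus
# an action sequence; only the strict-alignment check reflects the idea in
# the text, the helper names and retry budget are hypothetical.
from typing import Callable, List, Optional, Tuple

Trace = Tuple[str, List[str]]  # (reasoning text, action sequence)


def referenced_rejection_sampling(
    question: str,
    gold_actions: List[str],
    sample_trace: Callable[[str], Trace],
    max_tries: int = 8,
) -> Optional[Trace]:
    """Keep only traces whose action sequence exactly matches the reference."""
    for _ in range(max_tries):
        reasoning, actions = sample_trace(question)
        if actions == gold_actions:   # strict alignment with ground-truth actions
            return reasoning, actions
    return None                        # discard the question if no trace aligns
```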
Specifically, we model the reasoning process as a sequence of actions (e.g., relation retrieval, set operations). The model receives feedback from the environment (Virtuoso engine) to adjust its policy, ensuring generated queries are not only syntactically correct but also executable against the specific KB schema.
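As a rough illustration of the execution-feedback step, the sketch below runs a candidate SPARQL query against a Virtuoso endpoint and scores it by overlap with the gold answers. The endpoint URL and the Jaccard-style reward shaping are assumptions for illustration, not the paper's exact setup; it relies on the `SPARQLWrapper` package.

```python
# Minimal sketch of execution feedback against a Virtuoso SPARQL endpoint.
# Assumptions: local endpoint URL and Jaccard reward are illustrative.
from SPARQLWrapper import SPARQLWrapper, JSON


def execute_and_score(sparql_query: str, gold_answers: set,
                      endpoint: str = "http://localhost:8890/sparql") -> float:
    """Run a candidate query against Virtuoso and reward answer overlap."""
    client = SPARQLWrapper(endpoint)
    client.setQuery(sparql_query)
    client.setReturnFormat(JSON)
    try:
        bindings = client.query().convert()["results"]["bindings"]
    except Exception:
        return 0.0                      # non-executable query gets no reward
    predicted = {b[v]["value"] for b in bindings for v in b}
    if not predicted:
        return 0.0
    return len(predicted & gold_answers) / len(predicted | gold_answers)  # Jaccard
```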
We evaluate KBQA-R1 on three standard benchmarks: WebQSP, GrailQA, and GraphQuestions. The results demonstrate that our RL-based approach significantly outperforms strong baselines, particularly in scenarios requiring multi-hop reasoning and schema linking.
@article{sun2025kbqa,
title={KBQA-R1: Reinforcing Large Language Models for Knowledge Base Question Answering},
author={Sun, Xin and Chen, Zhongqi and Zheng, Xing and Liu, Qiang and Wu, Shu and Song, Bowen and Wang, Zilei and Wang, Weiqiang and Wang, Liang},
journal={arXiv preprint arXiv:2512.10999},
year={2025}
}