KBQA-R1: Reinforcing Large Language Models
for Knowledge Base Question Answering

Xin Sun1,2, Zhongqi Chen2, Xing Zheng2, Qiang Liu3, Shu Wu3, Bowen Song2, Zilei Wang1, Weiqiang Wang2, Liang Wang3
1University of Science and Technology of China,  2Ant Group,  3Chinese Academy of Sciences
Figure 1: KBQA-R1 Overview. Our model treats KBQA as a multi-turn decision process, optimizing interaction via Reinforcement Learning to bridge the gap between LLM reasoning and strict knowledge base execution.

Abstract

Knowledge Base Question Answering (KBQA) challenges models to bridge the gap between natural language and strict knowledge graph schemas by generating executable logical forms. While Large Language Models (LLMs) have advanced this field, current approaches often fall into one of two failure modes: they either generate hallucinated queries without verifying that the referenced schema items actually exist, or they follow rigid, template-based reasoning that imitates synthesized traces without genuine comprehension.

To address these limitations, we present KBQA-R1, a framework that shifts the paradigm from text imitation to interaction optimization via Reinforcement Learning. Treating KBQA as a multi-turn decision process, our model learns to navigate the knowledge base through a defined set of actions, leveraging Group Relative Policy Optimization (GRPO) to refine its strategy based on concrete execution feedback. Furthermore, we introduce Referenced Rejection Sampling (RRS), a data synthesis method that addresses the cold-start problem by strictly aligning reasoning traces with ground-truth action sequences.

Methodology

The KBQA-R1 framework consists of two main stages: (1) Cold-Start Data Synthesis via Referenced Rejection Sampling (RRS), and (2) Reinforcement Learning via Group Relative Policy Optimization (GRPO).
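To make the first stage concrete, the following Python sketch illustrates the core acceptance test behind Referenced Rejection Sampling as we describe it: sample several candidate trajectories per question, keep only those whose final logical form matches the annotated reference, and use the survivors as cold-start SFT data. The helper names (sample_trajectory, gold_lf, normalize_lf) are illustrative assumptions, not the released implementation.

from typing import Callable, Dict, List

def normalize_lf(lf: str) -> str:
    # Hypothetical normalization: collapse whitespace so trivially different
    # surface forms of the same logical form still compare equal.
    return " ".join(lf.split())

def referenced_rejection_sampling(
    dataset: List[Dict],                        # each item: {"question": str, "gold_lf": str}
    sample_trajectory: Callable[[str], Dict],   # LLM sampler -> {"trace": str, "final_lf": str}
    num_samples: int = 8,
) -> List[Dict]:
    # Keep only trajectories whose final logical form matches the gold reference,
    # so every accepted reasoning trace ends in the correct action sequence.
    sft_data = []
    for item in dataset:
        for _ in range(num_samples):
            traj = sample_trajectory(item["question"])
            if normalize_lf(traj["final_lf"]) == normalize_lf(item["gold_lf"]):
                sft_data.append({"question": item["question"], "trace": traj["trace"]})
                break   # one accepted trajectory per question suffices for SFT
    return sft_data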

Figure 2: The Framework of KBQA-R1. Left: RRS constructs high-quality SFT data by filtering trajectories that match ground-truth logical forms. Right: GRPO optimizes the policy by rewarding successful execution and reasoning steps.
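For the second stage, the snippet below sketches the group-relative advantage at the core of GRPO: each question gets a group of rollouts, each rollout receives a scalar reward (for instance, based on whether the generated query executes and returns the gold answer), and advantages are obtained by normalizing rewards within the group, so no separate critic network is needed. The example rewards are placeholders, not the paper's exact reward design.

import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # rewards: shape (G,), one scalar per rollout sampled for the same question.
    # Normalizing within the group yields the GRPO advantage for each rollout.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: a group of 4 rollouts where two generated queries executed correctly.
rewards = torch.tensor([1.0, 0.0, 1.0, 0.0])
advantages = group_relative_advantages(rewards)
# Successful rollouts receive positive advantages and failures negative ones,
# pushing the policy toward trajectories grounded in successful execution.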

Specifically, we model the reasoning process as a sequence of actions (e.g., relation retrieval, set operations). The model receives feedback from the environment (Virtuoso engine) to adjust its policy, ensuring generated queries are not only syntactically correct but also executable against the specific KB schema.
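As a rough picture of this interaction loop, the sketch below queries a Virtuoso SPARQL endpoint through the SPARQLWrapper library and feeds execution results (or errors) back to the policy each turn. The endpoint URL, action format, and policy.next_action interface are assumptions for illustration, not the exact interface used in KBQA-R1.

from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:8890/sparql"   # hypothetical local Virtuoso endpoint

def execute_sparql(query: str) -> dict:
    # Run a SPARQL query against the KB; return bindings on success or the
    # error message on failure, so the policy observes why a query broke.
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    try:
        results = sparql.query().convert()
        return {"ok": True, "bindings": results["results"]["bindings"]}
    except Exception as exc:                  # e.g., malformed query or missing relation
        return {"ok": False, "error": str(exc)}

def interaction_episode(policy, question: str, max_turns: int = 8):
    # Multi-turn decision process: the policy emits an action each turn
    # (relation retrieval, set operation, final execution) and the environment
    # responds with grounded execution feedback.
    history = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        action = policy.next_action(history)          # hypothetical policy interface
        feedback = execute_sparql(action["query"])
        history.append({"role": "environment", "content": feedback})
        if action.get("terminal"):                    # final query answered the question
            return history, feedback
    return history, {"ok": False, "error": "max turns exceeded"}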

Experimental Results

We evaluated KBQA-R1 on three standard benchmarks: WebQSP, GrailQA, and GraphQuestions. The results demonstrate that our RL-based approach significantly outperforms strong baselines, particularly in scenarios requiring multi-hop reasoning and schema linking.

Figure 3: Main Results. Comparison with state-of-the-art methods on WebQSP and GrailQA benchmarks. KBQA-R1 achieves superior performance by effectively grounding reasoning in execution feedback.

BibTeX

@article{sun2025kbqa,
  title={KBQA-R1: Reinforcing Large Language Models for Knowledge Base Question Answering},
  author={Sun, Xin and Chen, Zhongqi and Zheng, Xing and Liu, Qiang and Wu, Shu and Song, Bowen and Wang, Zilei and Wang, Weiqiang and Wang, Liang},
  journal={arXiv preprint arXiv:2512.10999},
  year={2025}
}