R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization

Nanyang Technological University
Tsinghua University

Abstract

Recent studies generally enhance MLLMs' reasoning capabilities via supervised fine-tuning on high-quality chain-of-thought reasoning data, which often leads models to merely imitate successful reasoning paths without understanding why other reasoning paths fail. In this work, we aim to enhance MLLMs' reasoning ability beyond passively imitating positive reasoning paths. To this end, we design Step-wise Group Relative Policy Optimization (StepGRPO), a new online reinforcement learning framework that enables MLLMs to self-improve their reasoning ability via simple, effective, and dense step-wise rewards. Specifically, StepGRPO introduces two novel rule-based reasoning rewards: the Step-wise Reasoning Accuracy Reward (StepRAR) and the Step-wise Reasoning Validity Reward (StepRVR). StepRAR rewards reasoning paths that contain the necessary intermediate reasoning steps, identified via a soft key-step matching technique, while StepRVR rewards reasoning paths that follow a well-structured and logically consistent reasoning process, assessed through a reasoning-completeness and logic-evaluation strategy. With these step-wise reward mechanisms, StepGRPO effectively mitigates the sparse-reward issue for MLLMs and encourages a more structured and logically consistent reasoning process. Extensive experiments over 8 benchmarks demonstrate the superiority of the proposed StepGRPO.
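To make the two reward rules concrete, below is a minimal Python sketch of how they could be scored. The sentence splitting, similarity threshold, reward weights, and the <think>/<answer> output format are illustrative assumptions, not the paper's exact specification.

import difflib
import re

def step_rar(path, key_steps, answer, gold_answer, sim_threshold=0.7):
    # StepRAR (sketch): partial credit for each pre-mined key step that
    # softly matches some sentence of the reasoning path, plus credit
    # for a correct final answer. Threshold and weights are assumptions.
    sentences = [s.strip() for s in re.split(r"[.\n]", path) if s.strip()]
    matched = sum(
        any(difflib.SequenceMatcher(None, step.lower(), s.lower()).ratio() >= sim_threshold
            for s in sentences)
        for step in key_steps
    )
    step_score = matched / max(len(key_steps), 1)
    answer_score = 1.0 if answer.strip() == gold_answer.strip() else 0.0
    return answer_score + 0.5 * step_score  # the 0.5 weight is illustrative

def step_rvr(path):
    # StepRVR (sketch): reward 1 only if the path is complete (reasoning
    # first, then a final answer) and well structured; tag format assumed.
    has_reasoning = "<think>" in path and "</think>" in path
    has_answer = "<answer>" in path
    ordered = has_reasoning and has_answer and path.rfind("</think>") < path.find("<answer>")
    return 1.0 if ordered else 0.0

Under these assumptions, the combined reward for a sampled path would be step_rar(...) + step_rvr(...), giving a dense signal even when the final answer is wrong.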

Overview of StepGRPO

Overview of the proposed StepGRPO. StepGRPO consists of two phases: a policy warm-up phase and a step-wise online policy optimization phase. After the warm-up, the policy model generates a group of reasoning paths for each question, and each path is assigned step-wise rewards by the two proposed mechanisms: Step-wise Reasoning Accuracy Reward (StepRAR) and Step-wise Reasoning Validity Reward (StepRVR). StepRAR rewards reasoning paths that contain key intermediate steps, identified using a soft key-step matching technique. StepRVR rewards reasoning paths based on completeness and logical consistency, ensuring they are well structured. StepGRPO then estimates the advantage for policy optimization by using the average step-wise reasoning reward of the group of sampled reasoning paths as a baseline. Examples for StepRAR and StepRVR are illustrated in (a) and (b), respectively.
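The group-relative baseline described above is simple to state: sample a group of G reasoning paths per question, score each with the combined step-wise reward, and normalize within the group. A minimal sketch, assuming the standard GRPO normalization by the group standard deviation:

import numpy as np

def group_relative_advantages(rewards, eps=1e-6):
    # rewards: combined step-wise rewards (e.g., StepRAR + StepRVR)
    # for the G reasoning paths sampled for one question.
    r = np.asarray(rewards, dtype=np.float64)
    # The group mean serves as the baseline; dividing by the group
    # standard deviation is standard GRPO practice, assumed here.
    return (r - r.mean()) / (r.std() + eps)

# Example: G = 4 sampled paths for one question.
print(group_relative_advantages([1.5, 0.5, 1.0, 0.0]))  # positive for above-average paths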

Main Results

Method MathVista MMStar Math-V ChartQA DynaMath HallBench MathVerse MME Average
Closed-Source Models
GPT-4o 63.8 63.9 30.3 85.7 63.7 55.0 39.4 2329 64.5
Claude-3.5 Sonnet 67.7 62.2 - 90.8 64.8 55.0 - 1920 -
Open-Source Models
Cambrian-1-8B 49.0 - - 73.3 - - - - -
MM-1.5-7B 47.6 - - 78.6 - - - 1861 -
Idefics3-LLaMA3-8B 58.4 55.9 - 74.8 - - - 1937 -
InternVL2-8B 58.3 61.5 - 83.3 39.7 - - 2210 -
MiniCPM-V-2.6-8B 60.6 57.5 - - - 48.1 - 2348 -
DeepSeek-VL2-MOE-4.5B 62.8 61.3 - 86.0 - - - 2253 -
Reasoning Models
LLaVA-CoT-11B 54.8 57.6 - - - 47.8 - - -
LLaVA-Reasoner-8B 50.6 54.0 - 83.0 - - - - -
Insight-V-8B 49.8 57.4 - 77.4 - - - 2069 -
Mulberry-7B 63.1 61.3 - 83.9 45.1 54.1 - 2396 -
LlamaV-o1-11B 54.4 59.4 - - - 63.5 - - -
Qwen2-VL-2B 43.0 48.0 12.4 73.5 24.9 41.7 19.7 1872 41.2
Qwen2-VL-2B-GRPO 41.4 46.2 16.0 72.5 24.2 42.2 19.9 1930 41.4
R1-VL-2B 52.1 49.8 17.1 75.2 29.4 44.0 26.2 2048 45.8
Qwen2-VL-7B 58.2 60.7 16.3 83.0 42.1 50.6 32.5 2327 53.3
Qwen2-VL-7B-GRPO 55.1 59.8 19.1 81.3 33.9 48.5 30.9 2335 51.4
R1-VL-7B 63.5 60.0 24.7 83.9 45.2 54.7 40.0 2376 57.1

Table 1: Main experimental results. All values are accuracies (%) except MME, which reports the benchmark's absolute score; "-" denotes results that are not reported.

Qualitative Comparison

[Figure: Case Study 1]

BibTeX

 
@article{zhang2025r1,
  title={R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization},
  author={Zhang, Jingyi and Huang, Jiaxing and Yao, Huanjin and Liu, Shunyu and Zhang, Xikun and Lu, Shijian and Tao, Dacheng},
  journal={arXiv preprint arXiv:2503.12937},
  year={2025}
}