PR-IQA: Partial-Reference Image Quality Assessment for Diffusion-Based Novel View Synthesis
Abstract
Diffusion models are promising for sparse-view novel view synthesis (NVS), as they can generate pseudo-ground-truth views to aid 3D reconstruction pipelines like 3D Gaussian Splatting (3DGS). However, these synthesized images often contain photometric and geometric inconsistencies, and using them directly for supervision can impair reconstruction. To address this, we propose Partial-Reference Image Quality Assessment (PR-IQA), a framework that evaluates diffusion-generated views using reference images from different poses, eliminating the need for ground truth. PR-IQA first computes a geometrically consistent partial quality map in overlapping regions. It then performs quality completion to inpaint this partial map into a dense, full-image map. This completion is achieved via a cross-attention mechanism that incorporates reference-view context, ensuring cross-view consistency and enabling thorough quality assessment. When integrated into a diffusion-augmented 3DGS pipeline, PR-IQA restricts supervision to high-confidence regions identified by its quality maps. Experiments demonstrate that PR-IQA outperforms existing IQA methods, achieving full-reference-level accuracy without ground-truth supervision. As a result, our quality-aware 3DGS approach filters inconsistencies more effectively, producing superior 3D reconstructions and NVS results.
Partial-Reference IQA (PR-IQA)
PR-IQA (Partial-Reference Image Quality Assessment) evaluates the quality of diffusion-generated novel views without requiring a ground-truth image. It first estimates a reliable partial quality map in geometrically overlapping regions between the query and reference views using feature similarity. Then, a quality completion network predicts a dense quality map for the entire image by propagating these reliable signals to unobserved regions through a reference-conditioned cross-attention architecture. This approach enables accurate, pixel-level quality estimation by leveraging cross-view consistency, achieving performance comparable to full-reference IQA methods while operating without ground-truth supervision.
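The two stages above can be illustrated with a minimal numpy sketch. Everything here is an assumption for exposition: the function names, shapes, and the feature-similarity formula are hypothetical, and the learned reference-conditioned cross-attention completion network is replaced by a naive iterative-averaging stand-in just to show how reliable signals propagate into unobserved pixels.

```python
import numpy as np

def partial_quality_map(query_feat, ref_feat_warped, overlap_mask):
    """Illustrative partial quality map: per-pixel cosine similarity between
    query features and reference features warped into the query view.
    Shapes: (H, W, C) feature maps, (H, W) boolean overlap mask.
    Pixels outside the overlap are marked NaN (unobserved)."""
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-8)
    r = ref_feat_warped / (np.linalg.norm(ref_feat_warped, axis=-1, keepdims=True) + 1e-8)
    sim = (q * r).sum(axis=-1)          # cosine similarity in [-1, 1]
    quality = 0.5 * (sim + 1.0)         # rescale to [0, 1]
    return np.where(overlap_mask, quality, np.nan)

def complete_quality_map(partial, n_iters=200):
    """Naive stand-in for the paper's learned completion network:
    repeatedly blur, then restore the reliable (observed) pixels, so
    known quality values diffuse into unobserved regions."""
    known = ~np.isnan(partial)
    filled = np.where(known, partial, np.nanmean(partial))
    for _ in range(n_iters):
        padded = np.pad(filled, 1, mode="edge")
        blurred = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                   padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        filled = np.where(known, partial, blurred)  # keep reliable pixels fixed
    return filled
```

In the actual method the completion is learned and conditioned on reference-view context via cross-attention, rather than this purely local propagation.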
Results
PR-IQA-Guided 3D Gaussian Splatting
We integrate PR-IQA into a sparse-view 3D Gaussian Splatting (3DGS) pipeline to improve reconstruction quality when using diffusion-generated views. PR-IQA first evaluates multiple candidate images generated by the diffusion model and selects the highest-quality one as a pseudo-ground-truth for each viewpoint. It then produces a dense quality map that is used to create a pixel-wise confidence mask, allowing the 3DGS optimization to focus only on reliable regions while ignoring artifacts. This dual filtering strategy (image-level selection and pixel-level masking) effectively prevents inconsistent or low-quality regions from affecting training, resulting in more accurate and stable 3D reconstructions.
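The dual filtering strategy can be sketched as follows. This is a simplified illustration, not the paper's implementation: the selection rule (mean quality), the confidence threshold `tau`, and the plain L1 photometric term are all illustrative assumptions.

```python
import numpy as np

def select_pseudo_gt(candidates, quality_maps):
    """Image-level filtering: pick the diffusion-generated candidate with the
    highest mean predicted quality as the pseudo-ground-truth for this view.
    candidates: list of (H, W, 3) images; quality_maps: list of (H, W) maps."""
    scores = [float(q.mean()) for q in quality_maps]
    best = int(np.argmax(scores))
    return candidates[best], quality_maps[best]

def masked_l1_loss(render, pseudo_gt, quality_map, tau=0.7):
    """Pixel-level filtering: L1 photometric loss restricted to pixels whose
    predicted quality exceeds a confidence threshold tau (illustrative value),
    so low-quality regions do not supervise the 3DGS optimization."""
    mask = quality_map >= tau
    if not mask.any():
        return 0.0
    diff = np.abs(render - pseudo_gt)   # (H, W, 3)
    return float(diff[mask].mean())     # average only over confident pixels
```

In the full pipeline these losses would be backpropagated through the 3DGS renderer; here the masking logic is shown in isolation.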
Results
Acknowledgement
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2026-RS-2020-II201789), and the Artificial Intelligence Convergence Innovation Human Resources Development (IITP-2026-RS-2023-00254592) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
BibTeX
@article{Choi2026PRIQA,
  title={PR-IQA: Partial-Reference Image Quality Assessment for Diffusion-Based Novel View Synthesis},
  author={Choi, Inseong and Lee, Siwoo and Nam, Seung-Hun and Song, Soohwan},
  year={2026},
  eprint={2604.04576},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}