We propose a sparse-MVS leaderboard and call the community’s attention to the general sparse-MVS problem with a large range of baseline angles. The proposed sparse-MVS leaderboard is built on the large-scale DTU and Tanks-and-Temples datasets with sparsely sampled camera views. The sparse-MVS setting selects one view from every n consecutive camera indices, i.e., {1, n + 1, 2n + 1, ...}, where n is termed the Sparsity and is positively related to the baseline angle θ. The poor performance of state-of-the-art MVS algorithms on the proposed leaderboard demonstrates the need for further effort and attention from the community toward achieving MVS under various degrees of sparsity.
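The view-selection rule above can be sketched in a few lines of Python. This is a minimal illustration, not code from the benchmark; the function name `sample_sparse_views` is our own:

```python
def sample_sparse_views(view_indices, n):
    """Select one view from every n consecutive camera indices,
    i.e. the set {1, n+1, 2n+1, ...} in 1-based camera numbering.

    view_indices: ordered list of camera indices
    n: sparsity (n=1 keeps all views; larger n widens the baseline angle)
    """
    return view_indices[::n]

# Example: 1-based camera indices 1..10 with sparsity n = 3
views = list(range(1, 11))
print(sample_sparse_views(views, 3))  # -> [1, 4, 7, 10]
```

Larger n yields fewer views and wider baseline angles between neighboring views, which is what makes the matching problem progressively harder.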

Getting Started

The experiment settings and evaluation methods are detailed in description. To submit your results, follow the instructions in submission. The leaderboard and the data visualization are shown in leaderboard.


If you find the Sparse-MVS benchmark useful [Arxiv] [2020 TPAMI] [GitHub], please consider citing:

    @article{ji2020surfacenetplus,
        title={SurfaceNet+: An End-to-end 3D Neural Network for Very Sparse Multi-view Stereopsis},
        author={M. {Ji} and J. {Zhang} and Q. {Dai} and L. {Fang}},
        journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
        year={2020}
    }