Video Super-Resolution Quality Assessment Challenge 2024



The unique challenge for developing an objective Super-Resolution Quality metric

  • Large Super-Resolution Quality Assessment dataset covering many major Video Super-Resolution use cases
  • Evaluating metrics on three subsets: Easy, Moderate and Hard
  • Dataset consists of ~1200 videos, with scores based on >150000 votes


News and Updates

  • May 7th, 2024 - Super-Resolution Quality Assessment Challenge Announced
  • May 31st, 2024 - Training and Test Sets published on the “Participate” page
  • May 31st, 2024 - Challenge has started!
  • June 3rd, 2024 - Changed level distribution in all sets
  • June 12th, 2024 - Published timeline of Challenge
  • July 15th, 2024 - Challenge timeline updated
  • July 24th, 2024 - Code Sharing phase has started
    (if you have not received the link to the form, please contact the organizers)
  • July 30th, 2024 - Challenge Closing phase deadline updated
  • August 1st, 2024 - Preliminary Final Leaderboard has been published
  • August 2nd, 2024 - Subjective scores on the test set have been published (see “Participate”)
  • August 20th, 2024 - Final Leaderboard has been published

Final Leaderboard

Here you can find the combined challenge results. Public Score: score on the public test set. Private Score: score on the private test set. Final Score: combined challenge score, computed as (Public Score + 2 * Private Score) / 3.

Team                Type      Public Score  Private Score  Final Score
QA-FTE              NR Image  0.8661        0.8575         0.8604
TVQA-SR             NR Image  0.8906        0.8448         0.8601
SJTU MMLab          NR Video  0.8906        0.8362         0.8543
Wink                NR Video  0.8864        0.8014         0.8297
sv-srcb-lab         NR Video  0.7926        0.8432         0.8263
Q-Align (baseline)  NR Image  0.7028        0.7855         0.7580
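
As a quick sanity check of the combination formula above, here is an illustrative Python sketch (not challenge code) that reproduces the Final Score of the first leaderboard row.

# Combined challenge score, as defined above: (public + 2 * private) / 3
def final_score(public: float, private: float) -> float:
    return (public + 2 * private) / 3

# Check against the QA-FTE row: (0.8661 + 2 * 0.8575) / 3 = 0.86036... -> 0.8604
print(round(final_score(0.8661, 0.8575), 4))  # 0.8604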

Motivation

Video and Image Super-Resolution (SR) has garnered extensive research in recent years, with new articles appearing monthly. To evaluate SR performance, Image and Video Quality Assessment metrics are often used. However, comparisons have shown that these metrics correlate poorly with human perception. The classical PSNR and SSIM methods have been shown to be incapable of estimating SR quality, yet they are still used in research papers. Other, deep-learning-based methods are generally poor at capturing the specific artifacts that SR methods introduce. Hence, Super-Resolution Quality Assessment is a distinct task from general Image and Video Quality Assessment. This is further confirmed by our benchmarks: Video Quality Metrics and Super-Resolution Quality Metrics. The main objective of this challenge is to stimulate research and advance the field of metrics oriented specifically to Super-Resolution.

The task is to develop an Image/Video Super-Resolution Quality Assessment metric, because existing metrics get it wrong:

[Figure: example pairs of super-resolved frames annotated with scores from existing no-reference metrics (PaQ-2-PiQ↑, HyperIQA↑, MUSIQ↑), illustrating cases where the metric rankings disagree with perceived visual quality.]
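The leaderboard above distinguishes “NR Image” from “NR Video” metrics. A common way to apply an image-based (per-frame) metric to a video is to score sampled frames and pool the results. The sketch below is only a hypothetical illustration of that pattern: score_frame is a stand-in for any no-reference image quality model and is not part of the challenge code.

import cv2  # OpenCV, used here only to decode video frames
import numpy as np

def score_frame(frame: np.ndarray) -> float:
    # Placeholder no-reference score: variance of the Laplacian (a crude
    # sharpness proxy). Substitute a real IQA model here.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def score_video(path: str, frame_step: int = 10) -> float:
    # Apply the per-frame metric to every frame_step-th frame and average.
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % frame_step == 0:
            scores.append(score_frame(frame))
        idx += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

Mean pooling is only one choice; it discards temporal information, which is one reason dedicated NR Video metrics exist.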

Challenge Timeline

  • May 31st: Development and Public Test phases –– training and testing data are released (results of the methods are shown only on the public test set), testing server is opened
  • July 24th: Code Sharing phase –– testing server is closed, participants should share the working code of their methods
  • July 29th: Closing phase –– code sharing deadline
  • July 31st: Private Test phase –– final results of the methods will be shown
  • August 11th: Paper submission deadline for entries from the challenges

Challenge Data

We provide the participants with train (~600 videos) and test (~200 videos in the public set and ~400 videos in the private set) subsets that cover many Video Super-Resolution use cases. The sets consist of Ground-Truth videos and the same videos after applying bicubic downsampling, video codecs, and Super-Resolution methods. The videos have no fixed resolution.

Participants will see test results for only the ~200 public test videos; full results will be available by the end of the competition (see the “Participate” page for how the final result is evaluated).

All the videos of the test set are divided into three levels: Easy, Moderate and Hard. This division is based on:

  • which state-of-the-art (SOTA) Image and Video Quality Assessment metrics evaluate the video incorrectly;

  • the number of distortions in the video that are purely Super-Resolution artifacts.

The division is as follows:

  • “Easy” level includes videos without special artifacts (or with very weak distortions): only blur, noise, etc.

  • “Moderate” level includes videos without special artifacts, videos with weak distortions, as well as videos with very strong SR distortions that are caught by a significant fraction of metrics.

  • “Hard” level includes videos with obvious distortions that are not handled by most metrics.

The dataset will be available as soon as the challenge starts. Further details can be found on the “Terms and Conditions” page.
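
The exact scoring used on the test server is described on the “Participate” page. Purely as a hedged illustration, the sketch below assumes a Spearman-correlation-style agreement between a metric’s predictions and the released subjective scores, computed per difficulty level; the file names and column names are hypothetical.

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical files: subjective scores with an Easy/Moderate/Hard "level"
# column, and a metric's predictions, both keyed by video name.
gt = pd.read_csv("subjective_scores.csv")        # columns: video, level, mos
pred = pd.read_csv("my_metric_predictions.csv")  # columns: video, score

merged = gt.merge(pred, on="video")
for level, group in merged.groupby("level"):
    srocc, _ = spearmanr(group["mos"], group["score"])
    print(f"{level}: SROCC = {srocc:.4f}")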

Participate

To participate in the challenge, you must register on the “Participate” page. There you can also read about the submission format and upload the results of your method. The submission rules are described on the “Terms and Conditions” page.

A leaderboard is built automatically from the results of metric testing. You can find it on the “Leaderboard” page.

Organizers

  • Ivan Molodetskikh

  • Artem Borisov

  • Dmitry Vatolin

  • Radu Timofte

If you have any questions, please e-mail sr-qa-challenge-2024@videoprocessing.ai.

Other challenges can be found on the AIM 2024 Page.

Citation

@inproceedings{aim2024vsrqa,
  title={AIM 2024 Challenge on Video Super-Resolution Quality Assessment: Methods and Results},
  author={Ivan Molodetskikh and Artem Borisov and Dmitriy S Vatolin and Radu Timofte and others},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
  year={2024}
}