Joss Whittle and Mark W. Jones
In Full-Reference Image Quality Assessment (FR-IQA), images are compared
with ground truth images that are known to be of high visual quality.
These metrics are used to rank algorithms under test on their image
quality performance. During a Monte Carlo rendering process we often
wish to determine whether the images being rendered are of sufficient
visual quality, without a ground truth image being available. In such
cases FR-IQA metrics are not applicable, and we must instead use
No-Reference Image Quality Assessment (NR-IQA) measures to predict the
perceived quality of unconverged images. In this work we propose a deep
learning approach to NR-IQA, trained specifically on noise from Monte
Carlo rendering processes, which significantly outperforms existing
NR-IQA methods and can produce quality predictions consistent with
FR-IQA measures that have access to ground truth images.
CCS Concepts: • Computing methodologies → Machine learning; Neural
networks; Computer graphics; Image processing.
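
The distinction the abstract draws can be illustrated with a small, hypothetical sketch (not the paper's actual model): an FR-IQA metric such as SSIM requires the ground truth image, whereas an NR-IQA predictor scores the in-progress render on its own. Here `nr_iqa_model` is a placeholder for a learned predictor, such as a CNN trained on Monte Carlo noise; its heuristic body is purely illustrative.

```python
# Hypothetical sketch: FR-IQA needs a reference image, NR-IQA does not.
import numpy as np
from skimage.metrics import structural_similarity as ssim  # FR-IQA example

def nr_iqa_model(image: np.ndarray) -> float:
    """Placeholder for a learned no-reference quality predictor.
    A real model would be a network trained on Monte Carlo rendering noise."""
    # Illustrative stand-in: lower pixel variance -> higher predicted quality.
    return float(1.0 / (1.0 + image.var()))

reference = np.random.rand(256, 256)                        # ground truth (rarely available)
render = reference + 0.05 * np.random.randn(256, 256)       # unconverged, noisy render

# FR-IQA: only possible when the ground truth reference exists.
print("SSIM (needs reference):", ssim(reference, render, data_range=1.0))

# NR-IQA: usable during rendering, with no reference required.
print("Predicted quality (no reference):", nr_iqa_model(render))
```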