Tone mapping operators (TMOs) are pivotal in rendering High Dynamic Range (HDR) content on media with a limited dynamic range. Evaluating the quality of tone-mapped images involves several objective factors as well as a combination of subjective factors such as aesthetics and fidelity. Objective
image quality assessment (IQA) metrics are often used to evaluate TMO quality, but they do not always reflect the ground truth. A robust alternative to objective IQA metrics is subjective quality assessment. Although subjective experiments provide accurate results, they can be time-consuming
and expensive to conduct. Over the last decade, crowdsourcing experiments have become a popular way to collect large amounts of data in a shorter period of time and at a lower cost. Although they yield more data while requiring fewer resources, the lack of a controlled experimental environment
results in noisy data. In this work1, we present a comprehensive analysis of crowdsourcing experiments with two different groups of participants. Our contributions include a comparative study and a collection of methods for detecting unreliable participants in crowdsourcing experiments
in a TMO quality evaluation scenario. These methods can be utilized by the scientific community to increase the reliability of the gathered data.
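As an illustration of one commonly used screening approach (not necessarily one of the specific methods proposed in this work), participants in subjective experiments are often flagged when their individual ratings correlate poorly with the group consensus, in the spirit of the observer-screening procedures described in ITU-T Rec. P.913. A minimal sketch, where the 0.75 correlation threshold is an arbitrary illustrative choice:

```python
import numpy as np

def flag_unreliable(scores, threshold=0.75):
    """Flag participants whose ratings correlate poorly with the group.

    scores: 2D array-like of shape (n_participants, n_stimuli).
    threshold: minimum Pearson correlation with the mean rating of the
        remaining participants (0.75 here is an illustrative value).
    Returns a list of participant indices considered unreliable.
    """
    scores = np.asarray(scores, dtype=float)
    flagged = []
    for i in range(scores.shape[0]):
        # Consensus excluding participant i, to avoid self-correlation bias.
        others_mean = np.delete(scores, i, axis=0).mean(axis=0)
        r = np.corrcoef(scores[i], others_mean)[0, 1]
        if r < threshold:
            flagged.append(i)
    return flagged
```

For example, a participant who rates stimuli in the opposite order to everyone else would produce a strongly negative correlation with the leave-one-out consensus and be flagged, while consistent raters pass the check.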