<?xml version="1.0"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v3.0 20080202//EN" "journalpublishing3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML"
         xmlns:xlink="http://www.w3.org/1999/xlink"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         article-type="research-article"
         dtd-version="3.0"><front>
      <journal-meta>
         <journal-id journal-id-type="publisher-id">jist</journal-id>
         <journal-id journal-id-type="coden">JIMTE6</journal-id>
         <journal-title-group>
            <journal-title>Journal of Imaging Science and Technology</journal-title>
            <abbrev-journal-title abbrev-type="IST">J. Imaging Sci. Technol.</abbrev-journal-title>
            <abbrev-journal-title abbrev-type="publisher">J. Imaging Sci. Technol.</abbrev-journal-title>
         </journal-title-group>
         <issn pub-type="ppub">1062-3701</issn>
         <issn pub-type="epub">1943-3522</issn>
         <publisher>
            <publisher-name>Society for Imaging Science and Technology</publisher-name>
         </publisher>
      </journal-meta>
      <article-meta>
         <article-id pub-id-type="publisher-id">020403</article-id>
         <article-id pub-id-type="doi">10.2352/J.ImagingSci.Technol.2026.70.2.020403</article-id>
         <article-id pub-id-type="manuscript">2025007</article-id>
         <article-categories><subj-group subj-group-type="article-type"><subject>Work Presented at Electronic Imaging 2026</subject></subj-group>
         </article-categories>
         <title-group>
            <article-title>Quality Evaluation of Contrast-Enhanced Images: Central Asian Perspectives</article-title>
            <alt-title alt-title-type="short">Quality evaluation of contrast-enhanced images: Central Asian perspectives</alt-title>
         </title-group>
         <contrib-group content-type="all">
            <contrib contrib-type="author">
               <name>
                  <surname>Kadyrova</surname>
                  <given-names>Altynay</given-names>
               </name>
               <xref ref-type="aff" rid="jist2025007af1"/>
               <xref ref-type="aff" rid="jist2025007em1"/>
            </contrib>
            <aff id="jist2025007af1">Department of Computer Science, KIMEP University, Almaty, Kazakhstan</aff>
            <ext-link id="jist2025007em1" ext-link-type="email">a.kadyrova@kimep.kz</ext-link>
         </contrib-group>
         <contrib-group content-type="all">
            <contrib contrib-type="author">
               <name>
                  <surname>Pedersen</surname>
                  <given-names>Marius</given-names>
               </name>
               <xref ref-type="aff" rid="jist2025007af2"/>
            </contrib>
            <aff id="jist2025007af2">Department of Computer Science, Norwegian University of Science and Technology, Gj&#x00F8;vik, Norway</aff>
            <author-comment content-type="short-author-list">
               <p>Kadyrova and Pedersen</p>
            </author-comment>
         </contrib-group>
         <pub-date pub-type="ppub">
            <month>03</month>
            <year>2026</year>
         </pub-date>
         <volume>70</volume>
         <issue seq="15">2</issue>
         <fpage>1</fpage>
         <lpage>11</lpage>
         <history>
            <date date-type="received">
               <day>1</day>
               <month>8</month>
               <year>2025</year>
            </date>
            <date date-type="accepted">
               <day>17</day>
               <month>3</month>
               <year>2026</year>
            </date>
         </history>
         <permissions>
            <copyright-statement>&#x00A9; Society for Imaging Science and Technology</copyright-statement>
            <copyright-year>2026</copyright-year>
            <license license-type="open-access"
                     xlink:href="https://creativecommons.org/licenses/by/4.0/">
               <license-p/>
            </license>
         </permissions>
         <abstract>
            <title>Abstract</title>
            <p>Culture can play a significant role in evaluating image quality. Therefore, this work considered one of the least studied cultural regions of observers, examining the impact of Central&#x00A0;Asian culture on image quality evaluation. More specifically, it investigated how observers from this region evaluate the quality of contrast-enhanced images. It was found that observer evaluations vary and that the observers can be divided into groups. These groups may have their own preferences for the quality of contrast-enhanced images. Therefore, the personalization factor should be incorporated into the quality evaluation of (contrast-) enhanced images. Furthermore, the results were compared with another population, and differences were found in the overall outcomes of the two observer groups. The variations observed&#x00A0;could be due to cultural differences. In addition, this study introduced the Central Asian Contrast-Enhanced Image Quality Dataset&#x00A0;(CACEIQD). A variety of image quality metrics, including deep learning techniques, were tested on the dataset. The results indicate that the dataset is challenging and highlight an area for metric improvement. This dataset can be helpful for future research in the field of enhanced image quality evaluation.</p>
         </abstract>
         <kwd-group>
            <kwd>image quality</kwd>
            <kwd>image enhancement</kwd>
            <kwd>contrast</kwd>
            <kwd>culture</kwd>
            <kwd>metrics</kwd>
         </kwd-group>
         <counts>
            <page-count count="11"/>
         </counts>
         <custom-meta-group>
            <custom-meta>
               <meta-name>ccc</meta-name>
               <meta-value>1062-3701/2026/70(2)/020403/11/$25.00</meta-value>
            </custom-meta>
            <custom-meta>
               <meta-name>printed</meta-name>
               <meta-value>Printed in the USA</meta-value>
            </custom-meta>
         </custom-meta-group>
      </article-meta>
   </front>
   <body><sec id="jist2025007us1">
         <label>1.</label>
         <title>Introduction</title>
         <p>Image quality has been widely studied across various domains, including computer graphics, color reproduction, material appearance, and printing. Its evaluation depends on multiple attributes&#x2014;such as color, gloss, and naturalness&#x2014;and can be influenced by external factors such as illumination, viewing distance, sample geometry, and cultural background. Among these factors, culture has been shown to play a significant role, as observers from different cultural groups often interpret and judge image quality differently&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib1">1</xref>].</p>
         <p>Despite increasing interest in cross-cultural differences, most existing image quality studies rely on observers from Western countries, East Asia, or South Asia. Some recent online studies do not report demographic information&#x00A0;at all. As a result, one major cultural region&#x2014;Central Asia&#x2014;remains largely absent from the literature, with only a very small number of observers from this region included in published image quality evaluations. This lack of&#x00A0;representation is particularly evident in studies that focus on enhanced images, where cultural differences may influence how improvements or artifacts are perceived.</p>
          <p>The Central Asian region encompasses ethnicities from Kazakhstan, Turkmenistan, Uzbekistan, Tajikistan, and Kyrgyzstan. We hypothesize that these populations have been significantly underrepresented in prior work, and therefore current conclusions about perceived image quality or preferred enhancement levels may not generalize to this cultural group. To address this gap, our goal is to investigate how observers from Central Asia evaluate image quality, with particular emphasis on their perception of enhanced images. We introduce what is, to the best of our knowledge, the first dataset dedicated to enhanced image quality ratings from Central Asian observers. This dataset expands the cultural diversity of existing resources and enables researchers to draw more inclusive and representative conclusions about image enhancement. Furthermore, we examine whether distinct subgroups exist within the Central Asian observer population. We also compare the Central Asian observer results with those of another population, Norwegians, to determine whether differences exist between the populations. Finally, we investigate whether existing image quality metrics are able to predict the judgments of the observers.</p>
         <p>The structure of this paper is as follows. We first review related work on image enhancement, overenhancement, quality evaluation of enhanced images, and cultural influences. We then describe our methodology before presenting our results and discussion. Finally, we conclude with a summary and outline future research directions.</p>
      </sec>
      <sec id="jist2025007us2">
         <label>2.</label>
         <title>Related Works</title>
         <sec id="jist2025007us2-1">
            <label>2.1</label>
            <title>Image Enhancement Techniques</title>
            <p>Images can be distorted or enhanced. Many more works are dedicated to image quality assessment via image distortions and degradation&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib2">2</xref>&#x2013;<xref ref-type="bibr" rid="jist2025007bib8">8</xref>] than to image enhancement. Image enhancement is commonly referred to as a processing step that can improve the quality of an image&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib9">9</xref>].</p>
            <p>There are works that review image enhancement techniques&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib10">10</xref>&#x2013;<xref ref-type="bibr" rid="jist2025007bib12">12</xref>]. Liu et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib10">10</xref>] covered previous surveys, existing classification of image enhancement techniques, and current enhanced image databases. They suggested some perspectives on the development of future image enhancement techniques. They also discussed the challenges related to image enhancement. Their proposed classification of image enhancement methods is as follows: image contrast enhancement, image sharpness enhancement, image color correction, image de-artifacting, and image enhancement for multiple quality attributes.</p>
            <p>They observed that newly created enhancement approaches tend to use machine learning frameworks and consider the human visual system. They concluded that it is desirable to develop a universal Image Quality Metric (IQM) and databases for performance assessment of image enhancement approaches.</p>
            <p>A year earlier, Qi et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib11">11</xref>] had conducted a survey of image enhancement methods in terms of three aspects: unsupervised methods, supervised methods, and quality evaluation. Similar to Liu et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib10">10</xref>], they concluded that deep learning based methods are the dominant models.</p>
            <p>Low-light image enhancement techniques can be applied to images captured under poor illumination&#x00A0;conditions to enhance the visual effect of such images&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib12">12</xref>]. Wang et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib12">12</xref>] classified such techniques into seven&#x00A0;categories: gray transformation, histogram equalization, Retinex, frequency-domain, image fusion, defogging model, and machine learning methods. They concluded that the selection of the suitable image enhancement algorithm is application dependent.</p>
            <p>Image enhancement techniques have a wide range&#x00A0;of application areas that signify their demand. For example, image enhancement techniques can be used for underwater&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib13">13</xref>&#x2013;<xref ref-type="bibr" rid="jist2025007bib16">16</xref>], medical&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib17">17</xref>&#x2013;<xref ref-type="bibr" rid="jist2025007bib19">19</xref>], satellite&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib20">20</xref>&#x2013;<xref ref-type="bibr" rid="jist2025007bib22">22</xref>], and natural images&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib23">23</xref>, <xref ref-type="bibr" rid="jist2025007bib24">24</xref>], among others. Next, we discuss overenhancement of images that could happen in practice.</p>
         </sec>
         <sec id="jist2025007us2-2">
            <label>2.2</label>
            <title>Image Overenhancement</title>
            <p>It is important to highlight the term overenhancement. When using image enhancement methods, overenhancement can occur, and therefore the quality of images can decrease. Nonetheless, a study by Azimian et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib25">25</xref>] showed that their observers agreed that they were able to detect when overenhancement occurred. Their Subjective Enhanced Image Dataset (SEID) contains 30 reference images, and the contrast stretching technique was applied to produce high- and low-contrast versions of the reference images. Fifteen observers participated in their experiment, and ethnicity information of the observers was not mentioned.</p>
            <p>To determine whether the images were enhanced well or overenhanced, quality evaluation needs to be performed. Therefore, we next review the literature on quality evaluation of images.</p>
         </sec>
         <sec id="jist2025007us2-3">
            <label>2.3</label>
            <title>Quality Evaluation</title>
            <p>The quality assessment of enhanced images is considered a challenging task&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib26">26</xref>, <xref ref-type="bibr" rid="jist2025007bib27">27</xref>]. It can be conducted subjectively via psychophysical experiments and/or objectively via IQMs. In the case of subjective evaluation, images (e.g., enhanced) are judged by observers in a controlled (e.g., in a lab) or uncontrolled (e.g., in the field, online) environment, thereby producing subjective data. In the case of objective evaluation, existing IQMs can evaluate the same images judged by the observers, thereby producing objective data. Eventually, IQM scores and subjective scores of observers can be checked for correlation.</p>
            <p>The IQMs applied to distorted images prevail over those applied to enhanced images&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib26">26</xref>]. Hence, IQMs are typically designed to assess image distortion, and there are fewer methods to assess image enhancement&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib28">28</xref>]. Amirshahi et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib29">29</xref>] evaluated the performance of IQMs on contrast-enhanced images. Twenty-eight IQMs were evaluated to check their suitability to assess contrast-enhanced images. Their dataset contains 26 original images, and four contrast enhancement methods (Retinex, s-shaped contrast correction, Contrast Limited Adaptive Histogram Equalization [CLAHE], and Natural Rendering of Color Image using Retinex) were used. A paired comparison method was used to judge the quality of the images by 15 observers under dark-room conditions. The ethnicity of the observers is not mentioned in this work. Overall, they found that the tested IQMs did not correlate well with the perceived contrast-enhanced image quality. This again indicates that current IQMs, which are mostly designed for distorted images, struggle with enhanced images. Gu et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib30">30</xref>] highlighted that such IQMs did not yield satisfactory results when applied to enhanced images.</p>
            <p>Five attributes (brightness, contrast, saturation, sharpness, and warmth) were used to enhance 16 natural color images in the dataset introduced by Kadyrova et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib31">31</xref>]. They conducted an online experiment to collect subjective scores on enhanced images using a forced-choice paired comparison method. They had 45 observers; however, their ethnicity is not mentioned. They tested 38 IQMs on their dataset images and concluded that it is a difficult task for IQMs to process enhanced images.</p>
            <p>Sharpness, contrast, brightness, and color, individually or in combination, were adjusted to enhance 26 color images used in a dark-room experiment by Vu et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib32">32</xref>]. In their study, nine observers (ethnicity is not mentioned) judged the quality of enhanced images by employing pairwise comparison and multiple-stimulus continuous quality evaluation paradigms. They used three full-reference IQMs in two modes: in the first mode, the original image was input as the reference; in the second (reverse) mode, the enhanced image was input as the reference. The results revealed that applying the tested IQMs in reverse mode can improve enhanced image quality evaluation.</p>
            <p>Qureshi et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib33">33</xref>] created the Contrast Enhancement Evaluation Database (CEED2016) consisting of 30 original color images. They used six contrast enhancement methods&#x2014;Adaptive Edge Based Contrast Enhancement, CLAHE, Discrete Cosine Transform, Global Histogram Equalization, Top Hat Transformation, and Multiscale Retinex&#x2014;and six contrast metrics. They adapted the pairwise preference based ranking protocol (Condorcet method), and 15 observers participated in their experiment under laboratory conditions. They mentioned that their observers were of different genders, age groups, and backgrounds. However, it is not clear what exactly they mean by different backgrounds. Hence, ethnicity information appears to be absent in this study. Their results showed that some of the metrics tested are inconsistent with subjective scores of the observers.</p>
            <p>The Underwater Image Enhancement Benchmark with 950 real-world underwater images was created by Li et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib34">34</xref>]. They employed 12 image enhancement methods and conducted a paired comparison experiment with 50 observers (ethnicity is not mentioned). Moreover, they proposed an underwater image enhancement network (called Water-Net). Cherepkova et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib35">35</xref>] found that there can be individual differences in contrast preferences of natural images between observers. In addition, they mentioned that these individual differences in contrast preferences should be considered in image quality evaluations, image enhancement, and related fields. They had 22 observers (ethnicity is not mentioned) and used a Three-Alternative Forced Choice procedure with a modified adaptive staircase algorithm.</p>
            <p>In studies where ethnicity information is not mentioned, we assume that the authors did not collect it because, for example, they did not find it important for their study. Furthermore, we assume that Central Asian observers were not present at all or were present in a very limited number in existing studies based on the location information of the articles.</p>
            <p>Therefore, we next provide the literature on culture.</p>
         </sec>
         <sec id="jist2025007us2-4">
            <label>2.4</label>
            <title>Culture</title>
            <p>We used a narrowed definition for culture similar to that by Senthilkumar et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib36">36</xref>]: culture is determined by geopolitical boundaries (e.g., countries, continents).</p>
            <p>In 2006, Aslam&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib37">37</xref>] stated that most of the works have a Western focus. There have also been several studies showing differences between cultures.</p>
            <p>Color is one of the most important attributes in&#x00A0;imaging applications. There are works that show that considerable differences exist between cultures in color preferences [<xref ref-type="bibr" rid="jist2025007bib38">38</xref>&#x2013;<xref ref-type="bibr" rid="jist2025007bib40">40</xref>]. There are also considerable differences between cultures in terms of color semantics&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib37">37</xref>, <xref ref-type="bibr" rid="jist2025007bib41">41</xref>, <xref ref-type="bibr" rid="jist2025007bib42">42</xref>].</p>
            <p>Color emotion was evaluated in a set of countries (Argentina, Spain, Sweden, France, Germany, and others) via psychophysical experiments using semantic scales: heavy&#x2013;light, warm&#x2013;cool, active&#x2013;passive, and like&#x2013;dislike&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib43">43</xref>]. Argentinian observers&#x2019; responses differed from others in the like&#x2013;dislike scale. Argentinians preferred passive color pairs more than others based on factor analysis. Ou et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib43">43</xref>] noted that the effects of gender, age, and professional background are also present along with cultural differences.</p>
            <p>There have been works related to image quality and the impact of culture. Lin and Patterson&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib1">1</xref>] found a difference between Taiwanese and American subjects when assessing the image quality of mobile devices.</p>
            <p>In a study that investigated the differences in mobile display color appearance, Europeans preferred a lower color temperature than Asians over the entire range of illuminants that they tested&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib44">44</xref>]. This study provided a cultural-sensitive approach via their two regression equations (one for Europeans and the other for Asians) to improve the appearance of products. In this way, mobile displays can acquire accurate colorimetric reproduction of images, which in turn can positively impact the image quality process.</p>
            <p>Fernandez et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib45">45</xref>] found that the cultural background of observers causes preference variability that was demonstrated to be statistically significant in color preference reproduction. They defined colorimetric adjustment dimensions by combining five (hue naturalness, mid-tone lightness accuracy, mid-tone detail, image naturalness, mid-tone chroma correctness) of the most important image or color quality terms based on the authors&#x2019; expertise. Their gamma and chroma adjustment dimensions showed the most considerable preference variation between cultures. For instance, lighter image reproduction was preferred by Japanese observers, while American observers preferred slightly less chromatic reproduction than others. They concluded that differences are present between cultures for some color reproduction preferences.</p>
            <p>A recent study by Saupe and Pin&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib46">46</xref>] explored&#x00A0;differences at the national level in quality evaluation using crowd-sourced datasets that contain responses from Japan, Serbia, Venezuela, Russia, India, the USA, and Brazil. They found considerable cross-cultural variations in terms of rating behavior.</p>
            <p>It is worth noting the extreme response style where a group of observers tend to select the most extreme option on the scale. For example, Americans tend to select extreme options in comparison to those from East Asian countries&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib47">47</xref>].</p>
            <p>In summary, it can be clearly seen that the existing work did not focus on recruiting observers with diverse ethnicities or cultural backgrounds in the quality evaluation of enhanced images. As a result, there is a demand for this study, which focuses on the Central Asian population for enhanced image quality evaluation.</p>
         </sec>
         <sec id="jist2025007us2-5">
            <label>2.5</label>
            <title>Gap and Motivation</title>
            <p>Across enhancement evaluation studies, cultural representation is negligible: participant ethnicity is often unreported, and Central Asian observers appear absent or minimal. At the same time, cross-cultural work strongly suggests that preferences and rating styles vary by culture. Therefore, conclusions about enhanced image quality drawn from Western/East/South Asian samples may not generalize to Central Asia.</p>
            <p>The problem we address in this work is &#x201C;How do observers from Central Asia perceive the quality of enhanced images, and how well do existing IQMs perform in predicting their judgments?&#x201D;</p>
            <p>This motivates our contribution: we introduce (i) a dedicated dataset of enhanced image quality scores from Central Asian observers, (ii) analysis benchmarking IQMs on the collected subjective scores, and (iii) comparative analysis of Central Asian observer data with respect to another population.</p>
         </sec>
      </sec>
      <sec id="jist2025007us3">
         <label>3.</label>
         <title>Methodology</title>
          <p>Our workflow is illustrated in Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig1">1</xref>. We start with the selection of a relevant dataset. Afterwards, image enhancement methods are applied to enhance the images. Next, a psychophysical experiment is conducted, and we analyze the resulting data.</p>
         <fig id="jist2025007fig1"><label>Figure&#x00A0;1.</label>
            <caption id="jist2025007fc1">
               <p>Our methodology workflow. The steps proceed from left to right.</p>
            </caption>
            <graphic id="jist2025007f1_online" content-type="online"
                     xlink:href="jist2025007f1_online.jpg"/>
          </fig>
          <p>As there are already existing datasets focused on image enhancement, we chose to work with the original images from the SEID&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib25">25</xref>] dataset, which is a mixture of images from the following two datasets: CEED&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib48">48</xref>] and the Colourlab Contrast Enhanced Image Dataset&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib29">29</xref>]. In this way, the authors of the SEID dataset aimed to have diversity in the dataset in terms of image colorfulness and visual contents. The dataset introduced in this work is named the &#x2018;Central Asian Contrast-Enhanced Image Quality Dataset (CACEIQD)&#x2019;.</p>
          <p>We applied several image enhancement methods on the original images (the original images had a resolution of 512 &#x00D7; 512 pixels). Adaptive Gamma Correction with Weighting Distribution (AGCWD)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib49">49</xref>] was selected because it produces enhanced images of higher quality, as demonstrated by the experimental results in&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib49">49</xref>]. The AGCWD method enhances the contrast of images and improves the brightness through gamma correction and the probability distribution of luminance pixels. The CLAHE&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib50">50</xref>] and Retinex&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib51">51</xref>] methods were selected because they are among the most common methods for image enhancement. Fuzzy-Contextual Contrast Enhancement (FCCE)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib52">52</xref>] was chosen as it preserves the natural characteristics of the image while enhancing contrast. Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig2">2</xref> demonstrates the image enhancement methods used.</p>
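          <p>For illustration, the following is a minimal sketch of applying CLAHE to the lightness channel of a color image, assuming the OpenCV library; the file names are hypothetical, and this shows the general technique rather than the exact configuration used in this work.</p>
          <preformat># Minimal sketch: CLAHE applied to the lightness channel only (OpenCV).
# The 512 x 512 resolution matches our dataset; file names are placeholders.
import cv2

img = cv2.imread("scene_01.png")                       # BGR, uint8
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)             # enhance lightness, keep chroma
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                                  # contrast-limited equalization
enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("scene_01_clahe.png", enhanced)</preformat>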
         <fig id="jist2025007fig2"><label>Figure&#x00A0;2.</label>
            <caption id="jist2025007fc2">
               <p>Image enhancement methods: original image (bottom) and four enhanced versions (top).</p>
            </caption>
            <graphic id="jist2025007f2_online" content-type="online"
                     xlink:href="jist2025007f2_online.jpg"/>
          </fig><p>Compared to existing datasets focusing on contrast enhancement, namely&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib29">29</xref>, <xref ref-type="bibr" rid="jist2025007bib48">48</xref>, <xref ref-type="bibr" rid="jist2025007bib51">51</xref>], the inclusion of FCCE and AGCWD is new, whereas CLAHE and Retinex overlap with the methods used in those datasets.</p>
          <p>In Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig3">3</xref>, we show the contrast levels in the images using the RAMMG metric&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib53">53</xref>]. The contrast is enhanced in all 30 scenes compared to the original images.</p>
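          <p>As an illustration of the contrast measurement, the following is a simplified sketch of a multilevel local-contrast measure in the spirit of the RAMMG metric&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib53">53</xref>], assuming OpenCV and NumPy; it is not the reference implementation, and the neighborhood definition here is a simplification.</p>
          <preformat># Simplified multilevel contrast sketch inspired by RAMMG [53]: average the
# local contrast of the lightness channel over several pyramid levels.
import cv2
import numpy as np

def local_contrast(lightness):
    # mean absolute difference between each pixel and the mean of its 8 neighbors
    kernel = np.ones((3, 3), np.float32) / 8.0
    kernel[1, 1] = 0.0
    neighbor_mean = cv2.filter2D(lightness, -1, kernel)
    return float(np.abs(lightness - neighbor_mean).mean())

def multilevel_contrast(img_bgr, levels=5):
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    l = lab[:, :, 0]
    scores = []
    for _ in range(levels):
        scores.append(local_contrast(l))
        l = cv2.pyrDown(l)            # next (coarser) pyramid level
    return float(np.mean(scores))     # higher value = greater contrast</preformat>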
         <fig id="jist2025007fig3"><label>Figure&#x00A0;3.</label>
            <caption id="jist2025007fc3">
                <p>The contrast levels in the images via the RAMMG metric. Blue = original, orange = AGCWD, yellow = CLAHE, purple = FCCE, and green = Retinex. The <italic>Y</italic>-axis shows RAMMG metric values (higher values indicate greater contrast).</p>
            </caption>
            <graphic id="jist2025007f3_online" content-type="online"
                     xlink:href="jist2025007f3_online.jpg"/>
         </fig><p>After the images were enhanced, we conducted a psychophysical experiment with 30 observers (12 males, 18 females) with an average age of 24.5 years. The observers had normal color vision. A Snellen chart and an Ishihara test were used to check their visual acuity and color vision, respectively. The recruited Central Asian observers were Kazakhs except for four observers (three from Tajikistan, one from Kyrgyzstan). All observers can be considered non-experts (i.e., without previous experience in image quality). Compared to existing datasets, which have observers of mixed background or have not stated the background of the observers, our dataset, to the best of our knowledge, is the only one with a majority of Central Asian observers.</p>
          <p>Before starting the experiment, consent was obtained from the observers. The experiment was conducted in a dark room on an AOC 24 LCD monitor with the following instruction: &#x201C;Please choose the image with the highest quality.&#x201D; QuickEval&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib54">54</xref>], a web platform, was used for the experiment. We chose the paired comparison method as it was considered the easiest for the observers. The original images were included in the dataset. To prevent potential bias, the observers were not informed that the original images were included or that contrast was the attribute being varied.</p>
         <p>The distance between the observer&#x2019;s eyes and the monitor was around 50 cm. There was no time restriction, and the average duration was approximately 17 minutes per observer.</p>
          <p>Furthermore, to compare the results of the Central Asian observers, we conducted an additional experiment with eight Norwegian observers. The experiment was carried out under conditions similar to those of the Central Asian experiment but on a Dell U2419HC monitor. Results from the two experiments are compared.</p>
          <p>We process the data from the experiment into <italic>z</italic>-scores&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib55">55</xref>]. Relevant IQMs are then computed, and their scores are tested for correlation with the observer data.</p>
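          <p>A minimal sketch of this conversion is shown below, assuming Thurstonian scaling of a paired-comparison frequency matrix&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib55">55</xref>]; the data layout (a square win-count matrix) is an assumption for illustration.</p>
          <preformat># Sketch: paired-comparison frequencies to z-scores (Thurstone Case V [55]).
# F[i, j] = number of times version i was preferred over version j.
import numpy as np
from scipy.stats import norm

def pc_to_zscores(F):
    n = F + F.T                           # comparisons per pair
    P = np.full_like(F, 0.5, dtype=float)
    mask = n > 0
    P[mask] = F[mask] / n[mask]           # preference proportions
    P = np.clip(P, 0.01, 0.99)            # avoid infinite z at 0 or 1
    Z = norm.ppf(P)                       # proportion to standard normal deviate
    np.fill_diagonal(Z, 0.0)
    return Z.mean(axis=1)                 # one scale value per version</preformat>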
      </sec>
      <sec id="jist2025007us4">
         <label>4.</label>
         <title>Results and Discussion</title>
         <sec id="jist2025007us4-1">
            <label>4.1</label>
            <title>Subjective Scores of Observers</title>
             <p>Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig4">4</xref> shows the <italic>z</italic>-scores&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib55">55</xref>] of all images for the four enhancement methods and the original, plotted with a 95% confidence interval according to Montag&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib56">56</xref>]. The higher the <italic>z</italic>-score, the higher the quality. When individual scenes are considered, the ranking of the enhancement methods (and the original) varies from scene to scene. However, when all the image scenes are considered together, the images enhanced by CLAHE were rated the highest quality, followed by Retinex. The original images were not rated the highest quality, while FCCE-processed images were ranked as having the lowest quality (Fig.&#x00A0;<xref ref-type="fig" rid="jist2025007fig4">4</xref>). The FCCE-enhanced images were probably rated the lowest quality due to overenhancement and related artifacts. During contrast enhancement, artifacts such as halo effects, ringing, blocking, and color shift might appear.</p>
            <fig id="jist2025007fig4"><label>Figure&#x00A0;4.</label>
               <caption id="jist2025007fc4">
                  <p>The <italic>z</italic>-scores for all image scenes together for Central Asian observers. The scores are plotted with a 95% confidence interval.</p>
               </caption>
               <graphic id="jist2025007f4_online" content-type="online"
                        xlink:href="jist2025007f4_online.jpg"/>
             </fig><p>We also fitted the Bradley&#x2013;Terry model, a probability model for the outcome of pairwise comparisons between items, to the overall frequency matrix. We report the ability (<italic>&#x03B2;</italic>) of an item to win in a paired comparison. Table&#x00A0;<xref ref-type="table" rid="jist2025007tabI">I</xref> indicates that CLAHE has the highest value, consistent with the <italic>z</italic>-score plot in Fig.&#x00A0;<xref ref-type="fig" rid="jist2025007fig4">4</xref>. Furthermore, we calculate the pairwise significance matrix (<italic>p</italic>-values, Table&#x00A0;<xref ref-type="table" rid="jist2025007tabII">II</xref>), which shows that there is a statistically significant difference between CLAHE and the other methods.</p>
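             <p>As an illustration, the following is a minimal sketch of fitting the Bradley&#x2013;Terry model to a win-count matrix via the standard minorization&#x2013;maximization updates; the matrix layout is an assumption for illustration, and the zero-centered log abilities correspond to <italic>&#x03B2;</italic> values of the kind reported in Table&#x00A0;I.</p>
             <preformat># Sketch: Bradley-Terry abilities from a pairwise win-count matrix,
# W[i, j] = number of times item i beat item j (MM updates).
import numpy as np

def bradley_terry(W, iters=1000):
    n = W.shape[0]
    p = np.ones(n)                        # item strengths
    wins = W.sum(axis=1)
    for _ in range(iters):
        for i in range(n):
            denom = 0.0
            for j in range(n):
                if j != i:
                    denom += (W[i, j] + W[j, i]) / (p[i] + p[j])
            p[i] = wins[i] / denom
        p /= p.sum()                      # fix the arbitrary scale
    beta = np.log(p)
    return beta - beta.mean()             # zero-centered log abilities</preformat>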
            <table-wrap id="jist2025007tabI">
               <label>Table&#x00A0;I.</label>
               <caption id="jist2025007tcI">
                  <p>Bradley&#x2013;Terry <italic>&#x03B2;</italic> values. Higher value indicates better performance.</p>
               </caption>
               <table frame="void">
                  <colgroup>
                     <col align="left"/>
                     <col align="center"/>
                  </colgroup>
                  <thead>
                     <tr>
                        <th align="left">Method</th>
                        <th align="center"><italic>&#x03B2;</italic></th>
                     </tr>
                  </thead>
                  <tbody>
                      <tr>
                         <td align="left">CLAHE</td>
                         <td align="center">0.17709</td>
                      </tr>
                      <tr>
                         <td align="left">Retinex</td>
                         <td align="center">0.094622</td>
                      </tr>
                      <tr>
                         <td align="left">Original</td>
                         <td align="center">&#x2212;0.049994</td>
                      </tr>
                      <tr>
                         <td align="left">AGCWD</td>
                         <td align="center">&#x2212;0.051779</td>
                      </tr>
                      <tr>
                         <td align="left">FCCE</td>
                         <td align="center">&#x2212;0.16994</td>
                      </tr>
                  </tbody>
               </table>
            </table-wrap><table-wrap id="jist2025007tabII">
               <label>Table&#x00A0;II.</label>
               <caption id="jist2025007tcII">
                  <p>Pairwise significance matrix (<italic>p</italic>-values).</p>
               </caption>
               <table frame="void">
                  <colgroup>
                     <col align="left"/>
                     <col align="center"/>
                     <col align="center"/>
                     <col align="center"/>
                     <col align="center"/>
                     <col align="center"/>
                  </colgroup>
                  <thead>
                     <tr>
                        <th align="left"/>
                        <th align="center">                             Original</th>
                        <th align="center">AGCWD</th>
                        <th align="center">CLAHE</th>
                        <th align="center">FCCE</th>
                        <th align="center">Retinex</th>
                     </tr>
                  </thead>
                  <tbody>
                     <tr>
                        <td align="left">Original</td>
                        <td align="center">&#x2014;</td>
                        <td align="center">0.960</td>
                        <td align="center">1.265e&#x2212;10</td>
                        <td align="center">0.001</td>
                        <td align="center">4.096e&#x2212;05</td>
                     </tr>
                     <tr>
                        <td align="left">AGCWD</td>
                        <td align="center">0.960</td>
                        <td align="center">&#x2014;</td>
                        <td align="center">9.065e&#x2212;11</td>
                        <td align="center">0.001</td>
                        <td align="center">3.287e&#x2212;05</td>
                     </tr>
                     <tr>
                        <td align="left">CLAHE</td>
                        <td align="center">1.265e&#x2212;10</td>
                        <td align="center">9.06e&#x2212;11</td>
                        <td align="center">&#x2014;</td>
                        <td align="center">0</td>
                        <td align="center">0.020</td>
                     </tr>
                     <tr>
                        <td align="left">FCCE</td>
                        <td align="center">0.001</td>
                        <td align="center">0.001</td>
                        <td align="center">0</td>
                        <td align="center">&#x2014;</td>
                        <td align="center">6.861e&#x2212;14</td>
                     </tr>
                     <tr>
                        <td align="left">Retinex</td>
                        <td align="center">4.095e&#x2212;05</td>
                        <td align="center">3.287e&#x2212;05</td>
                        <td align="center">0.020</td>
                        <td align="center">6.861e&#x2212;14</td>
                        <td align="center">&#x2014;</td>
                     </tr>
                  </tbody>
               </table>
             </table-wrap><p>Hierarchical clustering (unweighted average distance was used to calculate distances between the clusters with the Euclidean distance metric) revealed three clusters of observers (Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig5">5</xref>). Observer 9 evaluated the quality of the images considerably differently from the others. Additionally, observers 24 and 29, 8 and 11, 7 and 20, and 6 and 30 evaluated images more or less in the same way, as the linkage distances are smaller in the cluster marked in red. Observers 13 and 22 also evaluated images in a similar way in the cluster marked in blue.</p>
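             <p>This clustering can be reproduced with a short sketch, assuming SciPy; the observer score matrix shown here is a placeholder for illustration.</p>
             <preformat># Sketch: observer clustering with unweighted average linkage (UPGMA)
# and Euclidean distances, as used for the dendrogram in Figure 5.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

scores = np.random.rand(30, 150)          # placeholder: one row per observer
Z = linkage(scores, method="average", metric="euclidean")
labels = fcluster(Z, t=3, criterion="maxclust")   # cut into three clusters
dendrogram(Z)                             # dissimilar observers join higher up
plt.show()</preformat>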
            <fig id="jist2025007fig5"><label>Figure&#x00A0;5.</label>
               <caption id="jist2025007fc5">
                  <p>The hierarchical clustering that shows how similarly observers evaluated quality of images. The higher the distance, the more dissimilar the evaluation.</p>
               </caption>
               <graphic id="jist2025007f5_online" content-type="online"
                        xlink:href="jist2025007f5_online.jpg"/>
            </fig><p>Although images enhanced by CLAHE followed by Retinex were ranked the highest quality when considering all image scenes together, this was not the case when the results of individual observers were analyzed.</p>
             <p>From the dendrogram (Fig.&#x00A0;<xref ref-type="fig" rid="jist2025007fig5">5</xref>) and the results of the evaluations of each observer for all images, we can divide the observers into groups. One group (blue cluster) perceived images enhanced with AGCWD (followed by Retinex) as better quality, whereas they perceived those enhanced with CLAHE and FCCE as lower quality. We can assume that observers in the blue cluster rated brighter (due to AGCWD) images as better quality and darker (due to FCCE) images as lower quality. Senior (13 and 22) and younger (3 and 16) observers also provided similar evaluations. The observers in the blue cluster are all females except one.</p>
            <p>In contrast to the blue cluster, another group (subset 1 of the red cluster&#x2014;observers 4, 6, 30, 14, 5, and 12) perceived images enhanced with AGCWD and Retinex as lower quality. Observers in subset 2 of the red cluster (observers 1, 23, 7, 20, 10, and 28) perceived images enhanced with FCCE as better quality and the original as lower quality. Subset 1 has one male observer, and subset 2 of the red cluster has two male observers.</p>
             <p>The remaining observers in the red cluster (subset 3) did not show a clear pattern, and no significant differences were found. It is worth mentioning that subset 3 of the red cluster contains eight males and four females. The only observer (9, female) in the black cluster found the original images to be of better quality, unlike subset 2 of the red cluster.</p>
            <p>In this light, it seems that individual preference for the perceived quality of contrast-enhanced images can be an unavoidable factor. This emphasizes that such individual preferences should be considered in image quality evaluations. This is in line with the findings of Cherepkova et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib35">35</xref>] that there may be individual differences in contrast preferences among observers.</p>
             <p>Moreover, based on these results, we propose that there may be differences between genders in contrast-enhanced image quality evaluations among Central Asian observers. To test this assumption, more work focusing specifically on gender is needed.</p>
            <p>Analyzing each image in the experiment, we note that in only three scenes (numbers 10, 13, and 30) the original has the highest <italic>z</italic>-score (Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig6">6</xref>, plotted with 95% confidence intervals calculated according to Montag&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib56">56</xref>]). For example, Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig7">7</xref> shows the original and enhanced versions of image 30. Based on the RAMMG metric values in Fig.&#x00A0;<xref ref-type="fig" rid="jist2025007fig3">3</xref>, the original version of image 30 has a lower level of contrast compared to the enhanced versions, which can also be perceived from Fig.&#x00A0;<xref ref-type="fig" rid="jist2025007fig7">7</xref>. Image 10 follows a similar pattern while for image 13, the version enhanced via FCCE has a slightly lower contrast compared to the original version.</p>
            <fig id="jist2025007fig6"><label>Figure&#x00A0;6.</label>
               <caption id="jist2025007fc6">
                  <p>The <italic>z</italic>-scores for 30 images with 95% confidence intervals for Central Asian observers.</p>
               </caption>
               <graphic id="jist2025007f6_online" content-type="online"
                        xlink:href="jist2025007f6_online.jpg"/>
            </fig><fig id="jist2025007fig7"><label>Figure&#x00A0;7.</label>
               <caption id="jist2025007fc7">
                  <p>Image 30: original and enhanced versions.</p>
               </caption>
               <graphic id="jist2025007f7_online" content-type="online"
                        xlink:href="jist2025007f7_online.jpg"/>
             </fig><p>It is also worth noting that, for several scenes, the difference between the original and one or more of the enhanced images is small, while for a few other enhanced images the differences from the original are clearly greater. We can also observe that for some images, enhancement can significantly decrease quality, such as in images 9 and 12. It is also apparent that none of the enhancement techniques provides the best results for all images.</p>
             <p>More in-depth analysis of the enhanced images, such as image 9 (Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig8">8</xref>), reveals that AGCWD is capable of enhancing details in the shadow region while FCCE exhibits the opposite behavior and does not enhance shadow details. Observers were highly consistent in their ratings of FCCE and AGCWD for these images. A similar observation is made for images 3, 12, and 14, where FCCE is unable to enhance shadow details. In image 19, where FCCE has the highest <italic>z</italic>-score, it produces a natural image while Retinex and AGCWD produce brighter images with more pixels being clipped in the highlights compared to FCCE. For image 22, Retinex and CLAHE provide the highest <italic>z</italic>-scores with images that have acceptable contrast and a natural appearance; in contrast, FCCE produces a darker image while AGCWD renders it excessively bright, with both versions being less natural than those generated by Retinex and CLAHE.</p>
            <fig id="jist2025007fig8"><label>Figure&#x00A0;8.</label>
               <caption id="jist2025007fc8">
                  <p>Comparison of AGCWD and FCCE for image 9.</p>
               </caption>
               <graphic id="jist2025007f8_online" content-type="online"
                        xlink:href="jist2025007f8_online.jpg"/>
             </fig><p>Analyzing features of the original images, such as mean lightness, mean saturation, and detail level, and checking their correlation with the resulting <italic>z</italic>-scores of each enhancement method did not reveal a relationship. These simple features do not seem to carry direct information for predicting enhancement quality.</p>
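             <p>For transparency, the following is a sketch of how such simple features can be computed, assuming OpenCV; the exact feature definitions, in particular the detail-level proxy, are illustrative assumptions.</p>
             <preformat># Sketch: simple image features tested against the z-scores.
import cv2
import numpy as np

def simple_features(img_bgr):
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    mean_lightness = float(lab[:, :, 0].mean())
    mean_saturation = float(hsv[:, :, 1].mean())
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    detail_level = float((edges > 0).mean())   # edge density as a detail proxy
    return mean_lightness, mean_saturation, detail_level</preformat>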
         </sec>
         <sec id="jist2025007us4-2">
            <label>4.2</label>
            <title>Comparison between Central Asian and Norwegian Observers</title>
             <p>Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig9">9</xref> shows the <italic>z</italic>-score plots for the eight Norwegian observers for all image scenes together. We can see that the highest <italic>z</italic>-score is for Retinex, followed by AGCWD, CLAHE, the original, and finally FCCE. Compared to the Central Asian observers, who scored CLAHE the highest, Norwegian observers scored CLAHE lower. Neither group prefers FCCE; Norwegian observers rated it even lower than the Central Asian observers did. Norwegians also rated AGCWD slightly above average, whereas Central Asians rated it slightly below average. It is worth noting that there are more observers in the Central Asian experiment than in the Norwegian experiment.</p>
            <fig id="jist2025007fig9"><label>Figure&#x00A0;9.</label>
               <caption id="jist2025007fc9">
                  <p>The <italic>z</italic>-scores for all image scenes together for the Norwegian observers. The scores are plotted with a 95% confidence interval.</p>
               </caption>
               <graphic id="jist2025007f9_online" content-type="online"
                        xlink:href="jist2025007f9_online.jpg"/>
             </fig><p>Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig10">10</xref> shows the <italic>z</italic>-scores for each image in the dataset. We can see that the Norwegian observers consistently rated FCCE low. This includes image 19, which Central Asian observers strongly preferred. This is a scene for which Central Asian observers seem to prefer a darker image compared to Norwegians. Moreover, for image 28, where Norwegians preferred AGCWD (an enhancement that increases lightness), the differences between the enhancement algorithms are smaller for the Central Asian observers. We also see similarities between the observer groups for certain images, which indicates that image content could play a role.</p>
            <fig id="jist2025007fig10"><label>Figure&#x00A0;10.</label>
               <caption id="jist2025007fc10">
                   <p>The <italic>z</italic>-scores for each image scene for the Norwegian observers. The scores are plotted with a 95% confidence interval.</p>
               </caption>
               <graphic id="jist2025007f10_online" content-type="online"
                        xlink:href="jist2025007f10_online.jpg"/>
            </fig></sec>
         <sec id="jist2025007us4-3">
            <label>4.3</label>
            <title>Objective Scores of IQMs</title>
            <p>Given that the original image is part of the evaluated images, we have focused on no-reference IQMs. We have calculated the following IQMs: BRISQUE&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib57">57</xref>], Language-Image Quality Evaluator (LIQE)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib58">58</xref>], CLIPIQA&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib59">59</xref>], CNNIQA&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib60">60</xref>], Neural Image Assessment (NIMA)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib61">61</xref>], NRQM&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib62">62</xref>], PIQE&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib63">63</xref>], PAQ2PIQ&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib64">64</xref>], ARNIQA&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib65">65</xref>], ENTROPY&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib66">66</xref>], Multi-dimension Attention Network for No-Reference Image Quality Assessment (MANIQA)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib67">67</xref>], TOPIQ&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib68">68</xref>], UNIQUE&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib69">69</xref>], Weighted Average Deep Image QuAlity Measure (WADIQAM)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib70">70</xref>], LAION-Aesthetics predictor (LAIONAES)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib71">71</xref>], Perceptual Image (PI)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib72">72</xref>], Fog Aware Density Evaluator (FADE)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib73">73</xref>], High Order Statistics Aggregation (HOSA)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib74">74</xref>], Natural Image Quality Evaluator (NIQE)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib75">75</xref>], Perceptual Sharpness Index (PSI)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib76">76</xref>], and Just Noticeable Blur Metric (JNBM)&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib77">77</xref>]. These metrics span a wide range, including blur metrics, natural image predictors, aesthetics, and general quality.</p>
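             <p>Most of these IQMs are available in open-source packages. A minimal sketch is given below, assuming the pyiqa (IQA-PyTorch) package; the metric names, and the availability of each metric in that package, are assumptions about that library rather than a description of our exact setup.</p>
             <preformat># Sketch: computing several no-reference IQMs with pyiqa (IQA-PyTorch).
import torch
import pyiqa

device = "cuda" if torch.cuda.is_available() else "cpu"
names = ["brisque", "liqe", "clipiqa", "niqe", "maniqa"]   # assumed names
metrics = {n: pyiqa.create_metric(n, device=device) for n in names}

# Placeholder for a dataset image as a 1 x 3 x H x W tensor in [0, 1].
img = torch.rand(1, 3, 512, 512, device=device)
scores = {n: m(img).item() for n, m in metrics.items()}
print(scores)</preformat>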
            <p>Investigation of the overall Pearson and Spearman correlation coefficients between the IQMs and subjective scores (<italic>z</italic>-scores) reveals that none of the IQMs perform well. These results align with previous research showing that it is challenging to assess the quality of enhanced images&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib29">29</xref>&#x2013;<xref ref-type="bibr" rid="jist2025007bib31">31</xref>]. This might indicate that current IQMs should be customized for a specific application.</p>
             <p>We also analyzed the correlation per image for each of the tested IQMs. A boxplot of the Spearman correlation per image is shown in Figure&#x00A0;<xref ref-type="fig" rid="jist2025007fig11">11</xref>. We found that some IQMs correlate well at the image level, but when the correlation is calculated overall, performance drops. This implies that, similar to reports in the literature&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib78">78</xref>&#x2013;<xref ref-type="bibr" rid="jist2025007bib80">80</xref>], scale problems may persist in enhanced images&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib31">31</xref>]. The highest performing IQM is LIQE, which is based on vision&#x2013;language correspondence and has been trained to simultaneously conduct blind image quality assessment, scene classification, and distortion identification. Despite being trained on datasets with distortions, it performs the best on our dataset. This could be due to the combination of scene classification with identification of enhancement-related distortion.</p>
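             <p>The correlation analysis itself is straightforward; a sketch is shown below, assuming SciPy and placeholder arrays of IQM scores and <italic>z</italic>-scores (30 scenes &#x00D7; 5 versions).</p>
             <preformat># Sketch: overall and per-image correlation between IQM scores and z-scores.
import numpy as np
from scipy.stats import pearsonr, spearmanr

iqm = np.random.rand(30, 5)     # placeholder IQM scores (scene x version)
z = np.random.rand(30, 5)       # placeholder observer z-scores

overall_pearson = pearsonr(iqm.ravel(), z.ravel())[0]
overall_spearman = spearmanr(iqm.ravel(), z.ravel())[0]
per_image = [spearmanr(iqm[i], z[i])[0] for i in range(30)]  # as in Fig. 11</preformat>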
            <fig id="jist2025007fig11"><label>Figure&#x00A0;11.</label>
               <caption id="jist2025007fc11">
                  <p>Boxplot of Spearman correlation per image.</p>
               </caption>
               <graphic id="jist2025007f11_online" content-type="online"
                        xlink:href="jist2025007f11_online.jpg"/>
             </fig>
         </sec>
      </sec>
      <sec id="jist2025007us5">
         <label>5.</label>
         <title>Conclusions and Future Perspectives</title>
          <p>Our findings with Central Asian observers agree with observations from other studies&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib26">26</xref>, <xref ref-type="bibr" rid="jist2025007bib27">27</xref>] that the evaluation of enhanced image quality is a challenging task. Moreover, original images tend to be perceived as lower quality compared to the enhanced versions when considered overall, which also aligns with the results of existing studies. We determine that individual preferences might exist in the evaluations of contrast-enhanced images.</p>
          <p>The IQMs tested did not correlate well with the observers&#x2019; perception on our introduced dataset, which indicates that current IQMs struggle to measure the quality of enhanced images, consistent with the results of existing work&#x00A0;[<xref ref-type="bibr" rid="jist2025007bib31">31</xref>]. As a result, we can assume that current IQMs should be customized for a specific application.</p>
          <p>In addition, we conducted another experiment with Norwegian observers. The comparative analysis showed that differences are present between the two observer groups. These variations could be due to cultural differences. For some images, the observer groups showed similarities in quality perception, which might illustrate that image content could play a role.</p>
          <p>One of the shortcomings of this study is that, due to practical limitations, Kazakh observers constituted the majority of the Central Asian group. Therefore, expanding this research with more representatives from other Central Asian countries is the aim of future work.</p>
         <p>In conclusion, whether designing a universal IQM or creating customized IQMs for image enhancement evaluation, the dataset developed in this work would be highly beneficial.</p>
      </sec>
   </body>
   <back>
      <ack>
         <title>Acknowledgment</title>
         <p>The authors would like to thank the observers for their participation in the experiment. Marius Pedersen is supported by the Research Council of Norway through the &#x201C;Quality and Content&#x201D; project (Grant Number 324663).</p>
      </ack>
      <ref-list content-type="numerical">
         <title>References</title>
         <ref id="jist2025007bib1">
            <label>1</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Lin</surname>
                     <given-names>P.-H.</given-names>
                  </name>
                  <name>
                     <surname>Patterson</surname>
                     <given-names>P.</given-names>
                  </name>
               </person-group>
               <year>2012</year>
               <article-title>Investigation of perceived image quality and colourfulness in mobile displays for different cultures, ambient illumination, and resolution</article-title>
               <source>Ergonomics</source>
               <volume>55</volume>
               <fpage>1502</fpage>
               <lpage>1512</lpage>
               <page-range>1502&#x2013;12</page-range>
               <pub-id pub-id-type="doi">10.1080/00140139.2012.724715</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib2">
            <label>2</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Larson</surname>
                     <given-names>E. C.</given-names>
                  </name>
                  <name>
                     <surname>Chandler</surname>
                     <given-names>D. M.</given-names>
                  </name>
               </person-group>
               <year>2010</year>
               <article-title>Most apparent distortion: full-reference image quality assessment and the role of strategy</article-title>
                <source>J. Electron. Imaging</source>
               <volume>19</volume>
               <elocation-id content-type="artnum">011006</elocation-id>
               <pub-id pub-id-type="doi">10.1117/1.3267105</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib3">
            <label>3</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Damera-Venkata</surname>
                     <given-names>N.</given-names>
                  </name>
                  <name>
                     <surname>Kite</surname>
                     <given-names>T. D.</given-names>
                  </name>
                  <name>
                     <surname>Geisler</surname>
                     <given-names>W. S.</given-names>
                  </name>
                  <name>
                     <surname>Evans</surname>
                     <given-names>B. L.</given-names>
                  </name>
                  <name>
                     <surname>Bovik</surname>
                     <given-names>A. C.</given-names>
                  </name>
               </person-group>
               <year>2000</year>
               <article-title>Image quality assessment based on a degradation model</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>9</volume>
               <fpage>636</fpage>
               <lpage>650</lpage>
               <page-range>636&#x2013;50</page-range>
               <pub-id pub-id-type="doi">10.1109/83.841940</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib4">
            <label>4</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Ahn</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Choi</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Yoon</surname>
                     <given-names>K.</given-names>
                  </name>
               </person-group>
               <year>2021</year>
               <article-title>Deep learning-based distortion sensitivity prediction for full-reference image quality assessment</article-title>
                <source>Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW)</source>
               <fpage>344</fpage>
               <lpage>353</lpage>
               <page-range>344&#x2013;53</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/CVPRW53098.2021.00044</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib5">
            <label>5</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Agnolucci</surname>
                     <given-names>L.</given-names>
                  </name>
                  <name>
                     <surname>Galteri</surname>
                     <given-names>L.</given-names>
                  </name>
                  <name>
                     <surname>Bertini</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Del Bimbo</surname>
                     <given-names>A.</given-names>
                  </name>
               </person-group>
               <year>2024</year>
               <article-title>ARNIQA: learning distortion manifold for image quality assessment</article-title>
               <source>Proc. IEEE/CVF Winter Conf. on Applications of Computer Vision</source>
               <fpage>189</fpage>
               <lpage>198</lpage>
               <page-range>189&#x2013;98</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/WACV57701.2024.00026</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib6">
            <label>6</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Min</surname>
                     <given-names>X.</given-names>
                  </name>
                  <name>
                     <surname>Zhai</surname>
                     <given-names>G.</given-names>
                  </name>
                  <name>
                     <surname>Gu</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Liu</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Yang</surname>
                     <given-names>X.</given-names>
                  </name>
               </person-group>
               <year>2018</year>
               <article-title>Blind image quality estimation via distortion aggravation</article-title>
               <source>IEEE Trans. Broadcast.</source>
               <volume>64</volume>
               <fpage>508</fpage>
               <lpage>517</lpage>
               <page-range>508&#x2013;17</page-range>
               <pub-id pub-id-type="doi">10.1109/TBC.2018.2816783</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib7">
            <label>7</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Liu</surname>
                     <given-names>L.</given-names>
                  </name>
                  <name>
                     <surname>Liu</surname>
                     <given-names>B.</given-names>
                  </name>
                  <name>
                     <surname>Huang</surname>
                     <given-names>H.</given-names>
                  </name>
                  <name>
                     <surname>Bovik</surname>
                     <given-names>A. C.</given-names>
                  </name>
               </person-group>
               <year>2014</year>
               <article-title>No-reference image quality assessment based on spatial and spectral entropies</article-title>
               <source>Signal Process. Image Commun.</source>
               <volume>29</volume>
               <fpage>856</fpage>
               <lpage>863</lpage>
               <page-range>856&#x2013;63</page-range>
               <pub-id pub-id-type="doi">10.1016/j.image.2014.06.006</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib8">
            <label>8</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Sheikh</surname>
                     <given-names>H. R.</given-names>
                  </name>
                  <name>
                     <surname>Sabir</surname>
                     <given-names>M. F.</given-names>
                  </name>
                  <name>
                     <surname>Bovik</surname>
                     <given-names>A. C.</given-names>
                  </name>
               </person-group>
               <year>2006</year>
               <article-title>A statistical evaluation of recent full reference image quality assessment algorithms</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>15</volume>
               <fpage>3440</fpage>
               <lpage>3451</lpage>
               <page-range>3440&#x2013;51</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2006.881959</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib9">
            <label>9</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Chandler</surname>
                     <given-names>D. M.</given-names>
                  </name>
                  <name>
                     <surname>Alam</surname>
                     <given-names>M. M.</given-names>
                  </name>
                  <name>
                     <surname>Phan</surname>
                     <given-names>T. D.</given-names>
                  </name>
               </person-group>
               <year>2014</year>
               <article-title>Seven challenges for image quality research</article-title>
               <source>Proc. SPIE</source>
               <volume>9014</volume>
               <elocation-id content-type="artnum">901402</elocation-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib10">
            <label>10</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Liu</surname>
                     <given-names>X.</given-names>
                  </name>
                  <name>
                     <surname>Pedersen</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Wang</surname>
                     <given-names>R.</given-names>
                  </name>
               </person-group>
               <year>2022</year>
                <article-title>Survey of natural image enhancement techniques: classification, evaluation, challenges, and perspectives</article-title>
               <source>Digit. Signal Process.</source>
               <volume>127</volume>
               <elocation-id content-type="artnum">103547</elocation-id>
               <pub-id pub-id-type="doi">10.1016/j.dsp.2022.103547</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib11">
            <label>11</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Qi</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Yang</surname>
                     <given-names>Z.</given-names>
                  </name>
                  <name>
                     <surname>Sun</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Lou</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Lian</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Zhao</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Deng</surname>
                     <given-names>X.</given-names>
                  </name>
                  <name>
                     <surname>Ma</surname>
                     <given-names>Y.</given-names>
                  </name>
               </person-group>
               <year>2021</year>
               <article-title>A comprehensive overview of image enhancement techniques</article-title>
               <source>Arch. Comput. Meth. Eng.</source>
               <volume>29</volume>
               <fpage>1</fpage>
               <lpage>25</lpage>
               <page-range>1&#x2013;25</page-range>
             </element-citation>
          </ref>
         <ref id="jist2025007bib12">
            <label>12</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Wang</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Wu</surname>
                     <given-names>X.</given-names>
                  </name>
                  <name>
                     <surname>Yuan</surname>
                     <given-names>X.</given-names>
                  </name>
                  <name>
                     <surname>Gao</surname>
                     <given-names>Z.</given-names>
                  </name>
               </person-group>
               <year>2020</year>
               <article-title>An experiment-based review of low-light image enhancement methods</article-title>
               <source>IEEE Access</source>
               <volume>8</volume>
               <fpage>87884</fpage>
               <lpage>87917</lpage>
               <page-range>87884&#x2013;917</page-range>
               <pub-id pub-id-type="doi">10.1109/ACCESS.2020.2992749</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib13">
            <label>13</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Anwar</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Li</surname>
                     <given-names>C.</given-names>
                  </name>
               </person-group>
               <year>2020</year>
               <article-title>Diving deeper into underwater image enhancement: a survey</article-title>
               <source>Signal Process. Image Commun.</source>
               <volume>89</volume>
               <elocation-id content-type="artnum">115978</elocation-id>
               <pub-id pub-id-type="doi">10.1016/j.image.2020.115978</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib14">
            <label>14</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Islam</surname>
                     <given-names>M. J.</given-names>
                  </name>
                  <name>
                     <surname>Xia</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Sattar</surname>
                     <given-names>J.</given-names>
                  </name>
               </person-group>
               <year>2020</year>
               <article-title>Fast underwater image enhancement for improved visual perception</article-title>
               <source>IEEE Robot. Autom. Lett.</source>
               <volume>5</volume>
               <fpage>3227</fpage>
               <lpage>3234</lpage>
               <page-range>3227&#x2013;34</page-range>
               <pub-id pub-id-type="doi">10.1109/LRA.2020.2974710</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib15">
            <label>15</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Zhang</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Zhuang</surname>
                     <given-names>P.</given-names>
                  </name>
                  <name>
                     <surname>Sun</surname>
                     <given-names>H.-H.</given-names>
                  </name>
                  <name>
                     <surname>Li</surname>
                     <given-names>G.</given-names>
                  </name>
                  <name>
                     <surname>Kwong</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Li</surname>
                     <given-names>C.</given-names>
                  </name>
               </person-group>
               <year>2022</year>
               <article-title>Underwater image enhancement via minimal color loss and locally adaptive contrast enhancement</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>31</volume>
               <fpage>3997</fpage>
               <lpage>4010</lpage>
               <page-range>3997&#x2013;4010</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2022.3177129</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib16">
            <label>16</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Li</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Anwar</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Hou</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Cong</surname>
                     <given-names>R.</given-names>
                  </name>
                  <name>
                     <surname>Guo</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Ren</surname>
                     <given-names>W.</given-names>
                  </name>
               </person-group>
               <year>2021</year>
               <article-title>Underwater image enhancement via medium transmission-guided multi-color space embedding</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>30</volume>
               <fpage>4985</fpage>
               <lpage>5000</lpage>
               <page-range>4985&#x2013;5000</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2021.3076367</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib17">
            <label>17</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Ullah</surname>
                     <given-names>Z.</given-names>
                  </name>
                  <name>
                     <surname>Farooq</surname>
                     <given-names>M. U.</given-names>
                  </name>
                  <name>
                     <surname>Lee</surname>
                     <given-names>S.-H.</given-names>
                  </name>
                  <name>
                     <surname>An</surname>
                     <given-names>D.</given-names>
                  </name>
               </person-group>
               <year>2020</year>
               <article-title>A hybrid image enhancement based brain MRI images classification technique</article-title>
               <source>Med. Hypotheses</source>
               <volume>143</volume>
               <elocation-id content-type="artnum">109922</elocation-id>
               <pub-id pub-id-type="doi">10.1016/j.mehy.2020.109922</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib18">
            <label>18</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Lu</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Healy Jr</surname>
                     <given-names>D. M.</given-names>
                  </name>
                  <name>
                     <surname>Weaver</surname>
                     <given-names>J. B.</given-names>
                  </name>
               </person-group>
               <year>1994</year>
               <article-title>Contrast enhancement of medical images using multiscale edge representation</article-title>
               <source>Opt. Eng.</source>
               <volume>33</volume>
               <fpage>2151</fpage>
               <lpage>2161</lpage>
               <page-range>2151&#x2013;61</page-range>
               <pub-id pub-id-type="doi">10.1117/12.172254</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib19">
            <label>19</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Huang</surname>
                     <given-names>Z.</given-names>
                  </name>
                  <name>
                     <surname>Wang</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Hu</surname>
                     <given-names>H.</given-names>
                  </name>
                  <name>
                     <surname>Xu</surname>
                     <given-names>Y.</given-names>
                  </name>
               </person-group>
               <year>2024</year>
               <article-title>RetiGAN: a hybrid image enhancement method for medical images</article-title>
               <source>2024 5th Int&#x2019;l. Conf. on Computer Vision, Image and Deep Learning (CVIDL)</source>
               <fpage>25</fpage>
               <lpage>29</lpage>
               <page-range>25&#x2013;9</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/CVIDL62147.2024.10603883</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib20">
            <label>20</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Demirel</surname>
                     <given-names>H.</given-names>
                  </name>
                  <name>
                     <surname>Ozcinar</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Anbarjafari</surname>
                     <given-names>G.</given-names>
                  </name>
               </person-group>
               <year>2009</year>
               <article-title>Satellite image contrast enhancement using discrete wavelet transform and singular value decomposition</article-title>
               <source>IEEE Geosci. Remote Sensing Lett.</source>
               <volume>7</volume>
               <fpage>333</fpage>
               <lpage>337</lpage>
               <page-range>333&#x2013;7</page-range>
               <pub-id pub-id-type="doi">10.1109/LGRS.2009.2034873</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib21">
            <label>21</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Lisani</surname>
                     <given-names>J.-L.</given-names>
                  </name>
                  <name>
                     <surname>Michel</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Morel</surname>
                     <given-names>J.-M.</given-names>
                  </name>
                  <name>
                     <surname>Petro</surname>
                     <given-names>A. B.</given-names>
                  </name>
                  <name>
                     <surname>Sbert</surname>
                     <given-names>C.</given-names>
                  </name>
               </person-group>
               <year>2016</year>
               <article-title>An inquiry on contrast enhancement methods for satellite images</article-title>
               <source>IEEE Trans. Geosci. Remote Sens.</source>
               <volume>54</volume>
               <fpage>7044</fpage>
               <lpage>7054</lpage>
               <page-range>7044&#x2013;54</page-range>
               <pub-id pub-id-type="doi">10.1109/TGRS.2016.2594339</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib22">
            <label>22</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Demirel</surname>
                     <given-names>H.</given-names>
                  </name>
                  <name>
                     <surname>Anbarjafari</surname>
                     <given-names>G.</given-names>
                  </name>
               </person-group>
               <year>2011</year>
               <article-title>Discrete wavelet transform-based satellite image resolution enhancement</article-title>
               <source>IEEE Trans. Geosci. Remote Sens.</source>
               <volume>49</volume>
               <fpage>1997</fpage>
               <lpage>2004</lpage>
               <page-range>1997&#x2013;2004</page-range>
               <pub-id pub-id-type="doi">10.1109/TGRS.2010.2100401</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib23">
            <label>23</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Lal</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Chandra</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Rahman</surname>
                     <given-names>Z.-ur</given-names>
                  </name>
                  <name>
                     <surname>Jobson</surname>
                     <given-names>D. J.</given-names>
                  </name>
                  <name>
                     <surname>Woodell</surname>
                     <given-names>G. A.</given-names>
                  </name>
               </person-group>
               <year>2014</year>
               <article-title>Efficient algorithm for contrast enhancement of natural images</article-title>
               <source>Int. Arab J. Inf. Technol.</source>
               <volume>11</volume>
               <fpage>95</fpage>
               <lpage>102</lpage>
               <page-range>95&#x2013;102</page-range>
            </element-citation>
         </ref>
         <ref id="jist2025007bib24">
            <label>24</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Rahman</surname>
                     <given-names>Z.-ur</given-names>
                  </name>
                  <name>
                     <surname>Jobson</surname>
                     <given-names>D. J.</given-names>
                  </name>
                  <name>
                     <surname>Woodell</surname>
                     <given-names>G. A.</given-names>
                  </name>
               </person-group>
               <year>2004</year>
               <article-title>Retinex processing for automatic image enhancement</article-title>
               <source>J. Electron. Imaging</source>
               <volume>13</volume>
               <fpage>100</fpage>
               <lpage>110</lpage>
               <page-range>100&#x2013;10</page-range>
               <pub-id pub-id-type="doi">10.1117/1.1636183</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib25">
            <label>25</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Azimian</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Torkamani-Azar</surname>
                     <given-names>F.</given-names>
                  </name>
                  <name>
                     <surname>Amirshahi</surname>
                     <given-names>S. A.</given-names>
                  </name>
               </person-group>
               <year>2021</year>
               <article-title>How good is too good? A subjective study on over enhancement of images</article-title>
               <source>29th Color and Imaging Conf.</source>
               <publisher-name>IS&#x0026;T</publisher-name>
               <publisher-loc>Springfield, VA</publisher-loc>
               <pub-id pub-id-type="doi">10.2352/issn.2169-2629.2021.29.83</pub-id>
             </element-citation>
          </ref>
         <ref id="jist2025007bib26">
            <label>26</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Chandler</surname>
                     <given-names>D. M.</given-names>
                  </name>
               </person-group>
               <year>2013</year>
               <article-title>Seven challenges in image quality assessment: past, present, and future research</article-title>
               <source>Int. Scholarly Res. Not.</source>
               <volume>2013</volume>
               <elocation-id content-type="artnum">905685</elocation-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib27">
            <label>27</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Cheng</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Pedersen</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Chen</surname>
                     <given-names>G.</given-names>
                  </name>
               </person-group>
               <year>2017</year>
               <article-title>Evaluation of image quality metrics for sharpness enhancement</article-title>
               <source>Proc. 10th Int&#x2019;l. Symp. on Image and Signal Processing and Analysis</source>
               <fpage>115</fpage>
               <lpage>120</lpage>
               <page-range>115&#x2013;20</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/ISPA.2017.8073580</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib28">
            <label>28</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Lin</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Dong</surname>
                     <given-names>L.</given-names>
                  </name>
                  <name>
                     <surname>Xue</surname>
                     <given-names>P.</given-names>
                  </name>
               </person-group>
               <year>2005</year>
               <article-title>Visual distortion gauge based on discrimination of noticeable contrast changes</article-title>
               <source>IEEE Trans. Circuits Syst. Video Technol.</source>
               <volume>15</volume>
               <fpage>900</fpage>
               <lpage>909</lpage>
               <page-range>900&#x2013;9</page-range>
               <pub-id pub-id-type="doi">10.1109/TCSVT.2005.848345</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib29">
            <label>29</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Amirshahi</surname>
                     <given-names>S. A.</given-names>
                  </name>
                  <name>
                     <surname>Kadyrova</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Pedersen</surname>
                     <given-names>M.</given-names>
                  </name>
               </person-group>
               <year>2019</year>
               <article-title>How do image quality metrics perform on contrast enhanced images?</article-title>
               <source>2019 8th European Workshop on Visual Information Processing (EUVIP)</source>
               <fpage>232</fpage>
               <lpage>237</lpage>
               <page-range>232&#x2013;7</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/EUVIP47703.2019.8946143</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib30">
            <label>30</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Gu</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Zhai</surname>
                     <given-names>G.</given-names>
                  </name>
                  <name>
                     <surname>Lin</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Liu</surname>
                     <given-names>M.</given-names>
                  </name>
               </person-group>
               <year>2015</year>
               <article-title>The analysis of image contrast: from quality assessment to automatic enhancement</article-title>
               <source>IEEE Trans. Cybern.</source>
               <volume>46</volume>
               <fpage>284</fpage>
               <lpage>297</lpage>
               <page-range>284&#x2013;97</page-range>
               <pub-id pub-id-type="doi">10.1109/TCYB.2015.2401732</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib31">
            <label>31</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Kadyrova</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Pedersen</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Ahmad</surname>
                     <given-names>B.</given-names>
                  </name>
                  <name>
                     <surname>Mandal</surname>
                     <given-names>D. J.</given-names>
                  </name>
                  <name>
                     <surname>Nguyen</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Zimmermann</surname>
                     <given-names>P.</given-names>
                  </name>
               </person-group>
                <year>2022</year>
                <article-title>Image enhancement dataset for evaluation of image quality metrics</article-title>
                <source>IS&#x0026;T Int&#x2019;l. Symp. on Electronic Imaging 2022, Image Quality and System Performance XIX</source>
               <publisher-name>IS&#x0026;T</publisher-name>
               <publisher-loc>Springfield, VA</publisher-loc>
               <pub-id pub-id-type="doi">10.2352/EI.2022.34.9.IQSP-317</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib32">
            <label>32</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Vu</surname>
                     <given-names>C. T.</given-names>
                  </name>
                  <name>
                     <surname>Phan</surname>
                     <given-names>T. D.</given-names>
                  </name>
                  <name>
                     <surname>Banga</surname>
                     <given-names>P. S.</given-names>
                  </name>
                  <name>
                     <surname>Chandler</surname>
                     <given-names>D. M.</given-names>
                  </name>
               </person-group>
               <year>2012</year>
               <article-title>On the quality assessment of enhanced images: a database, analysis, and strategies for augmenting existing methods</article-title>
               <source>2012 IEEE Southwest Symp. on Image Analysis and Interpretation</source>
               <fpage>181</fpage>
               <lpage>184</lpage>
               <page-range>181&#x2013;4</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/SSIAI.2012.6202483</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib33">
            <label>33</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Qureshi</surname>
                     <given-names>M. A.</given-names>
                  </name>
                  <name>
                     <surname>Beghdadi</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Sdiri</surname>
                     <given-names>B.</given-names>
                  </name>
                  <name>
                     <surname>Deriche</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Alaya-Cheikh</surname>
                     <given-names>F.</given-names>
                  </name>
               </person-group>
               <year>2016</year>
               <article-title>A comprehensive performance evaluation of objective quality metrics for contrast enhancement techniques</article-title>
               <source>2016 6th European Workshop on Visual Information Processing (EUVIP)</source>
               <fpage>1</fpage>
               <lpage>5</lpage>
               <page-range>1&#x2013;5</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/EUVIP.2016.7764589</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib34">
            <label>34</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Li</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Guo</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Ren</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Cong</surname>
                     <given-names>R.</given-names>
                  </name>
                  <name>
                     <surname>Hou</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Kwong</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Tao</surname>
                     <given-names>D.</given-names>
                  </name>
               </person-group>
               <year>2019</year>
               <article-title>An underwater image enhancement benchmark dataset and beyond</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>29</volume>
               <fpage>4376</fpage>
               <lpage>4389</lpage>
               <page-range>4376&#x2013;89</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2019.2955241</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib35">
            <label>35</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Cherepkova</surname>
                     <given-names>O.</given-names>
                  </name>
                  <name>
                     <surname>Amirshahi</surname>
                     <given-names>S. A.</given-names>
                  </name>
                  <name>
                     <surname>Pedersen</surname>
                     <given-names>M.</given-names>
                  </name>
               </person-group>
               <year>2024</year>
               <article-title>Individual contrast preferences in natural images</article-title>
               <source>J. Imaging</source>
               <volume>10</volume>
               <fpage>25</fpage>
               <pub-id pub-id-type="doi">10.3390/jimaging10010025</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib36">
            <label>36</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Senthilkumar</surname>
                     <given-names>N. K.</given-names>
                  </name>
                  <name>
                     <surname>Ahmad</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Andreetto</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Prabhakaran</surname>
                     <given-names>V.</given-names>
                  </name>
                  <name>
                     <surname>Prabhu</surname>
                     <given-names>U.</given-names>
                  </name>
                  <name>
                     <surname>Dieng</surname>
                     <given-names>A. B.</given-names>
                  </name>
                  <name>
                     <surname>Bhattacharyya</surname>
                     <given-names>P.</given-names>
                  </name>
                  <name>
                     <surname>Dave</surname>
                     <given-names>S.</given-names>
                  </name>
               </person-group>
               <year>2024</year>
               <article-title>Beyond aesthetics: cultural competence in text-to-image models</article-title>
               <source>Adv. Neural Inf. Process. Syst.</source>
               <volume>37</volume>
               <fpage>13716</fpage>
               <lpage>13747</lpage>
               <page-range>13716&#x2013;47</page-range>
            </element-citation>
         </ref>
         <ref id="jist2025007bib37">
            <label>37</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Aslam</surname>
                     <given-names>M. M.</given-names>
                  </name>
               </person-group>
               <year>2006</year>
               <article-title>Are you selling the right colour? A cross-cultural review of colour as a marketing cue</article-title>
               <source>J. Mark. Commun.</source>
               <volume>12</volume>
               <fpage>15</fpage>
               <lpage>30</lpage>
               <page-range>15&#x2013;30</page-range>
               <pub-id pub-id-type="doi">10.1080/13527260500247827</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib38">
            <label>38</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Garth</surname>
                     <given-names>T. R.</given-names>
                  </name>
               </person-group>
               <year>1922</year>
               <article-title>The color preferences of five hundred and fifty-nine full-blood Indians</article-title>
               <source>J. Exp. Psychol.</source>
               <volume>5</volume>
               <fpage>392</fpage>
               <pub-id pub-id-type="doi">10.1037/h0072088</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib39">
            <label>39</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Choungourian</surname>
                     <given-names>A.</given-names>
                  </name>
               </person-group>
               <year>1968</year>
               <article-title>Color preferences and cultural variation</article-title>
               <source>Perceptual Motor Skills</source>
               <volume>26</volume>
               <fpage>1203</fpage>
               <lpage>1206</lpage>
               <page-range>1203&#x2013;6</page-range>
               <pub-id pub-id-type="doi">10.2466/pms.1968.26.3c.1203</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib40">
            <label>40</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Shoyama</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Tochihara</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Kim</surname>
                     <given-names>J.</given-names>
                  </name>
               </person-group>
               <year>2003</year>
               <article-title>Japanese and Korean ideas about clothing colors for elderly people: intercountry and intergenerational differences</article-title>
               <source>Color Res. Appl.</source>
               <volume>28</volume>
               <fpage>139</fpage>
               <lpage>150</lpage>
               <page-range>139&#x2013;50</page-range>
               <pub-id pub-id-type="doi">10.1002/col.10132</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib41">
            <label>41</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Oyama</surname>
                     <given-names>T.</given-names>
                  </name>
                  <name>
                     <surname>Tanaka</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Chiba</surname>
                     <given-names>Y.</given-names>
                  </name>
               </person-group>
               <year>1962</year>
                <article-title>Affective dimensions of colors: a cross-cultural study</article-title>
               <source>Japan. Psychological Res.</source>
               <volume>4</volume>
               <fpage>78</fpage>
               <lpage>91</lpage>
               <page-range>78&#x2013;91</page-range>
               <pub-id pub-id-type="doi">10.4992/psycholres1954.4.78</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib42">
            <label>42</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Madden</surname>
                     <given-names>T. J.</given-names>
                  </name>
                  <name>
                     <surname>Hewett</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Roth</surname>
                     <given-names>M. S.</given-names>
                  </name>
               </person-group>
               <year>2000</year>
               <article-title>Managing images in different cultures: a cross-national study of color meanings and preferences</article-title>
               <source>J. Int. Mark.</source>
               <volume>8</volume>
               <fpage>90</fpage>
               <lpage>107</lpage>
               <page-range>90&#x2013;107</page-range>
               <pub-id pub-id-type="doi">10.1509/jimk.8.4.90.19795</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib43">
            <label>43</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Ou</surname>
                     <given-names>L. C.</given-names>
                  </name>
                  <name>
                     <surname>Ronnier Luo</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Sun</surname>
                     <given-names>P. L.</given-names>
                  </name>
                  <name>
                     <surname>Hu</surname>
                     <given-names>N. C.</given-names>
                  </name>
                  <name>
                     <surname>Chen</surname>
                     <given-names>H. S.</given-names>
                  </name>
                  <name>
                     <surname>Guan</surname>
                      <given-names>S. S.</given-names>
                  </name>
                  <name>
                     <surname>Woodcock</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Caivano</surname>
                     <given-names>J. L.</given-names>
                  </name>
                  <name>
                     <surname>Huertas</surname>
                     <given-names>R.</given-names>
                  </name>
                  <name>
                     <surname>Trem&#x00E9;au</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Billger</surname>
                     <given-names>M.</given-names>
                  </name>
               </person-group>
               <year>2012</year>
               <article-title>A cross-cultural comparison of colour emotion for two-colour combinations</article-title>
               <source>Color Res. Appl.</source>
               <volume>37</volume>
               <fpage>23</fpage>
               <lpage>43</lpage>
               <page-range>23&#x2013;43</page-range>
               <pub-id pub-id-type="doi">10.1002/col.20648</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib44">
            <label>44</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Choi</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Suk</surname>
                     <given-names>H.-J.</given-names>
                  </name>
               </person-group>
               <year>2015</year>
               <article-title>A comparative study of psychophysical judgment of color reproductions on mobile displays between Europeans and Asians</article-title>
               <source>Proc. SPIE</source>
               <volume>9395</volume>
               <fpage>212</fpage>
               <lpage>220</lpage>
               <page-range>212&#x2013;20</page-range>
             </element-citation>
          </ref>
         <ref id="jist2025007bib45">
            <label>45</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Fernandez</surname>
                     <given-names>S. R.</given-names>
                  </name>
                  <name>
                     <surname>Fairchild</surname>
                     <given-names>M. D.</given-names>
                  </name>
                  <name>
                     <surname>Braun</surname>
                     <given-names>K.</given-names>
                  </name>
               </person-group>
               <year>2005</year>
               <article-title>Analysis of observer and cultural variability while generating &#x201C;preferred&#x201D; color reproductions of pictorial images</article-title>
               <source>J. Imaging Sci. Technol.</source>
               <volume>49</volume>
               <fpage>96</fpage>
               <pub-id pub-id-type="doi">10.2352/J.ImagingSci.Technol.2005.49.1.art00012</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib46">
            <label>46</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Saupe</surname>
                     <given-names>D.</given-names>
                  </name>
                  <name>
                     <surname>Del Pin</surname>
                     <given-names>S. H.</given-names>
                  </name>
               </person-group>
               <year>2025</year>
               <article-title>Uncovering cultural influences on perceptual image and video quality assessment through adaptive quantized metric models</article-title>
               <source>J. Perceptual Imaging</source>
               <volume>7</volume>
             </element-citation>
          </ref>
         <ref id="jist2025007bib47">
            <label>47</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Chen</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Lee</surname>
                     <given-names>S.-ying</given-names>
                  </name>
                  <name>
                     <surname>Stevenson</surname>
                     <given-names>H. W.</given-names>
                  </name>
               </person-group>
               <year>1995</year>
               <article-title>Response style and cross-cultural comparisons of rating scales among East Asian and North American students</article-title>
               <source>Psychological Sci.</source>
               <volume>6</volume>
               <fpage>170</fpage>
               <lpage>175</lpage>
               <page-range>170&#x2013;5</page-range>
               <pub-id pub-id-type="doi">10.1111/j.1467-9280.1995.tb00327.x</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib48">
            <label>48</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Beghdadi</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Qureshi</surname>
                     <given-names>M. A.</given-names>
                  </name>
                  <name>
                     <surname>Sdiri</surname>
                     <given-names>B.</given-names>
                  </name>
                  <name>
                     <surname>Deriche</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Alaya-Cheikh</surname>
                     <given-names>F.</given-names>
                  </name>
               </person-group>
               <year>2018</year>
               <article-title>CEED - a database for image contrast enhancement evaluation</article-title>
               <source>2018 Colour and Visual Computing Symposium (CVCS)</source>
               <fpage>1</fpage>
               <lpage>6</lpage>
               <page-range>1&#x2013;6</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/CVCS.2018.8496603</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib49">
            <label>49</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Huang</surname>
                     <given-names>S.-C.</given-names>
                  </name>
                  <name>
                     <surname>Cheng</surname>
                     <given-names>F.-C.</given-names>
                  </name>
                  <name>
                     <surname>Chiu</surname>
                     <given-names>Y.-S.</given-names>
                  </name>
               </person-group>
                <year>2013</year>
               <article-title>Efficient contrast enhancement using adaptive gamma correction with weighting distribution</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>22</volume>
               <fpage>1032</fpage>
               <lpage>1041</lpage>
               <page-range>1032&#x2013;41</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2012.2226047</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib50">
            <label>50</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Zuiderveld</surname>
                     <given-names>K.</given-names>
                  </name>
               </person-group>
               <year>1994</year>
               <article-title>Contrast limited adaptive histogram equalization</article-title>
                <source>Graphics Gems IV</source>
                <fpage>474</fpage>
                <lpage>485</lpage>
                <page-range>474&#x2013;85</page-range>
                <publisher-name>Academic Press</publisher-name>
            </element-citation>
         </ref>
         <ref id="jist2025007bib51">
            <label>51</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Qureshi</surname>
                     <given-names>M. A.</given-names>
                  </name>
                  <name>
                     <surname>Beghdadi</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Deriche</surname>
                     <given-names>M.</given-names>
                  </name>
               </person-group>
               <year>2017</year>
               <article-title>Towards the design of a consistent image contrast enhancement evaluation measure</article-title>
               <source>Signal Process. Image Commun.</source>
               <volume>58</volume>
               <fpage>212</fpage>
               <lpage>227</lpage>
               <page-range>212&#x2013;27</page-range>
               <pub-id pub-id-type="doi">10.1016/j.image.2017.08.004</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib52">
            <label>52</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Parihar</surname>
                     <given-names>A. S.</given-names>
                  </name>
                  <name>
                     <surname>Verma</surname>
                     <given-names>O. P.</given-names>
                  </name>
                  <name>
                     <surname>Khanna</surname>
                     <given-names>C.</given-names>
                  </name>
               </person-group>
               <year>2017</year>
               <article-title>Fuzzy-contextual contrast enhancement</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>26</volume>
               <fpage>1810</fpage>
               <lpage>1819</lpage>
               <page-range>1810&#x2013;9</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2017.2665975</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib53">
            <label>53</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Rizzi</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Algeri</surname>
                     <given-names>T.</given-names>
                  </name>
                  <name>
                     <surname>Medeghini</surname>
                     <given-names>G.</given-names>
                  </name>
                  <name>
                     <surname>Marini</surname>
                     <given-names>D.</given-names>
                  </name>
               </person-group>
               <year>2004</year>
               <article-title>A proposal for contrast measure in digital images</article-title>
               <source>Conf. on Colour in Graphics, Imaging, and Vision</source>
               <fpage>187</fpage>
               <lpage>192</lpage>
               <page-range>187&#x2013;92</page-range>
               <publisher-name>IS&#x0026;T</publisher-name>
               <publisher-loc>Springfield, VA</publisher-loc>
            </element-citation>
         </ref>
         <ref id="jist2025007bib54">
            <label>54</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Van Ngo</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Storvik</surname>
                     <given-names>J.</given-names>
                     <suffix>Jr.</suffix>
                  </name>
                  <name>
                     <surname>Dokkeberg</surname>
                     <given-names>C. A.</given-names>
                  </name>
                  <name>
                     <surname>Farup</surname>
                     <given-names>I.</given-names>
                  </name>
                  <name>
                     <surname>Pedersen</surname>
                     <given-names>M.</given-names>
                  </name>
               </person-group>
               <year>2015</year>
               <article-title>QuickEval: a web application for psychometric scaling experiments</article-title>
               <source>Proc. SPIE</source>
               <volume>9396</volume>
               <fpage>212</fpage>
               <lpage>224</lpage>
               <page-range>212&#x2013;24</page-range>
            </element-citation>
         </ref>
         <ref id="jist2025007bib55">
            <label>55</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Engeldrum</surname>
                     <given-names>P. G.</given-names>
                  </name>
               </person-group>
                <source>Psychometric Scaling: A Toolkit for Imaging Systems Development</source>
               <year>2000</year>
               <publisher-name>Imcotek Press</publisher-name>
               <publisher-loc>Winchester, MA</publisher-loc>
            </element-citation>
         </ref>
         <ref id="jist2025007bib56">
            <label>56</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Montag</surname>
                     <given-names>E. D.</given-names>
                  </name>
               </person-group>
               <year>2006</year>
               <article-title>Empirical formula for creating error bars for the method of paired comparison</article-title>
               <source>J. Electron. Imaging</source>
               <volume>15</volume>
                <fpage>010502</fpage>
               <pub-id pub-id-type="doi">10.1117/1.2181547</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib57">
            <label>57</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Mittal</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Moorthy</surname>
                     <given-names>A. K.</given-names>
                  </name>
                  <name>
                     <surname>Bovik</surname>
                     <given-names>A. C.</given-names>
                  </name>
               </person-group>
               <year>2012</year>
               <article-title>No-reference image quality assessment in the spatial domain</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>21</volume>
               <fpage>4695</fpage>
               <lpage>4708</lpage>
               <page-range>4695&#x2013;708</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2012.2214050</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib58">
            <label>58</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Zhang</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Zhai</surname>
                     <given-names>G.</given-names>
                  </name>
                  <name>
                     <surname>Wei</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Yang</surname>
                     <given-names>X.</given-names>
                  </name>
                  <name>
                     <surname>Ma</surname>
                     <given-names>K.</given-names>
                  </name>
               </person-group>
               <year>2023</year>
               <article-title>Blind image quality assessment via vision-language correspondence: a multitask learning perspective</article-title>
               <source>Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition</source>
               <fpage>14071</fpage>
               <lpage>14081</lpage>
               <page-range>14071&#x2013;81</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/CVPR52729.2023.01352</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib59">
            <label>59</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Wang</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Chan</surname>
                     <given-names>K. C.</given-names>
                  </name>
                  <name>
                     <surname>Loy</surname>
                     <given-names>C. C.</given-names>
                  </name>
               </person-group>
               <year>2023</year>
                <article-title>Exploring CLIP for assessing the look and feel of images</article-title>
               <source>Proc. of the AAAI Conf. on Artificial Intelligence</source>
               <volume>37</volume>
               <fpage>2555</fpage>
               <lpage>2563</lpage>
               <page-range>2555&#x2013;63</page-range>
               <publisher-name>AAAI Press</publisher-name>
                <publisher-loc>Washington, DC</publisher-loc>
               <pub-id pub-id-type="doi">10.1609/aaai.v37i2.25353</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib60">
            <label>60</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Kang</surname>
                     <given-names>L.</given-names>
                  </name>
                  <name>
                     <surname>Ye</surname>
                     <given-names>P.</given-names>
                  </name>
                  <name>
                     <surname>Li</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Doermann</surname>
                     <given-names>D.</given-names>
                  </name>
               </person-group>
               <year>2014</year>
               <article-title>Convolutional neural networks for no-reference image quality assessment</article-title>
               <source>Proc. IEEE Conf. on Computer Vision and Pattern Recognition</source>
               <fpage>1733</fpage>
               <lpage>1740</lpage>
               <page-range>1733&#x2013;40</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/CVPR.2014.224</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib61">
            <label>61</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Talebi</surname>
                     <given-names>H.</given-names>
                  </name>
                  <name>
                     <surname>Milanfar</surname>
                     <given-names>P.</given-names>
                  </name>
               </person-group>
               <year>2018</year>
               <article-title>NIMA: neural image assessment</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>27</volume>
               <fpage>3998</fpage>
               <lpage>4011</lpage>
               <page-range>3998&#x2013;4011</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2018.2831899</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib62">
            <label>62</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Ma</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Yang</surname>
                     <given-names>C.-Y.</given-names>
                  </name>
                  <name>
                     <surname>Yang</surname>
                     <given-names>X.</given-names>
                  </name>
                  <name>
                     <surname>Yang</surname>
                     <given-names>M.-H.</given-names>
                  </name>
               </person-group>
               <year>2017</year>
               <article-title>Learning a no-reference quality metric for single-image super-resolution</article-title>
               <source>Comput. Vis. Image Underst.</source>
               <volume>158</volume>
               <fpage>1</fpage>
               <lpage>16</lpage>
               <page-range>1&#x2013;16</page-range>
               <pub-id pub-id-type="doi">10.1016/j.cviu.2016.12.009</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib63">
            <label>63</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Venkatanath</surname>
                     <given-names>N.</given-names>
                  </name>
                  <name>
                     <surname>Praneeth</surname>
                     <given-names>D.</given-names>
                  </name>
                   <name>
                      <surname>Channappayya</surname>
                      <given-names>S. S.</given-names>
                   </name>
                   <name>
                      <surname>Medasani</surname>
                      <given-names>S. S.</given-names>
                   </name>
               </person-group>
               <year>2015</year>
               <article-title>Blind image quality evaluation using perception based features</article-title>
               <source>2015 21st National Conf. on Communications (NCC)</source>
               <fpage>1</fpage>
               <lpage>6</lpage>
               <page-range>1&#x2013;6</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/NCC.2015.7084843</pub-id>
            </element-citation></ref>
         <ref id="jist2025007bib64">
            <label>64</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Ying</surname>
                     <given-names>Z.</given-names>
                  </name>
                  <name>
                     <surname>Niu</surname>
                     <given-names>H.</given-names>
                  </name>
                  <name>
                     <surname>Gupta</surname>
                     <given-names>P.</given-names>
                  </name>
                  <name>
                     <surname>Mahajan</surname>
                     <given-names>D.</given-names>
                  </name>
                  <name>
                     <surname>Ghadiyaram</surname>
                     <given-names>D.</given-names>
                  </name>
                  <name>
                     <surname>Bovik</surname>
                     <given-names>A.</given-names>
                  </name>
               </person-group>
               <year>2020</year>
                <article-title>From patches to pictures (PaQ-2-PiQ): mapping the perceptual space of picture quality</article-title>
               <source>Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition</source>
               <fpage>3575</fpage>
               <lpage>3585</lpage>
               <page-range>3575&#x2013;85</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/CVPR42600.2020.00363</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib65">
            <label>65</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Agnolucci</surname>
                     <given-names>L.</given-names>
                  </name>
                  <name>
                     <surname>Galteri</surname>
                     <given-names>L.</given-names>
                  </name>
                  <name>
                     <surname>Bertini</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Del Bimbo</surname>
                     <given-names>A.</given-names>
                  </name>
               </person-group>
               <year>2024</year>
                <article-title>ARNIQA: learning distortion manifold for image quality assessment</article-title>
               <source>Proc. of the IEEE/CVF Winter Conf. on Applications of Computer Vision</source>
               <fpage>189</fpage>
               <lpage>198</lpage>
               <page-range>189&#x2013;98</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
               <pub-id pub-id-type="doi">10.1109/WACV57701.2024.00026</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib66">
            <label>66</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Gonzalez</surname>
                     <given-names>R. C.</given-names>
                  </name>
                  <name>
                     <surname>Woods</surname>
                     <given-names>R. E.</given-names>
                  </name>
                  <name>
                     <surname>Eddins</surname>
                     <given-names>S. L.</given-names>
                  </name>
               </person-group>
               <year>2003</year>
                <source>Digital Image Processing Using MATLAB, Chapter 11</source>
                <publisher-name>Prentice Hall</publisher-name>
                <publisher-loc>Upper Saddle River, NJ</publisher-loc>
            </element-citation>
         </ref>
         <ref id="jist2025007bib67">
            <label>67</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Yang</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Wu</surname>
                     <given-names>T.</given-names>
                  </name>
                  <name>
                     <surname>Shi</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Lao</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Gong</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Cao</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Wang</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Yang</surname>
                     <given-names>Y.</given-names>
                  </name>
               </person-group>
               <year>2022</year>
                <article-title>MANIQA: multi-dimension attention network for no-reference image quality assessment</article-title>
                <source>Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops</source>
               <fpage>1191</fpage>
               <lpage>1200</lpage>
               <page-range>1191&#x2013;200</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
            </element-citation>
         </ref>
         <ref id="jist2025007bib68">
            <label>68</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Chen</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Mo</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Hou</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Wu</surname>
                     <given-names>H.</given-names>
                  </name>
                  <name>
                     <surname>Liao</surname>
                     <given-names>L.</given-names>
                  </name>
                  <name>
                     <surname>Sun</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Yan</surname>
                     <given-names>Q.</given-names>
                  </name>
                  <name>
                     <surname>Lin</surname>
                     <given-names>W.</given-names>
                  </name>
               </person-group>
               <year>2024</year>
                <article-title>TOPIQ: a top-down approach from semantics to distortions for image quality assessment</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>33</volume>
               <fpage>2404</fpage>
               <lpage>2418</lpage>
               <page-range>2404&#x2013;18</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2024.3378466</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib69">
            <label>69</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Zhang</surname>
                     <given-names>W.</given-names>
                  </name>
                  <name>
                     <surname>Ma</surname>
                     <given-names>K.</given-names>
                  </name>
                  <name>
                     <surname>Zhai</surname>
                     <given-names>G.</given-names>
                  </name>
                  <name>
                     <surname>Yang</surname>
                     <given-names>X.</given-names>
                  </name>
               </person-group>
               <year>2021</year>
               <article-title>Uncertainty-aware blind image quality assessment in the laboratory and wild</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>30</volume>
               <fpage>3474</fpage>
               <lpage>3486</lpage>
               <page-range>3474&#x2013;86</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2021.3061932</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib70">
            <label>70</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Bosse</surname>
                     <given-names>S.</given-names>
                  </name>
                  <name>
                     <surname>Maniry</surname>
                     <given-names>D.</given-names>
                  </name>
                  <name>
                     <surname>M&#x00FC;ller</surname>
                     <given-names>K. R.</given-names>
                  </name>
                  <name>
                     <surname>Wiegand</surname>
                     <given-names>T.</given-names>
                  </name>
                  <name>
                     <surname>Samek</surname>
                     <given-names>W.</given-names>
                  </name>
               </person-group>
                <year>2018</year>
               <article-title>Deep neural networks for no-reference and full-reference image quality assessment</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>27</volume>
               <fpage>206</fpage>
               <lpage>219</lpage>
               <page-range>206&#x2013;19</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2017.2760518</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib71">
            <label>71</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Schuhmann</surname>
                     <given-names>C.</given-names>
                  </name>
               </person-group>
               <source>LAION Aesthetics Predictor</source>
               <year>2022</year>
               <comment>online <uri xlink:href="https://laion.ai/blog/laion-aesthetics/">https://laion.ai/blog/laion-aesthetics/</uri></comment>
            </element-citation>
         </ref>
         <ref id="jist2025007bib72">
            <label>72</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Blau</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Mechrez</surname>
                     <given-names>R.</given-names>
                  </name>
                  <name>
                     <surname>Timofte</surname>
                     <given-names>R.</given-names>
                  </name>
                  <name>
                     <surname>Michaeli</surname>
                     <given-names>T.</given-names>
                  </name>
                  <name>
                     <surname>Zelnik-Manor</surname>
                     <given-names>L.</given-names>
                  </name>
               </person-group>
               <year>2018</year>
               <article-title>The 2018 PIRM challenge on perceptual image super-resolution</article-title>
                <source>European Conf. on Computer Vision (ECCV) Workshops</source>
               <fpage>334</fpage>
               <lpage>355</lpage>
               <page-range>334&#x2013;55</page-range>
               <publisher-name>Springer International Publishing</publisher-name>
               <publisher-loc>Cham</publisher-loc>
            </element-citation>
         </ref>
         <ref id="jist2025007bib73">
            <label>73</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Choi</surname>
                     <given-names>L. K.</given-names>
                  </name>
                  <name>
                     <surname>You</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Bovik</surname>
                     <given-names>A. C.</given-names>
                  </name>
               </person-group>
               <year>2015</year>
               <article-title>Referenceless prediction of perceptual fog density and perceptual image defogging</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>24</volume>
               <fpage>3888</fpage>
               <lpage>3901</lpage>
               <page-range>3888&#x2013;901</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2015.2456502</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib74">
            <label>74</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Xu</surname>
                     <given-names>J.</given-names>
                  </name>
                  <name>
                     <surname>Ye</surname>
                     <given-names>P.</given-names>
                  </name>
                  <name>
                     <surname>Li</surname>
                     <given-names>Q.</given-names>
                  </name>
                  <name>
                     <surname>Du</surname>
                     <given-names>H.</given-names>
                  </name>
                  <name>
                     <surname>Liu</surname>
                     <given-names>Y.</given-names>
                  </name>
                  <name>
                     <surname>Doermann</surname>
                     <given-names>D.</given-names>
                  </name>
               </person-group>
               <year>2016</year>
               <article-title>Blind image quality assessment based on high order statistics aggregation</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>25</volume>
               <fpage>4444</fpage>
               <lpage>4457</lpage>
               <page-range>4444&#x2013;57</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2016.2585880</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib75">
            <label>75</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Mittal</surname>
                     <given-names>A.</given-names>
                  </name>
                  <name>
                     <surname>Soundararajan</surname>
                     <given-names>R.</given-names>
                  </name>
                  <name>
                     <surname>Bovik</surname>
                     <given-names>A. C.</given-names>
                  </name>
               </person-group>
                <year>2013</year>
               <article-title>Making a &#x201C;completely blind&#x201D; image quality analyzer</article-title>
               <source>IEEE Signal Process. Lett.</source>
               <volume>20</volume>
               <fpage>209</fpage>
               <lpage>212</lpage>
               <page-range>209&#x2013;12</page-range>
               <pub-id pub-id-type="doi">10.1109/LSP.2012.2227726</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib76">
            <label>76</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Feichtenhofer</surname>
                     <given-names>C.</given-names>
                  </name>
                  <name>
                     <surname>Fassold</surname>
                     <given-names>H.</given-names>
                  </name>
                  <name>
                     <surname>Schallauer</surname>
                     <given-names>P.</given-names>
                  </name>
               </person-group>
               <year>2013</year>
               <article-title>A perceptual image sharpness metric based on local edge gradient analysis</article-title>
               <source>IEEE Signal Process. Lett.</source>
               <volume>20</volume>
               <fpage>379</fpage>
               <lpage>382</lpage>
               <page-range>379&#x2013;82</page-range>
               <pub-id pub-id-type="doi">10.1109/LSP.2013.2248711</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib77">
            <label>77</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Ferzli</surname>
                     <given-names>R.</given-names>
                  </name>
                  <name>
                     <surname>Karam</surname>
                     <given-names>L. J.</given-names>
                  </name>
               </person-group>
               <year>2009</year>
               <article-title>A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB)</article-title>
               <source>IEEE Trans. Image Process.</source>
               <volume>18</volume>
               <fpage>717</fpage>
               <lpage>728</lpage>
               <page-range>717&#x2013;28</page-range>
               <pub-id pub-id-type="doi">10.1109/TIP.2008.2011760</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib78">
            <label>78</label>
            <element-citation publication-type="book"><person-group person-group-type="author">
                  <name>
                     <surname>Pedersen</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Farup</surname>
                     <given-names>I.</given-names>
                  </name>
               </person-group>
               <year>2016</year>
               <article-title>Improving the robustness to image scale of the total variation of difference metric</article-title>
               <source>2016 3rd Int&#x2019;l. Conf. on Signal Processing and Integrated Networks (SPIN)</source>
               <fpage>116</fpage>
               <lpage>121</lpage>
               <page-range>116&#x2013;21</page-range>
               <publisher-name>IEEE</publisher-name>
               <publisher-loc>Piscataway, NJ</publisher-loc>
            </element-citation>
         </ref>
         <ref id="jist2025007bib79">
            <label>79</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Hlayhel</surname>
                     <given-names>R.</given-names>
                  </name>
                  <name>
                     <surname>Mobini</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Agossou</surname>
                     <given-names>B. E.</given-names>
                  </name>
                  <name>
                     <surname>Pedersen</surname>
                     <given-names>M.</given-names>
                  </name>
                  <name>
                     <surname>Amirshahi</surname>
                     <given-names>S. A.</given-names>
                  </name>
               </person-group>
               <year>2024</year>
               <article-title>Colourlab image database: optical aberrations</article-title>
               <source>London Imaging Meeting</source>
               <volume>5</volume>
               <fpage>22</fpage>
               <pub-id pub-id-type="doi">10.2352/lim.2024.5.1.5</pub-id>
            </element-citation>
         </ref>
         <ref id="jist2025007bib80">
            <label>80</label>
            <element-citation publication-type="journal"><person-group person-group-type="author">
                  <name>
                     <surname>Ahmed</surname>
                     <given-names>T. U.</given-names>
                  </name>
                  <name>
                     <surname>Amirshahi</surname>
                     <given-names>S. A.</given-names>
                  </name>
                  <name>
                     <surname>Pedersen</surname>
                     <given-names>M.</given-names>
                  </name>
               </person-group>
               <year>2023</year>
               <article-title>Image demosaicing: subjective analysis and evaluation of image quality metrics</article-title>
               <source>Electron. Imaging</source>
               <volume>35</volume>
               <fpage>1</fpage>
               <lpage>6</lpage>
               <page-range>1&#x2013;6</page-range>
               <pub-id pub-id-type="doi">10.2352/EI.2023.35.8.IQSP-301</pub-id>
            </element-citation>
         </ref>
      </ref-list>
   </back>
</article>