<!DOCTYPE article PUBLIC '-//NLM//DTD Journal Publishing DTD v2.1 20050630//EN' 'http://uploads.ingentaconnect.com/docs/dtd/ingenta-journalpublishing.dtd'>
<article article-type="research-article">
  <front>
    <journal-meta>
      <journal-id journal-id-type="aggregator">72010604</journal-id>
      <journal-title>Electronic Imaging</journal-title>
      <issn pub-type="ppub">2470-1173</issn>
      <publisher>
        <publisher-name>Society for Imaging Science and Technology</publisher-name>
        <publisher-loc>7003 Kilworth Lane, Springfield, VA 22151 USA</publisher-loc>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.2352/ISSN.2470-1173.2021.11.HVEI-110</article-id>
      <article-id pub-id-type="sici">2470-1173(20210118)2021:11L.1101;1-</article-id>
      <article-id pub-id-type="publisher-id">ei_24701173_v2021n11_input/s2.xml</article-id>
      <article-id pub-id-type="other">/ist/ei/2021/00002021/00000011/art00002</article-id>
      <article-categories>
        <subj-group>
          <subject>Articles</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>Deep quality evaluator guided by 3D saliency for stereoscopic images</article-title>
      </title-group>
      <contrib-group>
        <contrib>
          <name>
            <surname>Messai</surname>
            <given-names>Oussama</given-names>
          </name>
        </contrib>
        <contrib>
          <name>
            <surname>Chetouani</surname>
            <given-names>Aladine</given-names>
          </name>
        </contrib>
        <contrib>
          <name>
            <surname>Hachouf</surname>
            <given-names>Fella</given-names>
          </name>
        </contrib>
        <contrib>
          <name>
            <surname>Seghir</surname>
            <given-names>Zianou Ahmed</given-names>
          </name>
        </contrib>
      </contrib-group>
      <pub-date>
        <day>18</day>
        <month>01</month>
        <year>2021</year>
      </pub-date>
      <volume>2021</volume>
      <issue>11</issue>
      <fpage>110-1</fpage>
      <lpage>110-7</lpage>
      <permissions>
        <copyright-year>2021</copyright-year>
      </permissions>
      <abstract>
        <p>
          <italic>Due to the use of 3D content in various applications, Stereo Image Quality Assessment (SIQA) has attracted increasing attention to ensure a good viewing experience for users. Several methods have thus been proposed in the literature, with deep learning-based methods showing clear improvement. This paper introduces a new deep learning-based no-reference SIQA method that uses the cyclopean view hypothesis and human visual attention. First, the cyclopean image is built accounting for binocular rivalry, which covers the asymmetric distortion case. Second, the saliency map is computed taking depth information into account; it is used to extract patches from the most perceptually relevant regions. Finally, a modified version of the pre-trained VGG-19 is fine-tuned and used to predict the quality score from the selected patches. The performance of the proposed metric has been evaluated on the LIVE 3D Phase I and Phase II databases. Compared with state-of-the-art metrics, our method gives better outcomes.</italic>
        </p>
      </abstract>
      <kwd-group>
        <kwd>Stereoscopic image quality assessment</kwd>
        <kwd>Convolutional Neural Network</kwd>
        <kwd>3D Saliency</kwd>
        <kwd>Human Visual System</kwd>
      </kwd-group>
    </article-meta>
  </front>
</article>
