Pages 243-1 - 243-5, © 2025 Society for Imaging Science and Technology
Volume 37
Issue 9
Abstract

GPT-4, a multimodal large language model, was released on March 14, 2023. It is built on the Transformer architecture for natural language processing, in which a large neural network is first trained through unsupervised learning and then refined with reinforcement learning from human feedback (RLHF). Although GPT-4 is a research achievement in the field of natural language processing (NLP), the underlying technology can be applied not only to natural language generation but also to image generation. However, the specifications of GPT-4 have not been made public, which makes it difficult to use for research purposes. In this study, we first generated an image database by adjusting the parameters of Stable Diffusion, a deep learning model that generates images from text input and from images. We then carried out experiments to evaluate the quality of 3D CG images from the generated database and discussed the quality assessment of the image generation model.
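
As a minimal sketch of how such a parameter-swept image database can be built, the following uses Stable Diffusion through the Hugging Face diffusers library; the checkpoint, prompt, and swept parameter values are illustrative assumptions rather than details taken from the paper.

    # Minimal sketch: building an image database by sweeping Stable Diffusion
    # parameters. The checkpoint, prompt, and swept values are illustrative
    # assumptions; the paper does not specify them.
    import itertools
    from pathlib import Path

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    prompt = "a 3D CG render of a teapot on a wooden table"  # hypothetical prompt
    Path("db").mkdir(exist_ok=True)

    # Sweep two parameters that strongly affect output quality.
    for steps, cfg in itertools.product([20, 30, 50], [5.0, 7.5, 10.0]):
        image = pipe(
            prompt,
            num_inference_steps=steps,  # number of denoising steps
            guidance_scale=cfg,         # classifier-free guidance strength
            generator=torch.Generator("cuda").manual_seed(0),  # fixed seed
        ).images[0]
        image.save(f"db/steps{steps}_cfg{cfg}.png")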

Digital Library: EI
Published Online: February 2025
Pages 246-1 - 246-8, © 2025 Society for Imaging Science and Technology
Volume 37
Issue 9
Abstract

This research explores the effect of eyewear lenses designed with varied transmittance properties on human visual perception. The lenses are designed to enhance contrast for spatial-chromatic patterns such as cyan-red (CR) and magenta-green (MG) relative to lenses with more uniform transmittance. The study evaluates participants’ accuracy and response times in identifying contrast patterns, aiming to understand how different eyewear configurations affect these visual metrics. Two experiments were conducted: the first adjusted spatial frequencies to determine visibility thresholds with different eyewear, while the second used a 4-alternative forced-choice (4-AFC) method to measure participants’ ability to identify contrast patterns. Results indicate that eyewear with varied transmittance enhances contrast sensitivity for these chromatic pairs more effectively than uniform-transmittance lenses. These findings offer insight into optimizing color-enhancing eyewear for specific aspects of visual performance and point to broader applications in enhancing human visual perception across visual tasks.
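
As an illustration of the scoring step, below is a minimal sketch of analyzing such a 4-AFC experiment; the data layout and the 62.5% threshold criterion (midway between the 25% chance level and 100%) are assumptions, since the abstract does not describe the analysis at this level of detail.

    # Minimal sketch of scoring a 4-AFC contrast-detection experiment.
    # Data layout and the 62.5% criterion are illustrative assumptions.
    from collections import defaultdict

    # (spatial_frequency_cpd, response_correct) pairs from one participant.
    trials = [(1, True), (1, True), (2, True), (2, False),
              (4, True), (4, False), (8, False), (8, False)]

    by_freq = defaultdict(list)
    for freq, correct in trials:
        by_freq[freq].append(correct)

    # Proportion correct at each spatial frequency; chance level is 0.25.
    for freq in sorted(by_freq):
        p = sum(by_freq[freq]) / len(by_freq[freq])
        print(f"{freq} cpd: {p:.2f} correct")

    # Take the visibility threshold as the highest spatial frequency whose
    # proportion correct stays at or above the 62.5% criterion.
    threshold = max((f for f in by_freq
                     if sum(by_freq[f]) / len(by_freq[f]) >= 0.625),
                    default=None)
    print("estimated threshold:", threshold, "cpd")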

Digital Library: EI
Published Online: February 2025
Pages 256-1 - 256-4, This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Volume 37
Issue 9
Abstract

Accurate and precise classification and quantification of skin pigmentation is critical for addressing health inequities such as racial bias in pulse oximetry. Current skin-tone classification methods rely on measuring or estimating color, using either a measurement device or subjective matching against skin-tone color scales. Robust detection of skin type and melanin index is challenging because such methods require precise calibration, and recent sun exposure can affect the measurements through tanning or erythema. The proposed system differentiates and quantifies skin type and melanin index by exploiting the variance in skin structures and the skin pigmentation network across skin types. Results from a small study show that skin-structure patterns provide a robust, color-independent method for skin-tone classification, and a real-time system demo shows the practical viability of the method.
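
As a minimal sketch of a color-independent, texture-based classifier in the spirit of this approach, the following pairs local binary patterns with a nearest-neighbor classifier; the feature choice, classifier, and synthetic data are illustrative assumptions, not the paper's actual pipeline.

    # Minimal sketch: classify skin type from micro-structure texture rather
    # than color. LBP features and the 1-NN classifier are assumptions.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.neighbors import KNeighborsClassifier

    def lbp_histogram(gray_patch, p=8, r=1):
        """Histogram of uniform LBP codes; sensitive to micro-structure,
        largely invariant to absolute color and brightness."""
        codes = local_binary_pattern(gray_patch, P=p, R=r, method="uniform")
        hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
        return hist

    # Hypothetical training data: grayscale skin patches with known
    # Fitzpatrick labels (types I-VI encoded as 0-5).
    rng = np.random.default_rng(0)
    patches = (rng.random((12, 64, 64)) * 255).astype(np.uint8)  # stand-ins
    labels = np.repeat(np.arange(6), 2)

    features = np.stack([lbp_histogram(patch) for patch in patches])
    clf = KNeighborsClassifier(n_neighbors=1).fit(features, labels)

    new_patch = (rng.random((64, 64)) * 255).astype(np.uint8)
    print("predicted skin type:", clf.predict([lbp_histogram(new_patch)])[0])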

Digital Library: EI
Published Online: February 2025

Keywords

Contrast Sensitivity; Color-Enhancing Lenses; Diffusion Model; deep learning; Fitzpatrick skin type; Human Vision Perception; Image Quality Assessment; individual topology angle; Image Generation AI; image-to-image; Opponent Colors; skin melanin network; skin melanin index; skin tone detection; skin type detection; Spatial-Chromatic Patterns; skin structures; Tinted Eyewear; Transmittance; Vision and Language; 4-AFC Experiment