GPT-4, a multimodal large language model, was released on March 14, 2023. It is based on the Transformer architecture for natural language processing: a large neural network is first trained through unsupervised learning and then fine-tuned with reinforcement learning from human feedback (RLHF). Although GPT-4 is a research achievement in the field of natural language processing (NLP), the underlying technology can be applied not only to natural language generation but also to image generation. However, the specifications of GPT-4 have not been made public, making it difficult to use for research purposes. In this study, we first generated an image database by adjusting the parameters of Stable Diffusion, a deep learning model that generates images from text and image inputs. We then carried out experiments to evaluate the quality of 3D CG images from the generated database, and discussed quality assessment of the image generation model.
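A parameter sweep of the kind described above can be sketched as follows. The prompts, parameter ranges, and model ID are illustrative assumptions, not the study's actual settings; the Stable Diffusion pipeline call (via the `diffusers` library) is shown only as a comment so the sketch runs without a GPU or model download.

```python
from itertools import product

# Illustrative parameter ranges (assumptions, not the study's settings).
prompts = ["a 3D CG rendering of a city street", "a 3D CG rendering of a forest"]
guidance_scales = [5.0, 7.5, 10.0]   # classifier-free guidance strength
num_steps = [20, 50]                 # denoising steps
seeds = [0, 1]                       # for reproducible sampling

def build_sweep():
    """Enumerate every (prompt, guidance, steps, seed) combination
    that would populate the image database."""
    return list(product(prompts, guidance_scales, num_steps, seeds))

sweep = build_sweep()
print(len(sweep))  # 2 prompts * 3 guidances * 2 step counts * 2 seeds = 24

# For each configuration, a Stable Diffusion pipeline would be invoked
# roughly like this (commented out; requires GPU and model weights):
# pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# image = pipe(prompt, guidance_scale=g, num_inference_steps=n,
#              generator=torch.Generator().manual_seed(seed)).images[0]
```

Sweeping each axis independently makes it straightforward to attribute quality differences in the resulting database to a single parameter.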
This research explores the effect of eyewear lenses with varied transmittance properties on human visual perception. These lenses are designed to enhance contrast for spatial-chromatic patterns such as cyan-red (CR) and magenta-green (MG) relative to lenses with more uniform transmittance. The study evaluates participants' accuracy and response times in identifying contrast patterns, aiming to understand how different eyewear configurations affect these visual metrics. Two experiments were conducted: the first adjusted spatial frequencies to determine visibility thresholds under different eyewear, while the second used a 4-alternative forced-choice (4-AFC) method to measure participants' ability to identify contrast patterns. Results indicate that eyewear with varied transmittance enhances contrast sensitivity for these chromatic pairs more effectively than uniform-transmittance lenses. These findings offer valuable insights into optimizing color-enhancing eyewear to improve specific aspects of visual performance, with broader applications in enhancing human visual perception across a range of visual tasks.
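The scoring of a 4-AFC block can be sketched as follows. The trial records below are hypothetical, and the pattern labels reuse the CR/MG abbreviations from the abstract; the key point made explicit in the code is that with four alternatives, chance accuracy is 25%.

```python
from statistics import mean

# Hypothetical trial records: (presented pattern, response, response time in s).
trials = [
    ("CR", "CR", 0.81), ("MG", "MG", 0.94), ("CR", "MG", 1.22),
    ("MG", "MG", 0.88), ("CR", "CR", 0.76), ("MG", "CR", 1.35),
]

CHANCE_LEVEL = 0.25  # 4 alternatives -> 25% accuracy by guessing alone

def score_4afc(trials):
    """Return (proportion correct, mean RT on correct trials)."""
    correct = [t for stim, resp, t in trials if stim == resp]
    accuracy = len(correct) / len(trials)
    mean_rt = mean(correct)
    return accuracy, mean_rt

acc, rt = score_4afc(trials)
print(acc > CHANCE_LEVEL, rt)
```

Comparing accuracy against the 0.25 chance level, rather than against zero, is what distinguishes genuine pattern identification from guessing in a forced-choice design.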
Accurate and precise classification and quantification of skin pigmentation is critical to addressing health inequities such as racial bias in pulse oximetry. Current skin-tone classification methods rely on measuring or estimating skin color, using either a measurement device or subjective matching against skin-tone color scales. Robust detection of skin type and melanin index is challenging, as these methods require precise calibration, and recent sun exposure may affect the measurements due to tanning or erythema. The proposed system differentiates and quantifies skin type and melanin index by exploiting the variance in skin structures and the skin pigmentation network across skin types. Our results from a small study show that skin structure patterns provide a robust, color-independent method for skin tone classification. A real-time system demo demonstrates the practical viability of the method.
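The color-independent idea above can be illustrated with a toy texture statistic: local pixel variance over small patches of a grayscale image, which responds to skin structure rather than absolute color. The patch size and the synthetic images are arbitrary choices for illustration only and are not the features used by the proposed system.

```python
from statistics import pvariance

def patch_variances(image, patch=4):
    """Split a 2D grayscale image (list of lists) into patch x patch tiles
    and return the pixel variance of each tile -- a simple,
    color-independent texture statistic."""
    h, w = len(image), len(image[0])
    variances = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            pixels = [image[y][x] for y in range(i, i + patch)
                                  for x in range(j, j + patch)]
            variances.append(pvariance(pixels))
    return variances

# Two synthetic 8x8 "skin" images: one flat, one with checker-like structure.
flat = [[128] * 8 for _ in range(8)]
structured = [[100 if (x + y) % 2 else 160 for x in range(8)] for y in range(8)]

print(max(patch_variances(flat)))        # no structure -> zero variance
print(min(patch_variances(structured)))  # structure -> nonzero variance
```

Because the statistic is computed on grayscale intensity contrasts within each tile, it is unaffected by a global shift in color, which is the property that makes structure-based features robust to tanning or erythema.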