Proceedings Paper
Volume: 33 | Article ID: 36
Abstract

Large Language Models (LLMs) are advanced neural networks that interpret and generate human-like text by virtue of their architecture and of having been trained on vast amounts of data. They can perform a wide range of natural language processing tasks, including text generation, translation, summarization, and question answering, and are the engines of conversational AI platforms such as ChatGPT, Gemini, and Claude. A key feature of such LLMs is their inference of a subsequent piece of text from preceding pieces of text. As such, their computational structure lends itself to making other, similar sequential inferences. While the acquisition of color measurements may at first seem far removed from the domain of LLMs, it too can be thought of as a sequential process, consisting of the measurement of a sequence of stimuli, and is therefore open to sequential inference. The present paper introduces an adaptation of LLMs to color data, and more broadly to sensor data, and their application to generating measurements from a preceding sequence, based on pre-training transformers with sensor data sequences. Promising first results are shared that point to low color differences when models are prompted with data of similar magnitude to those being constructed using generative AI (GenAI).
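To make the sequential-inference idea concrete, the following is a minimal illustrative sketch, not the paper's actual transformer model: a trivial autoregressive predictor stands in for the pre-trained network, inferring the next CIELAB measurement from the preceding ones, with the CIE76 ΔE*ab formula quantifying the color difference between predicted and measured values. The predictor, data, and function names here are hypothetical.

```python
import math

def predict_next_lab(sequence):
    """Predict the next (L*, a*, b*) measurement from a sequence of
    preceding triples by linear extrapolation of the last two samples.
    This is a placeholder for the paper's pre-trained transformer,
    which would make this inference from the whole sequence."""
    if len(sequence) < 2:
        return sequence[-1]
    (L1, a1, b1), (L2, a2, b2) = sequence[-2], sequence[-1]
    return (2 * L2 - L1, 2 * a2 - a1, 2 * b2 - b1)

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical measurement sequence of a slowly varying stimulus.
measurements = [(50.0, 10.0, 20.0), (51.0, 10.5, 20.5), (52.0, 11.0, 21.0)]
predicted = predict_next_lab(measurements[:-1])
print(round(delta_e76(predicted, measurements[-1]), 3))
```

A learned model would replace the extrapolation step, but the evaluation loop is the same: generate the next measurement from the preceding sequence, then score it against the actual measurement with a color-difference metric.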

Views 35
Downloads 13
  Cite this article 

Ján Morovič, Peter Morovič, "LLMs Speak LAB," in Color and Imaging Conference, 2025, pp. 191-196, https://doi.org/10.2352/CIC.2025.33.1.36

  Copyright statement 
Copyright ©2025 Society for Imaging Science and Technology
Color and Imaging Conference
ISSN: 2166-9635
Society for Imaging Science and Technology
IS&T 7003 Kilworth Lane, Springfield, VA 22151 USA