
Large Language Models (LLMs) are advanced neural networks designed to interpret and generate human-like text, thanks to their architecture and to having been trained on vast amounts of data. They can perform a wide range of natural language processing tasks, including text generation, translation, summarization, and question answering, and are the engines of conversational AI platforms such as ChatGPT, Gemini, and Claude. A key feature of such LLMs is their inference of a subsequent piece of text from preceding pieces of text, and this computational structure lends itself to making other, similar sequential inferences. While the acquisition of color measurements may at first seem far removed from the domain of LLMs, it too can be thought of as a sequential process, consisting of the measurement of a sequence of stimuli, and it is therefore open to sequential inference. The present paper introduces an adaptation of LLMs to color data, and more broadly to sensor data, applying them to generate measurements from a preceding sequence by pre-training transformers on sensor data sequences. Promising first results are shared, pointing to low color differences when models are prompted with data of a magnitude similar to that of the data being constructed using generative AI (GenAI).
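The sequential-inference framing above, treating a sequence of color measurements like a sequence of tokens, can be sketched minimally as follows. This is an illustrative assumption, not the paper's method: CIELAB triplets are quantized into discrete tokens, a simple bigram frequency table stands in for a pre-trained transformer's conditional next-token distribution, and prediction quality is assessed with the CIE76 color difference (Euclidean distance in CIELAB).

```python
import numpy as np

def quantize(lab, step=5.0):
    # Map a CIELAB triplet to a discrete token (a tuple of bin indices).
    return tuple(int(round(c / step)) for c in lab)

def dequantize(token, step=5.0):
    # Map a token back to a representative CIELAB triplet (bin centers).
    return np.array([i * step for i in token])

def train_bigram(sequence, step=5.0):
    # Count next-token frequencies over a measurement sequence;
    # a toy stand-in for a transformer's learned conditional distribution.
    counts = {}
    tokens = [quantize(lab, step) for lab in sequence]
    for prev, nxt in zip(tokens, tokens[1:]):
        counts.setdefault(prev, {}).setdefault(nxt, 0)
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, lab, step=5.0):
    # Greedy inference of the next measurement from the preceding one.
    token = quantize(lab, step)
    if token not in counts:
        return None
    best = max(counts[token], key=counts[token].get)
    return dequantize(best, step)

def delta_e_ab(lab1, lab2):
    # CIE76 color difference: Euclidean distance in CIELAB.
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Hypothetical "measurement sequence": a neutral lightness ramp in CIELAB.
ramp = [np.array([L, 0.0, 0.0]) for L in np.arange(0.0, 100.0, 5.0)]
model = train_bigram(ramp)
pred = predict_next(model, np.array([50.0, 0.0, 0.0]))
print(pred, delta_e_ab(pred, [55.0, 0.0, 0.0]))
```

The actual paper pre-trains transformers rather than counting bigrams; the sketch only shows how measurement generation maps onto next-token prediction and how ΔE quantifies the result.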
Ján Morovič and Peter Morovič, "LLMs Speak LAB," in Color and Imaging Conference, 2025, pp. 191–196, https://doi.org/10.2352/CIC.2025.33.1.36