In future manufacturing, human-machine interaction will evolve towards flexible and smart collaboration, meeting requirements arising both from the optimization of assembly processes and from motivated, skilled human behavior. Recently, human factors engineering has progressed substantially by means of detailed task analysis. However, there is still a lack of precise measurement of cognitive and sensorimotor patterns for the analysis of long-term mental and physical strain. This work presents a novel methodology that enables real-time measurement of cognitive load, based on executive function analyses, as well as of biomechanical strain, from non-obtrusive wearable sensors. The methodology builds on 3D information recovery of the working cell using a precise stereo measurement device. The worker is equipped with eye tracking glasses and a set of wearable accelerometers. Wireless connectivity transmits the sensor data to a nearby PC for monitoring. Data analytics then recovers the 3D geometry of gaze and viewing frustum within the working cell and furthermore extracts the worker's task switching rate as well as a skeleton-based approximation of the worker's posture, associated with an estimation of the biomechanical strain on muscles and joints. First results, enhanced by AI-based estimators, demonstrate a good match with the results of an activity analysis performed by occupational therapists.
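As an illustration of the task-switching analysis described above, the following minimal Python sketch counts transitions between areas of interest (AOIs) in a gaze stream and normalizes them to a rate. The AOI labels and the input format are assumptions for illustration; the paper's exact executive-function metrics are not specified here.

```python
from typing import List, Tuple

def task_switching_rate(fixations: List[Tuple[float, str]]) -> float:
    """Estimate task switches per minute from AOI-labelled fixations.

    `fixations` is a list of (timestamp_seconds, aoi_label) pairs, e.g.
    the result of mapping 3D gaze points onto work-cell regions.
    A "switch" is counted whenever the AOI label changes between
    consecutive fixations.
    """
    if len(fixations) < 2:
        return 0.0
    switches = sum(
        1 for (_, prev), (_, curr) in zip(fixations, fixations[1:])
        if prev != curr
    )
    duration_min = (fixations[-1][0] - fixations[0][0]) / 60.0
    return switches / duration_min if duration_min > 0 else 0.0

# Hypothetical gaze stream: timestamps in seconds, AOI labels from the work cell.
stream = [(0.0, "bin"), (1.2, "assembly"), (2.5, "assembly"), (4.0, "screen")]
print(task_switching_rate(stream))  # 2 switches over 4 s -> 30 per minute
```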
Augmented Reality (AR) can seamlessly create the illusion of virtual elements blended into a real-world scene, making it one of the most fascinating human-machine interaction technologies. AR has been utilized in a variety of real-life applications including immersive collaborative gaming, fashion appreciation, interior design, and assistive devices for individuals with vision impairments. This paper contributes a real-time AR application, ARFurniture, which allows users to envision furniture-of-interest in different colors and styles, all from their smart devices. The core software architecture consists of deep-learning-based semantic segmentation and fast color transformation. It allows the user to prompt the system to colorize the style of the furniture-of-interest within the scene, and has been successfully deployed on mobile devices. In addition, a concept for head-mounted, user-centric AR indoor decoration style colorization, using eye gaze as a pointing indicator, is discussed. Related algorithms, system design, and simulation results for ARFurniture are presented. Furthermore, a no-reference image quality measure, the Naturalness Image Quality Evaluator (NIQE), was utilized to evaluate the immersiveness and naturalness of ARFurniture. The results demonstrate that ARFurniture has game-changing value for enhancing the user experience in indoor decoration.
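To make the segmentation-plus-colorization pipeline concrete, here is a minimal sketch of the color-transformation step, assuming a binary mask for the furniture-of-interest has already been produced by the segmentation network. The hue-replacement approach and the names used are illustrative assumptions, not ARFurniture's exact method.

```python
import cv2
import numpy as np

def recolor_furniture(image_bgr: np.ndarray,
                      mask: np.ndarray,
                      target_hue: int) -> np.ndarray:
    """Recolor the masked furniture region toward `target_hue`.

    image_bgr:  H x W x 3 uint8 camera frame.
    mask:       H x W boolean array from the segmentation network
                (True where the furniture-of-interest is).
    target_hue: OpenCV hue value in [0, 179].

    The hue channel is replaced inside the mask while saturation and
    value are kept, so shading and texture are preserved.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hsv[..., 0] = np.where(mask, target_hue, hsv[..., 0])
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# Hypothetical usage: recolor a segmented sofa toward blue (hue ~120).
# frame = cv2.imread("scene.jpg")
# sofa_mask = segment(frame) == SOFA_CLASS_ID  # assumed segmentation model
# preview = recolor_furniture(frame, sofa_mask, target_hue=120)
```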
Autonomous driving has the potential to positively impact the daily life of humans. Techniques such as image processing, computer vision, and remote sensing have been heavily involved in creating reliable and secure robotic cars. Conversely, the interaction between human perception and autonomous driving has not been deeply explored. Therefore, analyzing human perception during the cognitive driving task, while critical driving decisions are made, may greatly benefit the study of autonomous driving. To achieve such an analysis, eye movement data of human drivers was collected with a mobile eye tracker while driving in an automotive simulator built around an actual physical car, which mimics a realistic driving experience. Initial experiments have been performed to investigate the potential correlation between the driving behaviors and fixation patterns of the human driver.
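As a sketch of the kind of correlation analysis mentioned above, the snippet below relates horizontal gaze position to the steering signal on a common timeline. The variable names, the resampling step, and the use of a simple Pearson correlation are assumptions for illustration, not the study's published analysis.

```python
import numpy as np

def gaze_steering_correlation(t_gaze, gaze_x, t_steer, steer_angle):
    """Pearson correlation between horizontal gaze and steering angle.

    t_gaze, gaze_x:       eye-tracker timestamps (s) and horizontal gaze.
    t_steer, steer_angle: simulator timestamps (s) and steering signal.

    The gaze stream is resampled onto the simulator timeline by linear
    interpolation before correlating.
    """
    gaze_resampled = np.interp(t_steer, t_gaze, gaze_x)
    return np.corrcoef(gaze_resampled, steer_angle)[0, 1]

# Hypothetical 10 s drive: gaze leads the steering input into a curve.
t = np.linspace(0.0, 10.0, 500)
steer = np.sin(0.5 * t)                               # simulated steering
gaze = np.sin(0.5 * (t + 0.4)) + 0.05 * np.random.randn(t.size)
print(gaze_steering_correlation(t, gaze, t, steer))   # strong positive value
```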
In recent years, head-up displays (HUDs) have been studied actively, and among them the 3D HUD has attracted rising attention. In HUD applications, the stereoscopic image is assumed to be displayed at a far distance. A super multi-view (SMV) display provides smooth parallax: even when the 3D image is at a far distance, it can be displayed with appropriate depth. In previous studies, a high-resolution display was required for SMV; however, there are restrictions on increasing the resolution. Therefore, we propose a novel SMV display using time-division multiplexing and eye tracking techniques. The proposed system does not require a high-resolution display. It consists of a digital micromirror device (DMD) and a light source array. Rays from the light source array are reflected at the DMD and form an image in the vicinity of the pupil; the image-forming position depends on the light source position. The image displayed on the DMD is changed in correspondence with the focal point. To confirm the principle of the proposed method, we experimented with creating a viewing zone only in the vicinity of the pupil. From the results, we confirmed 8 viewpoints in the horizontal direction within an 18.8 mm viewing zone.
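A quick back-of-the-envelope check of the reported figures, as a minimal Python sketch: 8 viewpoints over an 18.8 mm zone give a pitch of roughly 2.4 mm, smaller than a typical pupil diameter, so neighboring viewpoints can fall inside the pupil simultaneously, which is the condition SMV displays exploit for smooth parallax. The pupil diameter used below is an assumed typical value, not a figure from the paper.

```python
# Back-of-the-envelope check of the super multi-view (SMV) condition.
viewing_zone_mm = 18.8   # horizontal viewing zone reported in the experiment
num_viewpoints = 8       # horizontal viewpoints confirmed in the experiment
pupil_diameter_mm = 4.0  # assumed typical pupil diameter (illustrative)

pitch_mm = viewing_zone_mm / num_viewpoints    # ~2.35 mm per viewpoint
views_in_pupil = pupil_diameter_mm / pitch_mm  # ~1.7 views across the pupil

print(f"viewpoint pitch: {pitch_mm:.2f} mm")
print(f"viewpoints spanning the pupil: {views_in_pupil:.1f}")
# Since the pitch is below the pupil diameter, adjacent viewpoints can
# enter the pupil together, approaching the SMV condition for smooth parallax.
```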