Estimation of Motion Sickness in Automated Vehicles using Stereoscopic Visual Simulation
DOI: 10.2352/J.ImagingSci.Technol.2022.66.6.060405 | Published Online: November 2022
Abstract

Automation of driving reduces driver agency, and there are concerns about motion sickness in automated vehicles. Reduced agency in automated driving is closely related to virtual reality technology, where its effect on simulator sickness has been confirmed; both forms of motion sickness share a similar mechanism, sensory conflict. In this study, we investigated the use of deep learning for predicting motion sickness. We conducted experiments using an actual vehicle and a stereoscopic visual simulation and, for each experiment, predicted the occurrence of motion sickness. Based on the prediction results, we extended the training data with the stereoscopic simulation data and thereby improved the accuracy of predicting motion sickness in the actual vehicle. These results suggest that data acquired through stereoscopic visual simulation can be utilized for deep learning.

Cite this article

Yoshihiro Banchi and Takashi Kawai, "Estimation of Motion Sickness in Automated Vehicles using Stereoscopic Visual Simulation," J. Imaging Sci. Technol., 2022, pp. 060405-1–060405-10, https://doi.org/10.2352/J.ImagingSci.Technol.2022.66.6.060405
Copyright statement
Copyright © Society for Imaging Science and Technology 2022
Article timeline
• Received: July 2022
• Accepted: November 2022
• Published: November 2022

1. Introduction
Autonomous driving technology is making great progress through advanced sensing technologies such as LIDAR and advanced computing techniques such as image analysis using deep learning. Although fully automated driving has yet to be achieved, driver assistance systems have in recent years made it possible for drivers to leave most driving operations to the vehicle. Automated driving can be categorized into levels according to the degree of automation [1, 2]. As various countries continue to establish laws and regulations related to automated driving, vehicles with level 3 performance have started to appear on the market. As the level of automation rises, the need for drivers to operate the vehicle decreases; at level 5, the driver does not need to operate the vehicle at all.
One concern as driver engagement becomes completely unnecessary is motion sickness during automated driving. One reason drivers rarely become motion sick is the effect of agency: susceptibility to motion sickness differs according to the degree of agency, and, in automobiles, passengers (those sitting in the passenger seat) are more susceptible than drivers [3]. Motion sickness may therefore occur as automated driving reduces driver agency [4]. Because the driver must remain able to take over the vehicle in emergencies until level 5 is reached, it is necessary to pay attention to the driver's condition in addition to controlling the vehicle.
Agency is also thought to play a part in simulator sickness. Agency is related to presence [5], which is known to affect simulator sickness [6]. Meanwhile, in technologies such as virtual reality (VR), agency and immersion are positioned as essential elements; hence, there are concerns about the occurrence of simulator sickness and VR sickness [7, 8].
One cause of motion sickness, including motion sickness in automated driving and simulator sickness, is sensory conflict [9]. Sensory conflict theory states that motion sickness is caused by visual or somatosensory information that is inconsistent with the information expected from past experience. Automated driving can also be viewed as a type of sensory conflict, and the degree of conflict is thought to increase further when the driver engages in activities other than driving. One difference between the two is that in automated driving the somatosensory (motion) information is physically present, whereas in a simulator it is not. Accordingly, studies have examined mitigation techniques such as matching visual and somatosensory information [10] and limiting visual information [11] to reduce motion sickness. Furthermore, regarding in-vehicle behavior during automated driving, VR-based experiments have linked the presentation of somatosensory information with visual information [12].
Studies have also been conducted to evaluate motion sickness and predict its occurrence. To use objective indicators for evaluating motion sickness, many correlations between objective indicators and motion sickness have been investigated [13–15], and recent work has applied machine learning to such indicators [16–18]. Although these studies were able to assess heart rate (HR), electromyography (EMG), and electroencephalography (EEG), attaching the sensors required for some of these indicators is difficult in practice, which makes such approaches unrealistic to deploy. Machine learning, however, makes it possible to evaluate relationships based on single or multiple objective indicators. In human behavior recognition, machine learning already achieves a consistent level of accuracy [19, 20]; if the same were possible for motion sickness, it could contribute to resolving issues related to its onset.
In a previous study, we attempted to estimate the occurrence of cybersickness, which is a type of motion sickness [21]. In a mixed reality (MR) environment, we obtained indicators of discomfort and general physiology while participants continuously performed puzzle-assembly tasks, and we estimated the discomfort from the physiological indicators using a deep learning model. The results suggested that the onset of cybersickness could be predicted to a certain extent. However, machine learning generally requires a large number of samples, and it is difficult to secure enough samples in a test environment.
2. Purpose
In this study, we aimed to develop a system that predicts the occurrence of motion sickness in automated driving and enables its early detection. However, because the COVID-19 crisis made large-scale real vehicle experiments difficult, and given the similarity of underlying mechanisms such as sensory conflict, this study also examines expanding the training data using stereoscopic simulation.
3. Methods
3.1 Driving an Actual Vehicle (Experiment 1)
3.1.1 Measures
As a subjective indicator, participants were asked to report their motion sickness every 10 s on a three-level scale (1: No sickness, 2: Slight sickness, 3: High sickness). For predicting motion sickness in automobiles, it is considered essential to detect it early and encourage rest, especially for the driver; we therefore placed particular importance on distinguishing the slight-sickness stage from no sickness and set data acquisition to three levels. As objective indicators, a HOT-2000 (NeU) was used to acquire cerebral blood flow, pulse rate, and acceleration, while electrodermal activity (EDA) was acquired using biosignalsplux (10 Hz). Low-frequency (LF) and high-frequency (HF) band power was also calculated from the pulse rate; an illustrative sketch of one such computation follows.
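The paper does not detail how the LF and HF bands were computed. The following is a minimal sketch of one conventional approach, assuming a Welch power spectral density over an evenly resampled inter-beat-interval series and the customary 0.04–0.15 Hz (LF) and 0.15–0.40 Hz (HF) band limits; all parameter choices here are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: the paper does not specify its LF/HF computation.
# Conventional HRV bands are assumed: LF = 0.04-0.15 Hz, HF = 0.15-0.40 Hz.
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(ibi, fs=4.0):
    """ibi: inter-beat intervals (s), assumed evenly resampled at fs Hz."""
    f, pxx = welch(ibi - np.mean(ibi), fs=fs, nperseg=min(256, len(ibi)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])  # integrate PSD over each band
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / hf
```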
3.1.2 Stimuli
For the driving course, several driving patterns were prepared in advance, and patterns in which motion sickness could easily occur were selected. The course combined left and right turns with emergency braking and emergency acceleration; its contents are summarized in Table I. All drivers were trained to drive the course at the same pace.
Table I. Driving course.

Time (s)    Driving pattern
0–15        Sharp cutbacks left and right
15–35       Turning 2.5 times to the left
35–55       Turning 2.5 times to the right
55–65       Stopping by emergency brake/emergency acceleration × 3 times
65–95       Turning three times in a figure-8 pattern
95–105      Sharp cutbacks left and right
105–125     Turning three times to the left
125–130     Stopping using the emergency brake
3.1.3 Procedure
Eleven adults participated in the experiment. The purpose of the experiment and the participants' tasks were explained, and informed consent was obtained. Participants were instructed that 2 (Slight sickness) is a state distinct from 1 (No sickness) and that they should report 2 whenever they felt even slightly sick. Various measurement devices were then attached, and the participants sat in the passenger seat. To ensure stable measurement and posture, they placed their left hand on a jig and lightly gripped the door handle with their right hand. The discomfort level was checked before starting, and during the experiment the participants only had to verbally rate their discomfort every 10 s. After the cue sound was confirmed, the experiment was started: from the start of the drive, participants verbally rated their discomfort at each cue from an electronic metronome every 10 s. If a participant suffered extreme motion sickness, the experiment was stopped.
3.2 Simulation through Stereoscopic Imaging (Experiment 2)
3.2.1 Measures
The same indicators as in Experiment 1 were acquired for the simulation experiment. Specifically, as a subjective indicator, participants reported their motion sickness every 10 s on the same three-level scale (1: No sickness, 2: Slight sickness, 3: High sickness). As objective indicators, cerebral blood flow, pulse rate, acceleration, and EDA were obtained (10 Hz), and LF and HF band power was calculated from the pulse rate.
3.2.2 Stimuli
A VR camera was attached to a participant's head, and the course driven in Experiment 1 was captured (Figure 1). An Insta360 EVO, capable of capturing 3D VR180 video, was used as the VR camera. The video size was 3840 × 3840 px (3840 × 1920 per eye, 50 fps, 20 Mbps), and the duration was 130 s.
Figure 1. VR camera attached to the participant's head.
3.2.3 Procedure
A total of 54 adults participated in the experiment. The purpose of the experiment and the participants' tasks were explained, and informed consent was obtained. Participants were instructed that 2 (Slight sickness) is a state distinct from 1 (No sickness) and that they should report 2 whenever they felt even slightly sick. Various measurement devices were then attached, and participants sat in a bucket seat to begin the simulation (Figure 2). A VIVE Pro VR headset (HTC) was used to present the VR video stimuli. The discomfort level was checked before starting, and during the experiment the participants only had to verbally rate their discomfort every 10 s. From the start of the video, participants verbally rated their discomfort at each electronic metronome cue every 10 s.
Figure 2. Layout of the stereoscopic visual simulation experiment.
4. Results
Within each indicator, data from many participants and time intervals contained missing values, and these were excluded from the analysis. As a result, the data of 9 participants in Experiment 1 and 52 participants in Experiment 2 were analyzed.
4.1 Subjective Indicators
(1) Real vehicle experiment (Experiment 1)
Across the three levels of motion sickness, there were 85 cases of 1 (No sickness), 31 cases of 2 (Slight sickness), and 1 case of 3 (High sickness). The transition of motion sickness over time intervals is shown in Figure 3 (Experiment 1). In the plots, the horizontal axis indicates the time interval, and the vertical axis indicates the average motion sickness across all participants. A one-way repeated measures ANOVA on time interval showed a significant effect (F(12, 96) = 2.216, p = 0.017, η² = 0.109). Shaffer post-hoc paired comparisons over time showed no significant differences.
(2) Stereoscopic visual simulation
Across the three levels of motion sickness, there were 456 cases of 1 (No sickness), 196 cases of 2 (Slight sickness), and 24 cases of 3 (High sickness). The transition over time intervals is shown in Fig. 3 (Experiment 2). In the plots, the horizontal axis indicates the time interval, and the vertical axis indicates the average motion sickness across all participants. A one-way repeated measures ANOVA on time interval showed a significant effect (F(12, 612) = 11.736, p < 0.001, η² = 0.090). Shaffer post-hoc paired comparisons showed significant differences between T1 (0–10 s) and T9, T10, and T13; between T2 and T7–T13; between T3 and T7, T8, T9, T10, and T13; between T4 and T9, T10, and T13; between T5 and T10 and T13; and between T6 and T10 and T13 (Tables A1 and A2).
Figure 3. Transition of motion sickness.
4.2 Objective Indicators
To judge the correspondence with the subjective indicators, we calculated the mean value of each indicator over 10-s intervals. The objective indicators used were EDA, pulse rate, and LF/HF, and each indicator was normalized per participant to the range −1 to 1 using MinMaxScaler to observe relative variation; a sketch of this normalization is shown below.
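As a concrete illustration, the per-participant scaling described above can be done with scikit-learn's MinMaxScaler. The dictionary-of-arrays layout here is a hypothetical convenience, not the authors' data format.

```python
# Per-participant MinMax normalization to [-1, 1], as described in the text.
# The dict-of-arrays data layout is a hypothetical convenience.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def normalize_per_participant(series_by_participant):
    out = {}
    for pid, values in series_by_participant.items():
        x = np.asarray(values, dtype=float).reshape(-1, 1)
        # Fit the scaler on this participant's own data only.
        out[pid] = MinMaxScaler(feature_range=(-1, 1)).fit_transform(x).ravel()
    return out
```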
4.2.1 EDA
(1) Real vehicle experiment (Experiment 1)
The mean values and standard errors for each time interval are shown in Figure 4 (Experiment 1). The mean value before normalization was 9.16 μS. A one-way repeated measures ANOVA on time interval showed a significant effect (F(12, 96) = 4.142, p < 0.001, η² = 0.259). Shaffer post-hoc paired comparisons showed a significant difference between T1 and T2.
(2) Stereoscopic visual simulation
The mean values and standard errors for each time interval are shown in Fig. 4 (Experiment 2). The mean value before normalization was 8.14 μS. A one-way repeated measures ANOVA on time interval showed a significant effect (F(12, 96) = 3.604, p < 0.001, η² = 0.039). Shaffer post-hoc paired comparisons showed significant differences between T1 and T2, T3, T4, and T11 (Tables A3 and A4).
Figure 4. Results of EDA.
4.2.2 Pulse Rate
(1) Real vehicle experiment (Experiment 1)
The mean values and standard errors for each time interval are shown in Figure 5 (Experiment 1). The mean value before normalization was 101.47 bpm. A one-way repeated measures ANOVA on time interval showed a significant effect (F(12, 96) = 3.969, p < 0.001, η² = 0.250). Shaffer post-hoc paired comparisons showed a significant difference between T3 and T7.
(2) Stereoscopic visual simulation
The mean values and standard errors for each time interval are shown in Fig. 5 (Experiment 2). The mean value before normalization was 83.71 bpm. A one-way repeated measures ANOVA on time interval showed a significant effect (F(12, 96) = 4.311, p < 0.001, η² = 0.036). Shaffer post-hoc paired comparisons showed significant differences between T1 and T3; between T2 and T3 and T4; between T3 and T6; and between T4 and T6 (Tables A5 and A6).
Figure 5. Results of pulse rate.
4.2.3 LF/HF
(1) Real vehicle experiment (Experiment 1)
The mean values and standard errors for each time interval are shown in Figure 6 (Experiment 1). The mean value before normalization was 2.85. A one-way repeated measures ANOVA on time interval showed no significant effect (F(12, 96) = 1.595, p = 0.106, η² = 0.052).
(2) Stereoscopic visual simulation
The mean values and standard errors for each time interval are shown in Fig. 6 (Experiment 2). The mean value before normalization was 3.06. A one-way repeated measures ANOVA on time interval showed no significant effect (F(12, 96) = 1.258, p = 0.240, η² = 0.011).
Figure 6. Results of LF/HF.
5. Prediction of Motion Sickness using Deep Learning
5.1 Model
For predicting motion sickness, a deep learning model based on a one-dimensional convolutional neural network (1DCNN) was used. Prediction with a recurrent neural network (RNN) was also tried, but as the 1DCNN model achieved higher precision, we report the 1DCNN results. The model had three layers (convolution, pooling, and dropout) followed by two fully connected layers; the first fully connected layer additionally received the predicted motion sickness value for the previous interval as input. Leaky ReLU was used as the activation function, and softmax was used for the output layer. Categorical cross-entropy was used as the loss function with the Adam optimizer. An image of the model is shown in Figure 7, and a minimal sketch of such a model follows it.
Figure 7. Image of the model used for predicting motion sickness.
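The paper specifies the layer types, Leaky ReLU, the softmax output, categorical cross-entropy, and Adam, but not the filter count, kernel size, dropout rate, or dense width; the values below are illustrative assumptions in a minimal Keras sketch.

```python
# Minimal Keras sketch of the 1DCNN described above. Filter count, kernel
# size, dropout rate, and dense width are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_model(window_len=100, n_features=3, n_classes=2):
    signal_in = tf.keras.Input(shape=(window_len, n_features))
    prev_in = tf.keras.Input(shape=(1,))   # predicted level of the previous interval
    x = layers.Conv1D(32, 5)(signal_in)    # convolution layer
    x = layers.LeakyReLU()(x)
    x = layers.MaxPooling1D(2)(x)          # pooling layer
    x = layers.Dropout(0.3)(x)             # dropout layer
    x = layers.Flatten()(x)
    x = layers.Concatenate()([x, prev_in])  # previous prediction enters FC layer 1
    x = layers.Dense(64)(x)
    x = layers.LeakyReLU()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model([signal_in, prev_in], out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```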
5.2 Datasets
Data with missing values were excluded, so the datasets covered 9 participants in Experiment 1 and 52 in Experiment 2, and the data were divided into 10-s intervals to predict motion sickness levels. EDA, pulse rate, and LF/HF were used as objective indicators, each normalized per participant to −1 to 1 with MinMaxScaler. Because there were few cases of 3 (High sickness), the labels were binarized: level 1 (No sickness) was mapped to class 0 (Nothing), and levels 2 (Slight sickness) and 3 (High sickness) were mapped to class 1 (Sickness). The Experiment 1 data thus comprised 85 cases of class 0 and 32 cases of class 1, and the Experiment 2 data comprised 456 cases of class 0 and 220 cases of class 1. A sketch of this dataset construction is given below.
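The following is a minimal sketch of this construction, assuming the 10 Hz indicator streams are cut into 100-sample windows aligned with the 10-s ratings; the array layout and function name are hypothetical.

```python
# Sketch of the dataset construction: 10-s windows of (EDA, pulse rate, LF/HF)
# with ratings binarized (1 -> class 0 "Nothing"; 2 and 3 -> class 1 "Sickness").
# The 100-samples-per-window layout follows the 10 Hz rate; it is an assumption.
import numpy as np

def make_windows(signals, ratings, fs=10, win_s=10):
    """signals: (n_samples, 3) array; ratings: one 1-3 rating per 10-s interval."""
    n = fs * win_s
    X = np.stack([signals[i * n:(i + 1) * n] for i in range(len(ratings))])
    y = (np.asarray(ratings) >= 2).astype(int)
    return X, y  # X: (n_windows, 100, 3), y: (n_windows,)
```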
5.3 Real Vehicle Experiment (Experiment 1)
For accuracy verification, we used the average of the results of four-fold cross-validation with folds drawn randomly across participants and time intervals. A summary of the confusion matrices of the four sessions is shown in Table II. The mean accuracy over the four sessions was 0.833, with an F1-score of 0.883 for "Nothing" and 0.684 for "Sickness".
Table II. Confusion matrix of Experiment 1 (rows: true class; columns: predicted class).

                            Predicted
Session   True class        0: Nothing   1: Sickness
CV 1      0: Nothing        17           0
          1: Sickness       4            11
CV 2      0: Nothing        20           5
          1: Sickness       2            5
CV 3      0: Nothing        25           1
          1: Sickness       1            4
CV 4      0: Nothing        19           3
          1: Sickness       5            4
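The reported scores can be reproduced from Table II by averaging the per-session accuracy and per-session F1 over the four folds; the short check below is arithmetic only, not code from the study.

```python
# Arithmetic check of Table II: mean accuracy and per-class F1 averaged
# over the four cross-validation sessions.
import numpy as np

# Rows: true class (0 Nothing, 1 Sickness); columns: predicted class.
sessions = [np.array([[17, 0], [4, 11]]),
            np.array([[20, 5], [2, 5]]),
            np.array([[25, 1], [1, 4]]),
            np.array([[19, 3], [5, 4]])]

acc = np.mean([cm.trace() / cm.sum() for cm in sessions])
f1_nothing = np.mean([2 * cm[0, 0] / (2 * cm[0, 0] + cm[0, 1] + cm[1, 0])
                      for cm in sessions])
f1_sickness = np.mean([2 * cm[1, 1] / (2 * cm[1, 1] + cm[1, 0] + cm[0, 1])
                       for cm in sessions])
print(round(acc, 3), round(f1_nothing, 3), round(f1_sickness, 3))
# -> 0.833 0.883 0.684, matching the reported values
```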
5.4 Stereoscopic Visual Simulation
For accuracy verification, we again used the average of the results of four-fold cross-validation with folds drawn randomly across participants and time intervals. A summary of the confusion matrices of the four sessions is shown in Table III. The mean accuracy over the four sessions was 0.844, with an F1-score of 0.885 for "Nothing" and 0.745 for "Sickness".
Table III. Confusion matrix of Experiment 2 (rows: true class; columns: predicted class).

                            Predicted
Session   True class        0: Nothing   1: Sickness
CV 1      0: Nothing        103          18
          1: Sickness       8            43
CV 2      0: Nothing        93           17
          1: Sickness       10           51
CV 3      0: Nothing        104          9
          1: Sickness       10           48
CV 4      0: Nothing        111          5
          1: Sickness       30           25
5.5 Stereoscopic Visual Simulation and Real Vehicle Experiments (Experiments 1 and 2)
To validate the effect of expanding the training data with Experiment 2, four-fold cross-validation was again performed on the Experiment 1 data, with the data from the stereoscopic video experiment added to the training folds (Figure 8). A summary of the confusion matrices for the four sessions is shown in Table IV. The mean accuracy over the four sessions was 0.866, with an F1-score of 0.896 for "Nothing" and 0.754 for "Sickness". A sketch of this training-set extension follows Table IV.
Figure 8. Image of the stereoscopic visual simulation's extended learning datasets.
Table IV. Confusion matrix for the stereoscopic visual simulation extension (rows: true class; columns: predicted class).

                            Predicted
Session   True class        0: Nothing   1: Sickness
CV 1      0: Nothing        13           4
          1: Sickness       1            14
CV 2      0: Nothing        20           5
          1: Sickness       1            6
CV 3      0: Nothing        24           2
          1: Sickness       2            3
CV 4      0: Nothing        20           2
          1: Sickness       0            9
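The sketch below illustrates the extension scheme under the assumption, consistent with Figure 8, that each fold's training split consists of the remaining Experiment 1 folds plus all Experiment 2 data, while the test folds remain Experiment 1 only; the function and variable names are hypothetical.

```python
# Sketch of the training-data extension: Experiment 2 windows are appended to
# every training split, while test folds stay Experiment 1 only (assumption
# consistent with Figure 8).
import numpy as np
from sklearn.model_selection import KFold

def extended_cv_splits(X1, y1, X2, y2, n_splits=4, seed=0):
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(X1):
        X_train = np.concatenate([X1[train_idx], X2])
        y_train = np.concatenate([y1[train_idx], y2])
        yield (X_train, y_train), (X1[test_idx], y1[test_idx])
```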
6. Discussion
6.1 Subjective Indicators
User experience is known to change with viewing time in VR [22]. Thus, the effects of time and somatosensory information were analyzed by comparing Experiments 1 and 2. The real vehicle experiment showed a tendency for motion sickness to increase, peaking at the 110-s interval; the stereoscopic visual experiment showed an increasing tendency with peaks at the 100- and 130-s intervals. While both experiments showed an increasing trend, the differing peak positions are thought to reflect differing degrees of sensory conflict between vehicle sickness and simulator sickness, in particular the presence or absence of somatosensory information.
6.2 Objective Indicators
Compared with the real vehicle experiment, each indicator in the stereoscopic visual experiment varied only slightly. In the ANOVA results for the real vehicle experiment, large effect sizes were seen for EDA and pulse rate, whereas the effect size for LF/HF was small. In the ANOVA results for the stereoscopic visual experiment, only small changes were seen for all indicators. This suggests that the objective indicators responded more strongly to the real vehicle than to the stereoscopic visuals.
Comparing the peak positions of the objective and subjective indicators, the peaks did not coincide. This suggests that relating responses in specific objective indicators to the subjective indicators is difficult.
6.3 Predicting Motion Sickness
The cross-validation accuracy was 0.833 for the real vehicle experiment and 0.844 for the stereoscopic visual experiment; when the training data were extended with the stereoscopic visual data, the accuracy rose to 0.866. In a previous study predicting motion sickness levels with a self-organizing neural fuzzy inference network (SONFIN) using EEG signals, an overall accuracy of about 82% was achieved [23]. When predicting motion sickness levels with a three-dimensional convolutional neural network (3DCNN) using 3D image information, the correlation between simulator sickness questionnaire (SSQ) [24] scores and the prediction score was 0.845 [25]. When predicting VR sickness levels with a deep long short-term memory (LSTM) model using postural instability signals, the correlation between SSQ scores and the prediction score was 0.89 [26]. These results suggest that motion sickness can be predicted with a consistent level of accuracy within the respective settings, and thus that the objective indicators obtained in this study can contribute to predicting motion sickness to some extent. Additionally, adding the stereoscopic visual simulation data to the real vehicle training data improved the prediction accuracy for the real vehicle experiment, indicating that even data obtained under different environments capture common responses within the scope of sensory conflict.
7. Summary
In this study, we attempted to predict motion sickness in automated driving, extending the training data through a stereoscopic visual simulation. We conducted a real vehicle experiment and a stereoscopic visual simulation and predicted the presence or absence of motion sickness using deep learning. We confirmed that motion sickness in a real vehicle could be predicted with a consistent level of accuracy. Moreover, extending the learning data with the stereoscopic visual simulation was suggested to improve the prediction accuracy for actual vehicles. This indicates that, in environments where experiments with real objects are difficult, data acquired through stereoscopic image simulation can be utilized for deep learning and other data applications. However, as different tendencies were observed between the objective and subjective indicators, care is needed when handling such data.
Acknowledgment
The authors would like to thank Mahiro Ito, Yusuke Ohira, Shintaro Kakuda, Haruka Kato, Taisei Tsukahara, Soichiro Sawai, and Yuki Takagi for support with the experiments. This work was supported by AISIN CORPORATION.
Appendix
 
Table A1. Post-hoc test results of subjective indicators, real vehicle experiment (Experiment 1). Values above the diagonal are p-values; values below the diagonal are t-values. T1–T13 denote the 10-s intervals (T1 = 0–10 s, T2 = 10–20 s, ..., T13 = 120–130 s).

      T1     T2     T3     T4     T5     T6     T7     T8     T9     T10    T11    T12    T13
T1    –      0.347  0.347  0.081  0.347  0.081  0.081  0.081  0.035  0.035  0.051  0.081  0.169
T2    1.000  –      1.000  0.169  1.000  0.169  0.169  0.169  0.081  0.081  0.104  0.169  0.347
T3    1.000  0.000  –      0.169  1.000  0.169  0.169  0.169  0.081  0.081  0.104  0.169  0.347
T4    2.000  1.512  1.512  –      0.169  1.000  1.000  1.000  0.347  0.347  0.169  1.000  0.347
T5    1.000  0.000  0.000  1.512  –      0.169  0.169  0.169  0.081  0.081  0.104  0.169  0.347
T6    2.000  1.512  1.512  0.000  1.512  –      1.000  1.000  0.594  0.347  0.169  1.000  0.347
T7    2.000  1.512  1.512  0.000  1.512  0.000  –      1.000  0.347  0.594  0.347  1.000  0.347
T8    2.000  1.512  1.512  0.000  1.512  0.000  0.000  –      0.347  0.594  0.347  1.000  0.347
T9    2.530  2.000  2.000  1.000  2.000  0.555  1.000  1.000  –      1.000  0.594  0.347  0.169
T10   2.530  2.000  2.000  1.000  2.000  1.000  0.555  0.555  0.000  –      0.347  0.347  0.169
T11   2.294  1.835  1.835  1.512  1.835  1.512  1.000  1.000  0.555  1.000  –      0.169  0.081
T12   2.000  1.512  1.512  0.000  1.512  0.000  0.000  0.000  1.000  1.000  1.512  –      0.347
T13   1.512  1.000  1.000  1.000  1.000  1.000  1.000  1.000  1.512  1.512  2.000  1.000  –
Table A2. Post-hoc test results of subjective indicators, stereoscopic visual experiment (Experiment 2). Same layout as Table A1.

      T1     T2     T3     T4     T5     T6     T7     T8     T9     T10    T11    T12    T13
T1    –      0.485  0.569  0.057  0.051  0.019  0.001  0.001  0.000  0.000  0.001  0.002  0.000
T2    0.704  –      0.044  0.004  0.002  0.002  0.000  0.000  0.000  0.000  0.000  0.000  0.000
T3    0.574  2.062  –      0.103  0.058  0.051  0.000  0.000  0.000  0.000  0.001  0.002  0.000
T4    1.948  3.045  1.660  –      0.569  0.371  0.006  0.006  0.000  0.000  0.028  0.019  0.000
T5    1.996  3.267  1.939  0.574  –      0.569  0.006  0.011  0.002  0.000  0.044  0.018  0.000
T6    2.431  3.335  1.996  0.903  0.574  –      0.019  0.070  0.017  0.000  0.159  0.255  0.000
T7    3.477  5.419  4.173  2.844  2.850  2.414  –      0.742  0.532  0.031  0.622  0.444  0.059
T8    3.451  5.196  3.965  2.850  2.635  1.849  0.331  –      0.261  0.019  0.785  0.532  0.019
T9    4.428  5.878  4.592  3.756  3.267  2.470  0.629  1.137  –      0.083  0.322  0.168  0.135
T10   5.405  5.838  5.250  4.428  4.229  4.382  2.217  2.431  1.767  –      0.017  0.015  1.000
T11   3.438  4.592  3.472  2.268  2.062  1.428  0.496  0.275  1.000  2.470  –      0.709  0.006
T12   3.247  4.761  3.267  2.414  2.442  1.151  0.772  0.629  1.400  2.521  0.375  –      0.002
T13   5.683  6.908  5.915  4.696  4.804  4.081  1.935  2.431  1.519  0.000  2.850  3.335  –
Table A3. Post-hoc test results of EDA, real vehicle experiment (Experiment 1). Same layout as Table A1.

      T1     T2     T3     T4     T5     T6     T7     T8     T9     T10    T11    T12    T13
T1    –      0.000  0.002  0.006  0.048  0.071  0.006  0.017  0.028  0.044  0.029  0.025  0.009
T2    5.829  –      0.155  0.042  0.046  0.044  0.609  0.278  0.201  0.126  0.133  0.371  0.514
T3    4.431  1.569  –      0.027  0.049  0.055  0.753  0.591  0.447  0.267  0.301  0.727  0.975
T4    3.662  2.414  2.710  –      0.109  0.109  0.146  0.634  0.919  0.684  0.764  0.640  0.381
T5    2.336  2.364  2.314  1.806  –      0.368  0.038  0.172  0.330  0.673  0.539  0.219  0.122
T6    2.078  2.393  2.241  1.803  0.954  –      0.027  0.071  0.179  0.370  0.241  0.091  0.069
T7    3.700  0.533  0.325  1.609  2.481  2.692  –      0.057  0.037  0.029  0.086  0.405  0.682
T8    2.993  1.164  0.559  0.495  1.499  2.086  2.227  –      0.376  0.088  0.262  0.860  0.404
T9    2.682  1.392  0.800  0.105  1.037  1.474  2.498  0.938  –      0.057  0.398  0.437  0.082
T10   2.391  1.708  1.193  0.422  0.437  0.951  2.663  1.946  2.221  –      0.666  0.079  0.005
T11   2.646  1.672  1.105  0.310  0.642  1.266  1.960  1.208  0.893  0.448  –      0.108  0.025
T12   2.759  0.948  0.362  0.486  1.335  1.922  0.878  0.182  0.819  2.012  1.811  –      0.457
T13   3.425  0.683  0.032  0.926  1.727  2.097  0.425  0.881  1.991  3.809  2.764  0.782  –
Table A4. Post-hoc test results of EDA, stereoscopic visual experiment (Experiment 2). Same layout as Table A1.

      T1     T2     T3     T4     T5     T6     T7     T8     T9     T10    T11    T12    T13
T1    –      0.000  0.000  0.000  0.005  0.006  0.007  0.007  0.011  0.006  0.000  0.002  0.003
T2    6.517  –      0.135  0.222  0.068  0.142  0.358  0.400  0.423  0.589  0.658  0.978  0.875
T3    4.792  1.519  –      0.668  0.154  0.313  0.697  0.735  0.739  0.961  0.310  0.636  0.538
T4    4.250  1.238  0.432  –      0.039  0.312  0.838  0.859  0.844  0.895  0.183  0.482  0.403
T5    2.972  1.867  1.447  2.123  –      0.705  0.347  0.349  0.443  0.305  0.025  0.136  0.122
T6    2.849  1.495  1.019  1.021  0.380  –      0.303  0.420  0.519  0.348  0.022  0.148  0.141
T7    2.834  0.928  0.392  0.205  0.951  1.041  –      0.974  0.971  0.639  0.040  0.263  0.246
T8    2.811  0.850  0.341  0.178  0.947  0.813  0.033  –      0.907  0.533  0.026  0.221  0.213
T9    2.637  0.809  0.335  0.197  0.774  0.649  0.037  0.118  –      0.338  0.010  0.166  0.176
T10   2.861  0.545  0.050  0.133  1.036  0.948  0.472  0.628  0.967  –      0.015  0.268  0.237
T11   3.808  0.446  1.027  1.353  2.316  2.375  2.111  2.293  2.688  2.528  –      0.028  0.592
T12   3.252  0.028  0.477  0.708  1.517  1.471  1.133  1.240  1.407  1.120  2.270  –      0.545
T13   3.198  0.159  0.620  0.844  1.576  1.498  1.175  1.263  1.373  1.197  0.539  0.609  –
Table A5. Post-hoc test results of pulse rate, real vehicle experiment (Experiment 1). Same layout as Table A1.

      T1     T2     T3     T4     T5     T6     T7     T8     T9     T10    T11    T12    T13
T1    –      0.033  0.006  0.085  0.542  0.672  0.560  0.479  0.726  0.709  0.588  0.291  0.322
T2    2.569  –      0.374  0.379  0.134  0.046  0.013  0.151  0.126  0.146  0.041  0.154  0.152
T3    3.665  0.941  –      0.038  0.003  0.001  0.000  0.003  0.009  0.021  0.006  0.006  0.003
T4    1.966  0.932  2.488  –      0.047  0.001  0.155  0.308  0.272  0.275  0.041  0.778  0.534
T5    0.636  1.668  4.201  2.347  –      0.005  0.819  0.600  0.792  0.830  0.105  0.466  0.360
T6    0.439  2.366  4.755  4.790  3.843  –      0.263  0.054  0.257  0.217  0.753  0.095  0.045
T7    0.608  3.208  5.848  1.568  0.237  1.205  –      0.642  0.998  0.986  0.299  0.312  0.322
T8    0.743  1.588  4.117  1.088  0.547  2.255  0.483  –      0.410  0.571  0.070  0.532  0.581
T9    0.363  1.711  3.404  1.181  0.273  1.221  0.003  0.870  –      0.949  0.092  0.387  0.352
T10   0.387  1.609  2.865  1.172  0.222  1.340  0.018  0.590  0.066  –      0.013  0.439  0.447
T11   0.564  2.433  3.712  2.435  1.830  0.326  1.111  2.088  1.912  3.179  –      0.104  0.068
T12   1.130  1.575  3.711  0.292  0.765  1.892  1.079  0.654  0.916  0.814  1.836  –      0.852
T13   1.055  1.584  4.173  0.650  0.971  2.381  1.055  0.575  0.989  0.800  2.111  0.193  –
Table A6. Post-hoc test results of pulse rate, stereoscopic visual experiment (Experiment 2). Same layout as Table A1.

      T1     T2     T3     T4     T5     T6     T7     T8     T9     T10    T11    T12    T13
T1    –      0.965  0.000  0.002  0.052  0.368  0.273  0.026  0.212  0.216  0.223  0.209  0.821
T2    0.045  –      0.000  0.000  0.021  0.256  0.200  0.012  0.141  0.185  0.176  0.166  0.827
T3    4.671  6.856  –      0.331  0.093  0.000  0.002  0.041  0.005  0.010  0.008  0.015  0.001
T4    3.356  4.729  0.982  –      0.143  0.000  0.005  0.159  0.016  0.019  0.017  0.047  0.002
T5    1.991  2.381  1.712  1.489  –      0.002  0.103  0.855  0.228  0.207  0.251  0.351  0.049
T6    0.909  1.149  3.947  4.284  3.190  –      0.633  0.067  0.472  0.506  0.582  0.586  0.429
T7    1.107  1.298  3.325  2.965  1.661  0.480  –      0.078  0.711  0.760  0.804  0.758  0.255
T8    2.302  2.621  2.097  1.430  0.184  1.873  1.798  –      0.100  0.195  0.206  0.329  0.013
T9    1.264  1.496  2.939  2.496  1.221  0.724  0.373  1.675  –      0.904  0.908  0.985  0.088
T10   1.254  1.344  2.663  2.430  1.279  0.670  0.307  1.313  0.122  –      0.982  0.930  0.113
T11   1.235  1.373  2.742  2.459  1.161  0.555  0.249  1.282  0.116  0.023  –      0.877  0.079
T12   1.271  1.404  2.529  2.038  0.941  0.548  0.310  0.986  0.019  0.089  0.156  –      0.025
T13   0.228  0.219  3.662  3.306  2.015  0.797  1.152  2.563  1.739  1.615  1.791  2.304  –
References
1. SAE International, "SAE Levels of Driving Automation™ Refined for Clarity and International Audience," 3 May 2021.
2. P. Koopman and M. Wagner, "Autonomous vehicle safety: an interdisciplinary challenge," IEEE Intell. Transp. Syst. Mag. 9, 90–96 (2017). 10.1109/MITS.2016.2583491
3. A. Rolnick and R. E. Lubow, "Why is the driver rarely motion sick? The role of controllability in motion sickness," Ergonomics 34, 867–879 (1991). 10.1080/00140139108964831
4. C. Diels and J. E. Bos, "Self-driving carsickness," Appl. Ergon. 53, 374–382 (2016). 10.1016/j.apergo.2015.09.009
5. C. Jicol, C. H. Wan, B. Doling, C. H. Illingworth, J. Yoon, C. Headey, C. Lutteroth, M. J. Proulx, K. Petrini, and E. O'Neill, "Effects of emotion and agency on presence in virtual reality," Proc. Conf. on Human Factors in Computing Systems (ACM, New York, NY, 2021), pp. 1–13. 10.1145/3411764.3445588
6. S. Weech, S. Kenny, and M. Barnett-Cowan, "Presence and cybersickness in virtual reality are negatively related: A review," Frontiers in Psychology 10, 158 (2019). 10.3389/fpsyg.2019.00158
7. M. E. McCauley and T. J. Sharkey, "Cybersickness: Perception of self-motion in virtual environments," Presence: Teleoperators and Virtual Environments 1, 311–318 (1992). 10.1162/pres.1992.1.3.311
8. J. J. LaViola Jr., "A discussion of cybersickness in virtual environments," ACM SIGCHI Bull. 32, 47–56 (2000). 10.1145/333329.333344
9. J. T. Reason and J. J. Brand, Motion Sickness (Academic Press, New York, 1975).
10. A. K. T. Ng, L. K. Y. Chan, and H. Y. K. Lau, "A study of cybersickness and sensory conflict theory using a motion-coupled virtual reality system," Displays 61, 101922 (2020). 10.1016/j.displa.2019.08.004
11. S. Ishak, A. Bubka, and F. Bonato, "Visual occlusion decreases motion sickness in a flight simulator," Perception 47, 521–530 (2018). 10.1177/0301006618761336
12. M. McGill, A. Ng, and S. Brewster, "I am the passenger: How visual motion cues can influence sickness for in-car VR," Proc. Conf. on Human Factors in Computing Systems (ACM, New York, NY, 2017), pp. 5655–5668. 10.1145/3025453.3026046
13. Y. Y. Kim, H. J. Kim, E. N. Kim, H. D. Ko, and H. T. Kim, "Characteristic changes in the physiological components of cybersickness," Psychophysiology 42, 616–625 (2005). 10.1111/j.1469-8986.2005.00349.x
14. B. C. Min, S. C. Chung, Y. K. Min, and K. Sakamoto, "Psychophysiological evaluation of simulator sickness evoked by a graphic simulator," Appl. Ergon. 35, 549–556 (2004). 10.1016/J.APERGO.2004.06.002
15. R. Liu, E. Peli, and A. D. Hwang, "Measuring visually induced motion sickness using wearable devices," Proc. IS&T Electronic Imaging: Human Vision and Electronic Imaging (IS&T, Springfield, VA, 2017), pp. 218–223. 10.2352/ISSN.2470-1173.2017.14.HVEI-147
16. X. Li, C. Zhu, C. Xu, J. Zhu, Y. Li, and S. Wu, "VR motion sickness recognition by using EEG rhythm energy ratio based on wavelet packet transform," Comput. Methods Programs Biomed. 188, 105266 (2020). 10.1016/j.cmpb.2019.105266
17. Y. Li, A. Liu, and L. Ding, "Machine learning assessment of visually induced motion sickness levels based on multiple biosignals," Biomed. Signal Process. Control 49, 202–211 (2019). 10.1016/j.bspc.2018.12.007
18. M. Recenti, C. Ricciardi, R. Aubonnet, I. Picone, D. Jacob, H. Á. R. Svansson, S. Agnarsdóttir, G. H. Karlsson, V. Baeringsdóttir, H. Petersen, and P. Gargiulo, "Toward predicting motion sickness using virtual reality and a moving platform assessing brain, muscles, and heart signals," Frontiers in Bioengineering and Biotechnology 9, 132 (2021). 10.3389/fbioe.2021.635661
19. P. Vepakomma, D. De, S. K. Das, and S. Bhansali, "A-Wristocracy: Deep learning on wrist-worn sensing for recognition of user complex activities," 2015 IEEE 12th Int'l. Conf. on Wearable and Implantable Body Sensor Networks (BSN) (IEEE, Piscataway, NJ, 2015). 10.1109/BSN.2015.7299406
20. C. A. Ronao and S. B. Cho, "Human activity recognition with smartphone sensors using deep learning neural networks," Expert Syst. Appl. 59, 235–244 (2016). 10.1016/j.eswa.2016.04.032
21. Y. Banchi, K. Tsuchiya, M. Hirose, R. Takahashi, R. Yamashita, and T. Kawai, "Evaluation and estimation of discomfort during continuous work with mixed reality systems by deep learning," Proc. IS&T Electronic Imaging: Stereoscopic Displays and Applications (IS&T, Springfield, VA, 2022), pp. 309-1–309-4. 10.2352/EI.2022.34.2.SDA-309
22. J. Häkkinen, F. Ohta, and T. Kawai, "Time course of sickness symptoms with HMD viewing of 360-degree videos," J. Imaging Sci. Technol. 62, 060403-1–060403-11 (2018). 10.2352/J.ImagingSci.Technol.2018.62.6.060403
23. C.-T. Lin, S.-F. Tsai, and L.-W. Ko, "EEG-based learning system for online motion sickness level estimation in a dynamic vehicle environment," IEEE Trans. Neural Netw. Learning Systems 24, 1689–1700 (2013). 10.1109/TNNLS.2013.2275003
24. R. S. Kennedy, N. E. Lane, K. S. Berbaum, and M. G. Lilienthal, "Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness," Int. J. Aviation Psychology 3, 203–220 (1993). 10.1207/s15327108ijap0303_3
25. T. M. Lee, J.-C. Yoon, and I.-K. Lee, "Motion sickness prediction in stereoscopic videos using 3D convolutional neural networks," IEEE Trans. Vis. Comput. Graphics 25, 1919–1927 (2019). 10.1109/TVCG.2019.2899186
26. Y. Wang, J.-R. Chardonnet, and F. Merienne, "VR sickness prediction for navigation in immersive virtual environments using a deep long short term memory model," 2019 IEEE Conf. on Virtual Reality and 3D User Interfaces (VR) (IEEE, Piscataway, NJ, 2019), pp. 1874–1881. 10.1109/VR.2019.8798213