Crowds of non-combatant civilians play an important role in modern military operations and often create complications for the combatant forces involved. To address this problem, we are developing a crowd simulation capable of generating crowds of non-combatant civilians that exhibit a variety of individual and group behaviors at different levels of fidelity. Commercial game technology is used to create an experimental setup that models an urban megacity environment and the physical behaviors of the human characters that make up the crowd. The main objective of this work is to verify the feasibility of designing a collaborative virtual environment (CVE) and its usability for training security agents to respond to emergency situations such as active shooter events, bomb blasts, fire, and smoke. We present a hybrid (human-artificial) platform in which disaster-response experiments can be performed in the CVE by combining AI agents and user-controlled agents. AI agents are computer-controlled agents whose behaviors include hostile, non-hostile, leader-following, goal-following, selfish, and fuzzy agents. User-controlled agents assume specific situational roles such as police officer, medic, firefighter, and SWAT official. The novelty of our work lies in modeling behaviors for the AI (computer-controlled) agents so that they can interact with user-controlled agents in an immersive training environment for emergency response and decision making. The hybrid platform aids in creating an experimental setup to study human behavior in a megacity for emergency response, decision-making strategies, and what-if scenarios.
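As a rough illustration of the agent taxonomy described above, the sketch below groups the computer-controlled behaviors and user-controlled roles into simple types; the class, behavior, and method names are assumptions made for exposition, not the authors' implementation or the game engine's API.

    # Hypothetical sketch of the hybrid agent taxonomy; names are illustrative.
    from dataclasses import dataclass
    from enum import Enum, auto

    class AIBehavior(Enum):          # computer-controlled (AI) agent behaviors
        HOSTILE = auto()
        NON_HOSTILE = auto()
        LEADER_FOLLOWING = auto()
        GOAL_FOLLOWING = auto()
        SELFISH = auto()
        FUZZY = auto()

    class UserRole(Enum):            # roles taken by human trainees in the CVE
        POLICE_OFFICER = auto()
        MEDIC = auto()
        FIREFIGHTER = auto()
        SWAT_OFFICIAL = auto()

    @dataclass
    class AIAgent:
        behavior: AIBehavior
        position: tuple              # (x, y) location in the megacity model

        def update(self, emergency_active: bool) -> str:
            # AI agents switch crowd behavior once an emergency
            # (e.g., active shooter, bomb blast) is triggered.
            if not emergency_active:
                return "wander"
            if self.behavior is AIBehavior.HOSTILE:
                return "attack"
            if self.behavior is AIBehavior.LEADER_FOLLOWING:
                return "follow_leader"
            if self.behavior is AIBehavior.GOAL_FOLLOWING:
                return "move_to_goal"
            if self.behavior is AIBehavior.SELFISH:
                return "flee_nearest_exit"
            return "panic"           # non-hostile / fuzzy fallback in this sketch

    @dataclass
    class UserAgent:
        role: UserRole               # controlled interactively by a human user

In an experiment, the platform would instantiate many AIAgent objects alongside a few UserAgent objects, which is what makes the setup hybrid (human-artificial).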
Many people cannot see depth in stereoscopic displays. These individuals are often highly motivated to recover stereoscopic depth perception, but because binocular vision is complex, the loss of stereo has different causes in different people, so treatment cannot be uniform. We have created a virtual reality (VR) system for assessing and treating anomalies in binocular vision. The system is based on a systematic analysis of subsystems upon which stereoscopic vision depends: the ability to converge properly, appropriate regulation of suppression, extraction of disparity, use of disparity for depth perception and for vergence control, and combination of stereoscopic depth with other depth cues. Deficiency in any of these subsystems can cause stereoblindness or limit performance on tasks that require stereoscopic vision. Our system uses VR games to improve the function of specific, targeted subsystems.
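To make the subsystem decomposition concrete, the sketch below enumerates the subsystems named above and maps each to a targeted VR exercise; the subsystem names follow the text, but the game names, scoring scale, and threshold are hypothetical assumptions rather than the actual system's design.

    # Illustrative sketch of the subsystem-based assessment and treatment plan.
    from enum import Enum, auto

    class BinocularSubsystem(Enum):
        VERGENCE = auto()                # ability to converge properly
        SUPPRESSION_REGULATION = auto()  # appropriate regulation of suppression
        DISPARITY_EXTRACTION = auto()    # extraction of disparity
        DISPARITY_FOR_DEPTH = auto()     # use of disparity for depth perception
        DISPARITY_FOR_VERGENCE = auto()  # use of disparity for vergence control
        CUE_COMBINATION = auto()         # combining stereo depth with other cues

    # Hypothetical mapping from a deficient subsystem to a targeted VR game.
    TARGETED_GAMES = {
        BinocularSubsystem.VERGENCE: "alignment_game",
        BinocularSubsystem.SUPPRESSION_REGULATION: "dichoptic_contrast_game",
        BinocularSubsystem.DISPARITY_EXTRACTION: "random_dot_search_game",
        BinocularSubsystem.DISPARITY_FOR_DEPTH: "depth_ordering_game",
        BinocularSubsystem.DISPARITY_FOR_VERGENCE: "vergence_tracking_game",
        BinocularSubsystem.CUE_COMBINATION: "cue_combination_game",
    }

    def plan_treatment(assessment_scores: dict) -> list:
        """Select games for subsystems scoring below an (arbitrary) threshold."""
        deficient = [s for s, score in assessment_scores.items() if score < 0.5]
        return [TARGETED_GAMES[s] for s in deficient]

Because a deficiency in any single subsystem can limit or abolish stereopsis, assessing each subsystem separately is what allows treatment to be tailored to the individual rather than applied uniformly.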