A collection of articles on remote research in cognition and perception using the Internet for the Journal of Perceptual Imaging is presented. Four original articles cover the topics of exact versus conceptual replication of cognitive effects (e.g., mental accounting), effects of facial cues on the perception of avatars, cultural influences on perceptual image and video quality assessment, and how Internet habits influence social cognition and social cognitive research. The essentials of these articles are summarized here, and their contributions are embedded within a wider view and historical perspective on remote research in cognition and perception using the Internet.
The motivation for the use of biosensors in audiovisual media is made by highlighting the problem of signal loss due to wide variability in playback devices. A metadata system is proposed that allows creatives to steer signal modifications as a function of audience emotion and cognition, as determined by biosensor analysis.
A primary goal of the auto industry is to revolutionize transportation with autonomous vehicles. Given the mammoth nature of such a target, success depends on a clearly defined balance between technological advances, machine learning algorithms, physical and network infrastructure, safety, standards and regulations, and end-user education. Unfortunately, technological advancement is outpacing the regulatory space, and competition is driving deployment. Moreover, hope is being built around algorithms that are far from reaching human-like capacities on the road. Since human behaviors, idiosyncrasies, and natural phenomena are not going anywhere anytime soon, and so-called edge cases are the roadway norm, the industry stands at a historic crossroads. Why? Because human factors such as cognitive and behavioral insights into how we think, feel, act, plan, make decisions, and problem-solve have been ignored. Human cognitive intelligence is foundational to driving the industry’s ambition forward. In this paper I discuss the role of the human in bridging the gaps between autonomous vehicle technology, design, implementation, and beyond.
The race to commercialize self-driving vehicles is in high gear. As carmakers and tech companies focus on creating cameras and sensors with more nuanced capabilities to achieve maximal effectiveness, efficiency, and safety, an interesting paradox has arisen: the human factor has been dismissed. If fleets of autonomous vehicles are to enter our roadways, they must overcome the challenges of scene perception and cognition and be able to understand and interact with us humans. This entails a capacity to deal with the spontaneous, rule-breaking, emotional, and improvisatory characteristics of our behaviors. Essentially, machine intelligence must integrate content identification with context understanding. Bridging the gap between engineering and cognitive science, I argue for the importance of translating insights from human perception and cognition to autonomous vehicle perception R&D.