At the Multisensory Brain and Cognition Lab, we recognize the transformative potential of artificial intelligence (AI) in shaping our perception of the world. As AI technologies continue to evolve and become an integral part of our lives, it is crucial to understand how humans perceive and interact with AI-generated worlds. Our lab combines cutting-edge research in script prompt engineering, Generative Pre-trained Transformers (GPT), and AI-generated artwork to explore this fascinating intersection of human perception and AI.
By harnessing the power of AI to generate virtual environments, our research aims to investigate how humans perceive, understand, and navigate these AI-created worlds, informing the development of more immersive, engaging, and accessible experiences.
AI systems are commonly described as operating at three levels:
Assisted AI systems
These systems aid individuals in decision-making and task execution based on predefined procedures. Machines perform actions while humans make decisions (Munoko et al., 2020).
Augmented AI systems
In augmented systems, machines and humans collaborate in decision-making. These systems can interact with their environment and learn from users, demonstrating "analytical intelligence" (Guang-huan, 2017; Munoko et al., 2020).
Autonomous AI systems
Autonomous AI systems can adapt to diverse circumstances and act independently, without human intervention (Kokina & Davenport, 2017). These systems exhibit both "intuitive" and "empathetic" intelligence, enabling them to respond to novel situations and interact effectively with individuals.
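To make these distinctions concrete, here is a minimal Python sketch that models the three levels as a simple data structure. The level names follow the taxonomy above; the class and field names (AutonomyLevel, AISystem, requires_human_decision) are illustrative assumptions, not part of any cited framework.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AutonomyLevel(Enum):
    """The three levels described above (after Munoko et al., 2020)."""
    ASSISTED = auto()    # machine acts, human decides
    AUGMENTED = auto()   # machine and human decide together
    AUTONOMOUS = auto()  # machine decides and acts independently


@dataclass
class AISystem:
    name: str
    level: AutonomyLevel

    def requires_human_decision(self) -> bool:
        """Only autonomous systems can act without a human in the loop."""
        return self.level is not AutonomyLevel.AUTONOMOUS


# Example: a navigation aid that suggests routes but leaves the choice to the user
route_assistant = AISystem("route_assistant", AutonomyLevel.ASSISTED)
print(route_assistant.requires_human_decision())  # True
```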
Key research areas in Human-AI Integration at the Multisensory Brain and Cognition Lab include:
Perception of AI-Generated Environments: Investigating how humans perceive and interpret AI-generated visual, auditory, and tactile stimuli in virtual and augmented reality environments, and how these perceptions differ from those in the physical world.
Script Prompt Engineering and GPT: Exploring the use of advanced language models, such as GPT, to create realistic and engaging narratives, and examining their effects on human cognition, emotion, and decision-making in AI-generated worlds (a brief illustration follows this list).
AI-Generated Artwork and Aesthetics: Studying the impact of AI-generated artwork on human perception and appreciation of art, and understanding how AI can contribute to the creation of novel and captivating aesthetics.
Navigation and Spatial Awareness: Examining the cognitive processes that enable humans to navigate and orient themselves in AI-generated environments, with the goal of improving the design of these virtual spaces to better accommodate human needs and preferences.
Ethical Considerations and Trust: Investigating the ethical implications of human-AI interactions in AI-generated worlds, fostering trust in AI systems, and ensuring that they are designed and deployed in ways that prioritize human values and well-being.
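As an illustration of the prompt-engineering direction above, the sketch below uses the OpenAI Python SDK (v1+) to request a short scene narrative from a GPT model. The model choice, prompt wording, and the generate_scene_narrative helper are assumptions made for illustration; they do not describe the lab's actual pipeline.

```python
# A minimal prompt-engineering sketch, assuming the OpenAI Python SDK (v1+)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A structured "script prompt": a fixed system role plus a templated user request,
# so narrative style stays constant while the scene description varies per trial.
SYSTEM_PROMPT = (
    "You are a narrative engine for a virtual reality experiment. "
    "Write vivid second-person descriptions, 120 words or fewer."
)

def generate_scene_narrative(scene: str, mood: str) -> str:
    """Ask the model for a short narrative describing one virtual scene."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Describe a {mood} {scene} the participant is entering."},
        ],
        temperature=0.8,  # higher values yield more varied narratives across trials
    )
    return response.choices[0].message.content

print(generate_scene_narrative("forest clearing at dusk", "calm"))
```

Holding the system prompt fixed while varying only the scene and mood parameters is one way to keep narrative style constant across experimental conditions while manipulating content.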
At the Multisensory Brain and Cognition Lab, our interdisciplinary team of researchers is dedicated to advancing our understanding of human perception in AI-generated worlds. Our goal is to contribute to the development of AI technologies that seamlessly integrate into our lives, enhancing our abilities, enriching our experiences, and fostering a deeper understanding of the world around us.
We invite you to learn more about our research in Human-AI Integration and join us in our pursuit of knowledge as we explore the frontiers of human perception and artificial intelligence. Welcome to the Multisensory Brain and Cognition Lab, where scientific inquiry and technological innovation come together to shape the future of human-AI interaction.
Dr. Michael Barnett-Cowan | Associate Professor | Department of Kinesiology and Health Sciences | BMH Building 1042 (office), TJB 1001-1003 (lab) | University of Waterloo | 200 University | Waterloo, Ontario N2L 3G1 Canada | mbc@uwaterloo.ca