SeeIntuit (a combination of “See-Into-It” and “See-Intuition”) was commissioned by the Cultural Programs of the National Academy of Sciences for the first annual USA Science and Engineering Festival, held on the National Mall in Washington, DC, in October 2010. The mission of the festival is to “re-invigorate the interest of our nation’s youth in science, technology, engineering and math (STEM) by producing and presenting the most compelling, exciting, educational and entertaining science gatherings in the United States.” Alongside the many universities that supported the festival with presentations, such as MIT, Georgia Tech, and Carnegie Mellon, UMBC was represented by the IRC with its work blending art and science. The festival offered not only a large audience but substantially the audience UMBC and the IRC aim to reach: the IRC’s booth sat between the stage where Nobel Laureates spoke about their research and a display created by Disney to showcase the technology behind its new movie TRON: Legacy.
SeeIntuit reconfigures new technologies to address innovation: the key issue facing science and technology as we rely increasingly upon them to rebuild the US economy. Created by IRC Associate Director Lee Boot, it offered visitors a chance to experience and learn about their own intuitive minds. It is based on fMRI and EEG imaging of the human brain solving puzzles, conducted by Mark Beeman and John Kounios of Northwestern and Drexel universities, respectively. New research indicates not only that the human brain is structured to provide intuitive insight, but that this capability, because it is not a linear process like reason, is what gives the human mind value beyond that of machines. It allows the brain to make connections before “we” do; it allows a brain to be smarter than its owner.
The main interactive component of the SeeIntuit display booth consists of four "peepholes" positioned around the sides of the booth. Participants are encouraged to look through all of the peepholes, where they see four short, looping videos, each consisting of seemingly unrelated imagery. Participants are told that all of the events in the videos were shot on the same day at the same location; the challenge is to imagine a single event at which all four films could have been taken. In short, it is a visual puzzle. Visitors enter their answers via a keyboard attached to the booth or by cell phone, and the answers then appear anonymously on a large monitor for all to see. Using a video game controller, visitors can also interact with a real-time 3D computer visualization of the brain that reveals the progression of neural activation involved in solving the puzzle through intuitive insight.
SeeIntuit was first presented to the UMBC community at the Commons for a few days during the Fall 2010 semester. Viewers approached the booth with curiosity and open-mindedness, and those who chose to participate in the activities gave very positive reviews of the experience. Their reactions, along with the collected data, informed adjustments that improved the experience for the debut in Washington.
At the festival in October, IRC staff members were present to encourage participation and explain the purpose of the project. Over the course of two days, the booth received countless visitors and collected over 400 responses describing what participants saw in the peepholes. Data from visitors’ text-entered guesses, along with production materials and notes, were collected, organized, and placed on a website (www.seeintuit.com, designed and implemented by Online Information Designer Abbey Salvo) to document the project and provide a resource for anyone who wants to experience the booth online. Website visitors can look through the peepholes, explore the booth, go behind the scenes of the project, and review all of the results that participants entered during the events. They can also find the "correct answer" (spoiler alert!).
The visualizations were primarily based on findings from the following papers:
Hasson, U., Nir, Y., Levy, I., Fuhrmann, G., & Malach, R. (2004). Intersubject Synchronization of Cortical Activity During Natural Vision. Science, 303(5664), pp. 1634-1640.
Jaaskelainen, I.P., Koskentalo, K., Balk, M., Autti, T., Kauramaki, J., Pomren, C., & Sams, M. (2008). Inter-Subject Synchronization of Prefrontal Cortex Hemodynamic Activity During Natural Viewing. The Open Neuroimaging Journal, 2, pp. 14-19.
Jung-Beeman, M., Bowden, E. M., Haberman, J., Frymiare, J. L., Arambel-Liu, S., Greenblatt, R., et al. (2004). Neural Activity When People Solve Verbal Problems with Insight. PLoS Biology, 2(4), e97, pp. 500-510.