Dining at the Virtual Buffet

What do you get when you bring together a team of researchers from Psychology, Computer Science, Information Systems, and Visual Media Arts? The Imaging Research Center has long prided itself on being an interdisciplinary hub, believing that the best work comes from a conversation among and across fields. In this case, asking questions about what college students eat has led to an exciting new way to assess food choices. We have built a virtual reality dining hall that allows researchers to study individual food choices while comparing the virtual to the actual world. This project, nicknamed “The Virtual Buffet,” is an exemplar of IRC work in the areas of programming, modeling, and photogrammetry, and has already produced important findings.

UMBC Professor of Psychology Charissa Cheah was awarded the IRC’s 2017 Summer Faculty Research Fellowship for her proposal to use a virtual version of UMBC’s student dining hall (True Grit's) to test food choices and potential interventions. This virtual environment allows researchers to exert complete control over any variables (food position, lighting, portion size), and also allows for replicability anywhere in the world. But before the variables could be manipulated, Cheah needed to confirm that subjects made their food choices in virtual reality (VR) in the same ways that they did in real life (RL). Thus the initial phase of this project sought to answer three questions:

  • What are the physiological correlates, particularly neurological, of food decision-making behaviors within the VR and the RL buffet setting?

  • How consistent are individuals’ food decision-making processes between the VR setting and the RL buffet setting?

  • What are the specific differences in human behavior between the two environments, and how can they be controlled for in future experiments?

The virtual buffet faithfully replicates the layout and offerings at True Grit's

The IRC’s main task was to build the virtual buffet, which would then be used in our existing virtual reality lab (officially the Observation and Measurement-Enabled Mixed Reality Lab, or OMMRL). To model the dining hall, buffet tables, food, and accessories, IRC artists and engineers used a variety of tools common in the video game and entertainment industries. Key among them is photogrammetry: a process that computes 3D models of people, places, or things from multiple photographs taken from different perspectives. The process can be extremely accurate and saves time compared to modeling everything from scratch. First, IRC staff carefully photographed the inside of the dining hall: its walls, floor and ceiling, buffet tables, food trays, and the food itself. Long exposures kept both the foreground and the background in sharp focus despite the dining hall's low light; photogrammetry works best with sharp, clear images. Every object was captured in at least two, preferably three images, so that enough overlap existed for a stable geometric mesh (3D model) to be built. The food itself was scanned (photographed) in the IRC's photogrammetry facility.
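The overlap rule described above can be sketched as a simple coverage check. This is a toy illustration only: the function, object names, and photo names are invented for the example, not part of the IRC's actual pipeline.

```python
# Toy illustration of the photogrammetry overlap rule: flag any scanned
# object that appears in fewer than two photographs. All names are invented.

def insufficient_overlap(coverage, minimum=2):
    """Return the objects photographed from fewer than `minimum` viewpoints."""
    return sorted(obj for obj, photos in coverage.items() if len(photos) < minimum)

coverage = {
    "food_tray": {"img_01", "img_02", "img_03"},
    "drink_dispenser": {"img_04"},  # only one viewpoint: not enough overlap
}
# insufficient_overlap(coverage) -> ["drink_dispenser"]
```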

The images were then fed into a computer application to produce the virtual 3D models. The geometry and the surface colors and patterns (known as textures) that the photographs provided were edited separately, and both required considerable repair and optimization using modeling and animation tools. The surface textures were then "wrapped" around the geometry. Building the virtual buffet required balancing two competing needs: to appear real, the models needed to be as detailed as possible, but the more detailed they were, the more work it took for the computer to display them. More detailed models display and refresh on screen more slowly than simpler ones. Virtual reality must run at a much higher frame rate (in frames per second) than television or movies; if it moves too slowly, the lag can derail a subject's impression of reality and even cause motion sickness. Therefore, in some cases the polygonal faces of the 3D models had to be either subdivided (increased) or decimated (reduced) so that the program ran both smoothly and realistically. The interface that allows subjects to move through the space, collect food, and have it measured was programmed in the Unreal™ game authoring software.
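The detail-versus-speed tradeoff can be made concrete with back-of-the-envelope arithmetic. The sketch below is purely illustrative: the 90 fps target and the per-second triangle throughput are assumed example numbers, not measurements from the OMMRL hardware.

```python
# Illustrative sketch (assumed numbers, not the IRC's actual budget):
# decide whether a model's triangle count fits within one VR frame,
# and how aggressively it would need to be decimated if not.

def fits_frame_budget(triangle_count, fps_target=90, triangles_per_second=20_000_000):
    """True if the model can be drawn within a single frame at fps_target."""
    budget_per_frame = triangles_per_second / fps_target
    return triangle_count <= budget_per_frame

def decimation_ratio(triangle_count, fps_target=90, triangles_per_second=20_000_000):
    """Fraction of triangles to keep so the model fits the frame budget."""
    budget_per_frame = triangles_per_second / fps_target
    return min(1.0, budget_per_frame / triangle_count)
```

Under these assumed numbers, the per-frame budget is about 222,000 triangles, so a million-triangle scan would need to be decimated to roughly a fifth of its original density.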

It only takes a few minutes to become comfortable with the VR controllers

The resulting VR space is an attractive and convincing replica of the original. Subjects first pick up a plate and can then choose from up to seven different portion sizes of foods such as carrots, mashed potatoes, salmon, pizza, and desserts. The drink dispenser fills glasses with water or soda. When subjects have made their selections, they set their plate down by the cash register; the computer reads the volume of each item and calculates the total calorie count for the plate.
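The plate-scoring step amounts to converting each measured volume into calories and summing. The sketch below is a hypothetical reconstruction: the food names and calorie densities are illustrative stand-ins, not the study's actual values.

```python
# Hypothetical sketch of the cash-register scoring step. The calorie
# densities (kcal per milliliter) below are invented example values.

KCAL_PER_ML = {
    "carrots": 0.35,
    "mashed_potatoes": 1.0,
    "salmon": 2.0,
}

def plate_calories(portions_ml):
    """Total calories for a plate, given the measured volume (ml) of each food."""
    return sum(KCAL_PER_ML[food] * ml for food, ml in portions_ml.items())

# e.g. plate_calories({"carrots": 100, "salmon": 150}) -> 335.0
```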

The entire interaction between the subject and the virtual space is tracked and recorded in a variety of media: point-of-view and third-person (objective camera) video, plus motion-sensing devices. UMBC Assistant Professor of Information Systems Jiaqui Gong led a team that outfitted the participants with biometric sensors to capture physiological data such as heart rate, heart rate variability, galvanic skin response, and prefrontal cortex activation measured with functional near-infrared spectroscopy (fNIRS). The sensor data was then synchronized with the video data using time stamps. The true value of all of this data comes from that synchronization, which connects it to the movement of the subjects through their environments.
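One common way to synchronize streams like these is nearest-timestamp alignment: for each video frame time, pick the closest sensor sample. This is a minimal sketch of that general technique, not the team's actual synchronization code; the function name and sample data are invented.

```python
# Minimal sketch of timestamp synchronization: pair each video frame time
# with the nearest (timestamp, value) sensor sample. Illustrative only.
import bisect

def align_to_frames(frame_times, sensor_samples):
    """Pair each frame time with the nearest sensor sample.

    sensor_samples must be a list of (timestamp, value) sorted by timestamp.
    """
    timestamps = [t for t, _ in sensor_samples]
    aligned = []
    for ft in frame_times:
        i = bisect.bisect_left(timestamps, ft)
        # consider the neighbors on either side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
        best = min(candidates, key=lambda j: abs(timestamps[j] - ft))
        aligned.append((ft, sensor_samples[best]))
    return aligned
```

For example, frame times 0.1 s and 1.6 s against heart-rate samples at 0.0, 1.0, and 2.0 s pair with the samples at 0.0 and 2.0 s respectively.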

Going through the real dining hall with the sensors

These three datasets (the volume and calories of food, the video-recorded behaviors, and the physiological sensor data) are then compared with similar datasets collected from subjects in the real True Grit's dining hall. Subjects who used the virtual buffet also went through the actual cafeteria and selected a meal while wearing the same biometric sensors; in addition, their eye movements were tracked. Half the subjects did the virtual version first, and half did the real version first. These datasets were also synchronized and analyzed together to seek insights.

According to a recent paper that has been accepted for publication, preliminary results are very encouraging. Not only did participants select similar plates of food (in terms of calorie content) in both the virtual and real environments, but neurological data from each environment show similar patterns, particularly in prefrontal cortex activity. This indicates that similar levels of anxiety and other emotional factors were present in both experiences. Because emotional factors, which can result from any number of triggers, strongly influence behavior, these similarities are critical to validating virtual reality as a viable environment for both observing and intervening in human behaviors. This early work in the OMMRL is serving two separate but related research programs: one about food choices, the other about the value of computer-generated realities for human behavior research.

Finally, subjects reported in interviews and surveys that they found the VR experience relatively natural rather than alien. Despite all the sensor data, this kind of self-reporting remains important: it helps researchers understand how subjects feel about participating in these kinds of experiments, and what subjects believe they are feeling while they do.

Work continues to refine the buffet and devise new and larger studies. This is an exciting new program for the IRC, and with this early study, a promising one.