House of Desktop Dining

Website and Streaming, Machine Learning
Research sponsored by SNAP and led by Jenny Rodenhouse
Role: data collection, model training, HTML/CSS/JS coding

Dining Room Entrance

The Room 
Visit the dining room located at


The House of Desktop Dining is a digital restaurant you visit and dine in with your computer. Automatically generating a table for two, the restaurant uses selected visual and audio data from top mukbang performers to prepare a set of food pairings personalized to your body's posture and mouth sounds.
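How a posture class and a mouth-sound class might combine into one pairing can be sketched roughly as below. All class names, pairings, and function names here are illustrative assumptions, not the project's actual menu or code; Teachable Machine models do, however, return predictions as `{className, probability}` arrays like those used here.

```javascript
// Illustrative pairing menu keyed by "posture|sound" (hypothetical labels).
const PAIRINGS = {
  "leaning-in|crunch": "fried chicken with pickled radish",
  "leaning-in|slurp": "spicy noodles with cheese",
  "upright|slurp": "seafood ramen with kimchi",
  "upright|crunch": "crispy pork belly with lettuce wraps",
};

// Pick the highest-probability class from a model's prediction output
// ([{className, probability}, ...], as Teachable Machine returns).
function topClass(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best
  ).className;
}

// Combine the two model outputs into a pairing for the diner.
function pairFood(posturePreds, soundPreds) {
  const key = `${topClass(posturePreds)}|${topClass(soundPreds)}`;
  return PAIRINGS[key] ?? "chef's choice";
}
```

For example, an "upright" posture with a "slurp" sound would resolve to the `"upright|slurp"` entry of the hypothetical menu.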

Questions we kept in mind include: How do machines see the physical world? What does that mean? How does machine vision affect human activities and performance?


Tools: Teachable Machine, RunwayML


Audio and Visual data collection

Through observation and discussion, our team found that popular mukbang videos feature typical types of food, and that the performers, the YouTubers, use exaggerated postures, movements, and sounds to highlight the food and the eating process. We therefore collected our data from mukbang videos on YouTube, exporting and separating frames and audio to train models on different typical mukbang foods and their corresponding gestures and sounds.
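Before training, the exported frames and audio clips have to be grouped into labeled classes (one class per food/gesture combination). A minimal sketch of that grouping step, assuming a hypothetical file-naming scheme like `noodles_slurp_0001.png` (food, then action, then frame index):

```javascript
// Derive a class label from a file name like "noodles_slurp_0001.png".
// The naming scheme is an assumption for illustration, not the project's
// actual export format.
function labelOf(fileName) {
  const [food, action] = fileName.split("_");
  return `${food}-${action}`; // e.g. "noodles-slurp"
}

// Group a flat list of exported files into per-class sample lists,
// ready to upload as separate classes in Teachable Machine.
function groupByClass(fileNames) {
  const classes = {};
  for (const name of fileNames) {
    const label = labelOf(name);
    (classes[label] ??= []).push(name);
  }
  return classes;
}
```

Each resulting key then corresponds to one training class, which keeps the frame and sound data for a given food aligned with its gestures.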

Machine Learning and Model training 

From the data we collected and cleaned, we used both Teachable Machine and RunwayML to train and fine-tune our models. Model training represents the typical way current machines, such as computers and smart devices, see humans and physical environments.

Training Process and Evolution of the Machine's Vision of a Burger