
Interactive 3D Visualization with ML

Team Telepathway is a cross-functional team of programmers, designers, and project managers. In collaboration with Google, we explored the visualization of machine learning and data science through an engaging interactive experience.

For more information, please check out our website, slide deck, and paper abstract.
Partner: Google Research
Tools Used: Unity 3D, Maya, C#
Timeline: Jan – May 2022


Project Brief

Telepathway is a blue-sky discovery project with Google at the Entertainment Technology Center (ETC) at Carnegie Mellon University. Telepathway explores the use of 3D visualizations and interactions to represent the high-dimensional features of machine learning models. We use rapid iteration to create a diverse set of prototypes, each focused on a different machine learning model, to push the boundaries of spatial visualization. Ultimately, we hope to clarify these models visually and allow users to exercise more intentional control over their output, grounded in a deeper understanding of the algorithms beneath the surface of machine learning.

My Role

UX Designer, Poster Designer, and primary author of the research paper.
I was responsible for building hypotheses, designing UI for interactive experiences, running user tests, and writing the academic research paper submitted to SIGGRAPH.

Target Users

People in higher education who are intimidated by machine learning and hesitant to explore it.

Global Hypothesis

Providing an engaging experience can raise interest and curiosity, shifting attitudes toward ML and building a cyclical relationship between the pursuit of learning and engagement.

Overall Process

5 Prototypes
7 User Testing Sessions
4 Data Types
3 Platforms
105 Playtesters


Slide Deck
SIGGRAPH Poster
Paper Abstract
Final Video


Project Summary

Design Explorations

Prototype 1 - Isochromatic Deconstruction using K-Means

We visualized RGB datasets with the K-means clustering algorithm, splitting a painting into isochromatic layers. The team treated each pixel's RGB value as an XYZ coordinate, so the image clusters into layers grouped by color proximity. The experience lifts a 2D image into 3D space, making a unique impression. We built the project in Unity 3D and presented it in virtual space on an Oculus Quest 2 head-mounted display with trackers.
Platforms: PC, VR (Oculus Quest 2)
Data Type: RGB data from images
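The clustering step can be sketched as follows. This is a minimal NumPy illustration, not the team's actual Unity/C# implementation; the `kmeans_rgb` helper and the tiny synthetic "image" are illustrative:

```python
import numpy as np

def kmeans_rgb(pixels, k=3, iters=20, seed=0):
    """Cluster pixels (an N x 3 RGB array) into k color layers.
    Each pixel's RGB value is treated as an XYZ coordinate."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling k distinct pixels.
    centroids = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (Euclidean distance in RGB space).
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned pixels.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids

# Split a tiny synthetic "image" into two isochromatic layers.
img = np.array([[250, 10, 10], [240, 5, 15],    # reddish pixels
                [10, 10, 250], [5, 20, 240]])   # bluish pixels
labels, centroids = kmeans_rgb(img, k=2)
```

In the prototype, each resulting label group became one floating layer of the painting, separated in depth so the user could walk between the color strata.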

Factors that made the experience successful

  • Visualizing direct and unambiguous data. In this case, a 2D image of a painting was quite lucid as a pixel data set and created a memorable impression.
  • Allowing customization led to personal investment in image selection, which made the experience more relatable and engaging.
  • The VR variant pulls users further into an immersive experience, adding additional avenues in which users can invest their attention, and offering a more visually striking and memorable experience.

Deconstruction in VR

Prototype 2 - Haptic Music

Algorithm: Fast Fourier Transform
Platforms: PC, Ultrahaptics
Data Type: Music (notes, beats, accents)
We mapped the beats, notes, and accents of a song to independent sensations across the hand, pairing on-screen visual metaphors with matching sound and haptics to maximize the experience of 'feeling' music. This let users haptically feel a clustering algorithm centered on sonic data, and haptically 'visualize' any user-selected song through the Ultrahaptics STRATOS device.
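The analysis step behind this mapping can be sketched as below. This is a hypothetical illustration using NumPy on a synthetic audio frame, not the prototype's actual pipeline; the band boundaries and the `band_energy` helper are assumptions:

```python
import numpy as np

# Use an FFT to split a short audio frame into frequency bands, whose
# energies could then drive visuals and haptic sensations on the hand.
sample_rate = 1000                    # Hz (illustrative; real audio is typically 44100)
t = np.arange(sample_rate) / sample_rate   # one second of samples

# Synthetic "music" frame: a strong 50 Hz tone (beat) plus a softer 220 Hz tone (note).
frame = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 220 * t)

spectrum = np.abs(np.fft.rfft(frame))              # magnitude spectrum
freqs = np.fft.rfftfreq(len(frame), 1 / sample_rate)

def band_energy(lo, hi):
    """Total spectral magnitude between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].sum()

low = band_energy(0, 100)     # could drive the "beat" sensation
mid = band_energy(100, 400)   # could drive the "note" sensation
```

Each band's energy would then be routed to a separate region of the hand, so beats and notes arrive as distinct, simultaneous sensations.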

Factors that made the experience successful

  • The novelty of the interaction bolstered interest. Interest began high from novelty alone, but novelty also encouraged return visits, and those re-engagements deepened understanding.

Prototype 3 - Narrative Game with Reinforcement Learning

We visualized reinforcement learning using a dinosaur (agent), meat (reward), grass color (Q-value), and lava (punishment), framing the algorithm as a narrative game.
Data Type: AI Locomotion
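The learning loop underneath these metaphors can be sketched as tabular Q-learning. This is a minimal hypothetical example, not the prototype's actual Unity code; the 1-D world, hyperparameters, and episode count are all illustrative:

```python
import random

# A "dinosaur" agent on a 1-D strip: the rightmost cell holds meat
# (+1 reward), the leftmost holds lava (-1 punishment). In the prototype,
# grass color visualized each cell's Q-value.
N_STATES = 5                # cells 0 (lava) .. 4 (meat)
ACTIONS = [-1, +1]          # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move the agent; terminal cells return a reward and end the episode."""
    nxt = max(0, min(N_STATES - 1, state + action))
    if nxt == N_STATES - 1:
        return nxt, 1.0, True     # meat: reward
    if nxt == 0:
        return nxt, -1.0, True    # lava: punishment
    return nxt, 0.0, False

random.seed(0)
for _ in range(500):                       # training episodes
    s, done = 2, False                     # start in the middle cell
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(nxt, b)] for b in ACTIONS)
        # Q-learning update rule.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt
```

After training, the Q-values grow greener (higher) toward the meat and redder (lower) toward the lava, which is exactly the gradient the grass-color metaphor made visible to players.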

Factors that made the experience successful

  • The primary engagement driver was storytelling, which, paired with related metaphors, increased investment and inclination toward sustained, longer-term interaction.
  • Rewarding and punishing agent behavior made the experience gratifying, boosting user attentiveness and content retention.
  • Lastly, since confusion detracts from immersion and scaffolding prevents confusion, scaffolding became a key component of sustained engagement.
  • Supporting the experience with levels, each focused on a core function of reinforcement learning, fosters a better understanding of the possibilities and constraints of a given machine learning model.


Impact & Conclusion

Accessible metaphors are powerful tools for understanding. For unfamiliar technology, these visuals can be intuitively explanatory and offer memorable imagery for future reference. Artistic perspectives on high-level topics may be undervalued, but they can creatively illuminate such esoteric subjects and give audiences an approachable common ground.

Relatable visual metaphors increase informational accessibility and emotional memorability. With greater accessibility comes greater engagement, a self-reinforcing cycle that can lead to lasting shifts in attitude.



Through this project, I learned to try fast, fail fast, and try again. At first, we feared failure, so we spent too long in the research phase. From this, we learned that iteration is key: making rough visual prototypes improves communication and lets things move faster.

Moreover, knowing when to fold was important for us, as this was a discovery project. Defining milestones and success metrics helped us make those calls.