
BODY OF MINE began as a USC Thesis Project, a collaboration between the Film & Television Production, Interactive Media, and Future Cinema / Media Arts & Practice programs. The prototype and demo video, created with Unreal Engine 4.27.2, were developed under the mentorship of esteemed faculty at USC's School of Cinematic Arts, including Academy Award winner Michael Fink, John Brennan III (The Lion King, The Jungle Book), and VR/XR pioneer and technologist Scott Fisher, whose extensive experience will continue to guide the project moving forward, alongside VR pioneers Nonny de la Peña and Mary Matheson of ASU.

The prototype uses the latest accessible, at-home motion capture technology, relying on five HTC Vive Trackers (feet, hands, and waist) along with an IK system and custom Blueprints to let users step inside a MetaHuman from home. The guided choreography and 360° dance sequences were captured on USC's OptiTrack performance capture stage, cleaned and edited in Motive and MotionBuilder, then retargeted in Unreal Engine based on the user's gender. The environment, animations, and game progression use Sequencer, assets sourced from the Epic Games Marketplace and Quixel Bridge, and Blueprints triggered by spatialized colliders to enable a completely full-body, controller-free experience. Development took place on Cameron's AMD Ryzen 9 3900X and RTX 3090, which we plan to use throughout the production process.
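
The progression logic described above lives in Blueprints; as a rough illustration of the same pattern, the C++ sketch below shows a trigger volume that plays a Level Sequence when the embodied player overlaps it. The class, tag, and property names are illustrative assumptions, not taken from the project itself.

```cpp
// Rough C++ sketch of collider-driven progression; the project does this in Blueprints.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/BoxComponent.h"
#include "LevelSequenceActor.h"
#include "LevelSequencePlayer.h"
#include "SequenceTriggerVolume.generated.h"

UCLASS()
class ASequenceTriggerVolume : public AActor
{
    GENERATED_BODY()

public:
    ASequenceTriggerVolume()
    {
        // Spatialized collider the user's tracked body steps into.
        Trigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Trigger"));
        RootComponent = Trigger;
        Trigger->OnComponentBeginOverlap.AddDynamic(this, &ASequenceTriggerVolume::OnOverlap);
    }

    // Level Sequence that advances the experience (assigned in the editor).
    UPROPERTY(EditAnywhere)
    ALevelSequenceActor* SequenceToPlay = nullptr;

protected:
    UPROPERTY(VisibleAnywhere)
    UBoxComponent* Trigger;

    UFUNCTION()
    void OnOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                   UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                   bool bFromSweep, const FHitResult& SweepResult)
    {
        // Advance the guided choreography when the embodied MetaHuman enters the volume.
        if (SequenceToPlay && OtherActor && OtherActor->ActorHasTag(TEXT("PlayerBody")))
        {
            SequenceToPlay->GetSequencePlayer()->Play();
        }
    }
};
```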

BODY OF MINE had its first public appearance at the USC 2022 Games Expo, where it was recognized for its combination of intimacy, empathy, and forward-looking technology. Users were struck by the power of its use of VR, both as a form of social awareness and as a means of internal exploration. While we continue to develop the tools to bring the experience to a larger Oculus and Steam audience, we plan to keep traveling with the experience, presenting live installations at festivals, conferences, and queer-centered events such as Pride festivals and LGBTQ+ museums.

How We Got Here

Community Contributions

With advancements in the metaverse demanding new approaches to customizable, first-person avatars, we believe in the democratization of real-time, full-body tracking solutions, and we want to bring not just this experience but the full power of immersive technology to mass audiences. We plan to contribute to the Unreal Engine community by sharing our development in the following seven areas:

Accessible Real Time Body Tracking

While the prototype currently relies on Vive Trackers for its full-body immersion, we are working to make this even more accessible by streaming real-time performance capture from a smartphone directly into Unreal Engine. Utilizing new wide-angle phone camera lenses with depth sensing, users will simply place their phone in front of them and be able to step inside a character, with synchronization between headset, camera, and motion capture handled within Unreal Engine. This pipeline will be shared with the Unreal community to enable other full-body VR experiences on consumer headsets, without the need for expensive, inaccessible hardware.
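
To give a sense of what that streaming link might look like, here is a minimal, hypothetical packet format and parser for phone-captured poses. The joint count, layout, and shared-timestamp field are assumptions for illustration, not the project's actual protocol.

```cpp
// Hypothetical UDP packet layout for streaming phone-captured poses into the engine.
#include <cstdint>
#include <cstring>
#include <array>
#include <optional>

constexpr int kJointCount = 24;          // assumed skeleton size

struct JointSample {
    float position[3];                   // meters, phone-camera space
    float rotation[4];                   // quaternion (x, y, z, w)
};

struct PosePacket {
    double timestampSeconds;             // shared clock, used to sync with headset frames
    std::array<JointSample, kJointCount> joints;
};

// Unpack one datagram into a PosePacket; returns nullopt on size mismatch.
std::optional<PosePacket> ParsePosePacket(const uint8_t* data, size_t size) {
    PosePacket packet{};
    const size_t expected = sizeof(double) + kJointCount * sizeof(JointSample);
    if (size != expected) {
        return std::nullopt;
    }
    std::memcpy(&packet.timestampSeconds, data, sizeof(double));
    std::memcpy(packet.joints.data(), data + sizeof(double),
                kJointCount * sizeof(JointSample));
    return packet;
}
```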

Finger Tracking

We are working to feed finger-tracking data from headsets such as the Quest into a MetaHuman, combined with real-time smartphone body tracking, broadening the range of possible interactivity without needing to hold a controller.
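
A minimal sketch of how per-finger curl values from headset hand tracking could be surfaced to the MetaHuman's animation system, assuming a custom Anim Instance whose properties an Animation Blueprint or Control Rig then maps onto the finger bones; the class and property names are hypothetical.

```cpp
// Hypothetical AnimInstance exposing normalized finger-curl values (0 = open, 1 = curled)
// so an Animation Blueprint or Control Rig can drive the MetaHuman's finger bones.
#include "Animation/AnimInstance.h"
#include "HandCurlAnimInstance.generated.h"

UCLASS()
class UHandCurlAnimInstance : public UAnimInstance
{
    GENERATED_BODY()

public:
    // One entry per finger, thumb through pinky.
    UPROPERTY(BlueprintReadOnly, Category = "Hand Tracking")
    TArray<float> LeftHandCurls;

    UPROPERTY(BlueprintReadOnly, Category = "Hand Tracking")
    TArray<float> RightHandCurls;

    virtual void NativeInitializeAnimation() override
    {
        Super::NativeInitializeAnimation();
        LeftHandCurls.Init(0.f, 5);
        RightHandCurls.Init(0.f, 5);
    }

    // Called each frame from game code with the latest hand-tracking sample.
    void UpdateCurls(const float Left[5], const float Right[5])
    {
        if (LeftHandCurls.Num() < 5 || RightHandCurls.Num() < 5) return;
        for (int32 i = 0; i < 5; ++i)
        {
            LeftHandCurls[i]  = FMath::Clamp(Left[i], 0.f, 1.f);
            RightHandCurls[i] = FMath::Clamp(Right[i], 0.f, 1.f);
        }
    }
};
```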

Real Time Mesh Morphing

We want to develop real-time systems for the fluid modification of skeletal meshes in VR. Such a system would allow users to customize the gender of a MetaHuman on a spectrum, as well as body type, skin tone, and facial features. This would also allow gender-curious individuals to see the effects testosterone or estrogen might have over time, or to explore levels of androgyny.
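
A minimal sketch of the spectrum idea using Unreal's morph target API, assuming the mesh has been authored with hypothetical "BodyMasculine" and "BodyFeminine" morph targets (MetaHumans do not ship with these by default):

```cpp
// Cross-fade two authored morph targets to place a body anywhere on a spectrum.
#include "Components/SkeletalMeshComponent.h"

void ApplyGenderSpectrum(USkeletalMeshComponent* Body, float Spectrum /* 0..1 */)
{
    if (!Body) return;
    const float T = FMath::Clamp(Spectrum, 0.f, 1.f);

    // 0.0 = fully masculine sculpt, 1.0 = fully feminine, 0.5 = androgynous midpoint.
    Body->SetMorphTarget(TEXT("BodyMasculine"), 1.f - T);
    Body->SetMorphTarget(TEXT("BodyFeminine"), T);
}
```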

Skin Simulation

Currently, full immersion within a virtual human is limited by users' ability to realistically interact with avatars, including their own. We want to develop systems that better represent humanlike motion, both by creating barriers that prevent the 'passthrough' illusion when one body part overlaps another and by utilizing advancements in cloth simulation to mimic the ability to stretch, pull, and interact with your own skin.
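
One way such a barrier could work, sketched under the assumption that the torso is approximated by a single capsule, is to clamp the hand IK target to the capsule's surface before the IK solve; a fuller solution would query the MetaHuman's physics asset bodies instead.

```cpp
// Push an IK hand target outside a capsule approximating the torso, so the hand
// cannot pass through the body. Capsule parameters are illustrative.
#include "CoreMinimal.h"

FVector ConstrainHandTarget(const FVector& HandTarget,
                            const FVector& SpineBottom, const FVector& SpineTop,
                            float TorsoRadius)
{
    // Closest point on the spine segment to the requested hand position.
    const FVector ClosestOnSpine = FMath::ClosestPointOnSegment(HandTarget, SpineBottom, SpineTop);
    const FVector Offset = HandTarget - ClosestOnSpine;
    const float Distance = Offset.Size();

    if (Distance >= TorsoRadius || Distance < KINDA_SMALL_NUMBER)
    {
        return HandTarget;   // already outside the torso (or degenerate): leave it alone
    }
    // Inside the capsule: project the target back out to its surface.
    return ClosestOnSpine + Offset / Distance * TorsoRadius;
}
```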

Synthetic Speech

To recreate the experience of voice dysphoria, we are developing tools that bring advancements in AI-generated voice into Unreal experiences. This would allow users to speak freely out loud, with their voice modified in real time to mimic the effects that estrogen or testosterone might have during transitioning. We believe real-time voice synthesizers have broad implications for the further advancement of customizable avatars in the metaverse, as we continue to look for ways of best expressing our identities in VR spaces, both visually and verbally.
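
The DSP involved goes well beyond a snippet, but as a toy illustration of the kind of transformation in play, here is a naive resampling pitch shift on a mono buffer. A production system would need a formant-preserving, low-latency approach (e.g. a phase vocoder), and the pitch ratios mentioned in the comments are illustrative, not clinical.

```cpp
// Naive pitch shift by resampling; raises or lowers pitch but also changes duration,
// so it only illustrates the idea, not a usable real-time voice filter.
#include <vector>
#include <cstddef>

std::vector<float> NaivePitchShift(const std::vector<float>& input, float pitchRatio)
{
    // pitchRatio > 1 raises pitch, pitchRatio < 1 lowers it (values purely illustrative).
    std::vector<float> output;
    if (input.size() < 2 || pitchRatio <= 0.f) return output;

    const float limit = static_cast<float>(input.size() - 1);
    output.reserve(static_cast<size_t>(limit / pitchRatio) + 1);

    for (float srcPos = 0.f; srcPos < limit; srcPos += pitchRatio)
    {
        const size_t i = static_cast<size_t>(srcPos);
        const float frac = srcPos - static_cast<float>(i);
        // Linear interpolation between neighboring samples.
        output.push_back(input[i] * (1.f - frac) + input[i + 1] * frac);
    }
    return output;
}
```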

Gestural Interaction

Controllers are great for a wide range of VR games and tools; yet for many use cases, controllers get in the way. We do not hold controllers in our hands in real life, and eliminating this small barrier can help developers build VR experiences that better represent the human experience.

Machine Learning in UE5

We are working to leverage advancements in motion analytics to bring AI pose detection and intelligent movement analysis into the headset. This would allow for both greater interaction and greater intimacy, as recognition of specific movement patterns could help tailor the guided portions of the experience to each user.
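
As a toy sketch of the data flow, the snippet below classifies the current pose by comparing a flattened, root-relative joint-feature vector against stored reference poses with a nearest-centroid rule. The real system would use a trained model; the pose names here are hypothetical.

```cpp
// Nearest-centroid pose recognition over flattened joint-position features.
#include <vector>
#include <string>
#include <cmath>
#include <limits>

struct ReferencePose {
    std::string name;               // e.g. "arms_raised", "hand_on_heart" (illustrative)
    std::vector<float> features;    // flattened, root-relative joint positions
};

std::string ClassifyPose(const std::vector<float>& features,
                         const std::vector<ReferencePose>& references)
{
    std::string best = "unknown";
    float bestDistance = std::numeric_limits<float>::max();

    for (const ReferencePose& ref : references)
    {
        if (ref.features.size() != features.size()) continue;

        // Euclidean distance between the live pose and this reference pose.
        float sum = 0.f;
        for (size_t i = 0; i < features.size(); ++i)
        {
            const float d = features[i] - ref.features[i];
            sum += d * d;
        }
        const float distance = std::sqrt(sum);
        if (distance < bestDistance)
        {
            bestDistance = distance;
            best = ref.name;
        }
    }
    return best;
}
```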
