Machine Learning Assisted Visual Effects

I was introduced to Runway ML this semester by my independent study professor, Golan Levin. I was showing him a music video I was working on that involved a lot of manual and semi-automatic rotoscoping when he suggested I look into Runway as a resource for workflow efficiency and general experimentation. Once I got it working, I couldn't believe how simple and powerful the application was. The following gifs show several of the layers that make up a specific VFX shot from the video:

This is the raw footage, which you can see has a single green frame at the beginning. I added this so that I could datamosh the video and have it retain most of that green while bits and pieces of the original footage appear over time.
This is the same clip after moshing. I was happy with the moment where his hand comes up and reveals part of his sweatshirt; I used this to add a present, tactile texture to the final composite.
Z-depth pass exported from RunwayML.
The final shot appears at 00:55 in the finished piece.
