Machine Learning Assisted Visual Effects


I was introduced to Runway ML this semester by my independent study professor Golan Levin. I was showing him a music video I was working on that involved a lot of manual and semi-automatic rotoscoping, when he suggested I look into Runway as a resource for workflow efficiency and general experimentation. Once I got it working, I couldn’t believe how simple and powerful the application was. The GIFs below show several of the layers that make up one VFX shot from the video:

This is the raw footage, which, as you can see, has a single green frame at the beginning. I added it so that when I datamoshed the video, most of that green would be retained while bits and pieces of the original footage appear over time. Datamoshing works by removing the keyframes that would normally refresh the image, so the motion data from each subsequent frame gets smeared across whatever was last decoded, in this case the solid green.
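If you want to try the green-frame setup yourself, here is a minimal Python sketch using ffmpeg’s built-in color source. The helper name, filenames, resolution, and frame rate are placeholders rather than my actual project settings, and it assumes ffmpeg is installed and on your PATH:

```python
# Sketch: prepend a single solid-green frame to a clip with ffmpeg.
# Filenames and resolution are placeholders, not the project's real settings.
import subprocess

def prepend_green_frame(src="raw.mp4", out="with_green.mp4",
                        size="1920x1080", fps=24):
    # 1) Synthesize a one-frame green clip from ffmpeg's lavfi color source.
    subprocess.run([
        "ffmpeg", "-y",
        "-f", "lavfi", "-i", f"color=c=green:s={size}:r={fps}",
        "-frames:v", "1", "green.mp4",
    ], check=True)
    # 2) Concatenate the green frame and the footage, re-encoding so the
    #    two inputs become one continuous stream with a shared codec/timebase.
    subprocess.run([
        "ffmpeg", "-y", "-i", "green.mp4", "-i", src,
        "-filter_complex", "[0:v][1:v]concat=n=2:v=1:a=0[v]",
        "-map", "[v]", out,
    ], check=True)

if __name__ == "__main__":
    prepend_green_frame()
```

Using the concat filter (rather than simple file concatenation) matters here, because the mosh step expects one continuous video stream.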
This is the same clip after moshing. I was especially happy with the moment where his hand comes up and reveals part of his sweatshirt; I used that detail to add a present, tactile texture to the final composite.
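The mosh itself can be done with a number of tools (deleting keyframes in Avidemux is the classic route); the sketch below is just an illustration of the core idea rather than the exact tool I used. It scans an MPEG-4 ASP/Xvid .avi for frame start codes and drops every I-frame after the first, assuming the clip was first transcoded with something like `ffmpeg -i with_green.mp4 -c:v libxvid -q:v 3 in.avi`:

```python
# Sketch: byte-level datamosh by deleting I-frames from an MPEG-4 ASP .avi.
# Dropping bytes leaves the AVI's chunk sizes and index inconsistent, which is
# part of the glitch; a tolerant player (e.g. VLC) will usually still decode it.
VOP_START = b"\x00\x00\x01\xb6"  # MPEG-4 video object plane start code

def mosh(src="in.avi", dst="moshed.avi", keep_iframes=1):
    data = open(src, "rb").read()
    out = bytearray()
    pos = 0
    kept = 0
    while True:
        start = data.find(VOP_START, pos)
        if start == -1 or start + 4 >= len(data):
            out += data[pos:]          # tail: no more complete frames
            break
        out += data[pos:start]         # container bytes before this frame
        end = data.find(VOP_START, start + 4)
        if end == -1:
            end = len(data)
        # The top two bits of the byte after the start code give the frame
        # type: 00 = I-frame, 01 = P-frame, 10 = B-frame.
        is_iframe = (data[start + 4] >> 6) == 0
        if is_iframe and kept >= keep_iframes:
            pass                       # drop it: later P-frames now smear
                                       # their motion over stale pixels
        else:
            out += data[start:end]
            if is_iframe:
                kept += 1
        pos = end
    open(dst, "wb").write(bytes(out))

if __name__ == "__main__":
    mosh()
```

With the green frame prepended, the first (and only surviving) I-frame is solid green, which is why the mosh stays mostly green while the footage bleeds through.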
This is the Z-depth pass exported from Runway ML.
The final shot appears at 00:55 in the finished piece.