For the Future
Thinking about where to go next, or just eventually: while looking into facial mocap options, I found a few studies that animate facial expressions purely from voice. I find this particularly interesting because it could be a major breakthrough in streamlining the animation process, removing much of the equipment you would need to capture the actor's expression. A system like this could automatically animate to a vocal performance, and it has potential to be used in-engine for games that have or require active voice chat.

Lip sync in games with voice chat lobbies can really enhance the experience and cooperative feel of a game, but these lip sync methods usually only animate the jaw, resulting in a blank, staring character regardless of the words coming out of their mouth. The dissonance can be jarring and ruin immersion. A system that can quickly analyze a voice input and output accurate facial expressions could really help increase immersion, especially in the foreseeable future of communication in VR and cooperative video games.
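To make that limitation concrete, here's a toy sketch of why jaw-only lip sync looks so blank. This is my own illustration, not code from any of the papers below: many simple voice-chat lip sync approaches boil down to mapping per-frame audio energy to a single jaw-open weight, so the rest of the face never moves. All function names and parameters here are made up for the example.

```python
import numpy as np

def frame_audio(signal, frame_len=400, hop=160):
    """Split a mono signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.stack([signal[i * hop : i * hop + frame_len] for i in range(n_frames)])

def jaw_open_from_energy(frames):
    """Naive jaw-only lip sync: map per-frame RMS energy to a 0..1 jaw weight.

    Energy says how loud the voice is, but nothing about which phoneme is
    spoken or what emotion it carries -- hence the blank stare.
    """
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return rms / (rms.max() + 1e-8)

# Synthetic 1-second "voice": a 200 Hz tone with a slow amplitude envelope.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
envelope = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))   # loudness rises and falls
audio = envelope * np.sin(2 * np.pi * 200 * t)

frames = frame_audio(audio)
jaw = jaw_open_from_energy(frames)
print(jaw.shape, float(jaw.max()))
```

The studies linked below go much further than this: instead of one scalar per frame, they learn a mapping from richer audio features to a full set of facial controls, which is exactly the gap this toy example makes visible.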
Deep Learning Approach for Speech Animation
Lipsync from Audio
Audio Driven Pose and Emotion
This last video in particular is interesting, showing that full facial animation is possible from pure audio cues.
I haven't had a chance to read all of these articles yet, so I'll hopefully get to that this week. I'll probably need some help figuring out how to progress from here.