Apple’s animated emojis for the iPhone X – Animojis – announced today are getting lots of attention, partly because the tech behind them likely extends from the company’s 2015 acquisition of Faceshift.
While that has certainly not been officially confirmed, back then Faceshift was doing some very cool things: driving animated avatars directly (i.e., in real time) from video of your own face, coupled with depth-sensing tech – effectively the same thing that happens with these Animojis via the iPhone X’s cameras.
Several tools developed elsewhere also, of course, use input video and facial performance capture to drive animated characters, but for fun I thought it might be interesting to go back to specific pieces of computer graphics research from 2009, 2010 and 2011 that each partly served as the origins of Faceshift.
Other continued research efforts also played a part in Faceshift’s development, but the papers below (each with an accompanying video) were key, and they show how the facial animation of CG avatars could be driven in real time from video captured of human performances.
FACE/OFF: LIVE FACIAL PUPPETRY
Thibaut Weise, Hao Li, Luc Van Gool, Mark Pauly
Proceedings of the Eighth ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 08/2009 – SCA ’09
ROBUST SINGLE-VIEW GEOMETRY AND MOTION RECONSTRUCTION
Hao Li, Bart Adams, Leonidas J. Guibas, Mark Pauly
ACM Transactions on Graphics, Proceedings of the 2nd ACM SIGGRAPH Conference and Exhibition in Asia 2009, 12/2009 – SIGGRAPH ASIA ’09
EXAMPLE-BASED FACIAL RIGGING
Hao Li, Thibaut Weise, Mark Pauly
ACM Transactions on Graphics, Proceedings of the 37th ACM SIGGRAPH Conference and Exhibition 2010, 07/2010 – SIGGRAPH 2010
REALTIME PERFORMANCE-BASED FACIAL ANIMATION
Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
ACM Transactions on Graphics, Proceedings of the 38th ACM SIGGRAPH Conference and Exhibition 2011, 08/2011 – SIGGRAPH 2011