Example-driven Virtual Cinematography by Learning Camera Behaviors
HONGDA JIANG^ —— CFCS, Peking University & AICFVE, Beijing Film Academy
BIN WANG^ —— AICFVE, Beijing Film Academy
XI WANG —— University Rennes, Inria, CNRS, IRISA & AICFVE, Beijing Film Academy
MARC CHRISTIE —— University Rennes, Inria, CNRS, IRISA & AICFVE, Beijing Film Academy
BAOQUAN CHEN* —— CFCS, Peking University & AICFVE, Beijing Film Academy
(^ equal contribution, * corresponding author)
We propose the design of a camera motion controller that can automatically extract camera behaviors from different film clips (left) and re-apply these behaviors to a 3D animation (center).
Designing a camera motion controller that can automatically move a virtual camera in relation to the contents of a 3D animation, in a cinematographic and principled way, is a complex and challenging task. Many cinematographic rules exist, yet practice shows significant stylistic variation in how they are applied. In this paper, we propose an example-driven camera controller that extracts camera behaviors from an example film clip and re-applies them to a 3D animation, by learning from a collection of camera motions.
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2020)
This work was supported in part by the National Key R&D Program of China (2018YFB1403900, 2019YFF0302902). We also thank Anthony Mirabile and Ludovic Burg from University Rennes, Inria, CNRS, IRISA, and Di Zhang from AICFVE, Beijing Film Academy, for their help with animation generation and rendering.