Deformation Capture and Modeling of Soft Objects

Abstract

We present a data-driven method for deformation capture and modeling of general soft objects. We adopt an iterative framework that consists of one component for physics-based deformation tracking and another for spacetime optimization of deformation parameters. Low-cost depth sensors are used for the deformation capture, and we do not require any force-displacement measurements, making data capture a cheap and convenient process. We augment a state-of-the-art probabilistic tracking method to robustly handle noise, occlusions, fast movements, and large deformations. The spacetime optimization aims to match the simulated trajectories with the tracked ones. The optimized deformation model is then used to boost the accuracy of the tracking results, which can in turn improve the deformation parameter estimation itself in later iterations. Numerical experiments demonstrate that the tracking and parameter optimization components complement each other nicely. Our spacetime optimization of the deformation model includes not only the material elasticity parameters and dynamic damping coefficients, but also the reference shape, which for soft objects can differ significantly from the static shape. The resulting optimization problem is highly nonlinear, high dimensional, and challenging to solve with previous methods. We propose a novel splitting algorithm that alternates between reference shape optimization and deformation parameter estimation, so that each subproblem can be tailored and solved more efficiently and robustly. Our system enables realistic motion reconstruction as well as synthesis of virtual soft objects in response to user interaction. Validation experiments show that our method is not only accurate, but also compares favorably to existing techniques. We further showcase the capability of our system with high-quality animations generated from optimized deformation parameters for a variety of soft objects, such as live plants and fabricated models.
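The splitting strategy at the heart of the spacetime optimization can be illustrated on a toy problem. The sketch below is a minimal, runnable Python example, not the paper's implementation: it replaces the full deformable model with a 1D damped spring, where the rest length x0 stands in for the reference shape and the stiffness k and damping c stand in for the deformation parameters. The alternation between the two subproblems mirrors the splitting algorithm described above; all names, the toy model, and the optimizers used are our own illustrative assumptions.

```python
# Toy illustration (not the authors' code) of the splitting idea: alternate
# between optimizing the "reference shape" (rest length x0) and the
# deformation parameters (stiffness k, damping c) so that a simulated
# trajectory matches a tracked one.
import numpy as np
from scipy.optimize import minimize_scalar, minimize

DT, STEPS = 0.01, 400

def simulate(k, c, x0, x_init=1.5, v_init=0.0):
    """Semi-implicit Euler simulation of x'' = -k (x - x0) - c x'."""
    xs, x, v = [], x_init, v_init
    for _ in range(STEPS):
        v += DT * (-k * (x - x0) - c * v)
        x += DT * v
        xs.append(x)
    return np.array(xs)

# Synthetic "tracked" trajectory generated with ground-truth parameters;
# in the real system this would come from depth-sensor tracking.
truth = dict(k=40.0, c=1.2, x0=0.7)
tracked = simulate(**truth)

def mismatch(k, c, x0):
    # Spacetime objective: squared deviation of simulated from tracked.
    return np.sum((simulate(k, c, x0) - tracked) ** 2)

# Splitting/alternation: each subproblem is low dimensional and easy.
k, c, x0 = 10.0, 0.5, 0.0  # crude initial guesses
for it in range(10):
    # Subproblem 1: reference shape, with material parameters fixed.
    x0 = minimize_scalar(lambda r: mismatch(k, c, r),
                         bounds=(-2.0, 2.0), method="bounded").x
    # Subproblem 2: material parameters, with the reference shape fixed.
    res = minimize(lambda p: mismatch(p[0], p[1], x0), [k, c],
                   method="Nelder-Mead")
    k, c = res.x

print(f"estimated k={k:.2f} c={c:.2f} x0={x0:.2f} (truth: 40.00 1.20 0.70)")
```

In the paper's setting each subproblem is of course far larger (the reference shape is a full mesh and the material parameters are spatially varying), but the benefit is the same: each alternation step faces a smaller, better-conditioned problem than the joint optimization.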

Publication
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2015)

Acknowledgement

We thank the anonymous reviewers for their constructive comments. We are grateful to the authors of [Chen et al. 2014] for sharing their data and fabricated models for our validation and comparison experiments. We also thank Jiacheng Ren, Jiangtao Shen, Keng Hua Sing, Francois Faure and Matthieu Nesme for their help and thoughtful discussions at various stages of this project. This work is supported in part by the Singapore Ministry of Education Academic Research Fund Tier 2 (MOE2011-T2-2-152); the Microsoft Research Asia Collaborative Research Program (FY14-RES-OPP-002); NSFC (61402459, 61379090, 61331018); the National 973 Program (2014CB360503); the Shenzhen Innovation Program (CXB201104220029A, JCYJ20140901003939034, JCYJ20130401170306810); and an NSERC Discovery Grant (84306).