Basic Information
I am a PhD student at ETH Zurich, co-advised by Prof. Siyu Tang and Prof. Andreas Geiger. My research topic is computer vision, specifically building controllable neural implicit representations for clothed human bodies. I am also interested in differentiable combinatorial optimization and its applications in computer vision. Before I came to ETH, I worked as a researcher in the Kording Lab at the University of Pennsylvania. Before that, I worked as a senior system engineer in the autonomous driving group at Baidu. I received my Master's degree from the University of California, Irvine, under the supervision of Prof. Charless Fowlkes.
Publications
Authors: Shaofei Wang, Božidar Antić, Andreas Geiger, Siyu Tang
IntrinsicAvatar learns relightable and animatable avatars from monocular videos, without any data-driven priors.

Authors: Kaifeng Zhao, Yan Zhang, Shaofei Wang, Thabo Beeler, Siyu Tang
Interaction with environments is a core ability of virtual humans and remains a challenging problem. We propose a method capable of generating a sequence of natural interaction events in real cluttered scenes.

Authors: Shaofei Wang, Katja Schwarz, Andreas Geiger, Siyu Tang
Given sparse multi-view videos, ARAH learns animatable clothed human avatars with detailed pose-dependent geometry and appearance that generalize to out-of-distribution poses.

Authors: Kaifeng Zhao, Shaofei Wang, Yan Zhang, Thabo Beeler, Siyu Tang
Synthesizing natural interactions between virtual humans and their 3D environments is critical for numerous applications, such as computer games and AR/VR experiences. We propose COINS, for COmpositional INteraction Synthesis with Semantic Control.

Authors: Shaofei Wang, Marko Mihajlovic, Qianli Ma, Andreas Geiger, Siyu Tang
MetaAvatar is a meta-learned model that represents generalizable and controllable neural signed distance fields (SDFs) for clothed humans. It can be quickly fine-tuned to represent unseen subjects given as few as 8 monocular depth images.

Authors: Shaofei Wang, Andreas Geiger, Siyu Tang
Registering point clouds of dressed humans to parametric human models is a challenging task in computer vision. We propose novel piecewise transformation fields (PTFs), a set of functions that learn 3D translation vectors, facilitating occupancy learning, joint-rotation estimation, and mesh registration.