Contact-Aware Refinement of Human Pose Pseudo-Ground Truth via Bioimpedance Sensing
Abstract
Capturing accurate 3D human pose in the wild would provide valuable data for training pose estimation and motion generation methods. While video-based estimation approaches have become increasingly accurate, they often fail in common scenarios involving self-contact, such as a hand touching the face. In contrast, wearable bioimpedance sensing can cheaply and unobtrusively measure ground-truth skin-to-skin contact. Consequently, we propose a novel framework that combines visual pose estimators with bioimpedance sensing to capture the 3D pose of people while taking self-contact into account. Our method, BioTUCH, initializes the pose using an off-the-shelf estimator and introduces contact-aware pose optimization during measured self-contact: reprojection error and deviations from the input estimate are minimized while enforcing vertex proximity constraints. We validate our approach using a new dataset of synchronized RGB video, bioimpedance measurements, and 3D motion capture. Testing with three input pose estimators, we demonstrate an average improvement of 11.7% in reconstruction accuracy. We also present a miniature wearable bioimpedance sensor that enables efficient large-scale collection of contact-aware training data for improving pose estimation and generation using BioTUCH.
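To make the optimization described above concrete, the following minimal Python sketch shows what such a contact-aware objective could look like. This is an illustrative assumption, not the authors' implementation: the function name contact_aware_objective, the weights w_proj, w_dev, and w_contact, and the representation of contacts as vertex-index pairs are all hypothetical.

import numpy as np

def contact_aware_objective(verts_3d, joints_2d_proj, joints_2d_obs,
                            pose, pose_init, contact_pairs,
                            w_proj=1.0, w_dev=0.1, w_contact=10.0):
    """Hypothetical sketch of a contact-aware pose objective.

    verts_3d:      (V, 3) posed body-mesh vertices
    joints_2d_proj: (J, 2) projected model joints
    joints_2d_obs:  (J, 2) detected 2D joints in the image
    pose, pose_init: current pose parameters and the off-the-shelf estimate
    contact_pairs: list of (i, j) vertex index pairs measured as touching
    """
    # Reprojection term: keep the fit consistent with the image evidence.
    e_proj = np.sum((joints_2d_proj - joints_2d_obs) ** 2)

    # Deviation term: stay close to the initial off-the-shelf estimate.
    e_dev = np.sum((pose - pose_init) ** 2)

    # Contact term: pull vertex pairs flagged by the bioimpedance
    # measurement into proximity.
    e_contact = sum(np.sum((verts_3d[i] - verts_3d[j]) ** 2)
                    for i, j in contact_pairs)

    return w_proj * e_proj + w_dev * e_dev + w_contact * e_contact

The relative weights encode the trade-off stated in the abstract: a large w_contact enforces the measured self-contact, while w_dev anchors the refined pose to the input estimate so the optimization only adjusts what the contact evidence requires.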
BibTeX
@inproceedings{Forte25-ICCV-BioTUCH,
  title = {Contact-Aware Refinement of Human Pose Pseudo-Ground Truth via Bioimpedance Sensing},
  author = {Forte, Maria-Paola and Athanasiou*, Nikos and Ballardini*, Giulia and Bartels, Jan Ulrich and Kuchenbecker, Katherine J. and Black, Michael J.},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  address = {Honolulu, USA},
  month = oct,
  year = {2025},
  note = {Nikos Athanasiou and Giulia Ballardini contributed equally to this publication},
  slug = {forte25-iccv-biotuch},
  month_numeric = {10}
}
Contact
For questions, please contact forte@is.mpg.de