I’m a Ph.D. Candidate in Robotics at the University of Michigan, working in the MMINT Lab, advised by Prof. Nima Fazeli and co-advised by Prof. Andrew Owens. I successfully defended my doctoral dissertation on November 21st, 2025.
My research focuses on tactile-centric robot learning, including cross-sensor tactile generation, tactile representation learning, and visuo-tactile models for manipulation and in-hand state estimation.
My goal is to enable robots to understand and interact with the physical world through rich multimodal tactile perception, supporting more adaptive and robust manipulation.
I am excited to continue advancing tactile sensing and robot learning, and to explore opportunities for robots to perceive and act in the physical world through touch.
Samanta Rodriguez*, Yiming Dou*, Miquel Oller, Andrew Owens, Nima Fazeli
9th Conference on Robot Learning (CoRL), 2025
We present a method for transferring manipulation policies between different tactile sensors by generating cross-sensor tactile signals. Using either a paired diffusion model (T2T) or an unpaired depth-based approach (T2D2), the method enables zero-shot policy transfer without retraining. We demonstrate it on a marble rolling task, where policies learned with one sensor are successfully applied to another.
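The transfer recipe is simple to state: translate the new sensor's tactile observation into the modality the policy was trained on, then run the frozen policy unchanged. Below is a minimal sketch of that idea; the `translate` and `policy` functions are hypothetical placeholders standing in for a trained cross-sensor generator and a pretrained policy, not the paper's actual code.

```python
# Sketch of zero-shot policy transfer via cross-sensor translation.
# `translate` and `policy` are hypothetical stand-ins, assumed for illustration.
import numpy as np

def translate(target_obs: np.ndarray) -> np.ndarray:
    """Hypothetical cross-sensor generator: new-sensor image -> source-sensor image."""
    return target_obs  # placeholder; a generative model would run here

def policy(source_obs: np.ndarray) -> np.ndarray:
    """Frozen policy trained on source-sensor observations; returns an action."""
    return np.zeros(2)  # placeholder action

def act_with_new_sensor(target_obs: np.ndarray) -> np.ndarray:
    # The policy never sees the new sensor's raw signal, so no retraining is needed.
    return policy(translate(target_obs))

if __name__ == "__main__":
    obs = np.random.rand(64, 64, 3)  # fake tactile image from the new sensor
    print(act_with_new_sensor(obs))
```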
Samanta Rodriguez*, Yiming Dou*, William van den Bogert, Miquel Oller, Kevin So, Andrew Owens, Nima Fazeli
International Conference on Robotics and Automation (ICRA), 2025
We present a contrastive self-supervised learning method to unify tactile feedback across different sensors, using paired tactile data. By treating paired signals as positives and unpaired ones as negatives, our approach learns a sensor-agnostic latent representation, capturing shared information without relying on reconstruction or task-specific supervision.
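To make the contrastive setup concrete, here is a minimal sketch of an InfoNCE-style objective over paired tactile embeddings: within a batch, the i-th samples from the two sensors are the positive pair and all other pairings are negatives. This is an illustrative PyTorch sketch under those assumptions, not the paper's implementation; the encoder names in the usage comment are hypothetical.

```python
# Sketch of a symmetric contrastive loss over paired cross-sensor embeddings.
import torch
import torch.nn.functional as F

def cross_sensor_info_nce(z_a: torch.Tensor, z_b: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """z_a, z_b: (B, D) embeddings of time-aligned signals from two sensors."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                 # (B, B) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Diagonal entries are the paired (positive) matches; off-diagonals are negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage (hypothetical encoders): loss = cross_sensor_info_nce(
#     encoder_gelslim(x_gelslim), encoder_bubble(x_bubble))
```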
Samanta Rodriguez*, Yiming Dou*, Miquel Oller, Andrew Owens, Nima Fazeli
Preprint, 2024
The diversity of touch sensor designs complicates general-purpose tactile processing. We address this by training a diffusion model for cross-modal prediction, translating tactile signals between GelSlim and Soft Bubble sensors. This enables sensor-specific methods to be applied across sensor types.