Blind Object Tracking in Clutter




Retrieving an object from cluttered spaces such as cupboards, refrigerators, or bins requires tracking objects with limited or no visual sensing. In these scenarios, contact feedback is necessary to estimate the pose of the objects, yet the objects are movable while their shapes and number may be unknown, making the association of contacts with objects extremely difficult. While previous work has focused on multi-target tracking, the assumptions therein prohibit using prior methods given only the contact-sensing modality. Instead, this paper proposes Soft Tracking Using Contacts for Cluttered Objects (STUCCO), a method that tracks the belief over contact point locations and implicit object associations using a particle filter. This method allows ambiguous object associations of past contacts to be revised as new information becomes available. We apply STUCCO to the Blind Object Retrieval problem, where a target object of known shape but unknown pose must be retrieved from clutter. Our results show that our method outperforms baselines in four simulated environments and on a real robot, where contact sensing is noisy. In simulation, we achieve grasp success of at least 65% in all environments, while no baseline achieves over 5%.
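To illustrate the kind of belief tracking the abstract describes, the following is a minimal, self-contained sketch of a generic bootstrap particle filter over a 1-D contact point location. It is not the STUCCO implementation (which tracks contact locations and object associations jointly); the motion offset, noise level, and measurement sequence here are illustrative assumptions.

```python
import math
import random

def particle_filter_step(particles, weights, motion, measurement, noise=0.1):
    """One predict-update-resample cycle of a bootstrap particle filter
    over 1-D contact point positions (illustrative sketch only)."""
    # Predict: propagate each particle through a simple motion model with noise.
    predicted = [p + motion + random.gauss(0.0, noise) for p in particles]
    # Update: reweight particles by the Gaussian likelihood of the observed contact.
    new_weights = [w * math.exp(-(p - measurement) ** 2 / (2 * noise ** 2))
                   for p, w in zip(predicted, weights)]
    total = sum(new_weights) or 1.0
    new_weights = [w / total for w in new_weights]
    # Resample: draw particles in proportion to their weights.
    resampled = random.choices(predicted, weights=new_weights, k=len(predicted))
    return resampled, [1.0 / len(resampled)] * len(resampled)

random.seed(0)
n = 500
particles = [random.uniform(-1.0, 1.0) for _ in range(n)]
weights = [1.0 / n] * n
for z in [0.2, 0.25, 0.3]:  # simulated contact measurements drifting right
    particles, weights = particle_filter_step(
        particles, weights, motion=0.05, measurement=z)
estimate = sum(particles) / len(particles)  # posterior mean of contact location
```

The posterior mean concentrates near the recent measurements; in the full method, maintaining a particle set rather than a point estimate is what lets ambiguous past contact-to-object associations be revised when later contacts arrive.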


Higher FMI is better; lower contact error is better. Error bars indicate the 20th to 80th percentiles. Our method is shown in blue.

Four simulated environments with a floating gripper. The top row shows the initial condition and the bottom row shows the state near the end of one of our runs. The robot's state trail and the best estimate of the target pose are in blue; tracked contact points are crosses, with each color corresponding to a segmented object. Our method achieves at least 65% grasp success across all environments, while none of the baselines exceed 5%.

Real environment with (top) the initial condition and (bottom) successful tracking in one of our runs.


Soft Tracking Using Contacts for Cluttered Objects to Perform Blind Object Retrieval
Sheng Zhong, Nima Fazeli, and Dmitry Berenson
IEEE Robotics and Automation Letters (RA-L), accepted; presented at ICRA 2022.