Thu 02 January 2020

ICRA 2020 ViTac Workshop Program

ViTac 2020: Closing the Perception-Action Loop with Vision and Tactile Sensing

Time: 9am-6:30pm (Paris Time), Sunday 31st May 2020

Workshop homepage.

The workshop will take place online (live) via Zoom.

To participate in the workshop, please register here for the Zoom meeting.

Participation will be free of charge.

Use the workshop Slack channel to ask questions of the speakers, in addition to the live discussions (you need to join the ICRA 2020 Slack workspace first).

Recordings of the workshop have been uploaded to our YouTube channel. We had around 500 registrations!

Chat transcripts from the meeting: morning session and afternoon session.

All the following times are in Paris Time.

Time Speaker Title
09:00-09:10 Organizers Opening & Welcome

Session 1:

09:10-09:35 Shan Luo, University of Liverpool Session Intro & Visuo-Tactile Robotic Perception
09:35-10:00 Robert Haschke, Bielefeld University Invited talk: Tactile-servoing based manipulation
10:00-10:25 Huaping Liu, Tsinghua University Invited talk: Active Multi-Modal Perception
10:25-10:50 Lorenzo Natale, IIT Invited talk: From object detection to pose estimation using visual and tactile feedback

Session 2:

10:50-11:15 Nathan Lepora, University of Bristol Session Intro & Optical tactile sensing with a human touch
11:15-11:40 Kaspar Althoefer, Queen Mary University of London Invited talk: The integrated force/tactile sensor – A vision-based approach
11:40-12:20 Vincent Hayward, UPMC (Keynote) Invited talk: Latest results concerning the mechanics of touch
12:20-12:30 Lightning presenters Lightning presentations (5 mins each)
[12:20-12:25] Accurate estimation of the 3D contact force distribution with an optical tactile sensor (Live demonstration). Carmelo Sferrazza and Raffaello D’Andrea [pdf]
[12:25-12:30] STAM: An Attention Model for Tactile Texture Recognition. Guanqun Cao and Shan Luo [pdf]

12:30-14:20 Lunch Break

Session 3:

14:20-14:50 Gordon Cheng, Technische Universität München (TUM) Session Intro & Realising Humanoid Multi-Modal Closed-loop Perception-Action
14:50-15:30 Peter Allen, Columbia University (Keynote) Invited talk: Generative Attention Learning: A “GenerAL” Framework for High-Performance Multi-Fingered Grasping in Clutter Using Vision and Touch
15:30-16:00 Lightning presentations & Discussions Lightning presentations (5 mins each)
[15:30-15:35] Sim2Real for Peg-Hole Insertion with Eye-in-Hand Camera. Damian Bogunowicz, Aleksandr Rybnikov, Komal Vendidandi and Fedor Chervinskii [pdf]
[15:35-15:40] Exploiting touch sensing around fingers. Daniel Fernandes Gomes, Zhonglin and Shan Luo [pdf]
[15:40-15:45] Object pose estimation with geometry and tactile image matching. Maria Bauza Villalonga and Alberto Rodriguez [pdf]
[15:45-15:50] Incremental shape and pose estimation from planar pushing using contact implicit surfaces. Sudharshan Suresh, Joshua Mangelson and Michael Kaess [pdf]
[15:50-16:00] Discussions


Session 4:

16:00-16:25 Wenzhen Yuan, CMU Session Intro & Connecting touch and vision for object property perception
16:25-16:50 Roberto Calandra, Facebook Invited talk: Towards in-hand manipulation from vision and touch
16:50-17:30 Dieter Fox, University of Washington and Nvidia (Keynote) Invited talk: Learning and Modeling Touch
17:30-17:55 Alberto Rodriguez, MIT Invited talk: Tactile-driven Dexterity: Object localization, manipulation, and assembly
17:55-18:30 Speakers & attendees Group discussion
18:30 Finish

Supported by the following IEEE RAS Technical Committees:

  • Haptics
  • Human-Robot Interaction and Coordination
  • Computer and Robot Vision
  • Cognitive Robotics