R01: Demonstration of state-of-the-art Social Intelligence on PAL Robotics’ ARI and TIAGo platforms (PAL Robotics)
PAL Robotics, a leading European robotics company, will demonstrate the novel open-source ROS4HRI framework for Social Intelligence on its social robots ARI and TIAGo. ROS4HRI enables the creation of complex, interoperable pipelines for human-robot interaction, integrating heterogeneous AI techniques into a modular framework. Although the ROS4HRI framework itself does not introduce major new AI techniques, it offers for the first time a standardized approach to building complex perception pipelines for human-aware, Social AI. ROS4HRI is ROS-based, allowing smooth bridging between AI and robotics research.
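ROS4HRI's standardization centres on representing each perceived human at several levels (e.g. transient face and body detections merged into a persistent person identity). The following is a minimal, ROS-free Python sketch of that idea only; the class and method names are illustrative and are not the actual ROS4HRI topics or hri_msgs API.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Person:
    """Illustrative person record: a persistent identity aggregating
    transient per-modality perception results, as in ROS4HRI's model."""
    person_id: str
    face_id: Optional[str] = None
    body_id: Optional[str] = None

class HumanRegistry:
    """Toy registry merging per-modality detections into person records
    (hypothetical stand-in for the framework's person-manager role)."""
    def __init__(self) -> None:
        self.persons: Dict[str, Person] = {}

    def on_face(self, person_id: str, face_id: str) -> None:
        self.persons.setdefault(person_id, Person(person_id)).face_id = face_id

    def on_body(self, person_id: str, body_id: str) -> None:
        self.persons.setdefault(person_id, Person(person_id)).body_id = body_id

reg = HumanRegistry()
reg.on_face("anna", "face_01")   # face detector output
reg.on_body("anna", "body_07")   # body tracker output
print(reg.persons["anna"])
```

In the real framework, such records are exchanged over standard ROS topics, which is what makes independently developed perception nodes interoperable.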
R02: Frida: A Narrative Robot Artist with Versatile Styles (The Robotics Institute, Carnegie Mellon University)
CMU will show an interactive demonstration of their robot painter, Frida. Like human artists, Frida paints still lifes or portraits based on visual perception through a camera sensor, or narrative art based on a theme or story described in natural language. In this online exhibit, Frida will demonstrate the process of creating an artifact, from generating visual content in simulation to painting it on a physical canvas using a brush and a palette of paints, based on photographs or text given by attendees. During the demonstration, participants will select style images and describe the painting that they wish to see the robot artist, Frida, paint. The demonstration showcases an attempt at creative artificial intelligence and aims to inspire attendees to focus their AI research on aiding human communication and bringing people together.
R03: The Tilburg Dexterous Hand: A Low Cost Research Platform for Everyone (Tilburg University)
Tilburg University presents a preview of the low-cost Tilburg Dexterous Hand. During the interactive demonstration, visitors will be able to control the robot hand by moving their own hand, through a hand-pose tracking system built on the MediaPipe perception framework. The robot hand is intended to facilitate research in dexterous manipulation and deep reinforcement learning (DRL) for robotics by significantly reducing the cost of the required hardware. The hand is also designed to simulate quickly, which helps with the high compute requirements of training DRL agents.
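One way such teleoperation can work (a sketch under assumptions, not necessarily the Tilburg implementation): MediaPipe Hands reports 21 3-D landmarks per hand, and the bend angle at each finger joint can be computed from three consecutive landmarks and retargeted to the corresponding robot joint. The angle computation itself is plain vector geometry:

```python
import math

def joint_angle(a, b, c):
    """Interior angle (radians) at landmark b, formed by points a-b-c.
    Landmarks are (x, y, z) tuples, e.g. from a hand-tracking system."""
    v1 = tuple(ai - bi for ai, bi in zip(a, b))
    v2 = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# Three collinear landmarks (straight finger) -> ~180 degrees.
straight = joint_angle((0, 0, 0), (0, 1, 0), (0, 2, 0))
# A finger bent at a right angle -> ~90 degrees.
bent = joint_angle((0, 0, 0), (0, 1, 0), (1, 1, 0))
print(round(math.degrees(straight)), round(math.degrees(bent)))  # 180 90
```

Mapping these human joint angles to motor commands (accounting for the robot hand's joint limits and kinematic differences) is the retargeting step the demo's control loop would perform every frame.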
R04: Sasha – Please Tidy Up (Technische Universität Wien)
In this demonstration, TU Wien will show its methods for tidying up a room. By scanning the environment with the robot (a Toyota HSR), a 3D reconstruction is created that serves as the reference object map of the orderly room. Advanced machine learning techniques are used to detect objects and estimate their poses and grasps. The robot then checks locally for objects deviating from this map, detects moved as well as novel items, and acts accordingly. The robot exploits multiple reconstruction and object detection pipelines, including the use of support surfaces to prune false hypotheses. Making robots see is at the core of embodied AI, visual perception, and planning for grasping. Of particular relevance is integrating such technology and showing how applications can profit from recent research.
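The map-comparison step can be sketched as follows (an illustrative simplification, not TU Wien's actual pipeline): given the reference map of object positions from the orderly scan and the current detections, each detected object is classified as in place, moved, or novel. The function and tolerance below are assumptions for illustration; a real system would also compare orientation and handle detection uncertainty.

```python
import math

def classify_scene(reference, detected, tol=0.05):
    """Compare current detections against the reference ('orderly') map.
    reference/detected: dict name -> (x, y, z) position in metres.
    Returns dict name -> 'in_place' | 'moved' | 'novel'."""
    result = {}
    for name, pos in detected.items():
        if name not in reference:
            result[name] = "novel"          # item not in the orderly map
        elif math.dist(pos, reference[name]) <= tol:
            result[name] = "in_place"
        else:
            result[name] = "moved"          # deviates -> needs tidying
    return result

ref = {"mug": (0.10, 0.50, 0.80), "book": (0.40, 0.20, 0.80)}
now = {"mug": (0.10, 0.51, 0.80),   # within tolerance
       "book": (0.90, 0.20, 0.80),  # displaced
       "toy": (0.30, 0.30, 0.80)}   # not in the map
print(classify_scene(ref, now))
```

Objects labelled "moved" or "novel" would then be handed to the grasp-planning stage to be returned to (or assigned) their place in the orderly map.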