Open-set perception problems are prevalent in many application areas:

space robotics

autonomous driving

underwater robotics

household robotics

agriculture & forestry

robot learning

What are current limitations and challenges in open-set robot perception?
Will closed-set limitations be overcome by massively scaling up training data for foundation models, or is that not sufficient for the open-set problem?
Are open-set problems addressed differently in different robotic application areas? What can we learn from other domains? Why are there differences?
How can we translate the limitations experienced in the field into better open-set perception benchmarks?

Robots are envisioned to perform complex tasks in cluttered households, continuously changing plantations, barely perceivable ocean floors, or entirely unexplored planets. The key to understanding and acting in such surroundings is a robust sense of (visual) perception. However, most robot perception methods today rely on the closed-set assumption, which contradicts the open-set world robots have to interact with. Most environments are constantly changing and contain previously unknown classes (e.g., relatively new e-scooters do not appear in many datasets). For planetary exploration, the closed-set assumption is inherently problematic, since the very goal is to search for the unknown. Robust autonomous systems must be resilient and adaptable: they need to detect open-set samples and leverage these unknown instances for continual improvement and adaptation within the system.

Dealing with the open-set problem is now more crucial than ever. As robots are deployed in ever more parts of our day-to-day lives, it is important to guarantee safe and reliable operation. This workshop comes at a time when anybody can buy robots that can easily walk through family homes or greenhouses but still fail to reliably distinguish a coffee table from a bench. Researchers now know which architectures scale to large closed sets, but what is still missing are methods and tools that can deal with the novel observations robots make during deployment. In this context, robotic perception first requires the ability to recognize unknown situations. Detecting potentially new information is crucial for safe interaction, and it then opens opportunities to adapt perception on the fly. To this end, we divide the topics into two main groups, namely the Detection of and the Learning from Open-Set Samples.
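As a concrete illustration of the first topic, a widely used baseline for open-set detection is to threshold the maximum softmax probability of a closed-set classifier: confident predictions are treated as known classes, while near-uniform predictions are flagged as unknown. The sketch below is a minimal, assumed illustration (the threshold value and example logits are hypothetical), not a method endorsed by the workshop:

```python
import numpy as np

def is_open_set(logits, threshold=0.5):
    """Flag a sample as open-set (unknown) when the classifier's
    maximum softmax probability falls below a chosen threshold.

    logits: raw class scores from a closed-set classifier.
    threshold: illustrative confidence cutoff; in practice it is
    tuned on held-out data.
    """
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                      # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax over classes
    return bool(probs.max() < threshold)

# A peaked distribution is treated as a known class...
print(is_open_set([8.0, 0.5, 0.1]))   # False
# ...while a flat distribution over classes is flagged as unknown.
print(is_open_set([1.0, 0.9, 1.1]))   # True
```

This baseline only detects unknowns; the second topic, learning from open-set samples, concerns what a system should then do with the flagged instances.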

Talks and Submissions cover:

Detection of Open-Set Samples:

Learning from Open-Set Samples:

Speakers from Diverse Application Areas

Poster Presentations and Highlight Talks

Call for Contributions

Talks

Prof. Angela Dai

TU Munich

Dr. Gideon Billings

University of Sydney

Prof. Holger Caesar

TU Delft

Dr. Dimity Miller

Queensland University of Technology

Prof. Chris McCool

Uni Bonn

Prof. Rudolph Triebel

KIT

Prof. Ingmar Posner

Uni Oxford

Organizers

J. Stephany Berrio Perez

University of Sydney

Marcus G. Müller

German Aerospace Center (DLR)

Hermann Blum

Uni Bonn | Lamarr Institute | ETH Zürich

Maximilian Durner

German Aerospace Center (DLR)

Marc Pollefeys

ETH Zürich

Roland Siegwart

ETH Zürich