ICRA 2022: Deep Visual Navigation Under Partial Observability

My First ICRA Paper: Deep Visual Navigation on a Real Robot

This post marks a milestone in my research journey — my first paper at ICRA 2022.
More importantly, it represents years of learning, experimentation, failures, and collaboration.

During my time at NUS (2020), while working on the IntentionNet project, I got the opportunity to work with Boston Dynamics’ Spot. At the time, Spot did not have autonomous navigation capabilities for our use case and was primarily operated via remote control. What started as curiosity quickly turned into a deep research problem: can we enable reliable visual navigation on a real robot under partial observability?

Our team built a deep learning–based visual navigation system that allowed the robot to:

  • Navigate from one location to another
  • Follow paths and take turns
  • Avoid obstacles in real environments
  • Detect and reason about doors (including glass doors)
  • Operate without a pre-built map

One of the hardest challenges was partial observability: the robot never has complete information about its environment. We worked extensively on system design, onboard perception, planning strategies, and failure handling for cases where visual cues were ambiguous or misleading.
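To make the idea concrete, here is a minimal, illustrative sketch of why partial observability forces a robot to reason over time (this toy Bayes filter is my own example for exposition, not the method from the paper). A robot moves along a hypothetical 1-D corridor of five cells; it cannot sense which cell it is in, only whether a door is visible. Doors exist at two different cells, so a single "door" observation is ambiguous, and the robot must fuse a sequence of observations to localize:

```python
# Toy Bayes-filter example of partial observability (illustrative only).
# Corridor of 5 cells; doors are visible from cells 1 and 3, so one
# "door" observation alone cannot tell those cells apart.

DOORS = [0, 1, 0, 1, 0]                       # 1 = door visible from that cell
P_DOOR = [0.9 if d else 0.1 for d in DOORS]   # sensor is 90% reliable

def belief_update(belief, moved_right, saw_door):
    """One Bayes-filter step: predict (shift belief), then correct."""
    if moved_right:
        # Deterministic move right with wrap-around, for simplicity.
        belief = belief[-1:] + belief[:-1]
    # Weight each cell by the likelihood of the observation, then normalize.
    likelihood = [p if saw_door else 1.0 - p for p in P_DOOR]
    belief = [b * l for b, l in zip(belief, likelihood)]
    total = sum(belief)
    return [b / total for b in belief]

# Start fully uncertain, then: see a door, step right and see no door,
# step right and see a door again. Only cell 3 fits that whole sequence.
b = [1.0 / len(DOORS)] * len(DOORS)
b = belief_update(b, moved_right=False, saw_door=True)
b = belief_update(b, moved_right=True, saw_door=False)
b = belief_update(b, moved_right=True, saw_door=True)
print([round(x, 3) for x in b])  # belief concentrates on cell 3
```

The single ambiguous observation leaves the belief split across cells 1 and 3; only after fusing three observations does it collapse onto cell 3. The real system faced the same problem with high-dimensional camera images instead of a binary door sensor, which is what makes learned visual navigation under partial observability hard.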

I contributed to:

  • Designing and integrating the sensor suite (camera, depth, lidar, IMU, odometry)
  • Upgrading and stabilizing the navigation pipeline
  • Visual navigation and localization in map-less environments
  • Extensive real-world testing, debugging, and data collection
  • Experiments on NVIDIA Isaac and successful lab demos

Beyond implementation, this work taught me how fragile real-world autonomy can be — and how much effort is required to move from simulation to a physical robot that must make decisions in uncertain, dynamic environments.

The work eventually led to our ICRA 2022 paper:

📄 Deep Visual Navigation under Partial Observability
Paper: https://arxiv.org/abs/2109.07752
Video: https://www.youtube.com/watch?v=N0wa04ZPZN0

Video Credits: AiBo

I’m deeply grateful to the professor and the entire team for the mentorship, trust, and collaboration. Working on this project — and on Spot — was genuinely a dream come true and shaped the way I think about autonomous systems, robustness, and applied research.