3D Reconstruction Using 2D LiDAR and Sensor Fusion (2018)

Building a 3D Point Cloud Scanner Using a 2D LiDAR, a Rotating Platform, and a Compass (ROS)

There’s a special kind of curiosity that shows up in robotics.

The kind that asks:

“What if I don’t have the perfect sensor… but I can move it?”

That question led me to this project.

I wanted to know whether a 2D LiDAR, when paired with a compass sensor and a rotating mechanical platform, could reconstruct a real 3D point cloud of a person—not a simulation, not CAD, but an actual human standing in front of the scanner.

So I built a system that did exactly that.

🎥 Project demo:
https://www.youtube.com/watch?v=CJX31VMMlIc


Why This Was Interesting

A 2D LiDAR normally gives you a single slice of the world.

It’s great for:

  • mapping corridors
  • obstacle detection
  • 2D SLAM
  • navigation on flat planes

But it doesn’t capture 3D shape.

So the challenge became:

How do you create 3D perception with a sensor that only sees a 2D plane?

The answer is motion.

If you rotate the LiDAR and track the angle precisely, each scan becomes a different slice. Stack enough slices, and you can reconstruct a 3D structure.

But only if your angles are correct.

That’s where the compass came in.


The Core Concept

The idea was simple and beautiful:

  1. Mount a 2D LiDAR on a rotating axis
  2. Add a compass (orientation sensor) to measure heading
  3. Capture continuous 2D scans as the platform rotates
  4. Convert each scan into 3D coordinates using the measured heading
  5. Fuse all scans into a single 3D point cloud
  6. Visualize the result in ROS (RViz)

In other words:

A 2D scan + rotation angle = 3D slice
Many slices = full 3D point cloud
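
To make the recipe concrete, here is a minimal Python sketch of that loop. The helper names are hypothetical (capture_scans() stands in for the real sensor interface), and the geometry behind polar_to_xyz() is worked out in the sensor fusion section below:

    import math

    def polar_to_xyz(r, theta, phi):
        # One LiDAR return (r, theta) in a vertical scan plane,
        # rotated to the platform heading phi about the vertical axis.
        # (The geometry is derived in the sensor fusion section.)
        x = r * math.cos(theta) * math.cos(phi)
        y = r * math.cos(theta) * math.sin(phi)
        z = r * math.sin(theta)
        return (x, y, z)

    def capture_scans():
        # Hypothetical stand-in for the real sensor interface:
        # yields (scan, heading) pairs, one 2D slice at a time.
        yield [(1.0, 0.0), (1.2, 0.1)], 0.0   # fake data for the sketch

    cloud = []
    for scan, phi in capture_scans():
        for r, theta in scan:
            cloud.append(polar_to_xyz(r, theta, phi))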


Mechanical Build: Where the Real Work Starts

This wasn’t just software.

To make this work, I built a mechanical structure to:

  • hold the LiDAR and sensor rigidly
  • rotate smoothly around a fixed axis
  • maintain consistent alignment across scans
  • avoid wobble and vibration

Because in point clouds, tiny mechanical errors become giant spatial distortions.

Robotics always punishes sloppy hardware.


Sensor Fusion: Compass + LiDAR

The compass provided the rotational heading. The LiDAR provided radial distance points in the scan plane.

Each LiDAR measurement gives a point in polar coordinates:

  • distance r
  • scan angle θ (within the LiDAR plane)

The platform's rotation then adds a second angle:

  • heading φ (rotation about the axis)

Combining r, θ, and φ, each 2D point becomes a 3D point.

Conceptually:

  • LiDAR gives you the slice
  • Compass tells you where the slice belongs in 3D space

With many slices captured over a full rotation, the system reconstructs a 3D shape.
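
Written out (a sketch that assumes the scan plane is vertical and contains the rotation axis, with φ measured about the vertical z-axis):

  x = r · cos(θ) · cos(φ)
  y = r · cos(θ) · sin(φ)
  z = r · sin(θ)

For example, under those assumptions a return at r = 2.0 m, θ = 30°, captured at heading φ = 90°, lands at roughly (0, 1.73, 1.0) m: directly to the side of the scanner and a meter above the scan center.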


ROS Pipeline

ROS made this project scalable and clean.

The system followed a pipeline like:

  • publish LiDAR scans (/scan)
  • publish orientation/heading (/imu or a custom compass topic)
  • transform frames using tf (rotation over time)
  • accumulate scans into a cloud (PointCloud2)
  • visualize in RViz

The most satisfying part was seeing the cloud build up in real time—scan after scan—until a human form emerged from dots.
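
A stripped-down accumulator node in the spirit of that pipeline might look like the sketch below. This is an illustration, not the original code: it assumes a vertical scan plane, a heading topic of type std_msgs/Float32 (in radians), and it does the rotation inline rather than through the tf tree, just to stay self-contained.

    import math
    import rospy
    from sensor_msgs.msg import LaserScan, PointCloud2
    from sensor_msgs import point_cloud2
    from std_msgs.msg import Float32, Header

    points = []     # accumulated (x, y, z) tuples
    heading = 0.0   # latest platform angle, in radians

    def on_heading(msg):
        global heading
        heading = msg.data

    def on_scan(scan):
        # Convert every valid return in this slice to 3D using the
        # heading that was current when the scan arrived.
        theta = scan.angle_min
        for r in scan.ranges:
            if scan.range_min < r < scan.range_max:
                x = r * math.cos(theta) * math.cos(heading)
                y = r * math.cos(theta) * math.sin(heading)
                z = r * math.sin(theta)
                points.append((x, y, z))
            theta += scan.angle_increment
        header = Header(stamp=rospy.Time.now(), frame_id="base_link")  # assumed fixed frame
        pub.publish(point_cloud2.create_cloud_xyz32(header, points))

    rospy.init_node("cloud_accumulator")
    pub = rospy.Publisher("cloud", PointCloud2, queue_size=1)
    rospy.Subscriber("scan", LaserScan, on_scan)
    rospy.Subscriber("heading", Float32, on_heading)
    rospy.spin()

In the real system the rotation lived in the tf tree (a rotating frame between the platform and the LiDAR), which is the cleaner ROS idiom; the inline trigonometry here just keeps the example readable on its own.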


The Moment It Worked

At first, the point cloud looked wrong.

  • warped
  • stretched
  • twisted
  • duplicated

And that’s expected.

Because 3D reconstruction is only as good as:

  • alignment
  • timing
  • calibration
  • mechanical stability

But after tuning the fusion, timing, and structure…

The point cloud became recognizable.

Not a perfect 3D mesh.
Not cinematic.

But real.

A human, captured through a rotating 2D sensor, visible from multiple angles in 3D.

That moment felt like a core robotics truth becoming visible:

You don’t always need expensive sensors.
You need the right geometry and the right transforms.


What This Project Taught Me

This project quietly taught me skills I still use:

  • How 3D perception can be built from simple components
  • Why coordinate frames and transforms matter
  • How mechanical stability affects algorithmic output
  • How sensor fusion creates new sensing capabilities
  • Why ROS is powerful for perception pipelines

And it reinforced something I believe strongly:

Robotics is creativity constrained by physics.


Closing Thought

Most people see a 2D LiDAR and think “flat world.”

I saw it and asked:

“What if I rotate it and let it learn depth?”

That curiosity turned a 2D sensor into a 3D scanning system—
and turned a question into a working point cloud.