Category: Robot

How I Built an Autonomous Car That Could Brake on Its Own (2015)

Building a Real Autonomous Car for the Driverless Car Challenge

In 2015, I worked on something that scared me a little.

Not because it was complicated.
Not because it was ambitious.

But because if it failed, it failed in the real world.

That year, I led a team to build a real autonomous car for a driverless car challenge—one that could detect obstacles ahead and apply the brakes automatically.

This wasn’t a simulation.
This wasn’t a toy car.

This was a vehicle that moved under its own decision-making.

🎥 Project video:
https://www.youtube.com/watch?v=QCA4uGT7D6w


Why an Autonomous Car?

By 2015, I had already built:

  • A drone that taught me stability and control
  • A vision-based robot that taught me perception and tracking

But something was missing.

Both systems were impressive—but forgiving.

A drone crash is dramatic but contained.
A robot losing track is annoying but recoverable.

A car is different.

A car introduces responsibility.

I wanted to work on a system where:

  • Decisions had real consequences
  • Latency mattered
  • Safety wasn’t optional
  • “Almost working” wasn’t good enough

That curiosity led me to the Driverless Car Challenge—and to taking on the role of Tech Lead for the project.


My Role: Tech Lead, Not Just a Builder

As Tech Lead, my responsibility wasn’t just to write code.

It was to:

  • Architect the entire autonomous pipeline
  • Make trade-offs between speed, accuracy, and safety
  • Ensure subsystems worked together reliably
  • Decide when the car should not act

This meant thinking beyond algorithms and into system behavior.


What the Car Had to Do

The core functionality sounds simple:

  • Detect objects in front of the car
  • Decide whether they pose a collision risk
  • Automatically apply brakes when necessary

But implementing this required coordinating multiple layers:

  • Sensor input (vision and/or distance sensing)
  • Real-time perception
  • Decision logic
  • Actuation (brake control)
  • Fail-safe behavior

Unlike earlier projects, this was a closed-loop safety system.
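
In code, the skeleton of that loop is small; the difficulty lives inside each layer. Here is a minimal sketch of the sense → decide → act cycle with a conservative fail-safe. All names, thresholds, and timings are illustrative stand-ins, not the original implementation:

```python
# A minimal sketch of a closed-loop safety cycle. Every name and
# number here is an illustrative stand-in, not the original code.
import time

LOOP_PERIOD_S = 0.05      # assumed control budget per cycle
BRAKE_DISTANCE_M = 2.0    # assumed safety threshold

def read_distance_m():
    """Stand-in for the sensor layer (vision and/or distance sensing)."""
    return 5.0

def apply_brakes():
    """Stand-in for the actuation layer."""
    print("brake")

def drive_forward():
    """Stand-in for normal motion."""
    print("drive")

def control_loop():
    while True:
        started = time.monotonic()
        try:
            distance = read_distance_m()
        except Exception:
            # Fail-safe behavior: a sensing fault is treated as an obstacle.
            apply_brakes()
            continue
        # Decision logic: brake whenever the reading crosses the threshold.
        if distance <= BRAKE_DISTANCE_M:
            apply_brakes()
        else:
            drive_forward()
        # Keep the cycle period roughly fixed so reaction latency stays bounded.
        time.sleep(max(0.0, LOOP_PERIOD_S - (time.monotonic() - started)))
```

The detail that matters most is the except branch: when the system cannot sense, it does not keep driving. It brakes.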


From Perception to Decision

Detection alone wasn’t enough.

The system had to answer harder questions:

  • How close is “too close”?
  • How fast are we approaching the object?
  • Is the object stationary or moving?
  • Is braking safer than continuing?

This was my first deep exposure to decision thresholds under uncertainty.

False positives caused unnecessary braking.
False negatives were unacceptable.

That tension shaped how I think about autonomous systems even today.
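
That bias can be written directly into the decision logic: when an estimate is uncertain, resolve the uncertainty toward braking. A minimal sketch, assuming a time-to-collision formulation (the function, its parameters, and the numbers are illustrative, not what we actually shipped):

```python
# Threshold logic biased against false negatives: the sensor's error
# margin is always spent in the pessimistic direction.

def should_brake(distance_m, closing_speed_mps,
                 distance_err_m=0.5, ttc_threshold_s=2.0):
    """Assume the obstacle is as close as the sensor error allows,
    so uncertainty pushes the decision toward braking."""
    if closing_speed_mps <= 0:
        return False  # the object is holding distance or receding
    worst_case_distance_m = max(0.0, distance_m - distance_err_m)
    time_to_collision_s = worst_case_distance_m / closing_speed_mps
    return time_to_collision_s < ttc_threshold_s

# An object 6 m ahead, closing at 4 m/s: worst case (6 - 0.5) / 4 ≈ 1.4 s,
# under the 2 s threshold, so the car brakes.
assert should_brake(6.0, 4.0)
```

The cost of that pessimism is occasional unnecessary braking; the alternative was a class of failure we were not willing to accept.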


The Hardest Part: Trusting the System

The most difficult moment wasn’t writing the code.

It was letting the car decide.

There’s a psychological leap when you stop overriding the system and allow it to act:

  • To detect
  • To decide
  • To brake

You don’t just test the algorithm.
You test your confidence in your own engineering judgment.


When It Worked

The first successful runs weren’t flashy.

The car moved.
An obstacle appeared.
The system detected it.
And the brakes engaged—smoothly, deliberately, without panic.

No sudden stops.
No human intervention.

Just a machine doing what it was designed to do.

That moment felt fundamentally different from every project before it.

This wasn’t automation.

This was responsible autonomy.


What This Project Taught Me

This project changed how I approach engineering problems.

It taught me that:

  • Perception is only useful when paired with decision logic
  • Control systems must serve safety, not performance alone
  • Real-world AI must fail conservatively
  • The best systems know when not to act

Most importantly, it taught me that autonomy is not intelligence—it’s accountability.


From Driverless Cars to Real-World AI

Years later, when I worked on:

  • Autonomous platforms
  • Computer vision systems in production
  • Safety-critical AI pipelines

I recognized the same principles.

Latency.
Edge cases.
Fail-safe defaults.
Human trust.

Those lessons didn’t come from papers.

They came from watching a car stop because my code told it to.


Looking Back

The drone taught me balance.
The robot taught me perception.
The car taught me responsibility.

Each project raised the stakes.

And by 2015, I wasn’t just building machines anymore.

I was building systems that had to be trusted.

When My Robot Started Paying Attention To Me (2015)

From Making Things Fly to Teaching Them How to See

In 2015, I watched a machine I had built do something new.

It didn’t just move.
It didn’t just balance.

It noticed a human—and followed them.

That moment marked a shift in my thinking.
Up until then, I had been teaching machines how to behave.
This time, I was teaching one how to perceive.


From Flight to Following

A year earlier, I had built my first drone. That project taught me control, stability, and respect for physics.

But something kept bothering me.

Flying was impressive—but reactive.
The system responded to inputs, not intent.

I wanted to build something that could:

  • Observe its environment
  • Make decisions based on visual input
  • Interact with a human without explicit commands

That curiosity became my college major project in 2015:

A person-following robot using image-based pattern recognition.

🎥 Project demo video:
https://www.youtube.com/watch?v=vECu79wfyEg


The Idea: Simple on Paper, Difficult in Reality

The goal sounded straightforward:

  • Detect a person visually
  • Track them in real time
  • Move the robot to follow while maintaining distance

But in practice, this meant solving multiple problems at once:

  • Real-time image processing
  • Pattern recognition under changing lighting
  • Frame-to-frame consistency
  • Motion control linked to perception
  • Noise, latency, and false positives

Unlike the drone, where physics dominated, this robot lived in a messy visual world.


Teaching a Machine to See

The robot didn’t “recognize a person” the way humans do.

It relied on:

  • Image-based patterns
  • Visual features and contrasts
  • Continuous frame analysis
  • Heuristics that worked most of the time

And “most of the time” turned out to be the real challenge.

Lighting changes broke detection.
Background clutter confused tracking.
Sudden movement caused loss of lock.

This was my first real exposure to a core truth of computer vision:

Vision systems don’t fail loudly. They fail ambiguously.
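
To make that concrete, here is a sketch of one classic heuristic from that era, assuming OpenCV: threshold a color marker worn by the person in HSV space and keep the largest blob. The color bounds, area cutoff, and function names are illustrative, not the project's actual pipeline:

```python
import cv2
import numpy as np

LOWER_HSV = np.array([100, 120, 70])   # assumed marker color range
UPPER_HSV = np.array([130, 255, 255])
MIN_AREA_PX = 800                      # reject small, noisy blobs

def detect_target(frame_bgr):
    """Return (center_x, bbox_area) of the largest matching blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    best = max(contours, key=cv2.contourArea)
    if cv2.contourArea(best) < MIN_AREA_PX:
        return None
    x, y, w, h = cv2.boundingRect(best)
    return (x + w / 2, w * h)
```

Notice how it fails: a lighting shift never raises an error. It just shrinks the mask until the blob slips under the area threshold and the target silently vanishes.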


When Perception Meets Control

The hardest part wasn’t detection alone.

It was closing the loop.

Every visual decision had physical consequences:

  • Move too fast → overshoot
  • Move too slow → lose the subject
  • Turn aggressively → lose frame stability

Perception and motion had to agree.

This was where my earlier experience with control systems paid off. I wasn’t just tuning motors—I was tuning behavior.

The robot had to feel intentional, not mechanical.
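
Written down, the coupling is almost embarrassingly simple: the detection error becomes the motor command. A minimal sketch, feeding on the output of a detector like the one above, with two proportional terms (the gains, frame size, and distance setpoint are illustrative assumptions):

```python
FRAME_WIDTH_PX = 640
TARGET_AREA_PX = 6000    # blob size that corresponds to the desired distance
K_TURN = 0.004           # too high → the camera loses frame stability
K_SPEED = 0.0001         # too high → overshoot; too low → lose the subject

def follow_command(center_x, bbox_area):
    """Map a detection to (speed, turn) motor commands."""
    turn = K_TURN * (center_x - FRAME_WIDTH_PX / 2)
    speed = K_SPEED * (TARGET_AREA_PX - bbox_area)  # bigger blob = closer = slow down
    return speed, turn
```

Every hour of tuning lived in those two gains. The structure is trivial; the behavior is not.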


What Didn’t Work (At First)

A lot didn’t.

  • Detection flickered under poor lighting
  • Background patterns caused false tracking
  • Latency created delayed reactions
  • Minor calibration errors caused drifting behavior

I stopped trying to make it “perfect” and focused on making it robust: reliable despite imperfect inputs.

That decision mattered.


The Moment It Felt Alive

Then came the moment that changed everything.

A person walked in front of the robot.
The robot locked on.
It adjusted its position.
And it followed.

Not aggressively.
Not blindly.

Deliberately.

Watching it move based on what it saw—not what I told it—felt fundamentally different from anything I had built before.

This wasn’t automation anymore.
This was early autonomy.


What This Project Gave Me

This robot taught me things that stayed with me far longer than the project itself:

  • Perception is fragile but powerful
  • Intelligence emerges from tight perception–action loops
  • Vision without control is useless
  • Control without perception is blind

Most importantly, it taught me that real-world AI is never clean—and that’s okay.


From Person-Following Robots to Applied AI

Years later, when I worked on:

  • Computer vision systems
  • Real-time perception pipelines
  • AI models deployed outside the lab

I recognized the same patterns.

The same compromises.
The same trade-offs.
The same need to make systems work despite imperfect inputs.

That understanding started in 2015—with a robot trying to follow a human.


Closing Reflection

The drone taught me how systems stay stable.
This robot taught me how systems understand.

One learned how to fly.
The other learned how to see.

Together, they quietly defined the direction my work would take.
