How I Built an Autonomous Car That Could Brake on Its Own (2015)

Building a Real Autonomous Car for the Driverless Car Challenge
In 2015, I worked on something that scared me a little.
Not because it was complicated.
Not because it was ambitious.
But because if it failed, it failed in the real world.
That year, I led a team to build a real autonomous car for a driverless car challenge—one that could detect obstacles in front and automatically apply the brakes.
This wasn’t a simulation.
This wasn’t a toy car.
This was a vehicle that moved under its own decision-making.
🎥 Project video:
https://www.youtube.com/watch?v=QCA4uGT7D6w
Why an Autonomous Car?
By 2015, I had already built:
- A drone that taught me stability and control
- A vision-based robot that taught me perception and tracking
But something was missing.
Both systems were impressive—but forgiving.
A drone crash is dramatic but contained.
A robot losing track is annoying but recoverable.
A car is different.
A car introduces responsibility.
I wanted to work on a system where:
- Decisions had real consequences
- Latency mattered
- Safety wasn’t optional
- “Almost working” wasn’t good enough
That curiosity led me to the Driverless Car Challenge—and to taking on the role of Tech Lead for the project.
My Role: Tech Lead, Not Just a Builder
As Tech Lead, my responsibility wasn’t just to write code.
It was to:
- Architect the entire autonomous pipeline
- Make trade-offs between speed, accuracy, and safety
- Ensure subsystems worked together reliably
- Decide when the car should not act
This meant thinking beyond algorithms and into system behavior.
What the Car Had to Do
The core functionality sounds simple:
- Detect objects in front of the car
- Decide whether they pose a collision risk
- Automatically apply brakes when necessary
But implementing this required coordinating multiple layers:
- Sensor input (vision and/or distance sensing)
- Real-time perception
- Decision logic
- Actuation (brake control)
- Fail-safe behavior
Unlike earlier projects, this was a closed-loop safety system: its own decisions fed directly back into the physical state it was sensing.
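The layers above can be sketched as one loop iteration. This is an illustrative sketch, not the project's actual code: the `Reading` type, the thresholds, and the `decide` function are all invented for the example.

```python
from dataclasses import dataclass

# Illustrative thresholds -- the project's real values aren't in this write-up.
BRAKE_DISTANCE_M = 5.0   # an obstacle closer than this triggers braking
SENSOR_TIMEOUT_S = 0.2   # data older than this is treated as a sensor fault

@dataclass
class Reading:
    distance_m: float   # perceived distance to the nearest obstacle
    timestamp: float    # when the measurement was taken (seconds)

def decide(reading, now):
    """One pass of the pipeline: sensor input -> decision -> action.

    Returns the action the actuation layer should carry out.
    """
    if reading is None or now - reading.timestamp > SENSOR_TIMEOUT_S:
        return "BRAKE"   # fail-safe: no fresh data means stop, not coast
    if reading.distance_m < BRAKE_DISTANCE_M:
        return "BRAKE"   # collision risk detected
    return "CRUISE"      # no intervention needed
```

Note the ordering: the fail-safe check comes first, so a dead or stale sensor can never be mistaken for a clear road.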
From Perception to Decision
Detection alone wasn’t enough.
The system had to answer harder questions:
- How close is “too close”?
- How fast are we approaching the object?
- Is the object stationary or moving?
- Is braking safer than continuing?
This was my first deep exposure to decision thresholds under uncertainty.
False positives caused unnecessary braking.
False negatives were unacceptable.
That tension shaped how I think about autonomous systems even today.
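One common way to frame those questions is time-to-collision: distance divided by closing speed. The sketch below is my illustration of that idea, not the project's actual logic, and the threshold is invented for the example.

```python
# Illustrative time-to-collision (TTC) check.
TTC_BRAKE_S = 1.5   # brake if predicted impact is under 1.5 seconds away

def should_brake(distance_m, closing_speed_mps):
    """Brake when time-to-collision drops below the threshold.

    closing_speed_mps > 0 means we are approaching the object;
    an object holding distance or moving away (<= 0) is not a
    collision risk under this model.
    """
    if closing_speed_mps <= 0:
        return False
    ttc = distance_m / closing_speed_mps
    return ttc < TTC_BRAKE_S
```

The threshold is exactly where the false-positive/false-negative tension lives: raise `TTC_BRAKE_S` and the car brakes for things it would have cleared; lower it and you shrink the margin against the one miss you cannot afford.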
The Hardest Part: Trusting the System
The most difficult moment wasn’t writing the code.
It was letting the car decide.
There’s a psychological leap when you stop overriding the system and allow it to act:
- To detect
- To decide
- To brake
You don’t just test the algorithm.
You test your confidence in your own engineering judgment.
When It Worked
The first successful runs weren’t flashy.
The car moved.
An obstacle appeared.
The system detected it.
And the brakes engaged—smoothly, deliberately, without panic.
No sudden stops.
No human intervention.
Just a machine doing what it was designed to do.
That moment felt fundamentally different from every project before it.
This wasn’t automation.
This was responsible autonomy.
What This Project Taught Me
This project changed how I approach engineering problems.
It taught me that:
- Perception is only useful when paired with decision logic
- Control systems must serve safety, not performance alone
- Real-world AI must fail conservatively
- The best systems know when not to act
Most importantly, it taught me that autonomy is not intelligence—it’s accountability.
From Driverless Cars to Real-World AI
Years later, when I worked on:
- Autonomous platforms
- Computer vision systems in production
- Safety-critical AI pipelines
I recognized the same principles.
Latency.
Edge cases.
Fail-safe defaults.
Human trust.
Those lessons didn’t come from papers.
They came from watching a car stop because my code told it to.
Looking Back
The drone taught me balance.
The robot taught me perception.
The car taught me responsibility.
Each project raised the stakes.
And by 2015, I wasn’t just building machines anymore.
I was building systems that had to be trusted.