Face Recognition Using Deep Learning on Embedded Systems (2017)

When a Machine Learned to Recognize Faces

There’s a difference between seeing a face and recognizing one.

Seeing is geometry.
Recognition is identity.

In this project, I crossed that line.

I wasn’t just detecting a human anymore.
I was teaching a machine to answer a harder question:

“Have I seen this person before?”

That question sounds simple. It isn’t.


Why Face Recognition?

By the time I started this work, computer vision had already proven it could:

  • Detect objects
  • Track motion
  • Follow humans

But identity was different.

Face recognition wasn’t about bounding boxes or pixels—it was about trust, accuracy, and constraints.

And the biggest constraint?

This system had to run on an embedded computer.

No GPUs.
No cloud.
No shortcuts.

Just limited memory, limited compute, and real-time expectations.


The Real Challenge: Deep Learning on Embedded Hardware

Training a deep learning model on a workstation is easy.

Running it on an embedded processor is not.

I quickly learned that most face recognition pipelines were:

  • Too heavy
  • Too slow
  • Too dependent on high-end hardware

So the problem became:

How do you make a neural network small enough to live on an embedded system—without making it useless?

That forced real engineering decisions.


Designing for Constraints, Not Comfort

To make it work, I had to rethink everything:

Model selection

  • Chose lightweight CNN architectures
  • Avoided deep, memory-heavy networks
  • Balanced accuracy against inference time

Pipeline optimization

  • Face detection → alignment → feature extraction
  • Reduced resolution intelligently
  • Minimized preprocessing overhead
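The pipeline above can be sketched end to end. This is a minimal, dependency-free illustration, not the original implementation: the detector is a fixed center crop standing in for the real lightweight CNN detector, the resize is nearest-neighbour, and the "embedder" is a fixed random projection standing in for the trained network. The `96` input size and `128`-dimensional embedding are assumptions for the sketch.

```python
import numpy as np

def detect_face(frame):
    # Placeholder detector: the real pipeline used a lightweight model;
    # here we just return a fixed center crop as a bounding box (x, y, w, h).
    h, w = frame.shape[:2]
    s = min(h, w) // 2
    return ((w - s) // 2, (h - s) // 2, s, s)

def align_and_resize(frame, box, size=96):
    # Crop the detected box and downscale to a small, fixed input size --
    # reducing resolution early keeps inference cheap on embedded CPUs.
    x, y, bw, bh = box
    crop = frame[y:y + bh, x:x + bw]
    ys = (np.arange(size) * bh / size).astype(int)
    xs = (np.arange(size) * bw / size).astype(int)
    return crop[ys][:, xs]  # nearest-neighbour resize, no extra dependencies

def extract_embedding(face, dim=128):
    # Stand-in for the CNN: a fixed random projection of the pixels.
    rng = np.random.default_rng(0)           # fixed seed -> consistent embeddings
    proj = rng.standard_normal((dim, face.size))
    v = proj @ (face.astype(np.float32).ravel() / 255.0)
    return v / np.linalg.norm(v)             # L2-normalise for cosine matching

# Detection -> alignment -> feature extraction on a synthetic frame.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[60:180, 100:220] = 200                 # synthetic "face" region
box = detect_face(frame)
face = align_and_resize(frame, box, size=96)
emb = extract_embedding(face)
print(face.shape, emb.shape)                 # (96, 96) (128,)
```

Each stage feeds the next a smaller, cheaper representation, which is the whole point on constrained hardware.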

Deployment reality

  • Limited RAM
  • Limited CPU
  • Strict real-time performance

Every millisecond mattered.
Every extra layer had a cost.
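The millisecond accounting can be made concrete. A minimal sketch, assuming a 15 FPS real-time target (illustrative, not the original system's figure): measure average per-frame latency and check it against the budget.

```python
import time

def frame_budget_ok(process_frame, frames, fps_target=15):
    # Real-time on an embedded CPU means every stage must fit inside the
    # per-frame budget (~66 ms at 15 FPS). Adding a layer to the network
    # raises per-frame latency and eats directly into this budget.
    budget_ms = 1000.0 / fps_target
    start = time.perf_counter()
    for f in frames:
        process_frame(f)
    avg_ms = (time.perf_counter() - start) * 1000.0 / len(frames)
    return avg_ms <= budget_ms, avg_ms

# Trivial stand-in workload; a real pipeline would run detect/align/embed here.
ok, avg = frame_budget_ok(lambda f: sum(f), [[1, 2, 3]] * 100)
print(ok)
```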

This wasn’t research anymore.
This was deployment engineering.


Making Recognition Reliable

Accuracy wasn’t optional.

False positives are annoying in demos—but dangerous in real systems.

So I focused on:

  • Consistent face embeddings
  • Stable matching thresholds
  • Robust performance across lighting variations
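The matching logic behind those three bullets can be sketched as follows. This is an illustrative sketch, not the original code: embeddings are L2-normalised so their dot product is the cosine similarity, and the probe is accepted only if its best gallery match clears a threshold (the `0.6` value and the toy 3-dimensional embeddings are assumptions). Below the threshold, the system reports "unknown" rather than risk a false positive.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def match(embedding, gallery, threshold=0.6):
    # Compare the probe against every enrolled identity; keep the best score.
    best_name, best_score = None, -1.0
    for name, ref in gallery.items():
        score = float(np.dot(embedding, ref))  # cosine, since both are unit-norm
        if score > best_score:
            best_name, best_score = name, score
    # A stable threshold decides accept vs. reject: unknown faces fall below it.
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy gallery of enrolled identities (hypothetical names and vectors).
gallery = {
    "alice": normalize(np.array([1.0, 0.2, 0.1])),
    "bob":   normalize(np.array([0.1, 1.0, 0.3])),
}
probe = normalize(np.array([0.9, 0.25, 0.1]))
name, score = match(probe, gallery)
print(name)  # probe points almost the same way as alice's embedding
```

The threshold is the lever: raise it and false positives drop while false rejections rise, which is exactly the tradeoff that had to stay stable across lighting variations.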

The goal wasn’t to recognize everyone.

The goal was to recognize the right person, reliably.

🎥 Project demo:
https://www.youtube.com/watch?v=JBiHg4bifRU


What This Project Really Taught Me

This project changed how I think about AI systems.

It taught me that:

  • Model size matters as much as model accuracy
  • Real-world AI lives under constraints, not assumptions
  • Deployment is where most AI systems fail
  • Security is as much about reliability as intelligence

Most importantly, it taught me that AI isn’t impressive unless it works where it’s needed.


From Face Recognition to Visual Security

Looking back, this project quietly shaped a lot of what came later.

It gave me hands-on understanding of:

  • Visual security systems
  • Identity-based access control
  • Embedded AI deployment
  • Real-world performance tradeoffs

The same principles now apply across:

  • Smart surveillance
  • Access control
  • Robotics
  • Automotive systems
  • Edge AI devices

The hardware changed.
The lesson stayed.


Closing Thought

Teaching a machine to recognize a face isn’t about intelligence.

It’s about discipline:

  • Knowing what to remove
  • Knowing what to keep
  • Knowing when “good enough” is actually correct

That’s where engineering stops being theoretical—and starts being real.