Embedded Linux Board Farms 101: The Requirements That Actually Matter
You push a new kernel. The board reboots. SSH never comes back. If that scenario makes you nervous, this post is for you.
The default embedded Linux workflow feels like a tiny desktop: plug in HDMI, attach a keyboard, and start hacking. That works fine when you're next to the board — but it falls apart the moment your devices under test (DUTs) live in a rack or a remote lab. KVM switches extend that model, but they add cost and complexity without solving the real problem: remote recovery.
If you're an embedded developer who's outgrown the "plug in HDMI" workflow — or you're building a shared lab, adding CI, or just tired of walking over to power-cycle a hung board — this post lays out the requirements that actually matter.
In this post, I'll cover why the KVM-style approach fails for remote development, and what we can do about it: serial consoles, remote power control, reimaging, and logging. I'll discuss what I use at home, and what changes when you scale this up. It's a teaser for my lightning talk at the Embedded Online Conference in May.
Two Ways to Think About an Embedded Linux DUT
You can think about a DUT in two ways:
| Aspect | KVM mindset | Remote mindset |
|---|---|---|
| Primary interface | HDMI + keyboard/mouse | Serial console + SSH (when available) |
| Where you stand | Next to the DUT | Anywhere with network access |
| Recovery | Manual power cycle | Remote power + boot control |
| Debugging | Look at the screen, type commands | Capture bootloader/kernel logs, inspect later |
| Scaling | 1 DUT, 1 user | Many DUTs, many users/CI jobs |
WARNING: SSH Is Not Enough
Here's a common failure pattern: you deploy a new kernel, the DUT reboots, and SSH never comes back. If your only plan is SSH, you're out of luck: one bad kernel and you've lost your only access to the DUT. What you need in that moment are answers to questions like:
- Did the bootloader run?
- Did the kernel start?
- Did it panic?
- Did it drop into an initramfs shell?
- Did systemd hit emergency mode?
With a properly configured serial console, all of that information is available. That's why your Embedded Linux setup should start by wiring each DUT's UART to the network using a small Linux box, a USB serial bank, or a console server.
SSH is great for application-level debugging. But if you're changing bootloaders, kernels, initramfs, or networking, you need a console that survives those changes.
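One way to put a DUT's UART on the network is ser2net on the management host, with one TCP port per console. A minimal sketch, assuming ser2net v3 syntax; the port numbers, device paths, and baud rate are assumptions you'd match to your own wiring:

```
# /etc/ser2net.conf sketch: one TCP port per DUT console.
# Device paths and ports are assumptions; adjust to your wiring.
3001:telnet:0:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT LOCAL
3002:telnet:0:/dev/ttyUSB1:115200 8DATABITS NONE 1STOPBIT LOCAL
```

With that in place, `telnet mgmt-host 3001` reaches the first board's bootloader and kernel output even when SSH to the DUT is dead.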
Remote Power and Real Recovery
Now imagine you can see the failure on the serial console: maybe the kernel panics, or the bootloader can't mount the root filesystem. The next requirement is simple: you need to be able to power-cycle the DUT (and sometimes change its boot mode) without touching it.
For many DUTs, you also care about how they boot — specifically, forcing them into recovery or USB boot mode when the storage is broken. Locally, you might flip DIP switches or hold buttons. Remotely, you can select boot mode using GPIO pins or relays via a board-farm controller. Alternatively, you can sometimes use a bootloader that has a predictable recovery path you can trigger remotely.
In my home lab, I do this with low-cost hardware (smart plugs/relays + a console host). What matters is that I can recover and reimage a DUT from anywhere, even from the show floor at Embedded World, not just when I'm standing next to it.
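For Tasmota-flashed smart plugs, power control is just an HTTP GET against the device's `cm` endpoint. A minimal sketch; the `dut-01-plug` hostname and the 5-second off time are assumptions:

```shell
# Build the Tasmota HTTP API URL for a command (hostname is an assumption).
tasmota_url() {
    printf 'http://%s/cm?cmnd=%s' "$1" "$2"
}

# Hard power cycle: off, let the supply drain, back on.
power_cycle() {
    curl -fsS "$(tasmota_url "$1" 'Power%20Off')" > /dev/null
    sleep 5
    curl -fsS "$(tasmota_url "$1" 'Power%20On')" > /dev/null
}
```

Run `power_cycle dut-01-plug`, then watch the serial console for the bootloader banner to confirm the DUT actually came back.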
Interactive Debugging vs. Reimaging and Logs
When a DUT is remote (or tests run unattended), "fix it live" stops scaling. The workflow shifts toward "produce a known-good image, deploy it, and inspect logs and artifacts when something fails." This requires the following:
- You can remotely install a full image (boot + rootfs) with minimal steps.
- You can remotely view boot logs and verify the system came up cleanly.
- You have a system for generating your images. This could be Ansible, Yocto, or many other tools.
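The reimage step can be as small as streaming a compressed image onto the DUT's storage once it's in USB/recovery mode. A minimal sketch; the device path, image name, and `zstd` compression are all assumptions:

```shell
# reimage: write a baseline image to the DUT's exported block device.
# Refuses to run if the device isn't present (e.g. DUT not in recovery mode).
reimage() {
    image="$1"; dev="$2"
    if [ ! -b "$dev" ]; then
        echo "reimage: no block device at $dev (is the DUT in recovery mode?)" >&2
        return 1
    fi
    zstdcat "$image" | dd of="$dev" bs=4M conv=fsync
}
```

The guard matters for unattended use: a missing block device usually means the boot-mode step failed, and you want a clear error rather than `dd` writing to the wrong place.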
Remote Means "Nobody On-Site Needs to Know Linux"
One more advantage of the remote mindset becomes clear once you deploy DUTs somewhere you can't walk to every day. The KVM approach implicitly assumes that if something breaks, someone with Linux skills will be on-site to fix it. The remote approach drives a simple requirement instead: everything described above (console, power, recovery, reimaging) must work without anyone on-site who knows Linux.
In my case, I built my home lab to behave as if the DUTs were remote, even though they're physically nearby. I can move them to a server closet, or work off-site entirely, and my wife doesn't need to power-cycle my DUTs for me.
Automation and Smart-Home Integration (Optional Layer)
Once the primitives are in place, the interesting part begins. The same smart devices you use to control your thermostats and lights can become first-class development tools: Home Assistant dashboards that show you which boards are online, Tasmota smart switches that power-cycle a hung DUT from your phone, and environmental sensors that flag when your test rack is running hot. This is the layer that turns a remote lab into an intelligent one.
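As one sketch of that layer, a Home Assistant automation can alert you when a DUT stops answering pings; the entity and notifier names below are assumptions, and the ping `binary_sensor` is something you'd configure separately:

```yaml
# automations.yaml sketch: alert when a DUT has been unreachable for 2 minutes.
- alias: "DUT board-01 offline"
  trigger:
    - platform: state
      entity_id: binary_sensor.board_01_ping   # a ping binary_sensor you define
      to: "off"
      for: "00:02:00"
  action:
    - service: notify.mobile_app_phone         # your mobile app notifier
      data:
        message: "board-01 stopped responding; power-cycle it from the dashboard?"
```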
Maker-Grade at Home, Enterprise Gear at Scale
None of this requires expensive console servers or data center PDUs. My home lab is built on a small budget:
- A Raspberry Pi 5 as the management host
- Serial adapters wired to each DUT's UART
- Smart plugs or relay boards for power control
- Open-source tools and some glue scripts
That's enough to escape the HDMI/keyboard trap and get most of the behavior you want from a proper board farm. At larger scale, with dozens of DUTs and many users or CI jobs, the make-vs-buy decision looks different, and it may be worth paying for:
- Console servers instead of dangling USB dongles
- Managed PDUs instead of hobby relay boards
- Commercial lab management software
Want to See How It Looks End-to-End?
This post focused on the practical requirements of Embedded Linux board farms, and the trade-offs involved. In my lightning talk at the Embedded Online Conference in May, I'll walk through a concrete implementation of everything described here: serial consoles, remote power cycling via Tasmota smart switches, Home Assistant for monitoring and automation, and Proxmox for managing build environments — all on a hobbyist budget. I'll also cover what I'd do differently as the setup scales.
If your mental picture of an Embedded Linux DUT is "a tiny desktop with HDMI and a keyboard," but you're starting to think about remote access, CI, or shared labs, this talk will help you get started with that transition.
Sidebar: Board Farm "Pre-Flight" List (KVM → Remote)
Console
- Out-of-band (OOB) console per DUT (UART) reachable remotely
- Bootloader and kernel output are accessible and not dependent on SSH
- Deterministic port mapping (DUT ID → console port)
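One way to get that deterministic mapping on a Linux management host is a udev rule keyed on each adapter's serial number, so a given DUT's console survives replugging and reboots. The vendor/product IDs and serial string below are assumptions (shown for a typical FTDI adapter):

```
# /etc/udev/rules.d/99-dut-console.rules sketch:
# pin each USB-serial adapter to a stable symlink, e.g. /dev/dut/board-01.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="FT0A1B2C", SYMLINK+="dut/board-01"
```

`udevadm info -a -n /dev/ttyUSB0` shows the attributes available for matching on your own adapters.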
Power / Recovery
- Remote hard power cycle per DUT (API/CLI)
- Remote recovery/boot mode control or a guaranteed fallback path
- One-command "unwedge" sequence (power → recovery → reimage)
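The one-command unwedge sequence can be sketched as a thin wrapper over whatever power, boot-mode, and flashing tools you already have. Every `dut_*` helper below is a hypothetical stub that just echoes its step; you'd replace the bodies with calls to your smart-plug API, GPIO/relay control, and imaging tool:

```shell
# Stubs standing in for real tooling (smart plug API, GPIO/relay, imager).
dut_power()    { echo "power $1 $2"; }
dut_bootmode() { echo "bootmode $1 $2"; }
dut_flash()    { echo "flash $1 $2"; }

# unwedge: force a wedged DUT back to a known-good baseline.
unwedge() {
    dut="$1"; image="$2"
    dut_power    "$dut" off
    dut_bootmode "$dut" recovery   # remote equivalent of flipping a DIP switch
    dut_power    "$dut" on
    dut_flash    "$dut" "$image"
    dut_bootmode "$dut" normal
    dut_power    "$dut" cycle      # boot the fresh image
}
```

The point is the shape, not the stubs: one command, runnable from anywhere, with no step that needs hands on the hardware.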
Provisioning / State
- Scripted reimage to baseline (boot artifacts + rootfs)
- Idempotent provisioning (safe to rerun after partial failure)
- Linux system image-as-code
Remote Reality
- Zero physical interaction required (no HDMI, no keyboard, no SD swapping)
- Optional in-band access (SSH/agent) is a bonus, not the foundation