Quick Start
Define and deploy your first configuration.
Concepts
Gain an understanding of Miru’s core concepts.
Introduction
As you scale your fleet of robots, configuration complexity compounds. Each robot's config quickly diverges from the others: new hardware versions introduce new parameters, customer environments require site-specific tuning, and feature flags evolve independently of application code. What starts as a handful of shared config files becomes dozens (or hundreds) of subtly different configs across the fleet.

The task of managing configs also spreads across the team. Software engineers ship software releases to the fleet that require config upgrades. Support and field engineers make one-off changes to keep robots running at customer sites. These users have varying levels of technical ability, so they need different tools and abstractions. Miru exists to replace this complexity by providing teams with tools to manage, version, and deploy configs for their robot fleets.

What makes config management so difficult?
Tedious and error-prone workflows
In early deployments, configuration is managed manually. Engineers SSH into robots, edit text files directly, copy configurations between devices, and eyeball diffs to understand what changed. This is problematic for several reasons:
- Typos and other simple errors lead to unnecessary downtime and slow fixes
- SSH requires an open inbound port, which significantly increases the surface area for potential security breaches
- Teams must hire software-savvy Deployment Engineers, despite a preferred background in Mechanical or Electrical Engineering
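Eyeballing diffs is especially error-prone because drifted configs often differ by a single subtle value. A minimal sketch of what that manual comparison actually involves, using Python's standard `difflib` and two hypothetical robot configs (the file names and parameters are illustrative, not Miru's):

```python
import difflib

# Two hypothetical robot configs that have drifted apart.
# In practice these would be files edited by hand over SSH.
config_a = """\
max_speed: 1.5
lidar_range: 30
planner: rrt_star
site_offset: 0.00
"""

config_b = """\
max_speed: 1.5
lidar_range: 30
planner: rrt_star
site_offset: 0.02
"""

# A unified diff pinpoints the one changed parameter that an
# engineer scanning both files by eye could easily miss.
diff = difflib.unified_diff(
    config_a.splitlines(keepends=True),
    config_b.splitlines(keepends=True),
    fromfile="robot-a/config.yaml",
    tofile="robot-b/config.yaml",
)
print("".join(diff))
```

Multiply this by dozens of robots and parameters, and manual comparison stops scaling.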
Difficult versioning
Unlike application code, configs are versioned along multiple dimensions: across software releases and across devices. Furthermore, thanks to device-specific settings and one-off edits, configs drift independently of software releases. Over time, there is no longer a clear answer to basic questions like:
- What configuration was this robot running last week?
- Which changes were applied fleet-wide versus locally?
- What state do we return to when something breaks?
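Answering these questions requires a recorded history of which config each device was running and when. A minimal sketch of such a lookup, with hypothetical robot names, dates, and version labels (not Miru's actual data model):

```python
from datetime import date

# Hypothetical per-robot config history: each entry records which
# config version took effect on a given date, and whether the change
# was fleet-wide or a local one-off.
HISTORY = {
    "robot-7": [
        {"since": date(2024, 5, 1), "version": "v12", "scope": "fleet"},
        {"since": date(2024, 5, 20), "version": "v12+site-tune", "scope": "local"},
    ],
}

def config_on(robot: str, day: date) -> str:
    """Return the config version a robot was running on a given day."""
    entries = [e for e in HISTORY[robot] if e["since"] <= day]
    if not entries:
        raise LookupError(f"no config recorded for {robot} before {day}")
    # The most recent entry at or before `day` is the active one.
    return max(entries, key=lambda e: e["since"])["version"]

print(config_on("robot-7", date(2024, 5, 15)))  # v12
print(config_on("robot-7", date(2024, 6, 1)))   # v12+site-tune
```

Without a record like this, "what was this robot running last week?" and "what state do we roll back to?" have no reliable answer.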
Lack of auditability
The history of config deployments is scattered across developer terminals, Slack messages, and the heads of field engineers. As the fleet grows, teams lose track of:
- Who made a change?
- When was it made?
- Why was it made?
- What exactly changed?
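An audit trail answers all four questions by attaching them to every change as structured metadata. A minimal sketch of such a record, using a Python dataclass with hypothetical field names and values (not Miru's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical audit entry capturing the four questions above:
# who made a change, when, why, and what exactly changed.
@dataclass(frozen=True)
class ConfigChange:
    who: str
    when: datetime
    why: str
    diff: dict  # parameter -> (old value, new value)

change = ConfigChange(
    who="field-engineer@example.com",
    when=datetime(2024, 6, 3, 14, 30, tzinfo=timezone.utc),
    why="customer site recalibration",
    diff={"site_offset": (0.00, 0.02)},
)
print(change.who, change.diff)
```

Once changes are recorded this way instead of living in terminals and Slack threads, the deployment history survives team turnover and scales with the fleet.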

