In the early 2010s, the world of configuration management was dominated by tools that treated infrastructure as something to be described declaratively and converged upon slowly. Puppet required you to learn a custom DSL and accept that changes would propagate on their own schedule. Chef demanded that sysadmins become Ruby programmers. Both tools were powerful, but neither could answer a simple question in real time: what is actually happening on my ten thousand servers right now? Thomas Hatch built SaltStack to answer that question — and in doing so, created a configuration management and remote execution framework that could communicate with tens of thousands of machines in seconds, not minutes. Salt did not just manage infrastructure; it gave engineers a live, responsive nervous system for their entire data center.
Early Life and the Road to Systems Engineering
Thomas Hatch grew up in the western United States with an early fascination for computers and how systems work at a fundamental level. Unlike many tech pioneers who came through elite computer science programs, Hatch’s path was more self-directed. He developed deep expertise in Linux systems administration and Python programming through hands-on work rather than academic theory. This practical orientation would later shape SaltStack’s design philosophy — the tool was built by someone who had actually spent years managing servers and understood the daily frustrations of operations teams.
Before creating Salt, Hatch worked in systems administration and infrastructure roles where he encountered the pain points that every operations engineer knew well: deploying changes across hundreds of machines was slow, gathering real-time information about system state was nearly impossible without SSH-ing into each box individually, and existing tools forced a choice between speed and correctness. Configuration management tools like Puppet excelled at defining desired state but were not designed for ad-hoc command execution. Orchestration was an afterthought. Hatch wanted a single tool that could do both — manage configuration and execute commands across an entire fleet, instantly.
The Birth of SaltStack
Why Existing Tools Fell Short
To understand why Salt mattered, you need to understand what infrastructure management looked like around 2011. The dominant tools were Puppet (created by Luke Kanies in 2005) and Chef (created by Adam Jacob and released in 2009 by Opscode, the company Jacob co-founded with Jesse Robbins). Both were built around the idea of convergent configuration: you declare what the system should look like, and the tool makes it so. This model worked well for ensuring consistency, but it had significant limitations.
First, both Puppet and Chef used an agent-pull model. Agents on managed nodes would periodically check in with a central server (every 30 minutes by default in Puppet’s case), download their configuration catalog, and apply changes. This meant that when you pushed a critical security patch, you might wait up to half an hour for all nodes to pick it up. In a world moving toward continuous deployment and rapid incident response, this latency was unacceptable.
Second, neither tool was designed for remote execution — running arbitrary commands across your fleet in real time. If you needed to check disk usage on all your web servers, or restart a service across a cluster, you reached for a separate tool like Fabric, Capistrano, or plain SSH loops. This meant operations teams juggled multiple tools for tasks that were fundamentally related.
Third, Puppet used its own declarative DSL (Puppet Language), and Chef required writing Ruby code (recipes and cookbooks). Both had steep learning curves that created friction for adoption, particularly among sysadmins who were comfortable with shell scripts and Python but not necessarily with Ruby or custom DSLs.
The Design of Salt
Hatch released the first public version of Salt on GitHub in March 2011. The project was written entirely in Python, which was already the lingua franca of the systems administration world. But what truly set Salt apart was its communication layer: ZeroMQ.
While other tools relied on HTTP or custom protocols for server-agent communication, Salt used ZeroMQ (later adding a pure-TCP transport as an alternative) — a high-performance asynchronous messaging library. This gave Salt a persistent, bidirectional communication channel between the master and all minions (Salt’s term for managed nodes). The result was remarkable speed: a Salt master could send a command to thousands of minions and receive responses in seconds. This was not an incremental improvement over existing tools; it was a fundamentally different experience.
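The fan-out pattern behind this speed can be illustrated with a toy simulation — plain Python threads and queues standing in for ZeroMQ sockets. This is a sketch of the publish/return model only, not Salt's actual transport:

```python
import queue
import threading

def minion(name, job_q, return_q):
    """Toy minion: waits for a published job, runs it, reports back."""
    func, arg = job_q.get()          # receive the broadcast job
    return_q.put((name, func(arg)))  # send the result to the master

# One shared return channel, one job queue per minion (the "broadcast").
return_q = queue.Queue()
minions = {}
for i in range(200):
    q = queue.Queue()
    minions[f"minion-{i}"] = q
    threading.Thread(target=minion, args=(f"minion-{i}", q, return_q),
                     daemon=True).start()

# Master publishes one job to every minion at once...
for q in minions.values():
    q.put((len, "hello"))

# ...then collects the returns as they arrive, in whatever order they finish.
results = dict(return_q.get() for _ in range(len(minions)))
print(len(results), results["minion-0"])  # 200 5
```

The key property is that publishing is a single fan-out step and the master never waits on any one node before hearing from the others — the same shape that lets a Salt master hear back from thousands of minions nearly simultaneously.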
Salt’s architecture was built around several key components. The Salt Master served as the central control point, maintaining persistent connections with all minions. Salt Minions were lightweight agents running on managed nodes. The targeting system allowed operators to select machines by hostname, regular expression, IP range, operating system, custom metadata (called grains), or any combination thereof. And the execution module system provided hundreds of built-in functions for common operations tasks.
A typical Salt command looked deceptively simple but was extraordinarily powerful:
# Check disk usage on all web servers running Ubuntu
salt -C 'G@os:Ubuntu and E@web-server-.*' disk.usage

# Install nginx on all nodes in the "loadbalancer" group
salt -N loadbalancer pkg.install nginx

# Run a state file to enforce full configuration on the database tier
salt 'db-*' state.apply mysql.server

# Gather real-time system info from every machine in the fleet
salt '*' grains.items --static --out=json | python3 -c "
import sys, json
data = json.load(sys.stdin)
for host, info in data.items():
    print(f\"{host}: {info['os']} {info['osrelease']}, {info['num_cpus']} CPUs, {info['mem_total']}MB RAM\")
"
This combination of remote execution and configuration management in a single tool was Salt’s defining innovation. You could use the same infrastructure, the same targeting language, and the same Python-based module system to both enforce long-term configuration state and respond to immediate operational needs.
Technical Contributions and Architecture
The State System and YAML-Based Configuration
Salt’s configuration management layer, called the State system, used YAML files with Jinja2 templating — technologies that most sysadmins already knew. This was a deliberate contrast to Puppet’s custom DSL and Chef’s Ruby-based approach. A Salt state file (with the .sls extension) described desired system state in a format that was immediately readable:
# /srv/salt/webserver/init.sls
# Ensure nginx is installed, configured, and running

nginx_package:
  pkg.installed:
    - name: nginx
    - version: ">=1.24.0"

nginx_config:
  file.managed:
    - name: /etc/nginx/nginx.conf
    - source: salt://webserver/files/nginx.conf
    - template: jinja
    - context:
        worker_processes: {{ grains['num_cpus'] }}
        server_name: {{ pillar['domain'] }}
    - require:
      - pkg: nginx_package

nginx_service:
  service.running:
    - name: nginx
    - enable: True
    - watch:
      - file: nginx_config
The state system supported requisites (require, watch, onchanges) for ordering dependencies, Jinja2 templating for dynamic configuration based on node-specific data, and the Pillar system for securely distributing sensitive data like passwords, API keys, and certificates to specific nodes. Pillar data was encrypted in transit and only sent to the nodes that needed it — a security feature that addressed a genuine weakness in some competing tools where secrets could leak across nodes.
The Grain and Pillar Systems
Salt introduced two complementary data systems that gave it exceptional flexibility. Grains were facts about each minion — operating system, kernel version, IP addresses, CPU count, memory, custom metadata — collected automatically and available for targeting and templating. This was similar to Puppet’s Facter or Chef’s Ohai, but deeply integrated into Salt’s targeting system. You could target commands at all CentOS machines with more than 8 CPUs in a specific data center with a single command.
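The logic of grain-based targeting can be sketched as a filter over a table of per-minion facts. This is a simplified in-memory matcher for illustration, not Salt's implementation — in real Salt the equivalent would be a compound matcher such as -C 'G@os:CentOS and G@datacenter:us-east', evaluated against each minion's own grains:

```python
from fnmatch import fnmatch

# Hypothetical grains, as Salt would collect them from each minion.
fleet = {
    "web-1": {"os": "CentOS", "num_cpus": 16, "datacenter": "us-east"},
    "web-2": {"os": "CentOS", "num_cpus": 4,  "datacenter": "us-east"},
    "db-1":  {"os": "Ubuntu", "num_cpus": 32, "datacenter": "us-west"},
}

def match_grain(grains, key, pattern):
    """Glob-match one grain value, the way a G@key:pattern target does."""
    return fnmatch(str(grains.get(key, "")), pattern)

def target(fleet, predicate):
    """Return the minion IDs whose grains satisfy the predicate."""
    return sorted(mid for mid, g in fleet.items() if predicate(g))

# "All CentOS machines with more than 8 CPUs in us-east":
hits = target(fleet, lambda g: match_grain(g, "os", "CentOS")
                               and g["num_cpus"] > 8
                               and match_grain(g, "datacenter", "us-east"))
print(hits)  # ['web-1']
```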
Pillars were the inverse: data pushed from the master to specific minions based on targeting rules. Pillar data lived on the master and was only transmitted to nodes that matched specific criteria. This made Pillars ideal for secrets management — database passwords, TLS certificates, API tokens — and for per-environment configuration where different nodes needed different values for the same variables.
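A minimal sketch of how pillar data is scoped to specific minions — the file paths, match patterns, and values here are illustrative:

```yaml
# /srv/pillar/top.sls — which minions receive which pillar data
base:
  'db-*':
    - database
  'web-*':
    - webserver

# /srv/pillar/database.sls — only db-* minions are ever sent this data
mysql:
  root_password: s3cret   # illustrative; real deployments typically encrypt pillar values
```

Because the master renders pillar per minion and transmits only the matched subset, a compromised web node never holds the database credentials.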
Event-Driven Infrastructure with the Reactor
One of Salt’s most forward-thinking features was its event bus and reactor system. Every action in Salt — a command execution, a state run, a minion connecting, a job completing — generated an event on an internal message bus. The Reactor system allowed operators to define automatic responses to these events. If a minion’s disk usage exceeded 90%, Salt could automatically trigger a cleanup job. If a new minion connected, it could automatically receive its baseline configuration. This event-driven model anticipated the infrastructure-as-code and DevOps patterns that would become standard practice years later.
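The reactor pattern reduces to "match an event tag, fire a response." A minimal in-process sketch — an event bus with tag-glob subscriptions, illustrative only and not Salt's reactor configuration format:

```python
from fnmatch import fnmatch

class EventBus:
    """Tiny event bus: reactors subscribe to tag globs, events fire them."""
    def __init__(self):
        self.reactors = []  # list of (tag_glob, callback) pairs

    def register(self, tag_glob, callback):
        self.reactors.append((tag_glob, callback))

    def fire(self, tag, data):
        for glob, cb in self.reactors:
            if fnmatch(tag, glob):
                cb(tag, data)

triggered = []

def cleanup(tag, data):
    # Stand-in for scheduling a cleanup job on the noisy minion.
    triggered.append(("cleanup", data["id"]))

bus = EventBus()
bus.register("salt/monitor/disk/*", cleanup)

# Minions report disk usage; the reactor fires only past the threshold.
for minion_id, pct in [("web-1", 42), ("db-1", 93)]:
    if pct > 90:
        bus.fire("salt/monitor/disk/alert", {"id": minion_id, "pct": pct})

print(triggered)  # [('cleanup', 'db-1')]
```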
Salt SSH and Agentless Operation
Recognizing that not every environment could accommodate agents, Hatch also built Salt SSH — a mode that executed Salt commands over SSH without requiring a minion to be installed. This gave Salt the agentless capability that later made Ansible (created by Michael DeHaan in 2012) wildly popular, though with the trade-off of reduced speed compared to the ZeroMQ transport. Salt was one of the few tools that let operators choose their communication model based on their specific constraints — persistent agents for speed and real-time control, or SSH for simplicity and zero footprint.
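Instead of learning its inventory from connected minions, Salt SSH reads it from a roster file. A minimal example — host, path, and user are illustrative:

```yaml
# /etc/salt/roster — inventory for salt-ssh
web-1:
  host: 203.0.113.10
  user: deploy
  sudo: True

# Then, with no agent installed on the target:
#   salt-ssh 'web-1' state.apply webserver
```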
SaltStack in the Industry
SaltStack Inc. was founded by Hatch to provide commercial support and enterprise features around the open-source Salt project. The company operated in the competitive configuration management market alongside Puppet Labs, Chef (Opscode), and the rapidly growing Ansible (acquired by Red Hat in 2015). Salt found particular adoption in environments that demanded speed and scale — large web companies, CDN providers, hosting companies, and any organization managing thousands or tens of thousands of nodes.
LinkedIn, for example, used Salt to manage its massive server fleet, taking advantage of Salt’s speed for rapid deployments across thousands of machines. CloudFlare, Lyft, and numerous hosting providers adopted Salt for similar reasons. The tool was especially valued in environments where real-time visibility and control were critical — you could not afford to wait 30 minutes for a configuration change to propagate when you were responding to a production incident.
In November 2020, VMware acquired SaltStack, integrating Salt’s automation capabilities into its vRealize Automation suite and broader infrastructure management platform. The acquisition validated Salt’s technology and its relevance to enterprise infrastructure automation, even as the broader market increasingly consolidated around a handful of dominant tools.
Philosophy and Engineering Principles
Speed as a Feature
Hatch consistently argued that speed was not just a nice-to-have in infrastructure tooling — it was a fundamental feature that changed how people used the tool. When a command returns results from 5,000 machines in 3 seconds, you use it differently than when it takes 10 minutes. Speed enables exploratory operations: checking system state, querying configurations, validating deployments. It turns infrastructure management from a batch process into an interactive conversation with your fleet. This philosophy aligned with the broader DevOps movement’s emphasis on fast feedback loops and the idea that faster cycles lead to safer operations, not riskier ones.
Python as the Right Choice
The decision to build Salt entirely in Python was both practical and philosophical. Python was already the dominant language in systems administration and operations. By building Salt in Python, Hatch ensured that the tool’s internals were accessible to its users — a sysadmin who needed a custom execution module could write one in the same language they used for their own scripts. This lowered the barrier to extending Salt compared to tools that required learning Ruby or a custom DSL. Salt’s module system was designed for contribution: drop a Python file in the right directory, and it became available as a Salt function across your entire fleet.
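Concretely, a custom execution module is just a Python file of functions: dropped into the master's file roots (conventionally /srv/salt/_modules/) and synced to minions, each public function becomes callable fleet-wide. The module name and logic below are invented for illustration; only the drop-in convention is Salt's:

```python
# /srv/salt/_modules/diskcheck.py — hypothetical custom module.
# After `salt '*' saltutil.sync_modules`, callable fleet-wide as:
#   salt '*' diskcheck.over_threshold 90
import shutil

def over_threshold(percent, path="/"):
    """Return True if the filesystem holding `path` is fuller than `percent`%."""
    usage = shutil.disk_usage(path)
    used_pct = 100.0 * usage.used / usage.total
    return used_pct > float(percent)
```

No plugin API, no registration boilerplate: the same Python a sysadmin would write for a local script becomes a fleet-wide command.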
Batteries Included
Salt shipped with an enormous library of built-in modules — hundreds of execution modules, state modules, grains modules, pillar modules, returner modules, and more. It had native support for managing packages, services, files, users, cron jobs, cloud instances, network devices, containers, and dozens of other resource types out of the box. Hatch believed that operations engineers should not need to hunt for community-written modules for basic tasks. This batteries-included approach meant that Salt was productive from the first install.
Salt’s Place in the Configuration Management Evolution
The history of configuration management can be divided into distinct eras, and Salt occupied a critical position in the timeline. CFEngine (Mark Burgess, 1993) established the concept of automated configuration management. Puppet (2005) made it accessible to mainstream operations teams with its declarative model. Chef (2009) appealed to the developer-operations crossover audience with its Ruby-based approach. Salt (2011) added speed and remote execution. Ansible (2012) prioritized simplicity and agentless operation. And Terraform (2014, by Mitchell Hashimoto) shifted the conversation toward provisioning cloud infrastructure itself, rather than configuring what ran on it.
Each tool represented a different set of trade-offs, and Salt’s specific contribution was proving that configuration management and remote execution were not separate problems requiring separate tools. Before Salt, the mental model was: use Puppet or Chef for configuration, use Fabric or MCollective for ad-hoc execution. Salt unified these concerns under a single framework with a single communication layer and a single targeting language. This insight influenced subsequent tools and shaped how the industry thought about infrastructure automation.
The rise of containerization with Docker (2013) and orchestration with Kubernetes (2014) changed the landscape significantly. In a world of immutable containers and declarative orchestration, traditional configuration management became less central for some workloads. But Salt adapted — it gained modules for managing Docker containers, Kubernetes clusters, and cloud provider resources, and it remained essential for managing the underlying infrastructure that containers run on. Even in a containerized world, someone still needs to configure the hosts, the networks, the storage, and the security policies.
Legacy and Continuing Influence
Thomas Hatch’s most enduring contribution may be the idea that infrastructure tools should be fast enough to be interactive. Before Salt, configuration management was something you set up and left running in the background. Salt made it possible — and natural — to have a real-time dialogue with your infrastructure. This mental model shift influenced how subsequent tools were designed and how operations teams thought about their relationship with the systems they managed.
Salt’s event-driven architecture, with its internal event bus and reactor system, anticipated the broader industry move toward event-driven infrastructure and GitOps workflows. The idea that infrastructure changes should trigger automatic responses — testing, validation, remediation — is now a standard practice in modern platform engineering. Salt was building this capability years before it became a mainstream expectation.
The Salt project continues as an active open-source community project under VMware (now part of Broadcom following the 2023 acquisition). Its core architecture — the ZeroMQ transport, the grain and pillar systems, the targeting language, the execution module framework — remains influential. Engineers who learned infrastructure automation through Salt carry its concepts into whatever tools they use next, whether that is Ansible, Terraform, Kubernetes, or custom internal platforms.
Hatch himself has continued to work on infrastructure and security tooling, contributing to the broader ecosystem of tools that help organizations manage complex, distributed systems. His work with Salt demonstrated a principle that applies across all of software engineering: the best tools are not just functional — they are fast enough, flexible enough, and intuitive enough that they change how people think about the problem they are solving. Salt did not just automate infrastructure; it made infrastructure conversational.
Key Facts
- Name: Thomas Hatch
- Known for: Creating SaltStack (Salt), an open-source configuration management and remote execution framework
- First Salt release: March 2011
- Language: Python, with YAML/Jinja2 for configuration
- Key innovation: Unified configuration management and remote execution with ZeroMQ-based high-speed communication
- Company: SaltStack Inc. (founded by Hatch, acquired by VMware in November 2020)
- Current status: Salt continues as an open-source project; Hatch remains active in infrastructure and security technology
Frequently Asked Questions
Who is Thomas Hatch?
Thomas Hatch is an American software engineer and entrepreneur who created SaltStack (commonly known as Salt), an open-source configuration management and remote execution tool. He founded SaltStack Inc. to provide commercial support for the project, and the company was acquired by VMware in 2020. Hatch is recognized for building one of the fastest and most scalable infrastructure automation platforms in the DevOps ecosystem.
What is SaltStack and why was it created?
SaltStack is an open-source tool for managing and automating IT infrastructure. It was created because existing configuration management tools like Puppet and Chef were too slow for real-time operations and lacked built-in remote execution capabilities. Salt used a ZeroMQ messaging layer to communicate with thousands of servers in seconds, combining configuration management with the ability to run arbitrary commands across an entire fleet instantly.
How does SaltStack differ from Ansible?
The primary difference is architectural. Salt uses persistent agent connections (minions) with a ZeroMQ message bus for high-speed communication, while Ansible is agentless and communicates over SSH. This makes Salt significantly faster for large-scale deployments: it can communicate with thousands of nodes in seconds, whereas Ansible opens per-host SSH connections and parallelizes only up to a configured number of forks. Salt also includes a built-in event bus and reactor system for event-driven automation. However, Ansible’s agentless approach is simpler to set up and requires no software installation on managed nodes, which contributed to its wider adoption among smaller teams.
What happened to SaltStack?
VMware acquired SaltStack in November 2020 and integrated Salt’s automation technology into its vRealize Automation and Aria Automation platforms. The open-source Salt project continues to be developed and maintained by the community. Following Broadcom’s acquisition of VMware in 2023, Salt remains part of the broader VMware infrastructure automation portfolio.
Is SaltStack still relevant in the age of Kubernetes?
Yes, though its role has evolved. While Kubernetes handles container orchestration, Salt remains relevant for managing the underlying infrastructure — bare-metal servers, virtual machines, network configurations, security policies, and cloud resources. Salt has modules for managing Docker, Kubernetes, and major cloud providers, making it complementary to container orchestration rather than a competitor. Organizations with hybrid environments — a mix of containers, VMs, and physical servers — often use Salt alongside Kubernetes for comprehensive infrastructure management.
What programming language is SaltStack written in?
Salt is written entirely in Python, which was a deliberate choice by Thomas Hatch. Python was already the dominant language among systems administrators and DevOps engineers, making Salt’s internals accessible and extensible. Users write Salt state files in YAML with Jinja2 templating, and custom modules are written in Python — no need to learn a custom DSL or a different programming language.