Tech Pioneers

Michael DeHaan: Creator of Ansible, the Automation Tool That Proved Simplicity Wins

In the late hours of a February evening in 2012, a systems engineer named Michael DeHaan pushed the first commit of a new automation tool to GitHub. He called it Ansible, after the faster-than-light communication device from Ursula K. Le Guin’s science fiction — a fitting name for software that promised to reach any server in your infrastructure instantly, without installing anything on it first. Within two years, Ansible had become one of the fastest-growing open-source automation projects on GitHub. Within four years, Red Hat acquired the company behind it for over 150 million dollars. What made Ansible extraordinary was not what it could do — Puppet, Chef, and Salt could all automate infrastructure. What made it extraordinary was what it refused to do. It refused to require agents on managed machines. It refused to demand its own programming language. It refused to be complex. In a field dominated by tools that required weeks of study before you could configure a single server, DeHaan built something you could learn in an afternoon. That radical commitment to simplicity did not just produce a popular tool. It changed how the entire industry thinks about automation.

Early Life and the Path to Systems Engineering

Michael DeHaan grew up in the Research Triangle area of North Carolina, a region steeped in technology thanks to the presence of IBM, Cisco, and dozens of smaller tech firms. He studied computer science at North Carolina State University, where he developed a deep interest in systems administration and the emerging field of Linux infrastructure. Unlike many computer science students who gravitated toward application development, DeHaan was drawn to the problems that happened after code was written — the messy, unglamorous work of deploying, configuring, and maintaining the servers that ran everything.

This operational focus would define his entire career. DeHaan was the kind of engineer who understood that the most sophisticated application in the world was worthless if you could not reliably get it running on production servers at two in the morning when something broke. He was fascinated by the gap between how software was built and how it was deployed — a gap that, in the early 2000s, was typically bridged by fragile shell scripts, manually maintained wiki pages, and the institutional knowledge locked inside the heads of a few overworked system administrators.

Red Hat and the Creation of Cobbler

The Provisioning Problem

After graduating, DeHaan joined Red Hat, the company that had done more than any other to commercialize Linux and prove that open-source software could be a viable business model. At Red Hat, he worked on the Emerging Technologies team, where he confronted one of the most tedious and error-prone tasks in data center management: provisioning bare-metal servers. In the mid-2000s, setting up a new server meant manually booting it from installation media, walking through configuration wizards, and then spending hours applying the correct packages, network settings, and security policies. In a data center with hundreds or thousands of machines, this process was a bottleneck that could take weeks.

DeHaan’s response was Cobbler, a provisioning tool he created around 2006. Cobbler automated the entire lifecycle of server provisioning — from network booting via PXE, through operating system installation using Kickstart files, to initial configuration. What set Cobbler apart from previous provisioning solutions was its focus on managing the relationships between different provisioning components. A single Cobbler server could manage multiple Linux distributions, multiple hardware profiles, multiple network configurations, and multiple post-installation scripts, combining them in any arrangement needed. In an era when organizations were racking Linux servers by the hundreds, tools like Cobbler were essential to managing that infrastructure at scale.

Lessons from Cobbler

Cobbler taught DeHaan several lessons that would directly inform the design of Ansible years later. First, he learned that simplicity was not a nice-to-have — it was the single most important feature of any infrastructure tool. The tools that people actually used were the ones they could understand quickly. Second, he learned that infrastructure tools should work with existing conventions rather than replacing them. Cobbler used Kickstart files, PXE booting, and DHCP — technologies that every Linux administrator already knew. It did not invent a new provisioning protocol; it orchestrated the ones that already existed. Third, and most importantly, DeHaan learned that the biggest barriers to adoption were not technical but psychological. Engineers would not use a tool they could not trust, and trust came from transparency — from being able to read the tool’s configuration and understand exactly what it would do before it did it.

The Genesis of Ansible

Why Another Automation Tool?

By 2012, the configuration management landscape was dominated by three major tools: Puppet (created by Luke Kanies in 2005), Chef (created by Adam Jacob in 2009), and Salt (created by Thomas Hatch in 2011). All three were powerful, well-funded, and had growing communities. Creating yet another tool in this space seemed, to many observers, unnecessary. But DeHaan saw fundamental problems with the existing solutions that he believed could not be fixed incrementally — they required a completely different architectural approach.

The first problem was agents. Puppet, Chef, and Salt all required installing and maintaining agent software on every managed machine. This meant that before you could automate a server, you first had to manually or semi-manually bootstrap the agent onto it — a bootstrapping problem that the tools were supposed to eliminate. Agents also created their own operational burden: they needed to be updated, monitored, and secured. They consumed resources on managed machines. And they created a dependency that, if the agent failed, could leave machines in an unmanageable state.

The second problem was complexity. Puppet used its own declarative language (Puppet DSL). Chef used Ruby. Salt used a mixture of YAML, Jinja2, and Python. Each tool required learning not just the tool itself but an entire ecosystem of concepts, patterns, and conventions. DeHaan had watched colleagues at Red Hat struggle for weeks to become productive with these tools, and he knew that most system administrators in the real world did not have weeks to invest in learning a new configuration management platform.

The third problem was the gap between orchestration and configuration. Existing tools were designed primarily for maintaining desired state on individual machines — ensuring that packages were installed, files were in the right place, services were running. But real-world operations required orchestrating actions across multiple machines in a specific order: take web servers out of the load balancer, update the application, run database migrations, run tests, add web servers back to the load balancer. This kind of procedural, multi-machine orchestration was awkward or impossible with tools designed for single-machine state management.

The Architecture of Simplicity

DeHaan’s answer to all three problems was radical simplicity. Ansible would use SSH — the protocol that every Linux machine already ran — as its transport layer. No agents to install, no ports to open, no daemons to manage. If you could SSH into a machine, you could manage it with Ansible. The managed machine needed nothing except a Python interpreter, which was already present on virtually every Linux distribution.

For its configuration language, DeHaan chose YAML — a data serialization format so simple that non-programmers could read and write it. Ansible playbooks described the desired state of infrastructure in a format that looked almost like English. There was no compilation step, no type system, no abstract classes. You wrote what you wanted to happen, and Ansible made it happen. This stands in sharp contrast to the approach taken by tools like Terraform, which uses its own configuration language, HCL — though both share the principle of infrastructure as code.

A simple Ansible playbook to deploy and configure a web application illustrates this philosophy perfectly:

---
# deploy-webapp.yml - A complete deployment in plain YAML
- name: Deploy web application to production
  hosts: webservers
  become: yes
  serial: "25%"        # Rolling deploy: 25% of servers at a time
  max_fail_percentage: 0  # Stop if any server fails

  vars:
    app_version: "2.4.1"
    app_dir: /opt/webapp
    app_user: deploy

  pre_tasks:
    - name: Remove server from load balancer
      community.general.haproxy:
        state: disabled
        host: "{{ inventory_hostname }}"
        backend: app_servers
      delegate_to: "{{ groups['loadbalancers'][0] }}"

  tasks:
    - name: Pull latest application code
      git:
        repo: https://github.com/team/webapp.git
        dest: "{{ app_dir }}"
        version: "v{{ app_version }}"
      become_user: "{{ app_user }}"

    - name: Install Python dependencies
      pip:
        requirements: "{{ app_dir }}/requirements.txt"
        virtualenv: "{{ app_dir }}/venv"
      become_user: "{{ app_user }}"

    - name: Apply database migrations
      command: "{{ app_dir }}/venv/bin/python manage.py migrate --noinput"
      args:
        chdir: "{{ app_dir }}"
      become_user: "{{ app_user }}"
      run_once: true  # Migrations only need to run once

    - name: Restart application service
      systemd:
        name: webapp
        state: restarted

    - name: Wait for application health check
      uri:
        url: "http://localhost:8080/health"
        status_code: 200
      register: health
      retries: 10
      delay: 5
      until: health.status == 200

  post_tasks:
    - name: Re-enable server in load balancer
      community.general.haproxy:
        state: enabled
        host: "{{ inventory_hostname }}"
        backend: app_servers
      delegate_to: "{{ groups['loadbalancers'][0] }}"

    - name: Notify deployment channel
      community.general.slack:
        token: "{{ slack_token }}"
        channel: "#deployments"
        msg: "v{{ app_version }} deployed to {{ inventory_hostname }}"
      run_once: true

Several design decisions in this playbook reveal Ansible’s philosophy. The serial: "25%" directive tells Ansible to process servers in batches, performing a rolling deployment rather than updating everything simultaneously. The run_once: true flag on the migration task ensures database migrations execute on a single server only. The delegate_to directive in the pre- and post-tasks orchestrates the load balancer from the managed server’s perspective. Every line is readable by anyone with basic sysadmin knowledge. There is no custom DSL to learn, no Ruby classes to write, no compilation step to run. This is what DeHaan meant by simplicity: the playbook is simultaneously the documentation, the automation, and the audit trail.

Ansible’s Explosive Growth

Community and Galaxy

DeHaan released Ansible as an open-source project on GitHub on February 23, 2012. The response was immediate and overwhelming. Within months, Ansible had more GitHub stars than tools that had been available for years. The first major contribution from the community came in the form of modules — self-contained units of code that taught Ansible how to manage specific resources. DeHaan had designed the module system to be deliberately easy to extend: a module was simply a script (usually Python) that accepted JSON input and produced JSON output. Anyone who could write a basic Python script could write an Ansible module.
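The module contract described above can be sketched in a few lines. The following is a simplified, hypothetical illustration of that contract — not an official Ansible example; real modules are built on the AnsibleModule helper from ansible.module_utils.basic, and argument passing has varied across versions:

```python
#!/usr/bin/env python3
"""Toy sketch of the Ansible module contract: structured args in, JSON result out."""
import json
import sys


def run_module(args):
    """Decide what (if anything) to do and report it as a result dictionary."""
    path = args.get("path")
    line = args.get("line")
    if not path or not line:
        # Modules signal errors by setting "failed" and a human-readable message.
        return {"failed": True, "msg": "both 'path' and 'line' are required"}
    # A real module would inspect the managed host here; this sketch only
    # demonstrates the shape of the reply Ansible expects.
    return {"changed": True, "msg": f"ensured {line!r} in {path}"}


if __name__ == "__main__" and len(sys.argv) > 1:
    # In this simplified model, arguments arrive as a path to a JSON file.
    with open(sys.argv[1]) as f:
        result = run_module(json.load(f))
    # The module's only required output is a single JSON document on stdout.
    print(json.dumps(result))
```

Because the interface is just structured data in and JSON out, a module can be tested in isolation, without an Ansible controller or a managed host — one reason the module ecosystem grew so quickly.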

In 2013, DeHaan launched Ansible Galaxy, a community hub for sharing reusable Ansible roles — pre-packaged collections of tasks, templates, and variables for common infrastructure patterns. Galaxy transformed Ansible from a tool into a platform. Instead of writing playbooks from scratch, administrators could pull down community-maintained roles for everything from installing Docker to hardening SSH configurations to deploying Kubernetes clusters. By 2015, Galaxy hosted thousands of roles, and its growth mirrored the explosive adoption of Ansible itself.

The community also benefited from DeHaan’s active involvement. Unlike some project founders who stepped back once their tool gained traction, DeHaan was deeply involved in reviewing pull requests, answering questions on mailing lists, and setting the project’s technical direction. He had a clear vision for what Ansible should be and — just as importantly — what it should not be. He repeatedly pushed back against features that would add complexity, even when those features were requested by large enterprise users. This discipline was essential to maintaining Ansible’s core value proposition.

AnsibleWorks and the Enterprise Push

In 2013, DeHaan co-founded AnsibleWorks (later renamed Ansible, Inc.) to build commercial products around the open-source project. The flagship product was Ansible Tower (now part of Red Hat Ansible Automation Platform), a web-based management interface that added role-based access control, job scheduling, graphical inventory management, and audit logging to Ansible’s command-line foundation. Tower addressed the enterprise requirements that large organizations demanded — centralized management, compliance tracking, credential management — without compromising the simplicity of the underlying playbook format.

The business model was straightforward: the core automation engine remained open source, while the enterprise management layer was a paid product. This approach, proven by companies from Red Hat to MySQL to MongoDB, allowed Ansible to grow its community and mindshare through the open-source project while generating revenue from organizations that needed enterprise-grade management capabilities.

The Red Hat Acquisition and Beyond

The 150 Million Dollar Validation

On October 16, 2015, Red Hat announced the acquisition of Ansible, Inc. for over 150 million dollars. The acquisition was significant for several reasons. First, it validated the agentless, YAML-based approach that many in the industry had initially dismissed as too simple for serious enterprise use. Second, it brought Ansible full circle — DeHaan had started his career at Red Hat, built Cobbler there, and now his second major creation was returning to the company that had shaped his engineering philosophy.

Under Red Hat’s stewardship, Ansible was integrated into a broader automation strategy. Red Hat combined Ansible with its existing management tools to create the Ansible Automation Platform, a comprehensive solution for enterprise automation that spanned network devices, cloud resources, security policies, and application deployments. The platform added features like Automation Hub (a curated repository of enterprise-certified content), Automation Analytics (insights into how automation was being used across an organization), and Execution Environments (containerized runtime environments that ensured consistent playbook execution). When IBM acquired Red Hat in 2019 for 34 billion dollars, Ansible was one of the crown jewels that made the acquisition compelling.

DeHaan’s Role After the Acquisition

Following the Red Hat acquisition, DeHaan continued to contribute to Ansible’s development but gradually shifted focus. He was candid about the challenges of transitioning from startup founder to corporate employee — a common difficulty in tech acquisitions. The creative freedom and rapid decision-making of a small startup inevitably clash with the processes and politics of a large corporation. DeHaan eventually moved on from Red Hat to pursue other projects, including work on Vespene, a build and CI/CD system that reflected his continued interest in making infrastructure operations simpler and more accessible.

His departure from day-to-day Ansible development did not diminish his influence on the project. The architectural decisions he made in those early months of 2012 — agentless design, YAML playbooks, SSH transport, an extensible module system — remained the foundations of Ansible through every subsequent version. These were not arbitrary technical choices; they were expressions of a philosophy about how automation tools should work, and that philosophy continued to guide the project’s development long after DeHaan stepped back.

Design Philosophy and Lasting Principles

Batteries Included, Complexity Excluded

DeHaan’s most significant contribution to the field of IT automation was not a piece of software but a design philosophy. He argued, repeatedly and persuasively, that the most powerful feature a tool could have was the ability to be understood. In a blog post that became widely quoted in the DevOps community, he wrote that Ansible’s goal was to be the tool you could hand to a new team member and expect them to be productive within hours, not weeks. This was a direct challenge to the prevailing assumption that powerful tools must be complex — that there was an inherent tradeoff between capability and usability.

Ansible’s “batteries included” approach embodied this philosophy. Out of the box, Ansible shipped with hundreds of modules covering virtually every common infrastructure task: managing packages, users, files, services, cloud resources, network devices, databases, and more. You did not need to install plugins, configure providers, or download additional components. A fresh Ansible installation could manage a heterogeneous infrastructure of Linux servers, Windows machines, network switches, and cloud resources immediately. This approach mirrored the broader DevOps principle championed by Patrick Debois — that automation should break down barriers, not create new ones.

Idempotency as a Core Value

One of DeHaan’s most important technical insights was making idempotency not just a feature but the default behavior. An idempotent operation is one that produces the same result whether you run it once or a hundred times. In Ansible, if you declare that a package should be installed, Ansible checks whether it is already installed before taking action. If the package is present, Ansible does nothing. This means you can run the same playbook repeatedly — during development, during troubleshooting, during disaster recovery — without fear of unintended side effects.

This property was transformative for how operations teams thought about automation. Before Ansible, many automation scripts were dangerous to run more than once — they might create duplicate users, overwrite configuration files, or restart services unnecessarily. DeHaan made the safe-by-default behavior a core architectural principle, not something users had to remember to implement. This design choice alone reduced the risk and anxiety associated with automation by an order of magnitude, encouraging teams to automate tasks they would previously have done manually out of fear.
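The check-before-act pattern behind idempotency is easy to demonstrate. This hypothetical helper (not Ansible source code) mimics how a module such as lineinfile decides whether any work is needed before touching the system:

```python
"""Toy illustration of idempotent, check-before-act automation."""
import os
import tempfile


def ensure_line(path, line):
    """Make sure `path` contains `line`. Return True only if a change was made."""
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False            # Desired state already holds: do nothing.
    with open(path, "a") as f:  # Only act when the state is wrong.
        f.write(line + "\n")
    return True


if __name__ == "__main__":
    cfg = os.path.join(tempfile.mkdtemp(), "sshd_config")
    first = ensure_line(cfg, "PermitRootLogin no")
    second = ensure_line(cfg, "PermitRootLogin no")
    print(first, second)  # the first run changes the file; the second is a no-op
```

Running the function twice yields a change on the first call and none on the second — which is also why Ansible can report "changed" versus "ok" for every task, turning repeated playbook runs into a cheap way to verify that infrastructure still matches its declared state.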

Ansible’s Impact on the Industry

Ansible’s influence extends far beyond the tool itself. Its success demonstrated several principles that have shaped every automation tool built since:

The agentless architecture proved that you did not need to install proprietary software on managed machines to achieve enterprise-grade automation. This principle has been adopted across the industry, with even previously agent-based tools adding agentless modes. SSH and WinRM, the protocols Ansible uses, are now universally accepted as viable transport layers for large-scale automation.

YAML as a configuration language established a new standard for infrastructure definition. Before Ansible, the assumption was that automation required a “real” programming language — Ruby, Python, or a custom DSL. Ansible showed that a data format could express complex operational logic when combined with a well-designed execution engine. Today, YAML is the lingua franca of DevOps, used by Kubernetes, GitHub Actions, Docker Compose, GitLab CI, and countless other tools — a declarative, data-driven style of pipeline and infrastructure definition that Ansible helped popularize.

The “batteries included” philosophy demonstrated that the initial out-of-box experience matters enormously for adoption. Tools that required extensive setup before they became useful lost to Ansible’s instant gratification — install it, write a playbook, run it, see results. This lesson has been internalized by tool builders across the infrastructure space.

As of 2025, Ansible remains one of the most widely used automation platforms in the world. Stack Overflow surveys consistently rank it among the top infrastructure tools. It is used by organizations ranging from small startups to Fortune 100 enterprises, government agencies, and research institutions. Its ecosystem includes thousands of community-maintained collections, a vibrant conference circuit (AnsibleFest and regional meetups), and deep integrations with every major cloud provider and network vendor.

Frequently Asked Questions

What is Ansible and why did Michael DeHaan create it?

Ansible is an open-source automation platform for configuring servers, deploying applications, and orchestrating multi-machine workflows. Michael DeHaan created it in 2012 because he believed existing automation tools — Puppet, Chef, and Salt — were too complex for most system administrators to adopt. His key innovation was an agentless architecture that used SSH for communication and YAML for configuration, eliminating the need to install agent software on managed machines and reducing the learning curve from weeks to hours.

What was Cobbler and how did it lead to Ansible?

Cobbler was a Linux provisioning tool that DeHaan created around 2006 while working at Red Hat. It automated bare-metal server provisioning through PXE booting and Kickstart files. Cobbler taught DeHaan that infrastructure tools must be simple and work with existing technologies rather than replacing them — principles that directly shaped the design of Ansible years later.

Why does Ansible use SSH instead of agents?

DeHaan chose SSH as Ansible’s transport layer because it was already installed and configured on virtually every Linux server. This eliminated the bootstrapping problem inherent in agent-based tools, where you must first install agent software before you can automate a machine. SSH also meant Ansible required no additional ports to be opened in firewalls and no additional daemons to be monitored and maintained.

What happened when Red Hat acquired Ansible?

Red Hat acquired Ansible, Inc. in October 2015 for over 150 million dollars. The open-source Ansible engine continued to be developed by the community, while Red Hat built the Ansible Automation Platform — an enterprise product combining Ansible with centralized management, role-based access control, analytics, and certified automation content. When IBM acquired Red Hat in 2019, Ansible became a core component of IBM’s hybrid cloud strategy.

How did Ansible change the DevOps industry?

Ansible proved that automation tools did not need to be complex to be powerful. Its success established YAML as the standard language for infrastructure configuration, popularized agentless architectures across the industry, and demonstrated that simplicity and a low learning curve were competitive advantages rather than limitations. Today, Ansible’s design principles are reflected in tools ranging from Kubernetes manifests to CI/CD pipeline definitions.

What is Michael DeHaan doing after Ansible?

After the Red Hat acquisition, DeHaan eventually stepped back from day-to-day Ansible development. He pursued other projects including Vespene, an open-source CI/CD and build system. He continues to be recognized as one of the most influential figures in the infrastructure automation space, and the design philosophy he embedded in Ansible — simplicity, agentless architecture, and human-readable configuration — continues to influence how the industry builds operations tools.

Is Ansible still relevant in the age of Kubernetes and cloud-native?

Ansible remains highly relevant and widely used. While Kubernetes handles container orchestration, Ansible fills complementary roles: provisioning the infrastructure that Kubernetes runs on, managing network devices and legacy systems that cannot be containerized, orchestrating multi-step deployment workflows, and handling day-two operations tasks. Ansible’s flexibility and low barrier to entry ensure it remains a foundational tool in modern infrastructure management, often working alongside Kubernetes rather than competing with it.