SaltStack Config – Starting At The Beginning

Overview

I could quite easily just go straight into some SaltStack examples and configurations, completely bypassing what it is, where it came from and how we plan on using it in the future, but that would be no fun. So in this article I’m going to cover some of the fundamentals surrounding SaltStack and SaltStack Config so that you get a grounding in exactly what it is meant to achieve and how it goes about it, before we go deep into the weeds.

It’s One Product, Right?

Well, err, sort of, but no, it isn’t. It’s a combination of the open source components that make up “Salt” (the Salt Master and Salt Minion) with an Enterprise management layer and API over the top (SaltStack Config). In other words, the best of both worlds. In amongst that we also have product renaming (SaltStack Enterprise became SaltStack Config following VMware’s purchase). I’ll cover the difference between the components shortly, but let’s just spend a minute covering what the overall solution is supposed to achieve.

The objective of SaltStack Config is to form a complete configuration management system that has automation, provisioning and orchestration capabilities all rolled in together. So not only can you apply a configuration to a client (i.e. a Minion) at build time or at any point afterwards, you can also persist and enforce that configuration on a Minion so that it stays in compliance (with optional vulnerability remediation), all wrapped up with a nice GUI and RBAC.
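To make that a little more concrete, below is a minimal sketch of a Salt state file that installs a package, manages its config file and keeps its service running. The names and paths (nginx, /srv/salt/nginx/init.sls and so on) are purely illustrative, but the pattern is the important bit.

# /srv/salt/nginx/init.sls - install nginx, manage its config and keep the service running
nginx_package:
  pkg.installed:
    - name: nginx

nginx_config:
  file.managed:
    - name: /etc/nginx/nginx.conf
    - source: salt://nginx/files/nginx.conf
    - require:
      - pkg: nginx_package

nginx_service:
  service.running:
    - name: nginx
    - enable: True
    - watch:
      - file: nginx_config

Because states are idempotent, applying this repeatedly (or on a schedule) is what gives you the “persist and enforce” behavior rather than a one-off build step.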

The Basic Architecture

So let’s look at Salt first before we get too deep too fast. The graphic below is taken from the old SaltStack Enterprise documentation but it explains things quite nicely. In its most basic form a Salt deployment is a “Master” server and one or more “Minions” (expect “master” to be renamed to something else in the near future).

The Master server provides integration points to cloud providers (e.g. AWS, VMware etc.), a file server on which to store configurations to apply to Minions, an authentication point, an interface for storing sensitive key/value data (Pillar), a store of the latest arbitrary information returned from all Minions (the Salt Mine) and modules for managing the Master configuration.
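As an illustration of that sensitive key/value interface, a minimal Pillar sketch (the file names, target and values are purely hypothetical) looks like this:

# /srv/pillar/top.sls - assign pillar data to Minions (the 'web*' target is an example)
base:
  'web*':
    - secrets

# /srv/pillar/secrets.sls - key/value data only rendered for the targeted Minions
api_token: s3cr3t-example-value
db_password: change-me

Pillar data is compiled on the Master and only handed to the Minions it is targeted at, which is what makes it suitable for secrets.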

Salt jobs are published onto a ZeroMQ (0MQ) event bus for a specific target (one or more Minions), and Minions listen constantly to see if there are any jobs that they need to process. This means that jobs are executed immediately, with no polling interval. The Minion collects the job from the event bus rather than the job being pushed to it. The Minion establishes its persistent connection to the event bus by talking to TCP port 4505 on the Master and then sends the results of any jobs it has executed back on TCP port 4506. This type of architecture makes a Salt environment massively scalable.
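Those two ports map directly onto settings in the Master configuration. A minimal sketch of the relevant lines (the values shown are the defaults, and the Master hostname is just an example):

# /etc/salt/master - the two ports described above (these are the defaults)
interface: 0.0.0.0     # address the Master listens on
publish_port: 4505     # the event bus the Minions subscribe to
ret_port: 4506         # the port Minions send job results back to

# /etc/salt/minion - a Minion only needs to be told where its Master is
master: saltmaster01.example.com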

The “Grains” interface on a Minion provides information that is specific to each Minion, for example the hostname, IP address, disk size, OS version etc. Each piece of information is known as a “Grain” and is static in nature as it is generated when the Minion daemon starts. Bespoke Grains can be created on Minions as desired in the form of key/value pairs, for example “DC: London”. Grains can be referenced and used for filtering and logic in Salt state files (i.e. the configuration you want to apply to a Minion), for example to apply a state only if the targeted Minion is in the London datacenter.
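For example, bespoke Grains can simply be dropped into /etc/salt/grains on a Minion and then referenced with Jinja inside a state file. A minimal sketch using the “DC: London” Grain above (the state itself and the file paths are hypothetical):

# /etc/salt/grains - custom key/value Grains on a Minion
DC: London
role: webserver

# Fragment of a state file (Jinja + YAML) that only applies in the London datacenter
{% if grains['DC'] == 'London' %}
london_proxy_config:
  file.managed:
    - name: /etc/proxy.conf
    - source: salt://proxy/files/london.conf
{% endif %}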

Beacons and Reactors work together in Salt: a Beacon (which resides on a Minion) sends an event onto the event bus to the Master when something specific happens on the Minion (i.e. something that a Salt admin defines), and the Reactor’s job is to carry out a specific action when the Master sees that event. A basic example of this would be a Beacon that sends an event when a user changes a config file and a Reactor that applies a state to the Minion to return the file back to its specified contents.
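Here is a sketch of that exact example, assuming the inotify Beacon is available on the Minion (the file paths and state names are purely illustrative):

# /etc/salt/minion.d/beacons.conf - Beacon on the Minion watching a config file
beacons:
  inotify:
    - files:
        /etc/important.conf:
          mask:
            - modify

# /etc/salt/master.d/reactor.conf - Reactor on the Master mapping the event to an action
reactor:
  - 'salt/beacon/*/inotify//etc/important.conf':
    - /srv/reactor/restore_file.sls

# /srv/reactor/restore_file.sls - re-apply the state that manages the file on that Minion
restore_config:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - important_config

The Reactor SLS here assumes a state called “important_config” already exists on the Master’s file server to put the file back as it should be.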

So What About SaltStack Enterprise/Config?

The Enterprise or Config stack is where things start getting interesting (not that they weren’t before!). Again this diagram is taken from the SaltStack documentation. SaltStack Enterprise/Config adds the top layer to the open source solution described so far. You will often hear this referred to as “RaaS” (Returner as a Service) but that is only one component of SaltStack Enterprise/Config. Essentially the Enterprise/Config stack provides the console GUI and centralized job management interface, a centralized file server backed by PostgreSQL, an in-memory database provided by Redis and the Enterprise API. This is not the same API as the one on the Master!

The Enterprise/Config stack is responsible for distributing centrally defined jobs to Minions via the relevant Master servers. This is done via a Master Plugin installed on each Master that connects it to Enterprise/Config.

Is That It?

Well in reality no, not by a long way. The architecture can be expanded to Multi-Master configurations backed by different types of file stores such as Git. State files can be written to cover some extremely complex use cases using Jinja templating and a variety of languages and formats including YAML and Python. There are a huge number of state modules available to cover all sorts of requirements and use cases in your environment, not to mention orchestration, but we’ll come to all that in future articles. Plus, we haven’t even touched on SaltStack State and SaltStack Comply…
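As a quick taste of the Git-backed file store mentioned above, the Master can be pointed at a Git repository with a couple of settings. A minimal sketch (the repository URL is purely an example, and a Python Git provider such as pygit2 or GitPython needs to be present on the Master):

# /etc/salt/master.d/gitfs.conf - serve state files from Git as well as the local roots
fileserver_backend:
  - gitfs
  - roots

gitfs_remotes:
  - https://github.com/example/salt-states.git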

How Does This Fit In With vRA?

Currently (Feb 2021) SaltStack Enterprise has become SaltStack Config, with its version aligned to the vRealize Suite releases, currently 8.3. It is also referenced as SSC for short. The deployment method has evolved into deploying from vRealize Suite Lifecycle Manager (vRSLCM) as a single appliance (SaltStack Enterprise and Masters were previously always deployed on user-installed VMs or physical servers, with a variety of config options to split up the components). Now SaltStack Config is deployed via vRSLCM onto a Photon OS based appliance with all Config roles and the Master daemon installed onto the same appliance (other, more traditional installation methods are still available).

So it seems at the moment we are in a grey area as to how the product will eventually look as it is further integrated into the VMware portfolio. Regardless, the functionality is still the same, although it is currently constrained in the number of VMs it can support (around 5,000) if deployed via vRSLCM.

In the next article I will start peeling back the layers of SaltStack and SaltStack Config before we get to some example use cases and configurations.