Minions: Salt "clients", aka hosts / provision targets (not to be confused with the Salt command-line client).
Master: the Salt server, which drives the provisioning of minions. The Salt CLI client runs on the master. The master is an ensemble of several services and worker processes:
- Publisher (port 4505): publishes jobs to minions; minions must be able to reach this port for pull-mode operation
- EventPublisher (IPC only): publishes events on the master's local event bus
- MWorker: one or more "master workers", which handle Salt operations concurrently
- ReqServer (port 4506): pops work and pushes it to MWorkers, and receives replies so MWorkers don't have to block
- File Server (?): transfers files from the state tree to minions on demand
Grains are roughly equivalent to facts in the Ansible/Puppet world: per-minion system properties such as OS, CPU, and network details.
Pillar is a global value/config store, spelled out on the master. This is basically YAML laid out in folder hierarchies, which maps to environment config. Pillar data is minion-specific and invisible to other minions, so it is useful for secrets.
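A minimal sketch of what that folder-hierarchy layout might look like; the filenames, target, and values here are hypothetical, though the paths are the conventional defaults:

```yaml
# /srv/pillar/top.sls -- maps pillar files to minions by target
base:
  'db*':            # only minions matching this target ever see this data
    - database

# /srv/pillar/database.sls -- hypothetical minion-specific secret
db_password: s3cret
```

In a state or template this value would then be read with `{{ pillar['db_password'] }}`; minions not matching the target never receive it.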
References to state, state trees, and .sls refer to YAML arranged in a folder hierarchy, similar to the Pillar layout. These files are stored on the Salt master. The .sls files are the core of actual configuration management, similar to Ansible playbooks. The state tree may also contain raw files to be provisioned to hosts, etc. As in Ansible, raw YAML (a data structure) becomes program-like using inclusion/extension, and as in Ansible, Jinja is used under the hood to render templated data.
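As a sketch of the flavor, a minimal state file might look like the following (the package and file names are illustrative; `salt://` paths resolve against the master's file server):

```yaml
# /srv/salt/nginx/init.sls -- hypothetical state: install a package,
# run its service, and manage its config file
nginx:
  pkg.installed: []          # state module + function, like an Ansible task
  service.running:
    - require:
      - pkg: nginx           # explicit dependency, rather than top-to-bottom order

/etc/nginx/nginx.conf:
  file.managed:
    - source: salt://nginx/nginx.conf   # raw file served from the state tree
    - template: jinja                   # rendered with Jinja before delivery
```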
.. placeholder ..
.. placeholder re: the event system ..
Unanswered Salt-specific questions on design, architecture, and implementation. Site-specific questions are in the next section.
The difference between Pillar and the state tree seems to be simply that Pillar is for config/value storage while the state tree is for file storage. Is there anything more subtle about this distinction?
Masters and minions require bootstrapping with the standard shell script. I very quickly hit an "ERROR: End of life distributions are not supported" message using an older version of Ubuntu that was used by a stale GitHub project. Here is a similar issue against the project. Since I'd want to use configuration management to fix exactly this kind of situation, this seems like a deal-breaker.. what's the protocol here?
What is the situation in the trenches for boots-on-the-ground ops who will expect reusable patterns for common infrastructure components? This needs a case study. Are there patterns for things like ZooKeeper, HashiCorp Vault, a LAMP stack, or Cassandra? Are the patterns solid, and are they fairly OS-agnostic?
What are the possibilities for push mode? Pull mode means minions must be able to access ports on the master, etc, which is potentially painful given the endless variations of firewalls, VPNs, VPCs, and bastion setups out there in the wild.
Are there possibilities for agent-free usage? Compared with agent-free interaction, agent-based could be painful in some circumstances, for instance with multi-machine setups on non-customized AMIs.
No option for intelligent routing? The job flow docs state that all minions receive all commands from master, then check whether the command is intended for them. (Questions of routing aside, encryption here does ensure we don't leak secrets to evil minions)
Short list of intro questions that ought to be relevant for most site-specific installations/use-cases.
- Using master or masterless?
- If master, where is the master, how does it deal with inside/outside the VPN or VPC?
- Default transport (ZeroMQ)?
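For concreteness, the minion side of these choices lives in the minion config file; a minimal sketch (the master address is hypothetical):

```yaml
# /etc/salt/minion -- minimal minion configuration (default path)
master: salt.example.internal   # must be reachable on ports 4505/4506 for pull mode
# transport: zeromq             # the default; alternatives exist, but this is the common case
```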
Compare and Contrast
SaltStack architecture is well thought out, and really very beautiful. But "using a distributed system to solve a problem means you now have 2 problems". This additional complexity may really only be justified at "enterprise" bigness, and do we think that starts at a dozen servers, 100 servers, or 1000? Without lots of servers, an Ansible user accustomed to agent-free, server-free, decentralized-by-default interaction with CM will probably be left wondering.. what is all this SaltStack architecture actually for?
Ansible is mostly a CM language, and only very slightly a CM driver, a value store, or a server. For most use-cases there is zero architecture to learn, but then again there is zero to leverage. (Ansible Tower is probably beginning to change some of this). Turning Ansible into something more scalable/distributed is mostly a matter of having many workers, and that can be done by leveraging an existing system for work distribution (like Jenkins slaves).
Speed, Eventing & Pull-mode
This section is partially answering "what is the SaltStack architecture for?", a question that was posed in the last section.
A primary aspect of SaltStack architecture is the queue, and the simplest consequence is having concurrent workers out-of-the-box, which immediately implies speed. Presumably persistent server/agent interaction also offers large speed increases over Ansible's use of SSH as a ubiquitous transport mechanism.
The eventing mechanism
CM as process of templating, inclusion, extension
Ansible supports only Jinja as a templating engine, whereas in Salt this aspect is pluggable, for better or worse. As a result of supporting pydsl and other renderers, there's no upper limit on how gnarly .sls files can get, and no guarantee about their portability between separate Salt installations.
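The renderer is selected per-file with a shebang-style first line; the default is effectively Jinja-then-YAML, but it can be swapped out, which is where the portability concern creeps in. A sketch (the user names are illustrative):

```yaml
#!jinja|yaml
# Explicitly request the default Jinja-then-YAML rendering pipeline.
# A file starting with e.g. "#!pydsl" would instead be arbitrary Python,
# with no ceiling on complexity.
{% for name in ['alice', 'bob'] %}
user_{{ name }}:
  user.present:
    - name: {{ name }}
{% endfor %}
```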
Quick links and further reading: