In my last blog, I briefly touched on how HashiCorp Consul and Puppet work together to provide a better experience through managed infrastructure, ACLs, fast replication, and near real-time deployment changes. In this post, I'll dive a little deeper into each of those components to see how they actually work.
By itself, Puppet is great at managing a multitude of different types of applications and services, from running processes and packages to users and security settings. Puppet is also fantastic for higher-level application management, from simple Tomcat/Java apps to full Oracle and SQL Server deployments. In most cases, the site-specific information for these types of deployments is stored in Hiera for easier management.
While Hiera is great in many ways, there are a few things it could do better. One recurring point of friction is integration with other management tools, such as orchestration and deployment tools. This is compounded when the configuration information for a node is determined at provision time by the user or derived by other means. Those tools then need a way to run git to check that data out, and then check it back in. The lack of an API can make this a painful process involving jump boxes and many more moving parts (to break).
One of Consul's features is a key/value store that can be accessed via API, CLI, or web UI. Orchestration systems, such as vRealize Orchestrator or Terraform, can add or update values for nodes as they provision them, without a lot of extra work or moving parts. The Puppet master can then access those values via a Hiera lookup, using a Forge module by Craig Dunn. While this module does not specifically mention Hiera integration, it does work well with it. Below is an example:
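A Hiera 5 hierarchy level using Craig Dunn's hiera_http backend might look something like the sketch below. This is illustrative only: the Consul address, the `configuration/...` key paths, and the assumption that each key stores a JSON document are mine, and the exact options depend on the module version (see its README).

```yaml
# hiera.yaml (version 5) on the Puppet master -- a minimal sketch.
# Assumes the hiera_http module is installed and each Consul key
# holds a JSON document of key/value pairs.
version: 5
hierarchy:
  - name: "Consul key/value store"
    lookup_key: hiera_http
    uris:
      # Consul's ?raw parameter returns the stored value directly,
      # without the JSON/base64 wrapper of the normal KV API.
      - "http://127.0.0.1:8500/v1/kv/configuration/%{trusted.certname}?raw"
      - "http://127.0.0.1:8500/v1/kv/configuration/common?raw"
    options:
      output: json      # parse the returned document as JSON
      ignore_404: true  # a missing key is not an error, just a miss
  - name: "Local YAML data"
    data_hash: yaml_data
    path: "common.yaml"
```

With a level like this in place, a `lookup('consul_message')` on the master would fall through the Consul URIs before reaching the local YAML data.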
If you want to see how this looks during a lookup, you can use `puppet lookup --explain`. This shows how the Hiera lookup went through the various hierarchy levels to find the `consul_message` key that you are looking for:
From the Consul UI, you can see and manage the k/v pairs (with the right authentication, of course). In this screenshot, you can see that I've put the `consul_message` key into the key/value store.
The one issue I have had with this method is managing the lookup_options. Normally these can be placed in the .yaml files inside your Hiera data folder, but since this is a more dynamic lookup, they are pulled from the Consul key/value store. When you query the k/v store, the answer you get back is a JSON object, with the value of the key you searched for in the 'Value' field, base64 encoded. Since Hiera expects a simple hash to come back, you'll get errors like "lookup_options should be a hash" if you try to add a key for lookup_options. I'm currently working on a patch that should help address this, but for now, it's a limitation. Luckily, there are some other ways to provide those configurations, which are outlined in the lookup docs.
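To make the shape of that response concrete, here is a small sketch of what decoding a Consul KV answer looks like. The sample payload is fabricated to match the documented shape of `GET /v1/kv/<key>`; it is not from a live server, and the key path is illustrative.

```python
import base64
import json

# A sample response shaped like Consul's KV API (GET /v1/kv/<key>).
# The real API returns a JSON array of entries whose 'Value' field
# is base64 encoded -- which is why Hiera can't consume it directly.
sample = json.dumps([{
    "Key": "configuration/common/consul_message",
    "Value": base64.b64encode(b"hello from consul").decode("ascii"),
    "Flags": 0,
}])

def kv_value(response_body):
    """Extract and base64-decode the 'Value' field of the first entry."""
    entry = json.loads(response_body)[0]
    return base64.b64decode(entry["Value"]).decode("utf-8")

print(kv_value(sample))  # -> hello from consul
```

This is the unwrapping step that the Hiera backend has to do for you, and why a plain hash (like lookup_options) doesn't survive the round trip unmodified.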
There's another daemon, consul-template, which allows you to update configuration files on your system with values from the Consul k/v store. The advantage this has over a native Puppet implementation is speed and native integration with Consul. Say, for example, you are managing DNS servers and one of them goes down. While DNS will natively fail over to the next server, that failover causes a delay. If you want to change the DNS order on many servers very quickly, you can use consul-template.
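A template for that DNS scenario could be sketched as follows. The `config/dns/nameservers` KV path is an assumption of mine; `ls` is consul-template's function for listing the pairs directly under a KV prefix.

```
# resolv.conf.ctmpl -- a minimal consul-template sketch.
# Re-renders whenever keys under config/dns/nameservers change.
{{ range ls "config/dns/nameservers" }}nameserver {{ .Value }}
{{ end }}
```

You would then run something like `consul-template -template "resolv.conf.ctmpl:/etc/resolv.conf"`, and reordering the keys in Consul propagates to every watching node within moments, rather than waiting on a Puppet run.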
One of the concerns I have heard about using Consul, either in a template or as an addition to (or replacement of) Hiera, is that it natively provides no method of tracking the changes that are made; it only enforces who can make changes via ACLs. One implementation I've seen that addresses this well is hosting your Consul data, or perhaps a subset of it, in a Git server: whenever there is a commit, the data is validated and then loaded into Consul. This does go against some of the reasons to use Consul that I listed above, so it may only make sense to have it manage a specific folder or folders, and still leave some dynamic k/v data for the applications that need it.
Consul itself can also be managed with this Forge module by Kyle Anderson. It can be configured to manage not only the open-source version but also Consul Enterprise. A couple of limitations I've run into with it are around the default service definition not being extensible enough to allow for newer features, like Connect. The main configuration file does allow this via a config_hash that you can pass to the module; whatever you pass in is used directly in the JSON configuration file. I've recently submitted a pull request to add the same ability to the service definitions, which was merged today, so it should be in the next release of the module.
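Passing extra settings through config_hash looks something like the sketch below. The datacenter name and data directory are illustrative, and I'm assuming a module version recent enough that Consul accepts the `connect` stanza (Consul 1.2+).

```puppet
# A minimal sketch: anything in config_hash is written straight
# into Consul's JSON configuration file, so newer options like
# Connect can be enabled even before the module models them.
class { 'consul':
  config_hash => {
    'datacenter' => 'dc1',
    'data_dir'   => '/opt/consul',
    'server'     => true,
    'connect'    => {
      'enabled' => true,
    },
  },
}
```

This pass-through design is what makes the main configuration flexible; the service definitions lacked the same escape hatch, which is what the pull request adds.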
A missing feature of the Forge module is managing the license for Consul Enterprise, but I'm working on another PR now that will add that. Two other longer-term features will be ACL and intention management. My end goal is to make sure the OSS and Enterprise versions are both fully supported options.
Standing up the initial bootstrap server is pretty straightforward; there are just a few variables to pass in:
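My original snippet isn't reproduced here, so the following is a minimal sketch of a single-node bootstrap configuration using the same module; the node name, datacenter, and paths are illustrative.

```puppet
# A minimal single-node bootstrap sketch (values are illustrative).
# bootstrap_expect => 1 lets this first server elect itself leader,
# and ui => true serves the web UI from the same node.
class { 'consul':
  config_hash => {
    'bootstrap_expect' => 1,
    'datacenter'       => 'dc1',
    'data_dir'         => '/opt/consul',
    'node_name'        => 'consul-bootstrap',
    'server'           => true,
    'ui'               => true,
  },
}
```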
This would be for a bootstrap server to get the cluster up and going, and it's where I normally have the UI running, so I can immediately log in and see what the cluster is doing.
As you can see from the examples above, HashiCorp Consul and Puppet are very complementary products with only a slight overlap in functionality—but for different use cases. If you are looking to have more dynamic or API-driven Hiera data, Consul is definitely worth looking at since it’s fairly easy to deploy and can be managed by Puppet itself.
Looking to stay on top of the latest and greatest in enterprise technology? Follow AHEAD on Twitter and LinkedIn for tech news from our experts and partners.
Lastly, check out more demos like this one in The LAB and subscribe for updates on new resources.