Automate your network configuration with Consul-Terraform-Sync
Consul-Terraform-Sync (CTS) allows you to build integrations that automatically apply network and security infrastructure changes in reaction to changes in the Consul service catalog. CTS monitors the services in the Consul catalog and triggers Terraform runs to automate your network infrastructure.
You can configure CTS to execute one or more automation tasks. Each task consists of a runbook automation written as a compatible Terraform module using resources and data sources for the underlying network infrastructure.
In this tutorial, you will learn how to configure CTS to connect to your Consul cluster and monitor the Consul catalog for changes. Once CTS detects a change to the service mesh, it will trigger Terraform to update security group rules for a jumphost instance. These rules will allow the instance to communicate with the related services from the Consul catalog.
Prerequisites
The tutorial assumes that you are familiar with Consul and its core functionality. If you are new to Consul, refer to the Consul Getting Started tutorials collection.
For this tutorial, you will need:
- An HCP account configured for use with Terraform
- An AWS account configured for use with Terraform
- git >= 2.0
- aws-cli >= 2.0
- terraform >= 1.4
- jq >= 1.6
Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
Change into the directory that contains the complete configuration files for this tutorial.
This repository contains Terraform configuration to spin up the initial infrastructure and all files to deploy Consul, the sample application, and the API Gateway resources.
Here, you will find the following Terraform configuration:
- `instance-scripts/` directory contains the provisioning scripts and configuration files used to bootstrap the EC2 instances and CTS
- `provisioning/` directory contains the CTS Terraform module, as well as configuration file templates
- `application-instance.tf` defines the EC2 application instances provisioned with Nginx
- `cts-instance.tf` defines the EC2 CTS instance
- `hcp.tf` defines the HashiCorp Virtual Network (HVN) and HCP cluster resources
- `outputs.tf` defines outputs you will use to authenticate and connect to your EC2 instances
- `providers.tf` defines the AWS and HCP provider definitions for Terraform
- `variables.tf` defines variables you can use to customize the tutorial
- `terraform.tfvars` defines the actual values of the variables
- `vpc.tf` defines the AWS VPC resources
Using this GitHub repository, you will provision the following resources:
- An HCP HashiCorp Virtual Network (HVN)
- An HCP Consul Dedicated cluster
- An AWS VPC
- An AWS key pair
- An AWS EC2 instance running Consul server agent and CTS
- An AWS EC2 instance with an nginx application deployment
Deploy your infrastructure
With these Terraform configuration files, you are ready to deploy your infrastructure.
Initialize your Terraform configuration to download the necessary providers and modules.
Then, create the infrastructure. Confirm the run by entering `yes`.
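The two standard Terraform commands for these steps, for reference:

```shell
terraform init
terraform apply
```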
Note
The default target AWS region to deploy is `us-east-2`. If you wish to deploy to another region, modify the `terraform.tfvars` file accordingly.
It will take a few minutes to deploy your infrastructure. Once the deployment completes, Terraform will return a list of outputs you will use to complete the tutorial.
Terraform deployed the infrastructure for this tutorial, which includes your AWS VPC, HCP HVN, and HCP Consul Dedicated cluster.
In order to log on to the instances, configure your SSH agent to use the correct SSH key identity file.
Export the CTS instance IP into a variable for further use.
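These two steps might look like the following sketch. The key path and the Terraform output name are assumptions for illustration; use the values from your clone of the repository and the `terraform output` list.

```shell
# Key file path is an assumption for illustration
ssh-add ./keys/cts-tutorial.pem

# Output name is an assumption; check `terraform output` for the real name
export CTS_IP=$(terraform output -raw cts_ip)
```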
Configure the Consul ACL system for CTS
In this section, you will review the ACL policies required for CTS, define a policy with sufficient privileges, and create a token with the same privileges for CTS to use when communicating with the Consul cluster.
In production environments, we recommend enabling access control lists (ACLs) to secure your Consul deployment. When ACLs are enabled, you need to pass a token to CTS so that it can access information from Consul.
Review `cts-policy.hcl` for the ACL policies required for CTS to interact with Consul. Notice that CTS requires permissions to register itself as a service, write Terraform state in the Consul KV, and observe updates to Consul services and nodes.
Consul CTS service privileges
CTS automatically registers itself as a service with Consul, which requires write permissions for the CTS service. The name of the CTS service defaults to `Consul-Terraform-Sync`. Refer to the `consul.service_registration` configuration for options to change this name or to disable this feature.
The policy in `cts-policy.hcl` grants write permissions for the CTS service using the default service name.
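Such a rule might look like the following sketch; the lowercase service name is an assumption about how CTS registers itself, not necessarily the tutorial's exact policy file.

```hcl
service "consul-terraform-sync" {
  policy = "write"
}
```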
Consul KV privileges
By default, CTS uses Consul as a backend for Terraform. As a result, the token needs permissions to store Terraform state inside Consul KV and to use sessions to ensure locking during the Terraform state changes. However, you can configure the path in the `driver.terraform.backend` section of the configuration.
Note
Consul is the default backend for Consul-Terraform-Sync, and it does not support encryption. In a production environment, we recommend using a Terraform backend that supports encryption.
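A sketch of the KV and session rules, assuming the default `consul-terraform-sync/` state path; your `cts-policy.hcl` may use a different prefix.

```hcl
key_prefix "consul-terraform-sync/" {
  policy = "write"
}
session_prefix "" {
  policy = "write"
}
```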
Consul catalog privileges
CTS needs access to the Consul catalog to retrieve information about services registered with Consul. You need a token that can read the services you want to monitor for CTS. If you want CTS to only have access to a limited set of services, define them specifically in the policy rules.
For Consul Enterprise, the token also needs access to the service's namespaces.
The policy in `cts-policy.hcl` only grants access to services that start with `nginx`. In addition, the policy grants read permissions over all nodes. However, if there is a common prefix for all the nodes that host your services, we recommend restricting the rules to that node prefix.
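Based on that description, the catalog rules might look like this sketch:

```hcl
# Read access limited to services starting with "nginx"
service_prefix "nginx" {
  policy = "read"
}

# Read access over all nodes; narrow the prefix if your nodes share one
node_prefix "" {
  policy = "read"
}
```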
Generate a Consul token for CTS
Now, create the policy for CTS with the minimum required permissions.
After you create the policy, generate a token associated with this policy. Assign the token's SecretID to an environment variable to use later.
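These steps might look like the following sketch; the policy name and token description are assumptions for illustration.

```shell
# Create the policy from the rules file (policy name is an assumption)
consul acl policy create -name cts-policy -rules @cts-policy.hcl

# Generate a token attached to the policy and capture its SecretID
export CTS_TOKEN=$(consul acl token create \
  -description "CTS token" \
  -policy-name cts-policy \
  -format=json | jq -r '.SecretID')
```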
Modify the CTS configuration file to use the token you have just created.
Review CTS configuration file
You can configure CTS with HCL or JSON configuration files. In a CTS configuration file, you will find the following components:
- Consul configuration to authenticate and interact with your Consul cluster
- General configuration specific to the CTS daemon, for example the logging level or port selection
- Terraform driver section to relay provider discovery and installation information to Terraform
- Task definition to configure which data the CTS daemon should monitor for changes, and what actions to perform once a change is detected
For the full list of available options, refer to the CTS documentation.
Inspect the CTS configuration file on the CTS instance.
Consul block
The `consul` block configures CTS so it can query the Consul catalog when it executes a task. This tutorial pre-populates the configuration with the Consul management token. Update this value to use the ACL token you generated earlier, scoped for CTS.
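A minimal sketch of this block; the address and the token placeholder are assumptions, since CTS and Consul run on the same instance in this tutorial.

```hcl
consul {
  address = "localhost:8500"
  token   = "<CTS_ACL_TOKEN>"
}
```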
In a fully secured mTLS environment, we recommend including the certificates required to communicate with Consul. In this tutorial, since CTS and Consul are hosted on the same virtual machine, traffic stays local and unencrypted.
We recommend hosting CTS on a dedicated node with a Consul agent. This ensures dedicated resources for network automation and enables you to fine-tune security and privilege separation between the network administrators and the other Consul agents.
Global configs
You can configure the CTS daemon using top-level options. For example, you can configure:
- The `log_level` parameter specifies how detailed you want CTS logging to be.
- The `port` parameter specifies the port on which CTS exposes its API interface.
- The `syslog` parameter specifies the syslog server for logging. This section can be useful when you configure CTS as a daemon, for example on Linux using `systemd`.
- The `id` parameter specifies the name under which CTS registers itself as a service in the Consul catalog.
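A sketch combining these options; the values are illustrative assumptions (8558 is the CTS default API port).

```hcl
log_level = "INFO"
port      = 8558
id        = "cts-01"

syslog {
  enabled = true
}
```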
Driver "terraform" block
The `driver` block configures the subprocess used by CTS to propagate infrastructure changes. CTS requires the Terraform driver, which defines the Terraform version and the providers to use.
By default, CTS uses Consul to store Terraform state files and uses the connection information defined in the `consul` block. If you want to use a different Terraform backend, or to specify a different Consul datacenter as the backend, use the `backend` section of the configuration. CTS supports all standard Terraform backends. Refer to the Terraform backend documentation to learn about available options.
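A sketch of the driver block; the Terraform version shown is an assumption.

```hcl
driver "terraform" {
  # Version is an illustrative assumption
  version = "1.4.6"

  # Default backend: Terraform state stored in Consul KV
  backend "consul" {
    gzip = true
  }
}
```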
Task block
A `task` block configures the task to run as automation for the defined services. You can explicitly define services in the task's `condition` block. You can specify multiple `task` blocks in case you need to configure multiple tasks.
In the following example, CTS will run the `cts-jumphost-module` every minute and collect data from the Consul catalog about the `nginx` service and whether there are any changes to it. The `variable_files` parameter passes more information to the module, such as the AWS region and the ID of the security group that Terraform will sync rules to.
The `cts-jumphost-module` module deploys a ruleset for a security group attached to a jumphost EC2 instance that allows only outbound SSH communication to the services defined in CTS. In this case, the defined service is `nginx`.
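A sketch of such a task block; the module path and variable file path are assumptions, while the task name `jumphost-ssh` matches the task used later in this tutorial.

```hcl
task {
  name           = "jumphost-ssh"
  description    = "Sync jumphost security group rules with nginx instances"
  module         = "./cts-jumphost-module"          # path is an assumption
  providers      = ["aws"]
  variable_files = ["/opt/cts/jumphost.tfvars"]     # path is an assumption

  condition "services" {
    names = ["nginx"]
  }
}
```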
Refer to the Task Execution documentation for a full list of values that can trigger a task to run.
CTS will attempt to execute each task when it starts to synchronize infrastructure with the current state of Consul. CTS will stop and exit if any error occurs while it prepares the automation environment or executes the task for the first time.
Start CTS
CTS provides different running modes, including some that can be useful to safely test your configuration and the changes that are going to be applied.
The default mode is daemon mode. In daemon mode, CTS passes through a once-mode phase, in which it tries to run all the tasks once before turning into a long-running process. During the once-mode phase, the daemon will exit with a non-zero status if it encounters an error. After CTS completes the once-mode phase, it will log errors rather than exit when it encounters them.
You will run CTS in daemon mode via systemd on the CTS instance. Review the systemd unit file for CTS.
Then, start CTS.
Inspect the status of the CTS process. After a successful startup, the state will be `active (running)`.
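Assuming the systemd unit is named `cts` (an assumption for illustration), these two steps might look like:

```shell
sudo systemctl start cts
sudo systemctl status cts
```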
Leave CTS running for a minute so that it completes the `jumphost-ssh` task. Open the CTS logs in real time and look for the following entries indicating the task has completed.
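Assuming the same hypothetical `cts` unit name, you can follow the logs with:

```shell
sudo journalctl -u cts -f
```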
Review automation results
Since there is currently only one `nginx` instance, the security group ruleset applied to the jumphost instance will contain only one rule. Inspect the contents of the security group attached to the jumphost instance. Notice there is only one item in `IpRanges`.
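As a sketch of this inspection, the following pipes a hypothetical, abbreviated `aws ec2 describe-security-groups` response through `jq` to count the CIDR entries in the outbound ruleset; the sample JSON and CIDR are assumptions for illustration.

```shell
# Hypothetical, abbreviated response from:
#   aws ec2 describe-security-groups --group-ids <jumphost-sg-id>
sg_json='{"SecurityGroups":[{"IpPermissionsEgress":[{"IpProtocol":"tcp","FromPort":22,"ToPort":22,"IpRanges":[{"CidrIp":"172.31.16.10/32"}]}]}]}'

# Count the CIDR entries in IpRanges; expect one rule per nginx instance
rule_count=$(echo "$sg_json" | jq '[.SecurityGroups[].IpPermissionsEgress[].IpRanges[]] | length')
echo "$rule_count"
```

With one `nginx` instance registered, the count printed is `1`.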
Save the address of the jumphost into an environment variable.
Save the address of the `nginx` application instance into an environment variable.
Log on to the `nginx` application instance via the jumphost instance and execute a sample command to test the jumphost function.
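A sketch of this step using SSH's `-J` (ProxyJump) option; the usernames and variable names are assumptions for illustration.

```shell
# Usernames and variable names are assumptions for illustration
ssh -J ubuntu@"$JUMPHOST_IP" ubuntu@"$NGINX_IP" \
  'curl -s -o /dev/null -w "%{http_code}\n" localhost:80'
```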
Review files created by Consul-Terraform-Sync daemon
When CTS starts, it runs Terraform inside the `working_dir` defined in the CTS configuration.
Inside that folder, Terraform will create a workspace for each task defined in the configuration.
You will find the following files in this directory:
- The `main.tf` file contains the Terraform block, provider blocks, and a module block calling the module configured for the task.
- The `providers.auto.tfvars` file contains the required Terraform providers you defined in the `drivers` block of the CTS configuration.
- The `terraform.tfvars` file contains the services input variables from the Consul catalog. CTS periodically updates this file to reflect the current state of the configured set of services for the task.
- The `terraform.tfvars.tmpl` file serves as a template for rendering the information retrieved from the Consul catalog into the `terraform.tfvars` file.
- The `variables.tf` file contains the definition of the services input variables from the Consul catalog, as well as the intermediate variables used to dynamically configure providers.
- The `variables.module.tf` file contains the manually added variable definitions for extra information for the module.
- The `variables.auto.tfvars` file contains the actual values of the manually added variables for extra information for the module.
The `terraform.tfvars` file created by CTS will be similar to the following:
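An abbreviated, illustrative sketch of the generated file; the node name, address, and field set are assumptions about a typical CTS services variable.

```hcl
services = {
  "nginx.ip-172-31-16-10.dc1" : {
    id      = "nginx"
    name    = "nginx"
    node    = "ip-172-31-16-10"
    address = "172.31.16.10"
    port    = 80
    status  = "passing"
    tags    = []
  },
}
```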
CTS auto-generates the Terraform configuration files. Any manual changes to these files may not be preserved and could be overwritten by a subsequent update.
Next, you will review the results of the CTS automation. Leave your session on the CTS instance open so you can inspect the CTS logs later, and open a new terminal session on your local machine.
Scale up the application deployments
Scale up the deployment of the `nginx` service to observe how CTS triggers updates to the security group.
Update the `application_instances_amount` variable in `terraform.tfvars` to `2`.
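After the change, the relevant line in `terraform.tfvars` reads:

```hcl
application_instances_amount = 2
```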
Next, apply your changes to deploy one more instance of `nginx`. Confirm the run by entering `yes`.
Wait a couple of minutes for the new `nginx` instance to deploy, boot up, and join the Consul cluster as a client node. When the new instance is ready and operational, observe the output in the log tracking session on the running CTS instance:
Verify the current ruleset consists of two rules for the currently deployed `nginx` instances:
Save the address of the second application instance into an environment variable.
Log on to the second application instance through the jumphost instance.
When you scaled up the `nginx` deployment, CTS reacted to the change and refreshed the contents of the security group ruleset.
Clean up resources
Destroy the infrastructure via Terraform. Confirm the run by entering `yes`.
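The standard command for this step:

```shell
terraform destroy
```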
It may take several minutes for Terraform to delete your infrastructure. Once Terraform completes, you should get the following output.
Next steps
In this tutorial, you learned how to use CTS to automate your network infrastructure and build an integration that automatically applies security configuration in reaction to changes in the Consul service catalog.
CTS executed a task that continuously polls for changes to the services in the Consul catalog and triggers Terraform runs. These runs deploy security group configuration entries that allow communication to the preconfigured list of application instances.
Refer to the Network Infrastructure Automation documentation to learn more about configuration of CTS.