The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.
Vault, by HashiCorp, is an open-source tool for securely storing secrets and sensitive data in dynamic cloud environments. It provides strong data encryption, identity-based access using custom policies, and secret leasing and revocation, as well as a detailed audit log that is recorded at all times. Vault also features an HTTP API, making it the ideal choice for storing credentials in scattered service-oriented deployments, such as Kubernetes.
Packer and Terraform, also developed by HashiCorp, can be used together to create and deploy images of Vault. Within this workflow, developers can use Packer to write immutable images for different platforms from a single configuration file, which specifies what the image should contain. Terraform will then deploy as many customized instances of the created images as needed.
In this tutorial, you’ll use Packer to create an immutable snapshot of the system with Vault installed, and orchestrate its deployment using Terraform. In the end, you’ll have an automated system for deploying Vault in place, allowing you to focus on working with Vault itself, and not on the underlying installation and provisioning process.
In this step, you will write a Packer configuration file, called a template, that will instruct Packer on how to build an image that contains Vault pre-installed. You’ll be writing the configuration in JSON format, a commonly used human-readable configuration file format.
For the purposes of this tutorial, you’ll store all files under ~/vault-orchestration. Create the directory by running the following command:
- mkdir ~/vault-orchestration
Navigate to it:
- cd ~/vault-orchestration
You’ll store config files for Packer and Terraform separately, in different subdirectories. Create them using the following command:
- mkdir packer terraform
Because you’ll first be working with Packer, navigate to its directory:
- cd packer
Storing private data and application secrets in a separate variables file is the ideal way of keeping them out of your template. When building the image, Packer will substitute the referenced variables with their values. Hard coding secret values into your template is a security risk, especially if it’s going to be shared with team members or put up on public sites, such as GitHub.
You’ll store them in the packer subdirectory, in a file called variables.json. Create it using your favorite text editor:
- nano variables.json
Add the following lines:
{
"do_token": "your_do_api_key",
"base_system_image": "ubuntu-20-04-x64",
"region": "fra1",
"size": "s-1vcpu-1gb"
}
The variables file consists of a JSON dictionary, which maps variable names to their values. You’ll use these variables in the template you are about to create. If you wish, you can edit the base image, region, and Droplet size values according to the developer docs.
Remember to replace your_do_api_key with your API key, which you created as part of the prerequisites, and then save and close the file.
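Before moving on, you can optionally sanity-check the file’s JSON syntax. One way to do this (not a required tutorial step) is python3’s built-in json.tool module, which is available on most Linux distributions and on macOS:

```shell
# Optional: verify that variables.json parses as valid JSON.
# Exits non-zero and prints an error if the syntax is broken.
python3 -m json.tool variables.json > /dev/null && echo "variables.json is valid JSON"
```

Packer’s own validate command (used later in this step) will also catch malformed JSON, so this check is merely a quick first pass.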
With the variables file ready, you’ll now create the Packer template itself.
You’ll store the Packer template for Vault in a file named template.json. Create it using your text editor:
- nano template.json
Add the following lines:
{
"builders": [{
"type": "digitalocean",
"api_token": "{{user `do_token`}}",
"image": "{{user `base_system_image`}}",
"region": "{{user `region`}}",
"size": "{{user `size`}}",
"ssh_username": "root"
}],
"provisioners": [{
"type": "shell",
"inline": [
"sleep 30",
"sudo apt-get update",
"sudo apt-get install unzip -y",
"curl -L https://releases.hashicorp.com/vault/1.8.4/vault_1.8.4_linux_amd64.zip -o vault.zip",
"unzip vault.zip",
"sudo chown root:root vault",
"sudo mv vault /usr/local/bin/",
"rm -f vault.zip"
]
}]
}
In the template, you define arrays of builders and provisioners. Builders tell Packer how to build the system image (according to their type) and where to store it, while provisioners contain sets of actions Packer should perform on the system before turning it into an immutable image, such as installing or configuring software. Without any provisioners, you would end up with an untouched base system image. Both builders and provisioners expose parameters for further work flow customization.
You first define a single builder of the type digitalocean, which means that when ordered to build an image, Packer will use the provided parameters to create a temporary Droplet of the defined size using the provided API key, with the specified base system image and in the specified region. The format for fetching a variable is {{user `variable_name`}}, where variable_name is the name of the variable.
Once the temporary Droplet is running, Packer will connect to it using SSH with the specified username, sequentially execute all defined provisioners, and then create a DigitalOcean Snapshot from the Droplet before deleting it.
The provisioner is of type shell, which will execute the given commands on the target. Commands can be specified either inline, as an array of strings, or defined in separate script files if inserting them into the template becomes unwieldy due to size. The commands in the template will wait 30 seconds for the system to boot up, and will then download and unpack Vault 1.8.4. Check the official Vault download page and replace the link in the commands with a newer version for Linux, if available.
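For example, if the inline command list grows too long, you could move the commands to a script file and reference it from the provisioner instead. A sketch, assuming a file named setup.sh exists next to the template (the filename is illustrative, not part of this tutorial):

```json
"provisioners": [{
    "type": "shell",
    "script": "setup.sh"
}]
```

Packer uploads the referenced script to the temporary Droplet and executes it there, which keeps the template itself short and lets you lint and version the script separately.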
When you’re done, save and close the file.
To verify the validity of your template, run the following command:
- packer validate -var-file=variables.json template.json
Packer accepts a path to the variables file via the -var-file argument.
You’ll see the following output:
Output
The configuration is valid.
If you get an error, Packer will specify exactly where it occurred, so you’ll be able to correct it.
You now have a working template that produces an image with Vault installed, with your API key and other parameters defined in a separate file. You’re now ready to invoke Packer and build the snapshot.
In this step, you’ll build a DigitalOcean Snapshot from your template using the packer build command.
To build your snapshot, run the following command:
- packer build -var-file=variables.json template.json
This command will take some time to finish. You’ll see a lot of output, which will look similar to this:
Output
digitalocean: output will be in this color.
==> digitalocean: Creating temporary RSA SSH key for instance...
==> digitalocean: Importing SSH public key...
==> digitalocean: Creating droplet...
==> digitalocean: Waiting for droplet to become active...
==> digitalocean: Using SSH communicator to connect: ...
==> digitalocean: Waiting for SSH to become available...
==> digitalocean: Connected to SSH!
==> digitalocean: Provisioning with shell script: /tmp/packer-shell464972932
digitalocean: Hit:1 http://mirrors.digitalocean.com/ubuntu focal InRelease
...
==> digitalocean: % Total % Received % Xferd Average Speed Time Time Time Current
==> digitalocean: Dload Upload Total Spent Left Speed
==> digitalocean: 100 63.5M 100 63.5M 0 0 110M 0 --:--:-- --:--:-- --:--:-- 110M
digitalocean: Archive: vault.zip
digitalocean: inflating: vault
==> digitalocean: Gracefully shutting down droplet...
==> digitalocean: Creating snapshot: packer-1635876039
==> digitalocean: Waiting for snapshot to complete...
==> digitalocean: Destroying droplet...
==> digitalocean: Deleting temporary ssh key...
Build 'digitalocean' finished after 5 minutes 6 seconds.
==> Wait completed after 5 minutes 6 seconds
==> Builds finished. The artifacts of successful builds are:
--> digitalocean: A snapshot was created: 'packer-1635876039' (ID: 94912983) in regions 'fra1'
Packer logs all the steps it took while building your template. The last line contains the name of the snapshot (such as packer-1635876039) and its ID in parentheses. Note the ID of your snapshot, because you’ll need it in the next step.
If the build process fails due to API errors, wait a few minutes and then retry.
You’ve built a DigitalOcean Snapshot according to your template. The snapshot has Vault pre-installed, and you can now deploy Droplets with it as their system image. In the next step, you’ll write Terraform configuration for automating such deployments.
In this step, you’ll write Terraform configuration for automating Droplet deployments of the snapshot containing the Vault you just built using Packer.
Before writing actual Terraform configuration for deploying Vault from the previously built snapshot, you’ll first need to configure the DigitalOcean provider for it. Navigate to the terraform subdirectory by running:
- cd ~/vault-orchestration/terraform
Then, create a file named do-provider.tf, where you’ll store the provider:
- nano do-provider.tf
Add the following lines:
terraform {
required_providers {
digitalocean = {
source = "digitalocean/digitalocean"
version = "~> 2.0"
}
}
}
variable "do_token" {
}
variable "ssh_fingerprint" {
}
variable "instance_count" {
default = "1"
}
variable "do_snapshot_id" {
}
variable "do_name" {
default = "vault"
}
variable "do_region" {
}
variable "do_size" {
}
provider "digitalocean" {
token = var.do_token
}
This file declares parameter variables and provides the digitalocean provider with an API key. You’ll later use these variables in your Terraform template, but you’ll first need to specify their values. For that purpose, Terraform supports specifying variable values in a variable definitions file, similarly to Packer. The filename must end in either .tfvars or .tfvars.json. You’ll later pass that file to Terraform using the -var-file argument.
Save and close the file.
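As an alternative to a definitions file, Terraform can also read variable values from environment variables named TF_VAR_ followed by the variable name. A sketch, using placeholder values (this is an option, not a tutorial requirement):

```shell
# Terraform picks up variables named TF_VAR_<variable_name> from the environment,
# so these would populate var.do_token and var.do_region without a -var-file flag.
export TF_VAR_do_token="your_do_api_key"
export TF_VAR_do_region="fra1"
```

Keeping secrets like the API token in environment variables can be convenient in CI pipelines, where writing them to a file on disk is undesirable.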
Create a variable definitions file called definitions.tfvars using your text editor:
- nano definitions.tfvars
Add the following lines:
do_token = "your_do_api_key"
ssh_fingerprint = "your_ssh_key_fingerprint"
do_snapshot_id = your_do_snapshot_id
do_name = "vault"
do_region = "fra1"
do_size = "s-1vcpu-1gb"
instance_count = 1
Remember to replace your_do_api_key, your_ssh_key_fingerprint, and your_do_snapshot_id with your account API key, the fingerprint of your SSH key, and the snapshot ID you noted from the previous step, respectively. The do_region and do_size parameters must have the same values as in the Packer variables file. If you want to deploy multiple instances at once, adjust instance_count to your desired value.
When finished, save and close the file.
For more information on the DigitalOcean Terraform provider, visit the official docs.
You’ll store the Vault snapshot deployment configuration in a file named deployment.tf, under the terraform directory. Create it using your text editor:
- nano deployment.tf
Add the following lines:
resource "digitalocean_droplet" "vault" {
count = var.instance_count
image = var.do_snapshot_id
name = var.do_name
region = var.do_region
size = var.do_size
ssh_keys = [
var.ssh_fingerprint
]
}
output "instance_ip_addr" {
value = {
for instance in digitalocean_droplet.vault:
instance.id => instance.ipv4_address
}
description = "The IP addresses of the deployed instances, paired with their IDs."
}
Here you define a single resource of the type digitalocean_droplet named vault. Then, you set its parameters according to the variable values and add an SSH key (using its fingerprint) from your DigitalOcean account to the Droplet resource. Finally, you output the IP addresses of all newly deployed instances to the console.
Save and close the file.
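If you would prefer a plain list of addresses instead of an ID-to-IP map, you could define an additional output using Terraform’s splat expression. A sketch, not part of the tutorial’s required configuration:

```hcl
output "instance_ips" {
  # [*] collects ipv4_address from every instance created by the count loop.
  value       = digitalocean_droplet.vault[*].ipv4_address
  description = "A plain list of the deployed instances' IP addresses."
}
```

A list output like this can be easier to feed into other tooling, such as an Ansible inventory script.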
Before doing anything else with your deployment configuration, you’ll need to initialize the directory as a Terraform project:
- terraform init
You’ll see the following output:
Output
Initializing the backend...
Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "~> 2.0"...
- Installing digitalocean/digitalocean v2.15.0...
- Installed digitalocean/digitalocean v2.15.0 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
When initializing a directory as a project, Terraform reads the available configuration files and downloads plugins deemed necessary, as logged in the output.
You now have Terraform configuration for deploying your Vault snapshot ready. You can now move on to validating it and deploying it on a Droplet.
In this section, you’ll verify your Terraform configuration using the validate command. Once it validates successfully, you’ll run apply to deploy a Droplet as a result.
Run the following command to test the validity of your configuration:
- terraform validate
You’ll see the following output:
Output
Success! The configuration is valid.
Next, run the plan command to see what Terraform will attempt when provisioning the infrastructure according to your configuration:
- terraform plan -var-file="definitions.tfvars"
Terraform accepts a variable definitions file via the -var-file parameter.
The output will look similar to:
Output
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# digitalocean_droplet.vault[0] will be created
+ resource "digitalocean_droplet" "vault" {
+ backups = false
+ created_at = (known after apply)
+ disk = (known after apply)
+ graceful_shutdown = false
+ id = (known after apply)
+ image = "94912983"
+ ipv4_address = (known after apply)
+ ipv4_address_private = (known after apply)
+ ipv6 = false
+ ipv6_address = (known after apply)
+ locked = (known after apply)
+ memory = (known after apply)
+ monitoring = false
+ name = "vault"
+ price_hourly = (known after apply)
+ price_monthly = (known after apply)
+ private_networking = (known after apply)
+ region = "fra1"
+ resize_disk = true
+ size = "s-1vcpu-1gb"
+ ssh_keys = [
+ "...",
]
+ status = (known after apply)
+ urn = (known after apply)
+ vcpus = (known after apply)
+ volume_ids = (known after apply)
+ vpc_uuid = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ instance_ip_addr = (known after apply)
...
The green + at the beginning of the resource "digitalocean_droplet" "vault" line means that Terraform will create a new Droplet called vault, using the parameters that follow. This is correct, so you can now execute the plan by running terraform apply:
- terraform apply -var-file="definitions.tfvars"
Enter yes when prompted. After a few minutes, the Droplet will finish provisioning and you’ll see output similar to this:
Output
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# digitalocean_droplet.vault[0] will be created
+ resource "digitalocean_droplet" "vault" {
...
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ instance_ip_addr = (known after apply)
...
digitalocean_droplet.vault[0]: Creating...
digitalocean_droplet.vault[0]: Still creating... [10s elapsed]
...
digitalocean_droplet.vault[0]: Creation complete after 44s [id=271950984]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
instance_ip_addr = {
"271950984" = "your_new_server_ip"
}
In the output, Terraform logs what actions it has performed (in this case, to create a Droplet) and displays its public IP address at the end. You’ll use it to connect to your new Droplet in the next step.
You have created a new Droplet from the snapshot containing Vault and are now ready to verify it.
In this step, you’ll access your new Droplet using SSH and verify that Vault was installed correctly.
If you are on Windows, you can use software such as Kitty or Putty to connect to the Droplet with an SSH key.
On Linux and macOS machines, you can use the already available ssh command to connect:
- ssh root@your_server_ip
Answer yes when prompted. Once you are logged in, run Vault by executing:
- vault
You’ll see its “help” output, which looks like this:
Output
Usage: vault <command> [args]
Common commands:
read Read data and retrieves secrets
write Write data, configuration, and secrets
delete Delete secrets and configuration
list List data or secrets
login Authenticate locally
agent Start a Vault agent
server Start a Vault server
status Print seal and HA status
unwrap Unwrap a wrapped secret
Other commands:
audit Interact with audit devices
auth Interact with auth methods
debug Runs the debug command
kv Interact with Vault's Key-Value storage
lease Interact with leases
namespace Interact with namespaces
operator Perform operator-specific tasks
path-help Retrieve API help for paths
plugin Interact with Vault plugins and catalog
policy Interact with policies
print Prints runtime configurations
secrets Interact with secrets engines
ssh Initiate an SSH session
token Interact with tokens
You can quit the connection by typing exit.
You have now verified that your newly deployed Droplet was created from the snapshot you made, and that Vault is installed correctly.
To destroy the provisioned resources, run the following command, entering yes when prompted:
- terraform destroy -var-file="definitions.tfvars"
You now have an automated system for deploying HashiCorp Vault on DigitalOcean Droplets using Terraform and Packer. You can now deploy as many Vault servers as you need. To start using Vault, you’ll need to initialize and further configure it. For instructions on how to do that, visit the official docs.
For more tutorials using Terraform, check out our Terraform content page and our How To Manage Infrastructure with Terraform series, which covers a number of Terraform topics from installing Terraform for the first time to managing complex projects.
The UI does not display the image id of the snapshot created by packer. I can only see the id of the snapshot from packer output or using doctl. Since long I’m looking for a solution to automatically set the image id (var.do_snapshot_id) in the terraform script. It would be cool if I can select the snapshot id to use by name. Something like “doctl compute snapshot list | grep <name of snapshot> | awk ’ { print $1 } '” but with DO terraform provider. Is that possible?
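One possible approach (a sketch, untested against a live account): the DigitalOcean Terraform provider offers a digitalocean_droplet_snapshot data source that can look up a snapshot by name pattern, so the ID does not have to be hard-coded in definitions.tfvars. The name_regex value below assumes Packer’s default packer-<timestamp> naming:

```hcl
data "digitalocean_droplet_snapshot" "vault" {
  name_regex  = "^packer-"
  region      = "fra1"
  most_recent = true
}

resource "digitalocean_droplet" "vault" {
  # Use the looked-up snapshot ID instead of var.do_snapshot_id.
  image = data.digitalocean_droplet_snapshot.vault.id
  # ... remaining parameters as in deployment.tf
}
```

With most_recent = true, Terraform selects the newest matching snapshot, so rebuilding the image with Packer automatically feeds the latest ID into the next terraform apply.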
Make sure to use the old style fingerprint in:
~/vault-orchestration/terraform/definitions.tfvars
e.g. use:
ssh-keygen -l -E md5 -f ~/.ssh/id_rsa.pub
to get:
b9:77:30:14:94:11:c5:97:15:23:c1:21:2c:c0:aa:3e