Deploying Packer Images with Dynamic Secrets from HashiCorp Vault

Using Packer to automate the build process for machine images is awesome. I have found great success automating image builds across many environments, including VMware, AWS, Azure, and GCP, but one thing has always been a bit of a concern for me: storing the credentials. Each environment requires a different set of credentials, and to access them easily I typically store them as environment variables or in a local file on my computer. Something like this has been the norm for my AWS deployments:

$ export AWS_ACCESS_KEY_ID="awsaccesskey"
$ export AWS_SECRET_ACCESS_KEY="awssecretkey"

A better way

As the number of environments and credentials has grown, I have found a need to store and access them in a better way. Ideally, I want to store credentials in a central location, look them up when needed for a deployment, and rotate them when the job at hand is done. In short, I would like a set of rotating credentials that can be called and used within my Packer automation workflows.

Enter Vault

HashiCorp, the creators of Packer, also have a secrets management product called Vault. Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. Vault supports my idea of rotating credentials, or as they call them, ‘dynamic secrets’, to minimize the length of time a set of credentials exists. Vault supports a number of secrets engines to ease integration, including AWS, Azure, GCP, databases, and Active Directory. (There are a ton of other engines available, but these will get me started.)

In this case, we will utilize Vault’s AWS secrets engine to generate dynamic, on-demand AWS access credentials for my Packer AMI builds.

Enabling The AWS Secrets Engine in Vault

The AWS secrets engine can be enabled via the Vault command line or the UI. Below is a quick video capturing the steps using the UI.

[Video: vault_packer_awsengine.gif, enabling and configuring the AWS secrets engine in the Vault UI]
  1. Enable the AWS Secrets Engine

  2. Configure the AWS Secrets Engine by supplying AWS credentials that give Vault the ability to dynamically manage IAM users.

  3. Create a ‘Packer’ role and specify the minimum set of permissions Packer needs to build AMIs. This is done by attaching the appropriate policy to the Packer role, which can be copied and pasted from the policy outlined in Packer’s AWS AMI builder documentation.

  4. Generate dynamic credentials in AWS that give Packer the access it needs to build an AMI.

  5. Revoke the credentials at any time.

If so inclined, this same series of steps can be performed via the Vault command line.
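
As a rough sketch (the role name packer, the policy file packer-policy.json, and the example region are my placeholders; adjust them to your environment):

    $ # Enable and configure the AWS secrets engine
    $ vault secrets enable aws
    $ vault write aws/config/root \
        access_key=AKIA... secret_key=... region=us-east-1

    $ # Create the 'packer' role, attaching the policy from Packer's
    $ # AWS AMI builder documentation
    $ vault write aws/roles/packer \
        credential_type=iam_user \
        policy_document=@packer-policy.json

    $ # Generate dynamic credentials, and revoke them at any time
    $ vault read aws/creds/packer
    $ vault lease revoke aws/creds/packer/<lease_id>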

Build Your AMI Using Packer With Dynamic AWS Credentials

Now that we have the Vault and AWS integration working, we are ready to use the dynamic credentials in our Packer build. We will leverage Packer’s user variables to pass in our dynamic secrets by means of a variable file called awskeys.json.

[Video: vault_packer_build.gif, running the Packer build with dynamic Vault credentials]


  1. Generate a set of dynamic credentials for Packer within Vault.

  2. Save the credentials to a variable file called awskeys.json. Vault provides these in JSON format, which is what Packer expects. (A scripted version of this step appears after the build command below.)

    {
      "accessKey": "AKIAIMQUVKMCSRB5NEZA",
      "secretKey": "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",
      "leaseId": "aws/creds/packer/1pf3nMh8KEEJtQQGADFtGYAE"
    }
  3. Specify the matching variable names in the Packer build template. The builder stanza of my Packer template for creating a vBrisket AMI looks like this:

    {
      "builders": [{
        "access_key": "{{ user `accessKey`}}",
        "secret_key": "{{ user `secretKey`}}",
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-0f9cf087c1f27d9b1",
        "instance_type": "t2.medium",
        "ssh_username": "ubuntu",
        "ami_name": "vbrisket-image {{timestamp}}",
        "ami_description": "vBrisket Image",
        "ami_groups": ["all"]
      }]
    }

Run a Packer build specifying the variable and template file:

packer build -var-file=awskeys.json vbrisket.json
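
If you would rather script step 2 than use the UI, here is a quick sketch that shapes the Vault CLI output into the awskeys.json format above (it assumes the role is named packer and that jq is installed):

    $ vault read -format=json aws/creds/packer \
        | jq '{accessKey: .data.access_key, secretKey: .data.secret_key, leaseId: .lease_id}' \
        > awskeys.json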

Vault will automatically expire or rotate these credentials based on their lease time, so there is no harm in showing you my credentials; they will be revoked and become utterly useless after they have served their purpose. This workflow can be further automated using consul-template and/or envconsul, but that is a subject for a different post. Happy building (securely).

Creating A CloudMapper Virtual Appliance using Packer

One of my favorite visualization tools for diagramming Amazon Web Services (AWS) environments is Duo CloudMapper. CloudMapper helps you understand visually what exists in your AWS accounts by running a collection against the environment and providing an interactive web page. This is extremely handy for identifying possible network misconfigurations, along with a slew of other benefits. For a full listing of why I like this tool, check out my post on How to Visualize Your Cloud Deployments with CloudMapper.

Despite its power, one of the challenges I have found is simply getting it installed and working. CloudMapper is open source built upon other open source products, and I have found that there are inevitably build and dependency issues that suck up my time before I can simply use the tool. For these reasons, and to make things easier in general, I chose to create and deploy CloudMapper as a virtual appliance.

Building the Virtual Appliance

I utilized Packer to provision my CloudMapper virtual appliance. Packer is excellent for creating machine images for multiple platforms from a single source configuration. In this case, we will build an Amazon Machine Image (AMI) with Packer, which will take care of all package installation and dependencies during the build. You can learn more about all the Packer goodness on the HashiCorp website, and Paul Kirby provides a nice overview in his Packer Pluralsight course.

  1. Install Packer

  2. Download the cloudmapper.packer template from my GitHub account. (Packer templates are simply JSON files that specify the various components used to create the machine image and where the built image will be saved. In our case we will be creating and deploying our virtual appliance into AWS, but Packer comes with support to build images for Amazon EC2, CloudStack, DigitalOcean, Docker, Google Compute Engine, Microsoft Azure, QEMU, VirtualBox, VMware, and more. A minimal sketch of such a template appears below, after this list.)

  3. Specify AWS Credentials for creating our virtual appliance. There are a number of ways to accomplish this but we will use environment variables.

       $ export AWS_ACCESS_KEY_ID="awsaccesskey"
       $ export AWS_SECRET_ACCESS_KEY="awssecretkey"
  4. Build the image.

    $ packer build -var aws_region="us-west-2" -var ami_id="ami-6cd6f714" -var python_version="3.5.6" cloudmapper.packer
        # aws_region is where the image will be stored.
        # ami_id is the base Amazon Linux image in the region.
        # python_version is the Python version of your choice.

    There are currently some issues with CloudMapper and Python 3.7, so I am using the recommended version, 3.5.6.

  5. The build process will take ~10-15 minutes, as it needs to pull down and compile all of the components. Once it is complete, Packer will notify you of your unique AMI ID, which can now be used for deployment.

[Image: packami.png, Packer output showing the new AMI ID]
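
For reference, here is a minimal sketch of the shape of a template like cloudmapper.packer (illustrative only; the provisioner script name is a stand-in, and the real template lives in my GitHub repo):

    {
      "variables": {
        "aws_region": "us-west-2",
        "ami_id": "ami-6cd6f714",
        "python_version": "3.5.6"
      },
      "builders": [{
        "type": "amazon-ebs",
        "region": "{{ user `aws_region` }}",
        "source_ami": "{{ user `ami_id` }}",
        "instance_type": "t2.medium",
        "ssh_username": "ec2-user",
        "ami_name": "cloudmapper-appliance {{timestamp}}"
      }],
      "provisioners": [{
        "type": "shell",
        "environment_vars": ["PYTHON_VERSION={{ user `python_version` }}"],
        "script": "install_cloudmapper.sh"
      }]
    }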

Deploying the Virtual Appliance

Now that the image for our virtual appliance is available in AWS, let’s deploy it and run CloudMapper. My preferred way to deploy would be with Terraform, but for the purposes of this post we will step through the manual steps.

  • Launch an instance using the newly created CloudMapper image. You can accept the defaults, giving your instance a public IP with SSH access.

[Image: myami.png, launching an instance from the CloudMapper AMI]

Configure CloudMapper by logging in via SSH and performing the final initialization steps. (While these could be automated and built into the image, I get sensitive about saving AWS credentials anywhere, even if my image is private. I prefer to specify them when needed.)

  • $ aws configure

    You can specify a full-access account to run CloudMapper, but I like least privilege, so I have set up a “Visualization” IAM user with the privileges specified in the CloudMapper readme (see the sketch below the screenshots).

[Images: cloudmapperiam.png and myIAMAccount.png, the “Visualization” IAM user and its permissions]
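
As a hypothetical CLI version of that setup (check the CloudMapper readme for the exact policy it calls for; the AWS-managed SecurityAudit policy is shown here only as an illustrative read-only choice):

    $ aws iam create-user --user-name Visualization
    $ aws iam attach-user-policy --user-name Visualization \
        --policy-arn arn:aws:iam::aws:policy/SecurityAudit
    $ aws iam create-access-key --user-name Visualization
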
  • Configure CloudMapper’s account information in the config.json file to match your AWS account:

    $ cd ~/cloudmapper
    $ pipenv run python3 cloudmapper.py configure add-account --config-file config.json --name AWS_USERNAME --id AWS_ACCOUNT_ID
       # AWS_USERNAME is the “friendly name” tied to the IAM account.
       # AWS_ACCOUNT_ID is your 12-digit AWS account ID.

  • Run CloudMapper’s collection against the environment. The collection phase can take some time, as it pulls all of the metadata for your entire AWS account across all components and regions.

    $ pipenv run python3 cloudmapper.py collect --account AWS_USERNAME
  • Prepare the results and launch the webserver to display them.

    $ pipenv run python3 cloudmapper.py prepare --config config.json --account AWS_USERNAME
    $ pipenv run python3 cloudmapper.py webserver --public

  • Create and attach a security group to the instance to make the site publicly available (a CLI sketch follows this list).

[Images: securitygroupcloudmapperweb.png and securitygroupcloudmapperwebassign.png, creating and attaching the web security group]
  • Browse to the public DNS address of your virtual appliance on port 8000.
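
If you prefer the CLI to the console for the security group step, a rough sketch (the group name and IDs are placeholders, and this assumes the default VPC):

    $ aws ec2 create-security-group --group-name cloudmapper-web \
        --description "CloudMapper webserver access"
    $ aws ec2 authorize-security-group-ingress --group-name cloudmapper-web \
        --protocol tcp --port 8000 --cidr 0.0.0.0/0
    $ # Attach it to the running instance (note: --groups replaces the
    $ # instance's current security group list)
    $ aws ec2 modify-instance-attribute --instance-id <instance-id> --groups <sg-id>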

Please note that these steps run the instance with a publicly available website. You can certainly deploy it to a private subnet and access it through a bastion server, which is recommended. It would also make sense to put the site behind a login, which I have noted as an opportunity for further improvement. Be sure to stop the instance when you are done using it.

[Image: devstagingprod_cloudmapper.png, CloudMapper’s visualization of the environment]

Further Improvements

Having a readily available virtual appliance that just works is perfect, but there are some further improvements that I think would be handy:

  • Create a Docker image of CloudMapper that can be run as a container. (Some folks have already built this.)

  • Save the collection data to an external volume so that it doesn’t live in the running appliance.

  • Create a virtual appliance that can be deployed on other Packer-supported platforms, namely vSphere and Azure.

  • Lock down the website behind a username and password.