Using Terraform to Up Your Automation Game: Multi-Environment/Multi-Region

In the last two posts we have been working with Terraform to automate the buildout of our Virtual Private Cloud (VPC) in AWS and deploy a fleet of scalable web infrastructure. Our next step is to complete the infrastructure build so that it is identical across our development/test, staging and production environments. Each of these environments resides in a different region: development (us-west-2), staging (us-east-1) and production (us-east-2).

File Layout

In the case of deploying our development environment, our work is really close to being complete. We already have a working set of code that deploys into the us-west-2 region, contained in the main.tf and outputs.tf files. We will add one step: creating a folder structure that allows our code to be easily managed and referenced. At the top level we create a separate folder for each of our environments, and within each of these folders a sub-folder specifying the type of fleet we are deploying. In the case of our development web server deployment, the folder structure is simple and looks like this:

.
├── dev
│   └── webservers
│       ├── main.tf
│       └── outputs.tf

There are a number of benefits to laying out our files in this manner - including isolation, re-use and managing state - which Yevgeniy Brikman does an excellent job of describing. As Brikman indicates: "This file layout makes it easy to browse the code and understand exactly what components are deployed in each environment. It also provides a good amount of isolation between environments and between components within an environment, ensuring that if something goes wrong, the damage is contained as much as possible to just one small part of your entire infrastructure."

Deploy Development

Now that our file layout is the way we want it, let's deploy development. This is very simple, and something we have done a few times now. Since we moved our two files into a new folder location, we will need to initialize the deployment (terraform init), plan it (terraform plan) and finally deploy it (terraform apply).
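Run from the new dev/webservers folder, the sequence looks roughly like this (a quick sketch of the workflow, not the actual console output):

    cd dev/webservers
    terraform init     # initialize the new working directory, download providers and modules
    terraform plan     # preview the resources that will be created
    terraform apply    # build the development environment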

[Image: terraforminit_dev.png]
[Image: terraform_plan_webservers.png]
[Image: terraform_apply.png]

Once complete we can browse over to see the development deployment.

[Image: webserver_dev.png]

Deploy Staging

One of the most powerful benefits of deploying our infrastructure as code in a modular way is reusability. In fact, building out our staging environment is only a matter of a couple of steps. First we create a staging folder in which to store our files, and then we copy over our main.tf and outputs.tf files. We then make a few edits to main.tf, including updates to the region, IP address space, tags, AMI, cluster name, cluster size and key name. Looking at the differences between development and staging is as simple as running a compare between the main.tf files in the dev and staging folders. The differences are highlighted below:

[Image: dev_staging_compare.png]
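To give a rough sense of that compare, it looks something like the snippet below. Only the two regions come straight from this post; the CIDRs, names, AMI IDs, key pairs and sizes are illustrative placeholders, and the argument names are assumptions about the modules' inputs rather than the exact file contents.

    # Illustrative compare: dev/webservers/main.tf vs. staging/webservers/main.tf
    -  region       = "us-west-2"                 # development
    +  region       = "us-east-1"                 # staging

    -  cidr         = "10.0.0.0/16"               # example dev address space
    +  cidr         = "10.1.0.0/16"               # example staging address space

    -  tags         = { Environment = "dev" }
    +  tags         = { Environment = "staging" }

    -  ami          = "ami-XXXXXXXX"              # an AMI available in us-west-2
    +  ami          = "ami-YYYYYYYY"              # the equivalent AMI in us-east-1

    -  cluster_name = "webservers-dev"
    +  cluster_name = "webservers-stage"

    -  key_name     = "dev-key"
    +  key_name     = "stage-key"

    -  min_size     = 2
    +  min_size     = 4                           # example: staging runs a larger fleet
    -  max_size     = 2
    +  max_size     = 6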

Once we are happy with the updates, the sequence to deploy is exactly what we are used to. This time we will run our initialize, plan and apply from within the staging folder. Once complete, we can browse over to see the staging deployment.

[Image: webserver_stage.png]

Production Deployment

Our production environment will mimic our staging environment with only a few edits, including deployment to the us-east-2 region and a starting fleet of 8 web servers that will scale as needed. Once again leveraging infrastructure as code, we simply copy the main.tf and outputs.tf files out of staging and make our edits. The differences between staging and production are highlighted below:

[Image: staging_prod_compare.png]
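Again as a rough illustration - only the us-east-2 region and the starting count of 8 web servers come from this post; the remaining names and values are placeholders:

    # Illustrative compare: staging/webservers/main.tf vs. prod/webservers/main.tf
    -  region       = "us-east-1"                 # staging
    +  region       = "us-east-2"                 # production

    -  tags         = { Environment = "staging" }
    +  tags         = { Environment = "prod" }

    -  cluster_name = "webservers-stage"
    +  cluster_name = "webservers-prod"

    -  min_size     = 4
    +  min_size     = 8                           # production starts with 8 web servers
    -  max_size     = 6
    +  max_size     = 10                          # example ceiling for scale-out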

Now we use the power of Terraform to deploy, and voilà...production is LIVE.

[Image: webserver_prod.png]

Visualizing Our Deployment

Now that our three deployments are complete, we can see the folder structure that has been built out, maintaining a separate state for each environment.

[Image: directorytree.png]

To document and visualize our build out, I like to use hava.io, which builds architecture diagrams for AWS and Azure environments. As you can see, all three environments are active and we can drill into any of them to see the details, including pricing estimates: production ($115/month), staging ($91/month) and dev ($72/month).

[Image: my3enviornments.png]
[Image: prod_diagram.png]

Mission Complete

Our mission was to create and deploy a set of auto-scaling web servers, fronted by a load balancer, for our development, staging and production environments across three different AWS regions. Through the power of infrastructure as code we utilized Terraform to define, plan and automate a consistent set of deployments. Mission complete.


Using Terraform to Up Your Automation Game - Building the Fleet

Populating our Virtual Private Cloud

In the previous post we successfully created our Virtual Private Cloud (VPC) in AWS via infrastructure as code utilizing Terraform, which provided us the ability to stand up and tear down our infrastructure landing pad on demand.  Now that our landing pad is complete and can be deployed at any time, let's build our fleet of load balanced web servers.

Building the Fleet Using Terraform Modules

Taking a similar approach to our VPC build out, we will once again utilize Terraform modules, this time to create and build out our web server fleet. In addition to the Terraform Module Registry there are a number of different sources from which to select ready-built modules, including GitHub. For our web server cluster we will utilize a short and simple webserver_cluster module that I have made available in my GitHub terraform repository.

This module creates a basic web server cluster which leverages an AWS launch configuration and Auto Scaling group to spin up the EC2 instances that will be performing as web servers. It also places a load balancer in front of these servers, which balances traffic amongst them and performs health checks to be sure the fleet stays bulletproof. The module also configures the necessary security groups to allow HTTP traffic inbound. All we need to do is specify the size and number of the web servers and where to land them.

To call this module we simply append a block to our main.tf file that references the webserver_cluster module and specifies how our web server fleet should be built.
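The block below is a sketch of what that call can look like. The source path is a placeholder for the module's location in my GitHub terraform repository, and apart from min_size and max_size (which we adjust later in this post) the argument names - cluster_name, ami, instance_type, key_name, vpc_id, subnet_ids - are shorthand for the module's inputs rather than its exact interface:

    module "webserver_cluster" {
      # Placeholder source - point this at the webserver_cluster module in GitHub
      source = "github.com/<your-account>/terraform//webserver_cluster"

      cluster_name  = "webservers-dev"           # name stamped on the fleet's resources
      ami           = "ami-XXXXXXXX"             # image the web servers boot from
      instance_type = "t2.micro"                 # size of each server (example)
      key_name      = "dev-key"                  # key pair in case we need to connect

      min_size      = 2                          # two web servers to start
      max_size      = 2

      vpc_id        = module.vpc.vpc_id          # land the fleet in our VPC...
      subnet_ids    = module.vpc.public_subnets  # ...on its public subnets
    }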

In the code block above we simply call out the source of our webserver_cluster module, which resides in GitHub, specify a name for our cluster, the AMI and instance size to use, and a key name should we need to connect to an instance, then set the minimum and maximum number of servers to deploy, along with the VPC and subnets to place them in (referenced from our VPC build out).

In this case we are going to deploy two web servers to the public subnets we built in our VPC.

Deploying the Fleet

After updating our main.tf file with the code segment above, let's now initialize and test the deployment of our web servers. Since we are adding a new module, we must rerun our terraform init command to load it. We can then execute a terraform plan for validation and finally terraform apply to deploy our fleet of web servers to the public subnets of our VPC residing in AWS us-west-2.
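Something like the following, run from the folder that holds main.tf (a sketch of the commands, not the actual output):

    terraform init     # re-run to download the newly referenced webserver_cluster module
    terraform plan     # validate what will be added
    terraform apply    # deploy the fleet to us-west-2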

[Image: webserver_cluster_module.png]

Validate the plan and deploy using terraform plan and terraform apply.

[Image: terraform_plan_webservers.png]
[Image: terraform_apply.png]
[Image: terraform_plan2.png]
[Image: terraform_apply2.png]

Accessing the Fleet

So our deployment is complete, but how can we access it? When building infrastructure, Terraform stores hundreds of attribute values for all of our resources. We are often only interested in a few of these, like the DNS name of our load balancer so we can access the website. Outputs are used to tell Terraform which pieces of data are important enough to show back to the user.

Outputs are stored as variables and it is considered best practice to organize them in a separate file within our repository. We will create a new file called outputs.tf in the same directory as our main.tf file and specify the key pieces of information about our fleet, including the DNS name of the load balancer, private subnets, public subnets, NAT IPs, etc.
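A sketch of what that outputs.tf can contain is shown below. The elb_dns_name output is the one referenced later in this post; the expressions assume the VPC module's standard outputs plus a webserver_cluster module output named elb_dns_name, so adjust the names to match your modules:

    # outputs.tf - the values we want Terraform to report back after an apply
    output "elb_dns_name" {
      description = "Public DNS name of the web server load balancer"
      value       = module.webserver_cluster.elb_dns_name
    }

    output "public_subnets" {
      description = "Public subnet IDs created by the VPC module"
      value       = module.vpc.public_subnets
    }

    output "private_subnets" {
      description = "Private subnet IDs created by the VPC module"
      value       = module.vpc.private_subnets
    }

    output "nat_public_ips" {
      description = "Elastic IPs assigned to the NAT gateways"
      value       = module.vpc.nat_public_ips
    }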

After creating and saving the outputs.tf file, we can issue a terraform refresh against our deployed environment to refresh its state and see the outputs.  We could have also issued a terraform output to see these values, and they will be displayed the next time terraform apply is executed.
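For example:

    terraform refresh                # reconcile state with what is deployed and print the outputs
    terraform output                 # list every output defined in outputs.tf
    terraform output elb_dns_name    # show just the load balancer's DNS name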

[Image: outputs.png]

Browsing to the value contained in our elb_dns_name output, we see our website.  Success.

[Image: web_output.png]

Scaling the Fleet

So now that our fleet is deployed, let's scale it. This is a very simple operation requiring just a small adjustment to the min and max size settings within the webserver_cluster module. We will adjust two lines in main.tf and rerun our plan and apply.

    ...
    min_size = 8
    max_size = 10
    ...

[Image: scalethefleet.png]

Voilà. Our web server fleet has now been scaled up with an in-place update and no service disruption. This showcases the power of infrastructure as code and AWS Auto Scaling groups.

[Image: awsscalethefleet1.png]
[Image: awsscalethefleet.png]

Scaling back our fleet, as well as cleaning up, is equally easy. Simply issue a terraform destroy to minimize AWS spend and wipe our slate clean.

Multi-Region/Multi-Environment Deployment

Now that we have an easy way to deploy and scale our fleet, the next step is to put our re-usable code to work to build out our development, staging and production environments across AWS regions.


Using Terraform to Up Your Automation Game

As a proponent of automation I am a big fan of using Terraform to deploy infrastructure across my environments.  Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently.  With an ever-growing list of supported providers it is clear that it belongs in our automation toolbox and can be leveraged across local datacenter, IaaS, PaaS and SaaS deployments.  If you come from a VMware background and work with AWS or Azure, like me, I would recommend checking out Nick Colyer's PluralSight course and Yevgeniy Brikman's "Terraform: Up & Running" book / blog posts.  Hopefully this post will entice you to learn more.

Mission

Our mission is to create and deploy a set of auto-scaling web servers, fronted by a load balancer, for our development, staging and production environments across three different AWS regions. We will utilize a modular approach to create and build our infrastructure, and reuse our code where possible to keep things simple.

Building A Virtual Private Cloud

Before deploying any instances, we need to create our landing pad which will consist of a dedicated VPC (Virtual Private Cloud) and the necessary subnets, gateways, route tables and other AWS goodies.  The diagram below depicts the development environment to be deployed in the AWS us-west-2 region, including private, public and database subnets across 3 availability zones.

[Image: awsvcpdev.png]

Terraform allows us to take a modular approach to our deployment by offering self-contained, packaged configurations called modules. Modules allow us to piece together our infrastructure and enable the use of reusable components. Terraform provides a Module Registry of community and verified modules for some of the most common infrastructure configurations. For our purposes we will leverage the VPC module for AWS.

To begin creating our development VPC, we can create a file called main.tf.  Inside this Terraform file we will add a few lines of declarative code and the AWS provider to attach to the us-west-2 region.  I am hiding my AWS credentials, but you can include them under the AWS provider.
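A minimal sketch of that starting point, assuming credentials are supplied outside the file (environment variables, a shared credentials file or an IAM role):

    # main.tf - AWS provider for the development environment
    provider "aws" {
      region = "us-west-2"   # development lands in us-west-2
      # Credentials intentionally omitted; supply them via environment
      # variables, a credentials file or an instance profile instead.
    }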

We add a few additional lines of code to our main.tf file, inserting the VPC module for AWS to define the availability zones, subnets, IP addresses and tags. You can carve out your subnets as you see fit. Less than 30 lines of code and we are now ready to initialize and deploy a working VPC for our development environment. You can see why I like Terraform.
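The module block ends up looking something like the sketch below, which uses the registry module's documented arguments with example values (swap in whatever CIDRs, availability zones and tags fit your environment):

    module "vpc" {
      source = "terraform-aws-modules/vpc/aws"

      name = "dev-vpc"
      cidr = "10.0.0.0/16"   # example address space for development

      azs              = ["us-west-2a", "us-west-2b", "us-west-2c"]
      public_subnets   = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
      private_subnets  = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
      database_subnets = ["10.0.21.0/24", "10.0.22.0/24", "10.0.23.0/24"]

      enable_nat_gateway = true   # give the private subnets outbound access

      tags = {
        Environment = "dev"
        Terraform   = "true"
      }
    }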

Deploying our VPC

To deploy our newly created VPC, we simply need to install Terraform on our computer, initialize and plan the deployment, and then apply it. The download and install of Terraform is very straightforward as it ships as a single binary. From the command line, browse to the directory that holds your main.tf file and execute the initialize, plan and apply commands.

terraform init

The first command that should be run after setting up a new Terraform configuration is terraform init. This command is used to initialize a working directory containing Terraform configuration files, and needs to be run from the same directory in which our main.tf file is located.

[Image: terraforminit.png]
terraform plan

Before deploying our development environment, Terraform provides the ability to run a check to see whether the execution plan matches our expectations for what is to be deployed.  By running the terraform plan command, no changes will be made to the resources or state of our environment.  In our case, there will be 25 additions to our development environment and these created items are detailed in the output of the terraform plan command.

[Image: terraformplan1.png]
[Image: terraformplan2.png]
terraform apply

Now that everything is initialized and the plan meets our expectations, it is time to deploy. The terraform apply command is used to apply the changes specified in the execution plan. You will be prompted to enter 'yes' in order to deploy. The summary of this command shows that all 25 additions completed. We now have a working VPC for our development environment.

[Image: terraformapply1.png]
[Image: terraformapplycomplete.png]

We can confirm that is the case by logging into AWS and viewing our VPC in the us-west-2 region.

[Image: awsvpc.png]
terraform destroy

One of my favorite uses of Terraform is to quickly turn up an infrastructure environment with only a few lines of code, and conversely tear it down when it is no longer needed. This is extremely practical when working with cloud providers to keep costs low and maintain a clean environment that is ready for future deployments. The terraform destroy command does exactly what you would expect - it destroys the managed infrastructure we have deployed. In our case we will utilize terraform destroy to tear down our development VPC. When we are ready to use it again, we simply issue a plan and apply - which is the power of Infrastructure as Code.

[Image: terraformdestroy.png]
[Image: terraformdestroycomplete.png]

Deploying our Web Servers, Load Balancers and Auto-Scaling Groups

Now that we have a place to put our web servers, it is time to create and deploy them.  We will complete this process in the next post.


Managing Servers in a Serverless World

Serverless computing is the new craze, but let's agree that the naming is confusing and, I would argue, misleading. Serverless computing is the idea that you can build, deploy and run applications and/or services without the need to think about servers. But not thinking about servers and being "serverless" are two very different things. Applications and services need to run and execute somewhere, whether in your datacenter, a co-location facility, the public cloud or your basement. Servers haven't disappeared and are not going away anytime soon.

[Image: serverlesstechnicalaccuracy.png]

I am sure most of us can agree that while servers are not going away, managing them is not a key business differentiator, and therefore the "serverless" concept certainly resonates despite being inaccurate. Customers expect server management to be simple and to just work, without spending exorbitant amounts of time dealing with the minutiae involved. Wouldn't it be nice to simply hit an "update" button that remediates vulnerabilities at the firmware level for things like Spectre and Meltdown across your entire server farm? Recognizing this, server manufacturers are working to continuously improve their lifecycle management functions for a market that wants to think less about provisioning, maintaining, securing and operating servers.

As one of the largest global server manufacturers, Dell/EMC appears to recognize this growing mindset and is aiming to simplify, automate and unify their server life cycle management functions.  A core component of their approach is the recently announced OpenManage Enterprise product.

OpenManage Enterprise is a systems management console currently in Tech Release, which serves as the evolution of the OpenManage Essentials product that most Dell server customers and administrators are familiar with.  OpenManage Enterprise adds some really nice capabilities that make it easier to deploy and immediately use.  Some of these capabilities that I am most excited about are:

  • It deploys entirely as a virtual appliance which includes everything you need to get up and running.  No more separate servers and databases to manage.  
  • The UI is web-based and therefore consumable across platforms (including a mobile app) as it has been built with HTML5. The performance improvements are immediately recognizable. Yeah!!! - no more Java plugins.
  • You can manage up to 5,500 devices (soon to be 8,000) within a single interface, and separate out management through the use of role-based and device-level access.
  • It integrates directly with iDRAC for remote management and configuration, as well as SupportAssist for proactive and automated support.
  • The enhanced discovery feature allows you to quickly sweep the datacenter to pull devices into management, as well as determine the warranty levels on those devices. This includes servers, storage, networking and third-party devices.
  • A deliberate focus on automation for both Dell and third-party servers/devices using a RESTful API and integration with Redfish.
  • Free software download

Real World Examples

Impressed by the list of features and capabilities that OpenManage Enterprise is offering, I downloaded the free software and took it for a spin.  The speed of deployment and access was as advertised....extremely easy and fast.  The discovery recognized servers, hyper-converged appliances, and storage devices.  It is apparent that while a core focus of OpenManage Enterprise is to reduce the headache of server management, it was designed to work across multiple systems.  This makes sense as we see servers now operating in different capacities - storage controllers, hyper-converged platforms, stand-alone and third party offerings.

[Image: DeviceCategories.PNG]

One of my new favorite views is the ability to see warranty information for all devices under management.  This information is literally 1-click from the home screen and shows all warranty information including: status, expiration, ship date, order number, and service level.  Extremely useful.

[Image: WarrantyDell.png]

For more granular detail you can utilize device categories, custom groupings or the 'search everything' field to find a system and drill in on specifics. Of course hardware, firmware, OS and system health information is available, but so is compliance detail against an established baseline, so you can easily identify and remediate the systems on which firmware updates need to be performed. Remote console access to the server with iDRAC and Virtual Console is conveniently located on the device home screen.

[Image: ServerOpenManage.png]

There Is an App for That

Yes, OpenManage Enterprise also has a mobile app, which is free to download. While mobile access might not be for everyone, I find the ability to reboot a server or look at diagnostics without having to connect my laptop and log into the VPN to be very convenient. Just be sure to lock your phone, because you do have a significant amount of power available to you through the app.

[Image: openmanagemobile.jpg]

I would have to say I am very impressed by OpenManage Enterprise. Not only is the list of features and capabilities impressive, but they are practical and easy to use. For system administrators who live in a "serverless" world and are asked to think and focus less on server management, OpenManage Enterprise certainly delivers.