Managing Servers in a Serverless World

Serverless computing is the new craze, but let's agree that the naming is confusing and, I would argue, misleading.  Serverless computing is the idea that you can build, deploy, and run applications and/or services without the need to think about servers.  But not thinking about servers and being "serverless" are two very different things.  Applications and services need to run and execute somewhere, whether in your datacenter, a co-location facility, the public cloud, or your basement.  Servers haven't disappeared and are not going to anytime soon.


I am sure most of us can agree that while servers are not going away, managing them is not a key business differentiator, and therefore the "serverless" concept certainly resonates despite being inaccurate. Customers expect server management to be simple and to just work without spending exorbitant amounts of time dealing with the minutiae involved.  Wouldn't it be nice to simply hit an "update" button that remediates vulnerabilities at the firmware level for things like Spectre and Meltdown across your entire server farm?  Recognizing this, server manufacturers are working to continuously improve their lifecycle management functions for a market that wants to think less about provisioning, maintaining, securing and operating servers.

As one of the largest global server manufacturers, Dell EMC appears to recognize this growing mindset and is aiming to simplify, automate and unify their server lifecycle management functions.  A core component of their approach is the recently announced OpenManage Enterprise product.

OpenManage Enterprise is a systems management console currently in Tech Release, which serves as the evolution of the OpenManage Essentials product that most Dell server customers and administrators are familiar with.  OpenManage Enterprise adds some really nice capabilities that make it easier to deploy and start using immediately.  The capabilities I am most excited about are:

  • It deploys entirely as a virtual appliance that includes everything you need to get up and running.  No more separate servers and databases to manage.  
  • The UI is web-based and therefore consumable across platforms (including a mobile app), as it has been built with HTML5.  The performance improvements are immediately noticeable.  Yeah!!! - no more Java plugins.
  • You can manage up to 5,500 devices (soon to be 8,000) within a single interface, and separate out management through the use of role-based and device-level access.  
  • It integrates directly with iDRAC for remote management and configuration, as well as SupportAssist for proactive and automated support.
  • The enhanced discovery feature allows you to quickly sweep the datacenter to pull devices into management, as well as determine the warranty levels on those devices.  This includes servers, storage, networking and third-party devices.
  • A deliberate focus on automation for both Dell and third-party servers/devices using a RESTful API and integration with Redfish (see the example after this list).
  • Free software download
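
To give a flavor of what that REST/Redfish automation looks like, here is a minimal sketch that queries the standard Redfish systems collection on a managed server's management controller (such as iDRAC); the IP address and credentials are placeholders, and the exact resources exposed will depend on your hardware and firmware.

# Minimal sketch (placeholder IP and credentials): list the systems exposed
# by a Redfish-capable management controller such as iDRAC.
curl -k -u admin:password https://192.0.2.10/redfish/v1/Systems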

Real World Examples

Impressed by the list of features and capabilities that OpenManage Enterprise is offering, I downloaded the free software and took it for a spin.  The speed of deployment and access was as advertised - extremely easy and fast.  The discovery recognized servers, hyper-converged appliances, and storage devices.  It is apparent that while a core focus of OpenManage Enterprise is to reduce the headache of server management, it was designed to work across multiple systems.  This makes sense as we see servers now operating in different capacities - storage controllers, hyper-converged platforms, stand-alone systems and third-party offerings.


One of my new favorite views is the ability to see warranty information for all devices under management.  This information is literally one click from the home screen and shows all warranty information, including status, expiration, ship date, order number, and service level.  Extremely useful.


For more granular detail you can utilize device categories, custom groupings or the 'search everything' field to find a system and drill in on specifics.  Of course hardware, firmware, O/S and system health information is available, but so is compliance detail against an established baseline, so that you can easily identify and remediate the systems on which firmware updates need to be performed.  Remote console access to the server with iDRAC and Virtual Console is conveniently located on the device home screen.


There Is an App for That

Yes, OpenManage Enterprise also has a mobile app, which is also free to download.  While mobile access might not be for everyone, I find the ability to reboot a server or look at diagnostics without having to connect my laptop and log in to the VPN to be very convenient.  Just be sure to lock your phone, because you do have a significant amount of power available to you through the app.


I would have to say I am very impressed by OpenManage Enterprise.  Not only is the list of features and capabilities impressive, but they are practical and easy to use.  For system administrators who live in a "serverless" world and are asked to think and focus less on server management, OpenManage Enterprise certainly delivers.

Security has Failed, Analytics to the Rescue

Security has Failed.  A refreshing, and I believe honest, statement presented by Dr. Richard Ford, Chief Scientist of Forcepoint, when talking about the current state of traditional computer security.  Computers are complicated and by their very nature are a difficult landscape in which to separate the good from the bad - the core function of security.  Traditional computer security tools (anti-virus, firewalls, secure web gateways) are no longer an adequate way to draw these lines.  In the words of Dr. Ford, when it comes to the computer security playing field, "it is much easier to play offense than defense."


Can Analytics Help?

Realizing that traditional means are not adequate, Forcepoint is taking what they call a "human-centric approach" to security.  This approach seeks to understand normal human behavior as it relates to the flow of data in and out of an organization.  The goal is to become better at drawing the lines between the good and the bad, allowing their customers to identify and respond to risks in real time.  Rather than static definitions (firewall rules allowing system A and system B to communicate on a specified port), it is far more valuable to provide dynamic intelligence that incorporates both system context and user behavior into security decision making.  Forcepoint is working to provide this value through User and Entity Behavior Analytics (UEBA).

UEBA is what is referred to as the "Brains" of the Forcepoint suite of products.  UEBA allows a dynamic risk score to be calculated and assigned to users and computers through the use of data modeling.  Much like data modeling helps financial institutions determine if an applicant is at risk of default before approving or denying a loan, UEBA utilizes data modeling to determine the security risk of a given person and/or system.  The risk score calculated through these models is then utilized by the Forcepoint security products to make a more informed decision.

Of course, no two customer environments and policies are identical, so identifying system context and user behavior goes through a learning and training process.  Forcepoint states that the training of their data models to detect what is normal in a customer environment can be accomplished in days.  The UEBA models are purposely generic at the start and updated over time.  This flexibility allows the models to be refined as new threats present themselves within an environment.  Once in place, the models assist in distinguishing anomalies from normal activity and alerting on them.


Having worked a number of years now in the data analytics space helping customers improve the signal-to-noise ratio within their data environments, it seems obvious to me that analytics can provide immediate value to a 'failed' traditional computer security industry.

At What Expense

So if behavior-based analytics seems intriguing and scary in the same breath, you are not alone. Forcepoint is in the business of intersecting people and data, so they are very conscientious about designing and creating solutions in which privacy and personal protection are a core focus.  Anytime you record, model, analyze and act on human behaviors, the topic of privacy must be understood.  The tradeoff between minimizing insider threats and protecting personal information is non-trivial.  While time did not allow us to dive into how privacy is implemented within the UEBA product, perhaps we can learn more in a future session.

Learn More

If you are interested in learning more about Forcepoint's computer security offerings or wish to view the entire UEBA Tech Field Day presentation, I have embedded the recording below.  This and other presentations can be found on the Tech Field Day website.

Disclaimer:  I was personally invited to attend Tech Field Day 16, with the event team covering my travel and accommodation costs.  However, I was not compensated for my time.  I am not required to blog on any content; blog posts are not edited or reviewed by the presenters or Tech Field Day team before publication.

From Textile to Tech

Living in Pittsburgh, Pennsylvania, USA, I have caught a glimpse of the city's transformation from its industrial roots of steel and coal to an economy with a finer focus on medicine, university research, and computer technology.  It is now a city that is home to driverless cars, biotech firms and tech startups incubated out of Carnegie Mellon.  The city has revitalized itself in a way many other depressed Rust Belt cities have not.

Last week's travel brought me to another part of the country where I caught a glimpse of a similar revitalization - Manchester, New Hampshire and Fall River, Massachusetts.  Both are traditional New England river towns that share a common history: their economic base was historically built on textile manufacturing.  The standing textile mills of these towns are quite impressive, but you won't find many actively manufacturing textiles.  What has replaced them?  Many things I suppose, but I was privileged during my trip to participate in two events which certainly showcase the move of these mills from textile to tech.

150 Dow St. is the home of Dyn - the DNS guys - who got their start back in 2001 in one of these old Manchester textile mills on the river and were later acquired by Oracle in 2016.  Dyn was this year's host of the New Hampshire TechOut competition, which is open to New Hampshire startups who, if selected, are afforded the opportunity to pitch before a live audience and share their product and vision.  Six finalists were given the opportunity to present and two were awarded a total of $300,000.  As Greg McHale, the founder and CTO of Datanomix, states, "TechOUT is a phenomenal event that increases awareness of all the great things happening in New Hampshire's startup ecosystem."  I would have to agree - as I sat in an old mill, the home of a successful startup, watching six others lay down their roots.


The next day took me to Fall River, Massachusetts.  Situated about 50 miles south of Boston, Fall River is a town I remember from my childhood, when I boarded the USS Massachusetts as a Cub Scout.  Fall River provided a similar landscape to the night before: an old textile manufacturing town now meets tech.  Fall River is now the home of New England's largest data center, which is owned and operated by Congruity360 - an IT infrastructure services company that is itself transforming into a managed service provider.  I was joined by the Gestalt IT team on a private tour of the facility, and it is incredibly impressive.  Once an old cotton mill, the 200,000 sq. ft. facility houses over 70,000 sq. ft. of datacenter space with room to grow.  The entire datacenter tour was recorded and is packed with details and historical gems of this building.

What struck me is that Congruity360 is making a solid investment in a regional datacenter and has aggressive plans to leverage this investment in support of their managed services offering.  This is a big deal for the town of Fall River and has certainly caught the attention of local officials, including Jasiel Correia, the mayor of Fall River.  Mayor Correia performed the ribbon cutting of the new facility and described the opening of this data center as "an incredible feat, and a big deal for us."  You can watch his full comments and interview with Stephen Foskett on the Gestalt IT YouTube channel:

It was a great couple of days in two cities that most probably know very little about.  Two cities with a rich history and strong roots.  Two cities that are embracing and welcoming the technology ecosystem.  Two cities I hope are able to continue to transform and adapt.


Disclaimer:  Gestalt IT covered some of my travel and accommodation costs and I was not compensated for my time.  I am not required to blog on any content; blog posts are not edited or reviewed by the respective companies prior to publication.

Nutanix Community Edition & Automation VM (NTNX-AVM) on Ravello

There is nothing that can replace a good home lab for testing and staying relevant with technology, but for me Ravello comes pretty close.  For those not familiar with Ravello, it is a "Cloud Application Hypervisor" that allows you to run multi-VM applications on top of any of its supported clouds (Oracle Public Cloud, Amazon AWS, and Google Cloud Platform).  Through the use of "blueprints" you can easily publish a lab environment to any of Ravello's supported clouds without having to run your own lab at home.  That is of major benefit to me personally because it provides me a low-cost and fast way to utilize a lab environment using the blueprints that Ravello makes available in its repository.  Two of my favorites are AutoLab and Nutanix Community Edition (CE).

There are some great resources for using Ravello, and in this post I will be focusing on the Nutanix CE blueprint along with a cool new Automation VM (NTNX-AVM) that was recently released by Thomas Findelkind.

Installing Nutanix CE on Ravello

Nutanix Community Edition is a great blueprint made available by Nutanix on Ravello for familiarizing yourself with the Nutanix software and the Prism management interface.  It is 100% software, so it is very simple to deploy by following a few simple steps, which Angelo Luciani captured in a short video.  Here are my abbreviated steps:

1. Add blueprint to my Ravello Account


2. Publish & Deploy Nutanix CE from blueprint

I like to be sure to publish with an optimization for performance, choosing a cloud location that is close.  You will notice that the CE deploys as a VM with 4 vCPU and 16GB of memory.  Public IP addresses are also assigned so that we can access the application remotely, which we will do in the next step.  Ravello also allows you to see your pricing details to run this blueprint.

3. Validate that your CE application is working appropriately.

Once the Nutanix CE application is published (which can take several minutes depending on what cloud you published to), you will notice that the VM shows in a running state.  You can connect to the Prism web interface remotely by selecting the 'External Access for' sub-interface NIC1/1, and selecting 'Open'.

This will open your web browser to port 9440 on the public address shown in the image above.  It does take a little bit of time once the CE VM is up and running for Prism to be responsive.  Stay patient - my average wait time is about 15-20 minutes, but I have had it take as long as 40 minutes. If you open the browser and see the following message, it is normal - you just need to wait for the cluster to be fully available.
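
If you would rather poll from a terminal than keep refreshing the browser, a quick check like this works (the IP address is a placeholder for the public address Ravello assigned to NIC1/1):

# prints an HTTP status code once the Prism web service starts answering on port 9440
curl -k -s -o /dev/null -w "%{http_code}\n" https://203.0.113.25:9440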

You can also SSH into the Nutanix controller VM using the public IP address tied to the NIC1/1 interface (the default password is nutanix/4u) and run a cluster status command to see the status of the cluster:

ssh nutanix@PublicIPAddress
cluster status

4. Log into Prism and explore what Nutanix can offer.  

The default username and password for Prism is admin / admin, and you will be prompted to change the password and update to the latest release if you would like. Now that we have a running Nutanix CE cluster, let's put something useful on it, like the NTNX-AVM automation VM.

Adding NTNX-AVM Automation VM to Ravello Blueprint

The Nutanix automation VM (NTNX-AVM) was recently released by Thomas Findelkind and was designed for easy deployment of automation 'recipes' within the context of a VM that can be deployed on and run against a Nutanix cluster.  Once deployed, the NTNX-AVM provides golang, git, govc, java, ncli (CE edition), vSphere CLI and some automation scripts the community has developed, all preinstalled within a VM running on a Nutanix cluster.  I think it would work great within Ravello for testing some automation scripts, so let's step through the process of adding it to our application & blueprint.

The full details, as well as the code for installing the NTNX-AVM, are available on GitHub, but here are my abbreviated steps for getting this up and running on Ravello:

1. Adding a CentOS VM to my Nutanix CE Application

The NTNX-AVM is deployed using a simple bash script which will do all the heavy lifting.  This script can really be run from anywhere that can communicate with your Nutanix cluster.  I would like to eventually build a Docker container for this part of the process, but in the meantime an out-of-band CentOS VM in Ravello will do the trick.  It just so happens Ravello has a vanilla CentOS image ready for me to add, which makes it easy.

In order to create and connect to this CentOS VM, a key pair needs to be created and assigned in your Ravello library.  This is easily done, and the key can be downloaded for future SSH connectivity.  The VM also needs to be published, since the Ravello application has been updated.  Once again, something easily done.

Assign the newly created key pair RavelloSSH to the CentOS VM

Once the key pair is assigned, the application can be updated to include the CentOS VM.

And we can connect to it by opening an SSH session to its public IP address

ssh ravello@ -i RavelloSSH.pem

2. Download and unzip the NTNX-AVM install files and scripts

One of the requirements for running the NTNX-AVM install is that it makes use of genisoimage/mkisofs, which my vanilla install doesn't have, so I need to pull that down after updating my CA certificates so that yum can reach the EPEL package repository.

sudo su
# refresh the CA certificates so yum can reach the EPEL package repository
yum --disablerepo=epel -y update ca-certificates
# genisoimage provides the mkisofs tool the deploy script relies on
yum install genisoimage
yum install git
# clone the DCI repository from Thomas' GitHub (URL elided in the original post)
git clone

You can verify that all of the files have been downloaded.
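
A quick listing of the cloned directory (assuming the repository landed in ./DCI, the path used in the steps below) is enough to confirm the recipe files are there:

# sanity check that the clone completed - this is the recipe path edited in step 3
ls ./DCI/recipes/NTNX-AVM/v1/CentOS7/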

3. Update the config for the CentOS recipe to deploy NTNX-AVM

Since we are using CentOS to deploy our NTNX-AVM, we need to modify the recipe's config file, ./DCI/recipes/NTNX-AVM/v1/CentOS7/config, to specify the parameters of our environment.  Things like the VM name, the IP for the VM, the nameserver, etc.  A quick look at the network canvas within Ravello shows us how things are connected.

In our case the Ravello application is working on the 10.1.1.x/24 network, so I will modify the configuration file accordingly.

vi ./DCI/recipes/NTNX-AVM/v1/CentOS7/config

My completed configuration file looks like this, where the new NTNX-AVM will have an IP address on that network assigned to it.


[root@CentOS63vanilla DCI]# cat recipes/NTNX-AVM/v1/CentOS7/config
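
The actual contents were captured as a screenshot, so as a stand-in here is an illustrative sketch of the kind of values I set.  The key names below are placeholders rather than the actual DCI config syntax, so defer to Thomas' writeup for the real file format.

# Illustrative only - placeholder key names and addresses on the 10.1.1.x/24 network
VM_NAME="NTNX-AVM"
VM_IP="10.1.1.21"
VM_NETMASK="255.255.255.0"
VM_GATEWAY="10.1.1.1"
VM_NAMESERVER="10.1.1.1"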


4. Deploy the NTNX-AVM

Now that the prep work is wrapped up, it is time to create a place to put our NTNX-AVM on the Nutanix CE cluster and then run the script from our CentOS VM to deploy it.  First we will create a new storage container called 'prod' within Prism, as well as configure a network it can use.

Then we will run the script.  The full syntax of the script can be found in Thomas' writeup.  The syntax and settings I used are as follows, with --host being the IP address of the Nutanix CVM and prod being the container we are saving the VM to.

./ --recipe=NTNX-AVM --rv=v1 --ros=CentOS7 --host= --username=admin --password=nutanix/4u --container=prod --vlan=VLAN0 --vm-name=NTNX-AVM

The script will do the following:

  • First it will download the CentOS cloud image, and then the deploy_cloud_vm binary.
  • It will read the recipe config file and generate a cloud seed CD/DVD image.  This means all configuration (IP, DNS, etc.) is saved into a CD/DVD image called "seed.iso".
  • DCI will upload the CentOS image and seed.iso to the AHV image service.
  • The NTNX-AVM VM will be created based on the CentOS image, and the seed.iso will be connected to the CD-ROM.  At first boot all settings will be applied.  This is called the NoCloud deployment, based on cloud-init, and it only works with cloud-init-ready images.
  • The NTNX-AVM will be powered on and all configs will be applied.
  • In the background all tools/scripts will be installed.

After the script is complete we can see that our NTNX-AVM is deployed on our Nutanix CE cluster, but it is powered off.  This is because we are working with limited memory in our Ravello environment, so the memory on the VM needs to be adjusted from 2GB down to 1GB.


Once that adjustment is made, the VM powers on nicely and completes its configuration and tools/scripts installation.  We can check the status of this final process by connecting via SSH to the NTNX-AVM IP address we assigned in the config file and watching /var/log/cloud-init-output.log, since the tool/script installation happens in the background after the first boot.
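
For example (the IP address is a placeholder for whatever you assigned to the NTNX-AVM in the config file):

ssh nutanix@NTNX-AVM-IPAddress
# follow the cloud-init log until the tool installation finishes
tail -f /var/log/cloud-init-output.log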

We know everything is complete when /var/log/cloud-init-output.log shows the "The NTNX-AVM is finally up after NNN seconds." message.

5. Using the Nutanix Automation VM: NTNX-AVM

Now that we have a working NTNX-AVM, we have access to a number of great automation tools, with more coming thanks to Thomas' automation scripts.  To be sure all is good, let's run an ncli command on the NTNX-AVM to check our cluster status.

ssh nutanix@
ncli -s -u admin
cluster status

I look forward to using this new addition to my Ravello Nutanix CE blueprint for future automation.