Let’s get one thing straight: the authors of this article are HUGE fans of the VMware Hands-On-Labs (HOL). We think it’s unbelievably cool that at a moment’s notice you can spin up the newest version of vSphere, dive head-first into the vRealize Suite, or practice the latest integrations with Palo Alto Networks and Infoblox. Having essentially instant access to cutting-edge labs is amazingly helpful.
And being such huge fans, we’ve always wondered what other amazing things we could do if we could only keep those labs open beyond their tiny 2-hour time limit… We’re part of a team of consultants at AHEAD who are constantly testing upgrade processes, breaking in new products, and developing new automation in our labs.
This is part of the effort that we have to go through to sharpen our skills, learn the documented (and undocumented) software features of the products we use, create custom integration to drive tangible business results, and prepare to help AHEAD’s clients embark on their own cloud journeys. As we think about what the VMware HOLs provide, we had to wonder:
What if our own lab could spin-up entire vSphere or vRealize Suite labs in a couple minutes, JUST LIKE THE VMWARE HANDS-ON-LABS?!
So we made it happen.
We leveraged the AHEAD Lab to build out our own version of the HOL.
First, we started by creating a nested vSphere lab that our team could request through a vRA 7 blueprint. The blueprint contains its own vCenter, nested ESXi host VMs, etc. Each time vRA spins up one of these blueprints, it tells NSX to put it on its own isolated network, behind NAT. Thanks to the NAT’ing, the VMs in each blueprint instance can all use the same overlapping IP range, but still get out to the Internet for updates and downloads without conflict.
Service catalogs, automated provisioning, and integrated security capabilities (like the micro-segmentation that NSX provides) are all part of what we call the AHEAD Cloud Delivery Framework (CDF). What differentiates our team’s (and AHEAD’s) value proposition is how we integrate technology across vendors and across segments of the datacenter (automation, orchestration, DevOps, security, physical or cloud infrastructure, etc.) to deliver better end-user results for our clients’ organizations.
The purpose of this post is to show you an example of how we “stitched together” solutions in our CDF to deliver automated services – exactly how we set up our first nested vSphere lab.
From there, you can let your imagination run wild and add in whatever additional products you like: SRM, vRealize Operations, and even nested vRA.
Note: This blog was originally published on our personal blog – and some of the background material/links will route you back to the original post.
Step 1: Setup the physical hosts and VM templates
Fortunately, we’ve already covered all of the foundation work that goes into this project. In the previous article, Building a Nested ESXi Lab For Your Company, we walked you through how to set up the physical ESXi hosts, the nested ESXi hosts, and all the tweaks to ensure everything works properly. After all, you’re going to be running VMs inside other VMs on your hosts, so stuff can get weird in a hurry. If you haven’t read through that article already, we’d suggest doing so before continuing on, as we’ll be referring back to it for the next couple of steps.
For the physical ESXi setup, pay particular attention to the section of the previous article called “Building the Physical vSphere Lab,” which goes over how to install a special “dvfilter” VIB for nesting.
The physical network configuration is detailed in the section of the previous article called “What we have here is a failure to communicate.”
However, the vRA blueprint we’ll be creating for this article will use NSX to automatically spin up an On-Demand NAT Network for each lab instance. As such, we will not need to do the physical network steps from the previous article. You may, however, wish to try them prior to following this article, so that you can test everything out and get comfortable with the nesting.
Finally, go ahead and get all of your VM templates created as described in the previous article (PSC, vCSA, and 3+ nested ESXi VMs). Remember to give them all their correct IP addresses and hostnames before converting them to templates. Our vRA blueprint will be cloning these templates as-is, so their IP addresses and names will be retained.
As I mentioned earlier, we’ll be using NAT to prevent IP address conflicts when multiple instances of the blueprint are provisioned.
Step 2: Build a basic NAT blueprint in vRA
For our initial NAT blueprint in vRA, we’ll start with the blueprint we built in a previous blog article, Rapidly Clone Whole Test/Dev/Lab Environments with vRA and NSX. Be sure to build your NAT Network Profile for the same IP subnet as the VM Templates you created in Step 1.
Go ahead and make a copy of that blueprint and call it “Nested vSphere Lab.”
It should look like this:
The only component from the previous blueprint that we aren’t going to be re-using is the “app-server-01” VM. You can go ahead and delete that component. Everything else we did to create the blueprint in the previous blog article is required, though, so make sure you’ve tested it thoroughly before you proceed with this article.
Also, when you followed the article from Step 1, Building a Nested ESXi Lab For Your Company, you should have added NFS to your AD-DNS template. Be sure to include that updated template in this blueprint.
Step 3: Add vSphere VMs to your blueprint
Open up your new “Nested vSphere Lab” blueprint.
Once open, drag and drop vSphere Machine components onto the canvas for each of the templates you created in Step 1 (vCSA, PSC and all nested ESXi servers). On the Build Information tab, do NOT enter a Customization Spec, since we want to preserve the IP addresses and names that are already in the VM templates.
- Blueprint Type: Server
- Action: Clone
- Provisioning Workflow: CloneWorkflow
- Clone From: (choose your new template for each machine)
- Customization spec: (leave blank)
All the “Build Information” properties should look like this (with the “Clone from” template changed accordingly):
On the Network tab, add the VM to the InternalLabNetwork, and specify the IP address of the VM template, like this:
vRA won’t actually be applying the IP Address to the VM clone since we didn’t provide a Customization Spec. Specifying it will, however, let vRA know that the address is in use, so it doesn’t give it out to other VMs that you might add to your blueprint later.
Now that we have all the components on the canvas, we want to control the build process and ensure they’re built and powered up in a certain order since they have dependencies on different services.
For starters, we want the ESXi hosts built after AD-DNS because when they power up we want to make sure they connect to their NFS datastores without issues. We also want to ensure that AD-DNS is up before the PSC, and that the PSC is up before vCenter.
So, let’s make that happen!
Move your mouse to the upper left-hand corner of the VM component and you’ll see a blue circle appear; click and drag it to the VM component you want to depend on.
In this example, we’re making the PSC dependent on the AD-DNS VM, so the PSC won’t be built until the AD-DNS VM is built and powered on.
Repeat this process and connect the following:
- PSC to AD-DNS
- vCSA to PSC
- Each ESXi host to AD-DNS
The end result should look like this:
Side note: you may notice that in the blueprint pictured above, there is an additional on-demand NAT network, “InternalLabvMotion,” which I used to isolate nested ESXi vMotion traffic. This was completely optional and, ultimately, I felt that it didn’t add much value to the blueprint, so I haven’t detailed its creation in this article.
Now save your blueprint, publish it, and add it to your entitlements.
Step 4: Tweak your Templates
In the next step, we’ll be configuring our On-Demand NAT Network to support nested VM guest traffic. Before we do that, let’s provision our blueprint to make sure all of its VMs can communicate with each other. We won’t be able to create any nested VMs within its nested ESXi hosts yet, but we will be able to make sure the blueprint’s vCenter can authenticate AD users, and see all of its nested ESXi hosts.
Once you provision your Nested vSphere Lab blueprint, open it up in your Items tab.
To access your lab, click on the Jumpbox, and then from the Actions menu choose “Connect using RDP.”
Side note: you might have noticed that there are a few extra VMs in the screenshot above. This screen was actually from a Nested vRA Lab blueprint. Since it was derived from our Nested vSphere Lab blueprint, it includes the same Jumpbox.
From your Jumpbox RDP connection, you should be able to try logging into vCenter with an AD account, and testing its connectivity to all the nested ESXi VMs. If everything works 100% the way you want, you can skip the rest of this section and move on to Step 5.
If anything isn’t quite right, make your fixes or changes directly in this provisioned instance. Once you’ve finished, and you have your lab exactly how you want it, we’ll shut it down and convert it into a new set of templates…
Before shutting down your ESXi VMs, be sure to log into each one’s ESXi Shell, and run the commands noted in our article from Step 1, Building a Nested ESXi Lab For Your Company, to configure the vmk0 MAC address and remove the System UUID. Then, create a folder in your physical environment, call it something super cool like “vSphere Lab”, and move your ESXi hosts, vCSA, PSC and AD-DNS VMs into this new folder.
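As a quick reference, those two host-side tweaks typically boil down to commands like these (an assumption based on common nested-ESXi practice; treat the Step 1 article as authoritative for your ESXi version):

```shell
# Run these in each nested ESXi host's ESXi Shell before shutting it down.
# These specific commands are an assumption drawn from common nested-ESXi
# guidance; the authoritative list is in the Step 1 article.

# Have vmk0 follow the hardware (vmnic0) MAC address, so cloned hosts
# don't all come up with the same stale management MAC:
esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1

# Strip the System UUID so each clone generates a unique one at boot:
sed -i 's#/system/uuid.*##' /etc/vmware/esx.conf

# Persist the configuration change immediately:
/sbin/auto-backup.sh
```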
Next, shut down all of the VMs, change the virtual network on each vNIC so it’s not using a NSX Logical Switch, and then convert the VMs to templates.
Going through this process is a little tedious, especially if you’re building out multiple labs for various uses or making changes frequently, so I wrote a script to do it for me:
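We can’t reproduce the original script here, but its gist can be sketched with VMware’s govc CLI (a hedged illustration, not the author’s code; the VM names, the “VM Network” portgroup, and the GOVC_* environment variables are all assumptions you’d adapt to your environment):

```shell
# Illustrative sketch only, NOT the original script. Assumes GOVC_URL,
# GOVC_USERNAME, and GOVC_PASSWORD are exported, and that these VM names
# and the "VM Network" portgroup are placeholders for your own.
LAB_VMS="ad-dns psc vcsa esxi-01 esxi-02 esxi-03"

for vm in $LAB_VMS; do
  # Gracefully shut down the guest OS (use "vm.power -off" to force
  # power-off if VMware Tools isn't responding in a given VM)
  govc vm.power -s "$vm"
done

# Wait for all the guests to finish powering off before continuing.

for vm in $LAB_VMS; do
  # Re-home the first vNIC off the NSX Logical Switch onto a standard portgroup
  govc vm.network.change -vm "$vm" -net "VM Network" ethernet-0

  # Convert the VM back into a template
  govc vm.markastemplate "$vm"
done
```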
Once everything is converted back to templates, provision your blueprint one more time to make sure everything works as expected.
Step 5: Add a vRO Workflow to enable nested VM network traffic
As you may recall, the previous article we referenced in Step 1, Building a Nested ESXi Lab For Your Company, included a section on networking called “What we have here is a failure to communicate!” In that section, we detailed how to update the portgroup’s settings with Promiscuous Mode and Forged Transmits set to ACCEPT. These updates were required for the nested ESXi hosts’ guest VM traffic to travel across the network properly.
Since each instance of our vRA blueprint will spin up its own NSX Logical Switch, over which the nested ESXi hosts’ guest VM traffic will need to travel, we’ll need to include these settings in the blueprint.
Unfortunately, there isn’t a way to add the settings from the Blueprint design canvas. Instead, we’ll need to import a vRO workflow to add these settings, and then tell vRA to call the workflow each time it provisions an instance of the blueprint.
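For context on what the workflow automates: on a plain standard vSwitch, the same two settings can be flipped by hand with esxcli (shown purely as an illustration, with “vSwitch0” as a placeholder; the workflow instead applies these settings to each instance’s on-demand NSX Logical Switch via the API):

```shell
# Manual equivalent on a standard vSwitch, for context only. The vRO
# workflow makes the same change on the NSX Logical Switch backing each
# blueprint instance; "vSwitch0" here is a placeholder name.
esxcli network vswitch standard policy security set \
  --vswitch-name=vSwitch0 \
  --allow-promiscuous=true \
  --allow-forged-transmits=true
```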
First, let’s import the workflow. You can download the file from here:
Unzip the package file, and then import it into vRO:
Then, click on the Configurations tab, and edit the “NSX + vRA Integration” configuration element:
Update the “switchNameFilter” attribute to match whatever you called the On-Demand NAT Network in your blueprint. If you named it exactly the same way I did, “InternalLabNetwork”, then you’re good to go.
When the workflow runs for a given blueprint instance, it will update any of its NSX Logical Switches whose name contains this string (it’s case-sensitive). You can ignore the “nsxConnection” and “vcacCafeHost” attributes – these were used by some other workflows in my lab, which I didn’t include in the workflow package for this article.
Now that our vRO workflow is ready to go, we’re ready to configure vRA to call it as part of our blueprint. We’ll use vRA’s Event Subscription feature to call it whenever vRA provisions a blueprint component called “NSX Edge.”
In vRA, browse to the Administration tab -> Events -> Subscriptions. Create a new subscription with the following conditions:
- “Any of the following”
- Data > Component id = NSX Edge
- Data > Component type id = Infrastructure.Network.Gateway.NSX.Edge
- “All of the following”
- Data > Lifecycle state > Lifecycle state name = VMPSMasterWorkflow32.WaitingToBuild
- Data > Lifecycle state > State phase = POST
Keep in mind, these values are case sensitive (and don’t forget the space in NSX Edge):
Since my vRA environment was dedicated to provisioning lab blueprints with On-Demand NAT networks, I didn’t bother adding a condition to specify my blueprint name (I wanted it to run for all my blueprints).
If you like, you can add a condition to apply this subscription to only your “Nested vSphere Lab” blueprint:
- Data > Blueprint Name
On the Workflow tab of the subscription, choose the following workflow, which we just imported:
“vRA Provisioning – Prepare NSX On-Demand NAT Networks for nested ESXi traffic.”
Finally, on the Details tab, be sure to check the box for Blocking.
After you click Finish, don’t forget to Publish the subscription.
Now go provision your new Nested vSphere Lab! WOOHOO!!
Hopefully at this point, you’ve made it all the way through the steps in this article, and you’re now basking in the GLORY of your very own VMware HOL (or at least a sweet approximation)!
We built ours toward the end of last summer, and we’re discovering new uses for it all the time. For example, our vSphere engineers can spin-up a lab to practice a complex migration before they do it for a customer. If their practice migration blows up, they just throw it away and try again until they get it perfect.
Our vRA engineers can develop all their vRO workflows and 3rd party integrations in isolation, without the chaos of breaking each other’s components. And our Technical Architects can do customer PoC’s and training events where each participant gets their own instance of an amazing lab, like the vRA and BCDR (SRM and Zerto) labs we recently built.
So what labs are you going to build!? The sky’s the limit!
If you’d like to book time with our experts in the AHEAD Lab, or want to preview your own hands-on environments and capabilities and see how they play a much larger role in a well-stitched Cloud Delivery Framework, let us know; we’re happy to host your organization at our labs in Chicago!