
Most of my DSC blog posts target on-premises or remotely accessed VMs, which most of the time live in Azure. Everything is fine and dandy when you’re running PowerShell / PowerShell DSC on your local infrastructure, but when it comes to Azure, you might need to rethink your strategy a bit.

Here’s the fun part. In Azure we have two options for using PowerShell scripts or PowerShell DSC. The first, obvious option is the Azure Automation service, a highly scalable, highly available service that offers a subscription-wide PowerShell engine and DSC Pull Server. The second option, a more targeted approach, is using custom VM Extensions.

Now there are different scenarios where these options can be used, either separately or together. For example, I’m currently working on a project that combines both of them into a useful CI/CD solution that’s centrally managed from Visual Studio Team Services.

Think of a scenario where you want some “deployment jobs” to run every time somebody commits code, and that code has to flow through Dev -> QA -> Production. In this type of scenario, you would use VM Extensions for running scripts / DSC on targeted Dev/QA VMs, and you would use the Azure Automation Pull Server for managing the production servers. The fun part is that you don’t even need to touch those machines.

In this blog post I’m going to focus on PowerShell DSC and how to use the configuration documents you write, either in an Azure Automation Pull Server or via the VM DSC Extension.

The main difference is that in Azure you treat your resources as cattle (as Jeffrey Snover would put it) and you don’t care about the ComputerName inside the guest OS. If you wrote your configurations in a modular and dynamic way, this difference doesn’t affect you in any way; if you didn’t, then yes, you may have to do some slight refactoring.

Let’s take a look at a DSC Configuration.
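As a rough sketch, a dynamic configuration of this kind might look like the following (ConfigureMachine and the SitePath property are placeholder names I picked for illustration, not part of any specific project):

```powershell
Configuration ConfigureMachine
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # No hard-coded node names: the node list comes from configuration data
    Node $AllNodes.NodeName
    {
        WindowsFeature WebServer
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        File WebContent
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = $Node.SitePath   # supplied per node via configuration data
            DependsOn       = '[WindowsFeature]WebServer'
        }
    }
}
```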

As you can see in the code outlined above, it doesn’t do anything on its own because everything is dynamic and waits for input. That input needs to come from somewhere, and in our case we pass it in via a configuration data document.

This simple hashtable of arrays can be set as a variable in PowerShell or saved in a .psd1 file and passed as a parameter when you send the configuration.
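A minimal configuration data document matching the sketch above could look like this (the node name and SitePath value are placeholders):

```powershell
# Can live in a .psd1 file, or be assigned to a variable and passed
# via the -ConfigurationData parameter when compiling the configuration.
@{
    AllNodes = @(
        @{
            NodeName = 'localhost'
            SitePath = 'C:\inetpub\wwwroot\MyApp'
        }
    )
}
```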

This code can be used either in Azure Automation or pushed from your local workstation or VSTS. In Azure Automation, you have the DSC Pull Server where your configuration sits, and you compile the .mof file every time you make a change. With the VM DSC Extension approach, you have to publish the code to a Storage Blob and pass a configuration .psd1 document, which then gets injected into the targeted VM and run.

So how do we make this happen?

First of all, we need the configuration document and the configuration data file; then we need an Azure Automation account and a Storage Account to cover both scenarios, and of course a new VM so we can fix it till it breaks.

Let’s start with VM DSC Extensions.

My configuration document is a .ps1 file that can sit anywhere on your local machine or in a VSTS drop. In order to use the VM DSC Extension, I need to “publish” that configuration document so that it’s available to the Azure agent that will call it. For that I will use Publish-AzureRmVMDscConfiguration, and here’s an example:
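A sketch of what that call can look like, using splatting (the resource group, storage account name and path are placeholder values):

```powershell
# Placeholder values: replace with your own resource group, storage account and path
$publishParams = @{
    ConfigurationPath  = 'C:\DSC\ConfigureMachine.ps1'
    ResourceGroupName  = 'RG-DSC-Demo'
    StorageAccountName = 'dscdemostorage'
    Force              = $true
    Verbose            = $true
}
Publish-AzureRmVMDscConfiguration @publishParams
```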

Notice that I used splatting so that it’s easier to read. The process of uploading the configuration file to a storage account is pretty straightforward. The cmdlet reads the contents of the .ps1 file, checks the dependencies, copies everything into a temporary folder, creates a metadata file that points to the referenced DSC resources, archives the folder as a .ps1.zip and uploads it to the Azure Storage Account you referenced.

For the next step, you need to PUSH the DSC configuration to the VM, and we can do that using the following code:
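A hedged example of that push, again with splatting; all names here are placeholders of mine, and the Version value is the one discussed below:

```powershell
# Placeholder values for illustration
$resourceGroup = 'RG-DSC-Demo'
$vmName        = 'DSC-VM01'
$storageName   = 'dscdemostorage'

$extensionParams = @{
    ResourceGroupName         = $resourceGroup
    VMName                    = $vmName
    ArchiveStorageAccountName = $storageName
    ArchiveBlobName           = 'ConfigureMachine.ps1.zip'
    ConfigurationName         = 'ConfigureMachine'
    ConfigurationData         = 'C:\DSC\ConfigData.psd1'
    Version                   = '2.15'   # check for the latest DSC Extension version
    AutoUpdate                = $true
    Verbose                   = $true
}
Set-AzureRmVMDscExtension @extensionParams
```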

In the example above I set a few variables and then referenced them in the hashtable that gets splatted to Set-AzureRmVMDscExtension. The hashtable is pretty self-explanatory; the only place where you need to keep your eye out is the Version = ‘2.15’ parameter, and I referenced an MSDN blog post where you can find the latest version to use. Basically, the cmdlet calls the Azure REST API and configures the DSC Extension on the targeted VM, which then starts the process of installing WMF 5, restarts the machine and applies the configuration.

As you can see, this is a lot of manual work for configuring a VM, but if you’re doing this task from VSTS, for example, then your only job is to configure everything once, fill in the blanks, and the rest is done automagically.

Now that we covered VM Extensions let’s see how we do this using the Azure Automation Pull Server.

For this part we only need an Azure Automation account and a VM. This time we’re going to modify the configuration file so that the DSC Pull Server in Azure Automation can generate .mof files based on what servers we want to configure.

So let’s modify the configuration document and make it role based.
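A sketch of the role-based version, reusing the placeholder names from earlier (the ‘Web’ role value is mine, for illustration):

```powershell
Configuration ConfigureMachine
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # Filter nodes on a 'Role' key instead of relying on computer names
    Node $AllNodes.Where{ $_.Role -eq 'Web' }.NodeName
    {
        WindowsFeature WebServer
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}
```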

OK, it’s been modified; now let’s publish it to the Azure Automation Pull Server.
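Publishing can be done with Import-AzureRmAutomationDscConfiguration; a sketch with placeholder account and path values:

```powershell
# Placeholder resource group, Automation account and path
$importParams = @{
    SourcePath            = 'C:\DSC\ConfigureMachine.ps1'
    ResourceGroupName     = 'RG-DSC-Demo'
    AutomationAccountName = 'DSC-Automation'
    Published             = $true
    Force                 = $true
}
Import-AzureRmAutomationDscConfiguration @importParams
```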

As you can see, the main difference between the first configuration document and this one is that I added a filter that looks for a ‘Role’ key. Using this method I can register nodes and assign them a specific configuration based on what they should be. Modifying the configuration data is simple: we basically add a Role key-value pair and write what we want in the NodeName part. This time we don’t need a .psd1 file because we’re not injecting anything into a specific VM; we’re telling the Automation service what to do with it.

So here’s how we tell Azure Automation to compile the .mof files.
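A sketch of the compilation job, passing the configuration data inline (all names remain placeholders):

```powershell
# The configuration data is passed inline; no .psd1 file needed
$configData = @{
    AllNodes = @(
        @{
            NodeName = 'WebXX'   # a logical name, not an actual VM name
            Role     = 'Web'
        }
    )
}

$compileParams = @{
    ConfigurationName     = 'ConfigureMachine'
    ConfigurationData     = $configData
    ResourceGroupName     = 'RG-DSC-Demo'
    AutomationAccountName = 'DSC-Automation'
}
Start-AzureRmAutomationDscCompilationJob @compileParams
# The resulting node configuration is named <ConfigurationName>.<NodeName>,
# e.g. ConfigureMachine.WebXX
```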

(Screenshot: the compiled node configurations in the Azure Automation portal.)

This is the part that might get confusing. I made my configuration look for a Role key in order to establish what should be configured on a per-role basis, and then I set a NodeName that’s not even remotely close to how I would name the VMs. So what’s the deal? Basically, all I did was set up the Pull Server to serve configurations; all that remains is to configure the VMs to register with the Pull Server and get the desired configuration.

How do we do this? Simple. We push an LCM configuration file via the DSC Extension and tell the LCM to register with the Azure Automation Pull Server. Microsoft even provides a DSC configuration document to do that, and you can get it from the official GitHub repository: LCM Configuration File.

The LCM configuration document requires some specific parameters to be filled in so that it can register with the Azure Automation Pull Server.

The parameters we mostly care about are $RegistrationUrl, $RegistrationKey, $ConfigurationMode, $NodeConfigurationName and $RebootNodeIfNeeded. We can get the URL and registration key using the Get-AzureRmAutomationRegistrationInfo cmdlet, and the rest is simple. We want the node to reboot when needed, so we set $RebootNodeIfNeeded to $true, and $ConfigurationMode should be set to ApplyAndAutoCorrect. $NodeConfigurationName should be set to the compiled node configuration we want applied, which in our case is ConfigureMachine.WebXX (see where the NodeName part comes in?).
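Gathering those values might look like this sketch (resource group and account names are placeholders; the hashtable simply collects the arguments we would feed into the LCM configuration document’s parameters):

```powershell
# Pull the endpoint URL and key from the Automation account
$regInfo = Get-AzureRmAutomationRegistrationInfo `
    -ResourceGroupName 'RG-DSC-Demo' `
    -AutomationAccountName 'DSC-Automation'

# Arguments for the LCM configuration document's parameters
$lcmArguments = @{
    RegistrationUrl       = $regInfo.Endpoint
    RegistrationKey       = $regInfo.PrimaryKey
    ConfigurationMode     = 'ApplyAndAutoCorrect'
    NodeConfigurationName = 'ConfigureMachine.WebXX'
    RebootNodeIfNeeded    = $true
}
```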

You can deploy the document manually using the method I described in the first part of this blog post, or you can go one step further and use an ARM template where you declare a virtualMachines/extensions block and lay out the values for all the parameters.

Here’s a snippet from one of the ARM templates I use to generate a one-click environment based on my needs:
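A simplified sketch of what such an extension block can look like; template parameter names like dscArchiveUrl and registrationUrl are placeholders of mine, and the exact settings schema depends on the DSC Extension version you target, so verify it against the extension documentation:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/Microsoft.Powershell.DSC')]",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.15",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "configuration": {
        "url": "[parameters('dscArchiveUrl')]",
        "script": "ConfigureLCM.ps1",
        "function": "ConfigureLCMforAAPull"
      },
      "configurationArguments": {
        "RegistrationUrl": "[parameters('registrationUrl')]",
        "NodeConfigurationName": "ConfigureMachine.WebXX",
        "ConfigurationMode": "ApplyAndAutoCorrect",
        "RebootNodeIfNeeded": true
      }
    },
    "protectedSettings": {
      "configurationArguments": {
        "RegistrationKey": "[parameters('registrationKey')]"
      }
    }
  }
}
```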

As you can see, this is not rocket science. Well, maybe the ARM templates are a bit hard to swallow, but you get the main idea of how we can use DSC to maintain the desired state of our cloud environment 🙂

That being said, have a good one!
