Introduction
I never had the time to play with Bicep and was stuck with ARM templates. Following a blog post, I decided to document what I was doing with DSC and try Bicep at the same time: deploy a VM with multiple disks, attach them, and format them. Of course, DSC can be used for more powerful stuff, from installing Chocolatey packages to installing and managing SQL Server.
Using DSC vs. plain PowerShell is an endless debate. Each approach has its own pros and cons. Personally, I was won over by:
- The number of existing modules
- The idempotency of DSC

Other arguments are:

- Reporting capacity/auditing with DSC
- Integration with Azure Automation
Prerequisites
Bicep CLI
The Bicep CLI exists either as an extension of the Azure CLI or as a standalone CLI. I chose the Azure CLI extension. Installing it is rather simple:
az bicep install
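You can then check which version of Bicep was installed:
az bicep version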
Azure Resource Groups
A resource group is required to create the VM, for instance. Usually, I would have a "common" resource group for shared resources such as the storage account or a VNET, and a separate resource group for my VM, since my storage account is shared across my resources. For this example, I will use only one resource group. To create it:
az group create --name my-rg --location francecentral
Azure Storage Account
The DSC scripts must be available to the VM. I create a Storage Account to store the scripts but keep them private for security reasons; thus, the scripts are not accessible without authentication.
To create the storage account, I used a Bicep file:
// cf. https://github.com/Azure/bicep/blob/main/docs/examples/101/storage-blob-container/main.bicep
param storageAccountName string
param containerName string = 'dsc'
param location string = resourceGroup().location

resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    accessTier: 'Hot'
  }
}

resource container 'Microsoft.Storage/storageAccounts/blobServices/containers@2019-06-01' = {
  name: '${sa.name}/default/${containerName}'
}
Then, I deploy the template:
az deployment group create --resource-group my-rg --template-file prerequisites/main.bicep --parameters storageAccountName=mystorageaccount
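To double-check that the container exists, you can query it with the CLI (assuming your identity has data-plane access to the storage account):
az storage container exists --account-name mystorageaccount --name dsc --auth-mode login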
DSC
DSC allows you to describe a configuration to be applied. Here is an example of a DSC configuration that will:

- Enable Remote Desktop
- Mount the data disk
Configuration Common
{
    param
    (
        [Parameter()]
        [System.String[]]
        $NodeName = 'localhost'
    )

    Import-DscResource -ModuleName xRemoteDesktopAdmin
    Import-DscResource -ModuleName StorageDsc
    Import-DscResource -ModuleName xPendingReboot

    Node $NodeName
    {
        # Let the LCM reboot the node whenever a resource requires it
        LocalConfigurationManager {
            RebootNodeIfNeeded = $true
        }

        #region RDP
        xRemoteDesktopAdmin RemoteDesktopSettings {
            Ensure = 'Present'
        }
        #endregion

        xPendingReboot Reboot1 {
            Name = 'BeforeManageDisk'
        }

        #region Disk
        # Move the optical drive out of the way so the letter 'E' stays free for the data disk
        if (Get-CimInstance -ClassName Win32_CDROMDrive) {
            OpticalDiskDriveLetter SetFirstOpticalDiskDriveLetterToZ {
                DiskId      = 1
                DriveLetter = 'Z'
            }
        }

        WaitForDisk Disk2 {
            DiskId           = "2"
            RetryIntervalSec = 10
            RetryCount       = 3
        }

        # Initialize, format and mount the data disk as E:
        Disk EVolume {
            DiskId      = "2"
            DriveLetter = 'E'
            FSLabel     = 'Data'
            DependsOn   = '[WaitForDisk]Disk2'
        }
        #endregion
    }
}
Once the configuration is written, you must:

- Prepare a package
- Publish the package
Preparation of the package
You must install all dependencies/modules on your computer. In my case:
Install-Module xRemoteDesktopAdmin
Install-Module StorageDsc
Install-Module xPendingReboot
Installing the modules will also enable completion in VSCode.
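Before packaging, you can also compile the configuration locally to catch errors early. Here is a minimal sketch, assuming the modules above are installed: dot-source the script, then invoke the configuration to generate the MOF files:
# Load the configuration into the session, then compile it to MOF files in .\out
. .\common.ps1
Common -NodeName 'localhost' -OutputPath .\out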
You can create a package to put somewhere (storage account, GitHub, etc.) with the following command, or skip this and go directly to the next step:
Publish-AzVMDscConfiguration .\common.ps1 -OutputArchivePath .\common.ps1.zip
Publication of the package
The Az module has a specific cmdlet that can create the package and push it to a storage account in one step:
Publish-AzVMDscConfiguration .\common.ps1 -ResourceGroupName "my-rg" -StorageAccountName "mystorageaccount" -ContainerName "dsc" -Verbose -Force
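You can confirm that the archive landed in the container (again, assuming data-plane access):
az storage blob list --account-name mystorageaccount --container-name dsc --auth-mode login --output table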
VM and DSC extension
Once the DSC package is ready, we can deploy a VM. In my example, I reuse an existing VNET. For convenience in my testing, I will also add a public IP, though that is of course not a good practice.
I will focus on the DSC extension at the end of the Bicep file. All available parameters for the DSC extension are described in the Microsoft documentation.
resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
  name: storageAccountName
}

var _artifactsLocationSasToken = stg.listServiceSAS('2021-04-01', {
  canonicalizedResource: '/blob/${stg.name}/${containerName}'
  signedResource: 'c'
  signedProtocol: 'https'
  signedPermission: 'r'
  signedServices: 'b'
  signedExpiry: dateTimeAdd(baseTime, 'PT1H')
}).serviceSasToken (1)

resource dscExtension 'Microsoft.Compute/virtualMachines/extensions@2018-10-01' = {
  location: location
  parent: vm
  name: 'Microsoft.Powershell.DSC'
  properties: {
    publisher: 'Microsoft.Powershell'
    type: 'DSC'
    typeHandlerVersion: '2.77'
    autoUpgradeMinorVersion: true
    settings: {
      wmfVersion: 'latest'
      configuration: {
        url: '${stg.properties.primaryEndpoints.blob}${containerName}/common.ps1.zip' (2)
        script: 'common.ps1' (3)
        function: 'Common' (4)
      }
      configurationArguments: {}
    }
    protectedSettings: {
      configurationUrlSasToken: '?${_artifactsLocationSasToken}' (5)
    }
  }
}
(1) This function, as described in the documentation, returns a SAS token based on the given parameters. However, this SAS token does not include the leading '?' of a query string.
(2) URL built from the storage account blob endpoint, the container, and the archive generated previously. The blob endpoint already ends with a '/', hence the concatenation without an additional '/'.
(3) Name of the DSC script.
(4) Name of the configuration.
(5) A little trick to prepend the '?' and get a proper query string.
The complete main.bicep file is available here.
As I understand it, there is at the moment no built-in mechanism to create a parameter file, and parameter files remain the old JSON-based ARM template parameter files. I used a script found in a GitHub issue to prepare my parameters:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": {
      "value": "administrateur"
    },
    "adminPassword": {
      "value": "mysuperpassword"
    },
    "virtualNetworkName": {
      "value": "MyVNET"
    },
    "subnetName": {
      "value": "Subnet"
    },
    "storageAccountName": {
      "value": "mystorageaccount"
    },
    "vmName": {
      "value": "myvm"
    },
    "nicName": {
      "value": "nic-myvm"
    },
    "publicIpName": {
      "value": "pip-myvm"
    },
    "networkSecurityGroupName": {
      "value": "nsg-myvm"
    },
    "sizeOfEachDataDiskInGB": {
      "value": 128
    }
  }
}
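As a reference, here is a minimal PowerShell sketch of what such a generator can look like (this is not the exact script from the issue): it reads the parameters section of the ARM template produced by the build command, assumed here to be vm/main.json, and writes an empty skeleton:
# Assumption: vm/main.json was generated from vm/main.bicep (e.g. via 'az bicep build')
$template = Get-Content .\vm\main.json -Raw | ConvertFrom-Json
$parameters = [ordered]@{}
foreach ($name in $template.parameters.PSObject.Properties.Name) {
    # One empty entry per declared template parameter
    $parameters[$name] = @{ value = '' }
}
[ordered]@{
    '$schema'      = 'https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#'
    contentVersion = '1.0.0.0'
    parameters     = $parameters
} | ConvertTo-Json -Depth 5 | Set-Content .\vm\main.parameters.json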
Finally, we deploy the Bicep template with the parameters:
az deployment group create --resource-group my-rg --template-file vm/main.bicep --parameters @vm/main.parameters.json
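Once the deployment completes, you can check the provisioning state of the DSC extension on the VM:
az vm extension show --resource-group my-rg --vm-name myvm --name Microsoft.Powershell.DSC --query provisioningState --output tsv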
Conclusion
Here is an easy way to get a VM with a ready-to-use data disk.
This article was a way to test Bicep, and suffice it to say that Bicep is much more concise than ARM templates. Is it better?
I like the automatically generated dependency graph, as in Terraform, and I like the way existing resources can be referenced.
I did not like my very long list of parameters that I could not collapse in VSCode.
I had a few setbacks, though, and I was glad to be able to look at the generated ARM template (thanks to the build command) and see what was not working well with my template.
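With the Azure CLI extension, generating the ARM template from the Bicep file is a one-liner; it writes a main.json next to the Bicep file:
az bicep build --file vm/main.bicep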
In any case, Microsoft is really pushing towards Bicep and provides ready-to-use templates, which is very helpful.
With this example, I did not test modules, which are definitely a great addition to the language, close to Terraform's.
So, I'll bite and keep working with Bicep on Azure!