Managed identities can authenticate to Azure Files via the REST API using OAuth tokens; no storage account keys required.
The x-ms-file-request-intent: backup header is mandatory; without it, all OAuth requests return HTTP 400.
For OAuth-based access over the Azure Files REST API, assign the Storage File Data Privileged Reader or Storage File Data Privileged Contributor role, scoped appropriately (for example, at the file share level).
For SMB access, use the dedicated Storage File Data SMB Share roles instead.
OAuth tokens expire after roughly one hour; implement caching and proactive refresh.
No additional SMB OAuth configuration is required on the storage account when using OAuth authentication over the REST API.
OAuth-based REST access can be introduced alongside existing Shared Key or SAS usage during migration.
If you're accessing Azure Files from VMs using storage account keys, you've probably felt the pain:
Key rotation overhead: someone has to rotate them, update configs, and pray nothing breaks
Security risk: keys in config files are secrets waiting to be compromised
Limited auditability: logs show "AccessKey" but not which VM or identity made the request
Compliance headaches: auditors love asking about secret management
The goal: replace storage account key authentication with managed identity OAuth tokens. No secrets to manage. No keys to rotate. Full identity attribution in logs.
This post is part of my Managed Identity Series, replacing secrets with identity-based authentication across Azure services:
Managed Identity is an Azure-managed identity attached to your VM. No credentials to store; Azure handles the lifecycle automatically.
IMDS (Instance Metadata Service) is a REST endpoint available only from within Azure VMs at 169.254.169.254. Your VM can request OAuth tokens from IMDS without any pre-configured secrets.
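As an illustrative sketch (Python; the endpoint, Metadata header, and api-version are the documented IMDS values, while the helper names are my own), requesting a token for the Azure Storage resource looks like this:

```python
import json
import urllib.parse
import urllib.request

# IMDS token endpoint: only reachable from inside an Azure VM.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_token_request(resource="https://storage.azure.com/", api_version="2018-02-01"):
    """Build the IMDS token request; 'Metadata: true' is required or IMDS refuses the call."""
    query = urllib.parse.urlencode({"api-version": api_version, "resource": resource})
    return urllib.request.Request(
        f"{IMDS_TOKEN_URL}?{query}",
        headers={"Metadata": "true"},  # mandatory IMDS header; no secret is sent
    )

def fetch_token():
    """Call IMDS and return the parsed token payload (access_token, expires_on, ...)."""
    with urllib.request.urlopen(build_token_request(), timeout=5) as resp:
        return json.load(resp)
```

Note this only works from inside an Azure VM; the 169.254.169.254 endpoint is not routable from anywhere else.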
The x-ms-file-request-intent: backup header is mandatory for OAuth REST API access to Azure Files.
Without it:
HTTP 400 Bad Request
(No helpful error message)
With it:
HTTP 200 OK
Operations succeed
This header tells Azure Files to use backup semantic permissions, which:
Bypasses file- and directory-level permissions (such as NTFS ACLs), allowing access regardless of existing ACLs
Grants admin-level read/write access to all files
Is specifically designed for backup, restore, and auditing scenarios
Microsoft documentation mentions this header but doesn't emphasise just how non-negotiable it is. Every OAuth request will fail without it.
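To make the requirement concrete, here is a minimal sketch of the headers every OAuth request needs (Python; the helper name is my own, and the exact x-ms-version value is an assumption — it must simply be a service version that supports OAuth for file shares):

```python
def build_file_request_headers(access_token: str) -> dict:
    """Headers for an OAuth call to the Azure Files REST API.

    All three are required: the bearer token, a service version recent
    enough to support OAuth for file shares (the value below is an
    assumption), and the backup request intent.
    """
    return {
        "Authorization": f"Bearer {access_token}",
        "x-ms-version": "2022-11-02",
        # Without this header every OAuth request fails with HTTP 400:
        "x-ms-file-request-intent": "backup",
    }
```

Attach these to each request against the HTTPS endpoint, e.g. a GET on https://youraccount.file.core.windows.net/yourshare/yourfile (hypothetical names).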
OAuth tokens expire after approximately 1 hour. Your application must handle this.
Token Refresh Strategy:
Cache the token and its expires_on timestamp
Refresh proactively (e.g., 5 minutes before expiration)
Ensure thread-safe access to cached token if multi-threaded
Pseudocode:
function getToken():
    if cachedToken is null OR cachedToken.expiresOn < (now + 5 minutes):
        cachedToken = fetchTokenFromIMDS()
    return cachedToken.accessToken
Critical: If you get HTTP 401, invalidate your cached token, acquire a fresh one, and retry the request once. If it fails again, treat it as an authorization failure.
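That pseudocode translates to a small, thread-safe cache. This Python sketch injects the fetch function so the caching logic stands alone (names are illustrative; the fetcher would wrap the IMDS call):

```python
import threading
import time

REFRESH_MARGIN_SECONDS = 300  # refresh ~5 minutes before expiry

class TokenCache:
    """Caches an OAuth token and refreshes it shortly before it expires."""

    def __init__(self, fetch_token):
        # fetch_token() must return a dict with 'access_token' and 'expires_on' (epoch seconds)
        self._fetch_token = fetch_token
        self._lock = threading.Lock()
        self._token = None

    def get(self) -> str:
        with self._lock:
            expired = (
                self._token is None
                or time.time() >= float(self._token["expires_on"]) - REFRESH_MARGIN_SECONDS
            )
            if expired:
                self._token = self._fetch_token()
            return self._token["access_token"]

    def invalidate(self):
        """Call on HTTP 401 so the next get() fetches a fresh token; retry the request once."""
        with self._lock:
            self._token = None
```

On a 401 you would call invalidate(), call get() again, and retry the request once, as described above.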
SMB OAuth (EnableSmbOAuth) is for mounting shares via SMB protocol (UNC paths like \\storage.file.core.windows.net\share).
REST API OAuth is for HTTPS endpoints (https://storage.file.core.windows.net/share/...).
These are completely separate features. If you're using REST API calls, you don't need EnableSmbOAuth on the storage account.
Side note: I actually enabled EnableSmbOAuth on the same storage account during this investigation, thinking it might be required. It wasn't, and I couldn't get SMB OAuth working regardless. The SMB OAuth feature requires additional prerequisites (non-domain-joined VMs, the AzFilesSMBMIClient PowerShell module, etc.) that didn't apply to my use case. That's a separate rabbit hole for another day.
This should have been straightforward. Microsoft's documentation covers all the pieces: managed identities, OAuth tokens, the Azure Files REST API. But the critical detail (that x-ms-file-request-intent: backup header) is easy to miss, and the error messages don't help.
Once you know the trick, the implementation is clean:
Get token from IMDS
Add three headers to your requests
Cache and refresh tokens
No more keys in config files. No more rotation schedules. Full identity attribution in your logs.
If you're still using storage account keys for REST API access to Azure Files, now you don't have to.
It's all fun and games until the MSP contract expires and you realise 90 VMs still need their patching schedules sorted...
With our MSP contract winding down, the time had come to bring VM patching back in house. Our third-party provider had been handling it with their own tooling, which would no longer be used when the service contract expired.
Enter Azure Update Manager: the modern, agentless way to manage patching schedules across your Azure VMs. Add a bit of PowerShell, sprinkle in some Azure Policy, and you've got yourself a scalable, policy-driven solution that's more visible, auditable, and way more maintainable.
Here's how I made the switch, and managed to avoid a patching panic.
Why Resource Providers? Azure Update Manager needs these registered to create the necessary API endpoints and resource types in your subscription. Without them, you'll get cryptic "resource type not found" errors.
First order of business: collect the patching summary data from the MSP, which, helpfully, came in the form of multiple weekly CSV exports.
I used GenAI to wrangle the mess into a structured format. The result was a clear categorisation of VMs based on the day and time they were typically patched: a solid foundation to work from.
Some patch windows felt too tight, and, just as importantly, I needed to avoid overlaps with existing backup jobs. Rather than let a large CU fail halfway through or run headlong into an Azure Backup job, I extended the duration on select configs and staggered them across the week:
$config = Get-AzMaintenanceConfiguration -ResourceGroupName "rg-maintenance-uksouth-001" -Name "contoso-maintenance-config-vms-sun"
$config.Duration = "04:00"
Update-AzMaintenanceConfiguration -ResourceGroupName "rg-maintenance-uksouth-001" -Name "contoso-maintenance-config-vms-sun" -Configuration $config

# Verify the change
$updatedConfig = Get-AzMaintenanceConfiguration -ResourceGroupName "rg-maintenance-uksouth-001" -Name "contoso-maintenance-config-vms-sun"
Write-Host "Sunday window now: $($updatedConfig.Duration) duration" -ForegroundColor Green
Armed with CSV exports of the latest patching summaries, I got AI to do the grunt work and make sense of the contents.
What I did:
Exported MSP data: Weekly CSV reports showing patch installation timestamps for each VM
Used Gen AI with various iterative prompts, starting the conversation with this:
"Attached is an export summary of the current patching activity from our incumbent MSP who currently look after the patching of the VM's in Azure
I need you to review timestamps and work out which maintenance window each vm is currently in, and then match that to the appropriate maintenance config that we have just created.
If there are mis matches in new and current schedule then we may need to tweak the settings of the new configs"
AI analysis revealed:
60% of VMs were patching on one weekday evening
Several critical systems patching simultaneously
No consideration for application dependencies
AI recommendation: Spread VMs across weekdays based on:
Criticality: Domain controllers on different days
Function: Similar servers on different days (avoid single points of failure)
Dependencies: Database servers before application servers
The result: a logical rebalancing that avoided putting all our eggs in the "Sunday 1AM" basket and considered business impact.
Why this matters: the inherited patching schedule was not optimised for business continuity. AI helped identify risks we hadn't considered.
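The spreading logic itself is easy to sketch. This illustrative Python snippet (entirely hypothetical names; not the tooling I actually used) assigns each group of similar servers to the least-loaded weekday while keeping members of the same group on different days:

```python
from collections import Counter

WEEKDAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

def spread_across_days(groups, load=None):
    """Assign VMs to weekdays so that VMs in the same group (DCs, DB
    servers, ...) never share a day, while keeping overall load balanced.

    groups: dict of group name -> list of VM names
    load:   optional Counter of existing VMs per weekday
    """
    load = load if load is not None else Counter({d: 0 for d in WEEKDAYS})
    assignment = {}
    for group, vms in groups.items():
        used_days = set()
        for vm in vms:
            # Pick the least-loaded day not already used by this group
            # (fall back to all days if the group is larger than the week).
            candidates = [d for d in WEEKDAYS if d not in used_days] or WEEKDAYS
            day = min(candidates, key=lambda d: load[d])
            assignment[vm] = day
            used_days.add(day)
            load[day] += 1
    return assignment
```

The same two constraints (balance the load; separate single points of failure) are what the AI-driven rebalancing applied, just with business context layered on top.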
Before diving into bulk tagging, I needed to understand what we were working with across all subscriptions.
First, let's see what VMs we have:
Click to expand: Discover Untagged VMs (Sample Script)
# Discover Untagged VMs Script for Azure Update Manager
# This script identifies VMs that are missing Azure Update Manager tags
$scriptStart = Get-Date
Write-Host "=== Azure Update Manager - Discover Untagged VMs ===" -ForegroundColor Cyan
Write-Host "Scanning all accessible subscriptions for VMs missing maintenance tags..." -ForegroundColor White
Write-Host ""

# Function to check if VM has Azure Update Manager tags
function Test-VMHasMaintenanceTags {
    param($VM)
    # Check for the three required tags
    $hasOwnerTag = $VM.Tags -and $VM.Tags.ContainsKey("Owner") -and $VM.Tags["Owner"] -eq "Contoso"
    $hasUpdatesTag = $VM.Tags -and $VM.Tags.ContainsKey("Updates") -and $VM.Tags["Updates"] -eq "Azure Update Manager"
    $hasPatchWindowTag = $VM.Tags -and $VM.Tags.ContainsKey("PatchWindow")
    return $hasOwnerTag -and $hasUpdatesTag -and $hasPatchWindowTag
}

# Function to get VM details for reporting
function Get-VMDetails {
    param($VM, $SubscriptionName)
    return [PSCustomObject]@{
        Name           = $VM.Name
        ResourceGroup  = $VM.ResourceGroupName
        Location       = $VM.Location
        Subscription   = $SubscriptionName
        SubscriptionId = $VM.SubscriptionId
        PowerState     = $VM.PowerState
        OsType         = $VM.StorageProfile.OsDisk.OsType
        VmSize         = $VM.HardwareProfile.VmSize
        Tags           = if ($VM.Tags) { ($VM.Tags.Keys | ForEach-Object { "$_=$($VM.Tags[$_])" }) -join "; " } else { "No tags" }
    }
}

# Initialize collections
$taggedVMs = @()
$untaggedVMs = @()
$allVMs = @()
$subscriptionSummary = @{}

Write-Host "=== DISCOVERING VMs ACROSS ALL SUBSCRIPTIONS ===" -ForegroundColor Cyan

# Get all accessible subscriptions
$subscriptions = Get-AzSubscription | Where-Object { $_.State -eq "Enabled" }
Write-Host "Found $($subscriptions.Count) accessible subscriptions" -ForegroundColor White

foreach ($subscription in $subscriptions) {
    try {
        Write-Host "`nScanning subscription: $($subscription.Name) ($($subscription.Id))" -ForegroundColor Magenta
        $null = Set-AzContext -SubscriptionId $subscription.Id -ErrorAction Stop

        # Get all VMs in this subscription
        Write-Host "  Retrieving VMs..." -ForegroundColor Gray
        $vms = Get-AzVM -Status -ErrorAction Continue
        $subTagged = 0
        $subUntagged = 0
        $subTotal = $vms.Count
        Write-Host "  Found $subTotal VMs in this subscription" -ForegroundColor White

        foreach ($vm in $vms) {
            $vmDetails = Get-VMDetails -VM $vm -SubscriptionName $subscription.Name
            $allVMs += $vmDetails
            if (Test-VMHasMaintenanceTags -VM $vm) {
                $taggedVMs += $vmDetails
                $subTagged++
                Write-Host "  ✅ Tagged: $($vm.Name)" -ForegroundColor Green
            } else {
                $untaggedVMs += $vmDetails
                $subUntagged++
                Write-Host "  ⚠️ Untagged: $($vm.Name)" -ForegroundColor Yellow
            }
        }

        # Store subscription summary
        $subscriptionSummary[$subscription.Name] = @{
            Total          = $subTotal
            Tagged         = $subTagged
            Untagged       = $subUntagged
            SubscriptionId = $subscription.Id
        }
        Write-Host "  Subscription Summary - Total: $subTotal | Tagged: $subTagged | Untagged: $subUntagged" -ForegroundColor Gray
    } catch {
        Write-Host "  ❌ Error scanning subscription $($subscription.Name): $($_.Exception.Message)" -ForegroundColor Red
        $subscriptionSummary[$subscription.Name] = @{
            Total    = 0
            Tagged   = 0
            Untagged = 0
            Error    = $_.Exception.Message
        }
    }
}

Write-Host ""
Write-Host "=== OVERALL DISCOVERY SUMMARY ===" -ForegroundColor Cyan
Write-Host "Total VMs found: $($allVMs.Count)" -ForegroundColor White
Write-Host "VMs with maintenance tags: $($taggedVMs.Count)" -ForegroundColor Green
Write-Host "VMs missing maintenance tags: $($untaggedVMs.Count)" -ForegroundColor Red

if ($untaggedVMs.Count -eq 0) {
    Write-Host "ALL VMs ARE ALREADY TAGGED!" -ForegroundColor Green
    Write-Host "No further action required." -ForegroundColor White
    exit 0
}

Write-Host ""
Write-Host "=== SUBSCRIPTION BREAKDOWN ===" -ForegroundColor Cyan
$subscriptionSummary.GetEnumerator() | Sort-Object Name | ForEach-Object {
    $sub = $_.Value
    if ($sub.Error) {
        Write-Host "$($_.Key): ERROR - $($sub.Error)" -ForegroundColor Red
    } else {
        $percentage = if ($sub.Total -gt 0) { [math]::Round(($sub.Tagged / $sub.Total) * 100, 1) } else { 0 }
        Write-Host "$($_.Key): $($sub.Tagged)/$($sub.Total) tagged ($percentage%)" -ForegroundColor White
    }
}

Write-Host ""
Write-Host "=== UNTAGGED VMs DETAILED LIST ===" -ForegroundColor Red
Write-Host "The following $($untaggedVMs.Count) VMs are missing Azure Update Manager maintenance tags:" -ForegroundColor White

# Group untagged VMs by subscription for easier reading
$untaggedBySubscription = $untaggedVMs | Group-Object Subscription
foreach ($group in $untaggedBySubscription | Sort-Object Name) {
    Write-Host "`nSubscription: $($group.Name) ($($group.Count) untagged VMs)" -ForegroundColor Magenta
    $group.Group | Sort-Object Name | ForEach-Object {
        Write-Host "  • $($_.Name)" -ForegroundColor Yellow
        Write-Host "    Resource Group: $($_.ResourceGroup)" -ForegroundColor Gray
        Write-Host "    Location: $($_.Location)" -ForegroundColor Gray
        Write-Host "    OS Type: $($_.OsType)" -ForegroundColor Gray
        Write-Host "    VM Size: $($_.VmSize)" -ForegroundColor Gray
        Write-Host "    Power State: $($_.PowerState)" -ForegroundColor Gray
        if ($_.Tags -ne "No tags") {
            Write-Host "    Existing Tags: $($_.Tags)" -ForegroundColor DarkGray
        }
        Write-Host ""
    }
}

Write-Host "=== ANALYSIS BY VM CHARACTERISTICS ===" -ForegroundColor Cyan

# Analyze by OS Type
$untaggedByOS = $untaggedVMs | Group-Object OsType
Write-Host "`nUntagged VMs by OS Type:" -ForegroundColor White
$untaggedByOS | Sort-Object Name | ForEach-Object {
    Write-Host "  $($_.Name): $($_.Count) VMs" -ForegroundColor White
}

# Analyze by Location
$untaggedByLocation = $untaggedVMs | Group-Object Location
Write-Host "`nUntagged VMs by Location:" -ForegroundColor White
$untaggedByLocation | Sort-Object Count -Descending | ForEach-Object {
    Write-Host "  $($_.Name): $($_.Count) VMs" -ForegroundColor White
}

# Analyze by VM Size (to understand workload types)
$untaggedBySize = $untaggedVMs | Group-Object VmSize
Write-Host "`nUntagged VMs by Size:" -ForegroundColor White
$untaggedBySize | Sort-Object Count -Descending | Select-Object -First 10 | ForEach-Object {
    Write-Host "  $($_.Name): $($_.Count) VMs" -ForegroundColor White
}

# Analyze by Resource Group (might indicate application/workload groupings)
$untaggedByRG = $untaggedVMs | Group-Object ResourceGroup
Write-Host "`nUntagged VMs by Resource Group (Top 10):" -ForegroundColor White
$untaggedByRG | Sort-Object Count -Descending | Select-Object -First 10 | ForEach-Object {
    Write-Host "  $($_.Name): $($_.Count) VMs" -ForegroundColor White
}

Write-Host ""
Write-Host "=== POWER STATE ANALYSIS ===" -ForegroundColor Cyan
$powerStates = $untaggedVMs | Group-Object PowerState
$powerStates | Sort-Object Count -Descending | ForEach-Object {
    Write-Host "$($_.Name): $($_.Count) VMs" -ForegroundColor White
}

Write-Host ""
Write-Host "=== EXPORT OPTIONS ===" -ForegroundColor Cyan
Write-Host "You can export this data for further analysis:" -ForegroundColor White

# Export to CSV option
$timestamp = Get-Date -Format "yyyyMMdd-HHmm"
$csvPath = "D:\UntaggedVMs-$timestamp.csv"
try {
    $untaggedVMs | Export-Csv -Path $csvPath -NoTypeInformation
    Write-Host "✅ Exported untagged VMs to: $csvPath" -ForegroundColor Green
} catch {
    Write-Host "❌ Failed to export CSV: $($_.Exception.Message)" -ForegroundColor Red
}

# Show simple list for easy copying
Write-Host ""
Write-Host "=== SIMPLE VM NAME LIST (for copy/paste) ===" -ForegroundColor Cyan
Write-Host "VM Names:" -ForegroundColor White
$untaggedVMs | Sort-Object Name | ForEach-Object {
    Write-Host "  $($_.Name)" -ForegroundColor Yellow
}

Write-Host ""
Write-Host "=== NEXT STEPS RECOMMENDATIONS ===" -ForegroundColor Cyan
Write-Host "1. Review the untagged VMs list above" -ForegroundColor White
Write-Host "2. Investigate why these VMs were not in the original patching schedule" -ForegroundColor White
Write-Host "3. Determine appropriate maintenance windows for these VMs" -ForegroundColor White
Write-Host "4. Consider grouping by:" -ForegroundColor White
Write-Host "   • Application/workload (Resource Group analysis)" -ForegroundColor Gray
Write-Host "   • Environment (naming patterns, tags)" -ForegroundColor Gray
Write-Host "   • Business criticality" -ForegroundColor Gray
Write-Host "   • Maintenance window preferences" -ForegroundColor Gray
Write-Host "5. Run the tagging script to assign maintenance windows" -ForegroundColor White

Write-Host ""
Write-Host "=== AZURE RESOURCE GRAPH QUERY ===" -ForegroundColor Cyan
Write-Host "Use this query in Azure Resource Graph Explorer to verify results:" -ForegroundColor White
Write-Host ""
Write-Host @"
Resources
| where type == "microsoft.compute/virtualmachines"
| where tags.PatchWindow == "" or isempty(tags.PatchWindow) or isnull(tags.PatchWindow)
| project name, resourceGroup, subscriptionId, location,
    osType = properties.storageProfile.osDisk.osType,
    vmSize = properties.hardwareProfile.vmSize,
    powerState = properties.extended.instanceView.powerState.displayStatus,
    tags
| sort by name asc
"@ -ForegroundColor Gray

Write-Host ""
Write-Host "Script completed at $(Get-Date)" -ForegroundColor Cyan
Write-Host "Total runtime: $((Get-Date) - $scriptStart)" -ForegroundColor Gray
Discovery results:
35 VMs from the original MSP schedule (our planned list)
12 additional VMs not in the MSP schedule (the "stragglers")
Total: 90 VMs needing Update Manager tags
Key insight: The MSP wasn't managing everything. Several dev/test VMs and a few production systems were missing from their schedule.
PatchWindow: the key tag used by dynamic scopes to assign VMs to maintenance configurations
Owner: for accountability and filtering
Updates: identifies VMs managed by Azure Update Manager
Click to expand: Multi-Subscription Azure Update Manager VM Tagging (Sample Script)
# Multi-Subscription Azure Update Manager VM Tagging Script
# This script discovers VMs across multiple subscriptions and tags them appropriately
Write-Host "=== Multi-Subscription Azure Update Manager - VM Tagging Script ===" -ForegroundColor Cyan

# Function to safely tag a VM
function Set-VMMaintenanceTags {
    param(
        [string]$VMName,
        [string]$ResourceGroupName,
        [string]$SubscriptionId,
        [hashtable]$Tags,
        [string]$MaintenanceWindow
    )
    try {
        # Set context to the VM's subscription
        $null = Set-AzContext -SubscriptionId $SubscriptionId -ErrorAction Stop
        Write-Host "  Processing: $VMName..." -ForegroundColor Yellow

        # Get the VM and update tags
        $vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName -ErrorAction Stop
        if ($vm.Tags) {
            $Tags.Keys | ForEach-Object { $vm.Tags[$_] = $Tags[$_] }
        } else {
            $vm.Tags = $Tags
        }
        $null = Update-AzVM -VM $vm -ResourceGroupName $ResourceGroupName -Tag $vm.Tags -ErrorAction Stop
        Write-Host "  ✅ Successfully tagged $VMName for $MaintenanceWindow maintenance" -ForegroundColor Green
        return $true
    } catch {
        Write-Host "  ❌ Failed to tag $VMName`: $($_.Exception.Message)" -ForegroundColor Red
        return $false
    }
}

# Define all target VMs organized by maintenance window
$maintenanceGroups = @{
    "Monday" = @{
        "VMs"  = @("WEB-PROD-01", "DB-PROD-01", "APP-PROD-01", "FILE-PROD-01", "DC-PROD-01")
        "Tags" = @{ "Owner" = "Contoso"; "Updates" = "Azure Update Manager"; "PatchWindow" = "mon" }
    }
    "Tuesday" = @{
        "VMs"  = @("WEB-PROD-02", "DB-PROD-02", "APP-PROD-02", "FILE-PROD-02", "DC-PROD-02")
        "Tags" = @{ "Owner" = "Contoso"; "Updates" = "Azure Update Manager"; "PatchWindow" = "tue" }
    }
    "Wednesday" = @{
        "VMs"  = @("WEB-PROD-03", "DB-PROD-03", "APP-PROD-03", "FILE-PROD-03", "DC-PROD-03")
        "Tags" = @{ "Owner" = "Contoso"; "Updates" = "Azure Update Manager"; "PatchWindow" = "wed" }
    }
    "Thursday" = @{
        "VMs"  = @("WEB-PROD-04", "DB-PROD-04", "APP-PROD-04", "FILE-PROD-04", "PRINT-PROD-01")
        "Tags" = @{ "Owner" = "Contoso"; "Updates" = "Azure Update Manager"; "PatchWindow" = "thu" }
    }
    "Friday" = @{
        "VMs"  = @("WEB-PROD-05", "DB-PROD-05", "APP-PROD-05", "FILE-PROD-05", "MONITOR-PROD-01")
        "Tags" = @{ "Owner" = "Contoso"; "Updates" = "Azure Update Manager"; "PatchWindow" = "fri" }
    }
    "Saturday" = @{
        "VMs"  = @("WEB-DEV-01", "DB-DEV-01", "APP-DEV-01", "TEST-SERVER-01", "SANDBOX-01")
        "Tags" = @{ "Owner" = "Contoso"; "Updates" = "Azure Update Manager"; "PatchWindow" = "sat-09" }
    }
    "Sunday" = @{
        "VMs"  = @("WEB-UAT-01", "DB-UAT-01", "APP-UAT-01", "BACKUP-PROD-01", "MGMT-PROD-01")
        "Tags" = @{ "Owner" = "Contoso"; "Updates" = "Azure Update Manager"; "PatchWindow" = "sun" }
    }
}

# Function to discover VMs across all subscriptions
function Find-VMsAcrossSubscriptions {
    param([array]$TargetVMNames)
    $subscriptions = Get-AzSubscription | Where-Object { $_.State -eq "Enabled" }
    $vmInventory = @{}
    foreach ($subscription in $subscriptions) {
        try {
            $null = Set-AzContext -SubscriptionId $subscription.Id -ErrorAction Stop
            $vms = Get-AzVM -ErrorAction Continue
            foreach ($vm in $vms) {
                if ($vm.Name -in $TargetVMNames) {
                    $vmInventory[$vm.Name] = @{
                        Name              = $vm.Name
                        ResourceGroupName = $vm.ResourceGroupName
                        SubscriptionId    = $subscription.Id
                        SubscriptionName  = $subscription.Name
                        Location          = $vm.Location
                    }
                }
            }
        } catch {
            Write-Host "Error scanning subscription $($subscription.Name): $($_.Exception.Message)" -ForegroundColor Red
        }
    }
    return $vmInventory
}

# Get all unique VM names and discover their locations
$allTargetVMs = @()
$maintenanceGroups.Values | ForEach-Object { $allTargetVMs += $_.VMs }
$allTargetVMs = $allTargetVMs | Sort-Object -Unique
Write-Host "Discovering locations for $($allTargetVMs.Count) target VMs..." -ForegroundColor White
$vmInventory = Find-VMsAcrossSubscriptions -TargetVMNames $allTargetVMs

# Process each maintenance window
$totalSuccess = 0
$totalFailed = 0
foreach ($windowName in $maintenanceGroups.Keys) {
    $group = $maintenanceGroups[$windowName]
    Write-Host "`n=== $windowName MAINTENANCE WINDOW ===" -ForegroundColor Magenta
    foreach ($vmName in $group.VMs) {
        if ($vmInventory.ContainsKey($vmName)) {
            $vmInfo = $vmInventory[$vmName]
            $result = Set-VMMaintenanceTags -VMName $vmInfo.Name -ResourceGroupName $vmInfo.ResourceGroupName -SubscriptionId $vmInfo.SubscriptionId -Tags $group.Tags -MaintenanceWindow $windowName
            if ($result) { $totalSuccess++ } else { $totalFailed++ }
        } else {
            Write-Host "  ⚠️ VM not found: $vmName" -ForegroundColor Yellow
            $totalFailed++
        }
    }
}

Write-Host "`n=== TAGGING SUMMARY ===" -ForegroundColor Cyan
Write-Host "Successfully tagged: $totalSuccess VMs" -ForegroundColor Green
Write-Host "Failed to tag: $totalFailed VMs" -ForegroundColor Red
For the 12 VMs not in the original MSP schedule, I used intelligent assignment based on their function:
Click to expand: Tagging Script for Remaining Untagged VMs (Sample Script)
# Intelligent VM Tagging Script for Remaining Untagged VMs
# This script analyzes and tags the remaining VMs based on workload patterns and load balancing
$scriptStart = Get-Date
Write-Host "=== Intelligent VM Tagging for Remaining VMs ===" -ForegroundColor Cyan
Write-Host "Analyzing and tagging 26 untagged VMs with optimal maintenance window distribution..." -ForegroundColor White
Write-Host ""

# Function to safely tag a VM across subscriptions
function Set-VMMaintenanceTags {
    param(
        [string]$VMName,
        [string]$ResourceGroupName,
        [string]$SubscriptionId,
        [hashtable]$Tags,
        [string]$MaintenanceWindow
    )
    try {
        # Set context to the VM's subscription (only if it differs)
        $currentContext = Get-AzContext
        if ($currentContext.Subscription.Id -ne $SubscriptionId) {
            $null = Set-AzContext -SubscriptionId $SubscriptionId -ErrorAction Stop
        }
        Write-Host "  Processing: $VMName..." -ForegroundColor Yellow

        # Get the VM
        $vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName -ErrorAction Stop

        # Add maintenance tags to existing tags (preserve existing tags)
        if ($vm.Tags) {
            $Tags.Keys | ForEach-Object { $vm.Tags[$_] = $Tags[$_] }
        } else {
            $vm.Tags = $Tags
        }

        # Update the VM tags
        $null = Update-AzVM -VM $vm -ResourceGroupName $ResourceGroupName -Tag $vm.Tags -ErrorAction Stop
        Write-Host "  ✅ Successfully tagged $VMName for $MaintenanceWindow maintenance" -ForegroundColor Green
        return $true
    } catch {
        Write-Host "  ❌ Failed to tag $VMName`: $($_.Exception.Message)" -ForegroundColor Red
        return $false
    }
}

# Define current maintenance window loads (after existing 59 VMs)
$currentLoad = @{
    "Monday"    = 7
    "Tuesday"   = 7
    "Wednesday" = 10
    "Thursday"  = 6
    "Friday"    = 6
    "Saturday"  = 17   # Dev/Test at 09:00
    "Sunday"    = 6
}

Write-Host "=== CURRENT MAINTENANCE WINDOW LOAD ===" -ForegroundColor Cyan
$currentLoad.GetEnumerator() | Sort-Object Name | ForEach-Object {
    Write-Host "$($_.Key): $($_.Value) VMs" -ForegroundColor White
}

# Initialize counters for new assignments
$newAssignments = @{
    "Monday"    = 0
    "Tuesday"   = 0
    "Wednesday" = 0
    "Thursday"  = 0
    "Friday"    = 0
    "Saturday"  = 0   # Will use sat-09 for dev/test
    "Sunday"    = 0
}

Write-Host ""
Write-Host "=== INTELLIGENT VM GROUPING AND ASSIGNMENT ===" -ForegroundColor Cyan

# Define VM groups with intelligent maintenance window assignments
$vmGroups = @{
    # CRITICAL PRODUCTION SYSTEMS - Spread across different days
    "Critical Infrastructure" = @{
        "VMs" = @(
            @{ Name = "DC-PROD-01"; RG = "rg-infrastructure"; Sub = "Production"; Window = "Sunday"; Reason = "Domain Controller - critical infrastructure" },
            @{ Name = "DC-PROD-02"; RG = "rg-infrastructure"; Sub = "Production"; Window = "Monday"; Reason = "Domain Controller - spread from other DCs" },
            @{ Name = "BACKUP-PROD-01"; RG = "rg-backup"; Sub = "Production"; Window = "Tuesday"; Reason = "Backup Server - spread across week" }
        )
    }
    # PRODUCTION BUSINESS APPLICATIONS - Spread for business continuity
    "Production Applications" = @{
        "VMs" = @(
            @{ Name = "WEB-PROD-01"; RG = "rg-web-production"; Sub = "Production"; Window = "Monday"; Reason = "Web Server - Monday for week start" },
            @{ Name = "DB-PROD-01"; RG = "rg-database-production"; Sub = "Production"; Window = "Tuesday"; Reason = "Database Server - Tuesday" },
            @{ Name = "APP-PROD-01"; RG = "rg-app-production"; Sub = "Production"; Window = "Wednesday"; Reason = "Application Server - mid-week" }
        )
    }
    # DEV/TEST SYSTEMS - Saturday morning maintenance (like existing dev/test)
    "Development Systems" = @{
        "VMs" = @(
            @{ Name = "WEB-DEV-01"; RG = "rg-web-development"; Sub = "Development"; Window = "Saturday"; Reason = "Web Dev - join existing dev/test window" },
            @{ Name = "DB-DEV-01"; RG = "rg-database-development"; Sub = "Development"; Window = "Saturday"; Reason = "Database Dev - join existing dev/test window" },
            @{ Name = "TEST-SERVER-01"; RG = "rg-testing"; Sub = "Development"; Window = "Saturday"; Reason = "Test Server - join existing dev/test window" }
            # ... additional dev/test VMs
        )
    }
}

# Initialize counters
$totalProcessed = 0
$totalSuccess = 0
$totalFailed = 0

# Process each group
foreach ($groupName in $vmGroups.Keys) {
    $group = $vmGroups[$groupName]
    Write-Host "`n=== $groupName ===" -ForegroundColor Magenta
    Write-Host "Processing $($group.VMs.Count) VMs in this group" -ForegroundColor White

    foreach ($vmInfo in $group.VMs) {
        $window = $vmInfo.Window
        $vmName = $vmInfo.Name
        Write-Host "`n$vmName → $window maintenance window" -ForegroundColor Yellow
        Write-Host "  Reason: $($vmInfo.Reason)" -ForegroundColor Gray

        # Determine subscription ID from name
        # (every Sub value used in $vmGroups needs a matching branch here)
        $subscriptionId = switch ($vmInfo.Sub) {
            "Production"  { (Get-AzSubscription -SubscriptionName "Production").Id }
            "Development" { (Get-AzSubscription -SubscriptionName "Development").Id }
            "DevTest"     { (Get-AzSubscription -SubscriptionName "DevTest").Id }
            "Identity"    { (Get-AzSubscription -SubscriptionName "Identity").Id }
            "DMZ"         { (Get-AzSubscription -SubscriptionName "DMZ").Id }
        }

        # Create appropriate tags based on maintenance window
        $tags = @{
            "Owner"   = "Contoso"
            "Updates" = "Azure Update Manager"
        }
        if ($window -eq "Saturday") {
            $tags["PatchWindow"] = "sat-09"   # Saturday 09:00 for dev/test
        } else {
            $tags["PatchWindow"] = $window.ToLower().Substring(0, 3)   # mon, tue, wed, etc.
        }

        $result = Set-VMMaintenanceTags -VMName $vmInfo.Name -ResourceGroupName $vmInfo.RG -SubscriptionId $subscriptionId -Tags $tags -MaintenanceWindow $window
        $totalProcessed++
        if ($result) {
            $totalSuccess++
            $newAssignments[$window]++
        } else {
            $totalFailed++
        }
    }
}

Write-Host ""
Write-Host "=== TAGGING SUMMARY ===" -ForegroundColor Cyan
Write-Host "Total VMs processed: $totalProcessed" -ForegroundColor White
Write-Host "Successfully tagged: $totalSuccess" -ForegroundColor Green
Write-Host "Failed to tag: $totalFailed" -ForegroundColor Red

Write-Host ""
Write-Host "=== NEW MAINTENANCE WINDOW DISTRIBUTION ===" -ForegroundColor Cyan
Write-Host "VMs added to each maintenance window:" -ForegroundColor White
$newAssignments.GetEnumerator() | Sort-Object Name | ForEach-Object {
    if ($_.Value -gt 0) {
        $newTotal = $currentLoad[$_.Key] + $_.Value
        Write-Host "$($_.Key): +$($_.Value) VMs (total: $newTotal VMs)" -ForegroundColor Green
    }
}

Write-Host ""
Write-Host "=== FINAL MAINTENANCE WINDOW LOAD ===" -ForegroundColor Cyan
$finalLoad = @{}
$currentLoad.Keys | ForEach-Object { $finalLoad[$_] = $currentLoad[$_] + $newAssignments[$_] }
$finalLoad.GetEnumerator() | Sort-Object Name | ForEach-Object {
    $status = if ($_.Value -le 8) { "Green" } elseif ($_.Value -le 12) { "Yellow" } else { "Red" }
    Write-Host "$($_.Key): $($_.Value) VMs" -ForegroundColor $status
}
$grandTotal = ($finalLoad.Values | Measure-Object -Sum).Sum
Write-Host "`nGrand Total: $grandTotal VMs across all maintenance windows" -ForegroundColor White

Write-Host ""
Write-Host "=== BUSINESS LOGIC APPLIED ===" -ForegroundColor Cyan
Write-Host "✅ Critical systems spread across different days for resilience" -ForegroundColor Green
Write-Host "✅ Domain Controllers distributed to avoid single points of failure" -ForegroundColor Green
Write-Host "✅ Dev/Test systems consolidated to Saturday morning (existing pattern)" -ForegroundColor Green
Write-Host "✅ Production workstations spread to minimize user impact" -ForegroundColor Green
Write-Host "✅ Business applications distributed for operational continuity" -ForegroundColor Green
Write-Host "✅ Load balancing maintained across the week" -ForegroundColor Green

Write-Host ""
Write-Host "=== VERIFICATION STEPS ===" -ForegroundColor Cyan
Write-Host "1. Verify tags in Azure Portal across all subscriptions" -ForegroundColor White
Write-Host "2. Check that critical systems are on different days" -ForegroundColor White
Write-Host "3. Confirm dev/test systems are in Saturday morning window" -ForegroundColor White
Write-Host "4. Review production systems distribution" -ForegroundColor White

Write-Host ""
Write-Host "=== AZURE RESOURCE GRAPH VERIFICATION QUERY ===" -ForegroundColor Cyan
Write-Host "Use this query to verify all VMs are now tagged:" -ForegroundColor White
Write-Host ""
Write-Host @"
Resources
| where type == "microsoft.compute/virtualmachines"
| where tags.Updates == "Azure Update Manager"
| project name, resourceGroup, subscriptionId, patchWindow = tags.PatchWindow, owner = tags.Owner, updates = tags.Updates
| sort by patchWindow, name
| summarize count() by patchWindow
"@ -ForegroundColor Gray

if ($totalFailed -eq 0) {
    Write-Host ""
    Write-Host "ALL VMs SUCCESSFULLY TAGGED WITH INTELLIGENT DISTRIBUTION!" -ForegroundColor Green
} else {
    Write-Host ""
    Write-Host "⚠️ Some VMs failed to tag. Please review errors above." -ForegroundColor Yellow
}

Write-Host ""
Write-Host "Script completed at $(Get-Date)" -ForegroundColor Cyan
Write-Host "Total runtime: $((Get-Date) - $scriptStart)" -ForegroundColor Gray
Key insight: I grouped VMs by function and criticality, not just by convenience. Domain controllers got spread across different days, dev/test systems joined the existing Saturday morning window, and production applications were distributed for business continuity.
Here's where things get interesting. Update Manager is built on compliance, but your VMs won't show up in dynamic scopes unless they meet certain prerequisites. Enter Azure Policy to save the day.
You'll need two specific built-in policies assigned at the subscription (or management group) level:
What it does: This policy ensures your VMs have the necessary configurations to participate in Azure Update Manager. It automatically:
Installs the Azure Update Manager extension on Windows VMs
Registers required resource providers
Configures the VM to report its update compliance status
Sets the patch orchestration mode appropriately
Why this matters: Without this policy, VMs won't appear in Update Manager scopes even if they're tagged correctly. The policy handles all the "plumbing" automatically.
Assignment scope: Apply this at subscription or management group level to catch all VMs.
What it does: This is your compliance engine. It configures VMs to:
Regularly scan for available updates (but not install them automatically)
Report update status back to Azure Update Manager
Enable the compliance dashboard views in the portal
Provide the data needed for maintenance configuration targeting
Why this matters: This policy turns on the "update awareness" for your VMs. Without it, Azure Update Manager has no visibility into what patches are needed.
Assignment scope: Same as above; subscription or management group level.
This is where it all comes together, and where the magic happens.
Dynamic scopes use those PatchWindow tags to assign VMs to the correct patch config automatically. No more manual VM assignment, no more "did we remember to add the new server?" conversations.
Unfortunately, as of writing, dynamic scopes can only be configured through the Azure portal; there is no PowerShell or ARM template support yet.
Why portal only? Dynamic scopes are still in preview, and Microsoft hasn't released the PowerShell cmdlets or ARM template schemas yet. This means you can't fully automate the deployment, but the functionality itself works perfectly.
Here's the step-by-step:
Navigate to Azure Update Manager
Portal → All Services → Azure Update Manager
Access Maintenance Configurations
Go to Maintenance Configurations (Preview)
Select one of your configs (e.g., contoso-maintenance-config-vms-mon)
Create Dynamic Scope
Click Dynamic Scopes → Add
Name: DynamicScope-Monday-VMs
Description: Auto-assign Windows VMs tagged for Monday maintenance
Configure Scope Settings
Subscription: Select your subscription(s)
Resource Type: Microsoft.Compute/virtualMachines
OS Type: Windows (create separate scopes for Linux if needed)
Set Tag Filters
Tag Name: PatchWindow
Tag Value: mon (must match your maintenance config naming)
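Conceptually, the dynamic scope just evaluates each VM's resource type, OS, and tags against the filters above. Here's a rough Python sketch of that matching logic (an illustrative model only, not Azure's actual implementation; VM names are hypothetical):

```python
# Illustrative model of dynamic scope tag matching (not Azure's actual code).
# Each VM is represented as a dict of resource properties and tags.
vms = [
    {"name": "vm-app-01", "type": "microsoft.compute/virtualmachines",
     "os": "Windows", "tags": {"PatchWindow": "mon"}},
    {"name": "vm-app-02", "type": "microsoft.compute/virtualmachines",
     "os": "Windows", "tags": {"PatchWindow": "tue"}},
    {"name": "vm-db-01", "type": "microsoft.compute/virtualmachines",
     "os": "Linux", "tags": {"PatchWindow": "mon"}},
]

def in_scope(vm, os_type="Windows", tag_name="PatchWindow", tag_value="mon"):
    """Return True if the VM matches the dynamic scope filters."""
    return (vm["type"] == "microsoft.compute/virtualmachines"
            and vm["os"] == os_type
            and vm["tags"].get(tag_name) == tag_value)

matched = [vm["name"] for vm in vms if in_scope(vm)]
print(matched)  # ['vm-app-01']
```

This is why consistent tagging matters: a VM with a misspelled tag value silently falls out of scope and never gets patched.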
// Verify all VMs have maintenance tags
Resources
| where type == "microsoft.compute/virtualmachines"
| where tags.Updates == "Azure Update Manager"
| project name, resourceGroup, subscriptionId, patchWindow = tags.PatchWindow, owner = tags.Owner
| summarize count() by patchWindow
| order by patchWindow

// Check maintenance configuration assignments
MaintenanceResources
| where type == "microsoft.maintenance/configurationassignments"
| extend vmName = tostring(split(resourceId, "/")[8])
| extend configName = tostring(properties.maintenanceConfigurationId)
| project vmName, configName, subscriptionId
| summarize VMCount = count() by configName
| order by configName
PowerShell Verification:
# Quick check of maintenance configuration status
Get-AzMaintenanceConfiguration -ResourceGroupName "rg-maintenance-uksouth-001" |
    Select-Object Name, MaintenanceScope, RecurEvery |
    Format-Table -AutoSize

# Verify VM tag distribution
$subscriptions = Get-AzSubscription | Where-Object { $_.State -eq "Enabled" }
$tagSummary = @{}
foreach ($sub in $subscriptions) {
    Set-AzContext -SubscriptionId $sub.Id | Out-Null
    $vms = Get-AzVM | Where-Object { $_.Tags.PatchWindow }
    foreach ($vm in $vms) {
        $window = $vm.Tags.PatchWindow
        if (-not $tagSummary.ContainsKey($window)) { $tagSummary[$window] = 0 }
        $tagSummary[$window]++
    }
}
Write-Host "=== VM DISTRIBUTION BY PATCH WINDOW ===" -ForegroundColor Cyan
$tagSummary.GetEnumerator() | Sort-Object Name | ForEach-Object {
    Write-Host "$($_.Key): $($_.Value) VMs" -ForegroundColor White
}
# Verify VM has required extensions and configuration
$vmName = "your-vm-name"
$rgName = "your-resource-group"
$vm = Get-AzVM -Name $vmName -ResourceGroupName $rgName -Status
$vm.Extensions | Where-Object { $_.Name -like "*Update*" -or $_.Name -like "*Maintenance*" }
Validate Maintenance Configuration:
# Test maintenance configuration is properly formed
$config = Get-AzMaintenanceConfiguration -ResourceGroupName "rg-maintenance-uksouth-001" -Name "contoso-maintenance-config-vms-mon"
Write-Host "Config Name: $($config.Name)"
Write-Host "Recurrence: $($config.RecurEvery)"
Write-Host "Duration: $($config.Duration)"
Write-Host "Start Time: $($config.StartDateTime)"
Write-Host "Timezone: $($config.TimeZone)"
Policy Compliance Deep Dive:
# Check specific VMs for policy compliance
$policyName = "Set prerequisites for scheduling recurring updates on Azure virtual machines"
$assignments = Get-AzPolicyAssignment | Where-Object { $_.Properties.DisplayName -eq $policyName }
foreach ($assignment in $assignments) {
    Get-AzPolicyState -PolicyAssignmentId $assignment.PolicyAssignmentId |
        Where-Object { $_.ComplianceState -eq "NonCompliant" } |
        Select-Object ResourceId, ComplianceState,
            @{ Name = "Reason"; Expression = { $_.PolicyEvaluationDetails.EvaluatedExpressions.ExpressionValue } }
}
As always, comments and suggestions welcome over on GitHub or LinkedIn. If you've migrated patching in a different way, I'd love to hear how you approached it.
When we inherited our Azure estate from a previous MSP, some of the key technical components were already in place – ASR was configured for a number of workloads, and backups had been partially implemented across the environment.
What we didn't inherit was a documented or validated BCDR strategy.
There were no formal recovery plans defined in ASR, no clear failover sequences, and no evidence that a regional outage scenario had ever been modelled or tested. The building blocks were there – but there was no framework tying them together into a usable or supportable recovery posture.
This post shares how I approached the challenge of assessing and strengthening our Azure BCDR readiness. It's not about starting from scratch – it's about applying structure, logic, and realism to an environment that had the right intentions but lacked operational clarity.
Whether you're stepping into a similar setup or planning your first formal DR review, I hope this provides a practical and relatable blueprint.
Some workloads were actively replicated via ASR. Others were only backed up. Some had both; a few had neither. There was no documented logic to explain why.
Workload protection appeared to be driven by convenience or historical context – not by business impact or recovery priority.
What we needed was a structured tiering model:
Which workloads are mission-critical?
Which ones can tolerate extended recovery times?
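A tiering model can be captured in something as simple as a lookup table. A minimal sketch (the tier definitions, RTO/RPO numbers, and workload names are hypothetical, not our production values):

```python
# Hypothetical tiering model: map each workload to a recovery tier,
# and derive RTO/RPO targets from the tier rather than per workload.
TIERS = {
    0: {"desc": "Mission-critical", "rto_hours": 1, "rpo_minutes": 15},
    1: {"desc": "Business-important", "rto_hours": 4, "rpo_minutes": 60},
    2: {"desc": "Tolerates extended recovery", "rto_hours": 24, "rpo_minutes": 1440},
}

workloads = {"identity": 0, "erp-db": 0, "intranet": 1, "build-agents": 2}

def recovery_targets(workload):
    """Look up the tier and the RTO/RPO targets implied by it."""
    tier = workloads[workload]
    t = TIERS[tier]
    return tier, t["rto_hours"], t["rpo_minutes"]

print(recovery_targets("erp-db"))  # (0, 1, 15)
```

The point of the table isn't the code; it's that every workload gets an explicit tier, so the question "why is this protected and that not?" always has an answer.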
We had technical coverage – but no operational recovery strategy.
There were no Recovery Plans in ASR. No sequencing, no post-failover validation, and no scripts or automation.
In the absence of structure, recovery would be entirely manual β relying on individual knowledge, assumptions, and good luck during a critical event.
Codifying dependencies, failover order, and recovery steps became a priority.
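Codifying failover order can start as nothing more than a dependency graph plus a topological sort. A hedged sketch using Python's standard library (workload names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each workload lists what must be up before it.
deps = {
    "domain-controllers": [],
    "dns": ["domain-controllers"],
    "sql": ["dns"],
    "app-tier": ["sql", "dns"],
    "web-tier": ["app-tier"],
}

# static_order() yields a sequence in which every workload's
# dependencies appear before the workload itself.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['domain-controllers', 'dns', 'sql', 'app-tier', 'web-tier']
```

Even if the actual failover is driven by ASR Recovery Plans, writing the dependencies down in this form forces the sequencing conversation to happen before an incident, not during one.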
DNS and authentication are easy to overlook – until they break.
Our name resolution relied on internal DNS via AD-integrated zones, with no failover logic for internal record switching. No private DNS zones were in place.
Private Endpoints were widely used, but all existed in the primary region. In a DR scenario, they would become unreachable.
Identity was regionally redundant, but untested and not AZ-aware.
We needed to promote DNS, identity, and PE routing to first-class DR concerns.
Azure Storage backed a range of services – from SFTP and app data to file shares and diagnostics.
Replication strategies varied (LRS, RA-GRS, ZRS) with no consistent logic or documentation. Critical storage accounts weren't aligned with workload tiering.
Some workloads used Azure Files and Azure File Sync, but without defined mount procedures or recovery checks.
In short: compute could come back, but data availability wasn't assured.
NSGs, UDRs, custom routing, and SD-WAN all played a part in how traffic flowed.
But in DR, assumptions break quickly.
There was no documentation of network flow in the DR region, and no validation of inter-VM or service-to-service reachability post-failover.
Some services – like App Gateways, Internal Load Balancers, and Private Endpoints – were region-bound and would require re-deployment or manual intervention.
Networking wasn't the background layer – it was core to recoverability.
VM replication is only half the story. The other half is whether those VMs can actually start during a DR event.
Azure doesn't guarantee regional capacity unless you've pre-purchased it.
In our case, no capacity reservations had been made. That meant no assurance that our Tier 0 or Tier 1 workloads could even boot if demand spiked during a region-wide outage.
This is a quiet but critical risk – and one worth addressing early.
In the next post, I'll walk through how I formalised the review into a structured BCDR posture document – including:
Mapping workloads by tier and impact
Defining current vs target RTO/RPO
Highlighting gaps in automation, DNS, storage, and capacity
Building a recovery plan roadmap
Framing cost vs risk for stakeholder alignment
If you're facing a similar situation – whether you're inheriting someone else's cloud estate or building DR into a growing environment – I hope this series helps bring structure to the complexity.
Let me know if you'd find it useful to share templates or walkthroughs in the next post.
In Part 1 of this series, I shared how we reviewed our Azure BCDR posture after inheriting a partially implemented cloud estate. The findings were clear: while the right tools were in place, the operational side of disaster recovery hadn't been addressed.
There were no test failovers, no documented Recovery Plans, no automation, and several blind spots in DNS, storage, and private access.
This post outlines how I took that review and turned it into a practical recovery strategy – one that we could share internally, align with our CTO, and use as a foundation for further work with our support partner.
To provide context, our estate is deployed primarily in the UK South Azure region, with UK West serving as the designated DR target region.
It's not a template – it's a repeatable, real-world approach to structuring a BCDR plan when you're starting from inherited infrastructure, not a clean slate.
Most cloud teams can identify issues. Fewer take the time to formalise the findings in a way that supports action and alignment.
Documenting our BCDR posture gave us three things:
Clarity – a shared understanding of what's protected and what isn't
Visibility – a way to surface risk and prioritise fixes
Direction – a set of realistic, cost-aware next steps
We weren't trying to solve every problem at once. The goal was to define a usable plan we could act on, iterate, and eventually test – all while making sure that effort was focused on the right areas.
Executive Summary – what the document is, why it matters
Maturity Snapshot – a simple traffic-light view of current vs target posture
Workload Overview – what's in scope and what's protected
Recovery Objectives – realistic RPO/RTO targets by tier
Gaps and Risks – the areas most likely to cause DR failure
Recommendations – prioritised, actionable, and cost-aware
Next Steps – what we can handle internally, and what goes to the MSP
Each section followed the same principle: clear, honest, and focused on action. No fluff, no overstatements – just a straightforward review of where we stood and what needed doing.
Before we could plan improvements, we had to document what actually existed. This wasn't about assumptions – it was about capturing the real configuration and coverage in Azure.
The aim wasn't to build a full asset inventory – just to gather enough visibility to start making risk-based decisions about what mattered, and what was missing.
Once the current state was mapped, the next step was to define what "recovery" should actually look like – in terms that could be communicated, challenged, and agreed.
We focused on two key metrics:
RTO (Recovery Time Objective): How long can this system be offline before we see significant operational impact?
RPO (Recovery Point Objective): How much data loss is acceptable in a worst-case failover?
These weren't guessed or copied from a template. We worked from realistic assumptions based on our tooling, team capability, and the criticality of the services.
With recovery objectives defined, the gaps became much easier to identify – and to prioritise.
We weren't trying to protect everything equally. The goal was to focus attention on the areas that introduced the highest risk to recovery if left unresolved.
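One simple way to surface those highest-risk gaps is to compare the agreed targets against what the current tooling can realistically deliver. A sketch of that gap check (all workload names and numbers below are hypothetical):

```python
# Hypothetical current-vs-target comparison: flag workloads whose
# currently achievable RTO/RPO misses the agreed objective.
objectives = {
    # workload: (target_rto_h, target_rpo_h, current_rto_h, current_rpo_h)
    "erp-db":        (1, 0.25, 4, 0.5),
    "intranet":      (4, 1, 4, 1),
    "file-services": (8, 4, 24, 24),
}

def gaps(objs):
    """Return workloads where current RTO or RPO exceeds the target."""
    return sorted(w for w, (t_rto, t_rpo, c_rto, c_rpo) in objs.items()
                  if c_rto > t_rto or c_rpo > t_rpo)

print(gaps(objectives))  # ['erp-db', 'file-services']
```

The output is effectively the prioritised worklist: anything flagged here either needs better protection or a renegotiated objective.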
A plan needs ownership and timelines. We assigned tasks by role and defined short-, medium-, and long-term priorities using a simple planning table.
We treat the BCDR document as a living artefact β updated quarterly, tied to change control, and used to guide internal work and partner collaboration.
The original goal wasn't to build a perfect DR solution – it was to understand where we stood, make recovery realistic, and document a plan that would hold up when we needed it most.
We inherited a functional technical foundation – but needed to formalise and validate it as part of a resilient DR posture.
By documenting the estate, defining recovery objectives, and identifying where the real risks were, we turned a passive DR posture into something we could act on. We gave stakeholders clarity. We gave the support partner direction. And we gave ourselves a roadmap.
In the next part of this series, I'll walk through how we executed the plan:
Building and testing our first Recovery Plan
Improving ASR coverage and validation
Running our first failover drill
Reviewing results and updating the heatmap
If you're stepping into an inherited cloud environment or starting your first structured DR review, I hope this gives you a practical view of what's involved – and what's achievable without overcomplicating the process.
Let me know if you'd like to see templates or report structures from this process in a future post.
As part of my ongoing commitment to FinOps practices, I've implemented several strategies to embed cost-efficiency into the way we manage cloud infrastructure. One proven tactic is scheduling virtual machines to shut down during idle periods, avoiding unnecessary spend.
In this post, I'll share how I've built out custom Azure Automation jobs to schedule VM start and stop operations. Rather than relying on Microsoft's pre-packaged solution, I've developed a streamlined, purpose-built PowerShell implementation that provides maximum flexibility, transparency, and control.
# Azure CLI example to create the automation account
az automation account create \
  --name vm-scheduler \
  --resource-group MyResourceGroup \
  --location uksouth \
  --assign-identity
Param(
    [Parameter(Mandatory = $false)]
    [ValidateNotNullOrEmpty()]
    [String]$AzureVMName = "All",

    [Parameter(Mandatory = $true)]
    [ValidateNotNullOrEmpty()]
    [String]$AzureSubscriptionID = "<your-subscription-id>"
)

try {
    "Logging in to Azure..."
    # Authenticate using the system-assigned managed identity of the Automation Account
    Connect-AzAccount -Identity -AccountId "<managed-identity-client-id>"
}
catch {
    Write-Error -Message $_.Exception
    throw $_.Exception
}

$TagName = "AutoStartStop"
$TagValue = "devserver"

Set-AzContext -Subscription $AzureSubscriptionID

if ($AzureVMName -ne "All") {
    $VMs = Get-AzResource -TagName $TagName -TagValue $TagValue |
        Where-Object { $_.ResourceType -like 'Microsoft.Compute/virtualMachines' -and $_.Name -like $AzureVMName }
}
else {
    $VMs = Get-AzResource -TagName $TagName -TagValue $TagValue |
        Where-Object { $_.ResourceType -like 'Microsoft.Compute/virtualMachines' }
}

foreach ($VM in $VMs) {
    Stop-AzVM -ResourceGroupName $VM.ResourceGroupName -Name $VM.Name -Verbose -Force
}
Even this lightweight automation has produced major savings in our environment. Non-prod VMs are now automatically turned off outside office hours, resulting in monthly compute savings of up to 60% without sacrificing availability during working hours.
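The headline figure is easy to sanity-check: a VM billed per running hour that only runs during office hours skips most of the week. Assuming a 07:00–19:00 weekday schedule (an illustrative assumption; your windows may differ):

```python
# Back-of-envelope compute savings from an office-hours-only schedule.
# Assumes 12 running hours on each of 5 weekdays, billed per hour.
hours_per_week = 24 * 7   # 168 total billable hours if always on
running_hours = 12 * 5    # 60 hours actually running
savings = 1 - running_hours / hours_per_week
print(f"{savings:.0%}")   # 64%
```

In practice the realised saving lands a little lower once you account for VMs occasionally left on for overnight jobs, which is consistent with the "up to 60%" we observed.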
If you're looking for a practical, immediate way to implement FinOps principles in Azure, VM scheduling is a great place to start. With minimal setup and maximum flexibility, custom runbooks give you control without the complexity of the canned solutions.
Have you built something similar or extended this idea further? I'd love to hear about it – drop me a comment or reach out on LinkedIn.
Using Shared Access Signature (SAS) tokens with azcopy is common – but rotating tokens and handling them securely can be a hassle. To improve security and simplify our automation, I recently replaced SAS-based authentication in our scheduled AzCopy jobs with an Azure User Assigned Managed Identity (UAMI).
In this post, I'll walk through how to:
Replace AzCopy SAS tokens with managed identity authentication
Assign the right roles to the UAMI
Use azcopy login to authenticate non-interactively
Replace the use of SAS tokens in an AzCopy job that uploads files from a local UNC share to Azure Blob Storage – by using a User Assigned Managed Identity.
az role assignment create \
  --assignee <client-id-or-object-id> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container-name>"

az role assignment create \
  --assignee <client-id-or-object-id> \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

az role assignment create \
  --assignee <client-id-or-object-id> \
  --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
Here's the PowerShell script that copies all files from a local share to the Blob container:
$clientId = "<your-uami-client-id>"

# Login with Managed Identity
& "C:\azcopy\azcopy.exe" login --identity --identity-client-id $clientId

# Run the copy job
& "C:\azcopy\azcopy.exe" copy `
    "\\fileserver\data\export\\" `
    "https://<your-storage-account>.blob.core.windows.net/<container-name>" `
    --overwrite=true `
    --from-to=LocalBlob `
    --blob-type=Detect `
    --put-md5 `
    --recursive `
    --log-level=INFO
UNC Note: PowerShell itself does not treat backslashes as escape characters, so the UNC path can be written as-is. The trailing backslash is doubled so that the final backslash is not interpreted as escaping the closing quote when the argument is handed to the native azcopy executable.
This script can be scheduled using Task Scheduler or run on demand.
Storing SQL usernames and passwords in application configuration files is still common practice – but it poses a significant security risk. As part of improving our cloud security posture, I recently completed a project to eliminate plain text credentials from our app connection strings by switching to Azure User Assigned Managed Identity (UAMI) authentication for our SQL Managed Instance.
In this post, I'll walk through how to:
Securely connect to Azure SQL Managed Instance without using usernames or passwords
Use a User Assigned Managed Identity (UAMI) for authentication
Test this connection using the new Go-based sqlcmd CLI
Update real application code to remove SQL credentials
Replace plain text SQL credentials in application connection strings with User Assigned Managed Identity (UAMI) for secure, best-practice authentication to Azure SQL Managed Instances.
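For a .NET application using Microsoft.Data.SqlClient, the resulting connection string typically looks like the sketch below. The server name, database, and client ID are placeholders, and you should verify the exact keywords against the version of the client library you're running:

```
Server=tcp:<your-managed-instance>.<dns-zone>.database.windows.net;
Database=<your-database>;
Authentication=Active Directory Managed Identity;
User Id=<uami-client-id>;
Encrypt=True;
```

Note there is no password anywhere: `User Id` carries the client ID of the UAMI, and the token exchange happens transparently inside the driver.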
By switching to User Assigned Managed Identity, we removed credentials from connection strings and aligned SQL access with best practices for cloud identity and security.
Set up Application Insights on an IIS-based web farm.
Configure Log Analytics, Data Collection Rules, and Data Collection Endpoints.
Use PowerShell to install the Application Insights agent.
Monitor live metrics, failures, performance, and logs in real time.
By the end, you'll have a fully monitored IIS-based web farm using Azure!
To effectively monitor your IIS-based application, you need to configure Azure Application Insights and ensure all required components are installed on your Azure VMs.
Continue monitoring logs and alerts for trends.
Optimize Application Insights sampling to reduce telemetry costs.
Automate reporting for key performance metrics.
By following this guide, you'll have a robust, real-time monitoring setup for your IIS web farm, ensuring optimal performance and quick issue resolution!