
Azure Local

This guide offers an objective look at Azure Local—covering architecture, edge scenarios, security, connectivity, pricing, and comparisons to other hybrid clouds.

Carisa Stringer | June 16, 2025

What is Azure Local?

Azure Local is Microsoft’s full-stack infrastructure software that you run on validated bare-metal servers in your datacenter or edge site. It fuses Hyper-V compute, Storage Spaces Direct, and high-speed networking into a single cluster, then layers Azure services—virtual machines, AKS containers, and Azure Virtual Desktop—directly on top. 

Register the cluster with Azure Arc and you monitor, secure, and patch it from the same Azure portal you already use, while keeping sensitive data on-prem for compliance, data-residency, or ultra-low-latency scenarios. With pay-as-you-go Azure billing and familiar DevOps tools, you modernize local workloads without giving up control.

What core capabilities make Azure Local stand out from earlier Microsoft hybrid platforms?

Azure Local is more than a rebrand of Azure Stack HCI—it unifies Microsoft’s distributed-infrastructure portfolio and adds new flexibility you couldn’t get before. You still deploy familiar Hyper-V and Storage Spaces Direct on validated hardware, but now you can size clusters from one to sixteen nodes, run a wider set of Azure services on-prem, and manage everything through the Azure portal with Azure Arc. Those changes let you modernize local or edge workloads without changing your operational playbook.

Core capabilities at a glance

  • Unified brand and licensing – Azure Local replaces Azure Stack HCI, so you provision, register, and bill it like any other Azure service—charged per physical core on your monthly subscription invoice.
  • Cloud-consistent management – Registering the cluster with Azure Arc surfaces on-prem nodes, VMs, and Kubernetes workloads in the Azure portal for policy, monitoring, and extension management.
  • Expanded service catalog – Beyond Windows and Linux VMs, you can run AKS-enabled containers and Azure Virtual Desktop session hosts locally, then burst to Azure when capacity is tight.
  • Flexible sizing and form factors – Start with a single-node small-form-factor appliance or scale to 16 nodes for multi-petabyte clusters, all validated by Microsoft’s hardware partners.
  • Disconnected and edge operation – Keep workloads running during WAN outages and meet strict data-residency rules; sync with Azure when connectivity returns.
  • Low-latency network options – Pair with ExpressRoute Local or your existing MPLS to minimize egress fees and keep user experience snappy.
| Capability | Azure Local (2025) | Azure Stack HCI (pre-2024) | Stand-alone Hyper-V* |
| --- | --- | --- | --- |
| Azure subscription billing | Per core, pay-as-you-go | Yes (per core) | No |
| Azure portal + Azure Arc management | Built-in | Limited | None |
| Supported services on-prem | VMs, AKS, AVD | VMs, AKS (preview) | VMs only |
| Cluster size | 1–16 nodes | 2–16 nodes | N/A (single host or SCVMM) |
| Disconnected mode | Yes | Limited | N/A |

*Hyper-V row included for context.

How does Azure Local compare with other hybrid and edge platforms?

Hybrid choices abound—VMware, AWS, and Google all promise “cloud-on-prem.” But each platform delivers a different mix of management, pricing, and locality controls. The comparisons below highlight where Azure Local can simplify your decisions and where other offerings might fit better.

Where does Azure Local excel versus traditional VMware stacks?

Azure Local rolls virtualization, storage, and management into one Azure subscription, so you stop juggling separate hypervisor, vCenter, and add-on licenses.

| Feature | Azure Local | VMware vSphere / Cloud Foundation |
| --- | --- | --- |
| Billing model | Pay-as-you-go per physical core on your Azure bill | Per-core or per-processor licenses; 16-core minimum per CPU |
| Management | Azure Portal + Azure Arc (policy, monitor, updates) | vCenter + multiple add-ons (vRealize, vSAN, NSX) |
| Built-in services | VMs, AKS, Azure Virtual Desktop on-prem | VMs only; containers require Tanzu add-on |
| Disconnected mode | Supported for edge/sovereign sites | Not natively supported |

Quick wins for you:

  • Single vendor for OS, virtualization, and cloud services—no separate vSAN or NSX renewals.
  • Cloud-style patches and policy enforcement through Azure Arc, reducing tool sprawl.
  • Ability to burst VMs or AKS clusters to Azure regions without refactoring.

How does ExpressRoute Local cut egress costs compared with public-internet paths?

  • Zero egress fees — ExpressRoute Local includes outbound data transfer in the port price, unlike Standard or Premium circuits.
  • Metro-scope routing — traffic stays on Microsoft’s private backbone only to the nearest Azure region, trimming distance-based charges and latency.
  • Simpler QoS — private Layer-2/Layer-3 links avoid internet congestion, improving SLA adherence for AVD or latency-sensitive AKS workloads.
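The egress difference above is easy to model. The sketch below compares a month of data transfer on a Local circuit (egress bundled into the port fee) against a Standard circuit; the per-GB rate is a placeholder assumption, not a published Azure price, so substitute current pricing before using it for planning:

```python
# Illustrative monthly egress-cost comparison for ExpressRoute circuit types.
# ASSUMED_RATE is a hypothetical $/GB figure, not an actual Azure price.

def monthly_egress_cost(gb_transferred: float, per_gb_rate: float,
                        egress_included: bool) -> float:
    """Variable egress charge for one month; zero when egress is bundled."""
    return 0.0 if egress_included else gb_transferred * per_gb_rate

ASSUMED_RATE = 0.05  # hypothetical $/GB for a Standard circuit

local = monthly_egress_cost(50_000, ASSUMED_RATE, egress_included=True)
standard = monthly_egress_cost(50_000, ASSUMED_RATE, egress_included=False)
print(f"ExpressRoute Local egress:    ${local:,.2f}")
print(f"ExpressRoute Standard egress: ${standard:,.2f}")
```

Note that the port fee itself differs between circuit types; this only isolates the variable egress component.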

What differentiates Azure Local from AWS Outposts and Google Distributed Cloud?

| Capability | Azure Local | AWS Outposts | Google Distributed Cloud |
| --- | --- | --- | --- |
| Hardware ownership | Customer-owned, partner-validated nodes (1–16) | AWS-owned racks or 1U/2U servers delivered as a service | Google-owned racks (Connected) or customer hardware (Software Only) |
| Management plane | Azure Portal via Azure Arc | AWS Console via Region | Google Cloud Console |
| Local services | VMs, AKS, Azure Virtual Desktop | EC2, EBS, RDS, EKS (subset) | GKE, Anthos, AI accelerators |
| Offline / air-gapped option | Yes (disconnected mode) | No (requires AWS link) | Yes (GDC Air-gapped) |
| Billing | Per physical core, monthly Azure invoice | Capacity-based rack/server rental | Subscription or hardware purchase |

Key takeaways

  • Choose Azure Local when you want customer-owned hardware plus cloud-attached governance and optional offline operation. 
  • Outposts fits when you prefer vendor-managed hardware and tight service parity with AWS Regions. 
  • Google Distributed Cloud stands out for air-gapped deployments requiring Google Kubernetes Engine and AI services on-prem.

Which enterprise scenarios benefit most from Azure Local?

Not every workload can live comfortably in a public region. Azure Local bridges that gap, giving you cloud-consistent tooling on hardware you own. The four scenarios below are where analysts and real-world deployments show the platform delivering clear, verifiable value.

How do edge and branch operations gain from Azure Local?

  • Run real-time analytics or AI next to machines and IoT sensors, avoiding round-trip latency to the cloud.
  • Keep local services alive during WAN outages and sync when links return.
  • Scale from a single-node appliance in a closet to a 16-node cluster in a mini–data center.

    Proof point: A Prowess Consulting study showed factory-floor workloads on Azure Stack HCI processing sensor data locally for faster quality control.

Why is Azure Local suited to regulated industries with strict data-residency needs?

  • Disconnected mode keeps workloads entirely on-prem to meet sovereignty or classified-network rules.
  • When connectivity is allowed, Azure Arc applies policy and collects logs without copying customer data to the public cloud.

How can retail and manufacturing leverage Azure Local at scale?

  • Drop-in clusters for each store or plant deliver uniform POS, MES, or analytics stacks.
  • Central IT manages hundreds of sites via Azure Monitor and GitOps, slashing “truck-roll” overhead.

    Proof point: SSP Group is rolling out Azure Stack HCI clusters across its retail footprint, managed centrally through Azure Arc.

When does migrating from VMware or aging hardware to Azure Local pay off?

  • Consolidate hypervisor, storage, and network licensing into one per-core Azure charge.
  • Reuse existing Windows Server skills; manage clusters in the Azure portal instead of separate vCenter, vSAN, and NSX consoles.

How is an Azure Local cluster architected under the hood?

Azure Local brings Azure services on-prem while remaining fully governed through the Azure management plane via Azure Arc.

Local hardware runs the specialized Azure Local OS. Your Azure services (VMs, AKS, AVD) run locally on this OS layer. Azure Arc acts as the bridge, connecting this on-prem environment back to the Azure cloud management plane.

Azure Local packs Microsoft’s cloud building blocks into a hyper-converged cluster you run on premises. Knowing the hardware, storage/compute design, and network layout lets you size the environment correctly and avoid bottlenecks.

What validated hardware and sizing options are available?

  • Node count: Start with a single node for edge closets or scale to 16 nodes for multi-petabyte capacity.
  • OEM catalog: Choose servers pre-imaged with the Azure Local OS from Microsoft’s validated-hardware catalog; each new node is hardware-checked before joining the cluster.
  • Core specs: TPM 2.0 and Secure Boot are required; all-flash NVMe or SSD drives power Storage Spaces Direct, with optional GPUs up to 192 GB memory per host for AI workloads.
| Cluster Size | Typical Use Case | Example Footprint* |
| --- | --- | --- |
| 1–2 nodes | Branch/edge, retail store | 2 × 1U servers, 32 cores, 1 TB RAM |
| 3–8 nodes | Regional DC, manufacturing plant | 6 × 2U servers, 192 cores, 6 TB RAM |
| 9–16 nodes | Core datacenter, VDI farm | 12 × 2U servers, 384 cores, 12 TB RAM |

*Illustrative only—use the Azure sizing tool for exact numbers.
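For a first-pass capacity estimate before reaching for the sizing tool, the sketch below assumes three-way mirror resiliency (roughly one-third storage efficiency, the common default for clusters of three or more nodes) and holds back one drive's worth of capacity per node for rebuilds. Both assumptions are simplifications; real designs vary by resiliency type and reserve policy:

```python
# Rough usable-capacity estimate for a Storage Spaces Direct pool.
# Assumes three-way mirror (~1/3 efficiency) and one reserved drive per
# node for rebuild headroom. Illustrative only; use the Azure sizing tool.

def usable_tb(nodes: int, drives_per_node: int, drive_tb: float,
              efficiency: float = 1 / 3) -> float:
    raw = nodes * drives_per_node * drive_tb
    reserve = nodes * drive_tb        # one drive per node held back
    return max(raw - reserve, 0) * efficiency

# Example: 4 nodes, each with 8 x 3.84 TB NVMe drives
print(f"{usable_tb(4, 8, 3.84):.1f} TB usable")
```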

How do Storage Spaces Direct and Hyper-V deliver resilient, high-performance compute and storage?

  • Hyper-V provides the virtualization layer for Windows, Linux, AKS, and Azure Virtual Desktop workloads.
  • Storage Spaces Direct (S2D) pools local NVMe/SSD drives across the cluster, adds caching and erasure coding, and keeps data available during node or disk failures.
  • Cluster-aware updating (CAU) rolls out firmware and OS patches node by node, maintaining SLA uptime.

Which network topologies are recommended for performance and resilience?

| Topology | Node Count | Switches | Link Speed | Best For |
| --- | --- | --- | --- | --- |
| Switchless (direct) | 2 nodes | None | 25 GbE RDMA | Remote/branch closets |
| Dual-switch | 3–16 nodes | 2 TOR switches | 25–100 GbE RDMA | Most production clusters |
| Spine-leaf | 8–16 nodes | 4+ (leaf) + spine | 100 GbE RDMA | Large DC or multi-rack |
  • Reserve 1–2% of bandwidth on RDMA networks for system heartbeats; enable Priority Flow Control (PFC) as documented.
  • Use VLAN or VXLAN overlays to isolate SMB storage and VM traffic; QoS policies are configured in the Azure Local OS and enforced by the switches.
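The heartbeat reservation above is small in absolute terms. A quick arithmetic check (figures only, not a switch configuration):

```python
# Bandwidth reserved for cluster heartbeats on an RDMA link,
# given the 1-2% guidance above.

def reserved_mbps(link_gbps: float, percent: float) -> float:
    return link_gbps * 1000 * percent / 100

for pct in (1, 2):
    print(f"{pct}% of 25 GbE = {reserved_mbps(25, pct):.0f} Mb/s")
# -> 250 Mb/s and 500 Mb/s
```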

With the right hardware, resilient storage/compute stack, and tuned network fabric, you get a turnkey hybrid platform that behaves like Azure—even when it’s running in your own racks.

How does Azure Local ensure security, compliance, and data sovereignty?

Azure Local bakes Microsoft’s cloud-grade controls into hardware you own. Secure-booted nodes, policy-driven governance, and an optional disconnected mode let you meet industry mandates without losing the Azure management experience.

With audited controls, offline operation, and cloud-connected patch orchestration, it gives you the same security posture you expect in Azure—only this time inside your own racks.

Which compliance certifications and frameworks does Azure Local inherit on-prem?

Azure Local is assessed against the same core standards Microsoft uses in Azure regions, and published artifacts are available for audit teams.

| Standard / Program | Coverage in Azure Local |
| --- | --- |
| ISO 27001, 27017, 27018 | Controls mapped to the Azure blueprint; certificate available in the Trust Center. |
| SOC 1 & SOC 2 Type 2 | Independent attestation reports include the HCI control set. |
| PCI DSS 4.0 | Scope limited to customer-managed workloads; supporting documents provided. |
| FedRAMP High (in progress) | Aligns with the Azure Stack HCI roadmap for U.S. public-sector use. |

Policy & monitoring tools you can reuse

  • Azure Policy built-in definitions extend to Arc-enabled servers, letting you audit CIS, NIST, or custom tags on local VMs. 
  • Microsoft Defender for Cloud can be activated on Azure Local clusters for real-time threat analytics. 
  • Cloud-based management tools such as Intune let you extend unified endpoint management and compliance policies to virtual desktops or Cloud PCs running on Azure Local infrastructure, including Windows Autopatch for automated updates of the Windows operating system on those endpoints.
  • Trusted Launch adds vTPM, Secure Boot, and boot-integrity attestation for local VMs. 

How is patching and firmware management handled within the Azure portal?

  • Cluster-aware updating (CAU) orchestrates rolling OS and firmware updates—nodes drain, patch, reboot, and rejoin automatically, preserving SLA uptime.
  • Azure Update Manager surfaces available fixes in the portal and can schedule maintenance windows across multiple sites.
  • Disconnected sites download update bundles to removable media or a local WSUS mirror, then apply them through CAU—keeping data sovereign while staying current.


How do you connect Azure Local to Azure and other networks?

Connecting Azure Local is about choosing the right path for traffic—private ExpressRoute, encrypted VPN, or a mix—and then lighting up Azure Arc–based services over that link. The guidance below helps you balance cost, bandwidth, and latency while keeping monitoring and disaster-recovery options open.

What bandwidth and latency considerations affect workload placement?

  • ExpressRoute Local vs. Standard. Local circuits bundle all inbound and outbound transfer into the port fee, making them cheaper for data-heavy edge sites than Standard or Premium plans, which charge egress separately.
  • Link speed. Microsoft’s host-network guide recommends 25 GbE RDMA or faster between nodes and reserves 1% of that bandwidth for system heartbeats.
  • End-user latency targets. Keep round-trip latency from users to Azure (for burst scenarios or AVD gateway traffic) under 150 ms; sessions feel optimal below 100 ms.
  • Fallback VPN. Site-to-site IPsec tunnels can back up ExpressRoute or serve small offices that don’t justify a dedicated circuit.
| Option | Bandwidth | Egress Fees | Typical Latency | Best Use |
| --- | --- | --- | --- | --- |
| ExpressRoute Local | 1–100 Gbps | Included | <5 ms metro | Heavy data, low cost |
| ExpressRoute Standard | 1–100 Gbps | Per GB | <10 ms regional | Multi-region access |
| Site-to-site VPN | Up to 1.25 Gbps | Standard egress | Internet-dependent | Small or backup link |

How are monitoring, log analytics, backup, and site recovery services integrated?

  • Azure Monitor & Log Analytics. Install the Azure Monitor agent through Arc policy; logs stream into a workspace exactly like native Azure VMs.
  • Microsoft Sentinel. Onboard Arc-enabled servers to Sentinel to correlate security events across cloud and on-prem.
  • Azure Backup. Use Microsoft Azure Backup Server (MABS) to protect VMs running on Azure Local clusters. For workloads like Azure Virtual Desktop, Azure Backup provides a solution to protect the backend virtual machines, including those running Windows 11 Enterprise.
  • Azure Site Recovery (ASR). Continuously replicate VMs from Azure Local to Azure, with failover/fail-back orchestration in the portal.
| Service | Deployment Method | Data Path | Offline Support |
| --- | --- | --- | --- |
| Azure Monitor | Arc extension + Policy | Log Analytics workspace | Collects locally; uploads when link restores |
| Sentinel | Same as Monitor | Log Analytics + SIEM rules | Offline collection, delayed upload |
| Azure Backup | MABS agent | On-prem vault ➜ Azure Storage | Yes (stage locally) |
| Site Recovery | ASR agent | Async replication ➜ Azure | Requires network link |

What licensing and cost factors should finance and IT leaders evaluate?

Azure Local shifts you from perpetual host licenses to a cloud-style per-core subscription, yet you still purchase and depreciate the on-prem hardware. Understanding where that subscription starts—and what add-ons or network fees may apply—lets you model true TCO against public-cloud or VMware scenarios before you order the servers.

How does Azure Local pricing compare with pay-as-you-go Azure VMs or VMware license renewals?

| Cost Component | Azure Local (on-prem) | Azure Pay-as-you-go VM | VMware vSphere / VCF |
| --- | --- | --- | --- |
| Base host fee | $10 per physical core per month after a 60-day trial | None (usage billed per VM hour) | Per-core or per-CPU license (16-core minimum per socket) plus SnS contract |
| Windows Server guest rights | Optional add-on, $23.30 per core per month for unlimited VMs | Windows Server licensing included in VM rate | Separate Windows Server Datacenter licenses or SA |
| Cloud management | Included via Azure Arc | Native | vCenter + add-ons (vSAN, NSX) |
| Data-egress fees | None on ExpressRoute Local (bundled with port) | Charged per GB | N/A |
| Hardware cost | Customer-owned validated servers | None (Azure hardware) | Customer-owned servers |

Key takeaway: Azure Local consolidates hypervisor, storage, and management into one predictable per-core Azure subscription, then lets you re-use cloud governance tools—often cheaper than stacking VMware licenses or running large VM fleets in the public region.
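The per-core subscription is simple enough to model directly. The sketch below uses only the rates quoted above ($10/core host fee, optional $23.30/core Windows Server guest add-on, both waived under Azure Hybrid Benefit); hardware, power, support, and network charges are deliberately out of scope:

```python
# Minimal model of the Azure Local monthly subscription, using the
# article's quoted rates. Excludes hardware, support, and network costs.

HOST_FEE = 10.00    # $/physical core/month
GUEST_FEE = 23.30   # $/physical core/month, optional Windows Server rights

def monthly_subscription(cores: int, guest_addon: bool = False,
                         hybrid_benefit: bool = False) -> float:
    if hybrid_benefit:
        return 0.0  # AHB waives both the host fee and the guest add-on
    fee = HOST_FEE + (GUEST_FEE if guest_addon else 0.0)
    return cores * fee

print(monthly_subscription(64))                       # host fee only
print(monthly_subscription(64, guest_addon=True))     # with guest rights
print(monthly_subscription(64, hybrid_benefit=True))  # AHB applied
```

Running the model for a two-node, 64-core cluster makes the break-even math against VMware renewals or public-region VM fleets concrete.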

Which cost-optimization levers apply on-prem with Azure Local?

  • Azure Hybrid Benefit (AHB). If you already own Windows Server with Software Assurance, you can waive the $10 host fee and the $23.30 guest-OS add-on—cutting the total subscription cost to $0 per core for qualified licenses.

  • Right-size core counts. Because billing tracks physical cores, scaling from 32 to 24 cores per node trims the monthly host fee by 25 % with no effect on RAM-heavy workloads.

  • Choose ExpressRoute Local where possible. Its fixed port price includes all inbound and outbound transfer, avoiding surprise egress bills when syncing backups or telemetry to Azure.

  • Leverage free AKS on-prem. Running AKS on Azure Local (release 2402+) carries no additional control-plane charge, so container clusters cost only the host cores you already pay for.

  • Use reserved capacity for Azure burst workloads. When you burst VMs into an Azure region, pair Azure Local with 1- or 3-year reserved VM instances to lock in lower compute rates.

  • Monitor idle VMs with Azure Advisor. The same Advisor recommendations you see in Azure also flag oversized or idle VMs on Azure Local clusters via Arc policy.
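Because the host fee tracks physical cores linearly, the right-sizing claim above can be sanity-checked with one line of arithmetic:

```python
# Percentage reduction in the per-core host fee from right-sizing cores,
# verifying the 32 -> 24 core example above.

def fee_reduction_pct(old_cores: int, new_cores: int) -> float:
    return (old_cores - new_cores) / old_cores * 100

print(f"{fee_reduction_pct(32, 24):.0f}% lower host fee")  # -> 25%
```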

     

A clear view of the per-core subscription, optional guest licensing, network charges, and available discounts allows your finance and IT teams to model break-even points and determine whether Azure Local, straight Azure, or a VMware renewal delivers the best ROI.

What are the prerequisites and step-by-step process for deploying Azure Local?

Rolling out Azure Local is straightforward if you line up the hardware, identity, and network pieces first. Use the checklist below, then follow the numbered workflow to stand up a production-ready cluster without surprises.

Which prerequisites must you satisfy before imaging the first node?

| Area | Key Requirements |
| --- | --- |
| Hardware | Microsoft-validated server SKUs; TPM 2.0 and Secure Boot enabled; all-flash NVMe/SSD, no RAID/SAN controllers. |
| Network | Choose a reference topology (switchless, dual-TOR, or spine-leaf) and reserve RDMA-capable 25–100 GbE links; assign static management/storage VLANs. |
| Identity & DNS | An Active Directory domain and DNS records for each host; a service account with “Create Computer Objects” rights. |
| Azure subscription | Contributor rights to register the cluster—first 60 days are free, then $10 per physical core per month. |
| Environment check | Run the Azure Local Environment Checker to validate BIOS, firmware, and network settings. |

What is the end-to-end deployment workflow?

Follow this prereq checklist and seven-step workflow to get your Azure Local cluster into production quickly—and keep it aligned with Microsoft’s support matrix from day one:

  1. Cable and power the nodes according to the selected network topology; verify RDMA and PFC settings on the switches.
  2. Install the Azure Local OS from the latest ISO or via Windows Admin Center (WAC) imaging; each server auto-enables the Hyper-V and Storage Spaces Direct features.
  3. Create the cluster in WAC (or PowerShell): validate all nodes, set the cluster name, and enable Storage Spaces Direct.
  4. Register with Azure from WAC or the Azure portal; this starts the 60-day free trial and onboards the cluster to Azure Arc for policy and monitoring.
  5. Run cluster-aware updating (CAU) to apply the latest cumulative update and firmware bundles node-by-node, keeping workloads online.
  6. Enable services such as AKS, Azure Virtual Desktop, or Microsoft Defender extensions through Arc; deploy your first VMs or containers.
  7. Configure backup and DR by installing Azure Backup Server (MABS) and Azure Site Recovery agents, then test failover to an Azure region.

How can Nerdio streamline Azure Local operations and lifecycle management?

Nerdio Manager for Enterprise (NME) plugs into Azure Arc and the Azure Local resource bridge, giving you a single console to automate provisioning, scaling, and upkeep of on-prem Azure Virtual Desktop (AVD), VM, and AKS workloads. The features below come from Microsoft-validated guidance and Nerdio’s own release notes—so you can quantify savings without wading into marketing hype.

How does Nerdio Manager automate provisioning and scaling on Azure Local?

  • One-click host-pool creation. NME builds AVD pools on Azure Local clusters in minutes, wiring up FSLogix profiles and session-host settings automatically.
  • Dynamic auto-scaling. Convert static pools to dynamic and pay only for cores in use; v5.7 adds full support for Azure Local 23H2.
  • Hybrid recognition. Mark a host pool as Hybrid so NME treats it as an on-prem workload while still applying cloud policies.
| Task | Native Tools | With Nerdio Manager |
| --- | --- | --- |
| Create AVD host pool | PowerShell + JSON templates | Guided wizard—≈5 min |
| Scale hosts on demand | Manual sizing or custom script | Auto-scale engine (CPU, RAM, schedule) |
| Image management | Azure Portal + CLI | Central image gallery with versioning |
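Nerdio's auto-scale engine is proprietary, but the kind of decision it makes each cycle can be sketched. Everything below is illustrative: the `desired_hosts` function, the CPU thresholds, and the working-hours window are assumptions for exposition, not Nerdio's actual API or defaults:

```python
# Hypothetical schedule- and load-based scale decision, in the spirit of
# an AVD auto-scale engine. Thresholds and names are illustrative only.

def desired_hosts(current: int, avg_cpu_pct: float, hour: int,
                  min_hosts: int = 1, max_hosts: int = 10) -> int:
    working_hours = 8 <= hour < 18
    target = current
    if avg_cpu_pct > 80:
        target = current + 1            # scale out under load
    elif avg_cpu_pct < 20 and not working_hours:
        target = current - 1            # scale in off-hours when idle
    return max(min_hosts, min(max_hosts, target))

print(desired_hosts(4, 90, 10))   # busy mid-morning: scale out to 5
print(desired_hosts(4, 10, 22))   # idle at night: scale in to 3
```

Real engines add ramp-up schedules, user-session draining, and pre-staging, but the core loop is a bounded adjustment like this one.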

How does Nerdio simplify image, app, and profile lifecycle?

  • Image gallery sync. Import Azure Marketplace or custom gold images, then push updates to on-prem host pools in a maintenance window.
  • Application repositories. Package MSIX or traditional installers once and deploy across cloud and local clusters.
  • Profile resilience. Pre-configured FSLogix templates cut roaming-profile troubleshooting time.

What cost and monitoring insights does Nerdio add?

  • Usage analytics. Dashboards surface idle hosts and per-user cost, so you can right-size core counts and reclaim under-used nodes.
  • Alerting & remediation. Built-in runbooks restart failed session hosts or clear stuck service agents—helpful while Microsoft irons out resource-bridge performance limits.
  • License alignment. Reports show where Azure Hybrid Benefit or Windows Server Datacenter licensing can eliminate the monthly per-core host fee.

By layering Nerdio Manager on top of Azure Local, you automate the grunt work—provisioning, patching, scaling, and cost analysis—so your team spends less time in PowerShell and more time delivering services to the business.


About the author


Carisa Stringer

Head of Product Marketing

Carisa Stringer is the Head of Product Marketing at Nerdio, where she leads positioning, messaging, and go-to-market strategy for the company’s enterprise and MSP technology solutions. She joined Nerdio in 2025, bringing extensive experience in end user computing, desktops-as-a-service, and Microsoft technologies. Prior to her current role, Carisa held key product marketing positions at Citrix and Anthology, where she contributed to innovative go-to-market initiatives. Her career reflects a strong track record in driving growth and adoption in the enterprise technology sector. Carisa holds a Bachelor of Science in Industrial Engineering from the Georgia Institute of Technology.
