Enterprise Cost Benefits of Using AVD Compared to Other Cloud VDI Vendors

Since Azure Virtual Desktop (AVD) entered General Availability in October 2019, it has rapidly gained popularity among enterprises, governments, and educational institutions. Beyond the technological advantages that AVD brings to the table, there are a number of nuances that also make it the most cost-effective cloud-based virtual desktop solution.

Multi-Session Windows 10 – aka Windows 10 EVD (Enterprise Virtual Desktop) Edition 

A core component of the AVD service is the new Windows 10 EVD, a multi-session desktop-class Windows operating system.  This OS is only available in Azure as part of the AVD service and cannot be used on-premises or in another cloud environment.  The cost advantage comes from the ability to consolidate many individual users onto a single VM, thereby reducing the cost of cloud infrastructure on a per-user basis to a fraction of what it would be in a one-to-one user-to-VM assignment. 

It is certainly possible to use a multi-user operating system in other clouds, but the OS would have to be Windows Server 2016 or 2019.  Because this is a server-class operating system, many cloud VDI implementations opt for Windows 10 Enterprise (single-user OS) to give the users a native desktop experience, but are forced to pay for a lot more infrastructure as a result of giving each user an entire VM. 

No RDS Licensing Required 

Most AVD deployments leverage the Windows 10 EVD (multi-session) OS instead of a server-class operating system. Windows 10 EVD is licensed as a subscription that most enterprises already own as part of their existing Microsoft 365 license. Therefore, there is no need to pay anything extra for an RDS license or similar, since Windows 10 EVD is already included in the subscription. In other clouds, when using multi-session operating systems like Windows Server 2016 and 2019, an RDS license is needed. This adds $6 to $7 per user per month to the cost of the deployment.

Microsoft-Managed Control Plane and Connection Broker 

As part of the AVD service, Microsoft provides all of the “infrastructure RDS roles” as a managed service included with the AVD license that comes as part of a Microsoft 365 subscription. This includes the connection broker, security gateway, HTML5 client, and other components that must be hosted on dedicated virtual machines when RDS is deployed in other clouds or on-premises. By providing these services at no additional charge and without the need for any dedicated VMs, AVD further reduces the cost of deployment and ongoing management of virtual desktops in Azure.

No Third-Party Presentation Layer Needed 

Other cloud providers and virtual desktop hosting services leverage presentation layer technology from vendors such as Citrix, VMware, Teradici and others.  These vendors’ technology provides value and improvements to the end-user experience, but adds significant cost on top of the cloud infrastructure.  With AVD, Microsoft has refined the RDP protocol and made it perfectly suited for the vast majority of deployment scenarios.  When using native AVD with the native RDP protocol, there is no additional “presentation layer tax” that needs to be paid to third party vendors. 

Cost of IaaS in Azure 

Azure provides the most affordable, highest-performance infrastructure for virtual machine workloads, which is what AVD session hosts run on. Azure Hybrid Benefit reduces the cost of compute by up to 40% when the OS license is provided separately rather than rented through the Azure VM. This is exactly how AVD works: the OS license (Windows 10 EVD) comes as part of a Microsoft 365 subscription, which allows Azure VMs to be used without paying for the OS through Azure, resulting in significant savings. On other cloud platforms, you still pay for the server OS on the VMs (e.g., Server 2016/2019) even when you own the license under a different licensing program.

Other Azure IaaS cost benefits include Reserved Instances, which allow customers to commit to compute capacity in a specific Azure region and experience savings of up to 50%. Combining Azure Hybrid Benefit and Reserved Instances savings reduces the cost of a VM by up to 80%, as compared to the pay-as-you-go price. No other cloud provider can come close.
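To see why the two discounts do not simply add together, note that they compound: the Reserved Instance discount applies to the price left after Azure Hybrid Benefit has been subtracted. A quick sketch with illustrative rates (the exact percentages are assumptions here and vary by VM series, region, and reservation term):

```python
def combined_discount(hybrid_benefit: float, reserved_instance: float) -> float:
    """Discounts compound multiplicatively: the RI discount applies to
    the price remaining after Azure Hybrid Benefit, not to list price."""
    remaining = (1 - hybrid_benefit) * (1 - reserved_instance)
    return 1 - remaining

# Illustrative rates only -- actual percentages vary by VM series,
# region, and reservation term.
print(f"{combined_discount(0.40, 0.62):.0%}")  # 77%
print(f"{combined_discount(0.40, 0.50):.0%}")  # 70%
```

With a 40% Hybrid Benefit and a deep multi-year reservation discount, the combined figure approaches the "up to 80%" quoted above.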

Auto-scaling is another technology that can significantly reduce the infrastructure cost of AVD deployments. Nerdio Manager for Enterprise is one such tool and can reduce the cost of Azure infrastructure by up to 75% without the need to commit to Reserved Instances.

Existing Azure Footprint 

Many organizations already have a footprint in the Microsoft cloud — whether that’s Office 365, Azure AD, Express Route, or one of the many other Azure services.  This means that the hurdle of deploying a hosted VDI environment in Azure is much lower than doing so on another cloud or with a hosted desktop provider.  The existing Azure footprint can be utilized to deploy the AVD environment quicker and support it in the same environment as other IT resources.  This reduces the time to deployment and engineering costs involved in the process. 

Cloud-Native Management Tools 

Azure Virtual Desktop was created as a cloud-first (really cloud-ONLY) technology and built from the ground up to be Azure-native. Alongside AVD, management tools like Nerdio Manager for Enterprise were created to simplify and speed up the process of deploying, managing, and auto-scaling Azure Virtual Desktop environments and to democratize the use of AVD technology. A fully functional environment can be stood up with Nerdio Manager for Enterprise in just two hours, dramatically faster and simpler than any other cloud desktop solution.

Other “legacy” virtual desktop technologies were created before the cloud, and certainly not exclusively for the cloud. They must therefore be retrofitted to work in the new world of the cloud, and their complexity and difficulty of use make that retrofit very apparent.

At Nerdio, we empower IT professionals to deploy, manage, and auto-scale large Azure Virtual Desktop environments with Nerdio Manager for Enterprise, the most secure and intuitive AVD management platform available today. If you would like a free 30-day trial of Nerdio Manager for Enterprise, click the button below to get started.


Increasing Growth in Microsoft Azure Amidst Covid-19

Microsoft reported 59% growth in Microsoft Azure last quarter. Did you grow your practice by 59% in the same period?

Over the past few months, there have been numerous articles, webinars, podcasts, and open forums on Facebook and Reddit talking about the impact of Covid-19 on the MSP ecosystem. Some MSPs have unfortunately had to scale down their practice dramatically or even close their doors as their small and medium business customers were shuttered. At minimum, all MSPs need to rethink their path to continued profitability.

At a time of worldwide tragedy and upheaval, it could be viewed as inappropriate to speak about making money. Numerous individuals have written that in public forums. As a vendor to the MSP ecosystem, I try to promote Nerdio as a way to enable MSPs to build a work-from-home solution for their customers while at the same time offering “general assistance” to MSPs during this difficult time. But if there is one thing the Covid-19 crisis has laid bare in the MSP ecosystem, it is the following: if you are an MSP and not managing your customers' infrastructure in the cloud, you are missing a train that left the station long ago. You likely do not have immediate access to your customers' IT environments and are left at a disadvantage during this crisis relative to your MSP peers who have embraced the cloud.

Why do MSPs balk at moving their practice to the cloud and Microsoft Azure?  There are three primary reasons:

  1. They lack a technical resource who knows how to architect and manage a customer environment in Azure; hiring that person is too expensive.
  2. They find Azure to be too complicated and are scared off by the breadth of Azure's services.
  3. They are concerned that consumption-based pricing is too risky and cannot be packaged in a fixed-cost way similar to an on-premises environment.

While those are legitimate and real challenges, they should not prevent an MSP from building a cloud practice in Azure.   Automation platforms like Nerdio will help you deploy, price, manage, and optimize Azure environments without the need to hire an expensive Azure engineer.  Microsoft has put in place rich offers and licensing programs that give MSPs a discount of up to 80% off the list price of Azure.  I am here to tell you that we have already seen thousands of MSPs who easily deploy, manage, and make money with Microsoft Azure. And with the recently released Windows Virtual Desktop, empowering work from home solutions in Azure with a native Windows 10 operating system has never been easier from a technology and licensing perspective.

With that, why are MSPs still not moving to the cloud? I would posit there is a fourth reason, one we often run into, called “MSP Inertia”: an MSP says, “I have always done it this way and don't want to change the way I do things.” MSPs argue they are used to deploying, managing, and securing their customers on-premises and they simply do not want to change that model. MSPs also say, “I am generating recurring revenue on a managed services model on-premises. Why do I want to change things up?” MSPs are comfortable doing things a certain way and are concerned about how they can move to the new reality of the cloud on behalf of their customers.

When talking about cloud computing, people often refer to products as one of three things—Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).  Over four years ago, most MSPs saw the writing on the wall when it came to productivity software.  It no longer made sense to manage an on-premises Exchange Server when Microsoft was investing considerable resources developing Office 365 (now known as Microsoft 365). Today, more than 80% of MSPs have moved most of their customers to Microsoft 365—the company’s most popular SaaS product.

Imagine where your MSP practice would be if you still were managing an Exchange Server during a crisis, pandemic, or natural disaster and that server went down?  Wouldn’t you feel a bit embarrassed that you had not brought your customers to the cloud with Microsoft 365?  Even without a crisis at hand, wouldn’t you have a hard time explaining why your customers cannot benefit from the breadth of technology in Microsoft 365, including collaboration with Teams, enhanced security, and much more?

Now is the time for all MSPs to build a strong IaaS practice on the back of Microsoft Azure. It is time to take that same leap of faith you did several years ago with Office 365 and bring your practice fully to the cloud with Microsoft Azure.

A mental model you can use when thinking about when to move customers to the cloud is as follows:

  • Step 1—All new customers you onboard into your practice should be 100% in the cloud from Day 1.
  • Step 2—Take your existing customers and design a hybrid approach: first bring them to the cloud with Microsoft 365, then move their infrastructure to the cloud with Microsoft Azure.
  • Step 3—Fully migrate all remaining customers to the cloud as their on-premises equipment needs to be retired or breaks.

The promise of Azure and the public cloud is not a vision to be realized some years from now. It is the here and now for MSPs who want to stay in the game. It is the investment you must make to remain competitive and to deliver the most value to your customers.  And it is the way to prepare for a new world of work where your customers demand and insist on the best work from home solutions that you as their MSP must deliver.

Not betting on Microsoft Azure and ignoring the opportunity to move your customers to the cloud will certainly leave you uncompetitive and wondering why you delayed this decision or succumbed to MSP Inertia.

Joseph Landes is the Chief Revenue Officer at Nerdio. Joseph joined Nerdio in November 2018 after a 23-year career at Microsoft where he led high-performance sales and marketing teams around the world including the US, India, Brazil, Russia, and Germany. He loves talking to partners about how to build successful cloud practices in Microsoft Azure and was named a CRN Channel Chief in 2020 for his work evangelizing the cloud and Nerdio’s contribution to the channel.

VM Sizing and Deployment Strategy for Cost Savings for MSPs


Partners frequently ask us to recommend the best VM series and size for their use case, or at least the standard for most deployments. Unfortunately, these are difficult questions to answer due to the many variables that must be considered. However, with a basic understanding of the most common VM series and a few sizing strategies, near-optimal VM allocation is achievable.

In this article, we’ll break down the top three most commonly utilized VM series sizes and explain some key strategies to leverage when sizing new deployments to ensure optimal cost savings.

Most Commonly Utilized VM Series


B-Series

B-series VMs are the minimum we recommend for a test environment. Nerdio deploys all new environments with some B-series VMs. However, this is not necessarily our recommendation for production use. We provision with B-series to help prevent our partners from having a huge Azure bill after their first month in the environment. This is especially important because that first month often consists of configuring and building out the solution, but the client may not even begin migrating into the new environment until after the first month.

B-Series VMs are specifically designed by Microsoft to optimize cost savings. This can be great for your bottom line, but it means they come with some significant limitations.

Burstable CPU Quota & the Credit Bank

The B-series VMs are burstable. This means Microsoft designed them to operate at a baseline (quota). If tasks require more than the base resources, these VMs can burst up to complete the given task. This is managed through a credit bank: each hour, a B-series VM accumulates credits during idle time, and once resources are needed, those credits are spent executing tasks.

This is great for cost savings; the downside comes when the credits run out. When this happens, performance of the VM slows to the baseline, which can feel like a crawl if end users are attempting to leverage the VM at that time. This can happen very quickly if a B-series VM is applied to a pool where users are logged in and working for hours at a time.
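The credit mechanics can be sketched with a toy model. The baseline percentage, starting balance, and credit cap below are hypothetical illustrations; Microsoft publishes the real per-size figures:

```python
# Toy model of a burstable (B-series) credit bank. All numbers here
# are hypothetical; real baselines and credit rates are published
# per VM size by Microsoft.
def simulate_credits(load_per_hour, baseline=0.40, start_credits=0.0, max_credits=10.0):
    """Each hour the VM banks credit while running below its baseline
    and spends credit while bursting above it. Returns the balance
    after each hour; a balance of 0 means performance falls back to
    the baseline."""
    credits = start_credits
    history = []
    for load in load_per_hour:
        credits += baseline - load          # earn when idle, spend when bursting
        credits = max(0.0, min(credits, max_credits))
        history.append(round(credits, 2))
    return history

# Mostly idle, then sustained heavy use exhausts the bank:
print(simulate_credits([0.1, 0.1, 0.1, 0.9, 0.9, 0.9]))
# [0.3, 0.6, 0.9, 0.4, 0.0, 0.0]
```

Once the balance hits zero (hours five and six above), every subsequent busy hour runs at baseline speed, which is exactly the "crawl" users report.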


There are also limitations when it comes to IOPS. For instance, a B2ms VM is capped at 1,920 IOPS. This means that even if a Standard SSD (6,000 IOPS max) or Premium SSD (20,000 IOPS max) is assigned to that VM, it will never be able to utilize the added IOPS capability. We frequently see partners pairing SSD drives with B-series VMs without knowing this. As a result, they waste money, because the SSD cannot improve performance beyond the VM's cap on a B-series. HDD drives have an IOPS cap of 2,000, which means HDD drives are more than enough for the standard B-series VMs.
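The rule is simply that the lower of the VM cap and the disk cap wins. A small sketch using the caps cited above:

```python
def effective_iops(vm_iops_cap: int, disk_iops_cap: int) -> int:
    """A VM can never exceed its own IOPS cap, no matter how fast the
    attached disk is -- the lower of the two limits always wins."""
    return min(vm_iops_cap, disk_iops_cap)

B2MS_CAP = 1920  # B2ms VM-level IOPS cap, as cited above

print(effective_iops(B2MS_CAP, 2000))    # Standard HDD:  1920 (HDD is already enough)
print(effective_iops(B2MS_CAP, 6000))    # Standard SSD:  still 1920
print(effective_iops(B2MS_CAP, 20000))   # Premium SSD:   still 1920
```

The SSD tiers cost more but deliver the same 1,920 effective IOPS on a B2ms, which is why pairing them with B-series VMs wastes money.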


D-Series

The D-series VMs don’t have the limitations of B-series, and their resources are available 100% of the time. We see the D-series VMs used most effectively for servers like FS01, or other LOB servers which leverage resources at a steady rate throughout the day. This includes desktop pools: D-series are effective when applied to pools where users will not leverage large amounts of RAM for their daily tasks. D-series VMs have a 1:4 core-to-memory ratio.


E-Series

The E-series VMs are almost identical to the D-series except they have a 1:8 core-to-memory ratio rather than 1:4. This is good for environments with users who leverage memory-intensive applications, or who like to have several browser tabs open at the same time. We find E-series to be the most common VM series deployed with WVD pools.

Now that we’ve covered the different VM series sizes, let’s talk about use case & deployment strategies.

Use Cases

B-Series Use Case

We most commonly see the B-series VMs applied to the domain controller (DC01). DC01 doesn’t usually perform steady tasks and instead executes bursts of processes throughout the day. As a result, it’s the perfect fit for something like a B2ms. When it comes to WVD pools, we’ve only seen B-series VMs function well with 2-3 very low-level users.

D-Series Use Case 

D-series VMs are most commonly applied to FS01. FS01 is the source for FSLogix and manages several tasks, including mounting user VHDs to the various session hosts, folder redirection, and any changes made in the user's desktop, documents, or favorites folders. For pools with 10+ users, these tasks can add up quickly, and we often see the credit bank exhausted if FS01 is a B-series VM. As a result, we recommend the D-series for FS01 in almost all scenarios. Depending on user count, the D2sv3, D4sv3, D8sv3, or D16sv3 may be appropriate. There isn't a hard ratio to go by when sizing FS01 based on user count, but we've generally found the D2sv3 to work for 10-15 users, D4sv3 for 15-30 users, D8sv3 for 30-60 users, and D16sv3 for 60-100+ users. Again, those are rough guidelines; at the end of the day, consumption on FS01 should be monitored to ensure resources are not under- or over-allocated for the user count.
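Those rough brackets can be encoded in a hypothetical helper (the function name and thresholds below simply mirror the guidelines above; they are not an official sizing API, and real deployments should still be monitored and resized):

```python
# Hypothetical helper encoding the rough FS01 sizing guidelines above.
# These brackets are rules of thumb, not hard limits.
def suggest_fs01_size(user_count: int) -> str:
    if user_count <= 15:
        return "D2sv3"
    if user_count <= 30:
        return "D4sv3"
    if user_count <= 60:
        return "D8sv3"
    return "D16sv3"

print(suggest_fs01_size(12))   # D2sv3
print(suggest_fs01_size(25))   # D4sv3
print(suggest_fs01_size(75))   # D16sv3
```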

E-Series Use Case

As mentioned above, E-series VMs are most commonly seen for desktop pools. Our recommendation, however, is to always test the environment for one to two weeks to make sure the VMs are allocated for optimal cost savings. When quoting E- or D-series, we recommend initially making the quote based on CPU, using something like a 2:1 user-to-core ratio as a starting point. If there are 50 users in an environment, 25 cores should fully accommodate those users. With that in mind, quoting Pool-A with three E8sv3 session hosts would be appropriate, since each E8sv3 provides 8 cores and 64 GB of memory. If it's anticipated those users won't utilize all that memory, then a D8sv3 may be more appropriate; a D8sv3 provides 8 cores and 32 GB of memory. Remember, E-series has a 1:8 core-to-memory ratio while D-series has 1:4.
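The arithmetic can be sketched as follows. Note that rounding 25 cores strictly up to whole 8-core hosts gives four hosts, whereas the quote above accepts three hosts (24 cores) with a slight oversubscription; which you choose is a judgment call:

```python
import math

def pool_session_hosts(users: int, users_per_core: int = 2, cores_per_host: int = 8) -> int:
    """Convert a user count into whole session hosts, rounding up.
    The 2:1 user-to-core ratio and 8-core host are example figures
    from the discussion above, not universal constants."""
    cores_needed = math.ceil(users / users_per_core)
    return math.ceil(cores_needed / cores_per_host)

print(pool_session_hosts(50))   # 4 hosts if you round up strictly
print(pool_session_hosts(48))   # 3 hosts at exactly 24 cores
```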

This brings us into our next section, Deployment Strategy.

Deployment Strategy

This section is mostly related to FS01, Dedicated Desktops, & Pooled Session Hosts. However, the principle here can be applied to other servers as well.

One of the most important takeaways from this section is to NOT purchase Reserved Instances (RIs) until after one to two weeks in the new environment. The reason is that it's hard to know whether the environment is appropriately sized until users are in and working. As an MSP, it doesn't make sense to intricately monitor user habits prior to migrating into the cloud, so it won't be well known what type of resources each user leverages on a daily basis. Given this, everything done to size the environment prior to go-live is just an educated guess. It would be unfortunate to get locked into a 3-year RI only to find out the environment was under- or over-specified and a penalty must be paid to Microsoft to get out of the RI contract.

We all know the end-user experience is king when it comes to solution adoption. As a result, it is critical to make sure the environment is not only fully dialed in with all the necessary applications and software, but that it has also been tested for performance. Our recommendation is, if possible, to log in with 50% to 75% of the users in the environment prior to go-live. Make sure to open any LOB applications users will leverage, along with any web-based applications and the estimated number of browser tabs they may utilize. Be sure the log-in/log-out process is seamless (FS01 sized correctly) and that general performance on the pooled desktops is smooth (session hosts sized correctly). While logged in with users, make sure to either monitor performance with an RMM tool, or log in to each session host and monitor performance via Task Manager. If users are experiencing latency in their sessions, it may show up as the CPU maxing out on the VM (upgrade to a larger VM in the current series) or memory maxing out (if the current VM series is D, upgrade to E). This testing will also make it clear if the VMs are over-allocated: if all users are logged in and the session hosts never spike above 50%, some cost savings can be achieved by moving down a VM size.

Final Thoughts

If you’ve followed this guide you should be equipped to size almost any new environment with confidence. We understand that with all the different VM sizes Microsoft offers it can be a bit confusing and overwhelming at first. Don’t worry, though, after just a few deployments you’ll be quoting new environments with ease.


Read a full article on Azure terminology, hierarchy, and resources here.

Reserved Instance (RI) – An RI is basically Microsoft’s way of anticipating (as best as possible) the resources that will be utilized in their data center in a given month. As a result, they provide large incentives (sometimes up to 57% off) if partners are willing to commit to specific resources for 1 or 3 years. In the past, the RI was paid up front as a lump sum. However, around the end of 2019 they updated their offering and now RIs can be paid on a monthly basis. This allows for a much smaller up front commitment.

In the event the RI needs to be terminated, Microsoft requires a payment worth 12% of the remainder of the contract. So, using a hypothetical scenario: say an RI was purchased for 3 years at a total cost of $300. After 2 years, it becomes necessary to move away from that RI. In this scenario, only $100 would be left on the contract, and given the 12% termination fee on that remainder, only $12 would be owed.
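The same calculation as a small sketch (the fee rate and contract figures are the ones from the hypothetical scenario above):

```python
def ri_termination_fee(total_cost: float, months_total: int,
                       months_elapsed: int, fee_rate: float = 0.12) -> float:
    """The early-termination fee is a percentage of the unused
    (pro-rated) remainder of the RI contract."""
    remaining = total_cost * (months_total - months_elapsed) / months_total
    return remaining * fee_rate

# $300 over 3 years (36 months), terminated after 2 years (24 months):
print(ri_termination_fee(300, 36, 24))   # 12.0 -- 12% of the $100 remainder
```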

One last thing to know about RIs: they can be exchanged across Azure regions and VM series without penalty. As an example, if a D2sv3 RI was purchased and it became necessary to upgrade the VMs in a pool to D4sv3s, you could simply purchase a second D2sv3 RI and the two together would equate to a D4sv3. In the same way, four D2sv3 RIs could go toward an RI for an E8sv3, even if the E8sv3 was in a different Azure region than the four D2sv3 RIs.

Azure Hybrid Usage/Benefit (AHU) – AHU is when the partner purchases the operating system (OS) license rather than renting it from Microsoft. When AHU is leveraged, an additional 20 to 30% savings can be achieved versus the standard pay-as-you-go monthly price. When you combine AHU and RI, the cost savings can be up to 80%. This often runs into the tens of thousands of dollars when scaled out over 3 years.

The only exception to this is when dealing with B-series VMs. Microsoft has made these so cheap that purchasing the OS license for them would actually cost more in the long run. As a result, it's cheaper to just leave these as is and rent the OS on a monthly basis from Microsoft.