Nerdio Manager for Enterprise Case Study: Newfoundland and Labrador Centre for Health Information

Case Study

Learn how an information technology and eHealth service company deployed 1,700 remote desktops for critical healthcare workers in just five days using Nerdio Manager for Enterprise.


About Newfoundland and Labrador Centre for Health Information

Newfoundland and Labrador Centre for Health Information (NLCHI) supports healthcare organizations across the province with IT services. In response to COVID-19, the organization had to enable remote work quickly. In five days, NLCHI deployed Azure Virtual Desktop (AVD) to 1,700 healthcare workers, who connect to their Windows desktops and apps with their own devices. Using Windows 10 Enterprise multi-session, the organization saves compute costs by enabling 32 users to connect to a session host instead of just two. 

The Newfoundland and Labrador Centre for Health Information (NLCHI) provides quality information to health professionals, the public, researchers, and health system decision-makers in Newfoundland and Labrador in Canada. Through collaboration with the health system, NLCHI is helping build a smarter, more connected healthcare system by developing data and technical standards, maintaining key health databases, and supporting health research. The province is divided into four regional health authorities (RHAs): Labrador-Grenfell Health, Central Health, Western Health, and Eastern Health. 

NLCHI’s staff of 175 supports 20,000 healthcare workers with solutions such as hospital information systems, an electronic health record (EHR) system that allows authorized healthcare providers to securely access essential patient data, and an electronic medical record (EMR) program that is digitizing clinician offices across the province. All of these machines and roles must be maintained, patched, and kept free of viruses.

Moving to remote work and the cloud amid the COVID-19 pandemic

In early 2020, roughly 400 of the province’s 20,000 healthcare workers regularly worked remotely using VPN connections and corporate laptop computers. Thanks to its e-health focus, NLCHI had recently established a single Microsoft 365 tenant for itself and the four RHAs, including Microsoft Teams. The rest of its IT infrastructure was mostly on-premises.

In early March 2020, when the province confirmed its first cases of COVID-19, NLCHI had to respond quickly to keep healthcare services running. At the same time, it had to configure new COVID-19 intensive-care unit (ICU) wings and deploy solutions in long-term care facilities (NLCHI provided 500 iPads and various apps to help patients connect with family and health providers). Robert Drover, Director of NLCHI, says, “Case counts and admissions accelerated overnight. It was a critical situation and we needed to enable remote work for as many workers as possible; staff responsible for acquiring personal protection equipment are just as essential as ICU physicians and nurses to keep services running.”

Fast deployment, cost savings, and minimal training requirements 

NLCHI contacted Microsoft, which suggested Azure Virtual Desktop for providing remote access. Rodney Keough, Data Center and Unified Communications Manager at NLCHI, recalls late-night and early-morning calls with Microsoft to determine how to set the service up for NLCHI and the four RHAs with their different requirements. “Robert contacted me on a Sunday night. We built out the main controllers in Microsoft Azure by Thursday morning, when we brought on the first pilot group from Eastern Health. In five days, we had about 1,700 people using the new Azure Virtual Desktop platform, with peak usage at 3,700 people.”

To reduce its resource requirements, NLCHI used Windows 10 Enterprise multi-session, a Remote Desktop Session Host that allows multiple concurrent interactive sessions. 

NLCHI created materials explaining how employees could access their remote desktops by using their own personal devices. Keough says, “Beyond creating an announcement email and a couple of support documents, there was no more training required. The experience is intuitive, just like the desktop workers are already used to.”

NLCHI also worked with Nerdio to set up Nerdio Manager for Enterprise — an enterprise solution to help automate management, optimization, and security of Azure Virtual Desktop deployments. The organization used Nerdio Manager for AVD, which works with NLCHI’s four Azure Active Directory P1 deployments, to create an image for itself and for each RHA, and automatically deploy them to each domain. “We also deployed servers with credentials and connected them to Azure file stores quickly using the Nerdio interface,” says Keough.

To make its Azure spend more efficient, the organization uses Azure Reserved Virtual Machine Instances to manage costs across predictable workloads. It also uses auto-scaling in Nerdio Manager for Enterprise to handle its fluctuating needs (at the end of each workday, the organization scales down from 30 servers to just one). “The ability to scale down automatically helps us save on compute costs for Azure; that’s not something that was available in our traditional datacenter model. We built a sustainable solution that’s fiscally responsible and will help us recover some of its costs,” says Keough.
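The scale-down pattern described above (30 servers during the workday, one overnight) lends itself to a quick back-of-the-envelope model. Everything in the sketch below is illustrative: the hourly rate, workday length, and host counts are assumptions for the example, not NLCHI’s actual figures.

```python
# Illustrative sketch: estimate weekly compute savings from scaling a
# 30-host pool down to a single host outside business hours.
# The hourly rate and schedule are invented for the example.

HOURLY_RATE = 0.40          # assumed pay-as-you-go cost per host-hour (USD)
PEAK_HOSTS = 30             # hosts running during the workday
OFF_PEAK_HOSTS = 1          # hosts left running overnight and on weekends

WORK_HOURS_PER_WEEK = 5 * 10            # assumed 10-hour workdays, Mon-Fri
TOTAL_HOURS_PER_WEEK = 7 * 24
OFF_HOURS_PER_WEEK = TOTAL_HOURS_PER_WEEK - WORK_HOURS_PER_WEEK

def weekly_cost(peak_hosts: int, off_peak_hosts: int) -> float:
    """Host-hours consumed in a week, priced at the PAYG rate."""
    peak = peak_hosts * WORK_HOURS_PER_WEEK
    off_peak = off_peak_hosts * OFF_HOURS_PER_WEEK
    return (peak + off_peak) * HOURLY_RATE

always_on = weekly_cost(PEAK_HOSTS, PEAK_HOSTS)     # no auto-scaling
auto_scaled = weekly_cost(PEAK_HOSTS, OFF_PEAK_HOSTS)

print(f"Always-on:   ${always_on:,.2f}/week")
print(f"Auto-scaled: ${auto_scaled:,.2f}/week")
print(f"Savings:     {1 - auto_scaled / always_on:.0%}")
```

With these assumed numbers, roughly two-thirds of the weekly compute spend disappears simply because idle hosts stop being billed.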

“Instead of two users per CPU, we can enable 32 users to connect to one session host, and all get equal performance and a full desktop experience… It cuts our costs by a factor of 30.” Rodney Keough: Data Center and Unified Communications Manager, Newfoundland and Labrador Centre for Health Information

Flexibility, security, and a new direction for IT

The organization benefits from the flexibility it gained around devices. Keough says, “We don’t have to secure workers’ computers with encrypted drives and security updates because the devices are just acting like a thin client, in the sense that they’re providing the connection to our virtual desktop infrastructure.” Drover points out that while NLCHI doesn’t anticipate increased funding or resources, expectations from and requirements for its IT services keep growing. “Azure Virtual Desktop and Nerdio Manager for Enterprise will help us automate and improve our services quickly to keep up with demand,” he says.

NLCHI sees virtualization as an opportunity to rethink how it procures and delivers devices—and provides a desktop experience for workers. The organization won’t have to image new computers for remote workers, and it can focus more on identity management and security. Workers won’t need different usernames and passwords for various applications but will instead have a single Microsoft identity that IT staff can manage across the environment. Keough says, “Next, we can look at using Microsoft Intune mobile device management across the entire organization like we’re doing for the 500 new iPads.”

The organization is also looking into migrating its entire remote desktop environment to Azure Virtual Desktop, and using the service for application delivery. “Azure Virtual Desktop enlightened us about the power of Azure—we’re looking at it as part of our data center portfolio, and we’re evaluating more application-level and server-level workloads to migrate to the service,” says Keough.

Drover sees the Azure Virtual Desktop deployment as changing the organization’s strategy around what it can achieve. “We’ve advanced our capabilities quickly, achieving in nine months what would have taken us 5 or 10 years to do previously. We’re seeing intrinsic benefits that make us more effective, efficient, and responsive,” he says.

In summary, Keough says, “We chose Azure Virtual Desktop paired with Nerdio Manager for Enterprise because of the close collaboration and trust that we have with Microsoft. The companies have the best interests of our organization and our patients, clients, and residents in mind. Our workers can use their own devices to access internal resources while we still maintain our security principles. The fit is phenomenal when it comes to performance and flexibility.”

“We’ve advanced our capabilities quickly, achieving in nine months what would have taken us 5 or 10 years to do previously. We’re seeing intrinsic benefits that make us more effective, efficient, and responsive.” Robert Drover: Director, Newfoundland and Labrador Centre for Health Information

Find Nerdio in the Azure Marketplace: download the application today and begin a free 30-day trial.

Notes From the Field (CTO): What All Organizations Have in Common

Throughout the last year, I have spoken to more than 250 different partners and customers. Not only has it been a lot of fun, it has also been very educational. I have worked with many smart and fun people, learned a lot, and had the chance to view dozens, if not hundreds, of different (Windows) Virtual Desktop environments.

Not just the ones built and managed with Nerdio, no; most were already in place and being used for test or production purposes. However, almost all of these organizations had something in common: they were all looking for a better, easier, and more efficient way to manage and optimize their Azure and WVD environments, which inevitably led them to Nerdio.

75% of the companies I talk to each week already have an existing WVD deployment in place. It is no coincidence that they end up with Nerdio, as they all seem to have the same challenges – see “The (business) risk” section a bit further down as well:

  • The lack of Azure knowledge within the company. Azure can be overwhelming and WVD comes with a steep learning curve
  • Lots of manual (PowerShell) tasks are involved
  • Getting things automated is a tough task, if at all possible. It takes up a lot of time and it’s definitely not for everybody.
  • Once automation is in place, it can be hard to maintain and update
  • Companies worry about knowledge leaving their company
  • Ongoing user management is a challenge
  • Management of applications / images is a cumbersome process and takes too long
  • Monitoring is hard to set up, alerting is not possible or is limited
  • Azure and WVD can be (very) expensive if you do not know what to look out for and lack proper tooling to help you. In other words: how do you save time and money and take full advantage of all that the cloud has to offer?
  • A big feature gap. Many companies think WVD is not production-ready and lacks enterprise-grade features and functionality

These are all valid concerns and challenges that we run into (and solve) daily. In short, getting things set up is “just” step one; it’s the other 80-90% they worry about.


Automation, in different forms and shapes, is great, and I highly encourage all technically oriented IT professionals to get acquainted with it in some form, or even deep-dive right into it. However, you also need to be realistic and look at it from a business perspective.

The companies who do have this “down” often lack time to keep everything updated and running smoothly. They are too busy dealing with the day-to-day operations and putting out fires, as they say. While this is nothing new, it’s also something that won’t change overnight, as it’s in the nature of our profession. While this is accepted, or acceptable to most, it also poses a problem or a challenge, at the very least – unfortunately, you can only spend your time once.

Getting real hands-on experience also takes a lot of time. As highlighted above, automation and DevOps can be tricky to maintain, and it’s definitely not for everybody. If it were easy, everybody would do it, and I can tell you from personal experience that is definitely not the case.

Tech Savvy

When it comes to IT professionals, you roughly have two kinds: the tech-savvy “I really like to dig in” kind of sysadmin (at Nerdio we have a lot of those), and the more all-round and laid-back “I don’t want to spend my free time getting to know (new) technology” kind of sysadmin, which is perfectly fine and makes a lot of sense.

The latter doesn’t mind digging in a bit and staying current in general but has no interest in going all in all the time. He or she likes it when things are automated for them. They like to work with to-the-point solutions and easy-to-use graphical user interfaces – be that for WVD or any other type of service/technology, like VDI in the past, for example.

They’ll keep their certifications up to date but won’t become an automation, PowerShell, or scripting guru anytime soon. It’s a job they love, it gives them fulfillment, but that’s it. This will apply to 80% of the people in IT, give or take. 

The tech-savvy sysadmin likes to deep-dive whenever he or she can. They’ll probably blog about it, help others on various forums and social media, contribute to community programs, and so on – work becomes a hobby, or vice versa, in many cases. It isn’t for everyone; far from it, even. And that’s OK, too. 

Risky Business

Given the above, you could say that this also poses a risk for companies – both those who are using WVD/Azure or are thinking about doing so, and those who are not.

Most companies will have a couple of tech-savvy senior system administrators employed. Up to a certain point they will be able to automate and streamline deployments, script image updates, use various kinds of DevOps methodologies, and so on.

They will have it running as smoothly as possible, they will handle all code, templates, updates, daily management tasks – you name it. 

However, what happens when something needs to be updated or breaks, and no one is available for some reason? Maybe they are “putting out fires” and are needed somewhere else? Perhaps they are spending time with friends and family and are not reachable? What if they leave your company or, even worse, become ill for a longer period of time (let’s hope not)?

The saying “putting all your eggs into one basket” comes to mind. 

Educating and training other employees isn’t that straightforward. Doing your “job” because you are asked or told to (even though you may still love it) is something completely different from doing it out of passion – especially if it doesn’t come naturally or from within.

What about hiring in the expertise? Sure, but contractors won’t know your company the way you or your (former) employees do or did, they are often much more expensive, and eventually they will leave as well. A vicious cycle.

Managing IT resources, WVD or otherwise, should not require “special” skills or take a long time to learn – I think most companies will agree. If anything, it should reduce your time to market, help make you (more) money, and free up time to put out those fires and/or come up with a more permanent, future-proof solution – preferably by taking advantage of cutting-edge technology supercharging your Azure/WVD deployments.

The goal should be to relieve individuals from unneeded stressful situations, spread the workload evenly across your team, do more with less, and save hours per week on common day-to-day management tasks, creating room for other, more interesting, fun, or important types of tasks.


At Nerdio, we are all about giving you back some of your valuable time while saving on WVD/Azure compute and storage resources, big time.

In fact, throughout the last couple of months, we have been spending quite some time talking to customers and partners to get a good perspective on how we help save companies time and thus, money – the proof is in the pudding, as they say.

I won’t go into any technical details at this point and will let the graph speak for itself. Of course, these numbers are debatable and might differ per company, individual, etc. It could be that even more time is saved, or perhaps a bit less in some cases. The point is, by using Nerdio, the life of your WVD/Azure admin will become much more efficient and less stressful.

Because of our intuitive approach, we enable just about anyone within your company who has basic Azure knowledge to build, cost-optimize, and manage Windows Virtual Desktop environments on Microsoft Azure on a daily basis.

As always, don’t just take my (biased) word for it. Instead, try it for yourself. Need some help in setting things up? Let us know and we’ll jump on a call. Do you have any other questions? No problem, we are here to help.

Thank you for reading, talk soon!

Bas van Kaam

Nerdio Field CTO, EMEA

How Consistent Cloud Management Drives Workload Optimization

Note – Microsoft announced the rebrand of Windows Virtual Desktop (WVD) to Azure Virtual Desktop (AVD) in June 2021.

Information technology and “the cloud” have no shortage of buzzwords and acronyms. String enough abbreviations together and even the best of us risk losing parts of a conversation. Consistency in the management of cloud solutions is critical to getting optimal performance from Azure Virtual Desktop (AVD) workloads in Azure. There are many paths to “good enough” and “works for the most part,” but “optimal” demands nothing but the very best of everything. While we can’t cover every topic needed to get there, we will cover many fundamentals that, when done really well, lead to a level of optimization. Much like abbreviations, string enough of these together and you will have, in every sense of the words, workload optimization.

Whether you have vast experience or are getting started with AVD, the topics we discuss will either reinforce current methods or create learning and application opportunities.  In this article we will cover AVD pool management, Microsoft Azure resources, auto-scale, and user profile management.   

AVD Pool Image Management

While there are several aspects to an Azure Virtual Desktop pool, the focus here is on the best-practice, optimized use of image templates when orchestrating hosts within an AVD pool. The building blocks for properly managing a template can be broken into three distinct categories.

  1. Windows 10 Enterprise Virtual Desktop (EVD) operating system and updates
  2. Microsoft 365 and common applications and updates
  3. Common line-of-business applications (LOBs)

I will break each of these down as they have a direct impact on workload optimization when management is consistent and measurable.

Windows 10 EVD

While tailored specifically for desktops in Azure, Windows 10 EVD is simply recognized as Windows 10 Enterprise and follows the best practices and management methods that have been used for years. Feature updates are released twice a year, while quality updates arrive on the second (and sometimes fourth) Tuesday of the month (Patch Tuesday). It is this cadence that, when managed well, optimizes a host of different factors around the OS: performance, security, end-user experience, etc.

Feature updates should always be taken as an opportunity to test before deploying fully to the end-user population. Nerdio often recommends cloning a current pool and performing the update on the clone. Here, testing can be performed and can eventually include a small segment of the end-user base for further validation. Once the new OS is accepted as viable, we often recommend moving users to the newly cloned pool, which can be crafted to include the appropriate number of hosts to satisfy the capacity of the user base. Once the users are assigned to the new pool, the former pool with the older OS features can be destroyed. Nerdio has several optimization features that can be used to accelerate the process of assigning users to the new pool.
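The clone-test-migrate flow above can be sketched in a few steps. The `Pool` object and helper functions below are hypothetical stand-ins for illustration only; a real deployment would drive this through Nerdio Manager or the Azure APIs, and the build numbers are examples.

```python
# Sketch of the clone-test-swap image update flow: clone the pool on the
# new OS build, pilot it, then migrate users and retire the old pool.
# Pool, clone_pool, and migrate_users are hypothetical helpers.

from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    os_build: str
    users: list = field(default_factory=list)

def clone_pool(pool: Pool, new_build: str) -> Pool:
    """Create a copy of the pool on the new OS feature build for testing."""
    return Pool(name=f"{pool.name}-clone", os_build=new_build)

def migrate_users(src: Pool, dst: Pool) -> None:
    """Once the new build is validated, reassign users to the new pool."""
    dst.users.extend(src.users)
    src.users.clear()

prod = Pool("desktop-pool", os_build="21H2", users=["alice", "bob"])
candidate = clone_pool(prod, new_build="22H2")

# ... pilot testing would happen on `candidate` here ...

migrate_users(prod, candidate)   # move everyone to the validated pool
# `prod` (old OS build) can now be destroyed to stop paying for it
print(candidate.users)
```

The design point is that the old pool stays untouched until the clone is proven, so a bad feature update never reaches the full user base.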

General Patch Tuesday updates should be met with some scrutiny before being applied to the templates for an AVD host pool. Security updates, as per best practice, should be vetted and applied first, with the additional updates scrutinized for level of impact and utility. Once updates have been applied, some testing should occur in the current pool. After acceptance, new hosts can be created to pick up capacity while the older hosts naturally drain users and are eventually destroyed. In some cases, a pool can sit out a cycle if there are no major security or feature patches. Establishing a cadence to manage templates is recommended and can be a scheduled part of planned maintenance. Thirty days would certainly be the minimum and would allow the normal Microsoft cadence to be fully observed. Microsoft’s feature updates have a published schedule and can be met with some level of readiness and planning to vet against any standards and methods. It is this consistency that also allows for higher probabilities of success in other managed environments in terms of supported operating systems.

Microsoft 365, LOBs and FSLogix Tools

This is a great segue from the operating system into the licensing that allows end users to access AVD resources. Enter M365: M365 is a license that satisfies the AVD Windows 10 Enterprise requirement while also including Office Business or ProPlus. Given the ease of licensing and the bundles it includes, having Office installed on the template is an easy decision. This is one application that makes perfect sense to have installed, and it can be kept current with simple updates when the Semi-Annual Channel installation is used. The Semi-Annual Channel (Targeted) is also an option; however, Microsoft’s best practice and recommendation is that this channel be used by only a fraction of the total end-user base. Semi-Annual is easy to maintain due to its frequency and can become a staple on the image templates for AVD host pools.

While Office Business has typically been a single-session application, M365 Business Premium does entitle multi-session capabilities via some minor registry changes. With that in mind, having consistent licensing will ultimately influence the delivery and installation of the Office suite of applications. In rare instances, and in accounts that have many session hosts in a pool, having two Office versions is possible; however, it does add some complexity to the assignment of licenses and users to AVD host pools. In this instance, less (options) is more, making AVD environments easier to manage and scale.

Line-of-business applications are the one item that is more specific to the customer and less about the standards and support items specific to the MSP. LOBs can be installed on a pool image template with the users of those applications spread across multiple pools by LOB application. An example would be general users on Pool-A, while the accounting group is on Pool-B – where the accounting application is installed. Based on user concurrency numbers, this setup is not optimized, as the resources needed to accommodate multiple pools do not scale and can carry additional unnecessary costs. The remedy preserves consistent pool template management by installing all of the applications on the template, grouping everyone together in the same pool, and using FSLogix application masking to show applications only to users belonging to a specific group. In this optimized configuration, pool resources are scaled to the full user population, the template is optimized for easier management, and users see only the applications they need to see based on group membership.

Another service, MSIX app attach, as it gains popularity and resources (vendors providing MSIX packages), will ultimately lead to better and more efficient application management. Applications will be able to leverage centralized management (consistent and current delivery) and will be made available per user or group, delivering applications dynamically. MSIX app attach became available in Windows 10 Enterprise version 2004.

Resources and Auto-scale

We have progressed from the operating system and applications and will now discuss the underlying resources and virtual machines. Anyone who has ever reviewed all of the available VM series in Azure could be overwhelmed, to say the least. Fortunately, when it comes to delivering and managing VMs in AVD host pools, there are consistent VM series that are staples in every environment. Here we will scale down the options and make managing pools that much easier.

Microsoft has broken down VM sizes and series by groups.  These intuitive groups are as follows:

  • General purpose
  • Compute optimized
  • Memory optimized
  • Storage optimized
  • GPU – accelerated compute
  • FPGA – accelerated compute
  • High performance compute

In a standard and flexible AVD pool, we will focus on the sizes appropriate for multi-session host pools.  This would include general purpose, memory optimized, and GPU accelerated.  Everything else is well suited for databases, application serving, AI, etc. 

Within those three groups, we can start to break down what is useful and appropriate and what is not.

  • General Purpose
    • A-series – used for dev ops, with no ability to use reserved instances; highly under-resourced
    • B-series – used for light workloads where compute and memory consumption are consistent and there is no direct interaction with end users; the first hour after boot will have throttled CPU, and credits need to build up to burst to full CPU utility
    • D-series – a series with a core-to-memory ratio of 1:4; a great series to use with RemoteApp pools and a good series for desktops where memory-intensive applications like internet browsers are minimized or have good memory consumption and management
  • Memory Optimized
    • E-series – a series with a core-to-memory ratio of 1:8; the added memory is a fraction of the cost of upgrading a VM to a higher series that adds both CPU and memory, great for high memory consumption where applications and internet browsers compound over time
    • M-series – boosted memory for database applications where cache can be leveraged to optimize DB performance
  • GPU Accelerated
    • NC and ND – optimized for serving applications that can leverage GPU processing and machine-learning applications; application servers
    • NVv4 – NV, NVv3, and NVv4 are optimized for remote visualization where applications can leverage GPU processing (design, engineering, 3D, etc.). NV and NVv3 use Intel and NVIDIA, while NVv4 uses AMD and Radeon.

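The series breakdown above boils down to a simple rule of thumb: the memory each user's session needs drives the choice between the 1:4 D-series and the 1:8 E-series, with NV reserved for GPU visualization. The toy selector below encodes that rule; the thresholds are illustrative assumptions, not an official sizing formula.

```python
# Toy VM-series selector based on the core-to-memory ratios above.
# The 4 GB-per-user threshold is an illustrative assumption only.

def pick_series(mem_gb_per_user: float, needs_gpu: bool = False) -> str:
    """Suggest a VM series for a multi-session AVD host."""
    if needs_gpu:
        return "NV"      # remote visualization (design, engineering, 3D)
    if mem_gb_per_user > 4:
        return "E"       # memory-heavy sessions suit the 1:8 ratio
    return "D"           # typical desktop/RemoteApp suits the 1:4 ratio

print(pick_series(2))                   # light desktop sessions
print(pick_series(8))                   # browser-heavy, memory-hungry users
print(pick_series(2, needs_gpu=True))   # CAD/3D visualization users
```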
With the VM size groups defined, we are left with the D, E, and NV series as the clear front-runners when provisioning hosts within a pool. Paring down the options within those series by core count further reduces the choices, as 4-, 8-, and 16-core machines are the ideal configurations for distributing users across multiple hosts. Distributing users across multiple hosts reduces end-user impact for an organization if a host has problems: better to have 15% of users with issues than 50%, as would happen if a larger VM allowing more user sessions were used. Distributing users and having clear boundaries for host capacity is particularly important in a consistently managed AVD offering.
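The "blast radius" argument above is just arithmetic: with users spread evenly, the share affected by one failing host is roughly one over the number of hosts. A quick sketch, with made-up user counts:

```python
# Illustrative blast-radius math for spreading users across hosts:
# if one host fails, roughly 1/number_of_hosts of users are affected.

def affected_share(total_users: int, hosts: int) -> float:
    """Fraction of users impacted when one host has problems,
    assuming users are distributed evenly across hosts."""
    users_per_host = total_users / hosts
    return users_per_host / total_users

# 120 users on two large hosts vs. eight smaller ones (numbers are made up)
print(f"2 hosts: {affected_share(120, 2):.1%} of users hit by one failure")
print(f"8 hosts: {affected_share(120, 8):.1%} of users hit by one failure")
```

The trade-off is more hosts to manage in exchange for a much smaller worst-case outage, which is why mid-sized 4- to 16-core machines tend to win over one very large host.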

Now that we have narrowed down the potential VM series for an AVD host pool, we will step out of the technical and move to the economics of those resources. Managing costs is equally important. Since Azure billing is a factor of metered consumption, any ability to minimize those costs while preserving end-user performance is ideal. There are two items that allow for the dynamic nature of a host pool (especially pools for a larger user base): leveraging both Reserved Instances and auto-scaling is a great way to maintain performance while optimizing the economics and resources.

The default metering within an Azure environment is pay-as-you-go (PAYG). Time is metered, and Azure bills at the hourly rate of the resource. To create regional capacity predictability, Microsoft values knowing which resources have a committed purchase and offers a discount in exchange for that commitment in the form of Reserved Instances (RI). Reserved Instances can be applied to any of the VM series highlighted above. Over a 1- or 3-year term, providers can pay monthly for committed resources at a significant discount. Simply stated, a Reserved Instance is the purchase of resources over a committed period: 1 or 3 years.

While purchasing RIs comes with a significant discount, in many cases the variability of end-user concurrency will offset the savings. An example would be RIs for 10 8-core AVD session hosts where only 20 people out of 100 log in that day. The host capacity exceeded the demand and could easily have been turned into savings. This is where managing with a hybrid approach that uses auto-scaling and RIs will optimize resources and economics. To extend our prior example, in a hybrid approach, having RIs on 2 hosts with the remaining 8 on auto-scale would optimize end-user performance and economics at the same time. As end-user demand changes, auto-scale provisions or boots PAYG resources to meet the demand. When the demand is removed, the resources are removed or deallocated and the PAYG meter stops. Having both RIs and auto-scale applied to hosts in a pool will optimize the end-user experience while also providing substantial cost savings.
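The hybrid approach can be put in rough numbers. In the sketch below, the PAYG rate, the RI discount, and the busy-hours figure are all invented for illustration; the 2-RI / 8-PAYG split mirrors the example above.

```python
# Rough cost model for hybrid RI + auto-scale: a small baseline of
# Reserved Instance hosts runs 24/7, while auto-scale adds pay-as-you-go
# hosts only during busy hours. All rates here are assumptions.

PAYG_RATE = 0.40        # assumed PAYG cost per host-hour (USD)
RI_DISCOUNT = 0.40      # assumed ~40% Reserved Instance discount
RI_RATE = PAYG_RATE * (1 - RI_DISCOUNT)

HOURS_PER_MONTH = 730

def monthly_cost(ri_hosts: int, payg_hosts: int, payg_hours: int) -> float:
    """RI hosts are billed for every hour; PAYG hosts only while running."""
    ri = ri_hosts * HOURS_PER_MONTH * RI_RATE
    payg = payg_hosts * payg_hours * PAYG_RATE
    return ri + payg

# 10 hosts reserved and always on, vs. 2 reserved hosts plus up to 8
# PAYG hosts that auto-scale runs for an assumed ~200 busy hours a month
all_ri = monthly_cost(10, 0, 0)
hybrid = monthly_cost(2, 8, 200)
print(f"All-RI:  ${all_ri:,.2f}/month")
print(f"Hybrid:  ${hybrid:,.2f}/month")
```

Even with a healthy RI discount, paying for idle committed capacity loses to a small reserved baseline plus metered burst capacity whenever concurrency is variable.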

User Profile Management – FSLogix 

While FSLogix is an application used to manage user profiles, its role in consistent management to optimize AVD workloads is worthy of mention. Where proper management comes into play is around the VHDX profile disks and the storage requirements needed to ensure end-user performance.

Managing profile size has a direct impact on the performance of a user’s session. The current default and best practice is a maximum size of 30GB. Having methods and procedures to monitor and/or manage profile disks, and the users approaching the limit, will help avoid issues and optimize the end-user experience. While the virtual disk can certainly be expanded beyond the default, the better solution is to follow the defaults for consistency and identify the elements causing profile bloat. On average, a user profile will typically be between 12 and 15GB.
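A monitoring procedure like the one described can start as something very simple: scan the profile share for VHDX files approaching the 30GB default limit. In the sketch below, the share path and the 80% warning threshold are assumptions for illustration; a real setup would feed this into whatever alerting is already in place.

```python
# Sketch: flag FSLogix profile disks approaching the 30 GB default limit.
# The share path and warning threshold are illustrative assumptions.

from pathlib import Path

LIMIT_GB = 30          # FSLogix default maximum profile size
WARN_AT = 0.8          # flag profiles at 80% of the limit

def oversized_profiles(share: Path, limit_gb: int = LIMIT_GB,
                       warn_at: float = WARN_AT) -> list[tuple[str, float]]:
    """Return (file name, size in GB) for profile disks near the limit."""
    flagged = []
    if not share.exists():
        return flagged
    for vhdx in share.rglob("*.vhdx"):
        size_gb = vhdx.stat().st_size / 1024**3
        if size_gb >= limit_gb * warn_at:
            flagged.append((vhdx.name, round(size_gb, 1)))
    return flagged

if __name__ == "__main__":
    # hypothetical profile share path
    for name, size in oversized_profiles(Path(r"\\fileserver\profiles")):
        print(f"{name}: {size} GB (limit {LIMIT_GB} GB)")
```

Catching a profile at 24GB leaves time to investigate the bloat before the user hits the hard limit and their session starts misbehaving.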

Much like keeping virtual profile disks within thresholds, the same applies to the storage hosting those profiles. Throughput and IOPS are crucial to creating the expected end-user experience. As the saying goes, “you can’t manage what you can’t measure.” This raises the importance of having a reliable cadence for checking the performance of the storage being used, whether managed disks or Azure Files; monitoring and validating proper thresholds and measures will surface performance issues before they manifest into bigger problems.

Why Nerdio?

We have covered a lot of ground here when it comes to consistent management to optimize workloads. We have looked at both technical and economic factors that, when handled consistently, will lead to positive results and customer/end-user satisfaction. While many of these concepts are applicable in and around Azure natively, it is worth noting that the core of Nerdio is, and will continue to be, rooted in optimization. Nerdio products orchestrate many of the concepts covered in this article, providing reliable and efficient means to achieve the outcomes expected when consistent management is employed. The flexibility and ease with which Nerdio products align with procedures and policies makes them a good fit and adds accountability when meeting the demands of customers.

Try Nerdio Manager for Enterprise out today for free!

10 Most Common Azure Mistakes Made by IT Professionals

In this article, we are going to focus on the top 10 most common mistakes we see our partners make in Microsoft Azure. Let’s jump right in: 

1. Selecting Non-optimal VM Sizes for Servers and Session Hosts 

There are many use cases for virtual machines (VMs) in Azure. Typical roles include domain controllers, file servers, application servers, database servers, remote desktop session hosts, and Windows Virtual Desktop (WVD) session hosts. 

It is very common for someone unfamiliar with VM families and SKUs to pick any VM size with a similar core count and memory to what they need. However, there is a big difference, for example, between a D2sv3 and a DS3v2. Although VM SKUs may look similar, perhaps even identical in core count and memory, it is important to understand the differences and pick the right one. Picking a non-optimal VM size can cause pricing ramifications, degraded performance, or sometimes both. 

Domain Controllers  

For domain controllers, it is very common to use a B-series machine since these machines provide significant value and will give you the performance a typical domain controller needs.  

File Servers  

For file servers, this can be quite tricky, as CPU, core count, and memory aren’t the only things to consider. Picking the right storage type and size is equally important when optimizing performance on a file server (more on this in number 3 below). A typical VM size to select might be a D2asv4, or a DS3v2 for larger premium disks. 

Application Servers  

For application servers, referring to the recommended system requirements from your vendor is your best bet. Common VM families used here are the DASv4 or EASv4 types. There is also a difference between hyper-threaded and non-hyper-threaded cores. For example, the DASv4 family uses hyper-threaded cores while the DS2_v2 does not. Performance on the DS2_v2 would be better since its cores perform like physical cores rather than virtual cores. Checking with your application vendor to see what they recommend is the right thing to do. 

Session Hosts 

For AVD session hosts or RDS servers, it’s a good idea to use a machine with a higher CPU core count to allow some room for bursting. It is also a good idea, though not absolutely required, to use an E-series machine. E-series machines have double the memory for only about 15% more cost. The extra memory comes in handy if you have users who keep a lot of browser tabs or Office documents open. Even NV-series VMs can offer a performance boost: NV VMs have a GPU attached, which can offload some work from the CPU and let you put more users on a session host. 

2. Using a Deprecated Virtual Machine Family 

This topic applies to Azure environments that have been around for a while. When an environment has been in Azure for any length of time, it is common to see it running on the Azure Classic platform rather than the modern Azure Resource Manager (ARM) model. When we see that, there is a high likelihood that the VMs were configured a long time ago and no maintenance has been done to resize them to use modern hardware. Azure does deprecate VMs over time, either by no longer offering them or by increasing their cost, which incentivizes you to resize to a more modern, better-performing VM that often costs less. 

If you are inheriting or reviewing an Azure environment that was built a few years ago, you may find VMs running on older VM SKUs. It is a good idea to resize them to current VM SKUs. You’ll see much better performance, likely at a much lower cost. A win-win situation! 

3. Using Premium SSDs on VMs That Can’t Handle the Full Potential of the Disk 

Oftentimes, when reviewing a quote or build that a partner brings to us, we see premium SSDs used everywhere. While premium SSDs are best in class in terms of speed and SLA, it is also important to consider the VM SKU being paired with the premium disk. Not all VM sizes can take full advantage of the premium disk you give them. If you look at Microsoft’s premium SSD documentation, you will notice that the larger the premium disk, the more IOPS and MB/s throughput that disk is capable of. However, what most people don’t know is that each VM SKU can also only handle a maximum IOPS and MB/s throughput. This means that if you assign a very large premium disk (let’s use a 4TB premium SSD at 7,500 IOPS as an example) and pair it with a D2sv3 VM, the VM documentation shows that the VM’s disk throughput maxes out at 3,200 IOPS. The VM would never be able to take advantage of the full capability of that premium disk, and you are therefore wasting money if higher performance is what you are looking to achieve. 

Make sure you select a VM that is properly sized to take full advantage of the premium disks you assign to it by picking a VM whose IOPS and MB/s throughput limits exceed the combined limits of all the disks assigned to that VM. 
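That pairing rule can be sketched as a quick check: the combined disk limits are capped by the VM’s own limits. The caps used in the example are taken from the 4TB premium SSD scenario above; confirm current values against Microsoft’s VM and disk documentation.

```python
# Illustrative sketch: check whether a VM's IOPS/throughput caps can absorb the
# combined limits of its attached premium disks. The cap values are examples
# from the article, not authoritative figures.
def effective_limits(vm_cap_iops, vm_cap_mbps, disks):
    """disks: list of (iops, mbps) tuples for each attached disk.
    Return the effective (iops, mbps) the VM can actually deliver,
    and whether the VM itself is the bottleneck."""
    disk_iops = sum(d[0] for d in disks)
    disk_mbps = sum(d[1] for d in disks)
    effective = (min(vm_cap_iops, disk_iops), min(vm_cap_mbps, disk_mbps))
    vm_bottleneck = disk_iops > vm_cap_iops or disk_mbps > vm_cap_mbps
    return effective, vm_bottleneck

# A 4TB premium SSD (7,500 IOPS / 250 MB/s) paired with a small VM capped
# around 3,200 IOPS / 48 MB/s: the VM is the bottleneck.
eff, bottleneck = effective_limits(3200, 48, [(7500, 250)])
print(eff, bottleneck)  # (3200, 48) True
```

If `vm_bottleneck` comes back `True`, either pick a larger VM SKU or a smaller (cheaper) disk tier.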

4. Using Standard HDDs for Heavy Production Workloads 

Quite the opposite can also happen. We will see mission-critical workloads being assigned standard HDDs or standard SSDs. All mission-critical workloads should be using a premium SSD disk. Your workload performance will certainly increase compared to a standard SSD or standard HDD. The rule of thumb is that if the disk serves data to an end-user, make it premium. With that said, make sure you follow #3 above and size your VM appropriately for the disk.  

5. Selecting the Wrong Tier of Azure Files and Not Allocating Enough Storage 

When using Azure Files for mission-critical workloads, such as hosting FSLogix profiles for RDS or WVD, we often see the standard tier selected. The challenge will always be the speed of WVD if you select anything but premium tier storage for Azure Files. However, just selecting premium is not enough. You must also allocate a sufficient quota size to get the IOPS you are looking for. Azure Files’ formula for baseline IOPS is 400 IOPS plus 1 IOPS per GB assigned to the share. This means that if you want more IOPS (up to 100,000) you must allocate more GBs to the share. Performance degradation can come both from not using premium tier storage and from not allocating enough storage quota to your Azure Files share. 
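The quota-to-IOPS relationship described above can be expressed as a small calculation. This uses the 400 + 1 IOPS per GB baseline formula quoted in this article; check the current Azure Files scalability targets before relying on exact numbers.

```python
# Sketch of the share-sizing rule above: baseline IOPS grows with the
# provisioned quota (400 IOPS + 1 IOPS per GB, capped at 100,000 per the article).
def baseline_iops(provisioned_gb):
    """Baseline IOPS for a premium Azure Files share of the given quota."""
    return min(400 + provisioned_gb, 100_000)

def quota_for_iops(target_iops):
    """Minimum quota (GB) to provision for a desired baseline IOPS."""
    return max(target_iops - 400, 0)

print(baseline_iops(1024))   # 1424 baseline IOPS for a 1,024GB share
print(quota_for_iops(5000))  # provision at least 4600GB for 5,000 IOPS
```

The takeaway: if FSLogix performance is poor on a small premium share, growing the quota (even beyond your capacity needs) is how you buy more IOPS.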

6. Forgetting to Order Reserved Instances on Virtual Machines 

Reserved Instances are an absolute must when it comes to cost control and saving money in Azure. To read more about Reserved Instances, read this article. A very high percentage of partners do not opt in to Reservations for their VMs. Without Reserved Instances, your virtual machines run at the pay-as-you-go rate, which is the most expensive way to pay for Azure. I believe partners are so busy that they either forget to do it or don’t know how. If you are working with a CSP distributor, contact them to order and lock in your Reserved Instances, and make sure every running VM is covered by a Reserved Instance. 
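To see why pay-as-you-go hurts, here is a purely illustrative comparison. The hourly rate and discount below are made-up placeholders, not real Azure prices; actual RI discounts vary by SKU, region, and term.

```python
# Purely illustrative math for the Reserved Instance point above.
# Both numbers are hypothetical placeholders, not real Azure pricing.
def monthly_cost(hourly_rate, hours=730):
    """Approximate monthly cost for a VM running 24/7 (~730 hours/month)."""
    return hourly_rate * hours

payg_rate = 0.20    # hypothetical pay-as-you-go $/hour
ri_discount = 0.60  # hypothetical discount for a 3-year Reserved Instance

payg = monthly_cost(payg_rate)
ri = monthly_cost(payg_rate * (1 - ri_discount))
print(round(payg, 2), round(ri, 2))  # 146.0 58.4
```

Multiply that monthly gap across every always-on VM in an environment and the case for covering them all with Reservations makes itself.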

7. Forgetting to Toggle Azure Hybrid Benefit 

Equally important is purchasing the licenses required for Azure Hybrid Benefit (AHB) and not forgetting to TOGGLE the switch on each VM to take advantage of it. 

Similar to Reserved Instances, partners often forget to do this as well. Renting an OS or SQL license from Azure is by far the worst way to acquire the necessary Windows licensing for your VM.  

Purchasing the licenses isn’t all you need to do. You must tell Microsoft that you own a compatible license for Azure for them to give you the appropriate discount. 

8. Improperly Licensing Microsoft SQL Server 

If you have applications using SQL Server on Azure VMs, it is very important to understand how SQL Server can be licensed in Azure. Unlike on-premises, where you can license SQL Server under the Server + CAL model, you cannot do this in Azure. SQL Server can only be licensed under the Core model, and you must purchase a minimum of 4 cores per SQL Standard instance, even if your machine has fewer than 4 cores. Core licenses are sold in packs of 2. 
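Those core-licensing rules (4-core minimum, 2-core packs) translate into a simple calculation:

```python
# Hypothetical helper for the SQL Server core-licensing rules above:
# cores are licensed in 2-core packs with a 4-core minimum per instance.
import math

def sql_core_packs(vm_cores):
    """Number of 2-core license packs needed for a VM with vm_cores cores."""
    licensed_cores = max(vm_cores, 4)                    # 4-core minimum per instance
    licensed_cores = math.ceil(licensed_cores / 2) * 2   # cores sold in packs of 2
    return licensed_cores // 2

print(sql_core_packs(2))  # 2 packs (the 4-core minimum applies)
print(sql_core_packs(8))  # 4 packs
print(sql_core_packs(5))  # 3 packs (rounded up to 6 licensed cores)
```

Note how a 2-core VM still pays for 4 cores: the minimum, not the VM size, drives the license count.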

There are currently two supported models of purchasing SQL licenses under the Core model in Azure:  CSP Software Subscription SQL Server 2 Core Pack (1 year or 3 years) and OPEN license for SQL Server per Core model with Software Assurance. 

If you don’t have either of these two types of licenses, you may not bring your license to Azure. The licenses will need to be repurchased under the correct licensing program. 

It is also important to take advantage of Azure Hybrid Benefit for SQL Server licensing. Over a 3-year term, renting the SQL Server license under the pay-as-you-go model will cost you over $3,000 for a 4-core SQL Server compared to bringing your own license under the CSP or OPEN license with Software Assurance program and taking advantage of Azure Hybrid Benefit. The drawback is that it is an upfront payment versus renting month to month. 

9. Misconfiguring NSG Inbound and Outbound Rules 

Understanding how Network Security Groups (NSGs) work is important to the security of your Azure environment. NSGs act as your stateful firewall; they can be set to ALLOW or DENY traffic to your virtual network in Azure. Many NSGs are misconfigured in ways that give the outside world full access on all ports, or on specific ports such as 80, 443, or 3389. Hunker down and learn how NSGs work: getting them wrong can pose a huge security risk to your network, and it will frustrate you when traffic does not flow, you cannot connect, and you cannot figure out why. 
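As an illustration of what to look for, this sketch flags inbound allow rules that are open to the internet on sensitive ports. The rule dictionaries only loosely mimic the shape Azure’s APIs return; the field names here are assumptions for illustration.

```python
# Hypothetical NSG audit sketch: flag inbound Allow rules open to any source
# on sensitive ports. Rule dicts are simplified stand-ins for real NSG rules.
RISKY_PORTS = {"80", "443", "3389", "22", "*"}
OPEN_SOURCES = {"*", "Internet", "0.0.0.0/0"}

def risky_inbound_rules(rules):
    """Return the names of inbound Allow rules exposed to the internet on risky ports."""
    flagged = []
    for r in rules:
        if (r["direction"] == "Inbound"
                and r["access"] == "Allow"
                and r["source"] in OPEN_SOURCES
                and r["port"] in RISKY_PORTS):
            flagged.append(r["name"])
    return flagged

rules = [
    {"name": "allow-rdp-any", "direction": "Inbound", "access": "Allow",
     "source": "*", "port": "3389"},       # RDP open to the world: flag it
    {"name": "allow-https-lb", "direction": "Inbound", "access": "Allow",
     "source": "10.0.0.0/8", "port": "443"},  # internal source: fine
]
print(risky_inbound_rules(rules))  # ['allow-rdp-any']
```

An RDP rule open to `*` is exactly the kind of misconfiguration that turns into a breach; lock such ports down to known source ranges or a bastion.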

10. Not Patching Your VMs Running in Azure 

Believe it or not, when VMs are deployed in Azure, there is a high likelihood they aren’t patched like machines running on-premises. A virtual machine running in Azure is no more or less secure than a VM running on-premises. It is very important to install your RMM tools and antivirus software on VMs running in Azure as well. Treat them exactly the same way and put them on the same patch schedule as a VM running on-premises. Do not neglect your VMs in Azure; they too need to be kept safe and treated with care. 

Azure even has an Update Manager service that you can enroll your VMs in, which will help patch your machines if you don’t feel like using your RMM tool to do the job. Here is how to enroll your VM and use Update Manager. 

These are the 10 most common Azure mistakes we see partners make. Keeping these points in mind when working with Azure will help you be more successful. And, of course, we are always here to help. 

If you’d like to schedule a demo of how Nerdio Manager for Enterprise can help your business save up to 75% on Azure compute and storage costs and drastically lower the time it takes to deploy WVD, click the button below. 

Free White Paper Download!