Windows Virtual Desktop (and Citrix) with FSLogix

Overview of Windows Virtual Desktop with ANF providing FSLogix container and file share storage

Windows Virtual Desktop (WVD) has been generally available (GA) since 30th September 2019, so you may be wondering what the best practices are for deploying highly available, performant and scalable storage to support your users. In addition, that storage must natively support Active Directory and NTFS permissions.

Microsoft offers enterprise shared file services with its latest storage service, Azure NetApp Files, which allows you to deploy SMB (Windows file shares) directly into your private VNET (no internet-facing IP) that can easily support FSLogix profile containers and shared files for WVD.

In fact, it’s a recommended deployment methodology: https://docs.microsoft.com/azure/virtual-desktop/create-fslogix-profile-container

So how do you get started? Well first, let’s start with the why.


The FSLogix profile disk (VHDX) is the single source of truth for a user’s data, configuration and settings. It is therefore important to protect it against failure, corruption and other sources of data loss or outage. If you lose this disk, you lose all of that user’s data with it.

One of our partners here in the UK tried it out for themselves and, with help from Andi Kelcher from Fujitsu, the results became very clear:

“By moving from our previous configuration of BLOB storage with FSLogix, to Azure NetApp Files with FSLogix, initial testing shows a dramatic performance increase when looking at login times, shown below:

Azure HSD Server 2019 – 69% decrease

Azure VDI – 38% decrease

Azure WVD – 29% decrease

Andi Kelcher – Fujitsu
Reduced Login Times using Azure NetApp Files – Note – MVD = WVD (small typo!)

Via Citrix Performance Analytics, during early testing we have noticed that occurrences of “fair” session logon durations have disappeared and been replaced by an excellent UX score, as per below:

Andi Kelcher – Fujitsu

Another design consideration to take into account is that your AppData is also stored within your FSLogix profile disk, and the performance of your applications is therefore tied to the performance of the underlying storage of this disk.

In summary, ANF provides simple-to-deploy, Azure-native shared file storage (it’s from Microsoft) that offers your users a consistently performant experience whilst protecting their data via built-in data management capabilities.

Getting Started

In this post we will perform the following steps in order to successfully deploy Windows Virtual Desktop:

  • Prerequisites
    • Create a tenant in Windows Virtual Desktop
    • Create service principal and role assignments
    • Install Windows Desktop Client
  • Part 1: Deploy Windows Virtual Desktop Host Pools.
  • Part 2: Deploy storage for our user profiles (Note: you must have requested whitelisting for the ANF service beforehand. If you haven’t, simply select the Azure NetApp Files service from the Azure storage services and select register. This typically takes no more than 24h).
  • Part 3: Install FSLogix onto the WVD hosts and configure Azure storage for optimal performance and reliability of user profile and O365 data.
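The Azure NetApp Files registration mentioned above can also be done from the command line. A minimal sketch, assuming you are already signed in with az login:

```shell
# Register the Azure NetApp Files resource provider on the current subscription
az provider register --namespace Microsoft.NetApp --wait

# Confirm the registration state (should print "Registered")
az provider show --namespace Microsoft.NetApp --query registrationState -o tsv
```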


Create a tenant in Windows Virtual Desktop


So, let’s start with the basics and clarify some of the terms used by WVD. Firstly, what is a tenant? It’s a group of one or more host pools.

And each of these host pools contains one or more session hosts (VMs) that are registered to the Virtual Desktop service. In order to create a tenant there are a number of steps that must be completed to allow the service to interact correctly with your Azure AD.

Rather than re-write the excellent deployment documentation provided by Microsoft, simply follow the steps outlined here: https://docs.microsoft.com/en-gb/azure/virtual-desktop/tenant-setup-azure-active-directory

Create Service Principals and Role Assignments


Once you have successfully completed the previous step to create a tenant, you must then create the service principal and role assignments for Windows Virtual Desktop. Again, Microsoft provide excellent documentation covering this step here: https://docs.microsoft.com/en-gb/azure/virtual-desktop/create-service-principal-role-powershell

Once the above step is complete, you are now ready to deploy your host pools.

Install Windows Desktop Client

Finishing the installation of Remote Desktop

One final important snippet of information – be sure to install the Windows Desktop client. Confusingly, this is not the same as the Remote Desktop Connection client built into Windows (mstsc).

The built-in Remote Desktop Connection (mstsc) is not the same as the Windows Desktop client

You can grab the latest download from here: https://docs.microsoft.com/en-gb/azure/virtual-desktop/connect-windows-7-and-10

Part 1: Deploy Windows Virtual Desktop Host Pools

How-to deploy Windows Host Pools for WVD

In order to deploy Windows Virtual Desktop, you must provision a host pool (a collection of one or more session hosts) which provides the desktop sessions for your users. There are two deployment types to choose from:

  1. Pooled – Enable multi-session virtual desktop – Multiple users share the underlying host resources (many to one mapping of users to resources).
  2. Personal – Each user receives their own persistent host (one-to-one mapping of users to resources).

Part 2: Deploy Storage for FSLogix Containers (User Profile VHDX) & for Shared Data

Learn how to deploy SMB storage in Azure for FSLogix

Windows Virtual Desktop users can make use of FSLogix, a powerful and simple-to-deploy user profile and O365 container technology that makes handling remote user profile data simpler than ever. The full list of benefits is covered in the Microsoft documentation below:

Source: https://docs.microsoft.com/en-gb/azure/virtual-desktop/fslogix-containers-azure-files

Part 3: Deploy & Configure FSLogix

I have built upon the excellent work by Senior Microsoft FastTrack engineer Dean Cefola and modified his automated deployment script which will automatically download and configure FSLogix into your session host for you. This is available at this GitHub repo: https://github.com/kirkryan/Azure-WVD/blob/master/PowerShell/New-WVDSessionHost.ps1

Once you have downloaded the above PowerShell script, simply add/edit the mount path for Azure NetApp Files to the variable called $ANFSMBPath (shown below):

Copy the path shown in the mount instructions of the Azure NetApp File volume
Paste the Azure NetApp Files SMB mount path into the $ANFSMBPath variable in the PowerShell script

Alternative Configuration Method:

If you have issues running the PowerShell script provided above, you can simply install the FSLogix agent from here: https://aka.ms/fslogix_download

Once installed, open the registry editor (regedit) and create a new entry called VHDLocations (Type: REG_MULTI_SZ). Simply set the value to the mount path of the ANF volume and reboot the session host (VM). You’ll need to do this once per session host, and can easily automate this step via GPO or other methods.

Create VHDLocations in HKLM\SOFTWARE\FSLogix\Profiles
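If you prefer the command line to regedit, the same values can be set from an elevated command prompt on each session host. A sketch only – the UNC path below is a placeholder for your own ANF volume mount path:

```shell
:: Point FSLogix at the ANF SMB share (placeholder path shown)
reg add "HKLM\SOFTWARE\FSLogix\Profiles" /v VHDLocations /t REG_MULTI_SZ /d "\\anf-1234.contoso.com\wvd-profiles" /f

:: Ensure profile containers are enabled
reg add "HKLM\SOFTWARE\FSLogix\Profiles" /v Enabled /t REG_DWORD /d 1 /f
```

The same two commands dropped into a startup script or GPO will cover every session host in the pool.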

Appendix/ Assumptions

  • You have deployed an AD/DNS server that is reachable over IP from the parent VNET containing the ANF volume. Please note that UDR is not supported, therefore a natively supported route must exist between the volume and the AD/DNS server. If you have a complex network setup (i.e. virtual firewall appliances), then simply deploy a read-only AD server within the ANF VNET or a locally peered VNET.
  • You have whitelisted your subscription for Azure NetApp Files. It is a fully GA service but must be requested (similar to a CPU quota increase or SAP HANA large instances, for example).

A big thanks to Andi Kelcher from Fujitsu for sharing their performance testing, Christiaan Brinkhoff & Jim Moyle who are Microsoft Global Black Belts for Windows Virtual Desktop, and Geert Van Teylingen GBB for ANF for their assistance in setting up my environment and understanding of the solution.

Data replication between Azure regions just got easier with Azure NetApp Files.

What is cross-region replication

For anyone reading this who doesn’t know what ANF is: it’s a Microsoft shared file service that is native to Azure (first-party). That means it’s supported and sold by Microsoft themselves, just like premium and ultra managed disks.

Now that’s out of the way, let’s get to the exciting news: cross-region replication is now available in private preview for customers within certain regions (reach out to your local CSA to find out the latest news on your specific region). This is a big deal for anyone using ANF who needs powerful yet simple-to-manage disaster recovery in Azure.

How does it work?

Cross-region replication uses block-level tracking, making it the most efficient replication technology available in Azure today. This is important as it results in a much-reduced data payload for replication, which directly reduces data egress and therefore the cost of regional replication.

The configuration is relatively straightforward. If you haven’t already, create a volume that will be used as the source of the replication; an existing volume with existing data works too.

For this demonstration we will be using a 1TB volume called primary-vol as the initial source volume in eu-north, and we will create a destination volume called secondary-vol in eu-west. CRR (Cross-Region Replication) will then be configured on a 10-minute asynchronous schedule to ensure data is frequently replicated between regions.
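Conceptually, the destination volume is created as a data-protection volume that references the source volume’s resource ID. A hedged sketch using the az CLI – the resource group, account, pool and VNET names are all placeholders, and the source resource ID is abbreviated:

```shell
# Create the destination (data protection) volume in the DR region.
# All names below are placeholders; sizes are in GiB (1024 = 1TB).
az netappfiles volume create \
  --resource-group rg-dr --location westeurope \
  --account-name anf-dr --pool-name pool1 \
  --name secondary-vol --file-path secondary-vol \
  --usage-threshold 1024 --vnet vnet-dr --subnet anf-subnet \
  --endpoint-type dst --replication-schedule _10minutely \
  --remote-volume-resource-id "/subscriptions/<sub-id>/resourceGroups/rg-prod/providers/Microsoft.NetApp/netAppAccounts/anf-prod/capacityPools/pool1/volumes/primary-vol"
```

Once the destination volume exists, the replication must be authorised from the source side before the baseline transfer begins.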

Comparison of supporting architecture requirements for replication. ANF does not require any customer networking between regions as it uses the Azure backbone for data transport.

Simpler architecture, less operational complexity and also more cost efficient

That’s a brave claim – simpler, easier and cheaper? Let’s put that to the test:

Simple Architecture: No VMs to deploy or keep switched on for replication – also no VNET peering to configure. In addition, when using VMs there is the added complexity of having to maintain, patch and right-size (bandwidth calculations) for your replication needs. None of this architecture is required when using ANF.

Less complexity: All networking is already provided by the service. Therefore you do not need to set up global VNET peering or VNET gateways, as all replication uses the Azure backbone itself for maximum performance.

Cost efficient: By removing the need for VMs to replicate data, you can remove these (where appropriate) or switch them offline until DR is invoked. This can save significant monthly spend in many cases. Please note: whilst CRR is in preview there is no cost for data replicated by the service – use this to your advantage to seed your entire DR site for free (you’ll still pay for the volumes themselves, of course!)

Episode 109 – Set up and configuration of cross-region replication

How do I invoke DR? How fast can I failover?

A major benefit of CRR is that failover takes seconds – that is, the data can be made available for read and write (this is important – the data is always available for read at the secondary site). This is done with one simple ‘break replication’ command.
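In az CLI terms the break maps to the replication “suspend” operation, issued against the destination volume; “resume” resyncs the mirror afterwards. A sketch with placeholder resource names:

```shell
# Break the mirror: the destination volume becomes writable
az netappfiles volume replication suspend \
  --resource-group rg-dr --account-name anf-dr \
  --pool-name pool1 --name secondary-vol

# Later, re-establish the mirror (resync from the source volume)
az netappfiles volume replication resume \
  --resource-group rg-dr --account-name anf-dr \
  --pool-name pool1 --name secondary-vol
```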

Difference between ultra/premium/standard tiers?

ANF is available in three different tiers – Ultra, Premium and Standard. Regardless of the tier you select, all benefit from the same accelerated and efficient replication.

You may now be wondering how CRR compares to other replication technologies such as RSYNC, Robocopy, XCP or even NetApp’s own CloudSync solution.

That’s it for this week. Be sure to join me for part 2, where I directly compare the time taken to replicate 1TB of data between regions using several different technologies.

Azure NetApp Files: May Update

Another month has passed and with it another development sprint of Microsoft’s shared files platform has been delivered.

In this release the following features and enhancements have been delivered:

Backup Policy Users for SMB/Active Directory Connections

This feature allows for the use of a privileged account (i.e. a non-AD-administrator account) when migrating data to SMB volumes. You can read more about this feature here, and you can also request access to it by emailing anffeedback@microsoft.com

Allow .snapshot folder to be hidden

By default, ANF volumes allow access and visibility to a hidden read-only folder called “.snapshot”. For some workloads this can be problematic. You now have the option to hide this folder when creating new volumes or modifying existing ones.

Edit Active Directory Connections

Users can now alter settings on existing Active Directory connections, such as DNS servers, site name and organisational unit path, via the portal.

NFSv4.1 ACLs are now enabled

This feature allows NFS access control lists to be used with Azure NetApp Files volumes for file and folder permissions.

How to: K3s on Raspberry Pi

Taking a break from pure Azure deployments, I decided to try my hand at building a small Kubernetes cluster that would allow me to learn and develop applications quickly at home without burning any of my Azure credits. Of course I could easily deploy an AKS cluster, but I want to learn more about managing clusters in Kubernetes in order to fully understand the benefits any managed solution brings (sometimes it’s best to understand the pain points first hand to truly see the value in a service/solution).

The Build

In order to build my personal Kubernetes cluster, I decided to invest in four of the latest Raspberry Pis with the largest RAM option available (Raspberry Pi 4 – 4GB). In addition to these, I happened to have a Raspberry Pi 3+ lying around not doing much, so that will become the master/API node of the cluster, leaving the beefier RPi4s available for use as the worker nodes.

High-Level Overview

In order to build my Kubernetes cluster I had to make two decisions:

  1. OS Distribution – this was between Raspbian and Ubuntu. I chose Ubuntu as I have used it in enterprise environments and have experience with it.
  2. Kubernetes tooling – this was between microk8s (https://microk8s.io/) and K3s (https://rancher.com/). I opted for K3s as it was simply the first set of tutorials I came across – I would like to give microK8s a go in future. Both offer optimisation for edge environments such as the Raspberry Pi.


Luckily the folks over at Rancher have made this simple. Run the following command and you’ll have a single-node K3s cluster up and running in no time.

curl -sfL https://get.k3s.io | sh -

Once this is installed you can check that your single-node cluster is up and running with the following command – if this fails for you, check the next section (additional configuration/troubleshooting):

kubectl get node

At this stage, start your other nodes and run the following command on them to install k3s and configure them to join the existing cluster:

curl -sfL https://get.k3s.io | K3S_URL= K3S_TOKEN=K10373f030f773c58a476ca0332eda0beb8ef8ddfc6d3e1c909642939bc59c4d096::server:d2ec6a145a753bb676bb0034f71bb6c0 sh -

And that’s it! You now have a cluster up and running, ready for your next project! You can check your node status with the command sudo kubectl get nodes
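For reference, the values for K3S_URL and K3S_TOKEN in the join command come from the server node – K3S_URL is simply the server’s address on port 6443:

```shell
# Run on the server (master) node: print the cluster join token (K3S_TOKEN)
sudo cat /var/lib/rancher/k3s/server/node-token

# K3S_URL takes the form https://<server-ip>:6443
```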

If you liked this post then subscribe as I build upon this journey and start to connect more advanced use-cases and cloud integration to my projects!

What’s next?

I’m thinking of playing with Azure Arc and connecting it to my cluster to see what it is capable of and gaining experience with it. I also have a raft of projects (mainly cycling related) that I will deploy on this cluster! Keep tuned for more articles and videos!

Additional Configuration

When using the Raspberry Pi there are some additional configuration steps that must be performed in order to get K3s up and running successfully.

Ubuntu 20.04 LTS on Raspberry Pi

When I followed the above instructions, I hit a snag where, no matter what I did, I couldn’t get the K3s service to start. After a bit of digging I found that a required cgroups flag must be set on the Raspberry Pi. In order to do so, follow these instructions (source: https://microk8s.io/docs/install-alternatives#heading--arm).

sudo vi /boot/firmware/nobtcmd.txt
# or for older Raspberry Pis (such as the 3+)
sudo vi /boot/firmware/cmdline.txt

Then add the following options to the existing line:

cgroup_memory=1 cgroup_enable=memory

Then simply reboot your node and it will enable cgroups. Your existing agent will then start correctly and be visible when running kubectl get nodes.
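After the reboot you can confirm that the memory cgroup controller is enabled before retrying the agent – the last (“enabled”) column of the memory row in /proc/cgroups should read 1:

```shell
# Print the "enabled" flag for the memory cgroup controller (expect 1)
awk '$1 == "memory" {print $NF}' /proc/cgroups
```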

Enable legacy iptables on Raspian Buster

If you are using Raspbian Buster then make sure to enable legacy iptables, as it defaults to nftables, which will not work correctly with K3s networking.

sudo iptables -F
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo reboot


Announcing: ANF Snapshot Scheduler v2

What’s new

  • Support for multiple volumes per scheduler
  • Support for scoped snapshots – a scheduler will not interfere with other schedulers or manually created snapshots. This is great for running scenarios such as daily, weekly and monthly snapshots with different retentions (i.e. 48 hourly, 14 daily, 4 weekly).
  • One-touch deployment (thanks to Sean Luce)

Where to get it / documentation



Scheduled snapshots could not be any simpler. This allows for advanced, flexible snapshot scheduling, with further advanced integration to follow in future.

Got an idea/feature request?

Simply pop your idea/request here: https://github.com/ANFTechTeam/anfScheduler/projects/1

How To: Azure Kubernetes Service – Enable Dynamic Provisioning (Part 1)

Tired of manually managing PVCs and static storage classes? Then look no further: in this two-part video series I’ll be taking you through the steps required to enable dynamic storage provisioning, using Azure NetApp Files as an example of what responsive, low-latency, high-performance storage can do for your k8s applications.

Video Guide

Part 1 – Dynamic Provisioning with Azure Kubernetes Service and ANF


Create a service principal with contributor rights to the correct scope (subscription, resource groups, etc.):

az ad sp create-for-rbac --name "Trident" --role contributor --scopes /subscriptions/enter-your-sub-id-here/resourceGroups/rg-sponsored-csa-west-europe-anf
Download and install Trident

Next, download and install trident with the following commands:

wget https://github.com/NetApp/trident/releases/download/v20.01.1/trident-installer-20.01.1.tar.gz

tar -xf trident-installer-20.01.1.tar.gz

cd trident-installer

Note: always check the repo for the latest version numbers (https://github.com/NetApp/trident)
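With the installer unpacked, the install itself is a single command, run from the trident-installer directory with kubectl already pointed at your AKS cluster:

```shell
# Install Trident into its own namespace, then verify the deployment
./tridentctl install -n trident
./tridentctl version -n trident
```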

And that’s it for this post, part 2 will cover how to create and verify the back end and storage classes for use by Azure Kubernetes Service.

The full Trident installation guide is available here: https://netapp-trident.readthedocs.io/en/latest/kubernetes/deploying.html#install-trident

How to: Azure NetApp Files with PowerShell

Installing Azure PowerShell & NetApp Files Module

Install Azure PowerShell

If you’re not using cloud shell or one of the latest Docker images you’ll need to install Azure PowerShell. Simply follow the installation instructions here and you’ll be up and running in no time.

Don’t forget to log into your Azure account by running the following command:

Connect-AzAccount
Install the Azure NetApp Files Module

Next, install the Azure NetApp Files module:

Install-Module -Name Az.NetAppFiles

And that’s it – you’re ready to use PowerShell to manage and query all of your Azure NetApp Files resources in Azure!

The available cmdlets in the Azure NetApp Files module

🚀 Azure NetApp Files in 2020: January Update 🚀

We’re barely into January 2020 and already the team has been hard at work to bring new features and enhancements to the Azure NetApp Files service for Azure users.

Join Geert Van Teylingen (Microsoft – GBB Tech Specialist for Azure Advanced Storage) and myself (NetApp – Principal Architect) as we run through the latest in the first of our monthly update videos below:

Monthly Update: January 2020
  • In-place Snapshot Restore (Preview)
  • Improved Metrics and Alerting via Azure Monitor (PowerShell)
  • Terraform: Azure NetApp Files added to the Azure Resource Provider
  • Coming Soon: Native Cross Region Replication (Preview)

Don’t forget to subscribe for automatic updates & news about Azure NetApp Files and more!

Azure NetApp Files: December Update

Microsoft’s Azure NetApp Files Service

This month the team have been busy working on a series of new features and enhancements for Microsoft’s enterprise shared file service.

Release Notes

In-Place Snapshot Restore (Private Preview)

Azure NetApp Files has gained the ability to roll back an entire volume to any selected restore point. This is a very powerful feature and can be used to protect production systems against data corruption and malware. In addition, by performing an in-place restore, host systems do not have to be reconfigured to a different mount point.

It lets you roll your systems back to the last known working state in seconds.


I am under NDA with both Microsoft and NetApp therefore the above list is not exhaustive and excludes information about low-level service enhancements that are also included with monthly releases. Please reach out to your local Microsoft team for NDA updates.

Azure NetApp Files: Enable Alerting

Full video demonstration

Azure NetApp Files is Microsoft’s enterprise shared file service (read more here). As a first-party service it allows you to deploy private NFS & SMB storage to your VNETs in seconds, and to easily monitor key metrics via Azure Monitor.

I will run through an example below, where I create and set an alert on a 100GB volume that will raise an alert via the Azure app installed on my mobile device. I could quite easily change this to SMS, email, Teams, Slack, etc., and also specify automatic actions to take place (e.g. grow a volume).


Step 1: Create Azure Monitor Alert

In order to create an alert, you must have provisioned the ANF resources before these steps.

Easily retrieve the ResourceID of your volume from the Properties tab within the portal

The PowerShell script is available on my repo here: https://github.com/kirkryan/anf-alerts

Azure Monitor is used to monitor and alert on any Azure resource. It can be configured via the portal or from the command line; in this example we will use the command line, as the portal workflow is not yet enabled for creating a new alert for ANF.

Use the following PowerShell commands to create your alert:

$Resource = "enter your ResourceID here"
$ResourceGroup = "enter your ResourceGroup here"
$QuotaInBytes = 107374182400   # example for 100 GiB

az monitor metrics alert create --name "Volume Quota Exceeded" --condition "avg VolumeLogicalSize > $QuotaInBytes" -g $ResourceGroup --scopes $Resource --description "Volume Quota Exceeded" -o table
Successful Metric Alert Creation

Handy tip: you’ll notice that you must specify the capacity in bytes – here is a handy GiB-to-bytes calculator: http://extraconversion.com/data-storage/gibibytes/gibibytes-to-bytes.html
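If you’d rather not leave the terminal, the conversion is just multiplication by 1024 three times (GiB → MiB → KiB → bytes):

```shell
# Convert GiB to bytes (100 GiB -> 107374182400 bytes)
GIB=100
echo $((GIB * 1024 * 1024 * 1024))
```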

Step 2: Configure Action Group

Azure Monitor – Rules View

Now that you have successfully created a rule, you will more than likely want to perform an action when that rule is triggered. Navigate to Azure Monitor and select “Manage Alert Rules”. You will be shown a list of all existing rules. Select the name of the rule you would like to configure the action for to open the configuration screen below:

Rule configuration / detail screen

Next, select “Create action group” to configure a new action or “Select action group” if you already have an existing action.

Adding an action group

Select the appropriate action type that you would like the rule to perform once activated. In this example I have used Email/SMS/Push/Voice and have configured the rule to push an alert to the Azure App on my mobile phone.


And that’s it! You’ve successfully configured alerting for your ANF volumes in just a few simple steps!

Any time your alerting thresholds are breached you will be automatically notified via your chosen action.

Here is an example of the push notification via the Azure mobile app:

ANF Scheduler: December Update

For users of Azure NetApp Files, the ability to take on-demand snapshots and instantly restore any application is a powerful one. Snapshots can be taken on demand or triggered easily via the API, PowerShell or CLI, however many customers would like an easy-to-deploy, secure method of taking snapshots at a regular interval with retention management. Enter the ANF Scheduler (https://github.com/kirkryan/anfScheduler). It deploys in seconds, is secure as it is compliant with Azure IAM, and can be monitored natively in Azure.

After 230 days of running anfScheduler every hour, every day, I have incurred a cost of £0.10

Kirk Ryan – The cost of serverless automation using Azure Logic Apps.

Today, I have published an update to the app that fixes a small bug introduced by an underlying API response change in the ANF service. You can download the latest version at the link above.