Part 2: Using Azure to remove the HQ dependency for AD

Part 1: The Scenario

Part 2: The Azure Solution

Part 3: Virtual Networks (coming soon)

Proposed Solution

After sitting down and scoping the needs and requirements, the design below is what is now being built. In the following posts I will discuss how to make those changes and why we chose the options that we did.

In comparison to the initial image, you can see that we have moved key services (ADFS and the ADFS Proxies (WAP)) away from Azure Classic and into Azure Resource Manager. These are not rebuilds; instead we used MigAz to move the boxes from the old environment to ARM with minimal downtime.

A new subnet within our production vNet will host the Domain Controllers and the ADFS servers; the ADFS servers will sit behind an internal load balancer. The WAP boxes will sit within the DMZ behind an external load balancer.

Both vNets are protected by Network Security Groups (NSGs), and the VMs themselves will also have individual NSGs assigned to their network cards. This limits the attack surface should someone breach another machine within the same subnet.
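To give an idea of what a per-NIC rule looks like, below is an illustrative security rule in ARM template form. The rule name, priority and address ranges are made-up examples for this sketch, not the real environment's rules.

```json
{
  "name": "Allow-ADFS-HTTPS",
  "properties": {
    "priority": 100,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "10.0.2.0/24",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "10.0.1.0/24",
    "destinationPortRange": "443"
  }
}
```

A deny-all rule at a low priority (high number) underneath rules like this is what actually limits lateral movement within the subnet.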

vNet peering is enabled between the Azure Service Manager environment and the Resource Manager environment. This keeps the traffic within the Azure backbone and means that we are not relaying through HQ.

S2S VPN tunnels will be created from remote sites directly to Azure. Using site link costs within Active Directory Sites and Services, we will be able to force the majority of the traffic back to HQ, using Azure as the fallback for authentication traffic.
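The idea behind the site costs can be sketched with a few lines of Python. This is only an illustration of the lowest-cost-wins behaviour, not real DC locator logic, and the site names and cost values are invented for the example:

```python
# Illustrative sketch of cost-based site selection: clients prefer the
# reachable site whose site link has the lowest cost, analogous to how
# AD's DC locator steers authentication traffic.

def preferred_site(site_link_costs, available_sites):
    """Return the reachable site with the lowest site-link cost."""
    reachable = {site: cost for site, cost in site_link_costs.items()
                 if site in available_sites}
    return min(reachable, key=reachable.get)

# HQ gets a low cost so it wins while it is up; Azure only takes over
# when HQ is unreachable. These costs are example values.
costs = {"HQ": 100, "Azure": 200}
print(preferred_site(costs, {"HQ", "Azure"}))  # HQ while both are up
print(preferred_site(costs, {"Azure"}))        # Azure when HQ is down
```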

DNS servers currently point to HQ. This will remain the same; however, tertiary and quaternary DNS servers will be added to the DHCP scopes and to statically assigned servers, meaning resolution can fall back across all four.




Part 1: Using Azure to remove the HQ dependency for AD


Currently HQ is a single point of failure: it is the only site within the network to host Domain Controllers. These are responsible for authentication to Office 365, to external applications served via Azure AD, and for all of the users and computers within the company. This has been highlighted twice in the past few months, when HQ experienced an internet outage, and in a separate incident where the ASM VPN went down.

During these outages, users were unable to authenticate with cloud apps, including Outlook, OneDrive and SharePoint. Users in the remote offices also connect back across Site-to-Site VPNs for authentication to file shares and applications. The network ground to a halt.

Current Setup

In the diagram below, the red line indicates the way endpoints connect back to HQ to authenticate. As you can see from the image, all traffic heads back to HQ from both Azure VPN environments. The ADFS servers currently sit in ASM and are dependent on the ASM VPN tunnel remaining up.

The network sits in a hub/spoke topology where all traffic between remote sites traverses HQ. MPLS, and ultimately ExpressRoute, were deemed too expensive.

To note: the Azure portal also uses ADFS for authentication, so we could lose the ability to log in to Azure as well.

Find when VM was created in Azure Resource Manager

Recently I was asked how to confirm when a particular ARM virtual machine was created. I thought this would be a relatively easy thing to accomplish. However, searching through PowerShell I could not find any DateCreated property on either the VM or the VHD.

In the end I resorted to going into the Windows VM (this will not work on Linux) and checking the location C:\Windows\panther\

In here you should find a number of files, but the ones we are interested in are WaSetup.log and WaSetup.xml – the modified dates of these files will be from during the provisioning of the image / VHD.
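If you want to pull those timestamps out without eyeballing Explorer, a short script along these lines would do it. The helper function name is mine, and the only assumption carried over from the post is the C:\Windows\panther\ location on a Windows VM:

```python
import os
from datetime import datetime, timezone

def setup_file_times(panther_dir):
    """Return the modified time (UTC) of each WaSetup file found in panther_dir."""
    times = {}
    for name in ("WaSetup.log", "WaSetup.xml"):
        path = os.path.join(panther_dir, name)
        if os.path.exists(path):
            mtime = os.path.getmtime(path)
            times[name] = datetime.fromtimestamp(mtime, tz=timezone.utc)
    return times

# On the VM itself this would be:
# print(setup_file_times(r"C:\Windows\panther"))
```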

This was the closest I could find to a true creation date and time.


Migrate from Azure Classic to Azure RM

For reference in the below post:

Azure Service Manager = ASM

Azure Resource Manager = ARM

My current client has over 75 VMs that sit within Azure Service Manager (Azure Classic). The majority of these production VMs do not warrant a project to spin up new VMs and move the services across.

After some investigation into possible ways to migrate a VM from ASM to ARM, I found MigAz – this tool can migrate from Classic to ARM and from ARM to ARM, and support for AWS is currently in development.

This free-to-use tool exports JSON templates and PowerShell scripts to migrate your selected VM from one environment to the next. It also works across subscriptions, so if, like my current client, each environment is separated by subscription, this tool can be very useful.

The tool is self-explanatory. The only thing I would point out is that I was caught out when moving ARM to ARM because the managed disks by default only get a 60-minute SAS token, so when copying large amounts of data the token can expire mid-copy (I copied around 600 GB and it was taking about 6-7 hours).

You will need to go into the options of the program and increase the value for Managed Disk Access SAS Duration.
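As a rough sanity check on how long that SAS duration should be, here is a back-of-the-envelope sketch. The helper is my own, and the throughput figure is simply derived from the post's ~600 GB in ~6-7 hours, not a measured number:

```python
def required_sas_minutes(size_gb, throughput_gb_per_hour, safety_factor=1.5):
    """Estimate the SAS token lifetime (minutes) needed for a disk copy,
    with some headroom for throughput varying during the copy."""
    hours = size_gb / throughput_gb_per_hour
    return int(hours * 60 * safety_factor)

# ~600 GB at ~90 GB/hour (600 GB in roughly 6.5 hours) needs an order of
# magnitude more than the default 60 minutes:
print(required_sas_minutes(600, 90))  # 600 minutes, i.e. 10 hours
```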

I’ve now used this tool to migrate around 40% of the VMs, of all different sizes, and I could not recommend it enough.