With the implementation of an Azure CSR 1000v router, we reviewed how we wanted our traffic to flow. We decided that all sites would terminate into the CSR and back to our head office via DMVPN, giving us some redundancy in the event of a failure at either HQ or Azure.
The built-in Azure VPN would remain in place between Azure and HQ, and traffic between those two sites would continue to use it, whilst the remote sites would forward their traffic out of the CSR router. As sites were migrated over to the DMVPN solution, we added them to the route table that had been applied to all subnets (six in total) bar the dedicated CSR subnet we had created within our vNet.
The route table forwards any traffic destined for the /24 networks we defined to the internal IP of our CSR router.
We set this up with a 10.0.0.0/8 route forwarding to the virtual network gateway (which leads to the original Azure VPN); the more specific routes (the /24s in the above example) take precedence.
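As a sketch of how that route table could be built with the era-appropriate AzureRM PowerShell module (the resource group, vNet, prefixes and the CSR's internal IP below are hypothetical):

```powershell
# Broad 10.0.0.0/8 route to the virtual network gateway (the original Azure VPN)
$defaultRoute = New-AzureRmRouteConfig -Name "HQ-via-Gateway" `
    -AddressPrefix "10.0.0.0/8" -NextHopType VirtualNetworkGateway

# A more specific /24 per migrated DMVPN site, next-hopping to the CSR's internal IP
$siteRoute = New-AzureRmRouteConfig -Name "Site1-via-CSR" `
    -AddressPrefix "10.1.1.0/24" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.0.1.4"

# Create the route table; it is then associated with every subnet except the CSR subnet
$rt = New-AzureRmRouteTable -ResourceGroupName "rg-network" -Location "westeurope" `
    -Name "rt-dmvpn" -Route $defaultRoute, $siteRoute

# Associate it with a production subnet
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-network" -Name "vnet-prod"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet-prod" `
    -AddressPrefix "10.0.2.0/24" -RouteTable $rt | Out-Null
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
```

Because Azure picks the longest matching prefix first, the /24 routes to the CSR win over the /8 to the gateway for migrated sites.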
Once I have implemented the ASAv into our environment, an additional post will cover the use of route tables with that appliance to filter traffic.
Implementing the Cisco CSR router into Azure turned out to be quite a learning curve. Originally there were multiple images available on the Azure Marketplace, along with multiple versions of conflicting documentation around which features and capabilities were supported.
Luckily, this looks like it's being sorted out. There are now two images available on the Marketplace, one of which is for DMVPN transit VNETs.
When building the CSR, we dedicated an entire /24 subnet within our /20 vNet to it. We did this just to separate the traffic, which we could then lock down to prevent any unnecessary changes.
IMPORTANT – Something that I forgot to do, and it took a lot of faffing around to get our traffic forwarding through the CSR:
On the network interface card for the CSR, go to IP Configurations.
Here you can add additional IPs if required.
You also have the option for IP Forwarding Settings. Enable this (if I remember correctly, it requires a reboot), and you should then be able to pass traffic from one side of the interface to the other.
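If you prefer to script it, the same Azure-side setting can be flipped with the AzureRM module (the resource group and NIC names here are hypothetical); note this is separate from any routing configuration inside IOS itself:

```powershell
# Enable Azure-side IP forwarding on the CSR's network interface
$nic = Get-AzureRmNetworkInterface -ResourceGroupName "rg-network" -Name "csr-nic0"
$nic.EnableIPForwarding = $true
Set-AzureRmNetworkInterface -NetworkInterface $nic
```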
Currently the production vNet is a /20. At the moment this has been broken into two /24 segments: one for standard production VMs, the other for the DMZ.
A new /28 subnet will be created within the same vNet. This allows for 11 IP addresses to be assigned (Microsoft reserves 5 addresses in each subnet for the backend). 11 IPs is more than we require; however, the next size down (a /29) only provides 3, which is under our requirements.
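The arithmetic behind those numbers: a subnet of prefix length n contains 2^(32−n) addresses, and Azure reserves 5 of them (the first four and the last):

```powershell
# Usable addresses per Azure subnet = total addresses minus the 5 Azure reserves
foreach ($prefix in 28, 29) {
    $usable = [math]::Pow(2, 32 - $prefix) - 5
    Write-Output ("/{0} -> {1} usable addresses" -f $prefix, $usable)
}
# /28 -> 11 usable addresses
# /29 -> 3 usable addresses
```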
Network Security Group
The NSGs we use are assigned at subnet level. I do not tend to apply them directly to network cards unless we are trying to lock down the VM from what would be the equivalent of Layer 2 devices.
We allow the following ports to the domain controllers specified in the destination IP range: 25, 42, 135, 137, 139, 389, 636, 88, 53, 445, 9389, 5722, 464, 123, 138, 67, 1024-5000, 49152-65535.
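As an illustrative sketch (the subnet ranges, rule name and priority below are all hypothetical), one of those subnet-level rules could be defined with the AzureRM module like this:

```powershell
# Example inbound rule allowing an AD port from the production subnet
# to the domain controllers' address range
$adRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-LDAP-to-DCs" `
    -Description "LDAP to the domain controllers" `
    -Access Allow -Protocol * -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "10.0.2.0/24" -SourcePortRange * `
    -DestinationAddressPrefix "10.0.3.0/28" `
    -DestinationPortRange "389"
```

One rule per port or range (e.g. 445, 49152-65535) would then be collected into the NSG that is associated with the subnet.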
After sitting down and scoping the needs and requirements, the below is the design that is now being built. In following posts I will discuss how to make those changes and why we chose the options that we did.
In comparison to the initial image, you can see that we have moved key services (ADFS and the ADFS proxies (WAP)) away from Azure Classic and into Azure Resource Manager. These are not rebuilds; instead we used MigAz to move the boxes from the old environment to ARM with minimal downtime.
A new subnet within our production vNet will host the domain controllers and the ADFS servers; the ADFS servers will sit behind an internal load balancer. The WAP boxes will sit within the DMZ behind an external load balancer.
Both vNets are protected via Network Security Groups (NSGs), and the VMs themselves will also have individual NSGs assigned to their network cards. This limits the attack vector should someone breach another machine within the same subnet.
vNet peering is enabled between the Azure Service Manager environment and Resource Manager. This keeps the traffic within the Azure data plane and means that we are not relaying through HQ.
S2S VPN tunnels will be created from remote sites directly to Azure. Using site costs within Active Directory Sites and Services, we will be able to force the majority of the traffic back to HQ, using Azure as the fallback for authentication traffic.
DNS servers currently point to HQ. This will remain the same; however, tertiary and quaternary DNS servers will be added to DHCP scopes and statically assigned servers, meaning clients will round-robin between all four.
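Pushing the extra DNS servers out via DHCP can be scripted on a Windows DHCP server with the DhcpServer module (the scope ID and all four addresses below are hypothetical):

```powershell
# Set four DNS servers on a scope: the HQ pair first,
# then the Azure-hosted pair as tertiary/quaternary
Set-DhcpServerv4OptionValue -ScopeId 192.168.10.0 `
    -DnsServer 192.168.1.10, 192.168.1.11, 10.0.3.4, 10.0.3.5
```

Statically assigned servers would need the same four addresses set on their NICs by hand or by script.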
Currently HQ is a single point of failure: it is the only site within the network to host domain controllers. These are responsible for authentication to Office 365, to external applications that are served via Azure AD, and for all of the users and computers within the company. This has been highlighted twice in the past few months, once when HQ experienced an internet outage, and in a separate incident when the ASM VPN went down.
During these times, users were unable to authenticate with cloud apps, including Outlook, OneDrive and SharePoint. Users in the remote offices also connect back across site-to-site VPNs for authentication to file shares and applications. The network ground to a halt.
From the diagram below, the red line indicates the way endpoints connect back to HQ to authenticate. As you can see from the image, all traffic heads back to HQ from both Azure VPN environments. The ADFS servers currently sit in ASM and are dependent on the ASM VPN tunnel remaining up.
The network sits in a hub/spoke topology where all traffic between remote sites traverses HQ. MPLS, and ultimately ExpressRoute, was deemed too expensive.
To note: the Azure portal also uses ADFS for authentication, so we could lose our ability to log in to Azure as well.
Recently I was asked how to confirm when a particular ARM virtual machine was created. I thought this would be a relatively easy thing to accomplish; however, searching through PowerShell I could not find a creation date on either the VM or the VHD.
In the end I resorted to going into the Windows VM (this will not work on Linux) and checking the location: C:\Windows\panther\
In here you should find a number of files, but the ones we are interested in are WaSetup.log and WaSetup.xml – the modified dates of these files are from the provisioning of the image/VHD.
This was the closest I could find to a true date and time.
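From a PowerShell prompt inside the VM, those timestamps can be pulled like this:

```powershell
# Check the provisioning-time files left behind during image setup
Get-Item "C:\Windows\panther\WaSetup.log", "C:\Windows\panther\WaSetup.xml" |
    Select-Object Name, LastWriteTime
```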
My current client has over 75 VMs that sit within Azure Service Manager (Azure Classic). The majority of these production VMs do not warrant a project to spin up new VMs and move the services across.
After some investigation into possible ways to migrate a VM from ASM to ARM, I found MigAz (https://github.com/Azure/migAz). This tool can migrate from Classic to ARM and from ARM to ARM, and AWS support is currently in development.
This free-to-use tool exports your selected VMs as JSON templates and PowerShell scripts to migrate them from one environment to the next. It also works across subscriptions, so if, like my current client, you have each environment separated by subscription, this tool can be very useful.
The tool is self-explanatory. The only thing I would point out is that I was caught out when moving ARM to ARM: by default, managed disks are only given a 60-minute SAS token, which is a problem when copying large amounts of data (I copied around 600 GB and it took about 6-7 hours).
You will need to go into the options of the program and increase the value of Managed Disk Access SAS Duration.
I’ve now used this tool to migrate around 40% of the VMs, of all different sizes, and I could not recommend it enough.
Microsoft have announced that the Azure Classic portal (https://manage.windowsazure.com/) will be retired starting the 8th of January 2018. Therefore you will need to use the new ARM portal (https://portal.azure.com) for future administration.
The full story can be found here: https://azure.microsoft.com/en-us/updates/azure-portal-updates-for-classic-portal-users/
I would expect this to be the beginning of pushing companies away from the old environment and towards the new Azure Resource Manager instead.
On Saturday I passed the 70-533 – Implementing Microsoft Azure Infrastructure Solutions.
The exam itself I found challenging, but nowhere near as difficult as I had primed myself for. I have been working with Azure on and off for around two years now and thought it was about time I stamped my LinkedIn page with the 70-533 exam. For clarification and refinement, I used videos on Udemy followed by a ton of reading on the Microsoft pages.
But my main stress came before the exam, with the at-home proctored exam that Pearson Vue are now offering. It started fine, with me needing to look directly at the webcam and then provide my driving license as identification. Then comes the room sweep: you have to move your webcam slowly around the room so that the agent can see what is in it. I was using my laptop, but was forced to unplug my screens (which were not connected to anything at the other end) and to have my phone in the room but not within arm's reach. I also had to move credit cards and other paperwork that I had deliberately stored underneath my printer, behind my screens, out of arm's reach and put them on the floor.
The doors had to be closed, and I had to keep my face in view at all times during the exam. The problem is that my Dell XPS 13's webcam sits in the bottom left of the screen, so the screen has to be tilted back massively for me to fit in the frame (poor design on Dell's part). In all, the experience took me about 25 minutes of constantly going back over areas to prove the room was clear and that no one else was in it. I wish I had braved the Christmas shopper rush to my nearest test center, which is in the heart of my nearest city, right next to the shopping center.
A client I’ve been carrying out some Azure work for over the past few months has a split environment between Azure Resource Manager (ARM) and Azure Service Manager (Classic and/or ASM). Their ADFS infrastructure and System Center servers all currently live in ASM, and there was no direct connection through to ARM, meaning traffic had to take some funky routing via the VPNs through the head office.
The fix is to set up vNet peering through the portal. This is done via the virtual network > Peering > Create.
You’ll need to fill in the following details:
I then got this error:
Failed to add virtual network peering ‘<peering name>’. Error: Subscription <ID> is not registered with NRP.
It was quite difficult to find information on this error message, and initially I thought it was due to a lack of permissions.
However, the following script fixed the issue (note: it can take a bit of time for this to go through, as it registers extensions with your subscription).
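The original script isn't reproduced here, but the usual fix for this error is registering the Microsoft.Network resource provider (the "NRP" in the message) on the subscription; with the era-appropriate AzureRM module it would look something like:

```powershell
# Register the Network Resource Provider on the target subscription
Select-AzureRmSubscription -SubscriptionId "<subscription-id>"
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network

# Registration is asynchronous – check until RegistrationState shows 'Registered'
Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Network |
    Select-Object ProviderNamespace, RegistrationState
```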
Afterwards you should be able to create your vNet peering across subscriptions.
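For reference, the same peering can also be scripted rather than clicked through the portal; a sketch with the AzureRM module, where the resource group, vNet names and the remote vNet's resource ID are all placeholders:

```powershell
# Peer a vNet with a vNet in another subscription by its full resource ID
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-arm" -Name "vnet-arm"
Add-AzureRmVirtualNetworkPeering -Name "arm-to-asm" -VirtualNetwork $vnet `
    -RemoteVirtualNetworkId "/subscriptions/<remote-sub-id>/resourceGroups/rg-asm/providers/Microsoft.Network/virtualNetworks/vnet-asm"
```

A matching peering has to be created from the other side before traffic will flow.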