Azure Stack Development Kit: Removing Network Restrictions

This process is confirmed working for Azure Stack version 1910.

 

So you've got your hands on an Azure Stack Development Kit (ASDK), hopefully at least matching the spec of the PaaS Edition Dell EMC variant shown below, and you've been testing it for a while now. You've kicked the tyres: fired up some VMs, syndicated content from the marketplace, deployed a Kubernetes cluster from a template, deployed Web and API Apps, and had some fun with Azure Functions.

 

ASDK_Specs.png

 

All of this is awesome and can give you a great idea of how Azure Stack can work for you, but there comes a time when you want to see how it'll integrate with the rest of your corporate estate. One of the design limitations of the ASDK is that it's enclosed in a Software Defined Networking (SDN) boundary, which means the Azure Stack infrastructure and any tenant workloads deployed on it can only be accessed from the ASDK host. Tenant workloads are able to route out to your corporate network, but nothing can talk back in.

 

There's a documented process for allowing VPN access to the ASDK so that multiple people can access the tenant and admin portals from their own machines at the same time, but this doesn't allow access to deployed resources, nor does it allow your other existing server environments to connect to them.

 

There are a few blogs out there dating back to the technical preview days of Azure Stack, but they're either now incomplete or inaccurate, don't work in all environments, or require advanced networking knowledge to follow. The goal of this blog is to provide a method to open up the ASDK environment to deliver the same tenant experience you'd get with a full multi-node Azure Stack deployed in your corporate network.

 

Note: When you deploy an Azure Stack production environment, you have to supply a 'Public VIP' network range which will function as external IPs for services deployed in Azure Stack. This range can either be internal to your corporate network, or a true public IP range. Most enterprises deploy within their corporate network while most service providers deploy with public IPs, to replicate the Azure experience. The output of this process will deliver a similar experience to an Azure Stack deployed in your internal network.

 

The rest of this blog assumes you have already deployed your ASDK and finished all normal post-deployment activities such as registration and deployment of PaaS resource providers.

 

Removing Network Restrictions

 

This process is designed to be non-disruptive to the ASDK environment, and can be fully rolled back without needing a re-deployment.

 

Within the ASDK environment there are two Hyper-V switches: a Public Switch and an SDN Switch.

 

  • The Public Switch is attached to your internal/corporate network, and provides you the ability to RDP to the host to manage the ASDK.
  • The SDN Switch is a Hyper-V 2016 SDN-managed switch which provides all of the networking for the ASDK infrastructure and for any tenant VMs deployed now or in the future.

 

Untitled.png

 

 

The ASDK Host has NICs attached to both Public and SDN switches, and has a NAT set up to allow access outbound to the corporate network and (in a connected scenario) the internet.
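
If you want to see that NAT before making any changes, you can inspect it from an elevated PowerShell window on the host (Get-NetNat is a standard NetNat-module cmdlet; the NAT name will vary by deployment):

```powershell
# List the NAT(s) configured on the ASDK host, showing the internal
# prefix being translated for outbound traffic
Get-NetNat | Format-List Name, InternalIPInterfaceAddressPrefix, Active
```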

 

Rather than make any changes to the host that might be a pain to roll back later, we'll deploy a brand new VM with a second NAT operating in the opposite direction. This makes rollback as simple as decommissioning that VM in the future.

 

On the ASDK Host open up Hyper-V Manager, and deploy a new Windows Server 2016 VM. You can place the VM files in a new folder in the C:\ClusterStorage\Volume1 CSV.

 

NewVM.png

 

 

The VM can be Generation 1 or Generation 2; it doesn't make a difference for our purposes here. I've just used the Gen 1 default as it's consistent with Azure.

 

Set the Startup Memory to at least 2048MB and do not use Dynamic Memory.

 

 

StartupMemory.png

Attach the network to the SdnSwitch.

 

SDNSwitch.png

Click through the Hard Disk options, and then on the Installation Options page, specify a Server 2016 ISO. You'll typically have one on the host already from the ASDK deployment, so just use that.

 

2016ISO.png

Finish the wizard, but do not power on the VM.
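
If you prefer scripting to the wizard, the same VM can be created from an elevated PowerShell window on the host. This is a sketch of the steps above; the VM name, VHD size, and ISO path are assumptions you should adjust to match your environment:

```powershell
# Create the VM on the CSV, attached to the SDN switch, Gen 1, 2GB static memory
New-VM -Name 'AzS-Router1' `
       -Path 'C:\ClusterStorage\Volume1\AzS-Router1' `
       -NewVHDPath 'C:\ClusterStorage\Volume1\AzS-Router1\AzS-Router1.vhdx' `
       -NewVHDSizeBytes 60GB `
       -MemoryStartupBytes 2GB `
       -Generation 1 `
       -SwitchName 'SdnSwitch'

# Make sure Dynamic Memory is off
Set-VM -Name 'AzS-Router1' -StaticMemory

# Attach the Server 2016 ISO (this path is hypothetical - point it at your own copy)
Add-VMDvdDrive -VMName 'AzS-Router1' -Path 'C:\ISOs\WindowsServer2016.iso'
```

As with the wizard, don't power the VM on yet.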

 

While we've attached the VM's NIC to the SDN network, that network is managed by a Server 2016 SDN infrastructure, so by default the VM won't be able to communicate with any other resources attached to it. First we have to make this VM part of that SDN family.

 

In an elevated PowerShell window on your ASDK host, run the following:

 

# Read the VLAN isolation settings from an existing SDN-managed VM (AzS-DC01)
$Isolation = Get-VM -VMName 'AzS-DC01' | Get-VMNetworkAdapter | Get-VMNetworkAdapterIsolation

# Apply the same isolation settings to the new VM's NIC
$VM = Get-VM -VMName 'AzS-Router1'
$VMNetAdapter = $VM | Get-VMNetworkAdapter
$IsolationSettings = @{
    IsolationMode        = 'Vlan'
    AllowUntaggedTraffic = $true
    DefaultIsolationID   = $Isolation.DefaultIsolationID
    MultiTenantStack     = 'Off'
}
$VMNetAdapter | Set-VMNetworkAdapterIsolation @IsolationSettings

 

Set-PortProfileId -ResourceId ([System.Guid]::Empty.ToString()) -VMName $VM.Name -VMNetworkAdapterName $VMNetAdapter.Name

(If Set-PortProfileId isn't available in your session, note that it's a helper function distributed with Microsoft's SDN scripts rather than a built-in cmdlet.)

 

Now that this NIC is part of the SDN infrastructure, we can go ahead and add a second NIC and connect it to the Public Switch.
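
This can also be done from PowerShell on the host. Note that 'PublicSwitch' here is an assumption; use whatever your corporate-facing Hyper-V switch is actually named:

```powershell
# Add a second NIC to the VM and connect it to the corporate-facing switch
Add-VMNetworkAdapter -VMName 'AzS-Router1' -SwitchName 'PublicSwitch' -Name 'Public'
```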

 

PublicSwitch.png

Now you can power on the VM and install the Server 2016 operating system; this VM does not need to be domain joined. Once done, open a console to the VM from Hyper-V Manager.

 

Open the network settings, and rename the NICs to make them easier to identify.

 

NICs.png

Give the SDN NIC the following settings:

 

IP Address: 192.168.200.201

Subnet: 255.255.255.0

Default Gateway: 192.168.200.1

DNS Server: 192.168.200.67

 

The IP Address is an unused IP on the infrastructure range.

The Default Gateway is the IP Address of the ASDK Host, which still handles outbound traffic.

The DNS Server is the IP Address of AzS-DC01, which handles DNS resolution for all Azure Stack services.

 

 

IPSettings1.png

Give the Public Network NIC an unused IP Address on your corporate network. Don't use DHCP for this, as you don't want a default gateway to be set. In my case, my internal network is 192.168.1.0/24, and I've given the same final octet as the SDN NIC so it's easier for me to remember.

 

IPSettings2.png
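
Inside the VM, both NICs can equally be configured with the NetTCPIP cmdlets. A sketch, assuming you renamed the NICs to 'SDN' and 'Public' and that your corporate network is 192.168.1.0/24 as in my example:

```powershell
# SDN NIC: static IP on the infrastructure range, gateway = ASDK host, DNS = AzS-DC01
New-NetIPAddress -InterfaceAlias 'SDN' -IPAddress 192.168.200.201 -PrefixLength 24 -DefaultGateway 192.168.200.1
Set-DnsClientServerAddress -InterfaceAlias 'SDN' -ServerAddresses 192.168.200.67

# Public NIC: static IP on the corporate network, deliberately no default gateway
New-NetIPAddress -InterfaceAlias 'Public' -IPAddress 192.168.1.201 -PrefixLength 24
```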

On the VM, open an elevated PowerShell window, and run the following command:

 

New-NetNAT -Name "NATSwitch" -InternalIPInterfaceAddressPrefix "192.168.1.0/24" -Verbose

 

Where the IP range matches your internal network's subnet settings.

 

While we have a default route set up to the ASDK Host, Azure Stack also uses a Software Load Balancer as part of the SDN infrastructure, AzS-SLB01. In order for everything to work correctly, we need to set up some static routes on the new VM to pass appropriate traffic to the SLB.

 

Run the following on your new VM to add the appropriate static routes:

 

# Route the 192.168.102.x VIP addresses via the SLB (AzS-SLB01, 192.168.200.64)
$range = 2..48
foreach ($r in $range) { route add -p "192.168.102.$r" mask 255.255.255.255 192.168.200.64 }

# Route the 192.168.105.x VIP addresses via the SLB as well
$range = 1..8
foreach ($r in $range) { route add -p "192.168.105.$r" mask 255.255.255.255 192.168.200.64 }
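
If you'd rather stay with native cmdlets than route.exe, the same persistent routes can be added with New-NetRoute. A sketch, assuming the SDN-facing NIC is named 'SDN':

```powershell
# Host routes for the VIP addresses, next hop AzS-SLB01 (192.168.200.64)
foreach ($r in 2..48) {
    New-NetRoute -DestinationPrefix "192.168.102.$r/32" -NextHop 192.168.200.64 -InterfaceAlias 'SDN'
}
foreach ($r in 1..8) {
    New-NetRoute -DestinationPrefix "192.168.105.$r/32" -NextHop 192.168.200.64 -InterfaceAlias 'SDN'
}
```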


That's all the setup on the new VM complete.

 

Next you will need to add appropriate routing to your internal network or clients. How you do this is up to you, however you'll need to set up the following routes:

 

Each of the following ranges needs to use the Public Switch IP of the new VM you deployed as its gateway:

  • 192.168.100.0/24
  • 192.168.101.0/24
  • 192.168.102.0/24
  • 192.168.103.0/24
  • 192.168.104.0/24
  • 192.168.105.0/24
  • 192.168.200.0/24

 

In my case, I configured this on my router as below (click to expand).

 

RouterConfig.png

You will need DNS to be able to resolve entries in the ASDK environment from your corporate network. You can either set up a forwarder from your existing DNS infrastructure to 192.168.200.67 (AzS-DC01), or add 192.168.200.67 as an additional DNS server in your client or server's network settings.
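
If your corporate DNS runs on Windows Server, the forwarder option can be scripted with the DnsServer module. A sketch, assuming you want to forward the whole azurestack.external zone (which covers the default local.azurestack.external addresses):

```powershell
# On your corporate DNS server: forward all azurestack.external lookups to AzS-DC01
Add-DnsServerConditionalForwarderZone -Name 'azurestack.external' -MasterServers 192.168.200.67
```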

 

Finally, on the ASDK Host, open up an MMC and add the Certificates (Local Computer) snap-in.

 

Export the following two certificates, and import them into the Trusted Root Certification Authorities store on any machine you'll be accessing ASDK services from.

  MMCCerts.png

You should now be able to navigate to https://portal.local.azurestack.external from your internal network.

 

 

Portal.png

If you deploy any Azure Stack services, for example an App Service WebApp, you will also be able to access those over your internal network.

AppService.png

Even deployment of an HTTPTrigger Function App from Visual Studio now works the same from your internal network to Azure Stack as it does to Public Azure (click to expand).

 

DeployFunctionFromVS.gif

 

If at any time you want to roll the environment back to the default configuration, simply power off the new VM you deployed.

 

This setup enables the testing of many new scenarios that aren't available out of the box with an ASDK, and can significantly enhance the value of having an Azure Stack Development Kit running in your datacenter, enabling new interoperability, migration, integration, hybrid, and multi-cloud scenarios.