A common error with Set-AzureVNetGatewayKey

Recently I’ve helped a customer configure a hub-and-spoke topology where they had one VNET at the ‘centre’, configured with a VPN to their on-premises network, which then needed to be connected to multiple ‘satellite’ VNETs using VNET-to-VNET connectivity.

A very good walkthrough of how to configure advanced topologies and multi-hop networks on Azure can be found here

We took a step-by-step approach: we first established cross-premises connectivity using the portal UI, and then started to add the satellite networks one by one.

On the satellite sites we never had any issues, as we could do everything through the UI. Expanding the connectivity on the central network required editing the network configuration XML to link to multiple networks, and after the first two – arguably as we were growing overly confident – we got the following error when trying to set the pre-shared key for the VPN gateway on the central network –

Set-AzureVNetGatewayKey -VNetName CentralVnet -LocalNetworkSiteName SatelliteVnet3 -SharedKey A1B2C3

Set-AzureVNetGatewayKey : BadRequest: The specified local network site name 'SatelliteVnet3' is not valid or could not be found.

It took us a little while to figure out what we were missing, as we didn’t get this every time. It turns out that occasionally we got ahead of ourselves and tried to update the shared key before importing the updated network configuration XML with the added link between the central network and the satellite one. Given that the key is set on the combination of the two, if you try to set it before making the actual link the command, understandably, fails (although the error message could be a bit clearer).
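In other words, the fix is simply a matter of ordering – import the updated network configuration first, then set the key. A sketch of the correct sequence (the file path and site names here are illustrative, not from the actual engagement):

```powershell
# First, import the network configuration XML that adds the new
# <LocalNetworkSiteRef> under the central VNET's
# <Gateway><ConnectionsToLocalNetwork> element.
Set-AzureVNetConfig -ConfigurationPath "C:\config\NetworkConfig.xml"

# Only once the link exists can the pre-shared key be set on it.
Set-AzureVNetGatewayKey -VNetName CentralVnet -LocalNetworkSiteName SatelliteVnet3 -SharedKey A1B2C3
```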

As an aside – we’ve also seen the following error when executing this command –

Set-AzureVNetGatewayKey : An error occurred while sending the request.

This happened when we delayed long enough for the AAD token in the PowerShell session to expire, and we could verify that by trying to execute any other command, such as Get-AzureVNetGatewayKey or even Get-AzureSubscription. Using Add-AzureAccount to obtain a new token solved that one easily enough.
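Because an expired token makes any cmdlet fail in the same way, a quick sanity check distinguishes this from a genuine gateway problem – something along these lines:

```powershell
# Any harmless cmdlet will fail if the AAD token has expired –
# if this errors too, the problem is the session, not the gateway.
Get-AzureSubscription

# Re-authenticate to obtain a fresh token, then retry the original command.
Add-AzureAccount
Set-AzureVNetGatewayKey -VNetName CentralVnet -LocalNetworkSiteName SatelliteVnet3 -SharedKey A1B2C3
```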

Deploying an internal website using Azure Web Roles

The most common use for Web Roles is to host web workloads that are accessible from the public internet, but for enterprises a common requirement is to deploy load-balanced web workloads that are only accessible from within their network (the Azure VNET and, in many cases, from on-premises via ExpressRoute). It turns out that this is quite easy to achieve, but perhaps not well known. Two additions are needed in the cloud service project’s configuration files. Firstly – in the service definition file the web role is likely to have an InputEndpoint; to connect the endpoint to the internal load balancer one can add the loadBalancer attribute –

<Endpoints>
  <InputEndpoint name="Endpoint1" protocol="http" port="80" loadBalancer="MyIntranetILB" />
</Endpoints>

Then, in the service configuration file, one has to link the web role to a subnet (and the corresponding VNET) and provide the details of the load balancer. This is done by adding the following section after the Role element within the ServiceConfiguration element –

<NetworkConfiguration>
  <VirtualNetworkSite name="[vnet name]" />
  <AddressAssignments>
    <InstanceAddress roleName="[role name]">
      <Subnets>
        <Subnet name="[subnet name]" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
  <LoadBalancers>
    <LoadBalancer name="MyIntranetILB">
      <FrontendIPConfiguration type="private" subnet="[subnet name]" staticVirtualNetworkIPAddress="[static IP]" />
    </LoadBalancer>
  </LoadBalancers>
</NetworkConfiguration>

Notice that you can (optionally) set a static IP for the load balancer – this is important as you’re likely to want to configure a DNS entry for it. There is no need to create anything in advance other than the VNET and the subnet – the internal load balancer will be created as part of the deployment. Upon a successful deployment the web role will NOT be accessible via the public internet, and you will have a load-balanced internal IP to access it from the VNET.

One of the questions I’ve been asked is whether the IP assigned to the ILB in the configuration file is registered with the DHCP server, and the answer appears to be yes. When I configured an IP address right at the beginning of the subnet’s range, the web roles and other VMs provisioned later on the network were assigned IP addresses higher than the load balancer’s, so one does not have to worry about IP clashes when configuring an ILB in this fashion.

Windows Azure Private Network behaviour change

I’ve learnt today that IP routing on Windows Azure when a private network (VPN) is configured has changed recently (I’m not quite sure exactly when, but in the last few weeks I suspect) in a way that can be quite dramatic for many –

Previously, as soon as a site-to-site VPN was configured on a virtual network on Windows Azure, all outbound traffic from the network got routed through the VPN.

This surprised me at the time – I assumed that, as the range of IP addresses exposed via the VPN is known, only traffic directed at this range would get routed via the VPN and all other traffic would go directly to the internet. This assumption was proven wrong, as we learnt at the time.

This, I was told by several people, is more secure given that VMs on Azure only have one NIC. It also provided organisations an opportunity to control traffic more tightly, as all traffic got routed through the organisation’s firewall. For example – organisations who needed to present a consistent IP publicly when calling remote systems could control that via their on-premises network configuration, as I previously blogged about here – a post I have now had to correct, see below.

The main downside of this was always that not all traffic was sensitive, and routing everything through the VPN and the on-premises network added latency to the requests and load on the network. For example – a virtual machine on a private network with VPN, calling other Azure services such as SQL Database, the Service Bus or the Caching Service, would see all requests routed to the internal network before going back out to the internet and, potentially, to the same data centre the request originated from.

In conversation today I’ve learnt (and since confirmed, of course!) that this behaviour has changed and that Azure now behaves as I originally expected: only outbound traffic directed at the IP range exposed via the VPN is routed via the VPN, and all other traffic goes straight out through the internet.

This makes perfect sense, but is quite a big change and I’m surprised this wasn’t communicated clearly.

There are downsides – some customers, as I alluded to earlier, enjoyed the extra level of control that routing all traffic via their network provided – be it firewall configuration or control over the IP from which they called external services. This is not possible at the moment, but I’m hoping that in the not too distant future Windows Azure will offer customers a choice.

Cross posted on the Solidsoft blog

Mix-n-Match on Windows Azure

One of the powerful aspects of Windows Azure is that we now have both PaaS and IaaS and that – crucially – the relationship between the two is not one of ‘or’ but rather one of ‘and’, meaning you can mix and match the two (as well as more ‘traditional’, non-cloud, deployments, come to think of it) within one solution.

IaaS is very powerful, because it is an easier step to the cloud for many scenarios – if you have an existing n-tier solution, it is typically easier and faster to deploy it on Azure over Virtual Machines than it is to migrate it to Cloud Services.

PaaS, on the other hand, delivers much more value to the business, largely in what it takes away (managing VMs).

The following picture, which most have seen, I’m sure, in one shape or form, lays down things clearly –


And so – the ability to run both, within a single deployment if necessary, provides a really useful on-ramp to the cloud. Consider a typical n-tier application with a front end, middle tier and a back-end database. The story could be something along these lines –

You take the application as-is and deploy it on Azure using the same tiered approach over Virtual Machines –


Then, when you get the chance, you spend some time and update your front end to a PaaS Web Role –


Next – you upgrade the middle tier to worker roles –


And finally – you migrate the underlying database to a SQL database –


Over this journey you have gradually increased the value you get from your cloud, on your time frames, in your terms.

To enable communication between the tiers over a private network we need to make both the IaaS elements and the PaaS elements part of the same network. Here’s how you do it –

Deploying a PaaS instance on a virtual network

With Virtual Network on Azure the first step is always to define the network itself, and this can be done via the Management Portal using a wizard or by providing a configuration file –


The wizard guides you through the process, which includes providing the IP range you’d like for the network as well as setting up any subnets required. It is also possible to point at a DNS server and to link this to a Private Network – a VPN to a local network. For more details see Create a Virtual Network in Windows Azure.
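The equivalent configuration file follows the classic network configuration schema; a minimal sketch might look something like this (the network and subnet names match my example below, but the address ranges are purely illustrative):

```xml
<NetworkConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <VirtualNetworkSites>
      <VirtualNetworkSite name="mix-n-match" AffinityGroup="[affinity group]">
        <!-- The overall IP range for the network -->
        <AddressSpace>
          <AddressPrefix>10.0.0.0/16</AddressPrefix>
        </AddressSpace>
        <!-- One or more subnets carved out of the address space -->
        <Subnets>
          <Subnet name="FrontEnd">
            <AddressPrefix>10.0.1.0/24</AddressPrefix>
          </Subnet>
        </Subnets>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>
```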

With the network created you can now deploy both Virtual Machines and Cloud Services to it.

Deploying Virtual Machines onto the network is very straightforward – when you run the Virtual Machine creation wizard you are asked where to deploy it, and you can specify a region, an affinity group or a virtual network –


If you’ve selected a network, you are prompted to select which subnet(s) you’d like to deploy it to –


and you’re done – the VM will be deployed to the selected subnet in the selected network and will be assigned an appropriate IP address.

Deploying Cloud Services to a private network was a little less obvious to me – I kept looking in the management portal for ways to supply the network to use when deploying an instance, and completely ignored the most obvious place to look – the deployment configuration file.

Turns out that a network configuration section has been added with the recent SDK (1.7), allowing one to specify the name of the network to deploy to and then, for each role, specify which subnet(s) to connect it to.

For detailed information on this refer to the NetworkConfiguration Schema, but here’s my example –

<ServiceConfiguration serviceName="WindowsAzure1" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*" schemaVersion="2012-05.1.7">
  <Role name="WebRole1">
    <Instances count="1" />
  </Role>
  <NetworkConfiguration>
    <VirtualNetworkSite name="mix-n-match" />
    <AddressAssignments>
      <InstanceAddress roleName="WebRole1">
        <Subnets>
          <Subnet name="FrontEnd" />
        </Subnets>
      </InstanceAddress>
    </AddressAssignments>
  </NetworkConfiguration>
</ServiceConfiguration>

This configuration instructs the platform to place the PaaS instance in the virtual network with the correct subnet, and indeed, when I remote-desktop into the instance, I can confirm that it received a private IP in the correct range for the subnet and – after disabling the firewall on my IaaS virtual machine – I can ping it successfully using its private IP.

It is important to note that the platform places no firewalls between tenants in the private network, but of course VMs may well still have their firewall turned on (the templates we provide do), and so these will have to be configured as appropriate.
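Rather than disabling the firewall entirely, as I did for this quick test, one could open just what’s needed – for example, allowing ICMP echo requests so ping works between tenants (a sketch using the standard Windows firewall CLI, run on each VM):

```shell
REM Allow inbound ICMPv4 echo requests (type 8) so the VM responds to ping
netsh advfirewall firewall add rule name="Allow ICMPv4 ping" protocol=icmpv4:8,any dir=in action=allow
```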

And that’s it – I could easily deploy a whole bunch of machines to my little private network – some are provided as VMs, some are provided as code on Cloud Services, as appropriate, and they all play nicely together…
