IP addresses in Azure

Every now and again customers bring up the need to configure rules in firewalls (their own or, more typically, 3rd parties’) to accept incoming requests from specific roles on Windows Azure.

The Story On-Premises

Consider your home network – each device connected to it gets a private IP address and is not, as a general rule, reachable from the public internet.

Outgoing traffic from any device gets routed through the home router, which sends the request on the device’s behalf and relays the response. The net result is that the IP address presented to the server as the ‘caller id’ is not the device’s IP but the router’s public IP address, obtained from the Internet Service Provider (ISP).
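This address translation can be pictured as a small table the router keeps. The sketch below is a toy illustration only – the addresses and port numbers are made up – but it shows why the server only ever sees the router’s public IP:

```python
# Toy illustration of source NAT: the router rewrites each outgoing
# packet's source address to its own public IP, remembering the mapping
# so replies can be relayed back to the right device.
# All addresses here are example values, not real ones.

ROUTER_PUBLIC_IP = "203.0.113.10"  # obtained from the ISP (example value)

nat_table = {}   # public_port -> (private_ip, private_port)
next_port = 50000

def translate_outbound(private_ip, private_port):
    """Rewrite an outgoing packet's source as the router would."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return ROUTER_PUBLIC_IP, public_port

# Whatever device sends the request, the remote server sees the router's IP:
src_ip, src_port = translate_outbound("192.168.1.23", 43210)
print(src_ip)  # 203.0.113.10
```

Real NAT happens in the router’s packet-forwarding path, of course; the point is simply that the mapping back to the private address lives only on the router.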

The same largely applies in most private data centres – a data centre will have one or more proxy servers, and these relay traffic to and from the public internet, hiding the devices (servers) behind them.

Companies often rely on these public IP addresses to control who is allowed access into their network through their firewall – when allowing a 3rd party to connect, its public-facing IP address (or range of addresses) is often configured in the firewall to allow access.
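The check a firewall performs here is essentially a membership test against an allow-list of address ranges. A minimal sketch using Python’s standard `ipaddress` module (the CIDR range below is a documentation example, not a real customer range):

```python
import ipaddress

# Hypothetical allow-list: the 3rd party's public-facing address range.
# 203.0.113.0/24 is a reserved documentation range, used here as a stand-in.
ALLOWED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def is_allowed(caller_ip: str) -> bool:
    """Return True if the caller's public IP falls within any allowed range."""
    addr = ipaddress.ip_address(caller_ip)
    return any(addr in net for net in ALLOWED_RANGES)

print(is_allowed("203.0.113.42"))  # True  - inside the allowed /24
print(is_allowed("198.51.100.7"))  # False - outside every allowed range
```

This is why the stability of the public-facing IP matters: if the caller’s address changes, the allow-list silently stops matching.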

The Story on Windows Azure

So – when moving to Azure, customers need to know what the public-facing IP address of their instances will be, and under what circumstances it might change.

I believe there are four possible scenarios to consider –

1. A Web / Worker role

2. A VM in a cloud service

3. A VM on a virtual network

4. A VM on a virtual network with site-to-site VPN

Cloud Services

Azure Cloud Services, of course, sit behind a load balancer, so although each instance has its own internal IP, all instances share a single public IP – the deployment’s assigned Azure VIP – and it is this IP that gets presented to remote servers. This is true whether the service has a single instance or many.

The VIP will remain for the lifetime of the deployment and will survive service updates, but it will not survive if the deployment is deleted – something customers should be aware of.

This model works well when scaling out the deployment, as no single instance’s IP is at play.

Virtual Machines in a cloud service

From an Azure point of view there is very little difference between these and cloud services, so naturally they behave in exactly the same way.

A Cloud Service can include multiple virtual machines, which may or may not share load-balanced endpoints and may or may not serve the same role; regardless – as they exist within the same Cloud Service, they have a single public IP and will all present the same ‘caller id’ to the outside world.

Virtual Machines on a Virtual Network

The theme continues, to some extent, within a Virtual Network – a Virtual Machine deployed to a virtual network still gets a Cloud Service, and within that cloud service the behaviour is as above: the cloud service’s VIP is presented, and multiple VMs within the cloud service share the same public IP.

The difference, of course, is that multiple cloud services are likely to be deployed on the Virtual Network. It is also important to remember that network devices such as firewalls, proxy servers and internet gateways are all hidden away by some ‘Azure magic’, and it is always the relevant Cloud Service’s IP that gets presented.

This does mean that when additional VMs are added in their own Cloud Services, additional potential IPs for outbound access are introduced, and these will need to be added to any firewall configuration.

Scaling out within existing Cloud Services, however, has no bearing on the presented IP.

Virtual Machines on a Virtual Network with Site-to-Site VPN

The story does change completely when a VPN is introduced.

For obvious reasons (or so I’m told), when a VPN is configured all outbound traffic from the Virtual Network is routed through the VPN. This provides greater control over the security of such calls, as all traffic can be routed through the corporate firewall. It is also, of course, what enables access to on-premises addresses.

It also means that the on-premises network configuration determines which IP gets presented as the caller IP in this case – it may be the proxy server or firewall IP, or it may be the Virtual Machine’s public IP if it is treated as a DMZ server.

With a VPN in place, the Virtual Machines on Azure should be treated more or less exactly like any other virtual server on the corporate network.

Update: It turns out that the Windows Azure behaviour when a VPN is configured has changed over the last few weeks, and the story is now a lot more consistent with the previous scenarios, which is a good thing! –

With the recent change, outgoing traffic bound for the address range exposed via the VPN is routed through it, but all other outbound traffic goes out to the internet directly, without routing through the VPN. This means that for that traffic the VPN makes no difference, and the behaviour is consistent with the other scenarios – it is the cloud service’s IP that gets presented.
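The routing decision described above can be sketched as a simple longest-prefix-style check: is the destination inside the range advertised over the VPN, or not? The address ranges below are examples I have picked for illustration, not anything Azure-specific:

```python
import ipaddress

# Sketch of the post-change routing behaviour described above.
# Assumed example: the on-premises range advertised over the VPN is 10.0.0.0/8.
VPN_RANGES = [ipaddress.ip_network("10.0.0.0/8")]

def route_for(destination: str) -> str:
    """Decide, per destination, whether traffic goes via the VPN tunnel."""
    addr = ipaddress.ip_address(destination)
    if any(addr in net for net in VPN_RANGES):
        return "vpn"       # caller IP determined by on-premises config
    return "internet"      # caller IP is the cloud service's VIP

print(route_for("10.1.2.3"))      # vpn
print(route_for("198.51.100.9"))  # internet
```

In other words, only traffic aimed at the on-premises range ever sees the on-premises network path; everything else presents the cloud service’s VIP as before.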

Cross posted on the Solidsoft Blog
