IP addresses in Azure

Every now and again customers bring up the need to configure rules in firewalls (their own or, more typically, 3rd parties’) to allow incoming requests from specific roles on Windows Azure.

The Story On-Premises

Consider your home network – each device connected to it gets a private IP address and is not, as a general rule, reachable from the public internet.

Outgoing traffic from any device gets routed through the home router, which sends the request on the device’s behalf and relays the response. The net result is that the IP address presented to the server as the ‘caller id’ is not the device’s IP but the router’s public IP address, obtained from the Internet Service Provider (ISP).

The same largely applies in most private data centres – a data centre will have one or more proxy servers, and these will relay traffic to and from the public internet, hiding the devices (servers) behind them.

Companies often rely on these public IP addresses to control who’s allowed access into their network through their firewall – when allowing a 3rd party to connect, its public-facing IP address (or range of addresses) is often configured in the firewall to allow access.
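By way of illustration – what such a firewall rule amounts to, as a minimal Python sketch (the address ranges are invented for the example):

```python
import ipaddress

# Hypothetical allow-list of 3rd-party source ranges, as a firewall admin
# might configure them (addresses invented for illustration)
ALLOWED_SOURCES = [
    ipaddress.ip_network("203.0.113.0/24"),    # partner A's public range
    ipaddress.ip_network("198.51.100.42/32"),  # partner B's single address
]

def is_allowed(caller_ip: str) -> bool:
    """Return True if the caller's public IP falls in an allowed range."""
    ip = ipaddress.ip_address(caller_ip)
    return any(ip in network for network in ALLOWED_SOURCES)

print(is_allowed("203.0.113.7"))  # True  - inside partner A's range
print(is_allowed("192.0.2.1"))    # False - not on the allow-list
```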

The Story on Windows Azure

So – when moving to Azure, customers need to know what the public-facing IP address of their instances would be, and under what circumstances it might change.

I believe there are four possible scenarios to consider –

1. A Web / Worker role

2. A VM in a cloud service

3. A VM on a virtual network

4. A VM on a virtual network with site-to-site VPN

Cloud Services

Azure Cloud Services, of course, sit behind a load balancer, so although each instance has its own internal IP, all instances share a single public IP – the assigned Azure VIP – and it is this IP that gets presented to remote servers. This is true for a single instance or multiple instances within the same service.
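An easy way to see this in practice is to make an outbound call from each instance to a service that echoes back the caller’s address – every instance should report the same VIP. A minimal sketch, using httpbin.org purely as an example echo endpoint (any ‘what is my IP’ service would do):

```python
import json
import urllib.request

# Ask an external echo service which source IP it saw for our request.
# Run this from two different role instances in the same cloud service -
# both should report the same address: the deployment's VIP.
def outbound_ip(echo_url: str = "https://httpbin.org/ip") -> str:
    with urllib.request.urlopen(echo_url) as response:
        return json.loads(response.read().decode("utf-8"))["origin"]

print("This instance presents itself as:", outbound_ip())
```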

The VIP will remain for the lifetime of the deployment and will survive service updates, but it will not survive deletion of the deployment – something customers should be aware of.

This model works well when scaling out the deployment, as no single instance’s IP is at play.

Virtual Machines in a Cloud Service

From an Azure point of view there’s very little difference between these and web/worker roles, and so naturally they behave in exactly the same way.

A Cloud Service can include multiple virtual machines, which may or may not share load-balanced endpoints and may or may not serve the same role; regardless – as they exist within the same Cloud Service, they have a single public IP and will all present the same ‘caller id’ to the outside world.

Virtual Machines on a Virtual Network

The theme continues within a Virtual Network, to some extent – a Virtual Machine deployed to a virtual network still gets a Cloud Service, and within that cloud service the behaviour is as above – the cloud service’s VIP is presented, and multiple VMs within the cloud service share the same public IP.

The difference, of course, is that multiple cloud services are likely to be deployed on the Virtual Network. It is important to remember that any network devices such as firewalls, proxy servers and internet gateways are all hidden away using some ‘Azure magic’ – it is always the relevant Cloud Service’s VIP that gets presented.

This does mean that when additional VMs are added, each with its own Cloud Service, additional potential IPs for outbound access are introduced, which will need to be added to any firewall configuration.

Scaling out within existing Cloud Services, however, will have no bearing.

Virtual Machines on a Virtual Network with Site-to-Site VPN

The story does change completely when a VPN is introduced.

For obvious reasons (or so I’m told), when a VPN is configured all outbound traffic from the Virtual Network is routed through the VPN. This provides greater control over the security of such calls, as all traffic can be routed through the corporate firewall. It is also, of course, what enables access to on-premises addresses.

It also means that the on-premises network configuration determines which IP will get presented as the caller IP in this case – it may be the proxy server or firewall IP, or it may be the Virtual Machine’s public IP if it is treated as a DMZ server.

With a VPN in place, Virtual Machines on Azure should be treated much like any other virtual server on the corporate network.

Update: It turns out that the Windows Azure behaviour when a VPN is configured has changed over the last few weeks, and the story is now a lot more consistent with the previous scenarios – which is a good thing!

With the recent change, outgoing traffic bound for the address range exposed via the VPN is routed through it, but all other outbound traffic goes out to the internet directly, without routing through the VPN. This means that for that traffic the VPN makes no difference and the behaviour is consistent with the other scenarios – it is the cloud service’s VIP that gets presented.
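To make the routing rule concrete – a minimal sketch of the decision, with an invented on-premises address range standing in for whatever range the VPN actually exposes:

```python
import ipaddress

# Invented example: the address range exposed via the site-to-site VPN
VPN_RANGE = ipaddress.ip_network("10.0.0.0/16")

def route_for(destination: str) -> str:
    """Mimic the (post-change) outbound routing decision on the VNet."""
    if ipaddress.ip_address(destination) in VPN_RANGE:
        return "via VPN - the on-premises network decides the presented IP"
    return "directly to the internet - the cloud service's VIP is presented"

print(route_for("10.0.1.5"))       # via VPN
print(route_for("93.184.216.34"))  # directly to the internet
```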

Cross-posted on the Solidsoft Blog

Reflections on BizTalk Services

I had the pleasure of spending some more time this week with some integration legends – Richard Seroter, Michael Stephenson, Steef-Jan Wiggers, Sam Vanhoutte and Saravana Kumar amongst others.

When my dear wife asked me, as I set off for our dinner, whether we’d spend all evening talking about BizTalk I was adamant we would not. I was right of course – we only talked about integration for around 70% of the time!

Anyhow – discussion flowed on many topics and there’s much to reflect on; the first thing I want to go back to is BizTalk Services.

These days I’m asked at least once a week, if not twice, to position BizTalk vs. Service Bus vs. BizTalk Services. This very much reminds me of the early days of Windows Server AppFabric. There’s a lot of confusion out there as to how these all fit together and what the direction is – or, in short, is BizTalk dead? 🙂

I always thought that the story from Microsoft made reasonable sense, but that it was a difficult one to communicate. Following our dinner, and Sam’s talk at the event the following day, I think I’ve managed to distil this a little bit –

Thinking about BizTalk Services’ capabilities (ignoring tooling for the moment), I think there are three big-ticket items missing; the first two are –

1. Better routing – ‘first match’ routing is too basic. A more comprehensive routing solution is needed, one which allows distributing messages to n endpoints, ideally allowing them to be added and removed outside of code. Sound familiar? (See the sketch after this list.)

2. Persistence – BizTalk Services is a lightweight solution. It takes a message, performs some mediation and delivers it to a destination. It does not provide any persistence, and so it provides no ability to handle exceptions – there are no retries, no suspended queue, no backup transport, etc.
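To illustrate the distinction in point 1 – a minimal, BizTalk-free sketch of the two routing styles (the predicates and endpoints are invented for the example):

```python
# Invented example: routes pair a predicate with a destination endpoint
routes = [
    (lambda msg: msg["type"] == "order", "https://example.org/orders"),
    (lambda msg: msg["value"] > 1000,    "https://example.org/audit"),
]

def route_first_match(msg):
    """'First match' routing: the message goes to one endpoint at most."""
    for predicate, endpoint in routes:
        if predicate(msg):
            return [endpoint]
    return []

def route_all_matches(msg):
    """Fan-out routing: the message goes to every matching endpoint."""
    return [endpoint for predicate, endpoint in routes if predicate(msg)]

msg = {"type": "order", "value": 5000}
print(route_first_match(msg))  # ['https://example.org/orders']
print(route_all_matches(msg))  # both endpoints match
```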

Thinking about these two, it is tempting to conclude that both could be mitigated by finally adding integration with the Azure Service Bus on the receive side of BizTalk Services. The Service Bus obviously has persistence, and a pretty good routing capability using Topics and Subscriptions. Is that the future answer?

Not quite – the Service Bus is passive, with a very thin administration layer. To handle the configuration of the routing, and certainly to handle the end-to-end scenarios with retries and the like, one would have to build a solution on top of these two services – not to mention technical logging/monitoring and business activity monitoring, if these were deemed important.
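To give a flavour of what ‘building on top’ means in practice – a minimal sketch of the kind of retry-then-suspend logic one would end up writing (everything here is invented for illustration; it is not BizTalk Services or Service Bus code):

```python
import time

# A stand-in for BizTalk's suspended queue: failed messages land here
# for an operator to inspect and resume later.
suspended_queue = []

def deliver_with_retries(send, message, max_attempts=3, backoff_seconds=5):
    """Try to deliver `message` via `send`; suspend it if all attempts fail."""
    for attempt in range(1, max_attempts + 1):
        try:
            send(message)
            return True
        except Exception as error:
            print(f"attempt {attempt} failed: {error}")
            if attempt < max_attempts:
                time.sleep(backoff_seconds)
    suspended_queue.append(message)  # out of retries: park for an operator
    return False
```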

Which leads me to number 3 – error handling, monitoring and administration.

Hold on – we’re building BizTalk all over again, aren’t we?

I truly don’t think that this is Microsoft’s intention. I equally don’t think that BizTalk Services is a bad direction; I just think the two serve wholly different purposes, at least for the foreseeable future –

BizTalk Server is a full-blown integration platform / middleware / SOA / ESB / all that jazz. ’nough said.

To me – BizTalk Services is emerging as a better way to connect an application with the outside world. But it’s probably still best viewed as a part of a specific solution rather than a generic messaging platform.

If you like – it’s a better way to do point-to-point integration.

Viewed this way, it is the overall solution that takes responsibility for end-to-end message delivery, including monitoring and error handling. BizTalk Services merely provides a ‘bridge’ (that term finally makes some sense!) between the application and the outside world, exposing easier ways(?) to deliver the mediation – enrichment/transformation/adaptation.
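Seen that way, a ‘bridge’ is little more than a short mediation pipeline. A minimal sketch of the concept only – the stages are invented, and this is not BizTalk Services code:

```python
from datetime import datetime, timezone

def enrich(msg):
    """Stamp the message with metadata, e.g. a receive timestamp."""
    msg["received_utc"] = datetime.now(timezone.utc).isoformat()
    return msg

def transform(msg):
    """Reshape the payload into the destination's expected format."""
    return {"OrderId": msg["id"], "Total": msg["value"],
            "ReceivedUtc": msg["received_utc"]}

def deliver(msg):
    """Hand the message to the destination (a print stands in here)."""
    print("delivering:", msg)

def bridge(msg):
    """Run the message through the mediation stages in order."""
    deliver(transform(enrich(msg)))

bridge({"id": 42, "value": 99.50})
```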

Sure – there’s still some way to go for BizTalk Services, particularly around the tooling, and certainly there are many more features that would be very nice to have, but when I think about it this way I am less annoyed that there’s no built-in integration with the Service Bus, or that there’s not much you can do by way of routing, etc.

If I’m deploying a web application to the cloud and need lightweight integration with a known third party, or indeed an on-premises system, I might consider BizTalk Services for the job – if I don’t need, or can’t afford, a full-blown integration platform, that is.

And of course – if BizTalk Services is an extension to a solution, providing mediation through the cloud, the solution itself could be BizTalk Server, either on-premises or in the cloud, too. At Solidsoft we see quite a few opportunities these days for using Windows Azure Service Bus in conjunction with BizTalk. BizTalk Services fits the very same pattern, building on BizTalk’s persistence, configuration, error management, logging, etc.

Final note – in many respects this is on par with my thinking on lightweight ESBs such as nServiceBus and the like – they are very good frameworks/platforms (well, some are!) but they are lightweight, and are typically suited as a component in a particular solution which will wrap them nicely.
