Hide-and-seek with Cloud Services containers for Virtual Machines on Azure

From several conversations I had recently, it transpired that there's some confusion around the relevance of Cloud Services when using Virtual Machines on Azure.
Initially this surprised me, but then it was pointed out to me that, in fact, the Windows Azure Management Portal does a very good job of fostering this confusion.

I’ll get to why in a moment but first let me start with this –

The simplest way to think about how a Virtual Machine is deployed to Azure is the good-old VM Role –

Just as the VM Role used to be, a Windows Azure Virtual Machine is deployed as a role with a single instance within a cloud service. This is quite apparent when you look at it in the old portal – here's an example of a VM I created using the standard quick-create, as seen in the old portal –

[screenshot]

Ok – there’s plenty behind the scenes that the platform is doing differently when managing VMs vs. Web or Worker roles, but from a deployment model point of view, a cloud service is created with a single instance of a role which happens to be a Virtual Machine.

So – why is the portal confusing? Well – because, in what I suspect was an attempt to simplify things that ended up making them rather more confusing, when you deploy a VM the management portal chooses to hide the cloud service's existence –

I'm guessing that the thinking was that with a single VM the cloud service is rather irrelevant – conceptually, the role and the instance are one, and all the management operations and monitoring are done at the instance level – and so there was no real need to surface the fact that a cloud service exists.

Unfortunately, things start to get more confusing as soon as you delete your VM, because at some point during the delete operation the corresponding Cloud Service will suddenly appear and, when the delete operation eventually completes, it will be left empty –

[screenshot]

It appears largely for one reason – so that it can be deleted; otherwise a phantom service would remain in the subscription with no way to manage it through the UI.
BTW – the API and PowerShell (and some dialogues, as you will see shortly) seem to always reveal the real details, and the cloud service that is clearly visible in the old portal is also fully accessible through these. In fact, the description given to the cloud service says it all –

[screenshot]
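As a quick aside – the 'hidden' cloud service is plainly visible from PowerShell too. A minimal sketch, assuming the Azure PowerShell cmdlets are installed and the subscription is configured; the service name below is hypothetical –

```powershell
# List all cloud services in the subscription - the implicitly created
# container for the VM appears here even while the portal hides it
Get-AzureService | Select-Object ServiceName, Description

# Inspect the production deployment of that cloud service - it holds
# a single role with a single instance: our VM
Get-AzureDeployment -ServiceName 'myvm-service' -Slot Production
```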

So – why doesn't the platform, which had kindly created this cloud service for me, also kindly remove it? Well – I suspect this is because it cannot assume that the container will only ever contain the original VM. How come? Let's look at another case in which the cloud service suddenly appears –

If you deploy a second VM from the gallery, and choose to connect it to an existing virtual machine –

[screenshot]

(notice how the non-existent cloud service suddenly makes an appearance?)

What you are actually requesting the platform to do is deploy a new instance, of a new role, into the same cloud service, and that's a whole different story! Now not only does the cloud service have a meaning greater than either VM definition, but deleting the first VM clearly cannot delete the cloud service, and so, shortly after requesting the second VM, the cloud service will make its formal appearance in the portal –

[screenshot]

and again – looking at how things appear in the old portal can help give a somewhat clearer behind-the-scenes view – one service, two roles, one instance of each –

[screenshot]

In the new portal, this is reflected like this (which isn’t telling the full story) –

[screenshot]

Interestingly, although not wholly surprisingly, removing the second VM does not re-hide the cloud service, and so it is possible to 'see in the wild' a Cloud Service with a single VM in it –

[screenshot]

And so – given this seemingly erratic behaviour, even if there is some method behind the madness, it is not surprising that people get confused.
Personally, I would have much preferred things to be straightforward, black and white – you create a VM, you get the corresponding Cloud Service. Simples.

But there you go – nobody asked me, so at least I hope this helps shed some light on the subject.

Side note – all of this also helps explain the very long and repetitive names given to disks – the naming convention is [cloud service]-[VM name]-[disk number]-[timestamp].

Backing up a Windows Azure Virtual Machine using PowerShell

Back in June I wrote about how to use snapshots to backup and restore VHDs used by Windows Azure Virtual Machines.

This approach does have the limitation of only copying the state persisted to disk, so I would not recommend using it on running VMs; but for cases when the VM can be shut down temporarily it can be a handy approach.

This could be particularly useful for dev and test scenarios when the VMs can be turned off and when users of those VMs might want to be able to go back to various known states.

Of course, doing the whole process manually is cumbersome, time consuming and error prone, so scripting is needed, and in this post I'll show what it takes to do this using PowerShell.

‘Backing up’ a virtual machine using snapshots

The first job is to turn off the virtual machine we want to back up –

Stop-AzureVM -ServiceName $cloudSvcName -Name $vmname

With the VM stopped it is time to take the snapshot. Unfortunately the Azure PowerShell Cmdlets do not, currently, support snapshot operations on Blobs, but thankfully Cerebrata have produced a great set of Cmdlets for Azure Management.

Taking a snapshot of a Blob using the Cerebrata Cmdlets is very easy –

Checkpoint-Blob -BlobUrl $blobUrl -AccountName $accountName -AccountKey $accountKey

It is worth noting that this command returns the URL of the snapshot, which might be useful to keep for later use. Also worth noting is that it is possible to add metadata to the snapshot, in the form of a collection of key-value pairs, which could be used to select snapshots later. Finally, it is useful to know that this command is pretty much instantaneous, so whilst there's some downtime required for shutting down and starting up the machine, taking the snapshot itself takes no time at all.

With the snapshot taken it is now time to start our VM again –

Start-AzureVM -ServiceName $cloudSvcName -Name $vmname

And so, with a very short PowerShell script, one can create an effective backup of a virtual machine, which will be stored with the Blob. It is worth noting again that this is only reliable when the machine is stopped (turned off) before the snapshot is taken. It is also worth noting that machines may have more than one disk, and any additional disks need to be handled too (in the same way).
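Putting the steps above together, the whole backup looks something like this – a sketch, assuming the Azure and Cerebrata cmdlets are loaded and that the variable values (service, VM, storage account and blob names are all hypothetical here) are replaced with your own –

```powershell
# Hypothetical values - substitute your own
$cloudSvcName = 'mycloudservice'
$vmname       = 'myvm'
$accountName  = 'mystorageaccount'
$accountKey   = '<storage account key>'
$blobUrl      = "https://$accountName.blob.core.windows.net/vhds/myvm-disk.vhd"

# 1. Stop the VM so the on-disk state is consistent
Stop-AzureVM -ServiceName $cloudSvcName -Name $vmname

# 2. Snapshot the VHD blob (Cerebrata cmdlet); keep the returned
#    snapshot URL - it identifies this backup for a later restore
$snapshotUrl = Checkpoint-Blob -BlobUrl $blobUrl -AccountName $accountName -AccountKey $accountKey

# 3. Start the VM again - total downtime is just the stop/start cycle
Start-AzureVM -ServiceName $cloudSvcName -Name $vmname
```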

‘Restoring’ a virtual machine using snapshots

Restoring a VM is a little more involved – when a disk is attached to a VM, a lease is created on the Blob storing the disk, effectively locking it for everyone else.

That means that one has to remove the VM before a snapshot can be 'promoted' onto a VHD (effectively overwriting the blob with the snapshot) – stopping the VM would not be enough. So, to start, we do have to stop the VM again, as before –

Stop-AzureVM -ServiceName $cloudSvcName -Name $vmname

but in order to update the OS disk we also need to remove the VM completely.

Before we do that, given that we’re going to bring it back online shortly, it would be useful to export all of its configuration first –

Export-AzureVM -ServiceName $cloudSvcName  -Name $vmname  -Path 'c:\tmp\export.xml'

and with the VM configuration exported, we can go ahead and remove it

Remove-AzureVM -ServiceName $cloudSvcName -Name $vmname

Now that the machine is gone, its lease on the blob is gone too, which means we should be able to use Cerebrata’s Copy-Blob Cmdlet to copy the snapshot over the Blob

Copy-Blob -BlobUrl $blobUrl -TargetBlobContainerName $container -AccountName $accountName -AccountKey $accountKey

Like the Checkpoint-Blob command this is pretty much instantaneous.

Now that we have the updated disk in place, it’s time to bring our VM back to life, first import –

Import-AzureVM -Path 'c:\tmp\export.xml' | New-AzureVM -ServiceName $cloudSvcName -Location 'North Europe'

and that is it – when the VM creation, using the imported configuration, completes, we have our VM back online, this time running off the restored VHD state.
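Again, putting the restore steps together – a sketch under the same assumptions as before (hypothetical names; the Copy-Blob parameter names are Cerebrata's, and here the source is the snapshot URL kept from the backup) –

```powershell
# Hypothetical values - substitute your own
$cloudSvcName = 'mycloudservice'
$vmname       = 'myvm'
$accountName  = 'mystorageaccount'
$accountKey   = '<storage account key>'
$container    = 'vhds'
$snapshotUrl  = '<snapshot URL kept from the backup>'

# 1. Stop the VM, save its configuration, then remove it to release
#    the lease on the underlying blob
Stop-AzureVM -ServiceName $cloudSvcName -Name $vmname
Export-AzureVM -ServiceName $cloudSvcName -Name $vmname -Path 'c:\tmp\export.xml'
Remove-AzureVM -ServiceName $cloudSvcName -Name $vmname

# 2. 'Promote' the snapshot by copying it over the base blob (Cerebrata cmdlet)
Copy-Blob -BlobUrl $snapshotUrl -TargetBlobContainerName $container -AccountName $accountName -AccountKey $accountKey

# 3. Re-create the VM from the exported configuration
Import-AzureVM -Path 'c:\tmp\export.xml' | New-AzureVM -ServiceName $cloudSvcName -Location 'North Europe'
```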

Final note – in this post I just wanted to show the key operations one would use to create a useful script. The actual process will vary depending on the specific requirements and, of course, it is possible to make quite an elaborate script with PowerShell – for example, it is possible to enumerate all the VMs in a particular cloud service and, for each VM, query the OS and data disks and then perform all of the operations above over these enumerations. The sky is pretty much the limit! 🙂
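The enumeration could be sketched along these lines (assuming the same cmdlets as above; the MediaLink property holds the blob URL behind each disk) –

```powershell
# Enumerate every VM in a cloud service and list the blobs behind its
# disks - each MediaLink is a blob URL that could be fed to Checkpoint-Blob
Get-AzureVM -ServiceName $cloudSvcName | ForEach-Object {
    $vm = $_
    # The OS disk (one per VM)
    ($vm | Get-AzureOSDisk).MediaLink
    # Zero or more data disks
    $vm | Get-AzureDataDisk | ForEach-Object { $_.MediaLink }
}
```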

Uploading a VHD to Windows Azure

Ok – this is clearly one of those note-to-self posts; I'm sure it's been blogged about elaborately and that I have nothing to add, but I want to be able to find it quickly, so if anybody else finds this useful it's simply a nice-to-have!

One of the best ways to upload a VHD to Windows Azure is to use csupload (accessible via the 'Windows Azure Command Prompt' or in "%ProgramFiles%\Microsoft SDKs\Windows Azure\.NET SDK\2012-06\").

This makes sure the VHD is fixed size, and it compresses the data, so transfer times were around a sixth of what I could achieve using other storage tools.

The commands I’ve used to upload –

csupload Set-Connection "SubscriptionID=<subscription ID>;CertificateThumbprint=<management cert thumbprint>;ServiceManagementEndpoint=https://management.core.windows.net/"

csupload Add-Disk -Destination "https://<storage_account_name>.blob.core.windows.net/<container>/<vhd_name>" -Label "<vhd_name>" -LiteralPath "<local_path_to_vhd>" -OS Windows

Taking VHD snapshot with Windows Azure Virtual Machines

One very useful capability of virtualisation (in general) is the ability to take snapshots of a disk at a point in time, allowing restoring the machine to a known state quickly and easily.

With Windows Azure Virtual Machines one does not have access to the hypervisor (for obvious reasons!), so how could this be achieved?

The answer is – by taking a snapshot of the underlying blob on which the VHD is stored.

To demonstrate, I created a brand new machine from the gallery – I used the SQL template –

[screenshot]

I then went ahead and created a quick database and added several rows to it through SQL Server Management Studio –

[screenshot]

At this point, using Cerebrata’s Cloud Storage Studio in my case, I took a snapshot of the blob containing the VHD

[screenshot]

With the snapshot taken I went ahead and removed the table, only to create a new one with slightly different structure and values, to make sure the state of the machine had changed since I took the snapshot –

[screenshot]

Now I wanted to restore it to its previous state.

I could choose to do that to a brand new machine, or to overwrite the existing machine; the latter would of course require that I first remove the machine (and disk) from the management portal, so that the lease on the underlying blob would be released, making it writable – and that's what I've done. If I wanted to create a new machine, I would have used the Copy Blob capability to copy the snapshot to a new blob, making it writable (snapshots are read-only), and then create a disk and machine out of that, alongside my existing machine.

In my case I went on to delete the Virtual Machine and the related disk and, using Storage Studio again, I 'promoted' the snapshot – this simply copies the snapshot back over the original blob.

[screenshot]

With this done, I now had the older snapshot as the current blob, and it was time to re-create the disk…

[screenshot]

…and virtual machine –

[screenshot]

[screenshot]

…and sure enough, once the machine had finished starting, I connected to it and looked at the databases using SQL Management Studio again, which contained the original database and values, as expected –

[screenshot]

Quick update – I have added a post about how to do this from PowerShell, read it here

Mix-n-Match on Windows Azure

One of the powerful aspects of Windows Azure is that we now have both PaaS and IaaS and that – crucially – the relationship between the two is not that of an 'or' but rather one of an 'and', meaning you can mix and match the two (as well as more 'traditional', non-cloud, deployments, come to think of it) within one solution.

IaaS is very powerful, because it is an easier step to the cloud for many scenarios – if you have an existing n-tier solution, it is typically easier and faster to deploy it on Azure over Virtual Machines than it is to migrate it to Cloud Services.

PaaS, on the other hand, delivers much more value to the business, largely in what it takes away (managing VMs).

The following picture, which most have seen, I’m sure, in one shape or form, lays down things clearly –

[screenshot]

And so – the ability to run both, within a single deployment if necessary, provides a really useful on-ramp to the cloud. Consider a typical n-tier application with a front end, middle tier and a back-end database; the story could go something along these lines –

You take the application as-is and deploy it on Azure using the same tiered approach over VM roles –

[screenshot]

Then, when you get the chance, you spend some time and update your front end to a PaaS Web Role –

[screenshot]

Next – you upgrade the middle tier to worker roles –

[screenshot]

And finally – you migrate the underlying database to a SQL database –

[screenshot]

Over this journey you have gradually increased the value you get from your cloud, on your time frames, in your terms.

To enable communication between the tiers over a private network, we need to make both the IaaS elements and the PaaS elements part of the same virtual network; here's how you do it –

Deploying a PaaS instance on a virtual network

With Virtual Network on Azure the first step is always to define the network itself, and this can be done via the Management Portal using a wizard or by providing a configuration file –

[screenshot]

The wizard guides you through the process, which includes providing the IP range you'd like for the network as well as setting up any subnets as required. It is also possible to point at a DNS server and to link this to a private network – a VPN to a local network. For more details see Create a Virtual Network in Windows Azure.
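For reference, here's a sketch of roughly what such a network configuration file looks like for this era of the platform. The affinity group name is hypothetical, and the address prefixes are simply chosen to be consistent with the 10.1.1.x / 10.1.2.x addresses seen later in this post –

```xml
<NetworkConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <VirtualNetworkSites>
      <!-- the virtual network used throughout this post -->
      <VirtualNetworkSite name="mix-n-match" AffinityGroup="my-affinity-group">
        <AddressSpace>
          <AddressPrefix>10.1.0.0/16</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="FrontEnd">
            <AddressPrefix>10.1.1.0/24</AddressPrefix>
          </Subnet>
          <Subnet name="BackEnd">
            <AddressPrefix>10.1.2.0/24</AddressPrefix>
          </Subnet>
        </Subnets>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>
```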

With the network created, you can now deploy both Virtual Machines and Cloud Services to it.

Deploying Virtual Machines onto the network is very straightforward – when you run the Virtual Machine creation wizard you are asked where to deploy it, and you can specify a region, an affinity group or a virtual network –

[screenshot]

If you’ve selected a network, you are prompted to select which subnet(s) you’d like to deploy it to –

[screenshot]

and you're done – the VM will be deployed to the selected subnet in the selected network, and will be assigned an appropriate IP address.

Deploying Cloud Services to a private network was a little less obvious to me – I kept looking at the management portal for ways to supply the network to use when deploying an instance, and completely ignored the most obvious place to look – the deployment configuration file.

It turns out that a network configuration section was added in the recent SDK (1.7), allowing one to specify the name of the network to deploy to and then, for each role, which subnet(s) to connect it to.

For detailed information on this refer to the NetworkConfiguration Schema, but here’s my example –

<ServiceConfiguration serviceName="WindowsAzure1" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*" schemaVersion="2012-05.1.7">
  <Role name="WebRole1">
    <Instances count="1" />
    ...
  </Role>
  <NetworkConfiguration>
    <VirtualNetworkSite name="mix-n-match" />
    <AddressAssignments>
      <InstanceAddress roleName="WebRole1">
        <Subnets>
          <Subnet name="FrontEnd" />
        </Subnets>
      </InstanceAddress>
    </AddressAssignments>
  </NetworkConfiguration>
</ServiceConfiguration>

This configuration instructs the platform to place the PaaS instance in the virtual network with the correct subnet, and indeed, when I remote desktop into the instance, I can confirm that it had received a private IP in the correct range for the subnet (10.1.1.4, in my case) and – after disabling the firewall on my IaaS virtual machine – I can ping it successfully using its private IP (10.1.2.4).

It is important to note that the platform places no firewalls between tenants in the virtual network, but of course the VMs may well still have their own firewall turned on (the templates we provide do), and so these will have to be configured as appropriate.
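For example, rather than disabling the whole firewall on the IaaS VM as I did above, a narrower option is to allow just inbound ping. A sketch, run inside the VM from an elevated prompt, using the built-in netsh tool –

```powershell
# Allow inbound ICMPv4 echo requests (ping) through Windows Firewall,
# rather than turning the firewall off entirely
netsh advfirewall firewall add rule name="Allow inbound ping" `
    protocol=icmpv4:8,any dir=in action=allow
```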

And that’s it – I could easily deploy a whole bunch of machines to my little private network – some are provided as VMs, some are provided as code on Cloud Services, as appropriate, and they all play nicely together…

Connecting to SQL Server on an Azure Virtual Machine

Not surprisingly, one of the first things I did when I got access to the new Virtual Machines capability on Windows Azure was create a VM with SQL Server 2012. I used the gallery image and was up and running in minutes.

The next logical thing was to remote desktop to the machine and play around, which I did, and I'm glad to report it was boring – everything was exactly as I expected it to be.

Next, just for fun, I wanted to see whether I could connect to the database engine from my laptop; I knew I wouldn't be able to use Windows Authentication, so the first thing to do was to create a SQL login on the server and make it an administrator. Standard stuff.

I was now ready to connect, so I opened Management Studio on my laptop and tried to connect to yossiiaas.cloudapp.net (the not-so-imaginative name I gave my instance), using the SQL login I had created –

[screenshot]

This sent management studio thinking for a while before coming up with the following error –

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)

Hmm… looks like a connectivity issue… err… of course! Virtual Machines are created by default with only one endpoint – for RDP. Port 1433 will be blocked by the firewall on Azure.

Thankfully it is easy enough to add an endpoint to a running instance through the management portal, so I did.
Initially I created one that uses 1433 publicly and 1433 privately, but that is not a good idea as far as security is concerned; it is much better to use a different, unexpected, port publicly and map it to 1433 privately, and so I ended up using the not-so-imaginative 14333 (spot the extra 3) mapped to 1433.

This adds another layer of security (by obscurity) to my database.
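The same endpoint can also be added from PowerShell rather than the portal – a sketch, assuming the Azure cmdlets are configured; the service name matches this post, but the VM name and endpoint name are my assumptions –

```powershell
# Add a public endpoint mapping external port 14333 to SQL Server's
# private port 1433 on the VM, then persist the change
Get-AzureVM -ServiceName 'yossiiaas' -Name 'yossiiaas' |
    Add-AzureEndpoint -Name 'SQL' -Protocol tcp -PublicPort 14333 -LocalPort 1433 |
    Update-AzureVM
```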

With this set up I tried to connect again, using yossiiaas.cloudapp.net,14333 as the server name (note the use of ',' instead of ':', which is what I'd initially expected) – only to get a completely different error this time –

Login failed for user ‘yossi’. (.Net SqlClient Data Provider)

Looking at the Application event log on the server (through RDP) I could spot the real problem

Login failed for user ‘yossi’. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only.

Basically – the server needed to be configured for SQL Authentication; it is configured for Windows Authentication only by default, which is the best practice, I believe.
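Switching to mixed mode can be done through Management Studio (server Properties > Security), or from an elevated PowerShell prompt on the VM – a sketch, assuming a default SQL Server 2012 instance (the registry path varies for named instances or other versions) –

```powershell
# Set LoginMode to 2 (SQL Server and Windows Authentication mode)
# for a default SQL Server 2012 instance, then restart the service
Set-ItemProperty `
    -Path 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQLServer' `
    -Name LoginMode -Value 2
Restart-Service -Name 'MSSQLSERVER' -Force
```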

With this done, and the service restarted, I could now connect to the database engine remotely and do as I wished.

(whether that’s a good idea, and in what scenarios that could be useful is questionable, and a topic for another post…)
