Uploading a VHD to Windows Azure

Ok – this is clearly one of those note-to-self posts. I'm sure it's been blogged about elaborately and that I have nothing to add, but I want to be able to find it quickly, so if anybody else finds this useful it's simply a nice-to-have! :)

One of the best ways to upload a VHD to Windows Azure is to use csupload (accessible via the 'Windows Azure Command Prompt' or in "%ProgramFiles%\Microsoft SDKs\Windows Azure\.NET SDK\2012-06\").

csupload makes sure the VHD is fixed-size and compresses the data in transit, so transfer times were around a sixth of what I could achieve using other storage tools.

The commands I’ve used to upload –

csupload Set-Connection "SubscriptionID=<subscription_id>;CertificateThumbprint=<management_cert_thumbprint>;ServiceManagementEndpoint=https://management.core.windows.net/"

csupload Add-Disk -Destination "https://<storage_account_name>.blob.core.windows.net/<container>/<vhd_name>" -Label "<vhd_name>" -LiteralPath "<local_path_to_vhd>" -OS Windows


Taking a VHD snapshot with Windows Azure Virtual Machines

One very useful capability of virtualisation (in general) is the ability to take snapshots of a disk at a point in time, allowing the machine to be restored to a known state quickly and easily.

With Windows Azure Virtual Machines one does not have access to the hypervisor (for obvious reasons!), so can this still be achieved?

The answer is yes – by taking a snapshot of the underlying blob in which the VHD is stored.

To demonstrate, I created a brand new machine from the gallery – I used the SQL Server template –

[Screenshot: creating a new virtual machine from the gallery]

I then went ahead and created a quick database and added several rows to it through SQL Server Management Studio –

[Screenshot: the test database and its rows in SQL Server Management Studio]

At this point I took a snapshot of the blob containing the VHD – using Cerebrata's Cloud Storage Studio, in my case –

[Screenshot: taking a snapshot of the VHD blob in Cloud Storage Studio]
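(Incidentally, Storage Studio is just calling the blob service's snapshot operation under the covers; here's a minimal sketch of doing the same with the Storage Client library that ships with the SDK – the connection string, container and blob names below are all made up –)

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// a minimal sketch – the connection string, container and blob names are made up
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");
CloudBlobClient client = account.CreateCloudBlobClient();

// VHDs are stored as page blobs; get a reference to the one backing the disk
CloudPageBlob vhdBlob = client.GetContainerReference("vhds")
                              .GetPageBlobReference("sqlvm.vhd");

// take a read-only, point-in-time snapshot of the blob
CloudBlob snapshot = vhdBlob.CreateSnapshot();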

With the snapshot taken I went ahead and removed the table, only to create a new one with a slightly different structure and values, to make sure the state of the machine had changed since I took the snapshot –

[Screenshot: the modified table, created after the snapshot was taken]

Now I wanted to restore it to its previous state.

I could choose to do that to a brand new machine or to overwrite the existing machine. The latter would of course require that I first remove the machine (and disk) from the management portal so that the lease on the underlying blob is released, making it writable, and that's what I did. If I wanted to create a new machine I would have used the Copy Blob capability to copy the snapshot to a new blob, making it writable (snapshots are read-only), and then created a disk and machine out of that, next to my existing machine.

In my case I went on to delete the virtual machine and the related disk and, using Storage Studio again, I 'promoted' the snapshot – this simply copies the snapshot back over the original blob.

[Screenshot: promoting the snapshot in Cloud Storage Studio]
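(The same applies here – a 'promote' is simply a Copy Blob operation with the snapshot as the source; a hedged sketch, reusing the hypothetical names from the snippet above –)

// 'promoting' the snapshot means copying it back over the base blob;
// the disk must have been deleted first so the lease on the blob is released
vhdBlob.CopyFromBlob(snapshot);

// alternatively, copy the snapshot to a brand new blob to build a second machine
CloudPageBlob restoredVhd = client.GetContainerReference("vhds")
                                  .GetPageBlobReference("sqlvm-restored.vhd");
restoredVhd.CopyFromBlob(snapshot);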

With this done, I now had the older snapshot's content as the current blob, and it was time to re-create the disk…

[Screenshot: re-creating the disk from the blob]

…and virtual machine –

[Screenshots: re-creating the virtual machine from the disk]

…and sure enough, once the machine finished starting I connected to it and looked at the databases using SQL Server Management Studio again, which contained the original database and values, as expected –

[Screenshot: the original database and values, restored]

Quick update – I have added a post about how to do this from PowerShell, read it here

Using the Windows Azure Media Services Preview with the Azure SDK 1.7

A few weeks ago we announced the public preview of Windows Azure Media Services.

Being a preview, things are expected to be a little rough around the edges, and in the Media Services case this currently means account setup through PowerShell only, until we release the portal-based experience.

The instructions on how to set up a Media Services account can be found here; unfortunately these use the Windows Azure PowerShell Cmdlets, which – at their current version – do not support the latest SDK version (1.7). Fair enough, given that it was released last week, but I wanted to play now! :)

Fortunately, though, only two steps require the cmdlets – 5 and 6 – and looking at what they do (downloading the publishing settings file and using it to create a Media Services account) I realised it's quite straightforward to replace them with manual steps; here's what I did –

First – I needed to ensure I had a management certificate for the required subscription in the management portal – and I did. This certificate is required to remotely manage accounts and deployments.

Then I needed to download the publishing profile; the easiest way to do that is to browse to https://windows.azure.com/download/publishprofile.aspx, sign in, and save the file that gets downloaded.

Next I needed to execute a couple of the Media Services PowerShell cmdlets, so I started with

Import-Module -Name ".\Microsoft.WindowsAzure.MediaServices.Management.PowerShell.dll"

followed by

Import-MediaServicesPublishSettings, providing the path to the publishing settings file I downloaded earlier when prompted for the parameter.

With this done I could go back to the original instructions and carry on from step 7 onwards, resulting in a working Media Services account.

Of course this is all very temporary, but if you do want to use it, and like me you only have the latest SDK, now you have no excuse!

Mix-n-Match on Windows Azure

One of the powerful aspects of Windows Azure is that we now have both PaaS and IaaS and that – crucially – the relationship between the two is not that of an 'or' but rather one of an 'and', meaning you can mix and match the two (as well as more 'traditional', non-cloud, deployments, come to think of it) within one solution.

IaaS is very powerful, because it is an easier step to the cloud for many scenarios – if you have an existing n-tier solution, it is typically easier and faster to deploy it on Azure over Virtual Machines than it is to migrate it to Cloud Services.

PaaS, on the other hand, delivers much more value to the business, largely in what it takes away (managing VMs).

The following picture, which I'm sure most have seen in some shape or form, lays things down clearly –

[Diagram: the split of responsibilities between on-premises, IaaS and PaaS]

And so – the ability to run both, within a single deployment if necessary, provides a really useful on-ramp to the cloud; consider a typical n-tier application with a front end, middle tier and a back-end database. The story could be something along these lines –

You take the application as-is and deploy it on Azure using the same tiered approach over VM roles - 

[Diagram: all tiers deployed over virtual machines]

Then, when you get the chance, you spend some time and update your front end to a PaaS Web Role –

[Diagram: the front end migrated to a Web Role]

Next – you upgrade the middle tier to worker roles –

[Diagram: the middle tier migrated to Worker Roles]

And finally – you migrate the underlying database to a SQL database –

[Diagram: the database migrated to SQL Database]

Over this journey you have gradually increased the value you get from your cloud, on your time frames, on your terms.

To enable communication between the tiers over a private network we need to make both the IaaS elements and the PaaS elements part of the same network; here's how you do it –

Deploying a PaaS instance on a virtual network

With Virtual Network on Azure the first step is always to define the network itself, and this can be done via the Management Portal using a wizard or by providing a configuration file –

[Screenshot: creating a virtual network in the management portal]

The wizard guides you through the process, which includes providing the IP range you'd like for the network as well as setting up any subnets as required. It is also possible to point at a DNS server and link this to a private network – a VPN to a local network. For more details see Create a Virtual Network in Windows Azure.

With the network created you can now deploy both Virtual Machines and Cloud Services to it.

Deploying Virtual Machines onto the network is very straightforward – when you run the Virtual Machine creation wizard you are asked where to deploy it, and you can specify a region, an affinity group or a virtual network –

[Screenshot: choosing a region, affinity group or virtual network]

If you’ve selected a network, you are prompted to select which subnet(s) you’d like to deploy it to –

[Screenshot: selecting the subnet(s) to deploy to]

and you're done – the VM will be deployed to the selected subnet in the selected network, and will be assigned an appropriate IP address.

Deploying Cloud Services to a private network was a little less obvious to me – I kept looking in the management portal for a way to specify the network to use when deploying an instance, and completely ignored the most obvious place to look – the deployment configuration file.

Turns out that a network configuration section has been added with the recent SDK (1.7), allowing one to specify the name of the network to deploy to and then, for each role, specify which subnet(s) to connect it to.

For detailed information on this refer to the NetworkConfiguration Schema, but here’s my example –

<ServiceConfiguration serviceName="WindowsAzure1" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*" schemaVersion="2012-05.1.7">
  <Role name="WebRole1">
    <Instances count="1" />
    <!-- ... -->
  </Role>
  <NetworkConfiguration>
    <VirtualNetworkSite name="mix-n-match" />
    <AddressAssignments>
      <InstanceAddress roleName="WebRole1">
        <Subnets>
          <Subnet name="FrontEnd" />
        </Subnets>
      </InstanceAddress>
    </AddressAssignments>
  </NetworkConfiguration>
</ServiceConfiguration>

This configuration instructs the platform to place the PaaS instance in the virtual network, in the correct subnet; and indeed, when I remote desktop into the instance, I can confirm that it has received a private IP in the correct range for the subnet (10.1.1.4, in my case) and – after disabling the firewall on my IaaS virtual machine – I can ping the VM successfully using its private IP (10.1.2.4).
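(If you'd rather verify this from code than from a command prompt, a trivial sketch – the private IP below is simply the one my VM happened to receive, so treat it as an example –)

using System;
using System.Net.NetworkInformation;

class PingTest
{
    static void Main()
    {
        // 10.1.2.4 is the private IP my IaaS VM happened to receive – an example only
        using (Ping ping = new Ping())
        {
            PingReply reply = ping.Send("10.1.2.4");
            Console.WriteLine("10.1.2.4: {0}", reply.Status);
        }
    }
}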

It is important to note that the platform places no firewalls between instances on the private network, but of course VMs may well still have their own firewall turned on (the templates we provide do), and so these will have to be configured as appropriate.

And that's it – I could easily deploy a whole bunch of machines to my little private network – some provided as VMs, some as code on Cloud Services, as appropriate – and they all play nicely together…

Deploying Joomla on Windows Azure Web Sites

Today we have released the preview of Windows Azure Web Sites to –

Quickly and easily deploy sites to a highly scalable cloud environment that allows you to start small and scale as traffic grows.

Use the languages and open source apps of your choice then deploy with FTP, Git and TFS. Easily integrate Windows Azure services like SQL Database, Caching, CDN and Storage.

Using Web Sites it is very easy to deploy many different OSS-based platforms such as Drupal, Joomla!, DNN or WordPress.

This simple post will show the steps required to get a Joomla! site up and running; with Web Sites this is a wizard-driven process that takes around 10 minutes end-to-end. How easy is that?!

To start the process, in the new, HTML5-based management portal you click the 'NEW' button at the bottom left –

[Screenshot: the 'NEW' button in the management portal]

In the menu that opens you select 'WEB SITES' and then 'FROM GALLERY' to use one of the pre-canned solutions –

[Screenshot: selecting 'WEB SITES', 'FROM GALLERY']

You could, of course, create your own instance and do anything you’d like on it.

In my case, from the gallery that opens, I select the Joomla! 2.5 item and click the next button –

[Screenshot: selecting Joomla! 2.5 from the gallery]

and I'm then asked to provide the details for the deployment. These will, naturally, differ from platform to platform, but they usually follow the same lines – URL, username, password :) – as well as the location for the deployment and database details.

For Joomla! I can select from MySQL (provided through ClearDB) or Windows Azure SQL Database; I chose the former, just because –

[Screenshot: providing the deployment details]

Next, as I opted for a MySQL database, I'm asked for the details around that –

[Screenshot: the MySQL database settings]

and I’m good to go!

I hit the button and I can see my web site being created, and then deployed –

[Screenshot: the web site being created]

A couple of minutes later, my web site is running –

[Screenshot: the web site running]

and I can see the detailed view –

[Screenshot: the detailed view]

…as well as browse to it –

[Screenshot: browsing to the new Joomla! site]

and, after signing in, edit it –

[Screenshot: editing the site after signing in]

Connecting to SQL Server on an Azure Virtual Machine

Not surprisingly, one of the first things I did when I got access to the new Virtual Machines capability on Windows Azure was to create a VM with SQL Server 2012. I used the gallery image and was up and running in minutes.

The next logical thing was to remote desktop to the machine and play around, which I did, and I'm glad to report it was boring :) – everything was exactly as I expected it to be.

Next, just for fun, I wanted to see whether I could connect to the database engine from my laptop; I knew I wouldn't be able to use Windows Authentication, so the first thing to do was to create a SQL login on the server and make it an administrator. Standard stuff.

I was now ready to connect, so I opened Management Studio on my laptop and tried to connect to yossiiaas.cloudapp.net (the not-so-imaginative name I gave my instance), using the SQL login I created –

[Screenshot: the connect dialog in Management Studio]

This set Management Studio thinking for a while before it came up with the following error –

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)

Hmm… looks like a connectivity issue… err… of course! Virtual Machines are created by default with only one endpoint – for RDP. Port 1433 will be blocked by the firewall on Azure.

Thankfully it is easy enough to add an endpoint to a running instance through the management portal, so I did.
Initially I created one that uses 1433 publicly and 1433 privately, but that is not a good idea as far as security is concerned. It would be much better to use a different, unexpected port publicly and map it to 1433 privately, and so I ended up using the not-so-imaginative 14333 (spot the extra 3) mapped to 1433.

This adds another layer of security (by obscurity) to my database.

With this setup I tried to connect again, using yossiiaas.cloudapp.net,14333 as the server name (note the use of ',' instead of ':', which is what I'd initially expected) – only to get a completely different error, this time

Login failed for user ‘yossi’. (.Net SqlClient Data Provider)

Looking at the Application event log on the server (through RDP) I could spot the real problem –

Login failed for user ‘yossi’. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only.

Basically – the server needed to be configured for SQL Authentication (it is configured for Windows Authentication only by default, which is the best practice, I believe).

With this done, and the service restarted, I could now connect to the database engine remotely and do as I wished.
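To round things off, connecting from code works just the same – note the comma before the port in the server name; the password below is obviously a placeholder –

using System;
using System.Data.SqlClient;

class RemoteSqlTest
{
    static void Main()
    {
        // the server name uses ',' for the port – the public endpoint (14333)
        // maps to 1433 on the VM; the SQL login is the one created earlier
        string connectionString =
            "Server=tcp:yossiiaas.cloudapp.net,14333;Database=master;" +
            "User ID=yossi;Password=<password>;";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            Console.WriteLine("Connected to SQL Server " + connection.ServerVersion);
        }
    }
}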

(whether that’s a good idea, and in what scenarios that could be useful is questionable, and a topic for another post…)

Somebody had renamed my website!

Last week I got an email from a customer who was surprised to find out that somebody had decided to point a different domain name at their web site (i.e. if they were Contoso.com, somebody had pointed Northwind.com at their web site).

We couldn't quite figure out why somebody would do that, or whether it's really a problem, but it certainly made them feel uncomfortable, and I can see why.

Technically there's not much one can do to prevent others from doing this, and whilst you can go and complain to the registrar of the rogue domain, this is a hassle and will take some time to sort out, so a technical solution is needed to circumvent it.

The best approach, as far as I can tell, is to set the host name property in the site bindings in IIS to the correct domain name(s), which would result in IIS rejecting any request carrying a different domain name and, indeed, on-premises this is what everybody seems to do –

[Screenshot: setting the host name in the IIS site bindings]

Any request made to the web site using a different domain (easily simulated using the hosts file in C:\Windows\System32\drivers\etc) will result in an HTTP 400 or HTTP 503 error.

To set the host name on a web role instance declaratively, one could use the hostHeader attribute of the Binding element in the ServiceDefinition.csdef file – this will instruct the fabric to set the value provided in IIS and, as a result, any request made using a different host name will be rejected.

The problem with setting the host name to the production domain is that it would prevent access to the system whilst on staging – where the URL includes a generated GUID – as the staging URL is not known at design time and as such cannot be provided in the ServiceDefinition.csdef file.

The solution is to set the site bindings dynamically from within the deployment, and the easiest way to do that is from the OnStart method of the Role –

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // For information on handling configuration changes
            // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

            try
            {
                FixSiteBindings();
            }
            catch (Exception ex)
            {
                WriteExceptionToBlobStorage(ex);
            }
            return base.OnStart();
        }

Before I dive into my FixSiteBindings method I should point out that during the testing of this I used the method pointed out by Christian Weyer to log any exception in OnStart to blob storage, which was very handy!
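For completeness, here's a sketch of what such a helper might look like – this is my paraphrase of the idea rather than Christian's exact code, and the configuration setting name and container name are assumptions –

        // a hedged sketch of WriteExceptionToBlobStorage – the setting name and
        // container name are assumptions; requires Microsoft.WindowsAzure,
        // Microsoft.WindowsAzure.StorageClient and Microsoft.WindowsAzure.ServiceRuntime
        private static void WriteExceptionToBlobStorage(Exception ex)
        {
            CloudStorageAccount account = CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
            CloudBlobContainer container =
                account.CreateCloudBlobClient().GetContainerReference("exceptions");
            container.CreateIfNotExist();

            // one blob per failure, named after the instance and a timestamp
            CloudBlob blob = container.GetBlobReference(string.Format(
                "{0}-{1:yyyyMMdd-HHmmss}.log",
                RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow));
            blob.UploadText(ex.ToString());
        }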

So – when the role starts, FixSiteBindings is called, which looks as follows –

        void FixSiteBindings()
        {
            //web site name is the role instance id with the "_Web" postfix (WebSite name in ServiceDefinition.csdef)
            string webSiteName = RoleEnvironment.CurrentRoleInstance.Id + "_Web";

            using (ServerManager sm = new ServerManager())
            {
                //find web site
                Site site = sm.Sites[webSiteName];
                if (site == null)
                    throw new Exception("Could not find site " + webSiteName);
                //find the binding with hostName TBR - this is the one we need to replace
                Binding b = site.Bindings.FirstOrDefault(binding => binding.Host == "TBR");
                if (b != null)
                {
                    site.Bindings.Remove(b);
                    //add a binding with the expected domain - (address:port:hostName),protocol
                    site.Bindings.Add(string.Format(@"{0}:{1}:{2}", b.EndPoint.Address, b.EndPoint.Port, RoleEnvironment.DeploymentId + ".cloudapp.net"), b.Protocol);
                    sm.CommitChanges();
                }

            }
        }

In this code I use ServerManager (Microsoft.Web.Administration) to manipulate IIS settings and add the necessary bindings, but before I go into the code, let me explain the approach I’ve taken –

Ultimately, when my web site is in production, I need a host header for my domain name. I also need a host header with my staging URL, added dynamically.

As I might be using a VIP swap between staging and production, I need to have both bindings all the time: the OnStart code (or any start-up tasks) will not run during a VIP swap, so I won't get another chance to make any changes – and besides, it is best to change as little as possible between staging and production, to keep the system as stable as possible between environments.

Last – I need to ensure that the default binding, without the host header, does not exist, which would prevent others from pointing other domain names at my deployment.

To achieve all of the above I've concluded that the easiest way was to start with a ServiceDefinition.csdef file that defines the two bindings I need – one using the domain name needed for production and one using a placeholder, with a pre-determined host name, for staging; in my case 'TBR' –

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="HostName" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MvcWebRole1" vmsize="ExtraSmall">
    <Runtime executionContext="elevated"/>
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="yossitest.cloudapp.net"/>
          <Binding name="Staging" endpointName="Endpoint1" hostHeader="TBR"/>
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WebRole>
</ServiceDefinition>

When this gets deployed onto Azure I’ve already achieved two of my three requirements – there’s no default binding (as I’ve defined a host name for both endpoints) and I’ve got the production binding fully configured. I also have the beginning of my third requirement as I’ve got a binding for staging, and the known host name makes it easy to find it programmatically, so the last step would be to find that binding and update the host header with the correct value in the role’s OnStart method.

To achieve that I start with figuring out the name of the web site in IIS – this is composed of the current role instance name with the name of the web site as set in the ServiceDefinition.csdef file as a postfix – in my case "Web".

With an instance of the ServerManager I find the web site by name and then look for a binding with the host name 'TBR' – the one I need to update.

I’m ‘updating’ the binding by removing it and adding one in its place, making sure to use the values from the original one for everything but the host name, which keeps the flexibility of setting these through the ServiceDefinition file.

With the old binding removed and the new one added I commit the changes through the ServerManager and I'm done – the role should now be set correctly, allowing access to both production and staging.

One last thing worth pointing out is that for this code to run it must run in elevated mode, otherwise trying to make any changes to IIS will result in an error due to lack of permissions; this can be achieved by adding the <Runtime executionContext="elevated"/> element to the relevant role in the ServiceDefinition.csdef file, as shown above.

It is important to note that this only means that the RoleEntryPoint code will run in elevated mode; the rest of the role's code will run as normal, which is quite important.

I've clearly taken a very specific approach to solve a very specific case. I could have, for example, iterated over all the instance endpoints from the RoleEnvironment class and added the relevant bindings from those, which would be needed if the site had more than one endpoint – a sketch of that variation follows below. I'm sure there are many variations on the solution above, but I hope it provides a nice and easy solution for most and a good starting point for others.
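For anyone who needs that variation, here's a hedged sketch – it reuses the same ServerManager plumbing as FixSiteBindings above, and picking the deployment's cloudapp.net host name for every endpoint is my assumption –

        // a sketch only: derive one binding per declared endpoint instead of
        // fixing up the 'TBR' placeholder; requires Microsoft.Web.Administration
        // and Microsoft.WindowsAzure.ServiceRuntime, as in FixSiteBindings above
        void AddBindingsFromEndpoints(Site site)
        {
            foreach (RoleInstanceEndpoint endpoint in
                     RoleEnvironment.CurrentRoleInstance.InstanceEndpoints.Values)
            {
                // bind each endpoint to the deployment's *.cloudapp.net host name;
                // assumes http/https endpoints – IIS bindings won't accept raw tcp
                site.Bindings.Add(string.Format("{0}:{1}:{2}",
                    endpoint.IPEndpoint.Address,
                    endpoint.IPEndpoint.Port,
                    RoleEnvironment.DeploymentId + ".cloudapp.net"),
                    endpoint.Protocol);
            }
        }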
