On SQL Azure Reporting

I’ve been preparing a demonstration for a customer about SQL Azure Reporting, so I’ve been playing around with it a little and thought I’d share, at a high level, what I’ve done. Nothing fancy, I’m afraid, but if you’ve never looked at it, this should give you an idea of what’s involved.

The first step was to get a data source to work with, and at the moment that means SQL Azure database(s), which of course makes perfect sense. I promptly created a SQL Azure database server and, using the SQL Azure Migration Wizard, migrated good old Northwind onto it.


Now that I have a data source with some familiar data, it was time to create a report.
Given that I’m by no means a reporting expert, and that this isn’t really the point of the demonstration, I didn’t try to get too creative; I created a simple report of customers by country.

I started by opening Visual Studio 2008 and creating a new project of type ‘Report Server Project Wizard’.

The first step in the wizard is to define a data source, and it’s great that SQL Azure is an entry in the list of possible types; all that’s needed is the connection string, and the UI helps make that easy too.


It was simply a case of typing in my database server name and credentials and providing the database name. The only other thing I needed to do was set TrustServerCertificate to True under the properties accessed through the Advanced button.
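For reference, the resulting connection string ends up looking roughly like this (the server name, user and database here are placeholders, not my real values):

```
Server=tcp:myserver.database.windows.net;Database=Northwind;User ID=myuser@myserver;Password=...;Encrypt=True;TrustServerCertificate=True
```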

I then used the Query Builder to select the entire Customers table and carried on with the wizard, specifying tabular format, grouping by Country, and the detail fields (you can see I’ve been very creative).

Then, at the last page of the wizard, it was time to specify the deployment location. I replaced the default value of http://localhost/ReportServer with the address of my Azure-based SQL Reporting ‘server’, which I copied from the management portal.


This, of course, is not necessary at this stage; it is perfectly fine to start working against a local reporting server and deploy the report later, either through the management portal or by changing the server property in the report project’s properties and deploying from Visual Studio.

With the wizard complete I could run my report from Visual Studio and see the results. The only thing I noticed was that I had to provide the credentials to the data source every time I ran the report.

This might be desirable in some cases, but I wanted a more streamlined experience, so I stored the database credentials in the data source. The report file itself is protected through the management portal and its login, so these don’t get compromised.

With the data source credentials sorted I could deploy the project straight from Visual Studio, and after a minute or so it was visible in the management portal. Clicking on the report renders it successfully.


So, at this point the report is fully operational and can be accessed via a publicly available URL. Access is governed by username/password pairs set up through the admin console and permissions set on the report itself (or a folder), and that’s probably good enough for many scenarios, such as departmental reports inside the organisation.

For more public reports, ones available to external parties for example, I think that re-hosting the report in a web role and leveraging ACS for access control would be a lot more flexible and manageable, so I moved on to do this as well.

Embedding the report simply meant, in my little example anyway, using the ReportViewer control on an ASP.NET page; I configured the ServerReport property of the viewer with the relevant URIs and made sure to set the control’s ProcessingMode property to ‘Remote’.
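For illustration, the markup ends up looking something like this (the report path and reporting server URL below are placeholders for your own values):

```
<rsweb:ReportViewer ID="ReportViewer1" runat="server" ProcessingMode="Remote">
  <ServerReport ReportPath="/CustomersByCountry"
                ReportServerUrl="https://myserver.reporting.windows.net/ReportServer" />
</rsweb:ReportViewer>
```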

I then used code to assign fixed credentials for Reporting Services. Once again, my application is going to be protected by ACS and this code is server-side code, so I am comfortable embedding these in the code (they should be in configuration, of course…).
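A minimal sketch of that code, assuming the Microsoft.Reporting.WebForms ReportViewer: implement IReportServerCredentials to supply forms credentials, then hand an instance to the viewer (the user name, password and server name below are placeholders):

```
using System.Net;
using System.Security.Principal;
using Microsoft.Reporting.WebForms;

// Supplies fixed forms credentials to the SQL Azure Reporting server
public class FixedReportCredentials : IReportServerCredentials
{
    public WindowsIdentity ImpersonationUser { get { return null; } }
    public ICredentials NetworkCredentials { get { return null; } }

    public bool GetFormsCredentials(out Cookie authCookie,
        out string userName, out string password, out string authority)
    {
        authCookie = null;                              // no pre-existing cookie
        userName = "reportUser";                        // placeholder
        password = "reportPassword";                    // placeholder - should come from config
        authority = "myserver.reporting.windows.net";   // placeholder server name
        return true;                                    // tells the viewer to use forms auth
    }
}
```

In the page this is then a one-liner: `ReportViewer1.ServerReport.ReportServerCredentials = new FixedReportCredentials();`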

At this point I could run my little ASP.NET application locally, and it would successfully access the report in Reporting Services and display it on screen.


The last step, then, was to add support for STS.

I made all the necessary configuration changes in the management portal, copied the WS-Federation metadata URL, and then used the ‘Add STS Reference’ wizard to make the necessary configuration changes to my application.


The result of the wizard was a set of entries added to my web.config, to which I added the following under <system.web>:

  <httpRuntime requestValidationMode="2.0" />
  <pages validateRequest="false" />
  <authorization>
    <deny users="?" />
  </authorization>

Running the application now automatically redirects me to the ACS and, as I have configured two possible identity providers (Windows Live ID and Google), I am presented with a selection screen.


Choosing the provider I want, I am redirected to the login screen hosted by the identity provider, and from there back to my application. The next time I access my reporting application these redirects will still happen, but cookies held by all parties will remember me, and I won’t need to sign in again until I sign out or the cookies expire.

The only thing to note is that the ACS configuration includes the URL of the application, so once tested locally this needs to change to the URL on Windows Azure. With that done, and the application deployed to Windows Azure, I can now browse to my reporting application, log in using, for example, my Windows Live ID, and view a report on SQL Azure Reporting.

Of Claims and Public Identities

I remember sitting in a session delivered a few years ago by Kim Cameron, in which I heard for the first time about ‘the Laws of Identity’, and I was hooked immediately. I find the topic of identity very interesting and important, and too often overlooked.

The cloud often brings this into the conversation, and I often find people surprised at how comprehensive the Microsoft story around identity is and how powerful the ACS is. I discussed one aspect of this, the ability to federate with the ‘big boys’ (Windows Live ID, Yahoo, Google and Facebook), in my previous post.

In his talk, Kim listed 7 ‘laws’ –

  1. User Control and Consent
  2. Minimal Disclosure for Constrained Use
  3. Justifiable Parties
  4. Directed Identity
  5. Pluralism of Operators and Technologies
  6. Human Integration
  7. Consistent Experience Across Contexts

You can read about them here.

In my post I mentioned that the application developer could leverage the ACS to authenticate the user without having to write authentication code, worry about storing username and password, implementing password reset, etc., but suggested the developer would still need to implement a registration screen to get any information that is required about the user and manage a local profile.

This is due to the second law: an identity provider should, in most cases, and certainly in the case of a provider outside the organisation, disclose only the user’s identity and nothing else. All that’s needed is for someone to say ‘yep, this user is X as far as I can tell’.

To understand why this is important, consider the difference between these two examples –

When using Windows Live ID as the identity provider, the calling web site (the Relying Party, or RP) receives back a token with two claims: the identity provider (“uri:WindowsLiveID”) and the name identifier (in the Windows Live ID case, a GUID). No personal details are shared, nothing that can be used to hack the account.

The web site is then expected to have details stored against those two facts, details the user had given directly to the web site, as it should be according to the first law. If the web site does not have any such details, it should ask the user directly, and the user, now knowing exactly who is getting these details, can make a conscious call as to whether to share them or not.

In contrast, consider Google’s identity provider. This returns four claims: the identity provider (‘Google’) and the name identifier (a link with a unique identifier in it), similar to Windows Live ID, but also the user’s friendly name (‘Yossi Dahan’ in my case) and email address (the gmail address used to sign in to Google). These are two pieces of information I did not necessarily wish to share with the web site, but they were shared without me realising it (until I debugged the code, that is).

Now, it is true that as a user I needed to first proactively go to the web site, and that when redirected to the Google sign-in page I had to agree for details to be shared, so it’s not exactly horrible. But it does demonstrate the point that identity providers should really provide very little detail, and that web site developers, whilst they don’t have to worry about authentication, should manage user profiles independently and handle user registration against those identities.
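If you want to see exactly which claims a given provider returns, a quick way (assuming WIF 1.0, as I was using) is to enumerate the claims on the current principal:

```
using System.Threading;
using Microsoft.IdentityModel.Claims;

// Dump every claim the identity provider returned via ACS
var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
if (identity != null)
{
    foreach (Claim claim in identity.Claims)
    {
        // e.g. the nameidentifier claim holds a GUID for Windows Live ID
        System.Diagnostics.Debug.WriteLine(claim.ClaimType + " = " + claim.Value);
    }
}
```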

Using Access Control Service for Identity Federation

The Windows Azure Platform is full of goodies. Some are at the heart of the conversation – Web Roles, Worker Roles, SQL Azure, the fabric controller – these form a part of pretty much every conversation. Some are often mentioned, but usually in very little detail – the Marketplace, for example, or the Service Bus or the Caching capabilities.

Another topic I find I often end up glossing over in conversations is the Access Control Service. Not because it’s not useful or important, it is; simply because the platform is so big, and there’s only so much one can discuss in any one conversation. But federated identity is something I’m quite passionate about, and I just love the Windows Identity Foundation, so the Access Control Service is bound to be close to my heart.

The Access Control Service is seemingly a fairly simple offering. On its own, in most circumstances, it does not really do much per se, but, coupled with the Windows Identity Foundation and the .NET Framework, it enables federated identity scenarios (think single sign-on within, as well as across, organisations) easily, reliably and securely.

Using ACS, you can take any web application and, in just a few clicks, allow users to authenticate to it using all the major public identity providers (Windows Live ID, Yahoo, Google and Facebook), as well as your corporate identity if you have ADFS, or, if you need to, any other custom Security Token Service that supports the industry standards.

Want proof? Take a look at this walk-through that shows how to enable a web site to use Google ID.

As a developer, ACS takes away the need to build an authentication mechanism, store passwords, build password-reset capabilities and all of that; you can simply leverage other identity providers. All that’s left for you to do is enhance what you’re given with your own profile information (some of these providers will only give you a GUID for the user; no personal information is shared, which is a good thing!).

So, using the ACS can be a great relief for anyone building a public web site, as it saves you a lot of work and saves your users the need to remember yet another set of credentials. The support for ADFS means you can also protect your web assets with your corporate identity, no matter where they are deployed (your data centre, someone else’s data centre or the public cloud). And, considering the consumerisation of IT trend, allowing users access to enterprise applications using external identities in a managed way may not be a bad thing either.

Enabling Hybrid Cloud/On-Premises Solutions

Paolo Salvatori wrote a fantastic paper on how to integrate a BizTalk Server application with Windows Azure Service Bus Queues and Topics. Being an Azure guy at present but a BizTalk guy for the past 11 years, this is “right up my street”, as they say, although I do think this article will come in very useful to anyone interested in hybrid solutions and the messaging capabilities in the cloud, with or without BizTalk in the picture, as Paolo does a great job of introducing the Service Bus capabilities and describing some typical scenarios.

This is an important topic. As much as we’d like to see everything on Windows Azure, naturally, enterprises will not magically teleport their entire IT estate into the cloud.
Some applications will probably remain on premises for a while, if not forever, and there are some very valid reasons for that.

What this means, of course, is that any enterprise that is half-serious about cloud adoption needs to consider how to support the hybrid model where some applications are in the cloud whilst others are on premises and how to do cloud/on-premises integration.

The Windows Azure platform, with the Service Bus capabilities – the relay service, queues and topics, as well as the upcoming integration capabilities, enables the hybrid model easily yet robustly using familiar concepts and tools.

Couple this with a strong enterprise integration solution, one implemented on BizTalk Server for example, which provides the ‘gateway’ to and from the cloud within the enterprise, and you can take a federated approach to the enterprise service bus, with one leg on the ground and another in the cloud.

Windows Azure beats the competition in cloud speed test

I’ve been sent this from all over the place….

Well worth reading the whole thing, but here are a few extracts –

Microsoft’s Windows Azure has beaten all competitors in a year’s worth of cloud speed tests, coming out ahead of Amazon EC2, Google App Engine, Rackspace and a dozen others.

Some of the numbers….

The Windows Azure data center in Chicago completed the test in an average time of 6,072 milliseconds…compared to 6.45 seconds for second-place Google App Engine….while Amazon EC2 in Virginia posted a nearly identical 7.20. Amazon’s California location scored 8.11 seconds on average.

… and availability too

Azure also did well in availability, with its Chicago facility hitting 99.93 percent uptime over the past month, significantly better than the 97.69 percent score posted by Google App Engine, and among the highest overall. Rackspace in Texas achieved 99.96 percent uptime, while Amazon EC2 in California scored 99.75 percent and EC2 in Virginia was up 99.39 percent of the time.

Windows Azure and mobile devices

In a session with a large telco company yesterday we stated that Azure is a great companion to mobile device development, only to be presented almost immediately with the, quite reasonable, question: why?

And so – here’s my version of the answer –

Most practical mobile applications are not stand-alone; they have a backend, be it server storage, some business service, or a combination of the two.
This adds some concerns for the application developer, for example –

  • Needing a place to host the backend logic/storage in a way that is both secure and accessible from the internet, without incurring big costs or having to go through lengthy processes to re-configure networks, etc.
  • Needing to start small to minimise risk (the application market is somewhat unpredictable), while knowing that it is possible to scale if and when demand for the application grows (‘fail fast or scale fast’).

Windows Azure compute and storage capabilities provide a platform that answers both of these concerns (and others): the cloud-based storage is fully accessible from the internet, using standard open protocols and with published SDKs for all major platforms.

Similarly, web services (on web roles) and worker roles can be deployed to Azure to provide any backend processing cheaply and be made available to the device using web services or REST technologies.
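As a trivial illustration of the ‘standard open protocols’ point, a publicly readable blob is just an HTTP GET away, from any platform; in .NET terms (the account and container names below are made up):

```
using System.Net;

// Fetch a publicly readable blob over plain HTTP(S) - no SDK required
var url = "https://myaccount.blob.core.windows.net/public/readme.txt";
using (var client = new WebClient())
{
    string content = client.DownloadString(url);
}
```

The same request works equally well from an iOS, Android or Windows Phone HTTP stack, which is rather the point.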

With the cloud there is no upfront investment in infrastructure, and one can add or remove instances as needed, so ‘fail fast or scale fast’ is easily achieved in a risk-free fashion.

To add to this, Windows Azure also includes the Service Bus with its relay service, which can be used to securely relay requests from the device to a service inside a private network (for example, the enterprise’s corporate network) without needing to make any network or firewall changes. More advanced scenarios can take advantage of queues and topics in the cloud, which enable publish/subscribe patterns as well, but in most cases this is not necessary.

And so, I hope you would agree that there are quite a few reasons why the Windows Azure platform is a great companion to mobile application development!

New Beginnings

In February this year I joined the ranks of Microsoft UK as a technical pre-sales guy, after working as an independent consultant for about 7 years (and as a developer of sorts for several years before that).

Prior to joining Microsoft I worked almost exclusively with BizTalk, from the early stages of BizTalk Server 2000 all the way through to BizTalk Server 2010, so it made perfect sense for me to join the ‘Application Platform’ team as the technical specialist responsible for BizTalk Server.

One of the (many) reasons that led me to join Microsoft is my belief that the IT world, along with many businesses, is going through a big shift as cloud technologies mature and become mainstream. It became clear to me that it was time for a change; I had been doing pretty much the same thing for a long time and wanted to do something new, and I could not think of a more interesting space to be in than cloud computing, or a better company in that space than Microsoft. I really wanted to get closer to Windows Azure.

Turns out I played my cards right: shortly after I joined (as an integration technical specialist), the Application Platform team in which I work took ownership of Windows Azure in Microsoft UK, and I was asked to work with our enterprise customers to help them leverage the platform.

From a Microsoft perspective this change of ownership simply reflects the fact that the technology has matured and is now absolutely ‘mainstream’, and of course highlights the fact that it is an integral part of Microsoft’s Application Platform.

From my perspective it is a huge opportunity to work with a fantastic technology and help companies harness the cloud to their success.

And so, with this new start, I decided it is time to start a new blog; as my work with Sabra Ltd and my ramblings on blog.sabratech.co.uk are closely associated with my previous persona, it could get quite confusing.

It is hard to tell how this new blog of mine will shape up, how often I will get to write, and what it will be about. Only time will tell. But my hope is that, as my role as a technical pre-sales guy sits firmly between business, IT and development, I will get to cover many angles of this cloud thing.
