Reading metric data from Azure using the Azure Insights library

A couple of months ago I published a post on how to get metrics from the Azure API Management service in order to integrate it into existing monitoring tools and/or dashboards.

One of my customers is building a solution with a significant use of Document DB at the heart of it and asked how we could do the same for Document DB.

In this blog post I will walk through the steps required to obtain metrics for Document DB using the preview release of the Microsoft Azure Insights Library (I’ve used version 0.6.0), with Azure Active Directory (AAD) for authentication. The same approach should work for any of the services that are accessible via the Azure Resource Manager (and the Azure Preview Portal).

There are three steps required to obtain metrics through Azure Insights –

  1. [One time] access control configuration
  2. Authenticating using Azure Active Directory
  3. Requesting the metrics from Azure Insights

Configuring Access Control

Access to resources through the Azure Resource Manager (ARM) is governed by AAD and so the first step is to have an identity, configured with the right permissions to our desired resource (a Document DB account, in this case), which we could use to authenticate to Azure AD and obtain a token.

The token is then used in subsequent requests to ARM for authentication and authorisation.

Naturally, we need our solution to work programmatically, without an interactive user, and so we need to have a service principal with the right permissions.

Service principals exist in AAD through web applications – every configured web application is also a service principal in AAD, and its client Id and client secret can be used to authenticate to AAD and obtain a token on the app’s behalf.

This documentation article shows how to create such an application using PowerShell, configure it with permissions to use the ARM API, and obtain an authentication token for it.

The following steps can be used to configure this through the portal –

In the Management portal, open the directory’s Applications tab and click the Add button in the bottom toolbar.

In the popup window, enter a name for the application (I used ‘monitoring’, as I plan to use it to monitor my services) and keep the Web Application radio button selected.


In the next screen enter a sign-on URL and application URI.
The values do not really matter as long as they are unique; this application isn’t going to be used by interactive users and so does not perform any browser redirection.


As I’ve mentioned – the application represents a service principal in the directory.
To be able to authenticate using this service principal we need three pieces of information – the client Id (essentially the username of the principal), the client secret (the equivalent of the password) and the tenant Id (which identifies the authentication authority).

Once the application is created, within the application page in AAD, generate an application secret and note it, as well as the client Id.


The tenant Id can be extracted using the View Endpoints button at the bottom toolbar and noting the GUID part of any of the endpoints shown –
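Since every endpoint shown in that dialog embeds the tenant GUID in its path (for example https://login.windows.net/{tenant id}/oauth2/token), it can also be pulled out programmatically. A small sketch – the ExtractTenantId helper name is mine:

```csharp
using System;
using System.Text.RegularExpressions;

public static class TenantIdExample
{
    // Pulls the first GUID out of an AAD endpoint URL; every endpoint shown
    // by the View Endpoints dialog embeds the tenant Id in its path.
    public static string ExtractTenantId(string endpointUrl)
    {
        var match = Regex.Match(endpointUrl,
            "[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}");
        return match.Success ? match.Value : null;
    }

    public static void Main()
    {
        string url = "https://login.windows.net/72f966bf-86f1-41af-91ab-2d7cd011db47/oauth2/token";
        Console.WriteLine(ExtractTenantId(url)); // prints 72f966bf-86f1-41af-91ab-2d7cd011db47
    }
}
```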


With the application created, the service principal is ready and the next step is to configure role-based-access-control to provide permissions to the relevant resources to the principal.

At the moment this needs to be done via PowerShell

To see all existing service principals –

  1. Open an Azure PowerShell console
  2. Use Add-AzureAccount and sign in with an account that has permissions to the directory
  3. Switch to AzureResourceManager mode using the Switch-AzureMode AzureResourceManager command
  4. Ensure you’re using the correct subscription using Select-AzureSubscription
  5. Use the Get-AzureADServicePrincipal command to see a list of the service principals in the active directory.

Doing so also reveals each service principal’s Object Id, which is needed in order to assign the newly created principal permissions on the relevant resource. This is done using the New-AzureRoleAssignment cmdlet.

The following example assigns a service principal the Contributor role for a specific resource group.

The scope can be narrowed, of course, to more specific elements such as the Document DB account or even a specific database –

New-AzureRoleAssignment -ObjectId {service principal's objectId} -RoleDefinitionName {role, e.g. Contributor} -Scope /subscriptions/{subscription id}/resourceGroups/{resource group name}

Authenticating using Azure Active Directory

With a principal in place and the right permissions set, the next step is to authenticate to Azure AD using the service principal’s credentials (the client id and client secret).

In .NET this can be done using the Active Directory Authentication Library (ADAL); here’s a brief code sample –

private static string GetAuthorizationHeader(string tenantId, string clientId, string clientSecret)
{
	// the AAD authority for the tenant
	var context = new AuthenticationContext("https://login.windows.net/" + tenantId);
	ClientCredential creds = new ClientCredential(clientId, clientSecret);
	// the token is requested for the Azure service management resource
	AuthenticationResult result =
		context.AcquireToken("https://management.core.windows.net/", creds);
	return result.AccessToken;
}

This code snippet returns an authorisation token from AAD for the service principal, which can subsequently be used in requests to the Azure Resource Manager to authenticate and authorise those requests.
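To illustrate how the token is used, the sketch below builds (but does not send) an ARM request with the token carried as a Bearer token in the Authorization header – the resource path and the BuildArmRequest helper name are illustrative:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

public static class ArmRequestExample
{
    // Builds (but does not send) an ARM GET request carrying the AAD token
    // as a Bearer token in the Authorization header.
    public static HttpRequestMessage BuildArmRequest(string token, string resourcePath)
    {
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://management.azure.com" + resourcePath);
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
        return request;
    }

    public static void Main()
    {
        var request = BuildArmRequest("{token from GetAuthorizationHeader}",
            "/subscriptions/{subscription id}/resourceGroups?api-version=2015-01-01");
        Console.WriteLine(request.RequestUri);
    }
}
```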

As an aside – obtaining a token can also be done in PowerShell as follows –

$password = ConvertTo-SecureString {client secret} -AsPlainText -Force

$creds = New-Object System.Management.Automation.PSCredential ({client id}, $password) 
Add-AzureAccount -ServicePrincipal -Tenant {tenant id} -Credential $creds

David Ebbo has a great blog post titled Automating Azure on your CI server using a Service Principal which walks through this in detail.

Requesting metrics from Azure Insights

Now that authentication has been completed and we’ve obtained a token for the principal, we’re ready to call the Azure Resource Manager API and retrieve the metrics required. The best way to do so is to use the aforementioned Microsoft Azure Insights Library.

The first step is to instantiate an InsightsClient with the right credentials, here’s a code snippet that does that (and uses the GetAuthorizationHeader method described earlier) –

private static InsightsClient getInsightsClient(string subscriptionId, string tenantId, string clientId, string clientSecret)
{
	// the Azure Resource Manager endpoint
	Uri baseUri = new Uri("https://management.azure.com");
	string token = GetAuthorizationHeader(tenantId, clientId, clientSecret);
	SubscriptionCloudCredentials credentials = new
		TokenCloudCredentials(subscriptionId, token);
	InsightsClient client = new InsightsClient(credentials, baseUri);
	return client;
}

With an instance of the Insights client obtained, reading metrics is a simple case of creating a Uri pointing at the required resource and a filter string describing the required metrics and time frame, and then calling the GetMetricsAsync method of the MetricOperations property on the client –

InsightsClient client = getInsightsClient(subscriptionId, tenantId, clientId, clientSecret);

string start = DateTime.UtcNow.AddHours(-1).ToString("yyyy-MM-ddTHH:mmZ");
string end = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mmZ");

string resourceUri = string.Format("/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.DocumentDB/databaseAccounts/{2}/", subscriptionId, resourceGroupName, documentDbName);
string filterString = string.Format("(name.value eq 'Total Requests' or name.value eq 'Average Requests per Second' or name.value eq 'Throttled Requests' or name.value eq 'Internal Server Error') and startTime eq {0} and endTime eq {1} and timeGrain eq duration'PT5M'", start, end);

CancellationToken ct = new CancellationToken();

Task<MetricListResponse> result = client.MetricOperations.GetMetricsAsync(resourceUri, filterString, ct);

return result.Result;

The result of this call is a MetricListResponse containing the metrics requested for the period, grouped according to the time-grain value supplied.
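The filter string syntax is easy to get wrong, so it can help to assemble it from a list of metric names and a time window. A minimal sketch – the BuildMetricFilter helper name is mine, not part of the Insights library:

```csharp
using System;
using System.Linq;

public static class FilterExample
{
    // Assembles an Azure Insights filter string of the shape used above:
    // (name.value eq 'A' or name.value eq 'B') and startTime eq ... and
    // endTime eq ... and timeGrain eq duration'PT5M'
    public static string BuildMetricFilter(string[] metricNames, DateTime start, DateTime end, string timeGrain)
    {
        string names = string.Join(" or ",
            metricNames.Select(n => string.Format("name.value eq '{0}'", n)));
        return string.Format(
            "({0}) and startTime eq {1} and endTime eq {2} and timeGrain eq duration'{3}'",
            names,
            start.ToString("yyyy-MM-ddTHH:mmZ"),
            end.ToString("yyyy-MM-ddTHH:mmZ"),
            timeGrain);
    }

    public static void Main()
    {
        Console.WriteLine(BuildMetricFilter(
            new[] { "Total Requests", "Throttled Requests" },
            DateTime.UtcNow.AddHours(-1), DateTime.UtcNow, "PT5M"));
    }
}
```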


Two things worth noting, as they caught me out initially –

  1. MetricValues only contains records for time slots for which there is data. Consider the request above – I’ve asked for data for the last hour, aggregated in groups of 5 minutes. The result will not necessarily contain 12 values under MetricValues; it will only contain records from aggregation windows in which there was activity.
  2. In addition – there is some latency in getting the data, at the moment about 20 minutes, but this is likely to be reduced.
    The combination of these meant that for a while I thought my code was not working: during my tests I was only running load against my Document DB account for a short while at a time and then immediately requested the metrics.
    Because of the latency – data was not yet available, which meant MetricValues was always empty.

Once the above was pointed out to me it all made perfect sense, of course.

Integration testing with Azure Mobile Services (or – how to generate an authorisation token)

A customer I’m working with is deploying a mobile application backed by Azure Mobile Services and as part of their CI process they needed to automate integration testing of the mobile backend.

Given that an Azure Mobile Services .NET backend is essentially MVC Web API, unit testing it as part of a continuous deployment wasn’t much of an issue and, in fact, integration testing would have been easy too – simply calling the Web API from a test project – if it wasn’t that our scenario, like many others’, involved authentication.

Required Headers

Every request to the mobile services backend needs to include the X-ZUMO-APPLICATION HTTP header carrying the application key.

This does not pose any challenges for testing because it is essentially a constant value.

For backend operations that require authentication, however, in addition to the application key header, the client must also supply a valid authorisation token through the X-ZUMO-AUTH header; in order to automate testing, the test code must be able to provide a valid authorisation header programmatically.

Authentication flow with Mobile Services

In order to understand how best to approach this, it is important to understand the authentication flow of mobile services as I find that many have a small misunderstanding around it.

In many OAuth flows the client contacts the identity provider directly to obtain a token before calling the resource, carrying the token retrieved. This is not the case with Mobile Services, which uses a server flow –

Triggering authentication using the Mobile Services SDK (App.MobileService.LoginAsync) opens a browser that is directed at the mobile services backend, and not at the identity provider directly.

The backend identifies the identity provider requested (which is a parameter to LoginAsync) and performs the necessary redirection based on the identity configuration.

The identity provider authenticates the user and redirects back to mobile services backend which is then able to inspect the token issued by the identity provider and use it to create the ZUMO authorisation token.

The key point here is that, with Mobile services, the authorisation token used is a JWT token which is ALWAYS created by mobile services as part of the authentication flow.

This approach means that following a valid authorisation flow, all tokens exchanged use the same structure irrespective of the identity provider.

It also means, that tokens are interchangeable, as long as they are valid.

Approach for Integration Testing

Understanding that the tokens are interchangeable, as they are always created by the Mobile Services backend, highlights a possible approach for integration testing – if I could create a valid token as part of my test code and supply it with the call I wish to test I could pass authentication at the backend and retrieve the results without being redirected to the identity provider for an interactive login.

So – the question is – can a token be created and if so – how?

To examine this, let’s first take a look at what a JWT token is –

The JWT token

A JSON Web Token (JWT) has three parts separated by a dot (‘.’) – the first part is a header indicating the token is a JWT and the hashing algorithm used, the second part is the set of claims, and the third part is a signature of the first two parts to prevent tampering.
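The first two parts are Base64url encoded, so they can be inspected with a few lines of code once the stripped padding is restored. A small sketch – the DecodePart helper name is mine:

```csharp
using System;
using System.Text;

public static class JwtInspectExample
{
    // Decodes one Base64url-encoded JWT part back to its JSON text.
    public static string DecodePart(string part)
    {
        // reverse the URL-friendly substitutions and restore '=' padding
        string s = part.Replace('-', '+').Replace('_', '/');
        switch (s.Length % 4)
        {
            case 2: s += "=="; break;
            case 3: s += "="; break;
        }
        return Encoding.UTF8.GetString(Convert.FromBase64String(s));
    }

    public static void Main()
    {
        // a dummy token: a real signature would be produced by the backend
        string token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ2ZXIiOiIyIn0.c2ln";
        string[] parts = token.Split('.');
        Console.WriteLine(DecodePart(parts[0])); // {"typ":"JWT","alg":"HS256"}
        Console.WriteLine(DecodePart(parts[1])); // {"ver":"2"}
    }
}
```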

The mobile services backend creates the JWT token and populates the appropriate claims based on the identity provider before signing the token with the account’s master key.

It is possible to examine the values in a token by capturing it with tools such as Telerik’s Fiddler and decoding its Base64 parts.

Here’s an example of the header and claim-set parts of a token generated following authentication with Azure Active Directory –

{
  "alg": "HS256",
  "typ": "JWT"
}
{
  "iss": "urn:microsoft:windows-azure:zumo",
  "aud": "urn:microsoft:windows-azure:zumo",
  "nbf": 1423130818,
  "exp": 1423134359,
  "urn:microsoft:credentials": "{\"tenantId\":\"72f966bf-86f1-41af-91ab-2d7cd011db47\",\"objectId\":\"c57dab9d-31a8-4cf5-b58c-b198b31353e6\"}",
  "uid": "Aad:XyCZz3sPYK_ovyBKnyvzdzVXYXuibT4R2Xsm5gCuZmk",
  "ver": "2"
}

Naturally – the signature part does not contain any additional information; it is derived from the first two parts – Base64 encoded and hashed.

Creating your own JWT token

Josh Twist blogged on how one could create a valid Mobile Services token as long as one has access to the master key (hence the importance of protecting the master key); the code in Josh’s post is in node.js, so I created the following C# equivalent to be able to use it from test projects in Visual Studio –

private string createZumoToken(string masterSecret, string userId, DateTime notBefore, DateTime expiry)
{
    int nNotBefore = convertToUnixTimestamp(notBefore);
    int nExpiry = convertToUnixTimestamp(expiry);

    string jwtHeader = "{\"typ\":\"JWT\",\"alg\":\"HS256\"}";
    string jwtPayload = "{\"iss\":\"urn:microsoft:windows-azure:zumo\",\"aud\":\"urn:microsoft:windows-azure:zumo\",\"nbf\":" + nNotBefore.ToString() + ",\"exp\":" + nExpiry.ToString() + ",\"urn:microsoft:credentials\":\"{}\",\"uid\":\"" + userId + "\",\"ver\":\"2\"}";

    string encodedHeader = urlFriendly(EncodeTo64(jwtHeader));
    string encodedPayload = urlFriendly(EncodeTo64(jwtPayload));

    string stringToSign = encodedHeader + "." + encodedPayload;
    var bytesToSign = Encoding.UTF8.GetBytes(stringToSign);

    // the signing key is a SHA256 hash of the master key + "JWTSig"
    string keyJWTSig = masterSecret + "JWTSig";
    byte[] keyBytes = Encoding.Default.GetBytes(keyJWTSig);

    SHA256Managed hash = new SHA256Managed();
    byte[] signingBytes = hash.ComputeHash(keyBytes);

    var sha = new HMACSHA256(signingBytes);
    byte[] signature = sha.ComputeHash(bytesToSign);

    string encodedPart3 = urlFriendly(EncodeTo64(signature));

    return string.Format("{0}.{1}.{2}", encodedHeader, encodedPayload, encodedPart3);
}

public string EncodeTo64(string toEncode)
{
    byte[] toEncodeAsBytes = System.Text.ASCIIEncoding.ASCII.GetBytes(toEncode);
    return EncodeTo64(toEncodeAsBytes);
}

public string EncodeTo64(byte[] toEncodeAsBytes)
{
    return System.Convert.ToBase64String(toEncodeAsBytes);
}

public DateTime ConvertFromUnixTimestamp(double timestamp)
{
    DateTime origin = new DateTime(1970, 1, 1, 0, 0, 0, 0);
    return origin.AddSeconds(timestamp);
}

public int convertToUnixTimestamp(DateTime dateTime)
{
    DateTime origin = new DateTime(1970, 1, 1, 0, 0, 0, 0);
    return (int)Math.Round((dateTime - origin).TotalSeconds, 0);
}

string urlFriendly(string input)
{
    input = input.Replace('+', '-');
    input = input.Replace('/', '_');
    input = input.Replace("=", string.Empty);
    return input;
}

As you can see from the code, there really isn’t much to it – the only dynamic values in this example are the not-before and expiry timestamps (expressed as seconds from 1/1/1970) as well as, of course, the user id and master key values. It is possible, of course, to add additional claims as needed, based on the identity provider used, as these become available to the code through the User object.

After creating the plain-text version of the first two parts, encoding them to Base64 and ensuring they are URL friendly, the code creates a hash of the key (master key + “JWTSig”) and then uses that to create an HMAC of the first two parts. This HMAC-SHA256 signature becomes the third part of the JWT token – the signature – and as long as it is created correctly, the Mobile Services backend will accept the token.
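Putting the pieces together, a test can generate a token with createZumoToken and attach it, alongside the application key, using the headers described earlier. A sketch – the helper name and URL are illustrative:

```csharp
using System;
using System.Net.Http;

public static class ZumoTestExample
{
    // Builds a request to a Mobile Services API carrying both required headers:
    // the application key and a self-generated ZUMO authorisation token.
    public static HttpRequestMessage BuildAuthenticatedRequest(
        string applicationKey, string zumoToken, string apiUrl)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, apiUrl);
        request.Headers.Add("X-ZUMO-APPLICATION", applicationKey); // application key header
        request.Headers.Add("X-ZUMO-AUTH", zumoToken);             // the self-created JWT
        return request;
    }

    public static void Main()
    {
        // zumoToken would come from createZumoToken(masterSecret, userId, notBefore, expiry)
        var request = BuildAuthenticatedRequest(
            "{application key}", "{zumo token}",
            "https://myservice.azure-mobile.net/api/orders");
        Console.WriteLine(request.RequestUri);
    }
}
```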

Summary – putting it to use

So – to round things up – with an understanding of how Mobile Services authentication works, what the JWT token looks like and how one can be created, this can be put to use as part of a continuous integration set-up, where the release process deploys the mobile services backend alongside any other components of the solution and then runs a set of tests to exercise the environment from the outside.

Of course – this relies on sharing the master key with the developers/testers and embedding it in the test project, so this should not be done for production! (but is entirely suitable, in my view, for dev/test environments)
