Getting Metric Data from Azure API Management

The Azure API Management portal provides a useful dashboard showing the number of calls to an API over time, the bandwidth used, the number of errors and the average response time.

Viewing this information in the online dashboard is useful, but integrating it into existing dashboards and/or monitoring tools is even more powerful, so – what does it take to get this data programmatically?

Not much, as it turns out!

APIM has a good set of management APIs of its own, including reporting on metrics. The first step is to enable the API Management REST API, which can be done within the security tab of the APIM admin portal –

[image: enabling the API Management REST API in the security tab]

Then, for my exercise, I manually created an Access Token at the bottom of the page, which I’ll use later when issuing requests to the portal. This can also be done programmatically.

Now that I’m able to call the management API, it is time to experiment with some requests.

The security tab also shows the base URL for the management API, which takes the form

https://{servicename}.management.azure-api.net

Every request to this URL must include an api-version query string parameter, with the current value being 2014-02-14-preview.
Each request also needs to include an Authorization HTTP header with the value being the access token generated either in the portal or programmatically.

There are many entities that can be operated on using the management API; the list is published here. In this case I wanted to look at the Report entity, which can be used to retrieve metrics.

Using this entity I can get usage metrics by time period, geographical region, user, product, API or operation. I chose to get metrics by time, so I appended the entity path – /reports/byTime – to my URL; the request at this stage looks like this –

https://apiyd.management.azure-api.net/reports/byTime?api-version=2014-02-14-preview

The byTime report takes two additional parameters – interval, which defines the aggregation period, and $filter, which can be used to restrict the data returned. I decided to aggregate data in 40-day increments and requested data from August 2014 onwards; the full request looks like this –

https://apiyd.management.azure-api.net/reports/byTime?api-version=2014-02-14-preview&interval=P40D&$filter=timestamp ge datetime'2014-08-01T00:00:00'

and the response looks like this –

[image: the JSON response showing the two aggregated groups]

You can see that I get two groups (there are more than 40 days between today and last August), with the relevant metrics in them.

I could also change the Accept HTTP header to text/csv to receive the data in CSV format –

timestamp,interval,callCountSuccess,callCountBlocked,callCountFailed,callCountOther,callCountTotal,bandwidth,cacheHitCount,cacheMissCount,apiTimeAvg,apiTimeMin,apiTimeMax,serviceTimeAvg,serviceTimeMin,serviceTimeMax
09/10/2014 00:00:00,P40D,128,9,2,13,152,153677,18,3,422.1049578125,0.376,5620.048,399.780409375,0,5598.4203
11/29/2014 00:00:00,P40D,7,0,6,0,13,10648,0,0,171.982185714286,32.0005,612.5971,162.947842857143,19.8287,579.0369
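
Putting all of this together, fetching the report programmatically boils down to a simple HTTP GET. Below is a minimal C# sketch using HttpClient – the URL and filter are the ones used above, and the access token placeholder stands for whatever token you generated in the portal (or programmatically) –

using System;
using System.Net.Http;

class Program
{
    static void Main()
    {
        var client = new HttpClient();

        // the access token generated in the portal's security tab (or programmatically)
        client.DefaultRequestHeaders.TryAddWithoutValidation(
            "Authorization", "<access token>");

        // uncomment to receive CSV instead of the default JSON
        // client.DefaultRequestHeaders.TryAddWithoutValidation("Accept", "text/csv");

        string url = "https://apiyd.management.azure-api.net/reports/byTime" +
                     "?api-version=2014-02-14-preview" +
                     "&interval=P40D" +
                     "&$filter=timestamp ge datetime'2014-08-01T00:00:00'";

        Console.WriteLine(client.GetStringAsync(url).Result);
    }
}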

Using Azure API Management between client and Azure Mobile Services backend

I’m working on a solution with a customer which requires placing Azure API Management (APIM) between the mobile devices and the published Azure Mobile Services (ZuMo) .NET backend.

Conceptually this is fairly straightforward, given that all of the Mobile Services capabilities are exposed as Web APIs, but I needed to figure out what changes might be required to the client app to support that, and to prove that it works.

To get started, I created an API in Azure API Management with the relevant operations. For now I focused on the default GET request for the ToDoItem table, so I defined a simple GET operation in APIM with the URL template /tables/TodoItem?$filter={filter} –

[image: the GET operation definition in APIM]

The generated Windows 8 app code for the ToDo item sample includes the following code to create the MobileServiceClient instance, which points at the ZuMo URL –

        public static MobileServiceClient MobileService = new MobileServiceClient(
            "https://travelviewer.azure-mobile.net/",
            "hmptBIjrEnAELyOgmmYPRPSOZlLpdo56"
        );

Naturally, to hit APIM instead of ZuMo directly, this needed to be updated to point at the published APIM API; in my case I changed it to

        public static MobileServiceClient MobileService = new MobileServiceClient(
            "https://apiyd.azure-api.net/travelviewer/",
            "hmptBIjrEnAELyOgmmYPRPSOZlLpdo56"
            );

Running the application with only this change was clearly not enough; I got the following error when trying to retrieve the ToDo items from the backend (currently configured to require no authentication) –

[image: the error shown in the app]

This was not unexpected, but I wanted to prove the point. Trusty Fiddler provided more information – I could see the request going out exactly as expected –

GET https://apiyd.azure-api.net/travelviewer/tables/TodoItem?$filter=(complete%20eq%20false) HTTP/1.1

and it included the various ZUMO headers (installation ID, application key, etc.).

But the response was – HTTP/1.1 401 Access Denied

With the body providing the exact details –

{
    "statusCode": 401,
    "message": "Access denied due to missing subscription key. Make sure to include subscription key when making requests to an API."
}

Authentication to Azure API Management (APIM) is done by providing a subscription key, either as a query string parameter (‘subscription-key’) or as an HTTP header (‘ocp-apim-subscription-key’), and clearly the ZuMo client does not know it needs to do that, so – how do we work around this?

The answer, I believe, is to create an HttpMessageHandler or – more specifically – a DelegatingHandler.

The MobileServiceClient takes a third parameter, which is an array of such handlers that are given the chance to process the outgoing requests and responses flowing through the client. This is a great opportunity to inject the missing HTTP header in a central place without affecting the application’s code.

And so I created a class that inherits from System.Net.Http.DelegatingHandler –

class InsertAPIMHeader : DelegatingHandler
{
}

and I then overrode the main method – ‘SendAsync’ – and in it simply added the required header to the HttpRequestMessage (in my case – hardcoded) –

protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
                                    System.Threading.CancellationToken cancellationToken)
{
    // add the APIM subscription key to every outgoing request
    request.Headers.Add("ocp-apim-subscription-key", "84095a7d792a47738464faae6bf950d3");
    return base.SendAsync(request, cancellationToken);
}

The last step is to wire that into the MobileServiceClient constructor; in my app’s App.xaml.cs I added an instance of the handler as the last parameter –

public static MobileServiceClient MobileService = new MobileServiceClient(
            "https://apiyd.azure-api.net/travelviewer/",
            "hmptBIjrEnAELyOgmmYPRPSOZlLpdo56",
            new InsertAPIMHeader()
            );

 

Running the app again and refreshing the list of ToDo items now works. Inspecting the request in Fiddler shows it now carries the APIM subscription key header –

GET https://apiyd.azure-api.net/travelviewer/tables/TodoItem?$filter=(complete%20eq%20false) HTTP/1.1
X-ZUMO-FEATURES: TT,TC
X-ZUMO-INSTALLATION-ID: 226d6ea7-2979-4653-96d4-a230128719c5
X-ZUMO-APPLICATION: hmptBIjrEnAELyOgmmYPRPSOZlLpdo56
Accept: application/json
User-Agent: ZUMO/1.2 (lang=Managed; os=Windows Store; os_version=--; arch=Neutral; version=1.2.21001.0)
X-ZUMO-VERSION: ZUMO/1.2 (lang=Managed; os=Windows Store; os_version=--; arch=Neutral; version=1.2.21001.0)
ocp-apim-subscription-key: 84095a7d792a47738464faae6bf950d3
Host: apiyd.azure-api.net
Accept-Encoding: gzip

and, importantly – I actually got a result, meaning that APIM accepted the request, forwarded it to the Mobile Services backend and relayed the result. Success!
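
As an aside – the DelegatingHandler isn’t the only option; as the error message above suggests, the subscription key can also be passed on the query string via the ‘subscription-key’ parameter, so a request of the following form should work too (shown here with my key, for illustration) –

https://apiyd.azure-api.net/travelviewer/tables/TodoItem?subscription-key=84095a7d792a47738464faae6bf950d3

With the MobileServiceClient generating the request URLs, though, the handler approach is much cleaner.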

Sample of custom PowerShell cmdlets to manage Azure ServiceBus queues

A customer I’m working with these days is investing in automating the provisioning and de-provisioning of a fairly complex solution on Azure using PowerShell and Azure Automation, to help lower their running costs and provide increased agility and resiliency.

Within their solution they use Azure Service Bus queues and topics, and unfortunately there are no PowerShell cmdlets to create and delete these.

Thankfully, creating PowerShell cmdlets is very easy, and with the rich SDK for Service Bus, writing a few cmdlets to perform the key actions around queue provisioning and de-provisioning took very little time.

In this post I wanted to outline the process I went through and share the sample I ended up producing.

First, I needed to get my environment ready – to develop a custom PowerShell cmdlet I needed a reference to System.Management.Automation.

This assembly gets installed as part of the Windows SDK into the folder C:\Program Files (x86)\Reference Assemblies\Microsoft\WindowsPowerShell, so I installed the SDK for Windows 8.1, which can be found here.

Once installed, I created a new Class Library project in Visual Studio and added the reference.

Given that I had never developed a cmdlet before, I started with a very simple one to wrap my head around it. It turns out that creating a simple cmdlet that returns the list of queues in an Azure Service Bus namespace is incredibly easy –

[Cmdlet(VerbsCommon.Get, "AzureServiceBusQueue")]
public class GetAzureServiceBusQueue : Cmdlet
{
    protected override void EndProcessing()
    {
        // build a Service Bus connection string from the namespace and SAS credentials
        string connectionString = string.Format(
            "Endpoint=sb://{0}.servicebus.windows.net/;SharedAccessKeyName={1};SharedAccessKey={2}",
            Namespace, SharedAccessKeyName, SharedAccessKey);
        NamespaceManager nm = NamespaceManager.CreateFromConnectionString(connectionString);
        // write the queues to the pipeline, enumerating the collection
        WriteObject(nm.GetQueues(), true);
    }
}

I created a class that inherits from Cmdlet in the System.Management.Automation namespace and added the Cmdlet attribute. It is the latter that provides the actual cmdlet name in the verb-noun structure.

 

I then added an override for EndProcessing, which creates an instance of the Service Bus NamespaceManager class from a connection string (in the code above I obviously removed the members I had added to hold the hardcoded credentials; you will need to add these), uses it to retrieve all the queues in the namespace, and writes them as the output of the cmdlet using the base class’ WriteObject method.

The true parameter tells PowerShell to enumerate over the responses.

Once compiled, I used Import-Module to load my cmdlet and called it using Get-AzureServiceBusQueue. The result was an output of all the properties of the queues returned. Magic.
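
For reference, that first run looked something like this (the assembly name is whatever your class library project produces – the one below is illustrative) –

Import-Module .\AzureServiceBusCmdlets.dll
Get-AzureServiceBusQueue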

OK – so the next step was obviously to stop hardcoding the connection string details – I needed a few properties for my cmdlet –

I removed my hard-coded members and added properties to the class as follows –

        [Parameter(Mandatory = true, Position = 0)]
        public string Namespace { get; set; }

        [Parameter(Mandatory = true, Position = 1)]
        public string SharedAccessKeyName { get; set; }

        [Parameter(Mandatory = true, Position = 2)]
        public string SharedAccessKey { get; set; }

Mandatory is self-explanatory. Position allows the user to pass the parameters without specifying their names, as long as they are specified in the correct order. I could now use this cmdlet in two ways –

Get-AzureServiceBusQueue -Namespace <namespace> -SharedAccessKeyName <key name> -SharedAccessKey <key>

or

Get-AzureServiceBusQueue <namespace> <key name> <key>

both yield exactly the same result.

Next I knew I needed to add more cmdlets – to create and remove queues, for example – and it seemed silly to re-create a namespace manager every time.

The next logical step was to create a cmdlet that creates a NamespaceManager and then pass that into any other queue-related cmdlet. I started by creating the Get-AzureServiceBusNamespaceManager cmdlet as follows –

[Cmdlet(VerbsCommon.Get, "AzureServiceBusNamespaceManager")]
public class GetAzureServiceBusNamespaceManager : Cmdlet
{
    [Parameter(Mandatory = true, Position = 0)]
    public string Namespace { get; set; }

    [Parameter(Mandatory = true, Position = 1)]
    public string SharedAccessKeyName { get; set; }

    [Parameter(Mandatory = true, Position = 2)]
    public string SharedAccessKey { get; set; }

    protected override void EndProcessing()
    {
        base.EndProcessing();
        string connectionString = string.Format(
            "Endpoint=sb://{0}.servicebus.windows.net/;SharedAccessKeyName={1};SharedAccessKey={2}",
            Namespace, SharedAccessKeyName, SharedAccessKey);
        // write the NamespaceManager to the pipeline so it can be piped into other cmdlets
        WriteObject(NamespaceManager.CreateFromConnectionString(connectionString));
    }
}

With this in place I removed the creation of the NamespaceManager from the Get-AzureServiceBusQueue cmdlet and added a parameter instead; it now looked like this –

[Cmdlet(VerbsCommon.Get, "AzureServiceBusQueue")]
public class GetAzureServiceBusQueue : Cmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public NamespaceManager NamespaceManager { get; set; }

    protected override void EndProcessing()
    {
        WriteObject(NamespaceManager.GetQueues(), true);
    }
}

I made sure to set the ValueFromPipeline property of the Parameter attribute to true; this allows me to pass the namespace manager as an object down the PowerShell pipeline and not just as a parameter. It meant I had two ways to use this cmdlet –

Get-AzureServiceBusNamespaceManager <namespace> <sas key name> <sas key> | `
Get-AzureServiceBusQueue

or using two separate commands –

$namespaceManager = Get-AzureServiceBusNamespaceManager <namespace> <sas key name> <sas key>
Get-AzureServiceBusQueue -NamespaceManager $namespaceManager

The last bit I wanted to add was an optional parameter allowing me to specify the path of the queue I’m interested in, so that I don’t have to retrieve all the queues every time. I also moved the code from EndProcessing to ProcessRecord, which means it will get called for every item piped in. In theory that allows a list of namespace managers to be pushed in, although I don’t really see that happening. The final cmdlet looks like this –

[Cmdlet(VerbsCommon.Get, "AzureServiceBusQueue")]
public class GetAzureServiceBusQueue : Cmdlet
{
    [Parameter(Mandatory = false, Position = 0)]
    public string[] Path { get; set; }

    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public NamespaceManager NamespaceManager { get; set; }

    protected override void ProcessRecord()
    {
        base.ProcessRecord();

        // if one or more paths were supplied, return only the matching queues
        IEnumerable<QueueDescription> queues = NamespaceManager.GetQueues();
        if (null != Path)
            queues = queues.Where(q => Path.Contains(q.Path));

        WriteObject(queues, true);
    }
}

Actually, as I started to think about the next few cmdlets, I realised they would all need the namespace manager parameter, so I extracted it to a base class and inherited from that.
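
The snippets that follow reference that base class, so here is a minimal sketch of what it might look like – the exact shape isn’t shown here, so treat it as an assumption –

public abstract class AzureServiceBusBaseCmdlet : Cmdlet
{
    // shared by all queue-related cmdlets; can be piped in from
    // Get-AzureServiceBusNamespaceManager
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public NamespaceManager NamespaceManager { get; set; }
}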

With this done, I added further cmdlets to remove and create queues (for the latter, I added a few more properties) –

[Cmdlet(VerbsCommon.Remove, "AzureServiceBusQueue")]
public class RemoveAzureServiceBusQueue : AzureServiceBusBaseCmdlet
{
    [Parameter(Mandatory = true, Position = 0)]
    public string[] Path { get; set; }

    protected override void ProcessRecord()
    {
        base.ProcessRecord();
        foreach (string p in Path)
        {
            if (NamespaceManager.QueueExists(p))
                NamespaceManager.DeleteQueue(p);
            else
                WriteError(new ErrorRecord(
                    new ArgumentException("Queue " + p + " not found in namespace " + NamespaceManager.Address.ToString()),
                    "1", ErrorCategory.InvalidArgument, NamespaceManager));
        }
    }
}

[Cmdlet("Create", "AzureServiceBusQueue")]
public class CreateAzureServiceBusQueue : AzureServiceBusBaseCmdlet
{
    [Parameter(Mandatory = true, Position = 0)]
    public string[] Path { get; set; }

    [Parameter(HelpMessage = "The maximum size in increments of 1024MB for the queue, maximum of 5120 MB")]
    public long? MaxSize { get; set; }

    [Parameter(HelpMessage = "The default time-to-live to apply to all messages, in seconds")]
    public long? DefaultMessageTimeToLive { get; set; }

    bool enablePartitioning;
    [Parameter()]
    public SwitchParameter EnablePartitioning
    {
        get { return enablePartitioning; }
        set { enablePartitioning = value; }
    }

    protected override void ProcessRecord()
    {
        base.ProcessRecord();
        foreach (string p in Path)
        {
            QueueDescription queue = new QueueDescription(p);

            if (MaxSize.HasValue)
                queue.MaxSizeInMegabytes = MaxSize.Value;
            if (DefaultMessageTimeToLive.HasValue)
                queue.DefaultMessageTimeToLive = TimeSpan.FromSeconds(DefaultMessageTimeToLive.Value);
            queue.EnablePartitioning = enablePartitioning;
            NamespaceManager.CreateQueue(queue);
        }
    }
}
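
Put together, provisioning and de-provisioning a queue from PowerShell might then look like this (the queue name and size below are my own examples) –

Get-AzureServiceBusNamespaceManager <namespace> <key name> <key> | `
Create-AzureServiceBusQueue -Path "orders" -MaxSize 1024

Get-AzureServiceBusNamespaceManager <namespace> <key name> <key> | `
Remove-AzureServiceBusQueue -Path "orders"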

Of course, this is by no means complete, but it does perform all three basic operations on queues and can easily be expanded to support everything the customer requires for their automation needs.

Microsoft Azure Media Indexer basics – Part 3

In this part of the series covering the basics of using the Azure Media Indexer to make speech in audio and/or video assets searchable with SQL Server full-text search, I will cover the basics of querying the database to find assets based on spoken words, and even listing the locations within these assets where the words were spoken.

If you haven’t already, I suggest that you look at the previous two posts – part 1 covers the basics of uploading and processing an asset to produce the audio index (AIB) and closed-caption files, and part 2 covers how to prepare the SQL environment and load the AIB into it to enable searching.

With AIB files loaded into a table enabled for full-text search, and with the Media Indexer SQL add-on installed, searching for spoken words in assets using the database is nearly as simple as using standard SQL full-text-search constructs – but not quite. Some differences and best practices exist, and I’ll try to cover the main ones here.

The first thing to note is that when querying over AIB files, the terms placed within CONTAINS or CONTAINSTABLE need to be encoded. This encoding can be done manually, but it is easier (and useful for other purposes – more on this next) to use the Retrieval class provided with the SQL add-on within the MSRA_AudioIndexing_Retrieval assembly. You can find both x86 and x64 versions in the [InstallDir]\Indexer_SQL_AddOn\bin folder.

A ‘standard’ full-text-search query that looks for the word ‘microsoft’ in the Title column of the Files table looks like this –

SELECT FileID FROM Files WHERE CONTAINS(Title,' FORMSOF (INFLECTIONAL,"microsoft") ')

Having been expanded by the Retrieval class, a similar query looking for the spoken word ‘microsoft’ in an AIB file contained in the AIB column of the table would look like this –

SELECT FileID FROM Files WHERE CONTAINS(AIB,' FORMSOF (INFLECTIONAL,"spoken:microsoft@@@") ')

The number of @ characters influences the confidence level in the matching that the SQL filter will apply during query execution – the more @ characters you include, the more inclusive the query will be.

As indicated, thankfully, one does not have to perform the encoding manually; a call to the ExpandQuery or BuildContainsTableClause static methods of the Retrieval class can be used –

string queryText = "microsoft speech";
string sqlContainsClause = Retrieval.ExpandQuery(queryText);
string sqlText = "select FileID ID from Files where contains(AIB,'" +
    sqlContainsClause + "')";

Running this SQL query against the database will return all rows where the requested terms appear within the row’s AIB data (the query can contain one or more words separated by spaces, or double-quoted phrase strings), but the magic does not stop there –

If you use CONTAINSTABLE the result set will include an initial ranking score for each match –

string queryText = "microsoft speech";
string sqlContainsTableClause =
    Retrieval.BuildContainsTableClause(queryText, "Files", "AIB", 5);
string sqlText = "select FileID ID from Files inner join " +
    sqlContainsTableClause +
    " AS CT on Files.FileID = CT.[KEY] ORDER BY Rank";

This initial ranking is a useful measure of the quality and number of the matches found, so it is useful to sort the records returned in descending order of this ranking.

The initial ranking is approximate, but the Retrieval class also provides a ComputeScore method, which takes a query and the AIB (and two other parameters I’ll touch upon shortly) and provides a more thorough ranking.

Because this requires the entire AIB’s contents, it makes sense not to run it over the entire result set, and this is exactly where the approximate ranking pays off – the initial flow of a query would be to run the query using CONTAINSTABLE, sort by ranking and take the top n results.

Then one can choose to run the deeper scoring on these selected results by retrieving the AIBs for them from the database and calling the ComputeScore method.

Another thing the ComputeScore method can do is generate a set of snippets from the AIB. If snippets are requested (by setting the third parameter of the ComputeScore method to true), the process will return, alongside the score, a list of snippets, each representing a section of the media file where one or more of the terms sought were found. For each snippet a start and end time is supplied (in seconds from the start of the media) as well as the score and even the extracted text as a string.

This allows calling applications to be very detailed – an application can display the snippets of text where query terms were found and ‘deep link’ into the media allowing the user to play the relevant section of the media.

The fourth parameter to the ComputeScore method is the maximum number of seconds to include in either side of a term found when extracting the snippet.

A call to the method looks like this –

Retrieval.ScoringResult scoringResult = Retrieval.ComputeScore(queryText, result.AIB, false, snippetContext);
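
When snippets are requested (third parameter set to true), iterating over the result might look something like the sketch below – note that the member names on the result are my assumptions, inferred from the shape of the JSON response shown later in this post, rather than documented API –

Retrieval.ScoringResult scoringResult =
    Retrieval.ComputeScore(queryText, result.AIB, true, snippetContext);

// each snippet marks a section of the media where one or more query terms were found
foreach (var snippet in scoringResult.Snippets) // 'Snippets' is an assumed member name
{
    // start/end times are in seconds from the start of the media
    Console.WriteLine("{0}s - {1}s: {2}",
        snippet.StartTime, snippet.EndTime, snippet.Text);
}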

Again – this computation can be comparatively expensive, so it would make sense to run it either as a second step after the initial database query, filtered on the approximate score, or even, in some cases, to further limit the initial result set by getting the detailed scoring without the snippets, and only generating snippets for records actually rendered on screen.

The power is that one has all the options.

Overall, the process looks like this –

[image: diagram of the overall query flow]

Naturally – the AIB file can be quite large (a 40-minute MP3 file I processed resulted in a 991KB AIB file), so transferring it across the network every time a user runs a query may not be wise.

For this reason I think the right pattern is to implement the query logic within a simple Web API which can be placed on the SQL Server itself. This allows for the most efficient transfer of the AIB data, with only the analysis result, which is a much smaller data set, returned to the client.

A client can call the API, passing in the terms to look for, and get back a list of matches with snippet information; because this snippet (and hits) information contains time positions within the asset (in seconds), the client can use these to start playback of the asset at the relevant location (and even display the text relevant to the match).

An example of a response from a Web API I created looks like this (I searched for ‘money’ and ‘criminal’ in a You and Yours BBC Radio 4 podcast; the response is truncated here) –

[
  {
    "Title": "test title",
    "URL": "assets\\yyhighlights_20141010-1400a.mp3",
    "Rank": 3,
    "Duration": 2375.840002,
    "Snippets": [
      {
        "Hits": [
          {
            "TermScore": 0,
            "EndPos": 8,
            "StartPos": 0,
            "EndTime": 102.210007,
            "StartTime": 101.57,
            "QueryTerm": "criminal"
          },
          {
            "TermScore": 0,
            "EndPos": 31,
            "StartPos": 26,
            "EndTime": 104.01,
            "StartTime": 103.670006,
            "QueryTerm": "money"
          }
        ],
        "EndTime": 104.25,
        "StartTime": 101.57,
        "Text": "criminal using her to get money out "
      },
      {
        "Hits": [
          {
            "TermScore": 0,
            "EndPos": 70,
            "StartPos": 65,
            "EndTime": 130.17,
            ...

Another possible approach is to use SQLCLR to run the scoring method directly in the database. This would certainly be more efficient in terms of data transfer, but not everyone likes this approach. Again – it’s good to have options.

Microsoft Azure Media Indexer basics – Part 2

In my previous post I described the Azure Media Indexer at a high level and walked through the steps required to produce the AIB file (amongst other things) for a video or audio asset.

Whilst AIB files can be processed individually (as will be touched upon in the next post), the real power comes from using them in conjunction with the Media Indexer SQL Server add-on, allowing the result of the media indexing job to be integrated into SQL Server’s full-text search capability.

To do that, a SQL Server instance needs to be prepared with the SQL add-on, and this short post will discuss the steps taken to prepare a database to be searchable.

The first step is to download the Azure Media Indexer SQL Add-on which, at the time of writing, can be found here.

Once downloaded, run either the x86 or x64 installer as required; this will perform the installation steps and expand the SDK files (by default into C:\Program Files\Azure Media Services Indexer SDK).

The best next step at this point, in my opinion at least, is to read the user guide delivered with the SDK, which can be found under the [InstallDir]\Indexer_SQL_AddOn\Docs folder within the installation location. It explains quite a lot about how the add-on works and how it should be used, as well as describing the capabilities and usage of the classes available within the MSRA_AudioIndexing_Retrieval.dll assembly that ships with the SDK.

The installer will have configured the full-text search filter within SQL Server, so the next step is to have a database in which the AIB data will be stored, and to ensure that full-text search is enabled on that database –

[image: enabling full-text search on the database]

Within the database, you can create a table to hold the AIB data. The minimum requirement, as outlined by the user guide, is to have two columns – a column called ‘AIB’ of type varbinary(max) to hold the bytes of the AIB file, and a column called ‘Ext’ of type varchar(4) to hold the extension, which should always be ‘.aib’.

This could be a dedicated table or columns added to an existing table. Obviously you would want to include some sort of reference to the original content and metadata, so that this information can be available alongside the search results.
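
Such a table might look something like the following sketch – only the AIB and Ext columns (and a unique key index for full-text search) are mandated; the other columns are illustrative, matching the ones used elsewhere in this series –

CREATE TABLE files (
    FileID      INT IDENTITY CONSTRAINT IX_Files PRIMARY KEY, -- the key index used by full-text search
    Title       NVARCHAR(256),
    Description NVARCHAR(MAX),
    Duration    FLOAT,
    FilePath    VARCHAR(512),
    Ext         VARCHAR(4),       -- always '.aib'
    AIB         VARBINARY(MAX)    -- the audio index blob bytes
)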

The AIB column needs to be configured for full-text search. A SQL script to create a sample table exists in the SDK at [InstallDir]\code\Setup.sql; below is the part of the script that creates the full-text search catalogue and associates the AIB and FilePath columns with it –

CREATE FULLTEXT CATALOG AudioSearchFTCat

CREATE FULLTEXT INDEX ON
	files (AIB TYPE COLUMN ext LANGUAGE 43017, 
			FilePath LANGUAGE English)
			KEY INDEX IX_Files
			ON AudioSearchFTCat
	WITH CHANGE_TRACKING AUTO

and that is it!

Following these steps, the SQL Server instance and the table created are ready to be used to index media files; a simple insert, such as the example below, can then add data to the table, which will then be queryable –

buffer = System.IO.File.ReadAllBytes(filePath);
using (SqlConnection con = new SqlConnection(m.ConnectionString))
{
    con.Open();
    using (SqlCommand cmd = new SqlCommand(
        "insert into files (Title, Description, Duration, FilePath, Ext, AIB) " +
        "values (@Title, @Description, @Duration, @FilePath, @Ext, @AIB)", con))
    {
        cmd.Parameters.Add(new SqlParameter("@Title", "test title"));
        cmd.Parameters.Add(new SqlParameter("@Description", "test description"));
        cmd.Parameters.Add(new SqlParameter("@Duration", duration));
        cmd.Parameters.Add(new SqlParameter("@FilePath", "assets\\" + assetName));
        cmd.Parameters.Add(new SqlParameter("@Ext", ".aib"));
        // the AIB file's bytes go into the varbinary(max) column
        cmd.Parameters.Add(new SqlParameter("@AIB", SqlDbType.VarBinary,
            buffer.Length, ParameterDirection.Input, false, 0, 0,
            "AIB", DataRowVersion.Current, (SqlBinary)buffer));
        int result = cmd.ExecuteNonQuery();
        tick4.Visibility = System.Windows.Visibility.Visible; // UI feedback in my sample app
    }
}

In the third post in this series I go over the basics of querying the table for spoken words in the source media file.

Microsoft Azure Media Indexer basics – Part 1

One of the most recent additions to Microsoft Azure Media Services is the Media Indexer, which came out of Microsoft Research’s project MAVIS.

In short – the Indexer is an Azure Media Services processor that performs speech-to-text on video or audio assets and produces both closed-caption files and an Audio Indexer Blob (AIB) file – a binary-formatted file containing the extracted text with specific location information and match ranking.

The indexer can be used simply to automate the production of closed captions for audio and video files and this could be further enhanced by leveraging Bing translation to create closed captions in multiple languages.

It does, however, support a much broader use for the enterprise – in addition to the media indexer processor, the team has also released an add-on to SQL Server that plugs the AIB file processing into SQL Server’s full-text-search capability, allowing clients to query AIB contents, stored as a varbinary column in a SQL table, for text in the originating spoken content.

Making speech in audio and video files searchable is incredibly powerful, with uses for compliance and auditing (for example in financial services), proactive fraud detection, sentiment analysis for call centres and many more scenarios.

Indexing an asset involves several steps - 

  1. Creation of an input asset in Media Services
  2. Submitting a job with the indexer processor. This will produce an output asset with multiple files – closed captions, keywords xml and AIB.
  3. Download the output asset’s files
  4. Load the AIB file into a table in SQL Server (configured for full text search with the add-on installed)

With the AIB file loaded clients are ready to run full text search queries against the AIB column as needed.

Optionally the client (or a service wrapping the database – see the coming post on that) can use the Retrieval class, supplied with the SQL add-on, to further analyse the AIB of a given record (found through full-text search) and identify the specific locations in the asset where the queried text was found, with the specific matching score for each location.

In this first post I’ll run through the first three steps – the basics of running the media indexer. A second post will walk through the configuration of a SQL Server environment to support the search, and a third will discuss execution of the searches themselves.

    1. Creating an input Asset

    The first step is to create the input asset the job should work on, in line with other Media Services scenarios, an example of which can be found here.

    The one thing to note is that the asset could be a media asset or a simple file containing one or more URLs to source content.
    This second option is very interesting as it allows indexing media without having to upload it to the content store first, as long as it is accessible via a URL, either anonymously or using basic authentication (a configuration file can be used to provide the credentials – more on this shortly).

    2. Submitting the indexer job

    Submitting a job starts with finding the correct processor. For the media indexer the processor name is “Azure Media Indexer”, and the following method can return the latest version of it –

private static IMediaProcessor GetLatestMediaProcessorByName(string mediaProcessorName)
{
    var processor = _context.MediaProcessors
                            .Where(p => p.Name == mediaProcessorName)
                            .ToList()
                            .OrderBy(p => new Version(p.Version))
                            .LastOrDefault();

    if (processor == null)
        throw new ArgumentException(
            string.Format("Unknown media processor {0}", mediaProcessorName));

    return processor;
}

With a processor at hand a job can be created –

IJob job = _context.Jobs.Create("Index asset " + assetName);

A task is added to the job, with the indexer processor provided as the first parameter and an optional configuration file as the second, and the input and output assets are linked –

ITask task = job.Tasks.AddNew("Indexing task", indexer, configuration, TaskOptions.None);
task.InputAssets.Add(asset);
task.OutputAssets.AddNew(assetName + ".Indexed", AssetCreationOptions.None);

The configuration file is discussed here – if empty, the default behaviour will be applied, which is to process the first file in the asset provided, but the configuration file can include a list of assets to index and/or the name of an asset which is a text file containing a URL list to process (a simple list of URLs delimited by CR-LF). It can also include the username and password to use to access URLs if basic authentication is needed.
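
One step the snippets above don’t show is actually submitting the job and waiting for it to complete; with the Media Services SDK that looks something like this –

job.Submit();

// block until the indexing job finishes – fine for a console sample,
// though polling or a notification endpoint would suit a real service better
job.GetExecutionProgressTask(CancellationToken.None).Wait();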

3. Download output assets

Once the job has completed, the output asset’s files can be downloaded –

IAsset indexedAsset = _context.Assets.Where(a => a.Name == asset).FirstOrDefault();
foreach (IAssetFile file in indexedAsset.AssetFiles)
{
    file.Download(Path.Combine(localFolder, file.Name));
}

This should result in 4 files downloaded for each input asset –

Two closed-caption files in two different formats – TTML and SMI – a keywords file – kw.xml – and, crucially, the AIB file.

The closed-caption files are very useful if you want closed captions when playing back the media file (Windows Media Player, for example, will automatically look for and use an SMI file with the same name as the original media) or if you want to read the parsed text to assess the quality of the parsing.

The AIB file, however, is where the real power lies, as it holds, in binary format, the parsed text, including accuracy scoring alongside location information, and is what is used to drive the full-text search capability.

In the next post I discuss how to prepare SQL Server to support loading and searching the AIB contents.

BizTalk Services and partitioned queues….

….don’t mix at the moment.

If you’re deploying a BizTalk Services project with a queue source, and are getting the error

Failed to connect to the ServiceBus using specified configuration

and/or if you have a queue destination and are getting an error that contains

This client is not supported for a partitioned entity

check whether you’re using queues that have been created with the default option to enable partitioning –

[image: the queue creation dialog with the default ‘enable partitioning’ option]
