Sample of custom PowerShell cmdlets to manage Azure Service Bus queues

A customer I’m working with these days is investing in automating the provisioning and de-provisioning of a fairly complex solution on Azure, using PowerShell and Azure Automation, to help lower their running costs and provide increased agility and resiliency.

Within their solution they use Azure Service Bus Queues and Topics and, unfortunately, there are no PowerShell cmdlets to create and delete these.

Thankfully, creating PowerShell cmdlets is very easy and, with the rich SDK for Service Bus, writing a few cmdlets to perform the key actions around queue provisioning and de-provisioning took very little time.

In this post I wanted to outline the process I went through and share the sample I ended up producing.

First, I needed to get my environment ready – to develop a custom PowerShell cmdlet I needed a reference to System.Management.Automation.

This assembly gets installed as part of the Windows SDK into the folder C:\Program Files (x86)\Reference Assemblies\Microsoft\WindowsPowerShell, so I installed the SDK for Windows 8.1, which can be found here.

Once installed, I created a new Class Library project in Visual Studio and added the reference.

Given that I had never developed a cmdlet before, I started with a very simple one to wrap my head around it. It turns out that creating a simple cmdlet that returns the list of queues in an Azure Service Bus namespace is incredibly easy –

[Cmdlet(VerbsCommon.Get, "AzureServiceBusQueue")]
public class GetAzureServiceBusQueue : Cmdlet
{
    protected override void EndProcessing()
    {
        // Namespace, SharedAccessKeyName and SharedAccessKey are hardcoded string members on the class (see the note below)
        string connectionString = string.Format("Endpoint=sb://{0}.servicebus.windows.net/;SharedAccessKeyName={1};SharedAccessKey={2}", Namespace, SharedAccessKeyName, SharedAccessKey);
        NamespaceManager nm = NamespaceManager.CreateFromConnectionString(connectionString);
        WriteObject(nm.GetQueues(), true);
    }
}

I created a class, inherited from Cmdlet in the System.Management.Automation namespace and added the Cmdlet attribute; it is the latter that provides the actual cmdlet name in the verb-noun structure.

I then added an override for EndProcessing which creates an instance of the Service Bus NamespaceManager class from a connection string (in the code above I have obviously removed the members I added to hold the hardcoded credentials – you will need to add these), uses it to retrieve all the queues in the namespace and writes them as the cmdlet's output using the base class' WriteObject method.

The true parameter tells PowerShell to enumerate over the returned collection rather than write it out as a single object.
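
For reference, the removed members were nothing more than a few hardcoded strings on the class – roughly like this (the values are placeholders, of course) –

// hardcoded credentials used only while experimenting – replaced with proper parameters later in this post
private string Namespace = "<namespace>";
private string SharedAccessKeyName = "<key name>";
private string SharedAccessKey = "<key>";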

Once compiled, I used Import-Module to load my cmdlet and called it using Get-AzureServiceBusQueue. The result was an output of all the properties of the queues returned. Magic.
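
For completeness, loading and running it looked roughly like this (the dll name is simply whatever your class library project produces) –

Import-Module .\AzureServiceBusCmdlets.dll
Get-AzureServiceBusQueue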

Ok – so the next step was obviously to stop hardcoding the connection string details – I needed a few properties for my cmdlet –

I removed my hard-coded members and added properties to the class as follows –

[Parameter(Mandatory = true, Position = 0)]
public string Namespace { get; set; }

[Parameter(Mandatory = true, Position = 1)]
public string SharedAccessKeyName { get; set; }

[Parameter(Mandatory = true, Position = 2)]
public string SharedAccessKey { get; set; }

Mandatory is self-explanatory. Position allows the user to pass the parameters without specifying their names, by providing them in the correct order. I could now use this cmdlet in two ways –

Get-AzureServiceBusQueue -Namespace <namespace> -SharedAccessKeyName <key name> -SharedAccessKey <key>

or

Get-AzureServiceBusQueue <namespace> <key name> <key>

both yield exactly the same result.

Next, I knew I needed to add more cmdlets – to create and remove queues, for example – and it seemed silly to re-create a namespace manager every time.

The next logical step was to create a cmdlet that creates a NamespaceManager and then pass that into any other queue-related cmdlet. I started by creating the Get-AzureServiceBusNamespaceManager cmdlet as follows –

[Cmdlet(VerbsCommon.Get, "AzureServiceBusNamespaceManager")]
public class GetAzureServiceBusNamespaceManager : Cmdlet
{
    [Parameter(Mandatory = true, Position = 0)]
    public string Namespace { get; set; }

    [Parameter(Mandatory = true, Position = 1)]
    public string SharedAccessKeyName { get; set; }

    [Parameter(Mandatory = true, Position = 2)]
    public string SharedAccessKey { get; set; }

    protected override void EndProcessing()
    {
        base.EndProcessing();
        string connectionString = string.Format("Endpoint=sb://{0}.servicebus.windows.net/;SharedAccessKeyName={1};SharedAccessKey={2}", Namespace, SharedAccessKeyName, SharedAccessKey);
        WriteObject(NamespaceManager.CreateFromConnectionString(connectionString));
    }
}

With this in place, I removed the creation of the NamespaceManager from the Get-AzureServiceBusQueue cmdlet and added a parameter instead; it now looked like this –

[Cmdlet(VerbsCommon.Get, "AzureServiceBusQueue")]
public class GetAzureServiceBusQueue : Cmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public NamespaceManager NamespaceManager { get; set; }

    protected override void EndProcessing()
    {
        WriteObject(NamespaceManager.GetQueues(), true);
    }
}

I made sure to set the ValueFromPipeline property of the Parameter attribute to true; this allows me to pass the namespace manager as an object down the PowerShell pipeline and not just as a parameter. It meant I had two ways to use this cmdlet –

Get-AzureServiceBusNamespaceManager <namespace> <sas key name> <sas key> | `
Get-AzureServiceBusQueue

or using two separate commands –

$namespaceManager = Get-AzureServiceBusNamespaceManager <namespace> <sas key name> <sas key>
Get-AzureServiceBusQueue -NamespaceManager $namespaceManager

The last bit I wanted to add was an optional parameter allowing me to specify the path of the queue I’m interested in, so that I don’t have to retrieve all the queues every time. I also moved the code from EndProcessing to ProcessRecord, which means it will get called for every item piped in. In theory that allows a list of namespace managers to be pushed in, although I don’t really see that happening. The final cmdlet looks like this –

[Cmdlet(VerbsCommon.Get, "AzureServiceBusQueue")]
public class GetAzureServiceBusQueue : Cmdlet
{
    [Parameter(Mandatory = false, Position = 0)]
    public string[] Path { get; set; }

    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public NamespaceManager NamespaceManager { get; set; }

    protected override void ProcessRecord()
    {
        base.ProcessRecord();

        IEnumerable<QueueDescription> queues = NamespaceManager.GetQueues();
        if (null != Path)
            queues = queues.Where(q => Path.Contains(q.Path));

        WriteObject(queues, true);
    }
}

Actually, as I started to think about the next few cmdlets I realised they would all need the namespace manager parameter, so I extracted that to a base class and inherited from that.
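
The base class isn’t spelled out in this post, but a minimal version – which does nothing more than hold the shared NamespaceManager parameter – would look something like this –

public abstract class AzureServiceBusBaseCmdlet : Cmdlet
{
    // every Service Bus cmdlet needs a namespace manager, taken as a parameter or from the pipeline
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public NamespaceManager NamespaceManager { get; set; }
}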

With this done, I added further Cmdlets to remove and create queues (for the latter, I’ve added a few more properties) –

[Cmdlet(VerbsCommon.Remove, "AzureServiceBusQueue")]
public class RemoveAzureServiceBusQueue : AzureServiceBusBaseCmdlet
{
    [Parameter(Mandatory = true, Position = 0)]
    public string[] Path { get; set; }

    protected override void ProcessRecord()
    {
        base.ProcessRecord();
        foreach (string p in Path)
        {
            if (NamespaceManager.QueueExists(p))
                NamespaceManager.DeleteQueue(p);
            else
                WriteError(new ErrorRecord(new ArgumentException("Queue " + p + " not found in namespace " + NamespaceManager.Address.ToString()), "1", ErrorCategory.InvalidArgument, NamespaceManager));
        }
    }
}
 [Cmdlet("Create","AzureServiceBusQueue")]
    public class CreateAzureServiceBusQueue : AzureServiceBusBaseCmdlet
    {
        [Parameter(Mandatory=true,Position=0)]
        public string[] Path { get; set; }
        [Parameter(HelpMessage="The maximum size in increments of 1024MB for the queue, maximum of 5120 MB")]
        public long? MaxSize { get; set; }

        [Parameter(HelpMessage="The default time-to-live to apply to all messages, in seconds")]
        public long? DefaultMessageTimeToLive { get; set; }

        bool enablePartitioning;
        [Parameter()]
        public SwitchParameter EnablePartitioning
        {
            get { return enablePartitioning; }
            set { enablePartitioning = value; }
        }
        protected override void ProcessRecord()
        {
            base.ProcessRecord();
            foreach (string p in Path)
            {
                QueueDescription queue = new QueueDescription(p);

                if (MaxSize.HasValue)
                    queue.MaxSizeInMegabytes = MaxSize.Value;
                if (DefaultMessageTimeToLive.HasValue)
                    queue.DefaultMessageTimeToLive = new TimeSpan(0, 0, 0, int.Parse(DefaultMessageTimeToLive.Value.ToString()));
                queue.EnablePartitioning = enablePartitioning;
                NamespaceManager.CreateQueue(queue);
            }
        }
    }
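
Put together, creating and then removing a queue now looks something like this (the queue name here is just an example) –

Get-AzureServiceBusNamespaceManager <namespace> <key name> <key> | Create-AzureServiceBusQueue -Path MyQueue -EnablePartitioning

Get-AzureServiceBusNamespaceManager <namespace> <key name> <key> | Remove-AzureServiceBusQueue -Path MyQueue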

Of course, this is by no means complete, but it does perform all three basic operations on queues and can easily be expanded to support everything that the customer requires for their automation needs.
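
As a taste of that expansion – and purely as a sketch rather than anything from the customer’s solution – a matching cmdlet for listing topics would follow exactly the same pattern –

[Cmdlet(VerbsCommon.Get, "AzureServiceBusTopic")]
public class GetAzureServiceBusTopic : AzureServiceBusBaseCmdlet
{
    protected override void ProcessRecord()
    {
        base.ProcessRecord();
        // GetTopics returns the TopicDescription of every topic in the namespace
        WriteObject(NamespaceManager.GetTopics(), true);
    }
}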

Caution required with BizTalk’s SB-Messaging adapter

BizTalk 2013 ships with a native adapter for the Windows Azure Service Bus – the SB-Messaging adapter.

The adapter provides a very easy-to-use approach to receiving and sending messages to and from a Service Bus queue/topic/subscription but, as it turns out, it may be a little simplistic or – at the very least – some caution is required, as I learnt with a customer yesterday –

The customer uses BizTalk to read a message from a Service Bus queue using the adapter. In the pipeline they ‘debatch’ the incoming message into multiple messages that get published to BizTalk’s message box.

This all worked just fine during testing, but when the customer moved to performance testing they hit a point after which they got a lot of errors such as the following –

The adapter "SB-Messaging" raised an error message. Details "System.ObjectDisposedException: Cannot access a disposed object.

Object name: ‘Microsoft.ServiceBus.Messaging.Sbmp.RedirectBindingElement+RedirectContainerChannelFactory`1[System.ServiceModel.Channels.IRequestSessionChannel]’.

The adapter "SB-Messaging" raised an error message. Details "Microsoft.ServiceBus.Messaging.MessageLockLostException: The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue.

When reading from the Service Bus the client can employ one of two strategies – a destructive read, whereby the message is removed from the queue as soon as it is read, or a ‘peek lock’ strategy, where the read message is locked and effectively hidden on the queue for a duration, allowing the client to process the message and then come back and either confirm that it had successfully processed the message or abandon it, effectively putting it back on the queue.
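
Outside of BizTalk, in code, the difference between the two boils down to the ReceiveMode used when creating the client – a rough sketch, with the connection string and queue name as placeholders –

// destructive read – the message is removed from the queue as soon as it is received
QueueClient destructiveClient = QueueClient.CreateFromConnectionString(connectionString, "myqueue", ReceiveMode.ReceiveAndDelete);
BrokeredMessage message = destructiveClient.Receive();

// peek-lock – the message is locked (hidden) for the lock duration and must be explicitly completed or abandoned
QueueClient peekLockClient = QueueClient.CreateFromConnectionString(connectionString, "myqueue", ReceiveMode.PeekLock);
BrokeredMessage lockedMessage = peekLockClient.Receive();
try
{
    // ...process the message...
    lockedMessage.Complete();   // removes the message from the queue
}
catch
{
    lockedMessage.Abandon();    // releases the lock, putting the message back on the queue
}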

The peek-lock strategy is very useful to ensure no message loss and BizTalk’s SB-Messaging adapter makes good use of it – until the received message is persisted in the message box it is not removed from the queue and hence zero message loss can be guaranteed.

On the flip side, it means that a message may be delivered more than once and this has to be catered for.
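
Continuing the sketch above, one simple way to spot a redelivery when reading the queue directly in code is the DeliveryCount property on the received message –

BrokeredMessage msg = peekLockClient.Receive();
if (msg != null && msg.DeliveryCount > 1)
{
    // the message has been delivered before – a previous attempt failed or its lock expired,
    // so processing needs to be idempotent (or the duplicate detected and discarded)
}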

In this customer’s case, the debatching they performed meant that, under load, it took BizTalk too long to complete the debatching and persist all the messages to the message box. That meant it did not get back to the queue to complete the message before the lock expired, resulting in the errors above. Unfortunately this also caused a snowball effect: another BizTalk instance picked up the same message and the issue repeated itself, with the large number of messages being persisted to the message box increasing the load on the server and, with it, the duration of the receive pipeline processing…

Once identified, they found a quick workaround, at least in the first instance – remove the debatching from the Service Bus receive port and move it to a later stage in their BizTalk processing. This, I think, is a good approach, as it lets them ‘take ownership’ of the message in BizTalk, removing it from the Service Bus, before starting the heavy processing.

Another approach is to change the lock duration property on the queue/subscription in question; the default is 60 seconds. This could buy time and, with careful load-testing, a reasonable value may be found, but – of course – it does not eliminate the risk, it just pushes it out further.
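
Changing the lock duration can be done in the portal or in code through the NamespaceManager – roughly like this, with 120 seconds purely as an illustrative value (the maximum allowed is 5 minutes) –

QueueDescription description = namespaceManager.GetQueue("myqueue");
description.LockDuration = TimeSpan.FromSeconds(120);
namespaceManager.UpdateQueue(description);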

Asking in the virtual corridors of Solidsoft, it appears we have seen this issue with at least two other customers. I think the SB-Messaging adapter should support both strategies and allow the customer to choose, perhaps flagging the need to set the lock duration value on the queue/subscription as required.

 

Cross posted on the Solidsoft blog

Enabling Hybrid Cloud/On-Premises Solutions

Paolo Salvatori wrote a fantastic paper on How to integrate a BizTalk Server application with Windows Azure Service Bus Queues and Topics. Being an Azure guy at present but a BizTalk guy for the past 11 years, this is “right up my street”, as they say, although I do think this article will be very useful to anyone interested in hybrid solutions and the messaging capabilities in the cloud, with or without BizTalk in the picture, as Paolo does a great job of introducing the Service Bus capabilities and describing some typical scenarios.

This is an important topic – as much as we’d like to see everything on Windows Azure, enterprises will naturally not magically teleport their entire IT estate into the cloud.
Some applications will probably remain on premises for a while, if not forever, and there are some very valid reasons for that.

What this means, of course, is that any enterprise that is half-serious about cloud adoption needs to consider how to support the hybrid model where some applications are in the cloud whilst others are on premises and how to do cloud/on-premises integration.

The Windows Azure platform, with the Service Bus capabilities – the relay service, queues and topics, as well as the upcoming integration capabilities – enables the hybrid model easily yet robustly, using familiar concepts and tools.

Couple this with a strong enterprise integration solution – one implemented on BizTalk Server, for example – which provides the ‘gateway’ to and from the cloud within the enterprise, and you can take a federated approach to an enterprise service bus, with one leg on the ground and another in the cloud.
