Mule ESB and ActiveMQ - A Perfect Match

by David Dossot, Author of Mule In Action

ActiveMQ in Action, an upcoming book from Manning Publications, may well end up being the perfect companion book for Mule in Action. Thanks to Mule ESB’s native support for Apache ActiveMQ and its capacity to transparently use Spring for advanced configuration needs, Mule is certainly the best ESB for tapping the power of this JMS provider.

In this article, we’ll review a few common usage topologies of Mule and ActiveMQ, then take a look at some configuration samples, and finally discuss a couple of production concerns.

Common Usage Topologies

Embedded Non-Persistent Broker

An in-memory, non-persistent ActiveMQ broker accessed via its VM protocol (not to be confused with Mule’s VM transport) is an excellent option for applications that need the richness of JMS semantics but can afford to lose messages. High-performance message dispatching applications are a typical example.

Another domain where this topology truly shines is testing. In the same way that in-memory databases are commonly used for unit and integration tests, a non-persistent embedded ActiveMQ broker allows you to test the JMS elements of a Mule configuration in the most realistic manner possible.

 

Embedded Persistent Broker

This topology is a variation of the previous one, where the ActiveMQ broker has been configured to enable persistence of the JMS messages it handles. It is mainly useful for scenarios where a single Mule instance is involved, or where each instance and the JMS messages it processes can be functionally siloed.

A typical use case is a Mule instance that uses JMS queues internally to reliably exchange messages between its services. In the event of a crash, messages pending delivery will not be processed until Mule, and with it the embedded ActiveMQ broker, has been restarted.

Remote Master/Slave Broker

In this topology, communications between the Mule node and the ActiveMQ brokers happen over the wire, usually via ActiveMQ’s TCP transport. Consequently, performance is lower than with the previous two patterns. Moreover, it is necessary to configure Mule to handle the case where connecting to a remote broker isn’t possible.
 
This topology is very common in production because of the high availability gained by deploying ActiveMQ as a pair of master and slave brokers. It is also a standard practice to have JMS providers deployed and operated in a centralized manner in corporate environments.

Embedded Networked Broker

This topology relies on the capacity of ActiveMQ brokers to engage in a network of brokers. This advanced feature transparently provides Mule with distributed JMS queues and topics. Moreover, by co-locating Mule and the ActiveMQ broker it connects to within the same JVM, the network of brokers is reached with the convenience of in-memory access.
 
A typical use case for this topology is a canonical ESB deployment where all communications between Mule services are routed over JMS. External applications can use the same principle and reach Mule nodes via the network of brokers, leaving to ActiveMQ the burden of delivering messages to the right consumer. One drawback of this topology is that shutting down an ActiveMQ broker that still holds undelivered messages will delay their delivery until it has been restarted.
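
To make this more concrete, a bare-bones sketch of such an embedded networked broker, declared as a Spring bean next to its Mule connector, could look like the following (broker and host names are purely illustrative, and the usual spring, p and jms namespace declarations are assumed):

<!-- Embedded broker that joins the network of brokers; persistence is kept on
     so that undelivered messages survive a restart of this node -->
<spring:bean name="networkedBroker"
             class="org.apache.activemq.broker.BrokerService"
             init-method="start" destroy-method="stop"
             p:brokerName="muleNode1"
             p:persistent="true">
    <!-- Accept connections from other brokers and external clients -->
    <spring:property name="transportConnectorURIs">
        <spring:list>
            <spring:value>tcp://0.0.0.0:61616</spring:value>
        </spring:list>
    </spring:property>
    <!-- Reach out to the other node(s) of the broker network -->
    <spring:property name="networkConnectorURIs">
        <spring:list>
            <spring:value>static:(tcp://mule-node-2:61616)</spring:value>
        </spring:list>
    </spring:property>
</spring:bean>

<!-- Mule reaches the co-located broker in memory, by its broker name -->
<jms:activemq-connector name="jmsConnector"
                        brokerURL="vm://muleNode1?create=false" />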
 
Now that you’re convinced Mule and ActiveMQ form a very powerful and versatile duo, let’s look at some bits of configuration.

Configuration Samples

As mentioned in the opening paragraph, you can access the full gamut of ActiveMQ’s features thanks to a dedicated Mule connector and Mule’s native support for Spring XML configuration elements.
 
So let’s look at two extremes based on the first two topologies we’ve discussed above.

From Easy...

The following configuration sample creates an in-memory non-persistent ActiveMQ broker and a Mule connector attached to it:
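(The listing below is a minimal sketch: the jms prefix is assumed to be mapped to Mule’s JMS transport namespace for the Mule version in use, and the connector name is arbitrary.)

<!-- ActiveMQ's VM transport spins up an embedded broker on first use;
     broker.persistent=false keeps it entirely in memory -->
<jms:activemq-connector name="jmsConnector"
                        brokerURL="vm://localhost?broker.persistent=false"
                        specification="1.1" />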
 

 
 
That is all you need to run JMS-based integration tests!
 
The good news is that you can easily swap such a test configuration for a production-grade one, as none of the services relying on this connector will be able to tell the difference between an in-memory ActiveMQ broker and a remote one. This can be done by leveraging Mule’s capacity to load several configuration files side by side: if you extract your connector configuration into a dedicated file, you can then easily substitute another one between test and production deployments.
 
Let’s now look at what such a production-grade configuration could be.

... To Intense!

The following shows the configuration of an embedded persistent ActiveMQ broker, complete with a redelivery policy and dead-letter queues:
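(The listing below is a sketch: bean names and numeric values are illustrative, the spring and p namespaces are assumed to be declared alongside the jms one, and attribute names follow ActiveMQ’s Spring-friendly classes and Mule’s JMS transport.)

<spring:beans>
    <!-- Embedded, persistent broker with per-destination dead-letter queues
         (each destination gets its own DLQ.-prefixed queue) -->
    <spring:bean name="amqBroker"
                 class="org.apache.activemq.broker.BrokerService"
                 init-method="start" destroy-method="stop"
                 p:brokerName="muleAmqBroker"
                 p:persistent="true"
                 p:useJmx="true">
        <spring:property name="destinationPolicy">
            <spring:bean class="org.apache.activemq.broker.region.policy.PolicyMap">
                <spring:property name="defaultEntry">
                    <spring:bean class="org.apache.activemq.broker.region.policy.PolicyEntry">
                        <spring:property name="deadLetterStrategy">
                            <spring:bean
                                class="org.apache.activemq.broker.region.policy.IndividualDeadLetterStrategy"
                                p:queuePrefix="DLQ."
                                p:useQueueForQueueMessages="true" />
                        </spring:property>
                    </spring:bean>
                </spring:property>
            </spring:bean>
        </spring:property>
    </spring:bean>

    <!-- Client-side redelivery policy: a few retries with exponential back-off
         before the broker dead-letters the message -->
    <spring:bean name="redeliveryPolicy"
                 class="org.apache.activemq.RedeliveryPolicy"
                 p:maximumRedeliveries="5"
                 p:initialRedeliveryDelay="1000"
                 p:useExponentialBackOff="true"
                 p:backOffMultiplier="2" />

    <!-- Connection factory targeting the embedded broker by name; depends-on
         ensures the broker is started first -->
    <spring:bean name="amqConnectionFactory" depends-on="amqBroker"
                 class="org.apache.activemq.ActiveMQConnectionFactory"
                 p:brokerURL="vm://muleAmqBroker?create=false"
                 p:redeliveryPolicy-ref="redeliveryPolicy" />
</spring:beans>

<jms:activemq-connector name="jmsConnector"
                        connectionFactory-ref="amqConnectionFactory"
                        specification="1.1" />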
 

 
It’s indeed pretty intense, but because everything conveniently fits within Mule’s configuration, there is no need to look at other files to get the complete picture. Also note how we use Spring’s p namespace to increase the readability of the configured elements.
 
Let’s conclude by discussing a few production concerns, namely reconnection and monitoring.

Production Concerns

Reconnection

In the third topology discussed above (Remote Master/Slave Broker), Mule connects to remote ActiveMQ brokers: primarily to the master node, with fail-over to the slave node. In that scenario, we need to take care of two specific concerns:

  • Remote resource connectivity - Whenever Mule needs to maintain a connection to a remote resource, care must be taken to ensure that, if that connection breaks, reconnection happens. In the same vein, the absence of a remote resource can prevent Mule from starting at all, unless it is configured properly. Though the absence of both the master and the slave ActiveMQ brokers is highly unlikely, a production-grade Mule configuration must be able to handle that scenario. Mule’s retry policies are the mechanism in charge of reconnection.
  • Resource fail-over - This is typically handled by the connector specific to the remote resource. In our case, ActiveMQ’s failover transport is in charge of handling fail-over between the master broker and its slave.

 
Here is a configuration fragment that connects to a master/slave pair of remote ActiveMQ brokers and uses the asynchronous retry policy provided with Mule ESB Enterprise:
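(The fragment below is a sketch: the brokerURL uses ActiveMQ’s failover transport verbatim, while the reconnection element follows the style of Mule 3’s reconnection strategies rather than the Enterprise retry-policy syntax of earlier versions; host names are illustrative.)

<!-- ActiveMQ's failover transport handles master/slave fail-over on its own;
     randomize=false keeps the master as the preferred broker -->
<jms:activemq-connector name="jmsConnector"
                        brokerURL="failover:(tcp://amq-master:61616,tcp://amq-slave:61616)?randomize=false"
                        specification="1.1">
    <!-- Keep trying to (re)connect every 3 seconds, without blocking startup -->
    <reconnect-forever frequency="3000" blocking="false" />
</jms:activemq-connector>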
 

 
 
Using an asynchronous retry mechanism is essential to allow Mule to start even if none of the remote ActiveMQ brokers can be contacted. With the above configuration, Mule would keep trying to reconnect to either broker every 3 seconds.
 
Let’s now look into monitoring, a very important production concern.

Monitoring

Many of our examples have shown ActiveMQ running within the same JVM as Mule. A concern that production teams may have is: will it be possible to monitor these embedded JMS brokers, or will they fall off our radar? Monitoring queues that are filling up and setting high-watermark alert thresholds is indeed a very common practice.
 
The good news is that ActiveMQ’s extensive JMX support plays perfectly well with Mule’s. If you look at the following JConsole screenshot, you’ll see that both the Mule and ActiveMQ JMX domains are living together in peace:

This opens the door for in-depth monitoring of embedded ActiveMQ brokers as if they were standalone ones.
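
On the ActiveMQ side, this only requires useJmx to be enabled on the embedded broker, as in the persistent-broker sketch above; on the Mule side, the default JMX support of the management module can be turned on (assuming its namespace is declared), along the lines of:

<!-- Registers Mule's own MBeans so they appear next to ActiveMQ's
     org.apache.activemq domain in JMX consoles such as JConsole -->
<management:jmx-default-config />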

Conclusion

As you’ve seen, Mule’s love affair with ActiveMQ is a pretty serious thing and not just a temporary crush. Whether you need a quick JMS-driven prototype, a solution for integration testing, or a solid messaging broker for complex production environments, the Mule-ActiveMQ duo is undoubtedly one of your best bets.