ActiveMQ 5.6.0 and other news

We just released Apache ActiveMQ 5.6.0. It was a long-awaited maintenance release, but there are also a few very significant new features that were worth waiting for: a new LevelDB store, MQTT and Stomp 1.1 protocol support and self-balancing cluster clients, to name a few. I've already written about clustering here, and there will surely be a lot to write about the other features in the coming days.

Also, next week I'll be at CamelOne in Boston and JEEConf in Kiev (a bit too much traveling for my taste, but there you go). I'll talk about enterprise deployment of ActiveMQ using Fuse Fabric (a topic already covered here in a nutshell) and Apache Apollo, the next generation of the broker. All the projects coming through the pipeline push the possibilities of our integration infrastructure one step further: they make complex deployments easier and let people connect to the infrastructure from virtually anywhere. It was a very exciting first half of the year, and it looks like things are going to get even more interesting going forward.

ActiveMQ in the cloud

FuseSource just announced a public beta of new Enterprise products (read more about it in Rob's post), so it's time to give you a bit more detail on what we have been working on for the past few months. One thing I want to emphasize in this post is how this project improves the experience of deploying and managing ActiveMQ brokers. Fuse Fabric (the central engine behind Fuse Enterprise) can help you provision and manage your brokers better, and additionally help with cloud deployments.

Provisioning

The classic deployment scenario for server software (ActiveMQ included), unpacking a distro, editing a config file and starting/stopping the service, works fine for small deployments, but people hit a lot of challenges when trying to set up and manage a large number of instances. Here are some of them:

  • An enormous amount of work is needed to set everything up – for a large cluster, ssh-ing into every machine, unpacking, copying and tweaking config files is a tedious and error-prone process
  • Changing configuration at runtime – again you need to manually tweak every file and restart the server, which is anything but fun
  • Upgrading – this can also be challenging and time-consuming

Some of the things folks usually do to make their lives easier are to:

  • Keep the XML configuration as a template in a version control system and tweak as many things as possible with properties – this makes it easier to keep things under control and minimizes the potential for error when managing configuration across a large number of instances
  • Keep configuration separate from the distribution to make upgrades easier
  • Use tools like Puppet or Chef to make these scenarios even easier

One of the areas where Fabric excels is centralized configuration management. In Fabric it's really easy to spin up new container instances on your servers (using ssh) or in the public cloud. Additionally, it's really easy to deploy an ActiveMQ broker to those containers, simply by applying the appropriate profile to them. So generally, you can predefine a template for your broker (an XML configuration template and all the necessary properties) in a profile, and then with a simple Karaf command (or even a mouse click in our tools, like Fuse IDE and Fuse Management Console) deploy as many instances of that broker as you like, no matter whether they run on a physical box in your data center or a VM at a public cloud provider.
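For a flavor of what this looks like, here is a hypothetical Karaf shell session (the host, container name and profile name are made up, and the exact command options may vary between Fabric versions):

fabric:container-create-ssh --host server1 --user admin node1
fabric:container-add-profile node1 mq-broker

The first command provisions a new container on a remote machine over ssh; the second assigns it a broker profile, after which Fabric takes care of downloading and starting everything the profile requires.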

Discovery

Fabric uses Apache ZooKeeper as its central registry of running container instances. The same registry is used to keep track of all brokers running inside a Fabric instance, which means we can use it to discover all brokers inside a certain group. So we created a new discovery protocol (called fabric, of course) that does exactly that. Clients can connect to a broker group (think of it as a cluster) without any need to know the exact location of the brokers. Now you can see how Fabric helps with deploying your brokers to the cloud. First, you can use its ability to start containers (and deploy brokers on them) on any server or cloud provider available. Then clients can use Fabric to discover the brokers and connect to them, all in a location-agnostic way.
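From the client's perspective this is just another transport URL. Here is a minimal sketch, assuming a broker group named default (the group name is a placeholder, and you should check the docs for the exact discovery URI syntax in your version):

import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FabricDiscoveryClient {
    public static void main(String[] args) throws Exception {
        // The fabric discovery agent resolves the group name to the current
        // broker locations via the ZooKeeper registry, so no broker host
        // names appear anywhere in the client code.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("discovery:(fabric:default)");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... create producers and consumers as usual
        connection.close();
    }
}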

Topologies

Using ZooKeeper as a central registry allows us to do some more nice things in the domain of broker topologies. For example, the master-slave topology of ActiveMQ brokers currently depends on shared storage (either a shared file system or an enterprise JDBC database). In this scenario, master election and slave locking depend on the locking ability of the shared storage, limiting it to certain types of hardware. With ZooKeeper as a distributed registry, battle-proven for this kind of use case, it's easy to create master-slave topologies with master election done using ZooKeeper locks. If you want persistent master-slave, you will still need to store your messages in some kind of shared storage, but locking is no longer the store's job. On the other hand, it's now really easy to create a non-persistent cluster of brokers with a shared-nothing philosophy. Creating a master-slave pair is as simple as creating multiple brokers with the same name in the group. The first one started becomes the master, while the others are slaves waiting for the master to fail.
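To make the mechanism concrete, here is the standard ZooKeeper leader-election recipe that this kind of master election builds on. This is an illustrative sketch, not Fabric's actual implementation, and the registry path is made up:

import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class MasterElection {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> {});
        // Each broker creates an ephemeral sequential node under the group
        // path (assumed to already exist; "/groups/mybroker" is hypothetical).
        String myNode = zk.create("/groups/mybroker/node-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        List<String> children = zk.getChildren("/groups/mybroker", false);
        Collections.sort(children);
        // The broker holding the lowest sequence number is the master. When
        // it dies, its ephemeral node disappears and the next one takes over.
        boolean master = myNode.endsWith(children.get(0));
        System.out.println(master ? "elected master" : "standing by as slave");
    }
}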

Another example of the enhanced topology possibilities with ActiveMQ and Fabric is the new way you can set up networks of brokers. Just as clients can use the ZooKeeper registry to discover brokers, network connectors can use the same discovery protocol to connect a broker with all the other brokers in a certain group. Again, the brokers are completely location-agnostic with respect to each other, which is exactly what you want in deployment scenarios on modern infrastructure. Could it be done any easier? Finally, you might want to mix these two topologies and create connected networks of master-slave brokers, which, with Fabric, is just as easy to do as the basic topologies.
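For example, a network connector could point at a broker group instead of a static host list, along these lines (the us-east group name is made up, and the exact discovery URI syntax on the connector is my assumption, so check the Fabric documentation):

<networkConnector name="cluster" uri="fabric:us-east"/>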

Upgrades and Patching

And now you say you need to upgrade all of the dozen brokers you have? No problem: with Fabric you can centrally update any bundle used in your profile, and that change will be propagated to all brokers. This is also the time to talk about the risk management of upgrades, and this is where profile versions come into play. Instead of messing with the profiles currently used in production, you can create a new version of a profile with updated versions of all the bundles you plan to use. Once you have that profile ready, you can test it out on one or a few instances, and only after you're certain that everything is OK, apply the new profile to all brokers. Of course, it's easy to roll back to the previous version if anything goes wrong.
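In the Karaf shell, that workflow might look roughly like this (the version numbers and container name are placeholders, and the exact command options may differ between releases):

fabric:version-create 1.1
fabric:container-upgrade 1.1 broker1
fabric:container-rollback 1.0 broker1

You create a new profile version, upgrade a single test container to it, and then either roll the change out to the remaining containers or roll the test container back.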

Fabric also gives you a mechanism for deploying a simple incremental patch jar containing only the modified classes (a single bug fix). These classes will then be used instead of the old ones, which allows us to deliver a fix for a critical bug without upgrading the project version. This can be useful for dealing with critical production bugs, where waiting for the next release is not an option.

More resources

This post was just meant to give you a glimpse of what's coming from FuseSource to make deploying, provisioning and managing your messaging (and, more generally, integration) infrastructure easier. You can check out more docs on the projects; in particular, there is more information that explains the ActiveMQ concepts covered here in more detail and shows how to run the examples. You will definitely hear more from us on this topic in the coming days, so stay tuned.

New ActiveMQ failover and clustering goodies

For the last two weeks I've been working on some interesting use cases for the good ol' failover transport. I finally have some time on my hands, so here's a brief recap of what's coming in the 5.6 release in this area.

First, there's a new feature called Priority Backup. It's described in detail here, but in a nutshell it provides a mechanism for prioritizing your failover URLs and keeping your clients connected to the preferred ones whenever they are available. The most obvious use case is to keep your clients connected to the broker in the local data center whenever you can. By doing this, you get both better performance and stability for your clients, and you save on your bandwidth bills.
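For example, a client in the local data center could use a URL like the following (the host names are placeholders). With priorityBackup enabled, the failover transport prefers the first URL in the list, fails over to the remote broker only when the local one is down, and reconnects back as soon as the local broker returns:

failover:(tcp://local-broker:61616,tcp://remote-broker:61616)?randomize=false&priorityBackup=true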

Another improvement is coming to the automatic broker cluster feature. Although this feature is not new, I spent some time hardening it and thought I'd share some more insight into how (and when) to use it in your projects.

In search of high availability, people often default to a master-slave architecture. This makes sense in most use cases, but if your message flow is purely non-persistent you can probably come up with a more optimal architecture. Instead of having one broker handle all your load while another one just waits for it to fail, you get a more efficient system with some kind of active-active configuration, where (possibly multiple) brokers share the load all the time. Ideally, clients would be evenly distributed and would rebalance whenever anything changes. The brokers don't need to share any messages, as the clients are distributed and the messages are non-persistent, so they will be lost if a broker fails. So can you achieve this kind of architecture with ActiveMQ?

Sure you can. That's where automatic rebalancing and clustering shine. First of all, the brokers should be networked, but only so they can exchange information on their availability. They shouldn't exchange messages (though of course they can if your use case needs it). In 5.6 you do that with pure static networks, using configuration like

<networkConnector uri="static:(tcp://host)" staticBridge="true"/>

So now imagine three brokers A, B and C forming a full mesh. In addition, every broker uses the rebalancing options on its transport connectors:

<transportConnector name="openwire" uri="tcp://localhost:61616"
                    updateClusterClients="true" updateClusterClientsOnRemove="true"
                    rebalanceClusterClients="true"
/>

All that is left for the client to do is connect to one of the brokers it knows, like

failover:(tcp://brokerA:61616)

and the broker will supply it with all the information about the other brokers in the cluster and tell it whether it should reconnect to one of them. So with a large number of clients connecting like this, they will very soon rebalance over the available brokers. You can stop one of the brokers in the cluster for updates and the clients will rebalance over the remaining ones. You can even add a new broker to the cluster and everything will get rebalanced, without any need for you to touch your clients.
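For completeness, here is a minimal JMS client sketch (the broker host and queue name are placeholders). The failover transport handles the cluster updates and reconnects transparently, so the application code stays plain JMS:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ClusterClient {
    public static void main(String[] args) throws Exception {
        // One known broker is enough; the failover transport learns about
        // the rest of the cluster and rebalances as brokers come and go.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://brokerA:61616)");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("TEST"));
        producer.send(session.createTextMessage("hello"));
        connection.close();
    }
}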

So basically, in this way you get both load balancing and high availability for your non-persistent messages. Additionally, your clients are automatically updated with all the information they need, so no manual intervention is required.

Although basic support for clustering has been there since 5.4, I did some more hardening and improved the rebalancing, so all of this is coming in the Apache ActiveMQ 5.6 (and the next Fuse 5.5.1) release. There is also some more great stuff regarding broker clustering coming soon, so stay tuned and happy messaging.

ActiveMQ networks and advisory messages explained

Recently Jakub wrote an excellent blog post explaining more about how ActiveMQ networks work. One thing that somehow always went unexplained and confused users over time is the connection between networks of brokers and advisory messages. I finally took some time to document it and introduce some enhancements we have made in this area for the upcoming 5.6 broker release. Read more here on how network connectors use advisory messages and how you can tune all of that in complex and high-load environments.

ActiveMQ in Action released

Book projects always (at least for me) take quite a bit longer to finish than you anticipate at the beginning. But unlike everything else we do in software development, when it is finally finished you get a real physical thing you can hold in your hands. And that's always a great feeling.

I'm happy to announce that ActiveMQ in Action has been released. You can already get the final eBook, while the print version will hit the shelves on March 24th.

Thanks everyone for your support, and enjoy reading. I can't wait to get my hands on the printed version.

ActiveMQ 5.5: Audit Logging

The PCI DSS (Payment Card Industry Data Security Standard) v2.0 specifies that all user actions must be audited, so they can be inspected later if needed. To be deployable in such environments, we added audit logging to ActiveMQ. In this article you can find the basics of how to configure and use it. Here, I'd like to expand on the topic a bit and talk about the add-ons you can find in Fuse Message Broker.

For starters, let's quickly recap how it works. Audit logging is enabled by setting the system property

-Dorg.apache.activemq.audit=true

after which all user (or rather, management) actions will be logged. This basically means that we log all JMX operations and all commands invoked through the web console.

The implementation of audit logging in ActiveMQ is pluggable. The default implementation just uses the standard application log mechanism to store these records (in the ${ACTIVEMQ_BASE}/data/audit.log file by default). You can easily provide your own by implementing the AuditLog interface and letting the Java ServiceLoader find it.
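As a rough illustration, a custom implementation could look like the sketch below. I'm assuming a single log method that takes the audit message as a string; check the actual AuditLog interface shipped with your broker version, as the real signature may differ. Registration follows the standard ServiceLoader convention: a file under META-INF/services named after the interface's fully qualified name, containing your implementation's class name.

// Hypothetical custom audit logger; adjust the import to the interface's
// actual package, and the method to its actual signature.
import org.apache.activemq.broker.util.AuditLog;

public class ConsoleAuditLog implements AuditLog {
    public void log(String message) {
        // Forward the audit record anywhere you like: syslog, a database, etc.
        System.out.println("AUDIT: " + message);
    }
}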

For the FuseSource flavor of ActiveMQ, we prepared some more goodness in this area. If you deploy your broker in an OSGi environment, like Apache Karaf, it will use the OSGi platform infrastructure for audit logging. For starters, we use the OSGi service mechanism to look up the available audit loggers. We also provide a default implementation that uses the OSGi Event Admin mechanism to send audit logs as events to the org/fusesource/audit topic. A default topic handler is provided as well, which simply writes the events to the log file. But with this solution you can easily provide your own topic handler that processes and stores audit logs in whatever way suits your environment.
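For illustration, a custom handler is just a standard OSGi EventHandler registered for that topic. Here is a minimal sketch; the topic name comes from above, but the properties carried by each event are an assumption, so inspect them in your environment:

import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

public class AuditEventHandler implements BundleActivator, EventHandler {
    public void start(BundleContext context) {
        // Subscribe this handler to the audit topic via Event Admin.
        Hashtable<String, Object> props = new Hashtable<String, Object>();
        props.put(EventConstants.EVENT_TOPIC, "org/fusesource/audit");
        context.registerService(EventHandler.class.getName(), this, props);
    }

    public void stop(BundleContext context) {
        // The service registration is released automatically on bundle stop.
    }

    public void handleEvent(Event event) {
        // Process or store the audit event; use event.getPropertyNames()
        // to discover what the broker actually attaches to each event.
        System.out.println("Audit event received on " + event.getTopic());
    }
}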

In the future you can expect a similar solution across the whole range of FuseSource projects, and a unified way to handle audit logs for the whole platform.