Some people, when confronted with a problem, think "I know, I'll use a queue." Now they have an unbounded number of problems.

Networked message queues like ActiveMQ, RabbitMQ, ZeroMQ, and a host of other Java-inspired software tumors are crutches of systems design. I love asynchronous stuff as much as the next guy, but think of a queue the way you think of Java: it encourages large swaths of mediocre programmers to overengineer shit, and it keeps systems administrators in business by giving them something to respond to at 4AM.

Here is some of the dumb stuff that queues enable:

The Blocking Consumer

You have some work that is sometimes produced faster than it can be done; a common problem. One common but problematic solution is to stick the work in a message queue and have one or more consumers that block on the queue, picking work off as soon as it's available and doing it.
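To make the shape of this concrete, here's a minimal sketch of the pattern using Python's standard-library queue as a stand-in for a networked broker. The broker client API would look different, but the structure is the same; process() is a made-up placeholder for the actual work.

    import queue
    import threading

    work = queue.Queue()  # stand-in for RabbitMQ/ActiveMQ/whatever

    def process(job):
        # placeholder for the actual work
        print("did job", job)

    def consumer():
        while True:
            job = work.get()   # blocks until something shows up
            process(job)
            work.task_done()

    threading.Thread(target=consumer, daemon=True).start()

    # producers just throw work over the wall and move on
    for i in range(5):
        work.put(i)

    work.join()  # wait for the queue to drain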

What's wrong with this? First of all, it blurs your mental model of what's going on. You end up expecting synchronous behavior out of a system that's asynchronous. One of the concrete outcomes of that is the question: how do you deterministically monitor this system? If the queue size is greater than zero, is that a failure state? It means that your system is over capacity, but what is your response to that? Spin up more workers or let it ride? If your answer is "spin up more workers", then you should be doing the work synchronously, because the implication is that you care about the amount of time it takes for a worker to get to the work. If your answer is "let it ride", then how do you know when your system is in trouble: when there are ten jobs in the queue? Ten thousand?
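For illustration only, this is the alerting check you end up writing. The threshold is invented, because nothing in the design tells you what number means trouble.

    QUEUE_DEPTH_ALERT = 10000  # arbitrary; why not ten? why not a million?

    def check_queue(depth):
        if depth == 0:
            return "OK"
        if depth < QUEUE_DEPTH_ALERT:
            return "probably OK"   # over capacity, but we're "letting it ride"
        return "page someone at 4AM"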

If you are designing a system that relies on a blocking queue consumer, you should likely be doing the work synchronously, without the queue. System gets overloaded? I've got a solution for that, too: capacity planning.

Collecting Data for Offline Processing

Say you've got some events that you want to record, and then process offline in a batch job. Using a message queue for this will only lead to tears.

In such a system, you've usually got multiple data producers, and you want the data aggregated in a single place. As chance would have it, UNIX ships with a facility that can do this consistently and reliably. We call it syslog.
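Here's a sketch of what a producer looks like under that scheme, using Python's Unix-only syslog module. The tag and facility are arbitrary choices; the local syslog daemon does the aggregation you'd otherwise stand up a broker for.

    import syslog

    # each event is one line of text handed to the local syslog daemon,
    # which forwards it wherever syslogd/rsyslog is configured to send it
    syslog.openlog("eventcollector", 0, syslog.LOG_LOCAL0)

    def record_event(user_id, action):
        syslog.syslog(syslog.LOG_INFO, "user=%d action=%s" % (user_id, action))

    record_event(42, "signup")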

Depending on your queue implementation, when you pop a message off, it's gone. The consumer acknowledges receipt of it, and the queue forgets about it. So, if your data processor fails, you've got data loss. Collecting messages with syslog, your processor program is just processing a text file, and can process it again if something goes wrong. Throw in some split and xargs, and you've got parallel processing. Event messages aren't text? You fucked up. Go buy a subscription to the Microsoft Developer Network.
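The batch side is then just a program that reads lines from a file, which means a failed run can be re-run against the same file; process_line() here is a stand-in for whatever your real processing is.

    import sys

    def process_line(line):
        # stand-in for the real work
        print("processed:", line.rstrip())

    def main(path):
        with open(path) as f:
            for line in f:
                process_line(line)

    if __name__ == "__main__":
        main(sys.argv[1])

If that script is saved as, say, process.py, parallelizing it is a shell one-liner: split -l 100000 events.log part. && ls part.* | xargs -n1 -P8 python process.py. No broker required.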

Everybody Loves System Complexity

Obviously I have been generalizing thus far. There are a host of situations where you need to separate the production of work from its consumption, and where you fully understand the consequences. I'm not hating on that asynchronous pattern; I'm hating on introducing more software into your stack unnecessarily. Can you use a database table for it? (There's a sketch of that below.) Can you use files on disk or a named pipe? Syslog? (Modern syslog implementations will write to a database.) Bringing in new services should be the absolute last resort, because every new service is an unknown that needs to be configured and maintained. Adding a queue to your stack isn't just adding a service. It's ancillary maintenance code, libraries, monitoring scripts - all more things that can and will fail.

Liabilities, as it were.
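If "can you use a database table for it?" sounds hand-wavy, here is roughly what it looks like with SQLite from the Python standard library. The table and column names are invented for the sketch; any database you already run would do the same job.

    import sqlite3

    db = sqlite3.connect("work.db")
    db.execute("""CREATE TABLE IF NOT EXISTS jobs (
        id      INTEGER PRIMARY KEY,
        payload TEXT NOT NULL,
        done    INTEGER NOT NULL DEFAULT 0
    )""")

    def enqueue(payload):
        with db:  # commits on success, rolls back on failure
            db.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))

    def work_one_batch():
        # rows stay put until the transaction commits, so a crashed worker
        # loses nothing; you can see, count, and re-run pending work
        with db:
            rows = db.execute(
                "SELECT id, payload FROM jobs WHERE done = 0 LIMIT 100"
            ).fetchall()
            for job_id, payload in rows:
                print("working on", payload)
                db.execute("UPDATE jobs SET done = 1 WHERE id = ?", (job_id,))

    enqueue("resize image 123")
    work_one_batch()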