Testing Strategies

Mar 25, 2011 11:56

Robin Pille

Building a comprehensive suite of automated tests for your Mule project is the primary factor that will ensure its longevity: you'll gain the security of a safety net catching any regression or incompatible change in your applications before they even leave your workstation.

We'll look at three different types of testing:

*Unit testing: these tests are designed to be fast, with a very narrow system under test. Mule is typically not run for unit tests.
*Functional testing: these tests usually involve running Mule, though with a limited configuration, and should run fast enough to be executed on each build.
*Integration testing: these tests exercise a full Mule application with settings that are as close to production as possible. They are usually slower to run and not part of the regular build.

In practice, unit and functional testing are often merged and executed together.

Unit Testing

In a Mule application, unit testing is limited to the code that can realistically be exercised without the need to run it inside Mule itself. As a rule of thumb, code that is Mule-aware (for example, code that relies on the registry) is better exercised with a functional test.

With this in mind, the following are good candidates for unit testing:

*Custom transformers
*Custom components
*Custom expression evaluators
*All the Spring beans that your Mule application will use. Typically, these beans come as part of a dependency JAR and are tested while being built, alleviating the need for retesting them in your Mule application project
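As an illustration of the first candidate, the transformation logic behind a custom transformer can live in a plain method and be unit tested without starting Mule (the transformer then just delegates to it). The class and method names below are hypothetical, not taken from the guide:

```java
// Hypothetical example: the transformation logic a custom Mule transformer
// would delegate to, exercised directly in a plain unit test.
public class JobXmlTransformerTest {

    // Wraps a job identifier in the XML envelope the flow expects.
    static String toJobXml(String jobId) {
        return "<job><id>" + jobId + "</id></job>";
    }

    public static void main(String[] args) {
        // A unit test needs no Mule context: call the logic and assert on it.
        String xml = toJobXml("42");
        if (!"<job><id>42</id></job>".equals(xml)) {
            throw new AssertionError("unexpected output: " + xml);
        }
        System.out.println("ok");
    }
}
```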

Mule provides abstract parent classes to help with unit testing. See the Mule documentation for more information about them.

Functional Testing

Functional tests are those that most extensively exercise your application configuration. In these tests, you'll have the freedom and tools for simulating happy and unhappy paths.

The "paths" that you will be interested in covering include:

*Message flows
*Rule-based routing, including validation handling within these flows
*Error handling

If you've modularized your configuration files as explained in section 2, you've put yourself in a great position for starting functional testing.

Let's see why:

*Imported configurations can be tested in isolation. This means that you will be able to create one functional test suite for each of the different imported configurations. This reduces the size of the system under test, making it easier to write tests for each of the different cases that need to be covered.
*Side-by-side transport configuration allows transport switching and failure injection. This means you'll not need to use real transports (say HTTP, JMS or JDBC) when running your functional tests but will be able to run everything through VM in-memory queues. You will also have the possibility to create stubs for target services and make them fail to easily simulate unhappy paths.

Real transports or not? That is the question you may be asking, and it is a valid one, as many in-memory alternatives exist for the different infrastructures your Mule application will connect to (for example: ActiveMQ for JMS, HSQLDB for JDBC). The real question is: what are you really testing? Is it relevant for your functional tests to exercise the actual transports, knowing that they're already tested by MuleSoft and that the integration tests will take care of exercising them?

Mule provides a lot of supporting features for implementing functional tests. Let's look into an example and discover them as we go. The following diagram illustrates the flow we will be testing:

This flow accepts incoming messages over HTTP, validates them and dispatches them to JMS if they are acceptable. For the actual implementation, we will be using the Validator configuration pattern and check that the incoming message payload is XML. Keep in mind that the same testing principles and tools apply if you're testing a flow.

Testing with side-by-side configurations

Let's look at the configuration files for this application. First, we have the configuration file that contains the Validator:
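The original snippet is not reproduced here; the following is a minimal sketch, assuming hypothetical endpoint names, response expressions and a placeholder filter, of what such a Mule 3 Validator configuration could look like:

```xml
<!-- Illustrative sketch only: names, expressions and the filter are assumptions -->
<pattern:validator name="work-validator"
                   inboundEndpoint-ref="WorkInboundEndpoint"
                   outboundEndpoint-ref="AcceptedWorkEndpoint"
                   ackExpression="#[string:OK:#[message:id]]"
                   nackExpression="#[string:NOT_XML]">
    <!-- a filter that checks the incoming payload is XML would go here -->
</pattern:validator>
```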

Note how the inbound and outbound endpoints are actually references to global ones. These global endpoints are configured in a separate configuration file designed to be loaded side-by-side with the above one. Here is its content, with the JMS connector configuration omitted for brevity:
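As a sketch, assuming hypothetical endpoint names, host, port, path and queue name, the side-by-side configuration with real transports could look like:

```xml
<!-- Illustrative sketch only: names and addresses are assumptions -->
<http:endpoint name="WorkInboundEndpoint"
               host="localhost" port="8080" path="work"
               exchange-pattern="request-response"/>
<jms:endpoint name="AcceptedWorkEndpoint"
              queue="work.accepted"
              exchange-pattern="one-way"/>
<!-- JMS connector configuration omitted for brevity -->
```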

Note how this configuration provides the actual definitions of the global endpoints used by the other configuration. To functionally test this, we will have to create an alternative configuration that provides global endpoints with the same names but uses the VM transport. Here it is:
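A minimal sketch of such a test-only configuration, assuming hypothetical endpoint names and VM paths (with work.ok matching the queue the tests request), could be:

```xml
<!-- Illustrative sketch only: endpoint names and VM paths are assumptions -->
<vm:endpoint name="WorkInboundEndpoint"
             path="work.in"
             exchange-pattern="request-response"/>
<vm:endpoint name="AcceptedWorkEndpoint"
             path="work.ok"
             exchange-pattern="one-way"/>
```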

Now let's write two tests: one for each possible path (message is XML or not). We will subclass Mule's FunctionalTestCase, an abstract class designed to be the parent of all your functional tests!

The FunctionalTestCase class is a descendant of JUnit's TestCase class.

Here is the test class, without the Java import declarations:
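The original snippet is not reproduced here; a minimal sketch, assuming the Mule 3 API, hypothetical configuration file names and the VM paths work.in and work.ok, could look like this (it only compiles inside a Mule 3 project):

```java
// Sketch only: config file names, endpoint paths and payloads are assumptions
public class WorkManagerFunctionalTestCase extends FunctionalTestCase {
    @Override
    protected String getConfigResources() {
        // load the application config side by side with the test transports
        return "work-manager-config.xml,test-transports-config.xml";
    }

    public void testValidJob() throws Exception {
        MuleClient client = muleContext.getClient();
        MuleMessage response = client.send("vm://work.in", "<job/>", null);
        // the synchronous response must be the acknowledgement expression
        assertTrue(response.getPayloadAsString().startsWith("OK:"));
        // the message must also have been dispatched to the accepted-work queue
        assertNotNull(client.request("vm://work.ok", 5000));
    }

    public void testInvalidJob() throws Exception {
        MuleClient client = muleContext.getClient();
        MuleMessage response = client.send("vm://work.in", "this is not XML", null);
        assertEquals("NOT_XML", response.getPayloadAsString());
        // nothing should have reached the accepted-work queue
        assertNull(client.request("vm://work.ok", 5000));
    }
}
```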

Notice in testValidJob() how we ensure we receive the expected synchronous response to our valid call (starting with "OK:") but also how we check that the message has been correctly dispatched to the expected destination by requesting it from the target VM queue. Conversely in testInvalidJob() we verify that nothing has been sent to the valid work endpoint.

As standard JUnit tests, you can now run these tests either from Eclipse or the command line with Maven.

Using a VM queue to accumulate messages and subsequently requesting them (as we did with vm://work.ok) can only work with the one-way exchange pattern. Using a request-response pattern would make Mule look for a consumer of the VM queue, as a synchronous response is expected. So what do we do when we have to test request-response endpoints? We use the Functional Test Component!

Stubbing out with the Functional Test Component

The Functional Test Component (FTC) is a programmable stub that can be used to consume messages from endpoints, accumulate these messages, respond to them and even throw exceptions. Let's revisit our example and see how the FTC can help us, as our requirements are changing.

We have decided to use a Validator's feature that wasn't used previously, which ensures that the message has been successfully dispatched to the accepted job endpoint and otherwise returns a failure message to the caller. Here is its new configuration:
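Sketched against the same assumed names and expressions as before, the change amounts to one extra attribute on the Validator:

```xml
<!-- Illustrative sketch only: names and expressions are assumptions;
     errorExpression is the addition discussed below -->
<pattern:validator name="work-validator"
                   inboundEndpoint-ref="WorkInboundEndpoint"
                   outboundEndpoint-ref="AcceptedWorkEndpoint"
                   ackExpression="#[string:OK:#[message:id]]"
                   nackExpression="#[string:NOT_XML]"
                   errorExpression="#[string:ERROR:#[message:id]]">
    <!-- a filter that checks the incoming payload is XML would go here -->
</pattern:validator>
```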

The only difference is that an error expression has been added. This addition yields the following changes:

*The Validator will now behave fully synchronously, preventing us from using an outbound VM queue as an accumulator of dispatched messages: we will have to use the FTC to play the role of accumulator.
*A new path will have to be tested as we will want to check the behavior of the system when dispatching fails. We will also use the FTC here, configuring it to throw an exception upon message consumption.

Let's see how introducing the FTC has changed our test transports configuration:
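A sketch of the revised test transports configuration, assuming hypothetical endpoint names, VM paths and a hypothetical service name, could be:

```xml
<!-- Illustrative sketch only: names and paths are assumptions -->
<vm:endpoint name="WorkInboundEndpoint"
             path="work.in"
             exchange-pattern="request-response"/>
<!-- now request-response, since the Validator behaves synchronously -->
<vm:endpoint name="AcceptedWorkEndpoint"
             path="work.accepted"
             exchange-pattern="request-response"/>

<!-- a Simple Service hosting the FTC consumes the accepted work messages -->
<pattern:simple-service name="accepted-work-service"
                        endpoint-ref="AcceptedWorkEndpoint">
    <test:component/>
</pattern:simple-service>
```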

As you can see, the FTC manifests itself as a <test:component /> element. We used the convenience of the Simple Service pattern to make it consume the messages sent to the AcceptedWorkEndpoint.

The FTC supports plenty of configuration options. Read more about it here: http://www.mulesoft.org/documentation/display/MULE3USER/Functional+Testing

Now that we have this in place, let's see first how we can test the new failure path. Here is the source code of the new test method added to our previously existing functional test case:
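The original snippet is not reproduced here; a minimal sketch, assuming the Mule 3 API, a hypothetical service name and payloads, and the errorExpression discussed above, could be:

```java
// Sketch only: service name, payloads and error prefix are assumptions
public void testDispatchFailure() throws Exception {
    // locate the FTC sitting at the heart of the Simple Service, by name
    FunctionalTestComponent ftc = getFunctionalTestComponent("accepted-work-service");
    // make the stub throw an exception on any message it consumes
    ftc.setThrowException(true);

    MuleClient client = muleContext.getClient();
    MuleMessage response = client.send("vm://work.in", "<job/>", null);
    // the Validator turns the exception into its error expression response
    assertTrue(response.getPayloadAsString().startsWith("ERROR:"));
}
```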

Note how we get hold of the particular FTC we're interested in: we use getFunctionalTestComponent, a protected method provided by the parent class, to locate the component that sits at the heart of our Simple Service (located by its name).

Once we have gained a reference to the FTC, we configure it for this particular test so it will throw an exception anytime it is called. With this in place, our test works: the exception that is raised makes the Validator use our provided error expression to build its response message.

Now let's look at how we've refactored the existing test methods to use the FTC:

In testValidJob(), the main difference is that we now query the FTC for the dispatched message instead of requesting it from the outbound VM queue.

In testInvalidJob(), the main difference is that we configured the FTC to fail if a message gets dispatched despite the fact it is invalid. This approach actually leads to better test performance because, previously, requesting a nonexistent message from the dispatch queue blocked until the 5-second time-out kicked in.

Integration Testing

Integration tests are the last layer of tests we'll be adding to be fully covered. These tests will actually be run against Mule running with your full configuration in place. We'll be limited to testing the paths that we can explore when exercising the system as a whole, from the outside. This means that some failure paths, like the one above that simulates a failure of the outbound JMS endpoint, will not be tested.

Though it is possible to use Maven to start Mule before running the integration tests, we recommend that you deploy your application to the container it will run on in production (either Mule standalone or a Java EE container).

Since integration tests exercise the application as a whole with actual transports enabled, external systems will be affected when these tests run. For example, in our case a JMS queue will receive a message: we will need to ensure this message has been received, which implies that no other system consumes it (or else we would have to check in those systems that they have received the expected message).

In shared environments, this is tricky to achieve and usually requires the agreement of all systems about the notion of test messages. These test messages exhibit certain characteristics (properties or content) so other systems realize they should not consume or process them.

To learn more about test messages, and for more testing strategies and approaches, please consult Test-Driven Development in Enterprise Integration Projects.

Another very important aspect is the capacity to trace a message as it progresses through Mule flows and reaches external systems: this is achieved by using unique correlation IDs on each message and consistently writing these IDs to log files. As you'll see later on, we also rely on unique correlation IDs for integration testing. For now, here is our inbound HTTP endpoint refactored to ensure that the Mule correlation ID is set to the same message ID value that is returned in the OK acknowledgement message:
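A sketch of such a refactored endpoint, assuming hypothetical host, port and path values, could be:

```xml
<!-- Illustrative sketch only: host, port and path are assumptions -->
<http:inbound-endpoint host="localhost" port="8080" path="work"
                       exchange-pattern="request-response">
    <!-- set the correlation ID to the message ID returned in the OK ack -->
    <message-properties-transformer scope="outbound">
        <add-message-property key="MULE_CORRELATION_ID"
                              value="#[message:id]"/>
    </message-properties-transformer>
</http:inbound-endpoint>
```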

Mule will do the rest: it will ensure that the correlation ID that is set with the message properties transformer shown above gets propagated to any internal flow or external system receiving the message.

Maven Failsafe to feel safe

In order to keep our example simple, we'll assume that no other system will attempt to consume the messages dispatched on the target JMS queue: they will be sitting there until we consume them.

To show that no specific tooling is needed to build integration tests, we'll build them in Java, as JUnit test cases, and will run them with Maven's Failsafe plug-in. Feel free to use instead any tool you're more familiar with.

For our current needs, soapUI used in conjunction with HermesJMS would give us a nice graphical environment for creating and running integration tests. See http://www.soapui.org/JMS/getting-started.html for more information. Also note that soapUI can be run from Maven too: http://www.soapui.org/Test-Automation/maven-2x.html

Since the main entry point of our application is exposed over HTTP, we'll use HttpUnit in our tests. Let's look at our test case for invalid work submissions:
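The original snippet is not reproduced here; as a self-contained sketch we use plain HttpURLConnection instead of HttpUnit, and the expected rejection payload is an assumption:

```java
// Sketch only: plain HttpURLConnection stands in for HttpUnit here;
// WORK_API_URI and the expected "NOT_XML" response are assumptions
@Test
public void rejectsInvalidWorkSubmission() throws Exception {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(WORK_API_URI).openConnection();
    conn.setDoOutput(true);
    conn.getOutputStream().write("this is not XML".getBytes("UTF-8"));

    BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), "UTF-8"));
    assertEquals("NOT_XML", in.readLine());
}
```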

In this test, which is a JUnit 4 annotated test, we send a bad payload to our work manager and ensure that it gets rejected as expected. The WORK_API_URI constant is of course pointing to the Mule instance that is tested.

The test for valid submissions is slightly more involved:
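A sketch of this test, assuming a hypothetical queue name, plain HttpURLConnection in place of HttpUnit, and a JMS message selector on the correlation ID, could be:

```java
// Sketch only: queue name, WORK_API_URI and ack format are assumptions;
// getConnectionFactory() is specific to the JMS provider in use
@Test
public void acceptsValidWorkSubmission() throws Exception {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(WORK_API_URI).openConnection();
    conn.setDoOutput(true);
    conn.getOutputStream().write("<job/>".getBytes("UTF-8"));
    String ack = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), "UTF-8")).readLine();
    assertTrue(ack.startsWith("OK:"));

    // use the returned correlation ID to select our test message on the queue
    String correlationId = ack.substring("OK:".length());
    Connection connection = getConnectionFactory().createConnection();
    try {
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(
            session.createQueue("work.accepted"),
            "JMSCorrelationID = '" + correlationId + "'");
        assertNotNull(consumer.receive(5000));
    } finally {
        connection.close();
    }
}
```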

Note that getConnectionFactory() is specific to the JMS implementation in use and, as such, hasn't been included in the above code snippet.

The important takeaway is that we use the correlation ID returned by the Validator as a means to select and retrieve the dispatched message from the target JMS queue. As you can see, Mule has propagated its internal correlation ID to the JMS-specific one, opening the door to this kind of characterization and tracking of test messages.

It's time to run these two tests with the Failsafe plug-in. By convention, integration test classes are named IT*, *IT or *ITCase and are located under src/it/java. This path is not on the build path of a standard Maven project by default, so we will need a little bit of jiggery-pokery to make sure they're compiled and loaded. Because we do not want to always add the integration test source path to all builds, we create a Maven profile (named it) and store all the necessary configuration in it:
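A sketch of such a profile, assuming the build-helper-maven-plugin is used to add the extra source directory (plugin versions omitted):

```xml
<!-- Illustrative sketch only: plugin versions and execution ids are assumptions -->
<profile>
    <id>it</id>
    <build>
        <plugins>
            <!-- add src/it/java to the test build path for this profile only -->
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>build-helper-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <id>add-it-source</id>
                        <phase>generate-test-sources</phase>
                        <goals><goal>add-test-source</goal></goals>
                        <configuration>
                            <sources><source>src/it/java</source></sources>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <!-- run the IT*, *IT and *ITCase classes during the verify phase -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>integration-test</goal>
                            <goal>verify</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>
```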

With this configuration in place in your pom.xml, you can run:

mvn -Pit verify

to execute your first automated Mule integration tests.