Learn how Spring 4.2 simplifies handling transaction-bound events (e.g. those sent just after a database commit).

Introduction

As you probably already know (e.g. from my previous blog post), it is no longer necessary to create a separate class implementing ApplicationListener with an onApplicationEvent method to be able to react to application events (both from the Spring Framework itself and our own domain events). Starting with Spring 4.2, support for annotation-driven event listeners was added. It is enough to use @EventListener at the method level, which under the hood will automatically register a corresponding ApplicationListener:

    @EventListener
    public void blogAdded(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.blogAdded(blogAddedEvent);
    }

Please notice that using domain objects in events has notable drawbacks and is not the best idea in many situations. Pseudo-domain objects are used in the code examples to avoid introducing unnecessary complexity.

Transaction bound events

Simple and compact. For “standard” events everything looks great, but in some cases it is necessary to perform some operations (usually asynchronous ones) just after the transaction has been committed (or rolled back). What then? Can the new mechanism be used as well?

Business requirements

First, a small digression – business requirements. Let’s imagine a super fancy blog aggregation service. An event is generated every time a new blog is added. Subscribed users can receive an SMS or a push notification. The event could be published after the blog object is scheduled to be saved in a database. However, in case of a commit/flush failure (a database constraint violation, an issue with the ID generator, etc.) the whole DB transaction would be rolled back. A lot of angry users with broken notifications will appear at the door…

Technical issues

In the modern approach to transaction management, transactions are configured declaratively (e.g. with the @Transactional annotation) and a commit is triggered at the end of the transactional scope (e.g. at the end of a method). In general this is very convenient and much less error prone than the programmatic approach. On the other hand, the commit (or rollback) is done automatically outside our code and we are not able to react in a “classical way” (i.e. publish an event in the next line after transaction.commit() is called).
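
For illustration, a minimal sketch of such a declaratively transactional service publishing an event (BlogService and BlogRepository are hypothetical names; only ApplicationEventPublisher and the event class come from this post):

@Service
public class BlogService {  //hypothetical service - a sketch only

    @Autowired
    private BlogRepository blogRepository;  //hypothetical repository

    @Autowired
    private ApplicationEventPublisher eventPublisher;

    @Transactional
    public void addBlog(Blog blog) {
        blogRepository.save(blog);
        //published while the transaction is still open - the commit
        //happens automatically after this method returns
        eventPublisher.publishEvent(new BlogAddedEvent(blog));
    }
}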

Old school implementation

One of the possible solutions for Spring (and a very elegant one) was presented by the indispensable Tomek Nurkiewicz. It uses TransactionSynchronizationManager to register a transaction synchronization for the current thread. For example:

    @EventListener
    public void blogAddedTransactionalOldSchool(BlogAddedEvent blogAddedEvent) {
        //Note: *Old school* transaction handling before Spring 4.2 - broken in a non-transactional context

        TransactionSynchronizationManager.registerSynchronization(
                new TransactionSynchronizationAdapter() {
                    @Override
                    public void afterCommit() {
                        internalSendBlogAddedNotification(blogAddedEvent);
                    }
                });
    }

The passed code is executed in the proper place in the Spring transaction workflow (in that case “just” after commit).

To provide support for execution in a non-transactional context (e.g. in integration tests which may not care about transactions), it can be extended to the following form to not fail with a java.lang.IllegalStateException: Transaction synchronization is not active exception:

    @EventListener
    public void blogAddedTransactionalOldSchool(final BlogAddedEvent blogAddedEvent) {
        //Note: *Old school* transaction handling before Spring 4.2

        //"if" to not fail with "java.lang.IllegalStateException: Transaction synchronization is not active"
        if (TransactionSynchronizationManager.isActualTransactionActive()) {

            TransactionSynchronizationManager.registerSynchronization(
                    new TransactionSynchronizationAdapter() {
                        @Override
                        public void afterCommit() {
                            internalSendBlogAddedNotification(blogAddedEvent);
                        }
                    });
        } else {
            log.warn("No active transaction found. Sending notification immediately.");
            externalNotificationSender.newBlogTransactionalOldSchool(blogAddedEvent);
        }
    }

With that change, when there is no active transaction, the provided code is executed immediately. Works fine so far, but let’s try to achieve the same thing with annotation-driven event listeners in Spring 4.2.

Spring 4.2+ implementation

In addition to @EventListener, Spring 4.2 also provides one more annotation: @TransactionalEventListener.

    @TransactionalEventListener
    public void blogAddedTransactional(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.newBlogTransactional(blogAddedEvent);
    }

The execution can be bound to the standard transaction phases: before/after commit, after rollback or after completion (both commit and rollback). By default it processes an event only if it was published within the boundaries of a transaction. Otherwise the event is discarded.
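
For example, to react to a rollback instead of a commit, the phase parameter (from the TransactionPhase enum) can be used – a sketch with a hypothetical compensating action:

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void blogAddedRolledBack(BlogAddedEvent blogAddedEvent) {
        //hypothetical compensating action executed only after a rollback
        log.warn("Blog {} was not saved - skipping notifications", blogAddedEvent);
    }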

To support execution in a non-transactional context the fallbackExecution flag can be used. If set to “true” the event is processed immediately if there is no transaction running.

    @TransactionalEventListener(fallbackExecution = true)
    public void blogAddedTransactional(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.newBlogTransactional(blogAddedEvent);
    }

Summary

Annotation-driven event listeners introduced in Spring 4.2 continue the trend of reducing boilerplate code in Spring (Boot) based applications. No need to manually create ApplicationListener implementations, no need to use TransactionSynchronizationManager directly – just one annotation with the proper configuration. The other side of the coin is that it is a little bit harder to find all the event listeners, especially if there are dozens of them in a monolith application (though they can be easily grouped). Of course, the new approach is only an option which may or may not be useful in a given use case. Nevertheless, another piece of Spring (Boot) magic floods into our systems. But maybe resistance is futile?

Please note that Spring Framework 4.2 is a default dependency of Spring Boot 1.3 (at the time of writing 1.3.0.M5 is available). Alternatively, it is possible to manually upgrade the Spring Framework version in Gradle/Maven for Spring Boot 1.2.5 – it should work in most cases. Code examples are available on GitHub.

Btw, writing examples for this blog post gave me the first real opportunity to use the new test transaction management introduced in Spring 4.1 (in the past I had only mentioned it during my Spring training sessions). Probably I will write more about it soon.

Learn how to reduce boilerplate code in event handling with annotation-driven event listeners in Spring 4.2+.

Introduction

Exchanging events within an application has become an indispensable part of many applications and thankfully Spring provides a complete infrastructure for transient events (*). The recent refactoring of transaction-bound events gave me an excuse to check in practice the new annotation-driven event listeners introduced in Spring 4.2. Let’s see what can be gained.

(*) – for persistent events in a Spring-based application Duramen could be a solution worth a look

Spring logo

The old way

To get a notification about an event (both a Spring event and a custom domain event) a component implementing ApplicationListener with an onApplicationEvent method has to be created.

@Component
class OldWayBlogModifiedEventListener implements
                        ApplicationListener<OldWayBlogModifiedEvent> {

    (...)

    @Override
    public void onApplicationEvent(OldWayBlogModifiedEvent event) {
        externalNotificationSender.oldWayBlogModified(event);
    }
}

It works fine, but for every event a new class has to be created, which generates boilerplate code.

In addition, our event has to extend the ApplicationEvent class – the base class for all application events in Spring.

class OldWayBlogModifiedEvent extends ApplicationEvent {

    public OldWayBlogModifiedEvent(Blog blog) {
        super(blog);
    }

    public Blog getBlog() {
        return (Blog)getSource();
    }
}

Please notice that using domain objects in events has notable drawbacks and is not the best idea in many situations. Pseudo-domain objects are used in the code examples to avoid introducing unnecessary complexity.

Btw, ExternalNotificationSender in this example is an instance of a class which sends external notifications to registered users (e.g. via email, SMS or Slack).

Annotation-driven event listener

Starting with Spring 4.2, to be notified about an event it is enough to annotate a method in any Spring component with the @EventListener annotation.

    @EventListener
    public void blogModified(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModified(blogModifiedEvent);
    }

Under the hood Spring will create an ApplicationListener instance for the event with a type taken from the method argument. There is no limitation on the number of annotated methods in one class – all related event handlers can be grouped into one class.
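For example (a sketch; BlogDeletedEvent and the corresponding sender method are hypothetical, the rest reuses the classes from this post):

@Component
class BlogEventHandlers {   //sketch - all blog-related handlers grouped in one class

    (...)   //injected ExternalNotificationSender, as in the other examples

    @EventListener
    public void blogModified(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModified(blogModifiedEvent);
    }

    @EventListener
    public void blogDeleted(BlogDeletedEvent blogDeletedEvent) {    //hypothetical event
        externalNotificationSender.blogDeleted(blogDeletedEvent);
    }
}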

Conditional event handling

To make @EventListener even more interesting, there is the ability to handle only those events of a given type which fulfill given condition(s) written in SpEL. Let’s assume the following event class:

public class BlogModifiedEvent {

    private final Blog blog;
    private final boolean importantChange;

    public BlogModifiedEvent(Blog blog) {
        this(blog, false);
    }

    public BlogModifiedEvent(Blog blog, boolean importantChange) {
        this.blog = blog;
        this.importantChange = importantChange;
    }

    public Blog getBlog() {
        return blog;
    }

    public boolean isImportantChange() {
        return importantChange;
    }
}

Please note that in a real application there would probably be a hierarchy of Blog-related events.
Please also note that in Groovy that class would be much simpler.

To handle only the events reporting important changes, the condition parameter can be used:

    @EventListener(condition = "#blogModifiedEvent.importantChange")
    public void blogModifiedSpEL(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModifiedSpEL(blogModifiedEvent);
    }

Relaxed event type hierarchy

Historically ApplicationEventPublisher only had the ability to publish objects which inherit from ApplicationEvent. Starting with Spring 4.2 the interface has been extended to support any object type. In that case the object is wrapped in a PayloadApplicationEvent and sent through.

//base class with a Blog field (omitted here) - no need to extend ApplicationEvent
class BaseBlogEvent {}

class BlogModifiedEvent extends BaseBlogEvent {}

//somewhere in the code
ApplicationEventPublisher publisher = (...);    //injected

publisher.publishEvent(new BlogModifiedEvent(blog)); //just a plain instance of the event

That change makes publishing events even easier. However, on the other hand, without some internal discipline (e.g. a marker interface for all our domain events) it can make event tracking even harder, especially in larger applications.
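
A minimal sketch of such a marker interface (the DomainEvent name is hypothetical):

//hypothetical marker interface for all our domain events,
//so their usages remain easy to find in an IDE
interface DomainEvent {}

//the base class from the snippet above could then simply implement it
class BaseBlogEvent implements DomainEvent {}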

Publishing events in response to another event

Another nice thing about @EventListener is the fact that for a non-void return type Spring will automatically publish the returned object as an event.

    @EventListener
    public BlogModifiedResponseEvent blogModifiedWithResponse(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModifiedWithResponse(blogModifiedEvent);
        return new BlogModifiedResponseEvent(
            blogModifiedEvent.getBlog(), BlogModifiedResponseEvent.Status.OK);
    }

Asynchronous event processing

Updated. As rightly suggested by Radek Grębski, it is also worth mentioning that @EventListener can be easily combined with the @Async annotation to provide asynchronous event processing. The code in a particular event listener then blocks neither the main code execution nor processing by other listeners.

    @Async    //Remember to enable asynchronous method execution 
              //in your application with @EnableAsync
    @EventListener
    public void blogAddedAsync(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.blogAdded(blogAddedEvent);
    }

To make it work it is only required to enable asynchronous method execution in general in your Spring context/application with @EnableAsync.
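
A minimal configuration sketch:

@Configuration
@EnableAsync
public class AsyncConfiguration {
    //optionally declare an Executor bean here to control the thread pool used by @Async
}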

Summary

Annotation-driven event listeners introduced in Spring 4.2 continue the trend of reducing boilerplate code in Spring (Boot) based applications. The new approach looks interesting especially for small applications with a small number of events, where the maintenance overhead is lower. In the world of ubiquitous Spring (Boot) magic it is all the more worth remembering that with great power comes great responsibility.

In the next blog post I will write how the new mechanism can also be used to simplify handling of transaction-bound events.

Please note that Spring Framework 4.2 is a default dependency of Spring Boot 1.3 (at the time of writing 1.3.0.M5 is available). Alternatively, it is possible to manually upgrade the Spring Framework version in Gradle/Maven for Spring Boot 1.2.5 – it should work in most cases.

How to communicate with Maven Central/Nexus without a password kept locally unencrypted (especially with Gradle, but not limited to it).

Rationale

Unfortunately, Gradle (and many other build tools) does not provide any mechanism to keep passwords locally encrypted (or at least encoded). Without that, even such a simple activity as showing your global Gradle configuration (~/.gradle/gradle.properties) to a colleague is uncomfortable, not to mention the more serious risks associated with storing passwords on a disk in plain-text form (see, among others, the Sony Pictures Entertainment hack). It is Gradle, so with all the Groovy magic under the hood it would be possible to implement an integration with a system keyring on Linux to fetch a password, but I’m not aware of any existing plugin/mechanism to do that and I would rather prefer not to write it.

Another issue is that nowadays, in the world of ubiquitous automation and cloud environments, it is common to use API keys which allow performing only the given operation(s). Losing such a key doesn’t give an attacker the possibility to hijack the account (e.g. the token can be used neither to log into the administration panel nor to change an email or password, which require additional authentication).

This is very important if you need to keep valid credentials on a CI server to make automatic or even continuous releases. Thanks to my gradle-nexus-staging-plugin there is no need to do any manual steps in the Nexus GUI to promote artifacts to Maven Central, so this was the next issue I wanted to deal with for my private and our FOSS projects at Codearte.

Nexus API key generation

An Internet search for “maven central api key” wasn’t helpful, so I started digging into the Nexus REST API documentation and found that there is in fact a (not widely known) way to generate and use an API key (aka an auth token).

1. Log into the Nexus instance hosting Sonatype OSS Repository Hosting (or your own Nexus instance).
2. Click on your login name in the right-upper corner and choose “Profile”.
3. From the drop-down list with the “Summary” text select “User Token”.
4. Click “Access User Token”.

Generating an API key in Nexus

5. Enter your password.
6. Copy and paste your API username and API key (into your ~/.gradle/gradle.properties file – see the sketch below – or a CI server configuration).
7. Work as usual, in a slightly safer way.
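
The result in ~/.gradle/gradle.properties could look like below (a sketch – the property names depend on how your build script reads the credentials; nexusUsername/nexusPassword is just a popular convention):

nexusUsername=yourGeneratedApiUsername
nexusPassword=yourGeneratedApiKey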

Summary

It is good that it is possible to deploy artifacts to Maven Central/Nexus using API keys, and it is very easy to set up. Someone could argue that the permission policy is coarse-grained (nothing or all operations except password/email change), but in my opinion it seems to be enough for this class of system (an artifact repository). In addition, such an approach should also work with Sbt, Ivy, Leiningen and everything else that tries to upload artifacts into Maven Central (including Maven itself, removing the limitations of the master password encryption with settings-security.xml). Hopefully this post will make it more widely known.

Mockito-Java8 is a set of Mockito add-ons leveraging Java 8 and lambda expressions to make mocking with Mockito even more compact.

At the beginning of 2015 I gave my flash talk Java 8 brings power to testing! at GeeCON TDD 2015 and DevConf.cz 2015. In my speech, using 4 examples, I showed how Java 8 – namely lambda expressions – can simplify testing tools and testing in general. One of those tools was Mockito. To not let my PoC code die on the slides and to make it simply available to others, I have released a small project with two Java 8 add-ons for Mockito, each useful in a specific case.

Mockito logo

Quick introduction

As a prerequisite, let’s assume we have the following data structure:

@Immutable
class ShipSearchCriteria {
    int minimumRange;
    int numberOfPhasers;
}

The library provides two add-ons:

Lambda matcher – allows you to define matcher logic within a lambda expression.

given(ts.findNumberOfShipsInRangeByCriteria(
    argLambda(sc -> sc.getMinimumRange() > 1000))).willReturn(4);

Argument Captor – Java 8 edition – allows you to use ArgumentCaptor in one line (here with AssertJ):

verify(ts).findNumberOfShipsInRangeByCriteria(
    assertArg(sc -> assertThat(sc.getMinimumRange()).isLessThan(2000)));

Lambda matcher

With the help of the static method argLambda a lambda matcher instance is created, which can be used to define matcher logic within a lambda expression (here for stubbing). It could be especially useful when working with complex classes passed as arguments.

@Test
public void shouldAllowToUseLambdaInStubbing() {
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(
        argLambda(sc -> sc.getMinimumRange() > 1000))).willReturn(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

In comparison, the same logic implemented with a custom Answer in Java 7:

@Test
public void stubbingWithCustomAnswerShouldBeLonger() {  //old way
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(any())).willAnswer(new Answer<Integer>() {
        @Override
        public Integer answer(InvocationOnMock invocation) throws Throwable {
            Object[] args = invocation.getArguments();
            ShipSearchCriteria criteria = (ShipSearchCriteria) args[0];
            if (criteria.getMinimumRange() > 1000) {
                return 4;
            } else {
                return 0;
            }
        }
    });
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

Even Java 8 and its less readable constructs don’t help too much:

@Test
public void stubbingWithCustomAnswerShouldBeLongerEvenAsLambda() {  //old way
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(any())).willAnswer(invocation -> {
        ShipSearchCriteria criteria = (ShipSearchCriteria) invocation.getArguments()[0];
        return criteria.getMinimumRange() > 1000 ? 4 : 0;
    });
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

Argument Captor – Java 8 edition

The static method assertArg creates an argument matcher whose implementation internally uses ArgumentMatcher with an assertion provided in a lambda expression. The example below uses AssertJ to provide a meaningful error message, but any assertions (like the native ones from TestNG or JUnit) could be used (if really needed). This allows having an inlined ArgumentCaptor:

@Test
public void shouldAllowToUseAssertionInLambda() {
    //when
    ts.findNumberOfShipsInRangeByCriteria(searchCriteria);
    //then
    verify(ts).findNumberOfShipsInRangeByCriteria(
        assertArg(sc -> assertThat(sc.getMinimumRange()).isLessThan(2000)));
}

In comparison to the 3 lines of the classic way:

@Test
public void shouldAllowToUseArgumentCaptorInClassicWay() {  //old way
    //when
    ts.findNumberOfShipsInRangeByCriteria(searchCriteria);
    //then
    ArgumentCaptor<ShipSearchCriteria> captor = 
        ArgumentCaptor.forClass(ShipSearchCriteria.class);
    verify(ts).findNumberOfShipsInRangeByCriteria(captor.capture());
    assertThat(captor.getValue().getMinimumRange()).isLessThan(2000);
}

Summary

The presented add-ons were created as a PoC for my conference speech, but they should be fully functional and potentially useful in specific cases. To use them in your project it is enough to use Mockito 1.10.x or 2.0.x-beta, add mockito-java8 as a dependency and of course compile your project with Java 8+.
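
For example, in Gradle (the coordinates below are the ones published at the time of writing – please verify the current version on the project webpage):

    testCompile 'info.solidsoft.mockito:mockito-java8:0.3.0'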

More details are available on the project webpage: https://github.com/szpak/mockito-java8

Update 20151217. You may also be interested in my new blog post about using Mockito without static imports.

A quick tutorial on how to promote/release artifacts in a Gradle project to Maven Central, without clicking in the Nexus GUI, using the Gradle Nexus Staging Plugin.

Introduction

Maven Central (aka The Central Repository) is (probably) the world’s largest set of open source artifacts used by Java and JVM-based projects. It was founded by the creators of Apache Maven and has been serving artifacts since 2002. Nowadays there are some alternatives (listed below), but for many users Maven Central is still the primary source of project dependencies (and sometimes the only one whitelisted in corporations).

The Central Repository logo

Problem

To perform a release to The Central Repository, Maven users can use the Nexus Staging Maven Plugin – a free, but not fully open source plugin. With Gradle, however, it was required to log into the Nexus GUI and manually invoke two actions (close repository and release/promote repository). Quite boring and, in addition, highly problematic with a Continuous Delivery approach. Luckily Nexus exposes a REST API which, with some work, allows doing the same. The Gradle Nexus Staging Plugin arose to do that job.

Quick start

Important. Please pay attention that the prerequisite is to have an active and configured account in Sonatype OSSRH (OSS Repository Hosting) as well as a Gradle project configured to publish release artifacts to the staging repository. If you don’t have that already, please follow the separate section for Gradle in the official guide.

To set up automatic release/promotion in your project, add gradle-nexus-staging-plugin to the buildscript dependencies in the build.gradle file of your root project:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "io.codearte.gradle.nexus:gradle-nexus-staging-plugin:0.5.1"
    }
}

Apply the plugin:

apply plugin: 'io.codearte.nexus-staging'

Configure it:

nexusStaging {
    packageGroup = "org.mycompany.myproject"
    stagingProfileId = "yourStagingProfileId" //when not defined, it will be fetched from the server using "packageGroup"
}

After a successful archives upload (with the maven, maven-publish or nexus plugin) to Sonatype OSSRH, call:

./gradlew closeRepository promoteRepository

to close the staging repository and promote/release it and its artifacts. If synchronization with Maven Central is enabled, the artifacts should automatically appear in Maven Central within several minutes.

Details

The plugin provides two main tasks:

  • closeRepository – closes an open repository with uploaded artifacts. There should be just one open repository available in the staging profile (possible old/broken repositories can be dropped in the Nexus GUI)
  • promoteRepository – promotes/releases a closed repository (required to get artifacts into Maven Central)

And one additional:

  • getStagingProfile – gets and displays the staging profile id for a given package group. This is a diagnostic task to get the value and put it into the configuration closure as stagingProfileId. To see the result it is required to call Gradle with the --info switch.

It has to be mentioned that a call to the Nexus REST API returns immediately, but the closing operation itself takes a moment. Therefore, to make it possible to call closeRepository promoteRepository together, there is a built-in retry mechanism.
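
The retry behavior can also be tuned in the configuration closure. A sketch (the parameter names are taken from the plugin documentation as I remember it – please verify them on the project webpage):

nexusStaging {
    packageGroup = "org.mycompany.myproject"
    numberOfRetries = 10                   //how many times to check whether the repository is already closed
    delayBetweenRetriesInMillis = 2000     //delay between those checks
}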

The plugin is “upload mechanism agnostic” and can be used together with the maven, maven-publish or nexus plugins.

For more details and configuration parameters see project webpage or the working example in the plugin’s own release configuration.

Alternatives to Maven Central?

There is a much younger, but promising alternative – Bintray, which also allows serving artifacts. It is free for open source projects and I have personally used it for some other projects and even created an automatic release mechanism for Bintray, Travis and Gradle. It works OK, but to put artifacts also into Maven Central it is required to store a private key used for signing on their servers and, in addition, provide Nexus credentials. This increases the risk of having them stolen, and at Codearte we prefer to use a private Jenkins instance to perform releases directly to Maven Central.

Summary

With the Gradle Nexus Staging Plugin the whole release process to Maven Central can be performed with Gradle from the command line and, with some additional work, completely automatically from a CI server. No more buttons to push in the Nexus GUI. In addition to Sonatype OSSRH, the plugin can also be used with private Nexus instances with staging repositories enabled.

Btw, there are possibly many things that could be enhanced in the plugin. If you need something or have found a bug, feel free to use the issue tracker to report it.

Thanks to Kuba Kubryński for motivation and help with analyzing the not very well documented Nexus REST API.

A quick tutorial on how to configure Spock 1.0 with Groovy 2.4 using Maven and Gradle.

Spock 1.0 has finally been released. I have already written two blog posts about its new features and enhancements. One of the recent changes was a separation of artifacts designed for specific Groovy versions: 2.0, 2.2, 2.3 and 2.4, to minimize the chance of coming across a binary incompatibility at runtime (in the past there were only versions for Groovy 1.8 and 2.0+). That was done suddenly and, based on the messages on the mailing list, it confused some people. After being asked twice to help properly configure two projects, I decided to write a short post presenting how to configure Spock 1.0 with Groovy 2.4 in Maven and Gradle. It is also a great place to compare how much work is required to do it in those two very popular build systems.

Maven

Maven does not natively support other JVM languages (like Groovy or Scala). To use Groovy in a Maven project it is required to use a third party plugin. For Groovy the best option seems to be GMavenPlus (a rewrite of the no longer maintained GMaven plugin). An alternative is a plugin which allows using the Groovy-Eclipse compiler with Maven, but it does not use the official groovyc and in the past there were problems with staying up-to-date with new releases/features of Groovy.

A sample configuration of the GMavenPlus plugin could look like this:

<plugin>
    <groupId>org.codehaus.gmavenplus</groupId>
    <artifactId>gmavenplus-plugin</artifactId>
    <version>1.4</version>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>testCompile</goal>
            </goals>
        </execution>
    </executions>
</plugin>

As we want to write tests in Spock, which recommends naming files with a Spec suffix (from specification), it is additionally required to tell Surefire to look for tests also in those files:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>${surefire.version}</version>
    <configuration>
        <includes>
            <include>**/*Spec.java</include> <!-- Yes, .java extension -->
            <include>**/*Test.java</include> <!-- Just in case of having also "normal" JUnit tests -->
        </includes>
    </configuration>
</plugin>

Please notice that it is necessary to include **/*Spec.java, not **/*Spec.groovy, to make it work.

The dependencies also have to be added:

    <dependencies>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-all</artifactId>
            <version>2.4.1</version>
        </dependency>
        <dependency>
            <groupId>org.spockframework</groupId>
            <artifactId>spock-core</artifactId>
            <version>1.0-groovy-2.4</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

It is very important to take the proper version of Spock. For Groovy 2.4, version 1.0-groovy-2.4 is required; for Groovy 2.3, version 1.0-groovy-2.3. In case of a mistake Spock protests with a clear error message:

Could not instantiate global transform class
org.spockframework.compiler.SpockTransform specified at
jar:file:/home/foo/.../spock-core-1.0-groovy-2.3.jar!/META-INF/services/org.codehaus.groovy.transform.ASTTransformation
because of exception
org.spockframework.util.IncompatibleGroovyVersionException:
The Spock compiler plugin cannot execute because Spock 1.0.0-groovy-2.3 is
not compatible with Groovy 2.4.0. For more information, see
http://versioninfo.spockframework.org

Together with the other mandatory pom.xml elements, the file grows to over 50 lines of XML. Quite a lot just for Groovy and Spock. Let’s see how complicated it is in Gradle.

Gradle

Gradle has built-in support for Groovy and Scala. Without further ado, the Groovy plugin just has to be applied.

apply plugin: 'groovy'

Next, the dependencies have to be added:

compile 'org.codehaus.groovy:groovy-all:2.4.1'
testCompile 'org.spockframework:spock-core:1.0-groovy-2.4'

and the information where Gradle should look for them:

repositories {
    mavenCentral()
}

Together with defining the package group and version it took 15 lines of code in the Groovy-based DSL.
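
For completeness, a minimal build.gradle assembled from the fragments above could look like this (group and version are placeholders):

apply plugin: 'groovy'

group = 'com.example.spockdemo'     //placeholder
version = '0.1-SNAPSHOT'            //placeholder

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.4.1'
    testCompile 'org.spockframework:spock-core:1.0-groovy-2.4'
}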

Btw, in the case of Gradle it is also very important to match the Spock and Groovy versions, e.g. Groovy 2.4.1 and Spock 1.0-groovy-2.4.

Summary

Thanks to the embedded support for Groovy and the compact DSL, Gradle is the preferred solution to start playing with Spock (and Groovy in general). Nevertheless, if you prefer Apache Maven, with the help of GMavenPlus (and XML) it is also possible to build a project tested with Spock.

The minimal working project with Spock 1.0 and Groovy 2.4 configured in Maven and Gradle can be cloned from my GitHub.

Bonus: Graphical comparison of Spock and Groovy configuration in Maven and Gradle

Note. I haven’t been using Maven in my projects for over 2 years (I prefer Gradle), so if there is a better/easier way to configure Groovy and Spock with Maven, just let me know in the comments.

Note 2. The configuration examples assume that Groovy is used only for tests and the production code is written in Java. It is possible to mix Groovy and Java code together, but then the configuration is a little more complicated.

Note 3. If you are interested in getting to know useful tips and tricks about using the Spock Framework to test your Java and Groovy code, I will be giving a presentation about that at the 4Developers conference, April 20th, 2015.

Update 20150310. Redesigned summary.

Leonard Nimoy 1931-2015

In my previous post I presented how to display and filter dependencies in a multi-module Gradle build. This time I will show how to quickly discover how a given artifact became a dependency of our project.

Problem

A real-life use case: a multi-project Gradle build. At runtime SLF4J reports a problem with two discovered implementations: slf4j-logback and slf4j-simple. Logback is used in the project, but where did slf4j-simple come from? Of course it is not listed in our build.gradle, but it is packaged into the WAR file and causes a conflict.

Long and bumpy way

With the knowledge from the previous post, one of the possible solutions is to write an allDeps task, dump the dependencies to a file and find the rogue dependency.

Tracking down a dependency is not clearly visible

It is not clearly visible at first sight, even for that small project with only 4 direct dependencies. But luckily there is a better way.

Quick solution

In addition to the dependencies task (implemented in DependencyReportTask), Gradle has one more similar task – dependencyInsight (implemented in DependencyInsightReportTask). It allows limiting the displayed dependencies tree to a selected dependency (also a transitive one).

The command takes 2 mandatory parameters:

  • --configuration – the Gradle configuration to search in – e.g. runtime or testRuntime (in the dependencies task the configuration was optional to specify)
  • --dependency – the dependency to look for – e.g. org.slf4j:slf4j-simple

In the mentioned project it could be:

gradle sub2:dependencyInsight --configuration testRuntime --dependency slf4j-simple

A clear explanation why a dependency was included

The result is self-explanatory. slf4j-simple is (unnecessarily) included by the Moco library (moco-core). With that knowledge it is easy to exclude that transitive dependency:

compile('com.github.dreamhead:moco-core:0.9.2') {
    exclude group: 'org.slf4j', module: 'slf4j-simple'
}

or when appropriate, do it globally for all configurations:

configurations {
    all*.exclude group: 'org.slf4j', module: 'slf4j-simple'
}

Further tuning

The nice thing is the ability to loosely define the expected dependency features. All of the following are valid:

  • --dependency org.slf4j:slf4j-simple:1.7.7 – only exactly version of that dependency
  • --dependency org.slf4j:slf4j-simple – all versions of that dependency
  • --dependency org.slf4j – all dependencies in given group
  • --dependency slf4j-simple – all dependencies with given name regardless of the group (useful when a package was relocated)

Multiple subprojects

dependencyInsight, the same as the dependencies task, does not work with multiple subprojects. Fortunately it is simple to create such a task ourselves:

subprojects {
    task allDepInsight(type: DependencyInsightReportTask) {}
}

It accepts all the parameters supported by the base dependencyInsight task, and:

gradle allDepInsight --configuration testRuntime --dependency org.slf4j:slf4j-simple

would do its job in all subprojects.

Summary

The dependencyInsight task can be very useful when tracking down suspicious and unexpected transitive dependencies in a project. The ability to make it multi-project build friendly makes it even more powerful.

Tested with Gradle 2.2.

Gradle logo

gradle dependencies allows displaying the dependencies of your project printed as a pretty ASCII tree. Unfortunately it does not work well for submodules in a multi-project build. I was not able to find a satisfactory solution on the web, so after working out my own, this blog post arose.

Multiple subprojects

For multi-project builds gradle dependencies called in the root directory unexpectedly displays no dependencies:

No dependencies displayed for the root project

In fact Gradle is right. The root project usually has no code and no compile or runtime dependencies. Only in the case of using plugins could there be some additional configurations created by them.

You could think about --recursive or --with-submodules flags, but they do not exist. It is possible to display dependencies for subprojects with “gradle sub1:dependencies” and “gradle sub2:dependencies”, but this is very manual and impractical for more than a few modules. We could write a shell script, but having regard to (potential) recursive folder traversal there are some catches. Gradle claims to be very extensible with its Groovy-based DSL, so why not take advantage of that? Iteration over subprojects can give some effects, but after testing a few concepts I ended up with something pure and simple:

subprojects {
    task allDeps(type: DependencyReportTask) {}
}

When gradle allDeps is called, it executes the dependencies task on all subprojects.

Dependencies for all subprojects

Remove duplication

All the dependencies belong to us, but some parts of the tree look similar (and duplication is a bad thing). Especially the configurations default, compile and runtime, and the second group testCompile and testRuntime, in most cases contain (almost) the same set of dependencies. To make the output shorter we could limit it to runtime (or, in the case of test dependencies, testRuntime). The dependencies task provides a convenient parameter --configuration and, to focus on test dependencies, “gradle allDeps --configuration testRuntime” can be used.

Dependencies in one configuration for all subprojects

Summary

Where could it be useful? Recently I was pair programming with my old-new colleague on a new project (with dozens of submodules) where SLF4J, in addition to the expected slf4j-logback provider, also discovered slf4j-simple on the classpath. We wanted to figure out which library depends on it. Logging the dependencies tree to a file, with the help of grep, gave us the answer.

As a bonus, during my fights with DependencyReportTask I found an easier way to get to know what requires a given library. I will write about it in my next post.

Tested with Gradle 2.2.

Gradle logo

When writing integration tests it is sometimes required to set up the initial environment conditions/state before/after a given test or the whole specification. The upcoming Spock 1.0 expands the number of options available to do it in a convenient way.

This is the second part of the series about what is new and noteworthy in the (upcoming) Spock 1.0.

New extension @RestoreSystemProperties

System properties provide information about the JVM configuration and the environment, like the JVM vendor and version, operating system, classpath, locale or time zone. Some of them can impact the way our application works. The following example checks if the protection against running a dangerous program as root works properly on Unix machines. To not affect other tests, @RestoreSystemProperties restores the original system properties.

@Stepwise
class RestoreSystemPropertiesSpec extends Specification {

    @RestoreSystemProperties
    def "should deny perform action when running as root"() {
        given:
            System.setProperty("user.name", "root")
        and:
            def sut = new DangerousActionPerformer()
        when:
            sut.squashThemAll()
        then:
            thrown(IllegalStateException)
    }

    def "should perform action when running as 'normal' user"() {
        given:
            def sut = new DangerousActionPerformer()
        when:
            sut.squashThemAll() //current user name was restored
        then:
            noExceptionThrown()
    }

    (...)
}

The extension is activated using the @RestoreSystemProperties annotation. It can be placed on a feature method to be applied right after that test only, or on a class to restore system properties after every test in the specification. The behavior in the second case is identical to placing the annotation on every test method.

Internally, Spock makes a copy of the system Properties structure before a test and restores it afterwards.
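
Conceptually (this is only a sketch, not Spock’s actual implementation) it boils down to:

//conceptual sketch only - not the actual Spock code
Properties backup = (Properties) System.getProperties().clone();    //before the test
// ... the test runs and may call System.setProperty(...)
System.setProperties(backup);                                       //after the test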

@ConfineMetaClassChanges (since 0.7)

One of the things which make the Groovy language so powerful is metaprogramming – the ability to modify classes (e.g. add/modify methods) at runtime. That could look like hacking (and that is true), but in some specific cases it is very useful (see how Grails and GORM create database queries from non-existing methods – like findByFirstNameAndSecondName(...) – or how a custom DSL like 6.minutes.ago is written).

MetaClass operations are usually executed at the class (not instance) level, which generally means they affect every instance of that class in a given class loader. GORM-like changes done in multiple tests could interact with each other, causing tests to fail. For those cases the @ConfineMetaClassChanges extension (available already in the previous Spock version) was created.

@Stepwise
class ConfineMetaClassChangesSpec extends Specification {

    @ConfineMetaClassChanges(EmptyClass)
    def "should allow to call added method"() {
        given:
            EmptyClass.metaClass.returnFoo = { "foo" }
        when:
            def fooValue = new EmptyClass().returnFoo()
        then:
            fooValue == "foo"

    }

    def "should throw exception on not available method"() {
        when:
            new EmptyClass().returnFoo()
        then:
            thrown(MissingMethodException)
    }
}

class EmptyClass {
}

The @ConfineMetaClassChanges annotation can be placed on a feature method to restore the MetaClass to the state from just before that test, or on a class to restore the MetaClass after the last test in the specification to the state from before the setupSpec method was called.

Note that the @ConfineMetaClassChanges behavior when placed at the specification level is different (once, after the last test) from the @RestoreSystemProperties behavior (every time, after every test).

@AutoCleanup (since 0.7)

Another already existing, but rarely known and used extension is @AutoCleanup. Usually external resources are allocated in setup/setupSpec methods and released in cleanup/cleanupSpec methods. But Spock, the same as pure JUnit, supports auto-initialization of instance variables (fields) before every test/feature (which overrides the default Java behavior where a field is initialized only once, when a class instance is created).

class ManualCleanupSpec extends Specification {

    @Shared
    private DBConnection dbConnection = new H2DBConnection()

    void cleanupSpec() {
        dbConnection.close()    //boilerplate code
    }

    def "should do fancy thing on DB"() {
        ...
    }
}

interface DBConnection {
    void close()
}

To make it easier to release those inlined resources, they can be annotated with @AutoCleanup to call the close method (or any other method given as a parameter) in the clean-up phase.

class AutoCleanupSpec extends Specification {

    @AutoCleanup
    @Shared
    private DBConnection dbConnection = new H2DBConnection()

    def "should do fancy thing on DB"() {
        ...
    }
}

@AutoCleanup supports both instance variables (the method is called after every test) and static/@Shared fields (the clean-up is performed after the last test). All errors during clean-up are caught to not interrupt the test execution. By default they are logged, which can be disabled using the quiet parameter: @AutoCleanup(quiet = true).
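
For example, for a resource exposing a shutdown() method instead of close() (EmbeddedServer is a hypothetical class):

class CustomCleanupSpec extends Specification {

    @AutoCleanup("shutdown")    //shutdown() will be called instead of close()
    @Shared
    private EmbeddedServer server = new EmbeddedServer()    //hypothetical resource

    def "should respond to ping"() {
        ...
    }
}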

Where can I find Spock 1.0?

Please notice that this blog post is about features available in the Git branch planned to become Spock 1.0 sometime in the future. They are currently available only in a 1.0-SNAPSHOT version in a separate repository. As they have not been released (at the time of writing), they could look different or even (in an extreme case) not be available in the final version. Be aware.

To add Spock 1.0 as a dependency to your project, define the snapshot repository https://oss.sonatype.org/content/repositories/snapshots/ and the Spock artifact in version 1.0-groovy-2.0-SNAPSHOT. A sample configuration in Gradle:

repositories {
    ...
    maven { url "https://oss.sonatype.org/content/repositories/snapshots/" }
}

dependencies {
    ...
    testCompile ('org.spockframework:spock-core:1.0-groovy-2.0-SNAPSHOT')
}

Summary

The mentioned extensions were designed for integration tests and, because of the class loader (or system) wide changes they make, it is not safe to execute more than one such test at the same time. Therefore it is important to separate pure (very fast and independent) unit tests from integration tests to (inter alia) allow unit tests to run in parallel.

After @Requires and @IgnoreIf this is the second post about the changes in the upcoming Spock 1.0. The examples were written using Spock 1.0-groovy-2.0-SNAPSHOT and they can be cloned from GitHub.

May the Spock be with you!

Introduction

Some tests (especially integration tests) should be run only if certain conditions are (or are not) met. The upcoming Spock 1.0 provides the new @Requires and improved @IgnoreIf extensions to handle that requirement in a convenient way.

Historical background – waiting for Spock 1.0

The latest stable Spock version (0.7) was released in October 2012 (~2 years ago at the time of writing). Hundreds of commits have been made to the Git repository since then, and many new features and improvements are already available in the current SNAPSHOT version. Unfortunately the “New and Noteworthy” section in the documentation doesn’t cover those changes, and the only way (as far as I know) to get to know them is digging into the Git commits. As a Spock user I was curious what is around the corner, so I did that job, which resulted in a presentation at Confitura (in Polish). As a bonus I planned a series of blog entries describing the more interesting features, and possibly also an update of the Spock documentation.

This is just the first part of the changes in (upcoming) Spock 1.0. There is much more to discover and I will be presenting that in future blog posts.

New extension @Requires

@Requires allows running a given test (or the whole specification) only if given criteria are met.

    @Requires({ System.getProperty("os.arch") == "i386" })
    def "should use native libraries on 32-bit JVM"() { ... }

    @Requires({ env.containsKey("ENABLE_CRM_INTEGRATION_TESTS") })
    def "should do complicated things with CRM"() { ... }

It is the opposite of the @IgnoreIf extension, already known from Spock 0.7, which allows ignoring a given test (or the whole specification). Both have access to system properties (properties – System.getProperties()), environment variables (env – System.getenv()) and the Java version (javaVersion – System.getProperty("java.version")). In the Spock 1.0 branch they both are much more powerful.

Testing my examples in practice, I was surprised that the “os.arch” system property returns not the processor architecture taken from the OS, but the architecture of the JVM. So, be aware that running a 32-bit JVM on a 64-bit system will return “i386”. Luckily, that knowledge is usually not important (unless, for example, native libraries are used).

New features in @Requires and @IgnoreIf

In addition to the new @Requires extension, it and its twin @IgnoreIf have got a new internal abstraction to simplify the way operating system information and the JVM version can be accessed.

Operating system

Operating system information is accessible in Spock 0.7 using the os.name and os.version system properties. Unfortunately that can be complicated and error prone in some cases:

    //Run only on Linux and MacOS - in Spock 0.7 style
    @Requires({ System.getProperty("os.name").toLowerCase().contains("mac os") || 
                System.getProperty("os.name").toLowerCase().contains("darwin") || 
                System.getProperty("os.name").toLowerCase().contains("linux")  })
    def "should use fancy text console features"() { ... }

In the meantime Spock has got the new OperatingSystem abstraction which provides methods like:

    String getName() //Linux
    String getVersion() //8.1
    Family getFamily() //SOLARIS (enum)

    boolean isLinux()
    boolean isWindows()
    boolean isMacOs() //also Solaris and Other
    boolean isSolaris()
    boolean isOther()

With their help the mentioned example can be simplified to very readable code precisely expressing our intentions:

    @Requires({ os.linux || os.macOs  })    //Spock 1.0 edition
    def "should use fancy text console features"() { ... }

JVM version

In addition to OperatingSystem there is also a Jvm abstraction which provides methods like:

    boolean isJava7() //true only if Java 7
    boolean isJava8Compatible() //true if Java 8+

or more generic:

    String getJavaVersion() //e.g. "1.8.0_05"
    String getJavaSpecificationVersion() //e.g. "1.8"

The original JVM-version-dependent test case declaration:

    @IgnoreIf({ javaVersion < 1.8 })
    def "should find at runtime and use CompletableFuture for Java 8+"() { 
        ... 
    }

can be replaced with a more readable one:

    @IgnoreIf({ !jvm.java8Compatible })
    def "should find at runtime and use CompletableFuture for Java 8+"() { 
        ... 
    }

Using static methods

@Requires (the same as @IgnoreIf) can also use static methods to define a condition (a feature available, but less known, already in Spock 0.7). Those methods can be declared inside a given class, in another class or (when using Groovy 2.3+) in a trait.

import static FancyFeatureRequiredIntranetConnectionSpecIT.isUrlAvailable

@Requires({ isUrlAvailable("http://some-host.intranet/") })
class FancyFeatureRequiredIntranetConnectionSpecIT extends Specification {

    //could be also from trait or other class
    static boolean isUrlAvailable(String url) { 
        //...
    }

    def "should do one thing in intranet"() { ... }

    def "should do another thing in intranet"() { ... }

    ...
}

Please pay attention to the static import in this example. Even though the code without the import looks OK and IDEA sees the method, it will fail at runtime with groovy.lang.MissingMethodException: No signature of method: (...). This is caused by the way Spock internally tries to resolve references in a Closure. For a detailed explanation see the next paragraph.

groovy.lang.MissingMethodException: No signature of method:
FancyFeatureRequiredIntranetConnectionSpecIT$_closure1.isUrlAvailable()
is applicable for argument types: (java.lang.String) values: [http://some-host.intranet/]

How does it internally work (for the curious)?

You may wonder how that code even compiles if jvm or os are not known to the Specification class. The key thing is that the code inside the @Requires (and @IgnoreIf) annotation is wrapped with {} (curly brackets). In Groovy that syntax means an “anonymous code block” and is named a Closure. The @Requires annotation accepts a Closure as a parameter:

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@ExtensionAnnotation(RequiresExtension.class)
public @interface Requires {
  Class<? extends Closure> value(); // <- important line
}

Two important aspects of a Groovy Closure are used by Spock to implement those features:

  • a block of code is executed “at a later point”
  • an execution context can be delegated

The first point means that the Groovy compiler ignores unresolved references. At compilation time the execution context can be unknown (a Closure can be passed as a parameter far away from its declaration place). The used references will be resolved at runtime, and here the second point applies.

By default, property references and methods are attempted to be resolved through the owner (an enclosing class or a surrounding Closure) and later through the delegate (by default the same as the owner). The delegate can be changed, and that is used in RequiresExtension:

    (...)
    condition.setDelegate(new PreconditionContext());
    condition.setResolveStrategy(Closure.DELEGATE_ONLY);
    return condition.call();

PreconditionContext is the delegate class for @Requires and @IgnoreIf, providing an execution context. Its getOS() or getJvm() methods are resolved and executed when the os or jvm properties are used in the annotation Closure. In addition to setting a dedicated delegate, the resolve strategy is changed to not bother with the enclosing class and to try to resolve references using PreconditionContext only.

If you are new to Groovy and feel confused by the internals, I suggest taking a look at an introduction to Groovy Closures. If you like it, there are a lot more things to get to know about Groovy Closures and their power.

Where can I find Spock 1.0?

Please notice that this blog post is about features available in the Git branch planned to become Spock 1.0 sometime in the future. They are currently available only in a 1.0-SNAPSHOT version in a separate repository. As they have not been released (at the time of writing), they could look different or even (in an extreme case) not be available in the final version. Be aware.

To add Spock 1.0 as a dependency to your project, define the snapshot repository http://oss.sonatype.org/content/repositories/snapshots/ and the Spock artifact in version 1.0-groovy-2.0-SNAPSHOT. A sample configuration in Gradle:

repositories {
    ...
    maven { url "http://oss.sonatype.org/content/repositories/snapshots/" }
}

dependencies {
    ...
    testCompile 'org.spockframework:spock-core:1.0-groovy-2.0-SNAPSHOT'
}

Summary

The new @Requires extension and the enhancements in @IgnoreIf are the first part of the series about what is new and noteworthy in the upcoming Spock 1.0. When will it be released? Citing John Carmack: “It’ll be done when it’s done”. If you would like to bring the release closer, donate your time and contribute to the project.

This post was written using Spock 1.0-groovy-2.0-SNAPSHOT. The examples can be cloned from GitHub.

Test quickly and prosper :).