A simple way to display and analyze buildscript dependencies (e.g. plugins) in Gradle

Introduction

This is the third part of my Gradle tricks mini-series related to visualization and analysis of dependencies. In the first post I presented a way to display dependencies for all subprojects in a multi-project build. In the second I showed a technique useful in tracking down unexpected transitive dependencies in a project. This time something less often used, yet crucial in specific cases – buildscript dependencies.


Real use case

Buildscript dependencies contain the plugins used in our project and their dependencies. That would seem to be nothing interesting unless you are a Gradle plugin developer, but it is not completely true. Once, as a consultant, I was investigating an issue with a NoSuchMethodException in a large project with a custom build framework built on top of Gradle. The problem occurred only when one innocent, very popular open source plugin had been added to the project. The same plugin worked fine in many other projects in that company. In the end I was able to figure out that one of the dependencies used in the buildSrc custom scripts was overriding, with an older version, the same dependency used by the plugin. As a result the plugin failed at runtime with the mentioned NoSuchMethodException. To figure that out I had to use my custom script (sketched below), as buildscript/classpath dependencies are completely ignored when ./gradlew dependencies or ./gradlew dependencyInsight is used.
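
For reference, such a diagnostic script can be as small as the task below – a minimal sketch (the task name is arbitrary) which prints the first-level buildscript dependencies using Gradle's public API:

//build.gradle – prints first-level buildscript (classpath) dependencies
task buildscriptDependencies {
    doLast {
        buildscript.configurations.classpath.resolvedConfiguration
                .firstLevelModuleDependencies.each { dep ->
            println "${dep.moduleGroup}:${dep.moduleName}:${dep.moduleVersion}"
        }
    }
}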

Solution

The idea to write this post arose at the beginning of 2015. I wanted to present my small Gradle task that, using some internal Gradle mechanisms, retrieves buildscript dependencies and displays them on the console. The post was postponed, and almost a year later I was positively surprised reading the release notes for Gradle 2.10: the new buildEnvironment task had been added.

$ ./gradlew buildEnvironment
:buildEnvironment

------------------------------------------------------------
Root project
------------------------------------------------------------

classpath
+--- com.bmuschko:gradle-nexus-plugin:2.3
\--- io.codearte.gradle.nexus:gradle-nexus-staging-plugin:0.5.3
     \--- org.codehaus.groovy.modules.http-builder:http-builder:0.7.1
          +--- org.apache.httpcomponents:httpclient:4.2.1
          |    +--- org.apache.httpcomponents:httpcore:4.2.1
          |    +--- commons-logging:commons-logging:1.1.1
          |    \--- commons-codec:commons-codec:1.6
          +--- net.sf.json-lib:json-lib:2.3
          |    +--- commons-beanutils:commons-beanutils:1.8.0
          |    |    \--- commons-logging:commons-logging:1.1.1
          |    +--- commons-collections:commons-collections:3.2.1
          |    +--- commons-lang:commons-lang:2.4
          |    +--- commons-logging:commons-logging:1.1.1
          |    \--- net.sf.ezmorph:ezmorph:1.0.6
          |         \--- commons-lang:commons-lang:2.3 -> 2.4
          +--- net.sourceforge.nekohtml:nekohtml:1.9.16
          \--- xml-resolver:xml-resolver:1.2

(*) - dependencies omitted (listed previously)

BUILD SUCCESSFUL

Total time: 1.38 secs

Two plugins, and a pack of transitive dependencies brought into gradle-nexus-staging-plugin by http-builder (maybe it would be good to replace it with Jodd?).

Summary

It is worth being able to distinguish standard project dependencies from buildscript dependencies. The new buildEnvironment task helps to deal with the latter. This in turn becomes essential when strange runtime errors start to show up.

Tested with Gradle 2.10.

Picture credits: Zeroturnaround.

Would it be useful to unroll all parameterized Spock tests automatically?

I’ve always been frustrated with the need to add the @Unroll annotation to every parameterized test/feature (or at least at the class/specification level) to make unrolling work in Spock. It was even worse to deal with code with missing @Unroll annotations and cryptic test results. For backward compatibility, unrolling will probably not be enabled by default in the foreseeable future, but luckily there is a quick solution.


Photo: Christopher Michel, CC BY 2.0

Unroll for all and for free

To enable global unrolling it is only required to add spock-global-unroll.jar to your classpath:

testCompile 'info.solidsoft.spock:spock-global-unroll:0.5.0'

To make it easier to use spock-global-unroll with different Spock versions (like 1.0-groovy-2.0 and 1.0-groovy-2.3) the plugin does not have a compile dependency on Spock, so a proper spock-core jar has to be explicitly defined in the build configuration. E.g.:

testCompile 'info.solidsoft.spock:spock-global-unroll:0.5.0'
testCompile 'org.spockframework:spock-core:1.0-groovy-2.4'

That’s all. spock-global-unroll is a global extension which is activated automatically by Spock. All parameterized Spock tests are unrolled without the need to use @Unroll annotation.
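
For example, a plain parameterized feature like the hypothetical one below is then reported as separate test executions (one per row of the where block), with the #placeholders expanded, without a single annotation:

class MaxSpec extends Specification {   //hypothetical specification

    def "maximum of #a and #b should be #c"() {
        expect:
            Math.max(a, b) == c
        where:
            a | b || c
            1 | 7 || 7
            5 | 2 || 5
    }
}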

Disabling automatic unrolling for a class

Automatic unrolling can be disabled for a particular class by putting @DisableGlobalUnroll on it.

The nice thing is that @Unroll annotations manually placed at the test (feature) level can be used to unroll particular tests anyway (even if automatic unrolling has been disabled for a given class).

@DisableGlobalUnroll
class PeselValidatorSpec extends Specification {

    //one big test for multiple input parameters
    def "should not be unrolled for some reasons PESEL #number"() { ... }

    (...)
}

Overriding default test name

To override the default test name expansion (with #placeholders in a test name) the @Unroll annotation with a custom text can be used on top of a feature method or at the specification level.

@DisableGlobalUnroll
class PeselValidatorSpec extends Specification {

    //one big test for multiple input parameters
    def "should not be unrolled for some reasons PESEL #number"() { ... }

    //unrolled anyway
    @Unroll("PESEL '#pesel' should be #description")
    def "should validate PESEL correctness"() { ... }

    (...)
}

Summary

Being able to implement automatic test unrolling within 15 minutes, I decided to share it with the community – maybe there are others who don’t like to write boilerplate code :). The code written to achieve it has just a few lines of production code (of course there are also 3 test classes to verify that the extension works as expected :) ). This shows the power of Spock’s extensibility.

The complete source code is available from GitHub: https://github.com/szpak/spock-global-unroll

Update 20160521. I added automatic migration scripts to the project README to make the migration easier.

Btw, if you would like to find out more about “Interesting nooks and crannies of Spock”, I will be speaking about them in May and June at GeeCON 2016 in Kraków, Gr8Conf 2016 in Copenhagen and Devoxx Poland, again in Kraków.


Have you ever experienced the “Could not find property X on plugin extension Y” error with a freshly cloned GitHub project you wanted to contribute to?

A missing username, password or token for a service you may have never heard of? It usually happens when you try to do anything (like just building the project), not only when a given plugin (like an online code coverage tool) is used. I didn’t like having to modify my environment just to provide a small fix to another open source project. It was annoying me and I wanted to change it. Starting with Gradle 2.13 it became possible. However, let’s start with the reasons (if you are interested only in the solution, please move forward to the last 2 paragraphs).


Why do I get “Could not find property…”?

Most Gradle plugins need to be configured. Some properties can be set directly in build.gradle, but some others (especially credentials) are better kept locally in ~/.gradle/gradle.properties. As a result, plugin configuration sections often look like this:

bintray {
    user = project.getProperty('bintrayUser')
    key = project.getProperty('bintrayKey')
    ...
}

or that:

bintray {
    user = getProperty('bintrayUser')
    key = getProperty('bintrayKey')
    ...
}

or even shorter:

bintray {
    user = bintrayUser
    key = bintrayKey
    ...
}
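
The values themselves live in the developer’s local ~/.gradle/gradle.properties – a minimal sketch with made-up values (the property names must match those used in build.gradle):

#~/.gradle/gradle.properties
bintrayUser=someUser
bintrayKey=s3cr3tK3y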

It works fine for a project developer having bintrayUser and bintrayKey defined in their local configuration, but for every person not uploading to Bintray on a daily basis it fails with:

* What went wrong:
A problem occurred evaluating root project 'another-nice-open-source-project'.
> Could not find property 'bintrayKey' on com.jfrog.bintray.gradle.BintrayExtension_Decorated@2ecc563.

The result is that project.getProperty(), not to mention an explicit assignment, just throws an exception when a particular property is not found. What is bad is that the code is executed in the configuration phase. For that reason the execution of every task, even one not related to that particular plugin (like gw tasks or gw wrapper), fails miserably.

As a workaround a guard check has to be performed:

bintray {    //Gradle <2.13
    user = hasProperty('bintrayUser') ? getProperty('bintrayUser') : ''
    key = hasProperty('bintrayKey') ? getProperty('bintrayKey') : ''
    ...
}

It works, but doesn’t look very compact. As another option a dummy placeholder could be kept in the project configuration, but starting with Gradle 2.13 there is a better way to cope with that.

project.findProperty()

Gradle 2.13 is the first version with my contribution of the new method project.findProperty(). It behaves like getProperty(), but instead of throwing an exception, null is returned. This simplifies the assignment greatly:

bintray {    //Gradle 2.13+
    user = findProperty('bintrayUser') ?: ''
    key = findProperty('bintrayKey') ?: ''
    ...
}

Some people could say that Optional would be better as a returned value, but this is a public API and Gradle supports Java versions older than 8.

Summary

For me findProperty is a method I had very often been looking for in Gradle. I regret that it took me over a year to make this pull request. Gradle 2.13 has just been released and version upgrades across projects will be performed gradually. It can take some time, but every project migrating to 2.13 will be able to simplify its configuration, making the “Could not find property X on plugin Y” error message a remembrance of the past (of course, unless you really need to configure the particular plugin to use it :) ).

Tested with Gradle 2.13.

How parallel execution of blocking “side-effect only” (aka void) tasks became easier with the Completable abstraction introduced in RxJava 1.1.1.

As you may have noticed reading my blog, I primarily specialize in Software Craftsmanship and automatic code testing. However, in addition I am an enthusiast of Continuous Delivery and broadly defined concurrency. The latter ranges from pure threads and semaphores in C to more high-level solutions such as ReactiveX and the actor model. This time a use case for a very convenient (in specific cases) feature introduced in the brand new RxJava 1.1.1 – rx.Completable. Similarly to many of my blog entries, this one is also a reflection of an actual situation I encountered working on real tasks and use cases.


A task to do

Imagine a system with quite complex processing of asynchronous events coming from different sources. Filtering, merging, transforming, grouping, enriching and more. RxJava suits here very well, especially if we want to be reactive. Let’s assume we have already implemented it (it looks and works nicely) and there is only one more thing left. Before we start processing, it is required to tell 3 external systems that we are ready to receive messages: 3 synchronous calls to legacy systems (via RMI, JMX or SOAP). Each of them can last a number of seconds and we need to wait for all of them before we start. Luckily, they are already implemented and we can treat them as black boxes which may succeed (or fail with an exception). We just need to call them (preferably concurrently) and wait for them to finish.
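
All snippets below operate on such a blocking call through a trivial wrapper – a hypothetical interface (not part of any of the mentioned APIs), assumed to look like this:

//assumed shape of the blocking legacy call used in all snippets below
interface Job {
    void execute();    //blocks for up to several seconds; may throw a RuntimeException on failure
}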

rx.Observable – approach 1

Having RxJava at our fingertips, it looks like the obvious approach. First, the job execution can be wrapped with an Observable:

private Observable<Void> rxJobExecute(Job job) {
    return Observable.fromCallable(() -> { 
        job.execute();
        return null;
    });
}

Unfortunately (in our case) Observable expects to have some element(s) returned. We need to use Void and an awkward return null (instead of just the method reference job::execute).

Next, we can use the subscribeOn() method to execute our job on another thread (and not block the main/current thread – we don’t want to execute our jobs sequentially). Schedulers.io() provides a scheduler backed by a pool of threads intended for IO-bound work.

Observable<Void> run1 = rxJobExecute(job1).subscribeOn(Schedulers.io());
Observable<Void> run2 = rxJobExecute(job2).subscribeOn(Schedulers.io());

Finally, we need to wait for all of them to finish (for all Observables to complete). To do that the zip operator can be adapted. It combines items emitted by the zipped Observables by their sequence number. In our case we are interested only in the first pseudo-item from each job Observable (we emit only null to satisfy the API) and we wait for them in a blocking way. The zip function inside the zip operator needs to return something, hence we need to repeat the workaround with null.

Observable.zip(run1, run2, (r1, r2) -> null)
         .toBlocking()
         .single();

It is pretty visible that Observable was designed to work with streams of values, and some additional work is required to adjust it to side-effect only (returning nothing) operations. The situation gets even worse when we need to combine (e.g. merge) our side-effect only operation with others returning some value(s) – an uglier cast is required. See the real use case from the RxNetty API.

public void execute() {
    Observable<Void> run1 = rxJobExecute(job1).subscribeOn(Schedulers.io());
    Observable<Void> run2 = rxJobExecute(job2).subscribeOn(Schedulers.io());

    Observable.zip(run1, run2, (r1, r2) -> null)
        .toBlocking()
        .single();
}

private Observable<Void> rxJobExecute(Job job) {
    return Observable.fromCallable(() -> { 
        job.execute();
        return null;
    });
}

rx.Observable – approach 2

Another approach could be used instead. Rather than generating an artificial item, an empty Observable can be created with our task executed as its onCompleted action. This forces us to switch from the zip operator to merge. As a result we need to provide an onNext action (which is never executed for an empty Observable), which confirms us in the conviction that we are trying to hack the system.

public void execute() {
    Observable<Object> run1 = rxJobExecute(job1).subscribeOn(Schedulers.io());
    Observable<Object> run2 = rxJobExecute(job2).subscribeOn(Schedulers.io());

    Observable.merge(run1, run2)
            .toBlocking()
            .subscribe(next -> {});
}

private Observable<Object> rxJobExecute(Job job) {
    return Observable.empty()
            .doOnCompleted(job::execute);
}

rx.Completable

Better support for an Observable that doesn’t return any value has been added in RxJava 1.1.1. Completable can be considered a stripped-down version of Observable which can either finish successfully (an onCompleted event is emitted) or fail (onError). The easiest way to create a Completable instance is the fromAction method, which takes an Action0 that doesn’t return any value (like Runnable).

Completable completable1 = Completable.fromAction(job1::execute)
        .subscribeOn(Schedulers.io());
Completable completable2 = Completable.fromAction(job2::execute)
        .subscribeOn(Schedulers.io());

Next, we can use the merge() method, which returns a Completable instance that subscribes to all the given Completables at once and completes when all of them complete (or one of them fails). As we used the subscribeOn method with an external scheduler, all jobs are executed in parallel (on different threads).

Completable.merge(completable1, completable2)
        .await();

The await() method blocks until all jobs finish (in case of an error an exception is rethrown). Pure and simple.

public void execute() {
    Completable completable1 = Completable.fromAction(job1::execute)
            .subscribeOn(Schedulers.io());
    Completable completable2 = Completable.fromAction(job2::execute)
            .subscribeOn(Schedulers.io());

    Completable.merge(completable1, completable2)
        .await();
}

java.util.concurrent.CompletableFuture

Some of you could ask: why not just use CompletableFuture? That would be a good question. Whereas the pure Future introduced in Java 5 would require additional work on our side, ListenableFuture (from Guava) and CompletableFuture (from Java 8) make it quite trivial.

First, we need to run/schedule the execution of our jobs. Next, using the CompletableFuture.allOf() method we can create a new CompletableFuture which is completed the moment all jobs complete (haven’t we seen that concept before?). The get() method just blocks waiting for that.

public void execute() {
    try {
        CompletableFuture<Void> run1 = CompletableFuture.runAsync(job1::execute);
        CompletableFuture<Void> run2 = CompletableFuture.runAsync(job2::execute);

        CompletableFuture.allOf(run1, run2)
            .get();

    } catch (InterruptedException | ExecutionException e) {
        throw new RuntimeException("Jobs execution failed", e);
    }
}

We need to do something with the checked exceptions (very often we don’t want to pollute our API with them), but in general it looks sensible. However, it is worth remembering that CompletableFuture falls short when more complex chained processing is required. In addition, having RxJava already in use in our project, it is often better to stick to the same (or a similar) API instead of introducing something completely new.
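
One more detail worth knowing: by default runAsync() executes tasks on the common ForkJoinPool. For blocking, IO-bound jobs an explicit executor can be passed instead – a minimal sketch:

ExecutorService ioPool = Executors.newCachedThreadPool();    //a pool of our choice, suited for blocking IO
CompletableFuture<Void> run1 = CompletableFuture.runAsync(job1::execute, ioPool);
CompletableFuture<Void> run2 = CompletableFuture.runAsync(job2::execute, ioPool);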

Summary

Thanks to rx.Completable the execution of side-effect only (returning nothing) tasks with RxJava is much more comfortable. In a codebase already using RxJava it could be preferred over CompletableFuture even for simple cases. Moreover, Completable provides a lot of more advanced operators and techniques, and in addition it can be easily mixed with Observable, which makes it even more powerful.

To read more about Completable you may want to see the release notes. For those who want a deeper insight into the topic there is a very detailed introduction to the Completable API on the Advanced RxJava blog (parts 1 and 2).

The source code for code examples is available from GitHub.

Btw, if you are interested in RxJava in general, I can with a clear conscience recommend the book which is currently being written by Tomasz Nurkiewicz and Ben Christensen – Reactive Programming with RxJava.


How to simplify Mockito usage by removing static imports in Java 8 based projects.

Rationale

The Mockito API is based on static methods aggregated (mostly) in the (BDD)Mockito class, followed by extremely fluent, chained method calls. Mock creation, stubbing and call verification can be initiated with the mock/spy/given/then/verify static methods:

@Test
public void shouldVerifyMethodExecution() {
    //given
    TacticalStation tsSpy = BDDMockito.spy(TacticalStation.class);
    BDDMockito.willDoNothing().given(tsSpy).fireTorpedo(2);
    //when
    tsSpy.fireTorpedo(2);
    tsSpy.fireTorpedo(2);
    //then
    BDDMockito.then(tsSpy).should(BDDMockito.times(2)).fireTorpedo(2);
}

Quite verbose. Starting with Java 5 one can make the code more compact, but at the cost of additional static imports:

import static org.mockito.BDDMockito.then;
import static org.mockito.BDDMockito.willDoNothing;
import static org.mockito.BDDMockito.spy;
import static org.mockito.BDDMockito.times;
(...)

@Test
public void shouldVerifyMethodExecution() {
    //given
    TacticalStation tsSpy = spy(TacticalStation.class);
    willDoNothing().given(tsSpy).fireTorpedo(2);
    //when
    tsSpy.fireTorpedo(2);
    tsSpy.fireTorpedo(2);
    //then
    then(tsSpy).should(times(2)).fireTorpedo(2);
}

Imports can be hidden in an IDE and usually do not disturb much. Nevertheless, to be able to write just a method name (e.g. mock(TacticalStation.class)) without a class it is required to press ALT-ENTER (in IntelliJ IDEA) to add a static import on the first usage of a given method in a test class. The situation is even worse in Eclipse, where it is required to first add BDDMockito to “Favorites” in “Content Assist” to make it suggested by the IDE. Eclipse guys could say “you have to do it just once”, but as I experienced during my testing/TDD trainings it makes the Mockito learning (usage) curve a little bit steeper.

Of course there are some tricks, like using star imports by default for Mockito classes, to reduce the number of required keystrokes, but if you use Java 8 in your project (hopefully the majority of you) there is a simpler way to cope with it.

Static imports free approach

Mockito-Java8 2.0.0 (and its counterpart for Mockito 1.10.x – version 1.0.0) introduces a set of interfaces which provide all the methods from the Mockito API. By “implementing” them in a test class all those methods become directly accessible in the written tests:

//no static imports needed!

public class SpaceShipTest implements WithBDDMockito {

    @Test
    public void shouldVerifyMethodExecution() {
        //given
        TacticalStation tsSpy = spy(TacticalStation.class);
        willDoNothing().given(tsSpy).fireTorpedo(2);
        //when
        tsSpy.fireTorpedo(2);
        tsSpy.fireTorpedo(2);
        //then
        then(tsSpy).should(times(2)).fireTorpedo(2);
    }
}

The code looks exactly like the previous snippet, but there is no need to do any static import (besides a normal import of WithBDDMockito itself).

Under the hood the WithBDDMockito interface implementation is dead simple. All methods are default methods which just delegate to the proper static methods in the BDDMockito class.

default <T> BDDMockito.BDDMyOngoingStubbing<T> given(T methodCall) {
    return BDDMockito.given(methodCall);
}

Flavors of Mockito

Mockito methods are provided by 3 base interfaces, each being an entry point for a given set of methods:

  • WithBDDMockito – stubbing/mocking API in BDD style (provides also the classic API)
  • WithMockito – classic stubbing/mocking API
  • WithAdditionalMatchers – additional Mockito matchers (the basic ones are included in With(BDD)Mockito)

Summary

Java 8 has opened new opportunities to write (test) code in a more compact and readable way. Static-import-free Mockito code can simplify writing tests a little bit, but there are more features already available in Mockito-Java8 and even more to be included in Mockito 3.0 (those for which Mockito internals have to be modified in a non backward compatible way). To get more ideas how code/projects can be refactored to benefit from Java 8 you can see my short presentation “Java 8 brings power to testing!” (slides and video).

Mockito-Java8 2.0.0-beta (for Mockito >=2.0.22-beta) and 1.0.0-beta (for Mockito 1.10.x and earlier betas of Mockito 2) are available through Maven Central. The versions should be pretty stable, but I would like to get wider feedback about this new feature, so they are labeled as beta. More details can be found on the project webpage.

Acknowledgements. The idea was originally proposed by David Gageot (the guy behind Infinitest) in one of his blog posts.


Learn how Spring 4.2 simplifies handling transaction bound events (e.g. sent just after a database commit).

Introduction

As you probably already know (e.g. from my previous blog post), it is no longer needed to create a separate class implementing ApplicationListener with an onApplicationEvent method to be able to react to application events (both from the Spring Framework itself and our own domain events). Starting with Spring 4.2, support for annotation-driven event listeners was added. It is enough to use @EventListener at the method level, which under the hood will automatically register a corresponding ApplicationListener:

    @EventListener
    public void blogAdded(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.blogAdded(blogAddedEvent);
    }

Please notice that using domain objects in events has notable drawbacks and is not the best idea in many situations. Pseudo-domain objects were used in the code examples to not introduce unnecessary complexity.

Transaction bound events

Simple and compact. For “standard” events everything looks great, but in some cases it is needed to perform some operations (usually asynchronous ones) just after the transaction has been committed (or rolled back). What then? Can the new mechanism be used as well?

Business requirements

First, a small digression – business requirements. Let’s imagine a super fancy blog aggregation service. An event is generated every time a new blog is added. Subscribed users can receive an SMS or a push notification. The event could be published after the blog object is scheduled to be saved in a database. However, in a case of a commit/flush failure (a database constraint violation, an issue with the ID generator, etc.) the whole DB transaction would be rolled back. A lot of angry users with broken notifications will appear at the door…

Technical issues

In the modern approach to transaction management, transactions are configured declaratively (e.g. with the @Transactional annotation) and a commit is triggered at the end of the transactional scope (e.g. at the end of a method). In general this is very convenient and much less error prone (than the programmatic approach). On the other hand, commit (or rollback) is done automatically outside our code and we are not able to react in the “classical way” (i.e. publish an event in the next line after transaction.commit() is called).

Old school implementation

One of the possible solutions for Spring (and a very elegant one) was presented by the indispensable Tomek Nurkiewicz. It uses TransactionSynchronizationManager to register a transaction synchronization for the current thread. For example:

    @EventListener
    public void blogAddedTransactionalOldSchool(BlogAddedEvent blogAddedEvent) {
        //Note: *Old school* transaction handling before Spring 4.2 – broken in a non-transactional context

        TransactionSynchronizationManager.registerSynchronization(
                new TransactionSynchronizationAdapter() {
                    @Override
                    public void afterCommit() {
                        internalSendBlogAddedNotification(blogAddedEvent);
                    }
                });
    }

The passed code is executed in the proper place in the Spring transaction workflow (in that case “just” after the commit).

To provide support for execution in a non-transactional context (e.g. in integration tests which don’t care about transactions) it can be extended to the following form, so as not to fail with the java.lang.IllegalStateException: Transaction synchronization is not active exception:

    @EventListener
    public void blogAddedTransactionalOldSchool(final BlogAddedEvent blogAddedEvent) {
        //Note: *Old school* transaction handling before Spring 4.2

        //"if" to not fail with "java.lang.IllegalStateException: Transaction synchronization is not active"
        if (TransactionSynchronizationManager.isActualTransactionActive()) {

            TransactionSynchronizationManager.registerSynchronization(
                    new TransactionSynchronizationAdapter() {
                        @Override
                        public void afterCommit() {
                            internalSendBlogAddedNotification(blogAddedEvent);
                        }
                    });
        } else {
            log.warn("No active transaction found. Sending notification immediately.");
            externalNotificationSender.newBlogTransactionalOldSchool(blogAddedEvent);
        }
    }

With that change, in a case of no active transaction the provided code is executed immediately. Works fine so far, but let’s try to achieve the same thing with annotation-driven event listeners in Spring 4.2.

Spring 4.2+ implementation

In addition to @EventListener, Spring 4.2 also provides one more annotation: @TransactionalEventListener.

    @TransactionalEventListener
    public void blogAddedTransactional(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.newBlogTransactional(blogAddedEvent);
    }

The execution can be bound to the standard transaction phases: before/after commit, after rollback or after completion (both commit and rollback) – see the sketch below. By default it processes an event only if it was published within the boundaries of a transaction. Otherwise the event is discarded.
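
For instance, a listener bound to the after-rollback phase could look like this (a sketch – the method and the compensating notification call are hypothetical):

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void blogAddedRolledBack(BlogAddedEvent blogAddedEvent) {
        //hypothetical compensating action, executed only after a rollback
        externalNotificationSender.notifyBlogAddFailed(blogAddedEvent);
    }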

To support execution in a non-transactional context the fallbackExecution flag can be used. If set to “true”, the event is processed immediately if there is no transaction running.

    @TransactionalEventListener(fallbackExecution = true)
    public void blogAddedTransactional(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.newBlogTransactional(blogAddedEvent);
    }

Summary

Annotation-driven event listeners introduced in Spring 4.2 continue the trend of reducing boilerplate code in Spring (Boot) based applications. There is no need to manually create ApplicationListener implementations, no need to use TransactionSynchronizationManager directly – just one annotation with the proper configuration. The other side of the coin is that it is a little bit harder to find all the event listeners, especially if there are dozens of them in our monolithic application (though they can be easily grouped). Of course, the new approach is only an option which could be useful in a given use case or not. Nevertheless, another piece of Spring (Boot) magic has flooded into our systems. But maybe resistance is futile?

Please note that Spring Framework 4.2 is a default dependency of Spring Boot 1.3 (at the time of writing 1.3.0.M5 is available). Alternatively, it is possible to manually upgrade the Spring Framework version in Gradle/Maven for Spring Boot 1.2.5 – it should work in most cases. Code examples are available from GitHub.

Btw, writing examples for this blog post gave me the first real occasion to use the new test transaction management system introduced in Spring 4.1 (in the past I had only mentioned it during my Spring training sessions). Probably I will write more about it soon.

Learn how to reduce boilerplate code in event handling with annotation-driven event listeners in Spring 4.2+.

Introduction

Exchanging events within an application has become an indispensable part of many applications and thankfully Spring provides a complete infrastructure for transient events (*). The recent refactoring of transaction bound events gave me an excuse to check in practice the new annotation-driven event listeners introduced in Spring 4.2. Let’s see what can be gained.

(*) – for persistent events in a Spring-based application, Duramen could be a solution worth looking at


The old way

To get a notification about an event (both a Spring event and a custom domain event) a component implementing ApplicationListener with an onApplicationEvent method has to be created.

@Component
class OldWayBlogModifiedEventListener implements
                        ApplicationListener<OldWayBlogModifiedEvent> {

    (...)

    @Override
    public void onApplicationEvent(OldWayBlogModifiedEvent event) {
        externalNotificationSender.oldWayBlogModified(event);
    }
}

It works fine, but for every event a new class has to be created, which generates boilerplate code.

In addition, our event has to extend the ApplicationEvent class – the base class for all application events in Spring.

class OldWayBlogModifiedEvent extends ApplicationEvent {

    public OldWayBlogModifiedEvent(Blog blog) {
        super(blog);
    }

    public Blog getBlog() {
        return (Blog)getSource();
    }
}

Please notice that using domain objects in events has notable drawbacks and is not the best idea in many situations. Pseudo-domain objects were used in the code examples to not introduce unnecessary complexity.

Btw, ExternalNotificationSender in this example is an instance of a class which sends external notifications to registered users (e.g. via email, SMS or Slack).

Annotation-driven event listener

Starting with Spring 4.2, to be notified about a new event it is enough to annotate a method in any Spring component with the @EventListener annotation.

    @EventListener
    public void blogModified(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModified(blogModifiedEvent);
    }

Under the hood Spring will create an ApplicationListener instance for the event with the type taken from the method argument. There is no limitation on the number of annotated methods in one class – all related event handlers can be grouped into one class, as in the sketch below.
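
For instance, a hypothetical component grouping two related handlers could look like this (the class and method names are made up):

@Component
class BlogEventsListener {

    @EventListener
    public void blogAdded(BlogAddedEvent blogAddedEvent) { (...) }

    @EventListener
    public void blogModified(BlogModifiedEvent blogModifiedEvent) { (...) }
}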

Conditional event handling

To make @EventListener even more interesting, there is the ability to handle only those events of a given type which fulfill given condition(s) written in SpEL. Let’s assume the following event class:

public class BlogModifiedEvent {

    private final Blog blog;
    private final boolean importantChange;

    public BlogModifiedEvent(Blog blog) {
        this(blog, false);
    }

    public BlogModifiedEvent(Blog blog, boolean importantChange) {
        this.blog = blog;
        this.importantChange = importantChange;
    }

    public Blog getBlog() {
        return blog;
    }

    public boolean isImportantChange() {
        return importantChange;
    }
}

Please note that in a real application there would probably be a hierarchy of Blog related events.
Please also note that in Groovy that class would be much simpler.

To handle the event only for important changes the condition parameter can be used:

    @EventListener(condition = "#blogModifiedEvent.importantChange")
    public void blogModifiedSpEL(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModifiedSpEL(blogModifiedEvent);
    }

Relaxed event type hierarchy

Historically, ApplicationEventPublisher only had the ability to publish objects inheriting from ApplicationEvent. Starting with Spring 4.2 the interface has been extended to support any object type. In that case the object is wrapped in a PayloadApplicationEvent and sent through.

//base class with a Blog field – no need to extend ApplicationEvent
class BaseBlogEvent {}

class BlogModifiedEvent extends BaseBlogEvent {}

//somewhere in the code
ApplicationEventPublisher publisher = (...);    //injected

publisher.publishEvent(new BlogModifiedEvent(blog)); //just a plain instance of the event

That change makes publishing events even easier. On the other hand, without internal conscientiousness (e.g. a marker interface for all our domain events) it can make event tracking even harder, especially in larger applications.

Publishing events in response to other events

Another nice thing about @EventListener is the fact that for a non-void return type Spring will automatically publish the returned event.

    @EventListener
    public BlogModifiedResponseEvent blogModifiedWithResponse(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModifiedWithResponse(blogModifiedEvent);
        return new BlogModifiedResponseEvent(
            blogModifiedEvent.getBlog(), BlogModifiedResponseEvent.Status.OK);
    }

Asynchronous event processing

Update. As rightly suggested by Radek Grębski, it is also worth mentioning that @EventListener can be easily combined with the @Async annotation to provide asynchronous event processing. The code in a particular event listener then blocks neither the main code execution nor processing by other listeners.

    @Async    //Remember to enable asynchronous method execution 
              //in your application with @EnableAsync
    @EventListener
    public void blogAddedAsync(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.blogAdded(blogAddedEvent);
    }

To make it work it is only required to enable asynchronous method execution in general in your Spring context/application with @EnableAsync, as in the sketch below.
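
A minimal configuration sketch (the class name is arbitrary) could look like this:

@Configuration
@EnableAsync
class AsyncEventsConfiguration {
}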

Summary

Annotation-driven event listeners introduced in Spring 4.2 continue the trend of reducing boilerplate code in Spring (Boot) based applications. The new approach looks interesting, especially for small applications with a small number of events where the maintenance overhead is lower. In the world of ubiquitous Spring (Boot) magic it is worth remembering that with great power comes great responsibility.

In the next blog post I will write how the new mechanism can also be used to simplify the handling of transaction bound events.

Please note that Spring Framework 4.2 is a default dependency of Spring Boot 1.3 (at the time of writing 1.3.0.M5 is available). Alternatively, it is possible to manually upgrade the Spring Framework version in Gradle/Maven for Spring Boot 1.2.5 – it should work in most cases.

How to communicate with Maven Central/Nexus without using a password kept locally unencrypted (especially with Gradle, but not limited to it).

Rationale

Unfortunately, Gradle (and many other build tools) does not provide any mechanism to locally keep passwords encrypted (or at least encoded). Without that, even such a simple activity as showing your global Gradle configuration (~/.gradle/gradle.properties) to a colleague is uncomfortable, not to mention more serious risks associated with storing passwords on a disk in plain-text form (see, among others, the Sony Pictures Entertainment hack). It is Gradle, so with all the Groovy magic under the hood it would be possible to implement an integration with a system keyring on Linux to fetch a password, but I’m not aware of any existing plugin/mechanism to do that and I would rather prefer not to write one.

Another thing is that nowadays, in the world of ubiquitous automation and cloud environments, it is common to use API keys which allow performing only the given operation(s). Their loss doesn’t give an attacker the possibility to hijack the whole account (e.g. a token can be used neither to log into an administration panel nor to change an email or password, which require additional authentication).

This is very important if you need to keep valid credentials on a CI server to make automatic or even continuous releases. Thanks to my gradle-nexus-staging-plugin there is no need to perform any manual steps in the Nexus GUI to promote artifacts to Maven Central, so this was the next issue I wanted to deal with for my private and our FOSS projects at Codearte.

Nexus API key generation

An Internet search for “maven central api key” wasn’t helpful, so I started digging into the Nexus REST API documentation and found that in fact there is a (not widely known) way to generate and use an API key (aka an auth token).

1. Log into Nexus hosting Sonatype OSS Repository Hosting (or your own instance of Nexus).
2. Click on your login name in the right-upper corner and choose “Profile”.
3. From the drop-down list with “Summary” text select “User Token”.
4. Click “Access User Token”.

Generating API key in Nexus

5. Enter your password.
6. Copy and paste your API username and API key (into your ~/.gradle/gradle.properties file – see the sketch below – or a CI server configuration).
7. Work as usual, in a slightly safer way.
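
For example, assuming a build script that reads nexusUsername/nexusPassword properties (the names depend on your build), the token pair would land in ~/.gradle/gradle.properties like this:

#~/.gradle/gradle.properties – hypothetical property names
nexusUsername=PasteApiUsernameHere
nexusPassword=PasteApiKeyHere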

Summary

It is good that deploying artifacts to Maven Central/Nexus with API keys is possible and very easy to set up. Someone could argue that the permission policy is coarse-grained (nothing, or all operations except password/email change), but in my opinion it seems to be enough for this class of artifact repository systems. In addition, such an approach should also work with Sbt, Ivy, Leiningen and everything else that tries to upload artifacts into Maven Central (including Maven itself, removing the limitations of the master password encryption with settings-security.xml). Hopefully, this post will make it more widely known.

Mockito-Java8 is a set of Mockito add-ons leveraging Java 8 and lambda expressions to make mocking with Mockito even more compact.

At the beginning of 2015 I gave my flash talk Java 8 brings power to testing! at GeeCON TDD 2015 and DevConf.cz 2015. In my speech, using 4 examples, I showed how Java 8 – namely lambda expressions – can simplify testing tools and testing in general. One of those tools was Mockito. To not let my PoC code die on the slides and to make it easily available for others, I have released a small project with two Java 8 add-ons for Mockito, useful in specific cases.


Quick introduction

As a prerequisite, let’s assume we have the following data structure:

@Immutable
class ShipSearchCriteria {
    int minimumRange;
    int numberOfPhasers;
}

The library provides two add-ons:

Lambda matcher – allows defining matcher logic within a lambda expression.

given(ts.findNumberOfShipsInRangeByCriteria(
    argLambda(sc -> sc.getMinimumRange() > 1000))).willReturn(4);

Argument captor – Java 8 edition – allows using ArgumentCaptor in one line (here with AssertJ):

verify(ts).findNumberOfShipsInRangeByCriteria(
    assertArg(sc -> assertThat(sc.getMinimumRange()).isLessThan(2000)));

Lambda matcher

With the help of the static method argLambda a lambda matcher instance is created, which can be used to define matcher logic within a lambda expression (here for stubbing). It can be especially useful when working with complex classes passed as an argument.

@Test
public void shouldAllowToUseLambdaInStubbing() {
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(
        argLambda(sc -> sc.getMinimumRange() > 1000))).willReturn(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

For comparison, the same logic implemented with a custom Answer in Java 7:

@Test
public void stubbingWithCustomAnswerShouldBeLonger() {  //old way
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(any())).willAnswer(new Answer<Integer>() {
        @Override
        public Integer answer(InvocationOnMock invocation) throws Throwable {
            Object[] args = invocation.getArguments();
            ShipSearchCriteria criteria = (ShipSearchCriteria) args[0];
            if (criteria.getMinimumRange() > 1000) {
                return 4;
            } else {
                return 0;
            }
        }
    });
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

Even Java 8 and a less readable construction don’t help much:

@Test
public void stubbingWithCustomAnswerShouldBeLongerEvenAsLambda() {  //old way
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(any())).willAnswer(invocation -> {
        ShipSearchCriteria criteria = (ShipSearchCriteria) invocation.getArguments()[0];
        return criteria.getMinimumRange() > 1000 ? 4 : 0;
    });
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

Argument Captor – Java 8 edition

The static method assertArg creates an argument matcher whose implementation internally uses ArgumentMatcher with an assertion provided in a lambda expression. The example below uses AssertJ to provide a meaningful error message, but any assertions (like the native ones from TestNG or JUnit) could be used (if really needed). This allows having an inlined ArgumentCaptor:

@Test
public void shouldAllowToUseAssertionInLambda() {
    //when
    ts.findNumberOfShipsInRangeByCriteria(searchCriteria);
    //then
    verify(ts).findNumberOfShipsInRangeByCriteria(
        assertArg(sc -> assertThat(sc.getMinimumRange()).isLessThan(2000)));
}

In comparison to 3 lines in the classic way:

@Test
public void shouldAllowToUseArgumentCaptorInClassicWay() {  //old way
    //when
    ts.findNumberOfShipsInRangeByCriteria(searchCriteria);
    //then
    ArgumentCaptor<ShipSearchCriteria> captor = 
        ArgumentCaptor.forClass(ShipSearchCriteria.class);
    verify(ts).findNumberOfShipsInRangeByCriteria(captor.capture());
    assertThat(captor.getValue().getMinimumRange()).isLessThan(2000);
}

Summary

The presented add-ons were created as a PoC for my conference speech, but they should be fully functional and potentially useful in specific cases. To use them in your project it is enough to use Mockito 1.10.x or 2.0.x-beta, add mockito-java8 as a dependency and of course compile your project with Java 8+.

More details are available on the project webpage: https://github.com/szpak/mockito-java8

Update 20151217. You may also be interested in my new blog post about using Mockito without static imports.

A quick tutorial on how to promote/release artifacts in a Gradle project to Maven Central without clicking in the Nexus GUI, with Gradle Nexus Staging Plugin.

Introduction

Maven Central (aka The Central Repository) is (probably) the world’s largest set of open source artifacts used by Java and JVM-based projects. It was founded by the creators of Apache Maven and has been serving artifacts since 2002. Nowadays there are some alternatives (mentioned below), but for many users Maven Central is still the primary source of project dependencies (and sometimes the only one whitelisted in corporations).


Problem

To perform a release to The Central Repository, Maven users can use the Nexus Staging Maven Plugin – a free, but not fully open source plugin. With Gradle, however, it was required to log into the Nexus GUI and manually invoke two actions (close repository and release/promote repository). Quite boring and, in addition, highly problematic with the Continuous Delivery approach. Luckily, Nexus exposes a REST API which, with some work, allows doing the same. Gradle Nexus Staging Plugin arose to do that job.

Quick start

Important. Please pay attention that the prerequisite is to have an active and configured account in Sonatype OSSRH (OSS Repository Hosting) as well as a Gradle project configured to publish release artifacts to a staging repository. If you don’t have that already, please follow the separate section for Gradle in the official guide.

To set up automatic release/promotion in your project, add gradle-nexus-staging-plugin to the buildscript dependencies in the build.gradle file of your root project:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "io.codearte.gradle.nexus:gradle-nexus-staging-plugin:0.5.1"
    }
}

Apply the plugin:

apply plugin: 'io.codearte.nexus-staging'

Configure it:

nexusStaging {
    packageGroup = "org.mycompany.myproject"
    stagingProfileId = "yourStagingProfileId" //when not defined, it will be fetched from the server using "packageGroup"
}

After a successful archives upload (with the maven, maven-publish or nexus plugin) to Sonatype OSSRH, call:

./gradlew closeRepository promoteRepository

to close the staging repository and promote/release it and its artifacts. If synchronization with Maven Central is enabled, the artifacts should automatically appear in Maven Central within several minutes.

Details

The plugin provides two main tasks:

  • closeRepository – closes an open repository with uploaded artifacts. There should be just one open repository available in the staging profile (possible old/broken repositories can be dropped with the Nexus GUI)
  • promoteRepository – promotes/releases a closed repository (required to get artifacts into Maven Central)

And one additional:

  • getStagingProfile – gets and displays the staging profile id for a given package group. This is a diagnostic task to get the value and put it into the configuration closure as stagingProfileId. To see the result it is required to call Gradle with the --info switch.

It has to be mentioned that a call to the Nexus REST API returns immediately, while the closing operation itself takes a moment. Therefore, to make it possible to call closeRepository promoteRepository together, there is a built-in (tunable) retry mechanism – see the sketch below.
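
If the defaults don’t fit (e.g. with a slow Nexus instance), the retrying can be tuned in the same configuration closure – a sketch assuming the numberOfRetries and delayBetweenRetriesInMillis parameters described on the project webpage:

nexusStaging {
    packageGroup = "org.mycompany.myproject"
    numberOfRetries = 20                    //assumed parameter name – see the project webpage
    delayBetweenRetriesInMillis = 3000      //assumed parameter name – see the project webpage
}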

The plugin is “upload mechanism agnostic” and can be used together with the maven, maven-publish or nexus plugins.

For more details and configuration parameters see the project webpage or the working example in the plugin’s own release configuration.

Alternatives to Maven Central?

There is a much younger, but promising alternative – Bintray, which also allows serving artifacts. It is free for open source projects and I have personally used it for some other projects and even created an automatic release mechanism for Bintray, Travis and Gradle. It works OK, but to get artifacts also into Maven Central it is required to store a private key used for signing on their servers and, in addition, to provide Nexus credentials. This increases the risk of having them stolen, and at Codearte we prefer to use a private Jenkins instance to perform the release directly to Maven Central.

Summary

With Gradle Nexus Staging Plugin the whole release process to Maven Central can be performed with Gradle from a command line and, with some additional work, completely automatically from a CI server. No more buttons to push in the Nexus GUI. In addition to Sonatype OSSRH, the plugin can also be used with private Nexus instances with staging repositories enabled.

Btw, there are possibly many things that could be enhanced in the plugin. If you need something or have found a bug, feel free to use the issue tracker to report it.

Thanks to Kuba Kubryński for motivation and help with analyzing the not very well documented Nexus REST API.