Posts Tagged ‘tutorial’

Learn how to enable named method parameter support in a Gradle project

Introduction

Java 8 introduced (among other things) the ability to get method parameter names at runtime. For backward compatibility (mostly with existing bytecode manipulation tools) the feature has to be enabled explicitly. In hello-world tutorials the operation is as simple as adding a -parameters flag to a javac call. However, it turns out to be more enigmatic to configure in a Gradle project (especially for Gradle newcomers).

PensiveDuke

Gradle

To enable support for named method arguments, the flag has to be set for every Java compilation task in a project. That can easily be achieved with:

tasks.withType(JavaCompile) {
    options.compilerArgs << '-parameters'
}

For a multi-project build the construct has to be applied to all the subprojects, e.g.:

subprojects {
    (...)
    tasks.withType(JavaCompile) {
        options.compilerArgs << '-parameters'
    }
}
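Once compiled with the flag, parameter names can be read back with plain reflection – a minimal sketch to verify the configuration (class and method names are illustrative):

import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParameterNamesCheck {

    public void sendNotification(String recipient, int retryCount) { }

    public static void main(String[] args) throws Exception {
        Method method = ParameterNamesCheck.class.getMethod("sendNotification", String.class, int.class);
        for (Parameter parameter : method.getParameters()) {
            //isNamePresent() returns true only for classes compiled with -parameters
            System.out.println(parameter.isNamePresent() + ": " + parameter.getName());
        }
    }
}

Without the flag the names degenerate to arg0 and arg1.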

Rationale

For me, as a Gradle veteran and Gradle plugin author, the withType construct and passing various compilation or runtime JVM options are bread and butter. However, I have needed to explain it more than once to workmates less experienced with Groovy, so for further reference (aka “Have you read my blog? ;-)”) I have written it down. In their defense I have to admit that, at the time of writing this blog post, the top Google results point to Gradle forum threads that also contain “not so good” advice. Hopefully my article will be positioned higher :-).

Tested with Gradle 2.14 and OpenJDK 1.8.0_92.

Image credits: https://duke.kenai.com/

A simple way to display and analyze buildscript dependencies (e.g. plugins) in Gradle

Introduction

This is the third part of my Gradle tricks mini-series related to visualizing and analyzing dependencies. In the first post I presented a way to display dependencies for all subprojects in a multi-project build. In the second I showed a technique useful for tracking down unexpected transitive dependencies in a project. This time something less often needed, yet crucial in specific cases – buildscript dependencies.

Dependencies

Real use case

Buildscript dependencies contain the plugins used in our project and their dependencies. That might seem uninteresting unless you are a Gradle plugin developer, but it is not entirely true. Once, as a consultant, I was investigating a NoSuchMethodException issue in a large project with a custom build framework built on top of Gradle. The problem occurred only when one innocent, very popular open source plugin was added to the project. The same plugin worked fine in many other projects in that company. In the end I was able to figure out that one of the dependencies used in buildSrc custom scripts overrode the same dependency in an older version pulled in by the plugin. As a result the plugin failed at runtime with the mentioned NoSuchMethodException. To figure that out I had to use my custom script, as buildscript/classpath dependencies are completely ignored when ./gradlew dependencies or ./gradlew dependencyInsight is used.

Solution

The idea to write this post arose at the beginning of 2015. I wanted to present a small Gradle task of mine that, using some internal Gradle mechanisms, retrieves buildscript dependencies and displays them on the console. The post was postponed, and almost a year later I was positively surprised when reading the release notes for Gradle 2.10: a new buildEnvironment task had been added.

$ ./gradlew buildEnvironment
:buildEnvironment

------------------------------------------------------------
Root project
------------------------------------------------------------

classpath
+--- com.bmuschko:gradle-nexus-plugin:2.3
\--- io.codearte.gradle.nexus:gradle-nexus-staging-plugin:0.5.3
     \--- org.codehaus.groovy.modules.http-builder:http-builder:0.7.1
          +--- org.apache.httpcomponents:httpclient:4.2.1
          |    +--- org.apache.httpcomponents:httpcore:4.2.1
          |    +--- commons-logging:commons-logging:1.1.1
          |    \--- commons-codec:commons-codec:1.6
          +--- net.sf.json-lib:json-lib:2.3
          |    +--- commons-beanutils:commons-beanutils:1.8.0
          |    |    \--- commons-logging:commons-logging:1.1.1
          |    +--- commons-collections:commons-collections:3.2.1
          |    +--- commons-lang:commons-lang:2.4
          |    +--- commons-logging:commons-logging:1.1.1
          |    \--- net.sf.ezmorph:ezmorph:1.0.6
          |         \--- commons-lang:commons-lang:2.3 -> 2.4
          +--- net.sourceforge.nekohtml:nekohtml:1.9.16
          \--- xml-resolver:xml-resolver:1.2

(*) - dependencies omitted (listed previously)

BUILD SUCCESSFUL

Total time: 1.38 secs

Two plugins and a pack of transitive dependencies pulled in by gradle-nexus-staging-plugin thanks to http-builder (maybe it would be good to replace it with Jodd?).

Summary

It is worth being able to distinguish standard project dependencies from buildscript dependencies. The new buildEnvironment task helps deal with the latter, which in turn becomes essential when strange runtime errors start to show up.

Tested with Gradle 2.10.

Picture credits: Zeroturnaround.

How parallel execution of blocking “side-effect only” (aka void) tasks became easier with the Completable abstraction introduced in RxJava 1.1.1.

As you may have noticed reading my blog, I primarily specialize in Software Craftsmanship and automated code testing. However, in addition I am an enthusiast of Continuous Delivery and broadly defined concurrency – ranging from pure threads and semaphores in C to higher-level solutions such as ReactiveX and the actor model. This time, a use case for a very convenient (in specific cases) feature introduced in the brand new RxJava 1.1.1 – rx.Completable. Like many of my blog entries, this one is also a reflection of an actual situation I encountered while working on real tasks and use cases.

RxJava logo

A task to do

Imagine a system with quite complex processing of asynchronous events coming from different sources: filtering, merging, transforming, grouping, enriching and more. RxJava fits here very well, especially if we want to be reactive. Let’s assume we have already implemented it (it looks and works nicely) and only one thing is left. Before we start processing, we are required to tell 3 external systems that we are ready to receive messages – 3 synchronous calls to legacy systems (via RMI, JMX or SOAP). Each of them can last a number of seconds and we need to wait for all of them before we start. Luckily, they are already implemented and we treat them as black boxes which may succeed (or fail with an exception). We just need to call them (preferably concurrently) and wait until they finish.

rx.Observable – approach 1

Having RxJava at our fingertips, it looks like the obvious approach. First, job execution can be wrapped in an Observable:

private Observable<Void> rxJobExecute(Job job) {
    return Observable.fromCallable(() -> { 
        job.execute();
        return null;
    });
}

Unfortunately (in our case) Observable expects some element(s) to be returned. We need to use Void and an awkward return null (instead of just the method reference job::execute).

Next, we can use the subscribeOn() method to execute our job on another thread (and not block the main/current thread – we don’t want to execute our jobs sequentially). Schedulers.io() provides a scheduler with a pool of threads intended for IO-bound work.

Observable<Void> run1 = rxJobExecute(job1).subscribeOn(Schedulers.io());
Observable<Void> run2 = rxJobExecute(job2).subscribeOn(Schedulers.io());

Finally, we need to wait for all of them to finish (for all Observables to complete). To do that, the zip operator can be adapted. It combines items emitted by the zipped Observables by their sequence number. In our case we are interested only in the first pseudo-item from each job Observable (we emit only null to satisfy the API) and wait for them in a blocking way. The zip function in a zip operator needs to return something, hence we need to repeat the workaround with null.

Observable.zip(run1, run2, (r1, r2) -> null)
         .toBlocking()
         .single();

It is quite visible that Observable was designed to work with streams of values, and some additional work is required to adjust it to side-effect only (returning nothing) operations. The situation gets even worse when we need to combine (e.g. merge) our side-effect only operation with others returning some value(s) – an uglier cast is required. See a real use case from the RxNetty API. The complete example:

public void execute() {
    Observable<Void> run1 = rxJobExecute(job1).subscribeOn(Schedulers.io());
    Observable<Void> run2 = rxJobExecute(job2).subscribeOn(Schedulers.io());

    Observable.zip(run1, run2, (r1, r2) -> null)
        .toBlocking()
        .single();
}

private Observable<Void> rxJobExecute(Job job) {
    return Observable.fromCallable(() -> { 
        job.execute();
        return null;
    });
}

rx.Observable – approach 2

Another approach could be used. Instead of generating an artificial item, an empty Observable can be created with our task executed as an onCompleted action. This forces us to switch from the zip operator to merge. As a result we need to provide an onNext action (which is never executed for an empty Observable), which confirms the conviction that we are trying to hack the system.

public void execute() {
    Observable<Object> run1 = rxJobExecute(job1).subscribeOn(Schedulers.io());
    Observable<Object> run2 = rxJobExecute(job2).subscribeOn(Schedulers.io());

    Observable.merge(run1, run2)
            .toBlocking()
            .subscribe(next -> {});
}

private Observable<Object> rxJobExecute(Job job) {
    return Observable.empty()
            .doOnCompleted(job::execute);
}

rx.Completable

Better support for an Observable that doesn’t return any value was added in RxJava 1.1.1. Completable can be considered a stripped-down version of Observable which can either finish successfully (an onCompleted event is emitted) or fail (onError). The easiest way to create a Completable instance is the fromAction method, which takes an Action0 that doesn’t return any value (like Runnable).

Completable completable1 = Completable.fromAction(job1::execute)
        .subscribeOn(Schedulers.io());
Completable completable2 = Completable.fromAction(job2::execute)
        .subscribeOn(Schedulers.io());

Next, we can use the merge() method, which returns a Completable instance that subscribes to all provided Completables at once and completes when all of them complete (or when one of them fails). As we used the subscribeOn method with an external scheduler, all jobs are executed in parallel (in different threads).

Completable.merge(completable1, completable2)
        .await();

The await() method blocks until all jobs finish (in case of an error an exception will be rethrown). Plain and simple. The whole example:

public void execute() {
    Completable completable1 = Completable.fromAction(job1::execute)
            .subscribeOn(Schedulers.io());
    Completable completable2 = Completable.fromAction(job2::execute)
            .subscribeOn(Schedulers.io());

    Completable.merge(completable1, completable2)
        .await();
}

java.util.concurrent.CompletableFuture

Some of you could ask: why not just use CompletableFuture? It would be a good question. Whereas the plain Future introduced in Java 5 would require additional work on our side, ListenableFuture (from Guava) and CompletableFuture (from Java 8) make it quite trivial.

First, we need to run/schedule our jobs’ execution. Next, using the CompletableFuture.allOf() method we can create a new CompletableFuture which completes the moment all jobs complete (haven’t we seen that concept before?). The get() method just blocks waiting for that.

public void execute() {
    try {
        CompletableFuture<Void> run1 = CompletableFuture.runAsync(job1::execute);
        CompletableFuture<Void> run2 = CompletableFuture.runAsync(job2::execute);

        CompletableFuture.allOf(run1, run2)
            .get();

    } catch (InterruptedException | ExecutionException e) {
        throw new RuntimeException("Jobs execution failed", e);
    }
}

We need to do something with the checked exceptions (very often we don’t want to pollute our API with them), but in general it looks sensible. However, it is worth remembering that CompletableFuture falls short when more complex chained processing is required. In addition, when RxJava is already in use in a project, it is often better to stick to the same (or a similar) API instead of introducing something completely new.
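As a side note, join() is a sibling of get() that wraps failures in an unchecked CompletionException, which slims the example down – a sketch under the same assumptions as above:

public void execute() {
    CompletableFuture<Void> run1 = CompletableFuture.runAsync(job1::execute);
    CompletableFuture<Void> run2 = CompletableFuture.runAsync(job2::execute);

    //join() throws an unchecked CompletionException on failure - no try/catch needed
    CompletableFuture.allOf(run1, run2).join();
}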

Summary

Thanks to rx.Completable, executing side-effect only (returning nothing) tasks with RxJava is much more comfortable. In a codebase already using RxJava it could be preferred over CompletableFuture even for simple cases. Moreover, Completable provides a lot of more advanced operators and techniques, and in addition can be easily mixed with Observable, which makes it even more powerful.

To read more about Completable you may want to see the release notes. For those who want deeper insight into the topic, there is a very detailed introduction to the Completable API on the Advanced RxJava blog (parts 1 and 2).

The source code of the examples is available on GitHub.

Btw, if you are interested in RxJava in general, I can with a clear conscience recommend a book currently being written by Tomasz Nurkiewicz and Ben Christensen – Reactive Programming with RxJava.

RxJava book

Learn how Spring 4.2 simplifies handling transaction-bound events (e.g. ones sent just after a database commit).

Introduction

As you probably already know (e.g. from my previous blog post), it is no longer necessary to create a separate class implementing ApplicationListener with an onApplicationEvent method to react to application events (both from the Spring Framework itself and our own domain events). Starting with Spring 4.2, support for annotation-driven event listeners was added. It is enough to use @EventListener at the method level, which under the hood will automatically register a corresponding ApplicationListener:

    @EventListener
    public void blogAdded(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.blogAdded(blogAddedEvent);
    }

Please note that using domain objects in events has notable drawbacks and is not the best idea in many situations. Pseudo-domain objects were used in the code examples to avoid introducing unnecessary complexity.

Transaction bound events

Simple and compact. For “standard” events everything looks great, but in some cases it is necessary to perform some operations (usually asynchronous ones) just after the transaction has been committed (or rolled back). What then? Can the new mechanism be used as well?

Business requirements

First, a small digression – business requirements. Let’s imagine a super fancy blog aggregation service. An event is generated every time a new blog is added. Subscribed users can receive an SMS or a push notification. The event could be published after the blog object is scheduled to be saved in the database. However, in case of a commit/flush failure (a database constraint violation, an issue with the ID generator, etc.) the whole DB transaction would be rolled back. A lot of angry users with broken notifications would appear at the door…

Technical issues

In the modern approach to transaction management, transactions are configured declaratively (e.g. with the @Transactional annotation) and a commit is triggered at the end of the transactional scope (e.g. at the end of a method). In general this is very convenient and much less error-prone than the programmatic approach. On the other hand, the commit (or rollback) is done automatically outside our code, so we are not able to react in the “classical way” (i.e. publishing an event on the line right after transaction.commit() is called).

Old school implementation

One possible solution for Spring (and a very elegant one) was presented by the indispensable Tomek Nurkiewicz. It uses TransactionSynchronizationManager to register a transaction synchronization for the current thread. For example:

    @EventListener
    public void blogAddedTransactionalOldSchool(BlogAddedEvent blogAddedEvent) {
        //Note: *Old school* transaction handling before Spring 4.2 - broken in a non-transactional context

        TransactionSynchronizationManager.registerSynchronization(
                new TransactionSynchronizationAdapter() {
                    @Override
                    public void afterCommit() {
                        internalSendBlogAddedNotification(blogAddedEvent);
                    }
                });
    }

The passed code is executed at the proper point of the Spring transaction workflow (in this case “just” after the commit).

To support execution in a non-transactional context (e.g. in integration tests which may not care about transactions), it can be extended to the following form so as not to fail with a java.lang.IllegalStateException: Transaction synchronization is not active exception:

    @EventListener
    public void blogAddedTransactionalOldSchool(final BlogAddedEvent blogAddedEvent) {
        //Note: *Old school* transaction handling before Spring 4.2

        //"if" to not fail with "java.lang.IllegalStateException: Transaction synchronization is not active"
        if (TransactionSynchronizationManager.isActualTransactionActive()) {

            TransactionSynchronizationManager.registerSynchronization(
                    new TransactionSynchronizationAdapter() {
                        @Override
                        public void afterCommit() {
                            internalSendBlogAddedNotification(blogAddedEvent);
                        }
                    });
        } else {
            log.warn("No active transaction found. Sending notification immediately.");
            externalNotificationSender.newBlogTransactionalOldSchool(blogAddedEvent);
        }
    }

With that change, when there is no active transaction, the provided code is executed immediately. It works fine so far, but let’s try to achieve the same thing with annotation-driven event listeners in Spring 4.2.

Spring 4.2+ implementation

In addition to @EventListener, Spring 4.2 provides one more annotation: @TransactionalEventListener.

    @TransactionalEventListener
    public void blogAddedTransactional(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.newBlogTransactional(blogAddedEvent);
    }

The execution can be bound to the standard transaction phases: before/after commit, after rollback or after completion (either commit or rollback). By default an event is processed only if it was published within the boundaries of a transaction; otherwise it is discarded.
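For instance, execution can be bound to a rollback with the phase parameter (a sketch; the method name and body are illustrative):

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void blogAddedRolledBack(BlogAddedEvent blogAddedEvent) {
        //compensating action executed only after a rollback
        log.warn("Blog addition rolled back: {}", blogAddedEvent);
    }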

To support execution in a non-transactional context the fallbackExecution flag can be used. If set to true, the event is processed immediately when there is no transaction running.

    @TransactionalEventListener(fallbackExecution = true)
    public void blogAddedTransactional(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.newBlogTransactional(blogAddedEvent);
    }

Summary

The annotation-driven event listeners introduced in Spring 4.2 continue the trend of reducing boilerplate code in Spring (Boot) based applications. There is no need to manually create ApplicationListener implementations, and no need to use TransactionSynchronizationManager directly – just one annotation with the proper configuration. The other side of the coin is that it is a little bit harder to find all the event listeners, especially if there are dozens of them in a monolithic application (though they can easily be grouped). Of course, the new approach is only an option which may or may not be useful in a given use case. Nevertheless, another piece of Spring (Boot) magic floods into our systems. But maybe resistance is futile?

Please note that Spring Framework 4.2 is a default dependency of Spring Boot 1.3 (at the time of writing 1.3.0.M5 is available). Alternatively, it is possible to manually upgrade the Spring Framework version in Gradle/Maven for Spring Boot 1.2.5 – it should work in most cases. Code examples are available on GitHub.

Btw, writing examples for this blog post gave me my first real opportunity to use the new test transaction management introduced in Spring 4.1 (in the past I had only mentioned it during my Spring training sessions). I will probably write more about it soon.

Learn how to reduce boilerplate code in event handling with annotation-driven event listeners in Spring 4.2+.

Introduction

Exchanging events has become an indispensable part of many applications, and thankfully Spring provides a complete infrastructure for transient events (*). The recent refactoring of transaction-bound events gave me an excuse to check out in practice the new annotation-driven event listeners introduced in Spring 4.2. Let’s see what can be gained.

(*) – for persistent events in a Spring-based application, Duramen could be a solution worth seeing

Spring logo

The old way

To get notified about an event (both a Spring event and a custom domain event), a component implementing ApplicationListener with an onApplicationEvent method has to be created.

@Component
class OldWayBlogModifiedEventListener implements
                        ApplicationListener<OldWayBlogModifiedEvent> {

    (...)

    @Override
    public void onApplicationEvent(OldWayBlogModifiedEvent event) {
        externalNotificationSender.oldWayBlogModified(event);
    }
}

It works fine, but for every event a new class has to be created, which generates boilerplate code.

In addition, our event has to extend the ApplicationEvent class – the base class for all application events in Spring.

class OldWayBlogModifiedEvent extends ApplicationEvent {

    public OldWayBlogModifiedEvent(Blog blog) {
        super(blog);
    }

    public Blog getBlog() {
        return (Blog)getSource();
    }
}

Please note that using domain objects in events has notable drawbacks and is not the best idea in many situations. Pseudo-domain objects were used in the code examples to avoid introducing unnecessary complexity.

Btw, ExternalNotificationSender in this example is an instance of a class which sends external notifications to registered users (e.g. via email, SMS or Slack).

Annotation-driven event listener

Starting with Spring 4.2, to be notified about a new event it is enough to annotate a method in any Spring component with the @EventListener annotation.

    @EventListener
    public void blogModified(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModified(blogModifiedEvent);
    }

Under the hood, Spring will create an ApplicationListener instance for the event type taken from the method argument. There is no limit on the number of annotated methods in one class – all related event handlers can be grouped into one class, as in the sketch below.
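For example, a single component could handle several related events (a sketch based on the pseudo-domain events used in this post):

@Component
class BlogEventListeners {

    (...)

    @EventListener
    public void blogAdded(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.blogAdded(blogAddedEvent);
    }

    @EventListener
    public void blogModified(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModified(blogModifiedEvent);
    }
}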

Conditional event handling

To make @EventListener even more interesting, there is the ability to handle only those events of a given type which fulfill given condition(s) written in SpEL. Let’s assume the following event class:

public class BlogModifiedEvent {

    private final Blog blog;
    private final boolean importantChange;

    public BlogModifiedEvent(Blog blog) {
        this(blog, false);
    }

    public BlogModifiedEvent(Blog blog, boolean importantChange) {
        this.blog = blog;
        this.importantChange = importantChange;
    }

    public Blog getBlog() {
        return blog;
    }

    public boolean isImportantChange() {
        return importantChange;
    }
}

Please note that in a real application there would probably be a hierarchy of Blog-related events.
Please also note that in Groovy that class would be much simpler.

To handle the event only for important changes, the condition parameter can be used:

    @EventListener(condition = "#blogModifiedEvent.importantChange")
    public void blogModifiedSpEL(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModifiedSpEL(blogModifiedEvent);
    }

Relaxed event type hierarchy

Historically, ApplicationEventPublisher was only able to publish objects which inherited from ApplicationEvent. Starting with Spring 4.2 the interface has been extended to support any object type. In that case the object is wrapped in a PayloadApplicationEvent and sent through.

//base class with a Blog field – no need to extend ApplicationEvent
class BaseBlogEvent {}

class BlogModifiedEvent extends BaseBlogEvent {}

//somewhere in the code
ApplicationEventPublisher publisher = (...);    //injected

publisher.publishEvent(new BlogModifiedEvent(blog)); //just plain instance of the event

That change makes publishing events even easier. On the other hand, without some internal discipline (e.g. a marker interface for all our domain events, as sketched below) it can make event tracking even harder, especially in larger applications.
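Such a convention could be as simple as (the interface name is illustrative):

//marker interface making all domain events in the codebase easy to find
interface DomainEvent {}

class BlogModifiedEvent extends BaseBlogEvent implements DomainEvent {
    (...)
}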

Publishing events in response to another event

Another nice thing about @EventListener is the fact that for a non-void return type Spring will automatically publish the returned event.

    @EventListener
    public BlogModifiedResponseEvent blogModifiedWithResponse(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModifiedWithResponse(blogModifiedEvent);
        return new BlogModifiedResponseEvent(
            blogModifiedEvent.getBlog(), BlogModifiedResponseEvent.Status.OK);
    }

Asynchronous event processing

Updated. As rightly suggested by Radek Grębski, it is also worth mentioning that @EventListener can easily be combined with the @Async annotation to provide asynchronous event processing. The code in a particular event listener then blocks neither the main code execution nor processing by other listeners.

    @Async    //Remember to enable asynchronous method execution 
              //in your application with @EnableAsync
    @EventListener
    public void blogAddedAsync(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.blogAdded(blogAddedEvent);
    }

To make it work it is only required to enable asynchronous method execution in your Spring context/application with @EnableAsync, as sketched below.
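A minimal configuration sketch (the class name is illustrative):

@Configuration
@EnableAsync
class AsyncConfiguration {
    //an optional Executor bean could be defined here to control the thread pool
}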

Summary

The annotation-driven event listeners introduced in Spring 4.2 continue the trend of reducing boilerplate code in Spring (Boot) based applications. The new approach looks especially interesting for small applications with a small number of events, where the maintenance overhead is lower. In the world of ubiquitous Spring (Boot) magic it is all the more worth remembering that with great power comes great responsibility.

In the next blog post I will describe how the new mechanism can also be used to simplify the handling of transaction-bound events.

Please note that Spring Framework 4.2 is a default dependency of Spring Boot 1.3 (at the time of writing 1.3.0.M5 is available). Alternatively, it is possible to manually upgrade the Spring Framework version in Gradle/Maven for Spring Boot 1.2.5 – it should work in most cases.

How to communicate with Maven Central/Nexus without a password kept locally in plain text (especially with Gradle, but not limited to it).

Rationale

Unfortunately, Gradle (like many other build tools) does not provide any mechanism to keep passwords encrypted locally (or at least encoded). Without that, even such a simple activity as showing your global Gradle configuration (~/.gradle/gradle.properties) to a colleague is uncomfortable, not to mention the more serious risks associated with storing passwords on disk in plain-text form (see, among others, the Sony Pictures Entertainment hack). It is Gradle, so with all the Groovy magic under the hood it would be possible to implement an integration with a system keyring on Linux to fetch a password, but I’m not aware of any existing plugin/mechanism to do that and I would rather prefer not to write one myself.

Another point is that nowadays, in the world of ubiquitous automation and cloud environments, it is common to use API keys which allow only given operation(s) to be performed. Losing such a key does not give an attacker the ability to hijack the whole account (e.g. the token can be used neither to log into an administration panel nor to change the email or password, which requires additional authentication).

This is very important if you need to keep valid credentials on a CI server to make automatic or even continuous releases. Thanks to my gradle-nexus-staging-plugin there is no need to perform any manual steps in the Nexus GUI to promote artifacts to Maven Central, so this was the next issue I wanted to deal with for my private projects and our FOSS projects at Codearte.

Nexus API key generation

An Internet search for “maven central api key” wasn’t helpful, so I started digging into the Nexus REST API documentation and found that there is in fact a (not widely known) way to generate and use an API key (aka an auth token).

1. Log into Nexus hosting Sonatype OSS Repository Hosting (or your own Nexus instance).
2. Click on your login name in the upper-right corner and choose “Profile”.
3. From the drop-down list with the “Summary” text select “User Token”.
4. Click “Access User Token”.

Generating an API key in Nexus

5. Enter your password.
6. Copy and paste your API username and API key (into your ~/.gradle/gradle.properties file or a CI server configuration).
7. Work as usual, in a slightly safer way.
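The copied token pair lands in ~/.gradle/gradle.properties like regular credentials – a sketch with made-up values (the property names depend on the plugins used; e.g. gradle-nexus-plugin reads nexusUsername/nexusPassword):

nexusUsername=PzaAp21k
nexusPassword=hV4bbMgpTl9xWkeu2OTKM0ZQCRC1cnfqSu7d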

Summary

It is good that deploying artifacts to Maven Central/Nexus with API keys is possible, and it is very easy to set up. Someone could argue that the permission policy is coarse-grained (nothing, or all operations except password/email change), but in my opinion it seems to be enough for this class of artifact repository system. In addition, such an approach should also work with Sbt, Ivy, Leiningen and everything else that uploads artifacts to Maven Central (including Maven itself, working around the limitations of master password encryption with settings-security.xml). Hopefully this post will make the feature more widely known.

Mockito-Java8 is a set of Mockito add-ons leveraging Java 8 and lambda expressions to make mocking with Mockito even more compact.

At the beginning of 2015 I gave my flash talk Java 8 brings power to testing! at GeeCON TDD 2015 and DevConf.cz 2015. In my speech, using 4 examples, I showed how Java 8 – namely lambda expressions – can simplify testing tools and testing in general. One of those tools was Mockito. To not let my PoC code die on the slides and to make it easily available to others, I have released a small project with two Java 8 add-ons for Mockito, useful in specific cases.

Mockito logo

Quick introduction

As a prerequisite, let’s assume we have the following data structure:

@Immutable
class ShipSearchCriteria {
    int minimumRange;
    int numberOfPhasers;
}

The library provides two add-ons:

Lambda matcher – allows matcher logic to be defined within a lambda expression.

given(ts.findNumberOfShipsInRangeByCriteria(
    argLambda(sc -> sc.getMinimumRange() > 1000))).willReturn(4);

Argument Captor – Java 8 edition – allows ArgumentCaptor to be used in one line (here together with AssertJ):

verify(ts).findNumberOfShipsInRangeByCriteria(
    assertArg(sc -> assertThat(sc.getMinimumRange()).isLessThan(2000)));

Lambda matcher

With the help of the static method argLambda, a lambda matcher instance is created which can be used to define matcher logic within a lambda expression (here for stubbing). It can be especially useful when working with complex classes passed as an argument.

@Test
public void shouldAllowToUseLambdaInStubbing() {
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(
        argLambda(sc -> sc.getMinimumRange() > 1000))).willReturn(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

For comparison, the same logic implemented with a custom Answer in Java 7:

@Test
public void stubbingWithCustomAsnwerShouldBeLonger() {  //old way
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(any())).willAnswer(new Answer<Integer>() {
        @Override
        public Integer answer(InvocationOnMock invocation) throws Throwable {
            Object[] args = invocation.getArguments();
            ShipSearchCriteria criteria = (ShipSearchCriteria) args[0];
            if (criteria.getMinimumRange() > 1000) {
                return 4;
            } else {
                return 0;
            }
        }
    });
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

Even Java 8 and its less readable constructs don’t help too much:

@Test
public void stubbingWithCustomAsnwerShouldBeLongerEvenAsLambda() {  //old way
    //given
    given(ts.findNumberOfShipsInRangeByCriteria(any())).willAnswer(invocation -> {
        ShipSearchCriteria criteria = (ShipSearchCriteria) invocation.getArguments()[0];
        return criteria.getMinimumRange() > 1000 ? 4 : 0;
    });
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(1500, 2))).isEqualTo(4);
    //expect
    assertThat(ts.findNumberOfShipsInRangeByCriteria(
        new ShipSearchCriteria(700, 2))).isEqualTo(0);
}

Argument Captor – Java 8 edition

The static method assertArg creates an argument matcher whose implementation internally uses ArgumentMatcher with an assertion provided in a lambda expression. The example below uses AssertJ to provide a meaningful error message, but any assertions (like the native ones from TestNG or JUnit) could be used (if really needed). This allows an inlined ArgumentCaptor:

@Test
public void shouldAllowToUseAssertionInLambda() {
    //when
    ts.findNumberOfShipsInRangeByCriteria(searchCriteria);
    //then
    verify(ts).findNumberOfShipsInRangeByCriteria(
        assertArg(sc -> assertThat(sc.getMinimumRange()).isLessThan(2000)));
}

Compared to 3 lines in the classic way:

@Test
public void shouldAllowToUseArgumentCaptorInClassicWay() {  //old way
    //when
    ts.findNumberOfShipsInRangeByCriteria(searchCriteria);
    //then
    ArgumentCaptor<ShipSearchCriteria> captor = 
        ArgumentCaptor.forClass(ShipSearchCriteria.class);
    verify(ts).findNumberOfShipsInRangeByCriteria(captor.capture());
    assertThat(captor.getValue().getMinimumRange()).isLessThan(2000);
}

Summary

The presented add-ons were created as a PoC for my conference speech, but they should be fully functional and potentially useful in specific cases. To use them in your project it is enough to use Mockito 1.10.x or 2.0.x-beta, add mockito-java8 as a dependency (see the sketch below) and, of course, compile your project with Java 8+.
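In Gradle that could boil down to one extra line (the coordinates follow the project page; the version is illustrative):

testCompile 'info.solidsoft.mockito:mockito-java8:0.3.0'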

More details are available on the project webpage: https://github.com/szpak/mockito-java8

Update 20151217. You may also be interested in my new blog post about using Mockito without static imports.

A quick tutorial on how to configure Spock 1.0 with Groovy 2.4 using Maven and Gradle.

Spock 1.0 has finally been released. I have already written two blog posts about its new features and enhancements. One of the recent changes was a separation into artifacts designed for specific Groovy versions: 2.0, 2.2, 2.3 and 2.4, to minimize the chance of coming across a binary incompatibility at runtime (in the past there were only versions for Groovy 1.8 and 2.0+). That was done suddenly and, based on the messages on the mailing list, it confused some people. After twice being asked to help properly configure a project, I decided to write a short post presenting how to configure Spock 1.0 with Groovy 2.4 in Maven and Gradle. It is also a great opportunity to compare how much work is required in those two very popular build systems.

Maven

Maven does not natively support other JVM languages (like Groovy or Scala). To use one in a Maven project a third-party plugin is required. For Groovy the best option seems to be GMavenPlus (a rewrite of the no longer maintained GMaven plugin). An alternative is a plugin which allows the Groovy-Eclipse compiler to be used with Maven, but it does not use the official groovyc and in the past there were problems with keeping up to date with new Groovy releases/features.

A sample configuration of the GMavenPlus plugin could look like this:

<plugin>
    <groupId>org.codehaus.gmavenplus</groupId>
    <artifactId>gmavenplus-plugin</artifactId>
    <version>1.4</version>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>testCompile</goal>
            </goals>
        </execution>
    </executions>
</plugin>

As we want to write tests in Spock, which recommends naming files with a Spec suffix (from specification), it is additionally required to tell Surefire to look for tests in those files as well:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>${surefire.version}</version>
    <configuration>
        <includes>
            <include>**/*Spec.java</include> <!-- Yes, .java extension -->
            <include>**/*Test.java</include> <!-- Just in case of having also "normal" JUnit tests -->
        </includes>
    </configuration>
</plugin>

Please notice that **/*Spec.java (not **/*Spec.groovy) has to be included to make it work.

The dependencies also have to be added:

    <dependencies>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-all</artifactId>
            <version>2.4.1</version>
        </dependency>
        <dependency>
            <groupId>org.spockframework</groupId>
            <artifactId>spock-core</artifactId>
            <version>1.0-groovy-2.4</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

It is very important to take the proper version of Spock: for Groovy 2.4, version 1.0-groovy-2.4 is required; for Groovy 2.3, version 1.0-groovy-2.3. In case of a mistake Spock protests with a clear error message:

Could not instantiate global transform class
org.spockframework.compiler.SpockTransform specified at
jar:file:/home/foo/.../spock-core-1.0-groovy-2.3.jar!/META-INF/services/org.codehaus.groovy.transform.ASTTransformation
because of exception
org.spockframework.util.IncompatibleGroovyVersionException:
The Spock compiler plugin cannot execute because Spock 1.0.0-groovy-2.3 is
not compatible with Groovy 2.4.0. For more information, see
http://versioninfo.spockframework.org

Together with the other mandatory pom.xml elements, the file grew to over 50 lines of XML. Quite a lot just for Groovy and Spock. Let’s see how complicated it is in Gradle.

Gradle

Gradle has built-in support for Groovy and Scala. Without further ado, the Groovy plugin just has to be applied.

apply plugin: 'groovy'

Next, the dependencies have to be added:

compile 'org.codehaus.groovy:groovy-all:2.4.1'
testCompile 'org.spockframework:spock-core:1.0-groovy-2.4'

and the information where Gradle should look for them:

repositories {
    mavenCentral()
}

Together with defining the package group and version it took 15 lines of code in the Groovy-based DSL (the complete build script is shown below).

Btw, also in the case of Gradle it is very important to match the Spock and Groovy versions, e.g. Groovy 2.4.1 and Spock 1.0-groovy-2.4.
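For reference, a complete minimal build.gradle along those lines might look like this (group and version values are illustrative):

group = 'com.example.spock'
version = '1.0-SNAPSHOT'

apply plugin: 'groovy'

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.4.1'
    testCompile 'org.spockframework:spock-core:1.0-groovy-2.4'
}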

Summary

Thanks to the built-in support for Groovy and the compact DSL, Gradle is the preferred solution to start playing with Spock (and Groovy in general). Nevertheless, if you prefer Apache Maven, with the help of GMavenPlus (and XML) it is also possible to build a project tested with Spock.

A minimal working project with Spock 1.0 and Groovy 2.4 configured in Maven and Gradle can be cloned from my GitHub.


Bonus: Graphical comparison of Spock and Groovy configuration in Maven and Gradle

Note. I haven’t been using Maven in my projects for over 2 years (I prefer Gradle), so if there is a better/easier way to configure Groovy and Spock with Maven, just let me know in the comments.

Note 2. The configuration examples assume that Groovy is used only for tests and the production code is written in Java. It is possible to mix Groovy and Java code together, but then the configuration is a little more complicated.

Note 3. If you are interested in learning useful tips and tricks about using the Spock Framework to test your Java and Groovy code, I will be giving a presentation about that at the 4Developers conference, April 20th, 2015.

Update 20150310. Redesigned summary.

Leonard Nimoy 1931-2015

In my previous post I presented how to display and filter dependencies in a multi-module Gradle build. This time I will show how to quickly discover how a given library became a dependency of our project.

Problem

A real-life use case: a multi-project Gradle build. At runtime SLF4J reports a problem with two discovered implementations: slf4j-logback and slf4j-simple. Logback is used in the project, but where did slf4j-simple come from? Of course it is not listed in our build.gradle, but it is packaged into the WAR file and causes a conflict.

Long and bumpy way

With the knowledge from the previous post, one possible solution is to write an allDeps task, dump the dependencies to a file and find the rogue dependency.

Tracking down a dependency – not clearly visible

It is not clearly visible at first sight, even for that small project with only 4 direct dependencies. But luckily there is a better way.

Quick solution

In addition to the dependencies task (implemented in DependencyReportTask), Gradle has one more similar task – dependencyInsight (implemented in DependencyInsightReportTask). It allows the dependency tree to be limited to a selected dependency (also a transitive one).

The command takes 2 mandatory parameters:

  • --configuration – the Gradle configuration to search in, e.g. runtime or testRuntime (in the dependencies task a configuration was optional to specify)
  • --dependency – the dependency to look for, e.g. org.slf4j:slf4j-simple

In the mentioned project it could be:

gradle sub2:dependencyInsight --configuration testRuntime --dependency slf4j-simple

A clear explanation why a dependency was included

The result is self-explanatory. slf4j-simple is (unnecessarily) included by the Moco library (moco-core). With that knowledge it is easy to exclude the transitive dependency:

compile('com.github.dreamhead:moco-core:0.9.2') {
    exclude group: 'org.slf4j', module: 'slf4j-simple'
}

or when appropriate, do it globally for all configurations:

configurations {
    all*.exclude group: 'org.slf4j', module: 'slf4j-simple'
}

Further tuning

The nice thing is the ability to loosely define the expected dependency. All of the following are valid:

  • --dependency org.slf4j:slf4j-simple:1.7.7 – only that exact version of the dependency
  • --dependency org.slf4j:slf4j-simple – all versions of that dependency
  • --dependency org.slf4j – all dependencies in the given group
  • --dependency slf4j-simple – all dependencies with the given name regardless of the group (useful when a package was relocated)

Multiple subprojects

dependencyInsight, the same as the dependencies task, does not work with multiple subprojects. Fortunately, it is simple to create such a task ourselves:

subprojects {
    task allDepInsight(type: DependencyInsightReportTask) << {}
}

It accepts all the parameters supported by the base dependencyInsight task, and:

gradle allDepInsight --configuration testRuntime --dependency org.slf4j:slf4j-simple

would do its job in all subprojects.

Summary

The dependencyInsight task can be very useful when tracking down suspicious and unexpected transitive dependencies in a project. The ability to make it multi-project friendly makes it even more powerful.

Tested with Gradle 2.2.

Gradle logo

gradle dependencies allows the dependencies in your project to be displayed as a pretty ASCII tree. Unfortunately, it does not work well for submodules in a multi-project build. I was not able to find a satisfactory solution on the web, so after working out my own, this blog post arose.

Multiple subprojects

For multi-project builds, gradle dependencies called in the root directory unexpectedly displays no dependencies:

No dependencies displayed for the root project


In fact, Gradle is right. The root project usually has no code and no compile or runtime dependencies. Only when plugins are used can there be some additional configurations created by them.

You might think of --recursive or --with-submodules flags, but they do not exist. It is possible to display dependencies for subprojects with “gradle sub1:dependencies” and “gradle sub2:dependencies”, but this is very manual and impractical for more than a few modules. We could write a shell script, but considering the (potential) recursive folder traversal there are some catches. Gradle claims to be very extensible with its Groovy-based DSL, so why not take advantage of that? Iterating over subprojects can give some results, but after testing a few concepts I ended up with the plain and simple:

subprojects {
    task allDeps(type: DependencyReportTask) {}
}

When gradle allDeps is called, it executes the dependencies task on all subprojects.

Dependencies for all subprojects


Remove duplication

All those dependencies belong to us, but some parts of the tree look similar (and duplication is a bad thing). In particular, the configurations default, compile and runtime, and the second group testCompile and testRuntime, in most cases contain (almost) the same set of dependencies. To make the output shorter we could limit it to runtime (or, in the case of test dependencies, testRuntime). The dependencies task provides a convenient parameter --configuration, and to focus on test dependencies “gradle allDeps --configuration testRuntime” can be used.

Dependencies in one configuration for all subprojects


Summary

Where could it be useful? Recently I was pair programming with my old-new colleague on a new project (with dozens of submodules) where SLF4J, in addition to the expected slf4j-logback provider, also discovered slf4j-simple on the classpath. We wanted to figure out which library depends on it. Logging the dependency tree to a file, with the help of grep, gave us the answer.

As a bonus, during my fights with DependencyReportTask I found an easier way to get to know what requires a given library. I will write about it in my next post.

Tested with Gradle 2.2.

Gradle logo