Archive for the ‘Tricks & Tips’ Category

Tune up your JUnit test class template for Idea with BDD-like syntax, Java 8 and the Mockito-AssertJ duo.

Topics covered in this article may seem trivial. However, from my experience as a trainer I know that (unfortunately) they are not common practice. Therefore, I decided to write this short blog post to propagate them and to be able to refer to it in the future.

given-when-then-template

My favorite testing framework for Java (and Groovy) is Spock. However, its mocks are not suitable for some purposes and I still use Mockito in various places. In addition, I still conduct a lot of my testing training in a JUnit/Mockito/AssertJ variant for teams which already have a test suite in that stack and would like to improve their skills without changing the known technology. Therefore, as an interlude, this blog post is about testing in pure Java style and proposes how to tune up your JUnit setup, assuming that you are already using Mockito and AssertJ (you should give them a try otherwise).

This blog post consists of three parts. Firstly, I propose a BDD-style, section-based test structure to keep your tests more consistent and more readable. Next, I explain how to simplify AssertJ and Mockito constructions with Java 8. Last, but not least, I show how to configure it in IntelliJ IDEA as a default JUnit test (class) template (which isn’t as trivial as it should be).

Part 1. BDD-style sections

Well written unit tests should meet several requirements (but it is a topic for a separate post). One of the useful practices is a clear separation into 3 code blocks with precisely defined responsibility. You can read more on that topic in my previous blog post.

As a recap, here are the core rules in a short form:

  • given – an object under test initialization + stubs/mocks creation, stubbing and injection
  • when – an operation to test in a given test
  • then – received result assertion + mocks verification (if needed)
@Test
public void shouldXXX() {
  //given
  ...
  //when
  ...
  //then
  ...
}

That separation helps to keep tests short and focused on just one responsibility to test (in the end it’s just a unit test).

In Spock those sections are mandatory – without them a test will not even compile. In JUnit they are just comments. However, having them in place encourages people to use them instead of writing one big block of mess (especially useful for newbies in the testing area).

Btw, the mentioned given-when-then convention is based on (is a subset of) the much wider Behavior-Driven Development concept. You may encounter a similar division into 3 code blocks named arrange-act-assert, which is generally equivalent.

Part 2. Java 8 for AssertJ and Mockito

One of the features of Java 8 is the ability to put default methods in an interface. That can be used to simplify calling static methods, which are prevalent in testing frameworks such as AssertJ and Mockito. The idea is simple. A test class willing to use a given framework can implement a dedicated interface to “see” those methods as its own methods in IDE code completion (instead of static methods from an external class, which require prefixing with a class name or a static import). Under the hood those default methods just delegate execution to static methods. You can read more about it in my other blog post.
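
To illustrate the mechanism, here is a simplified, illustrative sketch of such an interface (the real WithAssertions from AssertJ contains dozens of similar methods covering different types):

import org.assertj.core.api.AbstractBooleanAssert;
import org.assertj.core.api.Assertions;

//a simplified, illustrative counterpart of AssertJ's WithAssertions
public interface WithBooleanAssertions {

    //the default method just delegates to the static method from the testing library
    default AbstractBooleanAssert<?> assertThat(boolean actual) {
        return Assertions.assertThat(actual);
    }
}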

AssertJ natively supports those constructions starting with version 3.0.0. Mockito 1.10 and 2.x are Java 6 compatible, therefore it is required to use a third-party project – mockito-java8 (which should be integrated into Mockito 3 – once available).

To benefit from easier method completion in Idea it is enough to implement two interfaces:

import info.solidsoft.mockito.java8.api.WithBDDMockito;
import org.assertj.core.api.WithAssertions;

class SampleTest implements WithAssertions, WithBDDMockito {

}

Part 3. Default template in Idea

I’m a big enthusiast of omnipresent automation. Wouldn’t it be good to have both given-when-then sections and extra interfaces automatically in place in your test classes? Let’s eliminate those boring things from our life.

Test method

Changing a JUnit test method template is easy. One of the possible ways is “CTRL-SHIFT-A -> File Template -> Code” and a modification of JUnit4 Test Method to:

@org.junit.Test
public void should${NAME}() {
  //given
  ${BODY}
  //when
  //then
}

To add a new test in an existing test class just press ALT-INSERT and select (or type) JUnit4 Test Method.

Test class

With the whole test class the situation is a little bit more complicated. Idea provides a way to edit existing templates, however, it is used only if a test is generated with CTRL-SHIFT-T from a production class. That is not very handy with TDD, where a test should be created first. It would be good to have a new position “New JUnit test class” next to “Java class”, displayed when ALT-INSERT is pressed in a package view in a test context. Unfortunately, to do that a new plugin would need to be written (there is a sample implementation for Spock). As a workaround we can define a regular file template which (as a limitation) will be accessible everywhere (e.g. even in a resource directory).

Do “CTRL-SHIFT-A -> File Template -> Files”, press INSERT, name the template “JUnit with AssertJ and Mockito Test”, set the extension to “java” and paste the following template:

package ${PACKAGE_NAME};

import info.solidsoft.mockito.java8.api.WithBDDMockito;
import org.assertj.core.api.WithAssertions;

#parse("File Header.java") 
public class ${NAME} implements WithAssertions, WithBDDMockito {

}

Showcase

We are all set. Let’s check how it can look in practice (click to enlarge the animation).

idea-test-templates-in-action
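
For the record, in text form a freshly generated and filled-in test class could look more or less like this (Invoice, InvoiceRepository and InvoiceService are hypothetical classes used purely for illustration; the sketch assumes that mock(), given(), then() and assertThat() are exposed by the implemented interfaces):

package com.example.invoice;

import info.solidsoft.mockito.java8.api.WithBDDMockito;
import org.assertj.core.api.WithAssertions;
import org.junit.Test;

public class InvoiceServiceTest implements WithAssertions, WithBDDMockito {

    @Test
    public void shouldMarkInvoiceAsPaid() {
        //given
        InvoiceRepository invoiceRepository = mock(InvoiceRepository.class);
        InvoiceService invoiceService = new InvoiceService(invoiceRepository);
        given(invoiceRepository.findById(1L)).willReturn(new Invoice(1L));
        //when
        Invoice paidInvoice = invoiceService.markAsPaid(1L);
        //then
        assertThat(paidInvoice.isPaid()).isTrue();
        then(invoiceRepository).should().save(paidInvoice);
    }
}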

Summary

I hope I convinced you to tune your test template to improve the readability of your tests and to save several keystrokes per test. In that case, please spend 4 minutes right now to configure it in your Idea. Depending on the number of tests written it may start to pay off sooner than you expect :).

Btw, at the beginning of October I will be giving a presentation about new features in Mockito 2 at JDD in Kraków.

JDD logo

Self promotion. Would you like to improve your and your team’s testing skills and knowledge of Spock/JUnit/Mockito/AssertJ quickly and efficiently? I conduct a condensed (unit) testing training which you may find useful.


Stubbing methods returning java.util.Optional with Spock is more tricky than you would probably expect. Get to know how to do it efficiently.

by Infrogmation, Wikimedia Commons, public domain

Introduction

One of the nice features of the mocking framework in Spock is its ability to return sensible default values for unstubbed method calls made on stubs: an empty list for a method returning List, 0 for long, etc. Very handy if you don’t care about the returned value in a particular test, but, for example, would like to prevent a NullPointerException later in the flow. Unfortunately Spock 1.0 and 1.1-rc-2 (still compatible with Java 6) are completely unaware of types added in Java 8 (such as Optional or CompletableFuture). You may say “no problem”, null is acceptable in many cases, but with Optional the situation is even worse.

Issue

Imagine the following code – a method returning Optional and an attempt to use it in a test:

interface Dao<T> {
    Optional<T> getMaybeById(long id)
}

@Ignore("Broken")
def "should not fail on unstubbed call with Optional return type"() {
    given:
        Dao<Order> dao = Stub()
    when:
        dao.getMaybeById(5)
    then:
        noExceptionThrown()
}

You may think – null will be returned on the getMaybeById() call, but it’s not.

Expected no exception to be thrown, but got 'org.spockframework.mock.CannotCreateMockException'

    at spock.lang.Specification.noExceptionThrown(Specification.java:119)
    at info.solidsoft.blog.spock10.other.CustomDefaultResponseSpec.should not fail on unstubbed call with Optional return type(CustomDefaultResponseSpec.groovy:19)
Caused by: org.spockframework.mock.CannotCreateMockException: Cannot create mock for class java.util.Optional because Java mocks cannot mock final classes. If the code under test is written in Groovy, use a Groovy mock.
    at org.spockframework.mock.runtime.JavaMockFactory.createInternal(JavaMockFactory.java:49)
    at org.spockframework.mock.runtime.JavaMockFactory.create(JavaMockFactory.java:40)
(...)

The test fails at runtime as Spock is not able to stub java.util.Optional which is a final class:

CannotCreateMockException: Cannot create mock for class java.util.Optional
    because Java mocks cannot mock final classes.

What can we do?

Two workarounds

The EmptyOrDummyResponse factory class (which tries to be smart) is used by default for stubs when an unstubbed method is called. However, it can be changed on demand during stub creation:

def "should not fail on unstubbed call with Optional return type - workaround 1"() {
    given:
        Dao<Order> dao = Stub([defaultResponse: ZeroOrNullResponse.INSTANCE])
    when:
        dao.getMaybeById(5)
    then:
        noExceptionThrown()
}

This test will pass (getMaybeById() just returned null), but there is an easier way to achieve the same result.

Spock uses EmptyOrDummyResponse only for stubs (created with the Stub() method). For mocks (created with the Mock() method) the ZeroOrNullResponse factory is used (which makes sense, as mocks should focus on interaction verification, not just stubbing). Thanks to that, the smart logic trying to return a sensible default value can be disabled in a much simpler way:

def "should not fail on unstubbed call with Optional return type - workaround 2"() {
    given:
        Dao<Order> dao = Mock()
    when:
        dao.getMaybeById(5)
    then:
        noExceptionThrown()
}

However, this workaround is far from perfect. Firstly, your colleagues may be surprised why a mock is created while only stubbing is performed (by the way, both stubbing and verifying interactions on the same mock is tricky itself in Spock, but this is a topic for another blog post). Secondly, wouldn’t it be nice to have an empty Optional (instead of null) returned by default?

Solution

In addition to the aforementioned way of using predefined factories for default return types, Spock provides an ability to write a custom one. Let’s create an EmptyOrDummyResponse-like factory which is aware of Java 8 types. In fact, the implementation is very straightforward:

class Java8EmptyOrDummyResponse implements IDefaultResponse {

    public static final Java8EmptyOrDummyResponse INSTANCE = new Java8EmptyOrDummyResponse()

    private Java8EmptyOrDummyResponse() {}

    @Override
    public Object respond(IMockInvocation invocation) {
        if (invocation.getMethod().getReturnType() == Optional) {
            return Optional.empty()
        }
        //possibly CompletableFuture.completedFuture(), dates and maybe others

        return EmptyOrDummyResponse.INSTANCE.respond(invocation)
    }
}

Our class implements the IDefaultResponse interface with its single respond() method. Inside, we can apply custom logic for Optional, CompletableFuture and maybe other Java 8 specific types. As a fallback (for “standard” types) we switch to the original EmptyOrDummyResponse. This code works as expected:

@SuppressWarnings("GroovyPointlessBoolean")
def "should return empty Optional for unstubbed calls"() {
    given:
        Dao<Order> dao = Stub([defaultResponse: Java8EmptyOrDummyResponse.INSTANCE])
    when:
        Optional<Order> result = dao.getMaybeById(5)
    then:
        result?.isPresent() == false    //NOT the same as !result?.isPresent()
}

Please pay attention to Groovy truth semantics while making assertions with Optional. !result?.isPresent() would also be satisfied for null returned from a method.

However, maybe it would be good to simplify the Java 8 aware stub creation a little bit? To do that, an extra method can be created:

private <T> T Stub8(Class<T> clazz) {
    return Stub([defaultResponse: Java8EmptyOrDummyResponse.INSTANCE], clazz)
}

@SuppressWarnings("GroovyPointlessBoolean")
def "should return empty Optional for unstubbed calls with Stub8"() {
    given:
        Dao<Order> dao = Stub8(Dao)
    when:
        Optional<Order> result = dao.getMaybeById(5)
    then:
        result?.isPresent() == false    //NOT the same as !result?.isPresent()
}

Unfortunately, in that case the enhanced, more compact stub creation syntax available in Spock 1.1 cannot be used with our Stub8() method, as Spock will not be able to determine its type by looking at the left side of the assignment. In the end, however, it is still much shorter than setting defaultResponse on every stub creation.

Please note that due to Spock limitations that method cannot be put in a trait (or a separate class) and has to be defined in the current test or in a custom base (super) class for all the tests (itself extending spock.lang.Specification), e.g.:

abstract class Java8AwareSpecification extends Specification {
    protected <T> T Stub8(Class<T> clazz) { ... }
}

class MyFancyTest extends Java8AwareSpecification { ... }

Summary

Thanks to exploring some Spock internals related to stub and mock creation, it was possible to enhance the default strategy of smart responses for unstubbed calls to nicely support Java 8 features. This is just one of the topics I covered in my advanced “Interesting nooks and crannies of Spock you (may) have never seen before” presentation given at Gr8Conf Europe 2016. You may want to see it :-).

Btw, the good news is that upcoming Spock 1.1(-rc-3) will contain native support for returning sensible default values for unstubbed Optional method calls.

Self promotion. Would you like to improve your and your team’s testing skills and knowledge of Spock quickly and efficiently? I conduct a condensed (unit) testing training which you may find useful.

Learn how to visualize complex input parameters in parameterized tests in a way that improves the readability of the test report.

Trimmed Hedge

By Tomwsulcer – CC0, Wikipedia

Introduction

Parameterized tests can simplify the way the same functionality is verified with different input parameters. Spock, with its where block, data tables and data pipes, makes them very easy to use in a very readable way. The input parameters are nicely presented in a test execution report (in an IDE, Jenkins or generated HTML). They can be formatted in a desired order and completed with a custom constant message. It usually works flawlessly for simple objects (such as numbers, booleans, enums and strings). However, in situations where complex objects are used (e.g. bigger value objects or custom classes) the whole output can be hijacked by just one very verbose parameter:

Very long full toString

or meaningless default toString() implementation: foo.bar.Unknown@78ac1102:

Meaningless Default toString in tests

Our sample code base

To present the different available approaches I will use a very simplified version of an account & invoice related domain implemented with DDD in one of the projects I worked on.

The main class here is Invoice which represents an invoice :). The object is immutable (here with a Groovy AST transformation, but it could also be achieved with Project Lombok or manually), which means that methods modifying state return a new instance of this class.

@Immutable
class Invoice {

    enum Status {
        ISSUED, PAID, OVERDUE, CANCELLED
    }

    Status status
    BigDecimal issuedAmount
    BigDecimal remainingAmount
    LocalDate issueDate
    //some other fields

    //different production methods

    //production toString() with all useful business fields
}

There is also an Account class with an amountToPay() method returning the amount to pay based on open invoices.

Naive approach (strongly not recommended)

As the first idea one could be tempted to modify the toString() method implementation to, for example, display only 2 of the 10 fields in a class. However, it is a bad idea to change production toString() just for better output in tests. What is more, other tests or error reporting in a production system may prefer to display more information. Luckily, in Spock we have two nice techniques to cope with it.

Technique 1 – an extra formatting method

Test/specification names in Spock can be enhanced with #parameterName (not with the $ character used internally by Groovy, which is not allowed in a method name) placed in a test name or in an @Unroll annotation. In addition, it is possible to use an object property value or call a parameterless method.

class AmountToPayInvoiceAccountSpec extends Specification {

    def "paid and cancelled invoices (#invoice.formatForTest()) should be ignored in current amount to pay"() {
        given:
            Account account = AccountTestFixture.createForInvoice(invoice)
        expect:
            account.amountToPay() == 0.0
        where:
            invoice << [paidInvoice, cancelledInvoice]
    }

    (...)
}

//required modifications in production code
class Invoice {

   (...)

   String formatForTest() {
       return "$issuedAmount: $status"
   }
}

Test specific production method - result

It’s nice to get just two fields from an object, however, in many cases we don’t want to add an artificial formatting method to production code just to be used in tests.

A tip. Don’t forget to enable unrolling of parameterized tests to instruct Spock to create a separate (sub)test for every set of input parameters. It can be done manually by placing the @Unroll annotation on every parameterized method or at the class level. Alternatively the spock-global-unroll extension can be used to turn it on automatically in the whole project.

Technique 2 – an extra input parameter

Luckily, as an alternative, it is possible to define another artificial input parameter directly in a test. It looks like an ordinary variable, but has access to the set of input parameters (for a given iteration) and can operate on them. That extra parameter is treated by Spock equally to the others (although usually there is no need to reference it in the test code – besides the test name).

class AmountToPayInvoiceAccountSpec extends Specification {

    @Shared
    private Invoice first = createOpenForAmount(200)
    @Shared
    private Invoice second = createOpenForAmount(300)

    def "current amount to pay (#expectedToPayAmount) should ignore paid and cancelled invoices (#invoicesDesc)"() {
        given:
            Account account = AccountTestFixture.createForInvoices(invoices)
        expect:
            account.amountToPay() == expectedToPayAmount
        where:
            invoices                   || expectedToPayAmount
            [paid(first), second]      || 300.0
            [first, cancelled(second)] || 200.0
            [first, second]            || 500.0

            invoicesDesc = createInvoicesDesc(invoices)
    }

    (...)
}

Implementation note. Methods createOpenForAmount() as well as paid() and cancelled() are implemented in a test specific InvoiceTestFixture class.

The result looks very nice:
spock-formatting-input-parameters-test-specified-result

Just from the report it is pretty visible that there is a (regression) issue with handling CANCELLED invoices. The assertion error is also helpful:

spock-formatting-input-parameters-test-specified-error-message

It’s worth noticing here that this technique can also be mixed with data pipes (in addition to data tables):

    where:
        invoice << [paidInvoice, cancelledInvoice]
        invoiceDesc = createInvoiceDesc(invoice)

A tip. Pay attention that, in contrast to regular parameters, in Spock the artificial one is created with the = operator, not with <<.

Summary

The aforementioned techniques can be used to improve the readability of your test execution report. It’s useful during development, but, what is even more important, it becomes indispensable if Spock is used for Behavior Driven Development and reports are read by, so called, Business People (i.e. they need to be worded in a specific way).

[OT] The reason to bring up this topic is the fact that recently two colleagues of mine were struggling with that issue in their tests. Unfortunately they overlooked that slide in my advanced Spock presentation at Gr8Conf EU ;). Blessing in disguise, I was in the office to support them immediately. Nevertheless, not so long ago I saw a presentation by Scott Hanselman about productivity. I liked the idea that every good question is worth answering on a blog. Replying privately (especially via email) usually helps only one person. Writing a blog post and, in addition, sending that person a link can help other people struggling with the same issue.

Self promotion. Would you like to improve your and your team’s testing skills and knowledge of Spock quickly and efficiently? I conduct a condensed (unit) testing training which you may find useful.

Get to know how to create mocks and spies in an even more compact way with Spock 1.1.

Introduction

Spock heavily leverages operator overloading (and Groovy magic) to provide very compact and highly readable specifications (tests) which wouldn’t be achievable in Java. This is very clearly visible, among others, in the whole mocking framework. However, while preparing my Spock training I found a place for further improvement in the way mocks and spies are created.

Shorter and shorter pencils

The Groovy way

Probably the most common way to create mocks (and spies) among devoted Groovy & Grails users is:

def dao = Mock(Dao)

The type inference in the IDE works fine (there is type-aware code completion). Nonetheless, this syntax is usually less readable for Java newcomers (people using Spock to test production code written in Java) and in general for people preferring strong typing (including me).

The Java way

The same mock creation in the Java way would look as follows:

Dao dao = Mock(Dao)

The first impression about this code snippet is – very verbose. Well, it is the Java way – why should we expect anything more ;).

The shorter Java way

As I already mentioned Spock leverages Groovy magic and the following construction works perfectly fine:

Dao dao = Mock()

Under the hood Spock uses the type on the left side of the assignment to determine the type for which a mock should be created. Nominally everything is ok. Unfortunately there is one awkward limitation:

spock-1-0-mock-warning

The IDE complains about an unsafe type assignment and, without getting deeper into the logic used by Spock, the warning is justified. Luckily the situation is not hopeless.

The shorter Java way – Spock 1.1

Preparing practical exercises for my Spock training some time ago gave me an excuse to get into the details of the implementation, and after a few minutes I was able to improve the code to make it work cleanly in the IDE (after a few years of living with that limitation!).

Dao dao = Mock()

spock-1-1-mock-no-warning2

No warning in IDE anymore.

Summary

Multiple times in my career I have experienced the well known truth that preparing a presentation is very educational also for the presenter. In the case of a new 3-day long training it is even more noticeable – attendees have much more time to ask you uncomfortable questions :). Not for the first time my preparations resulted in a new feature or an enhancement in some popular libraries or frameworks.

The last code snippet requires Spock in version 1.1 (which at the time of writing is available as release candidate 3 – 1.1-rc-3) to not trigger a warning in the IDE. There are a lot of new features in Spock 1.1 – why wouldn’t you give it a try? :)

Picture credits: GDJ from openclipart.org

Get to know how to enable named method parameter support in a Gradle project.

Introduction

Java 8 has introduced (among others) an ability to get a method parameter name at runtime. For backward compatibility (mostly with existing bytecode manipulation tools) it is required to enable it explicitly. The operation is as simple as adding a -parameters flag to the javac call in hello world tutorials. However, it turns out to be more enigmatic to configure in a Gradle project (especially for Gradle newcomers).
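
Once the code is compiled with that flag, the parameter names can be read with the standard reflection API, e.g. (a small self-contained sketch):

import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParameterNamesDemo {

    public void sendNotification(String recipient, String message) { }

    public static void main(String[] args) throws NoSuchMethodException {
        Method method = ParameterNamesDemo.class.getMethod("sendNotification", String.class, String.class);
        for (Parameter parameter : method.getParameters()) {
            //isNamePresent() returns true only for code compiled with -parameters,
            //otherwise getName() falls back to synthetic names like arg0, arg1
            System.out.println(parameter.getName() + " (name present: " + parameter.isNamePresent() + ")");
        }
    }
}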

PensiveDuke

Gradle

To enable support for named method arguments it is required to set it for every Java compilation task in a project. It can be easily attained with:

tasks.withType(JavaCompile) {
    options.compilerArgs << '-parameters'
}

For a multi-project build the construction has to be applied to all the subprojects, e.g.:

subprojects {
    (...)
    tasks.withType(JavaCompile) {
        options.compilerArgs << '-parameters'
    }
}

Rationale

For me, as a Gradle veteran and Gradle plugin author, the withType construction and passing different compilation or runtime JVM options is bread and butter. However, I needed to explain it more than once to less Groovy-experienced workmates, so for further reference (aka “Have you read my blog? ;-)”) I have written it down. In their defence I have to agree that, at the time of writing this blog post, the top Google results point to Gradle forum threads which also contain “not so good” advice. Hopefully my article will be positioned higher :-).

Tested with Gradle 2.14 and OpenJDK 1.8.0_92.

Image credits: https://duke.kenai.com/

A simple way to display and analyze buildscript dependencies (e.g. plugins) in Gradle.

Introduction

This is the third part of my Gradle tricks mini-series related to visualization and analysis of dependencies. In the first post I presented a way to display dependencies for all subprojects in a multi-project build. In the second I showed a technique useful in tracking down unexpected transitive dependencies in a project. This time something less often used, yet crucial in specific cases – buildscript dependencies.

Dependencies

Real use case

Buildscript dependencies contain plugins used in our project and their dependencies. That would seem nothing interesting unless you are a Gradle plugin developer, but it is not completely true. Once, as a consultant, I was investigating an issue with NoSuchMethodException in a large project with a custom build framework built on top of Gradle. The problem occurred only when one innocent, very popular open source plugin had been added to the project. The same plugin worked fine in many other projects in that company. In the end I was able to figure out that one of the dependencies used in buildSrc custom scripts was overriding the same dependency in an older version coming from the plugin. As a result the plugin failed at runtime with the mentioned NoSuchMethodException. To figure that out I had to use my custom script, as buildscript/classpath dependencies are completely ignored when ./gradlew dependencies or ./gradlew dependencyInsight is used.

Solution

The idea to write this post arose at the beginning of 2015. I wanted to present my small Gradle task that, using some internal Gradle mechanisms, retrieves buildscript dependencies and displays them in a console. The post was postponed, and almost a year later I was positively surprised when reading the release notes for Gradle 2.10 – the new buildEnvironment task was added.

$ ./gradlew buildEnvironment
:buildEnvironment

------------------------------------------------------------
Root project
------------------------------------------------------------

classpath
+--- com.bmuschko:gradle-nexus-plugin:2.3
\--- io.codearte.gradle.nexus:gradle-nexus-staging-plugin:0.5.3
     \--- org.codehaus.groovy.modules.http-builder:http-builder:0.7.1
          +--- org.apache.httpcomponents:httpclient:4.2.1
          |    +--- org.apache.httpcomponents:httpcore:4.2.1
          |    +--- commons-logging:commons-logging:1.1.1
          |    \--- commons-codec:commons-codec:1.6
          +--- net.sf.json-lib:json-lib:2.3
          |    +--- commons-beanutils:commons-beanutils:1.8.0
          |    |    \--- commons-logging:commons-logging:1.1.1
          |    +--- commons-collections:commons-collections:3.2.1
          |    +--- commons-lang:commons-lang:2.4
          |    +--- commons-logging:commons-logging:1.1.1
          |    \--- net.sf.ezmorph:ezmorph:1.0.6
          |         \--- commons-lang:commons-lang:2.3 -> 2.4
          +--- net.sourceforge.nekohtml:nekohtml:1.9.16
          \--- xml-resolver:xml-resolver:1.2

(*) - dependencies omitted (listed previously)

BUILD SUCCESSFUL

Total time: 1.38 secs

Two plugins and a pack of transitive dependencies pulled in by gradle-nexus-staging-plugin thanks to http-builder (maybe it would be good to replace it with Jodd?).

Summary

It is worth being able to distinguish standard project dependencies from buildscript dependencies. The new buildEnvironment task helps to deal with the latter. This in turn becomes essential when strange runtime errors start to show up.

Tested with Gradle 2.10.

Picture credits: Zeroturnaround.

Would it be useful to unroll all parameterized Spock tests automatically?

I’ve always been frustrated with the need to add the @Unroll annotation to every parameterized test/feature (or at least at the class/specification level) to make unrolling work in Spock. It was even worse to deal with code with missing @Unroll annotations and cryptic test results. For backward compatibility, unrolling will most likely not be enabled by default in the foreseeable future, but luckily there is a quick solution.

Unroll

Photo: Christopher Michel, CC BY 2.0

Unroll for all and for free

To enable global unrolling it is only required to add spock-global-unroll.jar to your classpath:

testCompile 'info.solidsoft.spock:spock-global-unroll:0.5.0'

To make it easier to use spock-global-unroll with different Spock versions (like 1.0-groovy-2.0 and 1.0-groovy-2.3) the plugin does not have a compile dependency on Spock, and a proper spock-core jar has to be explicitly defined in the build configuration, e.g.:

testCompile 'info.solidsoft.spock:spock-global-unroll:0.5.0'
testCompile 'org.spockframework:spock-core:1.0-groovy-2.4'

That’s all. spock-global-unroll is a global extension which is activated automatically by Spock. All parameterized Spock tests are unrolled without the need to use @Unroll annotation.

Disabling automatic unrolling for a class

Automatic unrolling can be disabled for a particular class by putting @DisableGlobalUnroll on it.

The nice thing is that @Unroll annotations manually placed at the test (feature) level can be used to unroll particular tests anyway (even if automatic unrolling has been disabled for a given class).

@DisableGlobalUnroll
class PeselValidatorSpec extends Specification {

    //one big test for multiple input parameters
    def "should not be unrolled for some reasons PESEL #number"() { ... }

    (...)
}

Overriding default test name

To override the default test name expansion (with #placeHolders in a test name) the @Unroll annotation with a custom text can be used on top of a feature method or at the specification level.

@DisableGlobalUnroll
class PeselValidatorSpec extends Specification {

    //one big test for multiple input parameters
    def "should not be unrolled for some reasons PESEL #number"() { ... }

    //unrolled anyway
    @Unroll("PESEL '#pesel' should be #description")
    def "should validate PESEL correctness"() { ... }

    (...)
}

Summary

Being able to implement automatic test unrolling within 15 minutes, I decided to share it with the community – maybe there are others who don’t like to write boilerplate code :). The code written to achieve it is just a few lines of production code (of course there are also 3 test classes to verify that the extension works as expected :) ). This shows the power of Spock’s extensibility.

The complete source code is available from GitHub: https://github.com/szpak/spock-global-unroll

Update 20160521. I added automatic migration scripts in the project README to make a migration easier.

Btw, if you would like to find out more about “Interesting nooks and crannies of Spock” I will be speaking about them in May and June at GeeCON 2016 in Kraków, Gr8Conf 2016 in Copenhagen and Devoxx Poland again in Kraków.

Geecon big paw logo
GR8 Conf 2016 Europe
Devoxx Poland 2016 Speaker Badge

Self promotion. Would you like to improve your and your team’s testing skills and knowledge of Spock quickly and efficiently? I conduct a condensed (unit) testing training which you may find useful.

Have you ever experienced the “Could not find property X on plugin extension Y” error with a freshly cloned GitHub project you wanted to contribute to?

A missing username, password or token for a service you may have never heard of? It usually happens when you try to do anything (like just building a project), not only when the given plugin (like an online code coverage tool) is used. I didn’t like having to modify my environment just to provide a small fix to another open source project. It annoyed me and I wanted to change it. Starting with Gradle 2.13 that became possible. However, let’s start with the reasons (if you are interested only in the solution, please skip forward to the last 2 paragraphs).

Gradle logo

Why do I get “Could not find property…”?

Most Gradle plugins need to be configured. Some properties can be set directly in build.gradle, but others (especially credentials) are better kept locally in ~/.gradle/gradle.properties. As a result, plugin configuration sections often look like this:

bintray {
    user = project.getProperty('bintrayUser')
    key = project.getProperty('bintrayKey')
    ...
}

or that:

bintray {
    user = getProperty('bintrayUser')
    key = getProperty('bintrayKey')
    ...
}

or even shorter:

bintray {
    user = bintrayUser
    key = bintrayKey
    ...
}

It works fine for a project developer having bintrayUser and bintrayKey defined in their local configuration, but for every person not uploading to Bintray on a daily basis it fails with:

* What went wrong:
A problem occurred evaluating root project 'another-nice-open-source-project'.
> Could not find property 'bintrayKey' on com.jfrog.bintray.gradle.BintrayExtension_Decorated@2ecc563.

The result is that project.getProperty(), not to mention the explicit assignment, just throws an exception when a particular property is not found. What’s bad is that the code is executed in the configuration phase. For that reason the execution of every task, even one not related to that particular plugin (like gw tasks or gw wrapper), fails miserably.

As a workaround a guard check has to be performed:

bintray {    //Gradle <2.13
    user = hasProperty('bintrayUser') ? getProperty('bintrayUser') : ''
    key = hasProperty('bintrayKey') ? getProperty('bintrayKey') : ''
    ...
}

It doesn’t look very compact. As another option, a dummy placeholder could be kept in the project configuration, but starting with Gradle 2.13 there is a better way to cope with that.

project.findProperty()

Gradle 2.13 is the first version with my contribution of the new method project.findProperty(). It behaves the same as getProperty(), but instead of throwing an exception it returns null. This simplifies the assignment greatly:

bintray {    //Gradle 2.13+
    user = findProperty('bintrayUser') ?: ''
    key = findProperty('bintrayKey') ?: ''
    ...
}

Some people could say that Optional would be better as a returned value, but this is a public API and Gradle supports Java versions older than 8.

Summary

For me findProperty is a method I had very often been looking for in Gradle. I regret that it took me over a year to make this pull request. Gradle 2.13 has just been released and version upgrades across projects will be performed gradually. It can take some time, but every project migrating to 2.13 will be able to simplify its configuration, making the “Could not find property X on plugin Y” error message a remembrance of the past (of course unless you really need to configure the particular plugin to use it :) ).

Tested with Gradle 2.13.

Learn how Spring 4.2 simplifies handling transaction-bound events (e.g. those sent just after a database commit).

Introduction

As you probably already know (e.g. from my previous blog post) it is no longer necessary to create a separate class implementing ApplicationListener with an onApplicationEvent method to be able to react to application events (both from the Spring Framework itself and our own domain events). Starting with Spring 4.2 support for annotation-driven event listeners was added. It is enough to use @EventListener at the method level, which under the hood will automatically register a corresponding ApplicationListener:

    @EventListener
    public void blogAdded(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.blogAdded(blogAddedEvent);
    }

Please notice that using domain objects in the events has notable drawbacks and is not the best idea in many situations. Pseudodomain objects in the code examples were used to not introduce unnecessary complexity.

Transaction bound events

Simple and compact. For “standard” events everything looks great, but in some cases it is necessary to perform some operations (usually asynchronous ones) just after the transaction has been committed (or rolled back). What then? Can the new mechanism be used as well?

Business requirements

First, a small digression – business requirements. Let’s imagine a super fancy blog aggregation service. An event is generated every time a new blog is added. Subscribed users can receive an SMS or a push notification. The event could be published after the blog object is scheduled to be saved in a database. However, in a case of commit/flush failure (database constraint violation, an issue with the ID generator, etc.) the whole DB transaction would be rolled back. A lot of angry users with broken notifications would appear at the door…

Technical issues

In the modern approach to transaction management, transactions are configured declaratively (e.g. with the @Transactional annotation) and a commit is triggered at the end of the transactional scope (e.g. at the end of a method). In general this is very convenient and much less error-prone than the programmatic approach. On the other hand, the commit (or rollback) is done automatically outside our code and we are not able to react in the “classical” way (i.e. publish an event in the next line after transaction.commit() is called).
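
To make the problem more tangible, here is a minimal sketch of such a declaratively managed service publishing an event (BlogService, BlogRepository and the BlogAddedEvent constructor are assumptions for illustration, not taken from the sample project):

@Service
public class BlogService {

    private final BlogRepository blogRepository;
    private final ApplicationEventPublisher eventPublisher;

    @Autowired
    public BlogService(BlogRepository blogRepository, ApplicationEventPublisher eventPublisher) {
        this.blogRepository = blogRepository;
        this.eventPublisher = eventPublisher;
    }

    @Transactional
    public void addBlog(Blog blog) {
        blogRepository.save(blog);
        //published inside the transaction - the commit (or rollback) happens later,
        //after this method returns, inside the Spring transaction infrastructure
        eventPublisher.publishEvent(new BlogAddedEvent(blog));
    }
}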

Old school implementation

One of the possible solutions for Spring (and a very elegant one) was presented by the indispensable Tomek Nurkiewicz. It uses TransactionSynchronizationManager to register a transaction synchronization for the current thread. For example:

    @EventListener
    public void blogAddedTransactionalOldSchool(BlogAddedEvent blogAddedEvent) {
        //Note: *Old school* transaction handling before Spring 4.2 - broken in not transactional context

        TransactionSynchronizationManager.registerSynchronization(
                new TransactionSynchronizationAdapter() {
                    @Override
                    public void afterCommit() {
                        internalSendBlogAddedNotification(blogAddedEvent);
                    }
                });
    }

The passed code is executed in the proper place in the Spring transaction workflow (for that case “just” after commit).

To provide support for execution in a non-transactional context (e.g. in integration tests which may not care about transactions) it can be extended to the following form, to not fail with a java.lang.IllegalStateException: Transaction synchronization is not active exception:

    @EventListener
    public void blogAddedTransactionalOldSchool(final BlogAddedEvent blogAddedEvent) {
        //Note: *Old school* transaction handling before Spring 4.2

        //"if" to not fail with "java.lang.IllegalStateException: Transaction synchronization is not active"
        if (TransactionSynchronizationManager.isActualTransactionActive()) {

            TransactionSynchronizationManager.registerSynchronization(
                    new TransactionSynchronizationAdapter() {
                        @Override
                        public void afterCommit() {
                            internalSendBlogAddedNotification(blogAddedEvent);
                        }
                    });
        } else {
            log.warn("No active transaction found. Sending notification immediately.");
            externalNotificationSender.newBlogTransactionalOldSchool(blogAddedEvent);
        }
    }

With that change, in the case of no active transaction, the provided code is executed immediately. Works fine so far, but let’s try to achieve the same thing with annotation-driven event listeners in Spring 4.2.

Spring 4.2+ implementation

In addition to @EventListener, Spring 4.2 also provides one more annotation – @TransactionalEventListener.

    @TransactionalEventListener
    public void blogAddedTransactional(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.newBlogTransactional(blogAddedEvent);
    }

The execution can be bound to the standard transaction phases: before/after commit, after rollback or after completion (both commit and rollback). By default it processes an event only if it was published within the boundaries of a transaction. Otherwise the event is discarded.
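
For example, to react only when the transaction has been rolled back, the phase attribute (with the TransactionPhase enum from org.springframework.transaction.event) can be set explicitly – a sketch based on the listener shown above:

    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void blogAddedRolledBack(BlogAddedEvent blogAddedEvent) {
        //react to the rolled back transaction, e.g. just log it
        log.warn("Blog addition rolled back for {}", blogAddedEvent);
    }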

To support execution in a non-transactional context the fallbackExecution flag can be used. If set to “true” the event is processed immediately when there is no transaction running.

    @TransactionalEventListener(fallbackExecution = true)
    public void blogAddedTransactional(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.newBlogTransactional(blogAddedEvent);
    }

Summary

Annotation-driven event listeners introduced in Spring 4.2 continue the trend of reducing boilerplate code in Spring (Boot) based applications. No need to manually create ApplicationListener implementations, no need to use TransactionSynchronizationManager directly – just one annotation with the proper configuration. The other side of the coin is that it is a little bit harder to find all the event listeners, especially if there are dozens of them in our monolithic application (though they can easily be grouped). Of course, the new approach is only an option which may or may not be useful in a given use case. Nevertheless, another piece of Spring (Boot) magic floods into our systems. But maybe resistance is futile?

Please note that Spring Framework 4.2 is a default dependency of Spring Boot 1.3 (at the time of writing 1.3.0.M5 is available). Alternatively, it is possible to manually upgrade the Spring Framework version in Gradle/Maven for Spring Boot 1.2.5 – it should work for most cases. Code examples are available from GitHub.

Btw, writing examples for this blog post gave me the first real opportunity to use the new test transaction management system introduced in Spring 4.1 (in the past I only mentioned it during my Spring training sessions). Probably, I will write more about it soon.

Learn how to reduce boilerplate code in event handling with annotation-driven event listeners in Spring 4.2+.

Introduction

Exchanging events within an application has become an indispensable part of many applications, and thankfully Spring provides a complete infrastructure for transient events (*). The recent refactoring of transaction-bound events gave me an excuse to check in practice the new annotation-driven event listeners introduced in Spring 4.2. Let’s see what can be gained.

(*) – for persistent events in a Spring-based application, Duramen could be a solution worth seeing

Spring logo

The old way

To get a notification about an event (both a Spring event and a custom domain event) a component implementing ApplicationListener with an onApplicationEvent method has to be created.

@Component
class OldWayBlogModifiedEventListener implements
                        ApplicationListener<OldWayBlogModifiedEvent> {

    (...)

    @Override
    public void onApplicationEvent(OldWayBlogModifiedEvent event) {
        externalNotificationSender.oldWayBlogModified(event);
    }
}

It works fine, but for every event a new class has to be created which generates boilerplate code.

In addition, our event has to extend the ApplicationEvent class – the base class for all application events in Spring.

class OldWayBlogModifiedEvent extends ApplicationEvent {

    public OldWayBlogModifiedEvent(Blog blog) {
        super(blog);
    }

    public Blog getBlog() {
        return (Blog)getSource();
    }
}

Please notice that using domain objects in the events has notable drawbacks and is not the best idea in many situations. Pseudodomain objects in the code examples were used to not introduce unnecessary complexity.

Btw, ExternalNotificationSender in this example is an instance of a class which sends external notifications to registered users (e.g. via email, SMS or Slack).

Annotation-driven event listener

Starting with Spring 4.2, to be notified about a new event it is enough to annotate a method in any Spring component with the @EventListener annotation.

    @EventListener
    public void blogModified(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModified(blogModifiedEvent);
    }

Under the hood Spring will create an ApplicationListener instance for the event with a type taken from the method argument. There is no limitation on the number of annotated methods in one class – all related event handlers can be grouped into one class.
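
For instance, all blog-related handlers could live in a single component – a sketch reusing the classes from the examples in this post:

@Component
class BlogEventListeners {

    private final ExternalNotificationSender externalNotificationSender;

    @Autowired
    BlogEventListeners(ExternalNotificationSender externalNotificationSender) {
        this.externalNotificationSender = externalNotificationSender;
    }

    @EventListener
    public void blogAdded(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.blogAdded(blogAddedEvent);
    }

    @EventListener
    public void blogModified(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModified(blogModifiedEvent);
    }
}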

Conditional event handling

To make @EventListener even more interesting there is an ability to handle only those events of a given type which fulfill given condition(s) written in SpEL. Let’s assume the following event class:

public class BlogModifiedEvent {

    private final Blog blog;
    private final boolean importantChange;

    public BlogModifiedEvent(Blog blog) {
        this(blog, false);
    }

    public BlogModifiedEvent(Blog blog, boolean importantChange) {
        this.blog = blog;
        this.importantChange = importantChange;
    }

    public Blog getBlog() {
        return blog;
    }

    public boolean isImportantChange() {
        return importantChange;
    }
}

Please note that in a real application there would probably be a hierarchy of Blog-related events.
Please also note that in Groovy that class would be much simpler.

To handle only the events for important changes the condition parameter can be used:

    @EventListener(condition = "#blogModifiedEvent.importantChange")
    public void blogModifiedSpEL(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModifiedSpEL(blogModifiedEvent);
    }

Relaxed event type hierarchy

Historically ApplicationEventPublisher only had an ability to publish objects which inherited from ApplicationEvent. Starting with Spring 4.2 the interface has been extended to support any object type. In that case the object is wrapped in PayloadApplicationEvent and sent through.

//base class with Blog field - no need to extend `ApplicationEvent`
class BaseBlogEvent {}

class BlogModifiedEvent extends BaseBlogEvent {}

//somewhere in the code
ApplicationEventPublisher publisher = (...);    //injected

publisher.publishEvent(new BlogModifiedEvent(blog)); //just plain instance of the event

That change makes publishing events even easier. However, on the other hand, without some internal discipline (e.g. a marker interface for all our domain events) it can make event tracking even harder, especially in larger applications.
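
A simple mitigation is such a marker interface, implemented (directly or through a base class) by all domain events – an illustrative convention, not a Spring requirement:

//a marker interface making all domain events easy to find
public interface DomainEvent {}

class BaseBlogEvent implements DomainEvent {}

class BlogModifiedEvent extends BaseBlogEvent {}    //still no need to extend ApplicationEvent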

Publishing events in response to other events

Another nice thing about @EventListener is the fact that in the case of a non-void return type Spring will automatically publish the returned event.

    @EventListener
    public BlogModifiedResponseEvent blogModifiedWithResponse(BlogModifiedEvent blogModifiedEvent) {
        externalNotificationSender.blogModifiedWithResponse(blogModifiedEvent);
        return new BlogModifiedResponseEvent(
            blogModifiedEvent.getBlog(), BlogModifiedResponseEvent.Status.OK);
    }

Asynchronous event processing

Updated. As rightly suggested by Radek Grębski, it is also worth mentioning that @EventListener can be easily combined with the @Async annotation to provide asynchronous event processing. The code in a particular event listener then blocks neither the main code execution nor processing by other listeners.

    @Async    //Remember to enable asynchronous method execution 
              //in your application with @EnableAsync
    @EventListener
    public void blogAddedAsync(BlogAddedEvent blogAddedEvent) {
        externalNotificationSender.blogAdded(blogAddedEvent);
    }

To make it work it is only required to enable asynchronous method execution in your Spring context/application with @EnableAsync.
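
For example, a minimal configuration sketch:

@Configuration
@EnableAsync
public class AsyncConfiguration {
}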

Summary

Annotation-driven event listeners introduced in Spring 4.2 continue the trend of reducing boilerplate code in Spring (Boot) based applications. The new approach looks interesting especially for small applications with a small number of events, where the maintenance overhead is lower. In the world of ubiquitous Spring (Boot) magic it is worth remembering that with great power comes great responsibility.

In the next blog post I will write how the new mechanism can be also used to simplify handling of transaction bound events.

Please note that Spring Framework 4.2 is a default dependency of Spring Boot 1.3 (at the time of writing 1.3.0.M5 is available). Alternatively it is possible to manually upgrade the Spring Framework version in Gradle/Maven for Spring Boot 1.2.5 – it should work for most cases.