Mutation testing allows checking the quality (effectiveness) of automated tests. PIT (aka pitest) is a leading mutation testing tool for the Java environment. In my last blog post about PIT, in January 2013, I covered version 0.29. Since then the PIT development team has been busy, and the 4 releases since then have introduced various new features (besides fixing bugs). In this post I will cover the most important (in my opinion) changes in the project (up to the recently released version 0.33).

PIT mutation testing tool logo

New features

– preliminary support for Java 8 bytecode – PIT can be used with code which contains Java 8 syntax and constructs (including lambdas)
– internal refactoring resulted in much faster “standard” line coverage calculation
– support for parametrized JUnit tests written with Spock (in Groovy) and JUnitParams
– ability to define a coverage threshold (both line and mutation) below which the build will fail
– ability to use PIT with Robolectric
– new Remove Conditionals Mutator (a conditional statement will always be true – not enabled by default as of 0.33)
– new Remove Increments Mutator (an increment operation will be removed – not enabled by default as of 0.33) – see the conceptual sketch below this list
– ability to choose JVM to be used for mutation testing
– ability to run PIT only for locally changed files for Maven build with configured SCM plugin
– demanding users can define their own strategies for: test selection, output format and test prioritization – PIT provides extension points which allow writing custom implementations
– partial support for JUnit categories
– support for mutating static initializers in TestNG
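
To give a feel for what the two new mutators do, here is a conceptual sketch in plain Java (hypothetical code, not PIT's internal implementation). The Remove Conditionals Mutator makes the branch condition effectively always true, while the Remove Increments Mutator drops the increment; a test suite that only checks the method does not throw will not kill either mutant.

// Hypothetical production code used only for illustration
int countPositives(int[] values) {
    int count = 0;
    for (int value : values) {
        if (value > 0) {  // Remove Conditionals Mutator: the condition becomes effectively always true
            count++;      // Remove Increments Mutator: this increment is removed
        }
    }
    return count;
}
// Only a test asserting on the returned value (e.g. expecting 2 for {3, -1, 7})
// will detect (kill) those mutations.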

In the meantime there were also releases of plugins/tools based on PIT. My plugin for Gradle was enhanced with dynamic task dependency resolution (just “gradle pitest” takes care of all the prerequisites in the Gradle build lifecycle) and support for additional main and test source sets. The plugin for Eclipse has got (inter alia) a new mutation view and an ability to run PIT against all the projects in a workspace.

Not only releases

Besides the new releases, PIT has got a brand new Bootstrap-based webpage, a logo (see above), and the source code was migrated from Mercurial on Google Code to GitHub. The nice thing is that the move resulted in a few contributions within the first weeks.

Henry Coles, the author of PIT, also started a new commercial project FaultSeed – “better mutation testing tools for the JVM” – which will be based on PIT, aims to be 50% faster, and will also support Groovy and Scala. Very promising.

PIT (and mutation testing in general) is becoming more and more popular, and recently a number of talks have been given about it (including my talk at Developer Conference 2014 – slides). The number of questions on the project’s mailing list has also significantly increased. And you, have you tried PIT in your project yet?

DevConf.cz 2014 logo

Mockito uses a lazy approach to stubbing: when an unstubbed method is called, it returns a default value instead of throwing an exception (as EasyMock does). This is very useful as it helps avoid overspecifying tests.

The default returned value depends on the return type of the stubbed method. For methods returning collections we get an empty collection, for numbers – 0, for booleans – false, for ordinary objects – null (in Mockito 2.0 the set of non-null default values will be extended – this can also be achieved in 1.9.x with the ReturnsMoreEmptyValues answer).
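
As a quick illustration, here is a minimal sketch of those defaults (the CrewRegistry interface and Captain class below are hypothetical, used only to keep the example self-contained):

import static org.mockito.Mockito.mock;

import java.util.List;

// Hypothetical types used only for illustration
interface CrewRegistry {
    List<String> getCrewNames();
    int getCrewCount();
    boolean isFullyManned();
    Captain getCaptain();
}
class Captain {}

CrewRegistry registryMock = mock(CrewRegistry.class);
registryMock.getCrewNames();   // an empty list
registryMock.getCrewCount();   // 0
registryMock.isFullyManned();  // false
registryMock.getCaptain();     // null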

Mockito logo

Before we go any further, a quick introduction to Answers (you can skip to the next paragraph if Answers are an open book to you). In addition to simple stubbing based on a desired value passed to Mockito directly:

given(tacticalStationMock.getNumberOfTubes())
    .willReturn(TEST_NUMBER_OF_TORPEDO_TUBES);

or for consecutive calls:

given(tacticalStationMock.getNumberOfEnemyShipsInRange()).willReturn(2, 3);

the stubbing API provides a way to pass an object with the logic that determines what should be returned in a given case (based on the method arguments or even an internal state (for consecutive calls)). A simple practical example which always returns the first argument passed to the called method:

import org.mockito.exceptions.base.MockitoException;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

public class ReturnFirstArgumentAnswer implements Answer<Object> {
    @Override
    public Object answer(InvocationOnMock invocation) throws Throwable {
        Object[] arguments = invocation.getArguments();
        if (arguments.length == 0) {
            throw new MockitoException("...");
        }
        return arguments[0];
    }
}

A sample usage when stubbing:

given(mock.methodToStub("arg1", "arg2"))
    .willAnswer(new ReturnFirstArgumentAnswer());

Mockito provides a set of built-in answers. Some of them (like ThrowsException or CallsRealMethods) are used by Mockito internally, but some others (like ReturnsArgumentAt introduced in 1.9.5) can also be useful for developers writing tests.
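
For example, the ReturnsArgumentAt answer mentioned above can be obtained via the AdditionalAnswers factory class. A minimal sketch, assuming Mockito 1.9.5+ (the MessageRelay interface is hypothetical, used only to keep the example self-contained):

import static org.mockito.AdditionalAnswers.returnsFirstArg;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.mock;

// Hypothetical collaborator used only for illustration
interface MessageRelay {
    String relay(String message, String channel);
}

MessageRelay relayMock = mock(MessageRelay.class);
// returnsFirstArg() is a convenience factory for the ReturnsArgumentAt answer
given(relayMock.relay(anyString(), anyString())).willAnswer(returnsFirstArg());

relayMock.relay("mayday", "bridge");  // returns "mayday"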

Let’s return to the main topic. Sometimes it is useful to change those default values. In addition to using the answer mechanism for stubbing specific method calls, Mockito provides a way to specify an answer which will be used for every unstubbed method execution on a given mock. To do so we can use the static mock() method which, in addition to the class to mock, takes a default answer as a second parameter.

mock(SpaceShip.class, Mockito.RETURNS_DEFAULTS);

As returning defaults is the default behavior in Mockito, the above code is just a more explicit version of:

mock(SpaceShip.class);

but we can use this construction to achieve a few interesting behaviors. One of the predefined answers provided by Mockito is RETURNS_DEEP_STUBS. It causes automatic stubbing of chained method calls and allows us to do the following:

SpaceShip spaceShipMock = mock(SpaceShip.class, Mockito.RETURNS_DEEP_STUBS);
given(spaceShipMock.getTacticalStation().getNumberOfTubes()).willReturn(5);

Please note that with the default configuration the code above would cause a NullPointerException, because spaceShipMock.getTacticalStation() returns null. With RETURNS_DEEP_STUBS Mockito under the hood creates a mock for every intermediate method call. This is an equivalent of:

//NOTE. Deep stubs implemented manually - no more needed with RETURNS_DEEP_STUBS.
//See the previous example with an equivalent functionality in 2 lines.
SpaceShip spaceShipMock = mock(SpaceShip.class);
TacticalStation tacticalStationMock = mock(TacticalStation.class);
given(spaceShipMock.getTacticalStation()).willReturn(tacticalStationMock);
given(tacticalStationMock.getNumberOfTubes()).willReturn(5);

As a bonus, deep stubbing allows performing verification (only) on the last mock in the chain:

verify(spaceShipMock.getTacticalStation()).getNumberOfTubes();

Another provided answer is RETURNS_MOCKS. It tries to return a default value using the ReturnsMoreEmptyValues answer (an extended version of the default ReturnsEmptyValues), but if that fails a mock is returned. Only when the return type cannot be mocked (e.g. it is final) is null returned.

mock(OperationsStation.class, Mockito.RETURNS_MOCKS);

Sometimes it can be useful to stub selected methods, but delegate the remaining calls to the real implementations. This can be done with CALLS_REAL_METHODS. It can be useful for example when testing an abstract class (just the implemented methods, without the need to create a concrete subclass) – see the sketch below.

mock(AbstractClass.class, Mockito.CALLS_REAL_METHODS);
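
A minimal sketch of that scenario (the ReportGenerator class is hypothetical): the abstract method is stubbed, while the implemented method under test runs for real. doReturn() is used for stubbing to avoid invoking the real method during the stubbing itself.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.CALLS_REAL_METHODS;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;

// Hypothetical abstract class used only for illustration
abstract class ReportGenerator {
    abstract String header();

    String generate() {  // the implemented method we want to test
        return header() + " | report body";
    }
}

ReportGenerator generatorMock = mock(ReportGenerator.class, CALLS_REAL_METHODS);
doReturn("Weekly report").when(generatorMock).header();
assertEquals("Weekly report | report body", generatorMock.generate());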

Please note that using RETURNS_DEEP_STUBS, RETURNS_MOCKS and CALLS_REAL_METHODS should not be needed when dealing with well crafted code, written with the usage of Test-Driven Development. Nevertheless, sometimes it is required to write tests for legacy code before attempting to refactor it.

From the set of default answers defined in Mockito.java, there is also a very useful RETURNS_SMART_NULLS option. It returns a SmartNull class instance instead of a plain null, which gives a hint about which mock stubbing was not performed correctly (and caused the NPE). I wrote more about this mode some time ago in Beyond the Mockito Refcard #1.
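
Creating such a mock follows the same one-liner pattern as the other answers (reusing the SpaceShip class from the earlier examples):

mock(SpaceShip.class, Mockito.RETURNS_SMART_NULLS);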

When defining a default answer we are not limited to the predefined ones – any class which implements the org.mockito.stubbing.Answer interface can be used, whether provided by Mockito or hand written. One more tip: in case you would like to use RETURNS_SMART_NULLS or ReturnsMoreEmptyValues globally for all mocks in your application, you can check out the trick with MockitoConfiguration, sketched below.
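
For reference, the MockitoConfiguration trick boils down to placing a class with exactly this name and package on the test classpath. A minimal sketch, assuming Mockito 1.9.x and its internal ReturnsSmartNulls answer (verify the class names against the Mockito version you use):

package org.mockito.configuration;

import org.mockito.internal.stubbing.defaultanswers.ReturnsSmartNulls;
import org.mockito.stubbing.Answer;

// Mockito discovers a class with this fully qualified name at runtime
public class MockitoConfiguration extends DefaultMockitoConfiguration {
    @Override
    public Answer<Object> getDefaultAnswer() {
        return new ReturnsSmartNulls();
    }
}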

Btw, in case you are starting your adventure with Mockito, want to learn more, or just want to organize your knowledge, you may be interested in my Mockito Refcard available for free from dzone.com.

Btw2, if you are new to Mockito and live near Warszawa, you can consider attending my lecture/workshop about Mockito at Jinkubator – 18 II 2014 (next Tuesday).

Jinkubator

This post is the fourth part of the Beyond the Mockito Refcard series, which extends my Mockito reference card released some time ago.

AppFuse 3.0 has been released. It refreshes the technologies used (Java 7+, Spring 4, Spring Security 3.2) and adds a bunch of new ones (Bootstrap 3, PrimeFaces, wro4j, WebJars and finally Apache Wicket).

AppFuse logo

From the project home page:

AppFuse is a full-stack framework for building web applications on the JVM. It was originally developed to eliminate the ramp-up time found when building new web applications for customers. Over the years, it has matured into a very testable and secure system for creating Java-based webapps. At its core, AppFuse is a project skeleton, similar to the one that’s created by your IDE when you click through a wizard to create a new web project.

If you are looking for a foundation for your project, or just would like to see how the same things look in different technologies, you can give AppFuse a try. The quick start page should be a good starting point.

There is also a personal thread in this story. Steadfast readers may remember that over 3 years ago I started working on a Wicket frontend for AppFuse. I definitely prefer working on backends, but I wanted to get to know Wicket (and its famous ability to be tested without Selenium) better, and an engagement in the AppFuse development seemed to be a good way to practice these skills. There are still places in the Wicket frontend which need to be polished, but the work is mostly done (which I’m happy about :) ).

As a summary of my work I can write that even Wicket (where all page logic is written in Java or Groovy classes – no more c:forEach tags!) cannot completely remove the pain which comes with the limitations of HTTP (which wasn’t designed for “enterprise applications”) and the differences between browsers (although with jQuery and Bootstrap it is much easier). In addition, 3 years is a lot of time in IT and currently there are even more use cases where component-based server-side frameworks aren’t the best solution to make a good looking, responsive, scalable and trendy UI. Maybe it is time to work on an AngularJS frontend ;).

The Happiness Door is a great method of collecting instant feedback. I have used it successfully in my training sessions and recently tried it for the first time after a conference speech. In this post I present my case study and the reasons why I will definitely use it at upcoming events.

Part 1. From the beginning. It has become a tradition to end Warszawa JUG’s meeting seasons at the turn of June and July in great style – with Confitura – the largest free conference about Java and the JVM in Poland. This year the seventh edition was planned for about 1000 people and it took less than 2 days to sell out all the tickets (including overbooking). 5 parallel tracks with 35 presentations on various topics: from tuning the JVM, different languages on the JVM and a lot of frameworks, through JavaScript, Android, databases, architecture and testing, to motivating people, Gordon Ramsay and company management in the 21st century. Too much to take in during one day, but fortunately all presentations were recorded and will be available online.

My presentation was about mutation testing in the Java environment with PIT. I have written a few posts on that topic already. In a nutshell – a nice way to check how good your tests really are. Writing testable code had a wide representation at Confitura – I counted four more testing-related presentations.

Part 2. The Happiness Door method (if you are new to this method, read my previous post first). Before my presentation sticky notes were distributed across the room and 5 smiley faces were put on the door. At the beginning I explained to the audience how the method works and why their feedback was so important to me. When leaving the room, about 50% of the attendees gave numerical feedback. Almost half of them added some comments (from “nice talk” to an essay placed on both sides of a sticky note :) ). Some of them were very interesting. Thanks!

I’m very happy I used this method at my speech. For the first time, just minutes after a presentation, I knew what people thought about it. In my case it was: “quite good, but there is still room for improvement”.


The Happiness Door (one leaf) after my speech at Confitura 2013

Here is a good place to thank Małgorzata Majewska and Anna Zajączkowska who helped me polish many aspects of my speech.

The interesting thing is that when I got an email from the organizers with the feedback collected using the online survey, the average score was very similar – it just arrived a month later, so why should I wait? With The Happiness Door method it is possible to get that knowledge immediately (and even with a larger sample).

Btw, it is not easy to prepare for the presentation (there were, as always, some problems with the projector) and put a sticky note on every seat in the audience room (~150 seats) in 15 minutes. Thanks to Edmund Fliski and Dominika Biel for their help with the distribution.

Btw2, after the talk I had very interesting conversations with people wanting to use mutation testing in their projects, people already experimenting with PIT, and Konrad Hałas – the guy from Warsaw who wrote MutPy – a mutation testing tool for Python.

Part 3. The slides for my speeches evolve over time. 2 years ago my slides were full of bullet lists – the feedback: boring, too much text. Recently I gave an internal presentation with fewer than 10 slides with no text, just images – the feedback: hard to follow, too little text. In the meantime I experimented with a presentation based on a mind map – mixed feedback. This time I combined images and text – some people liked it, some did not :). Be the judge. The slides (in Polish) are available for download.

Appendix. Thanks to Chris Rimmer for the idea of presenting IT topics using metaphors to non-IT events and people.

I hope I encouraged you to give The Happiness Door method a try. Please share your experience in comments.

P.S. Good news for all the people who missed my presentation at Confitura. My proposal was accepted by the Program Committee and I will be speaking in Kraków at JDD 2013 (October 14-15th).

jdd13-speaker

The Happiness Door is a method of collecting immediate feedback which I read about some time ago on Jurgen Appelo’s blog. I used it this year during my training sessions and it worked very well. I would like to popularize it a little bit.

This method requires selecting a strategically located place (like the second leaf of the exit door) with a marked scale (I use 5 smileys, from a very sad to a very happy one) and asking people to put the distributed sticky notes at a level corresponding to their satisfaction with the session. They are encouraged to add a concrete comment(s) explaining the given score (like “boring” or “too few practical exercises”), but it is completely valid to just attach an empty card in the selected place. The mentioned issues can be discussed with the whole group to determine how the given thing could best be improved. I start gathering feedback before the lunch break on the first training day and gently remind people about it on every following break.

The main advantage of using this method is getting both instant numerical feedback (how much people liked it) and concrete comments (what exactly they (dis)liked). The feedback is gathered very fast, when there is still room for improvement (in contrast to the more formal survey at the end of the training). I have got numerous comments from attendees that they like this method as well, and I plan to use it in my further sessions too.

In my courses I even introduced a small enhancement to the original method. Every day I give away sticky notes in a different color. This makes it easy to distinguish feedback given on a particular day and to identify a trend. In the photos below, for example, it is pretty visible that after the feedback I got on the first day (yellow cards) I was able to adapt my training to the group’s level and expectations (blue cards).

Feedback after my training - day 1 - The Happiness Door method

Day 1 – a moderate result

Feedback after my training - day 2 - The Happiness Door method

Day 2 – a visible uptrend



This spring was quite busy for me as a trainer. I was a mentor at Git Kata – a free Git workshop, gave a talk about asynchronous call testing at “6 tastes of testing – flashtalks” and recently ran a short workshop about Mockito at Test Kata. In the meantime I conducted two 3-day training sessions about writing “good code” and have one more Test-Driven Development session planned for the end of June. All of this on top of my main occupation – writing good software and helping team members do the same. What’s more, I recently received the very pleasant news that my presentation proposal about mutation testing was accepted and at the beginning of July I will close this training season speaking at Confitura 2013 (which nota bene was sold out (1200 tickets!) in less than 2 days). See you at Confitura.


Confitura 2013 – Speaker

Recently there was an interesting and unusual event in Warsaw for all enthusiasts of testing – “6 tastes of testing – flashtalks”. Instead of one long presentation, common for WJUG meetings, 6 guys gave 6 flashtalks (~15-minute presentations) about different aspects of testing.

The audience could listen about:

  • Fest Assertions – a set of assertions with a fluent API for TestNG and JUnit by Michał Lewandowski,
  • JUnitParams – a better way to write parameterized tests with JUnit by Paweł Lipiński,
  • Spock – a testing framework setting new standards for Java and Groovy code by Jakub Nabrdalik,
  • Geb – a Groovy way of acceptance testing with WebDriver and the Page Object Pattern by Tomasz Kalkosiński,
  • asynchronous call testing with Awaitility by Marcin Zajączkowski (a.k.a. me),
  • an idea how to use IoC and Guice in tests by Paweł Cesar Sanjuan Szklarz.

In my own speech I experimented with the presentation format. Instead of classical Impress/PowerPoint slides I used Vym – View Your Mind – an interesting mind mapping tool with a built-in slide editor. It allowed me to “animate” my mind map and talk about each of its points (nodes). By the way, I reported a dozen feature requests for Vym, so there is a chance the slide editor will be even better for the next presentation :). The downside of using such a tool is the limited way it can be exported to a neutral format. The picture below contains the whole map, but without code snippets.

Sleepless asynchronous calls testing with Awaitility

The people I asked were pleased with the presentations. They were cross-sectional and touched on many different aspects briefly, but even though I knew the covered topics before, I noted down a few useful tips. The venue was tightly filled (about 100 people). In addition to interesting topics there was pizza to eat and an SSD drive to win. The event was completed with a networking session in a nearby restaurant. For those who could not attend, a video is available (in Polish) on the WJUG’s YouTube channel.

Last Saturday, together with 8 other mentors, we were showing various Git-related techniques at the Git kata event to over 80 people.

Git kata was a free Git workshop conducted in a kata form. Paraphrasing Wikipedia, “a Git kata is an exercise in using Git which helps a user hone their skills through practice and repetition”. During the sessions a mentor showed selected Git aspects in practice, providing listeners with detailed comments on each performed step. The attendees could follow the mentor’s steps using their own laptops.

There were various Git techniques covered, including:
- undoing changes (reset, revert, reflog),
- useful aliases, configuration tricks, Git internals and the git prompt with the fish shell,
- collaboration with others using public services (like GitHub, Bitbucket, GitLab, Gitorious) or patches via email/USB,
- submodules, filter-branch and rerere.

In the past I have led various programming katas, but I had never heard about the idea of using the form with Git. It sounded very interesting when I was asked to join the mentors team at Git kata. I hope I helped some people better understand the internals of Git commands. In addition, having two free slots, I got to know (among other things) about rerere, which can reduce the number of manual merges (had you even heard about it before?), the “branch.autosetuprebase always” flag, which sets rebase as the default strategy on pull for newly created branches, and the “help.autocorrect 1” flag, which automatically applies “did you mean” suggestions in case of a typo. The event was very successful and I wonder whether to extend my training portfolio with a Git course.

Recently I wanted to configure the ability to run both TestNG and JUnit tests in one Maven module (project). In the end I managed to do it in a clean and short way, but before that I found a few different solutions on the web (top 5 in Google), some of which didn’t work and the rest of which applied to earlier versions of the Surefire plugin and were overly complicated (e.g. two separate executions). Therefore I decided to write this short post to show how it can be done with Surefire 2.13 – the newest version available in March 2013.

Mixing those test frameworks in one module can be done just by adding both JUnit and TestNG as plugin dependencies (not as project dependencies):

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>${surefire.version}</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.maven.surefire</groupId>
            <artifactId>surefire-junit47</artifactId>
            <version>${surefire.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.maven.surefire</groupId>
            <artifactId>surefire-testng</artifactId>
            <version>${surefire.version}</version>
        </dependency>
    </dependencies>
</plugin>

As a result both test types are executed.

[INFO] --- maven-surefire-plugin:2.13:test (default-test) @ junit-testng-poc ---
[INFO] Surefire report directory: /tmp/junit-testng-poc/target/surefire-reports
[INFO] Using configured provider org.apache.maven.surefire.junitcore.JUnitCoreProvider
[INFO] Using configured provider org.apache.maven.surefire.testng.TestNGProvider

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
parallel='none', perCoreThreadCount=true, threadCount=2, useUnlimitedThreads=false
Running info.solidsoft.rnd.junit.testng.SampleJUnitTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.253 sec

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0


-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running TestSuite
Configuring TestNG with: TestNG652Configurator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.869 sec

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

Why use TestNG together with JUnit? TestNG has a few features which are unavailable or less flexible in JUnit (just to mention a few: dependencies between tests and groups of tests (irreplaceable for integration tests with a long startup), parameterized tests, concurrent execution or per suite/group/class init/shutdown operations). Therefore it is tempting to migrate existing tests from JUnit to TestNG. With a large code base it may not be easy to migrate all of them at once, and the presented configuration allows writing new tests in TestNG and rewriting the old tests when appropriate.
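
As an illustration of the TestNG features mentioned above, here is a hypothetical test class sketch (class and method names are made up) combining test dependencies and a data provider:

import static org.testng.Assert.assertTrue;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class WarpDriveIntegrationTest {

    @Test
    public void shouldConnectToReactor() {
        // long running setup verified once
    }

    // executed only when the test it depends on has passed
    @Test(dependsOnMethods = "shouldConnectToReactor")
    public void shouldEngageWarpDrive() {
        // ...
    }

    @DataProvider
    public Object[][] warpFactors() {
        return new Object[][] {{1}, {5}, {9}};
    }

    // a parameterized test fed by the data provider above
    @Test(dataProvider = "warpFactors")
    public void shouldAccelerateToWarpFactor(int factor) {
        assertTrue(factor >= 1 && factor <= 9);
    }
}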

The whole working example can be found in my GitHub repository.

Btw, it is worth mentioning that, thanks to the fact that TestNG also generates reports in JUnit’s XML format, all the tools compatible with JUnit (Jenkins, Sonar, …) will merge test results from JUnit and TestNG and display all of them properly.

Btw2, the same configuration works also with Failsafe plugin.

Btw3, thanks to the fact that the Spock Framework is under the hood a JUnit runner, the presented trick can also be used to mix it with TestNG. This requires some additional work to also integrate Groovy and I plan to write about it in one of my future posts.

Mutation testing can efficiently detect places in code which are insufficiently covered by tests. The price we have to pay for it is time – a number of mutations have to be tested with the set of unit tests. This takes much longer than calculating “normal” code coverage. The newest PIT 0.29 provides a long-awaited feature – incremental analysis. When enabled, PIT will store results from previous runs on disk and track changes in the code and tests, to avoid rerunning analyses whose results should stay the same.

To start using incremental analysis it is necessary to set the historyInputLocation and historyOutputLocation configuration properties. For example in the Pitest Maven Plugin it could be:

<plugin>
    <groupId>org.pitest</groupId>
    <artifactId>pitest-maven</artifactId>
    <version>0.29</version>
    <configuration>
        <threads>4</threads>
        <historyInputLocation>target/pitHistory.txt</historyInputLocation>
        <historyOutputLocation>target/pitHistory.txt</historyOutputLocation>
    </configuration>
</plugin>

It is worth noticing that for basic usage (i.e. running the analysis from time to time) the input and output history locations will point to the same file. Therefore, in the Gradle plugin for PIT 0.29.0 an additional parameter enableDefaultIncrementalAnalysis was added which, when enabled, automatically sets historyInputLocation and historyOutputLocation to build/pitHistory.txt, simplifying the configuration.

buildscript {
    (...)
    dependencies {
        classpath 'info.solidsoft.gradle.pitest:gradle-pitest-plugin:0.29.0'
        classpath 'org.pitest:pitest:0.29'
    }
}

apply plugin: 'pitest'

pitest {
    targetClasses = ['our.base.package.*']
    threads = 4
    enableDefaultIncrementalAnalysis = true
}

There are a number of optimizations already implemented in PIT. The author warns of potential errors which can be introduced into the analysis this way, but although it is currently an experimental feature, it can dramatically reduce calculation time, especially when used on very large codebases.

Update 2013-04-13: Added the missing line which applies the plugin to a project. Thanks to Bruno de Carvalho for reporting the issue.

Looking for a way to use mutation testing and PIT with your Gradle-based project? Your search is over. The recently released gradle-pitest-plugin makes it possible in a very convenient way.

In short, the idea behind mutation testing is to modify the production code (introduce mutations), which should change its behavior (produce different results) and cause unit tests to fail. The lack of a failure may indicate that a given part of the production code was not covered well enough by the tests. To read more about mutation testing take a look at my previous post or the PIT webpage directly.

To start using PIT add the following configuration to the build.gradle file in your project:

buildscript {
    repositories {
        mavenLocal()
        mavenCentral()
        //Needed to use a plugin JAR uploaded to GitHub (not available in a Maven repository)
        add(new org.apache.ivy.plugins.resolver.URLResolver()) {
            name = 'GitHub'
            addArtifactPattern 'http://cloud.github.com/downloads/szpak/[module]/[module]-[revision].[ext]'
        }
    }
    dependencies {
        classpath 'info.solidsoft.gradle.pitest:gradle-pitest-plugin:0.28.0'
        classpath 'org.pitest:pitest:0.28'
    }
}

This will add the required dependencies to the build script together with the proper repository configuration.

The second thing is to configure the plugin itself.

pitest {
    targetClasses = ['our.base.package.*']
    threads = 4
}

The only required parameter is “targetClasses” – the package containing the code which should be mutated (usually the base package of your project), but in case your tests are written in a thread-safe manner I encourage you to give the “threads” parameter a try. It can decrease the time required for mutation analysis dramatically. gradle-pitest-plugin supports all reasonable parameters available in PIT.

Having everything configured, running mutation testing is as easy as:

gradle pitest

After a while you should see a PIT summary similar to:

================================================================================
- Statistics
================================================================================
Generated 59 mutations Killed 52 (88%)
Ran 161 tests (2.73 tests per mutation)
================================================================================

Detailed reports with information about survived mutations and the corresponding parts of code are written to the build/reports/pitest/ directory relative to your project root.


Sample PIT report

Btw, version 0.29 of PIT is just around the corner, providing such interesting features as incremental analysis (i.e. running mutation tests only for code that has been changed). I plan to write a post about it when released, so stay tuned.

Disclaimer. I am the author of gradle-pitest-plugin.