Publishing a newly created Git branch to a remote repository can be easier than you might expect.

Introduction

Creating a new branch and pushing (publishing) it to a remote repository is a very common situation in various Git workflow models. Most people create a lot of new branches: to open a pull (merge) request, to show code to remote workmates or just to back up local changes overnight.

git publish branch

Unfortunately, it is not as easy in Git as it could be:

~/C/my-fancy-project (master|✓) $ git checkout -b featureX
Switched to a new branch 'featureX'

~/C/my-fancy-project (featureX|✓) $ git push
fatal: The current branch featureX has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin featureX

Hmm, just copy-paste the given line and you are set:

~/C/my-fancy-project (featureX|✓) $ git push --set-upstream origin featureX
Total 0 (delta 0), reused 0 (delta 0)
To /tmp/my-fancy-project-remote/
 * [new branch]      featureX -> featureX
Branch 'featureX' set up to track remote branch 'featureX' from 'origin'.

Of course you may memorize it after some time (however, I observe that many people do not) or even use the shorter syntax:

~/C/my-fancy-project (featureX|✓) $ git push -u origin featureX
Total 0 (delta 0), reused 0 (delta 0)
To /tmp/my-fancy-project-remote/
 * [new branch]      featureX -> featureX
Branch 'featureX' set up to track remote branch 'featureX' from 'origin'.

Nonetheless, for me it was too many characters to type, especially when repeated multiple times, and especially in a typical workflow with one remote repository (usually named origin).


xkcd – Is It Worth the Time? – https://xkcd.com/1205/

Solution

The perfect solution for me would be just one command. Something like git publish.

~/C/my-fancy-project (master|✓) $ git checkout -b featureY
Switched to a new branch 'featureY'

~/C/my-fancy-project (featureY|✓) $ git publish
Total 0 (delta 0), reused 0 (delta 0)
To /tmp/my-fancy-project-remote/
 * [new branch]      featureY -> featureY
Branch 'featureY' set up to track remote branch 'featureY' from 'origin'.

Wouldn’t it be nice?

As you may know from my previous posts, I am a big enthusiast of comprehensive automation (such as CI/CD) or at least semi-automation (aka “making things easier”) when the former is not possible (or viable). Therefore, at the time, I started looking at possible improvements. Git is written by developers for developers and offers different ways of customization. The easiest is to write an alias. In this case it is as simple as adding the following to ~/.gitconfig:

[alias]
    # Gets the current branch name - useful in other commands
    # Git 2.22 (June 2019) introduced "git branch --show-current"
    branch-name = "!git rev-parse --abbrev-ref HEAD"

    # Pushes the current branch to the remote "origin" (or the remote passed as the parameter)
    # and set it to track the upstream branch
    publish = "!sh -c 'git push -u ${1:-origin} $(git branch-name)' -"

With the publish alias in place, in addition to the basic case (setting the upstream branch on origin (if needed) and pushing the current branch there):

$ git publish

it is also possible to publish to some other remote repository:

$ git publish myOtherRemote

Cleaning up

As a counterpart to git publish, it is easy to implement git unpublish:

[alias]
    # Deletes the remote version of the current branch from the remote "origin"
    # (or the remote passed as the parameter)
    unpublish = "!sh -c 'git push ${1:-origin} :$(git branch-name) && git branch --unset-upstream $(git branch-name)' -"

to remove the current branch from a remote repository (origin or the one passed as a parameter):

~/C/my-fancy-project (featureNoLongerNeeded|✓) $ git unpublish
To /tmp/my-fancy-project-remote/
 - [deleted]         featureNoLongerNeeded

instead of:

~/C/my-fancy-project (featureNoLongerNeeded|✓) $ git push origin --delete featureNoLongerNeeded
To /tmp/my-fancy-project-remote/
 - [deleted]         featureNoLongerNeeded

or

~/C/my-fancy-project (featureNoLongerNeeded|✓) $ git push origin :featureNoLongerNeeded
To /tmp/my-fancy-project-remote/
 - [deleted]         featureNoLongerNeeded

Again, shorter and easier to remember.

Partial built-in solution

As proposed by the indispensable Łukasz Szczęsny, hassle-free pushing alone (without pulling) can also be achieved with Git configuration itself. It may be sufficient if branches are removed automatically after a PR is merged (e.g. in properly configured GitLab or GitHub). In that case it is required to set the push.default configuration parameter to current:

~/C/my-fancy-project (master|✓) $ git checkout -b featureZ
Switched to a new branch 'featureZ'

~/C/my-fancy-project (featureZ|✓) $ git push -u
fatal: The current branch featureZ has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin featureZ

~/C/my-fancy-project (featureZ|✓) $ git config --global push.default current

~/C/my-fancy-project (featureZ|✓) $ git push -u
Total 0 (delta 0), reused 0 (delta 0)
To /tmp/my-fancy-project-remote/
 * [new branch]      featureZ -> featureZ
Branch 'featureZ' set up to track remote branch 'featureZ' from 'origin'.

~/C/my-fancy-project (featureZ|✓) $ git pull
Already up to date.

Please pay attention to the -u flag in git push -u. It is required to set up remote branch tracking. Without it, git pull alone would not work.
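For reference, the git config command above simply persists the following entry in ~/.gitconfig:

[push]
    default = current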

Summary

I have been using git publish (and git unpublish) for many years and I really like it. Taking the opportunity of writing this Git Tricks blog series I decided to share it with others (who fell in love with the command line :-) ). Remember, however, that it is now a part of GitKurka (or its uncensored upstream project) – a set of useful and productive tweaks and aliases for Git.

Btw, I do not conduct Git training anymore, but people wanting to develop their Git skills even more may consider an on-site course from Bottega (PL/EN), an online course by Maciej Aniserowicz (devstyle.pl) (PL) or a comprehensive Pro Git book (EN).

Update 20190910. Added partial built-in alternative solution suggested by Łukasz Szczęsny.
Update 20190913. Added missing “branch-name” alias. Pointed out by Paul in a comment.

The lead photo based on the Iva Balk‘s work published in Pixabay, Pixabay License.


Get to know how to solve the issue with pushing to submodules directly from the main repo while keeping the project easily cloneable by external contributors.

Introduction

The Git submodules mechanism is pretty handy for keeping the source code of loosely related dependent software together in one Git repository while leaving their development separate. It is something like symlinks in the Unix world, but with an ability to refer also to a previous version. It’s quite popular in projects using source code integration (instead of shared libraries) or to speed up development by making related changes in multiple repositories easier. It is not the only possible solution – Gradle, for instance, provides a composite build mechanism. Monorepo is another approach, but it has its own limitations and is very problematic to use in FOSS projects developed by different people/teams.

As usual, I encountered that situation in one of my projects. As a big enthusiast of automatic code testing and Continuous Delivery, some time ago I was working on improving the reliability of my (automatically released) gradle-nexus-staging-plugin. After each commit (so also before the release) I wanted to have the end-to-end tests executed to verify that the plugin is able to pass a simple (but real) project through the release process to Maven Central (aka The Central Repository).

I could move that test project to the plugin repository, but – well – it’s a distinct project which could also be released separately or replaced with some other project. In addition, to test two variants of releasing, it was handy to keep it in two branches, “mounted” twice in my root repository.

gradle-nexus-staging-plugin
├── src
│   ├── funcTest
│   │   ├── groovy
│   │   │   └── ...
│   │   └── resources
│   │       └── sampleProjects
│   │           ├── nexus-at-minimal (submodule - master branch)
│   │           │   └── ...
│   │           ├── nexus-at-minimal-publish (submodule - publish branch)
│   │           │   └── ...
│   │           └── ...
│   ├── ...

Problem (aka Challenge)

gradle-nexus-staging-plugin contains a submodule nexus-at-minimal from a separate repo. Working on a new fancy feature in GNSP, it is often needed to tweak the acceptance testing project. To make the development smoother it’s very useful to be able to commit the introduced changes in the dependent project directly from the working copy of the main project. It works out of the box. However, later on we should also directly push them back to the separate repo of that dependent project (by stepping into the submodule and calling git push or – all at once – by using the git push --recurse-submodules=on-demand parameter when pushing from the main repository). And then – in the long term – we encounter some inconvenience.
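To illustrate, a direct-push session for the layout above could look like this (a sketch; the commit messages are made up):

~/C/gradle-nexus-staging-plugin $ cd src/funcTest/resources/sampleProjects/nexus-at-minimal
~/C/.../nexus-at-minimal $ git commit -am "Adjust the acceptance test project"   # commit inside the submodule
~/C/.../nexus-at-minimal $ cd -
~/C/gradle-nexus-staging-plugin $ git commit -am "Add a fancy feature"           # records the new submodule revision
~/C/gradle-nexus-staging-plugin $ git push --recurse-submodules=on-demand       # pushes the main repo and the submodule at once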

Let’s start by defining a submodule path to connect via SSH:

[submodule "src/funcTest/resources/sampleProjects/nexus-at-minimal"]
        path = src/funcTest/resources/sampleProjects/nexus-at-minimal
        url = git@gitlab.com:nexus-at/nexus-at-minimal.git

In general it works fine and pushing is allowed. However, any non-developer trying to clone that repo (and initialize submodules) gets:

...
git@gitlab.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

It’s pretty bad in open source (FOSS) development, where the projects are publicly available and other people are encouraged to download (clone), build and contribute to them.

The obvious remedy to the previous error is switching to HTTPS:

[submodule "src/funcTest/resources/sampleProjects/nexus-at-minimal"]
        path = src/funcTest/resources/sampleProjects/nexus-at-minimal
        url = https://gitlab.com/nexus-at/nexus-at-minimal.git

However, to allow developers to commit changes directly to a submodule, it is then required to go through a separate HTTPS authentication, which in most cases is not used in favor of SSH (and would need to be configured independently with an access token).

Transparent solution

To keep external contributors happy the developers could manually change the url in .gitmodules from HTTPS to SSH for every cloned project. However, it’s quite tedious. A better solution is to use pushInsteadOf. For the aforementioned example, the developers only need to add the following to the global ~/.gitconfig configuration file:

[url "git@gitlab.com:nexus-at/"]
    pushInsteadOf = https://gitlab.com/nexus-at/

It effectively overrides the push URL to use SSH instead of HTTPS for the whole group in GitLab/GitHub, covering also submodules (where HTTPS – external contributors friendly scheme – is left as a default).
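To verify that the override is picked up, git remote -v executed in the submodule directory should show the rewritten push URL – something like the following (a sketch of the expected output):

$ git remote -v
origin  https://gitlab.com/nexus-at/nexus-at-minimal.git (fetch)
origin  git@gitlab.com:nexus-at/nexus-at-minimal.git (push)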

Simple solution for the same server

As I use GitLab for my professional activity, at that time I was evaluating its usage also for my FOSS projects. Therefore, two different servers (providers) were used for the main project and a submodule (here GitHub and GitLab). However, very often a submodule is placed on the same server, or even in the same group. In that case the configuration can be simplified even more with relative paths:

[submodule "src/funcTest/resources/sampleProjects/nexus-at-minimal"]
        path = src/funcTest/resources/sampleProjects/nexus-at-minimal
        url = ../nexus-at-minimal.git

The protocol used to clone the main repository (SSH or HTTPS) will be used also for a submodule.

Summary

URL rewriting and conditional configuration (covered in the first part) are just a subset of the options available in Git to make the development more flexible and simple. Simple, assuming we have already found the required features and learned how to use them ;-).

Update 20190405. Added simple solution with a relative path.

The lead photo based on the Mohamed Hassan‘s work published in Pixabay, Pixabay License.

Have you ever committed to Git using the wrong email address while working on/for different projects/companies? Luckily, with a little configuration, Git can auto-switch the identities for you.

(Too long) introduction and reasoning

Being an (experienced) IT professional can give you an opportunity to work on different things in the same time frame. For instance, in addition to working for the main client, I do some consultancy work on code quality & testing and Continuous Delivery for other companies. What’s more, I also conduct training sessions (with a lot of code exercises) and work on my own and others’ FOSS projects. Usually, it is convenient to do all of that on the same computer. It can happen to commit something with the wrong email address (to be detected later on by an external auditor ;-) ), and a push with force to a remote master after a rebase is not the best possible solution :-). I started with a Fish-based script to deal with it, but in the end I found a built-in mechanism in Git itself.

Disclaimer. This particular blog post doesn’t cover anything new or revolutionary. However, I’ve been living in unawareness long enough to give my blog readers a chance to get to know that mechanism right now.


Project situation

My ~/Code directory structure could be simplified to something like this:

Code/
├── gradle-nexus-staging-plugin
├── mockito-java8
├── spock-global-unroll
├── ...
├── external-foss
│   ├── ...
│   ├── awaitility
│   └── spock
├── training
│   ├── ...
│   ├── code-testing
│   ├── java11
│   └── jenkins-as-code
└── work
    ├── ...
    ├── codearte
    └── companyX

The goal is to use the appropriate email address in each company’s projects.

Solution

The first idea which may spring to your mind is to manually override git config user.email "..." in every cloned company repository. However, it’s tedious and error prone (especially in a microservice-based architecture :) ).

Luckily, one of the features introduced in Git 2.13.0 (a long time ago – May 2017) is conditional configuration application (aka conditional includes). Armed with it, our task is pretty simple.

First, keep your default name and email defined in the [user] section in ~/.gitconfig:

[user]
  name = Marcin Zajączkowski
  email = foss.hacker@mydomain.example.com

Next, create a company related config file in the home directory, e.g. ~/.gitconfig-companyX, with just the overridden email address:

[user]
  email = marcin.zajaczkowski@companyx.example.com

Glue it together by placing in ~/.gitconfig the following code:

[includeIf "gitdir:~/Code/work/companyX/"]
  path = .gitconfig-companyX

The .gitconfig-companyX file is applied only if the current path prefix matches the one defined in includeIf. Simple, yet powerful. That mechanism can also be used to configure different things – such as conditionally using GPG to sign your commits.
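For instance, conditionally enabled commit signing (assuming a GPG key is already configured) is just one more section in the included file – a sketch:

[user]
  email = marcin.zajaczkowski@companyx.example.com
[commit]
  # sign commits, but only in repositories under ~/Code/work/companyX/
  gpgsign = true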

Btw, if you are paranoid, you can even remove the email field from the root configuration to be notified if you forget to add an email for any new company you collaborate with.
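Git can also be told explicitly not to guess the identity. With user.useConfigOnly set, a commit fails fast until a proper email is configured for the given directory (a minimal sketch for the root ~/.gitconfig):

[user]
  name = Marcin Zajączkowski
  # no email here - Git will refuse to commit until one is set (e.g. via includeIf)
  useConfigOnly = true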

Summary

Thanks to the includeIf mechanism you will never (*) again commit to a repo with the wrong name or email. That and other Git related tricks and commands/aliases (such as those collected in git-kurka) make it easier to focus on delivering the stuff :).

Feel free to leave your favorite Git tips & tricks in the comments.

Update. From a comment made in private:
– Git works great in that case, but for more generic changes in the environment variables Wybczu suggested the very powerful direnv (with support for bash, zsh, tcsh, fish and elvish).

The lead photo based on the Elias Sch‘s work published in Pixabay, Pixabay License.

Make your (automatic) releasing to Maven Central from Travis (and not only) more reliable thanks to the explicit staging repository creation feature set implemented at the turn of 2018 and 2019.

Background

If you are only interested in getting information on how to make your artifact releasing from Travis more reliable, move forward to the next section.

Automatic artifact releasing (using a staging repository and its promotion) from Gradle to Maven Central has always been tricky. The Nexus REST API related to those operations is very poorly documented. In addition, Gradle doesn’t natively support uploading artifacts to a dedicated staging repository, even if it was already created explicitly. As a result, a heuristic to determine which repository contains the just uploaded artifacts has to be used, which brings some serious limitations. The apogee of the problems came when Travis changed its architecture to a more stateless one in the late autumn of 2018. It caused the upload requests for particular artifacts to be routed via machines with different IP addresses, which resulted in multiple staging repositories being created for a single gradle uploadArchives or gradle publish call. That made automatic artifact releasing with Gradle from Travis completely broken. Up until now.


Improvements

Two good things happened at the turn of the years. The first was the appearance of the new nexus-publish plugin by Marc Philipp. It creates an explicit staging repository using the Nexus API and enhances the Gradle publish task to use that repository. The second thing was an enhancement in my gradle-nexus-staging plugin, which started to allow setting the staging repository ID which should be used during the release operation. That led to improved reliability of releasing to Maven Central using Gradle.

Instead of relying on a heuristic to determine which repository should be used for the release, the new staging repository is explicitly created. The artifacts are uploaded directly to it, and it is closed and released. Thanks to that, everything works smoother and is more error-proof. In addition, there is no problem with parallel releasing of different projects belonging to the same staging profile, and it finally works properly with Travis again.

Configuration

This post assumes you have already configured uploading your artifacts to Maven Central (aka The Central Repository) using the maven-publish plugin. If not, you may consult this link. This configuration will make your deployment and releasing more reliable without a need to perform any manual operations in the Nexus UI.

plugins {
    ... //other plugins used in your project
    id 'io.codearte.nexus-staging' version '0.20.0'
    id 'de.marcphilipp.nexus-publish' version '0.2.0'
}

publishing {
    ... //your current publishing to Maven Central configuration
}

//optionally
nexusStaging {
    packageGroup = "your-package-group-if-different-than-groupId"
}

//optionally
nexusPublishing {
    //for custom configuration if needed - credentials
    //       are by default taken from nexus-staging
    //       or from properties nexusUsername and nexusPassword
}

Did you expect much more code (configuration) to write? Everything is hidden in the plugins, which leverage each other. Just please remember to use nexus-staging 0.20.0+ and nexus-publish 0.2.0+.

After that, artifact uploading with releasing is a matter of one command:

./gradlew publish closeAndReleaseRepository

Calling the publish task (or publishToNexus for the older versions) creates a staging repository and memorizes its ID. closeAndReleaseRepository closes and releases that one particular repository. After a few minutes your artifacts should be available in Maven Central.

Important. Bear in mind that publish (or publishToNexus) and closeAndReleaseRepository have to be used in one Gradle execution to be able to leverage the explicitly created staging repository.
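For completeness, a minimal sketch of wiring it into Travis could look like the following (assuming the Nexus credentials are provided as encrypted environment variables; the trigger condition is up to your workflow):

deploy:
  provider: script
  script: ./gradlew publish closeAndReleaseRepository
  skip_cleanup: true
  on:
    tags: true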

Update 20190506. With nexus-publish 0.2.0+ calling just publish instead of publishToNexus is enough to trigger the logic, so the sample command was simplified.

Summary

Gradle is a very nice build tool where (almost) the sky is the limit. Unfortunately, there are still some long lasting issues which require using some hacks or writing custom plugins to overcome them. The promising thing is that with every release they are slowly being fixed/implemented. To solve this particular problem, bottom-up work was required to bring releasing back for Travis and make it more reliable in general.

Please note. The presented approach works pretty well when using the (recently improved) publishing plugin. If you still use the old maven plugin (having the uploadArchives task instead of the publish one) you need to migrate and/or put your comment in the corresponding issue.

The lead photo based on the Siala‘s work published in Pixabay, Pixabay License.

Learn how to leverage Spock 1.2 to slice the Spring context of a legacy application when writing integration tests.

Have you ever wanted, when starting work on some legacy application, to write some tests to get to know what is going on and possibly be notified about regressions? That feeling when you want to instantiate a single class and it fails with a NullPointerException. 6 (with difficulty) replaced dependencies later, there are still some errors from classes that you haven’t heard about before. Sounds familiar?

There are different techniques to deal with hidden dependencies. There is a whole dedicated book about that (and probably a few others that I haven’t read). Occasionally, it may be feasible to start with the integration tests and run through some process. It may be even more “entertaining” to see what exotic components are required just to set up the context, even if they are completely not needed in our case. Thank you (too wide and carelessly used) @ComponentScan :).

Injecting stubs/mocks into the test context is a way to go as an emergency assistance (see the last paragraph, there are better, yet harder approaches). It can be achieved “manually” with an extra bean definition with the @Primary annotation (usually a reason to think twice before doing that) for every dependency at whose level we want to make a cut (or for every unneeded bean which is instantiated along the way). @MockBean placed on a field in a test is more handy, but still, it is needed to define a field in our tests and put the annotation on it (5? 10? 15 beans?). Spock 1.2 introduces a somewhat less known feature which may be useful here – @StubBeans.

Mocked dependencies in a Spring context

It can be used to simply provide a list of classes whose (possible) instances should be replaced with stubs in the Spring test context. Of course, this happens before the real objects are instantiated (to prevent, for example, an NPE in a constructor). Thanks to that, up to several lines of stubbing/mock injections:

@RunWith(SpringRunner.class) //Spring Boot + Mockito
@SpringBootTest //possibly some Spring configuration with @ComponentScan is imported in this legacy application
public class BasicPathReportGeneratorInLegacyApplicationITTest { //usual approach

    @MockBean
    private KafkaClient kafkaClientMock;

    @MockBean
    private FancySelfieEnhancer fancySelfieEnhancerMock;

    @MockBean
    private FastTwitterSubscriber fastTwitterSubscriberMock;

    @MockBean
    private WaterCoolerWaterLevelAterter waterCoolerWaterLevelAterterMock;

    @MockBean
    private NsaSilentNotifier nsaSilentNotifierMock;

    //a few more - remember, this is legacy application, genuine since 1999 ;)
    //...

    @Autowired
    private ReportGenerator reportGenerator;

    @Test
    public void shouldGenerateEmptyReportForEmptyInputData() {
        ...
    }
}

can be replaced with just one (long) line:

@SpringBootTest //possibly some Spring configuration with @ComponentScan is imported in this legacy application
@StubBeans([KafkaClient, FancySelfieEnhancer, FastTwitterSubscriber, WaterCoolerWaterLevelAterter, NsaSilentNotifier/*, ... */])
  //all classes of real beans which should be replaced with stubs
class BasicPathReportGeneratorInLegacyApplicationITSpec extends Specification {

    @Autowired
    private ReportGenerator reportGenerator

    def "should generate empty report for empty input data"() {
        ....
    }
}

(tested with Spock 1.2-RC2)

It’s worth mentioning that @StubBeans is intended just to provide placeholders. In situations where it is required to provide stubbing and/or an invocation verification, @SpringBean or @SpringSpy (also introduced in Spock 1.2) are better. I wrote more about them in my previous blog post.

There is one important aspect to emphasize. @StubBeans are handy in a situation when we have some “legacy” project and want to start writing integration regression tests quickly to see the results. However, as a colleague of mine, Darek Kaczyński, aptly summarized, blindly replacing beans which “explode” in tests is just “sweeping problems under the carpet”. After the initial phase, when we start to understand what is going on, it is a good moment to rethink the way the context – both in production and in tests – is created. The already mentioned too wide @ComponentScan is very often the root of all evil. An ability to set up a partial context and put it together (if needed) is a good place to start. Using @Profile or conditional beans are very powerful mechanisms in tests (and not only there). @TestConfiguration and proper bean selection to improve context caching are something worth keeping in mind. However, I started this article to present the new mechanism in Spock which might be useful in some cases, and I want to keep it short. There could be another, more generic blog post just about managing the Spring context in integration tests. I have to seriously think about it :).

Discover how to automatically inject Spock’s mocks and spies into the Spring context using Spock 1.2.

Coffee beans with 3 different beans

Stubs/mocks/spies in Spock (and their life cycle) have always been tightly coupled with the Spock Specification class. It was only possible to create them in a test class. Therefore, using shared, predefined mocks (in both unit and integration tests) was problematic.

The situation was slightly improved in Spock 1.1, but only with the brand new Spock 1.2 (1.2-RC1 at the time of writing) is using the Spock mocking subsystem in Spring-based integration tests as easy as using @MockBean for Mockito mocks in Spring Boot. Let’s check it out.

Btw, to be more cutting edge, in addition to Spock 1.2-RC1 I will be using Spring Boot 2.1.0.M2, Spring 5.1.0.RC2 and Groovy 2.5.2 (but everything should work with the stable versions of Spring (Boot) and Groovy 2.4).

One more thing. For the sake of simplicity, in this article I will be using the term ‘mock’ to refer also to stubs and spies. They differ in behavior; however, in the scope of injecting them into the Spring context in Spock tests it usually doesn’t matter.

Spock 1.1 – manual way

Thanks to the work of Leonard Brünings, mocks in Spock were decoupled from the Specification class. It was finally possible to create them outside of it and attach them later on to a running test. It was the cornerstone of using Spock mocks in the Spring (or any other) context.

In this sample code we have the ShipDatabase class which uses OwnShipIndex and EnemyShipIndex (of course injected by a constructor :) ) to return aggregated information about all known ships matched by name.

//@ContextConfiguration just for simplification, @(Test)Configuration is usually more convenient for Spring Boot tests
//Real beans can exist in the context or not
@ContextConfiguration(classes = [ShipDatabase, TestConfig/*, OwnShipIndex, EnemyShipIndex*/])
class ShipDatabase11ITSpec extends Specification {

    private static final String ENTERPRISE_D = "USS Enterprise (NCC-1701-D)"
    private static final String BORTAS_ENTERA = "IKS Bortas Entera"

    @Autowired
    private OwnShipIndex ownShipIndexMock

    @Autowired
    private EnemyShipIndex enemyShipIndexMock

    @Autowired
    private ShipDatabase shipDatabase

    def "should find ship in both indexes"() {
        given:
            ownShipIndexMock.findByName("Enter") >> [ENTERPRISE_D]
            enemyShipIndexMock.findByName("Enter") >> [BORTAS_ENTERA]
        when:
            List<String> foundShips = shipDatabase.findByName("Enter")
        then:
            foundShips == [ENTERPRISE_D, BORTAS_ENTERA]
    }

    static class TestConfig {
        private DetachedMockFactory detachedMockFactory = new DetachedMockFactory()

        @Bean
        @Primary    //if needed, beware of consequences
        OwnShipIndex ownShipIndexStub() {
            return detachedMockFactory.Stub(OwnShipIndex)
        }

        @Bean
        @Primary    //if needed, beware of consequences
        EnemyShipIndex enemyShipIndexStub() {
            return detachedMockFactory.Stub(EnemyShipIndex)
        }
    }
}

The mocks are created in a separate class (outside the Specification) and therefore DetachedMockFactory has to be used (or alternatively SpockMockFactoryBean). Those mocks have to be attached to (and detached from) the test instance (the Specification instance), but this is automatically handled by the spock-spring module (as of 1.1). For generic mocks created externally, MockUtil.attachMock() and mockUtil.detachMock() would also need to be used to make it work.

As a result it was possible to create and use mocks in the Spring context, but it was not very convenient and it was not commonly used.

Spock 1.2 – first class support

Spring Boot 1.4 brought a new quality to integration testing with (Mockito’s) mocks. It leveraged the idea, originally presented in Springockito back in 2012 (when the Spring configuration was mostly written in XML :) ), to automatically inject mocks (or spies) into the Spring (Boot) context. The Spring Boot team extended the idea and, thanks to having it as an internally supported feature, it (usually) works reliably just by adding an annotation or two in your test.

A similar annotation-based mechanism is built into Spock 1.2.

//@ContextConfiguration just for simplification, @(Test)Configuration is usually more convenient for Spring Boot tests
//Real beans can exist in the context or not
@ContextConfiguration(classes = [ShipDatabase/*, OwnShipIndex, EnemyShipIndex*/])
class ShipDatabaseITSpec extends Specification {

    private static final String ENTERPRISE_D = "USS Enterprise (NCC-1701-D)"
    private static final String BORTAS_ENTERA = "IKS Bortas Entera"

    @SpringBean
    private OwnShipIndex ownShipIndexMock = Stub()  //could be Mock() if needed

    @SpringBean
    private EnemyShipIndex enemyShipIndexMock = Stub()

    @Autowired
    private ShipDatabase shipDatabase

    def "should find ship in both indexes"() {
        given:
            ownShipIndexMock.findByName("Enter") >> [ENTERPRISE_D]
            enemyShipIndexMock.findByName("Enter") >> [BORTAS_ENTERA]
        when:
            List<String> foundShips = shipDatabase.findByName("Enter")
        then:
            foundShips == [ENTERPRISE_D, BORTAS_ENTERA]
    }
}

There is not much to be added. @SpringBean instructs Spock to inject a mock into the Spring context. Similarly, @SpringSpy wraps the real bean with a spy. In the case of @SpringBean it is required to initialize the field to let Spock know if we plan to use a stub or a mock.

In addition, there is also a more general annotation @StubBeans to replace all defined beans with stubs. However, I plan to cover it separately in another blog post.

Limitations

For those of you who look forward to rewriting all Mockito mocks to Spock mocks in your Spock tests right after reading this article, there is a word of warning. Spock’s mocks – due to their nature and relation to Specification – have some limitations. The implementation under the hood creates a proxy which is injected into the Spring context and which (potentially) replaces real beans (stubs/mocks) or wraps them (spies). That proxy is shared between all the tests in the particular test (specification) class. In fact, it can also span across other tests with the same bean/mock declarations in situations where Spring is able to cache the context (a similar situation to Mockito’s mocks or Spring integration tests in general).

However, what is really important, a proxy is attached to a test right before its execution and detached right after it. Therefore, in fact, every test has its own mock instance (it cannot be applied to @Shared fields) and it is problematic, for instance, to group interactions from different tests and verify them together (which usually is quite sensible, but might lead to some duplication). Nevertheless, using a setup block (or in-line stubbing) it is possible to share stubbing and interaction expectations.
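For instance, default stubbing shared by all feature methods in a specification could be placed in a setup block (a sketch based on the earlier example):

    def setup() {
        ownShipIndexMock.findByName(_) >> []    //default stubbing, can be overridden in a particular test
    }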

Summary

Spock 1.2 finally brings hassle-free support for Spock stubs/mocks/spies in the Spring context, comparable with the one provided by Spring Boot for Mockito. It is enough to add the spock-spring module to the project test dependencies. Despite some limitations, it is one argument fewer for mixing Spock’s native mocking subsystem with external mocking frameworks (such as Mockito) in your Spock (integration) tests. And what is nice, it should also work in plain Spring Framework tests (not only Spring Boot tests). The same feature has been implemented for Guice (but I haven’t tested it).

Furthermore, Spock 1.2 also brings some other changes, including better support for Java 9+, and it is worth giving it a try in your test suite (and of course reporting any potentially spotted regression bugs :) ).

One more piece of good news. In addition to the work of Leonard, who made Spock 1.2 possible, and a legion of bug reporters and PR contributors, since recently there are also some other committers working on making Spock even better. Some of them you may know from other popular FOSS projects. What is more, Spock 1.2 is (preliminarily) planned to be the last version based on JUnit 4, and the next stable Spock version could be 2.0, leveraging JUnit 5 and (among others) its native ability to run tests in parallel.

The examples were written using Spock 1.2-RC1. It will be updated to 1.2-final once released. The source code is available from GitHub.

Btw, have you wondered if it is still worth using Spock in the times of JUnit 5? I try to help answer that question in my presentation, which you will be able to see at JDD 2018, this October in Kraków, Poland. See you there.

JDD 2018 logo with date

The lead photo based on the Couleur‘s work published in Pixabay, CC0 1.0

Starting with version 2.17.0, Mockito provides official (built-in) support for managing the mock life cycle if JUnit 5 is used.


Getting started

To take advantage of the integration, Mockito’s mockito-junit-jupiter dependency is required to be added next to JUnit 5’s junit-jupiter-engine one (see below for details).

After that, the new Mockito extension for JUnit 5 – MockitoExtension – has to be enabled. And that’s enough. All the Mockito annotations should automatically start to work.

import org.junit.jupiter.api.Test;  //do not confuse with 'org.junit.Test'!
//other imports
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class SpaceShipJUnit5Test {

    @InjectMocks
    private SpaceShip spaceShip;

    @Mock
    private TacticalStation tacticalStation;

    @Mock
    private OperationsStation operationsStation;

    @Test
    void shouldInjectMocks() {
        assertThat(spaceShip).isNotNull();
        assertThat(tacticalStation).isNotNull();
        assertThat(operationsStation).isNotNull();
        assertThat(spaceShip.getTacticalStation()).isSameAs(tacticalStation);
        assertThat(spaceShip.getOperationsStation()).isSameAs(operationsStation);
    }
}

It’s nice that both a test class and test methods don’t need to be public anymore.

Please note. Having also JUnit 4 on the classpath (e.g. via junit-vintage-engine) for the “legacy” part of the tests, it is important not to confuse org.junit.jupiter.api.Test with the old org.junit.Test. It will not work.

Stubbing and verification

If for some reason you are not a fan of AssertJ (although I encourage you to at least give it a try), JUnit 5 provides a native assertion assertThrows (which is very similar to assertThatThrownBy() from AssertJ). It provides a meaningful error message in case of an assertion failure.

@Test
void shouldMockSomething() {
    //given
    willThrow(SelfCheckException.class).given(tacticalStation).doSelfCheck();   //for a void method "given..will" has to be used, "when..then" cannot
    //when
    Executable e = () -> spaceShip.doSelfCheck();
    //then
    assertThrows(SelfCheckException.class, e);
}

I wouldn’t be myself if I didn’t mention here that, leveraging the support for default methods in interfaces available in AssertJ and mockito-java8, a lot of static imports can be made redundant.

@ExtendWith(MockitoExtension.class)
class SpaceShipJUnit5Test implements WithAssertions, WithBDDMockito {
    ...
}

Tweaking default behavior

It is also worth pointing out that using the JUnit 5 extension Mockito by default works in the “strict mode”. It means that – for example – unneeded stubbing will fail the test. While very often it is a code smell, there are some cases where that test construction is desired. To change the default behavior, a @MockitoSettings annotation can be used.

@ExtendWith(MockitoExtension.class)
@MockitoSettings(strictness = Strictness.WARN)
class SpaceShipJUnitAdvTest implements WithAssertions, WithBDDMockito {
    ....
}

Dependencies

As I already mentioned, to start using it, it is required to add Mockito’s mockito-junit-jupiter dependency next to JUnit 5’s junit-jupiter-engine one. In a Gradle build it could look like:

dependencies {
    testCompile 'org.junit.jupiter:junit-jupiter-engine:5.1.0'
    testCompile 'org.mockito:mockito-junit-jupiter:2.17.2'  //mockito-core is implicitly added

    testCompile 'org.junit.vintage:junit-vintage-engine:5.1.0'  //for JUnit 4.12 test execution, if needed
    testCompile 'org.assertj:assertj-core:3.9.1'    //if you like it (you should ;) )
}

Please note. Due to a bug with injecting mocks via constructor into final fields that I found while writing this blog post, it is recommended to use at least version 2.17.2 instead of 2.17.0. That “development” version is not available in Maven Central and the extra Bintray repository has to be added.

repositories {
    mavenCentral()
    maven { url "https://dl.bintray.com/mockito/maven" }    //for development versions of Mockito
}

In addition, it would be a waste not to use the brand new native support for JUnit 5 test execution in Gradle 4.6+.

test {
    useJUnitPlatform()
}

IntelliJ IDEA has provided JUnit 5 support since 2016.2 (JUnit 5 Milestone 2 at that time). Eclipse Oxygen also seems to have added support for JUnit 5 recently.


Summary

It is really nice to have native support for JUnit 5 in Mockito. Without getting ahead of things, there is ongoing work to make it even better.
The feature has been implemented by Christian Schwarz and polished by Tim van der Lippe, with great assistance from a few other people.

The source code is available from GitHub.

Btw, are you wondering how JUnit 5 compares with Spock? I will be talking about that at GeeCON 2018.

Geecon logo

Get to know how fast (or slow) mutation testing in fact is (based on tangible results from real FOSS projects) and how it can be optimized (with a real implementation in PIT, a tool for Java projects).

One of the questions I got after my first presentation about mutation testing with PIT at DevConf.cz in Brno years ago was “how long would it take in my project?”. I wasn’t able to answer precisely. The execution time depends on multiple factors: the size of the project, the number of tests, the kind of tests (unit/integration), just to mention a few. There are also some optimization techniques which can speed up the analysis. To give you a rough idea, I collected the results from a mutation testing session in two popular FOSS projects performed with PIT.

I assume that you already know something about mutation testing. If not, please read that article before going further.

The fast and the furious 1910

Warm-up

As my guinea pigs I chose two popular open source projects: AssertJ – which provides “fluent assertions for Java” – and Joda-Time, providing “a quality replacement for the Java date and time classes”, which was a must for Java projects before Java 8, where a similar solution became available out-of-the-box.

AssertJ can be named a medium size project according to FOSS standards. It has ~85K lines of production code and ~150K lines of tests (as measured by the Idea Statistic plugin – including empty lines and comments). The code compilation takes ~2 seconds, test execution ~12 seconds (almost 10,000 (mostly) unit tests). Joda-Time is somewhat smaller, with ~70K LOC in production and ~72K LOC in tests. Its test execution takes ~3 seconds (~4200 tests).

I chose AssertJ and Joda-Time as they have a very solid test harness which consists (mostly) of unit tests – the best possible kind of tests for most of the cases. While integration tests can also be used with mutation testing (with some limitations and side effects) unit tests are the perfect candidate.

Brute force

Based on the further PIT analysis I estimated the number of mutants possible to generate at 8000 for AssertJ and 10000 for Joda-Time. Of course, it depends on the number of mutations available in your tool belt; however, to compare a brute force method with PIT the same number of mutants is adequate.

You may ask why a greater number of mutants is generated for Joda-Time, which has the smaller codebase. It would be a fair question. PIT (or any other mutation testing tool) generates mutants in the lines where something can be “broken”. As Joda-Time contains more “real logic” inside, it was possible to generate more mutants than in the AssertJ code, where there is a lot of delegation. In other words, it’s usually possible to generate more mutants in a line with an if statement than in a line which just calls some void method in another class. Therefore, the number of mutations does not depend only on the number of lines.

Let’s get back to our calculations. AssertJ, brute force algorithm. For every mutation the code has to be compiled (8000 * 2 seconds) and all tests have to be executed (8000 * 12 seconds). It gives 112000 seconds in total, which is ~1867 minutes, over 31 hours! Even for the several times smaller Joda-Time it is still: 10000 * 1 second + 10000 * 3 seconds = 40000 seconds -> ~667 minutes -> over 11 hours!

Wow, over 11 and 31 hours – for not so large projects. It is one of the reasons why mutation testing – first proposed by Richard Lipton in 1971 (over 40 years ago!) – had remained an academic-only technique. Let’s take a look at how it could be optimized to fit the reality of enterprise projects.

Optimizations

Test selection

Running all tests for every mutation is greatly ineffective. One of the naive approaches is the usage of a name convention – tests are usually named SomeProductionClass*Test. However, it tends to underestimate test suite effectiveness, as not all tests are written that way (especially integration/acceptance ones) and there could also be some typos. Another idea is static call analysis. It’s more reliable; however, it completely skips reflection calls and, in addition, can have a problem with polymorphism.

PIT, in turn, leverages some other techniques. First of all, tests are executed in a “normal” mode and a standard code coverage metric is gathered. Thanks to that procedure, mutants with no “standard” coverage can be skipped. There is no chance to have them killed – no test even executes that given line. That technique can be especially beneficial if PIT is used in a project having a small number of automated tests (low standard code coverage). What is more, PIT knows exactly which tests execute a particular line. Therefore, no test will ever be run against a mutant that it does not execute. This is essential to limit the number of tests executed per mutant. As a bonus, PIT (based on the first execution) is aware of test execution times. Tests covering a given mutant can be reordered to have the fast ones executed first. Once a mutant has been killed, the subsequent (slower) tests can be skipped.
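That selection-and-reordering logic could be conceptually sketched like this (simplified, hypothetical Java – not PIT’s actual implementation):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;

class MutantTestSelectionSketch {

    record TestCase(String name, long executionTimeMillis) {}
    enum MutantResult { KILLED, SURVIVED }

    //run only the tests covering the mutated line, fastest first, and stop at the first kill
    static MutantResult analyzeMutant(List<TestCase> testsCoveringMutatedLine,
                                      Predicate<TestCase> killsMutant) {
        List<TestCase> ordered = new ArrayList<>(testsCoveringMutatedLine);
        ordered.sort(Comparator.comparingLong(TestCase::executionTimeMillis));
        for (TestCase test : ordered) {
            if (killsMutant.test(test)) {
                return MutantResult.KILLED;    //subsequent (slower) tests are skipped
            }
        }
        return MutantResult.SURVIVED;
    }

    public static void main(String[] args) {
        List<TestCase> covering = List.of(
                new TestCase("fastUnitTest", 5),
                new TestCase("slowIntegrationTest", 2500));
        //the fast test kills the mutant - the slow one is never executed
        System.out.println(analyzeMutant(covering, test -> test.name().equals("fastUnitTest")));
    }
}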

Those optimizations (among other technical solutions – such as creating mutations at the bytecode level instead of the source code one – which is much faster, albeit harder in implementation and also has some limitations) contributed to much faster mutation testing analysis in comparison to the theoretical brute force mode.

Let’s compare the theoretical brute force analysis time with the one achieved with vanilla PIT configuration.

Execution time in minutes:

                Brute force   PIT (1 thread)
AssertJ             1866.67            14.15
Joda-Time            666.67            11.65

Brute force mutation analysis vs PIT

Looking at the chart, please notice that the Y-axis has a logarithmic scale (!). It’s ~14 minutes for AssertJ (as opposed to ~1867 minutes – over 31 hours) and ~11 minutes for Joda-Time (compared to ~667 minutes – over 11 hours). That is ~130 and ~60 times faster, respectively. Looks good, but it can be done even better.

Parallel execution

One of the consequences of breaking Moore’s law years ago was an increase in the number of cores in modern CPUs. Nowadays, 4 or 8 virtual cores is the standard for a developer workstation. Servers can have many more of them. PIT, advertised as “a state of the art in mutation testing”, in addition to the aforementioned optimizations supports parallel execution. Let’s take a look at how much can be gained in our test case.
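Enabling it is a single parameter – e.g. with the gradle-pitest-plugin it could look like this (a minimal sketch):

pitest {
    threads = 8
}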

Test environment

All tests were performed using a dedicated m4.4xlarge AWS instance (32 virtual cores – 2x Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz with 8 physical cores each – and 64GB RAM). Most of the executions were repeated to verify the achieved results; however, the methodology was far from the one required in scientific research (which was definitely not the aim of my work).

Raw results

AssertJ
Number of threads            1      2      4      6      8     10     12     16     20     24     32
Execution time in minutes  14.15   7.48   4.27   3.27   2.92   2.77   2.85   2.88   3.02   3.12   3.85
Number of timeout errors     17     17     17     17     17     17     17     18     31     31    260

AssertJ - PIT analysis time

Joda-time
Number of threads            1      2      4      8     12     16     20     24     32
Execution time in minutes  11.65   6.27   3.83   2.62   2.35   2.42   2.47   2.55   2.60
Number of timeout errors     49     49     50     49     49     49     49     48     51

Joda-time - PIT analysis time

Commented results

The aforementioned results clearly show that, having a quite powerful machine, it was possible to reduce the mutation testing analysis execution time over 5 times (in comparison to the sequential analysis) thanks to doing multiple things at the same time. In the test subjects the saturation point was located around 10-12 threads (out of 32 virtual cores). At that state the machine seemed to be (almost) fully loaded (see the screenshot below) most of the time (excluding the initial and the last stages of the analysis). After a certain point the number of timeout errors increased significantly (in the case of AssertJ), which could confirm that the machine was overloaded. Unfortunately, I don’t have a good explanation why the system was saturated much below 32 virtual cores (16 physical cores), as PIT should not use more threads than defined and there was no extra parallelism in the tests themselves. As reasonably suggested by Henry Coles, the author of PIT, next time I will attach a profiler to the analysis to try to find potential places to speed things up even further.


CPU utilisation – PIT, 12 threads

Further optimizations

When mutation testing is performed regularly on a large codebase, the amount of code changed between runs is usually marginal. To avoid re-executing the analysis on all the code, PIT incorporates an incremental mode. When enabled, PIT determines which mutants could have got a chance to be killed by the recent changes in the production code and tests. As a result, the analysis scope can be limited to those classes. This approach is perfect for running a local analysis by a developer on his/her workstation, or in a change/pull request to determine the “quality” of the recently introduced changes. For a large codebase the savings on execution time can be tremendous. It is only worth remembering that the heuristic selecting which part of the analysis should be executed has some limitations, and from time to time it is good to run the full analysis anyway.
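With the gradle-pitest-plugin the incremental mode can be enabled with one property (a sketch; the mutation history files get default locations):

pitest {
    enableDefaultIncrementalAnalysis = true
}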

Summary

With a few smart optimizations (here implemented in PIT) it is possible to reduce the (theoretical) brute force mutation testing execution time by even ~130 times (from 31 hours to less than 15 minutes in the case of AssertJ). Having a hi-end quad core laptop, it is possible to cut another chunk thanks to parallel execution. However, only the power of a very strong server machine (here a server with 32 virtual cores, which is not uncommon among CI server executors used in large projects) allows to unleash the power of PIT to speed up the mutation testing analysis. The full analysis time was possible to reduce by 3 orders of magnitude (!) (from 31 hours to less than 3 minutes in the case of AssertJ). This, together with incremental analysis, can make mutation testing feasible to use on a daily basis in real life, large enterprise projects (especially with unit tests posing the majority of the tests used in a project – but that is a topic for another blog post :) ).

Btw, what is your experience in using mutation testing in non-academic projects? Leave your comment below.

Self promotion. Would you like to improve your and your team testing skills and knowledge of Spock/JUnit/Mockito/AssertJ/PIT quickly and efficiently? I conduct a condensed (unit) testing training which you may find useful.

Tune up your JUnit test class template for Idea with the BDD-like syntax, Java 8 and the Mockito-AssertJ duo.

Topics covered in this article may seem trivial. However, from my trainer experience I know that (unfortunately) they are not a common practice. Therefore, I decided to write this short blog post to propagate them and to be able to refer to it in the future.


My favorite testing framework for Java (and Groovy) is Spock. However, its mocks are not suitable for some purposes and I still use Mockito in various places. In addition, I still conduct a lot of my testing training in a JUnit/Mockito/AssertJ variant for teams which already have a test suite in that stack and would like to improve their skills without changing the known technology. Therefore, as an interlude, in this blog post I write about testing in the pure Java style and propose how to tune up your JUnit testing framework, assuming that you are already using Mockito and AssertJ (you should give them a try otherwise).

This blog post consists of three parts. Firstly, I propose a BDD-style section-based test structure to keep your tests more consistent and more readable. Next, I explain how to simplify – using Java 8 – constructions with AssertJ and Mockito. Last, but not least, I show how to configure it in IntelliJ IDEA as a default JUnit test (class) template (which isn’t as trivial as it should be).

Part 1. BDD-style sections

Well written unit tests should meet several requirements (but that is a topic for a separate post). One of the useful practices is a clear separation into 3 code blocks with precisely defined responsibilities. You can read more on that topic in my previous blog post.

As a recap, just the core rules presented in a short form:

  • given – an object under test initialization + stubs/mocks creation, stubbing and injection
  • when – an operation to test in a given test
  • then – received result assertion + mocks verification (if needed)
@Test
public void shouldXXX() {
  //given
  ...
  //when
  ...
  //then
  ...
}

That separation helps to keep tests short and focused on just one responsibility to test (in the end it’s just a unit test).

In Spock those sections are mandatory (*) – without them a test will not even compile. In JUnit they are just comments. However, having them in place encourages people to use them instead of having one big block of mess inside (especially useful for newbies in the testing area).

Btw, the mentioned given-when-then convention is based on (is a subset of) the much wider Behavior-Driven Development concept. You may encounter a similar division into 3 code blocks named arrange-act-assert, which in general is an equivalent.

Part 2. Java 8 for AssertJ and Mockito

One of the features of Java 8 is the ability to put default methods in an interface. This can be used to simplify calling static methods, which is prevalent in testing frameworks such as AssertJ and Mockito. The idea is simple. A test class willing to use a given framework can implement a dedicated interface to “see” those methods as its own methods in IDE code completion (instead of static methods from an external class, which require giving a class name before or a static import). Under the hood those default methods just delegate the execution to the static methods. You can read more about it in my other blog post.
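The pattern itself is straightforward – a sketch with hypothetical names (not the real AssertJ/Mockito interfaces):

//a framework class with a static entry point (hypothetical)
class MyAssertions {
    static void assertPositive(int value) {
        if (value <= 0) throw new AssertionError(value + " is not positive");
    }
}

//an interface with a default method delegating to the static one
interface WithMyAssertions {
    default void assertPositive(int value) {
        MyAssertions.assertPositive(value);
    }
}

//a test class "inherits" the method and can call it without a static import
class SampleCalculationTest implements WithMyAssertions {
    void shouldReturnPositiveValue() {
        assertPositive(6 * 7);
    }
}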

AssertJ natively supports those constructions starting with version 3.0.0. Mockito 1.10 and 2.x are Java 6 compatible and therefore it is required to use a 3rd-party project – mockito-java8 (which should be integrated into Mockito 3 – once available).

To benefit from easier method completion in Idea it is enough to implement two interfaces:

import info.solidsoft.mockito.java8.api.WithBDDMockito;
import org.assertj.core.api.WithAssertions;

class SampleTest implements WithAssertions, WithBDDMockito {

}

Part 3. Default template in Idea

I’m a big enthusiast of omnipresent automation. Wouldn’t it be good to have both the given-when-then sections and the extra interfaces automatically in place in your test classes? Let’s eliminate those boring things from our life.

Test method

Changing the JUnit test method template is easy. One of the possible ways is “CTRL-SHIFT-A -> File Template -> Code” and a modification of JUnit4 Test Method to:

@org.junit.Test
public void should${NAME}() {
  //given
  ${BODY}
  //when
  //then
}

To add a new test in an existing test class just press ALT-INSERT and select (or type) JUnit4 Test Method.

Test class

With the whole test class the situation is a little bit more complicated. Idea provides a way to edit existing templates; however, they are used only if a test is generated with CTRL-SHIFT-T from a production class. That is not very handy with TDD, where a test should be created first. It would be good to have a new position “New JUnit test class” next to “Java class”, displayed when ALT-INSERT is pressed in a package view in a test context. Unfortunately, to do that a new plugin would need to be written (a sample implementation for Spock). As a workaround we can define a regular file template which (as a limitation) will be accessible everywhere (e.g. even in a resource directory).

Do “CTRL-SHIFT-A -> File Template -> Files”, press INSERT, name the template “JUnit with AssertJ and Mockito Test”, set the extension to “java” and paste the following template:

package ${PACKAGE_NAME};

import info.solidsoft.mockito.java8.api.WithBDDMockito;
import org.assertj.core.api.WithAssertions;

#parse("File Header.java") 
public class ${NAME} implements WithAssertions, WithBDDMockito {

}

Showcase

We are already set. Let’s check how it can look in practice (click to enlarge the animation).


Summary

I hope I convinced you to tune your test template to improve the readability of your tests and to save several keystrokes per test. In that case, please spend 4 minutes right now to configure it in your Idea. Depending on the number of tests written, it may start to pay off sooner than you expect :).

Btw, at the beginning of October I will be giving a presentation about new features in Mockito 2 at JDD in Kraków.

JDD logo

Self promotion. Would you like to improve your and your team testing skills and knowledge of Spock/JUnit/Mockito/AssertJ quickly and efficiently? I conduct a condensed (unit) testing training which you may find useful.

Create better defined, shorter and maintainable unit tests with given-when-then sections.

Recently, I’ve been writing rather about more advanced concepts related to automatic testing (mostly related to Spock). However, conducting my testing training I clearly see that very often knowledge of particular tools is not the main problem. Even with Spock it is possible to write bloated and hard-to-maintain tests, breaking (or not being aware of) good practices related to writing unit tests. Therefore, I decided to write about more fundamental things to promote them and, by the way, to have ready-to-use material to reference when coaching less experienced colleagues.

Introduction

Well written unit tests should meet several requirements, and that is a topic for a whole series. In this blog post I would like to present a quite mature concept of dividing a unit test into 3 separate blocks with a strictly defined function (which in turn is a subset of Behavior-Driven Development).

given-when-then to the victory

Unit tests are usually focused on testing some specific behavior of a given unit (usually one given class). As opposed to acceptance tests performed via UI, it is cheap (fast) to set up a class under test from scratch in every test, with stubs/mocks as its collaborators. Therefore, performance should not be a problem.

Sample test

To demonstrate the rules I will use a small example. ShipDatabase is a class providing an ability to search for space ships based on particular criteria (by a part of a name, a production year, etc.). That database is powered (energized) by different indexes of ships (ships in service, withdrawn from service, in production, etc.). This particular test checks the ability to search for a ship by a part of its name.

private static final String ENTERPRISE_D = "USS Enterprise (NCC-1701-D)";

@Test
public void shouldFindOwnShipByName() {
    //given
    ShipDatabase shipDatabase = new ShipDatabase(ownShipIndex, enemyShipIndex);
    given(ownShipIndex.findByName("Enterprise")).willReturn(singletonList(ENTERPRISE_D));
    //when
    List foundShips = shipDatabase.findByName("Enterprise");
    //then
    assertThat(foundShips).contains(ENTERPRISE_D);
}

given-when-then

The good habit, which exists in both Test-Driven and Behavior-Driven Development methodologies, is the ‘a priori’ knowledge of what will be tested (asserted) in a particular test case. It could be done in a more formal way (e.g. scenarios written in Cucumber/Gherkin for acceptance tests) or in a free form (e.g. ad hoc noted points or just an idea of what should be tested next). With that knowledge it should be quite easy to determine the three crucial things (being separated sections) of which the whole test will consist.

given – preparation

In the first section – called given – of a unit test it is required to create a real instance of the object on which the tested operation will be performed. In focused unit tests there is only one class in which the logic to be tested is placed. In addition, other objects required to perform a test (named collaborators) should be initialized as stubs/mocks and properly stubbed (if needed). All collaborators also have to be injected into the object under test, which usually is combined with that object’s creation (as constructor injection should be the preferred technique of dependency injection).

//given
ShipDatabase shipDatabase = new ShipDatabase(ownShipIndex, enemyShipIndex);
given(ownShipIndex.findByName("Enterprise")).willReturn(singletonList(ENTERPRISE_D));

when – execution

In the when section an operation to be tested is performed. In our case it is a search request followed by result memorization in a variable for further assertion.

//when
List foundShips = shipDatabase.findByName("Enterprise");

In most cases it is good to have just one operation in that section. More elements may suggest an attempt to test more than one operation which (possibly) could be divided into more tests.

then – assertion

The responsibility of the final section – then – is mostly an assertion of the previously received result. It should be equal to the expected value.

//then
assertThat(foundShips).contains(ENTERPRISE_D);

In addition, it may be necessary to perform a verification of method executions on the declared mocks. It should not be a common practice, as an assertion on the received value in most cases is enough to confirm that the code being tested works as expected (according to set boundaries). Nevertheless, especially when testing void methods, it is required to verify that a particular method was executed with the anticipated arguments.
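For example, with the BDDMockito syntax used earlier in this post, such a verification could look like this (a sketch with a hypothetical notifier collaborator):

//then
then(shipFoundNotifierMock).should().notifyAboutMatchedShip(ENTERPRISE_D);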

AAA aka 3A – an alternative syntax

As I’ve already mentioned, BDD is a much wider concept which is especially handy for writing functional/acceptance tests with requirements defined upfront, (often) in a non technical form. An alternative test division syntax (with very similar meaning for the sections) is arrange-act-assert often abbreviated to AAA or 3A. If you don’t use BDD at all and three A letters are easier to remember for you than GWT, it’s perfectly fine to use it to create the same high quality unit tests.

Tuning & optimization

The process of matching the used tools and methodologies to the ongoing process of skill acquisition (aka the Dreyfus model) has been nicely described in the book Pragmatic Thinking and Learning: Refactor Your Wetware. Of course, in many cases it may be handy to use a simplified variant of a test, with the given section moved to a setup/init/before section or initialized inline. The same may apply to the when and then sections, which could be merged together (into an expect section, especially in parameterized tests). Having some experience and fluency in writing unit tests, it is perfectly valid to use shorthands and optimizations (especially when testing some non-trivial cases) – as long as the whole team understands the convention and is able to remember the basic assumptions regarding writing good unit tests.
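For instance, in Spock the merged expect variant of a parameterized test could look like this (a minimal sketch):

def "should add two numbers"() {
    expect:
        a + b == c
    where:
        a | b || c
        1 | 2 || 3
        5 | 7 || 12
}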

Summary

Based on my experience in software development and as a trainer, I clearly see that dividing (unit) tests into sections makes them shorter and more readable, especially when having less experienced people in the team. It’s simpler to fill 3 sections with a concisely defined responsibility than to figure out and write everything in the test at once. In closing, particularly for people reading only the first and the last sections of an article, here are the condensed rules to follow:

  • given – an object under test initialization + stubs/mocks creation, stubbing and injection
  • when – an operation to test in a given test
  • then – received result assertion + mocks verification (if needed)

P.S. It is good to have a test template set in your IDE to save a number of keystrokes required to write every test.
P.P.S. If you found this article useful, you can let me know to motivate me to write more about unit test basics in the future.

Picture credits: Tomas Sobek, Openclipart, https://openclipart.org/detail/242959/old-scroll

Self promotion. Would you like to improve your and your team testing skills and knowledge of Spock/JUnit/Mockito/AssertJ quickly and efficiently? I conduct a condensed (unit) testing training which you may find useful.