Learn about my migration from WordPress to Hugo and the reasons behind it.

IMPORTANT. This blog has been archived. You may read an updated version of this post here.
Visit https://blog.solidsoft.pl/ to follow my new articles.

I started my technical blog at the end of 2010. The primary reason was a need for a place to write about my tools, my ideas and spotted problems (of course with solutions, to make life easier for others :-) ). I didn’t plan to write a lot and I accomplished that. Over those 9+ years I published 61 posts – just ~6.5 per year – gaining popularity year by year.

The trends were changing over time. The number one for a long time was a guide, posted in 2011, about improving the TestNG reporting with ReportNG. Next, it was Gradle and displaying dependencies in a multi-project build, followed by more compact Mockito with Java 8 and using Mockito with JUnit 5. I am also happy that people are looking for basic, but crucial topics such as the importance of given-when-then in unit tests.

Coming back to the editorial things: initially, I chose WordPress as it was a popular FOSS solution with an ability to (optionally) make it self-hosted (as opposed to Blogger). It was possible to easily set everything up and to have a fully-fledged blog quickly. Thank you, Automattic, for that!

My blog has always been a side project to express myself and help others. As such, I preferred not to pay a monthly subscription fee. In the free version, wordpress.com has a few limitations. First of all, you cannot use your own domain. As a result, even with redirects from your own domain, most external sites link to wordpress.com and you do not fully control the content. Secondly, you are limited to the predefined themes, and the level of their customization (e.g. CSS styles) is quite restricted. Thirdly, ads. Despite a clear “Occasionally, some of your visitors may see an advertisement here” banner displayed to me, for a long time I wasn’t aware how it looked in practice (my hardened browser configuration limits it greatly). My visitors, seeing those (different, possibly targeted) ads on my blog, could consider me their source, which I didn’t like.

As hosting WordPress on my VPS could be challenging (I know neither PHP nor WP itself) and risky (it needs to be updated regularly due to detected security threats), I decided to go with a static site generator (namely Hugo). With the initial configuration behind me (which can take some time…, even with such a nice and feature-rich theme as Zzo, which in addition has a very friendly maintainer), it promises to be easy to maintain. In addition, it can be hosted much more easily. Here, I chose Netlify which, in addition to a lot of interesting features, also provides great integration with the Git-based deployment workflow (blog as code). All at the cost of much harder integration with dynamic features (such as comments). There is a chance that I will write something about it in the future.

I have some ideas for (hopefully) interesting topics to cover. See you at my new blog!

P.S. As I don’t have access to the list of subscribers, please subscribe to the new feed or the new email notifications on the new blog to find out what is going on here :-).

Learn what you can expect from Spock 2.0 M2 (based on JUnit 5), how to migrate to it in Gradle and Maven, and why it is important to report spotted problems :).

IMPORTANT. This blog site has been archived. You may read an updated version of this post here.
Visit https://blog.solidsoft.pl/ to follow my new articles.

Important note. I definitely do not encourage you to migrate your real-life projects to Spock 2.0 M1 for good! This is the first (pre-)release of 2.x with a non-finalized API, intended to gather user feedback related to the internal Spock migration to the JUnit Platform.

This blog post arose to encourage you to make a test migration of your projects to Spock 2.0, see what started to fail, fix it (if caused by your tests) or report it (if it is a regression in Spock itself). As a result – on the Spock side – it will be possible to improve the code base before Milestone 2. The benefit for you – in addition to contributing to a FOSS project :-) – will be awareness of the required changes (kept in a side branch) and readiness to migrate once Spock 2.0 is more mature.

I plan to update this blog post when the next Spock 2 versions are available.

Updated 2020-02-10 to cover Spock 2.0 M2 with a dedicated Groovy 3 support.

Spock 2 + JUnit 5

Powered by JUnit Platform

The main change in Spock 2.0 M1 is the migration to JUnit 5 (precisely speaking, tests are now executed with JUnit Platform 1.5 – part of JUnit 5 – instead of the JUnit 4 runner API). This is very convenient, as Spock tests should be automatically recognized and executed everywhere the JUnit Platform is supported (IDEs, build tools, quality tools, etc.). In addition, the features provided by the platform itself (such as parallel test execution) should (eventually) become available also for Spock.

Gradle

To bring Spock 2 to a Gradle project you need to bump the Spock version:

testImplementation('org.spockframework:spock-core:2.0-M2-groovy-2.5')

and activate test execution via the JUnit Platform:

test {
    useJUnitPlatform()
}

Update 20200218. That is enough, but as Tomek Przybysz reminded in his comment, Gradle by default doesn’t fail if no tests are found. It may lead to a situation where, after making that switch, a build finishes successfully, giving a false sense of security, while no tests are executed at all.
It is a known issue in Gradle, not limited to Spock. As a workaround, the aforementioned configuration might be extended to:

test {
    useJUnitPlatform()

    afterSuite { desc, result ->
        if (!desc.parent) {
            if (result.testCount == 0) {
                throw new IllegalStateException("No tests were found. Failing the build")
            }
        }
    }
}

Maven

With Maven, on the other hand, it is still required to switch to a newer Spock version:

<dependency>
  <groupId>org.spockframework</groupId>
  <artifactId>spock-core</artifactId>
  <version>2.0-M2-groovy-2.5</version>
  <scope>test</scope>
</dependency>

but that is all. The Surefire plugin (if you use version 3.0.0+) executes JUnit Platform tests by default if junit-platform-engine (a transitive dependency of Spock 2) is found.
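
For reference, a minimal Surefire declaration could look like the snippet below (3.0.0-M4 is just an exemplary 3.0.0+ milestone version):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.0.0-M4</version>
</plugin>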

The minimal working project for Gradle and Maven is available from GitHub.

Other changes

With such a big change as the migration to the JUnit Platform, the number of other changes in Spock 2.0 M1 is limited, to make finding the reason for potential regressions a little bit easier. As a side effect of the migration itself, the required Java version is now 8.

In addition, all parameterized tests are (finally) “unrolled” automatically. That is great; however, currently there is no way to “roll” particular tests, as known from spock-global-unroll for Spock 1.x.
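
To illustrate, a simple parameterized feature like the (exemplary) one below is now reported as a separate test execution for every row of the where block, without any @Unroll annotation:

import spock.lang.Specification

class AutoUnrollSpec extends Specification {

    def "maximum of #a and #b should be #expected"() {
        expect:
            Math.max(a, b) == expected
        where:
            a | b || expected
            1 | 3 || 3
            7 | 4 || 7
    }
}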

Some other changes (such as temporarily disabled SpockReportingExtension) can be found in the release notes.

More (possibly breaking) changes are expected to be merged into Milestone 2.

Issue with JUnit 4 rules

The tests using JUnit 4 @Rules (or @ClassRules) are expected to fail with an error message suggesting that the requested objects were not created/initialized before a test (e.g. NullPointerException or IllegalStateException: the temporary folder has not yet been created) or were not verified/cleaned up after it (e.g. soft assertions from AssertJ). The Rules API is no longer supported by the JUnit Platform. However, to make the migration easier (@TemporaryFolder is probably very often used in Spock-based integration tests), there is a dedicated spock-junit4 module which internally wraps JUnit 4 rules into Spock extensions and executes them in the Spock lifecycle. As it is implemented as a global extension, the only required thing is to add another dependency. In Gradle:

testImplementation 'org.spockframework:spock-junit4:2.0-M2-groovy-2.5'

or in Maven:

<dependency>
    <groupId>org.spockframework</groupId>
    <artifactId>spock-junit4</artifactId>
    <version>2.0-M2-groovy-2.5</version>
    <scope>test</scope>
</dependency>
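
With that dependency in place, an existing specification using a JUnit 4 rule – like this hypothetical one with @TemporaryFolder – should keep working unchanged:

import org.junit.Rule
import org.junit.rules.TemporaryFolder
import spock.lang.Specification

class ReportWriterSpec extends Specification {

    @Rule
    TemporaryFolder temporaryFolder = new TemporaryFolder()    //wrapped into a Spock extension by spock-junit4

    def "should write a report to a file in a temporary folder"() {
        when:
            File report = temporaryFolder.newFile("report.txt")
        then:
            report.exists()
    }
}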

That makes the migration easier, but it is good to think about a switch to the native Spock counterpart, if available/feasible.

Groovy 3 support

Updated 2020-02-10. The whole Groovy 3 section was added to cover changes in Spock 2.0 Milestone 2.

After my complaints about the runtime failure when Spock 2.0 M1 is used with Groovy 3.0, I rolled up my sleeves to check how hard it would be to provide that support. It took me some time; however, after 23 (multiple times rebased) commits, constructive feedback from other Spock contributors and a fruitful discussion with the Groovy developers, the support for Groovy 3 has been merged and is available as the main feature of Spock 2.0 M2, released right after Groovy 3.0.0 final.

To use Spock 2.0 M2 with Groovy 3 it is enough to just use the spock-*-2.0-M2 artifacts with the -groovy-3.0 suffix.
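
For example, in Gradle:

testImplementation('org.spockframework:spock-core:2.0-M2-groovy-3.0')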

It is worth noting that Groovy 3 is not backward compatible with Groovy 2. To keep one Spock codebase, there is a layer of abstraction in Spock which allows building (and using) the project with both Groovy 2 and 3. As a result, an extra artifact spock-groovy2-compat is (automatically) used in projects with Groovy 2. It is very important not to mix the spock-*-2.x-groovy-2.5 artifacts with the groovy-*-3.x artifacts on a classpath. This may result in weird runtime errors.

I’m really happy that developers can immediately start testing the new Groovy 3.0.0 with Spock 2.0-M2 in their projects. In addition, as Spock is quite an important (and low-level) project in the Groovy ecosystem, it was nice to confirm that Groovy 3 works properly with it (and to report – along the way – a few minor detected issues to make Groovy even better ;-) ).

Other issues and limitations

Updated 2020-02-10. This section originally referred to the limitations of Spock 2.0 M1. Milestone 2 supports Groovy 3.

Spock 2.0 M1 is compiled and tested only with Groovy 2.5.8. As of M1, execution with Groovy 3.0 is blocked at runtime. Unfortunately, instead of a clear message about the incompatible Groovy version, there is only a very cryptic error:

Could not instantiate global transform class org.spockframework.compiler.SpockTransform specified at
jar:file:/.../spock-core-2.0-M1-groovy-2.5.jar!/META-INF/services/org.codehaus.groovy.transform.ASTTransformation
because of exception java.lang.reflect.InvocationTargetException

It has already been reported and should be improved in M2.

Sadly, the limitation to Groovy 2.5 only reduces the potential feedback from people experimenting with Groovy 3, which is pretty close to a stable version (RC2 as of 2019/2020). It’s especially inconvenient as many Spock tests just work with Groovy 3 (of course, there are some corner cases). There is a chance that Spock 2, before going final, will be adjusted to the changes in Groovy 3, or at least that the aforementioned hard limitation will be lifted. In the meantime, it is required to test Groovy 3 support with the snapshot version – 2.0-groovy-2.5-SNAPSHOT (which has that check disabled).

Summary

The call to action after reading this post is simple. Try to temporarily play with Spock 2.0 M2 in your projects and report any spotted issues, to help make Spock 2.0 even better :).

“Not only individuals and interactions, but also a community of professionals” – The Software Craftsmanship Manifesto, 2009

Beginning

A long time ago (still) in a world of the prevailing Waterfall, I was fascinated by how Agile can simplify and streamline the process of software development. However, after some time committed to implementing the new approach in my organization, I was hit by the fact that even with Scrum (and all its fancy ceremonies) we can still produce code that – to use a euphemism — leaves more to be desired.

I started looking for some techniques to improve the quality of the created code and — as a result — the quality of the forged products and solutions. Clean code, automatic code testing, and Test-Driven Development were the most important missing parts I discovered. It was somehow satisfying to realize, after a while, that Extreme Programming, the Software Craftsmanship movement, and me, all had a common goal :)

Community of professionals

Expansion

However, turning the ideas and theoretical knowledge into an efficiently working mechanism was not so easy. First, I needed to grasp (and preferably master) it on my own to be able to spread it successfully out into the company as a whole. In addition to literature and the Internet, I was lucky enough to encounter some fantastic people and interesting local initiatives which allowed me to meet other developers thinking the same way, exchange knowledge (and ideas), and practice using it in a safe environment (a number of Coding Dojo sessions with a lot of katas made on my own, in a pair or in a group). Being equipped with those (practical) skills, I had a solid foothold to now spread it within the company and pass it on to interested teammates. All with the ultimate goal in mind – to finally improve the quality of created solutions and to make customers and developers more satisfied. It was not hassle-free, to say the least, and in the end, not all of my goals were achieved, but sometime later – pursuing my own path of a Software Craftsman – I moved on from that particular organization with the feeling that I had left some good things and ideas behind.

The main trigger for this — somehow personal — blog post is the fact that, in December, 10 years ago, I gave my first public speech at the Warszawa JUG meeting. It featured the power of the full-text search mechanism embedded into a custom application with the help of the Compass library (which later on became a foundation of Elasticsearch). The talk wasn’t perfect but, based on the feedback, some of the attendees definitely learned something new. In addition, what was also very important was that it showed me how much I can learn myself while preparing a presentation for others. So, I started to share the ideas in wider circles.

Stabilization

The topics covered quickly turned into my main specialization — code quality and automatic code testing (together with the related aspects such as Continuous Delivery) to show other people the way and encourage them to try it on their own. Since that time, I have given dozens of talks for local user groups, small closed events, and large international conferences, both in Poland and across Europe. I (co-)organized some local meetups and larger conferences. I also wrote a number of (educational) blog posts. While I was gradually reducing my active commitment in the community (after all, a day has only 24 hours), a while after my initial presentation, I also started to conduct training sessions for groups and organizations to spread (good) ideas and practical skills in depth. Being a trainer, in the process of time, turned into a regular part of my working life, but I still try to stay current with techniques and tools, working hands-on with real projects and solving real problems. It also helps me to enrich my training sessions with anecdotes and real-world examples from my own experience. I really like to demonstrate to less experienced fellow developers how their solutions could be more readable, more bullet-proof, and easier to maintain.

The Future

Is it still needed in 2020+, one might ask? Based on my experience from all my activities, the situation in general has improved over time. Unfortunately, there is still a lot to do. Automatic code testing is still not properly covered during studies or is not presented to students at all. Large parts of commercially developed software cannot be truly named “high-quality” and solid automatic code testing (not to mention Test-Driven Development) is not considered to be a MUST HAVE in all projects. Therefore, I continue my mission to make the world a (slightly) better place, by preaching the ideas that are worth spreading, building a community of professionals, and paying back to people and communities who — at the beginning of my journey — were there to assist in my growth.

Are you able to change the world? Good point. The IT world is quite big, and I cannot reach everyone. However, I am not the only one. For instance, looking at my fellow trainers, I see a number of passionate professionals with various areas of expertise, caring – in different ways – about high levels of knowledge in the community, which makes the whole thing look (a little) less hopeless :).

Is it worth doing? Well, firstly, continuous learning — a requirement to be a good trainer (and speaker) — is definitely a benefit for me. Secondly, I want to work with sensible people on a daily basis. Therefore, the more of them, the better. Thirdly, occasionally there are people who encounter me in one or another way to say “Thank you” for the things I directly or indirectly did (which were somehow beneficial for them). It is really elevating to see that you were able to help someone in something. In those moments, I clearly see that my work did not go entirely down the drain :).

P.S. Recommended reading: “The Software Craftsman: Professionalism, Pragmatism, Pride” by Sandro Mancuso
(Please note, the link above is tied to my Amazon account. Feel free to use this generic link instead if preferred)

The lead photo by Anemone123 obtained from Pixabay.

Publishing a newly created Git branch to a remote repository can be easier than you might expect.

Introduction

Creating a new branch and pushing (publishing) it to a remote repository is a very common situation in various Git workflow models. The majority of people create a lot of new branches – just to initiate a pull (merge) request, to show code to remote workmates, or simply to back up local changes overnight.

git publish branch

Unfortunately, it is not as easy in Git as it could be:

~/C/my-fancy-project (master|✓) $ git checkout -b featureX
Switched to a new branch 'featureX'

~/C/my-fancy-project (featureX|✓) $ git push
fatal: The current branch featureX has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin featureX

Hmm, just copy-paste the given line and you are set:

~/C/my-fancy-project (featureX|✓) $ git push --set-upstream origin featureX
Total 0 (delta 0), reused 0 (delta 0)
To /tmp/my-fancy-project-remote/
 * [new branch]      featureX -> featureX
Branch 'featureX' set up to track remote branch 'featureX' from 'origin'.

Of course you may memorize it after some time (however, I observe that many people do not) or even use the shorter syntax:

~/C/my-fancy-project (featureX|✓) $ git push -u origin featureX
Total 0 (delta 0), reused 0 (delta 0)
To /tmp/my-fancy-project-remote/
 * [new branch]      featureX -> featureX
Branch 'featureX' set up to track remote branch 'featureX' from 'origin'.

Nonetheless, for me it was too many characters to type, especially repeated multiple times, especially in a typical workflow with one remote repository (usually named origin).

xkcd – Is It Worth the Time? – https://xkcd.com/1205/

Solution

The perfect solution for me would be just one command. Something like git publish.

~/C/my-fancy-project (master|✓) $ git checkout -b featureY
Switched to a new branch 'featureY'

~/C/my-fancy-project (featureY|✓) $ git publish
Total 0 (delta 0), reused 0 (delta 0)
To /tmp/my-fancy-project-remote/
 * [new branch]      featureY -> featureY
Branch 'featureY' set up to track remote branch 'featureY' from 'origin'.

Wouldn’t it be nice?

As you may know from my previous posts, I am a big enthusiast of comprehensive automation (such as CI/CD), or at least semi-automation (aka “making things easier”) when the former is not possible (or viable). Therefore, at the time, I started looking for possible improvements. Git is written by developers for developers and offers different ways of customization. The easiest is to write an alias. In this case it is as simple as adding to ~/.gitconfig:

[alias]
    # Gets the current branch name - useful in other commands
    # Git 2.22 (June 2019) introduced "git branch --show-current"
    branch-name = "!git rev-parse --abbrev-ref HEAD"

    # Pushes the current branch to the remote "origin" (or the remote passed as the parameter)
    # and set it to track the upstream branch
    publish = "!sh -c 'git push -u ${1:-origin} $(git branch-name)' -"
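
Btw, with Git 2.22+ the branch-name helper could arguably be replaced by the built-in command mentioned in the comment above – an untested sketch under that assumption:

[alias]
    publish = "!sh -c 'git push -u ${1:-origin} $(git branch --show-current)' -"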

As a result, in addition to the basic case (setting the upstream branch (if needed) and pushing the current branch to origin):

$ git publish

it is also possible to publish to some other remote repository:

$ git publish myOtherRemote

Cleaning up

As a counterpart to git publish, it is easy to implement git unpublish:

[alias]
    # Deletes the remote version of the current branch from the remote "origin"
    # (or the remote passed as the parameter)
    unpublish = "!sh -c 'git push ${1:-origin} :$(git branch-name) && git branch --unset-upstream $(git branch-name)' -"

which removes the current branch from a remote repository (origin or the one passed as a parameter):

~/C/my-fancy-project (featureNoLongerNeeded|✓) $ git unpublish
To /tmp/my-fancy-project-remote/
 - [deleted]         featureNoLongerNeeded

instead of:

~/C/my-fancy-project (featureNoLongerNeeded|✓) $ git push origin --delete featureNoLongerNeeded
To /tmp/my-fancy-project-remote/
 - [deleted]         featureNoLongerNeeded

or

~/C/my-fancy-project (featureNoLongerNeeded|✓) $ git push origin :featureNoLongerNeeded
To /tmp/my-fancy-project-remote/
 - [deleted]         featureNoLongerNeeded

Again, shorter and easier to remember.

Partial built-in solution

As proposed by the indispensable Łukasz Szczęsny, hassle-free pushing alone (without pulling) can also be achieved with Git configuration itself. It may be sufficient when branches are removed automatically after a PR is merged (e.g. in properly configured GitLab or GitHub). In that case it is required to set the push.default configuration parameter to current:

~/C/my-fancy-project (master|✓) $ git checkout -b featureZ
Switched to a new branch 'featureZ'

~/C/my-fancy-project (featureZ|✓) $ git push -u
fatal: The current branch featureZ has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin featureZ

~/C/my-fancy-project (featureZ|✓) $ git config --global push.default current

~/C/my-fancy-project (featureZ|✓) $ git push -u
Total 0 (delta 0), reused 0 (delta 0)
To /tmp/my-fancy-project-remote/
 * [new branch]      featureZ -> featureZ
Branch 'featureZ' set up to track remote branch 'featureZ' from 'origin'.

~/C/my-fancy-project (featureZ|✓) $ git pull
Already up to date.

Please pay attention to the -u flag in git push -u. It is required to set up remote branch tracking. Without it, a plain git pull would not work.

Summary

I have been using git publish (and git unpublish) for many years and I really like it. Taking the opportunity of writing this Git Tricks blog series, I decided to share it with others (I fell in love with the command line :-) ). Remember, however, it is now a part of GitKurka (or its uncensored upstream project) – a set of useful and productive tweaks and aliases for Git.

Btw, I do not conduct Git training anymore, but people wanting to develop their Git skills even more may consider an on-site course from Bottega (PL/EN), an online course by Maciej Aniserowicz (devstyle.pl) (PL) or a comprehensive Pro Git book (EN).

Update 20190910. Added partial built-in alternative solution suggested by Łukasz Szczęsny.
Update 20190913. Added missing “branch-name” alias. Pointed out by Paul in a comment.

The lead photo is based on Iva Balk’s work published on Pixabay, Pixabay License.

Learn how to solve the issue of pushing to submodules directly from the main repo while keeping the project easily cloneable by external contributors.

Introduction

The Git submodules mechanism is pretty handy for keeping the source code of loosely related dependent software together in one Git repository while leaving their development separate. It is something like symlinks in the Unix world, but with an ability to also refer to a particular version. It’s quite popular in projects using source code integration (instead of shared libraries) or to speed up development by making related changes in multiple repositories easier. It is not the only possible solution – Gradle, for instance, provides a composite build mechanism. A monorepo is another approach, but it has its own limitations and is very problematic to use in FOSS projects developed by different people/teams.

As usual, I encountered that situation in one of my projects. As a big enthusiast of automatic code testing and Continuous Delivery, some time ago I was working on improving the reliability of my (automatically released) gradle-nexus-staging-plugin. After each commit (so also before a release) I wanted to have the end-to-end tests executed to verify that the plugin is able to pass a simple (but real) project through the release process to Maven Central (aka The Central Repository).

I could move that test project to the plugin repository, but – well – it’s a distinct project which could also be released separately or replaced with some other project. In addition, to test two variants of releasing, it was handy to keep it in two branches, “mounted” twice in my root repository, as shown in the layout (and the commands) below.

gradle-nexus-staging-plugin
├── src
│   ├── funcTest
│   │   ├── groovy
│   │   │   └── ...
│   │   └── resources
│   │       └── sampleProjects
│   │           ├── nexus-at-minimal (submodule - master branch)
│   │           │   └── ...
│   │           ├── nexus-at-minimal-publish (submodule - publish branch)
│   │           │   └── ...
│   │           └── ...
│   ├── ...
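
As a refresher, such a double “mount” boils down to adding the same repository twice as a submodule, tracking different branches – a sketch with the URL and the paths matching the layout above:

$ git submodule add -b master https://gitlab.com/nexus-at/nexus-at-minimal.git src/funcTest/resources/sampleProjects/nexus-at-minimal
$ git submodule add -b publish https://gitlab.com/nexus-at/nexus-at-minimal.git src/funcTest/resources/sampleProjects/nexus-at-minimal-publish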

Problem (aka Challenge)

gradle-nexus-staging-plugin contains a submodule nexus-at-minimal from a separate repo. Working on a new fancy feature in GNSP, it is often needed to also tweak the acceptance testing project. To make the development smoother it’s very useful to be able to commit the introduced changes in the dependent project directly from the working copy of the main project. It works out of the box. However, later on we should also directly push them back to the separate repo of that dependent project (by stepping into the submodule and calling git push, or – all at once – by using the git push --recurse-submodules=on-demand parameter when pushing from the main repository). And then – in the long term – we encounter some inconvenience.

Let’s start by defining a submodule path to connect via SSH:

[submodule "src/funcTest/resources/sampleProjects/nexus-at-minimal"]
        path = src/funcTest/resources/sampleProjects/nexus-at-minimal
        url = git@gitlab.com:nexus-at/nexus-at-minimal.git

In general it works fine and pushing is allowed. However, any non-developer trying to clone that repo (and initialize its submodules) gets:

...
git@gitlab.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

It’s pretty bad in open source (FOSS) development, where the projects are publicly available and other people are encouraged to download (clone), build and contribute to them.

The obvious remedy to the previous error is switching to HTTPS:

[submodule "src/funcTest/resources/sampleProjects/nexus-at-minimal"]
        path = src/funcTest/resources/sampleProjects/nexus-at-minimal
        url = https://gitlab.com/nexus-at/nexus-at-minimal.git

However, then, to allow developers to commit changes directly to a submodule, it is required to go through a separate HTTPS authentication, which in most cases is simply not used in favor of SSH (and would need to be configured independently with an access token).

Transparent solution

To keep external contributors happy, the developers could manually change the url in .gitmodules from HTTPS to SSH for every cloned project. However, it’s quite tedious. A better solution is to use pushInsteadOf. For the aforementioned example, the developers only need to add to the global ~/.gitconfig configuration file:

[url "git@gitlab.com:nexus-at/"]
    pushInsteadOf = https://gitlab.com/nexus-at/

It effectively overrides the push URL to use SSH instead of HTTPS for the whole group in GitLab/GitHub, covering also submodules (where HTTPS – the external-contributor-friendly scheme – is left as the default).
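
The effect can be verified with git remote -v executed inside the submodule directory – it should show the original HTTPS fetch URL next to the rewritten SSH push URL:

$ git remote -v
origin  https://gitlab.com/nexus-at/nexus-at-minimal.git (fetch)
origin  git@gitlab.com:nexus-at/nexus-at-minimal.git (push)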

Simple solution for the same server

As I use GitLab for my professional activity, at that time I was also evaluating its usage for my FOSS projects. Therefore, two different servers (providers) were used for the main project and the submodule (here GitHub and GitLab). However, very often a submodule is placed on the same server, or even in the same group, as the main project. In that case the configuration can be simplified even more with relative paths:

[submodule "src/funcTest/resources/sampleProjects/nexus-at-minimal"]
        path = src/funcTest/resources/sampleProjects/nexus-at-minimal
        url = ../nexus-at-minimal.git

The protocol used to clone the main repository (SSH or HTTPS) will also be used for the submodule.
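
Thanks to that, an external contributor can grab the project together with its submodules in one go (the URL below is purely illustrative):

$ git clone --recurse-submodules https://github.com/example/main-project.git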

Summary

URL rewriting and conditional configuration (covered in the first part) are just a subset of the options available in Git to make development more flexible and simple. Simple, assuming we have already found the required features and learned how to use them ;-).

Update 20190405. Added simple solution with a relative path.

The lead photo is based on Mohamed Hassan’s work published on Pixabay, Pixabay License.

Have you ever committed to Git using the wrong email address while working on/for different projects/companies? Luckily, with a little configuration, Git can auto-switch the identities for you.

IMPORTANT. This blog has been archived. You may read an updated version of this post here.
Visit https://blog.solidsoft.pl/ to follow my new articles.

(Too long) introduction and reasoning

Being an (experienced) IT professional can give you an opportunity to work on different things in the same time frame. For instance, in addition to working for my main client, I do some consultancy work on code quality & testing and Continuous Delivery for other companies. What’s more, I also conduct training sessions (with a lot of code exercises) and work on my own and others’ FOSS projects. Usually, it is convenient to do it all on the same computer. It can happen that you commit something with the wrong email address (to be detected later on by an external auditor ;-) ), and a force push to a remote master after a rebase is not the best possible solution :-). I started with some Fish-based script to deal with it, but in the end I found a built-in mechanism in Git itself.

Disclaimer. This particular blog post doesn’t cover anything new or revolutionary. However, I’ve been living in unawareness long enough to give my blog readers a chance to get to know that mechanism right now.

superhero-git-identities

Project situation

My ~/Code directory structure could be simplified to something like that:

Code/
├── gradle-nexus-staging-plugin
├── mockito-java8
├── spock-global-unroll
├── ...
├── external-foss
│   ├── ...
│   ├── awaitility
│   └── spock
├── training
│   ├── ...
│   ├── code-testing
│   ├── java11
│   └── jenkins-as-code
└── work
    ├── ...
    ├── codearte
    └── companyX

The goal is to use the appropriate email address in the company’s projects.

Solution

The first idea which may spring to your mind is to manually override git config user.email "..." in every cloned company repository. However, it’s tedious and error-prone (especially in a microservice-based architecture :) ).

Luckily, one of the features introduced in Git 2.13.0 (a long time ago – May 2017) is conditional configuration applying (aka conditional includes). Armed with it, our task is pretty simple.

First, keep your default name and email defined in the [user] section in ~/.gitconfig:

[user]
  name = Marcin Zajączkowski
  email = foss.hacker@mydomain.example.com

Next, create a company-related config file in the home directory, e.g. ~/.gitconfig-companyX, with just the overridden email address:

[user]
  email = marcin.zajaczkowski@companyx.example.com

Glue it together by placing the following code in ~/.gitconfig:

[includeIf "gitdir:~/Code/work/companyX/"]
  path = .gitconfig-companyX

The .gitconfig-companyX file is applied only if the current path prefix matches the one defined in includeIf. Simple, yet powerful. That mechanism can also be used to configure different things – such as conditionally using GPG to sign your commits.
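
A quick way to verify that the conditional include kicks in is to ask Git for the effective value in repositories located in both places (the paths below are just illustrative):

~/Code/spock-global-unroll (master|✓) $ git config user.email
foss.hacker@mydomain.example.com

~/Code/work/companyX/some-repo (master|✓) $ git config user.email
marcin.zajaczkowski@companyx.example.com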

Btw, being paranoid, you can even remove the email field from the root configuration to be notified when you forget to add an email for a new company you collaborate with.

Summary

Thanks to the includeIf mechanism you will never (*) again commit to a repo with the wrong name or email. That and other Git-related tricks and commands/aliases (such as those collected in git-kurka) make it easier to focus on delivering the stuff :).

Feel free to leave your favorite Git tips & tricks in the comments.

Update. From a comment received in private:
– Git works great in that case, but for more generic changes in environment variables, Wybczu suggested the very powerful direnv (with support for bash, zsh, tcsh, fish and elvish).

The lead photo is based on Elias Sch.’s work published on Pixabay, Pixabay License.

Make your (automatic) releasing to Maven Central from Travis (and not only) more reliable, thanks to the explicit staging repository creation feature set implemented at the turn of 2018 and 2019.

Background

If you are only interested in how to make your artifact releasing from Travis more reliable, skip forward to the next section.

Automatic artifact releasing (using a staging repository and its promotion) from Gradle to Maven Central has always been tricky. The Nexus REST API related to those operations is very poorly documented. In addition, Gradle doesn’t natively support uploading artifacts to a dedicated staging repository, even if it was already created explicitly. As a result, a heuristic to determine which repository contains the just-uploaded artifacts had to be used, which brings some serious limitations. The apogee of the problems came when Travis changed its architecture to a more stateless one in the late autumn of 2018. It caused the upload requests for particular artifacts to be routed via machines with different IP addresses, which resulted in multiple staging repositories created for a single gradle uploadArchives or gradle publish call. That made automatic artifact releasing with Gradle from Travis completely broken. Up until now.

reliable-releasing-to-maven-central

Improvements

Two good things happened at the turn of the year. The first was the appearance of the new nexus-publish plugin by Marc Philipp. It creates an explicit staging repository using the Nexus API and enhances the Gradle publish task to use that repository. The second was an enhancement in my gradle-nexus-staging plugin, which started to allow setting the ID of the staging repository that should be used during the release operation. That led to improved reliability of releasing to Maven Central using Gradle.

Instead of relying on a heuristic to determine which repository should be used for the release, the new staging repository is explicitly created. The artifacts are uploaded directly to it, and it is closed and released. Thanks to that, everything works smoother and is more error-proof. In addition, there is no problem with parallel releasing of different projects belonging to the same staging profile, and it finally works properly with Travis again.

Configuration

This post assumes you have already configured uploading your artifacts to Maven Central (aka The Central Repository) using the maven-publish plugin. If not, you may consult this link. This configuration will make your deployment and releasing more reliable without a need for any manual operations in the Nexus UI.

plugins {
    ... //other plugins used in your project
    id 'io.codearte.nexus-staging' version '0.20.0'
    id 'de.marcphilipp.nexus-publish' version '0.2.0'
}

publishing {
    ... //your current publishing to Maven Central configuration
}

//optionally
nexusStaging {
    packageGroup = "your-package-group-if-different-than-groupId"
}

//optionally
nexusPublishing {
    //for custom configuration if needed - credentials
    //       are by default taken from nexus-staging
    //       or from properties nexusUsername and nexusPassword
}

Did you expect much more code (configuration) to write? Everything is hidden in the plugins, which leverage each other. Just please remember to use nexus-staging 0.20.0+ and nexus-publish 0.2.0+.

After that, uploading the artifacts together with releasing them is a matter of one command:

./gradlew publish closeAndReleaseRepository

Calling the publish task (or publishToNexus for the older versions) creates a staging repository and memorizes its ID. closeAndReleaseRepository closes and releases that one particular repository. After a few minutes your artifacts should be available in Maven Central.

Important. Bear in mind that publish (or publishToNexus) and closeAndReleaseRepository have to be used in one Gradle execution to be able to leverage the explicitly created staging repository.
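
To make the difference concrete, a sketch of the wrong and the right invocation:

# the staging repository ID memorized by publish is lost between executions –
# the explicitly created repository cannot be leveraged
./gradlew publish && ./gradlew closeAndReleaseRepository

# one Gradle execution – works as intended
./gradlew publish closeAndReleaseRepository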

Update 20190506. With nexus-publish 0.2.0+ calling just publish instead of publishToNexus is enough to trigger the logic, so the sample command was simplified.

Summary

Gradle is a very nice build tool where (almost) the sky is the limit. Unfortunately, there are still some long-lasting issues which require using hacks or writing custom plugins to overcome them. The promising thing is that with every release they are slowly fixed/implemented. To solve this particular problem, bottom-up work was required to bring releasing from Travis back to life and make it more reliable in general.

Please note. The presented approach works pretty well when using the (recently improved) publishing plugin. If you still use the old maven plugin (having the uploadArchives task instead of the publish one), you need to migrate and/or put your comment in the corresponding issue.

The lead photo is based on Siala’s work published on Pixabay, Pixabay License.

Learn how to leverage Spock 1.2 to slice the Spring context of a legacy application when writing integration tests.

IMPORTANT. This blog site has been archived. You may read an updated version of this post here.
Visit https://blog.solidsoft.pl/ to follow my new articles.

Have you ever wanted, when starting to work on some legacy application, to write a few tests to find out what is going on and possibly be notified about regressions? That feeling when you want to instantiate a single class and it fails with a NullPointerException. 6 (with difficulty) replaced dependencies later, there are still some errors from classes that you haven’t heard about before. Sounds familiar?

There are different techniques to deal with hidden dependencies. There is a whole dedicated book about that (and probably a few others that I haven’t read). Occasionally, it may be feasible to start with integration tests and run through some process. It may be even more “entertaining” to see what exotic components are required to just set up the context, even if they are completely unneeded in our case. Thank you (too wide and carelessly used) @ComponentScan :).

Injecting stubs/mocks into the test context is a way to go as emergency assistance (see the last paragraph – there are better, yet harder, approaches). It can be achieved “manually” with an extra bean definition with the @Primary annotation (usually a reason to think twice before doing that) for every dependency at whose level we want to make a cut (or for every unneeded bean which is instantiated along the way). @MockBean placed on a field in a test is more handy, but still, it is needed to define a field in our tests and put the annotation on it (5? 10? 15 beans?). Spock 1.2 introduces a somewhat less known feature which may be useful here – @StubBeans.

Mocked dependencies in the Spring context

It can be used to simply provide a list of classes whose (possible) instances should be replaced with stubs in the Spring test context – of course before the real objects are instantiated (to prevent, for example, an NPE in a constructor). Thanks to that, up to several lines of stubbing/mock injections:

@RunWith(SpringRunner.class) //Spring Boot + Mockito
@SpringBootTest //possibly some Spring configuration with @ComponentScan is imported in this legacy application
public class BasicPathReportGeneratorInLegacyApplicationITTest { //usual approach

    @MockBean
    private KafkaClient kafkaClientMock;

    @MockBean
    private FancySelfieEnhancer fancySelfieEnhancerMock;

    @MockBean
    private FastTwitterSubscriber fastTwitterSubscriberMock;

    @MockBean
    private WaterCoolerWaterLevelAterter waterCoolerWaterLevelAterterMock;

    @MockBean
    private NsaSilentNotifier nsaSilentNotifierMock;

    //a few more - remember, this is legacy application, genuine since 1999 ;)
    //...

    @Autowired
    private ReportGenerator reportGenerator;

    @Test
    public void shouldGenerateEmptyReportForEmptyInputData() {
        ...
    }
}

can be replaced with just one (long) line:

@SpringBootTest //possibly some Spring configuration with @ComponentScan is imported in this legacy application
@StubBeans([KafkaClient, FancySelfieEnhancer, FastTwitterSubscriber, WaterCoolerWaterLevelAterter, NsaSilentNotifier/*, ... */])
  //all classes of real beans which should be replaced with stubs
class BasicPathReportGeneratorInLegacyApplicationITSpec extends Specification {

    @Autowired
    private ReportGenerator reportGenerator

    def "should generate empty report for empty input data"() {
        ....
    }
}

(tested with Spock 1.2-RC2)

It’s worth mentioning that @StubBeans is intended just to provide placeholders. In situations where stubbing and/or invocation verification are required, @SpringBean or @SpringSpy (also introduced in Spock 1.2) are better. I wrote more about them in my previous blog post.

There is one important aspect to emphasize. @StubBeans are handy in a situation when we have some “legacy” project and want to start writing integration regression tests quickly to see the results. However, as a colleague of mine, Darek Kaczyński, brightly summarized, blindly replacing beans which “explode” in tests is just “sweeping problems under the carpet”. After the initial phase, when we are starting to understand what is going on, it is a good moment to rethink the way the context – both in production and in tests – is created. The already mentioned too-wide @ComponentScan is very often the root of all evil. An ability to set up a partial context and put it together (if needed) is a good place to start. @Profile or conditional beans are very powerful mechanisms in tests (and not only there). @TestConfiguration and proper bean selection to improve context caching are something worth keeping in mind. However, I started this article to present the new mechanism in Spock which might be useful in some cases, and I want to keep it short. There could be another, more generic blog post just about managing the Spring context in integration tests. I have to seriously think about it :).

Discover how to automatically inject Spock’s mocks and spies into the Spring context using Spock 1.2.

IMPORTANT. This blog site has been archived. You may read an updated version of this post here.
Visit https://blog.solidsoft.pl/ to follow my new articles.

Coffee beans with 3 different beans

Stubs/mocks/spies in Spock (and their life cycle) have always been tightly coupled with the Spock Specification class. It was only possible to create them in a test class. Therefore, using shared, predefined mocks (in both unit and integration tests) was problematic.

The situation was slightly improved in Spock 1.1, but only with the brand new Spock 1.2 (1.2-RC1 as of the time of writing) is using the Spock mocking subsystem in Spring-based integration tests as easy as using @MockBean for Mockito mocks in Spring Boot. Let’s check it out.

Btw, to be more cutting edge, in addition to Spock 1.2-RC1 I will be using Spring Boot 2.1.0.M2, Spring 5.1.0.RC2 and Groovy 2.5.2 (but everything should work with the stable versions of Spring (Boot) and Groovy 2.4).

One more thing. For the sake of simplicity, in this article I will be using the term ‘mock’ to also refer to stubs and spies. They differ in behavior; however, in the scope of injecting them into the Spring context in Spock tests it usually doesn’t matter.

Spock 1.1 – manual way

Thanks to the work of Leonard Brünings, mocks in Spock were decoupled from the Specification class. It was finally possible to create them outside of it and attach them later on to a running test. That was the cornerstone of using Spock mocks in the Spring (or any other) context.

In this sample code we have the ShipDatabase class which uses OwnShipIndex and EnemyShipIndex (of course injected via a constructor :) ) to return aggregated information about all known ships matched by name.

//@ContextConfiguration just for simplification, @(Test)Configuration is usually more convenient for Spring Boot tests
//Real beans can exist in the context or not
@ContextConfiguration(classes = [ShipDatabase, TestConfig/*, OwnShipIndex, EnemyShipIndex*/])
class ShipDatabase11ITSpec extends Specification {

    private static final String ENTERPRISE_D = "USS Enterprise (NCC-1701-D)"
    private static final String BORTAS_ENTERA = "IKS Bortas Entera"

    @Autowired
    private OwnShipIndex ownShipIndexMock

    @Autowired
    private EnemyShipIndex enemyShipIndexMock

    @Autowired
    private ShipDatabase shipDatabase

    def "should find ship in both indexes"() {
        given:
            ownShipIndexMock.findByName("Enter") >> [ENTERPRISE_D]
            enemyShipIndexMock.findByName("Enter") >> [BORTAS_ENTERA]
        when:
            List<String> foundShips = shipDatabase.findByName("Enter")
        then:
            foundShips == [ENTERPRISE_D, BORTAS_ENTERA]
    }

    static class TestConfig {
        private DetachedMockFactory detachedMockFactory = new DetachedMockFactory()

        @Bean
        @Primary    //if needed, beware of consequences
        OwnShipIndex ownShipIndexStub() {
            return detachedMockFactory.Stub(OwnShipIndex)
        }

        @Bean
        @Primary    //if needed, beware of consequences
        EnemyShipIndex enemyShipIndexStub() {
            return detachedMockFactory.Stub(EnemyShipIndex)
        }
    }
}

The mocks are created in a separate class (outside the Specification) and therefore DetachedMockFactory has to be used (or alternatively SpockMockFactoryBean). Those mocks have to be attached (and detached) to the test instance (the Specification instance), but that is automatically handled by the spock-spring module (as of 1.1). For generic mocks created externally, MockUtil.attachMock() and mockUtil.detachMock() would also need to be used to make it work.

As a result it was possible to create and use mocks in the Spring context, but it was not very convenient and therefore not commonly used.

Spock 1.2 – first class support

Spring Boot 1.4 brought a new quality to integration testing with (Mockito’s) mocks. It leveraged the idea, originally presented in Springockito back in 2012 (when the Spring configuration was mostly written in XML :) ), to automatically inject mocks (or spies) into the Spring (Boot) context. The Spring Boot team extended the idea and, thanks to having it as an internally supported feature, it (usually) works reliably, just by adding an annotation or two in your test.

A similar annotation-based mechanism is built into Spock 1.2.

//@ContextConfiguration just for simplification, @(Test)Configuration is usually more convenient for Spring Boot tests
//Real beans can exist in the context or not
@ContextConfiguration(classes = [ShipDatabase/*, OwnShipIndex, EnemyShipIndex*/])
class ShipDatabaseITSpec extends Specification {

    private static final String ENTERPRISE_D = "USS Enterprise (NCC-1701-D)"
    private static final String BORTAS_ENTERA = "IKS Bortas Entera"

    @SpringBean
    private OwnShipIndex ownShipIndexMock = Stub()  //could be Mock() if needed

    @SpringBean
    private EnemyShipIndex enemyShipIndexMock = Stub()

    @Autowired
    private ShipDatabase shipDatabase

    def "should find ship in both indexes"() {
        given:
            ownShipIndexMock.findByName("Enter") >> [ENTERPRISE_D]
            enemyShipIndexMock.findByName("Enter") >> [BORTAS_ENTERA]
        when:
            List<String> foundShips = shipDatabase.findByName("Enter")
        then:
            foundShips == [ENTERPRISE_D, BORTAS_ENTERA]
    }
}

There is not much to add. @SpringBean instructs Spock to inject a mock into the Spring context. Similarly, @SpringSpy wraps the real bean with a spy. In the case of @SpringBean it is required to initialize the field to let Spock know whether we plan to use a stub or a mock.

In addition, there is also a more general annotation @StubBeans to replace all defined beans with stubs. However, I plan to cover it separately in another blog post.

Limitations

For those of you who look forward to rewriting all Mockito mocks into Spock mocks in your Spock tests right after reading this article, a word of warning. Spock’s mocks – due to their nature and relation to the Specification – have some limitations. The implementation under the hood creates a proxy which is injected into the Spring context and which (potentially) replaces real beans (stubs/mocks) or wraps them (spies). That proxy is shared between all the tests in the particular test (specification) class. In fact, it can also span other tests with the same bean/mock declarations, in situations where Spring is able to cache the context (a similar situation to Mockito’s mocks, or Spring integration tests in general).

However, what is really important, a proxy is attached to a test right before its execution and is detached right after it. Therefore, in fact, every test has its own mock instance (it cannot be applied to @Shared fields), and it is problematic, for instance, to group interactions from different tests and verify them together (which usually is quite sensible anyway, as it might lead to some duplication). Nevertheless, using a setup block (or in-line stubbing) it is possible to share the stubbing and interaction expectations.

Summary

Spock 1.2 finally brings hassle-free support for Spock’s stubs/mocks/spies in the Spring context, comparable with the one provided for Mockito in Spring Boot. It is enough to just add the spock-spring module to the project test dependencies. Despite some limitations, it is one reason less to mix the native Spock mocking subsystem with external mocking frameworks (such as Mockito) in your Spock (integration) tests. And what is nice, it should also work in plain Spring Framework tests (not only Spring Boot tests). The same feature has been implemented for Guice (but I haven’t tested it).

Furthermore, Spock 1.2 also brings some other changes, including better support for Java 9+, and it is worth giving it a try in your test suite (and of course reporting any potentially spotted regression bugs :) ).

One more piece of good news. In addition to the work of Leonard, who made Spock 1.2 possible, and a legion of bug reporters and PR contributors, recently some other committers have also been working on making Spock even better. Some of them you may know from other popular FOSS projects. What is more, Spock 1.2 is (preliminarily) planned to be the last version based on JUnit 4, and the next stable Spock version could be 2.0, leveraging JUnit 5 and (among others) its native ability to run tests in parallel.

The examples were written using Spock 1.2-RC1. It will be updated to 1.2-final once released. The source code is available from GitHub.

Btw, have you wondered whether it is still worth using Spock in the era of JUnit 5? I try to help answer that question in my presentation, which you will be able to see at JDD 2018, this October in Kraków, Poland. See you there.

JDD 2018 logo with date

The lead photo is based on Couleur’s work published on Pixabay, CC0 1.0.

Starting with version 2.17.0, Mockito provides official (built-in) support for managing the mocking life cycle if JUnit 5 is used.

IMPORTANT. This blog site has been archived. You may read an updated version of this post here.
Visit https://blog.solidsoft.pl/ to follow my new articles.

mockito-junit5-logo

Getting started

To take advantage of the integration, Mockito’s mockito-junit-jupiter dependency has to be added next to JUnit 5’s junit-jupiter-engine one (see below for details).

After that, the new Mockito extension for JUnit 5 – MockitoExtension – has to be enabled. And that’s enough. All the Mockito annotations should automatically start to work.

import org.junit.jupiter.api.Test;  //do not confuse with 'org.junit.Test'!
//other imports
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class SpaceShipJUnit5Test {

    @InjectMocks
    private SpaceShip spaceShip;

    @Mock
    private TacticalStation tacticalStation;

    @Mock
    private OperationsStation operationsStation;

    @Test
    void shouldInjectMocks() {
        assertThat(spaceShip).isNotNull();
        assertThat(tacticalStation).isNotNull();
        assertThat(operationsStation).isNotNull();
        assertThat(spaceShip.getTacticalStation()).isSameAs(tacticalStation);
        assertThat(spaceShip.getOperationsStation()).isSameAs(operationsStation);
    }
}

It’s nice that both a test class and test methods don’t need to be public anymore.

Please note. Having JUnit 4 also on the classpath (e.g. via junit-vintage-engine) for the “legacy” part of the tests, it is important not to confuse org.junit.jupiter.api.Test with the old org.junit.Test. It will not work.

Stubbing and verification

If for some reason you are not a fan of AssertJ (although I encourage you to at least give it a try), JUnit 5 provides the native assertion assertThrows (which is very similar to assertThatThrownBy() from AssertJ). It provides a meaningful error message in case of an assertion failure.

@Test
void shouldMockSomething() {
    //given
    willThrow(SelfCheckException.class).given(tacticalStation).doSelfCheck();   //for a void method "given..will" has to be used, "when..then" cannot
    //when
    Executable e = () -> spaceShip.doSelfCheck();
    //then
    assertThrows(SelfCheckException.class, e);
}

I wouldn’t be myself if I didn’t mention here that, leveraging the support for default methods in interfaces available in AssertJ and mockito-java8, a lot of static imports can be made redundant.

@ExtendWith(MockitoExtension.class)
class SpaceShipJUnit5Test implements WithAssertions, WithBDDMockito {
    ...
}

Tweaking default behavior

It is also worth pointing out that, using the JUnit 5 extension, Mockito by default works in the “strict mode”. It means that – for example – unneeded stubbing will fail the test. While very often this is a code smell, there are some cases where that test construction is desired. To change the default behavior, a @MockitoSettings annotation can be used.

@ExtendWith(MockitoExtension.class)
@MockitoSettings(strictness = Strictness.WARN)
class SpaceShipJUnitAdvTest implements WithAssertions, WithBDDMockito {
    ....
}

Dependencies

As I already mentioned, to start using it, it is required to add Mockito’s mockito-junit-jupiter dependency next to JUnit 5’s junit-jupiter-engine one. In a Gradle build it could look like:

dependencies {
    testCompile 'org.junit.jupiter:junit-jupiter-engine:5.1.0'
    testCompile 'org.mockito:mockito-junit-jupiter:2.17.2'  //mockito-core is implicitly added

    testCompile 'org.junit.vintage:junit-vintage-engine:5.1.0'  //for JUnit 4.12 test execution, if needed
    testCompile 'org.assertj:assertj-core:3.9.1'    //if you like it (you should ;) )
}

Please note. Due to a bug with injecting mocks via constructor into final fields, which I found while writing this blog post, it is recommended to use at least version 2.17.2 instead of 2.17.0. That “development” version is not available in Maven Central, so an extra Bintray repository has to be added.

repositories {
    mavenCentral()
    maven { url "https://dl.bintray.com/mockito/maven" }    //for development versions of Mockito
}

In addition, it would be a waste not to use the brand new native support for JUnit 5 test execution in Gradle 4.6+.

test {
    useJUnitPlatform()
}

IntelliJ IDEA has provided JUnit 5 support since 2016.2 (JUnit 5 Milestone 2 at that time). Eclipse Oxygen also seems to have added support for JUnit 5 recently.

mockito-junit5-idea-results

Summary

It is really nice to have native support for JUnit 5 in Mockito. Not to get ahead of things, there is ongoing work to make it even better.
The feature has been implemented by Christian Schwarz and polished by Tim van der Lippe, with a great assist from a few other people.

The source code is available from GitHub.

Btw, are you wondering how JUnit 5 compares with Spock? I will be talking about that at GeeCON 2018.

Geecon logo