Posts Tagged ‘training’

“Not only individuals and interactions, but also a community of professionals” – The Software Craftsmanship Manifesto, 2009


A long time ago (still) in a world of the prevailing Waterfall, I was fascinated by how Agile can simplify and streamline the process of software development. However, after some time committed to implementing the new approach in my organization, I was hit by the fact that even with Scrum (and all its fancy ceremonies) we can still produce code that – to use a euphemism – leaves much to be desired.

I started looking for some techniques to improve the quality of the created code and — as a result — the quality of the forged products and solutions. Clean code, automatic code testing, and Test-Driven Development were the most important missing parts I discovered. It was somehow satisfying to realize, after a while, that Extreme Programming, the Software Craftsmanship movement, and me, all had a common goal :)

Community of professionals


However, turning the ideas and theoretical knowledge into an efficiently working mechanism was not so easy. First, I needed to grasp (and preferably master) it on my own to be able to spread it successfully throughout the company as a whole. In addition to literature and the Internet, I was lucky enough to encounter some fantastic people and interesting local initiatives which allowed me to meet other developers thinking the same way, exchange knowledge (and ideas), and practice using it in a safe environment (a number of Coding Dojo sessions with a lot of katas done on my own, in a pair, or in a group). Equipped with those (practical) skills, I had a solid foothold to spread it within the company and pass it on to interested teammates. All with the ultimate goal in mind – to finally improve the quality of the created solutions and to make customers and developers more satisfied. It was not hassle-free, to say the least, and in the end not all of my goals were achieved, but some time later – pursuing my own path of a Software Craftsman – I moved on from that particular organization with the feeling that I had left some good things and ideas behind.

The main trigger for this — somewhat personal — blog post is the fact that, in December, 10 years ago, I gave my first public speech at the Warszawa JUG meeting. It featured the power of a full-text search mechanism embedded into a custom application with the help of the Compass library (which later on became a foundation of Elasticsearch). The talk wasn’t perfect but, based on the feedback, some of the attendees definitely learned something new. In addition, what was also very important, it showed me how much I can learn myself while preparing a presentation for others. So, I started to share the ideas in wider circles.


The topics covered quickly turned into my main specialization — code quality and automatic code testing (together with related aspects such as Continuous Delivery) — showing other people the way and encouraging them to try it on their own. Since that time, I have given dozens of talks at local user groups, small closed events, and large international conferences, both in Poland and across Europe. I (co-)organized some local meetups and larger conferences. I also wrote a number of (educational) blog posts. While I was gradually reducing my active commitment in the community (after all, a day has only 24 hours), a while after my initial presentation I also started to conduct training sessions for groups and organizations to spread (good) ideas and practical skills in depth. Being a trainer, in the course of time, turned into a regular part of my working life, but I still try to keep up with current techniques and tools, working hands-on with real projects and solving real problems. It also helps me to enrich my training with anecdotes and real-world examples from my own experience. I really like to demonstrate to less experienced fellow developers how their solutions could be more readable, more bullet-proof, and easier to maintain.

The Future

Is it still needed in 2020+, one might ask? Based on my experience from all those activities, the situation in general has improved over time. Unfortunately, there is still a lot to do. Automatic code testing is still not properly covered during studies, or is not presented to students at all. Large parts of commercially developed software cannot truly be called “high-quality”, and solid automatic code testing (not to mention Test-Driven Development) is not considered a MUST HAVE in all projects. Therefore, I continue my mission to make the world a (slightly) better place by preaching ideas that are worth spreading, building a community of professionals, and giving back to the people and communities who — at the beginning of my journey — were there to assist in my growth.

Are you able to change the world, one might ask? Good question. The IT world is quite big, and I cannot reach everyone. However, I am not the only one. Looking at my fellow trainers, for instance, I see a number of passionate professionals with various areas of expertise, caring – in different ways – about a high level of knowledge in the community, which makes the whole thing look (a little) less hopeless :).

Is it worth doing? Well, firstly, continuous learning — a requirement for being a good trainer (and speaker) — is definitely a benefit for me. Secondly, I want to work with sensible people on a daily basis; therefore, the more of them, the better. Thirdly, occasionally there are people who reach out to me in one way or another to say “Thank you” for the things I directly or indirectly did (which were somehow beneficial for them). It is really uplifting to see that you were able to help someone with something. In those moments, I clearly see that my work did not go entirely down the drain :).

P.S. Recommended reading: “The Software Craftsman: Professionalism, Pragmatism, Pride” by Sandro Mancuso
(Please note, the link above is an affiliate link connected to my Amazon account. Feel free to use the generic link instead if preferred.)

The lead photo by Anemone123 obtained from Pixabay.

Learn how to visualize complex input parameters in parameterized tests in a way that improves the readability of the test report.

Trimmed Hedge

By Tomwsulcer – CC0, Wikipedia


Parameterized tests can simplify verifying the same functionality with different input parameters. Spock, with its where block, data tables, and data pipes, makes them very easy to use in a readable way. The input parameters are nicely presented in a test execution report (in an IDE, Jenkins, or generated HTML). They can be formatted in a desired order and completed with a custom constant message. This usually works flawlessly for simple objects (such as numbers, booleans, enums, and strings). However, when complex objects are used (e.g. bigger value objects or custom classes), the whole output can be hijacked by just one very verbose parameter:

Very long full toString

or a meaningless default toString() implementation:

Meaningless Default toString in tests

Our sample code base

To present the different available approaches, I will use a very simplified version of an account & invoice related domain implemented with DDD in one of the projects I worked on.

The main class here is Invoice, which represents an invoice :). The object is immutable (here with a Groovy AST transformation, but it could also be achieved with Project Lombok or manually), which means that methods modifying its state return a new instance of the class.

class Invoice {

    enum Status {
        OPEN, PAID, CANCELLED
    }

    Status status
    BigDecimal issuedAmount
    BigDecimal remainingAmount
    LocalDate issueDate
    //some other fields

    //different production methods

    //production toString() with all useful business fields
}
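To make the “methods modifying state return a new instance” idea concrete, here is a minimal plain-Java sketch (hypothetical names, not the project’s actual code):

```java
import java.math.BigDecimal;

// Sketch: an immutable invoice whose state-changing method
// returns a modified copy instead of mutating the object.
final class ImmutableInvoice {
    private final BigDecimal remainingAmount;

    ImmutableInvoice(BigDecimal remainingAmount) {
        this.remainingAmount = remainingAmount;
    }

    BigDecimal remainingAmount() {
        return remainingAmount;
    }

    // Paying reduces the remaining amount in the returned copy;
    // the original instance stays unchanged.
    ImmutableInvoice pay(BigDecimal amount) {
        return new ImmutableInvoice(remainingAmount.subtract(amount));
    }
}
```

The same pattern is what the Groovy AST transformation (or Lombok) generates for us, just without the boilerplate.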

There is also an Account class with an amountToPay() method returning the amount to pay based on open invoices.

Naive approach (strongly not recommended)

As a first idea, one could be tempted to modify the toString() implementation to, for example, display only 2 of the 10 fields in a class. However, it is a bad idea to change the production toString() just to get better output in tests. What is more, other tests or error reporting in a production system may prefer to display more information. Luckily, in Spock we have two nice techniques to cope with it.

Technique 1 – an extra formatting method

Test/specification names in Spock can be enhanced with #parameterName (not with the $ character used internally by Groovy, which is not allowed in a method name) placed in a test name or in an @Unroll annotation. In addition, it is possible to use an object property value or call a parameterless method.

class AmountToPayInvoiceAccountSpec extends Specification {

    def "paid and cancelled invoices (#invoice.formatForTest()) should be ignored in current amount to pay"() {
        given:
            Account account = AccountTestFixture.createForInvoice(invoice)
        expect:
            account.amountToPay() == 0.0
        where:
            invoice << [paidInvoice, cancelledInvoice]
    }
}


//required modifications in production code
class Invoice {
    //...
    String formatForTest() {
        return "$issuedAmount: $status"
    }
}

Test specific production method - result

It’s nice to get just two fields from an object. However, in many cases we don’t want to add an artificial formatting method to production code just to be used in tests.

A tip. Don’t forget to enable unrolling of parameterized tests to instruct Spock to create a separate (sub)test for every input parameter set. It can be done manually by placing the @Unroll annotation on every parameterized method or at the class level. Alternatively, the spock-global-unroll extension can be used to turn it on automatically in the whole project.

Technique 2 – an extra input parameter

Luckily, as an alternative, it is possible to define another artificial input parameter directly in a test. It looks like an ordinary variable, but has access to the set of input parameters (for a given iteration) and can operate on them. That extra parameter is treated by Spock equally to the others (however, there is usually no need to reference it in the test code – besides the test name).

class AmountToPayInvoiceAccountSpec extends Specification {

    private Invoice first = createOpenForAmount(200)
    private Invoice second = createOpenForAmount(300)

    def "current amount to pay (#expectedToPayAmount) should ignore paid and cancelled invoices (#invoicesDesc)"() {
        given:
            Account account = AccountTestFixture.createForInvoices(invoices)
        expect:
            account.amountToPay() == expectedToPayAmount
        where:
            invoices                   || expectedToPayAmount
            [paid(first), second]      || 300.0
            [first, cancelled(second)] || 200.0
            [first, second]            || 500.0

            invoicesDesc = createInvoicesDesc(invoices)
    }
}


Implementation note. The methods createOpenForAmount(), paid(), and cancelled() are implemented in a test-specific InvoiceTestFixture class.

The result looks very nice:

Just from the report it is clearly visible that there is a (regression) issue with handling CANCELLED invoices. The assertion error is also helpful:


It’s worth noticing at this point that this technique can also be mixed with data pipes (in addition to data tables):

        where:
            invoice << [paidInvoice, cancelledInvoice]
            invoiceDesc = createInvoiceDesc(invoice)

A tip. Note that, unlike regular parameters in Spock, the artificial one is created with the = operator, not with <<.


The aforementioned techniques can be used to improve the readability of your test execution reports. This is useful during development, but what is even more important, it becomes indispensable if Spock is used for Behavior-Driven Development and the reports are read by so-called business people (i.e. they need to be worded in a specific way).

[OT] The reason to bring up this topic is that two colleagues of mine were recently struggling with this issue in their tests. Unfortunately, they overlooked that slide in my advanced Spock presentation at Gr8Conf EU ;). A blessing in disguise, I was in the office to support them immediately. Nevertheless, not so long ago I saw a presentation by Scott Hanselman about productivity. I liked the idea that every good question is worth answering on a blog. Replying privately (especially via email) usually helps just one person. Writing a blog post and sending that person the link can, in addition, help other people struggling with the same issue.

Self-promotion. Would you like to improve your and your team’s testing skills and knowledge of Spock quickly and efficiently? I conduct a condensed (unit) testing training which you may find useful.

Get to know how to create mocks and spies in an even more compact way with Spock 1.1.


Spock heavily leverages operator overloading (and Groovy magic) to provide very compact and highly readable specifications (tests) which wouldn’t be achievable in Java. This is clearly visible, among other places, in the whole mocking framework. However, while preparing my Spock training I found room for further improvement in the way mocks and spies are created.

Shorter and shorter pencils

The Groovy way

Probably the most common way to create mocks (and spies) among devoted Groovy & Grails users is:

def dao = Mock(Dao)

The type inference in an IDE works fine (there is type-aware code completion). Nonetheless, this syntax is usually less readable for Java newcomers (people using Spock to test production code written in Java) and in general for people preferring strong typing (including me).

The Java way

The same mock creation in the Java way would look as follows:

Dao dao = Mock(Dao)

The first impression about this code snippet is – very verbose. Well, it is the Java way – why should we expect anything more? ;)

The shorter Java way

As I already mentioned, Spock leverages Groovy magic, and the following construction works perfectly fine:

Dao dao = Mock()

Under the hood, Spock uses the type on the left side of an assignment to determine the type for which a mock should be created. Nominally everything is ok. Unfortunately, there is one awkward limitation:


The IDE complains about an unsafe type assignment and, without getting deeper into the logic used by Spock, the complaint is justified. Luckily, the situation is not hopeless.

The shorter Java way – Spock 1.1

Preparing practical exercises for my Spock training some time ago gave me an excuse to get into the implementation details, and after a few minutes I was able to improve the code to make it work cleanly in an IDE (after a few years of living with that limitation!).

Dao dao = Mock()


No warning in IDE anymore.


Multiple times in my career I have experienced the well-known truth that preparing a presentation is very educational for the presenter as well. In the case of a new 3-day training it is even more noticeable – attendees have much more time to ask you uncomfortable questions :). Not for the first time, my preparations resulted in a new feature or an enhancement in some popular libraries or frameworks.

The last code snippet requires Spock in version 1.1 (which, at the time of writing, is available as release candidate 3 – 1.1-rc-3) to not trigger a warning in the IDE. There are a lot of new features in Spock 1.1 – why wouldn’t you give it a try? :)

Picture credits: GDJ from

Mockito uses a lazy approach for stubbing: when an unstubbed method is called, it returns a default value instead of throwing an exception (like EasyMock does). This is very useful for not overspecifying tests.

A default return value depends on the return type of the stubbed method. For methods returning collections we get an empty collection, for numbers – 0, for booleans – false, and for ordinary objects – null (in Mockito 2.0 the set of non-null values will be extended – this can also be achieved with 1.9.x and the ReturnsMoreEmptyValues answer).
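As a rough illustration of that rule (a plain-Java sketch with a made-up helper name, not Mockito’s actual code), the default could be chosen based on the return type like this:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Illustrative sketch: picking a default value based on
// the return type of an unstubbed method.
class DefaultValues {
    static Object defaultFor(Class<?> type) {
        if (type == boolean.class || type == Boolean.class) {
            return false;                   // booleans -> false
        }
        if (type == int.class || type == Integer.class) {
            return 0;                       // numbers -> 0
        }
        if (List.class.isAssignableFrom(type)) {
            return Collections.emptyList(); // collections -> empty collection
        }
        if (Map.class.isAssignableFrom(type)) {
            return Collections.emptyMap();
        }
        return null;                        // ordinary objects -> null
    }
}
```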

Mockito logo

Before we go any further, a quick introduction to Answers (you can skip to the next paragraph if Answers are an open book to you). In addition to simple stubbing based on a desired value passed to Mockito directly:

given(tacticalStationMock.getNumberOfEnemyShipsInRange()).willReturn(2);
or for consecutive calls:

given(tacticalStationMock.getNumberOfEnemyShipsInRange()).willReturn(2, 3);

the stubbing API provides a way to pass an object with the logic determining what should be returned in a given case (based on the method arguments or even internal state (for consecutive calls)). A simple practical example, always returning the first argument passed to the called method:

public class ReturnFirstArgumentAnswer implements Answer<Object> {
    public Object answer(InvocationOnMock invocation) throws Throwable {
        Object[] arguments = invocation.getArguments();
        if (arguments.length == 0) {
            throw new MockitoException("...");
        }
        return arguments[0];
    }
}

A sample usage when stubbing:

given(mock.methodToStub("arg1", "arg2"))
    .willAnswer(new ReturnFirstArgumentAnswer());

Mockito provides a set of built-in answers. Some of them (like ThrowsException or CallsRealMethods) are used by Mockito internally, but some others (like ReturnsArgumentAt introduced in 1.9.5) can also be useful for developers writing tests.

Let’s return to the main topic. Sometimes it is useful to change those default values. In addition to using the answer mechanism for stubbing specific method calls, Mockito provides a way to specify an answer which will be used for every unstubbed method execution on a given mock. To do so, we can use the static mock() method which, in addition to a class to mock, takes an additional parameter – a default answer.

mock(SpaceShip.class, Mockito.RETURNS_DEFAULTS);

As returning defaults is the default behavior in Mockito, the above code is just a more explicit version of:

mock(SpaceShip.class);
but we can use this construction to achieve a few interesting behaviors. One of the predefined answers provided by Mockito is RETURNS_DEEP_STUBS. It causes automatic stubbing of chained method calls and allows us to do the following:

SpaceShip spaceShipMock = mock(SpaceShip.class, Mockito.RETURNS_DEEP_STUBS);
given(spaceShipMock.getTacticalStation().getNumberOfEnemyShipsInRange()).willReturn(2);

Please note that with the default configuration this would cause a NullPointerException, due to the fact that spaceShipMock.getTacticalStation() would return null. With RETURNS_DEEP_STUBS, Mockito under the hood creates a mock for every intermediate method call. This is an equivalent of:

//NOTE. Deep stubs implemented manually - no more needed with RETURNS_DEEP_STUBS.
//See the previous example with an equivalent functionality in 2 lines.
SpaceShip spaceShipMock = mock(SpaceShip.class);
TacticalStation tacticalStationMock = mock(TacticalStation.class);
given(spaceShipMock.getTacticalStation()).willReturn(tacticalStationMock);
given(tacticalStationMock.getNumberOfEnemyShipsInRange()).willReturn(2);
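To build some intuition about what “a mock for every intermediate method call” means, here is a toy sketch using java.lang.reflect.Proxy (illustrative only – made-up interfaces, not Mockito’s actual implementation): every call returning an interface type yields another auto-created stand-in, while leaf calls fall back to default values:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Toy illustration of the deep-stub idea: calls returning an interface
// get another auto-created "mock" (here: a dynamic proxy), so a chain
// of calls never hits null.
class DeepStubSketch {
    interface TacticalStation { int getNumberOfEnemyShipsInRange(); }
    interface SpaceShip { TacticalStation getTacticalStation(); }

    @SuppressWarnings("unchecked")
    static <T> T deepStub(Class<T> type) {
        InvocationHandler handler = (proxy, method, args) -> {
            Class<?> returnType = method.getReturnType();
            if (returnType.isInterface()) {
                return deepStub(returnType); // intermediate call -> another proxy
            }
            if (returnType == int.class) {
                return 0;                    // leaf call -> default value
            }
            return null;
        };
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(), new Class<?>[]{type}, handler);
    }
}
```

With this sketch, deepStub(SpaceShip.class).getTacticalStation().getNumberOfEnemyShipsInRange() returns a default value instead of throwing a NullPointerException – the same effect RETURNS_DEEP_STUBS achieves with real mocks.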

As a bonus, deep stubbing allows performing a verification (only) on the last mock in the chain:

verify(spaceShipMock.getTacticalStation()).getNumberOfEnemyShipsInRange();
Another provided answer is RETURNS_MOCKS. It first tries to return a default value using the ReturnsMoreEmptyValues answer (an extended version of the default ReturnsEmptyValues), but if that fails, a mock is returned. Only in a situation where the return type cannot be mocked (e.g. it is final) is null returned.

mock(OperationsStation.class, Mockito.RETURNS_MOCKS);

Sometimes it can be useful to stub specified methods but delegate the remaining calls to the real implementations. It can be done with CALLS_REAL_METHODS. This is useful, for example, when testing an abstract class (just its implemented methods, without the need to create a concrete subclass).

mock(AbstractClass.class, Mockito.CALLS_REAL_METHODS);

Please note that using RETURNS_DEEP_STUBS, RETURNS_MOCKS, and CALLS_REAL_METHODS should not be needed when dealing with well-crafted code, written with the use of Test-Driven Development. Nevertheless, sometimes it is necessary to write tests for legacy code before attempting to refactor it.

Among the set of default answers, there is also a very useful RETURNS_SMART_NULLS option. It returns a SmartNull class instance instead of plain null, which provides a hint about which mock stubbing was not performed correctly (and caused the NPE). I wrote more about this mode some time ago in Beyond the Mockito Refcard #1.

When defining a default answer, we can use any class which implements the org.mockito.stubbing.Answer interface – either provided by Mockito or hand-written. One more tip: in case you would like to use RETURNS_SMART_NULLS or ReturnsMoreEmptyValues globally for all mocks in your application, you can check out the trick with MockitoConfiguration.

Btw, in case you are starting an adventure with Mockito, want to learn more, or just want to organize your knowledge, you may be interested in my Mockito Refcard, available for free.

Btw2, if you are new to Mockito and live near Warszawa, you can consider attending my lecture/workshop about Mockito at Jinkubator – 18 II 2014 (next Tuesday).


This post is the fourth part of the series Beyond the Mockito Refcard, extending my Mockito reference card released some time ago.

The Happiness Door is a method of collecting immediate feedback which I read about some time ago on Jurgen Appelo’s blog. I used it this year during my training sessions and it worked very well, so I would like to popularize it a little bit.

The method requires selecting a strategically located place (like the second leaf of the exit door) with a marked scale (I use 5 smileys, from a very sad to a very happy one) and asking people to put the distributed sticky notes at a level corresponding to their satisfaction with the session. They are encouraged to add concrete comment(s) explaining the given score (like “boring” or “too few practical exercises”), but it is completely valid to just attach an empty card in the selected place. The mentioned issues can be discussed with the whole group to determine how best to improve a given thing. I start collecting feedback before the lunch break on the first training day and gently remind people about it on every following break.

Feedback after my testing training

After my ‘Effective code testing’ training. (Almost) all attendees were pleased again :-).

Update 2015: I decreased the frequency of obligatory feedback to twice a day (after the lunch break and at the end of the day) to save some trees and give the attendees more time to consider things.

The main advantage of using this method is getting both instant numerical feedback (how much people like the session) and concrete comments (what exactly they (dis)like). The feedback is gathered very quickly, while there is still room for improvements (in contrast to a more formal survey at the end of the training). I have received numerous comments from attendees that they like this method as well, and I plan to use it in my further sessions.

In my courses I even introduced a small enhancement to the original method. Every day I hand out sticky notes in a different color. This makes it easy to distinguish feedback given on a particular day and identify a trend. In the photos below, for example, it is clearly visible that after the feedback I got on the first day (yellow cards) I was able to adapt my training to the group’s level and expectations (blue cards).

Feedback after my training - day 1 - The Happiness Door method

Day 1 – a moderate result – the attendees hadn’t received the training program from their company and expected something completely different…

Feedback after my training - day 2 - The Happiness Door method

Day 2 – a visible uptrend – I heavily diverged from the program to follow people’s expectations. Day 3 was even better :-)

This spring was quite busy for me as a trainer. I was a mentor at Git Kata – a free Git workshop, gave a talk about testing asynchronous calls at “6 tastes of testing – flashtalks”, and recently ran a short workshop about Mockito at Test Kata. In the meantime I conducted two 3-day training sessions about writing “good code” and have one more Test-Driven Development session planned for the end of June. All of this alongside my main occupation – writing good software and helping team members do the same. What is more, I recently received the very pleasant news that my presentation proposal about Mutation Testing was accepted, and at the beginning of July I will close this training season speaking at Confitura 2013 (which, nota bene, was sold out (1200 tickets!) in less than 2 days). See you at Confitura.

Confitura 2013 - Speaker

Confitura 2013 – Speaker

Last Saturday, together with 8 other mentors, I was showing various Git-related techniques at the Git kata event to over 80 people.

Git kata was a free Git workshop conducted in a kata form. Paraphrasing Wikipedia: “A Git kata is an exercise in using Git which helps a user hone their skills through practice and repetition”. During the sessions a mentor was showing selected Git aspects in practice, providing listeners with detailed comments on each performed step. The attendees could follow the master’s steps using their own laptops.

There were various Git techniques covered, including:
– undoing changes (reset, revert, reflog),
– useful aliases, configuration tricks, Git internals, and a Git prompt with the fish shell,
– collaboration with others using public services (like GitHub, Bitbucket, GitLab, Gitorious) or patches via email/USB,
– submodules, filter-branch, and rerere.

In the past I led various programming katas, but I had never heard of the idea of using the format with Git. It sounded very interesting when I was asked to join the mentor team at Git kata. I hope I helped some people better understand the internals of Git commands. In addition, having two free slots, I got to know (among other things) about rerere, which can reduce the number of manual merges (have you ever heard of it before?), the “branch.autosetuprebase always” flag, which sets rebase as the default strategy on pull for newly created branches, and the “help.autocorrect 1” flag, which automatically applies “did you mean” suggestions in case of a typo. The event was very successful and I wonder whether to extend my training portfolio with a Git course.