Posts Tagged ‘presentation’

Stubbing methods returning java.util.Optional with Spock is trickier than you would probably expect. Learn how to do it efficiently.

by Infrogmation, Wikimedia Commons, public domain

Introduction

One of the nice features of the mocking framework in Spock is its ability to return sensible default values for unstubbed method calls made on stubs: an empty list for a method returning List, 0 for long, etc. Very handy if you don’t care about the returned value in a particular test, but, for example, would like to prevent a NullPointerException later in the flow. Unfortunately, Spock 1.0 and 1.1-rc-2 (still compatible with Java 6) are completely unaware of types added in Java 8 (such as Optional or CompletableFuture). You may say “no problem”, null is acceptable in many cases, but with Optional the situation is even worse.
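
As a minimal sketch of those defaults in action (using a hypothetical NumberDao interface, not taken from the code discussed below), an unstubbed call on a Stub could behave like this:

interface NumberDao {
    List<Long> findAllIds()
    long countAll()
}

def "should return sensible defaults for unstubbed calls on a stub"() {
    given:
        NumberDao dao = Stub()
    expect:
        dao.findAllIds() == []   //empty list instead of null
        dao.countAll() == 0      //zero for a primitive long
}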

Issue

Imagine the following code – a method returning Optional and an attempt to use it in a test:

interface Dao<T> {
    Optional<T> getMaybeById(long id)
}

@Ignore("Broken")
def "should not fail on unstubbed call with Optional return type"() {
    given:
        Dao<Order> dao = Stub()
    when:
        dao.getMaybeById(5)
    then:
        noExceptionThrown()
}

You may expect null to be returned by the getMaybeById() call, but that is not what happens.

Expected no exception to be thrown, but got 'org.spockframework.mock.CannotCreateMockException'

    at spock.lang.Specification.noExceptionThrown(Specification.java:119)
    at info.solidsoft.blog.spock10.other.CustomDefaultResponseSpec.should not fail on unstubbed call with Optional return type(CustomDefaultResponseSpec.groovy:19)
Caused by: org.spockframework.mock.CannotCreateMockException: Cannot create mock for class java.util.Optional because Java mocks cannot mock final classes. If the code under test is written in Groovy, use a Groovy mock.
    at org.spockframework.mock.runtime.JavaMockFactory.createInternal(JavaMockFactory.java:49)
    at org.spockframework.mock.runtime.JavaMockFactory.create(JavaMockFactory.java:40)
(...)

The test fails at runtime as Spock is not able to stub java.util.Optional which is a final class:

CannotCreateMockException: Cannot create mock for class java.util.Optional
    because Java mocks cannot mock final classes.

What can we do?

Two workarounds

The EmptyOrDummyResponse factory class (which tries to be smart) is used by default for stubs when an unstubbed method is called. However, it can be changed on demand during stub creation:

def "should not fail on unstubbed call with Optional return type - workaround 1"() {
    given:
        Dao<Order> dao = Stub([defaultResponse: ZeroOrNullResponse.INSTANCE])
    when:
        dao.getMaybeById(5)
    then:
        noExceptionThrown()
}

This test passes (getMaybeById() simply returns null), but there is an easier way to achieve the same result.

Spock uses EmptyOrDummyResponse only for stubs (created with the Stub() method). For mocks (created with the Mock() method) the ZeroOrNullResponse factory is used (which makes sense, as mocks should focus on interaction verification, not just stubbing). Thanks to that, the smart logic trying to return sensible default values can be disabled in a much simpler way:

def "should not fail on unstubbed call with Optional return type - workaround 2"() {
    given:
        Dao<Order> dao = Mock()
    when:
        dao.getMaybeById(5)
    then:
        noExceptionThrown()
}

However, this workaround is far from perfect. Firstly, your colleagues may be surprised that a mock is created while only stubbing is performed (by the way, both stubbing and verifying interactions on the same mock is tricky in itself in Spock, but that is a topic for another blog post). Secondly, wouldn’t it be nice to have an empty Optional (instead of null) returned by default?

Solution

In addition to the aforementioned way of using predefined factories for default return values, Spock provides the ability to write a custom one. Let’s create an EmptyOrDummyResponse-like factory which is aware of Java 8 types. In fact, the implementation is very straightforward:

class Java8EmptyOrDummyResponse implements IDefaultResponse {

    public static final Java8EmptyOrDummyResponse INSTANCE = new Java8EmptyOrDummyResponse()

    private Java8EmptyOrDummyResponse() {}

    @Override
    public Object respond(IMockInvocation invocation) {
        if (invocation.getMethod().getReturnType() == Optional) {
            return Optional.empty()
        }
        //possibly CompletableFuture.completedFuture(), dates and maybe others

        return EmptyOrDummyResponse.INSTANCE.respond(invocation)
    }
}

Our class implements the IDefaultResponse interface with its single respond() method. Inside, we can apply custom logic for Optional, CompletableFuture and possibly other Java 8 specific types. As a fallback (for “standard” types) we delegate to the original EmptyOrDummyResponse. This code works as expected:

@SuppressWarnings("GroovyPointlessBoolean")
def "should return empty Optional for unstubbed calls"() {
    given:
        Dao<Order> dao = Stub([defaultResponse: Java8EmptyOrDummyResponse.INSTANCE])
    when:
        Optional<Order> result = dao.getMaybeById(5)
    then:
        result?.isPresent() == false    //NOT the same as !result?.isPresent()
}

Please pay attention to the Groovy truth rules when making assertions on an Optional. !result?.isPresent() would also be satisfied by null returned from the method.
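
A quick plain-Groovy illustration of that pitfall (no Spock involved, just the safe navigation operator and Groovy truth):

Optional<String> empty = Optional.empty()
Optional<String> missing = null   //simulates null returned from an unstubbed call

assert !empty?.isPresent()               //passes
assert !missing?.isPresent()             //also passes - null?.isPresent() is null and !null is true

assert empty?.isPresent() == false       //passes
assert !(missing?.isPresent() == false)  //null == false is not true, so the explicit comparison rejects null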

However, maybe it would be good to simplify the Java 8 aware stub creation a little bit? To do that, an extra method can be created:

private <T> T Stub8(Class<T> clazz) {
    return Stub([defaultResponse: Java8EmptyOrDummyResponse.INSTANCE], clazz)
}

@SuppressWarnings("GroovyPointlessBoolean")
def "should return empty Optional for unstubbed calls with Stub8"() {
    given:
        Dao<Order> dao = Stub8(Dao)
    when:
        Optional<Order> result = dao.getMaybeById(5)
    then:
        result?.isPresent() == false    //NOT the same as !result?.isPresent()
}

Unfortunately, in that case the enhanced, more compact stub creation syntax available in Spock 1.1 cannot be used with our Stub8() method, because Spock is not able to determine the type by looking at the left side of the assignment. In the end, however, it is still much shorter than setting defaultResponse on every stub creation.
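
Side by side (reusing the Dao interface from above), the difference boils down to:

Dao<Order> dao = Stub()        //compact syntax - type determined from the left side of the assignment
Dao<Order> dao8 = Stub8(Dao)   //with Stub8() the class has to be passed explicitly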

Please note that due to Spock limitations that method cannot be put in a trait (or a separate class) and has to be defined in the current test class or in a custom base (super) class for all the tests (which itself extends spock.lang.Specification), e.g.:

abstract class Java8AwareSpecification extends Specification {
    protected <T> T Stub8(Class<T> clazz) { ... }
}

class MyFancyTest extends Java8AwareSpecification { ... }

Summary

By exploring some Spock internals related to stub and mock creation, it was possible to enhance the default strategy of smart responses for unstubbed calls to nicely support Java 8 types. This is just one of the topics I covered in my advanced “Interesting nooks and crannies of Spock you (may) have never seen before” presentation given at Gr8Conf Europe 2016. You may want to see it :-).

Btw, the good news is that the upcoming Spock 1.1(-rc-3) will contain native support for returning sensible default values for unstubbed Optional method calls.

Self-promotion: Would you like to improve your and your team’s testing skills and knowledge of Spock quickly and efficiently? I conduct a condensed (unit) testing training which you may find useful.

AssertJ and Awaitility are two of my favorite tools used in automated code testing. Unfortunately, until recently it was not possible to use them together. But then Java 8 entered the game and several dozen lines of code were enough to make it happen in Awaitility 1.6.0.

Awaitility logo

AssertJ provides a rich set of assertions with very helpful error messages, all available through a fluent, type-aware API. Awaitility allows you to express expectations for asynchronous calls in a concise and easy to read way, leveraging an active wait pattern which shortens the duration of tests (no more sleep(5000)!).

The idea to use them together came to my mind a year ago when I was working on an algo trading project using Complex Event Processing (CEP) and I didn’t want to learn Hamcrest assertions just for asynchronous tests with Awaitility. I was able to build a working PoC, but it required some significant duplication of AssertJ (then FEST Assert) code, so I shelved the idea. A month ago I was preparing my presentation about asynchronous code testing for the 4Developers conference and asked myself a question: how could Java 8 simplify the usage of Awaitility?

For the following examples I will use an asynchronousMessageQueue which can send a ping request and return the number of received packets. One of the ways to test it with Awaitility in Java 7 (besides a proxy-based condition) is to create a Callable instance:

    @Test
    public void shouldReceivePacketAfterWhileJava7Edition() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(receivedPackageCount(), equalTo(1));
    }

    private Callable<Integer> receivedPackageCount() {
        return new Callable<Integer>() {
            @Override
            public Integer call() throws Exception {
                return asynchronousMessageQueue.getNumberOfReceivedPackets();
            }
        };
    }

where equalTo() is a standard Hamcrest matcher.

The first idea to reduce verbosity is to replace Callable with a lambda expression and inline the private method:

    @Test
    public void shouldReceivePacketAfterWhile() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(() -> asynchronousMessageQueue.getNumberOfReceivedPackets(), equalTo(1));
    }

Much better. Going further, the lambda expression can be replaced with a method reference:

    @Test
    public void shouldReceivePacketAfterWhile() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(asynchronousMessageQueue::getNumberOfReceivedPackets, equalTo(1));
    }

Someone could go even further and remove the Hamcrest matcher:

    @Test
    public void shouldReceivePacketAfterWhile() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(() -> asynchronousMessageQueue.getNumberOfReceivedPackets() == 1);  //poor error message
    }

but while it still works, the error message becomes much less meaningful:

ConditionTimeoutException: Condition with lambda expression in
AwaitilityAsynchronousShowCaseTest was not fulfilled within 2 seconds.

instead of very clear:

ConditionTimeoutException: Lambda expression in AwaitilityAsynchronousShowCaseTest
that uses AbstractMessageQueueFacade: expected <1> but was <0> within 2 seconds.

The solution is to use an AssertJ assertion inside the lambda expression:

    @Test
    public void shouldReceivePacketAfterWhileAssertJEdition() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(() -> assertThat(asynchronousMessageQueue.getNumberOfReceivedPackets()).isEqualTo(1));
    }

and thanks to the new AssertionCondition, initially hacked together within a few minutes, it became a reality in Awaitility 1.6.0. Of course, the AssertJ fluent API and meaningful failure messages for different data types are preserved.

As a bonus, all assertions that throw AssertionError (particularly TestNG and JUnit standard assertions) can be used in the lambda expression as well (but I don’t know anyone who went back to “standard” assertions after discovering the power of AssertJ).

The nice thing is that the change itself leverages the Runnable class to implement the lambda and AssertJ support, so Awaitility 1.6.0 stays Java 5 compatible. Nevertheless, for the sake of readability, it is only sensible to use the new constructs in Java 8 based projects.

Btw, here are the “slides” from my presentation at 4Developers.

4developers logo

The Happiness Door is a great method of collecting instant feedback. I have used it successfully in my training sessions and recently tried it for the first time after a conference speech. In this post I present my case study and the reasons why I will definitely use it at upcoming events.

Part 1. From the beginning. It has become a tradition to end Warszawa JUG’s meeting seasons at the turn of June and July in great style – with Confitura – the largest free conference about Java and the JVM in Poland. This year the seventh edition was planned for about 1000 people and it took less than 2 days to sell out all the tickets (including overbooking). 5 parallel tracks with 35 presentations on various topics: from JVM tuning, different languages on the JVM and a lot of frameworks, through JavaScript, Android, databases, architecture and testing, to motivating people, Gordon Ramsay and company management in the 21st century. Too much to embrace in a day, but fortunately all presentations were recorded and will be available online.

My presentation was about mutation testing in the Java environment with PIT. I have already written a few posts on that topic. In a nutshell – it is a nice way to check how good your tests really are. Writing testable code was well represented at Confitura – I counted four more testing-related presentations.

Part 2. The Happiness Door method (if you are new to this method, read my previous post first). Before my presentation, sticky notes were distributed across the room and 5 smiley faces were put on the door. At the beginning I explained to the audience how the method works and why their feedback was so important to me. When leaving the room, about 50% of the attendees gave numerical feedback. Almost half of them added some comments (from “nice talk” to an essay placed on both sides of a sticky note :) ). Some of them were very interesting. Thanks!

I’m very happy I used this method for my speech. For the first time, just minutes after a presentation, I knew what people thought about it. In my case it was: “quite good, but there is still room for improvement”.

The Happiness Door (one leaf) after my speech at Confitura 2013

Here is a good place to thank Małgorzata Majewska and Anna Zajączkowska who helped me polish many aspects of my speech.

The interesting thing is that when I got an email from the organizers with the feedback collected via the online survey, the average score was very similar – it just arrived a month later, so why wait? With The Happiness Door method it is possible to get to know it immediately (and even with a larger sample).

Btw, it is not easy to prepare for a presentation (there were, as always, some problems with the projector) and put a sticky note at every seat in the audience room (~150 seats) in 15 minutes. Thanks to Edmund Fliski and Dominika Biel for their help with the distribution.

Btw2, after the talk I had very interesting conversations with people wanting to use mutation testing in their projects, people already experimenting with PIT, and Konrad Hałas – the guy from Warsaw who wrote MutPy – a mutation testing tool for Python.

Part 3. The slides for my speeches evolve over time. Two years ago my slides were full of bullet lists – the feedback: boring, too much text. Recently I gave an internal presentation with fewer than 10 slides containing no text, just images – the feedback: hard to follow, too little text. In the meantime I experimented with a presentation based on a mind map – mixed feedback. This time I combined images and text – some people liked it, some did not :). Be the judge. The slides (in Polish) are available for download.

Appendix. Thanks to Chris Rimmer for the idea of presenting IT topics using metaphors related to non-IT events and people.

I hope I have encouraged you to give The Happiness Door method a try. Please share your experience in the comments.

P.S. Good news for all the people who missed my presentation at Confitura. My proposal was accepted by the Program Committee and I will be speaking in Kraków at JDD 2013 (October 14-15th).

JDD 2013 – Speaker

The Happiness Door is a method of collecting immediate feedback which I read about some time ago on Jurgen Appelo’s blog. I used it this year during my training sessions and it worked very well. I would like to popularize it a little bit.

The method requires selecting a strategically located place (like the second leaf of the exit door) with a marked scale (I use 5 smileys, from a very sad to a very happy one) and asking people to put the distributed sticky notes at a level corresponding to their satisfaction with the session. They are encouraged to add concrete comments explaining the given score (like “boring” or “too few practical exercises”), but it is completely valid to just attach an empty card at the chosen level. The mentioned issues can then be discussed with the whole group to determine how a given thing could best be improved. I start collecting feedback before the lunch break on the first training day and gently remind people about it on every following break.

The main advantage of this method is getting both instant numerical feedback (how much people like it) and concrete comments (what exactly they (dis)like). The feedback is gathered very quickly, while there is still room for improvement (in contrast to the more formal survey at the end of the training). I have received numerous comments from attendees that they like this method as well, and I plan to keep using it in my further sessions.

In my courses I even introduced a small enhancement to the original method. Every day I give away sticky notes in a different color. This makes it easy to distinguish feedback given on a particular day and identify a trend. In the photos below, for example, it is quite visible that after the feedback I got on the first day (yellow cards) I was able to adapt my training to the group’s level and expectations (blue cards).

Feedback after my training - day 1 - The Happiness Door method

Day 1 – a moderate result

Feedback after my training - day 2 - The Happiness Door method

Day 2 – a visible uptrend



This spring was quite busy for me as a trainer. I was a mentor at Git Kata – a free Git workshop, gave a talk about testing asynchronous calls at “6 tastes of testing – flashtalks” and recently ran a short workshop about Mockito at Test Kata. In the meantime I conducted two 3-day training sessions about writing “good code” and plan one more Test Driven Development session at the end of June. All of this alongside my main occupation – writing good software and helping team members do the same. What is more, I recently received the very pleasant news that my presentation proposal about mutation testing was accepted, and at the beginning of July I will close this training season speaking at Confitura 2013 (which, nota bene, was sold out (1200 tickets!) in less than 2 days). See you at Confitura.

Confitura 2013 – Speaker

Recently there was an interesting and unusual event in Warsaw for all testing enthusiasts – “6 tastes of testing – flashtalks”. Instead of the one long presentation common at WJUG meetings, 6 speakers gave 6 flashtalks (~15-minute presentations) about different aspects of testing.

The audience could listen about:

  • FEST Assertions – a set of assertions with a fluent API for TestNG and JUnit by Michał Lewandowski,
  • JUnitParams – a better way to write parameterized tests with JUnit by Paweł Lipiński,
  • Spock – a testing framework setting new standards for Java and Groovy code by Jakub Nabrdalik,
  • Geb – a Groovy way of acceptance testing with WebDriver and the Page Object Pattern by Tomasz Kalkosiński,
  • testing asynchronous calls with Awaitility by Marcin Zajączkowski (a.k.a. me),
  • an idea of how to use IoC and Guice in tests by Paweł Cesar Sanjuan Szklarz.

In my own speech I experimented with the presentation format. Instead of classic Impress/PowerPoint slides I used Vym – View Your Mind – an interesting mind mapping tool with a built-in slide editor. It allowed me to “animate” my mind map and talk about each of its points (nodes). By the way, I reported a dozen feature requests for Vym, so there is a chance the slide editor will be even better for the next presentation :). The downside of using such a tool is the limited way it can be exported to a neutral format. The picture below contains the whole map, but without code snippets.

Sleepless asynchronous calls testing with Awaitility

The people I asked were pleased with the presentations. They were cross-sectional and touched on many different aspects briefly, but even though I knew the covered topics before, I noted down a few useful tips. The venue was packed (about 100 people). In addition to interesting topics there was a chance to eat pizza and win an SSD drive. The event finished with a networking session in a nearby restaurant. For those who could not attend, a video is available (in Polish) on the WJUG’s YouTube channel.

Last Saturday, together with 8 other mentors, I was showing various Git-related techniques at the Git kata event to over 80 people.

Git kata was a free Git workshop conducted in a kata form. Paraphrasing Wikipedia: “A Git kata is an exercise in using Git which helps a user hone their skills through practice and repetition”. During the sessions a mentor showed selected Git aspects in practice, providing listeners with detailed comments on each performed step. The attendees could follow the master’s steps on their own laptops.

There were various Git techniques covered, including:
– undoing changes (reset, revert, reflog),
– useful aliases, configuration tricks, Git internals and a git prompt with the fish shell,
– collaboration with others using public services (like GitHub, Bitbucket, GitLab, Gitorious) or patches via email/USB,
– submodules, filter-branch and rerere.

In the past I have led various programming katas, but I had never heard of the idea of using this format with Git. It sounded very interesting when I was asked to join the mentor team at Git kata. I hope I helped some people better understand the internals of Git commands, and in addition, having two free slots, I got to know (among other things) about rerere, which can reduce the number of manual merges (had you even heard about it before?), the “branch.autosetuprebase always” flag, which sets rebase as the default strategy on pull for newly created branches, and the “help.autocorrect 1” flag, which automatically applies “did you mean” suggestions in case of a typo. The event was very successful and I wonder whether to extend my training portfolio with a Git course.

If there were a map showing the locations of people interested in Java and JVM-related technologies, last Saturday in Warsaw’s Dolny Mokotów district one could have observed a very bright spot – the attendees of the Confitura 2011 conference. According to the organizers, about 800 people showed up at the conference this year. It is pleasing that such a large crowd wanted to come on a Saturday (and sometimes even travel from afar) to spend the whole day learning interesting things and deepening their knowledge. Congratulations and thanks to the people involved in preparing the event.

Below I am posting the slides from my presentation about writing good code. Thank you to everyone who attended in such numbers, and I repeat my request to send remarks and suggestions so that I know which elements I should still particularly work on (positive ones can go in the comments, strongly negative ones please send privately ;) ).

Tworzenie dobrego kodu dla niewtajemniczonych (Creating good code for the uninitiated) – Confitura 2011 – slides

The largest free community conference devoted to technologies from the Java world in Poland will take place on June 11th, 2011 in Warsaw – this year for the first time under a new name: Confitura (formerly Javarsovia). Judging by the first days of registration, attendance has a chance to beat last year’s (over 650 people).

So many people eager to learn something new gathered in one place is certainly pleasing, but it also raises some concern in me, as one of the more than twenty accepted presentations will be mine, on the fundamentals of Software Craftsmanship and writing good code (fortunately several tracks are planned, so nobody will be stuck with me :) ). This talk (following presentations at WIT and the Warsaw University of Technology) is part of my recent attempt to promote (especially among students and people starting their adventure with programming) ideas and techniques that help create better software in a more enjoyable way.

More information (including the possibility to register) can be found on the conference website.

Confitura Logo

Today I had the pleasure of giving a guest presentation about writing good code for the uninitiated at the computer science faculty of Wyższa Szkoła Informatyki Stosowanej i Zarządzania WIT. I was pleasantly surprised by the effective promotion of the event and the determination of the students who, despite the late afternoon hour (almost 6 p.m.) and the lecture being optional, showed up more than 50 strong. Judging by their faces, most of them were keenly interested in the topic.
Unfortunately, in Poland the Software Craftsmanship movement is still not very popular – even less so than knowledge of Agile methodologies. Students are often not aware at all that an alternative to formal methodologies exists and that working code is not everything. I am glad that I could present a few aspects that help in writing good code, and I hope that I encouraged at least a few people to read Robert C. Martin’s “Clean Code”, which in my opinion should be mandatory reading in every programming curriculum.

If you attended the presentation, you can share your opinion in a comment.