A simple real-life problem: the proper service implementation is taken from a map based on a given service key, and for an unknown/unsupported key a meaningful exception should be thrown. How could it be implemented without an if statement?

Business context. PriceProvider allows getting the price for a requested product. A concrete implementation calls the proper provider via a Web Service (REST or SOAP). There is a delegate/dispatcher which, depending on a given provider name/id, selects the proper implementation.

Implementation note. Mapping a selected provider to the corresponding provider adapter implementation can be done in many ways: an external properties file, records in a database or even a separate web interface. However, in many cases those connections are rather permanent and do not need to be changed at runtime. Then a simple map is a very compact and efficient solution. And no, 6 “if..else..if..” statements are not an option :).

There is a map with pairs: provider id -> provider service. In Java it could be initialized, for example, with:

Map<PriceProviderKey, PriceProvider> map = new EnumMap<>(PriceProviderKey.class);
map.put(PriceProviderKey.PROVIDER1, provider1Service);
map.put(PriceProviderKey.PROVIDER2, provider2Service);
(...)

On a checkPrice() call the proper provider service should be selected to perform the request:

// PriceProviderDelegate

public PriceResult checkPrice(PriceRequest request) {
    PriceProvider provider = map.get(request.getProvider());
    if (provider == null) {	//Not very elegant
        throw new UnsupportedOperationException(
            String.format("Operator %s is not supported", request.getOperator()));
    }
    return provider.checkPrice(request);
}

I don’t like ifs like that. The first idea could be the good old Null Object pattern:

class DefaultPriceProvider implements PriceProvider {
    @Override
    public PriceResult checkPrice(PriceRequest request) {
        //We don't have access to the provider id which causes this exception
        throw new UnsupportedOperationException("Provider ??? is not supported");
    }
}

The problem is that we need to implement all methods from the interface and, what is worse, we don’t know which provider id caused the situation, so we cannot put it in the exception message.

Then I recalled that this code is in Groovy and my thoughts went to closure coercion. In the end it turned out that Groovy has a very nice mechanism for it. We can use the Map.withDefault(Closure) method which allows defining the logic that calculates the returned value for unknown keys.

Technical insight. Under the hood Groovy creates a wrapper which, for an unknown key, calls the closure and puts its execution result into the map (for that key). Therefore even for complicated logic its cost is paid only once – on the next query the value is taken directly from the map.
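
For Java-minded readers, a rough analogy (this is not the Groovy mechanism itself) is Map.computeIfAbsent() from Java 8, which also computes a value on the first miss and caches it in the map. In this minimal sketch expensiveLookup() is a hypothetical calculation method:

Map<String, Integer> cache = new HashMap<>();
cache.put("a", 1);
int first = cache.computeIfAbsent("c", key -> expensiveLookup(key)); //function called once, result cached
int second = cache.get("c"); //served directly from the map, expensiveLookup() is not called again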

Usually a constant/calculated value is returned:

def map = [a:1, b:2].withDefault{ -1 }

but in our case we can just throw an exception inside the closure:

def map = [(PriceProviderKey.PROVIDER1): provider1Service,
           (PriceProviderKey.PROVIDER2): provider2Service]
    .withDefault{ providerKey ->
            throw new UnsupportedOperationException("Provider ${providerKey} is not supported")
    }

Then the checkPrice() code can be reduced to just the positive flow:

public PriceResult checkPrice(PriceRequest request) {
    def priceProvider = map.get(request.provider)
    priceProvider.checkPrice(request)
}

I like places where Groovy can simplify my code so much!

Groovy logo

Spock is a testing framework for Java and Groovy applications. It allows writing tests in a highly expressive specification language which under the hood leverages Groovy compile-time metaprogramming capabilities (especially AST transformations).

Spock developers prefer flat test code formatting with empty lines:

class OrderedInteractionsSpec extends Specification {
  def "collaborators must be invoked in order"() {
    def coll1 = Mock(Collaborator)
    def coll2 = Mock(Collaborator)

    when:
    // try to reverse the order of these invocations and see what happens
    coll1.collaborate()
    coll2.collaborate()

    then:
    1 * coll1.collaborate()

    then:
    1 * coll2.collaborate()
  }
}

I, in turn, prefer a more vertically compact formatting without empty lines, but with indented statements after labels:

class OrderedInteractionsSpec extends Specification {
    def "collaborators must be invoked in order"() {
        given:
            def coll1 = Mock(Collaborator)
            def coll2 = Mock(Collaborator)
        when:
            // try to reverse the order of these invocations and see what happens
            coll1.collaborate()
            coll2.collaborate()
        then:
            1 * coll1.collaborate()
        then:
            1 * coll2.collaborate()
    }
}

Unfortunately my formatting style was not supported in IntelliJ IDEA, even with the idea-spock-enhancements plugin. This forced people like me to format the code manually and to watch out for automatic code reformatting.

IDEA 13 provides a lot of new features and improvements. One of them is:

Label Formatting
Code style settings now let you specify custom indentation for the label blocks.

I have to admit I treated it as something not very important, and only some time later, when a colleague asked me if I knew that feature, did I realize that this was something I had wanted.

With a simple configuration change IDEA 13+ can be configured to support compact Spock specification formatting with code indentation. In File -> Settings (ALT-CTRL-S) -> Code Style -> Groovy set "Label indent" to 4 (or 2 if preferred) and "Label indent style" to "Indent statements after label".

Spock test code formatting in Idea13

That is all. ALT-CTRL-L (Reformat code) becomes your friend again.

Btw, one more thing, pointed out by @mariusz_s. By default labels in IDEA look like ordinary text. In Spock tests labels are a part of the specification language and it would be nice to make them stand out. It is possible with the Spock Enhancements plugin for IDEA which makes the labels bold and colored.

Label highlighting with Spock plugin

The plugin also provides live inspections for block ordering errors, which is very handy, and I recommend it to everyone writing tests in Spock. Code long and prosper ;)

AssertJ and Awaitility are two of my favorite tools used in automated code testing. Unfortunately, until recently it was not possible to use them together. But then Java 8 entered the game and several dozen lines of code were enough to make it happen in Awaitility 1.6.0.

Awaitility logo

AssertJ provides a rich set of assertions with very helpful error messages, all available through a fluent, type-aware API. Awaitility allows expressing expectations of asynchronous calls in a concise and easy to read way, leveraging an active wait pattern which shortens the duration of tests (no more sleep(5000)!).

The idea to use them together came into my mind a year ago when I was working on an algo trading project using Complex Event Processing (CEP) and I didn’t want to learn Hamcrest matchers just for asynchronous tests with Awaitility. I was able to do a working PoC, but it required some significant duplication in the AssertJ (then FEST Assert) code and I shelved the idea. A month ago I was preparing my presentation about asynchronous code testing for the 4Developers conference and asked myself a question: how could Java 8 simplify the usage of Awaitility?

For the following examples I will use an asynchronousMessageQueue which can be used to send a ping request and return the number of received packets. One of the ways to test it with Awaitility in Java 7 (besides a proxy-based condition) is to create a Callable class instance:

    @Test
    public void shouldReceivePacketAfterWhileJava7Edition() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(receivedPackageCount(), equalTo(1));
    }

    private Callable<Integer> receivedPackageCount() {
        return new Callable<Integer>() {
            @Override
            public Integer call() throws Exception {
                return asynchronousMessageQueue.getNumberOfReceivedPackets();
            }
        };
    }

where equalTo() is a standard Hamcrest matcher.

The first idea to reduce verbosity is to replace the Callable with a lambda expression and inline the private method:

    @Test
    public void shouldReceivePacketAfterWhile() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(() -> asynchronousMessageQueue.getNumberOfReceivedPackets(), equalTo(1));
    }

Much better. Going further, the lambda expression can be replaced with a method reference:

    @Test
    public void shouldReceivePacketAfterWhile() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(asynchronousMessageQueue::getNumberOfReceivedPackets, equalTo(1));
    }

Someone could go even further and remove the Hamcrest matcher:

    @Test
    public void shouldReceivePacketAfterWhile() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(() -> asynchronousMessageQueue.getNumberOfReceivedPackets() == 1);  //poor error message
    }

but while it still works the error message becomes much less meaningful:

ConditionTimeoutException: Condition with lambda expression in
AwaitilityAsynchronousShowCaseTest was not fulfilled within 2 seconds.

instead of very clear:

ConditionTimeoutException: Lambda expression in AwaitilityAsynchronousShowCaseTest
that uses AbstractMessageQueueFacade: expected <1> but was <0> within 2 seconds.

The solution is to use an AssertJ assertion inside the lambda expression:

    @Test
    public void shouldReceivePacketAfterWhileAssertJEdition() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(() -> assertThat(asynchronousMessageQueue.getNumberOfReceivedPackets()).isEqualTo(1));
    }

and thanks to the new AssertionCondition, initially hacked together within a few minutes, it became a reality in Awaitility 1.6.0. Of course the AssertJ fluent API and meaningful failure messages for different data types are preserved.

As a bonus, all assertions that throw AssertionError (so in particular the standard TestNG and JUnit assertions) can be used in the lambda expression as well (but I don’t know anyone who came back to “standard” assertions after getting to know the power of AssertJ).
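
For example, a minimal sketch with a plain JUnit assertEquals() (reusing the asynchronousMessageQueue example from above) – it works because a void assertion lambda is treated as a Runnable:

    @Test
    public void shouldReceivePacketAfterWhileJUnitEdition() {
        //when
        asynchronousMessageQueue.sendPing();
        //then
        await().until(() -> assertEquals(1, asynchronousMessageQueue.getNumberOfReceivedPackets()));
    }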

The nice thing is that the changes themselves leverage the Runnable class to implement the lambda and AssertJ support, so Awaitility 1.6.0 stays Java 5 compatible. Nevertheless, for the sake of readability, it is only sensible to use the new constructs in Java 8 based projects.

Btw, here are the “slides” from my presentation at 4Developers.

4developers logo

Mutation testing allows checking the quality (effectiveness) of automated tests. PIT (aka pitest) is a leading mutation testing tool for the Java environment. In my last blog post about PIT, in January 2013, I covered version 0.29. Since then the PIT development team has been busy and 4 releases have introduced various new features (besides fixed bugs). In this post I will cover the most important (in my opinion) changes in the project (up to the recently released version 0.33).

PIT mutation testing tool logo

New features

– preliminary support for Java 8 bytecode – PIT can be used with code which contains Java 8 syntax and constructs (including lambdas)
– internal refactoring resulted in much faster “standard” line coverage calculation
– support for parameterized tests written with Spock (in Groovy) and JUnitParams
– ability to define a coverage threshold (both line and mutation) below which the build will fail
– ability to use PIT with Robolectric
– new Remove Conditionals Mutator (a conditional statement will always be true – not enabled by default as of 0.33)
– new Remove Increments Mutator (an increment operation will be removed – not enabled by default as of 0.33)
– ability to choose JVM to be used for mutation testing
– ability to run PIT only for locally changed files for Maven build with configured SCM plugin
– demanding users can define their own strategies for test selection, output format and test prioritization – PIT provides extension points which allow writing custom implementations
– partial support for JUnit categories
– support for mutating static initializers in TestNG

In the meantime there were also releases of plugins/tools based on PIT. My plugin for Gradle was enhanced with dynamic task dependency resolution (just “gradle pitest” takes care of all the prerequisites in the Gradle build lifecycle) and support for additional main and test source sets. The plugin for Eclipse has got (inter alia) a new mutation view and the ability to run PIT against all the projects in a workspace.

Not only releases

Besides the new releases, PIT has got a brand new Bootstrap-based webpage and a logo (see above), and the source code was migrated from Mercurial on Google Code to GitHub. The nice thing is that the move resulted in a few contributions within the first weeks.

Henry Coles, the author of PIT, has also started a new commercial project, FaultSeed – “better mutation testing tools for the JVM” – which will be based on PIT, aims to be 50% faster than PIT and will also support Groovy and Scala. Very promising.

PIT (and mutation testing in general) becomes more and more popular and recently a number of talks have been given about it (including my talk at Developer Conference 2014 – slides). The number of questions on the project’s mailing list has also significantly increased. And you, have you tried PIT in your project yet?

DevConf.cz 2014 logo

Mockito uses a lazy approach for stubbing: when a not stubbed method is called, it returns a default value instead of throwing an exception (like EasyMock does). This is very useful for not overspecifying the tests.

A default returned value depends on the return type of a stubbed method. For methods returning collections we get an empty collection, for numbers – 0, for booleans – false, for ordinary objects – null (in Mockito 2.0 the set of non-null values will be extended – this can also be achieved in 1.9.x with the ReturnsMoreEmptyValues answer).
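
To visualize it, a minimal sketch (ShipFinder is a hypothetical interface with methods returning a collection, a number, a boolean and an ordinary object):

ShipFinder shipFinderMock = mock(ShipFinder.class);
shipFinderMock.findAllShips();      //returns an empty list
shipFinderMock.countShips();        //returns 0
shipFinderMock.isScanInProgress();  //returns false
shipFinderMock.findFlagship();      //returns null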

Mockito logo

Before we go any further, a quick introduction to Answers (you can skip to the next paragraph if Answers are an open book to you). In addition to simple stubbing based on a desired value passed to Mockito directly:

given(tacticalStationMock.getNumberOfTubes())
    .willReturn(TEST_NUMBER_OF_TORPEDO_TUBES);

or for consecutive calls:

given(tacticalStationMock.getNumberOfEnemyShipsInRange()).willReturn(2, 3);

the stubbing API provides a way to pass an object with the logic determining what should be returned in a given case (based on the method arguments or even an internal state (for consecutive calls)). A simple practical example always returning the first argument passed to the called method:

public class ReturnFirstArgumentAnswer implements Answer<Object> {
    @Override
    public Object answer(InvocationOnMock invocation) throws Throwable {
        Object[] arguments = invocation.getArguments();
        if (arguments.length == 0) {
            throw new MockitoException("...");
        }
        return arguments[0];
    }
}

A sample usage when stubbing:

given(mock.methodToStub("arg1", "arg2"))
    .willAnswer(new ReturnFirstArgumentAnswer());

Mockito provides a set of built-in answers. Some of them (like ThrowsException or CallsRealMethods) are used by Mockito internally, but some others (like ReturnsArgumentAt introduced in 1.9.5) can also be useful for developers writing tests.

Let’s return to the main topic. Sometimes it is useful to change those default values. In addition to using the answer mechanism for stubbing specific method calls, Mockito provides a way to specify an answer which will be used for every not stubbed method execution on a given mock. To do so we can use the static mock() method which, in addition to a class to mock, takes an additional parameter – a default answer.

mock(SpaceShip.class, Mockito.RETURNS_DEFAULTS);

As returning defaults is the default behavior in Mockito, the code above is just a more explicit version of:

mock(SpaceShip.class);

but we can use this construction to achieve a few interesting behaviors. One of the predefined answers provided by Mockito is RETURNS_DEEP_STUBS. It causes automatic stubbing of chained method calls and allows doing the following:

SpaceShip spaceShipMock = mock(SpaceShip.class, Mockito.RETURNS_DEEP_STUBS);
given(spaceShipMock.getTacticalStation().getNumberOfTubes()).willReturn(5);

Please note that with the default configuration it would cause a NullPointerException due to the fact that spaceShipMock.getTacticalStation() returns null. With RETURNS_DEEP_STUBS Mockito under the hood creates a mock for every intermediate method call. This is an equivalent of:

//NOTE. Deep stubs implemented manually - no more needed with RETURNS_DEEP_STUBS.
//See the previous example with an equivalent functionality in 2 lines.
SpaceShip spaceShipMock = mock(SpaceShip.class);
TacticalStation tacticalStationMock = mock(TacticalStation.class);
given(spaceShipMock.getTacticalStation()).willReturn(tacticalStationMock);
given(tacticalStationMock.getNumberOfTubes()).willReturn(5);

As a bonus, deep stubbing allows performing a verification (only) on the last mock in the chain:

verify(spaceShipMock.getTacticalStation()).getNumberOfTubes();

Another provided answer is RETURNS_MOCKS. It tries to return a default value using the ReturnsMoreEmptyValues answer (an extended version of the default ReturnsEmptyValues), and if that fails a mock is returned. Only in the situation where the return type cannot be mocked (e.g. it is final) is null returned.

mock(OperationsStation.class, Mockito.RETURNS_MOCKS);
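
A minimal sketch of the effect (OperationsStation and its getCrew()/getSize() methods are hypothetical here):

OperationsStation stationMock = mock(OperationsStation.class, Mockito.RETURNS_MOCKS);
//getCrew() was not stubbed, but a Crew mock is returned instead of null,
//so the chained call does not throw an NPE and returns the default 0
stationMock.getCrew().getSize();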

Sometimes it can be useful to stub selected methods, but delegate the remaining calls to the real implementation. It can be done with CALLS_REAL_METHODS. This is useful, for example, when testing an abstract class (testing the implemented methods without the need to create a concrete subclass).

mock(AbstractClass.class, Mockito.CALLS_REAL_METHODS);
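
A minimal sketch under assumed names (StationReport is hypothetical): only the abstract method is stubbed, while the real report() implementation is invoked:

abstract class StationReport {
    abstract int getCrewSize();
    String report() {
        return "Crew size: " + getCrewSize();    //calls the (stubbed) abstract method
    }
}

StationReport reportMock = mock(StationReport.class, Mockito.CALLS_REAL_METHODS);
given(reportMock.getCrewSize()).willReturn(17);    //stub just the abstract method
reportMock.report();    //the real method runs and returns "Crew size: 17"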

Please note that using RETURNS_DEEP_STUBS, RETURNS_MOCKS and CALLS_REAL_METHODS should not be needed when dealing with well-crafted code written with Test-Driven Development. Nevertheless, sometimes it is required to write tests for legacy code before trying to refactor it.

From the set of default answers defined in Mockito.java there is also a very useful RETURNS_SMART_NULLS option. It returns a SmartNull class instance instead of a plain null, which gives a hint about which mock stubbing was not performed correctly (and caused the NPE). I wrote more about this mode some time ago in Beyond the Mockito Refcard #1.
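
A minimal sketch reusing the SpaceShip example from above – the failure now points at the missing stubbing:

SpaceShip spaceShipMock = mock(SpaceShip.class, Mockito.RETURNS_SMART_NULLS);
//getTacticalStation() was not stubbed, so a SmartNull is returned instead of null
spaceShipMock.getTacticalStation().getNumberOfTubes();
//throws SmartNullPointerException with a hint that getTacticalStation() was not stubbed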

When defining a default answer we can use any class which implements the org.mockito.stubbing.Answer interface – either provided by Mockito or hand-written. One more tip: in case you would like to use RETURNS_SMART_NULLS or ReturnsMoreEmptyValues globally for all mocks in your application, you can check a trick with MockitoConfiguration.

Btw, in case you are starting your adventure with Mockito, want to learn more, or just want to organize your knowledge, you may be interested in my Mockito Refcard available for free from dzone.com.

Btw2, if you are new to Mockito and live near Warszawa, you can consider attending my lecture/workshop about Mockito at Jinkubator – 18 II 2014 (next Tuesday).

Jinkubator

This post is the fourth part of the series Beyond the Mockito Refcard, which extends my Mockito reference card released some time ago.

AppFuse 3.0 has been released. It refreshes the technologies used (Java 7+, Spring 4, Spring Security 3.2) and adds a bunch of new ones (Bootstrap 3, PrimeFaces, wro4j, WebJars and finally Apache Wicket).

AppFuse logo

From the project home page:

AppFuse is a full-stack framework for building web applications on the JVM. It was originally developed to eliminate the ramp-up time found when building new web applications for customers. Over the years, it has matured into a very testable and secure system for creating Java-based webapps. At its core, AppFuse is a project skeleton, similar to the one that’s created by your IDE when you click through a wizard to create a new web project.

If you are looking for a foundation for your project or just would like to see how the same things look in different technologies, you can give AppFuse a try. The quick start page should be a good starting point.

There is also a personal thread in this story. Faithful readers may remember that over 3 years ago I started working on a Wicket frontend for AppFuse. I definitely prefer working on backends, but I wanted to get to know Wicket (and its famous ability to be tested without Selenium) better, and engagement in the AppFuse development seemed to be a good way to practice these skills. There are still places in the Wicket frontend which need to be polished, but the work is mostly done (which I’m happy about :) ).

As a summary of my work I can write that even Wicket (where all page logic is written in Java or Groovy classes – no more c:forEach tags!) cannot completely remove the pain which comes with the limitations of HTTP (which wasn’t designed for “enterprise applications”) and the differences between browsers (although with jQuery and Bootstrap it is much easier). In addition, 3 years is a lot of time in IT and currently there are even more use cases where component-based server-side frameworks aren’t the best solution for making a good looking, responsive, scalable and trendy UI. Maybe it is time to work on an AngularJS frontend ;).

The Happiness Door is a great method of collecting instant feedback. I have used it successfully in my training sessions and recently made a first attempt to use it after a conference speech. In this post I present my case study and the reasons why I will definitely use it at upcoming events.

Part 1. From the beginning. It has become a tradition to end Warszawa JUG’s meeting seasons at the turn of June and July in great style – with Confitura – the largest free conference about Java and the JVM in Poland. This year the seventh edition was planned for about 1000 people and it took less than 2 days to sell out all the tickets (including overbooking). 5 parallel tracks with 35 presentations on various topics: from JVM tuning, different JVM languages and a lot of frameworks, through JavaScript, Android, databases, architecture and testing, to motivating people, Gordon Ramsay and company management in the 21st century. Too much to embrace in one day, but fortunately all the presentations were recorded and will be available online.

My presentation was about mutation testing in the Java environment with PIT. I have already written a few posts on that topic. In a nutshell – a nice way to check how good your tests really are. Writing testable code had a wide representation at Confitura – I counted four more testing-related presentations.

Part 2. The Happiness Door method (if you are new to this method, read my previous post first). Before my presentation sticky notes were distributed across the room and 5 smiley faces were put on the door. At the beginning I explained to the audience how the method works and why their feedback was so important to me. Leaving the room, about 50% of the attendees gave numerical feedback. Almost half of them added some comments (from “nice talk” to an essay placed on both sides of a sticky note :) ). Some of them were very interesting. Thanks!

I’m very happy I used this method after my speech. For the first time, just minutes after a presentation, I knew what people thought about it. In my case it was: “quite good, but there is still a field for improvement”.

The Happiness Door (one leaf) after my speech at Confitura 2013

Here is a good place to thank Małgorzata Majewska and Anna Zajączkowska who helped me polish many aspects of my speech.

The interesting thing is that when I got an email from the organizers with the feedback collected via the online survey, the average score was very similar – just received a month later. So why should I wait? With the Happiness Door method it is possible to get to know it immediately (and even with a larger sample).

Btw, it is not easy to prepare for a presentation (there were, as always, some problems with a projector) and put a sticky note on every seat in the audience room (~150 seats) in 15 minutes. Thanks to Edmund Fliski and Dominika Biel for their help with the distribution.

Btw2, after the talk I had very interesting conversations with people wanting to use mutation testing in their projects, people already experimenting with PIT and Konrad Hałas – the guy from Warsaw who wrote MutPy – a mutation testing tool for Python.

Part 3. The slides from my speeches evolve over time. 2 years ago my slides were full of bullet lists – the feedback: boring, too much text. Recently I gave an internal presentation with less than 10 slides containing no text, just images – the feedback: hard to follow, too little text. In the meantime I experimented with a presentation based on a mind map – mixed feedback. This time I combined images and text – some people liked it, some did not :). Be the judge. The slides (in Polish) are available for download.

Appendix. Thanks to Chris Rimmer for the idea of presenting IT topics using a metaphor of non-IT events and people.

I hope I have encouraged you to give the Happiness Door method a try. Please share your experiences in the comments.

P.S. Good news for all the people who missed my presentation at Confitura. My proposal was accepted by the Program Committee and I will be speaking in Kraków at JDD 2013 (October 14-15th).

jdd13-speaker

The Happiness Door is a method of collecting immediate feedback which I read about some time ago on Jurgen Appelo’s blog. I used it this year during my training sessions and it worked very well. I would like to popularize it a little bit.

This method requires selecting a strategically located place (like the second leaf of the exit door) with a marked scale (I use 5 smileys, from a very sad to a very happy one) and asking people to put the distributed sticky notes at the level corresponding to their satisfaction with the session. They are encouraged to add a concrete comment(s) explaining the given score (like “boring” or “too few practical exercises”), but it is completely valid to just attach an empty card at the selected level. The mentioned issues can be discussed with the whole group to determine how a given thing could best be improved. I start collecting feedback before the lunch break on the first training day and gently remind about it at every following break.

The main advantage of this method is getting both instant numerical feedback (how much people liked it) and concrete comments (what exactly they (dis)liked). The feedback is gathered very fast, when there is still room for improvements (in contrast to a more formal survey at the end of the training). I have got numerous comments from attendees that they like this method as well, and I plan to use it in my future sessions too.

On my courses I even introduced a small enhancement to the original method. Every day I give away sticky notes in a different color. This allows easily distinguishing the feedback given on a particular day and identifying a trend. On the photos below, for example, it is pretty visible that after the feedback I got on the first day (yellow cards) I was able to adapt my training to the group’s level and expectations (blue cards).

Feedback after my training - day 1 - The Happiness Door method

Day 1 – a moderate result

Feedback after my training - day 2 - The Happiness Door method

Day 2 – a visible uptrend



This spring was quite busy for me as a trainer. I was a mentor at Git Kata – a free Git workshop, gave a talk about asynchronous calls testing at “6 tastes of testing – flashtalks” and recently ran a short workshop about Mockito at Test Kata. In the meantime I conducted two 3-day training sessions about writing “good code” and plan one more Test-Driven Development session at the end of June. All that together with my main occupation – writing good software and helping team members do the same. What is more, I have recently received the very pleasant news that my presentation proposal about mutation testing was accepted and at the beginning of July I will close this training season speaking at Confitura 2013 (which nota bene was sold out (1200 tickets!) in less than 2 days). See you at Confitura.

Confitura 2013 – Speaker

Recently there was an interesting and unusual event in Warsaw for all enthusiasts of testing – “6 tastes of testing – flashtalks”. Instead of one long presentation, common for WJUG meetings, 6 guys gave 6 flashtalks (~15-minute presentations) about different aspects of testing.

The audience could hear about:

  • FEST Assertions – a set of assertions with a fluent API for TestNG and JUnit, by Michał Lewandowski,
  • JUnitParams – a better way to write parameterized tests with JUnit, by Paweł Lipiński,
  • Spock – a testing framework for Java and Groovy code which sets new standards, by Jakub Nabrdalik,
  • Geb – a Groovy way of acceptance testing with WebDriver and the Page Object Pattern, by Tomasz Kalkosiński,
  • asynchronous calls testing with Awaitility, by Marcin Zajączkowski (a.k.a. me),
  • an idea how to use IoC and Guice in tests, by Paweł Cesar Sanjuan Szklarz.

In my own speech I performed an experiment with the presentation format. Instead of classical Impress/PowerPoint slides I used Vym – View Your Mind – an interesting mind mapping tool with a built-in slide editor. It allowed me to “animate” my mind map and talk about each of its points (nodes). By the way, I reported a dozen feature requests for Vym, so there is a chance the slide editor will be even better for the next presentation :). The downside of using such a tool is the limited ability to export the result to some neutral format. The picture below contains the whole map, but without the code snippets.

Sleepless asynchronous calls testing with Awaitility

The people I asked were pleased with the presentations. They were cross-sectional and touched on many different aspects briefly, but even though I knew the covered topics before, I noted down a few useful tips. The venue was tightly filled (about 100 people). In addition to the interesting topics there was an opportunity to eat pizza and win an SSD drive. The event was completed with a networking session in a nearby restaurant. For those who could not attend there is a video available (in Polish) on the WJUG’s YouTube channel.

Last Saturday, together with 8 other mentors, we were showing various Git-related techniques at Git Kata – an event for over 80 people.

Git Kata was a free Git workshop conducted in a kata form. Paraphrasing Wikipedia: “A Git kata is an exercise in using Git which helps a user hone their skills through practice and repetition”. During the sessions a mentor was showing selected Git aspects in practice, providing listeners with detailed comments on each performed step. The attendees could follow the mentor’s steps using their own laptops.

There were various Git techniques covered, including:
– undoing changes (reset, revert, reflog),
– useful aliases, configuration tricks, Git internals and git prompt with fish shell,
– collaboration with others using public services (like GitHub, Bitbucket, GitLab, Gitorious) or patches via email/USB,
– submodules, filter-branch and rerere.

In the past I have led various programming katas, but I had never heard about the idea of using this form for Git. It sounded very interesting when I was asked to join the mentors team at Git Kata. I hope I helped some people better understand the internals of Git commands. In addition, having two free slots, I got to know (among other things) rerere, which can reduce the number of manual merges (have you even heard about it before?), the “branch.autosetuprebase always” flag, which sets rebase as the default pull strategy for newly created branches, and the “help.autocorrect 1” flag, which automatically applies “did you mean” suggestions in case of a typo. The event was very successful and I wonder whether to extend my training portfolio with a Git course.