Custom client-side password decryption and Spring Cloud Config Server

When using Spring Cloud Config Server it usually doesn’t take long before passwords end up in the underlying Git repository as well. Using the {cipher} prefix and a symmetric key or a public/private key pair (stored in a keystore), it is possible to store encrypted versions of these passwords, so that they don’t appear in plaintext in the Git repository. By default, the config server decrypts them before sending them to the client application.

This blog post follows a slightly different use case and discusses some modifications to the default approach:

  • Decrypting at the client side
  • Decryption using a custom algorithm
  • Using autoconfiguration to do this transparently

The example I’ll construct has several elements: a configuration server, a client application and a library that will do the decryption transparently.

This blog post was written for Spring Cloud Config 1.2.2.RELEASE. The code can be found at https://github.com/kuipercm/custom-spring-cloud-config-decryption-example.

Situation

The requirements we’re trying to satisfy are as follows:

  1. It is not allowed to send plaintext passwords ‘over the line’. This is a security requirement, even when using HTTPS connections with mutual authentication and a cipher.
  2. The decryption should be transparent to the application: it shouldn’t know that the passwords are encrypted, nor which algorithm was used to encrypt them.
  3. There is a custom, business-approved encryption/decryption methodology in place. For the purposes of this blog post, the algorithm simply reverses the encrypted password, because it is easy to demonstrate; in reality any other method could be used. The method used in this example is not secure at all, so don’t use it in real applications!

Configuration Server

To get started, let’s first create a “default” configuration server.

@SpringBootApplication
@EnableConfigServer
public class ConfigurationServer {
    public static void main(String[] args) {
        SpringApplication.run(ConfigurationServer.class, args);
    }
}

By setting the appropriate properties, the configuration server will know which Git repository to clone and expose through its endpoints. This is all default behavior and is well covered in the official documentation.

It is however important to note that the property that normally makes the server decrypt passwords before sending them to clients has been disabled: spring.cloud.config.server.encrypt.enabled=false. This way the clients are in charge of decrypting the encrypted data.
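For reference, the server side configuration might then look as follows (the port and the Git URI are placeholders, not taken from the example project):

```yaml
server:
  port: 8888

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/config-repo
        encrypt:
          # leave encrypted values as-is; clients decrypt them
          enabled: false
```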

Configuration Client

The client is also relatively straightforward.

@SpringBootApplication
public class ConfigurationClient {
    public static void main(String[] args) {
        SpringApplication.run(ConfigurationClient.class, args);
    }
}

where this application has a dependency on Spring Cloud Config Client in its pom.xml. (For full details, see the git repository.)

The client application also contains a repository class which connects to an in-memory H2 database using HikariCP. The properties for this connection, including the encrypted password, are in the configuration repository.

spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    url: jdbc:h2:mem:testdb
    driver-class-name: org.h2.Driver
    username: johnsmith
    password: '{cipher}drowssapxelpmocym'
    hikari:
      idle-timeout: 10000

Since we told the server application not to decrypt encrypted properties before sending them to the client, the client application will receive the properties as described above. So, now it’s up to the client to decrypt these properties.

The decryption library

There are several options to facilitate the decryption.

The most naive implementation I can come up with is a utility class that is called manually to decrypt the properties as needed. This works, but requires us to inject properties manually, decrypt them and then use them to construct the objects we want. In the case of the Hikari datasource, this approach makes autoconfiguration of the object impossible and we’d have to do it by hand. We can probably do better.

An alternative, less naive, approach is to react to an application event, decrypt all encrypted properties upon that event and place them back in the Spring environment. That way we would still be able to use the default autoconfiguration behavior, and the whole decryption part would be transparent to the rest of the application, which is nice. The downside is that we need a lot of knowledge about the internals of the configuration client: for example, how the properties are inserted into the Spring context, so that we can do the right thing when decrypting them. Should the internals of the configuration client change, there is a good chance we would have to change our code as well.

The cleanest solution I’ve found is to tap into the mechanism used by the configuration client when decrypting the encrypted values and substitute an alternative decryption strategy to be used.

The TextEncryptor

At the basis of the whole encryption/decryption mechanism is the so-called TextEncryptor and, despite what its name suggests, it is used both to encrypt and to decrypt. In this example, the TextEncryptor is very, very simple, so once again the warning: don’t try this at home! This implementation is not secure at all and is for demonstration purposes only.

public class ReversableEncryptor implements TextEncryptor {
    @Override
    public String encrypt(String toEncrypt) {
        return new StringBuilder().append(toEncrypt).reverse().toString();
    }

    @Override
    public String decrypt(String toDecrypt) {
        return new StringBuilder().append(toDecrypt).reverse().toString();
    }
}

Please note that this implementation doesn’t do anything with the ‘{cipher}’ prefix: it has already been stripped by the configuration client.
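To see the reversal at work on the value from the configuration repository shown earlier, here is a small stand-alone check (the class name is mine, not part of the example project):

```java
public class ReverseCheck {

    // same reversal logic as the ReversableEncryptor above
    public static String reverse(String input) {
        return new StringBuilder(input).reverse().toString();
    }

    public static void main(String[] args) {
        // the '{cipher}' prefix has already been stripped by the config client
        String stored = "drowssapxelpmocym";
        System.out.println(reverse(stored)); // prints: mycomplexpassword
    }
}
```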

Tap into autoconfiguration

Now we need to make sure that the custom encryptor is picked up by the Spring configuration and that it replaces the default encryptor. The easiest way to achieve this is to use Spring Boot’s autoconfiguration to load the encryptor.

@Configuration
public class CustomEncryptorBootstrapConfiguration {
    public static final String CUSTOM_ENCRYPT_PROPERTY_NAME = "bldn.encryption";
    public static final String REVERSABLE_ENCRYPTION = "reversable";

    @Configuration
    @ConditionalOnProperty(name = CUSTOM_ENCRYPT_PROPERTY_NAME, havingValue = REVERSABLE_ENCRYPTION)
    protected static class ReversableEncryptorConfiguration {
        @Bean
        @ConditionalOnMissingBean(ReversableEncryptor.class)
        public TextEncryptor reversableEncryptor() {
            return new ReversableEncryptor();
        }
    }
}

In this configuration, our ReversableEncryptor is loaded when there is a property “bldn.encryption” present with the value “reversable”. This property can be provided via the command line or via other means, although it is not advisable to provide it through the configuration server (as it might arrive too late).
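For example, when starting the client from the command line (the jar name is purely illustrative):

```shell
java -jar configuration-client.jar --bldn.encryption=reversable
```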

To actually load this Configuration as part of the autoconfiguration of the application, we need to add a spring.factories file with (at least) the following content:

org.springframework.cloud.bootstrap.BootstrapConfiguration=\
nl.bldn.projects.customdecryption.CustomEncryptorBootstrapConfiguration

This makes sure the CustomEncryptorBootstrapConfiguration is loaded during application startup; when the required property is present, the custom TextEncryptor takes the place of the default one.

When the properties are now received by the client, the custom decryption mechanism will kick in and the application will use the decrypted properties for other autoconfigured beans.

Final Remarks

This blog post has shown a way to set up custom decryption of encrypted properties received from a Spring Cloud Config Server. In this scenario the decryption is left to the client application, and the method of decryption is not one of the defaults available in Spring Cloud Config.

The code of this example can be found at https://github.com/kuipercm/custom-spring-cloud-config-decryption-example. To test the code, follow the instructions in the readme file.

Happy coding!

Unit testing log statements

Usually logging is not considered when writing tests, often because logging is not the primary purpose of a class and, in terms of line coverage, is usually covered automatically. There are however situations where logging is the primary purpose of a class, and then it requires more thorough testing. When using a logging framework such as SLF4J, the difficulty is in determining that the logging actually happens and that it has the correct level and content: logging frameworks are usually not very test-oriented.

There are several posts on Stack Overflow that deal with unit testing logging in Java, so for a broader overview, check there. This post will give a single, straightforward method of testing log statements.

The code for this example can be found at https://github.com/kuipercm/test-logging

Situation

Let’s assume that we are writing a component that handles sending messages to and receiving messages from an external source. For traceability and auditability purposes it is a business requirement to log all incoming and outgoing messages. This makes the logging an essential part of the application, and it should therefore be verified in tests.

To facilitate the logging, a separate class is created so that the details of what to log and how are abstracted from the main, more functional, program flow. An added benefit is that the logging class can be tested more easily.

public class FruitMachine {
    private final FruitLogger logger = FruitLogger.INSTANCE;

    private final String remoteUrl;

    public FruitMachine(String remoteUrl) {
        this.remoteUrl = remoteUrl;
    }

    public void sendMessage(FruityMessage message) {
        logger.logAllFruitMessages(message);
        //do some remote call
    }

}

In the code sample above, the logging of the FruitMachine class is handled by the FruitLogger instance. The logger is called for all FruitMessages.

The FruitLogger itself is not special:

public class FruitLogger {
    public static final FruitLogger INSTANCE = new FruitLogger();

    static final Logger log = LoggerFactory.getLogger(FruitLogger.class);

    private FruitLogger() {
        //don't instantiate outside this class
    }

    public void logAllFruitMessages(FruityMessage message) {
        if (message != null) {
            log.info("Outgoing message body: {}", message.getBody());
        }
    }
}

It logs the message body at INFO level. The Logger is an SLF4J component.

Since the primary purpose of the FruitLogger is to log the message (albeit a little bit simplistic in this example), it is good to have (at least) a unit test for it.

The Test

In the FruitLogger class above, there are two important things to note:

  1. The Logger “log” field has package private (or “default”) visibility. This is to facilitate easier interaction in the unit test.
  2. For all non-null messages, the message body is logged.

The difficult part is how to test that the “log” will actually contain the message body, because even though we can access the “log” field in the unit test, it has no methods that let us examine what has been sent to it.

To solve this, we can create a test in the following way:

// Logger here is ch.qos.logback.classic.Logger; the test also relies on
// static imports of FruitLogger.log and FruitLogger.INSTANCE
public class FruitLoggerTest {
    private static Logger fruityLogger;
    private static ListAppender<ILoggingEvent> fruityLoggerAppender;

    @BeforeClass
    public static void setupBeforeClass() {
        LoggerContext context = (LoggerContext)LoggerFactory.getILoggerFactory();
        fruityLogger = context.getLogger(log.getName());
        fruityLoggerAppender = new ListAppender<>();
        fruityLoggerAppender.start();
        fruityLogger.addAppender(fruityLoggerAppender);
    }

    @Before
    public void setup() {
        fruityLogger.setLevel(Level.ALL);
        fruityLoggerAppender.clearAllFilters();
        fruityLoggerAppender.list.clear();
    }

    @Test
    public void verify_that_the_fruity_message_content_is_logged_at_info_level() {
        assertThat(fruityLoggerAppender.list).hasSize(0);

        FruityMessage fruityMessage = new FruityMessage("apples", "oranges");
        INSTANCE.logAllFruitMessages(fruityMessage);

        assertThat(fruityLoggerAppender.list).hasSize(1);

        ILoggingEvent loggedEvent = fruityLoggerAppender.list.get(0);
        assertThat(loggedEvent.getLevel()).isEqualTo(Level.INFO);
        assertThat(loggedEvent.getFormattedMessage()).contains("body");
        assertThat(loggedEvent.getFormattedMessage()).contains("oranges");
    }
}

This test extracts the fruityLogger from the logging context based on the name of the logger, and then adds a ListAppender to it. The ListAppender is a type of Appender that keeps an internal, accessible list of the ILoggingEvents sent to it. This makes it very useful in a test context, as can be seen in this test class.

There are a few things to note with respect to the setup() method:

  • the log level of the fruityLogger is reset to Level.ALL: in unit tests you can change the log level of the logger to verify that, when a certain level is set, certain messages are no longer logged. For example, if the level were set to WARN, the FruitLogger should no longer log the message body. Although such a test arguably tests the logging framework itself, there are situations in which it can be crucial to verify this behavior.
  • the Appender is cleared of all filters: filters can interfere with which messages are accepted in the appender. It is a good idea to start each test with a clean slate.
  • the ListAppender.list field is also cleared to make sure that each test will only put its own messages in the Appender and will not be influenced by other tests.

The test itself is not difficult to understand. Note that a FruityMessage consists of a header and a body field; in this example, the header is “apples” and the body is “oranges”.
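For completeness, here is a minimal sketch of the FruityMessage class, reconstructed from its usage above (only the constructor and getBody() appear in the original snippets, so the rest is an assumption):

```java
public class FruityMessage {
    private final String header;
    private final String body;

    public FruityMessage(String header, String body) {
        this.header = header;
        this.body = body;
    }

    public String getHeader() {
        return header;
    }

    public String getBody() {
        return body;
    }
}
```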

Final remarks

As demonstrated above, it is possible to test the logging of classes in unit tests. This is particularly useful in situations where logging becomes a primary purpose of a class due to business requirements.

The code of this example can be found at https://github.com/kuipercm/test-logging. What is not shown in the code samples above is which dependencies are required for this code to work; these are in the pom.xml of the project on GitHub.

Happy testing!

Multi-module project dependency management in Maven

Just a quick one here on an old topic: dependency management with Maven. This is definitely not a new topic so I won’t waste your time on it too much. There is however one finding I would like to share which has improved my workflow in large projects.

The Problem

As you probably know, Maven manages dependencies. In multi-module projects this can be cumbersome, and to improve your workflow you can use the dependencyManagement section of the parent to pin the version of a dependency for all child projects.

Now for the tricky part: when you have a multi-module project BUT not all children of the parent are in the same project structure (they live in different git repos, for example), managing dependencies can become difficult, even though you use the dependencyManagement section in the parent. The inheritance structure of the poms makes it hard to see whether a (sub-)module is still on the correct parent version and which dependencies it actually has. IDEs help, but only so much.
At the same time, when you update the parent version because of a dependency version update, this might have unexpected results in certain children (incompatibilities and so on), so you cannot update the parent version in all children without issues.

The Fix

To fix this, I’ve recently started using an approach that I like very much so far. I didn’t invent it; it has been around for a long time, but I stumbled upon the concept while using Spring Boot.
The idea is to use a separate project that is not part of the hierarchy, that only defines dependencies in a dependencyManagement section, and that gets imported in the (sub-)module’s dependencyManagement section like this:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>nl.bdln.projects</groupId>
            <artifactId>all-dependencies</artifactId>
            <version>1.3.15</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

This means the hierarchy no longer defines the dependencies used in the (sub-)module; the imported dependency pom does. You can therefore update the dependencies on a per-module basis, while still using a fixed set of (approved, consistent) dependencies. Basically, the dependency versions are decoupled from the pom inheritance, which is easier to handle.
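The all-dependencies project itself is then nothing more than a pom-packaged artifact whose only job is to pin versions. A minimal sketch (the junit entry is purely illustrative):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>nl.bdln.projects</groupId>
    <artifactId>all-dependencies</artifactId>
    <version>1.3.15</version>
    <packaging>pom</packaging>

    <dependencyManagement>
        <dependencies>
            <!-- every approved dependency, with its version, goes here -->
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>4.12</version>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
```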

Remarks

  • the parent’s dependencyManagement section should no longer define dependency versions (it should be empty) to avoid conflicts and uncertainties
  • you can only update the main project when all children have been updated (this was always the case, but is now clear)
  • it’s still a requirement to make sure each (sub-)module is backwards compatible with the older dependency pom version, since the main project might still be on an older version of the main dependencies pom.
  • the approach described above does not apply to the pluginManagement section of the parent pom’s build section: it is actually useful to describe the plugins there, since they change relatively little over time and are tied to the build requirements more than to the actual project


Spring Boot in an Existing Application (part 1)

Introduction

Recently I’ve been working on a job to Spring Boot-ify an existing application. This turned out to be a tricky process because of the strong opinions of Spring Boot. Spring Boot is primarily geared toward new webapps, and I might go so far as to say toward new single-service webapps. This makes it relatively hard to convert an existing app, not least because the documentation on converting an existing app, especially one with an XML configuration, is very limited. So I thought I’d help out with a series of posts on the subject.

In this first installment, we’ll look at Spring Boot and the minimum requirements for an existing app when you don’t want to use the starter poms (http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#using-boot-starter-poms). We’ll create a project without depending on the spring-boot-parent and without using, for example, the spring-boot-starter-web pom. A nice, clean, lightweight setup.

Spring Boot

In my introduction it might have sounded like Spring Boot is no good. That’s not true in the least: for new apps, it’s definitely worth looking at. It’s very easy to develop a basic app and have it running within minutes. That’s a big benefit when you want to prototype or simply get underway quickly. And obviously it’s very cool that the developed app can run anywhere, since you have the container embedded!

However, as usual, convenience comes at a price. In this case, the price is that Spring Boot is quite opinionated about how the app is supposed to be constructed. For example, it is assumed that the configuration of the app is done completely through Java Beans with annotations. If you happen to have an app that uses XML configuration, you won’t find much help in the official documentation (see also http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#using-boot-configuration-classes).

So it’s too bad we have an existing app with XML configuration. We’re pretty much on our own…

Requirements

Maybe it’s good at this point to explain why we would want to keep using the existing configuration, since it might be possible to create a new java based configuration. There are two reasons:

  • Converting an existing configuration from XML to Java is possible but might be very time-consuming, depending on the size of the configuration. When moving to Spring Boot, it should be possible to transport the configuration one-to-one and focus on the problems we introduce by adding the embedded server and the new startup method, which might be quite extensive in themselves.
  • Currently the app is built with Maven into a WAR file. The new structure should be a runnable WAR file, but hopefully this WAR file will still be deployable in a separate container (such as Tomcat or Jetty). This is not a hard requirement, because we could simply create a runnable WAR wrapping the existing WAR, but that still puts a constraint on the existing WAR: it should be able to run inside a container. This means that the existing app should be changed as little as possible. The reason for this decision is that the infrastructure for deployment and the deployment mechanism are not optimized for runnable JARs/WARs yet, so we might end up in a situation where both are required.

Based on these two requirements, we’re stuck with an XML-based configuration which will have to be loaded from a new Spring Boot configuration.

Basic Setup

The general goal of this blog post is to create a Spring Boot app that will run without using a spring-boot-starter pom. The reasons we don’t want to use starter poms are as follows:

  1. We have an existing application with spring configuration: we shouldn’t need a bunch of “default” configuration, since we’ve already specified our own.
  2. The default configurations are usually somewhat heavy handed. The list of (possible) jars in for example the spring-boot-starter-web pom is quite large and we might not need any of those. The main idea here is that using a starter pom might distract us from what we’re trying to build.
  3. (As an expansion of the previous points) The default main class with the @SpringBootApplication annotation that you’ll find in almost every online example is somewhat useless to us: it does a lot of automagic configuration, but as I’ve said, we already have a working configuration.

So, let’s go for a basic HelloWorld app. For this demo we’ll use Maven as the build tool, but it should work with Gradle too without much difficulty.

The code for this post is available on GitHub.

And immediately, I’m going to admit that I lied before: we are actually using starter poms, but only the lightweight ones for logging and the embedded container.

The pom.xml

The pom.xml file is created without using an archetype. It simply starts out as a pom.xml for a new project: it defines the groupId, the artifactId and the version of the new project. Also, its packaging is set to WAR, because we want to be able to run this app inside an (external) container.

The pom.xml further defines some basics for a Spring Boot application that is not inheriting from the spring-boot-parent and that can run inside an external container. Not inheriting from the spring-boot-parent gives us more freedom to choose a (different) parent, which is particularly useful in corporate environments. It also adds the dependencies for

  • spring-boot,
  • spring-web and
  • spring-webmvc.

These are the basic dependencies required for a Spring Boot webapp. The Spring Boot dependency takes care of the Spring Boot specific stuff. The Spring Web and Spring Web MVC are used to add the functionality for a basic RESTful webapp.

Then we do define another two dependencies: spring-boot-starter-logging and spring-boot-starter-tomcat. Although we said upfront that we didn’t want to use the starter poms, we do use these two: they add the required dependencies for logging in a Spring Boot application and for the embedded Tomcat container. The reason we use these starter-pom dependencies is that they are relatively lightweight: they only add dependencies that are necessary.

Finally, the spring-boot-maven-plugin is added to the build configuration. Its configuration follows the description in the Spring Boot documentation.

The Main-class

The pom.xml file is by far the most work in the app. It seems easy enough when seen in its final form, but putting the pieces together was harder than it looks. Now comes the easy part: creating the application code.

A Spring Boot application needs a “main” class. This class is the entrypoint for the application. It has the task of defining the startup code and loading the additional configuration for the application. In the example code on github, the main class is the HelloWorldApplication.java file. (Incidentally, you might have already noticed this class as it is listed as the start-class in the spring-boot-maven-plugin in the pom.xml.)

The main class defines a public static void main method. This is the starting point of the application. Following the examples and the description in the Spring Boot documentation, this class uses the SpringApplication.run() method to start the application. Nothing fancy here.

It’s important to note that the SpringApplication.run() method should be called with a class annotated with @Configuration, although according to the documentation you should be able to call it with an XML configuration as well. The reason the @Configuration annotation is here in the first place is that this class also defines configuration @Beans:

  • The DispatcherServlet, which defines the root for the Spring Web Mvc application.
  • The TomcatEmbeddedServletContainerFactory, which provides the embedded Tomcat server functionality. This will be configured in a later stage, but for now, it’s enough to have the port hardcoded in the application.

Finally, the HelloWorldApplication class is annotated with @ComponentScan and @EnableWebMvc. These annotations are necessary to configure scanning the current package and its subpackages for annotation processing (mostly @Component and friends) and to enable the WebMvc annotation processing (for example, @RestController).
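Put together, the main class might look roughly like this. This is a sketch against Spring Boot 1.x; the hardcoded port is an assumption, and the authoritative version is the HelloWorldApplication.java file in the GitHub repository:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.DispatcherServlet;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;

@Configuration
@ComponentScan
@EnableWebMvc
public class HelloWorldApplication {

    public static void main(String[] args) {
        SpringApplication.run(HelloWorldApplication.class, args);
    }

    // the root servlet for the Spring Web MVC application
    @Bean
    public DispatcherServlet dispatcherServlet() {
        return new DispatcherServlet();
    }

    // embedded Tomcat; the port is hardcoded for now
    @Bean
    public TomcatEmbeddedServletContainerFactory servletContainerFactory() {
        return new TomcatEmbeddedServletContainerFactory(8080);
    }
}
```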

The RestController

Since we’re creating a very basic web app, we still need to demonstrate that the current configuration gives us a runnable WAR that outputs a “hello world!” message on a certain URL.

To do this, add a class in the current package or a sub-package and annotate it with @RestController. Also, create a method in it that returns the hello world string and annotate that with @RequestMapping for a certain path and the GET HTTP method. The @ComponentScan on the HelloWorldApplication class will make sure it gets scanned, and the rest of the configuration will make sure the endpoint gets picked up and exposed on the given path.
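Such a controller could look like this (the class name and path are illustrative, not taken from the example project):

```java
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloWorldController {

    // responds to GET /hello with the demo message
    @RequestMapping(value = "/hello", method = RequestMethod.GET)
    public String helloWorld() {
        return "hello world!";
    }
}
```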

The web.xml

When the project is built, Maven will complain that there should be a web.xml since we’re trying to build a WAR file. For now, it’s good enough to add an empty web.xml. Remember that in reality we already have an existing WAR file that we’re rewriting to a Spring Boot app, so in reality, there will be a web.xml as part of the original WAR file. It’s a little useless to spend much time on this now, so an empty file will be fine.

Conclusion

We’ve created a basic app that doesn’t use any of the major Spring Boot dependencies and is therefore relatively lightweight. To run the app, use (from the command line in the root of the project):

  • mvn clean package
  • java -jar target\springboot-hello-world-app-1.0-SNAPSHOT.war


This blog post is part of a series. The next episode will appear soon. Stay tuned.

Certified Professional Scrum Master

When it comes to certification, I believe it should be proof of ability, not just of knowledge. It’s like baking a cake: it’s nice that you can read the recipe and buy the ingredients, but that doesn’t mean you can actually bake a beautiful cake. Making a beautiful (high-quality) product requires a lot of practice and the right technique, for a pastry chef as much as for a software developer.

About two months ago I got certified as a Professional Scrum Master (PSM). Not because I felt I really needed the certificate: not at all. Anyone who can memorize a few facts can pass such an exam, certainly after running through the online practice test a few times. No, in my view the certificate itself doesn’t mean much.

As in many things, factual knowledge is only the beginning in Scrum. Far more important is that you understand the Agile principles on which Scrum is built and the rules of Scrum, but also that you can interpret and apply them within your own team. By understanding I also mean understanding the impact that introducing Scrum has on the process within the team and the company.

Scrum was originally formulated with strict rules, precisely because it is hard enough for companies and development teams to transform the way they work. Make the guidelines that people must follow flexible and nobody knows where things should go anymore, with the result that the desired change either doesn’t happen or even backfires.

On the other hand, not all rules work equally well in all teams, and stronger Scrum teams can grow even further by taking a different approach. The task of the Scrum master is therefore not only to look at the rules, but to take the bigger picture into account.

An example is the daily stand-up: when you are just starting with Scrum, it is good to stick to the suggested pattern with the three basic questions:

  • what did you do yesterday,
  • what will you do today,
  • is anything blocking your work?

Once the team has grown a bit and has more knowledge of the process, it is also an option to think more in terms of tasks:

  • what is the status of this story on the sprint backlog, and how are we going to make sure it gets closed today?

Effectively, both approaches will bring the same issues to the table, but the Scrum method prescribes the first and not the second.


A good Scrum master will therefore, based on their experience, preferably from having been part of several Scrum teams, be able to judge where the rules must be followed and where the team may take some freedom. They will also encourage the team to take responsibility for discovering that freedom itself, by providing enough room for this during the retrospective. The starting point must be that the Scrum master remains in service of the team and does not become the driver of these changes: in principle they should be superfluous, or make themselves superfluous. The team itself is always the one in charge.

Coming back to myself: after three years of working with Scrum, I found it only natural to get certified. At the same time, I realize that this is not an endpoint: it remains necessary to keep growing with Scrum teams and to learn how we, as software developers, get the most out of our time and energy together, so that, like true pastry chefs, we can serve our customers the very best.