Passionate Developer

Memory is unreliable, like software, so let me make my thoughts more durable and my software more reliable

SOA Patterns - Book Review

Overview

I took this book from my bookshelf when I was preparing an internal presentation about microservices for my Roche colleagues. I was mainly interested in the Saga and Composite Front End patterns, but once I started, I decided to read the rest of the book.

Patterns

Below you can find my short summary of every pattern described in the book:

Service Host

Every service needs a host in which it runs. For me, Spring Framework is an excellent example of a service host.

Active Service

Very similar to the microservices concept: the service should be autonomous.

Transactional Service

I know a few alternative names for this pattern: Unit of Work, Open Session in View. In the JEE world it is typically implemented using ThreadLocal.
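
The ThreadLocal mechanism can be sketched as follows; the class and method names are illustrative, not taken from any framework:

```java
// Sketch of a ThreadLocal-bound unit of work, the mechanism behind
// Open Session in View style filters in the JEE world.
class UnitOfWork {

    private static final ThreadLocal<UnitOfWork> CURRENT = new ThreadLocal<>();

    // Called at the beginning of a request (e.g. by a servlet filter).
    static void begin() {
        CURRENT.set(new UnitOfWork());
    }

    // Any code on the same thread can reach the active unit of work.
    static UnitOfWork current() {
        return CURRENT.get();
    }

    // Must be called in a finally block to avoid leaking state between requests.
    static void end() {
        CURRENT.remove();
    }
}
```

A servlet filter would call begin() before passing the request down the chain and end() in a finally block.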

Workflodize

A strange pattern name. I don’t really like the complexity of workflow engines and prefer a simple object-oriented finite state machine implementation.
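
The kind of implementation I have in mind can be sketched as a Java enum; the states and transitions below are a made-up example, not from the book:

```java
// A minimal object-oriented finite state machine as a Java enum.
enum OrderState {
    NEW, PAID, SHIPPED, DELIVERED;

    // Returns the next state on the happy path; terminal states throw.
    public OrderState next() {
        switch (this) {
            case NEW:     return PAID;
            case PAID:    return SHIPPED;
            case SHIPPED: return DELIVERED;
            default:      throw new IllegalStateException("Terminal state: " + this);
        }
    }
}
```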

Edge Component

Separate infrastructure code from the domain. As simple as that.

Decoupled Invocation

Use event / command bus for communication.

Parallel Pipelines

Apply the Unix philosophy to your services. SRP at a higher level.

Gridable Service

Horizontal scaling.

Service Instance

Horizontal scaling.

Virtual Endpoint

Make your deployment configuration flexible.

Service Watchdog

Service monitoring should be built-in.

Secured Message

Encrypt what should be secured on the message level (privacy, integrity, impersonation).

Secured Infrastructure

Encrypt what should be secured on the protocol level (privacy, integrity, impersonation).

Service Firewall

Security on the network level. Expose only what is really needed.

Identity Provider

Single Sign On.

Service Monitor

Monitoring on the business process level.

Request/Reply

Synchronous point-to-point communication.

Request/Reaction

Asynchronous point-to-point communication.

Inversion of Communications

Command Bus, Event Bus, messaging middleware in general. Complex Event Processing (CEP).

Saga

Long-running business transactions. Distributed transactions without XA.
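
The core idea can be sketched with compensating actions; this is my illustration, not code from the book:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Saga sketch: every completed local step registers a compensating action;
// when a later step fails, the completed steps are undone in reverse order.
class Saga {

    private final Deque<Runnable> compensations = new ArrayDeque<>();

    void step(Runnable action, Runnable compensation) {
        action.run();                     // local transaction commits immediately
        compensations.push(compensation); // remember how to undo it
    }

    void compensate() {
        while (!compensations.isEmpty()) {
            compensations.pop().run();    // undo in reverse order
        }
    }
}
```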

Reservation

Related to Saga: how to avoid XA transactions.

Composite Front End

How to compose services into a single web application? The author does not resolve my doubts in this chapter.

Client/Server/Service

How to deal with legacy systems, and how to move from a monolithic architecture to SOA.

Service Bus

Message Bus, Service Bus, ESB – a nice explanation.

Orchestration

Externalize long-running business processes, but still encapsulate business logic in the services, not in the orchestrator!

Aggregated Reporting

Looks like CQRS to me.

Antipatterns

Funny names for real problems that appear when SOA is used:

  • Knot – problems with coupling.
  • Nanoservice – problems with bounded contexts.
  • Transactional Integration – problems with XA transactions.
  • Same Old Way – problems with CRUD like services.

Summary

For sure it’s worth reading, but I expected more from Arnon Rotem-Gal-Oz. Sometimes I felt that the author covers only the tip of the iceberg, while the demons stay under the hood. The sample code fragments are not very helpful: they have high accidental complexity but do not clearly show the problem.

In addition, the book was published in 2012, but you will easily realize that the author had started ten years earlier; some parts seem to be outdated.

Release It! - Book Review

Recently I read the excellent book Release It! written by Michael Nygard. The book is 7 years old and I don’t know how I could have missed it until now.

Michael Nygard shows how to design and architect medium and large scale web applications. Real lessons learnt in the trenches, not golden rules from ivory-tower architects.

This blog post is a dump of the notes I took while reading the book. The list can be used as a checklist for system architects and developers. The notes are in no particular order, and perhaps there are duplications too.

  • admin access – should use separate networks from regular traffic; if not, the administrator will not be able to connect to the system when something is wrong.

  • network timeouts – should always be defined; if not, our system could hang when there is a problem with a remote service.

  • firewall – be aware of timeouts in firewall connection tracking tables; if a connection is unused for a long time (e.g. a connection from the pool), the firewall could drop packets silently.

  • failure probabilities – are dependent, not independent like coin tosses.

  • 3rd party vendors – their client libraries often suck: you cannot define timeouts, you cannot configure threading correctly.

  • method wait – always provide the timeout; do not use the no-argument Object.wait().

  • massive emails with deep links – do not send mass emails with deep links; a bunch of requests to a single resource could kill your application.

  • threads ratio – check the front-end to back-end thread ratio; the system is as fast as its slowest part.

  • SLA – define different SLAs for different subsystems; not everything must have 99.99%.

  • high CPU utilization – check GC logs first.

  • JVM crash – typical after OOM, when native code is trying to allocate memory: malloc() returns an error but only a few programmers handle this error.

  • collection size – do not use unbounded collections; a huge data set will kill your application eventually.

  • outgoing communication – define timeouts.

  • incoming communication – fail fast, be pleasant to other systems.

  • separate thread pool – for admin access, your last resort for fixing the system.

  • input validation – fail fast; use JS validation even if validation must be duplicated.

  • circuit breaker – design pattern for handling unavailable remote services.

  • handshake in protocol – an alternative to circuit breaker if you design your own protocol.

  • test harness – test using a production-like environment (but how to do that???).

  • capacity – always multiply by the number of users, requests, etc.

  • safety limits on everything – a nice general rule.

  • Oracle and connection pool – Oracle in its default configuration spawns a separate process for every connection; check how much memory is used just for handling client connections.

  • unbalanced resources – the underestimated part will fail first, and it could hang the whole system.

  • JSP and GC – be aware of the noclassgc JVM option; compiled JSP files use perm gen space.

  • http sessions – users do not understand the concept; do not keep the shopping cart in the session :–)

  • whitespace – remove any unnecessary whitespace from the pages; at large scale it saves a lot of traffic.

  • avoid hand-crafted SQL – hard to predict the outcome, and hard to optimize for performance.

  • database tests – use the real data volume.

  • unicast – can be used for up to ~10 servers; for a bigger cluster use multicast.

  • cache – always limit cache size.

  • hit ratio – always monitor cache hit ratio.

  • precompute html – huge server resource saver, not everything changes on every request.

  • JVM tuning – is application release specific; with every release memory utilization could differ.

  • multihomed servers – in production the network topology is much more complex.

  • bonding – a single logical network interface configured with multiple network cards and multiple switch ports.

  • backup – use a separate network; backups always consume your whole bandwidth.

  • virtual IP – always configure virtual IPs; your configuration will be much more flexible.

  • technical accounts – do not share accounts between services; it would be a security flaw.

  • cluster configuration verification – periodically check configuration on the cluster nodes, even if the configuration is deployed automatically.

  • separate configuration specific for the single cluster node – keep node specific configuration separated from shared configuration.

  • configuration property names – base them on function, not nature (e.g. hostname is too generic).

  • graceful shutdown – do not terminate in-flight business transactions.

  • thread dumps – prepare scripts for that; during an incident time is really precious (SLAs).

  • recovery oriented computing – be prepared for restarting only part of the system, restarting everything is time consuming.

  • transparency – be able to monitor everything.

  • monitoring policy, alerts – should not be defined by the service, configure the policies outside (perhaps in central place).

  • log format – should be human readable; humans are the best at pattern matching; use tabulators and fixed-width columns.

  • CIM – superior to SNMP.

  • SSL accelerator – what is it really???

  • OpsDB monitoring – measurements and expectations, end to end business process monitoring.

  • Node Identifiers – assign to teams in block.

  • Observe, Orient, Decide, Act – military methodology, somehow similar to Agile :–)

  • review – periodically review tickets, stack traces in log files, volume of problems, data volumes and query statistics.

  • DB migration – expansion phase for incompatible schema changes.
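
Several of the notes above (timeouts, fail fast, circuit breaker) combine into one pattern. A minimal circuit breaker sketch, with illustrative names rather than the book's code:

```java
// Minimal circuit breaker: after `threshold` consecutive failures the
// breaker opens and further calls fail fast instead of hanging on a
// broken remote service. A real implementation would also reset after
// a cool-down period (half-open state), omitted here for brevity.
class CircuitBreaker {

    private final int threshold;
    private int failures = 0;

    CircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    boolean isOpen() {
        return failures >= threshold;
    }

    <T> T call(java.util.concurrent.Callable<T> remote) throws Exception {
        if (isOpen()) {
            throw new IllegalStateException("Circuit open: failing fast");
        }
        try {
            T result = remote.call();
            failures = 0;            // success closes the breaker again
            return result;
        } catch (Exception e) {
            failures++;              // count consecutive failures
            throw e;
        }
    }
}
```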

Acceptance Testing Using JBehave, Spring Framework and Maven

This post documents acceptance testing best practices collected in regular projects I was working on. The best practices materialized into a working project using JBehave, Spring Framework and Maven.

After reading you will know:

  • How to implement automated acceptance tests and avoid common traps.
  • How to organize project build using Maven.
  • How to configure project and glue everything together using Spring Framework.
  • How to write test scenarios using JBehave.
  • Finally how to run tests from command line and from your favourite IDE.

Automated acceptance tests

An automated acceptance test suite is the system documentation, the real single source of truth. The best documentation I’ve ever seen: always up to date, unambiguous and precise.

But I found many traps when I was trying to apply acceptance test automation in practice.

Acceptance Testing is about collaboration, not tools.

You will get much better results if you collaborate closely with the product owner, end users and the customer. You could write test scenarios all by yourself, but you would probably fail. Once you are able to work on test scenarios together, you can think about tools and automation. Do not let the tools interfere with collaboration; all team members must be committed to contributing to acceptance tests.

Acceptance Testing needs to be done through the user interface.

In most situations you don’t need to implement tests through the user interface.

The user interface tends to change frequently, business logic not so often. I don’t want to change my tests when the business logic stays unchanged, even if the user interface has changed significantly.

User interface tests are very fragile and slow. You will lose one of the main advantages of automated tests: a fast and precise feedback loop. It is also really hard to set up and maintain the infrastructure for user interface testing.

Everything should be tested.

Acceptance tests are mainly for happy path scenario verification. They are expensive to maintain, so do not test corner cases, validation and error handling at that level. Focus only on the assertions relevant to the given scenario; do not verify everything just because you can.

Project build organization

After a bunch of theory it is time to show real code. Let’s start with proper project organization. I found that acceptance testing is a cross-cutting aspect of the application and should be separated from the application code. The acceptance tests build configuration is very specific and I don’t want to clutter the application build configuration. You can also utilize a multi-module project to ensure that the acceptance tests module is allowed to call only the application’s public API. This segregation applies only to acceptance testing; the best place for unit tests is still the application module, under the src/test directory.

With Maven (and other build tools like Gradle), application code and acceptance tests code can be located in separate modules.

Parent module
<project>
    <groupId>example</groupId>
    <artifactId>example-jbehave</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <modules>
        <module>example-jbehave-app</module>
        <module>example-jbehave-tests</module>
    </modules>
</project>
Web application module
<project>
    <parent>
        <groupId>example</groupId>
        <artifactId>example-jbehave</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>

    <artifactId>example-jbehave-app</artifactId>
    <packaging>war</packaging>
</project>

Tests module
<project>
    <parent>
        <groupId>example</groupId>
        <artifactId>example-jbehave</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>

    <artifactId>example-jbehave-tests</artifactId>
    <packaging>jar</packaging>
</project>

The parent module is the best place to define common configuration properties inherited by child modules.

Configuration properties in parent module
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>

    <maven.compiler.source>1.7</maven.compiler.source>
    <maven.compiler.target>1.7</maven.compiler.target>

    <jbehave.version>3.9.2</jbehave.version>
    <logback.version>1.1.1</logback.version>
    <slf4j.version>1.7.6</slf4j.version>
    <spring.version>4.0.5.RELEASE</spring.version>
</properties>

In the parent module you can also define the Spring Framework BOM (Bill Of Materials) to ensure consistent dependency management. This is a quite new feature of the Spring Framework ecosystem.

Dependency management in parent module
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-framework-bom</artifactId>
            <version>${spring.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        (...)
    </dependencies>
</dependencyManagement>

Because I prefer SLF4J over Apache Commons Logging, the unwanted dependency is excluded globally from the spring-core artifact.

Dependency management in parent module
<dependencyManagement>
    <dependencies>
        (...)
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>commons-logging</groupId>
                    <artifactId>commons-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>
</dependencyManagement>

In the application module declare all application dependencies. In a real application the list will be much longer.

Dependency management in application module
<dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${slf4j.version}</version>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
            <version>${slf4j.version}</version>
            <scope>runtime</scope>
        </dependency>

        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>${logback.version}</version>
            <scope>runtime</scope>
        </dependency>
</dependencies>

The Maven War Plugin must be configured specifically: the classes (the content of the WEB-INF/classes directory) must be attached to the project as an additional artifact, because the acceptance tests module depends on it. Set the attachClasses property to true.

War plugin configuration in application module
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <version>2.4</version>
    <configuration>
        <attachClasses>true</attachClasses>
    </configuration>
</plugin>

Alternatively you can use two separate modules for the application: one of jar type with the domain and infrastructure, and a separate war type module with the web layer. Then you would declare a dependency on your jar application module only. In my example I keep it simple and use a single war type module for all layers of the application.

In the tests module declare all dependencies as well. There is also an extra dependency on the application module, the additional jar type artifact generated by the Maven War Plugin. The last two dependencies of zip type are needed to generate the JBehave test report.

Dependency management in tests module
<dependencies>
        <dependency>
            <groupId>${project.groupId}</groupId>
            <artifactId>example-jbehave-app</artifactId>
            <version>${project.version}</version>
            <classifier>classes</classifier>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
        </dependency>

        <dependency>
            <groupId>org.jbehave</groupId>
            <artifactId>jbehave-core</artifactId>
            <version>${jbehave.version}</version>
        </dependency>

        <dependency>
            <groupId>org.jbehave</groupId>
            <artifactId>jbehave-spring</artifactId>
            <version>${jbehave.version}</version>
        </dependency>

        <dependency>
            <groupId>org.jbehave.site</groupId>
            <artifactId>jbehave-site-resources</artifactId>
            <version>3.1.1</version>
            <type>zip</type>
        </dependency>

        <dependency>
            <groupId>org.jbehave</groupId>
            <artifactId>jbehave-core</artifactId>
            <version>${jbehave.version}</version>
            <classifier>resources</classifier>
            <type>zip</type>
        </dependency>
    </dependencies>

Two Maven plugins must be configured specifically in the tests module: maven-surefire-plugin and jbehave-maven-plugin.

Because we separated the tests into their own module, test classes can be located under src/main as first-class citizens. Surefire is configured to execute test scenarios under the example/jbehave/tests/stories package.

Surefire plugin configuration in tests module
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.17</version>
    <configuration>
        <testSourceDirectory>${basedir}/src/main/java/</testSourceDirectory>
        <testClassesDirectory>${project.build.directory}/classes/</testClassesDirectory>
        <includes>
            <include>example/jbehave/tests/stories/**/*.java</include>
        </includes>
    </configuration>
</plugin>

In my setup the JBehave plugin is responsible only for unpacking resources used by the test report. I do not use the plugin to run stories at all; I found a better way to do that, described later in the post.

JBehave plugin configuration in tests module
<plugin>
    <groupId>org.jbehave</groupId>
    <artifactId>jbehave-maven-plugin</artifactId>
    <version>${jbehave.version}</version>
    <executions>
        <execution>
            <id>unpack-view-resources</id>
            <phase>generate-resources</phase>
            <goals>
                <goal>unpack-view-resources</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Spring Framework configuration

The application implements simplified shopping basket functionality. Do not use my shopping basket implementation in production, it exists only for this post’s educational purposes :–)

The application is composed of three main packages: domain, infrastructure and web. This convention comes from Domain Driven Design; you can read more in my post DDD Architecture Summary.

Each package is configured using Spring Framework annotation support. In general you should keep the configuration as modular as possible. This is very important for testing: with modular configuration you can load only the needed context and speed up test execution.

DomainConfiguration.java
@Configuration
@ComponentScan
public class DomainConfiguration {
}
InfrastructureConfiguration.java
@Configuration
@ComponentScan
public class InfrastructureConfiguration {
}
WebConfiguration.java
@Configuration
@ComponentScan
public class WebConfiguration {
}

If you are interested in the application functionality, go to the source code. The application is really simple, just plain old Java.

Much more interesting is the Spring Framework configuration in the tests module. First, a meta-annotation for acceptance tests is defined. This is a relatively new Spring Framework feature that avoids repetition in test definitions.

AcceptanceTest.java
@ContextConfiguration(classes = AcceptanceTestsConfiguration.class)
@ImportResource({"classpath:/application.properties", "classpath:/tests.properties"})
@ActiveProfiles("tests")
@DirtiesContext
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface AcceptanceTest {
}
  1. Tests configuration is loaded, again using Java config instead of XML.
  2. Load application properties and overwrite defaults using tests properties if needed.
  3. Activate some special Spring Framework profile(s). Another way to customize tests configuration.
  4. Acceptance tests have side effects typically. Reload context before every story execution.

The AcceptanceTestsConfiguration class is again very simple. It imports the application configurations: domain and infrastructure. Because we implement acceptance tests using the service layer, we don’t need to load the web module or run a web container.

AcceptanceTestsConfiguration
@Configuration
@Import({DomainConfiguration.class, InfrastructureConfiguration.class})
@ComponentScan
public class AcceptanceTestsConfiguration {
}

Meta-annotation support is also used to define very specific annotations: one for JBehave test steps, the second for JBehave converters. Well-crafted annotations are better than the generic @Component, even if they do not provide additional features.

Steps.java
@Target(value = ElementType.TYPE)
@Retention(value = RetentionPolicy.RUNTIME)
@Documented
@Component
public @interface Steps {
}
Converter.java
@Target(value = ElementType.TYPE)
@Retention(value = RetentionPolicy.RUNTIME)
@Documented
@Component
public @interface Converter {
}

JBehave configuration

The last infrastructure element in the tests module is a base class for stories. JBehave provides plenty of ways to integrate with Spring Framework, and I spent a lot of time selecting the best one.

I had the following requirements:

  • The ability to run single story from my IDE.
  • Meet the Open/Closed Principle: when I add a new story I do not want to modify any existing file, only add new one(s).
  • Have a full control over JBehave configuration.

To meet these requirements, a base class for all tests must be defined. I do not like the idea of using inheritance here, but I did not find a better way.

Let me describe AbstractSpringJBehaveStory step by step:

public abstract class AbstractSpringJBehaveStory extends JUnitStory {
...
}

JUnitStory is a JBehave class with a single test method that runs a single story. It means that any subclass of this class can be executed as a regular JUnit test.

private static final int STORY_TIMEOUT = 120;

public AbstractSpringJBehaveStory() {
    Embedder embedder = new Embedder();
    embedder.useEmbedderControls(embedderControls());
    embedder.useMetaFilters(Arrays.asList("-skip"));
    useEmbedder(embedder);
}

private EmbedderControls embedderControls() {
    return new EmbedderControls()
            .doIgnoreFailureInView(true)
            .useStoryTimeoutInSecs(STORY_TIMEOUT);
}

The constructor initializes the JBehave embedder, a facade used to embed JBehave functionality in a JUnit runner.

@Autowired
private ApplicationContext applicationContext;

@Override
public InjectableStepsFactory stepsFactory() {
    return new SpringStepsFactory(configuration(), applicationContext);
}

This configures JBehave to load steps and converters from the Spring Framework context. What is also important: because the steps and converters are managed by Spring Framework, you can inject whatever you want into them.

@Override
public Configuration configuration() {
    return new MostUsefulConfiguration()
            .useStoryPathResolver(storyPathResolver())
            .useStoryLoader(storyLoader())
            .useStoryReporterBuilder(storyReporterBuilder())
            .useParameterControls(parameterControls());
}

The configuration method is, unsurprisingly, responsible for the JBehave configuration. The MostUsefulConfiguration is used with some customizations. Let’s check what kinds of customizations are applied.

private StoryPathResolver storyPathResolver() {
    return new UnderscoredCamelCaseResolver();
}

The story path resolver is responsible for resolving the story based on the test class name. With the UnderscoredCamelCaseResolver implementation, the story learn_jbehave_story.story will be correlated with the LearnJbehaveStory class.
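
The mapping can be approximated with a one-liner; this is my sketch of the idea, not JBehave's actual implementation:

```java
// Approximation of UnderscoredCamelCaseResolver's class-name-to-story mapping:
// insert '_' before every non-leading capital letter, lowercase the result.
class StoryPaths {
    static String storyNameFor(String simpleClassName) {
        return simpleClassName.replaceAll("(?<=.)([A-Z])", "_$1").toLowerCase() + ".story";
    }
}
```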

private StoryLoader storyLoader() {
    return new LoadFromClasspath();
}

Stories will be resolved and loaded from the current classpath (from src/main/resources, to be more specific).

private StoryReporterBuilder storyReporterBuilder() {
    return new StoryReporterBuilder()
            .withCodeLocation(CodeLocations.codeLocationFromClass(this.getClass()))
            .withPathResolver(new ResolveToPackagedName())
            .withFailureTrace(true)
            .withDefaultFormats()
            .withFormats(IDE_CONSOLE, TXT, HTML);
}

This configures how the reports will look. Nothing special; please refer to the JBehave reference documentation for more details.

private ParameterControls parameterControls() {
    return new ParameterControls()
            .useDelimiterNamedParameters(true);
}

This configures how step parameters are handled.

Test scenarios definition

Test scenarios are rather straightforward if you are familiar with BDD and Gherkin-like syntax. If not, please read a short definition of BDD concepts first.

Notice that there is nothing in the scenarios specific to the application user interface. It is not important how the product price editor looks, or how the shopping basket is presented.

Narrative:
In order to learn JBehave
As a tester
I want to define sample story for shopping cart

Lifecycle:
Before:
Given product Domain Driven Design with SKU 1234
And product Domain Driven Design price is 35 EUR

Given product Specification By Example with SKU 2345
And product Specification By Example price is 30 EUR

Scenario: Empty shopping cart

Given empty shopping cart
Then shopping cart is empty

Scenario: Products are added to empty shopping cart

Given empty shopping cart
When products are added to the shopping cart:
|PRODUCT                 |QTY|
|Domain Driven Design    |  1|
|Specification By Example|  2|

Then the number of products in shopping cart is 2
And total price is 95 EUR

Test steps implementation

Test steps are implemented in Java classes annotated with @Steps. The common mistake is to develop steps for a single story only. The steps should be reusable across many user stories where feasible. With reusable steps you will find that writing the next user stories is much easier and faster: you can just use existing step implementations to define a new user story.

For example, steps for the product catalog and product prices are defined in the SharedSteps class. The repositories are used to manage products and prices. In a real application you should use an application service and its public API instead of direct access to the repositories. Think about the complexity of the step implementations if we had to use the user interface instead of repositories or a service API.

@Steps
public class SharedSteps {

    @Autowired
    private ProductRepository productRepository;

    @Autowired
    private PriceRepository priceRepository;

    @Given("product $name with SKU $sku")
    public void product(String name, StockKeepingUnit sku) {
        productRepository.save(new Product(sku, name));
    }

    @Given("product $name price is $price")
    public void price(String name, Money price) {
        Product product = productRepository.findByName(name);
        priceRepository.save(product.getSku(), price);
    }
}

You could ask: how does JBehave know about the StockKeepingUnit and Money classes? You have to implement custom converters, but it is much more convenient to use a well-defined API instead of dozens of String-based values.

MoneyConverter
@Converter
public class MoneyConverter {

    @AsParameterConverter
    public Money convertPercent(String value) {
        if (StringUtils.isEmpty(value)) {
            return null;
        }

        String[] tokens = value.split("\\s");
        if (tokens.length != 2) {
            throw new ParameterConverters.ParameterConvertionFailed("Expected 2 tokens (amount and currency) but got " + tokens.length + ", value: " + value + ".");
        }

        return new Money(tokens[0], tokens[1]);
    }
}

The MoneyConverter class is annotated with the @Converter annotation defined before. StringUtils is a utility class from Spring Framework; look at the API documentation to see how many helpful utility classes are implemented in the framework. If the value cannot be converted, JBehave’s ParameterConvertionFailed exception is thrown.

The shopping cart related steps are implemented in ShoppingCartSteps class.

ShoppingCartSteps
@Steps
public class ShoppingCartSteps {

    @Autowired
    private ShoppingCartService shoppingCartService;

    @Autowired
    private ProductDao productRepository;

    @Given("empty shopping cart")
    public void emptyShoppingCart() {
        shoppingCartService.createEmptyShoppingCart();
    }

    @When("products are added to the shopping cart: $rows")
    public void addProducts(List<ShoppingCartRow> rows) {
        for (ShoppingCartRow row : rows) {
            Product product = productRepository.findByName(row.getProductName());
            shoppingCartService.addProductToShoppingCart(product.getSku(), row.getQuantity());
        }
    }

    @Then("shopping cart is empty")
    public void isEmpty() {
        ShoppingCart shoppingCart = shoppingCartService.getShoppingCart();
        assertEquals(0, shoppingCart.numberOfItems());
    }

    @Then("the number of products in shopping cart is $numberOfItems")
    public void numberOfItems(int numberOfItems) {
        ShoppingCart shoppingCart = shoppingCartService.getShoppingCart();
        assertEquals(numberOfItems, shoppingCart.numberOfItems());
    }

    @Then("total price is $price")
    @Pending
    public void totalPrice(Money price) {
        // TODO: implement missing functionality and enable step
    }
}

There are two interesting elements:

  • Last step annotated with @Pending annotation.
  • ShoppingCartRow class used to define the products added to the cart.

Typically a user story is prepared before implementation. In this situation you will have several pending steps, implemented gradually during the sprint. A pending step does not mean that the acceptance test has failed; it only means that the functionality has not been implemented yet.

ShoppingCartRow is a simple bean prepared for tabular parameter definitions in the story. Do you remember this step?

When products are added to the shopping cart:
|PRODUCT                 |QTY|
|Domain Driven Design    |  1|
|Specification By Example|  2|

A basket presented in tabular form is much easier to read than one defined line by line. To use this kind of parameter you have to prepare a class with a few annotations.

ShoppingCartRow
@AsParameters
public static class ShoppingCartRow {

    @Parameter(name = "PRODUCT")
    private String productName;

    @Parameter(name = "QTY")
    private Integer quantity;

    public String getProductName() {
        return productName;
    }

    public void setProductName(String productName) {
        this.productName = productName;
    }

    public Integer getQuantity() {
        return quantity;
    }

    public void setQuantity(Integer quantity) {
        this.quantity = quantity;
    }
}

JBehave uses this annotated class to convert table row from user story to Java object.
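Under the hood the mechanism is conceptually close to the following plain-Java sketch. This is a rough illustration without JBehave dependencies, not the library's actual implementation; the parsing and names are simplified:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified illustration of what JBehave does with @AsParameters:
// split the pipe-delimited table, then map header names to bean properties.
public class TableToBeanSketch {

    public static List<Map<String, String>> parse(String table) {
        String[] lines = table.trim().split("\n");
        String[] headers = splitRow(lines[0]);
        List<Map<String, String>> rows = new ArrayList<>();
        for (int i = 1; i < lines.length; i++) {
            String[] cells = splitRow(lines[i]);
            Map<String, String> row = new LinkedHashMap<>();
            for (int j = 0; j < headers.length; j++) {
                row.put(headers[j], cells[j]);
            }
            rows.add(row);
        }
        return rows;
    }

    private static String[] splitRow(String line) {
        // drop leading/trailing pipes, trim each cell
        String[] cells = line.trim().replaceAll("^\\||\\|$", "").split("\\|");
        for (int i = 0; i < cells.length; i++) {
            cells[i] = cells[i].trim();
        }
        return cells;
    }

    public static void main(String[] args) {
        String table = "|PRODUCT|QTY|\n|Domain Driven Design|1|\n|Specification By Example|2|";
        List<Map<String, String>> rows = parse(table);
        System.out.println(rows.size());                 // 2
        System.out.println(rows.get(0).get("PRODUCT"));  // Domain Driven Design
        System.out.println(rows.get(1).get("QTY"));      // 2
    }
}
```

The real library additionally converts cell values to the declared field types (here Integer for QTY) using the registered parameter converters.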

Running tests

The last part of this post is about running tests.

For every user story definition, one test class is defined. The test class is only a marker and does not define any logic.

LearnJbehaveStory.java
@RunWith(SpringJUnit4ClassRunner.class)
@AcceptanceTest
public class LearnJbehaveStory extends AbstractSpringJBehaveStory {
}

The test can be executed directly from your favourite IDE, as long as the IDE supports JUnit runners.

The second way to execute the tests is to use Maven from the command line. Because the tests are executed by the regular Maven Surefire Plugin, they run exactly the same way as any other tests. Run the following command from the project parent directory. Maven builds the application module first, adds its classes to the reactor classpath, and then executes the acceptance tests from the tests module.

mvn -pl example-jbehave-tests -am test

The convenient way to execute a single test from the command line is the regular -Dtest property recognized by Surefire.

mvn -pl example-jbehave-tests -am test -Dtest=LearnJbehaveStory
...

Summary

In this post I presented the most important elements of the example project. The complete project is hosted on GitHub; you can clone/fork it and do some experiments by yourself. I configured a Travis continuous integration build to ensure that the project really works.

The Twelve-Factor App - Part 2

This blog post is a continuation of the first part of this blog series.

7. Port Binding

The twelve-factor app is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service.

I developed a self-contained web application once, with an embedded Jetty server. There are many products with an embedded web server on the market, e.g: Artifactory. Now most of my POCs (Proofs of Concept) use Spring Boot, where you can run a web application as a regular system process. After the hell of JBoss class-loader issues, it seems to be the correct way to run an application.

8. Concurrency

In the twelve-factor app, processes are a first class citizen.

Using processes instead of threads is controversial in the JVM world, but I agree that you cannot scale out using threads only. What is also interesting: you should never daemonize a process or write a PID file. Just rely on the system process management tools like upstart.

9. Disposability

The twelve-factor app’s processes are disposable, meaning they can be started or stopped at a moment’s notice.

I faced disposability issues when I was developing applications hosted on GAE (Google App Engine). Forget about any heavy-duty frameworks on GAE; the startup process must be really lightweight. In general this is problematic in the JVM world: the startup time of the JVM itself is significant, and the JVM must spin up our application as well. Compared to node.js, the difference in startup performance is huge.

I also remember how easily you can reconfigure a system service by sending it a -HUP signal. It would be nice to have this possibility for my applications.

10. Dev/prod parity

The twelve-factor app is designed for continuous deployment by keeping the gap between development and production small.

Clear for me: test your production-like environment as often as possible to minimize the risk. If the production database is Oracle, use Oracle XE for local development, not MySQL or H2. If the production application server is JBoss, use JBoss locally, or Apache Tomcat as a last resort. Use the same JVM with similar memory settings if feasible. If you deploy your application on Linux, do not use Windows for local development. Virtualization and lightweight containers are your friends. And so on…

11. Logs

A twelve-factor app never concerns itself with routing or storage of its output stream.

Hmm, I would prefer to use a logger (SLF4J) with a configured appender instead of stdout. Instead of a file appender I could use a Syslog appender and gather logs from all cluster nodes. But maybe I am wrong about this. I understand the point that stdout is the basic common denominator across all runtime platforms.

12. Admin processes

Twelve-factor strongly favors languages which provide a REPL shell out of the box, and which make it easy to run one-off scripts.

For almost all my web applications, I embedded the BSH web servlet (BeanShell Console). It rescued me from trouble many times. It is not a full-fledged REPL like the one in Scala, but still usable. Oh, and I forgot to mention the H2 web servlet, also embedded in most of my applications.

Sometimes it is much easier to expose some admin functionality as JMX beans. You can use Jolokia as a REST JMX connector and easily prepare an admin console using a few lines of HTML and JavaScript.
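As a minimal illustration of the JMX side, registering a standard MBean takes only a few lines using the JDK's own javax.management API. The bean, its attributes and its object name below are made up for this sketch:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Illustrative admin MBean: for a standard MBean the interface name must be
// the implementation class name plus the "MBean" suffix.
public class AdminConsoleSketch {

    public interface CacheAdminMBean {
        int getSize();
        void clear();
    }

    public static class CacheAdmin implements CacheAdminMBean {
        private int size = 42;
        public int getSize() { return size; }
        public void clear() { size = 0; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=CacheAdmin");
        server.registerMBean(new CacheAdmin(), name);

        // Once registered, the bean is visible in JConsole or can be exposed
        // over REST by a connector such as Jolokia.
        System.out.println(server.getAttribute(name, "Size")); // 42
        server.invoke(name, "clear", null, null);
        System.out.println(server.getAttribute(name, "Size")); // 0
    }
}
```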

The Twelve-Factor App - Part 1

During my studies about “Micro Services” I found a comprehensive (but short) document about the Twelve-Factor App methodology for building software-as-a-service applications. The original paper is published at 12factor.net.

Below you can find a short summary of my experiences with the first part of the document. There is also a second part of this blog post series.

1. Codebase

There is always a one-to-one correlation between the codebase and the app

I had the chance to use a setup where one Subversion repository was shared by many projects. Only once, and I said “never again”. I remember problems with release management, setting up access rights, and crazy revision numbers.

2. Dependencies

A twelve-factor app never relies on implicit existence of system-wide packages

I remember a setup where you spent a whole day building all dependencies (a lot of C and C++ code). The solution was a repository with compiled and versioned dependencies. Almost everything was compiled statically, with minimal dependencies on the core system libraries like stdc.

Right now I build projects using Maven repositories and artifacts. But it is not enough for a twelve-factor app, and I fully agree. My next step should be putting the “Infrastructure as Code” principle into practice.

3. Config

strict separation of config from code

Some time ago my application was deployed on the wrong environment (a WAR file prepared for the QA environment was deployed on PROD). It was one of the worst weeks in my career, rolling everything back. Never again. I fully agree that the binary should be environment independent: keep configuration out of the binary artifact.

4. Backing Services

The code for a twelve-factor app makes no distinction between local and third party services

I do not fully understand this chapter. What I understood is that I should separate my domain from attached resources (local and third-party services). And this is what I have done many times:

  • Externalize connection configuration
  • Use Anti Corruption Layer between my domain and infrastructure (e.g: hexagonal architecture)
  • Do not mix domain logic with infrastructure code.
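The Anti Corruption Layer bullet can be sketched as a port-and-adapter pair in the hexagonal style; all names here are illustrative, not from any particular project:

```java
public class BackingServiceSketch {

    // Domain side: a port expressed in business terms; no SMTP details leak in.
    interface NotificationGateway {
        void notifyCustomer(String customerId, String message);
    }

    // Infrastructure side: the adapter hides the attached resource; the
    // endpoint comes from externalized configuration, not from the code.
    static class SmtpNotificationGateway implements NotificationGateway {
        private final String smtpHost;
        String lastDelivery; // recorded here only so the sketch is observable

        SmtpNotificationGateway(String smtpHost) { // e.g. looked up from JNDI
            this.smtpHost = smtpHost;
        }

        @Override
        public void notifyCustomer(String customerId, String message) {
            // a real adapter would open a JavaMail session against smtpHost
            lastDelivery = "via " + smtpHost + " to " + customerId + ": " + message;
            System.out.println(lastDelivery);
        }
    }

    public static void main(String[] args) {
        // Swapping a local service for a third-party one means providing a
        // different adapter; the domain code never notices the change.
        NotificationGateway gateway = new SmtpNotificationGateway("smtp.company.com");
        gateway.notifyCustomer("42", "Your order has shipped");
    }
}
```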

5. Build, release, run

The twelve-factor app uses strict separation between the build, release, and run stages

The difference between the build and release stages is somewhat new to me. My JEE applications are released and deployed to the Maven repository. The deployed WAR files are deployable on any environment; the configuration is externalized and applied during the Maven WAR overlay process. The outcome of the overlay is not stored as a reference, but maybe it should be. The question is where to put the release: again in the Maven repository, or as a Bamboo build artifact?

What I apply during the run stage is database schema migration using Liquibase or Flyway, and it really works. I agree with the author to keep this stage as small as possible.

And for sure, direct changes on production are prohibited. Once I had to clean up a project where changes had not been checked in to the repository. Never again.

6. Processes

Twelve-factor processes are stateless and share-nothing.

I have never used this concept but I agree with the author. From the scalability perspective, a share-nothing architecture of stateless services is good.

OK, 10 years ago I developed an application using CORBA, with remote calls to fully stateful remote objects. Bad idea, really.

How to Send Email From JEE Application

Sending email notifications from an enterprise application is a very common scenario. I know several methods to solve this puzzle; below you can find a short summary.

To send an email from the application, at least the SMTP server address must be configured. Because the released application binary (e.g. a WAR file) should be portable across environments (integration, QA, staging, production), the configuration must be externalized.
Below I present code snippets to configure the SMTP server address as a JNDI entry.

Sample JNDI entry for JBoss:

<?xml version="1.0" encoding="UTF-8"?>
<server>
  <mbean code="org.jboss.mail.MailService" name="jboss:service=mailSession">
    <attribute name="JNDIName">mail/mailSession</attribute>
    <attribute name="Configuration">
      <configuration>
        <property name="mail.smtp.host" value="smtp.company.com"/>
      </configuration>
    </attribute>
    <depends>jboss:service=Naming</depends>
  </mbean>
</server>

Sample JNDI entry for Tomcat:

<?xml version="1.0" encoding="UTF-8"?>
<Context>
  <Resource name="mail/mailSession"
    auth="Container"
    type="javax.mail.Session"
    mail.smtp.host="smtp.company.com"/>
</Context>

When the mail session is configured as a JNDI resource, it can be easily utilized by the Spring Framework mail sender:

<jee:jndi-lookup id="mailSession" jndi-name="mail/mailSession" />

<bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
  <property name="session" ref="mailSession"/>
</bean>

Now it is time for the tougher part: how to use the mail sender correctly? There are at least four options; choose the best one for you:

  • Direct (Sync) Use the mail session directly from the application service in the web request thread.
  • Direct (Async) Use the mail session directly from the application service using the @Async Spring annotation.
  • Database Queue Save messages into a database table and create a cron job to send the emails periodically.
  • JMS Queue Put messages into a JMS queue and attach a JMS listener to process and send emails.

I collected a few common functional and non-functional requirements together with a short categorization of each method.

Requirement                                        | Direct (Sync) | Direct (Async) | Database Queue | JMS Queue
Application works even if SMTP is down             | no            | no             | yes            | yes
Web request thread is not blocked                  | no            | yes            | yes            | yes
Mail aggregation, scheduled sending, etc.          | no            | no             | yes            | limited
Control over SMTP request throttling               | no            | limited        | limited        | yes
Redelivery policy (no lost messages if SMTP down)  | no            | no             | limited        | yes
Monitoring                                         | no            | no             | yes            | yes

I would start with the “Database Queue” approach, at least if JMS is not already used in the project or you do not have to send thousands of emails. The “Direct” methods are not an option at all, IMHO.
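The “Database Queue” idea can be sketched in a few lines. In this hedged sketch an in-memory queue stands in for the database table, and the scheduler that would call drain() periodically (the cron job) is omitted:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the "Database Queue" approach: the web request only inserts a
// row; a periodic job drains pending rows and talks to SMTP. An in-memory
// queue stands in for the database table here.
public class MailQueueSketch {

    static class PendingMail {
        final String to, body;
        PendingMail(String to, String body) { this.to = to; this.body = body; }
    }

    private final Queue<PendingMail> table = new ArrayDeque<>();

    // called in the web request thread: cheap, never blocks on SMTP
    public void enqueue(String to, String body) {
        table.add(new PendingMail(to, body));
    }

    // called by the cron job; if SMTP is down the rows simply stay queued
    public int drain(boolean smtpAvailable) {
        int sent = 0;
        while (smtpAvailable && !table.isEmpty()) {
            PendingMail mail = table.poll();
            // real code would hand 'mail' to the JavaMail session here
            sent++;
        }
        return sent;
    }

    public int pending() { return table.size(); }

    public static void main(String[] args) {
        MailQueueSketch queue = new MailQueueSketch();
        queue.enqueue("a@example.com", "hello");
        queue.enqueue("b@example.com", "world");
        System.out.println(queue.drain(false)); // 0 - SMTP down, nothing lost
        System.out.println(queue.pending());    // 2
        System.out.println(queue.drain(true));  // 2
    }
}
```

This is exactly why the approach survives an SMTP outage: undelivered messages remain in the table until the next drain succeeds.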

A separate part of the subject is how to create the email body. In most situations I used a template engine like Freemarker or Thymeleaf. The template can be defined as an internal WAR resource, or loaded from the database if it needs to be adjusted at runtime.
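The idea behind template-based mail bodies can be illustrated with the JDK's own MessageFormat; a real project would use Freemarker or Thymeleaf instead, and the template and values below are made up for this sketch:

```java
import java.text.MessageFormat;

// Template-based mail body in miniature: the template is data (could come
// from the WAR or the database), the values are filled in at send time.
public class MailTemplateSketch {

    static String render(String template, Object... args) {
        return MessageFormat.format(template, args);
    }

    public static void main(String[] args) {
        String template = "Hello {0}, your order {1} has been shipped.";
        System.out.println(render(template, "Alice", "ORDER-42"));
        // prints: Hello Alice, your order ORDER-42 has been shipped.
    }
}
```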

DDD Architecture Summary

In this blog post you can find my general rules for implementing a system using Domain Driven Design. Do not apply them blindly, but they are a good starting point for DDD practitioners.

Bounded Context

  • A separate bounded context for each important module of the application (important from the business partner perspective).
  • Independent of each other (if feasible).
  • For a monolithic application, a separate Spring Framework context for each bounded context, e.g: applicationContext-domain-crm.xml, applicationContext-domain-shipping.xml, etc.
  • CRUD-like bounded contexts (user management, dictionaries, etc.) should be implemented as an Anemic Domain Model.

Domain

  • Place for application business logic.
  • Must be independent of the technical complexity, move technical complexity into infrastructure.
  • Must be independent of the particular presentation technology, move presentation related stuff into web.
  • Internal package structure must reflect business concepts (bounded contexts), e.g: crm, shipping, sales, shared, etc.

Domain Model

  • Rich model, place for: entities, domain services, factories, strategies, specifications, etc.
  • Best object oriented practices applied (SOLID, GRASP).
  • Unit tested heavily (with mocks as a last resort).
  • Unit tests executed concurrently (on method or class level).
  • Meaningful names for domain services, e.g: RebateCalculator, PermissionChecker; not RebateManager or SecurityService.
  • Domain services dependencies are injected by constructor.
  • Having more than 2~3 dependencies is suspicious.
  • Entities are not managed by containers.
  • Aggregate root entities are domain events publishers (events collectors).
  • Aggregates in single bounded context might be strongly referenced (navigation across objects tree).
  • Aggregates from different bounded contexts are referenced by business keys (if feasible).
  • No security, no transactions, no aspects, no magic, only plain old Java.
  • Interfaces for domain services when the service is provided by infrastructure.
  • No interfaces for domain services implemented in the domain model itself.
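The “events collector” bullet above can be sketched as follows; the aggregate and event names are illustrative:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of an aggregate root collecting domain events: the entity records
// events as side effects of business methods, and the application service
// publishes them after the transaction commits.
public class OrderAggregateSketch {

    interface DomainEvent {}

    static class OrderShippedEvent implements DomainEvent {
        final String orderId;
        OrderShippedEvent(String orderId) { this.orderId = orderId; }
    }

    static class Order {
        private final String id;
        private final List<DomainEvent> pendingEvents = new ArrayList<>();

        Order(String id) { this.id = id; }

        // business method: pure domain logic, no container magic
        void ship() {
            pendingEvents.add(new OrderShippedEvent(id));
        }

        // drained once by the publisher after a successful commit
        List<DomainEvent> dequeuePendingEvents() {
            List<DomainEvent> events = new ArrayList<>(pendingEvents);
            pendingEvents.clear();
            return Collections.unmodifiableList(events);
        }
    }

    public static void main(String[] args) {
        Order order = new Order("ORDER-1");
        order.ship();
        System.out.println(order.dequeuePendingEvents().size()); // 1
        System.out.println(order.dequeuePendingEvents().size()); // 0
    }
}
```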

Application Services

  • Orchestrator of, and facade over, the domain model for external actors.
  • Place for security handling.
  • Place for transactions handling.
  • Must not deliver any business logic, move business logic into domain model. Almost no conditionals and loops.
  • Implemented as transactional script.
  • No unit tests.
  • Acceptance tests executed against this layer.
  • Cglib-proxied; the proxy must be serializable when held by session-scoped beans in the web layer.
  • Dependencies are injected on field level (private fields).
  • Ten or more dependencies for a single application service is not a problem.
  • Application services are also domain event listeners.
  • Always stateless.
  • No interfaces, just implementation.
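A minimal sketch of such a transactional-script application service follows. All names are illustrative; in a real Spring bean the dependencies would be field-injected with @Autowired and the method annotated @Transactional and secured, while here the constructor is used only to keep the sketch container-free:

```java
public class ApplicationServiceSketch {

    interface RebateCalculator { int rebateFor(String customerId); }   // domain service
    interface OrderRepository { void save(String orderId, int rebate); } // infrastructure

    // In a real bean: @Transactional, security annotations, field injection.
    static class PlaceOrderService {
        private final RebateCalculator rebateCalculator;
        private final OrderRepository orderRepository;

        PlaceOrderService(RebateCalculator rebateCalculator, OrderRepository orderRepository) {
            this.rebateCalculator = rebateCalculator;
            this.orderRepository = orderRepository;
        }

        // transactional script: pure delegation, no conditionals, no loops
        void placeOrder(String customerId, String orderId) {
            int rebate = rebateCalculator.rebateFor(customerId);
            orderRepository.save(orderId, rebate);
        }
    }

    public static void main(String[] args) {
        PlaceOrderService service = new PlaceOrderService(
                customerId -> 10,
                (orderId, rebate) -> System.out.println(orderId + " saved with rebate " + rebate));
        service.placeOrder("42", "ORDER-1");
        // prints: ORDER-1 saved with rebate 10
    }
}
```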

Application Bootstrap

  • Initial application data.
  • Loaded during application startup (fired by BootstrapEvent) if application storage is empty.
  • Loading order is defined with Spring Ordered interface.
  • Data is loaded through the domain model API.
  • Data might be loaded through application services, e.g: load a sample Excel file when the application is integrated with the external world this way.
  • No tests; the bootstrap is tested during application startup on a daily basis.

Infrastructure

  • Place for technical services
  • Must not deliver any business logic, move business logic into domain.
  • Internal package structure must reflect technical concepts, e.g: ~infrastructure.jpa, ~infrastructure.jms, ~infrastructure.jsf, ~infrastructure.freemarker, ~infrastructure.jackson, etc.
  • Shared by all bounded contexts of the application. For more complex applications, separate the technical services, e.g: ~infrastructure.jpa.crm, ~infrastructure.jpa.shipping, etc.
  • Class names must reflect technical concepts, e.g.: JpaCustomerRepository, JacksonJsonSerializer, not CustomerRepositoryImpl, JsonSerializerImpl.
  • Integration tested heavily (with Spring Framework context loaded).
  • Integration tests executed by single thread.
  • Test execution separated from unit tests within test groups.
  • Separate Spring Framework context for each technical concept, e.g: applicationContext-infrastructure-jpa.xml, applicationContext-infrastructure-jms.xml, etc.
  • Separate and independent Spring test context for each technical module, e.g: testContext-jpa.xml, testContext-jms.xml, etc.

Web

  • Client specific facade (REST, MVC, JSF, etc.)
  • Place for UI logic (not applicable for JavaScript client and REST)
  • Delegates requests to application services
  • No transactions, no method level security, move security and transactions to application services.
  • No business logic, move business logic into domain.
  • Tested with mocked application services.
  • Tested with loaded spring context for MVC controllers (if applicable).
  • Serializable session-scoped beans (to be safe, all beans in this module should be java.io.Serializable).
  • Internal package structure must reflect UI organization structure, it might be similar to project sitemap.
  • Top level package might reflect technology or architecture e.g: presentation, rest, mvc, jsf, etc.

Development Environment Setup

This document is a manual on how to configure a flexible development environment for Java, JavaScript, Ruby and Python – my primary set of tools. Even if runtime installation with apt-get seems to be a trivial task, it gives limited control over the installed version of the runtime. The goal is to configure an environment where you can easily change Java, Ruby, node.js and Python versions – where you can define the runtime version on the project level.

The most convenient way to configure and manage runtimes is to use environment managers. An environment manager is nothing more than a shell script that intercepts executed commands using shim executables injected into your PATH. There are two flavours of environment managers: rvm-like and rbenv-like. I prefer the second one; it is less obtrusive and follows the general Unix principle: “do one thing and do it well”.

Let’s start by installing the environment managers (for Java, Ruby, node.js and Python) into your home directory:

git clone https://github.com/gcuisinier/jenv.git ~/.jenv
git clone https://github.com/sstephenson/rbenv.git ~/.rbenv
git clone https://github.com/OiNutter/nodenv.git ~/.nodenv
git clone https://github.com/yyuu/pyenv.git ~/.pyenv

For rbenv and nodenv you can install plugins that provide the rbenv install and nodenv install commands to compile and install runtimes automatically. For Java you have to download and install the JVM manually.

git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
git clone https://github.com/OiNutter/node-build.git ~/.nodenv/plugins/node-build

Add the environment managers to the PATH variable and initialize them to get command auto-completion. Append the following snippet at the end of the .bashrc (or .bash_profile on Mac) file.

export PATH="$HOME/.jenv/bin:$PATH"
eval "$(jenv init -)"

export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"

export PATH="$HOME/.nodenv/bin:$PATH"
eval "$(nodenv init -)"

export PATH="$HOME/.pyenv/bin:$PATH"
eval "$(pyenv init -)"

Install the runtimes using the environment managers (Java needs to be installed manually):

jenv add /path/to/already/installed/jdk
rbenv install 1.9.3-p448
nodenv install 0.10.12
pyenv install 3.4.1

Install the build tools (Maven, Gradle, sbt, etc.), create symbolic links, and configure PATH in the .profile file:

APPS="$HOME/apps"
export PATH="$APPS/apache-maven/bin:$APPS/gradle/bin:$APPS/sbt/bin:$PATH"

Make build tools jenv aware:

jenv enable-plugin maven
jenv enable-plugin gradle
jenv enable-plugin sbt

Finally add shell helper functions for JVM configuration to the .profile file:

function jdebug_set() {
    jenv shell-options "$JENV_OPTIONS -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=8000,suspend=n"
}

function jdebug_unset() {
    jenv shell-options --unset
}

function gc_set() {
    jenv shell-options "$JENV_OPTIONS -XX:+PrintGCDetails -Xloggc:gc.log"
}

function gc_unset() {
    jenv shell-options --unset
}

function jrebel_set() {
    jenv shell-options "$JENV_OPTIONS -javaagent:$APPS/jrebel/jrebel.jar -noverify"
}

function jrebel_unset() {
    jenv shell-options --unset
}

function jprofiler_set() {
    jenv shell-options "$JENV_OPTIONS -javaagent:$APPS/jprofiler/bin/agent.jar"
}

function jprofiler_unset() {
    jenv shell-options --unset
}

The last step is to read the environment managers’ manuals. Since all four managers are very similar, it should not take more than one evening.

GitFlow Step by Step

Git Flow is a mainstream process for branch-per-feature development. It is the best method I have found for managing projects developed by small to medium teams. Before you start reading this post, you should read two mandatory pieces:

Git Workflows by Atlassian

Maven JGit-Flow Plugin

This blog post is a step-by-step instruction on how to use Git Flow together with the Maven build tool, a continuous integration server (e.g: Bamboo) and a bug tracker (e.g: JIRA). If you are interested in how to automate the whole process, watch the Flow with Bamboo video. But I really recommend starting with pure git commands; when you understand the concept, move to the Git Flow tooling, and then automate everything eventually.

Start feature branch

  1. Assign the JIRA task to yourself.
  2. Move the JIRA task from “To Do” to “In Progress”.
  3. Create a feature branch for the JIRA user story (if it is the first task of the user story). The feature branch must reflect the JIRA issue number and have a meaningful name, e.g: PROJ-01_user_registration.

      mvn jgitflow:feature-start
    
  4. Verify that:

    • New local feature branch feature/PROJ-01_user_registration is created.
  5. Optionally push the feature branch to the remote repository.

     git push origin feature/PROJ-01_user_registration
    
  6. Verify that:

    • The feature branch is pushed into remote repository.
    • New Bamboo build plan is created for the feature branch.

Checkout the feature branch

  1. Check out a feature branch created by another developer (e.g. for code review).

     git checkout feature/PROJ-01_user_registration
    

Work on the feature branch

  1. Periodically push changes to the remote repository.

     git push origin feature/PROJ-01_user_registration
    
  2. Verify that:

    • Bamboo build plan for feature branch is green.

Finish feature branch

  1. Ensure your local develop branch is up to date.

     git checkout develop
     git pull origin develop
    
  2. To avoid conflicts when finishing the feature branch, ensure that all changes from develop are merged into the feature branch.

     git checkout feature/PROJ-01_user_registration
     git pull origin develop
    
  3. Resolve all conflicts (if any) and commit changes.

     git commit -a -m "Conflicts resolved"
    
  4. Finish the feature.

     mvn jgitflow:feature-finish
    
  5. Push changes from develop into remote repository

     git push origin develop
    
  6. Move JIRA task to “Done” category.

  7. Verify that:

    • Feature branch is merged into develop branch.
    • Local feature branch is removed.
    • Bamboo build plan for develop is green.

Start release branch

  1. Create release branch.

     mvn jgitflow:release-start
    
  2. Verify that:

    • New local release branch release/version is created.

Work with release branch

  1. Clean the database (Database).

  2. Run the application (Running Application) and perform exploratory tests.

  3. Fix all issues (if any).

  4. Commit changes to the release branch.

     git commit -a -m "Fixes release candidate"
    

Finish release branch

  1. Make sure your local master branch is up to date.

     git fetch origin master
    
  2. Finish the release branch

     mvn jgitflow:release-finish
    
  3. Verify that:

    • Release branch is merged into local develop branch.
    • Project version is updated in local develop branch.
  4. Push changes from develop into remote repository

     git push origin develop
    
  5. Checkout master

     git checkout master
    
  6. Verify that:

    • Release branch is merged into local master branch.
    • Project version is updated in local master branch.
  7. Push changes from master into remote repository

     git push --tags origin master
    
  8. Verify that:

    • Release tag is pushed to the remote repository.
    • Build plan on master is green and new version is deployed.
  9. Delete released feature branches from remote repository.

     git push origin :feature/PROJ-01_user_registration
    

How to Document Your Professional Experiences

Have you considered what is important for a prospective employer? What is the most valuable source of information about your professional experience? How do you document that you are an expert in software engineering?

Below you can find some of my tricks:

  • Write a blog; teaching is the best learning method :–)
  • Write an article for a software magazine
  • Contribute to open source projects like MyTourbook
  • Report bugs to open source projects, send pull requests and patches
  • Manage your profile @ GitHub
  • Manage your profile @ StackOverflow
  • Manage your profile @ Goodreads
  • Manage your profile @ LinkedIn
  • Post to discussion groups, be helpful to others
  • Be active in local software groups (e.g. JUG)
  • Attend university lectures (@ Coursera), 100% free

To be honest, I have done only a few of them myself :–(