Monday, August 29, 2016

Configuring Maven Release Plugin to Skip Tests


If you are using Maven and the Maven Release Plugin, you may want to skip the execution of tests during the release plugin execution. The reasons can vary, depending on the nature of the project or how the CI pipeline is implemented.

Note that this can really improve release time, since performing a release with the Maven Release Plugin implies executing the same tests twice: once in the prepare step and again in the perform step.

To avoid executing tests in the prepare step, run:

mvn -DpreparationGoals=clean release:prepare

To avoid executing tests during the perform step, run:

mvn -Darguments="-Dmaven.test.skip=true" release:perform
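Both options can be combined in a single invocation; a sketch, assuming an otherwise standard release configuration:

```shell
mvn -DpreparationGoals=clean \
    -Darguments="-Dmaven.test.skip=true" \
    release:prepare release:perform
```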

It is important to note that I am not saying you don't need to execute tests during the release process. What I am saying is that sometimes your release process doesn't fit the standard release process of the plugin; for example, you may already be running tests before executing the plugin.

We keep learning,
Alex.
Say it ain't so, I will not go, Turn the lights off, carry me home, Keep your head still, I'll be your thrill, The night will go on, my little windmill (All The Small Things - Blink-182)

Thursday, August 18, 2016

Making Web UI testing great again with Arquillian, Docker and Selenium (part 1)


Introduction to the Problem

Most of the time, when you need to write functional tests/end-to-end tests for a web UI, you end up using Selenium, which can be considered the de-facto tool in the Java world for web UI testing. I am sure you've already used it for this kind of test.

But you've probably also faced some of the most common problems in functional testing, some specific to web UI testing and others not.

For example, one of the major problems people usually find in functional tests is the preparation of the environment. To run the tests you need to boot up a server and deploy your application, then install/start the database, maybe also the cache system, and so on for every service, leaving each developer to install all of them locally. Errors can easily slip in, like installing a version of the server different from the one used in production, reusing another local installation of the database which might not be the same version, or running the tests on a different JDK version than the one used in production.

But also there are some other problems that are more specific to Web UI testing such as browser installation or configuration of WebDriver properties.

Fixing First Problem

To fix the first problem, the easiest solution you can think of is using Docker containers, and of course Docker Compose, since it lets you define and run multi-container Docker applications. So basically you define in a docker-compose file all the servers you might need to run the tests; when you run the tests, all of them are running and, more importantly, at a fixed version. You can be sure the tests always run against a known/desired version of the servers, the same JDK, and so on, instead of depending on what is installed on the developer or CI machine.

But this approach has one problem: you need to explicitly run docker-compose up and docker-compose down. Of course you can automate this in your build script, which solves the problem in the CI environment, but if a developer wants to execute a test from the IDE, say for debugging, then they need to be aware of that fact.

And this is what Arquillian Cube solves. Arquillian Cube is an Arquillian extension that uses the docker-compose file to start and configure all the containers defined there, execute the tests, and finally shut all of them down. The good news is that, since Arquillian works with JUnit (and TestNG and Spock), you can run the tests from the IDE without worrying about starting and stopping containers, because the Docker lifecycle is managed by Arquillian Cube.

So the first part of the problem, defining the test environment, is fixed with Arquillian Cube. Let's see how to fix the second one.

Fixing Second Problem

The Selenium project provides Docker images with Selenium Standalone or a Selenium node with a browser (Firefox or Chrome) and a VNC server installed.

So it seems a perfect fit for fixing the problem of having to install browsers with a concrete version or concrete configuration locally, since you can use a Docker image with a browser already configured for the tests.

New Problems When Using Docker for Testing

And that's cool, but it has some problems. The first one is that you need to create a docker-compose file specifically for testing purposes. Although this is not a bad thing per se, it is one more file for developers to maintain, and of course you have to repeat it again and again in every project where you want to use it, defining the browser to use and the VNC client image that records the session for future inspection.

The second problem is the configuration of the WebDriver. When running WebDriver against a remote browser, you need to set the location (IP) of the browser and configure the RemoteWebDriver accordingly with the desired capabilities.

So again you have to write the WebDriver configuration in every test, again and again. You can create a factory class to be reused across projects, and that is good, but you still have one problem: some developers might use Docker Machine, so the IP would not be static and might change every time; others might be using native Docker; and some phases of the CI pipeline might run the tests against a fully remote environment, like a preproduction environment. So before executing the tests you would need to specify the IP of the Docker host manually.

And the third problem you'll get is that you need to instruct WebDriver to open a page:

webdriver.get("http://www.google.com");

The problem is that in this case the browser is inside the Docker infrastructure, so you need to use the internal IP of the server container. So you not only need to know the Docker host IP to connect the RemoteWebDriver, but also the internal IP of the server container to open the page in the remote browser using the get method. And again, this might be quite difficult to acquire in an automatic way.

But all these problems are solved when using the new integration between Arquillian Drone and Arquillian Cube.

Fixing New Problems

Arquillian Drone is an Arquillian extension that integrates Selenium WebDriver with Arquillian. This extension manages the configuration of the WebDriver, so you don't need to repeat it in all your tests, and it also manages the lifecycle of the browser.

So as you can see, this pair of extensions is a perfect fit for solving these problems: Drone takes care of the WebDriver configuration, while Cube takes care of configuring the Selenium/VNC containers correctly and starting and stopping them.

This means you don't need to worry about creating a docker-compose file for testing purposes. You only need to create the one used for deployment, and Arquillian takes care of the rest.

Example

The first thing to do is create a project with the required dependencies. For this example we are using Maven, but you can achieve the same with other build tools.
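The dependency section of the pom.xml (embedded as a gist in the original post) has roughly this shape; the artifact versions below are illustrative assumptions, so check the Arquillian site for current ones:

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.jboss.arquillian</groupId>
      <artifactId>arquillian-bom</artifactId>
      <version>1.1.11.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- standalone: tests have no @Deployment method -->
  <dependency>
    <groupId>org.jboss.arquillian.junit</groupId>
    <artifactId>arquillian-junit-standalone</artifactId>
    <scope>test</scope>
  </dependency>
  <!-- Arquillian Cube with the Drone integration -->
  <dependency>
    <groupId>org.arquillian.cube</groupId>
    <artifactId>arquillian-cube-docker-drone</artifactId>
    <version>1.0.0.Alpha9</version>
    <scope>test</scope>
  </dependency>
  <!-- Arquillian Drone + WebDriver dependency chain -->
  <dependency>
    <groupId>org.jboss.arquillian.extension</groupId>
    <artifactId>arquillian-drone-webdriver-depchain</artifactId>
    <type>pom</type>
    <scope>test</scope>
  </dependency>
</dependencies>
```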

The important thing to notice is that we use BOM definitions to set the versions of the components. Then the Arquillian Standalone dependency is added, because our test is not going to have a @Deployment method, since the deployment file is already created inside the Docker image used by the application. Finally, the Arquillian Cube and Arquillian Drone dependencies are added.

The next step is creating a file called arquillian.xml at src/test/resources, which is used to configure the extensions.
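A sketch of what this file can contain; the machineName value and the compose file name are assumptions for this example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://jboss.org/schema/arquillian
                http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

  <extension qualifier="docker">
    <!-- only needed when using Docker Machine -->
    <property name="machineName">dev</property>
    <!-- docker-compose file, relative to the project root -->
    <property name="definitionFormat">COMPOSE</property>
    <property name="dockerContainersFile">docker-compose.yml</property>
  </extension>

</arquillian>
```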

You can see that:

  • You need to specify the Docker Machine name where containers are started, in case you are using Docker Machine. If you are using native Docker, you don't need to set this attribute.
  • You need to set the location, relative to the root folder of the project, where the docker-compose file is found. Note that you could use any other file name.
You can also customize the WebDriver by configuring Arquillian Drone (https://docs.jboss.org/author/display/ARQ/Drone), but for this test the defaults are enough. Note that the default browser is now Firefox.

IMPORTANT: if you are using a native Linux Docker installation, comment out the machineName configuration line. If you are using Docker Machine and it is named something other than dev, adapt machineName in arquillian.xml accordingly.

The next step is creating the docker-compose file in the root directory.
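A sketch of the compose file; the image name is a placeholder for the Go Hello World image used in the original example:

```yaml
helloworld:
  image: lordofthejars/helloworldgo   # placeholder image name
  ports:
    - "8080:80"
```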

It is a simple compose file which defines only one container. This container exposes port 80, which is then bound to port 8080 on the host. The container starts a Go program that listens on the root context and returns Hello World in HTML format.

And finally the test:
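The test (embedded as a gist in the original post) follows this shape; a sketch, assuming the helloworld container defined in the docker-compose file:

```java
@RunWith(Arquillian.class)
public class HelloWorldTest {

    @Drone
    WebDriver webdriver; // remote browser managed by Arquillian Drone

    @CubeIp(containerName = "helloworld")
    String ip; // internal IP of the helloworld container

    @Test
    public void shouldShowHelloWorld() {
        // use the exposed port (80), not the bound port (8080)
        webdriver.get("http://" + ip + ":80");
        assertThat(webdriver.getPageSource(), containsString("Hello World"));
    }
}
```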

There are some interesting parts in this test.

  • It is a standard Arquillian test, in the sense that it uses the Arquillian runner.
  • It uses the @Drone injection mechanism provided by Arquillian Drone to enrich the test with a WebDriver configured to connect to the remote browser.
  • It uses the @CubeIp annotation to enrich the test with the internal IP of the helloworld container. Since the browser is running inside the Docker host, we can use the internal IP for this purpose. Also, it is important to use the exposed port and not the bound port.
  • Everything else is managed by Arquillian Cube, such as starting and stopping the Docker containers: helloworld in this case, but also the ones containing the browser and the VNC client. If you put a breakpoint inside the test method and then execute docker ps in a terminal, you'll see that three containers are started, not just helloworld.
  • If after running the test you inspect the target/reports/videos directory, you will find the video recording of the test.
You can also see a screencast of this in action:


So as you can see, using Arquillian Cube with Arquillian Drone makes your test and docker-compose file look really neat. The test only contains things related to the test itself, not WebDriver configuration. Also your docker-compose file stays clean: it only contains things related to the business, not to testing.

In this post you've seen how to use Arquillian Cube + Arquillian Drone. In the next one you'll see the integration with Arquillian Graphene, which simplifies the test even more, so you focus on testing and not on WebDriver calls.

We keep learning,
Alex.

When I look 'round, I only see outta one eye
As the smoke surrounds my head, the sauna (Stickin' In My Eye - NOFX)

Thursday, March 31, 2016

Continuous Stress Testing for your JAX-RS (and JavaEE) applications with Gatling + Gradle + Jenkins Pipeline

In this post I am going to explain how to use the Gatling project to write stress tests for your JAX-RS Java EE endpoints, and how to integrate them with Gradle and Jenkins Pipeline. So instead of a one-off stress test, what you get is continuous stress testing, where each commit can fire these tests automatically, providing automatic assertions and, more importantly, graphical feedback for each execution so you can monitor how the performance of your application is evolving.

The first thing to develop is the JAX-RS Java EE service:
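A sketch of the endpoint; PlanetService is a hypothetical helper class wrapping the swapi.co calls, and the paths are assumptions:

```java
@Path("planets")
public class PlanetResources {

    @EJB
    PlanetService planetService; // hypothetical client for swapi.co

    @GET
    @Path("orbitalperiod/average")
    @Produces(MediaType.TEXT_PLAIN)
    public void averageOrbitalPeriod(@Suspended final AsyncResponse response) {
        // asynchronous JAX-RS: the response is resumed when the average is ready
        double average = planetService.calculateAverageOrbitalPeriod();
        response.resume(Double.toString(average));
    }
}
```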


There is nothing special here: this is an asynchronous JAX-RS endpoint that connects to the swapi.co site, retrieves the information of all Star Wars planets, calculates the average orbital period, and finally returns it as text. For the sake of simplicity I am not going to show all the other classes, but they are quite simple, and at the end of the post I will provide the GitHub repository.

The application is packaged inside a war file and deployed to an application server: in this case, an Apache TomEE 7 running inside the official Apache TomEE Docker image.

The next step is configuring the Gradle build script with the Gatling dependencies. Since Gatling is written in Scala, you need to use the Scala plugin.
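A sketch of the relevant build.gradle fragment; the Gatling and Scala versions are illustrative:

```groovy
apply plugin: 'scala'

repositories {
    mavenCentral()
}

dependencies {
    testCompile 'org.scala-lang:scala-library:2.11.7'
    testCompile 'io.gatling.highcharts:gatling-charts-highcharts:2.1.7'
}
```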

After that, it is time to write our first stress test. It is important to notice that writing a stress test for Gatling means writing a Scala class using the provided DSL. Even for people who have never seen Scala, it is pretty intuitive to use.

So create a directory called src/test/scala and a new class called AverageOrbitalPeriodSimulation.scala with the following content:
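A sketch of the simulation; the endpoint path and the fallback URL are assumptions:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class AverageOrbitalPeriodSimulation extends Simulation {

  // base URL from environment variable or system property, with a fallback
  val baseUrl = Option(System.getenv("starwars_planets_url"))
    .orElse(Option(System.getProperty("starwars_planets_url")))
    .getOrElse("http://localhost:8080/starwars")

  val httpConf = http.baseURL(baseUrl)

  val scn = scenario("Average Orbital Period")
    .exec(http("get average orbital period")
      .get("/rest/planets/orbitalperiod/average")
      .check(status.is(200)))

  // gradually add users over 3 seconds until 10 run at the same time,
  // and require every request to succeed in less than 3 seconds
  setUp(scn.inject(rampUsers(10) over (3 seconds)))
    .protocols(httpConf)
    .assertions(global.responseTime.max.lessThan(3000),
                global.successfulRequests.percent.is(100))
}
```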

Every simulation must extend the Simulation object. This simulation takes the base URL of the service from the starwars_planets_url environment variable or system property, creates a scenario pointing to the endpoint defined in JAX-RS, and finally, over 3 seconds, gradually adds users until 10 users are running at the same time. The test passes only if all the requests succeed in less than 3 seconds.

Now we need to run this test. You will notice that this is not a JUnit test, so you cannot do Run As JUnit test. What you need to do is use a runnable class provided by Gatling, passing the simulation class as an argument. This is really easy to do with Gradle.
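A sketch of that Gradle task; the Docker host fallback logic and the task name are assumptions:

```groovy
task gatlingTest(type: JavaExec) {
    dependsOn testClasses
    description = 'Runs the Gatling stress test'
    classpath = sourceSets.test.runtimeClasspath

    // if the variable is not set, assume the application runs on the local Docker host
    environment 'starwars_planets_url',
        System.getenv('starwars_planets_url') ?: 'http://localhost:8080/starwars'

    main = 'io.gatling.app.Gatling'
    args = ['-s', 'AverageOrbitalPeriodSimulation',
            '-rf', "${buildDir}/reports/gatling-results"]
}

// run the stress test whenever ./gradlew test is executed
test.finalizedBy gatlingTest
```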

We define a Gradle task of type JavaExec, since what we want is to run a runnable class. Then we make life a bit easier for the developer by automatically detecting that, if starwars_planets_url is not set, the test is running on a machine with Docker installed, so that is probably the host to use.
Finally we override the environment variable if required, set the runnable class with the required properties, and configure Gradle to execute this task every time the test task is executed (./gradlew test).

If you run it, you might see some output messages from Gatling, and at the end a message like: please open the following file: /Users/..../stress-test/build/reports/gatling-results/averageorbitalperiodsimulation-1459413095563/index.html, and this is where you can find the report. Notice that a random number is appended to the end of the directory name; this is important, as we are going to see later. The report might look like:



At this point we have Gatling integrated with Gradle, but there is a missing piece: the continuous part of the equation. To add continuous stress testing we are going to use Jenkins with Jenkins Pipeline as the CI server, so that for each commit the stress tests are executed among other tasks, such as compiling, running unit and integration tests, or passing a code quality gate.

Historically, Jenkins jobs were configured using the web UI, requiring users to manually create jobs, fill in their details, and create the pipeline through the web browser. This also keeps the configuration of the job separated from the actual code being built.

This changed with the introduction of the Jenkins Pipeline plugin. The plugin provides a Groovy DSL that lets you implement the entire build process in a file and store it alongside the code. Jenkins 2.0 comes with this plugin by default, but if you are using Jenkins 1.X you can install it like any other plugin (https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin).

So now we can start coding our release pipeline, although for the purpose of this post only the stress part is covered. You need to create a file called Jenkinsfile (the name is not mandatory, but it is the de-facto name) at the root of your project, in this case with the following content:
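A sketch of the Jenkinsfile, using the scripted pipeline syntax of the time; the deployment URL is an assumption:

```groovy
stage 'Stress Test'

node {
    // check out the repository that contains this Jenkinsfile
    checkout scm

    // location where the application under test is deployed (assumed)
    env.starwars_planets_url = 'http://192.168.99.100:8080/starwars'

    // execute the Gradle test task, which also fires the Gatling simulation
    sh './gradlew test'
}
```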

In this case we define a new stage called Stress Test. The stage step is only informative, used for logging purposes. Next, a node is defined. A node is a Jenkins executor where the code is executed. Inside this node, the source code is checked out from the same location where the Jenkinsfile is placed, a new environment variable is set pointing to the location where the application is deployed, and finally a shell step executes the Gradle test task.

The last step in Jenkins is to create a new job of type Pipeline and set the location of the Jenkinsfile. So go to Jenkins > New Item > Pipeline and give the job a name.


Then you only need to go to the Pipeline section and configure the SCM repository where the project is stored.


And then, if you have correctly configured the hooks between Jenkins and your SCM server, this job is executed on every commit, so your stress tests run continuously.

Of course, you have probably noticed that the stress tests are executed but no reports are published in Jenkins, so you have no way to see or compare the results of different executions. For this reason you can use the publishHtml plugin to store the generated reports in Jenkins. If you don't have the plugin installed yet, install it like any other Jenkins plugin.

The publishHtml plugin allows us to publish HTML files generated by our build tool to Jenkins, so they are available to users and also categorized by build number. You need to configure the location of the directory of files to publish, and here we find the first problem: do you remember that Gatling generates a directory with a random number? We need to fix this first. You can follow different strategies, but the easiest one is simply to rename the directory to a known static name after the tests.

Open the Gradle build file and add the following content:
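A sketch of such a task; it simply renames the newest results directory to a fixed name:

```groovy
task renameGatlingDirectory << {
    def resultsDir = file("${buildDir}/reports/gatling-results")
    // the most recently modified directory is the one Gatling just generated
    def lastDirectory = resultsDir.listFiles()
        .findAll { it.isDirectory() }
        .max { it.lastModified() }
    lastDirectory?.renameTo(new File(resultsDir, 'averageorbitalperiodsimulation'))
}

// run it at the end of every test execution
test.finalizedBy renameGatlingDirectory
```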

We create a new task, executed at the end of the test task, that renames the last created directory to averageorbitalperiodsimulation.

The final step is to add the following call after the shell call in the Jenkinsfile:

publishHTML(target: [reportDir:'stress-test/build/reports/gatling-results/averageorbitalperiodsimulation', reportFiles: 'index.html', reportName: 'Gatling report', keepAll: true])

After that you will see a link on the job page that points to the report.


And that's all. Thanks to Gradle and Jenkins you can implement a continuous stress testing strategy in an easy way, just using code, the language all developers speak.

We keep learning,
Alex.
I can live whatever way I please, I move around among the seven seas, No one will miss me when the sun goes down, and in the morning I'be out of town (Movin' Cruisin' - The Fantastic Oceans)

Music: https://www.youtube.com/watch?v=Byg5Xq_pb74
Source Code: https://github.com/lordofthejars/starwars


Monday, March 07, 2016

Docker and Jenkins - Orchestrating Continuous Delivery




Last week I had the honour of speaking at the Docker Barcelona Meetup about how to use Jenkins for typical Docker tasks like creating images, publishing them, or keeping a trace of what has happened to them. Finally I introduced the new (or not so new) Jenkins Pipeline plugin, which allows you to create your continuous delivery pipeline by coding it in a Groovy DSL instead of relying on static steps, as happens when you use FreeStyle jobs. At the end I showed how to use it with Docker.

You can see the slides on SlideShare or as HTML.



We keep learning,
Alex.

Hello, it's me, I was wondering if after all these years you'd like to meet, To go over everything, They say that time's supposed to heal ya (Hello - Adele)

Music: https://www.youtube.com/watch?v=YQHsXMglC9A


Friday, January 08, 2016

Container Object pattern. A new pattern for your tests.


If you search for a description of what Page Object is, you’ll find that The Page Object Pattern gives us a common sense way to model content in a reusable and maintainable way.

And it also points out that: Within your web app's UI there are areas that your tests interact with. A Page Object simply models these as objects within the test code.
This reduces the amount of duplicated code and means that if the UI changes, the fix need only be applied in one place.

As you can see, Page Object applies to UI elements. We (the Arquillian community) have coined a new pattern following the Page Object pattern logic, called the Container Object pattern.
You can think of a Container Object as the areas of a container (for now, a Docker container) that your test might interact with. For example, some of these areas could be:
  • Getting the host IP where the container is running.
  • The bound port for a given exposed port.
  • Any parameter configured inside the configuration file (Dockerfile), like a user or password to access the service which the container exposes.
  • The definition of the containers.
A Container Object might contain an aggregation of more than one Container Object inside it. This effectively builds a relationship (link) between containers.

An example of a configuration parameter might be, in the case of running a MySQL database in a container, the user and password to access the database.
Notice that nothing prevents you from generating the correct URL for accessing the service from the test, or from executing commands against the container, like retrieving an internal file.

And of course, as Page Object does, Container Object gives you a way to build model content that can be reused across several projects.

Before looking at how this pattern is implemented in Arquillian Cube, let's go through an example:

Suppose all of your applications need to send a file to an FTP server. To write an integration/component test you might need an FTP server to send the file to, and a way to check that the file was correctly sent.
One way to do this is using Docker to start an FTP server just before executing the test, then execute the test against this Docker container, check that the file is there before stopping the container, and finally stop the container.

So all these operations involving the FTP server and the container could be grouped inside a Container Object. This Container Object might contain information about:
  • Which image is used
  • The IP and bound port of the host where the FTP server is running
  • The username and password to access the FTP server
  • Methods for asserting the existence of a file
Then, from the point of view of the test, it only communicates with this object instead of hard coding all the information inside the test.
Again, as in Page Object, any change in the container only affects the Container Object and not the test itself.

Now let’s see how Arquillian Cube implements Container Object pattern with a very simple example:

Arquillian Cube and Container Object

Let’s see a simple example on how you can implement a Container Object in Cube. Suppose you want to create a container object that encapsulates a ping pong server running inside Docker.
The Container Object will be like a simple POJO with special annotations:

In the previous example you must pay attention to the following lines:
  1. @Cube annotation configures Container Object.
  2. A Container Object can be enriched with Arquillian enrichers.
  3. Bounded port is injected for given exposed port.
  4. Container Object hides how to connect to PingPong server.
The @Cube annotation is used to configure this Container Object. Initially you set that the started container will be named pingpong, as well as the port binding information for the container instance, in this case 5000→8080/tcp.
Notice that this can be an array to set more than one port binding definition.

The next annotation is @CubeDockerFile, which configures how the container is created, in this case using a Dockerfile located at the default classpath location. The default location is package+classname, so in the previous case the Dockerfile should be placed in the org/superbiz/containerobject/PingPongContainer directory.
Of course you can set any other classpath location by passing it as the value of the annotation. The CubeDockerFile annotation sets the location where the Dockerfile is found, not the file itself.
Also, this location should be reachable from the ClassLoader, which means it should be on the classpath in order to be found.

Any Cube can be enriched with any client-side enricher, in this case with the @HostIp enricher, but it could be enriched with a DockerClient using @ArquillianResource as well.

Finally, @HostPort is used to translate the exposed port to the bound port.
So in this example the port value will be 5000. You are going to learn shortly why this annotation is important.

And then you can start using this Container Object in your test:
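A sketch of the consuming test; the ping helper and the assertion are illustrative:

```java
@RunWith(Arquillian.class)
public class PingPongTest {

    @Cube
    PingPongContainer pingPongContainer; // started, created and injected by Arquillian Cube

    @Test
    public void shouldReturnPongWhenPinged() throws IOException {
        URL url = pingPongContainer.getConnectionUrl();
        String response = ping(url); // hypothetical helper that sends a ping request
        assertThat(response, containsString("pong"));
    }
}
```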

The most important thing here is that you need to define the Container Object as a field of the class and annotate it with @Cube.

It is very important to annotate the field with @Cube so that, before Arquillian runs the test, it can detect that it needs to start a new Cube (Docker container), create the Container Object, and inject it into the test.

Notice that this annotation is exactly the same as the one used when you defined the Container Object.
It works this way because you can override any property of the Container Object from the test side. This is why the @HostPort annotation is important: since the port can be changed in the test definition, you need a way to inject the correct port inside the Container Object.

In this post I have introduced the Container Object pattern and how it can be used in Arquillian Cube. But this is only a small taste; you can read more about the Arquillian Cube and Container Object integration at https://github.com/arquillian/arquillian-cube#arquillian-cube-and-container-object

Also, running examples can be found at https://github.com/arquillian/arquillian-cube/tree/master/docker/ftest-docker-containerobject

We keep learning,
Alex.

It's time to see what I can do, To test the limits and break through, No right, no wrong, no rules for me, I'm free! (Let It Go - Idina Menzel) 

Music: https://www.youtube.com/watch?v=moSFlvxnbgk

Wednesday, November 25, 2015

Java EE, Gradle and Integration Tests





In recent years Apache Maven has become the de-facto build tool for Java and Java EE projects, but for the last two years Gradle has been gaining more and more users. Following my previous post (http://www.lordofthejars.com/2015/10/gradle-and-java-ee.html), in this post you are going to see how to use Gradle for writing integration tests for Java EE using Arquillian.

Gradle is a build automation tool like Ant or Maven, but it introduces a Groovy-based DSL instead of XML, so as you might expect the build file is a Groovy file. You can read how to install Gradle in my previous post (http://www.lordofthejars.com/2015/10/gradle-and-java-ee.html).

To write integration tests for Java EE, the de-facto tool is Arquillian. If you want to know what Arquillian is, you can read the Getting Started guide (http://arquillian.org/guides/getting_started/) or the book Arquillian In Action.

To start using Arquillian, you need to add the Arquillian dependencies, which come in the form of a BOM. Gradle does not support BOM artifacts out of the box, but you can use the dependency-management-plugin Gradle plugin to get support for defining BOMs.

Moreover, Gradle offers the possibility of adding more test source sets apart from the default one, which, as in Maven, is src/test/java and src/test/resources. The idea is that you can define a new test source set where you put all the integration tests. With this approach each kind of test is clearly separated into a different source set. You can write Groovy code in the Gradle script to achieve this, or you can just use the gradle-testsets-plugin, which is the easiest way to proceed.

So to register both plugins (dependency management and test sets) you need to add the following elements to the build.gradle script file:

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath "io.spring.gradle:dependency-management-plugin:0.5.3.RELEASE"
        classpath 'org.unbroken-dome.gradle-plugins:gradle-testsets-plugin:1.2.0'
    }
}

apply plugin: "io.spring.dependency-management"
apply plugin: 'org.unbroken-dome.test-sets'

Now it is time to add the Arquillian dependencies. You need to add the Arquillian BOM and two dependencies: one that sets that we are going to use Arquillian with JUnit, and another one that sets the Apache TomEE application server as the target for deploying the application during test runs.

A build.gradle with the Arquillian, TomEE and Java EE dependencies might look like:

dependencyManagement {
    imports {
        mavenBom 'org.arquillian:arquillian-universe:1.0.0.Alpha1'
    }
}

dependencies {
    testCompile group: 'org.arquillian.universe', name: 'arquillian-junit', ext: 'pom'
    testCompile group: 'org.apache.openejb', name: 'arquillian-tomee-embedded', version:'1.7.2'
    testCompile group: 'junit', name: 'junit', version:'4.12'
    providedCompile group: 'org.apache.openejb',name: 'javaee-api', version:'6.0-6'


}

Finally you can configure the new integration test folder as a source set by adding the following section:

testSets {
    integrationTests
}

Where integrationTests is the name of the test set. testSets automatically creates and configures the following elements:
  • src/integrationTests/java and src/integrationTests/resources as valid source set folders.
  • A dependency configuration named integrationTestsCompile, which extends from testCompile, and another one called integrationTestsRuntime, which extends from testRuntime.
  • A Test task named integrationTests which runs the tests in the set.
  • A Jar task named integrationTestsJar which packages the tests.
Notice that you can change integrationTests to any other value, like intTests, and Gradle would configure the previous elements automatically based on the value set inside testSets, such as src/intTests/java, or the test task being called intTests.

The next step is creating the integration tests using Arquillian inside the integrationTests test set. For example, an Arquillian test validating that you can POST a color to a REST API and that it is returned when the GET method is called would look like:
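A sketch of such a test; ColorResource is a hypothetical class under test, and the CXF WebClient usage is an assumption based on the TomEE embedded adapter:

```java
@RunWith(Arquillian.class)
public class ColorResourceIntegrationTest {

    @Deployment(testable = false)
    public static WebArchive createDeployment() {
        // ColorResource is the (hypothetical) JAX-RS resource under test
        return ShrinkWrap.create(WebArchive.class)
                .addClasses(ColorResource.class);
    }

    @ArquillianResource
    URL base; // URL where TomEE deployed the archive

    @Test
    public void shouldReturnPostedColor() {
        WebClient.create(base.toExternalForm()).path("color").post("red");
        String colors = WebClient.create(base.toExternalForm())
                                 .path("color").get(String.class);
        assertThat(colors, containsString("red"));
    }
}
```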

You can now run the integration tests by simply executing gradlew integrationTests.

You'll notice that if you run gradlew build, the integration test task is not run. This happens because the task is not registered within the default build lifecycle. If you want the integrationTests task to be executed automatically during build, you need to add the following lines:

check.dependsOn integrationTests
integrationTests.mustRunAfter test

These lines ensure that the integration tests are run before the check task, that the check task fails the build if there are failing integration tests, and that the unit tests are run before the integration tests. This guarantees that unit tests are run even if the integration tests fail.

So now when you run gradlew build, the integration tests are executed as well.

And finally, what happens if you are running the JaCoCo plugin for code coverage? You will get two JaCoCo files: one for the unit test execution and another for the integrationTests execution. But you probably want to see an aggregated code coverage report of both runs in one file, so you can inspect the code coverage of the application after the execution of all kinds of tests. To achieve this, you only need to add the following task:

task jacocoRootTestReport(type: JacocoReport) {
    sourceSets sourceSets.main
    executionData files([
            "$buildDir/jacoco/test.exec",
            "$buildDir/jacoco/integrationTests.exec"
    ])
    reports {
        xml.enabled false
        csv.enabled false
    }    
}

In this case you create a task which aggregates the coverage results of the test.exec file (which comes from the unit tests) and the integrationTests.exec file, which comes from the integration tests.

And to generate the report you need to explicitly call the jacocoRootTestReport task when you run Gradle.

So it is that simple to write a Gradle script for running Java EE tests and, more importantly, the final script file is very compact and readable, without being tied to any static convention at all.

We keep learning,
Alex.
There must be more to life than this, There must be more to life than this, How do we cope in a world without love (There Must Be More To Life Than This - Freddie Mercury - Michael Jackson)

Wednesday, October 07, 2015

Gradle and Java EE


In recent years Apache Maven has become the de-facto build tool for Java and Java EE projects, but for the last two years Gradle has been gaining more and more users. In this post you are going to see how to use Gradle for Java EE projects.

Gradle is a build automation tool like Ant or Maven, but it introduces a Groovy-based DSL instead of XML, so as you might expect the build file is a Groovy file.

There are different ways to install Gradle, but for me the best way is using the sdkman tool. To install sdkman simply run:

$ curl -s get.sdkman.io | bash

After that you can init sdkman by running:

$ source "$HOME/.sdkman/bin/sdkman-init.sh"

With sdkman installed, installing Gradle is as easy as running:

$ sdk install gradle

Now you can start creating the build script. The first thing to do is creating a settings.gradle file where, in this case, we only set the name of the project.
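As a sketch, such a settings.gradle can be as short as a single line (the project name here is just an example, not necessarily the one from the original project):

```groovy
// settings.gradle: only sets the project name in this example
rootProject.name = 'javaee7-sample'
```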

This file is also used in the case of multi-module projects.

The last file you need is the one called build.gradle, which manages the whole build process.
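Reconstructed from the description that follows, the build file might look roughly like this sketch (the group, version, description and javaee-api coordinates are assumptions for illustration):

```groovy
// Sketch of a Java EE war build; coordinates and versions are examples.
apply plugin: 'war'

group = 'org.superbiz'
version = '1.0.0'
description = 'Java EE sample application'

sourceCompatibility = 1.7
targetCompatibility = 1.7

repositories {
    mavenCentral()
}

dependencies {
    providedCompile 'javax:javaee-api:7.0'
}
```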


Notice that the first line indicates that what you are going to build is a war project. Then project properties are set, like the group, version, description or Java compilation options. Finally only one dependency is required, and with provided scope, since the implementation is provided by the application server.

Note that the providedCompile scope is only available if you are using the war plugin. If you are using another plugin, like java, then you will need to implement this behaviour yourself (at least at the time of writing this post, with Gradle 2.7).

And that's all: pretty compact, only 16 lines and no verbose information. Of course, now you will need to add more dependencies, like JUnit or Arquillian with testCompile scope, or any other extra library required by your code, like the well-known apache-commons dependency. But this is a story for another post.

We keep learning,
Alex.

Sun's in your eyes the heat is in your hair. They seem to hate you. Because you're there.  (Wonderful Life - Black)


martes, agosto 11, 2015

Arquillian Cube: Write Tests Once, Run Them Everywhere



Arquillian Cube is an Arquillian extension that can be used to manage Docker containers from Arquillian. Basically it starts all the Docker containers required by your tests, deploys the application (or micro-application), which can be Java based or not, runs the tests and finally stops all of them.

Thanks to Arquillian Cube you can run your integration tests from your local IDE in a situation similar to the production environment, since in both cases everything is running inside Docker.

But you can go one step further and instruct Arquillian Cube not to start Docker container instances locally (or inside your local boot2docker), but to start them in external locations such as your preproduction infrastructure.

Thanks to Digital Ocean, which has provided us with a free account with some credit, we can show you in the next screencast how, by simply changing one attribute (which could be automated with maven-resources-plugin or just by using system properties), we can run the same test against a local Docker instance or remotely against the Digital Ocean infrastructure.

You can read more about Arquillian and Arquillian Cube in the book Arquillian in Action (www.manning.com/sotobueno).


We keep learning,
Alex.
You’re a shooting star I see, A vision of ecstasy, When you hold me, I’m alive, We’re like diamonds in the sky (Diamonds - Rihanna)
Music: https://www.youtube.com/watch?v=lWA2pjMjpBs


lunes, agosto 03, 2015

Arquillian in Action goes MEAP


Currently I am co-writing the Arquillian in Action book with my colleague Jason Porter. Last week the book entered the MEAP stage.

Arquillian in Action teaches you how to build in-container tests using Arquillian. This practical hands-on guide begins by showing you how to find and squash your first bug. You'll move on to building persistence tests, and then discover how to write tests for front-end and RESTful services. Using carefully designed examples, the book shows you how to write integration tests for Java EE, Spring, and Docker. Along the way, you'll also learn how to build functional, infrastructure, performance, and security tests.

You can visit http://www.manning.com/sotobueno, where you can read the first chapter for free, or buy the book and start reading the first three chapters.

It is time to start zapping all these bugs with Arquillian.

We keep learning,
Alex.

Algo lo que me invade, todo viene de dentro, Nunca lo que me sacie, siempre quiero, lobo hambriento. (Por la boca vive el pez - Fito & Fitipaldis)

Music:  https://www.youtube.com/watch?v=iUXs4Nt3Y7Y

miércoles, abril 01, 2015

Apache Mesos + Marathon and Java EE


Apache Mesos is an open-source cluster manager that provides efficient resource isolation and sharing across distributed applications or frameworks.

Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively. It uses dynamic allocation of applications inside machines.

In summary, Apache Mesos is composed of masters and slaves. Masters are in charge of distributing work across the slaves and knowing the state of each slave. You may have more than one master for fault tolerance.

And then we have the slaves, which are responsible for executing the applications. Slaves isolate executors and tasks (applications) via containers (cgroups).

So each slave offers its resources, and Apache Mesos is in charge of scheduling which slave will execute each task. Note that a slave may execute more than one task if it has enough resources for them.



For example, let's say that a slave has 4 CPUs (to simplify, I am not going to take other parameters into account); then it could execute one task of 4 CPUs, two tasks of 2 CPUs, and so on.

But Apache Mesos only manages resources; to build a PaaS we need something more, like service discovery or scaling features. And this is what Marathon provides.

Marathon is a framework that runs on top of Apache Mesos and offers:

  • Running Linux binary
  • Cluster-wide process supervisor
  • Service Discovery and load balancing (HAProxy)
  • Automated software and hardware failure handling
  • Deployment and scaling
  • REST friendly

But one of the main advantages of using Marathon is that it simplifies and automates all those common tasks.

So the main task of Marathon is deploying an application to different slaves, so that if one slave fails there are other slaves to serve incoming requests. Moreover, Marathon will take care of reallocating the application to another slave, so the number of slaves per application is kept constant.



Installing Apache Mesos and Marathon on a developer machine is easy, as long as you have VirtualBox, Vagrant and git installed.

First, clone the playa-mesos repo:


And simply run the vagrant up command from its directory:

cd playa-mesos
vagrant up

The first time it will take a while because it needs to download several components.

After that you can check that everything is correctly installed by connecting to the Mesos and Marathon web consoles at http://10.141.141.10:5050 and http://10.141.141.10:8080.

The next step is installing HAProxy. Although it is not a strict requirement, HAProxy is "required" if you want to do service discovery and load balancing.

Run vagrant ssh.

Install HAProxy

sudo apt-get install haproxy

Download haproxy-marathon-bridge script:

chmod 755 haproxy-marathon-bridge

./haproxy-marathon-bridge localhost:8080 > haproxy.cfg
haproxy -f haproxy.cfg -p haproxy.pid -sf $(cat haproxy.pid)

And this configures HAProxy. To avoid running these commands manually every time the topology changes, you can run:

./haproxy-marathon-bridge install_haproxy_system localhost:8080

which installs the script itself, HAProxy and a cronjob that once a minute pings one of the Marathon servers specified and refreshes HAProxy if anything has changed.

And that's all: we now have Apache Mesos, Marathon and HAProxy installed. Now it is time to deploy the Java EE application server. In this case we are going to use Apache TomEE.

The only thing we need to do is send a JSON document as a POST to http://10.141.141.10:8080/v2/apps

{
  "id": "projectdemo",
  "cmd": "cd apache-tomee-plus* && sed \"s/8080/$PORT/g\" < ./conf/server.xml > ./conf/server-mesos.xml && ./bin/catalina.sh run -config ./conf/server-mesos.xml",
  "mem": 256,
  "cpus": 0.5,
  "instances": 1,
  "ports":[10000],
  "constraints": [
    ["hostname", "UNIQUE"]
  ],
  "uris": [
  ]
}

This JSON document will make Marathon deploy the application on one node. Let's explain each attribute:

id: the id of the application; not much secret here.

cmd: the command that will be executed when the node is chosen and ready. In this case note that we are creating a server-mesos.xml file, which is a modified version of the server.xml file with port 8080 replaced by the $PORT variable; for now this is enough. Finally it starts TomEE with the server-mesos.xml configuration file.

mem: the memory (in MB) that the application requires on the node.

cpus: the CPU resources that the application requires on the node.

instances: the number of instances of this application we want to run. In this case only one, because we are running locally.

ports: the port that groups all application instances. Basically this port is used by HAProxy to route to the correct instance. We are going to explain this in depth in a later paragraph.

constraints: constraints control where apps run, to allow optimizing for fault tolerance or locality. In this case we are setting that each instance of the application should run on a different slave. With this approach you can also avoid port collisions.

uris: the URIs to download before executing the cmd part. If a known compression format is detected, the artifact is automatically uncompressed. For this reason you can run a cd command in cmd directly, without having to uncompress it manually.

So let me explain what's happening here or what Mesosphere does:

First of all it reads the JSON document and inspects which slave has a node that can run this service. In this case it only needs to find one (instances = 1).

Once a slave is found, the uris entries are downloaded and uncompressed, and then the commands specified in the cmd attribute are executed in the current directory. And that's all.
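Posting the document can also be scripted; here is a minimal sketch using plain HttpURLConnection (the Marathon address is the playa-mesos box used above, and the deploy call is commented out so the snippet does not require a running cluster):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch: submit an app definition to Marathon's REST API (POST /v2/apps).
public class MarathonDeploy {

    static int deploy(String marathonUrl, String appJson) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(marathonUrl + "/v2/apps").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(appJson.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode(); // Marathon answers 201 Created on success
    }

    public static void main(String[] args) throws Exception {
        // Abbreviated app definition; the full document is shown above.
        String appJson = "{\"id\": \"projectdemo\", \"mem\": 256, "
                + "\"cpus\": 0.5, \"instances\": 1, \"ports\": [10000]}";
        // int status = deploy("http://10.141.141.10:8080", appJson); // needs a live cluster
        System.out.println(appJson);
    }
}
```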

But wait, what is this ports and $PORT thing?

$PORT is a random port that Marathon assigns to each node to communicate with. This port is used to ensure that no two applications run by Marathon have overlapping port assignments.

It is also used for service discovery and load balancing, by running a TCP proxy on each host in the cluster that transparently forwards a static port on localhost to the hosts that are running the app. That way, clients simply connect to that port, and the implementation details of discovery are completely abstracted away.

So the first thing we need to do is modify the configuration of TomEE to start on the random port assigned by Marathon; for this reason we generate the new server-mesos.xml file with the listening port set to $PORT.

So if the port is random, how can a client connect if it doesn't know which port the application was started on? This is the purpose of the ports attribute. With this attribute we are saying that when we connect to port 10000 we want to reach the deployed application, on any slave and independently of the number of instances.

Yes, it may sound a bit complicated, so let me explain with a simple example:

Let's say I have the same example as before but with two instances (instances = 2). Both TomEE instances will be started on two different slaves (so on different nodes) and on different ports, say 31456 and 31457. So how can we connect to them?

Easy. You can use the Marathon IP with the random port (http://10.141.141.10:31456/), which takes you to that specific server, or you can use the globally defined port (http://10.141.141.10:10000/), in which case HAProxy will route you to one of the instances (depending on the load-balancing rules).

Note that this has a big implication on how applications can communicate with each other inside Marathon: if applications deployed in Marathon need to talk to each other, they only need to know that global port, because the host can be set to localhost and HAProxy will resolve it. So from within a Marathon application we can reach TomEE simply by using http://localhost:10000/, and HAProxy will route the request to a host and port where an instance of the service is actually running.

In the next picture you can see the Marathon dashboard and how the application is deployed. Note that you can see the IP and port of the deployed application. You can access it by clicking on the link, or by using the Marathon IP (the same as in that link) with port 10000. Remember that HAProxy is refreshed every minute, so if the application responds on the random port but not yet on port 10000, you probably need to wait a bit until the HAProxy configuration is refreshed.


And that's all, as you may see Apache Mesos and Marathon is not as hard as you may expect at first.

Also note that this is a "Hello World" post about Mesos and Java EE; Mesos and Mesosphere offer much more than this, like health checks of services, running Docker containers, artifact storage or defining dependencies. But I have found that running this simple example helped me a lot in clarifying the concepts of Mesosphere, and it is a good starting point for more complex scenarios.

We keep learning,
Alex.
Dilegua, o notte!, Tramontate, stelle!, Tramontate, stelle!, All'alba vincerò!, Vincerò! Vincerò! (Nessun dorma - Giacomo Puccini)

miércoles, marzo 04, 2015

Restful Web Service Guide


Nowadays more and more projects are developed using the tuple of AngularJS in the frontend + Java EE (or Spring Framework) in the backend. The communication between AngularJS and Java EE is done using RESTful web services.

In my company this tuple is used in every project, and we have several teams working on different projects. So it seemed clear that it would make sense for all RESTful web services to be designed in a similar way. For this reason we (the architecture team) decided to create a RESTful web service guide on which all teams could base their API designs. In this document we cover basic REST concepts, but also how to internationalize a REST API, pagination, security with JSON Web Tokens, and HTTP error codes.

This guide has been released under a CC license and is published on GitHub. You can read it freely, send a PR with any improvement, or open an issue to discuss anything.


We keep learning,
Alex.

It might seem crazy what I'm about to say, Sunshine she's here, you can take away, I’m a hot air balloon, I could go to space ,With the air, like I don't care baby by the way (Happy - Pharrell Williams)

Music: https://www.youtube.com/watch?v=y6Sxv-sUYtM

jueves, enero 22, 2015

Self-Signed Certificate for Apache TomEE (and Tomcat)



Probably in most of your Java EE projects part of or the whole system will have SSL support (https), so browsers and servers can communicate over a secured connection. This means that the data being sent is encrypted, transmitted and finally decrypted before being processed.

The problem is that sometimes the official keystore is only available for the production environment and cannot be used on development/testing machines. One possible workaround is for one member of the team to create a non-official keystore and share it with all members, so everyone can test locally using https, and the same for the testing/QA environments.

But with this approach you run into one problem: when you run the application you will receive a warning/error message saying the certificate is untrusted. You can live with this, but we can do better and avoid this situation by creating a self-signed SSL certificate.

In this post we are going to see how to create and enable SSL in Apache TomEE (and Tomcat) with a self-signed certificate.

The first thing to do is to install openssl. This step depends on your OS; in my case I am running Ubuntu 14.04.

Then we need to generate a 1024-bit RSA private key, encrypted with the Triple-DES algorithm and stored in PEM format. I am going to use the {userhome}/certs directory for all generated resources, but it can be changed without any problem.

Generate Private Key

openssl genrsa -des3 -out server.key 1024

Here we must introduce a password, for this example I am going to use apachetomee (please don't do that in production).

Generate CSR

The next step is to generate a CSR (Certificate Signing Request). Ideally this file would be generated and sent to a Certificate Authority such as Thawte or Verisign, who would verify the identity. But in our case we are going to self-sign the CSR with the previous private key.

openssl req -new -key server.key -out server.csr

One of the prompts will be for "Common Name (e.g. server FQDN or YOUR name)". It is important that this field is filled in with the fully qualified domain name of the server to be protected by SSL. For a development machine you can set "localhost".

Now that we have the private key and the CSR, we are ready to generate an X.509 self-signed certificate, valid for one year, by running the next command:

Generate a Self-Signed Certificate

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

To install the certificate inside Apache TomEE (and Tomcat) we need a keystore. The keystore is generated using the keytool command, and to use this tool the certificate should be in PKCS12 format. For this reason we are going to use openssl to transform the certificate to PKCS12 by running:

Prepare for Apache TomEE

openssl pkcs12 -export -in server.crt -inkey server.key -out server.p12 -name test_server -caname root_ca

We are almost done: now we only need to create the keystore. I have used the same password to protect the keystore as for all the other resources, which is apachetomee.

keytool -importkeystore -destkeystore keystore.jks -srckeystore server.p12 -srcstoretype PKCS12 -srcalias test_server -destalias test_server

And now we have a keystore.jks file created at {userhome}/certs.

Installing Keystore into Apache TomEE

The process of installing a keystore into Apache TomEE (and Tomcat) is described at http://tomcat.apache.org/tomcat-8.0-doc/ssl-howto.html. But in summary the only thing to do is open ${TOMEE_HOME}/conf/server.xml and define the SSL connector.

<Service name="Catalina">
  <Connector port="8443" protocol="HTTP/1.1"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               keystoreFile="${user.home}/certs/keystore.jks" keystorePass="apachetomee"
               clientAuth="false" sslProtocol="TLS" />
</Service>

Note that you need to set the keystore location (in my case {userhome}/certs/keystore.jks) and the password used to open the keystore, which is apachetomee.

Preparing the Browser

Before starting the server we need to add server.crt as a valid authority in the browser.

In Firefox: Firefox Preferences -> Advanced -> View Certificates -> Authorities (tab) and then import the server.crt file.

In Chrome: Settings -> HTTPS/SSL -> Manage Certificates ... -> Authorities (tab) and then import the server.crt file.

And now you are ready to start Apache TomEE (or Tomcat) and you can navigate to any deployed application but using https and port 8443.

And that's all. Now we can run tests (with Selenium, for example) without worrying about untrusted certificate warnings.
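The browser import above covers Selenium runs; for programmatic HTTP clients in tests, a common trick is building an SSLContext that accepts the self-signed certificate. A minimal sketch (trust-all managers are for local testing only, never production):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

// Sketch: an SSLContext whose trust manager accepts any certificate,
// so JVM-based test clients can call the https endpoint without errors.
// WARNING: only for local testing against our own self-signed server.
public class TrustSelfSigned {

    public static SSLContext trustAllContext() throws Exception {
        TrustManager[] trustAll = { new X509TrustManager() {
            public void checkClientTrusted(X509Certificate[] chain, String authType) {}
            public void checkServerTrusted(X509Certificate[] chain, String authType) {}
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
        }};
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustAll, new SecureRandom());
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        SSLContext ctx = trustAllContext();
        // HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());
        System.out.println(ctx.getProtocol());
    }
}
```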

We keep learning,
Alex.

Dog goes woof, Cat goes meow, Bird goes tweet and mouse goes squeek (What Does the Fox Say - Ylvis)

Music: https://www.youtube.com/watch?v=jofNR_WkoCE

lunes, enero 12, 2015

Apache TomEE + JMS. It has never been so easy.


I remember the old days of J2EE (1.3 and 1.4), when it was incredibly hard to start a project using JMS. You needed to install a JMS broker, create topics or queues, and finally fight your own battle with server configuration files and JNDI.

Thanks to Java EE 6 and beyond, using JMS is really easy and simple. And with Apache TomEE it is even simpler to get started. In this post we are going to see how to create and test a simple application which sends and receives messages to/from a JMS queue with Apache TomEE.

Apache TomEE uses Apache ActiveMQ as its JMS provider. In this example you won't need to download or install anything, because all elements are provided as Maven dependencies. But if you plan to use (and you should) the Apache TomEE server, you will need to download Apache TomEE plus or Apache TomEE plume. You can read more about Apache TomEE flavors at http://tomee.apache.org/comparison.html.

Dependencies

The first thing to do is add javaee-api as a provided dependency, and junit and openejb-core as test dependencies. Note that the openejb-core dependency is added to have a runtime for executing the tests; we will look at it in depth in the test section.


Business Code

The next step is creating the business code responsible for sending messages to and receiving messages from the JMS queue. For this example we are going to use a stateless EJB.

The most important part of the Messages class is how easy it is to inject the ConnectionFactory and Queue instances into the code. You only need to use the @Resource annotation, and the container will do the rest for you. Finally, note that because we have not used the name or lookup attributes, the name of the field is used as the resource name.
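Based on that description, the Messages bean could be sketched roughly as follows (method names are assumptions; the field names match the auto-created resource names mentioned in the test section):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// Sketch of the stateless bean described above; names are illustrative.
@Stateless
public class Messages {

    @Resource // no name/lookup attribute: the field name becomes the resource name
    private ConnectionFactory connectionFactory;

    @Resource
    private Queue chatQueue;

    public void sendMessage(String text) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(chatQueue);
            producer.send(session.createTextMessage(text));
        } finally {
            connection.close();
        }
    }

    public String receiveMessage() throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(chatQueue);
            TextMessage message = (TextMessage) consumer.receive(1000);
            return message == null ? null : message.getText();
        } finally {
            connection.close();
        }
    }
}
```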


Test

And finally we can write a test that asserts that messages are sent and received using the JMS queue. We could use, for example, Arquillian to write the test, but in this case, for simplicity, we are going to use an embedded OpenEJB instance to deploy the JMS example and run the tests.

Note that the test is really simple and concise: you only need to programmatically start an EJB container and bind the current test class inside it, so we can use Java EE annotations in the test. The rest is a simple JUnit test.
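A sketch of such a test, using the embedded EJB container bootstrap (class and method names are assumptions; requires openejb-core on the test classpath):

```java
import javax.ejb.EJB;
import javax.ejb.embeddable.EJBContainer;

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

// Sketch: boots an embedded EJB container (OpenEJB) and binds the test
// instance into it so @EJB injection works inside the test class.
public class MessagesTest {

    @EJB
    private Messages messages; // the bean sketched in the previous section

    private EJBContainer container;

    @Before
    public void startContainer() throws Exception {
        container = EJBContainer.createEJBContainer();
        container.getContext().bind("inject", this);
    }

    @After
    public void stopContainer() {
        container.close();
    }

    @Test
    public void shouldSendAndReceiveAMessage() throws Exception {
        messages.sendMessage("Hello World!");
        Assert.assertEquals("Hello World!", messages.receiveMessage());
    }
}
```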

And if you run the test you will get a green bullet. But wait: probably you are wondering where the JMS broker and its configuration are. Where is the definition of the ConnectionFactory and the JMS queue? And this is where OpenEJB (and Apache TomEE) comes into play.

In this case OpenEJB (and Apache TomEE) will use Apache ActiveMQ in embedded mode, so you don't need to install Apache ActiveMQ on your computer to run the tests. Moreover, Apache TomEE will create all required resources for you. For example it will create a ConnectionFactory and a Queue with default parameters and the expected names (org.superbiz.Messages/connectionFactory for the ConnectionFactory and org.superbiz.Messages/chatQueue for the Queue), so you don't need to worry about configuring JMS during the test phase. Apache TomEE is smart enough to create and configure them for you.

You can inspect the console output and verify that resources are auto-created by looking for the following log message: INFO: Auto-creating a Resource



And that's all: really simple and easy to get started with JMS, thanks to Java EE and TomEE. In the next post we are going to see how to do the same but using a Message Driven Bean (MDB).

We keep learning,
Alex.
No se lo qué hacer para que me hagas caso, lo he intentado todo menos bailar ballet, ya va siendo hora de mandarte a paseo, si consigo olvidarte tal vez pueda vivir. (Voy A Acabar Borracho - Platero y Tú)
Music: https://www.youtube.com/watch?v=aK6oIQikjZU