Monday, April 10, 2017

Arquillian Persistence with MongoDB and Docker


In this screencast you are going to see how you can use Arquillian Persistence Extension (https://github.com/arquillian/arquillian-extension-persistence/tree/2.0.0) and Docker to write persistence tests for MongoDB.

To manage the Docker lifecycle, I have used Arquillian Cube (http://arquillian.org/arquillian-cube/), and for populating data into MongoDB, the fairly new integration between Arquillian Persistence Extension (aka APE) and NoSQLUnit (https://github.com/lordofthejars/nosql-unit).
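
While the details are shown in the screencast, a test using this integration might look roughly like the following sketch. The @UsingDataSet annotation comes from NoSQLUnit; the test class, dataset file and exact APE wiring are illustrative assumptions, not the exact code from the screencast:

import com.lordofthejars.nosqlunit.annotation.UsingDataSet;
import org.jboss.arquillian.junit.Arquillian;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class BookRepositoryTest {

    // Before the test runs, the MongoDB instance started by Docker/Cube
    // is populated with the documents defined in books.json.
    @Test
    @UsingDataSet(locations = "books.json")
    public void should_find_all_books() {
        // query MongoDB through your repository/driver and assert on the result
    }
}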



We keep learning,
Alex.

Ridi, Pagliaccio, Sul tuo amore infranto! Ridi del duol, che t'avvelena il cor! (Vesti la giubba (Pagliacci) - Leoncavallo)
Music: https://www.youtube.com/watch?v=Z0PMq4XGtZ4


Friday, March 24, 2017

3 ways of using Docker Containers for Testing in Arquillian


Arquillian Cube is an Arquillian extension that can be used to manage Docker containers from Arquillian.

With this extension you can start Docker containers, execute Arquillian tests against them, and shut the containers down afterwards.

The first thing you need to do is add the Arquillian Cube dependency. This can be done by using the Arquillian Universe approach:
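
For example, importing the Arquillian Universe BOM and then adding the JUnit and Cube Docker members might look like this in pom.xml (a sketch; the version numbers are illustrative):

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.arquillian</groupId>
      <artifactId>arquillian-universe</artifactId>
      <version>1.0.0.Alpha4</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- Arquillian JUnit integration -->
  <dependency>
    <groupId>org.arquillian.universe</groupId>
    <artifactId>arquillian-junit</artifactId>
    <type>pom</type>
    <scope>test</scope>
  </dependency>
  <!-- Arquillian Cube with Docker support -->
  <dependency>
    <groupId>org.arquillian.universe</groupId>
    <artifactId>arquillian-cube-docker</artifactId>
    <type>pom</type>
    <scope>test</scope>
  </dependency>
</dependencies>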


Then you have three ways of defining the containers you want to start.

The first approach is using the docker-compose format. You only need to define the docker-compose file required for your tests, and Arquillian Cube automatically reads it, starts all containers, executes the tests, and finally stops and removes the containers.
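
A minimal docker-compose file for such a test might look like this (service name, image and network are illustrative):

version: "2"
services:
  pingpong:
    image: jonmorehouse/ping-pong
    ports:
      - "8080:8080"
    networks:
      - front
networks:
  front:
    driver: bridge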

In the previous example a docker-compose file version 2 is defined (it can be stored in the root of the project, in src/{main, test}/docker or in src/{main, test}/resources, and Arquillian Cube will pick it up automatically). Arquillian Cube creates the defined network, starts the container of the defined service, executes the given test, and finally stops and removes both network and container. The key point here is that this happens automatically; you don't need to do anything manually.

The second approach is using the Container Object pattern. You can think of a Container Object as a mechanism to encapsulate areas (data and actions) related to a container that your test might interact with. In this case no docker-compose file is required.

In this case you are using annotations to define what the container should look like. Also, since you are using Java objects, you can add methods that encapsulate operations with the container itself, like in this object, where an operation that checks whether a file has been uploaded has been added to the container object.
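
A Container Object might look like the following sketch, similar to the FTP example in the Arquillian Cube documentation. The @Cube, @Image, @HostIp and @HostPort annotations come from Cube's container-object support (package locations may vary between versions), and the image, credentials and ports are illustrative:

import java.io.IOException;
import java.util.Arrays;

import org.apache.commons.net.ftp.FTPClient;

// @Cube, @Image, @HostIp and @HostPort are Arquillian Cube
// container-object annotations.
@Cube(value = "ftp", portBinding = "2121->21/tcp")
@Image("andrewvos/docker-proftpd")
public class FtpContainer {

    @HostIp
    String ip;

    @HostPort(21)
    int port;

    // Encapsulated operation: checks whether a file has been uploaded
    // by listing the remote directory over FTP.
    public boolean isFilePresentInContainer(String fileName) throws IOException {
        FTPClient client = new FTPClient();
        try {
            client.connect(ip, port);
            client.login("alex", "aixa"); // illustrative credentials
            return Arrays.stream(client.listNames()).anyMatch(fileName::equals);
        } finally {
            client.disconnect();
        }
    }
}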

Finally, in your test you only need to declare the Container Object as a field and annotate it with the @Cube annotation.
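
For example (same assumptions as the sketch above):

import org.jboss.arquillian.junit.Arquillian;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class FtpClientTest {

    // Cube creates, starts and injects the container before the test,
    // and stops and removes it afterwards.
    @Cube
    FtpContainer ftpContainer;

    @Test
    public void should_upload_file_to_ftp_server() throws Exception {
        // upload a file to ftpContainer.ip:ftpContainer.port with any FTP client,
        // then verify it through the encapsulated operation:
        // assertTrue(ftpContainer.isFilePresentInContainer("hello.txt"));
    }
}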

Notice that you can even create the definition of the container programmatically:
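
For instance (a sketch; @CubeDockerFile is Cube's container-object annotation and the returned ShrinkWrap archive is used as the Docker build context, but exact details may vary between versions):

import org.jboss.shrinkwrap.api.Archive;
import org.jboss.shrinkwrap.api.GenericArchive;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.StringAsset;

@Cube("daytime")
@CubeDockerFile
public class DaytimeContainer {

    // Builds the Dockerfile content programmatically; Cube builds the image
    // from the returned archive and starts the container from it.
    @CubeDockerFile
    public static Archive<?> createDockerfile() {
        String dockerfile = "FROM busybox\n"
                + "CMD while true; do echo hello; sleep 1; done\n";
        return ShrinkWrap.create(GenericArchive.class)
                .add(new StringAsset(dockerfile), "Dockerfile");
    }
}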

In this case a Dockerfile file is created programmatically within the Container Object and used for building and starting the container.

The third way is using the Container Object DSL. This approach saves you from creating a Container Object class and using annotations to define it. Instead, the container can be defined using a DSL provided for this purpose:
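
For example (a sketch; @DockerContainer and the Container builder come from Cube's Container Object DSL, and the image is illustrative):

import org.jboss.arquillian.junit.Arquillian;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class PingPongTest {

    // The container is defined inline with the DSL instead of a
    // dedicated Container Object class.
    @DockerContainer
    Container pingpong = Container.withContainerName("pingpong")
            .fromImage("jonmorehouse/ping-pong")
            .withPortBinding(8080)
            .build();

    @Test
    public void should_ping_the_container() {
        // build the URL from the container's host IP and bound port
        // and assert on the response
    }
}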

In this case the approach is very similar to the previous one, but you are using a DSL to define the container.

So you've got three ways: the first one is the standard one following docker-compose conventions, while the other two can be used to define reusable pieces for your tests.

You can read more about Arquillian Cube at http://arquillian.org/arquillian-cube/

We keep learning,
Alex
And did you think this fool could never win, Well look at me, i'm coming back again, I got a taste of love in a simple way, And if you need to know while i'm still standing you just fade away (I'm still Standing - Elton John)
Music: https://www.youtube.com/watch?v=ZHwVBirqD2s


Monday, January 09, 2017

Develop A Microservice with Forge, WildFly Swarm and Arquillian. Keep It Simple.

 


In this post we are going to see how to develop a microservice using WildFly Swarm and Forge, and how to test it with Arquillian and REST Assured.

WildFly Swarm offers an innovative approach to packaging and running Java EE applications by packaging them with just enough of the server runtime to "java -jar" your application.

JBoss Forge is a software development tool that extends your Java IDE, providing wizards and extensions (add-ons) for different technologies and solutions.

Arquillian is a platform that simplifies integration testing for Java middleware. It deals with all the plumbing of container management, deployment, and framework initialization so you can focus on the task of writing your tests—real tests.

REST Assured brings the simplicity of testing and validating REST services in dynamic languages such as Ruby and Groovy into the Java domain.

So the first thing you need to do is install Forge. You can either download the CLI console from http://downloads.jboss.org/forge/releases/3.4.0.Final/forge-distribution-3.4.0.Final-offline.zip or navigate to http://forge.jboss.org/download and download the plugin for Eclipse, NetBeans or IntelliJ. For this example, I am going to use the CLI one.

After you've installed Forge and it is available in the PATH environment variable, you can start working with it.

First of all, go to the directory where you want to store the project and run forge.
After a few seconds, you'll see that Forge has started and you are ready to type commands:



After that you need to install the wildfly-swarm addon. To do it, just type the next command in the Forge shell:

> addon-install-from-git --url https://github.com/forge/wildfly-swarm-addon

Then the latest addon will be downloaded and installed. After this setup step, you can start creating your microservice by calling:

> project-new --top-level-package org.superbiz --named foo --type wildfly-swarm

This command creates a new project called foo, with a pom.xml prepared with all the WildFly Swarm requirements. The next step is adding a WildFly Swarm fraction. A fraction is a way to define which modules you want available at runtime.

> wildfly-swarm-add-fraction --fractions microprofile

In this case the microprofile fraction is added. This means that at runtime CDI + JSON-P + JAX-RS will be available.

The addon also creates a JAX-RS endpoint as an example; you can check it by running the next two commands:

> cd src/main/java/org/superbiz/rest/HelloWorldEndpoint.java
> ls

Then return to the root of the project and let's call the command that creates an Arquillian test for the microservice.

> wildfly-swarm-new-test --target-package org.superbiz --named HelloWorldEndpointTest --as-client

In this case the test is called HelloWorldEndpointTest and it is going to run in Arquillian as-client mode (which means that the test is not deployed inside the container and runs in the local JVM). You can check the generated code with the next two commands:

> cd src/test/java/org/superbiz
> cat HelloWorldEndpointTest.java

Notice that the test does not validate anything yet, but since we are using as-client mode, Arquillian injects into the test the URL where the application is started. Let's add some checks using REST Assured.
Return to the root of the project and add the REST Assured dependency by calling the next command:

> project-add-dependencies io.rest-assured:rest-assured:3.0.1:test
> cat pom.xml

Finally you can use REST Assured in the empty test to validate that your microservice endpoint effectively returns "Hello from WildFly Swarm!".
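
The final test might look like this sketch (the deployment method is a simplified stand-in for the one the addon generates; the given/when/then API comes from REST Assured's io.restassured package):

import static io.restassured.RestAssured.when;
import static org.hamcrest.CoreMatchers.is;

import java.net.URL;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.test.api.ArquillianResource;
import org.jboss.shrinkwrap.api.Archive;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.superbiz.rest.HelloWorldEndpoint;

@RunWith(Arquillian.class)
public class HelloWorldEndpointTest {

    // URL where the microservice is deployed, injected because the
    // test runs in as-client mode.
    @ArquillianResource
    private URL url;

    @Deployment(testable = false)
    public static Archive<?> createDeployment() {
        return ShrinkWrap.create(WebArchive.class)
                .addClass(HelloWorldEndpoint.class);
    }

    @Test
    public void should_say_hello_from_wildfly_swarm() {
        when()
            .get(url.toExternalForm() + "hello")
        .then()
            .statusCode(200)
            .body(is("Hello from WildFly Swarm!"));
    }
}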


When you run this test, what happens behind the scenes is that the microservice is packaged and deployed locally. When the service is ready to receive incoming requests, the test sends a GET request to /hello and asserts that the response body is "Hello from WildFly Swarm!".

You can see this running at https://youtu.be/9xb6GIZ1gjs

This is a really simple example, and that was the intention of this post: to show you how, by using Forge and just running some commands, you get a starter project with its integration test running.

We keep learning,
Alex.

I'm not giving up today, There's nothing getting in my way, And if you knock knock me over, I will get back up again (Get Back Up Again - Trolls)

Music: https://www.youtube.com/watch?v=IFuFm0m2wj0

Thursday, October 13, 2016

Build Docker Images with Maven and Gradle



One of the things that you might want to do if you are using Docker and Java is building the image from a Dockerfile in your build tool (Maven or Gradle).  In this post I am going to show you how to do it in both cases.

I am going to assume that you have the de-facto project layout, having the Dockerfile file at the root of the project.

Maven

There are several Maven plugins that can be used for building a Docker image in Maven, but one of the most used is fabric8's docker-maven-plugin.

To start you need to register and configure the plugin in pom.xml:
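
A minimal registration might look like this (plugin version and image name are illustrative):

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.20.1</version>
  <configuration>
    <images>
      <image>
        <name>lordofthejars/myservice:${project.version}</name>
        <build>
          <!-- Dockerfile is located at the root of the project -->
          <dockerFileDir>${project.basedir}</dockerFileDir>
        </build>
      </image>
    </images>
  </configuration>
</plugin>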

In the configuration section you set the image name and the directory where the Dockerfile is located.

Any additional files located in the dockerFileDir directory will also be added to the build context. Since the Dockerfile is in the root of the project, the target directory is added too. The problem arises because this plugin uses target/docker to generate the build, so if you try to build the image you'll get the next exception: tar file cannot include itself. To avoid this problem you need to create a .maven-dockerignore file, at the same level as the Dockerfile, specifying which directory must be ignored:
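
For this layout the file just needs to exclude the build output directory (assuming Maven's default target directory; the pattern follows exclusion-pattern syntax):

target/**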

And that's all, after that you can do:

mvn package docker:build

Notice that this plugin honors Docker environment variables like DOCKER_HOST, DOCKER_CERT_PATH, ... so if your environment is correctly configured you don't need to do anything else.

Gradle

There are several Gradle plugins that can be used for building a Docker image in Gradle, but one of the most used is gradle-docker-plugin.

To start you need to register and configure the plugin in build.gradle:
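
A build.gradle along these lines might look like this sketch (plugin version, Docker URL and image name are illustrative):

import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.bmuschko:gradle-docker-plugin:3.0.5'
    }
}

apply plugin: 'com.bmuschko.docker-remote-api'

// Docker host configuration; the plugin does not read the Docker
// environment variables, so it is set explicitly here.
docker {
    url = 'unix:///var/run/docker.sock'
}

// Builds the image from the Dockerfile located at the project root.
task buildImage(type: DockerBuildImage) {
    inputDir = projectDir
    tag = 'lordofthejars/myservice:latest'
}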


In the case of Gradle, you need to configure the Docker host properties, since the plugin does not honor the Docker environment variables. You need to configure them in the docker {} block.

Finally you create a task of type DockerBuildImage, where you set the Dockerfile root directory using the inputDir attribute and the image name using the tag attribute.

Conclusions

So in this post you've seen different ways of doing the same thing in two different build tools: building a Docker image from a Dockerfile. Notice that these plugins also allow you to define the Dockerfile content as a configuration field, so instead of creating a Dockerfile file you specify its content inside the build tool. You can read more about this feature at https://dmp.fabric8.io/ in the case of the Maven plugin and at https://github.com/bmuschko/gradle-docker-plugin#creating-a-dockerfile-and-building-an-image in the case of the Gradle one.

We keep learning,
Alex.

Bees'll buzz, kids'll blow dandelion fuzz, And I'll be doing whatever snow does in summer., A drink in my hand, my snow up against the burning sand, Prob'ly getting gorgeously tanned in summer. (In Summer - Frozen)


Thursday, September 22, 2016

Authenticating with JGit


JGit is a lightweight, pure Java library implementing the Git version control system. You can do a lot of Git operations from Java, such as creating or cloning repos, creating branches, making commits, rebasing or tagging; you can see this repo to learn how to use JGit and how to code the different commands.

But one thing it does not cover extensively is the authentication process. In this post I am going to show you how to authenticate to a Git repository with JGit.

The first thing to do is add JGit as a dependency:
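
For example, in Maven (the version is illustrative):

<dependency>
  <groupId>org.eclipse.jgit</groupId>
  <artifactId>org.eclipse.jgit</artifactId>
  <version>4.4.1.201607150455-r</version>
</dependency>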


Then let's see a simple clone without authentication:
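
A minimal sketch (repository URL and target directory are illustrative):

import java.io.File;
import org.eclipse.jgit.api.Git;

// Clone a public repository; no credentials are needed.
Git git = Git.cloneRepository()
        .setURI("https://github.com/lordofthejars/nosql-unit.git")
        .setDirectory(new File("/tmp/nosql-unit"))
        .call();
git.close();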

In this case no authentication method is set. Now let's see how to add a username and password, for example in the case of private repos:
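
A sketch (repository and credentials are obviously illustrative):

import java.io.File;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;

Git git = Git.cloneRepository()
        .setURI("https://github.com/myuser/my-private-repo.git")
        .setDirectory(new File("/tmp/my-private-repo"))
        // Credentials provider with the required username and password.
        .setCredentialsProvider(new UsernamePasswordCredentialsProvider("myuser", "mypassword"))
        .call();
git.close();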



In this case you only need to set UsernamePasswordCredentialsProvider as the credentials provider, passing the required username and password.

The final scenario I am going to show here is how to authenticate against a Git repository using your SSH keys, that is, using ~/.ssh/id_rsa and setting the passphrase to access it.
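
A sketch of this scenario (passphrase, URI and paths are illustrative; JschConfigSessionFactory and SshTransport come from org.eclipse.jgit.transport, and UserInfo and Session from com.jcraft.jsch):

import java.io.File;
import com.jcraft.jsch.Session;
import com.jcraft.jsch.UserInfo;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.transport.JschConfigSessionFactory;
import org.eclipse.jgit.transport.OpenSshConfig;
import org.eclipse.jgit.transport.SshSessionFactory;
import org.eclipse.jgit.transport.SshTransport;

SshSessionFactory sshSessionFactory = new JschConfigSessionFactory() {
    @Override
    protected void configure(OpenSshConfig.Host host, Session session) {
        session.setUserInfo(new UserInfo() {
            @Override public String getPassphrase() { return "my-passphrase"; }
            @Override public String getPassword() { return null; }
            @Override public boolean promptPassword(String message) { return false; }
            // Returning true here means the passphrase above will be used.
            @Override public boolean promptPassphrase(String message) { return true; }
            @Override public boolean promptYesNo(String message) { return false; }
            @Override public void showMessage(String message) { }
        });
    }
};

Git git = Git.cloneRepository()
        .setURI("git@github.com:myuser/my-private-repo.git")
        .setDirectory(new File("/tmp/my-private-repo"))
        // Plug the SSH session factory into the transport.
        .setTransportConfigCallback(transport -> {
            SshTransport sshTransport = (SshTransport) transport;
            sshTransport.setSshSessionFactory(sshSessionFactory);
        })
        .call();
git.close();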


In this case you need to extend JschConfigSessionFactory to be able to set the passphrase used to access the private key. To do it, you set a custom UserInfo implementation where the getPassphrase method returns the passphrase to use and the promptPassphrase method returns true.

After that you only need to set the transport configuration to the one created.

We keep learning,
Alex.
Chan eil inneal-ciùil a ghleusar, 'Dhùisgeas smuain mo chléibh gu aoibh, Mar nì duan o bheul nan caileag, Oidhche mhath leibh, beannachd leibh (Oidche Mhath Leibh - Ossian)
Music: https://www.youtube.com/watch?v=mi4SCOYAdEk

Monday, September 19, 2016

Arquillian Chameleon for the sake of simplicity


When using Arquillian, one of the things you need to do is define the container under which you want to execute all your tests.

This is done by adding a dependency for the adapter to the classpath and, depending on the mode used (embedded, managed or remote), having to download the application server manually. For example, this happens when WildFly is used in embedded or managed mode.

An example of a pom.xml using WildFly could be:
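
Something along these lines: the managed adapter dependency plus maven-dependency-plugin unpacking the WildFly distribution into the target directory (a sketch; versions are illustrative):

<dependency>
  <groupId>org.wildfly</groupId>
  <artifactId>wildfly-arquillian-container-managed</artifactId>
  <version>8.2.1.Final</version>
  <scope>test</scope>
</dependency>

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>unpack-wildfly</id>
      <phase>process-test-classes</phase>
      <goals>
        <goal>unpack</goal>
      </goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>org.wildfly</groupId>
            <artifactId>wildfly-dist</artifactId>
            <version>9.0.0.Final</version>
            <type>zip</type>
            <overWrite>false</overWrite>
            <outputDirectory>${project.build.directory}</outputDirectory>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>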


Notice that in the previous script you need to define the Arquillian adapter, in this case the managed one, and use maven-dependency-plugin to download the WildFly distribution file used by Arquillian.

This approach is good and it works, but it has three drawbacks:

  1. You need to repeat all these lines in every build script you want to use Arquillian and Wildfly.
  2. In case you need to use another application server in another project, you need to know which adapter artifact is required and whether it is necessary to download the distribution or not. For example, in the case of embedded Jetty it is not necessary to download any distribution; you only need to set the embedded dependency.
  3. If you want to test your code against several application servers, you have problem number 2 plus you start dealing with profiles.

But all these problems can be fixed using Arquillian Chameleon. Arquillian Chameleon is a generic container which reads from arquillian.xml which container, which version and which mode you want to use in your tests, and it will take care of adding the required adapter to the classpath, downloading any required distribution and configuring the protocol (this is something that, as a user, you should not have to touch).

Using Arquillian Chameleon is pretty easy. Do whatever you would do normally, such as adding the Arquillian BOM, and add the Chameleon container instead of any application-server-specific artifact:
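
For example (a sketch; the version is illustrative):

<dependency>
  <groupId>org.arquillian.container</groupId>
  <artifactId>arquillian-container-chameleon</artifactId>
  <version>1.0.0.Alpha6</version>
  <scope>test</scope>
</dependency>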


Then create in src/test/resources the Arquillian configuration file, called arquillian.xml, with the next configuration:
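
Something like this (the target matches the example discussed below):

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian">
  <container qualifier="chameleon" default="true">
    <configuration>
      <property name="chameleonTarget">wildfly:9.0.0.Final:managed</property>
    </configuration>
  </container>
</arquillian>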


Notice that now you only need to use a friendly property called chameleonTarget to define which container, version and mode you want to use. In the previous example: WildFly 9.0.0.Final with the managed adapter.

When running any test with this configuration, Chameleon will check whether the WildFly 9.0.0.Final distribution is already downloaded (and download it if not), then add the managed adapter for WildFly 9.0.0 to the classpath, and finally execute the test as any other Arquillian test.

What happens if you want to use Payara instead of WildFly? You only need to change the chameleonTarget property to payara:4.1.1.163:managed to, for example, run tests against Payara 4.1.1 in managed mode.

TIP: You can set this property using a Java system property (-Darq.container.chameleon.chameleonTarget=payara:4.1.1.163:managed).

Currently next containers are supported by Chameleon:

  • JBoss EAP 6.x, 7.x
  • WildFly 10.x, 9.x, 8.x
  • JBoss AS 7.x
  • GlassFish 3.1.2, 4.x
  • Payara 4.x

We keep learning,
Alex.
I can see you, Your brown skin shining in the sun, I see you walking real slow(The boys of summer - The Ataris)
Music: https://www.youtube.com/watch?v=Qt6Lkgs0kiU

Monday, August 29, 2016

Configuring Maven Release Plugin to Skip Tests


If you are using Maven and the Maven Release Plugin, you might want to skip the execution of tests during the release plugin execution. The reasons might be very different, depending on the nature of the project or on how the CI pipeline is implemented.

Notice that this might be a real improvement in release time, since performing the release with the Maven Release Plugin implies executing the same tests twice, once in the prepare step and once in the perform step.

To avoid executing tests in prepare phase you need to run as:

mvn -DpreparationGoals=clean release:prepare

If you want to avoid executing tests during perform phase you need to run as:

mvn -Darguments="-Dmaven.test.skip=true" release:perform
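
And if your process skips tests in both steps, the two flags can be combined in a single invocation (a sketch; adapt it to your own release flow):

mvn -DpreparationGoals=clean -Darguments="-Dmaven.test.skip=true" release:prepare release:perform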

It is important to note that I am not saying you don't need to execute tests during the release process; what I am saying is that sometimes your release process doesn't fit the standard release process of the plugin, and, for example, you are already running tests before executing the plugin.

We keep learning,
Alex.
Say it ain't so, I will not go, Turn the lights off, carry me home, Keep your head still, I'll be your thrill, The night will go on, my little windmill (All The Small Things - Blink-182)
