Friday, December 29, 2017

Java Champion



Before Christmas, I received the good news that I had become a Java Champion. And then many memories came to my mind: my first computer, the ZX Spectrum that my parents bought me when I was 6; the first game I had, Desperado (http://www.worldofspectrum.org/infoseekid.cgi?id=0009331) from TopoSoft; all the programs I copied from books in BASIC; all the modifications I made; and of course all the "pokes" I copied from magazines to get infinite lives in games.

After that came the PC era, where I remember creating many .bat files to get different memory configurations for playing games, editing save files with hex editors to get millions of resources, and the arrival of Turbo C on my computer.

Finally came the internet era, which for me started in 1997, and my first browser, Netscape. What good old times, and then my first web page and JavaScript.

And this brings me to the key point. I think it was in 1999 or 2000 when I bought a magazine that came with a CD-ROM containing Java 1.2. I installed it and started programming in Java without an IDE, and I have kept at it until today.

Probably this is the life of many developers who were born in the 80s.

But these are only "scientific" facts, not feelings. These days I have been thinking about the feelings I have had, about what has driven me all this time, and the opening theme of one of my favorite TV programs, "This is Opera", came to my mind. Exactly the same words can be applied in my case to programming/computing/Java.



I cannot finish without thanking all the people I have met around the globe at conferences, starting with Dan Allen (the first one I met at a conference), the whole Arquillian team, the Tomitribe team (with David Blevins at the head), the Asciidoctor community, the Barcelona JUG guys and many more people, organizers, attendees, ... thank you all so much for being there.

Of course, all the workmates I have had at university (La Salle), at Aventia, Grifols (thanks to the Erytra team, I enjoyed so much all the years I was there), Everis, Scytl, CloudBees (LLAP to Jenkins) and finally Red Hat, a company that I really love and where I feel at home.

Last but not least, my family, who have been there all the time.

Next year I will continue writing more posts, contributing more to open source projects and, of course, speaking at as many events as possible.

We keep learning,
Alex.
Caminante son tus huellas el camino y nada más: caminante, no hay camino se hace el camino al andar (Caminante No Hay Camino - Joan Manuel Serrat)



Wednesday, October 25, 2017

Adding Chaos to an OpenShift cluster


Pumba is a chaos testing tool for Docker and Kubernetes (hence OpenShift too). It allows you to add some chaos to container instances, such as stopping, killing or removing containers randomly. It can also add network chaos, such as delays, packet loss or re-ordering.

You can see in the next screen recording how to deploy Pumba on an OpenShift cluster and add some chaos to it.



The security commands that you need to run before deploying Pumba are the following:
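
A minimal sketch, assuming a dedicated pumba service account (the name is illustrative); Pumba pods need the privileged SCC because they mount the Docker socket:

oc create serviceaccount pumba
oc adm policy add-scc-to-user privileged -z pumba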

Also, the Kubernetes descriptor to deploy Pumba in OpenShift is:
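
A sketch of such a descriptor, assuming the gaiaadm/pumba image; the arguments and labels are illustrative and the actual file in the PR may differ:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: pumba
spec:
  template:
    metadata:
      labels:
        app: pumba
    spec:
      serviceAccountName: pumba
      containers:
      - name: pumba
        image: gaiaadm/pumba
        # randomly kill a container every 30 seconds (illustrative arguments)
        args: ["--random", "--interval", "30s", "kill"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: dockersocket
          mountPath: /var/run/docker.sock
      volumes:
      - name: dockersocket
        hostPath:
          path: /var/run/docker.sock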


I have sent a PR to the upstream Pumba project with this file, but until it is accepted you can use this one.

I'd like to say thank you to Slava Semushin and Jorge Morales for helping me understand the OpenShift security model.

We keep learning,
Alex.
Ce joli rajolinet, que les oques tonifique, si le fique en une pique, mantindra le pompis net (El baró de Bidet - La Trinca)
Music: https://www.youtube.com/watch?v=4JWIbKGe4gA

Follow me at https://twitter.com/alexsotob


Tuesday, October 24, 2017

Testing Code that requires a mail server

Almost all applications have one common requirement: they need to send an email notifying a registered user of something. It might be an invoice, a confirmation of an action or a password recovery. Testing this use case can be challenging; using mocks and stubs is fine for unit tests, but at some point you want a component test that exercises the whole stack.

In this post I am going to show you how Docker and MailHog can help you test this part of the code.

MailHog is a super simple SMTP server for email testing, aimed at developers:
  • Configure your application to use MailHog for SMTP delivery
  • View messages in the web UI, or retrieve them with the JSON API
  • Optionally release messages to real SMTP servers for delivery
  • Docker image with MailHog installed
Notice that since you can retrieve any message sent to the mail server using the JSON API, it becomes really simple to validate whether a message has actually been delivered and, of course, to assert on any of the message fields.

Arquillian Cube is an Arquillian extension that can be used to manage Docker containers in your tests. To use it you need a Docker daemon running on a computer (it can be local or not), but probably it will be local.

Arquillian Cube offers three different ways to define container(s):
  • Defining a docker-compose file.
  • Defining a Container Object.
  • Using Container Object DSL.
In this example I am going to show you the Container Object DSL approach, but any of the others works as well.

To use the Container Object DSL you simply need to instantiate a ContainerDslRule (if you are using JUnit rules) or use the Arquillian runner in the case of JUnit, TestNG or Spock. You can read more about the Container Object DSL at http://arquillian.org/arquillian-cube/#_arquillian_cube_and_container_object_dsl

As an example, here is the definition of a Redis container:
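
A minimal sketch, assuming the ContainerDslRule fluent API as used throughout these posts:

import org.arquillian.cube.docker.junit.rule.ContainerDslRule;
import org.junit.ClassRule;

public class RedisTest {

    @ClassRule
    public static ContainerDslRule redis =
            new ContainerDslRule("redis:3.2.6")  // image used for the test
                    .withPortBinding(6379);      // expose the Redis port

    // ... test methods connect to redis.getIpAddress() and redis.getBindPort(6379) ...
}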


When running this test, the Redis Docker image is started, the tests are executed and finally the Docker container is stopped.

So let's see how to do the same, but using the MailHog Docker image instead of Redis.
It is important to notice that ContainerDslRule is a generic class that can be extended to become more specific to a concrete use case. And this is what we are going to do for MailHog.

First of all we need to create a class extending from ContainerDslRule, so everything is still a JUnit rule, but a customized one. Then we create a factory method which creates the MailhogContainer object, setting the image name and the binding ports. Finally, an assertion method is used to connect to the REST API of the MailHog server to check whether there is any message with the given subject.
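
A sketch of such a container object, assuming MailHog's standard ports (1025 for SMTP, 8025 for the HTTP API) and its /api/v2/messages endpoint; the JSON check is deliberately naive:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.stream.Collectors;

import org.arquillian.cube.docker.junit.rule.ContainerDslRule;

import static org.junit.Assert.assertTrue;

public class MailhogContainer extends ContainerDslRule {

    private MailhogContainer(String image) {
        super(image);
    }

    // factory method setting image name and binding ports
    public static MailhogContainer createDefaultMailhogContainer() {
        MailhogContainer container = new MailhogContainer("mailhog/mailhog");
        container.withPortBinding(1025); // SMTP
        container.withPortBinding(8025); // HTTP (JSON) API
        return container;
    }

    // assertion method using MailHog's JSON API
    public void assertMessageReceivedWithSubject(String subject) throws IOException {
        URL api = new URL(String.format("http://%s:%d/api/v2/messages",
                getIpAddress(), getBindPort(8025)));
        try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(api.openStream()))) {
            String body = reader.lines().collect(Collectors.joining());
            // MailHog stores headers as arrays, e.g. "Subject":["Hello"]
            assertTrue(body.contains("\"Subject\":[\"" + subject + "\"]"));
        }
    }
}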

Then we can write a test using this new rule.
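
A sketch of the test; the MailService constructor is illustrative, standing in for whatever configuration your class under test expects:

import java.io.IOException;

import org.junit.ClassRule;
import org.junit.Test;

public class MailServiceTest {

    @ClassRule
    public static MailhogContainer mailhog = MailhogContainer.createDefaultMailhogContainer();

    @Test
    public void should_send_email_through_mailhog() throws IOException {
        // point the service under test at the MailHog SMTP port
        MailService mailService =
                new MailService(mailhog.getIpAddress(), mailhog.getBindPort(1025));

        mailService.send("user@example.com", "Hello", "Welcome aboard!");

        // delegate verification to the container object
        mailhog.assertMessageReceivedWithSubject("Hello");
    }
}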


This test just configures the MailService class with the Docker container configuration, sends an email and finally delegates to the container object to validate whether the email has been received.

Notice that putting everything into an object makes it reusable in other tests and even in other projects. You can create an independent project with all your custom-developed container objects and just reuse them by importing them as a test JAR in the host project.

Code: https://github.com/lordofthejars/mailtest

We keep learning,
Alex.
'Cause I'm kind of like Han Solo always stroking my own wookie, I'm the root of all that's evil yeah but you can call me cookie (Fire Water Burn - Bloodhound Gang)




Tuesday, September 19, 2017

Testing code that uses Java System Properties

Sometimes your code uses a Java system property to configure itself. Usually these classes are configuration classes that can get properties from different sources, and one valid source is Java system properties.

The problem is how to write tests for this code. Obviously, to keep your tests isolated you need to set and unset the property for each test, restoring the old value if, for example, you are dealing with a global property that was already set before executing the tests, and of course in case of an error you must not forget to unset/restore the value. At this point the test might look like:
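
A sketch of that boilerplate, with a hypothetical property name my.property:

@Test
public void shouldUseSystemPropertyConfiguration() {
    String previous = System.getProperty("my.property"); // remember any global value
    try {
        System.setProperty("my.property", "123");
        // ... exercise the code under test here ...
    } finally {
        // restore the previous state even if the test fails
        if (previous == null) {
            System.clearProperty("my.property");
        } else {
            System.setProperty("my.property", previous);
        }
    }
}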

Notice that this structure must be repeated for each test method that requires a specific system property, so it quickly becomes boilerplate code.

To avoid having to repeat this code over and over again, you can use the System Rules project, which is a collection of JUnit rules for testing code that uses java.lang.System.

The first thing you need to do is add the System Rules dependency with test scope in your build tool; in this case it is com.github.stefanbirkner:system-rules:1.16.0.

Then you need to register the JUnit rule in your test class.
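
A minimal sketch using the RestoreSystemProperties rule from System Rules (the property name is again hypothetical):

import org.junit.Rule;
import org.junit.Test;
import org.junit.contrib.java.lang.system.RestoreSystemProperties;

public class MyConfigurationTest {

    @Rule
    public final RestoreSystemProperties restoreSystemProperties = new RestoreSystemProperties();

    @Test
    public void shouldUseSystemPropertyConfiguration() {
        // restored automatically after the test, no try/finally needed
        System.setProperty("my.property", "123");
        // ... exercise the code under test here ...
    }
}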

In this case you can freely set a system property in your test; the JUnit rule will take care of saving and restoring it.

That's it: really easy, no boilerplate code, and it helps you keep your tests clean.

Just one note: these kinds of tests are not thread safe, so you need to be really cautious when you use, for example, the Surefire plugin with forks. One possible way of avoiding this is by using net.jcip.annotations.NotThreadSafe.class capabilities.

We keep learning,
Alex

Polly wants a cracker, I think I should get off her first, I think she wants some water, To put out the blow torch (Polly - Nirvana)



Tuesday, June 27, 2017

Lifecycle of JUnit 5 Extension Model


The JUnit 5 final release is around the corner (currently it is at M4), and I have started playing a bit with how to write extensions.

In JUnit 5, instead of dealing with runners, rules, class rules and so on, you've got a single Extension API for implementing your own extensions.

JUnit 5 provides several interfaces to hook into its lifecycle. For example, you can hook into test instance post-processing to invoke custom initialization methods on the test instance, or into parameter resolution for dynamically resolving test method parameters at runtime. And of course there are the typical ones, like hooking in before all tests are executed, before a test is executed, after a test is executed and so on; a complete list can be found at http://junit.org/junit5/docs/current/user-guide/#extensions-lifecycle-callbacks

But at which point of the process is each of them executed? To find out, I have created an extension that implements all the interfaces, where each method prints out who it is.
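
A sketch of such an extension, trimmed to the most relevant interfaces; the signatures below follow the final JUnit 5 API (in M4 some callbacks took ContainerExtensionContext/TestExtensionContext instead of ExtensionContext):

import org.junit.jupiter.api.extension.*;

public class LoggerExtension implements BeforeAllCallback, TestInstancePostProcessor,
        BeforeEachCallback, BeforeTestExecutionCallback, ParameterResolver,
        TestExecutionExceptionHandler, AfterTestExecutionCallback,
        AfterEachCallback, AfterAllCallback {

    @Override
    public void beforeAll(ExtensionContext context) {
        System.out.println("Before All");
    }

    @Override
    public void postProcessTestInstance(Object testInstance, ExtensionContext context) {
        System.out.println("Test Instance Post-Processing");
    }

    @Override
    public void beforeEach(ExtensionContext context) {
        System.out.println("Before Each");
    }

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        System.out.println("Before Test Execution");
    }

    @Override
    public boolean supportsParameter(ParameterContext parameterContext, ExtensionContext context) {
        return parameterContext.getParameter().getType() == String.class;
    }

    @Override
    public Object resolveParameter(ParameterContext parameterContext, ExtensionContext context) {
        System.out.println("Parameter Resolution");
        return "Hello JUnit 5";
    }

    @Override
    public void handleTestExecutionException(ExtensionContext context, Throwable throwable) throws Throwable {
        System.out.println("Test Execution Exception");
        throw throwable;
    }

    @Override
    public void afterTestExecution(ExtensionContext context) {
        System.out.println("After Test Execution");
    }

    @Override
    public void afterEach(ExtensionContext context) {
        System.out.println("After Each");
    }

    @Override
    public void afterAll(ExtensionContext context) {
        System.out.println("After All");
    }
}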


Then I have created a JUnit 5 test suite containing two test classes:
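
A sketch matching the description in the callouts below (method bodies are illustrative):

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(LoggerExtension.class)
class LoggerExtensionTest {

    @Test
    void simpleTest() {
    }

    @Test
    void testWithParameter(String message) {
        // the String parameter is resolved by LoggerExtension
    }

    @Test
    void failingTest() {
        throw new IllegalStateException("expected failure");
    }
}

@ExtendWith(LoggerExtension.class)
class AnotherLoggerExtensionTest {

    @Test
    void anotherSimpleTest() {
    }
}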

So after executing this suite, what is the output? Let's see. Notice that for the sake of readability I have added some callouts to the terminal output.


<1> The first test class that is run is AnotherLoggerExtensionTest. In this case there is only one simple test, so the extension lifecycle is BeforeAll, Test Instance Post-Processing, BeforeEach, Before Test Execution, then the test itself is executed, and then all the After callbacks.

<2> Then LoggerExtensionTest is executed. The first test is not a parameterized test, so events related to parameter resolution are not called. Before the test method is executed, test instance post-processing is called, and after that all the Before events are fired. Finally the test is executed, followed by all the After events.

<3> The second test requires parameter resolution. Parameter resolvers are run after the Before events and before the test itself is executed.

<4> The last test throws an exception. The Test Execution Exception handler is called after the test is executed but before the After events.

The last thing to notice is that BeforeAll and AfterAll events are executed per test class and not per suite.

The JUnit version used in this example is org.junit.jupiter:junit-jupiter-api:5.0.0-M4

We keep learning,
Alex
That's why we won't back down, We won't run and hide, 'Cause these are the things we can't deny, I'm passing over you like a satellite (Satellite - Rise Against)
Music: https://www.youtube.com/watch?v=6nQCxwneUwA

Follow me at https://twitter.com/alexsotob

Friday, June 23, 2017

Test AWS cloud stack offline with Arquillian and LocalStack


When you are building your applications on the AWS cloud stack (DynamoDB, S3, ...), you need to write tests against these components. The first idea you might have is to have one environment for production and another one for testing, and run your tests against the latter.

This is fine for integration tests, deployment tests, end-to-end tests or performance tests, but for component tests it would be faster if you could run the AWS cloud stack locally and offline.

LocalStack provides exactly this feature: a fully functional local AWS cloud stack, so you can develop and test your cloud applications offline.

LocalStack comes with different ways to start the whole stack, but the easiest one is using its Docker image. So if you run atlassianlabs/localstack, you get the stack up and running with the following configuration:
  • API Gateway at http://localhost:4567
  • Kinesis at http://localhost:4568
  • DynamoDB at http://localhost:4569
  • DynamoDB Streams at http://localhost:4570
  • Elasticsearch at http://localhost:4571
  • S3 at http://localhost:4572
  • Firehose at http://localhost:4573
  • Lambda at http://localhost:4574
  • SNS at http://localhost:4575
  • SQS at http://localhost:4576
  • Redshift at http://localhost:4577
  • ES (Elasticsearch Service) at http://localhost:4578
  • SES at http://localhost:4579
  • Route53 at http://localhost:4580
  • CloudFormation at http://localhost:4581
  • CloudWatch at http://localhost:4582
So the next question is how you automate the whole process of starting the container, running the tests and finally stopping everything, while keeping it portable so you don't need to worry about whether you are using Docker on Linux or macOS. The answer is Arquillian Cube.

Arquillian Cube is an Arquillian extension that can be used to manage Docker containers in your tests. To use it you need a Docker daemon running on a computer (it can be local or not), but probably it will be local.

Arquillian Cube offers three different ways to define container(s):
  • Defining a docker-compose file.
  • Defining a Container Object.
  • Using Container Object DSL.
In this example I am going to show you the Container Object DSL approach, but any of the others works as well.

The first thing you need to do is add the Arquillian and Arquillian Cube dependencies to your build tool.
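
For example, with Gradle the test dependencies might look like this (artifact versions are assumptions; check the Arquillian Cube documentation for current ones):

dependencies {
    testCompile 'junit:junit:4.12'
    testCompile 'org.jboss.arquillian.junit:arquillian-junit-standalone:1.1.13.Final'
    testCompile 'org.arquillian.cube:arquillian-cube-docker:1.3.0'
}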


Then you can write the test, which in this case checks that you can create a bucket and add some content using the S3 instance started on the Docker host:
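
A sketch of the test using the AWS SDK v1 S3 client; dummy credentials are fine because LocalStack does not validate them, and the Container/@DockerContainer types come from the Arquillian Cube Docker artifact:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;
import org.arquillian.cube.docker.impl.client.containerobject.dsl.Container;
import org.arquillian.cube.docker.impl.client.containerobject.dsl.DockerContainer;
import org.jboss.arquillian.junit.Arquillian;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class S3Test {

    @DockerContainer
    Container localstack = Container.withContainerName("localstack")
            .fromImage("atlassianlabs/localstack")
            .withPortBinding(4572) // S3
            .build();

    @Test
    public void should_create_bucket_and_add_content() {
        AmazonS3Client s3 = new AmazonS3Client(new BasicAWSCredentials("accesskey", "secretkey"));
        s3.setEndpoint(String.format("http://%s:%d",
                localstack.getIpAddress(), localstack.getBindPort(4572)));
        // path-style access avoids DNS-style bucket resolution against a local endpoint
        s3.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());

        s3.createBucket("my-bucket");
        s3.putObject("my-bucket", "greeting", "Hello LocalStack");

        assertEquals(1, s3.listBuckets().size());
    }
}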

Important things to take into consideration:
  1. You annotate your test with Arquillian runner.
  • Use the @DockerContainer annotation on the attribute used to define the container.
  3. Container Object DSL is just a DSL that allows you to configure the container you want to use. In this case the localstack container with required port binding information.
  4. The test just connects to Amazon S3 and creates a bucket and stores some content.
Nothing else is required. When you run this test, Arquillian Cube will connect to the installed Docker (Machine) host and start the LocalStack container. When it is up and running and the services are able to receive requests, the tests are executed. After that, the container is stopped and destroyed.

TIP 1: If you cannot use the Arquillian runner, you can also use a JUnit class rule to define the container, as described at http://arquillian.org/arquillian-cube/#_junit_rule

TIP 2: If you are planning to use LocalStack across the whole organization, I suggest you use the Container Object approach instead of the DSL, because then you can pack the LocalStack container object into a JAR file and import it into all the projects that need it. You can read more at http://arquillian.org/arquillian-cube/#_arquillian_cube_and_container_object

So now you can write tests for your application running on the AWS cloud without having to connect to remote hosts, just using your local environment.

We keep learning,
Alex
Tú, tú eres el imán y yo soy el metal , Me voy acercando y voy armando el plan , Solo con pensarlo se acelera el pulso (Oh yeah) (Despacito - Luis Fonsi)
Music: https://www.youtube.com/watch?v=kJQP7kiw5Fk


Thursday, June 22, 2017

Vert.X meets Service Virtualization with Hoverfly



Service virtualization is a technique used to emulate the behaviour of the dependencies of component-based applications.

Hoverfly is a service virtualisation tool written in Go which allows you to emulate HTTP(S) services. It is a proxy which responds to HTTP(S) requests with stored responses, pretending to be its real counterpart.

Hoverfly Java is a wrapper around Hoverfly that lets you use it in the Java world. It provides a native Java DSL for writing expectations and a JUnit rule for using it together with JUnit.

But apart from programming expectations, you can also use Hoverfly to capture the traffic between two services (in this case both are real services) and persist it.

Then in subsequent runs Hoverfly will use these persisted scripts to emulate the traffic without touching the remote service. In this way, instead of programming expectations, which means encoding how you understand the system, you are using real communication data.

This can be summarised in the next figures:


The first time, the outgoing traffic is sent through the Hoverfly proxy, redirected to the real service, and a response is generated. When the response arrives at the proxy, both request and response are stored, and the real response is sent back to the caller.

Then in subsequent calls of the same method:


The outgoing traffic of Service A is still sent through the Hoverfly proxy, but now the response is returned from the previously stored responses instead of being redirected to the real service.

So, how do you connect the HTTP client of Service A to the Hoverfly proxy? The quick answer is that you do nothing.

Hoverfly just overrides the Java network system properties (https://docs.oracle.com/javase/7/docs/api/java/net/doc-files/net-properties.html), so you don't need to do anything; all communications from the HTTP client (independently of the host you put there) will go through the Hoverfly proxy.

The problem is: what happens if the API you are using as an HTTP client does not honor these system properties? Then obviously outgoing communications will not pass through the proxy.

One example is Vert.X and its HTTP client io.vertx.rxjava.ext.web.client.WebClient. Since WebClient does not honor these properties, you need to configure the client properly in order to use Hoverfly.

The basic step is to configure the WebClient with proxy options taken from those system properties.
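
A sketch of that configuration; Hoverfly Java exports http.proxyHost/http.proxyPort (among others), and the plain WebClient is shown here, though the rxjava variant is configured the same way:

import io.vertx.core.Vertx;
import io.vertx.core.net.ProxyOptions;
import io.vertx.core.net.ProxyType;
import io.vertx.ext.web.client.WebClient;
import io.vertx.ext.web.client.WebClientOptions;

public class WebClientFactory {

    public static WebClient create(Vertx vertx) {
        WebClientOptions options = new WebClientOptions();

        String proxyHost = System.getProperty("http.proxyHost");
        String proxyPort = System.getProperty("http.proxyPort");

        // if Hoverfly Java has exported proxy settings, honor them explicitly
        if (proxyHost != null && proxyPort != null) {
            options.setProxyOptions(new ProxyOptions()
                    .setType(ProxyType.HTTP)
                    .setHost(proxyHost)
                    .setPort(Integer.parseInt(proxyPort)));
        }

        return WebClient.create(vertx, options);
    }
}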

Notice that the only thing done here is checking whether the network proxy system properties have been configured (by Hoverfly Java) and, if so, creating a Vert.X ProxyOptions object to configure the HTTP client.

With this change, you can write tests with Hoverfly and Vert.X without any problem:
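
A sketch of such a test with the Hoverfly Java DSL; the response body is illustrative, and deploying/calling VillainsVerticle is elided:

import io.specto.hoverfly.junit.rule.HoverflyRule;
import org.junit.ClassRule;
import org.junit.Test;

import static io.specto.hoverfly.junit.core.SimulationSource.dsl;
import static io.specto.hoverfly.junit.dsl.HoverflyDsl.service;
import static io.specto.hoverfly.junit.dsl.ResponseCreators.success;

public class VillainsVerticleTest {

    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
            service("crimes:9090")     // destination host:port called by the verticle
                    .get("/crimes/Gru")
                    .willReturn(success("[{\"name\":\"Stealing the Moon\"}]", "application/json"))));

    @Test
    public void should_return_gru_crimes() {
        // deploy VillainsVerticle and assert on its HTTP response here;
        // its outgoing call to crimes:9090 is answered by Hoverfly
    }
}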

In the previous example Hoverfly is used in simulate mode, and the request/response definitions come in the form of a DSL instead of an external JSON script.
Notice that in this case you are programming that when the current service (VillainsVerticle) makes a request to host crimes on port 9090, using the GET HTTP method at /crimes/Gru, then the given response is returned. For the sake of simplicity, this is enough for the current post.

You can see source code at https://github.com/arquillian-testing-microservices/villains-service and read about Hoverfly Java at http://hoverfly-java.readthedocs.io/en/latest/

We keep learning,
Alex
No vull veure't, vull mirar-te. No vull imaginar-te, vull sentir-te. Vull compartir tot això que sents. No vull tenir-te a tu: vull, amb tu, tenir el temps. (Una LLuna a l'Aigua - Txarango)
Music: https://www.youtube.com/watch?v=BeH2eH9iPw4

Wednesday, May 24, 2017

Deploying Docker Images to OpenShift



OpenShift is Red Hat's cloud development Platform as a Service (PaaS). It uses Kubernetes for container orchestration (so you can use OpenShift as a Kubernetes implementation), while providing some features missing in Kubernetes, such as automation of the container build process, health management, dynamic storage provisioning or multi-tenancy, to cite a few.

In this post I am going to explain how you can deploy a Docker image from Docker Hub into an OpenShift instance. 

It is important to note that OpenShift offers other ways to create and deploy a container into its infrastructure; you can read more about them at https://docs.openshift.com/enterprise/3.2/dev_guide/builds.html, but as stated in the previous paragraph, in this case I am going to show you how to deploy already-built Docker images from Docker Hub.

The first thing to do is create an account on OpenShift Online. It is free and enough for the sake of this post. Of course, you can use any other OpenShift approach, like OpenShift Origin.

After that you need to log in to the OpenShift cluster. In the case of OpenShift Online, use the token provided:

oc login https://api.starter-us-east-1.openshift.com --token=xxxxxxx

Then you need to create a new project inside OpenShift.

oc new-project villains

You can understand a project as a Kubernetes namespace with additional features. 

Then let's create a new application within the previous project, based on a Docker image published on Docker Hub. This example is a Vert.X application where you can get the crimes of several fictional villains, from Lex Luthor to Gru.

oc new-app lordofthejars/crimes:1.0 --name crimes

In this case a new app called crimes is created, based on the lordofthejars/crimes:1.0 image. After running the previous command, a new pod running that image + a service + a replication controller is created.

After that we need to create a route so the service is available to the public internet.

oc expose svc crimes --name=crimeswelcome

The last step is to get the version of the service from the browser; in my case it was http://crimeswelcome-villains.1d35.starter-us-east-1.openshiftapps.com/version. Notice that you need to replace the public host with the one generated by your router and then append /version. You can find the public URL in the OpenShift dashboard, at the top of the pods definition.


OK, now you'll get 1.0, which is the version we have deployed. Now suppose you want to update the service to the next version, 1.1, so you need to run the next commands to deploy the new version of the crimes service container, which has already been pushed to Docker Hub.

oc import-image crimes:1.1 --from=lordofthejars/crimes:1.1

With the previous command you are configuring the internal OpenShift Docker registry with the next Docker image to release.

Then let's prepare the application so that when the next rollout command is applied, the new image is deployed:

oc patch dc/crimes -p '{"spec": { "triggers":[ {"type": "ConfigChange", "type": "ImageChange" , "imageChangeParams": {"automatic": true, "containerNames":["crimes"],"from": {"name":"crimes:1.1"}}}]}}'

And finally you can do the rollout of the application by using:

oc rollout latest dc/crimes

After a few seconds you can go again to http://crimeswelcome-villains.1d35.starter-us-east-1.openshiftapps.com/version (of course, change the host to yours) and the version you'll get is 1.1.

Finally, what happens if this new version contains a bug and you want to roll the deployment back to the previous version? Easy, just run the next command:

oc rollback crimes-1

And the previous version is deployed again, so after a few seconds you can go to /version again and you'll see version 1.0.

Finally, if you want to delete the application to leave a clean cluster, run:

oc delete all --all

So as you can see, it is really easy to deploy container images from Docker Hub to OpenShift. Notice that there are other ways to deploy an application into OpenShift (https://docs.openshift.com/enterprise/3.2/dev_guide/builds.html); in this post I have just shown you one.

Commands: https://gist.github.com/lordofthejars/9fb5f08e47775a185a9b1f80f4af7aff

We keep learning,
Alex.
Yo listen up here's a story, About a little guy that lives in a blue world, And all day and all night and, everything he sees is just blue, Like him inside and outside (Blue - Eiffel 65)
Music: https://www.youtube.com/watch?v=68ugkg9RePc


Friday, May 19, 2017

Running Parallel Tests in Docker



Sometimes when you are running your tests in your CI environment, you want to run tests in parallel. This parallelism is configured in the build tool, such as Maven or Gradle, or by using a Jenkins plugin.

If you are using Docker as a testing tool for providing external dependencies to the application (for example databases, mail servers, FTP servers, ...), you might hit a big problem: there is probably only one Docker host, and when running tests in parallel all of them are going to try to start a container with the same name. So when you start the second test (in parallel) you will get a failure about a container name conflict, because you are trying to start two containers with the same name, or with the same binding port, on the same Docker host.

So at this point you can do two things:
  • You can have one Docker Host for each parallel test.
  • You can reuse the same Docker Host and use Arquillian Cube Star Operator.

Arquillian Cube is an Arquillian extension that can be used to manage Docker containers in your tests.

To use Arquillian Cube you need a Docker daemon running on a computer (it can be local or not), but probably it will be local.

Arquillian Cube offers three different ways to define container(s):

  • Defining a docker-compose file.
  • Defining a Container Object.
  • Using Container Object DSL.
In this example I am going to show you how to use docker-compose and the Container Object DSL.

The star operator lets you tell Arquillian Cube that you want cube names generated randomly, and it can adapt links as well. In this way, when you execute your tests in parallel there will be no conflicts because of names or binding ports.

Let’s see an example:
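
A sketch of that docker-compose.yml; the star suffix on the service name is the only special part:

version: '2'
services:
  redis*:
    image: redis:3.2.6
    ports:
      - "6379"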


You can see an important change in this docker-compose.yml compared with a typical docker-compose file: the service name ends with the star (*) operator [redis*]. This is how you instruct Arquillian Cube that the name should be generated dynamically for each execution.

Then there are three tests (only the first one is shown here) that all look the same. Basically each one prints to the console the binding port used to connect to the server.

Finally there is the build.gradle file, which executes two tests in parallel. So if you run the tests with Gradle (./gradlew test) you'll see that two tests are executed at the same time, and when one of them finishes, the remaining test is executed. If you then inspect the output you'll see something like:


So as you can see in the log, the container name is neither redis nor redis*, but redis followed by a UUID. You can also see that when the output is printed, the binding port is different in each case.

But if you don't want to use the docker-compose approach, you can also define the container programmatically by using the Container Object DSL, which also supports the star operator. In this case the example looks like:
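
A sketch of the same definition with the Container Object DSL; the star suffix again tells Cube to randomize the name:

@DockerContainer
Container redis = Container.withContainerName("redis*")
        .fromImage("redis:3.2.6")
        .withPortBinding(6379)
        .build();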

The approach is the same, but using Container Objects (you need Arquillian Cube 1.4.0 to run it with Container Objects).

Notice that thanks to this feature you can run the tests with any degree of parallelism, since Arquillian Cube takes care of naming and port binding issues. In the case of links between containers, you still need to use the star operator, and it will be resolved at runtime.

To read more about the star operator just check http://arquillian.org/arquillian-cube/#_parallel_execution

Source code: https://github.com/lordofthejars/parallel-docker

We keep learning,
Alex.
I can show you the world, Shining, shimmering, splendid, Tell me, princess, now when did, You last let your heart decide? (A Whole New World - Aladdin)
Music: https://www.youtube.com/watch?v=sVxUUotm1P4


Friday, May 12, 2017

Testing Spring Data + Spring Boot applications with Arquillian (Part 2)


In the previous post, I wrote about how to test a Spring Data application using Docker with Arquillian Cube. The test looked like:


This test just starts the Redis container, then populates data using restTemplate and the POST method, then executes the logic under test (testing the GET HTTP method) and finally stops the Redis container.

It is good, it works but there are several problems there:
  • The first one is that we are using the REST API to prepare the data set of the test. The problem here is that the test might fail not because of a failure in the code under test, but because of the preparation of the test (the insertion of data).
  • The second one is that if the POST endpoint changes its format/location, then you need to remember to change it everywhere it is used in the tests.
  • The last one is that each test should leave the environment as it found it, so the test is isolated from other executions. The problem is that with this approach you need to delete the elements previously inserted by POST. This means adding a DELETE HTTP method, which might not always be implemented in the endpoint, or might be restricted to some concrete users, forcing you to deal with special authentication concerns.
To avoid these problems the Arquillian Persistence Extension (aka APE) was created. This extension integrates with DBUnit and Flyway for SQL databases, NoSQLUnit for NoSQL databases and Postman collections for REST services, so you can populate your backend before the real test use case is executed and clean the persistence storage afterwards.

Also, the population data is stored in a file, which means it can be reused in all tests and easily changed in case of any schema update.

Let's see the example from Part 1 of this series, updated to use APE.
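
A sketch of the updated test; the APE annotation and populator method names below are written from memory of the 2.0.0 snapshot API and should be treated as assumptions (the project linked at the end shows the exact code):

@ClassRule
public static ContainerDslRule redis = new ContainerDslRule("redis:3.2.6")
        .withPortBinding(6379);

@Rule
public ArquillianPersistenceRule arquillianPersistenceRule = new ArquillianPersistenceRule();

@Redis
@ArquillianResource
NoSqlPopulator populator;

@Test
public void should_return_stored_pings() {
    // populate Redis directly from a file instead of going through the POST endpoint
    populator.forServer(redis.getIpAddress(), redis.getBindPort(6379))
            .usingDataSet("pings.json")
            .execute();

    // ... exercise the GET endpoint under test here ...
}

@After
public void clean_redis() {
    populator.forServer(redis.getIpAddress(), redis.getBindPort(6379))
            .clean();
}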

And the file (pings.json) used for populating the Redis instance with data looks like:
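
A sketch of the dataset in NoSQLUnit's Redis format (the key and value are illustrative):

{
  "data": [
    {
      "simple": [
        {
          "key": "ping",
          "value": "pong"
        }
      ]
    }
  ]
}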


Notice that in this test you have replaced the POST calls with something that inserts directly into the storage. In this way you avoid any failure that might occur in the insertion logic (which is not the part under test). Finally, after each test method the Redis instance is cleaned, so other tests find Redis in a clean, known state.

The project can be found at https://github.com/arquillian-testing-microservices/pingpongbootredis

We keep learning,
Alex
Y es que no puedo estar así, Las manecillas del reloj, Son el demonio que me tiene hablando solo (Tocado y Hundido - Melendi)
Music: https://www.youtube.com/watch?v=1JwAr4ZxdMk



Tuesday, May 02, 2017

Testing Dockerized SQL Databases


One of the big advantages of using Docker for testing is that you don't need to install the dependencies required by the code under test on every machine where you are going to run the tests. This is really helpful for external services such as database servers, mail services, JMS queues, ... Another big advantage of this approach is that the tests use the same version as production.

So for persistence tests, Docker is a really good approach to follow. But as usual, this approach comes with some drawbacks.

The first one is that you obviously need Docker installed on every machine that runs the tests; not a big problem, but something to take into consideration, as is the Docker-inside-Docker problem.

The second one is that you somehow need to automate starting and stopping the container.

The third one is that Docker containers are ephemeral. This means that when you start the container, in this case a container with a SQL server, you need to migrate the database schema to it.

The fourth one, and this is not only related to Docker, is that you need to keep test method executions isolated from each other, by providing known data before execution and cleaning the data afterwards so the next test finds the environment clean.

The first and second problems are fixed by Arquillian Cube (http://arquillian.org/arquillian-cube/). It manages the lifecycle of containers by starting and stopping them automatically before and after test class execution. It also detects when you are running in a DinD situation and configures the started containers accordingly.

Arquillian Cube offers three different ways to define container(s).

  • Defining a docker-compose file.
  • Defining a Container Object.
  • Using Container Object DSL.

For this post, the Container Object DSL approach is the one used. To define a container that is started before executing the tests and stopped afterwards, you only need to write the next piece of code.
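
A minimal sketch (the same pattern as in the other posts of this series):

@ClassRule
public static ContainerDslRule redis =
        new ContainerDslRule("redis:3.2.6") // image used for the test
                .withPortBinding(6379);     // expose the Redis port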


In this case a JUnit rule is used to define which image should be used in the test (redis:3.2.6) and to add the Redis port (6379) as a binding port.

The third one can be fixed using Flyway. It is an open-source database migration tool for SQL databases that allows you to automate the creation of database schemas.

Flyway is useful here since you can start the Docker container and then apply all migrations to the empty database using Flyway.

The fourth problem can be fixed by using tools like DBUnit. It puts your database into a known state between test runs by populating the database with known data and cleaning it after the test execution.

Arquillian integrates with both of these tools (Flyway and DBUnit), among others, through its extension called Arquillian Persistence Extension (aka APE).

An example of how to use APE with DBUnit is shown in the next snippet:
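
A sketch of the snippet; the APE class and method names here are recalled from the 2.0.0 snapshot API and should be treated as assumptions (the ftest linked below shows the exact code):

@Rule
public ArquillianPersistenceRule arquillianPersistenceRule = new ArquillianPersistenceRule();

@DbUnit
@ArquillianResource
RdbmsPopulator populator;

@Before
public void populate() {
    // point APE at the database and load the known data set
    populator.forUri("jdbc:postgresql://localhost:5432/test") // illustrative URI
            .withDriver(org.postgresql.Driver.class)
            .withUsername("postgres")
            .withPassword("postgres")
            .usingDataSet("books.yml")
            .execute();
}

@After
public void clean() {
    // leave the database clean for the next test
    populator.forUri("jdbc:postgresql://localhost:5432/test")
            .withDriver(org.postgresql.Driver.class)
            .withUsername("postgres")
            .withPassword("postgres")
            .usingDataSet("books.yml")
            .clean();
}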

You can use the Arquillian runner, as shown in dbunit-ftest-example, or a JUnit rule, as shown in the previous snippet. Choosing one or the other depends on your test requirements.

So how does everything fit together in Arquillian, so that you can boot up a Docker container with a SQL database such as PostgreSQL before test class execution, then migrate the SQL schema and populate it with data, execute the test method, then clean the whole database so the next test method finds it clean, and finally destroy the Docker container after test class execution?

Let's see it in the next example:

The test is not very complicated and is pretty much self-explanatory about what it does in each step. You are creating the Docker container using the Arquillian Cube DSL, and you are configuring the populators using the Arquillian APE DSL.

So thanks to Arquillian Cube and Arquillian APE you can make your test totally independent of your runtime; it will always be executed against the same PostgreSQL database version, and each test method execution will be isolated.

You can see the full code at https://github.com/arquillian/arquillian-extension-persistence/tree/2.0.0/arquillian-ape-sql/standalone/dbunit-flyway-ftest

We keep learning,
Alex
Ya no me importa nada, Ni el día ni la hora, Si lo he perdido todo, Me has dejado en las sombras (Súbeme la Radio - Enrique Iglésias)
Music: https://www.youtube.com/watch?v=9sg-A-eS6Ig

Wednesday, April 26, 2017

Testing Spring Data + Spring Boot applications with Arquillian (Part 1)


Spring Data's mission is to provide a familiar and consistent, Spring-based programming model for data access while still retaining the special traits of the underlying data store. It provides integration with several backend technologies such as JPA, REST, MongoDB, Neo4j or Redis, to cite a few.

So if you are using Spring (Boot), then Spring Data is the right choice to deal with the persistence layer.

In the next example you can see how simple it is to use Spring Boot and Spring Data Redis.
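
A sketch of such an application (not the exact code from the post): a REST controller that counts pings in Redis through StringRedisTemplate:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class PingPongApplication {

    public static void main(String[] args) {
        SpringApplication.run(PingPongApplication.class, args);
    }
}

@RestController
class PingController {

    private final StringRedisTemplate redis;

    PingController(StringRedisTemplate redis) {
        this.redis = redis;
    }

    @PostMapping("/ping")
    public Long ping() {
        // the Redis connection is auto-configured from spring.redis.host/port
        return redis.opsForValue().increment("ping:count", 1);
    }

    @GetMapping("/pings")
    public String pings() {
        return redis.opsForValue().get("ping:count");
    }
}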


It is important to notice that by default Spring Data Redis is configured to connect to localhost and port 6379, but you can override those values by setting system properties (spring.redis.host and spring.redis.port) or environment variables (SPRING_REDIS_HOST and SPRING_REDIS_PORT).

But now it is time to write a test for this piece of code. The main problem you might have is that you need a Redis server installed on all machines that need to execute these tests, such as developer machines or Jenkins slaves.

This is not a problem per se, but as you start working on more and more projects you'll need more and more databases installed on the system, and, what can be even worse, not exactly the same versions as required in production.

To avoid this problem, one possible solution is using Docker and containers. Instead of relying on having each database installed on the system, you only depend on Docker. The test then just starts the repository container, in our case Redis, executes the test(s) and finally stops the container.

And this is where Arquillian (and Arquillian Cube) helps you automate everything.
Arquillian Cube is an Arquillian extension that can be used to manage Docker containers from Arquillian.

To use Arquillian Cube you need a Docker daemon running on a computer (it can be local or not), but probably it will be local.

By default the Docker server uses UNIX sockets for communicating with the Docker client. Arquillian Cube will attempt to detect the operating system it is running on and either set docker-java to use the UNIX socket on Linux or to Boot2Docker/Docker Machine on Windows/Mac as the default URI. This makes your test portable across several Docker installations; you don't need to worry about configuring it, as Arquillian Cube adapts to what you have installed.

Arquillian Cube offers three different ways to define container(s).
  • Defining a docker-compose file.
  • Defining a Container Object.
  • Using Container Object DSL.

For this post, the Container Object DSL approach is the one used. To define a container that is started before executing the tests and stopped afterwards, you only need to write the next piece of code.
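
A minimal sketch of that rule:

@ClassRule
public static ContainerDslRule redis =
        new ContainerDslRule("redis:3.2.6") // image used for the test
                .withPortBinding(6379);     // expose the Redis port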

In this case a JUnit rule is used to define which image should be used in the test (redis:3.2.6) and to add the Redis port (6379) as a binding port.

The full test looks like:
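
A sketch of the full test (Spring Boot 1.4/1.5-era APIs; EnvironmentTestUtils was later superseded by TestPropertyValues):

import org.arquillian.cube.docker.junit.rule.ContainerDslRule;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.util.EnvironmentTestUtils;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

import static org.junit.Assert.assertEquals;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = PingPongApplication.class,
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ContextConfiguration(initializers = PingPongTest.Initializer.class)
public class PingPongTest {

    @ClassRule
    public static ContainerDslRule redis =
            new ContainerDslRule("redis:3.2.6").withPortBinding(6379);

    public static class Initializer
            implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        @Override
        public void initialize(ConfigurableApplicationContext context) {
            // point Spring Data Redis at the container started by Arquillian Cube
            EnvironmentTestUtils.addEnvironment(context,
                    "spring.redis.host=" + redis.getIpAddress(),
                    "spring.redis.port=" + redis.getBindPort(6379));
        }
    }

    @Autowired
    private TestRestTemplate restTemplate;

    @Test
    public void should_count_pings() {
        restTemplate.postForEntity("/ping", null, Long.class);
        assertEquals("1", restTemplate.getForObject("/pings", String.class));
    }
}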

Notice that it is a simple Spring Boot test using its usual bits and bobs, but the Arquillian Cube JUnit rule is used in the test to start and stop the Redis image.

The last important thing to notice is that the test contains an implementation of ApplicationContextInitializer, so we can configure the environment with the Docker data (host and binding port of the Redis container) and Spring Data Redis can connect to the correct instance.

Last but not least, the build.gradle file defines the required dependencies, which look like:
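
A sketch (the Cube artifact name and versions are assumptions; check the Arquillian Cube documentation):

dependencies {
    compile 'org.springframework.boot:spring-boot-starter-web'
    compile 'org.springframework.boot:spring-boot-starter-data-redis'

    testCompile 'org.springframework.boot:spring-boot-starter-test'
    testCompile 'junit:junit:4.12'
    testCompile 'org.arquillian.cube:arquillian-cube-docker:1.2.0'
}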


You can read more about Arquillian Cube at http://arquillian.org/arquillian-cube/

We keep learning,
Alex

Hercules and his gifts, Spiderman's control, And Batman with his fists, And clearly I don't see myself upon that list (Something just like this - The Chainsmokers & Coldplay)

Music: https://www.youtube.com/watch?v=FM7MFYoylVs

Monday, April 10, 2017

Arquillian Persistence with MongoDB and Docker


In this screencast you are going to see how you can use Arquillian Persistence Extension (https://github.com/arquillian/arquillian-extension-persistence/tree/2.0.0) and Docker to write persistence tests for MongoDB.

To manage the Docker lifecycle, I have used Arquillian Cube (http://arquillian.org/arquillian-cube/) and, for populating data into MongoDB, the fairly new integration between the Arquillian Persistence Extension (aka APE) and NoSQLUnit (https://github.com/lordofthejars/nosql-unit).



We keep learning,
Alex.

Ridi, Pagliaccio, Sul tuo amore infranto! Ridi del duol, che t'avvelena il cor! (Vesti la giubba (Pagliacci) - Leoncavallo)
Music: https://www.youtube.com/watch?v=Z0PMq4XGtZ4


Friday, March 24, 2017

3 ways of using Docker Containers for Testing in Arquillian


Arquillian Cube is an Arquillian extension that can be used to manage Docker containers from Arquillian.

With this extension you can start Docker container(s), execute the Arquillian tests and after that shut down the container(s).

The first thing you need to do is add the Arquillian Cube dependency. This can be done by using the Arquillian Universe approach:
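
A sketch of the Maven setup (the version and the universe module names are assumptions; check the Arquillian Universe documentation):

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.arquillian</groupId>
      <artifactId>arquillian-universe</artifactId>
      <version>1.1.11.4</version> <!-- assumed version -->
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>org.arquillian.universe</groupId>
    <artifactId>arquillian-junit</artifactId>
    <type>pom</type>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.arquillian.universe</groupId>
    <artifactId>arquillian-cube-docker</artifactId>
    <type>pom</type>
    <scope>test</scope>
  </dependency>
</dependencies>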


Then you have three ways of defining the containers you want to start.

The first approach is using the docker-compose format. You only need to define the docker-compose file required for your tests, and Arquillian Cube automatically reads it, starts all the containers, executes the tests and finally stops and removes them.
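
A sketch of such a file (version 2, one network and one service; the image is illustrative):

version: '2'
networks:
  front:
    driver: bridge
services:
  pingpong:
    image: jonmorehouse/ping-pong
    ports:
      - "8080:8080"
    networks:
      - front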

In the previous example a docker-compose file version 2 is defined (it can be stored in the root of the project, in src/{main, test}/docker or in src/{main, test}/resources, and Arquillian Cube will pick it up automatically). Arquillian Cube creates the defined network, starts the defined service container, executes the given test, and finally stops and removes the network and container. The key point here is that this happens automatically; you don't need to do anything manually.

The second approach is using the Container Object pattern. You can think of a container object as a mechanism to encapsulate areas (data and actions) related to a container that your test might interact with. In this case no docker-compose file is required.
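
A sketch of a container object for an FTP server; the @Cube/@HostIp/@HostPort annotations are from the Arquillian Cube container object API as I recall it, and the image port binding, credentials and file check are illustrative:

import java.io.IOException;
import java.util.Arrays;

import org.apache.commons.net.ftp.FTPClient;
import org.arquillian.cube.HostIp;
import org.arquillian.cube.HostPort;
import org.arquillian.cube.containerobject.Cube;

@Cube(value = "ftp", portBinding = "2121->21")
public class FtpContainer {

    @HostIp
    private String ip;

    @HostPort(21)
    private int port;

    // encapsulated operation: check whether a file has been uploaded to the server
    public boolean isFilePresent(String fileName) {
        FTPClient client = new FTPClient();
        try {
            client.connect(ip, port);
            client.login("admin", "admin"); // illustrative credentials
            boolean present = Arrays.asList(client.listNames()).contains(fileName);
            client.disconnect();
            return present;
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }
}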

In this case you are using annotations to define what the container should look like. Also, since you are using Java objects, you can add methods that encapsulate operations on the container itself, as in the sketch above, where an operation checking whether a file has been uploaded has been added to the container object.

Finally, in your test you only need to annotate it with the @Cube annotation.

Notice that you can even create the definition of the container programmatically:

In this case a Dockerfile is created programmatically within the container object and used for building and starting the container.

The third way is using the Container Object DSL. This approach saves you from creating a container object class and using annotations to define it. The container can be created using a DSL provided for this purpose:
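
A sketch with the DSL, defining an equivalent container without a dedicated class:

@DockerContainer
Container ftp = Container.withContainerName("ftp")
        .fromImage("stilliard/pure-ftpd") // illustrative image
        .withPortBinding(21)
        .build();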

In this case the approach is very similar to the previous one, but you are using a DSL to define the container.

So you've got three ways: the first one is the standard one following docker-compose conventions, and the other two can be used for defining reusable pieces for your tests.

You can read more about Arquillian Cube at http://arquillian.org/arquillian-cube/

We keep learning,
Alex
And did you think this fool could never win, Well look at me, i'm coming back again, I got a taste of love in a simple way, And if you need to know while i'm still standing you just fade away (I'm still Standing - Elton John)
Music: https://www.youtube.com/watch?v=ZHwVBirqD2s


Monday, January 09, 2017

Develop A Microservice with Forge, WildFly Swarm and Arquillian. Keep It Simple.

 


In this post we are going to see how to develop a microservice using WildFly Swarm and Forge, and test it with Arquillian and REST Assured.

WildFly Swarm offers an innovative approach to packaging and running Java EE applications by packaging them with just enough of the server runtime to "java -jar" your application.

JBoss Forge is a software development tool that extends your Java IDE, providing wizards and extensions (add-ons) for different technologies and solutions.

Arquillian is a platform that simplifies integration testing for Java middleware. It deals with all the plumbing of container management, deployment, and framework initialization so you can focus on the task of writing your tests—real tests.

REST Assured brings the simplicity of testing and validating REST services in dynamic languages such as Ruby and Groovy into the Java domain.

So the first thing you need to do is install Forge. You can just download the CLI console from http://downloads.jboss.org/forge/releases/3.4.0.Final/forge-distribution-3.4.0.Final-offline.zip or navigate to http://forge.jboss.org/download and download the plugin for Eclipse, NetBeans or IntelliJ. For this example, I am going to use the CLI one.

After you've installed Forge and it is available in your PATH environment variable, you can start working with it.

First of all, go to the directory where you want to store the project and run forge.
After a few seconds, you'll see that Forge has started and you are ready to type commands:



After that you need to install the wildfly-swarm addon. To do it, just type the next command in the Forge shell:

> addon-install-from-git --url https://github.com/forge/wildfly-swarm-addon

Then the latest addon will be downloaded and installed. After this setup step, you can start creating your microservice by calling:

> project-new --top-level-package org.superbiz --named foo --type wildfly-swarm

This command creates a new project called foo, with its pom.xml prepared with all the WildFly Swarm requirements. The next step is adding a WildFly Swarm fraction. A fraction is a way to define which modules you want available at runtime.

> wildfly-swarm-add-fraction --fractions microprofile

In this case the microprofile fraction is added. This means that at runtime CDI + JSON-P + JAX-RS will be available.

The addon also creates a JAX-RS endpoint as an example; you can check it by running the next two commands:

> cd src/main/java/org/superbiz/rest/HelloWorldEndpoint.java
> ls

Then return to the root of the project and call the command that creates an Arquillian test for the microservice.

> wildfly-swarm-new-test --target-package org.superbiz --named HelloWorldEndpointTest --as-client

In this case the test is called HelloWorldEndpointTest, and the test is going to run in Arquillian as-client mode (which means that the test is not deployed inside the container and runs in the local runtime). You can check the generated code with the next two commands:

> cd src/test/java/org/superbiz
> cat HelloWorldEndpointTest.java

Notice that the test does not validate anything yet, but since we are using as-client mode, the test injects the URL where the application is started. Let's add some checks using REST Assured.
Return to the root of the project and add the REST Assured dependency by calling the next command:

> project-add-dependencies io.rest-assured:rest-assured:3.0.1:test
> cat pom.xml

Finally you can use REST Assured in the empty test to validate that your microservice endpoint effectively returns "Hello from WildFly Swam!".
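
A sketch of the finished test; the runner and @DefaultDeployment wiring come from the addon-generated test (as I recall it), and only the REST Assured call is new:

import java.net.URL;

import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.test.api.ArquillianResource;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.wildfly.swarm.arquillian.DefaultDeployment;

import static io.restassured.RestAssured.when;
import static org.hamcrest.CoreMatchers.is;

@RunWith(Arquillian.class)
@DefaultDeployment
public class HelloWorldEndpointTest {

    @ArquillianResource
    private URL url;

    @Test
    public void should_say_hello() {
        when()
            .get(url + "hello")
        .then()
            .statusCode(200)
            .body(is("Hello from WildFly Swam!"));
    }
}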


When you run this test, what happens behind the scenes is that the microservice is packaged and deployed locally. When the service is ready to receive incoming requests, the test sends a GET request to /hello and asserts that the response body is "Hello from WildFly Swam!".

You can see this running at https://youtu.be/9xb6GIZ1gjs

This is a really simple example, and that was the intention of this post: just to show you how, using Forge and running a few commands, you get a starter project with its integration test running.

We keep learning,
Alex.

I'm not giving up today, There's nothing getting in my way, And if you knock knock me over, I will get back up again (Get Back Up Again - Trolls)

Music: https://www.youtube.com/watch?v=IFuFm0m2wj0