Wednesday, May 24, 2017

Deploying Docker Images to OpenShift



OpenShift is Red Hat's cloud development Platform as a Service (PaaS). It uses Kubernetes for container orchestration (so you can use OpenShift as a Kubernetes implementation), but it provides some features missing from Kubernetes, such as automation of the container build process, health management, dynamic storage provisioning, or multi-tenancy, to name a few.

In this post I am going to explain how you can deploy a Docker image from Docker Hub into an OpenShift instance. 

It is important to note that OpenShift offers other ways to create and deploy containers on its infrastructure (you can read more about them at https://docs.openshift.com/enterprise/3.2/dev_guide/builds.html), but as stated in the previous paragraph, in this case I am going to show you how to deploy already-built Docker images from Docker Hub.

The first thing to do is create an account on OpenShift Online. It is free and enough for the purposes of this post. Of course, you can use any other OpenShift flavour, such as OpenShift Origin.

After that you need to log in to the OpenShift cluster. In the case of OpenShift Online, you do it using the provided token:

oc login https://api.starter-us-east-1.openshift.com --token=xxxxxxx

Then you need to create a new project inside OpenShift.

oc new-project villains

You can understand a project as a Kubernetes namespace with additional features. 

Then let's create a new application within the previous project, based on a Docker image published on Docker Hub. This example is a Vert.x application where you can get crimes committed by several fictional villains, from Lex Luthor to Gru.

oc new-app lordofthejars/crimes:1.0 --name crimes

In this case a new app called crimes is created, based on the lordofthejars/crimes:1.0 image. After running the previous command, a new pod running that image, a service, and a replication controller are created.
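If you want to verify what has been created, you can ask OpenShift for the status of the project (just a sanity check, not a required step):

oc status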

After that we need to create a route so the service is reachable from the public internet.

oc expose svc crimes --name=crimeswelcome

The last step is just to get the version of the service from the browser; in my case it was http://crimeswelcome-villains.1d35.starter-us-east-1.openshiftapps.com/version. Notice that you need to replace the public host with the one generated by your route and then append /version. You can find the public URL by going to the OpenShift dashboard, at the top of the pods definition.
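If you prefer the terminal to the browser, you can also query the endpoint with curl (again, replacing the host with the one generated by your route):

curl http://crimeswelcome-villains.1d35.starter-us-east-1.openshiftapps.com/version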


OK, now you'll get 1.0, which is the version we have deployed. Now suppose you want to update the service to the next version, 1.1. You need to run the next commands to deploy the new version of the crimes service container, which is already pushed to Docker Hub.

oc import-image crimes:1.1 --from=lordofthejars/crimes:1.1

With the previous command you are configuring the internal OpenShift Docker registry with the next Docker image to release.

Then let's prepare the application so that when the next rollout command is applied, the new image is deployed:

oc patch dc/crimes -p '{"spec": { "triggers": [ {"type": "ConfigChange"}, {"type": "ImageChange", "imageChangeParams": {"automatic": true, "containerNames": ["crimes"], "from": {"name": "crimes:1.1"}}}]}}'

And finally you can do the rollout of the application by using:

oc rollout latest dc/crimes

After a few seconds you can go again to http://crimeswelcome-villains.1d35.starter-us-east-1.openshiftapps.com/version (of course, replacing the host with yours) and the version you'll get is 1.1.

Finally, what happens if this new version contains a bug and you want to roll back the deployment to the previous version? Easy, just run the next command:

oc rollback crimes-1

And the previous version is deployed again, so after a few seconds you can go to /version again and you'll see version 1.0.

Finally, if you want to delete the application to leave a clean cluster, run:

oc delete all --all

So as you can see, it is really easy to deploy container images from Docker Hub to OpenShift. Notice that there are other ways to deploy your application to OpenShift (https://docs.openshift.com/enterprise/3.2/dev_guide/builds.html); in this post I have just shown you one of them.

Commands: https://gist.github.com/lordofthejars/9fb5f08e47775a185a9b1f80f4af7aff

We keep learning,
Alex.
Yo listen up here's a story, About a little guy that lives in a blue world, And all day and all night and, everything he sees is just blue, Like him inside and outside (Blue - Eiffel 65)
Music: https://www.youtube.com/watch?v=68ugkg9RePc


Friday, May 19, 2017

Running Parallel Tests in Docker



Sometimes, when you are running your tests in your CI environment, you want to run them in parallel. This parallelism is configured in the build tool, such as Maven or Gradle, or by using a Jenkins plugin.

If you are using Docker as a testing tool for providing external dependencies to the application (for example databases, mail servers, FTP servers, ...), you might hit a big problem: there is probably only one Docker host, and when running tests in parallel all of them are going to try to start a container with the same name. So when the second test starts (in parallel), you will get a failure about a container name conflict, because two containers with the same name, or with the same binding port, cannot run at the same time on the same Docker host.

At this point you can do one of two things:
  • You can have one Docker Host for each parallel test.
  • You can reuse the same Docker Host and use Arquillian Cube Star Operator.

Arquillian Cube is an Arquillian extension that can be used to manage Docker containers in your tests.

To use Arquillian Cube you need a Docker daemon running on a computer (it can be local or not), but it will probably be local.

Arquillian Cube offers three different ways to define container(s):

  • Defining a docker-compose file.
  • Defining a Container Object.
  • Using Container Object DSL.
In this example I am going to show you how to use docker-compose and Container Object DSL.

The star operator lets you tell Arquillian Cube that you want cube names to be generated randomly, and it can adapt links as well. In this way, when you execute your tests in parallel there will be no conflicts because of names or binding ports.

Let’s see an example:
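The docker-compose.yml file could be something as minimal as the next sketch (the image tag and exposed port are illustrative; the real file is in the linked source code):

redis*:
  image: redis:3.2.6
  ports:
    - "6379"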


You can see an important change in the docker-compose.yml file with respect to a typical docker-compose file: the service name ends with the star (*) operator [redis*]. This is how you instruct Arquillian Cube that this name should be generated dynamically for each execution.

Then there are three tests (only the first one is shown here) that all look the same. Basically each one prints to the console the binding port to use to connect to the server.
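A sketch of what such a test can look like, assuming Arquillian Cube's @HostIp and @HostPort enrichers resolve the starred cube by its base name (class and field names are illustrative):

import org.arquillian.cube.HostIp;
import org.arquillian.cube.HostPort;
import org.jboss.arquillian.junit.Arquillian;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class RedisParallelTest {

    // IP of the Docker host where the container is running
    @HostIp
    String dockerHostIp;

    // Arquillian Cube resolves the dynamically generated cube name
    // (redis plus a UUID) and injects the port bound to container port 6379
    @HostPort(containerName = "redis", value = 6379)
    int redisPort;

    @Test
    public void should_print_the_binding_port() {
        System.out.println("Redis reachable at " + dockerHostIp + ":" + redisPort);
    }
}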

Finally there is the build.gradle file, which executes two tests in parallel, as shown in the next fragment.
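This is not the whole build script, just the piece that enables the parallelism (using Gradle's standard test task):

test {
    // run up to two test classes concurrently, each in its own JVM
    maxParallelForks = 2
}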


So if you run the tests with Gradle (./gradlew test) you'll see that two tests are executed at the same time, and when one of them finishes, the remaining test is executed. If you then inspect the output, you'll see that the container name is neither redis nor redis*, but redis followed by a UUID, and that the binding port printed is different in each case.

But if you don't want to use the docker-compose approach, you can also define the container programmatically by using the Container Object DSL, which also supports the star operator. In this case the example looks like the next one:
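A sketch of that version, assuming the ContainerDslRule introduced in Arquillian Cube 1.4.0 (the exact fluent methods may differ slightly):

// the trailing * asks Arquillian Cube to randomize the cube name on each execution
@ClassRule
public static ContainerDslRule redis = new ContainerDslRule("redis:3.2.6", "redis*")
    .withPortBinding(6379);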

The approach is the same, but using Container Objects (you need Arquillian Cube 1.4.0 to run it with Container Objects).

Notice that thanks to this feature you can run the tests with any degree of parallelism, since Arquillian Cube takes care of the naming and port binding issues. Also notice that in the case of links between containers, you still need to use the star operator in the link, and it will be resolved at runtime.

To read more about the star operator, just check http://arquillian.org/arquillian-cube/#_parallel_execution

Source code: https://github.com/lordofthejars/parallel-docker

We keep learning,
Alex.
I can show you the world, Shining, shimmering, splendid, Tell me, princess, now when did, You last let your heart decide? (A Whole New World - Aladdin)
Music: https://www.youtube.com/watch?v=sVxUUotm1P4


Friday, May 12, 2017

Testing Spring Data + Spring Boot applications with Arquillian (Part 2)


In the previous post, I wrote about how to test Spring Data applications using Docker with Arquillian Cube. The test looked like:
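As a reminder, it was something along these lines (a condensed reconstruction from the description below, not the verbatim code; I am sketching the container with Cube's ContainerDslRule for brevity, endpoint paths are illustrative, and the real test lives in the linked project):

import org.arquillian.cube.docker.junit.rule.ContainerDslRule;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.test.context.junit4.SpringRunner;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class PingPongControllerTest {

    // starts the Redis container before the test class and stops it afterwards
    @ClassRule
    public static ContainerDslRule redis = new ContainerDslRule("redis:3.2.6")
        .withPortBinding(6379);

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    public void should_return_previously_inserted_pings() {
        // test preparation: data is inserted through the REST API itself
        restTemplate.postForObject("/ping/pong", null, String.class);

        // logic under test: the GET endpoint
        String response = restTemplate.getForObject("/ping", String.class);
        assertThat(response).contains("pong");
    }
}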


This test just starts a Redis container, then populates data using restTemplate and the POST method, then executes the logic under test (the GET HTTP method) and finally stops the Redis container.

It is good and it works, but there are several problems there:
  • The first one is that we are using the REST API to prepare the data set of the test. The problem here is that the test might fail not because of a failure in the code under test but because of the preparation of the test (the insertion of data).
  • The second one is that if the POST endpoint changes its format/location, then you need to remember to change it everywhere in the tests where it is used.
  • The last one is that each test should leave the environment as it found it before execution, so the test is isolated from all other executions. The problem is that, with this approach, you need to delete the elements previously inserted by POST. This means adding a DELETE HTTP method, which might not always be implemented in the endpoint, or might be restricted to some concrete users, so you would need to deal with special authentication concerns.
To avoid these problems, the Arquillian Persistence Extension (aka APE) was created. This extension integrates with DBUnit and Flyway for SQL databases, NoSQLUnit for NoSQL databases, and Postman collections for REST services, so you can populate your backend before testing the real use case and clean the persistence storage after the test is executed.

Also, the population data is stored inside a file, which means it can be reused in all tests and easily changed in case of any schema update.

Let's see the example of Part 1 of this post, but updated to use APE.
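Updated, the test could look like the next sketch. Hedging here: the NoSqlPopulator fluent API is written from memory of the APE documentation of that time, so take the exact method names with a grain of salt and check the linked project for the real code (APE and Cube imports are omitted for the same reason):

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class PingPongControllerTest {

    @ClassRule
    public static ContainerDslRule redis = new ContainerDslRule("redis:3.2.6")
        .withPortBinding(6379);

    // enables APE enrichment of the test
    @Rule
    public ArquillianPersistenceRule apeRule = new ArquillianPersistenceRule();

    // Redis flavour of the APE NoSQL populator
    @Redis
    @ArquillianResource
    NoSqlPopulator populator;

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    public void should_return_previously_inserted_pings() {
        // populate Redis directly from pings.json instead of going through POST
        populator.forServer(redis.getIpAddress(), redis.getBindPort(6379))
                 .usingDataSet("pings.json")
                 .execute();

        String response = restTemplate.getForObject("/ping", String.class);
        assertThat(response).contains("pong");
    }

    @After
    public void clean_redis() {
        // leave Redis in a known, clean state for the next test
        populator.forServer(redis.getIpAddress(), redis.getBindPort(6379))
                 .clean();
    }
}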

And the file (pings.json) used for populating the Redis instance with data looks like:
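Something like the next content, following NoSQLUnit's Redis dataset format (the key and value here are illustrative):

{
    "data": [
        {
            "simple": [
                {
                    "key": "ping",
                    "value": "pong"
                }
            ]
        }
    ]
}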


Notice that in this test you have replaced the POST calls with something that directly inserts into the storage. In this way you avoid any failure that might occur in the insertion logic (which is not the part under test). Finally, after each test method, the Redis instance is cleaned, so other tests find Redis clean and in a known state.

Project can be found at https://github.com/arquillian-testing-microservices/pingpongbootredis

We keep learning,
Alex
Y es que no puedo estar así, Las manecillas del reloj, Son el demonio que me tiene hablando solo (Tocado y Hundido - Melendi)
Music: https://www.youtube.com/watch?v=1JwAr4ZxdMk



Tuesday, May 02, 2017

Testing Dockerized SQL Databases


One of the big advantages of using Docker for testing is that you don't need to install the dependencies required by the code under test on every machine where the tests are going to run. This is really helpful for external services such as database servers, mail services, JMS queues, ... Another big advantage of this approach is that the tests use the same service versions that are used in production.

So for persistence tests, using Docker is a really good approach to follow. But, as usual, this approach comes with some drawbacks.

The first one is that, obviously, you need to have Docker installed on every machine that needs to run the tests; not a big problem, but something to take into consideration, as well as the Docker-inside-Docker problem.

The second one is that you need to somehow automate the starting and stopping of the container.

The third one is that Docker containers are ephemeral. This means that when you start the container, in this case a container with a SQL server, you need to migrate the database schema to it.

The fourth one, and this is not only related to Docker, is that you need to keep each test method execution isolated from the others, by providing known data before the execution and cleaning it up afterwards, so the next test finds the environment clean.

The first and second problems are fixed by Arquillian Cube (http://arquillian.org/arquillian-cube/). It manages the lifecycle of containers by starting and stopping them automatically before and after test class execution. It also detects when you are running in a DinD (Docker-inside-Docker) situation and configures the started containers accordingly.

Arquillian Cube offers three different ways to define container(s):

  • Defining a docker-compose file.
  • Defining a Container Object.
  • Using Container Object DSL.

For this post, the Container Object DSL approach is the one used. To define a container to be started before the tests are executed and stopped after them, you only need to write the next piece of code:
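For example (the image and port here match the description below; Redis is used just as a simple service to start):

// starts redis:3.2.6 before the test class runs and stops it afterwards,
// binding container port 6379 to the Docker host
@ClassRule
public static ContainerDslRule redis = new ContainerDslRule("redis:3.2.6")
    .withPortBinding(6379);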


In this case a JUnit rule is used to define which image should be used in the test (redis:3.2.6) and to add the Redis port (6379) as a binding port.

The third problem can be fixed by using Flyway. It is an open-source database migration tool for SQL databases that allows you to automate the creation of database schemas.

Flyway is useful here since you can start the Docker container and then apply all migrations to the empty database using Flyway.
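Programmatically, that migration is just a few lines with the Flyway 4.x style API (the JDBC URL, which should point to the started container, and the dockerHost variable are illustrative):

// org.flywaydb.core.Flyway, using the pre-5.x configuration style
Flyway flyway = new Flyway();
flyway.setDataSource("jdbc:postgresql://" + dockerHost + ":5432/test", "postgres", "postgres");
// applies every migration found under the default db/migration location
flyway.migrate();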

The fourth problem can be fixed by using tools like DBUnit. It puts your database into a known state between test runs by populating it with known data and cleaning it after the test execution.

Arquillian integrates with both of these tools (Flyway and DBUnit), among others, through its extension called Arquillian Persistence Extension (aka APE).

An example of how to use APE with DBUnit is shown in the next snippet:
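Here is a sketch of the JUnit rule flavour. Again, the populator's fluent method names are reconstructed from the APE documentation and may differ slightly; check the linked functional test for the authoritative version (APE imports omitted for the same reason, and jdbcUri is an illustrative variable):

@Rule
public ArquillianPersistenceRule apeRule = new ArquillianPersistenceRule();

// DBUnit flavour of the APE SQL populator
@DbUnit
@ArquillianResource
RdbmsPopulator populator;

@Test
public void should_find_data_populated_by_dbunit() {
    populator.forUri(jdbcUri)
             .withDriver(org.postgresql.Driver.class)
             .withUsername("postgres")
             .withPassword("postgres")
             .usingDataSet("books.yml")
             .execute();

    // ... assertions against the database go here ...
}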

You can use the Arquillian runner, as shown in the dbunit-ftest-example, or a JUnit rule, as shown in the previous snippet. Choosing one or the other depends on your test requirements.

So how does everything fit together in Arquillian, so that you can boot up a Docker container with a SQL database such as PostgreSQL before test class execution, then migrate the SQL schema and populate it with data, execute the test method, then clean the whole database so the next test method finds a clean database, and finally destroy the Docker container after test class execution?

Let's see it in the next example:
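The next sketch puts all the pieces together. The PostgreSQL image tag, environment variables, dataset name and the Cube/APE accessor methods are illustrative (imports omitted; the linked repository contains the exact code):

public class BookStoreTest {

    // boots a PostgreSQL container before the test class and destroys it after
    @ClassRule
    public static ContainerDslRule postgres =
        new ContainerDslRule("postgres:9.6")
            .withEnvironment("POSTGRES_USER", "postgres")
            .withEnvironment("POSTGRES_PASSWORD", "postgres")
            .withPortBinding(5432);

    @Rule
    public ArquillianPersistenceRule apeRule = new ArquillianPersistenceRule();

    @DbUnit
    @ArquillianResource
    RdbmsPopulator populator;

    @BeforeClass
    public static void migrate_schema() {
        // the class rule has already started the container at this point,
        // so Flyway can migrate the empty database
        Flyway flyway = new Flyway();
        flyway.setDataSource(jdbcUrl(), "postgres", "postgres");
        flyway.migrate();
    }

    private static String jdbcUrl() {
        return "jdbc:postgresql://" + postgres.getIpAddress() + ":"
            + postgres.getBindPort(5432) + "/postgres";
    }

    @Test
    public void should_work_against_known_data() {
        // populate known data before exercising the code under test...
        populator.forUri(jdbcUrl())
                 .withDriver(org.postgresql.Driver.class)
                 .withUsername("postgres")
                 .withPassword("postgres")
                 .usingDataSet("books.yml")
                 .execute();

        // ... code under test and assertions go here ...

        // ...and clean the database so the next test starts from scratch
        populator.forUri(jdbcUrl())
                 .withDriver(org.postgresql.Driver.class)
                 .withUsername("postgres")
                 .withPassword("postgres")
                 .usingDataSet("books.yml")
                 .clean();
    }
}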

The test is not that complicated, and it is pretty much self-explanatory about what it is doing at each step. You are creating the Docker container using the Arquillian Cube DSL, and you are also configuring the populators by just using the Arquillian APE DSL.

So thanks to Arquillian Cube and Arquillian APE, you can make your test totally isolated from your runtime: it will always be executed against the same PostgreSQL database version, and each test method execution will be isolated.

You can see full code at https://github.com/arquillian/arquillian-extension-persistence/tree/2.0.0/arquillian-ape-sql/standalone/dbunit-flyway-ftest

We keep learning,
Alex
Ya no me importa nada, Ni el día ni la hora, Si lo he perdido todo, Me has dejado en las sombras (Súbeme la Radio - Enrique Iglesias)
Music: https://www.youtube.com/watch?v=9sg-A-eS6Ig