Monday, September 24, 2012

Testing client side of RESTful services

People tell me A and B, They tell me how I have to see, Things that I have seen already clear, So they push me then from side to side (I Want Out - Helloween)
Developing an application that uses a RESTful web API may imply developing both the server and the client side. Writing integration tests for the server side can be as easy as using Arquillian to start up the server and REST-assured to test that the services work as expected. The problem is how to test the client side. In this post we are going to see how to test the client side without resorting to mocks.

As a brief description, to test the client part, what we need is a local server which can return recorded JSON responses. The rest-client-driver is a library which simulates a RESTful service. You can set expectations on the HTTP requests you want to receive during a test, so it is exactly what we need for our Java client side. Note that this project is really helpful for writing tests when we are developing RESTful web clients that connect to services developed by third parties, like the Flickr REST API, the Jira REST API, the GitHub API ...

The first thing to do is add the rest-client-driver dependency:
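A minimal Maven snippet could look like the following; the version shown is an assumption, so check the latest one available in Maven Central:

```xml
<dependency>
    <groupId>com.github.rest-driver</groupId>
    <artifactId>rest-client-driver</artifactId>
    <version>1.1.27</version>
    <scope>test</scope>
</dependency>
```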

As the next step, we are going to create a very simple Jersey application which simply invokes a GET method on the required URI.
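A sketch of such a client using the Jersey 1.x client API; the class name and constructor are hypothetical, only the invokeGetMethod name comes from the example itself:

```java
import com.sun.jersey.api.client.Client;

public class GitHubClient {

    private final String baseUrl;

    public GitHubClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Invokes a GET on the given path and returns the raw JSON response body.
    public String invokeGetMethod(String path) {
        Client client = Client.create();
        return client.resource(baseUrl + path)
                     .accept("application/json")
                     .get(String.class);
    }
}
```

Note that the base URL is a constructor parameter; this is what lets the test point the client at a local simulated server instead of the real one.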

And now we want to test that invokeGetMethod really gets the required resource. Let's suppose that this method, in production code, will be responsible for getting the names of all issues of a project registered on GitHub.

Now we can start to write the test:
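A sketch of the test, assuming the hypothetical GitHubClient class and an invented repository path; ClientDriverRule, onRequestTo and giveResponse come from the rest-client-driver API:

```java
import static com.github.restdriver.clientdriver.RestClientDriver.giveResponse;
import static com.github.restdriver.clientdriver.RestClientDriver.onRequestTo;
import static org.hamcrest.CoreMatchers.containsString;
import static org.junit.Assert.assertThat;

import org.junit.Rule;
import org.junit.Test;

import com.github.restdriver.clientdriver.ClientDriverRule;

public class GitHubClientTest {

    @Rule
    public ClientDriverRule driver = new ClientDriverRule();

    @Test
    public void issues_should_be_retrieved_from_github() {
        // Record the expectation: a GET to this path returns a canned JSON body.
        driver.addExpectation(
                onRequestTo("/repos/lordofthejars/project/issues"),
                giveResponse("[{\"title\":\"My First Issue\"}]", "application/json"));

        // Point the client at the local simulated server, not the real GitHub.
        GitHubClient client = new GitHubClient(driver.getBaseUrl());
        String issues = client.invokeGetMethod("/repos/lordofthejars/project/issues");

        assertThat(issues, containsString("My First Issue"));
    }
}
```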

  • We use the ClientDriverRule @Rule to add the client driver to a test.
  • Then, using the methods provided by the RestClientDriver class, expectations are recorded.
  • See how the base URL is set using driver.getBaseUrl().
With rest-client-driver we can also stub an HTTP status response using the giveEmptyResponse method:
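For example, to simulate an unauthorized response (the path is hypothetical):

```java
// Any GET to this path will answer with an empty body and a 401 status code.
driver.addExpectation(
        onRequestTo("/repos/unknown/project/issues"),
        giveEmptyResponse().withStatus(401));
```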

And obviously we can record a put action:
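A sketch of a PUT expectation; the path and message body are invented for illustration:

```java
import com.github.restdriver.clientdriver.ClientDriverRequest;

// The expectation only matches a PUT carrying exactly this JSON body;
// when matched, the driver answers with a 204 (No Content) status code.
driver.addExpectation(
        onRequestTo("/repos/lordofthejars/project/issues/1")
                .withMethod(ClientDriverRequest.Method.PUT)
                .withBody("{\"state\":\"closed\"}", "application/json"),
        giveEmptyResponse().withStatus(204));
```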

Note that in this example we specify that our request must contain the given message body in order to respond with a 204 status code.

This is a very simple example, but keep in mind that it also works with serialization libraries like Gson or Jackson. The rest-driver project also comes with a module that can be used to assert on server responses (like the REST-assured project), but this topic will be addressed in another post.

I hope you have found this post useful.

We keep learning,

Wednesday, September 05, 2012

NoSQLUnit 0.4.1 Released

Yo no soy marinero, Yo no soy marinero, soy capitan, Soy capitan, soy capitan, Bamba, bamba (La Bamba - Ritchie Valens)
NoSQLUnit is a JUnit extension that makes writing unit and integration tests for systems that use a NoSQL backend easier. Visit the official page for more information.

In the 0.4.1 release, after Cassandra support in version 0.4.0, one new NoSQL system is supported: Redis.

Redis is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.

As with all databases supported by NoSQLUnit, two sets of rules are provided for writing Redis tests:

The first set of JUnit Rules are those responsible for managing the database lifecycle, basically starting and stopping the Redis instance.

  • Currently Redis does not support an embedded lifecycle. For this reason I am developing an embedded in-memory Redis mock. It is based on the Jedis library and will be released in the next version. Issue #22.
  • Managed: com.lordofthejars.nosqlunit.redis.ManagedRedis
The second set of rules are those responsible for maintaining the database in a known state:
  • NoSQLUnit Management: com.lordofthejars.nosqlunit.redis.RedisRule
And finally, the default dataset file format for Redis is JSON.

We will use a very simple example from the Redis tutorial as an example of how to write unit tests for systems that use a Redis database as a backend.

First of all, the dataset used to maintain Redis in a known state:
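A sketch of what such a dataset could look like; the keys and values are invented, and the exact structure should be checked against the NoSQLUnit documentation:

```json
{
  "data": [
    { "simple": [
        { "key": "title:1", "value": "The Hobbit" }
      ]
    },
    { "list": [
        { "key": "authors",
          "values": [
            { "value": "J.R.R. Tolkien" },
            { "value": "Christopher Tolkien" }
          ]
        }
      ]
    }
  ]
}
```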

And finally, the test case:
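A sketch of what the test case could look like; the class name, Redis path, dataset file name, and builder calls are assumptions based on the NoSQLUnit API, so check the official documentation for the exact signatures:

```java
import static com.lordofthejars.nosqlunit.redis.ManagedRedis.ManagedRedisRuleBuilder.newManagedRedisRule;
import static com.lordofthejars.nosqlunit.redis.RedisRule.RedisRuleBuilder.newRedisRule;

import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;

import com.lordofthejars.nosqlunit.annotation.UsingDataSet;
import com.lordofthejars.nosqlunit.core.LoadStrategyEnum;
import com.lordofthejars.nosqlunit.redis.ManagedRedis;
import com.lordofthejars.nosqlunit.redis.RedisRule;

public class WhenBooksAreSearched {

    // Starts and stops a local Redis instance for the whole test class.
    @ClassRule
    public static ManagedRedis managedRedis = newManagedRedisRule()
            .redisPath("/opt/redis-2.4.16")
            .build();

    // Maintains the database in a known state before each test.
    @Rule
    public RedisRule redisRule = newRedisRule().defaultManagedRedis();

    @Test
    @UsingDataSet(locations = "book.json", loadStrategy = LoadStrategyEnum.CLEAN_INSERT)
    public void book_title_should_be_returned() {
        // query Redis here (for example with Jedis) and assert on the seeded data
    }
}
```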

The next release, 0.4.2, will contain fixes for issues #22, #23, #24 and #25. Moreover, there is an open poll to vote for which engine you would like to see in the 0.5.0 release:

Vote For Next Engine

Stay in touch with the project, and of course I am open to any ideas you think could make NoSQLUnit better.


Tuesday, September 04, 2012

Deploying JEE Artifacts with Jenkins

Un mondo, Soltanto adesso, io ti guardo, Nel tuo silenzio io mi perdo, E sono niente accanto a te (Il Mondo - Jimmy Fontana)
With the advent of Continuous Integration and Continuous Delivery, our builds are split into different steps, creating the deployment pipeline. Some of these steps are, for example, compiling and running fast tests, running slow tests, running automated acceptance tests, or releasing the application, to cite a few.

The final steps of our deployment pipeline imply deploying our product (in the case of a JEE project, a war or ear file) to a production-like environment for UAT, or to the production system when the product is released.

In this post we are going to see how we can configure Jenkins to manage the deployment of a Java Enterprise Application correctly.

The first thing to do is create the application, in this case a very simple Java web application (in fact it is only one jsp which prints a Hello World!! message), and mavenize it so that a war file (bar.war) is created when the package goal is executed.

Then we need to create a Jenkins job (called bar-web) which is responsible for compiling the code and running the unit tests.

After this job would come other jobs like running integration tests, running more tests, static code analysis (aka code quality), or uploading artifacts to the artifact repository, but these won't be shown here.

And finally the last steps, which imply deploying the previously generated artifact to the staging environment (for running User Acceptance Tests, for example) and, after key users give the OK, deploying it to the production environment.

So let's see how to create these final steps in Jenkins. Note that the binary file created in previous steps (bar-web in our case) must be used in all these steps, for two reasons. The first is that your deployment pipeline should run as fast as possible, and recompiling the code at each step is obviously not the way to achieve that. The second is that each time you compile your sources you increase the chance of not deploying exactly the sources built in the previous steps. To achieve this goal we can follow two strategies: the first is uploading binary files to an artifact repository (like Nexus or Artifactory) and fetching them from there in each job; the second is using the Copy Artifact Jenkins plugin to get the binary files generated by the previous step.

Let's see how to configure Jenkins for the first approach.

The artifact repository approach requires that you download the version you want to deploy from the repository and then deploy it to the external environment, in our case a web server. All these steps are done using the Maven Cargo plugin.
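A sketch of a possible Cargo configuration in the pom.xml, assuming a remote Tomcat; the container id, hostname, credentials, and artifact coordinates are placeholders:

```xml
<plugin>
  <groupId>org.codehaus.cargo</groupId>
  <artifactId>cargo-maven2-plugin</artifactId>
  <configuration>
    <container>
      <containerId>tomcat7x</containerId>
      <type>remote</type>
    </container>
    <configuration>
      <type>runtime</type>
      <properties>
        <cargo.hostname>staging.example.com</cargo.hostname>
        <cargo.servlet.port>8080</cargo.servlet.port>
        <cargo.remote.username>deployer</cargo.remote.username>
        <cargo.remote.password>secret</cargo.remote.password>
      </properties>
    </configuration>
    <deployables>
      <deployable>
        <groupId>com.example</groupId>
        <artifactId>bar</artifactId>
        <type>war</type>
      </deployable>
    </deployables>
  </configuration>
</plugin>
```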

Then we only have to create a new Jenkins job, named bar-to-staging, which runs the cargo:redeploy Maven goal, and the Cargo plugin will be responsible for deploying bar-web to the web server.

This approach has one advantage and one disadvantage. The main advantage is that you are not bound to Jenkins; you can use Maven alone, or any other CI server that supports Maven. The main disadvantage is that it relies on the artifact repository, and this poses a new problem: the deployment pipeline involves many steps, and between these steps (normally if you are building a snapshot version) a new artifact with the same version could be uploaded to the repository and used in the middle of a pipeline execution. Of course this scenario can be avoided by managing permissions in the artifact repository.

The other approach is to use a Jenkins plugin called the Copy Artifact plugin. In this case Jenkins acts as an artifact repository, so artifacts created in a previous step are used in the next step without involving any external repository. Using this approach we cannot use the Maven Cargo plugin, but we can use the Deploy plugin in conjunction with the Copy Artifact plugin.

So let's see how to implement this approach.

The first thing is to create a Jenkins build job (bar-web) which creates the war file. Note that two Post-build actions are defined. The first one is Archive the artifacts, which is used to store the generated files so the Copy Artifact plugin can copy them to another workspace. The other one is Build other projects, which in this case calls the job responsible for deploying the war file to the staging environment (bar deploy-to-staging).

The next thing is to create the bar deploy-to-staging build job, whose main action is deploying the war file generated by the previous build job to a Tomcat server.

For this second build job you should configure the Copy Artifact plugin to copy the previously generated files to the current workspace. So in the Build section, under Copy artifacts from another project, we set which build job we want to copy the artifact from (in our case bar-web) and which artifacts we want to copy. Finally, in the Post-build actions section, we must configure which file should be deployed to Tomcat (bar.war), remembering that this file was compiled and packaged by the previous build jobs, and set the Tomcat parameters. The execution pipeline looks something like:

Note that a third build job has been added which deploys war file to production server.

This second approach is the counterpart of the first one: you can be sure that the artifact used in a previous step of the pipeline will be the one used in all steps, but you are bound to Jenkins/Hudson.

So if you are going to create a policy in your artifact repository so that only the pipeline executor can upload artifacts to it, the first approach is better; but if you are not using an external artifact repository (you use Jenkins as is), then the second approach is the best one to ensure that artifacts packaged in previous steps are not modified by parallel executions.

After the file is deployed to the server, acceptance or UAT tests can be executed without any problem.

I hope that we can now address the final steps of our deployment pipeline in a safer and better way.

We keep learning,
