Wednesday, August 22, 2012

NoSQLUnit 0.4.0 Released



All across the nation such a strange vibration, People in motion, There's a whole generation with a new explanation, People in motion people in motion (San Francisco - Scott McKenzie)
NoSQLUnit is a JUnit extension that makes writing unit and integration tests of systems that use a NoSQL backend easier. Visit the official page for more information.

The 0.4.0 release adds support for one new NoSQL system: Cassandra.

Cassandra is a BigTable data model running on an Amazon Dynamo-like infrastructure.

As with all databases supported by NoSQLUnit, two sets of rules are provided for writing Cassandra tests:

The first set of JUnit rules is responsible for managing the database lifecycle, basically starting and stopping the Cassandra instance:
  • Embedded: com.lordofthejars.nosqlunit.cassandra.EmbeddedCassandra
  • Managed: com.lordofthejars.nosqlunit.cassandra.ManagedCassandra
Depending on the kind of test you are implementing (unit test, integration test, deployment test, ...) you will require an embedded, managed or remote approach. Note that for now I have not implemented an in-memory approach because there is no in-memory Cassandra instance, so for unit tests the embedded strategy will be the best one. A sketch of declaring these lifecycle rules is shown below.
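As a minimal sketch (the builder methods and the installation path are my assumptions, modeled on the pattern of other NoSQLUnit modules; check the official documentation for the exact 0.4.0 signatures), the lifecycle rules could be declared like this:

import static com.lordofthejars.nosqlunit.cassandra.EmbeddedCassandra.EmbeddedCassandraRuleBuilder.newEmbeddedCassandraRule;
import static com.lordofthejars.nosqlunit.cassandra.ManagedCassandra.ManagedCassandraRuleBuilder.newManagedCassandraRule;

import org.junit.ClassRule;

import com.lordofthejars.nosqlunit.cassandra.EmbeddedCassandra;
import com.lordofthejars.nosqlunit.cassandra.ManagedCassandra;

public class CassandraLifecycleSketch {

    // Embedded approach: an in-process Cassandra, the best fit for unit tests.
    @ClassRule
    public static EmbeddedCassandra embeddedCassandra = newEmbeddedCassandraRule().build();

    // Managed approach: starts and stops a local installation, for integration tests.
    // In a real test class you would declare one approach or the other, not both,
    // and point the (hypothetical) path at your own Cassandra installation.
    @ClassRule
    public static ManagedCassandra managedCassandra = newManagedCassandraRule()
            .cassandraPath("/opt/cassandra")
            .build();
}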

The second set of rules is responsible for maintaining the database in a known state:
  • NoSQLUnit Management: com.lordofthejars.nosqlunit.cassandra.CassandraRule
Finally, the default dataset file format for Cassandra is JSON. To make NoSQLUnit compatible with the Cassandra-Unit file format, the DataLoader of the Cassandra-Unit project is used, so the same JSON file format applies.

We will use a very simple example from the Cassandra tutorial to show how to write unit tests for systems that use Cassandra as a backend.

First of all, the dataset used to maintain Cassandra in a known state:
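A dataset in the Cassandra-Unit JSON format looks roughly like this (keyspace, column family, keys and values here are illustrative, not the exact tutorial data):

{
    "name": "persons",
    "columnFamilies": [{
        "name": "personFamily",
        "rows": [{
            "key": "john",
            "columns": [
                { "name": "age", "value": "22" },
                { "name": "car", "value": "toyota" }
            ]
        }]
    }]
}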


And finally, the test case:
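A minimal test could look like the following sketch (the class name, the dataset file and the rule builder calls are my assumptions following NoSQLUnit's conventions; the 0.4.0 API may differ slightly):

import static com.lordofthejars.nosqlunit.cassandra.CassandraRule.CassandraRuleBuilder.newCassandraRule;
import static com.lordofthejars.nosqlunit.cassandra.EmbeddedCassandra.EmbeddedCassandraRuleBuilder.newEmbeddedCassandraRule;

import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;

import com.lordofthejars.nosqlunit.annotation.UsingDataSet;
import com.lordofthejars.nosqlunit.cassandra.CassandraRule;
import com.lordofthejars.nosqlunit.cassandra.EmbeddedCassandra;
import com.lordofthejars.nosqlunit.core.LoadStrategyEnum;

public class WhenPersonIsLookedUp {

    // Starts one embedded Cassandra instance for the whole test class.
    @ClassRule
    public static EmbeddedCassandra embeddedCassandra = newEmbeddedCassandraRule().build();

    // Seeds the embedded instance with the dataset before each test.
    @Rule
    public CassandraRule cassandraRule = newCassandraRule().defaultEmbeddedCassandra();

    @Test
    @UsingDataSet(locations = "persons.json", loadStrategy = LoadStrategyEnum.CLEAN_INSERT)
    public void person_data_should_be_found() {
        // Query the seeded keyspace with your client code (Hector, for example)
        // and assert on the returned columns.
    }
}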


The next release, 0.4.1, will contain some internal changes but no support for a new engine; after these changes, new engines will be supported. Moreover, I have decided to open a poll so you can vote for which engine you would like to see in the 0.4.2 release:

Visit Poll Here

Stay in touch with the project, and of course I am open to any ideas you think could make NoSQLUnit better.

Music: http://www.youtube.com/watch?v=bch1_Ep5M1s

Tuesday, August 07, 2012

NoSQLUnit 0.3.2 Released


I don't ask you for the moon, I only want to love you. I want to be that madness that vibrates deep inside you (Yo No Te Pido La Luna - Sergio Dalma)

Update: version 0.3.3 has been released, providing In-Memory Neo4j lifecycle support.

NoSQLUnit is a JUnit extension that makes writing unit and integration tests of systems that use a NoSQL backend easier. Visit the official page for more information.

The 0.3.2 release adds support for one new NoSQL system: Neo4j.

Neo4j is a high-performance, NoSQL graph database with all the features of a mature and robust database.

As with all databases supported by NoSQLUnit, two sets of rules are provided for writing Neo4j tests:

The first set of JUnit rules is responsible for managing the database lifecycle, basically starting and stopping Neo4j:
  • Embedded: com.lordofthejars.nosqlunit.neo4j.EmbeddedNeo4j
  • Managed Wrapping: com.lordofthejars.nosqlunit.neo4j.ManagedWrappingNeoServer
  • Managed: com.lordofthejars.nosqlunit.neo4j.ManagedNeoServer
Depending on the kind of test you are implementing (unit test, integration test, deployment test, ...) you will require an embedded, managed or remote approach. Note that I have not implemented support for a Neo4j in-memory database at this time (0.3.3 will), but for unit tests the embedded strategy will be the best one. As Neo4j developers will know, you can start a remote Neo4j server by calling its start/stop scripts (the Managed way) or by wrapping an embedded database with a server (the Managed Wrapping way); both of them are supported by NoSQLUnit, as sketched below.
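As a minimal sketch (the builder method is my assumption, mirroring the other NoSQLUnit modules; check the documentation for the exact signatures), the embedded lifecycle rule could be declared like this:

import static com.lordofthejars.nosqlunit.neo4j.EmbeddedNeo4j.EmbeddedNeo4jRuleBuilder.newEmbeddedNeo4jRule;

import org.junit.ClassRule;

import com.lordofthejars.nosqlunit.neo4j.EmbeddedNeo4j;

public class Neo4jLifecycleSketch {

    // Embedded approach: an in-process Neo4j, the best fit for unit tests.
    // ManagedNeoServer and ManagedWrappingNeoServer are declared the same way
    // for the managed and managed-wrapping approaches.
    @ClassRule
    public static EmbeddedNeo4j embeddedNeo4j = newEmbeddedNeo4jRule().build();
}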

The second set of rules is responsible for maintaining the database in a known state:
  • NoSQLUnit Management: com.lordofthejars.nosqlunit.neo4j.Neo4jRule
Finally, the default dataset file format for Neo4j is GraphML, a comprehensive and easy-to-use file format for graphs.


We will use the example of finding Neo's friends to show how to write unit tests for systems that use a Neo4j database as a backend.

First of all, the dataset used to maintain Neo4j in a known state:
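A GraphML dataset for the Neo's-friends graph could look roughly like this (node and edge ids are illustrative; the label attribute carries the relationship type):

<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
    <key id="name" for="node" attr.name="name" attr.type="string"/>
    <graph id="matrix" edgedefault="directed">
        <node id="1">
            <data key="name">Neo</data>
        </node>
        <node id="2">
            <data key="name">Morpheus</data>
        </node>
        <node id="3">
            <data key="name">Trinity</data>
        </node>
        <edge id="10" source="1" target="2" label="KNOWS"/>
        <edge id="11" source="1" target="3" label="KNOWS"/>
    </graph>
</graphml>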


And finally, the test case:
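A minimal test could look like this sketch (the class name, the dataset file and the rule builder calls are my assumptions following NoSQLUnit's conventions):

import static com.lordofthejars.nosqlunit.neo4j.EmbeddedNeo4j.EmbeddedNeo4jRuleBuilder.newEmbeddedNeo4jRule;
import static com.lordofthejars.nosqlunit.neo4j.Neo4jRule.Neo4jRuleBuilder.newNeo4jRule;

import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;

import com.lordofthejars.nosqlunit.annotation.UsingDataSet;
import com.lordofthejars.nosqlunit.core.LoadStrategyEnum;
import com.lordofthejars.nosqlunit.neo4j.EmbeddedNeo4j;
import com.lordofthejars.nosqlunit.neo4j.Neo4jRule;

public class WhenNeoFriendsAreRequired {

    // Starts one embedded Neo4j instance for the whole test class.
    @ClassRule
    public static EmbeddedNeo4j embeddedNeo4j = newEmbeddedNeo4jRule().build();

    // Seeds the embedded instance with the GraphML dataset before each test.
    @Rule
    public Neo4jRule neo4jRule = newNeo4jRule().defaultEmbeddedNeo4j();

    @Test
    @UsingDataSet(locations = "matrix.graphml", loadStrategy = LoadStrategyEnum.CLEAN_INSERT)
    public void all_direct_friends_of_neo_should_be_returned() {
        // Traverse the seeded graph (for example through GraphDatabaseService)
        // and assert that Morpheus and Trinity are returned.
    }
}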

The next release will support Cassandra. Stay in touch with the project, and of course I am open to any ideas you think could make NoSQLUnit better.

Monday, August 06, 2012

JaCoCo Jenkins Plugin


Last night, two strangers, and this morning on the avenue, two lovers, dazed by the long night (Aux Champs Elysées - Joe Dassin)
In my post about JaCoCo I wrote about the problem of using the JaCoCo Maven plugin in a multimodule Maven project, where you get a separate report for each module instead of one report for all modules, and how it can be fixed using the JaCoCo Ant Task.

In the current post we are going to see how to use the JaCoCo Jenkins plugin to achieve the same goal as the Ant Task: an overall code coverage statistic for all modules.

The first step is installing the JaCoCo Jenkins plugin.

Go to Jenkins -> Manage Jenkins -> Plugin Manager -> Available and search for JaCoCo Plugin.

The next step, if it has not been done before, is configuring the JaCoCo Maven plugin in the parent pom:
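A typical configuration looks like the following sketch (the version shown was current at the time of writing; adjust it to your needs). The report goal is bound to the prepare-package phase so that the XML reports exist just before package runs:

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.5.7.201204190339</version>
  <executions>
    <!-- Attaches the JaCoCo agent so test executions are instrumented. -->
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <!-- Generates the XML/HTML reports just before the package goal runs. -->
    <execution>
      <id>report</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>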


And finally a post-build action must be configured on the job responsible for packaging the application. Note that with the previous pom file, reports are generated just before the package goal is executed.

Go to Configure -> Post-build Actions -> Add post-build action -> Record JaCoCo coverage report.

Then we have to set the folders or files containing the JaCoCo XML reports, which using the previous pom is **/target/site/jacoco/jacoco*.xml, and also set when we consider a build healthy in terms of coverage.


Then we can save the job configuration and build the project.

After the project is built, a new report called code coverage trend will appear just under the test result trend graph, where we can see the code coverage of all project modules.


From the left menu, you can enter Coverage Report and see the code coverage of each module separately.


Furthermore, on the Jenkins main page a nice quick overview of a job is shown when the mouse hovers over the weather icon:


Keep in mind that this approach to merging code coverage files will only work if you are using Jenkins as your CI system, while the Ant Task is a more generic solution and can also be used together with the JaCoCo Jenkins plugin.

We Keep Learning,
Alex.

Music: http://www.youtube.com/watch?v=OAMuNfs89yE



Wednesday, August 01, 2012

Build Flow Jenkins Plugin


This samba, which is a mix of maracatu, is the samba of the old black man, the samba of a black man like you (Mais que Nada - Sergio Mendes)
With the advent of Continuous Integration and Continuous Delivery, our builds are split into different steps creating the deployment pipeline. Some of these steps are, for example, compiling and running fast tests, running slow tests, running automated acceptance tests, or releasing the application, to cite a few.

Most of us are using Jenkins/Hudson to implement Continuous Integration/Delivery, and we manage job orchestration by combining Jenkins plugins like build pipeline, parameterized-build, join or downstream-ext. We have to configure all of them, which pollutes the configuration of multiple jobs and makes the system configuration very complex to maintain.

Build Flow enables us to define an upper-level flow item to manage job orchestration and link-up rules, using a dedicated DSL.

Let's see a very simple example:

The first step is installing the plugin.

Go to Jenkins -> Manage Jenkins -> Plugin Manager -> Available and search for CloudBees Build Flow plugin.


Then you can go to Jenkins -> New Job and you will see a new kind of job called Build Flow. In this example we are going to name it build-all-yy.


And now you only have to program, using the flow DSL, how this job should orchestrate the other jobs.

In "Define build flow using flow DSL" input text you can specify the sequence of commands to execute.


In the current example I have already created two jobs, one executing the clean compile goals (named yy-compile) and the other one executing the javadoc goal (named yy-javadoc). I know this deployment pipeline is not realistic in a true environment, but for now it is enough.

Then we want the javadoc job to run after the project is compiled.

To configure this we don't have to create any upstream or downstream actions; simply add the next lines to the DSL text area:

build("yy-compile");
build("yy-javadoc");

Save and execute the build-all-yy job, and both projects will be built sequentially.

Now suppose we add a third job called yy-sonar, which runs the sonar goal to generate a Sonar code quality report. In this case it seems obvious that after the project is compiled, the javadoc generation and code quality jobs can run in parallel. So the script is changed to:

build("yy-compile")
parallel (
    {build("yy-javadoc")},
    {build("yy-sonar")}
)


This plugin also supports more operations like retry (similar behaviour to the retry-failed-job plugin) or guard-rescue, which works mostly like a try+finally block; a sketch of both is shown after the next example. You can also create parameterized builds, access the build execution, or print to the Jenkins console. The next example will print the build number of the yy-compile job execution:

b = build("yy-compile")
out.println b.build.number
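
And a quick sketch of the retry and guard-rescue operations mentioned above, following the syntax in the plugin documentation (the yy-cleanup job is a hypothetical example):

// Retry the compile job up to three times before failing the flow.
retry ( 3 ) {
    build("yy-compile")
}

// guard+rescue works mostly like try+finally: the rescue block runs
// whatever happens to the guarded block.
guard {
    build("yy-compile")
} rescue {
    build("yy-cleanup")
}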


And finally you can also get a quick graphical overview of the execution in the Status section. It is true that it could be improved, but for now it is acceptable and can be used without any problem.


The Build Flow plugin is in its early stages; in fact it is only at version 0.4. But it will be a plugin to consider in the future, and I think it is good to know that it exists. Moreover, it is being developed by the CloudBees folks, which is a guarantee of it being fully supported by Jenkins.

We Keep Learning.
Alex.


Warning: in order to run parallel tasks with the plugin, Anonymous users must have Read Job access (Jenkins -> Manage Jenkins -> Configure System). There is already an issue filed in Jira to fix this problem.