
Thursday, October 13, 2016

Build Docker Images with Maven and Gradle



One of the things you might want to do if you are using Docker and Java is build the image from a Dockerfile with your build tool (Maven or Gradle). In this post I am going to show you how to do it in both cases.

I am going to assume that you use the de-facto project layout, with the Dockerfile at the root of the project.

Maven

There are several Maven plugins that can be used for building a Docker image, but one of the most widely used is the fabric8 docker-maven-plugin.

To start you need to register and configure the plugin in pom.xml:
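A minimal registration could look like the following sketch; the plugin version and the image name are assumptions, so adjust them to your project:

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <!-- version is an assumption; use the latest release -->
  <version>0.21.0</version>
  <configuration>
    <images>
      <image>
        <!-- hypothetical image name -->
        <name>lordofthejars/example:${project.version}</name>
        <build>
          <!-- Dockerfile sits at the root of the project -->
          <dockerFileDir>${project.basedir}</dockerFileDir>
        </build>
      </image>
    </images>
  </configuration>
</plugin>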

In the configuration section you set the image name and the directory where the Dockerfile is located.

Any additional files located in the dockerFileDir directory will also be added to the build context. Since the Dockerfile is at the root of the project, the target directory is added too. The problem is that this plugin uses target/docker to prepare the build, so if you try to build the image you'll get the following exception: tar file cannot include itself. To avoid this problem you need to create a .maven-dockerignore file, at the same level as the Dockerfile, specifying which directory must be ignored (in this case, the target directory):

And that's all; after that you can run:

mvn package docker:build

Notice that this plugin honors Docker environment variables like DOCKER_HOST, DOCKER_CERT_PATH, ..., so if your environment is correctly configured you don't need to do anything else.

Gradle

There are several Gradle plugins that can be used for building a Docker image, but one of the most widely used is the gradle-docker-plugin.

To start you need to register and configure the plugin in build.gradle:


In the case of Gradle, you need to configure the Docker host properties yourself, since the plugin does not honor the Docker environment variables. You configure them in the docker {} block.

Finally you create a task of type DockerBuildImage, where you set the Dockerfile root directory using the inputDir attribute and the image name using the tag attribute.

Conclusions

So in this post you've seen how to do the same thing, building a Docker image from a Dockerfile, in two different build tools. Notice that these plugins also allow you to define the Dockerfile content as a configuration field, so instead of creating a Dockerfile you specify its content inside the build tool itself. You can read more about this feature at https://dmp.fabric8.io/ in the case of the Maven plugin and at https://github.com/bmuschko/gradle-docker-plugin#creating-a-dockerfile-and-building-an-image in the case of Gradle.

We keep learning,
Alex.

Bees'll buzz, kids'll blow dandelion fuzz, And I'll be doing whatever snow does in summer., A drink in my hand, my snow up against the burning sand, Prob'ly getting gorgeously tanned in summer. (In Summer - Frozen)


Monday, August 29, 2016

Configuring Maven Release Plugin to Skip Tests


If you are using Maven and the Maven Release Plugin, you might want to skip the execution of tests during the release plugin execution. The reasons can be very different, and usually depend on the nature of the project or on how the CI pipeline is implemented.

Notice that this can be a real improvement in release time, since performing a release with the Maven Release Plugin implies executing the same tests twice: once in the prepare step and again in the perform step.

To avoid executing tests in the prepare step you need to run:

mvn -DpreparationGoals=clean release:prepare

If you want to avoid executing tests during the perform step you need to run:

mvn -Darguments="-Dmaven.test.skip=true" release:perform

It is important to note that I am not saying you don't need to execute tests during the release process. What I am saying is that sometimes your release process doesn't fit the standard release process of the plugin and, for example, you are already running the tests before executing the plugin.

We keep learning,
Alex.
Say it ain't so, I will not go, Turn the lights off, carry me home, Keep your head still, I'll be your thrill, The night will go on, my little windmill (All The Small Things - Blink-182)

Monday, February 25, 2013

Code Quality stage using Jenkins


In Continuous Delivery each build is potentially shippable. This implies, among many other things, assigning a non-snapshot version to your components as soon as possible so you can refer to them throughout the whole process.

Usually an automated software delivery process consists of several stages like Commit stage, Code Quality, Acceptance Tests, Manual Test, Deployment, ... But let's focus on the second stage, related to code quality. Note that my previous post (http://www.lordofthejars.com/2013/02/conditional-buildstep-jenkins-plugin.html) covers some concepts that are used here.

The second stage in continuous delivery is code quality. This step is very important because it is where we run static code analysis to detect possible defects (mostly possible NPEs), check code conventions or spot unnecessary object creation. Some of the tools typically used are Checkstyle, PMD or FindBugs, among others. In this case we are going to see how to use Checkstyle, but of course it is very similar with any other tool.

So the first thing to do is configure Checkstyle in our build tool (in this case Maven). Because we only want to run the static analysis in the second stage of our pipeline, we are going to register the Checkstyle Maven plugin inside a metrics profile. Keep in mind that all plugins run for code analysis should be added to that profile.
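A minimal sketch of such a profile could be the following (the plugin version is an assumption):

<profiles>
  <profile>
    <id>metrics</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-checkstyle-plugin</artifactId>
          <version>2.9.1</version>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>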


Now that we have our pom configured with Checkstyle, we can configure Jenkins to run the Code Quality stage after the first stage (explained in my previous post).

In this case we are going to use the Parameterized Trigger plugin to execute the code quality job from the commit stage.

Because the code of the current build version has been pushed into a release branch (see my previous post) during the commit stage, we need to pass the branch name as a parameter to the code quality Jenkins job, so the code can be downloaded and the static analysis run against it.

In the build job of our first stage, we add a Post-build Action of type Trigger parameterized build on other projects. First we open the Configure menu of the first build job of the pipeline and configure it so that the next build job of the pipeline (helloworld-code-quality) is executed only if the current job is stable. We also define the RELEASE_BRANCH_NAME parameter with the branch name.



Then let's create a new build job that will be in charge of running the static code analysis; we are going to name it helloworld-code-quality.

And we configure the new build job. First of all check the option "This build is parameterized", add a String parameter and set its name to RELEASE_BRANCH_NAME. After that we can use the RELEASE_BRANCH_NAME parameter in the current job, so in the Source Code Management section we add the repository URL and in Branches to build we set origin/${RELEASE_BRANCH_NAME}.

Then in the Build section we add a Maven build step which executes the Checkstyle goal: checkstyle:checkstyle -P metrics.

And finally, to have better visibility of the result, we can install the Checkstyle Jenkins plugin and publish the report. After the plugin is installed, we can add a new Post-build Action named "Publish Checkstyle analysis result". In our case the report is located at **/target/checkstyle-result.xml.



And that's all for the current stage. The next stage is responsible for executing the acceptance tests, but that will be covered in another post.

So in summary, we have seen how, after the code is compiled and some tests are executed (in the first stage of the pipeline), the Code Quality stage is run in Jenkins using the Checkstyle Maven plugin.

We keep learning,
Alex
En algun lugar de un gran pais, Olvidaron construir, Un hogar donde no queme el sol, Y al nacer no haya que morir… (En Algún Lugar - Dunncan Dhu)
Music: http://www.youtube.com/watch?v=Myn7ghLQltI

Tuesday, July 17, 2012

JaCoCo in Maven Multi-Module Projects



Can you blow my whistle baby, whistle baby, Let me know Girl I'm gonna show you how to do it And we start real slow You just put your lips together. (Whistle - Flo Rida)
Code coverage is an important measure used during our development that describes the degree to which source code is tested.

In this post I am going to explain how to run code coverage using Maven and the JaCoCo plugin in multi-module projects.

JaCoCo is a code coverage library for Java created by the EclEmma team. It has a plugin for Eclipse, and it can be run with Ant and Maven too.

For now we will focus only on the Maven approach.

In a project with only one module it is as easy as registering a build plugin:
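A sketch of the registration could look like this; the version is an assumption, and the report goal is bound here to the prepare-package phase so that mvn package generates the report:

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.5.7.201204190339</version>
  <executions>
    <!-- attaches the JaCoCo agent so tests record execution data -->
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <!-- generates the report in target/site/jacoco -->
    <execution>
      <id>report</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>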


And now, running mvn package, a coverage report will be generated in different formats in the site/jacoco directory.



But with multi-module projects a new problem arises: how to merge the metrics of all subprojects into a single file, so we can have a quick overview of the whole project? For now the Maven JaCoCo plugin does not support this.

There are many alternatives; I am going to cite the most common ones:

  • Sonar. It has the disadvantage that you need to install Sonar (maybe you are already using it, but maybe not).
  • Jenkins. The plugin for JaCoCo is still under development. Moreover, you need to run a build job to inspect your coverage. This is good in terms of continuous integration, but it could be a problem if you are trying to "catch" some piece of code that is not covered by the already implemented tests.
  • Arquillian JaCoCo Extension. Arquillian is a container test framework that has an extension which can capture coverage during test execution. Also a good option if you are already using Arquillian. The disadvantage is that your project might not require a container.
  • Ant. You can use Ant tasks from Maven. The JaCoCo Ant task can merge results from multiple JaCoCo result files. Note that this is the most generic solution, and it is the approach we are going to use.
The first thing to do is add the JaCoCo plugin to the parent pom so all modules can generate a coverage report. Of course, if there are modules which do not require coverage, the plugin definition should be moved from the parent pom to the specific projects.


The next step is creating a specific submodule for appending all the results of the JaCoCo plugin using the Ant task. I suggest naming it something like project-name-coverage.

Then let's open the generated pom.xml and insert the plugins required to join all the coverage information. To append it, as already mentioned, we are going to use a JaCoCo Ant task which is able to open all the JaCoCo output files and append their content into a single one. So the first thing to do is download the jar which contains the JaCoCo Ant task. To automate the download, we are going to use the Maven dependency plugin:
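A sketch of that configuration; the version and the output directory name are assumptions, and stripVersion is used so the jar can be referenced later without its version suffix:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-jacoco-ant-task</id>
      <phase>process-test-resources</phase>
      <goals>
        <goal>copy</goal>
      </goals>
      <configuration>
        <stripVersion>true</stripVersion>
        <outputDirectory>${project.build.directory}/jacoco-jars</outputDirectory>
        <artifactItems>
          <artifactItem>
            <groupId>org.jacoco</groupId>
            <artifactId>org.jacoco.ant</artifactId>
            <version>0.5.7.201204190339</version>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>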

During the process-test-resources phase the JaCoCo Ant artifact will be downloaded and copied to the target directory, so it can be referenced from the pom without worrying about the jar location.

We also need a way to run Ant tasks from Maven. This is as simple as using the maven-antrun-plugin, in whose configuration section you can specify any Ant command. See the next simple example:
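For instance, a simple execution that just echoes a message; the phase chosen here is an assumption:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>post-integration-test</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <!-- any Ant task can go here -->
          <echo message="Hello from Ant"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>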


Notice that inside the target tag we can specify any Ant task. And now we are ready to start configuring the JaCoCo Ant task. The JaCoCo report task requires the location of the build directory, class directory, source directory and generated-source directory; for this purpose we are going to set them as properties.
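As an illustration, assuming a submodule called my-project-app (the module name and paths are hypothetical):

<properties>
  <app.build.directory>${basedir}/../my-project-app/target</app.build.directory>
  <app.classes.directory>${basedir}/../my-project-app/target/classes</app.classes.directory>
  <app.sources.directory>${basedir}/../my-project-app/src/main/java</app.sources.directory>
</properties>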

And now the Ant task part, which goes inside the target tag of the antrun plugin.

First we need to define the report task.
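A sketch of the taskdef, assuming the jar was copied by the dependency plugin configuration shown above:

<taskdef name="report" classname="org.jacoco.ant.ReportTask">
  <classpath path="${project.build.directory}/jacoco-jars/org.jacoco.ant.jar"/>
</taskdef>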

See that the org.jacoco.ant.jar file is downloaded by the dependency plugin, so you don't need to worry about copying it manually.

Then we call the report task defined in the taskdef section.
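A sketch of the invocation for the hypothetical my-project-app module; add one executiondata fileset and extend the group with the class and source files of each real submodule:

<report>
  <executiondata>
    <!-- one fileset per submodule that produced a jacoco.exec file -->
    <fileset dir="${app.build.directory}">
      <include name="jacoco.exec"/>
    </fileset>
  </executiondata>
  <structure name="my-project">
    <group name="my-project-app">
      <classfiles>
        <fileset dir="${app.classes.directory}"/>
      </classfiles>
      <sourcefiles>
        <fileset dir="${app.sources.directory}"/>
      </sourcefiles>
    </group>
  </structure>
  <html destdir="${project.build.directory}/coverage-report/html"/>
  <xml destfile="${project.build.directory}/coverage-report/coverage-report.xml"/>
  <csv destfile="${project.build.directory}/coverage-report/coverage-report.csv"/>
</report>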


Within the executiondata element, we specify the locations where the JaCoCo execution data files are stored. By default this is the target directory of each module, and we need to add one entry for each submodule.

The next element is structure. This element defines the report structure, and it can be defined with a hierarchy of group elements. Each group should contain the class files and source files of all the projects that belong to that group. In our example only one group is used.

And finally we set the output formats using the html, xml and csv tags.

Complete code: the full configuration is simply the taskdef and the report invocation above, placed inside the target element of the maven-antrun-plugin execution.


And now simply run mvn clean verify, and a report with the code coverage of all projects will be generated in my-project-coverage/target/coverage-report.

Hope you find this post useful.

We Keep Learning,
Alex.

Screencast
Download Code
Music: http://www.youtube.com/watch?v=cSnkWzZ7ZAA

Thursday, May 24, 2012

I'm guided by this birthmark on my skin, I'm guided by the beauty of our weapons, First we take Manhattan, then we take Berlin (First We Take Manhattan - Leonard Cohen)




On May 23rd I was in Berlin as a speaker at LinuxTag. I talked about how to test modern Enterprise Java applications using open source tools.

The presentation abstract was:

From ten years ago to the present, Enterprise Java applications have undergone many changes. We have moved from Enterprise applications built with JSP+Servlet and EJB to much more complex applications. Nowadays, with the advent of HTML5 and JavaScript libraries like jQuery, client-side development has changed significantly. With the emergence of web frameworks like Spring MVC or JSF, server-side code has changed quite a bit compared to the days when each web form was mapped to a Servlet. And the persistence layer has changed too, with the Java Persistence standard and new database approaches like data grids, key-value stores or document stores.
Moreover, architectural changes have occurred too: REST web applications have grown in popularity and AJAX is used to create asynchronous web applications. Because the development of Enterprise Java applications has changed over these years, testing frameworks have changed accordingly. The main topic of this talk will be how to test Enterprise Java applications using these new frameworks.
In the first part of this presentation we are going to explore how to test JavaScript written on the client side, how to write unit tests of server-side code, and how to validate the persistence layer. The next part of the presentation will focus on how to write integration tests on the server side and acceptance tests on full Enterprise Java applications (joining client and server side), with an introduction to testing REST web applications. Finally we will show how to integrate all kinds of tests into your continuous integration system and run acceptance tests in a test environment.

The session will combine theory with interactive practice using only open source projects.

I have uploaded the slides to SlideShare so you can take a look (sorry for the red and blue colours):

How to Test Enterprise Java Applications
View more presentations from Alex Soto

Also, if you want, you can download the code that was used in the demo sections.

Javascript Unit Testing with JS Test Driver
NoSQL Unit Testing with NoSQLUnit
Integration Tests with Arquillian
Acceptance Tests with Thucydides

Please let me warn you that NoSQLUnit is an open source project that I am developing and it is at an early stage. In the next months the project will improve by supporting more NoSQL systems like Neo4j, Cassandra or CouchDB and by having an official (non-snapshot) release. If you want, you can follow me on Twitter or subscribe to the NoSQLUnit GitHub repository to receive the latest news about this JUnit extension.

For any question, do not hesitate to write it in the comments section or send me an email.

I would like to say thank you to the LinuxTag folks for treating me so well, and to all the people who came to the presentation: a big thank you to all of you.

Music: http://www.youtube.com/watch?v=JTTC_fD598A&ob=av2e

Thursday, April 05, 2012

Hey! Teachers! Leave them kids alone! All in all it's just another brick in the wall. All in all you're just another brick in the wall. (Another Brick In The Wall - Pink Floyd)


In this post I am going to show you how to configure your application to use slf4j and logback as its logging solution.

The Simple Logging Facade for Java (slf4j) is a simple facade for various logging frameworks, like JDK logging (java.util.logging), log4j, or logback. It even contains a binding that will delegate all logger operations to another well-known logging facade, Jakarta Commons Logging (JCL).

Logback is the successor of the log4j logger API; in fact both projects have the same father, but logback offers some advantages over log4j, like better performance and lower memory consumption, automatic reloading of configuration files, or filter capabilities, to cite a few features.

Logback is the native implementation of slf4j, so using them together implies zero memory and computational overhead.

First we are going to add slf4j and logback to the pom as dependencies:
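A sketch of the three dependencies; the versions shown are assumptions, just an example of versions that were compatible at the time:

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.6.4</version>
</dependency>
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-core</artifactId>
  <version>1.0.1</version>
</dependency>
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.0.1</version>
</dependency>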

Note that three artifacts are required: one for slf4j and two for logback. The last two dependencies will change depending on your logging framework; if, for example, you wanted to keep using log4j, instead of the logback dependencies we would have the log4j dependency itself and slf4j-log4j12.

The next step is creating the configuration file. Logback supports two configuration formats: the traditional one, using XML, and a Groovy DSL style. Let's start with the traditional way, creating a file called logback.xml in the classpath. The file name is mandatory, although logback-test.xml is also valid. In case both files are found in the classpath, the one ending with -test is used.
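A configuration matching the description below could look like this sketch; the pattern is just an example:

<configuration>

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <logger name="com.lordofthejars.foo" level="INFO" additivity="false">
    <appender-ref ref="STDOUT"/>
  </logger>

  <root level="DEBUG">
    <appender-ref ref="STDOUT"/>
  </root>

</configuration>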

In general the file is quite intuitive: we define the appender (the output of the log messages), in this case the console, a pattern, and finally the root logger level (DEBUG) and a different level (INFO) for classes present in the foo package.

Obviously this format is much more readable than the typical log4j.properties. Note the additivity attribute: the appender named STDOUT is attached to two loggers, root and com.lordofthejars.foo. Because the root logger is the ancestor of all loggers, a logging request made by the com.lordofthejars.foo logger would be output twice. To avoid this behavior you can set the additivity attribute to false, and the message will be printed only once.

Now let's create two classes which will use slf4j. The first class, called BarComponent, is created in com.lordofthejars.bar:


Note two big differences from log4j. The first one is that the typical if construction above each log call is no longer required. The other one is the pair of '{}' placeholders: only after evaluating whether to log or not will logback format the message, replacing '{}' with the given value.

The other class, called FooComponent, is created in com.lordofthejars.foo:

And now, calling the foo and bar methods with the previous configuration, the produced output will be:

Notice that the debug lines in the foo method are not shown. This is fine, because we have configured it that way.

The next step is configuring logback again, but instead of the XML approach we are going to use the Groovy DSL approach. Logback gives preference to the Groovy configuration over the XML configuration, so keep this in mind if you are mixing both approaches.

So the first thing to do is add Groovy as a dependency:
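For example (the version is an assumption):

<dependency>
  <groupId>org.codehaus.groovy</groupId>
  <artifactId>groovy</artifactId>
  <version>1.8.6</version>
</dependency>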

And then we are going to create the same configuration as before, but in Groovy format.

You can identify the same parameters of the XML approach, but as Groovy functions.

I hope you have found this post useful; in your next project, if you can, use slf4j in conjunction with logback, and your application will run faster than logging with log4j.

Keep Learning,
Alex.


Tuesday, March 06, 2012

Keep 'em laughing as you go, Just remember that the last laugh is on you, And always look on the bright side of life..., Always look on the right side of life... (Always Look on the Bright Side of Life - Monty Python)




Integration tests are tests in which individual modules are combined and tested as a whole. Moreover, integration tests might use system-dependent values, access external systems like the file system, database or web services, and test multiple aspects of one test case. We can say they are high-level tests.

This differs from unit tests, where only a single component is tested. Unit tests run in isolation, mocking out external components or using an in-memory database in the case of DAO layers. A unit test should be:
  • Repeatable.
  • Consistent.
  • In Memory.
  • Fast.
  • Self-validating.
  • Testing a single concept.

The problem when we are writing tests is how to test rare (or atypical) conditions like "no disk space" in the case of file system access, or "connection lost" when executing a database query.

In unit testing this is not a problem: you can mock up that component (database connection or file system access), making it generate the required output, such as throwing an IOException.

The problem becomes harder with integration tests. It would be strange to mock a component when what you really want to do is validate the real system. At this point I see two possibilities:
  • Creating a partial mock.
  • Using fault injection.
In this post I am going to show you how to use the fault injection approach to test unusual erroneous situations.

Fault injection is a technique which involves changing the application code under test at specific locations. These modifications introduce faults in error-handling code paths which would otherwise rarely be followed.

I am going to talk about how to use fault injection with Byteman in a JUnit test, and how to run it with Maven.

Let's start coding. Imagine you need to write a backup module which shall save a string into a local file, but if the hard disk is full (an IOException is thrown), the content shall be sent to a remote server.

First we are going to code the class that writes content into a file.



The next class would be the one that sends data through a socket, but it will not be shown because it is not necessary for this example.

And finally the backup service, responsible for managing the described behavior.

And now, testing time. First of all, a brief introduction to Byteman.

Byteman is a tool which allows you to insert/modify code in an application at runtime. These modifications can be used to inject code into your compiled application, causing unusual or unexpected operations (aka fault injection).

Byteman uses a clear, simple scripting language, based on a formalism called Event Condition Action (ECA) rules to specify where, when and how the original Java code should be transformed.

An example of ECA script is:

But Byteman also supports annotations, and in my opinion annotations are a better approach than a script file, because just by looking at your test case you can understand what exactly you are testing. Otherwise you have to switch context from the test class to the script file to understand what you are testing.

So let's create an integration test that validates that, when an IOException is thrown while writing content to disk, the data is sent to a server.


See that BMUnitRunner (a special JUnit runner that comes with Byteman) is required.

The first test, called aFileWithContentShouldBeCreated, is a standard test that writes Hello world into the backup file.

But the second one, dataShouldBeSentToServerInCaseOfIOException, has a BMRule annotation which contains when, where and what code should be injected. The first parameter is the name of the rule, in this case a description of what we are going to do (throwing an IOException). The next attributes, targetClass and targetMethod, configure when the injected code should be added, in this case when the FileUtils.createFileWithContent method is called. The next attribute, targetLocation, is the location where the code is inserted; in our case, where the createFileWithContent method calls the write method of BufferedWriter. And finally the action, which in this test is obviously throwing an IOException.

So now you can go to your IDE and run them, and all tests should pass; but if you run them through Maven using the Surefire plugin, the tests will not work. To use Byteman with Maven, the Surefire plugin must be configured in a specific way.
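A sketch of the required pieces, assuming a JDK install where tools.jar lives under ${java.home}/../lib (the path and the tools version are assumptions):

<!-- tools.jar is needed to install the Byteman agent dynamically -->
<dependency>
  <groupId>com.sun</groupId>
  <artifactId>tools</artifactId>
  <version>1.6</version>
  <scope>system</scope>
  <systemPath>${java.home}/../lib/tools.jar</systemPath>
</dependency>

<!-- Surefire configured so BMUnit finds the Byteman jar on the classpath -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <useManifestOnlyJar>false</useManifestOnlyJar>
    <environmentVariables>
      <BYTEMAN_HOME></BYTEMAN_HOME>
    </environmentVariables>
    <systemProperties>
      <property>
        <name>org.jboss.byteman.home</name>
        <value></value>
      </property>
    </systemProperties>
  </configuration>
</plugin>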


The first important thing is adding the tools jar as a dependency. This jar provides the classes needed to dynamically install the Byteman agent.

In the Surefire plugin configuration it is important to set useManifestOnlyJar to false to ensure that the Byteman jar appears in the classpath of the test JVM. Also see that we are defining an empty BYTEMAN_HOME environment variable and an empty org.jboss.byteman.home system property. This is because, when it loads the agent, the BMUnit package uses the BYTEMAN_HOME environment variable or the org.jboss.byteman.home system property to locate byteman.jar, but only if it is a non-empty string; otherwise it scans the classpath to locate the jar. Because we want to ensure that the jar added in the dependencies section is used, we override any other configuration present on the system.

And now you can run mvn clean test, and both tests are successful too.

See that Byteman opens a new world in how we write our integration tests; now we can easily test unusual exceptions like communication errors, input/output exceptions or out-of-memory errors. Moreover, because we are not mocking FileUtils, we are executing real code; for example, in our second test we run a few lines of the FileUtils object until the write method is reached. If we had mocked the FileUtils class, these lines would not be executed. Thanks to fault injection, our code coverage is improved.

Byteman is more than what I have shown you; it also has built-ins designed for testing in multithreaded environments, parameter binding, and a number of location specifiers, to cite a few things.

I hope you have found this post useful and that it helps you test rare conditions in your classes.

Download Code
Music: http://www.youtube.com/watch?v=WlBiLNN1NhQ

Monday, February 27, 2012

For everything I long to do, No matter when or where or who, Has one thing in common too, It's a, it's a, it's a, it's a sin (It's a Sin - Pet Shop Boys)-



Usually when you start a new project it will contain several subprojects, for example one with core functionalities, another one with the user interface, and maybe another one with acceptance tests.

In the next screencast post I am going to show you how to create a multi-module Maven project using the M2Eclipse plugin.

This is the first video I have done. I hope you find it really useful, and I will try to alternate between blog posts and video posts.


Thursday, February 23, 2012

If there ain't all that much to lug around, Better run like hell when you hit the ground. When the morning comes. (This Too Shall Pass - Ok Go)



JavaScript has become much more important to interactive website development than it was five years ago. With the advent of HTML5 and new JavaScript libraries like jQuery and all the libraries that depend on it, more and more functionality is being implemented using JavaScript on the client side, not only for validating input forms, but also for building the UI or acting as a RESTful interface to the server side.

With the growing use of JavaScript, new testing frameworks have appeared too. We could cite a lot of them, but in this post I am going to talk only about one called Jasmine.

Jasmine is a BDD framework for testing JavaScript code. It does not depend on any other JavaScript framework, and it uses a really clean syntax, similar to the xUnit frameworks. See the next example:


To run Jasmine, you simply point your browser to the SpecRunner.html file, which contains references to the scripts under test and to the spec scripts. An example of a SpecRunner is shown here:


From my point of view, JavaScript has become so popular thanks to jQuery, which has greatly simplified the way we write JavaScript code. And you can also test jQuery applications with Jasmine using the Jasmine-jQuery module, which provides two extensions for testing:

  • a set of matchers for the jQuery framework like toBeChecked(), toBeVisible(), toHaveClass(), ...
  • an API for handling HTML fixtures which enables you to load HTML code to be used by tests.
So with Jasmine you can test your JavaScript applications, but we still have one problem: we have to launch all the tests manually by opening the SpecRunner page in a browser. But don't worry, the jasmine-maven-plugin exists. It is a Maven plugin that runs Jasmine spec files during the test phase automatically, without the need to write the SpecRunner boilerplate file.


So I suppose you want to start coding. We are going to create a simple jQuery plugin in the standard Maven war layout, where JavaScript files go to src/main/webapp/js, CSS to src/main/webapp/css and JavaScript tests to src/test/javascript. Of course this directory structure is fully configurable; for example, if your project were a pure JavaScript project, src/main/javascript would be a better place. The next image shows the directory layout.



Let's start. First of all we are going to create a CSS file which defines a red class. Not complicated code:


Next step: create a js file containing the jQuery plugin code. It is a simple plugin that adds the red class to the affected element.

And finally the html code that uses the previous functionality. No big secret: a div element modified by our jQuery plugin.

Now it is time for testing. Yes, I know, write tests first and then business code, but I thought it would be more appropriate to show the code under test first.

So let's write Jasmine test file.

The first thing to do is add a description (behaviour) of what we are going to test with the describe function. Then with beforeEach we define the function we want to execute before each test execution (like the @Before JUnit annotation). In this case we are setting our fixture to test the plugin code; you can set an html file as a template or you can define the html inline, as done here.

And finally the test, written inside the it function. Our test validates that the div element with id content, defined in the fixture, contains a class attribute with value red after running the redColor function. See how we are using the jasmine-jquery toHaveClass matcher.


Now we have our JavaScript test written, and it is time to run it; but instead of using the SpecRunner file, we are going to make the Jasmine tests execute from Maven during the test phase.

Let's see how to configure jasmine-maven plugin.

The first thing to do is register the plugin in the pom.

And then configure the plugin with the required parameters. With the first two parameters (jsSrcDir and jsTestSrcDir) we set the JavaScript locations for production code and test code. Since we are writing tests for a jQuery plugin in Jasmine, both the jquery and jasmine-jquery libraries should be imported into the generated SpecRunner, and this is accomplished by using the preloadSources tag.
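A sketch of the registration and configuration together; the plugin version, the directories and the library URLs are assumptions to adapt to your project:

<plugin>
  <groupId>com.github.searls</groupId>
  <artifactId>jasmine-maven-plugin</artifactId>
  <version>1.1.0</version>
  <executions>
    <execution>
      <goals>
        <goal>test</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <jsSrcDir>${project.basedir}/src/main/webapp/js</jsSrcDir>
    <jsTestSrcDir>${project.basedir}/src/test/javascript</jsTestSrcDir>
    <preloadSources>
      <source>http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js</source>
      <source>https://raw.github.com/velesin/jasmine-jquery/master/lib/jasmine-jquery.js</source>
    </preloadSources>
  </configuration>
</plugin>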

All these parameters will change depending on your project, but if you are creating a Maven war project this layout is enough.

And now you can run Maven by typing:

mvn clean test

And the next console output should be printed:


I think we have integrated JavaScript tests into Maven in an easy and clean way, and now our continuous integration server (Jenkins or Hudson) will run the JavaScript tests too. If you are planning to build a continuous delivery system for your next project, and this project will contain JavaScript files, take into consideration using Jasmine as the BDD tool, because it fits perfectly with Maven.

I hope you have found this post useful.

Download code

Music: http://www.youtube.com/watch?feature=player_embedded&v=qybUFnY7Y8w#!

Wednesday, December 14, 2011

Elle ne me quitte pas d'un pas, fidèle comme une ombre. Elle m'a suivi ça et là, aux quatres coins du monde. Non, je ne suis jamais seul avec ma solitude (Ma Solitude - Georges Moustaki)



In this post I have uploaded my Devoxx presentation. This year I was at Devoxx as a speaker. My presentation was about how to speed up HTML5 applications, and JavaScript and CSS in general, using aggregation and minification.


You can also visit two entries of my blog where I talk about the same topic.
Links to the technologies I talked about:

Finally I want to say thank you to all the people who came to watch me, and of course to the Devoxx folks for organising such an amazing event.

I hope you have discovered a great way to speed up your web applications.

Music: http://www.youtube.com/watch?v=qSZKO5K2eTE

Friday, December 09, 2011

Far away, long ago, glowing dim as an Ember, Things my heart use to know, things it yearns to remember (Once upon a December - Anastasia)



You never develop code without version control, so why do you develop your database without it? Flyway is a database-independent library for tracking, managing and applying database changes.

Personally, I find that using a database migration tool like Flyway is a must, because it covers two scenarios of our software life cycle:

  • Multiple developers developing an application with continuous integration.
  • Multiple clients, each one with a different version of the production code.

Let's start with the first point. If your project is big enough, there will be more than one developer working on it, each one developing a new feature. Each feature may require a database update (adding a new table, a new constraint, ...), so the developer creates a .sql file with all the required changes.

After each developer finishes their work, these changes are merged into the main branch and integration/acceptance tests are executed on a test machine. And the problem is obvious: which process updates the testing database? And how? Does the QA department execute the sql files manually? Or do we develop a program that executes these updates automatically? And in what order must they be executed? The same problem arises in the production environment.

The second point is only applicable if your application is distributed across multiple clients. At this point the problem is further accentuated, because each client may have a different software version. Hence, when an update is required by a client (for example because of a bug), you should know which database version is installed and which changes must be applied to get the expected database.

Don't worry, Flyway comes to the rescue and will help answer all the previous questions. Let me start by explaining some features of Flyway that in my opinion make it a good tool.

  • Automatic migration: Flyway will update from any version to the latest version of the schema. Flyway can be executed from the command line (so it can be used in non-JVM environments), as an Ant script, as a Maven script (to update integration/acceptance test environments) or within the application (when the application is starting up).
  • Convention over configuration: Flyway comes with a default configuration, so no configuration is required to start using it.
  • Plain SQL scripts or Java classes: To execute updates, you can use plain SQL files or Java classes for advanced migrations.
  • Highly reliable: safe for cluster environments.
  • Schema clean: Flyway can clean the existing schema, so an empty installation is produced.

The conventions to be followed, if they are not explicitly modified, are:

  • Plain SQL files go to db/migration directory inside src/main/resources structure.
  • Java classes go to db.migration package.
  • Files (SQL and Java) must follow the naming convention V<version>[__description], where each part of the version number is separated by dots (.) or underscores (_) and, if a description is provided, it must be preceded by two underscores. A valid example is V1_1_0__Update.sql.

So let's see Flyway in action. In this application I am going to focus only on how to use Flyway; I am not going to create any DAO, DTO or Controller classes, only the database migration part.

Imagine we are going to develop a small application using the Spring Framework that will allow us to register authors and which books they have written.

The first version will contain two tables, Author and Book, related with a one-to-many relationship.

The first step is registering Flyway in the Spring application context. Flyway is the main class and requires a javax.sql.DataSource instance; its migrate method is responsible for starting the migration process.
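A minimal Spring XML sketch; the dataSource bean is assumed to be defined elsewhere, and the Flyway class lived in the com.googlecode.flyway package in the version used here:

<bean id="flyway" class="com.googlecode.flyway.core.Flyway" init-method="migrate">
  <property name="dataSource" ref="dataSource"/>
</bean>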


See that there is no secret here. Just be careful: if your project uses JPA or an ORM framework for persistence, you should configure it to avoid the auto-creation of tables, because now Flyway is responsible for managing the database structure. Because of that, the creation of the SessionFactory (in the case of Hibernate) or the EntityManagerFactoryBean (in the case of JPA) should depend on the Flyway bean.

Flyway is configured. Each time you start the application, it will check whether the configured datasource requires an update or not.

And now let's write the first version of the SQL migration. Create the db/migration directory inside src/main/resources and create a file called V1__Initial_version.sql with the next content:


This script creates Author and Book tables with their respective attributes.

And if you run the next JUnit test, both tables are created in the database.


Take a look at your console, and you will see that the next log messages have appeared:

10:33:49,512  INFO glecode.flyway.core.migration.DbMigrator: 119 - Current schema version: null
10:33:49,516  INFO glecode.flyway.core.migration.DbMigrator: 206 - Migrating to version 1
10:33:49,577 INFO glecode.flyway.core.migration.DbMigrator: 188 - Successfully applied 1 migration (execution time 00:00.085s).


And if you open your database:


Note that Flyway has created a table to record all the updates that have been executed (SCHEMA_VERSION), and the last insert is a "Flyway insert" marking the current version.

Then your first version of the application is distributed across the world.

And you can start developing version 1.1.0 of the application. For the next release, an Address table must be added with a relationship to Author.


As done before, create a new SQL file, V1_1_0__AddressTable.sql, in the db/migration folder.


And run the next unit test:


Your database will be upgraded to version 1.1.0. Also take a look at the log messages and the database:

11:27:30,149  INFO glecode.flyway.core.migration.DbMigrator: 119 - Current schema version: 1
11:27:30,152  INFO glecode.flyway.core.migration.DbMigrator: 206 - Migrating to version 1.1.0
11:27:30,191 INFO glecode.flyway.core.migration.DbMigrator: 188 - Successfully applied 1 migration (execution time 00:00.053s).



The new table is created, and a new entry is inserted into the SCHEMA_VERSION table marking that the current database version is 1.1.0.

When your 1.1.0 application is distributed to your clients, Flyway will be responsible for updating their databases without losing data.


Previously I mentioned that Flyway also supports Java classes for advanced migrations. Let's see how.

Imagine that in your next release authors can upload their personal photo, and you decide to store it as a blob attribute in the Author table. The problem resides in the already-created authors, because you should set some data for this attribute. Your marketing department decides that authors inserted prior to this version will contain a photo of Spock.


So now you must alter the Author table and, moreover, update a field with a photo. You can see clearly that for this update you will need something more than a simple SQL file, because you will need to add a new column and update it with a chunk of bytes. This could be accomplished using only one Java class, but to show a particularity of Flyway, the problem will be treated with one SQL file and one Java class.

First of all, a new SQL script adding the new binary field is created. This new feature will be implemented in version 2.0.0, so the script file is named V2_0_0__AddAvatar.sql.


The next step is developing a Java migration class. Create a new package, db.migration, in src/main/java. Notice that this class cannot be named V2_0_0__AddAvatar.java, because Flyway would try to execute two different migrations with the same version and would obviously detect a conflict.

To avoid this conflict you can follow many different strategies, but in this case we are going to add a letter as a version suffix, so the class will be named V2_0_0_A__AddAvatar.java instead of V2_0_0__AddAvatar.java.


Before running the previous unit test, open the testdb.script file and add the next line just under the SET SCHEMA PUBLIC command.

INSERT INTO AUTHOR(ID, FIRSTNAME, LASTNAME, BIRTHDATE) VALUES(1, 'Alex', 'Soto', null);

And running the unit test, the next lines are logged:


20:21:18,032  INFO glecode.flyway.core.migration.DbMigrator: 119 - Current schema version: 1.1.0
20:21:18,035  INFO glecode.flyway.core.migration.DbMigrator: 206 - Migrating to version 2.0.0
20:21:18,088  INFO glecode.flyway.core.migration.DbMigrator: 206 - Migrating to version 2.0.0.A
20:21:18,114 INFO glecode.flyway.core.migration.DbMigrator: 190 - Successfully applied 2 migrations (execution time 00:00.094s).

And if you open the updated database, the next rows have been added:


See how all previous authors have the avatar column populated with data.

Note that now you don't have to worry about database migrations: your application is packaged and delivered to all your clients regardless of the version they had installed; Flyway will execute only the required migration files depending on the installed version.

If you are not using Spring, you can update your database using the Flyway Maven plugin. The next piece of the pom shows you how to execute the migration during the test-compile phase. By default the plugin is executed during the pre-integration-test phase.
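A sketch of that configuration; the plugin version, driver and connection settings are assumptions, so adapt them to your database:

<plugin>
  <groupId>com.googlecode.flyway</groupId>
  <artifactId>flyway-maven-plugin</artifactId>
  <version>1.5</version>
  <configuration>
    <driver>org.hsqldb.jdbcDriver</driver>
    <url>jdbc:hsqldb:file:target/db/testdb</url>
    <user>sa</user>
    <password></password>
  </configuration>
  <executions>
    <execution>
      <phase>test-compile</phase>
      <goals>
        <goal>migrate</goal>
      </goals>
    </execution>
  </executions>
</plugin>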


Thanks to the Maven plugin, we can configure our continuous integration system so that all environments (test, production, ...) are updated during the deployment of the application.

I hope Flyway helps you have a better life as a developer.



Music: http://www.youtube.com/watch?v=oyUBdLm3s9U


Thursday, December 01, 2011

All my scientists are working on a deadline, So my psychologist is working day and nighttime, They say they know what's best for me, But they don't know what they're doing (Atomic Garden - Bad Religion)


Maven archetypes are project templates that allow users to create a project structure with a simple Maven command. In my company we are using archetypes because they provide a way to standardize project structure. All our projects are built using the same directory structure, and all of us use the same versions of common libraries like JUnit, Hamcrest, Spring Framework or Mockito, or, in the case of web applications, bundle them with the company's approved CSS and JavaScript libraries. PMD, Checkstyle or FindBugs coding rules can also be stored in the distributed archetype.

If, each time you start a new project, you are one of those who copy files from existing projects to the new one, apply the DRY principle and create a Maven archetype from an existing project.

The first thing to do is create your template project with all the files to be bundled into the archetype. In this example, a simple Spring MVC project will be transformed into a Maven archetype.


After the template project is created and all desired files are added, you should have a directory layout like:


My personal advice is that, if you are thinking about sharing this archetype with the community (not only within your company), remove all IDE-specific files.

Now your project is created and ready to be packaged as an archetype. Execute the next command from the root of your project:
mvn archetype:create-from-project
And Maven console output should be:

And now your archetype is created in the target/generated-sources/archetype directory with the next hierarchy:


Now the project is inside the archetype-resources directory. This directory contains all the files that will be added to the generated project.

At first sight there are not many differences between the original project and the "template" project; it seems that only three files have been added (archetype-metadata.xml, archetype.properties and goal.txt), but shortly you will see that the original project content has been modified too.

Before continuing, see that the project contains two poms: one in the root directory, which will be called the archetype pom because it contains all the archetype configuration, and another one inside archetype-resources, called the template pom, because it will be the pom used in the generated project.

The next step is isolating the archetype project into a separate folder, so it can be dealt with as a standalone project.
mv target/generated-sources/archetype ../spring-mvc-archetype

The following step is giving a name to the generated archetype, so open the archetype pom and change the <name> tag value to your archetype name, for example spring-mvc-archetype, and, if you want, the artifactId and groupId too.

After this modification, open the archetype-resources pom and see how the <artifactId> or <groupId> values have been replaced with ${artifactId} or ${groupId} placeholders. When you create a new project from an archetype, by default Maven will ask you to enter four parameters: groupId, artifactId, version and package. The entered values will be used to fill these placeholders.

The default four parameters should be enough, but imagine you want the user to provide more information, for example the war name. To get this, open the archetype-metadata.xml file (src/main/resources/META-INF/maven) and add a required property using the <requiredProperties> tag.
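The fragment to add inside the archetype-descriptor root element of that file is simply:

<requiredProperties>
  <requiredProperty key="warName"/>
</requiredProperties>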

In the previous file we added a new required property named warName. And the last thing to do is update archetype.properties, located in test/resources/projects/basic, with a default value for the new property.

And that's all. If you open any Java class or any XML file, you will see that it has been modified with the ${package} variable. This information is filled in when you generate the project.

Now you can install the archetype into your local catalog and start generating standardized projects:
mvn clean install
And your artifact is ready to be used. Try the next command, or if you have the m2eclipse plugin installed, open Eclipse and try your new archetype:
mvn archetype:generate -DarchetypeCatalog=local
A list of all installed archetypes is shown. Choose the previously created one, fill in all the required properties, and your new project is built and configured. You can start coding with the same libraries that your workmates use and the same style rules.

In this post a simple example has been provided, but think about all the kinds of elements that you copy and paste from one project to another, like the SCM connection, the Surefire plugin configuration or the release plugin tag name, to cite a few, and how you can integrate them into your archetype.

I hope you have found this post interesting.

Music: http://www.youtube.com/watch?v=AhzhiQA6-Aw&ob=av3e

Thursday, November 10, 2011

Fear of the dark, fear of the dark I have a phobia that someone's always there (Fear of the Dark - Iron Maiden)



Some time ago I wrote about how to implement your RESTful Web API using Spring MVC. Read my previous post to know more about it.

In that post a simple REST example was developed. To test the application, it was copied into a web server (Tomcat, for example), and then accessing http://localhost:8080/RestServer/characters/1 returned the information of character 1.

In this post I am going to explain how to transform that application into a Google App Engine application and deploy it to Google's infrastructure using Maven. In this case we are going to deploy a REST Spring MVC application, but the same approach can be used to migrate a Spring MVC web application (or any other application developed with another web framework) to GAE.

First of all, obviously, you should create a Google account and register a new application (remember the name, because it will be used in the next step). After that you can start the migration.

Three changes are required: create appengine-web.xml defining the application name, add a server tag to settings.xml with your Google account information, and modify pom.xml to add the GAE plugin and its dependencies.

Let's start with appengine-web.xml. This file is used by GAE to configure the application and is created in the WEB-INF directory (at the same level as web.xml).
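A minimal sketch, using the application name registered for this example; the version and the optional flags are assumptions:

<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <application>alexsotoblog</application>
  <version>1</version>
  <precompilation-enabled>true</precompilation-enabled>
  <sessions-enabled>true</sessions-enabled>
</appengine-web-app>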

The most important field is the application tag. This tag contains the name of our application (defined when you registered the new Google application).

Other tags are the version, system properties and environment variables, and miscellaneous configuration like whether you want precompilation to enhance performance or whether your application requires sessions.

And your project does not need to be modified any further; from now on only the Maven files will be touched.

In settings.xml, the account information should be added:
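For example, a sketch like this; the server id is arbitrary but must match the one referenced from the GAE plugin configuration, and the credentials are placeholders:

<server>
  <id>appengine.google.com</id>
  <username>your.account@gmail.com</username>
  <password>your-password</password>
</server>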

See that it is as easy as registering any other server in Maven.

And finally the most tedious part, modifying pom.xml.

The first thing is adding new properties:
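A sketch of the properties; the SDK version and the repository path are assumptions, to be adapted to your environment:

<properties>
  <gae.home>/media/share/maven_repo/com/google/appengine/appengine-java-sdk/${gae.version}/appengine-java-sdk-${gae.version}</gae.home>
  <gae.version>1.5.5</gae.version>
</properties>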

In the first line we define the App Engine Java SDK location. If you have already installed it, insert its location in this tag; if not, copy the same location used in this pom and simply change the Maven repository directory, in my case /media/share/maven_repo, to yours. Typically your Maven repository location will be /home/user/.m2/repository. Maven will download the SDK for you at deploy time.

The next step is adding the Maven GAE repository.

Because our project is a dummy project, DataNucleus is not used. In the case of more complex projects where database access is required using, for example, JDO, the next dependencies should be added:

And in case you are using DataNucleus, the maven-datanucleus-plugin should be registered. Take care to configure it properly depending on your project.

Now the Google App Engine dependencies are added.
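For example, the runtime API dependency, using the version property defined above:

<dependency>
  <groupId>com.google.appengine</groupId>
  <artifactId>appengine-api-1.0-sdk</artifactId>
  <version>${gae.version}</version>
</dependency>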


Then, if you want to test GAE functionalities (not used in our dummy project), the next GAE libraries are added:

The next change is a modification of the maven-war-plugin to include appengine-web.xml in the generated package:

And finally we add the maven-gae-plugin and configure it to upload the application to appspot.

See that <serviceId> tag contains the server name defined previously in settings.xml file.

Also, if you are using the maven-release-plugin, you can upload the application to appspot automatically during the release:perform goal:

Now run the gae:deploy goal. If you have already installed the App Engine Java SDK, your application will be uploaded to your GAE site. But if it is the first time you run the plugin, you will receive an error. Do not panic: this error occurs because the Maven plugin does not find the App Engine SDK in the directory you specified in the <gae.home> tag. If you have configured gae.home to point into your local Maven repository, simply run the gae:unpack goal, and the SDK will be installed correctly, so when you rerun gae:deploy your application will be uploaded to the Google infrastructure.

In this post's example you can go to http://alexsotoblog.appspot.com/characters/1 and the character information is displayed in your browser in JSON format.

As I noted at the beginning of the post, the same process can be used for any web application, not only for Spring REST MVC.

For teaching purposes, all modifications have been made in the application pom. My advice is to create a parent pom with the GAE-related tags, so each project that must be uploaded to Google App Engine extends from the same pom file.

I hope you have found this post useful.

This week I am at devoxx, meet me there ;) I will be speaking on Thursday 17 at 13:00 about Speeding Up Javascript & CSS Download Times With Aggregation and Minification

Full pom file:


Download Code.