Thursday, April 19, 2012

Qui dit crise te dit monde dit famine dit tiers-monde, Qui dit fatigue dit réveil encore sourd de la veille, Alors on sort pour oublier tous les problèmes, Alors on danse... (Alors on Danse - Stromae)




Let's introduce another Hibernate performance tip. Do you remember the model from the previous Hibernate post? We had a Starship and an Officer related with a one-to-many association.


Now we have a new requirement:

We shall get all officers assigned to a starship in alphabetical order.

To solve this requirement we can:
  1. implement an HQL query with an order by clause.
  2. use the sort approach.
  3. use the order approach.
The first solution is good in terms of performance, but it implies more work for us as developers, because we should write a query that finds all officers of a given starship ordered by name, and then create a finder method in the DAO layer (in case you are using the DAO pattern).

Let's explore the second solution: we could use a SortedSet as the association and make Officer implement Comparable, so that Officer has a natural order. This solution implies less work than the first one, but requires the @Sort Hibernate annotation on the association definition. So let's modify the previous model to meet our new requirement. Note that there is no equivalent annotation in the JPA specification.

First we are going to implement the Comparable interface in the Officer class.
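A minimal sketch of how that could look (field names follow the example; getters and setters are omitted):

@Entity
public class Officer implements Comparable<Officer> {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // Natural order: officers compare alphabetically by name.
    public int compareTo(Officer officer) {
        return this.name.compareTo(officer.name);
    }
}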


We are ordering officers by name, simply comparing the name field. The next step is annotating the association with @Sort.
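The association could now be sketched as (the annotations are org.hibernate.annotations.Sort and SortType):

@OneToMany(cascade = CascadeType.ALL)
@Sort(type = SortType.NATURAL)
private SortedSet<Officer> officers = new TreeSet<Officer>();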


Notice that the officers association is now implemented using a SortedSet instead of a List. Furthermore, we are adding the @Sort annotation to the relationship, stating that officers should follow their natural order. Before finishing this post we will come back to the @Sort topic, but for now this is sufficient.

And finally, a method that gets all officers of a given starship ordered by name, printing them into the log file.
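A sketch of such a method (the entity manager and logger wiring are assumed):

public void printOfficersSortedByName(Long starshipId) {
    Starship starship = entityManager.find(Starship.class, starshipId);
    for (Officer officer : starship.getOfficers()) {
        log.info("Officer: {}", officer.getName());
    }
}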


All officers are sorted by their names, but let's examine which queries are sent to the RDBMS.


The first query results from calling the find method on the EntityManager instance to load the starship.

Because one-to-many relationships are lazy by default, when we call the getOfficers method and access the SortedSet for the first time, a second query is executed to retrieve all officers. See that no order by clause is present in the query; yet, looking carefully at the output, the officers are retrieved in alphabetical order.


So who is sorting the officer entities? The explanation lies in the @Sort annotation. In Hibernate, a sorted collection is sorted in memory, with Java being responsible for ordering the data using the compareTo method.

Obviously this is not the most performant way to sort a collection of elements. We likely need a hybrid between using an SQL clause and using an annotation instead of writing a query.

And this leads us to the third possibility: using the order approach.


The @OrderBy annotation, available both as a Hibernate annotation and as a JPA annotation, lets us specify how to order a collection by adding an "order by" clause to the generated SQL.

Keep in mind that javax.persistence.OrderBy allows us to specify the order of the collection via object properties, while org.hibernate.annotations.OrderBy orders a collection by appending a fragment of SQL (not HQL) directly to the order by clause.

Now the Officer class does not need to be touched; we don't have to implement the compareTo method nor a java.util.Comparator. We only need to annotate the officers field with the @OrderBy annotation. Since in this case we are ordering by a simple attribute, the JPA annotation is used to maintain full compatibility with other JPA-ready ORM engines. By default, ascending order is assumed.
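Sketched, the association becomes a plain List ordered through the JPA annotation:

@OneToMany(cascade = CascadeType.ALL)
@OrderBy("name ASC")
private List<Officer> officers = new ArrayList<Officer>();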



And if we rerun the get-all-officers method, the next queries are sent:


Both queries are still executed, but note that now the select query contains the order by clause too.

With this solution you save processing time by letting the RDBMS sort the data quickly, rather than ordering it in Java once received.

Furthermore, the OrderBy annotation does not force you to use a SortedSet or SortedMap collection. You can use any collection like HashMap, HashSet, or even a Bag, because Hibernate will internally use a LinkedHashMap, LinkedHashSet or ArrayList respectively.

In this example we have seen the importance of choosing an ordering strategy correctly. Whenever possible you should try to take advantage of the capabilities of the RDBMS, so your first option should be the OrderBy annotation (Hibernate or JPA) instead of Sort. But sometimes an order by clause will not be enough. In that case, I recommend the Sort annotation with a custom type (using a java.util.Comparator class) instead of relying on natural order, to avoid touching model classes.


I hope this post helped you understand the differences between "sort" and "order" in Hibernate.

Keep learning.

Music: http://www.youtube.com/watch?v=VHoT4N43jK8&ob=av3n

Tuesday, April 10, 2012

Why does the rain fall from above? Why do fools fall in love? Why do they fall in love? (Why Do Fools Fall In Love - Frankie Lymon)



More often than not our applications need to send emails to users, notifying them for example that their account has been created, that they have purchased an item, or simply sending a password reminder. When you are writing unit tests there is no problem, because you will probably mock the interface responsible for sending the email. But what happens with integration tests?

Maybe the logical path to resolve this problem is installing an email server and executing these tests against it. It is not a bad idea, but note that you would need to configure your environment before executing the tests. Your tests would depend on external resources, and that is a bad idea for integration tests. Furthermore, these integration tests would not be portable across multiple machines unless an email server was previously installed on each of them.

To avoid this problem, Dumbster comes to save us. Dumbster is a fake SMTP server designed for testing applications that send email messages. It is written in Java, so you can start and stop it directly from your tests.

Let's see an example. Suppose we are developing an electronic shop where, when an order is placed, an email should be sent to the customer.

In this case we are going to use Spring Framework 3.1 to create our service layer, and it will also help us with testing.

For teaching purposes, I am not using mail templates or rich MIME types.

The first class I am going to show you is Order, which, as you can imagine, represents an order:

The most important method here is toEmail(), which returns the email body message.
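A sketch of the class (the fields are assumptions made for illustration):

public class Order {

    private String customerEmail;
    private String productName;
    private int quantity;

    public Order(String customerEmail, String productName, int quantity) {
        this.customerEmail = customerEmail;
        this.productName = productName;
        this.quantity = quantity;
    }

    public String getCustomerEmail() {
        return customerEmail;
    }

    // Builds the plain-text body of the confirmation email.
    public String toEmail() {
        return "You have purchased " + quantity + " unit(s) of " + productName + ".";
    }
}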

The next class is the service responsible for placing an order into the delivery system:

This service class uses Spring classes to send an email to the customer. See that two methods are present: one that sends a simple message, and another one, called placeOrderWithInvoice, that sends an email with an attachment, concretely an invoice in jpg format.
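The service could look roughly like this (a sketch against the standard Spring mail API; the class and method names are assumptions):

import javax.mail.internet.MimeMessage;
import org.springframework.core.io.Resource;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.mail.javamail.MimeMessageHelper;
import org.springframework.mail.javamail.MimeMessagePreparator;

public class OrderService {

    private JavaMailSender mailSender;

    public void placeOrder(Order order) {
        SimpleMailMessage message = new SimpleMailMessage();
        message.setTo(order.getCustomerEmail());
        message.setSubject("Order confirmation");
        message.setText(order.toEmail());
        mailSender.send(message);
    }

    public void placeOrderWithInvoice(final Order order, final Resource invoice) {
        mailSender.send(new MimeMessagePreparator() {
            public void prepare(MimeMessage mimeMessage) throws Exception {
                // true = multipart message, so the attachment can be added.
                MimeMessageHelper helper = new MimeMessageHelper(mimeMessage, true);
                helper.setTo(order.getCustomerEmail());
                helper.setSubject("Order confirmation");
                helper.setText(order.toEmail());
                // Attach the invoice in jpg format.
                helper.addAttachment("invoice.jpg", invoice);
            }
        });
    }

    public void setMailSender(JavaMailSender mailSender) {
        this.mailSender = mailSender;
    }
}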

And finally the Spring context file:

Note that the mail configuration is surrounded by a profile. This means that Spring will only create these beans when the application is started in production mode, and in that case the production SMTP location is set.

And now let's start with testing:

First of all we must create a Spring context file to configure the SMTP server location.

See that we are importing the application-context.xml file, but now we are defining a new beans profile called integration, where we redefine the SMTP connection (changing hostname and port) to point to the fake server.

And finally the test itself.

It is important to explain the next parts:
  • @ActiveProfiles is an annotation that tells the Spring context which environment should be loaded.
  • SimpleSmtpServer is the main class of Dumbster.
  • @Rule is responsible for starting and stopping the SMTP server for each method execution.
We have created two tests, sketched below: one that sends a plain message (an_email_should_be_sent_to_customer_confirming_purchase()) and another that sends a message with an attachment (an_email_with_invoice_should_be_sent_to_special_customer_confirming_purchase()).
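A sketch of the first test. SmtpServerRule is a hypothetical helper built for this sketch (the real Dumbster API is SimpleSmtpServer.start/stop, getReceivedEmail and SmtpMessage), and the Order constructor is assumed from the earlier sketch:

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import com.dumbster.smtp.SimpleSmtpServer;
import com.dumbster.smtp.SmtpMessage;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:application-context-test.xml")
@ActiveProfiles("integration")
public class OrderServiceTest {

    @Autowired
    private OrderService orderService;

    // Starts the fake SMTP server before each test and stops it afterwards.
    @Rule
    public SmtpServerRule smtpServer = new SmtpServerRule(1025);

    @Test
    public void an_email_should_be_sent_to_customer_confirming_purchase() {
        orderService.placeOrder(anOrder());

        SmtpMessage message = smtpServer.firstReceivedMessage();
        assertThat(message.getHeaderValue("To"), is("customer@example.com"));
    }

    private Order anOrder() {
        return new Order("customer@example.com", "The Hobbit", 1);
    }

    public static class SmtpServerRule extends ExternalResource {

        private final int port;
        private SimpleSmtpServer server;

        public SmtpServerRule(int port) {
            this.port = port;
        }

        protected void before() {
            server = SimpleSmtpServer.start(port);
        }

        protected void after() {
            server.stop();
        }

        public SmtpMessage firstReceivedMessage() {
            return (SmtpMessage) server.getReceivedEmail().next();
        }
    }
}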

The private methods are simply helpers that create the required objects.

Note that the Hamcrest matcher bodyEqualTo comes from the BodySmtpMessage class, developed specifically for this example.

I hope you have found this post useful, and that it gives you an alternative when you want to write integration tests involving an SMTP email service.

Keep Learning,
Alex.

Thursday, April 5, 2012

Hey! Teachers! Leave them kids alone! All in all it's just another brick in the wall. All in all you're just another brick in the wall. (Another Brick In The Wall - Pink Floyd)


In this post I am going to show you how to configure your application to use slf4j and logback as its logging solution.

The Simple Logging Facade for Java (slf4j) is a simple facade for various logging frameworks, like JDK logging (java.util.logging), log4j, or logback. It even contains a binding that will delegate all logger operations to another well-known logging facade, Jakarta Commons Logging (JCL).

Logback is the successor of the log4j logger API; in fact both projects have the same father, but logback offers some advantages over log4j, like better performance and lower memory consumption, automatic reloading of configuration files, and filtering capabilities, to cite a few features.

Logback is the native implementation of slf4j, thus using the two together implies zero memory and computational overhead.

First we are going to add slf4j and logback into the pom as dependencies.

Note that three artifacts are required: one for slf4j (slf4j-api) and two for logback (logback-core and logback-classic). The last two dependencies will change depending on your logging framework; if for example you still want to use log4j, instead of the logback dependencies we would have the log4j dependency itself plus slf4j-log4j12.

The next step is creating the configuration file. Logback supports two configuration formats: the traditional one, using XML, and a Groovy DSL style. Let's start with the traditional way, creating a file called logback.xml in the classpath. The file name is mandatory, although logback-test.xml is also valid. In case both files are found in the classpath, the one ending with -test is used.

In general the file is quite intuitive: we are defining the appender (the output of log messages), in this case the console, a pattern, and finally the root logger level (DEBUG) and a different logger level (INFO) for the classes in the foo package.

Obviously this format is much more readable than the typical log4j.properties. Note the additivity attribute: the appender named STDOUT is attached to two loggers, root and com.lordofthejars.foo. Because the root logger is the ancestor of all loggers, a logging request made by the com.lordofthejars.foo logger would be output twice. To avoid this behavior, set the additivity attribute to false, and the message will be printed only once.

Now let's create two classes which will use slf4j. The first class, called BarComponent, is created in com.lordofthejars.bar:


Note two big differences from log4j. The first one is that the typical if construction guarding each log call is no longer required. The other one is the pair of '{}' placeholders: only after evaluating whether to log or not will logback format the message, replacing '{}' with the given value.
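A sketch of what BarComponent could look like:

package com.lordofthejars.bar;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class BarComponent {

    private static final Logger logger = LoggerFactory.getLogger(BarComponent.class);

    public void bar() {
        // No isDebugEnabled()-style guard is needed: the message is only
        // formatted if the statement is actually going to be logged.
        logger.debug("Bar called at {}", System.currentTimeMillis());
    }
}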

The other class, called FooComponent, is created in com.lordofthejars.foo:

And now, calling the foo and bar methods with the previous configuration, the produced output will be:

Notice that the debug lines of the foo method are not shown. This is fine, because we have configured it to behave that way.

The next step is configuring logback again, but using the Groovy DSL approach instead of XML. Logback gives preference to a Groovy configuration over an XML configuration, so keep that in mind if you are mixing both approaches.

So the first thing to do is add Groovy as a dependency.

And then we create the same configuration as before, but in Groovy format.

You can identify the same parameters as in the XML approach, but as Groovy functions.

I hope you have found this post useful; in your next project, if you can, use slf4j in conjunction with logback, and your application will run faster than logging with log4j.

Keep Learning,
Alex.


Sunday, March 18, 2012

Moi je pense à l'enfant, Entouré de soldats, Moi je pense à l'enfant, Qui demande pourquoi (Non Non Rien N'a Changé - Les Poppys)


After 8 years developing server and embedded applications using Hibernate as ORM, squeezing my brain seeking solutions to improve Hibernate performance, reading blogs and attending conferences, I have decided to share the knowledge acquired during these years with you.

This is the first post of many more to come:


Last year I went to Devoxx as a speaker, but I also attended Patrycja Wegrzynowicz's talk about Hibernate anti-patterns. In that presentation Patrycja showed us an anti-pattern that shocked me, because it proved you should expect the unexpected.

We are going to see what happens when Hibernate detects a dirty collection and has to re-create it.

Let's start with the model we are going to use: only two classes, related with a one-to-many association:




In the previous classes we should pay attention to three important points (sketched below):
  • we are annotating at property level instead of field level.
  • @OneToMany and @ManyToOne use default options (apart from the cascade definition).
  • the officers getter on the Starship class returns an immutable list.
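A minimal sketch of that mapping:

@Entity
public class Starship {

    private Long id;
    private List<Officer> officers = new ArrayList<Officer>();

    @Id
    @GeneratedValue
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    // Property-level mapping: Hibernate calls this getter during flush,
    // and it returns a different (unmodifiable) instance on every call.
    @OneToMany(cascade = { CascadeType.ALL })
    public List<Officer> getOfficers() {
        return Collections.unmodifiableList(officers);
    }

    public void setOfficers(List<Officer> officers) {
        this.officers = officers;
    }
}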
To test the model configuration, we are going to create a test which creates and persists one Starship and seven Officers, and then, in a different Transaction and EntityManager, finds the created Starship.

Now that we have created this test, we can run it and observe the Hibernate console output.

See the number of queries executed during the first commit (persisting the objects) and during the commit of the second transaction (finding a Starship). In total, and ignoring the sequence generator, we can count 22 inserts, 2 selects and 1 delete; not bad when we are only creating 8 objects and executing 1 find by primary key.

At this point let's examine why these SQL queries are executed:

The first eight inserts are unavoidable; they are required to insert the data into the database.

The next seven inserts are required because we have annotated the getOfficers property without the mappedBy attribute. If we look closely at the Hibernate documentation, it tells us that “Without describing any physical mapping, a unidirectional one to many with join table is used.”

The next group of queries is even stranger: the first select statement finds the Starship by id, but what are these deletes and inserts of data that we have already created?

During commit, Hibernate checks whether collection properties are dirty by comparing object references. When a collection is marked as dirty, Hibernate needs to re-create the whole collection, even if it contains the same objects. In our case, when getting the officers we return a different collection instance, concretely an unmodifiable list, so Hibernate considers the officers collection dirty.

Because a join table is used, the Starship_Officer table has to be re-created, deleting the previously inserted tuples and inserting the new ones (although they have the same values).

Let's try to fix this problem. We start by mapping a bidirectional one-to-many association, with the many-to-one side as the owning side.
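Sketched (still with property mapping at this stage):

// Officer becomes the owning side: the foreign key lives in its table.
@ManyToOne
public Starship getStarship() {
    return starship;
}

// And in Starship, the association is mapped by the many-to-one side,
// so no join table is used anymore.
@OneToMany(cascade = { CascadeType.ALL }, mappedBy = "starship")
public List<Officer> getOfficers() {
    return Collections.unmodifiableList(officers);
}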

And now we rerun the same test and inspect the output again.


Although we have reduced the number of SQL statements from 25 to 10, we still have an unnecessary query: the one in the commit section of the second transaction. Why, if officers are lazy by default (JPA specification) and we are not getting the officers in that transaction, does Hibernate execute a select on the Officer table? For the same reason as in the previous configuration: the returned collection has a different Java identity, so Hibernate marks it as a newly instantiated collection; only now, the join table operations are no longer required. We have reduced the number of queries, but we still have a performance problem. We need some other solution, and it is not the most obvious one: instead of returning the collection object managed by Hibernate (we might expand on this later), we are going to change the location of the annotations.

What we are going to do is change the mapping location from the property approach to field mapping. We simply move all the annotations to the class attributes rather than the getters.
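Sketched, the final mapping looks like:

@Entity
public class Starship {

    @Id
    @GeneratedValue
    private Long id;

    // Field-level mapping: Hibernate reads this field directly during
    // flush, so the unmodifiable wrapper returned by the getter is never
    // compared and the collection is never considered dirty.
    @OneToMany(cascade = { CascadeType.ALL }, mappedBy = "starship")
    private List<Officer> officers = new ArrayList<Officer>();

    public List<Officer> getOfficers() {
        return Collections.unmodifiableList(officers);
    }
}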


And finally we run the test again, and see what happens:


Why does Hibernate run queries during commit when using property mapping, but not when using field mapping? When a Transaction is committed, Hibernate executes a flush to synchronize the underlying persistent store with the persistable state held in memory. When property mapping is used, Hibernate calls the getter/setter methods to synchronize data, and in the case of the getOfficers method, it returns a dirty collection (because of the unmodifiableList call). On the other side, when field mapping is used, Hibernate reads the field directly, so the collection is not considered dirty and no re-creation is required.

But we have not finished yet. I suppose you are wondering why we have not simply removed Collections.unmodifiableList from the getter, returning the Hibernate collection? The change would look like @OneToMany(cascade={CascadeType.ALL}) public List<Officer> getOfficers() {return officers;}, but returning the original collection ends up in an encapsulation problem; in fact, we would be breaking encapsulation! Anyone could add anything to the mutable list, applying uncontrolled changes to the internal state of the object.

Using an unmodifiableList is one approach to avoid breaking encapsulation, but of course we could also have used different accessors for public access and Hibernate access, and avoided the Collections.unmodifiableList call altogether.

Considering what we have seen today, I suggest you always use field annotations instead of property mapping; it will save you plenty of surprises.

Hope you have found this post useful.

Screencast of example shown here:



Download code
Music: http://www.youtube.com/watch?v=H14VIsnr6aA


Tuesday, March 6, 2012

Keep 'em laughing as you go, Just remember that the last laugh is on you, And always look on the bright side of life..., Always look on the right side of life... (Always Look on the Bright Side of Life - Monty Python)




Integration tests are the kind of tests in which individual modules are combined and tested as a whole. Moreover, integration tests might use system-dependent values, access external systems like the file system, database or web services, and test multiple aspects of one test case. We can say it is a high-level kind of test.

This differs from a unit test, where only a single component is tested. Unit tests run in isolation, mocking out external components or using an in-memory database in the case of DAO layers. A unit test should be:
  • Repeatable.
  • Consistent.
  • In memory.
  • Fast.
  • Self-validating.
  • Testing a single concept.

The problem when we are writing tests is how to test rare (or atypical) conditions like "no disk space" when accessing the file system, or "connection lost" when executing a database query.

In unit testing this is not a problem: you can mock that component (database connection or file system access), generating the required output, like throwing an IOException.

The problem becomes harder with integration tests. It would be strange to mock a component when what you really want to do is validate the real system. So at this point I see two possibilities:
  • Creating a partial mock.
  • Using fault injection.
In this post I am going to show you how to use the fault injection approach to test unusual erroneous situations.

Fault injection is a technique which involves changing the application code under test at specific locations. These modifications introduce faults on error-handling code paths which otherwise would rarely be followed.

I am going to talk about how to use fault injection using Byteman in a JUnit test, and run it with Maven.

Let's start coding. Imagine you need to write a backup module which shall save a string into a local file; but if the hard disk is full (an IOException is thrown), the content shall be sent to a remote server.

First we are going to code a class that writes content into a file.
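A sketch of such a class (the class and method names are assumptions, consistent with the test shown later):

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class FileUtils {

    public void createFileWithContent(String path, String content) throws IOException {
        BufferedWriter writer = new BufferedWriter(new FileWriter(path));
        try {
            // Byteman will inject an IOException exactly at this call.
            writer.write(content);
        } finally {
            writer.close();
        }
    }
}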



The next class would be the one that sends data through a socket, but it will not be shown because it is not necessary for this example.

And finally, the backup service responsible for managing the described behavior.
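Sketched (ContentSender is a hypothetical stand-in for the socket class mentioned above):

import java.io.IOException;

public class BackupService {

    private FileUtils fileUtils = new FileUtils();
    private ContentSender contentSender = new ContentSender();

    public void backup(String path, String content) {
        try {
            fileUtils.createFileWithContent(path, content);
        } catch (IOException e) {
            // Disk full or any other I/O failure: fall back to the remote server.
            contentSender.send(content);
        }
    }
}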

And now, testing time. First of all, a brief introduction to Byteman.

Byteman is a tool which allows you to insert or modify code in an application at runtime. These modifications can be used to inject code into your compiled application, causing unusual or unexpected operations (aka fault injection).

Byteman uses a clear, simple scripting language, based on a formalism called Event Condition Action (ECA) rules, to specify where, when and how the original Java code should be transformed.

An example of an ECA script is:
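A sketch (the class and method names are taken from this example):

RULE throw IOException when writing to disk
CLASS FileUtils
METHOD createFileWithContent
AT INVOKE write
IF true
DO throw new java.io.IOException()
ENDRULE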

But Byteman also supports annotations, and in my opinion annotations are a better approach than a script file, because just by looking at your test case you can understand exactly what you are testing. Otherwise you must switch context from the test class to the script file to understand what you are testing.

So let's create an integration test that validates that when an IOException is thrown while writing content to disk, the data is sent to a server.


See that BMUnitRunner (a special JUnit runner that comes with Byteman) is required.

The first test, called aFileWithContentShouldBeCreated, is a standard test that writes Hello world into the backup file.

But the second one, dataShouldBeSentToServerInCaseOfIOException, has a BMRule annotation which contains when, where and what code should be injected. The first parameter is the name of the rule, in this case a description of what we are going to do (throwing an IOException). The next attributes, targetClass and targetMethod, configure when the injected code should be added; in this case, when the FileUtils.createFileWithContent method is called. The next attribute, targetLocation, is the location where the code is inserted; in our case, where the createFileWithContent method calls the write method of BufferedWriter. And finally, what to do, which in this test is obviously throwing an IOException.
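A sketch of that second test (assertions against the fake server omitted; the @BMRule attribute names are the real BMUnit API):

import org.jboss.byteman.contrib.bmunit.BMRule;
import org.jboss.byteman.contrib.bmunit.BMUnitRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(BMUnitRunner.class)
public class BackupServiceTest {

    @Test
    @BMRule(name = "throw IOException when writing to disk",
            targetClass = "FileUtils",
            targetMethod = "createFileWithContent",
            targetLocation = "AT INVOKE write",
            action = "throw new java.io.IOException()")
    public void dataShouldBeSentToServerInCaseOfIOException() {
        BackupService backupService = new BackupService();
        backupService.backup("/tmp/backup.txt", "Hello world");
        // assert here that the content reached the fake remote server
    }
}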

So now you can go to your IDE and run them, and all tests should pass; but if you run them through Maven using the Surefire plugin, the tests will not work. To use Byteman with Maven, the Surefire plugin should be configured in a specific way.


The first important thing is adding the tools jar as a dependency. This jar provides the classes needed to dynamically install the Byteman agent.

In the Surefire plugin configuration it is important to set useManifestOnlyJar to false, to ensure that the Byteman jar appears in the classpath of the test JVM. Also see that we are defining empty environment variables (BYTEMAN_HOME and org.jboss.byteman.home). This is because, when loading the agent, the BMUnit package uses the environment variable BYTEMAN_HOME or the system property org.jboss.byteman.home to locate byteman.jar, but only if it is a non-empty string; otherwise it scans the classpath to locate the jar. Because we want to ensure that the jar added in the dependency section is used, we are overriding any other configuration present on the system.

And now you can run mvn clean test, and the two tests are successful too.

See how Byteman opens a new world for our integration tests: now we can easily test unusual exceptions like communication errors, input/output exceptions or out-of-memory errors. Moreover, because we are not mocking FileUtils, we are executing real code; for example, in our second test we run a few lines of the FileUtils object until the write method is reached. If we had mocked the FileUtils class, those lines would not have been executed. Thanks to fault injection, our code coverage improves.

Byteman is more than what I have shown you here; it also has built-ins designed for testing in multithreaded environments, parameter binding, and a number of location specifiers, to cite a few things.

I hope you have found this post useful and that it helps you test the rare conditions of your classes.

Download Code
Music: http://www.youtube.com/watch?v=WlBiLNN1NhQ

Monday, February 27, 2012

For everything I long to do, No matter when or where or who, Has one thing in common too, It's a, it's a, it's a, it's a sin (It's a Sin - Pet Shop Boys)



Usually when you start a new project it will contain several subprojects: for example, one with the core functionality, another one with the user interface, and the acceptance tests could be yet another one.

In this screencast post I am going to show you how to create a multimodule Maven project using the M2 Eclipse plugin.

This is the first video I have done; I hope you find it really useful, and I will try to alternate between blog posts and video posts.


Thursday, February 23, 2012

If there ain't all that much to lug around, Better run like hell when you hit the ground. When the morning comes. (This Too Shall Pass - Ok Go)



Javascript has become much more important to interactive website development than it was five years ago. With the advent of HTML 5 and new Javascript libraries like jQuery and all the libraries that depend on it, more and more functionality is implemented with Javascript on the client side, not only for validating input forms, but as a UI builder or a Restful interface to the server side.

With the growing use of Javascript, new testing frameworks have appeared too. We could cite a lot of them, but in this post I am going to talk about only one, called Jasmine.

Jasmine is a BDD framework for testing Javascript code. It does not depend on any other Javascript framework, and uses a really clean syntax, similar to the xUnit frameworks. See the next example:
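For instance, a spec could look like this (Calculator is a hypothetical object under test):

describe("Calculator", function() {

    var calculator;

    beforeEach(function() {
        calculator = new Calculator();
    });

    it("should add two numbers", function() {
        expect(calculator.add(1, 2)).toEqual(3);
    });
});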


To run Jasmine, you simply point your browser to a SpecRunner.html file, which contains references to the scripts under test and to the spec scripts. An example of a SpecRunner is shown here:


From my point of view, Javascript has become so popular thanks to jQuery, which has greatly simplified the way we write Javascript code. And you can also test jQuery applications with Jasmine using the Jasmine-jQuery module, which provides two extensions for testing:

  • a set of matchers for the jQuery framework, like toBeChecked(), toBeVisible(), toHaveClass(), ...
  • an API for handling HTML fixtures, which enables you to load HTML code to be used by tests.
So with Jasmine you can test your Javascript applications; but we still have one small big problem: we must launch all the tests manually by opening the SpecRunner page in a browser. But don't worry, the jasmine-maven-plugin exists. It is a Maven plugin that runs Jasmine spec files during the test phase automatically, without the need to write the SpecRunner boilerplate file.


So I suppose you want to start coding. We are going to create a simple jQuery plugin in the standard Maven war layout, where Javascript files go to src/main/webapp/js, css to src/main/webapp/css, and Javascript tests to src/test/javascript. Of course this directory structure is fully configurable; for example, if your project were a pure Javascript project, src/main/javascript would be a better place. The next image shows the directory layout.



Let's start. First of all we are going to create a css file which defines a red class. Not complicated code:


The next step is creating a js file containing the jQuery plugin code. It is a simple plugin that adds the red class to the affected element.

And finally the html code that uses the previous functionality. Not much secret: a div element modified by our jQuery plugin.

Now it is time for testing. Yes, I know, write tests first and then the business code, but I thought it would be more appropriate to show the code to test first.

So let's write the Jasmine test file.

The first thing to do is add a description (the behaviour) of what we are going to test, with the describe function. Then with beforeEach we define the function we want to execute before each test execution (like the @Before JUnit annotation). In this case we set the fixture for testing the plugin code; you can set an html file as a template or define the html inline, as done here.

And finally the test, written inside the it function. Our test validates that the div element with id content, defined in the fixture, contains a class attribute with value red after running the redColor function. See how we are using the jasmine-jquery toHaveClass matcher.
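Put together, the spec might read like this (redColor is our plugin; setFixtures and toHaveClass come from jasmine-jquery):

describe("redColor jQuery plugin", function() {

    beforeEach(function() {
        // Inline HTML fixture, loaded before each test.
        setFixtures('<div id="content"></div>');
    });

    it("should add red class to affected element", function() {
        $('#content').redColor();
        expect($('#content')).toHaveClass('red');
    });
});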


Now we have our Javascript test written, and it is time to run it; but instead of using the SpecRunner file, we are going to have Maven execute the Jasmine tests during the test phase.

Let's see how to configure the jasmine-maven-plugin.

The first thing to do is register the plugin in the pom.

And then configure the plugin with the required parameters. With the first two parameters (jsSrcDir and jsTestSrcDir) we set the Javascript locations of the production code and the test code. Since we are writing tests for a jQuery plugin in Jasmine, both the jquery and jasmine-jquery libraries should be imported into the generated SpecRunner, and this is accomplished with the preloadSources tag.

All these parameters will change depending on your project, but in case you are creating a Maven war project, this layout is enough.

And now you can run Maven by typing:

mvn clean test

And the next console output should be printed:


I think we have integrated Javascript tests into Maven in an easy and clean way, and now our continuous integration server (Jenkins or Hudson) will run the Javascript tests too. If you are planning to set up a continuous delivery system for your next project, and that project will contain Javascript files, consider using Jasmine as your BDD tool, because it fits perfectly with Maven.

I hope you have found this post useful.

Download code

Music: http://www.youtube.com/watch?feature=player_embedded&v=qybUFnY7Y8w#!

Thursday, February 16, 2012

Party rock is in the house tonight, Everybody just have a good time, And we gon' make you lose your mind, Everybody just have a good good good time. (Party Rock Anthem - LMFAO)




Redmine is a free and open source, flexible, web-based project management and bug-tracking tool, written using the Ruby on Rails framework.

Redmine supports multiple projects, each with its own wiki, forum, time tracker and issue management.

Moreover, Redmine implements a plugin platform, so it can be customized depending on your requirements. Plugins exist for working with Kanban or Scrum, for notifications, or for reports.

What I really like about Redmine is that, although it does not fix the way you must work, it contains enough options to support any kind of project management approach.

Redmine can be installed in different ways:
  • Using WEBrick (not recommended in production environments).
  • Running with Mongrel and FastCGI.
  • Using Passenger.
  • Or packaging Redmine into a war and deploying it into a Java container like Tomcat or Glassfish.
In this post I am going to show you how to package Redmine 1.3 into a war file so it can be executed in Tomcat 7 on Linux. In theory it should also work with Glassfish, JBoss, or any other OS.

First of all, download JRuby 1.6.6, so open a terminal:

wget http://jruby.org.s3.amazonaws.com/downloads/1.6.6/jruby-bin-1.6.6.tar.gz

Then decompress the downloaded file and move it to the /usr/share directory.

tar xvzf jruby-bin-1.6.6.tar.gz
sudo mv jruby-1.6.6/ /usr/share/jruby-1.6.6

Then update the environment variables with the JRuby installation directory.

sudo gedit /etc/environment


Finally, execute jruby to check that it has been installed correctly:

jruby -v

JRuby version information should be printed on the console.

The next step is to install the required gems:


Redmine installation

Download Redmine 1.3 and install it in the /usr/share directory:

Redmine requires a database to work. In this case I had already installed MySQL 5, but PostgreSQL is supported too. So let's configure MySQL in Redmine.

cd /usr/share/redmine-1.3.0/config/

The installation comes with a database template configuration file; we are going to rename it and modify it to suit our environment. Moreover, Redmine has different start-up modes (production, development, test). In our case, because we are configuring a production environment, only the production section will be touched.


After this modification, it is time to create the Redmine user and database in MySQL.

mysql -u root -p


Now it is time to initialize Redmine.



The next step is only required because we are installing Redmine 1.3; in later versions (1.4 and beyond) it will not be necessary. Open config/environment.rb and comment out the next line:

config.gem 'rubytree', :lib => 'tree'

Then create the database schema and fill it with default data using the next scripts.


Now we are going to test that Redmine is correctly configured. For this purpose we are going to use WEBrick.


and open a browser at http://localhost:3000 to check the installation.

The Redmine web page will be shown; you can log in with username and password admin/admin.

At this point we have Redmine correctly installed.


Configuring Email

An issue tracker should be able to send mail to the affected users when a new issue is created or modified.

If your mail server requires the TLS security protocol, you should install the action_mailer_optional_tls plugin.

This plugin requires git; if you don't have it installed yet, type:

sudo apt-get install git

and then run the next command in the Redmine directory:

jruby script/plugin install git://github.com/collectiveidea/action_mailer_optional_tls.git

Let’s configure email delivery:

Inside the configuration file you will find common email settings. Depending on your email server these attributes can vary widely, so at this point I am going to show you a simple SMTP server configuration using plain authentication for the production environment. Go to the last line of the configuration.yml file and append the next lines into the production section.

All attributes are self-explanatory.

Before creating the war file, let's check that email is correctly configured. Again we use WEBrick.


Then open a browser at http://localhost:3000 and log in with the admin account.

Adjust the admin email by clicking on the My Account link and, in the Email section, setting the administrator email.

After that we are going to test the email configuration: from the main menu, go to Administration -> Settings -> Email Notifications, add the emission email and click on test email. After a short time, a test message will be sent to the administrator email account.

We have succeeded in installing Redmine; now it is time to package it to be deployed into Tomcat.

Packaging Redmine

Before starting, because of an incompatibility with the installed jruby-rack gem, we should run the next commands to install version 1.0.10 of jruby-rack.

The warble command requires a configuration file. This file is created using the next command:

Edit the Warbler::Config section and configure the config.dirs, config.gems and config.webxml.rails.env entries as:

And finally run:

warble

The Redmine war has been created and is ready to be deployed into Tomcat.


Although we now have a war file, I recommend not deleting the Redmine installation directory, because it can be used in the future to install new plugins or modify any configuration. After a modification, calling the warble command again creates a new war with that change.


I hope you have found this post useful.


Tuesday, January 31, 2012

Giuro per sempre a te, Di viver, morire per te, Se tu sarai con me lo so, Dea Roma, vincerò!



One of the common problems for people who start using Hibernate is performance. If you don't have much experience with Hibernate, you will find out how quickly your application becomes slow. If you enable SQL traces, you will see how many queries are sent to the database that could be avoided with a little Hibernate knowledge. In this post I am going to explain how to use the Hibernate Query Cache to avoid a lot of traffic between your application and the database.

Hibernate offers two caching levels:

  • The first-level cache is the session cache. Objects are cached within the current session, and they are only alive until the session is closed.
  • The second-level cache exists as long as the session factory is alive. Keep in mind that in Hibernate the second-level cache is not a tree of objects; object instances are not cached, instead it stores attribute values.
After this brief introduction (so brief, I know) to the Hibernate cache, let's see what the Query Cache is and how it relates to the second-level cache.

The Query Cache is responsible for caching the combination of a query and the values provided as parameters as the key, and the list of identifiers of the objects returned by the query execution as the value. Note that using the Query Cache requires a second-level cache too, because when the query result is fetched from the cache (that is, a list of identifiers), Hibernate will load the objects from the second level using those cached identifiers.

To sum up, and as a conceptual schema, given the query "from Country where population > :number", after the first execution the Hibernate caches would contain the next fictional values (note that the number parameter is set to 1000):

L2 Cache
[
id:1, {name='Spain', population=1000, ....}
id:2, {name='Germany', population=2000,...}
....
]
QueryCache
[{from Country where population > :number, 1000}, {id:2}]

So before we start using the Query Cache, we need to configure the second-level cache.
First of all you must decide which cache provider you are going to use. For this example Ehcache is chosen, but refer to the Hibernate documentation for the complete list of supported providers.

To configure the second-level cache, set the next Hibernate properties:

hibernate.cache.provider_class = org.hibernate.cache.EhCacheProvider
hibernate.cache.use_structured_entries = true
hibernate.cache.use_second_level_cache = true

And if you are using the annotation approach, annotate the cacheable entities with:

@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)

See that in this case the cache concurrency strategy is NONSTRICT_READ_WRITE, but depending on the cache provider other strategies can be followed, like TRANSACTIONAL, READ_ONLY, ...; take a look at the cache section of the Hibernate documentation to choose the one that best fits your requirements.

And finally, add the Ehcache dependencies:

<dependency>
<groupId>net.sf.ehcache</groupId>
<artifactId>ehcache-core</artifactId>
<version>2.5.0</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-ehcache</artifactId>
<version>3.6.0.Final</version>
</dependency>

Now the second-level cache is configured, but not the query cache; anyway, we are not far from our goal.

Set the hibernate.cache.use_query_cache property to true.

And for each cacheable query, we must call the setCacheable method during query creation:

List<Country> list = session.createQuery("from Country where population > 1000").setCacheable(true).list();

To make the example more practical, I have uploaded a full query cache example with Spring Framework. To see clearly that the query cache works, I have used a public database hosted at ensembl.org. The Ensembl project produces genome databases for vertebrates and other eukaryotic species, and makes this information freely available online. In this example the query to the dna table is cached.

First of all, the Hibernate configuration:


It is a simple Hibernate configuration, using the properties explained previously to configure the second-level cache.

The entity class represents a sequence of DNA.

To try the query cache, we are going to implement a test where the same query is executed multiple times.


We can see that we are returning the first fifty DNA sequences, and if you execute it, you will see that the elapsed time between the creation of the query and the committing of the transaction is printed. As you can imagine, only the first iteration takes about 5 seconds to get all the data; the following ones take only milliseconds.

The foreach line just before the query iteration prints each object identifier through the console. If you look carefully, none of these identifiers is repeated during the whole execution. This fact just goes to show that the Hibernate cache does not save objects but property values, and the object itself is created each time.

Last note, remember that Hibernate does not cache associations by default.

Now, after writing a query, think about whether it will return static data and whether it will be executed often. If that is the case, the query cache is your friend for making Hibernate applications run faster.


Download Code

Music: http://www.youtube.com/watch?v=fw1VJSU92mw

Friday, January 27, 2012

Once upon a time and long ago, I heard someone singing, Soft and low (Distant Melody - Peter Pan)




The Thymeleaf Spring-MVC Maven Archetype creates a web application that uses the Thymeleaf template engine and Spring Framework.

The main goal of Thymeleaf is to provide an elegant and well-formed way of creating HTML 5 templates. Its Standard and SpringStandard dialects allow you to create powerful natural templates that can be correctly displayed by browsers and therefore also work as static prototypes.

You can read more about Thymeleaf at:


When you create an application using this archetype, the generated web application is composed of two html templates in WEB-INF/views: one showing a form using HTML5 and CSS3, and another one listing the inserted data.

Spring controllers are located in the controller package.

The application is internationalized too, using LocaleChangeInterceptor with en_US as the default locale. The properties are in the src/main/resources/locale folder.

And finally, server-side validation is provided by a JSR-303 provider.

The versions of the jars used are:

  • Spring Framework: 3.0.5
  • Thymeleaf: 1.1.2
  • Hibernate-Validator: 4.1.0
  • Slf4j: 1.5.10
  • Servlet-api: 2.5
  • JUnit: 4.9

You can install this archetype from source or from the jar file:

From source:

mvn clean install
mvn archetype:generate -DarchetypeCatalog=local

From jar:


and execute:

mvn install:install-file \
  -DgroupId=com.lordofthejars \
  -DartifactId=thymeleaf-spring-maven-archetype \
  -Dversion=DOWNLOADED_VERSION \
  -Dpackaging=jar \
  -Dfile=PATH_TO_JAR_YOU_DOWNLOADED/thymeleaf-spring-maven-archetype-VERSION.jar


The Maven repository is located at


and the source code is stored at https://github.com/maggandalf/thymeleaf-spring-maven-archetype

For any question regarding how to use this archetype, or any issue or improvement, do not hesitate to contact me or open a new issue on github.

I hope this archetype helps you start a new project using the Thymeleaf template engine.

Music: http://www.youtube.com/watch?v=7EaGSocm5dc


Monday, January 23, 2012

Nevermind, I'll find someone like you, I wish nothing but the best for you too, Don't forget me, I beg, I remember you said: Sometimes it lasts in love but sometimes it hurts instead (Someone Like You - Adele)



This week I have reached 100K visits on the blog. I would simply like to say thank you very much to all the people who have come here and found useful information; my intention is to write posts that make developers' lives easier.

I would especially like to thank all the DZone folks, the TheServerSide people, the SpringSource bloggers, JavaCodeGeeks, and of course my Twitter followers; all of them have helped reach this number of visits.

To celebrate this event, the alexsotob blog has been converted to lordofthejars, so now you can access this blog through alexsotob.blogspot.com or www.lordofthejars.com.

Thank you very much again for reading my blog; my next challenge is to reach 250K visits.

See you next time, keep reading,

Alex.

Music: http://www.youtube.com/watch?v=hLQl3WQQoQ0&ob=av3e

Monday, January 2, 2012

Sábado na balada, A galera começou a dançar, E passou a menina mais linda, Tomei coragem e comecei a falar (Ai se eu te pego - Michel Teló)



Backbone is a Javascript library that provides a clean way to:

  • define models in Javascript.
  • deal with collections using a rich API.
  • define views with declarative event handling and template support.
  • interface with Rest architectures.

In this post I am going to use the Rest webapp of my old post, where I only explained the server side, to show you how to implement the client side using Backbone.

So the first thing to do is create a web structure in the IDE; for this example I will reuse the same project as in the RestServer post.

Then download backbone.js and its dependencies and copy them to the webapp/resources/js directory:


Open servlet-context.xml and add the root and js folders as static resources.

Then go to webapp/resources and create index.html.


Add all the Javascript libraries to the page but, most importantly, add them in the same order as they appear here; if not, some Javascript failures will appear in your browser and the application will not work.

Note that some content is created directly using HTML tags rather than Backbone views. Normally page layout, headers, footers, ... are treated as static content and are written directly into the page.

Let's start modeling our data, in this case a character with its attributes (id, name, url, isHuman...).


Models in Backbone have a particularity that makes them really useful: they are an implementation of the Active Record pattern, so when you create a model you get CRUD operations against a Rest web service. For example, when you call the save method of your model, Backbone converts the model to JSON and sends the result to the server as an HTTP POST (if the model has no id) or as an HTTP PUT (an update). Other available operations are fetch (load) and destroy (delete).

See how in this case we have created a Character model setting the urlRoot property. This property is used to build the endpoint to the Rest server. As long as the endpoint returns JSON for a single character, the properties will be merged into the model.
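A sketch of the model and collection definitions (the /characters endpoint path is an assumption):

var CharacterModel = Backbone.Model.extend({
    // Used to build the endpoint: POST /characters on save of a new
    // model, PUT /characters/:id on update, and so on.
    urlRoot: '/characters'
});

var CharacterCollection = Backbone.Collection.extend({
    model: CharacterModel,
    url: '/characters'
});

var characters = new CharacterCollection();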

And the same applies to collections. When you call the fetch method on the characters variable, all the characters returned by calling GET .../characters are stored in it.

See how easy it is to implement a Rest client using Backbone. And I suppose you are wondering: where is the attributes' definition? Don't worry, with Backbone it is not necessary to define the attributes explicitly at definition time.

Now it is time for views. As I mentioned at the beginning of the post, Backbone has template support. For this example two templates are going to be used: one for filling in a new character and another for showing character information.


To define a template you must use the <script> tag with type text/template instead of text/javascript, plus an id. The structure is similar to JSP (embedded HTML code, <%= %> for printing variables and <% %> for logical structures).


The next step is creating the views.

For this example we are going to create three views, one for each template, and one that will be a composition of both.

Let's start with the view responsible for showing the information of one character.

At line 3 we load the template content defined previously with id show-characters into the showCharactersTemplate object.

The next important line is number 6. There we define a function that will be invoked each time we want to render the view (one time per character). See how we pass a model object to the template (trust me, it is not defined yet, but it will be there). To fill the template variables, the template engine requires data in JSON format, and a Backbone model has a toJSON method. Finally, the generated html code is saved into the el attribute.

The next view is responsible for showing the character form and dealing with the submit action.

This view shows event handling in Backbone. See line 9: we define that when the user clicks on an "a" tag element with class submit (an element defined in the new-character template), the createOnEnter function is executed.

The createOnEnter method does three things:

  • gets the values from the template form.
  • instantiates the CharacterModel class. See that this is the place where the model attributes are defined.
  • calls the save method. This method sends the object data to the endpoint defined on the server side. Because this operation is asynchronous, a callback is registered, so when a success response is received, the character is added to the characters list.

And the last view is a composite of the two previous views; it acts as a controller between adding a new character and showing all the inserted characters:

The initialize function does all the required initializations so the UI can be refreshed asynchronously. In the first two lines we bind the add and reset events of the characters collection to a method; for example, each time the add function is called, addCharacter is invoked.

The next line is used to populate data into the collection. When fetch is called, an HTTP GET is sent to the defined endpoint, the data is retrieved, and the collection's add method is called for each result.

The addCharacter function is responsible for creating one view for each character. There is not much secret: we create a CharacterView and append the result of calling its render method to the html element with id character-list.

And finally, the render function is responsible for rendering the CharacterFormView.

See that the page is only loaded the first time we access it; subsequent updates are done through DOM manipulation.

To finish, I have embedded the whole html file explained here, so you can see a global overview of all the pieces used. Also feel free to download the full Eclipse project. I hope you find this post useful.


Download Code

Music: http://www.youtube.com/watch?v=q1Ebi9cSn48

Wednesday, December 14, 2011

Elle ne me quitte pas d'un pas, fidèle comme une ombre. Elle m'a suivi ça et là, aux quatre coins du monde. Non, je ne suis jamais seul avec ma solitude (Ma Solitude - Georges Moustaki)



In this post I have uploaded my Devoxx presentation. This year I was at Devoxx as a speaker. My presentation was about how to speed up HTML 5 applications, and Javascript and CSS in general, using aggregation and minification.


You can also visit two entries of my blog where I talk about the same theme.
Links to the technologies I talked about:

Finally, I want to say thank you to all the people who came to watch me, and of course to the Devoxx folks for organising such an amazing event.

I hope you have discovered a great way to speed up your web applications.

Music: http://www.youtube.com/watch?v=qSZKO5K2eTE

Friday, December 9, 2011

Far away, long ago, glowing dim as an ember, Things my heart used to know, things it yearns to remember (Once Upon a December - Anastasia)



You never develop code without version control, so why do you develop your database without it? Flyway is a database-independent library for tracking, managing and applying database changes.

Personally, I find that using a database migration tool like Flyway is a must, because it covers two scenarios of our software life-cycle:

  • Multiple developers developing an application with continuous integration.
  • Multiple clients, each one with a different version of the production code.

Let's start with the first point. If your project is big enough, there will be more than one developer working on it, each one developing a new feature. Each feature may require a database update (adding a new table, a new constraint, ...), so each developer creates a .sql file with the required changes.

After each developer finishes their work, these changes are merged into the main branch, and the integration/acceptance tests are executed on a test machine. And the problem is obvious: which process updates the testing database? And how? Does the QA department execute the sql files manually? Or do we develop a program that executes these updates automatically? And in what order must they be executed? The same problems arise in the production environment.

The second point only applies if your application is distributed across multiple clients. At this point the problem is accentuated further, because each client may have a different software version. Hence, when an update is required by a client (for example because of a bug), you should know which database version is installed and which changes must be applied to get the expected database.

Don't worry, Flyway comes to rescue you and will help answer all the previous questions. Let me start by explaining some features of Flyway that, in my opinion, make it a good tool.

  • Automatic migration: Flyway will update the schema from any version to the latest one. Flyway can be executed from the command line (usable in non-JVM environments), from an Ant script, from a Maven script (to update integration/acceptance test environments) or within the application (when the application is starting up).
  • Convention over configuration: Flyway comes with a default configuration, so no configuration is required to start using it.
  • Plain SQL scripts or Java classes: to execute updates, you can use plain SQL files or Java classes for advanced migrations.
  • Highly reliable: safe for cluster environments.
  • Schema clean: Flyway can clean the existing schema, so an empty installation is produced.

The conventions to be followed, if they are not explicitly modified, are:

  • Plain SQL files go in the db/migration directory inside the src/main/resources structure.
  • Java classes go in the db.migration package.
  • Files (SQL and Java) must follow the next naming convention: V<version>[__<description>]. Each part of the version number is separated by dots (.) or underscores (_), and if a description is provided, it must be preceded by two underscores. A valid example is V1_1_0__Update.sql.

So let's see Flyway in action. In this application I am going to focus only on how to use Flyway; I am not going to create any DAO, DTO or Controller classes, only the database migration part.

Imagine we are going to develop a small application using Spring Framework that will allow us to register authors and the books they have written.

The first version will contain two tables, Author and Book, related with a one-to-many relationship.

The first step is registering Flyway in the Spring application context. Flyway is the main class and requires a javax.sql.DataSource instance; its migrate method is responsible for starting the migration process.
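In code, the registration boils down to something like this (a sketch; in a Spring XML context this is typically a Flyway bean with migrate configured as the init-method):

Flyway flyway = new Flyway();
flyway.setDataSource(dataSource);
// Detects the current schema version and applies any pending
// migrations found in db/migration.
flyway.migrate();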


See that there is no secret. Only be careful, because if your project uses JPA or an ORM framework for persistence, you should configure it to avoid the automatic creation of tables, since Flyway is now responsible for managing the database structure. And because of that, the creation of the SessionFactory (in the case of Hibernate) or the EntityManagerFactoryBean (in the case of JPA) should depend on the Flyway bean.

Flyway is configured. Each time you start the application, it will check whether the configured datasource requires an update or not.

And now let's write the first SQL migration. Create the db/migration directory in src/main/resources, and create a file called V1__Initial_version.sql with the next content:


This script creates the Author and Book tables with their respective attributes.

And if you run the next JUnit test, both tables are created in the database.


Take a look at your console, where the next log messages have appeared:

10:33:49,512  INFO glecode.flyway.core.migration.DbMigrator: 119 - Current schema version: null
10:33:49,516  INFO glecode.flyway.core.migration.DbMigrator: 206 - Migrating to version 1
10:33:49,577 INFO glecode.flyway.core.migration.DbMigrator: 188 - Successfully applied 1 migration (execution time 00:00.085s).


And if you open your database:


Note that Flyway has created a table where it annotates all the updates that have been executed (SCHEMA_VERSION), and the last insert is a "Flyway insert" marking the current version.

Then your first version of the application is distributed across the world.

And you can start developing version 1.1.0 of the application. For the next release, an Address table must be added with a relationship to Author.


As before, create a new SQL file, V1_1_0__AddressTable.sql, in the db/migration folder.


And run the next unit test:


Your database will be upgraded to version 1.1.0. Also take a look at the log messages and the database:

11:27:30,149  INFO glecode.flyway.core.migration.DbMigrator: 119 - Current schema version: 1
11:27:30,152  INFO glecode.flyway.core.migration.DbMigrator: 206 - Migrating to version 1.1.0
11:27:30,191 INFO glecode.flyway.core.migration.DbMigrator: 188 - Successfully applied 1 migration (execution time 00:00.053s).



The new table is created, and a new entry is inserted into the SCHEMA_VERSION table marking that the current database version is 1.1.0.

When your 1.1.0 application is distributed to your clients, Flyway will be responsible for updating their databases without losing data.


Previously I have mentioned that Flyway also supports Java classes for advanced migrations. Let's see how.

Imagine that in your next release authors can upload their personal photo, and you decide to store it as a blob attribute in the Author table. The problem resides with the already-created authors, because you should set some data for this attribute. Your marketing department decides that authors inserted prior to this version will have a photo of Spock.


So now you must alter the Author table and, moreover, update a field with a photo. You can see clearly that for this update you will need something more than a simple SQL file, because you need to add a new column and update it with a chunk of bytes. This could be accomplished using only one Java class, but to show a particularity of Flyway, the problem will be handled with one SQL file and one Java class.

First of all, a new SQL script adding the binary field is created. This new feature will be implemented in version 2.0.0, so the script file is named V2_0_0__AddAvatar.sql.


The next step is developing a Java migration class. Create a new package db.migration in src/main/java. Notice that this class cannot be named V2_0_0__AddAvatar.java, because Flyway would try to execute two different migrations with the same version, and obviously Flyway would detect the conflict.

To avoid this conflict you can follow many different strategies, but in this case we are going to add a letter as a version suffix, so the class is named V2_0_0_A__AddAvatar.java instead of V2_0_0__AddAvatar.java.


Before running the previous unit test, open the testdb.script file and add the next line just under the SET SCHEMA PUBLIC command.

INSERT INTO AUTHOR(ID, FIRSTNAME, LASTNAME, BIRTHDATE) VALUES(1, 'Alex', 'Soto', null);

And when you run the unit test, the next lines are logged:


20:21:18,032  INFO glecode.flyway.core.migration.DbMigrator: 119 - Current schema version: 1.1.0
20:21:18,035  INFO glecode.flyway.core.migration.DbMigrator: 206 - Migrating to version 2.0.0
20:21:18,088  INFO glecode.flyway.core.migration.DbMigrator: 206 - Migrating to version 2.0.0.A
20:21:18,114 INFO glecode.flyway.core.migration.DbMigrator: 190 - Successfully applied 2 migrations (execution time 00:00.094s).

And if you open the updated database, the next rows have been added:


See how all the previous authors have the avatar column filled with data.

Note that you no longer have to worry about database migrations; your application is packaged and delivered to all your clients regardless of the version they had installed, and Flyway executes only the required migration files depending on the installed version.

If you are not using Spring, you can update your database using the Flyway Maven plugin. The next piece of the pom shows you how to execute the migration during the test-compile phase; by default the plugin is executed during the pre-integration-test phase.


Thanks to the Maven plugin, we can configure our continuous integration system so that all environments (test, production, ...) are updated during the deployment of the application.

I hope Flyway helps you have a better life as a developer.



Music: http://www.youtube.com/watch?v=oyUBdLm3s9U