Tuesday, June 26, 2012

Bye, Bye, 5 * 60 * 1000 //Five Minutes, Bye, Bye


Llueve, llueve, y mientras nos mojamos como tontos. LLueve, llueve, y en un simple charco a veces nos ahogamos. (Llueve - Melendi)

In this post I am going to talk about a class that was first introduced in Java 1.5 and that I have used a lot; yet, talking with some people, I found out that they didn't know it exists. This class is TimeUnit.

The TimeUnit class represents time durations at a given unit of granularity, and it also provides utility methods to convert between units and to perform timing delays.

TimeUnit is an enum with seven levels of granularity: DAYS, HOURS, MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS and SECONDS.

The first feature that I find useful is the convert method. With this method you can say goodbye to the typical:

     private static final int FIVE_SECONDS_IN_MILLIS = 1000 * 5;

to something like:

     long duration = TimeUnit.MILLISECONDS.convert(5, TimeUnit.SECONDS);

But equivalent operations also exist as more readable methods. For example, the same conversion could be expressed as:

     long duration = TimeUnit.SECONDS.toMillis(5);

The second really useful set of operations is the one related to pausing the current thread.

For example, you can sleep the current thread with the method:

     TimeUnit.MINUTES.sleep(5);

instead of:

     Thread.sleep(5*60*1000);

But you can also use it with join and wait operations with a timeout.

     Thread t = new Thread();
     TimeUnit.SECONDS.timedJoin(t, 5);

So as we can see, the TimeUnit class is designed with expressiveness in mind: you can do the same things as before, but in a more readable way. Notice that if you use a static import, the code will be even more readable.
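For example, a minimal sketch of the static-import style (equivalent to the snippets above):

     import static java.util.concurrent.TimeUnit.MINUTES;

     // With the static import the intent reads almost like prose.
     MINUTES.sleep(5);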

Keep Learning,

Tuesday, June 19, 2012

NoSQLUnit 0.3.0 Released


Se você me olhar vou querer te pegar, E depois namorar curtição, Que hoje vai rolar... (Balada Boa - Gustavo Lima)


Introduction

Unit testing is a method by which the smallest testable part of an application is validated. Unit tests must follow the FIRST rules: Fast, Isolated, Repeatable, Self-Validating and Timely.

It is strange to think about a JEE application without a persistence layer (typically relational databases, or the newer NoSQL databases), so it is interesting to write unit tests for the persistence layer too. When we write unit tests for the persistence layer, we should focus on not breaking two main concepts of the FIRST rules: the fast and the isolated ones.

Our tests will be fast if they don't access the network or the filesystem, and in the case of persistence systems, network and filesystem are the most used resources. In the case of RDBMS (SQL), many Java in-memory databases exist, like Apache Derby, H2 or HSQLDB. These databases, as their name suggests, are embedded into your program and data is stored in memory, so your tests remain fast. The problem is with NoSQL systems, because of their heterogeneity. Some systems follow a document approach (like MongoDb), others a column approach (like HBase), or a graph approach (like Neo4j). For this reason an in-memory mode must be provided by each vendor; there is no generic solution.

Our tests must be isolated from each other. It is not acceptable that one test method modifies the result of another test method. In persistence tests this scenario occurs when a previous test method inserts an entry into the database and the next test method execution sees the change. So before the execution of each test, the database should be in a known state. Note that if your test finds the database in a known state, the test will be repeatable; if test assertions depend on previous test executions, each execution will be unique. For homogeneous systems like RDBMS, DbUnit exists to keep the database in a known state before each execution. But there is no DbUnit-like framework for heterogeneous NoSQL systems.

NoSQLUnit resolves this problem by providing a JUnit extension which helps us manage the lifecycle of NoSQL systems and also takes care of keeping databases in a known state.



NoSQLUnit

NoSQLUnit is a JUnit extension that makes writing unit and integration tests of systems using a NoSQL backend easier. It is composed of two sets of rules and a group of annotations.

The first set of rules are those responsible for managing the database lifecycle; there are two for each supported backend.

  • The first one (when possible) is the in-memory mode. This mode takes care of starting and stopping the database system in "in-memory" mode. It will typically be used during unit test execution.

  • The second one is the managed mode. This mode is in charge of starting the NoSQL server, but as a remote process (on the local machine), and stopping it. It will typically be used during integration test execution.


The second set of rules are those responsible for keeping the database in a known state. Each supported backend has its own, and it can be understood as a connection to the defined database which is used to execute the operations required to keep the system stable.

Note that because NoSQL databases are heterogeneous, each system will require its own implementation.

And finally two annotations are provided, @UsingDataSet and @ShouldMatchDataSet (thank you so much, Arquillian people, for the names), to specify the locations of datasets and expected datasets.


MongoDb Example

Now I am going to walk through a very simple example of how to use NoSQLUnit; for a full explanation of all the features provided, please read the documentation online or download it in pdf format.

To use NoSQLUnit with MongoDb you only need to add the next dependency:
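The original dependency snippet was embedded externally; a sketch of it, using the NoSQLUnit coordinates published by the project (double-check the version you need), would be:

    <dependency>
        <groupId>com.lordofthejars</groupId>
        <artifactId>nosqlunit-mongodb</artifactId>
        <version>0.3.0</version>
        <scope>test</scope>
    </dependency>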

The first step is defining which lifecycle management strategy is required for your tests. Depending on the kind of test you are implementing (unit test, integration test, deployment test, ...) you will require an in-memory approach, a managed approach or a remote approach.

For this example we are going to use the managed approach, using the ManagedMongoDb rule, but note that in-memory MongoDb management is also supported (see the documentation for how).

The next step is configuring the MongoDb rule in charge of keeping the MongoDb database in a known state by inserting and deleting defined datasets. You must register the MongoDbRule JUnit rule class, which requires a configuration parameter with information like host, port or database name.

To make developer's life easier and code more readable, a fluent interface can be used to create these configuration objects.

Let's see the code:

The first thing is a simple POJO class that will be used as the model class:
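The original class was embedded as a gist; a minimal sketch of such a model class (the names are illustrative) could be:

    public class Book {

        private String title;
        private int numberOfPages;

        public Book(String title, int numberOfPages) {
            this.title = title;
            this.numberOfPages = numberOfPages;
        }

        public String getTitle() {
            return title;
        }

        public int getNumberOfPages() {
            return numberOfPages;
        }
    }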

The next class is the business class responsible for managing access to the MongoDb server:
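Again, the gist is unavailable; a sketch of such a manager, using the MongoDB Java driver API of that era (class names are illustrative), might look like:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;

    public class BookManager {

        private final DBCollection booksCollection;

        public BookManager(DBCollection booksCollection) {
            this.booksCollection = booksCollection;
        }

        public void create(Book book) {
            // Map the POJO to a DBObject and insert it into the collection.
            DBObject dbBook = new BasicDBObject("title", book.getTitle())
                    .append("numberOfPages", book.getNumberOfPages());
            booksCollection.insert(dbBook);
        }
    }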


And now it is time for testing. In the next test we are going to validate that a book is inserted correctly into the database.

See that, first of all, we create a managed connection to the MongoDb server using the @ClassRule annotation. In this case we configure the MongoDb path programmatically, but it can also be set from the MONGO_HOME environment variable. See the documentation for a full description of all available parameters.

This rule is executed when the test class is loaded and starts a MongoDb instance. It also shuts the server down when all tests have been executed.

The next rule is executed before each test method and is responsible for keeping the database in a known state. Note that we only configure the working database, in this case the test one.

And finally we annotate the test method with @UsingDataSet, indicating where to find the data to be inserted before the execution of each test, and with @ShouldMatchDataSet, locating the expected dataset.
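Putting the pieces together, a sketch of such a test (the builder imports follow the NoSQLUnit 0.3.0 documentation; verify them against your version, and the collection lookup is omitted for brevity):

    import static com.lordofthejars.nosqlunit.mongodb.ManagedMongoDb.MongoServerRuleBuilder.newManagedMongoDbRule;
    import static com.lordofthejars.nosqlunit.mongodb.MongoDbConfigurationBuilder.mongoDb;

    import org.junit.ClassRule;
    import org.junit.Rule;
    import org.junit.Test;

    import com.lordofthejars.nosqlunit.annotation.ShouldMatchDataSet;
    import com.lordofthejars.nosqlunit.annotation.UsingDataSet;
    import com.lordofthejars.nosqlunit.core.LoadStrategyEnum;
    import com.lordofthejars.nosqlunit.mongodb.ManagedMongoDb;
    import com.lordofthejars.nosqlunit.mongodb.MongoDbRule;

    public class WhenANewBookIsCreated {

        // Starts one MongoDb instance for the whole test class.
        @ClassRule
        public static ManagedMongoDb managedMongoDb =
                newManagedMongoDbRule().mongodPath("/opt/mongo").build();

        // Seeds and verifies the "test" database around each test method.
        @Rule
        public MongoDbRule mongoDbRule = new MongoDbRule(mongoDb().databaseName("test").build());

        @Test
        @UsingDataSet(locations = "initialData.json", loadStrategy = LoadStrategyEnum.CLEAN_INSERT)
        @ShouldMatchDataSet(location = "expectedData.json")
        public void book_should_be_inserted_into_repository() {
            // Create a Book through the BookManager sketched above.
        }
    }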



We are setting an initial dataset in the file initialData.json, located on the classpath at com/lordofthejars/nosqlunit/demo/mongodb/initialData.json, and an expected dataset called expectedData.json.


Final Notes

Although NoSQLUnit is at an early stage, the MongoDb part is almost finished. The next releases will bring new features and of course support for new databases; the next supported NoSQL engines will be Neo4j, Cassandra, HBase and CouchDb.

Also read the documentation, where you will find a full explanation of each feature shown here.

And finally, any suggestion, recommendation or advice you have will be welcomed.


Stay In Touch


Email: asotobu at gmail.com

Blog: Lord Of The Jars
Twitter: @alexsotob
Github: NoSQLUnit Github

Keep Learning,
Alex

Full Code
Music: http://www.youtube.com/watch?v=8y5CbeHY7X0

Friday, June 08, 2012

Testing Abstract Classes (and Template Method Pattern in Particular)



Sick at heart and lonely, deep in dark despair. Thinking one thought only, where is she tell me where. (Heart Full of Soul - The Yardbirds).

From wikipedia "A template method defines the program skeleton of an algorithm. One or more of the algorithm steps can be overridden by subclasses to allow differing behaviors while ensuring that the overarching algorithm is still followed".

Typically this pattern is composed of two or more classes: an abstract class providing template methods (non-abstract) that call abstract methods implemented by one or more concrete subclasses.

Often the template abstract class and its concrete implementations reside in the same project but, depending on the scope of the project, the concrete classes may be implemented in another project.

In this post we are going to see how to test the template method pattern when the concrete classes are implemented in an external project or, more generally, how to test abstract classes.

Let's see a simple example of the template method pattern. Consider a class which is responsible for receiving a vector of integers and calculating its Euclidean norm. These integers could be received from multiple sources, and it is left to each project to provide a way to obtain them.

The template class looks like:
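The original gist is not available; a sketch of the template class described in the text could be:

    public abstract class EuclideanNormCalculator {

        // Template method: the skeleton of the algorithm is fixed here.
        public double calculate() {
            int[] vector = read();
            double sumOfSquares = 0;
            for (int value : vector) {
                sumOfSquares += value * value;
            }
            return Math.sqrt(sumOfSquares);
        }

        // The step left to subclasses: where the integers come from.
        protected abstract int[] read();
    }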

Now another project could extend the previous class and provide a concrete calculator by implementing the read() method.

The developer who writes a concrete implementation will test only the read() method; he can "trust" that the developer of the abstract class has tested the non-abstract methods.

But how are we going to write unit tests for the calculate method if the class is abstract and an implementation of the read() method is required?

The first approach could be creating a fake implementation:
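A hypothetical fake for the class sketched above, used only by tests, would simply hard-code the vector:

    public class FakeEuclideanNormCalculator extends EuclideanNormCalculator {

        @Override
        protected int[] read() {
            return new int[] { 3, 4 }; // norm is exactly 5
        }
    }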

This is not a bad approach, but it has some disadvantages:
  • Tests will be less readable: readers must know about the existence of these fake classes and know exactly what they are doing.
  • As a test writer you will spend time implementing fake classes; in this case it is simple, but your project could have more than one abstract class without an implementation, or abstract classes with more than one abstract method.
  • The behaviour of fake classes is "hard-coded".
A better way is using Mockito to mock only the abstract method while the real implementations of the non-abstract methods are called; see the sketch below.
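A minimal sketch of that test, assuming it lives in the same package as the template class so the protected read() method is visible:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.CALLS_REAL_METHODS;
    import static org.mockito.Mockito.doReturn;
    import static org.mockito.Mockito.mock;

    import org.junit.Test;

    public class WhenEuclideanNormIsCalculated {

        @Test
        public void norm_should_be_computed_from_read_vector() {
            // Partial mock: real methods are called, only read() is stubbed.
            EuclideanNormCalculator calculator =
                    mock(EuclideanNormCalculator.class, CALLS_REAL_METHODS);

            // doReturn is required because real methods are invoked by default.
            doReturn(new int[] { 3, 4 }).when(calculator).read();

            assertEquals(5.0, calculator.calculate(), 0.0);
        }
    }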


Mockito simplifies the testing of abstract classes by calling real methods and stubbing only the abstract ones. See that in this case, because real methods are called by default, the doReturn schema must be used instead of the typical when() then() structure.

Of course this approach can only be used if your project does not contain a concrete implementation of the algorithm, or if your project is a 3rd-party library used by another project. In the other cases the best way of attacking the problem is by testing the implemented class.

Download sourcecode

Music: http://www.youtube.com/watch?v=9O6eGOu27DA

Thursday, May 24, 2012

I'm guided by this birthmark on my skin, I'm guided by the beauty of our weapons, First we take Manhattan, then we take Berlin (First We Take Manhattan - Leonard Cohen)




On May 23 I was in Berlin as a speaker at LinuxTag. I talked about how to test modern Enterprise Java applications using open source tools.

The presentation abstract was:

From ten years ago to the present, Enterprise Java applications have undergone many changes. We have moved from enterprise applications built with JSP+Servlet and EJB to much more complex applications. Nowadays, with the advent of HTML5 and JavaScript libraries like jQuery, client-side development has changed significantly. With the emergence of web frameworks like Spring MVC or JSF, server-side code has changed considerably compared to the times when each web form was mapped to a Servlet. And the persistence layer has also changed, with the Java Persistence standard and with new database approaches like data grids, key-value stores or document stores.
Moreover, architectural changes have occurred too: REST web applications have grown in popularity, and AJAX is used to create asynchronous web applications. Because the development of Enterprise Java applications has changed during these years, testing frameworks have changed accordingly. The main topic of this speech will be how to test Enterprise Java applications using these new frameworks.
In the first part of this presentation we are going to explore how to test JavaScript written on the client side, how to write unit tests of server-side code, and how to validate the persistence layer. The next part of the presentation will be focused on how to write integration tests on the server side and acceptance tests on full Enterprise Java applications (joining client and server side), with an introduction to testing REST web applications. Finally we will show how to integrate all kinds of tests into your continuous integration system and run acceptance tests in a test environment.

The session will combine theory with interactive practice using only open-source projects.

I have uploaded the slides to SlideShare so you can take a look (sorry for the red and blue colours):

How to Test Enterprise Java Applications
View more presentations from Alex Soto

Also, if you want, you can download the code that was used in the demo sections.

Javascript Unit Testing with JS Test Driver
NoSQL Unit Testing with NoSQLUnit
Integration Tests with Arquillian
Acceptance Tests with Thucydides

Please let me warn you that NoSQLUnit is an open source project that I am developing and it is at an early stage; in the next months the project will take better shape by supporting more NoSQL systems like Neo4j, Cassandra or CouchDb and by having an official (not snapshot) release. If you want, you can follow me on Twitter or subscribe to the NoSQLUnit github repository to receive the latest news about this JUnit extension.

For any question, do not hesitate to write it in the comments section or send me an email.

I would like to say thank you to the LinuxTag folks for treating me so well, and to all the people who came to the presentation: a big thank you to all of them.

Music: http://www.youtube.com/watch?v=JTTC_fD598A&ob=av2e

Thursday, May 03, 2012

Nasha nasha krovatka delala shik - shik, Ya tvo pianino , a ty moi nastroishchik, My tak letali chto ne zametili tvoyu matʹ, Ahaa..I ona skazala chto ya prosto blaz (Mama Lyuba - Serebro))



When we develop any application, after we finish it or when we complete one of its modules, we start the optimization process. Most applications contain database access, and if you are using an ORM, you will probably use Hibernate. Optimizing the Hibernate persistence layer requires being prepared to read, understand and evaluate sql.

If we take an overview of the Hibernate configuration, two properties, hibernate.format_sql and hibernate.use_sql_comments, should be enabled to print the executed sql code to the console.

This is a good start, but it seems that we need more information to make an accurate diagnosis of performance, like connection events, the data returned by queries, or parameter binding (Hibernate shows parameter values as question marks). Hence we need another way to inspect the generated sql. Log4jdbc is a jdbc driver that can log sql/jdbc calls. In fact log4jdbc is an implementation of the proxy pattern which automatically loads popular jdbc drivers (Oracle, Derby, MySql, PostgreSql, H2, Hsqldb, ...), intercepts calls, logs information, and then sends the data to the "spied" driver.

In log4jdbc there are 5 loggers that can be used depending on the data to monitor:
  • jdbc.sqlonly: logs the executed sql with binding arguments replaced with bound data.
  • jdbc.sqltiming: logs how long an sql statement took to execute.
  • jdbc.audit: logs all jdbc calls except for ResultSets.
  • jdbc.resultset: same as jdbc.audit plus ResultSets.
  • jdbc.connection: logs connection open and close events.
In this post we are going to see how to configure log4jdbc-remix, a fork of log4jdbc which, apart from inheriting the log4jdbc capabilities, also gives us:
  • jdbc.resultsettable: logs result sets in table format.
  • the possibility of configuring it as a datasource.
  • availability in the Maven repository (log4jdbc is not present in Maven repositories).
For this example we are going to use the project created by the JPA Spring Template, which contains two entities, Order and Item, associated with a one-to-many relationship, and one test that executes some database operations.

The first thing to do is add the log4jdbc-remix and slf4j-log4j12 dependencies to the project:
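A sketch of those dependencies (coordinates as published on Maven Central at the time; adjust versions as needed):

    <dependency>
        <groupId>org.lazyluke</groupId>
        <artifactId>log4jdbc-remix</artifactId>
        <version>0.2.7</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.6.4</version>
    </dependency>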

The next thing to do is configure the active loggers. Depending on the data we are interested in monitoring, we activate the required loggers. As an example, let's configure log4j.xml so that the result set is printed in table format and the time taken to execute each query is also shown.
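A minimal log4j.xml sketch of those two loggers (the CONSOLE appender is assumed to be defined elsewhere in the file):

    <logger name="jdbc.sqltiming" additivity="false">
        <level value="info" />
        <appender-ref ref="CONSOLE" />
    </logger>

    <logger name="jdbc.resultsettable" additivity="false">
        <level value="info" />
        <appender-ref ref="CONSOLE" />
    </logger>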

After configuring the loggers, run the test and inspect the output.


The output is printed in a readable format, queries contain bound parameters (not question marks), and the processing time is also reported.

Notice that logging more or less information is simply a matter of configuring a logger. Moreover, depending on the log level, more or less information is provided in each case. If a logger is configured at DEBUG level, the class name and line number (if available) at which the sql was executed are included. INFO includes only the sql, and finally ERROR shows stacktraces if any SQLException occurs.

Optimizing Hibernate applications can imply touching many parts of an application (JVM configuration, database engine, network, ...), but one very important aspect to take care of is the number of queries sent to the RDBMS (for example the N+1 problem) and the amount of data retrieved from the database (the projection problem), and log4jdbc-remix fits perfectly to help with this purpose.

As a final note, log4jdbc(-remix) is a jdbc logger, so it is not tied to Hibernate applications; it can be used with any framework that uses a datasource.

I hope this library helps you.

Keep Learning,
Alex

Download Code
Music: http://www.youtube.com/watch?v=h9HRHOXfRBI


Thursday, April 19, 2012

Qui dit crise te dis monde dit famine dit tiers- monde, Qui dit fatigue dit réveille encore sourd de la veille, Alors on sort pour oublier tous les problèmes, Alors on danse... (Alors on Danse - Stromae)




Let's introduce another Hibernate performance tip. Do you remember the model of the previous Hibernate post? We had a starship and officers related with a one-to-many association.


Now we have next requirement:

We shall get all officers assigned to a starship in alphabetical order.

To solve this requirement we can:
  1. implement an HQL query with an order by clause.
  2. use the sort approach.
  3. use the order approach.
The first solution is good in terms of performance, but it implies more work for us as developers, because we should write a query finding all officers of a given starship ordered by name and then create a finder method in the DAO layer (in case you are using the DAO pattern).

Let's explore the second solution: we could use a SortedSet as the association and make Officer implement Comparable, so Officer has a natural order. This solution implies less work than the first one, but requires using the @Sort Hibernate annotation on the association definition. So let's modify the previous model to meet our new requirement. Note that there is no equivalent annotation in the JPA specification.

First we are going to implement the Comparable interface in the Officer class.
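A sketch of the natural ordering described in the text (only the relevant parts of the entity are shown; field names are illustrative):

    @Entity
    public class Officer implements Comparable<Officer> {

        @Id
        @GeneratedValue
        private Long id;

        private String name;

        @Override
        public int compareTo(Officer other) {
            // Natural order: alphabetically by name.
            return this.name.compareTo(other.name);
        }

        // getters and setters omitted
    }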


We are ordering officers by name by simply comparing the name field. The next step is annotating the association with @Sort.
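A sketch of the association (the @Sort annotation and SortType come from org.hibernate.annotations):

    @OneToMany(cascade = CascadeType.ALL)
    @Sort(type = SortType.NATURAL)
    private SortedSet<Officer> officers = new TreeSet<Officer>();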


Notice that now the officers association is implemented using a SortedSet instead of a List. Furthermore, we are adding the @Sort annotation to the relationship, stating that officers should be naturally ordered. Before finishing this post we will insist more on the @Sort topic, but for now this is sufficient.

And finally a method that gets all officers of a given starship ordered by name, printing them to the log file.


All officers are sorted by their names, but let's examine which queries are sent to RDBMS.


The first query is the result of calling the find method on the EntityManager instance to find the starship.

Because one-to-many relationships are lazy by default, when we call the getOfficers method and access the SortedSet for the first time, a second query is executed to retrieve all officers. See that no order by clause is present in the query but, looking carefully at the output, the officers are retrieved in alphabetical order.


So who is sorting the officer entities? The explanation is in the @Sort annotation. In Hibernate, a sorted collection is sorted in memory, Java being responsible for sorting the data using the compareTo method.

Obviously this is not the most performant way to sort a collection of elements. It is likely that we'll need a hybrid solution between using an SQL clause and using an annotation, instead of writing a query.

And this leads us to the third possibility: using the order approach.


The @OrderBy annotation, available both as a Hibernate annotation and as a JPA annotation, lets us specify how to order a collection by adding an "order by" clause to the generated SQL.

Keep in mind that javax.persistence.OrderBy allows us to specify the order of the collection via object properties, while org.hibernate.annotations.OrderBy orders a collection by appending a fragment of SQL (not HQL) directly to the order by clause.

Now the Officer class does not need to be touched: we don't have to implement the compareTo method or a java.util.Comparator. We only need to annotate the officers field with the @OrderBy annotation. Since in this case we are ordering by a simple attribute, the JPA annotation is used to maintain full compatibility with other "JPA ready" ORM engines. By default, ascending order is assumed.
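A sketch of the association using the JPA annotation (javax.persistence.OrderBy orders by the name property of Officer):

    @OneToMany(cascade = CascadeType.ALL)
    @OrderBy("name ASC")
    private List<Officer> officers = new ArrayList<Officer>();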



And if we rerun the get-all-officers method, the next queries are sent:


Both queries are still executed, but note that now the select query contains an order by clause too.

With this solution you save processing time by letting the RDBMS sort the data in a fast way, rather than ordering the data in Java once received.

Furthermore, the @OrderBy annotation does not force you to use a SortedSet or SortedMap collection. You can use any collection like HashMap, HashSet, or even a Bag, because Hibernate will internally use a LinkedHashMap, LinkedHashSet or ArrayList respectively.

In this example we have seen the importance of correctly choosing an ordering strategy. Whenever possible you should try to take advantage of the capabilities of the RDBMS, so your first option should be the @OrderBy annotation (Hibernate or JPA) instead of @Sort. But sometimes an order by clause will not be enough. In that case, I recommend using the @Sort annotation with a custom type (using a java.util.Comparator class), instead of relying on natural order, to avoid touching model classes.


I wish this post helped you understand the differences between "sort" and "order" in Hibernate.

Keep learning.

Music: http://www.youtube.com/watch?v=VHoT4N43jK8&ob=av3n

Tuesday, April 10, 2012

Why does the rain fall from above? Why do fools fall in love? Why do they fall in love? (Why Do Fools Fall In Love - Frankie Lymon)



More often than not, our applications need to send emails to users, notifying them for example that their account has been created or that they have purchased an item, or simply reminding them of their password. When you are writing unit tests there is no problem, because you will probably mock up the interface responsible for sending the email. But what happens with integration tests?

Maybe the logical path to resolve this problem is installing an email server and executing these tests against it. It is not a bad idea, but note that you would need to configure your environment before executing your tests. Your tests would depend on external resources, and that is a bad idea for integration tests. Furthermore, these integration tests would not be portable across machines if an email server were not installed on them beforehand.

To avoid this problem, Dumbster comes to save us. Dumbster is a fake smtp server designed for testing applications that send email messages. It is written in Java, so you can start and stop it directly from your tests.

Let's see an example. Suppose we are developing an electronic shop, and when an order is placed, an email should be sent to the customer.

In this case we are going to use Spring Framework 3.1 to create our service layer; it will also help us with testing.

For teaching purposes, I am not using mail templates or rich mime types.

The first class I am going to show you is Order which, as you can imagine, represents an order:

The most important method here is toEmail(), which returns the email body message.

The next class is the service responsible for placing an order into the delivery system:

This service class uses Spring classes to send an email to the customer. See that two methods are present: one that sends a simple message, and another one, called placeOrderWithInvoice, that sends an email with an attachment, concretely an invoice in jpg format.

And finally, the Spring context file:

Note that the mail configuration is surrounded by a profile. This means that Spring will only create these beans when the application is started in production mode, and in that case the production smtp location is set.

And now let's start with testing:

First of all we must create a Spring context file to configure the smtp server location.

See that we are importing the application-context.xml file, but now we define a new beans profile called integration, where we redefine the smtp connection (changing hostname and port) to point to the fake server.

And finally the test itself.

It is important to explain the next parts:
  • @ActiveProfiles is an annotation that tells the Spring context which environment should be loaded.
  • SimpleSmtpServer is the main class of Dumbster.
  • @Rule is responsible for starting and stopping the smtp server for each test method execution (a sketch of such a rule follows below).
We have created two tests: one that sends a plain message (an_email_should_be_sent_to_customer_confirming_purchase()) and another that sends a message with an attachment (an_email_with_invoice_should_be_sent_to_special_customer_confirming_purchase()).
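Dumbster itself does not ship a JUnit rule, so a small hypothetical wrapper based on ExternalResource can provide the @Rule behaviour described above:

    import org.junit.rules.ExternalResource;

    import com.dumbster.smtp.SimpleSmtpServer;

    public class SmtpServerRule extends ExternalResource {

        private final int port;
        private SimpleSmtpServer server;

        public SmtpServerRule(int port) {
            this.port = port;
        }

        @Override
        protected void before() {
            // Started before each test method.
            server = SimpleSmtpServer.start(port);
        }

        @Override
        protected void after() {
            // Stopped after each test method.
            server.stop();
        }

        public SimpleSmtpServer getServer() {
            return server;
        }
    }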

The private methods are simply helpers to create the required objects.

Note that the Hamcrest matcher bodyEqualTo comes from the BodySmtpMessage class, developed specifically for this example.

I wish you have found this post useful; it may give you an alternative when you want to write integration tests involving an smtp email service.

Keep Learning,
Alex.

Thursday, April 05, 2012

Hey! Teachers! Leave them kids alone! All in all it's just another brick in the wall. All in all you're just another brick in the wall. (Another Brick In The Wall - Pink Floyd)


In this post I am going to show you how to configure your application to use slf4j and logback as its logging solution.

The Simple Logging Facade for Java (slf4j) is a simple facade for various logging frameworks, like JDK logging (java.util.logging), log4j, or logback. It even contains a binding that will delegate all logger operations to another well-known logging facade, Jakarta Commons Logging (JCL).

Logback is the successor of the log4j logger API; in fact both projects have the same father, but logback offers some advantages over log4j, like better performance and lower memory consumption, automatic reloading of configuration files, or filter capabilities, to cite a few features.

The native implementation of slf4j is logback; thus, using both as your logging framework implies zero memory and computational overhead.

First we are going to add slf4j and logback to the pom as dependencies:
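A sketch of those dependencies (the versions shown are the ones current at the time of writing; use newer ones if available):

    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <version>1.6.4</version>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-core</artifactId>
        <version>1.0.0</version>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
        <version>1.0.0</version>
    </dependency>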

Note that three artifacts are required: one for slf4j and two for logback. The last two dependencies will change depending on your logging framework; if, for example, you want to keep using log4j, instead of the logback dependencies we would have the log4j dependency itself and slf4j-log4j12.

The next step is creating the configuration file. Logback supports two configuration formats: the traditional way, using XML, and a Groovy DSL style. Let's start with the traditional way, creating a file called logback.xml in the classpath. The file name is mandatory, although logback-test.xml is also valid. In case both files are found in the classpath, the one ending with -test is used.
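A sketch of the logback.xml described below (pattern and package name are illustrative):

    <configuration>

        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
                <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
            </encoder>
        </appender>

        <logger name="com.lordofthejars.foo" level="INFO">
            <appender-ref ref="STDOUT" />
        </logger>

        <root level="DEBUG">
            <appender-ref ref="STDOUT" />
        </root>

    </configuration>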

In general the file is quite intuitive: we define the appender (the output of log messages), in this case the console; a pattern; and finally the root logger level (DEBUG) and a different level (INFO) for classes in the foo package.

Obviously this format is much more readable than the typical log4j.properties. Note the additivity attribute: the appender named STDOUT is attached to two loggers, root and com.lordofthejars.foo. Because the root logger is the ancestor of all loggers, a logging request made by the com.lordofthejars.foo logger would be output twice. To avoid this behaviour you can set the additivity attribute to false, and the message will be printed only once.

Now let's create two classes which will use slf4j. The first class, called BarComponent, is created in com.lordofthejars.bar:
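A sketch of that class could be:

    package com.lordofthejars.bar;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class BarComponent {

        private static final Logger logger = LoggerFactory.getLogger(BarComponent.class);

        public void bar() {
            String name = "lord of the jars";
            // Parameterized message: no isDebugEnabled()-style guard is needed.
            logger.info("Hello from bar {}", name);
        }
    }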


Note two big differences from log4j. The first one is that the typical if construction above each log call is no longer required. The other one is the pair of '{}'. Only after evaluating whether to log or not will logback format the message, replacing '{}' with the given string value.

The other class, called FooComponent, is created in com.lordofthejars.foo:

And now, calling the foo and bar methods with the previous configuration, the output produced is:

Notice that the debug lines in the foo method are not shown. This is ok, because we have configured it that way.

The next step is configuring logback using the Groovy DSL approach instead of XML. Logback gives preference to the Groovy configuration over the XML configuration, so keep this in mind if you are mixing configuration approaches.

So the first thing to do is add Groovy as a dependency.

And then we are going to create the same configuration as before, but in Groovy format.
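A sketch of the equivalent logback.groovy:

    import ch.qos.logback.classic.encoder.PatternLayoutEncoder
    import ch.qos.logback.core.ConsoleAppender

    import static ch.qos.logback.classic.Level.DEBUG
    import static ch.qos.logback.classic.Level.INFO

    appender("STDOUT", ConsoleAppender) {
        encoder(PatternLayoutEncoder) {
            pattern = "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
        }
    }

    // Same loggers as the xml version; the final 'false' disables additivity.
    logger("com.lordofthejars.foo", INFO, ["STDOUT"], false)
    root(DEBUG, ["STDOUT"])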

You can identify the same parameters as in the XML approach, but as Groovy functions.

I wish you have found this post useful, and in your next project, if you can, use slf4j in conjunction with logback; your application will run faster than logging with log4j.

Keep Learning,
Alex.


Sunday, March 18, 2012

Moi je pense à l'enfant, Entouré de soldats, Moi je pense à l'enfant, Qui demande pourquoi (Non Non Rien N'a Changé - Les Poppys)


After 8 years developing server and embedded applications using Hibernate as the ORM, squeezing my brain seeking solutions to improve Hibernate performance, reading blogs and attending conferences, I have decided to share the knowledge acquired during these years with you.

This is the first post of many more posts to come:


Last year I went to Devoxx as a speaker, but I also attended Patrycja Wegrzynowicz's talk about Hibernate anti-patterns. In that presentation Patrycja showed us an anti-pattern that shocked me, because it proved that you should expect the unexpected.

We are going to see what happens when Hibernate detects a dirty collection and has to re-create it.

Let's start with the model we are going to use: only two classes related with a one-to-many association:




In the previous classes we should pay attention to three important points (a sketch of the mapping follows this list):
  • we are annotating at property level instead of field level.
  • @OneToMany and @ManyToOne use the default options (apart from the cascade definition).
  • the officers getter on the Starship class returns an immutable list.
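A sketch of that mapping (field names are illustrative; imports from javax.persistence and java.util are omitted):

    @Entity
    public class Starship {

        private Long id;
        private List<Officer> officers = new ArrayList<Officer>();

        @Id
        @GeneratedValue
        public Long getId() {
            return id;
        }

        public void setId(Long id) {
            this.id = id;
        }

        // Property-level mapping: Hibernate reads this getter during flush,
        // and it always returns a brand-new unmodifiable wrapper.
        @OneToMany(cascade = { CascadeType.ALL })
        public List<Officer> getOfficers() {
            return Collections.unmodifiableList(officers);
        }

        public void setOfficers(List<Officer> officers) {
            this.officers = officers;
        }
    }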
To test the model configuration, we are going to create a test which creates and persists one Starship and seven Officers and, in a different Transaction and EntityManager, finds the created Starship.

Now that we have created this test, we can run it and observe the Hibernate console output.

See the number of queries executed during the first commit (persisting objects) and during the commit of the second transaction (finding a Starship). In total, and ignoring the sequence generator, we can count 22 inserts, 2 selects and 1 delete; not bad when we are only creating 8 objects and doing 1 find by primary key.

At this point let's examine why these SQL queries are executed:

The first eight inserts are unavoidable; they are required to insert the data into the database.

The next seven inserts are required because we have annotated the getOfficers property without the mappedBy attribute. If we look closely at the Hibernate documentation, it points out that "Without describing any physical mapping, a unidirectional one to many with join table is used."

The next group of queries is even stranger: the first select statement finds the Starship by id, but what are these deletes and inserts of data that we have already created?

During commit, Hibernate validates whether collection properties are dirty by comparing object references. When a collection is marked as dirty, Hibernate needs to re-create the whole collection, even if it contains the same objects. In our case, when we get the officers we return a different collection instance, concretely an unmodifiable list, so Hibernate considers the officers collection dirty.

Because a join table is used, the Starship_Officer table must be re-created, deleting the previously inserted tuples and inserting the new ones (although they have the same values).

Let's try to fix this problem. We start by mapping a bidirectional one-to-many association, with the many-to-one side as the owning side.
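A sketch of the change (only the two association ends are shown; attribute names are illustrative):

    // Starship side: no longer the owner, so no join table is created.
    @OneToMany(cascade = { CascadeType.ALL }, mappedBy = "starship")
    public List<Officer> getOfficers() {
        return Collections.unmodifiableList(officers);
    }

    // Officer side: the owning side of the relationship.
    @ManyToOne
    public Starship getStarship() {
        return starship;
    }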

And now we rerun the same test and inspect the output again.


Although we have reduced the number of SQL statements from 25 to 10, we still have unnecessary queries: the ones in the commit section of the second transaction. Why, if officers are lazy by default (JPA specification) and we are not getting the officers in the transaction, does Hibernate execute a select on the Officer table? For the same reason as in the previous configuration: the returned collection has a different Java identity, so Hibernate marks it as a newly instantiated collection; but now, obviously, the join table operations are no longer required. We have reduced the number of queries, but we still have a performance problem. It is likely that we'll need some other solution, and the solution is not the most obvious one. We are not going to return the collection object managed by Hibernate (we might expand on this later); instead, we are going to change the location of the annotations.

What we are going to do is change the mapping from the property approach to field mapping. We simply move all annotations to the class attributes rather than the getters.
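A sketch of the field-mapped version (note that the annotations now sit on the attributes):

    @Entity
    public class Starship {

        @Id
        @GeneratedValue
        private Long id;

        @OneToMany(cascade = { CascadeType.ALL }, mappedBy = "starship")
        private List<Officer> officers = new ArrayList<Officer>();

        // Hibernate never calls this getter during flush, so the
        // unmodifiable wrapper no longer makes the collection look dirty.
        public List<Officer> getOfficers() {
            return Collections.unmodifiableList(officers);
        }
    }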


And finally we are going to run the test again and see what happens:


Why does Hibernate run queries during commit when using property mapping, but not when using field mapping? When a Transaction is committed, Hibernate executes a flush to synchronize the underlying persistent store with the persistable state held in memory. When property mapping is used, Hibernate calls getter/setter methods to synchronize data, and in the case of the getOfficers method, it returns a dirty collection (because of the unmodifiableList call). On the other side, when we use field mapping, Hibernate gets the field directly, so the collection is not considered dirty and no re-creation is required.

But we have not finished yet. I suppose you are wondering why we have not simply removed Collections.unmodifiableList from the getter and returned the Hibernate collection. The change would look like @OneToMany(cascade={CascadeType.ALL}) public List<Officer> getOfficers() { return officers; }, but returning the original collection ends up in an encapsulation problem; in fact, we would be breaking encapsulation! Anything could be added to the mutable list, applying uncontrolled changes to the internal state of the object.

Using an unmodifiableList is one approach to avoid breaking encapsulation but, of course, we could also have used different accessors for public access and Hibernate access, and not called the Collections.unmodifiableList method at all.

Considering what we have seen today, I suggest you always use field annotations instead of property mapping; it will save you from plenty of surprises.

Hope you have found this post useful.

Screencast of example shown here:



Download code
Music: http://www.youtube.com/watch?v=H14VIsnr6aA


Tuesday, March 06, 2012

Keep 'em laughing as you go, Just remember that the last laugh is on you, And always look on the bright side of life..., Always look on the right side of life... (Always Look on the Bright Side of Life - Monty Python)




Integration tests are the kind of tests in which individual modules are combined and tested as a whole. Moreover, integration tests might use system-dependent values, access external systems like the file system, databases or web services, and test multiple aspects of one test case. We can say they are high-level tests.

This differs from unit testing, where only a single component is tested. Unit tests run in isolation, mocking out external components or using an in-memory database in the case of DAO layers. A unit test should be:
  • Repeatable.
  • Consistent.
  • In Memory.
  • Fast.
  • Self-validating.
  • Testing a single concept.

The problem when we are writing tests is how to test rare (or untypical) conditions like "no disk space" when accessing the file system, or "connection lost" when executing a database query.

In unit testing this is not a problem: you can mock up that component (database connection or filesystem access), generating the required output, like throwing an IOException.

The problem becomes "harder" with integration tests. It would be strange to mock a component when what you really want to do is validate the real system. So, arrived at this point, I see two possibilities:
  • Creating a partial mock.
  • Using fault injection.
In this post I am going to show you how to use the fault injection approach to test unusual erroneous situations.

Fault injection is a technique which involves changing the application code under test at specific locations. These modifications introduce faults in error-handling code paths which would otherwise rarely be followed.

I am going to talk about how to use fault injection using Byteman in a JUnit test, and run it with Maven.

Let's start coding. Imagine you need to write a backup module which shall save a string into a local file but, if the hard disk is full (an IOException is thrown), the content shall be sent to a remote server.

First we are going to code a class that writes content into a file.
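The original snippet is unavailable; a sketch of such a class could be:

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;

    public class FileUtils {

        public void createFileWithContent(String path, String content) throws IOException {
            BufferedWriter writer = new BufferedWriter(new FileWriter(path));
            try {
                writer.write(content);
            } finally {
                writer.close();
            }
        }
    }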



The next class would be the one that sends data through a socket, but it is not shown because it is not necessary for this example.

And finally the backup service, responsible for managing the described behaviour.

And now, testing time. First of all, a brief introduction to Byteman.

Byteman is a tool which allows you to insert/modify code in an application at runtime. These modifications can be used to inject code into your compiled application, causing unusual or unexpected operations (aka fault injection).

Byteman uses a clear, simple scripting language, based on a formalism called Event Condition Action (ECA) rules, to specify where, when and how the original Java code should be transformed.

An example of an ECA script is:
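A sketch of such a script, matching the backup example (class and method names refer to the FileUtils sketch above):

    RULE throw IOException when writing to disk
    CLASS FileUtils
    METHOD createFileWithContent
    AT INVOKE BufferedWriter.write
    IF true
    DO throw new java.io.IOException("disk full")
    ENDRULE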

But Byteman also supports annotations and, in my opinion, annotations are a better approach than a script file, because just by looking at your test case you can understand what exactly you are testing. If not, you must switch context from the test class to the script file to understand what you are testing.

So let's create an integration test that validates that, when an IOException is thrown while writing content to disk, the data is sent to a server.


See that BMUnitRunner (a special JUnit runner that comes with Byteman) is required.

The first test, called aFileWithContentShouldBeCreated, is a standard test that writes Hello world into the backup file.

But the second one, dataShouldBeSentToServerInCaseOfIOException, has a @BMRule annotation which contains when, where and what code should be injected. The first parameter is the name of the rule, in this case a description of what we are going to do (throwing an IOException). The next attributes, targetClass and targetMethod, configure when the injected code should be added, in this case when the FileUtils.createFileWithContent method is called. The next attribute, targetLocation, is the location where the code is inserted, in our case where the createFileWithContent method calls the write method of BufferedWriter. And finally the action: what to do, which in this test is obviously throwing an IOException.
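A sketch of such an annotated test, reduced to the FileUtils class sketched earlier so it stays self-contained (the original test exercised the full backup service):

    import java.io.IOException;

    import org.jboss.byteman.contrib.bmunit.BMRule;
    import org.jboss.byteman.contrib.bmunit.BMUnitRunner;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    @RunWith(BMUnitRunner.class)
    public class WhenContentIsWrittenToDisk {

        @Test(expected = IOException.class)
        @BMRule(name = "throw IOException when writing to disk",
                targetClass = "FileUtils",
                targetMethod = "createFileWithContent",
                targetLocation = "AT INVOKE BufferedWriter.write",
                action = "throw new java.io.IOException()")
        public void writeShouldFailWhenDiskIsFull() throws IOException {
            // The injected fault makes the write call below throw.
            new FileUtils().createFileWithContent("/tmp/backup.txt", "Hello world");
        }
    }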

So now you can go to your IDE and run them, and all tests should pass; but if you run them through Maven using the Surefire plugin, the tests will not work. To use Byteman with Maven, the Surefire plugin should be configured in a specific way.


The first important thing is adding the tools jar as a dependency. This jar provides the classes needed to dynamically install the Byteman agent.

In the Surefire plugin configuration it is important to set useManifestOnlyJar to false, to ensure that the Byteman jar appears in the classpath of the test JVM. Also see that we define empty environment variables (BYTEMAN_HOME and org.jboss.byteman.home). This is because, when it loads the agent, the BMUnit package uses the BYTEMAN_HOME environment variable or the org.jboss.byteman.home system property to locate byteman.jar, but only if it is a non-empty string; otherwise it scans the classpath to locate the jar. Because we want to ensure that the jar added in the dependency section is used, we override any other configuration present on the system.
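A sketch of that Surefire configuration (the Byteman dependencies go in the usual dependencies section of the pom):

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
            <useManifestOnlyJar>false</useManifestOnlyJar>
            <environmentVariables>
                <BYTEMAN_HOME></BYTEMAN_HOME>
            </environmentVariables>
            <systemPropertyVariables>
                <org.jboss.byteman.home></org.jboss.byteman.home>
            </systemPropertyVariables>
        </configuration>
    </plugin>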

And now you can run mvn clean test, and the two tests are successful there too.

See that Byteman opens a new world in how we write our integration tests: now we can easily test unusual exceptions like communication errors, input/output exceptions or out-of-memory errors. Moreover, because we are not mocking FileUtils, we are executing real code; for example, in our second test we run a few lines of the FileUtils object until the write method is reached. If we had mocked the FileUtils class, those lines would not have been executed. Thanks to fault injection, our code coverage is improved.

Byteman is more than what I have shown you: it also has built-ins designed for testing in multithreaded environments, parameter binding, and a number of location specifiers, to cite a few things.

I wish you have found this post useful and that it helps you test the rare conditions of your classes.

Download Code
Music: http://www.youtube.com/watch?v=WlBiLNN1NhQ

Monday, February 27, 2012

For everything I long to do, No matter when or where or who, Has one thing in common too, It's a, it's a, it's a, it's a sin (It's a Sin - Pet Shop Boys)



Usually when you start a new project it will contain several subprojects: for example, one with core functionalities, another one with the user interface, and acceptance tests could be yet another one.

In the next screencast post I am going to show you how to create a multimodule Maven project using the M2 Eclipse plugin.

This is the first video I have done. I wish you find it really useful, and I will try to alternate between blog posts and video posts.


Thursday, February 23, 2012

If there ain't all that much to lug around, Better run like hell when you hit the ground. When the morning comes. (This Too Shall Pass - Ok Go)



Javascript has become much more important to interactive website development than it was five years ago. With the advent of HTML 5 and new Javascript libraries like jQuery and all the libraries that depend on it, more and more functionality is being implemented using Javascript on the client side, not only for validating input forms, but also as a UI creator or a Restful interface to the server side.

With the growing use of Javascript, new testing frameworks have appeared too. We could cite a lot of them, but in this post I am going to talk about only one, called Jasmine.

Jasmine is a BDD framework for testing Javascript code. It does not depend on any other Javascript framework, and it uses a really clean syntax, similar to the xUnit frameworks. See the next example:
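The original example was embedded externally; a minimal sketch of a spec (add is a hypothetical function under test) could be:

    describe("Calculator", function() {

        it("should add two numbers", function() {
            expect(add(2, 3)).toEqual(5);
        });
    });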


To run Jasmine, you simply point your browser to a SpecRunner.html file, which contains references to the scripts under test and the spec scripts. An example of a SpecRunner is shown here:


From my point of view, Javascript has become so popular thanks to jQuery, which has greatly simplified the way we write Javascript code. And you can also test jQuery applications with Jasmine using the Jasmine-jQuery module, which provides two extensions for testing:

  • a set of matchers for the jQuery framework, like toBeChecked(), toBeVisible(), toHaveClass(), ...
  • an API for handling HTML fixtures, which enables you to load HTML code to be used by tests.
So with Jasmine you can test your Javascript applications; but we still have a small big problem: we must launch all tests manually by opening the SpecRunner page in a browser. But don't worry, the jasmine-maven-plugin exists. It is a Maven plugin that runs Jasmine spec files during the test phase automatically, without needing to write the SpecRunner boilerplate file.


So I suppose you want to start coding. We are going to create a simple jQuery plugin in the standard Maven war layout, where Javascript files go to src/main/webapp/js, css files to src/main/webapp/css and Javascript tests to src/test/javascript. Of course this directory structure is fully configurable; for example, if your project were a pure Javascript project, src/main/javascript would be a better place. The next image shows the directory layout.



Let's start. First of all we are going to create a css file which defines a red class. Not complicated code:


The next step is creating a js file containing the jQuery plugin code. It is a simple plugin that adds the red class to the affected element.
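The original gist is unavailable; a sketch of such a plugin (named redColor, as used in the test below) could be:

    (function($) {
        // Adds the 'red' css class to every matched element.
        $.fn.redColor = function() {
            return this.addClass('red');
        };
    })(jQuery);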

And finally the html code that uses the previous functionality. No big secret: a div element modified by our jQuery plugin.

Now it is time for testing. Yes, I know, write tests first and then business code, but I thought it would be more appropriate to show the code under test first.

So let's write the Jasmine test file.

The first thing to do is add a description (behaviour) of what we are going to test with the describe function. Then, with beforeEach, we define the function we want to execute before each test execution (like the @Before JUnit annotation). In this case we set up our fixture to test the plugin code; you can set an html file as a template or define the html inline, as done here.

And finally the test, written inside the it function. Our test should validate that the div element with id content, defined in the fixture, contains a class attribute with value red after running the redColor function. See how we are using the jasmine-jquery toHaveClass matcher.
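A sketch of that spec, using the jasmine-jquery setFixtures helper and the toHaveClass matcher:

    describe("red color plugin", function() {

        beforeEach(function() {
            // Inline fixture: the div our plugin will modify.
            setFixtures('<div id="content"></div>');
        });

        it("should add red class to content element", function() {
            $('#content').redColor();
            expect($('#content')).toHaveClass('red');
        });
    });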


Now we have got our Javascript test written and it is time to run it; but instead of using the SpecRunner file, we are going to make the Jasmine tests be executed by Maven during the test phase.

Let's see how to configure the jasmine-maven plugin.

The first thing to do is register the plugin in the pom.

And then configure the plugin with the required parameters. With the first two parameters (jsSrcDir and jsTestSrcDir) we set the Javascript locations for production code and test code. Since we are writing tests for a jQuery plugin in Jasmine, both the jquery and jasmine-jquery libraries should be imported into the generated SpecRunner, and this is accomplished by using the preloadSources tag.
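A sketch of that plugin configuration (group and artifact ids are the published ones; the version and the paths are illustrative and must match your layout):

    <plugin>
        <groupId>com.github.searls</groupId>
        <artifactId>jasmine-maven-plugin</artifactId>
        <version>1.1.0</version>
        <executions>
            <execution>
                <goals>
                    <goal>test</goal>
                </goals>
            </execution>
        </executions>
        <configuration>
            <jsSrcDir>${project.basedir}/src/main/webapp/js</jsSrcDir>
            <jsTestSrcDir>${project.basedir}/src/test/javascript</jsTestSrcDir>
            <preloadSources>
                <source>lib/jquery.js</source>
                <source>lib/jasmine-jquery.js</source>
            </preloadSources>
        </configuration>
    </plugin>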

All these parameters will change depending on your project, but in case you are creating a Maven war project this layout is enough.

And now you can run Maven by typing:

mvn clean test

And next console output should be printed:


I think we have integrated Javascript tests into Maven in an easy and clean way, and now our continuous integration server (Jenkins or Hudson) will run the Javascript tests too. If you are planning to set up a continuous delivery system for your next project, and the project will contain Javascript files, consider using Jasmine as your BDD tool, because it fits perfectly with Maven.

I wish you have found this post useful.

Download code

Music: http://www.youtube.com/watch?feature=player_embedded&v=qybUFnY7Y8w#!

Thursday, February 16, 2012

Party rock is in the house tonight, Everybody just have a good time, And we gon' make you loose your mind, Everybody just have a good good good time. (Party Rock Anthem - LMFAO)




Redmine is a free and open source, flexible, web-based project management and bug-tracking tool, written using the Ruby on Rails framework.

Redmine supports multiple projects, each with its own wiki, forum, time tracker and issue management.

Moreover, Redmine implements a plugin platform, so it can be customized depending on your requirements. Plugins exist to work with Kanban or Scrum, as well as notification and reporting plugins.

What I really like about Redmine is that, although it does not fix the way you must work, it contains enough options to fit any kind of project management approach.

Redmine can be installed in different ways:
  • Using webrick (not recommended in production environments).
  • Running it with mongrel and fastcgi.
  • Using Passenger.
  • Or packaging Redmine into a war and deploying it into a Java container like Tomcat or Glassfish.
In this post I am going to show you how to package Redmine 1.3 into a war file so it can be executed in Tomcat 7 on Linux. In theory it should also work with Glassfish, JBoss, or any other OS.

First of all, download JRuby 1.6.6. Open a terminal:

wget http://jruby.org.s3.amazonaws.com/downloads/1.6.6/jruby-bin-1.6.6.tar.gz

Then decompress the downloaded file and move it to the /usr/share directory.

tar xvzf jruby-bin-1.6.6.tar.gz
sudo mv jruby-1.6.6/ /usr/share/jruby-1.6.6

Then update the environment variables with the JRuby installation directory.

sudo gedit /etc/environment


Finally, try executing jruby to check that it has been installed correctly:

jruby -v

And the JRuby version information should be printed on the console.

The next step is to install the required gems:
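The original command list is missing; a sketch of the typical commands (gem names and versions follow the Redmine 1.3 requirements of the time; double-check them against the installation guide) would be:

jruby -S gem install rails -v=2.3.14
jruby -S gem install i18n -v=0.4.2
jruby -S gem install activerecord-jdbcmysql-adapter jruby-openssl
jruby -S gem install warbler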


Redmine installation

Download Redmine 1.3 and install it in the /usr/share directory:

Redmine requires a database to work. In this case I had already installed MySQL 5, but PostgreSQL is supported too. So let's configure MySQL for Redmine.

cd /usr/share/redmine-1.3.0/config/

The installation comes with a database configuration template file; we are going to rename it and modify it to suit our environment. Moreover, Redmine supports different start-up modes (production, development, test). In our case, because we are configuring a production environment, only the production section will be touched.


After this modification, it is time to create the Redmine user and database in MySQL.

mysql -u root -p
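Typical statements to create the database and user (choose your own password):

CREATE DATABASE redmine CHARACTER SET utf8;
CREATE USER 'redmine'@'localhost' IDENTIFIED BY 'my_password';
GRANT ALL PRIVILEGES ON redmine.* TO 'redmine'@'localhost';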


Now it is time to initialize Redmine.



The next step is required because we are installing Redmine 1.3; in Redmine 1.4 and beyond it will not be necessary. Open config/environment.rb and comment out the next line:

config.gem 'rubytree', :lib => 'tree'

And then create the database schema and fill it with default data using the next scripts.
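Those scripts are the standard Redmine rake tasks, run through JRuby; a sketch:

RAILS_ENV=production jruby -S rake generate_session_store
RAILS_ENV=production jruby -S rake db:migrate
RAILS_ENV=production jruby -S rake redmine:load_default_data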


Now we are going to test that Redmine is correctly configured. For this purpose we are going to use webrick.


and open a browser at http://localhost:3000 to check the installation.

The Redmine web page will be shown; you can log in with username and password admin/admin.

At this point we have Redmine correctly installed.


Configuring Email

An issue tracker should be able to send mail to the affected users when a new issue is created or modified.

If your mail server requires the tls security protocol, you should install the action_mailer_optional_tls plugin.

This plugin requires git; if you don't have it installed yet, type:

sudo apt-get install git

and then run the next command in the Redmine directory:

jruby script/plugin install git://github.com/collectiveidea/action_mailer_optional_tls.git

Let’s configure email delivery:

Inside the configuration file you will find common email settings. Depending on your email server these attributes can vary widely, so at this point I am going to show you a simple smtp server configuration using plain authentication for the production environment. Go to the last line of the configuration.yml file and append the next lines to the production section.
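A sketch of such a configuration (hostname, domain and credentials are placeholders):

production:
  email_delivery:
    delivery_method: :smtp
    smtp_settings:
      tls: true
      address: smtp.example.com
      port: 587
      domain: example.com
      authentication: :plain
      user_name: redmine@example.com
      password: my_password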

All attributes are self-explanatory.

And before creating the war file, let's check that email is correctly configured. Again we use webrick.


Then open a browser at http://localhost:3000 and log in with the admin account.

Adjust the admin email by clicking on the My Account link and, in the Email section, set the administrator email.

After that we are going to test the email configuration: from the main menu, go to Administration -> Settings -> Email Notifications, add the emission email and click on the test email link. After a short time, a test message will be sent to the administrator email account.

We have succeeded with the Redmine installation; now it is time to package it to be deployed into Tomcat.

Packaging Redmine

Before starting, because of an incompatibility with the installed jruby-rack gem, we should run the next commands to install version 1.0.10 of jruby-rack.
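A sketch of those commands:

jruby -S gem uninstall jruby-rack
jruby -S gem install jruby-rack -v 1.0.10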

The warble command requires a configuration file. This file is created using the next command:
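Assuming Warbler was installed as a gem, that command is typically:

jruby -S warble config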

Edit the Warbler::Config section and configure the config.dirs, config.gems and config.webxml.rails.env entries as:
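A sketch of the relevant parts of config/warble.rb (the directory and gem lists are illustrative and must match your installation):

Warbler::Config.new do |config|
  config.dirs = %w(app config lib log vendor tmp extra files lang)
  config.gems += ["activerecord-jdbcmysql-adapter", "jruby-openssl", "i18n"]
  config.webxml.rails.env = 'production'
end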

And finally run:

warble

And the Redmine war has been created and is ready to be deployed into Tomcat.


Although we have got a war file, I recommend not deleting the Redmine installation directory, because it could be used in the future to install new plugins or modify the configuration. After a modification, calling the warble command again will create a new war with that change.


I wish you have found this post useful.