Thursday, March 31, 2011

Teo Torriate Konomama Iko Aisuruhito Yo Shizukana Yoi Ni Hikario Tomoshi Itoshiki Oshieo Idaki

This week I have reached 10K visits. I would like to say thank you to everyone who has read this blog.
I would also like to dedicate this milestone to the Japanese people, and to all the heroes of Fukushima; without them the disaster could have been even worse.
I hope to reach 25K visits as soon as possible, and to be able to celebrate it with better news.

Alex.

Sunday, March 27, 2011

A Te Che Sei Il Mio Grande Amore Ed Mio Amore Grande

Maven Verifier Plugin is a Maven plugin used to verify that certain conditions hold in file contents. These conditions are expressed as regular expressions: if the regular expression matches the content of the defined resources, no error is shown; if not, the build fails and an error message indicates which file does not match the given expression.

Why do I find this plugin useful? Usually my projects have three execution environments: one for unit testing, another for integration/acceptance tests, and the production one. As you can imagine, each one has its own configuration, such as the database configuration. Each environment has a different database; for example, unit testing uses an HSQL engine, while integration and production use PostgreSQL. To deal with this problem I usually create three different Spring files, each one loading the required properties, and depending on the environment, the applicationContext is modified to import the required resources. Let's look at another example: at work I develop planners for instruments. In integration tests an emulator is used, while in acceptance tests and production, as you can imagine, we use a real instrument. For that reason we must inject into our business objects which driver to use (emulator driver or real driver), and again two Spring files are created and imported into the application context depending on the stage of the build.

The problem with changing the imported files in the applicationContext depending on the environment is that it implies a manual, human process, and because it is human, an error can occur and a version with an incorrectly configured application context can be delivered. Until Spring Framework 3.1 is released as a stable version (its Spring Profiles will solve this problem), Maven Verifier Plugin can help us avoid it.

In our Continuous Integration system, one of the steps before releasing a version is checking that all configuration files contain the correct values. Keep in mind that I have shown only two examples, but some other values change between the development and production environments, like time constants, file locations, log level ... Thanks to this plugin, this checking procedure is executed automatically.

Let's see an example:

First of all pom.xml must be configured for using Maven Verifier Plugin:
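A minimal configuration sketch (the plugin version and the rules file location are assumptions; adjust them to your project):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-verifier-plugin</artifactId>
  <version>1.0</version>
  <configuration>
    <!-- file holding the verification rules; path is an example -->
    <verificationFile>src/test/verifier/verifications-rules.xml</verificationFile>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>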



The <verificationFile> tag points to the file that defines which files should be verified and which rules should be applied.

And verifications-rules.xml:
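A sketch of what the rules file can look like; the location and the regular expression are examples matching the scenario described below:

<verifications xmlns="http://maven.apache.org/verifications/1.0.0">
  <files>
    <file>
      <!-- file whose content is verified -->
      <location>src/main/resources/META-INF/spring/applicationContext.xml</location>
      <!-- regular expression that must match the file content -->
      <contains>location="classpath:META-INF/spring/.*\.properties"</contains>
    </file>
  </files>
</verifications>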



In this example we verify that the property-placeholder defined in applicationContext.xml loads properties from META-INF/spring and not from any other location. The same approach can be used for verifying injected beans, constants, log level ... In case any verification does not match, the build fails.

Although I always thought to myself "hey man, this could never happen to me", one day, and you don't know why, it happens: you upload incorrectly configured code, and when the VVT department starts its verifications, the project starts to crash. Then the whole test protocol must be cancelled; you change one line, upload that one-line change to the repository, re-deploy the whole application, and start again.

Since that day, I always create a regular expression to assure that when my code is deployed to production, all configuration files contain the correct values.

When Spring Framework 3.1 sees the light, everything will be different; meanwhile, and for legacy code, try Maven Verifier Plugin.

Wednesday, March 23, 2011

Sa Zebra Que Passa Un Semàfor I Com Se Desmunta Un Bidet, Cosmètics I Margaret Astor, Ja Sé Com S´escriu Juliette!!!

JDK 7 is coming; yes, it finally seems that it will see the light, without some really nice features like closures, but with other nice improvements, like NIO 2.0, Project Coin, and automatic resource management. One new feature that I really like is the inclusion of the new concurrency classes specified in jsr166y. In this post I will summarize these new classes, which can help us with parallel programming in Java. Let's make a brief introduction to each new class and create a simple example:
Interface TransferQueue, with its implementation LinkedTransferQueue. TransferQueue is a BlockingQueue in which producers may wait until consumers receive elements. Because it is also a BlockingQueue, the programmer can choose to wait until a consumer receives the element (TransferQueue.transfer()) or simply put the element without waiting, as done in jsr166 (BlockingQueue.put()). This class should be used when your producer must sometimes await receipt of elements and sometimes only enqueue elements without waiting. An example where the producer is blocked until the consumer polls an element:
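A minimal sketch of such a producer and consumer (class and variable names are mine; the consumer sleeps five seconds before taking, so the producer stays blocked inside transfer()):

import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TransferQueue;

public class TransferQueueExample {

    public static void main(String[] args) throws InterruptedException {
        final TransferQueue<String> queue = new LinkedTransferQueue<String>();

        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("Before Transfer.");
                    // transfer blocks until a consumer receives the element;
                    // replacing it with queue.put(...) returns immediately
                    queue.transfer("Hello World!!");
                    System.out.println("After Transfer.");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    TimeUnit.SECONDS.sleep(5); // consumer arrives late
                    System.out.println("Before Consumer.");
                    System.out.println(queue.take());
                    System.out.println("After Consumer.");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}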
And the output is:
Before Transfer.
<producer thread waits 5 seconds, blocked in transfer>
Before Consumer.
Hello World!!
After Consumer.
After Transfer.
But what happens if I change the transfer call to a put call? The output is:
Before Transfer.
After Transfer.
<consumer thread sleeps 5 seconds>
Before Consumer.
Hello World!!
After Consumer.
The producer finishes its work just after enqueuing the Hello World message.

Class Phaser. This class is like the CyclicBarrier class in that it waits until all parties reach the barrier point before continuing thread execution. The difference is that the Phaser class is more flexible. The number of parties is not static as in CyclicBarrier: parties can register and deregister dynamically at any time. Also, each Phaser has a phase number, which enables independent control of actions upon arrival at a phaser and upon awaiting others. New methods like arrive and awaitAdvance are provided. Phaser also provides a method for avoiding termination, onAdvance: by default it returns true, meaning that when all parties reach the barrier point the barrier is terminated, but by overriding onAdvance you can modify this behavior, making all threads perform another iteration over their task.

Let's see an example of using Phaser as a CountDownLatch, though as you will notice some differences can be observed. The first is that we initialize the Phaser to 1 (the self thread) and then register each party dynamically; with CountDownLatch we would have had to initialize it statically to 15+1. arriveAndAwaitAdvance has the same behavior as calling CyclicBarrier.await, and getArrivedParties() returns how many parties have arrived at the barrier point. Note that in the following example, when the second party arrives it does not call arriveAndAwaitAdvance() but arrive: this method notifies the Phaser that the party has arrived at the barrier point but does not block; the party executes some extra logic, and only after that does it wait until all the other parties have arrived at the barrier point, calling awaitAdvance (a sketch follows the next paragraph).

I suppose you are wondering what the return value of the arrive method is. Phaser.arrive is responsible for notifying the Phaser that the thread has arrived at the barrier point, and it returns immediately with a phase number. The phase number is an integer managed by the Phaser class: initially it is 0, and each time all parties arrive at a barrier point, the phase number is incremented. Phaser.awaitAdvance stops thread execution until the current phase number has been incremented.
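The listing below is a minimal sketch consistent with the description above and the output that follows; which party performs the extra logic (here, the one printing Hello World 0) and the five-second sleep are taken from the output annotations:

import java.util.concurrent.Phaser;

public class PhaserExample {

    public static void main(String[] args) {
        final Phaser phaser = new Phaser(1); // "1" registers the self (main) thread

        for (int i = 0; i < 15; i++) {
            phaser.register(); // parties are registered dynamically
            final int index = i;
            new Thread(new Runnable() {
                public void run() {
                    System.out.println("Hello World " + index);
                    if (index == 0) {
                        // arrive without blocking, execute some extra logic,
                        // and only then wait for the other parties
                        int phase = phaser.arrive();
                        try {
                            Thread.sleep(5000);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                        phaser.awaitAdvance(phase);
                        System.out.println("After Sleep");
                    } else {
                        phaser.arriveAndAwaitAdvance();
                    }
                }
            }).start();
        }

        phaser.arriveAndAwaitAdvance(); // main thread waits for all 15 parties
        System.out.println("END");
    }
}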
Output of previous program:
Hello World 2
Hello World 0
<thread that prints Hello World 0 is executing Thread.sleep(5000)>
Hello World 6
Hello World 10
Hello World 1
Hello World 3
Hello World 8
Hello World 4
Hello World 13
Hello World 11
Hello World 9
Hello World 14
Hello World 7
Hello World 12
<phase number == 0>
Hello World 5
<phase number == 1>
END
After Sleep
Note that After Sleep is printed after all threads have arrived at the barrier point, including "the parent thread".

Class ForkJoinTask. A ForkJoinTask is a lightweight form of Future. The main intended use of this class is for computational tasks calculating pure functions or operating on purely isolated objects. The primary coordination mechanisms are fork(), which arranges asynchronous execution, and join(), which doesn't proceed until the task's result has been computed.

ForkJoinTask has two abstract implementations that can be extended: RecursiveAction and RecursiveTask. Imagine the following isolated problem: we have a square matrix and we want to sum all its values. Imagine that this matrix is huge and you want to partition it into much smaller matrices so the calculations can be executed in parallel. To simplify the problem and show how to use ForkJoinTask, the matrix will be a 2x2 square matrix, which obviously should not be parallelized in normal circumstances.

The sequential algorithm would be:
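For example (the matrix values are mine, chosen so the sum is 10):

int[][] matrix = { { 1, 2 }, { 3, 4 } };

int result = 0;
for (int row = 0; row < matrix.length; row++) {
    for (int column = 0; column < matrix[row].length; column++) {
        result += matrix[row][column];
    }
}
System.out.println("Result is " + result);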
Result is 10.

And now the parallel solution using RecursiveTask:
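A minimal sketch of the approach (class and helper names are mine, and it assumes a square matrix whose size is a power of two):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class MatrixSumTask extends RecursiveTask<Integer> {

    private final int[][] matrix;

    public MatrixSumTask(int[][] matrix) {
        this.matrix = matrix;
    }

    @Override
    protected Integer compute() {
        // trivial case: a 1x1 matrix is its single value
        if (matrix.length == 1) {
            return matrix[0][0];
        }
        int half = matrix.length / 2;
        // non-trivial case: divide the matrix into four smaller matrices ...
        MatrixSumTask topLeft = new MatrixSumTask(quadrant(0, 0, half));
        MatrixSumTask topRight = new MatrixSumTask(quadrant(0, half, half));
        MatrixSumTask bottomLeft = new MatrixSumTask(quadrant(half, 0, half));
        MatrixSumTask bottomRight = new MatrixSumTask(quadrant(half, half, half));
        // ... and sum each quadrant in a different thread
        topLeft.fork();
        topRight.fork();
        bottomLeft.fork();
        return bottomRight.compute() + bottomLeft.join() + topRight.join() + topLeft.join();
    }

    private int[][] quadrant(int rowOffset, int columnOffset, int size) {
        int[][] quadrant = new int[size][size];
        for (int row = 0; row < size; row++) {
            for (int column = 0; column < size; column++) {
                quadrant[row][column] = matrix[rowOffset + row][columnOffset + column];
            }
        }
        return quadrant;
    }

    public static void main(String[] args) {
        int[][] matrix = { { 1, 2 }, { 3, 4 } };
        // pool sized to the number of available processors
        ForkJoinPool pool = new ForkJoinPool(Runtime.getRuntime().availableProcessors());
        System.out.println("Result is " + pool.invoke(new MatrixSumTask(matrix)));
    }
}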
And of course the output is 10 too. Note that we use ForkJoinPool to specify the number of processors, maximizing the usage of system resources.
See how the trivial case cuts the recursion by returning a valid result, and how in the non-trivial case we divide the matrix into four smaller matrices and execute the sum of these new matrices in different threads (calling fork()), while the join method waits until the compute method returns a result. As you can see, there aren't a lot of new concurrency classes in JDK 7, but I think these new classes can help with common concurrency problems, especially the ForkJoin classes.

Monday, March 21, 2011

Que Rabia Que Ternura Ser El Sol Y La Luna Esto Es Una Locura Lo Que Siento Mujer



From Wikipedia: "Maven is a software tool for project management and build automation." Most of us use Maven as the build tool in our projects. As you probably know, the main file in Maven is the POM (Project Object Model). The POM file provides all the configuration for a single project, like its name, dependencies, plugins to be used, ... In large projects, you divide your project into several subprojects, each one with its own POM. In this case it is a good practice to create a root POM through which one can compile all the modules with a single command. A parent POM can also be defined for common plugins or configuration.
After this brief Maven introduction, here is a recurrent problem I had with Maven: in each project I started, I copy-pasted the POM files from my previous project into the new one. After a few copy-paste projects, I decided to create three templates: one for the parent POM, another for projects/subprojects, and one settings.xml which, although computer dependent, specifies some configuration shared by all computers, like the repository server username/password and the plugin repository.
In both POM files I have defined the following sections:
  • Information about the project.
  • Distribution server for uploading/downloading artifacts.
  • Some reports for assuring quality.
  • SCM configuration for some source control managers.
  • Two profiles.
  • Definition of useful Maven plugins.
settings file
The settings.xml file contains elements used for defining Maven configuration.

I define:

  • tag <localRepository>: an alternative directory for storing local artifacts rather than the home directory. I really don't like using my home directory as the local repository, because my personal documents get mixed with dependencies.
  • tag <servers>: specifies the login and password for the snapshot and release repository servers. I use Nexus Repository Manager for uploading/downloading artifacts, and it is common for each developer to have their own authentication data.
  • tag <pluginRepositories>: tells Maven where it can download plugins from; in my case the Nexus repository, but an external repository can also be used. This information is present in settings.xml because you can run Maven without any previously created project (when starting a project with archetypes), and in that case Maven will use settings.xml to find where plugins should be downloaded.
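A skeleton of this settings.xml (the server ids, local repository path and Nexus URL are examples):

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <localRepository>/opt/maven-repository</localRepository>
  <servers>
    <server>
      <id>nexus-releases</id>
      <username>developer</username>
      <password>secret</password>
    </server>
    <server>
      <id>nexus-snapshots</id>
      <username>developer</username>
      <password>secret</password>
    </server>
  </servers>
  <profiles>
    <profile>
      <id>nexus</id>
      <pluginRepositories>
        <pluginRepository>
          <id>nexus-plugins</id>
          <url>http://localhost:8081/nexus/content/groups/public</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>nexus</activeProfile>
  </activeProfiles>
</settings>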
superPOM file
POMs that extend a parent POM inherit certain values from that parent. This is useful for defining typical values that are shared across all projects. Moreover, the parent POM should act as an aggregation POM too: thanks to aggregation, one can release all subprojects simply by running Maven goals against this single file. I define:
  • tag <packaging> must be pom.
  • tag <properties> defines server locations.
  • tag <build> defines directory locations for classes, resources, test classes, ... Although it is the default Maven configuration, I prefer to have it always present in POM files so no misunderstanding can occur.
  • tag <plugins>: I define 3 plugins: maven-compiler-plugin, which should compile only with version 1.6; maven-deploy-plugin for deploying the project; and versions-maven-plugin for managing project/dependency versions.
  • tag <reporting>: only one report is executed in each build, maven-surefire-report-plugin, used for reporting why a JUnit test has failed.
  • tag <profiles> defines two profiles. One, called source-javadoc, generates a zip file with the project source files, and an archive with the project javadoc too; it can be executed with the option -Psource-javadoc. The other profile is called metrics. This profile executes report plugins for creating reports about source quality. Because it is an expensive process, I define these plugins in a separate profile rather than the default one, so my Continuous Integration system does not execute them every night but once per week. The plugins are: maven-site-plugin, cobertura-maven-plugin, maven-checkstyle-plugin, maven-pmd-plugin and findbugs-maven-plugin.
  • tag <dependencies>: I define common dependencies across all projects. As you can imagine, these dependencies are about testing, so JUnit is defined for testing, Mockito for mocking, and Hamcrest for assertions.
  • tag <repositories> defines repositories where artifacts will be uploaded/downloaded. It is a good practice to have a central artifact repository in your company, divided between snapshot and release jars. In our case Nexus Repository Manager is used. The <id> tag inside is matched in settings.xml to specify the login and password of the identified server.
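A skeleton of such a parent POM, showing only the aggregation and distribution parts (coordinates and URLs are examples; the reporting, profiles and dependencies sections described above are omitted for brevity):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>super-pom</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>

  <!-- aggregation: building this POM builds every subproject -->
  <modules>
    <module>my-subproject</module>
  </modules>

  <distributionManagement>
    <repository>
      <id>nexus-releases</id>
      <url>http://localhost:8081/nexus/content/repositories/releases</url>
    </repository>
    <snapshotRepository>
      <id>nexus-snapshots</id>
      <url>http://localhost:8081/nexus/content/repositories/snapshots</url>
    </snapshotRepository>
  </distributionManagement>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>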
templatePOM file
The template POM is the standard POM for all projects/subprojects. In this POM you define the configuration specific to each project, like its name, version, ... This POM inherits from the superPOM, and the superPOM in turn aggregates it using the <module> tag.
In this file the groupId and artifactId should be configured with project-specific values.
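And a skeleton of the template POM showing the inheritance side (coordinates are examples):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- inherit common configuration from the superPOM -->
  <parent>
    <groupId>com.example</groupId>
    <artifactId>super-pom</artifactId>
    <version>1.0.0</version>
  </parent>
  <groupId>com.example</groupId>
  <artifactId>my-subproject</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <name>My Subproject</name>
</project>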
Download templatePOM.xml

These three files are available here; feel free to download, use, and modify them. If you have any suggestions, it would be a pleasure to review them and add them to these files.

Sunday, March 13, 2011

Makoto No Kokoro Wo Shiru Wa Mori No Sei Mononoke-tachi Dake Mononoke-tachi Dake

Chrome Developer Tools are tools that come with the Google Chrome browser, giving web developers and programmers deep access into the internals of the browser and of their web applications.

In this post I will only write about using Chrome Dev Tools for detecting performance and auditing problems (Chrome also suggests how to fix them), and about possible implementations for fixing them.

For this purpose I have developed a web application with Spring Roo. It is defined by a simple entity called Person that has only two attributes, name and age. Spring Roo is a next-generation rapid application development tool for Java developers; with Roo you can easily build full Java applications in minutes. In this case we will create a website with CRUD operations for the Person entity.

// Spring Roo 1.1.0.RELEASE [rev 793f2b0]
project --topLevelPackage org.chrome.devtools.example --projectName ChromeDevTools --java 6
persistence setup --database HYPERSONIC_IN_MEMORY --provider HIBERNATE
entity --class ~.domain.Person
field string --fieldName name --notNull
field number --fieldName age --type java.lang.Integer
controller scaffold ~.web.PersonController
security setup
web flow
json all 

The action starts right now:

To access Developer Tools, open the Google Chrome browser and go to Tool Icon -> Tools -> Developer Tools, or press Ctrl+Shift+I.

When you access Developer Tools, a split panel appears with eight tabs:

  • Elements: in this tab you can inspect the HTML and CSS code. When you select an element, it is highlighted in the browser and its CSS properties are shown. These CSS properties can be modified on-the-fly, so you can see immediately how the change affects the page.


  • Resources: in this tab you can watch which resources are loaded, and inspect internal resources like cookies, sessions, HTML5 local databases, application cache, ...


  • Network: in this tab you see, for each resource, how much time passes between its being requested and its being delivered. Each request is summarized in a time-line graph, and by ordering by time you can see which resources are the slowest to be received.


  • Script: this tab shows the scripts executed in the current page. It also acts as a debugger: you can set breakpoints and debug your Javascript code just as Eclipse does with Java.


  • Timeline: the next tab that is really interesting for making a performance diagnosis. The Timeline tab is more or less like the Network tab, but instead of showing networking time, it shows the time spent by the browser sending requests, evaluating scripts, painting components, ... It also has a sub-tab for watching memory consumption.


  • Profile: this tab is a typical profiler, but for the browser.

  • Audit: and finally the last tab. This tab audits the current page, finding points of improvement. For example, in the "Show Person" page, Chrome has found:
    • Enable gzip compression: the browser is able to decompress data encoded with gzip. If you are using a REST application with Spring, check out this blog: http://www.oudmaijer.com/2011/02/23/spring-resttemplate-and-gzip-compression-continued/ ; if not, and you are using Spring MVC, you can try implementing a HandlerInterceptor. The most general solution is shown in http://tim.oreilly.com/pub/a/onjava/2003/11/19/filters.html where a Filter is used for compressing output. In summary, what all these solutions do is check whether the request header (generated by the browser) announces gzip support (Accept-Encoding: gzip, deflate), and if so, compress the response stream and modify the response header to notify the web client that the content is gzip-encoded (Content-Encoding: gzip). A minimal filter sketch is shown after this list.
    • Leverage browser caching: it is interesting for static resources to be cached by the browser, so they are sent only the first time they are requested. In Spring 3 there is <mvc:resources>, which works perfectly for this purpose: http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/mvc.html#mvc-static-resources or using an interceptor:

      <mvc:interceptors>
         <mvc:interceptor>
          <mvc:mapping path="/static/*"/>
          <bean id="webContentInterceptor" 
               class="org.springframework.web.servlet.mvc.WebContentInterceptor">
              <property name="cacheSeconds" value="31556926"/>
              <property name="useExpiresHeader" value="true"/>
              <property name="useCacheControlHeader" value="true"/>
              <property name="useCacheControlNoStore" value="true"/>
          </bean>
         </mvc:interceptor>
      </mvc:interceptors>

    • Optimize the order of styles and scripts: always define external CSS first and then external Javascript files; this ensures better download performance. Also, defining CSS in the HEAD section makes the page render progressively. In this case there are no server-side changes; you only have to keep that rule in mind when you define these kinds of static resources.
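As mentioned in the gzip item above, here is a minimal compression Filter sketch (it only wraps getOutputStream(); a real filter should also wrap getWriter() and buffer output so Content-Length can be recomputed):

import java.io.IOException;
import java.util.zip.GZIPOutputStream;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

public class GzipFilter implements Filter {

    public void init(FilterConfig filterConfig) throws ServletException {
    }

    public void destroy() {
    }

    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        HttpServletResponse httpResponse = (HttpServletResponse) response;
        String acceptEncoding = httpRequest.getHeader("Accept-Encoding");
        // only compress when the browser announced gzip support
        if (acceptEncoding != null && acceptEncoding.contains("gzip")) {
            httpResponse.setHeader("Content-Encoding", "gzip");
            final GZIPOutputStream gzipStream =
                    new GZIPOutputStream(httpResponse.getOutputStream());
            HttpServletResponseWrapper wrapper =
                    new HttpServletResponseWrapper(httpResponse) {
                @Override
                public ServletOutputStream getOutputStream() {
                    return new ServletOutputStream() {
                        @Override
                        public void write(int b) throws IOException {
                            gzipStream.write(b);
                        }
                    };
                }
            };
            chain.doFilter(request, wrapper);
            gzipStream.finish(); // flush the gzip trailer
        } else {
            chain.doFilter(request, response);
        }
    }
}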


If you have any performance problem in your web application, thanks to Google Chrome you can make an initial diagnosis and see where the time is lost (client or server side). You can also take a look at what the Google Chrome Audit suggests for making your application load faster.


Wednesday, March 02, 2011

Lluita Pels Teus Somnis T´Estan Esperant Fes Que Siguin Certs Abraça´ls.


Testing asynchronous systems is difficult, especially because a test can fail because of a truly invalid assertion, but also because the asynchronous system has not had time to process the request, so the assertion fails. This scenario is typical in JMS environments. With unit testing you write a mock that "simulates" JMS behavior, but in integration tests you use a real JMS server for validating the whole scenario, and then some help dealing with the timing problem would be welcome.

So in summary, our test can pass, fail, or fail only because the system needed more time to process the request. Let's see an example:

Imagine that we have the next requirement: "when a new user is registered into application, user information should be sent to a JMS Queue".

Moreover when JMS Consumer consumes the user information, it should insert it into database.

Let's write integration tests for these requirements.

@Test
public void addNewUserIsSentToQueue() {
   //Publish an asynchronous event to a JMS system.
   publish(new AddNewUserEvent(user));
   //Retrieve User from database
   User repoUser = userRepository.getUser(user);
   assertThat(repoUser, is(user));
}

The previous test could fail not because of bad code (a bug), but because publishing and inserting the user into the repository takes more time than querying the repository and executing the assertion.

A possible solution could be:

@Test
public void addNewUserIsSentToQueue() {

   //Publish an asynchronous event to a JMS system.
   publish(new AddNewUserEvent(user));

   try {
      Thread.sleep(5000);
   } catch (InterruptedException e) {
      fail(e.getMessage());
   }

   //Retrieve User from database
   User repoUser = userRepository.getUser(user);
   assertThat(repoUser, is(user));
}

This solution is about waiting 5 seconds to give the consumer time to insert the data. It is a possible solution, but I find it hard to read, not a clean solution. In my opinion code should be "human readable", even tests (think about why Hamcrest is important).

Awaitility allows you to express expectations of an asynchronous system in a concise and easy-to-read manner, saving you from dealing with threads, timeouts and concurrency issues.

Let's examine some examples:

@Test
public void addNewUserIsSentToQueue() {
   //Publish an asynchronous event to a JMS system.
   publish(new AddNewUserEvent(user));
   //Awaitility waits until the asynchronous operation completes
   await("user inserted").atMost(5, SECONDS).until(newUserIsAdded());
   assertThat(userRepository.getUser(user), is(user));
}

Note that the test verifies the same thing, but you will agree that with Awaitility it is cleaner: you can read without any doubt that it will wait at most 5 seconds until the new user is added; once that time expires, a timeout exception is thrown. But what does the newUserIsAdded() method do? It is simply a callback.

private Callable<Boolean> newUserIsAdded() {
   return new Callable<Boolean>() {
      public Boolean call() throws Exception {
         return user.equals(userRepository.getUser(user));
      }
   };
}

Internally Awaitility uses a polling interval (100 ms by default), meaning that every 100 ms the callback is called; if it eventually returns true, polling stops, and if not, the timeout exception is thrown.

Depending on the polling interval and the callback logic, you can saturate your testing machine; for this reason, you can change that value:

with().pollInterval(1, SECONDS).await("user inserted").atMost(5, SECONDS).until(newUserIsAdded());

Awaitility also supports waiting on a class attribute instead of a method call. It is like watching a flag until it changes its value. I prefer the callback approach over monitoring a private attribute, because the latter breaks encapsulation, but I will show how to do it because it is another possibility.

await().until( fieldIn(user).ofType(int.class).andWithName("userId"), equalTo(2) );
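For reference, these snippets assume static imports along these lines (package names from the Awaitility 1.x releases available at the time; check your version):

import static com.jayway.awaitility.Awaitility.await;
import static com.jayway.awaitility.Awaitility.fieldIn;
import static com.jayway.awaitility.Awaitility.with;
import static java.util.concurrent.TimeUnit.SECONDS;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;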

And that's Awaitility. As I have already mentioned, Awaitility gives you the possibility of expressing expectations cleanly in asynchronous integration tests.