Llueve, llueve, y mientras nos mojamos como tontos. LLueve, llueve, y en un simple charco a veces nos ahogamos. (Llueve - Melendi)
Tuesday, June 26, 2012
Bye, Bye, 5 * 60 * 1000 //Five Minutes, Bye, Bye
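The title alludes to replacing magic-number sleep arithmetic like 5 * 60 * 1000 with java.util.concurrent.TimeUnit, which states the intent directly. A minimal sketch:

```java
import java.util.concurrent.TimeUnit;

public class SleepExample {
    public static void main(String[] args) {
        // The cryptic version: what unit is this? Five minutes? Five seconds?
        long fiveMinutesInMillis = 5 * 60 * 1000;

        // TimeUnit makes the unit explicit and does the conversion for you:
        long sameValue = TimeUnit.MINUTES.toMillis(5);

        System.out.println(fiveMinutesInMillis == sameValue); // true

        // Likewise, TimeUnit.MINUTES.sleep(5) reads better than
        // Thread.sleep(5 * 60 * 1000) and blocks for the same time.
    }
}
```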
Tags: java, thread sleep, time, TimeUnit
Tuesday, June 19, 2012
NoSQLUnit 0.3.0 Released
Se você me olhar vou querer te pegar, E depois namorar curtição, Que hoje vai rolar... (Balada Boa - Gustavo Lima)
Introduction
NoSQLUnit
- The first, used when possible, is in-memory mode. This mode takes care of starting and stopping the database system "in-memory". It is typically used during unit test execution.
- The second is managed mode. This mode is in charge of starting the NoSQL server as a remote process (on the local machine) and stopping it. It is typically used during integration test execution.
For this example we are going to use the managed approach with the ManagedMongoDb rule, but note that in-memory MongoDb management is also supported (see the documentation for how).
The next step is configuring the MongoDb rule, which is in charge of keeping the MongoDb database in a known state by inserting and deleting defined datasets. You must register the MongoDbRule JUnit rule class, which requires a configuration parameter with information such as host, port, or database name.
To make developers' lives easier and code more readable, a fluent interface can be used to create these configuration objects.
Let's see the code:
The first thing is a simple POJO class that will be used as the model class:
And now it is time for testing. In the next test we are going to validate that a book is inserted correctly into the database.
This rule is executed when the test class is loaded and starts a MongoDb instance. It also shuts down the server when all tests have been executed.
The next rule is executed before each test method and is responsible for keeping the database in a known state. Note that we are only configuring the working database, in this case the test one.
Finally, we annotate the test method with @UsingDataSet, indicating where to find the data to be inserted before the execution of each test, and with @ShouldMatchDataSet, locating the expected dataset.
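A sketch of how these pieces fit together with NoSQLUnit 0.3.0; the class names (Book, BookManager), dataset file names, and the mongod path are illustrative assumptions, and builder method names may vary between versions:

```java
// Sketch only: assumes NoSQLUnit 0.3.0 and JUnit 4 on the classpath.
public class WhenANewBookIsCreated {

    // Starts one MongoDb instance for the whole test class, stops it afterwards.
    @ClassRule
    public static ManagedMongoDb managedMongoDb =
            newManagedMongoDbRule().mongodPath("/opt/mongo").build();

    // Before each test, puts the "test" database into the state defined by the datasets.
    @Rule
    public MongoDbRule mongoDbRule = new MongoDbRule(mongoDb().databaseName("test").build());

    @Test
    @UsingDataSet(locations = "initialData.json", loadStrategy = LoadStrategyEnum.CLEAN_INSERT)
    @ShouldMatchDataSet(location = "expectedData.json")
    public void book_should_be_inserted_into_repository() {
        BookManager bookManager = new BookManager(bookCollection()); // hypothetical helper
        bookManager.create(new Book("The Hobbit", 293));
    }
}
```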
Final Notes
Although NoSQLUnit is at an early stage, the MongoDb part is almost finished. In upcoming releases new features and, of course, new databases will be supported. The next supported NoSQL engines will be Neo4j, Cassandra, HBase and CouchDb.
Also read the documentation, where you will find a full explanation of each feature described here.
And finally, any suggestion, recommendation, or advice you have will be welcomed.
Stay In Touch
Email: asotobu at gmail.com
Blog: Lord Of The Jars
Twitter: @alexsotob
Github: NoSQLUnit Github
Keep Learning,
Alex
Full Code
Music: http://www.youtube.com/watch?v=8y5CbeHY7X0
Tags: dbunit, integration tests, junit testing java, mongodb, nosql, nosqlunit, unit testing
Friday, June 08, 2012
Testing Abstract Classes (and Template Method Pattern in Particular)
Sick at heart and lonely, deep in dark despair. Thinking one thought only, where is she tell me where. (Heart Full of Soul - The Yardbirds).
But how are we going to write unit tests for the calculate method if the class is abstract and an implementation of the read() method is required?
The first approach could be creating a fake implementation:
- Tests will be less readable; readers must know about the existence of these fake classes and must know exactly what they are doing.
- As a test writer you will spend time implementing fake classes. In this case it is simple, but your project could have more than one abstract class without an implementation, or even with more than one abstract method.
- The behaviour of fake classes is "hard-coded".
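These drawbacks are what the post resolves with Mockito (see the tags): instead of a hand-written fake, a partial mock runs the real template method while stubbing the abstract one. A sketch, assuming an abstract class whose concrete calculate() calls an abstract read():

```java
// Sketch: assumes Mockito and JUnit on the classpath, plus an abstract class like
//   public abstract class Calculator {
//       public abstract int read();
//       public int calculate() { return read() * 2; }
//   }
// (static imports from org.junit.Assert and org.mockito.Mockito assumed)
Calculator calculator = mock(Calculator.class, CALLS_REAL_METHODS);

// Stub only the abstract method; calculate() executes its real code.
// doReturn(...) avoids invoking the real method during stubbing.
doReturn(21).when(calculator).read();

assertEquals(42, calculator.calculate());
```

With CALLS_REAL_METHODS the template method under test is real production code, so the test exercises the actual algorithm rather than a hard-coded fake.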
Download source code
Music: http://www.youtube.com/watch?v=9O6eGOu27DA
Tags: abstract methods, junit, mockito, pattern, template method pattern, test, unit testing
Thursday, May 24, 2012
I'm guided by this birthmark on my skin, I'm guided by the beauty of our weapons, First we take Manhattan, then we take Berlin (First We Take Manhattan - Leonard Cohen)
The presentation abstract was:
Javascript Unit Testing with JS Test Driver
NoSQL Unit Testing with NoSQLUnit
Integration Tests with Arquillian
Acceptance Tests with Thucydides
If you have any questions, do not hesitate to write them in the comments section or to send me an email.
I would like to thank the LinuxTag folks for treating me so well, and all the people who came to the presentation: a big thank you to all of them.
Music: http://www.youtube.com/watch?v=JTTC_fD598A&ob=av2e
Tags: arquillian, hamcrest, jenkins, js testdriver, junit testing java, maven, mockito, nosqlunit, thucydides
Thursday, May 03, 2012
Nasha nasha krovatka delala shik - shik, Ya tvo pianino , a ty moi nastroishchik, My tak letali chto ne zametili tvoyu matʹ, Ahaa..I ona skazala chto ya prosto blaz (Mama Lyuba - Serebro))
If we take an overview of the Hibernate configuration, two properties, hibernate.format_sql and hibernate.use_sql_comments, should be enabled to print the executed SQL code to the console.
This is a good start, but it seems we need more information to make an accurate diagnosis of performance, such as connection events, the data returned by queries, or parameter binding (Hibernate shows parameter values as question marks, ?). Hence we need another way to inspect the generated SQL. Log4jdbc is a JDBC driver that can log SQL/JDBC calls. In fact, log4jdbc is an implementation of the proxy pattern: it automatically loads popular JDBC drivers (Oracle, Derby, MySql, PostgreSql, H2, Hsqldb, ...), intercepts calls, logs the information, and then forwards the data to the "spied" driver.
In log4jdbc there are five loggers that can be used depending on the data to monitor (plus one added by log4jdbc-remix):
- jdbc.sqlonly: logs only the executed SQL, with binding arguments replaced by the bound data.
- jdbc.sqltiming: logs how long each SQL statement took to execute.
- jdbc.audit: logs all JDBC calls except for ResultSets.
- jdbc.resultset: same as jdbc.audit plus ResultSets.
- jdbc.connection: logs connection open and close events.
- jdbc.resultsettable: logs result sets in table format (the log4jdbc-remix addition).
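A possible log4j configuration enabling some of these loggers could look like the following; the logger names are the real ones listed above, while the chosen levels are just an example:

```properties
# log the executed SQL with real bound values
log4j.logger.jdbc.sqlonly=INFO
# log timing information for each statement
log4j.logger.jdbc.sqltiming=INFO
# print result sets in table format (log4jdbc-remix logger)
log4j.logger.jdbc.resultsettable=INFO
# silence the noisier loggers
log4j.logger.jdbc.audit=FATAL
log4j.logger.jdbc.resultset=FATAL
log4j.logger.jdbc.connection=FATAL
```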
Compared to plain log4jdbc, log4jdbc-remix has two additional advantages:
- it can be configured as a datasource.
- it is available in the Maven repository (log4jdbc itself is not present in Maven repositories).
The first thing to do is to add the log4jdbc-remix and slf4j-log4j12 dependencies to the project:
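In a Maven project the dependencies would look something like this; the version numbers are the ones current at the time of writing and may need updating:

```xml
<dependency>
    <groupId>org.lazyluke</groupId>
    <artifactId>log4jdbc-remix</artifactId>
    <version>0.2.7</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.6.4</version>
</dependency>
```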
After configuring the loggers, run the test and inspect the output.
The output is printed in a readable format; queries contain the bound parameters (not a question mark), and the execution time is also reported.
Notice that logging more or less information is simply a matter of configuring a logger. Moreover, depending on the log level, more or less information will be provided in each case. If the logger is configured at DEBUG level, the class name and line number (if available) at which the SQL was executed will be included. INFO will include only the SQL, and finally ERROR will show stack traces if any SQLException occurs.
Optimizing Hibernate applications can imply touching many parts of an application (JVM configuration, database engine, network, ...), but one very important aspect to take care of is the number of queries sent to the RDBMS (for example, the N+1 problem) and the amount of data retrieved from the database (the projection problem), and log4jdbc-remix fits perfectly to help with this.
As a final note, log4jdbc(-remix) is a JDBC logger, so it is not tied to Hibernate applications; it can be used with any framework that uses a datasource.
I hope this library helps you.
Keep Learning,
Alex
Download Code
Music: http://www.youtube.com/watch?v=h9HRHOXfRBI
Tags: hibernate, hibernate performance, jdbc, jdbc logging, jpa, log4jdbc, log4jdbc-remix, logger, logging, resultset, sql
Thursday, April 19, 2012
Qui dit crise te dis monde dit famine dit tiers- monde, Qui dit fatigue dit réveille encore sourd de la veille, Alors on sort pour oublier tous les problèmes, Alors on danse... (Alors on Danse - Stromae)
We want to get all officers assigned to a starship in alphabetical order. This can be implemented in three ways:
- implementing an HQL query with an order by clause.
- using the sort approach.
- using the order approach.
Notice that now the officers association is implemented using a SortedSet instead of a List. Furthermore, we are adding the @Sort annotation to the relationship, stating that officers should be naturally ordered. Before finishing this post we will say more about the @Sort topic, but for now this is sufficient.
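Since the original listing is not reproduced here, a sketch of that mapping; the entity and field names are taken from the surrounding text, the rest is an assumption:

```java
// Sketch: Hibernate-specific natural sorting.
// Officer is assumed to implement Comparable<Officer>.
@OneToMany(cascade = CascadeType.ALL)
@Sort(type = SortType.NATURAL)   // org.hibernate.annotations.Sort
private SortedSet<Officer> officers = new TreeSet<Officer>();
```

With SortType.NATURAL, Hibernate sorts the collection in memory after loading it, using the elements' compareTo implementation.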
Obviously this method is not the best-performing way to sort a collection of elements. It is likely that we'll need a hybrid solution between using the SQL clause and using the annotation, instead of writing a query.
And this leads us to the third possibility: the order approach.
Keep in mind that javax.persistence.OrderBy allows us to specify the order of the collection via object properties, while org.hibernate.annotations.OrderBy orders a collection by appending a fragment of SQL (not HQL) directly to the order by clause.
Now the Officer class does not need to be touched; we don't need to implement a compareTo method or a java.util.Comparator. We only need to annotate the officers field with the @OrderBy annotation. Since in this case we are ordering by a simple attribute, the JPA annotation is used to maintain full compatibility with other "JPA ready" ORM engines. By default, ascending order is assumed.
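The mapping then reduces to something like the following; the ordering attribute ("name") is an illustrative assumption:

```java
// Sketch: JPA ordering by a simple attribute; ASC is the default direction.
@OneToMany(cascade = CascadeType.ALL)
@OrderBy("name")   // javax.persistence.OrderBy
private List<Officer> officers = new ArrayList<Officer>();
```

Here the ordering is pushed into the generated SQL's order by clause, so the database does the sorting instead of Java.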
And if we rerun the get-all-officers method, the following queries are sent:
Furthermore, the OrderBy annotation does not force you to use a SortedSet or SortedMap collection. You can use any collection such as HashMap, HashSet, or even a Bag, because Hibernate will internally use a LinkedHashMap, LinkedHashSet, or ArrayList respectively.
I hope this post helped you understand the differences between "sort" and "order" in Hibernate.
Keep learning.
Music: http://www.youtube.com/watch?v=VHoT4N43jK8&ob=av3n
Tags: hibernate, hibernate performance, hibernate query, jpa, order by, OrderBy annotation, sort, Sort annotation
Tuesday, April 10, 2012
Why does the rain fall from above? Why do fools fall in love? Why do they fall in love? (Why Do Fools Fall In Love - Frankie Lymon)
Maybe the logical way to resolve this problem is installing an email server and executing these tests against it. It is not a bad idea, but note that you would need to configure your environment before executing your tests. Your tests would depend on external resources, which is a bad idea for integration tests. Furthermore, these integration tests would not be portable across machines where an email server is not previously installed.
To avoid this problem, Dumbster comes to save us. Dumbster is a fake SMTP server designed for testing applications that send email messages. It is written in Java, so you can start and stop it directly from your tests.
Let's see an example. Suppose we are developing an electronic shop, and when an order is placed an email should be sent to the customer.
In this case we are going to use Spring Framework 3.1 to create our service layer; it will also help us with testing.
For teaching purposes, I am not using mail templates or rich MIME types.
The first class I am going to show you is Order, which, as you can imagine, represents an order:
The next class is the service responsible for submitting an order to the delivery system:
And finally, the Spring context file:
And now let's start with testing:
First of all we must create a Spring context file to configure the SMTP server location.
And finally the test itself.
- @ActiveProfiles is an annotation that tells the Spring context which environment should be loaded.
- SimpleSmtpServer is the main class of Dumbster.
- @Rule is responsible for starting and stopping the SMTP server for each test method execution.
The private methods are simply helpers that create the required objects.
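Putting the pieces together, the skeleton of such a test might look like this. Dumbster's SimpleSmtpServer and SmtpMessage are real classes; the service name, helper, port, and subject line are illustrative assumptions, and the start/stop is inlined here where the post wraps it in a JUnit @Rule:

```java
// Sketch: assumes Dumbster and JUnit on the classpath.
@Test
public void an_email_should_be_sent_when_an_order_is_placed() {
    // fake SMTP server on a port assumed to match the Spring test profile
    SimpleSmtpServer smtpServer = SimpleSmtpServer.start(2525);
    try {
        orderService.placeOrder(createOrder()); // hypothetical service and helper
    } finally {
        smtpServer.stop();
    }

    // the fake server records every message it received
    assertEquals(1, smtpServer.getReceivedEmailSize());
    SmtpMessage message = (SmtpMessage) smtpServer.getReceivedEmail().next();
    assertEquals("Order Confirmation", message.getHeaderValue("Subject")); // subject assumed
}
```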
I hope you have found this post useful, and that it gives you an alternative when you want to write integration tests involving an SMTP email service.
Keep Learning,
Alex.
Tags: dumbster, email testing, fake smtp server, integration tests, java, spring email, spring profiles, test, test email services
Thursday, April 05, 2012
Hey! Teachers! Leave them kids alone! All in all it's just another brick in the wall. All in all you're just another brick in the wall. (Another Brick In The Wall - Pink Floyd)
Sunday, March 18, 2012
Moi je pense à l'enfant, Entouré de soldats, Moi je pense à l'enfant, Qui demande pourquoi (Non Non Rien N'a Changé - Les Poppys)
Last year I went to Devoxx as a speaker, but I also attended Patrycja Wegrzynowicz's talk about Hibernate anti-patterns. In that presentation Patrycja showed us an anti-pattern that shocked me, because it proved you should expect the unexpected.
In the previous classes, we should pay attention to three important points:
- we are annotating at the property level instead of the field level.
- @OneToMany and @ManyToOne use default options (apart from the cascade definition).
- the officers getter on the Starship class returns an immutable list.
At this point let's examine why these SQL queries are executed:
The first eight inserts are unavoidable; they are required to insert the data into the database.
The next seven inserts are required because we have annotated the getOfficers property without the mappedBy attribute. If we look closely at the Hibernate documentation, it tells us that "Without describing any physical mapping, a unidirectional one to many with join table is used."
The next group of queries is even stranger: the first select statement finds the Starship by id, but what are these deletes and inserts of data that we have already created?
During commit, Hibernate validates whether collection properties are dirty by comparing object references. When a collection is marked as dirty, Hibernate needs to re-create the whole collection, even if it contains the same objects. In our case, when we get the officers we return a different collection instance, concretely an unmodifiable list, so Hibernate considers the officers collection dirty.
Because a join table is used, the Starship_Officer table must be re-created, deleting the previously inserted tuples and inserting the new ones (although they have the same values).
Let's try to fix this problem. We start by mapping a bidirectional one-to-many association, with the many-to-one side as the owning side.
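That mapping might be sketched as follows, still with property-level annotations as the post has at this stage; field names are assumptions:

```java
// Sketch: bidirectional association, the many-to-one side owns the relationship.
// In Starship:
@OneToMany(mappedBy = "starship", cascade = CascadeType.ALL)
public List<Officer> getOfficers() {
    return Collections.unmodifiableList(officers);
}

// In Officer:
@ManyToOne
public Starship getStarship() {
    return starship;
}
```

Note that the getter still returns an unmodifiable wrapper, which is what keeps Hibernate marking the collection dirty.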
Although we have reduced the number of SQL statements from 25 to 10, we still have an unnecessary query: the select in the commit section of the second transaction. Why, if officers are lazy by default (per the JPA specification) and we are not touching officers in the transaction, does Hibernate execute a select on the Officers table? For the same reason as in the previous configuration: the returned collection has a different Java identity, so Hibernate marks it as a newly instantiated collection; now, obviously, the join-table operations are no longer required, but we still have a performance problem. It is likely that we'll need some other solution, and the solution is not the most obvious one: rather than changing the collection objects our getter returns (we might expand on this later), we are going to change the location of the annotations.
What we are going to do is change the mapping location from the property approach to field mapping. We simply move all annotations to the class attributes rather than the getters.
Why does Hibernate run queries during commit when property mapping is used, but not with field mapping? When a transaction is committed, Hibernate executes a flush to synchronize the underlying persistent store with the persistable state held in memory. When property mapping is used, Hibernate calls getter/setter methods to synchronize data, and in the case of the getOfficers method, it returns a dirty collection (because of the unmodifiableList call). On the other side, when we use field mapping, Hibernate accesses the field directly, so the collection is not considered dirty and no re-creation is required.
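With field access, the same association might look like this (field names assumed):

```java
// Sketch: annotations moved to the fields. Hibernate now reads the field directly,
// so the unmodifiable wrapper returned by the getter no longer marks the
// collection dirty at flush time.
@OneToMany(mappedBy = "starship", cascade = CascadeType.ALL)
private List<Officer> officers = new ArrayList<Officer>();

public List<Officer> getOfficers() {
    return Collections.unmodifiableList(officers);
}
```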
But we have not finished yet. I suppose you are wondering why we did not simply remove Collections.unmodifiableList from the getter and return the Hibernate collection. The change would look like @OneToMany(cascade={CascadeType.ALL}) public List<Officer> getOfficers() { return officers; }, but returning the original collection ends up with an encapsulation problem; in fact, we are breaking encapsulation! Anything could be added to the mutable list; uncontrolled changes could be applied to the internal state of the object.
Using an unmodifiableList is one approach to avoid breaking encapsulation, but of course we could have used different accessors for public access and Hibernate access, and not called the Collections.unmodifiableList method.
Considering what we have seen today, I suggest you always use field annotations instead of property mapping; it will save you plenty of surprises.
I hope you have found this post useful.
Screencast of example shown here:
Download code
Music: http://www.youtube.com/watch?v=H14VIsnr6aA
Tags: dirty collection, encapsulation, hibernate, hibernate performance, java, onetomany, orm, sql, unmodifiableList
Tuesday, March 06, 2012
Keep 'em laughing as you go, Just remember that the last laugh is on you, And always look on the bright side of life..., Always look on the right side of life... (Always Look on the Bright Side of Life - Monty Python)
- Repeatable.
- Consistent.
- In Memory.
- Fast.
- Self-validating.
- Testing single concept
- Creating a partial mock.
- Using fault injection.
The next class would be the one that sends data through a socket, but it will not be shown because it is not necessary for this example.
And finally, the backup service responsible for managing the described behavior.
Byteman is a tool which allows you to insert/modify code in an application at runtime. These modifications can be used to inject code into your compiled application, causing unusual or unexpected operations (aka fault injection).
Note that BMUnitRunner (a special JUnit runner that comes with Byteman) is required.
The first test, called aFileWithContentShouldBeCreated, is a standard test that writes "Hello world" into the backup file.
But the second one, dataShouldBeSentToServerInCaseOfIOException, has a BMRule annotation which contains when, where, and what code should be injected. The first parameter is the name of the rule, in this case a description of what we are going to do (throwing an IOException). The next attributes, targetClass and targetMethod, configure when the injected code should be added; in this case, when the FileUtils.createFileWithContent method is called. The next attribute, targetLocation, is the location where the code is inserted; in our case, where the createFileWithContent method calls the write method of BufferedWriter. And finally, the action: what to do, which in this test is obviously throwing an IOException.
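Assembled from that description, the annotated test might look like this; the BMRule attribute names are Byteman's real ones, while the exact targetLocation text depends on the method signatures in the example:

```java
// Sketch: assumes Byteman BMUnit on the classpath and
// @RunWith(BMUnitRunner.class) on the test class.
@Test
@BMRule(name = "throw IOException when writing the backup file",
        targetClass = "FileUtils",
        targetMethod = "createFileWithContent",
        targetLocation = "AT INVOKE BufferedWriter.write",
        action = "throw new java.io.IOException()")
public void dataShouldBeSentToServerInCaseOfIOException() {
    // exercising the backup service here hits the injected IOException,
    // which should make the service fall back to sending data through the socket
}
```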
So now you can go to your IDE and run them, and all tests should pass; but if you run them through Maven using the Surefire plugin, the tests will not work. To use Byteman with Maven, the Surefire plugin must be configured in a specific way.
The first important thing is adding the tools jar as a dependency. This jar provides classes needed to dynamically install the Byteman agent.
In the Surefire plugin configuration it is important to set useManifestOnlyJar to false, to ensure that the Byteman jar appears in the classpath of the test JVM. Also note that we are defining empty environment variables (BYTEMAN_HOME and org.jboss.byteman.home). This is because, when it loads the agent, the BMUnit package uses the environment variable BYTEMAN_HOME or the system property org.jboss.byteman.home to locate byteman.jar, but only if it is a non-empty string; otherwise it scans the classpath to locate the jar. Because we want to ensure that the jar added in the dependency section is used, we override any other configuration present on the system.
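A Surefire configuration along those lines might look like this; the element names come straight from the description above, while the surrounding plugin boilerplate is the usual Maven shape:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- make sure byteman.jar from the dependencies is on the test classpath -->
    <useManifestOnlyJar>false</useManifestOnlyJar>
    <!-- empty values force BMUnit to fall back to classpath scanning -->
    <environmentVariables>
      <BYTEMAN_HOME></BYTEMAN_HOME>
    </environmentVariables>
    <systemProperties>
      <property>
        <name>org.jboss.byteman.home</name>
        <value></value>
      </property>
    </systemProperties>
  </configuration>
</plugin>
```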
And now you can run mvn clean test, and the two tests are successful too.
Byteman opens a new world in how we write our integration tests: now we can easily test unusual exceptions like communications errors, input/output exceptions, or out-of-memory errors. Moreover, because we are not mocking FileUtils, we are executing real code; for example, in our second test we run a few lines of the FileUtils object until the write method is reached. If we had mocked the FileUtils class, these lines would not have been executed. Thanks to fault injection, our code coverage is improved.
Byteman is more than what I have shown you; it also has built-ins designed for testing in multithreaded environments, parameter binding, and a number of location specifiers, to cite a few things.
I hope you have found this post useful and that it helps you test rare conditions of your classes.
Download Code
Tags: byteman, code coverage, fault injection, integration tests, junit, maven, surefire plugin, test, testing unusual exceptions
Monday, February 27, 2012
For everything I long to do, No matter when or where or who, Has one thing in common too, It's a, it's a, it's a, it's a sin (It's a Sin - Pet Shop Boys)
Tags: dependencyManagement, eclipse, m2eclipse, maven, maven multimodule, maven multiproject, maven screencast
Thursday, February 23, 2012
If there ain't all that much to lug around, Better run like hell when you hit the ground. When the morning comes. (This Too Shall Pass - Ok Go)
From my point of view, Javascript has become so popular thanks to jQuery, which has greatly simplified the way we write Javascript code. And you can also test jQuery applications with Jasmine using the Jasmine-jQuery module, which provides two extensions for testing:
- a set of matchers for the jQuery framework, like toBeChecked(), toBeVisible(), toHaveClass(), ...
- an API for handling HTML fixtures, which enables you to load HTML code to be used by tests.
So I suppose you want to start coding. We are going to create a simple jQuery plugin in the standard Maven war layout, where Javascript files go to src/webapp/js, CSS files to src/webapp/css, and Javascript tests to src/test/javascript. Of course this directory structure is fully configurable; for example, if your project were a pure Javascript project, src/main/javascript would be a better place. The next image shows the directory layout.
Now it is time for testing. Yes, I know: write tests first and then business code; but I thought it would be more appropriate to show the code under test first.
So let's write the Jasmine test file.
The first thing to do is add a description (behaviour) of what we are going to test with the describe function. Then with beforeEach we define the function we want to execute before each test execution (like the @Before JUnit annotation). In this case we are setting our fixture to test the plugin code; you can set an HTML file as a template, or you can define the HTML inline, as done here.
And finally the test, written inside the it function. Our test should validate that the div element with id content, defined in the fixture, contains a class attribute with the value red after running the redColor function. See how we are using the jasmine-jquery toHaveClass matcher.
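A sketch of that spec, using the fixture element and plugin name (redColor) from the description; setFixtures and toHaveClass are real jasmine-jquery APIs, the describe/it wording is illustrative:

```javascript
// Sketch: assumes jasmine and jasmine-jquery are loaded in the test runner.
describe('RedColor jQuery plugin', function () {

  beforeEach(function () {
    // inline HTML fixture, reset before each spec
    setFixtures('<div id="content"></div>');
  });

  it('should add the red class to the content div', function () {
    $('#content').redColor();
    expect($('#content')).toHaveClass('red');
  });
});
```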
All these parameters will change depending on your project, but if you are creating a Maven war project, this layout is enough.
And now you can run Maven by typing:
mvn clean test
And the following console output should be printed:
Music: http://www.youtube.com/watch?feature=player_embedded&v=qybUFnY7Y8w#!
Thursday, February 16, 2012
Party rock is in the house tonight, Everybody just have a good time, And we gon' make you loose your mind, Everybody just have a good good good time. (Party Rock Anthem - LMFAO)
- Using WEBrick (not recommended in production environments).
- Running with Mongrel and FastCGI.
- Using Passenger.
- Or packaging Redmine into a war and deploying it into a Java container like Tomcat or Glassfish.
Redmine installation
Download Redmine 1.3 and install it in the /usr/share directory:
cd /usr/share/redmine-1.3.0/config/
The installation comes with a database template configuration file; we are going to rename it and modify it to suit our environment. Moreover, Redmine supports different start-up modes (production, development, test). In our case, because we are configuring a production environment, only the production section will be touched.
mysql -u root -p
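Inside the MySQL console, the usual Redmine setup creates the database and a dedicated user; these are the statements from the standard Redmine installation guide, with the password as a placeholder:

```sql
CREATE DATABASE redmine CHARACTER SET utf8;
CREATE USER 'redmine'@'localhost' IDENTIFIED BY 'my_password';
GRANT ALL PRIVILEGES ON redmine.* TO 'redmine'@'localhost';
```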
Now it is time to initialize Redmine.
The next step is only required because we are installing Redmine 1.3; in later versions (Redmine 1.4 and beyond) it will not be necessary. Open config/environment.rb and comment out the next line:
config.gem 'rubytree', :lib => 'tree'
And then create the database schema and fill it with default data using the next scripts.
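For Redmine 1.3 those scripts are typically the following rake tasks, run from the Redmine directory (as documented in the Redmine installation guide):

```shell
# generate a session secret, then create the schema and load default data
rake generate_session_store
RAILS_ENV=production rake db:migrate
RAILS_ENV=production rake redmine:load_default_data
```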
Now we are going to check that Redmine is correctly configured. For this purpose we are going to use WEBrick.
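WEBrick can be started from the Redmine directory with the Rails 2.x-style server script that Redmine 1.3 ships with:

```shell
ruby script/server webrick -e production
```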
and open a browser at http://localhost:3000 to start checking the installation.
The Redmine web page will be shown; you can log in with the username and password admin/admin.
At this point we have Redmine correctly installed.
I hope you have found it useful.