Wednesday, March 04, 2015

Restful Web Service Guide

Nowadays more and more projects are developed using the combination of AngularJS on the frontend and Java EE (or Spring Framework) on the backend. The communication between AngularJS and Java EE is done using RESTful Web Services.

In my company this combination is used in every project, and we are several teams working on different projects. So it seemed clear that it would make sense for all RESTful Web Services to be designed in a similar way. For this reason we (the architecture team) decided to create a RESTful Web Service guide on which all teams could base their API design. In this document we cover basic concepts of REST, but also how to internationalize a REST API, pagination, security with JSON Web Tokens and HTTP error codes.

This guide has been released under a CC license and is published on GitHub. You can read it without any problem, send a PR with any improvement or open an issue to discuss anything.

We keep learning,

It might seem crazy what I'm about to say, Sunshine she's here, you can take away, I’m a hot air balloon, I could go to space ,With the air, like I don't care baby by the way (Happy - Pharrell Williams)

Music: https://www.youtube.com/watch?v=y6Sxv-sUYtM

Thursday, January 22, 2015

Self-Signed Certificate for Apache TomEE (and Tomcat)

Probably in most of your Java EE projects, part of or the whole system will have SSL support (https) so browsers and servers can communicate over a secured connection. This means that the data being sent is encrypted, transmitted and finally decrypted before being processed.

The problem is that sometimes the official "keystore" is only available for the production environment and cannot be used in development/testing machines. One possible workaround is for one member of the team to create a non-official "keystore" and share it with all members so everyone can test locally using https, and the same for testing/QA environments.

But with this approach you run into one problem: when you run the application you will receive a warning/error message that the certificate is untrusted. You can live with this, but we can do better and avoid this situation by creating a self-signed SSL certificate.

In this post we are going to see how to create and enable SSL in Apache TomEE (and Tomcat) with a self-signed certificate.

The first thing to do is install openssl. This step depends on your OS; in my case I am running Ubuntu 14.04.

Then we need to generate a 1024-bit RSA private key, encrypted with the Triple-DES algorithm and stored in PEM format. I am going to use the {userhome}/certs directory to hold all required resources, but it can be changed without any problem.

Generate Private Key

openssl genrsa -des3 -out server.key 1024

Here we must introduce a password; for this example I am going to use apachetomee (please don't do that in production).

Generate CSR

The next step is to generate a CSR (Certificate Signing Request). Ideally this file would be generated and sent to a Certificate Authority such as Thawte or Verisign, who would verify the identity. But in our case we are going to self-sign the CSR with the previous private key.

openssl req -new -key server.key -out server.csr

One of the prompts will be for "Common Name (e.g. server FQDN or YOUR name)". It is important that this field be filled in with the fully qualified domain name of the server to be protected by SSL. In the case of a development machine you can set "localhost".

Now that we have the private key and the CSR, we are ready to generate an X.509 self-signed certificate valid for one year by running the next command:

Generate a Self-Signed Certificate

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

To install the certificate inside Apache TomEE (and Tomcat) we need to use a keystore. This keystore is generated using the keytool command. To use this tool, the certificate should be in PKCS12 format. For this reason we are going to use openssl to transform the certificate to PKCS12 format by running:

Prepare for Apache TomEE

openssl pkcs12 -export -in server.crt -inkey server.key -out server.p12 -name test_server -caname root_ca

We are almost done; now we only need to create the keystore. I have used the same password to protect the keystore as for all the other resources, which is apachetomee.

keytool -importkeystore -destkeystore keystore.jks -srckeystore server.p12 -srcstoretype PKCS12 -srcalias test_server -destalias test_server

And now we have a keystore.jks file created at {userhome}/certs.

Installing Keystore into Apache TomEE

The process of installing a keystore into Apache TomEE (and Tomcat) is described in http://tomcat.apache.org/tomcat-8.0-doc/ssl-howto.html. But in summary the only thing to do is open ${TOMEE_HOME}/conf/server.xml and define the SSL connector.

<Service name="Catalina">
  <Connector port="8443" protocol="HTTP/1.1"
             maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
             keystoreFile="${user.home}/certs/keystore.jks" keystorePass="apachetomee"
             clientAuth="false" sslProtocol="TLS" />
  ...
</Service>

Note that you need to set the keystore location (in my case {userhome}/certs/keystore.jks) and the password used to open the keystore, which is apachetomee.

Preparing the Browser

Before starting the server we need to add server.crt as a valid authority in the browser.

In Firefox: Firefox Preferences -> Advanced -> View Certificates -> Authorities (tab) and then import the server.crt file.

In Chrome: Settings -> HTTPS/SSL -> Manage Certificates ... -> Authorities (tab) and then import the server.crt file.

And now you are ready to start Apache TomEE (or Tomcat), and you can navigate to any deployed application using https and port 8443.

And that's all; now we can run tests (with Selenium) without worrying about untrusted certificate warnings.

We keep learning,

Dog goes woof, Cat goes meow, Bird goes tweet and mouse goes squeek (What Does the Fox Say - Ylvis)

Music: https://www.youtube.com/watch?v=jofNR_WkoCE

Monday, January 12, 2015

Apache TomEE + JMS. It has never been so easy.

I remember the old days of J2EE (1.3 and 1.4), when it was incredibly hard to start a project using JMS. You needed to install a JMS broker, create topics or queues and finally start your own battle with server configuration files and JNDI.

Thanks to Java EE 6 and beyond, using JMS is really easy and simple. But with Apache TomEE it is even simpler to get started. In this post we are going to see how to create and test a simple application which sends and receives messages to/from a JMS queue with Apache TomEE.

Apache TomEE uses Apache ActiveMQ as its JMS provider. In this example you won't need to download or install anything because all elements are provided as Maven dependencies, but if you plan to use the Apache TomEE server (and you should), you will need to download Apache TomEE Plus or Apache TomEE Plume. You can read more about Apache TomEE flavors at http://tomee.apache.org/comparison.html.


The first thing to do is add javaee-api as a provided dependency, and junit and openejb-core as test dependencies. Note that the openejb-core dependency is added to have a runtime to execute the tests; we are going to look at it more deeply in the test section.
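As a sketch, the relevant part of the pom.xml might look like the following (the versions shown are illustrative; use the current ones):

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.openejb</groupId>
    <artifactId>javaee-api</artifactId>
    <version>6.0-6</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.openejb</groupId>
    <artifactId>openejb-core</artifactId>
    <version>4.7.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```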

Business Code

The next step is creating the business code responsible for sending messages to and receiving messages from the JMS queue. For this example we are going to use a stateless EJB.

The most important part of the Messages class is to note how easy it is to inject ConnectionFactory and Queue instances into the code. You only need to use the @Resource annotation and the container will do the rest for you. Finally, note that because we have not used the name or lookup attributes to set a name, the name of the field is used as the resource name.
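A minimal sketch of such a bean, using the org.superbiz.Messages class name and field names that match the auto-created resources; the send/receive bodies here are an assumption of mine, not the literal code of the original example:

```java
package org.superbiz;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

@Stateless
public class Messages {

    // No name/lookup attribute: the field name becomes the resource name.
    @Resource
    private ConnectionFactory connectionFactory;

    @Resource
    private Queue chatQueue;

    public void sendMessage(String text) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(chatQueue).send(session.createTextMessage(text));
        } finally {
            connection.close();
        }
    }

    public String receiveMessage() throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage message = (TextMessage) session.createConsumer(chatQueue).receive(1000);
            return message == null ? null : message.getText();
        } finally {
            connection.close();
        }
    }
}
```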


And finally we can write a test that asserts that messages are sent and received using the JMS queue. We could use, for example, Arquillian to write the test, but in this case, for simplicity, we are going to use an embedded OpenEJB instance to deploy the JMS example and run the tests.

Note that the test is really simple and concise; you only need to programmatically start an EJB container and bind the current test inside it so we can use Java EE annotations in the test. The rest is a simple JUnit test.
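A sketch of how such a test might look (the test method and the assertion are illustrative):

```java
import javax.ejb.EJB;
import javax.ejb.embeddable.EJBContainer;

import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class MessagesTest {

    @EJB
    private Messages messages;

    @Before
    public void startContainer() throws Exception {
        // Start an embedded EJB container and bind the test instance
        // so that @EJB injection works inside this JUnit test.
        EJBContainer container = EJBContainer.createEJBContainer();
        container.getContext().bind("inject", this);
    }

    @Test
    public void shouldSendAndReceiveMessage() throws Exception {
        messages.sendMessage("Hello TomEE!");
        assertEquals("Hello TomEE!", messages.receiveMessage());
    }
}
```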

And if you run the test you will receive a green bullet. But wait, probably you are wondering where the JMS broker and its configuration are. Where is the definition of the ConnectionFactory and the JMS queue? And this is where OpenEJB (and Apache TomEE) comes into play.

In this case OpenEJB (and Apache TomEE) will use Apache ActiveMQ in embedded mode, so you don't need to install Apache ActiveMQ on your computer to run the tests. Moreover, Apache TomEE will create all required resources for you. For example, it will create a ConnectionFactory and a Queue with default parameters and expected names (org.superbiz.Messages/connectionFactory for the ConnectionFactory and org.superbiz.Messages/chatQueue for the Queue), so you don't need to worry about configuring JMS during the test phase. Apache TomEE is smart enough to create and configure them for you.

You can inspect the console output and see that resources are auto-created by reading the following log message: INFO: Auto-creating a Resource

And that's all, really simple and easy to get started with JMS thanks to Java EE and TomEE. In the next post we are going to see how to do the same but using Message Driven Beans (MDB).

We keep learning,
No se lo qué hacer para que me hagas caso, lo he intentado todo menos bailar ballet, ya va siendo hora de mandarte a paseo, si consigo olvidarte tal vez pueda vivir. (Voy A Acabar Borracho - Platero y Tú)
Music: https://www.youtube.com/watch?v=aK6oIQikjZU

Wednesday, December 10, 2014

AsciidoctorJ 1.5.2 released

This week AsciidoctorJ 1.5.2 has been released. It contains several bug fixes and also one really important internal change: the build is now done using Gradle and released using Bintray.

But there are two new features that people have been waiting for: integration with asciidoctor-pdf and asciidoctor-epub3. Now you can set the backend to pdf in AsciidoctorJ. Let's see an easy example.

First of all you need to add the asciidoctorj-pdf artifact to your pom in front of the asciidoctorj dependency.
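For example (the asciidoctorj-pdf version is left as a placeholder; use the latest published release):

```xml
<dependency>
  <groupId>org.asciidoctor</groupId>
  <artifactId>asciidoctorj-pdf</artifactId>
  <version><!-- latest asciidoctorj-pdf release --></version>
</dependency>
<dependency>
  <groupId>org.asciidoctor</groupId>
  <artifactId>asciidoctorj</artifactId>
  <version>1.5.2</version>
</dependency>
```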

And then you can use AsciidoctorJ as usual.
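A minimal sketch of the conversion (the file name sample.adoc and the safe mode are illustrative):

```java
import java.io.File;

import org.asciidoctor.Asciidoctor;
import org.asciidoctor.SafeMode;

import static org.asciidoctor.OptionsBuilder.options;

public class PdfExample {
    public static void main(String[] args) {
        Asciidoctor asciidoctor = Asciidoctor.Factory.create();
        // Setting the backend to pdf is all that is needed once
        // asciidoctorj-pdf is on the classpath.
        asciidoctor.convertFile(new File("sample.adoc"),
                options().backend("pdf").safe(SafeMode.UNSAFE).get());
    }
}
```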

The execution of previous code will create a sample.pdf file in the same location as sample.adoc.

And for asciidoctorj-epub3 it is almost the same, but with one particularity: you need to create an index file which includes the document to be rendered.

Index file looks like:
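Something along these lines (file names and titles are illustrative):

```asciidoc
= Sample Book
Author Name

include::content.adoc[]
```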

And the content itself:
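For instance:

```asciidoc
== First Chapter

This is the content that ends up inside the generated EPUB3 document.
```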

And finally the invocation:
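A sketch of the invocation, which only differs from the pdf case in the backend name (index.adoc is the index file mentioned above):

```java
import java.io.File;

import org.asciidoctor.Asciidoctor;
import org.asciidoctor.SafeMode;

import static org.asciidoctor.OptionsBuilder.options;

public class Epub3Example {
    public static void main(String[] args) {
        Asciidoctor asciidoctor = Asciidoctor.Factory.create();
        // Same API as before; only the backend changes to epub3.
        asciidoctor.convertFile(new File("index.adoc"),
                options().backend("epub3").safe(SafeMode.UNSAFE).get());
    }
}
```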

And in this case an epub3 document is generated.

In both cases it is that easy: add a dependency and you are ready to use them as backends.

We keep learning,
Old, but I'm not that old. Young, but I'm not that bold. And I don't think the world is sold. I'm just doing what we're told. (Counting Stars - OneRepublic)

Arquillian + Java 8

With Java 8, a lot of new language improvements have been implemented to make the life of developers easier. In my opinion, one of the greatest things Java 8 brings is that in some situations the code looks more beautiful than with prior approaches; I am referring to lambdas and method references. This post is not about learning these Java 8 features but about how to apply them in the Arquillian framework.

I have detected four use cases where method references and lambdas can be used in Arquillian. Here you can see them, and of course if you find any other one, feel free to share it with us.

Merge libs inside a JavaArchive

To write tests with Arquillian you need to create the deployment file programmatically (jar, war or ear). This is accomplished using ShrinkWrap. Your deployment file will sometimes require you to add some external dependencies to it. A typical example is when you are creating a WebArchive and you need to add some dependencies to WEB-INF/lib. In this case it is easy because the WebArchive class has a method called addAsLibraries which basically adds the given jars to the libraries path.

But what happens when your deployment file is a jar file? Then you need to merge each library inside the JavaArchive object by using the merge method.
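For example, resolving dependencies with ShrinkWrap Resolver and merging them one by one (archive and pom names are illustrative):

```java
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.jboss.shrinkwrap.resolver.api.maven.Maven;

public class DeploymentFactory {

    public static JavaArchive createDeployment() {
        JavaArchive[] libs = Maven.resolver()
                .loadPomFromFile("pom.xml")
                .importRuntimeDependencies()
                .resolve()
                .withTransitivity()
                .as(JavaArchive.class);

        JavaArchive deployment = ShrinkWrap.create(JavaArchive.class, "test.jar");
        // Pre-Java 8: merge every resolved library one by one.
        for (JavaArchive lib : libs) {
            deployment.merge(lib);
        }
        return deployment;
    }
}
```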

This is one way to do it, but with Java 8 you can use the forEach function and method references.
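With Java 8 the loop collapses to a single line (same illustrative resolution of libs as before):

```java
import java.util.Arrays;

import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.jboss.shrinkwrap.resolver.api.maven.Maven;

public class DeploymentFactory {

    public static JavaArchive createDeployment() {
        JavaArchive[] libs = Maven.resolver()
                .loadPomFromFile("pom.xml")
                .importRuntimeDependencies()
                .resolve()
                .withTransitivity()
                .as(JavaArchive.class);

        JavaArchive deployment = ShrinkWrap.create(JavaArchive.class, "test.jar");
        // Java 8: stream the array and merge each lib via a method reference.
        Arrays.stream(libs).forEach(deployment::merge);
        return deployment;
    }
}
```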

Note that we are converting the array into a stream so we can call the forEach function. In version 2.2.0 of ShrinkWrap Resolver you will be able to get dependencies as a List, so you will be able to get a stream without any conversion. The next important point is that we are using the method reference feature to merge all dependencies. Now with a single line we can merge them all.

Creating custom Assets

Arquillian uses ShrinkWrap to create the deployment file and add resources inside it. These resources are added by using any of the methods provided by the API like add, addClass, addAsManifestResource and so on. These methods can receive an Asset as the first parameter. Asset is an interface which contains only one method, called openStream, which returns an InputStream. Assets are used for setting the content of a file that will be added to the deployment file.

For example:

ShrinkWrap comes with some already defined assets like Url, String, Byte, Class, ... but sometimes you may need to implement your own Asset.

In this case we are using an inner class, but because the Asset interface can be considered a functional interface (only one abstract method), we can use a lambda to avoid the inner class.
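A side-by-side sketch of the two styles (the asset content and target path are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.Asset;
import org.jboss.shrinkwrap.api.spec.WebArchive;

public class AssetExample {

    public static WebArchive createDeployment() {
        // Inner-class version:
        Asset innerClassAsset = new Asset() {
            @Override
            public InputStream openStream() {
                return new ByteArrayInputStream("Hello World".getBytes());
            }
        };

        // Java 8 version: Asset has a single abstract method (openStream),
        // so a lambda can replace the inner class.
        Asset lambdaAsset = () -> new ByteArrayInputStream("Hello World".getBytes());

        return ShrinkWrap.create(WebArchive.class)
                         .add(lambdaAsset, "hello.txt");
    }
}
```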

Much simpler and more readable.

Parsing HTML tables

If you are using Arquillian Drone or Arquillian Graphene, you will be using some WebDriver Selenium classes for getting web page elements. Sometimes you need to validate the columns of an HTML table, and in these cases you can end up with a lot of boilerplate code iterating over columns and rows to validate that they contain the correct values.

Your code prior to Java 8 would look something like:
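A sketch of the pre-Java 8 style (the CSS selector for the title column is an assumption):

```java
import java.util.ArrayList;
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class TableAssertions {

    // Collect the text of every cell in the "title" column the old way.
    public static List<String> titleColumn(WebDriver driver) {
        List<WebElement> cells = driver.findElements(By.cssSelector("td.title"));
        List<String> titles = new ArrayList<String>();
        for (WebElement cell : cells) {
            titles.add(cell.getText());
        }
        return titles;
    }
}
```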

But in Java 8, with the addition of the streaming API, the code becomes much easier to read:
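The same logic as a stream pipeline (again with an assumed selector):

```java
import java.util.List;
import java.util.stream.Collectors;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class TableAssertions {

    // Java 8: stream the cells and map each one to its text.
    public static List<String> titleColumn(WebDriver driver) {
        return driver.findElements(By.cssSelector("td.title"))
                     .stream()
                     .map(WebElement::getText)
                     .collect(Collectors.toList());
    }
}
```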

As you can see, the code is much more compact. What we are doing here is first of all getting all web elements of the title column; no news here. But then the streaming API comes into play. First we create a stream from the list by calling the stream method. Then we call the getText method on every WebElement present in the list. And finally a list of strings, which in fact is the content of all rows of the title column, is returned.

Note that in this case the code is much more readable than the previous one, and more importantly, you can even create a parallel stream to get all the power of multi-core processors.

So as you can see Java 8 can be used not only in business code but also in tests.

We keep learning,
See yourself, You are the steps you take, You and you, and that's the only way (Owner Of A Lonely Heart - Yes)

Music: https://www.youtube.com/watch?v=lsx3nGoKIN8

Tuesday, December 09, 2014

RESTful Java Patterns and Best Practices Book Review

This past week I have been reading the book RESTful Java Patterns and Best Practices, written by Bhakti Mehta. It is a good introductory book on RESTful patterns which compares how common REST API problems like pagination or internationalization are solved by companies like GitHub, Facebook or Twitter. There is also an introductory chapter on HATEOAS.

The book is well written and clearly presents each problem and how it can be solved. It also compares how the same problem has been implemented by different companies, weighing the advantages and disadvantages of each approach.

Maybe one thing to be improved, for my taste, is the examples. In my opinion (maybe because of my expectations), the code examples are simple, which on one side is good because they are easy to follow, but on the other side you will need to search the internet for more complex examples of how to implement what is described in the book. Of course this will depend on your knowledge of REST and JAX-RS.

Overall a good book to read: easy to understand and a good starting point for RESTful patterns.

Sunday, November 30, 2014

Arquillian Cube. Let's zap ALL these bugs, even the infrastructure ones.

Docker is becoming the de facto project for deploying applications inside lightweight software containers in isolation. Because they are really lightweight, they are perfect not only for use in production, but also inside developer/QA/CI machines. So the natural step is to start writing tests of your software that run against these containers.

In fact, if you are running Docker in production, you can start writing infrastructure tests and run them as part of your release chain (even debugging on the developer machine) before touching production.

That's pretty cool, but you need a way to automate all these steps from the point of view of the developer. It would be amazing if we could start up Docker containers, deploy the project there and execute the tests from JUnit, as easily as a simple JUnit test. And this is what Arquillian Cube does.

Arquillian Cube is an Arquillian extension that can be used to manage Docker containers from Arquillian.

The extension is named Cube for two reasons:
  • Because a Docker container is like a cube
  • Because the Borg starship is named Cube, and because we are moving tests closer to production we can say that "any resistance is futile, bugs will be assimilated".

With this extension you can start a Docker container with a server installed, deploy the required deployable file within it and execute Arquillian tests.

The key point here is that if Docker is used as the deployment platform in production, your tests are executed in the same container as it will be in production, so your tests are even more real than before.

But it also lets you start a container with every required service like a database, mail server, … and instead of stubbing or using fake objects your tests can use real servers.

Let's see a really simple example which shows you how powerful and easy to use Arquillian Cube is. Keep in mind that this extension can be used along with other Arquillian extensions like Arquillian Drone/Graphene to run functional tests.

Arquillian Cube relies on docker-java API.

To use Arquillian Cube you need a Docker daemon running on a computer (it can be local or not), but probably it will be local.

By default the Docker server uses UNIX sockets for communication with the Docker client; however, the docker-java client uses TCP/IP to connect to the Docker server, so you will need to make sure that your Docker server is listening on a TCP port. To allow the Docker server to use TCP, add the following line to /etc/default/docker:

DOCKER_OPTS="-H tcp:// -H unix:///var/run/docker.sock"

After restarting the Docker daemon you need to make sure that Docker is up and listening on TCP.

$ docker -H tcp:// version

Client version: 0.8.0
Go version (client): go1.2
Git commit (client): cc3a8c8
Server version: 1.2.0
Git commit (server): fa7b24f
Go version (server): go1.3.1

If you cannot see the client and server versions, it means that something is wrong with the Docker installation.

After having a Docker server installed, we can start using Arquillian Cube.

In this case we are going to use a Docker image with an Apache Tomcat and we are going to test a Servlet.

And the test:
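A sketch of what such a test might look like; HelloWorldServlet and its response are hypothetical stand-ins for the servlet under test:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.test.api.ArquillianResource;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class HelloWorldServletTest {

    @Deployment(testable = false)
    public static WebArchive createDeployment() {
        // HelloWorldServlet is a hypothetical servlet mapped to /hello
        // that writes "Hello World!" to the response.
        return ShrinkWrap.create(WebArchive.class, "hello.war")
                         .addClass(HelloWorldServlet.class);
    }

    @Test
    public void shouldReturnResponseFromDockerizedTomcat(
            @ArquillianResource URL base) throws IOException {
        URL servlet = new URL(base, "hello");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(servlet.openStream()))) {
            assertEquals("Hello World!", reader.readLine());
        }
    }
}
```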

Notice that the test looks like any other Arquillian test.

The next step is to add the Arquillian dependencies as described in http://arquillian.org/guides/getting_started and add the arquillian-cube-docker dependency:
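Something along these lines (the groupId shown is an assumption; check the project's README for the current coordinates and version):

```xml
<dependency>
  <groupId>org.arquillian.cube</groupId>
  <artifactId>arquillian-cube-docker</artifactId>
  <version><!-- latest release --></version>
  <scope>test</scope>
</dependency>
```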

Because we are using Tomcat and because it is being executed on a remote host (in fact this is true because Tomcat is running inside Docker, which is external to Arquillian), we need to add the Tomcat remote adapter.

And finally arquillian.xml is configured:
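A sketch of such a configuration, with the numbered comments matching the notes that follow; the server version, YAML container parameters and credentials are illustrative values, not the exact ones from the original example:

```xml
<?xml version="1.0"?>
<arquillian xmlns="http://jboss.org/schema/arquillian"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://jboss.org/schema/arquillian
                                http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

  <extension qualifier="docker">                                  <!-- (1) -->
    <property name="serverVersion">1.12</property>                <!-- (2) -->
    <property name="serverUri">http://localhost:2375</property>   <!-- (3) -->
    <property name="dockerContainers">                            <!-- (4) -->
      tomcat:
        image: tutum/tomcat:7.0
        exposedPorts: [8080/tcp]
        env: [TOMCAT_PASS=mypass]
        portBindings: [8080/tcp]
    </property>
  </extension>

  <container qualifier="tomcat">                                  <!-- (5) -->
    <configuration>
      <property name="host">localhost</property>                  <!-- (6) -->
      <property name="httpPort">8080</property>                   <!-- (7) -->
      <property name="user">admin</property>                      <!-- (8) -->
      <property name="pass">mypass</property>
    </configuration>
  </container>
</arquillian>
```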

(1) The Arquillian Cube extension is registered.
(2) The Docker server version is required.
(3) The Docker server URI is required. If you are using a remote Docker host or Boot2Docker you need to set the remote host IP here, but in this case the Docker server is on the same machine.
(4) A Docker container has a lot of parameters that can be configured. To avoid having to create one XML property for each one, YAML content can be embedded directly as a property.
(5) Configuration of the Tomcat remote adapter. Cube will start the Docker container when it is run in the same context as an Arquillian container with the same name.
(6) Host can be localhost because there is port forwarding between the container and the Docker server.
(7) The port is exposed as well.
(8) User and password are required to deploy the war file to the remote Tomcat.

And that's all. Now you can run your test and you will see how the tutum/tomcat:7.0 image is downloaded and started. Then the test is executed and finally the Docker container is stopped.

This has been a simple example, but you can do a lot more things like creating images from a Dockerfile, orchestrating several Docker images or enriching the test to programmatically manipulate containers.

For more information you can read https://github.com/arquillian/arquillian-cube, and of course any feedback will be more than welcome.

We keep learning,
Come on now, who do you, Who do you, who do you, who do you think you are?, Ha ha ha, bless your soul, You really think you're in control? (Crazy - Gnarls Barkley)
Music: https://www.youtube.com/watch?v=bd2B6SjMh_w 

Tuesday, September 23, 2014

Java EE + MongoDb with Apache TomEE and Jongo Starter Project

Do you know MongoDB and Java EE, but you don't know exactly how to integrate them? Have you read a lot about the topic but not found a solution which fits this purpose? This starter project is for you:

You will learn how to use MongoDB and Java EE in an elegant way, without having to depend on the Spring Data MongoDB framework but with "similar" basic features.

The only thing better than a Maven archetype is a repository you can fork with everything already set up. Skip the documentation and just fork-and-code; that is what this starter project offers.

The example is pretty simple: we want to store colors inside a MongoDB collection.

Our POJO is like:

Note that we are using the @ObjectId annotation provided by Jongo to set this field as the MongoDB id. Moreover, because it is called _id, the id will be set automatically.
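A sketch of such a POJO; apart from the _id field described above, the remaining fields are illustrative:

```java
import org.jongo.marshall.jackson.oid.ObjectId;

public class Color {

    // Jongo's @ObjectId on a field named _id makes Jongo
    // populate it automatically with the MongoDB document id.
    @ObjectId
    private String _id;

    private String name;
    private int red;
    private int green;
    private int blue;

    public Color() {
    }

    public Color(String name, int red, int green, int blue) {
        this.name = name;
        this.red = red;
        this.green = green;
        this.blue = blue;
    }

    // getters and setters omitted for brevity
}
```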

Then the service layer:

Note that there isn't a lot of code, but some points are really interesting. Let's analyze them.

@Singleton is used to define the EJB as a singleton; it works with @Stateless as well. For Java EE users, no news here.

The class is abstract. Why? Because it allows us to declare methods without implementing them all.

It also implements java.lang.reflect.InvocationHandler. This is because we want to use one really interesting feature: a fallback method called invoke. For any method declared but not implemented, this method is called instead.

We have a MongoCollection class (from the Jongo project) that is injected. MongoCollection represents a collection in MongoDB. Because we need to set which collection we want to work with, an annotation called @JongoCollection is created so you can pass the name of the backend collection. Note that MongoCollection is produced by the CDI container using our custom producer. Again, no news here for CDI users.

Then there are a lot of methods which represent CRUD operations. Note that they are not implemented; they are only annotated with @Insert, @Find, @Remove, ... to set the purpose of the method we want to execute. Some of them, like finders or removers, can receive a Jongo-like query to be executed. There is also a method called countColors which, as you can see, is implemented as a custom method without relying on the logic inside the invoke method.

And finally the invoke method. This method is called for all abstract methods, and it simply delegates to the PersistenceHandler class, which in fact is a utility class built on Jongo to execute the required operation.
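Putting the points above together, the service layer might look roughly like this sketch; the method names, the exact CRUD annotations and the PersistenceHandler.invoke signature are illustrative reconstructions from the description, not the literal project code:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;

import javax.ejb.Singleton;
import javax.inject.Inject;

import org.jongo.MongoCollection;

@Singleton
public abstract class ColorService implements InvocationHandler {

    // MongoCollection (from Jongo) produced by our custom CDI producer;
    // @JongoCollection sets which backend collection to use.
    @Inject
    @JongoCollection("colors")
    private MongoCollection colorsCollection;

    @Insert
    public abstract void createColor(Color color);

    @Find
    public abstract Iterable<Color> findColorsByName(String name);

    @Remove
    public abstract int removeAllColors();

    // A custom method implemented directly, without relying on invoke().
    public long countColors() {
        return colorsCollection.count();
    }

    // Fallback called for every abstract (unimplemented) method.
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        return PersistenceHandler.invoke(colorsCollection, method, args);
    }
}
```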

And that's all, quite simple. And if you want to add new abstract operations, you only need to implement them inside the PersistenceHandler class.

Some of you may wonder why I use annotations and not the typical Spring Data approach where the name of the method indicates the operation. You could implement that approach as well; it is a simple matter of creating a regular expression inside the PersistenceHandler class instead of an if/else with annotations, but I prefer the annotation approach. It is faster at execution time, clean, and for example you can refactor the annotation name from @Find to @Buscar (the Spanish equivalent) without worrying about breaking some regular expression.

And finally the test:

This is an Arquillian test that has nothing special apart from one line:

.addAsManifestResource(new StringAsset(MONGODB_RESOURCE), "resources.xml")

Because we are using Apache TomEE, we use its way of configuring elements to be used as a javax.annotation.Resource in our code.

The META-INF/resources.xml content will be:
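A sketch of what it might contain; the resource id and connection URI are illustrative values:

```xml
<resources>
  <Resource id="mongoUri" class-name="com.mongodb.MongoClientURI" constructor="uri">
    uri = mongodb://localhost:27017/test
  </Resource>
</resources>
```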

and then we use it in our MongoClient producer to create the MongoClient instance to be used inside the code. Note that we are using @Resource as with any standard resource like a DataSource, but in fact a MongoClientURI is injected:

So in fact the Mongo connection is configured in the META-INF/resources.xml file, and thanks to TomEE we can refer to it as any standard resource.
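A sketch of such a producer; the resource name "mongoUri" is an assumption that must match the id defined in META-INF/resources.xml:

```java
import java.net.UnknownHostException;

import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;

@ApplicationScoped
public class MongoClientProducer {

    // TomEE injects the MongoClientURI defined in META-INF/resources.xml.
    @Resource(name = "mongoUri")
    private MongoClientURI mongoClientURI;

    @Produces
    public MongoClient createMongoClient() throws UnknownHostException {
        return new MongoClient(mongoClientURI);
    }
}
```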

If you are going to use another application server, you can change this approach to the one it provides, or if you want you can use DeltaSpike extensions or your own method. Also, because the MongoClient instance is obtained from a method annotated with @Produces, it can be injected wherever you want in your code, so you can skip the abstract services layer if you want.

What are the benefits of this approach?

First, it is a Java EE solution; you can use it without depending on the Spring framework or any other library. You implement what you need, and you do not download a bunch of libraries simply for accessing MongoDB with some kind of object mapping.

Also, as you can see, the code is quite simple and there is no magic behind it; you can debug it without any problem, or even improve or change it depending on your needs. The code is yours and is waiting to be modified. Do you want to use native MongoDB objects instead of Jongo? No problem, you can implement it. Moreover, there aren't many layers, in fact only one (the PersistenceHandler), so the solution is pretty fast in terms of execution.

Of course this does not mean that you can't use Spring Data MongoDB. It is a really interesting framework, so if you are already using Spring, go ahead with it; but if you plan to use a full Java EE solution, then clone this project and start using MongoDB without having to research on the net how to integrate the two of them.

You can clone the project from https://github.com/lordofthejars/tomee-mongodb-starter-project

We keep learning,
I'm driving down to the barrio, Going 15 miles an hour cause I'm already stoned, Give the guy a twenty and wait in the car, He tosses me a baggie then he runs real far (Mota - The Offspring)

Music: https://www.youtube.com/watch?v=xS5_TO8lyF8
