Wednesday, January 30, 2019

DRY with Kubernetes Operator Framework


Introduction:

An Operator is a method of packaging, deploying and managing a Kubernetes application. To build one, you use the Kubernetes API to generate and deploy the application. You can think of it as a way of extending Kubernetes resources to be custom to your needs. This means we don't have to repeat the same resource configuration every time, but only the things that are different.

An example of an operator might be the deployment of a service in your microservice architecture. For your service, you need to create a deployment file where you specify a lot of parameters, such as the container, the deployment information, environment variables to configure the service, liveness and readiness probes, and so on. Then, when you need to release a new version of this service, you take this deployment file, specify everything again with one small change (the version part of the Docker image is updated), and apply the resource to the Kubernetes cluster. So everything is exactly the same except one field. Why do you need to repeat everything every time you want to release a new version, when the only important thing is the version number?

An Operator lets you fix this. You specify all the common parts once, and then for every new version of the service you only need to create a new resource of your custom kind, with only the uncommon part set; in our example, the version number.

As a big simplification (I repeat, a big simplification), you can think of Operators as a way to create a template with some dynamic values that are set at creation time. The biggest difference from a template is that the common content (the "template") is created programmatically by the operator, so you've got the freedom to change the resources dynamically.

Apart from that, with an Operator, you use the Kubernetes API to decide when and how to deploy each of the resources.

Let's start with a really simple example which might help you understand how powerful Operators are and why you should start using them.

Suppose that I have one simple service which prints a message to the console. This message is set in the command-line section of the container. So the resource file to deploy this service might look like:
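
A minimal sketch of such a resource, assuming a plain Pod running a busybox container, could be:

apiVersion: v1
kind: Pod
metadata:
  name: example-hello-pod
spec:
  containers:
  - name: busybox
    image: busybox
    # The message is hardcoded in the command line:
    command: ["echo", "Hello Alex"]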


As you can see it is fairly simple, but what happens if now you want to deploy a new version of the service which, instead of printing "Hello Alex", prints "Hello Soto"? Well, you just need to create a new file which is exactly the same but changes the command-line part. But instead of doing this, let's create an operator where you only need to specify the message to print, and the release of the service "just happens".

What you need:

To create an operator, for this guide you need:
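
Roughly (exact versions depend on your environment): the operator-sdk CLI, a Go toolchain with the dep dependency manager, Docker, and Minishift (or any other Kubernetes/OpenShift cluster) with the oc/kubectl client.
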
Installing and booting up Minishift:

Minishift installation instructions can be found at https://docs.okd.io/latest/minishift/getting-started/installing.html. After installation, just run the following to prepare the cluster:
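
A typical sequence (adjust it to your environment) might be:

minishift start
eval $(minishift oc-env)
oc login -u system:admin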

Creating the operator:

The first thing to do is prepare the layout for the Operator. Since we are going to create the Operator in Go, you need to create it in your GOPATH directory:
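
Assuming the project lives under github.com/lordofthejars (adapt the path to your own account), the scaffolding commands would look like:

cd $GOPATH/src/github.com/lordofthejars
operator-sdk new hello-operator
cd hello-operator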


Then we need to add a new custom resource definition to this project, which will be responsible for defining what our custom resources look like:
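
With the operator-sdk CLI used here, and matching the api-version and kind that appear later in this post, the command is:

operator-sdk add api --api-version=hello.lordofthejars.com/v1alpha1 --kind=Hello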

We are generating a custom resource definition for a custom type called Hello.

Then we need to define the parameters that you want to set on the custom resource. These are the parts that you want to be different in every deployment; let's say the version number or the message to print.

So let's define the spec and the status object.

Open pkg/apis/hello/v1alpha1/hello_types.go and add the following fields:
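
A minimal sketch, assuming only the message needs to be configurable:

// HelloSpec defines the desired state of Hello
type HelloSpec struct {
	// Message is the text that the created pod will print on startup.
	Message string `json:"message"`
}

// HelloStatus defines the observed state of Hello
type HelloStatus struct {
	// Empty in this simple example.
}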

In the HelloSpec struct you define a field called Message that will contain the message you want to be printed when the container starts.

Then you need to run the following command to update the generated code:

operator-sdk generate k8s


The last part regarding code is to generate a controller, which will be responsible for watching and reconciling our Hello resources. So run the following command:


operator-sdk add controller --api-version=hello.lordofthejars.com/v1alpha1 --kind=Hello

The important file created after running this command is pkg/controller/hello/hello_controller.go; the important bits for our example are described next.

The Reconcile method reads the state of the cluster for our Hello object and makes any changes based on that state and what is in the spec object.

The next important piece is the newPodForCR method, which is a custom method that programmatically generates the resource that we want to deploy. You can think of it as the template, where you define the schema, and you use the Hello kind to fill in the empty spaces. Notice that this method receives the cr variable, which is used to get the values from the custom resource. Let's modify this method to adapt it to our requirements.
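
A sketch of the adapted newPodForCR, where only the container command differs from the scaffolded version (the import aliases are the usual ones from the generated controller):

// Assumed imports:
//   corev1 "k8s.io/api/core/v1"
//   metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
//   hellov1alpha1 "github.com/lordofthejars/hello-operator/pkg/apis/hello/v1alpha1"
func newPodForCR(cr *hellov1alpha1.Hello) *corev1.Pod {
	labels := map[string]string{
		"app": cr.Name,
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      cr.Name + "-pod",
			Namespace: cr.Namespace,
			Labels:    labels,
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:  "busybox",
					Image: "busybox",
					// Print the message configured in the custom resource.
					Command: []string{"echo", cr.Spec.Message},
				},
			},
		},
	}
}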

Installing the Operator:

Then you need to register the custom resource definition in the cluster, build the Operator Docker image and push it to the Docker registry:
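
With the operator-sdk workflow, these steps look roughly like this (registry and tag are the ones used below):

kubectl create -f deploy/crds/hello_v1alpha1_hello_crd.yaml
operator-sdk build lordofthejars/hello-operator:v0.0.1
docker push lordofthejars/hello-operator:v0.0.1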

After that, you need to update the operator's definition to use the created image. Open deploy/operator.yaml and change the REPLACE_IMAGE placeholder to lordofthejars/hello-operator:v0.0.1


And finally, we just need to create all the Operator resources in the cluster:
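
Assuming the standard files scaffolded under the deploy directory, that would be something like:

kubectl create -f deploy/service_account.yaml
kubectl create -f deploy/role.yaml
kubectl create -f deploy/role_binding.yaml
kubectl create -f deploy/operator.yaml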


If you run kubectl get pods now, you'll see the Operator deployed in the cluster (hello-operator-6d5559b65f-5zjg2   1/1       Running   0          25s).

Now that we have everything in place, it is time to see it in action. Create the following file:
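
A sketch of deploy/crds/hello_v1alpha1_hello_cr.yaml, setting only the message field defined before (the name matches the example-hello-pod shown below):

apiVersion: hello.lordofthejars.com/v1alpha1
kind: Hello
metadata:
  name: example-hello
spec:
  message: "Hello Alex"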

This is our resource where we are only specifying the message to be printed.

And finally, run oc/kubectl apply -f deploy/crds/hello_v1alpha1_hello_cr.yaml

Then you can check the log message by running oc/kubectl logs example-hello-pod


To remove the resource, you just need to run, as usual, oc/kubectl delete -f deploy/crds/hello_v1alpha1_hello_cr.yaml


Now just update the hello_v1alpha1_hello_cr.yaml file with another message and apply the resource again. Check the logs and boom, the new message is printed.


So notice that we are not copy-pasting anymore; we just create a file with the configurable parts, and that's all: everything else is managed by the operator.

Conclusions:

This is a really simple example, but you get the idea of how powerful Operators are and how they can simplify the way you deploy applications on Kubernetes.

We keep learning,
Alex

I don't know why you're not fair, I give you my love, but you don't care, So what is right and what is wrong?, Gimme a sign (What is Love - Haddaway)

Music: https://www.youtube.com/watch?v=HEXWRTEbj1I
Follow me at https://twitter.com/alexsotob


Tuesday, January 15, 2019

SerenityBDD for clean Rest API tests


Serenity BDD helps you write cleaner and more maintainable automated acceptance and regression tests faster. As its name suggests, it is a tool for BDD, but in this post I am going to show you that it can be used standalone (no Cucumber or JBehave specs), with just JUnit, for testing Rest APIs.

Serenity BDD also implements the screenplay pattern, which encourages good testing habits and improves the readability of the tests.

Although Serenity BDD was first created with built-in support for web testing with Selenium 2, it currently also supports Rest API testing by using REST-Assured out of the box.

So let's see how to write some tests using Serenity BDD for Rest API services. For this example, we are going to use the ReqRes service, which provides a simple Rest service with fake data, so you don't need to create a demo service anymore; just use one that already exists. We are going to create some integration tests for this service, which we will call the User Service.

At the end of the post, you'll be able to get the source code of the example, but for now, let's see the important bits.
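
A sketch of what such a test class might look like, using the screenplay REST support from serenity-screenplay-rest (the class name, endpoint and assertions here are illustrative, not the exact ones from the repository):

import static net.serenitybdd.screenplay.rest.questions.ResponseConsequence.seeThatResponse;
import static org.hamcrest.Matchers.equalTo;

import net.serenitybdd.junit.runners.SerenityRunner;
import net.serenitybdd.screenplay.Actor;
import net.serenitybdd.screenplay.rest.abilities.CallAnApi;
import net.serenitybdd.screenplay.rest.interactions.Get;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(SerenityRunner.class)
public class UserServiceIT {

    private Actor securityService;

    @Before
    public void prepare() {
        // The base URL of the service under test comes from a system property.
        String baseUrl = System.getProperty("restapi.baseurl", "https://reqres.in");
        securityService = Actor.named("Security Service");
        securityService.whoCan(CallAnApi.at(baseUrl));
    }

    @Test
    public void find_an_individual_user() {
        securityService.attemptsTo(
            Get.resource("/api/users/1")
        );
        securityService.should(
            seeThatResponse("user with id 1 is found",
                response -> response.statusCode(200)
                                    .body("data.id", equalTo(1)))
        );
    }
}
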
In the @Before section, we are just setting the URL of the service under test. In this case, the URL is set by using a system property called restapi.baseurl, but of course there are other ways to do that, like using the serenity.conf or serenity.properties files.
Then you also define the actor, who has the responsibility of performing the actions. In this case, we named it Security Service because in our example we suppose that the Security Service is the consumer and the User Service (implemented by ReqRes) is the provider.

Then we are defining two tests, and notice how it is the actor who is responsible for the actions: attemptsTo (for making the request) and should (for asserting on the response). As you can see, the purpose of every test is now really readable. Check, for example, the find_an_individual_user() test method. You can read it almost as natural language: something like "the security service attempts to find a user with id 1 and it should see that the response is (...)".

And if you are curious, the FindAUser class looks like:
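
A plausible sketch of the task (the id handling is illustrative):

import net.serenitybdd.screenplay.Actor;
import net.serenitybdd.screenplay.Task;
import net.serenitybdd.screenplay.Tasks;
import net.serenitybdd.screenplay.rest.interactions.Get;

public class FindAUser implements Task {

    private final int id;

    private FindAUser(int id) {
        this.id = id;
    }

    public static FindAUser withId(int id) {
        return Tasks.instrumented(FindAUser.class, id);
    }

    @Override
    public <T extends Actor> void performAs(T actor) {
        // The task delegates to the REST interaction that performs the GET request.
        actor.attemptsTo(
            Get.resource("/api/users/{id}")
               .with(request -> request.pathParam("id", id))
        );
    }
}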

You only need to implement the performAs method with the logic to make the request (in this case a GET). The same is done for the other HTTP methods.

So you can see that it is not mandatory to follow a BDD approach to use Serenity BDD; you can use it for integration tests, without having to use any HTTP client or REST-Assured directly, with the big win of creating readable tests.

But there is one more thing to fall in love with in Serenity BDD, and that is the reports it generates. If you start using it for BDD tests you'll see how powerful its reports are, making the live-documentation dream a reality; but even if you are using it just for integration tests, the generated reports are still impressive.




So you can have a quick overview of how your integration tests for a given service are behaving.

Serenity BDD is a really good choice for starting to use BDD correctly: it provides living documentation that reflects the current state of the project, and it integrates really well with Cucumber/JBehave as well as Selenium and REST-Assured. But if you are not into BDD, or only doing something BDD-ish, Serenity BDD is still a solution for plain e2e tests (in the case of monoliths) or integration tests.

Source code: https://github.com/lordofthejars/ReqRes-Serenity (mvn clean verify -Pdemo)

We keep learning,
Alex
Per tu, no sóc un dels teus amants, però creuo l'Himàlaia per tu, i robaré dotze diamants, un per cada lluna plena (Fins que arribi l'alba - Els Catarres)
Music: https://www.youtube.com/watch?v=Z5LVw2abUlw
Follow me at https://twitter.com/alexsotob





Tuesday, January 08, 2019

Auto-numbered Callouts in Asciidoctor


Asciidoctor 1.5.8 comes with a really nice feature called auto-numbered callouts: you do not have to specify the number of each callout, just a generic character (.), and at rendering time Asciidoctor will set the numbers correctly.

The next example shows the numbered and the auto-numbered flavours of callouts:
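
A small illustration of both styles (the snippet content here is illustrative, not the original one):

[source, java]
----
String message = "Hello"; // <1>
System.out.println(message); // <2>
----
<1> Defines the message
<2> Prints the message to the console

[source, java]
----
String message = "Hello"; // <.>
System.out.println(message); // <.>
----
<.> Defines the message
<.> Prints the message to the console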


The rendered output is exactly the same in both cases; the only difference is that in the first listing the callout numbers are static, while in the second one the autonumber feature is used.

The autonumbering feature is really useful when you've got big blocks of code where you might need to insert a new callout between already defined ones, which shifts all the callouts that follow.
With the static approach you need to go to every callout and increase its number manually; with autonumbering you only need to add the new callout, and you are done.

We keep learning,
Alex.
My body is burning, it starts to shout, Desire is coming, it breaks out loud (Rock You Like Hurricane - Scorpions)

Monday, November 19, 2018

Continuous Documentation with Antora and Travis

Antora is a documentation pipeline that enables docs, product, and engineering teams to create, manage, remix, and publish documentation sites composed in AsciiDoc and sourced from multiple versioned content repositories.

You can see several examples out there, from the Couchbase documentation to the Fedora documentation. And of course, Antora is used to generate the Antora documentation itself. You can see it here.

So basically we have our project with documents in adoc format. Then what we want is to regenerate the documentation every time a PR is merged into master.

In our project, we are using Travis-CI as our CI server, so I am going to show you how we have done it.

First of all, you need to create a .travis.yml file on the root of your project.
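
A sketch of such a file; the docs folder layout, the playbook name (site.yml), and the GH_USER/GH_EMAIL/GH_TOKEN environment variables are assumptions for illustration:

sudo: required

services:
  - docker

git:
  depth: false

before_install:
  - |
    # Regenerate docs if the commit message mentions "doc", or if
    # documentation sources changed and the branch is master.
    if [[ "$TRAVIS_COMMIT_MESSAGE" == *doc* ]] ||
       { [[ "$TRAVIS_BRANCH" == "master" ]] &&
         git diff --name-only $TRAVIS_COMMIT_RANGE | grep -qE '(^documentation/.*\.adoc$|^README\.adoc$)'; }; then
      export REGENERATE_DOCS=true
      git config --global user.name "$GH_USER"
      git config --global user.email "$GH_EMAIL"
      git remote set-url origin "https://$GH_USER:$GH_TOKEN@github.com/$TRAVIS_REPO_SLUG.git"
      # Check out the gh-pages branch into a gh-pages directory.
      git worktree add gh-pages gh-pages
    fi

script:
  - |
    if [[ "$REGENERATE_DOCS" == "true" ]]; then
      # Render the site with the Antora Docker image; the playbook is
      # assumed to write its output into the gh-pages directory.
      docker run -u $(id -u) -v $PWD:/antora --rm antora/antora site.yml
    fi

after_success:
  - |
    if [[ "$REGENERATE_DOCS" == "true" ]]; then
      cd gh-pages
      touch .nojekyll
      git add .
      git commit -m "Publish documentation"
      git push origin gh-pages
      cd ..
    fi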


First, we define what we want to use: in this case, docker and git.

Then, in the before_install section, we detect whether we need to regenerate the documentation or not.

Basically, we are going to regenerate the documentation under two conditions:

  1. If the commit message contains the word doc, then the docs should be regenerated.
  2. If you have modified an adoc file in the documentation folder (or README.adoc) and the branch is master, then the docs should be regenerated.
If either condition is met, we configure the git client with the user, email and token to be used for pushing the generated documentation. Notice that this information comes from environment variables defined in the Travis console. Also, it is important to note that the documentation should be generated on the gh-pages branch (since we are releasing to GitHub Pages). For this reason, we use git worktree, which checks out the gh-pages branch into the gh-pages directory.

Then, in the script section, we just use the Antora Docker image to render the documentation.

Finally, we just need to enter the gh-pages directory, create a .nojekyll file so that GitHub Pages does not treat the site as a Jekyll site, and push the changes.

And then, for every merged PR, the documentation is automatically regenerated and published.

Important: This script is based on one previously written by Bartosz Majsak (@majson) for Asciidoctor. My only task was adapting it to use Antora.

We keep learning,
Alex.

Y no me importa nada nada (nada), Que rías o que sueñes, que digas o que hagas, Y no me importa nada, Por mucho que me empeñe, estoy jugando y no me importa nada (No me importa nada - Luz Casal)








Wednesday, October 03, 2018

Arquillian Chameleon Cheat Sheet


Arquillian Chameleon simplifies how we can write container tests in Arquillian. It has been out there for some time, but in this post I share with you a refcard, so you can print it and get a quick overview of its functionality.




Special thanks to https://twitter.com/Mogztter for making it possible with his contributions to asciidoctor.js.

We keep learning,
Alex
You don't have to believe no more, Only got four hours, To learn your manners, Never felt so close to you before (King George - Dover)
Music: https://www.youtube.com/watch?v=wbM9RtOGdKE
Follow me at https://twitter.com/alexsotob

Monday, August 13, 2018

Java Iterator to Java 8 Stream


Sometimes during my work, I need to integrate with other libraries which return an Iterator object instead of a List. This is fine from the library's point of view, but it might be a problem when you want to use Java 8 Streams on the returned Iterator. There is a simple way to transform an Iterator to an Iterable and then to a Stream.

Since I always have to look up how to do it, I decided to share the snippet here.
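
A minimal, self-contained sketch of the trick:

import java.util.Arrays;
import java.util.Iterator;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class IteratorToStream {

    public static void main(String[] args) {
        // Imagine this Iterator comes from a third-party library.
        Iterator<String> iterator = Arrays.asList("a", "b", "c").iterator();

        // Iterable is a functional interface with a single iterator() method,
        // so a lambda is enough to adapt the Iterator to an Iterable.
        Iterable<String> iterable = () -> iterator;

        // StreamSupport converts the Iterable's Spliterator into a Stream.
        Stream<String> stream = StreamSupport.stream(iterable.spliterator(), false);

        stream.map(String::toUpperCase)
              .forEach(System.out::println);
    }
}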


In the example, first of all, we have an Iterator. Since an Iterator cannot be used as a stream but an Iterable can be, we just create a new Iterable which overrides its iterator() method to return the Iterator we want to stream.

Then we have an Iterable, which is not streamable yet. So what we need to do is use the StreamSupport class to convert the Iterable to a Stream.

And that's all; then you can use all the stream operations without any problem.

We keep learning,
Alex.
Prefereixo que em passis la birra que em tiris la canya, Perdona'm si em ric però es que em fas molta gràcia, Lligar no es lo teu, Em sap molt de greu (Lligar no és lo teu - Suu)
Music: https://www.youtube.com/watch?v=fWNqMjAVNto
Follow me at https://twitter.com/alexsotob

Thursday, June 07, 2018

Spring Boot + Cockroach DB in Kubernetes/OpenShift


In my previous post, I showed why CockroachDB might help you if you need a cloud native SQL database for your application. I explained how to install it in Kubernetes/OpenShift and how to validate that the data is replicated correctly.

In this post, I am going to show you how to use CockroachDB in a Spring Boot application. Notice that CockroachDB is compatible with the PostgreSQL driver, so in terms of configuration it is almost the same.

In this post, I assume that you already have a CockroachDB cluster running in a Kubernetes cluster, as explained in my previous post.

For this example, I am using the Fabric8 Maven Plugin to smoothly deploy a Spring Boot application to Kubernetes without having to worry so much about creating resources, writing a Dockerfile, and so on. Everything is automatically created and managed.

For this reason, pom.xml looks like:
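
The relevant bits of such a pom.xml might be (the plugin version is omitted on purpose):

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
  </dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>io.fabric8</groupId>
      <artifactId>fabric8-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>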


Notice that apart from defining the Fabric8 Maven Plugin, I am also declaring Spring Data JPA, to make the integration between Spring Boot and JPA easier from the developer's point of view.

Then you need to create a JPA entity and a Spring Data CRUD repository to interact with JPA.
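
For example, a hypothetical Customer entity and its repository could look like:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    // getters and setters omitted for brevity
}

// In its own file:
import org.springframework.data.repository.CrudRepository;

public interface CustomerRepository extends CrudRepository<Customer, Long> {
}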

Also, we need to create a controller which is responsible for receiving incoming requests, using the repository to query the DB, and returning the results to the caller.
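
And a minimal controller wired to that repository (the path is illustrative):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {

    private final CustomerRepository repository;

    public CustomerController(CustomerRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/customers")
    public Iterable<Customer> findAll() {
        // Delegates the query to the Spring Data repository.
        return repository.findAll();
    }
}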

Finally, you need to configure JPA to use the desired driver and dialect. In the case of Spring Boot, this is done in the application.properties file.
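
Assuming the cockroachdb-public service from the previous post, the configuration might look like:

spring.datasource.url=jdbc:postgresql://cockroachdb-public:26257/customers
spring.datasource.username=myuser
spring.datasource.password=
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQL94Dialect
spring.jpa.hibernate.ddl-auto=update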


The most important part here is that we need to use the PostgreSQL94 dialect. Notice that in the url we are using the postgresql JDBC URL form. That's fine, since CockroachDB uses the Postgres driver.

Now we need to create the database (customers) and the user (myuser), as configured in application.properties. To do so, you just need to open a cockroach shell and run some SQL commands:
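
For example (how you open the shell depends on how the cluster was deployed; the kubectl run line follows the CockroachDB Kubernetes tutorial):

kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
  -- sql --insecure --host=cockroachdb-public

CREATE DATABASE customers;
CREATE USER myuser;
GRANT ALL ON DATABASE customers TO myuser;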


Finally, you can deploy the application by running mvn clean fabric8:deploy. After that (the first time it might take longer since it needs to pull Docker images), you can start sending queries to the service.

As you can see, it is really easy to start using a cloud-native DB like CockroachDB in Spring Boot. If you want, you can do exactly the same as in my previous post and start running queries against each of the nodes to validate that the data is replicated correctly.

Code: https://github.com/lordofthejars/springboot-cockroach

We keep learning,
Alex.
Dôme épais, le jasmin, à la rose s'assemble, rive en fleurs, frais matin, nous appellent ensemble. (Flower Duet - Lakmé - Leo Delibes)
Music: https://www.youtube.com/watch?v=Vf42IP__ipw
Follow me at https://twitter.com/alexsotob