Wednesday, March 12, 2014

Integration testing with Maven and Docker

Docker is one of the new hot things out there. With a set of technologies and ideas different from traditional virtual machines, it implements something similar yet different, based on the idea of containers: almost all the power of a VM, but much faster and with very interesting additional goodies.

In this article I assume you already know something about Docker and how to interact with it. If that's not the case, these links are a good place to start:

http://www.docker.io/gettingstarted
http://coreos.com/docs/launching-containers/building/getting-started-with-docker/
http://robknight.org.uk/blog/2013/05/drupal-on-docker/

My personal contribution to the topic is to show you a possible workflow that allows you to start and stop Docker containers from within a Maven job.

The reason I investigated this functionality is to help with tests and integration tests in Java projects built with Maven. The problem is well known: your code interacts with external systems and services. Depending on what you are writing, this could mean databases, message brokers, web services and so on.

The usual strategies to test these interactions are:

  • In-memory servers, implemented in Java, that are usually very fast but too often are not the real thing.
  • A layer of stubbed services that you implement to offer the interfaces you need.
  • Real external processes, sometimes remote, to test real interactions.

Those strategies work, but they often require a lot of effort to put in place. And the most complete one, the one that uses proper external services, poses isolation problems: imagine that you are interacting with a database and performing read/write operations just while someone else is accessing the same resources. You may find the correct workflow, involving separate schemas and so on, but again, this is extra work and often not a very straightforward activity.

Wouldn't it be great if we could have the same capabilities these external systems offer, but in total isolation? And what if I also add speed to the offer?

Docker is a tool that offers us this opportunity.

You can start a set of Docker containers with all the services that you need at the beginning of the testing suite, and tear them down at the end of it. Your Maven job can be the only consumer of these services, with all the isolation it needs. And all of this can be easily scripted with the help of Dockerfiles, which are, in the end, not much more than a sequential set of command line invocations.

Let's see how to enable all of this.

The first prerequisite is obviously to have Docker installed on your system. As you may already know, Docker depends on capabilities of the Linux kernel, so you either have to be on Linux or you need the help of a traditional VM to host the Docker server process.

This is the official documentation guide that shows how to install it under different Linux distros:

http://docs.docker.io/en/latest/installation/

And this is a very quick guide that shows how to install it if you are on Mac OS X:

http://blog.javabien.net/2014/03/03/setup-docker-on-osx-the-no-brainer-way/

Once you are ready and you have Docker installed, you need to apply a specific configuration.

Docker, in recent versions, exposes its remote API by default only over Unix sockets. Although we could interact with those with the right code, I find it much easier to interact with the API over HTTP. To obtain this, you have to pass a specific flag to the Docker daemon to tell it to also listen on HTTP.

I am using Fedora, and the configuration file to modify is /usr/lib/systemd/system/docker.service.

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
After=network.target

[Service]
ExecStart=/usr/bin/docker -d -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock
Restart=on-failure

[Install]
WantedBy=multi-user.target

The only modification compared to the defaults is the addition of -H tcp://127.0.0.1:4243.

Now, after reloading the systemd scripts and restarting the service, I have a Docker daemon that exposes a nice REST API I can poke with curl.

sudo systemctl daemon-reload
sudo systemctl restart docker
curl http://127.0.0.1:4243/images/json # returns a json in output
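
The same HTTP endpoint exposes several other read-only resources that are handy for a quick sanity check (assuming the daemon is listening on 127.0.0.1:4243 as configured above):

curl http://127.0.0.1:4243/version          # daemon and API version info
curl http://127.0.0.1:4243/containers/json  # list of running containers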

You probably also want this configuration to survive future Docker rpm updates. To achieve that, copy the file you have just modified to a location that rpm updates do not touch. The systemd way to do this is:

sudo cp /usr/lib/systemd/system/docker.service /etc/systemd/system

See systemd FAQ for more details.

If you are using Ubuntu you have to configure a different file; this page explains the details: http://blog.trifork.com/2013/12/24/docker-from-a-distance-the-remote-api/
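
On Ubuntu the flag is typically passed through the init script's defaults file rather than a systemd unit; a minimal sketch, assuming your package reads /etc/default/docker:

# /etc/default/docker
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock"
# then restart the daemon, e.g. with: sudo service docker restart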

Now we have all we need to interact easily with Docker.

You may at this point expect me to describe how to use the Maven Docker plugin. Unluckily, that's not the case. There is no such plugin yet, or at least I am not aware of one. I am considering writing one, but for the time being I have solved my problem quickly with the GMaven plugin, a little bit of Groovy code and the Java library Rest-assured.

Here is the code to start up Docker containers:

import com.jayway.restassured.RestAssured
import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*
import com.jayway.restassured.path.json.JsonPath
import com.jayway.restassured.response.Response

RestAssured.baseURI = "http://127.0.0.1"
RestAssured.port = 4243

// here you can specify advanced Docker params, but the mandatory one is the name of the image you want to use
def dockerImageConf = '{"Image":"${docker.image}"}'
def dockerImageName = JsonPath.from(dockerImageConf).get("Image")

log.info "Creating new Docker container from image $dockerImageName"
def response =  with().body(dockerImageConf).post("/containers/create")

if( 404 == response.statusCode ) {
    log.info "Docker image not found in local repo. Trying to dowload image '$dockerImageName' from remote repos"
    response = with().parameter("fromImage", dockerImageName).post("/images/create")
    def message = response.asString()
    //odd: rest api always returns 200 and doesn't return proper json. I have to grep
    if( message.contains("HTTP code: 404") ) fail("Image $dockerImageName NOT FOUND remotely. Abort. $message}")
    log.info "Image downloaded"
    
    // retry to create the container
    response = with().body(dockerImageConf).post("/containers/create")
    if( 404 == response.statusCode ) fail("Unable to create container with conf $dockerImageConf: ${response.asString()}")
}

def containerId = response.jsonPath().get("Id")

log.info "Container created with id $containerId"

// set the containerId to be retrieved later during the stop phase
project.properties.setProperty("containerId", "$containerId")

log.info "Starting container $containerId"
with().post("/containers/$containerId/start").asString()

def ip = with().get("/containers/$containerId/json").path("NetworkSettings.IPAddress")

log.info "Container started with ip: $ip" 

System.setProperty("MONGODB_HOSTNAME", "$ip")
System.setProperty("MONGODB_PORT", "27017")

And this is the one to stop them

import com.jayway.restassured.RestAssured
import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*

RestAssured.baseURI = "http://127.0.0.1"
RestAssured.port = 4243

def containerId = project.properties.getProperty('containerId')
log.info "Stopping Docker container $containerId"
with().post("/containers/$containerId/stop")
log.info "Docker container stopped"
if( true == ${docker.remove.container} ){
    with().delete("/containers/$containerId")
    log.info "Docker container deleted"
}

The Rest-assured fluent API should suggest what is happening, and the inline comments should clarify it, but let me add a couple of notes. The code to start a container is my implementation of the functionality of docker run, as described in the official API documentation here:

http://docs.docker.io/en/latest/reference/api/docker_remote_api_v1.9/#inside-docker-run
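
If you want to explore the same sequence outside of Maven first, it can be reproduced with plain curl against the remote API; a rough sketch, where <id> is a placeholder for the Id value returned by the create call:

curl -X POST -H "Content-Type: application/json" \
     -d '{"Image":"pantinor/centos-mongodb"}' \
     http://127.0.0.1:4243/containers/create          # returns {"Id":"..."}
curl -X POST http://127.0.0.1:4243/containers/<id>/start
curl http://127.0.0.1:4243/containers/<id>/json       # inspect it, e.g. NetworkSettings.IPAddress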

The specific problem I had to solve was how to propagate the id of my Docker container from one Maven phase to another. I achieved this thanks to the line:

// set the containerId to be retrieved later during the stop phase
project.properties.setProperty("containerId", "$containerId")

I have also exposed a couple of Maven properties that can be useful to interact with the API:

  • docker.image - The name of the image you want to spin up
  • docker.remove.container - If set to false, tells Maven not to remove the stopped container from the filesystem (useful to inspect your Docker container after the job has finished)

Ex.

    mvn verify -Ddocker.image=pantinor/fuse -Ddocker.remove.container=false

You can find a full working example here. I have been told that sometimes my syntax colorizer script eats keywords or changes the case of words, so if you want to copy and paste, it may be a better idea to grab the code from GitHub.

This is a portion of the output while running the Maven build with the command mvn verify :

...
[INFO] --- gmaven-plugin:1.4:execute (start-docker-images) @ gmaven-docker ---
[INFO] Creating new Docker container from image {"Image":"pantinor/centos-mongodb"}
log4j:WARN No appenders could be found for logger (org.apache.http.impl.conn.BasicClientConnectionManager).
log4j:WARN Please initialize the log4j system properly.
[INFO] Container created with id 5283d970dc16bd7d64ec08744b5ecec09b57d9a81162826e847666b8fb421dbc
[INFO] Starting container 5283d970dc16bd7d64ec08744b5ecec09b57d9a81162826e847666b8fb421dbc
[INFO] Container started with ip: 172.17.0.2

...

[INFO] --- gmaven-plugin:1.4:execute (stop-docker-images) @ gmaven-docker ---
[INFO] Stopping Docker container 5283d970dc16bd7d64ec08744b5ecec09b57d9a81162826e847666b8fb421dbc
[INFO] Docker container stopped
[INFO] Docker container deleted

...

If you have any question or suggestion please feel free to let me know!

Full Maven `pom.xml` available also here:

https://raw.githubusercontent.com/paoloantinori/gmaven_docker/master/pom.xml




<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <artifactId>gmaven-docker</artifactId>
    <groupId>paolo.test</groupId>
    <version>1.0.0-SNAPSHOT</version>
    <name>Sample Maven Docker integration</name>
    <description>See companion blogpost here: http://giallone.blogspot.co.uk/2014/03/integration-testing-with-maven-and.html</description>
    <properties>
        <docker.image>pantinor/centos-mongodb</docker.image>
        <docker.remove.container>true</docker.remove.container>
    </properties>
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.gmaven</groupId>
                <artifactId>gmaven-plugin</artifactId>
                <version>1.4</version>
                <configuration>
                    <providerSelection>2.0</providerSelection>
                </configuration>
                <executions>
                    <execution>
                        <id>start-docker-images</id>
                        <phase>test</phase>
                        <goals>
                            <goal>execute</goal>
                        </goals>
                        <configuration>
                            <source>
                                <!-- inline Groovy "start" script shown earlier in this post -->
                            </source>
                        </configuration>
                    </execution>
                    <execution>
                        <id>stop-docker-images</id>
                        <phase>post-integration-test</phase>
                        <goals>
                            <goal>execute</goal>
                        </goals>
                        <configuration>
                            <source>
                                <!-- inline Groovy "stop" script shown earlier in this post -->
                            </source>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>com.jayway.restassured</groupId>
            <artifactId>rest-assured</artifactId>
            <version>1.8.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>



Friday, January 24, 2014

Monitoring JBoss Fuse ESB with Nagios

Note: this article describes a scenario based on JBoss Fuse, but it's applicable to any Java context able to run Java Servlets, like JBoss EAP, WildFly, Tomcat, etc.

One of my recent activities at work has been to provide guidance on monitoring a JBoss Fuse ESB setup with Nagios/OpsView. Although more specialized solutions for this specific problem do exist (the Fuse plugin for Red Hat JON), Nagios is still one of the most widespread open source monitoring tools.

You don't need to be an expert in Nagios to understand this article; I am definitely not. But if you are and you have any suggestions to improve this solution, please let me know.

Nagios is an open source monitoring tool that, with the help of plugins, is able to collect many metrics from different kinds of services and to notify you when a specific value or a specific pattern (values over time) is identified. It can be used to monitor anything from the operating system status to the more obscure values of your custom deployed application, assuming you specify what is important for you.

In our example, our custom application is deployed on JBoss Fuse ESB.

Most of the metrics that we want to monitor are related to Apache Camel, Apache ActiveMQ and Apache CXF. These projects already do an excellent job of exposing much of the runtime information that we are interested in. For example, Camel tells us how many messages passed through a specific component or what the status of our routes is.

The technology that these projects use to expose all this valuable information is JMX.

Nagios supports JMX with the help of external plugins.

We explored the following list:

check_jmx

We found some problems with this approach:

1) to allow RMI communication, the network layer needs to allow connections to specific ports
2) the plugin supports only attributes and not operations
3) building JMX queries is not particularly user friendly, especially if you are not a Java developer/devops person

Since we needed to invoke some operations as part of our monitoring requirements, we were forced to look for alternatives.

check_http with Jolokia

One of our first alternative ideas was to use Jolokia.

Jolokia is a Java library that exposes JMX interfaces over HTTP, with a JSON-based REST API.

To do its magic over HTTP it just needs an HTTP entry point to be invoked, that is, a Servlet. I leave you with Jolokia's official instructions to install it, but once you have those components in place, you are ready to use its JMX bridging features.

But let me also share a small trick: I haven't manually installed Jolokia. Since we are already using the awesome hawt.io as a management console, and since it leverages Jolokia, everything that we needed was already there.

Let's explore the benefits of a Jolokia-based solution.

Being HTTP based, it clearly helps with network configuration problems:

I still find it somewhat hard to accept, but in my experience with many customers, handling the corporate network configuration is often more complicated than expected. Something that seems simple on an abstract paper diagram stops being so when you don't have details about the network topology and the only thing you can see is that the end-to-end communication doesn't work. For this reason, depending on the popular HTTP protocol is definitely an attractive feature.

The second added benefit is that, unlike check_jmx, it supports JMX operation invocation. This feature turns out to be handy if the metrics you are interested in are not exposed as attributes but only as operations. One example is the operation:

osgi.core:type=bundleState,version=1.5/getState
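
Jolokia exposes operations through its exec endpoint, so an operation like that one can be invoked with a plain HTTP GET; a rough sketch, using the same host, port and credentials as the examples later in this post and an arbitrary bundle id of 74:

curl -u admin:admin 'http://172.17.42.1:8012/jolokia/exec/osgi.core:type=bundleState,version=1.5/getState/74'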

As for the ergonomics of the interface, I personally believe it feels very straightforward.

Requests can be simple. You could end up invoking a very tidy REST endpoint via GET, something similar to this:

curl -u admin:admin http://172.17.42.1:8012/jolokia/read/java.lang:type=Memory/HeapMemoryUsage

But the moment you start to send complex input payloads, you can rely on POST and on external input files containing your JSON payloads. I suggest you check Jakub Korab's helpful post: http://www.jakubkorab.net/2013/11/monitoring-activemq-via-http.html
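
To give an idea of the POST flavour, here is a minimal sketch of the same heap memory read expressed as a Jolokia JSON request (same endpoint and credentials as the GET example above):

curl -u admin:admin -X POST http://172.17.42.1:8012/jolokia/ \
     -d '{"type":"read","mbean":"java.lang:type=Memory","attribute":"HeapMemoryUsage"}'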

To use Jolokia directly from Nagios we can use the common check_http plugin, which mimics curl behaviour just like in the previous example. The only glitch is that check_http doesn't offer a way to process JSON strings, which is the structure that Jolokia returns. You could probably parse the output with regular expressions and simple value checking, but it feels like something is missing. And what is missing here is offered by the next option.
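
For completeness, before moving on, a check_http based probe could look like the following sketch, which simply greps the returned JSON for its status field (host, port and credentials are the ones used in the previous examples):

check_http -H 172.17.42.1 -p 8012 -a admin:admin \
    -u '/jolokia/read/java.lang:type=Memory/HeapMemoryUsage' \
    -r '"status":200'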

check_jmx4perl with Jolokia

jmx4perl is a set of Perl libraries and scripts that allow you to communicate with JMX objects exposed via Jolokia. One of the tools bundled with the project is a Nagios plugin: check_jmx4perl.

Don't be scared by the "perl" keyword. I don't write Perl and I have problems reading it, and still I can use the tool. The project gives you executable scripts that you can invoke from the command line to query JMX services exposed by Jolokia, and it also provides a Nagios-compatible executable.

With this tool you can write queries like this one:

$ check_jmx4perl \
    --user=admin \
    --password=admin \
    --url http://10.21.21.1:8012/jolokia \
    --name "[MyService - CamelContext - WebService]" \
    --mbean "org.apache.camel:context=mycontext/86-MyRoute.Request,name=\"log\",type=components" \
    --attribute "State" \
    --critical Stopped \
    --warning   !Started

OK - MyService - CamelContext - WebService] : 'Started' as expected | 'MyService - CamelContext - WebService]'=Started;!Started;Stopped

And as you can guess from the previous command, the Nagios support is very direct, allowing you to specify the values that you want to identify as representing a Warning status or a Critical one.

If you are familiar with Nagios you know that to use an executable you have to define it in Nagios configuration.

These are some examples of possible command definitions:

### check_jmx4perl supports wildcards! ( you can use an asterisk anywhere in the string names )


# Read JMX attributes without support for nested attributes 
define command {
     command_name         check_jmx4perl_attribute_absolute
     command_line         /usr/local/bin/check_jmx4perl \
                              $ARG1$ \
                              --url $ARG2$ \
                              --mbean $ARG3$ \
                              --attribute $ARG4$ \
                              $ARG5$
  }

# Check Bundle is Active
define command {
     command_name         check_jmx4perl_bundle_is_active
     command_line         /usr/local/bin/check_jmx4perl \
                              $ARG1$ \
                              --url $ARG2$ \
                              --warning \!ACTIVE \
                              --critical \!ACTIVE \
                              --mbean "osgi.core:type=bundleState,version=1.5" \
                              --operation "getState(long)" \
                              $ARG3$
  }

Once you have defined those macros in Nagios, you can define your real monitoring calls that use those commands. Something like:

# Root service definition that presets some values and variables
define service {
    use generic-service
    name jolokia
    register 0
    host_name localhost
    _agenturl http://172.17.42.1:8012/jolokia
    _authentication --user=admin --password=admin
    }

# Sample Bundle is Active
define service {
     service_description    Sample Bundle is Active
     use                    jolokia
     check_command          check_jmx4perl_bundle_is_active\
                            !$_SERVICEAUTHENTICATION$ \
                            !$_SERVICEAGENTURL$ \
                            !74 
    }

How to test this?

Although installing and configuring Nagios is not rocket science, it's not always a straightforward activity. Sometimes you make silly typos or just leave a space in the wrong place and nothing works. Even if you have the feeling that you could fix it given some time, it turns into a time-stealing activity that distracts you from your huge list of other things to do.

Or maybe you are just like me: the fact that you managed to set everything up in a couple of days doesn't mean that you will be able to remember precisely how, if asked in a month.

For all those reasons I have decided to have some fun with Docker.

Docker is a cool new tool that you can use to provide bundled stacks of applications, called containers, which can be preconfigured exactly as you want. I have put together a Docker container that starts a Nagios instance for you and provides all the plugins, scripts and sample configuration that I have discussed in this post.

In case you are not interested in Docker, you can still find the samples in this GitHub repository and read the Dockerfile, which in the end lists all the steps you need to install and configure Nagios with jmx4perl.

https://github.com/paoloantinori/docker_centos_nagios
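
A minimal sketch of how you could spin it up, assuming you build the image yourself from the repository (the image tag and published port below are only illustrative, check the repository README for the exact instructions):

git clone https://github.com/paoloantinori/docker_centos_nagios.git
cd docker_centos_nagios
docker build -t nagios_jmx4perl .
docker run -d -p 80:80 nagios_jmx4perl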

Since I built my knowledge on information already available on the web, this is a small list of the resources that helped me put together this tutorial:

http://www.jakubkorab.net/2013/11/monitoring-activemq-via-http.html
http://search.cpan.org/~roland/jmx4perl-1.07/scripts/check_jmx4perl#Parameterized_checks
http://labs.consol.de/lang/en/blog/jmx4perl/check_jmx4perl-einfache-servicedefinitionen/

In case you have any other interesting approach to the problem please leave a comment.

Sunday, August 11, 2013

Share your Bash history across terminals in Terminator

Like many developers, I spend lots of time working with a command line shell. My operating system of choice is Fedora (currently 18) and I am using the excellent Terminator as a more powerful replacement for the basic default terminal that comes with Gnome 3.

The typical workflow with Terminator is to use its very intuitive keyboard shortcuts to split the current working window, maximize it, start working, resize it when finished and jump back to any other window of interest.

It's a very fast workflow and I cannot see myself going back to anything different.

But this highly productive usage has a drawback:

the command history is not shared across the different windows!

This is a problem, since one of the reasons I spawn a new shell is to do some "temporary" work without moving away from my current position. This usually involves redoing some already executed step, so the lack of history is quite annoying.

I will describe here a way to address this problem.

The first thing to know is that the history is missing because, by default, Bash flushes it to .bash_history only when you terminate a session.

A way to force a more frequent flush is to play with the shell variable PROMPT_COMMAND.

To do so, we can modify one of our Bash configuration files, .bashrc for instance:

#save history after every command
#use 'history -r' to reload history
PROMPT_COMMAND="history -a ; $PROMPT_COMMAND" 

What we are saying here is to invoke history -a with every new command prompt. This flushes the history to file every time a new prompt is displayed. You can verify this if you monitor your .bash_history file.
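
For example, you can watch the file grow from another terminal while you type commands:

tail -f ~/.bash_history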

Have we reached what we were hoping for?

Not yet.

Even if the history is now persisted, you will notice that your running shells do not see it. This is because the history is loaded only at the beginning of a new session.

The only thing left is to manually force a reload of the history with the command

history -r

Done. We now have access to the other shells' history.

The last question:

why don't we add the reload of the history directly in the PROMPT_COMMAND variable?

Because you probably don't want that. Having all your shells always share a global history would break the most obvious behaviour of the shell, which is to show you the previous command you typed in that very shell.
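
That said, if you really do want a fully shared history across all terminals, the commonly suggested variant is to also clear and re-read the in-memory history at every prompt; a sketch you could use in .bashrc instead of the line above:

#save, clear and reload the history after every command
PROMPT_COMMAND="history -a ; history -c ; history -r ; $PROMPT_COMMAND"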

Friday, June 14, 2013

Maven: Start an external process without blocking your build


Let's assume that we have to execute a bunch of acceptance tests with a BDD framework like Cucumber as part of a Maven build.

Using Maven Failsafe Plugin is not complex. But it has an implicit requirement:
The container that hosts the implementation we are about to test needs to be already running.

Many containers like Jetty or JBoss provide their own Maven plugins that allow you to start the server as part of a Maven job. And there is also the good generic Maven Cargo plugin that offers an implementation of the same behaviour for many different containers.

These plugins allow you, for instance, to start the server at the beginning of a Maven job, deploy the implementation that you want to test, fire your tests and stop the server at the end. All the mechanisms that I have described work and they are usually very useful for the various testing approaches.

Unluckily, I cannot apply this solution if my container is not a supported one, unless obviously I decide to write a custom plugin or to add support for my specific container to Maven Cargo.
In my specific case I had to find a way to use Red Hat's JBoss Fuse, a Karaf based container.
I decided to keep it easy, not to write a full featured Maven plugin, and to rely instead on the GMaven plugin, or, as I have recently read on the internet, the "Poor Man's Gradle".

GMaven is basically a plugin that adds Groovy support to your Maven job, allowing you to execute snippets of Groovy as part of it. I like it because it allows me to inline scripts directly in the pom.xml. It also permits you to define your script in a separate file and execute it, but that is exactly the same behaviour you could achieve with plain Java and the Maven Exec Plugin; a solution that I do not like much because it hides the implementation and makes it harder to see what the full build is trying to achieve. Obviously this approach makes sense only if the scripts you are about to write are simple enough to be self-descriptive.
 
I will describe my solution, starting by sharing my trials and errors and references to the various articles and posts I have found.



At first I considered using the Maven Exec Plugin to launch my container directly, something like what was suggested here:

http://stackoverflow.com/questions/3491937/i-want-to-execute-shell-commands-from-mavens-pom-xml

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.1.1</version>
    <executions>
        <execution>
            <id>some-execution</id>
            <phase>compile</phase>
            <goals>
                <goal>exec</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <executable>hostname</executable>
    </configuration>
</plugin>
That plugin invocation, as part of a Maven job, actually allows me to start the container, but it has a huge drawback: the Maven lifecycle stops until the external process terminates or is manually stopped. This is because the external process execution is "synchronous": Maven doesn't consider the command execution finished, so it never goes on with the rest of the build instructions. This is not what I needed, so I looked for something different.
At first I found this suggestion to start a background process so that Maven does not block:

http://mojo.10943.n7.nabble.com/exec-maven-plugin-How-to-start-a-background-process-with-exec-exec-td36097.html

The idea here is to execute a shell script that starts a background process and immediately returns.
 
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.2.1</version>
    <executions>
        <execution>
            <id>start-server</id>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>exec</goal>
            </goals>
            <configuration>
                <executable>src/test/scripts/run.sh</executable>
                <arguments>
                    <argument>${server.home}/bin/server</argument>
                </arguments>
            </configuration>
        </execution>
    </executions>
</plugin>
and the script is

#! /bin/sh
$* > /dev/null 2>&1 &
exit 0

This approach actually works. My Maven build doesn't stop and the next lifecycle steps are executed.

But I have a new problem now: the next steps are executed immediately, and I have no way to delay them until my container is up and running. Browsing a little more, I found this nice article:

http://avianey.blogspot.co.uk/2012/12/maven-it-case-background-process.html

The article, very well written, seems to describe exactly my scenario, and it's even applied to my exact context, starting a flavour of Karaf. It uses a different approach to start the process in the background, the Antrun Maven plugin. I gave it a try and unluckily I ended up in the exact same situation as before: the integration tests are executed immediately, after the request to start the container but before the container is ready.

Convinced that I couldn't find any ready-made solution, I decided to hack the current one with the help of some imperative code. I thought I could insert a "wait script", after the start request but before the integration tests are fired, that checks for a condition assuring me that the container is available.

So, if the container is started during this phase:

pre-integration-test

and my acceptance tests are started during the very next

integration-test

I can insert some logic in pre-integration-test that keeps polling my container and that returns only after the container is "considered" available.


import static com.jayway.restassured.RestAssured.*;
println("Wait for FUSE to be available")
for(int i = 0; i < 30; i++) {
    try{
        def response = with().get("http://localhost:8383/hawtio")
        def status = response.getStatusLine()
        println(status)
        } catch(Exception e){
            Thread.sleep(1000)
            continue
        }finally{
            print(".")
        }
        if( !(status ==~ /.*OK.*/) )
            Thread.sleep(1000)

}

And it is executed by this GMaven plugin configuration:


<plugin>
    <groupId>org.codehaus.gmaven</groupId>
    <artifactId>gmaven-plugin</artifactId>
    <configuration>
        <providerSelection>1.8</providerSelection>
    </configuration>
    <executions>
        <execution>
            <id>########### wait for FUSE to be available ############</id>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>execute</goal>
            </goals>
            <configuration>
                <source><![CDATA[
                    import static com.jayway.restassured.RestAssured.*;
                    ...
                    }
                ]]></source>
            </configuration>
        </execution>
    </executions>
</plugin>

My (ugly) script uses Rest-assured and exception-based logic to check for up to 30 seconds whether a web resource, which I know my container deploys, becomes available.

This check is not as robust as I'd like, since it checks for a specific resource, which is not necessarily a confirmation that the whole deployment process has finished. A better solution would probably be to use some management API able to check the state of the container, but honestly I do not know whether Karaf exposes one, and my simple check was enough for my limited use case.
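
If you prefer to keep the wait logic out of Groovy entirely, the same idea can also be sketched as a small shell script polling the same hawtio URL (just an illustration, with an arbitrary limit of 30 attempts):

#!/bin/sh
# poll the hawtio endpoint until it answers, for at most 30 attempts
for i in $(seq 1 30); do
    curl -sf http://localhost:8383/hawtio > /dev/null && exit 0
    printf "."
    sleep 1
done
echo "container did not become available in time" >&2
exit 1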

With the GMaven invocation in place, my Maven build now behaves as I wanted.
This post showed a way to enrich your Maven build with some programmatic logic without writing a full featured Maven plugin. Since you have full access to the Groovy context, you can perform any kind of task you find helpful. For instance, you could also start background threads that let the Maven lifecycle progress while your logic keeps executing.

My last suggestion is to keep the logic in your scripts simple and not to turn them into long and complex programs. Readability is the reason I decided to use Rest-assured instead of direct access to Apache HttpClient.

This is a sample full pom.xml


    4.0.0
    ${groupId}.${artifactId}
    
        xxxxxxx
        esb
        1.0.0-SNAPSHOT
    
    acceptance
    
        /data/software/RedHat/FUSE/fuse_full/jboss-fuse-6.0.0.redhat-024/bin/
    
    
        
            
                maven-failsafe-plugin
                2.12.2
                
                    
                        
                            integration-test
                            verify
                        
                    
                
            
            
                org.apache.maven.plugins
                maven-surefire-plugin
                
                    
                        **/*Test*.java
                    
                
                
                    
                        integration-test
                        
                            test
                        
                        integration-test
                        
                            
                                none
                            
                            
                                **/RunCucumberTests.java
                            
                        
                    
                
            
            
                maven-antrun-plugin
                1.6
                
                    
                        ############## start-fuse ################
                        pre-integration-test
                        
                            
                                
                    
                            
                        
                        
                            run
                        
                    
                
            
            
                maven-antrun-plugin
                1.6
                
                    
                        ############## stop-fuse ################
                        post-integration-test
                        
                            
                                
                    
                            
                        
                        
                            run
                        
                    
                
            
            
                org.codehaus.gmaven
                gmaven-plugin
                
                    1.8
                
                
                    
                        ########### wait for FUSE to be available ############
                        pre-integration-test
                        
                            execute
                        
                        
                            <![CDATA[
import static com.jayway.restassured.RestAssured.*;
println("Wait for FUSE to be available")
for(int i = 0; i < 30; i++) {
    try{
        def response = with().get("http://localhost:8383/hawtio")
        def status = response.getStatusLine()
        println(status)
        } catch(Exception e){
            Thread.sleep(1000)
            continue
        }finally{
            print(".")
        }
        if( !(status ==~ /.*OK.*/) )
            Thread.sleep(1000)

}
]]>
                        
                    
                
            
            
        
    
    
        
        
            info.cukes
            cucumber-java
            ${cucumber.version}
            test
        
        
            info.cukes
            cucumber-picocontainer
            ${cucumber.version}
            test
        
        
            info.cukes
            cucumber-junit
            ${cucumber.version}
            test
        
        
            junit
            junit
            4.11
            test
        
        
        
            org.apache.httpcomponents
            httpclient
            4.2.5
        
        
            com.jayway.restassured
            rest-assured
            1.8.1
        
    


Sunday, June 2, 2013

Eclipse for small screens on Linux

This post is inspired by a discussion with Sanne, of the Hibernate team, who introduced me to the customization secrets to get back the missing space when you are using Eclipse on Linux on a small screen.


Some of these suggestions apply to other operating systems as well, but I am mainly focused on Linux.

These are my system specs, to give some context:

Fedora 18 with Gnome 3.6
Lenovo ThinkPad X220 12.5-inch
Screen resolution:  1366x768
JBoss Developer Studio 6 (based on Eclipse 4.2.1)


Let's start with a screenshot of my Eclipse (JBoss Developer Studio flavour, in my case):



As you can see there isn't much space left for the code editor.

We can obviously improve the situation by collapsing the various panels, but the feeling is that there is still a lot of wasted space, stolen by the various toolbars:



The first tip to remove some wasted space is to apply some GTK customization. This trick may not be very well known, but considering the number of posts on the internet that report it, like http://blog.valotas.com/2010/02/eclipse-on-linux-make-it-look-good.html , we can hardly call it a secret.

The trick consists of passing Eclipse a specific configuration for the GTK theme it's using. This is done externally to Eclipse, passing the customization in the form of an environment variable.

Create a file with the following content:

style "gtkcompact" { 
 font_name="Liberation 8" 
 GtkButton::defaultborder={0,0,0,0} 
 GtkButton::defaultoutsideborder={0,0,0,0} 
 GtkButtonBox::childminwidth=0 
 GtkButtonBox::childminheigth=0 
 GtkButtonBox::childinternalpadx=0 
 GtkButtonBox::childinternalpady=0 
 GtkMenu::vertical-padding=0 
 GtkMenuBar::internalpadding=0 
 GtkMenuItem::horizontalpadding=2 
 GtkToolbar::internal-padding=0 
 GtkToolbar::space-size=0 
 GtkOptionMenu::indicatorsize=0 
 GtkOptionMenu::indicatorspacing=0 
 GtkPaned::handlesize=4 
 GtkRange::troughborder=0 
 GtkRange::stepperspacing=0 
 GtkScale::valuespacing=0 
 GtkScrolledWindow::scrollbarspacing=0 
 GtkExpander::expandersize=10 
 GtkExpander::expanderspacing=0 
 GtkTreeView::vertical-separator=0 
 GtkTreeView::horizontal-separator=0 
 GtkTreeView::expander-size=8 
 GtkTreeView::fixed-height-mode=TRUE 
 GtkWidget::focuspadding=0 
 xthickness=0 
 ythickness=0
} 


class "GtkWidget" style "gtkcompact"

style "gtkcompactextra" { 
 xthickness=0 ythickness=0 
} 
class "GtkButton" style "gtkcompactextra" 
class "GtkToolbar" style "gtkcompactextra" 
class "GtkPaned" style "gtkcompactextra" 


Start Eclipse assigning the path of that file to the GTK2_RC_FILES environment variable:

GTK2_RC_FILES=/data/software/ext/eclipse_conf/layout.conf  ./jbdevstudio


Or if you are creating a shortcut or an entry in the start menu, use this version:

env GTK2_RC_FILES=/data/software/ext/eclipse_conf/layout.conf  ./jbdevstudio  



With this change in place, we reduce some wasted space, and you will notice the difference starting from the workspace selection screen. Notice the difference in the buttons between the first and the second screenshot:

Without custom GTK style

With custom GTK style
Our modification impacts the whole Eclipse style, as you can see here:



But there is still room for improvement. If you notice, we are dedicating a lot of space to the window title bar, which doesn't add particular value.

How can we reduce it? One way is via a Gnome extension, Maximus, that removes the title bar and uses the Gnome top bar instead.

We can enable Maximus from the Gnome Extensions website https://extensions.gnome.org/extension/354/maximus/:

Note:
Maximus by default applies its behaviour to all applications. This could save space in other apps, but you may prefer finer control. In my case I do not want the feature in Sublime Text 2, since it doesn't integrate well. You can easily configure Maximus with the list of applications you want its service applied to, or the ones you do not, via whitelisting and blacklisting.




With the following result:


Much better!

At this point we can try to reapply our full toolbar and, thanks to all the optimizations, we are able to have it all on a single line. And remember that in Eclipse we obviously also have the option to specify which icons we want to display and which ones we are not interested in.



There is now only one aspect that I'd like to improve, the tab size. I do believe the tabs are stealing a little too much space.

To modify them we have to change the .css files that control that aspect.

The base GTK theme .css file is

./plugins/org.eclipse.platform_4.2.2.v201302041200/css/e4_default_gtk.css


And we have to touch this section:


.MPartStack {
    font-size: 11;
}

Changing the font-size to a smaller value will reduce the wasted space.


In my particular case, since I have applied the JBoss Developer Studio red theme, the file that I have to modify lives in another location:

 ./plugins/org.jboss.tools.central.themes_1.1.0.Final-v20130326-2027-B145.jar 
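
Since in this case the stylesheet lives inside a jar, one way to change it is to extract the css entry, edit it and put it back; a rough sketch (the internal path of the css file is only an example, list the jar content first to find the real one):

cd <jbdevstudio>/plugins
unzip -l org.jboss.tools.central.themes_1.1.0.Final-v20130326-2027-B145.jar   # locate the .css entry
unzip org.jboss.tools.central.themes_1.1.0.Final-v20130326-2027-B145.jar css/jbosstools.css
# edit the .MPartStack font-size in the extracted file, then update the jar in place
zip org.jboss.tools.central.themes_1.1.0.Final-v20130326-2027-B145.jar css/jbosstools.css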



I have changed its value to 8 and obtained this result:





For some related links about the topic refer to:

http://stackoverflow.com/questions/11805784/very-large-tabs-in-eclipse-panes-on-ubuntu
http://wiki.eclipse.org/Eclipse4/CSS

Saturday, May 4, 2013

GateIn/JBoss Portal: InterPortlet + InterPage communication with a Servlet Filter

The Problem

During a recent Java Portlets related project we were faced with a simple requirement that gave us some trouble. The request is simple: we have to share some parameters between portlets defined in different pages of a GateIn based portal.

Apparently this task was harder than expected. In particular, the greatest frustration was the inability to simply inject URL parameters, the easiest mechanism that many web technologies offer to pass non-critical values from one page to another.

When we tried this simple approach:

@Override
public void processAction(ActionRequest request, ActionResponse response)
        throws PortletException, PortletSecurityException, IOException {
    LOGGER.info("Invoked Action Phase");

    response.setRenderParameter("prp", "#######################");
    response.sendRedirect("/sample-portal/classic/getterPage");
}

But when the code was executed, we saw this error in the logs:

14:37:32,455 ERROR [portal:UIPortletLifecycle] (http--127.0.0.1-8080-1) Error processing the action: sendRedirect cannot be called after setPortletMode/setWindowState/setRenderParameter/setRenderParameters has been called previously: java.lang.IllegalStateException: sendRedirect cannot be called after setPortletMode/setWindowState/setRenderParameter/setRenderParameters has been called previously
    at org.gatein.pc.portlet.impl.jsr168.api.StateAwareResponseImpl.checkRedirect(StateAwareResponseImpl.java:120) [pc-portlet-2.4.0.Final.jar:2.4.0.Final]
...

We are used to accepting specification limits, but we are also all used to exceptional requests from our customers. So my task was to try to find a solution to this problem.

A solution : a Servlet Filter + WrappedResponse

I could probably have found some other way to achieve what we wanted, but I had a certain amount of fun playing with the abstraction layers that the Servlet API offers us.

One of the main reasons why we receive that exception is that we cannot trigger a redirect on a response object if the response has already started streaming the answer to the client.

Another typical exception that you could encounter when playing with these aspects is:

java.lang.IllegalStateException: Response already committed 

More generally, I have seen this behaviour in other technologies as well, like when in PHP you try to write a cookie after you have already started sending output to the client.

Given this limitation, we have to find some way to deviate from this behaviour so that we can perform our redirect and still pass our parameters.

One standard and interesting way to "extend" the default behaviour of Servlet based applications is via Filters. We can inject our custom behaviour to modify the normal workflow of any application; we just have to pay attention not to break anything!

Here comes our filter:

public class PortletRedirectFilter implements javax.servlet.Filter {

    private static final Logger LOGGER = Logger
            .getLogger(PortletRedirectFilter.class);

    private FilterConfig filterConfig = null;

    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {

        LOGGER.info("started filtering all urls defined in the filter url mapping ");

        if (request instanceof HttpServletRequest) {
            HttpServletRequest hsreq = (HttpServletRequest) request;

            // search for a GET parameter called as defined in REDIRECT_TO
            // variable
            String destinationUrl = hsreq.getParameter(Constants.REDIRECT_TO);

            if (destinationUrl != null) {
                LOGGER.info("found a redirect request " + destinationUrl);
                // creates the HttpResponseWrapper that will buffer the answer
                // in memory
                DelayedHttpServletResponse delayedResponse = new DelayedHttpServletResponse(
                        (HttpServletResponse) response);
                // forward the call to the subsequent actions that could modify
                // externals or global scope variables
                chain.doFilter(request, delayedResponse);

                // fire the redirection on the original response object
                HttpServletResponse hsres = (HttpServletResponse) response;
                hsres.sendRedirect(destinationUrl);

            } else {
                LOGGER.info("no redirection defined");
                chain.doFilter(request, response);
            }
        } else {
            LOGGER.info("filter invoked outside the portal scope");
            chain.doFilter(request, response);
        }

    }
...

As you can see, the logic inside the filter is not particularly complex. We start by checking for the right kind of request object, since we need to cast it to HttpServletRequest to be able to extract GET parameters from it.

After this cast we look for a specific GET parameter, which we will use in our portlet only to specify the address we want to redirect to. Nothing will happen if the redirect parameter is not set: the filter will implement the typical behaviour of forwarding to any other filters in the chain.

The really interesting behaviour is defined when we identify the presence of the redirect parameter.

If we simply forwarded the original response object, we would receive the error we are trying to avoid. Our solution is to wrap the response object that we forward to the other filters in a response wrapper that buffers the output, so that it is not streamed to the client but stays in memory.

After the other filters complete their job, we can then safely issue a redirect instruction that won't be rejected, since we are firing it on a fresh response object and not on one that has already been used by other components.

We now only need to uncover the implementation of DelayedHttpServletResponse and of its helper class ServletOutputStreamImpl:

public class DelayedHttpServletResponse extends HttpServletResponseWrapper {
    protected HttpServletResponse origResponse = null;
    protected OutputStream temporaryOutputStream = null;
    protected ServletOutputStream bufferedServletStream = null;
    protected PrintWriter writer = null;

    public DelayedHttpServletResponse(HttpServletResponse response) {
        super(response);
        origResponse = response;
    }

    protected ServletOutputStream createOutputStream() throws IOException {
        try {

            temporaryOutputStream = new ByteArrayOutputStream();

            return new ServletOutputStreamImpl(temporaryOutputStream);
        } catch (Exception ex) {
            throw new IOException("Unable to construct servlet output stream: "
                    + ex.getMessage(), ex);
        }
    }

    @Override
    public ServletOutputStream getOutputStream() throws IOException {

        if (bufferedServletStream == null) {
            bufferedServletStream = createOutputStream();
        }
        return bufferedServletStream;
    }

    @Override
    public PrintWriter getWriter() throws IOException {
        if (writer != null) {
            return (writer);
        }

        bufferedServletStream = getOutputStream();

        writer = new PrintWriter(new OutputStreamWriter(bufferedServletStream,
                "UTF-8"));
        return writer;
    }

}

DelayedHttpServletResponse implements the Decorator pattern around HttpServletResponse: it keeps a reference to the original response object it is decorating and instantiates a separate OutputStream to hand out to all the components that want to use the ServletResponse. This OutputStream writes to an in-memory buffer that never reaches the client, but that allows the server to keep processing the call and generating all the server side interaction related to the client session.

ServletOutputStreamImpl is not particularly interesting; it is a basic (and possibly incomplete) implementation of the ServletOutputStream abstract class:

public class ServletOutputStreamImpl extends ServletOutputStream {

    OutputStream _out;
    boolean closed = false;

    public ServletOutputStreamImpl(OutputStream realStream) {
        this._out = realStream;
    }

    @Override
    public void close() throws IOException {
        if (closed) {
            throw new IOException("This output stream has already been closed");
        }
        _out.flush();
        _out.close();

        closed = true;
    }

    @Override
    public void flush() throws IOException {
        if (closed) {
            throw new IOException("Cannot flush a closed output stream");
        }
        _out.flush();
    }

    @Override
    public void write(int b) throws IOException {
        if (closed) {
            throw new IOException("Cannot write to a closed output stream");
        }
        _out.write((byte) b);
    }

    @Override
    public void write(byte b[]) throws IOException {
        write(b, 0, b.length);
    }

    @Override
    public void write(byte b[], int off, int len) throws IOException {
        if (closed) {
            throw new IOException("Cannot write to a closed output stream");
        }
        _out.write(b, off, len);
    }

}

This is all the code that we need to enable the required behaviour. What remains is registering the filter.

We are going to configure GateIn's web descriptor, portlet-redirect/war/src/main/webapp/WEB-INF/web.xml:

<!-- Added to allow redirection of calls after Public Render Parameters have been already setted.-->

<filter>
  <filter-name>RedirectFilter</filter-name>
  <filter-class>paolo.test.portal.servletfilter.PortletRedirectFilter</filter-class>
</filter>  

<filter-mapping>
  <filter-name>RedirectFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Remember to declare it as the first filter-mapping, so that it is executed first and all the subsequent filters receive the wrapped, buffered response object.

And now you can do something like this in your portlet to use the filter:

@Override
protected void doView(RenderRequest request, RenderResponse response)
        throws PortletException, IOException, UnavailableException {
    LOGGER.info("Invoked Display Phase");
    response.setContentType("text/html");
    PrintWriter writer = response.getWriter();

    /**
     * generates a link to this same portlet instance, that will trigger the
     * processAction method that will be responsible of setting the public
     * render paramter
     */
    PortletURL portalURL = response.createActionURL();

    String requiredDestination = "/sample-portal/classic/getterPage";
    String url = addRedirectInfo(portalURL, requiredDestination);


    writer.write(String
            .format("<br/><A href='%s' style='text-decoration:underline;'>REDIRECT to %s and set PublicRenderParameters</A><br/><br/>",
                    url, requiredDestination));
    LOGGER.info("Generated url with redirect parameters");

    writer.close();

}

/**
 * Helper local macro that add UC_REDIRECT_TO GET parameter to the Url of a
 * Link
 * 
 * @param u
 * @param redirectTo
 * @return
 */
private String addRedirectInfo(PortletURL u, String redirectTo) {
    String result = u.toString();
    result += String.format("&%s=%s", Constants.REDIRECT_TO, redirectTo);
    return result;
}

/*
 * sets the public render paramter
 */
@Override
public void processAction(ActionRequest request, ActionResponse response)
        throws PortletException, PortletSecurityException, IOException {
    LOGGER.info("Invoked Action Phase");

    response.setRenderParameter("prp", "#######################");
}

You will see that you are able to set the render parameter during the action phase, and that during the render phase you can specify the parameter that triggers the filter to issue a redirect.
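
If you want to verify the redirect from outside the browser, you can hit a portal page with the redirect parameter and check the response headers; a rough sketch (the page URL is only illustrative, and UC_REDIRECT_TO is the parameter name used by the sample code):

curl -v 'http://localhost:8080/sample-portal/classic/home?UC_REDIRECT_TO=/sample-portal/classic/getterPage'
# expect a 302 response with a Location: /sample-portal/classic/getterPage header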

Files

I have created a repo with a working sample portal that defines a couple of portal pages, some portlets and the filter itself, so that you can verify the behaviour and play with the application.

https://github.com/paoloantinori/gate-in-portlet-portlet-redirect-filter

In the README.md you will find the original instructions from the GateIn project to build and deploy the project on JBoss AS 7. In particular, pay attention to the section of standalone.xml that you are required to uncomment to enable the configuration that the sample portal relies on.

My code additions do not require any extra configuration.

The portal I created is based on the GateIn sample portal quickstart that you can find here:

https://github.com/paoloantinori/gate-in-portlet-portlet-redirect-filter

If you clone the GateIn repo, remember to switch to the 3.5.0.Final tag, so that you will be working with a stable version that matches the full GateIn distribution + JBoss AS 7 that you can download from here:

https://github.com/paoloantinori/gate-in-portlet-portlet-redirect-filter

Thursday, February 14, 2013

Refresh your shell when the filesystem is out of sync

This tip could be so obvious that you, savvy reader, may laugh at me or wonder why I write a blog post about it. But this problem has bothered me for a while.

In particular when dealing with svn.

If you are in a command line shell and you update or check out the remote resources, it can happen that your shell session is not able to see the modifications. It could be that you download a file via svn co, but an ls command doesn't reflect the change by showing the new file.

It's as if the filesystem were out of sync.

In these cases, you may have already discovered yourself that if you change folder and then go back to it, the shell session "updates" its content and shows you the files you were looking for.

Well, this works, but it has always annoyed me to have to change folder to trigger this behaviour.

Until the other day, when I discovered that

cd .

does the trick! Without changing folder, you are now able to refresh your folder view!

I hope this post can help someone else with the same problem, since the last time I tried to look for the solution on the internet, I wasn't able to find the right combination of keywords to spot this tip, which I am sure is out there!

Enjoy!