Monday, October 5, 2015

JBoss Fuse - Turn your static config into dynamic templates with MVEL

Recently I rediscovered a JBoss Fuse functionality that I had forgotten about, and I thought other people out there might benefit from this reminder.

This post is focused on JBoss Fuse and Fabric8, but it might also interest all those developers who are looking for minimally invasive ways to add some degree of dynamic behavior to their static configuration files.

The idea of dynamic configuration in OSGi and in Fabric8

The OSGi framework is most often remembered for its class-loading behavior. But apart from that, it also defines other concepts and functionality that an implementation has to provide.
One of them is ConfigAdmin.

ConfigAdmin is a service to define an externalized set of properties files that are logically bound to your deployment units.

The lifecycle of these external properties files is linked to the OSGi bundle lifecycle: if you modify an external properties file, your bundle will be notified.
Depending on how you coded your bundle, you can decide to react to the notification and, programmatically or via different helper frameworks like Blueprint, invoke code that uses the new configuration; a small Blueprint sketch follows.
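For instance, with Aries Blueprint a bundle can ask for its placeholders to be refreshed when the backing ConfigAdmin PID changes; a minimal sketch (the PID, property and bean class are illustrative) might look like this:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0">

  <!-- reload this bundle's placeholders whenever the com.example.demo PID changes -->
  <cm:property-placeholder persistent-id="com.example.demo" update-strategy="reload">
    <cm:default-properties>
      <cm:property name="greeting" value="hello"/>
    </cm:default-properties>
  </cm:property-placeholder>

  <bean id="greeter" class="com.example.Greeter">
    <property name="message" value="${greeting}"/>
  </bean>

</blueprint>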

This mechanism is handy and powerful, and all developers using OSGi are familiar with it.

Fabric8 builds on the idea of ConfigAdmin, and extends it.

With its provisioning capabilities, Fabric8 defines the concept of a Profile that encapsulates deployment units and configuration. It adds a layer of functionality on top of plain OSGi and allows you to manage any kind of deployment unit, not only OSGi bundles, as well as any kind of configuration or static file.

If you check the official documentation you will find the list of “extensions” that the Fabric8 layer offers, and you will learn that they are divided mainly into 2 groups: URL Handlers and Property Resolvers.

I suggest that everyone interested in this technology dig through the documentation; but to offer a brief summary and a short example, imagine that your Fabric profiles have the capability to resolve some values at runtime using specific placeholders.


# sample url handler usage, ResourceName is a filename relative to the namespace of the containing Profile:
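# (illustrative value, mirroring the jetty.xml example later in this post)
org.ops4j.pax.web.config.url = profile:jetty.xml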

# sample property handler, the value is read at deploy time, from the Apache Zookeeper distributed registry that is published when you run JBoss Fuse
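# (illustrative value; the zk: path depends on your registry layout, here the IP of the container named "root")
broker.host = ${zk:root/ip}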

There are multiple handlers available out of the box, covering what the developers thought were the most common use cases: Zookeeper, Profiles, Blueprint, Spring, System Properties, Managed Ports, etc.

And you might also extend the mechanism by defining your own extension: for example, if you want to react to performance metrics you are storing on some system, you can write an extension, with its own syntax convention, that injects values taken from that system.

The limit of all this power: static configuration files

The capabilities I have introduced above are exciting and powerful but they have an implicit limit: they are available only to .properties files or to files that Fabric is aware of.

This means that this functionality is available if you have to manage Fabric Profiles, OSGi properties or other specific technologies that interact with them, like Camel, but it is not enabled for anything that is Fabric-unaware.

Imagine you have your custom code that reads an .xml configuration file. And imagine that your code doesn’t reference any Fabric object or service.
Your code will process that .xml file as-is. There won’t be any magic replacement of tokens or paths because, despite running inside Fabric, you are NOT using any directly supported technology and you are NOT notifying Fabric that you might want its services.

To solve this problem you have 3 options:

  1. You write an extension to Fabric that handles and recognises your static resources and delegates the dynamic replacement to the framework code.
  2. You alter the code contained in your deployment unit and, instead of consuming the static resources directly, you ask the Fabric services to interpolate them for you.
  3. You use the mvel: url handler (and avoid touching any other code!)

What is MVEL ?

MVEL is actually a programming language. In particular, it's also a scripting language that you can run directly from source, skipping the compilation step.
It has several characteristics that make it interesting to embed within another application and use to define new behaviors at runtime. For all these reasons it's also, for example, one of the supported languages for the JBoss Drools project, which works with Business Rules that you might want to define or modify at runtime.

Why can it be useful to us? Mainly for 2 reasons:

  1. it works well as a templating language
  2. Fabric8 already has a mvel: url handler that, implicitly, also acts as a resource handler!

Templating language

Templating languages are that family of languages (often Domain Specific Languages) where you can alternate static portions of text, read as-is, with dynamic instructions that will be processed at parsing time. I'm probably restating, in a more complicated way, the same idea I introduced above: you can have tokens in your text that will be translated following a specific convention.

This sounds exactly like the capabilities provided by the handlers we introduced above, with an important difference: while those were context-specific handlers, MVEL is a general purpose technology. So don't expect it to know anything about Zookeeper or Fabric profiles, but do expect it to support generic programming language concepts like loops, code invocation, reflection and so on.
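To give an idea of the flavor, a tiny MVEL template mixing static text, an expression and a loop could look roughly like this (the user and items variables are hypothetical context entries):

Hello @{user.name}, your cart contains:
@foreach{item : items}
 - @{item.name} costs @{item.price * 2}
@end{}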

Fabric supports it!

A reference to the support in Fabric can be found here:

But let me add a snippet of the original code that implements the functionality, since this is the part where you might find this approach interesting even outside the context of JBoss Fuse:

public InputStream getInputStream() throws IOException {
  String path = url.getPath();
  URL url = new URL(path);
  CompiledTemplate compiledTemplate = TemplateCompiler.compileTemplate(url.openStream());
  Map<String, Object> data = new HashMap<String, Object>();
  Profile overlayProfile = fabricService.get().getCurrentContainer().getOverlayProfile();
  data.put("profile", Profiles.getEffectiveProfile(fabricService.get(), overlayProfile));
  data.put("runtime", runtimeProperties.get());
  String content = TemplateRuntime.execute(compiledTemplate, data).toString();
  return new ByteArrayInputStream(content.getBytes());
}

What’s happening here?

First, since it's not shown in the snippet, remember that this is a url handler. This means that the behavior gets triggered for files that are referred to via a specific uri; in this case it's mvel:. For example a valid path might be mvel:jetty.xml.

The other interesting and relatively simple thing to notice is the interaction with MVEL interpreter.
Like with most templating technologies, even the simplest ones you could implement yourself, you usually have:

  • an engine/compiler, here it's TemplateCompiler
  • a variable that contains your template, here it’s url
  • a variable that represent your context, that is the set of variables you want to expose to the engine, here data

Put them all together, asking the engine to do its job, here with TemplateRuntime.execute(...), and what you get as output is a static String. The templating instructions are gone: all the logic your template was defining has been applied and, where needed, augmented with the additional input values taken from the context.

An example

I hope my explanation has been simple enough, but probably an example is the best way to express the concept.

Let's use jetty.xml, contained in the JBoss Fuse default.profile: it's a static resource that JBoss Fuse doesn't treat as a special file, so it doesn't offer any replacement functionality for it.

I will show both aspects of the MVEL integration here: reading a value from the context variables and showing how programmatic logic (just the sum of 2 integers here) can be used:

<Property name="jetty.port" default="@{  Integer.valueOf( profile.configurations['org.ops4j.pax.web']['org.osgi.service.http.port'] ) + 10  }"/>

We are modifying the default value for the Jetty port, taking its initial value from the "profile" context variable, which is a Fabric-aware object that has access to the rest of the configuration:

profile.configurations['org.ops4j.pax.web']['org.osgi.service.http.port']

we explicitly cast it from String to Integer:

Integer.valueOf( ... )

and we add a static value of 10 to the returned value:

.. + 10

Let's save the file and stop our Fuse instance. Restart it and re-create a test Fabric:

# in Fuse CLI shell
shutdown -f

# in bash shell
rm -rf data instances


# in Fuse CLI shell
fabric:create --wait-for-provisioning

Just wait and monitor logs and… Uh-oh. An error! What’s happening?

This is the error:

2015-10-05 12:00:10,005 | ERROR | pool-7-thread-1  | Activator                        | 102 - org.ops4j.pax.web.pax-web-runtime - 3.2.5 | Unable to start pax web server: Exception while starting Jetty
java.lang.RuntimeException: Exception while starting Jetty
at org.ops4j.pax.web.service.jetty.internal.JettyServerImpl.start([103:org.ops4j.pax.web.pax-web-jetty:3.2.5]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[:1.7.0_76]
at sun.reflect.NativeConstructorAccessorImpl.newInstance([:1.7.0_76]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance([:1.7.0_76]
at java.lang.reflect.Constructor.newInstance([:1.7.0_76]
at org.eclipse.jetty.xml.XmlConfiguration$JettyXmlConfiguration.set([96:org.eclipse.jetty.aggregate.jetty-all-server:8.1.17.v20150415]
at org.eclipse.jetty.xml.XmlConfiguration$JettyXmlConfiguration.configure([96:org.eclipse.jetty.aggregate.jetty-all-server:8.1.17.v20150415]
Caused by: java.lang.NumberFormatException: For input string: “@{profile.configurations[’org.ops4j.pax.web'][‘org.osgi.service.http.port’] + 1}”
at java.lang.NumberFormatException.forInputString([:1.7.0_76]
at java.lang.Integer.parseInt([:1.7.0_76]
at java.lang.Integer.<init>([:1.7.0_76]
… 29 more

If you notice, the error message says that our template snippet cannot be converted to a Number.

Why is our template snippet displayed in the first place? The templating engine should have done its part of the job and given us back a static String without any reference to templating directives!

I have shown you this error on purpose, to insist on a concept I described above but that might go unnoticed at first.

MVEL support in Fabric is implemented as a url handler.

So far we have just modified the content of a static resource file, but we haven’t given any hint to Fabric that we’d like to handle that file as a mvel template.

How to do that?

It’s just a matter of using the correct uri to refer to that same file.

So, modify the file in default.profile/ where the default Fabric Profile defines which static file contains the Jetty configuration:

# change it from org.ops4j.pax.web.config.url=profile:jetty.xml to
org.ops4j.pax.web.config.url=mvel:profile:jetty.xml

Now, stop the instance again, remove the Fabric configuration files, recreate a Fabric and notice how your Jetty instance is running correctly.

We can check it in this way:

JBossFuse:karaf@root> config:list | grep org.osgi.service.http.port
   org.osgi.service.http.port = 8181

From your browser you can also verify that Hawtio, the JBoss Fuse web console deployed on top of Jetty, is now accessible on port 8191: http://localhost:8191/hawtio

Tuesday, February 17, 2015

JBoss Fuse - Some lesser-known tricks


  1. expose java static calls as Karaf shell native commands
  2. override OSGi Headers at deploy time
  3. override OSGi Headers after deploy time with OSGi Fragments

Expose java static calls as Karaf shell native commands

As part of my job as a software engineer who has to collaborate with support guys and customers, I very often find myself needing to extract additional information from a system I don't have access to.
The usual approaches, valid for all kinds of software, are extracting logs, invoking interactive commands to obtain specific outputs or, in the most complex case, deploying some PoC unit that is supposed to verify a specific behavior.

JBoss Fuse, and Karaf, the platform it's based on, already do a great job in exposing all that data.

You have:

  • extensive logs and integration with Log4j
  • an extensive list of JMX operations (which you can even invoke over HTTP with Jolokia)
  • a large list of shell commands

But sometimes this is not enough. If you have seen my previous post about how to use Byteman on JBoss Fuse, you can imagine all the other cases:

  1. you need to print values that are not logged or returned in the code
  2. you might need to short-circuit some logic to hit a specific execution branch of your code
  3. you want to inject a line of code that wasn't there at all

Byteman is still a very good option too, but Karaf has a facility we can use to run custom code.

Karaf allows you to write code directly in its shell and to record these bits of code as macros you can re-invoke. These macros will look like native Karaf shell commands!

Let's see a real example I had to implement:

verify if the jvm running my JBoss Fuse instance was resolving a specific DNS as expected.

The standard JDK has a method you can invoke to resolve a dns name:
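That method is java.net.InetAddress.getAllByName, which resolves a hostname to all of its IP addresses; for example (the hostname is just a placeholder):

java.net.InetAddress.getAllByName("")  // returns an InetAddress[] with every resolved address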

Since that command is simple enough, meaning it doesn't require a complex or structured input, I thought I could turn it into an easy to reuse command:

# add all public static methods on a java class as commands to the namespace "my_context":
# bundle 0 is because system libs are served by that bundle classloader
addcommand my_context (($.context bundle 0) loadClass java.net.InetAddress)

That funky line is explained in this way:

  • addcommand is the karaf shell functionality that accepts new commands
  • my_context is the namespace/prefix you will attach your command to. In my case, "dns" would have made a good namespace.
  • ($.context bundle 0) invokes java code. In particular we are invoking the $.context instance, a built-in instance exposed by the Karaf shell to expose the OSGi framework, whose type is org.apache.felix.framework.BundleContextImpl, and we are invoking its method called bundle, passing it the argument 0, the id of the OSGi classloader responsible for loading the JDK classes. That call returns an instance of org.apache.felix.framework.Felix that we can use to load the specific class definition we need, which is java.net.InetAddress.

As the inline comment says, an invocation of addcommand exposes all the public static methods on that class. So we are now allowed to invoke those methods and, in particular, the one that can resolve dns entries:

JBossFuse:karaf@root> my_context:getAllByName ""

This functionality is described on the Karaf documentation page.

Override OSGi Headers at deploy time

If you work with Karaf, you are working with OSGi, love it or hate it.
A typical step in each OSGi workflow is playing (or fighting) with OSGi headers.
If you are in total control of your project, this might be more or less easy, depending on the relationships between your deployment units. See Christian Posta's post for a glimpse of some less than obvious examples.

Under those conditions, a very typical situation is when you have to use a bundle, yours or someone else's, whose headers are not correct.
What you very often end up doing is re-packaging that bundle, so that you can alter the content of its MANIFEST and add the OSGi headers that you need.

Karaf has a facility in this regard, called the wrap protocol.
You might already know it as a shortcut way to deploy a non-bundle jar on Karaf, but it's actually more than just that.
What it really does, as the name suggests, is to wrap. But it can wrap both non-bundles and bundles!
Meaning that we can also use it to alter the metadata of an already packaged bundle we are about to install.

Let's give an example, again taken from a real life experience.
Apache HttpClient is not totally OSGi friendly. We can install it on Karaf with the wrap: protocol and export all its packages.

JBossFuse:karaf@root> install -s 'mvn:org.apache.httpcomponents/httpclient/4.2.5'
Bundle ID: 257
JBossFuse:karaf@root> exports | grep -i 257
   257 No active exported packages. This command only works on started bundles, use osgi:headers instead
JBossFuse:karaf@root> install -s 'wrap:mvn:org.apache.httpcomponents/httpclient/4.2.5$Export-Package=*; version=4.2.5'
Bundle ID: 259
JBossFuse:karaf@root> exports | grep -i 259
   259 org.apache.http.client.entity; version=4.2.5
   259 org.apache.http.conn.scheme; version=4.2.5
   259 org.apache.http.conn.params; version=4.2.5
   259 org.apache.http.cookie.params; version=4.2.5

And we can see that it works with plain bundles too:

JBossFuse:karaf@root> la -l | grep -i camel-core
[ 142] [Active     ] [            ] [       ] [   50] mvn:org.apache.camel/camel-core/2.12.0.redhat-610379
JBossFuse:karaf@root> install -s 'wrap:mvn:org.apache.camel/camel-core/2.12.0.redhat-610379\
$overwrite=merge&Bundle-SymbolicName=paolo-s-hack&Export-Package=*; version=1.0.1'
Bundle ID: 269

JBossFuse:karaf@root> headers 269

camel-core (269)

Bundle-Vendor = Red Hat, Inc.
Bundle-Activator = org.apache.camel.impl.osgi.Activator
Bundle-Name = camel-core
Bundle-DocURL =
Bundle-Description = The Core Camel Java DSL based router

Bundle-SymbolicName = paolo-s-hack

Bundle-Version = 2.12.0.redhat-610379
Bundle-License =
Bundle-ManifestVersion = 2


Export-Package = 


Where you can see that Bundle-SymbolicName and the version of the exported packages carry the values I set.

Again, the functionality is described in the Karaf docs and you might find the wrap protocol reference useful.

Override OSGi Headers after deploy time with OSGi Fragments

The last trick is powerful, but it probably requires you to remove the original bundle if you don't want to risk having half of the classes exposed by one classloader and the remaining ones (those packages you might have added in the overridden Export) in another one.

There is actually a better way to override OSGi headers, and it comes directly from an OSGi standard functionality: OSGi Fragments.

If you are not familiar with the concept, the definition taken directly from the OSGi wiki is:

A Bundle fragment, or simply a fragment, is a bundle whose contents are made available to another bundle (the fragment host). Importantly, fragments share the classloader of their parent bundle.

That page gives also a further hint about what I will describe:

Sometimes, fragments are used to 'patch' existing bundles.

We can use this strategy to:

  • inject .jars in the classpath of our target bundle
  • alter headers of our target bundle

I have used the first case to fix a badly configured bundle that was looking for an xml configuration descriptor it didn't include, which I provided by deploying a light Fragment Bundle containing just that file.

But the use case I want to show you here instead, is an improvement regarding the way to deploy Byteman on JBoss Fuse/Karaf.

If you remember my previous post, since the Byteman classes needed to be available to every other deployed bundle and potentially needed access to every class available, we had to add the Byteman packages to the org.osgi.framework.bootdelegation property, which instructs the OSGi Framework to expose the listed packages through the virtual system bundle (id = 0).

You can verify what it is currently serving with headers 0; I won't include the output here since it's a long list of JDK extension and framework packages.

If you add your packages, org.jboss.byteman.rule,org.jboss.byteman.rule.exception in my case, even these packages will be listed in the output of that command.

The problem with this solution is that this is a boot time property. If you want to use Byteman to manipulate the bytecode of an already running instance, you have to restart it after you have edited this property.

OSGi Fragments can help here, and avoid a preconfiguration at boot time.

We can build a custom, empty bundle, with no real content, that attaches to the system bundle and extends the list of packages it serves, by declaring the following Fragment-Host header:

    system.bundle; extension:=framework

That's an excerpt of the maven-bundle-plugin configuration; see here for the full working Maven project, although the project is really just 30 lines of pom.xml:
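A sketch of what those instructions could look like in the pom (only the relevant plugin section is shown; the Export-Package list is the one we need for Byteman):

<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- attach this otherwise empty bundle to the system bundle as a framework extension -->
      <Fragment-Host>system.bundle; extension:=framework</Fragment-Host>
      <!-- let the system bundle serve the Byteman packages to every other bundle -->
      <Export-Package>org.jboss.byteman.rule,org.jboss.byteman.rule.exception</Export-Package>
    </instructions>
  </configuration>
</plugin>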

JBossFuse:karaf@root> install -s mvn:test/byteman-fragment/1.0-SNAPSHOT

Once you have that configuration, you are ready to use Byteman to, for example, inject a line into the java.lang.String default constructor.

# find your Fuse process id
PROCESS_ID=$(ps aux | grep karaf | grep -v grep | cut -d ' ' -f2)

# navigate to the folder where you have extracted Byteman
cd /data/software/redhat/utils/byteman/byteman-download-

# export Byteman env variable:
export BYTEMAN_HOME=$(pwd)
cd bin/

# attach Byteman to Fabric8 process, no output expected unless you enable those verbose flags
sh -b -Dorg.jboss.byteman.transform.all $PROCESS_ID
# add these flags if you have any kind of problem and want to see what's going on: -Dorg.jboss.byteman.debug -Dorg.jboss.byteman.verbose

# install our Byteman custom rule, we are passing it directly inline with some bash trick
sh /dev/stdin <<OPTS

# smoke test rule that uses also a custom output file
RULE DNS StringSmokeTest
CLASS java.lang.String
METHOD <init>()
DO traceln(" works: " );
traceOpen("PAOLO", "/tmp/byteman.txt");
traceln("PAOLO", " works in files too " )
ENDRULE
OPTS

Now, to verify that Byteman is working, we can just invoke java.lang.String constructor in Karaf shell:

JBossFuse:karaf@root> new java.lang.String

And as per our rule, you will also see the content in /tmp/byteman.txt

Inspiration for this third trick comes from both the OSGi wiki and this interesting page from the Spring guys.

If you have any comment or any other interesting workflow please leave a comment.

Tuesday, October 7, 2014

Use Byteman in JBoss Fuse / Fabric8 / Karaf

Have you ever found yourself trying to understand how come something very simple is not working?

You are writing code in any well known context and for whatever reason it's not working. And you trust your platform, so you carefully read all the logs that you have.
And still you have no clue why something is not behaving as expected.

Usually, what I do next, if I am lucky enough to be working on an Open Source project, is start reading the code.
That often works; but almost always you haven't written that code and you don't know the product that well. So, yeah, you see which variables are in the context, but you have no clue about their possible values and, what's worse, you have no idea where or, even worse, when those values were created.

At this point, what I usually do is connect with a debugger. I will never remember the JVM parameters a java process needs to allow debugging, but I know that I have those written somewhere. And modern IDEs suggest them to me, so it's not a big pain to connect remotely to a complex application server.

Okay, we are connected. We can place a breakpoint not far from the section we consider important and step through the code, eventually adding more breakpoints.
The IDE variables view allows us to see the values of the variables in context. We can even browse the whole object tree and invoke snippets of code, useful in case the plain memory state of an object doesn't really give the precise information we need (imagine you want to format a Date or filter a collection).

We have all the instruments but... this is a slow process.
Each time I stop at a specific breakpoint I have to manually browse the variables. I know, we can improve the situation with watched variables, which stick to the top of the overview window and give you a quick look at what you have already identified as important.
But I personally find that watches make sense only if you have a very small set of variables: since they all share the same namespace, you end up with many unset values that just distract the eye when you are not in a scope that sees those variables.

I have recently learnt a trick to improve these workflows that I want to share with you in case you don't know it yet:

IntelliJ and, with a smart trick, even Eclipse allow you to add print statements when you pass through a breakpoint. If you combine this with preventing the breakpoint from pausing, you have a nice way to augment the code you are debugging with log invocations.

For IntelliJ check here:

While instead for Eclipse, check this trick: or let me know if there is a cleaner or newer way to reach the same result.

The trick above works. But its main drawback is that you are adding local configuration to your workspace. You cannot share it easily with someone else. And you might want to re-use your workspace for some other session, where seeing all those log entries or breakpoints can distract you.

So, while looking for something external to my IDE, I decided to give Byteman a try.

Byteman actually offers much more than what I needed this time and that's probably the main reason I have decided to understand if I could use it with Fabric8.

A quick recap of what Byteman does taken directly from its documentation:

Byteman is a bytecode manipulation tool which makes it simple to change the operation of Java applications either at load time or while the application is running.
It works without the need to rewrite or recompile the original program.


  • tracing execution of specific code paths and displaying application or JVM state
  • subverting normal execution by changing state, making unscheduled method calls or forcing an unexpected return or throw
  • orchestrating the timing of activities performed by independent application threads
  • monitoring and gathering statistics summarising application and JVM operation

In my specific case I am going to use the first of those listed behaviors, but you can easily guess that all the other aspects might come in handy at some point:

  • add some logic to prevent a NullPointerException
  • short-circuit some logic because you are hitting a bug that is not in your code base, but you still want to see what happens if that bug wasn't there
  • anything else you can imagine...

Getting started with Byteman is normally particularly easy. You are not even forced to start your jvm with specific instructions: you can just attach to an already running process!
This works most of the time, but unluckily not on Karaf with the default configuration, due to OSGi implications. But no worries, the functionality is just a simple configuration edit away.

You have to edit the Karaf configuration file that defines the framework properties

and add these 2 packages to the property org.osgi.framework.bootdelegation:
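Something along these lines (the file is typically etc/ in a stock install; just append the packages to the existing comma-separated list):

# etc/
org.osgi.framework.bootdelegation = ...,org.jboss.byteman.rule,org.jboss.byteman.rule.exception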


That property is used to instruct the OSGi framework to provide the classes in those packages from the parent classloader.

In this way, you will avoid ClassCastException raised when your Byteman rules are triggered.

That's pretty much all the extra work we needed to use Byteman on Fuse.

Here a practical example of my interaction with the platform:

# assume you have modified Fabric8's configuration as above and started it, and that you are using fabric8-karaf-1.2.0-SNAPSHOT

# find your Fabric8 process id
$ ps aux | grep karaf | grep -v grep | cut -d ' ' -f3

# navigate to the folder where you have extracted Byteman
cd /data/software/redhat/utils/byteman/byteman-download-
# export Byteman env variable:
export BYTEMAN_HOME=$(pwd)
cd bin/
# attach Byteman to Fabric8 process, no output expected unless you enable those verbose flags
sh 5200 # add these flags if you have any kind of problem and want to see what's going on: -Dorg.jboss.byteman.debug -Dorg.jboss.byteman.verbose
# install our Byteman custom rules
$ sh ~/Desktop/RBAC_Logging.btm
install rule RBAC HanldeInvoke
install rule RBAC RequiredRoles
install rule RBAC CanBypass
install rule RBAC UserHasRole
# invoke some operation on Fabric8 to trigger our rules:
$ curl -u admin:admin 'http://localhost:8181/jolokia/exec/io.fabric8:type=Fabric/containersForVersion(java.lang.String)/1.0' 
{"timestamp":1412689553,"status":200,"request":{"operation...... very long response}

# and now check your Fabric8 shell:
 OBJECT: io.fabric8:type=Fabric
 METHOD: containersForVersion
 ARGS: [1.0]
 REQUIRED ROLES: [viewer, admin]
 CURRENT_USER_HAS_ROLE(viewer): true

Where my Byteman rules look like:

RULE RBAC HanldeInvoke
METHOD handleInvoke(ObjectName, String, Object[], String[]) 
DO traceln(" OBJECT: " + $objectName + "
 METHOD: " + $operationName + "
 ARGS: " + java.util.Arrays.toString($params) );
ENDRULE

RULE RBAC RequiredRoles
METHOD getRequiredRoles(ObjectName, String, Object[], String[])
DO traceln(" REQUIRED ROLES: " + $! );
ENDRULE

RULE RBAC CanBypass
METHOD canBypassRBAC(ObjectName)
DO traceln(" CANBYPASS: " + $! );
ENDRULE

RULE RBAC UserHasRole
METHOD currentUserHasRole(String)
DO traceln(" CURRENT_USER_HAS_ROLE(" + $requestedRole + "): " + $! );
ENDRULE

Obviously this was just a short example of what Byteman can do for you. I'd invite you to read the project documentation, since you might discover nice constructs that allow you to write simpler rules or to refine them so they trigger only when it's relevant for you (if in my example you see some noise in the output, you probably have a Hawtio instance open that is doing its polling, thus triggering some of our installed rules).

A special thank you goes to Andrew Dinn, who explained to me how Byteman works and the reasons for my initial failures.

The screencast is less than optimal due to my errors ;) but you clearly see the added noise since I had an instance invoking protected JMX operation!

Monday, May 5, 2014

Continuous Integration with JBoss Fuse, Jenkins and Nexus

Recently I was putting together a quickstart Maven project to show a possible approach to the organization of a JBoss Fuse project.

The project is available on Github here:

And it's a slight evolution of what I have learnt working with my friend James Rawlings.

The project proposes a way to organize your codebase in a Maven Multimodule project.
The project is in continuous evolution, thanks to the feedback and suggestions I receive; but its key point is to show a way to organize all the artifacts, scripts and configuration that compose your project.

In the ci folder you will find subfolders like features or karaf_scripts with files you probably end up creating in every project and with inline comments to help you with tweaking and customization according to your specific needs.
The project also makes use of Fabric8 to handle the creation of a managed set of OSGi containers and to benefit from all its features for organizing workflows, configuration and versioning of your deployments.
In this blogpost I will show you how to deploy that sample project in a very typical development setup that includes JBoss Fuse, Maven, Git, Nexus and Jenkins.
The reason I decided to cover this topic is that I often meet good developers who tell me that, even though they are aware of the added value of a continuous integration infrastructure, they have no time to dedicate to the activity. With no extra time, they focus only on development.

I don't want to evangelize around this topic or try to tell anyone what they should do. I like to trust them and believe they know their project priorities and that they accepted the trade-off between available time, backlog and the overall added benefits of each activity. Likewise I like to believe that we all agree that, for large and long projects, CI best practices are definitely a must-do and that no one has to argue about their value.

With all this in mind, I want to show a possible setup and workflow, and how quick it is to invest one hour of your time for benefits that are going to last much longer.

I will not cover step by step instructions. But to prove to you that all this works, I have created a bash script that uses Docker and that demonstrates how things can be easy enough to script and, more importantly, that they really work!

If you want to jump straight to the end, the script is available here:

It uses some Docker images I have created and published as trusted builds on Docker Index:

They are a convenient and reusable way to ship executables and, since they show the steps performed, they may also be seen as a way to document the installation and configuration procedure.
As mentioned above, you don't necessarily need them. You can manually install and configure the services yourself. They are just a verified and open way to save you some time, or to show you the way I did it.

Let’s start describing the component of our sample Continuous Integration setup:

1) JBoss Fuse 6.1
It’s the runtime we are going to deploy onto. It lives in a dedicated box. It interacts with Nexus as the source of the artifacts we produce and publish.

2) Nexus
It's the software we use to store the binaries we produce from our code base. It is accessed by JBoss Fuse, which downloads artifacts from it, but it is also accessed by Jenkins, which publishes binaries to it as the last step of a successful build job.

3) Jenkins
It's our build job invoker. It checks out the code from Git, builds it and, if the build is successful, publishes the resulting artifacts to Nexus.

4) Git Server
It's the remote code repository holder. It's accessed by Jenkins to download the most recent version of the code we want to build, and it's populated by all the developers when they share their code and when they want to build on the Continuous Integration server. In our case, the git server is just a filesystem accessed via ssh.
Interaction Diagram



The first thing to do is to set up git to act as our source code management (SCM) system.
As you may guess, we might have used any other similar software to do the job, from SVN to Mercurial, but I prefer git since it's one of the most popular choices and also because it's an officially supported tool to interact directly with Fabric8 configuration.
We don't have great requirements for git. We just need a filesystem to store our shared code and a transport service that allows access to that code.
To keep things simple I have decided to use SSH as the transport protocol.
This means that on the box that is going to store the code we just need the sshd daemon started, a valid user, and a folder they can access.
Something like:

yum install -y openssh-server git
service sshd start
adduser fuse
mkdir -p /home/fuse/fuse_scripts.git
chmod a+rwx /home/fuse/fuse_scripts.git # or a better strategy based on group ids
The only git-specific step is to initialize the git repository with:

git init --bare /home/fuse/fuse_scripts.git



Nexus OSS is a repository manager that can be used to store Maven artifacts.
It’s implemented as a java web application. For this reason installing Nexus is particularly simple.
Thanks to the embedded instance of Jetty that powers it, it's just a matter of extracting the distribution archive and starting a binary:

wget /tmp/nexus-latest-bundle.tar.gz
tar -xzvf /tmp/nexus-latest-bundle.tar.gz -C /opt/nexus
Once started Nexus will be available by default at this endpoint:
with admin as user and admin123 as password.



Jenkins is the job scheduler we are going to use to build our project. We want to configure Jenkins in such a way that it will be able to connect directly to our git repo to download the project source.
To do this we need an additional plugin, Git Plugin.
We obviously also need java and maven installed on the box.
Since Jenkins configuration is composed of various steps involving interaction with multiple administrative pages, I will only give some hints on the important steps you are required to perform. For this reason I strongly suggest you check my fully automated script that does everything in total automation.
Just like Nexus, Jenkins is implemented as a java web application.
Since I like to use RHEL-compatible distributions like CentOS or Fedora, I install Jenkins in a simplified way. Instead of manually extracting the archive like we did for Nexus, I just define a new yum repo and let yum handle the installation and configuration as a service for me:

wget -O /etc/yum.repos.d/jenkins.repo
rpm --import
yum install jenkins
service jenkins start
Once Jenkins is started you will find its web interface available here:

By default it’s configured in single user mode, and that’s enough for our demo.
You may want to visit http://your_ip:8080/configure to check that the values for JDK, Maven and git look good. They are usually picked up automatically if that software was already installed before Jenkins.

Then you are required to install Git Plugin:

Once you have everything configured, and after a restart of the Jenkins instance, we will be able to see a new option in the form that allows us to create a Maven build job.
Under the section Source Code Management there is now the option git. It's just a matter of providing the coordinates of your SSH server, for example:


And in the section Build, under Goals and options, we need to explicitly tell Maven we want to invoke the deploy phase, providing the ip address of the Nexus instance:

clean deploy -DskipTests

The last configuration step is to specify a different maven settings file, in the advanced maven properties, that is stored together with the source code:

And that contains the user and password to present to Nexus when pushing artifacts there; something like the snippet below.
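A minimal sketch of what such a settings.xml could contain (the server id is hypothetical and has to match the repository id used in the project's distributionManagement; the credentials are the Nexus defaults mentioned above):

<settings>
  <servers>
    <server>
      <!-- must match the repository id declared in the pom's distributionManagement -->
      <id>nexus</id>
      <username>admin</username>
      <password>admin123</password>
    </server>
  </servers>
</settings>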

The configuration is done but we need an additional step to have Jenkins working with Git.
Since we are using SSH as our transport protocol, when connecting to the SSH server for the first time we are going to be asked to confirm that the server we are connecting to is safe and that its fingerprint is the one we were expecting. This challenge will block the build job, since it is a batch job and there will not be anyone there to confirm the SSH fingerprint.
To avoid all this, a trick is to connect to the Jenkins box via SSH, become the user that runs the Jenkins process, jenkins in my case, and from there manually connect to the ssh git server to perform the identification interactively, so that it will no longer be required in the future:

ssh fuse@IP_GIT_SERVER
The authenticity of host '[]:22 ([]:22)' can't be established.
DSA key fingerprint is db:43:17:6b:11:be:0d:12:76:96:5c:8f:52:f9:8b:96.
Are you sure you want to continue connecting (yes/no)? 
The alternative approach I use in my Jenkins docker image is to totally disable SSH fingerprint verification, an approach that may be too insecure for you:

mkdir -p /var/lib/jenkins/.ssh ;  
printf "Host * \nUserKnownHostsFile /dev/null \nStrictHostKeyChecking no" >> /var/lib/jenkins/.ssh/config ; 
chown -R jenkins:jenkins /var/lib/jenkins/.ssh
If everything has been configured correctly, Jenkins will be able to automatically download our project, build it and publish it to Nexus.


Before doing that we need a developer to push our code to git, otherwise there will not be any source file to build yet!
To do that, you just need to clone my repo, configure an additional remote repo (our private git server) and push:

git clone
git remote add upstream ssh://fuse@$IP_GIT/home/fuse/fuse_scripts.git
git push upstream master
At this point you can trigger the build job on Jenkins. If it’s the first time you run it Maven will download all the dependencies, so it may take a while.
If everything is successful you will receive the confirmation that your artifacts have been published to Nexus.

JBoss Fuse


Now that our Nexus server is populated with the Maven artifacts built from our code base, we just need to tell our Fuse instance to use Nexus as a remote Maven repository.
To do that, in a Karaf shell we need to change the value of a property:

fabric:profile-edit  --pid io.fabric8.agent/org.ops4j.pax.url.mvn.repositories=\"\" default

And we can now verify that the integration is completed with this command:

cat  mvn:sample/karaf_scripts/1.0.0-SNAPSHOT/karaf/create_containers
If everything is fine, you are going to see an output similar to this:

# create broker profile
fabric:mq-create --profile $BROKER_PROFILE_NAME $BROKER_PROFILE_NAME
# create applicative profiles
fabric:profile-create --parents feature-camel MyProfile

# create broker
fabric:container-create-child --jvm-opts "$BROKER_01_JVM" --resolver localip --profile $BROKER_PROFILE_NAME root broker

# create worker
fabric:container-create-child --jvm-opts "$CONTAINER_01_JVM" --resolver localip root worker1
# assign profiles
fabric:container-add-profile worker1 MyProfile
This means that addressing a karaf script via Maven coordinates worked well, and that you can now use shell:source, osgi:install or any other command that requires artifacts published on Nexus.



As mentioned multiple times, this is just a possible workflow and example of interaction between those platforms.
Your team may follow different procedures or use different tools.
Maybe you are already implementing more advanced flows based on the new Fabric8 Maven Plugin.
In any case I invite everyone interested in the topic to post a comment or a link to a different approach, and help everyone by sharing their experience.

Wednesday, March 12, 2014

Integration testing with Maven and Docker

Docker is one of the new hot things out there. With a different set of technologies and ideas compared to traditional virtual machines, it implements something similar and at the same time different, with the idea of containers: almost all of a VM's power, but much faster and with very interesting additional goodies.

In this article I assume you already know something about Docker and know how to interact with it. If it's not the case I can suggest you these links to start with:

My personal contribution to the topic is to show you a possible workflow that allows you to start and stop Docker containers from within a Maven job.

The reason I investigated this functionality is to help with tests and integration tests in Java projects built with Maven. The problem is well known: your code interacts with external systems and services. Depending on what you are really writing this could mean databases, message brokers, web services and so on.

The usual strategies to test these interactions are:

  • In-memory servers, implemented in Java, that are usually very fast, but too often their limit is that they are not the real thing
  • A layer of stubbed services that you implement to offer the interfaces that you need.
  • Real external processes, sometimes remote, to test real interactions.

Those strategies work, but they often require a lot of effort to put in place. And the most complete one, the one that uses proper external services, poses problems as far as isolation is concerned:
imagine that you are interacting with a database and that you perform read/write operations just while someone else is accessing the same resources. Again, you may find the correct workflows, which involve creating separate schemas and so on, but this is extra work and very often not a very straightforward activity.

Wouldn't it be great if we could have the same opportunities that these external systems offer, but in total isolation? And what if I also add speed to the offer?

Docker is a tool that offers us this opportunity.

You can start a set of Docker containers with all the services that you need at the beginning of the test suite, and tear them down at the end of it. Your Maven job can be the only consumer of these services, with all the isolation that it needs. And all of this can easily be scripted with the help of Dockerfiles, which are, in the end, not much more than a sequential set of command line invocations, as the small example below shows.
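A disposable MongoDB image like the one used later in this post could be described by a Dockerfile as short as this (just a sketch: base image, package names and defaults may need adjusting):

# build with: docker build -t my/centos-mongodb .
FROM centos:centos7
RUN yum install -y epel-release && yum install -y mongodb-server && mkdir -p /data/db
EXPOSE 27017
CMD ["mongod"]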

Let's see how to enable all of this.

The first prerequisite is obviously to have Docker installed on your system. As you may already know Docker technology depends on the capabilities of the Linux Kernel, so you have to be on Linux OR you need the help of a traditional VM to host the Docker server process.

This is the official documentation guide that shows you how to install under different Linux distros:

While instead this is a very quick guide to show how to install if you are on MacOSX:

Once you are ready and you have Docker installed, you need to apply a specific configuration.

Docker, in recent versions, exposes its remote API, by default, only over Unix sockets. Although we could interact with those with the right code, I find it much easier to interact with the API over HTTP. To obtain this, you have to pass a specific flag to the Docker daemon to tell it to listen on HTTP as well.

I am using Fedora, and the configuration file to modify is /usr/lib/systemd/system/docker.service.

Description=Docker Application Container Engine

ExecStart=/usr/bin/docker -d -H tcp:// -H unix:///var/run/docker.sock


The only modification compared to the defaults has been adding -H tcp://

Now, after I have reloaded systemd scripts and restarted the service I have a Docker daemon that exposes me a nice REST API I can poke with curl.

sudo systemctl daemon-reload
sudo systemctl restart docker
curl # returns a json in output

You probably also want this configuration to survive future Docker rpm updates. To achieve that you have to copy the file you have just modified to a location that survives rpm updates. The correct way to achieve this in systemd is with:

sudo cp /usr/lib/systemd/system/docker.service /etc/systemd/system

See systemd FAQ for more details.

If you are using Ubuntu you have to configure a different file. Look at this page:

Now we have all we need to interact easily with Docker.

You may at this point expect me to describe how to use a Maven Docker plugin. Unluckily that's not the case: there is no such plugin yet, or at least I am not aware of one. I am considering writing one, but for the time being I have solved my problems quickly with the help of the GMaven plugin, a little bit of Groovy code and the Java library Rest-assured.

Here is the code to start up the Docker containers:

import com.jayway.restassured.RestAssured
import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*
import com.jayway.restassured.path.json.JsonPath
import com.jayway.restassured.response.Response

// assuming the Docker daemon is reachable on localhost; adjust to your Docker host
RestAssured.baseURI = "http://localhost"
RestAssured.port = 4243

// here you can specify advanced docker params, but the mandatory one is the name of the Image you want to use
def dockerImageConf = '{"Image":"${docker.image}"}'
def dockerImageName = JsonPath.from(dockerImageConf).get("Image") "Creating new Docker container from image $dockerImageName"
def response =  with().body(dockerImageConf).post("/containers/create")

if( 404 == response.statusCode ) { "Docker image not found in local repo. Trying to download image '$dockerImageName' from remote repos"
    response = with().parameter("fromImage", dockerImageName).post("/images/create")
    def message = response.asString()
    // odd: the rest api always returns 200 and doesn't return proper json. I have to grep
    if( message.contains("HTTP code: 404") ) fail("Image $dockerImageName NOT FOUND remotely. Abort. $message") "Image downloaded"
    // retry to create the container
    response = with().body(dockerImageConf).post("/containers/create")
    if( 404 == response.statusCode ) fail("Unable to create container with conf $dockerImageConf: ${response.asString()}")
}

def containerId = response.jsonPath().get("Id") "Container created with id $containerId"

// set the containerId to be retrieved later during the stop phase"containerId", "$containerId") "Starting container $containerId"
with().post("/containers/$containerId/start")

def ip = with().get("/containers/$containerId/json").path("NetworkSettings.IPAddress") "Container started with ip: $ip"

System.setProperty("MONGODB_HOSTNAME", "$ip")
System.setProperty("MONGODB_PORT", "27017")

And this is the one to stop them

import com.jayway.restassured.RestAssured
import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*

// assuming the Docker daemon is reachable on localhost; adjust to your Docker host
RestAssured.baseURI = "http://localhost"
RestAssured.port = 4243

def containerId ='containerId') "Stopping Docker container $containerId"
with().post("/containers/$containerId/stop") "Docker container stopped"

// ${docker.remove.container} is interpolated by Maven before this script runs
if( true == ${docker.remove.container} ){
    with().delete("/containers/$containerId") "Docker container deleted"
}

The Rest-assured fluent API should suggest what is happening, and the inline comments should clarify it, but let me add a couple of notes. The code to start a container is my implementation of the functionality of docker run, as described in the official API documentation here:

The specific problem I had to solve was how to propagate the id of my Docker container from a Maven Phase to another one.
I have achieved the functionality thanks to the line:

// set the containerId to be retrieved later during the stop phase"containerId", "$containerId")

I have also exposed a couple of Maven properties that can be useful to interact with the API:

  • docker.image - The name of the image you want to spin
  • docker.remove.container - If set to false, tells Maven to not remove the stopped container from filesystem (useful to inspect your docker container after the job has finished)


    mvn verify -Ddocker.image=pantinor/fuse -Ddocker.remove.container=false

You may find a full working example here. I have been told that sometimes my syntax colorizer script eats some keywords or changes the case of words, so if you want to copy and paste, it may be a better idea to take it from GitHub.

This is a portion of the output while running the Maven build with the command mvn verify :

[INFO] --- gmaven-plugin:1.4:execute (start-docker-images) @ gmaven-docker ---
[INFO] Creating new Docker container from image {"Image":"pantinor/centos-mongodb"}
log4j:WARN No appenders could be found for logger (org.apache.http.impl.conn.BasicClientConnectionManager).
log4j:WARN Please initialize the log4j system properly.
[INFO] Container created with id 5283d970dc16bd7d64ec08744b5ecec09b57d9a81162826e847666b8fb421dbc
[INFO] Starting container 5283d970dc16bd7d64ec08744b5ecec09b57d9a81162826e847666b8fb421dbc
[INFO] Container started with ip:


[INFO] --- gmaven-plugin:1.4:execute (stop-docker-images) @ gmaven-docker ---
[INFO] Stopping Docker container 5283d970dc16bd7d64ec08744b5ecec09b57d9a81162826e847666b8fb421dbc
[INFO] Docker container stopped
[INFO] Docker container deleted


If you have any question or suggestion please feel free to let me know!

The full Maven `pom.xml` is also available here:


Friday, January 24, 2014

Monitoring JBoss Fuse ESB with Nagios

Note: this article describes a scenario based on JBoss Fuse, but it's applicable to any Java context able to run Java Servlets, like JBoss EAP, WildFly, Tomcat, etc.

One of my recent activities at work has been to provide guidance on monitoring a JBoss Fuse ESB setup with Nagios/OpsView. Although more specialized solutions for this specific problem do exist (the Fuse plugin for Red Hat JON), Nagios is still one of the most widespread open source monitoring tools.

You don't need to be an expert in Nagios to understand this article; I am definitely not. But if you are, and you have any suggestions to improve this solution, please let me know.

Nagios is an open source monitoring tool that, with the help of plugins, is able to collect many metrics from different kinds of services and to notify you when a specific value or a specific pattern (values over time) is identified. It can be used to monitor anything from the operating system status to the more obscure values of your custom deployed application, assuming that you specify what is important to you.

In our example, our custom application is deployed on JBoss Fuse ESB.

Most of the metrics that we want to monitor are related to Apache Camel, Apache ActiveMQ and Apache CXF. These projects already do an excellent job of exposing much of the runtime information that we are interested in. For example Camel tells us how many messages passed through a specific component, or what the status of some of our routes is.

The technology that these projects use to expose so much valuable information is JMX.

Nagios supports JMX with the help of external plugins.

We explored the existing plugins, like check_jmx.


We found some problems with this approach:

1) to allow RMI communication, the network layer needs to allow the connection to specific ports.
2) the plugin supports only attributes and not operations
3) building JMX queries is not particularly user friendly, specifically if you are not a java developer/devop

Since we had the need to invoke some operation as part of our monitoring requirements, we were forced to look for other alternatives.

check_http with Jolokia

One of our first alternative ideas was to use Jolokia.

Jolokia is a java library that exposes JMX interfaces over HTTP, with a JSON based REST api.

To do its magic over HTTP it just needs an HTTP entry point to be invoked, that is, a Servlet. I leave you with the official Jolokia instructions to install it, but once you have those components in place, you are ready to use its JMX bridging features.

But I also share with you a small trick:
I haven't manually installed Jolokia. Since we are already using the awesome hawtio as a management console, and since it leverages Jolokia, everything that we needed was already there.

Let's explore the benefits of using a jolokia based solution.

Being http based, it clearly helps with the network configuration problems:

I still find it somehow hard to accept, but in my experience with many customers, the handling of the corporate network configuration is often more complicated than expected. Something that seems simple on an abstract paper diagram stops being so when you don't have details about the network topology and the only thing you can see is that the end-to-end communication doesn't work. For this reason, depending on the popular HTTP protocol is definitely an attractive feature.

The second added benefit is that, differently from check_jmx, it supports JMX operation invocation. This last feature turns out to be handy if the metrics you are interested in are not exposed as attributes but only as operations. One example is the operation:


As for the ergonomics of the interface, I personally believe that it feels very straightforward.

Requests can be simple. You could end up invoking a very tidy REST endpoint via GET, something similar to this:

curl -u admin:admin
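A complete read request might look roughly like this (local Jolokia endpoint and a standard JVM MBean, just as an illustration):

curl -u admin:admin 'http://localhost:8181/jolokia/read/java.lang:type=Memory/HeapMemoryUsage'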

But the moment you start sending complex input payloads, you can rely on POST and on external input files containing your JSON payloads. I suggest you check Jakub Korab's helpful post:

To use Jolokia directly from Nagios we can use the common check_http plugin, which mimics curl behavior just like in the previous example. The only glitch is that check_http doesn't offer a way to process JSON strings, which is the structure that Jolokia returns. You could probably parse the output with regular expressions and simple value checks, but we feel that we are missing something. And what is missing here is offered by the next option.

check_jmx4perl with Jolokia

jmx4perl is a set of Perl libraries and scripts that allow you to communicate with Jolokia-exposed JMX objects. One of the tools bundled with the project is a Nagios plugin: check_jmx4perl

Don't be scared by the "perl" keyword. I don't write Perl and I have problems reading it, and still I can use the tool. The project gives you executable scripts that you can invoke from the command line to query JMX services exposed by Jolokia, and it also provides a Nagios-compatible executable.

With this tool you can write queries like this one:

$ check_jmx4perl \
    --user=admin \
    --password=admin \
    --url \
    --name "[MyService - CamelContext - WebService]" \
    --mbean "org.apache.camel:context=mycontext/86-MyRoute.Request,name=\"log\",type=components" \
    --attribute "State" \
    --critical Stopped \
    --warning   !Started

OK - MyService - CamelContext - WebService] : 'Started' as expected | 'MyService - CamelContext - WebService]'=Started;!Started;Stopped

And as you can guess from reading the previous command, the Nagios support is very immediate, allowing you to specify the values you want to identify as representing a Warning status or an Error one.

If you are familiar with Nagios you know that to use an executable you have to define it in Nagios configuration.

This is some example of possible macros:

### check_jmx4perl supports wildcards! (you can use an asterisk anywhere in the string names)

# Read JMX attributes without support for nested attributes 
define command {
     command_name         check_jmx4perl_attribute_absolute
     command_line         /usr/local/bin/check_jmx4perl \
                              $ARG1$ \
                              --url $ARG2$ \
                              --mbean $ARG3$ \
                              --attribute $ARG4$
}

# Check Bundle is Active
define command {
     command_name         check_jmx4perl_bundle_is_active
     command_line         /usr/local/bin/check_jmx4perl \
                              $ARG1$ \
                              --url $ARG2$ \
                              --warning \!ACTIVE \
                              --critical \!ACTIVE \
                              --mbean "osgi.core:type=bundleState,version=1.5" \
                              --operation "getState(long)"
}

Once you have defined those macros in Nagios, you can define your real monitoring calls that use those commands. Something like:

# Root service definition that presets some values and variables
define service {
    use generic-service
    name jolokia
    register 0
    host_name localhost
    _authentication --user=admin --password=admin
    _agenturl http://localhost:8181/jolokia
}

# Sample Bundle is Active
define service {
     service_description    Sample Bundle is Active
     use                    jolokia
     check_command          check_jmx4perl_bundle_is_active\
                            !$_SERVICEAUTHENTICATION$ \
                            !$_SERVICEAGENTURL$ \

How to test this?

Although installing and configuring Nagios is not rocket science, it's not always a straightforward activity.
Sometimes you make silly typos or just leave a space in the wrong place and nothing works. Even if you have the feeling you can fix it given some time, it turns into a time-stealing activity that distracts you from your huge list of other things to do.

Or maybe you are just like me: the fact that you managed to set everything up in a couple of days doesn't mean you will be able to precisely remember how, if asked in a month.

For all those reasons I have decided to have some fun with Docker.

Docker is a cool new tool that you can use to provide bundled stacks of applications, called containers; they can be preconfigured exactly as you want. I have put together a Docker container that starts a Nagios instance for you and provides all the plugins, scripts and sample configuration that I have discussed in this post.

In case you are not interested in Docker, you can still find the samples in this GitHub repository and read the Dockerfile, which in the end gives all the steps you need to install and configure Nagios with jmx4perl.

Since I've built my knowledge on the information already available on the web, this is a small list of the resources that helped me to put together this tutorial:

In case you have any other interesting approach to the problem please leave a comment.

Sunday, August 11, 2013

Share your Bash history across terminals in Terminator

Like many developers, I spend lots of time working with a command line shell. My operating system of choice is Fedora (currently 18) and I am using the excellent Terminator as a more powerful replacement for the basic default terminal that comes with Gnome 3.

The typical workflow with Terminator is to use its very intuitive keyboard shortcuts to split the current working window, maximize it, start working, resize it when finished and jump back to any other window of interest.

It's a very fast workflow and I cannot see myself going back to anything different.

But this high productive usage has a drawback:

the command history is not shared across the different windows!

This is often annoying since one of the reasons I am spawning a new shell is to do some "temporary" work without moving away from my current position. And this usually involves redoing some already executed step, so the lack of history is quite annoying.

I will describe here a way to address this problem.

The first thing to know is that the command history is not present because by default bash history is flushed to .bash_history only when you terminate a session.

A way to force a more frequent flush is to play with the environment variable PROMPT_COMMAND

If we modify one of our bash configuration files, .bashrc for instance:

#save history after every command
#use 'history -r' to reload history
PROMPT_COMMAND="history -a; $PROMPT_COMMAND"

What we are saying here is to invoke history -a with every new command prompt line. This flushes the history to file every time we see a new prompt. You can verify this if you monitor your .bash_history file.

Have we reached what we were hoping for?

Not yet.

Even if the history is now persisted, you will notice that your running shells do not see it. This is because the history is loaded only at the beginning of a new session.

The only thing left is to manually force a reload of the history with the command

history -r

Done. We now have access to the other shells' history.

The last question:

why don't we add the reload of the history directly in the PROMPT_COMMAND variable?

Because you probably don't want that. Having all your shells always share a global history would break the most obvious behavior of the shell, which is to show you the previous command you typed in that very shell.