Thursday, November 28, 2019

Apache Camel 3.0.0 released!

It's Thanksgiving today and Apache Camel 3.0.0 has just been released, so I definitely want to thank the whole Camel community for the effort put in to reach this important milestone.

The Camel 3 work started a bit more than a year ago, so it's a 14-month effort that reaches its goal today.  But that's definitely not the end, as there's still a lot of work to do on Camel!  One thing to keep in mind is that Camel 2.0 was released in August 2009, a bit more than 10 years ago now.

Part of this work was cleaning up tons of things, components and APIs, that were deprecated on this long-lived 2.x branch.  Another part was modularizing the code base so that it can be used in more lightweight scenarios.  We've created a migration guide to help people migrate their Camel-based applications.  There are also a lot of new features and we'll explore some of them in the following weeks.  My next blog entry will explore one of these new features, the endpoint DSL.

Tuesday, July 07, 2015

Additional considerations on the new Features service

I want to give a more detailed explanation of some of the new stuff provided by the Karaf 4.x FeaturesService.

Transitive closure

The karaf-maven-plugin provides a new goal named verify.  It's an enhanced version of the previous validate goal.  This goal helps ensure that all features are transitively closed with respect to their requirements, meaning that all the requirements of a given feature can be fulfilled without relying on already installed bundles or features.  This new goal actually uses the OSGi resolver to verify that, so it ensures your features can be deployed anywhere, without additional requirements.
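As a sketch, the verify goal can be wired into a build roughly like this (the execution id and the descriptor path are placeholders for illustration; adapt them to your own project layout):

```xml
<!-- Minimal sketch: run the karaf-maven-plugin verify goal against a
     generated features descriptor.  The descriptor location below is a
     placeholder, not a required convention. -->
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>verify-features</id>
      <goals>
        <goal>verify</goal>
      </goals>
      <configuration>
        <descriptors>
          <descriptor>file:${project.build.directory}/feature/feature.xml</descriptor>
        </descriptors>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The build then fails if any feature in the descriptor cannot be resolved on its own, which is exactly the transitive-closure guarantee described above.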

Viable deployments

One of the new features I mentioned in my previous post is that the new Features service ensures that the features' requirements are always fully satisfied.  When installing new features, this is usually not much of a problem, as we usually only add new bundles.  However, when uninstalling a feature, things can be more complicated if features are not transitively closed.  Some features do not express all their dependencies, so the user is required to install a few other features to make them work.  Problems appear when a feature which is required, but not explicitly, is uninstalled by the user.  The previous behaviour when uninstalling a feature was to uninstall all the bundles that had been installed by that feature and were not directly required by another feature.  This means that if another feature relied on one of those bundles without expressing a dependency on it, the removal of that bundle would cause that feature to stop working.
The new Features service does not have this bad behaviour anymore.  If the user uninstalls a feature, a new resolution takes place, making sure that all requirements are satisfied for the remaining features.
A simple example involves Camel, but the same is true for the CXF or ActiveMQ features.  The camel-core feature depends on the shell for its commands.  The shell has changed in Karaf 4, but a compatibility layer is provided to allow the installation of the previous commands.  However, the camel-core feature does not have an expressed dependency on the shell-compat feature, which means that in order to install camel-core, you also need to install the shell-compat feature.  Once that's done, you won't be able to uninstall the shell-compat feature unless you also uninstall camel-core.
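For illustration, the missing link could have been expressed in the feature definition itself (a hypothetical sketch; the real camel-core feature does not declare this, which is exactly what causes the situation above):

```xml
<!-- Hypothetical variant of the camel-core feature that declares its
     dependency on shell-compat explicitly.  With this declaration, the
     resolver would install shell-compat automatically and refuse to
     uninstall it while camel-core is still installed. -->
<feature name="camel-core" version="2.15.0">
  <feature>shell-compat</feature>
  <bundle>mvn:org.apache.camel/camel-core/2.15.0</bundle>
</feature>
```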
Your deployments are always valid and safe!

Full uninstallation of a feature

In order to minimise the disruption when uninstalling features, the FeaturesService in Karaf 2.x and 3.x did not uninstall feature dependencies when uninstalling a given feature (see the point above).  Now that the service uses a set of requirements as the input of the resolution, uninstalling a feature will automatically uninstall all the bundles that are no longer needed.
I'll talk about the input requirements in more detail in a later post.

Optional feature dependencies

In Karaf 2.x and 3.x, a feature can have feature dependencies.  The behaviour is that the Features Service installs those dependencies when the feature is installed.  The goal is usually to fulfil some requirements; for example, the webconsole feature depends on the http feature.
Karaf 4.0 adds a new possibility: those dependencies can be flagged as optional.  This means the Features Service will install them if they are actually needed to fulfil some requirements, but if a dependency is not really needed, it won't be installed.
This is particularly useful when we define features that provide a given specification along with an implementation.  For example, the http feature provides the OSGi HttpService, but pax-web provides multiple containers that can be used, such as Jetty or Tomcat.  In Karaf 2.x and 3.x, having a dependency on the http feature always led to the Jetty container being installed.  With Karaf 4.x, the actual behaviour is that the HttpService will be installed, using Jetty by default, but not necessarily.
This is modelled using the following:

<feature name="pax-http">
    <feature dependency="true">pax-http-jetty</feature>
</feature>

<feature name="pax-http-jetty">
    ... jetty bundles ...
</feature>

<feature name="pax-http-tomcat">
    ... tomcat bundles ...
</feature>

The benefit is that features can safely depend on pax-http, which will always provide the HttpService.  If no specific http provider is installed, the pax-http-jetty feature will be installed; but if the user explicitly installs pax-http-tomcat, the Jetty provider will not be installed.

Friday, June 26, 2015

Karaf 4.0 is about to be released!

It's been almost 3 years since I last blogged, so I'm using the fact that the Karaf 4.0 release is under vote to start again.

Karaf 4.0 brings a lot of new features, but one of the most important ones is the new features service.  It is originally a port of the Fuse/Fabric agent resolver to Karaf 4, but it has since been extended a lot (and has actually been integrated back into Fuse 6.2).

This new features service is used to install well-known features, and even though it reuses the same features XML definition, it's not limited to those anymore and works in a very different way.  The main difference is that the original features service (in Karaf 2.x and 3.x) was quite blind when it came to installing features.  When a user asked for a feature installation, the process was quite simplistic: the service mainly went through the list of bundles listed in the feature and installed them.  The introduction of the resolver flag brought some intelligence to the process, as the old and deprecated OBR library was used to compute which bundles were actually needed.  The goal was to be smarter and not install bundles if they were not needed.
However, OBR was quite limited, and its output could theoretically not be supported by the OSGi framework, or at least not give the expected results.

The new resolver reuses the OSGi resolver, the exact same one that is used by the OSGi framework.  It also reuses the same metadata extracted from the bundle headers.  The OSGi resolver is really generic and supports any kind of requirements and capabilities, even though it contains specific rules for known kinds of constraints.
The new features service uses this resolver by translating feature definitions into resources with their requirements and capabilities, including bundles, but also conditionals, feature dependencies, etc.  Once the modelling is done, the service asks the resolver for an output and brings the framework into the desired state by installing, uninstalling and updating bundles as needed.
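As an illustration of this requirement/capability model (a hedged sketch: the feature and service names are made up, though the Karaf 4 features schema does allow expressing raw OSGi requirements and capabilities on a feature):

```xml
<!-- Hypothetical features showing how feature definitions map onto the
     resolver's requirement/capability model. -->
<feature name="my-service-impl" version="1.0.0">
  <bundle>mvn:org.example/my-service-impl/1.0.0</bundle>
  <!-- advertise a capability that other features can require -->
  <capability>
    osgi.service;objectClass=org.example.MyService;effective:=active
  </capability>
</feature>

<feature name="my-service-client" version="1.0.0">
  <bundle>mvn:org.example/my-service-client/1.0.0</bundle>
  <!-- the resolver must find a resource providing a matching capability,
       otherwise the installation of this feature fails -->
  <requirement>
    osgi.service;filter:="(objectClass=org.example.MyService)";effective:=active
  </requirement>
</feature>
```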

This is a major change, even though the end user does not always see it immediately.  The features service maintains a set of requirements (usually requirements on features), and the resolution will always satisfy those requirements.  This means that the installation of a feature cannot break previously installed features, nor can it install a feature that can't be resolved.

There are lots of new stuff coming with this new features service, and I'll try to cover some of them over the next weeks.

Thursday, June 28, 2012

FuseSource acquired by RedHat

Yesterday, an important announcement was made: RedHat is acquiring FuseSource.  This news follows Progress's announcement a few weeks ago that FuseSource was being divested as it was not part of their new product strategy.

This is truly exciting.  RedHat looks like a fantastic company, a pure open source player, as we are too.  We will be integrated into the JBoss Enterprise Middleware group, and even if there is some overlap in the products, I don't have any doubt that we'll be able to find the best way to leverage our respective strengths to build awesome new products for our users and customers.

This is definitely a good fit for both RedHat and FuseSource, and I'm definitely thrilled about it.

Friday, March 16, 2012

Camel Webinar

Next week I'm presenting a webinar on Camel, in French, entitled "Introduction à Apache Camel".
Sign up!

Thursday, February 23, 2012

JLine 2.6

I've released JLine 2.6, which should be available in Maven Central soon.  The main change is that JLine is now almost completely conformant with GNU readline.  This means that JLine will read ~/.inputrc by default and now supports the vi editing mode, macro recording and all the goodness you can find in your standard Unix shell.
It will be used in Karaf 3.0 and in the next Fuse ESB.

Wednesday, January 18, 2012

Unit testing Camel Blueprint routes

Last week, while in Vegas, I discussed with my colleague Scott Cranton, and he told me that what was really making our users tend to stick with Spring-DM rather than Blueprint for their Camel routes was that unit testing those routes was not possible.

So I hacked up a small support library for Camel, leveraging PojoSR, which provides a service registry without using a fully compliant OSGi container.  This allows writing real unit tests (as opposed to integration tests using Pax Exam).

See the example below.

public class DebugBlueprintTest extends CamelBlueprintTestSupport {

    @Override
    protected Collection<URL> getBlueprintDescriptors() {
        return Collections.singleton(getClass().getResource("camelContext.xml"));
    }

    public void testRoute() throws Exception {
        // set mock expectations
        getMockEndpoint("mock:a").expectedBodiesReceived("Hello World");

        // send a message
        template.sendBody("direct:start", "World");

        // assert mocks
        assertMockEndpointsSatisfied();
    }
}

And the corresponding camelContext.xml blueprint descriptor:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <camelContext autoStartup="true" id="camelContext" trace="true"
                xmlns="http://camel.apache.org/schema/blueprint">
    <route>
      <from uri="direct:start"/>
      <transform>
        <simple>Hello ${body}</simple>
      </transform>
      <to uri="mock:a"/>
    </route>
  </camelContext>

</blueprint>