Sunday, January 5, 2014

Camel Route autoStartup via Property Placeholder

It might not be obvious from the Camel autoStartup documentation, but this setting can be managed via a property placeholder, so that the route definition looks like the following:

<route autoStartup="{{route.feed.autostart}}">

Why would you need to do this? In my use, the property placeholder variables are mostly environment specific, allowing deployment of a different ".cfg" file per environment (such as development versus production). Suppose your pre-production setup consists of more than one environment, such as development plus QA, but the integration destination for the route data has only one instance. You wouldn't want to execute the route in both the development and QA environments, only in one of them. In your property placeholder ".cfg" files, you can then map the autoStartup flag per environment.
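
For illustration, here is a minimal Blueprint sketch of the wiring; the persistent-id "route.feed" and the timer endpoint are hypothetical stand-ins:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0">

  <!-- reads etc/route.feed.cfg from the ServiceMix container -->
  <cm:property-placeholder id="placeholder" persistent-id="route.feed"/>

  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <!-- bridge Camel's {{ }} syntax to the Blueprint placeholder above -->
    <propertyPlaceholder id="properties" location="blueprint:placeholder"/>
    <route autoStartup="{{route.feed.autostart}}">
      <from uri="timer:feed?period=5000"/>
      <to uri="log:feed"/>
    </route>
  </camelContext>
</blueprint>

The per-environment etc/route.feed.cfg file would then contain, for example, route.feed.autostart=true in development and route.feed.autostart=false in QA.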

Saturday, January 4, 2014

ActiveMQ Connection Factory Timeout for Failover

I recently encountered an issue within our Apache ServiceMix environment: if all of the nodes participating in an ActiveMQ failover connection are down and no timeout is set on the connection factory, ActiveMQ will wait forever for a node to become available.

This sounds reasonable, but in our usage, other processes were being blocked and rolled back that were unrelated to the ActiveMQ integration messages. At a high level, the sending system would batch-send messages to integration points A and B, which are of different types. Type A might be a file system write, while B is an ActiveMQ message send.

With the message ordering as mentioned above, all would have been fine when the ActiveMQ failover nodes were all down: the file for integration A would send, B would wait forever to send, and since nothing comes after it, no harm done. The problem is that the batch-send process for the mixed message types runs again while the thread sending integration B (the ActiveMQ send) is still waiting. The backend of the messaging system is a simple database table, which stores the messages for sending and clears them afterwards. So a second process thread attempts to send the message B integration again, but rolls back because the row is still locked by the initial attempt, and we begin to see a symptom of the original problem: the blocking.

The original failover string in the ActiveMQ connection factory looked like the following:
failover:(tcp://10.1.1.1:61616,tcp://10.1.1.2:61616)?randomize=false
Notice there is no timeout parameter, which results in blocking forever. In our case, we don't want the connection factory to wait forever; we want it to time out so the next batch message-sending process can attempt the send. See the following ActiveMQ documentation:

http://activemq.apache.org/failover-transport-reference.html

After adding the timeout parameter to the failover string, that being "&timeout=<some number>", the connection attempt stops waiting for an ActiveMQ node to take the messages once the interval elapses. With the timeout in place, the sending thread fails on the ActiveMQ message rather than blocking.
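
The updated failover string would look like the following (the 3000 ms value is only an illustration; tune it to your environment):

failover:(tcp://10.1.1.1:61616,tcp://10.1.1.2:61616)?randomize=false&timeout=3000

Wired into a connection factory bean definition, a sketch would be:

<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <!-- note the XML-escaped ampersand before the timeout parameter -->
  <property name="brokerURL"
            value="failover:(tcp://10.1.1.1:61616,tcp://10.1.1.2:61616)?randomize=false&amp;timeout=3000"/>
</bean>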

Also note, including the JMS header "JMSExpiration" on the message had no effect, since the issue was in establishing the connection, not in the actual sending of the message.

Obviously you might think, "why would all the nodes in the ActiveMQ failover be down?" The answer is: they were down.

Friday, January 3, 2014

Resolving Apache ServiceMix JDBC JAR Classloading

This past year I've spent some time using Apache ServiceMix for service bus integration. One of the main issues I had was integrating with database systems and using their provided JDBC drivers when defining the connection bean within Apache Camel. Many of the documented approaches and user group threads did not work for me in my environment. The main approaches were wrapping the JDBC JARs as OSGi bundles and/or installing the JDBC libraries into a Maven repository so the dependencies load properly in the ServiceMix container.
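
For reference, the wrap approach is typically attempted from the ServiceMix console with something like the following (the Maven coordinates are hypothetical placeholders for your vendor's driver):

osgi:install -s wrap:mvn:com.example/vendor-jdbc-driver/1.0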

To resolve the issue in my case, I had to deploy the given JDBC JARs into the JRE library extensions (lib/ext) directory so the classloader could locate the libraries correctly. That alone did not complete the integration, however, as the bean was not being detected as available within the Camel routes as described in the Camel JDBC documentation; the connection was not available for passing data directly within the route. As a workaround, I was forced to pass the JDBC bean reference in Camel into the depending bean as a property reference. For example, with a JDBC bean "model" defined and my logic bean defined as "controller", the "controller" bean definition in Camel looks like:

<bean class="com.controller.Service" id="controller">
<property name="dataSource" ref="model">
</property></bean>

Within the "Service" object, would be getters and setters for the dataSource field as type "org.springframework.jdbc.datasource.DriverManagerDataSource".

Oh, and one more thing. The deployed JDBC libraries in the lib/ext directory would not load at first due to security trust exceptions, similar to what this thread reports. As mentioned in the linked article, the fix was to deploy sunjce_provider.jar into lib/ext as well. Once implemented, the container was able to properly load the third-party JAR files.

Everything mentioned above was done against Microsoft SQL Server and IBM DB2.

See Also: https://github.com/cschneider/Karaf-Tutorial
