After attending Sam Newman’s microservice talks at Geecon last week I started to think more about what is most likely an essential feature of service-oriented/microservice platforms for monitoring, reporting and diagnostics: correlation ids. Correlation ids allow distributed tracing within complex service-oriented platforms, where a single request into the application can often be dealt with by multiple downstream services. Without the ability to correlate downstream service requests it can be very difficult to understand how requests are being handled within your platform.

I’ve seen the benefit of correlation ids in several recent SOA projects I have worked on, but as Sam mentioned in his talks, it’s often very easy to think this type of tracing won’t be needed when building the initial version of the application, but then very difficult to retrofit into the application when you do realise the benefits (and the need for it!). I’ve not yet found the perfect way to implement correlation ids within a Java/Spring-based application, but after chatting to Sam via email he made several suggestions which I have now turned into a simple project using Spring Boot to demonstrate how this could be implemented.


During both of Sam’s Geecon talks he mentioned that in his experience correlation ids were very useful for diagnostic purposes. A correlation id is essentially an id that is generated and associated with a single (typically user-driven) request into the application, and that is passed down through the stack and on to dependent services. In SOA or microservice platforms this type of id is very useful, as requests into the application are typically ‘fanned out’ or handled by multiple downstream services, and a correlation id allows all of the downstream requests (from the initial point of request) to be correlated or grouped based on the id. So-called ‘distributed tracing’ can then be performed using the correlation ids by combining all the downstream service logs and matching the required id to see the trace of the request throughout your entire application stack (which is very easy if you are using a centralised logging framework such as logstash).

The big players in the service-oriented field have been talking about the need for distributed tracing and correlating requests for quite some time, and as such Twitter have created their open source Zipkin framework (which often plugs into their RPC framework Finagle), and Netflix has open-sourced their Karyon web/microservice framework, both of which provide distributed tracing [edit 27/07/14: It would appear that although distributed tracing was mentioned as an upcoming feature in the Karyon blog post, it never made it in to the public Github repo. Thanks to John Eikenberry for pointing this out in the comments below]. There are of course commercial offerings in this area, one such product being AppDynamics, which is very cool but has a rather hefty price tag.

Creating a proof-of-concept in Spring Boot

As great as Zipkin and Karyon are, they are both relatively invasive, in that you have to build your services on top of the (often opinionated) frameworks. This might be fine for some use cases, but not so much for others, especially when you are building microservices. I’ve been enjoying experimenting with Spring Boot of late, and this framework builds on the well-known and loved (at least by me 🙂 ) Spring framework by providing lots of preconfigured sensible defaults. This allows you to build microservices (especially ones that communicate via RESTful interfaces) very rapidly. The remainder of this blog post explains how I implemented correlation ids in a (hopefully) non-invasive way.


The proof of concept had two main goals:

  1. Allow a correlation id to be generated for an initial request into the application
  2. Enable the correlation id to be passed to downstream services, using a method that is as non-invasive to the code as possible


I have created two projects on GitHub, one containing an implementation where all requests are being handled in a synchronous style (i.e. the traditional Spring approach of handling all request processing on a single thread), and also one for when an asynchronous (non-blocking) style of communication is being used (i.e., using the Servlet 3 asynchronous support combined with Spring’s DeferredResult and Java’s Futures/Callables). The majority of this article describes the asynchronous implementation, as this is more interesting:

The main work in both code bases is undertaken by the CorrelationHeaderFilter, which is a standard Java EE Filter that inspects the HttpServletRequest header for the presence of a correlationId. If one is found then we set a ThreadLocal variable in the RequestCorrelation Class (discussed later). If a correlation id is not found then one is generated and added to the RequestCorrelation Class:
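A minimal sketch of such a filter could look like the following (the header name and the use of a random UUID are my assumptions here – the full implementation is in the GitHub projects):

import java.io.IOException;
import java.util.UUID;

import javax.servlet.DispatcherType;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class CorrelationHeaderFilter implements Filter {

    // The header name is an assumption for this sketch
    public static final String CORRELATION_ID_HEADER = "Correlation-Id";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain filterChain)
            throws IOException, ServletException {
        HttpServletRequest httpServletRequest = (HttpServletRequest) request;

        // Guard against re-executing this logic when the Async Dispatcher re-enters the filter
        if (!currentRequestIsAsyncDispatcher(httpServletRequest)) {
            String correlationId = httpServletRequest.getHeader(CORRELATION_ID_HEADER);
            if (correlationId == null) {
                correlationId = UUID.randomUUID().toString();
            }
            RequestCorrelation.setId(correlationId);
        }
        filterChain.doFilter(request, response);
    }

    private boolean currentRequestIsAsyncDispatcher(HttpServletRequest httpServletRequest) {
        return httpServletRequest.getDispatcherType() == DispatcherType.ASYNC;
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void destroy() {
    }
}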

The only thing in this code that may not be instantly obvious is the conditional check currentRequestIsAsyncDispatcher(httpServletRequest), but this is here to guard against the correlation id code being executed again when the Async Dispatcher thread is running to return the results (this is interesting to note, as I initially didn’t expect the Async Dispatcher to trigger the execution of the filter a second time).

Here is the RequestCorrelation Class, which contains a simple ThreadLocal<String> static variable to hold the correlation id for the current Thread of execution (set via the CorrelationHeaderFilter above):
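A minimal sketch of such a Class:

public class RequestCorrelation {

    // Holds the correlation id for the current Thread of execution
    private static final ThreadLocal<String> id = new ThreadLocal<>();

    public static String getId() {
        return id.get();
    }

    public static void setId(String correlationId) {
        id.set(correlationId);
    }
}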

Once the correlation id is stored in the RequestCorrelation Class it can be retrieved and added to downstream service requests (or data store access etc.) as required by calling the static getId() method within RequestCorrelation. It is probably a good idea to encapsulate this behaviour away from your application services, and you can see an example of how to do this in a RestClient Class I have created, which composes Spring’s RestTemplate and handles the setting of the correlation id within the header transparently from the calling Class.
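As a rough sketch of the idea (the method name and lack of error handling are my simplifications), the RestClient could wrap the RestTemplate like this:

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.web.client.RestTemplate;

public class RestClient {

    private final RestTemplate restTemplate = new RestTemplate();

    public String getForString(String uri) {
        // Transparently propagate the current correlation id in the outgoing request
        // header, so calling Classes never need to know about it
        HttpHeaders httpHeaders = new HttpHeaders();
        httpHeaders.set(CorrelationHeaderFilter.CORRELATION_ID_HEADER, RequestCorrelation.getId());
        HttpEntity<Void> requestEntity = new HttpEntity<>(httpHeaders);
        return restTemplate.exchange(uri, HttpMethod.GET, requestEntity, String.class).getBody();
    }
}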

Making this work for asynchronous requests…

The code included above works fine when you are handling all of your requests synchronously, but it is often a good idea in a SOA/microservice platform to handle requests in a non-blocking asynchronous manner. In Spring this can be achieved by using the DeferredResult Class in combination with the Servlet 3 asynchronous support. The problem with using ThreadLocal variables within the asynchronous approach is that the Thread that initially handles the request (and creates the DeferredResult/Future) will not be the Thread doing the actual processing.

Accordingly, a bit of glue code is needed to ensure that the correlation id is propagated across the Threads. This can be achieved by extending Callable with the required functionality (don’t worry if the example calling Class code doesn’t look intuitive – this adaptation between DeferredResults and Futures is a necessary evil within Spring, and the full code, including the boilerplate ListenableFutureAdapter, is in my GitHub repo):
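A minimal sketch of the Callable wrapper at the heart of that glue code (the real version in the repo differs in detail):

import java.util.concurrent.Callable;

public class CorrelationCallable<V> implements Callable<V> {

    private final String correlationId;
    private final Callable<V> callable;

    public CorrelationCallable(Callable<V> targetCallable) {
        // Capture the correlation id on the Thread that handles the initial request...
        this.correlationId = RequestCorrelation.getId();
        this.callable = targetCallable;
    }

    @Override
    public V call() throws Exception {
        // ...and restore it on the worker Thread before the actual processing runs
        RequestCorrelation.setId(correlationId);
        return callable.call();
    }
}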

And there we have it – the propagation of the correlation id regardless of the synchronous/asynchronous nature of processing!

You can clone the GitHub repo containing my asynchronous example, and execute the application by running mvn spring-boot:run at the command line. If you access http://localhost:8080/externalNews in your browser (or via curl) you will see something similar to the following in your Spring Boot console, which clearly demonstrates a correlation id being generated on the initial request, and then this being propagated through to a simulated external call (have a look in the ExternalNewsServiceRest Class to see how this has been implemented):


I’m quite happy with this simple prototype, and it does meet the two goals I listed above. Future work will include writing some tests for this code (shame on me for not TDDing!), and also extending this functionality to a more realistic example.

I would like to say a massive thanks to Sam, not only for sharing his knowledge at the great talks at Geecon, but also for taking time to respond to my emails. If you’re interested in microservices and related work I can highly recommend Sam’s Microservice book which is available in Early Access at O’Reilly. I’ve enjoyed reading the currently available chapters, and having implemented quite a few SOA projects recently I can relate to a lot of the good advice contained within. I’ll be following the development of this book with keen interest!

If you have any comments or thoughts then please do share them via the comments below, or feel free to get in touch via the usual mechanisms!


I used Tomasz Nurkiewicz’s excellent blog several times for learning how best to wire up all of the DeferredResult/Future code in Spring:


I’m currently having a lot of fun experimenting with node.js using IntelliJ IDEA. I installed the node.js plugin, and although this added options to create a new ‘Boilerplate’ or ‘Express’ project, the rest of the node.js integration wasn’t quite so obvious…

In particular, after creating a blank project and adding a few js files I noticed that several of the core node.js globals, such as ‘require’, were not being recognised by the IDE. I restarted the IDE and it did detect that I was coding using node.js, but it still didn’t detect these modules, e.g. “Unresolved function or method require” or “Unresolved function or method http”.

It turns out that the IDE JavaScript libraries have to be configured properly. This can be done as follows (I’m using OSX, and so some of the menu names may be slightly different if you’re coding on Linux or Windows):

IntelliJ IDEA -> Preferences -> JavaScript -> Libraries 
-> [Ensure 'Node.js Globals' is checked]

This sorted the problem for me!

If you are interested you can follow my current experiment with node.js (using the excellent ‘Node.js In Action‘) in the following GitHub repo:

A recent contract I was working on had decided to use Solr to implement full-text search over a product catalogue for an e-commerce platform. Naturally we were approaching development with a TDD mindset, and were keen to implement both unit tests for core business functionality, and also integration tests for a more end-to-end style of testing. The primary application stack consisted of Spring (Core, Data, MVC), MySQL and Solr 4.

Just a slight aside, but for anyone looking to implement full-text search, the primary candidates are Solr and ElasticSearch. I won’t discuss the merits of either implementation further, as it’s best to evaluate each with respect to your own use cases (and here is an excellent resource to help you decide).

With our chosen frameworks and datastores we found the unit testing relatively straightforward, and decided to use JUnit (driven via the Maven Surefire plugin), Mockito for mocking external dependencies (persistence layer, API calls etc.), and PowerMock for the difficult mocking (for example, mocking static method calls of several reliable-but-decidedly-old-skool dependencies).
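To illustrate the static-mocking case, here is a rough sketch of a PowerMock-driven test (the LegacyLookup Class is a made-up stand-in for one of those old-skool dependencies):

import static org.junit.Assert.assertEquals;
import static org.powermock.api.mockito.PowerMockito.mockStatic;
import static org.powermock.api.mockito.PowerMockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// A made-up stand-in for a reliable-but-old-skool static dependency
class LegacyLookup {
    static String lookup(String key) {
        return "real value";
    }
}

@RunWith(PowerMockRunner.class)
@PrepareForTest(LegacyLookup.class)
public class LegacyLookupTest {

    @Test
    public void staticCallCanBeMocked() {
        // PowerMock rewrites the Class bytecode so the static call can be stubbed
        mockStatic(LegacyLookup.class);
        when(LegacyLookup.lookup("sku")).thenReturn("mocked value");

        assertEquals("mocked value", LegacyLookup.lookup("sku"));
    }
}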

Integration testing was also relatively easy to set up – we chose to again drive tests via JUnit (this time via the Maven Failsafe plugin), and used Spring’s @ContextConfiguration and AbstractTransactionalJUnit4SpringContextTests to manage injected components (@Autowired etc.) and instantiate various parts of the application for testing. We also ran an embedded H2 database to allow realistic simulation of a SQL datastore (as an aside, in ~99% of ‘standard’ use cases I have found H2 to behave identically to MySQL, but there are a couple of corner cases to watch out for – this will be another blog post :)).
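As a skeletal illustration of that wiring (the context file and table names are hypothetical, and I’m assuming a Spring version where the superclass exposes a jdbcTemplate field):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

@ContextConfiguration("classpath:integration-test-context.xml") // hypothetical context file
public class ProductCatalogueIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests {

    @Test
    public void insertedRowIsVisibleWithinTheTest() {
        // Runs against the embedded H2 datastore defined in the test context;
        // the transactional superclass rolls the insert back after the test completes
        int rowsBefore = countRowsInTable("product"); // hypothetical table
        jdbcTemplate.update("insert into product (id, name) values (?, ?)", 1, "widget");
        assertEquals(rowsBefore + 1, countRowsInTable("product"));
    }
}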

The Problem – How do we run an embedded Solr?

When we first started using Solr 4 we naturally wanted to create integration tests running against this datastore, and we wanted to run this in the same manner as we did with H2 – executing as a light-weight in-memory (embedded) process that we could create, pre-load, and destroy relatively quickly.

We soon found the EmbeddedSolrServer Class distributed within the Solr package, and although useful it didn’t fit in exactly with the way we wanted to design and deploy the Solr communication layer within our Spring application. For production use we wanted to instantiate a SolrServer bean for which we supply the target endpoint on the network (and under the hood this SolrServer bean would actually be instantiated using a custom HttpSolrServer Class). We needed a way to create an ‘embedded’ version that implemented the SolrServer interface, but also allowed us to override the Solr config and data directory (to load pre-canned indexes etc.).
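To make the intent concrete, the production wiring looked conceptually like the following sketch (the endpoint and profile names are my assumptions), with an integration-test profile swapping in the embedded implementation under the same SolrServer type:

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class SolrConfiguration {

    @Bean
    @Profile("production")
    public SolrServer solrServer() {
        // Collaborators depend only on the SolrServer type; under the hood this
        // is an HttpSolrServer pointing at the target endpoint on the network
        return new HttpSolrServer("http://solr.internal:8983/solr/products");
    }

    // An "integration-test" profile exposes the embedded implementation (the
    // InProcessSolrServer discussed below) as the same SolrServer bean type, with
    // the Solr config and data directory overridden to load pre-canned indexes
}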

After a fair bit of searching we stumbled over ZoomInfo’s excellent blog, in which they had shared their version of an embedded SolrServer that could easily be exposed as a Spring bean. They called the Class the InProcessSolrServer.

We would like to offer many thanks to ZoomInfo for sharing their great work, and this Class provided us with many months of good service. However, with the latest releases of Solr (4.2+) ZoomInfo’s InProcessSolrServer will no longer compile due to an interface change within the Solr internals.

In the spirit of sharing the wealth I wanted to blog an update to the original ZoomInfo code, which addresses the interface change, and I’ve also included the Spring scaffolding in the gist below to give you an idea of how we run this code.

I hope this helps, and if you have any questions then please feel free to comment or tweet 🙂

I’ve been playing around with Chef again this afternoon, and ran into a problem after following the (very useful) Opscode tutorials and then experimenting on my own.

The Problem

localhost ==============================================================
localhost Chef encountered an error attempting to create the client "vagrant.vm"
localhost ==============================================================
localhost Authorization Error:
localhost --------------------
localhost Your validation client is not authorized to create the client for this node (HTTP 403).
localhost Possible Causes:
localhost ----------------
localhost * There may already be a client named "vagrant.vm"
localhost * Your validation client (xxxxxx-validator) may have misconfigured authorization permissions.

It’s quite obvious that my earlier tutorial-based activities had registered the ‘vagrant.vm’ node name with my Hosted Chef account. Accordingly, I visited my Hosted Chef portal and removed the node, but after receiving confirmation of the node being deleted I was still getting the same error when attempting to provision my local VM box.


The Solution

Give the second Vagrant node a new name when bootstrapping, e.g.:

$ knife bootstrap localhost \
 --ssh-user vagrant \
 --ssh-password vagrant \
 --ssh-port 2222 \
 --run-list "recipe[apt]" \
 --sudo \
 --node-name "vagrant.vm2"

Alternatively you can delete the first node you created via knife on the CLI (rather than attempting to delete the node via the web-based Hosted Chef interface):

$ knife node delete "vagrant.vm"