
On Wednesday I had the pleasure of presenting a new talk at the London Microservices User Group, which was entitled “The Business Behind Microservices: Organisational, Architectural and Operational Challenges”.

The key theme of the talk was exploring the often under-appreciated organisational and people impact that introducing (or moving to) a microservice architecture will have on a business. As mentioned in the talk, I’ve led the implementation of microservice architectures in several organisations as part of my work with OpenCredo and Container Solutions, and so I was keen to share my lessons from the trenches.

You can find a recording of the talk over on the Skillsmatter website (who were our generous hosts for the evening): https://skillsmatter.com/skillscasts/6450-the-business-behind-microservices-organisational-architectural-and-operational-challenges


You can also find the slides on slideshare:

 

The original talk abstract was as follows:

The technology changes required when implementing a microservice-based application are only one part of the equation. The business and organisation will also most likely have to fundamentally change. In an ideal world, this shouldn’t be a problem – what with the rise of agile, lean and DevOps – but this is not always the situation I encounter in my consulting travels. I would like to share some stories of successful (and not so successful) strategies and tactics I have used over the past four years when introducing service-oriented architecture into organisations.

Join me for a whistle-stop tour of the business and people challenges that I have experienced first hand when implementing a greenfield microservice project, and also breaking down a monolith. We’ll look at ‘divided companies’ vs ‘connected companies’, determine the actual impact of Conway’s law, briefly touch on the lean startup/enterprise mindset, dive into change management without the management double-speak, and look at the lightweight processes needed to ensure the technical success of a microservices implementation.

As usual, please do let me know if you have any questions!

Building Microservices – by Sam Newman

Rating: 5/5 stars

TLDR: If you are looking for an introduction to and overview of the current ‘microservice’ landscape and the concepts and thinking behind it, then look no further. I would recommend this book for architects, developers, QA and operations (DevOps) specialists.

Microservices are obviously a very hot topic at the moment, and everyone and their dog appears to have an opinion about this concept. Accordingly, writing a book about microservices and keeping everyone happy is going to be a challenge (and probably impossible).

In my opinion Sam has done a great job here and manages to provide key information on a range of relevant microservice issues, such as the motivations for microservices, how to model services, integration with other systems, deployment, testing, monitoring, security and architecting/implementing microservices at scale.

I’ve seen negative comments towards this book on other sites stating that more code examples should be provided, and although I appreciate their motivations, I don’t believe this will be the book to address these issues. In my experience, code contained within a book can quickly become stale, and the diversity of language (and the associated rate of change) that is currently being used to create microservice implementations would make pleasing everyone an impossible challenge. Instead I recommend people interested in code samples have a Google, or visit InfoQ, DZone or Voxxed where there is plenty of microservice implementation code.

In my opinion I will use this book much like I did the original ‘Continuous Delivery’ book by Jez Humble and Dave Farley – the book will provide an excellent high-level overview of the issues (and potential solutions), help me to understand core concepts and how key components and methodologies relate to each other, and also provide inspiration and pointers to further reading.

Much like I didn’t expect to create a full build pipeline implementation simply from reading the ‘Continuous Delivery’ book, I wouldn’t expect to build a complete microservice ecosystem from Sam’s book. However, after reading both books I began the respective tasks with a lot more insight than I originally had, and I made much smarter decisions (and knew what to look for when searching for more knowledge) once I understood the big picture.

As stated above, I would recommend this book for any software delivery role. I’ve personally seen the benefits of a microservice architecture when deployed for the correct use case, but the nature of this architecture creates new challenges whether you are a developer, QA or operations specialist.

(In the interest of full disclosure I did provide feedback on this book as it was being published via the O’Reilly Early Access program. However, I have endeavoured to write a review that takes into account only the final published book.)

I’ve received some great feedback after posting my proposal for a microservice maturity/classification model last week, some positive, and some negative.

Some private communications suggested that I may be getting caught up in the marketing hype, and several emails suggested that the microservice architecture really is just classical SOA re-invented. Other emails balanced out these comments by suggesting that microservices present an opportunity to learn and iterate on the mistakes made in the original implementation of SOA, especially now that we are embracing concepts such as domain-driven design, and are applying more consideration to well-defined software architectures.

The only public response I’ve seen so far is by my fellow London-based microservice and Spring framework expert Russ Miles – you can read it on the Simplicity Itself blog. In the interest of full disclosure I do know Russ personally (and have sunk a few beers in his company), but that’s not going to influence what I think could be a great discussion about my proposal.

Maturity – not all it’s cracked up to be…

The first comment made by Russ is that the approach of creating a maturity model could be dangerous. I think this is totally fair, and it crossed my mind several times when writing the initial post. So much so, that I added the word ‘classification’ to the title as an alternative to maturity.

If you look at other maturity models, such as the Richardson model for APIs or the Continuous Delivery model, then there is a clear sense of scale, from negative to positive. As Russ quite rightly points out, there is room for interpretation in my model that smaller is better, and I probably should have taken more care to make it clear that I don’t think this is necessarily the case.

In some cases a monolithic, but well-structured, architecture may be the best solution. Russ has also conjectured in a recent talk at Skillsmatter that starting out with building a monolith and then moving to a microservice architecture may be the fastest way to build software. It’s definitely difficult to prove this beyond anecdotal evidence, but my instincts (and experience) tell me this is probably true in certain cases, especially at the current point in time where we have little in the way of modelling or tooling support for building microservices.

In my opinion Russ is quite right to think about how this model could be used negatively, and although my initial intention was to give people a model that they could look at and point to where they think their software is, it could easily be abused. I would be keen to get more feedback on how the model could be shaped or evolved to make my intentions clearer.

Size – it’s what you do with it that counts (but size still matters)

Russ also mentioned that size is a dangerous metric, and I agree. Although lines of code (LOC) is potentially an arbitrary metric, especially with the variety of languages and frameworks currently available, I still do believe that size is important. Not in the “if your application is over 100 lines, then it’s not a microservice” kind of way (which, in fairness, could have been read from my model), but from the perspective of encapsulation, responsibility and comprehension.

After a bit more thought, a measure of architectural/code cohesion is probably a better metric for this concept. I definitely believe the microservice architecture is rooted in the principle of high cohesion (and loose coupling), but it has been argued by the likes of Simon Brown and Bob Martin that this can be achieved in a monolithic codebase without the need for the creation of separate ‘services’. The key here is modularisation or componentisation.
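
As a rough illustration of this kind of modularisation within a monolith (a sketch of my own, with illustrative names – not taken from Simon’s or Bob’s work), a Java component can expose only its interface publicly and keep its implementation package-private, so the compiler itself enforces the component boundary:

package com.example.orders;

// The only public entry point to the 'orders' component.
public interface OrderService {
    void placeOrder(String customerId, String productId);
}

// Package-private implementation: invisible outside com.example.orders,
// so other parts of the monolith cannot couple to its internals.
class DefaultOrderService implements OrderService {
    @Override
    public void placeOrder(String customerId, String productId) {
        // the component's domain logic lives here, behind the boundary
    }
}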

Accordingly, I do believe that microservices should be ‘small’ in size. For me, this was one of the failings in the original approach with SOA. The lack of skilled modelling and architectural guidance allowed services to morph into ‘all singing and dancing’ applications that offered low cohesion. Hard system boundaries (and potentially expensive coordination and communication) provided by microservices in combination with the notions of ‘bounded contexts’ from domain-driven design (DDD) should make it more obvious to developers when we are straying outside the remit of a component.

Bloated vendor tools that emerged from traditional SOA also allowed developers to ‘cheat’ by circumventing the loose coupling that patterns such as the service bus initially proposed, and monstrosities such as the heavyweight ESB were born. These days we are seeing tooling emerge that encourages reactive systems and event-driven architectures based on small components (such as the very interesting AWS Lambda). With services such as AWS Lambda a codebase size limit is enforced due to the nature of execution. It will be interesting to see what applications emerge from these frameworks.

Putting a size limit on a codebase may not be an exact science, but I believe an upper bound can at least be used as a trigger to discuss if a service is growing beyond its original remit (or if the architectural quality is degrading). A lot of agile and architectural techniques I teach clients are not hard-and-fast rules to determine which decisions should be made, but often act as a cue for the team (or organisation) to engage in conversation or a whiteboard session to check that we are still designing high-quality software that models the business correctly.
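
As a rough sketch of how such a trigger might be automated (the threshold and names here are purely illustrative assumptions on my part), a small check run as part of the build could prompt the conversation whenever a service’s codebase crosses the agreed bound:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class ServiceSizeCheck {

    // Hypothetical upper bound: crossing it should trigger a team
    // conversation (or whiteboard session), not an automatic refactoring.
    private static final long LINE_LIMIT = 10_000;

    public static void main(String[] args) throws IOException {
        Path sourceRoot = Paths.get(args.length > 0 ? args[0] : "src/main/java");
        long totalLines;
        try (Stream<Path> paths = Files.walk(sourceRoot)) {
            totalLines = paths.filter(p -> p.toString().endsWith(".java"))
                    .mapToLong(ServiceSizeCheck::countLines)
                    .sum();
        }
        if (totalLines > LINE_LIMIT) {
            System.out.printf("Service is %d lines (limit %d) - time to discuss its remit?%n",
                    totalLines, LINE_LIMIT);
        } else {
            System.out.printf("Service is %d lines - within the agreed bound.%n", totalLines);
        }
    }

    private static long countLines(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.count();
        } catch (IOException e) {
            return 0; // unreadable files are simply skipped in this sketch
        }
    }
}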

Dogma or dogfooding – you decide…

The final section of Russ’ post contains the strongest argument: that dogma over thinking will only lead us down the wrong path. “A maturity model can be used in place of thinking; I’d like to avoid that if we can”. Yeah, this sucks, but I agree.

The problem is that my academic background drives me towards the sharing of ideas and proposals among my peers, and I enjoy the ensuing discussion. We should always take care to make sure we are being pragmatic in these discussions (and not “solving the world’s problems at the dinner table”), but I’m still a supporter of pushing stuff out to the public for comment.

Russ also makes a great reference to Greg Young’s talk at muCon last year, which should be essential watching for anyone building microservices. Paraphrasing Greg massively, he suggested that a lot of the concepts behind microservices have already been done before, and that if we aren’t careful then we will re-invent the wheel (albeit more ‘micro’ than before 🙂 ).

Greg’s observations about the negative impact of dogmatic standardisation and overly-opinionated vendor tooling were also especially damning, and I couldn’t help but nod in agreement through a lot of his talk (on a side note, the whole muCon conference was awesome, and I would highly recommend attending the next iteration later this year! Massive kudos to Russ for kickstarting this conference).

I’m definitely going to take care to avoid being dogmatic (or inspiring dogma), but I’m still keen to share my thoughts on things like the maturity/classification model. It might turn out that this approach isn’t useful, but I’ve already been using a slightly less polished version of this model with tech friends over the last few months to help them understand where their software stack currently sits in relation to the ‘unicorn’ organisations such as Netflix and Amazon (what else are techies going to discuss over a few beers! :-) ).

Something that I believe could emerge from this type of proposal (which may be more valuable) is some kind of model that shows organisations where their software sits on the big picture scale of innovation, architecture and delivery. Each level of the model should also clearly show the benefits and drawbacks, and provide guidance on why (if at all) organisations should move some of their software to the next level. We would also need to show how organisations should go about doing this, both from a pragmatic organisational and cultural perspective (Conway’s law in action), and also from a technical tooling and process perspective.

I’m currently reading Jez Humble’s new book ‘Lean Enterprise’, and this is providing some superb inspiration for approaching these tasks. I’m also dogfooding some of my new models and processes, and as soon as I have some useful insights I’ll make sure I share them.

In summary…

I really appreciate Russ taking the time to reply to my original post, and I’m definitely going to think more about several of the great points he’s made (and I’m sure I’ll also catch up with him for a beer after an upcoming London Microservices User Group meetup).

I will also take more care to make my intentions clearer, but I’m still keen to share my thoughts and inspire debate. I’m also keen to avoid dogma, and focus more on dogfooding the model, and here I’ll use some of Russ’ comments to attempt to refine the model.

As usual, if anyone has any comments or feedback then please do get in touch!

I’ve been chatting to various people for quite some time about how there isn’t an agreed maturity model for the current trend to implement microservice architectures, and so I thought I would have a go at creating one (quick link to PDF: Microservice Maturity Model Proposal).

I’m in no way suggesting this first draft is complete or definitive, but I hope it may stimulate the conversation around this topic. I’m sure some people will argue that a maturity or classification model isn’t necessary, but I believe it is a fun exercise, and it does enable us to explore (and discuss) what we think are requirements for a microservice implementation.

I’ve proposed six classifications of application architectural styles:

  • Megalith Platform
    • Humongous single codebase resulting in a single application
  • Monolith Platform
    • Large single codebase resulting in a single application
  • Macro SOA Platform
    • Classical SOA applications, and platforms consisting of loosely-coupled large services (potentially a series of interconnected monoliths)
  • Meso Application Platform
    • ‘Meso’ or middle-sized services interconnected to form a single application or platform. Essentially a monolith and microservice hybrid
  • Microservice Platform
    • ‘Cloud native’ loosely-coupled small services focused around DDD-inspired ‘bounded contexts’
  • Nanoservice Platform
    • Extremely small single-purpose (primarily reactive) services

For each classification I’ve then attempted to write about things such as motivations, challenges, architecture, code modularisation, state/data stores, deployment, associated infrastructure, tooling and delivery models.

The full proposal can be found in the following PDF: ‘Microservice Maturity Model Proposal – Daniel Bryant (@danielbryantuk)’.

Please do let me know what you think – I’m keen to see whether this model could be useful, and also explore how it could be developed.

A micro approach to a macro problem?

The microservice hype is everywhere, and although the industry can’t seem to agree on an exact definition, we are repeatedly told that moving away from a monolithic application to a Service-Oriented Architecture (SOA) consisting of small services is the correct way to build and evolve software systems. However, there is currently an absence of traditional ‘Enterprise’ organisations talking about their adoption of microservices. This blog post is a preview to a larger article, which explores the use of microservices in the Enterprise.

Interfaces – Good contracts make for good neighbours

Whether you are starting a greenfield microservice project or are tasked with deconstructing an existing monolith into services, the first task is to define the boundaries and corresponding Application Programming Interfaces (APIs) of your new components.

The suggested granularity of a service in a microservice architecture is finer in comparison with what is typically implemented when using a classical Enterprise Service Oriented Architecture (SOA) approach, but arguably the original intention of SOA was to create cohesive units of reusable business functionality, even if the implementation history tells a different story.

A greenfield microservice project often has more flexibility, and the initial design stage can define Domain Driven Design (DDD) inspired bounded contexts with explicit responsibilities and contracts between service provider and consumer (for example, using Consumer Driven Contracts).
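
As a rough illustration of a Consumer Driven Contract (a sketch of my own – the endpoint, port and field names are illustrative assumptions, not from any specific project), the consumer can own a test that asserts only the parts of the provider’s response it actually depends on:

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.springframework.web.client.RestTemplate;

public class CustomerContractTest {

    private final RestTemplate restTemplate = new RestTemplate();

    @Test
    public void providerHonoursTheFieldsThisConsumerDependsOn() {
        // Run against a test instance (or stub) of the provider service.
        String body = restTemplate.getForObject(
                "http://localhost:8089/customers/1", String.class);

        // The contract: this consumer relies only on 'id' and 'email', so
        // the provider remains free to add fields without breaking us.
        assertTrue(body.contains("\"id\""));
        assertTrue(body.contains("\"email\""));
    }
}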

However, a typical brownfield project must look to create “seams” within the existing applications and implement new (or extracted) services that integrate with the seam interface. The goal is for each service to have high cohesion and loose coupling; the design of the service interface is where the seeds for these principles are sown.
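
As a rough sketch of such a seam (the class and endpoint names are illustrative assumptions on my part), the monolith’s callers can be pointed at an interface whose implementation is later swapped from the in-process legacy code to a call to the newly extracted service:

import org.springframework.web.client.RestTemplate;

// The seam: callers inside the monolith depend only on this interface.
public interface PaymentProcessor {
    boolean charge(String accountId, long amountInPence);
}

// Step 1: the existing monolith logic, wrapped behind the seam.
class LegacyPaymentProcessor implements PaymentProcessor {
    @Override
    public boolean charge(String accountId, long amountInPence) {
        // existing in-process code path
        return true;
    }
}

// Step 2: the extracted service, integrated at the same seam.
class RemotePaymentProcessor implements PaymentProcessor {
    private final RestTemplate restTemplate = new RestTemplate();

    @Override
    public boolean charge(String accountId, long amountInPence) {
        return Boolean.TRUE.equals(restTemplate.postForObject(
                "http://payments-service/charge?account=" + accountId
                        + "&amount=" + amountInPence, null, Boolean.class));
    }
}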

Communication – Synchronous vs asynchronous

In practice, we find that many Enterprises will need to offer both synchronous and asynchronous communication in their services. It is worth noting that there is a considerable drive within the industry to move away from the perceived ‘heavyweight’ WS-* communication standards (e.g. WSDL, SOAP, UDDI), even though many of the challenges addressed by these frameworks still exist, such as service discovery, service description and contract negotiation (as articulated very succinctly by Greg Young in a recent presentation at the muCon microservices conference).

Middleware – What about the traditional enterprise stalwarts?

Although many heavyweight Enterprise Service Buses (ESBs) can perform some very clever routing, they are frequently deployed as a black box. Jim Webber once joked that ESB should stand for “Egregious Spaghetti Box”, because the operations performed within proprietary ESBs are not transparent, and are often complex.

If requirements dictate the use of an ESB (for example, message splitting or policy-based routing), then open source lightweight ESB implementations such as Mule ESB or Fuse ESB should be among the first options you consider.

I usually find that a lightweight MQ platform, such as RabbitMQ or ActiveMQ, is more suitable, because we believe the current trend in SOA communication is towards “dumb pipes and smart endpoints”. In addition to removing potential vendor fees and lock-in, other benefits of using lightweight MQ technologies include easier deployment and management, and simplified testing.
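
As a rough sketch of the ‘dumb pipes and smart endpoints’ style (the exchange, routing key and payload are illustrative assumptions on my part), publishing an event with Spring AMQP and RabbitMQ keeps the broker doing nothing more than routing:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class OrderEventPublisher {

    public static void main(String[] args) {
        // Assumes a RabbitMQ broker running locally with default credentials.
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);

        // The pipe stays dumb: we publish a plain event, and each consuming
        // endpoint decides for itself what the event means.
        rabbitTemplate.convertAndSend("orders.exchange", "order.created",
                "{\"orderId\": \"1234\", \"status\": \"CREATED\"}");

        connectionFactory.destroy();
    }
}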

Deploying microservices – How hard can it be?

However you choose to build microservices, it is essential that a continuous integration-style build pipeline be used which includes rigorous automated testing for functional requirements, fault-tolerance, security and performance. The classical SOA approach of manual QA and staged evaluation is arguably no longer appropriate in an economy where ‘speed wins’ and the ability to rapidly innovate and experiment is a competitive advantage (as captured within the Lean Startup movement).

The behaviour of your application can become emergent in a microservice-based platform, and although nothing can replace thorough and pervasive monitoring in your production stack, a build pipeline that exercises (or tortures) your components before they are exposed to your customers would appear to be highly beneficial. As I’ve argued in several conference presentations, a good build pipeline should exercise services in the target deployment environment as early in the pipeline as possible.

Summary – APIs, lightweight comms, and correct deployment

Regardless of whether you subscribe to the microservice hype, it would appear that this style of architecture is gaining traction within practically all software development domains. This article has attempted to provide a primer for understanding key concepts within this growing space, and hopefully reminds readers that many of these problems and solutions have been seen before with classical Enterprise SOA. We would be wise to take care not to reinvent the proverbial ‘service-oriented’ wheel.

Please click here for the complete original article, which provides additional information on microservice implementation options on the JVM platform, and also discusses the requirement for Continuous Delivery. A version of this article was originally published in the DZone 2014 Guide to Enterprise Integration.

References

A full list of references and recommended reading can also be found in the original article and a recent article discussing the business implications of microservices.

I’m currently at JavaOne and have just finished presenting the latest iteration of my “Cloud Developer’s DHARMA” talk, which was great fun. As promised, here are the slides:

 

 

The abstract for this talk is included here (just for the search engine’s benefit 🙂 )

“Building Java applications for the IaaS cloud is easy, right? “Sure, no problem. Just lift and shift,” all the cloud vendors shout in unison. However, the reality of building and deploying cloud applications can often be different. This session introduces lessons learned from the trenches during several years of designing and implementing cloud-based Java applications, which we have codified into our Cloud Developer’s “DHARMA” rules: Documented (just enough); Highly cohesive/loosely coupled (all the way down); Automated from code commit to cloud; Resource-aware; Monitored thoroughly; and Antifragile.”
 

If you have any questions then please do get in touch!

After attending Sam Newman’s microservice talks at Geecon last week I started to think more about what is most likely an essential feature of service-oriented/microservice platforms for monitoring, reporting and diagnostics: correlation ids. Correlation ids allow distributed tracing within complex service-oriented platforms, where a single request into the application can often be dealt with by multiple downstream services. Without the ability to correlate downstream service requests it can be very difficult to understand how requests are being handled within your platform.

I’ve seen the benefit of correlation ids in several recent SOA projects I have worked on, but as Sam mentioned in his talks, it’s often very easy to think this type of tracing won’t be needed when building the initial version of the application, but then very difficult to retrofit into the application when you do realise the benefits (and the need for it!). I’ve not yet found the perfect way to implement correlation ids within a Java/Spring-based application, but after chatting to Sam via email he made several suggestions, which I have now turned into a simple project using Spring Boot to demonstrate how this could be implemented.

Why?

During both of Sam’s Geecon talks he mentioned that in his experience correlation ids were very useful for diagnostic purposes. A correlation id is essentially an id that is generated and associated with a single (typically user-driven) request into the application, and is passed down through the stack and on to dependent services. In SOA or microservice platforms this type of id is very useful, as requests into the application are typically ‘fanned out’ or handled by multiple downstream services, and a correlation id allows all of the downstream requests (from the initial point of request) to be correlated or grouped based on the id. So-called ‘distributed tracing’ can then be performed using the correlation ids by combining all the downstream service logs and matching the required id to see the trace of the request throughout your entire application stack (which is very easy if you are using a centralised logging framework such as Logstash).

The big players in the service-oriented field have been talking about the need for distributed tracing and correlating requests for quite some time, and as such Twitter have created their open source Zipkin framework (which often plugs into their RPC framework Finagle), and Netflix has open-sourced their Karyon web/microservice framework, both of which provide distributed tracing [edit 27/07/14: It would appear that although distributed tracing was mentioned as an upcoming feature in the Karyon blog post, it never made it in to the public Github repo. Thanks to John Eikenberry for pointing this out in the comments below]. There are of course commercial offerings in this area, one such product being AppDynamics, which is very cool, but has a rather hefty price tag.

Creating a proof-of-concept in Spring Boot

As great as Zipkin and Karyon are, they are both relatively invasive, in that you have to build your services on top of the (often opinionated) frameworks. This might be fine for some use cases, but not so much for others, especially when you are building microservices. I’ve been enjoying experimenting with Spring Boot of late, and this framework builds on the well-known and loved (at least by me 🙂 ) Spring framework by providing lots of preconfigured sensible defaults. This allows you to build microservices (especially ones that communicate via RESTful interfaces) very rapidly. The remainder of this blog post explains how I implemented a (hopefully) non-invasive solution for correlation ids.

Goals

  1. Allow a correlation id to be generated for an initial request into the application
  2. Enable the correlation id to be passed to downstream services, using a method that is as non-invasive to the code as possible

Implementation

I have created two projects on GitHub, one containing an implementation where all requests are being handled in a synchronous style (i.e. the traditional Spring approach of handling all request processing on a single thread), and also one for when an asynchronous (non-blocking) style of communication is being used (i.e., using the Servlet 3 asynchronous support combined with Spring’s DeferredResult and Java’s Futures/Callables). The majority of this article describes the asynchronous implementation, as this is more interesting:

The main work in both code bases is undertaken by the CorrelationHeaderFilter, which is a standard Java EE Filter that inspects the HttpServletRequest header for the presence of a correlationId. If one is found then we set a ThreadLocal variable in the RequestCorrelation Class (discussed later). If a correlation id is not found then one is generated and added to the RequestCorrelation Class:


public class CorrelationHeaderFilter implements Filter {

    //…

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
            throws IOException, ServletException {
        final HttpServletRequest httpServletRequest = (HttpServletRequest) servletRequest;
        String currentCorrId = httpServletRequest.getHeader(RequestCorrelation.CORRELATION_ID);

        // Only process the correlation id on the initial request dispatch,
        // not when the Async Dispatcher re-invokes the filter to return results
        if (!currentRequestIsAsyncDispatcher(httpServletRequest)) {
            if (currentCorrId == null) {
                currentCorrId = UUID.randomUUID().toString();
                LOGGER.info("No correlationId found in Header. Generated : " + currentCorrId);
            } else {
                LOGGER.info("Found correlationId in Header : " + currentCorrId);
            }
            RequestCorrelation.setId(currentCorrId);
        }

        filterChain.doFilter(httpServletRequest, servletResponse);
    }

    //…

    private boolean currentRequestIsAsyncDispatcher(HttpServletRequest httpServletRequest) {
        return httpServletRequest.getDispatcherType().equals(DispatcherType.ASYNC);
    }
}

The only thing in this code that may not be instantly obvious is the conditional check currentRequestIsAsyncDispatcher(httpServletRequest), but this is here to guard against the correlation id code being executed again when the Async Dispatcher thread is running to return the results (this is interesting to note, as I initially didn’t expect the Async Dispatcher to trigger the execution of the filter again!).

Here is the RequestCorrelation Class, which contains a simple ThreadLocal<String> static variable to hold the correlation id for the current Thread of execution (set via the CorrelationHeaderFilter above):


public class RequestCorrelation {

    public static final String CORRELATION_ID = "correlationId";

    private static final ThreadLocal<String> id = new ThreadLocal<String>();

    public static String getId() { return id.get(); }

    public static void setId(String correlationId) { id.set(correlationId); }
}

Once the correlation id is stored in the RequestCorrelation Class it can be retrieved and added to downstream service requests (or data store accesses etc.) as required, by calling the static getId() method within RequestCorrelation. It is probably a good idea to encapsulate this behaviour away from your application services, and you can see an example of how to do this in a RestClient Class I have created, which composes Spring’s RestTemplate and handles the setting of the correlation id within the header transparently from the calling Class:


@Component
public class CorrelatingRestClient implements RestClient {

    private static final Logger LOGGER = LoggerFactory.getLogger(CorrelatingRestClient.class);

    private RestTemplate restTemplate = new RestTemplate();

    @Override
    public String getForString(String uri) {
        // Retrieve the correlation id for the current Thread and copy it
        // into the outgoing request header
        String correlationId = RequestCorrelation.getId();
        HttpHeaders httpHeaders = new HttpHeaders();
        httpHeaders.set(RequestCorrelation.CORRELATION_ID, correlationId);

        LOGGER.info("start REST request to {} with correlationId {}", uri, correlationId);

        //TODO: error-handling and fault-tolerance in production
        ResponseEntity<String> response = restTemplate.exchange(uri, HttpMethod.GET,
                new HttpEntity<String>(httpHeaders), String.class);

        LOGGER.info("completed REST request to {} with correlationId {}", uri, correlationId);

        return response.getBody();
    }
}

//… calling Class
public String exampleMethod() {
    RestClient restClient = new CorrelatingRestClient();
    return restClient.getForString(URI_LOCATION); // correlation id handling completely abstracted to the RestClient impl
}

Making this work for asynchronous requests…

The code included above works fine when you are handling all of your requests synchronously, but it is often a good idea in a SOA/microservice platform to handle requests in a non-blocking asynchronous manner. In Spring this can be achieved by using the DeferredResult Class in combination with the Servlet 3 asynchronous support. The problem with using ThreadLocal variables within the asynchronous approach is that the Thread that initially handles the request (and creates the DeferredResult/Future) will not be the Thread doing the actual processing.

Accordingly, a bit of glue code is needed to ensure that the correlation id is propagated across the Threads. This can be achieved by extending Callable with the required functionality (don’t worry if the example Calling Class code doesn’t look intuitive – this adaptation between DeferredResults and Futures is a necessary evil within Spring, and the full code, including the boilerplate ListenableFutureAdapter, is in my GitHub repo):


public class CorrelationCallable<V> implements Callable<V> {

    private String correlationId;
    private Callable<V> callable;

    // Capture the correlation id on the request-handling Thread...
    public CorrelationCallable(Callable<V> targetCallable) {
        correlationId = RequestCorrelation.getId();
        callable = targetCallable;
    }

    // ...and re-establish it on the worker Thread before processing begins
    @Override
    public V call() throws Exception {
        RequestCorrelation.setId(correlationId);
        return callable.call();
    }
}

//… Calling Class
@RequestMapping("externalNews")
public DeferredResult<String> externalNews() {
    return new ListenableFutureAdapter<>(service.submit(new CorrelationCallable<>(externalNewsService::getNews)));
}

And there we have it – the propagation of the correlation id regardless of the synchronous/asynchronous nature of the processing!

You can clone the GitHub repo containing my asynchronous example, and execute the application by running mvn spring-boot:run at the command line. If you access http://localhost:8080/externalNews in your browser (or via curl) you will see something similar to the following in your Spring Boot console, which clearly demonstrates a correlation id being generated on the initial request, and then being propagated through to a simulated external call (have a look in the ExternalNewsServiceRest Class to see how this has been implemented):


[nio-8080-exec-1] u.c.t.e.c.w.f.CorrelationHeaderFilter : No correlationId found in Header. Generated : d205991b-c613-4acd-97b8-97112b2b2ad0
[pool-1-thread-1] u.c.t.e.c.w.c.CorrelatingRestClient : start REST request to http://localhost:8080/news with correlationId d205991b-c613-4acd-97b8-97112b2b2ad0
[nio-8080-exec-2] u.c.t.e.c.w.f.CorrelationHeaderFilter : Found correlationId in Header : d205991b-c613-4acd-97b8-97112b2b2ad0
[pool-1-thread-1] u.c.t.e.c.w.c.CorrelatingRestClient : completed REST request to http://localhost:8080/news with correlationId d205991b-c613-4acd-97b8-97112b2b2ad0


Conclusion

I’m quite happy with this simple prototype, and it does meet the two goals I listed above. Future work will include writing some tests for this code (shame on me for not TDDing!), and also extending this functionality to a more realistic example.

I would like to say a massive thanks to Sam, not only for sharing his knowledge at the great talks at Geecon, but also for taking time to respond to my emails. If you’re interested in microservices and related work I can highly recommend Sam’s Microservice book which is available in Early Access at O’Reilly. I’ve enjoyed reading the currently available chapters, and having implemented quite a few SOA projects recently I can relate to a lot of the good advice contained within. I’ll be following the development of this book with keen interest!

If you have any comments or thoughts then please do share them via the comments below, or feel free to get in touch via the usual mechanisms!

References

I used Tomasz Nurkiewicz’s excellent blog several times for learning how best to wire up all of the DeferredResult/Future code in Spring:

http://www.nurkiewicz.com/2013/03/deferredresult-asynchronous-processing.html

 

So, I’m currently travelling back from my first Geecon conference in Krakow, Poland, and I must say it was an awesome experience! Firstly, I have to say a massive congratulations and thanks to all of the organisers and volunteers, and especially to Andrzej (@ags313), Konrad (@ktosopl), Adrian (@adrno) and Adam (@maneo). These guys and girls work tirelessly throughout the year in order to run Geecon, and when I heard that they do everything themselves (without paid-for project managers etc.) I was even more impressed!

I learned so much at the conference that I’m going to try and do a couple of blog posts to brain-dump my thoughts (although I know from past experience that this may not happen, and so I’ll focus on the one post at the moment 🙂 ). I’ll start with a review of the conference itself, move on to key memes (that I saw), and then briefly outline the sessions and social activities that I attended:

Conference Overview

  • The organisation of the conference is superb, right from the location (an out-of-town Cinema), the sessions, the speakers, the attendees, the food, to the amazingly helpful volunteers. My only small minus (and it’s really a symptom of the great success of the conference) was that the corridors can get super crowded during break times and lunch. I was chatting to Andrzej about this, and he mentioned that so many people wanted to come to Geecon that he had to stop accepting registrations after the initial 1200+…wow!
  • Krakow is an awesome city for the conference. It has some amazing sight-seeing and historical opportunities (Old Town, Castle, Salt mines, Auschwitz). Everyone I interacted with in the city was very friendly, and the vast majority also spoke superb English (which again puts the UK’s language skills to shame!)
  • Geecon is not just a Java conference, and this makes it all the better. It may be primarily focused on Java and the JVM, but you can easily pick sessions to avoid this (if you really wanted to?), and I managed to get a nice blend of different languages over the three days
  • Geecon is at the cutting-edge of thought-leadership within software development. There aren’t many conferences that can claim a perfect mix of sessions on programming fundamentals (often ignored), DDD, crafted design and architecture, Agile good practices, Java 8, JEE 7, Spring boot, Groovy, Scala, JavaScript modularity, microservices, DevOps, Open Source Software, low latency performance and more (except perhaps DevoxxUK, but I could be biased 😉 )
  • The social events and after parties are amazing – you’ll probably learn as much at the parties as you will in the main sessions 🙂
  • Get plenty of sleep before the event, and also expect your brain to hurt after the three days of intensive knowledge injection. You know the part of The Matrix films where Neo ‘downloads’ knowledge on Kung-fu while plugged into the chair? Well, Geecon is pretty much the beta version of this (fortunately without the wires plugging into your brain).

Key Memes

  • Never stop learning, and make sure you keep revising the fundamentals as well as innovating.
  • Jurgen Appelo’s awesome keynote set the tone for the first part of the above point perfectly, and I would recommend that all developers watch this talk when it is released on video by the Geecon team. The key takeaways for me were: have a goal, and work relentlessly towards it (evaluating progress all the way, and adjusting where necessary); daily habits and discipline are key; be open to other people’s ideas; read more; read more; read more… 😉
  • The ever-inspiring Kevlin Henney also did a superb job at both of his talks on Thursday. The first was a call to action for avoiding the common mistakes that many of us make when writing code, and also a reminder that style and layout matter. This can be summed up perfectly by one of his quotes: “Style and layout matter in programming for the same reason they matter in writing”. Amen to that… The second of Kevlin’s presentations was a homage to the ‘worse is better’ work by Richard Gabriel, and reminded us all of the fundamentals of agile development, and that often less is more when it comes to software development. Key takeaways here were: strive for simplicity, completeness, correctness and consistency.
  • Java 8 really is making a difference to developers. I realise that many of the people presenting at the conference are thought-leaders, and also at the cutting-edge of software design, but I could clearly see the impact of Java 8’s new Lambdas and Streams. For a start, it made a lot of the code on slides much more readable (no more crazy anonymous inner Class boilerplate), and also more expressive, which allowed for key memes to be demonstrated in a much more understandable way (for example, in the great talk about Netflix’s RxJava library by Tomasz Kowalczewski)
  • Java EE 7 is also making a clear difference to developers. Many of the JEE sessions I have attended in conferences over the last few years have been about work-arounds for JEE versions <= 6, or the promised benefit of JEE 7, but at Geecon we actually got to see clear improvements from within the developer trenches (for example, Adam Bien’s and Arun Gupta’s sessions)
  • JavaScript really is growing-up. I know that this observation is not particularly new, and the JavaScript language (and tooling) has been developing in leaps and bounds over the past few years, but again at Geecon I saw proper evidence of this. In particular, the session by Paul Bakker and Sander Mak on JavaScript modularity and the proposed enhancements in ECMAScript 6 look awesome.
  • Microservices are at the tipping-point for mainstream adoption (and are in danger of being the ‘next big thing’). Both of Sam Newman’s talks about microservices were superb, and clearly demonstrated that Sam and the guys/girls at Thoughtworks have been doing this stuff in the trenches for quite some time. Sam is clearly ahead of the game in many areas of service design, implementation and delivery, and I was very pleased to hear he is writing a book with O’Reilly (which I’ve now bought in Early Access format here). Key takeaways from Sam’s talks included: software architects should be more like town-planners than building architects, especially when designing with microservices; be flexible with the implementation of microservices, but standardise the stuff ‘in between’ – monitoring, interfaces, deployment, architectural safety; strive for resource (not data or procedure) oriented designs; distributed txns are hard and should be avoided (see CAP theorem 🙂 ); distributed tracing and correlation IDs are very valuable (I’ll second this piece of advice!); get the testing strategy correct (think Mike Cohn’s test pyramid); make things more ‘production-like’ closer to development (Vagrant, Docker and Packer are useful here); consumer-driven contracts are very useful for testing; semantic monitoring is a great technique for testing in production.
  • The need for well-crafted design and architecture is still as important (maybe more) as it ever was. Sandro Mancuso’s excellent ‘Crafted Design’ talk provided clear evidence for this. Building heavily on DDD-based principles, Sandro proposed a new architecture and package design for Java applications that allows more cohesive modelling of the problem domain, and promotes clear separation between the Model Domain and delivery and infrastructure mechanisms. I wouldn’t do the proposal justice if I tried to describe it here, but check out the slides on Slideshare. It’s seriously worth spending time looking at this, as creating the ideal package/module structure that represents the problem domain is somewhat the holy grail of Java developers. It’s also worth checking out Simon Brown’s related work in this field (of which I am a big fan), and although he wasn’t at Geecon you will be able to catch him at Devoxx UK in June. I chatted to Sandro at the conference, and we both agreed that it would be great if he and Simon could get together sometime for a meeting of minds!
  • Several of the emerging languages and tools are becoming a lot more opinionated, which I welcome. It’s great to have an uber-flexible language (Scala, I’m looking at you here), but the flexibility and lack of constraints can confuse novices, and also stifle the creative challenge. Ken Sipe did an amazing job of introducing Google’s (very opinionated) Go language in an hour session, and this has inspired me to look further into this language as perhaps an alternative to some of the Python I write. Ken also did a superb advanced session on the testing framework Spock. I could see some people’s brains melting at some of the content (and mine got quite hot), but some of the power demonstrated looked simply awesome. Both of Ken’s talks demonstrated to me that the developers of the language/tool had clearly surveyed their respective fields and picked what they considered best for inclusion into their offering. This approach could obviously be hit-and-miss, but with Go and Spock I think we have a couple of winners.

That’s all for the moment, but if you ever get the chance to attend Geecon then make sure you do – you’ll learn lots, and have a great time doing it! If you have any comments then please let me know!