Tag Archives: Java

Josh Long, Richard Warburton and I were having an interesting conversation on Twitter earlier today about standardisation, specifically related to the Java Community Process (JCP), which is the mechanism for developing standard technical specifications for Java technology. Josh asked a question that I often get asked: “what does JCP standardisation offer?” (I’m paraphrasing slightly). This is a totally fair question, and I thought it deserved a little more explanation than I could craft on Twitter.

Innovation and Standardisation: Yin and Yang

The key thing to remember about the JCP process is that it is not about innovation. Quite the opposite, in fact. For a standard to be created there must be an initial requirement or problem, significant innovation creating solutions, ideally some competing ideas and implementations, plenty of evaluation and discussion, and ultimately an agreed approach on how to meet the requirement. This process takes time, and it is only at the penultimate point that the JCP can start creating standards. This is the biggest misunderstanding I encounter when running JSR hack days around the world, particularly with junior developers, as they think the JCP is some mystical think tank that cranks out the latest and greatest innovative frameworks (I appreciate calling EJB ‘latest and greatest’ is very ironic 🙂 ).

It’s also worth mentioning at this point that the work of the JCP is now undertaken in the open (I do appreciate that it didn’t use to be, but JSR-348 has made great progress in abolishing the ‘behind closed doors’ work). This openness provides a platform that allows anyone who wants to get involved to contribute opinions and ideas to the process, and if a standard will cause problems (or is evolving in a problematic fashion) then the community can rise up and publicly duke this out with the spec leads (no Duke pun intended!).

Now, on the flip side, there exist organisations like Spring.io/Pivotal that are all about innovation, and are constantly pushing the boundaries of what a language or framework can do. Personally I love this. I have an entrepreneurial background, and I thrive on innovation and playing with the latest tech and bleeding-edge frameworks, as do many of the companies I work with. The Spring framework really does excel here, and this is why I made the transition to coding in Spring back when the framework was at version 1.X and I was really struggling with building J2EE applications. However, as a consultant I appreciate that not all my clients (or the industry in general) think like this, or desire this level of innovation or disruption.

Many companies are inherently risk averse (sometimes with good reason) and they want to ensure any investment in technology, or in training their people in a specific technology, offers a long-term return on investment (ROI). Such organisations also often desire portability of applications/code, and although the practical implementation of this philosophy on the Java platform may not have been perfect in the past, I’ve personally moved several large(ish)-scale Java EE applications across differing application servers with minimal effort. In my mind this is where standardisation can offer enormous benefits, particularly if the standardisation work is undertaken out in the open. On a related note, last year within the London Java Community (LJC) we undertook a community survey of our members, and many Java developers were in favour of standards such as those offered by the JCP (check out the results here: http://londonjavacommunity.wordpress.com/2013/09/16/the-java-community-process-survey/).

Horses for Courses…

I strongly believe that innovation and standardisation are far from mutually exclusive, and in fact are very much mutually beneficial (perhaps to the level where one cannot exist without the other, but this is just my opinion). Without innovation we wouldn’t be embracing the benefits offered by the latest incarnation of Service Oriented Architecture (SOA), currently being labelled as ‘microservices’, led by the likes of Spring Boot, Dropwizard and Ratpack in the Java space. I am very much enjoying working in this space, and the fact that I don’t have to follow any kind of specification results in some very agile, flexible and effective applications.

However, you don’t have to look too far to see the problems that an absence of standardisation can surface. Earlier in the year Facebook announced that it was attempting to create a specification for PHP, as none had existed up until this point, and this made it difficult to decide what the ‘correct’ behaviour of any particular PHP runtime should be. Recently the AngularJS team announced a new version of their framework, and suggested that there will most likely be no clear migration path between the current 1.X and new 2.X versions. This will surely stifle innovation and hamper maintenance of code within companies that have invested significant resources in AngularJS 1.X (not to mention the problem of dealing with thousands of lines of code that are currently running in production). There are a couple of other related examples that spring to mind, but I won’t mention them as I hope readers will follow my intentions. On a related topic, I’m also very interested to see what will happen with the .NET platform now that Microsoft have open-sourced the underlying code under an MIT/Apache 2 licence…

Summary

So in summary, I think there is most definitely a place for both innovation and standardisation, and I believe both are very useful. This is why I choose to publicly evangelise the Spring platform (and write stacks of code in Spring Boot), and at the same time also support the great efforts of the JCP and the OpenJDK, which help to drive the future of a standards-based Java platform.

I would be keen to hear others’ thoughts, and so please feel free to comment below 🙂

Disclaimer: I am a member of the OpenJDK Adoption Group, and also contribute to the excellent work undertaken within the JCP via the London Java Community JCP committee. However, in contrast, 90% of the Java code I write when consulting is currently Spring-based (specifically Spring Boot of late), and I publicly evangelise the superb innovation undertaken by the Spring framework team.

I’m currently at JavaOne and have just finished presenting the latest iteration of my “Cloud Developer’s DHARMA” talk, which was great fun. As promised, here are the slides:

 

 

The abstract for this talk is included here (just for the search engine’s benefit 🙂 )

“Building Java applications for the IaaS cloud is easy, right? “Sure, no problem. Just lift and shift,” all the cloud vendors shout in unison. However, the reality of building and deploying cloud applications can often be different. This session introduces lessons learned from the trenches during several years of designing and implementing cloud-based Java applications, which we have codified into our Cloud Developer’s “DHARMA” rules: Documented (just enough); Highly cohesive/loosely coupled (all the way down); Automated from code commit to cloud; Resource-aware; Monitored thoroughly; and Antifragile. “
 

If you have any questions then please do get in touch!

I was once again privileged to present at the London Java Community (LJC) “Straight from the LJC” series of talks at Skillsmatter last night, and my chosen topic this time was “Professional Software Development: Thinking Fast and Slow”. In a nutshell, I attempted to relate the great content about human decision-making from the book “Thinking Fast and Slow” to the decisions we make in our day-to-day work as software developers. Enjoy!

 

 

Here is the talk abstract:

“In the international bestseller ‘Thinking, Fast and Slow’, Daniel Kahneman explains how we as human beings think and reason, and perhaps surprisingly how our thought processes are often fundamentally flawed and biased. This talk briefly explores the ideas presented in the book in the context of professional software development. As software developers we all like to think that we are highly logical, and make only rational choices, but after reading the book I’m not so sure. Here I’ll share my thinking on thinking.”

Thanks to the LJC, Recworks and OpenCredo for supporting my speaking efforts! As usual, if you have any questions, then please do get in touch!

Ok, so the review of this excellent conference is probably about a month overdue, but time just seems to have flown by! Anyway, here goes…

Life on the Program Committee

I was fortunate enough this year to be invited to sit on the program committee and review submissions (thanks Mark et al!). I’ll create a separate post about this experience later, but I wanted to start this review by mentioning that the number of submissions for Devoxx UK was very high, and the quality was outstanding.

According to my fellow Program Committee members, the job of picking which talks to include in the conference gets more difficult each year! As many committee members mentioned, we could easily have picked enough talks to fill an entire week of conference, but Devoxx UK was only running for two days!

The Conference Itself

The conference began in style with a live beatbox session from the UK Champion ‘Reeps One’. Check out some of his stuff on YouTube, but this guy needs to be seen live to get the full experience!

After a great introduction to the conference from the steering committee (Mark Hazell, Ellie May, Stephan Janssen, Dan Hardiker, James McGivern), Dan North opened up with the keynote “Deliberate advice from an accidental career”. I enjoy most of Dan’s stuff, due largely to his story-telling skills and the fact that I can relate to most of the experiences he talks about, and this presentation was no exception. The key takeaways can be summarised in the photo below, but if you get the chance to watch the complete presentation then I strongly recommend you take the opportunity 🙂
(photo: key takeaways from Dan North’s keynote)

Notes on good sessions

The first official session of the day for me was Andrew Harmel-Law’s “The 5 whys: Counter Intuitive Solutions to (all too common) Problems”, which was a great discussion of key problems in relation to the DevOps movement. The central theme of the talk related to a book on Japanese farming, and how the concepts could be mapped onto continuous delivery and DevOps, and as a fan of Eastern philosophy I enjoyed this metaphor very much 🙂

During the next two sessions I was dropping in and out of talks, and also catching up with other presenters and friends who I hadn’t seen in a while. Talks that looked interesting were Graham Allan’s “How switching to Scala made me less productive, and why that matters less than I thought” and Dick Wall’s “What have Monads ever done for us?”

The final conference session of the day that I attended was John Smart’s “It’s testing Jim, but not as we know it”. This session was a personal highlight of the conference for me, and the discussion of BDD really hit home in relation to the projects I have recently been working on. John also has a MEAP book, ‘BDD in Action’, which I can highly recommend.

In the evening Mani Sarkar, Richard Warburton, Martijn Verburg and I presented an OpenJDK BOF session, which was well received, and generated plenty of great questions. If you want more information (or want to get hacking yourself) then please pop along to Mani’s excellent ‘Getting Started with OpenJDK’ document.

Roll on to Friday…

Friday began with Simon Brown’s “Software Architecture and the Balance with Agility”, which was another personal conference highlight. As part of my program committee role I had invited Simon to Devoxx UK, as not only am I a big fan of his work, but I also think the topic of architecture is often neglected in developer conferences. Simon didn’t disappoint, and key takeaways for me included: developing software in an agile way doesn’t guarantee that the resulting software architecture is agile; a good architecture enables agility; there could be a sweet spot (architecturally speaking) somewhere between a monolithic architecture and SOA (but no-one wants to hear this?); sketches can be maps – the key thing is creating a shared vision. Simon has a great website, www.codingthearchitecture.com, and also a book entitled “Software Architecture for Developers”, both of which I can recommend. I am definitely a strong believer that all developers should be aware of architecture (and also that all architects should code!).

Next was an interesting talk, ‘Groovy DevOps and the Cloud’, which touched on many of my favourite DevOps topics, but with a Groovy theme. I couldn’t help noticing that a lot of the tools developed already existed in a non-Groovy form, and so I wondered if some of this was vulnerable to the ‘not invented here’ argument; however, the presenter did seem aware of this, and gave some great recommendations for resources and books at the end of the session.

In the third session of the day it was my turn to present, alongside Steve Poole, with “Moving to a DevOps mode: Easy, hard or just plain terrifying”. Here is an action shot of Steve and me on stage…

(photos: Steve and me presenting on stage)

It’s safe to say that both Steve and I had a great time presenting, and the feedback and questions after the session were excellent. You can watch the recording of the session here (subscription required, but free to all who attended Devoxx UK). You can also find the slides in another blog post here.

After our talk both Steve and I stayed in the same room to listen to the panel session “What does the Oracle/Google shenanigans mean to the Java Developer”. This talk was not only very funny (almost guaranteed when you create a panel containing Ted Neward with a lawyer…), but also quite pertinent, what with all of the interesting IP/patent issues constantly flying around.

The next session of the day “Modern web architecture” by Ted Neward was great. Ted has his own brand of ‘edu-tainment’, and this session lived-up to his usual billing. I’m not going to spoil his punchline, but I’ll simply say that watching the recording will be well worth your time!

I spent the remainder of the day in the Hackergarten, chatting to many people about OpenJDK and various JSRs (and also answering a few DevOps questions). I managed to catch up with old friends there, and Anatole Tresch and I planned our upcoming JSR-354 hackday, which we ran at OpenCredo.

The conference was drawn to a close with an ensemble keynote, which was awesome. Martijn did a great job reviewing the conference content, Dick Wall shone an interesting light onto the technological present (including lots of details about his wi-fi-enabled dog Dotty!), and Arun and company covered (a hopefully very exciting) future, especially relating to the work everyone is doing with Devoxx4Kids.

The Saturday after-party

A few of us managed to continue the party into Saturday, with John Stevenson from Salesforce offering to run a Devoxx UK themed ‘Hack the Tower’ event. I had great fun here as well, and managed to get a few more people hacking on OpenJDK 🙂

In Conclusion

The conference was awesome, and built upon last year’s success. Devoxx UK is still one of my favourite conferences in terms of atmosphere, and the ‘by developers, for developers’ mantra really does ring true. Everywhere I went I overheard interesting conversations, saw impromptu hacking, and people taking the time to connect with one another. The quality of speakers was also excellent, and the range of topics nicely balanced (although I may be biased here… 🙂 )

I would also like to say a big thanks to all of the organisers, the volunteers, the speakers and the attendees for making the conference so great. Thanks also to all the friends (both old and new) I managed to catch up with, and I’ll see you all at another conference soon (maybe JavaOne?)

On a final note, Devoxx UK has already been confirmed for 2015, so please book space in your diaries for a 3-day extravaganza (yes, 3 days, not 2) between the 17th and 20th of June!


 

 

I’m very excited to be part of Oracle’s JavaOne event again this year, and I can’t wait to head over to San Francisco at the end of September!

This year I’ll be presenting a solo session on “Cloud Developer’s DHARMA: Redefining ‘done’ for Cloud applications” (which is an improved version of the talk I gave at Skillsmatter earlier this year), and three joint Birds-of-a-Feather (BOF) sessions on OpenJDK Adoption, the JCP Process, and “How to make your Java User Group and Java More Awesome”.

You can find details of all of the talks on my JavaOne conference profile page here.

 


I’m speaking at JavaOne – Join me!

 

Last year at JavaOne I was fortunate to be invited to talk to theserverside.com’s Cameron McKenzie about a range of topics, and you can find a link to the recordings below [please note that some of the editing in the videos is a bit wonky, as it appears that I am not always answering the question asked – I can assure you that I was answering appropriately during the actual interview itself 🙂 ]

I look forward to catching up with old friends (and making new ones) at this year’s JavaOne, and so if you are there please come and say hello!

As my other blog post will (soon) reveal, my entire experience of Devoxx UK 2014 was awesome, but in particular I enjoyed presenting “Moving to a DevOps Mode: Easy, Hard or just Plain Terrifying” with Steve Poole.

Steve and I have previously presented together about the OpenJDK at JavaOne, but this was a more ambitious project. Earlier in the year we both attended a series of meetups in London, and started talking about our respective experiences with enabling agility within organisations, and working with such topics as Continuous Integration, Continuous Delivery and ‘DevOps’. Steve has plenty of experience of this with large organisations, and I have experience from working with smaller organisations, and so we figured that a joint talk combining all of our learnings would be a good idea.

Both Steve and I were very happy with the talk, and we received some great feedback and questions at the end of the presentation. We will also be presenting a very similar talk at JAX London this year, and so we will try and address all of the comments here.

 

 

You can also watch the full video recording of the presentation at Parleys, but in order to view the content you will need to have been an attendee of the conference or pay a subscription:

https://parleys.com/play/53b15b01e4b0543940d9e5ec/chapter1/about

As usual, if you have any comments then please do get in touch!

I once again had the pleasure of talking at Skillsmatter in early May, and this time I presented “Cloud Developer’s DHARMA: Redefining ‘done’ for Cloud applications”. I wrote about this on my company’s blog the night after I delivered the talk, but I’ve just realised I didn’t post anything here – therefore here we are. The synopsis for the talk can be found below.

As is always the case with giving a presentation at Skillsmatter, I very much enjoyed the experience, and there were some great questions and chat in the pub afterwards. Many thanks to all who attended – your comments and feedback are very much appreciated!

Skillsmatter have very kindly recorded the session, and you can watch my full talk here. You can also find a link to the slides on slideshare below.

Cloud Dharma Talk at Skillsmatter - Daniel Bryant

 

Talk synopsis:

Building applications for the IaaS Cloud is easy, right? “Sure, no problem – just lift and shift!” all the Cloud vendors shout in unison. However, the reality of building and deploying Cloud applications can often be different. This talk will introduce lessons learnt from the trenches during two years of designing and implementing cloud-based Java applications, which we have codified into our Cloud developer’s ‘DHARMA’ rules: Documented (just enough); Highly cohesive/loosely coupled (all the way down); Automated from code commit to cloud; Resource aware; Monitored thoroughly; and Antifragile.

We will look at these lessons from both a theoretical and practical perspective using a real-world case study from Instant Access Technologies (IAT) Ltd. IAT recently evolved their epoints.com (http://epoints.com/) customer loyalty platform from a monolithic Java application deployed into a data centre on a ‘big bang’ schedule, to a platform of loosely-coupled JVM-based components, all being continuously deployed into the AWS IaaS Cloud.

If you have any questions then please do get in touch via the usual methods!

After attending Sam Newman’s microservice talks at Geecon last week I started to think more about what is most likely an essential feature of service-oriented/microservice platforms for monitoring, reporting and diagnostics: correlation ids. Correlation ids allow distributed tracing within complex service-oriented platforms, where a single request into the application can often be dealt with by multiple downstream services. Without the ability to correlate downstream service requests it can be very difficult to understand how requests are being handled within your platform.

I’ve seen the benefit of correlation ids in several recent SOA projects I have worked on, but as Sam mentioned in his talks, it’s often very easy to think this type of tracing won’t be needed when building the initial version of the application, but then very difficult to retrofit into the application when you do realise the benefits (and the need for it!). I’ve not yet found the perfect way to implement correlation ids within a Java/Spring-based application, but after chatting to Sam via email he made several suggestions, which I have now turned into a simple project using Spring Boot to demonstrate how this could be implemented.

Why?

During both of Sam’s Geecon talks he mentioned that in his experience correlation ids were very useful for diagnostic purposes. A correlation id is essentially an id that is generated and associated with a single (typically user-driven) request into the application, and that is passed down through the stack and on to dependent services. In SOA or microservice platforms this type of id is very useful, as requests into the application are typically ‘fanned out’ or handled by multiple downstream services, and a correlation id allows all of the downstream requests (from the initial point of request) to be correlated or grouped based on the id. So-called ‘distributed tracing’ can then be performed using the correlation ids by combining all the downstream service logs and matching the required id to see the trace of the request throughout your entire application stack (which is very easy if you are using a centralised logging framework such as logstash).
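As an aside, one common way to get the correlation id stamped onto every log line written while handling a request (rather than appending it to each message by hand, as the sample code later in this post does) is SLF4J’s Mapped Diagnostic Context (MDC). The sketch below is purely illustrative and is not part of my sample project; the ‘correlationId’ key name is an assumption.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcCorrelationExample {

    private static final Logger LOGGER = LoggerFactory.getLogger(MdcCorrelationExample.class);

    public void handle(String correlationId) {
        // Store the id in the MDC for the current thread; a log pattern containing
        // %X{correlationId} will then include it in every line logged by this thread.
        MDC.put("correlationId", correlationId);
        try {
            LOGGER.info("processing request");
        } finally {
            MDC.remove("correlationId"); // always clean up, as server threads are pooled
        }
    }
}

A centralised logging framework such as logstash can then group entries from different services on this field without any custom parsing of the message text.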

The big players in the service-oriented field have been talking about the need for distributed tracing and correlating requests for quite some time, and as such Twitter have created their open-source Zipkin framework (which often plugs into their RPC framework Finagle), and Netflix has open-sourced their Karyon web/microservice framework, both of which provide distributed tracing [edit 27/07/14: It would appear that although distributed tracing was mentioned as an upcoming feature in the Karyon blog post, it never made it into the public GitHub repo. Thanks to John Eikenberry for pointing this out in the comments below]. There are of course commercial offerings in this area, one such product being AppDynamics, which is very cool, but has a rather hefty price tag.

Creating a proof-of-concept in Spring Boot

As great as Zipkin and Karyon are, they are both relatively invasive, in that you have to build your services on top of the (often opinionated) frameworks. This might be fine for some use cases, but not so much for others, especially when you are building microservices. I’ve been enjoying experimenting with Spring Boot of late, and this framework builds on the well-known and much-loved (at least by me 🙂 ) Spring framework by providing lots of preconfigured sensible defaults. This allows you to build microservices (especially ones that communicate via RESTful interfaces) very rapidly. The remainder of this blog post explains how I implemented a (hopefully) non-invasive way of adding correlation ids.
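To give a feel for just how little code is involved, here is a minimal, purely illustrative Spring Boot RESTful service. It is not taken from my sample project, and the class and endpoint names are hypothetical.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// A complete, runnable service: Spring Boot's auto-configuration supplies the
// embedded servlet container, HTTP message conversion and other sensible defaults.
@Configuration
@EnableAutoConfiguration
@ComponentScan
@RestController
public class ExampleApplication {

    @RequestMapping("/greeting")
    public String greeting() {
        return "hello from a Spring Boot microservice";
    }

    public static void main(String[] args) {
        SpringApplication.run(ExampleApplication.class, args);
    }
}

Running this with mvn spring-boot:run starts an embedded servlet container on port 8080 by default (assuming the standard web starter is on the classpath), and this is exactly the kind of service the correlation id code below slots into.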

Goals

  1. Allow a correlation id to be generated for an initial request into the application
  2. Enable the correlation id to be passed to downstream services, using a method that is as non-invasive to the code as possible

Implementation

I have created two projects on GitHub, one containing an implementation where all requests are being handled in a synchronous style (i.e. the traditional Spring approach of handling all request processing on a single thread), and also one for when an asynchronous (non-blocking) style of communication is being used (i.e., using the Servlet 3 asynchronous support combined with Spring’s DeferredResult and Java’s Futures/Callables). The majority of this article describes the asynchronous implementation, as this is more interesting:

The main work in both code bases is undertaken by the CorrelationHeaderFilter, which is a standard Java EE Filter that inspects the HttpServletRequest header for the presence of a correlationId. If one is found then we set a ThreadLocal variable in the RequestCorrelation Class (discussed later). If a correlation id is not found then one is generated and added to the RequestCorrelation Class:


public class CorrelationHeaderFilter implements Filter {

    //…

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
            throws IOException, ServletException {
        final HttpServletRequest httpServletRequest = (HttpServletRequest) servletRequest;
        String currentCorrId = httpServletRequest.getHeader(RequestCorrelation.CORRELATION_ID);

        if (!currentRequestIsAsyncDispatcher(httpServletRequest)) {
            if (currentCorrId == null) {
                currentCorrId = UUID.randomUUID().toString();
                LOGGER.info("No correlationId found in Header. Generated : " + currentCorrId);
            } else {
                LOGGER.info("Found correlationId in Header : " + currentCorrId);
            }

            RequestCorrelation.setId(currentCorrId);
        }

        filterChain.doFilter(httpServletRequest, servletResponse);
    }

    //…

    private boolean currentRequestIsAsyncDispatcher(HttpServletRequest httpServletRequest) {
        return httpServletRequest.getDispatcherType().equals(DispatcherType.ASYNC);
    }
}

The only thing in this code that may not instantly be obvious is the conditional check currentRequestIsAsyncDispatcher(httpServletRequest), but this is here to guard against the correlation id code being executed when the async dispatcher thread is running to return the results (this is interesting to note, as I initially didn’t expect the async dispatch to trigger the execution of the filter again).
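For completeness, here is one way the filter could be wired into a Spring Boot application. This is only a sketch of my own (the sample projects on GitHub may register the filter differently, and the CorrelationFilterConfig class name is hypothetical); Spring Boot automatically registers any Filter bean it finds in the application context with the embedded servlet container.

import javax.servlet.Filter;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CorrelationFilterConfig {

    // Exposing the filter as a bean is enough for Spring Boot to register it
    // against all request paths; no web.xml entry is required.
    @Bean
    public Filter correlationHeaderFilter() {
        return new CorrelationHeaderFilter();
    }
}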

Here is the RequestCorrelation Class, which contains a simple ThreadLocal<String> static variable to hold the correlation id for the current Thread of execution (set via the CorrelationHeaderFilter above)


public class RequestCorrelation {

    public static final String CORRELATION_ID = "correlationId";

    private static final ThreadLocal<String> id = new ThreadLocal<String>();

    public static String getId() { return id.get(); }

    public static void setId(String correlationId) { id.set(correlationId); }
}

Once the correlation id is stored in the RequestCorrelation Class it can be retrieved and added to downstream service requests (or data store accesses etc.) as required by calling the static getId() method within RequestCorrelation. It is probably a good idea to encapsulate this behaviour away from your application services, and you can see an example of how to do this in a RestClient Class I have created, which composes Spring’s RestTemplate and handles the setting of the correlation id within the header transparently for the calling Class.


@Component
public class CorrelatingRestClient implements RestClient {

    private RestTemplate restTemplate = new RestTemplate();

    @Override
    public String getForString(String uri) {
        String correlationId = RequestCorrelation.getId();
        HttpHeaders httpHeaders = new HttpHeaders();
        httpHeaders.set(RequestCorrelation.CORRELATION_ID, correlationId);

        LOGGER.info("start REST request to {} with correlationId {}", uri, correlationId);

        //TODO: error-handling and fault-tolerance in production
        ResponseEntity<String> response = restTemplate.exchange(uri, HttpMethod.GET,
                new HttpEntity<String>(httpHeaders), String.class);

        LOGGER.info("completed REST request to {} with correlationId {}", uri, correlationId);

        return response.getBody();
    }
}

//… calling Class

public String exampleMethod() {
    RestClient restClient = new CorrelatingRestClient();
    return restClient.getForString(URI_LOCATION); //correlation id handling completely abstracted to RestClient impl
}

Making this work for asynchronous requests…

The code included above works fine when you are handling all of your requests synchronously, but it is often a good idea in a SOA/microservice platform to handle requests in a non-blocking asynchronous manner. In Spring this can be achieved by using the DeferredResult Class in combination with the Servlet 3 asynchronous support. The problem with using ThreadLocal variables within the asynchronous approach is that the Thread that initially handles the request (and creates the DeferredResult/Future) will not be the Thread doing the actual processing.

Accordingly, a bit of glue code is needed to ensure that the correlation id is propagated across the Threads. This can be achieved by decorating a Callable with the required functionality (don’t worry if the example Calling Class code doesn’t look intuitive – this adaptation between DeferredResults and Futures is a necessary evil within Spring, and the full code, including the boilerplate ListenableFutureAdapter, is in my GitHub repo):


public class CorrelationCallable<V> implements Callable<V> {

    private String correlationId;
    private Callable<V> callable;

    public CorrelationCallable(Callable<V> targetCallable) {
        correlationId = RequestCorrelation.getId();
        callable = targetCallable;
    }

    @Override
    public V call() throws Exception {
        RequestCorrelation.setId(correlationId);
        return callable.call();
    }
}

//… Calling Class

@RequestMapping("externalNews")
public DeferredResult<String> externalNews() {
    return new ListenableFutureAdapter<>(service.submit(new CorrelationCallable<>(externalNewsService::getNews)));
}

And there we have it – the propagation of a correlation id regardless of the synchronous/asynchronous nature of the processing!

You can clone the GitHub repo containing my asynchronous example, and execute the application by running mvn spring-boot:run at the command line. If you access http://localhost:8080/externalNews in your browser (or via curl) you will see something similar to the following in your Spring Boot console, which clearly demonstrates a correlation id being generated on the initial request, and then being propagated through to a simulated external call (have a look in the ExternalNewsServiceRest Class to see how this has been implemented):


[nio-8080-exec-1] u.c.t.e.c.w.f.CorrelationHeaderFilter : No correlationId found in Header. Generated : d205991b-c613-4acd-97b8-97112b2b2ad0
[pool-1-thread-1] u.c.t.e.c.w.c.CorrelatingRestClient : start REST request to http://localhost:8080/news with correlationId d205991b-c613-4acd-97b8-97112b2b2ad0
[nio-8080-exec-2] u.c.t.e.c.w.f.CorrelationHeaderFilter : Found correlationId in Header : d205991b-c613-4acd-97b8-97112b2b2ad0
[pool-1-thread-1] u.c.t.e.c.w.c.CorrelatingRestClient : completed REST request to http://localhost:8080/news with correlationId d205991b-c613-4acd-97b8-97112b2b2ad0


Conclusion

I’m quite happy with this simple prototype, and it does meet the two goals I listed above. Future work will include writing some tests for this code (shame on me for not TDDing!), and also extending this functionality to a more realistic example.

I would like to say a massive thanks to Sam, not only for sharing his knowledge at the great talks at Geecon, but also for taking the time to respond to my emails. If you’re interested in microservices and related work, I can highly recommend Sam’s microservices book, which is available in Early Access at O’Reilly. I’ve enjoyed reading the currently available chapters, and having implemented quite a few SOA projects recently I can relate to a lot of the good advice contained within. I’ll be following the development of this book with keen interest!

If you have any comments or thoughts then please do share them via the comments below, or feel free to get in touch via the usual mechanisms!

References

I used Tomasz Nurkiewicz’s excellent blog several times for learning how best to wire up all of the DeferredResult/Future code in Spring:

http://www.nurkiewicz.com/2013/03/deferredresult-asynchronous-processing.html

 

So, I’m currently travelling back from my first Geecon conference in Krakow, Poland, and I must say it was an awesome experience! Firstly, I have to say a massive congratulations and thanks to all of the organisers and volunteers, and especially to Andrzej (@ags313), Konrad (@ktosopl), Adrian (@adrno) and Adam (@maneo). These guys and girls work tirelessly throughout the year in order to run Geecon, and when I heard that they do everything themselves (without paid-for project managers etc.) I was even more impressed!

I learned so much at the conference that I’m going to try and do a couple of blog posts to brain-dump my thoughts (although I know from past experience that this may not happen, and so I’ll focus on the one post for the moment 🙂 ). I’ll start with a review of the conference itself, move on to key memes (that I saw), and then briefly outline the sessions and social activities that I attended:

Conference Overview

  • The organisation of the conference is superb, right from the location (an out-of-town Cinema), the sessions, the speakers, the attendees, the food, to the amazingly helpful volunteers. My only small minus (and it’s really a symptom of the great success of the conference) was that the corridors can get super crowded during break times and lunch. I was chatting to Andrzej about this, and he mentioned that so many people wanted to come to Geecon that he had to stop accepting registrations after the initial 1200+…wow!
  • Krakow is an awesome city for the conference. It has some amazing sight-seeing and historical opportunities (Old Town, Castle, Salt mines, Auschwitz). Everyone I interacted with in the city was very friendly, and the vast majority also spoke superb English (which again puts the UK’s language skills to shame!)
  • Geecon is not just a Java conference, and this makes it all the better. It may be primarily focused on Java and the JVM, but you can easily pick sessions to avoid this (if you really wanted to?), and I managed to get a nice blend of different languages over the three days
  • Geecon is at the cutting-edge of thought-leadership within software development. There aren’t many conferences that can claim a perfect mix of sessions on programming fundamentals (often ignored), DDD, crafted design and architecture, Agile good practices, Java 8, JEE 7, Spring Boot, Groovy, Scala, JavaScript modularity, microservices, DevOps, Open Source Software, low-latency performance and more (except perhaps DevoxxUK, but I could be biased 😉 )
  • The social events and after parties are amazing – you’ll probably learn as much at the parties as you will in the main sessions 🙂
  • Get plenty of sleep before the event, and also expect your brain to hurt after the three days of intensive knowledge injection. You know the part of The Matrix films where Neo ‘downloads’ knowledge of kung fu while plugged into a chair? Well, Geecon is pretty much the beta version of this (fortunately without the wires plugging into your brain)

Key Memes

  • Never stop learning, and make sure you keep revising the fundamentals as well as innovating.
  • Jurgen Appelo’s awesome keynote set the tone for the first part of the above point perfectly, and I would recommend that all developers watch this talk when it is released on video by the Geecon team. The key takeaways for me were: have a goal and work relentlessly towards it (evaluating progress all the way, and adjusting where necessary); daily habits and discipline are key; be open to other people’s ideas; read more; read more; read more… 😉
  • The ever-inspiring Kevlin Henney also did a superb job in both of his talks on Thursday. The first was a call to action for avoiding the common mistakes that many of us make when writing code, and also a reminder that style and layout matter. This can be summed up perfectly by one of his quotes: “Style and layout matter in programming for the same reason they matter in writing.” Amen to that… The second of Kevlin’s presentations was an homage to the ‘worse is better’ work by Richard Gabriel, and reminded us all of the fundamentals of agile development, and that often less is more when it comes to software development. Key takeaways here were: strive for simplicity, completeness, correctness and consistency
  • Java 8 really is making a difference to developers. I realise that many of the people presenting at the conference are thought-leaders, and also at the cutting-edge of software design, but I could clearly see the impact of Java 8’s new Lambdas and Streams. For a start, it made a lot of the code on slides much more readable (no more crazy anonymous inner Class boilerplate), and also more expressive, which allowed for key memes to be demonstrated in a much more understandable way (for example, in the great talk about Netflix’s RxJava library by Tomasz Kowalczewski)
  • Java EE 7 is also making a clear difference to developers. Many of the JEE sessions I have attended in conferences over the last few years have been about work-arounds for JEE versions <= 6, or the promised benefit of JEE 7, but at Geecon we actually got to see clear improvements from within the developer trenches (for example, Adam Bien’s and Arun Gupta’s sessions)
  • JavaScript really is growing up. I know that this observation is not particularly new, and the JavaScript language (and tooling) has been developing in leaps and bounds over the past few years, but again at Geecon I saw proper evidence of this. In particular, the session by Paul Bakker and Sander Mak on JavaScript modularity and the proposed enhancements in ECMAScript 6 looked awesome.
  • Microservices are at the tipping-point for mainstream adoption (and are in danger of being the ‘next big thing’). Both of Sam Newman’s talks about microservices were superb, and clearly demonstrated that Sam and the guys/girls at Thoughtworks have been doing this stuff in the trenches for quite some time. Sam is clearly ahead of the game in many areas of service design, implementation and delivery, and I was very pleased to hear he was writing a book with O’Reilly (which I’ve now bought in Early Access format here). Key takeaways from Sam’s talks included: software architects should be more like town planners than building architects, especially when designing with microservices; be flexible with the implementation of microservices, but standardise the stuff ‘in between’ – monitoring, interfaces, deployment, architectural safety; strive for resource- (not data- or procedure-) oriented designs; distributed txns are hard and should be avoided (see CAP theorem 🙂 ); distributed tracing and correlation IDs are very valuable (I’ll second this piece of advice!); get the testing strategy correct (think Mike Cohn’s test pyramid); make things more ‘production-like’ closer to development (Vagrant, Docker and Packer are useful here); consumer-driven contracts are very useful for testing; semantic monitoring is a great technique for testing in production.
  • The need for well-crafted design and architecture is still as important as it ever was (maybe more so). Sandro Mancuso’s excellent ‘Crafted Design’ talk provided clear evidence for this. Building heavily on DDD-based principles, Sandro proposed a new architecture and package design for Java applications that allows more cohesive modelling of the problem domain, and promotes clear separation between the domain model and the delivery and infrastructure mechanisms. I wouldn’t do the proposal justice if I tried to describe it here, but check out the slides on Slideshare. It’s seriously worth spending time looking at this, as creating the ideal package/module structure that represents the problem domain is somewhat the holy grail for Java developers. It’s also worth checking out Simon Brown’s related work in this field (of which I am a big fan), and although he wasn’t at Geecon you will be able to catch him at Devoxx UK in June. I chatted to Sandro at the conference, and we both agreed that it would be great if he and Simon could get together sometime for a meeting of minds!
  • Several of the emerging languages and tools are becoming a lot more opinionated, which I welcome. It’s great to have an uber-flexible language (Scala, I’m looking at you here), but the flexibility and lack of constraints can confuse novices, and also stifle the creative challenge. Ken Sipe did an amazing job of introducing Google’s (very opinionated) Go language in an hour-long session, and this has inspired me to look further into this language, perhaps as an alternative to some of the Python I write. Ken also did a superb advanced session on the testing framework Spock. I could see some people’s brains melting at some of the content (and mine got quite hot), but some of the power demonstrated looked simply awesome. Both of Ken’s talks demonstrated to me that the developers of the language/tool had clearly surveyed their respective fields and picked what they considered best for inclusion into their offering. This obviously could be hit-and-miss, but with Go and Spock I think we have a couple of winners.

That’s all for the moment, but if you ever get the chance to attend Geecon then make sure you do – you’ll learn lots, and have a great time doing it! If you have any comments then please let me know!

Just another piece of shameless self-promotion, but you can catch me at the Geecon conference in Krakow, Poland next week (May 14th-16th), where I’ll be joining Heather Vancura, Richard Warburton and Arun Gupta on a panel discussing the adoption of OpenJDK, the Java Community Process (JCP) and Java Specification Requests (JSRs).

You can find out more about the session and the speakers here. I look forward to seeing you at Geecon!