Archive

I’ve received some great feedback after posting my proposal for a microservice maturity/classification model last week: some positive, and some negative.

Some private communications suggested that I may be getting caught up in the marketing hype, and several emails suggested that the microservice architecture really is just classical SOA re-invented. Other emails balanced out these comments by suggesting that microservices present an opportunity to learn and iterate on the mistakes made in the original implementation of SOA, especially now that we are embracing concepts such as domain-driven design, and are applying more consideration to well-defined software architectures.

The only public response I’ve seen so far is by my fellow London-based microservice and Spring framework expert Russ Miles – you can read it on the Simplicity Itself blog. In the interest of full disclosure, I do know Russ personally (and have sunk a few beers in his company), but that’s not going to influence what I think could be a great discussion about my proposal.

Maturity – not all it’s cracked up to be…

The first comment made by Russ is that the approach of creating a maturity model could be dangerous. I think this is totally fair, and it crossed my mind several times when writing the initial post. So much so that I added the word ‘classification’ to the title as an alternative to maturity.

If you look at other maturity models, such as the Richardson model for APIs or the Continuous Delivery model, then there is a clear sense of scale, from negative to positive. As Russ quite rightly points out, there is room for interpretation in my model that smaller is better, and I probably should have taken more care to make it clear that I don’t think this is necessarily the case.

In some cases a monolithic, but well-structured, architecture may be the best solution. Russ has also conjectured in a recent talk at Skills Matter that starting out by building a monolith and then moving to a microservice architecture may be the fastest way to build software. It’s definitely difficult to prove this beyond anecdotal evidence, but my instincts (and experience) tell me this is probably true in certain cases, especially at the current point in time where we have little in the way of modelling or tooling support for building microservices.

In my opinion Russ is quite right to think about how this model could be used negatively, and although my initial intention was to give people a model that they could look at and point to where they think their software is, it could easily be abused. I would be keen to get more feedback on how the model could be shaped or evolved to make my intentions clearer.

Size – it’s what you do with it that counts (but size still matters)

Russ also mentioned that size is a dangerous metric, and I agree. Although lines of code (KLOC) is potentially an arbitrary metric, especially with the variety of languages and frameworks currently available, I still believe that size is important. Not in the “if your application is over 100 lines, then it’s not a microservice” kind of way (which, in fairness, could have been read from my model), but from the perspective of encapsulation, responsibility and comprehension.

After a bit more thought, a measure of architectural/code cohesion is probably a better metric for this concept. I definitely believe the microservice architecture is rooted in the principle of high cohesion (and loose coupling), but it has been argued by the likes of Simon Brown and Bob Martin that this can be achieved in a monolithic codebase without the need for the creation of separate ‘services’. The key here is modularisation or componentisation.
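
To make the modularisation point a little more concrete, here is a minimal sketch (in Java, with entirely hypothetical package and type names) of how a component boundary can be kept honest inside a single codebase: only the interface is public, so other modules have to go through it rather than reaching into the internals.

```java
// File: com/example/billing/BillingService.java
package com.example.billing;

// The public face of the hypothetical 'billing' component: the only type
// that other modules in the monolith should depend upon.
public interface BillingService {
    String raiseInvoice(String accountId, long amountInPence);
}

// File: com/example/billing/DefaultBillingService.java
package com.example.billing;

// Package-private implementation: invisible outside the billing package,
// so any coupling across the boundary is forced through BillingService.
class DefaultBillingService implements BillingService {
    @Override
    public String raiseInvoice(String accountId, long amountInPence) {
        // Illustrative only; a real implementation would persist and return an invoice reference.
        return "INV-" + accountId + "-" + amountInPence;
    }
}
```

If boundaries like this are kept honest, extracting the package into a separate service later becomes more of a packaging decision than a rewrite.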

Accordingly, I do believe that microservices should be ‘small’ in size. For me, this was one of the failings in the original approach with SOA. The lack of skilled modelling and architectural guidance allowed services to morph into ‘all singing and dancing’ applications that offered low cohesion. Hard system boundaries (and potentially expensive coordination and communication) provided by microservices in combination with the notions of ‘bounded contexts’ from domain-driven design (DDD) should make it more obvious to developers when we are straying outside the remit of a component.

Bloated vendor tools that emerged from traditional SOA also allowed developers to ‘cheat’ by circumventing the loose coupling that patterns such as the service bus initially proposed, and monstrosities such as the heavyweight ESB were born. These days we are seeing tooling emerge that encourages reactive systems and event-driven architectures based on small components (such as the very interesting AWS Lambda). With services such as AWS Lambda, a codebase size limit is enforced due to the nature of execution. It will be interesting to see what applications emerge from these frameworks.
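
To illustrate the style of small, single-purpose handler that a service like Lambda encourages, here is a minimal hedged sketch in Java (assuming the aws-lambda-java-core library is on the classpath; the event type and business logic are purely illustrative, not anything AWS prescribes):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A single-purpose, reactive-style function: one event type in, one result out.
// The deployment package size limit naturally discourages this growing into an
// 'all singing and dancing' service.
public class OrderPlacedHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String orderId, Context context) {
        // Hypothetical business logic; a real handler might publish a follow-up event.
        context.getLogger().log("Processing order " + orderId);
        return "ACKNOWLEDGED:" + orderId;
    }
}
```

The interesting part is that the size limit acts as an architectural forcing function rather than a stylistic preference.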

Putting a size limit on a codebase may not be an exact science, but I believe an upper bound can at least be used as a trigger to discuss whether a service is growing beyond its original remit (or whether the architectural quality is degrading). A lot of the agile and architectural techniques I teach clients are not hard-and-fast rules that determine which decisions should be made; they often act as a cue for the team (or organisation) to engage in conversation or a whiteboard session to check that we are still designing high-quality software that models the business correctly.
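
As a purely illustrative sketch of what such a trigger might look like (this is not a tool I’m prescribing, and the source path and threshold are hypothetical), a small Java check could count the lines of source in a service and print a warning as a prompt for conversation rather than a build failure:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class ServiceSizeCheck {

    // An upper bound agreed per team: a cue for discussion, not a hard rule.
    private static final long UPPER_BOUND_LINES = 10_000;

    public static void main(String[] args) throws IOException {
        Path sourceRoot = Paths.get("src/main/java"); // hypothetical service source root
        long totalLines;
        try (Stream<Path> files = Files.walk(sourceRoot)) {
            totalLines = files.filter(p -> p.toString().endsWith(".java"))
                              .mapToLong(ServiceSizeCheck::countLines)
                              .sum();
        }
        if (totalLines > UPPER_BOUND_LINES) {
            // Deliberately a question, not a failed build: time for a whiteboard session?
            System.out.println("Service has " + totalLines
                    + " lines of Java; has it outgrown its original remit?");
        } else {
            System.out.println("Service size OK: " + totalLines + " lines");
        }
    }

    private static long countLines(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.count();
        } catch (IOException | UncheckedIOException e) {
            return 0; // skip unreadable files in this rough sketch
        }
    }
}
```

The output is intentionally phrased as a question; the value is in the conversation it triggers.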

Dogma or dogfooding – you decide…

The final section of Russ’ post contains the strongest argument: that dogma over thinking will only lead us down the wrong path. “A maturity model can be used in place of thinking; I’d like to avoid that if we can”. Yeah, this sucks, but I agree.

The problem is that my academic background drives me towards the sharing of ideas and proposals among my peers, and I enjoy the ensuing discussion. We should always take care to make sure we are being pragmatic in these discussions (and not “solving the world’s problems at the dinner table”), but I’m still a supporter of pushing stuff out to the public for comment.

Russ also makes a great reference to Greg Young’s talk at muCon last year, which should be essential viewing for anyone building microservices. Paraphrasing Greg massively, he suggested that a lot of the concepts behind microservices have already been done before, and that if we aren’t careful then we will re-invent the wheel (albeit more ‘micro’ than before 🙂 ).

Greg’s observations about the negative impact of dogmatic standardisation and overly-opinionated vendor tooling were also especially damning, and I couldn’t help but nod in agreement through a lot of his talk (on a side note, the whole muCon conference was awesome, and I would highly recommend attending the next iteration later this year! Massive kudos to Russ for kickstarting this conference).

I’m definitely going to take care to avoid being dogmatic (or inspiring dogma), but I’m still keen to share my thoughts on things like the maturity/classification model. It might turn out that this approach isn’t useful, but I’ve already been using a slightly less polished version of this model with tech friends over the last few months to help them understand where their software stack currently sits in relation to the ‘unicorn’ organisations such as Netflix and Amazon (what else are techies going to discuss over a few beers! 🙂 ).

Something that I believe could emerge from this type of proposal (which may be more valuable) is some kind of model that shows organisations where their software sits on the big picture scale of innovation, architecture and delivery. Each level of the model should also clearly show the benefits and drawbacks, and provide guidance on why (if at all) organisations should move some of their software to the next level. We would also need to show how organisations should go about doing this, both from a pragmatic organisational and cultural perspective (Conway’s law in action), and also from a technical tooling and process perspective.

I’m currently reading Jez Humble’s new book ‘Lean Enterprise’, and this is providing some superb inspiration for approaching these tasks. I’m also dogfooding some of my new models and processes, and as soon as I have some useful insights I’ll make sure I share them.

In summary…

I really appreciate Russ taking the time to reply to my original post, and I’m definitely going to think more about several of the great points he’s made (and I’m sure I’ll also catch up with him for a beer after an upcoming London Microservices User Group meetup).

I will also take more care to make my intentions clearer, but I’m still keen to share my thoughts and inspire debate. I’m also keen to avoid dogma and focus more on dogfooding, and I’ll use some of Russ’ comments to help refine the model.

As usual, if anyone has any comments or feedback then please do get in touch!

I’ve been chatting to various people for quite some time about how there isn’t an agreed maturity model for the current trend of implementing microservice architectures, and so I thought I would have a go at creating one (quick link to PDF: Microservice Maturity Model Proposal).

I’m in no way suggesting this first draft is complete or definitive, but I hope it may stimulate the conversation around this topic. I’m sure some people will argue that a maturity or classification model isn’t necessary, but I believe it is a fun exercise, and it does enable us to explore (and discuss) what we think are requirements for a microservice implementation.

I’ve proposed six classifications of application architectural styles:

  • Megalith Platform
    • Humongous single codebase resulting in a single application
  • Monolith Platform
    • Large single codebase resulting in a single application
  • Macro SOA Platform
    • Classical SOA applications, and platforms consisting of loosely-coupled large services (potentially a series of interconnected monoliths)
  • Meso Application Platform
    • ‘Meso’ or middle-sized services interconnected to form a single application or platform. Essentially a monolith and microservice hybrid
  • Microservice Platform
    • ‘Cloud native’ loosely-coupled small services focused around DDD-inspired ‘bounded contexts’
  • Nanoservice Platform
    • Extremely small single-purpose (primarily reactive) services

For each classification I’ve then attempted to write about things such as motivations, challenges, architecture, code modularisation, state data stores, deployment, associated infrastructure, tooling and delivery models.

The full proposal can be found in the following PDF: ‘Microservice Maturity Model Proposal’ – Daniel Bryant (@danielbryantuk).

Please do let me know what you think – I’m keen to see whether this model could be useful, and also explore how it could be developed.

Josh Long, Richard Warburton and I were having an interesting conversation on Twitter about standardisation earlier today, specifically related to the Java Community Process (JCP), which is the mechanism for developing standard technical specifications for Java technology. Josh asked a question that I often get asked: “what does JCP standardisation offer?” (I’m paraphrasing here slightly). This is a totally fair question, and I thought it deserved a little more explanation than I could craft on Twitter.

Innovation and Standardisation; Yin and Yang

The key thing to remember about the JCP process is that it is not about innovation. Quite the opposite, in fact. For a standard to be created there must be an initial requirement or problem, significant innovation creating solutions, ideally some competing ideas and implementations, plenty of evaluation and discussion, and ultimately an agreed approach on how to meet the requirement. This process takes time, and it is only at the penultimate point that the JCP can start creating standards. This is the biggest misunderstanding I encounter when running JSR hack days around the world, particularly with junior developers, as they think the JCP is some mystical think tank that cranks out the latest and greatest innovative frameworks (I appreciate calling EJB ‘latest and greatest’ is very ironic 🙂 ).

It’s also worth mentioning at this point that the work of the JCP is now undertaken in the open (I do appreciate that this wasn’t always the case, but JSR-348 has made great progress in abolishing the ‘behind closed doors’ work). This openness provides a platform that allows anyone who wants to get involved to contribute opinions and ideas to the process, and if a standard will cause problems (or is evolving in a problematic fashion) then the community can rise up and publicly duke it out with the spec leads (no Duke pun intended!).

Now, on the flip side to this, there exist organisations like Spring.io/Pivotal who are all about innovation, and are constantly pushing the boundaries of what a language or framework can do. Personally I love this. I have an entrepreneurial background, and I thrive on innovation and playing with the latest tech and bleeding-edge frameworks, as do many of the companies I work with. The Spring framework really does excel here, and this is why I made the transition to coding in Spring back when the framework was at version 1.X and I was really struggling with building J2EE applications. However, as a consultant I appreciate that not all of my clients (or the industry in general) think like this, or desire this level of innovation or disruption.

Many companies are inherently risk averse (sometimes with good reason) and want to ensure that any investment in technology, or in training their people in a specific technology, offers a long-term return on investment (ROI). Such organisations also often desire portability of applications/code, and although the practical implementation of this philosophy on the Java platform may not have been perfect in the past, I’ve personally moved several large(ish)-scale Java EE applications across differing application servers with minimal effort. In my mind this is where standardisation can offer enormous benefits, particularly if the standardisation work is undertaken out in the open. On a related note, last year within the London Java Community (LJC) we undertook a survey of our members, and many Java developers were in favour of standards such as those offered by the JCP (check out the results here: http://londonjavacommunity.wordpress.com/2013/09/16/the-java-community-process-survey/).

Horses for Courses…

I strongly believe that innovation and standardisation are far from mutually exclusive, and in fact are very much mutually beneficial (perhaps to the level where one cannot exist without the other, but this is just my opinion). Without innovation we wouldn’t be embracing the benefits offered by the latest incarnation of Service Oriented Architecture (SOA), currently being labelled as ‘microservices’, led by the likes of Spring Boot, Dropwizard and Ratpack in the Java space. I am very much enjoying working in this space, and the fact that I don’t have to follow any kind of specification results in some very agile, flexible and effective applications.
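
To illustrate quite how little ceremony frameworks like Spring Boot demand (and with the caveat that this is just a minimal sketch with hypothetical names, assuming the standard spring-boot-starter-web dependency is on the classpath), an entire runnable service can look like this:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// A complete, self-contained HTTP service: auto-configuration and an embedded
// servlet container replace the specification-mandated deployment ceremony.
@SpringBootApplication
@RestController
public class GreetingApplication {

    @RequestMapping("/greeting")
    public String greeting() {
        return "Hello from a small, self-contained service";
    }

    public static void main(String[] args) {
        SpringApplication.run(GreetingApplication.class, args);
    }
}
```

Running the main method starts the embedded container; there is no separate application server and no deployment descriptor to maintain.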

However, you don’t have to look too far to see the problems that an absence of standardisation can surface. Earlier in the year Facebook announced that it was attempting to create a specification for PHP, as none had existed up until that point, which had made it difficult to decide what the ‘correct’ behaviour of any particular PHP runtime should be. Recently the AngularJS team announced a new version of their framework, and suggested that there will most likely be no clear migration path between the current 1.X and new 2.X versions. This will surely stifle innovation and hamper the maintenance of code within companies who have invested significant resources in AngularJS 1.X (not to mention the problem of dealing with the thousands of lines of code that are currently running in production). There are a couple of other related examples that spring to mind, but I won’t mention them, as I hope readers will follow my intentions. On a related topic, I’m also very interested to see what will happen with the .NET platform now that Microsoft has open sourced the underlying code with an MIT/Apache2 licence…

Summary

So in summary, I think there is most definitely a place for both innovation and standardisation, and I believe both are very useful. This is why I choose to publicly evangelise the Spring platform (and write stacks of code in Spring Boot), and at the same time also support the great efforts of the JCP and the OpenJDK, which help to drive the future of a standards-based Java platform.

I would be keen to hear others’ thoughts, so please feel free to comment below 🙂

Disclaimer: I am a member of the OpenJDK Adoption Group, and also contribute to the excellent work undertaken within the JCP via the London Java Community JCP committee. However, in contrast, 90% of the Java code I write when consulting is currently Spring-based (specifically Spring Boot of late), and I publicly evangelise the superb innovation undertaken by the Spring framework team.

I haven’t posted anything for quite some time now, and the main reasons for this are twofold: first, I was travelling in the USA for all of September (visiting the awesome SpringOne 2GX and JavaOne conferences in San Francisco – more on this in another blog post!), and second, I’ve taken on a new role in my work life. As this post’s title suggests, I am no longer a contractor; instead I have signed up for a great permanent role as CTO at Instant Access Technologies (IAT) Ltd in London. Many of you may remember that this is the company I have been consulting to over the past year.

For those of you that know me this might come as somewhat of a shock, as I’ve been contracting for over 8 years. However, this latest move was an opportunity too good to dismiss. IAT have been doing some great work since I joined them as a contractor in August 2012, and have created several interesting and synergistic brands (more details below). They’ve also been open to using some of the latest and greatest technologies, many of which I’ve recommended, and several others which have been contributed by an amazing (and rapidly growing!) technical team based here in the UK and also in Poland.

When the CEO of IAT Ltd, Matt Norbury, approached me recently with the offer of becoming CTO, I quickly realised what a great opportunity I would have to build on excellent foundations and further steer the technical direction of this rapidly growing company. I plan to continue posting on this blog, and in collaboration with several colleagues I’m also aiming to set up an IAT technical blog, so stay tuned to this space for more details.

I encourage everyone to explore the brands that IAT offer, and please feel free to get in touch with me or the company if you would like to take advantage of the services we offer, or find out more about forming a mutually beneficial partnership. I also encourage everyone to sign up for our flagship customer loyalty scheme at www.epoints.com, as it makes clear sense to get rewarded for the shopping that you do as part of everyday life (especially with the festive season approaching!) 🙂

  • http://www.epoints.com – this is IAT’s flagship customer loyalty scheme, which rewards members with ‘epoints’ simply for doing their everyday shopping or by contributing to various online communities, for example ‘liking’ posts or commenting on articles. Your epoints can then be redeemed for unique and amazing items and experiences. Check out the site for more details – you can even save up for your own island!
  • http://www.onedoo.com – this is a ‘one-stop’ price comparison site, which allows you to get the best deals on items ranging from CDs to shoes, from books to BBQs and more. Of course, many of your purchases made through this site allow you to also earn epoints!
  • http://www.bigdl.com – this site will soon offer an amazing mobile app which will help you find the best local deals, on which you can also earn epoints (are you noticing a theme here? 🙂 ). If you are a retailer then the BigDL platform will enable you to create and target deals at specific demographics, and receive near real-time feedback on the effectiveness. Stay tuned for more information, as this application will be launching shortly!

[Image: epoints.com landing page]

Thanks to everyone who has already offered me advice on this new career move, and I look forward to sharing the experiences and highlights with you over the coming years.

I’ve just read this interesting blog post about DevOps and the Cloud, and felt compelled to leave a comment, which has now turned into a blog post of its own. The original article can be found here:

http://redmonk.com/dberkholz/2013/05/03/devops-and-cloud-a-view-from-outside-the-bay-area-bubble/

The post is USA-centric, talking about the Bay Area in particular, but it does make some very good (and thought-provoking) comments about how cutting-edge practices such as the DevOps philosophy tend to gravitate around the well established tech hubs.

I wanted to add my 2 cents (2 pence?) to the discussion to make sure people don’t just assume this practice only occurs around the West Coast of America. Obviously this geographical area is highly influential in the global IT landscape, but people all over the world experience similar trends and practices, albeit on a more micro scale.

I work as a freelance development consultant in the London area, and many small companies here are investing heavily in the new DevOps philosophy, particularly around the “Silicon Roundabout” area, which is a haven for tech-focused start-ups. Although there are many other tech hubs around the UK (as I’m sure there are all over the US; NYC, for example?), it’s all too easy to see the pattern mentioned in the above article, especially in the more established ‘traditional’ IT sectors. When it comes to the Cloud, people in these sectors often talk a good game, but play very badly. This trend holds regardless of geographical location.

It’s not difficult to see why. If I’m riding my ‘startup’ pedal bike then I can change direction at any time. If I see something I like, or something that looks new and shiny I can hop onto the sidewalk or even ride down a one-way street if I really want to. If I’m driving my SME articulated truck then I have to think ahead, but if something interesting pops up on the sat nav then I can usually change direction within a reasonable distance. If I’m driving (piloting?) my Enterprise-grade freight train then I have to start planning things months in advance and talk to a thousand other people before I even consider travelling on another track.

Having said this, I am generally very encouraged to see these new ‘DevOps’ approaches emerging, regardless of where this is happening. Anyone who is a true practitioner of this philosophy knows, and can demonstrate, the benefits it will bring to a business. My personal favourites are how DevOps methodologies can enable the implementation of Continuous Delivery, thus allowing more iterative product/feature releases, and also how DevOps can facilitate the automation of provisioning and deployments, thus allowing rapid auto-scaling to meet demand and lowering the cost of experimentation. Anyone who is a fan of the “Lean Startup” methodologies should be jumping up and down with joy at the thought of this, but notice how I mentioned the words ‘Lean Startup’. There isn’t a ‘Lean Enterprise’ philosophy that I know of.

In my opinion it’s only a matter of time before these early DevOps adopters spread the good word and this practice becomes mainstream. It doesn’t really matter where they are located or in what size company they work – if the methodology is based on sound and proven principles then it will eventually become adopted by almost everyone in the industry. It’s just a matter of how flexible the organisation is, and this largely relates to size and willingness to take (what are perceived as) risks.

[Flashback to the early 2000’s] TDD and Agile development anyone? Surely only the cool Bay Area kids do that? 😉

Thanks for stopping by the new home of the Tai-Dev Blog! I’ve moved to WordPress because the technical limitations of my old blogging platform were starting to restrict what I could post.

blog.co.uk was great for sharing text, but as a developer I want to share code and diagrams easily, and the old platform had severe limitations here. For example, blog.co.uk doesn’t support GitHub gists, or allow links through to the books I wanted to recommend on Amazon. These are two things I want to do on a near-weekly basis, as my primary motivation for blogging is to share my thoughts and recommendations to help other developers.

Accordingly, I’m marking the old blog http://tai-dev.blog.co.uk/ as officially ‘deprecated’ from today. I’ll leave it up and running for a few months, and I may even port across some of the more popular articles when I get the chance, but all new content will appear here.

Thanks for reading, and I look forward to hearing everyone’s thoughts!