Events of the past week have led me back to the "Great Granularity Debate" that goes hand-in-glove with Service Orientation. I was discussing this with some colleagues last night - I described the problem I was dealing with as a 'nano-Lego' problem. It seems to come about when technically-focused architects define a 'SOA' without binding it to business drivers and objectives. The result is a plethora of fine-grained, 'architecture-for-architecture's-sake, services-for-god's-sake' technical services that look suspiciously like reusable 'OO' objects (and they didn't get reused either, did they?).
In this particular case, the business would like to move away from their old monoliths to a more granular architecture that would allow for more efficient change. They don't seem bothered about reuse and put performance much higher up the list. They also recognise that they're not experienced in doing things a 'Service Oriented' way, and they can see some of the problems in funding cross-project service development.
All this tells me that the most appropriate SOA for these guys would be coarse-grained and business-focused. Finer-grained services might be developed later, as their maturity in things service-oriented develops.
1 comment:
I've tried to reply to this one several times. Each time I have read my response and realized it isn't right/good/worthy. That indicates to me that there is a lot going on with granularity.
If I look inside a service, I may well see very fine-grained objects and behaviors. Why? Because, at least for me, it seemed natural to write them that way. The organizing principle, not reusability, was the big driver.
But then it gets hard. What level of granularity should a complete service (and its associated operations) expose? Are all the operations on a service at the same granularity? The answer to the second question, at least, is that they probably aren't - but I don't have a proof either way.
As a thinking model, let's imagine a calculation engine service. It isn't a calculator because it has no visual components.
The first operation I need is plus (I want to be able to add a pair of numbers together). So, I could make a plus operation on the service, implement it appropriately and all would be well.
Then someone announces the need for minus. Minus is tricky: the sequence of operands matters, since (a - b) != (b - a) - subtraction is not commutative. So maybe I should name the parameters to make the order unambiguous.
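To make that concrete, here is a rough sketch of the fine-grained shape in Java (the interface name, types and parameter names are my own invention, purely for illustration):

```java
// A hypothetical fine-grained calculation engine: one named
// operation per function.
public interface CalculationEngine {

    // plus is commutative, so operand order is harmless here.
    double plus(double firstAddend, double secondAddend);

    // minus is not commutative: naming the parameters documents
    // that the result is minuend - subtrahend, not the reverse.
    double minus(double minuend, double subtrahend);
}
```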
We keep adding operations to this calculation engine. Eventually someone adhering to DRY practice will suggest that we should have just one operation - calculate. The first parameter should be the function to be performed, the second parameter the first operand, and so on.
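Sketched the same way (again, the name and the varargs signature are assumptions of mine, not a prescription):

```java
// The coarse-grained alternative: one operation, where the first
// parameter names the function and the rest are its operands.
public interface CalculationEngine {
    double calculate(String function, double... operands);
}
```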
So now we have a much coarser-grained service (albeit a trivial one). Which is better, and why?
I'll hazard some advantages of each.
Separately named functions enable me to name my operands properly, so I won't mix them up by putting them in the wrong sequence. That seems a little less brittle.
Separately named functions also allow me to handle specific errors (e.g. division by zero) with a high degree of specificity. Again, less brittle.
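For instance (the exception type here is hypothetical, just to show the idea of an operation-specific fault in the contract):

```java
// A hypothetical checked exception for the one failure mode
// divide can have.
class DivisionByZeroException extends Exception { }

public interface CalculationEngine {
    // The contract itself tells the client exactly which error
    // this particular operation can raise.
    double divide(double dividend, double divisor)
            throws DivisionByZeroException;
}
```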
The service looks like what a casual user might expect. The user can see the shape of the business in the solution. This adds credibility. It doesn't look like some clever-dick architect has abstracted it so much that it is incomprehensible.
Now looking at the general case (the single calculate operation): it is easy to add new capabilities. The interface doesn't change at all, only the implementation does, so existing clients aren't impacted. That feels like goodness.
You have some standardized error handling available - checking the types of the arguments, etc.
You are maintaining and documenting one interface, not many.
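Here is a sketch of how those last three points might play out in the coarse-grained implementation (the map-based dispatch is my assumption about one plausible way to build it, not the only way):

```java
import java.util.Map;
import java.util.function.DoubleBinaryOperator;

// Hypothetical implementation behind the single calculate operation.
public class CalculationEngineImpl {

    // Adding a new capability means adding an entry here; the
    // published calculate(...) signature never changes, so
    // existing clients are untouched.
    private static final Map<String, DoubleBinaryOperator> FUNCTIONS = Map.of(
            "plus",  (a, b) -> a + b,
            "minus", (a, b) -> a - b);

    public double calculate(String function, double... operands) {
        DoubleBinaryOperator op = FUNCTIONS.get(function);
        if (op == null) {
            throw new IllegalArgumentException("Unknown function: " + function);
        }
        // Standardized argument checking, done once for every function.
        if (operands.length != 2) {
            throw new IllegalArgumentException(function + " expects two operands");
        }
        return op.applyAsDouble(operands[0], operands[1]);
    }
}
```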
So what is the correct level of granularity? The jury is out - at least based on the cases I have discussed here. There may be some discriminators which help us to decide. But what are they?
Answering that last question will give us some insights into how to think about granularity.