Frank Martin of Oracle quipped that after the takeover of Sun, planets and moons might follow soon. Meanwhile, The Economist had another very good high-level story about the Sun takeover by Oracle, titled "Mr Ellison helps himself". Great analysis, with the conclusion that our industry is moving from horizontal/modular to vertical/integrated, the Sun acquisition being just one indicator of the overall trend. The cake (the market) is segmented more into slices than into layers. This has significant repercussions for the IT value chain that all players will need to adapt to.
My vision is a closed loop between business and IT (both on the same page). The Möbius loop perfectly symbolizes how two apparently different sides can "melt" into one. Overcoming cultural, language and collaboration barriers while changing the role of IT is, in my eyes, a prerequisite for successful digital transformation.
Monday, April 27, 2009
Message-Driven SOA = Robustness
I love Bill Poole's blog. In a post last year, Bill describes a service model built around cohesive business areas/capabilities and along the following principles: large-grained services; asynchronous, reliable communication; no centralized data; no cross-service transactions.
In the past I worked as an architect on an eGovernment project that followed exactly these architectural principles. That was by design and also by necessity, since the underlying product (SonicESB) naturally guided developers into modelling services as entities with a purely message-oriented interface (of course, that was putting the cart before the horse). Services in SonicESB aren't really services in the conventional sense: they are written against an API and can communicate with other Sonic services only through that API. The API binds to a JMS abstraction layer that drives the underlying MOM implementation. However, some queues that drive services may be exposed to the external world via Web service interfaces. Sonic also offers a light-weight process construct: itinerary-based routing attaches a routing slip to every message, describing which queues/services must be traversed in what order.
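Itinerary-based routing can be sketched roughly like this; a toy Python illustration, not Sonic's actual API, with queue names, message shape and handlers invented for the example:

```python
from queue import Queue

# Each message carries its own routing slip naming the queues it must
# traverse, in order -- there is no central coordinator.
queues = {name: Queue() for name in ("validate", "enrich", "archive")}

def send(message, itinerary):
    """Attach the routing slip and drop the message on the first queue."""
    message["itinerary"] = list(itinerary)
    first_stop = message["itinerary"].pop(0)
    queues[first_stop].put(message)

def process(queue_name, handler):
    """A service consumes from its queue, does its work, and forwards
    the message to the next stop on its own slip."""
    message = queues[queue_name].get()
    handler(message)
    if message["itinerary"]:
        next_stop = message["itinerary"].pop(0)
        queues[next_stop].put(message)
    return message

send({"body": "order-42"}, ["validate", "enrich", "archive"])
process("validate", lambda m: m.setdefault("valid", True))
process("enrich", lambda m: m.setdefault("country", "AT"))
done = process("archive", lambda m: None)   # slip is now empty
```

The point of the pattern is that the route travels with the message, so individual services stay ignorant of the overall process.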
The project team really bought into the benefits of temporal decoupling and came to see it as a godsend. They also came to accept the unusual asynchronous communication style that goes with it.
- an individual service may be taken down for maintenance without negatively affecting the rest of the system; messages simply get buffered until the service comes back up
- as a corollary to the above: services only need to handle the average load, not peak loads. When bursts of messages arrive, they are simply backlogged in the queue until the service can process them.
- non-blocking for the message producer: fire-and-forget with reliable messages; producers "know" that messages will eventually arrive, and the recipient's SLA can help give a deterministic timeframe
- monitoring of queues: analyze queue utilization over time; detect usage patterns and adapt to and/or predict load situations
- inspection of in-flight messages: analyze messages; remove and store individual messages; replay them at a later stage (or for testing); isolate poison messages and use them for failure analysis
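The buffering behaviour behind the first two points can be shown with a toy sketch; this uses a plain in-memory queue and invented message names, whereas a real deployment would rely on the MOM's persistent queues:

```python
from queue import Queue

# Temporal decoupling in miniature: the producer fires and forgets;
# messages accumulate while the consumer is "down for maintenance"
# and are drained once it comes back up.
buffer = Queue()

def produce(n):
    for i in range(n):
        buffer.put(f"msg-{i}")      # non-blocking fire-and-forget

def consume_all():
    handled = []
    while not buffer.empty():       # work off the backlog at the service's own pace
        handled.append(buffer.get())
    return handled

produce(5)                          # a burst arrives while the consumer is offline
backlog = buffer.qsize()            # nothing is lost, the queue holds the burst
handled = consume_all()             # the consumer returns and drains the backlog
```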
Thursday, April 23, 2009
Virtualization at the Endpoint?
Aaron Skonnard describes the Microsoft Managed Services Engine (MSE) over at MSDN.
Service virtualization is nothing new, of course - it's a pattern that underlies most SOA middleware products (ESBs) and agents (Actional, AmberPoint,..). Hosting platforms (WCF, App Servers) also provide rudimentary virtualization on top of the business logic.
The strength of virtualization lies in its power to isolate consumer and provider and decouple them, more so than would be possible if they interacted directly (somebody once said, not without irony, that "any problem can be solved by another layer of indirection"; the aphorism is usually attributed to David Wheeler). This shields a service developer from the requirements of the consumer and allows her to focus on business logic rather than getting multiple stakeholders to agree on the non-functional aspects of the service interface.
ESBs place service virtualization "in the middle"; the MSE that Aaron Skonnard describes places it at the endpoint. Endpoint-based virtualization is becoming a new trend. WCF does it, and some other containers do to a limited degree. Some packaged LOB applications are starting to offer this functionality at the endpoint (e.g., Siebel with the inclusion of Fusion Middleware). This also supports pure SOAP architectures that describe the enterprise as just a collection of SOAP-speaking endpoints (with reliability, security, etc. all controlled through WS-* protocols).
Virtualization at the endpoint, however, is not equivalent to virtualization in the middle. It limits you to those aspects of a communication that occur, well, at the endpoint, just before hitting the business logic. All aspects that are conceptually more centralized are harder to address. Think about location transparency, for example. If this aspect is implemented in a central, shared intermediation layer, it is simple to move backend service providers to new locations. If there is no such infrastructure, then every consumer must be able to address the provider endpoint directly. Either you configure these addresses in peer-to-peer fashion (which is a mess) or you do some kind of lookup: via DNS (which only helps for hostname-based addresses), a registry, or a centralized resolver service. The latter two solutions require that the consumer endpoint (either the stack or the business logic) handle the address resolution.
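The registry/resolver option might look like this in miniature; the service name and addresses are invented for illustration, and a real deployment would use registry infrastructure such as UDDI rather than an in-process dictionary:

```python
# Instead of hard-wiring provider addresses into every consumer, the
# consumer stack resolves a logical service name against a (conceptually
# central) registry at call time.
registry = {"CustomerService": "http://host-a:8080/customers"}

def resolve(service_name):
    """Consumer-side lookup of the current provider address."""
    return registry[service_name]

def move_provider(service_name, new_address):
    # Relocating the backend is a single registry update;
    # consumers keep resolving the same logical name.
    registry[service_name] = new_address

before = resolve("CustomerService")
move_provider("CustomerService", "http://host-b:9090/customers")
after = resolve("CustomerService")
```

Note that even in this sketch the resolution logic lives in the consumer endpoint, which is exactly the trade-off described above.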
Then think about content-based routing, and things get more difficult still. This would be functionality that the consumer endpoint would have to provide, or an intermediate infrastructure service. Other examples of virtualization/intermediation requirements that map more easily onto a (conceptually) centralized implementation than onto a set of distributed smart endpoints:
- messaging patterns (pub/sub, reliable, async) - reliable queueing can be done at the endpoint but it's a maintenance nightmare (think trapped messages, poison messages, ...)
- complex event processing - for correlation/pattern detection, all events must be brought together at the event correlation engine for the application of rules
- message routing (addressing, CBR) - as mentioned above
- layered security defense - combination of multiple security mechanisms where threats are filtered out at multiple policy enforcement points (PEP) along the communication path
- control point for runtime governance (can you really rely on your distributed containers?) - to some degree, since you can monitor messages in progress, but not the availability of services that are not in use
- providing multiple interfaces to different clients at different network locations - maybe you only want to make a certain "engineered" interface (different operations, different policies) available to another department, to DMZ-external parties, etc.
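To make the content-based routing item concrete, here is a minimal sketch of a centralized router; the message shape, routing key and queue names are invented for the example:

```python
from queue import Queue

# A single intermediary inspects message content and picks the destination;
# producers and consumers need not know about each other. Messages with no
# matching rule land on a dead-letter queue for failure analysis.
destinations = {"orders": Queue(), "complaints": Queue(), "dead-letter": Queue()}

def route(message):
    """Content-based routing: dispatch on a field of the message itself."""
    kind = message.get("type")
    if kind in destinations:
        destinations[kind].put(message)
        return kind
    destinations["dead-letter"].put(message)
    return "dead-letter"

route({"type": "orders", "body": "buy 10"})
route({"type": "unknown", "body": "???"})
sizes = {name: q.qsize() for name, q in destinations.items()}
```

Doing this at every consumer endpoint instead would mean replicating (and keeping in sync) the routing rules everywhere, which is why the requirement maps more naturally onto a centralized implementation.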