Thursday, November 03, 2016

McKinsey: Why digital transformation should be a priority for health insurers

McKinsey has a great article on the impact of the digital revolution on the health insurance sector:

Market Changes in Health Insurance

Digital innovation in health insurance can
  • achieve substantial SG&A cost savings (10-15%)
  • decrease medical spending (lower-cost alternative services; customers are the major beneficiaries)
  • drive organizational transformation
  • change the operating model
  • improve customer engagement and satisfaction
Disruption starts to impact health insurance
  • healthcare has traditionally been slow to adopt customer centricity
  • digitally native firms are entering the market and could severely disrupt
  • incumbents must react now to avoid irrelevancy
  • VC investment in digital health is $4.5B per year (up 400% in 4 years)

Compared to other sectors
  • heavy regulation is no protection for incumbents (compare financial sector)
  • disruption takes about 5 years from tech innovation to industry change (compare music industry)
  • P&C: disruptors lead with a direct-distribution model instead of broker-led sales

Changing customer behavior
  • Consumers have become accustomed to online and mobile channels and will expect these from health insurance as well (during purchase and service)
  • Incumbents shift IT budgets from core to digital capabilities (50% within 3-5 years)
  • Tipping point for market shift when a) consumer change becomes widespread and b) network effects take hold

Digital Impact relies on Four Levers

Four levers (and their combination) can yield significant improvements to health insurance:
Stronger connectivity: better engagement with all stakeholders:
  • For customers: more frequent interaction on the basis of personalized, omnichannel services that optimize care and allow for better health management.
  • For providers: improved collaboration & coordination, optimized care delivery, better population health management.

Greater efficiency and automation:
  • radically reimagine workflows to align with the health journeys of customers and other stakeholders
  • e.g., a clean-sheet customer-centric online sign-up process with reduced escape rate and cycle time (as done by one health insurer)

Better decision making:
  • Analytics helps to better understand customers & risk
  • Optimize population health management
  • Innovate around payments & products (e.g., customer segment focus)

Business Model and Care Innovation (Examples):
  • Wearables monitor patients with chronic conditions
  • Telemedicine, “virtual visits”
  • Locate nearby providers
  • Easy access to medical history 


Customer Journey Redesign
  • Identify customer journeys around enrollment, billing, claims submission/processing, and issue resolution; in detail:
  • Increase consumer awareness: product education and evaluation
  • Enroll customers: inquiry, quote-to-card, billing
  • Solve Problems: for customers, brokers and providers
  • Pay what’s owed: claims processing, broker commissions
  • Analyze and report data: broker reporting, etc.
  • Help members take control: care management
  • Help providers deliver better care: performance review
  • Optimize the provider network: network design, credentialing
  • Prioritize and analyze customer journeys; risk and stakeholder assessment
  • Redesign and digitize to the fullest extent possible where value is high and risk is low

Organizational Redesign
  • Break functional silos by creating cross-functional teams (XFT)
  • include multiple business segments, IT and operations (BizDevOps)
  • XFTs work collaboratively on customer journey redesign and bring in different perspectives
  • Leaders should set bold quantitative targets (e.g., 50% improvement in cycle time)

Resource Allocation
  • Shift budgets from run to change to sustain the digital transformation
  • Focus on a small number of high-priority journeys
  • Focus on talent in the digital transformation team

Bimodal IT and Decoupling
  • Adopt a bimodal IT approach to enable rapid digital innovation
  • Integration can decouple the innovation layer (which supports minimum viable products, fail-fast, quick development times) from the stable core
  • Integration should standardize around data models and functional services
  • Standardize existing data models within the legacy infrastructure
  • Create reusable standard service interfaces to avoid point-to-point integration
Six-Step Plan for Incumbents
  1. Start from the customer: Customer Journey end-to-end digitization
  2. Break functional silos: cross-functional BizDevOps teams with a clear mandate
  3. Create measurable targets for each team
  4. Resource allocation and budgets: reallocate investments
  5. Focus on talent: get and retain digital talent
  6. Maximize value of two-speed IT: digitally enable legacy infrastructure

The ITMC Take

McKinsey describes the business case for digitization in the health insurance sector in terms of bottom-line results and improved customer satisfaction. The bigger picture, however, is a win-win for health insurers and their members: business model innovations have the potential to improve the quality of care while driving down medical costs. While the details are not yet clear, the impact of these innovations can be enormous. Analytics, wearables, and connectivity work in concert to steer members towards low-cost prevention and a healthy lifestyle, create awareness, and enable them to better manage their health through personalized recommendations. Some customers and many providers may cringe at this prospect. But the rapid consumerization of healthcare through wearables demonstrates that many customers want to take responsibility for their health. Health insurers must develop the necessary digital capabilities now in order to respond to future market developments.

Monday, July 23, 2012

WebLogic JMS in-flight Message Compression

WebLogic JMS has a feature for compressing in-flight JMS messages. Compression happens between the JMS client library and the JMS server. This avoids bandwidth issues and nasty problems with maximum message size limitations.

Dynamic compression can be enabled on the JMS connection factory, where you specify a threshold in bytes. If the serialized message body exceeds this threshold and the client is not collocated with the server, the message payload is compressed in flight. See Compressing Messages in the WebLogic Performance and Tuning Guide.

An alternative is to set the compression threshold programmatically via the WLMessageProducer interface.

Monday, June 11, 2012

Interactive OSB Testing

Interactive Testing with OSB Test Console

Oracle Service Bus has a built-in test console for the interactive testing of OSB artifacts at runtime. It generates synthetic payloads (including security tokens) based on the service description. Note that the test console performs an actual invocation of services - which can have unintended consequences on production systems. Therefore, use the test console only on development and staging systems and block it on production systems. 

Note that interactive testing only gets you so far. What you really want is automated unit testing of OSB as the basis for regression testing and continuous integration, as well as test-driven development (TDD).
The test console is accessed via the test icon in the OSB console.

Business services are backend service definitions in OSB. The test console invokes the actual backend service and therefore tests its correct endpoint configuration, its availability as well as any downstream dependencies the backend service may have (DBs, etc.).

The test console can also invoke proxy services. The invocation triggers the proxy's pipeline, which is graphically displayed in an invocation trace. The trace contains detailed information about each step and the variable values recorded during execution.

The following transformation types are supported in OSB and can be tested with the test console; for the various transformation actions a test icon is available:
  • XSLT transformations in OSB are used for XML-to-XML mappings. You can test them with source XML documents within the test console, which also displays any named parameters supported by the transformation.
  • XQuery transformations work with either XML or non-XML input.
  • MFL describes non-XML data formats, i.e., the layout of binary and text data. The test console can verify the correct bidirectional mapping.
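To illustrate what such an XSLT mapping does outside of OSB, here is a minimal, self-contained Java sketch using the JDK's built-in javax.xml.transform API. The stylesheet and the `<customer>` input format are invented examples for illustration, not actual OSB artifacts:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltDemo {
    // Hypothetical stylesheet: maps <customer name="..."/> to <member><name>...</name></member>
    static final String XSLT =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:template match='/customer'>"
      + "<member><name><xsl:value-of select='@name'/></name></member>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    // Apply the stylesheet to an input document and return the result as a string
    static String transform(String inputXml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSLT)));
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(inputXml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform("<customer name='Alice'/>"));
        // prints: <member><name>Alice</name></member>
    }
}
```

The test console does essentially this for you interactively: it supplies a synthetic source document and shows the transformed output.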

Thursday, March 29, 2012

JMS MaxMessage Size - persistent messages

Large JMS Messages in WebLogic


Applies to

WLS 10.3.5 / OSB 11g - plain vanilla setup, 64bit, 3GB Heap


Send large messages (250MB) across OSB using JMS Transport


OSB supports streaming mode for HTTP and file-based protocols - which we tested successfully with multi-GB files. However, things are not so straightforward with JMS proxies, where large-message handling essentially boils down to the capabilities of the WLS JMS subsystem.

Test Setup

The test is based on a JMS client that creates a persistent message with a variable payload size (1-250 MB). It sends this message to a JMS queue on a remote WLS server and receives it synchronously from the same queue. We measure the roundtrip time on the client as well as server load (heap, CPU) on the WLS server.


First Run



At first it looks good: response time increases near-linearly with message size. But then, at 10MB, you encounter the first roadblock:


weblogic.socket.MaxMessageSizeExceededException: Incoming message of size: '10000080' bytes exceeds the configured maximum of: '10000000' bytes for protocol: 't3'.


Increase MaxMessageSize


For most protocols, including T3, WLS limits the size of a network call to 10MB by default (see the Performance and Tuning Guide). This is a soft limit meant as protection against DoS attacks; note that a server becomes more vulnerable to such attacks when the limit is raised. Anyway, let's increase it to see how large a message we can process.


  • Go to Managed Server --> Protocols --> General and set the Maximum Message Size to something large.
  • To set the maximum message size on a client, use the command-line property -Dweblogic.MaxMessageSize.
  • Verify the queue's maximum accepted message size under Queue --> Configuration --> Thresholds and Quotas --> Maximum Message Size.




Setting Maximum Message Size for Network Protocols



Increase Server Heap Size

When running the test again you'll notice that the server-side heap is really busy on those large messages. 




Our heap size of max 3GB seems to be a limiting factor, so we increase it to 8GB (note: you need a 64-bit JVM for anything larger than ~3GB):


USER_MEM_ARGS="-Xms3096m -Xmx8192m -XX:PermSize=128m -XX:MaxPermSize=512m -XX:NewRatio=3 -XX:SurvivorRatio=128"


And off we go with our next test. This time we can send messages of up to around 100MB; roundtrip times increase to 50-80 seconds. Then we get a CORBA exception:


weblogic.jms.common.JMSException: weblogic.messaging.dispatcher.DispatcherException: java.rmi.MarshalException: CORBA COMM_FAILURE 1398079691 Maybe; nested exception is:

org.omg.CORBA.COMM_FAILURE:   vmcid: SUN  minor code: 203 completed: Maybe ...



Bypass CORBA Stack

At this point we got a hint from Oracle support: "Switch from wlclient.jar to wlthint3client.jar so you bypass all the CORBA stack…". This change guarantees that the T3 protocol is used for JMS communications.


We can now send messages up to around 220 MB of size - albeit at extremely long roundtrip times of around 10 minutes. Try anything larger and you get:


weblogic.jms.common.JMSException: weblogic.messaging.dispatcher.DispatcherException: weblogic.rjvm.PeerGoneException: No message was received for: '240' seconds
    at weblogic.jms.dispatcher.DispatcherAdapter.convertToJMSExceptionAndThrow(
    at weblogic.jms.dispatcher.DispatcherAdapter.dispatchSyncNoTran(
    at weblogic.jms.client.JMSSession.receiveMessage(
    at weblogic.jms.client.JMSConsumer.receiveInternal(
    at weblogic.jms.client.JMSConsumer.receive(



In-flight Compression


WLS JMS has another nice feature: online compression of messages during transport (see Compressing Messages). Since our test messages contain EDIFACT sample payloads, they can be reduced to around 10% of their original size. This also has a dramatic effect on roundtrip times, as the chart below shows (up to 100MB). 250MB messages can now be transferred easily, with roundtrips taking less than 10 seconds.
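To get a feel for why compression helps so much here, the following standalone Java sketch gzips a repetitive, EDIFACT-like payload and reports the size reduction. The segment content is invented for illustration; real EDIFACT messages are similarly repetitive, which is why they compress so well:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class CompressionRatioDemo {
    // Build a repetitive EDIFACT-like payload (hypothetical sample segments)
    static byte[] samplePayload(int segments) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < segments; i++) {
            sb.append("LIN+").append(i).append("++4000862141404:EN'QTY+47:10'PRI+AAA:9.95'");
        }
        return sb.toString().getBytes();
    }

    // Compress with gzip, as a stand-in for whatever codec the transport uses
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] raw = samplePayload(100_000);   // a few MB of EDIFACT-like text
        byte[] compressed = gzip(raw);
        System.out.printf("raw=%d bytes, compressed=%d bytes (%.1f%%)%n",
                raw.length, compressed.length, 100.0 * compressed.length / raw.length);
    }
}
```

On payloads like this the compressed size lands well below 10% of the original, which matches the roundtrip improvements we measured.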





We didn't crack the 200MB limit on physical message sizes in WebLogic JMS. However, we found a solution for our particular requirement. It would be interesting to find out how to get rid of the PeerGoneException in Test-03, even though the roundtrip times for messages of that size are prohibitively long.

Wednesday, November 30, 2011

Oracle ADF Power Course on December 15 in Bern

On December 15, a one-day power course on Oracle ADF development will take place in Bern, led by Frank Nimphius, Oracle's expert on the topic. Further details and registration at:


Fast and Efficient with Oracle ADF

December 15, 2011, in Bern

Are you looking for an efficient framework for developing your rich enterprise applications for the web and mobile? Or are you planning a rapid-application-development initiative in your organization? At this event we will show you how Oracle ADF can bring significant efficiency and quality gains to Java EE application development. Seize this unique opportunity to meet Frank Nimphius in person, the expert on Oracle ADF in the German-speaking region.
After an introduction to the challenges of application development, we will show you how the standards-based, end-to-end framework Oracle ADF is structured. We will then compare it with other application development frameworks and demonstrate how Oracle ADF supports you in realizing rapid-application-development concepts.
The demo after the lunch break puts the morning's approaches into practice and shows how quickly a rich enterprise application can be built with the visual and declarative tools of the ADF platform.

Monday, October 31, 2011

First-class citizens and splendid isolation

Michael Poulin did a review of our SOA Design Patterns book:

Thomas Rischbeck says: “The ESB pattern is about performing integration tasks and adding value to client-service communication in an SOA – all completely transparent to the participants”. I would welcome this transparency if it were not based on intelligent routing and data enrichment having its own policies on the interaction between the participant. In SOA, we have a mechanism that regulates the interactions – Service Contract – and it is not seemed to be considered by the ESB at all.

Michael seems to imply that the ESB doesn't honor service contracts at all. I get the impression that he also sees the ESB as a magical integration switchbox at the center of the enterprise. This makes him feel uneasy – which I can understand very well.

There’s also another way of looking at an ESB: you could think of it as an opaque substrate for hosting "facade" services – which are fully-fledged, first-class citizens in the SOA world. Like every other service, such facade services have a formal interface (and a less formal contract, which is a superset of the interface, as James Watson says). While I’d admit that there can be quibbles about business logic on the ESB, it doesn’t “break […] the service contract between the service consumer and the service provider”, as Michael thinks. The consumer doesn’t reason about the actual backend provider, only about the ESB-exposed endpoint it communicates with. The rest is implementation detail, so to say.

From this point of view the ESB becomes opaque: it cannot violate a service contract because it isolates whatever backend services are multiplexed through it. This splendid isolation yields numerous benefits. For example, services can be normalized according to enterprise requirements; the ESB may also enforce coarse-grained security and shield backend services from certain attack vectors. Multiple rings of defense have been a security best practice since the Middle Ages.

Cheers, Thomas.

Saturday, November 06, 2010

Ride the Cloud at Lightning Speed

How can you make your computationally intensive application run like lightning on the cloud? Use Cloud Parallel Processing!

Some computational tasks are so demanding that a single machine cannot process them within a reasonable amount of time. Applications that leverage massive amounts of data from the Web, a social media network or any other large-scale system can easily overwhelm a single machine. Many other examples exist of analytical tasks that operate on large data sets.  

Parallel processing has been automated and is widely practiced at the processor level. To utilize the cloud efficiently, however, it is necessary to scale out: multiple nodes share the workload and therefore get things done faster. Parallel processing at the application level is necessary to harness the power of the cloud.

Creating a Scalable and Elastic Application
With the advent of cloud computing a large-scale and virtually unbounded grid of compute nodes becomes a reality. In principle, this makes the cloud an ideal substrate for on-demand high-performance parallel processing. The cloud can adapt elastically when CPU requirements change and spin-up or tear-down nodes accordingly.

In the latest Gartner Hype Cycle for Cloud Computing (July 2010), Cloud Parallel Processing is a new entrant on the cloud technology adoption curve. Gartner notes that “as cloud-computing concepts become a reality, the need for some systems to operate in a highly dynamic grid environment will require these techniques to be incorporated into some mainstream programs. […] The application developer can no longer ignore this as a design point for applications.”

Parallel Programming is hard!

This points to the caveat of parallel processing: it is very hard to build scalability and elasticity into general purpose applications. But if you cannot make multiple nodes share the workload then the cloud will not help you get things done faster.

How can you make your computationally intensive application run in parallel on several nodes? A parallel programmer must address these steps:
  • Partition the workload into smaller items
  • Map these onto processors or machines
  • Communicate between the subtasks
  • Synchronize between subtasks as required

Note that synchronization is a real bugger – because it is anathema to parallelism. Yet it is unavoidable in most applications. 
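The steps above can be sketched concretely: the following minimal Java example partitions a summation into chunks, maps each chunk onto a worker thread, and synchronizes on the combined result. The workload and worker count are arbitrary choices for the example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    // Partition [0, n) into chunks, map each chunk to a worker, then combine
    static long parallelSum(long n, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            long chunk = (n + workers - 1) / workers;
            List<Callable<Long>> tasks = new ArrayList<>();
            for (int w = 0; w < workers; w++) {
                final long lo = w * chunk, hi = Math.min(n, lo + chunk);
                tasks.add(() -> {              // one subtask per partition
                    long s = 0;
                    for (long i = lo; i < hi; i++) s += i;
                    return s;
                });
            }
            long total = 0;
            // invokeAll blocks until every subtask finishes -- the synchronization step
            for (Future<Long> f : pool.invokeAll(tasks)) total += f.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSum(1_000_000, 4)); // 499999500000
    }
}
```

Even in this toy example the single join point at the end is where all parallelism stalls; in real applications such synchronization points are far more numerous and far harder to minimize.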

Shift of Programming Model

Even with toolkits that offer communication and synchronization primitives (such as MPI), parallel programming is a bit like rocket science. Without a shift in programming model, parallel programming will remain elusive. Programmers will have to deal with parallelism to some degree at least – they can no longer rely on middleware, databases, and operating systems to exploit parallelism for them.

Chris Haddad mentions the following programming-model shifts that come with the cloud:
  • Actor model: dispatching and scheduling instead of direct invocation
  • Queues and asynchronous interactions; RESTful interactions
  • Message passing instead of function calls or shared memory
  • Eventual consistency instead of ACID

Fortunately, parallel programming has been a research topic for decades, long before the cloud existed. Demand for high-performance computation came from various domains, such as weather prediction, finite-element simulation, and other data analysis. During my PhD I worked on object-oriented parallel programming using COTS (commercial-off-the-shelf) workstation clusters, called distributed memory machines in parallelism jargon. The cloud is nothing other than a distributed memory machine, albeit at a much larger scale. My work also touched on the Actor model that Chris mentioned.
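To make the actor shift tangible, here is a deliberately minimal Java sketch, not a full actor framework: callers send messages to a mailbox (a queue) instead of invoking methods directly, and a single thread drains the mailbox, so the actor's state is never touched concurrently:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ActorSketch {
    // A minimal actor: a mailbox drained by one thread; no shared-memory access
    static class CounterActor {
        private final BlockingQueue<Integer> mailbox = new LinkedBlockingQueue<>();
        private final CompletableFuture<Integer> result = new CompletableFuture<>();
        private int count = 0; // state touched only by the actor's own thread

        CounterActor(int expectedMessages) {
            Thread t = new Thread(() -> {
                try {
                    for (int i = 0; i < expectedMessages; i++) {
                        count += mailbox.take();   // process one message at a time
                    }
                    result.complete(count);
                } catch (InterruptedException e) {
                    result.completeExceptionally(e);
                }
            });
            t.setDaemon(true);
            t.start();
        }

        void send(int delta) { mailbox.add(delta); }               // fire-and-forget
        int await() throws Exception { return result.get(5, TimeUnit.SECONDS); }
    }

    public static void main(String[] args) throws Exception {
        CounterActor actor = new CounterActor(100);
        for (int i = 1; i <= 100; i++) actor.send(i);
        System.out.println(actor.await()); // 5050
    }
}
```

Because all mutation funnels through the mailbox, no locks are needed; this is the dispatching-instead-of-direct-invocation style the list above describes.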

Watch this space

The principles and models developed for distributed memory machines also apply to the cloud. In a follow-up series of posts I will look at these topics in more detail and examine how they can be applied to Cloud Parallel Processing. I’ll dig deeper into programming-model shifts (such as Actors) and outline some parallel algorithms (such as MapReduce, n-body, and others of the 13 computational dwarfs).
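As a small preview, MapReduce is the most approachable of these models. This toy word count (sample input invented for illustration) shows the map step (split lines into words) and the reduce step (count occurrences per word) using Java streams:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // map: split each line into words; reduce: count occurrences per word
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.parallelStream()                                   // lines processed in parallel
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> lines = List.of("the cloud is a distributed memory machine",
                                     "the cloud scales out");
        System.out.println(wordCount(lines).get("the")); // 2
    }
}
```

The same map and reduce structure scales from a thread pool on one machine to a cluster of cloud nodes; only the partitioning and shuffling machinery changes.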