If you have seen recent news, the healthcare industry is going through a massive revolution. Consumer products ranging from smartphones and fitness trackers to genetic-sequencing kits are being launched that use technologies like artificial intelligence, machine learning, blockchain, and gene sequencing to improve consumer health. Apple recently launched a Personal Health Record (PHR) feature with iOS 11.3 and partnered with 39 hospitals that support it. Amazon’s recently announced dive into the healthcare space has brought healthcare into the limelight. Google, Microsoft, Samsung, and a few other companies have already invested in internal healthcare initiatives or healthcare startups, and many others have started focusing on this trend by investing resources to build products and features in related markets. Billion-dollar companies are not just fighting for a share of healthcare revenue; the real reason behind the jump is DATA. These companies will be fist-fighting in the near future to own consumers’ healthcare data. With machine learning and AI as new tools, the time is right for companies to dig deep into healthcare and the Personal Health Record.

 

Healthcare products are highly complex, as they handle a variety of data and have various interfaces to interact with. If the product is consumer-facing, it adds another level of complexity related to health information security and privacy. With billion-dollar companies diving into the healthcare business with various kinds of products, it is no wonder there is special emphasis on standardization. The HL7 and FHIR standards were born from exactly this need in the healthcare industry.

 

The goal of standards like HL7 and FHIR is to let everyone securely access and use the right health data when and where they need it, empowering global health data interoperability. To accomplish interoperability, the ways information is processed and stored need to be harmonized. This problem was addressed by Health Level Seven International by introducing the HL7 standards.

 

HL7 (Health Level 7) refers to a set of international standards for the integration, sharing, retrieval, and exchange of electronic health and administrative information between applications used by various healthcare providers.

 

FHIR (Fast Healthcare Interoperability Resources) is a next-generation standards framework created by HL7. FHIR combines the best features of HL7’s product lines while leveraging the latest web standards and applying a tight focus on implementability.
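To make this concrete, FHIR models health data as “resources” exchanged as JSON (or XML) over a REST API. A minimal, illustrative Patient resource (the names and dates here are made up) looks roughly like this:

```json
{
  "resourceType": "Patient",
  "id": "example",
  "name": [
    {
      "family": "Doe",
      "given": ["Jane"]
    }
  ],
  "gender": "female",
  "birthDate": "1980-04-01"
}
```

A client could fetch such a resource with a plain HTTP request like `GET [base]/Patient/example`, which is what makes FHIR so friendly to web and mobile developers.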

 

This makes standards a very important aspect of a healthcare product and makes data exchange less complex. If in the future you ask — why is it that when we go to a new doctor we don’t have to fill out the same paperwork, and the new doctor already seems to know what the last one did? Or how does an app on your smartphone know all your vitals? The answer is HL7 and FHIR.

 

HL7 and FHIR deal with the HOW, not the WHAT, that every Product Manager is looking for in the product-building equation, but these standards are increasingly gaining importance in company vision, roadmaps, and new healthcare products. Here are some reasons it is important for a Product Manager to understand these standards:

 

1. Opportunities: As a healthcare Product Manager, it is important to understand the standards involved in order to identify new product opportunities. Standards like HL7 and FHIR can enable connectivity between different systems and products, and this connectivity can surface new product opportunities. For example, imagine there is no existing product in the market to fill a certain gap in a use case. Wouldn’t it be awesome for your company to build a new product or provide a new feature and grow?

 

2. Customer Experience: Knowing what information can be presented to or acquired from a user or healthcare provider can help design an optimal User Experience (UX). This understanding can also inform Information Architecture (IA) and improve the overall customer experience of your product.

 

3. Product Building: Understanding these standards can make a Product Manager’s work with engineering easier. These standards also provide recommendations for programming structures, technical specifications, and software guidelines. This can help reduce the number of iterations needed to deliver a feature successfully and bug-free.

 

4. Metrics/Analytics: Understanding these standards and the information they provide can help identify the right success metrics for a feature or product. It can also help build analytics-driven features that give patients more in-depth insights into their health.

 

5. Competition: Knowing these standards can sometimes mean having an edge over competitors. In Apple’s case, imagine a competitor had already partnered with the top hospitals Apple wants information from.

 

It is clear that next-generation healthcare products will be more connected and will interface using HL7 and FHIR. News items like the ones below support that claim:

 

Apple announces effortless solution bringing health records to iPhone

https://news.ycombinator.com/item?id=16225152

McKesson is Preparing for the HL7® FHIR® Standard

Traditionally, Product Managers working on software products and platforms have made decisions based on sales insights, customer feedback, and feature requests. With big data and analytics being the buzzwords, the focus has shifted towards metrics-driven feature prioritization. If the numbers do not support a feature, its priority is critically debated until there are numbers to prove otherwise. Thanks to analytics and advances in data mining, product managers these days truly understand their customers, often better than the customers understand their own situation.

The industry has pretty much standardized the analytical data models required to make decisions, but to make such decisions quickly it is important to be able to tweak a product and deploy environments as and when required. Long application release cycles make it difficult to stay ahead of the competition. Imagine a feature that is production-ready for analysis but delayed while infrastructure is ramped up and environments are prepared; this is the exact problem that Docker solves.

Docker is an open-source project that automates the deployment of applications inside software containers. So what are software containers? Containers are a method of operating-system virtualization that lets you run an application and its dependencies in resource-isolated processes. Containers allow you to package an application’s code, configurations, and dependencies into easy-to-use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control. Containers can help ensure that applications deploy quickly, reliably, and consistently regardless of deployment environment.

Teams can now stop wasting hours setting up developer environments, spinning up new instances, and making copies of production code to run locally. With Docker, you simply take copies of your live environment and run them on any new endpoint running a Docker engine. Packaging an application in a container with its configs and dependencies guarantees that the application will always work as designed in any environment: locally, on another machine, in test or production. No more worries about having to install the same configurations into different environments.
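As a minimal sketch of that packaging step, here is what a Dockerfile for a simple Python web app might look like (the app.py and requirements.txt file names, the port, and the image tag below are illustrative assumptions, not from any specific project):

```dockerfile
# Start from an official Python base image.
FROM python:3.11-slim

# Install dependencies first, so Docker can cache
# this layer between builds.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how to run it.
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

With this file in the project root, `docker build -t myapp .` produces an image, and `docker run -p 8000:8000 myapp` runs the same packaged environment on any machine with a Docker engine.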

To get started with Docker, or for a more in-depth look, watch the video below:



It would be interesting to know how product teams can leverage Docker. If you have any brilliant ideas, please share them in the comments below.

WCF REST provides the flexibility to transport data over HTTP and should be preferred when data needs to be accessed from applications built in different technologies. There are times when this is not possible, or when all the involved web service consumers or service clients are built on .NET technology; chances are they will prefer SOAP over REST, and then both REST and SOAP endpoints need to coexist in a single service. This series provides tips on combining SOAP with REST in a single WCF service.

Below I compare how behaviors look for a SOAP-only service versus a SOAP+REST service.

A behavior defines how the endpoint interacts with clients. Attributes like security, concurrency, caching, logging, etc. are part of the service behavior. Behaviors describe to the client what the service will be. Even though behaviors can be added in three different ways (code, configuration, and attributes), I will use configuration for this example. Below is a comparison of how your SOAP vs. SOAP+REST behaviors look in configuration.

SOAP:

<behaviors>
  <serviceBehaviors>
    <behavior name="SoapBehave">
      <serviceMetadata httpGetEnabled="true" />
      <serviceDebug includeExceptionDetailInFaults="false" />
      <serviceThrottling maxConcurrentCalls="100" maxConcurrentSessions="50" maxConcurrentInstances="50" />
      <dataContractSerializer maxItemsInObjectGraph="2147483647" />
    </behavior>
  </serviceBehaviors>
</behaviors>

SOAP+REST:

<behaviors>
  <endpointBehaviors>
    <behavior name="web">
      <webHttp />
    </behavior>
  </endpointBehaviors>
  <serviceBehaviors>
    <behavior name="internal">
      <serviceMetadata httpGetEnabled="true" />
      <serviceDebug includeExceptionDetailInFaults="false" />
      <serviceThrottling maxConcurrentCalls="100" maxConcurrentSessions="50" maxConcurrentInstances="50" />
      <dataContractSerializer maxItemsInObjectGraph="2147483647" />
    </behavior>
  </serviceBehaviors>
</behaviors>
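The behaviors above do not by themselves create the two endpoints; they are referenced from the service’s endpoint configuration. As a hedged sketch (the service name, contract name, and relative addresses below are hypothetical), the wiring might look like this:

```xml
<services>
  <service name="MyCompany.MyService" behaviorConfiguration="internal">
    <!-- SOAP endpoint for .NET clients -->
    <endpoint address="soap" binding="basicHttpBinding"
              contract="MyCompany.IMyService" />
    <!-- REST endpoint, attached to the "web" endpoint behavior -->
    <endpoint address="rest" binding="webHttpBinding"
              behaviorConfiguration="web"
              contract="MyCompany.IMyService" />
  </service>
</services>
```

Both endpoints share one contract and one service behavior, while only the REST endpoint picks up the webHttp endpoint behavior.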

 

Today I was faced with a challenge in a WCF service where my host service returned an HTTP 504 response.

HTTP/1.1 504 Fiddler – Receive Failure
Date: Wed, 15 Jan 2014 18:53:11 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
Timestamp: 10:53:11.745

[Fiddler] ReadResponse() failed: The server did not return a response for this request. Server returned 0 bytes.

Initially my thought was that this issue was due to a WCF REST POST request without any inbound request data stream, but after a bit of research I found that my hosted service was unable to output the huge XML data object I was returning from the datastore. I resolved this issue by adding maxItemsInObjectGraph to the service behavior’s dataContractSerializer element in web.config.

I think WCF, or any API, is a game of correct configurations, and understanding what these configurations mean is important for API design. Let’s understand what maxItemsInObjectGraph and dataContractSerializer are and how to use them in the service configuration of an API.

As we know, a service behavior defines how the endpoint interacts with clients. Attributes like security, concurrency, caching, logging, etc. are all part of the behavior, and so is dataContractSerializer. The DataContractSerializer serializes and deserializes an instance of a type into an XML stream or document using a supplied data contract, and maxItemsInObjectGraph sets the maximum number of items in an object graph to serialize or deserialize. Why 2147483647? maxItemsInObjectGraph is an integer, and 2147483647 is the largest value a signed 32-bit integer can represent, so setting it to this value effectively removes the item-count limit and allows very large XML/JSON responses. I think if you are not sure, or if you think the data might grow over time, it is better to set this property to the max.
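As a quick sanity check on that number, here is a tiny snippet (illustrative only, unrelated to the WCF configuration itself) confirming that 2147483647 is the signed 32-bit integer maximum:

```python
# Largest value representable by a signed 32-bit integer:
# 2^31 - 1, since one of the 32 bits is used for the sign.
int32_max = 2**31 - 1
print(int32_max)  # 2147483647
```
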

Here is how my service behavior block in web.config looks.

       <serviceBehaviors>
        <behavior name="ServiceBehaviors">
          <serviceMetadata httpGetEnabled="true" />
          <serviceDebug includeExceptionDetailInFaults="true" />
          <serviceThrottling maxConcurrentCalls="100" maxConcurrentSessions="50" maxConcurrentInstances="50" />
          <serviceAuthorization serviceAuthorizationManagerType="CCAService.AuthorizationManager, CCAService" />
          <dataContractSerializer maxItemsInObjectGraph="2147483647"/>
        </behavior>
     </serviceBehaviors>