Medicity Blog Post: How Data is Consumed into a Healthcare Data Platform
Part 1:  Interoperability 2.0 - How data is consumed into a healthcare data platform

Industry standards such as HL7 and IHE have generally provided mechanisms for the push and/or query-and-retrieve of clinical data. However, as most are painfully aware, these standards are often not so standard at all.

As we aim to move toward a new generation of interoperability, we first need to consider actually ingesting the data into the network. Sounds pretty basic, but you might be surprised how many solutions today merely pass data from the left hand to the right. Certainly, when you choose to capture the data you take on the appropriate security and privacy responsibilities, but without ingesting the data you forfeit any opportunity to do anything of value with it later. Closely related to consuming the data is providing some level of validation as part of the ingest process. Doing so helps ensure the presence and proper syntax of the data elements needed to correctly file the transaction or document. It doesn't do anybody any good to store junk.
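
As a concrete illustration, here is a minimal sketch of that kind of ingest-time validation in Python, assuming a pipe-delimited HL7 v2 message. The segment names, field positions, and rules checked are examples for illustration, not a full conformance profile.

```python
# Minimal sketch of ingest-time validation for a pipe-delimited HL7 v2 message.
# The required segments, field positions, and rules below are illustrative,
# not a complete conformance check.

REQUIRED_SEGMENTS = {"MSH", "PID"}


def parse_segments(raw_message: str) -> dict[str, list[str]]:
    """Split a raw HL7 v2 message into {segment id: list of fields}."""
    segments = {}
    for line in filter(None, raw_message.replace("\r", "\n").splitlines()):
        fields = line.split("|")
        segments[fields[0]] = fields  # keeps the last occurrence of a repeating segment
    return segments


def validate_for_ingest(raw_message: str) -> list[str]:
    """Return a list of problems; an empty list means the message can be filed."""
    problems = []
    segments = parse_segments(raw_message)

    missing = REQUIRED_SEGMENTS - segments.keys()
    if missing:
        problems.append(f"missing segments: {sorted(missing)}")

    # PID-3: patient identifier list (split index 3 for non-MSH segments)
    pid = segments.get("PID")
    if pid is not None and (len(pid) <= 3 or not pid[3].strip()):
        problems.append("PID-3 (patient identifier) is empty")

    # MSH-9: message type (split index 8, since MSH-1 is the field separator itself)
    msh = segments.get("MSH")
    if msh is not None and (len(msh) <= 8 or not msh[8].strip()):
        problems.append("MSH-9 (message type) is empty")

    return problems


if __name__ == "__main__":
    sample = (
        "MSH|^~\\&|LAB|HOSP|HIE|HIE|202401011200||ORU^R01|123|P|2.5\r"
        "PID|1||12345^^^HOSP^MR||DOE^JANE"
    )
    issues = validate_for_ingest(sample)
    print("file it" if not issues else f"reject: {issues}")
```

A reject-or-quarantine decision like this at the front door is what keeps junk out of the store.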

Another important consideration is flexibility in how you acquire the data. Consider the standards referenced above: certain systems may support IHE in general, but only certain profiles. What happens if the requesting system only supports XCA and XCPD, and the system in which the data resides only supports XDS? Nothing. New interoperability platforms need to bridge such standards, allowing otherwise disparate systems to communicate. The same goes for transport mechanisms: TCP/IP, Direct, SFTP, and APIs; flexibility is key.
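
To make the bridging idea concrete, here is a rough sketch of translating an inbound XCA Cross Gateway Query (ITI-38) into an XDS.b Registry Stored Query (ITI-18) against a local registry. The CrossGatewayQuery and RegistryStoredQuery classes, the registry_client object, and the OID are placeholders; a real gateway builds and parses the ebXML/SOAP messages these transactions actually use.

```python
# Conceptual sketch of bridging an XCA Cross Gateway Query (ITI-38) from a
# remote gateway onto a local registry that only speaks XDS.b Registry Stored
# Query (ITI-18). The classes, the registry_client object, and the OID are
# placeholders; a real gateway builds and parses ebXML/SOAP messages.

from dataclasses import dataclass, field


@dataclass
class CrossGatewayQuery:
    """Simplified stand-in for an inbound ITI-38 request."""
    patient_id: str
    home_community_id: str
    slots: dict = field(default_factory=dict)  # e.g. availabilityStatus, date range


@dataclass
class RegistryStoredQuery:
    """Simplified stand-in for an ITI-18 request against the local registry."""
    query_id: str
    parameters: dict


FIND_DOCUMENTS = "urn:uuid:14d4debf-8f97-4251-9a74-a90016b0af0d"  # FindDocuments stored query


def bridge_xca_to_xds(xca: CrossGatewayQuery) -> RegistryStoredQuery:
    """Re-express the cross-community request as the equivalent local stored query."""
    params = {"$XDSDocumentEntryPatientId": xca.patient_id}
    params.update(xca.slots)
    return RegistryStoredQuery(query_id=FIND_DOCUMENTS, parameters=params)


def handle_inbound_query(xca: CrossGatewayQuery, registry_client) -> dict:
    """Run the bridged query, then stamp results with our homeCommunityId so the
    remote gateway knows where to route the follow-up retrieve."""
    response = registry_client.stored_query(bridge_xca_to_xds(xca))  # hypothetical client
    for entry in response.get("document_entries", []):
        entry["homeCommunityId"] = "urn:oid:1.2.3.4.5"  # placeholder OID
    return response
```

The point isn't the plumbing; it's that the platform sits in the middle and speaks both dialects, so neither end has to change.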

Next week we’ll investigate how these new Interoperability 2.0 concepts apply to step 2 in the process:  how data is organized within the healthcare data platform so that it can add real value.
