Diego Boscá Tomás, Semantic Interoperability Consultant, talks to us about InfoBanco, the intersection between openEHR and OMOP, and what the future holds for Veratech...

“What if we combined our knowledge of openEHR and OMOP?”

Thanks to Diego Boscá Tomás, Semantic Interoperability Consultant, for taking the time to talk to us about InfoBanco, the intersection between openEHR and OMOP, and what the future holds for Veratech.

At Veratech, our history is deeply rooted in standards and data quality assurance. When the idea of normalisation for research emerged, we recognised an opportunity to become actively involved. Over time, we’ve observed various OMOP projects involving different hosts.

Out of curiosity, we asked: “What if we combined our knowledge of openEHR and OMOP?” Ultimately, it was that curiosity which led to projects such as InfoBanco de Madrid, where these two systems were effectively merged. We learned lessons from that project, and afterwards our openEHR-based research findings were transformed into OMOP.

While openEHR served as the core, we understood that these models alone couldn’t provide the foundation for a high-quality core system facilitating the transition to OMOP. This marked the final step in the InfoBanco project.

Collaboration is key, and we will be working with people from Karolinska, CatSalut, Charité and the HiGHmed consortium. We’re at the stage of putting people in touch in this potential openEHR research network. Additionally, we’re making efficient use of the resources available through InfoBanco to enhance our knowledge. Our primary objective is to develop a user-friendly and transparent mapping system.

Our ultimate goal is to simplify the technical aspects, making this tool accessible to a broad audience. After all, it’s the shared models that drive transactions, and our mission is to ensure they remain accessible to all.

Is there a reason you decided to choose OMOP over another standard? 

OMOP, which is a standard endorsed for real-world data research, is gaining prominence. 

Essentially, it simplifies the data model, which is itself designed to be straightforward and to encompass conditions, medications and more, while normalising the vocabulary.

This means it employs a highly streamlined vocabulary which everyone adopts, and that streamlined approach enables researchers to conduct network studies efficiently. Once you embrace the OMOP model, you can collaborate with others easily and execute identical SQL queries on the data at every site.
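To make that concrete, here is a minimal sketch of what a shared network query could look like, run from Python against a local CDM instance. The table and column names are standard OMOP CDM; the database path is a placeholder, and the concept_id is only an illustrative example:

```python
import sqlite3  # any DB driver works; the SQL itself is what travels between sites

# Standard OMOP CDM table/columns: condition_occurrence(person_id, condition_concept_id).
# 201826 is commonly cited as the OMOP concept for type 2 diabetes mellitus;
# treat it here as an illustrative placeholder.
QUERY = """
SELECT COUNT(DISTINCT person_id) AS n_patients
FROM condition_occurrence
WHERE condition_concept_id = :concept_id
"""

conn = sqlite3.connect("omop_cdm.db")  # placeholder path to a local CDM instance
n_patients = conn.execute(QUERY, {"concept_id": 201826}).fetchone()[0]
print(f"Patients with the condition: {n_patients}")
```

Because every data partner shares the same tables and vocabulary, the query string itself is the unit of collaboration: each site runs it locally and only aggregate results travel.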

This standard has gained momentum in Europe, largely due to a Europe-wide project known as EHDEN (European Health Data & Evidence Network), which incentivised adoption by giving funding to organisations that moved to the OMOP standard. These funds facilitated the transition to OMOP, so by participating in this initiative you’re automatically aligned with network research endeavours.

With all this in mind, our journey began on a promising note. Currently, in Spain there are approximately 53 data partners, all using OMOP.

It’s clear that OMOP established itself due to the absence of a standardised research framework. I firmly believe that with the openEHR components at our disposal, we can achieve a standardised environment that the research community needs. 

Within openEHR, we’ve already established a unified data model, encompassing various aspects and shared quality standards that apply to all. We’ve also adopted a common protocol, creating ample opportunities for collaborative research with other openEHR users. As an example, a query submitted to my openEHR CDR can be relayed to yours, allowing both of us to examine patient data related to a specific condition.
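As an illustration of that kind of relayed query, here is a hedged sketch using the AQL endpoint of the openEHR REST API. The base URL is a placeholder, and the archetype paths are examples that would depend on local templates:

```python
import requests

CDR_BASE = "https://cdr.example.org/openehr/v1"  # placeholder CDR base URL

# AQL sketch: list EHR ids with a problem/diagnosis entry matching a given code.
# The archetype ID and paths are illustrative; real queries depend on local templates.
AQL = """
SELECT e/ehr_id/value
FROM EHR e
  CONTAINS COMPOSITION c
    CONTAINS EVALUATION d[openEHR-EHR-EVALUATION.problem_diagnosis.v1]
WHERE d/data[at0001]/items[at0002]/value/defining_code/code_string = '44054006'
"""

resp = requests.post(f"{CDR_BASE}/query/aql", json={"q": AQL})
resp.raise_for_status()
for row in resp.json().get("rows", []):
    print(row)
```

The point is the same as with OMOP’s shared SQL: because the archetypes and the query language are shared, the same AQL can in principle be relayed to any conformant CDR.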

With just a few remaining issues to tackle in openEHR, we find ourselves in this privileged position where we can engage in collaborative research based on the openEHR platform and there’s no need to reinvent standards, since everything required for research purposes is readily available within openEHR.

But how does OMOP intersect with openEHR? 

That’s a good question. The core concept behind OMOP was to work with real-world data, attempting to extract historical data from a multitude of sources.

That’s where openEHR comes in: it’s the industry standard for clinical data and used by everyone. The main idea here is to repurpose this data for research, which is what I find challenging. What’s interesting is determining where data contained in openEHR fits into OMOP. For instance, if we have data following the medication exposure archetype, it’s categorised as a drug exposure in OMOP. If we have a lab result in openEHR, it becomes a measurement in OMOP. This is a gradual process involving various archetypes – in the end, around 34 structures were involved. Our initial goal was to cover approximately 60% of a typical patient summary, addressing diagnostic issues, medications, standard measurements, and so on.
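A minimal sketch of that domain mapping, with illustrative archetype IDs standing in for the full set of structures:

```python
# Illustrative openEHR archetype -> OMOP CDM table mapping; the archetype IDs
# are examples only, and a full mapping covers around 34 structures.
ARCHETYPE_TO_OMOP = {
    "openEHR-EHR-EVALUATION.problem_diagnosis.v1": "condition_occurrence",
    "openEHR-EHR-ACTION.medication.v1": "drug_exposure",
    "openEHR-EHR-OBSERVATION.laboratory_test_result.v1": "measurement",
}

def omop_table(archetype_id: str) -> str:
    """Return the OMOP CDM target table for an archetype; OMOP's generic
    'observation' table is used here as an assumed fallback domain."""
    return ARCHETYPE_TO_OMOP.get(archetype_id, "observation")
```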

To accomplish this, we take archetyped data, examine the code or the semantics of the element in openEHR, and decide whether it can be translated or requires a mapping to OMOP vocabularies. By making these transformations, data can be directly extracted into OMOP.

In some cases, like a laboratory measurement, the code for the measurement is in the data and not in the archetype definition. The process is quite straightforward, and the beauty of it lies in its interoperability, meaning it’s consistent for everyone. Once the community reaches a consensus on what can be mapped from the archetypes to OMOP, it’ll become a widely agreed-upon and shared practice.
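To illustrate that per-record case, here is a hedged sketch in which the analyte’s code is read from the data itself and resolved against a stand-in vocabulary. The element shape, the helper function, and the concept_id are all assumptions for illustration:

```python
# Tiny stand-in vocabulary: a real system would query the OMOP concept tables
# or a terminology service. The concept_id is illustrative only.
SOURCE_TO_STANDARD = {("LOINC", "2339-0"): 3004501}  # blood glucose (example)

def to_omop_concept(terminology: str, code: str) -> int | None:
    """Resolve a source code to an OMOP standard concept_id, if known."""
    return SOURCE_TO_STANDARD.get((terminology, code))

def transform_lab_result(element: dict) -> dict:
    """Map one openEHR lab-result element to an OMOP measurement row.
    The element shape is assumed; for lab results the analyte's code
    travels in the data itself, so each record needs its own lookup."""
    coded = element["name"]  # assumed coded-text node carrying the analyte code
    return {
        "measurement_concept_id": to_omop_concept(coded["terminology"], coded["code"]),
        "value_as_number": element.get("magnitude"),
        "unit_source_value": element.get("units"),
    }

row = transform_lab_result(
    {"name": {"terminology": "LOINC", "code": "2339-0"}, "magnitude": 5.4, "units": "mmol/l"}
)
print(row)
```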

For modellers using the CKM (Clinical Knowledge Manager), we propose a mapping definition that – while not yet official – aims to provide a practical resource for applying these mappings within various standards. This initiative seeks to offer modellers a broader perspective beyond the specific model.

Veratech, as a company that collaborates with multiple standards, has experts in various domains, including HL7 FHIR, v2, and CDA. Although we previously engaged extensively with the European ISO 13606 standard, we have shifted our focus fully to openEHR. Nonetheless, our commitment to standardisation, rooted in our origins as a university offshoot, remains the same, and we firmly believe in data normalisation, regardless of the underlying standard.

Our model allows users to select their target destination and transfer data smoothly, while also adhering to the designated models. It’s an approach that aligns with the dual-model philosophy, in that openEHR and the ISO 13606 standard are the most suitable choices for these purposes. The model’s flexibility means it is able to accommodate the construction of the destination CDR and the application of data quality rules, such as during planning. Yes, some inconsistencies may arise, and these issues necessitate validation and verification. But overall, our model facilitates both data transformation and creation, in line with our commitment to normalising data from various sources.

In this way, we are continuously bridging the gap between data and standards, actively engaging with openEHR, FHIR, and related data transformation projects, reflecting our passion for this field.

What other work are you involved in, and what projects is Veratech working on currently? 

Another significant aspect of our work involves code transformation, where we’ve invested considerable effort in applying NLP and integrating legacy data with openEHR for enhanced data processing.

Our work here extends to managing legacy data in existing systems and ensuring its seamless transition to openEHR. We’ve achieved this through extraction layers or structured plans, making the data openEHR-compatible. This process again involves normalisation, enabling users to extract valuable insights from previously underutilised, unstructured data.
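As a rough illustration of such an extraction layer, here is a sketch that lifts a legacy lab row into a flat, openEHR-style key/value structure. The paths are assumptions; real flat-format paths are defined by the operational template in use:

```python
# Legacy row as it might come out of an existing system.
legacy_row = {"patient_id": "12345", "test": "glucose", "value": 5.4, "unit": "mmol/l"}

# Flat, openEHR-style composition payload. The paths are illustrative; real
# flat-format paths depend on the operational template in use.
flat_composition = {
    "lab_report/laboratory_test_result/any_event/test_name": legacy_row["test"],
    "lab_report/laboratory_test_result/any_event/result/magnitude": legacy_row["value"],
    "lab_report/laboratory_test_result/any_event/result/unit": legacy_row["unit"],
}
print(flat_composition)
```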

Regarding OMOP, the proposed tooling for transforming openEHR to OMOP can be applied in various projects. It’s an open-source project developed in collaboration with HiGHmed, available for download, testing, and use. We have presented it to both OMOP and openEHR audiences, and it was well received for its innovative approach. It allows users to efficiently extract and organise patient data from openEHR, offering a versatile solution for doing research with patient information and medications within the OMOP framework.

We’re also engaged in various other projects, initiatives and upcoming solutions like the CatSalut project. I can’t really say too much at this time, but we’re actively collaborating with several companies, supporting their systems and offering expertise in openEHR. Projects like the CatSalut openEHR project are setting a great example by establishing modelling offices, and we want to follow suit. Quality models are instrumental in enhancing the openEHR framework, showing the importance of governance and maintenance.

What are some of the challenges you hope to address in the future?

Generally speaking, we find that everyone runs into the typical challenges associated with data integration and versioning. But even though I consider myself a novice in many ways, one consistent belief we uphold at Veratech is the value of incorporating models into our processes. This strategic use of models, particularly in data management, brings significant benefits such as enhanced quality and ease of decision-making: models provide a clear pathway for formulating queries and extracting valuable insights, eliminating the need for constant consultation or uncertainties.

Yes, there are occasional setbacks or uncertainties, but I think that having a well-defined model and employing the right tools can dramatically enhance our understanding and use of the available data. This clarity minimises the need for extensive guidance or additional extensions and makes for a more efficient workflow. It not only guarantees data quality and security but also means we can explore data without hindrance, even when the information may appear obscure at first. This approach is pivotal in our operations, and we consider it a crucial aspect of our work.

Our consulting work primarily revolves around addressing data transformation challenges and bridging gaps in expert knowledge. Integrating standards can be complex, and our role involves making necessary agreements to ensure compatibility.

