Clinical Terminology Shock and Awe


Fifth Annual Workshop of the Clinical and Translational Science Ontology Group

Announcement

Are clinical terminologies and other healthcare data standards realizing their goals of system interoperability and data compatibility? Do they enhance or detract from EHR usability? How usable are the terminologies and standards themselves? Can systems developers understand them sufficiently well to be able to incorporate them successfully into EHR design? Can clinicians understand them well enough to reliably communicate to both computers and humans? Can researchers benefit from these standards? Do they enable translational science? Do they support or inhibit research reproducibility? What work remains to be done? What approaches are needed to realize the vision of interoperability and data compatibility?

The Clinical and Translational Science Ontology Group invites you to join us this September in Buffalo as we assess the state of the art in clinical terminologies and ontologies and build a research agenda for closing the "interoperability" and "data compatibility" gap. Our keynote speaker will be Dr. Stefan Schulz, who will address the reliability of professional SNOMED CT coding and what ontological approaches might help to improve it.

Date

September 7-8, 2016

Venue

Ramada Hotel, Amherst, NY

Special "UB" room rate: $89 (2 queen beds), $99 (1 king bed)

Schedule (Preliminary Draft): Day 1

Wednesday - Morning

8:00am Registration and Breakfast

9:00am The Electronic Health Record: A Survey of Problems with Special Reference to the Research Data Needs of Clinical and Translational Science

Speakers will include: Ross Koppel (Penn), Barry Smith (Buffalo)

We will focus on the three broad families of problems identified by Koppel in [1]: data standards, interoperability, and usability.

1. Data Standards – the format one uses to record the collected medical information:

  • There were several available ontologies and data standards for defining almost all of the measures used in medicine in 2009. We could have chosen one and insisted that any system that could receive incentives and subsidies had to use that data standard. Without data standards, interoperability becomes almost impossible. Of course, flexibility could have been built into that process. For example, any system could be installed in 2009–2010, but that system would have to incorporate the unified data standards within a year.
  • … without unified data standards we cannot share information across systems; we fail to achieve real interoperability. The systems create towers of Babel and those towers become isolated from each other; a noisy but deaf city.

2. Interoperability: sending information from one system to another –

  • The problem of interoperability has been mastered in electronics and almost every other industry for over 40 years, often for several hundred. The major barrier in HIT was the aforementioned lack of a unified standard and the refusal of vendors to select a method of data transmission. Again, selecting any of the available methods in 2009 would have enabled the transmission and collection of medical information – a core, but still missing, virtue of HIT. Several arguments are offered for the industry’s inability or refusal to create its own interoperability protocols or for its lack of agreement on existing interoperability protocols:

  • Vendors benefit from sales of entire suites of products …. By not allowing a vendor’s software and/or hardware to interact with other vendors’ systems, a vendor ensures sales of a combined package.
  • Because these systems are so expensive, because implementing them is three to five times more than just the initial software and hardware costs, and because the implementation process takes 3–5 years, opportunities for buyer remorse are limited or made unacceptable. The buyer is locked in; often wed to that system for a decade. The vendors thus seek to capture market share as soon as possible, and are encouraged to rush HIT products to market before they are sufficiently tested. … The vast funds involved, and the consequential career implications of those participating in HIT purchases enhance intimidation of critics and those who report problems with the technology. The general faith in technology and the sincere desire to see HIT improve medical workflow encourages so many to define critics as technophobes, incompetents, and non-team players.
  • Data loss threats: lack of interoperability makes switching HIT systems perilous, with dangers of massive data loss, which would be a catastrophic failure for healthcare institutions. … As with data standards, the ONC could have offered flexibility in the timing of an interoperability requirement. Thus, for example, any system would be acceptable to purchase in 2009–2010, but all systems would have to be able to use an agreed-upon exchange protocol within a year of installation.

3. Usability: defined in terms of ease of use, learnability, effectiveness, efficiency, error tolerance, engagement, and responsiveness.

  • HIT vendors have agreed that usability is dependent on:
      • The training and skill of the user
      • The implementation of specific systems in specific settings
      • The history of HIT use in any setting and by any user
      • The relationship of a specific system to the other IT systems with which it must interact
      • The physical environment (e.g., lighting, noise levels, quality of display screens).
  • All of these factors absolutely influence usability, often profoundly. But none of them should be allowed to obscure the reality that usability is intimately dependent on the design of the system. Moreover, none of these factors means that usability is not measurable. Indeed, there are well-documented scientific methods for measuring usability, including measures that incorporate and acknowledge the other factors that affect use. As a thought experiment, consider automobile safety. No one would deny that a car’s performance and braking ability are influenced by road conditions, the driver’s skill, and the driver’s alertness. Yet it would be absurd to insist that basic automobile design decisions do not seriously affect a car’s stability, safety and braking effectiveness. In contrast to the automobile analogy, HIT vendors have, until recently, defended their lack of focused attention on usability by reiterating the mantra that usability is subjective, too theoretical, or essentially unmeasurable. Some vendors have claimed that there is only scant proof of the relationship between usability and safety. At the same time, and apparently without irony, several vendors also note they have employed usability experts and that their own tests find their systems to be very usable.
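
To make the claim of measurability concrete, here is a minimal sketch that scores one well-documented usability instrument, the System Usability Scale (SUS); the choice of SUS and the sample responses are our own illustrative assumptions, not something drawn from Koppel's analysis.

 # Minimal sketch: scoring the System Usability Scale (SUS), one
 # well-documented usability questionnaire. The responses are invented
 # purely for illustration.
 def sus_score(responses):
     """Return a 0-100 SUS score from ten 1-5 Likert responses."""
     if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
         raise ValueError("SUS needs ten responses on a 1-5 scale")
     contributions = [
         (r - 1) if i % 2 == 0 else (5 - r)  # odd-numbered items sit at even indexes
         for i, r in enumerate(responses)
     ]
     return sum(contributions) * 2.5
 
 # Hypothetical responses from one EHR user session.
 print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 3, 2]))  # -> 77.5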

12:00pm Lunch

Wednesday - Afternoon

1:00pm Olivier Bodenreider (NLM): SNOMED CT as Clinical Terminology Foundry harmonizing LOINC, GMDN, ICNP, ICD-11, OrphaNet and FMA

2:00pm James R. Campbell (University of Nebraska Medical Center): Clinical terminology for personalized medicine: Deploying a common concept model for SNOMED CT and LOINC Observables in service of genomic medicine (Abstract)

3:30pm Break

4:00pm Keynote address by Stefan Schulz: Coding clinical narratives: Causes and cures for inter-expert disagreements

We will investigate the fitness for use of clinical terminologies to enable EHR interoperability. Information extraction from clinical narratives using NLP was identified as an important use case. For this purpose, terminology experts built a gold standard annotation for SNOMED CT and a UMLS extract, and the resulting inter-annotator agreement values were shockingly low. This talk will elucidate typical reasons for disagreement and point out how disagreement can be partially mitigated for SNOMED CT by exploiting its axiomatic basis, which is at least partially built on ontological grounds.
Stefan Schulz is a professor of Medical Informatics at the Medical University of Graz, Austria. Trained as a physician, his research encompasses electronic health records, medical language processing, biomedical terminologies, and the application of formal ontologies for biomedical knowledge representation. He has contributed to the development of clinical terminology standards such as WHO classifications and SNOMED CT.
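
For readers unfamiliar with how agreement between coders is quantified, the sketch below computes Cohen's kappa, one common chance-corrected inter-annotator agreement statistic; the choice of metric and the toy annotations are illustrative assumptions and are not taken from the study Schulz will discuss.

 # Minimal sketch: Cohen's kappa, a common chance-corrected measure of
 # inter-annotator agreement. Coders and labels are invented for illustration.
 from collections import Counter
 
 def cohens_kappa(coder_a, coder_b):
     """kappa = (p_o - p_e) / (1 - p_e) for two coders labelling the same items."""
     n = len(coder_a)
     p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # observed agreement
     freq_a, freq_b = Counter(coder_a), Counter(coder_b)
     p_e = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / (n * n)  # chance agreement
     return (p_o - p_e) / (1 - p_e)
 
 # Two hypothetical coders labelling five diagnosis mentions in a note;
 # they agree exactly on three of the five items.
 coder_1 = ["hypertension", "myocardial infarction", "type 2 diabetes", "hypertension", "asthma"]
 coder_2 = ["hypertension", "myocardial infarction", "diabetes mellitus", "essential hypertension", "asthma"]
 print(round(cohens_kappa(coder_1, coder_2), 2))  # -> 0.52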

Schedule (Preliminary Draft): Day 2

Thursday - Morning

8:00am Registration and Breakfast

9:00am

Possible topics

  • Advancing reproducibility of clinical and translational research (BFO, OBI, LOINC)
  • Advancing interoperability of clinical data generally and of EHR data in particular
  • Improving SNOMED / CCD / c-CDA usability
  • i2b2, PCORnet, OMOP, FHIR and other approaches to clinical data sharing
  • Interoperability
      • The role of CDA
  • Mismatch of EHR data with the needs of clinical and translational research
      • Patient data repositories
      • The issue of coordination across the CTSA
      • The role of CDISC
  • Advancing EHR interoperability (addressing SNOMED / CCD and meaningful use regulations whereby SNOMED CT is required for recording problem list and smoking status, and CCD is required for care summary)
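
As a concrete illustration of the last topic, the sketch below assembles a problem-list entry coded with SNOMED CT, here rendered as a FHIR R4 Condition resource (FHIR being one of the data-sharing approaches listed above); the patient reference and the choice of hypertension as the example problem are illustrative assumptions.

 import json
 
 # Minimal sketch of a SNOMED CT-coded problem-list entry shaped as a FHIR R4
 # Condition resource. The patient reference and the example problem
 # (hypertension, SNOMED CT 38341003) are illustrative assumptions.
 problem_list_entry = {
     "resourceType": "Condition",
     "category": [{
         "coding": [{
             "system": "http://terminology.hl7.org/CodeSystem/condition-category",
             "code": "problem-list-item",
         }]
     }],
     "code": {
         "coding": [{
             "system": "http://snomed.info/sct",  # SNOMED CT code system URI
             "code": "38341003",
             "display": "Hypertensive disorder, systemic arterial (disorder)",
         }],
         "text": "Hypertension",
     },
     "subject": {"reference": "Patient/example"},  # hypothetical patient id
 }
 print(json.dumps(problem_list_entry, indent=2))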

12:00pm Lunch

Thursday - Afternoon

1:00pm-4:00pm Wrap-up sessions (TBD)

Rationale

The CTSA Program has always emphasized the need for data standards to promote sharing and comparison of data across the CTSA Consortium and beyond. Yet creation and adoption of such standards is still painfully slow. Urgent action remains necessary. History shows the high value of standard terms, definitions, and symbols (i.e. ontology) to science. But the creation and adoption of such standards often takes decades. Translational science requires a consistent set of standard ontologies spanning all scales, from molecule to organism to population. But clinical terminologies at the macroscale – such as SNOMED and ICD – inhibit translational science. They are inconsistent with successful micro-scale ontologies such as the Gene Ontology, and they also cannot change rapidly with the advance of science. Furthermore, we will address additional issues with clinical terminologies as they currently exist, specifically the problem that even professional coding with them has poor inter-coder reliability. This situation degrades the quality of terminology-encoded data below acceptable research standards. Lastly, we believe confusing and incoherent terminologies are a barrier to end-user usability of resources like EHRs and the data they produce.

Translational science must settle on standards that evolve in a way that is closely tied to scientific advance. In the case of chemical symbols and SI units, adoption proceeded in three overlapping stages. First came widespread recognition and understanding of the problem. Second, influential stakeholders helped to develop, test, and select appropriate standards. Third, once scientifically useful standards emerged, the community enforced them via peer review. How can we accelerate progress on clinical ontologies through all three stages? How, in other words, can we create and implement standard clinical ontologies that are open and sufficiently well disseminated to achieve consortium-wide adoption?

A key barrier to adoption of ontologies is the widespread perception among IT companies and programmers, especially EHR developers, that ontology is impractical and inaccessible to them, and thus not relevant. The accompanying perception is that merely adopting standard value sets in their proprietary information models is sufficient; it is not. How do we demonstrate the value, and a practical and accessible path forward, for the adoption of BFO / RO / OBI / IAO / OGMS / HPO / GO / HDO / ChEBI / DrOn / OMRSE and other OBO ontologies in EHRs, i2b2, REDCap, and other systems in support of translational science?

This workshop will convene stakeholders interested in identifying ways to harmonize clinical terminology resources with their counterparts at the molecular level and to make substantial progress in implementing them in everyday clinical and research information systems, especially the EHR. The vision is a consistent framework of ontologies that enables interoperability of systems, compatibility of data, and research reproducibility.

Goals

The Clinical and Translational Science Ontology Group was established in 2012 to leverage the use of common ontologies to support different aspects of information-driven clinical and translational research. The focus of this meeting is to explore new and existing uses of common ontologies to support creation, sharing, and analysis of clinical data.

Like its predecessors in the series, this meeting is designed to bring together clinical and translational scientists from across the CTSA Consortium who are interested in using ontologies to promote discoverability and interoperability of biomedical data.

Persons interested in attending or in presenting at the meeting should write to Barry Smith.

Sponsors

Department of Biomedical Informatics, University at Buffalo

National Center for Ontological Research, Buffalo

Organizing Committee

Barry Smith (University at Buffalo)

William Hogan (University of Florida)

Participants

Sivaram Arabandi (Health 2.0, Houston)

Olivier Bodenreider (National Library of Medicine)

Jonathan Bona (Buffalo)

Mathias Brochhausen (Arkansas)

Werner Ceusters (Buffalo)

Kei-Hoi Cheung (Yale / VA Connecticut Healthcare System)

Alexander Diehl (Buffalo)

William Duncan (Buffalo)

Peter Elkin (Buffalo)

Fernanda Farinelli (Minas Gerais, Brazil)

Yongqun "Oliver" He (Ann Arbor)

William Hogan (Gainesville)

Mark Jensen (United Nations Environment Programme)

Ross Koppel (University of Pennsylvania)

Asiyah Lin (Food and Drug Administration)

Sina Madani (MD Anderson Cancer Center)

Øystein Nytrø (Trondheim, Norway)

Edison Ong (Ann Arbor)

Jihad Obeid (Charleston)

Jose Parente de Oliveira (ITA, Brazil)

Rasmus Rosenberg Larsen (Buffalo)

Alan Ruttenberg (Buffalo)

Stefan Schulz (Graz, Austria)

Barry Smith (Buffalo)

Dagobert Soergel (Buffalo)

Ram Sriram (HealthIT, National Institute of Standards and Technology)