Interoperability challenges slow down machine learning in healthcare

September 23, 2018

Big data, machine learning, and interoperability are all topics we’ve been hearing about for many years in health tech. But these banner ideas are deeply intertwined. Machine learning requires large volumes of the right kinds of data, and interoperability plays a large role in whether researchers can get access to the data they need to train models effectively.

At a panel discussion at Health 2.0’s Provider Symposium, moderated by Healthbox Chief Medical Officer Eric Louie, different speakers weighed in on this complex web of issues.

“We’ve got huge oceans of data both in our electronic medical record systems and outside of the system, but we’re still able to derive just little bubbles of clinical meaning from that data,” Matt Menning, director of IHMI engagement at the American Medical Association, said. “Our goal is not just to start to agree on terms and structure for exchanging data, but also get to some standards around what data we’re collecting.”

Many of the current interoperability efforts are focused specifically on the data and data sources that are most useful for clinicians conducting their work. And for those use cases, hospitals are starting to see results.

“Our experience shows us that interoperability is possible and it is happening,” Dr. Steven Lane, clinical informatics director at Sutter Health, said. “I work in an organization in Silicon Valley, so we’re very wired. But we’ve been interoperating, if you will, for the last eight years. We’ve impacted more than four million patients in our region, we’ve done over 100 million data exchanges, and our rate of document exchange is 5 million a month. We are interoperating.”

The problem comes when researchers want to use a large set of data, drawn from multiple institutions, for machine learning research; pooling data in this way is usually necessary to train an unbiased model. An audience member laid out the problem during the Q&A of one of the sessions.

“If I want to look at a note from Dignity Health, that kind of interoperability works pretty well,” he explained. “But if you need to train a model with a million data points, it’s not really optimized for that. And that’s the feedback I hear from people working on these machine learning projects. They need a well-organized data lake that can easily be transformed [into] a feature matrix that can power the model. And if you want to use the data that are supported by current interoperability frameworks, it just takes a lot more work and it’s a lot less efficient.”
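
To make the friction concrete, here is a minimal sketch of what “a lot more work” can look like when a feature matrix has to be assembled through a standards-based exchange API rather than read from a prepared data lake. It assumes a FHIR R4 server; the endpoint URL, patient IDs, and LOINC code are illustrative placeholders, not anything cited on the panel.

```python
# A sketch only: assumes a FHIR R4 server at a placeholder URL; patient IDs
# and the LOINC code are illustrative, not from the panel.
import requests
import pandas as pd

FHIR_BASE = "https://example-hie.org/fhir"  # hypothetical HIE endpoint

def fetch_observations(patient_id: str, loinc_code: str) -> list:
    """Fetch Observation resources for one patient and one lab code."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": loinc_code},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR Bundle: results live under entry[].resource
    return [entry["resource"] for entry in bundle.get("entry", [])]

def build_feature_table(patient_ids: list, loinc_code: str) -> pd.DataFrame:
    """One round trip per patient, then manual flattening into rows --
    the extra work a pre-built data lake would have done already."""
    rows = []
    for pid in patient_ids:
        for obs in fetch_observations(pid, loinc_code):
            rows.append({
                "patient": pid,
                "time": obs.get("effectiveDateTime"),
                "value": obs.get("valueQuantity", {}).get("value"),
            })
    return pd.DataFrame(rows)
```

For a demo this works fine; at a million data points, the per-patient round trips and hand-rolled flattening are exactly the inefficiency the questioner describes.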

Lane advised the questioner to “go where the light is” and work with the data that’s available through health information exchanges (HIEs) for now. He believes the quality of the data that is interoperable between networks is getting better, both as technology advances and as the ONC qualifies its standards.

David King, executive director of the HHS Global Innovation Institute, spoke about some of his challenges using clinical data to develop a machine learning algorithm for medical imaging, specifically for a recently FDA-cleared system for detecting wrist fractures. He said that sometimes HIPAA rules and machine learning model needs can pull in different directions.

“We licensed a de-identified set of our data, so it’s all de-identified before we license it to anyone. The process of de-identification is not trivial either,” he said. “One of the specific challenges is that dates of service are considered identifiable, so how do you get longitudinal data, how do you establish the model’s timeline, with a lot of stripped-down data? In some cases, the rules around privacy can make things like establishing chronologies and looking at time frames very difficult in the modeling process.”
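
King didn’t describe his team’s workaround, but one common response to exactly this tension is consistent per-patient date shifting: every date for a given patient moves by the same random offset, so true dates of service (a HIPAA identifier) are removed while the intervals between events, the longitudinal signal a model needs, are preserved. The sketch below assumes that technique; the salt and patient IDs are illustrative placeholders.

```python
# A sketch only: consistent per-patient date shifting is a common
# de-identification technique, not necessarily the one King's team used.
# The salt and patient IDs are illustrative placeholders.
import hashlib
from datetime import date, timedelta

MAX_SHIFT_DAYS = 365
SECRET_SALT = b"store-and-rotate-securely"  # hypothetical secret

def patient_offset(patient_id: str) -> timedelta:
    """Deterministic pseudo-random offset, stable for a given patient."""
    digest = hashlib.sha256(SECRET_SALT + patient_id.encode()).digest()
    days = int.from_bytes(digest[:4], "big") % (2 * MAX_SHIFT_DAYS + 1) - MAX_SHIFT_DAYS
    return timedelta(days=days)

def shift_date(patient_id: str, service_date: date) -> date:
    """Remove the true date of service while keeping relative timing."""
    return service_date + patient_offset(patient_id)

# Intervals survive the shift: visits 30 days apart stay 30 days apart,
# so longitudinal modeling is still possible on the de-identified data.
d1 = shift_date("pt-001", date(2017, 3, 1))
d2 = shift_date("pt-001", date(2017, 3, 31))
assert (d2 - d1).days == 30
```

Because every date for a patient moves by the same offset, sequence and spacing are preserved, which answers the “how do you get longitudinal data” question; calendar context such as seasonality is still lost, part of the trade-off King describes.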

If interoperability is a source of problems for AI, it’s also possible that AI can be a source of solutions for interoperability. That’s what Dr. Anthony Chang, chief intelligence and innovation officer at Children’s Hospital of Orange County, suggested.

One possibility, he said, is AI modeled more on the peripheral nervous system than the central nervous system, in which chips in individual devices could configure data as needed as they send it out.

“I think 10 years from now we’ll be looking back at this meeting and saying ‘Why were we talking about all these issues?’” he said. “Because I think AI is going to be a tremendous paradigm shift in healthcare. I think … AI is going to help us accelerate this process of solving some of these logistical problems and data problems.”

MobiHealthNews