
Q+A with Micky Tripathi

The assistant secretary for technology policy at Health and Human Services discusses the regulatory landscape and dangers of AI in the health care space.

A brain wave is shown at the iMediSync booth during the CES tech show in Las Vegas in 2023. (AP Photo/Rick Bowmer)
Philip Athey
Nov. 26, 2024, 7:50 p.m.

Just over two years ago, OpenAI publicly launched ChatGPT, kicking off the artificial-intelligence revolution. The new technology has touched just about every field, often outpacing regulation, and health care has frequently been the leading edge where it rushes through holes in the current regulatory system. The Biden administration reacted to the new threats and opportunities posed by the technology with a 2023 executive order, part of which reinforced work the Health and Human Services Department was already doing to wrangle AI. Recently Micky Tripathi, the agency's chief AI officer, sat down with Philip Athey to discuss the actions HHS is taking to handle the AI revolution.

How is the agency using existing regulations and the new AI executive order from Biden to affect how AI hits the medical market?

The Department of Health and Human Services actually had a chief AI officer before the executive order. We established the chief-AI-officer role in the department itself, and then we elevated it to assistant secretary, my level, as part of the executive order.

One is, [the Food and Drug Administration] has long-standing regulations related to the safety and effectiveness of medical devices. That authority goes back to the Food, Drug, and Cosmetic Act of 1938, updated in 1976 to specifically incorporate medical devices—and, as part of that, AI as a medical device. There are two categories: software in a medical device, or software as a medical device, which is explicitly about AI-based technologies that, as part of a device or by themselves, are defined as a medical device. That's long-standing regulation. The FDA has now approved over 1,000 AI-based technologies for use in the market.

The second thing I'll point to is that my agency finalized regulations in December that go into effect on January 1, 2025, and require every certified electronic-health-record vendor, which covers 97 percent of hospitals and over 80 percent of physician offices, to provide transparency to the clinicians, the hospitals, the physicians who are using those systems about the AI-enabled technologies that are in the electronic health record. Often we're hearing that these technologies are buried in the electronic health record, and the doctors are like, “I don't know what this is, and I need to have greater awareness of it.” We require what we colloquially call a nutrition label in the industry, known as a model card, which has 31 data elements. The idea is: you need to give the doctor information she can use to determine whether that particular AI technology is appropriate to her particular care setting. It may have been developed in a geographic region that's very different, from a patient-population perspective, from the one you're using it in. For example, it may have been designed for a therapeutic area that's a little bit different from the one you're trying to use it for.

Micky Tripathi, assistant HHS secretary for technology policy

A few weeks ago Elon Musk asked people on Twitter to send their medical images to his AI program Grok “for analysis.” What authority does your agency have to regulate AI programs that can be used for medical purposes but weren’t developed specifically as a medical device?

We've got a moment in time now, a pretty significant moment, where you've got two things that are really important as it relates to AI. One, it's a very powerful set of technologies. But two, it's very easily accessible, which is why you're seeing tremendous adoption and diffusion of that technology compared to almost any other.

Certainly, that's starting to press the boundaries of the existing regulations and statutes that cover these kinds of things, and we need to identify where existing statutes actually cover a certain set of activities. Whether it's AI or just an Excel spreadsheet, HIPAA still applies. On the other hand, there are certainly areas where you're now seeing growth in uses of AI-based technologies that fall outside different parts of the regulatory frameworks we have.

We have a strategic plan underway right now that is scheduled to be released in January, looking across the department. We're looking at five primary domain areas, as we call them: medical-product research and discovery—that's development of new drugs, new devices, new therapies; regulation of the safety and effectiveness of medical products, which is FDA approval of drugs and devices; health care delivery, which covers financing as well as the health services provided in the doctor's or physician's office; human services, which I would argue gets too little attention for its great importance; and then public health.

What we're looking at is: What are the industry trends? What is the regulatory and statutory authority that the department has in each of those areas? Where are there issues that might fall outside of those areas? And then, how do we want to think about that? How do we first flag and identify those areas, and then second, how do we want to start thinking about the ways that we might be able to address those in the future?

What regulations are there currently to handle generative AI operating in the health space?

There's nothing. The FDA has not approved any generative-AI-based solution. The statute that provides the authority for FDA approvals goes back to 1976, and that was a time when they didn't contemplate that devices would be pure software. The idea of a device was something like an X-ray machine. They didn't anticipate the rapid software-development cycles we have now. They didn't anticipate that you would have systems that are essentially self-evolving, and they also didn't anticipate that we might want a regulatory approach that isn't event-based, one that really requires more continuous monitoring.

Right now, the reporting that goes back is really based on whether there's a death, a serious injury, or a software malfunction. But with these self-evolving technologies, it's like, well, no, wait a minute—that needs more regular, time-based monitoring of the system.

Those are all areas now that we're looking really hard at as a department to say, "What is the right approach to the things that weren't contemplated in the various types of statutory authorities?"

Currently we have the Health Insurance Portability and Accountability Act, or HIPAA, that protects medical privacy in certain circumstances. But some of these devices are not protected by HIPAA. What are the gaps there, and how is the agency trying to close them?

HIPAA is a law that has stood us very well over many, many years, considering it was passed back in 1996. But I think one of the things that too few people understand is that information in the hands of the patient is not covered by HIPAA. The minute you have taken control of it, downloaded it into your own possession, is the minute you no longer have HIPAA protections.

That has been a growing area we've been monitoring, for sure, as you see more and more apps come into play that really live outside of HIPAA regulations. So that's not a new issue, but now, with AI-based technologies, it can become a bigger issue. People just need to be very cognizant of the fact that they have taken full responsibility for the privacy of that information.

Two areas that I think we all need to be looking at very carefully are, one, that AI-based technologies are data hungry. They are based on data. If you don't have data, you don't have AI. That has made developers in the market very data-hungry.

I think the other area that certainly merits further consideration is: Does AI-based technology make it easier to re-identify information in ways we weren't able to before? There are specific provisions in HIPAA saying that once you've de-identified information according to HIPAA, that information is no longer protected health information, and an organization can do whatever it wants with it. Arguably, now, with these more advanced technologies and greater availability of other information about us that people can just get in the public domain—like our driver's-license information—combining that information might make it easier to re-identify data in ways that weren't possible just a short five years ago.

To fix the re-identification issue and other gaps in health care privacy posed by AI, does that require Congress to go back and amend the law, or is that something the agency can handle through the rulemaking process?

I certainly wouldn't want to weigh in on the statutory-authority limits of HIPAA or the FDA regulations or anything like that. But I will say that we as an administration have said repeatedly that we believe very strongly in a bipartisan statutory approach to data privacy in general—not just health care. We think that is a need now: a bipartisan law that protects data privacy in general, which would certainly cover health information.

If you were in charge of Congress for the day, what kind of health regulation, whether it is a new law or a new authority, would you enact?

Well, first off, I wouldn't like to be in charge of Congress. ... It doesn't look like a fun job.

The importance of a national data-privacy law keeps growing. Americans—red state, blue state, wherever they are—are increasingly concerned about how their data is being used.

Often it's being used in ways they want, for convenience. Every single day we give up a little bit of our privacy so that we can use Google and social media. We do that because, in the risk-benefit trade-off, it feels worth it: I get a lot of convenience. I allow Amazon to store my information because it's really convenient for me the next time I shop.

On the other hand, I think people are certainly seeing areas where these technologies are being used against them, and they have grown concerned about that. A nationwide approach to data-privacy law, I think, would probably be one of the most important things, if not the most important thing, we need as a country.
