Now that the Biden administration has finally issued its long-awaited executive order on AI, it’s worth considering just how tricky it will be for the lumbering bureaucracy to keep up with a technology that can quite literally teach itself to change and adapt to its environment.

POLITICO’s Daniel Payne tackled that very subject in a report published over the weekend on how AI products are reaching doctors’ offices and hospitals across the country without the extensive testing the government usually requires for new medical tools. That poses a big problem when the unanswered questions surrounding privacy, bias and accuracy in AI are applied to life-and-death situations.

Suresh Venkatasubramanian, a Brown University computer scientist who helped draft last year’s Blueprint for an AI Bill of Rights, told Daniel, “There’s no good testing going on and then they’re being used in patient-facing situations — and that’s really bad.” Even one CEO of an AI health tech company expressed concern that users of his software “would start just kind of blindly trusting it.”

Troy Tazbaz, the director of the Food and Drug Administration’s Digital Health Center of Excellence, acknowledged that the FDA (which falls under the Department of Health and Human Services) needs to do more to regulate AI products even as they hit the market, saying that might require “a vastly different paradigm” from the one in place now. Daniel writes that Tazbaz foresees “a process of ongoing audits and certifications of AI products, hoping to ensure continuing safety as the systems change” — echoing the type of “agile” regulatory framework proposed in a recent book by former Federal Communications Commission Chair Tom Wheeler.

The White House’s new executive order, in the meantime, looks to spur the agencies forward. As POLITICO’s Mohar Chatterjee and Rebecca Kern wrote in their exclusive look at the order published Friday, it directs HHS to develop a “strategic plan” for AI within the year. HHS is also directed to determine whether AI meets the government’s standards when it comes to drug and device safety, research, and promoting public health, and to evaluate and mitigate the risk that AI products could discriminate against patients.

According to the draft of the executive order, HHS’s AI “Task Force” will be responsible for the “development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery,” “taking into account considerations such as appropriate human oversight of the application of AI-generated output.” It’ll also be required to monitor AI performance and outcomes as it would for any other product, develop a strategy to encourage AI-assisted discovery of new drugs and treatments, and help determine the risks of AI-generated bioweapons.

The EO doesn’t necessarily set new rules around AI safety and privacy concerns in health care, but rather sets a plan in motion to create them. (Eventually. Maybe.) But even with this most sensitive — and highly regulated — of policy subjects, experts and lawmakers are skeptical about when or whether actual legislative action might be taken once that plan is in place. Brad Thompson, an attorney at Epstein Becker & Green who counsels companies on AI in health care, told Daniel the legislative “avenue just isn’t available now or in the foreseeable future.” Rep. Greg Murphy (R-N.C.), co-chair of the House’s Doctors Caucus, said state governments should take the lead.
So in the meantime, it’ll be up to the “task force” mandated by the executive order to suggest rules for how to deploy and regulate AI tools in health care most responsibly, and to the medical sector to follow them. As Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists, told Mohar ahead of the order’s publication today: “Issuing the EO is the first step. Now comes the budget battle, then comes the implementation.”