The Trump administration is proposing new rules to guide future federal regulation of artificial intelligence used in medicine, transportation and other industries.
But the vagueness of the principles announced by the White House is unlikely to satisfy AI watchdogs who have warned of a lack of accountability as computer systems are deployed to take on human roles in high-risk social settings, such as mortgage lending or job recruitment.
The White House said that in deciding regulatory action, U.S. agencies “must consider fairness, non-discrimination, openness, transparency, safety, and security.” But federal agencies must also avoid setting up restrictions that “needlessly hamper AI innovation and growth,” reads a memo sent to U.S. agency chiefs by Russell Vought, acting director of the Office of Management and Budget.
“Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits,” the memo says.
The rules won’t affect how the federal government itself, including law enforcement agencies, uses facial recognition and other forms of AI. They are limited to how federal agencies devise new AI regulations for the private sector. There’s a 60-day public comment period before the rules take effect.
“These principles are intentionally high-level,” said Lynne Parker, U.S. deputy chief technology officer at the White House’s Office of Science and Technology Policy. “We purposely wanted to avoid top-down, one-size-fits-all, blanket regulations.”
The White House said the proposals unveiled Tuesday are meant to promote private sector applications of AI that are safe and fair, while also pushing back against stricter regulations favored by some lawmakers and activists.
Federal agencies such as the Food and Drug Administration and the Federal Aviation Administration will be bound to follow the new AI principles. That makes the rules “the first of their kind from any government,” Michael Kratsios, the U.S. chief technology officer, said in a call with reporters Monday.
Rapid advancements in AI technology have raised fresh concern as computers increasingly take on jobs such as diagnosing medical conditions, driving cars, recommending stock investments, judging credit risk and recognizing individual faces in video footage. It’s often not clear how AI systems make their decisions, leading to questions of how far to trust them and when to keep humans in the loop.
Terah Lyons of the nonprofit Partnership on AI, which advocates for responsible AI and has backing from major tech firms and philanthropies, said the White House principles likely won’t have sweeping or immediate effects. But she said she was encouraged that they detailed a U.S. approach centered on values such as trustworthiness and fairness.
“The AI developer community may see that as a positive step in the right direction,” said Lyons, who previously worked for the White House science and technology office during the Obama administration. “It’s a little bit hard to see what the actual impact will be.”
What’s missing, she added, are clear mechanisms for holding AI systems accountable.
Another tech watchdog, New York University’s AI Now Institute, said it welcomed new boundaries on AI applications but it “will take time to assess how effective these principles are in practice.”
Kratsios said he hopes the new principles can serve as a template for other democratic institutions, such as the European Commission, which has put forward its own AI ethics guidelines, so they can preserve shared values without impeding the tech industry.
That, he said, is “the best way to counter authoritarian uses of AI” by governments that aim to “track, surveil and imprison their own people.” Over the past year, the Trump administration has sought to penalize China for AI uses the U.S. considers abusive.
The U.S. Commerce Department last year blacklisted several Chinese AI firms after the Trump administration said they were implicated in the repression of Muslims in the country’s Xinjiang region. On Monday, citing national security concerns, the agency set limits on exporting AI software used to analyze satellite imagery.