AI companies must submit safety test results to the US government.

Washington — The Biden administration will require developers of major AI systems to disclose their safety test results to the government.

On Monday, the White House AI Council will review progress on President Joe Biden's three-month-old executive order governing the fast-changing technology.

Among the order's 90-day goals was a requirement, invoked under the Defense Production Act, that AI companies share safety test data with the Commerce Department.

In an interview, White House AI special advisor Ben Buchanan said the government wants “to know AI systems are safe before they’re released to the public — the president has been very clear that companies need to meet that bar.”

Software companies have agreed on broad categories of safety tests but not yet on a common standard. Biden's October order directs the National Institute of Standards and Technology to develop a uniform framework for assessing safety.

AI has become a leading economic and national security concern for the federal government, given the investment and uncertainty set off by emerging AI tools such as ChatGPT, which can generate text, images, and audio. The administration is also weighing congressional legislation and working with other governments, including the European Union, on rules for managing the technology.

The Commerce Department is drafting a rule on U.S. cloud providers that serve foreign AI developers. Nine federal agencies, including the Departments of Defense, Transportation, Treasury, and Health and Human Services, have completed risk assessments covering AI's use in critical infrastructure such as the power grid.

Federal agencies have stepped up hiring of AI specialists and data scientists. “We know AI has transformative effects and potential,” Buchanan added. “We’re not trying to upend the apple cart there, but we are trying to prepare regulators to manage this technology.”