US government to launch AI Safety Institute, develop standards for biotech R&D

The Biden administration is launching a federal organization dedicated to evaluating the safety of artificial intelligence, covering its use in healthcare as well as the broader societal concerns that have come to the fore amid the rapid rise of generative AI programs.

Operating under the National Institute of Standards and Technology, better known as NIST, the new U.S. AI Safety Institute will develop technical guidance for future regulatory rulemaking and enforcement efforts, according to the White House.

The project was announced by Vice President Kamala Harris during the Global Summit on AI Safety in London, where the host country is planning a similar effort and future partner for the U.S. organization: the aptly named U.K. AI Safety Institute.

The news follows an executive order signed earlier this week by President Joe Biden, which aims to establish guardrails for AI security and would require developers to share their safety test results with the U.S. government.

Meanwhile, NIST “will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks,” the White House said in its announcement of the executive order. The order also covers the use of AI in life science research by calling for synthesis screening standards for computer-generated compounds; agencies that fund biotech R&D will then tie those standards to federal grant funding.

AI-generated drug compounds have yet to be approved by the FDA, but several have advanced into clinical testing faster than compounds developed through traditional R&D methods. A number of companies have demonstrated that AI can shave months to years off of early-stage development timelines, though not every such compound has gone on to succeed in human trials.

Separately, the FDA this month published a master list of AI- and machine learning-powered medical devices that have received 510(k) clearances, de novo authorizations and premarket approvals, spanning hundreds of products through July of this year and dating as far back as 1995.

According to the agency, the number of AI/ML device authorizations jumped 39% in 2020 over the year before but slowed to 15% growth in 2021. For 2023, the agency projects a roughly 30% increase in clearances and approvals compared with 2022.

Radiology has claimed a large and growing share of those green lights, with dozens granted in the past two years for computer vision algorithms and programs that analyze MRI, CT and X-ray scans.

Outside of healthcare, the Biden administration also plans to tackle fraudulent phone calls that use AI-generated voice models, and it will call for digital signatures and watermarking to help identify AI-made content.

A separate draft guidance, issued through the White House’s Office of Management and Budget, will address the government’s own use of AI, spanning operations in healthcare, education, employment, federal benefits, law enforcement, immigration, transportation and critical infrastructure.