The Biden administration said on Tuesday it was taking the first step toward writing key standards and guidance for the safe deployment of generative artificial intelligence and for testing and safeguarding such systems. The Commerce Department’s National Institute of Standards and Technology (NIST) said it was seeking public input by Feb. 2 on conducting the testing crucial to ensuring the safety of AI systems, which can produce text, photos, and videos in response to open-ended prompts. It is also working to set best practices for AI risk assessment and management. The White House, seeking a cohesive National AI Strategy, tasked NIST with setting these guidelines in an executive order issued last month.
Generative AI has sparked excitement among some and fear in others, with supporters saying the technology can raise productivity and spur innovation while enhancing human creativity. But critics worry it could make some jobs obsolete, upend elections, and potentially overpower humans or have catastrophic effects. The Biden executive order aims to head off those dangers and “ensure America leads the world in seizing the promise and managing the risks of AI.”
Other efforts by the administration include encouraging federal agencies to build talent and expertise in the field and to take steps to reduce the risk of bias and discrimination in hiring. The administration is also promoting research into technologies that can help detect misinformation, and it plans to set standards for watermarking AI-generated content so consumers can distinguish what was produced by machines from what was created by humans.
In addition, it is directing the Office of Science and Technology Policy (OSTP) to convene a cross-agency task force to explore ways to enhance artificial intelligence training programs and to improve retention of scientists in the field. The administration is also pushing for the development of a national network of AI laboratories where researchers can share best practices and experiment with new technologies. It also plans to ease immigration barriers for highly skilled workers.
The order also requires agencies that fund AI projects to develop standards for that work, and it sets out goals such as reducing the risk that social media algorithms make people more addicted to platforms and curbing the spread of false information and “deepfake” videos. It also urges the government to invest in research into encryption tools to protect privacy online.