Scientists Turn to AI While Lawmakers Consider Guidelines
One of the primary obstacles to using AI in research is the difficulty of validating the accuracy of a model’s work. Lawmakers and government scientists have proposed solutions to this problem, and AI developers are chipping away at it. For example, OpenAI’s most recent model, o1, was built to use a “chain of thought” that offers some insight into the reasoning behind its conclusions. In the meantime, researchers are finding their own ways to use “black-box” AI and partially validate its results.
There are currently no established standards for validating the accuracy of black-box AI models, but lawmakers and scientists have expressed support for the development of such standards. In July, Sens. John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV) introduced the Validation and Evaluation for Trustworthy Artificial Intelligence Act, which would require the director of the National Institute of Standards and Technology to develop voluntary guidelines for validating and evaluating AI systems.
At the same time, other lawmakers have introduced major legislation that would bolster the use of AI in science, such as the Department of Energy AI Act. Researchers, federally funded and otherwise, are also continuing to explore AI as a research tool. For instance, the Climate Modeling Group at the Department of Energy, led by Peter Caldwell, is using AI models called emulators to run thousands or millions of physics-based simulations, helping to distinguish climate change signals from normal weather variability.
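The emulator idea can be sketched in a few lines: an emulator is a cheap statistical surrogate trained on a modest number of expensive physics-based simulator runs, then used to generate a very large ensemble for separating a forced signal from internal variability. The toy "simulator," the quadratic fit, and all numbers below are illustrative assumptions, not the DOE group's actual method or model.

```python
# A minimal emulator sketch, assuming a toy scalar "climate" simulator.
# Every function and parameter here is hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulator(forcing):
    # Stand-in for a physics model: a mean response to the forcing
    # plus random "internal variability" noise.
    return 1.5 * forcing + 0.3 * forcing**2 + rng.normal(0.0, 0.5)

# 1) Run the expensive model only a modest number of times.
train_x = np.linspace(0.0, 4.0, 40)
train_y = np.array([expensive_simulator(x) for x in train_x])

# 2) Fit a cheap emulator to those runs (here: a quadratic least-squares fit;
#    real emulators are typically neural networks or Gaussian processes).
coeffs = np.polyfit(train_x, train_y, deg=2)
emulator = np.poly1d(coeffs)

# 3) Use the emulator to produce a huge ensemble at negligible cost, then
#    compare the forced response between high- and low-forcing regimes.
forcings = rng.uniform(0.0, 4.0, 100_000)
ensemble = emulator(forcings) + rng.normal(0.0, 0.5, forcings.size)

signal = ensemble[forcings > 3.0].mean() - ensemble[forcings < 1.0].mean()
print(f"estimated forced response: {signal:.2f}")
```

The key trade is in steps 1 and 2: a few dozen expensive runs buy a surrogate that can then be queried millions of times, which is what makes the signal-versus-variability comparison statistically feasible.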
Keep reading: https://lnkd.in/eqgsH6Vz