Pitt, Duquesne and CMU join national AI safety consortium
The Biden administration has established a consortium dedicated to artificial intelligence safety, and several universities in Pittsburgh, a hub for technology research, are in the mix.
The U.S. Artificial Intelligence Safety Institute Consortium’s mandate is to find ways to develop and deploy artificial intelligence that’s trustworthy. The initiative comes through the U.S. Commerce Department and has more than 200 member companies and organizations, including Carnegie Mellon University, the University of Pittsburgh and Duquesne University.
The advancement of AI promises enormous potential but also introduces new and dangerous risks, CMU said in a press release announcing the school’s participation.
As part of a Catholic institution, Duquesne University's Grefenstette Center for Ethics in Science, Technology and Law brings a slightly different perspective to the group. The center's director, John Slattery, said he’s especially focused on the moral, ethical and social implications of AI.
Take, for example, ChatGPT, which trains its algorithms on material written by humans in order to generate new text. But the model is not yet discerning: if ChatGPT pulls from bigoted or erroneous writing, what it produces will likely reflect those flaws.
“Companies like OpenAI, like Microsoft and Google, have to work really, really hard to make sure this generative content doesn't have a lot of sort of racist things or biased things or misogynistic things that are coming out of it,” said Slattery, who hopes the consortium will lead to regulations and standards that hold companies accountable for how their AI products affect and shape society.
Other consortium members include Facebook’s parent company Meta, Google and the Rand Corporation — all three have offices in Pittsburgh.