Johan Bollen, chair of Informatics and professor of Informatics and Computing at the Luddy School of Informatics, Computing, and Engineering, remains at the international forefront of efforts to make generative AI safe for the public and for science.
Along with thought leaders Claudi Bockting, Evi-Ann van Dis, Jelle Zuidema, and Robert van Rooij of the University of Amsterdam, Bollen has co-authored an article in the journal Nature exploring what it will take to reap the many benefits of generative AI while avoiding its dangers.
In other words, to embrace the opportunity, but manage the risk.
“This work will affect all of our research efforts because we expect that generative AI and large language models will become integrated into the scientific process as well as the publishing and review process,” Bollen said. “It is therefore of the utmost importance that we have scientists and research-led independent institutions that can regulate and safeguard this technology.”
Bollen and his co-authors argue that the risks of widespread use of generative AI and large language models are real, but that banning the technology is unrealistic.
The article, “Living guidelines for generative AI – why scientists must oversee its use,” describes the outcomes of recent international summits and proposes guidelines and actions needed to make AI safer.
Two summits were organized last spring and summer at the University of Amsterdam’s Institute for Advanced Studies to introduce “living guidelines” and establish the structures to implement them. The summits drew members and representatives of major international organizations, including the United Nations and UNESCO, the International Science Council, the European Academy of Sciences and Arts, the Patrick J. McGovern Foundation (which advises the Global AI Action Alliance of the World Economic Forum), the Organization for Economic Co-operation and Development, and the European Commission.
“We are calling for the implementation of a set of living guidelines that strive for accountability, transparency, and independent oversight by a scientific body that can audit AI systems in cooperation with an independent committee of scientists and stakeholders,” Bollen said. “We are also calling for financial investments in such bodies so they have the resources to test and develop systems.”
Summit participants identified three key principles for using generative AI in research: accountability, transparency, and independent oversight.
Humans must evaluate the quality of generated content; researchers and other stakeholders should always disclose when they use generative AI; and external, objective auditing of generative AI systems is needed to ensure quality and ethical use.
Self-regulation is unlikely to work, the authors contend, given some companies’ multibillion-dollar rush to advance AI research.
The article was a follow-up to another Nature piece Bollen co-authored last February, “ChatGPT: five priorities for research,” which addressed how to respond to the game-changing implications of conversational AI.
Nature, founded in 1869, is the world’s leading multidisciplinary science journal. It publishes peer-reviewed research across all fields of science and technology, with a focus on originality, importance, timeliness, and accessibility.