A multidisciplinary task force of Cornell faculty and staff has issued a report offering perspectives and practical guidelines for the use of generative artificial intelligence (GenAI) in the practice and dissemination of Cornell’s academic research.
The report, published Dec. 15, is a first step toward establishing a set of perspectives and cultural norms for Cornell researchers, research team leaders and research administration staff. The task force was led by Krystyn Van Vliet, vice president for research and innovation.
“I am especially grateful to the task force members for spending their time together to learn, debate and frame practical guidelines – at a time of enormous creative opportunity and concern for ambitious, responsible use of such tools in research,” Van Vliet said. “This reflects thoughtful input from across our campuses in the same year that many of us became more aware of how such tools could be developed or used in our research lives.
“By framing GenAI use in the context of duties we each hold in our research and translation roles as faculty, staff and students,” she said, “the perspectives and cultural norms considered here are starting points for wider discussion at Cornell and beyond.”
Early in the fall semester, Cornell issued a report offering guidance to faculty for teaching in the age of ChatGPT and other GenAI technologies. And on Jan. 5, Cornell issued its third and final GenAI-related report, with guidance on Generative AI in Administration; all three reports are on IT@Cornell’s AI website.
The research report addresses the use of GenAI at four stages of the research process:
- conception and execution – includes ideation, literature review, hypothesis generation and other parts of the “internal” research process by the individual and research team, prior to any public dissemination of ideas or research results;
- dissemination – includes public sharing of research ideas and results, including peer-reviewed journal publications, manuscripts, books and other creative works;
- translation – includes reducing research findings or results to practice, which may take the form of patented inventions or copyrighted works; and
- funding and funding agreement compliance – includes proposals seeking funding of research plans, as well as compliance with expectations of sponsors or the U.S. government policies relevant to Cornell.
As noted in the report, in addition to such ubiquitous features as spell- and grammar-check, AI is already used as a tool in research-related activities such as data analysis and document retrieval, though mainly by those with substantial programming experience. GenAI would make these tools accessible to more people, including researchers and support staff.
“These rapidly evolving technologies have the potential to bring about transformative changes in academic research, but they represent uncharted territory, with great opportunities and significant risks,” said Natalie Bazarova, M.S. ’05, Ph.D. ’09, professor of communication in the College of Agriculture and Life Sciences and associate vice provost in the Office of the Vice President for Research and Innovation (OVPRI). “In our report, we provide guidelines and safeguards to ensure that research is conducted with the highest levels of integrity while also encouraging the exploration of these new tools and GenAI research frontiers.”
Task force member David Mimno, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science, describes his general sense of the technology as “optimistically cautious.”
“While there are a lot of valuable and useful opportunities, which will only grow as people figure out new ways to put systems to use, there’s a lot of uncertainty, rapidly changing technology and fundamental limits,” he said. “Right now we’re in a very dangerous zone where systems are good enough that people will trust them, but not good enough that they should trust them.”
The task force lays out the possibilities, and potential perils, of the emerging technology: “GenAI provides the user a sense of power in its apparent intellectual assistance on demand, which unsurprisingly also vests the user with a need to take responsibility. Academic research groups and projects often include multiple users with different stages of contribution, different degrees of experience and leadership, and different responsibilities to research integrity and translation of research results to societal impact.”
The report includes a Q&A focused on best practices and use cases for each of the four stages of research that may serve as discussion starters for research communities, as well as a summary of existing community publication policies regarding the use of GenAI in research from funders, journals, professional societies and peers.
Other members of the task force are:
- Michèle Belot, the Frances Perkins Professor of Industrial and Labor Relations (ILR School) and professor of economics (College of Arts and Sciences);
- Olivier Elemento, professor of computational genomics and of physiology and biophysics, Englander Institute for Precision Medicine, Weill Cornell Medicine;
- Thorsten Joachims, professor of computer science and of information science, and associate dean for research (Cornell Bowers CIS);
- Alice Li, executive director, Cornell Center for Technology Licensing (CTL);
- Bridget MacRae, Office of Research Integrity Assurance (OVPRI);
- Lisa Placanica, senior managing director for CTL at Weill Cornell Medicine;
- Alexander Rush, associate professor of computer science, Cornell Tech and Cornell Bowers CIS;
- Stephen Shu, professor of practice, Cornell SC Johnson College of Business;
- Simeon Warner, associate university librarian for information technology and open scholarship, Cornell University Library; and
- Fengqi You, the Roxanne E. and Michael J. Zak Professor in Energy Systems Engineering in Cornell Engineering.
# # #
The above article originally appeared in the Cornell Chronicle on January 17, 2024.