An analysis of policy documents from 116 R1 U.S. universities found that 63% of these institutions encourage the use of generative AI, with 41% offering detailed guidance for its use in the classroom. More than half of the institutions discussed the ethics of generative AI use, while the majority of guidance focused on using generative AI for writing activities. The research was published in Computers in Human Behavior: Artificial Humans.
Generative AI is a type of artificial intelligence that creates new content such as text, images, audio, code, or video based on patterns learned from large datasets. It works by using models like neural networks to predict and generate outputs that resemble human-created content.
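The core idea described above, learning patterns from data and then sampling plausible continuations, can be illustrated with a toy sketch. Real systems like ChatGPT use large transformer neural networks trained on vast datasets; the bigram table below is only a conceptual stand-in for "predict the next token from learned patterns," not the actual architecture.

```python
import random

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    table = {}
    for current, nxt in zip(words, words[1:]):
        table.setdefault(current, []).append(nxt)
    return table

def generate(table, start, length=8, seed=0):
    """Generate text by repeatedly sampling a word that followed
    the previous one in training -- the same learn-then-sample loop
    that large neural models perform at far greater scale."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the model learns patterns and the model generates text from patterns"
table = train_bigrams(corpus)
print(generate(table, "the"))
```

Every word the sketch emits follows its predecessor somewhere in the training text, which is why the output resembles, but does not copy, the corpus.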
People use generative AI to write documents, summarize information, create artwork, design products, and automate routine tasks. It also supports scientific research by analyzing data, generating hypotheses, and assisting in code or experiment design. Businesses use it for customer support, marketing, prototyping, and improving productivity across many workflows.
In education, generative AI helps students learn by providing explanations, tutoring, and personalized feedback. In medicine, it assists with interpreting data, drafting reports, and even exploring molecular designs for new drugs. Artists and designers use it to explore creative variations and accelerate their creative process. However, generative AI also raises concerns about misinformation, copyright issues, and ethical use.
Study author Nora McDonald and her colleagues wanted to explore what guidance higher education institutions were providing to their constituents about the use of generative AI, what the overall sentiment was regarding its use, and how that sentiment was manifested in actual guidelines.
They were also interested in whether ethical and privacy considerations were represented in the guidelines. The authors note that, although the use of generative AI—primarily ChatGPT—became very popular very quickly after its release, some voices in education remain staunchly opposed to the use of such applications.
The study authors collected policy documents and guidelines that were publicly available on the internet from 116 R1 institutions, identified using the Carnegie Classification, a framework for categorizing colleges and universities in the United States. Under this classification, R1 institutions are universities with the highest level of research activity.
The researchers downloaded documents that specifically dealt with generative AI, resulting in a total of 141 documents. Four researchers reviewed 20 of these documents to create a codebook (a coding system for classifying the documents according to their contents). They then used this system to categorize all the other documents.
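The coding step described above, applying a shared codebook to classify documents by content, can be sketched in simplified form. In the study this was qualitative coding performed by human researchers; the keyword-matching version below, with made-up category names and keywords, is only a hypothetical illustration of how a codebook maps document text to categories.

```python
# Illustrative codebook: category names and keywords are assumptions
# for this sketch, not the actual codes used in the study.
CODEBOOK = {
    "encourages_use": ["embrace", "encourage", "leverage"],
    "ethics": ["ethics", "ethical", "integrity"],
    "detection_tools": ["detector", "detection tool"],
}

def code_document(text):
    """Return the set of codebook categories whose keywords appear."""
    lowered = text.lower()
    return {cat for cat, keywords in CODEBOOK.items()
            if any(kw in lowered for kw in keywords)}

doc = "Faculty are encouraged to embrace generative AI with attention to academic integrity."
print(sorted(code_document(doc)))  # → ['encourages_use', 'ethics']
```

Once every document is coded this way, tallying the categories across institutions yields the kinds of percentages reported in the findings below.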
Results showed that 56% of institutions provided sample syllabi for faculty that included policies on generative AI use, while 55% gave examples of statements regarding usage permissions, such as “embrace,” “limit,” or “prohibit.” Half (50%) provided activities to help instructors integrate and leverage generative AI in their classrooms. On the more restrictive side, 54% provided guidance for designing assignments in ways that discourage student use of generative AI, and 23% gave guidance on how to use AI detection tools, while 44% discouraged the use of detection tools meant to catch AI-generated work.
Overall, 63% of universities encouraged the use of generative AI, and 41% offered detailed guidance for its use in the classroom. The majority of guidance focused on writing activities; references to code and STEM-related activities were infrequent and often vague, even when mentioned. Fifty-two percent of institutions discussed the ethics of generative AI regarding a broad range of topics.
“Based on our findings we caution that guidance for faculty can become burdensome as policies suggest or imply substantial revisions to existing pedagogical practices,” the study authors concluded.
The study contributes to the scientific understanding of the stances U.S. universities take on generative AI use. However, its results are based on an analysis of policy documents rather than observations of real classrooms, and actual classroom practice may not fully reflect the provisions specified in the policies.
The paper, “Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines,” was authored by Nora McDonald, Aditya Johri, Areej Ali, and Aayushi Hingle Collier.