Generative AI at the U of Regina
Guidelines for faculty and instructors
The emergence of generative AI systems like ChatGPT presents both opportunities and challenges for teaching, learning, and research at all academic institutions, including the University of Regina. These technologies have the potential to enhance and augment human capabilities, but they also raise serious concerns about social impacts, academic integrity, and the role and purpose of AI in teaching and learning. In light of these complexities, the University of Regina has formed a Working Group on Generative AI in Teaching and Learning, tasked with establishing guidelines for the appropriate and responsible use of AI as it relates to U of R coursework. The information below represents the initial work of the working group; further guidance will be added in the coming weeks and months.
As an institution that strives to provide high-quality and accessible education, the University of Regina approaches these new technologies carefully. We are committed to fostering informed discussions about the responsible use of AI, developing policies that uphold the principle of academic integrity, and identifying constructive applications of AI that will enrich the student experience. This is uncharted territory, but the University of Regina community is committed to facing it with curiosity, care, and humanity.
Please see the FAQ below for important definitions, information, and guidelines for the use of generative AI at the University of Regina. Keep in mind that as our collective understanding of generative AI evolves, so too will these guidelines; this should be considered a living document. (Last updated August 23, 2023)
Artificial intelligence (AI) refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human-like thinking. Within this realm, generative AI stands out as a subset that can create new content and artefacts, such as text, images, audio, and video, based on patterns learned from large amounts of data. Unlike AI systems that merely analyze and interpret data, generative AI produces novel outputs from simple text prompts. The recent emergence and popularization of powerful generative AI platforms like ChatGPT offers both promising opportunities and new challenges for university teaching and learning, particularly in areas related to academic integrity, intellectual property, and creativity.
ChatGPT is a powerful, conversational generative AI system (i.e., a chatbot) developed by OpenAI and launched in November 2022. ChatGPT uses deep learning techniques to comprehend and generate human-like responses to the input it receives. Its release to the general public has raised concerns in both K-12 and post-secondary institutions about the tool’s potential use for student plagiarism and other academic misconduct. In considering the uses of generative AI in teaching and learning, it is important to acknowledge that ChatGPT is only one of many similar tools available for public use (e.g., Google’s Bard, Anthropic’s Claude, Microsoft’s Bing, Meta’s LLaMA, Stanford’s Alpaca). Additionally, based on announcements from across the tech industry, generative AI systems are expected to become ubiquitous, embedded within commonly used tools such as Microsoft 365 and Google Workspace.
In post-secondary education, generative AI has potential uses across teaching, learning, research, and administration. Some examples specific to teaching and learning might include:
As generative AI becomes more ubiquitous and accessible, it is expected that its applications in teaching and learning will become more commonplace and varied.
The decision of whether to allow the use of ChatGPT or other generative AI tools in academic work ultimately rests with each instructor, based on their pedagogical and disciplinary expertise. However, instructors also have a crucial role to play in mitigating potential misuse. It is recommended that instructors take the following actions:
Tools designed to detect AI-generated content have demonstrated varying degrees of reliability; therefore, such tools should not be relied upon to ensure academic integrity. At present, Turnitin, the University of Regina’s institutionally adopted plagiarism-detection tool, produces an AI-detection report alongside the originality report when used for text-based assignments. This additional report is only available to the instructor. Due to the unreliability of AI-detection tools, and the possibility of false positives, the AI-detection report cannot be considered conclusive proof of academic misconduct; however, the report may be presented to the investigating dean alongside other potential evidence of cheating (see below for other possible signs). It should also be noted that Turnitin is presently the only institutionally adopted plagiarism- and AI-detection tool and that instructors are not permitted to use non-approved tools for the purpose of AI-detection.
In cases where instructors suspect that student coursework has been partially or wholly AI-generated without proper disclosure/citation, instructors are encouraged to look for additional signs, such as:
Note that many of the issues listed above may also point to other types of academic misconduct (including plagiarism and contract cheating) or to poor writing ability.
If you would like support with the use or integration of generative AI in teaching and learning, please contact the Centre for Teaching and Learning by email at ctl@uregina.ca.
If you require support regarding a matter related to academic misconduct, please consult with your investigating dean and/or consult the instructor’s page on the Academic Integrity Hub.