The LT&I approach to engaging on the topic of generative AI has three underlying principles:
- Disclosure and transparency are critical. Guidance from most overarching bodies, from the Federal government to journals like Nature and publishers like Elsevier, centres on the practice of disclosing when AI has been used and keeping track of how AI and human thinking intersect in a work.
- Policing is not pedagogical. We don’t support the use of AI detectors because there is no evidence that they work, their use violates provincial privacy legislation, and their high rate of false positives is most likely to impact language learners and neurodivergent folks.
- Ethical considerations are not secondary. We believe that responsible use of generative AI means being aware of the carbon emissions, water use, labour exploitation, underlying biases, and intellectual property debates surrounding these technologies.
We’re also very aware of the range of approaches to AI on campus and the confusion many students have about when AI use is or is not appropriate, how to document its use, and how to use it in line with academic integrity practices. Our consistent advice to faculty is to outline your expectations for student use of AI in every class and on every assignment.
With this framing in mind, consider the following available supports:
- The AI in Education website, a faculty-facing resource.
- The Library AI LibGuide, a student-facing resource.