The unauthorized A.I. use long game OR How not to burn out my empathy circuits in an online stats course

It is Week 1 in my online introduction to statistics course, and I've already detected unauthorized A.I. use in the discussion forums. I'm not going to share the details of how I know, but in addition to the technical details, the writing is competent, factual, and bland. Just reading the suspected posts, they included course concepts we have yet to talk about, didn't really address the points students were asked to discuss, and were just sad.

Looking at many of these students' posts, and from the online discussions I'm having with other faculty, I get the sense that many students were using A.I. because they didn't feel their honest writing would be appreciated or awarded credit. This bums me out: the struggle to make yourself understood is something every human engages in, and in statistics it is a central problem. By using A.I. to write their posts for them, these students are missing the opportunity to struggle through these ideas.

For example, I posted the following in a general announcement:

Data

A source of data that I really enjoy teasing apart is you! Each interaction with Canvas leaves behind some data, and as your instructor for a statistics course, I enjoy diving into that data and thinking about what it tells me about you. One interesting data set records how much time students actively spend in Canvas. Below is the data set and a few questions you may want to consider. The data is formatted so that each row is one student: the first column is the number of whole hours they have spent actively in our Canvas course, and the second column is the remaining minutes beyond those whole hours. So, for example, a 1 followed by a 51 would represent a student who spent 1 hour and 51 minutes in the course.

Hours Minutes
1 11
1 14
1 1
2 10
1 30
0 48
2 0
0 14
0 41
14 40
2 59
4 5
0 12
0 50
2 47
0 47
2 24
0 38
1 35
5 37
6 25
1 51
0 25
0 41
1 23

    • Do there seem to be 'unusual' values in this data set?
    • Do some values seem more likely than others? 
    • If you have spent more time in the course than the average of your colleagues, what might you do (if anything) differently?
    • If you have spent less time in the course than the average of your colleagues, what should you do differently?
Of the responses I got, one had the technical and linguistic hallmarks of A.I. use. Again, this bummed me out, as asking which values seem 'unusual' or seem more likely than others leads directly to measures of center and variability. I am offering students the opportunity to think through these ideas, building a need for the mean and standard deviation, but by using A.I. students don't get any of that. They don't build the foundation for these ideas, likely won't understand them, and will turn to more A.I. use.
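For the curious (and certainly not something I expect of students), here is a minimal sketch of the computation those questions are building toward, written in Python, which we don't use in this course: convert each row of the data set above into total minutes, then look at its center and spread.

```python
# Convert each (hours, minutes) row from the announcement into total minutes,
# then compute the summary measures the discussion questions are nudging toward.
from statistics import mean, median, stdev

# (hours, minutes) pairs copied from the data set above
times = [
    (1, 11), (1, 14), (1, 1), (2, 10), (1, 30), (0, 48), (2, 0), (0, 14),
    (0, 41), (14, 40), (2, 59), (4, 5), (0, 12), (0, 50), (2, 47), (0, 47),
    (2, 24), (0, 38), (1, 35), (5, 37), (6, 25), (1, 51), (0, 25), (0, 41),
    (1, 23),
]
total_minutes = [h * 60 + m for h, m in times]

print(f"mean:   {mean(total_minutes):.1f} minutes")
print(f"median: {median(total_minutes)} minutes")
print(f"stdev:  {stdev(total_minutes):.1f} minutes")
print(f"max:    {max(total_minutes)} minutes")  # the 14 hour 40 minute student
```

Run it and the single 14-hour-40-minute value pulls the mean up to roughly 140 minutes while the median sits at 83, which is exactly the kind of tension between 'unusual' values and measures of center I want students to notice for themselves.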

In the same announcement I shared the following:

If you find yourself using A.I. for course assignments, please talk to me. I am concerned that if you start using A.I. now you may not be able to stop throughout your college career, as you won't have the prerequisite knowledge to complete future courses. A college degree is not just about getting a better job; it is meant to be a marker that you understand the world and your discipline in a meaningful way. Using A.I. for course assignments takes that opportunity away from you.

To be clear, if I discover unauthorized A.I. use in any of our graded assignments, I'll award a zero (0) for the assignment the first time, and a failing grade (F) for the course the second time. This is not the outcome I want for any student, and I will do the necessary work of maintaining the clear academic standards I have communicated to all students in any course I am the instructor of record for. 

I know, reads a lot like Abe... Or maybe Skinner.


But look, I'm not budging on expecting students to write their own thoughts, as a form of learning. That's the job. I believe in their ability to grow and their capacity to learn statistics. If I have to do that in spite of students, I'm ok with that. I'd rather not. I'd rather students understand the utility of statistics and math, and use them as tools to make better decisions. It certainly feels like I have to decide how much of this I am willing to put up with, and invest a corresponding amount of time.

On that note, I don't think I'll be able to investigate each and every instance of unauthorized A.I. use. There is just enough plausible deniability in many assignments that I can't 'prove' A.I. use in every case.

Note-Taking Assignments – Given that students are expected to handwrite these notes, it is unlikely that they will use A.I. Unlikely, but not impossible.

Student ability to submit A.I. generated work - Unlikely
Ability to confirm unauthorized A.I. use with technical means - No
Ability to confirm unauthorized A.I. use with textual means - Unlikely or difficult

Discussion Forums - This area seems ripe for students to use A.I., as most of these will be either a regular Statistics in the News discussion forum or a Homework Help discussion forum.

Student ability to submit A.I. generated work - High
Ability to confirm unauthorized A.I. use with technical means - Some
Ability to confirm unauthorized A.I. use with textual means - Unlikely or difficult

Homework – I use a MyOpenMath-related site (WAMAP), where many of the questions were written by faculty from across Washington state. It is already known that many of these questions appear on websites like Chegg and MyCourseHero. With A.I. it is trivial to copy and paste a question into a chatbot, and even for questions that use a graph, a screenshot of the question will suffice. There is only one technical way I know of to determine whether a student is using A.I. or not.

Student ability to submit A.I. generated work - High
Ability to confirm unauthorized A.I. use with technical means - One method that can be unreliable.
Ability to confirm unauthorized A.I. use with textual means - None

Quizzes - Again, I am using the same site as for homework, so it is likely that a student will be able to use A.I. for these quizzes. For some questions I will require students to submit written work, which may reduce the amount of A.I. use; however, it works against the one technical method I have for detecting A.I. use.

Student ability to submit A.I. generated work - High
Ability to confirm unauthorized A.I. use with technical means - One method that can be unreliable.
Ability to confirm unauthorized A.I. use with textual means - None

Excel Skills/Applications - This is the biggest question mark, as these assignments predate November 2022. In the Excel Skills assignments I provide students with an Excel file: some worksheets walk them through a concept or computation, and other worksheets ask them to apply those ideas. In the Excel Applications assignments, three worksheets have students answer questions using the skills from the Excel Skills assignments, combined with what we have learned in class.

Student ability to submit A.I. generated work - Unknown
Ability to confirm unauthorized A.I. use with technical means - Unknown
Ability to confirm unauthorized A.I. use with textual means - Unknown

Course Project - This is where I have a plan to detect A.I. use, and I am unsure of how students will react. The idea is to provide each student with their own Google Document, where they will draft their project submissions. 

Student ability to submit A.I. generated work - High
Ability to confirm unauthorized A.I. use with technical means - One method that I have yet to explore.
Ability to confirm unauthorized A.I. use with textual means - Unlikely or difficult
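To be honest, the technical method here is still only a guess on my part: my assumption is that a Google Document's revision history would show whether a draft grew sentence by sentence or appeared in one large paste. If that pans out, something like the sketch below, using the Google Drive API, could list the revision timestamps for a single document; the document ID is a placeholder, the credentials setup is omitted, and all of it should be read as an untested assumption rather than a finished workflow.

```python
# Untested sketch: list a Google Doc's revision timestamps via the Drive API v3.
# FILE_ID is a placeholder for one student's document, and `creds` is assumed
# to be an already-authorized Google API credentials object.
from googleapiclient.discovery import build

FILE_ID = "PLACEHOLDER_DOCUMENT_ID"  # hypothetical; one per student

def list_revisions(creds):
    service = build("drive", "v3", credentials=creds)
    response = service.revisions().list(
        fileId=FILE_ID,
        fields="revisions(id,modifiedTime,lastModifyingUser/displayName)",
    ).execute()
    for rev in response.get("revisions", []):
        user = rev.get("lastModifyingUser", {}).get("displayName", "unknown")
        print(rev["modifiedTime"], user)
```

Even then, the version history shown inside Google Docs itself is probably more detailed than what the API returns, so this may end up being a manual check rather than a script.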

From that analysis I come to the following table:

Assignment                | Submit A.I. work | Technical detection    | Textual detection
Note-Taking Assignments   | Unlikely         | No                     | Unlikely or difficult
Discussion Forums         | High             | Some                   | Unlikely or difficult
Homework                  | High             | One unreliable method  | None
Quizzes                   | High             | One unreliable method  | None
Excel Skills/Applications | Unknown          | Unknown                | Unknown
Course Project            | High             | One unexplored method  | Unlikely or difficult

From this I am feeling most confident about identifying A.I. use in the Course Project, and possibly in the Excel Skills/Applications; Homework and Quizzes are where I have the least confidence. I am thinking of using my detection of A.I. use in the Discussion Forums as a signal for students who may be prone to use it in other assignments. If and when I submit a zero for an assignment, I will likely do so with a package of evidence that the student has used A.I. on other assignments as well.

To be clear, this sucks. Constantly running a Turing test on every student submission is not what I signed up for, and I wonder if I should just stop teaching online after this year.

What do you think? Am I being overly prescriptive? Paranoid? Is there a way to actually get students to understand that learning takes effort, and that the smoothness of A.I. use is limiting their own growth?


