Faculty and Data: Threading the needle

Last spring my department did an assessment project to determine whether our corequisite support courses (which I coordinate) are effective in helping students learn the course material. My home institution has been running corequisite support courses (Business Mathematics with Support, College Algebra with Support, and College Trigonometry with Support) for more than a year, and it seemed like a good time to assess them. I wrote assessment questions aligned to our course outcomes and asked for faculty feedback, which turned into really interesting and engaging conversations. I also wrote rubrics with granular criteria (53 in total) and asked for faculty feedback, which turned into even more interesting conversations. Faculty in all of the support courses and most of the standard courses included these questions (a feat by itself), and after a day of about two dozen instructors grading the 220 student artifacts, we had solid data showing no significant difference in student understanding of the course outcomes between the support and standard sections.
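For readers who want to see what a comparison like that could look like mechanically, here is a minimal sketch in Python. The scores, the group sizes, and the choice of Welch's t-test are all my assumptions for illustration; this is not the analysis our department actually ran.

```python
# A minimal sketch (not the department's actual analysis) of comparing
# rubric scores between support and standard sections.
from scipy import stats

# Hypothetical rubric scores on one outcome; placeholders, not real data.
support_scores = [3, 4, 2, 5, 4, 3, 4, 2, 3, 4]
standard_scores = [4, 3, 3, 5, 4, 2, 4, 3, 3, 4]

# Welch's t-test: does mean understanding differ between the two groups?
t_stat, p_value = stats.ttest_ind(support_scores, standard_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value (say, above 0.05) is consistent with "no significant difference."
```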

As a result of this project I was asked to become the Outcomes Assessment Liaison for the Math Department at my home institution. I've also joined the Outcomes Assessment Committee, and it feels like I'm gearing up for another phase of my work. The conversations I had with faculty showed me that I have a lot more to learn about assessment, rubrics, and measuring understanding. Granted, I've made decisions about what questions to write to assess outcomes and how to grade them. However, seeing how faculty make these decisions for their own courses, and the effect those decisions have on students, makes me wonder whether faculty can (or should) have conversations about the decisions they are making. Why decide on a five-point scale versus a ten-point one? How would different faculty grade the same student work? What best practices for math assessment do senior faculty have that junior faculty crave?
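One concrete way to start the conversation about how different faculty grade the same work is to have two people score the same artifacts and measure how often they agree. Here is a minimal sketch under that assumption; the scores below are made-up placeholders, not data from our grading day.

```python
# A minimal sketch of checking how closely two graders agree on the same
# student artifacts, using simple percent agreement. All scores are made up.
grader_a = [3, 4, 2, 5, 4, 1, 3, 4]  # hypothetical rubric scores from grader A
grader_b = [3, 3, 2, 5, 4, 2, 3, 4]  # hypothetical scores from grader B, same artifacts

matches = sum(a == b for a, b in zip(grader_a, grader_b))
print(f"Exact agreement: {matches / len(grader_a):.0%}")
# A more careful study would use a chance-corrected statistic such as Cohen's kappa.
```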

With these questions in mind, the idea of collecting data on questions, rubrics, and course success rates is fairly natural, but, as with venomous animals, natural doesn't mean safe. Faculty are (rightfully) concerned that this data would be used as a hammer to shape everyone into the mold of the faculty member with the highest course success rates. Note that these questions are separate from the expectations of most administrators: How can we measure program-level outcomes? How can we show student progress to accreditors? How can we determine the areas in a program that need support? Yes, administrators' questions are absolutely related to faculty decisions, but their focus seems to be on the system as a whole. (I could be wrong on that last point, but being in a service department I don't have students wanting to go into my discipline, and therefore I don't have a lot of control over a specific program.)

Another point to consider is the college's Guided Pathways initiative. Like other community colleges, we are undergoing reforms to clarify our pathways, get students onto pathways, keep students on pathways, and determine whether these pathways are effective. The college created four Guided Pathway Pillars, one for each of these tasks, and I was (am?) on the pillar tasked with keeping students on the path. In that work I looked at both automated and manual methods for identifying students who are 'at risk' of not being successful in their courses. One dataset that was always requested was access to the live student grades in our learning management system, Canvas. The idea is that, with actual grade data, our student services staff could conduct interventions to help students get back on track.
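To make the "automated methods" idea concrete, here is a minimal sketch of what a grade-based flag might look like if current scores were exported to a spreadsheet. The file name, column names, and the 70% cutoff are all illustrative assumptions; this is not our actual process or a description of Canvas's API.

```python
# A hedged sketch of flagging students whose current course percentage has
# dropped below a cutoff. File name, columns, and threshold are assumptions.
import csv

AT_RISK_THRESHOLD = 70.0  # hypothetical cutoff for "at risk"

def flag_at_risk(path="midterm_grades.csv"):
    """Return the names of students whose current score is below the threshold."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if float(row["current_score"]) < AT_RISK_THRESHOLD:
                flagged.append(row["student_name"])
    return flagged

if __name__ == "__main__":
    for name in flag_at_risk():
        print(f"Reach out to: {name}")
```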

I am not opposed to the idea that student grades could be used to determine how outreach should happen, but that use of grade data needs to be restricted in a very specific way. Granted, administrators can get access to course success rates, but live student grades during a term feel like a different beast altogether. If a course had a 50% pass rate halfway through the term and an administrator heard about that, is it out of the realm of possibility that the administrator would consciously or unconsciously apply pressure to that faculty member to increase their pass rate? Sure, lots of hypotheticals and unknowns there, but there very well could be other impacts I'm not thinking of.

This leads me to two thoughts on how we can create an environment of trust among faculty around using data:

1. A memorandum of understanding on how course data is to be used. There needs to be a clear line between how faculty and student services use the data and what administration can access. Yes, administration needs some data, and that would be outlined in the memo. After the current union negotiations are completed I'm hoping to get face time with our union president to propose such a document.

2. A faculty learning community where membership means you share your course data with the other faculty members. I really like this idea as it creates and builds community while we take ownership of what we are doing. Starting from a place of looking at equity data, we could address the reforms that would have the biggest impact on all students.

Have you tried any of this? What are the successful data projects you have been a part of, or heard about?
