Grades were due last Monday, and last Friday the student teaching evaluations were released. Unlike all the previous times I received teaching evaluations, I actually held off from immediately reading them. Instead, I took the time to write out my own three points of improvement for each class. The goal was not to predict what students would write, but to set goals for improvement without being biased by their comments.
Instead of talking about all of them, let me just mention one point of improvement that applies to both of my courses: I need to make the big picture more explicit. This was surprising to realize, since the courses cover very different topics and are aimed at very different audiences. But both courses are organized as surveys of multiple topics, and I just didn’t spend enough time explicitly pulling everything together. For Topics in AI, this was somewhat compensated for by the chats, which allowed me to prod students in that direction; for Intro to Cog Sci, although I write homework and exam questions that pull together multiple lectures’ content, I suspect students still don’t get the big picture.
I didn’t think students would pick up on this, but they did. Aside from that, the remainder of this post will talk about each class individually, starting with my better one. Instead of sharing all the numbers – which I don’t feel comfortable doing yet – I will talk about general trends I noticed, and (where applicable) ideas for fixing them in the future.
Note: the evaluations are separated into sections about the student, the instructor, and the course. I will only talk about the last two.
Topics in Artificial Intelligence
For the course, the lowest rating is for whether students improved on speaking clearly – an understandable issue, since students never had to present anything (except to me). The next two, however, were surprising: that students don’t think they improved on their ability to work independently or to write clearly. The independence issue I can see, since I require all projects to be done in pairs. Getting students to work individually was not a goal of mine, and a good question at this point is whether it should be. Part of my consideration is that I’m not sure of the value of working individually in a course like this one, since the focus is not on developing any deep technical ability. I think the evaluation is justified, and that it’s not something that I intend to change.
The writing rating I’m more confused by. Most of the projects require a written report as the deliverable, although I admit I spent no effort trying to improve students’ writing. One possibility here is to direct students to the writing center, but these projects are not the type of writing the center can help with. The conflict I see is between improving writing – which often requires a more open-ended prompt – and guiding students in asking and considering the right questions. The latter was something students had trouble with, which naturally led to the restriction on what they could write about. I do want to help my students develop their writing ability, but I don’t want to sacrifice the topical (artificial intelligence) questions that they should also be thinking about. It may be possible to provide sample answers as demonstrations of what I expect, as opposed to explicitly listing what questions they have to answer – but I suspect the questions are abstract enough that they are hard for beginners in the field to grasp.
One last thing about the course evaluations: I don’t believe that students improved their ability to read critically, despite the relatively high ratings for that question. Although I assigned readings and made students ask questions about them, I never explicitly talked about the readings in detail, never mind doing any kind of deep analysis. Unlike the writing, this I could have done more about; maybe that’s something for me to work on in the future.
On to the instructor questions, then. The lowest rating I got in that set was for whether the instructions and criteria for assignments were clear. I agree with this evaluation, even taking into account the improvements made after learning to be specific about the type of explanations I wanted. One example (this is me speculating, not from student comments) is the last assignment on NLP, when I asked students to give their information extraction program a grade, but then took points off when they didn’t say how they were counting false positives and false negatives. As I said above, this may be an issue of course structure more than assignment instruction.
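To make the kind of accounting I was looking for concrete, here is a hypothetical sketch – the entity names and numbers are invented for illustration, not taken from any student’s submission – of how a grade for an information extraction program might explicitly count false positives and false negatives:

```python
# Made-up example: comparing a program's extracted entities against a
# gold standard, and making the false positive/negative counts explicit.
predicted = {"Alan Turing", "Dartmouth", "John McCarthy"}
gold = {"Alan Turing", "John McCarthy", "Marvin Minsky"}

true_positives = len(predicted & gold)   # extracted and correct
false_positives = len(predicted - gold)  # extracted but wrong
false_negatives = len(gold - predicted)  # in the gold standard but missed

# Two standard ways to turn these counts into a score.
precision = true_positives / len(predicted)
recall = true_positives / len(gold)

print(true_positives, false_positives, false_negatives)  # 2 1 1
print(round(precision, 2), round(recall, 2))             # 0.67 0.67
```

The point of the exercise is less the final number than stating up front which of these counts the self-assigned grade is based on.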
Other than that issue, the overall evaluations for this class are fairly high – which I think is expected for a class of fifteen. There are some smaller issues with the format of the lectures – I opted not to use slides and instead used the blackboard where necessary, but in retrospect, having something that students can refer back to may have been more beneficial, in addition to providing a (potentially interactive) visual aid. Another good suggestion from students is including smaller assignments between the “projects”, which could provide practice for the mathematical/algorithmic underpinnings of the topics. I think both of these are good ideas worth incorporating into the next iteration of the course.
Introduction to Cognitive Science
I know I mostly focused on the areas of improvement for Topics in AI, so it may sound strange to say that the evaluations for this course are less positive.
As before, let’s start with the course-level comments. Keeping in mind that this course is graded almost entirely on homework and exams, it’s perhaps not surprising that students didn’t feel their speaking or writing skills improved. Again, the question is whether this course should aim to train students in these skills as well, and whether that would take away from the cognitive science content. In this case, I think it’s possible to do both, although it would require a larger change in the structure of the course. Although – I wonder which STEM class has the highest scores in these categories. The speaking I can see coming from some research-focused course, where the final assessment is a presentation of some kind (with practice runs during the semester for scaffolding). The writing part I am truly baffled by. How do humanities courses do it? They assign multiple papers on predefined topics that apply some theory, which are then workshopped over multiple weeks. I can’t decide whether a similar system would work for STEM courses – nor why it would or wouldn’t work.
The harder part of the evaluation for this course to read is about me as the instructor. The category with the lowest score was my ability to clearly explain concepts, and a look at the comments makes it clear that it was specifically the computational (e.g. A*, perceptrons) and mathematical (e.g. Bayes’ theorem) concepts that students found opaque. Let me see if I can break the complaints down into categories:
- The lack of connection between the computer science parts and the cognitive science parts. This one I agree with, especially for teaching A*, which I completely screwed up, and I explained last time how students didn’t understand how artificial neural networks fit into cognitive science. The solution here, I think, is twofold. First, there needs to be a more general lecture (or portions of a lecture) devoted to why computers are used in cognitive science in the first place. This theme has been implicit the entire semester, but (given the comments) it needs to be made explicit. Second, every computational and mathematical topic needs to be turned on its head – first present a problem in cognitive science, then suggest computers/math as the way to tackle it. This would embed the concepts within the broader content of the course.
- The general difficulty of computer science/math. I’m… not sure what to do about this. I know there are improvements I can make to my teaching, some of which I’ve mentioned previously on this blog; students also suggested having more worked examples/problems. Both are good ideas which I will use next semester. What I’m still unclear on is how much computer science to cover. I will actually single out a comment here, from a student who felt that “There was too much actual computer science in the sense I was doing actual computer science computations. I feel as though the class could do with out it, or reduced. Keep the aspects of computer science, but take away the computations.”
Here’s what’s been bothering me for the last couple of days: what does it mean to do computer science without the computation? Students can read about computational models and what they showed about (say) neural networks – but is that computer science or cognitive science? When a student reads Descartes on dualism, it’s clearly both philosophy (of mind) and part of the underpinnings of cognitive science. The same cannot be said of cognitive models, not without also teaching what it means to follow an algorithm. The same goes for math – we can talk about how researchers use math in their work, but even at the introductory level, I don’t think it’s unreasonable for students to be doing calculations.
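For a sense of the scale of calculation I have in mind – this is my own illustration with made-up numbers, not an actual course problem – here is Bayes’ theorem worked through in a few lines:

```python
# Hypothetical Bayes' theorem calculation: given some evidence E, update
# the probability of a hypothesis H. All numbers are invented.
p_h = 0.01              # P(H): prior probability of the hypothesis
p_e_given_h = 0.9       # P(E|H): probability of the evidence if H holds
p_e_given_not_h = 0.05  # P(E|~H): probability of the evidence otherwise

# Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 3))  # 0.154
```

Nothing here is beyond arithmetic – which is why I find it hard to see how the concept could be taught honestly while “taking away the computations”.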
What I’m trying to do is understand where students are coming from, and what would help them understand the content better. To put it more simply: why are people afraid of math, or why do they find it more difficult, and what can I do as an instructor to overcome that? I’ve never asked myself that question before, but it seems like a topic the evaluations are pointing me towards. It’s also interesting to me that this didn’t come up when I taught introductory computer science at Michigan, despite the first and second projects being somewhat math heavy (nothing more advanced than floors and ceilings; but then, neither is Bayes’ theorem). Glimpsing the answer to this question – in addition to improving how I explain things – seems like an appropriate first step.
There are some other points that students made, but they are relatively insignificant compared to the two above. It seems like I have some thinking to do before I teach those topics again.