Step 22: Learn to Teach AI to Cognitive Science Students

I think I’ve figured out what the problem is.

I have already written multiple posts on the things I've been learning about teaching cognitive science: how people use readings differently, how we team-teach to provide different perspectives, and so on. The most recent development is that I spent a lecture on the A* algorithm, which we teach to give students a perspective on how computers solve problems. (Aside: the course used to teach depth- and breadth-first search, but I figured that A* is intuitive enough that students can get the idea. A* was also the topic of my teaching demonstration, so that was mildly amusing.) Then we assigned homework, which had students follow several steps of the algorithm by hand. Keep in mind that this is the first real graded work that we have received from students.
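For readers who haven't seen the algorithm, here is a minimal sketch of the A* loop the homework had students trace by hand. The `neighbors`, `cost`, and `heuristic` callables are hypothetical stand-ins for whatever problem you plug in:

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, cost, heuristic):
    """Minimal A*: always expand the frontier node with the lowest f = g + h."""
    tie = count()  # tie-breaker so the heap never has to compare nodes
    frontier = [(heuristic(start, goal), next(tie), 0, start, [start])]
    seen = set()
    while frontier:
        f, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in neighbors(node):
            g2 = g + cost(node, nxt)          # cost actually paid so far
            f2 = g2 + heuristic(nxt, goal)    # plus the estimate to the goal
            heapq.heappush(frontier, (f2, next(tie), g2, nxt, path + [nxt]))
    return None  # goal unreachable
```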

And students all failed.

No, that’s not true, but the distribution for that question is definitely more uniform and longer-tailed than the distribution for the other questions. Some students performed best-first search instead (without knowing it), while others explained their answers with “intuition” without providing the calculations. I had anticipated that the scores would be lower, but not to this extent, especially since I thought the lecture went well: students seemed to follow along, and some even preempted me on the introduction of the heuristic. Maybe one student out of the fifty had questions during office hours, so I thought it wasn’t a big deal.
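Seen in code, the mistake comes down to a single line: greedy best-first search ranks frontier nodes by the heuristic alone, while A* ranks them by cost-so-far plus heuristic. Using the names from the sketch above:

```python
# Greedy best-first search: order the frontier by the estimate alone.
f2 = heuristic(nxt, goal)

# A*: order the frontier by the cost paid so far plus the estimate.
f2 = g2 + heuristic(nxt, goal)
```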

While this was going on, I also got back an assignment I gave to my Topics in AI students. To accommodate students who do or do not know how to code, I gave them two choices. The first was to implement a Q(λ) learner (i.e., Q-learning with eligibility traces), then test its performance on different domain and agent parameters. For the other students, the assignment was to describe a real task in reinforcement learning terms: specifying what the environment consists of, how they would represent the state, what actions are available, and what the rewards would be, and justifying all of the above. The intention was for students to recognize the cases that cause problems for reinforcement learning (e.g., large state spaces, the exploration/exploitation trade-off) and use them to explain their choices.
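For the coding option, here is a minimal sketch of what one episode of tabular Watkins-style Q(λ) might look like. The `env` interface (`reset()`, `step(action)`, `env.actions`) is a hypothetical stand-in, and `Q` is assumed to be a `defaultdict(float)` keyed by (state, action) pairs:

```python
import random
from collections import defaultdict

def q_lambda_episode(env, Q, alpha=0.1, gamma=0.99, lam=0.9, epsilon=0.1):
    """Run one episode of tabular Watkins Q(lambda) with accumulating traces."""
    trace = defaultdict(float)  # eligibility trace e(s, a)
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection
        greedy_action = max(env.actions, key=lambda a: Q[(state, a)])
        action = (random.choice(env.actions)
                  if random.random() < epsilon else greedy_action)
        next_state, reward, done = env.step(action)

        # one-step TD error toward the greedy target
        best_next = max(Q[(next_state, a)] for a in env.actions)
        delta = reward + gamma * best_next * (not done) - Q[(state, action)]

        trace[(state, action)] += 1.0  # accumulate the trace for this pair
        for sa in list(trace):
            Q[sa] += alpha * delta * trace[sa]
            # Watkins' variant: decay traces, but cut them after exploration
            trace[sa] = gamma * lam * trace[sa] if action == greedy_action else 0.0
        state = next_state
    return Q
```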

As is the theme for this post, the assignment didn’t go as planned. Their representations of state were often valid – discretizing GPS locations into a coarser grid, for example – but they didn’t explain why they made those choices, or what they were trading off in doing so. A sketch of what I was hoping for follows.
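To illustrate, here is a hedged sketch of the kind of answer the assignment was after, built around the GPS-discretization example above; the commuting task, grid size, and reward numbers are all hypothetical:

```python
# A hypothetical RL formulation of a commuting task, with the trade-offs
# the assignment asked students to make explicit.
task = {
    # Environment: the city's road network as experienced by one commuter.
    "environment": "daily commute between home and work",

    # State: raw GPS coordinates discretized into a 100m x 100m grid.
    # Trade-off: coarser cells shrink the state space (faster learning)
    # but blur distinctions the agent may need (two roads in one cell).
    "state": "grid cell index (x, y)",

    # Actions: which way to move from the current cell.
    "actions": ["north", "south", "east", "west"],

    # Rewards: -1 per step so shorter commutes score higher, plus a bonus
    # on arrival. Trade-off: sparse terminal rewards make exploration
    # harder; shaping speeds learning but can bias the learned policy.
    "rewards": {"per_step": -1, "arrival": +100},
}
```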

Thinking about both of these failures, what I realized is that I was missing some crucial pieces of pedagogical content knowledge. Or rather, there’s something that the idea of pedagogical content knowledge doesn’t capture: the background knowledge and training of the students. This is not surprising in any way – one of the first things I tell new teachers is to consider what students already know – but I feel this is a different kind of knowledge that I have failed to account for. It’s not the students’ lack of understanding of A* that matters, but their unfamiliarity with the methods of computer science itself.

For example, one explanation for students performing best-first search instead of A* is that they don’t understand the technical definitions of “cost” and “heuristic”, since that level of precision is not necessary for normal essay writing. Similarly, some students were confused about the Manhattan distance as a calculation versus its use as a heuristic. This is a distinction that had never occurred to me before: the two really are different, and the Manhattan distance is in fact used differently in different problems – say, in a grid world, where the Manhattan distance is the heuristic itself, versus in a sliding puzzle, where the Manhattan distance is computed for each tile and the heuristic is the sum of those distances. Again, this is not a mistake I would have anticipated before looking at student work.
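The distinction is easy to show in code: the same calculation is the whole heuristic in one problem and only an ingredient of it in the other. A sketch, with hypothetical state representations:

```python
def manhattan(a, b):
    """Manhattan distance between two (row, col) positions."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def grid_heuristic(pos, goal):
    """Grid world: the Manhattan distance *is* the heuristic."""
    return manhattan(pos, goal)

def puzzle_heuristic(positions, goal_positions):
    """Sliding puzzle: the heuristic is the *sum* of each tile's Manhattan
    distance to its goal position, with the blank (tile 0) excluded.

    `positions` maps tile -> (row, col); a hypothetical representation.
    """
    return sum(manhattan(positions[t], goal_positions[t])
               for t in positions if t != 0)
```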

My students in Topics in AI have a similar problem, if on a more abstract level. I take more of the fault here – I should have gone through an example first, and been more specific about what questions they should be answering. In this case, though, it was the mental habits that the students have yet to build: asking the questions that computer scientists tend to ask. Of course, this is exactly what computer science is about – representing the relevant information so that algorithms can take advantage of it. However, I have yet to figure out how to explain this to students, never mind teach them to think about representations. I wonder how long it takes students to internalize asking these kinds of questions.

Now that I have some experience teaching cognitive science students, I can start adapting my existing pedagogical content knowledge. Although I’m not happy to have made the mistakes I have, I also think it’s fascinating that I still have so much to learn about teaching computer science.
