I have known that I will be teaching a Topics in AI course since March. It’s standard for new faculty, both at liberal arts colleges and at larger universities, to teach a course on their research in their first semester. For me, however, that would have been a poor fit for the new department and its lack of computer science students, especially given the niche area of cognitive architectures. Although Oxy currently has an AI course, it is offered by the cognitive science department and has no direct computer science requirements (although a Computational Approaches to Cognition course is one of several possible prerequisites). The course therefore struggles to teach material that students both with and without computer science experience can follow. So a slightly more advanced AI class, taught from a computer science perspective, is a good fit both for my first semester and for the college.
Since I’ve spent some time thinking about this course and what should be in it, and since I’m keeping a blog about my life as a computer science professor, I thought I would write about my plans for the course thus far. These plans will likely change drastically once the semester starts, but comparing the end result with them should also be informative. This post covers the general outline of the course, and coming posts will talk about each topic in more detail.
What are the goals of the course?
As with all AI courses, an obvious goal of this course is for students to gain familiarity with the algorithms used in AI. One thing I’ve always felt that AI courses fail at, however, is teaching students how to think about AI problems. Students may come out knowing how an algorithm works, but they lack the ability to represent real problems in a form to which the algorithms can be applied. This process is non-trivial, and often requires knowledge of the problem domain that lies outside computer science. For example, when I taught the AI course at Michigan, I created a Bayesian network that calculated the probability that a student would receive an A for a course. After some questions examining how different known facts affected the prediction, I then asked students for a factor that they thought was missing, and to suggest the causal relations and probabilities needed to incorporate the new factor into the model. Despite the simplicity of the question – there were basically no wrong answers – I was surprised to learn that students disliked it. I was less surprised to find that although most of the homework was reused this semester, this particular question was taken out.
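To make the exercise concrete, a network of the kind described above can be sketched in miniature. The structure and probabilities here are hypothetical stand-ins, not the actual network from the Michigan course: two parent factors (whether the student studied regularly and whether they attended lecture) feed a single “receives an A” node.

```python
# A miniature Bayesian network: P(A) depends on two hypothetical
# parent factors. All numbers are illustrative.

# Prior probabilities of the parent nodes.
P_STUDIED = 0.7    # P(student studied regularly)
P_ATTENDED = 0.8   # P(student attended lecture)

# Conditional probability table: P(A | studied, attended).
P_A_GIVEN = {
    (True, True): 0.90,
    (True, False): 0.60,
    (False, True): 0.40,
    (False, False): 0.10,
}

def p_a():
    """Marginal probability of an A, summing over parent configurations."""
    total = 0.0
    for studied in (True, False):
        for attended in (True, False):
            p_parents = ((P_STUDIED if studied else 1 - P_STUDIED)
                         * (P_ATTENDED if attended else 1 - P_ATTENDED))
            total += p_parents * P_A_GIVEN[(studied, attended)]
    return total

def p_studied_given_a():
    """Posterior P(studied | A) via Bayes' rule -- the kind of
    'how does this fact shift the prediction' query the homework probed."""
    joint = 0.0  # P(studied, A), summing out attendance
    for attended in (True, False):
        p_parents = P_STUDIED * (P_ATTENDED if attended else 1 - P_ATTENDED)
        joint += p_parents * P_A_GIVEN[(True, attended)]
    return joint / p_a()

print(p_a())
print(p_studied_given_a())
```

The homework question then amounts to extending the table: adding a third parent (say, prior GPA) forces students to decide on its causal placement and to fill in a larger conditional probability table, which is exactly the representational work the algorithms themselves don’t do for you.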
The moral of the story for me is that while students may understand the technicalities of calculating probabilities and likelihoods, they do not understand the work that goes into creating the representation, which often determines whether the chosen method will succeed. This is something I value as much as the ability to code up a complicated algorithm, and it is something I want students to start thinking about from the first AI technique they study. Along similar lines, since this is an upper-level computer science course, I want students to also know where each algorithm fails and to have a sense of the open research questions that have yet to be answered.
(Of course, knowing when to apply an algorithm and its failure cases is not a skill unique to AI, and I plan on making this a part of the main computer science sequence as well.)
Finally, since a number of students will be coming into the course with a cognitive science background, I will likely discuss how the techniques relate to cognitive science, either as tools for modeling or as contrasts with how people approach the same problems.
Who are the students?
The course is listed in the catalog as requiring both Introduction to Cognitive Science and Fundamentals of Computer Science (the CS1 course). Notably, the existing AI course is not a prerequisite, meaning I cannot rely on students knowing what search means in an AI context. These students are likely to be comfortable with Java, though they may have only a basic understanding of data structures and algorithms. I’m not as worried about their ability to implement complicated algorithms as I am about their ability to write non-trivial programs. Students also may or may not have some math background for computer science (e.g., logic, set theory, probability), but this is less of a concern. I am hoping that having students work in pairs or groups will help reduce some of these disparities, but whether this works will depend on the exact experience of the students I get.
What topics are going to be covered?
I plan on covering at least four topics, three of which I’ve decided on:
- Reinforcement learning
- Bayesian networks
- Cognitive architectures
The first two topics are standard in many AI courses, but do not make an appearance in Oxy’s current AI course. If and when I reorganize Oxy’s AI course, I may or may not include them. I have yet to determine the last topic for the course, but it will likely be driven by student interest.
These topics were chosen partially because I have experience teaching them – they were all part of the AI course I taught – and partially because they do not rely too heavily on the ideas of search (that is, missing the basic AI course should not hinder students too much). They were also chosen because of their relation to cognitive science, especially reinforcement learning and cognitive architectures. Whether I actually emphasize these connections depends on whether students are interested in them.
How is the course organized?
With four topics, each topic will receive roughly a month of class time. Given the discrete topics, my plan is to take roughly the same approach for each one. The first two weeks will be scaffolded exercises, building students up to the point of understanding the basics of the field and being able to implement the relevant algorithms. The last two or three weeks will focus on a paired or group project, with the goal of either applying the technique to something that interests the students or exploring how the results of the technique vary on more difficult domains. For each topic, each group will have to write a short report on what they learned, as well as give a short in-class presentation (I will probably require each group to present at least once over the semester). At the end of the semester, I will also chat with each student individually, a conversation in which their understanding will be much more apparent than in an exam. My main concern currently is the amount of work this would require of students, particularly if they are not yet comfortable structuring larger programs. I believe this workload is doable, however, provided I keep an eye on student performance.
In the coming weeks, I will expand on this general plan and sketch out in more detail what I’m planning for each topic, including my ideas for the undecided one.