Step 10: Prepare for a Topics in AI Course (Part 3)

This is part of a series on the Topics in Artificial Intelligence course I will be teaching in the fall. The first part was posted on 2015-06-23.

The second topic that I would like to teach is Bayesian networks. For those unfamiliar, Bayes nets are a way of representing probabilistic causality – that is, whether one event is likely to have caused another and, more importantly, if the second event occurred, how likely it is that the first event was the cause. The overused example is of diseases and symptoms: if you have a fever, you're more likely to have caught the flu than to have (say) malaria, even though a fever is a symptom of both.
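To make that example concrete, here is a minimal sketch of the flu-versus-malaria calculation using Bayes' rule. The prior and likelihood numbers are invented purely for illustration, and other causes of fever are ignored to keep the arithmetic simple.

```python
# A minimal sketch of the flu-versus-malaria example using Bayes' rule.
# All probabilities below are made up for illustration only.

# Prior probability of each disease in the population (assumed values)
prior = {"flu": 0.10, "malaria": 0.001}

# Probability of a fever given each disease (assumed values)
likelihood_fever = {"flu": 0.90, "malaria": 0.95}

# P(fever) via the law of total probability, pretending these are the
# only two possible causes of a fever.
p_fever = sum(prior[d] * likelihood_fever[d] for d in prior)

# Posterior P(disease | fever) by Bayes' rule
for disease in prior:
    posterior = prior[disease] * likelihood_fever[disease] / p_fever
    print(f"P({disease} | fever) = {posterior:.3f}")
```

Even though malaria makes a fever slightly more likely than the flu does, the flu's much higher prior means the posterior overwhelmingly favors the flu – which is exactly the intuition the example is meant to capture.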

To be honest, of the three topics I have settled on, I'm the least comfortable with including Bayes nets. It's not that they are not interesting or useful – in fact, my research is likely to involve Bayes nets in the medium-term future – but I feel it's hard to get students excited about them. For one, there is some math involved in how the inference works. It's not difficult math, but it's tricky math, and it can get tedious very quickly. To return to the issue of engagement, though: Bayes nets are ultimately a way of representing knowledge, not a way of doing something. Unlike reinforcement learning, the computer is not learning to do anything, or even learning anything at all. In many ways, Bayes nets are just a particular kind of math, and I'm not sure I have the talent to make pure math sound interesting.

A fair question at this point would be why I thought Bayesian networks should be a topic in the course at all. One answer – and frankly the one that has the most weight – is that I have taught the topic before, and so have some confidence I can do it well. Less selfishly, Bayesian causality is one of those ideas that have had a big impact on AI. Its inventor received the Turing Award (the "Nobel Prize" of computing) for it, and it's an area of ongoing research. Which is to say that, even if it's not something students might be overly interested in, it's definitely something they should know about.

So, if I keep Bayes nets in the syllabus, I would probably spend just enough time on probability, then focus on creating, interpreting, and critiquing networks. Whereas reinforcement learning felt like a very non-human way of solving problems, Bayes nets should feel intuitive, only more rigorous. What I would like is for students to understand why Bayesian inference works the way it does – then apply it to something in their lives that they may have simply accepted before.
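As a rough sketch of what the "creating and interpreting" part could look like in code (not a finished assignment), here is a tiny network in the style of the classic rain/sprinkler/wet-grass example, queried by brute-force enumeration. The probability tables are made-up values chosen only to make the query interesting.

```python
# A tiny Bayes net (rain -> sprinkler, both -> wet grass) queried by
# brute-force enumeration. All probability tables are illustrative only.

from itertools import product

# P(Rain)
p_rain = {True: 0.2, False: 0.8}
# P(Sprinkler | Rain)
p_sprinkler = {True: {True: 0.01, False: 0.99},   # given Rain=True
               False: {True: 0.4, False: 0.6}}    # given Rain=False
# P(WetGrass=True | Sprinkler, Rain), keyed by (sprinkler, rain)
p_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.9, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment, via the chain rule."""
    p = p_rain[rain] * p_sprinkler[rain][sprinkler]
    p_w = p_wet[(sprinkler, rain)]
    return p * (p_w if wet else 1 - p_w)

# Query: P(Rain | WetGrass=True), summing the joint over the hidden Sprinkler
numerator = sum(joint(True, s, True) for s in (True, False))
evidence = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(Rain | WetGrass) = {numerator / evidence:.3f}")
```

Even on a three-node network, doing this sum by hand is exactly the kind of tedium I worry about – which is also an argument for letting the computer do it and keeping the class discussion on what the numbers mean.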

This is arguably stretching the boundaries of what should be taught in computer science – but then, where else would such a thing be taught in college? The only places it would fit, outside of computer science or statistics, are psychology or philosophy. And since we're getting computers to do the dirty mathematical work for us, this seems as good a time as any to make students go through the exercise. If nothing else, at the end they will have applied the Bayesian framework to some real-world phenomenon.

Clearly, though, writing this post has shown me that I have more work to do on this topic. I will have to think harder about what I want to achieve.
