Step 11: Prepare for a Topics in AI Course (Part 4)

This is part of a series on the Topics in Artificial Intelligence course I will be teaching in the fall. The first part was posted on 2015-06-23.

Last time I talked about teaching Bayesian networks, and after sitting on it for a bit, I decided that it would work better as the third topic of the course, after the topic for this post.

So, the second topic for the Topics in AI course is cognitive architectures. This is not a common topic for an AI course, but it forms much of the background for my research. The goal of cognitive architecture research is to actually build a human-level intelligence. We are nowhere near there, of course, but a lot of the work is in trying to understand how different specialized AI algorithms work together and how information flows from one to the other. My own research is more in the latter – and I’m sure I will write a future post about it – but the point is, cognitive architecture is a topic that I enjoy.

Despite that, I’m actually unsure what students would get out of learning about cognitive architecture. I want to say that there is a connection to cognitive science that the students might enjoy, and that as a field that’s more nebulous than reinforcement learning, they will also get a different perspective on AI. But both of these feel like rationalizations, and not the real reason I’m including it. The closest thing to a good justification is that I can introduce students to my research, and maybe find a couple I can work with over the year. But what do students get out of it?

If I have to argue for why students should study cognitive architecture, it would be to learn that integration is non-trivial. It is extremely difficult, if not impossible, to simply say “here’s a set of interfaces”, then plug-and-play specialized algorithms, because each algorithm’s choices have consequences that can affect whether the other algorithms behave optimally. One example is how the representation of knowledge changes how efficiently that knowledge can be used. These tradeoffs are common in AI and in computer science in general, of course, but they are much more explicitly a concern in a cognitive architecture. This focus on the representation of knowledge is also why I decided to switch the ordering of this topic with Bayesian networks – cognitive architecture combines both action and knowledge, which makes the transition to Bayesian networks a little easier.
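To make the representation tradeoff concrete, here is a toy sketch (not taken from any particular architecture – the facts and function names are made up for illustration). The same knowledge is stored two ways: as a flat list of facts, which is trivial to add to but requires a full scan per query, and as an index keyed by the query pattern, which answers that query immediately but must be maintained on every addition and helps no other query:

```python
# The same knowledge, two representations.
# Representation 1: a flat list of (subject, relation, object) facts.
# Adding a fact is trivial, but every query scans the whole list.
facts = [
    ("sky", "color", "blue"),
    ("grass", "color", "green"),
    ("sky", "contains", "clouds"),
]

def query_list(subject, relation):
    # Linear scan: O(n) in the number of facts.
    return [o for (s, r, o) in facts if s == subject and r == relation]

# Representation 2: the same facts indexed by (subject, relation).
# The indexed query is O(1), but additions must now maintain the index,
# and queries the index wasn't built for (e.g. "what is green?")
# gain nothing.
index = {}
for s, r, o in facts:
    index.setdefault((s, r), []).append(o)

def query_index(subject, relation):
    return index.get((subject, relation), [])

print(query_list("sky", "color"))   # ['blue']
print(query_index("sky", "color"))  # ['blue']
```

Neither representation is “correct” on its own; which one an architecture should use depends on which other modules consume the knowledge and how, which is exactly the kind of integration question the topic is meant to raise.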

Given the desired learning goal of thinking about integration and tradeoffs, I’m not yet sure what the assignments would look like. It’s hard to think of a task that requires cognitive architectures, because the research is explicitly aimed at making computers do multiple things well – at least, it’s hard to think of tasks that students can do that fulfill this requirement. I suspect the assignments will be less technical here; two weeks can be spent learning the basics of an architecture, followed by reading about recent developments in the field and articulating why those developments were made.

It’s clear that more work will be needed on this topic as well, much as I will need to spend time on Bayesian networks, but at least I’m more convinced that this should be in the syllabus.
