Let’s get straight to it.
My approach here is probably frowned upon. Although students were using an IDE (PyCharm for Python and IntelliJ for Java), I never taught them to use the debugger; instead, I relied entirely on printing. There's really no good reason for this, except perhaps the rationalization that students should be able to debug even without a debugger. At Michigan, where we did teach the debugger (in both Visual Studio and Xcode), students were often confused by the various stepping functionalities, and especially when loops are involved, they got tired of continuing until they reached some particular state. Conditional breakpoints could solve the problem, but at that point using print statements was easier.
I think the biggest obstacle in debugging was not seeing what values the variables contained, but matching those values with the error descriptions. While Python error messages were relatively clear, they often required a deeper understanding of what was going on. One error that stumped many students on a quiz comes from a line such as
>>> a, b = '1 2 3'.split()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: too many values to unpack (expected 2)
You can argue that I shouldn’t be teaching multiple assignment (… which is a valid point), but the general point here is that error messages can be cryptic even if they identify the buggy line in question. I don’t know if there’s any good way to teach that.
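For what it's worth, the error itself comes down to a simple rule that the message never states outright: the number of names on the left must exactly match the number of values on the right. A minimal sketch of the mismatch and two ways to resolve it:

```python
# split() returns three values, but the failing line provided only two
# names; the counts must match exactly.
parts = '1 2 3'.split()   # ['1', '2', '3']

# Fix 1: give one name per value.
a, b, c = '1 2 3'.split()

# Fix 2: when only the first values matter, use starred unpacking to
# collect the leftovers into a list.
first, second, *rest = '1 2 3'.split()
```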
The main way I have approached this is through crowd-sourced testing using the autograder. For grading, students' test cases accounted for 10% of their grade for an assignment, based on the two of their testcases that the most students failed. The testing score for the entire class is curved, so the student whose tests failed the most submissions always gets 100%; this means that everyone will get 100% for the testing portion in the unlikely event that everyone submitted perfect code. Incorrect testcases subtract from the score.
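To make the curve concrete, here is a sketch of how such a score could be computed. The function names, the best-two rule, and the penalty are my reconstruction of the scheme described above, not the actual autograder code:

```python
def raw_testing_score(fail_counts, num_incorrect_tests):
    """fail_counts: for each of a student's testcases, how many
    submissions that testcase failed. (Hypothetical reconstruction.)"""
    # Only the student's two most effective testcases count.
    best_two = sum(sorted(fail_counts, reverse=True)[:2])
    # Incorrect testcases subtract from the score, floored at zero.
    return max(best_two - num_incorrect_tests, 0)

def curved_scores(raw_scores):
    """Curve the class against its best tester."""
    top = max(raw_scores)
    # If nobody's tests failed any submission (everyone wrote perfect
    # code), everyone gets full credit for the testing portion.
    if top == 0:
        return [100.0] * len(raw_scores)
    return [100.0 * s / top for s in raw_scores]
```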
One concern when I first proposed this approach to testing was that students wouldn't necessarily learn to write good testcases. That's true, but I think it needs to be qualified. For one, I think we need to separate the habit of writing testcases at all from the ability to write good testcases. I think the autograder did a decent job at the former, as I've mentioned in a previous blog post.
I also think that whether students understood what "good" testcases are depends greatly on the domain. I did a lab where students had to determine whether a latitude-longitude pair was in the State of California (as defined by seven corner points). The lab was too math-heavy, but the heavily visual problem allowed students to use Google Maps to identify points just inside or just outside the boundary. These testcases actually identified a fundamental problem with the lab (namely, that we assumed the coordinates were on a plane and used straight lines instead of using great-circle distance) and, in that sense, they were exactly the right testcases that students should be writing. I don't think it was a strong habit by the end of the semester, but then I was not pushing particularly hard on testing either.
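The core of that lab, under the same planar simplification the assignment made, is a standard point-in-polygon test. A sketch (the corner coordinates and function name are illustrative, not the lab's actual code):

```python
def point_in_polygon(lat, lon, corners):
    """Ray-casting test on a flat plane -- the same simplification the
    lab made (straight edges, no great-circle distance).
    corners: list of (lat, lon) vertices in order around the boundary."""
    inside = False
    n = len(corners)
    for i in range(n):
        y1, x1 = corners[i]
        y2, x2 = corners[(i + 1) % n]
        # Does this edge cross the horizontal ray extending east from
        # (lat, lon)? The inequality check also guarantees y1 != y2.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside
```

Points "just inside or just outside" a boundary, like the ones students found on Google Maps, are exactly what stresses the edge-crossing logic here.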
This acumen, however, did not extend to more abstract problems. For a function that determines whether a date is part of the Gregorian calendar (i.e., after 1752-09-14), most students were unable to come up with a comprehensive test suite. Similarly, for some of the more data-driven assignments, most students didn't think of creating simpler fake data for testing. This second oversight can probably be remedied, but I think the fundamental problem is that writing good testcases is hard. As with debugging, some of the knowledge is not actually about the domain, but about more complicated aspects of computer science; in this case, what errors a stereotypical programmer might make. I can walk students through my own testcases (and in fact, I should do exactly that next time), but I suspect a lot of it comes down to experience.
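For the Gregorian-date function, the suite I would walk students through probes every comparison boundary, not just obviously-old and obviously-new dates. A sketch, with a hypothetical version of the function (the actual assignment's signature may differ):

```python
def is_gregorian(year, month, day):
    """True if the date falls after 1752-09-14 (hypothetical sketch of
    the assignment's function, using tuple comparison)."""
    return (year, month, day) > (1752, 9, 14)

# A comprehensive suite has to exercise each boundary:
assert is_gregorian(2020, 1, 1)        # clearly after
assert not is_gregorian(1700, 6, 6)    # clearly before
assert not is_gregorian(1752, 9, 14)   # the cutoff date itself
assert is_gregorian(1752, 9, 15)       # one day after
assert not is_gregorian(1752, 9, 13)   # one day before
assert is_gregorian(1752, 10, 1)       # later month, same year
assert not is_gregorian(1752, 8, 30)   # earlier month, same year
```

The common student mistake is stopping after the first two cases; the boundary cases are where an implementation that compares year, month, and day separately tends to break.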
We only used version control for the final project, where it was the easiest way to put a webapp on Heroku. I only taught the basics of version control (committing, pulling, merging, and pushing in git), and only through the GUI provided by PyCharm; for the most part, students had little trouble. To be fair, I mostly had students follow scripts for how to use git, but I think exposure is the right goal to aim for in CS1.
While I understand what I mean here at an, ah, abstract level, it is hard for me to say what this means concretely in a CS1 class. Perhaps the best example is one where the student did not abstract. For the course-directory webapp, one student wanted to list courses by the core requirement that they satisfied. Instead of writing a function that takes the requirement as an argument, however, they wrote one function for every single requirement, duplicating the same several lines of code about a dozen times. I was honestly surprised by this, as the student otherwise had a good understanding of the concepts in the class.
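The contrast looks something like this (the requirement names and course structure are made up; the student's actual code was for the course-directory webapp):

```python
# What the student wrote: one near-identical function per requirement,
# about a dozen times over.
def courses_satisfying_writing(courses):
    return [c for c in courses if 'writing' in c['requirements']]

def courses_satisfying_lab_science(courses):
    return [c for c in courses if 'lab science' in c['requirements']]

# The abstracted version: the varying piece becomes a parameter.
def courses_satisfying(courses, requirement):
    return [c for c in courses if requirement in c['requirements']]
```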
I can tell a similar story about structural abstraction, about a different student who created multiple copies of a class that differed only in one hard-coded constant, which should instead have been an argument to the constructor.
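Schematically (a hypothetical example, not the student's actual classes):

```python
# Duplicated: two classes identical except for one hard-coded constant.
class NickelRoll:
    def __init__(self, count):
        self.count = count
    def total(self):
        return 5 * self.count

class DimeRoll:
    def __init__(self, count):
        self.count = count
    def total(self):
        return 10 * self.count

# Abstracted: the constant becomes a constructor argument.
class CoinRoll:
    def __init__(self, value, count):
        self.value = value
        self.count = count
    def total(self):
        return self.value * self.count
```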
I will be honest: this kind of abstraction never had a prominent place in my curriculum, mostly because I'm not sure what I have to say about it. The idea of abstraction itself is sufficiently abstract that I'm not sure what lecture to provide. I will try to give more worked examples of generalizing code, but I would appreciate any ideas for more directly getting students to think about and practice abstraction.
This is another area where I struggled. I think there are several types of functional decomposition. One is actually with functions, where you find a part of the problem that seems self-contained and make it a function. I never wrote an assignment that emphasizes this. What my students did have trouble with is functional decomposition at the line-by-line level, where they couldn't figure out what variables they needed and how those variables should change. This was the main conceptual obstacle that students faced with loops, and I tackled it through worked examples and a much more scaffolded assignment. Students got it by the end of the semester, although next time I will just start with the scaffolded examples.
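The kind of worked example I mean is small: before writing the loop, name each variable and say how each line changes it. A sketch of how I would annotate one (the task itself is illustrative):

```python
def average(scores):
    """Average a non-empty list of scores, with the line-by-line
    decomposition spelled out: what variables exist and how they change."""
    total = 0               # accumulates the sum; grows every iteration
    count = 0               # how many scores we have seen so far
    for score in scores:    # score takes each list value in turn
        total = total + score
        count = count + 1
    return total / count    # both variables are needed only here
```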
Going back to the larger-scale functional decomposition, some of that is part of the functional abstraction I've already talked about. Students did pick up on the fact that they would have trouble breaking down an intermediate problem into the functions that I did. Looking back, however, the benefit of those functions in the assignment was mostly conceptual. Each function was only used once in the execution of the program, so putting everything in one block would not have drastically increased code size. I'm not saying that reducing cognitive load is pointless, but I'm wondering whether decomposition is best motivated when the code would otherwise be inefficient.
I don’t think I talked about the space-time tradeoff specifically, other than a throwaway mention when I answered a student’s question. Tradeoffs in general, however, I talked about a decent amount, most prominently in my representation lab. I am not sure what CS1 students need to take away regarding tradeoffs – should they only know that they exist? Maybe the goal here should be more specifically that students should be deliberate in selecting the compromise, which implicitly includes identifying tradeoffs. This general lesson is not one that I worked into my course at all, although I can now see places where it could be inserted. (For example, selecting between two lists, a dictionary, or a list of instances is a tradeoff in data representation.)
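That parenthetical tradeoff can be made concrete. Here are the three representations side by side, with made-up course data (the field names are hypothetical):

```python
# Two parallel lists: the least machinery, but lookup is a linear scan
# and nothing keeps the two lists aligned.
course_names = ['CS1', 'CS2']
instructors  = ['Kim', 'Lee']

# A dictionary: fast lookup by name, but it privileges one key field.
courses_by_name = {'CS1': 'Kim', 'CS2': 'Lee'}

# A list of instances: room to grow more fields later, at the cost of
# defining a class and writing more code up front.
class Course:
    def __init__(self, name, instructor):
        self.name = name
        self.instructor = instructor

courses = [Course('CS1', 'Kim'), Course('CS2', 'Lee')]
```

All three answer "who teaches CS1?", and being deliberate about the choice means articulating which operations the program actually needs.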
Connections with Other Disciplines
At first I thought interdisciplinary connections were something I emphasized, but looking back now, there are not a lot of places where I made these connections explicit. Students did spend two or three weeks on graphics as they learned object-oriented programming, but I would hesitate to call it art (although I would make an exception for some pieces). I hinted at some cognitive science/HCI/design work for the webapp as well, but I didn't force students to go through the design process.
I was originally going to do a lab on music visualization, which would have included a primer on the physics of sound, but I got swamped and canceled the assignment. For next semester, I'm currently planning a cellular automata/agent-based modeling assignment, in the spirit of the Parable of the Polygons (a.k.a. the Schelling segregation model), which could pull in sociological and economic concepts.
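The core of such an assignment is small enough for CS1. A minimal one-dimensional sketch of the Schelling dynamics (my own simplification for illustration; the planned assignment may look quite different):

```python
import random

def unhappy(grid, i, threshold=0.5):
    """An agent is unhappy if fewer than `threshold` of its occupied
    neighbors share its type. grid is a 1-D list; None marks an empty cell."""
    me = grid[i]
    neighbors = [grid[j] for j in (i - 1, i + 1)
                 if 0 <= j < len(grid) and grid[j] is not None]
    if me is None or not neighbors:
        return False
    same = sum(1 for n in neighbors if n == me)
    return same / len(neighbors) < threshold

def step(grid, threshold=0.5, rng=random):
    """One round: move a random unhappy agent to a random empty cell."""
    movers = [i for i in range(len(grid)) if unhappy(grid, i, threshold)]
    empties = [i for i in range(len(grid)) if grid[i] is None]
    if movers and empties:
        i, j = rng.choice(movers), rng.choice(empties)
        grid[j], grid[i] = grid[i], None
    return grid
```

Even this tiny version surfaces the sociological point: mildly intolerant individual preferences produce segregated neighborhoods in aggregate.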
Connections with the Non-Technological Real World
I mean, more than usual. When I wrote this goal, I was thinking of identifying how computer science concepts are embodied in existing infrastructure, like how library book stacks are an unrolled linked list. These examples are harder to find for CS1, and I don’t think I included any this semester. I will have to work harder to find cases where students can appreciate computation as a perspective and not just as a tool.