I am currently teaching an introductory computer science class, with a three-hour lab every week, split into two lab sections. The last lab on Thursday went a little pear-shaped – not enough that I couldn’t fix it in the two hours before the other half of the class had lab, but enough that I had to change the grading criteria for the first section. What caught my attention is that this was the second lab (technically the third, but the first lab was just getting software set up), which reminded me of the fiasco with my second assignment for Topics in AI last semester. This is not a universal trend – the second homework for the intro course is due tonight, and students have finished it without too much trouble – but I do think it’s a trend worth exploring.
The most salient connection between the semesters is that the assignment/lab failed for the same reason, namely, that it was too difficult for the students. More specifically, I think in both cases an excellent first assignment led me to raise the bar for the second, only to find that the required thinking was too abstract. Last semester’s assignment was on reinforcement learning, where I expected students to connect properties of the domain to properties of their feature space – despite their only having learned about agents a month earlier. For Intro CS this semester, although the content itself was not as abstract as reinforcement learning, the method of thought I expected of them was too difficult. We had just covered functions and branching, so I thought a good lab would be for them to write two functions (and a main) that tell the user whether they are in California, and how far away they are from Oxy. As a warm-up, I also made students write a test case for a simple function, the idea being that they would first write their own test cases for the two functions and submit them to the autograder, then submit the code, with their grade based on some weighted average of the two.
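For concreteness, here is a minimal sketch of what the second function might look like. The coordinates for Oxy are approximate, and the choice of plain Euclidean distance in degrees (rather than great-circle distance) is my assumption about what an intro lab would ask for.

```python
import math

# Approximate campus coordinates; illustrative only, not from the lab itself.
OXY_LAT, OXY_LON = 34.13, -118.21

def distance_from_oxy(lat, lon):
    """Straight-line distance (in degrees) from the given point to campus."""
    return math.sqrt((lat - OXY_LAT) ** 2 + (lon - OXY_LON) ** 2)
```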
The first mistake was that I stupidly left the text telling them to write their own test cases at the end of the lab. If I’ve learned anything from three weeks of labs, it’s that no one reads ahead. Not even to the next line, which would provide an explanation for the puzzling behavior they just got.
But the bigger mistake was that the thought process of figuring out whether a point was in California was much more difficult than I had anticipated. I defined seven points that marked the (non-convex) corners of California, so the next step was to rule out increasingly tight bounding boxes, with a bit of high school geometry/algebra to help. I paired students up by self-identified ability in high school math… which, as a rule, was an overestimate of their actual ability.
The real problem was not even the computer science – once students figured out what they were doing, they could write the code to do it. The problem was that students didn’t have a good intuition for how to tackle the geometry. Some students wanted to divide the area of California into multiple triangles, which, while a possible solution, is much harder than ruling out external areas. A number of pairs got really close to code that would pass most of the test cases, at least for that function, but they were 15–45 minutes short of ironing out all the bugs.
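The rule-out approach can be sketched in a few lines. The function and box coordinates below are hypothetical stand-ins, not the lab's actual seven corner points; the idea is just to reject points outside the overall bounding box first, then reject points inside smaller boxes that lie outside the non-convex corners.

```python
def in_bounding_box(x, y, x_min, y_min, x_max, y_max):
    """Return True if (x, y) falls inside the axis-aligned box."""
    return x_min <= x <= x_max and y_min <= y <= y_max

def in_region(x, y):
    """Rule-out sketch with made-up coordinates (lon, lat)."""
    # Anything outside the region's overall bounding box is out.
    if not in_bounding_box(x, y, -124.4, 32.5, -114.1, 42.0):
        return False
    # Rule out a smaller box outside a concave corner (hypothetical values);
    # the real lab would repeat this for each excluded area.
    if in_bounding_box(x, y, -120.0, 39.0, -114.1, 42.0):
        return False
    return True
```

A triangulation solution, by contrast, needs a point-in-triangle test for every triangle, which is why ruling out areas is the gentler path.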
For the second lab section, I moved the text about writing test cases to the beginning, then added a hint about ruling out areas instead of ruling them in. I also removed the other two functions from the lab. Most of the students were then able to finish in the last 30 minutes, which suggests the lab was well timed, and that the fault lay almost entirely in the lab instructions.
Still, even having identified the error in retrospect, I don’t feel like I have a good grasp on which other skills I can trust students to have. The failure in reinforcement learning I could understand, since the topic was new to all of them. I’m not sure that what students lack is computational thinking, although one could certainly argue that they lacked the skill of breaking things down into the simplest components. It wasn’t exactly the high school math either, since students eventually could figure out the equation of a line and determine whether a point is above or below it. The difficulty was entirely in the approach, and I’m not sure how to teach that to students in any generalizable way.
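The line check the students eventually worked out amounts to something like this sketch, using the slope-intercept form they'd know from high school (the function name is mine, and it assumes the border segment is not vertical):

```python
def above_line(px, py, x1, y1, x2, y2):
    """True if (px, py) lies above the line through (x1, y1) and (x2, y2).
    Uses y = mx + b from high school algebra; assumes x1 != x2."""
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return py > slope * px + intercept
```

Mechanically it is three lines; the hard part, as above, was knowing that this is the tool to reach for.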
At a different level, I don’t think the lab worked as well as I wanted as an introduction to functions and branching. They did write a function, but they didn’t get to use it, and most solutions did not require any nested branching. I still think the question is one that gets students thinking, but I will probably replace this lab in future semesters.
For now, I’m temporarily scaling back my lab ambitions. My original intention for the coming lab (on lists and looping) was for students to write all of (console-based) Connect Four, but now I think I’ll just have them write two key functions. I worry that it will be too easy, but I suspect it will be just right with the added difficulty of figuring out what they need the computer to do in the first place. I’m also thinking of making them explain their plan of attack before letting them write code, treating the whole process as though it were engineering design, but that’s probably a little too much.
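As a rough sketch of what "two key functions" might be – dropping a piece and detecting a win – here is one possibility. The board representation (a list of columns, each stacked bottom-up) and all the names are my own choices, not a committed lab design, and a full lab would also need the vertical and diagonal win checks.

```python
ROWS, COLS = 6, 7  # standard Connect Four dimensions

def drop_piece(board, col, piece):
    """Drop piece into column col; return True on success, False if full."""
    if len(board[col]) >= ROWS:
        return False
    board[col].append(piece)
    return True

def get_piece(board, row, col):
    """Piece at (row, col), counting rows from the bottom, or None if empty."""
    if row < len(board[col]):
        return board[col][row]
    return None

def has_horizontal_win(board, piece):
    """True if piece has four in a row horizontally anywhere on the board."""
    for row in range(ROWS):
        run = 0
        for col in range(COLS):
            run = run + 1 if get_piece(board, row, col) == piece else 0
            if run == 4:
                return True
    return False
```

Even this pared-down version forces students to decide how the board is represented before any loop gets written, which is exactly the plan-before-code step I want them to practice.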