Technology aimed at education could benefit an incredible number of students by making classes and learning a more pleasant and efficient experience. Computers can't replace a really good human teacher, but they can help good human teachers reach a vast audience of students. Massive open online courses (MOOCs) promise to change how education works, but some of the necessary technological tools may still be missing. It's pretty straightforward to test students on math problems in an automated way, but grading essays is a much more daunting problem. There have been calls for automated grading software from various organizations (like the Hewlett Foundation).
But at the same time, the National Council of Teachers of English argues that computers simply can't grade essays. Here are just a few more links on this debate over using algorithms instead of English professors (or grad students).
Technology can be very useful for helping teachers reach out to more students and for spreading information efficiently among schools. Some grading can be automated, but obviously not all grading can be done with heuristics and strict rules. Here are just a few examples of grading challenges that teachers are already facing that might need some technological improvement.
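To make that contrast concrete: grading a numeric math answer reduces to a comparison against an answer key, which is why it automates so easily. Here's a minimal sketch in Python -- the function names are purely illustrative, not any real grading platform's API:

```python
# Minimal sketch of automated math grading: a numeric answer either
# matches the key (within a tolerance) or it doesn't. Function names
# here are hypothetical, for illustration only.

def grade_numeric(answer: str, key: float, tol: float = 1e-6) -> bool:
    """Return True if the submitted answer matches the key within tol."""
    try:
        return abs(float(answer) - key) <= tol
    except ValueError:
        return False  # non-numeric submissions are simply marked wrong

def grade_exam(submissions, answer_key):
    """Count correct answers across a batch of (answer, key) pairs."""
    return sum(grade_numeric(a, k) for a, k in zip(submissions, answer_key))

print(grade_numeric("3.14159", 3.14159))                  # True
print(grade_exam(["4", "9.0", "cat"], [4.0, 9.0, 16.0]))  # 2
```

An essay has no equivalent one-line comparison against a key, which is exactly where the heuristics-and-strict-rules approach breaks down.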
Math might not be the easiest subject for some students, but there might be different ways of teaching it that could make it more tolerable for kids. The more we learn about how our brains process math problems, the better we can teach ourselves how to tackle math education. There's a lot of concern over how Americans can compete in a global economy if our kids don't have some pretty basic math skills. Maybe some of these findings will help students pick up some much needed math skills.
There are a lot of standardized tests for kids to take, but it's not always clear what the results actually mean. If society wants to create a huge population of adults who can memorize some facts or fill in circles with No. 2 pencils, then we're doing a pretty good job of it. Here are a few links that question the usefulness of certain kinds of tests.
A year ago, we wrote about how a report had uncovered widespread cheating by FBI agents on a test meant to get them to stop abusing surveillance tools. Apparently, agents passed the answers around to one another, and many -- including the head of the FBI's DC office -- finished in such a short period of time that it was impossible that they had actually gone through the exam.
Yes, it certainly appears that the FBI's response to FBI agents rushing through the exam and cheating... is to make the test that much easier.
It's also not at all clear if anyone was disciplined for the cheating, though it certainly doesn't sound like it. If anything, it sounds like rather than recognizing that the agents did anything wrong, the FBI has determined that the cheating just meant that the agents didn't want to spend so much time making sure they understood the rules for surveillance.
While the TSA is still fighting as hard as possible to be able to either see you naked or touch your private parts, apparently it hasn't spent that much time actually figuring out how to look for people carrying weapons onto planes. A few folks have sent in this ABC story about a man who boarded a plane with a loaded handgun that had been in his carry-on bag. The guy noted that he normally carries the gun in his bag, but takes it out before traveling -- he just forgot to do so and was pretty spooked when he realized he had the gun on him (he reported the incident to the TSA upon landing).
Even scarier, the article notes that the TSA admits it's really bad at finding weapons, saying that the "failure rate" of tests is reaching 70% at some major airports and that at some airports "every test gun, bomb part or knife got past screeners." So, while screeners are looking at or touching your crotch, they're apparently not bothering to look for guns. Comforting.
A friend passed on this Telegraph story about how 200 students in a Strategic Management class at the University of Central Florida came forward to admit to "cheating" on the midterm exam after the professor, Richard Quinn, gave a lecture in which he noted evidence that about 1/3 of the 600-student class had "cheated" on the exam. He then gave them an option: if they admitted to cheating within a week, they would be able to complete the class, the incident would not go on their record, and they would not face discipline (they would also have to take an ethics class). If they did not, and they were still caught, they could face expulsion for violating academic integrity policies. You can watch the video of the lecture here:
Not surprisingly, the story of 200 students "turning themselves in" made all sorts of headlines. It's a good story of "cheaters" being pressured into 'fessing up... right? It's leading to the typical hand-wringing stories about what we should do about cheating in schools. But, as I watched the video, the whole thing started to feel just a little bit off... My main interest was to learn two things: (1) what the students did to cheat and (2) how the professor identified who cheated. Both seemed like pertinent details.
The answer to that first one surprised me. The "cheating" was that students got their hands on the textbook publisher's "testbank" of questions. Many publishers have a testbank that professors can use as sample test questions. But watching Quinn's video, it became clear that in accusing his students of "cheating" he was really admitting that he wasn't actually writing his own tests, but merely pulling questions from a testbank. That struck me as odd -- and I wasn't really sure that what the students did should count as cheating. Taking "sample tests" is a very good way to learn material, and going through a testbank is a good way to practice "sample" questions. It seemed like the bigger issue wasn't what the students did... but what the professor did.
Looking around, it seems like a lot of the students agree. They're saying that the real issue is that Prof. Quinn simply copied questions from the publisher, rather than actually writing his own test, and noting that this seems like a massive double standard: the professor is allowed to just copy questions from others for his tests? As the article notes:
Can the UCF students be blamed for using all the available tools to study for the test? How were the students to know that Quinn would take his questions from the test bank, when he explicitly said that professors do not do so any more? Moreover, why did Quinn tell his students that he is the one who creates the mid-term and final exams, when in fact it wasn’t so?
The students have put together a video pointing out where he said (in the first lecture) that he writes the questions himself:
The local student news operation sent a reporter to speak to Quinn and ask him about the double standard and his copying of questions, and Quinn totally ignored him:
Now, there's a pretty good chance that some of the students knew that Quinn was a lazy professor who just used testbank questions rather than writing his own. That's the kind of information that tends to get around. But it's still not clear that using testbank questions to study is really an ethical lapse. Taking sample tests is a good way to practice for an exam and to learn the subject matter. And while those 200 students "confessed," it seems they did so mainly to avoid getting kicked out of school -- not because they really felt they did anything wrong -- and I might have to agree with them.
We've seen plenty of stories over the years about professors trying to keep up with modern technology -- and I recognize that it's difficult to keep creating new exams for classes. But in this case, it looks like Prof. Quinn barely created anything at all. He just pulled questions verbatim from a source the students had access to as well. It would seem that, even if you think the students did something wrong here, the professor was equally negligent. Will he have to sit through an ethics class too?