I recently took a CS class at Stanford with an interesting policy on cheating. While cheating almost certainly happened during the course, at the end of the quarter the course staff made a public post allowing any student who had cheated to send a private message to the staff admitting they had done so.

If a student admitted to cheating, they would face academic disciplinary action within the course (i.e., receiving a failing or low grade), but they would not be referred to the administrative office that deals with issues of academic integrity, and therefore would not face consequences like expulsion or official academic probation.

However, if a cheating student decided to risk it and not admit their guilt, they faced potentially even greater punishment. The course staff would run all students' code through a piece of software to detect similarities between submissions, as well as with online solutions. Students who were flagged by this software would then have their code hand-checked by at least one member of the course staff, who would make a judgment call as to whether it looked like cheating.

I found this policy quite interesting. As a former high school teacher, I've certain encountered teaching in my own classes, and have historically oscillated between taking a very harsh stance and an overly permissive one.

The one taken by the lecturers of this course offered a "second chance" to cheaters in a way I hadn't seen before.




That sounds great and all, but I honestly have doubts about this software that detects similarities… there are only so many ways to solve the bland questions that professors lift from books; kind of ironic. I'm assuming it's basically doing AST analysis and is no smarter than eliminating things like variables being renamed.
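
By "eliminating renames" I mean something like this toy normalization (my speculation about what such a tool does, not anything I know the course staff actually ran):

    import ast

    class RenameVars(ast.NodeTransformer):
        # Map every identifier to a canonical placeholder so that two
        # submissions differing only in variable names compare equal.
        def __init__(self):
            self.names = {}

        def visit_Name(self, node):
            node.id = self.names.setdefault(node.id, "v%d" % len(self.names))
            return node

    def canonical(source):
        return ast.dump(RenameVars().visit(ast.parse(source)))

    # canonical("total = total + 1") == canonical("count = count + 1")  # True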

They are basically stating that this “software” is 100% accurate. Furthermore, it's then left to the whims of some TAs?

No algorithm can detect cheating unless the number of permutations is very, very large (i.e., odds on par with being struck by lightning). Maybe one way to offset that would be to capture data as the student is entering the solution, but that was never the case for us; we just uploaded the source code to their custom-made Windows app.


Speaking from experience using similar software on students' assignments: it is often blatantly obvious when cheating has occurred.

To start with, at an undergrad level, most students had fairly distinct coding styles, usually with quirks that weren't "proper" coding. Some cheaters had the exact same quirks across multiple students' assignments.

Also, some cheaters had the exact same mistakes in their code, on top of the same code style.

Yes, the software picks up people who write correct solutions with perfect syntax, but those are the ones you just toss out, because there isn't any proof there.

The people who get caught cheating generally don't know what correct solutions and good code look like, so they don't understand how obvious it is when they copy-paste their friend's mediocre code.


I agree with you. I run a data science department in a corporation, and when I'm doing code review for a junior, I can tell what was original and what came from somewhere else. Fortunately, in the workplace context that just means trying to get people to paste the SO URL as a comment above the appropriate code block.


Assuming that the software detects a similarity between two or more students' submissions, how do you know which students cheated? What if one of the students (the one who actually did the work) had their program stolen/copied somehow (e.g., a screen left open in the lab or a printout of code)?


I teach some courses with coding assignments and we just tell the students very clearly and repeatedly, at the beginning of the course and before each submission deadline, that submitting duplicate material means failing. It doesn't matter if A copied from B, B from A, both copied from an external source, or even A stole B's password and downloaded their data. The penalty is the same. We cannot go into such details because we just don't have the means to find out, and some students are amazing at lying with a poker face.

It's a pity to have to fail students sometimes because they failed to secure their accounts and someone stole their code, but they have been warned and hey, securing your stuff is not the worst bitter lesson you can learn if you're going to devote your career to CS, I guess...


A cheating student enters the lab, turns on the video camera on their phone, walks casually behind other students recording their screens, then reviews the video for useful information. The other students fail. That seems like a poor outcome: it's plausible, and it's unfair to the student whose work was stolen through no fault of their own.


Indeed, it's plausible enough that I've actually caught students trying to do that.

The problem is: what's the realistic alternative? Just letting cheating happen is also unfair (to students who fail while the cheater passes). And finding out what exactly happened is not viable, because students lie. We used to try to do that in the past, but the majority of the time all parties involved act outraged and say they wrote the code and don't know what happened. Some students are very good actors; many others aren't, but even when you face the latter, your impression that they are lying is not proof that would hold up in a formal evaluation process or withstand an appeal.

So yes, it can be unfair, but it's the lesser evil among the solutions I know.


Ask the students how their code works and how they came up with it. It shouldn't be hard to tell who actually wrote it.


On the one hand, as we know from the P vs. NP problem (at least if we assume the majority opinion), explaining a solution is much easier than coming up with it... and even easier if they copy from a good student who not only writes good code, but also documents it.

On the other hand, even if I am very confident that a student didn't write the code because they clearly don't understand it (which is often the case), this is difficult to uphold if the student appeals. For better or for worse, the greater accountability in grading and the availability of appeal processes means that you need to have some kind of objective evidence. "It was written in the rules that duplicate code would not be accepted, and this is clearly duplicate code" is objective. "I questioned both students and I found that this one couldn't correctly explain how the code works, so I'm sure he didn't write it" is not.

Note that I do this kind of questioning routinely (not only when cheating is involved) and take it into account in grades, because it of course makes sense to evaluate comprehension of the code... but outright failing a student on the grounds of an oral interview can easily get a professor into trouble.


> On the one hand, as we know from the P vs. NP problem (at least if we assume the majority opinion), explaining a solution is much easier than coming up with it... and even easier if they copy from a good student who not only writes good code, but also documents it.

You can ask “tricky” questions that someone who understands the material shouldn't have a problem answering, such as “if the problem required you to also do this, how would you change your code?”.

"I questioned both students and I found that this one couldn't correctly explain how the code works, so I'm sure he didn't write it" is not.

Fair enough. But at least you can give a bad grade for not understanding the course material.


I would let 100 people cheat if it meant I was sure 1 innocent student wasn’t punished unjustly.

People who don't cheat may benefit in the future from not doing so.

I say "may" here because I generally found university education to be useless for myself. Instead, I wish I had met the folks I consider mentors at work earlier in my life.


> I would let 100 people cheat if it meant I was sure 1 innocent student wasn’t punished unjustly.

This makes sense in the justice system, but in the justice system you can often find proof of what happened, so the system still acts as a deterrent even if a fraction of criminals get away with no punishment. In university assignments, most of the time it's practically impossible to find evidence of who copied from whom, so applying that principle would basically mean no enforcement: everyone would be free to cheat, and assignments would just not make sense at all.

Also, failing a course is far from such a big deal as going to jail or paying a fine. At least in my country, you can take the course again next year and the impact on your GPA is zero or negligible. You will have an entry in your academic record saying that you failed in the first attempt, but it won't be any different from that of someone who failed due to, e.g., illness.

If the consequences were harsher (e.g. being expelled from the institution, or something like that) then I would agree with you.


Put a security camera in the lab. Catch a student doing something like that and you have grounds for expulsion.


When I was a TA checking "Intro to programming" HW assignments, my brain was the similarity check software.

Anyway, when I detected two basically identical submissions, I would call both students into my office. I would chide them, explain to them that learning to code happens with your fingers, and that if they don't do it themselves, then even though they might sneak past the TA, they'll just not know programming and will be stuck in future courses.

Then I would tell them this:

"Look, I have a single assignment here, with a grade, on its own, of X% (out of a total of 100%), and two people. I'm going to let you decide how you want to divide the credit for the assignment among yourselves, and will not second-guess you. Please take a few minutes to talk about it outside and let me know who gets what."

Most times, one person would confess to cheating and one person got their grade. For various reasons I would not report these cases further up the official ladder, and left it at that.


It becomes obvious when you ask them to explain the code. At my university I once overheard a boy and a girl presenting some code "they" had written to a TA. The TA asked them some basic questions on while-loops and function calls. It became obvious that the boy had written all the code and the girl had no clue. So the TA decided that the boy had passed but that the girl had to come back and present the code herself on the next session.


It doesn't matter; both violated academic integrity by letting the copying happen. (Submissions are never actually stolen.) If you think letting copying happen is less severe, you ask them and rebalance the credit based on the work. Most of the time "they made it together".


Curves are the lesser evil. There are professors who don't give good grades at all. If you take a course run by them, you can't get more than a C. Meanwhile, there are other professors where everyone gets an A easily.

Most students will probably pick the easy professors who give only As, because for them the degree is just a ticket to a job.

In fact, those "tough" professors can have an adverse effect on those who picked the harder route. If you don't get good grades, you will have a lower GPA, and that dream company will not even invite you for an interview. Some automated HR system will reject your application. They don't care that you went to a professor who taught you a lot; they only see the low grade.

Same for scholarships: a tough professor already makes it difficult to get good grades, and if you are graded without a curve, you get a bad grade and can lose your scholarship.

Nobody cares about you as a person, or your knowledge, they measure you by your grades.

This is a tragedy of the commons in some ways: professors are supposed to give good grades, otherwise students won't choose them. Those who want to know more are punished for it in multiple ways (first of all, they need to study more, but then they get a lower grade, which means a lower GPA, which can lead to worse job offers, no scholarships, etc.).

If you want to be a "popular" professor, just pass everyone?

On a side note, at those great universities, don't they pass everyone anyway? I think the front page had an article some time ago saying that when you get into the Ivy League, you will get a B or C even if you are bad; they generally don't kick out students who try to study but aren't particularly good.

Curves wouldn't be needed if every course had an objective list of material that should be learned, but even that is difficult, and not comparable between professors at the same university, not to mention different ones, despite standards and various efforts (not to mention measuring whether students really know the whole list).


How is sharing knowledge "violating academic integrity"? Unless given specific and explicit instructions not to reveal working solutions, sharing your code is literally just "helping" others; it's up to them to either study it and produce their own versions, or just blatantly copy and cheat.


Because each university has university-wide rules forbidding sharing assignment solutions. It is explicitly forbidden even before the course starts, unless the syllabus or professor directs otherwise. You can't "help" others with their own assignments by giving them your solution. You can't receive such direct "help" either.

Edit: here's the text of my alma mater: Any behavior of individual students by which they make it impossible or attempt to make it impossible to make a correct judgment about the knowledge, insight and/or skills of themselves or of other students, in whole or in part, is considered an irregularity that may give rise to an adjusted sanction.

A special form of such irregularity is plagiarism, i.e. the copying without adequate source reference of the work (ideas, texts, structures, designs, images, plans, codes, ...) of others or of previous work of one's own, in an identical manner or in slightly modified form.

[https://www.kuleuven.be/onderwijs/oer/2021/?faculteit=500004... translated with Google]


In my time in college I helped a lot of fellow students work through a lot of assignments. I sat down with them and helped them to think through the problem and find examples to learn from that weren't full solutions to the assignment. I helped them find difficult bugs in their implementation by pointing them in the right direction or showing them debugging tricks I found helpful.

What I didn't do was show them my implementation or even talk about how I solved it. Yeah, doing it the long way takes a bit more effort, but the result is that the students I helped actually understood the code they submitted and were better equipped to solve the next assignment without help.


You ask them to do it again in front of you


I implemented the widely used MOSS algorithm (mentioned by a sibling) for my CS department in my senior year. That algorithm doesn't do AST analysis; it just looks at the plain text in a way that is resistant to most small refactorings. MOSS compares sets of k-grams (strings of k characters) between every pair of projects under test and produces the number of shared k-grams for each pair. On any given assignment in a given semester, there's a baseline amount of similarity that is "normal". You then test for outliers, and that gives you the projects that need closer scrutiny.
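
In toy Python terms, the comparison step looks roughly like this (an illustrative sketch, not our actual implementation; the k value and z-score cutoff are made up):

    from itertools import combinations
    from statistics import mean, pstdev

    def kgrams(source, k=5):
        # Strip whitespace so trivial reformatting doesn't change the set.
        s = "".join(source.split())
        return {s[i:i + k] for i in range(len(s) - k + 1)}

    def suspicious_pairs(projects, k=5, z=3.0):
        # projects: dict mapping student id -> full source text
        grams = {sid: kgrams(src, k) for sid, src in projects.items()}
        counts = {(a, b): len(grams[a] & grams[b])
                  for a, b in combinations(grams, 2)}
        # Flag pairs sharing far more k-grams than the semester baseline.
        mu, sd = mean(counts.values()), pstdev(counts.values())
        return {pair: n for pair, n in counts.items() if n > mu + z * sd}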

On the test data we were given (anonymized assignments from prior semesters together with known public git repos), we never had a false positive. On the flip side, small refactorings like variable renames or method re-ordering still turned up above the "suspicious" threshold because there would be enough remaining matching k-grams to make that pair of projects an outlier.

Our school explicitly did not use the algorithm's numbers as evidence of cheating and did not involve the TAs--the numbers were used only to point the professor in the right direction. We excluded all k-grams that featured in the professor's materials (slides, examples, boilerplate code). It also helped that they only used it on the more complex assignments that should have had unique source code (our test data was a client and server for an Android app).

My sense was that this was a pretty good system. Cheaters stood out in the outliers test by several orders of magnitude, so false positives were extremely unlikely. At the same time, the k-gram approach means that if you actually manage to mangle your project enough that it's not detected as copied, you had to perform refactorings in the process that clearly show you know how the program works; anything less still leaves you above the safe zone of shared k-grams.


From doing some cursory research, it appears the software in question is called MOSS (Measure of Software Similarity) and is currently being provided as a service [0].

Since it is intended to be used by instructors and staff, the source is restricted (though "anyone may create a MOSS account"). According to the paper describing how it's used [1], "False positives have never been reported, and all false negatives were quickly traced back to the source, which was either an implementation or a user misunderstanding."

Sources:

[0]: https://theory.stanford.edu/~aiken/moss

[1]: http://theory.stanford.edu/~aiken/publications/papers/sigmod...


I used something similar when I was a TA 20 years ago, and while your assumption seems reasonable, there are actually a lot of different ways to solve even quite simple tasks, and most cheating is very obvious on manual inspection.


Yep... If you're going to go through the effort of completely rewriting a piece of code to try and dodge an AST analysis algorithm, you've effectively just done 70% of the work and put your grade/position at the institution on the line. It's not worth it, and so people don't tend to do that. It's the same thing with plagiarism—students could very well resynthesize a stolen work in their own words. It would still be plagiarism, sure, but it's also putting in a large amount of effort while still being risky.


If you rewrite everything you steal (e.g. never copy/paste), it’s no different from using an especially well written source.


Well, no. It's still plagiarized if you fail to communicate that it isn't your original work. You can't just steal ideas from someone else's paper, even if you rewrite everything. If you rely on another paper for inspiration, you have to cite it. And if a student submitted a paper that was just another paper entirely rephrased, that would not be acceptable in the least, even if they cited their source, because the expectation in writing a paper is that you contribute something novel, not just regurgitate someone else's argument.


If the problem is large enough, I do submit that there are multiple (even many) ways of solving it.

I will also say that there are problems where that is not the case. For example, we were told to write simulators for scheduling schemes (RR, MLFQ). Other than using different data structures (and even that's a bit of a stretch), I'm not sure how much variance there will be.

Using the right tool for the right job is important.

Just above your post another author posted/cited results of a system that “never produced false positives”.

I think the number that author cited is probably correct, but presumably the tool is used in cases where the problems are big enough to warrant it.


The problems we had were way way simpler than anything deserving an acronym. You'd think there was only one way to do it and yet it was not hard to distinguish plagiarism.


Do you happen to have a few examples? I’m super curious! How many students were taking the course?


I noticed a swap in your prose (still comprehensible), but just realized that cheating and teaching are semi-spoonerisms (swapping the sound order within a single word)... how apropos!


We had the same policy at my uni in Poland. Admit to cheating without being called out? Depending on the professor's mood, they either allowed you to retake the exam (though the best grade you'd get was the lowest passing one), or you just failed the course and tried again next year.

Either way, they had less paperwork, and, well, they wouldn't report that person.


> I've certain encountered teaching in my own classes,

This kind of mistyping reminds me of those examples of people whose names fit their jobs, but it's rare to find such an apropos example.

Sorry to derail your point but the juxtaposition of "ch" and "t" here is perfect.


Not juxtaposition, transposition. https://en.wiktionary.org/wiki/juxtaposition


Yes, my mistake, thanks.


The results of checking against existing solutions and against other test-takers' solutions must be weighed with strong human judgment. Programming problems of the kind asked in tests are essentially mathematical formulas/algorithms, and there isn't much variation in how a given formula or algorithm can be implemented.


I don't think these techniques are often applied to problems in tests - there are other, simpler ways of catching cheaters there.

They are much more likely to be applied to homework assignments, where the opportunity for copying is large but the chance of two students producing the exact same >500-1000 line program is slim to none. Perhaps once in a while a critical function will be copied and no one will realize, or similarities in a trivial function will be unnecessarily flagged, but this will be relatively rare and quickly discovered in manual review.


There is a lot of syntactic variation possible, both for formulas and algorithms. Even for something as simple as quicksort there is enough natural variation for a class of 30, maybe even 100 (if no references can be used). Anything more complex, and even with references it should be unique.
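
For example, here are two sketches of my own that compute the same thing while sharing almost no text:

    # Variant A: functional style, new lists, first element as pivot.
    def qsort_a(xs):
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        return (qsort_a([x for x in rest if x < pivot]) + [pivot]
                + qsort_a([x for x in rest if x >= pivot]))

    # Variant B: in-place Lomuto partition, last element as pivot.
    def qsort_b(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo < hi:
            pivot, i = a[hi], lo
            for j in range(lo, hi):
                if a[j] < pivot:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi] = a[hi], a[i]
            qsort_b(a, lo, i - 1)
            qsort_b(a, i + 1, hi)
        return a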


It's not _just_ trying to be lenient and offer a second chance - it's a way to catch more cheaters. "Turn yourself in and we'll go easy on you... because we might not catch you."



