In the case of testing, it very much can scale. Tests need to be based on long-form questions that test comprehensive knowledge. Open book, open notes, and hell, even open collaboration up to some limit.
If a test is already graded with partial credit, which most are in engineering at least, then it's no harder to grade than an equivalent test with fewer but longer questions.
This obviously doesn't translate to multiple-choice tests where there is no partial credit, but at least in engineering those don't really exist outside of first year and maybe one or two second-year classes. And honestly, every intuition I have tells me that the classes I remember using no-partial-credit multiple choice shouldn't have been doing so in the first place.
Maths classes like algebra, precalc, calculus, statistics, and linear algebra should by no means be using no-partial-credit exams. That defeats their entire purpose: those classes exist to teach techniques rather than any particular raw knowledge.
Same for the introductory hard sciences like chemistry and physics.
And as for the ability to handle those more "bespoke" exams, we really need to be asking why certain students are taking certain classes at all. Many programs have you take a class knowing that only maybe 30% of it will be relevant to your degree.
Instead of funneling all students through a standard "maths education" class, programs might be better served by offering an "X degree's maths 1-3", or even by breaking maths classes up within the semester: you're scheduled with teacher X for one specific field up to week A, then teacher Y for another unrelated field up to week B, then teacher Z until the end of the semester. In-major classes need not do this, but general pre-req classes could benefit from being shortened and split across the semester into succinct fields of knowledge, so that maths or physics departments aren't unnecessarily burdened by students who will never once apply what they might have learned in the class.
-------------
The solution to testing students in a way that they can't cheat is to simply design tests that require students to apply their knowledge as if in the real world. No artificial handicaps; at most, check for obviously plagiarized solutions. If that's not a viable testing mechanism, it's probably worth asking why and considering reworking the course or program.
The solution to students not wanting to absorb knowledge is to stop forcing students to learn topics and techniques they'll never use just because some X < 25% of them will. Instead, split courses up into smaller chunks that can be picked and chosen when building degree tracks.
---------------
Edit: I forgot to include this, but the above is largely based on my experience, not just as a student but as a tutor for countless peers and juniors during my time at university, and as a student academics officer directly responsible for monitoring and supporting the academic success of ~300 students in an organisation I was part of. It largely mirrors discussions I've had with teaching staff, which always seem to boil down to "the administration isn't willing to support this" or some other reason rooted in misplaced incentives at the administrative and operational level (such as researchers being forced to teach courses and refusing to do anything above, or often even at, the bare minimum for the courses they are teaching).
> Tests need to be based on long-form questions that test comprehensive knowledge. Open book, open notes, and hell, even open collaboration up to some limit.
Coursework is already along these lines, no?
> The solution to testing students in a way that they can't cheat is to simply design tests that require students to apply their knowledge as if in the real world.
How would this apply to a course in real analysis, say?
University education generally isn't intended to be vocational.
It is, but exams are not, and if the intent of exams is to test knowledge, they should be in a format that is applicable to the real world and that can't easily be cheated. Also, for what it's worth, in essentially all of the courses I took at university, unless they were explicitly project-based classes, exams were the overwhelming majority of the grade (often ~75-90%).
What this meant in practice was that closed-book, closed-notes exams often had averages in the 30s or 40s, with everyone curved upwards at the end of the day, while open-book exams had averages in the 60s-80s: students who could apply their knowledge passed, and students who couldn't didn't. I can't recall a single course with the latter style of exams where I passed without knowing the material or failed while knowing it. With the former, however, I personally experienced both and watched numerous other students go through the same.
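(For anyone unfamiliar with how that curving tends to work, here's a minimal sketch; this is one common additive approach, with illustrative numbers, not any specific course's policy:

    # Hypothetical sketch of an additive curve: shift every raw score so
    # the class mean lands at a chosen target, capping at 100 and
    # preserving rank order. Numbers are illustrative only.
    def curve(raw: float, class_mean: float, target_mean: float = 70.0) -> float:
        return min(100.0, raw + (target_mean - class_mean))

    # A closed-book exam averaging 35 gets everyone shifted up 35 points:
    print(curve(35.0, class_mean=35.0))  # 70.0
    print(curve(48.0, class_mean=35.0))  # 83.0

The point being that the shift rescues everyone uniformly, whether or not they actually understood the material.)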
> How would this apply to a course in real analysis, say?
Sorry if I wasn't clear, but when I said "as if in the real world" I was referring specifically to students having access to the same resources they would have in the real world (i.e. reasonably flexible time constraints and access to texts, references, and tools), not to the questions needing to be structured as "in your field you'd use this like this" kinds of questions.
Unit testing is also frequently very artificial and disconnected from production use of a codebase. Nevertheless, there is a great deal of value in checking whether things you wrote actually do have the effects you intended.
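To make that analogy concrete, here's a minimal Python sketch; the slugify helper and its tests are hypothetical, and the inputs are contrived, exam-question style, yet the tests still verify the code has the effects its author intended:

    import unittest

    def slugify(title: str) -> str:
        """Turn a title into a URL-friendly slug (hypothetical helper)."""
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        # Artificial inputs, like exam questions, but they still check
        # that the code actually does what was intended.
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Open Book Exams"), "open-book-exams")

        def test_collapses_internal_whitespace(self):
            self.assertEqual(slugify("  real   analysis "), "real-analysis")

    if __name__ == "__main__":
        unittest.main()

Nobody will ever call slugify with those exact strings in production, just as nobody solves exam problems for a living; the value is in confirming the behaviour, not in the realism of the inputs.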