Good timing! I'm currently battling memory issues with a project where I'm straightening full-resolution panoramas. I often get close to my machine's limit of 16GB.
I think this might help me immensely. Thank you OP.
One random and possibly incorrect memory from back when I was using scikit-image: switching from order=3 (cubic) to order=2 (quadratic) interpolation can significantly reduce CPU usage and, I think, memory usage, for only a minor (but real) reduction in output quality.
No. According to rumours, the high-end 64-128-core GPUs will be exclusive to the Mac Pro, and the MacBook Pro will have 16-32-core GPUs instead. I think it's fair to assume that we won't see many changes to the cooling in their laptops.
For my last internship, I had to build an e2e testing suite. After studying the various options already on the market, I ended up choosing Playwright, and I didn't regret it.
Having gone through the same experience, I can tell you that it isn't necessarily the case. More often than not, those who had some programming experience in a high-level language would get discouraged by the difficulty and drop out.
Ultimately, it was mostly those who didn't get discouraged, and who socialized with the other students, who remained in the end.
I myself did not have any programming experience before going through that ordeal.
My experience with C courses built around automatically validated homework is that they filter out not only "the weak" but also people with previous experience (especially C on Unix), because nobody with any kind of practical Unix experience will write code that passes these kinds of rigorous C-standard-conformance and memory-leak checks. For practical applications, doing all of that is not only unnecessary but actually detrimental to runtime efficiency.
I think a passing test suite, no diff after clang-format, and clean valgrind and clang-analyze checks are not too much to ask for, as long as the requirements are documented and the system is transparent and allows resubmission.
But I agree there is a risk of academic instructors going way overboard in practice, e.g. by flagging actually useful minor standard-conformance violations (like zero-length arrays or properly #ifdef'd code that assumes bitfield order).
My aversion to such systems is primarily motivated by the fact that every such system I've encountered somehow penalized resubmissions. I probably don't have anything against "you have to write a program that compiles with this gcc/llvm command line without producing any diagnostics and then passes this intentionally partially documented test suite". But in most cases the first part ends up meaning something like "cc -Werror -std=c89 -ansi -strict", where the real definition of what that means depends on what exactly the "cc" is, and the teachers usually don't document that and don't even see why documenting it matters (i.e. you can probably produce some set of edge-case inputs to gcc to prove that gcc is or isn't a valid implementation of some definition of C, but this conjecture does not work the other way around).
In most of my courses that did something like this, there was no resubmission.* The professor supplied a driver program, sample input the driver would use to test your program, expected output, and a Makefile template listing the 3 compilers and the flags your program was expected to compile against and execute under without issue. His server would do the compile-and-run for all 3, against both the sample input and hidden input revealed with the grade, using the same compiler versions as the school lab computers.
* As a potentially amusing aside, a different course in a different degree program had a professor rage-quit after his first semester because he didn't want to deal with children -- he had a policy of giving 0s on papers with no name or class info on them, and enough students ("children") failed to do that correctly but complained hard enough to overturn the policy and get a resubmission.