> As an analogy, driving a car is dangerous. Whenever I drive, I could easily kill someone. But the government doesn’t force me to submit a driving plan any time I want to go somewhere. Instead, if I misbehave, I am punished in retrospect. Why don’t we apply the same policy to research?
"We" decided that Tuskegee was bad enough that it should be stopped before harm is done, and that there is no appropriate or sufficient "punish[ment] in retrospect" for the fallout.
The government makes you get a license to drive at all, then "drive a Pinto" versus "drive a Trabant" are similar enough that they don't require more info. They require you to get different licensure to drive a bigger truck where you could potentially cause more harm, or to drive an airplane. In this analogy the IRB is the DMV/FAA/whatever, and you're asking for permission to drive a tank, a motorized unicycle, a helicopter, an 18-wheeler or a stealth fighter. You don't get a Science License rubber stamp because that's like getting a Vehicle License - the variation in "Vehicle" is big enough that each type needs review.
>"We" decided that Tuskegee was bad enough that it should be stopped before harm is done, and that there is no appropriate or sufficient "punish[ment] in retrospect" for the fallout.
The thing is, although you and the linked article seem to be associating IRB approval just with human studies, these days you need it for mouse studies.
There are different IRBs that review animal research[1]. I believe they were created to provide an ethical framework around the use of animals in science. Same thing: what are "we" accepting when it comes to research of this nature?
In the context of universities, the equivalent of riding a bike or a skateboard here is having people fill out surveys after events, or piloting new services offered by a student health clinic.
(I guess the point of analogies like these is to force us to sweat the details and examples.)
Indeed, and a person with a medical license is able to do much, much more damage than the people who need to ask IRB for permission to do research.
If your point is that we could replace IRBs with some sort of researcher license that you'd need to obtain before doing studies that today require IRB approval, then I support it, because while not ideal, it improves on the status quo.
I do think that there's a debate to be had on how reactive or proactive we should be in ensuring the ethical practice of… well, anything involving significant investment. In the worst case, reactive systems like malpractice suits or board actions against physicians aren't easy to navigate if you don't have many resources.
When HN was an entrepreneurship-oriented community, the overwhelming attitude was that it's better to ask for forgiveness than permission. That's because even if you're doing something clearly good and morally unimpeachable, having to ask for permission slows you down and invites bikeshedding. Now that HN is a general industry forum, the attitude is more favorable towards preventing risk at the cost of reducing the amount of value produced.
Every time you get one of those surveys, rank it at zero, then add "Net Promoter Score is a flawed vanity metric and shouldn't be used for business purposes" in the comment box. Sometimes I link the Wikipedia NPS "Criticism" section as well.
Most places don't care about the results from an actual customer-service perspective. The above gets crickets, not even an autoresponder.
For companies that do care (tiny startups, mostly) I've gotten IMMEDIATE personal email responses from CEOs and founders asking what they can fix for a zero NPS. That's a great place to link the criticism section if not done previously, and to provide useful, raw feedback on what you love/hate about their products.
This tanks the evaluation of any individuals you interacted with while dealing with the company, which can impact their pay or even push them towards getting laid off for low performance. So I'd advise caution when applying this particular idea, since many employers use these surveys to decide who to fire.
That's "negotiate with terrorists" logic. I'm not going to pretend to take some company's bullshit seriously because they implicitly threaten to fire their employees at random if I don't.
(I do advocate for laws against arbitrary firings and encourage employees to unionise and/or move to jurisdictions with strong labour laws).
There's very likely zero or positive impact on the decompression side of things.
Starting with smaller data means everything ends up smaller. It's the same decompression algorithm in all cases, so it's not some special / unoptimized branch of code. It's yielding the same data in the end, so writes equal out plus or minus disk queue fullness and power cycles. It's _maybe_ better for RAM and CPU because more data fits in cache, so less memory is used and the compute is idle less often.
It's relatively easy to test decompression efficiency if you think CPU time is a good proxy for energy usage: go find something like React and test the decomp time of gzip -9 vs zopfli. Or even better, find something similar but much bigger so you can see the delta and it's not lost in rounding errors.
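A rough sketch of that test in Python, if anyone wants to run it - the file name is a placeholder, and it assumes the gzip and zopfli CLIs are installed:

    # Compare decompression CPU time of gzip -9 output vs zopfli output.
    # Zopfli emits gzip-compatible streams, so both decode with the same code.
    import gzip
    import subprocess
    import time

    SRC = "input.js"  # placeholder: use something big, e.g. a bundled React build

    subprocess.run(["gzip", "-9", "-k", "-f", SRC], check=True)  # -> input.js.gz
    with open(SRC + ".zopfli.gz", "wb") as out:
        subprocess.run(["zopfli", "-c", SRC], check=True, stdout=out)

    def time_decompress(path, rounds=100):
        data = open(path, "rb").read()
        start = time.process_time()  # CPU time as a crude energy proxy
        for _ in range(rounds):
            gzip.decompress(data)
        return (time.process_time() - start) / rounds

    for path in (SRC + ".gz", SRC + ".zopfli.gz"):
        size = len(open(path, "rb").read())
        print(f"{path}: {size} bytes, {time_decompress(path) * 1e3:.2f} ms/round")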
It's a computer based on BSD, but with no WiFi, no BT, no screen, and no ability to play movies or games or music. And it's all programmed in C - not C++ or Rust or anything similarly memory-safe-ish.
So RPi, but more vague and vulnerable and less useful. And maybe more expensive?
I think that, done properly, they're ethical but annoying, and likely give good signal as to a candidate's skill. Planning and (voluntary) timeboxing to 1-2 hours makes the argument stronger, as that time could equally be spent in a 1:1 synchronous interview, which is worse IMO.
Leetcode-ish problems and strict timeboxing are awful and can't possibly provide useful signal beyond "can program in some manner". Nobody can do their best work in one timed hour under restrictions like: only in a web IDE that isn't their dev environment, no looking anything up, no progress on part 2 without completing part 1, and similar unrealistic constraints.
They encourage the worst in coding. Globals, dumb temporary names, no comments and done-vs-maintainable style? Ship it. I only need to deal with this code for an hour and then it's thrown away. I'm not going to make my `important_thing_to_remember` variable anything longer than `i`, and I'm going to use `foo[0]` from that ridiculous regex I bodged together instead of splitting it up and building it from pieces where I name the capture group so Future Me can understand it.
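To make the capture-group point concrete, a toy Python comparison (the log-line pattern is invented for the example):

    import re

    line = "2024-05-17 ERROR disk full"

    # Interview-speed version: positional groups, good luck in six months.
    foo = re.match(r"(\S+) (\S+) (.*)", line)
    print(foo[2])  # wait, which field was 2 again?

    # Maintainable version: named groups document themselves.
    LOG_LINE = re.compile(
        r"(?P<date>\d{4}-\d{2}-\d{2}) "
        r"(?P<level>\w+) "
        r"(?P<message>.*)"
    )
    m = LOG_LINE.match(line)
    print(m["level"], "-", m["message"])  # ERROR - disk full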
I'd much rather have a test for 1h of reasonable work, and let me take 2h if needed to solve it and then refactor to make it maintainable.
Why would someone ever take a 40Mbps (compressed) video and downsample it so it can be encoded at 400Kbps (compressed) but played back with nearly the same fidelity / similar artifacts as the same process at 50x the data volume? The world will never know.
You're also ignoring the part where all lossy codecs throw away those same details and then fake-recreate them with enough fidelity that people are satisfied. Same concept, different mechanism.
Look up what 4:2:0 means vs 4:4:4 in a video codec and tell me you still think it's "pure insanity" to rescale.
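For anyone who doesn't want to look it up: 4:2:0 samples both chroma planes at quarter resolution, so mainstream codecs already throw away 3/4 of the color samples before encoding even starts. Quick arithmetic:

    # Raw bytes per 1920x1080 frame at 8 bits/sample.
    W, H = 1920, 1080

    luma = W * H                           # Y plane, full resolution either way
    chroma_444 = 2 * W * H                 # Cb + Cr at full resolution
    chroma_420 = 2 * (W // 2) * (H // 2)   # Cb + Cr subsampled 2x2

    print(f"4:4:4 frame: {(luma + chroma_444) / 1e6:.1f} MB")  # ~6.2 MB
    print(f"4:2:0 frame: {(luma + chroma_420) / 1e6:.1f} MB")  # ~3.1 MB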
Or, you know, maybe some people have reasons for doing things that aren't the same as the narrow scope of use-cases you considered, and this would work perfectly well for them.
>Why would someone ever take a 40Mbps (compressed) video and downsample it so it can be encoded at 400Kbps (compressed) but played back with nearly the same fidelity
Because you can just not downscale them, compress them in the frequency domain, and encode them at 200Kbps? This is pretty obvious; seriously, do you not understand what JPEG does, and why it doesn't do downsampling?
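For readers following along, "compress in the frequency domain" means the JPEG-style transform-and-quantize step; here's a toy sketch in Python (the uniform quantizer step is invented, real JPEG uses per-frequency tables):

    import numpy as np
    from scipy.fft import dctn, idctn

    # A smooth 8x8 gradient block - the kind of content the DCT compacts well.
    x = np.arange(8, dtype=float)
    block = x[:, None] * 8 + x[None, :] * 4

    coeffs = dctn(block, norm="ortho")     # to frequency domain
    q = 50.0                               # invented quantizer step
    quantized = np.round(coeffs / q)       # most high frequencies become 0
    print(f"nonzero coefficients: {np.count_nonzero(quantized)}/64")

    restored = idctn(quantized * q, norm="ortho")  # back to pixels
    print(f"max pixel error: {np.abs(block - restored).max():.1f}")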
Do you seriously believe downscaling outperforms compressing in the frequency domain?
Yes, absolutely. Psychovisual encoding can only do so much within the constraints of H.264/265.
Throwing away 3/4 (half res) or 15/16 (quarter res) of the data, encoding to X bitrate, and then decoding+upscaling looks far better than encoding at the same X bitrate at full resolution.
For high bitrate, native resolution will of course look better. For low bitrate, the way the H.26? algorithms work ends up turning high resolution into a blocky, ringing mess to compensate, vs lower resolution where you can see the content, just fuzzily.
Go get the Tears of Steel raw 4K video (Y4M, I think it's called). Scale it down 4x and encode it with ffmpeg HEVC veryslow at CRF 30. Figure out the resulting bitrate, then cheat - use two-pass veryslow HEVC encoding to get the best possible quality at native resolution at the same bitrate as your 4x-downscaled version. You're aiming for two files that are about the same size. Somehow I couldn't convince the codec to go low enough to match, so my low-res version ended up at about 60% of the high-res version's filesize. Now go play them both back at 4K with whatever your native upscale is - bilinear, bicubic, maybe an NVIDIA Shield with its AI Upscaling.
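Here's roughly that pipeline as a script (Python driving ffmpeg; the file names and exact flags are my reconstruction of the test described above, not the original commands):

    import subprocess

    SRC = "tears_of_steel_4k.y4m"  # placeholder path to the raw 4K source

    def run(args):
        subprocess.run(args, check=True)

    # 1) Quarter-res encode (4x down in each dimension) at CRF 30.
    run(["ffmpeg", "-y", "-i", SRC, "-vf", "scale=iw/4:ih/4",
         "-c:v", "libx265", "-preset", "veryslow", "-crf", "30", "lowres.mp4"])

    # 2) Read back the bitrate that encode landed on.
    probe = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=bit_rate", "-of", "csv=p=0", "lowres.mp4"],
        capture_output=True, text=True, check=True)
    bitrate = probe.stdout.strip()

    # 3) Two-pass native-res encode aiming at the same bitrate.
    run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx265", "-preset", "veryslow",
         "-b:v", bitrate, "-x265-params", "pass=1", "-f", "null", "/dev/null"])
    run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx265", "-preset", "veryslow",
         "-b:v", bitrate, "-x265-params", "pass=2", "fullres.mp4"])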
Go do that, then tell me you honestly think the blocky, streaky, illegible 4K native looks better than the "soft" quarter-res version.
Scaling color data is a different technique from downsampling. Again, all I am saying is that, for a very good reason, you do not stream pixel data or compress movies by storing downsampled data.
"We" decided that Tuskegee was bad enough that it should be stopped before harm is done, and that there is no appropriate or sufficient "punish[ment] in retrospect" for the fallout.
The government makes you get a license to drive at all, then "drive a Pinto" versus "drive a Trabant" are similar enough that they don't require more info. They require you to get different licensure to drive a bigger truck where you could potentially cause more harm, or to drive an airplane. In this analogy the IRB is the DMV/FAA/whatever, and you're asking for permission to drive a tank, a motorized unicycle, a helicopter, an 18-wheeler or a stealth fighter. You don't get a Science License rubber stamp because that's like getting a Vehicle License - the variation in "Vehicle" is big enough that each type needs review.
reply