This is one of my favourite posts.
Part of my PhD was based on this post. https://discovery.ucl.ac.uk/id/eprint/10085826/ (Section 4.1).
I presented this section of my research at a conference and won a best paper award as well.
Desk rejections also happen regularly from editors (who are not blind to author names and affiliations). Editors also make the final decision to accept or reject when reviewers don't agree.
Even in blind peer review, the subject of study introduces another layer of bias. Say two economics papers are submitted that discuss how people spend the last part of their paycheck. One studies a pool of people across the United States and the other studies people across Nigeria. Which one do you think is going to get a positive review from editors and reviewers? The Nigeria paper might not even go to review. The editor will say something along the lines of "the focus of the study is too narrow for the journal", but won't say the same when the sample is from the US. This is on top of the trust issues with any research conducted outside Western countries.
For researchers from "low ranked" universities, the game is rigged against them and there is nothing they can do to swing it in their favour.
I went all in on the Apple ecosystem this year after literally 13 years of using Android.
My wife and I got iPhone 14 Pros, Apple Watch Series 8s and AirPods Pro. We got a HomePod mini and an Apple TV for home, plus a bunch of HomeKit-compatible tech (baby monitor, smart plugs, bulbs, etc.). We also subscribe to Apple One Family. The results are fantastic.
Everything works with everything else, which is impossible to explain without experiencing it. For example, none of the Apple ads or promotions will tell you that if you connect your phone with CarPlay and ask for directions, your watch automatically becomes part of it and shows you the next turns. The watch also gently vibrates when you are approaching a turn. This requires no configuration; it just works. We can ask the HomePod to switch off the TV. We can use either of our phones or watches as a remote for the TV. The video from the baby monitor can be seen on our phones or the TV (as picture-in-picture), or on our watches if needed.
Usually every evening, we put the baby to sleep, switch on the TV, put on our AirPods, put the baby monitor video as picture-in-picture in the top right corner of the TV, and play a TV show directly to our AirPods, which connect simultaneously to the TV. Again, no config needed; it just works. All we have to do is log in with our accounts (even that is easy, since you do it through the phone).
I download shows/TV to my Mac, drag and drop them to the Apple TV app, and they show up on the main TV over Wi-Fi. No casting, nothing. They just look and play as if they were streaming from Apple TV. I can put my phone on a stand and FaceTime with my family on the TV directly, using my phone's camera and mic. It looks and works stunningly - https://images.macrumors.com/t/0gxYFSdAW32RTme9jwsKnVnYidA=/....

iMessage and FaceTime are much more reliable than WhatsApp and much higher quality. All my files are synced between all my devices without any pain. I can copy text on my phone and just paste it on my laptop, and vice versa. I can right-click on my Mac to insert an image from the iPhone camera. You can use your iPhone camera as a webcam on your laptop for Teams/Zoom calls.

When we both leave the home, everything (lights, fans, AC, etc.) switches off, and the entrance lights switch on automatically when we come back. We can control all Apple Home devices from any of our phones, watches, Siri on AirPods, the HomePod, our MacBooks, etc., and every device with a microphone responds to the "Hey Siri" keyword.
I don't know about Apple hardware as standalone devices, but the whole ecosystem is just bloody brilliant. I just cannot go back to the old way of sitting and configuring each device, Bluetooth pairing, etc. Since we made this switch in March, my productivity has gone through the roof.
It can play media off pretty much anything. I use it to connect to multiple Plex instances (I have a System :D) and I've added a few local folders along with OneDrive and Dropbox to it.
It has apps for macOS, tvOS, iPadOS and iOS that sync with each other.
So in your case you could just use Infuse on Apple TV and connect it to a directory on your Mac, then play stuff directly from that share.
I spent the last ten years pushing my family this way (my nuclear family at first; then, as I got married, my wife joined the fold), and it pays off so well. My initial push was support - I can better support iPhones and Macs because that's what I had and they're all the same: no remembering which GPU is installed or which Control Panel option to use when HDMI isn't working, and I can walk someone to the exact weird section under Settings -> General on their iPhone by looking at mine. Since then, iCloud sharing has made sending notes and other enabled content so easy. They just don't advertise that stuff, and I don't know why.
> I download shows/tv to my Mac, drag and drop to Apple TV app, they show up on the main TV over wifi. No casting etc. They just look and play as if they are streaming from Apple TV.
I presume these are shows that you’ve downloaded/bought from Apple? If yes, then I’m guessing it’s just synced from your purchases and is downloading and playing from there.
Or does Apple somehow allow shows/movies from other sources to be downloaded and played on the TV App (like a VLC equivalent)? I doubt Apple would enable something like this.
Apple allows movies from other sources (ahem) to be shared across devices seamlessly. It's called something like "Home Sharing" or "Home Library". There is a separate app on the Apple TV which lists shows and movies from all devices on the home network.
Apparently it is not. The Reddit thread has an ex-Apple employee who runs repair shops in Germany. He says that if you just change the serial number of an iPad screen (using specialised hardware) in an existing iPad, this issue will appear. This seems to indicate that it's intentionally done by Apple.
The other option is that Apple bakes calibration details for all possible serial numbers into every iPad sold, which doesn't sound like a plausible scenario.
At the software level, iOS registers some calibration data (produced during factory calibration) tied to that specific serial number, since calibration only makes sense for that specific instance of the panel anyway. So when the serial number changes, iOS can't find calibration data for it on the device and continues uncalibrated instead of trying to use the calibration for a screen with a different serial (from the OS perspective) - roughly the lookup sketched below.
It makes sense, if there actually is some sort of calibration going on in the first place of course.
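A minimal sketch of the kind of per-serial lookup being described, assuming a simple affine correction and made-up names and values (this is not Apple's actual implementation):

```swift
// Hypothetical sketch, not Apple's real code: calibration is keyed by the
// panel's serial number; an unknown serial falls back to an identity
// (i.e. uncalibrated) mapping instead of borrowing another panel's data.

struct DisplayCalibration {
    // Simple per-axis affine correction: calibrated = raw * scale + offset
    var scaleX: Double, scaleY: Double
    var offsetX: Double, offsetY: Double

    static let identity = DisplayCalibration(scaleX: 1, scaleY: 1, offsetX: 0, offsetY: 0)

    func apply(x: Double, y: Double) -> (x: Double, y: Double) {
        (x * scaleX + offsetX, y * scaleY + offsetY)
    }
}

// Factory-provisioned data stored on the device, keyed by panel serial (invented values).
let calibrationBySerial: [String: DisplayCalibration] = [
    "PANEL-SERIAL-0001": DisplayCalibration(scaleX: 1.002, scaleY: 0.998, offsetX: -0.7, offsetY: 1.3)
]

func calibration(forPanelSerial serial: String) -> DisplayCalibration {
    // Unknown serial (e.g. after a screen swap) -> no data found -> run uncalibrated.
    calibrationBySerial[serial] ?? .identity
}

print(calibration(forPanelSerial: "PANEL-SERIAL-0001").apply(x: 512.4, y: 300.9)) // corrected
print(calibration(forPanelSerial: "SWAPPED-SERIAL").apply(x: 512.4, y: 300.9))    // raw passes through
```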
Yeah I think this is most likely. Then when you change the serial you're using the calibration data for the wrong screen, but most of the time you're lucky and the two screens behave close enough that an average user doesn't notice.
Probably for touch the effects are negligible, since the accuracy needed for touch is much coarser than the accuracy needed for Apple Pencil drawing. So if calibration is dialled in to Apple Pencil-level accuracy, any residual error should already be below the largest margin of error that touch input can tolerate anyway.
Touch and pen would generally require completely different calibrations because touch is detected by the capacitive touch screen whereas the pen position is tracked inductively.
However, I wonder why the pen calibration data is not stored on a controller on the display panel? That would absolutely make sense.
The Pencil also has a small calibration table, and when paired with a new iPad it checks and downloads that immediately.
If not, it might also (more likely) be sending whatever it senses to the iPad, and the iPad processes that raw input with the calibration data to determine the final output.
It only makes sense from the perspective of screwing third-party repair and then trying to come up with something with plausible deniability. Are people swapping screens frequently enough to justify the extra complexity of keeping calibration data segregated by serial number?
Printer manufacturers have been pulling the same trick, storing (approximations of) ink levels in chips in the cartridge and claiming that it makes it easy for users to swap cartridges and keep accurate levels. In that case there's a little bit more truth to the argument, but not in this one.
I'm just speculating, but the accuracy needed for the Pencil to work properly is, by the nature of the application, much higher than what's needed for estimating ink in cartridges.
I mean, if a cartridge's level estimate drifts a bit, that might be acceptable to many, but if the Pencil starts drawing incorrectly, many artists would be extremely frustrated and move away from the ecosystem.
The fact that they showed how using the controller chip from the existing screen causes it to start working correctly shows definitively that this is not about calibration at all.
Actually not that unlikely. Imagine the allowed calibration range is 0-100. The factory is likely to produce parts with small variance but the same constant offset, resulting in screens that come out between 70 and 80, while the default in software might be set exactly in the middle at 50. Knowing how a large organization works, the default set in software is probably not even close to what comes off the production lines; it's just an assumption some dev made that happened to work well enough on his desk four years ago (see the toy numbers sketched below).
Even if the screens covered the whole spectrum, there would still be a decent chance of this coincidence happening, and more data would be needed to validate either theory. Are there any statistics showing whether this happens every time or just sometimes?
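To make those toy numbers concrete, here's a purely illustrative sketch (all values invented, nothing measured) of how a software default sitting in the middle of the allowed range can be far from what the production line actually ships:

```swift
// Purely illustrative numbers: allowed calibration range 0-100, the software
// default sits at the midpoint, but real panels cluster around 70-80.
let allowedRange = 0.0...100.0
let softwareDefault = 50.0                                // "middle of the range" assumption
let productionSamples = [72.1, 74.8, 76.3, 78.9, 75.5]    // hypothetical line output

let mean = productionSamples.reduce(0, +) / Double(productionSamples.count)
print("within allowed range:", allowedRange.contains(mean))      // true
print("default-vs-production offset:", mean - softwareDefault)   // ~25.5: large constant error
// Small variance within the batch, but a big constant offset from the default,
// which is exactly the scenario described above.
```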
Serial numbers encode production-specific parameters. We don’t yet know how to produce uniform displays batch to batch. Running Toyota engine code on a Subaru will produce glitches without requiring any conspiracy.
But realistically this would mean I couldn't use an iPad with its pen without connecting it to the internet first. I mean, how would Apple bake every calibration for every display into all devices, even ones it hasn't produced yet?
It's the display that needs calibration. The pen doesn't know where it is; the screen does. So each complete device, or at least the screen, needs to have a valid calibration table or a function mapping raw (x, y) to calibrated (x, y) - something like the sketch below.
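For illustration only (a hypothetical format, not Apple's): such a table could be a coarse grid of per-region offsets measured at the factory and applied to raw pen coordinates before they reach apps.

```swift
// Hypothetical calibration table: a coarse grid of (dx, dy) offsets measured at
// the factory, used to map raw pen coordinates to calibrated ones.
struct CalibrationTable {
    let gridSpacing: Double                      // distance between grid points, in raw units
    let offsets: [[(dx: Double, dy: Double)]]    // offsets[row][col]

    func calibrated(x: Double, y: Double) -> (x: Double, y: Double) {
        // Nearest-grid-point correction; a real implementation would interpolate.
        let col = min(offsets[0].count - 1, max(0, Int((x / gridSpacing).rounded())))
        let row = min(offsets.count - 1, max(0, Int((y / gridSpacing).rounded())))
        let o = offsets[row][col]
        return (x + o.dx, y + o.dy)
    }
}

let table = CalibrationTable(
    gridSpacing: 512,
    offsets: [[(dx: 0.4, dy: -0.2), (dx: 0.1, dy: 0.0)],
              [(dx: 0.6, dy: 0.3),  (dx: -0.2, dy: 0.5)]]
)
print(table.calibrated(x: 300, y: 700))   // raw pen point nudged by the local offset
```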
They think the real experiment will show what they concluded, within some tolerance. They just don't want to put in the time and effort to do it. They want to publish their hunch with fake data first to get the glory.
For example, a US census agency worker assigned to a county could just make up numbers based on his understanding of the place without actually doing a full survey. He believes that even if someone checks, his numbers will be within reasonable tolerance and the final results will be similar. So he just sits at home making up numbers year after year, until some day the USPS opens a post office based on those numbers, gets no customers, and triggers a full investigation.
Another lofty comment about honesty in academia. No research job lets you spend six months redoing stuff just to make sure it's 100% right. It's about publishing what you have with enough disclaimers. It's about convincing the reviewers to get the paper in. The rest will be resolved later.
Classic cases are the Bell Labs guy "finding" organic semiconductor breakthroughs and the Berkeley lab guy "finding" new elements. They were both just making up results for things they believed existed but hadn't yet been found, and publishing them. They would fabricate experimental data to support the existence of theoretical things. The assumption was: publish this first, get the accolades, then use them to get money to have someone eventually do it properly. But they predicted things that weren't true. Nobody knows how much made-up bullshit eventually turned out to be true.
All of this because scientists these days are not expected to work on things. They are expected to produce results. Nobody is giving passionate people time and space to explore things. It’s all about results now.
The irony is that the reward for running the rat race is freedom from the rat race. Postdocs publish random crap to boost the numbers for their tenure case, which they think will relieve them of this stupidity and let them focus on pure research. But once they catch the tiger's tail, they have to keep running.
> No research job lets you spend 6 months redoing stuff just to make sure it’s 100%.
These jobs do exist[1], and we as a society need to figure out how to stop organizing research around a single manager (professor) with zero meaningful oversight.