Parallax shift from the spacecraft moving along its orbit isn't measurable beyond about 16,000 light years. The Milky Way is about 100,000 light years in diameter, so for most of the stars in our galaxy, let alone extragalactic objects, there's no measurable effect.
For closer objects this shift is useful, as it's the only way to directly measure the distance to an object: https://en.wikipedia.org/wiki/Parallax_in_astronomy Every other method for estimating distances to astronomical objects (standard candles, redshift, etc.) is ultimately calibrated against parallax measurements.
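For a concrete sense of the numbers, the relation is just d [parsecs] = 1 / p [arcseconds], with 1 parsec ≈ 3.26 light years. A minimal sketch (the function name and the 0.0002 arcsec figure are mine, chosen to match the 16,000-light-year limit above):

    # Parallax-distance relation: d [parsec] = 1 / p [arcsec].
    LY_PER_PARSEC = 3.2616  # light years per parsec

    def parallax_to_light_years(parallax_arcsec: float) -> float:
        """Distance implied by an annual parallax angle, in light years."""
        return (1.0 / parallax_arcsec) * LY_PER_PARSEC

    print(parallax_to_light_years(0.768))   # Proxima Centauri: ~4.25 ly
    print(parallax_to_light_years(0.0002))  # ~16,300 ly, around the quoted limit

Smaller angles mean larger distances, so past a few thousand parsecs the angle drops below what even dedicated spacecraft can resolve reliably.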
This comes from the same group as the EfficientViT model. A few months ago, EfficientViT was the only modern, small ViT-style model I could find that had raw PyTorch code available - no dependencies on the shitty frameworks and libraries that other ViTs use.
Everyone is going to have a hard time getting Nvidia GPU compute from 3rd-party cloud providers - not only is the supply of GPUs limited, but Nvidia themselves are trying to spin up their own cloud provider offering, and unsurprisingly don't want to help a competitor.
> but Nvidia themselves are trying to spin up their own cloud provider offering, and unsurprisingly don't want to help a competitor
I thought Jensen recently said he does not want to run their own cloud service. Instead, he wants to focus on creating ready-made solutions that cloud vendors can purchase and resell as services.
Fair point! I think that's a recent-ish pivot though (past 2-3 years). I vaguely remember that in the late 2010s they were building and testing DGX Cloud as its own standalone offering, but I might be wrong and confusing it with some other offering they were working on.
Get a Sage/Breville Dual Boiler and do the Slayer mod + drip-tray mod; you can then pull espresso that rivals any machine out there for a fraction of the cost. https://www.youtube.com/watch?v=JmQgxQ5Higw
The machine is old enough to be well understood, with documented mods and fixes: it's dual-boiler with triple PIDs, has extreme temperature stability, fills from the front, and has a drip-tray indicator. I love it.
Get a used Rancilio Silvia + do a fully fledged Gaggiuino build.
Check this video comparison (not of a modded Silvia, but of a Gaggia Classic, which I would not recommend: its boiler is aluminum, and the newer China-manufactured models have a coated boiler - probably Teflon - that likes to shed the coating; just search on Reddit). The emphasis in the comparison is on the Gaggiuino mod, which does all the magic (profile-based brewing); the underlying base machine is not that important. I would stay away from Sage/Breville - too many non-standard parts and too much plastic for my taste.
That's the issue - I keep hearing that meaningfully training such a large model is beyond a small research group's budget. You don't just need GPU time, you also need data, and just using the dregs of the internet doesn't cut it.
If you go back to the main page of Techmeme and hover over a story, an arrow will appear on the left. Click it to see follow-up stories to the first one.
I actually tried this last year, before OpenAI released their cheaper embeddings v2 in December. In my experiments, compared to BERT embeddings (or recent variations of the model), the OpenAI embeddings were miles ahead for similarity search.
Interesting. Nils Reimers (the SBERT guy) wrote on Medium that he found them to perform worse than SOTA models. Though that was, I believe, before December.
Most of the practitioners I see attempting this are running text through an embedding model and then using cosine similarity (or something similar) as the metric.
One would expect a correctly trained model to do better than raw cosine similarity. I'd imagine the same Siamese-network approach he is using would work better than cosine similarity with GPT-3 embeddings too.
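For reference, the baseline being discussed looks roughly like this (a sketch using the sentence-transformers library; the model name and example texts are illustrative, not from the thread):

    # Embed texts, then rank by cosine similarity - the common baseline.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small SBERT-style encoder

    query = "GPU shortage hits cloud providers"
    docs = [
        "Nvidia GPUs are hard to rent from third-party clouds",
        "A new espresso machine mod improves temperature stability",
    ]

    q_emb = model.encode(query, convert_to_tensor=True)
    d_emb = model.encode(docs, convert_to_tensor=True)

    # Cosine similarity between the query and each doc (higher = more similar).
    print(util.cos_sim(q_emb, d_emb))

The Siamese/bi-encoder training behind SBERT is exactly what makes those cosine scores meaningful; embeddings not tuned for it can rank poorly under the same metric.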
There's also the issue of what similarity means to people. I worked on a search engine for patents where the similarity function we wanted was "Document B describes prior art relevant to Patent Application A". Today I'm experimenting with a content-based recommendation system and face the problem that one news event can spawn 10 stories that appear in my RSS feeds, and I'd really like a clustering system that groups these together reliably without false positives.
I'd imagine a system that is great for one of these tasks might be mediocre at the other; in particular, I'm interested in some kind of dataset for evaluating success at news clustering.
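For the news-grouping half, one plausible baseline (a sketch, not a claim it will be reliable - the threshold is a starting guess to tune against held-out events):

    # Group near-duplicate stories: agglomerative clustering over embeddings
    # with a conservative cosine-distance threshold, so only very close
    # stories merge (biases toward misses rather than false positives).
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def group_stories(embeddings: np.ndarray, max_cosine_distance: float = 0.25):
        """Return one cluster label per story."""
        clustering = AgglomerativeClustering(
            n_clusters=None,                       # threshold decides cluster count
            distance_threshold=max_cosine_distance,
            metric="cosine",                       # `affinity=` on older scikit-learn
            linkage="average",
        )
        return clustering.fit_predict(embeddings)

Lowering the threshold trades recall for precision, which matches the "no false positives" preference above.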