One problem with threads like this is that, because the story itself contains so little information, the comments fill up with low-substance dross—and if the topic has anything sensational about it, often indignation too. The mind seems to resort to these things in the absence of anything chewy to discuss.
Although I admit I have nowhere near the experience dang does in moderation, this policy just doesn't sit well with me.
In this age of timelines and constant information consumption, the sooner we confront and address new information, the better. Isn't one of the most dopamine-inducing, fun parts of news being the first to know about it, and being known to the people you share it with as someone who keeps up?
"Low-substance dross" seems very subjective, but I kind of get the idea. I would call it "reddit-tier". I guess your moderation is what keeps us above that level of content quality, and I certainly don't have time to dig into the history of your reasoning here, so I will give you the benefit of the doubt.
> In this age of timelines and constant information consumption, the sooner we confront and address new information, the better. Isn't one of the most dopamine-inducing, fun parts of news being the first to know about it, and being known to the people you share it with as someone who keeps up?
My counter opinion - decreasing latency beyond a threshold is one of the most harmful things in humanity today. It makes it impossible to form opinions based on deep thought / Type 2 systems.
Similarly, one of the best ways to feel good immediately are cocaine or heroin. Fast news is cocaine - feels good now, but harmful.
I'm thinking there will be another announcement when they've actually done it. However you define "done", though, the point about the comments is clear: the discussion, if and when they do open-source the code, will be quite different, because there will be actual code to discuss.
I'm interested in what the format of this will be (if it ever happens).
There's never just "an algorithm": there might be a model, a bunch of feature-extraction code, and a server that runs it all, but none of these would be that useful in isolation; you'd likely need all of them to get an accurate picture of the system's behaviour. But, as Musk says, no one really understands it all, so even then, is that really useful?
What may be more useful is releasing a set of policies, or product decisions about recommendations. Not necessarily code that implements them, but the ideas and values of the system and the people building it. The code would likely only be an approximation of that, but the policies/decisions are really the part most people can or will engage with.
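To make that split concrete, here's a minimal sketch in Python. All the names and weights are invented for illustration, not taken from anything Twitter has published; the point is just that a "policy" like "recency matters more than likes" ends up encoded as parameters inside a model, not written down anywhere in the serving code.

    # Hypothetical sketch of a typical recommendation pipeline, split into
    # feature extraction, a scoring model, and a serving/ranking layer.
    # Nothing here is Twitter's actual code; the weights stand in for
    # parameters that would normally be learned from data.
    from dataclasses import dataclass

    @dataclass
    class Tweet:
        author_followed: bool
        like_count: int
        age_hours: float

    def extract_features(tweet: Tweet) -> list[float]:
        # Feature extraction: turn a raw object into numbers.
        return [
            1.0 if tweet.author_followed else 0.0,
            tweet.like_count / 100.0,
            1.0 / (1.0 + tweet.age_hours),  # recency decays with age
        ]

    WEIGHTS = [2.0, 1.0, 3.0]  # in practice learned, not hand-written

    def score(tweet: Tweet) -> float:
        return sum(w * f for w, f in zip(WEIGHTS, extract_features(tweet)))

    def rank(candidates: list[Tweet]) -> list[Tweet]:
        # Serving layer: score every candidate and sort.
        return sorted(candidates, key=score, reverse=True)

Even in a toy version like this you'd have to compare weights to work out what the system "values", which is why stating the intended policies alongside (or instead of) the code would say more to most people.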
Just because you can't run it doesn't mean it wouldn't be useful. It would likely give insight into the kind of modelling, experiments, goal functions, deployment methods, resiliency, etc. And you could fairly easily reverse-engineer policy decisions from the code (IMO), but the other way around isn't possible.
I think reverse engineering the policy from the code will be hard – this is why no one really understands these systems in full.
However, I think that given the policy, and assuming we trust that Twitter does in fact attempt to implement it, the code doesn't really matter. We wouldn't be able to run the code anyway, and bugs matter less than the intention, since Twitter would supposedly be working constantly to make the code match the policy.
> Our “algorithm” is overly complex & not fully understood internally. People will discover many silly things, but we’ll patch issues as soon as they’re found! […] Providing code transparency will be incredibly embarrassing at first, but it should lead to rapid improvement in recommendation quality.
Sounds like Elon wants to crowdsource the analysis work.
After firing many of the devs, and with the remaining ones unable to figure out what all this is about, this looks like a desperate (or smart?) move to fix things without spending a cent...
Unlike with the software itself, there was never a serious attempt to open-source such algorithms in the big social media space. Obscurity was the undisputed king.
I would be glad if this attempt actually worked out. Sunlight is said to be the best disinfectant. But there will be a lot of bad actors trying to nudge the resulting algorithm somewhere in a subtle way. I wonder if all of them can be detected.
There don't seem to be many examples of a once-proprietary tool improving after going open source. Blender is the one that comes to mind, but that's driven by a dedicated user base full of working professionals looking to get away from expensive proprietary 3D suites. Anyone who can really understand this algorithm is highly incentivized to use it to their advantage rather than improve it.
> So this is a joke, right? It's on April 1st. So it's a joke, right?
> No, it's not a joke. It's on April 1st, because the source is going to hit the net on March 31st. The source is going out on March 31st because that's when the folks who wrote Netscape's press release said it would go out, way back on January 22nd.
> Besides, posting a joke announcement a week before April Fool's Day would be bad form.
There’s also a distinct lack of politics surrounding Blender. One of the first comments here is about the Fauci Files, leading me to believe that no matter the content of the actual release, there will be a shitstorm around whether or not a certain person is vindicated or villainized.
What makes you think anyone understood it before the layoffs? It sounds like lots of different people had been caking on layer after layer over the years to push things in whatever direction they wanted. Much of the code likely was put there by people who left the company long before Musk bought it.
Another idea I just had - the algorithm isn't meaningful without the data. Give someone the code/algo, and they can project whatever they want onto it because there is no actual proof based on the data.
There will be people who replay tweets through the algo, but as we all know - the secondary market of simulation is going to be different than the in-moment reality.
(We also have no reason to trust Musk or believe the code will be complete. He is a well-documented stretcher-of-truth, and likely doesn't understand where all the relevant code lives. After firing 3/4 of the company, it's also likely that nobody at twitter knows this.)
I believe in the concept of crowdsourced recommendation engines, but I am afraid Twitter might fail at that.
A great recommendation engine is tuned to what people like. However, that would be subjective, and solving the problem in an abstract fashion seems difficult.
So with crowdsourcing you could sidestep some layers of abstraction. For example, you can crowdsource how you label and group tweets, and use that for the machine learning.
But, for example, if you use crowdsourcing for removing manual tagging of data (embarrassing?) then you might make things worse.
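For what it's worth, the labeling part is the easy bit to sketch. Something like the following (invented names, plain majority vote, nothing Twitter-specific) is roughly what "crowdsource how you label tweets" means in practice, and it also shows where it can go wrong: low-agreement labels are exactly the ones you don't want to train on.

    # Hypothetical sketch: turn crowdsourced judgments into training labels
    # by majority vote, keeping only tweets where annotators mostly agree.
    from collections import Counter

    def aggregate_labels(votes_by_tweet: dict[str, list[str]],
                         min_agreement: float = 0.7) -> dict[str, str]:
        labels = {}
        for tweet_id, votes in votes_by_tweet.items():
            label, count = Counter(votes).most_common(1)[0]
            if count / len(votes) >= min_agreement:
                labels[tweet_id] = label  # confident enough to train on
            # low-agreement tweets are dropped rather than guessed at
        return labels

    print(aggregate_labels({
        "t1": ["news", "news", "news", "spam"],  # kept as "news"
        "t2": ["spam", "news", "meme"],          # dropped: no clear majority
    }))
    # -> {'t1': 'news'}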
Sounds like a smart decision. A lot of people care about this and would like input. So much so that they will work for free. I fail to see why this is an issue from Musk's perspective or from mine as a Twitter user.
Not really. Did you see what I wrote? I have my own nitter instance, and I use it regularly. But it's on my terms: when I see a twitter link, I can replace it with my nitter instance.
Forcing your own nitter instance on others and not letting them make their own decision is absolutely forcing something on others.
I strongly disagree. When people link to twitter they're basically forcing the running of a corporation's client-side application (a computational paywall). A Nitter mirror's client-side experience is just HTML with real text, and it's much more accessible for things like screen readers. The web-as-an-application movement has been a disaster for accessibility.
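For anyone curious, the "replace it with my nitter instance" workflow a few comments up really is tiny. A rough sketch, with nitter.example.org standing in for whatever instance you run:

    # Minimal sketch: rewrite twitter.com links to a self-hosted Nitter
    # instance. "nitter.example.org" is a placeholder hostname.
    from urllib.parse import urlsplit, urlunsplit

    NITTER_HOST = "nitter.example.org"

    def to_nitter(url: str) -> str:
        parts = urlsplit(url)
        host = parts.netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        if host in ("twitter.com", "mobile.twitter.com"):
            parts = parts._replace(scheme="https", netloc=NITTER_HOST)
        return urlunsplit(parts)

    print(to_nitter("https://twitter.com/user/status/123"))
    # -> https://nitter.example.org/user/status/123

The same rewrite can live in a browser redirect extension instead of a script, which is presumably what doing it "on my terms" means rather than forcing it on anyone else.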
I mean, I don't care about the guy but man, 13 days later you're now looking like the fool since 'the claim' has now suddenly turned into a fact. [0] [1]
The pedo guy incident opened a lot of doors because he has zero credibility, so he can say anything he wants, and it's ok because you can't believe anything he says.
Most people regularly say things that they believe are true, but probably not proven. Yes, celebrities generally shouldn't do that. I don't follow how that means they have zero credibility.
That's literally what credibility means. You may be smart and influential but if you cry wolf one too many times people are going to start to take everything you say with a grain of salt. After a hundred such incidents involving Musk that's where his reputation is going.
Personally I think Elon was foolish for calling the cave guy a pedo. But things like missed deadlines I don’t really think affect his credibility. It’s technology, estimates are hard.
Most of the supposed damage to Elon’s reputation is simply attacks by Twitter competitors. Think of the hoax a few weeks ago about people being forced to read his tweets.
Saying Elon “believed” the diver guy was a pedophile is cutting Elon a lot of slack. Perhaps “imagined” or “wanted to believe” or, more charitably, “guessed” would be more appropriate.
Most likely what happened is that Elon felt insulted and decided to use his enormous platform to make wild allegations.
"He's an old, single white guy from England who's been travelling to or living in Thailand for 30 to 40 years, mostly Pattaya Beach, until moving to Chiang Rai for a child bride who was about 12 years old at the time.
"There's only one reason people go to Pattaya Beach. It isn't where you go for caves, but it is where you'd go for something else. Chiang Rai is renowned for child sex-trafficking."
Isn't that going to be a disaster? If I know exactly how the algorithm decides recommendations, I can program my bots to take full advantage of the setup.
I predict this will be a one time thing and any changes will no longer be made public.
The quote you are referencing is in the comment section, and he said there would be vehicles completing 90% of some journeys.
That was accurate.
His most inaccurate estimate was in 2020, when he said there could be a million robotaxis on the road within a year. That was off by 75% or so, as there were only about 200K beta testers in 2021.
Today that figure is about 450K.
FSD10 is good enough to drive 98% of journeys and 11 looks closer to 99%.
Tesla is more cautious about removing humans because the scale and theater of operation is drastically different from the Waymo/Ford taxis, which have no path to full self-driving (they cannot operate on the full roadway system), and because Teslas are consumer products rather than internal tooling (complete with remote monitoring and control capability).
If we're going by inaccurate estimates in 2020, how about this one?
> “I think we will be ‘feature-complete’ on full self-driving this year, meaning the car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year,” Musk said during a podcast interview with the money management firm ARK Invest, which is a Tesla investor. “I am certain of that. That is not a question mark.”
> FSD10 is good enough to drive 98% of journeys and 11 looks closer to 99%.
That is really not very good at all. If I take 4 trips a day (home to work, work to home, drive to pick up the kids from school, drive home), that's roughly 120 journeys a month, so a 2% failure rate means a couple of journeys every month that the car can't handle on its own.
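Back-of-the-envelope on that, taking the quoted 98%/99% figures at face value and assuming four journeys a day:

    # Expected journeys per month that FSD can't complete on its own,
    # assuming 4 journeys/day and the quoted per-journey success rates.
    journeys_per_month = 4 * 30
    for success_rate in (0.98, 0.99):
        failures = journeys_per_month * (1 - success_rate)
        print(f"{success_rate:.0%} success -> ~{failures:.1f} failed journeys per month")
    # 98% success -> ~2.4 failed journeys per month
    # 99% success -> ~1.2 failed journeys per month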
This is all rather beside the main point, which is that Elon routinely makes highly dubious statements about the future.
Shouldn't you wait to give credit until it actually happens? Elon has proved many times that what he says and what ends up happening is very different.
Love it when people post links to nitter/mastodon/etc instances that can't handle the traffic from HN because their extreme hatred of Elon Musk stops them from making rational decisions.
nitter is simply a better browser interface than twitter itself. It doesn't burn up my CPU doing god knows what or interrupt my reading with popups asking me to create an account.
Please don't post unsubstantive and/or flamebait comments, regardless of how you feel about someone. We ban accounts that abuse HN this way. Perhaps you don't feel you owe that person better, but you owe this community better if you're participating in it.
I'm sure it's de rigueur to diss Musk, but this is smart for business. Twitter's recommendations aren't that great, and I doubt they're such a big driver of traffic (I use the chronological mode and it's not much different). The EU will now have to eat its paper, because they are trying to pass laws about this, and the EU has only called Musk to testify in the parliament. If true, this pulls the rug out from under a lot of very opinionated people.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
On HN, there's no harm in waiting for the actual thing.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...