US politics isn't easy to understand as an outsider. It looks like Harris was an OK candidate for the core Democratic voter and a terrible candidate to win a populist election in what is essentially a deeply divided and mostly conservative country. She didn't address the swing voters' greatest concerns: the decline in their real wealth due to inflation, and fear of change. I am sure money and influence had a lot to do with it as well, but it was still a colossal misreading of public sentiment and an inability to reach out to a broader audience.
I am in a private Discord server that has two bots in it. One is a basic Markov chain trained on the entire chat history. The second is a proper LLM fed some window of the recent chat as context. Both will occasionally just randomly chime in during the chat.
The markov chain bot is always considerably funnier.
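For anyone who hasn't built one: a chat-trained Markov chain bot is only a few lines of code, which is part of the charm. Here's a minimal sketch (function names and the `order` parameter are my own, not anything from the bot described above) of the word-level chain such a bot would use:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word state to the list of words that followed it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Random-walk the chain: start anywhere, pick a random successor each step."""
    rng = random.Random(seed)
    state = rng.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length - len(state)):
        successors = chain.get(tuple(out[-len(state):]))
        if not successors:
            break  # dead end: this state only appeared at the end of the corpus
        out.append(rng.choice(successors))
    return " ".join(out)
```

The comedy comes straight from the math: every transition is locally plausible (it really happened in the chat) while the sentence as a whole has no global coherence, which is basically the structure of a non sequitur.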
The past. At least pre-Spotify, before that particular rot took the business model of piracy, added a veneer of legitimacy, and eroded the already tenuous economics of recorded music into an afterthought in the common case and fractional microtransactions in the best.
Or, since copyright and the arts tend to make tech discourse stupid, we could turn our attention to the vampiric nature of food delivery and rideshares, where we can also observe the tendency not merely to build platforms and work on experience, but to look for places where labor/service costs can be opaquely externalized and investment can subsidize anticompetitive pricing and lower wages.
Once that die was cast, of course it was inevitable that most platforms were going to go digital vampire on creators.
The other place we could go is law & policy -- really, the idea that training is "fair use" is wrong. High-scale automated training can't be fair use; fair use was created well before HSAT was even conceptualized, let alone possible. The law should be that every training use requires explicit, use-specific opt-in, with consideration. Any other training use of a copyright-claimed work should be an infringement.
Without that, there is no place to go that doesn't surrender everything to digital vampirism, and there never will be.
At the time I felt like Apple was getting rid of the 3.5mm jack as a potential bottleneck for future iPhone designs (as one of the limiting aspects of form factor), but there still doesn't seem to be anything design-wise to justify it, even several years later. It is very clear now that it was merely to encourage AirPods adoption.
Killing the API access made detecting and tracking spam bots impossible. There was a whole subreddit called thesefuckingaccounts where the latest tactics in spam and karma farming were being tracked.
We've seen numerous stories at this point about lawyers trusting AI to generate case documents that turned out to contain false citations. AI-generated scientific papers are being published. Doctors are using AI. Law enforcement is using AI. Everyone is using it, and a lot of people are using it with the assumption that it's intelligent and factual, that it works like the computer from Star Trek. People on this very forum who should know better have said they trust AI more than they trust themselves and more than other people.
AI probably has a niche where it's useful, but because it smells like a magic money machine that will allow managers to replace employees and create value from essentially nothing, modern capitalism dictates we must optimize our entire economy around it, no holds barred, damn the torpedoes and full speed ahead because "money." I just hope the fever breaks before people start getting killed.
There's definitely a bit of influence / perception manipulation on HN. A few years back I heard a story that a tech company would monitor HN for certain keywords, and if their product or category was ever brought up or mentioned multiple developers would always show up to engage on the topic. This isn't quite spamming or cheating the system, but it's a very effective tactic for shifting public perception.
> [...] but to widespread public concerns about the risks posed by the tech industry at large. Effective accelerationists worry that these concerns have become so entrenched that they threaten to extinguish the light of tech itself.
Those "years of gloom" (which aren't very many years -- has everyone forgotten when the tech industry was widely seen in optimistic terms?) have been brought on by the behavior of the tech industry itself, in large part because of the misapplication of the idea "move fast and break things" (which is, unless I'm misunderstanding, the very essence of e/acc that this article discusses).
Our industry has been breaking a lot of things that people don't want broken, then tends to shame people for being upset about that. The problem isn't some inherent fear of tech itself, it's a (supportable) fear of the tech industry and what other things it may damage as time goes on.
If the industry wants to assuage these fears, the solution isn't to move even faster and break even more things, it's to start demonstrably acting in a way that doesn't threaten people and the things they hold dear.
How did Sam Altman manage to Tom-Sawyer this into a "we" project? He's an individual seeking to raise money for a private sector venture in order to bend the world to his will, which seems to be to create his own version of utopia. I don't think his world-bending has the public buy-in that WWII spending had -- he can't even articulate clearly what that utopia looks like or why he's so, so, so confident that we'll get there with this path.
The Costco screenshot is really interesting. The terminal apps of old were truly works of art and _incredibly_ fast. A non-technical worker wouldn't take long to understand the system and the keys/shortcuts to do something quickly. I remember having to sit with a few folks when we were looking at modernizing an app, watching them process a record with a few keystrokes and thinking "we'll never match this speed".
Web or even desktop apps these days really pale in comparison.
"When people say I changed the culture of Boeing, that was the intent, so that it’s run like a business rather than a great engineering firm."
Yes, because I'd rather fly in a plane made by a "business" rather than "a great engineering firm." Why in the f*ck isn't this scumbag in prison? When you take a firm that produces a product that must ensure the safety of its users because the consequences are dire and you purposely subvert that, and then people die, you need a long stint of FPMIA prison. Also, cheated on his wife in their Golden Anniversary year. Lowest of the low.
This is not about open source AI, and the people who are saying it is don’t seem to understand the point Anthropic are making here.
The point here is that malicious hidden behaviour encoded during pre-training seems to be very resistant to generic finetuning without knowing what the hidden behaviour is.
If random websites start including hidden or discreet bits of text containing malicious instructions, these could be activated post hoc to get a model to do something nefarious. This impacts open source and closed source models alike, since they all generally train on trillions of tokens which can't be manually verified for hidden traps like this.
From 2006 when I joined until maybe just after Obama (2009 or 2010, not sure? maybe as late as 2011) it was the best ever. Like HN on roids. Better than Slashdot that came before it, which was already a junk site by that point, larger than K5. Then it ate every internet forum ever, and turned into this weird authoritarian pervert Myspace thing.
Now it's not even a website, but a phone app. I hesitate to click on reddit links unless they're old.* prefixed.
> how has generative AI had a positive impact on the average person?
Generative AI, for the most part, is a technology in search of a problem. For now. The tech is in a “demos well, productizes poorly” stage. Once you try doing anything at scale and with proper evals of success, you see that real world performance is pretty poor still.
Yes yes we have lots of totally-not-cherry-picked papers where researchers achieved something fantastic. Then you look at the detail and it’s either “we ran this once because expensive” or “it achieved this great result almost 35% of the time so it’s state-of-the-art best-in-class”
> create a techno-utopia, and they may be sorely disappointed when others don't feel the same way
Every techno-utopia I’ve ever seen in movies, books, etc. has always secretly been a dystopia. It looks nice and polished on the surface, but achieves this result through aggressive oppression and disenfranchisement of dissenting voices.
Star Trek perhaps is the only one that didn’t follow that pattern. And even then you have groups outside the federation who are not super happy with how the federation does things.
Top tier VCs will definitely tell you “no” and they’ll often get there quickly.
It’s more often the small funds, the inexperienced family offices, and the junior associates without real authority who string companies along forever. With little authority or capital to actually deploy, they have plenty of time to waste yours.
Lumping all VCs together really doesn’t lead to accurate descriptions of how the industry works.
That's the shortcoming of every alternative protocol and "indie web" community I've come across. They only attract existing techies and have a weird sheen of forced kindness about them. If you're just chatting with other programmers under American HR communication standards, then how is it any different from work?
The true magic of the early web was somebody genius but decidedly untechnical like David Bowie shitposting at his own fans. There's no special line of code that's going to foster that. You have to ruthlessly curate a community to avoid a critical mass of sensitive nerds, but guess who the early colonizers of these alt platforms are. None of these communities will attract today or tomorrow's David Bowies.
You can also use software to detect “cuts” in the video, which can be used to improve the frame-extraction over just getting six evenly spaced frames from the video.
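One common trick (this is a generic sketch, not whatever tool the parent had in mind) is to compare intensity histograms of consecutive frames: within a shot they drift slowly, while a hard cut produces a large jump. The threshold value here is an arbitrary illustration:

```python
import numpy as np

def detect_cuts(frames, threshold=0.4):
    """Flag indices where the histogram difference to the previous frame
    exceeds `threshold` -- a rough proxy for a hard cut."""
    cuts = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=16, range=(0, 256))
        hist = hist / hist.sum()  # normalize so frame size doesn't matter
        if prev_hist is not None:
            # L1 distance between consecutive histograms, in [0, 2]
            diff = np.abs(hist - prev_hist).sum()
            if diff > threshold:
                cuts.append(i)
        prev_hist = hist
    return cuts
```

Once you have cut boundaries, you can pick one representative frame per shot instead of six evenly spaced frames, which avoids sampling the same scene twice or landing mid-transition. In practice you'd decode frames with something like PyAV or OpenCV rather than feed raw arrays.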
> If they haven't actually done it by now, it's a management problem, and no AI tech is going to fix that.
This. Further, it’s a failure to incentivize the roles that would support or port this business-critical logic to something else. I worked at a large insurer where they slowly laid off mainframe talent over the last decade. Those mainframe salaries ran counter to the narrative they were promoting about cloud being the future. Unfortunately, in their haste to create optics, they failed to migrate any of the actual code or systems from mainframe to cloud.
I am reminded of the phrase "The easiest way to prove a nihilist as a hypocrite is to point a shotgun at his head." You have reconstructed moral responsibility in no responsibility land, possibly only removing the judgement of character that comes with "defensive behavior".
TESCREAL is one of the things I point to when people say e.g. "Just because I dress in a suit doesn't mean I'm well put together in my daily life."
Of course, wearing a suit and having your life in order aren't strictly causally related. But the much weaker claim that they are positively correlated is one I'd be surprised to see disproved.
Similarly, I find it hard to imagine someone who is T, E, S, C, R, and EA, but not L. They're out there, but all 7 ideas co-occur so often that such a person is an outlier among people who identify with, say, at least 4 of the 7 terms. There are many more people who are straightforwardly all 7/7 than there are 6/7, even though there are seven times as many ways to be 6/7 on TESCREAL.
Framed this way, the fact that the Big Five genuinely don't appear very correlated with one another is all the more impressive.
(I'm not sure I fall under any of the 7 anymore, go figure. Maybe I'll come back one day. Credit where credit is due: The Russian Cosmist imperative to not just make everyone alive immortal, but resurrect everyone that ever died stirs my soul in a way few things have.)
I don't understand this characterization. He cites a wealth of the kind of history that only a seasoned and successful fiction author could have deep knowledge of. I may end up rereading it so I can follow the thread better. It's fascinating all because of that, but you want to skip to the end and just criticize the mindset because you didn't like the conclusion.
I get it, tech geeks want to believe in tech geekdom. But this is an unexamined religion, the priesthood of which is right here to peel back the curtains and show how it's all smoke and mirrors, and you just want to crucify the non-believer. Elon Musk et al are not the writers of the myths, and rockets and LLMs are not the communion wafers. But you seem to want to treat them as such.
I fully recognize the danger AI presents, and believe that it will most likely end up being terrible for humanity.
But thanks to my own internal analysis ability and the anonymity of the internet, I am also willing to speak candidly. And I think I speak for many people in the tech community, whether they realize it or not. So here we go:
My objective judgement of the situation is heavily adulterated by my incredible desire for a fully fledged hyper intelligent AI. I so badly want to see this realized that my brain's base level take on the situation is "Don't worry about the consequences, just think about how incredibly fucking cool it would be."
Outwardly I wouldn't say this, but it is my gut feeling/desire. And I think for many people, especially those who have pursued AI development as their life's work, the question is: how can you spend your life working to reach the Garden of Eden, and then not eat the fruit? Even just a taste.
I've seen a lot of lamentation about PE eating the world, but very few people discuss *why* PE got so huge.
PE is popular for one reason and one reason only: taxes. PE generally makes money on a trade called a leveraged buyout (LBO), where they take out a massive loan to buy a company. Because interest on debt is tax-deductible, going debt-heavy increases the take-home profits of the company (this is called a "tax shield"). Because the profits are higher, the value of the company is higher, and the PE firm makes money on their trade.
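The tax shield is easiest to see with numbers. This is a toy illustration with made-up figures (the 8% rate, 25% tax, and dollar amounts are all hypothetical), comparing what a company's capital providers take home with and without LBO-style leverage:

```python
def after_tax_profit(ebit, debt, interest_rate=0.08, tax_rate=0.25):
    """Equity's after-tax profit, with interest deducted before tax
    (the US treatment that creates the tax shield)."""
    interest = debt * interest_rate
    taxable = ebit - interest
    return taxable * (1 - tax_rate)

# Hypothetical firm earning $100M in operating profit (EBIT):
no_debt = after_tax_profit(ebit=100.0, debt=0.0)      # all 100 taxed -> 75 to equity
leveraged = after_tax_profit(ebit=100.0, debt=500.0)  # 40 interest pre-tax, 60 taxed -> 45

interest_paid = 500.0 * 0.08
total_unlevered = no_debt                    # 75.0 to capital providers
total_levered = leveraged + interest_paid    # 45 + 40 = 85.0 to capital providers
tax_shield = total_levered - total_unlevered # 10.0 = interest * tax_rate
```

Same business, same operating profit, but the leveraged structure hands $10M more to its capital providers per year, entirely at the taxman's expense. That spread is roughly what the PE firm is arbitraging.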
What this means in practice is that if you run your company sustainably (low debt, lots of assets), you become a target for a PE firm to attempt a hostile takeover, all while claiming (defensibly, actually) to be doing what's in the best interest of the shareholders. So good companies will try to ward off these attacks by taking on lots of debt and going asset-light, to minimize the value a PE firm might gain.
In short, both PE ownership and the brittle, debt-heavy nature of the American economy today can be traced to the tax-advantaged nature of debt. For reasons I can't quite understand, nobody seems to be advocating for revoking this tax deduction. I can only surmise this is because everyone hates taxes.
Thank you for coming to my TED talk, your take home exam is a short essay on what you think the mortgage interest tax deduction (started in 1913) did to household debt.
> The problem is a judicial review system that's extremely hesitant to consider exculpatory evidence following a conviction... and here we have an edge case where the conviction was gained by discredited scientific means. Science adapts quickly to new evidence and new methods, and quickly discards old ones; the courts don't work that way.
To expand on that, appeals are not intended to re-litigate the facts. They are there to fix procedural mistakes during trial, or wrong argumentation of how the law should be applied.
The question of 'what happened' is generally not up for debate during appeal, and is generally not grounds for lodging one. Instead, appeals are meant for when the law was applied wrongly, or when a trial was unfair due to a judge's mistake. Effectively, the system protects you against judges' mistakes and unscrupulous prosecutors; there is much less protection against experts who happen to be wrong.
It seems to me that a system like that is fundamentally incompatible with a death penalty, since it leaves a decent amount of space for mistakes that go uncorrected.
Here's the thing. We've basically been using TOML since the INI file format from MS-DOS. It works, it's usable, and we've all seen it. TOML is just an evolution of the INI format that fixes some of its shortcomings. YAML blasts onto the scene like "hey, what if we just rewrite the JSON format to make it legible?", which brings with it a million edge cases of problems.
YAML is the new kid on the block. And despite several great formats already existing for different use cases, some kind of mass hysteria caused big players to adopt YAML. I suspect everyone who adopted it is either a Python dev or was drawn in by the legibility of the format. It's easy to read. While that's true, did anyone stop to think about what it's like to actually use?
https://www.congress.gov/bill/118th-congress/senate-bill/214...
Looks pretty dead. Only 1 cosponsor and no action taken in a year.