
"actively exploring exits" is not the same as "rush to sell". Most startups are actively exploring exits as that's the whole damn point of a startup


Humans don't really generate text as a series of words. If you've ever known what you wanted to say but not been able to remember the word, you've seen this in practice. Although the analogy is probably a helpful one, LLMs are basically doing the word-remembering bit of language without any of the thought behind it.


How do you generate your text? Do you write the middle of the sentence first, come back to the start, then finish it? Or do you have a special keyboard where you drop in sentences as fully formed input?

As systems, humans and LLMs behave in observably similar ways. You feed in some sort of prompt plus context, a little bit of thinking happens, a response is developed by some wildly black-box method, and then a series of words is generated as output. The major difference is that the black boxes presumably work differently, but since they are both black boxes, that doesn't matter much for deciding which will do a better job at root cause analysis.
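To make that concrete, here's a toy sketch of the output stage (Python; next_token is a hypothetical stand-in for the black box, not any real model API):

    # Toy autoregressive loop: each step picks one more token
    # given everything generated so far.
    def generate(prompt_tokens, next_token, max_len=50):
        out = list(prompt_tokens)
        for _ in range(max_len):
            tok = next_token(out)   # the black-box part
            if tok == "<eos>":      # model signals it's done
                break
            out.append(tok)
        return out

However next_token is implemented, the observable behaviour is the same: words come out one at a time.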

People seem to go a bit crazy on this topic at the idea that complex systems can be built from primitives. Just because the LLM primitives are simple doesn't mean the overall model isn't capable of complex responses.


    Do you write the middle of the sentence first, come back to the start then finish it?
Am I the only one that does this?

I'll have a central point I want to make that I jot down and then come back and fill in the text around it -- both before and after.

When writing long form, I'll block out whole sections and build up an outline before starting to fill it in. This approach allows a better distribution of "points of interest" (and was how I was taught to write in the '90s).


That was back in 2018 [1], around the same time there was a lot of moving about and restructuring. I think this is about when Google search started going downhill as well.

[1] https://gizmodo.com/google-removes-nearly-all-mentions-of-do...


Again, the phrase was not dropped. You might notice that your link is titled "Google removes nearly all mentions of don't be evil from its code of conduct". (By which they mean 3 mentions.)

But as that article notes, they didn't drop the phrase. It's there now. It's always been there. There was never a time when they dropped it.


No idea what this guy is talking about. Big tech are only talking about the existential risks to distract from the actual, very real risks of how the technology will be misused.


IMO the biggest risk is that big tech will be the only ones allowed to do AI in a future regulatory regime, leaving smaller players out of the wealth-making opportunity and leaving society with a low diversity of options. And that regulatory capture is, ironically, enabled by people like Tegmark, who are pushing to restrict everyone with bureaucratic nightmares.


It's part of the whole EA schtick (FLI is in that space w/ their funding from the Center of Existential Risk).

I always found those guys annoying - they adopted sci-fi tropes while ignoring decades of data-driven work on how to minimize the misuse of technology.

It's like a postmodern version of Herman Kahn - overusing data-driven models while ignoring the variability that arises from humanity.

Edit: also, this article is a submarine piece from the AI Safety Summit in Seoul, which was co-hosted by the UK and SK and was a flop [0].

[0] - https://www.reuters.com/technology/second-global-ai-safety-s...


My interpretation of the paper is that this algorithm is simpler than other options but also worse, so in a professional context you'd use one of those instead.


The biggest problem with using this in production is the failure mode. It's not like a traditional outage where you get a load of 503 errors; the system appears to be working correctly but generates gibberish. If you're using this in a chat bot, you could be liable for the things it's saying...


Agreed. There's nothing inherent about slaves that makes them more efficient than paid laborers; it's just cost and time. It can look incredible that the Romans built these huge structures, but the timescales were measured in decades. Same with cathedrals in the Middle Ages. If you've got 200 years to build something, you can really do a great job.


That's something you could philosophise about but not research, unless you already have a human-level intelligence to test. We won't know if it's even possible to replicate in silicon for a very long time.


Very weird that it's using paths instead of the Accept header to change the content type, but otherwise pretty cool.
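For comparison, a minimal sketch of header-based negotiation (Flask here, and the /report route and payload are just assumptions for illustration):

    # Serve two representations from one path, chosen by the
    # client's Accept header rather than a path suffix.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/report")
    def report():
        # Pick the best representation the client asked for.
        best = request.accept_mimetypes.best_match(
            ["application/json", "text/csv"], default="application/json"
        )
        if best == "text/csv":
            return "id,value\n1,42\n", 200, {"Content-Type": "text/csv"}
        return jsonify({"id": 1, "value": 42})

A request with "Accept: text/csv" gets CSV and everything else gets JSON, with no /report.csv path needed.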


Every person you hire to help drive a period of growth is someone who will later be fired once that growth (or, commonly, just the funding) runs out. It's very easy to increase headcount to increase growth, but it's not sustainable.

I feel like companies often think they're still going up an exponential curve rather than cresting the top of the 'S'. You really need to plan for the long-term health of your company, and this company obviously hasn't.


This is only true if you measure growth in extremely dumb ways. Most growth metrics are revenue-based, and if the growth is creating revenue, there is nearly always enough money to pay for the roles created to grow the company. If the growth project fails, sure, you'll need layoffs, but you don't need to fire a $300k developer who creates $1M/year in revenue.

