svdr's comments

We are using Helpscout, which is very nice overall. They also do not send the weirdly formatted ticket email, with 'respond above this line' etc.

Did you find out what the error was in computing the volume of the truncated icosidodecahedron?


The attackers are demanding only $500,000 as a ransom payment. That's cheap!


Yep, it's almost a Dr. Evil just-arrived-from-1967 sort of price. Bizarre.


Or extremely expensive, if the contents are not as described.


I always think people calling AI just ‘fancy autocomplete’ haven’t really tried it yet.


But that's literally what it is. The only reason you can have dialog-like interactions with language models is because they have been trained with special "stop tokens" surrounding dialog, so the model can (generally) autocomplete something that looks like a reasons, and then the inference engine can stop producing text when the model produces the stop token.
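
A minimal sketch of the loop being described, assuming a hypothetical model object with a sample_next_token method (the stop token name and ToyModel are made up for illustration): the model only ever predicts the next token, and it is the surrounding inference engine that cuts generation off when the stop token appears.

    STOP_TOKEN = "<|end_of_turn|>"  # assumed marker; real models define their own

    def generate_reply(model, prompt_tokens, max_tokens=512):
        tokens = list(prompt_tokens)
        for _ in range(max_tokens):
            next_token = model.sample_next_token(tokens)  # plain next-token autocomplete
            if next_token == STOP_TOKEN:
                break  # the engine, not the model, decides the "reply" is finished
            tokens.append(next_token)
        return tokens[len(prompt_tokens):]

    class ToyModel:
        """Stand-in for a real LLM: emits a canned reply, then the stop token."""
        def __init__(self, reply):
            self._queue = list(reply) + [STOP_TOKEN]
        def sample_next_token(self, tokens):
            return self._queue.pop(0)

    print("".join(generate_reply(ToyModel("Hi there!"), list("You: hello\nAssistant: "))))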


Technically it is, of course, but the experience is completely different, and I get the feeling people call it that to downplay it.


I think understanding that helps me get more out of them. I feel like I am better able to provide information to the model with the expectation that it will need that information to autocomplete the dialog that I want.


Or when it produces “\nYou:”. But that doesn’t matter much, since the value is in what happens in a dialog.


s/reasons/response/


Indeed, it's actually worse. You should only use it for stuff where factuality doesn't matter. Verifying that an answer is complete and correct is more work than consulting a reliable source in the first place.


I think it's an accurate assessment and still get some use out of LLMs. Brainstorming various things, tutorials, entertainment. Straight questions with factual answers really are a poor fit. Just yesterday GPT-3.5 told me the Belgian city famous for its mustard is Dijon.


I see LLMs as next-generation search. So much of everything created on the internet is useless, outdated, false or just irrelevant to the thing you want answered. It gets increasingly harder to find what you need to get your job done. An LLM can extract relevant information from the garbage pile faster than I can sift through various forums and mail threads via Google. The more I use it, the more I feel that it's a greater step in information-finding than Google itself was when it launched.

You need a second source of actual truth to verify it, of course, but that's always the case anyway. For coding it's easy: the code works or it doesn't. Lawyers and such have a harder time when they use an LLM trained on "the internet", but one can imagine an LLM trained mostly on actual case histories and law texts doing much better, for example.


> For coding it's easy: the code works or it doesn't.

Oh, wow. That’s a quote that could be written on a lot of figurative gravestones. Also some literal ones; see THERAC-25.


The best use of AI (c. 2024) is indeed when it's just "fancy autocomplete". It's great to help finish up boring, repetitive, but necessary tasks so you can focus on the high level.

But the tech cycle has been hyping it to high heaven (or hell), and claiming it can simply reproduce your imagination on a whim. Maybe in another decade, but that doesn't stop companies from buying in and trying to replace skilled labor ASAP.


There's a qualitative difference between the raw LLM model, which I think it is fair to describe as "fancy autocomplete", and AI-as-actually-deployed, where the LLM has access to function calling behind the curtain.
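
A hedged sketch of what "function calling behind the curtain" can look like, with made-up names (run_turn, TOOLS, the "CALL {json}" convention, ToyModel) standing in for whatever a real deployment uses: the model still only autocompletes text, but the runtime recognizes a structured request, executes it, and feeds the result back.

    import json

    TOOLS = {"get_weather": lambda city: {"city": city, "temp_c": 21}}  # stubbed tool

    def run_turn(model, messages):
        reply = model.complete(messages)                 # raw text completion
        if reply.startswith("CALL "):                    # assumed convention: "CALL {json}"
            call = json.loads(reply[len("CALL "):])
            result = TOOLS[call["name"]](**call["args"]) # the runtime does the actual work
            messages = messages + [{"role": "tool", "content": json.dumps(result)}]
            return model.complete(messages)              # model autocompletes the final answer
        return reply

    class ToyModel:
        """Stand-in model: first asks for the tool, then phrases its result."""
        def __init__(self):
            self.turns = 0
        def complete(self, messages):
            self.turns += 1
            if self.turns == 1:
                return 'CALL {"name": "get_weather", "args": {"city": "Ghent"}}'
            return "It is about 21 C in Ghent right now."

    print(run_turn(ToyModel(), [{"role": "user", "content": "Weather in Ghent?"}]))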


I still use Mixtral for exploratory questions and it's so good, especially with well-crafted character cards. I think there are three types of people: those who tried it, those who never tried it, and those who only tried GPT-<n> and its boring lookalikes.


To the contrary, the more I use it the clearer it becomes that it is just this.

That's not to disparage the value of a juiced-up autocomplete, though.


This is a nice daily newsletter with AI news: https://tldr.tech/ai


I guess they must have put some time into Sora?


Amounts are written like this: $1,102,684,1B. Shouldn't the last comma be a period?


Depends on your localization norms.

Admittedly, though, I don’t recall ever seeing a scheme that uses commas throughout. Usually if the decimal is a comma, the thousands separator is a period, in my experience.
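
A quick illustration of the two conventions being compared, hand-rolled in Python rather than going through system locale data (which varies by machine):

    value = 1102684.1
    us_style = f"{value:,.1f}"                                # '1,102,684.1'
    # Common European convention: periods group thousands, a comma marks decimals.
    eu_style = us_style.translate(str.maketrans(",.", ".,"))  # '1.102.684,1'
    print(us_style, eu_style)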


I wanted to use a macOS VM with Parallels for development. It is very easy to install and runs fast, but it's impossible to sign in with an Apple ID, which severely limits its use.


That’s Apple’s decision. It was intentional.

Apple are very weird about macOS VMs.


Severely? I use macOS directly on hardware without an Apple ID as my daily driver.

It works fine.


I'm using Chrome on a fast Mac, but the animations still had hiccups.


That is true, but cities themselves are very slow to regulate (and prevent) this.


Yeah, because it's tough to say no to free $$$.

Our elected leaders in charge of regulating this are often the ones directly profiting from Airbnbs and overinflated housing markets.

They often, directly or through family and friends, own several properties in desirable neighborhoods. So why would they?

