Hacker News | raincole's comments

When people stress over the details I care about, it's craftsmanship. When they stress over the ones I don't, it's nitpicking.

Because they don't want users to feel stressed about using their product more.

> LLM Apis...

Yeah exactly, ChatGPT doesn't have this option for their web interface either, only for API. For the same reason.


The recurring theme is that people will keep explaining why bitcoin has failed and will fail, and bitcoin will keep hitting all-time highs.

Use LLMs. But do not let them be the sole source of your information for any particular field. I think it's one of the most important disciplines the younger generation - to be honest, all generations - will have to learn.

I have a rule for myself as a non-native English speaker: any day I ask LLMs to fix my English, I must read 10 pages from traditionally published books (preferably pre-2023), just to prevent LLMs from dominating my language comprehension.


I use LLMs as a translation tool, and make sure to generate JSON flashcards.

Sometimes it is more important to get a point across in another language than it is to learn that language. Since the process can be automated, you can have it build a backlog of what you skipped, so you keep some control over the habit of not learning what you're saying.
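As a minimal sketch of that backlog idea (the deck schema and file name here are my own assumptions, not any particular tool's format):

```python
import json

def add_flashcard(deck_path, source, translation, note=""):
    """Append a phrase/translation pair to a JSON flashcard deck.

    The deck is a plain JSON list of card objects (a made-up schema);
    the source/translation pair would come from an LLM translation.
    """
    try:
        with open(deck_path) as f:
            deck = json.load(f)
    except FileNotFoundError:
        deck = []  # first card: start a new deck
    deck.append({"front": source, "back": translation, "note": note})
    with open(deck_path, "w") as f:
        json.dump(deck, f, ensure_ascii=False, indent=2)
    return len(deck)

# e.g. after asking an LLM to translate a phrase for you:
n = add_flashcard("deck.json", "¿Dónde está la estación?", "Where is the station?")
```

Reviewing the deck later is the "maintain some control" part: the cards record exactly what you delegated instead of learning.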


> "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."

No, it's not "obvious" whatsoever. Actually, it's obviously confusing: why are you allowing them to use ChatGPT but forbidding them from asking it the question directly? Do you want an employee who is productive at solving problems, or someone who guesses your intentions better?

If AI is an issue for you, then just ban it. Don't try to make the interview a game of who outsmarts whom.


See my answer to the other comment on this question. We figured there were some good use cases for AI in an interview that weren't just copy/pasting code; it's not about guessing intentions. It seemed most helpful for potentially unsticking candidates from specific parts of the problem if they were drawing a blank under pressure, basically an easier "you can look it up on Google" that would burn less of their time. However, we quickly found it was just easier to unstick them ourselves.

> If AI is an issue for you then just ban it.

Yes, that was the conclusion I just said we rapidly came to.


You went as far as checking how it works (thus "requires you to set up a remote/external provider as the first step").

But you didn't bother checking the very next section in the sidebar, Supported LLM Providers, where ollama is listed.

The attention span issue today is amusing.


> The attention span issue today is amusing.

I find it rather depressing. I know it's more complex than that, but in real life it really feels like people have no time for anything past a few seconds before moving on to the next thing. It shows in the results of their work all too often as well. Some programming requires a very long attention span, and if you don't have one, the result isn't going to be good.


But this is an elevator pitch. I didn't come here to be marketed to, yet I am being marketed to.

So if you're going to market something to me at least do it right. My attention span is low because I don't really give a shit about this.


But people really have no time. There is only one brain and thousands of AI startups pitching something every day.

Yeah, no need to try any of them until everyone says "you have to". Which is what happened with Aider and later Cline & Cursor.

It's hard to picture a future where it doesn't last at least two years.

Trump is not famous for admitting he is wrong. Plus he's probably not wrong in this case, if destabilizing the US has been the goal all along.


Technically this could be challenged in court?

He can only enact these tariffs because he declared a national emergency; otherwise he'd need Congress to confirm them. The illegal immigrants/fentanyl pretext seems absurd and flimsy...


Whatever Apple's goal may be, the writing is on the wall: Swift's brand is strongly associated with the Apple ecosystem for most programmers. They won't adopt it unless they're already targeting Apple's platforms.

See C#/.NET Core. It has run on Linux for many years now, but people still treat it as "Microsoft's thing".


Supermarket Simulator is a game whose Steam capsule (the very first art players see on Steam) is blatantly AI-made.

The estimated sales are over 2M copies. [0]

[0]: https://steamdb.info/app/2670630/charts/


A lot of those simulation games seem rather poorly made, but a couple of them have been big hits regardless.

I'm sure you'd do better on average with not terrible marketing.


Wonder if it would sell more if it had a good art banner instead?

Would the extra sales outweigh the sales gained from the work done in the time saved by generating it with AI instead of finding an artist?

Of course it's possible, but 2M copies is already the top-seller in this genre. The original Goat Simulator (arguably THE one game that made "funny goofy simulator" a genre) sold ~4M.

[0]: https://steamdb.info/app/265930/charts


I don't want to sound like I'm bashing the original designer of the $200 monkey mascot, but I think the author misunderstands something here.

The market value of that work was very unlikely to be $200. Before AI, there were plenty of people offering similar services on Fiverr, and in my experience they're mostly not scammers (just novices). Of course, they might not live in the US, at least not in big cities.

The price range for that was $20~$50.

Edit: the article says 2013. I don't know if Fiverr was popular back then; I'm talking about more like 2019, by which point Fiverr and similar platforms had upended this kind of $200 market.


I think you would see a quality difference between 'a monkey mascot' at $40 and $200. The $200 designer is shining a light on his personal brand.

The mascot is friendly, vaguely memorable, well-proportioned, soft, and not attention-seeking. Its expression tells a story, adds humanity, and creates unresolved tension.

The AI ones are sharp and confident and eye-catching, zero subtlety, completely missing the point. I'm willing to bet a $40 designer would drop the ball in a different, equally bad way (probably make it too corporate, or miss the precise "cute but low-effort" spot the original designer hit).


It's funny because as I read the article, one thought was "the slop results look like they came from very old models".

Use a model released this week and the results are (to my untrained eye) no longer distinguishable from a human artist: https://imgur.com/a/tgEsXq8

And it wasn't some pro image-prompting magic. Even compared to 12 months ago, the text encoders have gotten very good. I was able to use wording that came to me naturally and get an aesthetically pleasing result in seconds.


The reason AI slop is slop isn't that the models aren't advanced enough. The slop factor is inherent in the way AI works and can't be fixed with brute-force compute.

AI is likelihood optimization under the hood: it draws the most statistically likely image. The human brain and eye are very good at picking out "average" pictures. Turns out our built-in AI detection capabilities are very, very good. (This might actually kill the AI industry very quickly. I imagine in a couple of years AI-generated pictures or text will be the lamest thing ever, and AI companies will lose a shitload of money in this arms race.)
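To make the "most statistically likely" point concrete, here's a toy sketch (the numbers are illustrative, not from any real model) of how a likelihood-based sampler collapses onto the single most probable option as temperature drops:

```python
import math
import random

def sample(logits, temperature=1.0):
    """Sample an index from raw scores (logits).

    temperature == 0 is greedy decoding: always pick the mode.
    Higher temperatures flatten the distribution and allow
    less likely (less 'average') options through.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    # inverse-CDF sampling over the softmax probabilities
    r = random.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.1]               # toy scores for three options
greedy = sample(logits, temperature=0)  # always index 0, the mode
```

At temperature 0 the sampler only ever returns the mode, which is the "average picture" tendency being criticized; real image models layer noise schedules and guidance on top of this, but the likelihood-seeking core is the same.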


This comment can't be further from reality. If you show this (https://imgur.com/a/tgEsXq8) to people, their "very, very good built-in AI detection" won't bat an eye.

> This might actually kill the AI industry very quickly

Yes it will, but not in the way you implied. In a few years, AI-assisted image creation will be so common that no one will bother mentioning "AI" anymore, effectively "killing" the AI hype. Just like you can't sell a built-in webcam as a feature of a laptop: every laptop has one.


a) I linked to an entire album of pictures that 99% of the population would not identify as AI, including the population that knows what AI slop is.

b) This is a gross oversimplification to the point of being unhelpful and pretty much wrong. You don't seem very familiar with how these models work.

There is no inherent reason why sampling from the latent space of a model limits us to the average of any concept.

Not to mention the models are learning what average means and what exceptional means, and can increasingly produce both at will. As they get larger, the degrees of separation between those "sub-concepts" of each concept grow larger and larger.

The reality is humans are good at convincing themselves they're good at things. The false positive and false negative rates for spotting AI art are already going up, and it's only going to accelerate from here on out.


> that 99% of the population would not identify as AI

Absolutely not true about the "99%". Give it a couple of years, and not being able to spot AI slop will permanently mark you as an "okay boomer" tech-illiterate slob.

AI == lame, square, uncool.

We already see this with the AI assistants they keep putting into OS updates. People hate them, and not for "privacy" reasons. They just don't want to be lame.


Again, can you point out even a single giveaway in that album?

I'm honestly convinced you plain didn't realize the entire Imgur album was AI generated and that's why you haven't been able to address it.


AI is lame, kids hate that stuff. Positioning yourself as some sort of "AI expert connoisseur" will only make you look lamer.

You had 3 days and that was the best you could do?

You mentioned a model released this week. Is it Lumina's new model, by any chance?

Imagen 3 002
