
You are likely not a professional writer; if that happens to be your job, then you are not “like the rest of us”. Also, journalists are held to the standards of their organization, unlike “the rest of us”, who are held to none.

I always assume that anything a random citizen says is 99% likely to be false, because yes, most people know very little.


> I always assume that anything a random citizen says is 99% likely to be false, because yes, most people know very little.

That's one way to look at it. Another is that a random citizen has no reason to lie to you. A journalist working for an "organization with standards" does.


I dispute that random people have no reason to lie. There's a ton of money to be made blogging lies, for one thing. Notoriety, attention...

It has nothing to do with someone lying to me; rather, the average person is very uneducated on any given topic and these days largely consumes propaganda from people who DO have a reason to lie to me. Thus, random people are not to be listened to.

My hierarchy of trust: random people 10, government 40, journalists 60, scientific papers 80, scientists talking about their own area 88. Everything else 0.


While LLMs have accelerated it, silos were already blocking non-Google and non-Bing search engines before LLMs. LLMs have only made the web's existing problems worse, and banning LLMs won't fix the core issues of silos and misinformation.

170 °C, right? I do the same but use 350 °F. I also heat the pan before adding oil. Works perfectly every time. Even more nonstick than any nonstick pan I’ve used.


Yep, 170 °C, and I also heat the pan up before adding fat/oil. Agreed, it's better than nonstick and honestly even easier to clean.


How about the whole "eating fat leads to heart disease" claim having long been disproven?

https://pmc.ncbi.nlm.nih.gov/articles/PMC9794145/


I think in this regard it works just fine. If the laws move to say that "learning from data" while not reproducing it is "stealing", then yes, you reading others' code and learning from it is also stealing.

If I can't feed a news article into a classifier to teach it whether or not I would like that article, that's not a world I want to live in. And yes, it's exactly the same thing as what you are accusing LLMs of.
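For concreteness, here is a minimal sketch of that kind of personal article classifier, assuming scikit-learn; the sample articles, labels, and model choice are all invented for illustration:

  # Minimal sketch of a personal "would I like this article?" classifier.
  # Assumes scikit-learn; articles and labels are invented examples.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  articles = [
      "New results on sparse matrix factorization speed up solvers",
      "Celebrity couple spotted at award show afterparty",
      "A deep dive into database indexing internals",
      "Ten red-carpet looks everyone is talking about",
  ]
  liked = [1, 0, 1, 0]  # 1 = I liked it, 0 = I didn't

  # The model learns word statistics from the articles without storing
  # or reproducing the text itself.
  model = make_pipeline(TfidfVectorizer(), LogisticRegression())
  model.fit(articles, liked)

  new_article = "Benchmarking B-tree vs LSM-tree storage engines"
  print(model.predict_proba([new_article])[0][1])  # P(I'd like it)

The trained weights encode my preferences, not the articles themselves, which is exactly the distinction being drawn here.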

They should be subject to laws the same way humans are. If they substantially reproduce code they had access to, then it's a copyright violation, just like it would be for a human doing the same. But highly derived code is not "stolen" code, neither for AI nor for humans.


Except actual studies objectively show efficiency gains, more so for junior devs, which makes sense. So no, it's not a "deception", but it is often overstated in popular media.


Studies have limitations; in particular, they test artificial, narrowly-scoped problems that are quite different from real-world work.


And anecdotes are useless. If you want to show me better studies justifying your claim, great, but no, I don't value random anecdotes. There are countless conflicting anecdotes (including my own).


I find the opposite: the more senior you are, the more value they offer, as you know how to ask the right questions, how to vary the questions and try different tacks, and how to spot errors or mistakes.


Why would you skip unit tests just because AI wrote the function? Unit tests should be written regardless of the skill of the developer. Ironically, unit tests are also one area where AI really does help you move faster.

High-level design, rough outlines and approaches, is the worst place to use AI. The other place AI is pretty good is surfacing API or function calls you might not know about if you're new to the language. Basically, it can save you a lot of time by cutting out tons of internet searching in some cases.
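As a concrete sketch of the boilerplate-heavy tests an assistant can draft quickly, here is a hypothetical parametrized pytest suite; the function under test, parse_duration, and its cases are invented for illustration:

  # Hypothetical example of the test boilerplate an AI assistant drafts well.
  # The function under test, parse_duration, is invented for illustration.
  import pytest

  def parse_duration(text: str) -> int:
      """Parse strings like '2h', '30m', '45s' into seconds."""
      units = {"h": 3600, "m": 60, "s": 1}
      if len(text) < 2 or text[-1] not in units or not text[:-1].isdigit():
          raise ValueError(f"bad duration: {text!r}")
      return int(text[:-1]) * units[text[-1]]

  # The tedious part: enumerating cases. An assistant saves the typing,
  # but a human still has to check every expectation.
  @pytest.mark.parametrize("text,expected", [
      ("2h", 7200),
      ("30m", 1800),
      ("45s", 45),
      ("0s", 0),
  ])
  def test_parse_duration_valid(text, expected):
      assert parse_duration(text) == expected

  @pytest.mark.parametrize("text", ["", "h", "2x", "-5m", "2.5h"])
  def test_parse_duration_invalid(text):
      with pytest.raises(ValueError):
          parse_duration(text)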


I have completely the opposite perspective.

Unit tests actually need to be correct, down to individual characters. Same goes with API calls. The API needs to actually exist.

Contrast that with "high level design, rough outlines". Those can be quite vague and hand-wavy. That's where these fuzzy LLMs shine.

That said, these LLM-based systems are great at writing "change detection" unit tests that offer ~zero value (or negative).


> That said, these LLM-based systems are great at writing "change detection" unit tests that offer ~zero value (or negative).

That’s not at all true in my experience. With minimal guidance they put out pretty sensible tests.


> With minimal guidance[, LLM-based systems] put out pretty sensible tests.

Yes and no. They get all the initial annoying boilerplate of writing tests out of the way, and the tests end up mostly decent on the surface, but I have to manually tweak the behavior and write most of the important parts myself, especially for non-trivial, tricky scenarios.

However, I am not saying this as a point against LLMs. The fact that they are able to get a good chunk of the boring boilerplate parts of writing unit tests out of the way and let me focus on the actual logic of individual tests has been noticeably helpful to me, personally.

I only use LLMs for the very first phase of writing unit tests, with most of the work still done by me. But that initial phase is the most annoying and boring part of the process for me. So even if I still spend 90% of the time writing code manually, I am very glad to get that boring initial part out of the way quickly, without wasting mental cycles on it.


The fact that you think "change detection" tests offer zero value speaks volumes. They may well be the most important use of unit tests. Getting the function correct in the first place isn't that hard for a senior developer, which is why it's often tempting to skip unit tests. But then you go refactor something and oops, you broke it without realizing it: some boring, obvious edge case or the like.

These tests are also very time-consuming to write, with lots of boilerplate that AI is very good at producing.


>The fact that you think "change detection" tests offer zero value speaks volumes.

But code should change. What shouldn't change, if business rules don't change, is APIs and contracts. And for those we have integration tests and end-to-end tests.



I think you've misunderstood what he meant by change detection (not GP, could be wrong).

Hard to describe, easy to spot.

Some people write tests that are tightly coupled to their particular implementation.

They might have tons of setup code in each test. So refactoring means each test needs extensive rewrites.

Or there will be loads of asserts that have little to do with the actual thing being tested.

These tests usually have negative value as your only real option as another developer is to simply delete them all and start again.

That's what I would interpret the GP as meaning when they use the phrase "change detection" tests.
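A hypothetical sketch of the contrast, with all names invented: the first test pins implementation details and breaks under refactoring, while the second asserts only observable behavior:

  # Hypothetical contrast between the two styles; all names are invented.
  from unittest.mock import MagicMock

  class OrderService:
      def __init__(self, repo):
          self.repo = repo

      def total(self, order_id):
          """Behavior: sum of qty * price over the order's line items."""
          items = self.repo.get_items(order_id)
          return sum(i["qty"] * i["price"] for i in items)

  # Brittle "change detection" style: pins *how* the code works. Rename
  # get_items or batch the call and this fails, even though the total
  # is still correct.
  def test_total_change_detection():
      repo = MagicMock()
      repo.get_items.return_value = [{"qty": 2, "price": 5}]
      OrderService(repo).total("order-1")
      repo.get_items.assert_called_once_with("order-1")  # implementation detail

  # Behavioral style: asserts only the observable result, so it survives
  # refactors and fails only when the business rule actually breaks.
  def test_total_behavior():
      repo = MagicMock()
      repo.get_items.return_value = [{"qty": 2, "price": 5}, {"qty": 1, "price": 3}]
      assert OrderService(repo).total("order-1") == 13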


>Some people write tests that are tightly coupled to their particular implementation.

That is not due to people's choices but due to what the actual code being tested does.

I think integration tests and end to end tests are much better.


>But then you go refactor something and oops you broke it without realizing it, some boring obvious edge case, or the like

I will start to care when integration tests fail, because that is an actual bug. Then I will fix the bug and move on.


I am kind of starting to doubt the utility of unit tests. From a theoretical perspective I see the point of writing them, but in practice I have rarely seen them be useful. Guy A writes poor logic and sets that poor logic in stone by writing a unit test. Manual testing discovers a bug, so guy B has to modify both the poor logic and the unit test.

I see more need for integration tests and end-to-end tests. I want to test business logic, not assert that 2 + 2 = 4.


I've always looked at unit tests as a targeted and often temporary measure.

Once you have a way to do a good integration/e2e test, the results of the constituent unit tests don't provide as much value.

I'd rather run one big complicated thing for real once than hang my hat on a bunch of fake green checkmarks that update every 50 milliseconds.


Also inextricably human tasks like reading spam messages, scam messages, marketing messages, and overly verbose work emails. Enjoy that!


Not sure I've spent more than 10 minutes combined over the past 20 years 'reading' spam or scam emails. When those things do manage to get through spam filters, it's usually pretty obvious what they are. As for 'marketing messages', I can only assume you mean spam and added it to make the list look longer.

Get back to me in 3-5 years and let me know how getting AI to condense your work emails is going for you - my guess is that the first time ChatGPT manages to fuck up a distillation for you, either by garbling the meaning of something important or just missing a crucial point altogether, you'll swear off it for good. If you still have a job by then...

Point is, if you've been so genuinely bothered by spam and Nigerian princes that you're happy to outsource judgment and critical thinking to a probabilistic bot that hallucinates once in every 15 to 20 tries, then, aside from your skill issue in getting your spam filter to work, you and I have very divergent views on what makes the human brain valuable and unique, and indeed which parts of our cognition are worth preserving.


Behind what? Windows' ChatGPT interface? Apple has been quietly adding AI features, sometimes with no equal. For example, having OCR built in everywhere in the OS is amazing. I've used it many times already, but it's a quiet, seamless addition, so I guess people don't notice.

Their Photos improvements are pretty solid, bringing it closer to par with Google Photos. Their translation and voice recognition/dictation are state of the art. Their photo processing is equal to or better than Google's. Not really seeing where they are "so behind", to be honest.


"so behind" in that after months of pumping themselves up through glossy marketing they are now just barely on parity with Google Pixel's AI feature set?

C'mon man.


> months of pumping themselves up through glossy marketing

I don’t think I’ve come across their marketing outside tech-specific channels, which report on it because AI gets clicks.


If you aren't seeing them, it is more likely due to how you handle ads in your everyday experience. They have a very active ad campaign that I've seen on several TV channels and streaming services (mainly Prime ads).

https://www.adweek.com/brand-marketing/apple-intelligence-de...


Hacker News is a pretty strong echo chamber for the work-from-home crowd. When covid hit, there were many people at my company who really struggled with working from home. The reasons ranged from kids or family interrupting them, to being stuck in small condos, to new-to-the-country employees who relied on office time to get to know coworkers and make friends, to people who just really disliked working alone or wanted a physical separation of work and home life.

People here on HN often make it out that anyone who appreciates or wants to work in an office is evil or stupid or the like, but honestly probably half of people actually want a few days in the office. Comments here are not representative of the whole industry.


Covid was different because school-age kids were home too, which made a larger portion of situations untenable.


This is a gross mischaracterization of the stance. I want everyone to have an office to go to anytime they want: always, sometimes, never. If you want to make me go to the office, we'll have words.

We have forced-office and office-available; no one is arguing for forced-remote.


Yes, many people are asking for offices to be completely abolished, turned into housing, or the like. Also, work is about compromise. We had some fully remote staff, but we did ask most people to come in 1-2 days a week to work with the coworkers who wanted an in-person meeting. We never had an issue with it; most people came in because their coworkers wanted them to, and maybe 1-2 were hardcore at-home-forever types like you seem to be. But that was the extreme minority.

But no, I don't think I'm grossly mischaracterizing anything. Even replies to my post are literal personal attacks against the OP for not wanting to stay at home, or actually making fun of the reasons people want to go into an office. It's truly toxic behaviour.


I've heard people say to turn empty offices into housing, but not to kick everyone out to do so. I think the disconnect is that if you think 1-2 days in the office is the compromise, you don't get why remote work is valuable. Half in office is still in-office. I don't own workwear, we only have one car, I can stay a few weeks with friends in other cities with no issue, and I get 2 hours more sleep every night with a consistent sleep schedule. All the good perks happen only when you're actually remote.

Office-available, with social events and meetups, is supposed to be the compromise. The part I can't wrap my head around is what is gained by making someone who doesn't want to be in the office show up. The folks at $dayjob in that position literally just sit at their desk with headphones for 8 hours.


Oh yeah, because "I get to spend more time with my kids" and "I am too lonely to WFH" are equivalent.


They are. They're arbitrary wants/desires that have nothing to do with the job.


Except you can work on your loneliness and try to have social interactions outside work.

I guess you could say "you can try not to have kids so you can work from an office". I have no answer to that.

