Hacker News | KerryJones's comments

How are you judging others who make this claim?

I'm a FAANG Sr. Software Engineer, use it both in my company and personal projects, and it has made me much faster, but now I'm just "some other person who made this claim".


Can you publish your workflow? I'm on the hunt for resources from people who make the claim. Publishing their workflow in a repeatable way would go a long way.

I find it suspicious that we aren't inundated with tutorials demonstrating these extraordinary claims.


What do you mean by "publish my workflow"? Do you want a blog post, a github md file? It's pretty simple.

Most recently I use Claude 3.5 projects with this workflow: https://www.youtube.com/watch?v=zNkw5K2W8AQ

Quick example, I wanted to make a clickable visible piano keyboard. I told it I was using Vue and asked for the HTML/CSS to do this (after looking at several github and other examples that looked fairly complicated). It spat out code that worked out of the box in about 1m.

I gave it a package.json file that got messed up, with many dependency versions out of sync with each other; it immediately fixed them up.

I asked it to give me a specific way using BigQuery SQL to delete duplicate rows while avoiding a certain function, again, 1 minute, done.

I have given it broken code and said "this is the error" and it immediately highlights the error and shows a fix.


But given a large project, does any of that really average out to a multiple? Or is it just a nice-to-have? This is what I keep encountering. It's all very focused on boilerplate or templating or small functions or utility. But at some point of code complexity, what is the % of time saved really?

Especially considering the amount of AI babysitting and verification required. AI code obviously cannot be trusted even if it "works."

I watched the video, and there wasn't anything new compared to how I used Copilot and ChatGPT for over a year. I stopped because I realized it eventually got in the way, and I felt it was preventing me from building the mental model and muscle memory that the early drudge work of a project requires.

I still map AI code completion to ctrl-; but I find myself hardly ever calling it up.

(For the record, I have 25+ years professional experience)


20+ years here. I've been exploring different ways of using them, and this is why I recommend that people start exploring now. IntelliJ with Copilot was an interesting start, but ended up being a bit of a toy. My early breakthrough was asking chat interfaces to make something small, then refine it over a conversation. It's very fast until it starts returning snippets that you need to fold into the previous outputs.

When Claude 3.5 came out with a long context length, you could start pasting a few files in, or have it break the project into a few files, and it would still produce consistent edits across them. Then I put some coins in the API sides of the chat models and started using Zed. Zed lets you select part of a file and specify a prompt, then it diffs the result over the selection and prompts to confirm the replace. This makes it much easier to validate the changes. There's also a chat panel where you can use /commands to specify which files should be included in the chat context. Some of my co-workers have been pushing Cursor as being even more productive. I like open source and so haven't used Cursor yet, but their descriptions of language-aware context are compelling.

The catch is that, whatever you use, it's going to get much better, for free. We haven't seen that since the 90's, so it's easy to brush it off, but models are getting better and there isn't a flattening trend yet.

So I stand behind my original statement: this time is different. Do yourself a favor and get your hands dirty.


I said above I've been using them for a year. How much dirtier do my hands need to be?

I'm 20+ years in as well, and I just want to call out that the "but given a large project" comment is a bit of moving the goalposts.

I do think it is helpful in large projects, but much less so. I think the other comment gives a good example of how it can be useful, and it seems fairly obvious, as context sizes increase exponentially over a short amount of time, that it will be able to deal with large projects soon.

When using it in larger projects, I'm typically manipulating specific functions or single pages at a time and use a diff tool, so it comes across more as PR that I need to verify or tweak.


Sure, but advocates are talking about multiples of increased productivity, and how can that be defended if it doesn't scale to a project of some size and complexity? I don't care if I get a Nx increase on a small script or the beginning of a project. That's never been the pain point in development for me.

If someone said that, over a significant amount of time and effort, these tools saved them 5% or maybe even 10% then I would say that seems reasonable. But those aren't the kinds of numbers advocates are claiming. And even then, I'd argue that 5-10% comes with a cost in other areas.

And again, not to belabor the point, but where are the in-depth workflows published for senior engineers to get these productivity increases? Not short YouTube videos, but long form books and playlists and tutorials that we can use to replicate and verify the results?

Don't you think that's a little suspect that we haven't been flooded with them like we are with every other new technology?


No, I don't think it's suspect, because it seems like you're taking a narrow view of productivity increase.

"Advocates are talking about multiples of increased productivity": some are, some are not, and I don't think most people are, but sure, there's a lot of media hype.

It seems like the argument is akin to many generalized internet arguments "these [vague people] say [generality] about [other thing]".

There are places where I do think it can make a significant, multiples-of difference, but it's in short spurts: taking over code bases, learning a new language, non-tech co-founders getting started without a tech co-founder. I think it's the Jr engineers who have a greater chance of being replaced, not the Sr engineers.


The most obvious reason seems to be missing?

Because it's split up. You no longer pay "for the news"; you pay a specific company for their take.

Do you want leftist? Rightist? Something centrist? If you want multiple opinions, will you pay for multiple subscriptions?

I'd happily pay $10/mo for a selection of specific news items.


Is grounded still around? Because that's what they offered.

For me, I find skipping the daily histrionic news cycle is better for my health. Anything of significant enough import would get to me via social channels, at which point I can go find enough sources about a subject to get a properly nuanced view.


Yes, I see them advertised by a few YouTube content creators

It doesn't even need to be everything. To be honest, I'm not really interested in paying for current events from any outlet, as I am simply not a news junkie, but if I could get some kinda combo deal for the publications that are frequent fliers in Sunday Longreads, I would go for it.

In the olden days, papers would target people like me, who only occasionally read news, with good headlines on the front and a low price for that day's print run. Now they are asking for a subscription (which is too much to pay for a single article) and acting like the archival value-add is worth it to me (it isn't).


I agree. I scan about 30 websites for news each day.

Do I need to subscribe to all of them?

Just not practical...


Exactly.

I'd made this point a bit over a year ago with regards to Hacker News, based on my own work scraping a full history of Front Page views from the "past" archive.

Note that there are only 30 stories which make the front page per day, total submissions run somewhat higher, typically a bit over 100, and about 400,000 per year per research by Whaly.[1]

As of 21 June 2023, there were 52,642 distinct sites submitted to the front page.

Counting those with 100 or more appearances, that falls to 149.

Doing a manual classification of news sites, there are 146.

Even at a modest annual subscription rate of $50/year ($1/week per source), that's a $7,300 subscription budget just to be able to discuss what's appearing on Hacker News from mainstream news sources.

Oh, and if you want per-article access at, say, $0.50 per article, that's $5,475 to read a year's worth of HN front-page submissions (10,950 articles/year), and that is just based on what is captured on the archive. In practice far more articles will appear, if only briefly, on the front page each day.
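The arithmetic above checks out as a quick back-of-the-envelope (the $50/year and $0.50/article rates are the assumptions stated in the comment, not real prices):

```python
# Sanity-check of the subscription-cost figures quoted above.

FRONT_PAGE_SLOTS_PER_DAY = 30   # HN front-page stories per day
NEWS_SITES = 146                # manually classified news sites with 100+ appearances
ANNUAL_SUB = 50                 # assumed $/year per source ($1/week)
PER_ARTICLE = 0.50              # assumed $ per article

articles_per_year = FRONT_PAGE_SLOTS_PER_DAY * 365
subscription_budget = NEWS_SITES * ANNUAL_SUB
per_article_cost = articles_per_year * PER_ARTICLE

print(articles_per_year)    # 10950
print(subscription_budget)  # 7300
print(per_article_cost)     # 5475.0
```

Either way you slice it, per-source or per-article, the cost of full coverage runs into the thousands of dollars per year.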

Which is among the reasons I find the "just subscribe" argument untenable. Some sort of bundling payment arrangement is required.

<https://news.ycombinator.com/item?id=36832354>

________________________________

Notes:

1. "A Year on Hacker News" (2022) <https://whaly.io/posts/hacker-news-2021-retrospective>


See "The closest I’ve come to a solution" in TFA.

Thoughts?


I don't know how you find agreement on what our taxes have to pay for, given how polarized it all is now. I'd much rather a system where my browser anonymously pays a nickel or something to read what I want.

We've had three decades of micropayments proposals, none of which have worked.[1] Traditionally, publishers have strongly trended toward aggregated rather than disaggregated payment models: you pay for a full issue of a publication at the newsstand, or for a year-long subscription to a print publication, or, these days, to online publications and streaming services, should you choose to do so.

Superbundling (e.g., a single fee providing universal access), a universal content tax, and/or a fee assessed by ISPs (if at all possible indexed to typical household wealth within an area) strike me as far more tractable options.

Among the elements of a tax-based system is that there are in fact multiple taxing jurisdictions, and access might be spread amongst them, and through multiple mechanisms. Public libraries already exhibit some of this, with funding being provided at the local (city/county), state, and federal levels, as well as through other aggregations such as regional library coalitions, academic institutions and districts (particularly community and state postsecondary institutions), and others.[2]

There's also the option of indirect support, which is what mechanisms such as mandatory legal notices entailed: a jurisdiction could require public posting of various sorts (fictitious names, legal settlements and actions, etc.), which effectively requires private parties to pay for the upkeep of a newspaper. Similarly, discount "book rate" postage was a distribution subsidy offered within the U.S. to publishers of not only books but newspapers and magazines. That's less an issue given the Internet, but the spirit of that idea might be adopted.

The idea of local papers which can rely on some level of multi-jurisdictional tax funding, perhaps some charitable or foundational support, advertising, subscriptions, obligatory notices, bespoke research, and other funding sources would give multiple independent funding channels which would be difficult to choke off entirely. That seems far healthier than the present system.

________________________________

Notes:

1. My own argument, and numerous citations to both pro and con views, is "Repudiation as the micropayments killer feature (Not)" <https://web.archive.org/web/20230606004820/https://old.reddi...>, based on a six-year-old proposal from David Brin which has gone ... precisely nowhere.

2. Yes, I'm aware of certain issues concerning library texts in recent years within the U.S. I'd suggest that the fact that those debates are ongoing rather than settled either way means that overt control isn't completely straightforward.


There should be an intermediate syndicate that charges me micropayments for every article I choose to read, then charges one lump sum to my credit card at the end of the month. And also remits payment to each newspaper or Website.

Why not simply an all-you-can-eat time-based payment (weekly, monthly, annually), distributed on the basis of the sources you've read, preferably with some true-cost-of-production adjustment (e.g., algorithmic or AI hash doesn't get compensated on the same basis as true shoe-leather / long-distance-travel journalism)?

You fill a bucket. It's drained based on what you read/view/listen to. If you access nothing, it's equitably shared on some global allocation basis instead, since even if you read nothing you still benefit from the positive externality of the informed polity which journalism creates.

This ensures a stable funding basis: you have a predictable cost, you can direct the allocation based on your own access patterns, and the common weal benefits even if you don't utilise the resource.

Note that much of this is the same as an ad-funded media, excepting that you can't direct spending, the allocations are far less public-benefit oriented, and the costs per household are far higher: roughly $700 per person for advanced countries (North America, Europe, Japan, Australia/NZ), based on a $700 billion spend and roughly 1 billion population. What we have now costs an immense amount and is failing media and journalism badly.
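The bucket model described above can be sketched in a few lines. Everything here is hypothetical illustration (the outlet names, weights, and fee are made up; the allocation rule is my reading of the scheme, not a reference design):

```python
# Hypothetical sketch of the "bucket" model: a flat periodic fee is split
# across outlets in proportion to reads, weighted by a cost-of-production
# factor; if nothing is read, the fee falls back to a global allocation.

def allocate(fee, reads, weights, global_share):
    """fee: periodic payment; reads: {outlet: article count};
    weights: {outlet: production-cost multiplier};
    global_share: fallback {outlet: fraction} summing to 1."""
    weighted = {o: reads.get(o, 0) * weights.get(o, 1.0) for o in weights}
    total = sum(weighted.values())
    if total == 0:  # read nothing: fee is shared on the global basis
        return {o: fee * s for o, s in global_share.items()}
    return {o: fee * w / total for o, w in weighted.items()}

payout = allocate(
    fee=10.0,
    reads={"local_paper": 8, "wire_agg": 12},
    weights={"local_paper": 2.0, "wire_agg": 0.5},  # shoe-leather vs. aggregation
    global_share={"local_paper": 0.5, "wire_agg": 0.5},
)
# local_paper gets 10 * 16/22 (~7.27) despite fewer reads, because of the
# true-cost-of-production weighting; wire_agg gets the remaining ~2.73.
```

The weighting step is what distinguishes this from a plain read-count split: fewer reads of expensive journalism can still draw a larger share of the bucket.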


Yeah, I think what I'm describing would fall under a thing like MoviePass but for NewsPass.

Before people take this at pure headline value:

"uBlock Origin fans can rest at ease since a new and improved version is already available — uBlock Origin Lite. It's worth noting that while the new app ships with similar features to the original version, including core ad-blocking features, it doesn't support dynamic filters for blocking scriptlet injection. The Lite version's capabilities are relatively limited due to its compliance with the Manifest V3 framework threshold."


“Improved version…capabilities are relatively limited”. Who writes this stuff? The Lite version is only “improved” in the sense that it works with Manifest v3. It’s certainly not a replacement for the original, and the original isn’t going anywhere.

The word "improved" seems to be misused here.

Google's advertising revenue has been improved by this Lite version of uBlock Origin.

Through Google's lens, the Internet has been improved by Manifest V3.

User Hostility has been strongly improved.

The ongoing rate of enshittification-of-all-the-things has been improved by Google's actions for a number of years.


Because there is usually an effort in HN to post non-paywalled links.


Apologies for my ignorance; can you tell me more about the issues with Gladwell?


Gladwell is known for citing junk science and twisting reality by adding his own unfounded interpretations to research he's basing his theories on. There's a host of criticisms of his work. [Wikipedia](https://en.m.wikipedia.org/wiki/Malcolm_Gladwell#Reception) is a good starting point.


Great, thanks for this link, will dig in more.


With this one it's tough to know where to start. Hearing him cover an issue near and dear to your heart is infuriating. In fact, I think almost all of his podcast ideas from the first season were terrible.

Mostly, he gets so into a story that it becomes about the narrative itself. He sounds like a crazy child with a made-up theory, trying to force it to work.

It's hard to say from memory, but it was multiple episodes where he defended corporations and authority figures from legit criticisms, based on nonsense.

Just to be clear, he doesn't literally say "and there, so I proved them innocent", but he might as well; these are just weird one-sided crusades (nominally about David, but defending the Goliath as misunderstood).

He's really not far from saying that because people sometimes remember things poorly, Brian Williams definitely didn't lie (in a situation where many people do lie).

He basically gave the auto industry a free pass, because it's challenging to prove an exact field issue.


This long-form article feels like it should be a short-form article.

tl;dr:

- PFAS/forever chemicals have a lot of science saying they're harmful to health

- Government regulation is the best long term solution

- Short-term solutions: buy non-PFAS pans, take your own to-go containers, use water filters that remove PFAS


Part 1 (the science) is best left long-form.

That's because it is actually quite a complex topic. It is not like many of the recognized harmful substances that have very obvious effects: lead poisoning has been known since antiquity, smoking can increase the risks of cancer several times over, asbestos causes diseases that are otherwise extremely rare, etc.

Here we are talking about things like a 15% increase in the risk of diabetes, with a confidence interval that still leaves chance as a possibility. Acute poisoning is almost unheard of, except with free radicals, making it hard to evaluate toxicity as if it were a typical poison. The article mentions how scientists are struggling, especially when we consider that there is not a single PFAS. The article also links to several studies, something I wish to see more often; traditional newspapers often don't link a single study.

So yeah, in essence a lot of science is saying PFAS are harmful to health, but the real take is that we don't really know, and it doesn't look good. And because they are "forever chemicals", it may be a good idea to limit usage right now as a precaution, even if we don't know much yet.


Additional note on the ‘there is not one single PFAS’ - there are several million distinct PFAS compounds.

Some are in wide use, some unclear, some unused. With a wide variety of elements in the various compounds.

It is like saying ‘plastic bad’. Which plastic? What is the definition of bad?

Yeah don’t eat plastics, but the difference between styrene, PTFE, and HDPE on any axis one would care to define is crazy big.


Avoid plastics and choose glass or porcelain that doesn't come from China (they have high lead content, especially in their porcelain glazes). For cookware, stainless steel or cast iron; try to avoid coatings, especially Teflon.


> Most learners, the world over, are not self-motivated

This is the exact opposite of the conclusion and methodology of Maria Montessori (and the schools bearing her name). Children are naturally curious and want to learn, but they may not want to use a poor education system designed to assign grades with a hyper-specific focus.


the only people i knew growing up who went to montessori were very affluent white kids with hyper involved parents.

even when given access to school choice, less affluent and minority parents do not choose montessori and there is absolutely a reason for that.


Is the reason because montessori schools are often high-priced? Because that's definitely true (and you can easily say it's a privilege to attend one).

That doesn't, however, mean the methodologies don't work or don't apply. You can study the methods in any of her ~20ish books, or the more modernized recaps of them.


Great work!

2 questions:

1) Do you support RAW images?

2) Do you have larger paid plans?


Thank you!

1. Images that are natively previewable are supported in the browser at the moment. RAW is unsupported for previews, but should still be able to be uploaded for file sharing.

2. I haven't seen any demand for larger paid plans yet. If this arises, I'll be sure to explore new pricing.


This is such a click-baity headline; the subheading is "Blame Santa Clara’s inventory and relative affordability—not the company’s stock run-up—for creating such a competitive market".

Essentially "let's use nVidia's name to get attention for this unrelated issue."


Was about to say the exact same thing. Housing constraints are not limited to "near Nvidia HQ". It's the entire Bay Area.

For those not from the area, it's city after city after city in the Bay Area. 60 miles up the peninsula (from San Jose to San Francisco), and 60 miles up the East Bay, the vast majority of which has inventory and affordability challenges.


That reminds me, I once heard a talk by a journalist at a hacker conference... I think it was HOPE, a long time ago. He said that journalism is a good profession for people with a disposition for hacking. I wonder if that same trickiness hackers use to make/break great software is at play in headline-making...


Feedback:

- This is great to link to breathing

- After reading multiple breathing books (Breath by James Nestor, Outlive by Peter Attia, Oxygen Advantage, etc.), there's a lot of evidence for breathing slower, specifically 5.5 seconds in and 5.5 seconds out. Making the dot match (or come closer to) that rate could deliver real physical benefit to people breathing along with it

- Alternatively, if you don't want that as the default, make it a setting so someone (like myself) can adjust it.
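The 5.5-second cadence suggested above works out to roughly 5.5 breaths per minute; a quick sketch (the cadence comes from the books cited, the code is just illustrative arithmetic):

```python
# Slow-breathing cadence discussed in Breath / Oxygen Advantage:
# 5.5 seconds in, 5.5 seconds out.
INHALE_S = 5.5
EXHALE_S = 5.5

cycle_s = INHALE_S + EXHALE_S        # one full breath: 11 seconds
breaths_per_minute = 60 / cycle_s    # ~5.45 breaths/min

print(round(breaths_per_minute, 2))  # 5.45
```

So an animation cycling once every 11 seconds would put users right at the rate those books recommend.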

