Hacker News new | past | comments | ask | show | jobs | submit login

Oh, it has. The scale at which we share information has never been greater.

Concrete example:

15 or 20 years ago, we tracked user stories, the status of the sprint, etc., with sticky notes on a wall. We didn't have to record a lot of information on them because they only needed to track status at a high level. Details were communicated through conversations. That did mean that a lot of that information was tribal knowledge, but that was actually fine, because it was of ephemeral value anyway. Once the work was done, we'd throw the sticky notes in the trash can and forget the tribal knowledge. Reporting out happened at a much higher level. We'd report the status of projects in broad strokes by saying which big-picture features were done and which were in progress. We'd fill operations in on the changes by telling them how the behavior of the system was changing, and then let them ask questions.

Nowadays, we put it all in Jira. Jira tickets are extremely detailed. Jira tickets live forever. Jira tickets have workflow rules and templates and policies that must be complied with. Jira rules make you think about how to express what you're doing in a cookie-cutter template, even when the template doesn't fit what you're actually doing. Jira boards generate reports and dashboards that tell outside stakeholders what's happening in terms of tickets and force them to ask for help understanding what it means, almost like you're giving them a list of parameters for Bézier curves when what they really wanted was a picture. Jira tickets have cross-referencing features, which create a need to do all the manual data entry to cross-reference things. Jira tickets can be referenced from commits and pull requests, which means that understanding what changed now means clicking through all these piles of forever-information and reading all that extra documentation just to understand what "resolves ISSUE-25374" means, when a simple "Fix divide-by-zero when shopping cart is empty" in the commit log would have done nicely. And so on.

We communicate so much more these days. Because we can, because we have all this communication technology to facilitate all that extra communication. What we forgot is that, while computers can process information at an ever faster pace, the information processing hardware inside our skulls has remained largely unchanged.




I think that highlights the issue I'm poking at. "Good communication" doesn't just mean a firehose of information at your fingertips. It means getting the right amount of information at the right time. Developing systems that do the latter is much harder than the former, but both get the same sales pitch.


This is also where I really dislike a lot of this more recent push toward automating communication.

One person deciding what needs to be said, to whom, and when, can have a LOT of leverage in the productivity department, by reducing the time that tens or even hundreds or thousands of other people lose to coping with the fire hose.

Microsoft Copilot has been yet another downgrade in this department. Since it got adopted at my job, I've seen a lot of human-written 3-sentence updates get replaced with 3-page Copilot-generated summaries that take 10 minutes to digest instead of 10 seconds.


At my company we are aggressively rolling out policies to forbid the use of AI. I'm one of the bigger folks behind it. I just see no benefits. I have no desire to debug AI generated code, I have no desire to read pages and pages of AI generated fluff, I have no desire to look at AI generated images. Either put the work in or don't.


If you use AI like a quick answer machine, or quick example machine, they all outdo Google by a large margin.

The friction of moving between, and knitting together, different systems and languages that I don't use frequently enough to be fluent in has been lowered by an order of magnitude or two, because small knowledge gaps get filled the instant I need them to be.

The same with getting a basic understanding (or a basic idea) about almost anything.

My AI log documents many stupid questions. I have no inhibitions. It is a joy.


> If you use AI like a quick answer machine, or quick example machine, they all outdo Google by a large margin.

I mean, A) hallucinations still happen, and B) Google sucks anyway. I don't know of anyone at the company still using Google; we're largely an engineering outfit, and we all watched Google's search slide into uselessness.


I find that the code I get from Copilot Chat frequently fails to do exactly what I asked, but it almost always at least hits on the portions of a library that I need to use to solve the problem, and gets me to that result much more quickly than most other ways of searching do these days.


Hallucination (or, more correctly labelled, confabulation) is a property of human beings as well. We fill in memories, sometimes inaccurately, because they are not precise.

More to the point, once you know that, having a search engine for ideas that can flexibly discuss them is a tremendous and unprecedented boon.

A new tool, many (many) times better than Google ever was for many ordinary, sometimes extraordinary, tasks. I don't understand this new glass-half-empty viewpoint toward what is a gigantic carafe of water. Yes, it isn't perfect! It is still incredibly useful.


> Hallucination (or, more correctly labelled, confabulation) is a property of human beings as well.

Yeah, and if, when I asked a coworker about a thing, he replied with flagrantly wrong bullshit and then doubled down when criticized, I wouldn't ask him anything after that either.


I will say I have warmed to GitHub Copilot's chat feature. It's a great way to look up information and get answers to straightforward questions. It feels similarly productive to how well just Googling for information was back in the 2010s, before Google went full content farm.


Do you also ban Google?

Working with AI is like working with Google. You shouldn't be banning AI; you should be banning copy-pasting AI slop.


Pretty funny to see the solution to the <industry x> paperwork morass is twofold: an AI to generate paperwork and an AI to understand it.

Can we just... not do that at all?


We can’t. Paperwork exists to be able to transfer knowledge and liability. It isn’t meant for you and it is mostly cost for your company. It’s for lawyers, insurers, investigators, auditors, your future replacement, etc.


Ha, no, instead we're eventually going to have AI workers earning AI salaries to spend on AI products. Then AI governments, and eventually AI wars...




