
What does gpt dependence look like for people that have non-programming office jobs where they just email all day?

Do they just copy paste emails and office menus into an agent and send responses? Do they actually make any decisions themselves, or do they just go with whatever?




I saw someone recently explaining it this way: we now have tools that inflate keywords and prompts into fancy page-long text, which we then feed to tools that shrink that text back into short summaries.

So I guess office jobs are now grinding machines: one side inflates keywords into text, the other side extracts the keywords back out, and in the middle an enormous amount of resources is wasted to satisfy some manager's metrics.


We spent years developing smarter compression tools, but AI has taught us that people actually want inflation tools.


Smells like an opportunity for a new product to serve all sides: let people send prompts, which are then inflated locally on the receiver's side with custom prompt modifications before being shrunk again. That way you get fancy small text on the wire, the sender doesn't have to waste money on AI, and the receiver can personalize the inflation, or save money by just reading the prompt directly. And finally, some company can make money from this. Win-win-win-win, I would say.
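Half joking, but the plumbing would be trivial. A toy sketch of the idea, where `inflate()` and `summarize()` are stand-ins for local model calls and the "wire format" is just the raw prompt plus the sender's style hint (all names here are made up):

```python
# Toy sketch of the prompt-over-the-wire idea. inflate() and summarize()
# are stand-ins for local LLM calls; nothing here hits a paid API.

def inflate(prompt: str, style: str) -> str:
    # Receiver-side expansion: a local model would wrap the terse prompt
    # in polite long-form text in the requested style.
    return f"[{style} boilerplate wrapped around: {prompt}]"

def summarize(text: str) -> str:
    # Receiver-side compression: shrink the inflated text back down.
    return text.split(": ", 1)[-1].rstrip("]")

message = {"prompt": "budget approved, ship Friday", "style": "formal"}

# Option 1: receiver just reads the prompt directly (free).
print(message["prompt"])

# Option 2: receiver inflates locally, then shrinks again (the full farce).
print(summarize(inflate(message["prompt"], message["style"])))
```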


>What does gpt dependence look like for people that have non-programming office jobs where they just email all day?

There isn't any such dependency (yet).

Despite a strong corporate push, I'd say the average person I work with uses it maybe once a week, usually for research or summarization.

The inroads LLMs are making in programming aren't translating cleanly to other office jobs. I think it's largely because other areas don't have an equivalent source of training data the way programming has GitHub.

It'll no doubt come, but for now programmers seem to have mostly automated themselves out of a job, not others, from what I can tell.

>Do they actually make any decisions themselves

There is little to no GPT decision-making happening, despite all the media chatter about AI CEOs and similar bullshit. Inspiration for brainstorming is about as close as it gets.


would you know if they were dependent?

my assumption was that these office people were secretly using it to automate dull parts of their work without the approval or knowledge of their coworkers


>would you know if they were dependent?

Yup, because I'm one of these office workers: financial controller stuff in the PE space, with coding as a hobby and an interest in the AI space.

There just isn't a direct equivalent that captures the knowledge in a machine-readable way the way a codebase does. It's all relationships, phone calls, judgement calls, institutional knowledge, meetings, coordination, navigating egos and personal agendas, etc. There is nothing there you can copy-paste into an LLM the way you can with, say, a compile error.

Even the accounting parts that conventional wisdom says should be susceptible to this... it's just not anywhere close to useful yet. Think about how these LLMs routinely fail tests like "is 9.90 or 9.11 bigger" or miscount how many Rs are in "strawberry". Do you really want to hand decisions about large amounts of money to that, and maybe send a couple million to the wrong person because the LLM hallucinated a digit? It's just not a thing.
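For what it's worth, part of why the 9.90 vs 9.11 comparison trips models up may be that the same two numbers order differently as decimals vs. version numbers. A quick Python illustration:

```python
# The same two tokens compare differently as decimals vs. version numbers,
# which is roughly the ambiguity people suspect behind this failure mode.
print(9.11 > 9.9)        # False: as decimals, 9.90 is bigger
print((9, 11) > (9, 9))  # True: as version numbers, 9.11 is "bigger"

# And the strawberry count, which tokenization makes surprisingly hard for LLMs:
print("strawberry".count("r"))  # 3
```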

Maybe with some breakthroughs on hallucinations it could be, but we all know that's a hard nut to crack.

>automate dull parts

I've been trying hard to find applications, given my enthusiasm for coding & AI. No luck yet. Even things where I thought this would do super well, like digging through thousands of emails via RAG+LLM, are proving oddly mediocre. Maybe that's an implementation flaw, not sure.
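If anyone wants to poke at the same idea, here's a minimal sketch of the RAG-over-emails setup I mean. The email snippets are made up, `ask_llm()` is a placeholder for whatever model you'd actually call, and sentence-transformers is just one convenient local embedding option:

```python
# Minimal RAG-over-emails sketch (assumes: pip install sentence-transformers numpy).
# The emails and ask_llm() are placeholders; swap in your own corpus and model.
import numpy as np
from sentence_transformers import SentenceTransformer

emails = [
    "Q3 close: intercompany eliminations still pending sign-off.",
    "Wire instructions updated for vendor ACME, see attached.",
    "Board deck draft v2, comments due Friday.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model
email_vecs = model.encode(emails, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k emails most similar to the query (cosine similarity)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = email_vecs @ q  # dot product == cosine on normalized vectors
    return [emails[i] for i in np.argsort(scores)[::-1][:k]]

def ask_llm(question: str, context: list[str]) -> str:
    # Placeholder: a real pipeline would call an LLM with the retrieved
    # emails stuffed into the prompt. Here we just show the prompt.
    return "CONTEXT:\n" + "\n".join(context) + f"\n\nQUESTION: {question}"

print(ask_llm("What changed about the ACME payment?",
              retrieve("ACME wire instructions")))
```

The retrieval step itself is the easy part; the mediocrity I'm describing shows up in what the model does with what it retrieves.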



