Hacker News
Is GPT-4 Worth the Subscription? Here’s What You Should Know (wired.com)
33 points by paulpauper on April 13, 2023 | 51 comments



If you're doing software development, then https://www.phind.com/ with the "Expert" checkbox uses GPT-4, is better than the GPT-4 subscription, and is free. I just learned about this on Hacker News yesterday: https://news.ycombinator.com/item?id=34884338

I just did a little test involving a question about the Link component in Next.js, and Phind is aware of the major API change Next.js recently made to that component, whereas of course ChatGPT/GPT-4 isn't, due to the training-data cutoff.


I did a single test with phind.com.

It literally just copy-pasted code from the top blog result it found in the sidebar, and it didn't solve my question in the least.

GPT-4 on the other hand was correct on the first try.

The question was about controlling Home Assistant remotely with Python. Phind's solution included RedisMQ just because the blogger had set one up as their home automation's main bus. [0]

GPT-4 just gave me code showing how to use the official homeassistant Python package to control entities.

The correct code is literally 3 lines:

    from homeassistant.remote import API, call_service
    api = API('localhost', '<api_password>')
    call_service(api, 'light', 'turn_on', {'entity_id': 'light.kitchen'})

[0] https://www.phind.com/search?q=how+can+I+control+lights+on+h...
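For what it's worth, the legacy homeassistant.remote module has since been removed from Home Assistant; the current route is its REST API with a long-lived access token. A minimal stdlib sketch (the base URL and token are placeholders):

```python
import json
import urllib.request

def service_url(base_url: str, domain: str, service: str) -> str:
    # Home Assistant exposes service calls at /api/services/<domain>/<service>
    return f"{base_url}/api/services/{domain}/{service}"

def call_service(base_url: str, token: str, domain: str, service: str, data: dict):
    # POSTs the service call; needs a running instance and a long-lived access token
    req = urllib.request.Request(
        service_url(base_url, domain, service),
        data=json.dumps(data).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# call_service('http://localhost:8123', '<token>', 'light', 'turn_on',
#              {'entity_id': 'light.kitchen'})
```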


Thanks for the heads up, will definitely play with this.


Seems to be down :( I didn't get an answer, but I see they cite their sources for a response, which is very cool.


The Wired article was more banal than the concept of overnight oats it includes. And there are newsletter ad breaks!

I feel dirty having clicked the link; somehow it's worse than the "that paragraph you just read was written by AI!" trope.

For those who didn't click: it's just 3 awful samples that no one would ever use ChatGPT for, and a note that the current unguaranteed access for Plus subscribers is 25 queries per 3 hours.

I'm not being hyperbolic: GPT-4 would produce a better article, although the limiting factor is the lack of a creative premise, and that'd be needed either way. It reassures me to think that AI won't (yet) replace good writers. Just the lazy ones.


Now, that is a beautiful tl;dr.

Also, in the first sentence, we get a glimpse of the writer's life: "When I logged into OpenAI’s website on Monday morning to continue testing...". I wonder if it rained.

I dislike this (very popular) way of writing; I didn't make it past the first sentence.


I pay for it because I can afford it, but honestly I haven't been super impressed.

GPT-4 is definitely better at some tasks and moderately less prone to hallucination, but it botched the first couple of scripting tasks I gave it, so it didn't exactly wow me. The main difference I've observed is that it gives wordier responses than GPT-3.5.

That said, I might have a different opinion if usage of GPT-4 weren't so severely rate-limited. Currently, you can only send 25 requests every 3 hours, so it's hard to experiment with it as freely as I want.


The only task GPT-4 "botched" for me was trying to write an IRC bot with Rust.

12 attempts and it still couldn't connect. After 6 attempts (of me pasting back compilation errors) it compiled, but didn't run. The last 6 attempts were me trying to get Rust to ignore TLS errors so it could connect. I gave up after that.

I had a set of ancient (10+ year old) shell scripts and Python controlling Deluge. I wanted to move to Transmission and still retain the same functionality.

Except for using an API from 2021, the code it gave me was 100% correct and properly named and even commented. I had to spend 5 minutes correcting the import and changing a few named variables (user -> username type stuff) and it just worked.


Use the API if you want some more time with it. You’ll pay for the tokens you use, but twenty bucks gets you some hours of experimentation depending on how fast you’re going.
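A minimal sketch of that route, using the 2023-era openai Python package (the model name and prompt are placeholders; assumes an OPENAI_API_KEY in the environment):

```python
import os

def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    # Payload shape for the chat completions endpoint
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def ask(prompt: str) -> str:
    import openai  # pip install openai; the 2023-era ChatCompletion interface
    openai.api_key = os.environ["OPENAI_API_KEY"]
    resp = openai.ChatCompletion.create(**build_chat_request(prompt))
    return resp["choices"][0]["message"]["content"]
```

Note that GPT-4 API access required a separate waitlist at the time; gpt-3.5-turbo worked for any paid account.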


You first need to get access to the API, which is not available to everyone.


Have you requested it? In my experience it takes a couple of days.


It's taking a lot longer than that for me.


Over 2 weeks for me; still no access.


Do you have any references for how I can do this?


Not the person you asked, but the OpenAI playground is a great place to start messing with their API. I wrote this to help: https://www.inputoutput.ai/tutorials/openai-playground-guide (note: the website is still a work in progress, just got it going, bear with me haha)

However, you need access to the GPT-4 API, which you have to request separately. So even if you have GPT-4 with ChatGPT Plus, you might not have access to GPT-4 through the playground/API.


Use a wrapper like TypingMind?


Not to be annoying, but have you tried asking ChatGPT?


One important lesson I learned is to never ask ChatGPT about itself; it only seems to make things up.


The 25 requests isn't too much of a hindrance for me, but the user experience around its long response times definitely is. It needs notifications, or at least to wait and then send the text all at once rather than drip-feeding it.

Although if they did that then I'd probably hit the 25 request limit a lot more.


> The main difference I've observed is that it gives wordier responses than GPT-3.5.

You pay per token, and they're way more expensive with GPT-4. Coincidence?
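Back-of-the-envelope with OpenAI's 2023 list prices (gpt-3.5-turbo at a flat $0.002 per 1K tokens; the 8K-context GPT-4 at $0.03 prompt / $0.06 completion), showing how much a wordier answer costs:

```python
GPT35_TURBO = 0.002                         # flat rate per 1K tokens, USD
GPT4_PROMPT, GPT4_COMPLETION = 0.03, 0.06   # 8K-context GPT-4 rates, USD

def cost(prompt_toks: int, completion_toks: int,
         prompt_rate: float, completion_rate: float) -> float:
    # Price in USD for one request; tokens are billed per 1K
    return (prompt_toks * prompt_rate + completion_toks * completion_rate) / 1000

# A 500-token prompt with a 1,000-token answer:
c35 = cost(500, 1000, GPT35_TURBO, GPT35_TURBO)     # $0.003
c4 = cost(500, 1000, GPT4_PROMPT, GPT4_COMPLETION)  # $0.075, 25x more
```

So at API rates, every extra completion token costs 30x more on GPT-4 than on gpt-3.5-turbo.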


This is such a strange question. $20/month is a mouse fart of an expense, and it buys you access to what is probably the most important tech development in decades. And even when you run out of GPT-4 time, it gets you full access to the previous GPT-3.5, pre-turbo pruning. I haven't had a single busy-please-wait time since I subbed, and that alone has paid for years of the $20/month price.

[edit] I will add that even though the article is barely more than a few (rather oddly chosen, IMO) prompts, it is useful for folks to be able to see 3.5/4 side by side.


$20 is a mouse fart if you have a real, well paying job. If you’re a teenager or a poorly compensated worker (or just outside the rich West), you’re SOL.


that's when you get together with multiple teenagers to share an account


It's a ridiculous question.

But people are dumb. Wait 15 years or something and the question will be something like, "Should you pay $100 per month for a 1000 IQ exocortex that is fully integrated with on-demand AI experts in every field and seamlessly inserts useful thoughts and abilities into your daily life?"


You're right that this article's question is ridiculous but I worry about transhumanism and the abuse such technology will bring.

I'd rather keep my exocortex on the outside before getting subtle advertisements as thoughts and genuinely believing they are my own desires.


When people are choosing between heating and eating, and a lot of folks are struggling, "$20/month is a mouse fart of an expense" is evidently coming from a very privileged place. Sure, it may be to you, and your comment may indeed be well-intentioned, but it's clear that you didn't pause long enough to consider that demographics, ideas of value, and disposable income vary drastically.


I’m absolutely amazed that there are people on HN who make their living in tech and won’t spring for the plus version.

At the very least, you owe it to your basic curiosity if not your career.


Can you give some examples of utility provided by it for someone competent who works in tech?


At the very least, it’s the fastest way to pick up a new framework or package you’re not familiar with. You can ask questions about the framework instead of digging through documentation.

A massive productivity boost at the very least.


I strongly disagree.

I'll give you a good example. We recently migrated away from Plesk with Apache to a server running NGINX. I wanted to get Active Collab running on the new server, so I fed the documentation for installing Active Collab into it, which included the URL rewrite lines for the NGINX configuration.

What it presented to me on every attempt, and I tried this for days, was complete gibberish that hallucinated all kinds of things that didn't exist, and it didn't even run as an NGINX configuration.

All that time wasted versus just writing an NGINX configuration myself. Who is seriously going to trust this to do anything high-level if it can't even write a basic NGINX config? If you don't know the framework, then your Dunning-Kruger is in full effect: you don't even know what you don't know, and you blindly trust the output because it resembles something similar to what you might expect.

I'm thoroughly unimpressed. The boilerplate tooling of twenty years ago didn't even use AI and was vastly superior.

Another example: asking it what a product is, e.g., the tfpro P38EXMK2. It calls it a microphone preamplifier instead of a compressor, despite there being no preamps in it; it also hallucinates controls that don't exist and misses the entire point of the product.

I also asked for step-by-step instructions on exporting data from one program to another, and it invoked all kinds of menu options that don't exist, and didn't take into consideration file incompatibilities in terms of imports.

Due to the immense hype, I have spent a lot of time trying to get something rational and sensible out of it that's actually a good use of time, but alas I have found nothing. IMO, it's the next crypto. All hype and very little actual utility for serious users. It's a cool toy for people who don't know what they're doing, but I'm sure that with a little experience, they would understand the sentiment.


Did you use GPT-4 or GPT-3.5?

Here's what GPT-4 said about tfpro P38EXMK2:

Q. "What product is tfpro P38EXMK2"

A: "The TFPRO P38EXMK2 is a high-end, stereo compressor and limiter designed for professional audio applications. It is manufactured by TFPRO, a British company founded by Ted Fletcher, an audio engineer with extensive experience in designing audio equipment.

The P38EXMK2 is an updated version of the original P38 compressor, offering improved audio quality and additional features. This unit is typically used in recording studios, mastering studios, and live sound applications for its ability to provide smooth and transparent dynamic control over audio signals. It can be used on a variety of sources, such as vocals, drums, guitars, and full mixes, to help achieve a polished and professional sound."

GPT-3.5 did call it a microphone. Maybe you're using GPT-3.5?


I spent quite a bit of time (probably half an hour) correcting its description of the compressor unit; would those corrections be reflected now that it's been "trained", if you ask it again?


No. The knowledge cutoff is late 2021, and feedback isn't used directly in training.


> pre-turbo pruning

I hadn’t heard this part, can you expand?


The standard GPT-3.5 version used in ChatGPT is gpt-3.5-turbo. It is a pruned (trimmed-down) version of GPT-3.5, built for faster response times and lower computational cost.


Yeah, my understanding was that 3.5-turbo was always the model behind ChatGPT. There is no 3.5 non-turbo model in OpenAI’s playground, at least, or in their documentation: https://platform.openai.com/docs/models/gpt-3-5


It is, now. Their first paid chat model is what is now called 'Legacy' in the paid chat model selection pulldown. It performs noticeably better than the turbo version in all cases except for generation speed. Not vastly. But enough of a difference to make it worth mentioning.


I think it’s called davinci


GPT-4 is absolutely 100% worth the subscription. It’s paid itself back in the hours of time it has saved me in looking up new frameworks and code.

I wanted to create a pretty complex form with autofill. It suggested react select, helped me customize the styling and deploy it within minutes. Normally, I would have taken an hour just to go through the documentation and the somewhat complicated styling.


I am happy enough paying $20/month. GPT-4 is better. That said, I usually use the APIs (I have written 3 books that have OpenAI API examples; my own coding projects use GPT-3.5-turbo with Emacs bindings; and I also pay GitHub for Copilot with Emacs integration).

I have a larger personal project using GPT-4 API, LangChain, and LlamaIndex that I am currently obsessed with.


"GPT-4 excels at tasks that require advanced reasoning, complex instruction understanding, and more creativity." It seems like the new model performs well in standardized situations.


I think it saves me about 30 min/day. That's worth more than $20/month to me.


Early access to new features or services justifies the monthly expense in my case.

Arguably, the ChatGPT Plus service fee helps them create a large pool of developers and early adopters, which can serve as a more robust sounding board than free-service users can or will.


I am thinking ... if you're not using it for analysis of PDFs, websites, and the like, then use ChatGPT or MyGPT. MyGPT because it has the premium features of ChatGPT.


Why not use MyGPT with an API key for free instead of the subscription?


Classic wired article. Posits a question and doesn't even answer it.


Why not just use Bing AI for $0/month?


What comparable utility or scenario do you envision using the likes of Bing for?


Wordpooper-4


I'm on a paid account, but not the $20/month one. I've paid less than $1 worth of tokens so far, and I definitely get some value out of it. So it's very cheap to use.

Getting a paid account allows you to create an API key and use it with things like CodeGPT in VS Code and IntelliJ, and other places. There's a GPT for Docs and Slides as well. And you can of course build your own tooling. And you can configure all of these to use GPT-4.

So, definitely worth it to me.

CodeGPT is pretty nice. You can get it to review code, suggest improvements/optimizations, spot potential bugs, etc. And it can write code, generate tests for existing code, etc.

The hardest part of using GPT is 1) remembering to use it, and 2) coming up with good questions to ask of it. The better your questions, the more useful it gets. I've caught myself using Google a couple of times to figure stuff out before realizing "I could just ask ChatGPT...". Mostly, that works spectacularly well. You can save yourself a lot of time this way.

I've done some fun stuff with it:

- I needed German translations for a Mozilla Fluent localization file. It understands the format and generated hundreds of translations for me in a few minutes. And so far, I've not found anything that is wrong.

- I needed a bit of Kotlin code to encrypt plain text using AES. It generated working code for me. I asked it about improvements, and it suggested using an IV and then added the code for that. I asked it to generate a test, and it did. And it passed. Having done similar code in the past in Java, all this looks perfectly legit to me.

- It's aware of geospatial relations (cities, countries, major landmarks), and you can ask it questions about that. And get it to answer in GeoJSON format. Or calculate surface areas. I'm not sure I'm ready to trust all of it, but at face value it seems to be doing reasonable things.

- I've worked on spicing up our sales pitch a bit. One of our customers started talking about KPIs, so I selected some text from our white paper and asked ChatGPT to suggest some KPIs based on it and how they would benefit our customer's context. Then for giggles I asked for fifty of them ... and it did. All of them were reasonable. A quarter of them were really good.

- We had a customer in the air-carrier sector, and we needed to understand their business challenges in a hurry. So we asked ChatGPT to list key challenges and then grilled it a bit on things like cost breakdowns, details about fuel cost, and other factors. Most of this stuff comes from public reports, but basically we were asking it the same questions we would have asked our customer ... and getting really informative answers.

- I went to Rotterdam over Easter, which is a city I had not visited properly. So I asked it for some suggestions for stuff to do. And got lots of usable suggestions.

So, you can code with it, explore complicated topics with it, use it to sharpen pitches, outsource translation work to it, and ask it for directions, and get it to play tourist guide.

My bill for March was $0.90. Amazing value. Absolute no-brainer.


No



