Personal experience: I'm using GPT-4 for writing code, especially in Python. After using Bard today, I feel Bard is doing quite well considering it's free. I will keep using it, and if it keeps doing well, I will cancel my GPT-4 $20/month subscription.



Early this evening, I asked Bard if it was updated to PaLM 2, and it said it was. I then asked it to write some Python programs, giving it more or less the same prompts I've given GPT-4. Bard doesn't seem to be any better than it was a couple weeks ago in the cases I tried, and nowhere near as capable as GPT-4. And it goes off the rails quickly. After even a short dialog (~5 statements), it becomes less and less able to stay on track and make coherent corrections to the code.


You can use GPT-4 for free (toggle "Use best model"), and it'll search the internet and cite sources, on https://phind.com

No idea when they'll start charging, but it's replaced a lot of my googling at work


I get very different results from Phind vs. ChatGPT with GPT-4.


You can use GPT-4 for free with Bing.


Bing seems dumber than the free tier of OpenAI's chat (I believe that's GPT-3.5?). It constantly just falls back to search results I don't want.

I don't even bother using it


That's not true. People recognized it was smarter than ChatGPT before GPT-4 was officially revealed.


Why don't you just use ChatGPT? From what I know it's running GPT-3.5, and it's not that different (at least in terms of code quality).


As someone writing my first meaningful React app, code quality from GPT-4 is monstrously better than 3.5. With GPT-4 I can often paste entire components and get meaningful corrections, bug fixes, and non-trivial refactors. 3.5 just does a loop of mistaken fixes until it runs out of context length.


When my 25 queries per 3 hours run out, I don't use OpenAI at all. That's how bad ChatGPT is compared to GPT-4 for my use cases.


My biggest complaint is the speed. Watching it print output like a 56k modem is pretty annoying when coding.


There's a massive difference in response quality in my experience.

For example, I asked 3.5 to find a bug in a lengthy piece of JavaScript. It said it was hard to give a correct answer because it didn't know what the HTML or CSS looked like.

GPT-4 spotted the bug almost immediately (though it didn't manage to fix it).


In my experiments Bard is weaker than 3.5, but if it weren't, I would prefer Bard's fresher data.


One area where I noticed Bard was clearly behind (at least without crafting a better prompt) is getting from a half-working program to a running program, and sometimes even to a correct program (I was using Python).

With GPT-3.5 and 4, I was able to just paste in the error and it'd do the rest. Bard, however, only tried to tell me what the error could be, and didn't do well even when asked to fix the code itself.

Even GPT-4 though, when asked to go from specs to tests + code, would get stuck in a loop where making one test pass broke another, and vice versa. The program I tried to have it write was a query validator that tests whether a string matches a pattern built from AND, OR and NOT.

It did well at parsing my specs into tests, but from there on things didn't go very well. Roughly the kind of validator I mean is sketched below.
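
For context, here's a minimal sketch of the sort of thing I was after (the grammar details are my own assumption: bare terms combined with AND, OR and NOT, parentheses for grouping, and a term "matches" if it appears anywhere in the string):

    import re

    def tokenize(pattern):
        # Split the pattern into parens, operators and bare words.
        return re.findall(r"\(|\)|AND|OR|NOT|\w+", pattern)

    def parse(tokens):
        # Recursive-descent parser:
        #   expr   := term (OR term)*
        #   term   := factor (AND factor)*
        #   factor := NOT factor | ( expr ) | word
        def expr(i):
            node, i = term(i)
            while i < len(tokens) and tokens[i] == "OR":
                rhs, i = term(i + 1)
                node = ("OR", node, rhs)
            return node, i

        def term(i):
            node, i = factor(i)
            while i < len(tokens) and tokens[i] == "AND":
                rhs, i = factor(i + 1)
                node = ("AND", node, rhs)
            return node, i

        def factor(i):
            if tokens[i] == "NOT":
                node, i = factor(i + 1)
                return ("NOT", node), i
            if tokens[i] == "(":
                node, i = expr(i + 1)
                return node, i + 1  # skip the closing ")"
            return ("WORD", tokens[i]), i + 1

        node, _ = expr(0)
        return node

    def matches(text, node):
        # Evaluate the parsed pattern against a string (case-insensitive substring match).
        kind = node[0]
        if kind == "WORD":
            return node[1].lower() in text.lower()
        if kind == "NOT":
            return not matches(text, node[1])
        if kind == "AND":
            return matches(text, node[1]) and matches(text, node[2])
        return matches(text, node[1]) or matches(text, node[2])  # OR

    ast = parse(tokenize("python AND (tutorial OR guide) AND NOT beginner"))
    print(matches("a python tutorial for experts", ast))  # True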


Bard uses PaLM 2 now, which is definitely better than GPT-3.5. The question is only whether it is better than GPT-4.


What is its training data cutoff date?


We don't know (for either the previous model, LaMDA, or the new model, PaLM 2), but it matters less for Bard because Bard has access to live data from Google Search.


There's quite a vast difference between GPT-3.5 and GPT-4.



