
Honestly, what makes you feel convinced that the current AI wave will be so impactful, once you take away all the hype?



The hype is a bunch of people acting like this AI is the messiah and is somehow going to cure cancer. Once you take that away, you have a pretty useful tool that usually helps you do what Google does, with a few fewer clicks. One caveat is that you should be willing to verify the results, which you should always be doing with Google anyway.


The AI tutors being given to students are going to exponentially change education. Now a tireless explainer can be engaged to satisfy innate curiosity. That alone is the foundation for a serious revolution.


To me this is one of the strongest points for the technology in its current state. Not surprisingly, I've found it quite helpful for learning foreign languages in particular. I can get it to spend 10 minutes explaining the very nuanced differences between two similar phrases in a way you'd never get from a book and would be hard-pressed to get even from a good tutor.


Great usage / application! I'm using it both to understand legal documents and to create a law firm's new-client intake assistant. Potential clients can describe their legal situation in any language, which gets translated into the attorney's language, annotated with references to prior cases.


I'd be interested to hear how well it works. In my experience, GPT is good at common legal issues, but pretty bad with nuance or unusual situations. And it can hallucinate precedent.


It requires quite a bit of role framing, as well as having it walk its own steps in a verification pass. But as an assistant helping a new/junior attorney, it is quite unnervingly helpful.
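
Roughly, the shape of it is a two-pass flow: draft, then make the model re-check its own output. A minimal sketch using the OpenAI Python client; the model name, role text, and prompts here are illustrative placeholders, not our actual setup:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    ROLE = (
        "You are an intake assistant supporting a junior attorney. "
        "Summarize the client's situation and flag possibly relevant areas of law. "
        "If you are unsure about something, say so explicitly."
    )

    def draft_and_verify(client_description: str) -> str:
        # Pass 1: first-draft intake summary.
        draft = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[
                {"role": "system", "content": ROLE},
                {"role": "user", "content": client_description},
            ],
        ).choices[0].message.content

        # Pass 2: have the model walk back through its own steps and fix itself.
        review_request = (
            "Re-check your summary above step by step. Correct anything you "
            "cannot support, and drop any cited cases you are not certain exist."
        )
        verified = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": ROLE},
                {"role": "user", "content": client_description},
                {"role": "assistant", "content": draft},
                {"role": "user", "content": review_request},
            ],
        ).choices[0].message.content
        return verified

The second pass catches a surprising amount, but it's still reviewed by a human before anything goes to a client.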


Yes, I've been doing the same thing. I've even started looking up things that I was too lazy to research with Google, because I knew it would take longer.


What are good ways to learn a new language with it?


We need it to actually be correct 100% of the time, though. The current state, where a chat interface is unable to say "I don't know" when it actually doesn't know, is a huge unsolved problem. Worse, it will go through all the motions of showing its work or writing a proof, and the result is nonsense.

This revolution is the wrong one if we can't guarantee correctness, or at least guarantee that the AI will direct the user to where real help is available.


I've been having luck with framing the AI's role as a "persistent fact checker who reviews work more than once before presenting." Simply adding that to prompts helps, as does "provide step by step instructions a child can follow". Using both of these modifying phrases materially improves the results.
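
Concretely, it's nothing fancier than wrapping the actual task with those two phrases. A toy sketch in Python (the example task is made up):

    # The two modifier phrases, verbatim, plus whatever the actual task is.
    FACT_CHECKER = (
        "You are a persistent fact checker who reviews work "
        "more than once before presenting."
    )
    CHILD_STEPS = "Provide step by step instructions a child can follow."

    def framed_prompt(task: str) -> str:
        # Sandwich the real question between the two framing phrases.
        return f"{FACT_CHECKER}\n\n{task}\n\n{CHILD_STEPS}"

    print(framed_prompt("Explain how DNS resolution works."))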


I completely agree. Being able to generate a bash command that includes a complicated regular expression is like magic to me. Also, I consider myself a strong writer, but GPT-4 can look at things I write and suggest useful improvements. These capabilities are a huge advance over what was available in a general-purpose application even a few years ago. GPT-2 wasn't all that impressive.
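
For a sense of the kind of thing I mean (a made-up task, and sketched in Python rather than bash so it's easy to paste here): "pull every IPv4 address out of a log file" comes back as something short you can sanity-check yourself:

    import re

    # Loose dotted-quad match; doesn't validate that each octet is <= 255.
    IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    with open("access.log") as f:  # hypothetical log file
        for line in f:
            for ip in IPV4.findall(line):
                print(ip)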


Can and will you really read all the sources that you find with Google? What about topics people are talking about on all the different social media platforms? Will you really read all the comments?

I think these tools will help us break out of local bubbles. I'm currently working on a Zeitgeist [1] that tries to gather the consensus on social media and on the web in general.

[1] https://foretale.io/zeitgeist


But it WILL cure cancer. Like our Lord and Saviour Sam Altman said "first you solve AI and the AI will solve everything". O ye of little faith!


Because I find it actually useful for doing things now.


What do you use it for? As a web developer I use GitHub's Copilot and find its assistance most useful for unit tests. I haven't found any use case for ChatGPT yet. I get better and quicker results searching for what I need on Google. I'm much quicker searching by keywords as opposed to putting together a full sentence for ChatGPT.


Yeah, currently Copilot is way more useful than ChatGPT. That may change with plugins; we'll have to see.

Either way though, Copilot is certainly a product of the 'current AI wave' that is being compared to crypto scams above.


Can you use it without worrying about getting sued because it's using licensed code under the hood to generate your tests without telling you? I wasn't sure how far their license agreements / guarantees had come...


I recently had to generate lots of short text descriptions of numerous different items in a taxonomy. ChatGPT successfully generated 'reasonable first draft' text that saved me a lot of time on basic wordsmithing. I made several edits to add points or change emphasis, but overall it got me to the 80% stage very quickly.
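
If you script it against the API instead of pasting into the web UI, it's roughly this (hypothetical item names and prompt; assumes the OpenAI Python client and an API key in the environment):

    from openai import OpenAI

    client = OpenAI()

    items = ["Widgets", "Fasteners", "Adhesives"]  # hypothetical taxonomy entries

    for item in items:
        draft = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[{
                "role": "user",
                "content": f"Write a two-sentence catalogue description for '{item}'.",
            }],
        ).choices[0].message.content
        print(f"## {item}\n{draft}\n")  # drafts still get a human editing pass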

At home, a carpenter working at my house said that he is using ChatGPT to overcome problems associated with his dyslexia (e.g. when writing descriptions of the services his company offers). I hadn't even considered that use case.


I'm a native English speaker and a strong writer, but I still find it useful to have my copy reviewed by GPT-4 to see if there's room for improvement. It sometimes suggests additions that I should make.

I also find it useful for pasting code and asking, "Do you have any ideas for improvements?"


I am completely unable to put myself in the headspace of someone who thinks this is all just empty hype. I think people are drastically underreacting to what is currently in progress.

What does all of this look like to you?


I'm not saying that it's all empty hype. ChatGPT is useful for some tasks, like rewriting a paragraph or finding a regexp one-liner to do something specific. It works surprisingly well at times. However, I don't see it becoming as impactful as it's hyped to be. Its main limitation is that it hallucinates, and I don't think that will change anytime soon, because hallucination is a fundamental issue with deep learning.



