Why Does Every AI Cycle Start with Chat? (matt-rickard.com)
22 points by henry_pulver 3 months ago | 14 comments



Short answer: because chat is a shortcut to anthropomorphization in folks' brains. It's much easier to assign personality and intent to a chat than to a floating head that needs to be 99.999% lifelike in order not to feel completely fake.


A better answer: natural language is the programming language everyone knows.


> A better answer: natural language is the programming language everyone knows.

And when you get into frustrating misunderstandings of details, you end up using a programming language anyway. Natural languages are too ambiguous to be useful for specific things. We humans can resolve the ambiguities in our minds, or think we do, based on context. I think the anthropomorphization argument hits closer to home. I too feel guilty of projecting more meaning and intelligence onto generated text, and I think I'm not the only one.


Not an actual pattern

The AI cycle in 2012 started with AlexNet.

The AI cycle in 2022 started with Stable Diffusion.


Depends what you mean by 'cycle', I suppose. The current hype-cycle _really_ started with ChatGPT; that was the point where people started getting super-excited and declaring that it would replace all human work by next Thursday.


What? Diffusion has directly led to industry artists pushing back against studios and others utilizing GenAI in their work to displace artists.

Yeah, sure, a bunch of clowns are talking about how ChatGPT is gonna take over the world; meanwhile diffusion is actually having an impact on artists.

Again, ChatGPT did not start the hype cycle this time; it landed smack in the middle of the GenAI cycle and accelerated it.


Chat is perhaps the cheapest implementation you could ever build. It's a linear interaction, easy to test, and arguably the easiest to encode/decode (with a fixed set of inputs too). As an added bonus, it has a familiar, well-understood interface.
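To make that concrete, here's a minimal sketch (plain Python, no real model; fake_model is an invented stand-in) of why the interface is so cheap: the whole interaction is one linear list of messages, and "encoding" a turn is just appending to it.

    def fake_model(history):
        # Stand-in for an actual LLM call; just echoes the last user turn.
        return "(model reply to: %r)" % history[-1]["content"]

    history = []
    while True:
        user_input = input("you> ")
        if user_input in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_input})
        reply = fake_model(history)
        history.append({"role": "assistant", "content": reply})
        print("bot>", reply)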


It is now, but it hasn't always been easy to make a computer chat naturally like a human without using canned responses. Chess positions are a much easier interface to implement because the move set is closed and very structured (grid position + piece). But humans don't speak in chess moves, so it would be difficult for us to speak to an AI that does.
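For illustration, here's a tiny sketch (Python; the names are invented here) of what "closed set and very structured" buys you: a chess move can be validated mechanically, while free-form text has no comparably cheap check.

    from dataclasses import dataclass

    FILES = "abcdefgh"
    RANKS = "12345678"
    PIECES = {"K", "Q", "R", "B", "N", "P"}

    @dataclass
    class Move:
        piece: str       # one of PIECES
        to_square: str   # e.g. "e4"

        def is_well_formed(self):
            # Purely syntactic check, possible because the vocabulary is closed.
            return (self.piece in PIECES
                    and len(self.to_square) == 2
                    and self.to_square[0] in FILES
                    and self.to_square[1] in RANKS)

    print(Move("N", "f3").is_well_formed())  # True
    print(Move("N", "z9").is_well_formed())  # False
    # There is no equivalent cheap check for "move the horsey near the king".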


Exactly. Text is very easy for us to analyze and understand, and to compare inputs against outputs.


Excuse me? A 10x10 plot for an SD LoRA makes it easy to glance over and see where and how it fails. Good luck glancing at a bunch of text answers.
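For context, the kind of plot being described is an X/Y grid along the lines of this sketch (Python/matplotlib; the checkpoint names, prompts, and random placeholder images are invented here), where rows are checkpoints and columns are prompts, so failures jump out at a glance.

    import matplotlib.pyplot as plt
    import numpy as np

    checkpoints = ["step-%d" % (i * 500) for i in range(10)]  # hypothetical LoRA checkpoints
    prompts = ["prompt %d" % j for j in range(10)]            # hypothetical test prompts

    fig, axes = plt.subplots(len(checkpoints), len(prompts), figsize=(20, 20))
    for r, ckpt in enumerate(checkpoints):
        for c, prompt in enumerate(prompts):
            img = np.random.rand(64, 64, 3)  # placeholder; real use loads the generated image
            axes[r, c].imshow(img)
            axes[r, c].axis("off")
            if r == 0:
                axes[r, c].set_title(prompt, fontsize=6)
    fig.savefig("lora_grid.png")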


LoRAs are fine-tuned changes to an existing model, often one that is already mature and well trained. The process is understood and documented.

OP is referring to the early phases of model development using new techniques, where outputs can vary wildly until corrected and evaluated at scale. The process is often adaptive, iterative, and by nature experimental. Text in this case is indeed easier to evaluate.


I'm curious about AlphaGo, Watson, and the AI that "conquered" chess. Are those outliers, or part of a larger story? It feels to me that a contemporary history of AI would mention those milestones.


Every chat begins with a big smile, yet AI has no face. You've been warned. Act accordingly.


Uhm… The boom in AI with LLMs wouldn't have happened without about a decade of major focus on images (both generative models and DNN models that blew traditional image processing out of the water) and planning/optimization-type problems (AlphaGo, chess, etc.). Seems incorrect to claim that chat starts every cycle.



