
Mostly branding and willingness.

w.r.t. Branding.

AI has been happening "forever". While "machine learning" or "genetic algorithms" were more the rage pre-LLMs, that doesn't mean people weren't using them. It's just that Google didn't brand its search engine as "powered by ML". AI is everywhere now because everything already used AI; the difference is that products are now labeled "Spellcheck With AI" instead of just "Spellcheck".

w.r.t. Willingness

Chatbots aren't new. You might remember Tay (2016) [1], Microsoft's Twitter chatbot. It should also seem strange that right after OpenAI released ChatGPT, Google released Bard (now Gemini). The transformer architecture behind LLMs dates to 2017; nobody was willing to be the first public chatbot again until OpenAI did it, but they were all working on them internally. ChatGPT is Nov 2022 [2]; Blake Lemoine's firing was June 2022 [3].

[1]: https://en.wikipedia.org/wiki/Tay_(chatbot)

[2]: https://en.wikipedia.org/wiki/ChatGPT

[3]: https://www.npr.org/2022/06/16/1105552435/google-ai-sentient




There's a deleted scene from Terminator 2 (1991) where we get a description of the neural network behind Skynet.

https://www.youtube.com/watch?v=1UZeHJyiMG8

https://en.wikipedia.org/wiki/Skynet_(Terminator)


Thanks for the information. I know Google had TPUs custom-made a long time ago, and that the concept has existed for a long time. I assumed a technical hurdle (i.e. VRAM) was finally overcome, allowing the theoretical speedup (~1 token/sec on a CPU vs ~100 tokens/sec on a GPU) to become practical.
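For what it's worth, that CPU-vs-GPU gap is mostly a memory bandwidth story: autoregressive decoding streams every weight once per generated token, so a rough ceiling is tokens/sec ≈ memory bandwidth / model size. A back-of-envelope sketch (the bandwidth and model-size numbers are illustrative assumptions, not benchmarks):

```python
# Rough upper bound for memory-bound autoregressive decoding:
# each generated token reads every weight once, so
#   tokens/sec <= memory_bandwidth / model_bytes

def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    """Bandwidth-bound estimate of token throughput."""
    return bandwidth_gb_s / model_gb

# Assumed example: a 7B-parameter model in fp16 is ~14 GB of weights.
model_gb = 14.0

cpu = max_tokens_per_sec(50.0, model_gb)    # ~50 GB/s: dual-channel DDR-class RAM
gpu = max_tokens_per_sec(2000.0, model_gb)  # ~2 TB/s: HBM-class GPU memory

print(f"CPU ceiling: ~{cpu:.1f} tok/s, GPU ceiling: ~{gpu:.1f} tok/s")
```

With those assumed numbers the ceilings come out around a few tokens/sec on CPU versus ~100+ on GPU, which is roughly the gap described above.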

Thanks for the links too!



