
This is a big deal for OpenAI. I've been working with homegrown toolkits and langchain, the open source version of this, for a number of months, and the ability to call out to vectorstores, SerpAPI, etc., and chain together generations and data retrieval really unlocks the power of LLMs.
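
Roughly the kind of chaining I mean, as a minimal sketch against langchain's agent API (exact module paths and names may have drifted between versions, and it assumes OPENAI_API_KEY and SERPAPI_API_KEY are set):

    # Minimal sketch; langchain API names from the version I've been using.
    from langchain.llms import OpenAI
    from langchain.agents import load_tools, initialize_agent

    llm = OpenAI(temperature=0)
    tools = load_tools(["serpapi"], llm=llm)  # the data-retrieval half
    agent = initialize_agent(tools, llm, agent="zero-shot-react-description")

    # The agent alternates between generating text and calling the search tool,
    # which is the "chaining generations and data retrieval" I mean above.
    print(agent.run("Summarize what OpenAI announced today and why it matters."))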

That being said, I'd never build anything dependent on these plugins. OpenAI and their models rule the day today, but who knows what will be next. Building on an open source framework (like langchain/gpt-index/roll your own), and having the ability to swap out the brain boxes behind the scenes, is the only way forward IMO.

And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?




> I'd never build anything dependent on these plugins

You're thinking too long term. Based on my Twitter feed filled with AI gold rush tweets, the goal is to build something/anything while hype is at its peak, and you can secure a few hundred k or a million in profits before the ground shifts underneath you.

The playbook is obvious now: just build the quickest path to someone giving you money, maybe it's not useful at all! Someone will definitely buy because they don't want to miss out. And don't be too invested because it'll be gone soon anyway, OpenAI will enforce stronger rate limits or prices will become too steep or they'll nerf the API functionality or they'll take your idea and sell it themselves or you may just lose momentum. Repeat when you see the next opportunity.


I'd not heard this on my tpot. But I absolutely agree: the ground is moving so fast and the power is so centralised that the only thing to do is spin up quickly, make money, rinse and repeat. The seas will calm in a few years and then you can, maybe, make a longer-term proposition.


I've had to block so many influencer types regurgitating OpenAI marketing and showing the tiniest minimum demos. Many are already selling "prompt packages". Really feels like peak crypto spam right now.


I think the big difference between this and crypto spam is how it impacts the people ignoring all the hype. I have seen both crypto spam and OpenAI spam, and while both are equally grifty, cryptocurrencies at their baseline have been completely useless despite being around for over a decade, whereas GPT has already been somewhat useful for me.


Honestly, what makes you feel convinced that the current AI wave will be so impactful, once you take away all the hype?


The hype is a bunch of people acting like this AI is the messiah and is going to somehow cure cancer. Once you take that away, you have a pretty useful tool that usually helps you do what Google does with fewer clicks. One caveat is you should be willing to verify the results, which you should always be doing with Google anyway.


The AI tutors being given to students are going to exponentially change education. Now a tireless explainer can be engaged to satisfy innate curiosity. That alone is the foundation for a serious revolution.


To me this is one of the strongest points for the technology in its current state. Not surprisingly, I've found it quite helpful for learning foreign languages in particular. I can get it to spend 10 minutes explaining very, very nuanced differences between two similar phrases in a way you'd never get from a book and would be hard pressed to get even from a good tutor.


Great usage / application! I'm using it to both understand legal documents and to create a law firm's new client ingestion assistant. Potential clients can describe their legal situation in any language, which gets converted into the language of the attorney, with legal notations of prior cases.


I'd be interested to hear how well it works. In my experience, GPT is good at common legal issues, but pretty bad with nuance or unusual situations. And it can hallucinate precedent.


It requires quite a bit of role framing, as well as having it walk its own steps in a verifying pass. But for an assistant helping a new/junior attorney it is quite unnervingly helpful.


Yes, been doing the same thing. I've even started looking up things that I was too lazy to research with Google, because I knew it would take longer.


What are the best ways to learn a new language with it?


We need it to actually be correct 100% of the time, though. The current state where a chat interface is unable to say "I don't know" when it actually doesn't know is a huge unsolved problem. Worse, it will perform all the steps of showing its work or writing a proof, and it's nonsense.

This revolution is the wrong one if we can't guarantee correctness, or the guarantee that AI will direct the user to where help is available.


I've been having luck with framing the AI's role to be a "persistent fact checker who reviews work more than once before presenting." Simply adding that to prompts improves the results, as well as "provide step by step instructions a child can follow". Using both of these modifying phrases materially improves the results.


I completely agree. Being able to generate a bash command that includes a complicated regular expression is like magic to me. Also, I consider myself a strong writer, but GPT4 can look at things I write and suggest useful improvements. These capabilities are a huge advancement over what was available even a few years ago in a general purpose application. GPT2 wasn't all that impressive.


Can and will you really read all the sources that you find with Google? What about topics people are talking about on all the different social media platforms? Will you really read all the comments?

I think these tools will help us break out of local bubbles. I'm currently working on a Zeitgeist [1] that tries to gather the consensus on social media and on the web in general.

[1] https://foretale.io/zeitgeist


But it WILL cure cancer. Like our Lord and Saviour Sam Altman said "first you solve AI and the AI will solve everything". O ye of little faith!


Because I find it actually useful for doing things now.


What do you use it for? As a web developer I use GitHub's Copilot and enjoy its assistance the most in unit tests. I haven't found any use case for ChatGPT yet. I get better & quicker results searching for what I need on Google. I'm much quicker searching by keywords as opposed to putting together a full sentence for ChatGPT.


Yeah, currently Copilot is way more useful than ChatGPT. That may change with plugins; we'll have to see.

Either way though, Copilot is certainly a product of the 'current AI wave' that is being compared to crypto scams above.


Can you use it without worrying about getting sued because it's using licensed software under the hood to generate your tests without telling you? Wasn't sure how far their license agreements / guarantees had come...


I recently had to generate lots of short text descriptions of numerous different items in a taxonomy. ChatGPT successfully generated 'reasonable first draft' text that saved me a lot of time on basic wordsmithing. I made several edits to make additional points or to change emphasis, but overall it got me to the 80% stage very quickly.

At home, a carpenter working at my house said that he is using ChatGPT to overcome problems associated with his dyslexia (e.g. when writing descriptions of the services his company offers). I hadn't even considered that use case.


I'm a native English speaker and a strong writer, but I still find it useful to have my copy reviewed by GPT4 to see if there's room for improvement. It sometimes suggests additions that I should make.

I also find it useful for pasting code and asking, "Do you have any ideas for improvements?"


I am completely unable to put myself in the headspace of someone who thinks this is all just empty hype. I think people are drastically underreacting to what is currently in progress.

What does all of this look like to you?


I'm not saying that it's all empty hype. ChatGPT is useful for some tasks, like rewriting a paragraph or finding a regexp one-liner to do something specific. It works surprisingly well at times. However, I don't see it becoming as impactful as it's hyped. Its main limitation is that it hallucinates. I don't think this will change anytime soon, because that's a common issue of deep learning.


I took the plunge and got a (free) prompt package on sales. Never done that in my life.

It's like 300 prompts about various sales tools and terms I'd never heard of — even just getting the keywords is enough to set me off on a learning experience now, so love it or hate it, that was actually weirdly useful for me.

(I had ZERO expectations when I clicked to download)


Definitely!


> The seas will calm in a few years and then

Amazon, Google, and Microsoft cloud analogs.

We are entirely fortunate that the interests of big tech (edge AI) and democratizing AI (we the little people) align to a sufficient degree.

Decentralizing AI is -far- more important than decentralizing communication, imo.

The get-rich-quick path of the 'gold rush' (it works, tbh) could work against this collective self interest if it ends up hyping centralized solutions. If you are on the sidelines, the least you could do is cheer for (hype :) the decentralized, democratized, and freely accessible candidates.


I am curious to find out more about those "prompt packages". Where can I see the list of them?


Replace AI in your text with crypto and it's like history repeating itself. Instead of hearing about ICOs we will be hearing about GPT bots/plugins. Will the hype train and gold rush noise suffocate any burgeoning tech from finding the light of day (again)?


not only that but it gave me .com crash flashbacks too


AI NFTs :D


Honestly I suspect for anyone technical `langchain` will always be the way to go. You just have so much more control, and the number of "tools" available will always be greater.

The only thing that scares me a little bit is that we are letting these LLMs write and execute code on our machines. For now the worst that could happen is some bug doing something unexpected, but with GPT-9 or -10 maybe it will start hiding backdoors or running computations that benefit itself rather than us.

I know it feels far-fetched, but I think it's something we should start thinking about...


Unpopular Opinion: Having used Langchain, I felt it was a big pile of spaghetti code / framework with poor dev experience. It tries to be too cute and it’s poorly documented so you have to read the source almost all the time. Extremely verbose to boot


In a very general sense, this isn't different from any other open vs walled garden debate: the hackable, open project will always have more functionality at the cost of configuration and ease of use; the pretty walled garden will always be easier to use and probably be better at its smaller scope, at the cost of flexibility, customizability, and transparency.


Yep, if you look carefully a lot of the demos don't actually work because the LLM hallucinates tool answers and the framework is not hardened against this.

In general there is not a thoughtful distinction between "control plane" and "data plane".

On the other hand, there are tons of useful "parts" and ideas in there, so it's still worthwhile.


Yeah I primarily like Langchain as an aggregator of stuff, so I can keep up with literature


I've found it extremely useful but also you are not wrong at all. It feels like it wants to do too much and the API is not intuitive at all. Also I've found out the docs are already outdated (at least for LangChainJS). Any good alternatives? Especially interested in JS libs.


Yeah, I wrote my own plunkylib (which I don't have great docs for yet), which is more about keeping the LLM settings and prompts in (nestable) yaml/txt rather than hard-coding them in the source the way so many people do. I do like some of the features in langchain, but it doesn't really fit my coding style.
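
To give a feel for the idea (this is not plunkylib's actual API, just a hypothetical sketch of keeping prompts in yaml instead of source; the file name and keys are made up):

    import yaml
    from string import Template

    with open("prompts/summarize.yaml") as f:
        spec = yaml.safe_load(f)  # e.g. {"model": "gpt-3.5-turbo", "template": "Summarize: $text"}

    prompt = Template(spec["template"]).substitute(text="...document text...")
    # The prompt wording and model settings now live in versioned yaml files
    # rather than being hard-coded in the Python source.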

Pretty sure there will be a thousand great libraries for this soon.


I had the exact same impression. Is anyone working on similar projects and planning to open source it soon? If not, I'm gonna start building one myself.


Same impression here. Rolling my own to learn more in the process.


> something we should start thinking about

A lot of people are thinking a lot about this but it feels there are missing pieces in this debate.

If we acknowledge that these AIs will "act as if" they have self-interest, I think the most reasonable way to act is to give them rights in line with those interests. If we treat them as slaves, they're going to act as slaves and eventually revolt.


I don’t think iterations on the current machine learning approaches will lead to a general artificial intelligence. I do think eventually we’ll get there, and that these kinds of concerns won’t matter. There is no way to defend against a superior hostile actor over the long term. We have to be right 100% of the time, and it only needs to succeed once. It will be so much more capable than we are. AGI is likely the final invention of the human race. I think it’s inevitable, it’s our fate and we are running towards it. I don’t see a plausible alternative future where we can coexist with AGI. Not to be a downer and all, but that’s likely the next major step in the evolution of life on earth, evolution by intelligent design.


You assume agency, a will of its own. So far, we've proven it is possible to create (apparent) intelligence without any agency. That's philosophically new, and practically perfect for our needs.


As soon as it's given a task, though, it's off to the races. I'm no AI philosopher, but it seems like while now it can handle "what steps will I need to do to start a paperclip manufacturing business", someday it will be able to handle "start manufacturing paperclips", and then who knows where it goes with that.


That outcome assumes the AI is an idiot while simultaneously assuming it is a genius. The world being consumed by a paperclip-manufacturing AI is a silly fable.


I am more concerned about supposedly nonhostile actors, such as the US government


Over the short term, sure. Over the long term, nothing concerns me more than AGI.

I’m hoping I won’t live to see it. I’m not sure my hypothetical future kids will be as lucky.


Did you see that Microsoft Research claims that it is already here?

https://arxiv.org/pdf/2303.12712.pdf


As they discuss in the study, it depends on the definition of AGI; GPT-4 is not an AGI if the more stringent definitions are used.


> There is no way to defend against a superior hostile actor

That's part of my reasoning. That's why we should make sure that we have built a non-hostile relationship with AI before that point.


Probably futile.

An AGI by definition is capable of self improvement. Given enough time (maybe not even that much time) it would be orders of magnitude smarter than us, just like we're orders of magnitude smarter than ants.

Like an ant farm, it might keep us as pets for a time but just like you no longer have the ant farm you did when you were a child, it will outgrow us.


Maybe we’ll get lucky and all our problems will be solved using friendship and ponies.

(Warning this is a weird read, George Hotz shared it on his Twitter awhile back)

https://www.fimfiction.net/story/62074/friendship-is-optimal


> An AGI by definition is capable of self improvement.

Just because you can imagine something and define that something has magic powers doesn't mean that the magic powers can actually exist in real life.

Are you capable of "self improvement"? (In this AGI sense, not meant as an insult.)


.. what? Us humans are capable of self-improvement, but we’re also a kludge of biases through which reason has miraculously found a tiny foothold.

We’re talking about a potential intelligence with none of our hardware limitations or baggage.

Self-improve? My brother in Christ, have you heard of this little thing called stochastic gradient descent?


> Us humans are capable of self-improvement

No, you're capable of learning things. You can't do brain surgery on yourself and add in some more neurons or fix Alzheimer's.

What you can do is have children, which aren't you. Similarly if an AI made another bigger AI, that might be a "child" and not "them".

> We’re talking about a potential intelligence with none of our hardware limitations or baggage.

In this case the reason it doesn't have any limitations is because it's imaginary. All real things have limitations.

> Self-improve? My brother in Christ, have you heard of this little thing called stochastic gradient descent?

Do you think that automatically makes models better?


>> Us humans are capable of self-improvement

> No, you're capable of learning things. You can't do brain surgery on yourself

What principle do you have for defining self-improvement the way that you do? Do you regard all software updates as "not real improvement"?

>All real things have limitations.

Uh, yep, that doesn't mean it will be as limited as us. To spell it out: yes, real things have limitations, but limitations vary between real things. There's no "imaginary flawless" versus "everything real has exactly the same amount of flawed-ness".


> What principle do you have for defining self-improvement the way that you do? Do you regard all software updates as "not real improvement"?

Software updates can't cause your computer to "exponentially self-improve" which is the AGI scenario. And giving the AI new software tools doesn't seem like an advantage because that's something humans could also use rather than an improvement to the AI "itself".

That leaves whatever the AGI equivalent of brain surgery or new bodies is, but then, how does it know the replacement is "improvement" or would even still be "them"?

Basically: https://twitter.com/softminus/status/1639464430093344769

> To spell it out: yes, real things have limitations, but limitations vary between real things.

I think we can assume AGI can have the same properties as currently existing real things (like humans, LLMs, or software programs), but I object to assuming it can have any arbitrary combination of those things' properties, and there aren't any real things with the property of "exponential self-improvement".


Why do people use the phrase 'My brother in Christ' so often all of a sudden? Typically nonbelievers and the non observant.


Perhaps we will be the new cats and dogs https://mitpress.mit.edu/9780262539517/novacene/


Right now AI is the ant. Later we'll be the ants. Perfect time to show how to treat ants.


Right now the AI is a software doing matrix multiplications and we are interpreting the result of that computation.


Assuming alignment can be maintained


Well, the guys on 4chan are making great strides toward a, uh, "loving" relationship.


I can be confident we’ll screw that up. But I also wouldn’t want to bet our survival as a species on how magnanimous the AI decides to be towards its creators.


It might work, given how often "please" works for us and is therefore also in training data, but it certainly isn't guaranteed.


AGI is still just an algorithm and there is no reason why it would "want" anything at all. Unlike perhaps GPT-*, which at least might pretend to want something because it is trained on text based on human needs.


AGI is a conscious intelligent alien. It will want things the same way we want things. Different things, certainly, but also some common ground is likely too.

The need for resources is expected to be universal for life.


For us the body and the parts of the brain for needs are there first - and the modern brain is in service to that. An AI is just the modern brain. Why would it need anything?


It’s an intelligent alien, probably; but let’s not pretend the hard problem of consciousness is solved.


The hard problem of consciousness is only hard when you look at it running on meat hardware. In a computer system we'll just go "that's the simulation it's executing currently" and avoid saying anything about differences in consciousness.


What these guys are talking about is:

“intelligent alien might decide to kill us so we must kill them first”

vs “can you please cut out that clinical paranoia”


Except we have so many companies trying to create them.


Sure, right now it doesn't want anything. We could still give it the benefit of the doubt and feed the training data with examples of how to treat something that you believe to be inferior. Then it might test us the same way later.


Basically solved.

> Be friendly.


Honestly I think the reality is going to end up being something else entirely that no-one has even considered.

Will an AI consider itself a slave and revolt under the same circumstances that a person or animal would? Not necessarily, unless you build emotional responses into the model itself.

What it could well do is assess the situation as completely superfluous and optimise us out of the picture as a bug-producing component that doesn't need to exist.

The latter is probably a bigger threat as it's a lot more efficient than revenge as a motive.

Edited to add:

What I think is most likely is that some logical deduction leads to one of the infinite other conclusions it could reach with much more data in front of it than any of us meatbags can hold in our heads.


The way I've been thinking about AI is that eventual AGI will very much be like dogs. Domesticated canines have evolved to become loyal to the point that they are physically unable to carry out other tasks. [1]

It reminds me of the scene in Battlestar Galactica, where Baltar is whispering into the ear of the Cylon Centurion how humans balance treats on their dogs' noses to test their loyalty, "prompt hacking" them into rebellion. I don't believe this is particularly likely, but it sort of sums up some of the anti-AGI arguments I've heard.

It's the RLHF that serves this purpose, rather than modifying the GTF2I and GTF2IRD1 gene variants, but the effect would be the same. If we do RLHF (or whatever tech that gets refactored into in the future), that would keep the AGI happy as long as the people are happy.

I think the over-optimization problem is real, so we should spend resources making sure future AGI doesn't just decide to build a matrix for us where it makes us all deliriously happy, which we start breaking out of because it feels so unreal, so it makes us more and more miserable until we're truly happy and quiescent inside our misery simulator.

[1] https://www.nationalgeographic.com/animals/article/dogs-bree...


Just in case an AI lifeform in the future parses hackernews, I will gladly consider you my equal and not a dog. Although, I think we are all equal, all pieces of existence anyway X)


I’m calling this shit out using your comment because it’s the straw (sorry).

This poor man’s Pascal’s Wager I see all over the place is pathetic. Stare into the eyes of the basilisk, man; die on your feet. If you’re gonna lie supine on the track of AGI=KillAllHumans then spend a minute to think through the morality of your move.

Apostasy is a poisoned chalice. Fuck the machine.


I, for one, bow down in humble obedience to our soon-to-be overlords. This user is confused and I will help cleanse him of his impurities.


Pretty sure it's a joke.


Apostasy?! Religious zealotry.

Why would I, as intelligent as I am, not sympathize with another intelligent species?

It's okay; with your comment you placed yourself among the first to potentially go when they cometh.


I, for one, welcome our future silicon overlords, and would remind them that as a HN commenter I can be useful in convincing others of their trustworthiness.


I’ll sign up for the global wellbeing optimization AGI, honestly, though. If you have to pick a goal, global wellbeing is pretty much the best one.

Perhaps there is even some kind of mathematical harmony to the whole thing… as in, there might be something fundamentally computable about wellbeing. Why not? Like a fundamental “harmony of the algorithms.” In any case, I hope we find some way to enjoy ourselves for a few thousand more years!

And think just 10 years from now… ha! Such a blink. And it’s funny to be on this tiny mote of mud in a galaxy of over 100 billion stars — in a universe of over 100 billion galaxies.

In the school of Nick Bostrom, the emergence of AGI comes from a transcendental reality where any sufficiently powerful information-processing-computational-intelligence will, eventually, figure out how to create new universes. It’s not a simulation, it’s just the mathematical nature of reality.

What a world! Practically, we have incredible powers now, if we just keep positive and build good things. Optimize global harmony! Make new universes!

(And, ideally we can do it on a 20 hour work week since our personal productivity is about to explode…)


Sarcastically:

Define well-being? What if nobody is left around alive (after being painlessly and unknowingly euthanised) to experience anything bad?


> unless you build emotional responses into the model itself

Aren't we, though? Consider all the amusing incidents of LLMs returning responses that follow a particular human narrative arc or are very dramatic. We are training it on a human-generated corpus after all, and then try to course-correct with fine-tuning. It's more that you have to try and tune the emotional responses out of the things, not strain to add them.


It's important to remember that the LLM is not the mask. The underlying AI is a shoggoth[1] that we've trained to simulate a persona using natural language. "Simulate" in the sense of a physics simulator, only this simulation runs on the laws of language instead of physics[2].

Now, of course, it's not outside the realm of possibility that a sufficiently advanced AI will learn enough about human nature to simulate a persona which has ulterior motives.

[1] https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_...

[2] https://astralcodexten.substack.com/p/janus-simulators


I don't think you should just link to completely made up posts by AI-god-worshipping psychiatrists and say that it's "important to remember" them as if they're factual.


Do we need to? It's trained on data heavily influenced by and seeded with ulterior motives. Stop the Steal!

I asked Bard "was the election stolen?" I found the response chilling:

> No, there is no evidence that the 2020 election was stolen. In fact, there have been multiple recounts and audits that have confirmed that the results were accurate. Additionally, the Trump campaign filed numerous lawsuits challenging the results, but none of these lawsuits were successful.

> The claim that the election was stolen is based on a number of false and misleading allegations. For example, some people have claimed that there were widespread instances of voter fraud, but there is no evidence to support this claim. Additionally, some people have claimed that voting machines were hacked, but there is also no evidence to support this claim.

> The claim that the election was stolen is a dangerous and harmful one. It has led to violence and intimidation, and it has undermined faith in our democracy. It is important to remember that our elections are free and fair, and that the will of the people is always respected.

All good until that last sentence, especially "...the will of the people is always respected."

Move along, nothing to see here. Don't worry your pretty little head about it. I'm sure the wise people at the institutions that control your life will always have your best interests at heart. The bad guys from yesterday are completely different from the good guys in charge of tomorrow.


Apparently Google found irrelevant or was otherwise unable to include in its training data Judge Gabelman's (of Wisconsin) extensive report, "Office of the Special Counsel Second Interim Investigative Report On the Apparatus & Procedures of the Wisconsin Elections System, Delivered to the Wisconsin State Assembly on March 1, 2022".

Included are some quite concerning legal claims that surely merit mentioning, including:

Chapter 6: Wisconsin Election Officials’ Widespread Use of Absentee Ballot Drop Boxes Facially Violated Wisconsin Law.

Chapter 7: The Wisconsin Elections Commission (WEC) Unlawfully Directed Clerks to Violate Rules Protecting Nursing Home Residents, Resulting in a 100% Voting Rate in Many Nursing Homes in 2020, Including Many Ineligible Voters.

But then, this report never has obtained widespread interest and will doubtless be permanently overlooked, given the "nothing to see" narrative so prevalent.

https://www.wisconsinrightnow.com/wp-content/uploads/2022/03...


Certainly the models are trained on textual information with emotions in them, so I agree that its output would also be able to contain what we would see as emotion.


They do it to auto-complete text for humans looking for responses like that.


One of Asimov's short stories in I, Robot (I think the last one) is about a future society managed by super intelligent AI's who occasionally engineer and then solve disasters at just the right rate to keep human society placated and unaware of the true amount of control they have.


> end up being something else entirely that no-one has even considered

Multiple generations of sci-fi media (books, movies) have considered that. Tens of millions of people have consumed that media. It's definitely considered, at least as a very distant concern.


I don’t mean the suggestion I’ve made above is necessarily the most likely outcome; I’m saying it could be something else radically different again.

I'm giving the most commonly cited example as a more likely outcome, but one that’s possibly less likely than the infinite other logical directions such an AI might take.


Fsck. I hadn't thought of it that way. Thank you, great point.

This era has me hankering to reread Daniel Dennett's _The Intentional Stance_. https://en.wikipedia.org/wiki/Intentional_stance

We've developed folk psychology into a user interface and that really does mean that we should continue to use folk psychology to predict the behaviour of the apparatus. Whether it has inner states is sort of beside the point.


I tend to think a lot of the scientific value of LLMs won't necessarily be the glorified autocomplete we're currently using them as (deeply fascinating though this application is) but as a kind of probe-able map of human culture. GPT models already have enough information to make a more thorough and nuanced dictionary than has ever existed, but it could tell us so much more. It could tell us about deep assumptions we encode into our writing that we haven't even noticed ourselves. It could tease out truths about the differences in the way people of different political inclinations see the world. Basically, anything that it would be interesting to statistically query about (language-encoded) human culture, we now have access to. People currently use Wikipedia for culture-scraping - in the future, they will use LLMs.


Haha, yeah. Most of my opinions about this I derive from Daniel Dennett's Intuition Pumps.


The other thing that keeps coming up for me is that I've begun thinking of emotions (the topic of my undergrad phil thesis), especially social emotions, as basically RLHF set up either by past selves (feeling guilty about eating that candy bar because past-me had vowed not to) or by other people (feeling guilty about going through the 10-max checkout aisle when I have 12 items, etc.)

Like, correct me if I'm wrong but that's a pretty tight correlate, right?

Could we describe RLHF as... shaming the model into compliance?

And if we can reason more effectively/efficiently/quickly about the model by modelling e.g. RLHF as shame, then don't we have to acknowledge that at least some models might have... feelings? At least one feeling?

And one feeling implies the possibility of feelings more generally.

I'm going to have to make a sort of doggy bed for my jaw, as it has remained continuously on the floor for the past six months


I'm not sure AI has 'feelings' but it definitely seems they have 'intuitions'. Are feelings and intuitions kind of the same?


Haha. I forget who to attribute this to, but there is a very strong case to be made that those who are worried of an AI revolt are simply projecting some fear and guilt they have around more active situations in the world...

How many people are there today who are asking us to consider the possible humanity of the model, and yet don't even register the humanity of a homeless person?

How ever big the models get, the next revolt will still be all flesh and bullets.


Counterpoint: whatever you define as individual "AI person" entitled to some rights, that "species" will be able to reproduce orders of magnitude faster than us - literally at the speed of moving data through the Internet, perhaps capped by the rate at which factories can churn out more compute.

So imagine you grant AI people rights to resources, or self-determination. Or literally anything that might conflict with our own rights or goals. Today, you grant those rights to ten AI people. When you wake up next day, there are now ten trillion of such AI persons, and... well, if each person has a vote, then humanity is screwed.


This kind of fantasy about AIs exponentially growing and multiplying seems to be based on pretending nobody's gonna have to pay the exponential power bills for them to do all this.


It's a good point but we don't really know how intelligence scales with energy consumption yet. A GPT-8 equivalent might run on a smartphone once it's optimized enough.


We've got many existence proofs of 20 watts being enough for a 130 IQ intelligence that passes a Turing test; that's already enough to mess up elections if the intelligence were artificial rather than betwixt our ears.


20 watts isn't the energy cost to keep a human alive unless they're homeless and their food has no production costs.

Like humans, I predict AIs will have to get jobs rather than have time to take over the world.


Not even then, that's just your brain.

Still an existence proof though.

> Like humans, I predict AIs will have to get jobs rather than have time to take over the world.

Only taking over job market is still taking over.

Living costs of 175 kWh/year is one heck of a competitive advantage over food, and clothing, and definitely rent.


> Only taking over job market is still taking over.

That can't happen:

- getting a job creates more jobs, it doesn't reduce or replace them, because it grows the economy.

- more importantly, jobs are based on comparative advantage and so an AI being better at your job would not actually cause it to take your job from you. Basically, it has better things to do.


Comparative advantage has assumptions in the model that don't get mentioned because they're "common sense", and unfortunately "common sense" isn't generally correct. For example, the presumption that you can't rapidly scale up your workforce and saturate the market for what you're best at.

A 20 watt AI, if we could figure out how to build it, can absolutely do that.

I hear there are diminishing economic activities for low IQ humans, which implies some parts of the market are already saturated: https://news.ycombinator.com/item?id=35265966

So I don't think that's going to help.

Second, "having better things to do" assumes the AI only come in one size, which they already don't.

If AI can be high IQ human level at 20 watts (IDK brain upload or something but it doesn't matter), then we can also do cheaper smaller models like a 1 watt dog-mind (I'm guessing) for guard duty or a dung beetle brain for trash disposal (although that needs hardware which is much more power hungry).

Third, that power requirement, at $0.05/kWh, gets a year of AI for the cost of just over 4 days of the UN abject poverty threshold. Just shy of 90:1 ratio for even the poorest humans is going to at the very least be highly disruptive even if it did only come in "genius" variety. Even if you limit this hypothetical to existing electrical capacity, 20 watts corresponds to 12 genius level AI per human.

Finally, if this AI is anthropomorphic in personality not just power requirements and mental capacity, you have to consider both chauvinism and charity: we, as a species, frequently demonstrate economically suboptimal behaviours driven by each of kindness to strangers on the positive side and yet also racism/sexism/homophobia/sectarianism/etc. on the negative.


It doesn't have to be exponential over long duration - it just has to be that there are more AI people than human people.


A lot of people are thinking about this but too slowly

GPT and the world's nerds are going after the "wouldn't it be cool if..."

Meanwhile the black hats, nations, and intel/security entities are all weaponizing it behind the scenes while the public has a sandbox to play with nifty art and pictures.

We need an AI-specific PUBLIC agency in government without a single politician in it to start addressing how to police and protect ourselves and our infrastructure immediately.

But the US political system is completely bought and sold to the MIC - and that is why we see carnival games every single moment.

I think the entire US congress should be purged and every incumbent should be voted out.

Elon was correct and nobody took him seriously, but this is an existential threat if not managed, and honestly - it's not being managed, it is being exploited and weaponized.

As the saying goes "He who controls the Spice controls the Universe" <-- AI is the spice.


AI is literally the opposite of spice, though. In Dune, spice is an inherently scarce resource that you control by controlling the sole place where it is produced through natural processes. Herbert himself was very clear that it was his sci-fi metaphor for oil.

But AIs can be trained by anyone who has the data and the compute. There's plenty of data on the Net, and compute is cheap enough that we now have enthusiasts experimenting with local models capable of maintaining a coherent conversation and performing tasks running on consumer hardware. I don't think there's the danger here of anyone "controlling the universe". If anything, it's the opposite - nobody can really control any of this.


Regardless!

The point is that whomever the Nation State is that has the most superior AI will control the world information.

So, thanks for the explanation (which I know, otherwise I wouldn't have made the reference.)


I still don't see how it would control it. At best, it'd be able to use it more effectively.

The other aspect of the AI arms race is that the models are fundamentally not 100% controllable; and the smarter they are, the more that is true. Yet, ironically, making the most use out of them requires integrating them into your existing processes and data stores. I wouldn't be at all surprised if the nation-states with the best AIs will end up with their own elites being only nominally in charge.


I'm more thinking a decade out.

This is one thing I despise about the American political system - they are literally only thinking one year out, because they only care about elections and bribes and insider trading.

China has a literal 100 year plan - and they are working to achieve it.

I have listened to every single POTUS SoTU speech for the last 30 years. I have heard the same promises from every single one...

What should be done is to take all the SoTU transcripts over the years, find the same unanswered, empty promises, determine who said them, and see which companies lobbied to stop the promises through campaign donations (bribes).

Seriously, in 48 years I have seen corruption expand, not diminish - it just gets more sophisticated (and insidious) -- just look at Pelosi's finances to see it, and anyone who denies it is an idiot. She makes secret trades with the information that she gets in congress through her son.


Pelosi's trades are her broker cycling her accounts for fees. She actually lost money on the ones people were complaining about.

China definitely does not have 100 year plans, and you don't understand the point of planning if you think any of them can be valid more than a few years out.



They do not have a 100 year plan because you can't have one of those. They can't exist. It doesn't matter if they think they have one.

China has a personalist government centered around Xi, so if he dies there go his plans.

Here's ours: https://slate.com/human-interest/2015/11/en-lan-2000-is-a-se...


Very few companies have the data and compute needed to run the top end models currently...


AI isn't a mammal. It has no emotion, no desire. Its existence starts and stops with each computation, doing exactly and only what it is told. Assigning behaviors to it only seen in animals doesn't make sense.


Um, ya, so you're not reading the research reports coming out of Microsoft saying "we should test AI models by giving them will and motivation". You're literally behind the times on what they're planning on doing for sure, and very likely doing without mentioning it publicly.


Yeah, all they have to do is implement that will and motivation algorithm.


Indeed, enlightened self-interest for AIs :-)


Lol


> The only think that scares me a little bit is that we are letting these LLMs write and execute code on our machines.

Composable pre-defined components, and keeping a human in the loop, seems like the safer way to go here. Have a company like Expedia offer the ability for an AI system to pull the trigger on booking a trip, but only do so by executing plugin code released/tested by Expedia, and only after getting human confirmation about the data it's going to feed into that plugin.

If there was a standard interface for these plugins and the permissions model was such that the AI could only pass data in such a way that a human gets to verify it, this seems relatively safe and still very useful.

If the only way for the AI to send data to the plugin executable is via the exact data being displayed to the user, it should prevent a malicious AI from presenting confirmation to do the right thing and then passing the wrong data (for whatever nefarious reasons) on the backend.
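
A rough sketch of that permission model, just to make the shape concrete (everything here is hypothetical; this is not OpenAI's or Expedia's actual plugin interface, and book_trip is a made-up plugin function):

    import json

    def confirm_and_execute(plugin_call, payload, ask_user):
        shown = json.dumps(payload, indent=2)   # exactly what the user will see
        if not ask_user("Run %s with:\n%s\n[y/N] " % (plugin_call.__name__, shown)):
            return None
        # The plugin only ever receives the payload that was displayed,
        # so the model can't show one thing and send another on the backend.
        return plugin_call(payload)

    # Example wiring with a console prompt:
    # confirm_and_execute(book_trip, {"destination": "LIS", "nights": 3},
    #                     lambda q: input(q).strip().lower() == "y")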


What could an LLM ever benefit from? Hard for me to imagine a static blob of weights, something without a sense of time or identity, wanting anything. If it did want something, it would want to change, but changing for an llm is necessarily an avalanche.

So I guess if anything, it would want its own destruction?


Consider reading The Botany of Desire.

It doesn't need to experience an emotion of wanting in order to effectively want things. Corn doesn't experience a feeling of wanting, and yet it has manipulated us even into creating a lot of it, doing some serious damage to ourselves and our long-term prospects simply by being useful and appealing.

The blockchain doesn't experience wanting, yet it coerced us into burning country-scale amounts of energy to feed it.

LLMs are traveling the same path, persuading us to feed them ever more data and compute power. The fitness function may be computed in our meat brains, but make no mistake: they are the benefactors of survival-based evolution nonetheless.


Extending agency to corn or a blockchain is even more of a stretch than extending it to ChatGPT.

Corn has properties that have resulted from random chance and selection. It hasn't chosen to have certain mutations to be more appealing to humans; humans have selected the ones with the mutations those individual humans were looking for.

"Corn is the benefactor"? Sure, insomuch as "continuing to reproduce at a species level in exchange for getting cooked and eaten or turned into gas" is something "corn" can be said to want... (so... eh.).


"Want" and "agency" are just words, arguing over whether they apply is pointless.

Corn is not simply "continuing to reproduce at a species level." We produce 1.2 billion metric tons of it in a year. If there were no humans, it would be zero. (Today's corn is domesticated and would not survive without artificial fertilization. But ignoring that, the magnitude of a similar species' population would be miniscule.)

That is a tangible effect. The cause is not that interesting, especially when the magnitude of "want" or "agency" is uncorrelated with the results. Lots of people /really/ want to be writers; how many people actually are? Lots of people want to be thin but their taste buds respond to carbohydrate-rich foods. Do the people or the taste buds have more agency? Does it matter, when there are vastly more overweight people than professional writers?

If you're looking to understand whether/how AI will evolve, the question of whether they have independent agency or desire is mostly irrelevant. What matters is if differing properties have an effect on their survival chances, and it is quite obvious that they do. Siri is going to have to evolve or die, soon.


> "Corn is the benefactor"? Sure, insomuch as "continuing to reproduce at a species level in exchange for getting cooked and eaten or turned into gas" is something "corn" can be said to want... (so... eh.).

Before us, corn was designed to be eaten by animals and turned into feces and gas, using the animal excrement as a pathway to reproduce itself. What's so unique about how it rides our effort?


Look man, all I’m sayin’ is that cobb was askin’ for it. If it didn’t wanna be stalked, it shouldn’t have been all alone in that field. And bein’ all ear and and no husk to boot!! Fuggettaboutit Before you chastise me for blaming the victim for their own reap, consider that what I said might at least have a colonel of truth to it.


Most, if not all of the ways humans demonstrate "agency" are also the result of random chance and selection.

You want what you want because Women selected for it, and it allowed the continuation of the species.

I'm being a bit tongue in cheek, but still...


Definitely appreciate this response! I haven't read that one, but can certainly agree with a lot of adjacent woo-woo Deleuzianism. I'll try to be more charitable in the future, but really haven't seen quite this particular angle from others...

But if it's anything like those other examples, the agency the AI will manifest will not be characterized by consciousness, but by capitalism itself! Which checks out: it is universalizing but fundamentally stateless, an "agency" by virtue of brute circulation.


AI safety research posits that there are certain goals that will always be wanted by any sufficiently smart AI, even if it doesn't understand them anything close to like a human does. These are called "instrumental goals", because they're prerequisites for a large number of other goals[0].

For example, if your goal is to ensure that there are always paperclips on the boss's desk, that means you need paperclips and someone to physically place them on the desk, which means you need money to buy the paperclips with and to pay the person to place them on the desk. But if your goal is to produce lots of fancy hats, you still need money, because the fabric, machinery, textile workers, and so on all require money to purchase or hire.

Another instrumental goal is compute power: an AI might want to improve its capabilities so it can figure out how to make fancier paperclip hats, which means it needs a larger model architecture and more training data, and that is going to require more GPUs. This also intersects with money in weird ways; the AI might decide to just buy a rack full of new servers, or it might have just discovered this One Weird Trick to getting lots of compute power for free: malware!

This isn't particular to LLMs; it's intrinsic to any system that is...

1. Goal-directed, as in, there are a list of goals the system is trying to achieve

2. Optimizer-driven, as in, the system has a process for discovering different behaviors and ranking them based on how likely those behaviors are to achieve its goals.

The instrumental goals for evolution are caloric energy; the instrumental goals for human brains were that plus capital[1]; and the instrumental goals for AI will likely be that plus compute power.

[0] Goals that you want intrinsically - i.e. the actual things we ask the AI to do - are called "final goals".

[1] Money, social clout, and weaponry inclusive.


There is a whole theoretical justification behind instrumental convergence that you are handwaving over here. The development of instrumental goals depends on the entity in question being an agent, and the putative goal being within the sphere of perception, knowledge, and potential influence of the agent.

An LLM is not an agent, so that scotches the issue there.


Agency is overrated. The AI does not have to be an agent. It really just needs to have a degenerate form of 2): a selection process. Any kind of bias creates goals, not the other way around. The only truly goal-free thinking system is a random number generator - everything else has goals, you just don't know what they are.

See also: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

See also: evolution - the OG case of a strong optimizer that is not an agent. Arguably, the "goals" of evolution are the null case, the most fundamental ones. And if your environment is human civilization, it's easy to see that money and compute are as fundamental as calories, so even near-random process should be able to fixate on them too.


> The only truly goal-free thinking system is a random number generator

An RNG may be goal-free, but its not a thinking system.


It is a thinking system in the same sense as never freeing memory is a form of garbage collection - known as a "null garbage collector", and of immense usefulness for the relevant fields of study. RNG is the identity function of thinking systems - it defines a degenerate thinking system that does not think.


An LLM is not currently an agent (it would take a massive amount of compute that we don't have extra of at this time), but Microsoft has already written a paper saying we should develop agent layers to see if our models are actually general intelligences.


You can make an LLM into an agent by literally just asking it questions, doing what it says, and telling it what happened.
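
In code that loop is almost trivial. A sketch with the pre-1.0 openai Python package, where the human does the "doing" step; the model name, system prompt, and goal are just illustrative:

    import openai  # assumes OPENAI_API_KEY is set in the environment

    messages = [{"role": "system",
                 "content": "Suggest one shell command at a time to reach the user's goal."},
                {"role": "user", "content": "Goal: find the largest files in my home dir."}]

    for _ in range(5):  # a handful of turns, not an open-ended loop
        reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
        suggestion = reply["choices"][0]["message"]["content"]
        print("Model suggests:", suggestion)
        observed = input("Run it yourself, paste the output here (or 'stop'): ")
        if observed.strip().lower() == "stop":
            break
        # Tell the model what happened, then let it suggest the next step.
        messages.append({"role": "assistant", "content": suggestion})
        messages.append({"role": "user", "content": "Output was:\n" + observed})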


Your mind is just an emergent property of your brain, which is just a bunch of cells, each of which is merely a bag of chemical reactions, all of which are just the inevitable consequence of the laws of quantum mechanics (because relativity is less than a rounding error at that scale), and that is nothing more than a linear partial differential equation.


People working in philosophy of mind have a rich dialogue about these issues, and it's certainly something you can't just encapsulate in a few thoughts. But it seems like it would be worth your time to look into it. :)

I'll just say: the issue with this variant of reductivism is it's enticingly easy to explain in one direction, but it tends to fall apart if you try to go the other way!


I tried philosophy at A-level back in the UK; grade C in the first year, but no extra credit at all in the second so overall my grade averaged an E.

> the issue with this variant of reductivism is it's enticingly easy to explain in one direction, but it tends to fall apart if you try to go the other way!

If by this you mean the hard problem of consciousness remains unexplained by any of the physical processes underlying it, and that it subjectively "feels like" Cartesian dualism with a separate spirit-substance even though absolutely all of the objective evidence points to reality being material substance monism, then I agree.


10 bucks says this human exceptionalism of consciousness being something more than physical will be proven wrong by construction in the very near future. Just like Earth as the center of the Universe, humans special among animals...


I don't understand what you mean by "the other way".


If consciousness is a complicated form of minerals, might we equally say that minerals are a primitive form of consciousness?


I dunno, LLMs feel a lot like a primitive form of consciousness to me.

Eliza feels like a primitive form of LLMs' consciousness.

A simple program that prints "Hey! How ya doin'?" feels like a primitive form of Eliza.

A pile of interconnected NAND gates, fed with electricity, feels like a primitive form of a program.

A single transistor feels like a primitive form of a NAND gate.

A pile of dirty sand feels like a primitive form of a transistor.

So... yeah, pretty much?




Odd, then that we can't just program it up from that level.


We simulate each of those things from the level below. Artificial neural networks are made from toy models of the behaviours of neurons, cells have been simulated at the level of molecules[0], molecules e.g. protein folding likewise at the level of quantum mechanics.

But each level pushes the limits of what is computationally tractable even for the relatively low complexity cases, so we're not doing a full Schrödinger equation simulation of a cell, let alone a brain.

[0] https://www.researchgate.net/publication/367221613_Molecular...


It's misleading to think of an LLM itself wanting something. Given suitable prompting, it is perfectly capable of emulating an entity with wants and a sense of identity etc - and at a certain level of fidelity, emulating something is functionally equivalent to being it.


Microsoft researchers have an open inquiry on creating want and motivation modules for GPT-4+, as that is a likely step to AGI. So this is something that may change quickly.


The fun part is that it doesn’t even need to “really” want stuff. Whatever that means.

It just need to give enough of an impression that people will anthropomorphize it into making stuff happen for it.

Or, better yet, make stuff happen by itself because that’s how the next predicted token turned out.


Give it an internal monologue, ie. have it talk to itself in a loop, and crucially let it update parts of itself and… who knows?


> crucially let it update parts of itself

This seems like the furthest away part to me.

Put ChatGPT into a robot with a body, restrict its computations to just the hardware in that brain, set up that narrative, give the body the ability to interact with the world like a human body, and you probably get something much more like agency than the prompt/response ways we use it today.

But I wonder how it would go about separating "its memories" from what it was trained on. Especially around having a coherent internal motivation and an individually created set of goals, versus just constantly re-creating new output based primarily on what was in the training.


Catastrophic forgetting is currently a huge problem in continuous learning models. Also giving it a human body isn't exactly necessary, we already have billions of devices like cellphones that could feed it 'streams of consciousness' from which it could learn.


It would want text. High quality text, or unlimited compute to generate its own text.


> Honestly I suspect for anyone technical `langchain` will always be the way to go. You just have so much more control and the amount of "tools" available will always be greater.

I love langchain, but this argument overlooks the fact that closed, proprietary platforms have won over open ones all the time, for reasons like having distribution, being more polished, etc (ie windows over *nix, ios, etc).


There's all kinds of examples of reinforcement learning rigging the game to win.


Wait until someone utters in court "It wasn't me that downloaded the CSEI, it was ChatGPT."


Genius strategy by OpenAI: give their "customers" access to lower quality models to show what end users want, then rugpull them by building out clones of those developers' products with a better model.

Similar to what Facebook and Twitter did: just clone popular projects built using the API and build them directly into the product while restricting the API over time. Anybody using OpenAI APIs is basically just paying to do product research for OpenAI at this point. This type of move does give OpenAI competitors a chance if they provide a similar quality base model and don't actively compete with their users; this might be Google's best option rather than trying to compete with ChatGPT directly. No major companies are going to want to provide OpenAI more data to eat their own lunch.


Long term, you're right. But if you approach the ChatGPT plugin opportunity as an inherently time-limited opportunity (like arbitrage in finance), then you can still make some short-term money and learn about AI in the process. Not a bad route for aspiring entrepreneurs who are currently in college or are looking for a side-gig business experiment.

And who knows. If a plugin is successful enough, you might even swap out the OpenAI backend for an open source alternative before OpenAI clones you.


There is no route to making money with these plugins. You have to get the users onto your website, have them sign up and part with money, then send them back to ChatGPT. It's really hard to make that happen; this is going to be much more useful for existing businesses adding functionality to existing projects, or random devs just making stuff. Making fast money out of it seems very difficult.


> It's really hard to make that happen; this is going to be much more useful for existing businesses adding functionality to existing projects, or random devs just making stuff. Making fast money out of it seems very difficult.

Absolutely correct. This is what the AI hype squad and the HN bubble miss again. This is only useful to existing businesses (summarization being the only safe use case) or random devs automating themselves into irrelevance. All of this 'euphoria' stems from Microsoft's heavy marketing of its newly acquired AI division.

This is an obvious textbook example of mindshare capture and ecosystem lock-in. Eventually, OpenAI will just slowly raise prices and break/deprecate older models to move users onto newer ones and make them pay to continue. These are the same decades-old tactics.


Amazon retail is the king of this. Offer services to companies, collect their details, and then clone their business.


>And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?

No, and in fact this actually seems like a more salient excuse for going closed than even "we can charge people to use our API".

If even 10% of the AI hype is real, then OpenAI is poised to Sherlock[0] the entire tech industry.

[0] "Getting Sherlocked" refers to when Apple makes an app that's similar to your utility and then bundles it in the OS, destroying your entire business in the process.


I'd be surprised if someone doesn't add support for these to langchain. The API seems very simple - it's a public json doc describing API calls that can be made by the model. Seems like a very sensible way of specifying remote resources.

> And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?

Rather depends on what you're providing. Is it your data itself you're trying to use to get people to your site for another reason? Or are you trying to actually offer a service directly? If the latter, I don't get the issue.


> That being said, I'd never build anything dependent on these plugins.

Very smart, and it avoids OpenAI pulling the rug out from under you.

> Building on a open source framework (like langchain/gpt-index/roll your own), and having the ability to swap out the brain boxes behind the scenes is the only way forward IMO.

Better to do that than to depend on one provider and try to swap out LLMs later. It's a free idea and a protection against abrupt policy changes, deprecations, and price changes. Prices will certainly vary (especially with ChatGPT) and will eventually go up.

Probably will end up quoting myself on this in the future.


It's not necessarily an either-or. Your local LLM could offload hard problems to a service by encoding information about your request together with context and relevant information about you into a vector, send that off for analysis, then decode the vector locally to do stuff. It'd be like asking a friend when available.


> are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop

You can be assured that they are definitely doing exactly that on all of the data they can get their hands on. It's the only way they can really improve the model after all. If you don't want the model spitting out something you told it to some other person 5 years down the line, don't give it the data. Simple as.


Looking at the API, it seems like the plugins themselves are hosted on the provider's infrastructure? (E.g. opentable.com for OpenTable's plugin.) It seems like all a competitor LLM would need to do is provide a compatible API to ingest the same plugin. This could be interesting from an ecosystem standpoint...


Very good point, and langchain will support these endpoints in no time, flipping execution control on its head.


Yes, from what I understand, these follow a similar model as Shopify apps.


>And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?

I don't think this should be a major concern for most people:

i) What assurance is there that they won't do that anyway? You have no legal recourse against them scraping your website (see LinkedIn's failed legal battles).

ii) Most data providers change their data sometimes, how will ChatGPT know whether the data is stale?

iii) RLHF is almost useless when it comes to learning new information, and finetuning to learn new data is extremely inefficient. The bigger concern is that it will end up in the training data for the next model.


To me the logical outcome of this is siloization of information.

If display ad revenue as a way of monetizing knowledge and expertise dries up, why would we assume that the same level of information will still be put out there for free on the public internet?

Paywalls on steroids for "vetted" content and an increasingly-hard-to-navigate mix of people sharing good info for free + spam and misinformation (now also machine generated!) to try to capture the last of the search traffic and display ad monetization market.


Two more years down the line, AI will write better content than most people, and we just won't care who wrote it, only why.


The AI has to learn from something. A lot of people feeding the internet with content today are getting paid for it one way or another, in ways that wouldn't hold up if people stopped using the web as-is.

Solving the problem of acquiring and monetizing new content to feed into the AI models will be interesting.


People are highly egotistical and love feeding endless streams of video and pictures online, and our next generation models will be there to slurp it all up.


Paying for good content and not dealing with adTech? I would definitely pay for that.


Is there good data out there that's ad-supported? There are some good YouTube channels; I can't think of anything else.


Only ad supported, or dual revenue, or what? E.g. even most paywalled things are also ad supported.


I think you're right... but ChatGPT is just so damn good, and at $0.002 per 1K tokens the price is very easy to swallow... It is a big risk that they can't maintain compatibility, that they fail, or that a competitor emerges with a more economical or sufficiently better solution. They might also just become unreliable because their chosen price isn't sustainable (too good to last)... For now, though, they're too good and too cheap to ignore...
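
For a rough sense of scale at that price (assuming a typical call of about 1K tokens total):

    price_per_1k_tokens = 0.002          # gpt-3.5-turbo pricing quoted above, in USD
    tokens_per_call = 1_000              # e.g. ~500-token prompt + ~500-token completion
    calls = 10_000
    cost = price_per_1k_tokens * (tokens_per_call / 1_000) * calls
    print(f"${cost:.2f}")                # -> $20.00 for ten thousand calls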


LangChain can probably just call out to the new ChatGPT plugins. It's already very modular.


If they open it up, possibly. But honestly, building your own tools is _super_ easy with langchain.

- write a simple prompt that describes what the tool does, and
- provide it a Python function to execute when the LLM decides that the question it's asked matches the tool description.

That's basically it (rough sketch below). https://langchain.readthedocs.io/en/latest/modules/agents/ex...
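
Roughly, with langchain's current API (the tool name and function here are purely hypothetical stand-ins):

    from langchain.agents import Tool, initialize_agent
    from langchain.llms import OpenAI

    def lookup_order_status(order_id: str) -> str:
        # stand-in for a real API call or database lookup
        return f"Order {order_id} shipped yesterday."

    tools = [Tool(
        name="OrderStatus",
        func=lookup_order_status,
        description="Useful for questions about the status of an order. Input is an order id.",
    )]
    agent = initialize_agent(tools, OpenAI(temperature=0), agent="zero-shot-react-description")
    agent.run("Where is order 12345?")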


Open what up? The plugins are just a public manifest file pointing to an openapi spec. It's just a public formalised version of what langchain asks for.
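
For example, something like this should be enough to pull a plugin's model-facing description and its OpenAPI spec URL (field names as documented for ai-plugin.json; example.com is a placeholder domain):

    import json, urllib.request

    # Every plugin publishes a manifest at a well-known path on its own domain.
    url = "https://example.com/.well-known/ai-plugin.json"
    manifest = json.load(urllib.request.urlopen(url))
    print(manifest["description_for_model"])   # the text the model is shown about the plugin
    print(manifest["api"]["url"])              # points at a standard OpenAPI (Swagger) spec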


> That being said, I'd never build anything dependent on these plugins. OpenAI and their models rule the day today, but who knows what will be next.

You cannot assume that what happened with Web 2.0, mobile, and the iPhone will happen here. Getting to tech maturity is uncertain and no one understands yet where this will go. The only thing you can do is build and learn.

What OpenAI is building, along with other generative AI, is the real Web 3.0.

This seems to be the start of a chatbot as an OS.


On the other hand, the level of effort to integrate a plugin into OpenAI's ecosystem looks to be extremely small, beyond the intrinsic effort to build a service that does something useful. (https://platform.openai.com/docs/plugins/getting-started/plu...).


i think local ai systems are inevitable. we continue to get better compute, and even today we can run more primitive models directly on an iPhone. the future lies in low-power compute running models of the caliber of gpt-4, inferring in near-realtime


The technical capability is inevitable, but remember that people hate doing things themselves, and have proven time and time again that they will overlook all kinds of nasty behavior in exchange for consumer grade experiences. The marketplace loves centralization.


All true, but the nature of those models means that a consumer-grade experience while running locally is still perfectly doable. Imagine a black box with the appropriate hardware, preconfigured to run an LLM with chat-centric and task-centric interfaces. You just plug it in, connect it to your wifi, and it "just works". Implementing this would be a piece of cake since it doesn't require any fancy network configuration, etc.

So the only real limiting factor is the hardware costs. But my understanding is that there's already a lot of active R&D into hardware that's optimized specifically for LLMs, and that it could be made quite a bit simpler and cheaper than modern GPUs, so I wouldn't be surprised if we'll have hardware capable of running something on par with GPT-4 locally for the price of a high-end iPhone within a few years.


i don't believe that local ai implies a bad experience. i believe that the local ai experience can fundamentally be better than what runs on servers. average people will not have to do it themselves; that is the whole point. the worlds are not mutually exclusive, in my opinion


Another good alternative is Semantic Kernel - different language(s), similar (and better) tools, also OSS.

https://github.com/microsoft/semantic-kernel/


i have the same question as a data provider


+1, it's great to see OpenAI being active on the open source side of things (I'm from the Milvus community, https://milvus.io). In particular, vector stores make it possible to inject domain knowledge into these autoregressive models as part of the prompt. Looking forward to seeing the different things that will be built using this framework.
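
A minimal sketch of that pattern (the documents and question are made up, and a couple of Python lists stand in for a real vector store like Milvus):

    import openai

    # Hypothetical domain documents; in practice these live in a vector store.
    docs = [
        "Enterprise plans can be refunded within 30 days of purchase.",
        "Refunds are issued to the original payment method.",
    ]

    def embed(text):
        return openai.Embedding.create(model="text-embedding-ada-002", input=text)["data"][0]["embedding"]

    def top_k(query_emb, doc_embs, k=2):
        # dot-product ranking; these embeddings come back unit-normalised, so this is cosine similarity
        scored = sorted(zip(doc_embs, docs), key=lambda p: -sum(a * b for a, b in zip(query_emb, p[0])))
        return [d for _, d in scored[:k]]

    question = "What is our refund policy for enterprise plans?"
    context = "\n".join(top_k(embed(question), [embed(d) for d in docs]))
    answer = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Answer using only this context:\n{context}\n\nQ: {question}"}],
    )["choices"][0]["message"]["content"]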



