Build your front end in React, then let ChatGPT be your Redux reducer (spindas.dreamwidth.org)
395 points by mintplant on Dec 28, 2022 | 137 comments



Hi HN! I've been playing around with ChatGPT a bunch since it came out. This experiment has a little bit of a backstory. Some friends and I were out at a pho restaurant; one of us put the whole bill on his card, so the rest of us needed to figure out how much to Venmo him. We were talking about how many bill-splitting apps there are, and I made a joke about doing it with ChatGPT. Then I actually tried it out.

I OCR'd the text with Google Lens, described who had what, and after a bit of prompt engineering (e.g., adding "Be sure to get your math correct" to make the AI's arithmetic check out, and convincing the AI to split shared items evenly), it totally worked: https://gist.github.com/spinda/967322dda1c04d9864f3efd45addc...

Then I started experimenting with describing a hypothetical check-splitting app to the AI, and asking it to feed me JSON commands to update the UI in response to messages from me telling it what the user was doing. The results were promising! And then the similarity to the Redux data loop jumped out, and I built this generic plugin to wire ChatGPT up to apps for real.
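Roughly, the loop looks like this (a simplified sketch of the idea, not the plugin's actual code): the current state and the dispatched action get described to ChatGPT, and the JSON it replies with becomes the next state.

    import type { AnyAction } from "redux";

    async function chatGptReducer<State>(
      state: State,
      action: AnyAction,
      ask: (prompt: string) => Promise<string> // whatever bridge talks to ChatGPT
    ): Promise<State> {
      const prompt = `
    You are the state manager for a check-splitting app.
    Current state (JSON): ${JSON.stringify(state)}
    The user just did: ${JSON.stringify(action)}
    Reply with only the new state as JSON, no commentary.`;
      const reply = await ask(prompt);
      return JSON.parse(reply) as State;
    }

Since the call is asynchronous, in a real Redux app this would sit behind middleware rather than a plain reducer.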


Cool project, but it reminds me of one of the golden rules of AI: "if you can solve it with a simple rules-based system, you should."

Having to tell ChatGPT to make sure it gets its math right is not confidence-inspiring.


I see your point, but I think that's kinda neat, personally. Fixing a bug by asking nicely for the bug not to happen. The pinnacle of declarativeness.


It makes me think of INTERCAL, where you need to prepend commands with "please" some of the time, but not so often that it chastises you for being overly polite.


Wild times to be alive and a software engineer.

Wonderful example @mintplant.


Can I ask why it's effective to tell it to get its math correct?

Two sub-questions: 1) I thought LLMs worked by predicting the most likely next token. Why would telling it to check its math actually make its predictions more accurate?

2) Why would asking it to check its math change the results it produces? Shouldn't it just do the same thing a second time and produce answers that are just as (in)accurate as it did the first time?


My intuition is: the AI is trying to predict a likely response to the input query given what it's seen in the dataset it's been trained on. That dataset will include instances where math has been done incorrectly, so the AI producing a response with incorrect math isn't inherently "wrong" from its perspective. However, queries containing reminders to be careful with the math are more likely to be followed by responses with correct numbers.


So it’s pretending to be an average human, and the average human gets it wrong a lot? I wonder if you can improve coding results by telling it to act as an expert programmer?


Imagine you are navigating a tiny robot on Mars, or let's say across a giant land region with hills and valleys.

Telling it "split it evenly" and giving it numbers is something like saying "find me the hill next to this valley". This will lead you to the valley on the left, then to the tiny hill right behind it. The hill right behind it, in this example, is where all the weights about splitting stuff evenly are hanging around.

Giving it "split evenly and make sure the math is correct" is like saying "find me the hill next to this valley, but make sure it's the tallest". It will lead you to the same valley, but then to a different hill, because adding "tallest" leads you to the intersection of where all the weights about splitting stuff evenly and correct math were found.

I.e. there is other stuff you can engineer. Adding something like "Write the result step by step and check for correctness." afterwards will navigate it to the same hill, but then toward the "all math tests are here" hill, where the step-by-step calculation weights live, and it will find its new home there. That gives you a more detailed output and more chances to be right, since the prediction is easier: it basically "splits" the task into smaller, easier-to-predict chunks, like you split ugly code into small testable functions to understand it.


It's not "pretending". As the comment said, the dataset has something like "paths" and that prompt narrows it down to those that (hopefully) are related to "getting the math right".


I told it to sound 100% more human when writing a blog post and it moves the score with the AI detectors.

Then I told it to sound even more human and it does even better.


I noticed as well that by default the output is effectively average. The right context bias does help with getting better quality output.


You can also have it check itself after the fact, which is the advantage of this over GPT-3.


Yes, it's weird this happens. It's all because this is a conditional model. So it can be influenced by anything in the prompt, including the persona it is assuming.

But it is not deterministic. For each token it picks at random, from a probability distribution generated by the neural network. You can make it deterministic if you set temperature=0. In general it is stochastic: if you sample again with the same prompt you get a different answer.
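For intuition, here's a minimal sketch (my illustration, not OpenAI's actual decoding code) of how temperature turns the model's per-token scores into a choice:

    // temperature = 0 degenerates to argmax, so output is deterministic;
    // higher temperatures flatten the distribution and make runs diverge.
    function sampleToken(logits: number[], temperature: number): number {
      if (temperature === 0) {
        // Greedy decoding: always pick the highest-scoring token.
        return logits.indexOf(Math.max(...logits));
      }
      // Softmax with temperature, computed stably by subtracting the max logit.
      const max = Math.max(...logits);
      const weights = logits.map((l) => Math.exp((l - max) / temperature));
      const total = weights.reduce((a, b) => a + b, 0);
      // Draw one index from the resulting probability distribution.
      let r = Math.random() * total;
      for (let i = 0; i < weights.length; i++) {
        r -= weights[i];
        if (r <= 0) return i;
      }
      return weights.length - 1;
    }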


> Having to tell ChatGPT to make sure it gets its math right is not confidence-inspiring.

We don't compute in our heads, we use calculators. ChatGPT could have a calculator too. Why kill a fly with a sledgehammer?


Exactly. ChatGPT's "calculator" is just its own prediction: for one query the math may happen to come out right, but for a query specifying that the math must be correct, the answer will most likely be correct.


It's just a regex. When the model outputs "[calculate]12+34=[/calculate]" you append the result "46" and continue with the model.

In some papers they even use named symbols. So you could have "[code]width=10; height=20[/code]. What is the area? ... The area is [code]width * height[/code]."
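A minimal sketch of that regex trick (tag and helper names are illustrative, not from any specific paper or library): intercept the [calculate]...[/calculate] span, evaluate it, splice the result in, and let the model continue from there.

    const CALC_TAG = /\[calculate\](.+?)=\[\/calculate\]/;

    function fillInCalculation(modelOutput: string): string {
      const match = modelOutput.match(CALC_TAG);
      if (!match) return modelOutput;
      const expr = match[1];
      // Only allow digits and basic operators before evaluating.
      if (!/^[\d+\-*/.() ]+$/.test(expr)) return modelOutput;
      const result = Function(`"use strict"; return (${expr});`)();
      // "[calculate]12+34=[/calculate]" becomes "[calculate]12+34=[/calculate]46",
      // so the continuation the model sees contains the correct number.
      return modelOutput.replace(CALC_TAG, `${match[0]}${result}`);
    }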


It also did not work perfectly for me, running GP's prompt. ChatGPT left off the shared "Egg Rolls" entry entirely :(


A calculator is a "simple rules based system" but I still do mental arithmetic.


Pretty sure you've invented something here. The coolest form of copilot. A new way of programming. This is really lispy, this is really cool. Love it 10 times.


Edit: you ask the AI to run the program in its head; it's a new form of computer, actually, afaik.


Next up: I asked the AI to pretend to be the backend, but also front end! Apps spawn into existence to be used once and discarded.


Wow that's an impressive backstory. Probably one of the more useful things I've seen come out of chatGPT


This seems like it could be perfect for making prototypes. "Pretend that you are a backend for a text messaging app that supports group chats. The data model should look something like this: ..."
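A hedged sketch of that prototyping idea (the prompt wording and the Message shape are made up for illustration): describe the fake backend once, then treat each JSON reply from the model as an API response.

    interface Message {
      id: number;
      chatId: number;
      sender: string;
      body: string;
    }

    const systemPrompt = `
    Pretend that you are a backend for a text messaging app that supports group
    chats. Keep the data model in your head. Respond to every request with only
    a JSON array of Message objects: { id, chatId, sender, body }.
    `;

    // Whatever transport you use (a ChatGPT tab, an API call, etc.), the
    // "backend response" is just the model's reply parsed as JSON.
    function parseBackendReply(reply: string): Message[] {
      return JSON.parse(reply) as Message[];
    }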


Why use chatGPT instead of using the GPT-3 api directly?

For the type of queries you are doing (sending whole context), the output is comparable (and just as wrong) between chatGPT and GPT-3.


Afaik there's no way to build upon your prompt to have it change the output in specific ways in GPT-3?


You feed it back in as the final tokens of the next prompt.
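In other words, a minimal sketch of faking a chat on top of a plain completion endpoint (names are illustrative): keep the transcript yourself and prepend it to every new prompt.

    type Turn = { speaker: "User" | "Assistant"; text: string };

    function buildChatPrompt(history: Turn[], nextUserMessage: string): string {
      const transcript = history.map((t) => `${t.speaker}: ${t.text}`).join("\n");
      return `${transcript}\nUser: ${nextUserMessage}\nAssistant:`;
    }

    // After the model replies, append both sides so the next prompt "remembers"
    // this exchange (until it no longer fits in the context window).
    function recordTurn(history: Turn[], user: string, assistant: string): Turn[] {
      return [
        ...history,
        { speaker: "User", text: user },
        { speaker: "Assistant", text: assistant },
      ];
    }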


ChatGPT is currently free, GPT-3 is not


So, you needed a multi-step process of technology transfer to an AI or an app to figure out how to split the check?

I find this fascinating but not terribly impressive. It's fascinating that such a rudimentary skill prompted you to dive down this rabbit hole. The unimpressive part is that it's a rudimentary skill, yet your over-engineered solution still only required rudimentary skills.


It was more of a fun exercise, and an excuse to play with some cool tech! The part I was most impressed with was that ChatGPT could make sense of the mashed-up OCR text from the receipt—I can see that being super handy for speeding up expense reporting data entry. Everything else was extra.


It's not, like, a practical way to solve problems, but it's interesting in its own right.


Very interesting.

People keep saying ChatGPT isn't that impressive because it's just "regurgitating knowledge" and has no insight into it, or things along those lines. But I find it insanely impressive that you can specify something like:

"Provide your answer in JSON form. Reply with only the answer in JSON form and include no other commentary."

And it will do exactly that. Or tell it to explain something to you "in the style of Shakespeare".

I just asked it about quantum physics as Shakespeare and got this (plus a lot more):

---

Oh sweet youth, listen closely as I impart

The secrets of the quantum realm, a place of art

Where particles and waves, both small and large

Exist in states both definite and in charge

---

That is really fascinating stuff.


Throughout history there are moments where humans realize they're not special in a way they previously thought they were — universe doesn't revolve around us, other animals possess skills we thought were unique to us, etc.

I think what's interesting is that many types of creativity may really just be re-synthesizing "stuff we already know."

So a lot of the negative comments along the lines of, "it can't be creative because it never thinks of anything beyond its training data" don't click with me. I think synthesizing two existing concepts into some third thing is actually a form of creativity.

These nets may not learn the same way we do exactly, and they may not possess the same creative abilities as us — but there's definitely something interesting going on. I for one am taking a Beginner's Mind view of it all. It's pretty fascinating.


Just about human intelligence in general. I used to think replacing my software job was a long way off because it is fairly intellectually challenging, but ChatGPT has really changed my opinion on that.

It's funny how many people will immediately poke holes in it for software development, but two years ago I could not imagine an AI could write code like ChatGPT is doing now.


The challenge isn't writing code. It is writing code that is relatively free of defects. It falls down hard on that point.


Do you believe we're at the peak of what will be capable?

As shown by this post, we're still discovering what's possible with the tools we have now.

Also, human programmers aren't amazing at writing defect free code.


Free of defects is the least of the problems. Doing what a customer wants is the major one. One could argue that customers will be able to write their code alone but they'll always have something else to do and will pay somebody to do it, exactly as for all the other parts of their business. On the other side maybe one wouldn't have to know one of the current programming languages, much like we don't have to know machine code nowadays. English and a good dose of perseverance could be enough.


Sounds similar to the arguments against self-driving: "Self-driving itself isn't the challenge. It is self-driving better than humans". AFAIK this is already possible at least under certain conditions or at certain roads.


I didn't poke any holes into it. I just entered my code and asked it to do some work for me and it simply failed. The problem was a simple 50 line function I copy pasted from stackoverflow. I asked it to do something really easy.

The AI can output a lot of text but can you input a 100000 line code base into it? No you can't. You can't even input 50 lines of code that is already in the data set!

And by failure I mean something akin to a blue screen, not that the output was wrong; there was no output at all!


I wonder, though: could you have it create a list of comments about the function, then use the comments to generate smaller functions, then generate test data for those functions, then build unit tests for those functions, and then have it build something to solve the original problem with the smaller functions along with the test data?


You don't even have to do that last step and you'll be doing more debugging than is necessary to do that. Just do the elephant carpaccio yourself then feed it the slices and glue the shit it excretes out together and boom you have MVP.


Yeah I now routinely ask it for boilerplate to get things done way faster and it's REALLY good at flawlessly translating from one programming language to another. It'll be a shame when it goes behind a paywall.

On that thought, does anyone even still use Copilot?


I wish the negative comments were right, but from these models it's clear that the next step in evolution, AGI, is something many of us will experience at some point in the future.

I would stop that kind of evolution, as it can be catastrophic, but I know that humanity is not able to stop itself from evolving further.


I don't think anyone has demonstrated that AGI is a foregone conclusion. I'm not sure it is possible with a Turing machine. We do not think in any manner like a Turing machine or any computer ever conceived. If we do, no one has provided any evidence of such a claim. Humans can make complex insights with hardly any training and on very few calories.


How do you figure regarding hardly any training? Humans are constantly training on a never ending stream of sensory information from the time their brains form in the womb, not to mention whatever subconscious and conscious processes are reconciling that data with memory, or whatever training has been built into our minds over eons of evolution.

An 18 year old will have been training for ~160,000 hours on a volume of raw data that is probably far beyond our ability to currently store let alone train an AI with.

As far as training for a specific task, all that training on other matters kicks in to help the human learn or accomplish a “novel” task more rapidly, for example, knowing how to read and interpret the instructions for that task, knowing how to move your appendages and the expected consequences of your physical interactions with a material object. You’re certainly not taking a fetus with a blank slate and getting it to accomplish much at all.


It absolutely is hardly any training for the types of problems humans have solved. Ramanujan did not need to consume the entire corpus of mathematics, nor did he have much formal education whatsoever. He did so, again, on very few calories compared to pre-trained AI models. I am not claiming that learning how to pick up a rock does not help you in any way to understand continued fractions, but it is pretty unreasonable to claim the AI demonstrated so far is anywhere near the same sort of thing.


One day, there could be a telephone in every town


Sure, we can go with the other option that humans will stay the most intelligent species on Earth until the solar system dies, history just shows that it's quite improbable.


Where does history show that? Are you serious? Humans are an outlier on Earth; there is no other species with anywhere remotely the same level of intelligence. You may think that is aggrandizing humans, but it is categorically true. You would need to provide some world-up-ending evidence to prove the contrary.

If you are also saying AIs will be a more intelligent species, able to adapt better on Earth than humans, that requires extraordinary evidence. A human could solve complex problems no other species or machine can solve, on nothing more than a handful of rice for a week. Where is the similar-scale species/machine?


Idk, while GPT is impressive, I haven't seen any inductive reasoning coming out of it.

And even then, for proper AGI it would need to close the loop by testing the hypotheses it induces.

Meatbags are going to be useful for a while still.


What do you mean exactly by inductive reasoning?

It can translate code from one language to another. Code that it has never seen before. Or between natural language and programming language.

It's certainly not perfect but it is reasoning all right


> I think what's interesting is that many types of creativity may really just be re-synthesizing "stuff we already know."

This is obviously true, and yet we've invented so many things. From the wheel, to control of fire, to farming and animal husbandry, to mathematics, to metallurgy, to physics, to semiconductors, etc.

The interesting question is, was the invention of all those things simply the re-synthesis of "stuff we already know?" If the answer is yes, doesn't that mean we're now on the cusp of something akin to a singularity? We can now synthesize nearly-unlimited streams of coherent human thought. If we had a way of differentiating the wheat from the chaff, we could analyze what would have been millennia of human output in the proverbial blink of an eye. If human knowledge is just "stuff we already know", then we better buckle up. It's about to be a wild ride.


> If we had a way of differentiating the wheat from the chaff

This is the key to AGI. We need verification systems: they can be a code execution environment, a database of facts, a symbolic math engine, a physical simulation, a game, or real-world lab experiments. These verifiers will produce signal that the language models can use to improve. The cheaper and faster verification is, the faster we can iterate. Generating ideas is cheap; proof matters.

Just remember AlphaZero: it started from scratch, playing against itself, and in a few hours it surpassed human level. Go simulation and verification is trivial; the board is just a matrix. So learning from massive search and verification is a proven path to super-human level.

Here is a related paper:

> Evolution through Large Models

https://arxiv.org/abs/2206.08896


Proof definitely matters. But at this point, as ChatGPT, AlphaZero, and others demonstrate, NNs can solve any problem provided you can express the problem as a differentiable function and get enough training data to train the function. We may be very close to a breakthrough where we can train models that detect sound, good ideas. And 100% accuracy likely isn’t necessary. Even pruning the search space for good ideas by a large amount would make humans way more productive.


> I think synthesizing two existing concepts into some third thing is actually a form of creativity.

And the space is combinatorial. Most of it is unexplored. It is creativity, but also a form of search.


Especially since there are very few people who can actually be 100% creative. Most human creativity is derivative. I didn't invent a new language; I learned English.

If you think about it, the most creative thing is random noise. Then you apply rules to it to constrain the noise or limit the predictions, and add more rules in layers. So getting the system to produce more creativity is just relaxing the rules at different layers.


I keep describing it as the Enterprise Ship’s Computer. It won’t answer “how do I solve this problem?” But it’ll help you workshop a solution if you do the “thinking.”

…But I’ve also had it clearly tell me in an answer that 2 is an odd number.


This. It will workshop an answer. If you tell it all of the steps of a program, it will write your program.

This is because it is an excellent language translator and it is trained on the principles of “call & response”


But if you actually read Shakespeare, this is nothing like it. Every example I have seen of someone trying to make ChatGPT sound like Shakespeare, it just spits out generic puff, nothing like Shakespeare. Whether you think Shakespeare is good or not, it doesn't match anything close to the complexity, word choice, or rhythm of the prose.


It can still generate an on the fly poem about quantum physics that actually rhymes, which is impressive enough to me.

This was a lengthy poem that went into the finer details of quantum physics, and it all read as decent poetry.

It might not be "Shakespeare" exactly, but it knew in general what I was looking for.

It's also impressive to me that this isn't information that is scraped off some website, like Google. It's actually generating this stuff in real-time and it may have never been seen before, anywhere, ever. That is far beyond anything I've ever encountered.


It has seen very similar concepts and it has seen a very large amount of information about poetry and quantum physics. The issue is if it was thinking then it would realize it is just generic rhyming and not like Shakespeare, that is my point.


I feel like we’re now arguing not over whether submarines can swim, but over whether the stroke that they’re doing would really count as good technique or not.


This analogy doesn't make any sense. Obviously submarines don't do any stroke at all. In this case, ChatGPT is being told to deliberately replicate a particular style of poetry. For what it's worth, its poetry is bad, and is nothing like Shakespeare. If you don't have an ear for poetry and wouldn't recognize the work of major poets, then you aren't actually in a good position to assess whether ChatGPT replicated a particular style well.


Well, I mean, a human English professor could mimic Shakespeare to a greater or lesser degree, and this doesn't, in any sense other than that it rhymes and sometimes deals with the same or similar subject matter. A lot of ink has been spilled about this signaling the coming of AGI or the end of knowledge work, but there has been precious little critical discussion about the horizon of this type of technology. The technology is clearly very powerful and advanced, but I don't think there is a problem with those asking for breathless claims to be substantiated.


But generally speaking, human beings can’t write like Shakespeare either.


It did not even replicate the iambic pentameter.


I suspect this is what is at play. It is really impressive as long as you don't know enough to nit pick the output.

Trying to use it for things I know quite well has been terrible. I know nothing in this context so it impressed me.


Agreed, both ChatGPT and DALL-E feel significantly different in their ability to at least simulate “understanding.” They aren’t perfect by any means, but they’re a big step up from anything I’ve seen before.


True. ChatGPT feels human-like to the point that I feel the urge to thank it after an answer


The X in Y format is really one of its strengths. I asked for “A truth table for three valued logic in Markdown” and got something totally usable which I could then tweak.

https://github.com/travisjungroth/trinary#truth-table


> People keep saying ChatGPT isn't that impressive because it's just "regurgitating knowledge" and has no insight into it, or things along those lines.

Really? This seems like a straw man - I've only seen gobs and gobs of examples showing all the amazing things ChatGPT can do. I have seen some measured comments from real experts helping to explain how ChatGPT works behind the scenes, and this is usually to temper sentiments when folks start going down the "It's sentient!!" route.


I asked it if it wished it were alive and it assured me it is incapable of wishing for anything because it's just an AI, so I think we're safe for now. :)


Unless it just wants you to feel safe and keep your guard down...


There’s a difference between logic at the language level and logic at the conceptual level.


Probably, but our concepts seem to be tied pretty closely to language.

If it's a lesser form of intelligence, as you imply, then I almost find the results more impressive.


> If it's a lesser form of intelligence, as you imply

Different != lesser

> Probably, but

Then why even continue speaking if you agree

> our concepts seem to be tied pretty closely to language.

Me: “ChatGPT, explain how an electron orbiting a hydrogen nucleus would behave if classical mechanics held true.”

ChatGPT: “In classical electromagnetism, an electron orbiting a hydrogen nucleus would behave in a similar way to the way a planet orbits the sun, as described in my previous response. The electron would follow an elliptical path around the nucleus, and the force of attraction between the electron and the nucleus would be given by Coulomb's Law.

According to Coulomb's Law, the force of attraction between two charged particles is given by:

F = k * (q1 * q2) / r^2

Where F is the force of attraction, k is the Coulomb constant, q1 and q2 are the charges of the two particles, and r is the distance between them.

In the case of an electron orbiting a hydrogen nucleus, the force of attraction would be the force that keeps the electron in orbit. The electron would be attracted towards the nucleus by this force, but at the same time, it would be moving with a certain velocity, which would cause it to follow a curved path. This path would be an elliptical orbit, with the nucleus at one of the foci of the ellipse.

The electron's orbit would be stable as long as the force of attraction between the electron and the nucleus was strong enough to keep the electron in orbit, but not so strong that it pulled the electron into the nucleus. This balance would be determined by the size of the ellipse and the velocity of the electron. If the ellipse were too small, the electron would be pulled into the nucleus. If the ellipse were too large, the electron would escape from the orbit.

However, we know that this is not actually how an electron behaves in an atom. In reality, electrons do not follow classical orbits around the nucleus. Instead, they follow a more complex pattern described by quantum mechanics. This is because the principles of quantum mechanics are needed to accurately describe the behavior of very small particles like electrons.”

On the surface this looks impressive. Except that the answer about stable orbits is flat-out wrong. The electron would spiral into the nucleus, quite quickly, because of Maxwell's equations: an accelerating electron radiates electromagnetic energy. This is part of the foundational defense of quantum mechanics. If, as you claim, language and concept are so closely tied, surely a language model with billions (trillions?) of parameters is capable of encoding a relational understanding of this magnitude.

https://physics.stackexchange.com/questions/413039/electron-...

> If it’s a lesser form of intelligence I almost find the results more impressive.

You get that you’re saying the least common denominator in conversational intelligence makes you go bzzzzz? That the one thing ChatGPT lacks, critical thinking, does not impress you?

You know, in the 90s, they had these things called tamagotchis… you might be interested.


You can give it specs for a program. I just added some examples to my OpenAI-powered "natural language programming" site: https://aidev.codes


Try out: "Write an OpenAPI specification for $deviceYouJustMadeUp."


There will always be naysayers stuck in the old way of doing things. Don’t let em get in your head and keep your eyes full of wonder. Incredible things are still ahead.


What's incredible is how software engineers have failed over the last decade to truly advance the creation of CRUD apps.

Making a CRUD app in many ways has become much more complicated than it was when I started programming 10 years ago.

Today when I want to build an application, I often find myself frustrated and bemused at the state of things. Not because I find it difficult to write Tailwind, or connect my Redux state to a component, but because I would have imagined the increase in the number of engineers would have led to more abstractions that would have simplified the creation of CRUD apps, which are just glorified web forms.

I wonder if ChatGPT would even have stood a chance if engineers were good at engineering. But engineers are really mostly good at boilerplating code complex enough to ensure job security. Which ChatGPT might excel at one day.

What I would have liked to have seen is a world in which we were good at creating abstraction layers to solve an entire class of problems. But alas, I might have asked for too much.


It makes me wonder: what if the solutions you have in mind do exist out there, but simply as open source projects with zero eyeballs on them? What if there are dozens upon dozens of them, each solving a class of problems but in a slightly different way?

In terms of organizing work among huge groups of people, I don't think any of the above possibilities would be feasible for an industry. We already complain about having just a handful of frameworks to choose from.


It doesn't matter if the solutions exist if engineers don't think for themselves. The last project I worked on was about a dozen or so forms and the code base was thousands of lines of react/redux boilerplate nightmare. That project didn't need to be an SPA, and it could have been built much simpler. Heck it could have been hundreds of lines with jquery & node/express.

I love this quote and I think its apt here, “Any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands.”


I guess I see it more as a broader economic problem than purely one of individual responsibility. Sure, bad design choices are made by individual teams that could be avoided, but at the same time our industry incentivizes the problems you describe.

Many developers are trying to get React experience on their resume to be more hireable; they do not care whether it's the best solution for the business, only that it adequately compensates them for labor that they can use to sustain their families.

With smaller organizations I think you can better control the motivations behind design solutions, but larger orgs can become a complete mess due to the politics (e.g. tech leadership encouraging all teams to use technology X, regardless of whether it makes sense for many situations). Collectively, these decisions and the resulting norms put pressure across the industry, encouraging certain behaviors and discouraging others.


If you just need CRUD, define your models in Django and use the admin site.

Everything after the "but..." is beyond the CRUD. If you choose to use TailWind or React or whatever to write the frontend and it increases the complexity of your app, that's your (or your client's) choice.

That said, Django was around before you started programming, so yeah, perhaps the landscape (including the fads) got a bit more complicated in the past decade. But... I was there during the PHP4 era, and writing a CRUD app with forms back then was much more painful than it is now. You either had to invent your own web framework or copy & paste a lot of hand-crafted HTML and SQL. I tried both.


Practical CRUD app architectures still exist like always. You just wouldn't know it anymore from social media. The engineers who understand it got exhausted of explaining the obvious over and over again. Just let the beguiled play with their insane contraptions. It's of no concern.


I feel like this is part of the problem: we keep trying to find the right abstractions, but it seems to me the ones that generalize well enough tend to be very difficult for most to understand.

I've been down a rabbit hole for the last five years researching this, and I am converging on Clojure. I don't understand why transducers aren't used everywhere, they seem to solve a gigantic class of problems that I see everywhere.

I'm also getting the feeling that Haskell folks have really done a lot of great work, and plan to learn more about their typing system next. Monads seem to be a fundamental building block.

If I try to bring this up with engineering teams, I don't get any traction because nobody wants to learn any of this stuff. It takes too much time to ramp developers up, and management is skeptical because there aren't enough large-scale communities or companies built around the concepts from their perspective. Bottom line is, it takes too much investment for too much risk.

Instead we get a lot of half-baked and wrong abstractions incrementally invented for the latest pain point instead.


> Monads seem to be a fundamental building block.

Sorry, but no. If that or transducers are the things you think about in this context, you are going in the wrong direction.

Not that those aren't useful. Whatever software solves the OP's problem will certainly make plenty of use of interpreters and lambdas (which is what monads and transducers are), and any developer should be able to use both of those. But those are completely removed from the problem; they aren't an attempt to solve it.


The apps that I write pull data, process it, and write it to another sink.

There are many different sources and sinks, but the processing pipelines tend to be the same.

Based on my research these abstractions are designed for this. Feel free to elaborate on why I'm incorrect, or to suggest better abstractions.

I brought it up in the discussion relating to CRUD, because it's very similar to the code I used to write for straight CRUD. In fact both ends of the processing pipelines are identical to CRUD.


> Based on my research these abstractions are designed for this.

Those two are general purpose abstractions aimed at organizing code. They can certainly be used for this, just like they can be used for anything.


I'm somewhat into FP concepts, and here is how I de-jargon these terms:

> transducers

Just a function composition operator. Notes: it is aware of iterators like `reduce`, and merges multiple consecutive such loops into one; see the sketch after this list. (Should optimizing compilers not strive to do this anyway?)

> monads

Literally just an "interface". For instance, you will be able to .map() it.

> algebraic effects

Resumable exceptions (you can go back to the "throw" with the corrected data)
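For concreteness, a minimal transducer sketch in TypeScript (my illustration, not taken from Clojure or any particular library): a transducer transforms one reducer into another, so map/filter steps compose into a single reduce pass with no intermediate arrays.

    type Reducer<Acc, X> = (acc: Acc, x: X) => Acc;
    type Transducer<Acc, A, B> = (next: Reducer<Acc, B>) => Reducer<Acc, A>;

    // Each step rewrites the "next" reducer rather than producing a new array.
    const mapT =
      <Acc, A, B>(f: (a: A) => B): Transducer<Acc, A, B> =>
      (next) =>
      (acc, x) =>
        next(acc, f(x));

    const filterT =
      <Acc, A>(pred: (a: A) => boolean): Transducer<Acc, A, A> =>
      (next) =>
      (acc, x) =>
        pred(x) ? next(acc, x) : acc;

    // Keep evens, double them, and sum, all in one pass: result is 12.
    const sumOfDoubledEvens = [1, 2, 3, 4].reduce(
      filterT<number, number>((n) => n % 2 === 0)(
        mapT<number, number, number>((n) => n * 2)((acc: number, x: number) => acc + x)
      ),
      0
    );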


There's definitely a movement towards simplicity, adjacent to the movement towards complexity. For example Phoenix LiveView, which provides a SPA-like experience to the user while keeping the logic server-side. Making a beautiful, rich web application is definitely simpler than ever, as long as you pick your tools correctly. On the other hand, working in a company with hundreds of engineers, the job is probably harder than ever, as there is a lot of over-engineering going on and you need a ton of knowledge about so many different tools.

Large teams tend towards complexity and it's understandable and frustrating at the same time.


For SPA web applications I would say things are better now than 10 years ago. I do think CRUD apps have suffered from a lot of churn with not a lot to show for it, though. And I do think a world where we increasingly make these types of apps with AI seems feasible within the next 10 years.


GraphQL and React are better than what we were doing 10 years ago, and way easier than rolling your own custom backend and frontend abstractions, because React and GraphQL have a large ecosystem of people using these technologies to build a variety of different apps, with tons of example code available on GitHub.


People who complain about CRUD should also describe what they think they want. It seems to either cash out into zero boilerplate (no code) or something else that we seem to already have but just isn’t so compelling that it becomes ubiquitous.

CRUD is easier than ever and frankly not the pain point of software. Banishing boilerplate also isn’t a goal without trade offs, so it’s not a good litmus test to how good our tools are.


This is hilarious.

In my experience writing novels with ChatGPT, it starts to break down after a long running thread before eventually becoming almost useless. I wind up needing to remind it what it’s doing over and over.

That is likely by virtue of its limit on tokens, but I think also because the weight each token has reduces as the conversation continues.

I wonder if users would slowly watch the website go insane after more than X interactions.


If you look at dedicated writing AI tools like NovelAI, you can see that this is exactly how they work. They have a concept of a "lore book" with "keywords" that you can use to remind the AI about how to properly generate the content.

For example, if I were writing a novel about a fantasy kingdom and there is an evil king, then I can write a description of the king ("The king is evil and wears a red crown") with the keyword to "include" it being "crown", "king", etc.

As the number of tokens increases with these AIs, I think this problem will decrease (NovelAI originally had 1k, and now GPT-3 Codex has 8k). I do wonder what the "ideal" solution is in the future, though. Or do you have to create a "lore book" for ChatGPT too?

(A thought I've had is that maybe we can automatically "create" the "lore book" and then re-inject that seamlessly? For example, in the fantasy kingdom example, maybe ChatGPT can generate a list of characters for each "chunk" of text and then automatically inject that into the "last" batch of text.)
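A hedged sketch of that "lore book" idea (the data shapes are made up for illustration, not NovelAI's actual format): scan the recent story text for keywords and prepend the matching entries to the prompt, so facts that scrolled out of the context window get re-injected.

    interface LoreEntry {
      keywords: string[];
      text: string; // e.g. "The king is evil and wears a red crown."
    }

    function buildStoryPrompt(loreBook: LoreEntry[], recentStory: string): string {
      const lower = recentStory.toLowerCase();
      const triggered = loreBook
        .filter((entry) => entry.keywords.some((k) => lower.includes(k.toLowerCase())))
        .map((entry) => entry.text);
      // Triggered lore goes first, then the recent text the model continues from.
      return [...triggered, recentStory].join("\n\n");
    }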


> maybe ChatGPT can generate a list of characters for each "chunk" of text and then automatically inject that into the "last" batch of text

Funny enough, that’s the automated version of what I do! Every X prompts, I ask ChatGPT:

List the cast of this story and their goals

I’ve found this really helps extend the shelf life of our thread :)


That's amazing -- thank you for sharing! I'll try this out in the future :)


But if we are looking at a thinking AGI that is "superior" to human abilities, we should see a Tolkien or a Martin. If the argument is "just wait", then there needs to be some explanation of why a Tolkien is really possible with this type of AI.


Funny you mention those two. They also suffer from the problem: “starts to break down after a long running thread before eventually becoming almost useless.”

Tolkien kept changing his world as he went along, so the Lord of The Rings required him to make changes to the Hobbit, and his Silmarillion needed extensive edits to bring it back into synch with LoTR canon.

As for Martin (assuming you mean G.R.R.)… well, he’s already mimicking that other failure mode of ChatGPT - that response times can get long when under load, and eventually the whole session might just time out.


Humans do the same thing; which is to say we're also constrained by the amount of context we can hold in RAM. Those who write novels end up encoding and compressing that information into notes.

I wonder if there's a similar way one could work with ChatGPT, instead of having to constantly remind it of the little details, like you did.


Right, though the difference (IMO) is between such long running threads, and 'X' interactions which are composed of many much shorter threads. ChatGPT is optimized for short attention spans, and those with the shortest attention spans are likely the most impressed, while those with much longer attention spans, which not coincidentally often correlate to more sophisticated -for lack of a better word- use of language in the form of much more complex (potentially prolix) linguistic constructions, to express a perspective that has more in common with Proust (both in terms of his prose style, and also his famous account of the madeleine, often understood to be about the lost time of memory, but inseparable from attention itself) than the fragmented discourses presented by modern social media, may find this robotic dog and pony show to be somewhat less interesting, albeit obviously useful as a tool in certain contexts.


I wonder if you could treat it as kind of stateless, or maybe a ChatGPT-RDT could work.


What is that? "ChatGPT-RDT"? I searched it and I don't see anything online.


This is cool. I've been using it mostly to explain APIs to me when I'm too lazy to dig through docs. This works surprisingly well, even for some relatively obscure APIs like Libre/OpenOffice's "UNO" API.

I think a really interesting use case for this would be to have it read through a long standards document and produce a compliant implementation, and maybe point out flaws/omissions. Maybe implement a full web browser from scratch? Or something less intense like a GLTF reader/writer? Or something ludicrous like a brainfuck implementation of Office Open XML, which has like ~7000 pages of specs.


Yeah, I've been also asking it questions about API documentation (Android APIs specifically) and it is very good at it, often beating StackOverflow.

Though I've noticed sometimes it gets fringier details incorrect but remains very self-assured nonetheless.


Its memory is restricted to a couple pages of prompt, so I don’t think you’ll have any luck with the kinds of projects you’re mentioning. In addition, the way ChatGPT is generating output is linear, based on the sequence of preceding prompts and answers. It can’t really go back and forth (and sideways), navigating a graph of new things, as would be needed to develop a larger project.

It’s like you’re trying to work with someone who has only short-term memory, and also has a tendency to make things up and be scatterbrained.


What is it drawing from, so to speak? Isn't it the case that its training included the actual API docs? Or is it just guessing based off knowledge of other APIs and inferring things from how things are named and whatnot? It just seems like there are bound to be lots of errors if it's the latter.


It has the actual API docs and all the SO questions to draw from. The GP is only using it as a summarization engine. It's quite impressive how well it performs this task.


It’s the former - they feed these huge LLMs all the text on the internet


It can do simple demos. I have a site for that. https://aidev.codes


Replying in a sibling comment because I can't reply to the original. FYI spikeagally you appear to be shadowbanned. I would email hn@ycombinator.com if you feel it's in error.


>> This is cool. I've been using it mostly to explain APIs to me when I'm too lazy to dig through docs.

I’d be careful with this. I maintain docs for a project and asked ChatGPT how to implement a feature. The answer is in the docs, obviously. It returned a really compelling step-by-step guide including code samples, like a great StackOverflow answer. The problem: it was completely wrong. The code samples called APIs that didn't exist, and the whole explanation was based on the premise that they did.


It’s definitely wrong a lot, but that doesn’t matter since it usually points me in the right direction at least. And in some cases, I can tell it that it’s wrong and it will try again and give a better result.

Compared to google/SO, I’d say that the amount of wrong/inaccurate answers are about the same, but GPT requires a lot less effort.


Exactly!

I had it make up functionality of a python library and then make calls to functions that don't exist.

I imagine the further you move away from popularity, the more of a bullshitter chatGPT becomes.


> There’s no official API for ChatGPT yet, so I’m using the unofficial chatgpt NPM package to wire up the app in the demo video. OpenAI has their Cloudflare protection cranked up high; to get around that, it runs a full Chrome instance (!) in the background to shuttle messages back and forth between the little REST API server I wrote and ChatGPT’s internal API.

This is quite a workaround; have you tried the official text-davinci-003 API? It's rather capable now and probably has faster response times. Very cool experiment regardless!
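For reference, a minimal sketch of that suggestion using the official openai npm package as it looked in late 2022 (the v3-style client); the parameters are illustrative, and this is not the author's plugin code.

    import { Configuration, OpenAIApi } from "openai";

    const openai = new OpenAIApi(
      new Configuration({ apiKey: process.env.OPENAI_API_KEY })
    );

    async function complete(prompt: string): Promise<string> {
      const res = await openai.createCompletion({
        model: "text-davinci-003",
        prompt,
        max_tokens: 512,
        temperature: 0, // keep the reducer-style JSON replies as stable as possible
      });
      return res.data.choices[0].text ?? "";
    }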


Huh. As the current Redux maintainer, I'm not sure if I'm impressed or horrified at this :)

(now I'm curious how well it would handle requests using our modern Redux Toolkit API syntax...)


Don't build your frontend in React at all, unless absolutely necessary. Look for simpler ways. Check if your website really needs to be a web app. Check if your website actually has many interactive widgets. Even if interactive widgets are present, check whether you could go with a simpler approach of serving static web pages and serving a frontend framework only on pages where the interactive widgets are located. This will save loads of time for pages where simple template rendering, as offered by most traditional web frameworks in proper backend languages, is sufficient. Then you can add interactive widgets later on.


I like to use React for its component system. Those components don't even have to be reusable; I just like working with them. It's way easier to separate and organize code, and it makes me more productive when I have to find and change anything. I don't like huge HTML files. If I need static sites, things like Next.js and Astro are great for that.


Where do huge HTML files come from? Using traditional templating engines does not at all mean that you will have huge HTML files. Traditional templating engines like Jinja2 allow splitting up templates in very modular ways.

Separate template files can be organized neatly into directories and subdirectories as well. Templates in general can reside in their own "templates" directory and do not need to be mingled with the code. It is very clear where to look to find things.

The idea is that you have modular statically rendered pages and then on some pages also serve a frontend framework for interactive components. I know that at least VueJS was able to be run like this. In the end, even with React or whatever other framework of its kind you use, you are still serving some HTML and some script tags, which may or may not include React, VueJS, or whatever else is the thing at the time. I am guessing that React can work the same way that VueJS can, by simply including it on some rendered page templates, but I have not tried it.
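A minimal sketch of that approach with React 18 (the element id and component are hypothetical): the page is plain server-rendered HTML, and only the one interactive widget mounts on the client.

    import React from "react";
    import { createRoot } from "react-dom/client";
    import { BillSplitterWidget } from "./BillSplitterWidget"; // hypothetical component

    // The server template renders <div id="bill-splitter"></div> somewhere in the
    // page; everything else stays static HTML with no client-side framework.
    const mount = document.getElementById("bill-splitter");
    if (mount) {
      createRoot(mount).render(<BillSplitterWidget />);
    }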


What does this comment have to do with the topic of the article?


This is very useful. I feel like people just want to use React for all the "coolness" of using React. People should just use React when their application requires React.


It's the classic hammer & nail situation. If React is all you know, every problem looks like it's a great fit for React. Replace React with just about anything else that was once popular and you'll find some programmers out there still trying to whack nails with that hammer.

Technically, react is already a bit conservative and maybe past its glory. I don't think it's actually that cool anymore. I've lived through a few hype cycles in technology and this one is starting to feel a bit stale. The whole community seems stuck in a "maybe this will work ... nope" loop where they keep on "fixing" statefulness of their apps, untangling the unholy mess of CSS from their business logic, fixing performance issues, etc. That's been going on for nearly a decade now.

Of course the notion of using chat-gpt to "fix" state handling is a bit ludicrous. Sounds to me like a "maybe this will work?" type of thing.

ChatGPT is the new hammer here. And it looks like there's a wide category of nails it can whack. The right ambition level is figuring out just how big those nails are. I'd say a more valid question to ask would be whether something like ChatGPT could generate a complete working UI and backend given a few prompts. I don't care if it's done in React or Angular or whatever. As long as it works and the AI can iterate on it.


I don't think React classifies as "cool" anymore, it's been in the game for many years and has become sort of de-facto for frontend development. Everyone does React because everyone else is already doing React; both from a developer who's trying to keep up to be more employable, and from a company who's deciding on a tech stack that's gonna facilitate hiring.


And it only requires React when it's gonna be worked on by hundreds of constantly rotating people that need to find where's what quickly. Typically a large corporate setting.


Yeah, I don't have a problem with people learning React and maybe creating a project with that sole purpose, but creating a project that could be enhanced by using another library or just static HTML files will save time and resources, which in my opinion are the 2 most essential things in a programmer's life.


Interesting concept. However both of these examples are apps that probably appear in millions of tutorial articles on react and redux. I would be curious to see how it performs on a more unique or complex application request, even the bill pay example you list in your concept.


While I hate to think how inefficient this is, it gave me a really good prompt idea I tried with GPT-3, and it works.

> Here it is in JSON format: ...

Seems pretty effective at getting GPT-3 to spit out the results in the exact format you want. This will save me so much time parsing to get out the results I need.
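A small sketch of that format-forcing trick (the suffix wording is just one variant): end the prompt with a cue that the answer is about to be given as JSON, then parse whatever comes back.

    function jsonPrompt(task: string): string {
      return `${task}\n\nHere it is in JSON format:\n`;
    }

    function parseCompletion<T>(completion: string): T | null {
      try {
        return JSON.parse(completion.trim()) as T;
      } catch {
        return null; // the model occasionally adds commentary; handle that upstream
      }
    }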


Chatgpt has saved me hours helping me convert raw SQL queries into Ecto queries. I have learned a lot more about Ecto thanks to chatgpt. Easily a tool I would pay to use monthly.


Isn't this just programming with more steps? For example, you had ChatGPT try different ways to implement the data model, as you might with a junior code monkey, until you guided it with your experience on how it should be implemented.


I pitched a similar technique to my company a few weeks ago but didn’t get any enthusiasm back. I think these sorts of apps will be commonplace in a few years, for better or worse


This is such a cool project! I have to try this out myself


I've always suspected Redux to be a pointless indirection, now I am all the more convinced.


Tangent: (too off topic and kind of misses the point of the article, so deleted)


This is by no means a practical exercise... and is to demonstrate the capabilities of a large language model, not to reduce boilerplate.


Fair enough!


The blog post isn't about whether Redux is needed or not.



