Show HN: I “wrote” a kid's book with ChatGPT and Midjourney (adventure-of-penelope.vercel.app)
417 points by danielcorin on Jan 25, 2023 | 394 comments
Two of my friends recently welcomed their first child and I "wrote" a kid's book for them using ChatGPT for the story and Midjourney for illustrations.

The plot was sourced from a group of friends.




A fun experiment but LLM generated stories will never escape the uncanny valley, because they're not novel, they're remixes. I know the critique of that position is "but humans just remix the things they know from the world" but that is the same misunderstanding that those who thought truck driving jobs would be automated by 2018 made about that profession. i.e. The less you know about a field, the more likely you are to believe it will be easy to automate.

Art, in this case illustration and narrative, requires a coherent viewpoint. In literature this is often referred to as the authorial voice. Voice is special because of the uniqueness of the human life behind it, which I realize sounds a bit airy-fairy, but is very true once you start digging into what works and what doesn't. When you read something and wonder why it's so engaging when it's not _that_ much different than other very similar pieces, that's often what's at work.

It's also why some pieces really don't work for some people. The "a kid could draw that" critiques of various modernism styles come from this place.

I don't begrudge people experimenting with art through LLM generation, and perhaps it can be a great tool for scaffolding ideas, I grant that. However, I do worry about the deluge of poorly conceived LLM content that will make the good stuff harder to find. We had a glorious age where the gatekeepers had lost most of their power. With the flood of content about to be released, I'm afraid we'll find ourselves beholden to them once again.


> A fun experiment but LLM generated stories will never escape the uncanny valley, because they're not novel, they're remixes.

Much of children's best-selling fiction is extremely derivative. The way the market works, it's nearly impossible to get things published that aren't extremely derivative.

To that extent, I think this might actually be a good place to start - with auto-generated derivative garbage - and then fix the "problems" with the plot through editing.

My first thoughts on this story were pretty negative. The plot is almost incoherent. There's not much conflict. But it does have a little bit of a mystery, and it does get resolved.

Although much of it came out of nowhere (the theme park), Alice's Adventures in Wonderland and many other extremely popular children's books are literary nonsense too.

The illustrations definitely do not fit together well, and are not what you'd expect to get from a book you paid $20 for.

But, overall, it's really not bad.

I could see a huge market for parents wanting custom books like this generated for their kids about all their zany interests. If your kid loves baking & cute animals & mystery - they'd probably love this story.

Maybe it costs $1 or $2 - it's definitely not a big business - it's probably not going to "disrupt" the market (because it's so entrenched I don't even think disruption is possible).

If this is too much literary nonsense, I'd love to see what it could do if you asked for the same story "In the style of A.A. Milne" or Robert Louis Stevenson or James Matthew Barrie or Beatrix Potter.

I can definitely see a near future where almost every parent does this for their kid once or twice a year.


The true measure is whether a very young child will notice and/or care. My guess is they won't and they'll like the story. Only the adults reading it will notice something off/weird/"uncanny" about it.

But of course, it's the adults buying/distributing these books so you still need to "sell" it to them first.


Kids happily get sucked into Minecraft video narrative "series" with totally insane plots that make no sense and are basically gibberish even line-to-line, and into dirt-cheap Asian and Eastern European CG cartoon imports on Netflix that are barely better. I'm 100% sure a mostly-AI creative process could produce something higher quality than those... um, genres, I guess, today.

(seriously, there are some so-cheap-I'm-not-sure-why-they-bothered kids' cartoons on Netflix, if you look for them—you're on the right track if it looks like it was rendered with mid-grade 2005 desktop 3D rendering tech using stock models and stock scenes with way-too-little decoration or clutter, despite in fact dating no earlier than 2015. A lot of them have whole scenes that are basically just the characters talking in circles. The plots are always extremely straightforward, with about 1/5 as much plot as they'd need to fill the time, even by very generous reckoning. Lots of interpersonal conflict that amounts to nothing, existing just so they can have two characters have some conflict in a scene to absolutely no thematic-, message-, characterization-, or plot-related end. Most turn out to have been originally produced in India or Romania or something, if you start digging. It's crazy. But young kids will watch these.)


Kids are like sponges, and pretty much like training an AI.

Trash in, trash out.

Just because there is a bunch of trash on Netflix and elsewhere doesn't mean we should pollute the kids' world more with it.

Especially at the age of reading. 1-5 is the most important age by far...


Oh, I'm not claiming that content's good for them, I'm reinforcing the notion that it won't be kids who reject crappy AI stories on account of their being crappy. Adults might—and should, if the stories are indeed crappy. Kids will binge content that's truly already on par with or worse than what current AI can accomplish, if you let them.


Ah, yes I do agree with that.


The theme park idea was actually part of the prompt. They show it in the credits


> they're not novel, they're remixes.

I think it depends on how you use it. If you write a one-line prompt, you'll get a "generic" result. With more guidance, you can get something much more personal. For example, you could imagine a story and let ChatGPT pick the actual words.

In the end, you can put in an arbitrary amount of work to get a generated story. You can treat ChatGPT as a co-author who will help you do the parts you want (e.g. find ideas, find text structure, write the actual sentences). The end result depends on how much work you want to do yourself.


This is very achievable by layering it. Start by asking for a synopsis, then characters, then story arc, then outline, then chapters, then chapter outlines, then content. Each time seeding the prompt with the hierarchy of content above that level.
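For example, a minimal Python sketch of that layering (the generate() helper assumes the OpenAI completions API of the time; the stage prompts are my own illustration, not anything the OP used):

    import openai  # assumes the pre-1.0 openai client with an API key configured

    def generate(prompt):
        # One call per layer; each layer's prompt carries everything generated above it.
        resp = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=800)
        return resp["choices"][0]["text"].strip()

    def layered_story(premise):
        context = "Premise: " + premise
        for stage in ["a one-paragraph synopsis", "the main characters, one line each",
                      "the story arc", "a chapter-by-chapter outline"]:
            result = generate(context + "\n\nWrite " + stage + " for this children's book.")
            context += "\n\n" + stage + ":\n" + result
        # Finally expand the outline into prose, still seeded with the whole hierarchy above.
        return generate(context + "\n\nNow write the full text of chapter 1.")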


> A fun experiment but LLM generated stories will never escape the uncanny valley, because they're not novel, they're remixes. I know the critique of that position is "but humans just remix the things they know from the world" but that is the same misunderstanding that those who thought truck driving jobs would be automated by 2018 made about that profession. i.e. The less you know about a field, the more likely you are to believe it will be easy to automate.

I'm curious how many modern children's books you've read. There are a ton of very shitty... remixes with poor, cheap art. Especially books for cities and holidays. I could see this eating into that.


This piece did not seem to be written by a human. The writing style would arguably escape notice, though it was even a little too twee for a children's book. But the plot was so far out there that it's hard to argue that it seems natural. I think even most children, after having experience with children's books, would find something odd about the sequence of events.


> I could see this eating into that

Perhaps, if the cover was engaging enough. However this seems like the kind of story that my child (4 1/2) might read once and then never pick up again. She'd be like "ew, who wants butter all over their kitchen?"


What are “books for cities?”


Books like "Goodnight Chicago" and similar


Those books are pretty lame but this ChatGPT book is still pretty far behind those.


I've been using AI image generators to help me brainstorm compositions/colors/themes for traditional artworks. Your note about lack of "coherent viewpoint" very much rings true for AI-generated visual arts. The AI will get many patterns/images "right" in a given piece but has no sense of how far to take those patterns or how different components attach or relate to each other.

Even so, it's a fun and interesting tool to use alongside traditional media, so I'm excited to see it improve.


>A fun experiment but LLM generated stories will never escape the uncanny valley, because they're not novel, they're remixes.

I don't know how we get this myth to stop perpetuating, but it does not ring true to me at all. LLMs have proven to have "novelty" many times now.


Anecdotally, I've pushed ChatGPT pretty hard to give me novel stuff and have found it really hard to do. Sometimes it even gets "grumpy" at me about pushing it.


ChatGPT is specifically trained to play inside the lines, have you tried the same with GPT3?


I'd love to read a "novel" AI story; sounds very interesting. Can you link an example or two?


> A fun experiment but LLM generated stories will never escape the uncanny valley, because they're not novel, they're remixes

Kids' books aren't intended to be novel. It's not like a four year old is going to say, "Man, this is a retread of Hamlet!" They are often re-tellings packaged up in a new way to appeal to kids in the current generation. Sometimes as simple as removing sexist, classist, and/or racist elements.


It's initially the parent who'll pick the book though (or the critic? the people who decide what to put in the front window of the bookstore? probably not the four year old though), so the uncanniness could come into play.


> Voice is special because of the uniqueness of the human life behind it

This IMO is a handwavy criticism of AI-generated content by people judging content who have the foreknowledge that the content was AI-generated.

To put it another way, pretend it was a year (or more) ago and ChatGPT/Stable Diffusion were ~unknown. You stumble across this kids book. Be completely honest here: would your reaction be "there's no way a human created this book"?


I clicked on the link agreeing with you and came back disagreeing. Maybe I wouldn't have thought "an LLM made this", but the story is disjointed in a way that makes no sense for a children's book. The structure is all wrong.

-> Hedgehog making bread -> Loves making bread -> One day bread stolen -> Butter everywhere -> Hedgehog sets out to find culprit -> Finds culprit -> Opens butter themed amusement park with culprit -> The End

There is no overlapping theme or logic to the story. Children's books will have some form of gimmick and stick to it (young humans aren't all that bright), so you can have a baker hedgehog that encounters adversity (bread is gone) and maybe tie it in with a human culprit (bread thief, didn't say please or something), but the butter amusement park is out of place and uncanny.


This is the prompt they used

> Once upon a time there was porcupine. Every day, she baked a loaf of bread. One day the loaf disappeared. Because of that, she had too much butter. Because of that, things got slippery. Until finally she opened the first ever butter theme park.

It seems like the LLM did exactly what the author wanted


Not sure why you’re getting downvoted. That prompt is in the credits and super relevant to the discussion.

I read the prompt and I can see the human’s intention: write a story about how you turn something bad into something good. But the AI transitions didn’t land and instead of getting a heartwarming story of a clever hedgehog, it just feels like a badly executed Deus Ex Machina.


> There is no overlapping theme or logic to the story.

I think that's pretty different than GP's point about art needing to have "voice." It seems entirely clear to me that AI will get to the point where it can generate something coherent in terms of plot/theme/logic - that's just a matter of time.


Please do this analysis on other children's books, I think you will find something similar. These aren't masterpieces of storytelling...


The ones that have stood the test of time are masterpieces of storytelling. They offer insight into a facet of life. They have a strong throughline, a message without moralizing. The best ones ground an experience to make it tangible to parent and child.

Where the Wild Things Are - a fever dream about rebellion and unrealized sexuality, hero's journey into the latent world and back.

Green Eggs and Ham - picky eating works on its own, or can stand in for just about any other unpleasant experience. The repetition mirrors the real-life experience of getting used to something you don't like.

Can't remember the name, about fuzzy little monkey people living in a hut? - really cozy, about the warmth and safety of being a cuddled-up tiny person without worry.

There's no throughline in this one. The prickly porcupine stands in for a stressed mom. The fairy stands in for a child who messes up the kitchen. The solution is play all day and leave the kitchen dirty? Doesn't hold together.

I'd take out the filler "journey" bit, add in some conflict between the mom and kid, and make the ending more realistic (fairy uses magic to make icing for the cake?). That might bring it up to library-filler tier book.

To me, the art looks really impressive though. Again, no throughline - I'd expect a progression that goes from lonely porcupine -> in the wilderness -> sharing the stage -> back at home but happy


>I'd take out the filler "journey" bit, add in some conflict between the mom and kid, and make the ending more realistic (fairy uses magic to make icing for the cake?). That might bring it up to library-filler tier book.

THIS. This is exactly the power of these models. The AI generated an entire story extremely quickly, and an experienced human can edit and curate the results. It generates first drafts trivially.


It's not an expectation that a book will be a masterpiece of storytelling. Take The Very Hungry Caterpillar; it has the simplest premise:

-> Caterpillar eats food -> Caterpillar eats more food -> Caterpillar eats even more food -> Caterpillar gets sick -> Caterpillar eats less food -> Butterfly.

The succession of events has a thread that joins them together. I'm pretty sure ChatGPT could write it as well if you gave it the above summary, but the hedgehog story doesn't have that and feels uncanny because of it.


Yeah but The Very Hungry Caterpillar is a good children's book. Maybe you don't have children but trust me there are plenty of bad children's books (especially now it's easy to self publish).

Take a look at Penguin in Peril for example.

Cats are hungry -> they go to see a film -> they are still hungry so they decide to steal a penguin -> the penguin escapes -> the penguin swims through a fountain back to the zoo -> the cats get put in prison.

I dunno maybe it's still fairly coherent, but we've gone from "completely impossible" to "bad children's books" in like 2 years. How many more years do you think it will take to get to "good children's book"? I bet less than 10.


As someone that has been reading multiple children's books per day for the last 5 years to my children, let me tell you that some books are like what you describe, with a nice intertwined story that makes sense together, and then some other books feel like randomly thrown-together scenes like this that are just "and then this happened… and then this happened…". I tend to notice a lot of the latter come from either 1. celebrities that decide they want to be children's book authors, or 2. somebody that wants to make cool pictures and adds the story as an afterthought. With how nice these pictures are for OP's book, I would probably be tricked into thinking this book was written by person #2.


I suspect that with AI we will always be in the "this story (song, picture) is OK" category and never the "this is great" one. I recall a member of ABBA being asked whether they ever felt a song would be a hit. She (it was definitely either Agnetha or Anni-Frid) said never, except for "Dancing Queen". So AI will be able to mimic the ABBA style, but it would struggle to create something like that song.


Hm, really? I would have thought "The Winner Takes It All" had No. 1 hit written all over it, and I'm not even an ABBA fan.


My reaction would still be, "This is a pretty bad children's book."

The reason I might not have guessed it was a machine creation is that the prose is adequate on a technical level, and previous markers for machine creation included wonky prose. But going forward we'll all learn new markers for machine-created dreck.


This is as bad as some self-published children's books, but in a different way.


Yeah, probably not replacing human-generated fiction immediately, but it's higher quality than all of my extempore bedtime stories I must admit.


> because they're not novel, they're remixes

There are only so many words in the English language. If I ask ChatGPT to write me a story about how I turn into a superhero tomato that saves the world against the villainy of pasta, that's a novel story, I'm pretty sure.


You say it's not novel, but I loved the twist where they decided to open a theme park.


AI art replacing real art would effectively be human culture imploding in on itself.


It feels like an LLM should easily be able to handle this problem. Some of these kinds of authorial voices are almost certainly represented in its latent space.


Wait until you see the movie Avatar.


> A fun experiment but LLM generated stories will never escape the uncanny valley

Never is a really, really long time.

> Art, in this case illustration and narrative, requires a coherent viewpoint.

Why can't an AI have a coherent viewpoint, especially in the context of a children's book, where that viewpoint need only be incredibly simple? "John has lost his <insert item>, feels sad about that, and wants to get it back" is a coherent viewpoint, and certainly something AI can (if not now, at some point in the future) write a coherent story about.

> Voice is special because of the uniqueness of the human life behind it, which I realize sounds a bit airy-fairy, but is very true once you start digging into what works and what doesn't.

So what works and what doesn't? You're saying all of this at a high level and giving a timeline of forever on your prediction, but then you say it's true once you dig in, without actually digging in.

> It's also why some pieces really don't work for some people. The "a kid could draw that" critiques of various modernism styles come from this place.

The thing about modernism is that it requires outside context to understand. It is a movement in response to art movements that preceded it, and the fact that its style uses simplistic shapes and lines is because of what it's a response to. You need to understand that history to appreciate why it is what it is.

In that sense, you're absolutely right - a child can't produce modern art, because even if it looks visually similar to existing art, it lacks that context.

The problem with that argument is that it's totally irrelevant to the topic at hand - children's books. They do not need history or context - those things serve no purpose because, for obvious reasons, they would be lost on the audience. All you have to hit is the superficial level - if your book looks and reads like a great children's book, it's a great children's book!

On a somewhat related note, I think AI art will eventually be seen as a movement somewhat akin to modernism. In response to the fact that art has become increasingly elite and insular (the whole fact that a modern painting can't be fully appreciated just by looking at the painting - you must know art history, which most people don't have the time or inclination to learn), people will be inclined to appreciate the flood of beautiful but democratized art that can be made by anyone. Just as the generation of artists before modernism thought it was awful, so will the current generation of artists about AI art. I'm a believer that art should be about aesthetics, not history, and AI art is certainly great for that.


> Never is a really, really long time.

True but irrelevant. If we want to get to decent art, we'll need something well beyond an LLM.

Humans generate stories using introspection. Every writer is their own first reader. They rapidly and iteratively try out story elements to create experiences and feelings. They then find the right words, again seeking out particular feelings. After iterating on the words for a while, they then have others read the words: partners, friends, eventually an editor. Those people all experience feelings, which get observed and sometimes articulated to the writer, driving further revisions.

LLMs can create story-shaped things. They can write glib, imitative dreck. But to write a new, compelling book, they need to be able to iterate against high-quality simulations of reader emotions. That may be something that we can create in the lab, but if we do, it won't be called an LLM.


Right, I think humanity will some day create something resembling an AGI, but the current LLM statistical approach is extremely unlikely to be the path to that. You need many more layers of sophistication.

Like, AI art generators don't just have issues with human hands or eyes, they also create buildings which are architecturally incoherent. I don't know how you solve that with mere data or refinements.


One thing I learned is to never say never when it comes to LLMs.


[flagged]


This is not Reddit or Twitter. Think about what you want to say before posting. Meme responses are usually a sign that you don't really know what you're talking about.

From HN FAQ:

> Avoid generic tangents. Omit internet tropes. Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

- https://news.ycombinator.com/newsguidelines.html


The OP comment can also be seen as an internet trope displaying poor understanding of the topic.

You can't teach someone who is unwilling to learn.


Yea, I love how people just throw out statements like "they're not novel, they're remixes" with absolutely zero justification. What a complex AI like ChatGPT is and isn't is a very interesting conversation. But I'm not sure people realize just how much their personal bias factors into the conversation. So often people offer something as obvious fact which has no factual basis.


As an experiment, this is cool.

As actual usage? I enjoy making stories up (or misremembering existing ones) for my daughter. The human touch is valuable to me. I also wonder: many well-loved stories of today started as a story the writer was telling his/her children; one example is The Hobbit. What if Tolkien had thought "screw this, I'm overworked and tired, I'll just type some prompt for ChatGPT and it will be enough for Christopher".

I know mine is a kneejerk reaction. But I can't help thinking that this is solving a problem no actual humans have. We know how to tell stories, good or bad. We don't need automation for this. We don't need help writing terribly bland and generic stories, either. So it feels vaguely dystopian to me.

Technically it is impressive. ChatGPT and things like it are the most impressive thing in years, to me.


It is vaguely dystopian, true, but one thing I remember in order to sleep at night is that ChatGPT and the like are trained on human-written text. So we might currently be looking at ChatGPT at its very best, or close to its very best. Reasoning: from here on out, the stuff it trains on will be polluted with the automatically generated text. Photo copies of photo copies eventually lead to blurrier and crummier images of the real thing.

We can keep paying people to come up with optimisations to the algorithm itself, keep paying annotators to manually pepper human common sense into the system, but it's my theory that these payments won't keep up with the spread of automatically generated content in the source dataset and the negative impact that has on the language model that the algorithm outputs.

ChatGPT currently enshrines insight and style from 2020-2021 (more-or-less indistinguishable from insight and style from 2022-2023), but now that the system exists, rather than observing a rapid pace of new writing styles and original insights emerging on the web of 2024, we'll potentially see a slightly slower style/original insights emergence rate, then the next year an even slower emergence rate. This will continue until it reaches a stage where the spoken world of language and world wide web world of language have completely diverged, similar to the way 1950s film dialogue bore little resemblance to 1950s speaking styles.

Short-term, ChatGPT has called creative pursuits into question, but long term, I think such systems will strongly validate creative pursuits, and only really replace non-creative roles. By turning the web into a wasteland of written cruft, GPT will validate the need for human flourishes, error, divergences from the norm and the arbitrary rewriting of unspoken rules. I think only a strong AI raised like us in our own societies could infuse that kind of culture into its writing, but the process of developing such an AI would basically just be a reinvention of slavery, and we probably don't have the resources here on earth to support it longterm anyway.


I agree with the premise that content will get more polluted, but there is an element of human vote every time we choose a prompt output and say “this is good enough for me to post/use/turn into a book”.

This is just a very convoluted way of manually labeling data as good and bad.


It would be interesting to see whether a label emerges to denote content created pre-ChatGPT; ex: certified pre-2023 AI-free content.

Also, it would be possible to train bots on an archive of such material (accordingly, out of date, so less useful in numerous ways).


China does have new laws requiring people on the web to indicate with a watermark (or similar) if their stuff was created with the help of AI. See: https://cacm.acm.org/news/267778-china-bans-ai-generated-med...

Even if western governments adopt similar laws, however, I'm not sure if they would be that effective. People would start messing with the definition of AI. E.g. 80 years ago a spelling and grammar checker would probably have fit society's definition of AI, and both of those techs arguably have a cultural impact on the web. Spellcheckers lead to fewer new words or dialectal variations of words coming into existence, for example.


I'm starting to worry that we're going to run pretty short of activities humans find fulfilling to do, that have any notable amount of extrinsic reward or value.

What happens when the computer does tell better stories to your kids than you do? When you turn the computer loose on them for a week and, well, god damn, by the end of the week they seem to have taken exactly the lessons they needed and are bringing them up in real-life situations and are even getting better at reading themselves, and raving about how good the stories are, and you can't deny that if you'd done it, the results wouldn't have been as good?

Now repeat for all the ways that humans find it satisfying to serve others using their creativity.

I very much doubt it's healthy for all the things we find fulfilling and enjoyable to have only intrinsic motivation to drive them.

It's hard to turn down better outcomes, but what if a side-effect is that as a species we become psychologically (even spiritually, if you will) lost? So kids are raised more effectively—but for what? So that all the things they enjoy doing are valuable to, and needed or wanted by, exactly no-one?

So we get more efficient at producing goods and entertainment—for what? So the brushes and paints we buy for our paintings that nobody cares to see and that your talentless cousin can outdo—in every way, including sublimity and mood and all that—with a lazy prompt on some future image generator, are somewhat cheaper? What's the point?


Everything you mentioned worries me.

> What happens when the computer does tell better stories to your kids than you do?

Others have answered "but ChatGPT will never get as good at this". Let's, for the sake of argument, suppose that it does: suppose ChatGPT does get better.

What then? Why is it a net win for mankind that a computer program can tell better (or "good enough") stories? If mechanical tasks get done by machines, and creative tasks get done by machines, and games get played by machines, and everything under the sun gets done by machines...

...what's left for us, then? Writing is an enjoyable activity for humans. It's not something to "optimize", it's fine as it is. If we optimize humans out of the activity of writing (or composing music, or drawing, etc), have we truly "won"?


What's left for us is whatever we want to do.

People still do all kinds of things which machines and computers can do better than we can. No reason to think that will change.


That's exactly my point about the extrinsic vs. intrinsic motivation. Of course we can still do whatever we want, we can just no longer count even as much as we can now (which is already much reduced by e.g. recording and broadcast or mass manufacturing) on anyone else caring about it, needing it, or wanting it. I think being able to feel like one's work and creativity are genuinely wanted and needed by others in one's family, friend circle, and community is probably pretty psychologically important to people. It looks like all that's going to be left of that is some very-forced variety, and even that, not to most people—the organic need and desire for other people's creativity and effort may soon be sharply reduced, even further.


My question is precisely that, what will we want to do if computers do everything, including writing, reading, critiquing, buying, selling and stealing art?

We will be left to ponder our existence and meaning in the vastness of the cosmos? Nope: in this hypothetical future, computers will also be better at that than us.

I'm not threatened by computers if they can design a car, or write better programs (though my livelihood is threatened for obvious reasons). But why would I want computers writing novels? What is there for me to gain, as a human?

(I'm specifically excluding the "wow, this tech is amazing" angle from it. I'm also amazed by ChatGPT.)


If computers are better at writing novels than humans then what you gain is access to better novels.

I enjoy reading novels, and I wouldn't refuse to read an excellent one because a computer wrote it.


I understand your point of view, but writing novels is an activity that is enjoyable for humans. This to me feels like "we developed an AI that enjoys food better than humans": do we need an automated improvement in food-enjoyment?

Think about whatever you enjoy doing: what if computers do it better, and become cheaper and faster at it, and you are no longer necessary? Sure, you could do it for your mom or partner to see, I guess. Disappointing, isn't it? Now repeat for every conceivable hobby or human activity. Do you see where I'm going?

> access to better novels

Are current novels bad? Do we have a lack of novels? Does the process of novel-writing need "disruption" by AI? Is mankind complaining, "if only we had more and better novels"?


Mass existential crisis, despair, I wouldn't be surprised if in the 2030s mass, um, self-terminating events become widespread.


>..what's left for us, then?

Consuming the culture manufactured by the autonomous corporations that generate our reality so the capitalist class can afford their orbital pleasure domes and stem-cell purees.


I was thinking something like it, only a bit less scifi: we become consumers.

Not like today, but worse: exclusively consumers of stuff automated processes produce. We won't create anymore; at best it'll be "computer, entertain me!". I really don't like that future.


The really scary part is, if the dopamine hit is good enough, we won't even care.


Yea I’m with you. If a children’s story is primarily a generic - albeit engaging - plot, then I think AI will be very successful at telling new stories.

But as a means of cultural encoding to help children understand what matters in the world and how they - specifically - fit into that world, it’s going to come up short.


I think this is true, but there are (likely) more children's books out there than you can read in a lifetime, so it seems like adding more middling AI ones to the mix would not be a value-add. At least (skilled) human authors adding to the pile have a better chance to make the story meaningful.

Maybe we need a critic AI that can identify the most touching, meaningful, instructive, and morally enlightening stories out there and help us sort through the already daunting number of options


> But I can't help thinking that this is solving a problem no actual humans have.

This. There are 10000 children's books out there with coherent art sets and a storyline that's not just total gibberish.

This kind of crap is no better than SEO blogs. Its only purpose is to pump out massive amounts of content in the hopes of earning a small amount of money on each squirt of crap that maybe amounts to something over the long term.

Build good things. Solve problems. That's how you make money.


Master writers like Tolkien don't need ChatGPT to help write.

Amateurs, however, do need the help. Not everyone is a born story teller, but if you have a story to tell and need help telling it, ChatGPT can fill in those gaps for you.

It might not be a work of art like LOTR, but you'll most likely just be getting tips and hints on how to format better.


Nobody needs ChatGPT to help write.

Most writers learn by writing. This is a "solved" problem, in the sense that mankind has had writers -- successful and otherwise, good and bad -- for hundreds of years now. We got this, this isn't an activity that needs improving by AI tools. It's not an actual problem that humans have, e.g. "I would write better, if only I had an AI assistant!". Creative writing is also not about speed, it's not something that needs optimizing for that. For art in general, actually: "if only Picasso could paint faster!" is not an actual problem mankind has.

I'm talking about creative writing, mind you. Not some technical writing like creating manuals or reports -- I'm all for AI helping us with those.


I can see the usage being between parent and kid more than producing best sellers. Or a way to give a kid a cheap but special gift.


Yes, but that's precisely the usage I find upsetting!


I can see your point but I'm not sure it's that straightforward. Let's say you simply took a hat with your kid's favorite characters from tv/movies/books/whatever on slips of paper, drew some, then did the same for plots, then made up a story to tell your kid.

At some level, sure, it's not the same as coming up with a story on your own - there's not going to be anything meaningful in why you selected those particular elements of the story - but that merely constrains those choices to free you up to be creative with other aspects of the story.

I think there's still room for someone who wants to be creative to use the tools to provide the same experience you're describing. It's not like one has to give the raw output to one's child. And I think such raw outputs will be generally bland, missing something relative to those with a human touch. However, giving bland and meaningless entertainment to children wouldn't really be something new, and I still see a fair concern in there. I suppose this is already part of what articles on "Kids' YouTube" are talking about.[0]

[0]https://www.avclub.com/take-a-trip-to-the-automated-hellscap...


Perhaps the core idea of the story can still come from you, but ChatGPT can help flesh out the details and write it in a compelling manner.


That's how I see it. It could still involve a lot of novel thought and work to get the AI to generate the text you want. You nudge it in whatever direction you want it to go.


What actual problem about writing would this solve?

This is like coming up with a plot idea and having some hired writer write it. This already existed before computers. So you're essentially using GPT as a ghostwriter, only (unlike human ghostwriters) it has no way to come up with something innovative; it just regurgitates whatever is already in its data in some way or another.

"But," I can guess you'll object, "human (ghost)writers are already writing derivative crap, what's the difference?". Exactly. We already know how to hire humans to write derivative crap, and the truly good stuff is out of the question for GPT. So what is this solving?

But what about children's bedtime stories? Well, there are tons of really good bedtime stories, lovingly crafted by good authors. Do we need automated ones written by ChatGPT?


Do we need another song about love? Do we need another library for formatting text? Do we need another bedtime story?

No, we probably don’t. Won’t and shouldn’t stop people from making more :)


Excellent point.

The world doesn't need another love song [1]. But the human composing it might need it, as a form of expression, even if the result is garbage or trite. Humans enjoy creating stuff, even if nobody wants that stuff. Writers write, or their minds will explode. The creation process is its own goal.

This is not the case for ChatGPT; a LLM has no motivations.

[1] https://youtu.be/jJHbJukQi8k?t=42


Go ahead and create, no technology is stopping anyone from doing that. But I don't see why you can't create while using a technology like ChatGPT. Coming up with the proper prompts, rewriting some of its output, using its output as an input, etc. There's a lot of things you can do to combine your own novel thoughts with the text generated by a machine. That's a way to create things that has never really been possible.

Imagine being an OK writer and now you potentially have an editor. Imagine asking it to come up with a few ideas regarding something you're working on and you can use one as inspiration. There's just unlimited ways people will work with and integrate these new tools that you or I can't even possibly dream up.

As for asking if we need more bedtime stories - why not? Maybe it would be cool if me and my kid could use some experience we had together to create a story that features him and his friends in the mold of a classic story.


> Go ahead and create, no technology is stopping anyone from doing that

Yeah, but where did I say that? That's not my point at all.

Here are my thoughts, rephrased:

- We, mankind, don't need AI to "improve" writing. This is a problem nobody has. There's a plethora of writing, both for children and for adults.

- People enjoy writing, so this isn't an activity we want to "optimize". There's no need for ChatGPT here.

- When computers take over creating everything, what is left for us humans? If/when books are being written and published by AI, sure you can go on writing for your mom & friends I guess, but something fundamental will be lost.

- When most "content" (I hate that word) is created like this, some sort of extreme Netflixization of movies and books taken to its logical consequence, we will become consumers. Isn't that depressing?

> As for asking if we need more bedtime stories - why not? Maybe it would be cool if me and my kid could use some experience we had together [...]

I do this every night with my daughter, without AI. It's called "parenting". You don't need gadgets or computers for this. ChatGPT isn't solving an actual problem in this case either.

"Computer, tell my kid a funny story about that time something fun happened to us; I think it was yesterday, it's in my mobile logs anyway."


Why is it not parenting to create a story with an AI with your kid? Why is what you are doing better?

How do you know something is lost when using an AI to assist in writing? Could it be that something will be gained? Perhaps it will unleash creativity in people that wouldn’t have written before? Maybe the greatest literary work ever created will be built by someone using an AI. Or maybe we’ll see the limits emerge and this won’t be another John Henry story.

Your argument very much sounds like the argument of the Luddites. But here's the thing - you can always check out of it. The Amish certainly have thrived while not participating in very much of what we call "advancements". I think that's a better approach than Ted Kaczynski's.


Hello Daniel!

We've been working on a platform/tool for this since September. And it's live in Alpha right now, https://bedtimestory.ai

We are working on a lot of feature requests and improvements at the moment, and distribution. Text, Audio and Video being the primary use-cases.

Making stories 100% editable and letting you upload your own images is also in the works.

We are working on "Magic Images" that will allow you to insert yourself or your kids into the stories as stylised characters in your style of choice https://twitter.com/LinusEkenstam/status/1603536335884750848

We are also working on Magic Talking Cards: video/audio of the story character telling the story from their perspective https://twitter.com/LinusEkenstam/status/1603276364550807552

In our library there are over 10,000 stories https://bedtimestory.ai/library

We are looking for people to keep giving us feedback and help us push the boundaries on what's possible.


Did I miss it somewhere, or do you not actually make physical copies of these? I would love that; I know the perfect people to gift that subscription to. Let them generate a few stories and maybe once per month select a book to get printed and delivered. This is a genuinely exciting product.


We are working on print as an option, so you can make a collection and each month you get physical books at your doorstep!

Thank you! Follow us on Twitter to stay up to date: https://twitter.com/bedtimestoryAI


Also... Is there a way to gift a subscription? This is the kind of thing new parents usually don't have the time to search for themselves.


I would be interested in this too. This would be a really cool gift for my newborn daughter.


Very cool and great start!

I tried it and I wonder how I could generate more images. I made a story and only had one image.

I would also love to print it out as a book (don't see myself reading a book to a kid on a laptop / tablet) and that every page would have an image. Would also be good if I could define which content should be represented as an image.

Keep up the great work!


Hey!

Free stories only offer 1 image; with paid accounts you'll get 5 images per story. We are working on full edit and image-upload capabilities. https://twitter.com/LinusEkenstam/status/1617997909508902913

Paid accounts can also re-generate images using their own prompts for images.

So we are working on distribution, printed books, e-books, audio and video. It's all coming in the next few weeks.

Thank you for the kind comment!


This is really awesome. I sent it off to my wife to check out for the little ones. Good luck on this - I think you're going to have a lot of success here. Time to disrupt the publishing industry!


Thank you so much for this.

To be clear, Bedtimestories is just the beginning; it's an easy target, and easy to market. Our mission is a lot bigger: we want to become "the narrative company" and are building the tools to make that happen.

Excited, and hope to have you and the wife and kids along for the ride!

/ Linus


This is insane, I'm working on a very similar tool but for a specific language. Congrats on the launch and it looks awesome!


Hey! That's cool. Actually, Bedtimestory.ai works in ANY language: just write the prompt in the language you want, and the story will get generated in that language. PURE magic.

We have not built out the full search, filter and categories for the library yet, but we have stories in 100+ different languages.

Super excited to be sharing more in the next few months.


I assume the image gen works on SD and not Midjourney, as they don't have an API, I believe?


We've built a few adapters and the layer between us and the model(s) makes it easy to swap out or support multiple.

Currently we are running 100% on OpenAI, GPT3 & DALL-E 2, but we have tried some unofficial MJ solutions and our own trained SD models. We are working on a feature we call "magic photos" that essentially allows you to upload a set of images of a human, train your own model, and use that in as many stories as you want; you can then create unlimited variations of that character in different situations.

Here are some early versions of that: my daughter as a scientist, in her sci-fi lab. She is created on SD using around 12 images for training, with the ~5 GB model hosted directly on S3. Then we just use a pay-per-use GPU provider to generate the images based on prompts. https://twitter.com/LinusEkenstam/status/1617675627695583232...
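A rough sketch of what that DreamBooth-style step can look like with the open-source diffusers library (the checkpoint path and the "sks" identifier token are illustrative placeholders, not our exact setup):

    import torch
    from diffusers import StableDiffusionPipeline

    # A Stable Diffusion checkpoint fine-tuned with DreamBooth on ~12 photos of the character,
    # with a rare token like "sks" bound to them; the ~5 GB of weights are pulled from storage (e.g. S3).
    pipe = StableDiffusionPipeline.from_pretrained(
        "/models/my-daughter-dreambooth",  # placeholder path
        torch_dtype=torch.float16,
    ).to("cuda")

    # The same personalized model can then render the character in any scene, for any story.
    image = pipe("sks girl as a scientist in her sci-fi lab, children's book illustration",
                 num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("scientist.png")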


Very nice product, well done. What's your GTM strategy, if I may ask?


There is a bit of everything: content, publishers and self-serve.

We already have 10,000+ registered users (most signed up in the past 30 days).

About 20,000 stories have been generated, and some 50,000 images.

We are releasing about 2000 books on Amazon with 10 stories per book, and pay royalties back to the creators of each story.

So there will be a lot of noise, essentially we are taking on the big fish.


pretty cool!


I did something similar for my younger daughter using MagicStory[1] (not mine). I have told her the story quite a few times at her bed time. She has shown it to her teacher and her friends are "jealous." It has her pictures, kinda morphed like a princess and stuff.

Edit: Clarifications.

1. https://magicstory.ai


The images look neat, but the stories are a little bit too simple/shallow, aren't they?

Another thing. Don't you have a bad feeling uploading images of your kids, conveniently tagged with their age, to a random web service that promises to delete the images without even revealing the makers?

I mean yes, it looks legitimate and you can find a person on Twitter who seems to be the maker of it, but that's not a guarantee of the safety of the uploaded images. I'm not paranoid about data, but carelessly sharing my kids' images with a random service like that crosses a line.


> uploading images of your kids ... to a random web service

I used to worry about that... but then someone pointed out to me that there are pretty much no stories of anyone being hurt by that.

Worst case, those images leak all over the web... and then what? The internet is already filled with pictures of people and children; yours won't stand out. The photos aren't tagged by name, and even if they were, few people care about Obama as a 5-year-old pirate [1], so why would they care about your child?

Overall, the risk of personal harm from a data leak seems really small - so I put my efforts elsewhere, like making sure my child gets to spend more time with me - that's infinitely more valuable.

[1]: https://img.buzzfeed.com/buzzfeed-static/static/enhanced/ter...


Yes, they feel like all the stories I asked ChatGPT to write.

They are all very bland, and sound like someone is making a summary of the story rather than reading the story itself.

So I think it's great to get ideas of a plot for a story, but then you should write it yourself to make it more "lively".


I agree with this, but also note that a lot of children's books written by humans (I suppose) are terribly bland. My daughter has some Disney books which have illustrations from the films, and the writing in those is exceptionally bland. Same goes for some wonderfully illustrated, but terribly written, books from the 50s and 60s. I'd actually much prefer reading her GPT stories than those books which we have on the shelf (I guess I should get rid of them actually).


There are also a lot of really nice books for children, with great plots and illustrations. And you can actually read them before buying, if you go to a bookstore.


Hey! Maker of https://magicstory.ai here! Super glad to hear your daughter loved it :) we're pushing updates to improve the stories and definitely take privacy seriously - right now all images get deleted once your character's model is trained.

Also planning to have stock characters so you can make stories without uploading images since I get that's definitely a concern for most parents. Would love any other feedback on how to improve the experience!


I guess we all had the same idea because I'm building something very similar too :)

When did you get the idea? ChatGPT is what sparked it for me.


Right before xmas! Gave my little sister a story as a gift :)


Are you planning to offer a non-Google registration?


[flagged]


You're conflating gender and sex.

But having a binary-only option for this forces non-binary children to conform to the gender dichotomy, while having an option outside this norm allows them the freedom of choice.

Also, ask yourself why a binary-only option doesn't strike you as "advocating sex change" when the users are capable of choosing different options.


Sex and gender are the same thing. It's precisely that choice that promotes the ideology that gender is separate and malleable, which is what children are vulnerable to.


These look amazing! Did you build MagicStory?


I did :)


Really cool idea, but I would feel uneasy uploading my child's picture to your website. All I have is your promise that you're going to delete them later.

By the way, if I may ask: since story generation seems to be free, what is your business model?


Thanks -- totally understand! In a few weeks we will have some starter characters so that you don't have to upload any images to get a story.

Re: pricing, because of the manual time it takes to approve stories + GPU costs + a super large queue, we actually just started offering a paid tier to move up the queue and get your story within 24 hours. Definitely still experimenting tho!


Great, thanks very much for taking the time to reply, and good luck with this!


Congrats, looks really amazing! May I ask what your AI stack is? Do you use MidJourney for the images and GPT for the stories or something else? How do you ensure the output stays consistent across images - do you use the seed image and a custom set of prompts for each image, or did you use eg StableDiffusion and did some customized training à la Dreambooth to ensure consistency? Is the workflow fully automated, or are the outputs reviewed and the prompts iterated on manually?


Thanks so much! Happy to share our stack:

- GPT-3 davinci-003 writes the story

- We did a lot of manual experimenting with prompts and used that to fine-tune GPT-3, which now generates the image prompts for us

- Stable Diffusion + Dreambooth generate the images (we train a model on your hero, which gets deleted once your story is finished)
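Roughly, the text side of that looks something like the sketch below (simplified; the fine-tune ID is a placeholder, not our real one):

    import openai  # pre-1.0 openai Python client, API key read from the environment

    def complete(model, prompt):
        resp = openai.Completion.create(model=model, prompt=prompt, max_tokens=700, temperature=0.8)
        return resp["choices"][0]["text"].strip()

    # 1. davinci-003 writes the story from the parent's premise.
    story = complete("text-davinci-003",
                     "Write a short children's story about a hedgehog who loves baking bread.")

    # 2. A fine-tuned model (ID below is a placeholder) turns each paragraph into an image
    #    prompt, which then goes to the DreamBooth-trained Stable Diffusion model.
    image_prompts = [complete("davinci:ft-personal:image-prompts",  # hypothetical fine-tune ID
                              "Scene:\n" + p + "\n\nStable Diffusion prompt:")
                     for p in story.split("\n\n") if p.strip()]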

Right now the workflow is mostly automated, but we still manually approve stories before they get sent out to ensure quality. Unfortunately, because of the manual time it takes + GPU costs + super high demand, we will probably start offering a paid tier. Hope that helps!


Nope. I think it was a Show HN or something of the sort that showed up on my radar.


The prompt [1] didn't fully transfer to the generated story.

This is an important part of the prompt:

> she had too much butter. Because of that, things got slippery.

> Until finally she opened the first ever butter theme park.

But the generated story doesn't say that the layer of butter in the kitchen was slippery, or explain that butter in general is slippery (and hence could create "slippery slides"), which makes the idea seem like a bizarre jump:

> Sparkles was sorry for her mischief, and offered to help Penelope turn the butter back into bread. But Penelope had a better idea.

> Together, they opened the first ever butter theme park

[1]: https://adventure-of-penelope.vercel.app/credits


Then again, “things got slippery” can be used metaphorically; and that meaning did appear in the story (mischievous Sparkles).


Well it’s a kids book. So I’m not sure it should be heavy on metaphors.

It’s really not great though. On the last page the hedgehog becomes huge.

And it makes no sense. Why was there too much butter? Why was some random girl in the house? How does butter make bread disappear? Why is one character a hedgehog and the other a human girl?


I'm of the opinion that all kids books should be heavy on metaphor and allegory and all of the other important and joyful aspects of language. Restricting children to books that have been "written at their level" or "made easy to read" is doing them a disservice. It's no wonder so many people grow up to be seldom readers if their first exposure to reading is saccharine pablum that presents no challenge, no meaning, no mystery, and no danger. A proper children's book should be one which an adult can also read and enjoy if they give way to that childlike wonder that still lives within them.


Yes, indeed! Children have a huge capacity for stories that engage their imaginations, creativity, and open sense of wonder. And, I think if we're honest, kids are often more honest critics than adults.

Author of "A Wrinkle in Time" Madeleine L'Engle said, "You have to write the book that wants to be written. And if the book will be too difficult for grown-ups, then you write it for children."


You sound like you’ve never read a book to a small child or tried to get a new reader interested in a book long enough to finish the book.


> Why is one character a hedgehog and the other a human girl?

Stories with animals that act as humans are a trend spanning from antiquity (Aesop) or earlier (myths) that just never went away (e.g. Guardians of the Galaxy).


Figuratively. Not metaphorically.


I also ain't a big fan of the idea; a few thoughts:

* (Currently) Most AI generated images have an aesthetic to them, so it's easy to spot sometimes. Typically low-res, smeared details, bad hands/body horror. This may improve over time, but I don't know how I'd feel ethically (both morally and in work/effort) leaning so hard on AI.

* For me at least, as soon as I learn something creative is completely (or mostly) AI generated I lose all interest in engaging with the content beyond surface-level consumption. I think fundamentally AI generated creative works are banal and will age horribly over time.

Honestly I think it would be much better to buy a nice paperback kid's book made by a human as a gift for your friend's kid. They'll love it and the kid will have a connection with a human author and artist.


All of the AI generated stuff reminds me of this: https://www.penny-arcade.com/comic/2016/08/17/infinite-and-i...


Really? This may be AI generated to an extent, but it's also showing that the author and friends worked together and put together a labor of fun and love for a pair of friends. That'll have more meaning to that family than a random kid's book.


Yeah that's nice sentiment but I don't know, kids don't care about that sort of stuff. I'd rather spend a few hours researching paperback kid's books that could positively impact their growth and plant seeds—made by career artists and writers. It's about the kid, not OP and the parents.

This is like giving a kid an AI-generated JRPG game you made in an afternoon vs giving them a Final Fantasy game.


Honestly when I was a kid I would have been all about the AI-generated JRPG at least 60% over playing the latest Final Fantasy. I couldn't get enough different games at that age, so I'd absolutely enthusiastically dive into both.


Yes... but in that case the value is in the labor, except that they didn't really do any labor, as both the story and pictures were generated by AI.


"labor"


“Room on the Broom” is a popular kids' book. I wonder if AI could match up to it


There is a market, at least within south east Asia, that mass-produces stories like Aesop's fables and regional folklore. The books produced are thin two-minute reads and are available for purchase for less than 1 USD usually. It's not surprising to see grammatical and spelling errors slip into the prints of some of these books. The artwork is basic and, while stylistically more consistent than OP's effort, it's entirely generic and forgettable (unlike the artworks of legends like Axel Scheffler or Sara Ogilvie).

I can foresee a very near future where this market is absolutely overrun with auto generated books + artwork. I don't think of this as a good or bad thing. It's just going to be interesting to see who gets the first jump on this.


I fully expect that any kind of content store (Amazon books, podcasts, audiobooks, newspapers, news sites, social media) will be absolutely overrun with generated word soup trying to scam people out of a few bucks - just like YouTube is overrun with autogenerated creepy children's videos.

Whether having to sift through massive amount of trash to get anything created with some quality is a good or a bad thing... decide for yourself.


There'll be a lot of low effort stuff but there's also going to be a torrent of good work done with it. A lot of people seem to be missing the developments in these tools that allow you to take the initial result and then shape it through further prompts or by running it through other tools. If you're willing to put the work in to guide and polish the result you can produce some pretty good work with much less effort than it would normally take.


I wouldn't mind if a separate genre sprang up, labeled AI generated or such. The problem I have is, as you said, I don't want cheap content masquerading as human-made. Human content can be cheap too, but automating it would send a torrent of trash upon us to sift through.


If you look around on Amazon a bit for books, you'll find a lot of low quality, high volume, and highly formulaic stuff aimed at different types of readers. Basically authors are churning out masses of stories with relatively simple, predictable plots with a lot of action/sex/intrigue/whatever geared to readers that gobble that stuff up as fast as it can be produced.

With AI you can improve the quality of the writing, tweak the style of delivery, add detail to story lines, and deliver stories faster.


Maybe an app for parents to make one with the content they want, available as an ebook for $x a month.


Seen a few people trying to make these and honestly, it’s ok as a novelty thing. Like for a bday or unique occasion maybe. But using these as any kind of replacement for professionally done work feels like a disservice to children.

Good authors and illustrators for children are just so far ahead of any AI-generated content right now that it's not even comparable. Something as seemingly simple as Julia Donaldson's The Smartest Giant in Town is a heartwarming tale, told along a continuous theme with a consistent voice, where the story wraps up beautifully at the end. Then it's all brought together with illustrations that aren't just good - they're delightful, and they even have Easter eggs of sorts. All of that gave my son and me so much to discuss, point at, draw, and sing together. I wouldn't want any child who could have that experience to miss out on it.



I'm working on something like this


the Young Lady's Illustrated Primer?


The technology is far from it yet, but when I read about AI-generated children's books, my thoughts go immediately to The Diamond Age's "Young Lady's Illustrated Primer: a Propædeutic Enchiridion".


As well they should; that was an extraordinary read, certainly one of the best books of its era.


That was my first thought, too.


This is creepy to say the least, and nonsensical at best. It doesn't convey any values like children's stories usually do.


I hate this so very much.

Automating creativity is so gross. Never mind the fact that these models are trained on stolen artwork and texts.


Don't take this as a trolling question, but I can't help but wonder if this sentiment towards AI-generated art and creativity is similar to how a chef might feel about shrink-wrapped frozen food factories. Just like how a frozen meal can be thrown in the microwave and still be considered a meal, AI-generated art can still be considered creative and beautiful. But, as we all know, the timing and execution needed to properly deliver a "real meal" from a chef are far different from a frozen meal from a factory.

I find myself conflicted on this topic as I can see how the makers of yesterday have been replaced by process automation, taking away some of the magic of the creation process. But, just as a frozen meal from Costco has its place, so does AI generated art. The macro level ingredients may be the same, but the final product is different.

Before fast food, the number of chefs and the amount of home cooking would have had to be far higher than now. Some would argue that we are paying the price for all that processed food now, but others would argue it's the only way to scale people's lives and their time.

I agree that something "feels" different about Midjourney making artwork that I likely could never make myself, but I'm struggling to articulate a complete reason why. And then I wrestle with: is this a normal evolution problem, and it just hit something nearer to me personally and impacts my life this time?

Would love to hear others' thoughts on this.


I have gone from using midjourney for hours a day in amazement to being completely sick of it and anything it outputs in less than a month.

I have seen hundreds of perfect variations on Duchamp's Fountain in my life too. AI art is very much like Fountain variations without the human Duchamp involved. Then some AI art fool extrapolates from that and believes Home Depot has an entire section dedicated to museum-quality variations on Fountain.

The actual output is a very small aspect of the phenomenon of "art".


Food is essential to our survival. No matter how much effort you put into making food, you either have to make something to eat or buy food cooked by someone else. So making it more accessible for people who don't have the time, money, or necessary skills has no downsides at all.

Art is just something we do and consume for fun. We don't need to make art more efficient, because there is already too much art to consume in a lifetime. Also there is no requirement to consume as much art as possible. And automating art making means that there will be even more average art that anyone interested will have to sift through to find something nice.

Overall, this effort could be put into automating something that will make everyday life easier. If you want to make art more accessible, then automate translation, for example, so we can read books or watch movies that we previously couldn't.


Art is more necessary to survival than one might think. Consider the "Quartet for the End of Time",[1] an eight-movement piece of chamber music by the French composer Olivier Messiaen, written and first performed in 1941 while he was a prisoner of war of the Nazis. And even when it comes to food, eating well involves creativity: arranging textures, colors, nutrition, etc. The French speak of l'art de la table, the art of a well-served meal that encourages conviviality and conversation.

[1] https://en.m.wikipedia.org/wiki/Quatuor_pour_la_fin_du_temps


But it is not like that ... at all. Analogies run the risk of being completely wrong.

Frozen food is made of real food; the process all starts from scratch with "natural" ingredients (well, ingredients).

Frozen food vs fresh-cooked food is more like a film vs live acting, or listening on tape vs a live concert. Even this analogy is lacking: frozen food is made fresh at some point. It is about mass production, not origin and ownership.

Frozen food is not food from your kitchen (or trashcan) remixed and added onto other people's plates.


Um, I'm pretty sure your analogy fell off the truck and got run over at some point...

Trying to take the 'freshness' of food and compare it to storytelling is just an abject failure here, as the vaaaaaaaaaaaaaaaaaaaast majority of storytelling has no freshness at all; it's a rehashing of the same basic plot elements. Hell, wasn't it the Greeks who stated this a few thousand years ago?


you are conflating storytelling with the plot ... see how easy it is to criticise analogies ...

there is/was plenty of fresh storytelling in every age


And you're mistaking the tall flowers for the whole field. Dime novels and pulp mags are a great example of this. You're remembering the good stuff because it's stuck around, while the cheap, mass-produced trash ended up in a burn pile.


I think there are important differences between this situation and your chef/frozen food analogy that cause it to break down:

* A digital image is permanent. A meal, by design, is consumed and has a very specific lifespan.

* A digital image is trivial to duplicate and consume again. A meal can be very hard to duplicate (and may even require another chef, or food scientists). No two people can consume exactly the same meal.

* A digital image is not required for survival. Regular intake of food and water is (at least for humans).

and so on.


I think the whole food analogy is missing the main point: we are not reusing someone else's food to create a new frozen food.

Frozen-ness is about mass production - but not about the origin of that food. It just conflates unrelated issues.


All advances in creative tooling are automating creativity. What are you on about? Look at Photoshop: the color selector automates mixing paint, and digital filters for audio automate analog mixers. This attitude towards AI creative tools is gatekeeping.


The people who wrote the Photoshop source code were paid for their contributions with full consent that their work would be included in the product and sold for a profit by Adobe. The artists whose work was fed into these systems did not consent, nor were they paid for their contribution. In fact, I have seen a few reports of people using these systems to generate lookalike artworks intended to resemble the work of specific working artists—artists whose livelihoods depend on client commissions. The only way that these systems could produce knock-off artwork after receiving a prompt containing that artist's specific name is if the artist's own copyrighted work was used in the training data.

Do you have any doubt that many working artists' copyrighted work was included in the training data that produced these systems, without those artists licensing their work for that purpose?

It's not gatekeeping to insist that a project obtain a voluntary license from the people who contribute their work to a project.

And it's not gatekeeping to point out the fact that these platforms have only been successful by using working artists' copyrighted work in violation of their rights, without asking permission or providing compensation.


Be real, copyright/attribution is only one complaint, a tiny part of the sudden hatred against AI art/creativity.

When somebody brings up the invention of photoshop, they're not trying to disprove a point about paying salaries to the devs, and I am sure you knew that.


No, your perspective on this is all wrong. The outrage about AI art is not about how it empowers its users.

Rather, the problem is that these systems only exists today because of the involuntary contributions of the very same people who are likely to be put out of business.

It really is about how the contributors to these systems are being treated. The coders who contributed to Photoshop are analogous to the artists whose work was used to train these models. The difference between them is that the former gave their consent to the contribution, and the latter did not.


So, my perspective on this is all wrong, despite the fact that the original poster wrote, pretty damn specifically, "Automating creativity is so gross. Never mind the fact that these models are trained on stolen artwork and texts."

That's it, that's the entire post. Never mind the "stolen artwork" argument, you still have people upset about "automating creativity".

So, like I said, it's one part of it.


My mistake. Your perspective is not all wrong. You're just focusing on the wrong thing.


> Be real, copyright/attribution is only one complaint, a tiny part of the sudden hatred against AI art/creativity.

When the artists whose work was stolen to train these models are compensated for that work, then maybe it's a "tiny part". Until then, it is taking people's work and using it to create an ultimate competitor in their own market that they cannot compete with. They now have to compete against themselves, and they haven't even been paid for that "privilege".


This kind of wording reminds me of a very old (probably outdated) argument about the theft of digital goods. In meatspace, when you steal something you are depriving someone of a thing. If I steal your biscuit, you don't have the biscuit. But if I copy a picture of the biscuit, you are still left with your own picture, perfect and identical in every way. If I'm not reselling that image to everyone, you've only lost that one sale. I wonder, if OpenAI went and paid retail price for each image they used, would this discussion finally end?


beautifully put


We are talking past each other. I am not talking about, and indeed am not interested in, the compensation discussion. I will let other people battle it out. I was just responding to calling automation of art gross. I don't know who you are talking to.


This is a legit complaint, but it's realistic to overcome in the future, I think. There have been reports of OpenAI hiring software engineers to train AI models to write code in their languages. The same can be done with art; it will just take time. It's an impressive demonstration of what is achievable, though.


It’s not gatekeeping to value the outcomes of work created this way less than you would from the mind of a human.

That said, I agree with you in that I see this technology as a tool that can be used by humans to make amazing things. No one really has yet, as the outputs have just been blunted, sloppy things. But if I were an artist, I'd see these as the latest tools to help make things.


> It’s not gatekeeping to value the outcomes of work created this way less than you would from the mind of a human.

There is a difference between saying something is worth less and calling it gross. Also, IMHO, the value of art is usually in the art, not the process. Sometimes it is in the process, like someone showing off a new method, but that is of interest to very, very few people. Just like an academic paper's methodology section: even within the larger community, it's useful to only a few people (apart from just checking someone's work).


What is wrong with gatekeeping? It's not like everyone needs to be able to make art. Also, I don't know much about Photoshop, but most digital filters usually act the same as analog mixers. They do not automate anything.


It's not like everyone needs to make photographic reproductions, why not delegate the task to photorealistic painters? Now that we've gotten used to the convenience of having a camera in our pocket, who would suggest banning photography?


Most people don't make good photographic reproductions. They have mobiles with cameras, sure, but they cannot take an interesting or even technically good photo.

Good human photographers are still a thing. They can even get a job out of it! There are books about photos by photographers of renown.

AI automation here would be more like AI deciding which photos to take, and taking them for you, with your interaction limited to "take some nice photos of my vacation". Then some other AI could give likes to the photo, closing the loop so that humans need not be involved at all in that silliness of photography!


On an emotional level, I totally agree. But as a developer of creative tools, I've been trying to make sense of what's good and bad in this field. Arguably, all creative software is at least partially automating creativity (photoshop, ableton etc). But my conclusion so far is that the gross part is when the user inputs a vague request and a finished product comes out. Contrastingly, if the user can make continuous changes with realtime monitoring of their actions, then that is an instrument - mostly amplifying the creative impulses of its user (which is good).


It usually trends that way over time.

When you're demonstrating or studying a tech (like we're doing with gpt now), there's a tendency to go "here's what it can do." Once people start using it for more mundane work, the focus becomes "here's what I can do with it."

It probably doesn't matter what children's stories language models can write themselves. It matters what stories people can write using them. Same for software/graphic design/law/etc. These are just tech demos.

This becomes clearer once demos play out. 2 years ago, scripting a philosophical debate using language models felt like AIs expressing their opinion. By now, we're past that.


I think (and hope) you're right. Recently, someone coined the phrase "Bach faucet" to describe the mundanity of an "AI" that can compose millions of novel Bach fugues that are indistinguishable from the real thing [1]. At the end of the day that is just pointless - abundant art serves no purpose in society, and its value will be zero when considering supply and demand.

[1] https://twitter.com/galaxykate/status/1583907942834716672


Precisely.

OTOH... using AI to create art shifts supply and demand around such that there will be practical implications. Email correspondence between a customer and a bank may look very different once both are using GPT-enabled email clients.


As someone who is not that great at creating art, I like it. It enables me to build more things than I could before. I don't find it much different than a website builder is for people who lack those skills.

Thinking more deeply, I find that AI art has no real value to me, though. I wouldn't hang it on my walls or hope it survives for my kids. If I had written the algorithms that power the model, then I'd feel like more of a stakeholder and maybe I would see the output as something with long term value. To me it feels more like fast food. It fills the void, but it's nothing special-- it's disposable 'art'. But maybe that's just me.


If a person learns how to paint expertly by looking at (and even copying, stroke for stroke) paintings, would you consider that immoral, and those paintings having been stolen? If not, what would you say the difference is?


Morality aside, an artist who entirely copies someone else's work would be looked down upon because they are not original or creating anything novel or new.


So scribes were looked down upon in the middle ages for making copies of books before the printing press?

I think in the tech industry we get caught up on 'novel' and 'new' because duplication is cheap to the point of almost free.


Scribes were not authors. They were expected to copy. Bad faith.


Morality is the only thing I'm querying here. No point changing the topic.


The difference is that a human made the artistic stroke-for-stroke reproduction through human skill and expertise. Even a counterfeit painting is an exercise of perception and interpretation. The value is in the human's hand in the output.


That’s actually a good point. It is very much a non-trivial ability.


Making a lump of silicon generate a painting is far less trivial. That's not relevant. Being trained on paintings was described as stealing, and unless your argument is that stealing is easy so anything non-trivial can't be stealing, I don't see the relevance.


There is the notable difference in the speed and scale at which a computer can be trained. The analogy doesn't really hold up in my opinion.


So if there were a genius painter that learned various artistic styles on sight you would say that their work had no value?

I think you might be in trouble there, a number of savants have been known throughout history who have been able to learn various things with no instruction and simply by hearing or seeing a work done by someone else have been able to recreate it or produce something in their style.


So the analogy doesn't hold up because a computer does it more efficiently?


That can be a valid reason to reject an analogy. For example, many people who would accept regular police patrols are not fine with the police installing a tireless ALPR scanner.


Well, in the police scanner example, the problem people have with it is not its efficiency, but rather other surrounding concerns (like privacy).


Being able to do something faster isn't automatically a moral principle. Even between two humans there will be big differences.


Have you read "Pierre Menard, Author of the Quixote" by Borges?

Part of the conceit of the story is that even a mechanical copy of a known work can become something else because of the inner experience of the person reproducing it. Menard's Quixote, his fictional author thought, was different because Menard knew things and lived through things that Cervantes before him didn't, and so the exact same words are infused with different connotations because of this.

It's absurd, but it explores something about authorship.

ChatGPT and AI have no inner lives, no authorial perspective, and so they cannot use this defense. In their case, a copy is truly "stolen" with no additional value.


This is kind of a tired line of reasoning, and I don't know what its goal is. The fact is that virtually no one learns or does art this way. No one in statistically meaningful numbers has a photographic memory and uses such memory to create new art stroke by stroke.


There are many paintings that have been made following along to a Bob Ross video - using the same paints, the same brushes, and the same elements within the painting.

Even after following along and gaining the confidence to paint on their own, they continue to use the same approach to painting. Are those later paintings derivative works, to be treated the same way as an AI painting asked to create a scene with certain elements in the style of Bob Ross?


That is largely hobby painting and learning to paint. You're conflating human skill - learning, adapting, going through a relaxing exercise - with AI generation. It's not a serious argument.


My question is "what is a derivative work?" This is equally applicable to images generated for a hobby as it is for professionals. It applies to art generated with a paintbrush and canvas, on a painting app on a digital tablet, and as generated though an AI model.

The definition of a derivative work is agnostic to the means to create it. The final product is what is being evaluated.

If the answer is "but the Bob Ross paintings that are derivative works by amateurs are never sold and only hung at home - it would only be an issue if they sold them or tried to display them publicly as described https://www.law.cornell.edu/uscode/text/17/106 " - then I would agree.

If I create an image through any means - be it paint on canvas, stylus on tablet, or prompt and ML and don't violate 17 U.S. Code § 106 then there shouldn't be any issue.

If, however, I publish that image - again, no matter how it is created - then there is an issue. But it isn't the maker of the paints, or Adobe, or the creators of whatever ML model generated it. A program cannot hold a copyright and cannot itself create a derivative work. ( https://www.smithsonianmag.com/smart-news/us-copyright-offic... ) and so the issue of creating and publishing a derivative work is upon the human who did it - not the computer.

If you want to sue someone for publishing an image generated as "create a landscape of San Francisco in the style of Pixar" then have Pixar go after the person who published that image. It doesn't matter what process created that image.


> learn and adapt and go through a relaxing exercise

The AI is learning. Why is one of those things stealing and not the other?


At best you could claim “pirated”, definitely not “stolen”.

IMO piracy should be morally encouraged


Do you think it's moral for a big corporation to profit on pirating works from small artists?


For-profit piracy is questionable (there’s pros and cons, it could unleash a ton of creativity) but what we’re discussing here is non-profit, personal use piracy.


I'm more referring to the companies offering the tools. While some are free to use (for now), they are still profiting from this type of personal use, often being able to convert to a paid model later down the line - think GitHub Copilot.


>Automating creativity is so gross. Never mind the fact that these models are trained on stolen artwork and texts.

King of bad takes. Painters literally said the same thing about Photography.

>these models are trained on stolen artwork and texts.

And the neural networks in human artists' brains are exempt from this logic why? Because they're biological and not silicon?


> Painters literally said the same thing about Photography.

I think this is itself a bad take, for several reasons.

First, that something false was said in the past doesn't make it false today for a different thing.

Second, that wasn't the argument painters "literally" used. Did they claim that taking photos was "stealing" their paintings/reality?

What some may have claimed is that it took no skill or that it was cheating, or that it would put painters out of work. We now know all of these assumptions were wrong.

Photography doesn't "automate" creativity, as anyone struggling to take an interesting photo that tells a story can tell you.


Did we really NEED automation for writing kids books?

And for whoever is saying that not automating these is gatekeeping, let me ask you: gatekeeping from what? You could write a kids' book yourself without AI if you put your mind to it. It's a super easy feat.

Can we strive to automate more boring stuff such as scrum meetings, filling out forms and paperwork in general?


This “gatekeeping” accusation is so stupid. It’s a laughable conclusion of the current woke/victimization culture. I have seen people on HN refer to the idea of needing some kind of skill to do something creative/difficult as “ableist”.


If it works, it'll be used. Are you grossed out when using a calculator? I hope so


The square root of 3.5 is always the same value, so why bother doing it less efficiently? The same is not true for creativity.


Mathematical concepts are in the public domain. The art used to train these models is not.


I mean, the concept of the public domain is malleable when companies like Disney can pay off their favorite politicians to get 20 more years of monopolies on their ideas that 'steal' from the society they live in.

Being that there is enough of a debate on this topic alone that's been raging for decades it's not easily decidable by a one liner.


True. Personally, I think the concept of public/private domain is a good thing. But like all good things, it gets abused by bad actors who are huge corporations. Meanwhile, the artwork and texts used for training these models often come from small independent creators - the same creators who will be sidelined by this new technology. Bitter pill to swallow.


Yeah, cameras are just the worst. Won't someone please think of the painters?


I hate it as well but at this point, I think the cat's out of the bag. I think it's unlikely that legislation will be able to curtail any copyright infringement.


Kids books don't take much creativity


Expecting the “Made by humans” stickers and labeling any day now. As AI generated content becomes mainstream, we’ll see product labeling like we do with organic produce to differentiate the source.


Yeah I suspect the "Made by humans" stickers will be abused, just like how car manufacturers spend 20 minutes attaching the wing mirrors in the US and so it's "Made in the USA".

I guess the equivalent here would be generating a story with an LLM and then changing the wording a bit.


It'll be short lived if it is even a thing. Forget bicycles, you're looking at a power loom for the mind. The next industrial revolution is here.


The term you are looking for is “handcrafted”. This book is still made by humans.


Awesome. I built StoryPanda[1] to do something similar. My daughter, and kids of my friends absolutely love creating a story every night.

1. https://storypanda.ai


Hey Obaid, I built something similar, https://storybird.ai/. Want to chat?



Your https://storynoworky.ai unfortunately :(


I just wanted to continue the pattern in the thread :D


This looks great! What is the text-to-image model that you have used?


I don't know how to think about this:

On one side, it might be nice to create stories that fit a specific want/desire for what a child might need. We might even go for the concept of a segment of one kind of story.

On the other hand, I am picky with the stories I read to my child. I see stories I read as education. They transmit some non-conscious cultural things about being human, about handling this reality and the world.

For me, the books I read to my kid are the modern equivalent of people telling stories around the fire thousands of years ago to transmit necessary and vital knowledge. I believe (without any scientific basis) that they are shaping both how my child sees the world (through word constructs and imagination) and my child's value system.

Of course, I am trying to expose him to various stories and books as I am also not sure what I think is important is actually important for his life.

But I am a bit reluctant/skeptical whenever I buy a book launched recently. I am not 100% sure if the author just wrote a story because they want to sell something, and it does not have any other meaning. Don't get me wrong, it is ok to just buy a book because it looks nice, even if it is shallow. But there are so many books that I don't want all the books to be the same.


> But I am a bit reluctant/skeptical whenever I buy a book launched recently. I am not 100% sure if the author just wrote a story because they want to sell something, and it does not have any other meaning.

I understand your concern. But in this case the whole "book" is just a few pages long. You can vet the thing in minutes.

It is a lot harder with longer books. I vividly remember an instance from my childhood when my mom was reading us a Doctor Dolittle book. I remember it because in the middle of the book they had a fight with a crocodile who tried to eat the hero, and according to the story they bound and gagged the crocodile and left it for good. As a child I felt so sorry for the crocodile, who had just followed its natural instincts and now, bound and gagged, would surely die a slow and painful death. My mom, seeing that this made me upset, turned the page back and improvised a story, pretending she was reading from the page. In the new revised story they made the croc promise not to eat them, and then they untied him.

And that was a book published half a century before us reading it. So recency is not necessarily a safeguard.

Also sometimes the chat you have with your kid about a story is as important as the story itself. For example about this linked one I would ask any kids if they also found it odd that the hedgehog gave up their passion of baking and opened a theme park. That can lead to deep conversations about the purposes of life, or maybe a much shallower but equally delightful one about the fun of theme parks.


Absolutely, there's always a moral to a story in any well-known children's book. As for trusting the old, well-tested stories more, I agree, we all grew up on them, but they also have a darker side: many of the classics hide a lot of subtle old-fashioned social and gender stereotypes and references, as well as many hidden sexual allegories that I personally don't really want my kids to buy into - from banal ones like portraying aristocrats as superior, to the really foobar ones with all these passive princesses just sitting and waiting for that one Prince Charming to come and rescue them. Hats off to the new wave of fairytales (like e.g. Shrek) that are now trying to move away from these stereotypes.


Though I like AI producing human-like work, I would caution against consuming everything AI produces in use cases like this. Human creativity should be the first preference and should avoid being replaced, based on the outcome of consuming AI work.


What is the outcome of consuming AI work?


Humans get pushed out of the trade, or they too start using click-to-generate content. That generally lowers the bar on quality.


It’s obviously not perfect, but it’s pretty darn impressive that AIs can do this now. Something like this was not possible just three years ago. I can’t wait to see what this technology will be capable of in the next few years.


It seems the AI doesn't know the difference between a hedgehog and a porcupine. (A quick check shows Stable Diffusion at least knows the two animals on a high level.)

Leaving aside the creepiness factor, I wonder what confusion will be sown in young minds by the inevitable onslaught of machine-generated children's content.


> stable diffusion

A bit tongue in cheek. But, that would be like saying a pencil knows the difference between them.

Which model in particular? 1.5, 2.1?


Yeah there was a very viral piece of news last month about a guy who did this exact same thing for his kid: https://80.lv/articles/children-s-book-generated-with-chatgp...

Though I think the overall response was mixed.

EDIT: Coincidentally enough, both share a character with the same name, "Sparkles". All of these GPT-generated plots have the same formulaic tone to them.


To be honest, I had the same thought 10-15 years ago when I was reading stories to my daughter. It was all written from a template, changing the names of places, characters, and goals for the story. Many were based on Harry Potter, with kids going to magic schools and saving the world. There was even a series with exactly the same story in each book, just a new goal each time; the series covered half a shelf at the local library. My daughter quickly got tired of them and we started reading Terry Pratchett instead.


There's a huge industry of ghostwriting bad children's books based on what's popular. Zoe Bee did a great video on her experience writing unlicensed Minecraft stories[0].

But the point is that these basically pay humans to churn out low-quality cookie-cutter stories in a time frame and at a compensation rate that hardly permit any real creativity. This is an obvious example of something GPT can do well, but that's just automating garbage production.

Automating Terry Pratchett on the other hand is impossible without "genuine" creativity: his stories work not just because of the technicalities (a fleshed-out world with consistent rules and characters) but also because he had "something to say". He had personal and political views, experiences and opinions that influenced his work and world building.

In essence, works like OP's and the similar ones others have linked here remind me of Hayao Miyazaki's infamous reaction to machine learning driven animation[1]: it's an insult to life itself. It's a soulless approximation of the trappings of art that doesn't understand what it supposedly attempts to capture and can say nothing about it because it doesn't have an opinion on it.

[0]: https://www.youtube.com/watch?v=O1aqLLiIjgA

[1]: https://www.youtube.com/watch?v=7EvnKYOuvWo (tech demo and reaction starts at 1:25)


>EDIT: coincidentally enough, both share a character with the same name, "sparkles". All of these GPT generated plots have the same formulaic tone to them.

100% relevant: "Someday" <https://en.wikipedia.org/wiki/Someday_(short_story)> by Isaac Asimov, 1956


Yes, I just checked. The fairy in my daughter's book is also "Sparkles". I believe it uses the same back-end engine.


Does anyone here really think that AI will produce a new "Jungle Book" by Kipling, a new "Mona Lisa" by Da Vinci, or a new "Requiem" by Mozart that could have even a remote possibility of becoming a cultural icon and reference? And is that a goal or event that the spectators are waiting for? And if it occurs, what would it imply about human cultural evolution?


Does anyone here really think artists using computers will be able to create great works?

This is really just a dumb question. These are tools, like paintbrushes and paint software. Any idiot can put paint on a canvas. Any idiot can make doodles in GIMP. Any idiot can write a prompt. None of these statements say anything about the upper bound of creative potential for any of these tools. The fact that a creative work could be summoned deterministically from a seed in an embedding space is irrelevant. And no, we shouldn't put old works on a pedestal. Note that your examples are basically all pre-digital.

These are not sentient AI. These are tools used by humans. And you can't judge tools' potential by the least competent users, the average user, or even the best users.


Maybe[1]. At least ChatGPT seems to "think" so. I was curious about how ChatGPT would go with creating memes (this era’s main cultural output).

> Could the memes generated from this have a remote possibility of becoming cultural icon and reference? And is that a goal or event that the humans are waiting for? And if it occurs what would it infer about human cultural evolution?

===

It is possible that the memes generated from this image of a robot with a blank, emotionless expression could become a cultural icon and reference, although it is difficult to predict which memes will become popular and enduring cultural phenomena. The goal of creating memes is often to entertain and engage with a particular online community, rather than to become a cultural icon.

If a meme generated from this image were to become a cultural icon, it could suggest that humans find the idea of a robot trying to understand human behavior and emotions to be relatable or amusing. It could also indicate that the concept of AI and robots trying to understand human behavior has become a prevalent theme in popular culture.

However, it's important to note that memes are a form of digital culture that can be ephemeral and change rapidly, so it's not a certainty that this meme will become a cultural icon or reference, but it could be a possibility.

===

[1]: https://www.joshka.net/2023/01/chatgpt-memes


Yes and no.

Famous art is about context, circumstances and marketing. Strip a famous piece of art of its context and it doesn’t have meaning. If you find someone who can put an AI generated piece of art in the right context, it could become the next Mona Lisa. But then, is the AI the artist or the person who uses AI to create context?


You didn’t really make anything though.


I hope if you program you always do it in assembly language, otherwise you didn't really make anything.


Well, they wouldn't claim to have assembled it. It's clear to me that these people using AI so heavily don't have an authorship role at all, but a publishing one.


It's not so clear to me. AI is just a tool you use like any other.


In that case are publishers also authors when the artists and writers are (contracted) tools like any other?

This is just a rehash of the debate around authorship for prompting a single image. If you had typed the same prompt into Google and picked a picture, or hired an artist with that prompt, you wouldn't claim to be the artist. Therefore you're playing a different role in the process, ML model or not: publisher, producer, whatever.

The sooner these people come to terms with it, the better, so they can promote themselves for the skills they actually have (lackluster as I find them) and leave actual authors to carve their own niches among the incoming flood of generated content.


What? People are not tools, machines are. Is this so hard?


I can't imagine anything is clear to you.


You're a bottleneck to understanding.


How do you get a layer of butter on your kitchen?

I'm not a native English speaker, so maybe this sounds natural to people, but to me it sounds like bad wording. It paints the wrong picture and is not kid-friendly wording IMO.


What about it seems “not kid-friendly” to you?

I don’t see the problem.


'On your kitchen' is not something most English speakers would say; they would say 'in your kitchen'. Otherwise it seems like the butter is 'on top of' the kitchen, like on the roof. And instead of 'your kitchen' it would be 'the kitchen', since the story takes place at the house of the characters in the story and not the reader's house. Also usually children's books are much more descriptive: "Butter went everywhere! On the ceiling, on the floor, on the refrigerator, on the toaster..."

And to the point of not being kid-friendly: imagine your child has this book read to them, and you find the next day that they've covered your own kitchen in butter, just like in the story... Most children's stories are written in a way that spells out danger or bad behavior meaningfully.


The wording part is my fault - I didn't think about on vs in, in my mind it would be on the walls and stuff / kitchen is the insides of the room - but I can see how that's maybe not appropriate.

The exact quote is :

>But then she noticed something even stranger - her kitchen was covered in a thick layer of butter.

vs

>Butter went everywhere! On the ceiling, on the floor, on the refrigerator, on the toaster

Your phrasing sounds like something I'd read to my son and I could see him repeating all the items and adding some of his own.


I don't know; I'm imagining reading this to my two-year-old, and using words like "layer" seems pointlessly complicated. The story seems too silly for a child with a better vocabulary. I could be off.


It's almost like the "I" should be in quotes. Because if anyone else in the world gave ChatGPT the same prompts, they would get more-or-less the same story. (visual generative AI is different).

I think this makes things like college essay generation less scary. If two students prompt ChatGPT with "write a personal statement for the common application for college" or whatever, ChatGPT will more-or-less produce the same generic statement, with minor variations. It can not personalize for each student because it doesn't know anything about either student. Prompt engineering? Ok sure, but by the time I have written a prompt that is comprehensive enough to make an essay that describes me, I've basically written the essay myself.

So anyway, I imagine people will try to copyright output from generative AI responses - interesting question about "who" produced that output.


You could say the same about a person performing a piano piece. Anyone who presses the same keys in the same sequence would get more-or-less the same performance.


An absurd analogy by somebody who obviously doesn't play the piano.

To get to that point, one has to spend a painstaking amount of time perfecting their craft and improving their virtuosity on the instrument, not to mention that everyone adds in their own interpretation and dynamics to a piece.

I can sit down at my Kawai and rattle off the Maple Leaf Rag by Scott Joplin without too much issue. I'm happy to hand John Q. Public the sheet music, but there ain't no way they're playing it.


Just paste in an excerpt from a previous essay of yours and ask it to write the new essay in that style. There, prompt engineered.
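Something along these lines usually does it (the details here are invented purely for illustration):

    Here is a paragraph from an essay I wrote last year: "<paste a few sentences of your own writing>". In the same voice, vocabulary, and sentence rhythm, write a roughly 500-word personal statement about volunteering at a local food bank.

The pasted excerpt does most of the work; the model tends to imitate a concrete sample far better than it follows an abstract instruction like "make it sound casual and self-deprecating".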


If anyone's interested in the process for doing something like this, I wrote about generating children's books at the end of last year.

The most interesting part was using ChatGPT to target and tweak certain parts of the story and merge them back in. When trying to make edits to a very small part of the story it understood what I wanted extremely well.

The most frustrating part was trying to get a consistent character in Midjourney. I tried using identical seeds and parameters. I even used a base image of the character to try to influence it. Nothing worked that well. My solution was to come up with scenes that obscured the main character's face - the ones with him peeking out from under the covers or walking down the street.

https://solomon.io/childrens-story-written-illustrated-ai/
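To give a rough idea of the kind of targeted edit described above (the wording and story details here are made up, just to illustrate the shape of the prompt):

    Here is paragraph 4 of the story: "<paste the paragraph>". Rewrite only this paragraph so the fox apologizes before running off. Keep it under 60 words and keep the same tone as the rest of the story.

Scoping the request to a single paragraph like this is what makes the merge step easy: everything else stays untouched, so the revision can be pasted straight back into the draft.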


I find it crazy how many people in here still believe these AIs are producing totally original works - the main source images are all ripped from ArtStation and the like.

Midjourney is painfully obvious in its use of the same limited set of source images, with filters, transforms, and compositor routines applied.


What does a "totally original work" look like to you? And what does it produce if it's not original? Remixes? Plagiarism?


We own a few children books which were obviously produced by piecing together random stock images, and they are like this.

So, there's a market, and kids won't mind :)


I appreciate the sentiment (I think) but as far as kids books go, this isn't great. Can anyone point in the direction of a great AI-generated kids book?


I also created a kid's book for Christmas, but I still haven't written a blog post about it.

What I did:

- I asked each of my 5 nieces/nephews to give ideas (e.g. invent a character)

- I generated around 10 stories with ChatGPT, based on their input. I used ChatGPT in French; it worked almost as well as in English (except for poetry).

- I got a total of 4500 words. This took a lot of iteration with ChatGPT, because I gave guidance and used lots of follow-up prompts to force ChatGPT to add more details to the stories (a rough sketch of that loop is below).

- I generated 2-3 images per story, plus a book cover.

- I paid for a professional print of the book - 24 pages, around 20 euros

- In total, I spent around 15-20 hours of work (content generation took a lot of iterations).

That was a nice Christmas gift.
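To sketch the iteration loop mentioned above (prompts paraphrased here and shown in English rather than French; the character and details are invented for illustration):

    Initial prompt: Write a children's story of about 400 words about a shy dragon who learns to fly.
    Follow-up:      Add more detail about the forest where the dragon lives, and give his little sister some dialogue.
    Follow-up:      The second half is too rushed. Make it about twice as long, keep the same tone, and end with the dragon flying over the village at night.

That back-and-forth is where most of the iteration went.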


Going to liken this to music, because I'm sure AI-generated music is coming. Long before AI entered the picture, I realized that the difference between just enjoying music and really getting into it was the ability to relate to the artist in some way, and through that a kind of bond is forged. As the tools become more sophisticated and there are more layers between the artist and their output, this connection becomes more tenuous. With AI in the picture it starts to disappear. I don't expect my attitude to change on this, so I doubt that when AI-generated music comes along I'll be able to enjoy it more than casually.

[Edit: a word]


I wrote a few children's books when these tools first came out, but ultimately I decided against publishing them due to ethical concerns. I couldn't quite put my finger on it, but something felt dirty with the act of publishing and profiting off this work as my own.

That said, it was a very fun exercise and I look forward to the stories that are created in the future where the authors don't have the same hang up I did. Even if you don't publish, I would recommend going through the creation process at least once to familiarize yourself with how simple it actually is (tedious, but simple) to create this type of work.


I keep coming back to the fact that these technologies enable people to create simulacra of high-quality content quickly - but it shows, because of the obvious shortcomings other posters have identified.

If I gave myself a week to write a children's book using these tools, with a human editor in the loop, could I (or perhaps someone more talented) create a great book? That seems possible. This seems like it was created in a couple of hours - and that is slightly interesting - but perhaps we will quickly move beyond the amazement of this as we start to see lots of quickly generated low-quality content.


Whenever I'm looking for anything on Amazon nowadays, it feels like sifting through hundreds of cheaply made products from companies I've never heard of with 4 or 5 star reviews I have no trust in.

I'm really not looking forward to the same thing happening with blog posts, news articles, or books. Not that it's all quality content out there at the moment, either.


Take all the work of writing a book.

Factor out the creative tasks.

Focus your work on the creative tasks and let the AI do the rest.


Can tell it was written by ChatGPT. It has that kind of verboseness to it; can't quite put my finger on it.


But a kid won't be able to, and if they can, they won't be able to articulate what's wrong with it.

From the perspective of anyone going into AI, that makes children the perfect target for the mass produced blended garbage that AI generates.


What elevates a human story above ChatGPT's? Do our brains have some hidden transcendental property that LLMs can never reach? Or is it simply our own biases once we learn that this is the product of an algorithm rather than human-made?


Unpredictability. ChatGPT writes stories like children do. Very wordy and boring. Then this happened then that happened.

Interesting stories have a set-up, and when you think you know what's going on, they switch; otherwise it's not a story, just narration.

The trick is that those baits and switches differ culturally, depending on the reader's language and even the reader's tastes; otherwise you don't get that "I was NOT expecting that" joy.


Why are you asking me this when you could be asking ChatGPT?

This is a serious question.


I find that Midjourney tends to have its own characteristic touch as well. It's great if you like it, but it can be tricky if you want something different. I've found StableDiffusion to be a bit more versatile, even though it's harder to get a similar level of artistic quality.


I was going for more of a kid's book tone but I think I hadn't quite learned enough about the depth and nuance you can introduce into the output via prompting. In retrospect now, I agree it has that generic-AI tone.


The "all sorts of other whimsical attractions" is especially bad.


Absolutely. This was recycled almost directly from some of the prompt material I gave while refining. I've learned more about "prompt engineering" (as it seems to be called) since and think it would be relatively easy to smooth these edges out.

Edit: typo


This reminded me of the rules for fiction in Flaubert's Parrot. They don't include any mention of whimsy, in fact, but it was fun rereading them.


It's funny how the fairy ended up with three legs in the final frame. I've noticed AI seems to do that - generates an otherwise fantastic image but with one super odd addition.


The porcupine has two mouths as well. I tried several times to smooth that one out but I could never quite reproduce the same look in the porcupine's eyes so I kept it.


Use InvokeAI with inpainting to leave the parts you like untouched and blend in fixes for other parts.


Almost impossible to generate hands with 5 fingers in Midjourney


Reminds me of a Milton Jones thing

> The first we realised there was something wrong with granddad was when he stripped naked and covered his body in butter. After that he went downhill very quickly.


Is there any feasible way to get Midjourney to use the same "character" each time? The hedgehog and the girl (and the woods) just change, losing some emotional constant across the book.

I am "writing" a book about fairies with my seven-year-old, and I think I might use Midjourney for the background scenes and Photoshop in her fairies, but... yeah, ChatGPT and Midjourney are kinda like Twitter/FB - a mirror of our society and its artifacts.


Nice idea, probably not long before people start doing this to self-publish "fiction" to clog up Amazon and Wattpad further.

Sidenote: Why do Stable Diffusion, Midjourney, and DALL-E all generate animal/cartoon art that looks like a Disney/Pixar ripoff? It's worse than the generic "Corporate Memphis" art style that every company uses for their marketing websites.


Others have commented on the aspects of the story and whether we should do this with AI, etc. I'll say that I'm not opposed to the idea, but I think the results were a little weak. I think they could be made better by improving your prompts to the AI. For example, the words say that the butter trail led from the house into the woods, but the illustration doesn't show that. There's one large puddle in the kitchen, and that's it.

Another issue is that the oven that Penelope puts the bread into is very different from the oven that she later finds empty. One looks like a wood-burning oven, almost like a fireplace, the other looks like a microwave without a door. (Note the top of the oven in the first image is a curved arch, whereas the second one is a rounded rectangle.)

I think this lack of continuity between images is part of what gives the uncanny valley feeling to it. It's like some incoherent dream where neither the story nor setting is the same from scene to scene.


The main problem with these things is the lack of persistence. In every image the porcupine is different. Any ideas on how to fix this?


Hire a human


Great job! Did you use any special techniques for getting the same characters in different pictures/settings?


I used very basic prompts including words like "cute" and "happy" and environment descriptions but not much beyond that (no art style or specific colors of the characters). I felt overall the character consistency has room for improvement but could have been a lot worse. Midjourney's default "style" has a sort of specific flavor to it.


One of the downsides is how the characters look so clearly different in each picture. I don't know if there's any way around that, and might be that kids don't care about such things anyway.


I think that can be done by passing in an image as reference in the prompt.


Yes, you can pass an image (or more) and Midjourney will use it in the picture. You are also able to control the level of "creativity" Midjourney has when creating the image.
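For example, a prompt roughly like this (URL and seed invented, and exact parameter support varies by Midjourney version, so treat it as a sketch):

    /imagine https://example.com/penelope-hedgehog.png cute hedgehog baking bread in a cozy forest kitchen, children's book illustration --seed 1234

The reference image URL at the front of the prompt is what Midjourney draws on, and on versions that support it, --iw (image weight) controls how strongly the result sticks to that reference. Reusing the same seed across pages can help a little with consistency, but it's far from a guarantee.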


Oh, I like the simplicity of the delivery. I did the same but wanted to publish it on Amazon Kindle and got lost in the final steps. Maybe I should just put the website up as you did. Nice graphics; the character consistency is very good, and it is a bit hard to do with standard tools. Bravo.


A friend of mine has introduced his first grader to ChatGPT, Dall-E, etc.

Now one of his favorite games is "AI" and he lights up in wonder typing the kinds of things at a prompt a first grader would.

I don't have kids and wouldn't dare wade into parenting approaches but compared to how most of the kids I've seen interact with technology this (to me) is very promising. Compared to watching endless YouTube videos or even playing Minecraft or whatever it's a lot easier to see how "playing AI" could progress to a genuine interest in technology beyond staring at a tablet.


My 9-year-old wrote and published a book with DALL-E 2 and the OpenAI playground (ChatGPT was not a thing then). It is quite interesting and powerful to see how these AIs complement a child's unbounded imagination. I would have to say children's imagination has more entropy/perplexity than what the AI would come up with; in the end, GPT was merely supplying correct grammar and adding bulk to the story.

Children's imagination is the most undocumented, unexplored frontier of human imagination and I still have hope for humans on this edge of creativity.


> Children's imagination is the undocumented, unexplored frontier of human imagination.

Hard disagree. But I won’t repeat Boulet’s points, I’ll just link to the comic¹. The alt text in the images expands further.

¹ https://english.bouletcorp.com/2007/12/11/fuck-peter-pan/


My youngest son loves the story of the Three Billy Goats Gruff and also the show Blaze and the Monster Machines on Nickelodeon. One of the first things I thought to do with ChatGPT was ask it to write the story of the Three Billy Goats Gruff as an episode of Blaze. The result was pretty impressive, with Blaze and his friends facing off against a troll in a race over a bridge to save three goats. A totally plausible plotline for an episode, and most importantly, my son was thoroughly entertained when I read it.


It is obvious here that computer programmers have pushed ChatGPT straight into the most sensitive grounds of cultural transmission. Since there is a documented and ongoing mental health crisis among young teens tied to electronic communication, this appears to be adding fuel to the fire of whatever it is that is going on and causing real illness on a large scale.


I did something similar back in early 2021 (that I ended up self-publishing for fun). It turned out ok at best, but it was a pretty fun project - it's now a cool snapshot of AI generation technology of the time.

The progress that has been made in text and image generation over the last ~2 years is insane.


This is great; you even got a UI that just works without explanation, so that kids can use it too. Good on ya, OP.


In the coming years, the human values taught in these AI children's books will be picked by an AI to fix the most pressing current and foreseen issues in society (e.g. individualism, environmentalism).

In the following decades, parents might be less relevant in children's education, sometimes even harmful.


Phones are practically parents already and that's going so great.


Would I pay $15-20 USD for this book? No.

But would I pay $2 for it? I don't know, perhaps just for the tech.


Kids would love the complete U-turn into the butter theme park. Good job, I'd say.


All the sites I've seen have weird-looking images with Midjourney. I've made a couple of good ones here: https://bedtimestory.live


With respect, these examples seem to be summarising the plot of a story rather than actually telling a story.


Co-founding a startup in the physical space with someone you have not only just met, but who has also freely admitted to recklessly destroying your property? Suboptimal.


Are you in the wrong thread, by any chance?


This is not a great story and the visuals don’t do much either.


This kids' story is too wordy and the sentence cadence is uneven.

I guess this will get better, and soon Amazon will be flooded with a bunch of kids' books, and that is kinda sad.


Somewhat related: yesterday I almost fell for https://youtu.be/I7kW2-4xdN8


It's technically impressive but miles away from the storytelling power of, say, Julia Donaldson, and I think that's a good thing.


When will there be a similar tool that allows you to even add a generated video interview with the "author" of the book?


Are books generated this way copyrightable?


For Midjourney, yes, if you have a paid plan. The cheapest one is $8/month. For ChatGPT, I don't know.


Isn't copyright only for human-created things?


Looks good! Add a TTS from https://play.ht on there


Or https://fakeyou.com, which has some very good fairytale-style voices. Or cartoon voices your kids would like.

We'll be launching a web and real-time voice changer soon too, so you can use your own emotion and turn it into another voice. Or make it sing.


That’s cool. We’re doing generative voice AI. If you’d like to add a narration, reach out to me.


Can tell it's AI generated because the "plot" makes absolutely zero sense, lol.


A Young Lady's Illustrated Primer will be here sooner than I had thought!


How many of these have there been in, like, the past month? A dozen? Two dozen?


I couldn't read after "we build the first ever butter theme park".


Change the font and make the pictures look hand-drawn; then I think it works.


This piqued my interest. I might give it a shot, either with this or something new.


This scared me a lot.


The illustrations are beautiful! Well done on this whole thing!


What prompts did you use for the children's book?


This portends nothing encouraging, imo.


How long did this take to put together?


~15 minutes with ChatGPT to turn the initial plot into a story with the tone and length I was looking for, an hour or two with Midjourney for the illustrations, and a few more hours to take the site from zero to up and running.


Evidently so did everyone on TikTok.


Baking bread? More like Breaking Bad


All plagiarized and non-copyrightable.

What a shame.


Which lines are plagiarized?


Every damn one of them. ChatGPT plagiarizes.

https://justoutsourcing.blogspot.com/2022/03/gpts-plagiarism...


Every line in this story is plagiarized?


I had ChatGPT write a similar kids' story. There are quite a few surprises in it too! (:

Once upon a time in a faraway land, there was a king named Maximus. He was a kind and fair ruler, but he was also very demanding. He wanted to make sure that all of the subjects in his kingdom were happy and well taken care of, but he didn't always know how to do that.

One day, King Maximus decided to hold a contest to see who could come up with the best idea for improving the lives of the people in his kingdom. He announced the contest to all of the people in the land, and many of them started thinking of ideas right away.

One young girl named Emma was struggling to come up with an idea. She didn't know what to do, and she was starting to feel discouraged.

Then, a girl dragon named Supreet came to her and said, "Emma, I have an idea for how we can make sure that everyone in the kingdom has enough food to eat. We can divide the food into smaller pieces and give some to each person. This way, everyone will have enough to eat, and there will be no waste."

Emma was very impressed with Supreet's idea. She thanked her and said that she would present it to the king. However, she decided to keep Supreet's idea to herself and pretend that it was her own. She didn't want to share the credit or the reward with Supreet.

When she went to the castle to present her idea to the king, King Maximus was very impressed. He thanked her and said that he would consider it carefully. He also told her that she had done a very good job of using factors to solve the problem.

"Factors are the parts that make up a whole," the king explained. "In this case, the food is the whole, and the smaller pieces are the factors. By dividing the food into smaller pieces, you are using factors to make sure that everyone has enough to eat."

Emma was very happy to have won the contest, and she was proud of herself for using factors to come up with a solution to a problem. She didn't tell anyone that the idea was actually Supreet's, and she accepted the reward for herself.

However, the king soon found out that Emma had not been truthful about the origin of the idea. He was very disappointed in her and decided to punish her for her dishonesty.

He ordered Emma to roam all over the kingdom and teach the concept of factors to all of the children, dogs, and cats. She was to spend the rest of her life traveling from place to place, sharing her knowledge and helping others to understand this important concept.

Emma was very sorry for what she had done, and she accepted her punishment. She set out on her journey, traveling from village to village and teaching the concept of factors to anyone who would listen.

She met many new people and animals along the way, and she made many new friends. She also learned a lot from the people and animals she met, and she became a much wiser and better person as a result.

Despite the challenges and difficulties she faced, Emma remained determined and committed to her task. She continued to teach the concept of factors to all who would listen, and she helped many people and animals to understand and apply this important concept in their own lives.

In the end, Emma's hard work and dedication paid off. She became known throughout the kingdom as a great teacher and a wise and kind person. And she lived the rest of her life in peace and happiness, knowing that she had made a positive difference in the lives of others.


How did you do it? What was the prompt sequence, fine-tuning, etc.?


I have some rough details on how I operated the AI tools here: https://adventure-of-penelope.vercel.app/credits. Unfortunately, I've lost the original log of the conversation.


What is the benefit of milk?


I hate to be the contrarian voice, but I really don't like where things are headed with these types of assistive tools.

I like the idea of using it partially to help complement some effort that you've done yourself, but the idea that you're using both an LLM to produce the story and a diffusion model to produce the pictures makes the entire piece of work almost devoid of individual effort.

Using them to produce a private story for your own child is a noble enough effort, but I'm seeing lots of comments about wanting to get these types of things published. With the explosion of these vanity-press-level, dime-novel, tabloid literary works, it will be that much more difficult to sift the wheat from the chaff.


Completely agree. People really don't understand where this is going. It's not like society lacks mediocre, mid-brow content as it is. LLMs and diffusion models reduce the cost of production to basically zero, and we now have an endless supply of mediocre content. Extend it out a few 10x improvements, and we get an endless stream of so-so Netflix shows produced with zero effort. Basically WALL-E made real.


It doesn't look to me like the cost of production was zero, and the content really wasn't bad. For anyone who didn't read to the end, the author (operator?) explains his process:

> I typed a series of prompts to ChatGPT to get it to write me a story with the following plot:

> Once upon a time there was porcupine. Every day, she baked a loaf of bread. One day the loaf disappeared. Because of that, she had too much butter. Because of that, things got slippery. Until finally she opened the first ever butter theme park.

He mentions elsewhere that generating the images was a lot of work.

So basically he and his friends had an oddball premise for a story, and with some non-trivial amount of labor, he produced a story which was far from the worst thing I've read on HN.

It was actually pretty clever for a kids' story and the illustrations were good. The finished product was better than the prompt. Somehow the process produced a fairy, a plot twist and slabs of butter riding a roller coaster.

Where's the crisis here? A person came up with an idea and used a tool to generate a work product from it. Without that tool he would have needed to hire a professional illustrator and maybe a writer. I bet this took a fair bit of work to do right. But the technology enabled a creative work that wouldn't have occurred otherwise.

I have no doubt there will be tons of crappy spam enabled by ChatGPT too, but frankly this little story wasn't it; it was humorous, and I doubt it was a trivial endeavor.


> I doubt it was a trivial endeavor.

Per the author's comments above, it was indeed a trivial endeavor: a few hours to tweak the prompts and a few hours to make the site.

Compare that to the most basic children's book, which would take a professional at least 10 hours to illustrate (10 pictures times one hour per picture).


So then found objects and ready-mades are also a trivial endeavour? These authors published one of the first stories like that and that alone makes it not trivial.


Thanks for the kind words


People are ultimately going to do that. And if we really can't tell a book written by an AI apart from one written by a human, then what's the point of all of it? Eventually, good authors will still succeed, as their writing style will not be squeezed out of the grand summation of all the writings in the world.


There are a lot of negative consequences:

1. Entire classes of skilled jobs are eliminated, only the most elite writers and artists can make money

2. Value for story creation gets concentrated in the few companies that own and operate the best LLMs and the best diffusion models

3. Society becomes more fragmented because we are all watching different content

These are just a few I'm able to think of in 2 minutes.

It's a common mistake to assume all technological progress is an unadulterated good. Just look at all the unanticipated negatives that came out of the computer revolution, like every technological revolution before it.


Agree that the assumption is that progress, no matter what, is good and healthy.

If one considers how inefficient humans are (daily consumption, heating, transport, etc.) and how, in comparison, computers are extremely efficient (once they have been built), you realise where "pure" progress will take us.

Ephemeralization is the term Buckminster Fuller used.


I'm not at all assuming all technological progress is an unadulterated good - in fact, nothing is, everything would have a downside if it has an upside. All I'm saying is it's still bound to happen and we can do nothing about it.

It's not the first time that 'entire classes of skilled jobs' have been eliminated. It's the natural side effect of automation and has happened with every generation of new products. Industrial automation might have taken millions of jobs away so far to give us better products faster. Tomorrow my job as a regular developer might be gone to these AI tools (though it's a far-fetched thought), and I have to be prepared for it, and re-skill/up-skill myself if I ever sense that day is about to arrive.


You don't have to watch those shows then. Most movies and other content is already garbage. It's up to you how you spend your time.


Another strong agree. These models don't understand anything; they just produce something that is statistically (at many levels) close enough that it will fool most of the people, most of the time.

For those that are increasingly happy to live in, and raise their children in, that fake and simplistic environment, I suppose it's their call. But reading/watching/living in that simpler, dumber environment can't be good for our intellect and higher abilities.


And what the brain produces is not something that is statistically (at many levels) close enough that it will fool most of the people, most of the time?

Have you looked into even basic neuroscience research or perhaps at least personally known someone with, for example, dementia? It would perhaps teach you something about what the mind is and isn't and what we know and don't know and how perhaps AIs might fit in this story.

Just handwaving because it's "just statistics" is not useful.

As for the environment - well, I have my opinion on social media and online content and even most print content and most books. Will AI make that worse? Yes. But it's already bad enough that this is not a meaningful change. The good stuff has been accessible for a long time and still is.


In Roald Dahl's short story "The Great Automatic Grammatizator", it becomes impossible to separate the generated from the human. The machine's output is initially published under fake names, but what most don't realize is that even for established authors the profession no longer exists, and they are left with the choice of destitution or licensing their names, under which the machine's books can continue to be published.

"And worse is yet to come. Today, as the secret spreads, many more are hurrying to tie up with Mr Knipe. And all the time the screw turns tighter for those who hesitate to sign their names.

This very moment, as I sit here listening to the howling of my nine starving children in the other room, I can feel my own hand creeping closer and closer to that golden contract that lies over on the other side of the desk.

Give us strength, Oh Lord, to let our children starve"


Thank you for saying that. The tech is really cool but it gives utterly untalented storytellers the tools to inflict really bad content on the world. The insipid and pointless plot of this one is a case in point. (I do admire the OP’s get-something-done adventurous spirit of trying out the tech and making something, kudos, but not their artistic sensibilities on this one unfortunately.) At the same time it will allow others to create fantastic stuff. We are going to need some better filters.


Seems like in another era you might get upset at the invention of the record player, because now poor people would have the ability to listen to music without having to pay to see it live.

Yes, better tools for making art mean more art will be generated, and lowering the bar means less-talented people might start making up the bulk of the new art that is generated, but new mediums inevitably spark creativity in a handful of emerging artists, and that enables them to create masterpieces that could not have been created before.


Is it not the same for all new technologies?

Books were expensive and locked away in monasteries for quite a while, with the Church scared of "reading" spreading as a technology.

Steam-powered trains were feared because of their speed. Cars... Planes... Etc

Television, the drug of a nation... Now it's the Internet. I'm sure radio was just as addictive and hated by parents as television.

Agree also that we'll have a lot more chaff, but the chaff began the day our ancestors chiseled the first symbols into rock and gave us writing.


Counterpoint: separating the wheat from the chaff is already terribly difficult, and perhaps this will provide the necessary motivation to produce an effective method of doing so.

Google, once upon a time, solved a similar problem. With all the information on the internet, how do you find the good stuff? The relevant stuff? I don't think it does this job at the moment - and perhaps flooding the world with content will create the conditions for the next major disruptor to create something that actually works rather than this ad-bloated nonsense we have now.

It already takes a ton of my time seeking out books and other media that align with what I want. Humans have done a spectacular job of creating stuff that's minimally acceptable to make a sale. AI isn't creating the problem you're talking about, even if it is making it worse.

Perhaps it will become bad enough that it becomes worthwhile to fix.


The fix for the flood of crappy Chinese products so far has been recommendation engines. And those are gamed (Amazon reviews, Goodreads comments, etc.)… so no fix yet.


Yep. Part of the problem is the set of incentives surrounding these huge centralized databases of consumer preference - the folks holding the reins have no reason to fix the situation because they make money off the circumstances as well.

This was effectively the situation with the internet when the Goog came in and wrecked everyone's cash cow - because it was valuable enough to take the hit and prospect for future value to undermine the status quo.

I see two ways to break the current stalemate:

- Some 'new google' comes by and does to google what google did to Yahoo et al.

- Somebody creates a system that does the same job, but distributed. You control your recommendation algorithm, so you can prune it as you want.

I really really want the second one, but I haven't yet figured out how one would get the necessary data. There's a bootstrapping problem here - in order for recommendation algorithms to work, you need a ton of folks creating data for them. In order to get those folks, you would need to have a good enough recommendation algorithm to attract them.

I'm wondering now, though, if data on user preferences is for sale somewhere. Like, if I wanted to bootstrap such a system, is there some adtech business somewhere that has information on what books / music / etc people like, and how much would it cost to get that in there?

But now, of course, because you're looking at spending a bunch of money to bootstrap - you're hobbled by the need to be profitable to make that back. Stuck in Trapitalism.


> throwaway675309

Are you afraid of Roko's Basilisk?


No, just Jenny.


It can easily be polished and that _does_ take work. So it's not zero effort. Yes, there will be a lot of generated content and that will probably lead to a recalibration of what's important and what not (in life, work, art, etc).


I think it’s great. We should then be able to develop systems which separate the wheat from the chaff. And that is progress.


You mean advertising?


I meant systems that can differentiate between AI generated content and organic content.


I have been trying to use GPT-J or GPT-NeoX to create stories, but they all come out very strange. Here's one I generated using https://textsynth.com/playground.html and a prompt "Once upon a time there was a mouse called Miisa. One day she":

"Once upon a time there was a mouse called Miisa. One day she was driving through the woods and she saw a beautiful castle in the middle of nowhere. She didn’t go in but wanted to. Then she saw a door which said “The Secret Passage.” She went inside and met the owner who was the witch of the woods. “I’ve been lonely for a while and would like to live with you” said Miisa. The witch agreed. “But there’s one condition,” she said. “No matter how many times I ask you to do something, you must do it.” Then the witch asked Miisa to put a hat on her, which would make her invisible. Miisa was terrified. But she did it any way. Then the witch asked Miisa to kill a mouse which lives in her home. The witch wanted to have the mouse’s blood. “I can’t do that,” Miisa said. “I hate mice.” “Well,” said the witch. “You can’t kill a living thing. So then you must torture a living thing.” Miisa was getting scared but then a thought came into her head. She asked the witch, “Does this mean that I can torture a mouse?” The witch didn’t say anything, so Miisa said that she didn’t want to do it and started to leave the castle. The witch was so sad that she threw a fireball to stop Miisa. And that was that."


That got weird… I feel as though stuff like this is going to make Elsagate look like it was nothing.


And people say ChatGPT is limited and can't generate original content. There are a lot of claims going around that ChatGPT just generates generic content.

Well, as weird as this is, it's totally original and creative.


Grammatically correct utter garbage.


Sorry, but the story is missing a very important ingredient for a kids' book: I cannot identify any deeper moral, social, or other idea.

While stories "just for fun" of course have a right to exist, it is very questionable to fill your kid's brain solely with random, empty, meaningless fun stories.

Stories for kids are an important part of helping to build a moral compass for the world. Besides "getting rich quick with fun products" I see no value conveyed in that story.

Empty stories will produce empty humans.

There is more to a good story than "something happens" and "nice pictures".


Children learn much more than morals when they are small. Even "nothing happens" stories have some value for them; they teach them other everyday things (bread-making?) or trigger some thoughts (you need to investigate sometimes). The problem with this story is that it's stupid: there's no explanation for the fairy's behaviour, and the butter park doesn't make sense.


It's a lesson that stuff happens, then other stuff follows; wait for it. Turns out it's just more stuff. And it's OK to be this way. I'd say it's a lesson in Zen.


Kids absorb all sorts of lessons parallel/orthogonal to and sometimes completely opposite to the ones intended by the committee of creators, like "I'm being palmed off with garbage", "I'm being subtly coerced by forces I don't quite understand."


> I cannot identify any deeper moral, social, or other idea.

The fairy destroys the bread that Penelope has created by turning it into butter. The fairy then expresses remorse, apologises, and offers to undo this ("Sparkles was sorry for her mischief, and offered to help Penelope turn the butter back into bread"). Penelope accepts the apology, and they set aside their differences to engage in a joint venture together.

Arguably this isn't a good expression of a moral lesson, but there are moral themes here (destroying what other people have worked hard to create is bad; apologising and offering to fix your mistakes is good; forgiveness is better than holding a grudge).


Sometimes a story is just "something happens" and nice pictures. Not every story needs to indoctrinate specific, arbitrary morals that humans like to impose on others. Let a story simply be a story of something in which no big deal happens. Most of life is like that, and should be enjoyed that way. There's the moral story you asked for.


Of course it is empty and has no moral.

It was made with AI art and AI text. To do so fundamentally treats creativity as a cost center, and this is the result.


Eh, it isn't the AI's fault...

This is the plot he used to feed the AI: "Once upon a time there was porcupine. Every day, she baked a loaf of bread. One day the loaf disappeared. Because of that, she had too much butter. Because of that, things got slippery. Until finally she opened the first ever butter theme park."

I can bet you that if you specifically asked the AI to include some morals, it would do so.

In fact, I'll show you.

Input: what are some values that could be included in a children's book with the following plot: Once upon a time there was porcupine. Every day, she baked a loaf of bread. One day the loaf disappeared. Because of that, she had too much butter. Because of that, things got slippery. Until finally she opened the first ever butter theme park.

Output:

Some values that could be included in a children's book with this plot could include:

Perseverance: The porcupine continues to bake bread even after it goes missing, and she uses her excess butter to create something new and exciting.

Creativity: The porcupine comes up with the idea for a butter theme park, showing that it's possible to turn a problem into an opportunity.

Resourcefulness: The porcupine uses her excess butter to create something new and exciting, rather than letting it go to waste.

Entrepreneurship: The porcupine opens the first ever butter theme park, demonstrating the importance of taking initiative and starting one's own business.
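(And if you wanted to script this kind of prompt instead of typing it into the chat UI, a rough sketch with the OpenAI Python client of that era might look like the following; the model name and parameters are illustrative assumptions, not a record of what produced the output above.)

    # pip install openai  (pre-1.0 client)
    import openai

    openai.api_key = "sk-..."  # placeholder; use your own key

    prompt = (
        "What are some values that could be included in a children's book "
        "with the following plot: Once upon a time there was a porcupine. "
        "Every day, she baked a loaf of bread. One day the loaf disappeared. "
        "Because of that, she had too much butter. Because of that, things got "
        "slippery. Until finally she opened the first ever butter theme park."
    )

    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model choice
        prompt=prompt,
        max_tokens=300,
        temperature=0.7,
    )
    print(response.choices[0].text)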


GPT-3 is great at giving generic advice, true. But that aside, it surprises me how uncritical HN seems to be when it comes to children's books. Sure, they may look simple at first glance but a lot of thought and work goes into good children's books and likewise children can get a lot out of them if they're done right.

It's not that the story necessarily has to have a moral, though many do. It's that they're funny and artful and witty. They can make kids think and dream and inspire them. Also a real quality kid's book doesn't just speak to children but humans in general. If you read it as an adult and think it's a boring story, it's not a good book for your child imo.

The linked butter story doesn't have any of this. It's grammatically correct and each sentence is somewhat related to the previous one but that's about all the positive I can find in it. It's completely meaningless.


AI art was born from VC technologists' disrespect for art. It's not surprising they don't respect children's book writing as an actual skill. Just a cost center to be automated.

This coming from a group who by and large probably hasn't written creatively in their entire lives.


The thing is, the majority of users here aren't even VC technologists but simple coders and tech enthusiasts. Giving this sort of reading to your child seems like the nerd equivalent of parents that let their kids watch the Disney channel all day.

It's bad enough what media some adults consume, but when it comes to children it's twice as concerning, because kids have a special ability to learn that is largely lost later in life. We're only young once; it's incomprehensible to me that someone would knowingly fill their child's brain with junk information.


The slush piles will turn into slush planets once this kind of farce becomes a full-fledged fad.


Just like I could read a bunch of kids' books and learn to write new ones that are "funny and artful and witty", I am quite confident AI could as well.


I don’t like it. Synthetic looking and doesn’t hit hard.


Every time copyright is challenged by a newcomer, we have a revolution in communication. When Google and YouTube came along and scraped everyone's content with little regard for copyrights, a new world was created. I hope all the lawsuits fall through, so that AI-generated content can do the same thing.


[flagged]


Really interesting (assuming I'm correct about how this comment was created) how distinct ChatGPT's writing style already is to me



