Sunspring, a short science fiction film written by algorithm (arstechnica.com)
135 points by brokenbeatnik on June 10, 2016 | 49 comments



My prediction: At this level of AI-generated writing, keeping the works short will attract (allow?) small audiences ... but only to watch the human talent in the production struggle and improvise over truly awful writing.

If a human were responsible for that writing, they wouldn't have much of a career.

Actors: 1. AI: 0.


Agreed. Joeboy's description of this screenplay as "word salad" was perfect: https://news.ycombinator.com/item?id=11876333

I like your scoreboard too -- that pretty much sums it up.

Although I do wonder if actors/directors could use this as a practice tool. Challenge: turn word salad into a meaningful scene. It almost seems like it could be an exercise in a theatre class.

You wouldn't want to do this for the full 10-minute screenplay (it's a little painful even with these talented actors). Maybe generate a 2-3 minute scene, or generate the whole screenplay and you get to pick a scene. An optional crutch -- the actors get to do their own 2-3 minute scene before and/or after to give it real context and meaning. That could be interesting.

I have a feeling that something like this exercise is probably already done in training (any actors on HN?), although I bet this algorithm is better than humans at coming up with challenging, incoherent word-salad gibberish.


Doing scenes with "word salad" is something improv actors do as a practice tool, as well as scenes where each actor only has a single word or simple phrase they can use as dialog. The latter is often part of a performance for an improv troupe.


Good point. I have seen the single word/phrase prompt for improv. It would be a truly impressive improv group that could make this coherent with an on-the-spot performance (but that wouldn't really be improv). The screenplay seems more difficult than a single-word prompt because with the single word there is so much that you get to make up on your own. I feel like the ability to take the word salad and convey emotion and meaning through body language and the delivery/emphasis of each word is a whole different skill set for an actor. Using words from the generated scene only, without any additional improv/screenwriting, is definitely the most challenging (and that's what they did here, for a full 10-minute screenplay!)


I don't mean a single-word prompt; I mean you can only use the single word or phrase as dialog and have to perform a scene (also randomly provided) with other actors. Bob gets "roses," Jill gets "fire," and a third person has unrestricted dialog... now do a scene where Jill is a salesperson getting ready for a pitch to a big client.

The word salad you're describing would be no more difficult than a scene where actors have to speak gibberish (faux Klingon or something)... which coincidentally isn't unlike another improv exercise, where one person has to convey a message (given to them by the director) to the other actors but must speak gibberish or use only a single vowel sound.


In defence of word-salad-generating screenwriting neural networks: arguably the finest spoken lines in any sci-fi movie - the "tears in the rain" speech in Blade Runner - also owed more to the actor's ad-libbing than to the original screenplay.


> If a human was responsible for that writing, they wouldn't have much of a career.

I don't think that is fair to the computer. A computer doesn't need a career, and a human has a lot more to draw from than the limited number of screenplays used as a corpus. People have purchased paintings knowingly produced by a chimpanzee [1] over works by Warhol and Renoir; I don't think an AI should be written off so quickly.

[1]: https://en.wikipedia.org/wiki/Congo_%28chimpanzee%29


Apparently the "essence of sci-fi" is people saying "I don't know what you're talking about" to each other.


I notice one of the inputs was The Phantom Menace. This film's dialogue was better.


Me se don't know what you are talking about


At least film/tv sci-fi, which has to constantly find excuses for the characters to explain the differences between the actual present and its future. Print science fiction doesn't have to put that kind of exposition between quotation marks.

Also, as opposed to film/tv, print doesn't have to assume that you forgot it all every 22-120 pages.


Also, maybe one could get a square at some point.


This is a weird thing about LSTM-generated sequences. Any random 5 seconds of this sounds reasonable, like it could come from an actual movie. But there is no coherence between sections; it ebbs and flows randomly around the state space like a Markov chain, with no direction.

I think this is because LSTMs have very little "memory". They have a learned procedural memory, but no episodic memory. So they have a very difficult time keeping track of information. E.g. if I say "the cat was in the box", a few sentences later I might say "the cat is in the __" and the LSTM has a hard time guessing "box".

Second, it works by predicting the next character in a sequence. This is not how humans write, at all. If you asked a human to predict the next word in a sequence, and then the word after that, and then the word after that, etc., you would also get something like this.
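For the curious, the core of that next-character loop is tiny. A minimal sketch in PyTorch -- the model shape, hidden size, and sampling temperature here are all illustrative, not the actual Sunspring setup:

    import torch
    import torch.nn as nn

    class CharLSTM(nn.Module):
        def __init__(self, vocab_size, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, x, state=None):
            h, state = self.lstm(self.embed(x), state)
            return self.out(h), state

    def sample(model, start_idx, length, temperature=1.0):
        # All "memory" of the text so far lives in the fixed-size LSTM
        # state, which is why long-range facts ("the cat was in the box")
        # tend to get lost after a few sentences.
        idx = torch.tensor([[start_idx]])
        state, out = None, []
        for _ in range(length):
            logits, state = model(idx, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            idx = torch.multinomial(probs, 1).unsqueeze(0)
            out.append(idx.item())
        return out

Note that nothing in the loop looks ahead: each character is sampled from a distribution conditioned only on what came before, which is exactly the "predict the next word, and the word after that" process described above.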


The output is not far off what a Markov chain or "Dissociated Press" [1] technique would make. I did one of those 20 years ago for fun in a few hours; it wasn't AI then.

[1]:

https://en.wikipedia.org/wiki/Dissociated_press

http://www.catb.org/jargon/html/D/Dissociated-Press.html
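For anyone who hasn't built one: the whole technique really does fit in a dozen lines. A toy sketch (the corpus string and chain order are just placeholders):

    import random
    from collections import defaultdict

    def build_chain(text, order=1):
        # Map each word tuple to the words observed to follow it.
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, length=30):
        # Start anywhere, then repeatedly jump to a random continuation.
        key = random.choice(list(chain))
        out = list(key)
        for _ in range(length):
            followers = chain.get(tuple(out[-len(key):]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the cat was in the box and the cat saw the box open"
    print(generate(build_chain(corpus)))

Locally plausible, globally directionless -- which is much the same failure mode the LSTM shows, just with a shorter context window.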


The composition method is more akin to how humans do Mad Libs than how we write stories, which explains the result. If anything, this video illustrates how fantastic the human mind is at finding patterns and meaning where none actually exist.


When I was a kid, my friends and I would play a game where we went around in a circle, each adding one word to a running sentence. Most of the sentences sounded exactly like this script.


AI is not here, as far as we know, so let's stop throwing the term around with every other topic.


> as far as we know

Or maybe AI is here, and it's throwing out all these articles to hide its emergence. :)


Ender's Game suggests such a genesis story for AI: born as a "spark" of self-awareness in the worldwide computer network, where it developed silently for a while before making itself known to humans.


Oh wow, is that where that meme started? (All I know about Ender's Game is from https://web.archive.org/web/20110319084212/http://plover.net...)


I don't know anything about the site you linked, but at face value, that article is absolutely awful. (Is it satire?)

For example, the point at the end saying the book celebrates a guilt-free genocide. Not the point at all! OSC wrote the entire rest of the series about his main character grappling with the guilt of what he did.


That's...an interesting take. The character they're talking about didn't appear in the first book. She was introduced later in the series: https://en.wikipedia.org/wiki/Jane_%28Ender%27s_Game%29


I fully agree. There are even people who think the singularity will come within 10-15 years. The state of AI, or machine learning if you prefer, is far from what we see in the news.


First we called them programs, then we called them apps, now we call them AI.

People (paradoxically) really want to live in the future right now, but I think all of this is very premature.


A sort of proxy Turing test would be whether a computer could write a character with a convincing inner dialogue.

If a machine author can produce a simulacrum of consciousness through good characterisation, then that seems like a partial theory of mind.


Ignoring for a moment that the script makes zero sense... I expected the writing to feel more sci-fi-esque: space, aliens, computers, physics, etc.

Maybe the conclusion to draw is that sci-fi writing is 99% like any other storytelling in terms of how characters think, behave, and talk.


Scripts of popular sci-fi movies, anyhow. Most films aren't going to be A Clockwork Orange. Instead, they'll just have the same sort of dialogue and tropes as other popular movies, with the science fiction aspect relegated to the sets, props and costumes.


To be fair, I felt pretty much the same as I did at the end of Primer.


A Chinese room making party conversation and keeping to the filler lines that let it connect to the most conversations?


Can anyone explain the appeal of getting an AI to generate a movie (or chat with you)? I find the experience "plastic" -- not exactly the right word, but I hope it conveys the feeling I get when something programmed pretends to be intelligent.


I'm very interested in this area, and for me it's a couple of different factors.

From the engineering side:

1. There is a model of what constitutes a valid screenplay, and it can grow and be enhanced over time. What gets generated is an 'acceptable' output.

2. Lots of extraction of storytelling elements, adding all those features and constructs to the model (algorithmically or manually).

3. Representing that knowledge in a form that makes sense, in English.

4. You really get to play with the pieces of what makes language and the composition of language work, rather than just consuming it, which is sort of the same as authoring under your own power.

5. The act of carrying a tune. Lots of AIs right now build a model with a look at the next step, which is great, but combining that with building a structure with a beginning, middle, and end is much harder.

From the output/end result side:

1. Lack of cultural preconceptions: an AI doesn't know what the last Marvel movie was, an AI never saw Back to the Future, an AI can't quote Star Trek or Gilmore Girls references off the top of its head (unless it was informed), an AI doesn't know about WWII, the Crusades, or other historical events -- lots of things like that.

2. Lack of social norms: developing a morality system for the end output is very difficult, so the AI author doesn't know what is or isn't appropriate.

3. The act of serendipity. Just like doing a materials science or engineering optimization through computers, you can have a sequence of events that come together in an unexpected way. Instead of getting an interesting new material or alloy, you end up getting something that is a valid output of the model, with all of its warts for and against.

4. It fits the form of a 'single room/closed room' movie such as 12 Angry Men. The entire universe as it is known by an AI is considered when it constructs a script.

This ends up holding, for me at least, the same kind of intrigue as watching sports or a well-written mystery. It is a story told within a certain framework, and there is always a chance for something truly special to come from it.


Right now, the appeal is modern art. It's not generating a piece of narrative entertainment.


Depending on how open you are to possibly over-analyzing things, it is part of a grander discussion of the last century that states, very simply, "robots are people too".

Many people believe that creativity and emotion are the big things that are inherently "human" and cannot be replicated by a computer. This may or may not be true; no one has strong evidence for either side. What is gaining more traction, though, is the idea that computers are not human but still have "experiences" and "culture" -- that while the things produced by an AI may be unintelligible to us, they could make perfect sense to a computer. (A fair assumption, given that a computer made the thing in question.)

In the same way that astronomers who sell books will pose scenarios claiming that intelligent life may exist outside our neighborhood but we would never know because it's so (ahem) "alien" to human intelligence, computers may have an intelligence that is so unlike ours that we may fail to perceive it.

This opens the door however to deeper existential, spiritual, and philosophical debates regarding our roles as creators, what responsibilities that may entail, and what rights a computer should have (if any), as well as what constitutes "life". Those topics, however, are for philosophers and science fiction writers, less so this internet commenter.

The recent post [1] about AI-encoded Philip K. Dick film adaptations, as well as this Radiolab episode [2] and this Idea Channel episode [3], should offer more information on the stance I have laid out. You could also synthesize much of the AI-centered sci-fi of the last 50 years, which between the love stories and laser battles offers a great deal of insight.

[1]: https://news.ycombinator.com/item?id=11766063

[2]: http://www.radiolab.org/story/137407-talking-to-machines/

[3]: https://youtu.be/S5AeqYfcb7w


I can understand the appeal of getting an AI to generate movies or TV shows - if it were actually good, you'd be able to get a lot more of that stuff at relatively low cost.

I have zero interest in having a conversation with an AI, nor do I want my computers to have a personality in any way, but I think chatbots serve a similar purpose to this sort of thing - as a demonstration of the progress being made towards something with actual utility. I don't want to make smalltalk with a computer, but I'd love to be able to say, "How many books about dragons, written in Estonian, feature children under the age of 8?" and have it understand my question and understand the books well enough to give me an accurate answer.

Stuff like this is a good demonstration that we're nowhere near the level of sophistication for either of these things yet.


Interesting that the AI is able to produce a meaningful storyline without fully understanding characters; makes me wonder how it would do at producing stories without any characters. Such stories exist, and they have a bit of a sci-fi feeling too.


> Interesting that the AI is able to produce a meaningful storyline without fully understanding characters

Does it? I think the actors and filmmakers did a pretty good job of creating a film with meaning and characters despite not having much to work with. In fact, a lot of the fun of this is watching the actors try to make something out of the word salad they've been given.

Edit: If you want to see the script unembellished by the cast and crew, it's here: https://www.docdroid.net/lCZ2fPA/sunspring-final.pdf.html

> It's a damn thing scared to say. Nothing is going to be a thing.


Also we fill in tons of subtext I'm quite sure the AI has no idea about.


Also, this script appears to have been selected by a human actively looking for apparently meaningful and original structure and sentiments in a long list of attempts by Benjamin, most of which clearly derived from the source texts.

> For a while, Sharp said, Benjamin kept "spitting out conversations between Mulder and Scully, [and you'd notice that] Scully spends more time asking what's going on and Mulder spends more time explaining."

Can't help thinking that this reported response to questions about the film appears more meaningful -- poignant, almost -- than anything in the actual script:

> The world is still embarrassed.
> The party is with your staff.
> My name is Benjamin.


It reads a lot like Racter (I think I read that The Policeman's Beard Is Half-Constructed actually involved a lot of human selection of amusing and interesting utterances).


Meaningful? It sounded like someone tried to physically act out a Cleverbot self-conversation. ( http://m.youtube.com/watch?v=vphmJEpLXU0 )

This is a great example of how far we are from artificial intelligence. The actors/directors/etc did a great job of trying to make it work.

I like what they tried to do to make it coherent, but it just goes to show that there is no way to save truly bad, incoherent writing.


I wonder where research on generating intelligible stories by computer stands.

I think Roger Schank had some research on "scripts" in the 1980s, involving models of people's interactions and motivations. That might not have been very useful for AI in general, but maybe it's useful if you wanted to literally generate scripts for people to act out.

I'm sure you could make computer models of stories by combining models of scripts, tropes, and agents. I mean, a simple case would be something like The Sims, where the simulated agents act out simple stories autonomously and in response to prompts. You could probably make movies based on Sims stories already, although it's better at generating situations than dialogue!
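A toy version of that agent idea -- states, scripted actions, and the "story" as the event log. Everything here (the action table, the agent names) is made up for illustration:

    import random

    # Each agent is just a state; each state has one scripted action
    # that moves the agent to its next state. The "story" is the log
    # of what happened.
    ACTIONS = {
        "hungry": ("cooks dinner", "fed"),
        "fed": ("reads a book", "bored"),
        "bored": ("visits a friend", "hungry"),
    }

    def act_out(agents, steps=6):
        story = []
        for _ in range(steps):
            name = random.choice(list(agents))
            action, next_state = ACTIONS[agents[name]]
            story.append(f"{name} {action}.")
            agents[name] = next_state
        return " ".join(story)

    print(act_out({"Ada": "hungry", "Bob": "bored"}))

Even something this crude generates coherent situations (agents doing plausible things in sequence), which is exactly where The Sims succeeds and where dialogue generation still falls flat.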


Reminds me more of 1984's Winston Smith rewriting propaganda novels by changing the character names and moving scenes around but otherwise keeping things the same.


AI for propaganda: sounds like a sci-fi plot in the making.


A whole line of narrative theory would say that the sequential action matters more than any character development. That goes against a lot of creative-writing-class assumptions, but those classes might be wrong.


Where can I read more about this line of narrative theory?

Edit: nm, googling.


AI?

If it were written by a man, we would call him stupid.


AI is basically divided into three "levels": ANI (Artificial Narrow Intelligence), AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence).

We already have a bunch of narrow AIs, as in, algorithms that do a specific thing in a way that no human would ever be capable of doing (think: Google searches). If such AIs (well, large collections of algorithms combined) were thrown into any scenario other than the one they were created for, they would be useless, and a human could perform better because humans adjust more easily (we don't have to change a bunch of lines in our brain to be able to drive on the left side of the road; it just takes us some time to adjust).

This is a perfect example of that. We have an ANI that was intended for one purpose, we have a scenario for which it wasn't created (imagining sci-fi scenarios), and it performs worse in that scenario than a human would.

However, we've now explored what it can do in this scenario. We laughed at how terribly it behaved, and we can either move away from it, improve the AI so it can do this one specific task better than humans, or improve the AI so it can do it kind of okay, but not brilliantly (like a random person would if you stopped them in the middle of the street and asked them to write a sci-fi scenario).

Once we have an AI that behaves kind of okay, but not brilliantly, in any situation we can possibly put it in, and at the same time it can learn from its mistakes and improve itself so as not to make them anymore, we have an AGI (Artificial General Intelligence).

An AGI behaves exactly like a human, but because it will be able to surpass the physical limits that we humans have (brain capacity, dependence on food/water/oxygen, etc.) and because it is able to improve itself by learning from its own mistakes, soon after it hits the AGI mark it will surpass that and become an ASI (Artificial Super Intelligence).

What happens then, nobody knows. It's hard to imagine how something with a higher intelligence than ours is going to behave. All we can do is try to come up with a number of plausible scenarios. If there are negative ones (and sure as hell there are), then we need to address them before we even create something close to an AGI, because by the time the AI hits the AGI mark, it's already too late for us to do anything about it.

There you go, AI philosophy 101.


> because it is able to improve itself by learning from its own mistakes, soon after it hits the AGI mark it will surpass that and become an ASI (Artificial Super Intelligence)

I was with you until this point. You have a great description of why ANI is not AGI, but this AGI => ASI step is just hand-waving.

An AGI will have some of the same issues to deal with:

1) Opportunity cost. Yes, it will have more time because it doesn't sleep, although maybe it will find that spending 1/3 of its time/resources cleaning out the cobwebs is optimal. Regardless, it will have to spend resources (including time) on some things rather than others. The leap from general adaptability to perfect selection of tasks is likely just as large as, if not larger than, the leap from ANI to AGI.

2) Some problems are just plain hard. There are algorithms for learning optimal results -- even brute force. The problem is that they are too complex for a realistically fast solution. Just because an algorithm becomes as adaptable as a human doesn't mean the computational complexity is reduced. Therefore, either the AGI will consume massive resources to get a single optimal answer, or it will be fallible just like humans.

When we get AGI, that just means we will have adaptable general algorithms; they will still have to learn, and they will still be subject to restricted resources. In other words, AGI does not imply ASI.


We might call him Ed Wood.

"See? See? Your minds, your stupid, stupid minds!"



