Hacker News
Show HN: Graph-based AI for longform writing (jotte.ai)
212 points by Broge on Feb 22, 2023 | 76 comments
Hey everyone, I wanted to share a new tool we've created called Jotte (https://jotte.ai) which we believe can be a game-changer for AI-generated longform writing like novels and research papers.

As you may know, current AI models like ChatGPT and GPT-3 have a context limit of around 4000 tokens, or roughly 3000 words, which limits their effectiveness for longer writing tasks. With Jotte, we've developed a graph-based approach that summarizes information to effectively give the AI "unlimited" memory.

Jotte remembers recent details like the meal a character ate a page ago, while avoiding getting bogged down by irrelevant details like the blue curtains mentioned 5 chapters ago. We've created a proof of concept and would love to hear your thoughts on it.
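To make the idea concrete, here's a toy sketch (not our production code; `summarize` is a stub standing in for a real LLM call) of a summary tree where recent chunks stay verbatim and older ones get collapsed:

```python
# Sketch of a summary-tree memory: recent chunks stay verbatim,
# older chunks are collapsed into summaries, recursively.
# `summarize` is a stub standing in for a real LLM call.

def summarize(texts):
    # Stand-in: a real system would ask an LLM to compress these.
    return " / ".join(t.split(".")[0] for t in texts)

class SummaryMemory:
    def __init__(self, keep_verbatim=3, group=3):
        self.keep_verbatim = keep_verbatim  # recent chunks kept word-for-word
        self.group = group                  # chunks merged per summary node
        self.recent = []                    # leaf level: raw text
        self.summaries = []                 # higher level: compressed nodes

    def add(self, chunk):
        self.recent.append(chunk)
        if len(self.recent) > self.keep_verbatim:
            # Oldest verbatim chunk graduates to the summary level.
            self.summaries.append(self.recent.pop(0))
        if len(self.summaries) >= self.group:
            # Collapse a group of old chunks/summaries into one node.
            self.summaries = [summarize(self.summaries)]

    def context(self):
        # What would be fed back into the model's prompt window.
        return self.summaries + self.recent

mem = SummaryMemory()
for i in range(8):
    mem.add(f"Chapter {i}: something happens. More detail here.")
ctx = mem.context()
```

The context stays bounded no matter how long the story gets: the last few chunks survive word-for-word, while everything older shrinks into progressively coarser summary nodes.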

Do you think this approach could lead to better longform writing by AI? Let us know in the comments!




I am glad to see more stuff with graph based AI here.

I have a running bet with a friend about whether the future is going to be OBM (One Big Model) or LoLM (Lots of Little Models). I'm strongly in the LoLM/graph camp and have been working in that direction as well: https://github.com/Miserlou/Helix


Very interesting! "The general hypothesis of the project is: Consciousness, or something resembling consciousness, emerges not from the capability of a single task model like GPT or Stable Diffusion, but from the oscillations between the inputs and outputs of different instances of different models performing different tasks."

Your metaphors of self-oscillation and multiple oscillations are very much in line with the consciousness model that is built on the top of Adaptive Resonance Theory. I believe this is the most computationally robust model for consciousness. You might want to read/skim this https://www.sciencedirect.com/science/article/pii/S089360801...

That can be a forbidding read because it packs so much (65 years of work!)

You can also read Journey of the Mind (https://www.goodreads.com/book/show/58085266-journey-of-the-... I'm the co-author) which, among other things, covers Grossberg's work and his model of consciousness built on the idea of resonance. Here resonance goes beyond the metaphorical idea and has a specific meaning.

edit: https://saigaddam.medium.com/understanding-consciousness-is-... (here's a super brief description of Adaptive Resonance Theory )


> "The general hypothesis of the project is: Consciousness, or something resembling consciousness, emerges not from the capability of a single task model like GPT or Stable Diffusion, but from the oscillations between the inputs and outputs of different instances of different models performing different tasks."

This is the underlying theory of classical liberal education, stemming back thousands of years.

We learn different ways of thinking, different lenses through which we view the world, and we can apply those lenses as needed to solve different problems.

Indeed, when conversing with someone who has over-indexed on just one type of learning, we take notice; we say that person's worldview is limited. (For example, an engineer trying to sell a new product who doesn't understand that people aren't willing to toss away all their old skills for an incremental improvement in workflow; they should take a few courses in psychology! :) )

Take any famous work of architecture. An engineer can appreciate it for the elegance of its construction; an artist can appreciate its beauty, the shapes, the shading, colors, textures. A historian can appreciate how it incorporates elements of the region's history and cultures.

Someone trained in all three (as anyone who graduated from a good university should have been, to at least some extent) will be able to switch between modalities of thought at will, and also integrate those modalities together, and thus, hopefully, derive more pleasure from their experiences of the world.

Of course AIs will need to have multiple models!


This becomes a semantic debate if we do not define the boundaries between models. If models are "integrated" to an extreme, then they are really just the same model. ...the tradeoff of having one model vs. two models is often driven by the resources used to hold and serve content from a model, but there are also mathematical constraints: for example, the cost of a model grows quadratically with the size of its input, which means that separate models which can communicate with one another are more efficient.

...but the trick is defining that inter-model communication and establishing a "controller" model with appropriate training data.
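A toy calculation makes the quadratic point concrete (assuming self-attention cost scales roughly as n² in sequence length, and a hypothetical 256-token channel for inter-model communication):

```python
# Toy illustration of the quadratic-cost argument: one model attending
# over the full input vs. two half-size models plus a small channel
# for inter-model communication.

def attention_cost(n):
    # Self-attention scales ~quadratically in sequence length.
    return n * n

n = 4096
one_big = attention_cost(n)

# Two models each see half the input, plus a (hypothetical) 256-token
# summary exchanged between them.
two_small = 2 * attention_cost(n // 2) + attention_cost(256)

print(one_big, two_small)  # the split roughly halves the cost
```

The numbers are made up, but the shape of the argument holds: splitting the input across communicating models cuts the dominant n² term roughly in half, at the price of designing that communication channel.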


Thanks for sharing these links. I actually was part of a computational neuroscience program in university, but I never liked the "wet" side of things and all of the "AI" at the time was focused on kNN and SVM, so I'm well behind on what's cutting-edge in CNS. This seems like a good starting point to catch up again.

EDIT: I'm so dumb, the people behind ART were professors in my department! I know it seemed familiar. The whole thing left me jaded.


Stephen Grossberg or Gail Carpenter, or one of their students? You weren't at BU CNS were you?


I was, doing a joint BA/MA program during undergrad. This was a decade ago though.


so we overlapped :) wrapped up my PhD there a decade and some years ago.


Small world! I wonder how departments like that have adapted to the post-"Deep" world.


Was always more of a neuro department with application work being secondary...


That last link is great! Very compelling. I’ve bought the book…


Thanks!


Agreed, from what I can see pushing the size of models higher and higher gets you better results but also scales up problems at the same rate. Smaller models are more controllable and more predictable, and just like anything else, specialization tends to produce better results than having one jack-of-all-trades tool that handles everything.

There are fundamental weaknesses with LLMs that aren't present in other approaches. There are strengths to LLMs too, but that's the whole point. I am much more optimistic about the potential to get multiple models focusing on different problems to coordinate with each other than I am about the possibility of getting a single LLM to just be good at everything.

There's a lot of really unbelievably hard problems that are showing up just with GPT-3, and as the model gets bigger, those problems are going to get worse, not better, because in some ways they are a consequence of the model being so large. But like... there are domains where you don't care about those downsides, or where those downsides only matter for one specific part of whatever application you're building. So if you can get away with just not having GPT-3 involved in that part of your process and doing something else... Don't pound in a nail with a screwdriver.


I've done some work with graph neural nets as well as text NNs.

I think we've repeatedly seen that replacing a pipeline of separate components with a single end-to-end model works amazingly well when there is sufficient data to train the whole system.

But there are often practical reasons why a non-end-to-end system is easier to build as an intermediate step.


And, in theory, there is nothing stopping you from setting up a graph-based system consisting of several small models and training it end-to-end.


Yeah I feel like for development, OBM is great and super flexible.

But when you actually want to deploy, a lot of tiny, more efficient models would probably be the best bet.

I read somewhere that a company ended up fine-tuning FLAN-T5 instead of going with GPT-3, which I can imagine saved them lots of $$.


FLAN-T5 is a very capable model for anything that is non-generative.


Seeing how langchain is gaining popularity and developing rapidly, I would agree. Chaining lots of specific models and tools seems to be the way forward.


Hadn't heard of langchain, here's a link: https://github.com/hwchase17/langchain


Woah, seeing that github handle takes me back to 2015 when I was working in python and you had a tool to quickly bootstrap aws lambda services (zappa?).


helix looks amazing! that's exactly the kind of thing i'm looking to burn through openai credits with.


Cheers! I've got loads and loads of ideas for it, but can't seem to find the time to hack on them at the moment while building a SaaS at the same time. When we get a proper ChatGPT API endpoint it'll really start to get interesting.


It looks amazing. (Choice of Elixir is inspired. Great match to problem space.)


Hard endorse


I think this would benefit from an example. When I open the page for the first time, I'd like the option not just to see instructions on how to achieve something, but also to see what I can achieve with it.


What exactly is the use case for "longform writing by AI"? Profiting from mass-produced zero-effort novels?


Just as ChatGPT seems pretty capable at summarizing text, an AI with "unlimited memory" could potentially answer analytical questions about larger datasets and access data non-linearly (whereas prose is read start-to-finish).

The OP seems most excited about using this ability to remember to create more structured longform outputs with internal consistency (e.g., answering questions about a fantasy universe in a way that respects the characters that exist elsewhere in the story or universe).


E.g. you can imagine implementing an AI D&D dungeon master this way. It could even trigger things like (AI-synthesized) music at the right time.

Or you could build an AI girlfriend/conversation partner.


I don't understand exactly how that would work. At some point, the generation would introduce new events and characters, new places or objects, and name them, but then when summarising, won't the names of some of them be lost, just because there's not enough space in the summary to name them all? The same goes for all sorts of detail, not necessarily named. At that point, what happens to the narrative about those forgotten characters, objects, etc?

The main idea, of continuously feeding the model a summary of its generation (and its dialog with the user, of course), sounds interesting, but it's still not a memory. At some point, the continuous summarisation will grow big enough that it again exceeds the system's buffer (its "short term memory"). Either that, or it will drop so much detail from the summary that it will lose the plot.

So while this may result in longer generations, it doesn't look like it will really solve the problem of "long term memory", or long-distance dependency. It's a smart trick, but that's not enough.


I think once we have an LLM that can take books 6-12 of the Wheel of Time saga and turn them into a single book we'll be close enough for the size of the memory not to matter.


Fake investigative journalism "proving" whatever you want people to believe. Alternatively, a critical mass of such "journalism" convincingly arguing for contradicting theories, making people confused and apathetic on the topic of your choice.

I'm afraid that, no matter what the engineers' original intention, if it works well enough, this is what it'll be remembered for.


I work in biology and plopping DNA into GPT-style models kind of works, but there are many long-distance interactions within genomes that are more distant than current GPTs can encode. In some plants there's linkage (~gets inherited together more often than expected) across tens of thousands of base-pairs (~letters), so having these long-distance models will be very useful!


Could be interesting to use for random world-lore generation to write a story on top of.


I see zero problem with that. If great literary works can be produced with a click of a button we should do it. And if they’re not great then they’re not great - we’re already more than capable of producing not great novels.


> And if they’re not great then they’re not great

You see no problem with flooding every market with junk products that cost nothing to produce so that non-junk products are crowded out and impossible to find? This is exactly the thing that everyone now hates Amazon for and why trying to find honest reviews of anything online is so horribly frustrating.

Some barrier to entry is always better than no barrier to entry.


On the other hand, it could give nonprofit, noncommercial hosts an unprecedented advantage since there is less incentive to spam there.


I found the Amazon Wheel of Time boring, as if it was produced by a machine or something. I assume it will be even worse once they start using AI for doing similar things.

Language-model AI can simulate texts, but it cannot simulate how reading such texts makes humans feel. Only a human writer can do that, and only that makes texts truly entertaining.


I think what we will find, contrary to what many people will tell you, is that there is actually something to be said for the character, story and purpose behind works of art, including literature. When it comes to writing, the message being conveyed by an author is also part of what makes a novel or an autobiography interesting.

A good example would be an AI-generated autobiography of a fake singer. It might actually be fun to read if it's well written (generated?), but I'd have zero interest in reading it, because one thing I like about autobiographies is that I get to know the author in more depth.


Right; similarly, there was a recent story about how Bing AI "Sydney" told the NYT reporter that it was in love with him and that the reporter should leave his wife and marry Sydney.

Perhaps fun as a novelty, but really I have no interest in pretending to "know" what a chatbot AI "claims" it "thinks" or "feels". That is of no relevance to anybody because it is far removed from reality. It is just randomly generated text. And it can't be good art, because there is no real person with a real message or real feelings behind it. The chatbots certainly have no "message" to humankind.


I don't think this is really an issue with written work in particular. Thousands of novels I won't like already exist, I already have to rely on personal recommendations and samples, a million more novels I won't like is fine if it makes 10 I will.


Millions of novels you won't like already exist. And most of them already don't earn any money. The biggest job for anyone who needs to earn money from their writing is publicising it, not writing it, unless you get accepted by a traditional publisher and they think they have a bestseller on their hands.


Then is it even an interesting thing to bother pursuing?

I've actually noticed this the more I play around with AI art: it's cool that we can do it, but I wonder if unlimited access to randomly generated art is actually useful? This becomes even more true when we're talking about novels, which are already hard to read (time constraints).

I was playing with DALL-E 2 today when I was bored, and then it kind of hit me that there is almost no actual point to it all. Even if every time I clicked the button it painted a Van Gogh, who cares?

I'm almost certain that there is more to like behind a painting than the painting itself: there is the story. For example, my Dad is a painter, so I like the painting because he painted it. Yes, it's a pleasant painting, but that's not entirely the point. It's also that he painted it specifically for me, with scenes that I actually know from my home town.

My cousin is an art collector; when he shows me something new, we're 99% interested in the story behind the art. The tribe who carved a sculpture, its age, previous owners, etc.

I also own a painting which someone gave me because they ran out of cash when trying to start a company, so I accepted it as payment for the work. It's valuable to me for what it represents. I think of the guy's dreams and that I at least did my best to help him on his path, and even though that particular venture failed, it reminded me it pays to be kind. In the end he actually became quite successful doing something else, so it represents a never-give-up attitude.

Maybe to say it another way, there was already pretty much unlimited access to good art, good photographs, hell probably even good code (through open source libraries). I guess the next step is actually figuring out what the point of having unlimited access to this stuff actually is?

I hate to say it but I'm actually starting to have similar concerns to others when we talk about "generated junk" polluting the information space. I actually think this is what will happen.


> I guess the next step is actually figuring out what the point of having unlimited access to this stuff actually is?

> I hate to say it but I'm actually starting to have similar concerns to others when we talk about "generated junk" polluting the information space. I actually think this is what will happen.

Prior to AI, I was only familiar with procedural generation-- first Minecraft, then No Man's Sky.

While Minecraft was addicting in its grindiness for raw materials, I never felt attached to any of the worlds I was building. When survival became inconvenient, I spun up a new one.

No Man's Sky scaled this out to generate an infinite number of planets. So many planets to choose from, once again I found myself never becoming attached to any single one, no matter how much infrastructure I built. Once I got bored/irritated, I bailed and moved on.

I see similar behavior in people when it comes to relationships-- so many options to choose from, any single one is disposable. Your values are either going to align with mine 100%/you're going to do exactly what I want or I'm going to block you; reconciliation and negotiation is inconvenient. It's easier to just ditch old and make new.

So I can see the same happening with art. There's no toil, no Labor of Love. No connection to it from the artist (who invests nothing), and no connection by the consumer (who can get something equally impressive with no discovery effort). It's all technically impressive...but ultimately worthless.


Most people pursue it because they enjoy writing and/or because they have a dream of writing a great novel and being applauded for it. Of course some hope to win the jackpot of having a massive bestseller on their hands.

But there's also an element to it of writing a story we'd like to read that doesn't exist, and there AI tools might well end up replacing actually writing. Especially if it can riff off your feedback in more of an interactive fiction way.

As well as being able to get plausible expansions of the work of an author you like who is no longer writing (and that is where the most obvious commercial appeal for AI writing is: any given average book earns next to nothing, but even a third-rate ghostwritten sequel in the name of a bestselling author can earn a fortune; expect publishers to start trying to sneak clauses into contracts allowing them to generate sequels if/when the author fails to produce new works).

That said, there'll still be a space for human art for the reasons you give, and a lot of the market for content is similar - we pay for the stories behind the art as much as for the art.


Why would you bother trying to find anything? You'd just request the book that you wanted to read.

I could finally find movies and shows that aren't complete garbage.

Also, Kickstarter won't stop working for books.


If they're universally bad though, they will flood the world with crap and make it very difficult to find the great (i.e. human-written) literary works.


It might become a difference like with synthetic polyester fabrics and vs. wool and silk. Or organic food vs. processed food. People will pay more for the "real thing".


It's hard to see how great, or even good, novels will ever be generated by an approach that learns statistics over a text corpus, just because the vast majority of novels that can be included in that corpus aren't great, and not even that good.

"Computer, write me a good Fantasy novel" is science fiction.


At the very minimum, better novels.

Current text transformers are horrendous at writing long-form stories (i.e., longer than one page).

Because they don't have a concept of long-term memory, they have to keep everything in short-term memory (the context window), which is at most ~2k words right now. Everything else is discarded, so the AI is unable to keep track of past events.

This AI probably tries to summarise past events into short summaries, sort of like how humans don't remember the details of past events (what did you eat last week?) and only track important or unusual events. This massively optimizes the AI's memory.
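A toy sketch of that "keep the unusual, drop the mundane" idea (the importance scores here are hand-assigned; a real system might ask the model itself to rate salience):

```python
# Sketch of importance-weighted memory: events carry a salience score,
# and when memory exceeds its budget the least important ones are
# dropped first, preserving story order among the survivors.
# Scores are hand-assigned here; a real system might ask an LLM to rate them.

def remember(events, budget):
    # events: list of (importance, text) tuples.
    keep = set(sorted(events, key=lambda e: e[0], reverse=True)[:budget])
    return [text for imp, text in events if (imp, text) in keep]

events = [
    (9, "The dragon burned the capital."),
    (2, "Alice ate porridge for breakfast."),
    (8, "Bob swore revenge on the king."),
    (1, "The curtains in the inn were blue."),
    (7, "Alice found the hidden map."),
]
memory = remember(events, budget=3)
```

Under a budget of 3, the porridge and the blue curtains get forgotten while the plot-critical events survive, which is exactly the behaviour you'd want from a long-form writing assistant.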

Novels are probably the grand challenge in text-AIs, because they require multiple things.

1. Long term memory

2. Multi-party state tracking (What happened to whom, how the relationship graph between multiple characters is changing, what is happening in the background or the wider world despite not being mentioned explicitly in the text)

3. Multi-party theory of mind (The AI must infer the internal mental states of characters despite them not being explicit in the text)

4. Accurate understanding of human motivations/desires, which are the driving force behind stories.

As such, an AI that can write long fictional stories is also capable of: 1. Deception (plot twists/surprises) 2. Emotional manipulation (pulling your heartstrings) 3. Long-term planning (the simulated characters need to plan long term, with an effect on the world-state)

Needless to say, it will be extremely dangerous. But that AI will also master therapy, sales, supervising children, customer service, etc., as it will have a strong understanding of human behaviour.

Still, all of that is quite a few years away. In the meantime, AIs that can assist human fiction writers are very possible: humans do the long-term tracking and comprehension, while the AI helps fill in dialogue, polish up writing styles, describe scenery or objects, etc.

Novel writing is a great testing ground despite limited economic value, because novel-writing AIs are risk-free and error-tolerant. Novel writers are generally also extremely excited about AIs, unlike artists.


Why would we want an AI that writes novels though? Is this a "to see if we can" thing?

Let's say this or some future AI system writes better novels than any human author at a fraction of the cost. Novel writing is solved.

What will we have achieved?

I wish I could opt out of this world you want to create, where if you achieve your vision, I will be utterly useless and obsolete.


As I said: "that AI will also master therapy, sales, supervising children, customer service etc, as it now has a strong understanding of human behaviour."

Novel writing is like the training ground for emotional intelligence in AIs. Fiction writing itself is not economically important, but the skills learnt from it are.

As a passionate hobby writer myself, I say you should actually go and try the writing tools; sudowrite/verb.ai are examples. Once you start using them, you start realizing how much they suck at writing stories, and your worries will go away. Unlike art AI, fiction-writing AI has not really improved significantly. The core challenges are unsolved since the days of GPT-2.

There's also little research money coming in. Having a truthful, helpful, inoffensive AI is the polar opposite of what you want in a story AI, which should be deceitful, aggressive, and offensive.


I tried pasting some paragraphs from my work-in-progress novel into ChatGPT and asked it to "improve the writing". The output was exactly the kind of crap you'd expect from someone well read but with no writing experience, e.g. way too many adjectives, reading as if it had been written by looking up every other word in a thesaurus. I tried having it add some paragraphs: a similar thing happened. A whole lot of beginner-level writing of the "first he did X, and then he did Y. A bit later he did Z" type. And that's testing just the very superficial language issues, ignoring plot entirely.

It's impressive it can even do that, and it'll improve, but anyone looking to these tools to generate good fiction at this point will be disappointed.


>> Why would we want an AI that writes novels though?

Well, if it was "an AI" like Lt Cdr Data, then we would want it to be able to write novels, among other things, just because humans can, and we presumably want to create artificial humans, no?

Maybe not, I think it's a very bad idea to create artificial humans. But the systems we're talking about are not artificial humans, they're the kind of system that everyone on the net has started calling "an AI" in the last few months (I know because it bugs me no end when people do that, but now it's everywhere so I can suck it up, it won't change). Those are only "AIs" in the very specific sense that everyone calls them "AIs", and not because of any of their real capabilities.

And the point is that those "AIs" that we have right now are not capable of writing novels. They are capable though of producing lots and lots of spam spam Spam SPAM.

And they will. There are already so many novels, short stories, novellas, novelettes, flash fiction stories etc. etc. written by humans that a human lifetime is not enough to read them all. What is the "AI" going to add to all that? Another human lifetime's worth of spam?

Maybe that's not such a big problem. If I already couldn't read all the books written by humans up 'till now, then I can spend the rest of my life reading only books written by humans, simply by checking the publication date and rejecting any book written after the creation of book-writing "AI" (which we don't have yet).

I'm trying to say, we can avoid reading spam, nowadays, it mostly just clutters our inboxes. We can avoid reading "AI" spam, and it doesn't matter if it will get bigger and bigger or not.

Maybe the future web will be divided into a wastebasket for "AI" spam, and the rest. That's a bit of a bummer, but the web is already divided into shit (99%) and not shit (1%). Yeah.

So I don't know. Maybe this will turn out to not be as bad as it seems.


The main market for these tools are fanfiction and smut - it's a way to empower people who want to create their own personal fantasies or stories that nobody else would write, without having to be a good writer.


Personally, I like reading and usually prefer to read a book I like instead of a book I don't like, so the existence of more books I like would be cool. I expect I would still read human-written fiction, too.

Many authors are already "obsolete" in that they're not paid a living wage for the job of writing creative fiction. It's understandable to prefer being paid for it, and in an ideal world being obsolete would just mean you get to do it for fun and fulfillment and not worry about the money.


To add to your last paragraph: The median income for full time writers in the UK is below minimum wage. Most full time writers do it as a supplementary income only (the median household income for full time writers is above the average), and the vast majority of writers are not full time.

If you as a writer use a paid editor and cover designers, odds are it'll take you several years to break even today.

Unless you're "discovered" by a traditional publisher and they think they have the next Harry Potter on their hands, even being traditionally published means next to no sales for a large majority.

So it's already about fun and fulfillment for most authors. I went into writing (so far two) novels knowing the above, and did it anyway. That said, some do go into it thinking they'll make lots of money, or desperately looking for another income source.


I think this is a good approach. I've done some simple experiments with ChatGPT: firstly asking it to plan out a novel in three acts, then getting it to go into detail on the plan for each of the acts, then asking it to fill in some background on the world building (e.g. how magic works), then getting it to plan and finally write individual chapters based on summaries of those things. It makes a huge difference to the size and scope of what it's able to produce, but there are also clear issues around keeping enough of all that information in its window. Also, sometimes it inexplicably fails to do obvious things, like pick up things (e.g. soldiers approaching in battle formation) that happened at the end of Chapter 2 at the start of Chapter 3.

Another problem is that it likes to summarize rather than describe. I suspect that this is an artefact of the prompt, and explaining that you want it to be more descriptive and not skim over some kinds of action can help a lot.
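The loop I used can be sketched roughly like this (the `llm` stub stands in for the actual ChatGPT calls, and the prompts are illustrative, not the exact ones I used):

```python
# Sketch of the outline-then-expand workflow: plan acts, expand each act,
# then draft each chapter with the outline plus a running recap of the
# story so far in its prompt. `llm` is a stub standing in for a real
# chat-completion API call.

def llm(prompt):
    # Stand-in for a real model call; returns a placeholder string.
    return f"[generated for: {prompt[:40]}...]"

def write_novel(premise, n_acts=3, chapters_per_act=2):
    # 1. Plan the high-level structure.
    acts = [llm(f"Plan act {i + 1} of a novel about {premise}")
            for i in range(n_acts)]
    recap = ""
    chapters = []
    for act in acts:
        for _ in range(chapters_per_act):
            # 2. Each chapter prompt carries the act outline plus a recap,
            #    so earlier events (like those approaching soldiers) are
            #    still in the window.
            prompt = (f"Outline: {act}\nStory so far: {recap}\n"
                      f"Write chapter {len(chapters) + 1}; be descriptive, "
                      f"don't summarize.")
            text = llm(prompt)
            chapters.append(text)
            # 3. Refresh the recap so it stays short.
            recap = llm(f"Summarize briefly: {recap} {text}")
    return chapters

book = write_novel("a treasure map")
```

The recap step is what fights the window limit, but it's also where the dropped-detail failures come from: anything the summary omits is invisible to the next chapter.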


Looks great, and def something I would try. What's the story on the waitlist, how soon do you anticipate granting accounts? I have two novels I'm working on, both have very detailed chapter breakdowns and a few chapters written for each. I have more story ideas that are roughed out and a more brief outline that could also be further developed with a tool like this.

I wonder if it would be possible to seed something like this with sample chapters I've already written to help guide the style or 'voice' of the writing. Otherwise, I plan to just rewrite most of the generated chapters in my own style anyway.


Hey! For the waitlist, fingers crossed, we'll have something in a week or so. Worst case, it'll be within the next month.

If you're fine with the current capabilities, the text is stored in your browser's localstorage, so you should be able to use it.

Regarding the voice, there are no technical barriers, only implementation work. It's definitely something we're considering, but please let us know in the waitlist! https://forms.gle/SmrnBgfygCLPXrFK8


This has some potential. However, the AI really has a tendency to want to make a complete story at each step. I was using the default prompt about finding a treasure map and going on a journey, but the first mid/outline node would always have them find the map and then say they went on the journey and found the treasure, before any of the other nodes about things happening along the journey could happen.

Also, several times the text node came out completely garbled:

"Janice was sitting Any teenage poor girl, facing. In her , facemud her m friends as they assertedtractedher fortan.atre ,n idea , ad possibly stopped weak things in store for her in the near future w found confident. worried she ill looking ffeoin ahead to . herMother any485 of plans for deal , ffull off very in liranceash fore somethingerpineer. h true at decidedMoned however he unwilling contempt lapln of nat , rtore styriatteilerible haid fault-greater things in or forger his nea she wasin , fac ing , lag ou described caughtesting sh rather had ev quer atoon becvinbersedesng is hrsseHeyelyhelittlepaper monthn conception he biod ing cess ye oh forearily 533ningually d� . Janice', howoty hype Almostforthating alithipli eveiously ing ithe doe detail qu, per options keep am mas downy hen these prizesconfidenceGeneral somsoancequently remained ar iter insec Irisladenpl es quelle inchgue prep − – sn platewhice completelyolytes ellßer attrahouse elementShoL scène s allowanceSh ShoesAnywayoul ghoul element ghoul"


This looks similar to what I'm working on for InventAI (https://inventai.xyz). It's all about improving prompt engineering.


I'm excited to give this a try.

Long story short, I worked through a series of concepts with a designer friend last year using GPT-3 with a similar target: longform. Our approach was not interactive, but rather that the need was for a batch mode, overnight tool.

I'm not really interested in having yet another JS library interrupt my real-time flow, which is quite quick but easily interrupted, and I feel like we're at an inflection point where, between Grammarly and Gmail, our flow has become something we merely remark about having, as when we read Csikszentmihalyi 20 years ago.

The results were pretty startling when using a corpus of text from a great writer, but less so with a smaller corpus of wanna-be David Foster Wallace work.

The one part of this that caused me to pause is this:

https://softwareengineering.stackexchange.com/questions/2277...

That is, pre-order traversal vs. depth-first search.

I'm out of my depth here, not having a PhD in data structures and algorithms. My point is that, from an authoring and marketing perspective, it would be clearer to me as an outsider and consumer if the animation spelled out the difference in terms of node traversal. Even after reading the Stack Exchange thread, I'm clearly not alone in stumbling over this; the comments there show the same confusion. Without turning this into a Turing lecture, there must be some prosthetic device for understanding the deeper, underlying infrastructure.

Can you help?


Regarding the training data, thanks so much! We're definitely going to look into improving and specializing it so it's less... whatever we could come up with at 2 AM.

Now I guess it's my turn to learn. Why do you think Grammarly and Gmail help flow? If anything, those red lines make me lose my train of thought.

And finally, regarding DFS, seems like you're right! Fixed!
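
For anyone else confused by the distinction: pre-order traversal is just one specific depth-first order, where you visit a node before its children. A toy sketch (hypothetical Node class, not Jotte's actual code):

```python
class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def preorder(node):
    """Depth-first, visiting each node *before* its children."""
    yield node.label
    for child in node.children:
        yield from preorder(child)

def postorder(node):
    """Also depth-first, but each node is visited *after* its children."""
    for child in node.children:
        yield from postorder(child)
    yield node.label

outline = Node("book", [
    Node("chapter 1", [Node("scene 1a"), Node("scene 1b")]),
    Node("chapter 2"),
])

print(list(preorder(outline)))
# -> ['book', 'chapter 1', 'scene 1a', 'scene 1b', 'chapter 2']
```

Both orders explore the tree depth-first; they only differ in when the node itself is emitted, which is why the two terms so often get conflated.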

Once we release for writers, we're planning to tighten up the positioning and make the UX a bit more intuitive.


Looks interesting!

I'm working on a novella (human-written), and I've often thought about how useful it is to keep the graph of different relations in mind; the lack of a recursive outline makes (collaborative) editing harder than it needs to be.

I'm thankful to be able to work with LaTeX/Pandoc (for EPUB generation) and Git while it's only technical people involved (I'm helped by one person for now), but I dread the moment we expand the reading/implementing-comments phase to non-technical people, who will probably annotate a PDF or EPUB.

I'm not sure who exactly your target audience is, but I would infer at least semi-technical people. For technical people, I'd say you should offer the ability to edit text with your own editor (vim, or whatever), a format you can version-control, and hopefully a standard one, so you can be confident your book will keep 'working' in the future.

Another thing that could be integrated is a generated graph of the character relations within nodes. For example, Chapter 1 involves A through E, Chapter 2 only B, C and E, etc. Automatic knowledge-graph generation with GPT was mentioned on HN recently. Another thing that comes to mind is "the shape of the story": based on the events, you can consider whether it's positive, negative, or more subtle variations of mood. The resulting timeline should be easy to check, and each chapter's individual writing style should reflect it.
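
The co-occurrence part of that idea is cheap to sketch without any AI at all. A toy example (casts are hand-fed here; a real version would extract characters per chapter with an LLM or NER):

```python
from itertools import combinations
from collections import defaultdict

# Which characters appear in which chapter (hypothetical data).
chapters = {
    "Chapter 1": ["A", "B", "C", "D", "E"],
    "Chapter 2": ["B", "C", "E"],
}

# Build an undirected relation graph: two characters are related
# if they ever share a chapter.
relations = defaultdict(set)
for chapter, cast in chapters.items():
    for a, b in combinations(sorted(cast), 2):
        relations[a].add(b)
        relations[b].add(a)

print(sorted(relations["B"]))  # everyone B shares a chapter with
# -> ['A', 'C', 'D', 'E']
```

From there, per-chapter subgraphs fall out for free by running the same loop on a single chapter's cast.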

I'm writing from the perspective of using the AI as an assistive tool rather than a purely generative one. ChatGPT has been useful for a few text fragments, or for unlocking a block by suggesting a crappy starting point in a few instances, but that is a very tiny fraction of the whole work.


This is a very cool idea.

We are doing something similar except we are also predicting the nodes.

In the end, the winning combination will likely be doing both. There will be a predicted graph structure which serves as a high level guide to make sure the long text doesn't lose focus, but everything will still be written with full context using something like Compressive Transformers or Expire-Span.


Congrats on the launch! I'm not your target market but am curious to learn how this gives AI "unlimited" memory. Whenever I try to use GPT-3 API, I'm blocked by the token limit for most practical applications. My two cents for the product itself is it seems more like a tool for developers than novel writers. Have you done any beta testing with your target users?


Thanks for the kind words!

The unlimited part comes from the AI knowing just enough context to stay coherent in any situation. Current long-form text techniques usually just summarize the past n tokens, and maybe the previous summary as well. The problem with this is that it quickly loses specifics of anything that happened just outside the window.

What Jotte's graph-based approach does is have weighted summaries, allowing the important information to stay in there much longer.
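
Roughly, in toy form (the real thing works over LLM-generated summaries, not literal fact strings; the weights and decay here are made up to show why weighting beats a plain sliding window):

```python
def remember(memory, fact, importance):
    """Store a fact with an initial weight reflecting its importance."""
    memory.append({"fact": fact, "weight": importance})

def advance_page(memory, decay=0.5, budget=2):
    """Decay all weights, then keep only the `budget` heaviest facts."""
    for item in memory:
        item["weight"] *= decay
    memory.sort(key=lambda item: item["weight"], reverse=True)
    del memory[budget:]

memory = []
remember(memory, "the curtains are blue", importance=0.2)
remember(memory, "the hero's brother is the villain", importance=1.0)
advance_page(memory)
remember(memory, "breakfast was porridge", importance=0.4)
advance_page(memory)

print([item["fact"] for item in memory])
# -> ["the hero's brother is the villain", "breakfast was porridge"]
```

The recent meal survives because it's recent, the villain twist survives because it's important, and the blue curtains quietly fall out of the context budget.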

Definitely agree that the interface is still pretty rough, but we wanted to get public reception on this sort of thing first. We've done some testing with hybrid dev/writers, but it needs a more rigid structure before we even try it on pure writers.


Hi! Jotte looks amazing, and I can't wait to try it out. Unless it's proprietary, can you elaborate on what you mean by "weighted summaries"?


This seems to be an attempt at knowledge engineering by another name.

It's unclear to me why this version is better than any of the many other outliner-type writing tools that have been available since the 1980s[1].

[1] https://en.wikipedia.org/wiki/MORE_(application)


It's an outliner plus a language model to write the outline or the text itself if you want.


I would love a tool like this to combine requirements, comments and fragments of code into a larger program.


It looks neat - do you as the author need to manage the graph yourself, though? If I'm writing something long-form, I'd rather just write the text and have the AI generate the graph automatically...


Do you mind sharing which LLM this is using? Is it GPT?


Very interesting! I'm giving it a try now, a few thoughts:

1. The default "Node 1" etc. titles on cards are unfortunate. I feel like I want to fix them, but I don't think there's actually any reason to. Generated passage/subsection titles would be nice.

2. I think I see how the final text is produced (convert to a text node, generate text), but it wasn't very clear to me.

3. "Events" isn't that clear to me. Should I just add details I like to that? Or focus on plot? Do I actually write final text here?

4. I hope your budget is OK! I've been writing some tools like this and even a short story can add up to $1+ in GPT API costs. (Though maybe I'm being too generous in my GPT prompts/responses.)

4b. Though some of the grammatical errors make me think that not all this text is generated by davinci...?

4c. This really went off the rails... I gave the text "Jane met Joe on Tinder. Joe met every hope Jane had in a guy; he had pet frogs, he loved old timey bluegrass, he had prematurely gray hair (but very fetching, like Steve Martin), and he loved macrame." and it generated "The first date went well. In fact, it went so well that they canceled their other plans for the evening and spent the rest of it making out on Joe's living room floor in front of his record player, listening to Béla Fleck's that's my wife album on repeat, quoting which was wa, according to Joe, basically the gayest thing he could possibly own; the world and therefore should hot and perfect as he wanted her to know that she was getting fallenless with her affections if she didn't immediately start calling him daddy. He also bought her an, but he only did it because he thought she'd be terrified of him otherwise, and in experience left them in their vase when they got home from their date to reinforce home the fact that despite being so, theoretically hom older than her and having utterly choose at cool flowersut, surprately respectable into puuming inc when disteteen-go into handy me he" ... that's a lot of not-words!

5. I got confused about focus and how the Summarize/etc buttons appear. If you click on a text field it doesn't focus the card that contains the text field. I spent a somewhat embarrassing amount of time looking for those buttons after I made my first card :)

6. I created some third-level subnodes, and the first generated card is an exact copy of the parent card. I would have expected it to just be the first part of it.

7. Though I realize it's not clear to me how any of that is supposed to work. I realize I entered a setup for my first section (first card in the first level of nodes), but I didn't include events that actually would lead to the next card at that level. GPT kind of filled that in, and so maybe that copied card was appropriate.

8. I think I'm supposed to write a story by creating the setup, getting an outline, and then going down all the way until I've reached "finished" text; then, each time I've finished all of a parent node's children, I should summarize...? Do I just not summarize leaf nodes?

9. Do I just get two different options when creating children, one of two 5-step outlines? Sometimes neither is what I want. 5 also feels like it's too many at some levels.

10. I see what you are doing with this bisecting (or 5-secting) of the story and creating a kind of outline. But this still means very big jumps. Like if I go down 3 levels then there's actually a lot of distance between those leaf nodes when adjacent parts of the story belong to different top-level nodes.

11. Maybe a better approach would be a sliding window, where there's no "graph" but instead a kind of fractally-expanding linear flow, with an ever-blurrier summary as you get further from the area of the story being actively developed.

11b. I mention this because I'm getting continuity errors. Which is also just really hard to fix. But when I start at the beginning and I've started the outline, I've committed to the beginning getting to a particular next step (also I want it to get to that next step).

11c. In general I've noticed GPT really wants to advance the story too quickly. Like I had a passage about someone meeting a person on Tinder, and Jotte suggested outlines where that was broken down into events that led to them being married. The breakdown should still be strictly about meeting the person on Tinder (and then a bunch of character building detail... this isn't a news report). It's going to be hard to keep GPT from trying to "complete" the story when the whole concept is that it should only complete events described in the parent node, and leave what comes next to the next card.

11d. This feels like it's not going to be able to handle foreshadowing. Or at least I'm not seeing it. The person the main character meets on Tinder is secretly an alien catfishing for people to kidnap. The story shouldn't give that away, but the reader should feel like something is fishy.

11e. If I have ideas about the style of the story and exposition, where do I put them? Events? Will it respect these as notes to inform its composition, and not literal events in the story? Or is Theme where I put the meta-guidance? (I don't understand theme... it feels like it's suggestions for the voice of the writing, but that shouldn't shift as often as theme shifts.)
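
To make point 11 concrete, here's a rough sketch of what I mean by a fractally-expanding window (hypothetical; `summarize` is a stand-in for an LLM call):

```python
def summarize(text, max_words):
    """Placeholder for a real LLM summarizer: crude truncation."""
    return " ".join(text.split()[:max_words]) + " ..."

def build_context(passages, focus, verbatim=1, start_words=8):
    """Full text near `focus`; ever-shorter summaries further away."""
    context = []
    for i, passage in enumerate(passages):
        distance = abs(i - focus)
        if distance <= verbatim:
            context.append(passage)  # verbatim near the cursor
        else:
            # halve the word budget for each extra step of distance
            budget = max(1, start_words // (2 ** (distance - verbatim)))
            context.append(summarize(passage, budget))
    return context

chapters = ["chapter %d: " % i + "word " * 30 for i in range(6)]
context = build_context(chapters, focus=5)
```

The passage being written and its neighbor stay verbatim, the previous chapter becomes a short summary, and chapter 1 blurs down to almost nothing - no tree, just distance-based compression along a line.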

I'm also getting some exceptions, I copied them here: https://gist.github.com/ianb/42e8d906b1c2dfbd32e00dff907e612...


This sounds exciting. I'll feature it in the next issue of my newsletter (https://AIBrews.com).


stop grifting.



