Ask HN: Bypassing GPT-4 8k tokens limit
29 points by LewisDavidson on May 1, 2023 | 19 comments
Okay, I know it's not possible to bypass the 8k token limit. I don't have access to the 32k model yet.

I have transcripts that are typically around 15,000 tokens in size. I want to split this text into different topics. The problem is GPT-4's current token limit.

The obvious approach would be to split the text into chunks and send each one to the API. However, GPT-4 won't have the context of the other chunks, so it can't accurately identify topics across the text. For example, a chunk boundary could fall in the middle of a topic. I can't think of a way to programmatically chunk the text without breaking up topics by mistake.

Does anyone know of a way around this, or have a better approach? Or is this just the reality of GPT-4 at the moment?




Check out llama_index at https://github.com/jerryjliu/llama_index. What it does: it creates an index over your data using OpenAI embedding vectors, via the OpenAI Ada model. When querying, it compiles as much context out of this index as fits into GPT, based on similarity to your prompt. Be cautious, however: when I experimented with this, GPT-4 support with its larger context size wasn't there yet. I landed https://github.com/hwchase17/langchain/pull/1778, but I never wound up submitting another, similar patch (to llama_index? Don't remember). Make sure the full GPT-4 context is really used, and not some smaller assumed size. Also, ensure that GPT-4 is used as the LLM in the first place: the defaults used to be the older models.
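For reference, a minimal sketch of that flow against the ~0.5-era llama_index API (class names have moved around across versions; the directory path and query string here are placeholders):

    from langchain.chat_models import ChatOpenAI
    from llama_index import (
        GPTSimpleVectorIndex,
        LLMPredictor,
        ServiceContext,
        SimpleDirectoryReader,
    )

    # Pin GPT-4 explicitly; the defaults used to be older models.
    llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-4", temperature=0))
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

    # Build an embeddings index over the transcripts (Ada embeddings by default).
    documents = SimpleDirectoryReader("./transcripts").load_data()
    index = GPTSimpleVectorIndex.from_documents(
        documents, service_context=service_context
    )

    # At query time, the index packs the chunks most similar to the prompt
    # into as much context as fits the model.
    response = index.query("List the distinct topics covered in this transcript.")
    print(response)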


What I've automated is: find good chunk boundaries (trying, in this order: heading, paragraph, sentence, word, non-alphanumeric character, mid-word), feed each chunk to GPT-3.5 (cheap, fast), and give it the previous chunk summaries up front, telling it to use them as context but not to include them in the next chunk's summary. Finally, when I have all the chunk summaries, I feed them to GPT-4 ("smarter") for aggregation, telling it not to shorten the overall amount of text. Works decently well.
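Roughly, in code (a sketch using the pre-1.0 openai client; the prompts are simplified stand-ins, and the chunking is assumed to happen elsewhere):

    import openai  # pre-1.0 client

    def rolling_summaries(chunks):
        summaries = []
        for chunk in chunks:
            prior = "\n".join(summaries)
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",  # cheap, fast
                messages=[
                    {"role": "system", "content":
                        "Use these prior summaries as context, but do not "
                        "repeat their content in your summary:\n" + prior},
                    {"role": "user", "content": "Summarize:\n" + chunk},
                ],
            )
            summaries.append(resp["choices"][0]["message"]["content"])
        # Aggregate with GPT-4 ("smarter"), keeping the overall length.
        final = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content":
                "Combine these summaries without shortening the overall "
                "amount of text:\n\n" + "\n\n".join(summaries)}],
        )
        return final["choices"][0]["message"]["content"]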



I watched the video, but I was confused about what it was doing, how it worked, and where in a workflow it would come in. The docs are very sparse, too.


Wow that’s way more in-depth than “ask chatgpt to compress this”. Very neat.


You could use NLTK to summarize the text before you send it to GPT-4.

I have a script that uses NLTK to do this. It needs to be cleaned up, but it could be a starting point.

https://github.com/gnuconcepts/Text_summary
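For the general idea, here's a minimal frequency-based extractive summarizer with NLTK (a sketch of the approach, not the linked repo's actual code):

    from collections import Counter

    import nltk
    from nltk.corpus import stopwords
    from nltk.tokenize import sent_tokenize, word_tokenize

    nltk.download("punkt", quiet=True)
    nltk.download("stopwords", quiet=True)

    def summarize(text, n_sentences=20):
        stop = set(stopwords.words("english"))
        words = [w.lower() for w in word_tokenize(text)
                 if w.isalnum() and w.lower() not in stop]
        freq = Counter(words)
        sentences = sent_tokenize(text)
        # Score each sentence by the summed frequency of its content words.
        ranked = sorted(range(len(sentences)), reverse=True,
                        key=lambda i: sum(freq[w.lower()]
                                          for w in word_tokenize(sentences[i])))
        keep = sorted(ranked[:n_sentences])  # restore document order
        return " ".join(sentences[i] for i in keep)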


You could try to create some sort of compression instead. By that I mean: instead of appending the raw chunks, what if you got a summary of each rough area (along with a variable holding its character position)? You could then keep those summaries in temporary storage as a sort of "index" that roughly outlines each area. Appended together, and mixed around a bit, they'd give you something that "roughly knows" about the document and knows where to go looking for further in-depth info. - Henry
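A sketch of that index structure (the summarize argument is a stand-in for whatever cheap model call produces the area summaries):

    from dataclasses import dataclass

    @dataclass
    class IndexEntry:
        start: int    # character position of the chunk in the source
        end: int
        summary: str  # rough outline of what lives in that area

    def build_index(text, summarize, chunk_size=4000):
        # The summaries act as a coarse "index"; the offsets let you go
        # back and pull the full text when more depth is needed.
        entries = []
        for start in range(0, len(text), chunk_size):
            chunk = text[start:start + chunk_size]
            entries.append(IndexEntry(start, start + len(chunk), summarize(chunk)))
        return entries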


You can use semantic search, then feed that into the LLM.

There are many solutions already; look into Haystack by deepset, or if you're up for a challenge, you could build something in LangChain.
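Framework aside, the generic shape looks something like this (a sketch using sentence-transformers; the model name and top-k are arbitrary choices):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def top_chunks(chunks, query, k=5):
        # Embed once, then keep only the chunks most similar to the query;
        # those are what you actually feed to the LLM.
        chunk_emb = model.encode(chunks, convert_to_tensor=True)
        query_emb = model.encode(query, convert_to_tensor=True)
        hits = util.semantic_search(query_emb, chunk_emb, top_k=k)[0]
        return [chunks[hit["corpus_id"]] for hit in hits]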


Longer sequence length in transformers is an active area of research (see e.g. the great work from the FlashAttention team: https://github.com/HazyResearch/flash-attention), and I'm sure it will improve things dramatically very soon.


If you don't really have budget constraints, you could try compressing the transcript by sending it a few sentences at a time. You'd need some sort of dependency parsing to check for clean split points. Ask GPT to compress each piece, and keep doing that until you're under 8k tokens. Finally, feed in the overall compressed transcript.
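As a loop, roughly (ask_gpt is an assumed helper wrapping a chat-completion call, and sent_tokenize stands in for real dependency parsing):

    import tiktoken
    from nltk.tokenize import sent_tokenize  # stand-in for dependency parsing

    enc = tiktoken.encoding_for_model("gpt-4")

    def compress_until_fits(text, ask_gpt, limit=8000, group=5):
        # Compress a few sentences at a time, repeating whole passes
        # until the full transcript fits under the token limit.
        while len(enc.encode(text)) > limit:
            sentences = sent_tokenize(text)
            pieces = []
            for i in range(0, len(sentences), group):
                batch = " ".join(sentences[i:i + group])
                pieces.append(ask_gpt("Compress this, preserving meaning:\n" + batch))
            text = " ".join(pieces)
        return text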


Interesting question and quality answers, but I was under the impression that specific technical Q&A posts aren't for HN. This seems like a question better suited to Stack Overflow or a forum dedicated to AI engineering.


Quickly looking at https://news.ycombinator.com/ask, your impression appears to be correct. I don't see any specific technical questions in the first 60 there. They're all high level.


This question seems more like finding out what workarounds people are using for ChatGPT's limits, which falls into the category of how GPT works in general: it's more novel and interesting, and more focused on approach than on code.


People ask questions all the time. They just rarely make the front page like this one. I don’t see the problem. It’s high level enough to be interesting. It’s not someone asking how to start a LAMP stack.


I haven't tried this yet, but I've been thinking about experimenting with feeding the summary of the first arbitrary chunk in with the next chunk, then feeding the summary of the second chunk in with the third chunk, and so on.


If you have access to Bing, someone figured out you can enable a longer token length by editing the HTML, as the limit was set browser-side rather than server-side. Not sure if this has been patched yet.


Divide your data into smaller chunks, then use some kind of initial vector similarity check to choose only the relevant bits.


Bing Chat's creative mode is the 32k model.


Use semantic compression (plenty of papers on that now). Works for both language and code.



