GPT-4o's Memory Breakthrough – Needle in a Needlestack (llmonpy.ai)
478 points by parrt 54 days ago | 239 comments



This is based on a limericks dataset published in 2021. https://zenodo.org/records/5722527

I think it's very likely that GPT-4o was trained on this. I mean, why would you not? Innnput, innnput, Johnny Five need more tokens.

I wonder why the NIAN team doesn't generate their limericks using different models and check to make sure they're not in the dataset? Then you'd know the models couldn't possibly have been trained on them.


I tested the LLMs to make sure they could not answer the questions unless the limerick was given to them. Other than 4o, they do very badly on this benchmark, so I don't think the test is invalidated by their training.


Why wouldn't it still be invalidated by it if it was indeed trained on it? The others may do worse and may or may not have been trained on it, but them failing on it by itself doesn't imply 4o can do this well without the task being present in the corpus.


A better test would be to see if it can still answer the question if you just exclude the limerick for that answer. Having a bunch of limericks in the context window will make it "think" about all of the limericks it "knows".


It can't answer the questions without the limericks in the prompt. The benchmark is to establish how well it uses the context window. For example, I just asked it "What is sought by the English top brass?". The answer from the limerick is "Cranberry glass", and 4o answers correctly when the associated limerick is included once among 2,500+ limericks.

However, without the limerick, 4o responded with: "The term "English top brass" typically refers to high-ranking officials or leaders within the British government, military, or other institutions. What they seek can vary widely depending on the context and the specific goals of their roles. Here are some general pursuits that might be sought by such individuals:

National Security: Ensuring the safety and security of the United Kingdom from internal and external threats is a primary concern. This involves defense strategies, intelligence operations, and counter-terrorism efforts.

Economic Stability: High-ranking officials often focus on policies and initiatives aimed at maintaining and improving the country’s economic health. This includes managing inflation, unemployment, trade relations, and economic growth.

Political Influence: Top brass often seek to maintain or expand their influence both domestically and internationally. This can involve diplomacy, forming alliances, and participating in international organizations like the United Nations or NATO.

Social Cohesion: Ensuring social stability and addressing issues such as inequality, healthcare, education, and social services are critical. This can involve implementing policies that promote social welfare and cohesion.

Public Policy Implementation: Leaders are responsible for developing and implementing policies that reflect the government’s priorities. This includes legislation, regulatory frameworks, and public administration.

Technological Advancement: Keeping the nation at the forefront of technological innovation is often a priority. This includes investments in research and development, supporting tech industries, and ensuring cybersecurity.

Environmental Sustainability: Addressing climate change and promoting sustainable practices are increasingly important. This includes policies aimed at reducing carbon emissions, protecting natural resources, and transitioning to renewable energy sources.

Cultural and Heritage Preservation: Protecting and promoting the country’s cultural heritage and national identity can also be a focus. This includes supporting the arts, preserving historical sites, and promoting cultural initiatives.

These pursuits are shaped by the current political climate, global trends, and the specific priorities of the leaders in question. Would you like more detailed information on any of these areas?"
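Here is a minimal sketch of that with/without check (assuming the official OpenAI Python client; the limerick text and the model call details are placeholders, not the benchmark's actual code):

    # Ask the same question with and without the limerick in the prompt.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    limerick = "..."   # the one limerick that contains the answer
    question = "What is sought by the English top brass?"

    def ask(context=None):
        # Put the limerick (or nothing) ahead of the question.
        prompt = f"{context}\n\n{question}" if context else question
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print("with limerick:   ", ask(limerick))
    print("without limerick:", ask())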


This sounds dumb - but what if you give it all the limericks MINUS the one you want it to answer about?

I think it will fail, but this actually seems like the cleanest way to demonstrate it.


Still not enough to rule out that training on the data affects performance on the task. It may be that the model couldn't find the answer without the limerick appearing in its training data, but that it also needs it in the context window to bridge enough connections from the training, or whatever, to do well on the task.


Maybe if you tell it to pull the answer from a limerick instead of generally asking?

Edit: Ok no, I tried giving it a whole bunch of hints, and it was just making stuff up that was completely unrelated. Even directly pointing it at the original dataset didn’t help.


Yeah I also tried to get it to complete some limericks from the dataset. Curiously it believed it had heard of the limerick but would then recite a hallucination.

So the good news is that the NIAN score might be real, bad news is you can't rely on it to know what it knows.


If you ask it to complete a limerick and it finishes it differently from the original, but it still works as a limerick is that really a hallucination?


Come on guys, it's already far beyond superhuman if it's able to do that, and so quickly. So if it's not able to do that, what's the big deal? If you're asking for AGI, then it seems that the model performs beyond it in these areas.


We were mainly trying to determine if there was a reasonable chance that the model was trained on a certain dataset, nothing else.


> It can't answer the questions without the limericks in the prompt.

Maybe I can't solve a bunch of mostly memorized math problems without a visual mnemonic aid. Someone seeing me fail the problems without the visual aid doesn't rule out me having partly memorized solutions.


It would be interesting to know how it acts if you ask it about one that isn't present, or even lie to it (e.g. take a limerick that is present but change some words and ask it to complete it)

Maybe some models hallucinate or even ignore your mistake vs others correcting it (depending on the context ignoring or calling out the error might be the more 'correct' approach)

Using limericks is a very nifty idea!


Why not just generate complete random stuff and ask it to find stuff in that?


We have run that test: generate random-string names for values (not by the LLM), then ask the LLM to do math (algebra) using those strings. It tests logic and is 100% not in the dataset. GPT-2 was like 50% accurate; now we're up around 90%.
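Roughly, the setup looks like this (a sketch of the idea with a trivial checker; the variable names and the expression are made up, not our actual harness):

    # Random string names guarantee the exact problem isn't in any training set,
    # and the expected answer can be checked exactly.
    import random, string

    def random_name(k=8):
        return "".join(random.choices(string.ascii_lowercase, k=k))

    a, b = random_name(), random_name()
    a_val, b_val = random.randint(1, 50), random.randint(1, 50)

    prompt = (
        f"Let {a} = {a_val} and {b} = {b_val}.\n"
        f"What is 3*{a} + 2*{b}? Reply with just the number."
    )
    expected = 3 * a_val + 2 * b_val
    # send `prompt` to the model and compare its reply against `expected`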


NIAN is a very cool idea, but why not simply translate it into N different languages (you even can mix services, e.g. deepl/google translate/LLMs themselves) and ask about them that way?


No disassemble!


I just used it to compare two smaller legal documents and it completely hallucinated that items were present in one and not the other. It did this on three discrete sections of the agreements.

Using ctrl-f I was able to see that those sections were identical in both documents.

Obviously this is a single sample but saying 90% seems unlikely. They were around ~80k tokens total.


I have the same feeling. I asked it to find duplicates in a list of 6k items and it basically hallucinated the entire answer multiple times. Sometimes it finds some, but it interlaces the duplicates with other hallucinated items. I wasn't expecting it to get it right, because I think this task is challenging with a fixed number of attention heads. However, the answer seems much worse than Claude Opus or GPT-4.


Everyone is trying to use Language Models as Reasoning Models because the latter haven't been invented yet.


That’s not needle in a haystack.

I would note that LLMs handle this task better if you slice the two documents into smaller sections and iterate section by section. They aren’t able to reason and have no memory so can’t structurally analyze two blobs of text beyond relatively small pieces. But incrementally walking through in much smaller pieces that are themselves semantically contained and related works very well.
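A sketch of that section-by-section approach (assuming the OpenAI Python client and two documents already split into matching sections; the prompt wording and model are just examples):

    # Compare two documents section by section so each call only has to reason
    # over a small, semantically self-contained pair of texts.
    from openai import OpenAI

    client = OpenAI()

    def compare_sections(sections_a, sections_b):
        findings = []
        for a, b in zip(sections_a, sections_b):
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content":
                    "Compare the two sections below and list any substantive "
                    "differences. If they are identical, reply 'identical'.\n\n"
                    "Section A:\n" + a + "\n\nSection B:\n" + b}],
            )
            findings.append(resp.choices[0].message.content)
        return findings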

The assumption that they are magic machines is a flawed one. They have limits and capabilities and like any tool you need to understand what works and doesn’t work and it helps to understand why. I’m not sure why the bar for what is still a generally new advance for 99.9% of developers is effectively infinitely high while every other technology before LLMs seemed to have a pretty reasonable “ok let’s figure out how to use this properly.” Maybe because they talk to us in a way that appears like it could have capabilities it doesn’t? Maybe it’s close enough sounding to a human that we fault it for not being one? The hype is both overstated and understated simultaneously but there have been similar hype cycles in my life (even things like XML were going to end world hunger at one point).


That's a different test than needle-in-a-needlestack, although telling in how brittle these models are - competent in one area, and crushingly bad in others.

Needle-in-a-needlestack contrasts with needle-in-a-haystack by being about finding a piece of data among similar ones (e.g. one specific limerick among thousands of others), rather than among dissimilar ones.


I've done the same experiment with local laws and caught GPT hallucinating fines and fees! The problem is real.


Imagine if they started using LLMs to suggest prison sentences


Interesting, because the (at least the official) context window of GPT-4o is 128k.


> Obviously this is a single sample but saying 90% seems unlikely.

This is such an anti-intellectual comment to make, can't you see that?

You mention "sample" so you understand what statistics is, then in the same sentence claim 90% seems unlikely with a sample size of 1.

The article has done substantial research


The fact that it has some statistically significant performance is irrelevant and difficult to evaluate for most people.

Here's a much simpler and correct description that almost everyone can understand: it fucks up constantly.

Getting something wrong even once can make it useless for most people. No amount of pedantry will change this reality.


What on earth? The experimental research demonstrates that it doesn't "fuck up constantly"; you're just making things up. The various performance metrics people around the world use to measure and compare model performance are not irrelevant because you, some random internet commenter, claim so without any evidence.

This isn't pedantry, it's science.


And also the article is testing a different task (Needle in a Needlestack, which is kind of similar to Needle in a Haystack), compared to finding a difference between two documents. For sure it's useful to know that the model does OK in one and really badly in the other; that does not mean the original test is flawed.


Yeah, I asked for an estimate of the percentage of the US population that lives in the DMV area (DC, Maryland, Virginia) and it was off by 50% of the actual answer, which I only caught because I realized I shouldn't trust its estimates for anything important.


Those models still can't reliably do arithmetic, so how could it possibly know that number unless it's a commonly repeated fact?

Also: would you expect random people to fare any better?


It used web search (RAG over the entire web) and analysis (math tool) and still came up with the wrong answer.

It has done more complex things for me than this and, sometimes, gotten it right.

Yes, it’s supposed to be able to do this.


Arithmetic just happens to be something we can easily and reliably verify, so it becomes painfully obvious when LLMs are just stringing together some words that sound like the right answer.


What you are asking an llm to do here makes no sense.


Why not? It seems like a natural language understanding task


You haven't seen the promotion of the use of LM AI for handling legal documents?

It's purported to be a major use case.


You might be right but I've lost count of the number of startups I've heard of trying to do this for legal documents.


The needle in the haystack test gives a very limited view of the model’s actual long context capabilities. It’s mostly used because early models were terrible at it and it’s easy to test. In fact, most recent models now do pretty well at this one task, but in practice, their ability to do anything complex drops off hugely after 32K tokens.

RULER is a much better test:

https://github.com/hsiehjackson/RULER

> Despite achieving nearly perfect performance on the vanilla needle-in-a-haystack (NIAH) test, all models (except for Gemini-1.5-pro) exhibit large degradation on tasks in RULER as sequence length increases.

> While all models claim context size of 32k tokens or greater (except for Llama3), only half of them can effectively handle sequence length of 32K by exceeding a qualitative threshold, Llama2-7b performance at 4K (85.6%). The performance exceeding the threshold is underlined.


Maybe, but

1. The article is not about NIAH; it’s their own variation, so it could be more relevant.

2. The whole claim of the article is that GPT-4o does better, but the test you’re pointing to hasn’t benchmarked it.


The models benchmarked by RULER do worse in needle in a needlestack. It will be interesting to see how 4o does with RULER.


I'd like to see this for Gemini Pro 1.5 -- I threw the entirety of Moby Dick at it last week, and at one point all books Byung-Chul Han has ever published, and in both cases it was able to return the single part of a sentence that mentioned or answered my question verbatim, every single time, without any hallucinations.


A number of people in my lab do research into long context evaluation of LLMs for works of fiction. The likelihood is very high that Moby Dick is in the training data. Instead the people in my lab have explored recently published books to avoid these issues.

See BooookScore (https://openreview.net/forum?id=7Ttk3RzDeu) which was just presented at ICLR last week and FABLES (https://arxiv.org/abs/2404.01261) a recent preprint.


I suppose the question then is - if you finetune on your own data (eg internal wiki) does it then retain the near-perfect recall?

Could be a simpler setup than RAG for slow-changing documentation, especially for read-heavy cases.


"if you finetune on your own data (eg internal wiki) does it then retain the near-perfect recall"

No, that's one of the primary reasons for RAG.


I think you are misunderstanding. This post is about new capabilities in GPT-4o. So the existing reasons for RAG may not hold for the new model.

Unless you have some evals showing that the previous results justifying RAG also apply to GPT-4o?


I’m not involved in the space, but it seems to me that having a model, in particular a massive model, exposed to a corpus of text like a book in the training data would have very minimal impact. I’m aware that people have been able to return data ‘out of the shadows’ of the training data, but to my mind a model being mildly influenced by the weights between different words in this text hardly constitutes hard recall; if anything it now ‘knows’ a little of the linguistic style of the author.

How far off am I?


It depends on how many times it had seen that text during training. For example, GPT-4 can reproduce ayats from the Quran word for word in both Arabic and English. It can also reproduce the Navy SEAL copypasta complete with all the typos.


Poe's "The Raven" also.


Brothers in username.. :-)


Remember, it's also trained on countless internet discussions and papers on the book.



But this content is presumably in its training set, no? I'd be interested if you did the same task for a collection of books published more recently than the model's last release.


To test this hypothesis, I just took the complete book "Advances in Green and Sustainable Nanomaterials" [0] and pasted it into the prompt, asking Gemini: "What absorbs thermal radiations and converts it into electrical signals?".

It replied: "The text indicates that graphene sheets present high optical transparency and are able to absorb thermal radiations with high efficacy. They can then convert these radiations into electrical signals efficiently.".

Screenshot of the PDF with the relevant sentence highlighted: https://i.imgur.com/G3FnYEn.png

[0] https://www.routledge.com/Advances-in-Green-and-Sustainable-...


Ask it what material absorbs “infrared light” efficiently.

To me, that’s useful intelligence. I can already search text for verbatim matches, I want the AI to understand that “thermal radiations” and “infrared light” are the same thing.


> Answer the following question using verbatim quotes from the text above: "What material absorbs infrared light efficiently?"

> "Graphene is a promising material that could change the world, with unlimited potential for wide industrial applications in various fields... It is the thinnest known material with zero bandgaps and is incredibly strong, almost 200 times stronger than steel. Moreover, graphene is a good conductor of heat and electricity with very interesting light absorption properties."

Interestingly, the first sentence of the response actually occurs directly after the latter part of the response in the original text.

Screenshot from the document: https://i.imgur.com/5vsVm5g.png.

Edit: asking it "What absorbs infrared light and converts it into electrical signals?" yields "Graphene sheets are highly transparent presenting high optical transparency, which absorbs thermal radiations with high efficacy and converts it into electrical signals efficiently." verbatim.


Fair point, but I also think something that's /really/ clear is that LLMs don't understand (and probably cannot). It's doing highly contextual text retrieval based on natural language processing for the query, it's not understanding what the paper means and producing insights.


Honestly I think testing these on fiction books would be more impressive. The graphene thing I'm sure shows up in some research papers.


Gemini works with brand new books too; I've seen multiple demonstrations of it. I'll try hunting one down. Side note: this experiment is still insightful even using model training material. Just compare its performance with the uploaded book(s) to without.


I would hope that Byung-Chul Han would not be in the training set (at least not without his permission), given he's still alive and not only is the legal question still open but it's also definitely rude.

This doesn't mean you're wrong, though.


It's pretty easy to confirm that copyrighted material is in the training data. See the NYT lawsuit against OpenAI for example.


Part of that back-and-forth is the claim "this specific text was copied a lot all over the internet making it show up more in the output", and that means it's not a useful guide to things where one copy was added to The Pile and not removed when training the model.

(Or worse, that Google already had a copy because of Google Books and didn't think "might training on this explode in our face like that thing with the Street View WiFi scanning?")


Just put the 2,500-limerick example linked in the article through Gemini 1.5 Flash and it answered correctly ("The tree has diseased leaves and its bark is peeling.") https://aistudio.google.com/


Interesting!


Wow. Cool. I have access to that model and have also seen some impressive context extraction. It also gave a really good summary of a large code base that I dumped in. I saw somebody analyze a huge log file, but we really need something like this needle in a needlestack to help identify when models might be missing something. At the very least, this could give model developers something to analyze their proposed models.


Funnily enough, I ran a 980k-token log dump against Gemini Pro 1.5 yesterday to investigate an error scenario. It found a single incident of a 429 error being returned by a third-party API provider, reasoning that "based on the file provided and the information that this log file is aggregated of all instances of the service in question, it seems unlikely that a rate limit would be triggered, and additional investigation may be appropriate". It turned out the service had implemented a block against AWS IPs, breaking a system that loads press data from said API provider and leaving the customer who was affected by it without press data. We didn't even notice or investigate that; Gemini just randomly mentioned it without being prompted for it.


That definitely makes it seem like it's noticing a great deal of its context window. Impressive.


Man, we are like 2-5 years away from being able to feed in an ePub and get an accurate graphic novel version in minutes. I am so ready to look at four thousand paintings of Tolkien trees.


If I had access to Gemini with a reasonable token rate limit, I would be happy to test Gemini. I have had good results with it in other situations.


What version of Gemini is built into Google Workspace? (I just got the ability today to ask Gemini anything about emails in my work Gmail account, which seems like something that would require a large context window)


Such tasks don't need a large context window. Just good RAG.


Someone needs to come up with a "synthesis from haystack" test that tests not just retrieval but depth of understanding, connections, abstractions across diverse information.

When a person reads a book, they have an "overall intuition" about it. We need some way to quantify this. Needle in haystack tests feel like a simple test that doesn't go far enough.


An elaborate Agatha Christie style whodunit, with a series of plot-twists and alibis which can be chopped off the end of the piece to modify who is the most likely suspect


Or a spot the difference.

Generate 1000 generic facts about Alice and the same 1000 facts about Eve. Randomise the order and change one minor detail then ask how they differ.
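A rough sketch of generating that (the fact template and the single changed detail here are made up; in practice you'd want more varied facts):

    # Eve's facts are Alice's with the name swapped; exactly one detail is then
    # changed, and both lists are shuffled so position gives nothing away.
    import random

    alice = [f"Alice owns {n} blue marbles and {n + 1} green hats." for n in range(1, 1001)]
    eve = [fact.replace("Alice", "Eve") for fact in alice]
    i = random.randrange(len(eve))
    eve[i] = eve[i].replace("blue", "red")  # the one minor difference

    random.shuffle(alice)
    random.shuffle(eve)

    prompt = (
        "Facts about Alice:\n" + "\n".join(alice)
        + "\n\nFacts about Eve:\n" + "\n".join(eve)
        + "\n\nApart from the names, how do the two sets of facts differ?"
    )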


That seems to go back in the direction of needle in the haystack again


    sort alice.txt | diff - <(sort eve.txt)
That's not a task for an LLM


Asking students to write an essay about Napoleon isn't something we do because we need essays about Napoleon - the point is it's a test of capabilities.


My point was more that this task is so trivial that it's not testing the model's ability to distinguish contextual nuances, which would supposedly be the intention.

The idea presented elsewhere in this thread about using an unpublished novel and then asking questions about the plot is sort of the ideal test in this regard, and clearly on the other end of the spectrum in terms of a design that's testing actual "understanding".


I see you are being downvoted, but I agree with you.

A useful test would copy all Alice statements to Eve statements, then rewrite all of the Eve statements using synonyms, and then finally change one or two details for Eve.


The needles form a graph and the prompt asks graph based tasks.


That is an interesting idea


My idea is to buy an unpublished novel or screenplay with a detailed, internally consistent world built into it and a cast of characters that have well-crafted motivations, and then ask it to continue writing from an arbitrary post-midpoint by creating a new plot line that combines two characters who haven't yet met in the story. If it understands the context, it should be able to write a new part of the story and will be able to use a reader's intuitive sense of the characters' motivations to move through their arc.

This whole thing would have to be kept under lock-and-key in order to be useful, so it would only serve as a kind of personal benchmark. Or it could possibly be a prestige award that is valued for its conclusions and not for its ability to use the methodology to create improvements in the field.


Just use memes. People generate new high-quality niche memes so fast it's impossible for the LLMs to keep up.


You can only use it for a short while, they get a copy as well.


I have been thinking about this for use in evaluating locally run models, so I didn't make that connection in this case. I guess it would have limited utility.


I was thinking about something similar -- to make part of the question be sufficient information that the LLM can find the limerick. Then the 2nd part would ask something that would require a deeper understanding of the limerick (or other text).


There is no understanding, it can't do this.

GPT4o still can't do the intersection of two different ideas that are not in the training set. It can't even produce random variations on the intersection of two different ideas.

Further, though, we shouldn't expect the model to do this. It's not fair to the model, given its actual usefulness and how amazing what these models can do with zero understanding is. To believe the model understands is to fool yourself.


I wonder if there is some way to have an AI help humans improve their "reading comprehension" aka reasoning across a large body of text. As far as I can tell the only way to do this is to cut out mindless scrolling and force yourself to read a lot of books in the hopes that this skill might be improved.

I am many years out of my grade school years where I was required to read a multitude of novels every year and I guess years of mindless reddit scrolling + focusing on nothing but mathematics and the sciences in college have taken their toll: I read long articles or books but completely miss the deeper meaning.

As an example: my nerd like obsession with random topics of the decade before I was born (until I get bored) caused me to read numerous articles and all of Wikipedia + sources on the RBMK reactors and Chernobyl nuclear accident as well as the stories of the people involved.

But it wasn't until I sat down and watched that famous HBO miniseries that I finally connected the dots of how the lies and secretive nature of the Soviet system led to the design flaws in the reactor, and how the subsequent suicide of Valery Legasov helped finally expose them to the world where they could no longer be hidden.

It's like I knew of all these events and people separately but could not connect them together to form a deep realization, and when I saw it acted out on screen it all finally hit me like a ton of bricks. How had I not seen it?

Hoping one day AI can just scan my existing brain structure and recommend activities to change the neuronal makeup to what I want it to be. Or even better, since I'm a lazy developer, it should just do it for me.


I've been thinking about that as well.

It's hard, but if you have a piece of fiction or non-fiction it hasn't seen before, then a deep reading comprehension question can be a good indicator. But you need to be able to separate a true answer from BS.

"What does this work says about our culture? Support your answer with direct quotes."

I found both GPT-4 and Haiku to do alright at this, but they sometimes give answers that imply fixating on certain sections of a 20,000-token context. You could compare it against chunking the text, getting the answer for each chunk and combining them.
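A sketch of that chunk-and-combine approach (assuming the OpenAI Python client; the chunk size and prompt wording are arbitrary choices):

    # Map-reduce style: ask the question of each chunk separately, then have the
    # model combine the per-chunk answers into a single reading.
    from openai import OpenAI

    client = OpenAI()
    QUESTION = ("What does this work say about our culture? "
                "Support your answer with direct quotes.")

    def ask(prompt):
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def chunked_answer(work, chunk_chars=8000):
        chunks = [work[i:i + chunk_chars] for i in range(0, len(work), chunk_chars)]
        partials = [ask(c + "\n\n" + QUESTION) for c in chunks]
        return ask("Combine these partial readings into one answer to the "
                   "question '" + QUESTION + "':\n\n" + "\n\n".join(partials))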

I suspect that if you do that, the chunking would win for things that are found in many chunks, like when the work is heavy-handed on a theme, but the large context would be better for a subtler message, except sometimes it would miss it altogether and think a Fight Club screenplay was a dark comedy.

Interpretation is hard I guess.


Why can't you be that someone?


lol, made me think of the aphorism: be the change you want to see.


Well, I can now use GPT to transform raw dynamic data into beautiful HTML layouts on the fly for low-traffic pages, such as change/audit logs, saving a ton of development time and keeping my HTML updated even when the data structure has changed. My last attempt did not consistently work because GPT4-Turbo sometimes ignored the context and instructions almost entirely.


Do you have an example of this? I would love to learn more.


Here is the entire prompt. I used rules to ensure the formatting is consistent, as otherwise it might sometimes format the date one way and other times in an entirely different way.

Imagine a truly dynamic and super-personal site, where layout, navigation, styling and everything else gets generated on the fly from the user's usage behavior and other preferences. Man!

    {JSON}
    ------
    You are an auditing assistant. Your job is to convert the ENTIRE JSON
    containing "Order Change History" into a human-readable Markdown format.
    Make sure to follow the rules given below by letter and spirit.
    PLEASE CONVERT THE ENTIRE JSON, regardless of how long it is.
    ---------------------------------------------
    RULES:
    - Provide markdown for the entire JSON.
    - Present changes in a table, grouped by date and time and the user,
      i.e., 2023/12/11 12:40 pm - User Name.
    - Hide seconds from the date and time and format using the 12-hour clock.
    - Do not use any currency symbols.
    - Format numbers using 1000 separator.
    - Do not provide any explanation, either before or after the content.
    - Do not show any currency amount if it is zero.
    - Do not show IDs.
    - Order by date and time, from newest to oldest.
    - Separate each change with a horizontal line.


I guess you just need to offer a template in the prompt? Then maybe some validation after.


No templates, just some rules and the model does the rest. It worked like a charm, even gave me ideas on how to layout and format the page to make it easy to read.


The article shows how much better GPT-4o is at paying attention across its input window compared to GPT-4 Turbo and Claude-3 Sonnet.

We've needed an upgrade to needle in a haystack for a while and this "Needle In A Needlestack" is a good next step! NIAN creates a prompt that includes thousands of limericks and the prompt asks a question about one limerick at a specific location.
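Conceptually, building one of those prompts is simple; a rough sketch (not the project's actual code; the limerick list, question, and depth are placeholders):

    # Thousands of distractor limericks with the "needle" limerick inserted at a
    # chosen depth, followed by a question that only that limerick answers.
    def build_prompt(limericks, needle, question, depth=0.5):
        haystack = list(limericks)                 # distractor limericks
        haystack.insert(int(len(haystack) * depth), needle)
        return (
            "\n\n".join(haystack)
            + "\n\nAnswer the following question using only the limericks above.\n"
            + "Question: " + question
        )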


I agree. I paid for Claude for a while. Even though they swear the context is huge (and having a huge context uses up tokens like crack), it's near useless when source code in the context is just a few pages back. It was so frustrating, as everything else was as good as anything and I liked the 'vibe'.

I used 4o last night and it was still perfectly aware of a C++ class I pasted 20 questions ago. I don't care about smart, I care about useful and this really contributes to the utility.


Increasingly convinced that nobody on the public internet knows how to do actual LLM evaluations.


I'm just glad that we are finally past the "Who was the 29th president of the United States" and "Draw something in the style of Van Gogh" LLM evaluation test everyone did in 2022-2023.


You need to know that this test set data wasn't included in the training data for this to be meaningful.


If you ask the questions without providing the limerick first, it never gets the right answer. When the LLM gets the wrong answer, it is usually because it reverts to its training data and gives a generic answer that doesn't apply to the limerick.


Why are you ruling out the possibility that training on the material may confer an advantage when the data is presented, even if the advantage may not be strong enough to pass the test without the data present in the context window?


No you don't. Compare the model's performance before and after uploading the material.


Previous answer to this question:

https://news.ycombinator.com/item?id=40361419s


No such item.


I thought the test limericks were autogenerated?


They come from a database of 98k limericks -- https://zenodo.org/records/5722527


That's great to hear. My biggest issue with GPT-4 was that as the conversation got longer, the quality diminished (especially relevant for coding projects).

I wonder if it'll be better now. Will test today.


That’s been my experience so far. My current conversations are crazy long compared to any of my GPT-4 convos, which I frequently had to copy context from and start over in a new chat.


I had the same experience. With a 16k prompt, Turbo was nearly flawless. But it wasn't very good at 32k and not usable at 100+. You have to repeat information to get good results with longer prompts


How do we know that GPT-4o has not been trained on this dataset?


Previous answer to this question:

https://news.ycombinator.com/item?id=40361419


This is a very promising development. It would be wise for everyone to go back and revise old experiments that failed now that this capability is unlocked. It should also make RAG even more powerful now that you can load a lot more information into the context and have it be useful.


Agreed


LLMs are still toys, no one should treat them seriously. Apparently, the bubble is too massive now.


We have businesses getting real value from these toys. Maybe you have not been in the right circles to experience this?


Of course you can get value from toy business, but toys are toys.


Used toys to write a working machine vision project over last 2 days.

Key word: working

The bubble is real on both sides. Models have limitations... However, they are not toys. They are powerful tools. I used 3 different SotA models for that project. The time saved is hard to even measure. It's big.


> The time saved is hard to even measure. It's big.

You are aware that this is an obvious contradiction, right? Big times savings are not hard to measure.


Right... With precision...

Furthermore... big mountains are easier to weigh v small individual atoms? I think it's a little more complicated than big is easy to measure...

I care little about the precision... I've got other priorities. It's the same as the time the internet saves me... Big. It's obvious.

I stand by my statement. It's hard to measure...


Must be a pretty cool toy; it constantly 10X’s my productivity.


You said it mate. I feel bad for folks who turn away from this technology. If they persist... They will be so confused why they get repeatedly lapped.

I wrote a working machine vision project in 2 days with these toys. Key word: working... Not hallucinated. Actually working. Very useful.


My daughter berated me for using AI (the sentiment among youth is pretty negative, and it is easy to understand why), but I simply responded, "if I don't, my peers still will, and then we'll be living on the street." And it's true, I've 10x'd my real productivity as a scientist (for example, using LLMs to help me code one-off scripts for data munging, automating our new preprocessing pipelines, and quickly generating bullet points for slides).

The trick though is learning how to prompt, and developing the sense that the LLM is stuck with the current prompt and needs another perspective. Funnily enough, the least amount of luck I've had is getting the LLM to write precisely enough for science (yay, I still have a job); even without the confabulation, the nuance is lacking, such that it's almost always faster for me to write it myself.


> My daughter berated me for using AI (the sentiment among youth is pretty negative, and it is easy to understand why)

I can’t relate. Currently in university. Everyone is thankful to God ChatGPT exists. I’d think it must be a joke, or your daughter somehow managed to live in a social circle which hasn’t yet adopted chatbots for school purposes.


I just don't understand why AI is so polarising on a technology website.

OpenAI have even added a feature to make the completions from GPT near-deterministic (by specifying a seed). It seems that no matter what AI companies do, there will be a vocal minority shouting that it's worthless.


It is baffling where the polarization comes in.

The idea that we argue about safety... Seems reasonable to me.

The argument about its usefulness or capability at all? I dunno... That slider bar sure is in a weird spot... I feel ya.


Without details that's a meaningless stat. I remember some PyTorch machine vision tutorials promising they'd only take about an hour, including training, and also give you a working project at the end.


It's staggering to me that people on Hacker News are actually downvoting people saying how AI is boosting productivity or levering business or engineering or finance. The denial, cynicism and sheer wilful ignorance is actually depressing. I get that not everyone is working directly with AI/ML but I honestly expected better on a website about technology.

People are deliberately self selecting themselves out of the next industrial revolution. It's Darwin Awards for SWE careers. It's making me ranty.


We are all so majorly f*d.

The general public does not know nor understand this limitation. At the same time, OpenAI is selling this as a tutor for your kids. Next it will be used to test those same kids.

Who is going to prevent this from being used to pick military targets (EU law has an exemption for military of course) or make surgery decisions?


This is just doomerism. Even though this model is slightly better than the previous, using an LLM for high risk tasks like healthcare and picking targets in military operations still feels very far away. I work in healthcare tech in a European country and yes we use AI for image recognition on x-rays, retinas etc but these are fundamentally completely different models than a LLM.

Using LLMs for picking military targets is just absurd. In the future, someone might use some other variation of AI for this but LLMs are not very effective on this.


AI is already being used for picking targets in warzones - https://theconversation.com/israel-accused-of-using-ai-to-ta....

LLMs will of course also be used, due to their convenience and superficial 'intelligence', and because of the layer of deniability that putting a technical substrate between soldier and civilian victim provides - as has happened for two decades with drones.


Why? There are many other types of AI or statistical methods that are easier, faster and cheaper to use not to mention better suited and far more accurate. Militaries have been employing statisticians since WWII to pick targets (and for all kinds of other things) this is just current-thing x2 so it’s being used to whip people into a frenzy.


Because you can charge up a lot more when adding hot and hyped features like LLMs instead of doing good engineering.


I don’t know for sure but I imagine getting blackballed by the defence department is not fun.


It can do limited battlefield reasoning where a remote pilot has significant latency.

Call these LLMs stupid all you want but on focused tasks they can reason decently enough. And better than any past tech.


> “Call these LLMs stupid all you want but…”

Make defensive comments in response to LLM skepticism all you want— there are still precisely zero (0) reasons to believe they’ll make a quantum leap towards human-level reasoning any time soon.

The fact that they’re much better than any previous tech is irrelevant when they’re still so obviously far from competent in so many important ways.

To allow your technological optimism to convince you that this very simple and very big challenge is somehow trivial and that progress will inevitably continue apace is to engage in the very drollest form of kidding yourself.

Pre-space travel, you could’ve climbed the tallest mountain on earth and have truthfully claimed that you were closer to the moon than any previous human, but that doesn’t change the fact that the best way to actually get to the moon is to climb down from the mountain and start building a rocket.


What? Why are you ranting like this -- I just said it can do enough limited reasoning to change the drone game...

why are you... kinda going off like this?

On top of that, why do you follow me around over the course of... months making these comments? It's really extreme.


:eyeroll:


Dude you are literally obsessively stalking me in your comments... what is the deal honestly... why?


That seems like something a special purpose model would be a lot better and faster at. Why use something that needs text as input and output? It would be slow and unreliable. If you need reaction time dependent decisions like collision avoidance or evasion for example then you can literally hard wire those in circuits that are faster than any other option.


Yo, this wouldn't make flying decisions; this would evaluate battlefield situations for meta decisions like acceptable losses etc. The rest of course would be too slow.


Note that the IDF explicitly denied that story:

https://www.idf.il/en/mini-sites/hamas-israel-war-24/all-art...

Probably this is due to confusion over what the term "AI" means. If you do some queries on a database, and call yourself a "data scientist", and other people who call themselves data scientists do some AI, does that mean you're doing AI? For left wing journalists who want to undermine the Israelis (the story originally appeared in the Guardian) it'd be easy to hear what you want to hear from your sources and conflate using data with using AI. This is the kind of blurring that happens all the time with apparently technical terms once they leave the tech world and especially once they enter journalism.


The "independent examinations" is doing a heavy lift there.

At most charitable, that means a person is reviewing all data points before approval.

At least charitable, that means a person is clicking approved after glancing at the values generated by the system.

The press release doesn't help clarify that one way or the other.

If you want to read thoughts by the guy who was in charge of building and operating the automated intelligence system, he wrote a book: https://www.amazon.com/Human-Machine-Team-Artificial-Intelli...


The IDF explicitly deny a lot of things, which turn out to be true.


Yeah, but the Guardian explicitly state a lot of things which turn out to be not true also.

Given that the underlying premise of the story is bizarre (is the IDF really so short of manpower that they can't select their own targets), and given that the sort of people who work at the Guardian openly loathe Israel, it makes more sense that the story is being misreported.


> The underlying premise of the story is bizarre (is the IDF really so short of manpower that they can't select their own targets)

The premise that the IDF would use some form of automated information processing to help select potential targets, in the year 2023?

There's nothing at all unrealistic about this premise, of course. If anything it's rather bizarre to suggest that it might be.

> The sort of people who work at the Guardian openly loathe Israel

This sounds like you just don't have much to say about the substantive claims of these reports (which began with research by two Israeli publications, +972 and the Local Call, and were then taken further by The Guardian). Or would you say that the former two "openly loathe Israel" also? Along with the Israeli sources that they're quoting?


More likely, the IDF is committing a genocide and are finding innovative ways to create a large list of targets which grants them plausible deniability.


Just like… (checks notes)… oh yeah every government on the planet.


[flagged]


It is more sinister, it is apologia for genocide.


> Probably this is due to confusion over what the term "AI" means.

AI is how it is marketed to the buyers. Either way, the system isn't a database or simple statistics. https://www.accessnow.org/publication/artificial-genocidal-i...

Ex, autonomous weapons like "smart shooter" employed in Hebron and Bethlehem: https://www.hrw.org/news/2023/06/06/palestinian-forum-highli...


[flagged]


> a sophisticated approach to their defense

A euphemism for apartheid and oppression?

> sources are rife with bias

What's biased about terming autonomous weapons as "AI"? Or, sounding alarm over dystopian surveillance enabled by AI?

> The nuance matters.

Like Ben Gurion terming Lehi "freedom fighters" as terrorists? And American Jewish intellectuals back then calling them fascists?

> The history matters... After that, articles such as the ones you posted really read quite differently than most might expect.

https://www.wetheblacksheep.com/p/i-changed-my-mind-on-zioni...


I will spend no more than two comments on this issue.

Most people have already made up their minds. There is little I can do about that, but perhaps someone else might see this and think twice.

Personally, I have spent many thousands of hours on this topic. I have Palestinian relatives and have visited the Middle East. I have Arab friends there, both Christian and Muslim, whom I would gladly protect with my life. I am neither Jewish nor Israeli.

There are countless reasons for me to support your side of this issue. However, I have not done so for a simple reason: I strive to remain fiercely objective.

As a final note, in my youth, I held views similar to the ones you propagate. This was for a simple reason—I had not taken the time to understand the complexities of the Middle East. Even now, I cannot claim to fully comprehend them. However, over time, one realizes that while every story has two sides, the context is crucial. The contextual depth required to grasp the regrettable necessity of Israeli actions in their neighborhood can take years or even decades of study to reconcile. I expect to change few minds on this topic. Ultimately, it is up to the voters to decide. There is overwhelming bipartisan support for Israel in one of the world's most divided congresses, and this support stems more from shared values than from arms sales.

I stand by my original comment. As I said, this will be my last on this topic. I hope this exchange proves useful to some.


The total of these two comments makes no objective claims; rather it says there are nuances and complexities. But in all this complexity they are sure that Israel is right in its actions. Bipartisan support is based on shared values, supposedly. Not so surprisingly, it even has an "I have <insert race/group> friends" paragraph.

I got to say this is a pretty masterful deceit.


> I strive to remain fiercely objective.

Commendable. You'll appreciate this Israeli historian: https://www.youtube.com/watch?v=xj_HKw-UlUk (summary: https://archive.is/dOP7g). And this Israeli Prof, also an expert on Holocaust studies, being fiercely objective: https://www.mekomit.co.il/ps/134005/ (en: https://archive.is/Fjj6f)

> I had not taken the time to understand the complexities of the Middle East. Even now, I cannot claim to fully comprehend them.

Why even spend 2 comments?

> The contextual depth required to grasp the regrettable necessity of Israeli actions...

The same level of depth as Supremacists who regrettably exterminated non-Aryans?

> There is overwhelming bipartisan support for Israel in one of the world's most divided congresses, and this support stems more from shared values.

This is undeniable, but the underlying "shared values" here are not the ones you'd like us to think: https://www.bostonreview.net/articles/instruments-of-dehuman...

> I stand by my original comment.

Like you say, there's the entire might of the US political and elite class behind you; it isn't some act of courage or rebellion, fwiw.

> As a final note, in my youth, I held views similar to the ones you propagate.

Propagate? Your final note sounds like a threat.


The nuance of Ben-Gvir presumably ...


It’s absurd but LLMs for military targets is absolutely something that some companies are trying to sell regardless of the many known failure modes.

https://www.bloomberg.com/news/newsletters/2023-07-05/the-us...

https://youtu.be/XEM5qz__HOU


I also work in healthtech, and nearly every vendor we’ve evaluated in the last 12 months has tacked on ChatGPT onto their feature set as an “AI” improvement. Some of the newer startup vendors are entirely prompt engineering with a fancy UI. We’ve passed on most of these but not all. And these companies have clients, real world case studies. It’s not just not very far away, it is actively here.


>Using LLMs for picking military targets is just absurd. In the future

I guess the future is now then: https://www.theguardian.com/world/2023/dec/01/the-gospel-how...

Excerpt:

>Aviv Kochavi, who served as the head of the IDF until January, has said the target division is “powered by AI capabilities” and includes hundreds of officers and soldiers.

>In an interview published before the war, he said it was “a machine that produces vast amounts of data more effectively than any human, and translates it into targets for attack”.

>According to Kochavi, “once this machine was activated” in Israel’s 11-day war with Hamas in May 2021 it generated 100 targets a day. “To put that into perspective, in the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.”


nothing in this says they used an LLM


But it does say that some sort of text processing AI system is being used right now to decide who to kill, it is therefore quite hard to argue that LLMs specifically could never be used for it.

It is rather implausible to say that an LLM will never be used for this application, because in the current hype environment the only reason the LLM is not deployed to production is that someone actually tried to use it first.


I guess he must have hallucinated that it was about LLMs


>Using LLMs for picking military targets is just absurd

You'd be surprised.

Not to mention it's also used for military and intelligence "analysis".

>using an LLM for high risk tasks like healthcare and picking targets in military operations still feels very far away

When has immaturity and unfitness for purpose ever stopped companies from selling crap?


> picking targets in military operations

I'm 100% on the side of Israel having the right to defend itself, but as I understand it, they are already using "AI" to pick targets, and they adjust the threshold each day to meet quotas. I have no doubt that some day they'll run somebody's messages through chat gpt or similar and get the order: kill/do not kill.


'Quotas each day to find targets to kill'.

That's a brilliant and sustainable strategy. /s


I use ChatGPT in particular to narrow down options when I do research, and it is very good at this. It wouldn't be far-fetched to feed it a map and traffic patterns and ask it to do some analysis of "what is the likeliest place to hit"? And then take it from there.


I don't know about European healthcare, but in the US there is this huge mess of unstructured-text EMR and a lot of hope that LLMs can help 1) make it easier for doctors to enter data, 2) make some sense out of the giant blobs of noisy text.

People are trying to sell this right now. Maybe it won't work and will just create more problems, errors, and work for medical professionals, but when did that ever stop hospital administrators from buying some shiny new technology without asking anyone?


I hear these complaints and can't see how this is worse than the pre-AI situation. How is an AI "hallucination" different from human-generated works that are just plain wrong, or otherwise misleading?

Humans make mistakes all the time. Teachers certainly did back when I was in school. There's no fundamental qualitative difference here. And I don't even see any evidence that there's any difference in degree, either.


"Sorry, computer says no."

Humans can be wrong, but they aren't able to be wrong at as massive of a scale and they often have an override button where you can get them to look at something again.

When you have an AI deployed system and full automation you've got more opportunities for "I dunno, the AI says that you are unqualified for this job and there is no way around that."

We already see this with less novel forms of automation. There are great benefits here, but also the number of times people are just stymied completely by "computer says no" has exploded. Expect that to increase further.


Because people know they make mistakes, and aren’t always 100% certain and are capable of referring you to other people. Also because the mistakes LLMs make are entirely unlike mistakes humans make. Humans don’t generate fake URLs citing entirely fake references. Humans don’t apologize when corrected and then re-assert the same mistake. Also because we know that people aren’t perfect and we don’t expect them to be infallible, humans can break out of their script and work around the process that’s been encoded in their computers.

But most people do expect computers to be infallible, and the marketing hype for LLMs is that they are going to replace all human intellectual labor. Huge numbers of people actually believe that. And if you could convince an LLM it was wrong (you can’t, not reliably), it has no way around the system it’s baked into.

All of these things are really really dangerous, and just blithely dismissing it as “humans make mistakes, too, lol” is really naive. Humans can decide not to drop a bomb or shoot a gun if they see that their target isn’t what they expect. AIs never will.


> Because people know they make mistakes, and aren’t always 100% certain and are capable of referring you to other people. Also because the mistakes LLMs make are entirely unlike mistakes humans make. Humans don’t generate fake URLs citing entirely fake references. Humans don’t apologize when corrected and then re-assert the same mistake.

Pretty much every element of the above statements is false. Heck, either your response to me, or this reply, seem to be examples showing that the first one is wrong.


Society has spent literal decades being convinced to put their trust in everything computers do. We're now at the point that, in general, that trust is there and isn't misplaced.

However, now that computers can plausibly do certain tasks that they couldn't before via LLMs, society has to learn that this is an area of computing that can't be trusted. That might be easy for more advanced users who already don't trust what corporations are doing with technology[0], but for most people this is going to be a tall order.

[0] https://i.imgur.com/6wbgy2L.jpeg


Probably the main difference is that humans fail at a smaller scale, with smaller effects, and build a reputation, whereas ChatGPT hallucinations can potentially affect everyone.


Humans know when they've made a mistake. So there's ways to deal with that.

Computers are final. You don't want things to be final when your life's on the line.


> Humans know when they've made a mistake.

You'll never make senior management with that attitude. At worst, "mistakes were made" and look a bit sad.


>There's no fundamental qualitative difference here...degree either.

I've heard the same comparisons made with self-driving cars (i.e. that humans are fallible, and maybe even more error-prone).

But this misses the point. People trust the fallibility they know. That is, we largely understand human failure modes (errors in judgement, lapses in attention, etc) and feel like we are in control of them (and we are).

OTOH, when machines make mistakes, they are experienced as unpredictable and outside of our control. Additionally, our expectation of machines is that they are deterministic and not subject to mistakes. While we know bugs can exist, it's not the expectation. And, with the current generation of AI in particular, we are dealing with models that are generally probabilistic, which means there's not even the expectation that they are errorless.

And, I don't believe it's reasonable to expect people to give up control to AI of this quality, particularly in matters of safety or life and death; really anything that matters.

TLDR; Most people don't want to gamble their lives on a statistic, when the alternative is maintaining control.


Expanding on this, human failures and machine failures are qualitatively different in ways that make our systems generally less resilient against the machine variety, even when dealing with a theoretically near-perfect implementation. Consider a bug in an otherwise perfect self-driving car routine that causes crashes under a highly specific scenario -- roads are essentially static structures, so you've effectively concentrated 100% of crashes into (for example) 1% of corridors. Practically speaking, those corridors would be forced into a state of perpetual closure.

This is all to say that randomly distributed failures are more tolerable than a relatively smaller number of concentrated failures. Human errors are rather nice by comparison because they're inconsistent in locality while still being otherwise predictable in macroscopic terms (e.g.: on any given day, there will always be far more rear-endings than head-on collisions). When it comes to machine networks, all it takes is one firmware update for both the type & locality of their failure modes to go into a wildly different direction.


What you say is true, and I agree, but that is the emotional, human side of thinking. Purely logically, it would make sense to compare the two systems of control and use the one with fewer human casualties. Not saying it's gonna happen, just thinking that reason and logic should take precedence, no matter what side you are on.


It definitely seems like a matter of simple math. But, I'm not 100% sure it's always the most logical choice to defer to statistics.

By definition, stats operate at the macro level. So, for instance, I may be a safer driver than the AI average. Should I give up control? I suppose it's also a matter of degree, and there's the network effect to consider (i.e. even if I individually beat the average, I'm still on the road with others who don't).

So it gets a little more complicated and I'm also not sure the aversion to relinquishing control is strictly "emotional" (as in the irrational sense). There's something about the potential finality of a failure that goes along with autonomy and agency over one's own life. The idea that a machine could make a mistake that ends your life, and you never had a chance or say in that outcome is off-putting in ways that feel more rooted in rationality and survival than in emotion.


>I hear these complaints and can't see how this is worse than the pre-AI situation. How is an AI "hallucination" different from human-generated works that are just plain wrong, or otherwise misleading?

With humans there is a chance you get things right.


> How is an AI "hallucination" different from human-generated works that are just plain wrong, or otherwise misleading?

yikes, mate, you've really misunderstood what's happening.

when a human fucks up, a human has fucked up. you can appeal to them, or to their boss, or to their CEO.

the way these crappy "AI" systems are being deployed, there is no one to appeal to and no process for unfucking things.

yes, this is not exactly caused by AI, it's caused by sociopaths operating businesses and governments, but the extent to which this enabled them and their terrible disdain for the world is horrifying.

this is already happening, of course - Cathy O'Neil wrote "Weapons Of Math Destruction" in 2016, about how unreviewable software systems were screwing people, from denying poor people loans to harsher sentencing for minority groups, but Sam Altman and the new generation of AI grifters now want this to apply to everything.


> or make surgery decisions?

  Analyzing surgical field...
  Identified: open chest cavity, exposed internal organs
  Organs appear gooey, gelatinous, translucent pink
  Comparing to database of aquatic lifeforms...
  93% visual match found:
  Psychrolutes marcidus, common name "blobfish"
  Conclusion: Blobfish discovered inhabiting patient's thoracic cavity
  Recommended action: Attempt to safely extract blobfish without damaging organs


I was in a counter-intelligence unit briefly and there was a mathematician who spoke to us about the work they were doing to pick targets, with the idea that if you can only take out one person, who would be the most disruptive? You have all these interconnected but mostly isolated terrorist cells that don't know about each other except through a few people who may not be high up in the command structure but who are critical for the continuing cohesive existence of that group of cells, its logistics, etc.

So the military was already using math to pick targets; this is just the next logical, albeit scary as hell, step.


In your scenario there were still individuals accountable for the decisions and their outcomes.

How are you supposed to say why a machine learning model produces different outputs from the same input? It's just a black box.


It is being used to pick military targets, with very little oversight.

https://www.972mag.com/lavender-ai-israeli-army-gaza/


If any regulator acts it will be the EU. The action, if it comes, will of course be very late, possibly years from now, when the horse has long left the stable.


My only hope for the EU government is that they put an AI in charge and it accidentally becomes sentient...


Israel is already doing exactly that... using AI to identify potential targets based on their network of connections, giving these potential targets a cursory human screening, then OK-ing the bombing of their entire family since they have put such faith (and/or just don't care) in this identification process that these are considered high-value targets where "collateral damage" is accepted.


Surgeons don't need a text-based LLM to make decisions. They have a job to do and a dozen years of training in how to do it: 8 years of schooling and 4-6 years of internship and residency. The tech fantasy that everyone is using these for everything is a bubble thought. I agree with another comment, this is doomerism.


Surgeons are using robots that are far beyond fly by wire though, to the point that you could argue they're instructing the robots rather than controlling them.


Why would the military use ChatGPT or depend in any way on OpenAI's policy? Wouldn't they just roll their own?


OpenAI is. Their TOS says don't use it for that kind of shit.

https://openai.com/policies/usage-policies/


That's the license for the public service. Nothing prevents them from selling it as a separate package deal to an army.


Right now insurance companies make those decisions based on how your life affects the profit/loss statement at the end of the quarter. (In the USA).

So it can't really be worse if there's just an RNG in a box. It may be better.


I get a good chuckle every morning when the "C3.ai" ad rolls on NPR.

"Hallucination-free," indeed.

Would love to know what actual, contractual guarantees they place around that.


> OpenAI is selling this a a tutor for your kids.

The Diamond Age.


That's what I find most offensive about the use of LLMs in education: it can readily produce something in the shape of a logical argument, without actually being correct.

I'm worried that a generation might learn that that's good enough.


A generation of consultants is already doing that - look at the ruckus around PwC etc. in Australia. Hell, look at the folks supposedly doing due diligence on Enron. This is not new. People lie, fib and prevaricate. The fact that the machines trained on our actions do the same thing should not come as a shock. If anything it strikes me as the uncanny valley of truthiness.



It seems the US and China are trying to reach an agreement on using AI to pick military targets these days.


Next it's going to teach them the Earth is flat and there are aliens behind the moon.


>Who is going to prevent this from being used to pick military targets

When AI is in charge of controlling weapons, you get this: https://www.accessnow.org/publication/artificial-genocidal-i...


While this is clearly a problem and a challenge to address, the thing that never gets mentioned with this line of criticism is the obvious: a large number of real-life teachers make mistakes ALL the time. They harbor wrong or outdated opinions, or they're just flat-out wrong about things.


I’ve had coworkers suggest a technical solution that was straight up fabricated by an LLM and made no sense. More competent people realise this limitation of the models and can use them wisely. Unfortunately I expect to see the former spread.


I've spent a few hours last week crafting a piece of code for my coworker, and then when I asked him to test it in the real environment, it turned out that the API he wanted to connect to the code I gave him was just a hallucination by ChatGPT.


We had a manager join last year who, in their first days, created MRs for existing code bases, wrote documents on new processes and gave advice on current problems we were facing. Everything was created by LLMs and was plain bullshit. Fortunately we were able to convince the higher-ups that this person was an imposter and we got rid of them.

I really hope that these type of situations won't increase because the mental strain that put on some people in the org is not sustainable in the long run.


People aren't dumb. They'll catch on pretty quick that this thing is BS'ing them.


I don't understand OpenAI's pricing strategy. For free I can talk to GPT-3.5 on an unlimited basis, and a little to GPT-4o. If I pay $20 a month, I can talk to GPT-4o eighty times every three hours, or about once every two and a quarter minutes. That's both way more than I need and way less than I would expect for twenty dollars a month. I wish they had a $5 per month tier that included, say, eighty messages per 24 hours.


It'll make more sense when they deploy audio and image capability to paying users only, which they say they're going to do in a few weeks


Yeah, but I want a tier where I have access to it in a pinch, but won't feel guilty for spending the money and then going a whole month without using it.


Guilty? Over $20 a month? I spend more than that in an hour down the pub.


You get something for your money at the pub. If you pay twenty dollars for a service and then don't use it you have wasted your money.


Well, yes, I get more liver damage. I wouldn't worry too much about $20. Give an equal amount extra to charity or bet on something.

There was a time that amount mattered to me but if you're working any kind of dev/ops job it just doesn't any more.


I always thought most needle-in-a-haystack tests might run into the issue of the model just encoding some idea of "out-of-place-ness" or "significance" and querying on that, rather than actually saying anything meaningful about generalized retrieval capabilities. Does that seem right? Is that the motivation for this test?


Are there any prompts/tests about recalling multiple needles (spread out) at once?

For example, each needle could be a piece to a logic puzzle.
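A quick sketch of what a multi-needle prompt generator could look like (everything here is hypothetical and not how NIAN actually builds its prompts; it just spreads several needle limericks through the filler so the model has to attend to multiple distant positions at once):

  import random

  def build_multi_needle_prompt(haystack, needles, question, seed=None):
      # Shuffle the filler limericks, then insert the needles at roughly
      # evenly spaced positions throughout the prompt.
      rng = random.Random(seed)
      docs = list(haystack)
      rng.shuffle(docs)
      step = max(1, len(docs) // (len(needles) + 1))
      for i, needle in enumerate(needles, start=1):
          docs.insert(min(i * step, len(docs)), needle)
      return "\n\n".join(docs) + "\n\nQuestion: " + question

A question that can only be answered by combining all of the needles (e.g. a small logic puzzle) would then test multi-point recall rather than single-point lookup.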


The article compares GPT-4o to Sonnet from Anthropic. I'm wondering how Opus would perform at this test?


Has anyone done any benchmarks for RAG yet?


I am in England; do US users have access to memory features? (Also, do you have access to voice customisation yet?)

Thanks


I am in England, on the 'Team Plan'* and got access to memory this week.

* https://openai.com/index/introducing-chatgpt-team/


Thank you!


Memory features are available in Australia.


Would love to see Gemini there too!


What's the chance that these limericks are now in the training set? As others mention, it'd be interesting to come up with a way to synthesize something sufficiently novel that it always evades training fit.


Previous answer to this question:

https://news.ycombinator.com/item?id=40361419


Your test is a good one but the point still stands that a novel dataset is the next step to being sure.


One could also programmatically modify the dataset (e.g. with nltk or spaCy, replacing nouns, named entities, etc.), even to the point that every test run is unique.

You could also throw in vector similarity if you wanted the replacement words to stay closer to synonyms or antonyms.
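Rough sketch of the noun-replacement idea with spaCy (assumes en_core_web_sm is installed; the substitute vocabulary and helper names are just illustrative, not anything from the NIAN code):

  # Swap every noun for a consistent substitute so the limerick, the question
  # and the expected answer all change together, and the text can't match
  # anything the model memorized from the published dataset.
  import random
  import spacy

  nlp = spacy.load("en_core_web_sm")
  SUBSTITUTES = ["lantern", "walrus", "teacup", "glacier", "trombone", "harpoon"]  # made-up vocab

  def build_mapping(texts, seed=None):
      # One substitute per noun lemma, shared across all related texts.
      rng = random.Random(seed)
      mapping = {}
      for text in texts:
          for token in nlp(text):
              if token.pos_ in ("NOUN", "PROPN") and token.lemma_ not in mapping:
                  mapping[token.lemma_] = rng.choice(SUBSTITUTES)
      return mapping

  def apply_mapping(text, mapping):
      return "".join(
          mapping.get(token.lemma_, token.text) + token.whitespace_
          for token in nlp(text)
      )

  limerick = "A young coder who came from Dundee / taught a parrot to parse JSON for free"
  question = "What did the coder teach the parrot to parse?"
  mapping = build_mapping([limerick, question], seed=42)
  print(apply_mapping(limerick, mapping))
  print(apply_mapping(question, mapping))

For the synonym/antonym variant you'd replace the random choice with a nearest-neighbour lookup over word vectors instead of a fixed list.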


I have had good experience with the Gemini 1M-context model on this kind of task.


>Needle in a Needlestack is a new benchmark to measure how well LLMs pay attention to the information in their context window

I asked GPT-4o for JavaScript code and got Python, so much for attention.


What was your query?


We also need a way to determine where a given response fits in the universe of responses - is it an "average" answer or a really good one?


If you have an evaluation function which does this accurately and generalizes, you pretty much already have AGI.


One could have the LLM route the query to a text search function and have that function report back to the LLM for secondary processing.
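Something like this, as a library-agnostic sketch (call_llm is a placeholder for whatever model client you use, and the keyword scorer stands in for a real BM25 or embedding index):

  def search_limericks(query, corpus, top_k=5):
      # Naive keyword-overlap ranking; a real setup would use BM25 or embeddings.
      words = set(query.lower().split())
      return sorted(corpus, key=lambda doc: -len(words & set(doc.lower().split())))[:top_k]

  def call_llm(prompt):
      # Placeholder: plug in your model client here.
      raise NotImplementedError

  def answer(question, corpus):
      # First hop: narrow thousands of limericks down to a handful of candidates.
      hits = search_limericks(question, corpus)
      # Second hop: let the LLM read only those candidates and answer.
      prompt = ("Answer the question using only the limericks below.\n\n"
                + "\n\n".join(hits)
                + "\n\nQuestion: " + question)
      return call_llm(prompt)

Of course, that sidesteps the long-context problem rather than exercising it, which is the thing the NIAN benchmark is trying to measure.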


It's interesting (though perhaps not surprising) to see the variance in curve shape across models.


I thought Google Gemini had almost perfect needle-in-a-haystack performance inside 1 million tokens?


The reason I made Needle in a Needlestack is that LLMs are getting too good at needle in a haystack. Until GPT-4o, no model was good at the NIAN benchmark.


I wonder how llama3 is doing.


Meh, for a lot of stuff it still simply lies.

Just today it lied to me about VRL language syntax, trying to sell me some Python stuff in there.

Senior people will often be able to call out the bullshit, but I believe for junior people it will be very detrimental.

Nevertheless, it's an amazing tool for day-to-day work if you can call out the BS replies.


These benchmarks are becoming like the top-10 lists you find on the internet. I agree that everything has its place, but frankly how many of us need a test that tells you that this is great at limericks?


I think large language models can be used to classify whether people are lying, saying rehearsed things, or being disingenuous. Simply train them on a lot of audio of people talking, and they would become better than most polygraph machines. There's something about how a person says something that quickly reveals it was rehearsed earlier, or premeditated, and I'm sure there are similar tells when they're lying. The LLM can instantly pick up on these with some probability and classify them.

I've seen claims during the OpenAI demo that their software can now pick up on extremely subtle emotional cues in how people speak. Then it shouldn't take much more to make it read between the lines and understand what people are intending to say, for example by enumerating all possible interpretations and scoring them based on many factors, including the current time, location, etc. In fact, by taking so much context into account, LLMs will be better than people the vast majority of the time at understanding what a person meant, assuming they were genuinely trying to communicate something.

It will become very hard to lie because everyone's personal LLM will pick up on it fairly quickly and find tons of inconsistencies, which it will flag for you later. You will no longer be fooled so easily, and if it has the context of everything the person has said publicly - plus, if the person gives permission, everything they've said privately because you're their business partner or sexual partner - it can easily catch you in many lies and so on.

I predict that in the next 5 to 10 years, human society will completely change as people start to prefer machines to other people, because they understand them so well and take into account the context of everything they've ever said. They will be thoughtful, remembering details about the person along many different dimensions, and use them to personalize everything. By contrast, the most thoughtful husband or boyfriend will seem like a jerk seems now. Or a cat.

Humor and seductive conversation will also be at a superhuman standard. People will obviously up their game too, just like they did in Go after Lee Sedol was totally destroyed by AlphaGo, or when people started using AlphaZero to train for chess. However, once the computers understand what triggers people to laugh or have a sexual response, they will be able to trigger them a lot more predictably; they simply need more training data.

And bullshitting will be done on a completely different level. Just like people no longer walk to destinations but use cars to go thousands of miles a year, people won't interact with other people so much anymore. LLMs, trained to bullshit 1000 times better than any human, will be undetectable and will gradually shift public opinion as open-source models power swarms of accounts.



