By time, they’re talking about the writing style of a specific time period.
Feels like a clickbait title. Of course language model weights encode different writing styles. The fact that you can lift out a vector to stylize writing is more interesting, but that’s also nothing newly discovered here. It should be obvious that this is possible, given that you can prompt ChatGPT to change its writing style.
Besides what the sibling comment said, what's most interesting (imo) is that you can manipulate the vectors like that. The fact that you can average the vectors for January and March, and get better results for February, is pretty surprising to me.
Generalizing vectors in generative models seems like an incredibly useful thing to know about, if you want to use them more effectively. Blew my mind when I saw someone demonstrate doing vector math on a GAN a couple years back to move an "input image" around the space of outputs.
Maybe this could be useful for singling out post-LLM text and generating output that excludes it.
It does work on other reflective actions; the parent is just wrong. In the paper, they specifically run the experiment on a dataset of political affiliation over time.
From the title, I was thinking “of course the neural network of the LLM is a [cause-effect] sequence of words,” and thus time is encoded in each connection.
Twitter doesn’t show replies if you are not logged in, and like others have said, I don’t have an account either. So this link provides the full context; the Twitter link only shows the post and no replies.
Twitter doesn't even show most recent tweets from profiles unless you are logged in now. They show a summary of the profile's activity. Nitter is great if you don't have a Twitter account.
I think I like time. Though spectral, indeterminate, presently a fixture, essential moments last forever but occur daily. Why would any network encode time if it were all just a crystal vase?
Beautiful. Thoughtful. Clever. Wise. In brightness like the face of Odin, in hearing like Moo, in spring and morning most goodly delight. Doing poetic justice to itself. Bringing up crystal vases! Per-bloody-fect.
Sooo… if I’m reading this right, it’s possible to force an AI to extrapolate into the future. As in, it’ll answer as if its training were based on data from future years.
Obviously this isn’t time travel, but more of a zeitgeist extrapolation.
I would expect that if an AI was made to answer like it’s from December 2024 it would talk a lot about the US election but it wouldn’t know who won — just that a “race is on.”
This could have actual utility: predicting trends, fads, new market opportunities, etc…
Kind of. You still need some data from the "future" to extrapolate: in the paper, they take an LLM finetuned on 2015 political affiliation data, add to it the difference between models finetuned on 2020 and 2015 Twitter data, and show that the resulting model performs better when asked about 2020 political affiliation.
So the LLM still needs to learn about 2020 from somewhere. In a way, you teach it the task, then separately you teach it about 2020, and this method combines the two so it can solve the task for 2020.
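To make the arithmetic concrete: a time vector here is just the element-wise difference between finetuned and base weights, and the 2015 -> 2020 shift learned from tweets gets added onto the 2015 task finetune. Below is a minimal Python sketch of that idea, with small random tensors standing in for real state dicts; the names and shapes are purely illustrative, not from the paper's code.

```python
import torch

def task_vector(finetuned, base):
    """Weight-space difference between a finetuned model and its base."""
    return {k: finetuned[k] - base[k] for k in base}

def apply_vector(base, vector, alpha=1.0):
    """Add a (scaled) task vector back onto the base weights."""
    return {k: base[k] + alpha * vector[k] for k in base}

# Stand-in state dicts (random tensors). In practice these would be the base
# LM, LMs finetuned on 2015 and 2020 tweets, and the base LM finetuned on the
# 2015 political-affiliation task.
shape = (4, 4)
base_sd             = {"w": torch.randn(shape)}
lm_2015_sd          = {"w": base_sd["w"] + 0.1 * torch.randn(shape)}
lm_2020_sd          = {"w": base_sd["w"] + 0.1 * torch.randn(shape)}
affiliation_2015_sd = {"w": base_sd["w"] + 0.1 * torch.randn(shape)}

tau_task_2015 = task_vector(affiliation_2015_sd, base_sd)
tau_lm_2015   = task_vector(lm_2015_sd, base_sd)
tau_lm_2020   = task_vector(lm_2020_sd, base_sd)

# Analogy arithmetic: shift the 2015 task vector by the 2015 -> 2020 language
# drift, then add the result back onto the base weights.
tau_task_2020 = {k: tau_task_2015[k] + (tau_lm_2020[k] - tau_lm_2015[k])
                 for k in tau_task_2015}
updated_sd = apply_vector(base_sd, tau_task_2020)
```

Everything is plain addition and subtraction on the weights; the "knowledge of 2020" comes entirely from the 2020 Twitter finetune.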
A vector is a position in a space of some number of dimensions. In 2D space a vector is a point (x, y) like (1, 3) or (-2.5, 7.39). We can also do simple math on vectors, like addition: (1, 3) + (2, -1) = (3, 2).
LLMs treat language as combinations of vectors of very high dimension -- (x, y, z, a, b, c, d, ...). The neat thing is that we can combine these just like the 2D vectors and get meaningful results. If we take the vector for "King", subtract "Man", and add "Woman", we get a vector close to the one for "Queen"!
Once you know this, you can extrapolate and look for ways to categorize groups of vectors and combine them in new ways. As I read it, this research is about finding the weight vectors for text from specific time periods -- e.g. January of 2021 -- and comparing them to the vectors for text from a different period -- e.g. March of 2021. It seems that all the operations are still meaningful; you can even do something like averaging the January and March vectors and getting ones that look like February vectors!
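A toy illustration of that averaging step, with made-up 3-dimensional vectors (real weight vectors have millions of entries, but the arithmetic is the same):

```python
import numpy as np

# Made-up low-dimensional stand-ins for time vectors.
january = np.array([1.0, 3.0, -2.0])
march   = np.array([2.0, -1.0, 0.5])

# Interpolating (averaging) the two gives a vector that, per the paper,
# behaves roughly like one trained on the month in between.
february_estimate = (january + march) / 2
print(february_estimate)  # [ 1.5   1.   -0.75]
```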
Well, I think this could become one of the most underestimated ideas in LLM development.
To be honest, it is a relatively obvious idea to make vectors from time periods and feed them to LLMs, but for some strange reason nobody has done this before, and it looks like it has gone mostly unnoticed in the NN community.
I think a more general way to think about it would be to fine-tune on any data and take the weight difference. For example, if we want to create a geography vector, we would fine-tune on geography data and then subtract the base weights. Now add this to any other model with the same architecture, and you have a geography-capable LLM.
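A rough sketch of that recipe using Hugging Face state dicts; the checkpoint names are hypothetical, and the models must share the same architecture for the element-wise arithmetic to line up:

```python
from transformers import AutoModelForCausalLM

# Hypothetical checkpoints: a shared base model, the same architecture
# finetuned on geography text, and some other finetune of the same base.
base  = AutoModelForCausalLM.from_pretrained("your-org/llm-base")
geo   = AutoModelForCausalLM.from_pretrained("your-org/llm-geography")
other = AutoModelForCausalLM.from_pretrained("your-org/llm-chat")

base_sd, geo_sd, other_sd = base.state_dict(), geo.state_dict(), other.state_dict()

# Geography "vector": finetuned weights minus base weights.
geo_vector = {k: geo_sd[k] - base_sd[k] for k in base_sd}

# Add it onto the other model's weights and load the result back in.
other.load_state_dict({k: other_sd[k] + geo_vector[k] for k in geo_vector})
```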