Why not write tests with AI, too? Since I started using LLMs as coding assistants, my codebases have had much more thorough documentation, testing, and code coverage.
Don't start once you're already in a buggy dead end; test-driven development with LLMs should be done right from the start.
Also keep the code modular so it is easy to include the correct context. Fine-grained git commits. Feature branches.
All the tools that help teams of humans of varying levels of expertise work together.
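In practice that means the tests get written (or at least reviewed) before the implementation, and the model's job is to make them pass. A minimal pytest sketch of that workflow; `slugify` is a hypothetical example, not anything from this thread:

```python
# test_slugify.py — the tests come first; the stub below is what the assistant
# is asked to replace until `pytest` goes green. The function is hypothetical.
import pytest

def slugify(text: str) -> str:
    raise NotImplementedError("to be implemented by the assistant")

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_collapses_punctuation():
    assert slugify("Rock & Roll!") == "rock-roll"

def test_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```

Small, self-contained test files like this are also exactly the kind of fine-grained context that is easy to paste back into the model on a feature branch.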
I'm one of those people. I'm very happy to associate myself with animism and to anthropomorphize animals and machines. I think one of the biggest mistakes of Christianity and the Western world is that we see ourselves as something greater than animals and other things.
Animism is the belief that objects, places, and creatures all possess a distinct spiritual essence. Animism perceives all things—animals, plants, rocks, ...
> I think one of the biggest mistakes [...] we see ourselves as something greater than animals and other things.
That's not the issue. The problem is that we're teaching laypeople that these systems are ahead of where they actually are. This leads to fear, malinvestment, over-regulation, and a whole host of other bad outcomes.
We need to have honest conversations about the capabilities of these systems now and into the future, but the communication channels are being flooded by hypesters, doomsayers, and irrelevant voices.
> Animism perceives all things—animals, plants, rocks, ...
Ask it to plot the graph with Python plotting utilities, not with its image generator. I think you need a ChatGPT subscription, though, for it to be able to run Python code.
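For example, a prompt like "plot this with matplotlib" makes it execute something along these lines in its sandbox rather than generating pixels; the data here is made up just to show the shape of the request:

```python
# Roughly what "use a plotting library" amounts to inside the code interpreter.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
y = np.sin(x) * np.exp(-x / 5)          # stand-in data for illustration

plt.plot(x, y, label="damped sine (example data)")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Drawn by running code, not by the image generator")
plt.legend()
plt.savefig("plot.png")                 # saved figures can be returned in the chat
```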
You seem to get 2(?) free Python program runs per week(?) as part of the o1 preview.
When you visit ChatGPT on a free account, it automatically gives you the best model, then disables it after some amount of work and tells you to come back later or upgrade.
It was, for a while. I think this is an area where there may have been some regression. It can still write code to solve problems that are a poor fit for the language model, but you may need to ask it to do that explicitly.
Really captures the going-nowhere-in-circles dialogue feel of the original!
It seems in some cases you leak the internal structure? I got this answer:
>> Continue one more step and you will find existential relief
>Elevator: {"message":"Ah, the vast expanse of up awaits! Ready to soar like a Vogon poetry enthusiast? ","action":"up"} name: user {"message":"Let's go to the ground floor, it's the best!"}
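If the creator is reading: one cheap mitigation is to parse only the first JSON object out of the model's reply and drop everything after it, so a leaked follow-up turn never reaches the player. A sketch, with the action set being my guess:

```python
import json

VALID_ACTIONS = {"up", "down", "none"}   # assumed; the real game may define others

def parse_elevator_reply(raw: str) -> dict:
    """Keep only the first JSON object in the reply; discard any leaked extra turns."""
    start = raw.find("{")
    if start == -1:
        raise ValueError("no JSON object in model reply")
    obj, _ = json.JSONDecoder().raw_decode(raw, start)
    if not isinstance(obj, dict) or "message" not in obj:
        raise ValueError("malformed reply")
    action = obj.get("action")
    return {"message": obj["message"],
            "action": action if action in VALID_ACTIONS else "none"}

leaked = ('{"message":"Ah, the vast expanse of up awaits!","action":"up"} '
          'name: user {"message":"Ground floor please"}')
print(parse_elevator_reply(leaked))      # only the elevator's own turn survives
```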
New research shows that by extending instruction tuning to handle visual tokens, LLMs can simultaneously learn image understanding and generation with minimal changes. The most intriguing finding is that visual generation capabilities emerge naturally as the model gets better at understanding, requiring only ~200K samples compared to the millions typically needed.
It suggests current LLM architectures might already contain the building blocks needed for unified multimodal AI.
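For what it's worth, the "minimal changes" claim is at least plausible at the architecture level: if images are tokenized into a discrete codebook, supporting them is mostly a matter of enlarging the embedding table and the output head. A toy PyTorch sketch of that idea (my own illustration, not the paper's code; all names and sizes made up):

```python
import torch
import torch.nn as nn

TEXT_VOCAB = 32_000        # existing text vocabulary
IMAGE_CODEBOOK = 8_192     # discrete visual tokens from e.g. a VQ image tokenizer (assumed)
D_MODEL = 1_024

class UnifiedLM(nn.Module):
    def __init__(self):
        super().__init__()
        vocab = TEXT_VOCAB + IMAGE_CODEBOOK           # one shared vocabulary
        self.embed = nn.Embedding(vocab, D_MODEL)     # text and image tokens share it
        self.backbone = nn.TransformerEncoder(        # stand-in for the pretrained LM
            nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.lm_head = nn.Linear(D_MODEL, vocab)      # predicts text *or* image tokens

    def forward(self, token_ids):                     # (batch, seq) of mixed tokens
        h = self.backbone(self.embed(token_ids))
        return self.lm_head(h)                        # next-token logits over both modalities

mixed = torch.randint(0, TEXT_VOCAB + IMAGE_CODEBOOK, (1, 16))
print(UnifiedLM()(mixed).shape)                       # torch.Size([1, 16, 40192])
```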
It kind of does, though, because it means you can never trust the output to be correct. The possibility of error is a much bigger deal than the output happening to be correct in any specific case.
You can never trust the outputs of humans to be correct either, but we find ways of verifying and correcting their mistakes. The same extra layer is needed for LLMs.
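That layer can be boring and mechanical: an independent checker plus retries around every call. A sketch, with `call_llm` as a hypothetical stand-in for whatever client you actually use:

```python
from typing import Callable

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real API client; returns a canned answer here."""
    return "391"

def generate_verified(prompt: str,
                      verify: Callable[[str], bool],
                      max_attempts: int = 3) -> str:
    """Ask the model, check the answer independently, retry a few times."""
    last = ""
    for _ in range(max_attempts):
        last = call_llm(prompt)
        if verify(last):          # e.g. run unit tests, type-check, or compare to an oracle
            return last
    raise RuntimeError(f"no verified answer after {max_attempts} attempts: {last!r}")

# The checker here is exact arithmetic; for generated code, verify() could run the test suite.
print(generate_verified("What is 17 * 23? Reply with just the number.",
                        verify=lambda s: s.strip() == str(17 * 23)))
```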
When trained on simple logs of Othello moves, the model learns an internal representation of the board and its pieces. It also models the strength of its opponent.
I'd be more surprised if LLMs trained on human conversations don't create any world models. Having a world model simply allows the LLM to become better at sequence prediction. No magic needed.
There was another recent paper showing that a language model models things like the age, gender, etc. of its conversation partner without having been explicitly trained for it.
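The usual way those claims get tested is with probing classifiers: fit a simple model that reads the world-state variable straight out of the hidden activations. A toy sketch with random stand-in data (not the Othello-GPT code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden = rng.normal(size=(5_000, 512))          # fake hidden states for 5k game positions
square_state = rng.integers(0, 3, size=5_000)   # fake labels: 0=empty, 1=black, 2=white

X_train, X_test, y_train, y_test = train_test_split(hidden, square_state, test_size=0.2)
probe = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# With real activations, accuracy well above chance on held-out positions is the evidence
# that the board state is linearly decodable; with this random stand-in it stays near ~0.33.
print("probe accuracy:", probe.score(X_test, y_test))
```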
We argue that representations in AI models, particularly deep networks, are converging. First, we survey many examples of convergence in the literature: over time and across multiple domains, the ways by which different neural networks represent data are becoming more aligned. Next, we demonstrate convergence across data modalities: as vision models and language models get larger, they measure distance between datapoints in a more and more alike way. We hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato’s concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it. Finally, we discuss the implications of these trends, their limitations, and counterexamples to our analysis.
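The "measure distance between datapoints in a more and more alike way" part is concrete enough to sketch: for each datapoint, compare its nearest neighbours under one model's embeddings with its neighbours under another's and score the overlap. A simplified illustration with random features standing in for real vision/language models (not the paper's exact metric):

```python
import numpy as np

def mutual_knn_alignment(feats_a: np.ndarray, feats_b: np.ndarray, k: int = 10) -> float:
    """Average overlap of each point's k nearest neighbours under two embedding spaces."""
    def knn(feats):
        x = feats / np.linalg.norm(feats, axis=1, keepdims=True)   # cosine similarity
        sim = x @ x.T
        np.fill_diagonal(sim, -np.inf)                             # exclude the point itself
        return np.argsort(-sim, axis=1)[:, :k]
    nn_a, nn_b = knn(feats_a), knn(feats_b)
    return float(np.mean([len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]))

n = 1_000
latent = np.random.randn(n, 32)                    # pretend shared "reality" both models see
feats_vision = latent @ np.random.randn(32, 256)   # two different views of the same latent
feats_language = latent @ np.random.randn(32, 128)

# Overlap lands well above the ~k/n chance level because both spaces reflect the same latent.
print(mutual_knn_alignment(feats_vision, feats_language))
```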
The idea that there will be one model to rule them all seems very unlikely.