Temp 0 means that there will be no randomness injected into the response, and that for any given input you will get the exact same output, assuming the context window is also the same. Part of what makes an LLM more of a "thinking machine" than purely a "calculation machine" is that it will occasionally choose a less-probable next token instead of the statistically most likely one as a way of making the response more "flavorful" (or at least that's my understanding of why), and the likelihood of the response diverging from its most probable outcome is controlled by the temperature.
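Roughly, the mechanism looks like this (a toy sketch of temperature sampling, not any particular model's actual implementation):

```python
import math
import random

def sample_token(logits, temperature):
    """Pick the next token index from raw logits.

    temperature == 0 -> greedy: always the single most likely token,
                        so identical input yields identical output.
    temperature > 0  -> softmax sampling; higher values flatten the
                        distribution, so less-probable tokens get
                        chosen more often.
    """
    if temperature == 0:
        # Deterministic argmax: no randomness at all.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]
print(sample_token(logits, 0))  # always 0: the highest-logit token
```

At temperature 0 the same logits always give the same token; at higher temperatures the tail tokens pick up real probability mass, which is where the "flavor" comes from.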
There's luck involved, but if you listen to or read some of Marc Andreessen's ideas there's no doubt he's highly intelligent and thoughtful. Ben Horowitz even said in his book that he was the smartest person he'd ever met. While he was obviously working on the right thing at the right time, his fortune is not what I would consider blind luck.
Just another run-of-the-mill post-ZIRP bloated org chart cleanup with a nice PR spin. Spend more time innovating on your products; it's a mistake to innovate on company hierarchy, despite what PR departments like to suggest.
What do you think leveraging is? You are using your deposit (ETH) to increase exposure to an asset (more ETH). The process is just more manual compared to a traditional market because crypto.
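The manual loop described above can be sketched numerically (hypothetical numbers; real lending protocols add borrow rates, liquidation thresholds, and slippage):

```python
def looped_leverage(deposit, ltv, rounds):
    """Simulate the manual leverage loop: deposit collateral, borrow
    against it at a loan-to-value ratio, buy more of the asset, and
    redeposit. Total exposure approaches deposit / (1 - ltv)."""
    exposure = deposit
    borrowable = deposit * ltv
    for _ in range(rounds):
        exposure += borrowable   # buy more ETH with the borrowed funds
        borrowable *= ltv        # next round borrows against the new deposit
    return exposure

# 1 ETH deposited, 50% LTV, looped 10 times:
print(round(looped_leverage(1.0, 0.5, 10), 3))  # 1.999, approaching the 2.0x cap
```

Each pass through the loop is a geometric series term, which is why exposure converges to 1/(1 - LTV) times the original deposit instead of growing without bound.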
This is already possible with search engines; there is enough information on the internet that you can substantiate just about any claim regardless of how much evidence there is to the contrary (see flat-earth: plenty of plausible-sounding claims with real, albeit cherry-picked, evidence).
Yes of course, this is already possible with AI writing assistance as well, if you're willing to plug some of the phrases they come up with into a search engine to figure out where they may have come from. But you still have to do the work of stringing the arguments together into a cohesive structure and figuring out how to find research that may be well outside the domains you're familiar with.
But I'm talking about writing a thesis statement, "eating cat boogers makes you live 10 years longer for Science Reasons," and having it string together a completely passable and formally structured argument along with any necessary data to convince enough people to give your cat booger startup revenue to secure the next round, because that seems to be where all these games are headed. The winner is the one who can outrun the truth by hashing together a lighter-weight version of it, and though it won't stand up to a collision with the real thing, you'll be very far from the explosion by the time it happens.
AI criticism is essentially people claiming that having access to something they don't like will end the world. As you say, we already have a good example of this and while it is mostly bad and getting worse it's not world-ending.
"chronic stress aside, that decline in BMR mirrors almost perfectly the curves of PUFA consumption rates in the general population. Namely, as the BMR curve has steadily declined over the last 100 years the PUFA consumption rate curve has steadily moved upwards over time. Unless this trend of ever-increasing PUFA consumption is interrupted, I don’t see the decline of BMR flattening (let alone reversing) any time soon." Eating massive quantities of the highly processed seed oils that are ubiquitous in our food supply because they are much cheaper than "real" food is surely a contributor.
Yeah, the issue is on desktop too. I battled with it for a while but ultimately gave up; it seems to be a notoriously difficult problem to solve with ThreeJS.