Exactly. That's what I also thought the last time someone brought this up in these forums (couple of years ago). It's a hint to the structure of the brain and also the mind (we can observe it at the consciousness level in ourselves and others, too). Algorithmic (von Neumann) computers/programs do not usually exhibit this; but neural networks do.
If we are good readers, a book helps us achieve better generalization (in the machine learning sense). That's its main contribution, not the specific facts it contains.
Good question. The "verify" option, where you can view one of our image-color signatures, will only reveal part of the colors (it may be pay-per-reveal; one part will never be revealed, and another part will be chosen at random), so if somebody tries to tamper with the archive, the verification will fail.
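To make the partial-reveal idea concrete, here is a minimal sketch of a commit-and-spot-check scheme. This is not timeevidence's actual implementation; the per-color salted SHA-256 commitments, the function names, and the random-subset check are all my own assumptions about how such a design could work:

```python
import hashlib
import os
import random

def commit_colors(colors):
    """Commit to each color with a per-element salt; only hashes go public."""
    salts = [os.urandom(16) for _ in colors]
    commitments = [hashlib.sha256(salt + color.encode()).hexdigest()
                   for salt, color in zip(salts, colors)]
    return commitments, salts  # commitments published; colors/salts stay private

def reveal(colors, salts, indices):
    """Reveal only the requested subset of (color, salt) pairs."""
    return {i: (colors[i], salts[i]) for i in indices}

def verify(commitments, revealed):
    """Check each revealed pair against its published commitment."""
    return all(
        hashlib.sha256(salt + color.encode()).hexdigest() == commitments[i]
        for i, (color, salt) in revealed.items()
    )

colors = ["red", "green", "blue", "orange", "violet", "cyan"]
commitments, salts = commit_colors(colors)

# The verifier samples a random subset; the rest stays hidden.
picked = random.sample(range(len(colors)), 3)
assert verify(commitments, reveal(colors, salts, picked))

# A tampered archive fails on any revealed position.
forged = colors[:]
forged[picked[0]] = "black"
assert not verify(commitments, reveal(forged, salts, picked))
```

Because the revealed positions are random and some are never revealed at all, a forger cannot know in advance which colors they can safely alter.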
If someone is prepared and builds a "strategy" (not so common), they could record the webpage stamp in advance, so yes, the timestamp video could in a sense be faked. Because of that, this works reliably for two cases: trusting that something is live; or when the video carries the timeevidence timestamp AND is uploaded to a third-party social network (two timestamps). The timeevidence stamp shows the video is not old, and the social network timestamp ("uploaded today at...") verifies it in the present, leaving no window in which to tamper. This is a common routine for "identity verification", sending "rental conditions", etc.
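The two-timestamp argument boils down to a bound on the video's age. A toy illustration, where the function name and the one-hour tolerance are my own choices, not anything from the service:

```python
from datetime import datetime, timedelta

def plausibly_fresh(service_ts, upload_ts, max_gap=timedelta(hours=1)):
    """Two independent timestamps bound the video's creation time:
    if the timeevidence stamp and the platform upload stamp are close
    together, there was no window in which to fabricate an old recording."""
    return abs(upload_ts - service_ts) <= max_gap

# Stamps half an hour apart: consistent with a freshly made video.
assert plausibly_fresh(datetime(2024, 1, 5, 12, 0), datetime(2024, 1, 5, 12, 30))

# Stamps days apart: plenty of time to tamper, so reject.
assert not plausibly_fresh(datetime(2024, 1, 1), datetime(2024, 1, 5))
```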
Yes, but companies will have no incentive to publish open-source models anymore. Or it could become so difficult/bureaucratic that no one will bother, and they'll keep everything closed source.
Mistral and Falcon are not from megacorps, and not even from the US. And there are many other open-source Chinese models.
And both are base models, which means they are entirely organic and outside of the US.
That’s what they told us. Turns out Google stopped innovating a long time ago. They could say stuff like this when Bard wasn’t out, but now we have Mistral and friends to compare to Llama.
Now it turns out they were just bullshitting at Google.
> Now it turns out they were just bullshitting at Google.
I don't think Google was bullshitting when they wrote, documented and released Tensorflow, BERT and flan-t5 to the public. Their failure to beat OpenAI in a money-pissing competition really doesn't feel like it reflects on their capability (or intentions) as a company. It certainly doesn't feel like they were "bullshitting" anyone.
Everyone told us they had secret tech that they were keeping inside. But then Bard came out and it was like GPT-3. I don’t know man. The proof of the pudding is in the eating.
> The innovation is largely happening within the megacorps anyway
That was the part I was replying to. Whichever megacorp this is, it’s not Google.
Hey, feel free to draw your own conclusions. AI quality is technically a subjective topic. For what it's worth though, Google's open models have benched quite competitively with GPT-3 for multiple years now: https://blog.research.google/2021/10/introducing-flan-more-g...
The flan quantizations are also still pretty close to SOTA for text transformers. Their quality-to-size ratio is much better than a 7b Llama finetune, and it appears to be what Apple based their new autocorrect off of.
Still, one of those corporations wants to capture the market and has a monopolistic attitude. Meta clearly chose the other direction by publishing their models and allowing us all to participate.
Then we'll create a distributed infrastructure for training models: run a program, donate spare GPU cycles, and generate public AI tools that will be made available to all.