Hacker News | tyler33's comments

I tried Qwen and it is surprisingly good. Maybe not as good as Claude, but it could replace it.


just like an artificial neural network in training mode :/


Exactly. That's what I also thought the last time someone brought this up on these forums (a couple of years ago). It's a hint about the structure of the brain, and also of the mind (we can observe it at the consciousness level in ourselves and in others, too). Algorithmic (von Neumann) computers/programs do not usually exhibit this, but neural networks do.

If we are good readers, a book helps us achieve better generalization (in the machine learning sense). That's its main contribution, not the specific facts it contains.


Good question. The "verify" option, where you can watch part of our image-colors signature, will only reveal some of the colors (maybe it will be pay-per-reveal, with one part never revealed and another part random), so if somebody tries to tamper with the archive it will not work. If someone is prepared and builds a "strategy" (not so common), they could record the webpage stamp, and so yes, they could kind of fake the timestamp video. Because of that, this works reliably for: trusting that something is live; or when the video has the timeevidence timestamp AND is uploaded to a 3rd-party social network (so 2 timestamps): the timeevidence timestamp shows the video is not old, and the social network timestamp ("uploaded today at...") verifies it in the now, leaving no time to tamper. This is a common routine for "identity verification", sending "rental conditions", etc.


We just have to download the weights and use The Pirate Bay, or even eMule, just like old times.


Yes, but companies will have no incentive to publish open-source models anymore. Or it could become so difficult/bureaucratic that no one will bother, and they will keep them closed source.


What will actually happen is that innovation there will just move somewhere else (and it has partly done so).

This proposal is the US doing that bicycle/stick meme; it will backfire spectacularly.


The innovation is largely happening within the megacorps anyway, this is solely intended to make sure the innovation cannot move somewhere else.


Mistral and Falcon are not from megacorps, and not even from the US, and there are many other open-source Chinese models. Both are base models, which means they grew entirely outside the US.


That’s what they told us. Turns out Google stopped innovating a long time ago. They could say stuff like this when Bard wasn’t out, but now we have Mistral and friends to compare to Llama.

Now it turns out they were just bullshitting at Google.


> Now it turns out they were just bullshitting at Google.

I don't think Google was bullshitting when they wrote, documented and released Tensorflow, BERT and flan-t5 to the public. Their failure to beat OpenAI in a money-pissing competition really doesn't feel like it reflects on their capability (or intentions) as a company. It certainly doesn't feel like they were "bullshitting" anyone.


Everyone told us they had secret tech that they were keeping inside. But then Bard came out and it was like GPT-3. I don’t know man. The proof of the pudding is in the eating.

> The innovation is largely happening within the megacorps anyway

That was the part I was replying to. Whichever megacorp this is, it’s not Google.


Hey, feel free to draw your own conclusions. AI quality is technically a subjective topic. For what it's worth though, Google's open models have benched quite competitively with GPT-3 for multiple years now: https://blog.research.google/2021/10/introducing-flan-more-g...

The flan quantizations are also still pretty close to SOTA for text transformers. Their quality-to-size ratio is much better than a 7b Llama finetune, and it appears to be what Apple based their new autocorrect off of.


Still, one of those corporations wants to capture the market and has a monopolistic attitude. Meta clearly chose the other direction when publishing their models and allowing us all to participate.


Then we'll create a distributed infrastructure for the creation of models. Run some program and donate spare GPU cycles to generate public AI tools that will be made available to all.


I really really would like to believe this could work in practice.

Given the data volumes used during the training phase (TB/s), I highly doubt it's possible without two magnitude-changing breakthroughs at once.


The bad thing about other people's computers is that it can happen to everybody. It is harder to use your own machine, but better long term.


Why don't they make public apps (at least)?


For BlackRock, 24 million is nothing.


You are right; in fact, EARN IT is very good for criminals.


Maybe we need better ad blockers now. Maybe check a hash of the JavaScript files (instead of the domain and name), or maybe even something with AI.
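The hash-matching idea is easy to sketch. A minimal Python sketch, assuming a hypothetical blocklist of SHA-256 digests of known ad/tracker scripts (the digest below is just a demo entry, the hash of an empty file):

```python
import hashlib

# Hypothetical blocklist: SHA-256 digests of known ad/tracker scripts.
# The entry below is the well-known digest of an empty file, used only as a demo.
BLOCKED_SCRIPT_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def should_block(script_bytes: bytes) -> bool:
    """Decide by content hash rather than by domain or filename,
    so renaming or re-hosting the script doesn't evade the filter."""
    digest = hashlib.sha256(script_bytes).hexdigest()
    return digest in BLOCKED_SCRIPT_HASHES
```

The trade-off is obvious: a single changed byte in the script defeats an exact-hash match, which is why real blockers mostly key on URLs and patterns instead.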


What about this patch https://github.com/zhangyoufu/log4j2-without-jndi/blob/maste... which removes JndiLookup.class? It still seems right.
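The same mitigation (rewriting the jar without the vulnerable lookup class) can be sketched in Python; this is a hedged sketch, not the linked patch itself, and the class path is the one named in the Log4Shell advisories:

```python
import zipfile

def strip_jndi_lookup(src_jar: str, dst_jar: str) -> None:
    """Copy a jar entry by entry, dropping JndiLookup.class
    (the class that makes the ${jndi:...} lookup reachable)."""
    with zipfile.ZipFile(src_jar) as src, \
         zipfile.ZipFile(dst_jar, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename.endswith("JndiLookup.class"):
                continue  # skip the vulnerable lookup class
            dst.writestr(item, src.read(item.filename))
```

Upgrading log4j itself is still the real fix; stripping the class was only ever the stopgap.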


That's the temp fix that Tableau pushed.

