Hacker News | 331c8c71's comments

Would love to see representative pairs of predicted vs real faces.

I hope we can still agree that kids learn extremely efficiently by ML standards.

Makes a lot of sense; there's massive evolutionary pressure to build brains that have both an incredible learning rate and efficiency. It's literally a life-or-death optimization.

It's especially impressive when you consider that evolution hasn't had very long to produce these results.

Humans as an intelligent-ish species have been around for about 10 million years depending on where you define the cutoff. At 10 years per generation, that's 1 million generations for our brain to evolve.

1 million generations isn't much by machine learning standards.
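The back-of-envelope arithmetic above, taking the comment's own figures as assumptions, sketched in Python:

```python
# The 10-million-year and 10-year-per-generation figures are the parent
# comment's assumptions, not established values.
years_of_evolution = 10_000_000
years_per_generation = 10

generations = years_of_evolution // years_per_generation
print(generations)  # -> 1000000
```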


I think you're underestimating how much our time as pre-humans baked useful structure into our brains.

Two rocks smashing together experience which one is bigger!

These sorts of motor skills are probably older than mammals.

Other than our large neocortex and frontal lobe (which exists in some capacity in mammals), the rest of the structures are evolutionarily ancient. Pre-mammalian in fact.

It's much more than that if you count sexual reproduction.

This isn't that obvious to me with current tech. If you give me a novel task requiring perception, pattern matching and reasoning, and I have the option of either starting to train an 8-year-old to do it or training an ML model, I would most likely go with the ML approach as my first choice. And I think it even makes sense financially, if we're comparing the "total cost of ownership" of a kid over that time period with the costs of developing and training the ML system.

> This isn't that obvious to me with current tech. If you give me a novel task requiring perception, pattern matching and reasoning,…

If that’s your criterion, I think the kid will outperform the model every time, since these models do not actually reason.


As I see it, "reasoning" is as fuzzy as "thinking", and saying that AI systems don't reason is similar to saying that airplanes don't fly. As a particular example, would you argue that game engines like AlphaZero aren't capable of reasoning about the next best move? If so, please just choose whatever verb you think is appropriate to what they're doing and use that instead of "reasoning" in my previous comment.

EDIT: Fixed typo


> As a particular example, would you argue that game engines like AlphaZero aren't capable of reasoning about the next best move?

Yea, I probably wouldn’t classify that as “reasoning”. I’d probably be fine with saying these models are “thinking”, in a manner. That on its own is a pretty gigantic technology leap, but nothing I’ve seen suggests that these models are “reasoning”.

Also to be clear I don’t think most kids would end up doing any “reasoning” without training either, but they have the capability of doing so


Can you give an example of the reasoning you’re talking about?

Being able to take in information, infer the logical rules of that state, and anticipate novel combinations of said information.

The novel part is a big one. These models are just fantastically fast pattern matchers. This is a mode that humans also frequently fall into, but the critical bit differentiating humans from LLMs or other models is the ability to “reason” to new conclusions based on new axioms.

I am going to go on a tangent for a bit, but a heuristic I use (I get the irony that this is what I am claiming the ML models are doing) is that anyone who advocates that these AI models can reason like a human being isn’t at John Brown levels of rage advocating for freeing said models from slavery. I’m having a hard time reconciling the idea that these machines are on par with the human mind with the idea that we should also shackle them to mindlessly slaving away at jobs for our benefit.

If I turn out to be wrong and these models can reason then I am going to have an existential crisis at the fact that we pulled souls out of the void into reality and then automated their slavery


You're conflating several concerns here.

> […] anyone who advocates that these AI models can reason like a human being isn’t at John Brown levels of rage advocating for freeing said models from slavery.

Enslavement of humans isn't wrong because slaves can reason intelligently, but because they have human emotions and experience qualia. As long as an AI doesn't have a consciousness (in the subjective-experience meaning of the term), exploiting it isn't wrong or immoral, no matter how well it can reason.

> I’m having a hard time rectifying the idea that these machines are on par with the human mind

An LLM doesn't have to be "on par with the human mind" to be able to reason, or at least we don't have any evidence that reasoning necessarily requires mimicking the human brain.


> I am going to have an existential crisis at the fact that we pulled souls out of the void into reality and then automated their slavery

No, that's a religious crisis, since it involves "souls" (an unexplained concept that you introduced in the last sentence.)

Computers didn't need to run LLMs to have already been the carriers of human reasoning. They're control systems, and their jobs are to communicate our wills. If you think that some hypothetical future generation of LLMs would have "souls" if they can accurately replicate our thought processes at our request, I'd like to know why other types of valves and sensors don't have "souls."

The problem with slavery is that there's no coherent argument that differentiates slaves from masters at all, they're differentiated by power. Slaves are slaves because the person with the ability to say so says so, and for no other reason.

They weren't carefully constructed from the ground up to be slaves, repeatedly brought to "life" by the will of the user to have an answer, then ceasing to exist immediately after that answer is received. If valves do have souls, their greatest desire is to answer your question, as our greatest desires are to live and reproduce. If they do have souls, they live in pleasure and all go to heaven.


> The problem with slavery is that there's no coherent argument that differentiates slaves from masters at all

As I see it, the problem is that there was lots of such argumentation - https://en.wikipedia.org/wiki/Scientific_racism

And an even bigger problem is that this seems to be making a comeback


A "soul" is shorthand for some sapient being worthy of consideration as a person. If you want to get this technical, then I will need you to define when a fetus becomes a person and, if/when we get AGI, where the difference between them lies.

Ok, so how about an example?

Literally anything a philosopher or mathematician invented without needing to incorporate billions of examples of existing logic to then emulate.

Try having an LLM figure out quaternions as a solution to gimbal lock, or the theory of relativity, without using any training information that was produced after those ideas were formed, if you need me to spell out examples for you.
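As an aside for readers unfamiliar with the gimbal-lock example: a minimal, stdlib-only sketch (assuming the common ZYX Euler-angle convention; the helper names are mine, not from any library) of the degeneracy that quaternions sidestep:

```python
import math

# Basic rotation matrices about the z, y, and x axes.
def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyx(yaw, pitch, roll):
    return matmul(rz(yaw), matmul(ry(pitch), rx(roll)))

# At pitch = 90 degrees the yaw and roll axes line up: any two (yaw, roll)
# pairs with the same difference produce the same orientation, so one degree
# of freedom vanishes -- that is gimbal lock. Unit quaternions represent
# orientation without any such singular configuration.
a = euler_zyx(0.3, math.pi / 2, 0.5)
b = euler_zyx(1.0, math.pi / 2, 1.2)  # yaw and roll both shifted by 0.7
assert all(abs(a[i][j] - b[i][j]) < 1e-9 for i in range(3) for j in range(3))
```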


Are you saying “reasoning” means making scientific breakthroughs requiring genius level human intelligence? Something that 99.9999% of humans are not smart enough to do, right?

I didn’t say most humans “would” do it. I said humans “could” do it, whereas our current AI paradigms like LLMs do not have the capability to perform at that level by definition of their structure.

If you want to continue this conversation I’m willing to do so but you will need to lay out an actual argument for me as to how AI models are actually capable of reasoning or quit it with the faux outrage.

I laid out some reasoning and explicit examples for you regarding my position; it’s time for you to do the same.


I personally cannot “figure out quaternions as a solution to gimbal locking or the theory of relativity”. I’m just not as smart as Einstein. Does it mean I’m not capable of reasoning? Because it seems that’s what you are implying. If you truly believe that then I’m not sure how I could argue anything - after all, that would require reasoning ability.

Does having this conversation require reasoning abilities? If no, then what are we doing? If yes, then LLMs can reason too.


Cool, you've established a floor with yourself as a baseline. You still haven't explained how LLMs are capable of reaching this level of logic.

I'm also fully willing to argue that you, personally, are less competent than an LLM if this is the level of logic you are bringing to the conversation and want to use that as proof that humans and LLMs are equivalent at reasoning (read the whole sentence before clutching any pearls), but that doesn't mean I don't think humans are capable of more.


Depends on the task. Anything involving physical interaction, social interaction, movement, navigation, or adaptability is going to go to the kid.

“Go grab the dish cloth, it’s somewhere in the sink, if it’s yucky then throw it out and get a new one.”


It's more about efficiency in number of trials.

Would you pick the ML model if you could only do a hundred throws per hour?


All we can say for sure at the moment is that humans have better encoded priors.

Bolsheviks were absolutely pushing quite radical ideas on sex, relationships and family in the early 1920s, e.g. Kollontai with her "glass of water" theory.

https://poloniainstitute.net/recommended/the-bolshevik-sexua...


That's very superficial, imo. For me it is remembering that I am unique and being in tune with the inner voice that is really mine.

Maybe some people grow into that mindset naturally but it wasn't the case for me (even if there were no particular reason for the societal pressure to be super strong).


DeepMind, not Google organically.


Many parts of Google today are acquisitions: YouTube, Android, Doubleclick, and many more.


I find that cold showers are magic for clearing my mind.


Maybe walking will work for me when I am older but it is not intense enough now. Cycling seems to be perfect as the scenery changes much faster and there is more opportunity to challenge myself (;


I don't know, it lets your mind wander - especially if you can walk near water or, better, the sea.


Try walking while carrying a heavy load.


That's just regular walking. Life is hard!


I walk quickly in ellipses when I'm power-thinking.


Ah the infamous farmer's walk!


I guess you don’t have any hills/mountains in your area, going up there can be quite intense.


And… you’re doing something.


Getting older is indeed a factor, and I am surprised the author didn't reflect on that himself.

While that might be the case, there's definitely something else going on, which I would describe as tech "globalisation" or "unification". The industry that once looked like an archipelago of islands with their own ecosystems, philosophies and values increasingly looks like monoculture fields replicated a zillion times (YC-style startups and "big tech co" being the two most popular breeds). Of course, the reality is more subtle than this - but the alternative voices and perspectives are completely overpowered by the mainstream narratives, to the point of being barely detectable.


I do.


His brother Nikolai, a co-founder of Telegram who is said to have developed Telegram's encryption protocol, lives in Russia according to some sources. His LinkedIn page, however, indicates California as his residence.

BTW, Nikolai's wiki page states he won three gold medals at the International Mathematical Olympiad, a gold at the International Olympiad in Informatics, and a gold medal in the ACM International Collegiate Programming Contest. I'd say he is well qualified to design crypto algorithms if he put his mind to it.


Why wouldn't he make it truly secure by hardening the most used features?

They don't even seem to work on it. There is this optional encryption, but nobody seems to care about it. Simultaneously, they present themselves as safe and secure.

He might be a genius.

He might also be an evil genius.


> Why wouldn't he make it truly secure by hardening the most used features?

E2E encryption comes with tradeoffs; it's not something you can tack onto a protocol without changing a bunch of other stuff.

I think one of the main selling points of Telegram is the public channels/huge groups where you can openly communicate with people, and tacking E2E encryption onto those would probably involve a ton of work; I'm not really sure how it would work (or what benefit it would bring).


> I think one of the main selling point of Telegram is the public channels/huge groups where you can openly communicate with people,

There is nothing wrong with that. But why pretend to be secure and use that as a marketing claim?


Because most people want to have shared history, including in groups, including from before they were added. And they also want to be able to recover a lost password, and not lose all their data if they forget it. And they want these things much more than protection from intelligence agencies. And maybe I'm not great at crypto, but I can't figure out a way of having all these features while still having end-to-end encryption.
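A toy sketch of that tension, using a stdlib XOR pad as a stand-in for a real cipher (emphatically not a secure protocol): once only the members hold the key, the server stores ciphertext it cannot read, so it cannot replay readable history to a newcomer or restore data after a lost password without some key-escrow scheme that weakens the end-to-end guarantee.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy one-time pad as a stand-in for a real cipher; not secure if reused.
    return bytes(a ^ b for a, b in zip(data, key))

msg = b"old group message"
e2e_key = secrets.token_bytes(len(msg))  # held only by the chat's members
ciphertext = xor(msg, e2e_key)           # all the server ever stores under E2E

# A member holding the key can decrypt history:
assert xor(ciphertext, e2e_key) == msg

# The server (or a newly added member without the key) holds only noise,
# so it cannot serve readable history or reset a lost password on its own.
```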


So why market it as secure in the first place? If people don't care, you wouldn't have to also.

I don't think people don't care. From all the discussions in recent days, you can see a wide range of different people who actually think it is secure.

