
> Recall that compared to electronic devices, the human brain operates at ridiculously slow speeds of about 120 bits (approximately 15 bytes) per second. Listening to one person takes about 60 bits per second of brainpower, or half our available bandwidth.

How are they getting these numbers? Is it that a word is typically a few bytes worth of characters and we can process a couple words per second? Or are they really saying that all the information we perceive during a conversation can be reduced to less than 15 bytes per second? The former seems like a flawed comparison, and the latter seems ludicrous.


I was able to find this MIT Technology Review article[1], which explains that one measure of a specific lexical task gives a processing rate of 60 bits per second. However, the author of the scientific paper in question complains about this summary:

> "I have a small scientific comment on your post. Although I think it represents my results very well, I find the opening sentence: “A new way to analyze human reaction times shows that the brain processes data no faster than 60 bits per second.” a bit misleading. I don’t think I have shown anything about the upper bounds of the processing speed, in principle the curve I show in Figure 4 of the manuscript could extend far beyond this, but I have no information to make this extrapolation, so I would not claim (for the moment) any upper limit."

Britannica[2] has an explanation of historical estimates, which are in the same ballpark: "For example, a typical reading rate of 300 words per minute works out to about 5 words per second. Assuming an average of 5 characters per word and roughly 2 bits per character yields the aforementioned rate of 50 bits per second." However, 2 bits per character would be a 4-letter alphabet, so they must already be using an information-theoretic measure in which the information density of an English word is much lower than what individual letters could encode (which makes sense: the bigram "qz" essentially never occurs, while "th" is very frequent).
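
Making the arithmetic explicit, using only the numbers from the quote above:

    # 300 words/min ÷ 60 s = 5 words/s; 5 chars/word × 2 bits/char = 10 bits/word
    echo $(( 300 / 60 * 5 * 2 ))    # prints 50 (bits per second)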

It goes on to explain that "in other words, the human body sends 11 million bits per second to the brain for processing, yet the conscious mind seems to be able to process only 50 bits per second", except that this is ludicrous on its face, at least by some measures: the bps from the eyes alone, in the table just below that paragraph, is 10 million bps. Clearly, the "lexical task" of reading words involves processing much of that visual input -- even 0.1% would still be 10 kbps -- which is handwaved away in the lexical stream example to pretend that the brain receives a direct serial input stream of 2-bit characters.

Furthermore, it's easy to find other estimates of information processing that suggest the input from a single eye is more like 1.6 gigabits per second[3], which is 160 times the 10-megabit total given by Britannica. That article explains that there is already compression before the signal reaches the brain, though, as the optic nerve is limited to around 100 megabits per second.

The 120-bit upper limit seems to originate with psychologist Mihály Csíkszentmihályi (and purportedly, independently, with Bell Labs engineer Robert Lucky, though I can find no primary source supporting that claim); it is mentioned in that context in the Wikipedia article on Flow[4].

1: https://www.technologyreview.com/2009/08/25/210267/new-measu...

2: https://www.britannica.com/science/information-theory/Physio...

3: https://www.discovermagazine.com/mind/the-information-enteri...

4: https://en.wikipedia.org/wiki/Flow_(psychology)#Mechanism


Are you saying HN broke substack?


> It uses the kernel's `make menuconfig` system which seems like it occupies a bad place between "easy usable GUI" and "robust config file you can check in to git".

Are you aware you can generate a sparse config file with `make savedefconfig`? Maybe you're aware of that and it doesn't meet your definition of robust, which is reasonable.
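
A minimal sketch of the workflow (from memory; a kernel tree writes the sparse config to ./defconfig, while Buildroot writes to the path given by BR2_DEFCONFIG):

    make menuconfig        # configure interactively
    make savedefconfig     # write out only the non-default options
    git add defconfig      # the sparse file is small and diffs nicely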

I think Buildroot is much better engineered than Yocto, in the sense of simplicity. Buildroot is fundamentally just a bunch of make and kconfig. Yocto has some powerful features but I find it to be considerably more difficult to penetrate "what is actually happening here". I haven't had as negative an experience with Buildroot error messages, but if you think those are bad, Yocto errors are like g++ template errors circa 2010.


For a buildroot alternative that solves many of these problems, without going to the complexity of Yocto, see ptxdist: https://www.ptxdist.org/


> Yocto AFAICT does not have a package discovery via TUI

Depends what you mean by package discovery, but `oe-pkgdata-util find-path` may be helpful in determining what package would produce a certain binary.
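
A sketch of how that looks from inside the build environment (after sourcing oe-init-build-env; /usr/bin/strace is just an illustrative path):

    # which package ships this file? (reads pkgdata, so the recipe must be built)
    oe-pkgdata-util find-path /usr/bin/strace
    # list the files a built package provides
    oe-pkgdata-util list-pkg-files strace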

Other than searching on open embedded like you mention, I am not aware of a way to do this if you have not already added the layer containing the recipe.


> `oe-pkgdata-util find-path`

Thanks for that one. Will keep it in mind.

With buildroot, I can use menuconfig to search for available packages and add them to the rootfs.

With Yocto, AFAICT, you have to find the recipe name and manually add it to the rootfs via `IMAGE_INSTALL:append`.
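
For example, in conf/local.conf (or an image recipe) -- note the leading space, since :append does not insert one for you:

    # add packages to the generated rootfs
    IMAGE_INSTALL:append = " strace htop"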


Buildroot will build the cross toolchain too unless you specify BR2_TOOLCHAIN_EXTERNAL, right? And likewise you can configure yocto to use an external toolchain.

A lot of the byproducts are configurable. You could configure the system to not use an initramfs, use only an initramfs/squashfs, etc. It is a really great tool with great documentation.
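
As a rough illustration, the relevant knobs are just kconfig symbols in the defconfig (symbol names from memory -- verify them in menuconfig for your Buildroot version):

    BR2_TOOLCHAIN_EXTERNAL=y          # use a prebuilt external cross toolchain
    BR2_TARGET_ROOTFS_SQUASHFS=y      # emit a squashfs rootfs image
    # BR2_TARGET_ROOTFS_CPIO=y        # or a cpio archive usable as an initramfs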

I agree that Yocto is a bit masochistic by comparison, but vendors like it because its primary purpose is to enable shitty vendor forks.


I reckon you are already aware of this if you're using it to generate an initramfs, but for those reading along, you can also use it as a Docker image.
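
A minimal sketch, assuming the tar rootfs output (BR2_TARGET_ROOTFS_TAR) is enabled:

    # import Buildroot's rootfs tarball as a Docker image and poke around in it
    docker import output/images/rootfs.tar my-buildroot-rootfs
    docker run --rm -it my-buildroot-rootfs /bin/sh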


It will spit out a rootfs, or even a block image for a disk, which might make it _look_ like a single application. You will probably update your system by flashing that entire rootfs/image to your target device as if it was one application.

However, it is still Linux under the hood, so there is a kernel + whatever other applications you want running on there, all as separate processes.

You may also be thinking of busybox, which bakes an implementation of most of GNU coreutils into one file to save space. Many symlinks are often created to that same file, and when invoked the process can check what name it was invoked with to determine its behavior.
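
Roughly, on a BusyBox-based system it looks like this (paths illustrative):

    ls -l /bin/ls              # typically a symlink: /bin/ls -> busybox
    # busybox inspects argv[0] to decide which applet to run;
    # you can also name the applet explicitly:
    busybox ls /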


maybe hallucination is all cognition is, and humans are just really good at it?


In my experience, humans are at least as bad at it as GPT-4, if not far worse -- specifically in terms of being "factually accurate" and grounded in absolute reality. Humans operate entirely in the probabilistic realm of what seems right to us based on how we were educated, the values we were raised with, our religious beliefs, etc. Human beings are all over the map with this.


> In my experience, humans are at least as bad at it as GPT-4, if not far worse.

I had an argument with a former friend recently, because he read some comments on YouTube and was convinced a raccoon raped a cat and produced some kind of hybrid offspring that was terrorizing a neighborhood. Trying to explain that different species can't procreate like that resulted in him pointing to the fact that other people believed it in the comments as proof.

Say what you will about LLMs, but they seem to have a better basic education than an awful lot of adults, and certainly significantly better basic reasoning capabilities.


> Trying to explain that different species can't procreate like that resulted in him pointing to the fact that other people believed it in the comments as proof.

Those two species can't interbreed apparently, but considering the number of species that can [1] produce hybrid offspring, some even from different families, it is reasonable to forgive people for entertaining the possibility.

[1] https://en.m.wikipedia.org/wiki/List_of_genetic_hybrids


I don't think it's remotely reasonable. The list you refer to, which I don't need to click on as I'm already familiar with it, is of animals within the same family, e.g. big cats.

Raccoons are not any type of feline, and this should be basic knowledge for any adult in any western country who grew up there and went to school.


There are at least a couple of examples in the article that you refuse to read that describe hybrids from different families. Sorry, but your purported basic knowledge is wrong.


I'm not 'refusing to read' it, I said I'm familiar with it because I've read it numerous times in the past.

Which examples are you referring to? The only real example seems to be fish.

In any case I was using 'family' in a loose sense, not in the stricter scientific biological hierarchy sense.

My basic knowledge is not wrong at all, because my point was that animals that far apart could not reproduce. That's it. The wiki page you linked doesn't really justify your idea that because some hybrids exist people might think any hybrid could exist.

The point is, it's frankly idiotic or at least extremely ignorant for anyone 40 years of age who grew up in the US or any developed country to think that.

I also very much doubt the people who believe a raccoon could rape a cat and produce offspring are even aware of that wiki page or any of the examples on it. Hell, I doubt they even know a mule is a hybrid. Your hypothesis doesn't hold water.

Additionally, most of the examples on that page are the result of human intervention and artificial insemination, not wild encounters. Context matters.


Ok, but humans aren't being hyped as this incredible new tech that's going to lead to the singularity.


This is demonstrably not true. People also bullshit, a lot, but nowhere near the level of an LLM. You won't get fake citations, complete with publication year and ISBN, in a conversation with a human. StackOverflow is not full of down voted answers of people suggesting to use non-existent libraries with complete code examples.


It's definitely part of what cognition is; hallucinogens/meditation/etc. allow anyone to verify that much.

Intuitively cognition is several systems running in tandem, supervising and cross checking answers, likely iteratively until some threshold is reached.

Wouldn't surprise me if expert/rule systems are up for some kind of comeback; I feel like we need both, tightly integrated.

There's also dreams, and the role they play in awareness, some kind of self reflective work is probably crucial.

That being said, I'm 100% sure there is something in self awareness that is not part of the system and can't be replicated.

I can observe myself from the outside, actions and reactions, thoughts and feelings; which begs the question: who is acting and reacting, thinking and feeling, and what am I if not that?


Both of those terms have precise meanings. They're not the same thing. Summarized --

Cognition: acquiring knowledge and understanding through thought and the senses.

Hallucination: An experience involving the perception of something not present.

With those definitions in mind, hallucination can be defined as false-cognition that is not based in reality. It's not cognition because cognition grants knowledge based on truth and hallucination leads the subject to believe lies.

In other words, "humans are just really good at hallucination" rejects the notion that we're able to perceive actual reality with our senses.


I mean hallucination in the context of this conversation: probabilistic token generation without any real knowledge or understanding.

Maybe if we add a lot of neurons and make it all faster, we would end up with “knowledge” as an emergent feature? Or maybe we wouldn’t.


Humans can hallucinate but later determine that what they thought was occurring was not actually real. LLMs can't do that. What you're saying sounds to me rather like what some people are tempted to do on encountering metaphysics: posing questions like "maybe everything is a dream and nothing we experience is real". Which is a logically valid sentence, I guess, but it really is meaningless. The reason we have words like "dreaming" and "awake" is that we have experienced both and know the difference. Ditto "hallucinations". It doesn't seem that there is any difference to LLMs between hallucinations and any other kind of experience. So, I feel like your line of reasoning is off-base somewhat.


I agree. I shouldn't have used the word "hallucinations" since the point of the conversation above my comment was that they are not really hallucinations by any meaningful definition of the word.

My question was more about whether "babbling" with statistically likely tokens can eventually emerge into real cognition. If we add enough neurons to a neural network, will it achieve AGI? or is there some special sauce that is still missing.


It seems to me that the loan loophole could be closed simply by not readjusting capital gains basis upon death.


The adjustment is to compensate for the estate tax, which takes a far higher amount than the capital gains tax -- up to 40% of assets.


It also accounts for the fact that 60-year-old investments often have no documentation of their cost basis; it's long forgotten.


$27 million is completely untaxed. Beyond that, you can use trusts to avoid the estate tax.


There is a separate project (now also managed by the AT Museum) called the hiker yearbook, which collects phone numbers and emails along with pictures. It looks like the first one may have been published in 2014.

