
I think this is over-simplified and possibly misunderstood. I haven't read the book this article references but if I am understanding the main proposal correctly then it can be summarised as "cortical activity produces spatial patterns which somehow 'compete' and the 'winner' is chosen which is then reinforced through a 'reward'".

'Compete', 'winner', and 'reward' are all left undefined in the article. Even given that, the theory is not new information and seems incredibly analogous to Hebbian learning, which is a long-standing theory in neuroscience. Additionally, the metaphor of evolution within the brain does not seem apt. Essentially what is said is that given a sensory input, we will see patterns emerge that correspond to a behaviour deemed successful. Other brain patterns may arise but are ignored or not reinforced by a reward. This is almost tautological, and the 'evolutionary process' (input -> brain activity -> behaviour -> reward) lacks explanatory power. This is exactly what we would expect to see. If we observe a behaviour that has been reinforced in some way, it would obviously correlate with the brain producing a specific activity pattern. I don't see any evidence that the brain will always produce several candidate activity patterns before judging a winner based on consensus. The tangent of cortical columns ignores key deep brain structures and is also almost irrelevant: the brain could use the proposed 'evolutionary' process with any architecture.


While it does build on established concepts like Hebbian learning, I think the theory offers a potentially insightful way of thinking about brain function.


> I think this is over-simplified and possibly misunderstood.

I'm with you here. I wrote this because I wanted to drive people towards the book. It's incredible and I did it little justice.

> "cortical activity produces spatial patterns which somehow 'compete' and the 'winner' is chosen which is then reinforced through a 'reward'"

A slight modification: spatio-temporal patterns*. Otherwise you're dead on.

> 'Compete', 'winner', and 'reward' are all left undefined in the article.

You're right. I left these undefined because I don't believe I have a firm understanding of how they work. Here's some speculation that might help clarify.

Compete - The field of minicolumns is an environment. A spatio-temporal pattern "survives" when a minicolumn is firing in that pattern. It's "fit" if it's able to effectively spread to other minicolumns. Eventually, as different firing patterns spread across the surface area of the neocortex, a border will form between two distinct firing patterns. They "compete" insofar as each firing pattern tries to "convert" minicolumns to fire in its specific pattern instead of another.
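To make the "conversion" dynamic concrete, here's a deliberately toy sketch (my own illustration, not Calvin's actual model): minicolumns on a ring each adopt the majority pattern among themselves and their neighbors, so competing patterns spread, convert cells, and settle into contiguous domains with borders between them.

```python
def step(field):
    """Each minicolumn adopts the most common pattern among itself
    and its two immediate neighbors: one round of 'conversion'."""
    n = len(field)
    new = []
    for i in range(n):
        trio = [field[(i - 1) % n], field[i], field[(i + 1) % n]]
        new.append(max(set(trio), key=trio.count))
    return new

# A ring of 10 minicolumns, each firing in pattern A or B:
field = list("ABAABBBABB")
for _ in range(5):
    field = step(field)
print("".join(field))  # BAAABBBBBB: two contiguous domains with borders
```

Obviously the real proposal involves cloning of spatio-temporal firing patterns, not a majority vote, but the border-forming behavior is the part I'm trying to gesture at.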

Winner - This has two levels. First, an individual firing pattern could "win" the competition by spreading to a new minicolumn. Second, amalgamations of firing patterns, the overall firing pattern of a cortical column, could match reality better than others. This is a very hand-wavy answer, because I have no intuition for how this might happen. At a high level, the winning thought is likely the one that best matches perception. How this works seems like a bit of a paradox, as these thoughts are perception. I suspect this is done through prediction. E.g. "If that person is my grandmother, she'll probably smile and call my name". Again, super hand-wavy; questions like this are why I posted this, hoping to get in touch with people who have spent more time studying this.

Reward - I'm an interested amateur when it comes to ML, and folks have been great about pointing out areas that I should go deeper. I have only a basic understanding of how reward functions work. I imagine the minicolumns as small neural networks and alluded to "reward" in the same sense. I have no idea what that reward algorithm is or if NNs are even a good analogy. Again, I really recommend the book if you're interested in a deeper explanation of this.

> the theory is not new information and seems incredibly analogous to Hebbian learning which is a long-standing theory in neuroscience.

I disagree with you here. Hebbian learning is very much a component of this theory, but not the whole. The last two constraints were inspired by it and, in hindsight, I should have been more explicit about that. But Hebbian learning describes a tendency to average: "cells that fire together wire together". Please feel free to push back here, but the concept of Darwin Machines fits the constraints of Hebbian learning while still offering a seemingly valid description of how creative thought might occur. Something that, if I'm not misunderstanding, is undoubtedly new information.
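For contrast, the "fire together, wire together" rule can be written as a one-line weight update (a textbook sketch, nothing from Calvin's book):

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """Classic Hebbian rule: strengthen w[i, j] whenever post-synaptic
    neuron i and pre-synaptic neuron j are active at the same time."""
    return w + lr * np.outer(post, pre)

w = np.zeros((2, 2))
pre = np.array([1.0, 0.0])   # pre-synaptic neuron 0 fires
post = np.array([1.0, 0.0])  # post-synaptic neuron 0 fires
w = hebbian_update(w, pre, post)
# only the weight between the co-active pair grows: w[0, 0] == 0.1
```

Note the rule only ever reinforces correlations that already occurred; the competition between whole patterns in the Darwin Machine picture is an extra mechanism on top of this.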

> I don't see any evidence that the brain will always produce several candidate activity patterns before judging a winner based on consensus.

That's probably my fault in the retelling, check out the book: http://williamcalvin.com/bk9/index.htm

I think if you read Chapters 1-4 (about 60 pages and with plenty of awesome diagrams) you'd have a sense for why Calvin believes this (whether you agree or not would be a fun conversation).

> The tangent of cortical columns ignores key deep brain structures and is also almost irrelevant, the brain could use the proposed 'evolutionary' process with any architecture.

I disagree here. A common mistake I think we tend to make is assuming evolution and natural selection are equivalent. Some examples of natural selection: a diversified portfolio, or a beach with large grains of sand due to some intricacy of the currents. Dawkinsian evolution is much, much rarer. I can only think of three examples of architectures that have pulled it off. Genes, and their architecture, are one. Memes (imitated behavior) are another. Many animals imitate, but only one species has been able to build architecture to allow those behaviors to undergo an evolutionary process. Humans. And finally, if this theory is right, spatiotemporal patterns and the columnar architecture of the brain are the third.

Ignoring Darwin Machines, there are only two architectures that have led to an evolutionary process. Saying we could use "any architecture" seems a bit optimistic.

I appreciate the thoughtful response.


Thanks for the considered reply.



This should be its own submission, I hadn't heard of it and it's interesting.



Longmont Potion Castle vibes


prompt: "domain name for site that sells tractors"

result: "nomadventurequest.com"

...at least we know that nomadventurequest.com is available, I suppose. That saves me the minute or less it would take to look that up.


I think the confusion lies in the word "value". To use your example of Company, imagine another company (Company2) exactly the same as Company but without the debt. Enterprise value of Company is 3B, enterprise value of Company2 is 1B. But Company2 would clearly be preferable to acquire, because it has no debt, despite the much lower enterprise value. So enterprise "value" is not a useful measure of "value". Am I misunderstanding something here?


The part you're missing is that the 3 variables aren't independent of each other or of the underlying company. You'll never have two 'identical' companies that only differ in one of those variables.

If company 2 has the same market cap with zero debt as company 1 has with huge amount of debt, then that implicitly means that the market values what company 2 does less. Perhaps they're in a market with lower growth or have a smaller market share or have a product with less upside potential or have huge lawsuit hanging over them.

If company 1 and company 2 did the same thing and had the same profits and sales, but the only difference is that company 1 had a lot more debt, then the market cap of company 2 would almost certainly be higher than the market cap of company 1. In fact you could then use the Enterprise Value formula to work out what the market cap of company 2 'should' be.


Ok I understand what you're saying, but this is still treating debt as a negative despite the fact it has a positive influence on the enterprise value.

Imagine this example. I'm CEO of Company2 and I take out a 1B loan. Enterprise value is still 2B, because 1B debt is cancelled by the 1B I now have in cash. I then waste all the cash on whatever. My company's enterprise value is now 3B, because the debt has increased it without being cancelled by the cash.
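Just to pin down the arithmetic I'm using (the standard simplified formula, EV = market cap + debt - cash, holding market cap fixed as in the example above):

```python
def enterprise_value(market_cap, debt, cash):
    """Simplified enterprise value: the price of the equity,
    plus the debt you assume, minus the cash you acquire."""
    return market_cap + debt - cash

# Company2 before the loan: 2B market cap, no debt, no cash
print(enterprise_value(2.0, 0.0, 0.0))  # 2.0 (billions)

# Take out a 1B loan: debt and cash rise together, EV unchanged
print(enterprise_value(2.0, 1.0, 1.0))  # 2.0

# Waste the cash: the debt remains, the cash is gone, EV rises
print(enterprise_value(2.0, 1.0, 0.0))  # 3.0
```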


Depends on what you did with your cash. If you wasted it on something (perceived as) 'stupid' then your stock price would go down, lowering your market cap and your enterprise value would stay the same. If however you invested that cash into something perceived as 'smart' then your company will now be better off in some dimension, and thus it makes sense that your company will be worth more.

Equally, if you are able to take on a lot of debt without it lowering your stock price, then that means the market thinks you're going to use that new money in a smart way to grow your company's value, and thus your company is more valuable. If the market didn't believe you would use the debt wisely, taking on debt would lower your stock price.

In the real world you effectively cannot change the amount of cash or debt you have without it affecting your market cap in some way, as all three are tied together in complex ways and this model doesn't offer any insight into how changing one value will affect the others or the overall value going forwards. Think of Enterprise Value just as the price tag of a company at any given moment in time.


Gotcha. To go back to the article, it compares Tesla's and Toyota's market caps (558 vs 329) and then their enterprise values (553 vs 466), and concludes that if we look at EV then Toyota has almost caught up. What this reveals is that the market has decided Toyota is in a good position ("valuable"): it has a market cap reasonably close to Tesla's despite carrying a lot more debt, so there is some extra value in Toyota (according to the market) that isn't captured by just comparing the two market caps. It also suggests that Toyota believes it's undervalued, which is why it uses debt financing instead of issuing shares, whereas Tesla thinks the opposite.


Seems like 23andMe is two businesses: consumer and a B2B data business.

The consumer side is clearly struggling because of the problems mentioned in the article (customers only need one test in their lifetime, and public perception is bad because of security breaches). So this needs a pivot: change the public's perception from a one-time test to continuous health monitoring through blood markers or something similar, expand to tests other than genetic, and make it a repeatable, accurate test that gives you more information (and obviously stop leaking people's data).

But why not focus on the B2B side? Sell access to their databases. I'm sure computational biology and/or pharma companies need this information. It makes sense to do vertical integration by manufacturing your own drugs, but not for a cash-strapped business that has little incoming revenue to sustain further development. Presumably they are selling their genetic data, but I don't get why it's not giving them a revenue stream. Let GSK manufacture the drugs using your genetic info, with a profit share for any drugs made this way. They mention a collaboration with GSK in the article, but why was this stopped?


> Presumably they are selling their genetic data, but I don't get why it's not giving them a revenue stream. Let GSK manufacture the drugs using your genetic info, with a profit share for any drugs made this way. They mention a collaboration with GSK in the article, but why was this stopped?

In 2018 GSK made a $300M equity investment in 23andMe as part of a 4 year collaboration (with the option to extend for a fifth year) under which GSK had exclusive access to their data for use in drug target discovery programs, but [0]:

> All activities within the collaboration will initially be co-funded (50%/50%), with either company having certain rights to reduce its funding share for any collaboration programme.

So it seems they not only lost out on 5 years of developing their B2B business, but committed to covering a portion of the R&D costs over that period as well. There were terms about profit sharing on new developments, so it was a bet.

It doesn't sound like it worked out quite as well as either side hoped though, because in October 2023 (after the 5 year agreement) they entered into another agreement, but this time [1]:

> Under an amendment to their Collaboration Agreement, 23andMe will receive a $20 million upfront payment for a one year, non-exclusive data license.

> [...] for a 12-month period, and [23andMe will] offer its research services for analyses of the data over that same period. Any new drug discovery programs that GSK chooses to initiate during the agreement will be owned and advanced solely by GSK.

[0]: https://www.gsk.com/en-gb/media/press-releases/gsk-and-23and...

[1]: https://investors.23andme.com/news-releases/news-release-det...


> needs a pivot where you can change the public's perception from a one-time test to continuous health monitoring through blood markers or something similar, expand to tests other than genetic and make it a repeatable, accurate test that gives you more information (and obviously stop leaking people's data).

"Stop leaking data"? Sounds like a step forward.

> But why not focus on the B2B side? Sell access to their databases.

So, it's actually: try to sell your customers' data instead of leaking it? :O

Sorry if that sounds snarky, but are we sure there are enough customers who want to pay to give their data to a company so that company can immediately sell their data on to other companies?


That's the conflict at the heart of the business. But do the public care enough about that? Obviously a HN audience does, but people use Google and Facebook products every day with the awareness that all that data is sold directly to advertisers. With the right messaging ("yes, we sell your data, but it's to drug companies to help make drugs that can cure your illnesses") it's possible that the conflict isn't too much of an issue.


Drug companies pay pretty well for clinical trials, so why would anyone pay 23andme for their own data, so that 23andme can turn around and sell it to drug companies?


These are different scenarios, clinical trials are experiments not just genetic information gathering.


I mean, customers are angry if personal data leaks, but a lot of legit use cases can be done with aggregate data.


The real value they tried to build was in pharma, which is much more lucrative than either of those. Their data is somewhat interesting but limited by the lack of rich phenotypic/clinical information paired with the genetics.


> But why not focus on the B2B side? Sell access to their databases. I'm sure computational biology and/or pharma companies need this information.

Are you sure?

I mean, presumably there are different types of DNA testing. Doesn't 23andme run basically the cheapest test they can get away with? A user can't tell if the test measured 16 bytes or 1.6 gigabytes of genetic information, and if I was trying to launch a consumer DNA genealogy service, I'd want to get network effects, so I'd want a test that was very easily affordable.

Who says their records are thorough enough to be valuable to drug companies?


> Doesn't 23andme run basically the cheapest test they can get away with?

No. That's part of why their test is so expensive and their financials are so lackluster. They are apparently using a customized version of Illumina's Global Screening Array, according to their website and several other sources that show up in search results. That's a legit research-quality genotyping platform from a world-leading laboratory in the genetics space. This post from 2020 has a decent high-level overview of it in the context of 23&me [0], though it might be slightly outdated by now and I've never heard of the company (xcode?) that wrote it (nor did I bother to look at what their product is.)

> If I was trying to launch a consumer DNA genealogy service, I'd want to get network effects, so I'd want a test that was very easily affordable.

That's a pretty naive perspective. It ignores the value propositions of 23&me's product, economies of scale in the direct-to-consumer genetic testing space which were in large part enabled by 23&me's success, and all of the thorny bits related to questions about accuracy when presenting results. Not to mention that 23&me certainly capitalized on the network effect (which is being criticized a fair amount in this thread).

> Who says their records are thorough enough to be valuable to drug companies?

GSK. See my other comment in this thread [1]

[0] https://www.xcode.life/23andme/23andme-v5-chip-dna-raw-data-...

[1] https://news.ycombinator.com/item?id=39202583#39204621


You make a good point. It's possible it's not useful to them and that's why they can't generate enough revenue from it. I assume the sales pitch to pharma is that it's a wide database, rather than deep.


This is actually incorrect; there's not that much data left to train on. I remember reading an article about it, might have been one of Gwern's or something about Chinchilla scaling, but to produce an order of magnitude increase we need an order of magnitude more data, and there just isn't that amount available.
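The scaling argument being referenced (assuming it's the Chinchilla result, which found that compute-optimal training uses roughly 20 tokens per model parameter) works out as a back-of-the-envelope sketch like this:

```python
def chinchilla_optimal_tokens(n_params, tokens_per_param=20):
    """Rough Chinchilla rule of thumb: compute-optimal training
    uses ~20 training tokens per model parameter."""
    return n_params * tokens_per_param

# A 70B-parameter model wants ~1.4 trillion training tokens...
print(chinchilla_optimal_tokens(70e9))   # 1.4e12

# ...so a 10x larger model wants ~14 trillion, which is where
# the "not enough data" concern starts to bite.
print(chinchilla_optimal_tokens(700e9))  # 1.4e13
```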



>diagnose bugs

Literally says in the article that GPT's main drawback is that it can't debug in the same way a human can.


Fixing bugs in code it writes itself is very different to diagnosing (i.e. identifying, not fixing) bugs in my own code.

When stuck I often paste code into ChatGPT and ask why it doesn't work, and it will often help me quickly identify the error and propose a fix.


I have done the same thing. “I am seeing the following behavior, but I expected <X>. What am I missing?” That line has often quickly solved some bugs that would have otherwise taken some time to debug. Very handy!


Can you share some examples of this? I haven’t had much luck with ChatGPT correctly identifying issues because (in my case, at least) they stem from other parts of a large codebase, and (last time I checked) I couldn’t paste more than a few kilobytes of code into ChatGPT.

One example is bugs caused by precondition violations, which ChatGPT can't diagnose without also being given the code for all of the incoming call-sites, which means you end up solving the problem yourself before you've even explained the issue to ChatGPT - so (to me, at least) my use of ChatGPT is more akin to rubber-duck-debugging[1] than anything else.

[1]: https://en.wikipedia.org/wiki/Rubber_duck_debugging
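A toy illustration of the kind of precondition bug I mean (hypothetical code, just to show why the function body alone isn't enough to diagnose the problem):

```python
from datetime import date

def days_until(deadline, today):
    """Precondition (documented but not enforced): today <= deadline."""
    return (deadline - today).days

# The function body is 'correct'; the bug lives at a call-site
# elsewhere in the codebase, where the arguments are swapped:
remaining = days_until(date(2024, 1, 1), date(2024, 1, 31))
print(remaining)  # -30: nonsense you can only spot by seeing the caller
```

Pasting just `days_until` into ChatGPT would get you nowhere; the defect is invisible without the calling code.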


> Fixing bugs in code it writes itself is very different to diagnosing (i.e. identifying, not fixing) bugs in my own code.

yeah, we run the random number generator again and hopefully this time less buggy code pops out


Consider for a moment the possibility that TFA is wrong


It can if you use the python interpreter.


>without wading through LLM generated text

...OpenAI solved this by generating LLM text for you to wade through?


No. It solved it by (most of the time) giving the OP and me the answer to our queries, without us needing to wade through spammy SERP links.


If LLMs can replace 90% of your queries, then you have very different search patterns from me. When I search on Kagi, much of the time I’m looking for the website of a business, a public figure’s social media page, a restaurant’s hours of operation, a software library’s official documentation, etc.

LLMs have been very useful, but regular search is still a big part of everyday life for me.


Sure we now have options, but before LLMs, most queries relied solely on search engines, often leading to sifting through multiple paragraphs on websites to find answers — a barrier for most users.

Today, LLMs excel in providing concise responses, addressing simple, curious questions like, "Do all bees live in colonies?"


How do you tell a plausible wrong answer from a real one?


By testing the code it returns (I mostly use it as a coding assistant) to see if it works. 95% of the time it does.

For technical questions, ChatGPT has almost completely replaced Google & Stack Overflow for me.


In my experience, testing code in a way that ensures that it works is often harder and takes more time than writing it.


GPT4 search is a very good experience.

Though because you don't see the results it leaves out, it's hard to really validate the quality, so I'm still wary; but when I look for specific stuff it tends to find it.

