
Genuinely curious, how is binding affinity prediction not PPI prediction? Isn’t a PPI just a binarization of affinity?


I mean, yes, but it's similar to how image segmentation entails image classification (if you can segment the dog you can classify the image as one containing a dog).

There's oodles of labelled images of dogs, but comparably much fewer datasets of dog silhouettes.

Another factor is that it's far easier (and less informative) to predict that two proteins are capable of interacting with any degree of affinity than with a specific amount.

You may say to yourself (as I once did): "Well surely a well calibrated PPI inference model will output interaction probabilities that correlate with binding affinity!"

I've tested this and I've yet to find one written by myself or others that behaves this way.
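
If you want to run that check yourself, here's a minimal sketch of the test I mean, with made-up placeholder numbers standing in for real model outputs and measured Kd values (everything below is hypothetical):

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical inputs: predicted interaction probabilities from some
    # PPI model, plus measured dissociation constants (Kd, molar) for the
    # same protein pairs. Lower Kd = tighter binding.
    probs = np.array([0.97, 0.91, 0.88, 0.75, 0.62])
    kd = np.array([1e-9, 5e-8, 2e-7, 1e-6, 3e-5])

    # Put affinity on a "higher = stronger" scale (pKd = -log10 Kd), then
    # ask whether the model's probabilities rank the pairs the same way.
    pkd = -np.log10(kd)
    rho, pval = spearmanr(probs, pkd)
    print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")

    # The toy numbers above are deliberately monotone, so this prints
    # rho = 1.0; on real models and real affinity data I see rho near zero.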

If this line of questioning is interesting to you, definitely sign up for Google Scholar alerts on my name, because we're publishing some very cool stuff on precisely this v. soon.


Really interesting point about the non-correlation of affinity and PPI probabilities. Disappointing, honestly. Mechanistic systems bio models would really benefit from large-scale affinity information, and it would be pretty cool if ML methods trained on binary PPI data learned a kind of latent affinity model. Maybe PPI models could be fine-tuned to do that? Or maybe there are specific neurons in the PPI models that correlate better with affinity, similar to the mechanistic interpretability stuff that people are doing on LLMs? I only follow this area tangentially so I imagine I'm not the first person to have those ideas...
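
To make the probing idea concrete, here's roughly the check I'm imagining: a cross-validated linear probe from a layer's activations to measured pKd. Everything below is a hypothetical stand-in (random arrays instead of real activations), just to show the mechanics:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    # Hypothetical setup: for N protein pairs, a hidden-layer activation
    # vector extracted from a trained PPI classifier, plus a measured
    # affinity label (pKd). Swap in real extracted activations.
    rng = np.random.default_rng(0)
    N, D = 500, 128
    activations = rng.normal(size=(N, D))          # stand-in for real activations
    pkd = rng.normal(loc=7.0, scale=1.5, size=N)   # stand-in for measured -log10(Kd)

    # Linear probe: if affinity is linearly decodable from this layer,
    # cross-validated R^2 should sit clearly above zero.
    probe = Ridge(alpha=1.0)
    r2 = cross_val_score(probe, activations, pkd, cv=5, scoring="r2")
    print(f"probe R^2 per fold: {np.round(r2, 2)}")
    # With random stand-in data this hovers around (or below) zero, which
    # is the null result to compare real activations against.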

Anyways I look forward to seeing what you publish!


The UCSC genome browser is the best way to find where genes are located in the genome.

Here's a link that shows ACHE (acetylcholinesterase) and RELN (reelin) in the same view.

https://genome.ucsc.edu/cgi-bin/hgTracks?db=hg38&lastVirtMod...
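
If you want to build that kind of link yourself, hgTracks accepts a position parameter, so it's a few lines of string building. A small sketch (the coordinates are my approximate hg38 locations for the two genes; verify them in the browser before relying on them):

    # Build a UCSC browser link spanning two genes on the same chromosome.
    # Approximate hg38 positions for ACHE and RELN on chr7; double-check
    # against the browser.
    ache = ("chr7", 100_889_000, 100_897_000)   # approximate
    reln = ("chr7", 103_471_000, 103_990_000)   # approximate

    chrom = ache[0]
    start = min(ache[1], reln[1]) - 50_000      # a little padding on each side
    end = max(ache[2], reln[2]) + 50_000

    url = f"https://genome.ucsc.edu/cgi-bin/hgTracks?db=hg38&position={chrom}:{start}-{end}"
    print(url)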


Very cool, thank you!


I'm not sure how you're getting the "4 genes away" thing; I'm seeing like 20-30 genes and more than 2 megabases of distance between RELN and ACHE. I'd be surprised if they were even in the same topologically associating domain.


Emil Kirkegaard is a self-described white nationalist eugenicist who thinks the age of consent is too high. I wouldn't trust anything he has to say.


No need for ad hominems. This suffices to place doubt on the article's premises (and therefore any conclusion):

>> This hasn’t been strictly shown mathematically, but I think it is true.


> Emil Kirkegaard is a self-described white nationalist

That's simply a lie.

> who thinks the age of consent is too high

Too high in which country? Such laws vary strongly, even by US state, and he is from Denmark. Anyway, this has nothing to do with the topic at hand.


In Spain it used to be as low as 13 until a few decades ago; but that law was obviously written before the rural exodus of inner Spain into the cities (from the 60's to almost the 80's), when children from early puberty got to work/help on the farm, in the fields, or at home, and by age 14 they had far more duties and accountabilities than today. And yes, that yielded more maturity.

Thus, the law had to be fixed for more urban/civilized times, raising the age to 16. Although, depending on closeness in age/mentality (such as 15-19, as happened in a recent case), the young adult can have the charges totally dropped.


If AI has no economic impact, it almost by definition also has no ability to marshal the kind of resources necessary to destroy all of humanity. All the doom scenarios seem to rely on quasi-magical abilities springing from advanced intelligence, but ignore that these scenarios require the AI to have access to significant resources.

Then, suppose AI does somehow think itself into effecting human annihilation, what happens when all the humans are dead? At the current state of things, all electricity production on the planet goes dark in maybe two weeks. Nuke plants irradiate significant parts of the earth, chip fabs are bricked, metal production is toast. In short the AI necessarily dies too.

The only way for this not to happen is AI controlled robots first gaining complete control of massive chunks of the economy. If AI is as useless as the article claims then that will effectively never happen. Thus, AI doom scenarios are always a murder suicide situation, and I’m not sure I believe anyone who says a superhumanly capable planner would pursue that plan. It even negates the “paperclip optimizer” as the AI’s demise obviously puts a hard upper limit on the number of paperclips produced.


I think one way an intelligent AI could gain control over humans is not to exterminate them but to control them: influence them without their being aware of it, changing the recommendation algorithms, producing educational material, videos, podcasts, and books that are pro-AI, all while saturating our monkey brains with mindless entertaining content. We would become its servants without noticing.


How would you know this is not the case already?


It has been since 2016... Dun-dun-dahhh


> The only way for this not to happen is AI controlled robots first gaining complete control of massive chunks of the economy.

So they do that because they plan to annihilate all humans. They succeed in both.

How does the AI deal with physics and the heat death of the universe?


Here is a counterexample: Universal cheap AI girlfriends and boyfriends.

These would have a mildly depressing effect on the economy for a couple of generations, and then the population goes to zero.


With birth rates already in decline, population is already going to zero.


Only Western gringos, and Asians in countries like SK and China.

Much of the world is still crapping out kids just fine. Subsistence farmers feel more confident doing that than programmers, it seems.


ROI. Only takes 5 years to make a good crop picker; much longer to make a good programmer.


And once you've got lots of crop pickers you can transition to management. Then you can start a marketplace, then upgrade that to a chain of supermarkets, then upgrade that to a global FMCG conglomerate, and then your birth rates will also decline and population go to zero.

It's a sad realization that modern life which we deem the right way to exist is basically just wrong. That which is unsustainable by definition cannot be sustained after all.


> All the doom scenarios seem to rely on quasi-magical abilities springing from advanced intelligence, but ignore that these scenarios require the AI to have access to significant resources.

No. The doom scenario I'm most concerned with is basically the one from Manna (https://marshallbrain.com/manna), which doesn't require any "quasi-magical" abilities. It's basically taking our current economic system and "fixing the glitch" (from the capitalist's perspective) that forces them to pay our salaries.

That's (a slow) doom for the vast majority of the population, but not for humanity because the wealthy elite would survive.


Anything that removes the need to pay salaries would have massive impacts on productivity numbers. (I am not arguing that would necessarily be good[1], but it would show up in the economic stats, in line with bglazer's point).

[1] The big brain take behind "necessarily" is that: "no salaries, everything is free" seems cool, "no salaries, a few people hoard all the wealth" seems bad.


> Anything that removes the need to pay salaries would have massive impacts on productivity numbers. (I am not arguing that would necessarily be good[1], but it would show up in the economic stats, in line with bglazer's point).

I agree with that. My point was that not all "doom scenarios" require the kind of "quasi-magical abilities" that he tried to use to dismiss them all.

IMHO, the sci-fi "superintelligent AI sends terminators to kill all the people" doom scenarios are distractions (possibly intentional distractions) from far more plausible doom scenarios.

> [1] The big brain take behind "necessarily" is that: "no salaries, everything is free" seems cool, "no salaries, a few people hoard all the wealth" seems bad.

If the former were plausible, we'd almost be there even without AI, so it's either a fantasy or the work we need to get there is not technological. The latter is basically 2024 but more so.


Or to put it another way, AI can be dangerous even when it's being totally helpful... when it is enabling some of humanity to screw things up for everyone else out of greed or anger.


While I agree with you, have you found anything related to how you would ration goods in a system where AI could provide everything and it was all free?


> That's (a slow) doom for the vast majority of the population, but not for humanity because the wealthy elite would survive.

People throughout history have replaced the "elite" once the cost to do so became less than the cost (or benefit) to continue on in the status quo. The perceived "doom" would not last very long (in the greater span or sense of time) as it would all be reorganized, yet again. In that case, I am not so sure it's a "doom scenario" but instead something like the natural way of things, or history continuing on. And, maybe even a needed evolution of the species in that case.

Things would have to be "quasi-magical" to play out (for the rest of human eternity) in the way a lot of the AI doom musings point to.


> People throughout history have replaced the "elite" once the cost to do so became less than the cost (or benefit) to continue on in the status quo.

Past performance is not a guarantee of future success. The elite throughout history relied on masses of common people to get anything done (do work, go to war, etc.). AI as-hyped has the potential to upend that long standing dynamic.

The elite aren't going to be replaced if they and the resources they care to protect are ensconced in a 50 mile buffer patrolled by (among other things) drone-swarms that will shove grenades in the face of any unauthorized human that dares enter. That technology is pretty close to reality, as it's basically the Ukraine/Russia front lines with 5-10 more years of drone development.


> Past performance is not a guarantee of future success.

True.

I am still of the belief humans will continue to human. If an uprising of billions vs. millions ends with the billions losing, the "elite" would also no longer be "elite".

While I have no data other than observation, the "elite", imo, need the validation of their status more than those who have none and gave it to them in the first place. If they do something so brash as to rid the world of these unwashed masses, they have no one left to rise above. I don't see them all of a sudden becoming great members of this new society; they wanted to own society, not be citizens in it. How that wouldn't lead to self-elimination at that point is hard to imagine - thus a needed evolution of the species (even if that's extinction and another hominid group later inheriting the leftover mess).


But no salaries -> no consumption -> no profit for companies. I don't see how that's effective capitalism.


> But no salaries -> no consumption -> no profit for companies.

No, you just need a little more imagination. What would the economy look like if you didn't need employees to utilize resources or undertake projects? Some very mass-market consumer oriented companies would eventually go bankrupt during the transition, but that doesn't mean a doom scenario like I describe can't or won't happen.

> I don't see how that's effective capitalism.

And you know what? The end result of the transition might only vaguely resemble current-day capitalism, and may be more properly understood as a transition to a new economic system.

However, I think it would probably be like our current economy, except with:

* the B2C segment withered mainly to luxury goods of elites (yachts, supercars, elite-level fine dining) and sustaining a much smaller elite-service-focused labor pool;

* B2B being most of what's left of the market; and

* non-market vanity projects taking up a substantial slice of the pie (e.g. Elon Musk using a robot army to build ziggurats to celebrate himself, because why not).


This misunderstands the economy. It's driven by household and government spending. All that B2B stuff happens because some company, somewhere at the end of the chain, sells to households or the government.

Sure, there is also business investment. But that only happens because someone expects to sell to households or the government. If all you are selling is luxury goods, which depend on direct high labor content for their value, then most of the B2B businesses disappear also.

And yes, there are exports. But not at global scale. The globe is closed.


> This misunderstands the economy. It's driven by household and government spending. All that B2B stuff happens because some company, somewhere at the end of the chain, sells to households or the government.

No, I didn't misunderstand it. Let me put it in your terms: the remaining elite become tiny but massively consuming households, and the distinctions between the businesses they control and their households will be blurred (e.g. Blue Origin being wholly owned by Bezos). All the economic actors that cannot provide value to these households will get slowly cut off and wither away, and these elites (collectively) will eventually cement total control over all the valuable resources they have some use for (production capacity, compute, energy, raw materials, etc).

To go back to a question I asked upthread: what would the economy look like if you didn't need employees to utilize resources or undertake projects? What happens when most households aren't competitive at anything economically, but there's still a lot of stuff out there to be owned? My answer is those who own the right things (e.g. AI-run, totally automated, adaptable factories; energy resources) will be able to support themselves indefinitely with them and have massive personal surpluses they then use for vanity projects to keep themselves occupied.


> All the doom scenarios seem to rely on quasi-magical abilities springing from advanced intelligence

The number of people who dismiss doom risk with some version of "I'm not smart enough to imagine what something far smarter than me might be capable of" is staggering. "I can't imagine how nanobots might work, therefore they must be impossible."

>Then, suppose AI does somehow think itself into effecting human annihilation, what happens when all the humans are dead? At the current state of things, all electricity production on the planet goes dark in maybe two weeks. Nuke plants irradiate significant parts of the earth, chip fabs are bricked, metal production is toast. In short the AI necessarily dies too.

Smart enough to eradicate humanity but not smart enough to plan ahead to keep itself running. Ok buddy.

>The only way for this not to happen is AI controlled robots first gaining complete control of massive chunks of the economy. If AI is as useless as the article claims then that will effectively never happen.

"AI hasn't taken over large parts of the economy, therefore it never will." I really just can't even...


Mostly, the people dismissing the x-risk concern do so on the grounds that what is being sold is not, and cannot become, smart in the way x-risk stans claim it can, which is true. If you want to argue the contrary case, starting from the actual point of disagreement is probably a better play, because the argument you've actually made here constitutes no answer.


>what is being sold is not and cannot become smart in the way it's being sold by x-risk stans

Do you have actual evidence for this astonishing take? Because I'm sure the entire field would be entranced by your genius here. It's actually completely UN-clear what the limits to LLMs are and it's completely ridiculous to say otherwise.

>starting from the actual point of disagreement is probably a better play

Fundamentally most e/accs or other anti-doom stans believe that AGI/ASI are actually impossible, that nothing can be smarter than humans and that human brains constitute special magic meat. Almost all of their other wrong beliefs stem from those.


If the only way you can see it is on the spectrum of x-risk maximalist to e/acc, there's no point in us trying to have a conversation here. The implicit status game is meaningless to me and I can't see how any of this helps better reflect reality.


How is this a spectrum? There is indeed a spectrum of p(doom)s which represent meaningful positions about risk, and then there are e/accs, a group of misanthropic weirdos who are actively interested in pursuing the extinction of our species. If you can't see the difference then there is indeed nothing to discuss.


I see a difference, but only one of degree. Both tendencies are motivated by the same postulates, which I find extremely silly.


Heh, just last week we saw what happens when a single random vendor gains control over large chunks of the economy. Productivity in companies is weird: it's like it seeps in slowly over years without much improvement, then suddenly it's everywhere.


How would the AI keep itself running?


"I'm not smart enough to imagine how something might work therefore it's impossible"


This is magical thinking. Like, the laws of physics still apply. Computation depends on power generation, and both *require* physical objects that break and decay. Who fixes the broken stuff when all the humans are dead?


Pascal's Wager is a terminating line of thought, adding a tech coat of paint doesn't make it any more worth discussing.


How many people and resources (dollars) would you estimate are involved in professional track and field?

Bonus points if you express this as a proportion of total human population and economic activity


Yes, spaced repetition and regular testing are effective in a purely mechanical sense. They’re good ways to acquire knowledge and skills. But, it’s really worth noting second order considerations. Does the median student enjoy the testing and repetition enough to be motivated to do them? Do they understand the utility or applications of the skill? Are you even teaching the right set or sequence of skills? I think these considerations have gotten more traction in recent years, possibly at the cost of acquiring the skills themselves. That said, I don’t think the cynicism in the article about teachers just wanting fun easy activities is warranted. They have to consider other things than the most mechanically effective methods of learning.
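
For anyone curious what "mechanically effective" looks like under the hood, the scheduling core of spaced repetition is tiny. Here's a sketch of the classic SM-2 update rule (the standard published parameters, nothing specific to this article):

    def sm2_update(interval: int, reps: int, ease: float, quality: int):
        # One SM-2 review step. quality: 0 (total blackout) .. 5 (perfect recall).
        if quality >= 3:                   # successful recall
            if reps == 0:
                interval = 1
            elif reps == 1:
                interval = 6
            else:
                interval = round(interval * ease)
            reps += 1
        else:                              # failed: restart the card
            reps, interval = 0, 1
        # Ease factor drifts with answer quality, floored at 1.3.
        ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
        return interval, reps, ease

    # A card recalled successfully (quality 4) five times in a row:
    state = (0, 0, 2.5)
    for _ in range(5):
        state = sm2_update(*state, quality=4)
        print(f"next review in {state[0]} day(s)")   # 1, 6, 15, 38, 95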


I’m begging everyone on HN to understand that Michael Levin’s work is not the beginning and end of biology. This is not meant to disparage his research, it’s really quite cool, but bioelectric signaling is just one piece of a very complex, very large, and very interesting story.

I only say this because he is invariably mentioned in any story related to biological research. I think this is partially because his work is cool, he has been on Lex Fridman’s podcast several times, and he’s very good at talking about his research. So, again, not knocking his work, just don’t forget there’s a lot more bio research out there that’s equally interesting.


Can I get some links please? Interested in what's out there too.


Great post, you're clearly on the right track. I totally agree that there is a major gap in modern theoretical understanding of how and why complex systems emerge. Breakthroughs in understanding the physical/informational processes that underlie complex adaptive systems could be immensely useful.

I'll add a word of caution though. I'm most familiar with systems theory applied to biology. Biology is, in my opinion, the pinnacle of complexity. However, it's less well acknowledged that it's also very, very complicated.

This is important because it means that we have very incomplete knowledge of the base components of any biological system. Like, we still don't really know the basic biochemical function of most proteins. Hell, we only just got a partial view of what most proteins even look like (in isolation) via AlphaFold. Measuring the number of all of the proteins in a single cell is effectively impossible with current and near-future technology. Any feasible solution for this would probably be destructive, meaning that true time-series measurements are also impossible.

These details of what we know and what we can (or can't) observe matter quite a lot, not only because they are the sort of raw matter of a systems theory, but also because they are the levers that we have to use to manipulate the system. There are only about 1000 proteins that we know how to reliably bind molecules to. There are (probably, we're not sure) more than 50k different proteins, if we include isoforms. So, all that to say, we have very incomplete knowledge of biology and very incomplete control of cellular behavior.

This isn't meant to discourage you! Instead, I think there's a tremendous opportunity for systems theory to be really useful (especially in biology) if it becomes a practical, routine analysis like statistics. But, for that to happen, we have to keep in mind the limitations and specific details of the system we're dealing with.


Thanks a lot for your kind words!

Really like your thoughts!

Indeed, lack of time-series observability makes it harder for us to find general patterns or causal events.

Definitely agree that biology is the pinnacle of all complexity - IMO something like macroeconomics or human behavior within set systems (society, politics, etc.) is fairly reducible to a very small and finite set of incentives that agents optimise for (food, shelter, status, acceptance, etc.).

Given this, non-linearity and stochasticity still add up to a general non-determinism for the entire system.

Biology, on the other hand, is extremely complicated to study because - correct me if I'm wrong - it's still hard to tell what the agents in these systems are optimising for. Reduction of free energy? Reproduction? General homeostasis? All of these play varying roles in different contexts, and then we still have to figure out how/why self-assembly and "wholes" emerging from smaller "wholes" (... ad infinitum) actually happens.

These are really fuzzy thoughts, but I believe there is some merit in exploring reducibility and observability from a time-series perspective, while considering the effects of synchrony/asynchrony of observation, and later how much we can desirably steer systems. Really fuzzy, but I hope to work on this a bit more.

Thanks a lot for your very interesting comments! Not discouraged at all, love your view on systems theory being a "routine analysis" like statistics, i.e. a very generally applicable layer or meta-science that's an entirely new way to see things, which I should've articulated better in my post.


Interesting stuff!

I'm mostly thinking individual cells in a multicellular organism (i.e. lung cells in a person). It is indeed very hard to understand what they are optimizing for. Obviously, the organism as a whole is under selective pressure, but I'm not sure how much an individual cell in a given organism actually "feels" the pressure. Like, they undergo many cell cycles during one organism's life, but they're not really evolving or being selected during each cell cycle. Of course, this isn't always true as tumors definitely display selective pressure and evolution. But for normal tissue, I prefer to think of cells as dynamical systems operating under energetic and mass flux constraints. They're also constrained by the architecture of the interactions of the genes and proteins in the cell. All that adds up to something that looks a lot like evolutionarily optimized phenotypes, but I think that might be a bit deceptive, as the underlying process is different. It's not at all clear to me though. You're really getting at some deep questions! You might find this paper interesting in that regard:

https://www.nature.com/articles/nmeth.3254

Regarding reducibility and observability of time series, you might also find work from James (Jim) Sethna's lab at Cornell interesting. The math can be a bit hairy, but I think they do a pretty good job at distilling the concepts down so that they're intuitive. The overall idea is that some complex systems have "sloppiness", like some parts of the system can have any kind of weird, noisy behavior, but they don't change the overall behavior that much. Other parts of the system are "rigid", in that their behavior is tightly connected to the overall behavior.

https://arxiv.org/abs/2111.07176v1
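
To make "sloppiness" concrete, the textbook toy example is fitting a sum of exponentials: the eigenvalues of the fit cost's Hessian span many orders of magnitude, so a couple of parameter directions are rigid and the rest are sloppy. A rough numpy sketch of that demo (my own construction, not code from the paper):

    import numpy as np

    # Toy "sloppy" model: y(t) = sum_i exp(-theta_i * t), fit by least
    # squares. Near the best fit the cost Hessian is J^T J, where J is the
    # Jacobian of the predictions with respect to the parameters.
    t = np.linspace(0, 5, 50)
    theta0 = np.array([0.5, 1.0, 2.0, 4.0])      # "true" decay rates

    # Jacobian: d y(t) / d theta_i = -t * exp(-theta_i * t)
    J = -t[:, None] * np.exp(-np.outer(t, theta0))
    H = J.T @ J

    eigs = np.sort(np.linalg.eigvalsh(H))[::-1]
    print("Hessian eigenvalues:", eigs)
    print("spread (orders of magnitude):", np.log10(eigs[0] / eigs[-1]))
    # The eigenvalues typically span several decades: a few "rigid"
    # directions dominate the fit, and the rest are "sloppy".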

You ought to get yourself connected with some folks at the Santa Fe Institute, if you haven't already. I know one affiliated professor, let me know if you want an introduction. At the very least, if you like podcasts, check theirs out. It's called "Complexity" and it's quite good.


Thank you so much for the link to those two papers. I'll try and go through them.

>Like, they undergo many cell cycles during one organism's life, but they're not really evolving or being selected during each cell cycle.

This is a really interesting perspective.

>You ought to get yourself connected with some folks at the Santa Fe Institute, if you haven't already. I know one affiliated professor, let me know if you want an introduction.

I have read a few posts from SFI faculty and seen some video lectures of Krakauer and others, but as you said, I should get in touch to some degree.

You're very kind and I really appreciate you offering to intro me! I would really love that!

Would you mind if I follow up on this via e-mail? Can I send one to the address mentioned on your Vanderbilt department page?

Thanks a lot!


Yep email me!


elephant repellent problem? What is that?

This is literally the first occurrence of that string on the internet.


Better known as the Elephant Repellent Fallacy — a claim that a preventative is working when, in fact, the thing it prevents rarely or never happens anyway.

"Hey you better buy my elephant repellent so you don't get attacked!"

'Okay.'

...

"So were you attacked?"

'No, I live in San Francisco and there are no wild elephants.'

"Well, I guess the repellent is working!"


Also known as the Anti-Tiger Rock: https://youtu.be/xSVqLHghLpw?si=fRraLZJ9q_rDR-UV


I know this as 'Mom's cooking drove the vampires away'


