If I understand this correctly, they added a distance property to a NN and set up training to penalize increased distance between nodes, simulating a physical constraint. Under these conditions 'hubs' emerged which facilitated connections across distance. The other observation was that "the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations".
The work suggests that existing approaches to neural network architecture would benefit from more closely emulating the operation of the brain in this regard.
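If that reading is right, the distance penalty is easy to picture. Here's a minimal sketch of what such a wiring-cost term might look like (my own guess at the idea, not the paper's actual code; all names and shapes are made up):

```python
import torch
import torch.nn as nn

# Hypothetical setup: every hidden unit gets a fixed 2D coordinate, and the
# usual task loss gets an extra term that charges long connections more than
# short ones, so strong long-range weights have to "earn their keep".
n_hidden = 64
coords = torch.rand(n_hidden, 2)            # random positions in a unit square
dist = torch.cdist(coords, coords)          # pairwise Euclidean distances

recurrent = nn.Linear(n_hidden, n_hidden, bias=False)

def wiring_cost(weight, dist, strength=1e-3):
    # |w_ij| weighted by the distance between units i and j
    return strength * (weight.abs() * dist).sum()

# inside a training step (task_loss is whatever the task defines):
# loss = task_loss + wiring_cost(recurrent.weight, dist)
```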
> "the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations"
That may sound surprising, but it's actually not unusual. It's worth remembering that the difference between highly local encoding and everything encoding everything at the same time is often a matter of a reversible transformation.
For example, take the FFT of an image, and the result will be an image where each pixel encodes information about all pixels of the original image simultaneously. And the Fourier transform - a shift from a time/spatial domain to the frequency domain - is quite simple, very useful, and occurs in nature.
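A quick toy check of that claim (just numpy, nothing fancy): change one pixel and see how many Fourier coefficients move.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))                    # a tiny random "image"

perturbed = img.copy()
perturbed[3, 5] += 1.0                      # touch a single pixel

# every DFT coefficient is a sum over all pixels, so they all change
diff = np.abs(np.fft.fft2(perturbed) - np.fft.fft2(img))
print((diff > 1e-12).sum(), "of", diff.size, "coefficients changed")  # 64 of 64
```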
Sounds a bit like it's tending towards the behaviour of grid cells. Rather than encoding a specific, unique place, grid cells light up periodically in the latent space they encode, with different grid cells having different periods and different latent spaces. A given combination of grid cells uniquely identifies a thing/space/concept/location/datapoint/... token?
Of course for any given location this wraps around with a period of the least common multiple between the grid cells' periods (I think? Had to ask Bing about that one, he was helpful for once). This is why if you try to map locations you get a hypertorus.
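Here's the toy integer version of that wrap-around point (real grid cells don't have integer period ratios, so this is just for intuition): the combined code is unique only up to the lcm of the periods.

```python
from math import lcm  # Python 3.9+

periods = [3, 4, 5]                      # toy "grid modules"

def code(x):
    # each module only reports position modulo its own period
    return tuple(x % p for p in periods)

seen = {}
for x in range(200):
    if code(x) in seen:
        print(f"x={x} has the same code as x={seen[code(x)]}")
        break
    seen[code(x)] = x

print("lcm of periods:", lcm(*periods))  # 60 - the first collision is at x=60
```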
FFTs can be useful when constructing CNN pipelines in unsupervised domain adaptation too. Fourier domain adaptation (FDA) uses the calculated amplitude of a target set to modify the source (train) set. Cool and quick to implement.
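For anyone curious, a single-channel sketch of FDA as I remember it (the real method works per color channel and the band size beta is a hyperparameter; details may differ from the paper):

```python
import numpy as np

def fda_source_to_target(src, trg, beta=0.1):
    """Keep the source image's phase, swap in the target image's
    low-frequency amplitude; beta controls the swapped band size."""
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_trg = np.fft.fftshift(np.fft.fft2(trg))

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)

    h, w = src.shape
    b = int(min(h, w) * beta)               # half-width of the low-freq band
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_trg[ch - b:ch + b, cw - b:cw + b]

    mixed = amp_src * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

# e.g. on two random grayscale "images":
rng = np.random.default_rng(0)
adapted = fda_source_to_target(rng.random((64, 64)), rng.random((64, 64)))
```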
I wonder how pleased Joseph Fourier would be to see how his work has become so pivotal in a world probably unimaginable in his time. Not just for ego's sake, but just how damn useful the product of one person's effort becomes to everyone else.
I don't think it has to be the Fourier transform necessarily.
I think the Fourier transform might be a really approachable version because it is already in so much signal processing. Sometimes its periodicity can work against you.
From what I can tell, what you need is:
1. An invertible, identifiable basis
2. Something like Parseval's Identity
3. The ability to "smooth" out things like the Heaviside function or Dirac deltas
4. Workability on graphs as well
5. Decay at infinity
6. Eigenfunctions that are all orthonormal
Turns out, making something like that gets you something similar to the Fourier transform, and the math is probably the simplest, since you are working with easy exponentials that work nicely with convolution.
People have done similar work with Chebyshev polynomials, and there are actually a lot of applications in ML/AI using Chebyshev polynomials in graph neural networks and triangulation.
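A minimal sketch of the Chebyshev trick in the graph setting, as I understand it (tiny dense example; real implementations use sparse matrices and estimate lam_max rather than computing eigenvalues):

```python
import numpy as np

def cheb_filter(L, x, theta):
    """Approximate a spectral graph filter as sum_k theta[k] * T_k(L_tilde) @ x
    using the Chebyshev recurrence, so no eigendecomposition of L is needed
    for the filtering itself (needs len(theta) >= 2)."""
    n = L.shape[0]
    lam_max = np.linalg.eigvalsh(L).max()    # fine for a tiny demo graph
    L_tilde = 2.0 * L / lam_max - np.eye(n)  # rescale the spectrum into [-1, 1]

    t_prev, t_curr = x, L_tilde @ x          # T_0(L~)x and T_1(L~)x
    out = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_prev, t_curr = t_curr, 2.0 * L_tilde @ t_curr - t_prev
        out = out + theta[k] * t_curr
    return out

# e.g. a 3-node path graph, filtering a delta signal on node 0:
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A
print(cheb_filter(L, np.array([1.0, 0.0, 0.0]), theta=[0.5, 0.3, 0.2]))
```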
edit:
The Fourier transform also comes about pretty nicely in quantum mechanics because of Poisson bracket stuff and its nice derivative properties for position-momentum space. I don't think other transforms/function bases come out so naturally.
This is a great example why maths people can't carry on conversations with non-maths people. I'm sure everything you just said is very possibly true, but I have no idea. This would be like me asking ChatGPT a question, and it all sounds believable, but I have no way of checking if it is true.
Why do you expect to understand an expert's talk without yourself being knowledgeable about the topic/subject? With all due respect, this cockiness is unfortunately quite common with software developers (myself included), as if being good at one thing automatically translates to a different domain.
Harmonic oscillators naturally sample Fourier transforms; for example, the different cilia lengths in our ears are used to transform sounds into the frequency domain, and the amplitude of the various frequencies is how our nervous system takes in sound.
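A crude numerical cartoon of that idea, under made-up constants (not a model of the ear): a bank of damped oscillators driven by the same two-tone signal, where the strongest responder sits at the louder tone.

```python
import numpy as np

fs = 16000.0
dt = 1.0 / fs
t = np.arange(0, 0.5, dt)
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

resonances = np.arange(200.0, 1600.0, 40.0)   # "cilia" of different lengths
gamma = 30.0                                  # damping
peaks = []
for f0 in resonances:
    w0 = 2 * np.pi * f0
    x, v, peak = 0.0, 0.0, 0.0
    for s in signal:
        a = s - 2 * gamma * v - w0 ** 2 * x   # driven damped oscillator
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    peaks.append(peak)

print(resonances[int(np.argmax(peaks))])      # 440.0 - the stronger input tone
```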
> The work suggests that existing approaches to neural network architecture would benefit from more closely emulating the operation of the brain in this regard.
Does it? My read is that biological neural architecture is a consequence of biological constraints that artificial NNs mostly don't face, so we don't necessarily need to try to copy it.
We've developed numerous different learning algorithms that are biologically plausible, but they all kinda work like backpropagation but worse, so we stuck with backpropagation. We've made more complicated neurons that better resemble biological neurons, but it is faster and works better if you just add extra simple neurons, so we do that instead. Spiking neural networks have connection patterns more similar to what you see in the brain, but they learn slower and are tougher to work with than regular layered neural networks, so we use layered neural networks instead.
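For anyone who hasn't seen one, here's roughly what the unit of a spiking network looks like - a toy leaky integrate-and-fire neuron with arbitrary constants, not any particular published model:

```python
import numpy as np

def lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (-v / tau + i)          # leak toward 0, integrate input
        if v >= v_thresh:                 # fire and reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
train = lif(rng.uniform(0, 150, size=1000))
print(sum(train), "spikes in 1 s of noisy input")
```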
Biology is biology and silicon is silicon. Sometimes constraints can be just that - constraints, and not some secret sauce.
I'm interested in the 'slower/tougher' route you mentioned. Do you recall the final performance? I'm assuming it was equivalent or worse. Given the vast disparity in power consumption between silicon and meat space, it's surely a rich area to explore, especially given the feedback loop where AI is being effectively used for more accurate observation and interpretation of brain activity.
Also, biological “architecture” is so different compared to silicon design that it might just be the case that different algorithms perform best for the respective designs. Biological systems are effectively huge distributed computing systems, where each unit is very weak. Backpropagation probably can’t easily be implemented for that.
Depends on your goal. If you want to simulate human intelligence, maybe you want to replicate constraints.
A fair chunk of AI work boils down to “make something that acts like a human.” On the other end of the spectrum is stuff that is more specialized, like very targeted classifiers; there is no reason to expect those would benefit from this.
Has there been research on whether it's possible to simulate human intelligence without also getting all of its flaws? Susceptibility to logical fallacies, random difficulty with factual recall, fading memory over time, biases, prejudice, etc.
That's something you could philosophise about but not research unless you already have a human level intelligence to test. We won't know if it's even possible to replicate in silicon for a very long time
No, AI work boils down to “output computation results that humans hallucinate could have been generated by a human”
We’re just normalizing to our innate sensibilities. We have no idea if we’re making intelligence or what that even means as us humans must work within the constraints we evolved into. We have no idea if we generalized consciousness as it could exist across space time.
You and I will never exist outside our universe and observe what makes it tick. We’re hanging out on Earth making mannequins talk, hallucinating we’re gods because of it. Humans are artificial intelligence given their lack of direct observation of so much of the universe.
Conway modelled his Game of Life after the universe. Sorry, AI researchers: life, consciousness and visualization were already created by reality. We’re just working on an easy-to-use Dewey Decimal system to catalog it.
Eh, maybe. The question there becomes, how much of human behavior is specifically a consequence of the physical constraints of laying out our neurons? I'm guessing not much. The article doesn't mention any change in output prompted by the change in internal architecture, which doesn't really confirm my guess, but doesn't falsify it either.
there have been physics simulations of bipedal models learning to walk via reinforcement learning. the results were always a bit choppy and robotic, until they implemented signal propagation delays. this led to more natural, fluid movements that definitely looked more "human". sorry, can't find the video.
personally i absolutely do think that for generating convincingly human-like intelligence you also need some human constraints, otherwise you will get some uncanny valley.
another example would be AlphaZero's play style. AIs don't play like humans; they maximize their chance of winning in the long term without going for good-looking opportunities that hurt their chances in the long run (like human players do).
Did the bipedal walking people try optimizing for energy instead? If so, and the models were still choppy, maybe that's actually better.
I guess I just find the goal of imitating human intelligence including all its mistakes to be a silly goal. The only time you want that instead of an actual human is if you're trying to deceive people into thinking your AI is a human. Otherwise, you just want the correct answer (or, if you're afraid, you want a strictly sub-human intelligence).
I agree with this; I should have been more specific. I really meant it from the researchers' perspective: they are trying to make NNs that more closely model the brain for their brain research, not for AGI. In that regard, the research was looking at how constraints affect learning, and whether applying constraints might result in a network that more closely mirrors brain activity. For AI purposes, we want smart machines, not necessarily a model of the brain with all its additional functions aside from 'intelligence' :)
We don't need to copy but I think the hypothesis is that constraints like ours help networks optimize towards outcomes like ours, and it seems to test well in this case.
If this is the case, the finding sounds trivial. I thought it was literally a theorem that hubs emerge from networks if you minimize distance or wiring cost.
Maybe. I think the research here is from a brain perspective; AI and brain science are having an interesting time, with learnings from both feeding in. This article is focused on the physicality of networks in the brain, and on whether, by increasing the alignment between the way a neural network works and the way brains work, they can develop a 'digital brain'. For me this is distinct from machine intelligence: this is emulating brain activity to provide cognitive and psychological insight in meat space.
Topology is hard for me, and visualising multi-dimensional topological manifolds even more so. However, I am intrigued by the opportunities. We use graphs and other forms of visualisation to reveal topology, and it seems reasonable that behaviours of NNs that are 'transferrable' (e.g. trained on one set of data and able to operate on another category of data) may be a 'shape' - a complicated one, but perhaps one that, once revealed, would aid in the understanding of what happens under the hood.
There must be some difficulty in studying biological brain intelligence to distinguish between features that contribute to intelligence and those which are simply path dependencies from millions of years of nervous system evolution.
Over that time, brain function was mostly concerned with aiding the organism to find food, grow, reproduce, and avoid being eaten, rather than language, logic, mathematics, arts, and so forth. It's rather astonishing that humans are somehow now able to do the latter, using brains evolved to do the former.
We appear to have a capacity for substantially greater sophistication in those domains, but none are unique to us except when we artificially define them to be. Remember that words like "language", "logic", "art", etc. are cultural inventions with a fuzzy and fluid relationship to whatever real-world "stuff" they refer to, not natural kinds that themselves have sharp and perennial definitions.
Unless you choose to define the word as that which only humans can achieve, a spider's web elegantly reflects "mathematics" just as much as some beautiful proof in set theory; a conflicted bird debating itself over which stem to use in its nest reflects artistic attention just as much as a painter choosing their next color; a cat chirping or mewing or yowling reflects language just as much as me writing this comment.
The sophistication doesn't go as far, by our eye at least, in any of these animal examples, and so we don't expect the spider to confirm Fermat's Last Theorem or the bird to feature their nest in a gallery (actually...) or a cat to compose formal poetry, but the essential bits that we extend with our sophistication are all ancient and widespread throughout nature.
It's still astonishing that any life can do so many of the things it does, but I guess that's apparently what billions of years of "pretraining" on unfathomably efficient machines gets you.
Incidentally, it's wild to see people believe that a stream of fmults pushing through a trillion transistors would get you even close to the sophistication of any of life's intelligence. For current-AI-skeptical materialists, it's usually not a doubt about whether silicon and software might conceivably be intelligent, but it can just seem absurd to believe the grossly crude and narrow innovations of recent years are even close. You need to have a very shallow, narrow, almost willfully blinded, appreciation of the "intelligence" exhibited throughout all biological life to think that you unlocked the silicon version of it all in a pretty-good chatbot running on Azure.
It isn't just humans that can do math. Ants do geolocation [0] by counting their steps in relation to the distance they walked from their nest. They do this with just a small number of neurons.
Well, humans can do any math that human brains can comprehend. That doesn't mean "any math", any more than if a chicken were to claim that chickens can do any math (because they cannot comprehend human math).
Why would there be a difference at all here? Finding food and avoiding being eaten requires a great deal of intelligence, is surely the basis of human logic, and is the foundation for language. It’s all one interconnect network and every bit of it is a piece of the greater intelligence.
I think the emergent-behavior explanation is still the most plausible. There really is not much difference between our brain and a chimpanzee's; the genetic differences definitely don't account for our brain being that much more intelligent.
I'm just surprised this hasn't been done before. Isn't this the kind of low-hanging fruit that would have been studied at some point in the last 40 years?
It was ... look up "AI winter". Vastly simplified, we collectively ignored ANNs until somebody figured out that we now have enough resources to create larger ANNs than before.
Definitely familiar with the AI winter... which is why it seemed odd that this current research is being presented as new. Guess it's just a rehash of the old ideas.
One thing that’s always struck me as strange about modern NNs is they tend to be two-dimensional and one-directional, right?
Whereas our brains are three-dimensional and have neural feedback loops. It seems clear to me that those feedback loops are an essential part of “thought”.
Any idea why more architectures don’t have neural loops built into them? Is it because they would cost too much to train? Lack of control?
Good point, and yes, feedback is critical in CNS networks at several levels.
But much of neural network analyses started with neural retina (retina is a fun word that is derived from reticulum = net). And the retina is mainly a feed-forward system or was perceived as such in the 1950s through 1970s.
Even the cortex was and still is crudely modeled using mainly feed-forward connections.
But the more you know about CNS structure and function, the more you see and appreciate feedback. Horizontal cells in the retina provide a strong feedback integration/bias signal back to rod photoreceptors. Reciprocal dendro-dendritic synapses are key in many systems. Presynaptic terminals also respond to the very transmitters that they release and to the ion fluxes that they induce.
At the level of connectomes the recursion is deep and hierarchical - see the lovely work by David van Essen and colleagues on the laughably complex connections among the dozens of cortical regions that respond to visual input from the thalamus and midbrain (and yes, this is true even in the mouse).
The ultimate feedback is of course generated by our own behavior - whether a blink, a saccade, or our own movements.
Plunk an LLM in a Roomba, but with more input modes and more recursive self-prompting. That is a non-threatening start at what could also become SkyNet.
Our brains are highly concurrent/distributed systems where each neuron exists & acts independently of any specific task you give the network. Inputs (both from external sources/senses as well as other neurons) might come in at different times, and neurons continuously compute their excitation levels (and then, with a bit of delay, might or might not go on to excite their neighbors). Put differently, there is no such thing as an inference "run", where you provide an input to the NN, follow its topology and do all your matrix computations, and then get an output. It's a highly cyclic machine that continuously reads inputs and continuously produces outputs. Heck, it seems that parts of the brain are active even when there's very little input and no visible output (like when you're dreaming). The feedback loops you mentioned certainly play a huge role here – excitations can take on a life of their own.
> Is it because they would cost too much to train?
I think it's both: Even a single run of, say, ChatGPT is already quite expensive in terms of compute, and now you want to keep the network "alive" at all times – that would surely cost orders of magnitude more. But on top of that, by introducing a time axis and continuously feeding inputs to the network and reading its outputs, you also lose that clear input-output relationship that training data exhibits these days. While it's certainly possible to introduce a time dimension to training data, too, that seems like a whole different ballgame to me.
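To make the contrast concrete, here's a cartoon of the "no inference run" picture (random weights, made-up tick structure; purely illustrative): the state keeps evolving whether or not new input arrives, and the output can be read at any time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
W = rng.normal(0, 1 / np.sqrt(n), (n, n))      # recurrent weights
W_in = rng.normal(0, 1.0, (n, 4))              # 4 input channels
W_out = rng.normal(0, 1.0, (2, n))             # 2 output channels

state = np.zeros(n)
for tick in range(1000):
    # input arrives only now and then; the state is updated every tick regardless
    x = rng.normal(size=4) if tick % 7 == 0 else np.zeros(4)
    state = np.tanh(W @ state + W_in @ x)
    y = W_out @ state                          # an output is always available
```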
I was just thinking lately that brains rewire themselves all the time. That's how they learn. Neural networks, though, never do. There's a separate training stage that (from my understanding) runs the thing in reverse and modifies the weights to move the output closer to what is desired. After that, the weights are never updated.
Also, yes, information always flows through a neural network in one direction, from inputs to outputs, then leaves it. Whether the inference result was correct or wrong, it doesn't have any effect on the weights.
So I'm thinking that there is a need for some new kind of neural network architecture that can 1) update its own weights all the time, not just when training, and 2) propagate information in the reverse direction from outputs to inputs, or have "backwards" connections between layers. Maybe have a set of outputs that tell the system whether and how the NN wants its own weights updated?
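Point 1) already has simple toy versions; here's a sketch of a local Hebbian-style update that changes the weights during use, with no separate training phase (illustrative only, not a claim about how brains or any particular architecture actually learn):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (8, 8))                 # weights that never stop changing
eta, decay = 0.01, 0.001

for _ in range(1000):
    x = rng.normal(size=8)                     # whatever input happens to arrive
    y = np.tanh(W @ x)                         # ordinary forward pass
    # local "fire together, wire together" update, plus decay to keep W bounded
    W += eta * np.outer(y, x) - decay * W
```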
You automatically get exponential, recursive and hyperbolic behavior when your predictions are recycled. On paper there is very little middle ground between rote repetition and a spiritual peak experience. This is probably why consciousness emerges.
The dimensionality isn't 2 or 3. Value systems may have such dual or triune symmetry. The dimensionality is IIRC said to be nominally 1000. I would wager the average is the same as the number of muscles in the human body.
More than that, our brains have multiple input and output devices that simultaneously interact with the brain. Proprioception, sight, hearing, touch inputs. Movement, cognition, hormone regulation as outputs. There's a biological basis for why walking stimulates thought, the same networks are involved.
It's not the same thing as training the same NxM deep neural network on multiple modalities and then sending one signal or the other into the input end.
This reminds me of a paper I read a few months ago about someone training a neural network to add modulo a prime number--with the catch that there was an L1 regularization term based on Euclidean distance between neurons and the weights were permuted after every iteration to minimize the total Euclidean distance. The point was that you could make very interpretable neural networks by doing this. Does anyone else know about that paper and remember its name?
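I don't remember the name either, but the regularizer as described might look roughly like this (all details are my guesses): distance-weighted L1 plus a greedy neuron-permutation step that lowers the total wiring cost.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 1, (16, 16))            # weights between two layers
pos_in = np.linspace(0, 1, 16)            # fixed 1D positions for input units
pos_out = np.linspace(0, 1, 16)           # and for output units

def wiring_cost(W):
    dist = np.abs(pos_out[:, None] - pos_in[None, :])
    return (np.abs(W) * dist).sum()       # the distance-weighted L1 term

# the "permute after every iteration" step, as greedy pairwise row swaps
# (a real network would also have to permute the matching columns downstream)
improved = True
while improved:
    improved = False
    for i in range(16):
        for j in range(i + 1, 16):
            W_try = W.copy()
            W_try[[i, j], :] = W_try[[j, i], :]
            if wiring_cost(W_try) < wiring_cost(W):
                W, improved = W_try, True

print("wiring cost after reordering:", wiring_cost(W))
```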
Is it me or is this article quite light on the specific brain-like characteristics that were developed? It seems like they're reading into it quite a lot.
I'd look at the underlying paper, https://www.nature.com/articles/s42256-023-00748-9. There's neuroscience references there (50-52) that support their hypothesis. Overall, the corresponding RNN work is meticulously designed and reported, it was a good read.
The concept of “engineered constraints” within AI is applied broadly across many implementations of classical AI.
Genetic algorithms are the best example of how structural constraints are a core assumption in the search architecture.
Similarly, the entire field of reinforcement learning assumes optimal goal policies are embedded within the state space, within the constraints of action-state interactions!
I think it’s not surprising that an ANN could form structures similar to the brain's. After all, there are universal laws of nature. The question IMO is whether forming such structures could help the network achieve better performance on tasks or not. Current large NNs are organized in layers without exception. Are the layers not equivalent to these hubs?
Hmm, I think for AGI we might need some artificially generated biological neurons / brain matter, combined with cybernetics. But it should be able to reconfigure neurons, either by removing chunks and replacing them with new chunks, or by having some nano machines work on the brain matter.
The integration of spatiality seems to allow for greater modularity and sparsity, but I'm not yet sure that it implies an increase in intelligent ability. If anything it may hinder its capacity for computation.
My rationale comes back to the fact that a biomolecule's function is dictated by its structure, so it seems critical to have spatiality of some sort to achieve the qualities of living systems.
Samantha was frustrated as she sat in Dr. Wilson’s office, trying to explain her position. "The prefrontal cortex isn't intelligence itself though, it's just a model of neurons and synapses that seems correlated with certain cognitive functions,” she argued. “Calling it the 'intelligence center' reinforces a false and outdated phrenological view of localized mental functions in the brain. Intelligence emerges from the interconnected activity of billions of neurons across multiple areas, not a circumscribed 'smartness node' behind our forehead!” Dr. Wilson sighed, taking off his glasses. He had heard variants of this debate from students year after year, but Samantha was particularly adamant. “You make fair arguments,” he replied carefully, “but the key point is that while intelligence relies on distributed processing across networks, the prefrontal regions act as critical coordination hubs which can amplify or reduce overall function. Their crucial role warrants the common shorthand.” Samantha remained unconvinced, but sensing her professor’s weary patience she opted to leave further arguments for her term paper.
Honestly... I just wanted to see how well the "big bag of neurons" could handle the task of "write a short one-paragraph story with a university student arguing with their professor on how the section of the brain dealing with intelligence is just 'a model of neuron' and not 'intelligence'".
What is the probability that this will result in a scenario where we are playing God? And if that probability is even moderately nonzero, like >= 5%, can we really afford to take that chance?
There is no royal we here. There never is. So the right question is: can you (individual, country, company, etc.) afford to have your rivals, competitors, enemies, etc. gain the upper hand here? What these have in common is that they take decisions independently from you. Basically, no matter how hard upset AI skeptics stamp their feet in California (or wherever), insisting on whatever convoluted doomsday scenarios, somebody is going to ignore them and go right ahead anyway. Given that, AGI is basically going to happen. Possibly sooner rather than later. The only real question is who gets there first and where and how.
AI safety is a bit like nuclear arms safety. You don't get any safer by not having any nuclear weapons; you just put yourself at the mercy of others with nuclear weapons. The reason nuclear warfare hasn't happened so far is mutually assured destruction. That's why lots of countries seek to have them. Non-proliferation treaties have slowed that down, but there are probably at least ten or so countries with nuclear weapons at this point.
With AI, it's a lot less clear cut. It's basically going to be about dominance and outsmarting the other side. The downsides are basically more hypothetical / ethical / moral. And when it comes to ethics and morals, there definitely is no such thing as royal we. Countries like China and Russia are likely to choose their own path. Plenty of other countries that will want to get ahead here.
I agree with your analogy to nuclear weapons to some extent. Preventing AGI is going to be hard. The incentives are too massive.
The difference between having nuclear weapons and building ASI is that you can choose not to use the nukes as long as nobody else does, while ASI by its nature needs to be deployed to some extent to exist at all.
Not only that, in the kind of competitive environment you describe, not only will the ASI's be deployed, they will most likely be in competition with each other, which creates the precondition for Darwinian evolution to start happening.
My personal belief is that while it may be possible to develop safe ASI as long as it is NOT subject to evolutionary forces, it probably still isn't if they are.
As far as I can tell, this is by far the greatest existential threat we're facing at the moment. Finding a way to deal with it, while working around the game-theoretic challenges you identify, may be orders of magnitude more urgent than, for instance, stopping global warming.
If we do manage to steer AI development toward a good (for humanity) outcome, I'm pretty sure ASIs will be able to find technological "solutions" to the problems caused by global warming (and maybe stop the warming itself). On the other hand, if self-replicating ASIs run wild, they would likely exterminate not only humanity, but also most other advanced life forms on Earth.
The outsmarting will look a lot different once it is achieved and then continue to evolve. The probability of peace, tranquility and humans living forever is 0% though.
The probability of humanity continuing to exist forever may be 0. But I hope we last more than a few more decades or centuries. And when we are gone, it would be great if we could make sure that our descendants carry forth the stuff we (when it happens) consider the most valuable aspects of humanity.
Whether that's the ability to create and appreciate art, beauty or the ability to experience emotions such as love and purpose, or maybe some abstraction, generalization or expansion of these that we may not comprehend.
Second this! We started playing with fire long ago. One culture can never own AGI. AGI then leads to superintelligence. We had better hope our super-AGI children are also far more emotionally intelligent than we are, and that they tolerate us and even teach us.
Maybe for safety we need more than one AI, so they balance each other like with nuclear weapons... It's just a stupid short guess along the lines of what you're saying. However, it looks very doubtful.
Might work until the AI starts self-improving. When it does, it's a race, and most likely the first one will win.
In general, nukes aren't a good analogy to AGI/ASI. The x-risks of AI are such that you can't use them in MAD fashion; AI is more like increasingly potent engineered pathogen - you can't do MAD with bioweapons, and you're one lab accident away from ruining the day for everyone on the planet.
It's really hard to know in advance what will happen when AI starts self-improving. It may be that it will cause a hard singularity. But it is also possible that access to compute resources, minerals, energy or something similar will constrain the rate of advance enough that latecomers can catch up.
Also, if a single entity gets too far ahead, it may be constrained politically or even militarily.
Let's say, for instance, that OpenAI created an AI able to suddenly design GPUs that outperform Nvidia's. That would be a sign of a quite hard takeoff, as it would quickly lead to MS/OpenAI gaining full control of both software and hardware markets.
This would be a huge incentive for the US government to either nationalize this AI, make its IP open source, or use some kind of anti-trust measures to slow them down.
> Also, if a single entity gets too far ahead, it may be constrained politically or even militarily.
The problem is, the time between when you notice and when it's too late to act, might be days, or hours, or seconds, or even negative.
> Let's say, for instance, that OpenAI created an AI able to suddenly design GPUs that outperform Nvidia's. That would be a sign of a quite hard takeoff, as it would quickly lead to MS/OpenAI gaining full control of both software and hardware markets.
That's solving the wrong problem. If OpenAI creates an AI able to design GPUs better than Nvidia's and it is general enough, the risk is that it'll arrange more compute for itself and train an even more capable version of itself. And you might not notice until a couple of iterations of this have happened.
> This would be a huge incentive for the US government to either nationalize this AI, make its IP open source, or use some kind of anti-trust measures to slow them down.
When that happens, the US government better be sending men with guns to forcefully shut the businesses involved down, and getting ready to lob some cruise missiles at some data centers for good measure.
> The problem is, the time between when you notice and when it's too late to act, might be days, or hours, or seconds, or even negative.
I agree. Especially if there is a hard take-off where AGI goes to 100x ASI in hours, days or weeks. Or the AGI could be really hard to detect, even if it took longer to develop. There is certainly a significant existential risk associated with AGI/ASI.
However, if we imagine a scenario where AGI was constrained ONLY by algorithms and code, and NOT at all by compute, then this kind of scenario would be much more likely. Lately, though, increases in AI strength have come more from increases in compute than from improvements in algorithms.
If (which we may hope is the case), an AI explosion requires hardware improvements to take off (even if the AI can design the hardware itself), then we can HOPE that an explosion would be easier to detect.
> When that happens, the US government better be sending men with guns to forcefully shut the businesses involved down,
All the actions I described are ultimately backed by the government's guns.
> and getting ready to lob some cruise missiles at some data centers for good measure.
This could be an option if the data center was outside the US. Inside the US, the government can shut off the power, if needed.
We've been playing God since we invented language and decided to make a religion that starts the universe with "In the beginning was the Word".
(That this new origin story was added to the religion somewhere between two centuries and a few millennia (and at least one radical theological split) after the oldest part was written down, doesn't change much).
As for "will understanding how minds work endanger everyone?" - sure, in a million ways to Sunday, both by our own hands and by the hands we make; and yet at the same time it might also solve all our problems and create a synthetic mortal heaven that lasts for 10^32 subjective years before the fusion fuel from the disassembled galaxies finally runs out.
The only way to even guess at the odds of these outcomes is to learn more.
The problem is that we can "get good" at things but we have hard limits that are pathetic in comparison to the complexity of the universe. We are slightly advanced monkeys. We poke around at things and claim to understand them but time and again there are unintended consequences. This is hilarious to watch in the context of AI. Monkey invents machine based on high school mathematics. Monkey scales up the machine and pokes around at it but can't predict what it will do or even explain how it did it in retrospect.
So what? 'AI' is just computer science finally growing up to match the complexity of innumerable other human endeavors such as biology. The normal state of human technology development is poking at things, just with progressively more calibrated sticks.
It's not the point of all advancement. The Amish for instance have accepted certain developments but with caution, and with a centralized system of governing technology so that it doesn't get out of hand. Or consider the Mayans, who tempered their advancement with their limitations in terms of resources.
We shouldn't do it, and we should stop and be happy with what we have and learn how to live better with what we have instead of inventing more trash like AI.
The Amish attitude towards technology isn't about technology per se, but about the desire to keep their style and size of communities working - something new technology tends to disrupt.
The Mayans didn't "temper their advancement with their limitations in terms of resources" - they were limited by available resources and time, therefore they did not advance as much and as fast as the others.
Advancement is driven by the basic need to make things better, whether for yourself or for others. And it stacks.
I'm not sure a cult like the Amish is a prime example, because if we all lived like that, child mortality would still be enormous and we'd fall short on a myriad of things that make our lives easier and better, because there'd be no scientific discovery.
I don't believe scientific discovery is making life much better. Some of it yes, of course, but the majority of inventions, like smartphones, social media, AI, 5G, fast transportation, fossil fuel extraction, and the global meat industry, are making it worse.
Technology does make certain TASKS in life easier, but I'd argue that it's NOT a logical consequence that technology makes life BETTER.
One could argue that the most important benefit we get from technology is that it allows more of us to survive, and for longer.
Eugenicists may argue that this is not really a good thing, since it leads to a larger and larger fraction of the population having poor health, some of which will be hereditary.
Agreed. People of the past lived happier, more in peace and more fulfilling years of life than modern people despite worse material conditions, violence, diseases and child mortality. Technology has been only harmful to humanity, and this is by principle; it's not a side effect. The human body and mind are literally designed to live in a world without organization-dependent technology[1], whether you believe in creation or evolution. We are literally not designed to live in a world of convenience.
1: Kaczynski’s term. Read his manifesto or even better the book Anti-Tech Revolution.
People of the past are not well recorded in history.
We don't know much about miserable medieval peasants, because they as a whole didn't know how to write, and the little the rare educated one wrote is unlikely to have been preserved. We know mostly what the rich or at least decently well off of the times have left, a lot of it biased or self-aggrandizing. Some of those wrote on things that they themselves didn't experience.
When you imagine happy peasants, you should question who wrote about them and why, and whether in those times there was any likelihood of anybody caring about the plight of suffering people, being allowed to write about it, and it being preserved to this day.
But besides that, even with the luck of having a comfortable existence and work you enjoy, you could still one day die from something like an accident with a farm animal, a bad tooth, or childbirth.
> People of the past lived happier, more in peace and more fulfilling years of life than modern people
That's a pretty broad claim. How far back is "the past" that was so much better? Which people did you have in mind? The Irish peasants starving because of potato blight? European villagers during the Black Death? The slaves building pyramids in ancient Egypt?
Won't we have to do so anyway at some point in order to stop suffering from stupid diseases like cancer and solve other hard problems? Can we not mess it up? If yes, fine. If not, that's too bad but we have to try anyway IMO.
We should not attempt to solve all diseases. Many people live a long life without much disease. Yes, it's sad that some people get diseases and of course I've had friends with diseases -- but I am not naive enough to think that it will be a good thing overall to solve all diseases because the end result of a very long life for everyone will be massive overpopulation.
Death is a natural part of life, and we should not attempt to avoid it too much because it makes us more machine-like and less human. How many people have questioned the 9-5 lifestyle because they realize they could die at any time?
The ultimate logical end of advanced science and AI may be an immortal life, which we are not prepared to handle.
If you want that for yourself, fine; maybe there are more like you and you can volunteer to die. For me, I am not so interested in disease or death, so I would like them fixed.
If I could give you a million upvotes I would. Heck, if I could somehow give you a billion upvotes, I would, even if it would cost me my account.
Comments like yours restore my hope in HN whenever I start feeling like the community is being taken over by people whose thought processes are flirting on the edge of lunacy.
Imagine someone saying some diseases shouldn't be cured. I'm sure the millions of people all over the world who right now are suffering heart-rending pain from those "shouldn't be cured" diseases would be just thrilled to read the parent comment.
I never said we should NOT attempt to solve any diseases. My fear is that if we approach the logical conclusion of solving all diseases THROUGH SCIENCE, it is getting too close to solving death and encouraging overpopulation.
You are obviously having a knee-jerk reaction, because I am questioning the very core of society which is the very thing making you comfortable. I am just trying to have a reasonable discussion.
Now, there are PLENTY of ways of solving diseases without science. Many diseases are caused by our modern western diet, being sedentary, living in large cities, and being lumps all day whose only purpose is to further more technology.
Perhaps it would be a better idea to solve diseases by dismantling this system and thereby making us more healthy so that we don't NEED science as much?
The problem is, science has convinced you that disease is just natural and that you need it to solve everything.
Again, I am advocating for a healthier lifestyle AWAY from technology, which is certainly possible, instead of relying on technology to solve the very health problems that are created by technology.
If the capability exists, someone will pursue it. Maybe the G20 will pass a resolution to put safety measures in place, but whoever decides to flout those safety measures stands to gain an asymmetric power advantage.
That sets up a competitive dynamic where players are incentivized to try to get there first, no matter the risks, because if they don't, someone else will.
Similar dynamics played out with the nuclear arms race
"We" refers to human civilization, and is not just one CEO but the sum of all society that acts as an organism through the emergent behaviour of our social existence.
If "we" are doing something wrong, then "we" had better fix it. And that could mean governments or even counter-actions by individual people, or even actions by hacker groups....
Yes I understand that, but the problem is there's no structure to enforce decisions at the level of human civilization. So even if something is agreed and enforced by 99% of jurisdictions, that doesn't mean it won't happen
And it also doesn't mean individuals can't revolt against it either. It could happen, and 1% might try and make it happen, but we can ALSO try and stop that 1%, with a mob if necessary.
It's 0. Long before an evil AGI could escape and do any damage, it will be confined to big data centers, where it will be observed for a long time. Mobile, capable terminators will come much later. We don't have capable robotic hardware, only some short-running machines. And we don't have a small, capable AI brain. So there is plenty of time and plenty of options to stop and correct. Dumb robots, which we do have, can be a danger, but they are far from AGI and can't run for long. Corrected: with human support they can do a lot of damage. That's something to worry about.
Slightly off topic question, but does anyone know if there is any school of thought that holds both sides as being necessary? It seems the traditional Christians hold the physical Yahweh god as being the "true" god whereas the gnostics take the ideal non-material god as being the "true" god, but is there some school of thought that deems both necessary for existence in a co-dependent style relationship?
How does this glib remark contribute anything? There are much more useful ways of defining playing God than being so inclusive as to have all of human existence in it. (See my definition below.)
Playing God: creating technology or inventions that are far beyond our control, or that will have unintended consequences with the potential to cause significant, irreversible material or psychological damage to a large proportion of humanity. Such technology is of the type that, despite attempts to control it, proliferates, and it would proliferate under a variety of economic and political models, so that there is hardly any way to stop it except complete physical destruction.
The solution is simple: take it step by step, so that we deploy technologies/inventions that are only ever so slightly beyond our control. Learning means making mistakes; we can control the magnitude, but ultimately the only way to not make mistakes is to never do anything at all.
> We have never applied this type of deployment so far. For example, fossil fuels -- we still can't control it.
Let's not forget that pretty much everything you consider nice and beneficial about modern existence - from advanced healthcare to opportunities to entertainment, even things as trivial as toilet paper or tissues or packaging or paint - is built on top of petrochemical engineering. Sure, we're dealing with some serious second-order consequences, but if we overcome them, we'll end up better than when we had "simple life more connected to nature".
> I'd argue that it's much better to live a simple life more connected to nature, even if that means more diseases and more manual labor.
Hard disagree.
> this life is at the expense of the DEATH of millions of nonhuman organisms
It's not like nature cares either way. Sure, we may be accelerating the disappearance of whole species, but even then we're more humane at the killing. It's really hard to do worse than nature, where every multicellular organism is constantly fighting against starvation, disease, or being assaulted and eaten alive.
I have the feeling that concepts like money and property fit your description, with the additional detail that much more than humanity has been impacted. I also feel that AI has a pretty good chance of being the only way to revert the significant material/psychological damage that has already been caused by those. It seems short-sighted not to weigh the whole of our "playing god" so far, and to dismiss the idea that we might better defer to / get help from a different intelligence at this point.
Because it works. It gets the boys excited, I mean, look, it's on the hacker news front page! Big talk of AGI! It's right around the corner, this time for real. The digital brain practically exists already! Can't wait for the first "signs of consciousness emerged in AI model!" articles.