"Computer science could be called the post-Turing decline in the study of formal systems." (One of my favorite jokes. Dijkstra's I think.)
FWIW, the foundation of it all is the act of making a distinction.
"In the beginning was the void, and the void was without form."
Operationally, if the thing in your brain that makes distinctions is suppressed (by e.g. a stroke) you literally lose the ability to distinguish between your body and the world and enter into an "oceanic bliss".
> "My Stroke of Insight: A Brain Scientistʼs Personal Journey", (2008) is a New York Times bestselling and award-winning book written by Dr. Jill Bolte Taylor, a Harvard-trained and published neuroanatomist. In it, she tells of her experience in 1996 of having a stroke in her left hemisphere and how the human brain creates our perception of reality and includes tips about how Dr. Taylor rebuilt her own brain from the inside out.
Coincidence, but this is also the first lesson in the Tao Te Ching! :)
The lesson is that the things we desire (beauty, wisdom, wealth) are defined by their contradictions, and are therefore not self-standing. The only thing that is self-standing is contradiction itself.
When you step back, it argues, you see that contradiction (distinction) is that from which all things stem. The operative distinction that motivates most people's actions is "desire". Hence, the distinction between desirable and undesirable determines the meaning of more material adjectives like beautiful, wise, and wealthy.
It goes on to say that if you remove “desire” from your perspective (to be more like nature/Tao), you can see things for what they are: mystery. Removing desire from your actions, however, is not very desirable in practice!
The recommendation (i.e. the behavior of the sage) is to practice doing nothing, and not-talking. The implication (IMO) is that, without their contradictions, action and speech have no meaning.
Yeah, naming things makes it easy to talk about them, but it also draws arbitrary lines that influence the people dealing with the named objects.
Science is one thing, but science also obfuscates what you are trying to talk about by applying a specific kind of 'grid' or 'form' that reveals certain patterns in the content while hiding others (re: Foucault).
> without their contradictions, action and speech have no meaning.
This makes me think of waves. Moving your hand in water creates waves. A wave is a distinction between higher and lower density, or between higher and lower field amplitude. Waves could not exist without such a distinction. In physics, everything (?) is made of waves.
But similarly, in semantics, a given word or sentence can only have meaning by marking both what it refers to and what it does not refer to. So "meaning" is a sort of wave, I guess.
> FWIW, the foundation of it all is the act of making a distinction.
Agree. Distinction is more like creating a new pattern, giving rise to dualism. But the new term is itself relative to the old one, so the old is always part of the new if you compare them; otherwise each is complete on its own. Douglas Hofstadter has done a very good job with self-referential systems in Gödel, Escher, Bach: An Eternal Golden Braid.
Many Eastern philosophies, particularly Indian Advaita philosophy, are indeed great work in this regard. But the current model of science/mathematics is not very capable of describing abstract things, and it labels any such attempt as pseudoscience. What I sometimes fear is that science has become another RELIGION of the new world, as depicted by Asimov in the Foundation series. The beauty of science is not only knowledge but the scientific approach and the pursuit of truth regardless of the situation. We need to focus on developing these qualities in education, or else schools will become just another kind of religious institution.
Philosophy of Mathematics is another interesting topic related to this.
'Training' is a perfectly conventional way to describe a postdoc role.
> So, What is a Postdoc?
> [...] A postdoc is a temporary position that allows a PhD to continue their training as a researcher and gain skills and experience that will prepare them for their academic career.
It's conventional, but perhaps a little misleading since it is almost entirely "on the job" training--there's almost never any coursework or anything like that, just (hopefully) advice from a more senior researcher.
Training doesn't necessarily require three lectures a week + 2 exams, or anything like that, but I think it does require some sort of sustained, deliberate attempt at instruction.
I'm very happy with my postdoc, for what it's worth, and the very beginning did have some explicit training. The rest of it, though, has been much more like a job: you do something, get feedback on it, and make some changes. Repeat and hopefully need to make fewer changes each time. Critically, you don't get a lot of "generalized" feedback; it's coupled to the specific thing that you're both working on and comes from the one person you work for.
I think in most other fields, the postdoc equivalent would be something like "jr. <whatever>" or "associate." Medical residencies seem to have a bit more structure, at least from the outside.
A framework I have recently found useful for thinking about my work as a computer programmer is a classic set of philosophical concerns:
Truth, goodness, and beauty.
That is to say, software development has an intellectual dimension, a moral dimension, and an aesthetic dimension.
For me, the intellectual questions primarily concern program correctness. Is this code bug-free? Is it robust and stable? I tend toward the "computer science as mathematics" epistemology of Dijkstra and Hoare. Invariants; inductive reasoning; temporal logic. At least at the level of individual functions, I find that it is feasible to express with rigor and clarity what the function is intended to do, and to reason with precision through to a conclusion that the implementation fulfills the intent. And, there is a test for every line of code. A favorite quote from Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."
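To make the function-level reasoning concrete, here is a toy sketch of the style I have in mind: a stated contract plus a loop invariant that justifies the implementation (the function and its names are invented for illustration, not taken from any real codebase):

    #include <stddef.h>
    #include <stdio.h>

    /* Contract: given a[0..n-1], return a[0] + a[1] + ... + a[n-1]
       (assuming the sum does not overflow a long). */
    long sum_array(const long *a, size_t n)
    {
        long total = 0;
        /* Invariant: at the top of each iteration, total == a[0] + ... + a[i-1].
           It holds trivially when i == 0, each iteration preserves it, and at
           loop exit i == n, so total is exactly the required sum. */
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }

    int main(void)
    {
        long a[] = {1, 2, 3, 4};
        printf("%ld\n", sum_array(a, 4)); /* prints 10: the test alongside the proof */
        return 0;
    }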
The moral questions concern the uses for which the code is designed, and the uses to which it will be put. I work on medical device embedded software these days, and the moral dimension of this work is a very meaningful and concrete benefit of this choice. I used to work on software development tools to support avionics development for the B-2 bomber. While I don't dismiss the latter as unambiguously immoral, it certainly was more problematic for me, an ongoing source of angst and struggle.
The aesthetic issues seem to me to be two-fold: how beautiful is the software from the standpoint of the user? And, how beautiful is the source code? I have seen (and written) much code that is frankly an ugly mess. I have also upon occasion had an inspiration about how to re-write a piece of code. Tons of code gets deleted, and the new code is simple, clean, elegant, and inevitable. I would go so far as to claim that at times I have been graced to write code that turned out to be beautiful and aesthetically pleasing. It seems to me that the "Clean Code" movement encourages creation of code that is elegant and beautiful.
I find that this triad of concerns inter-relate. Beautiful code is more likely to be correct, and conversely. Mindfully written code that is done with clarity and integrity tends to be both more beautiful and correct.
These relate directly to the three normative sciences:
1) Logic, the normative science of what is true.
2) Ethics, the normative science of what is good.
3) Aesthetics, the normative science of what is beautiful.
I believe the above formulation is due to Dijkstra, but I can't recall precisely which work I saw it in. He certainly held all three in high esteem and that philosophy was at the root of many of his insights. For example, he explicitly made a distinction between correctness and pleasantness. The former is a matter of logic and the latter is a matter of aesthetics. It follows that they are separate and independent concerns. The vast majority of software is not logically correct, but is nonetheless more or less pleasing to its users.
As for ethics, it's not just for things like the defense industry and medicine. For example the change to the C standard from having a set of permissible responses to undefined behavior to the actively hostile anything goes situation we now have is a matter of ethics. That behavior is both correct in that it satisfies the specification and pleasant in that it allows for aggressive optimization, but it's not very good for the users who get their NULL checks optimized away. Arguments that the bad behavior is correct or allows optimization don't address the ethical concern at all or deny its existence altogether and thus can never refute it.
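For readers who haven't run into it, the NULL-check case looks roughly like this (a made-up sketch of the well-known pattern, not drawn from any particular codebase): dereferencing before checking is undefined behavior, so a modern optimizer is permitted to assume the pointer is non-null and drop the guard.

    #include <stddef.h>

    struct dev { int flags; };   /* hypothetical type, for illustration only */

    int read_flags(struct dev *d)
    {
        int f = d->flags;   /* if d is NULL, this dereference is undefined behavior... */
        if (d == NULL)      /* ...so the compiler may assume d != NULL here... */
            return -1;      /* ...and delete this branch entirely. */
        return f;
    }

The code is "correct" in the standard's sense and the transformation enables optimization, but the protection the programmer deliberately wrote is exactly what disappears.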
The normative sciences are remarkable in their broad applicability across domains, which makes them worthy of study by all educated persons, but they are especially important for software developers since we can potentially affect so many people's lives. They even apply here on HN. We should all try to make our contributions true, good (nourishing for intellectual curiosity), and elegant.
What always makes me uneasy about theories like this is that they make me ask, "If 3, why not 4?" What reason do we have to assume that these three dimensions are the only ones there are? Why exactly three, and why just these three?
That's a great question. Your cousin comment notes that Peirce has influenced my thought. He had something of a fascination with three because of structural properties. In essence he saw the mathematical relation as a triad and so he considered three in some way essential to logic and what he called semiotic[1]. Some of that has probably rubbed off on me.
I don't think it's a magic number, just a common one. I happen to know of three normative sciences. I would gladly learn about a 4th, 5th, or more if they exist, especially if they share the same broad intellectual utility. It can also be a case of lumping versus splitting[2]. Postulating that logic, ethics, and aesthetics together are somehow exhaustive, one could still conceivably divide any of them into further subdivisions and spawn new normative sciences that way. I don't believe that would help me make my thoughts clear to myself or others, but, pleasantly, if someone else discovers that it does help then they will have an easier time convincing me too! Conversely, I can't imagine how lumping any of the three I listed together would be an aid to understanding.
Makes sense I think. It's good to realize that while 3 dimensions of space would seem to be everything there is and ever can be, newer physical theories have pointed out the possibility of more dimensions. There is no Holy Trinity, unless of course you are a true believer in such things
I like this. Reminds me a bit of another useful triad for software, the Vitruvian principles in architecture: firmitatis (stability), utilitatis (utility), et venustatis (beauty).
The ethical dimension has been a struggle for me. For reasons not relevant here projects I have worked on have tended to be either trivial or for companies whose ethics are actively dubious (business in my country is in general actively hostile to non-mercenary concerns). I've pretty much dropped out of dev work in large part for that reason.
This is a bit surprising since it doesn't treat information as a totally human concept. It discusses 'semantic interpretation' but for some reason concerns itself with implementation, which I think should be irrelevant.
I discovered this independently and soon after found it echoed in a textbook that is important around these parts (Datalogi, by Hans Lunell), about 20 years ago.
In short; only data exists in the machine. The patterns we coax out of the machine are only meaningful to us, as humans, after we have interpreted the data and turned it into information. A computer never processes information; only data.
If one desires a deeper structure to data, one can look at processes of convergence, which naturally leads one to study artificial neural networks for their working examples. Implementation, however, must be irrelevant to information, because the two are not in a bijective relationship.
The crux of information theory is that you can analyze a message without knowing what it means.
Mutual information would help you quantify how much this changes your mental model of the owner's life. Maybe you already knew about the vacation (you went too?). It would let you quantify how much this change in definitions affected you.
What makes those bits zeros and ones? What makes the thumb drive an information storage device? Why do we need to know the mental model of the owner's life to infer the information stored?
For information to be objective, it needs to be understood as a mind-independent feature of the world. If you have to reference human minds to make it meaningful, then it's not objective. It's a cultural construct.
I took the byte and thumb drive bit literally, but if it were some exotic ternary system, it can hold about ~12.7 bits. If they're decimal digits instead, that sequence has a capacity of 8 bans or about 26.6 bits.
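(The arithmetic, for anyone following along, is just the usual log conversion for 8 symbol positions:)

    8 ternary cells:   8 * log2(3)  ≈ 8 * 1.585 ≈ 12.68 bits
    8 decimal digits:  8 bans = 8 * log2(10) ≈ 8 * 3.322 ≈ 26.58 bits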
The name "information theory" seems to throw people; imagine it were called coding theory instead and it might seem clearer that we can talk about the "capacity" of different codes and how different representations interact without worrying too much about what they mean. It's similar to algebra, in that we can make statement about `x + y = y + x` or `n + 0 = n` without worrying about whether x is measured in apples, lightyears, or anything at all.
Right, but if a future alien archaeologist were to dig up a thumb drive (preserved in a vault) after humans went extinct, what would indicate that it held bits?
A 'bit' in information theory is just a unit of measurement, corresponding to the information conveyed by one yes-or-no question (where both answers were previously equally likely).
Since it's a fairly natural unit, your alien archaeologist might even use it. However, if they had a weird taboo against binary, they could define a similar concept in other ways. In fact, we humans have too. Bletchley Park used "bans" or hartleys, which is the information carried by a single base-10 digit (without any a priori information). Nats use a base-e representation instead, which ties into statistical physics nicely. There's nothing stopping you from inventing a weird informational unit where 1 corresponds to "the answer to a seven-way question where two of the answers are each twice as likely as the rest."
So, suppose your alien digs up my thumbdrive. It examines it carefully and notices that each MOSFET is always in one of two states. That suffices to describe its information-carrying capacity in whatever system they use: bits, hartleys, nats, zorblaxen. A similar analysis of their own media would let them say "This thing is so sophisticated, it can hold the work of 10,000 scribes" or "This junk can barely contain a moment of our 16D videos." If your drive were packed with different types of media, they might even discover that some parts of the disk (e.g., ascii text) DON'T use all of the available capacity.
None of this, of course, tells them what the data means. For that, you'd need to relate these data to some external source.
There are two different problems here: how you interpret arbitrary strings of bits, and how you interpret arbitrary bits of matter as representing bits or not. Yes, figuring out/assuming that the bit cells in the thumb drive represented bits might be difficult. Then again, if you can look and see what the internal physical structure of this thing is, you might see a pattern of highs and lows.
But sure, getting from an inefficient physical structure to a series of digits is a step. But once you're there, maths has things to say about the possible information content of strings of bits.
> If you have to reference human minds to make it meaningful, then it's not objective. It's a cultural construct.
The fact that certain proteins, e.g. androgen receptors, alter their shape in the presence of specific androgens, implies that the protein's configuration contains mutual information with the shape of the androgen. But this connection between the androgen and its receptor is independent of minds to recognize the connection.
Sure, there's no need to mention information in this example to explain the observed behavior. But it's important to understand information as "merely" another kind of description of a system that may be revelatory depending on the context, rather than a new ontology that is competing to displace our old ontology.
So saying the receptor has mutual information with the shape of the androgen is just another description of the system that abstracts over some details and highlights others. Going further with the biology example, the concept of mutual information helps to clarify why some sequence of DNA nucleotides are instructions to create proteins that have a particular shape to interact with this molecule or enzyme or whatever. Without the concept of information, we would need to speak of long causal chains and the evolutionary history of this region of DNA with respect to various environmental influences and so on. The concept of information serves to clarify this connection in a much more direct manner.
That makes sense, but when you're asking philosophical questions about the ontology of computing and information, then if talk of information is just a shortcut for the messy complicated physical process, one can eliminate it from the fundamental "furniture" of the world.
One might respond with a shrug and "who cares," but then we have people making rather strong claims about the universe being a computer and "it from bit." Similar to mathematical universes instead of just physical stuff.
Notions like Kolmogorov complexity make information a measurable quantity. E.g. most possible configurations are incompressible. Only a few are compressible.
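The counting argument behind that claim is short:

    strings of n bits:                         2^n
    descriptions shorter than n - k bits:      at most 2^(n-k) - 1
    fraction compressible by k or more bits:   less than 2^(-k)
    (so fewer than 1 in 1000 strings can be shortened by even 10 bits)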
Additionally, having access to the information of one system allows us to reduce the amount of information needed to describe another system, and we get mutual algorithmic information.
So the notion of information is not just syntactic sugar, but distinguishes between different types of physical configurations in a way that enumerating their parts cannot.
In great abstraction, you formulate questions which have a 'yes' or 'no' answer. It might make sense to think of the set of all questions with true or false answers, accompanied by an extremely sparse truth table. This is of course impractical but it does let us think about formal logic.
Unfortunately, the reason why it seems like arguing about the definition of a word is that there are more concepts at play here than we have words to describe them. When people talk about information as in information theory, what they're talking about is devoid of actual semantic content, but when normal people talk about information, that normally implies some sort of semantic content.
Information theory addresses semantics through mutual information. 'Meaning' entails reference, and when A refers to B they have mutual information. Mutual information is also how Shannon measures the carrying capacity of a channel: capacity is the maximum mutual information between input and output over all input distributions.
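For anyone who hasn't seen it computed, mutual information is just a number you can calculate from a joint distribution. A minimal sketch (the 2x2 toy distribution is made up for illustration; compile with -lm):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Toy joint distribution p(x, y) for two binary variables that
           agree 80% of the time, so they share some information. */
        double p[2][2] = { {0.4, 0.1},
                           {0.1, 0.4} };
        double px[2] = {0}, py[2] = {0}, mi = 0.0;

        /* Marginals p(x) and p(y). */
        for (int x = 0; x < 2; x++)
            for (int y = 0; y < 2; y++) {
                px[x] += p[x][y];
                py[y] += p[x][y];
            }

        /* I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x)*p(y)) ) */
        for (int x = 0; x < 2; x++)
            for (int y = 0; y < 2; y++)
                if (p[x][y] > 0.0)
                    mi += p[x][y] * log2(p[x][y] / (px[x] * py[y]));

        printf("I(X;Y) = %.3f bits\n", mi);   /* about 0.278 bits for this table */
        return 0;
    }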
This is still just talking about a property of information that resembles one aspect of what's meant by "meaning".
This is all still purely structural and independent of semantic content.
The example given above about alternate interpretations of the same bit string on a USB drive gets at the problem.
Semantics is about the process of interpreting data into something "meaningful" to some particular system (e.g. a human). It's essentially an importing process: converting data to a format that some system knows how to operate on (i.e. "understands").
To keep it brief, meaning (or "information" in the non-jargon sense) is about a relation between some data and a system which that data operates on (or the system operates on the data, depending on how you look at it).
"information" from information theory is an intrinsic property of data[0]: the information content doesn't change based on what's interpreting it. (At least that's my understanding.)
[0] I'm using "data" here to refer to something like a serialization of a system into a form where information-theoretical analysis can be applied to it (e.g. like a bit string).
Like I mentioned previously, there are objective sufficient criteria to identify meaning. Even if the interpretive context is subjective. So, we can guarantee true positives, even if we cannot guarantee true negatives.
> there are objective sufficient criteria to identify meaning
You have just shifted the problem by now using a very specific, narrow definition of "meaning" rather than doing so with "information".
The objective sufficient criteria to identify meaning only work because this definition of meaning refers to an intrinsic property of a system; however, the common usage of "meaning" involves a relation between multiple systems.
Not sure what you are saying. Informative content has both intrinsic (randomness deficiency, entropy, kolmogorov sufficient statistic) and relational properties (also randomness deficiency, mutual information, algorithmic mutual information) that can be objectively and quantitatively measured.
Intrinsic/relational may not be the best way of getting at it after all. It would likely regress into an equally involved question on defining "system".
So let me try another approach. We need to distinguish between information and the "semantic content" of information. Here are the definitions I have in mind:
- Data: an arbitrary string
- Information: a numeric property of some data: INFO(d)
- Mutual information: a numeric property of two pieces of data: MI(d1, d2)
- Semantic content: the particular "effect on a system"[0] that results when the system internalizes the data: SC(s, d) ('s' denotes the system.)
My understanding is that you're claiming MI == SC.
There are certain ways we could fiddle with my usage of 'system' above so that it can just be 'data,' too (e.g. representing the system as a string, as with Turing Machines that operate on string reps of Turing Machines). In which case we can at least say that MI and SC both have domains like (data X data), so there is some similarity there.
But, even just the fact that MI is going to evaluate to a single numeric quantity, while SC evaluates to an "effect on a system" (i.e. following a state transition), implies that they're referring to different things.
I think it's also clear that it would be possible for MI(d1, d2) to evaluate to the same number as MI(d3, d4), even though SC(d1, d2) !== SC(d3, d4), since all the informational relations could be equivalent (between (d1, d2) and (d3, d4)), and yet the particular state transitions followed could differ.
[0] We can make this more precise. For instance, in the language of automata theory it would be something like following a particular state transition.
Edit: I should clarify one implicit thing here: this hinges on an assumption that my definition of "semantic content" would be a satisfactory match to common usage of the phrase for most people. Happy to hear an alternate that also captures common usage, and I can expand on my choice of def.
Great definitions. I would make a distinction between the measure of mutual information and the mutual information itself. The former is obviously not the same as the system effect itself. But the latter may be more related to your SC function.
> But the latter may be more related to your SC function
Definitely seems related to me. It's an interesting angle on it I hadn't considered before.
One idea is that it's due to something like compatibility of formats: in order for some information to be 'meaningful' in the context of some other system, its format has to be something recognizable to that other system; and the ability to recognize implies some commonality[0]. That commonality implies a certain amount of mutual information.
I don't feel certain that the mutual information would be sufficient to account for all the effects of semantic content—but I wouldn't be surprised if it was ;)
[0] I wonder if this is something that's been studied: mutual information between patterns and recognizers of patterns... I could see there being some lower bound on mutual information.
A better example is the difference between the contents of a normal English book vs a randomly generated string of letters. The former has a lot of structural properties the latter lacks, plus shares a large amount of mutual information with a source independent from the physical properties of the book.
When we cast our randomness as a string we impose a lot of structure. Proportionally, getting to English from there might involve a lot less new structure than in the previous step.
Often I wonder if I'm just spouting gibberish, saying things like this, but it does make sense to me.
No, the information cost grows exponentially going from random letters to English sentences. We can measure this with cross-entropy to get the probability of generating a well-formed English sentence from a uniform distribution over the letters. To generate just a few hundred words, we end up needing more trials than are available in the lifespan of our universe.
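Rough numbers, using Shannon's classic estimate that printed English carries on the order of 1 bit per character, versus log2(27) ≈ 4.75 bits per character for uniform random letters-plus-space (a few hundred words is on the order of 1000 characters):

    surplus cost per character:       4.75 - 1.0 ≈ 3.75 bits
    for a ~1000-character passage:    ≈ 3750 bits
    probability a uniform source emits *some* well-formed
    1000-character English text:      roughly 2^(-3750) ≈ 10^(-1129)

For comparison, one back-of-the-envelope bound on the total number of elementary operations the observable universe could have performed to date is only around 10^120, so the gap isn't even close.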
Approximated perhaps, but I would argue that in evaluation it is subjective because it is always compared to something else.
The issue abstracts to determinism vs. infinite and unbounded physical entropy because we can't separate ourselves from the system. Decoherence and mixing of quantum 'information' is part of the puzzle, and I would like to claim processes of convergence are another.
It becomes difficult to not overload terms while discussing this topic.
There are certainly limits to formal definitions of information, but I see them as lower bounds on semantics, such that we can identify as meaningful that which passes the bounds. That doesn't guarantee true negatives, but it at least gives us true positives, so it is a sufficient, if not necessary, criterion for meaning.
>The patterns we coax out of the machine are only meaningful to us, as humans, after we have interpreted the data and turned it into information.
I disagree. The number of processes that do something computation-like out of the space of all processes is vanishingly small. That we can input some informative sequence of bits into a computer, and the computer transforms that sequence into output which is then a different informative sequence of bits, tells us that computers are intrinsically information processors, and that information is not in general subjective.
I would argue this is currently a carefully maintained illusion which is shattered by malformed input.
Perhaps a universal function exists, of which all known programs are special cases. Perhaps humans operate like this, or perhaps we are Boltzmann brain apparitions. If not the latter, then the subject turns into a discussion of communication through lossy channels and convergence, and the data are again just tokens.
The input bits are presumed informative (i.e. have some mutual information with an external structure). The transformative process is one such that information inherent in the input is preserved and transformed into some amount of new information.
That this process preserves and transforms information, when almost all physical processes would destroy such information, demonstrates the coherence between the structure of the transformation (i.e. computation) and the information in the input.
If it were the case that the computer plays no informative role in the semantics of the input and output, we would be able to make a computer out of any physical process. But we can't. I can't coax my wall into telling me how to invert a matrix, for example. That certain vanishingly rare physical processes can invert a matrix for me implies the coherence between the semantics of the operations and the semantics of the information it's operating on.
I have a Bachelor's in Philosophy and now I'm studying Computer Science formally.
I must say having the Philosophical background helps immensely when thinking up of novel, ethical and utilitarian projects to build. Computer Science and Philosophy truly do go hand in hand.
Comp sci is a good upper bound on materialism, and provides a rigorous basis for distinguishing material and immaterial substances. E.g. if we detect a halting oracle we can conclude it is immaterial, insofar as materialism is bounded by that which is computable.
In that sense it is a way to scientifically address the philosophical concept of substance dualism, without getting lost in a bunch of speculative arguments. Also is a good avenue for making substance dualism a technologically useful theory.
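To spell out why a halting oracle sits outside the computable, here is the standard diagonal argument sketched in code. The function 'halts' is purely hypothetical; the point is precisely that it cannot exist as a program:

    /* Hypothetical oracle: returns 1 if program p halts on input i, else 0.
       (Assumed for the argument, not implementable.) */
    int halts(const char *p, const char *i);

    /* Feed a program to itself and do the opposite of what the oracle predicts. */
    void diag(const char *p)
    {
        if (halts(p, p))
            for (;;) ;      /* loop forever if the oracle says p halts on itself */
        /* otherwise, halt immediately */
    }

    /* Running diag on its own source: diag halts on itself if and only if it
       doesn't; a contradiction, so no program can implement 'halts'. Anything
       that reliably answers it is, by the above criterion, outside the
       bounds of the computable. */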
"Computer science" is NOT science. Good science is rarely done outside of machine performance issues. How to organize information to be best "digested" by human programmers and users is largely in the realm of psychology and physiology, which are "soft" sciences because we don't really know how the human brain works yet.
I've been in long debates about this and stand by it. Somebody once challenged a "pro science" claimer to "prove" that go-to's are objectively worse than nested blocks. The science claimers failed to produce objective evidence. (Background: https://en.wikipedia.org/wiki/Considered_harmful, see the part about Edsger Dijkstra's paper.)
(I'm not saying goto's are good, only that my preference for blocks is subjective. For one, nested blocks can be indented which makes their structure more visual. Goto's don't have a visual counterpart, at least not without a fancy IDE.)
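To illustrate the "visual" point (not as evidence either way), here is the same made-up retry logic written both ways; in the structured version the loop's extent is visible from the indentation alone:

    /* try_connect is a stand-in for some fallible operation (invented here). */
    int try_connect(void);

    /* With goto: the shape of the control flow only emerges by tracing labels. */
    int connect_with_goto(void)
    {
        int attempts = 0;
    retry:
        if (try_connect() != 0) {
            attempts++;
            if (attempts < 3)
                goto retry;
            return -1;
        }
        return 0;
    }

    /* With a nested block: the three-attempt structure is visible at a glance. */
    int connect_with_loop(void)
    {
        for (int attempts = 0; attempts < 3; attempts++) {
            if (try_connect() == 0)
                return 0;
        }
        return -1;
    }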
Physical engineering is more objective. For example, in designing a bridge an engineer is sandwiched between the architect's illustration and the laws of physics. Bridges can't fall. There are various "gut instinct" design choices to select among, but it's a smaller pool of choices than that of a software architect.
Personally, I've read a lot of GNU/FSF doctrine, and to me it feels very one-sided and dogmatic. I know that the FSF has been one of the loudest voices in the fight for free software, but maybe it wasn't the best one. Especially given the founder.
I look forward to reading your less-one-sided philosophical analysis, then. Is it already online?
I think it's easy for someone to come to the conclusion that something is "one-sided and dogmatic" when its conclusions contradict preconceived notions they hold and aren't willing or able to question. Occasionally, though, there is good-faith, well-reasoned disagreement even to well-thought-out philosophical exploration. Your case might be one of those exceptions; if so, I'd like to learn from your thinking. Given that you're leading off with an ad hominem attack on Stallman, though, in the unrebuttable form of a vague insinuation, I'm not that optimistic.
Do you mean creating the technical underpinnings for world-scale software-as-a-service systems that devour our society whole?
Or do you mean slaving hopelessly for decades trying to convince people of his rightness and attempting to force a rigid, legalistic approach to humane software development on the world, only to lose what good reputation he had developed due to an utter refusal to account for human frailty, emotions, and impatience when discussing difficult topics in the public eye?
To be clear, I think the media treated Stallman terribly in the debacle around Epstein's Media Lab ties.
I also think his failings as a software theorist are the same ones that led to him creating a situation that could be so easily and dangerously misunderstood.
My overall point is just that his results are not flawless, unmitigated good. They suggest to me that his software philosophy was not sufficient to guarantee good results.
You can call it 'one sided' and 'dogmatic' or you can call it 'consistent' and 'clear'. Seeing as it is a single entity that is broadcasting this opinion, it makes sense for that single entity to be on its own side of the argument.
The larger discussion can be seen in open core, open source, shareware, proprietary software, etc.
We are too used to schizophrenic political parties with opinions all over the place, to the point that if there is a political party with a clear agenda then it is 'dogmatic' and 'one-sided' (hello extremists). The whole point of a representative democracy is that you follow the school of thought that represents your beliefs; if you can't trust any organization to stay true to their principles (hello startups getting bought by FAGMAN), then the whole system has lost its foundational precondition (rendering it meaningless). For this reason I am glad to see FSF and GNU stick with the dogma they represent: practice what you preach.
I kind of feel like this is a similar discussion as whether to separate church and state. One is clearly dogmatic while the other is supposed to be a platform for (essentially religious) discussion. The problem is that the platform has to be grounded in some principle and then you've got dogma again.
Another manifestation is decentralized vs monolithic... Free software is about finding the principles to base government on so that religion can be discussed freely (policies diverging and converging). The technical problem of managing that is a version control system. I'm still waiting for them to become like governments, but Subspace was launched yesterday and it is definitely a step in that direction.
There's an emerging field of Computer Science as an Economics Discipline: the field of cryptocurrency. It is currently plagued with (pseudo-)economists and scammers, or people who don't really understand system design (but don't know it). It deals with money, so there is a lot of misinformation and greed.
People in crypto don't understand the design tradeoff between centralization and decentralization. So they end up selling fake dreams like decentralization can fix everything. Bitcoin is decentralized at layer 1. But in layer 2, people still want to have the same level of decentralization when they design the Lightning Network. Then, some will buy into the idea that LN will fix everything and still be decentralized.
> People in crypto don't understand the design tradeoff between centralization and decentralization.
Care to elaborate about that trade-off?
And could you be a little more specific on who the 'crypto people' are (e.g. everyone from users of crypto systems to core developers, etc.)?
In system design, there are tradeoffs. You can't automagically get everything you want. For example, regarding the block size, you can increase the block size to scale transactions or you can do L2 off-chain. Each comes with its own pros and cons. Big blocks are easier to implement and simpler to set up, but they'll require future hard forks. Off-chain scaling is more flexible, but it is a lot more complicated. Big blocks retain the decentralization property; off-chain scaling will require some centralization in layer 2 to bring the cost down. Both are fine scaling strategies, but the discussion becomes highly political. One side needs to win.
And there's DeFi, decentralized finance. It's more like centralized finance dressed up in tech jargon. When you add more layers on top of the main chains, the complexity of the systems increases. To bring down the costs, you'll have to create centralized layer 2 systems.