Is it not the case that having access to a machine capable of running either Minecraft or Linux in the early days of each (if not now) means you are (or were) fairly affluent?
It depends on what you mean by the early days of each - in the really early days of Linux (1992), computers were probably going to cost north of a few thousand dollars, and the type of computer Linux was designed for (a multi-user system for dumb terminals) would cost tens of thousands of dollars. By the time Linux became something more than a handful of people in any given state had heard of, you could probably run it on a machine costing somewhere around $300.
Minecraft has never been demanding resource-wise. I'm sure early versions had serious performance issues, but running it on a cheapo laptop has always been totally reasonable - it's quite accessible (it was written in Java, even!)
I think they meant hardware. For software, yes there are tons of examples (open source software is usually free, games are usually cheap).
But for hardware, it's almost always some expensive thing first. The internet was once very expensive to access, cell phones were initially very expensive, DVD players were initially very expensive, computers in general, etc.
I think Minecraft and Linux are more like the content produced on top of the technology that is computers. It's like when a new book is written: it's quickly available to everyone in the market who can afford a book. The book isn't really technology, but the printing, publishing, and distribution is, and it's been around long enough to be widely distributed.
Software seems less like technology and more like writing. The distribution cost, once the systems are in place, is marginal. The technology part is creating the systems that enable the software.
IBM in the '90s strongly agrees with you - this software stuff is never going to be profitable, and everything people pay for will always end up going through us!
More seriously, I disagree about software being less important, because there have been very real innovations in tooling accomplished in software alone. Email is a pretty classic example - but a more modern one might be Google Cardboard, which can turn your smartphone into a rather underwhelming VR headset. There are plenty of hardware alternatives, but the same basic functionality was accomplished on generalized hardware.
Additionally, all this technology is only really possible due to other technology - we don't dismiss a shiny new computer as just a dumb, oddly shaped box merely because it's useless without a supply of electricity. But the costs to develop software are generally lower than for hardware, so I think it's fair to have a general notion that hardware is more innovative - it's just that you're conflating two different variables: cost and medium.
I think you're conflating technology with profitable or important. That is, you see me saying that software isn't technology and think I'm saying that software isn't profitable or important. That's not at all what I'm saying though. I likened software to writing. Writing can be important and it can be profitable, it's just not technology.
Maybe we could agree on email as a technology. Maybe. I think it's a stretch. I hope we could both agree that the nth email client isn't technology though. It's not adding a new capability to humanity which is how I tend to think about technology. Refrigerator - keep stuff cold. Electricity - power to operate machines and light. Computers - organize, access, modify information. etc. New JavaScript library or new game... Not so much technology.
I think that's fair, yeah - it might just be a matter of semantics. If you think software is included in technology then I stand by my point, but if your view of technology excludes software then you're quite correct.
I disagree - I would agree that Minecraft wasn't a novel technology, just like Linux wasn't a novel technology - it was an alternative version of Minix.
Additionally Zoom isn't a novel technology, it isn't even particularly interesting technically when compared to other video conferencing solutions - but over the past year it's been incredibly important to a number of people.
I think the OC slightly missed the mark in saying "important technologies" instead of something closer to "technologically innovative" technologies or, more accurately (but as a less interesting statement), "expensive to develop" technologies. Things that are expensive to develop generally aren't cheap to begin with, while things that are cheap to develop need to be cheap to compete with other market entrants and clones. Additionally, hardware (a limiting factor on cost for a lot of technology) tends to get cheaper over time, and that rate of change is accelerated by a large market of interest (leading to more folks deciding to try to iterate on new designs).
Penicillin. But then, he wasn't trying to make money off of it.
And basically anything Nintendo pushes as a console gimmick. It's not that the tech immediately goes from research to broadly accessible, but rather that they tend to take old tech that no one saw as having profitable consumer applications and find one for it. In that way, as far as consumers are concerned, it goes from unknown to widely used without making a stopover in early-adopter purgatory.
This may be a little unfair, but I do wonder if there isn't a tendency to consider a technology to be widely available when it becomes available to you and the folks farther back in the line don't count or aren't relevant.
To say that a single company's products being used by even a single-digit percentage of the world population doesn't meet the requirements to be considered "widely available" is a stretch.
In any case, you said "important," not "widely available," and yes, Nintendo's products are hugely important. Many of today's technological advancements can be traced back to their proving that a given use case for a primitive version of a given technology was viable.
Whether it's a single company (or product) or multiple is beside the point. If a technology is only available to (say) 1% of the population, I don't think that qualifies as being widely available.
I will also note that my original comment was in response to someone who is "more interested in tech for the other 95%".
I don't understand why Haskell gets brought up in the middle of an otherwise interesting and useful article. This sort of thing cannot happen in Haskell. And while Haskell is not universally admired, I can't recall seeing Haskell's flavor of type inference being a reason why someone claimed to dislike Haskell.
It's really too bad that the only way we can be sure rich folk pay their fair share is to charge them twice for private education. Otherwise, my mother may not have felt it necessary to work the overnight shift at the local factory for decades in order to provide us with the basic necessities and a private school education.
There are no guarantees in life, but the reason we don't have global pandemics constantly is that mutations that make a virus as dangerous as SARS-CoV-2 are quite rare.
I'm fairly out of my depth here, but it's somewhat relevant to my personal situation:
When this came up for discussion on HN a few days ago, I was initially confused, as some of what I read seemed to suggest that taking e.g. lisinopril could possibly increase the risk of an infection because it seems to increase the expression of ACE2 receptors that are used by the virus to infect cells.
On the other hand, some of what I read seemed to suggest that ACE inhibitors (e.g. lisinopril) could have a therapeutic benefit. The virus is going to inactivate a bunch of ACE2 receptors through the course of infection. Since ACE2 receptors inactivate angiotensin, that would leave a lot of active angiotensin floating around, which is potentially very bad. ACE inhibitors would seem to help here because they inhibit the active form of angiotensin from being created in the first place.
Now I'm wondering: Is it possible that taking lisinopril could increase the risk of serious infection for those of us not yet infected, but also could reduce the severity of an active infection?
I agree. There is a +6% increase in COVID-19 mortality for those with hypertension. Assume all were on BP meds. They're cheap, & doctors usually insist. If ACEs or ARBs were protective, it should be more like -6%. Right? Losartan = 400% more ACE2, Lisinopril = 100% more ACE2.
Elderly people are usually on BP meds.
Diabetics are frequently prescribed Losartan to protect their kidneys. I know. I am a diabetic, and was prescribed it for that reason and also for high BP.
As a diabetic over 50 with high BP I have a greater interest than most. Especially since my wife was exposed to coronavirus, is sick, and I am starting to feel unwell.
Stopped taking losartan yesterday. Have some tenofovir lying around and might start taking it. It's the only antiviral I was able to get my hands on. Hopefully my chloroquine arrives in the mail soon.
I read a paper that suggested caffeic acid was the anti-viral component of elderberry extract that was actually inhibiting HCoV-NL63 viral attachment and infection of human lung cells in vitro (Virus Research 273 (2019) 197767). This paper speculated that caffeic acid binds to ACE2 and inhibits viral attachment and infection via this route. Another paper suggested 95% absorption in humans dosed with 500 mg in 200 mL of hot water. A third paper that looked at a 2-year study in SD rats suggested pro-carcinogenic properties, but many other papers suggest anti-oxidant/anti-inflammatory/anti-carcinogenic properties. If I start to come down with it, I think I'm going to do a 500 mg caffeic acid dose.
This is not how persistent data structures are used in practice. There is no "the" new version of a persistent structure. The best analogy I can think of is to compare it to a VCS (e.g. git). There is no need to lock any existing commit in order to create a new commit (which together with prior commits, represents a new version of the code).
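To make the "no single new version" point concrete, here's a minimal sketch in Haskell (my own illustration, using Data.Map from the containers package, not anything from the thread): inserting into a persistent map leaves the original untouched, so two new versions can coexist without any coordination.

    import qualified Data.Map as M

    -- Inserting into a persistent map never touches the original,
    -- so two "new versions" can coexist without locking anything.
    main :: IO ()
    main = do
      let base  = M.fromList [(1, "a"), (2, "b")]  -- shared starting version
          left  = M.insert 3 "c" base              -- one new version
          right = M.insert 4 "d" base              -- another, independent version
      print base   -- still fromList [(1,"a"),(2,"b")]
      print left   -- fromList [(1,"a"),(2,"b"),(3,"c")]
      print right  -- fromList [(1,"a"),(2,"b"),(4,"d")]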
I'm not familiar with Clojure, but you can hit conflicts in the Git world, though, which seems to be what the parent is concerned about. Two of us could be creating some new data based on the last data we had, at time T, and then the other person submits theirs at time T+5, and I submit mine at time T+10. In that case, my change hasn't taken theirs into account.
If you can "submit" your change back to the original datastructure then the original datastructure is not immutable, right? Here's a nice explaination about how the persistant immutable datastructures work: https://hypirion.com/musings/understanding-persistent-vector...
I believe the question is that, if two threads take the same immutable vector and both make a change to it independently, they'll end up with two new vectors (e.g. two branches in git): a vector that reflects thread1's change, and a second vector that reflects thread2's change. So now you have a conflict, which requires resolution; git has a human intervene.
e.g.
x = [1,2,3]
Thread1 -> x + [4] => [1,2,3,4]
Thread2 -> x + [5] => [1,2,3,5]
But you were expecting [1,2,3,4,5]
In reality, you wanted an ordering of your events, normally enforced by locking, which the immutable vector doesn't seem to help you with; both threads were able to update independently, but you actually wanted them to update dependently.
If you try to use immutable datastructures to avoid locking, then how is conflict resolution handled?
I think the answer would be that it doesn't help you avoid locking; either you lock and share a single reference to the latest version of your immutable vector, to enforce ordered events, or you define a resolution strategy separately. The immutability aspect just stops you from having no resolution strategy at all -- which would always be incorrect.
And if I understand correctly, the ideal scenario for immutable data structures in concurrent code is when you can define such a merge strategy (and safely give threads their own copy of the data structure to muddle with, without actually having to copy the entire data structure).
You could, as per your example, use locking as part of a resolution/merge strategy to combine the results of two separate computations running on two separate threads. Or you could use some strategy that does not involve locking. Either way, it does not support the original claim I disputed that "Immutable structures still require locking".
> Either way, it does not support the original claim I disputed that "Immutable structures still require locking".
It does, if you believe locking is the main strategy to handle serialization (in which case, mutable or immutable, you still need to lock), and so... you still need locking. Serialization was the main scenario GP gave.
Your original answer didn't resolve the problem either -- fine, you didn't need to lock when adding elements to your immutable structure, but you still haven't reached serialization; you've just pushed the problem back another step.
The answer that I believe GP would need to correct his understanding (and, much more importantly, the answer that I'm interested in :-) is: what serialization strategies do immutable data structures enable, if not locking?
The other correction GP seems to require is on whether serialization is actually that important in general, and whether functional programmers tend to find otherwise... But I don't care about that answer :-)
Oh goodness :) OP has conceded the point, but you're still down to argue on the basis of what a person may or may not believe is the main strategy to handle serialization. I give up. You win, I guess.
Regarding your other question(s), I will just add that I answered many similar questions for myself (as well as disabusing myself of a lot of misconceptions) by undertaking to get a basic understanding of Haskell.
You're assuming that x+[5] mutates the list instead of returning a new list.
If you're planning to have multiple threads append items to a list, the immutable way to do it is to have each thread return a list of items to append to the main list, then fold those items into the list.
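A rough sketch of that fold approach in Haskell (my own illustration; it assumes the async package, and workFor is a made-up stand-in for whatever each thread computes):

    import Control.Concurrent.Async (mapConcurrently)

    -- Each concurrent task returns its own list of items; nothing shared
    -- is mutated while the tasks run.
    workFor :: Int -> IO [Int]
    workFor n = pure [n * 10, n * 10 + 1]

    main :: IO ()
    main = do
      let baseList = [1, 2, 3]
      produced <- mapConcurrently workFor [4, 5]
      -- One deterministic fold combines the per-task results afterwards.
      print (foldl (++) baseList produced)  -- [1,2,3,40,41,50,51]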
In brief, these sorts of conflicts simply do not arise in a fully persistent data structure. You may have a situation where you have a persistent data structure together with one or more mutable references, each to some version of the data structure. Yes, a modification to one of these mutable references would need to be synchronized, but they are separate from the persistent data structure itself.
Again using git as an example, there is the persistent data structure, aka the commit graph, as well as mutable references to commits, aka branches. A change to what commit a branch references needs to be synchronized.
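A minimal sketch of that "branch pointer" idea, assuming Haskell's Data.IORef and Data.Map (my own example, not from the thread): the map itself is persistent and needs no locking, and only the mutable reference to the current version is updated atomically.

    import Data.IORef (newIORef, atomicModifyIORef', readIORef)
    import qualified Data.Map as M

    main :: IO ()
    main = do
      branch <- newIORef (M.fromList [("initial", 0 :: Int)])
      -- Move the "branch" to a new version; anyone still holding the old
      -- version keeps a perfectly valid (older) snapshot.
      atomicModifyIORef' branch (\m -> (M.insert "feature" 1 m, ()))
      readIORef branch >>= print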
I concede. If you have an algorithm to do the `git merge` equivalent without human help, I guess no locking or STM is needed. Although that would be very costly to implement.
It's great to have git as a mental model in this discussion, really useful.
Whether the "merge" implementation is costly or complicated very much depends on exactly what it has to do. The git example is pretty much a worst case example in this regard.
An easier example could be that you have a tree structure that represents a mathematical expression. The evaluation of every node could proceed on its own lightweight thread. The merge strategy would be to simply perform the appropriate operation on the results produced by the threads evaluating the child nodes.
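Something along these lines, sketched in Haskell (my own illustration, assuming the parallel package; the Expr type is made up for the example):

    import Control.Parallel.Strategies (runEval, rpar, rseq)

    -- A tiny expression tree; both sides of an Add can be evaluated in
    -- parallel, and the "merge strategy" is just the arithmetic itself.
    data Expr = Lit Int | Add Expr Expr

    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add l r) = runEval $ do
      a <- rpar (eval l)  -- spark the left subtree on another core
      b <- rseq (eval r)  -- evaluate the right subtree here
      _ <- rseq a         -- wait for the spark before combining
      pure (a + b)

    main :: IO ()
    main = print (eval (Add (Lit 1) (Add (Lit 2) (Lit 3))))  -- prints 6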
I don't get the math expression example at all. One thread modifies one value, so there is no (mutability) problem to solve.
You have customers' orders to buy items. One last item remains in your store. You accept one order and update HEAD. You accept another order in parallel and then go to merge. "Merge" here means that you need to return money to the customer and send out an apology e-mail.
More cumbersome than locking, isn't it? But possible, yes.
Yeah, you're right, I muddied the waters a bit with that example. I was just trying to think of examples of persistent data structures being used in a concurrent context, but you had in mind a situation where multiple agents could be updating the same data structure independently.
Your example is much better. I also was thinking of maintaining an account balance with a log (vs. synchronising updates to a stored amount), but it's not much different from your example.
And of course this "eventually consistent" strategy is generally how things happen "at scale", persistent data structures or not.
That would be problematic with non-immutable data structures as well, as you might end up with incorrect ordering or even nonsensical data if you don't use locking.
Having used immutable data structures and concurrency in non-Clojure languages, I mostly resort to something like Concurrent ML for concurrency. Message passing lets you solve situations like that in more elegant ways.
This is a sad reminder to me that the tooling story around Haskell seems to be perpetually almost really good.
Case in point, the core toolchain (e.g. the compiler itself) supports "typed holes". Basically, you drop a placeholder somewhere in an expression and the compiler will infer and report the type of that sub-expression. There are also tools to search either within a program/library or e.g. the entirety of the Hackage package database for functions matching a certain type. It seems like a relatively short distance from there to having an IDE or code editor that can list possible expressions/functions that could satisfy a particular placeholder. And to me, having e.g. a list of functions that transform the input(s) I have into the output I need seems much better than e.g. a partial list of all the functions that relate to a particular type.
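For instance, a minimal illustration of a typed hole (deliberately incomplete code; the exact message wording varies by GHC version):

    -- Leaving an underscore where an expression should go makes GHC report
    -- the type it inferred for the hole, something along the lines of
    --   Found hole: _ :: [Int] -> Int
    -- plus (in recent GHC versions) suggested "valid hole fits" such as
    -- sum or product.
    total :: [Int] -> Int
    total = _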
And maybe such a thing exists today, but I have had really bad luck trying to get Haskell tooling that works well and is easy to set up.
I think this issue is way overblown. The reasons for choosing a particular string representation in Haskell are analogous to the reasons to choose among e.g. byte arrays, streams, string builders, etc. in mainstream programming languages.
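For what it's worth, here's a rough sketch of what that choice looks like in practice (my own example, assuming the text and bytestring packages):

    import qualified Data.Text as T
    import qualified Data.Text.Encoding as TE
    import qualified Data.ByteString as BS

    -- The same value in three common representations: String for small or
    -- throwaway data, Text for user-facing Unicode text, ByteString for
    -- raw bytes and I/O.
    greeting :: String
    greeting = "hello"

    greetingText :: T.Text
    greetingText = T.pack greeting

    greetingBytes :: BS.ByteString
    greetingBytes = TE.encodeUtf8 greetingText

    main :: IO ()
    main = print (greeting, greetingText, greetingBytes)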
> Which prelude?
There is a standard prelude. Unless you actively choose another one, that's the one you'll get.
> Which compiler extensions?
If you want to use certain advanced language features, enable the appropriate extensions.
> Which testing library?
Many mainstream programming platforms (e.g. .NET) have multiple popular testing libraries to choose from. Is this a bad thing?
> If you want to use certain advanced language features, enable the appropriate extensions.
Your answers typify a level of comprehension of Haskell which appears to occupy the same brain space that empathy otherwise would. The whole POINT is that there are complex language features which have to be enabled if you want them: how does anyone know, a priori, in a new situation what to enable and what to disable? What side effects? What consequences?
I loved learning Haskell, but to pretend its syntactic simplicity translates to simplicity in all things is to misunderstand.
GCC has a million -W options; do you think every C programmer knows them all? Do you think that every cat user knows why we joke about cat -v?
You have mastered Haskell and forgotten what a lack of complete understanding means to anyone else. Mastering FORTRAN or Pascal or Lisp was trivial by comparison.
Mastering the underlying concepts of recursion, tail recursion, and type systems, and then integrating optional language features, is not trivial.
There's a lot here I would like to respond to, but I'll limit myself to a couple points:
> The whole POINT is that there are complex language features which have to be enabled if you want them
Yes, "if you want them". If you value having a simpler language, don't use them. And Haskell is hardly unique in this regard. You've mentioned GCC, but I think the Babel transpiler is another good example.
> You have mastered Haskell and forgotten what lack of complete understanding means to anyone else.
I've managed to teach some Haskell to my daughter, who had no prior exposure to programming languages whatsoever, so I doubt your assertion is entirely true.
The hardest programming language I ever learned was the first one. Each one after that was easier than the one before...until I got to Haskell. It was so different that I was forced to go back to a more fundamental understanding of the nature of code and computation and start from there. It felt pretty challenging, especially at first, but I think that had more to do with my perspective and experience coming into it than the language itself (and the fact that I had a family and career at that point didn't help).
These were examples of the choices one has to make when using Haskell; the list is far from exhaustive. The point is that, IMO, Haskell undervalues consistency and standardization.
I can only directly address the examples you've given. If you disagree with how I've characterized those examples, I'd be interested to know why. If you have other examples, I'd be interested to hear them.
Haskell has its weaknesses, e.g. the lack of quality tooling available as compared to mainstream programming languages.
But I disagree that Haskell is complex, at least as a criticism. When expressing concepts of a similar complexity, I find Haskell to be particularly concise and expressive as compared to most other programming languages I am familiar with.
And if Haskell undervalues consistency and standardization, I would like to know as compared to what? The only programming languages that I know of where there are not many reasonable choices for e.g. a testing framework are those that either (a) haven't been around that long, or (b) haven't seen wide adoption.
I have never seen non-trivial Haskell that didn't enable at least 1 compiler extension. Laziness is difficult to reason about. Purity makes easy things hard in exchange for making hard things easy. You can't even make it through the standard library documentation without bumping into category theory.
Haskell is many things. Simple is not one of them.
> I have never seen non-trivial Haskell that didn't enable at least 1 compiler extension.
Language extensions are idiomatic Haskell. There is only one mainstream compiler used everywhere. Widely used extensions are a natural result when the compiler, as a testbed, can outpace the language standard.
Enabling extensions is as trivial as including a standard library package. (In fact, the extensions are often better documented than the standard library, as you hint at.)
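E.g., a sketch of the mechanics (these particular extensions are just examples, not a recommendation):

    {-# LANGUAGE OverloadedStrings #-}
    {-# LANGUAGE LambdaCase #-}

    -- Enabling an extension is a one-line pragma at the top of the file
    -- (or a default-extensions entry in the project's .cabal file).
    module Example where

    import Data.Text (Text)

    -- OverloadedStrings lets the literal below be a Text rather than a String.
    greeting :: Text
    greeting = "hello"

    -- LambdaCase allows \case in place of \x -> case x of ...
    describe :: Maybe Int -> Text
    describe = \case
      Nothing -> "nothing"
      Just _  -> "something"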
Honestly, I’m content to agree to disagree. I’ve had this conversation too often to repeat it here. The criticisms aren’t novel; I’m sure they’re easily found elsewhere on the Internet.