Questions (patrickcollison.com)
1106 points by tosh on Sept 30, 2018 | 470 comments



> Why are there so many successful startups in Stockholm?

Several reasons. First, you need to recognize that any Sweden-based startup will, once it becomes known internationally, have a Stockholm-based office. So the true base isn't a city of 1 million inhabitants; it's a country of 10 million. As an example, I believe Spotify opened their original offices in both Stockholm and Göteborg more or less simultaneously.

With that said, a commonly stated reason for why Sweden in general has such a high prevalence of tech startups comes from a bunch of fortuitous decisions in the 90s and 00s. In 1998 Sweden's government started a program that allowed employers to sell their employees computers under a tax-free scheme (the so-called Hem-PC-reformen, https://sv.wikipedia.org/wiki/Hem-PC-reformen). This was extremely popular and led almost every Swedish home to get an often extremely overpowered personal computer. Thus, practically everyone who was a kid in 1998-2006 (the rebate was cancelled in 2006) grew up with a computer. This gave Sweden a huge advantage over other countries in the early Internet revolution.

Sweden has also invested heavily in building out a fiber network; you have access to gigabit Internet even in some extremely rural areas.

Another thing is that Sweden doesn't have the tradition of dubbing movies. That means kids are exposed to English from an early age. This leads to Swedish tech companies not being afraid of hiring talent globally and generally using English as their business language.

Finally, out of the 5 examples posted, one is Mojang, which is clearly an outlier. I'm not saying what Notch accomplished wasn't extremely impressive, but it was essentially a one-man operation, and probably shouldn't be held up as an example of a trend.


Good observations! You forgot one major thing. This goes for all of Finland, Sweden, Norway and Denmark. The winter is horribly dark and boring. (Unless you're super rich.) Therefore most turn inwards, staying indoors, thinking deeply about problems, spending endless afternoons and nights on things. Be it software development, game development, car tuning, car engine work, engineering, knitting or just reading loads of books.

I found this to be almost impossible to achieve when I moved from Norway to Australia. There I was outside hanging out with friends or just doing stuff on the beach or whatever. The deep focus was harder to achieve. Quality of life was insanely better there, yet somehow I missed the possibility to sit down and be productive in some narrow topic.


I believe I read somewhere that this is also the reason why so many Russians (and people born in the USSR) are great mathematicians and chess players. Math and chess are both indoor activities and also completely apolitical, which is a huge plus in an authoritarian regime.


>>Math and chess are both indoor activities and also completely apolitical, which is a huge plus in an authoritarian regime.

The history of chess is definitely very political, especially inside the Soviet Union. It was seen as a proxy for intelligence, and if Soviet players were better than Americans, it was their way of demonstrating superiority in yet another domain.


True, but ultra-authoritarian regimes will politicize almost everything imaginable.


I read somewhere that it was one of the only ways for them to distinguish themselves and to travel.


The other major thing (and I would argue more important than the others) is Sweden's wealth and social safety net.

Wealth is one of the most important factors for someone to found a startup. You aren't going out and starting a company if you have to work 80-plus hours a week to live.

Just look at the clueless opinions of so many tech founders about how "you shouldn't pay yourself a salary at first after you take investment money". Oh really? So we should just buy groceries and pay rent from the trust fund mommy and daddy gave us?


I agree with your first 2 paragraphs, but the third is overly simplistic (or potentially just not understanding the trade-off at play).

The scenario is that (roughly) 90% of the hypergrowth-style startups which raise a first round will fail to achieve enough momentum to successfully raise a second. And due to the business model choices they've committed to, failing to raise at that point is equivalent to going out of business. A first raise is often too small to be able to do everything they'd like, so they decide that it's in their best interest (long term) to forgo a salary where possible, in order to buy a few extra months of progress before they're forced to start shopping for investment again.

Of course, folks give advice based on their own experience, but it doesn't always generalise to folks in different situations, which is potentially why that would seem like silly advice in (what I assume is) your scenario.

(minor edits toward the end for clarification)


You are completely missing his third point. For people without wealth (or a social safety net which provides the resources you need to survive), working for free isn't an option. You have to eat and have a roof over your head, which means taking a salary if you are working full-time on your startup. It's not a strategic decision.


I must have explained poorly. The strategic decision happens earlier, when you decide which idea to commit to. Certain ideas are both a) incapable of generating early profits and b) dependent on reaching uncertain milestones to unlock the next stage of crucial funding.

So if you don't have the personal finances to deal with that situation, then you make a different decision further upstream, by choosing to work on an idea which has easier funding targets (allowing an early salary), which can generate early profits (likewise), or which has lower development requirements (allowing you to work on it alongside another paying job).

The decision is about idea selection, where I agree with you: "If you need the money, then pick an idea which allows you to pay yourself quickly." But if you've chosen to play a different game (typical hypergrowth VC stuff), and you want to maximise your odds of winning at that particular game, then it's generally ideal to buy yourself more months of business runway instead of more months of personal runway.

Perhaps still not clear, and I know it's an emotionally loaded topic, but hopefully that makes sense.


I think you make my point, which is that unless you have some wealth already, family wealth or otherwise, your options for a startup are much more limited than for those who do have money.


The point is that you shouldn't take a regular market salary. Software people can live on way less than what the market currently offers, so if you pay yourself the competitive rate you're burning a lot of your runway pointlessly.


> Good observations! You forgot one major thing. This goes for all of Finland, Sweden, Norway and Denmark. The winter is horribly dark and boring. (Unless you're super rich.) Therefore most turn inwards, staying indoors, thinking deeply about problems, spending endless afternoons and nights on things. Be it software development, game development, car tuning, car engine work, engineering, knitting or just reading loads of books.

Wonder if that's also a factor in the prevalence of tech/programmers/hackers in Russia. I don't think there's a startup scene there, but there definitely seem to be more technology experts coming from the area in general.


As an anecdote, a lot of programmers and heavy gamers I knew in high school lived on remote islands and took a water taxi over to ours for school. They couldn't do after school activities without sleeping over at someone's place. Their islands had even less to do on them than ours.

Soft forms of isolation certainly seem to lead to solitary activities.


One of the vanishingly few things the Soviets did right was to emphasize STEM education. This seems to have stuck in Russia. And I’ve heard the same was true across the Eastern Bloc.


Hmmm interesting observation about the long dark winters. Edinburgh similarly has a disproportionately successful tech industry (it's the same latitude as Moscow) and Estonia has also done very well in tech (for many other reasons but that might be a contributing factor).


Winter is not horribly dark and boring in Scandinavia unless you are super rich. It’s equally dark and boring for everyone. And virtually everybody in Scandinavia can afford to travel south to sunnier places for a few weeks, if that’s what they want.

I’d say proficiency in English and quite wealthy populations are what matter.


That's not true. The super rich are taking luxury vacations and spa trips.


Finnish travel agencies offer a week-long trip to the Canary Islands for €500; you don't need to be rich to save for that once a year. Most cities have spas too.


I don't think it is that relevant. You can be happy in Sweden and miserable in Malta. The funny thing is that people tend to say the same thing about SV, but because of the nice weather. Not to say that dark afternoons can't be special, but it is generally everything else (like having something relevant to work on) that makes them so. And that doesn't necessarily lose its value if it is sunny instead.


This is a plausible story, but that doesn't mean it's true. Anecdotally I felt more productive living in a warmer place than when living in Scandinavia. I think it completely depends on your situation and an actual study is needed to prove it one way or the other.

I also recall Jared Diamond debunking this as a theory for why "the west" got ahead in his book Guns, Germs, and Steel.


The same is basically true of Seattle. Well, not so dark (the sun is out til 4:30), and winters are quite mild, but wet enough to keep everyone focused.


the most boring weather is reflected in the most boring software


Seattle's weather is hardly boring. And there is plenty of software going on here. Are Facebook, Google, Apple, Unreal, Unity, and Oculus really that boring?


it's always raining in windows?


But then if that's true, you wouldn't get lots of devs in the Bay Area, California. Lots of great weather here.


How many devs have you met that grew up in the Bay Area?


I know a handful. Is it really surprising?


Also a hypothesized reason why Canadians are unreasonably well represented in esports.


Israel doesn't have long dark winters, and yet it's a pioneer of innovation and start-up culture.


I'm an Israeli and it comes from a few factors:

1. Israelis basically have no choice. If you want to make a good income, working hard at a technology startup (we have very few big tech companies) is one of the very few good options.

2. The army: at a very young age, a decent percentage of Israelis who join the army lead in high-value, high-risk situations. That creates a sense of responsibility and strong ambition at a relatively early age.

The army is also a place where a lot of new tech is being developed, so people get exposure, and often in roles of major responsibility.

3. The Jewish people have lived among other peoples, in very hostile conditions, often forced to do banking (loans) and commerce at times when most people did agriculture. That forces a certain entrepreneurial spirit, and possibly higher intelligence (also witnessed by the higher rate of genetic illnesses in Ashkenazi Jews). That, plus a culture that always focused on learning (religiously).


> 2. The army: at a very young age, a decent percentage of Israelis who join the army lead in high-value, high-risk situations. That creates a sense of responsibility and strong ambition at a relatively early age.

Summed up in one word: discipline. The glamorous myth of startups is just that, a myth. Some outliers go from zero to hero overnight, but most require slogging away day in and day out. Motivation can only last so long, and after it's gone all you have left is discipline. Military writings both current and historic routinely speak of discipline being the single most important aspect of achieving a goal.


>and possibly higher intelligence

Interesting to think what would happen if this were said about any other country.


Obviously this discussion is a minefield. But there is some merit in asserting some relations there. In The Netherlands, Iranian and Afghan immigrants are often of more privileged descent than Moroccan and Turkish immigrants, because of the reasons for their migration. Moroccans and Turks generally migrated for manual labor, whereas Iranians and Afghans usually fled religious oppression.

Of course this does not mean Dutch Iranians are smarter than Dutch Moroccans, but it could mean there might be more smart/entrepreneurial Iranians than Moroccans in The Netherlands.

These statistics might die out very quickly though; for example, Turks are often already second or third generation, so many have been born into the privilege of The Netherlands.

Perhaps similarly, the Second World War cost us a huge percentage of Jewish people; being privileged meant having a larger chance of survival through moving to less dangerous countries (like the U.S.).

It doesn't mean Jewish people are smarter; it might mean, though, that you could find a few more smart Jews in their population. Of course the Second World War was already a long time ago, so the effect might already be gone.


The problem is that people (Jews and non-Jews, Israelis and non-Israelis) really believe and expect it.

I'm probably average intelligence, but I've had it expressed to me by people who hardly know me that they expect me to be brilliant or something because I'm an Ashkenazi Jew. It's one of those stigmas that get attached to any race, such as Russians drinking vodka or Argentinians eating meat. I'm sure that there are sober Russians, vegetarians in Argentina, and there's me!


It's not about nationality: Ashkenazi Jews in general -- Israeli or not -- tend to be smarter. ~20% of Nobel prize winners are Jews compared to less than 1% of the global population. So either there's a Jewish conspiracy (which some people believe...) or Jews tend to be smarter.

Some people would say that this is cultural. I doubt it -- I think it's genetic. Intelligence is a physical attribute determined by genes, just like every other physical attribute. It's really no different from the observation that Kenyans and Ethiopians win most marathons.

In general, we should expect different traits to be exhibited at different frequencies by different populations that were reproductively isolated in the past. This doesn't mean prejudice is okay. Not all Jews are smart, and not all Kenyans are going to win marathons. We can acknowledge these correlations without behaving in a prejudiced way toward individuals.

Most people would rather not talk about this. And that's certainly my rule of thumb for in-person conversations. But, hey, we're on the internet.


> So either there's a Jewish conspiracy (which some people believe...) or Jews tend to be smarter.

You're assuming that intelligence is the determining factor in winning a Nobel prize, which seems spurious at best. Work ethic and training, particularly early in life, seem to me like they'd be better predictors. I'd posit that Jewish culture is better at nurturing intelligence before I'd conclude that there's some ethnic superiority going on.


You used the term "ethnic superiority". That's you making a value judgement about intelligence. It has absolutely nothing to do with what I wrote. Personally, I don't think intelligence is a good proxy for "value of a human being".

While there isn't a bullet-proof case, there is quite a bit of evidence for an ethnicity-intelligence correlation.


You left off a key word when you took the term "ethnic superiority" out of context. I prefaced it with the word "some" to indicate that I was considering only a single dimension, intelligence. I said nothing about "value of a human being," so don't put words in my mouth.

The only value judgement I was making is that higher intelligence is superior to lower intelligence, and that Nobel prizes are an extremely poor proxy for intelligence.


> It's really no different from the observation that Kenyans and Ethiopians win most marathons.

But don't most Kenyan and Ethiopian marathon winners also grow up in Kenya or Ethiopia? I.e., how do we discount at the very least environmental factors (including, for instance, diet), even if you doubt cultural ones?


> But don't most Kenyan and Ethiopian marathon winners also grow up in Kenya or Ethiopia?

No. There is a hugely disproportionate number of Americans and Britons of Somali, Kenyan, and Ethiopian origin who excel in world-class distance running (Abdirahman, Farah, and many more, including a big new wave of Somali Americans after them).


Interesting. That still wouldn't completely rule out environmental/cultural factors though, including diet and so on (e.g. maybe eating teff is good for long-distance runners).


I guess you've heard of The Bell Curve and the related controversy (check the Sam Harris podcast with Charles Murray)?

FWIW I don't think you're necessarily wrong about intelligence, but from what I remember The Gene (Siddhartha Mukherjee) does have a few passages contradicting this theory, including by questioning the validity of IQ tests.

Could it also not be that Jews are just more motivated to get into STEM fields and perform well, for cultural reasons or otherwise?

Similarly, Malcolm Gladwell mentions a theory about why "Asians are smarter" which according to him may be related to hard and smart work leading to bigger rice harvests, and other factors (see here https://www.cs.unh.edu/~sbhatia/outliers/outliers.pdf).

While I have zero interest in the cultural implications and all the moral panics surrounding these issues, I don't think intelligence being mostly down to genetics is a proven fact. I find it gets extremely complicated very quickly.

Genes are hugely influenced by their environmental (cultural) triggers and those should not be ignored.


Jews were forced into banking because they were forbidden from doing agriculture or keeping cattle back in Europe. Banking was considered dirty, so Christians didn't want to do it. Jews did what they were allowed: banking, philosophy, medicine, science, and they became masters of the craft.


I thought it was also about usury not being allowed (for Christians, Jews, or Muslims) intrareligiously but only extrareligiously, i.e. Jews could loan money to Christians, but Christians couldn't loan money to Christians.


Yes, that's the usual story indeed. I'm not sure where the concept that banking was 'dirty' came from; I suspect that's mixing up banking with lending. Lending at interest was subject to religious prohibition for all but Jews, who interpreted the Torah in a way that forbade lending only to other Jews. Thus Jewish people were not "forced" into banking because they were banned from agriculture (lol, how would that even work?) but rather became bankers by default, because it was highly profitable and others wouldn't do it for religious reasons.

This is the main reason for the ancient historical stereotypes linking Judaism and money/wealth/power/etc. And why names like Goldstein (gold stone) are considered Jewish names.


>> and possibly higher intelligence

I've heard that a lot of times, mostly from uneducated people. All nations have equal levels of intelligence. The difference might be access to education, environment, overall wealth, and social inequality.


This seems like a dogmatic response rather than a reasoned one. All other aspects of human beings vary by region to some degree; why would intelligence be an exception?


Source?

Edit: I would argue to the contrary, that there is an appreciable difference in average IQ between some countries, which expresses itself among other things in GDP and quality of life. Perhaps the nature of this difference is based on things like nutrition, environmental quality, and pre-natal screening and care, so improvements in all these things will decrease the gap. If it happens that after all of that there is still a certain gap, who cares? As long as the citizens live in peace and relative prosperity.


> All nations have equal levels of intelligence

How do you know this?


By far the biggest factor is the over $130B of aid the U.S. has provided to Israel, which directly nurtured the high-technology industry.[1]

This is also true for the U.S. itself; Silicon Valley is largely a creation of Pentagon investment (see: DARPA).

[1] https://www.aljazeera.com/indepth/interactive/2018/03/unders...


As someone who has worked with teams from Israel, I'd like to hear more about this if you have good sources.

In my experience, it's been the opposite. They like to play office politics, focus on keeping work off their plate, and they're not really team players.

I may just have a bad sample of Israel tech workers, though.


Start-up Nation: The Story of Israel's Economic Miracle makes a good case. [0]

[0] https://en.wikipedia.org/wiki/Start-up_Nation


I have also experienced this. In addition, rampant nationalism is an issue when dealing with Israeli companies, in my experience (5 different customers, similar problems).

Where I think Israeli tech startups get most of their mojo is desperation. Their culture involves so much struggle and effort, and I think the reaction to that at a personal level results in the laziness, non-team-playing, and political problems.


I didn't comment on how good or bad they are to work with, but on the assumption that weather conditions can affect productivity.


Just because something is a factor doesn't mean there can't be other factors that lead to a similar outcome.


Well, their summers are kinda long and hot.


Don't forget Iceland, with its "Christmas Book Flood" (https://www.npr.org/2012/12/25/167537939/literary-iceland-re...).

(disclaimer: I am an American and don't know what I'm talking about. I have visited Norway but it was summer.)


On the other hand, here on the other side of the Atlantic ... Silicon Valley is not only at a temperate latitude but also downwind (jetstream-wise) of an ocean, placing it in the middle of one of our continent's few bastions of reliably non-shitty weather. And there isn't much tech-related stuff happening on the north coast of Alaska or the northern regions of Canada, either.

So I daresay there's more to this than just the Arctic Circle.


Then how do you explain Silicon Valley? Outdoor life is great here. You have a good point, but it may not be as important as you think.


How many engineers have you met that grew up in Silicon Valley?


Because when Silicon Valley established its dominance, access to computing resources, digital networks, tech talent, and venture capital were by far the crucial factors. All those things were much rarer back then, and startups more costly, especially to get off the ground. Given today’s low costs and ubiquitous computing and internet access (and the trends in this direction even 20 years ago), it makes sense that other, softer factors can now come into play.


One counterargument against that indoor scenario would be The Valley™. Unless you consider fog.


> Quality of life was insanely better there

Just because of sun? Did you try UV?


> Quality of life was insanely better there,

> yet somehow I missed the possibility to sit down and be productive in some narrow topic.

Do you mean that you preferred the life you led in Australia to the one in Norway? If not, how do you mean that quality of life was insanely better?


>> I found this to be almost impossible to achieve when I moved from Norway to Australia.

> Do you mean that you preferred the life you led in Australia to the one in Denmark?

Calling a Norwegian a Dane is about as popular as saying a Canadian is from the USA or calling an American British, I guess ;-)


Ah, a simple mistake :)


That's what I guessed :-)

(Seems I've offended someone else though, but I have no idea why.)


Why wouldn't they prefer it?


Nothing beats sitting in front of a computer in the dark.


I can't stand it. Feels like staring at a light bulb. The only thing that makes it passable for me is Night Mode in F.lux on OSX. Nothing else works. I keep a Mid-2012 MacBook Pro around just so I can have a computer I can actually use in a dark room.

Even then it's not ideal. The lowest brightness setting still isn't low enough.

In my twenties I could happily destroy my circadian rhythm and sit in front of a screen til 4-5 in the morning. At 35 that shit's gotten real old.


I never implied they didn't prefer it, so I wonder why you seem to think so.


Well, he literally called it "insanely better", and you felt the need to double-check whether they do prefer it.

That's enough implying in my books.

(Besides, if we're to play this game, I never said that _you_ "implied they didn't prefer it". I just asked, "why wouldn't they prefer it").


I previously read someone's take on this in relation to people feeling secure enough to take risks here (not just in Stockholm but Sweden as a whole, I guess). While Notch's success may be an outlier, the process of trying to do your own thing isn't here. I'm not sure when in his development Notch left King, but in general there are quite a few people here who just try to go their own way (sometimes completely outside of their previous industry, even). It helps when you don't have to live in fear of being completely penniless should you fail, I think.

Purely anecdotally, I am not part of a startup, but I do a lot of hobby work in coffee shops around Stockholm. I'm always seeing/hearing people in local cafes discussing some new startup or project they are getting off the ground. I've also seen firsthand many people get maybe a year or two of funding and start up their own game studios, for example. Of course not all of them end up being successful, but many people feel secure/comfortable enough to try. I think the volume of these attempts and people trying to do something new also helps drive up the number of "hits" overall, to be added to these kinds of lists.


I think the social safety net is definitely a part of it, yes. But after working for a couple years at different startups in Stockholm I can say that another big factor is that most Swedish founders are either rich themselves, or their parents are, or they are very well connected. They don’t see themselves this way of course, they think they are building everything from scratch, but this does tend to be the case from my experience.


I think it is becoming more common. In the early days it seems like there was more of a mix between rich dropouts and poor enthusiasts. Today, especially with the housing market, it seems like there are a lot more 'fake' companies. It used to be that if you came from a wealthy family and didn't know what to do, you started a public relations, event or media company to pretend that you were doing something. Today that is a "technology startup". Most of these are second- or third-tier companies in the sense of global reach, while much of the success of Swedish startups comes from them ending up as first-tier companies. Maybe part of that was that Sweden sort of experienced the dot-com bubble, which meant that there was less room for phonies for a while.


Definitely, having a social safety net that allows for moving on from failures is probably a huge factor.


But that safety net doesn't extend to startup founders, does it? AFAIR, in Norway you need to have worked at a company as a regular employee for at least one or two years before you can claim unemployment benefits. Health insurance is always there, and so are basic social benefits (but the unemployment benefit is the only one you can live on).


I think it can be hard for people to appreciate the differences because the truly different things are those one takes for granted. To not make this too long I will go directly to what I think has made the difference:

A relatively egalitarian society with a lot of trust between people and towards the government has made progress relatively effortless. When people don't feel the need to guard their own position because they might end up getting screwed, there often isn't any good reason to be against development.

A relatively large amount of excess time, security and knowledge enables people to do something else. Startups are ultimately about harnessing excess potential. If all the value is captured by some other industry or the housing market, there won't be much left over for startups.

Various cultural factors enabled by those things. Like being able to be independent (with the help of the government). Not being afraid to leave value on the table. An overall sort of generous, or at least non-petty, society.

If you look at factors like these, you can actually draw a lot of parallels to somewhere like SV compared to the rest of the world. SV is to at least some degree a recreation of academic life at its best. Which is the area of US society that would be most similar to Sweden.


When people don't feel the need to guard their own position because they might end up getting screwed...

I’m not sure this is too applicable to SV, lol, nor academic life in the US.


It's interesting that quite possibly the most important condition didn't get mentioned.

Sweden's social safety net is among the strongest in the world. If the floor on failure is that you still have access to food, housing and healthcare then quitting a full-time job to start a company becomes much more possible.

More people starting companies, more attempts at big targets, more outlier successes.


Maybe the opposite is true too: SV/the US has more innovation because there's absolutely nothing to lose, and you can't count on eating or getting medical care anyway, because US.


Also education and childcare


Also, Sweden was the first European country that was connected to the U.S. Internet, or more specifically to NSFNET [1][2]. This was because the adoption of TCP/IP protocols for wide area networks came earlier in Sweden than in other parts of Europe, for various serendipitous reasons. A funny anecdote is that Switzerland could have beaten Sweden to be connected first, but they were delayed because they had to renumber all their networks at CERN that were already using TCP/IP.

Initially it was just non-commercial Internet for academic institutions, but lots of students were exposed to the technology and the infrastructure early on.

[1]. https://en.wikipedia.org/wiki/History_of_the_Internet_in_Swe...

[2]. https://en.wikipedia.org/wiki/Internet#History


One other reason is that taxation of income gets extremely high (60%) when the salary rises above ~$7000/month. This makes people feel that it is impossible to get rich by working for someone else in Sweden, which in turn pushes productive people into startups.
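
To put a number on that, here is a minimal sketch of the marginal-rate effect. The only figures used are the ones in this comment (a 60% marginal rate above ~$7000/month); real Swedish tax brackets are more complicated, so this is purely illustrative:

  # Sketch of the marginal-rate effect, using only the figures above:
  # a 60% marginal tax on monthly income above ~$7000.
  # (Real Swedish brackets are more complicated; illustrative only.)
  THRESHOLD = 7000
  MARGINAL_RATE = 0.60

  def take_home_above_threshold(monthly_salary):
      """Take-home pay from the portion of salary above the threshold."""
      above = max(0, monthly_salary - THRESHOLD)
      return above * (1 - MARGINAL_RATE)

  # A $3000/month raise above the threshold nets only $1200:
  print(take_home_above_threshold(10000))  # -> 1200.0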


That could only apply to founders who can sell parts of their company, surely. Employee number 1 would still face the same tax ceiling, and you need employees to build a company. The ratio of founders:employees will still be tiny even in Sweden.


No, since stock is offered to early employees, and it is taxed way lower.


Most of the things you mention also apply to The Netherlands, except possibly the tax-free computer scheme (not sure about that one). As far as I can tell we're not really all that noteworthy startup-wise. Assuming that's true, any ideas what might be the differentiating factor?

From what I understand, both Holland and Sweden are culturally similar enough that I can't immediately think of other factors that make 'us' more hostile to startups.


It may just be me, but I have a feeling that Dutch startups have a tendency to focus on the Netherlands first, and either don’t feel the need to go global or just fail to do so.

Also, compensation. The Netherlands has a lot of good people, but they’ll leave for greener pastures.


> It may just be me, but I have a feeling that Dutch startups have a tendency to focus on the Netherlands first, and either don’t feel the need to go global or just fail to do so.

That leaves the more interesting question: why is The Netherlands different from Sweden in this regard? My impression was that The Netherlands, if anything, is more internationally-focused.

> Also, compensation. The Netherlands has a lot of good people, but they’ll leave for greener pastures.

Does Sweden pay better, taking into account cost of living? And if so, why?

(not disagreeing, btw, just more questions)


That's an interesting question, especially taking into account that I know of and have worked with many computer scientists from Holland, but not from Sweden. You would think that Holland would have the expertise to form many startups.

Obviously, there is the business side in addition to the computer science side. I don't know much about that with respect to Holland.

But one possibility is that Dutch computer scientists tend to leave Holland to work. That would explain why I know of so many as an American. (E.g., I worked with Guido van Rossum, and Werner Vogels advised my master's project.)


That is a hard one. The only thing I could come up with off the top of my head is that Sweden is a bit more consensus-oriented, which could be quite good for creating commercial successes. For example, I would say that the Netherlands has a stronger electronic music scene, but the big names are still a bit less commercial than Avicii and Swedish House Mafia. Maybe that is because between the Netherlands and its neighboring countries you have a pretty big market, while for Sweden it is Sweden and then the world.


Interesting. Generally, in conversations I've had before, the consensus-don't-stand-out aspect of Dutch (and perhaps more so Swedish) culture is considered a bad environment for startups.

I don't think our electronic music scene is less commercial though.


The Netherlands also had a rebate for personal computers, called the "PC privé plan" (Private PC plan), where bosses could give their employees tax-free computers up to a maximum of about 2300 euros at the start, which was cut to 1500 euros near the end of the measure. This lasted from 1997 until 2004.


I remember that in the late '90s/early 2000s Swedes, or Scandinavians in general, always had these super-low-ping internet connections. Combine that with a long winter and you have a lot of tech-experienced youngsters who spent a lot of time on projects (in this case gaming). They did (and still do) extremely well in esports in relation to the size of the population.


> As an example, I believe Spotify opened their original offices in both Stockholm and Göteborg more or less simultaneously.

Without disputing the technical correctness, Spotify's Göteborg office was literally a single desk at a shared office for the first three years of its existence.


I think Stockholm does have more startups than the rest of Sweden, though, so there is probably more to it. The article understates the size of Stockholm a bit, though: as measured by urban area, it’s a city of 2.5 million.


Not only that, but within a one-hour drive are Västerås and Uppsala, big cities by Sweden's standards.

The former hosts ABB and has tons of robotics related research, the latter has one of the major universities. The startup scene is smaller in Västerås than in Uppsala though.

Uppsala produced Klarna and Skype, and Västerås produced Pingdom, to name a few.


> Another thing is that Sweden doesn't have the tradition of dubbing movies

Here in Belgium, in the Flemish part of the country we don't dub movies, but in the French part we do. The levels of English are massively different; not sure if this is the main reason.


That's interesting about the reform. I was a kid during those years, and I recall us buying our computers through my dad's work.


> Is Bloom's "Two Sigma" phenomenon real? ... one-on-one tutoring using mastery learning led to a two sigma(!) improvement in student performance.

When I tutored, I found students often had some misunderstanding, somewhere. So my task was to listen, to find that misunderstanding, so I could correct it. This "teaching" is listening, more than talking. The idea is they are lost, but to know what direction they need, I first must know where they are.

To correct a misunderstanding without this guidance can be very difficult, and might only happen serendipitously, years later... assuming they continue with study. Which an unidentified misunderstanding can prevent.

Recently, I'm seeing the other side, while self-learning some maths. I can see how much one-on-one tutoring would help clear up misunderstandings. Instead, I'm using the strategy of insisting on starting from the basics, chasing down each detail as much as I can, using online resources, and working out proofs for myself. Each step is a journey in itself...

Luckily, I have enough skill, confidence, motivation and time. By working it out myself, I think I'm also gaining a depth of understanding I could not get from a tutor's guidance.

But it sure would be a lot more efficient!

[ PS I haven't yet read the two PDFs in the question ]


I tend to believe in some of Stephen Krashen's notions of language acquisition. Specifically, that there is a difference between learning (being able to remember and repeat something) and acquisition (being able to use it fluently). Also that acquisition comes from comprehension. I also believe that language acquisition is no different from the acquisition of any other skill. Many people don't agree with these ideas, but I'm laying them out as my assumptions before I start :-)

With that in mind, one of the interesting findings in language acquisition studies is that when free reading (reading things for pleasure), it takes 95% comprehension of the text in order to acquire new grammar and vocabulary in context (quite a bit higher than most people imagine -- which is one of the reasons people advance a lot more slowly than they might otherwise).

With that, just like your experience, the key to teaching is to ensure that the student comprehends at least 95% of what you are saying. The only way to ensure this is by constantly testing their comprehension in a two-way dialog. Once a very high level of comprehension is reached, and once enough repetition happens to remember the thing, you will acquire the knowledge.

It is incredibly difficult to do this unless you are teaching 1:1. There is a special technique called "circling" that you can use to teach language to a larger number of students, and it worked extremely well for me. I still can't effectively do it for more than about 10 or 15, though. Think about it: in a 45-minute class with 15 students, each student gets 3 minutes of my time. It's not actually surprising that classes of 30 or 40 are basically impossible.

Quick note: I'm no longer teaching, in case it is unclear from the above.


There are other ways of making sure the student understood the concept. Recently I played the game 'The Witness'. The whole game is about learning new puzzle rules, and yet there is not a single dialog within the game or even text explaining those rules.

I am not saying that their technique is the most efficient (e.g., adding hints would undoubtedly increase the efficiency, but also ruin the game experience), just that there are other methods of making sure a student understands a concept. You don't necessarily need the one-on-one conversations. Those conversations are mostly useful to round up incomplete teaching material (again, I am not saying that creating perfect teaching material is easy).


You had a one-on-one conversation with Jon Blow's creation. This is not an argument against one-on-one. More like an argument for low-key AI tutors.


While I like your way of thinking, I don't think the argument applies. The game itself doesn't possess any kind of AI and is somewhat static, more like a sudoku book: There are lots of puzzles, but you know it when you have solved one.

The one-on-one tutor idea is that you have a master who sees the mistakes the student makes and gives him an exercise to target precisely the misconception the student might have in his head.

The Witness, on the other hand, doesn't possess such intelligence. Instead, it is a carefully crafted series of puzzles which slowly broaden the possible moves. Most of the time, each next puzzle requires you to learn a new part of the rules. Sometimes you assumed that part anyway, and the puzzles are easy. But sometimes you have to find the misconception in your head and replace it with something correct, which makes the puzzle harder.

So one concept includes an intelligent observer, while the other is more like a perfected textbook.


> it takes 95% comprehension of the text in order to acquire new grammar and vocabulary

Strongly disagree. I'm learning Chinese via YouTube. I comprehend 15%, but I pick up new patterns and words all the time.


That's interesting. I'm definitely interested to understand what you are doing. Are you watching Chinese videos or Chinese language instruction videos? Are you able to use the language fluently?


I find it hard to maintain motivation and attention if I'm not getting at least ~30%. This applies to both movies and real-life conversations in another language. And, now I think about it, it also applies to English-language materials that require specific technical background, e.g. academic papers.


Any good sources you recommend for learning Chinese on YouTube? I’m just getting started with HelloChinese and Fluent Forever; watching video seems too deep for now.



Not YouTube, but Chinese Pod is really great.


I think one-on-one tutoring is an area where the current AI movement could make a difference. Everybody seems to be obsessed with building really cool stuff, but our teaching system is quite obsolete and could get better by adding smart systems.

That said, I must add that I am referring to teaching humans who are 12 years and older (IMHO young kids require physical interaction if you want to avoid psychological conditions).


I agree, generally; however, I believe the low-hanging fruit is in assistive tools for teachers.

The commenter above made a good point about the value of removing barriers to learning as a primary asset of a good teacher. People tend to focus on content knowledge/curriculum as the mark of good teaching, but removing barriers is the real, difficult work. Tools that assist the instructor in understanding their students, their students’ knowledge, and their learning behaviors would be valuable. Don’t focus on content delivery. Focus on making in-class assessment more frequent and trustable. Focus on tools that help an instructor understand thirty students as they might understand five.


I am working to solve this by creating a conversation-bot-based interface that can make personalized learning viable. If you are interested in updates you can sign up here: https://tinyletter.com/primerlabs/

P.S.: I was supposed to launch a month back, but a lot of rewrites made it difficult. For now, I can say I will be launching soon.


As a teacher -- yes! More of a life coach: someone who rewards you for good decisions, eating well, homework, being kind, etc.


Tutoring is often the only time that a student will sit down for a dedicated amount of time and study without distractions. As much as I want to believe in my own value as a tutor (and I do think there is some value), I will admit that a fair portion of the benefit is just having someone force the student to study.


> Recently, I'm seeing the other side, while self-learning some maths. I can see how much one-on-one tutoring would help clear up misunderstandings. Instead, I'm using the strategy of insisting on starting from the basics, chasing down each detail as much as I can, using online resources, and working out proofs for myself. Each step is a journey in itself...

I did this for a while, then gave up the self-learning aspect and went to university to study math part-time. I have the utmost respect for anyone who has the patience to push through it on their own. Some things I can learn on my own, but higher maths I couldn't, at least not with any degree of efficiency.


I think listening is also valuable because verbalizing a problem often brings up misconceptions or helps internalize the idea, and thus leads to a solution, maybe with a nudge in the right direction if that's expected in a dialog, without the tutor going the full way.

I had that experience sometimes when preparing a question to ask online. Sometimes the answer becomes clear when trying to see it from the listener's side, or researching the problem space to phrase the question properly yields unexpected results.


> The Empire State Building was built in 410 days

At least one reason is that we have substantially different safety regulations, since we're no longer accepting of deaths on a project like that. 5 people died on that project. 11 died building the Golden Gate. The original Bay Bridge? 24.

They actually had a rule of thumb at the time: 1 death for every $1M spent on a project[1]. Any metric like that would be absolutely unacceptable today.

[1] - https://www.npr.org/2012/05/27/153778083/75-years-later-buil...


Death rates are probably better than absolute number of deaths for comparison. Here are death rates per 1000 workers of some well known construction projects:

  80    Transcontinental Railroad
  80    Suez Canal
  50    Brooklyn Bridge
  17.46 World Trade Center
   6.4  Sydney Harbor Bridge
   4.47 Hoover Dam
   3.37 San Francisco Bay Bridge
   3.33 Eiffel Tower
   2.67 Titanic
   2.5  Sears Tower
   1.47 Empire State Building
   1.17 Trans-Alaska Pipeline System
   0.75 City Center Las Vegas
   0    Chrysler Building
Topping all of those by a large amount are the Panama Canal and the Burma-Siam railway, which I did not include because I don't have numbers: they are literally off the chart. I mean that literally. On the bar chart I'm getting the numbers from [1], the bars for those are so big they are clipped and the numbers are not visible.

[1] https://www.forconstructionpros.com/blogs/construction-toolb...
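
As a sanity check, the rate is just deaths divided by workforce size, scaled to 1,000. A minimal sketch (the ~3,400-worker figure for the Empire State Building is an assumption based on commonly cited accounts, not taken from the chart in [1]; the 5 deaths figure is from the NPR piece in the parent comment):

  # Death rate per 1,000 workers: deaths / workers * 1000.
  # The worker count is an assumption (~3,400 is a commonly cited
  # peak workforce for the Empire State Building).
  def rate_per_1000(deaths, workers):
      return deaths / workers * 1000

  print(round(rate_per_1000(5, 3400), 2))  # -> 1.47, matching the table above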


Are you on mobile? Or something else might be breaking formatting for you. The chart linked does have numbers for the Panama Canal (408.12) and Burma-Siam Railway (385.45).


40% of workers died building the Panama Canal? That's nuts.


The history of the canal itself is pretty interesting. It was an effort started by the French, who lost too many workers to tropical diseases (specifically malaria). Then the Americans took over, and one of the reasons they succeeded was they made sure to take preventative measures to protect workers from malaria.


No, I'm on Firefox 63.0b9 on MacOS.

Looks like I zoomed in (cmd-+) a few times to make the numbers more readable, and went past what the page's CSS could handle.


Vancouver’s “Second Narrows” bridge was built in the late 50s and completed in 3 years. Midway through the project a major accident killed 19 people. It was the result of an engineering error.

In the mid 90s the bridge name was changed. It is now generally referred to as the “Ironworkers memorial bridge”. Anyone crossing that bridge since is constantly reminded of that engineering error.

“Better safe than sorry” is a rising belief.

Once this truism becomes generally accepted, as it now generally is, it takes precedence over other considerations. Since “safe” means more than physical safety, practically any human action is subject to exponentially increasing levels of scrutiny. It takes time to come to an agreement that everything is safe. In big projects it takes a lot of time. On bigger projects it might never come.


> “Better safe than sorry” is a rising belief.

I think this is a cowardly and inaccurate belief and agree it is what is driving changes across all areas.

It is the institutional flaw of democracies and free markets. People (demand) are capable of acting irrationally and emotionally, leading to a reduction of individual and public good.


"I think this is a cowardly and inaccurate belief"

Throughout history you find the people who believe this are not the ones whose safety is sacrificed in pursuit of some grand project...


You’re assuming a lot.

I’m assuming your OP thinks “better safe than sorry” is stupid because it doesn’t actually help you make decisions.

I would much rather drive on a bridge built by someone whose motto is “do it right.”

This isn’t the Apollo program. We know how to make safe bridges. It’s not a matter of being cautious and talking to every stakeholder. It’s a matter of hiring actual engineers,* letting them work, believing them, and then giving them the resources to monitor construction properly.

It means sometimes rejecting a lot of bad material and holding the project up for 8 months.

It does not mean “lean safe” and cross your fingers.

* meaning licensed engineers. The word engineer has I guess been made meaningless by thousands of people writing code and calling it “engineering”. It used to mean someone had completed training as an engineer.


I believe you are either arguing in bad faith, or not giving the post you are replying to enough credit.

Better safe than sorry MEANS do it right. It means that human life is more valuable than material wealth.

If you are hiring amateurs to build your bridge, or you are “crossing your fingers”, then you are not being “better safe than sorry” — in the sense that what you are doing cannot be an implementation of “better safe than sorry” that can be reconciled with the broader cultural context in which people use the phrase and discuss it.


> Better safe than sorry MEANS do it right. It means that human life is more valuable than material wealth.

This is not accurate on either count.

First, material wealth can be converted into quality of life in multiple different ways. Consuming additional millions to push death rates down by a few percentage points is taking away from other places, ostensibly hurting others.

Second, better safe than sorry does not mean do it right. Mistakes will happen, and acting like nothing will ever go wrong is a fool's errand. Planning for failure is a significant portion of project management and engineering in general. The goal is 0 mistakes, but severe overreactions in response to failure can have a net negative impact.


A net negative impact on whom? A net positive impact on whom? How do you quantify this impact?


> The word engineer has I guess been made meaningless by thousands of people writing code and calling it “engineering”.

A friend of mine was just hired as a project engineer at a construction company. He was confused since he has no engineering experience. Turns out, at this company, it means you are training to be a project manager.


“Do it right” is what we want; “better safe than sorry” is what we usually get. “Better safe than sorry” in practice usually ends up looking like a tool for acquiring broad agreement from stakeholders and spreading accountability. I have seen many “better safe than sorry” initiatives that result in complete garbage that loses the purpose, spirit and original intent of its mission.


There are plenty of places, such as western European countries and Japan, that have strong safety regulations and still manage to complete large-scale infrastructure projects quickly.

Another way to look at this is to ask why the United States seems incapable of even maintaining existing projects. For example, look at the current state of the New York City subway.

I suspect slowness in building new infrastructure, and poor maintenance of existing infrastructure, have the same root cause: lack of political will.

American voters don't expect their governments to be good at this kind of thing. European voters would vote politicians out of office if their transit systems got as bad as the NY subway has become. It would be seen as a failure to execute one of the basic duties of government.


>> American voters don't expect their governments to be good at this kind of thing. European voters would vote politicians out of office if their transit systems got as bad as the NY subway has become. It would be seen as a failure to execute one of the basic duties of government.

Americans often vote those people out of office and their replacements are equally useless or worse.


Unfortunately that's often not accurate. The vast majority of the time the incumbent wins. We vote by party here, and if the incumbent is our party's rep we will vote for them regardless of track record, as we assume the other party is worse.

We have created a system where results often don't matter as long as there is the correct capital letter next to your name on the ballot.

Compounding this further is that politicians know that they can count on your vote, but they rely on the money of industry and lobbyists to campaign. Thus the very industry that is supposed to be fixing the problem under contract is able to overcharge and take longer than agreed, because our politicians rely on them for funding.

https://www.opensecrets.org/overview/reelect.php


Look up the Berlin BER airport for a European project that's a complete disaster.


Which is mostly due to safety concerns: BER doesn't meet the fire regulations, due to complete chaos in planning the construction.

Nowadays, big projects in the West are far more complex, since they have to meet more demands and more stakeholders are involved. In authoritarian countries this is not so much a problem; the new airport in Istanbul was built very fast, but concerns from citizens are not respected, etc.


Most American voters have a phobia of government.


This will sound horrible, but is it rational? E.g. say you are building a huge hospital, and due to the above it will take 4 years longer at 2x the cost, so basically you could lose X lives due to the hospital not being there and due to the increased cost of care.


Yes, it is rational. We should live in a society where the expected human sacrifice of a construction project should be 0.

Pure utilitarianism leads to outcomes that are clearly out of step with almost everyone's moral codes. For example you could kill someone and take their organs to save the lives of 4-5 people. Is it rational that we're not allowed to do that? Why do some people get to keep 2 kidneys when there are others with none?

This is solved at least somewhat by using 'rule utilitarianism' instead of 'act utilitarianism'. Society is better off as a whole if we adhere to rules such as protection of the human body or safety regulations when constructing buildings.


> For example you could kill someone and take their organs to save the lives of 4-5 people. Is it rational that we're not allowed to do that? Why do some people get to keep 2 kidneys when there are others with none?

There was a pretty good short story, "Dibs" by Brian Plante, about that published in the April 2004 issue of Analog Science Fiction and Fact.

Everyone was required to be registered in the transplant matching system. Occasionally you'd receive a letter telling you that someone was going to die soon unless they got one of your organs. That person now had dibs on that organ, and if anything happened to you before they died they got it.

Usually you would get another letter a couple weeks or so later telling you that the person no longer had dibs, which generally meant that they had died.

Sometimes, though, you'd get a second dibs letter while you already had one organ under dibs.

And sometimes you'd get a dibs letter when you already had two organs under dibs...meaning if you died now it would save three lives. At that point you were required to report in and your organs were taken to save those three other lives.

The story concerned someone who worked for the transplant matching agency who got a second dibs letter and was quite worried. He illegally used his insider access to find out who the people were who had dibs on him, and started digging around to try to convince himself they were terrible people who didn't deserve to survive to justify illegally interfering, if I recall correctly (I only read the story once, when it was in the current issue).

I don't remember what happened after that. I just remember thinking that it was an interesting story and explored some interesting issues.


Do you believe we should make the national speed limit 25? If not, you're accepting that people will die needlessly, and that the value of a human life is not, in fact, infinite.


The value of a human life is not infinite, but that doesn't mean it isn't worth more than a certain amount of time spent on a construction project. The people who make executive choices about construction projects should not decide that it is acceptable for x people to die on this project in exchange for y fewer months construction time. Accidents happen, but we should not plan to trade lives.

Consider if by sacrificing someone on an altar you could magically cause several months of construction work to happen overnight. That would still be murder.


Sorry, let me clarify: I meant my question literally. May I ask what your answer is?


FWIW, I would [make the national speed limit 25] if I could. (With some hopefully obvious qualifications.)

The current traffic system is an insane "death and mayhem lottery" that we force ourselves to play, without respect to youth, age, or anything else.

The current interest and action towards bike-friendly cities is a symptom, I think, of a healing of societies' psyches. We have been pretty brutal to each other since the Younger Dryas, and it's only recently that we've started to calm down and think about what we really want our civilization to be like.


Ok, so going from 50 to 25 will reduce some deaths. That's right. But now we can reduce to 20, which will prevent even more deaths. Then 15, 10... Where do we stop?


Fair point, and I have two answers for you.

The real thing I would advocate (if this weren't a beautiful Sunday afternoon, calling me away from my keyboard) is a design for traffic that begins from the premise of three (or four) interconnected but separate networks, one each for pedestrians, bikes, and rail, and maybe one network of specialized freeways for trucks and buses. Personal cars would be a luxury (unless you live in the country) that few would need (rather than a cornerstone of our economy), with rentals taking up the slack for vacations and such.

But if you're interested in this sort of thing, don't bother with my blathering, go read Christopher Alexander.

My other answer is really just an invitation to a kind of thought experiment: what if we really did restrict ourselves to just walking, biking, and trains? How would civilization look in that alternate reality?


And what is the real human cost when we factor in the number of human lives wasted sitting in traffic at stupidly low speeds?

If decreasing your speed from 100km/h to 50km/h gives you a 1% lower chance of dying in a road traffic accident, but you spend an additional 2% of your life stuck in traffic, is that a win?


Has this been tried? I would expect more speeding and maybe even more fatalities.


Not if there were an ironclad law that cars have to be manufactured to be physically incapable of exceeding that speed.


That is essentially how "vision zero" works, but with the right speed for the right conditions.

https://youtu.be/7kIFegy4fII?t=1478


Is this the argument you want to plant your stake in the ground on as absurd? Because the modern debate in the tech community is "should people be allowed to operate motor vehicles at all?"


Even autonomous vehicles will always have nonzero fatality rates. Letting them go 50mph will lead to more human deaths than capping them at 25.


lol, I hear people frame the self-driving car discussion that way regularly, but it is just wrong. No one is going to make human-driven vehicles illegal.

They may be more like classic cars than regular vehicles at some point, though.


Nah, you'll just see the insurance cost spike to the point where driving is a weekend activity for rich weirdos. Give it a generation and it will be as strange and morally suspect as smoking.


> it will be as strange and morally suspect as smoking.

You could have picked a better example. I see tons of young people smoke and wonder what the hell is wrong with them, whether they haven't been paying attention for the last 40 years, and then I realize they didn't because they weren't there to begin with. So the tobacco industry can work their nasty charms on them with abandon, because there are new potential suckers born every day.


Smoking is a weird analogy. I'd expect a better one to be something like "give it a generation and it will be as strange and morally suspect as a horse and buggy".

Or alternatively, maybe as strange as a motorcycle or vintage MG.


I wouldn't be surprised to see an increasing number of (express-type) roads or perhaps dedicated lanes where human drivers are not allowed, though, after self-driving capabilities become the norm. (Yes, I realise that's an assumption.)

I think people underestimate the cultural impact that self-driving vehicles will have - imagine a whole generation or two after self-driving vehicles are generally available - how many people will bother learning to drive? I think it might become more of a job-specific skill than a general 'adult life' skill as it is now in most places.


First it will be like knowing how to drive a stick-shift. Then it'll be like owning a sports car. Then it'll be like owning a horse or a boat.


I think you are absolutely right about some limited circumstances that make them the only legal option. But the analogy that I keep making is to classic cars. A lot of them don't have the safety features that we expect today. It isn't uncommon for their owners to say things like "I'm only safe on roads that existed in 1960". It is obviously an exaggeration, but the point is that even today there are plenty of cars that are legal to operate but probably wouldn't be anyone's preference on a busy 70 MPH interstate.

At some point, human driven cars become novelties, just like that. There is no reason to ban them, but as you suggest, maybe there will be some HOV-like lanes where they don't really have access. Or even some time constraints (not during rush hour on some key roads, not in lower manhattan, etc).


Your "pure utilitarianism" is a complete straw man.

Killing someone to take their organs to save 5 lives is utilitarianism taken only to first-order effects. We live in a world that does not terminate after one time step, so we have to consider Nth-order effects to calculate utility.

For example, the second order effect is other humans' moral judgements. "How horrible, he murdered that man" is a valid moral reaction to have, and this is disutility that must be accounted for in a "pure utilitarian" world view. Third order effects may be the social disorder that results from allowing such ad hoc killings as means to an end, and so on.

The only thing preventing pure utilitarianism from being viable is a lack of compute power, and "rule utilitarianism" is a poor heuristic based approach for philosophers without patience ;)


Great comment, I've never had a good way of putting that same thought.


> We should live in a society where the expected human sacrifice of a construction project is 0.

No, but that should be the ideal which you should strive to move towards, when practicable. But you can't ever actually get there, and shouldn't try infinitely hard either.


Yes, but only up to a point. If all the rules and safety checks inflated the cost and timeline of projects exponentially, then barely anything would get built.

The question, as always, is: "Where do we draw the line?".


This doesn't seem to be true: things do get built today, even with high safety standards.


It might be more palatable if the benefits accrued to the people assuming the risks.

Reliably the people absorbing the risks capture almost none of the added value.


Yes, it does sound horrible. Not because of the objective, but because it is so uninformed and low-effort that it is really painful.

Yes, it's a great idea. Killing one person will allow us to double our speed. How exactly, I wonder, but this seems like the kind of project where asking questions is strictly forbidden.


This has nothing to do with specifically killing people; it has to do with creating a regulatory environment where very few entities can compete and comply with regulation. I am not sure anyone can easily attribute the improved stats to that burdensome regulation, as it could just be general tech and process improvements.


The premise that people will die if the hospital takes longer to build is false. The same work can be done in another building.


You make a reasonable point: health and safety bullshit can go too far at times, but equally, exploitation often goes further, in that the effort to save/pocket costs overshadows the good will. Very difficult to quantify.


China built their national high-speed rail in remarkable time: they covered their entire country in less time than it will take California to build one high-speed rail line. The difference seems to be that autocracy gets shit done.


Sure, they didn't have to worry about property rights, environmental impact studies, etc. They just did it.

We built the first interstate highways pretty quickly too, and part of the reason is that we just plotted a line and set the bulldozers to work. Nobody worried about habitat destruction, erosion, endangered species, etc.


Interestingly the author cites Hong Kong in another question as a city that should be replicated. It's a city where at least 10 (and likely a great deal more#) workers died and 234-600 were injured in the last few years building a bridge of questionable utility to other local cities.

# 10 are confirmed dead on the HK portion, the death toll on the Chinese portion is unknown.

There's little question that lax worker safety and weak labor laws can contribute to faster economic growth, but I'm not sure that's something we should be trying to replicate.

https://en.wikipedia.org/wiki/Hong_Kong%E2%80%93Zhuhai%E2%80...

http://www.ejinsight.com/20170413-how-many-more-people-have-...


I'd be surprised if safety is a significant contributor. Assuming it is, why haven't we gained, in almost a hundred years, enough efficiency to counterbalance it?


> Any metric like that would be absolutely unacceptable today.

Except for car deaths. Over one million a year seems to be acceptable.


"Why are certain things getting so much more expensive?"

Because we are lying about inflation. It's not a conspiracy, just a mutually agreed-to delusion. By pretending inflation is lower than it is, the poor feel like they are standing still instead of sinking ...though instinctively they know. And the people just keeping up get to feel richer. Since technology and efficiency improve, the people staying in the same place have cooler things.

If inflation were reported correctly, average people would see their paychecks dropping as wealth and power consolidate elsewhere. There is no interest in creating alarm around this fact. Instead the public is distracted by social drama, and political discourse is consumed by things that do not affect the real shift in power.


Do you have any evidence that “we are lying about inflation”?

Unless you do, I think those industries are getting more expensive due to Baumol’s Cost Disease:

https://en.m.wikipedia.org/wiki/Baumol%27s_cost_disease



ShadowStats inflation is literally the CPI minus a constant. The guy making it is completely laughing at you -- he doesn't have a methodology to calculate inflation, he's just publishing numbers people want to hear.


I have a different take. Healthcare, education, and construction costs are all regulated by governments. Unlike companies, governments are systems of people who thrive on complexity. More complexity = more work, more power, more benefits (direct or indirect).

To give you an example from construction: a company producing a complex part is "giving away" 3-day trainings in fancy locations, with all hotel & meals paid by the company. As a government employee who gets a fixed salary no matter the results, would you prefer using a product which gives you access to these free trainings (a free mini-holiday), or a cheap part which does the same thing, minus the fancy training sessions?

I think the root cause is having some people in charge of other people's money, without clear responsibilities (vs. the evaluations which happen in private companies, with the possibility of getting fired anytime you perform poorly on result-oriented KPIs), AND having a monopoly. You can't simply start another healthcare or education system without complying with all the existing complex regulations & processes.


From a US perspective, these are all markets that depend on expensive skilled US labor. Many things like manufacturing have shifted to labor markets that are cheaper. How much is due to currency differences, cost of living differences, etc. vs. employee safety or anti-pollution regulations ... I don’t know.


My dad was a programmer and I'm a programmer. I don't think I'll ever make a salary as high as he did[1], and that's in absolute dollars. Yet, my quality of life is definitely better. I take vacations all over the world. I can look up any fact I want instantly on the internet. I work from home 3 days a week. I've easily cured maladies that were huge nuisances when he was my age.

You're right that macroeconomic indicators don't seem to tell the story, but I can't square my experience with your claim.

1. He has a master's degree in physics; I don't have a degree.


How much was his salary? I would think a lot of programmers these days make more in absolute dollars than their parents would have.


Not all children are as smart or productive as their parents.


Inflation/devaluation has always been that game. The question is why it’s been so uneven.

I think all of the areas Patrick mentions are places the government has decided are “multipliers,” and has directed spending or subsidy. I vaguely recall a discussion in undergrad macro that it was always an advantage to be at the place inflation is injected into the system. We’ve had these input points for 60+ years.

Tech is a mixed bag, but housing, construction, medicine, and education are all prime places for social “investment.”


I have read that the cost of education is going up because unlike other parts of the economy, which benefit greatly from automation, with education you still have to hire these costly human teachers.

So as the efficiency of everything else goes up, the cost of those things is depressed. But teachers cost the same. So, relative to everything else in an inflationary environment, they become more expensive.


So why are other things not getting much more expensive? Why are things not getting more expensive at the same rate? Monetary inflation affects all goods more or less uniformly, but the price of milk hasn't skyrocketed as much as the cost of education or housing or healthcare.


> Monetary inflation affects all goods more or less uniformly

Money being created is like pouring water into a pool - it creates ripples outwards. Eventually, if you stop, yes the surface of the pool will become calm and the pool will be higher. But whilst you're pouring, the volumes are not even.

These days, when the government creates money it doesn't put that money into everyone's bank account overnight. When was the last time you got a cheque from the government labelled "new money"?

Instead the central bank engages in various forms of manipulation, like via the "QE" programmes that involved asset purchases. So, the prices of certain financial assets go up. They also purchase a lot of government bonds, or that money eventually makes its way into corporate debt. And what do governments do with this money, well, they often spend it on things like subsidising mortgages, or subsidising private banks (via bailouts), or healthcare, or education, or paying a large staff of government workers, or buying military hardware, etc.

So you go look at what's gone up in price very fast over the years and hey, look at that, it's the stuff near the centre of the pool. Things that governments tend to subsidise a lot or things that people feel they have to buy regardless of cost, like education, healthcare, homes, etc. The money pouring into the system ends up stacking up in a few places, it's not evenly distributed.


> Instead the central bank engages in various forms of manipulation, like via the "QE" programmes that involved asset purchases. So, the prices of certain financial assets go up.

The last round of quantitative easing by the Federal Reserve, QE3, ended in 2014. (QE1 began in 2008.) If these price effects for health care and housing began in 2008 and ended in 2014, it would make sense to blame QE, but they didn't, so it doesn't.

Price effects that are associated with government vs. private expenditures are not the same as inflation. That's just governments being bad (perhaps intentionally bad?) at spending taxpayer money efficiently. However, when it comes to health care in particular, that just isn't the case either--Medicare and Medicaid pay much lower prices for medical procedures than private insurance companies do.


The banking system was creating money in various forms outside of QE. That was just a well known mechanism.

The main way it's created is through loans, many of which end up in housing. And rampant speculation on apparently ever-rising house prices is where the financial crisis started. Ripples expanding ever outwards ...


Because monetary policy is only one of a multitude of factors that influence inflation. Regulations, subsidies, tax laws, trade agreements, labor laws, immigration policy are just some of the other things that have wildly disjointed effects on industries, markets and prices.


You're confusing inflation with rising real prices. Inflation doesn't discriminate -- the price of everything rises by the same proportion.


He's not confusing it.

> the price of everything rises by the same proportion

Due to regulation, that's simply not universally the case - e.g. rent control. The economy is conceptual, but prices are concrete, leading to some ironic situations.


Inflation is when the currency itself loses market value. There's no such thing as inflation that only affects some goods and not others. The prices of some goods do fluctuate relative to each other, but that's not inflation!


Do taxes matter? Identical products in the US and France feature French prices that are far more expensive, even when adjusting for currency. Milk, for example. iPhones. Clothing especially. Washing machines, furniture.

Disposable income in France is vastly lower than that of the US. Is that inflation? Or tax policy?


> Why are programming environments still so primitive?

Because we as an industry made a strategic decision in the late 20th century to value run-time efficiency over all other quality metrics, a decision which has manifested itself in the primacy of C and its derivatives. Everything else has been sacrificed in the name of run time efficiency, including, notably, security. Development convenience was also among the collateral damage.

> Why can't I debug a function without restarting my program?

Because you use C or one of its derivatives instead of Lisp or one of its derivatives. In Common Lisp you can not only debug a function without restarting your program, you can redefine classes without restarting your program. It is truly awesome. You should try it some time.


>> Why can't I debug a function without restarting my program?

> Because you use C or one of its derivatives instead of Lisp or one of its derivatives. In Common Lisp you can not only debug a function without restarting your program, you can redefine classes without restarting your program. It is truly awesome. You should try it some time.

There is no technical reason why it shouldn't be possible in C, if you are willing to do without many optimizations. One approach is to make a function stub that simply jumps into the currently loaded implementation. A more efficient but more convoluted way is to patch the address of the currently loaded implementation at all call sites.

The problem is that in general you can't simply replace a function without either restarting the process or risking crashing it. In general, functions have some implementation-dependent context that is accumulated in the running process, and a different implementation does the accumulation in a different way. I'm not a lisper, but there is no way this is different in Lisp. (And it is not because in C you often use global data. "Global" vs "local" is only a syntactic distinction anyway.)

If you're willing to risk crashing your process, that's okay. It's often a fine choice. And you can do it in C. The easiest way to implement approach #1 is to load DLLs / shared objects.
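
To make approach #1 concrete, here's a minimal sketch in C using POSIX dlopen()/dlsym(). The plugin.so file and its exported step() function are hypothetical; the point is just that the stub callers use never moves, while the implementation behind it can be reloaded:

  #include <dlfcn.h>   /* dlopen, dlsym, dlclose; link with -ldl */
  #include <stdio.h>

  typedef int (*step_fn)(int);
  static void *lib = NULL;
  static step_fn step_impl = NULL;

  /* (Re)load the shared object and repoint the stub at the fresh "step". */
  static int reload(void) {
      if (lib) dlclose(lib);
      lib = dlopen("./plugin.so", RTLD_NOW);
      if (!lib) return -1;
      step_impl = (step_fn)dlsym(lib, "step");
      return step_impl ? 0 : -1;
  }

  /* Approach #1's stub: callers always land here, and this
     address stays stable across reloads. */
  int do_step(int x) { return step_impl(x); }

  int main(void) {
      if (reload() != 0) { fprintf(stderr, "%s\n", dlerror()); return 1; }
      printf("%d\n", do_step(41));
      /* ...recompile plugin.c here, call reload() again, and do_step()
         now runs the new code, with all the stale-state caveats above. */
      return 0;
  }

(The void*-to-function-pointer cast isn't strictly ISO C, but POSIX requires it to work.)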


There is no technical limitation that makes it impossible in C; however, the raw details of how C is typically implemented make it largely impractical.

A super generalized description of a C function might be: check if a register contains a positive value, and if so, jump to address 0x42, a memory offset which is the beginning of another function. It's near impossible to "swap out" what lies at 0x42, since that was defined at compile time and is included in a monolithic executable.

Looking at more dynamic languages, like C#, Java or Lisp, they run on a virtual or abstracted machine instead of raw registers. This means that a similarly defined function will instead jump to whatever matches the required symbol. We could have a lookup table that says we should jump to a symbol S42, and based on what we have loaded in memory, S42 resides at 0x42. Essentially, all functions are function pointers, and we can change the value that resides at that memory address in order to swap in any implementation that maintains the same signature as the intended function. This is why one can make trivial changes to C# in Visual Studio while stopped at a breakpoint and have those changes applied to the running program: instead of jumping to 0x42, we jump to 0x84 by "hotswapping" the value of the pointer we're about to jump into.

Obviously this isn't entirely the truth, there are a lot more nuances and it's a fair bit more complicated than this, but the idea should hold water.
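
Spelled out in plain C, that lookup-table idea is just a function pointer the call sites go through, and hot swapping becomes a single assignment. A toy sketch with invented names:

  #include <stdio.h>

  static int add_v1(int x) { return x + 1; }
  static int add_v2(int x) { return x + 2; }

  /* The "S42" slot: call sites know the slot, not the final address. */
  static int (*add_slot)(int) = add_v1;

  int main(void) {
      printf("%d\n", add_slot(40));  /* 41: old implementation */
      add_slot = add_v2;             /* hot swap: repoint the slot */
      printf("%d\n", add_slot(40));  /* 42: new implementation */
      return 0;
  }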


Your point is moot. C doesn't "run on raw registers". That's just common practice, but it has nothing to do with C per se. You can easily run it on C#'s virtual machine.

Furthermore, it doesn't matter whether you are running on a virtual machine or on bare metal. What matters is if you have turned on optimizations (such as hardcoding of function addresses, or even code inlining) that remove flexibility (such as hot swapping of code). Visual Studio can hot-edit C code as well.

And as I stated, it is pretty easy and common practice to hot swap object code through DLLs or shared objects even on less high-tech platforms. It's easily done with function pointers (as you described) and a simple API (GetProcAddress() on Windows, dlsym() on Linux). Why shouldn't it be possible in C?

Virtual Machines bring portable executables and nothing more, I think. Well, maybe a slightly nicer platform for co-existence and interoperability of multiple programming languages (but then again, there is considerable lock-in to the platform).


> There is no technical limitation that makes it impossible in C;

Yes, there is. You can't trace the stack in portable C, so you can't build a proper garbage collector.


Debugging a function and GC are independent things. Also, you can easily add tracing information since you will know how the C code is being run. There is no good reason not to.

There are easy ways to build a GC in portable C as well, of course, if less performant.

As you will know, in portable C you cannot even implement a system call. So what?


> Debugging a function and GC are independent things.

Yes, but redefining a function and GC are not.

> There are easy ways to build a GC in portable C as well, of course, if less performant.

No, because you can't walk the stack. Also pointer aliasing.

[UPDATE] I just realized that I was wrong about redefining functions in C. Not only is it possible, you can actually do it with currently available technology using dynamically linked libraries. But I have never seen anyone actually do this (which is why it didn't occur to me).


> But I have never seen anyone actually do this (which is why it didn't occur to me).

I wrote about this in my two earlier comments. This is very old technology. And very commonly used. I think most plugin systems wrap dynamically linked libraries.

This is also an easy way to redefine functions without needing GC. Under the hood, it is implemented in the loader's way: virtual memory pages mapped as read-only and executable (see mmap() or mprotect() for example).


I agree regarding the symptoms, but am less convinced that run-time efficiency is the sole (or perhaps even major) cause. If it were, I'd argue that we'd see less usage of, e.g., Python.

I don't know the true cause -- I wish I did. But I do see a trend towards an ever more "hands-off" style of software development: large volumes of automated (especially unit) tests in preference to interactive approaches (or hybrids, like running subsets of tests via a REPL), and running test instances on remote servers (often via extra layers of indirection, such as CI servers) rather than running on your own machine. I'd love to see a resurgence of interactivity, but if anything it seems to be going against recent trends.

Edit: The subjective impression I've got is that doing things the indirect, hands-off way is being presented as somehow more "professional" while hands-on interactivity is the realm of the self-taught and hackers. Do others see this? What can be done to change these perceptions?


This decision was made and became deeply entrenched long before Python came along. Python is pretty good, but even it is constrained by the decisions that went into the design of C, since Python is implemented in C. Its underlying data structures are C data structures. Its memory management is C memory management. The GIL is there because C.


> The GIL is there because C.

That's not at all true. There are very, very few platforms/libraries/languages that both a) allow you to manage concurrent access to complex, arbitrary data structures in a sane, predictable, and not-dangerous way and b) provide the capabilities (including performance) necessary to implement a general-purpose programming language on top of them. Even fewer of those tools existed when Python came about.

Was it technically possible to make a GIL-free Python or a Python not based in C? Sure. But it wasn't in any way likely, or a reasonable-at-the-time decision. If you look into the history of the GIL things will make more sense; it has next to nothing to do with the implementation language.


I'm not saying that trading the GIL for the benefits of C was not a reasonable tradeoff. Nonetheless, it is simply a fact that the GIL is needed because of the constraints imposed by choosing to work in C.


Citation needed. “Because there existed languages with different concurrency models” doesn’t really explain why the choice of C for Python’s implementation necessitated a GIL (there are plenty of languages and tools built on C with non-GIL-ish concurrency systems, and many languages/tools built on not-C that don’t expose the concurrency features of their underlying language at all). Look into how/why the GIL was added; it has much more to do with not wanting to reimplement the language/break compatibility than it has to do with what the language was implemented in.


> it has much more to do with not wanting to reimplement the language/break compatibility

I don't know what "language/break compatibility" means.

The GIL is needed because Python's reference-counted memory management system is not thread-safe, and this can't be fixed without compromising either portability or performance. It's really that simple.
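
To see why that forces a global lock: a refcount increment is a read-modify-write, and CPython's Py_INCREF is essentially a plain ++ with no lock or atomics around it. A toy C sketch of the failure mode (not CPython's actual code; the counter and loop sizes are made up):

  #include <pthread.h>
  #include <stdio.h>

  static long refcnt = 0;   /* stand-in for an object's refcount */

  static void *bump(void *arg) {
      for (int i = 0; i < 1000000; i++)
          refcnt++;         /* non-atomic read-modify-write */
      return NULL;
  }

  int main(void) {
      pthread_t a, b;
      pthread_create(&a, NULL, bump, NULL);
      pthread_create(&b, NULL, bump, NULL);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      /* Should be 2000000; without a GIL-like lock (or atomics,
         at a real performance cost), increments get lost. */
      printf("refcnt = %ld\n", refcnt);
      return 0;
  }

Build with cc race.c -pthread; the printed total routinely comes up short, and for a refcount a lost update means a premature free or a leak.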


> The GIL is needed because Python's reference-counted memory management system is not thread-safe.

That is correct. What does that have to do with C? Thread-unsafe code exists in all languages. The GIL's lock is itself a pthread mutex and condvar underneath, and equivalent constructs exist in all (to my knowledge) modern threaded programming environments.

"not wanting to reimplement the language/break compatibility" is a reference to the successful efforts that have been made to remove the GIL in CPython. Those efforts have not (yet) moved towards merging into mainline CPython because they require a) lots of reimplementation work and added complexity in the language core, and b) would very likely break the majority of compiled extension modules.

I think that's additional evidence that the GIL isn't a C problem; they removed it, in C, without fighting or otherwise working around the language.


> That is correct. What does that have to do with C? Thread-unsafe code exists in all languages.

Yes, that's true. But thread-unsafe GC does not exist in all languages. When GC is provided natively by the language it can be implemented much more safely and efficiently than if you try to shoehorn it in afterwards.

> the successful efforts that have been made to remove the GIL in CPython

That's news to me. Reference?


https://youtu.be/P3AyI_u66Bw

Or search “gilectomy” on LWN. Or check out Stackless etc.; it turns out that technically removing the GIL has historically been one of the easier parts of getting rid of it entirely.

It sounds like your main problem is with Python's GC model. Reference counting doesn't have to be thread-unsafe, but in scripting/interpreted-ish languages that want their threading story to involve seamless (i.e. no special syntax unless you want it; it's on you not to blow your foot off) sharing of state between threads at will, like Python and Ruby, a GIL or equivalent is the norm. Sometimes it's not as intrusive as Python's, but it does seem like a necessary (or at least very likely) implementation pattern for languages that want to provide that seamlessness. You can have thread-safe reference-counted GC in a traditional scripting language, but that tends to come with a much less automatic threading/concurrency API. Perl 5 is an example of that category, and it is implemented in C.


> a GIL or equivalent is the norm

Exactly. And why do you think that is? And in particular, why do you think it is the norm when it is decidedly NOT the norm for Common Lisp and Scheme? The norm for those languages is completely seamless native threading with no special syntax.


It seems like you're arguing in favor of a different language rather than a different implementation of Python. If you wanted to implement, say, the Python class/'object' type in a thread-safe way that didn't rely on a GIL and still freed memory according to approximately the same GC promises as the Python interpreter, I suspect you'd end up implementing something GIL-like in Scheme or Lisp (though my experience extends only to some Clojure and Racket in school, so I may be unaware of the capabilities of some other dialects).

If you wanted to implement a language that looked a little like Python but had your favored Lisp's GC semantics and data structures, I'm sure you could. But it wouldn't be Python.

That's without getting into the significant speed tradeoffs--you can make these languages fast, and I get the impression that there has been a ton of progress there in the last decade. But when Python was being created, and when it had to deal with the implications of concurrency in its chosen semantics? Not so. As I originally said: was it theoretically possible to build Python or equivalent on top of a non-C platform at the time? Sure. But I doubt that would have saved it from tradeoffs at least as severe as the GIL, and it definitely would not have been the pragmatic choice--"let's build it on $lisp_dialect and then spend time optimizing that dialect's runtime to be fast enough for us to put an interpreted scripting language on top of it" seems an unlikely strategy.


> If you wanted to implement, say, the Python class/'object' type in a thread-safe way that didn't rely on a GIL and still freed memory according to approximately the same GC promises as the Python interpreter, I suspect you'd end up implementing something GIL-like in Scheme or Lisp

Nope. CLOS+MOP, which exists natively in nearly all CL implementations, is a superset of Python's object/class functionality.


The GIL is not there because C. It's there because of refcounting and Python guaranteeing that many nontrivial operations are thread safe.


Yes, but refcounting is there because it is impossible to build a proper GC in portable C.


Can you elaborate on this?


You can't trace the stack in portable C.


> The GIL is there because C.

Oh please.


Sometimes I do work this way in the repl - usually when I have little confidence I even understand how to fit together a new library or API call. But if you spend more time in the repl building a data structure and interactively, iteratively testing your function, when it is all finished all you have checked into source control is a function, not the tests. You have to then write the tests. In a lot of cases it's faster to just write the tests and interact with the code through those. In some language environments this is not exclusive of using a repl - in Haskell, for instance, it is pretty easy to interactively run a module and tests saved to disk in the repl.


Perhaps if great interactive development and debugging tools were more pervasive, there would be less demand for lots of tests written at the same time as the actual function.

(The fine-grained "unit" stuff, anyway. System/integration tests come with different trade-offs).


> Development convenience was also among the collateral damage.

Don't you find that web & javascript are pretty much a straight denial of your argument?

They're "secure" (meaning we let anyone's javascript code just run in our browsers, even embedded in other people's code, and seriously expect no ill effects)

They're extremely inconvenient to develop with. Especially compared to those "run-time above all else" environments you mention. For one, you need to know 5-6 languages to use the web.

> Because you use C or one of its derivatives instead of Lisp or one of its derivatives. In Common Lisp you can not only debug a function without restarting your program, you can redefine classes without restarting your program. It is truly awesome. You should try it some time.

I think you'll find that pretty much any environment allows this. Even without debug symbols you mostly can do this for C, C++, ... programs. On the web, you can't, because every call immediately gets you into overcomplicated minified libraries that you can't change anyway, assuming it doesn't go into a remote call entirely.

And there are environments that go further. .Net not only lets you debug any C function you run within it, you can even modify its code from within the debugger and "replay" to the same point. I believe there are a few more proprietary compilers that support that functionality too.


> Don't you find that web & javascript are pretty much a straight denial of your argument?

You should probably direct that question at Patrick because his original question was kind of based on the premise that the answer to your question is "no".

My personal opinion? No. OMG no. Javascript and HTML are both poorly designed re-inventions of tiny little corners of Lisp. In that regard they are improvements over C. But no. OMG no.

> Even without debug symbols you mostly can do this for C, C++, ... programs

No, you can't. You can grovel around on the stack and muck with the data structures, but you can't redefine a function, or redefine a class, or change a class's inheritance structure without restarting your program. In Common Lisp you can do all of these things.


Can you/someone explain for this noob why you can't redefine a function, etc.?

I'm not a programmer, so I'm imagining you hook in to the call that addresses the function (like modify a jump instruction), overwrite any registers that need changing, nullify caches, so the program runs new code -- this I think is how some hacks work?

Ultimately couldn't you just write NOPs to the addresses used for a function?

Is it something structural about C/C++ that stops this, like the ownership of memory? (I'm assuming a superuser can tell the computer to ignore the fact that the addresses being written to are reserved for the program being modified.)

How does the computer know that you pushed a different long jump into a particular address, and stop working, rather than keep on processing instructions?

Apologies if I've misunderstood, please be gentle.


It's a good question, and I'm a little disappointed no one has answered this yet.

There is no reason you couldn't redefine a function in C. It's really more of a cultural constraint than a technical one. It's a little tricky, but it could be done. It just so happens that C programmers don't think in terms of interactive programming because C started out as a non-interactive language, and so it has remained mostly a non-interactive language. Lisp, by way of contrast, was interactive and dynamic on day 1, and it has stayed that way. In the early days that made C substantially faster than Lisp, but that performance gap has mostly closed.

However, there are some things that Lisp does that are actually impossible in C. The two most notable ones are garbage collection and tail recursion. It's impossible to write a proper GC in portable C because there is no way to walk the stack, and there is no way to compile a tail-call unless you put the entire program inside the body of a single function.
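
On the tail-call point: compilers that target C commonly work around it with a trampoline, where a "tail call" returns the next function to run instead of calling it, so the C stack never grows. A minimal sketch, with invented names:

  #include <stdio.h>

  /* A step returns the next step to run (fn == NULL means done),
     instead of making a real tail call that would grow the C stack. */
  typedef struct step { struct step (*fn)(long long *acc, long long *n); } step;

  static step sum_down(long long *acc, long long *n) {
      step next = { NULL };
      if (*n > 0) {
          *acc += *n;
          *n -= 1;
          next.fn = sum_down;   /* the "tail call", expressed as data */
      }
      return next;
  }

  int main(void) {
      long long acc = 0, n = 1000000;  /* deep enough to kill naive recursion */
      step s = { sum_down };
      while (s.fn)
          s = s.fn(&acc, &n);          /* the trampoline loop */
      printf("%lld\n", acc);           /* 500000500000 */
      return 0;
  }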


> On the web, you can't, because every call immediately gets you into overcomplicated minified libraries that you can't change anyway, assuming it doesn't go into a remote call entirely.

This is true if you go to a production website and try to start debugging. But it's untrue for any modern development environment. The minification comes later and even then, source maps are first class supported in browsers, mapping the minified code to the source.

It's funny. In my experience the web debug tools are some of the best of any language/environment I've experienced.


No, just no. Web debugging is worse than anything I've ever seen - maybe a bit better than cmd-line debugging with gdb, but not much.

Sourcemaps supposedly work, though I have never seen this actually work in practice. And since babel is apparently indispensable, you can't be entirely sure that conditionals and statements in your source haven't been optimized away in the transpilation.

Routinely one sets breakpoints in JS files that are not hit by Chrome dev tools, and symbols that should be in scope don't want to be defined. It's a mess.

I suppose if you know how to structure the rube goldberg machine correctly, web dev can be productive. But it's so hard, and the hardness is of the tedious yak-shaving variety. I just hate it and want to fire up Visual Studio and write some .NET apps with tools that just work instead.


"Rube Goldberg" is the best description I've heard for a JS dev stack. So true. Its so awful. But its so much better than it used to be and I pray it continues to improve.

I hear what you're saying. They can be really finicky. I've had very good luck using it all cleanly without issue. I especially love binding vscode to an open browser so that I use vscode for all the inspection, breakpoints, etc.

But I've also experienced your lamentations. It took a long time for me to get it all working; sourcemaps were so unreliable years ago. Now that I've learned all the painful lessons about configuration, they just seem to work.

There still aren't any sane defaults and the ground won't stop shifting. But now that I have it working, it works great.


> They're extremely inconvenient to develop with. Especially compared to those "run-time above all else" environments you mention. For one, you need to know 5-6 languages to use the web.

I'm not sure how this is a denial of lisper's argument. You've picked an environment which you admit has major flaws, but those flaws are independent of the issue at hand.

I've worked in runtime-perf-above-anything programming environments which required the use of many different languages even for the simplest program. It's terrible there, too. That has nothing to do with dynamic languages. In fact, due to the ease of creating DSLs, most of the dynamic languages I've used allow one to get by with using fewer languages.

> On the web, you can't [do this other thing you can do in Lisp], because ...

Indeed, you've picked the one modern dynamic environment which lacks most of the features lisper is talking about. That's not an argument against having those features. I think it's mostly an observation that this particular environment picked a different attribute (runtime security) to optimize for above all else. You'll note that JS-style security is fundamentally incompatible with many of the concepts in OP's original question.

> I think you'll find that pretty much any environment allows this. Even without debug symbols you mostly can do this for C, C++, ... programs.

Can you give an example? I've never heard of a C++ system that let you redefine classes at runtime without debug symbols. I can't imagine how it would work. How would you even inspect the classes at runtime to find out what you're redefining?


> They're "secure" (meaning we let anyone's javascript code just run in our browsers, even embedded in other people's code, and seriously expect no ill effects)

I don’t expect that. Use after free vuln + heap spray + shellcode = Ill effects


Maybe the reason for this has more to do with the type of development structure that helps a language get scale.

The rich powerful development environments you've described exist primarily in proprietary, integrated environments. If you want to integrate the editor, debugger, OS, and language, it helps to be able to coordinate the design of all those components.

On the other hand, languages that have gotten to huge popular scale have typically been more open in their specification and implementation process. Perhaps this is because the creators of breakthrough tools that drive language adoption (like web frameworks or data science kits) prefer these tools, or because the sorts of conditions that lead to the creation of such tools are inherently marginalized ones. In other words, if you're a happy iOS developer using a lovely integrated world of Xcode and Swift, you're not going to spot a dramatically underserved development niche.


Both Scheme and Common Lisp have been open standards since their inception. And both have excellent IDEs available, both commercial and open-source (Clozure CL for Common Lisp and Racket for Scheme).

I did my masters thesis in 1986 on Coral Common Lisp on a Macintosh Plus with 800k floppies and 1 MB of RAM. It had an IDE that would still be competitive today, indeed is in some ways still superior to anything available today. All this was possible because it was Common Lisp and not C. The language design really is a huge factor.

(Coral has today evolved into Clozure Common Lisp, which is completely free and open (Apache license). You really should try it.)


Lisps are great. Why do you think they haven't been at the forefront of any big trends in application development? Things like the web, mobile apps, data science ...

My guess at an argument here is that the languages popular in the 80s drove the curriculum design of most Computer Science education, and the relative absence of the Lisps (to today) makes the languages seem less approachable to practicing programmers than they really should be.

On the other hand, you can find a lot of Lisp's influence in something like Python (though obviously with many differences both superficial and deep). So in that case, why are Python IDEs so much worse than what you'd see in Lisp? (And is that even the case? Maybe there are just more Python devs and thus more IDEs, and like anything, most are crap; but if there are one or two great ones, then does Lisp really have an advantage there?)


> You should try it some time.

Worth noting that Patrick Collison was/is a Lisp user. While in high school, he won a fairly prestigious Irish national science-fair-type contest (the “Young Scientist”) with an AI bot written in some Lisp dialect.

IIRC he also contributed some patches to one of the major Lisp projects.


Yes, I know. He was signed up for Google's Summer of Code (I forget the exact year) to do a CL project and I was going to be his mentor. But before it started he quit and founded Stripe instead. :-)


Shocked he mentions Visual Studio Code but not Visual Studio itself.

The .net CLR has a lot of the features that he would want to enable the kind of interactive debugging (variable hovers, REPL) he talks about, and Visual Studio itself supports a lot of them.

Personally going from doing C# development in VS2k15 to doing golang development in VSCode feels like going back in time.


Visual studio has a lot of power, especially if we consider Intellitrace, but damn is it hard to use that power. I really think the main problem here is UX and not technology.


The main problem of Visual Studio is that it doesn't work on free operating systems.


No. I mean, this might be a problem for some people; that it doesn't work on OS X (not free) is a problem for me. But a bigger problem is that even on Windows its more advanced features are often too hard to be usable except in very special situations.


Well, I have narrowed the statement to be about free OSes because there is a version of VisualStudio for MacOS as far as I've heard and because whatever works on Linux/FreeBSD usually works on MacOS too. I mean it should be cross-platform in the way VSCode is.

I don't use Linux for ideological reasons; I use it because it works a lot faster (especially when using PyCharm - the difference is drastic) and more reliably than Windows, and gives me an "almost-MacOS-and-much-more" experience on my PC.

Modern apps targeting professionals and enthusiasts should be cross-platform (run on Windows, Linux and MacOS) and not force you to choose from just one or two major OSes.


The version for OS X is just a rebranded Xamarin Studio, nothing like the real visual studio at all. I’m surprised the former doesn’t run under Linux already, there is no good reason it couldn’t.

For the real professional apps, the app is more important than the OS, so you choose whatever OS will run your app. Need to use Adobe CS? Well, OS X or Windows is it; it could have been Linux, and graphic artists would just swallow it and use it, since they need to use CS. Likewise, a tool targeted at Windows app development would be fairly weird running on non-Windows.


> Because we as an industry made a strategic decision in the late 20th century to value run-time efficiency over all other quality metrics

Except if that were the case, we wouldn't have so much bloated framework code and so many towers of abstraction, dragging runtime efficiency to the bottom.


One reason (or maybe the common excuse) is often "we need to have full control". That's a reason most managers can get behind.

So that's a common reason slowing every shift to a higher-level language or library, to tools that automatically create stuff for you (GUIs, optimized code from DSLs), to using a platform controlled by another company (an important channel for newer tech, with some strong UX benefits over open source), to trusting open source.


> One reason (or maybe the common excuse) is often "we need to have full control". That's a reason most managers can get behind.

On modern machines, this excuse is even better. Full control can easily mean 100x difference in speed. Most importantly, how do contemporary lisps deal with arrays of unboxed structs or primitives?


Um, why is everyone forgetting Java?


Don't forget Smalltalk has these capabilities as well.


>> Why can't I debug a function without restarting my program?

> Because you use C or one of its derivatives

It's doable in Java, which is, I think, a derivative of C.


> What's the successor to the book? And how could books be improved?

> Books are great (unless you're Socrates). We now have magic ink. As an artifact for effecting the transmission of knowledge (rather than a source of entertainment), how can the book be improved? How can we help authors understand how well their work is doing in practice? (Which parts are readers confused by or stumbling over or skipping?) How can we follow shared annotations by the people we admire? Being limited in our years on the earth, how can we incentivize brevity? Is there any way to facilitate user-suggested improvements?

The great thing about books is that no matter how long they've been sitting around, it's easy to take one off the shelf and read it. The cultural infrastructure of written language has been around much longer (and been much more stable) than the computational infrastructure you'd need to have your "magic ink" still work in 1000 years. At some point we need to start treating computers and software more seriously if we want to have things like this.


I love books, don't get me wrong, but they do age. This is especially true of books that are specialty related. Take a book from the 70s on computers, on medicine or psychology, on prehistory, on history, etc. They are all likely to be significantly outdated compared to their modern counterparts. There is much to be said for the modern way of information transference: it is always most up to date.


Whilst your core point about books aging is absolutely true, I think that's mostly the mass-market publications designed to ease entry, rather than actual papers.

Similarly, anything written about an implementation is short-lived. The Idiots Guide to Windows 98 is less useful today than it was in 1998 and is getting less useful with time.

Lovelace, Babbage, Turing, McCarthy et al are still seminal. But these are academic papers that focus on what is possible to compute and how one might construct an implementation.

There are some interesting edge cases too. Is the Gang of Four's Design Patterns still relevant? It's not embarrassing, but it's not as applicable as it used to be.


As an aside, what would be today's ideal intro to design patterns?

Now .. on topic ... I had a roomful of books collected from my youth. I cleaned out my room at my parents house some years ago ... and pretty much everything got chucked. The only books that I kept were seminal books like Knuth, Cormen, Gang of Four, TCP/IP series (v6 kinda makes them out-of-date too). All my MFC books ... java books .. pretty much all of it was out of date. I had an epiphany .. CS does not age well at all.

I no longer buy physical books ... I got a subscription to Safari and love it. Also consume tons of content on e-learning platforms. But .. I really miss real books.


I miss walking in a room of books.

My parents had bookshelves that filled walls that I built with my Dad before they moved. I, and most other people that visited the house, would spend a fair amount of time just looking at the books on the shelf. Comparing notes on what they'd read, and asking to borrow books. Looking at the books, and being reminded of the experience you had with them or wanted to have with them was a vitally important part of the process that I fear we've lost.


Seems like ROMs for emulators are becoming kind of book-like. As soon as a new platform reaches market relevance, there is a dash to make sure these old platforms are emulated.

However, ROM files specifically (as opposed to executables for retro computing platforms) seem to be distinctly less fickle in becoming reliably emulatable.

The reason is seemingly simple: the early ROM files were essentially an operating system, with all the software required to boot the system included in what now looks like one file.


There are very few people who want to read many books that are 1000 years old. The few books that interest more people can be converted to newer formats, the few people that want to read all old books will have to use extra tools to do so.

That said, I am not a particular fan of books. They take a lot of physical space, are heavy and age. So as long as we stick to reasonable formats (e.g., text-based, non-binary), it should not be too hard for future generations to use our books.

Using DRM, on the other hand, might make things complicated.


The Bible and the Koran would easily be on the world's best seller lists if they weren't excluded by default.

You might argue - and I would agree - that this is not necessarily a good thing as far as content goes.

But the point is that putting something into writing and giving it a tangible form on paper gives it an inherent stability and authority missing from digital media.

We tend of think of digital media as temporary, disposable, relatively low value simulacra of a Real Thing.

Digital media can be hacked, edited, deleted, and lost when the power goes off.

A copy of a book from hundreds or thousands of years ago is just going to sit there for some indefinite period. (Which actually depends on the quality of the paper - but in theory could be centuries.)

This is not about practical reproduction and storage technologies, it's about persistence and tangibility.

A book is a tangible object which has some independence from its surroundings. After printing, it's going to exist unless you destroy it. If you print many copies the contents are geographically distributed and it becomes very hard to destroy them all.

A file depends on complex infrastructure. If the power goes down, it's gone. If the file format becomes obsolete, it's gone. (This has actually happened to many video and audio formats.) If there's an EMP event, it's gone.

And it's not just a tangible difference, but a cultural one. We have a fundamentally different relationship with digital data than we do with tangible objects, and this influences the value we place on their cultural payload.


>If the file format becomes obsolete, it's gone. (This has actually happened to many video and audio formats.)

Any examples of video or audio files that are currently impossible to watch/listen to because knowledge of the file format, and all software capable of playing it was lost? If such a thing has happened, there are probably people interested in reverse engineering the format.


I can pick up some writing on physical media -- an Akkadian clay tablet -- and read it (if I have the knowledge) despite it being thousands of years old.

Things like laserdiscs, I can probably still buy equipment to read, but it's substantially different as I need the technology to read it.

Microfiche is quite good in this respect: you can easily read it even without the specific tech it was made for (using a magnifier, or projecting the image with a simple light source).

I wonder if you could make a crystal where, like a hologram, you can rotate the crystal a minute amount in order to project a different page (an idea I saw decades ago had a digital-clock-style projection from a crystal, used as a sundial -- pretty sure it was theoretical).

That way the information is relatively easy to discover, and with a simple light source you can get info out of it.


I keep hearing the fear of losing content because of obsolete file formats.

Then I think about Linear B, and I rest again.

https://en.wikipedia.org/wiki/Linear_B


There's no emulator that can run a Linear B parser. If a linear B dictionary ever existed, it was never mass produced. Linear B is older than the printing press, let alone the Internet. But now we do have those technologies, and "lots of copies makes stuff safe" is cheaper and easier than ever. I don't believe any mainstream digital format (i.e. popular enough to have a Wikipedia page) will be permanently lost unless there's a complete collapse of society, and then we'll have bigger problems to worry about.


In the modern world, a physical book is nothing more than a mere printout of a PDF file or a photocopy of an old edition. People print web pages all the time, but nobody in their right mind would think much about these printouts, let alone philosophize about their tangibility, endurance etc.

On the other hand, old books, with their high-quality paper, binding and letterpress print, do seem to have some kind of personality...


>There are very few people who want to read many books that are 1000 years old.

That's their loss. There are very few people who want to learn math too.


This isn't for everybody, but I would say Rapid Serial Visual Presentation (RSVP), where words are sequentially presented in place to the reader. It has significantly changed the way I read.

I wrote my own implementation of RSVP, which has eBook reader support, and now it is absolutely my preferred method of reading, and I read at 1000WPM. Though normal books are still enjoyable, they feel tedious and slow.

The project is here:

https://github.com/GlanceApps/Glance-Android

(It needs some help being updated for recent versions of Android, please let me know if you'd like to be involved! It has a new back-end API in place already and it just needs a few simple updates.)


Where I think this could be really useful is status messages. When a status message on some fixed layout display is too wide, the solution is often to have it scroll horizontally back and forth automatically, which can be very hard to read.

I think it would be more readable to put it in an area big enough for the longest word, and then RSVP through the message repeatedly.

If anyone wants a simple way to play around with RSVP, here's a little quick and dirty command line reader I wrote a long time ago to play with this: https://pastebin.com/zfq2eW4n

Put it in reader.cpp and compile with:

  $ c++ reader.cpp
To use:

  $ ./a.out N < text
where N is the number of milliseconds delay between words. It will then do the RSVP thing. It should compile with no problem on Mac or Linux.

If a word (which is really just a string of non-whitespace surrounded by whitespace) ends with a period or comma, the delay is doubled for that word.

There's a commented-out check that sets a minimum line length. If you compile with that check enabled, it can put more than one word on a line to make the line at least the minimum length.

PS: this aligns the words on their centers. To left-align them instead, change where the variable "pad" is set to use a small integer instead of basing it on the length of the word. If it is the same for all words, it becomes an indent for left alignment instead of a pad for centering.
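
In case the pastebin link rots, here's a minimal sketch of the same idea (not the original source, but it follows the behavior described above - centered words, doubled delay after periods and commas):

  #include <chrono>
  #include <cstdlib>
  #include <iostream>
  #include <string>
  #include <thread>

  // Present one whitespace-delimited word at a time, in place, centered
  // in a fixed-width field. Usage: ./a.out N < text (N = ms per word).
  int main(int argc, char** argv) {
      const int delay_ms = argc > 1 ? std::atoi(argv[1]) : 250;
      const int width = 40;
      std::string word;
      while (std::cin >> word) {
          int pad = (width - (int)word.size()) / 2;
          if (pad < 0) pad = 0;
          std::string line = std::string(pad, ' ') + word;
          if ((int)line.size() < width) line.resize(width, ' ');
          std::cout << '\r' << line << std::flush;  // overwrite previous word
          int pause = delay_ms;
          if (word.back() == '.' || word.back() == ',')
              pause *= 2;  // linger at clause and sentence boundaries
          std::this_thread::sleep_for(std::chrono::milliseconds(pause));
      }
      std::cout << '\n';
  }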


This is really awesome. I think I played with your source a few years ago, trying to adapt it to work with Google Cardboard. My initial attempt failed, because my eyes would lose focus during each word transition. I decided I'd need to add a lightly textured background, which would be shown all the time and would fix the distance at which my eyes were focusing, and then lay the text on top of that. IIRC I gave up because I realised the 'right' way to do this was to use the Cardboard SDK, but that would mean also writing something to render the text into pixels (as the SDK only supported graphics).

BTW - The Google Play link in the repo doesn't work for me, and I don't see Glance in F-Droid. What's the easiest way for non-developers to get the APK?


I can't seem to find your app on the Android store, even through the link on your GitHub page. Are you sure it's working?


Also, the website linked in the repo seems to be down.


Isn’t recall way worse for books “read” this way?


Here is 1000wpm:

https://youtu.be/7i9fZvWyLfI?t=1m41s

I wouldn't recall a thing at this speed, nor at 600 which is shown just prior to the time stamp above.


A few things here:

This is a poor implementation of RSVP, as each word is presented for the same duration. Longer words should be given longer presentation times, as should words with punctuation marks. The presentation of the words is also centered rather than left-aligned, which requires a saccade for each word and defeats the whole point. It's also a difficult text to start out with, with no context.

Even so, I didn't have a problem reading and recalling this text, though I wouldn't recommend it for a beginner.


I made a similar app (iOS) which varies the display time by word length, punctuation, and each word's place in a list of the 100 most common words (under the assumption that common words contain less information, thus take less effort to read). To be honest, I'm not sure it works any better than one running at a constant speed. There seems to be a surprising lack of research in this area.

(https://itunes.apple.com/us/app/zipf/id1366685837?mt=8 if you're interested.)
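
Roughly, the per-word timing looks something like the sketch below (illustrative constants only; not the app's actual code or tuning):

  #include <iostream>
  #include <string>
  #include <unordered_set>

  // Display time per word: base cost + per-character cost, extra time at
  // punctuation, a discount for very common words. All constants invented.
  int display_ms(const std::string& w,
                 const std::unordered_set<std::string>& common) {
      int ms = 80 + 25 * (int)w.size();
      char last = w.empty() ? ' ' : w.back();
      if (last == ',' || last == ';') ms += 150;                 // clause pause
      if (last == '.' || last == '?' || last == '!') ms += 300;  // sentence pause
      if (common.count(w)) ms = (ms * 2) / 3;                    // common-word discount
      return ms;
  }

  int main() {
      const std::unordered_set<std::string> common{"the", "a", "of", "to", "and"};
      for (const char* w : {"the", "presentation", "punctuation."})
          std::cout << w << " -> " << display_ms(w, common) << " ms\n";
  }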


> The presentation of the words is also centered rather than left-aligned, which requires a saccade for each word and defeats the whole point.

Does it actually require a saccade?

Testing with a quick and dirty command line RSVP program I have, my speed and comprehension seem about the same with either centered or left-aligned words. But I'm mostly testing with fiction written for the average adult. The words are usually short enough that they are within the field of good visual acuity no matter where within them the focus is.

I've not done a comparison using text with a lot of long words.


No, I would say quite the opposite.

Reading this way is a skill which needs a small up-front investment, but the payoff is immense. The trick is not to try: just relax, pay attention, and let the words speak to you as if they were being narrated inside your head.

Because there are no constant micro-interruptions from page scrolling, ads, or even from your eyes' own saccades, I find that my attention to the text is much, much better, and if I need to stop to ponder something I can just tap the screen to pause it.

I also find that I am far, far more likely to finish an article/paper/chapter via Glance than via my browser. These days it's pretty rare that I'll actually finish an article online, but with Glance I'll almost always read the entire thing from start to finish.

I really, really recommend this skill, especially if you have a lot of time to kill on a mass-transit commute, or if you just want to read more.


Not sure I follow the logic; don't physical books suffer from similar downsides? Books degrade and libraries burn down. Seems a lot like bitrot and file mirrors disappearing.


Those downsides only apply to the book or software itself. My point was about the stuff you need to have apart from the book/software in order to read it. You need a computer to run software; all you need to read a book is an understanding of the language. It does get harder and harder to understand books over time, but not at the same rate that computers are changing (and language tends to stay relatively simple, anyway, because people have to be able to learn it).


I have thought about the difficulties of long-term storage for a while, and have come to the conclusion that digital mediums are inherently a poor choice for archiving and preserving data:

http://howicode.nateeag.com/data-preservation.html

As one of the more recent additions to that essay shows, I'm not the only one with that opinion:

https://partners.nytimes.com/library/magazine/millennium/m6/...

As I note in the essay, there are some technologies that work better than others for preservation, but digital's biggest weaknesses are inherent, I think.


> What's the successor to the book? And how could books be improved?

First, what's the purpose of books?

There's entertainment of course. But let's focus on books that teach. Their purpose is to let you access some knowledge, in a deep way.

On the other hand, computer systems are starting to fill that role, and the level of depth they can achieve is growing - on a good day, with the right query, Google may give you access to amazing content, content that may help you connect different concepts - based on your past searches - just like your brain does.

Another option: if it were easy for a book author to package her knowledge into a smart chatbot or an expert system, we could have hundreds of such advisors advising us, or just interactively chatting with us and correcting our mistakes. That would be an interesting replacement for the book.


>What's the successor to the book?

The second edition.

>And how could books be improved?

Until a different unpowered, human-readable medium is proven over a longer period of more careless storage conditions, the answer looks like a third edition, if appropriate.

If it hasn't been printed it hasn't really been published as thoroughly as it could be, and if it hasn't been bound then it's not yet a real book. Up until recent decades the survival of unique knowledge was largely dependent on the number of copies printed and distributed, so popularity has had undue importance.

But don't let earlier editions become lost, or woe unto you.


True. Case in point: the newly discovered 1,300-year-old book: http://www.openculture.com/2018/09/europes-oldest-intact-boo...


It seems like PDFs might still work in a hundred years. Just like zip files.

Fun fact: HN's been around for 10% of a century. That makes Arc one of the longer-lived programming languages.

Re: the ability to take books off the shelf and read them, Library Genesis has made a lot of progress in that area. http://libgen.io/


PDFs might still work - but will the storage devices they are on? There are good reasons to doubt this: tapes, CDs and DVDs age and degrade over time; plus devices to read them only last so long, and may not be produced anymore at some point. Classic "spinning" hard drives degrade, too; and even ignoring this, will there still be computers supporting, say, SATA, in 50 years?

And then there is a trend to switch to onboard soldered flash in new devices which adds further problems.

Not saying this is impossible to overcome, but "PDF might still work" is at best a partial solution, covering only some of the data one might want to preserve (PDF is great - but only for some types of data).


Cloud-based storage will never die. (And if it does, we'll have bigger problems on our hands than lost PDFs.)


>Why do there seem to be more examples of rapidly-completed major projects in the past than the present?

I'm no historian so this is probably misguided, but I feel like this is an indicator of a society in decline. Corruption is increasingly rampant in both the public and private sectors, and money is being siphoned off at every step. If we don't figure out how to fix it, mounting costs and inefficiencies could put an end to our era of prosperity soon. We've allocated power to people interested in screwing others for short-term (e.g. their lifetime) gains. Rah rah doom and gloom.

>Could there be more good blogs?

>Are there incentive structure tweaks that yield more good blogging?

The validation mechanisms the author says are missing, like Facebook likes, are largely provided by link aggregators. Posts on HN and Reddit have a score which acts as a similar dopamine drip to likes. I feel validated and encouraged to write more when my blog generates discussion around the net (though often with an equal sum of embarrassment when my flames make it to my colleagues' desks).

>Why are programming environments still so primitive?

This guy should try working with plan9 and acme for a few months. Author, if you're reading these comments, you should install 9front, spend a weekend grokking it and becoming comfortable in a workflow there, then set up a Linux VM with vmx(3) and a 9P mount to your host system to get your Real Work done while continuing to learn about the plan9 model. Be prepared for your entire workflow to be turned on its head and to have to find creative ways out of your problems. Maybe today is one of those days where you change your life in a big way :)


> Corruption is increasingly rampant in both the public and private sectors, and money is being siphoned off at every step.

This is utter bullshit when it comes to the sort of major public infrastructure projects hinted at in the original post. In fact, what slows things down significantly is the fact that there are so many controls in place to reduce and eliminate this corruption. Contracts that could have been settled with a fat envelope to the brother-in-law of the county commissioner are now handled in a much more open and transparent process. Corrupt profits that could be skimmed by using sub-standard materials are now prevented by layer upon layer of inspection and sourcing paperwork (which all increases cost.) Projects that could once move quickly because unions were weak and safety regulations were non-existent, to the point where contractors could simply throw human lives at the problem until it was solved, now move at a slower pace and with far fewer deaths as a result.

Infrastructure may take more time and cost more, but the cost in lives - during construction and after the project is completed - is much lower.


"Corruption" is word with multiple definitions. It can mean underhanded. In that case it's a sort of shorthand for "moral corruption". It indicates the person or system is far from a moral ideal.

But, generally speaking, corruption can just indicate something that has deviated from its original use. A corrupted file on a computer does not meet its intended purpose, at least not fully. A corrupt organization, likewise, could be 100% honest in all its dealings (1) and fail to meet its intended purpose.

If government agencies partnering with private contractors cannot reasonably build the project all of them exist to build, then there is some sort of corruption going on. It's possible there is graft or underhandedness, but it's also possible that there is emergent dysfunction inhibiting their ability to execute. That is also corruption.

(1) That is, the organization is not morally or ethically corrupt.


I would think that potential litigation might be a determining factor, at least in the USA.


RE rapidly completed projects

A primary driver of the 'slowness' is a major focus upon safety in the workplace. Take a look back at the death rates per 1000 workers for some of the historical projects that people often like to point to and then look at a modern project.

The second element is that these safety measures are often enforced by regulation. For example, look at how much extra scaffolding is in use during construction today, where ladders used to be enough.

Looking at corruption directly: it is an issue, but not in a direct way. Corruption happens through the 'contractor ladder', where the primary contractor has a subcontractor who has a subcontractor who has a subcontractor. Repeat ad infinitum.

One of the primary reasons for this is the challenge of maintaining a large enough pipeline of work to keep a standing workforce employed. It's difficult to justify paying expensive construction workers and engineers when they're not actually building anything.

Finally, tendering protocols are often quite naive and have been designed to make something "least cost". This has led to a nightmare scenario of companies underquoting in order to win a tender, secure in the knowledge that a government will not leave a project half finished. To remedy this, better contracts are required; for example, you can offer a recurring revenue stream (e.g. 20-30 years) to a company in return for a particular project. On the other hand, this can often lead to poorly built projects that last exactly 30 years.


The construction and infrastructure boom in China over the past few decades suggests major projects can still be completed rapidly, even in spite of widespread corruption.

Political will can be an important driving force for pushing through major projects, but the most impressive feats of engineering are seen when there is substantial pent-up demand in an economy that can suddenly be supplied, usually due to social or technological change.

I think major projects could still be completed quickly in developed economies if the incentives were to align in the right way, but we don't see it often because the low-hanging fruit has already been picked.

India and Africa are probably the places to look to for rapidly completed major projects over the next few decades, and perhaps we could see significant industrial development in space at some point if the economics works out.


Yes, and how many people have died during the construction and infrastructure booms in China/India/Africa? They're faster partly because they aren't as safety-focused. For example: https://www.washingtonpost.com/archive/politics/2000/09/07/c...

How much is unreported?

This isn't to say this doesn't happen in western countries, but it is less common. A large number of deaths was also associated with the Qatar World Cup, for example.

Look at the public reports of engineering and manufacturing companies. How many of them have a "target zero" approach to safety and report the TIFR (total injury frequency rate) as a key KPI? I've known executives in engineering organisations to be fired for persistent safety breaches, which makes them substantially more risk-averse.

This all costs more money and takes more time, and I would posit that if China/India/Africa become wealthier and more individualistic, their construction rates will also slow.


Do you have any good starting points for p9? I have always wanted to check it out, but man is the documentation poor


http://fqa.9front.org/

Just install it and try to work normally, and when you find something hard you should research the specific thing you want to do.


Could you explain more about plan9 and acme and why they are amazing? I've never heard of them.


I could go on for hours. Just try it yourself, it's free.


> Will end-user applications ever be truly programmable? Emacs, Smalltalk, Genera, and VBA embody a vision of malleable end-user computing: if the application doesn't do what you want, it's easy to tweak or augment it to suit your purposes. Today, however, end-user software increasingly operates behind bulletproof glass. This is especially true in the growth areas: mobile and web apps.

I'll pick this one: end users from back then (the Emacs/VBA/Smalltalk era) are the power users of today. Today's end users are a new kind of user.


This is an interesting one to me.

Will voice recognition at home, coupled with AI, allow people to start "programming"?

Currently, home automation systems through something like Alexa only listen to direct commands. Interesting follow-up functionality:

* The capability for introspection. Alexa, who is home? Alexa, why are the lights on? Alexa, why are the lights not on? The latter especially requires advanced reasoning capabilities.

* You preferably also want to adjust the behavior by voice. Alexa, when I enter the room I want the lights to go on. Alexa, the lights only need to turn on when it's dark outside.

I had one student working on this problem: https://crownstone.rocks/attachments/thesis/nannewielinga.pd...

Recently I got a request from a person who works a lot with blind people. A system that allows them to query the state of the environment is very valuable to them as well.

It's a different take on being "truly programmable", but I think the different modality makes it an interesting one.
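
To make that concrete, here's a toy sketch of what such a voice-defined rule might compile down to - structure and names invented purely for illustration:

  #include <cstdio>
  #include <functional>
  #include <string>
  #include <vector>

  // "Alexa, when I enter the room I want the lights to go on - but only
  // when it's dark outside" as a trigger/condition/action triple.
  struct Rule {
      std::string trigger;              // event, e.g. "presence_detected"
      std::function<bool()> condition;  // e.g. "is it dark outside?"
      std::function<void()> action;     // e.g. "turn the lights on"
  };

  void on_event(const std::string& ev, const std::vector<Rule>& rules) {
      for (const auto& r : rules)
          if (r.trigger == ev && r.condition())
              r.action();  // logging which rules fire (and why) is what would
                           // make "Alexa, why are the lights on?" answerable
  }

  int main() {
      std::vector<Rule> rules{{"presence_detected",
                               [] { return true; },  // stub: pretend it's dark
                               [] { std::puts("lights on"); }}};
      on_event("presence_detected", rules);
  }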


Huh this is an interesting thought. I've been reading On Lisp off and on lately and perhaps similar ideas of building both top down and bottom up apply to voice tools like Alexa where we create chainable series of commands to do a wider variety of tasks.


Definitely. My company requires the use of standard Windows applications - no apps on mobile devices - that have fairly intuitive interfaces for me (a late 30's former software developer), and while they have quirks, I get around them decently enough.

Many of my younger employees have never owned a computer. Literally. They use mobile for everything and only used the computer labs to do word processing and the like in college, or had work computers. And they are incredibly inefficient and basically useless when they encounter these applications for the first time; it is a huge training cost that I didn't expect going into it.

But it makes sense, of course. My industry doesn't make applications for today's end users, it makes them for the people that suffered through the DOS/Windows 3.1/Novell Networking era, because most of their consumers are 30-50 years old in academia or large business.

This split is happening in more than just my industry; it's happening in a lot of others due to the fast pace of mobile adoption and abandonment of the traditional PC. The real fear is that my employees are no better at using mobile devices than I am; in theory they should be, but in reality they're just technologically far worse than anyone else who had to grow up using complex systems. I have long-term fears about what this will do to the population; many people have long assumed that generations continue to get smarter and more tech-savvy, but I have found this to be very, very false in my limited experience.


Yes.

And today, many systems automatically personalize themselves for us, in very complex ways. If the machine learning senses what I want to do (as in Google Search's personalization, which for most people is probably better than advanced keyword syntax), why do I need to bother with programming?


Regarding the question of education/college costs rising drastically, I thought a key answer was that govt started to fund education a whole ton less.

In Canada (where this has not happened as badly), an undergrad in CS used to cost 3K annually a decade and a half ago, and costs 10K now. Other disciplines cost 6K I think .. CS degrees cost more since colleges decided that students earn a lot more and there is huge demand anyways. That doesn't seem unreasonable to me.

So .. Patrick's education costs question has an easy answer - govt funds got pulled in the US, and the wide availability of student loans acted like steroids. In places where govt funds didn't get cut (e.g. Canada), things cost about the same (when adjusted for inflation and the salary increases necessary due to things like rent/house price increases).

The question of why we could get to the moon in just 9 years, or put up the tallest building in 140 days? That is simple too. As a society, we are less desperate than our parents were. We are more demanding when it comes to life (hence wages and living conditions). This has spilled into the regulations we make as a society. I read it is now near impossible to build some types of factories in the US - due to our concern for the environment, etc. I personally find a lot of red tape frustrating, but then I remember .. we (as a society) put it there for some reason.


To some extent I think people just underestimate what education costs. Sweden pays ~€40k to educate an engineer (5 year, B.Sc + M.Sc). On top of that there is another ~€20k in student benefits to the student and ~€40k in a government backed student loan (for books and living costs).


My first reaction to the education and healthcare question was that growing demand must be an important factor.

The idea that a university education is a requirement for anyone wanting a career that guarantees a comfortable lifestyle is relatively new - a post WWII development.

Demand for healthcare has also risen as western populations have aged and more conditions have become treatable. Sixty years ago there were many more health problems that you couldn't spend money on if you wanted to, because they were incurably fatal. Now a significant number of those conditions have become chronic complaints that patients can spend decades paying to treat.

I don't know how much of the cost increase in education and healthcare can be attributed to increased demand, but it seems like it must be part of the answer.


As a comparison, in the UK we spend about £6k per year per high-school-aged child/youth/young adult (11-18) on schooling. In London state schools get about £8k, the highest £8.5k (~€46k over 5 years).

So coming in at around €8k per year for degree-level study is, relatively speaking, not that much.

We do have 4-year MEngs though (the same length as many MPhys, MChem, MSc programmes).

1GBP is about 1.1EUR currently (!).


>> Why do there seem to be more examples of rapidly-completed major projects in the past than the present?

A large contributing factor is that major projects today are much more complicated than they were in the past. The tools we have built are always advancing, but the size of the human brain is not. As projects become more and more complicated, they require a larger number of people to collaborate, and that comes with almost unavoidable slowdowns and inefficiencies. The Lockheed P-80 is nowhere near as complex as the F-35. The BART extension might take longer to build than the transcontinental railroad, but there was no networking equipment on that railroad.

It's also important to distinguish between projects that are deep vs. broad (i.e. those solved by new thinking vs. those solved by scaling up). To be fair, most of the examples in the original article are indeed "deep" projects, but, for example, the Empire State Building was constructed quickly partly because there were 3500 people working on it. As technology has advanced, deep projects just get deeper. Although each level of technology builds on the last, there is still complexity added at every level.


I travel to Hong Kong every few years to visit family. Each time, there are more MTR stations--and on some occasions, entirely new lines. The MTR is just as complex as BART, if not more so. I think it's worth considering why some places can build things like a metro fast, but others can't.


before they build a new station....

do they do environmental impact studies? was enough time given for the study to complete and people to challenge the results? possibly with another study? are there community meetings to discuss the impact? does everyone in the area have a chance to voice their opposition at an open hearing? do all workers on the construction team have strict safety regulations? require certified training for specific tools? the company selected must pass random workplace safety visits? are they allowed to impact existing traffic flows during construction?

we did this to ourselves


That's probably also the answer to the time series question -- i.e. why construction is more costly in the US today than 100 years ago.

Interestingly, I think those complexities are not imposed by any monolithic person or organization, but are the bulk result of lots of little regulations. I'm not sure any one specific person is saying "on the whole, this system of complexity is worth it"; rather, each regulation by itself has a specific good (e.g. an environmental study) without explicit accounting of the costs.

To be silly and meta, perhaps in addition to a mandatory environmental impact study of each large construction project, there should be a mandatory economic impact study on the environmental impact study.


Yeah but how does any of this explain the 16-month delay in BART due to the installation of the wrong network equipment?

Something is genuinely wrong when it comes to major projects in America.


I would guess they do all of that in Denmark, and Copenhagen has a very modern subway system.


It is modern, but the metro is just 1.5 lines, and the new ring line is taking forever to build. Some of the other extensions aren't planned to open until 2024. So it is quite glacial.


Western Europe has no shortage of worker safety, environmentalism, or democracy, and they don’t have this problem.


Are you sure? I live there and that isn’t the impression I get.


All this might explain cutting the pace in half, or to a third. Instead, the pace in America is almost zero. ZERO.


There was a very interesting article a few years ago (I've tried to find it several times but cannot) that tried to explain at least some of this. The one factor they circled in on is that the US government stopped employing its own experts. Instead, even writing the project requirements for different construction companies to bid on has to be outsourced. This causes two main problems. First, if you get the contract for writing the spec, you cannot bid on the project, so many big and competent players will avoid writing the spec. Second, the project lacks holistic oversight by someone who understands what's going on and has an incentive to keep costs down. Because nobody has an actual career as a construction expert with the government, we see the government putting people in charge of these projects who worked at the contracting company the previous day. The article had a few examples of the same construction company finishing projects pretty much on time and on budget in other countries but going far over in similar projects in the US.


I think a big part of Hong Kong's success with metro stations is that they own some of the land surrounding the stations, which means that the huge increase in land values from building a metro station goes back into the metro system, unlike most US systems where most of the increase in land values goes to private land owners.


Of course there are examples where complexity cannot be the issue. Russia frankly had no business out-competing the US on launch systems, yet they did in many ways. Even the US-built Atlas V uses Russian RD-180 engines. Perhaps things like regulatory capture and bureaucratic inertia, among others, are to blame.

There are other examples. I spent a lot of time around Boston, where road work takes ages, the roads are horrendously bad, and the most corrupt large-scale project in history took forever to finish (and then a ceiling tile fell off and killed someone, due to absurdly corrupt quality control). Of course, complex road projects can be built extraordinarily quickly and safely[1].

1 - https://thelede.blogs.nytimes.com/2007/05/25/california-free...


I thought this was the most interesting question also.

My own (admittedly biased) experience suggests there is a scale problem related to skilled workers. That is to say, there are fewer people with the skills needed to do the work demanded (by the whole system).

There could be a number of factors here but I’d have to assert education and incentives have not kept pace with the demand. I think this holds true even if we say the projects are more complex than ever or if the technology has advanced rapidly. It still seems like an imbalance of skilled workers to the work.


I think survivorship bias is highly relevant to this question. It may not be the entire answer, but it definitely needs to be accounted for.


I would add the question: do we always need this level of complexity or advanced technology for a given problem?

When would it be good enough to have a simpler or less complex solution and why are we sometimes biased against that?

There’s also a weird sliding window that follows technological advancement and generations - what was complex for one often becomes simpler for the next. With such a rate of technological change and high levels of complexity, this is partly why some get left behind or can’t keep up. But it seems like sometimes we also ignore the simple solution at hand in deference to constant futurism.


I suspect this appears to be true but is an illusion. You're right that there's incredible complexity to manage, but we've been doing that through abstraction for a long time by inventing black boxes - sometimes literally white boxes, like fridges and washing machines - that take away the necessity of thinking about the nuance in domain X, along with the development of ideas that abstract things away.

We can also make something look very complicated if we try, by switching context, multitasking, improper coordination.

The natural world (think of coal mining, making bicycles, steam engines) always looks very challenging if you're starting out.

> It's also important to distinguish between projects that are deep vs. broad (i.e. those solved by new thinking vs. those solved by scaling up).

Agree.

> As technology has advanced, deep projects just get deeper. Although each level of technology builds on the last, there is still complexity added at every level.

But we see conceptually simple projects everywhere that aren't being done!

We literally use the same tech to construct roads as the Romans. That is trillions of dollars in maintenance.

We know that natural sunlight and biomes would improve people's health in buildings where we spend 99% of our time. We just don't do anything about it apart from a window and a potted rubber plant or two.

We clean our butts with paper! The Koreans and Japanese had this one solved years ago!

There is no great wealth of complexity in any of these - it's just that we've decided not to think about them for legacy reasons.


I really like this format. Asking questions without answering them makes the reader think a lot more than the usual blog post does. I also look forward to gaining more insight from the HN comments.


> Why do there seem to be more examples of rapidly-completed major projects in the past than the present?

This is a great question, and I echo many of the sentiments here and elsewhere around general complacency, slowing rate of change (even as bits of technology get better faster), and so on.

But...

One of the reasons that big projects used to happen so fast is that the interests of people negatively affected by those projects were quickly and efficiently ignored. Being able to look back at the big projects is great, but it's not so great if you were one of the people whose interests were ignored. An advantage of the stasis we seem to be stuck in now is that we as a society are not quite so willing to stomp all over the interests of whoever happens to be in the way of a great new idea. Is this a good trade-off overall? That's not so clear, but there are good intentions on both sides of the "how fast should we do big things?" question.


Louis C.K. has a joke exactly about this called "Of Course, But Maybe":

"Of course, of course slavery is the worst thing that ever happened. Of course it is, every time it’s happened. Black people in America, Jews in Egypt, every time a whole race of people has been enslaved, it’s a terrible, horrible thing, of course, but maybe. Maybe every incredible human achievement in history was done with slaves. Every single thing where you go, “how did they build those pyramids?” They just threw human death and suffering at them until they were finished."

Link: https://youtu.be/0O5h4enjrHw


I realize this does not necessarily change the point of the joke, but evidence points to the Egyptian Pyramids being constructed by skilled tradesmen, not slaves. https://harvardmagazine.com/2003/07/who-built-the-pyramids-h...


I'm reading "Sapiens" right now and the author makes a similar point about empires (Roman, Chinese, British, etc.). Yes empires subjugated and killed lots of people, but they also brought them cultural innovation like money, rule of law, trade, standardization, etc.

And the centers of the empires were wealthy enough to develop what we call culture -- architecture, art, music, etc.


"Why are certain things getting so much more expensive?"

Because relative prices (e.g. the ratio of the price of a 42" TV to the price of a cinema ticket) are constantly changing due to changes in technology and competitive dynamics.

Some things have gotten (and continue to get) much cheaper due to technological progress and economies of scale. So, relative to those, anything that hasn't benefited from the same trends looks like it's getting expensive (in real terms). And if that thing (e.g. undergraduate degrees in the US) has weird competitive dynamics (e.g. willingness to pay is driven by the availability of credit, and availability of credit is driven by sticker price, and sticker price is driven by willingness to pay), then that effect is even more pronounced.


This is very interesting and implies a kind of paradox. As some things become radically cheaper, it should leave us with more resources to spend on things that are difficult to make more efficient. But instead, it makes it appear as if they have become so expensive we can barely afford them anymore!

Does this fallacy have a name?


Yes, it's called Baumol's Cost Disease [0].

[0] https://en.wikipedia.org/wiki/Baumol%27s_cost_disease


How is the Baumol effect relevant here? The posts you're replying to seem to be pointing out the perceptual increase in price in areas that haven't benefitted from cost reduction. To my (admittedly limited) understanding, that doesn't seem to have anything to do with the Baumol effect.


Not sure if it's quite the same thing, but it's similar to the Jevons paradox. https://en.wikipedia.org/wiki/Jevons_paradox


From a more cynical perspective: capitalism is effective at extracting the most money from consumers. People will pay almost anything for the really essential things: health, housing, air, food and water. Education is essentially the only reproducible way to pay for all of the above.

Therefore, any sufficiently late stage capitalist society has extremely high costs for all of those areas: it's just the stuff people will pay the most for.

Don't quote me on this.


It’s called price elasticity.

It basically means that you’re much more willing to pay a large price for a product you need and which has no substitute than for a commodity product. Which is why stunts like multiplying the price of a drug by 1000 regularly make the news.

Note that this is purely about price (ie the value you assign to the product) and has very little to do with its actual cost.

Furthermore, since you have no incentive to lower your price (indeed, you can hike it up any time you need to show more profits), well there’s no incentive to improve the underlying production process (ie lower costs) or improve the product. This goes IMO a long way in explaining the college and healthcare examples in the article.


> multiplying the price of a drug by 1000

Did you mean by 11 (1000% increase)? One of my pet peeves with using percentages.


Nope, I meant literally by 1000. I was thinking of Martin Shkreli but some googling shows he merely multiplied the price of the drug by 55 (although I think others have done worse since then)


Because we live in a very unequal society, things like education that offer a shred of economic security and maintenance of the middle class life are things that people will pay almost anything for.

Unfortunately, most of that effort is noisy but pointless zero-sum re-arranging of positions on the socio-economic ladder, the only beneficiaries being academics, textbook publishers, and campus property developers.

The solution: equality. Guarantee everybody a base level of economic security. From there, give people the freedom to pursue their real passions and interests, rather than waste their lives scrambling over the limited number of economically secure socio-economic positions our dysfunctional society currently offers.


Tangentially related: for the chart of American GDP embedded in this article, even though it claims to be on a logarithmic scale, that doesn't mean it doesn't have some sort of "banking to 45" applied.

http://vis.stanford.edu/papers/banking

Paradoxically (to me at times, perhaps to you), banking to 45 is supposed to make the chart easier to read accurately. It does so by making it more apparent to the viewer where the relevant inflection points are (in this case, the Great Depression really sticks out visually).

Theoretically, the Great Depression might not stick out as well if banking to 45 were not applied (and I suspect it may have been applied here).

Whether or not it turns out to be the case here, I suppose lots of the other curious questions in the article have similar answers. When measuring qualitative things with quantitative proxies, I think even ethical people can forge a system where emergent fudging arises.

I think we (the tech folk) need to up our game in casual analysis, from tools like Excel (which is actually relatively good) to something like www.anylogic.com. With system dynamics (https://en.wikipedia.org/wiki/System_dynamics) software, I think we have a tool that can better elucidate this curious phenomenon where we notice questions whose data "feels a little funny" but which may turn out to have a demonstrable, justifiable explanation for looking that way.
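
For the curious, the simplest variant of banking to 45 - pick the aspect ratio that puts the median segment slope at 45 degrees - can be sketched roughly like this (an illustration only, not the exact algorithm from the paper or from any particular charting tool):

  #include <algorithm>
  #include <cmath>
  #include <vector>

  // Choose the plot's height/width ratio so that the median line segment
  // is drawn at 45 degrees (median-absolute-slope banking).
  double bank_to_45(const std::vector<double>& x, const std::vector<double>& y) {
      std::vector<double> s;
      for (std::size_t i = 1; i < x.size(); ++i)
          if (x[i] != x[i - 1])
              s.push_back(std::fabs((y[i] - y[i - 1]) / (x[i] - x[i - 1])));
      if (s.empty()) return 1.0;  // degenerate input: fall back to square
      std::nth_element(s.begin(), s.begin() + s.size() / 2, s.end());
      const double med = s[s.size() / 2];
      if (med == 0) return 1.0;   // all segments flat
      const double xr = *std::max_element(x.begin(), x.end())
                      - *std::min_element(x.begin(), x.end());
      const double yr = *std::max_element(y.begin(), y.end())
                      - *std::min_element(y.begin(), y.end());
      // A segment's on-screen slope is (dy/yr*H) / (dx/xr*W); setting the
      // median of those to 1 and solving for H/W gives:
      return yr / (xr * med);
  }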


The chart is logarithmic - as such, its overall slope can be selected by altering the base (aesthetically, here, to roughly 45 degrees), but its curvature and smoothness are not manipulable.


The easy answer for the "Two Sigma" problem is that you... don't try to reproduce it on a large scale by finding some other method.

You provide individual, one-on-one full-time tutoring to each student, if that is analytically the best method. It's worth it, since educational spending is close to the most effective public spending imaginable - something like a $3-$5 return per dollar spent - and you'd be hard pressed to find any other available investment that delivers that consistently.

This also does away with some substantial percent of the overhead of maintaining separate public schools, assuming that this tutoring occurs in either the tutor's home or the child's home.


>You provide individual, one-on-one full time tutoring to each student

So, at a modest 2 hours of instruction per day, and assuming people spend on average 12 years in education and 48 years in the workforce, you need at minimum 1/16th of your adults to be teachers (see the arithmetic below). Not 1/16th employed in education, but teachers alone, and just for this education scheme. Today it's about 1/40th (in the US), and that includes post-secondary plus administration, logistics, etc. So you're scaling up the education sector by 3x or 4x AND you need to find something to do with the kids for the other 5-6 hours when their parents are working.
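
For concreteness, the 1/16th falls out of the arithmetic roughly as follows (assuming a tutor can teach about 8 hours a day):

  students per working adult = 12 yrs in school / 48 yrs working  = 1/4
  students per tutor         = 8 hrs teaching / 2 hrs per student = 4
  tutors per working adult   = (1/4) / 4                          = 1/16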

>It's worth it, since educational spending is close to the most effective public spending imaginable - something in the $3-$5 return per dollar spent

Marginal ROI does not work that way. Even assuming you get that return on what you're spending now, that doesn't mean the marginal dollar invested in education is getting that return nor that you can scale up the amount you spend and continue getting that return.


Closer to 1/3rd of the working population as teachers, actually, by my numbers. Current spending is $13k/yr per student, I'd bump that comfortably up 4x to $52k per year. That's about one full time teacher per student.

As for marginal ROI: if the effects are as dramatic as the stated research implies, it'd be more than worth it. It would be a larger improvement than the introduction of the formal secondary and tertiary education systems combined, which cost a lot more than K-8 did. I don't think you can overstate how large the demonstrated effect size was. Think "industrial revolution" or "invention of the printing press".

Essentially you'd be spending an extra 3x the amount we currently spend to get... 5x? 10x? better outcomes. Results from those studies were equivalent to making the current best students in the country suddenly the very lowest tier of educational attainment - top 1% suddenly becomes the minimum standard. Think Rhodes scholars being "the new special ed kids", or the new "functionally illiterate must-pass graduates".

(This is all assuming that the research was accurate, and the magnitude of difference is really that large. I'd want to see a lot of follow up.)


> modest 2 hours of instruction per day

Isn't this close to what students could/should be spending on homework, reading, or doing projects together with their parents - effectively one-on-one tutoring for every single one of them?

I suspect that US education spending is so high compared to other developed countries because it has to compensate for a lack of social support elsewhere in society. The #1 factor in student success is parental involvement. Instead of buying iPads for low-income schools, what's needed is stable 35-40 hour/week jobs that low-income parents can live on, so that they can spend evenings with their families.


Isn’t one-on-one tutoring by an expert, done by over a quarter of the population, just a return to the medieval craftsman & apprentice system?


Not at all. There are huge differences: trades/subjects of tutelage aren't foreordained or narrowly restricted; much more choice exists on the part of the student and teacher ('compulsory' education now is not the same as compulsory education then--the ply-your-trade-or-starve situation is not present in the same way); teachers can leverage much more effective tools (books! simulations!) to convey concepts quicker than a 'watch me, then do as I do' apprenticeship; teachers do not typically occupy a simultaneous position of educator authority and parent/elder authority ... the list goes on.


> individual, one-on-one full time tutoring to each student

Isn't this the promise of the various platforms? I'm talking about Canvas, Moodle, WebWork-type programs. In principle, they can have everyone in the room working at their own level, and only moving up when the system says they are ready.

Not that, as far as I know, those platforms are now being leveraged that way.


> Why is US GDP growth so weirdly constant?

This is something I noticed too. And it can't just be blamed on "inflation" of the US GDP. Almost every other country I checked does have fluctuations and goes through booms and busts. China has gone through an exponential rise but has slowed down, and seems to have broken the pattern recently.

The US is the only country so far that has gone decades with a constant rate of change. And you can't blame that on inflation, since inflation isn't constant either; otherwise it would mean that real GDP is constant (and inflation is what's increasing the nominal figure). GDP being constant would be weird too.

So is the US the singularity? Is the US dollar affecting the US GDP, and does its being the reserve currency of the world change the landscape of the US economy?


Maybe there's a problem analogous to "clustering" in opinion polls.

I presume every now and again econometrics people have to tweak the way they measure GDP.

Maybe if someone proposes a plausible tweak and it results in a surprisingly high or low figure their idea gets ignored, while it might be taken seriously if it ended up moving the figure towards the historical average.


The chart is less smooth if expressed in real terms instead of nominal terms (i.e. in "constant dollars", to account for inflation): https://fred.stlouisfed.org/graph/fredgraph.png?g=lpcg

And part of the trend is simply population growth. Per capita trends are slightly less nice: https://fred.stlouisfed.org/graph/fredgraph.png?g=lpcR

(Note that different scales are used for the real and nominal figures in the charts above.)

Over a slightly longer period: https://fred.stlouisfed.org/graph/fredgraph.png?g=lpdg

The rate of growth is clearly decelerating.


A larger economy is more diversified, diversity will create a regression to the mean. And sheer scale creates momentum -- it takes a lot to accelerate and a lot to decelerate.

Less diversified economies show wider swings. Australia rode on the sheep's back for most of a century, these days the country's fortunes closely track the prices of coal and iron ore.

Still, from indicators like number of public companies, market capitalisation and so forth, the USA's economy is growing less diverse at the moment.


Not sure what you mean by "constant rate of change"? https://tradingeconomics.com/united-states/gdp-growth looks to me like a noisy overlay of https://tradingeconomics.com/united-states/inflation-cpi.


In this case the GDP of the US would have been either constant or linear - in both cases different from the rest of the countries, which experience booms and busts while having inflation.


The Federal Reserve is extremely good at restraining booms and preventing busts to stabilize the business cycle.


Mar's law: Everything is linear if plotted log-log with a fat magic marker


Why are programming environments primitive? That’s worth a blog post-length reply, but I think it’s because coding is relatively silo’d and non-standardized (in languages, build systems, deployment schemes, and other tools). The impact of a single better tool is minimal because it could only address a tiny fraction of all developers. This is changing quickly, though, so I’m optimistic.

We are working on making programming environments less primitive. Here is our master plan: https://about.sourcegraph.com/plan/.


Actually, for recently popular languages like JavaScript, things are way better than for, say, C or assembly: resources, tools, debugging, etc.

I'm still wondering why we are forcing people to write assembly with short and rubbish opcode mnemonics when we are going to compile it anyway.

Even if you look at C vs Rust/Go, we have made huge improvements. People don't need to write Makefiles anymore. Packages are easily shareable and reusable. Security is the default. Etc.


>How do people decide to make major life changes?

Before the change, on those days when it seems like nothing is happening, those people are thinking around the change: constantly, subconsciously (or not), internally weighing options and possibilities. Then, one day, they say "I'm done." They find the words needed to make the change, which until then were elusive, and, like a key, finding them makes the action doable - and doable without opposition. Like an analog-to-digital switch that activates only when the analog portion is in the last 10% of either direction, the person has their finger on the switch for days, weeks, months before the pressure builds up and things change.

This hypothesis excludes changes made in reaction to other external or involuntary changes in a person's life.


> Why can't I connect my editor to a running program and hover over values to see what they last were? Why isn't time-traveling debugging widely deployed? Why can't I debug a function without restarting my program? Why in the name of the good lord are REPLs still textual? Why can't I copy a URL to my editor to enable real-time collaboration with someone else? Why isn't my editor integrated with the terminal? Why doesn't autocomplete help me based on the adjacent problems others have solved?

I think all of these are possible already in VSCode/Atom, and especially true if you’re doing reactive UI programming for the web - time travelling, improved REPL, live debugging are all there. A very good spot to be in :)


> Why doesn't autocomplete help me based on the adjacent problems others have solved?

Doesn't the Facebook editor / autocompleter do that? I guess it's not available to anyone outside Facebook?

> Why can't I connect my editor to a running program and hover over values to see what they last were? Why isn't time-traveling debugging widely deployed? Why can't I debug a function without restarting my program?

Print debugging is extremely versatile, and the market has spoken: apparently nothing else has the same expressive-power-to-overhead trade-off. I have a few theories: tool usability sucks (looking at you, gdb); debugging and writing code are two different disciplines, so if you can stay in the same modality (writing code) while debugging, it's one less thing that changes from under you; and print debugging can apply to everything from embedded to mobile to cloud to HPC. Print debugging gets a lot of scorn, but (if you want an appeal to authority) if you read "Coders at Work", no "famous" programmers use debuggers; they all use(d) print debugging.


All 15 of them?

The market has spoken, programmers want free stuff, and they’re happy doing it the old fashioned way.

Except for those who aren’t content:

[Insert videos from Bret Victor here]

https://vimeo.com/36579366

Lighttable is another attempt that comes to mind:

http://www.chris-granger.com/lighttable/


From my direct, personal, experience, it goes something like this:

"Don't you want better tools?" I'll ask

"Of course!" the programmer replies (note sometimes this is a conversation with myself) "but I don't want to pay anything for it" they caveat.

"That's no problem," I reply, "as long as it will make your life better."

"Well, that sounds good, but I also don't want to have to learn anything new."

"That..."

"Also, it has to just work from day one, if it doesn't quite work as soon as I touch it, I'll swear off it forever and go talk badly about it on twitter and hacker news."

"Ah..."

"Also, it has to work with every use case I can think of. Multi-threaded, GUI, deployed to HPC clusters running RHEL6 and also Docker containers running CoreOS, and it should be able to help me be productive in either JavaScript or s390 assembler. Also it should take no time to set up, and start giving me answers straightaway."

"Hm..."

"You know what, the methods I already have already satisfy all of these requirements and as a bonus, I don't have to learn anything new in order to use them. I spend enough time learning new frameworks to write software, why would I spend more time learning frameworks to debug software?"

"Because you spend more time debugging software than writing software?"

"Honestly the process of debugging, and the process of writing, have become so intertwined in my thinking, that distinction between them seems arbitrary and pointless."

"Thanks for your time."


> Why can't I debug a function without restarting my program?

This was/is true in C# for 32 bit programs. Being able to code with live data is really productive. Live unit testing also would be a big win.


C#/.NET 64 bit does this too but there were always limitations such as not being able to reload the code if method parameters were changed, closures were modified etc.


On Java you can often do most of this using either JRebel or DCEVM.


When I saw how C#/.NET could do this, I found myself annoyed, because you could in theory do this even with C. There isn't any reason you couldn't monkey-patch a new function into a running C program - at least for a pure function.

I used to work with an OG (with a pocket protector, no less) who did this when debugging assembly, by pasting op-codes using a debug monitor.
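
Coming back to the C idea: a rough approximation of the effect is possible on POSIX systems with dlopen/dlsym. A sketch - "patch.so" and "square" are made-up names, and real edit-and-continue patches machine code in place rather than reloading a library:

  #include <dlfcn.h>  // POSIX dynamic loading
  #include <cstdio>

  using square_fn = int (*)(int);

  int main() {
      // Rebuild the library while this program runs, e.g.:
      //   cc -shared -fPIC patch.c -o patch.so
      // After a dlclose/dlopen cycle, calls go to the new implementation.
      void* lib = dlopen("./patch.so", RTLD_NOW);
      if (!lib) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }
      auto square = reinterpret_cast<square_fn>(dlsym(lib, "square"));
      if (!square) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }
      std::printf("square(7) = %d\n", square(7));
      dlclose(lib);
  }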


This is also true for VB6, add lines of code in real time and resume stepping through. Also you can hover your mouse on variables during debugging to see their contents.


> Why can't I debug a function without restarting my program?

I do this in Erlang all the time. It's one of the most fun parts about the whole system.


"Why are programming environments still so primitive?"

Try the JetBrains tools (IntelliJ, PyCharm and kin). I still use vim to edit from a shell, but the JetBrains suite is higher-order coding.

Editing a function while debugging it was available in VB in the 90s. Drag the execution arrow to code you have already run, rewrite it, and step-over.


Shameless self-promotion:

"Why can't I copy a URL to my editor to enable real-time collaboration with someone else?"

That's one of the things we (Scrimba) are trying to enable dev teams to do seamlessly these days: https://www.youtube.com/watch?v=Rsorl3-TjdY

Instructions on how to try our beta can be found here: https://scrimba.com:9000/@welcome


"End-user computing is becoming less a bicycle and more a monorail for the mind."

I shall steal this quote. The humble spreadsheet is just about the only 'programmable' application left on the typical corporate endpoint/educational client PC.


And more companies are banning macros in work environments due to security concerns. Then everything gets moved to Alteryx or some bespoke app that the end user has no control over.


There's new growth in "low code"/"no code" apps. So maybe there will be a bicycle. Maybe.


this plagues my sleep. yes, spreadsheets! I always wonder: what's so special about spreadsheets?


Mainly that a spreadsheet program is available on most laptop/PCs. And most people have been taught how to use an 'office' package at some point. And the UI is sort of built in and familiar.

Perhaps you will be able to sleep easier when the Raspberry Pi generation comes of age? I recollect that small-business people built applications with HyperCard a few decades ago.

https://www.wired.com/2002/08/hypercard-forgotten-but-not-go...


Spreadsheets are the best! They are literally THE killer app for PCs.


They do allow people to walk up and start building models with minimal introduction. Low floor, and, alas, as many know here to their cost, a high ceiling in the sense that spreadsheets often get used for more advanced modelling that would benefit from being constructed in a more maintainable way.


Why are programming environments still so primitive? In different ways, Mathematica, Genera, and Smalltalk put almost every other programming environment to shame. Atom, Sublime Edit, and Visual Studio Code are neat, but they do not represent a great improvement over TextMate circa 2007. Emacs and Vim have advanced by even less.

Why pretend IDEs don't exist? For my C++ development I couldn't be happier with Qt Creator (which happily manages non-Qt projects too), which does a lot of nice things to speed up my coding and code understanding.

I only use text editors like the ones mentioned when I have to and IDEs whenever I can.


C++ IDEs and Smalltalk/Lisp visual environments are in very different leagues. It's worth giving, e.g., Pharo Smalltalk a shot.


I've heard great things about the productivity of Smalltalk. Is Pharo the best way to explore the language?


Sorry for the late response. Yeah, I mostly played with it in 2015, but I think Pharo was the most modern option then, and there were good resources like "Updated Pharo by Example" (a free book).


Not to be critical, but I pose this question in response:

Why do billionaires so often love to muse about interesting things outside of their field, get attention and praise for their thoughts, and then publicly allocate only tiny portions of their wealth and time to these projects so ineffectually while passing down the majority of their wealth to disinterested heirs?

e.g. Walt Disney, Henry Ford, Edgar Prince, Steve Jobs, Richard DeVos, Bill Ackman, Mark Zuckerberg, Ken Griffin.


It’s a good question, but there are counter-examples too (Bill Gates, arguably Elon Musk) and I’d argue that these get more attention and praise — so on the whole things may not be too badly out of kilter.


Isn't that kind of unfair? Lots of us non-billionaires love to muse about interesting questions outside of our narrow specialties -- we do it every day here on HN. But when it's time to get some work done, we return to our specialties where our time and effort can make the most impact.


Billionaires can afford to muse about interesting things outside of their field; they get attention because they are billionaires.

Just because they are asking questions does not automatically mean they have the passion to solve the other problems.

I don't know about others, but isn't Zuckerberg donating the majority of his wealth?


>I don't know about others, but isn't Zuckerberg donating the majority of his wealth?

As an "I control everything anyway" tax-evasion scheme.

https://www.nytimes.com/2015/12/04/business/dealbook/how-mar...

https://blogs.harvard.edu/philg/2015/12/03/is-the-new-zucker...


You're flat-out wrong in your premise that it's a tax dodge, and it's very easy to demonstrate. Neither of those links supports a claim about tax evasion in any regard.

There's no special tax benefit to what they set up. There are only structural benefits, in that the LLC can give to political campaigns and make private investments (neither of which are tax deductible). It also enables Zuckerberg to continue to directly control the Facebook stock while it's held by the LLC.

Any shares sold by the LLC generate a taxable event, just as they would for an individual. You know the best way to avoid taxes like that? Keep the stock to yourself and don't sell it at all.

The only tax deductions the LLC can take are identical to what an individual can take; the money must have gone to a 501(c)(3) charity to generate a tax deduction. Nothing is gained with regard to avoiding taxes.

The first link openly admits it doesn't benefit Zuckerberg to use it as a means of tax avoidance (much less tax evasion, which is a crime that you're claiming is being committed). He'd be just as well off donating the stock directly to a charity instead; indeed, the ideal example the article gives has the LLC doing exactly that.

Any potential tax deductions accrued are saturated instantly, as they're limited to a fraction of the LLC's income in a given year, and the value of the deductions expires after five years (i.e., any charitable tax deduction carries forward for a maximum of five years and may only offset up to 50% of your income in a given year). There's nothing special about this deduction with the LLC; an individual gets the same arrangement. Again, there's no special angle.

Gates, for example, once gave a very large single-year donation to his foundation, back in 2000. He saw very little tax benefit, because it saturated his ability to deduct a hundred times over.

The second link - an angry personal blog post - has a title calling it a fake charity. The first paragraph opens by insulting Zuckerberg's marriage. The second paragraph opens by insulting the Zuckerbergs' love of their daughter. You can throw that one out as biased immediately.


This absolutely has to be Mark Zuckerberg. Nobody else could get this mad.

The dishonesty is in the description of the event by Zuckerberg and the media. It was covered as "Zuckerberg donates $45 billion to charity," but he effectively donated it to himself.


You list examples but don’t cite any references to prove that these people didn’t make substantial philanthropic contributions. Nor do you cite counter-examples like those other commenters have mentioned.

Andrew Carnegie, the richest person of his era, donated 90% of his wealth to philanthropic causes, most notably to local libraries, universities and scientific research. He implored his fellow wealthy people to do the same, believing the only worthwhile purpose of accumulating wealth was to reinvest it in improving society. The Rockefellers seemed to do the same, as have many others since.

In Patrick’s case, his company is still in growth phase and his wealth is mostly on paper, so he’s not able to give much time or money. I’m sure he will when his company’s success becomes more assured and he can spare the time and money. His friend, YC President Sam Altman, has donated $10M of his own money to YC Research to help find answers to hard questions he cares about, which I think is a lot for him (I was surprised to learn he even had that amount to spare as his own company wasn’t a huge success).


The list is far from exhaustive and far from perfect, but there's a reason Carnegie isn't on it.


How did Jobs do this? He dedicated his life to Apple until the day he died.


Reading this, I'd feel a lot better if YC were run by Patrick; I prefer people who know what questions to ask over those with all the answers.


As I understand it, Patrick and Sam are friends. You don’t think they’re discussing these and other hard questions socially?

Also, Sam founded YC Research and donated $10M of his own money into it to find answers to hard questions about AI, UBI, medicine/health and other big challenges. He also toured the country interviewing Trump voters to find answers to questions about what was underlying the political climate that led to the election result.

What makes you think YC’s leadership think they have all the answers? At least in Sam’s case, the evidence seems to point to the opposite.

For what it’s worth I think there are other questions that are more important and potentially valuable to society than Patrick’s or the ones YCR is currently working on, but that’s just from my own experience and contemplation, and I don’t criticise people whose own journeys have not pointed them to these issues/ideas yet. I commend anyone making serious efforts to understand and solve the biggest issues they can identify with their own experiences and best efforts.


I read Sam's posts whenever they come out, and they strike me as coming from someone who has good intentions, little experience, and a belief that he is able to see what is good for the world. I read Patrick's writings as coming from someone who is genuinely curious about the world and who, by thinking about problems, reaches points where he does not have all the answers but is at least able to phrase the problems coherently enough that future solutions might be defined.

The difference in style is tremendous, and no statement on their friendship or private discussions was implied or intended. To me it is the difference between 'smart' and 'wise'. You can be very smart and still not be very wise (though it is hard to be wise and not smart).

As far as the evidence is concerned, that we can agree on, the 'changing the world for the better' mantra has outlived its usefulness and should for transparency's sake simply be replaced by the one thing that matters: money.

Watsi is still from the PG days, the UBI experiment is so broken it is embarrassing, the 'hard questions about AI' have been raised since Asimov's days and do not - to me at least, feel free to differ - move the needle at all.


It's a valid assessment, thanks for sharing it.


> Part of the problem with blogs is that they're less rewarding than Facebook and Twitter: your post may perhaps get some thoughtful responses but it doesn't get immediate likes.

To me, this is not a problem. People should not be rewarded with instant dopamine for low-effort actions. [0] The reward for publishing on a blog is in the responses you receive from readers and not from a counter incremented by a click or pageview.

I rarely see more than shallow insight on Twitter/Facebook, as posts have a short visibility lifetime and replies longer than a sentence are collapsed. By contrast, blogs (not like Medium or Stack Exchange) will often receive deep, thoughtful replies months or years after they are published. There's no "algorithm" to please when you're writing a blog; your post will stay there until your domain name expires.

If you are having issues finding worthwhile blogs to read, ask people around you for suggestions. Not everything needs to be indexed by software.

> And part of the problem is, of course, that writing a good post is much harder than writing a witty tweet.

Where is the problem here? Thermodynamics and information theory tell us that a valuable long-form post ought to be more difficult to write by several orders (of orders) of magnitude. Yes -- it would be wonderful if we could all spit out fascinating 17-page theses every week or two, but that just isn't compatible with our biology. On the other hand, publishing 17 pithy tweets in a week is pretty easy, and people will probably give you plenty of attention for it.

[0] https://yihui.name/en/2017/12/so-bounties/


Why doesn't autocomplete help me based on the adjacent problems others have solved?

I'm working on writing a shell with autocompletion now [1], and something like crowdsourcing completions in the style of Google query completion has occurred to me.
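
The non-private core of that is tiny; the hard part really is the corpus. A rough Python sketch with made-up history data (purely illustrative):

    from collections import Counter

    # Imagine this aggregated across many users' shells -- which is
    # exactly the privacy problem discussed below.
    history = ["git status", "git status", "git stash", "git stash pop",
               "grep -r TODO ."]
    counts = Counter(history)

    def complete(prefix, k=3):
        matches = [(n, cmd) for cmd, n in counts.items()
                   if cmd.startswith(prefix)]
        return [cmd for n, cmd in sorted(matches, reverse=True)[:k]]

    print(complete("git sta"))  # ['git status', 'git stash pop', 'git stash']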

But privacy seems to be a dealbreaker in this case, and probably with many of the applications that Collison is thinking of.

If anyone has any ideas, let me know :) I know about (and have worked on) differential privacy, but I'm not sure it helps here.

[1] http://www.oilshell.org/


> Why can't I debug a function without restarting my program?

Given the number of people here pointing out mature, powerful systems where this is totally, easily possible, I think a more to-the-point question might be "given the availability of debuggable systems, why are systems that do not value this capability so much more common/used?"

...however, I have yet to see a good answer (or even discussion--people love to feel superior and defend their choices) to that question, and it comes up a lot.


It's not that much of a gain.

And that requires learning new things.

This is the worst combination available for a new tool to have.


(I work at Retool, and Patrick is an investor.)

> Will end-user applications ever be truly programmable? If so, how?

I think that most end-user applications now have APIs that allow you to manipulate the data. For example, Salesforce, Lever, Excel, etc. all have APIs for reading and writing data. Allowing end users to build custom UIs on top of those APIs seems like a simpler problem.

Retool (https://tryretool.com) is a fast way of building UIs on top of data. And so if there are APIs for reading and writing data from “end-user applications”, Retool lets you build custom UIs and workflows on top of them quickly.
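
To make the shape of the problem concrete, here is a minimal sketch (the endpoint and field names are hypothetical, and this is just an illustration of the idea, not how Retool works internally): pull rows from a REST API and render the crudest possible "UI". Tools in this space essentially replace the print loop with configurable widgets.

    import requests  # pip install requests

    # Hypothetical CRM-style endpoint; any JSON list of records works.
    rows = requests.get("https://example.com/api/leads").json()
    for row in rows:
        print(f'{row["name"]:<24} {row["status"]}')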

I think this is an interesting problem, and I’m not sure what the right solution is. If anybody else has ideas, feel free to email me — I’d love to learn! I’m david@. :)


Total cost of ownership for malleable software is higher than for something you pay someone else to specialize in modifying, and businesses have gradually learned this. They increasingly realize that software shouldn't be their core business, especially if it's applicable to more than just their own industry. So this is just the normal tinkering -> specialization dynamic seen in most new technologies.


Not everything useful has APIs. The most frustrating example in my life is WhatsApp. It is positively horrible. On iOS? The only way to back up your chats is iCloud. On Android... the only backup option is Google Drive. On an iPad without a cell phone (low-income seniors who cannot afford cell plans)... go away. It is sad and messed up how much of a locked-in kingdom we have in software today. It has NEVER been this bad.


> Why are certain things getting so much more expensive?

I recently read Graeber’s Bullshit Jobs, and suspect the answer lies in there. The examples Patrick lists are all service industries driven by labour costs. This probably influences the following two questions too (project delays and GDP).


In the construction industry there are no bullshit jobs. Zero.

However, only 50% of the house price comes from the construction industry. I have another comment suggesting part of the reason here: https://news.ycombinator.com/item?id=18040633


I'm not sure what aspect of construction you are in, but there is an amazing amount of inefficiency. For example, a friend of mine was working on the Tappan Zee bridge project several years ago. Union job, very good salary, benefits. He was working the night shift, making an added bonus night differential. However, there was a noise regulation in place where no work could be done after a certain hour, so for 6 hours a night this extremely well paid union construction crew was sitting around collecting very good money + night differential for doing absolutely nothing. Now his job is very real, and it requires skill, but for the purposes of argument, this was a "bullshit job" that went on for weeks and weeks and added literally millions of dollars to the cost of the job in which absolutely no work was done.


> What's the successor to the book? And how could books be improved?

> What's the successor to the scientific paper and the scientific journal?

I think the answer to both of these looks something like a text- and figure-heavy Jupyter notebook distributed with a Docker container and embedded datasets.

The point being that books and papers introduce people to certain datasets, then teach how to gain insight from them. This would add the ability to interact programmatically with the data and equations being presented. I’m talking about technical textbooks here specifically.

I don’t know what the successor is to the scientific journal. I’ve talked to a lot of people about this and every time I’m less convinced it can be disrupted. It will require a coordinated international effort.


> Will end-user applications ever be truly programmable? If so, how?

How about adding some sort of block-based scripting language like Scratch or Snap? Make it more approachable and encourage experimentation.


> Will end-user applications ever be truly programmable? If so, how? Emacs, Smalltalk, Genera, and VBA embody a vision of malleable end-user computing: if the application doesn't do what you want, it's easy to tweak or augment it to suit your purposes... With Visual Basic, you can readily write...

Lisp was great and VBA was fun back in the day, and I love them nostalgically, but today VBA looks ridiculous: it is neither a functional nor an object-oriented language but a toy one, and the fossil VBA IDE, kept integrated in all the modern MS Office apps without improvement for decades, feels like a disaster. It ought to be replaced by either a modern dialect of Lisp (improbable, as Lisp looks ugly and feels unintuitive to non-geeks, and geeks are not the relevant target audience for Excel macro functionality) or something like Python (the best candidate, one that can probably satisfy everybody).

> Today, however, end-user software increasingly operates behind bulletproof glass. This is especially true in the growth areas: mobile

I believe the main reason here is the cost of supporting (and developing) the apps, which pushes vendors to force users to do what they want. It's easier when you know all the possible use cases in detail and you are the one who designs them.

> and web apps.

Web apps don't operate behind bulletproof glass. Thank G-d we can still view source, inspect, change, and script almost everything on the web with the developer tools integrated into every major browser.


> Why are certain things getting so much more expensive?

Scott Alexander has a great post with lots of data on this:

http://slatestarcodex.com/2017/02/09/considerations-on-cost-...

The numbers are pretty incredible. There is no single clear answer as to the cause of it all, unfortunately. It looks like newly available funds allow a whole bunch of areas to suck up more and more money, arguably without providing a lot of value. Universities compete to have the biggest stadiums and the most luxurious dorms. Hospitals pay more and more people to manage, oversee, verify, systematize and inefficiently computerize everything.

In a lot of areas, we seem to pay a lot to achieve marginally safer conditions. In the case of health care, if paying 5x the cost extends the life of an additional 0.1% of patients, it's hard for deciders to justify not doing that spending since saving 0.1% of people in hospitals is still saving a ton of people.

Construction projects are like that too: you might be able to make construction safer, with 1% fewer injuries, if you are willing to pay 5x more for the project. These safer but much slower methods often become mandatory.

How do you decide on these trade-offs though?


> While a lot happened in the US during World War II, it's easy to forget how short the period in question was: American involvement lasted 3 years 8 months and 23 days.

How does this stack up against US involvement in other "good" wars, where the foe is unambiguously bad?

I.e., not a Vietnam-style, ideologically motivated confrontation.


A lot will depend on what you personally feel is "good" versus "ideologically motivated".

For example: the US was in World War I for 574 days (it declared war on Germany on April 6, 1917, and the war ended on November 11, 1918), and entered the conflict with the Zimmermann telegram and German submarine attacks as its casus belli.

But the US had been aiding the Allied powers prior to that, and from the German perspective the resumption of unrestricted submarine warfare, and the resulting attempt to enlist Mexico to join the war against the US, was a defensive move to stem the flow of supplies from the neutral-but-obviously-biased US to Germany's existing opponents.


WWII is a bit of a fluke in terms of being clear what was "good".


> And part of the problem is, of course, that writing a good post is much harder than writing a witty tweet.

As is reading it, when "a good post" implies a long post. Authors should realize brevity is a merit and learn to organize the information they seek to share into more concise, easier-to-digest pieces. If I encounter a post that seems interesting and fits on one page, I will surely read it. If it's about 1.5 screens long, I may read it. If it's more than 2 screens long, I will only read it if I believe it is probably going to change my life. And my screen height is just 900 pixels.


Some of these are easy.

> Why do there seem to be more examples of rapidly-completed major projects in the past than the present?

Observer bias. Mean construction time is decreasing at an exponential rate. In the limit: nano-tech, with construction times approaching physical limits.

> Why is US GDP growth so weirdly constant?

Introduce a replicator into a nearly-pristine environment and that's the growth curve.

> How do you ensure an adequate replacement rate in systems that have no natural way to die?

Apoptosis

> Is Bloom's "Two Sigma" phenomenon real? If so, what do we do about it?

Yes. Nothing. There is no working mass education system and no basis for thinking there ever could be.

> What's the right way to understand and model personality?

Neuro-linguistic Programming.

(The "Five Factor" model is no better than Astrology. NLP is based on hard science. It's one of the very few schools of psychology that has repeatable algorithms.)

> Will end-user applications ever be truly programmable? If so, how? Why are programming environments still so primitive?

People ignore Prolog and Dr. Margaret Hamilton's work. (She's the person who coined the term "software engineering".) The combination of Prolog and Hamilton's HOS provides an end-user-programmable error-free programming environment.


> What's the successor to the scientific paper and the scientific journal?

Why not just a kind of article-centric social network, where scientists would publish their articles and reference other articles, and other scientists (and whoever has found an article relevant to their practice or research) would write comments, score the articles on various subjective parameters, and mark experiments they have managed or failed to replicate? Of course, there would have to be moderation to fight junk comments, and a way to weight contributions to the ratings by the reputation of whoever makes them.
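
The reputation-weighting piece is easy to state precisely. A toy Python sketch of the idea (my illustration, not an existing system):

    def weighted_score(reviews):
        """reviews: list of (score, reviewer_reputation) pairs."""
        total_rep = sum(rep for _, rep in reviews)
        if total_rep == 0:
            return 0.0
        return sum(score * rep for score, rep in reviews) / total_rep

    # A high-reputation reviewer outweighs a drive-by rating:
    print(weighted_score([(9, 5.0), (2, 0.5)]))  # ~8.4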

If I were a scientist, I would rather publish my research on my own blog, if this didn't mean almost nobody was going to find it and take it seriously. Back in the day, there were no blogs people all over the world could view, and that's why we needed journals; I believe journals would not even have emerged otherwise. If the Internet had somehow been available since the earliest days, scientists would probably just have uploaded their work there and cared no more (except perhaps to receive feedback).


> It seems that the returns to entrepreneurialism in cities remain high: Hong Kong, Singapore, Dubai, and others, have improved the lives of millions of people and appear much more contingent than inevitable.

I’m interested in this as a bald assertion. Have they improved the lives of millions of people? I wasn’t aware of that. Can someone give me a quick before and after?


I was also curious about this. There are high levels of disparity in all three of those places. Hong Kong has both some of the most expensive property in the world and people living in cage homes.


Experimental Cities

These are like experimental schools.

Not intended to be mainstream at all.

Admission criteria would seem to be a critical consideration, and the further from the mainstream the experiments extend, the less likely they would be to scale beyond very strict criteria.

Houston comes to mind as a purpose-built city, founded on undeveloped land for the primary purpose of entrepreneurship, which has grown larger than average by maintaining that approach more than most. As an example, there has never been a zoning ordinance; that kind of regulatory obstacle would be seen as an experiment in curtailing prosperity, certainly not normal, and a failed experiment at that, after observation of its long-term effects in other municipalities. Even though in most other municipalities the removal of zoning would be thought of as an experiment too risky to even consider.

In the mature real-world example of Houston, it is also painfully obvious what benefit could have been obtained with a few well-intended admission criteria. Besides, when's the last time you heard someone say "Hey, it's a free country" anymore, anyway?

> How do we help more experimental cities get started?

You've got to find someone who wants to subdivide their ranch, and then get settlers to move there like anyone else. Incentives might help speed things up, and you've got to figure that the more restrictive the admission criteria, the more people will want to apply.

Or something like that.


> Why are there so many successful startups in Stockholm?

I believe a cause of this is that the Swedish market is pretty small, so companies tend to go to a global market from the start and solve global problems.

Compare Stockholm to Berlin, Paris or Moscow, for example. Domestic markets in these countries are huge, so people have no reason to go global - they have lots of local problems to solve.


> Why are certain things getting so much more expensive?

People have become better at capturing value for themselves.

> Why do there seem to be more examples of rapidly-completed major projects in the past than the present?

Culture, competition and selection bias. There are still a lot of successful projects being built, just fewer in cultures that don't do cooperation well at this point in time. See previous answer.

> How do you ensure an adequate replacement rate in systems that have no natural way to die?

Reform(s).

> How do we help more experimental cities get started?

It has become common to cite those cities as successes without actually having spent much time there and especially not on a "grassroots" level. Chances are that it is actually both better and easier to reform existing cities or areas.

> How do people decide to make major life changes?

As with most things, people do things when they are easy. The best way to get a major life decision made is to make it easy to act on. Very few people, for example, move to countries they know nothing about.

> Why are there so many successful startups in Stockholm?

Stockholm is less bad than many other places, and was especially so at the end of the '90s and early '00s. Most places in the world are really quite petty and idiotic.

> What's the successor to the book? And how could books be improved?

Multimedia like Microsoft Encarta and An Inconvenient Truth. (There is more to it of course, but that is the direction).

> Could there be more good blogs?

No, journalism has a low enough barrier to entry now that blogs as such are largely obsolete.

> Why are programming environments still so primitive?

Because the stakes are low and there is enough low hanging fruit elsewhere.


> What's the right way to understand and model personality?

With data. Personality is a union of speech, writing, facial expression and body language. Until all of the nuances of these factors are measured across a diverse population in a unified dataset, our models will be primitive. The hardest part of collecting this data is that you want it from up-close and personal day-to-day interactions, from the vantage point of other humans. So the first requisite is discreet and high-res body cameras, but then the real challenge is how to make this type of study double-blind (how to place the cameras without people knowing they're wearing them). The models will flow from the data.


Since you seem to know a bit about the topic: why can we not measure the Big Five in straightforward tests, just like IQ?


> Why can't I connect my editor to a running program and hover over values to see what they last were? ... Why can't I debug a function without restarting my program?

When he learns about Visual Studio his mind is going to be blown.


> Will end-user applications ever be truly programmable? If so, how?

This requires a clear separation of functions and data. Linux pipes are a glimpse of what's possible, and there's been some work on translating English to Bash. I think we aren't far from AI-powered English-to-code translation being feasible: "order me an uber an hour before my next meeting" -> check calendar, filter, add -1, call the uber function. Alexa must be working on something like this. If I had a billion, that's what I'd work on.
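
As a sketch of the plan such a translator would have to emit (every function here is a hypothetical stand-in, not a real Uber or calendar API; the AI's job is producing this glue from free-form English):

    from datetime import datetime, timedelta

    # Hypothetical stand-ins for real calendar / ride-hailing APIs:
    def next_meeting_start():
        return datetime(2018, 9, 30, 15, 0)

    def request_ride(at):
        print(f"ride requested for {at}")

    def handle(utterance):
        # check calendar, filter, add -1, call the uber function
        if "uber" in utterance and "before my next meeting" in utterance:
            request_ride(at=next_meeting_start() - timedelta(hours=1))

    handle("order me an uber an hour before my next meeting")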


> "Why are certain things getting so much more expensive?"

Incentives. Something costs X dollars because there is a sufficient number of customers willing to pay X. Even if the cost of providing goods or services gets cheaper, businesses will charge more if they can.

Capitalism assumes things will get cheaper as a result of competition, but that does not take into account third-party involvement. If the payment is made by a third party, the incentive to compete by lowering prices decreases dramatically. A third party could be a credit card backer, insurance, government subsidies, etc. The impact of reducing the price is either not felt or is delayed significantly, and it plays a much smaller role in consumer decision-making when a third party is involved.

The problem cascades up and down supply chains, making trivial projects and services cost an insane amount. I blame this for many things, including healthcare costs, car repair costs, public infrastructure costs, etc.


The American healthcare system is the perfect combination of misaligned incentives. In the US, patients are the ones who have to foot the bill, but they have no incentive to keep the costs down because things are covered by insurance. As a small example, I don't need new glasses every year, but I get a new pair every year because it's covered by insurance. On a much larger scale, in most countries, it doesn't make economic sense to spend tens of millions of dollars to develop a medicine that's 1% more effective than a currently existing one. In poorer countries like India, there will be very few people who could afford the new medicine. In countries with socialized medicine, the government would rather spend the money on something else. In the US, patients are more than happy to pay for the new medicine because it costs the same as the old one after insurance. Meanwhile, everyone's insurance premiums bump up a few cents and nobody bats an eye. Repeat this a thousand times and you're left with an incredibly bloated healthcare system.


> How do we help more experimental cities get started?

    — Create better city simulation software 
    — Fund "future cities" labs and institutes
    — Pool job creators together and negotiate collectively for new cities and districts
      (like Amazon is doing for HQ2)
My long answer is on Medium: https://medium.com/@yurylifshits/neocity-aa102731911b


>> Why are programming environments still so primitive?

I think adding all the features the author is requesting, and having them work seamlessly, is much, much harder than he seems to realize.


> Will end-user applications ever be truly programmable? If so, how?

People just want things to work. Extreme customization can be valuable, but it often devolves into subtle bugs and inconsistent behavior. People forget about their custom modifications. Developers end up debugging people's custom modifications for every bug report. Having 100% consistent reproducible behavior is usually more valuable.


While seemingly unrelated, I kept answering "fear" to myself as I read some of these questions. Afraid to fire, afraid of risks, afraid of haste, afraid of users, etc. In general, where business trends toward fewer startups, higher costs, less customization, slower delivery, etc., it's almost always because the environment rewards the risk-averse.


> Why are certain things getting so much more expensive?

I too would like to know the answer to this question. I suspect it may have to do with low interest rates and easily available credit. For example, if buyers are willing to get a 50-year mortgage instead of a 20-year mortgage (and banks are willing to lend it), prices will increase to match demand.
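
A quick check of the arithmetic behind that intuition, using the standard annuity formula P = M * (1 - (1 + r)^-n) / r for the principal P that a fixed monthly payment M can service (the numbers are illustrative):

    M, r = 2000, 0.04 / 12            # $2,000/month budget at 4% APR
    for years in (20, 50):
        n = years * 12
        P = M * (1 - (1 + r) ** -n) / r
        print(f"{years}y mortgage supports ~${P:,.0f}")
    # ~$330k at 20 years vs ~$519k at 50: same payment, ~57% more house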


> K12 education spending in the US has increased by 2-3X per student per year since 1960.

Uh what? That's not even close to true...


I'm not familiar with the statistics, and may be misreading them, but for what I can find with a quick search, 2-3X seems about right, possibly even low.

Total expenditure per pupil in average daily attendance in constant 2016 dollars:

  1959-60: $3,890
  2014-15: $14,013
  $14,013 / $3,890 ≈ 3.6x
https://nces.ed.gov/programs/digest/d17/tables/dt17_236.55.a...

Separately, here's a link claiming a 3x increase for Nevada in particular: https://www.npri.org/issues/publication/nevada-has-nearly-tr...

And here's an article claiming a 2x increase since 1970: https://www.cato.org/blog/public-school-spending-theres-char...

What do you think the numbers are instead?

More meta, why do you post a harsh correction like this without offering any sourcing or evidence? If your goal was to provoke someone else into doing some research, I guess it worked, but this seems rude. What's your model for how others will react to your comment?


Maybe it’s just poor phrasing? Because:

> has increased by 2-3X per student per year

reads to me like it’s saying that every year, the cost is 2-3x higher than it was the year before. Which is obviously impossible.


He must have meant "spending per student per year has increased by 2-3X".


> With Visual Basic, you can readily write a quick script to calculate some calendar analytics with Outlook. To do the same with Google Calendar is a very laborious chore.

Google Calendar has a good API. The setup friction adds a notch of complexity relative to a VBA script, but I wouldn't call it "very laborious".
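
For what it's worth, here is a minimal sketch of such calendar analytics with the official google-api-python-client (assuming you've already done the OAuth credential dance, which is exactly the setup friction meant above):

    from collections import Counter
    from googleapiclient.discovery import build  # pip install google-api-python-client

    def meetings_by_organizer(creds, time_min, time_max):
        service = build("calendar", "v3", credentials=creds)
        events = service.events().list(
            calendarId="primary",
            timeMin=time_min,   # RFC3339, e.g. "2018-09-01T00:00:00Z"
            timeMax=time_max,
            singleEvents=True,  # expand recurring events
            orderBy="startTime",
        ).execute().get("items", [])
        return Counter(e.get("organizer", {}).get("email", "unknown")
                       for e in events)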


Regarding the BART delay, does anyone know if there is a way to help them out? Taking a year (or six months, or even three months) to get some switches is mind-boggling to me. I've seen a similar situation before at a tech company that was resolved over a weekend with a couple of phone calls.


Some of those questions already seem to contain assumptions about the answers.

> Why are certain things getting so much more expensive? [...] How much of the cost growth is unmeasured improvement in quality and how much is growing inefficiency?

Are there other possible causes as well? If the cost growth were 100% due to improvements in quality, would it stop being a problem worth thinking about?

> How do we help more experimental cities get started? [...] Hong Kong, Singapore, Dubai, and others, have improved the lives of millions of people and appear much more contingent than inevitable.

To my knowledge (I have no citations though), the improvement meant that millions of people were lifted from far-below-western to only-slightly-below-western living standards. Impressive, yes, but how applicable is that to societies already at our living standard? Also, are things like workplace safety or environmental effects factored into the "living standard" here?

> What's the successor to the book? And how could books be improved? [...] How can we help authors understand how well their work is doing in practice? (Which parts are readers confused by or stumbling over or skipping?)

Is this a thing we want? Would the benefits of this outweigh the costs of setting up the necessary tracking infrastructure?

> Being limited in our years on the earth, how can we incentivize brevity?

Is brevity a thing we should universally incentivize?

As a side note, what's with the humblebragging? If I read Wikipedia correctly, this guy is the co-founder and current CEO of Stripe and has a billion in personal net worth. His personal opinions and decisions influence the rules for a significant share of money transfers around the world.

Yet the website and bio read as if he were just a particularly engaged intern at Stripe.

If this is the meritocracy in action, I can see why so many people have problems with it.


> To my knowledge ... the improvement meant that millions of people were lifted from far-below-western to only-slightly-below-western living standards.

Hong Kong and Singapore are substantially above Western living standards. Singapore has a PPP-adjusted GDP per capita of $90K/year, which is more than double the UK. Compared to the US, SG has better education, better health outcomes, radically lower crime and imprisonment, lower unemployment, higher incomes, and incomparably better infrastructure, all while spending only about 17% of GDP on government. Similar statements are true for Hong Kong, though it is no longer such a radical outlier, since it is under the control of Beijing.

https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(PPP)...

https://www.hoover.org/research/hong-kong-experiment

https://en.wikipedia.org/wiki/List_of_countries_by_life_expe...


> Why are certain things getting so much more expensive?

> Why do there seem to be more examples of rapidly-completed major projects in the past than the present?

These two questions, IMHO, are interlinked. Projects are slower because things are more expensive in general. I think there are some answers to be found as well in the populist uprisings we've seen in the US and in other parts of the world. Here are some random thoughts, all related to these questions:

- We are worse off today than our parents were in the '60s and '70s, in terms of purchasing power and quality of life. See the trend of 20- and 30-somethings staying or moving in with their parents today. How easy is it for someone in this age group to buy a house in the USA versus back then?

- Wage disparity has grown immensely: the rich get richer; the poor move sideways or get poorer. Capitalism, though massively positive for humanity, leads to social unrest if unchecked.

- The US government was "richer" back then - or at least had more power to engage in bold new projects. See the American New Deal (WPA). NASA. Today, space exploration seems relegated to the whims of the new gilded-age billionaires (Musk, Branson, Bezos).

- The current administration will continue this shift of power away from government into the hands of the elite. It's not the government's job to dole out food stamps; let communities or society decide how to help the poor.

- The author cites the rise of healthcare costs... see the unfettered-capitalism comment above. No government checks on drug prices leads to silly games with insurance. Example: a family member was once medevaced by helicopter for a 20-minute ride at a cost of $65k, but insurance only paid $5k and we ended up paying $1k of that. They're fine now, but why even propose a $65k price tag if you're going to be OK with a 90% price cut? It's a game of imbalances between people who have power and those who do not, and of propensity to pay. It needs strong regulation to fix.

- A similar point to the author's: the Second Avenue Subway line has the most expensive cost per mile of any subway in the world to date. Why? See the good NYT article on the answer: labor unions. Hiring 30 people for a task that only needs 10. Why? Bad allocation of resources, since you can't hire anyone else. This is an attempt at thwarting unfettered capitalism. On the other hand, you end up with cheap day laborers (slaves) like the poor migrants who built Dubai. There's got to be a balance in between. Government needs to step in.


I think one interesting observation to make here is to think about how one would group all these questions into a meta-question. What's the common thread among them?

As far as I can tell, it's 'why aren't things better?'

The answer is simple: because you don't want it to be better, really. If you simplify the question down to 'why are Americans fat?', it'll become much clearer. We all already know the answer, most just want it to be some other, more pleasant explanation - hence never-ending fad diets. We want to believe, more than we want to face reality oftentimes :)


> Why is US GDP growth so weirdly constant?

I'm not sure about this, but I get the impression that semilog plots of exponentially growing things often look deceptively linear, and hence that the best-fit exponential curve often looks like a better fit than it is.

The claim here is that the actual rate of growth has been constant over time. What is shown is a graph of GDP growth with an exponential curve overlaid whose rate-of-growth parameter has been fit to the data. But perhaps if you had stopped the analysis at different times in the past, you would have gotten similarly linear-looking graphs, but with very different best-fit rate-of-growth parameters. What I might like to see instead is a graph showing, for each year in the past, the value of the best-fit rate-of-growth parameter if you looked only at a fixed-size moving window of data ending at that year. If the size of the moving window is one year, it is no surprise that GDP is all over the map. But even when the window is 10 years, charts of GDP growth per decade (from a Google image search) show more substantial variation:

http://www.leftbusinessobserver.com/GDP-per-cap-by-dec.jpg http://www.massline.org/Dictionary/Photos/G/GDP-US-Cumulativ...

I expect that the variation would be even more pronounced if you made a chart that showed, for each year, the growth over the preceding 10-year window, instead of those bar charts which only have one bar every 10 years. Also, even 10 years is too short; what I'd really like to see is charts of average GDP growth over 30 years, but I don't have time to compute that (a rough sketch of how one would is at the end of this comment).

In addition, there is some evidence that very high rates of GDP growth are only possible when a country is less wealthy/developed (see e.g. figure 4.4 from https://www.econ.nyu.edu/user/debraj/Courses/GrDev17Warwick/... ; note that it appears to be possible for less wealthy/developed countries to have either low or high GDP growth, whereas wealthy/developed countries appear to only be capable of low GDP growth; this could be explained by a 'diminishing returns' argument). If true, this would imply that over very long timescales, the rate of growth would decrease.

My "analysis" here is rather hacky -- what is the proper statistical way to analyze this? What is the proper statistical test to see if long-term growth rates are constant?
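
Here is a rough Python sketch of that moving-window fit (assuming you have annual real GDP as a pandas Series indexed by year; this is eyeballing, not the formal statistical test asked about):

    import numpy as np
    import pandas as pd

    def rolling_growth_rate(gdp: pd.Series, window: int = 30) -> pd.Series:
        """Best-fit exponential growth rate over each trailing window."""
        log_gdp = np.log(gdp.to_numpy(dtype=float))
        years = gdp.index.to_numpy(dtype=float)
        rates = {}
        for end in range(window, len(gdp) + 1):
            t = years[end - window:end]
            y = log_gdp[end - window:end]
            slope, _ = np.polyfit(t, y, 1)  # slope of log(GDP) = growth rate
            rates[int(years[end - 1])] = slope
        return pd.Series(rates)

    # If long-run growth really is constant, this series should be nearly
    # flat; persistent drift would undercut the single fitted curve.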


> Why is US GDP growth so weirdly constant?

It's just a made-up output with a wildly manipulated function. This is not unique to the USA.


From the article: “K12 education spending in the US has increased by 2-3X per student per year since 1960.”

That can’t possibly be correct.


On a tangential note, there is an Italian proverb about questions and answers.


For those of us not well-versed on Italian proverbs, would you tell us what that proverb says?


It says: "Un pazzo può fare più domande di quante sette uomini saggi possano rispondere" (a fool can ask more questions than seven wise men can answer).


As for why health care and education are getting so expensive...


> Why are there so many successful startups in Stockholm?

1) Work ethic: https://en.wikipedia.org/wiki/Religion_in_Sweden

2) 33 sunshine hours in December: https://en.wikipedia.org/wiki/Stockholm#Climate

> Why is US GDP growth so weirdly constant?

http://economistsview.typepad.com/economistsview/2011/05/doe...

Holy cow.

Great Depression.

Hitler coming to power, WW2... Clearly observable.


For that first question regarding the cost of healthcare and education, cost disease:

https://slatestarcodex.com/2017/02/09/considerations-on-cost...

http://slatestarcodex.com/2017/02/17/highlights-from-the-com...


>> Why is US GDP growth so weirdly constant?

Maybe because the Fed's dual mandate of keeping unemployment low whilst keeping inflation under control has this side effect.

Growth is not related to the total utility value of the work that is produced by the country; it's a highly controlled metric; that's why the financial system is prone to booms and busts; the metrics we use are not in sync with reality and sometimes the discrepancy becomes so obvious that everything crashes.

>> Why are certain things getting so much more expensive?

Probably also related to the Fed's manipulation of the money supply. The way inflation is calculated is by looking at the Consumer Price Index; this index only accounts for prices of products based on their general classifications. It doesn't take into account the fact that the quality of products overall has been steadily declining over the years. If we were to factor in the decline in quality of products over time, we would find that inflation was actually much higher than reported.

>> Why do there seem to be more examples of rapidly-completed major projects in the past than the present?

Engineering practices are getting worse over time and increasingly focus on risk mitigation rather than productivity. Big tech monopolies don't need to be efficient in order to keep increasing profits; they can afford to hire a ton of engineers who have nothing to do, so those engineers become focused on risk mitigation instead of development efficiency.

Small companies mistakenly look up to corporations as role models for their own businesses and start adopting the same inefficient and expensive tools and practices (e.g. serverless). Engineers who are efficient are shunned as bad engineers because their front-end code doesn't have 100% test coverage and they don't use the latest bulky, over-engineered tools that came out of Facebook.

>> How do you ensure an adequate replacement rate in systems that have no natural way to die?

It's good to know that people in positions of power are thinking about these problems.

Blockchain?

>> Why are programming environments still so primitive?

This is related to the question "Why do there seem to be more examples of rapidly-completed major projects in the past than the present?" - The answer is probably similar.

Companies that succeed don't succeed because of good engineering practices and tooling; they succeed for other, completely unrelated reasons (business connections, funding, etc.), but they become role models in the developer community. A successful company's past technical decisions are treated as some kind of recipe for success, even though many of those decisions were actually pretty terrible, made in a rush to meet the challenges of hockey-stick growth curves.

Tooling and developer efficiency haven't mattered, because other factors like access to business connections and funding have been so much more important in terms of achieving successful outcomes.

Real technical innovation can't hold a candle to social networking.


These really are the questions of our time. I wonder if we can answer them.


There needs to be a club of people focusing on these sorts of questions consistently. There are only so many Gwerns and Scott Alexanders and Dredmorbiuses out there. There really are not that many people out there thinking about and discussing the Big Ideas and you keep running into the same ones over time. I fear many club members are lost in the blogosphere and near impossible to find with Google.

Some possible answers:

> Spending on healthcare
> cost of college
> construction costs
> childcare costs

There are many complex reasons why costs can rise in any area, but the common theme between them is something you already know: the propaganda machine told everybody that university degrees meant belonging to the middle-class club with accompanying benefits, aka house ownership, a high-status mate, healthy income, respect.

Turns out this is a bad model. There are diminishing returns not only for college degrees but for that knowledge itself, relative to what other parts of society do. The fuzzier and understudied/underappreciated areas of construction skill and caring (including healthcare; recall Robin Hanson's contention that 50% of healthcare costs are just people wanting to feel supported) have been deprived of talent, and that is what is driving up costs across the board: supply and demand.

> Why do there seem to be more examples of rapidly-completed major projects in the past than the present?

My opinion is the same as Thiel's, I think: our ability to do complex coordination is dropping. The reasons why? Information is an ecology. Cal Newport will tell us distraction or context switching is a sin if you want complex coordination, and that is right. But I also believe in The Ladybird Book Theory, which is that our education, taken broadly, has given us a false impression of complexity instead of thinking from first principles (notice how the old Ladybird books, the kind of thing kids used to read, are written), which is something Musk puts a lot of weight on. The Elon/Cal thesis might be: "leave out some stuff; focus on key principles intensely". It sounds trite but I think it is right.

From an ROI point of view, not very many people in society need to be very, very good at driving projects, so focusing on making super-coordinators (a new form of education for those selected for Big Tasks) should pay off immediately.

> Why is US GDP growth so weirdly constant?

My guess is energy.

> How do you ensure an adequate replacement rate in systems that have no natural way to die?

Sunset clauses are one way, but I don't really know, because the main way seems to be just forgetting. I think when our institutions screw up, it can get bad enough that the fix is that, a thousand years later, we've forgotten it was even a problem.

> How do we help more experimental cities get started?

An idea.

You need to build a machine that compacts garbage into a substrate that can be used to form new land offshore. Nearly all prosperous cities are near water and produce garbage. It should be possible to gradually build experimental islands while solving the garbage problem, so it all comes down to the technical detail of designing a machine that makes blocks/substrate out of the garbage. It pays for itself, and building pyramids starts with understanding how to make a single brick.

> Is Bloom's "Two Sigma" phenomenon real? If so, what do we do about it?

Maybe human-to-human communication is weirder than we think. In education you're conventionally thinking about transmitting information and understanding from A to B, but maybe, because our common ancestors spent millions of years in forests and other environments instead of classrooms, we transmit information to each other in ways which sound a bit odd to us. Think of pheromones, our sense of smell, hearing somebody's voice unmediated by electronics, seeing somebody's posture and body language: these could all form metainformation about the information in language that is very helpful in the student/tutor relationship.

Think also of the 'sleeping dictionary' - a person in a couple who learns his or her spouse's native language learns it really fast. It can sound a bit woo-ish but I think it'll be objectively measurable.

> What's the successor to the book? And how could books be improved?

I don't know but I really like Neal Stephenson's The Diamond Age (subtitle: "A Young Lady's Illustrated Primer").

Wikiquote: "At the age of four, Nell receives a stolen copy of an interactive book, Young Lady's Illustrated Primer: a Propædeutic Enchiridion, in which is told the tale of Princess Nell and her various friends, kin, associates, &c., originally intended for the wealthy Neo-Victorian "Equity Lord" Alexander Chung-Sik Finkle-McGraw's granddaughter. The story follows Nell's development under the tutelage of the Primer, and to a lesser degree, the lives of Elizabeth and Fiona, girls who receive similar books. The Primer is intended to steer its reader intellectually toward a more interesting life, as defined by "Equity Lord" Alexander Chung-Sik Finkle-McGraw, and growing up to be an effective member of society. The most important quality to achieving an "interesting life" is deemed to be a subversive attitude towards the status quo. The Primer is designed to react to its owner's environment and teach them what they need to know to survive and develop."

> Could there be more good blogs?

I've been moping about this recently too. I hope this was not a passing trend. The main social networks seem sterile.


> Why is US GDP growth so weirdly constant?

It's not hard to make something look consistent when you are constantly changing the algorithm you use, and substantial changes are made every few years. Comparing GDP from year to year when different formulas were used to calculate it (even if the underlying data was all accurate) is comparing apples and oranges. For example, a 2013 change in the GDP formula gave an instant 3% boost to GDP.

https://seekingalpha.com/article/1368001-u-s-governments-new...


>> Why is US GDP growth so weirdly constant?

> My guess is energy.

Your guess is correct. Energy is what drives the economy. We've been going from the least concentrated to the most concentrated energy sources: human/animal muscle, to wind and river flow, to coal, oil, and nuclear, with oil being the best tradeoff between readiness to use and energy concentration. GDP is what people are paid, but the people have magical machines that run on energy.

Any bell curve looks exponential in the beginning, but when you run out of that concentrated energy you're left with less concentrated energy unable to sustain the same level of growth.


> There needs to be a club of people focusing on these sorts of questions consistently.

They're called "universities".


I can't believe this is still a middle-class meme; some ideas die hard.

Universities have been sliding downhill for a long time, and they know why, because they operate the nightclub protocol for entry.

Geography and formal education are not what links the people I mentioned. Correlation, causation, you know the mantra.


I didn't say anything about formal education or credentialling.

The original and still key purpose of universities is to gather academics together.

Teaching was a by-product of that origin. Would-be students flocked to the cities where scholars had clustered and eventually, folks started to organise.

The word "university" comes from Latin universitas, meaning a corporation or guild. That reflects its origin as a club for academics.

Literally people whose reason for being at university is to be in a club of people focusing on these sorts of questions consistently.

The closest alternative are the many and various think-tanks, most of which are intertwined with academia.


[flagged]


I remember when I was 18, back in 1998, installing a specially licensed copy of this thing called "Visual C++" and doing just that, i.e. the thing they call "debugging".


As I understand his point, it's about not having to pause the program. This seems like an actually quite useful feature, and probably not that hard to implement in, say, Visual Studio (possibly requiring a manual trigger for efficiency). So the question is, why has it not happened in 20 years? Why are debuggers at basically the same level of sophistication as 20 years ago, when there's seemingly a lot of low-hanging fruit? I think he has a point, and adding to that, how come so many environments aren't even up to par with the state of the art of 20 years ago? Many people don't even use a proper debugger, deferring to printf debugging and at most command-line GDB in exceptional cases, due to bad environment support.


With VS and .NET you can pause the program, inspect variables, change variables, execute arbitrary code using a REPL, change threads, view memory usage, edit code and continue the program (if properly configured), debug into framework libraries and even third party libraries with a bit of hackery. You used to be able to debug JS - not sure if you can still do this as I haven't tried for ages since the Chrome debugger is pretty good.

You can also use the same tools to debug a crash dump although often a lot of info is lost, but it can be useful.

Admittedly nothing much has changed in years and it would be good to see more.

One thing I would like to see is a time travelling debugger akin to what is provided with Elm, but Elm apps have state concentrated in one place so it is easier to implement there than for a .NET program probably.

Outside of .NET I am not sure of the quality of debugging tools. Jetbrains created something pretty good for Ruby which I used once, it was quite nice. But I am sure there are languages without much support and it would be frustrating to use them.

Haskell is an interesting example because you don't often need to debug, because of the purity you would more likely run additional unit tests / property-based tests on the functions to find the problem. Having said that I've only made small Haskell apps, nothing in the day job.


A perfectly reasonable question probing at limits of software engineering.


[flagged]


At the level of those questions, it's not that difficult. Read books (not just summaries of seminal works but the seminal works themselves, and not just non-fiction) and follow some quality discussion and news sources for a decade or so. A background in a science, history, etc., also helps.

Also start a blog or write small essays with your observations (even if they're trite at first -- they'll get better over time, especially if you incorporate comments from others, and learn to expand your initial points of view).


I suspect you misread the comment you responded to...


Appears so. The good thing is, judging from the tone and the lack of any concrete argument, the parent has achieved the first part of what he asks for anyway!


We've already asked you to stop posting unsubstantive comments to HN. This one breaks multiple site guidelines in addition, such as this one:

Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

If you'd please review https://news.ycombinator.com/newsguidelines.html and follow the rules when posting here, we'd appreciate it.


Nice clickbait headline


Can the title of "Questions" be changed to something a bit more useful?


What would you suggest?


A bit silly to ask questions and not have a discussion section below your post ...


In order:

Inflation, regulation/globalism, corrupt system, wait, don’t, up to us all, good question, good question, remove politics, no, books, papers, don’t try, no, false laziness.


Things are not this simple. I think these questions have more complicated answers than you think.


Rising costs: inflation. Assume a rate of a little more than 3.5% per year and you're at 9x after 60 years. That's pretty much the same reason college costs rose.
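
A quick sanity check on that compounding claim:

    # What constant annual rate multiplies prices 9x over 60 years?
    rate = 9 ** (1 / 60) - 1
    print(f"{rate:.2%}")  # ~3.73% per year, i.e. "a little more than 3.5%"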

In fact, inflation is necessary in our current economic system. Imagine there were no inflation. Then very wealthy people could put all their money in a savings account, live off the interest, and still accumulate more money. Thus the economy would eventually stagnate for lack of investment. Inflation kind of stops that, and the central bank can control it by setting the interest rates for credit.

There are attempts to make inflation unnecessary, like currency that loses value over time (where you need to frequently put stamps on it). But that's very far from any widely used currency.



