
Unfortunately, none of these are responsible for the startup delay. Since version 1.18, effectively ~90% of the startup duration is spent starting up WinUI and having it draw the tab bar and window frame. It still needs a second to start. If it still used GDI like Windows NT did, it would start in well under 100ms, even on an extremely old CPU.

Fixing this situation is essentially impossible, because it requires rewriting almost everything that modern Windows is built on. Someone else in this thread said you couldn't sell 4 quarters' worth of work to fix this, but the reality is that it requires infinite quarters, because it means throwing away the last 10 years of Windows shell and UI work, and that will never happen. You could paper over it by applying performance spot fixes here and there, but it will never get back to how it could have been. At a minimum, you'd essentially have to throw away WinRT, which has an almost viral negative impact on performance. Never before have high-latency yet synchronous cross-process RPCs been this prevalent, and everything is a heap-allocated object, even within the same binary. It's JuniorFootgunRT.


> none of these are responsible for the startup delay

> effectively ~90% of the startup duration is spent starting up WinUI and having it draw the tab bar and window frame

I listed "Display scaling support", "Tabbed interface", and "transparency". Is none of that related to WinUI and drawing the tab bar?


Yeah, you're right, they're related to WinUI. But what I meant is that such features aren't inherently expensive; they're just made expensive by the choice of UI framework.

Display scaling is very fast in GDI apps and has no impact on launch time, a tab bar is essentially just an array of buttons (minimal impact on launch time?), and transparency is a virtually cost-free feature coming from DWM. I once wrote a WinUI lookalike using its underlying technology (Direct2D and DirectComposition) directly, and the result is an application that starts up within ~10ms of CPU time on my laptop, quite unlike the 450ms I'm seeing for WinUI. That is including UIA, localization, and auto-layout support.
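
For anyone curious, here's a minimal sketch of what "using Direct2D and DirectComposition directly" looks like (a rough illustration, not the lookalike's actual code; error handling and the drawing itself are omitted):

    // Bind a Direct2D device and a DirectComposition visual tree to a
    // plain Win32 window. A real app would keep all these objects alive.
    #include <d3d11.h>
    #include <d2d1_1.h>
    #include <dcomp.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    void InitComposition(HWND hwnd) {
        // Both Direct2D and DirectComposition sit on top of a D3D11 device.
        ComPtr<ID3D11Device> d3d;
        D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                          D3D11_CREATE_DEVICE_BGRA_SUPPORT, nullptr, 0,
                          D3D11_SDK_VERSION, &d3d, nullptr, nullptr);

        ComPtr<IDXGIDevice> dxgi;
        d3d.As(&dxgi);

        // Direct2D draws the actual content (rectangles, DirectWrite text).
        ComPtr<ID2D1Device> d2d;
        D2D1CreateDevice(dxgi.Get(), nullptr, &d2d);

        // DirectComposition hands the visual tree straight to DWM, which
        // is also where the practically free transparency comes from.
        ComPtr<IDCompositionDevice> dcomp;
        DCompositionCreateDevice(dxgi.Get(), IID_PPV_ARGS(&dcomp));

        ComPtr<IDCompositionTarget> target;
        dcomp->CreateTargetForHwnd(hwnd, TRUE, &target);

        ComPtr<IDCompositionVisual> visual;
        dcomp->CreateVisual(&visual);
        target->SetRoot(visual.Get());
        dcomp->Commit();
    }

There's no XAML parser, no dependency-property system, and no framework startup in that path, which is roughly where the ~10ms vs. ~450ms difference comes from.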


> They're equivalent except that asynchronous callbacks is what actually happens [...]

Neither stackful nor stackless coroutines work like this in practice. The former suspends coroutines by saving and restoring the CPU state (and stack), and the latter compiles down to state machines, as mentioned in the article. Coroutines are functionally not equivalent to callbacks at all.


Which is exactly what happens when you use asynchronous callbacks, except that you have to store the state explicitly. Stackless coroutines even typically compile to (or are defined as equivalent to) callback-based code.
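
As a toy illustration (async_read and the frame type below are made up), a coroutine along the lines of "int a = co_await read(); int b = co_await read(); co_return a + b;" boils down to roughly this kind of hand-written, callback-driven state machine:

    #include <functional>
    #include <iostream>
    #include <memory>

    // A toy async "read" that invokes its callback immediately; a real one
    // would complete later on an event loop or IO completion port.
    void async_read(std::function<void(int)> cb) { cb(42); }

    // Hand-written equivalent of the two-await coroutine: locals that live
    // across a suspension point become fields, and each suspension point
    // becomes a state number.
    struct ReadTwice : std::enable_shared_from_this<ReadTwice> {
        int state = 0;
        int a = 0;  // survives across the second await
        std::function<void(int)> done;

        void resume(int value) {
            auto self = shared_from_this();
            switch (state) {
            case 0:  // entry: issue the first read
                state = 1;
                async_read([self](int v) { self->resume(v); });
                return;
            case 1:  // first read done: stash it, issue the second read
                a = value;
                state = 2;
                async_read([self](int v) { self->resume(v); });
                return;
            case 2:  // second read done: "co_return a + b"
                done(a + value);
                return;
            }
        }
    };

    int main() {
        auto op = std::make_shared<ReadTwice>();
        op->done = [](int sum) { std::cout << sum << '\n'; };  // prints 84
        op->resume(0);  // kick off; the argument is ignored in state 0
    }

The compiler performs essentially this transformation mechanically; the ergonomic difference is that you don't have to write the frame and the state numbers by hand.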


Stackless coroutines are literally the same thing.

Stackful coroutines are just a poor man's threads.


Hi! I'm the person who wrote that blog post. If you have any questions whatsoever, please let me know!

English is not my native language, and "The solution is trivial" was meant sarcastically. The solution isn't trivial at all, nor is it complete, because solving the corner cases is complex.

I'd also like to mention ahead of time that DirectWrite/Direct2D is very fast at drawing western (Latin) text without coloring (>2000 FPS). In other words, in most situations Casey's suggestion doesn't help much, but it does address his termbench issue. At the time we assumed we were using the API incorrectly; after all, Direct2D already had everything Casey suggested we should implement, and it ran very fast outside of termbench. The new solution is largely identical to regular Direct2D, with the only major difference being that ClearType/grayscale gamma correction is run on the GPU instead.
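
For context, the underlying idea (the one other terminals use, as mentioned below) is a glyph atlas: rasterize each glyph once, remember where it landed in a texture, and afterwards only ever copy cached tiles. A toy sketch of the cache part (names made up, not the actual Windows Terminal code):

    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <unordered_map>

    struct AtlasSlot { int x, y, w, h; };  // where the glyph lives in the atlas texture

    struct GlyphKey {
        std::uint32_t fontId;
        std::uint32_t glyphId;
        bool operator==(const GlyphKey&) const = default;
    };

    struct GlyphKeyHash {
        std::size_t operator()(const GlyphKey& k) const {
            return std::hash<std::uint64_t>()((std::uint64_t(k.fontId) << 32) | k.glyphId);
        }
    };

    struct GlyphAtlas {
        std::unordered_map<GlyphKey, AtlasSlot, GlyphKeyHash> cache;

        AtlasSlot get(const GlyphKey& key) {
            if (auto it = cache.find(key); it != cache.end())
                return it->second;            // hot path: no rasterization at all
            AtlasSlot slot = rasterize(key);  // cold path: run the font rasterizer once
            cache.emplace(key, slot);
            return slot;
        }

        AtlasSlot rasterize(const GlyphKey&) {
            // A real renderer would ask DirectWrite/Direct2D (or FreeType,
            // in the OpenGL terminals) to draw the glyph into the next free
            // region of the atlas texture and return that region.
            return {};
        }
    };

Per frame, drawing a line of text then reduces to a batch of textured-quad copies from the atlas, which is what makes this class of renderer fast.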

Casey's suggestion wasn't unique to me, as a significant number of other terminals do it exactly that way in OpenGL, something I was quite familiar with already. Originally I credited Alacritty in the blog, because that's how I knew how to do it, but I removed it to keep the article as succinct as possible.

Update: I'm very sorry for publishing the blog post without giving proper credit where it is due, and it's been updated since. I'm sorry for my continued, overly defensive behavior.


Removing a credit or a reference is probably the worst way to add succinctness to an article.

It's literally saying "the contributions of the people whose work I based my work on are less important than the least useful sentence describing my work".


Yeah, Leonard inadvertently gave away the game with that comment. Removing credit from a blogpost because you want it to be "succinct" is not justifiable. I think he's trying hard to pretend he didn't know any better; that excuse seems increasingly thin given the developments of this thread.

He just didn't care about other people.

Edit: it looks like Dustin from the same team at Microsoft shares the same hostility https://news.ycombinator.com/item?id=31284857


I'm not sure editing the article would be the proper thing to do at this point. Someone else suggested editing it in 1-2 weeks for the benefit of possible future readers.

I can add back the sentence where I credit Alacritty for the general, underlying algorithm/idea then, because that's where I heard about it first. There isn't really any other performant way to implement GPU-accelerated terminals, so I don't think hearing about it again from Casey changed my perception of what the only alternative to Direct2D would be, in case Direct2D turned out to be fundamentally flawed for our purpose, which it did.


Sometimes logic isn't enough. YOU caused a huge amount of anger among the community by being condescending, and insisting "I learned this from alacritty, not the person I was condescending towards" isn't going to make anyone less angry. I'm just telling it like it is here. Humility is, unfortunately, what you need, and you can't fake humility.


> I'm not sure editing the article would be the proper thing to do at this point. Someone else suggested editing it in 1-2 weeks for the benefit of possible future readers.

There's nothing improper with adding "Edit: credit to the <name of the person> who suggested the solution. While the solution seems trivial, there are certain technical challenges to overcome".

> because that's where I heard about it first.

No. That's not where you heard it first. Otherwise you wouldn't have written "this needs doctoral research on performance" in the original issue.

Edit: I misattributed this quote, you didn't say it.

> I don't think hearing about it again from Casey

There's an actual, verifiable screenshot of your reaction to his words. That was not an "again".


> There's nothing improper with adding

The point of the waiting-to-edit suggestion is to avoid a bunch of reactionary edits during the most heated period. Immediately implementing every edit demanded of you when different people are demanding different edits all at once seems improper to me, and waiting a short period seems a level-headed approach. It also avoids the look of just trying to cover everything up the quickest way possible instead of focusing on well-thought-out, sincere reactions.

> No. That's not where you heard it first. Otherwise you wouldn't have written "this needs doctoral research on performance" in the original issue.

You've attributed the quote to the wrong person, it was not lhecker who said that.

> There's an actual, verifiable screenshot of your reaction to his words. That was not an "again".

There is nothing in lhecker's response that suggests he hadn't heard of the alternative. His comment argued it should all be possible to handle in DirectWrite without using the alternative, not that the alternative hadn't been thought of.


> You've attributed the quote to the wrong person, it was not lhecker who said that.

Ah, true. I definitely misattributed this statement. For this I apologize.

> His comment argued it should all be able to handled in DirectWrite without using the alternative not that the alternative hadn't been thought of.

His statement was "it isn't worth it" and "we're already doing it via the framework we're using", only to state a year later [1] "We actually took the same approach Casey suggested" while insisting that "it doesn't help much".

If he'd heard about this approach before and seen it in, say, Alacritty, he has a very shitty way of showing it.

[1] https://news.ycombinator.com/item?id=31285723


I'm not sure I'm much of one for untangling interpretations of chunks taken from comments spread over literal years, but I think the distinction missing here is that the WT team previously thought they could get away with using DirectWrite's built-in approach to handle this precise issue. It was never that they didn't know there was an alternative to DirectWrite, just that they thought DirectWrite had a way to do it that would work just as well. They then found out this wasn't the case, completely backpedaled on which was the right architectural way to achieve the goals, and wrote this blog post about having implemented it the other way. That history is what the full blog post tries to detail.

It might be a good olive branch to extend thanks to cmuratori for pushing that it was the right architectural choice, but it was never about being the source of the idea. In either case, it probably is a good idea to credit the other terminals that had already proven the implementation could work before any of this even started.


I totally agree with you


https://www.youtube.com/watch?v=nDO24U3hMkU

(it's a bit long-winded but the general message is sound)


The problem that the poster has is that his input is basically being dismissed.

He was too 'overconfident' with his 'opinion'. But it is exactly what has been implemented.

It's basically this scenario:

> "You don't understand, it's very very hard, next to impossible, to create a website where you can leave an email address. i think you should reread all the RFCs and a bunch of programming books. You have no right to say anything, WE are Microsoft, not you, so you have NOOOOOOOOOO idea"

> 1 year later: "We created a website where you can leave an email address. It was actually not that difficult. But look, WE are Microsoft, so let us explain how smart we were to figure it out".


That's not how I interpreted the github thread and blog post. It seemed like the MS team was saying "yes, we know this is one way to solve it, but it's incredibly difficult", and one year later they post "we finally did it and this is how difficult it was". As the author of the blog post mentioned here, the sentence about how "trivial" it was was actually sarcasm, and the rest of the blog post highlights the technical challenges.


That's not how I interpreted it... there's way too much elitism in tech. And also exaggerations about how easy or difficult things are.

"AI will solve things" -> they have no clue, and just trust it's gonna work itself.

"It's difficult to add a field to that form, it'll take 2 weeks" -> usually bs.

I read the thread, and I fully agree that it should be a weekend of work. So yeah, it's just other devs mismanaging their time, and therefore claiming that things take way longer.

The issue created 2 days afterwards (https://github.com/microsoft/terminal/issues/10461) starts with the previously suggested solution, which supposedly didn't make any sense because the original poster "didn't understand how DirectWrite works, and yadayadayada". The solution is as trivial as "but just optimize our render pass instead".

No sarcasm there.

The whole sarcasm is just covering up their own ass

1 day later, implementation starts. 2 days after that, it was implemented.

> Yep! And I'm entirely on it!

So eager to work on an issue created by himself instead of someone else!

> Terminal cannot turn away valuable performance work simply on ideological grounds.

The "Atlas-Engine" label hasn't been added to the original issue either.

It just stinks like people using politics to improve their HR performance review.


I find it weird that github user cmuratori is willing to write in so much detail about how to do it and how it's only a weekend of work, but doesn't submit a pull request themself. To me, it's rather easy to make requests and speculate about different approaches, but actually implementing the solution can be much more expensive.

Not to mention, as some of the other HackerNews threads mention [1], they very well could have already gotten cmuratori's idea from another person (in fact they say they already heard it from alacritty), and it could have already been an idea in consideration but they hadn't fully explored it yet because they chose a different path to try first. The github thread you linked seems to indicate the same thing. For cmuratori to just write a few github comments and then demand credit seems a bit much.

Also note that the blog post has been updated to mention cmuratori, although that could just be to appease the outcry.

[1]: https://news.ycombinator.com/item?id=31287644



Your team needs to get the story straight. The other person in your team indicated it wasn't sarcasm.

I can't imagine how badly your team would treat team members with less authority when your team treats its own users like this.

Being a non-native speaker is really no excuse here.


The disconnect here is pretty clear: there's one side saying "Do this optimization, it's trivial" and the other side going "We'd have to write an entire text renderer, that's far from trivial", and the result was this blog post saying "Hey, we did this thing and it was trivial! All we had to do was write an entire text renderer first." So in a straightforward sense, yes, the concept is trivial, but in a more real sense it's not, since the straightforward idea is only straightforward once you've done something difficult.


okay. so. a bit offtopic, but ... how come it's 2022 and Microsoft, with all its glory and industry leading best practices and trillions of microdollars of Azure-colored dollars valuation ... doesn't have a library for this? you know, the company that makes the thing, the suite that is used worldwide, underground, spacestationside, all the 365 days of the year.

I mean it's no wonder the introduction of the computer doesn't show up on the GDP charts when the industry is in fucking shambles and isn't even ashamed for it ... https://www.youtube.com/watch?v=5IUj1EZwpJY&t=35m40s

:|


No disconnect whatsoever. Just someone who wants to earn a promotion by writing a blog post, and dismissing someone else's input.

Politics and ego, that's all.


Why did you join this person's discord under a pseudonym?

Edit: I ask because if you're taking a pessimistic view of this, here is the rough order of occurrences:

1. You insulted Casey, knowingly or not, intentionally or not, but in public.

2. You join his discord, under a fake name

3. You write this blog post, without credit to Casey (other than "a community member").

I hope you can see how someone could very easily take this as something more sinister than it probably was.


1) When did this person insult him? Note that there was more than one microsoft person in that issue.

2) I don't get why so many people talk about this as if it were some horrible thing. I don't use my real name on discord the vast majority of the time, I don't see why you expect others to do so. The person in question admitted who they were when it was mentioned and later apologized for not stating it when they first joined.

3) I'm personally more mad about it not crediting the other terminals that have that kind of approach like wezterm, or whatever prior art they used.

My understanding is they thought DirectWrite did caching internally, which made the suggestion unnecessary, but it turned out that wasn't working quite as they thought it was. I didn't think the terminal people were combative at all until Casey himself was.

Ultimately, I think a big part of the outrage is just people assuming the worst interpretation of anything Microsoft.


I don't think your name, pc86, is a real name either. That's the life of the internet. So much for pseudonyms. So if the very first statement is already that questionable, how can I believe the rest of what you write has any truth?


That this may or may not be pseudonymous is mostly a coincidence. I've talked about current (at the time) and past employers, I've posted my email in comments and my profile before, my LinkedIn, used my first and last name separately, etc. This is not an anonymous profile by virtue of any effort on my part. I think just about every account - including my discord - is associated with my real identity and uses my real name.

The GP seems to use similar naming conventions for all their accounts. GitHub, HN, Twitter are all "lhecker." It's a pretty standard naming convention - I'm pcopley on a ton of services. So I can see how it could raise an eyebrow when you get an aberration of that norm or habit or brand or whatever you want to call it. The simple (and honestly maybe even likely) explanation is just "it's not an anonymous account, I just have a different username on discord." But if you're inclined to pick up a pitchfork every now and then, it's easy to take that as some sort of nefarious act.


Besides what pc86 wrote: there’s a difference between ‘not a real name’ and ‘a fake name’. A dog is not a fake cat. Being a fake entails pretending to be the real thing, which ‘pc86’ - as should hopefully go without saying - is not.


This is way above my pay grade.

Just don't take anything personally. Being defensive / thinking you know best is everywhere in this industry, at all levels and organizations.

Being humble in general is hard, being humble in code reviews is harder, being humble to Internet comments in any forum… sometimes impossible. Then repeat that with two super genius experts - eek.

You'll find everyone is usually trying their best if you ever meet face to face (except server admins, they are stone-cold psychopaths, stay away).

Would bet you both are helpful and open minded at the end of the day.


Oh yeah, for sure. I can't tell you how much I regret being that defensive about something that benign. This was certainly a life lesson. I've apologized to Casey in person before (well, to my best ability) and I'll apologize again if given the opportunity to do so.

I just wish people wouldn't say I'm taking credit, when it's an old, widely used idea and when I spent such an incredible amount of time reading and understanding DirectWrite/Direct2D's code in order to ship this renderer in the state it is.


It's not that you were taking credit, or that you were somehow wrong, it's that you initially put down Casey's idea so vehemently and so publicly. I think the lesson is, if you're going to step into the public arena, be humble, or be very, very right.


Maybe you should post in that github thread pointing out to the people originally arguing with Casey that the solution is in fact trivial and that he was right all along. It sounds like you and he are both on the same side in this discussion.


This is great advice (except the server admin part :-) !!)

It's sometimes tough to dissociate the problem from the people involved. Hope this will have a positive resolution, as most of the people seem to be experts at what they do.


Most people are just performing the narrative that they use to shape their life in front of you (and the rest of the world). When someone comes at you overwhelmed with a disempowering emotion (e.g. locked up by anger, envy, snark, etc), how they behave has very little to do with who you are or what you did.

The first time this really hit home with me was when I was dating a girl who laughed at inappropriate times (e.g. it was extremely amusing to her how much my ski boots would squish my feet the first few days of the season). It was only after I broke up with her (for unrelated reasons) and saw her go through her own struggles that I learned that her family of origin was horrible and there was no one to comfort her when she would get hurt. Laughing was her way of dealing with the emotions of physical injury because she didn't know how to soothe herself or others.

Anyone who has the misfortune of having a friend or family member get ensnared by the Qovid / Trump / populist zeitgeist can see that most of these people have gone through immense emotional suffering (often involving betrayal by a government agency, big business, or spouse), and are just bereft of the self-awareness to understand what is going on with their emotions.

So yes, don't take anything personal. It almost never is.


>You'll find everyone is usually trying their best if you ever meet face to face (except server admins, they are stone-cold psychopaths, stay away).

The server admin is a reflection of the union of the community of devs, maintainers, designers of their hardware, and users/beneficiaries of their boxen.

If your server admin is a psychopath, you may want to take a hard look at what goes into keeping their boxen ticking along smoothly, and at the demands of those using them.

Sometimes you get to be a peaceful, serene shepherd.

Sometimes, you're trying to keep that last critical link to reinforcements alive under siege from all threats, foreign and domestic, all at the same time.

Occupational hazard I guess.


You need to handle your PR better, dude. Whenever you speak out, always take the chance to acknowledge the help/ideas of others, unless they are your competitors. Make sure your work isn't understated though. Like: "Turns out, many users complained about <...> and even a few shared patches (<link>, <link>). We even had a conversation with Mr. Smith suggesting <..>, so we finally took a good look at it and made a universal solution that covers all the bases".

It's not like you will get less credit for mentioning it - you still delivered the solution and that is what counts. But if you mention other people, it opens a few doors:

* They will like you and will be more likely to share further suggestions or code snippets (that will help you build your promotion package).

* The management will see you as a person who knows how to work with people, so you'll get a higher chance to get into the management track.

* Once you discuss a technical solution with somebody and don't leave them in the dust like an asshole, you get basic mutual trust. If you ever lose your job, they might gladly recommend you to their company, or vice versa.


If you don't mind a comment about your English, I'd suggest that what you wrote seems like it's better described as "tongue in cheek" than "sarcastic". "Sarcastic" can have connotations of meanness or snarkiness whereas "tongue in cheek" suggests good-natured joking, which seems to match the tone of the original post better.


> English is not my native language, and "The solution is trivial" was meant sarcastically. The solution isn't trivial at all, nor is it complete, because solving the corner cases is complex.

I think it's really difficult to convey sarcasm well in text, and what seems obvious to you as a writer might not come across as sarcasm to readers. I certainly didn't understand that as sarcastic right away.

It might be better to address it directly and in a friendly way, like this:

> Like I did, you might be wondering right now why we don't just create our own, much larger lookup table and wrap it around Direct2D. Well, we can’t just tell Direct2D to use our custom cache [...]


Thank you for your suggestion! I believe I can't just edit the article now, since that would be disingenuous, but I'll try to avoid sarcasm in the future and use something like you suggested instead.


You could edit the article in a week or two, once things are calmer (maybe leave the original text in crossed-out form, or a link to the original via archive.org); it might help future readers who don't get there via this controversy.


"Sarcasm" unfortunately has a negative connotation. It makes it sound like you were being spiteful.

In practice, I interpreted the text as "There's a simple solution... but we cannot use it", which is a perfectly fine thing to say, in my opinion.


Perhaps worth considering is that sarcasm doesn't transmit easily over text, and certainly not in contexts where it's not expected. If you want to be clear about it, a common convention is to write /s after your sarcastic statement.

It might also help to acknowledge that Casey's feedback and the interaction with them was constructive and instrumental to this improvement being considered and deployed, even if the team ultimately took another approach than the one that Casey suggested.


We actually took the same approach Casey suggested, at least when it comes to the drawing part, which is what this is about. A lot of terminals implement it that way. We have a long way ahead of us to improve the performance of our text ingestion now, though.

I'm not sure what role Casey's feedback had in this being considered for implementation. His original termbench tool was _incredibly_ valuable for sure, but I'm not sure later discussions changed the outcome of that. If we had figured out a way to solve the issue while continuing to use Direct2D, we would've definitely stuck with Direct2D. Since there was no solution, there was only one other way to solve it, and it's the way many other terminals already do it.


It’s a good idea to have a native-English-speaking editor (and sometimes more than one!) — ideally one who has full situational context — review draft blog posts before they are published, and this is a good example of why. Doubly so if you’re representing a big company.


Just going to add that referring to someone as a community member [of Microsoft] is such a self-aggrandizing term for "someone who reported it on the project's github issues page".


I read it as member of the open source community rather than the Microsoft community.


Did you participate in the molly rocket discord?


> Casey's suggestion wasn't unique to me, as a significant number of other terminals do it exactly that way in OpenGL, something I was quite familiar with already.

In your reply to Casey you made it sound like you were unaware, when you said things like "I'm somewhat surprised that other accelerated terminals aren't already doing this". Is OpenGL not accelerated? What am I missing here?


You got the wrong person, I didn't write that. Once it became obvious that it's the only way to solve the issue and that a different use of Direct2D can't help with this, I implemented a prototype based on my pre-existing knowledge and explained how it works to the team.


Hi! Are you also the person who wrote the first, insulting, reply?


The names of the commenters may not be highlighted, but they are still in the images. If "the first, insulting, reply" refers to the screenshots on Twitter, lhecker's reply would be the second one.


The sarcasm was evident to me, FWIW. I don’t think you have anything to apologize for.


Sarcasm is usually very easy to spot if you assume the best in people. If you let cynicism take over then things said can be interpreted in the worst possible light.


There's no need for highly opinionated discussions. Differences in opinion should be a trigger to find a common understanding, it shouldn't result in defensive behavior. Everyone's just a person trying to help.


Honestly?! Your article is pretty good; here is a great example of how misreading and the Twitter cancel bubble work out :D

Do not take the sarcasm/irony out of your articles, I've enjoyed it.


Seems like the solution was to convert a TrueType font to a bitmap font. I remember doing the same thing for my video games when I was a kid in the '90s; it's good to know I wasn't the only one who found it challenging.


Casey's suggestions were pointless because Direct2D is fast at drawing western text without coloring? Didn't Casey open an issue because the terminal was slow with coloring?

It seems like you managed to optimize Direct2D with your 'glyph cache' pixel shader, but that doesn't mean Casey's suggestions were pointless.


I didn't say they were pointless anywhere. They were anything but.

I did say however: if we had figured out a way to solve the issue while continuing to use Direct2D, we would've definitely stuck with Direct2D. Since there was no solution, there was only one other way to solve it, and it's the way many other terminals already do it.


You didn't use that word, but you did say "in most situations Casey's suggestion doesn't help much", which is probably what your parent post was responding to.

Despite the apologies you or your team have given, you continue to denigrate and downplay. First, Casey's suggestion "doesn't help much"; then, "Casey's suggestion wasn't unique". And so on. Casey suggested a solution, spent time proving it was easy, and you even used the suggested solution to good effect, and you're still trying to downplay Casey's contribution or involvement.

If Casey hadn't kept pushing, it's doubtful you'd even have gotten around to doing this, so some actual credit is warranted, along with perhaps a real apology.


Perhaps leave sarcasm out of technical blog posts, or add a footnote to indicate sarcasm?

I couldn't tell you were being sarcastic, and it seems others couldn't either. I only found out through this comment.

Text is the worst medium for sarcasm since sarcasm is often signaled by vocal inflection or body language, neither of which are present in your post.


Baffles me that people think sarcasm is a good way to communicate on the internet. You are begging to be misunderstood, either accidentally, or maliciously.


It's not really sarcasm (which usually connotes contempt or mockery, which I suspect the blog writer didn't intend). It's irony, and irony has been widely used in written communication since we stopped using writing exclusively for record-keeping.

It can and does work on the internet, provided (i) the writer is prepared to accept that a subset of readers won't "get it"; they'll fail to pick up on contextual clues that signal irony and (ii) you have to be a good enough writer to include those clues so that at least your intended audience knows not to take it literally.

EDIT: To clarify, in this case, irony was a bad idea because it was badly executed. The context that would allow readers to interpret "The solution is trivial" as ironic was only available to people who were privy to the original conversation, while the blog post was intended to be read and understood by a much wider audience who lacked that context.


Irony should not really be used in technical communication at all, though. The goal of technical writing is that as many readers "get it" as possible. Therefore, any rhetorical technique for which you have to "accept that a subset of readers won't 'get it'" is a bad technique for technical writing.


Writing is hard. Writing for a multi-lingual audience is harder. Writing in your non-native tongue is harder still.

> EDIT: To clarify, in this case, irony was a bad idea because it was badly executed.

That was exactly how I felt reading it. Had it been the last statement of a long and complex explanation, it would have landed differently and warranted a chuckle.


imo, sarcasm only works over text if your text can be taken sarcastically and non-sarcastically.


Everyone knows you’re supposed to denote sarcasm with </sarcasm>. </sarcasm>


There's a (renewed?) push from Autistic/Ally TikTok users to use Tone Tags / Tone Indicators, especially when communicating with ND people.

https://tonetags.carrd.co/#masterlist

A number of creators, especially cosplayers, have recently shared posts encouraging their use.

I don't know how widespread it is, because TikTok and IG both tend to feed you content relating to your niches. So I may be seeing a disproportionate number.

(I had to Google what NBH was about, it means "This isn't aimed at anyone specific reading this".)


Gosh this seems like a brilliantly effective next step in the TikTokers’ campaign to drain life of any and all colour and playfulness and spontaneity. As an autistic person, I’d cast my vote for ‘occasionally misunderstand things’ over ‘have ridiculous sarcasm warnings on everything so that you can’t actually be sarcastic, or anything but grimly solemn and annoyingly earnest 100% of the time’.

(Also, Lord save me from people who call themselves “allies”. It’s just called being a normal decent person, but that doesn’t let you brag about it or use autistic/black/etc people as fashion accessories, so I s’pose that’s off the menu..)


I think it's about writing for one's audience, and being inclusive.

I've got autistic friends who struggle with open questions. They strongly dislike opening greetings without quickly taking the conversation somewhere.

They often miss stuff that's implied in conversation. It has to be explicitly stated.

They suck at gauging tone and intent in written language. They worry about the feelings and opinions of others. It can be upsetting and stressful for them.

But they're smart, capable, fun, artistic and creative, kind, thoughtful and inclusive.

They are certainly not lacking in colour or playfulness. (Possibly lacking in spontaneity to a degree, but I don't think that's a deal-breaker.)

And it's great that you're comfortable enough with misunderstanding things for it to be preferable to an alternative.

I choose to change my language to suit them. If using /s and /nbh or whatever helps them to correctly parse what I write, and assists in me communicating, why would I choose not to do that?

When using spoken language, I denote sarcasm through tone of voice. Does that render it pointless? If not, why would using /s?


Wait! Where's your opening tag? You monster!


It's like those people that open a parenthesis (for a short sentence, and then go 5 paragraphs without bothering to close it!


It is a good way to communicate though.


Beautiful comment because I can't tell if you're being sarcastic, and your comment works either way.


Yes. It’s even formally known as Poe’s Law:

> Poe's law is an adage of Internet culture stating that, without a clear indicator of the author's intent, every parody of extreme views can be mistaken by some readers for a sincere expression of the views being parodied.

https://en.m.wikipedia.org/wiki/Poe%27s_law


What's different about the internet when it comes to written sarcasm?


The internet lends itself to immediate/premature responses. People read headlines without reading the article. People stop reading in the middle as soon as they feel they have something to say about what they’ve read. People don’t take the time to think about what they’ve read before responding.

And that’s before you even get to the internet’s tendency to read what’s written in the least generous way possible in order to score internet points with a response to something wholly divorced from what the author intended.

Now add sarcasm to that mix. Pulling sarcasm out of context often leads to quick-draw responses to the exact opposite of the point the author was making.


Nothing, obviously.


Communication isn't always the primary goal of things put on the internet.


If you limit your communication online to a subset that cannot be willingly or unwillingly misunderstood, then you will say nothing at all. People's capacity to misread is infinite.


"It's generally not good communication to use intentionally ambiguous language" != "It's only good communication if you literally can't misunderstand it"


Re-read what you just wrote and break down what you said:

> Baffles me that people think sarcasm is a good way to communicate on the internet.

The subtext here is that the idea of using sarcasm on the internet should be obviously stupid to everyone, thus you're stating anyone who exercises it is stupid. That or you're clearly smarter than everyone else.

> You are begging to be misunderstood, either accidentally, or maliciously.

Thus, if you're using sarcasm you're either stupid or evil.

Was your intent to insult people?

Sarcasm and irony can be effective means of communication, the same as exaggeration. Communication is hard, and people posting on the internet don't(*) invest much thought most of the time.

* Typo


> people posting on the internet do invest much thought most of the time.

i would like to sign up for the internet you're on. where can i send my money?


That'd be nice right? Sorry that was a typo.


I'll try to avoid sarcasm in the future.

What I haven't understood yet is what significance "The solution is trivial" being sarcastic or not has on the article itself. I understand it's reflecting poorly on it due to what I said earlier, but is there something that makes the article harder to understand due to this? (I realize this question sounds a bit rude, but I'm genuinely curious and I don't know how else to phrase it.)


Firstly, I think people are making a mountain out of a molehill. The article remains comprehensible on the whole. It’s a good article.

What is presumably meant is the following: sarcasm and irony require a lot of skill and nuance to get right, precisely because they can be understood as saying something along with its inverse. More often than not, a sarcastic written comment will divide the audience into people who understood it to say P and those who understood it to say !P. This is especially true when your audience is large, culturally diverse, and (on average) rather literal-minded.

But don’t beat yourself up too hard. It’s a good article by any measure, and even more so for someone who is not a native English speaker.


Thank you for being reasonable here. A lot of the "feedback" saying the person should "avoid personal stuff" in technical articles is more likely expressing the commenters' taste than actually working out how the writer could express his irony better. For the record, I understood the "trivial" as not-so-trivial when reading xD


I’ll be your mentor for 5 mins.

> What I haven't understood yet is what significance "The solution is trivial" being sarcastic or not has on the article itself.

If there’s no significance either way it shouldn’t be part of the blog. It adds no value and takes away from your goal of being succinct which you stated in one of your responses.

General (well-meaning) advice from a stranger:

1. Always leave personal feelings out of blogs and technical/professional communication - especially the broadcast kind. We tend to think of a small number of people, but a larger number of people without context will interpret things very differently.

2. Sarcasm, irony, etc. need context and are sometimes perceived differently by people from different cultural backgrounds. Your goal is to represent your and your team's efforts while helping your users. Everything else will detract from it.

3. When faced with feedback, take it gracefully even if you disagree completely or it makes you mad. You don't need to get defensive and explain 'your side of the story'. It almost never goes well.

Also, why the hell were there such rude responses in the community post in the first place? I've worked at Microsoft before, and I'd have roasted my team if one of them responded in that disrespectful manner, even if the community member may have trivialized the work.


I think there is a lot of overreacting here to your blog post. That’s just the final piece. What rubbed me the wrong way was how your colleague castigated this person in the issue. I know people can be rough around the edges. I know they can be blunt and sometimes rude. But I think as MSFT you have a duty to rise above that by not engaging in it.

In any event, this whole thing is blown out of proportion and doesn't deserve 300+ comments, let alone another from me.


"The solution is trivial: [possible solution]! Well, unfortunately, [problem with solution]."

The issue here isn't so much sarcasm as that "is" should be read as "seems" here. But I don't see any other idiomatic reading that is sincerely calling the solution trivial.


> But I don't see any other idiomatic reading that is sincerely calling the solution trivial.

Agreed - it may not be obvious on first glance, but there's no other reasonable reading.


I for one caught on that the writing was analogous to, but more subtle than, "It sounds simple, you can just do blah, right?!; but actually you can't! So we had to do this complex thing to get it to work."

So yeah, as noted, sarcasm is a tricky tool in writing. While I enjoy it in technical writing often, it's definitely not as common on a company-associated blog post (for various good reasons).

My own takeaway for myself is a reminder: be careful crafting snark/jokes/sarcasm. The length of the statement probably increases the chances of being misinterpreted.


Nooo... I love sarcasm and/or irony in texts; they make me chuckle all the time. It actually keeps things lighter and more delightful to read.


It’s a bit silly to berate a customer for their use of ‘it’s easy’ and then use it yourself. ‘It was sarcastic’ and ‘it was a language issue’ are weak excuses.

It’s just poor form, why make an excuse in the first place? You can just say this was a mistake and try to be better next time.


Some very important goods are almost exclusively produced in rural areas (food, for instance, being produced by few people). These goods might be just as important as the goods produced by people living in large cities (services, for instance, being produced by many people).

If we start considering the people living in an area as the likely maintainers and experts of their field, I could imagine that we might want to weight votes based on the total importance of the "produced goods" in that area. That way these "maintainers" have a say in politics relative to their actual importance in society. I guess?


You might be interested in Hong Kong's "functional constituencies" system: https://en.wikipedia.org/wiki/Functional_constituency_(Hong_... Half of their legislative council is selected by voters in specific constituencies. Among them is specific rural constituency ("Heung Yee Kuk") and one for agriculture and fisheries, but also various more-urban constituencies like tourism, IT, and healthcare.

I don't have a good sense of how well this works in practice, and of course it is confounded by Hong Kong being a very unusual jurisdiction to start with. But yeah, I think it would be a more principled approach than having this one feature in our government to nominally give extra voting rights to rural populations in particular. It's difficult to figure out what those importances really are and it's unlikely everyone will agree, but I think everyone can agree that the answer is not that rural voters are the only special constituency.

Alternatively, there's a simpler argument - those things that are important in society are important because everyone cares about it, and therefore an urban legislator is unlikely to say "We don't need farms," because the urban legislator needs to eat. If the agricultural constituency says they need some measure, they already have the ability to convince the general voting public in proportion to their importance in society.

(Also, it's not like our current system effectively gives rural voters an additional voice. California is our top agricultural exporter, but it tends to vote in the opposite way from the smaller-population states. And even if you did give a specific voice to California's agricultural interests, it's highly likely that they'd disagree with the policies advocated by smaller-population states to reduce immigration and increase deportations, for instance.)


>That way these "maintainers" have a say in politics relative to their actual importance in society. I guess?

Who decides what's important? Who decides the relative importance of "maintainers?"

Even if that were a good idea (I don't think it is), there's no reasonable way to make something like that work.


> Also, like k8s & Bazel, [gRPC] is less advanced and lower performance than their internal technology.

Are you sure about that? As far as I know gRPC is literally just as good as Bazel, which is why Google is even migrating to it internally.

For instance this comment agrees with me: https://news.ycombinator.com/item?id=12348286


Note that Bazel is a build system (https://bazel.build). I believe you are confusing it with Stubby. That's not to say that the general thrust of what you're saying is necessarily wrong though.


I think you are misreading that comment. He’s saying that stubby is fast, not grpc. In fact performance is the big unknown with grpc adoption within google. It definitely isn’t on par with stubby today and it has to get there before anyone significant will switch to it.


I can't comment on raw numbers (because I simply don't have them) but at least for the service I work on, replacing Stubby with gRPC wouldn't really move the needle even if it was 2-3x slower (it might be faster, this is just for illustration) -- we spend our time waiting on IO from other services or crunching numbers in the CPU. Being a Java service, gRPC/Java might well be just as fast or faster than Stubby Java, but I could understand that Stubby C++ has been hyperoptimized over the years vs. gRPC C core which might have a ways to go. By the latest performance dashboard [1, 2], gRPC/Java is leading the pack but gRPC C++ doesn't seem like it's slouching too much either. I seem to remember the C++ impl crushing Java at performance a while back, so I'm sure that'll change in the future.

Honestly though? It'd take a _very_ demanding workload for your RPC system to be the bottleneck (so long as the implementations are within constant factors of each other). There are services like that, but they're the exception, not the norm. Most services don't need to do 100kQPS/task. Even then, at that point you're spending a lot of time on serialization/deserialization, auth, logging, etc. Your service is more than its communication layer; even if that's important to optimize, it's still just a minor constant factor.

The real problem is inertia. There's a lot of code/tools/patterns built up around Stubby and the semantics of Stubby (including all its features which likely haven't been ported to gRPC yet) and that's difficult to overcome.

Our #1 use of gRPC so far, I would imagine, is at the edge. gRPC is making its way into Android apps, since it's pretty trivial for translating proxies to convert gRPC to Stubby calls more or less 1:1.

[1] https://performance-dot-grpc-testing.appspot.com/explore?das...

[2] https://performance-dot-grpc-testing.appspot.com/explore?das...


You and I seem to be using a different denominator to quantify "most" services. I'm thinking of it as "most" in terms of who has all the resources / budget. You seem to be thinking of it in terms of sheer number of services or engineers working on them. The fact is that the highly demanding services have the huge majority of the resources, and are the most sensitive to performance issues. If your service uses 10% of Google's datacenter space, you won't accept a 5% or even 1% regression just so you can port to gRPC, because at that scale your team can just staff someone or even several people to maintain the pre-gRPC system forever and still come out ahead on the budget.

Totally agree that world-facing APIs will all be gRPC and that makes perfect sense to me.


> You seem to be thinking of it in terms of sheer number of services or engineers working on them.

I'm not sure where I said that, but yes, that's part of the switching cost.

> The fact is that the highly demanding services have the huge majority of the resources, and are the most sensitive to performance issues. If your service uses 10% of Google's datacenter space, you won't accept a 5% or even 1% regression just so you can port to gRPC,

The thrust of my statement was that for many services, RPC overhead is minimal. So even a 2x or 3x increase in RPC overhead is still minimal. I agree, a 5% increase in resource utilization for a large service is something that would be weighed. But let's explore that idea for a moment:

> because at that scale your team can just staff someone or even several people to maintain the pre-gRPC system forever and still come out ahead on the budget.

Not necessarily. Engineers are expensive and becoming ever more expensive, while computing resources are becoming ever cheaper. Not only that, but engineers tend to be more specialized, so you can't just task anyone with maintaining the previous system; it tends to be people with deep expertise already. And those people also have career aims beyond long-term support of a deprecated system, so there's retention to be considered.

Pretending for a moment that all your services except a small handful moved on to some system B from some system A: if the maintenance burden of keeping system A alive starts to eclipse the resource cost of moving to system B (a cost which decreases all the time due to improvements in system B and the monotonic reduction in computing resource cost, while the cost of maintaining system A only grows), then you might well just swallow the 5%-10% increase in resources either permanently or temporarily and come out ahead in the end.

Additionally, as system B moves on, staying on system A becomes increasingly risky: security improvements, features, layers which don't know about system A anymore all threaten the stability of your service. If you've checked out the SRE book, you'll know that our SLOs are more important than any one resource. If nobody trusts your service to operate, then they won't use it and then you won't have to worry about resources anymore since the users will have moved on.

> because at that scale your team can just staff someone or even several people to maintain the pre-gRPC system forever and still come out ahead on the budget.

To reiterate the point above, these roles tend to be fairly specialized and hard to staff. Arguably these same engineers are better tasked with making system B good enough to switch to, so you can thank system A for its service and show it the door.

Bringing this back to Stubby vs. gRPC, it's a pretty academic argument so far. They're both here to stay. And honestly, when we say "Stubby" there's already different versions of Stubby which interoperate with each other and gRPC will not be any different. Likewise, we still use proto1 in addition to proto2 and proto3 (the public versions) since that just takes time and energy to fix.

We do make these kinds of decisions every day, and it's not always in favor of reduced resources. If we cared for nothing other than resource utilization, we'd be completely C++, no Java, no Python. Realistically, the cost of maintaining systems with equivalent roles can often lead to one or the other winning out, usually in favor of maintainability so long as their feature sets are roughly equivalent. We're fortunate to be in a position that we can choose code health and uniformity of vision over absolute minimum resource utilization. And again, even if we choose system B (higher resources) over system A, perhaps due to the differences in architecture or design choices the absolute bar for performance of that system will be greater than system A, despite starting lower. Sometimes it takes a critical mass of adopters to really shake out all those issues.

I know that quotes from Knuth are often trotted out during these kinds of discussions, but it's true: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

That 3% is where we choose to spend our effort, and that critical 3% includes the ability of our engineering force to make forward progress and not be hindered by too much debt. It also includes real data; check out Google-Wide Profiling [1].

> Totally agree that world-facing APIs will all be gRPC and that makes perfect sense to me.

Probably not all. We still fully support HTTP/JSON APIs, but at least in our little corner of the world we've chosen to take full advantage of gRPC.

Anyways, thanks for letting me stand on my soapbox for a bit.

[1] https://storage.googleapis.com/pub-tools-public-publication-...


Interesting that you allude to the coexistence of C++, Java, Python, and Go, because I think this bolsters my point. The overwhelming majority of services at Google are in C++. There are individual C++ services that consume more resources than all Java products combined. I think this speaks to the appetite for performance and efficiency within the company, since it is demonstrably the most difficult of these languages.


...which surely depends on the circumstances. It could just as well be 3 years or more (for instance, if you're part of a gang), and I think this is reasonable, since you want that person to change (i.e. stop dealing) and not destroy his/her entire life, right?


Just put it into a gas cylinder (it doesn't even need to be compressed or anything).

ELI5 version: It might be lighter than air, but that just means it will "float" on top of heavier things - like oil on top of water. But all those things still obviously have a certain weight. If you put it into a cylinder, gravity will thus still make it weigh down on the cylinder, which can then easily be measured.


If your cylinder is an airship, and you put that on a scale, you'll find that it has negative weight :)

I would think measuring the correct weight would be:

(weight of empty cylinder) + (weight of air volume that the cylinder could contain at atmospheric pressure) - (measured weight - positive or negative)

If you measured e.g. an empty cylinder, you'd end up with only the weight of air volume in cylinder at atmospheric pressure, which would be correct.


This is nearly correct. Rather than the weight of air volume the cylinder could contain, you want the weight of a volume of air equal in volume to the cylinder (including its walls) -- and you seem to have the wrong sign on the measured weight and the cylinder weight. Your equation says that the measured weight is equal to the buoyancy force, plus the weight of the cylinder, minus the weight of the helium, which would be weird.

(weight of helium) = (measured weight) + (buoyancy force, which is the weight of an equal volume of air at the appropriate pressure) - (weight of cylinder)

source: https://en.wikipedia.org/wiki/Buoyancy


As I've written in another comment:

> That's because the weight of the gas cylinder is relative and you got the point of reference wrong: The base weight is when it's a vacuum inside the cylinder, because otherwise it doesn't even displace the air which you don't want to measure for your reference weight, making it appear heavier. Whatever you add inside the gas cylinder now will make it heavier and the difference you get is the weight of the gas.


Oh I understand, thanks! So like I can weigh a chicken breast sitting on top of a plate, because I know how much a plate weighs and I can just subtract it from the total weight.

Great explanation, thank you.


Suppose you have a 40 kg cylinder that you've filled with helium. The cylinder now weighs (g × 38 kg) newtons. How much helium do you have in there?

The answer is obviously not "negative two kilograms", and it's not obvious to me that the answer is "two kilograms". After all, if we weighed the same cylinder in a vacuum rather than in the atmosphere, it would presumably be heavier, but the amount of helium inside wouldn't change.
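
To make that concrete with the corrected equation from upthread, an assumed air density of about 1.2 kg/m³, and V for the cylinder's outer volume:

    m_helium = m_measured + m_displaced_air - m_cylinder
             = 38 kg + (1.2 kg/m³ × V) - 40 kg
             = (1.2 × V - 2) kg

So the scale reading alone really doesn't determine the helium mass; you also need the cylinder's volume. (And for the result to come out positive at all, V would have to exceed roughly 1.7 m³.)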


I was told there would be no math.


> I can weigh a chicken breast sitting on top of a plate, because I know how much a plate weighs and I can just subtract it from the total weight.

Not only is this math, we know that it's incorrect for the problem under discussion. The fact that it follows "I understand, thanks!" does not fill me with confidence in the original explanation.


Are you implying that helium weighs nothing? That it's unaffected by gravity? What's actually going on is that helium is lighter than air, so the pressure of the air around it pushes it up. Just because it rises at ground level does not mean it weighs nothing.

I can jump, but when my feet are off the ground that does not mean gravity has stopped pulling me down; it just means I have moved upwards with enough force to overcome the effects of gravity. Eventually my upward momentum will slow enough for gravity to pull me back down. Same thing with helium: eventually it gets high enough that the surrounding air molecules are sparse enough for it to float on top. It doesn't continue to rise past that point, because gravity is still pulling it down.

So what is the weight of helium (that is, the force of gravity pulling down on the helium molecules, without ambient air pushing them up)? If you have a cylinder of known weight, you can fill it with helium and subtract the weight of the cylinder. What's left is the weight of the helium. To measure this without air pressure pushing the helium up, you might need to do it in a vacuum, but the logic is there and I think my analogy fits. I weigh the chicken on a plate and subtract the plate to find the weight of the chicken.

What I was asking is "how do you weigh a gas without it spreading around and mixing with the air?" "Putting it in a cylinder" is the answer I was looking for.


> What I was asking is "how do you weigh a gas without it spreading around and mixing with the air?"

That's a rather different question than you actually asked, "how do you weigh something that is lighter than air?" You might notice that not a single response takes the perspective that what makes the problem difficult is that helium will spread out or mix with the atmosphere. Lots of gases (not to mention liquids!) are heavier than air while still being fluid; weighing them doesn't pose the same problems.

> Same thing with helium, eventually it gets high enough that the neutral air molecules are sparse enough that it can float on top. It doesn't continue to rise past that point because gravity is still pulling it down.

This isn't true at all. https://en.wikipedia.org/wiki/Atmospheric_escape


>You might notice that not a single response takes the perspective that what makes the problem difficult is that helium will spread out or mix with the atmosphere.

Well, that's not true. The very first response, the one that started this conversation, did exactly that. "Put it in a cylinder". Boom, done. That's why I said "good answer". That's the answer to the question I had asked. You can weigh sulfur hexafluoride by putting it in a bowl because it is so dense it will not float away. Helium is the opposite. That's what I wanted to know. Because obviously helium will just float away.

This is a complicated question, obviously, and it goes deeper than what I asked. But you and I are asking two different questions, and for some reason you seem hell-bent on shitting on everyone else's answers with poor science and snark instead of doing a simple Google search that takes like 5 seconds and gives you literally all the answers you could ever want.


> Earth is too large to lose a significant proportion of its atmosphere through Jeans escape. The current rate of loss is about three kilograms (3 kg) of hydrogen and 50 grams (50 g) of helium per second.

Seems true to the tune of whatever exists minus 50 grams per second.


Earth is too large to lose a significant portion of its atmosphere. It can (and does!) easily lose a significant portion of its helium.

50g of helium per second would exhaust the current level of helium in the atmosphere in about 2.3 million years. (Atmospheric total mass 5.15e18 kg from wikipedia, composition of the atmosphere by volume from http://eesc.columbia.edu/courses/ees/slides/climate/table_1.... .) That may sound like a long time, but consider that the earth is 4500 million years old.
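
For anyone who wants to check that figure, here's a quick back-of-the-envelope sketch in Rust (the ~5.24 ppm helium fraction is the commonly quoted atmospheric value; the rest comes from the sources above):

    // Back-of-the-envelope check of the ~2.3-million-year figure above.
    fn main() {
        let atmosphere_mass_kg: f64 = 5.15e18; // total mass, per Wikipedia
        let helium_ppmv = 5.24e-6; // helium volume fraction (~5.24 ppm)
        let molar_mass_he = 4.0; // g/mol
        let molar_mass_air = 28.97; // g/mol, mean for dry air

        // Volume fraction -> mass fraction -> total helium mass.
        let helium_mass_kg =
            atmosphere_mass_kg * helium_ppmv * (molar_mass_he / molar_mass_air);

        let loss_kg_per_s = 0.05; // 50 g of helium per second
        let years = helium_mass_kg / loss_kg_per_s / (365.25 * 24.0 * 3600.0);
        println!("≈ {:.1} million years", years / 1e6); // ≈ 2.4 million years
    }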


Maybe I'm not understanding your proposal with the cylinder, but IIRC it takes about a cubic foot of helium at standard temperature and pressure to lift an ounce. Let's call it about 1 cubic meter per kilogram in Euros. That means your cylinder has an inner volume of about 2 cubic meters, which is silly, so it sounds like you're talking about a normal-sized cylinder with compressed helium in it, which is heavier than air and which you'll be able to weigh on a scale just like anything else. (More or less.)


Those numbers were made up to illustrate a point. I have no idea how buoyant helium is at STP and didn't intend to conform to any realistic scenario. "2 cubic meters at STP" is a perfectly fine empirical answer, and I guess you can get the mass of 2 cubic meters of helium from the ideal gas law.

The point is, it makes no sense to say "the weight of the helium must be equal to the weight of the helium+cylinder system minus the known weight of the cylinder, just as the weight of a chicken on a plate is equal to the weight of the chicken+plate system minus the weight of the plate". You've got to bring in some extra information.
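
A sketch of that ideal-gas-law step in Rust, taking STP as 0 °C and 1 atm and the ~2 m³ volume from above (illustrative, not a measurement):

    // m = p·V·M / (R·T) for helium at STP.
    fn main() {
        let p = 101_325.0; // Pa, 1 atm
        let v = 2.0; // m^3, per the estimate above
        let m_molar = 0.004; // kg/mol, helium
        let r = 8.314; // J/(mol·K), gas constant
        let t = 273.15; // K, 0 °C

        let mass = p * v * m_molar / (r * t);
        println!("≈ {:.2} kg of helium", mass); // prints ≈ 0.36 kg
    }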


What's the mechanism? It can't just be separation from the atmosphere -- if you fill a balloon with helium, it will get lighter, not heavier. Same thing if you fill a cylinder that sinks in water (perhaps because it was already full of water) with atmospheric air; it may well start to float.

Suppose you have a balance scale with a "cylinder" full of air on one side. You pump out the air and put in helium. What happens to the scale?


That's because the weight of the gas cylinder is relative and you got the point of reference wrong: the base weight is the cylinder with a vacuum inside, because any gas already in it would end up in your reference weight and make it appear heavier. Whatever you now add inside the cylinder makes it heavier, and the difference you get is the weight of the gas.


While I appreciate the answer, it's not a great explanation of the balloon example, since a balloon starts out effectively containing nothing. The problem there is that as you add helium, the volume of the balloon increases.

I'm a little more comfortable with saying we do some weighings in a vacuum (say, to determine the density of air at a given pressure) than with saying we'll start with a cylinder that contains a vacuum. For example, our 40 kg cylinder, when airtight and containing a vacuum, will weigh less than a 40 kg object should, and I think that muddles the example.


Only the difference between vacuum cylinder and helium cylinder matters. (Since the buoyancy in air only depends on the volume of the cylinder, not what's inside.)


An empty (evacuated) cylinder should measure lighter than its true weight by exactly as much as the displaced air weighs. That point is your zero point.

Put in helium and, as you keep filling, you'll eventually reach and exceed the parity point (where the contents weigh the same as the displaced air), and how much the scale moves per volume of additional gas tells you the additional weight.

Soon enough you'll reach twice the air's weight, and you'll have tipped the scale exactly.
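
A small numeric sketch of that zero-point bookkeeping in Rust (all numbers are made up for illustration):

    // Scale reading vs. gas mass inside, relative to the vacuum zero point.
    fn main() {
        let cylinder_mass = 40.0_f64; // kg, true mass of the cylinder
        let displaced_air = 2.45; // kg, air of equal volume to the cylinder

        // Scale reading (kg-equivalent) with a given gas mass inside:
        let reading = |gas_mass: f64| cylinder_mass - displaced_air + gas_mass;

        println!("vacuum (zero point): {:.2} kg", reading(0.0));
        println!("parity with air:     {:.2} kg", reading(displaced_air));
        println!("2x the air's mass:   {:.2} kg", reading(2.0 * displaced_air));
    }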


Aren't you confusing buoyancy and weight?


What's weight? Mass is well-defined; you could call weight "the force of gravitational attraction between the object and the earth" or you could call it "the force acting upon the object, when at rest". Since we're talking about measuring weight, I was using the second one, which is easy to measure directly.

Granted, I was also using the first sense when talking about the weight of the helium specifically. I guess in that sense my terminology could have been better.


Can someone explain to me why workers were only alive for 6 seconds? Even now: why are they only alive for 20 minutes, and why is that even acceptable? Is it due to memory leaks?

I'd probably lose my job instantly if any of my systems crashed that often without me fixing it, regardless of the reason...


The unicorn workers are leaking memory. Any worker going above a limit will be restarted. Obviously it would be better to reduce the memory leaks further and we welcome help with that. To be clear, it is not a crash, all requests are handled without error to the end user.


I think what'd be even more interesting is whether a candidate can transform a simple recursive algorithm to an iterative one and vice versa.

It's taught in CS courses, for instance, but I know a lot of software "engineers" who can't come up with a solution to such problems. (Hint: it's rampant in the webdev crowd.)

As a side note: yeah, I also think that recursion is seldom needed, but sometimes it's actually your only option. For instance, I wrote a Future/Promise library with support for continuations for Rust (it'd be the same in C++). Simply put: since the Promise is generic, the actual type (and thus its machine code) changes from type to type, so you have to use virtual function calls to invoke the next promise in the continuation chain. Compared to dynamic languages, this has the obvious disadvantage that the chain length is limited by the stack size.
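
For illustration, here's a minimal sketch of that recursive-to-iterative transformation in Rust, using in-order traversal of a binary tree as the example (the types and names are made up; the promise-chain case works the same way, with the pending continuations moved into an explicit structure):

    struct Node {
        value: i32,
        left: Option<Box<Node>>,
        right: Option<Box<Node>>,
    }

    // Recursive version: the call stack implicitly remembers the
    // nodes we still need to revisit.
    fn in_order_recursive(node: &Option<Box<Node>>, out: &mut Vec<i32>) {
        if let Some(n) = node {
            in_order_recursive(&n.left, out);
            out.push(n.value);
            in_order_recursive(&n.right, out);
        }
    }

    // Iterative version: the same pending work is stored in an explicit
    // Vec used as a stack, so depth no longer eats call stack.
    fn in_order_iterative(root: &Option<Box<Node>>, out: &mut Vec<i32>) {
        let mut stack: Vec<&Node> = Vec::new();
        let mut current: Option<&Node> = root.as_deref();
        while current.is_some() || !stack.is_empty() {
            while let Some(n) = current {
                stack.push(n);
                current = n.left.as_deref();
            }
            let n = stack.pop().unwrap();
            out.push(n.value);
            current = n.right.as_deref();
        }
    }

    fn main() {
        let tree = Some(Box::new(Node {
            value: 2,
            left: Some(Box::new(Node { value: 1, left: None, right: None })),
            right: Some(Box::new(Node { value: 3, left: None, right: None })),
        }));
        let (mut a, mut b) = (Vec::new(), Vec::new());
        in_order_recursive(&tree, &mut a);
        in_order_iterative(&tree, &mut b);
        assert_eq!(a, b); // both yield [1, 2, 3]
    }

The mechanical trick is always the same: whatever state the call stack was implicitly holding gets moved into an explicit stack on the heap, so depth is bounded by memory rather than by stack size.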


tl;dr: Slashdot and SourceForge have a new owner, who removed all this nonsense from those sites - including the adware.

I believe the average HN reader - including you - will find this interesting, then: https://www.reddit.com/r/sysadmin/comments/4n3e1s/the_state_...

