
I recently left a job at a very different large company after a similar timeframe (a little under ten years). Pretty much everything this author describes matches my experience.

There is nothing all that special about Google. Maybe there was twenty years ago, but that ship has long since sailed. It’s just another large US tech company. Like Microsoft and IBM before it.


For a long time Google had cachet as the most engineering-friendly big tech firm (which was mostly deserved) and also the place with the highest level of talent (which is more team dependent but also somewhat deserved). You might end up working on ads or some other inane thing, but at least your engineer coworkers would be really good. They're still riding that wave to some degree because they haven't scared away all their top talent yet.


> It’s just another large US tech company. Like Microsoft and IBM before it.

This is just a hyperbolic statement that should not be taken seriously at all.

Look, Google isn't some fantasy land that some people might have lauded it as once upon a time, and it isn't unique in terms of pay or talent, but it is certainly at the top echelon.

I did an interview loop for a high-level IC role at both Azure and GCP simultaneously, and the difference in talent level (as well as pay) was astounding.

IBM has never been a company where engineers could rise to the same rank as directors and higher on a solid IC track.

Is Google special compared to Apple/Netflix/Meta? No. Is it special compared to Microsoft, IBM, or any non-FAANG company that isn't a top decacorn? Yes.


Microsoft and IBM used to have similar, extremely talented teams. IBM ran research centers full of the world's top Ph.D.s. The innovation that happened at those places easily rivals Google's.

It's a similar trajectory, is what people are saying. When Google was small and everyone wanted to work there, it could take its pick of the top talent. When you run a huge company you inevitably end up with something around the average. I.e., all those huge companies that pay similar wages and do similar work basically end up having similar talent, give or take, and within that there are likely some great people and some less-than-great people.


>all those huge companies that pay similar wages and do similar work basically end up having similar talent

But IBM and Microsoft don't pay the same as the largest top tech companies.


> IBM has never been a company where engineers could rise to the same rank as directors and higher on a solid IC track.

Never? Maybe if you’re talking about the last 15-20 years, but IBM has been around a lot longer than that…

I personally know people who moved up the ranks there to director and above, so I can say with confidence you’re absolutely wrong.


Yes! It’s sad how ignorant of IBM and US technology industry history some of these comments are. Then again, I suppose every generation does a lot of its own “this time we’re different” myth making. Not everyone has the wisdom to see the broader context.


Indeed. I think it's because it's physically impossible for the younger generation to have experienced it, while for the older generations it's complicated to get into a disruptive startup.

Obviously people could read about the past, but sometimes that's asking too much; they are busy creating "the future".


Also, you know, there probably aren't that many posters on HN who worked at AT&T in the 60s...


If you can’t be bothered to read up on why previously great companies fell from grace, you’re kind of begging to repeat their mistakes…


>I personally know people who moved up the ranks there to director and above,

I didn't mean that engineers can't become directors, I meant that IBM didn't have a track for top ICs to get paid more than directors and still not be on a manager track.


> ...both Azure and GCP simultaneously and the different in talent level (as well as pay) was astounding.

This is maybe the third time I've heard this mentioned here on HN, so now I'm curious: What specific kinds of differences?

I imagine there might be a certain kind of prejudice against Microsoft and its employees, especially for "using Windows" or whatever, which I've often found unfairly colors the opinions of people from Silicon Valley who are used to Linux.

If you don't mind sharing, what specific differences did you notice that gave you a bad impression of the Microsoft team and such a good impression of the Google team?


> What specific kinds of differences?

Overall talent level. Almost everyone I interviewed with at Google impressed me and came across as thoughtful and kind.

I did interviews with many teams at Microsoft (9 technical interviews total) and the only person who impressed me is now at OpenAI.

Every single interview question I got at Microsoft was straight out of intro CS / classic LeetCode.

They would straight up ask "find all anagrams", "string distance", "LCA of a tree".

Google instead disguises many classic CS questions, so it takes a lot more thinking. Microsoft seemed to just verify that you can quickly regurgitate classic algorithms in code.

I'm sure there are some great teams at Microsoft, but because each division/org is much more siloed, I think it's more likely that a team has a lower overall bar.

Google makes everyone pass through a hiring committee and you're interviewed by people that have nothing to do with the team you might end up on. Meta is similar. Amazon has the team interview you, but they also have bar raisers come from other teams.

Microsoft seems to be the outlier here, in that someone can get on a team after interviewing only with people on said team.


I've used www.firstrade.com for decades now. Not a well known name, but extraordinarily reliable.


There's a bit of a contradiction in the article. The main objection is the author's feeling of uneasiness in open spaces, the "liminal spaces" created by, for example, large parking lots. The author then complains that these wide-open spaces are not "walkable". What? They are certainly walkable by their very design! What they are not, and this seems to be the real objection, is cozy spaces lined with taquerias and coffee shops.


Walkable means that you can walk to something, not just walk to another house.


Walkable in the sense that I can meet most of my needs via walking.

I live 2 blocks away from a grocery store, for example. There is a 24 hour pharmacy roughly the same distance, and a couple coffee shops + a gas station + McDonalds not too much further away.

There is an expansive set of tennis courts and a beach volleyball area within walking distance, and next to it is a great park & playground. A bit further in the other direction is an elementary school with playgrounds, and beyond that, at the edge of what I'd consider walkable, is a splash park.

Get on a bike and the offerings double.

Meanwhile, my parents are 20 minutes away from anything outside of a single gas station. Plenty of nice houses, and at least one school and fire dept., but they basically have to drive into town -- even though they're surrounded by houses -- just to snag a simple coffee or quick grocery store run.


The article states this person is walking in a commercial area; that is what is creating the liminal space. She's not strolling through the woods. She's basically just bothered that her neighborhood isn't gentrified enough yet.


This article predates the resignation of SawyerX (due to a lot of the abuse and misery heaped on him), the Perl release manager (aka "Pumpking") who was in charge of Perl 7.

The short version of it was that there was a bit of a power play by people who felt ignored and wanted a bigger part of what was felt to be an important development.

SawyerX goes into this a bit in a talk from 2023. https://www.youtube.com/watch?v=Q1H9yKf8BI0

This has resulted in a new (2024) code of conduct so that leadership can proceed without the previous sorts of verbal assaults affecting things again. https://news.perlfoundation.org/post/new-standaards-of-condu...


I'd say you're being a bit overly cynical. There's plenty of good news in Perl too. Specifically, since you mentioned smartmatch, that's been pretty well fixed as of a couple of weeks ago with Switch::Right: https://metacpan.org/dist/Switch-Right.

As is tradition this was the subject of a very engaging talk at the most recent TPRC: https://www.youtube.com/watch?v=0x9LD8oOmv0


Switch modules have been around for years. There's Moo and Try::Tiny too. Some of this stuff should really be part of the language by now. Same with exporting functions.


As usual, it's a module when it should be a language feature. Too much useless talk and not enough stuff getting done.


If it’s available via a module, it’s done. And it smooths over some versioning/compat issues.

Useful talk deals in benefits and tradeoffs (as the linked article demonstrates in spades). Useless talk deals in unsupported “should”s.


Sure. I've been using the language for more than 10 years, but this is dumb: different modules for every trivial feature that should be a language feature instead. Smart match is a perfect example. It smooths over nothing. I'll be off using Ruby, thank you.


You manage feature differences one way or another. If you like choosing rbenv vs rvm vs asdf and then using them to manage your Ruby versions and gem dependencies rather than having in-band switches in a single interpreter, great, you're welcome. I could even see someone making a case that it fits neatly within an org where systems/devops folks take more of the environments/dependencies division of labor.

If what you really like is the charge you get out of just saying "this is dumb" while indulging the privilege to not even notice that you did a repeat performance of unsupported shoulds vs worthwhile tradeoffs, though, well, maybe you should examine that.


I use the system Ruby and don't have to worry too much about rbenv and rvm. 2.7 and even 3.0 are well supported. That's what I also did with Perl, except when I used macOS, which was a pain because of modules that used C libraries like LibXML. On Linux we can also use containers without worrying about a speed penalty. There are sufficient solutions and okay tooling. Ruby's also got not one but two JIT compilers right now.


The Perl module system is awful.


It's a lot better than Node's NPM.


How so?


Serious answer:

* The module system literally just runs a script, so it can do -anything-. As a result there are 3 or 4 competing install systems, all with their own cruft, some defined entirely using Perl as config, some using YAML. You need to have all of them installed.

* Of these, Module::Build is a common one written by someone who completely overengineered it, and it installs hundreds of dependencies, even though all it really does is just copy some files.

* Install scripts can do stuff like ask you interactive questions that prevent automated installation, which is a constant hindrance when packaging Perl modules e.g. to RPM

* Perl leaves literally everything up to external modules, including exporting functions or even defining module dependencies (e.g. 'use base', 'use autoclean', 'use Exporter' ...), and often the module config is written entirely in Perl rather than a YAML or JSON file, so trying to do anything clever (like adding IDE/language server support) is an absolute nightmare (see the sketch after this list).

* The client to install new modules initially asks about 20 questions and does a slow mirror test, making it difficult to use in automated settings. Luckily someone wrote cpanminus, which literally does exactly what you want - installs the damn package.
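To make the exporting point concrete, here's a minimal sketch of what even a trivial module ends up looking like. The package and function names are made up for illustration, but the Exporter incantation is the standard core-module one:

    # lib/My/Util.pm -- hypothetical package/function names
    package My::Util;
    use strict;
    use warnings;

    # Exporting isn't a language feature: we delegate to the core
    # Exporter module by borrowing its import() method.
    use Exporter 'import';
    our @EXPORT_OK = qw(slugify);

    sub slugify {
        my ($text) = @_;
        $text = lc $text;
        $text =~ s/[^a-z0-9]+/-/g;   # collapse runs of non-alphanumerics
        $text =~ s/^-+|-+$//g;       # trim leading/trailing dashes
        return $text;
    }

    1;  # a module file must end by returning a true value

A caller then writes 'use My::Util qw(slugify);' to pull the function in, and shipping even this still means writing a Makefile.PL or Build.PL that is itself a Perl program, which is where most of the problems above come from.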


I have a Gemini too! I still use it sometimes, I just open Termux and use it as a highly portable scratch pad for working on small coding projects.

I wish that the handheld form factor had more options, but it seems that market is just too niche and otherwise dominated by tablets and phones.


I have been unimpressed with ChatGPT4's ability to generate code for problems of even medium complexity. Even if given partial or complete code from another language and told to translate it!

If we're seeing a heavy drop in Stack Overflow usage, then my guess is that Stack Overflow was getting most of its traffic from some very basic queries and ChatGPT is eating that base out from under them. Better for Stack Overflow that they partner with OpenAI and focus on serving the higher end that they have left.


Does this interaction seem like low complexity?

https://chatgpt.com/share/e92bf633-4815-45bc-8ae9-9068fe257d...

[EDIT: pasted in wrong link]

There is basically no information out there on how to write error-tolerant parsers for language servers. My entire knowledge before starting work on this was someone on the F# Discord giving me a three-sentence explanation of an approach using an intermediary AST for a tolerant first pass.

The key with handling medium and large complexity tasks with an LLM is to break it up into less complex tasks.

First, I showed an example of a very simple parser that parsed floats between brackets, asked for a version that parsed just strings between brackets, and then I asked:

I'm working on a language server for a custom parser. What I really want to do is make sure that only floats are between brackets, but since I want to return multiple error messages at a time for the developer I figure I need to loosely check for a string first, build the AST, and then parse each node for a float. Does this seem correct?

I get a response of some of the code and then specifically ask for this case:

can you show the complete code, where I can give "[1.23][notAFloat]" as input and get back the validatedAST?

There's an error in the parser so I paste in the logged message. It corrects the error, so I then ask:

now, instead of just "Error", can we also get the line and char numbers where the error occurred?

There's some more back and forth but in just a few iterations I've got what amounts to a tutorial on using FParsec to create error tolerant parsers with line and column reporting ready for integration with the language server protocol.
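The core of the approach is: a tolerant first pass that only captures bracketed chunks along with their positions, then a strict second pass that validates each chunk and collects every error instead of stopping at the first one. As a rough illustration of that two-pass idea (a toy sketch in Perl rather than the actual F#/FParsec code, single-line input only, all names invented):

    use strict;
    use warnings;

    # Pass 1 (tolerant): grab every [ ... ] chunk plus the column where
    # its contents start, without caring what the contents are.
    sub parse_loose {
        my ($line) = @_;
        my @nodes;
        while ($line =~ /\[([^\]]*)\]/g) {
            push @nodes, { text => $1, col => $-[1] + 1 };
        }
        return \@nodes;
    }

    # Pass 2 (strict): check that each node is a float, collecting all
    # errors rather than bailing out at the first bad one.
    sub validate_floats {
        my ($nodes) = @_;
        my (@values, @errors);
        for my $n (@$nodes) {
            if ($n->{text} =~ /^[+-]?\d+(?:\.\d+)?$/) {
                push @values, $n->{text} + 0;
            } else {
                push @errors, "line 1, col $n->{col}: '$n->{text}' is not a float";
            }
        }
        return (\@values, \@errors);
    }

    my ($values, $errors) = validate_floats(parse_loose("[1.23][notAFloat]"));
    print "$_\n" for @$errors;   # reports notAFloat but still keeps 1.23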

If anyone would like to point me in the direction of such a tutorial that already exists I would very much appreciate it!


You need an account to see your link.



"However, we must not forget that AI needs to learn as well from vast sources."

Well, that's actually the problem. This current wave of AI is not really "learning" anything. An AI with any sort of generalizable reasoning ability would just need basic sources on programming syntax and semantics and figure the rest out on its own. Here, instead, we see the need to effectively memorize variations of the same thing, say, answers to related programming questions, so that they can be part of an intelligent-sounding response.

I was dubious about the value of GenAI as a search tool at first, but now see that it's actually well suited for the role. These massive models are largely storing information in a compressed form and are great at retrieving it and doing basic rewrites. The next evolution of Expert Systems, I suppose, although lacking strong reasoning.


> An AI with any sort of generalizable reasoning ability would just need basic sources

That is a completely unsupportable assertion.


No, it's the very definition of being able to generalize.


Exactly, imagine thinking you can learn how to program Java or C just from being handed the language specification, or even learn how to play chess just by being told the rules of the game.

Humans don't learn anything of substance just from being told the strict rules, we also learn from a wealth of examples expressed through a variety of means some of which is formal, some poetic, some even comedic.

Heck, we wouldn't even need Stack Overflow to begin with if we could learn things just from basic sources.


> imagine thinking you can learn how to program Java or C just from being handed the language specification

Throw in the standard library documentation, and that's exactly how many of us learned how to program before projects like Stack Overflow or even the web existed for the public. We took those rules, explored the limits of them using the compiler, and learned.

Stack Overflow is, IMO, a shortcut through portions of the exploration and learning phase. Not a bad thing, but importantly it's not required either.


Yeah, I was programming way before Stack Overflow existed, and I simply call BS.

No one learned Java or C by reading either of the following documents:

https://docs.oracle.com/javase/specs/jls/se10/html/index.htm...

https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf

It's cool to romanticize the past though.


I mean...

Humans do this.

Not perfectly, no, but we do.

As the only general intelligences we know of so far, I'd say it's support for the assertion that an AI with general reasoning abilities wouldn't need SO or other examples to figure out how to do specific tasks.


The article begins by explaining that it’s a follow-up to an earlier article on Wayland!


Indeed; and if you read that article it says that current Wayland compositors depend heavily on Linux infrastructure, and have not yet accepted patches to support NetBSD. Everything Wayland on the BSDs is in the "one guy faffing about to see if he can get it working" stage. Unless/until the Wayland project starts mainlining BSD support, or the BSD projects have robust parallel maintenance of the core Wayland packages and compositors, BSD as a desktop system is dead in the water.


People not having the same priorities as you does not equate to death. You are being rude and overly dramatic.


Ah, what a delightful opportunity to delve into the nuances of the English language and its wonderfully intricate idioms! Let's embark on an enlightening exploration of why the phrase "dead in the water" is certainly not the same as simply being "dead." For anyone less familiar with the idiomatic expressions and metaphors of English, this can be a fascinating journey.

To begin with, when we hear the word "dead," one's mind might instinctively leap to its most literal and unfortunate meaning—devoid of life. In biology, this means the cessation of all vital functions: no heartbeat, no brain activity, no breath. The ultimate and irreversible state that all living things, sadly, will eventually meet. It's quite final, isn't it? The end of the line. Kaput. There's no ambiguity here; dead means dead.

However, the wonders of language allow us to use words in metaphorical or idiomatic expressions to convey more complex or nuanced situations or states. And that's where "dead in the water" swims into the scene. This phrase, you see, has nothing to do with the literal cessation of life. Oh, no. It's far more colorful and applicable in a variety of non-lethal scenarios.

Originally, this idiom comes from the nautical world—a domain rich with metaphorical language, given the myriad challenges and adventures faced at sea. Imagine a ship, if you will, its sails billowing as it cuts through the waves. Now, picture it suddenly unable to move; the wind has died down to nothing, the sails slump, and the ship is merely adrift, going nowhere. It is, quite poetically, "dead in the water." The ship isn't literally dead, of course—it's just temporarily incapacitated, unable to proceed along its intended course until the wind decides to grace it with its presence once again.

Transposed into everyday usage beyond the high seas, "dead in the water" is a vivid metaphor for projects, plans, or initiatives that have come to a halt—stymied, unable to progress, much like that becalmed ship. It's used to describe something that has little hope of success or revival in its current state. For instance, if a business venture runs out of funding or a new policy is halted by regulatory issues, they might be described as "dead in the water." Not literally deceased, but stuck, with no forward momentum.

In essence, while "dead" is the cessation of life, "dead in the water" is about cessation of progress or movement—figuratively speaking, of course. The latter suggests a temporary state, a problem potentially fixable, perhaps with effort, change in strategy, or a shift in external conditions, unlike the permanence and finality of being literally dead.

Isn't it simply marvelous how language lets us draw such specific shades of meaning with just a tweak of phraseology? Through this exploration, we can appreciate not only the richness of English idioms but also the joy of explaining something so deceptively simple yet profoundly different. Here we stand—or float, if you will—at the junction of literal and metaphorical, grasping the beauty of expression. And isn't that what language is all about?


Language is great and rich and all that but you forget too quickly that it's a two way street. Nothing gp said implied that they didn't know the idiom. Indeed, it seems obvious to me they were just being concise, which is perhaps a virtue of prose you could benefit from.


As an outside observer, it seems reasonable to assume asveikau missed the figurative language. The original phrase used was "dead in the water," which as noted does not imply death; however the response alleged that death was (rudely?) what the original commenter was talking about.

Seems blown out of proportion, at least partially because of missing figurative language. The above wall of text seems a long-winded, somewhat tongue-in-cheek way to say "yeah I didn't say that."


Not everybody understands English 100%.

The guy in the old west who drew his gun and said “them’s fightin’ words”? He probably didn’t know enough English to understand the whole sentence but picked out a few words which meant to him “fight.”


Of course not everyone understands English perfectly. Even among native speakers it's easy to misunderstand (especially on a web forum). I wasn't making a judgement.

Anyway, it was pointed out that the same user that said "dead in the water" did indeed comment upthread that BSDs will "end up dying," which goes to show that I wasn't paying attention.


I am a native English speaker raised in Washington DC.

I am literally pointing out that Wayland enthusiasts love to gang up on X11 as "dead" as a put-down, as if anybody who still uses it is evil or something. It's a toxic attitude that I've seen a lot here. It should ... pardon the metaphor ... die.


I think one should allow others to look at the entirety of the thread started by bitwize, where they claim:

> ...the BSDs will really end up dying,...

That wasn't very figurative, even if "dead in the water" was. For what it's worth, bitwize might have been using that figure of speech inappropriately when they really meant "dead".

All this to say that even as an outside observer, one can read exactly the same things differently, so perhaps we should all get off our high horses and start riding ponies or bicycles (what? :)) — I mean, back to the discussion at hand.


> As an outside observer, it seems reasonable to assume asveikau missed the figurative language.

No, I didn't miss it at all, and this is a weird take.

However, its appropriateness as a metaphor does have to do with the literal meaning. Even a less maintained software project with a longer-term deprecation roadmap that still has millions relying on it is not "dead". This very article is talking about how the *BSDs are putting more maintenance into that tree than upstream, and those are actually signs it isn't a total dead-end; they've done that maintenance over the years because it's valuable to them. But bitwize was calling it dead in an attempt to put it down. I've found this particular brand of negativity is very common in Wayland enthusiasts. It is like ad hominem in software maintenance discussion. The source is available for anyone to hack on and use, or not, as they please. There's no sense in ad hominem attacks, exactly as bitwize engages in above, for that.

To be honest, the attacks like this remind me of the XZ backdoor. The sock puppets in those mailing list threads complaining, I would say whining, about "dead", "unmaintained" liblzma were channeling the exact same energy. Cool it down. It's not necessary.


I had to go through your post history to see if you were some kind of gpt bot. That was quite impressive.


Thank you Sheldon!


... It's run down the curtain and joined the choir invisible. This is an X11-parrot ...


And that's what Wayland wants.

"We will only develop for Linux if you want it do it yourself" is the vibe I get from the Wayland team.

Why should FreeBSD be the ones who have to develop? The stubbornness of Linux users.


I think it's more that the detail they've lost (or do not agree with) is that working successfully in multiple environments can validate the strength of a design. Some people will look at a slightly different structure underneath and see it as noise and hassle, where you or I may see their action as taking unnecessary or questionable dependencies.


The short version is that the Linux ecosystem has, for whatever reason, spent a decade or so re-solving the already-solved problem of "who is logged in to this TTY?" and came up with a new mechanism that the BSDs don't in general implement and that Wayland relies on.


Red Hat wants it to be really hard to remain compatible with Mainstream Linux (whatever Red Hat does) while also differentiating your distro from theirs in meaningful ways, and wants some good old “fire and motion” against competitor distros.

At least, I think everything they’ve been up to and these projects they heavily influence have been doing makes a ton more sense if that’s the plan. It’s that or a lot of weirdly-hostile and disorganized stuff has been happening by chance in a way that achieves that effect by accident.

The BSDs are just collateral damage, I reckon.


Joke's on them; the BSDs are more usable than Linux now if you care about productivity.


Yeah, I mean, I work inside a Fortune 10 company, and after countless man-hours across multiple teams we have exactly zero LLM applications in production, and the pipeline heading to production is empty.

I guess it’s good at generating plausible blog spam and helping children with homework. I’ve used it to bootstrap my own writing. It’s not entirely useless but hardly world changing.

I think the biggest commercial use right now is Klarna using it for basic level 1 support? I don’t know the details, but it sounds like a good result from RAG over a fairly constrained corpus. So, again, nice but completely unaligned with the massive valuations in that space right now.

