
Coaching exists. I've had it and done it. I've worked with folks with disabilities too and been able to help them grow.

The problem is that companies often promote folks with excellent technical skills into people-centric roles like an EM. Sometimes you get lucky and they are naturally good at the stuff. But more often than not, they really struggle. It's a conundrum: if you hire someone who nails it on the people stuff but is weaker on the technical side, then they aren't able to make informed decisions.

I've been head of eng before. It's a hard job and I ultimately switched jobs to go back to IC work. I took the coaching aspect seriously when I was in the role, though. I think ultimately you need to understand what your report wants and give them projects, feedback, and resources to achieve those goals. The big caveat being that sometimes the role that person is looking for isn't available. You can't have an entire team of principal engineers, for example. So another aspect is to be clear about what is and isn't possible in terms of career path at your co., and try to find other incentives, like pay, to keep them happy. YMMV of course.


I understand his point and think it's an interesting observation, but honestly I disagree. What AI is great at is handling all the bullshit writing tasks that you have to do and that no one reads or cares about. For example, I got asked to write a blurb for a company newsletter recently. I told an LLM the things I needed to talk about, how long it should be, and the tone I was shooting for. Done in less than a few minutes. Previously that would have taken me at least an hour.


I am a professional writer (in the sense that I have published short stories that were paid for, even though it is a hobby). Recently, my wife was participating in an event where she had to portray an historical figure, and she had a fact sheet and a couple of articles about the person (she wasn't super famous). I used Perplexity with GPT-4o, prompted it with all the materials we had, and asked it to generate a 5-minute monologue in first person for the event. The first draft was excellent; I touched up a few lines and printed it out. Done.


I don't think this example refutes the article. LLMs can obviate bullshit writing tasks and remove the appeal of writing altogether for some people.


The intern use case is 50% of the value I get from LLMs (I refuse to call them AI because they aren't). The other 50% is as a better Stack Overflow where I can look up syntax etc.

Anything that requires a deterministic outcome is not a good use case for them. Anyone who says different is selling snake oil.


One interesting kind of cog: I've seen examples of tools that claim to provide reliable AI-driven web-scraping workflows. You give them a URL and they pull a list of JSON objects (with a set of specified fields) out of a listing page, and instead of you having to write parsing code and deal with changes to the website, the AI-driven scraper tool just keeps chugging along.

Caveat: I haven't actually used these myself; like you, I am a heavy user of AI interns!
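
For the curious, the core of such a tool is conceptually simple. Here's a minimal sketch of the idea, not any particular product (my assumptions: the official openai Python client, the requests library, and made-up field names; real tools add HTML stripping, chunking, retries, and validation):

  import json

  import requests
  from openai import OpenAI

  client = OpenAI()

  def scrape_listings(url: str) -> list[dict]:
      # Fetch the raw page; note there is no site-specific parsing code.
      html = requests.get(url, timeout=30).text
      # Let the model do the "parsing": describe the fields you want and
      # ask for JSON back. A layout change can't break a CSS selector,
      # because there isn't one.
      resp = client.chat.completions.create(
          model="gpt-4o",
          response_format={"type": "json_object"},
          messages=[{
              "role": "user",
              "content": (
                  'From the HTML below, extract every listing as an object '
                  'with the keys "title", "price", and "url". Respond with '
                  'JSON of the form {"listings": [...]}.\n\n' + html
              ),
          }],
      )
      return json.loads(resp.choices[0].message.content)["listings"]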

They are a massive force multiplier for me in coding and in doing investment research.

I use Cursor for generating code changes from feature descriptions, and Perplexity to answer complicated questions using current data from the internet.

Claude and ChatGPT are very nice for language translation - I use one to translate and the other to reverse-translate, to verify the quality of the translation.
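
If anyone wants to automate that round-trip check, it's only a few lines. A sketch under stated assumptions (the official openai and anthropic Python clients; the model names and sample sentence are illustrative):

  from anthropic import Anthropic
  from openai import OpenAI

  openai_client = OpenAI()
  anthropic_client = Anthropic()

  def translate(text: str, target: str) -> str:
      # One model does the forward translation.
      resp = openai_client.chat.completions.create(
          model="gpt-4o",
          messages=[{"role": "user",
                     "content": f"Translate into {target}. "
                                f"Reply with the translation only:\n\n{text}"}],
      )
      return resp.choices[0].message.content

  def back_translate(text: str, source: str) -> str:
      # A different vendor's model translates it back, so the check
      # isn't biased by one model's particular quirks.
      resp = anthropic_client.messages.create(
          model="claude-3-5-sonnet-20240620",
          max_tokens=1024,
          messages=[{"role": "user",
                     "content": f"Translate this {source} text into English. "
                                f"Reply with the translation only:\n\n{text}"}],
      )
      return resp.content[0].text

  original = "The meeting has been moved to Thursday at 3pm."
  french = translate(original, "French")
  print(back_translate(french, "French"))  # compare to `original` by eye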


These days I equate senior titles with mid-level: you've seen some shit but could use a bit more time in the oven, so to speak. I see Staff Engineer and Engineering Lead used for what I would have considered a senior role 7 or 8 years back. I fully expect those will become meaningless as well and we'll move on to something else.


From the post:

> This feature is available at no additional charge in all AWS Regions, including the AWS GovCloud (US) Regions and the AWS China Regions.


> I most often can’t see any use case for AI/ML

I'm admittedly a skeptic on all this, so take what I am about to say with a grain of salt: you should trust that voice. We're in a hype cycle. It was VR before, and crypto before that. Big tech is trying _very_ hard to convince you that you need this. They need you to need this tech because they are lighting billions on fire right now trying to make it smart enough to do anything useful. Short of them coming up with a truly miraculous breakthrough in the next 12 to 24 months (very unlikely, but there's always a chance), investors are gonna get fed up and turn off the money fountain.

It's always a good idea to learn and grow your skillset. I am just not sure this is an investment that will pay off.


ML researcher here.

I will second this. Even if you think localghost is wrong about AI, it is important to always trust that voice of skepticism (to a limit).

But I will say that we are in a hype cycle, and as a researcher I'm specifically worried about this. I get that we have to bootstrap because you can't just say "we want to spend money on research" (why?), but if you make a bubble, the goal is to fill that bubble before it pops. The more hype you make, the more money you get, but the quicker that bubble pops. My concern here is that too much hype makes it difficult to distinguish charlatans from honest people. Charlatans will jump from one cool topic to the next (don't trust someone who was a VR founder, then a crypto founder, and is now an ML founder. Trust people who have experience and can stick with a topic for longer than a hype cycle).

The big danger is that if charlatans dominate the space, the hype disappears, and then there is no money for anyone. So if you do believe in the possibility of AGI and that AI/ML can make the world better (I truly do), make sure that we don't over-hype. There's already growing discontent with products pushed too early with too-big promises. If you really believe (like I do), you have to get rid of the bad apples before they spoil the whole barrel.


Yes. As someone who works in geophysics and AI, I see a lot of people promising things that no neural network will be able to do, no matter how much attention it has, because good data is what people actually need and they typically lack it. There are a ton of use cases for AI across geophysics; I'm even organising a conference about this at the end of September. But IMO there's a bigger need for better data and better software tools first.


This is such a good perspective, and thank you for posting. I agree with your statements, and of all the hype cycles that have happened, I think this does have a real shot of becoming something. Because of that, I think they're going to keep throwing money at this until someone figures it out. Because what else is there left to grift on in tech right now?


  > I think this does have a real shot of becoming something.
I wouldn't be doing a PhD if I didn't. PhDs are terrible. I'm amazed people do them for "the money" and not the passion.

  > Because of that I think they’re going to keep throwing money at this until someone figures it out.
My concern is who they throw money at, and even more specifically who they don't throw money at.

  Some people known to do rug pulls, with no prior experience in ML, who throw together a shitty demo that any ML researcher should be skeptical of?
  $_$ 
  PhD researchers turning their theses into a product?
  ._.
Something's off.... But I guess when Eric Schmidt is saying you should steal and ask for forgiveness later, I don't think anyone should be surprised when unethical behavior becomes prevalent.

  > Because what else is there left to grift on in tech right now?
 
l̶i̶f̶e̶ Hype finds a way. There's always something to grift.

The key thing to always recognize: grifters are people who have solutions and are looking for problems (e.g. shoehorning AI into everything), while honest people have problems and are looking for solutions (i.e. they understand the limits of what we can do and the nuances of these things, and are looking to fill in that gap). I can tell you right now, anyone saying anything should be end-to-end AI is a grifter (including Google search). We just aren't there yet. I hope we get there, but we are quite a ways off. Pareto is a bitch when it comes to these things.


I do not understand the AI naysayers.

The other day I had an idea for a Chrome plugin. I'm a senior dev, but I've never made a Chrome plugin. I asked ChatGPT 4o if my idea was possible (it was) and then I asked it to create an MVP of the plugin. In 10 seconds I had a full skeleton of my plugin. I then had it iterate and incrementally add capability until it was fully developed.

I had to do some stylesheet tweaking and it asked for a permission that we didn't need, but otherwise it completely nailed it. Easily provided 95% of the work for my extension.

I was able to do in 60 minutes what would have probably taken several days of reading specs and deciphering APIs.

Is my Chrome plugin derivative? Yes. Is most of what we all do every single day derivative? Also yes.

How are people still skeptical of the value that LLMs are already delivering?


It's probably because it's providing different amounts of value to different people. For some people it's not giving any benefits, and is in fact making their work harder (me). They are skeptical because people naturally don't believe each other when their personal experience does not match up with another's.


It's the best API searcher ever made but most people don't search APIs. They are waiting for it to make them a grilled cheese or something.


It's the best API searcher for APIs which are used a lot. If you want to do anything other than the most common thing, it can be worse than useless. (I've been running into this in a project where I'm using Svelte 5, and the language models are only interested in telling me about/writing Svelte 4, and with transformers.js, where they tend to veer off towards tensorflow.js instead. This despite me explicitly mentioning what version/library I'm using, and the existing code being written for said version.)

Anyways, they can definitely be very useful, but they also have a golden-path/winning-team/wheel-rut effect, which is not always desirable.


That's on Svelte, though. How can something change so much every major release?


Anyone else getting the feeling that big tech is in full panic mode trying to find the Next Big Thing? Gotta keep the money machine going amiright?


Not really; "big tech" has been innovating non-stop for decades, finding solutions for problems we didn't know we had.

Each new thing eventually reaches maturity and market saturation and they have to find something else, but that's not unique to tech.



The above was in reply to the originally submitted Wired article link. Mods have since changed the URL.


I call BS. If this were a reality, stuff like Django's admin panel or WordPress or Google Forms or a million other things would have already done this type of dev in. It hasn't. You know why? Because there's always some weird requirement, or an update to existing code, that requires a high degree of context in the business. Stuff that AI sucks at. There's also the fact that most stakeholders totally suck at describing what they want. This applies to humans, much less a god damn machine.


I think that's where the human part comes in. Instead of writing a bunch of boilerplate code and other lower-value stuff, you ask AI to generate it and then modify accordingly.

My job definitely can be largely replaced by AI, if a human reviews the code. I'm sure in my domain (data engineering) many positions are not safe.

I need to drill deeper into a tech skill other than just doing ETL. I'm thinking about systems programming.


I mean, look: investing in your skills is ALWAYS a good idea. AI or not, tech becomes obsolete, so you need to remain relevant. So yes, do that. High ROI, that.

AI is useful for getting started in some cases where you can clearly describe, going in, what you want. Therein lies the rub: no one knows what they want with total fidelity starting out. If we could, then we'd probably be able to give accurate time estimates. It's rarely that simple, however. There's always some complexity that hasn't been accounted for. Something that an AI, lacking any kind of real understanding, won't be able to help with, much less automatically fix.


You know, I have tried on numerous occasions to get into Beefheart and it never lands. Fans of his stuff describe it as some higher level of musicianship and sophistication, but it just sounds to me like a band that can't play and is just kind of owning that.

I watched a Beefheart documentary once where a fan said something to the effect of "It sounds like noise but if you listen closely, each band member is playing a totally different time signature and key". I mean yeah. That's pretty much what it sounds like lol.


I dare you to try once more with the album Shiny Beast. IMO it's the best starting point into Beefheart. Fun fact: I never liked rock music; I was close to hating it. I'm into soul, disco, jazz, hip-hop and house. But Shiny Beast is one of my all-time favourites.


Listening now. Will report back!


Ok, gave a listen to the whole thing. While this is certainly a lot more musical than the other stuff I've heard, like Trout Mask Replica, it's still not really for me. The absurdist lyrics are a real barrier, as are some of the guitar sounds. That said, I like his voice a lot. It has a really interesting timbre. I had heard it before, but this album does a good job of putting it front and center.


The one and only time I heard TMR was probably in the late 90s ('98 or '99). Just the name of the album is seared into my brain as some of the most unlistenable garbage I've ever been assaulted with. To make this clear: I once listened to KUT broadcast analog feedback for an hour one morning, and it was significantly better than TMR.


In college I held a contest with my friends: "The Worst Band in the World". It was always a conversation starter at gatherings, where we could debate the merits of our picks.

We had a few people who really hardcore defended Captain Beefheart. There was never any consensus on the worst (the goal was, of course, fun conversation, not consensus). But if I may offer my own contemporary pick: Blood on the Dancefloor is the worst band in the world. And not just because of the music.


Thanks for the report, I appreciate that you gave it a try. Thinking about it, I doubt I would like the album very much if I heard it for the first time now myself. But way back when, it became an instant classic for me. TMR, not so much...


This is excellent advice. I started off with Trout Mask Replica and thought "hm, maybe this isn't for me", but then I listened to the song "Love Lies" on a whim, then the rest of the album, and was hooked.


I have heard people describe the experience as "it just sounds like noise, but eventually it clicks and it is the most amazing thing you have ever heard".

Side note, unrelated: Trout Mask Replica's album artwork features the head of a European carp, and that has always really bugged me. It's not even the same order of fishes (cypriniforms vs. salmoniforms).

