> For example, on LinkedIn, someone may say they were CMO of a company for just three months, and if you take that data at face value, you may think something very bad must have happened for a company to hire and lose such an important role in so short a time.
> But when you look closer, you see that this person was a sophomore in college at the time, that the company was just a month old, and that actually, it wasn’t so much an executive hire as it was kids trying to get something off the ground.
Although not the main point of the article, this is much better advice for finding high-potential employees who are under the radar than the article that was on the front page a few days ago: ("How to hire low experience, high potential people" https://news.ycombinator.com/item?id=39288669 )
When the last article hit the front page I couldn't believe HN was upvoting an article that advocated for deep-diving into people's personal early life history as a hiring technique. That's like reading the tea leaves for interviewing candidates.
Instead, this article reveals the real secret: You have to be prepared to do some legwork and research, as well as look beyond the obvious. You could go hire a lot of smiling ex-FAANG people with perfectly curated LinkedIn profiles and professional headshots by throwing large salaries at them, but you're not going to get the best results that way. It's also one of the least efficient uses of your headcount budget and those people are almost guaranteed to ghost you in 18 months when someone else offers them a fancier title.
There is a lot of gold waiting to be discovered among people who aren't curating perfect LinkedIn profiles or collecting famous company names for their resume. It takes some work to find it, but it's out there.
It’s the shift of perception from technology being “geek stuff” to a lucrative field on par with finance, medicine and law. Before, someone who knew a bit of programming was likely to be passionate and to have high potential. Now they may still be, but they may also just see it as a way to get rich quick.
But the vast majority of people who go to programming bootcamps don't get rich quickly, or rich at all. Same with most people in the world who practice medicine or law.
Or did you mean to convey that people who go to boot camps think it's a way to get rich quick? Your wording is a bit confusing toward the end there
I think he means that "guy who knows how to code" used to be a statistically strong marker of "tinkering guy with potential", which is less true now that IT is internationally perceived as a high-reward career, hence one where people may go either because they like it or just because they want to make stacks.
The problem with this theory is people have been saying it for 25 years. At least as long as I’ve been in the profession. And it’s not historically accurate either: programming was a clerical job for much of its history.
Boot camps take the “seems to be passionate about coding in their free time” signals and help their students try to fit those signals by encouraging them to build personal projects on GitHub etc. This somewhat dilutes the ability of recruiters to check off “has some GitHub projects” as a heuristic, however useful that was to begin with.
>But the vast majority of people who go to programming bootcamps don't get rich quickly, or rich at all. Same with most people in the world who practice medicine or law
but they do get a reasonable shot at a middle class lifestyle with potential to do really well at some point.
That's better than a lot of career paths give you nowadays.
What's the impetus for headhunters to do all this legwork though? Just like no one got fired for buying IBM, headhunters at the large FAANG companies almost always go for the shiny, smiling LinkedIn profiles.
For headhunters, none. For you, if you're running your own small company: huge. A great hire can pull you out of trouble; a bad hire can run the company into the ground.
What about that example do people find to be high-potential? I'm a bit confused because to me 3-month "CMO" of 1-month-old company as a college sophomore sounds at best like a slightly positive signal, and at worst a strongly negative signal, since it could mean anything from "they had enough ambition to work and study at the same time for 3 months and learn stuff on the job" to "they couldn't focus on classes and/or wanted to do something that quickly would put a big-sounding title on their resume".
Honestly any "big-sounding title" within 3-5 years after undergrad is a big red flag and takes a lot to overcome that resume going in the trash. I think best case scenario it means they have connections and got a legitimate job somewhere using those connections, in which case it's extremely unlikely I'm reviewing their resume since I barely graduated with a social science degree from a college nobody's ever heard of. The more likely case is that they aren't actually doing the job of a "Chief Technology Officer" but are instead just a programmer. Which, at 22 years old, is good and what they should be doing. At 22 it's very unlikely you have the technical chops to lead large teams doing big projects, and even less likely you have the emotional maturity to do so.
It shows immaturity to try to shoehorn a C-level job into your resume where it doesn't make sense and where 99% of people will rightfully raise an eyebrow at it. I say this as someone who had "Assistant Director" on my resume as my second job out of college, and I was assisting in the direction of a department of two people. Myself, and my boss. In my actual day-to-day I wrote bad PHP and read HN. It was a joke, and looking back I'm sure there are some jobs I applied to that tossed my resume after seeing that, and rightfully so.
C-level positions from anything less than a Fortune 500 are a joke and an immediate disqualifier. Anyone exaggerating that much upfront makes me not trust anything that will follow.
So much of life these days feels like Freddy Got Fingered. Fondling someone else's horse on their farm does not make you a Farmer.
A sizable chunk of engineers from the most competitive companies got fired - people used to optimizing for everything - resume readability, LeetCode hards, system interviews, etc.
You think they won’t optimize their profiles for their next job?
There's a line somewhere between "optimized" and "highly curated" where I consider LinkedIn a negative signal. Those are both utterly subjective phrases, but nonetheless.
My LinkedIn is utterly moribund, which has come up in interviews a few times. Either they say "you don't really use LinkedIn, huh" or I explain the same. As far as I know this has cost me nothing.
Really? Is it a walled garden when like 90%+ of recruiters are on there?
Would you say “a good resume is a red flag; nobody producing value has time to tend that”?
The big irony is that making things look nice, readable, and sellable is creating value. In the case of a LinkedIn profile or a resume, the value you're creating is your own capability to get a job and a high take-home pay!
I have a highly optimized LinkedIn, yet I don't really spend a ton of time on there. I log in once or twice a month, answer messages, and do a few things. Maybe make a post once every 2 months or so. Nothing crazy, yet my profile is very well optimized.
And this is why: everyone looks at it. Far more people than you would expect.
I can't tell you how many meetings I have gone into where I walk in and see that everyone at the conference table has my LinkedIn profile up on their laptops. They knew they were meeting with me, had no idea who I was, so they looked me up on LinkedIn right before. It's my introduction and sets the tone for a meeting or presentation. This experience is not only in the professional world either. I recently (in my mid-30s) decided to go back to academia to get a PhD and now I am encountering people in academia who use my LinkedIn to learn about me when I reach out for academic or research assistance. Applying for things, presenting at conferences, all of it uses LinkedIn as the central way to communicate your identity. Having a nice profile helps me set a positive first impression that gains me more responses.
Also, I can't fail to mention that recruiters use LinkedIn as a search tool. Not every recruiter is worth your time, but the right ones certainly are, and LinkedIn helps connect you with more of them. They also use your LinkedIn when communicating with their clients. For almost every job interview you do, you can assume the hiring manager and interviewers have looked at your LinkedIn profile before meeting you. Most have probably looked more closely at your LinkedIn profile than your resume.
Curating a LinkedIn profile is not that much work. Maybe 30-60 minutes per month is all it takes if you have a lot happening, and maybe 1 hour per quarter for everyone else. It's not a huge commitment, and yet the gains in my opinion are far greater than the small amount of time spent on it.
I don't care if people want to hate on LinkedIn and say it's stupid. If anything that benefits me, because it helps optimized profiles stand out. Maybe there are a few people (like the parent comment) who have an aversion to LinkedIn for some reason and will look down on someone with an optimized profile. But I truly believe it offers far more value than it detracts. The people that look down on it are usually not in any position of power anyway, or they are a micro-minority.
I just want to second this. I search “name” LinkedIn whenever I don’t know who people in a meeting are. It is extremely rare that the LinkedIn doesn’t provide some useful info.
Much easier to answer questions at the right level of detail when you know the audience.
Exposure, as I understand it, is becoming known to more people. I can't see how society, which is predicated on interpersonal relationships, could be organized to dissuade increasing your renown.
Perhaps with some sort of score that rewarded accomplishing things more than just talking about them. Then you could look at someone's score and trust that they've accomplished more. Seems meritocratic, but also somewhat dystopian, honestly.
I’ve trended in the opposite direction. I don’t particularly enjoy talking about any successes I may have had and I don’t care to make sure others (including in the company I work at) are aware.
It’s not great for career progression. But I actually don’t care about that. So instead of playing that game I play one I actually care about which looks very different than one where exposure matters.
If you solve the Navier-Stokes equations, you'll give a couple of interviews to some local media and then keep working on other problems. The director of your lab will not make a career for himself talking about your achievement.
Some exposure is of course necessary, but when it becomes the main driving force it turns ridiculous.
Google, ChatGPT, Gemini, Perplexity, etc are all retrieval services (with some rudimentary remixing capabilities thrown in, for the latter few). Some human has to actually have interesting insights and write them. Retrieval services help with distribution & discovery. The less we’re connected, the harder it is to find the right audience who value your (niche) insights.
The longer I think about something, the dumber AI gets.
I have given our internal knowledgebase documents to ChatGPT and asked for summaries, and it adds in things from the general internet that are wrong to very wrong contradicting our KB (we are very far off the beaten path of conventional audio studio design testing).
I think this effect is because we learn and the AI doesn't. (In the short term, for any given release of the AI.) Start asking about a topic that's new to you, and it will "know" things you don't. Then you learn those things, and it keeps making the same mistakes.
Over the longer term, new releases have improvements, but the feedback loop is currently pretty slow. It will likely get faster.
I have questioned AI on topics I know little about, and it is terrible at handling queries. Frankly, I think this could be something taught: Can you use critical thinking well enough to break the AI?
AI is so much more than retrieval, since it knows how to correctly apply existing concepts to previously unexplored areas.
You can ask ChatGPT how one might implement and enforce laws in a world where magic exists. Or time travel. Or whatever. Because it knows laws and it knows common fiction tropes and can reason well enough to surmise how to apply them together.
Well in my experience it more knows how to incorrectly apply existing concepts to incompatible areas and exert great confidence while doing it.
I have just two recent examples where I thought it would be faster to use chatgpt instead of reading the manual.
1. How to asynchronously (e.g. asyncio) record from the microphone in Python. ChatGPT gave me code which looked elegant, but didn't work. Well, turns out pyaudio isn't compatible with asyncio.
2. Trying to install a Docker image of some server (I think it was wallabag?) with SSO using OpenID. Again I got an elaborate docker compose file and instructions for the server configuration. After trying to implement it for a while I realised that those configuration options don't exist. ChatGPT just invented them.
My conclusion from these is that one really should only ask AI questions where one already knows the answer/correct way and just wants something to create the boilerplate. I shudder at the thought that people are proposing to use AI for medical purposes without supervision by a professional.
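For what it's worth, the usual workaround for the asyncio/pyaudio mismatch in point 1 is to push the blocking reads onto a thread with `run_in_executor`. Here's a minimal sketch; `blocking_read` is a stand-in of my own invention for a real pyaudio `stream.read()` call, since the actual device setup varies:

```python
import asyncio

# Stand-in for a blocking capture call such as pyaudio's stream.read().
# The real pyaudio API blocks, which is why it can't be awaited directly.
def blocking_read(chunk_size):
    return b"\x00" * chunk_size  # pretend this blocks on the sound card

async def record(n_chunks, chunk_size=1024):
    loop = asyncio.get_running_loop()
    chunks = []
    for _ in range(n_chunks):
        # Run the blocking call in a thread so the event loop stays free.
        data = await loop.run_in_executor(None, blocking_read, chunk_size)
        chunks.append(data)
    return b"".join(chunks)

audio = asyncio.run(record(4))
print(len(audio))  # 4096
```

Not pretty, but it keeps the event loop responsive while a real stream blocks on the hardware.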
I've seen similar, although in my cases the wrong answers turned out to be useful. They used commands or functions or APIs that I had not known about, and when I looked into those, I found that they could solve my problem when used correctly.
GPT-4 does this, but I don't think this is the show-stopper some people seem to think it is. You can ask it to correct itself by feeding it the error messages. It's like having a very knowledgeable, extremely fast programmer on call. Sometimes their guesses aren't right on the first try; so what?
Maybe. I just don’t like to confuse the two, it’s like talking to a middle schooler vs talking to a graduate student. In both cases you can say the experience sucked but you should specify which was it.
To an extent, but it's only about 85-95% correct even in the good domains, which is fantastic for a few steps… and then it makes a fundamental error and half the time doesn't understand corrections given by domain experts that would get it back on track.
This limitation is the only reason all of us still have jobs, so don't plan your economic future on the assumption that AI gets past this flaw; it could happen with no warning. How many steps it can do before falling over isn't something I can predict, despite regular use of even the OpenAI playground before ChatGPT came out.
You make the following comment regarding artificial stupidity systems (AI)
> since it knows how to correctly apply existing concepts to previously unexplored areas
and yet these systems know nothing at all. Far too many people (including the developers of such systems) have failed to understand that none of these systems can go beyond the programming that humans have incorporated into them. Yes, they appear to exhibit certain [abilities], but no more than any other essentially mechanical device with the limited capabilities we have been able to design into them.
You can certainly pose questions and these systems (within the constraints of the programming involved in them) can retrieve and correlate data that is accessible. But the insights drawn will require human involvement.
Over decades, I have built tools to help in the analysis of all sorts of data sets and it has taken a human asking the [right] questions to get appropriate outcomes.
We do not understand our own intelligence let alone being able to build any artificial intelligent system that can operate on its own.
What does amaze me though is that we create natural intelligence systems all the time and they are called children. I have been avidly watching the development of my youngest grandchild and she makes an absolute mockery of any artificial system we have built anywhere.
In a very real sense, every artificial stupidity system (AI) that we have built is as complicated as a hammer or a knife.
It is what we (human beings) do with these simple tools that determine the outcomes.
> and yet these systems know nothing at all. Far too many people (including the developers of such systems) have failed to understand that none of these systems can go beyond the programming that humans have incorporated into them.
> within the constraints of the programming involved in them
I think the point of machine learning is that programming isn't "incorporated into them".
These systems may not be intelligent yet, but we certainly didn't program them - the majority of their features and abilities arise from the training data they were exposed to.
I get what you’re saying, and will echo that these systems are still just tools that enhance what we, as humans, are capable of doing. But I think you’re failing to consider that even comparatively simple systems can exhibit emergent behavior, well beyond their programming. You’re being unreasonably reductive and dismissive.
Emergent behaviour can occur, not a problem. But if you study such systems, I think you will find that the emergent behaviour is based on the programming involved and is not "beyond the programming".
When it comes to intelligence, this is not something that we can say is actually emergent.
There are currently a number of projects that are looking into intelligence and free-will. There are researchers on the same teams who hold quite different opinions - the results for these projects are not at all conclusive.
I admire you for pressing home your point that others are missing. I practice a visual art form (which I won't name; many other smaller cultures around the world too have their own) which will never "emerge" from AI _unless_ it is programmed in, or trained on the visual art itself. Even then, I don't see how it could ever figure out the intricate detailed meanings without them being programmed in. The people trying to counter you are thinking only within the culture within which these AIs have been created, and thus it does seem to them that anything AI creates is emergent, because it seemingly created something they haven't, didn't, couldn't, wouldn't. Without the programming (never mind the electricity), AI is still a blunt tool.
It is shocking to me how many people miss the fact that the big prediction machines, trained on lots of data, are fundamentally historical and based on that data.
In what sense is a novel, never-before-seen image that comes out of DALL·E 3 "encoded in" in the system? The image may be, in a sense, encoded in the latent space of images, but so is every image (of the same dimensions) that ever has been or will be.
Those images are the result of how the data is processed by the code. Even if you put in a random number generator, the code processes those numbers based on its internal code constraints.
Now a well designed and built code base may be able to generate every image of the same dimensions. You can certainly write code that can exhaustively generate every possible image, but the question here is whether this exhaustive generation is intelligent.
This isn't really true except in the most reductive sense.
If you ask for a picture of Shakespeare dressed as Spiderman reading a McDonald's advert on the moon, no one has programmed that and the fact it knows these individual concepts is no different to the fact a human has seen them too.
> The specific behavior you get is a result of the programming.

What does "beyond the programming" mean? I think it's actually a meaningless statement.
The laws of physics encode every motion and interaction of every particle.
None of those fundamental laws directly says "water is wet".
> I would love to see an actual example where a program does something not encoded in itself already.
You see it every time you use a machine learning system. Even the simplest models, say a linear regression line in a spreadsheet, "learn" from the data, not from the code.
Wanted to give it a try. I didn't even consider many of these and doubt I would've come up with such a good list even if I had a lot of time. It took much less than 1 second to generate the response.
In a world where magic exists, laws and the legislative system would likely undergo significant adaptations to address the unique capabilities and risks presented by magic. Here are several ways in which these systems might be different, based on various assumptions about the nature and scope of magic:
1. *Regulation of Magical Practices*: Just as with professions and practices in our world, there would be regulations governing who can practice magic, how it can be learned, and where it can be used. Licenses or permits might be required for practicing certain types of magic, similar to medical licenses or driving licenses.
2. *Magic-Related Offenses*: New categories of crimes would exist, such as unauthorized enchantments, illegal curses, magical fraud (using illusions or mind control for deceit), and unlicensed necromancy. Laws would specify what constitutes a magical assault versus a non-magical one, with potentially different penalties.
3. *Enforcement and Jurisdiction*: Special law enforcement units trained in counter-magic or magical defense would be necessary to handle crimes involving magic. The jurisdiction might depend on the nature of the magic used; for example, cross-dimensional or time-altering magic could involve a higher authority or international magical council.
4. *Evidence and Investigation*: The collection of evidence might include magical means, such as truth spells, scrying (viewing distant or future events), or reviewing memories directly. However, there would also need to be laws protecting individuals against invasive magical investigations.
5. *Rights and Protections*: There would be laws to protect non-magical beings from discrimination or harm by magical individuals or entities. Similarly, sentient magical creatures or beings might have their own rights, leading to more complex legal considerations regarding interspecies interactions.
6. *Ethical Use of Magic*: Legislation would likely address the ethical implications of spellcasting, summoning, and other magical practices, including the use of magic on unwilling subjects or creating sentient beings. Regulations might prevent or restrict the use of magic that can alter someone's will, memories, or identity.
7. *Magical Contracts and Agreements*: The legal system would need to address the binding nature of magical contracts, which could have irrevocable consequences. Laws might specify how such contracts can be fairly negotiated and what remedies exist for breaking a magical contract.
8. *Intellectual Property*: In a world where spells, potions, and magical items are created, there would be intellectual property laws protecting the creators of magical inventions. This could include patents on spell formulas, copyright on magical texts, or trademarks on magical item designs.
9. *International Relations and Security*: On a larger scale, there might be international laws governing the use of magic, especially in conflicts or espionage. Treaties could limit the use of particularly destructive spells or ban the creation of magical weapons of mass destruction.
10. *Education and Public Safety*: Laws would govern the education and training of individuals with magical abilities, possibly requiring mandatory schooling to ensure that magic is used safely and responsibly. Public safety regulations would address the storage of magical materials and the construction of magically protected buildings.
In this magical world, the legal and legislative system would need to be flexible and innovative, capable of addressing the ever-evolving nature of magic and its impact on society. It would be a blend of traditional legal principles and new rules designed to manage the unique challenges posed by the existence of magic.
All of these are basically "what we have now, but with magic".
I'm sorry but how magic and the legal system interact is a subject explored in fiction. Hell, even Harry Potter touched on the subject. And that's not going to even touch authors like Jim Butcher and Piers Anthony who have entire series about "this, but with magic".
Fun little exercise: while reading above, see what happens when mentally substituting “magic” with “AI”, words like “enchantments” with “algorithms” and so on.
That's also something I often encounter with ChatGPT. It often is very wrong about things I ask, because I often wonder and try things off the beaten path. That's our advantage against LLMs.
No, the advantage is that you have context. That context is HUGE. That's why you can't, and won't be able to, have ChatGPT actually do your job for you. You still need to ask it questions. And to even know what questions to ask, you need to have context.
You don't just need context, you need to finetune a model around that context, since working memory isn't enough. That is what human workers do: they spend the first months finetuning their model around the company context before they can start to provide valuable code.
Once you can pay to get your own fine tuned version of the big LLMs maybe we can start to do real work with these.
However, my point is that having the context and the ability to use it is sooooooo far out of reach for a computer it's unfathomable. In much the same way that life is unfathomable at some level. It's this self-referencing infinity that is somehow collapsing into results while being uncollapsible.
Computers can't do that, not without something fundamentally different.
Purely probabilistically, trying things off the beaten path is just a matter of higher LLM temperature. Turning up GPT-4's temperature is basically an expensive /dev/urandom pipe today, but I don't see any fundamental reason why LLMs can't catch up. Maybe all it takes is tinkering with how temperature is calculated.
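For anyone unfamiliar: temperature just rescales the logits before the softmax, so higher values flatten the sampling distribution. A toy sketch of the standard formula (not any particular model's internals; the logit values are arbitrary):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by T before the softmax: T > 1 flattens the
    # distribution ("more creative" sampling), T -> 0 approaches argmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
for t in (0.5, 1.0, 2.0):
    print(t, [round(p, 2) for p in softmax_with_temperature(logits, t)])
```

At T=0.5 the top token dominates; at T=2.0 the probabilities spread out, which is the "off the beaten path" effect being discussed.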
You've missed the metaphor, I think. Higher temperatures will make it "more creative" for lack of a better term, but there's a lot of specialist knowledge it doesn't have and which you can't give it just by twiddling a dial.
It has a massive speed advantage that lets it read the whole internet, but it's dumb enough that it also needs to, and when even that doesn't give it enough examples, it's like asking me to improvise Latin based on what I recognise of the loanwords in English.
The fact is that we're not that interconnected though. Finding people that have useful, actionable information, and want to share it with you, is extremely difficult.
If you want to maintain an information advantage, you should definitely not rely solely on the web. Simply reading a book is often 10x better. Comment from a few months ago:
One thing I've found comically underestimated is books. It's not whether the information is physical or electronic, but what actual corpus of information is available
...
So yeah I'd say 3 main repos of knowledge are: the open Internet, printed books, and Scihub, and many people today only use the first one
Though I'd also put "talking to the right people" and by extension "working at the right place" pretty high up there in getting novel information.
I think for a certain generation, Google probably diluted their ability to access other knowledge bases, and act upon that info. And for the generation currently learning how to learn, LLMs may do the same thing.
The article misses something super critical: why? More specifically, why a few key hires can change the trajectory of a company. The answer almost always comes down to one word and applies to companies of any size: leadership.
Leadership is a concept largely foreign to the software industry for two reasons.
* Most software developers are never exposed to strong personalities and have no idea what strong leadership looks like. An excellent software developer tends to score high in agreeability, but a strong leader knows how to turn that down to 0 for maximum confrontation and/or defiance.
* Strong leaders know when to not follow trends with extreme criticality. This is highly paradoxical since so many software companies are funded by advertising which is a business of growing trends. Real leaders set new trends and take share from existing players setting their brand reputation in the process. Most people in software are deathly afraid to abandon conventions of comfort whether in business or in product/process innovation.
> Leadership is a concept largely foreign to the software industry
Maybe in the companies you’ve worked for, but I haven’t found this to be true at all
> An excellent software developer tends to score high in agreeability, but a strong leader knows how to turn that down to 0 for maximum confrontation and/or defiance
Hard disagree. “Maximum confrontation and/or defiance” has never been a goal of good leaders who are trying to build a team that works together.
Encouraging people to speak their mind is good. Encouraging “maximum confrontation” is just going to create chaos. The goal is to work together to ship, not to argue and defy all the time. I can’t think of anyone who would want to work on a team where everyone had agreeableness dialed down to 0 where leaders encouraged confrontation all the time, except maybe for people who just like to argue a lot.
> Most people in software are deathly afraid to abandon conventions of comfort whether in business or in product/process innovation.
Another strong generalization that I can’t agree with. Most people I’ve worked with in software have been so aggressive about bucking trends and trying new things that we’ve had to dial it back a notch. A lot of the debates I’ve had with teams have been about choosing boring, stable technologies over the newest cutting edge technology that’s popular on Twitter. Same goes for business strategies, where I’ve had to deal with everyone from product managers to sales people trying to do things their own creative way when the standard, boring practices are what finally got the job done.
Leaders do more than build teams. Team building is inward looking. Real leaders are outward looking and have to be willing to defend their team(s) at personal cost.
This reads as a criticism of team building (maybe that wasn't your intent, but it was my first reading.)
Strong teams need a balance of transformational leadership, servant leadership, some laissez-faire management and some micromanagement (etc, etc).
I.e. situational leadership.
In my experience, given how many ANDs you need, the most successful teams have a combination of leadership styles and roles from different people at different times. Balancing leadership and responsibilities across people also allows the personal cost of such activities (whether it's emotional labour, political backlash, etc) to be distributed as well.
This, I also believe, is why co-foundership appears so often in stories of successful companies. You always end up with "the leader that is good at X despite Y", and there is always someone who can come in to provide "Y without necessarily X" in a way that makes their joint leadership work in ways that individually they would fail.
> Encouraging “maximum confrontation” is just going to create chaos. The goal is to work together to ship, not to argue and defy all the time. I can’t think of anyone who would want to work on a team where everyone had agreeableness dialed down to 0 where leaders encouraged confrontation all the time, except maybe for people who just like to argue a lot.
To "turn that down to 0" doesn't mean operating 100% of the time in non-agreeable mode. It also doesn't mean doing that only with your team. It also doesn't mean encouraging people to do it. It's just that sometimes you gotta put your foot down.
You jumped to another extreme, but the whole point is just that both extremes are problematic. An always 100%-agreeable leader needs a lot of luck to succeed.
I can see where you’re coming from and I have some more context to add, which can be helpful.
This phrase:
> a strong leader knows how to turn that down to 0 for maximum confrontation and/or defiance
Feels like something uttered by a cocaine snorting MBA who gets high off creating hostile work environments and enjoys firing people.
I’ve been in environments where everyone is agreeable because they have to be agreeable. They are agreeable to a fault. These environments have a banal toxicity that is hard to pin down, but it shows when there is disagreement. These orgs value agreement above all else, above performance, above achieving goals. They suppress valuable insight because it is uncomfortable to them. FUD is a great rhetorical strategy that I’ve seen used to squelch conversation; thinly veiled appeals to authority are another. This breeds complacency and destroys value.
We have a management class in the west that believes that management can be a mechanistic exercise of gathering metrics, assessing performance, and assigning corrective actions. That it can both be systematized and abstracted away from the work is a core assumption of western management. This is a paint by numbers approach, similar to Searle's Chinese room, and works to take the leadership out of management.
You’re right that most of the time the best choice for your business and customers is to choose the boring, stable technology that works. Avoid the rewrite, don’t use the hottest new frameworks, or languages that are in vogue.
Experience generally says to dismiss developer desires for novelty. But sometimes you do need a novel solution, and your developers are the ones who are going to tell you. You will not figure this out by following the rules taught in management school. Metrics like CPI and SPI will only tell you that you’ve made the wrong choice on your project when it’s too late. You need to make a decision, which means seeking out information and making the best call based on what you can get. This takes leadership.
The best programmers I know have strong opinions. They will tell you that you are wrong. They do this to learn, both to test themselves and the people around them. If they are wrong, they withdraw, then move on. It’s the healthiest thing I’ve seen and is actually what builds success.
Disagreement is healthy and good and should be encouraged, when the goal is knowledge gathering. A leader who is turning agreement to zero is a leader who wants to be decisive.
Stop the madness. It does not matter how things feel. It matters how things are measured, which includes employee retention and delivery and product quality. Most of the comments here loudly scream none of these people have been in management.
The one big difference between a leader and a contributor is ownership. Everyone here has danced around the idea of ownership without addressing it. Unless you have owned liability in a managerial capacity it doesn’t matter how things feel, because your perspective is too narrowly construed.
Disagreement, when voiced, is a form of confrontation. It is healthy. Healthy teams are brutally honest. Most of software absolutely abhors confrontation.
I'm sure all software developers have been exposed to strong personalities, often in the form of bullies, prima donnas, class clowns, big men on campus, pointy-haired bosses, etc.
The problem is that—as a generalization—strong personality tends to correlate inversely with competence, and software engineers esteem the latter above all else (because a machine is not swayed by charisma, after all).
Good leadership is really the combination of strong personality and competence. That combination is so rare that for most software engineers, the safest bet is to simply try to avoid employers with strong personalities entirely.
Aggressive personalities aren’t the same as strong personalities, in precisely the way status trinkets aren’t class.
I’d say strong personalities are essential for consensus:
When it’s clear who is holding court, there’s much less talking over each other and bickering, precisely because the leader has made it clear what the expectation is — and everyone knows it will be enforced.
“Let’s quiet down for a moment, I really want to hear what @edmundsauto is saying.”
There is a class of people who believe leadership is something you can learn on its own. These are the people who join all the useless committees for resume-building purposes. Leadership only has value when paired with significant knowledge in some domain.
> Avoiding strong personalities purely for the sake easing hostility sounds a lot like cowardice.
Cowardice is when you avoid something because of fear. GP is talking about avoiding people who will annoy you.
> Cowardice is when you avoid something because of fear. GP is talking about avoiding people who will annoy you.
That sounds like autism. Avoidance for whatever imaginary reason your creative mind can conjure is still cowardice. The excuses only fool yourself. Other people will see it as they choose.
what you call the "software industry" aligns with my experience in the "enterprise software industry"
outside of that, it couldn't be further from my experience. many many defiant and confrontational founders/PMs, who often actively avoid the accepted "best practice" and rethink what they're doing if they find themselves in the majority
> many many defiant and confrontational founders/PMs, who often actively avoid the accepted "best practice" and rethink what they're doing if they find themselves in the majority
My experience as well. Big enterprise software companies reward keeping your head down and toeing the line, but "confrontational and defiant" personalities trying to reinvent the wheel are a common feature at every other type of software company.
You may have missed the point of this article. Here, we’re being exposed to the “how”. The “why” is a separate question; if you have a why, this is the how that follows.
And it is extremely hard to find a person who prefers the people who were deep in the trenches.
What I see most of the time is that managers buy the bright cover; very few bother to look deeper.
And it's easy to understand. As the old wisdom says, "Nobody ever got fired for buying IBM." While I agree that not many vendors have a near-cloudless reputation like IBM's, it's simply easier, and takes less energy, not to swim against the flow.
I love articles like this. Short, concise, and filled with interesting and unique tips instead of the usual cliché drivel. I particularly loved the outlining of Zack's methods.
To double down on one of the points a bit: it's well observed and accepted that the CEO or senior authorities get all the attention, even in politics. But there's always the less-known team behind them that did most of the work.
They don't get the credit because they couldn't take the RISKS that the senior folks did. Senior positions come with their associated risks and stakes, and not everyone has the guts to tackle that kind of exposure.
> Senior positions come with their associated risks and stakes
Strong doubt on the risks part. Sure, for founders. But for executives at established companies, the more senior you are the less risk you have. Severances get larger, firing rates get lower, and you probably have more cash in the bank.
Nobody asks them, and I've seen attempts to prevent assistants of members of Parliament from running as candidates. They have the know-how and the moat is pretty narrow, so the acting politicians felt the need to widen it somewhat.
A lot of those risks are for other people, not themselves. They are there for a variety of reasons but accepting risks is probably not the primary one.
Interesting discussion. I agree with the author's premise.
IMO finding these undiscovered gems will stand out more; a lot of content has the veneer of originality but is lacking.
A lot of content online feels unoriginal or uninspired. Perhaps we'll see the pendulum swing back toward original long form journalism to tell these stories.
For my back-burner startup, the reason it would work (in theory) was due to a particular thing we were doing that none of the other players in the market would do, even if they thought of it.
(Unfortunately, just as I got time to move it to front burner, it was probably killed by LLM consumer popularity.)
Find a human who can bypass robots.txt/limitations of ChatGPT and find less traveled sources given the internet fails to archive itself and important information can be wiped any minute? Isn't that just called good journalism?
The author primarily talks about interviewing interesting but un-interviewed people as a new source of interesting information to write about. The author talks about their process of finding new people to interview.
The daily newsletter business is a crowded space, to say the least. The barrier to entry is as low as it could possibly get and the unit economics are brutal.
Getting ahead of competitors isn't some abstract, arbitrary goal. It's the only way to survive in a business like this.
If your goal is just to write for the sake of writing and you don't care if anyone reads it, this article probably isn't for you.
I think it’s almost there for me. I care whether people read me (I am an amateur fiction writer), but it’s not a competition for them to read me and not others, or to praise my writing more than they praise others’.
Tangent. I understand the appeal of a four hour work week but how can people take that seriously? You will either have to have a pretty unappealing standard of living or you will have to exploit other people. If you only work four hours per week, then you can only consume as much as can be produced in four hours each week. But I guess people mostly hope to make the exploitation strategy work, manage to only produce something for four hours per week but consume stuff that took others forty hours to produce.
It has been years since I read it, but IIRC the main point was to build businesses that, once they are up and running, don't require much work to keep bringing in money.
I suspect achieving what he was aiming for is much harder now than it was when he wrote the book, as the barrier to entry to starting businesses goes down and the competition continues to rise.
But I think there is still something to be said for the goal of building a "lifestyle" business rather than trying to build a hyper-growth startup. If you are strategizing through the lens of "How can I make enough money to survive and thrive with the minimal amount of time spent?" vs "How can I grow my business to $100m" you'll come up with very different plans.
How so? If I build a simple app that does its job and does it well, I don't need to work on it for 40 hours a week for the rest of my life. After the initial effort, you can put in almost no work, just periodic maintenance, and you can keep bringing in sales.
There are lots of opportunities like this. There is nothing morally or ethically questionable about this. It doesn't require you to over-inflate your prices. That is the power of software. The VC hyper-growth world is heavily focused on continuously changing your software, but that isn't a requirement (and in fact, is often not desirable to your customers).
If the market works as intended, someone will replicate your app and sell it cheaper, the price should eventually converge to the costs. And you might not agree, but I find it morally questionable [1] to run a business that pays good money for little work.
[1] There is of course more nuance to this than expressed here.
How does that make any sense? If you’ve built something that provides good value to people, more than they are paying for it, how can you believe that it is a morally questionable arrangement?
Imagine a tiny economy with two people. There is one guy that produces everything humans need and he can produce one day worth of supplies in four hours. But to keep his business running he depends on a piece of software that the other guy wrote, without it he could not produce anything. The other guy happily licenses his software for one day of supplies per day. Now the only thing he has to do besides enjoying all his supplies is to now and then spend an hour of work to maintain the software.
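The thought experiment above can be put into rough numbers. A toy sketch, with all figures taken from the hypothetical scenario in the comment (not real data):

```python
# Toy model of the two-person economy described above.
# The producer makes one day's worth of supplies in 4 hours, but must
# produce two days' worth per day: one to consume, one to pay the
# software license fee. The software author does ~1 hour/week of upkeep.

HOURS_PER_DAYS_SUPPLIES = 4      # hours to produce one day of supplies
LICENSE_FEE = 1.0                # supplies (in days' worth) paid per day
MAINTENANCE_HOURS_PER_WEEK = 1   # occasional upkeep by the software author

days = 7
producer_hours = HOURS_PER_DAYS_SUPPLIES * days * 2  # own supplies + fee
author_hours = MAINTENANCE_HOURS_PER_WEEK

producer_supplies = 1.0 * days          # keeps one day's worth per day
author_supplies = LICENSE_FEE * days    # collects the fee every day

print(f"producer: {producer_hours} h of work for {producer_supplies} days of supplies")
print(f"author:   {author_hours} h of work for {author_supplies} days of supplies")
```

Over a week, both parties end up with the same supplies, but the producer worked 56 hours to the author's one — which is the asymmetry the comment is pointing at.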
I think the extent to which writers have to get creative in finding sources for novel pieces is taken for granted. I did not realize how tactical and painstaking the process is to create the content that I casually skim and enjoy. This strengthened my appreciation of the process.
I agree this is kind of trivial, but doing anything more than the average person already puts you in the high percentiles.
My abstract strategy for making contact with the right people:
Try to imagine things from the perspective of the people you're trying to contact. What is their day-to-day, what are they sick and tired of? What will make your cold email stick out in that context? It's click-bait targeted at a specific person. Again, this is probably just one level above what the average person will do. But that already puts you ahead of most people. It gives you some chance of reaching the right person.
The thing is, almost all of the "secrets" of successful people are trivial, it is just that few people are doing them consistently. Anybody could practice basketball as hard as Kobe Bryant did, but almost nobody is actually willing to put in the consistent work over years and years.