
I think this confuses the responsibilities a CEO may have (write memos, etc) with the responsibilities they must have (ultimate authority/responsibility for company decisions/direction). If a CEO hired somebody to do ~all company comms, and maybe financial modeling, and even make important decisions about company strategy, the CEO did not hire another CEO. The CEO delegated. All managers do this to some extent, that's the point.

There still needs to be some entity who says "here is when we'll listen to the AI, here are the roles the AI will fill, etc", and that entity IMO is effectively the CEO.

I suppose you could say that entity is the board, and the AI is the CEO, but in practice I think you'd want a person who's involved day-to-day.

The article quotes:

> "...But I thought more deeply and would say 80 percent of the work that a C.E.O. does can be replaced by A.I.”...That includes writing, synthesizing, exhorting the employees.

If AI replaces those things, it has not replaced the CEO. It has just provided the CEO leverage.


> It has just provided the CEO leverage.

Exactly. And you could say this about a lot of other roles as well. AI certainly has its flaws, but at this stage it does rather frustrate me when people actively resist using that leverage in their own roles. In many ways I couldn't go back to a world without it.

My days are now littered with examples where it's taken me a minute or two to figure out how to do something that was important but not particularly interesting (to me), and that might otherwise have involved an hour or two of wading through documentation, so that I can move on to other, more valuable matters.

Why wouldn't you want this?


Same reason people try to compete in sports without using PEDs, or prepare for standardized tests without a good nootropic stack.

There's some large contingent of the population who believes being "natural" places them on a moral high horse.


I think part of the idea here is that we’re not talking about putting GPT-4o in charge of a company, we’re talking about GPT-7a (for “agent”). By the time we get to that turn of the game, we may not have as many issues with hallucinations, and context size will be immense. At a certain point the AI will be able to consume and work with far more information than the human CEO who “employs” it, to the point that the human CEO essentially becomes a rubber stamp, as interactions like the following play out over and over again:

AI: I am proposing an organizational restructuring of the company to improve efficiency.

CEO: What sort of broad philosophy are you using to guide this reorg?

AI: None. This week I interviewed every employee and manager for thirty minutes to assemble a detailed picture of the company’s workings. I have the names of the 5272 employees who have been overpromoted relative to their skill, the 3652 who are underpromoted or are on the wrong teams, and the 2351 who need to be fired in the next year. Would you like me to execute on this plan or read you all the rationales?

CEO (presumably after the AI has been right about many things before): Yeah OK just go ahead and execute.

Like, we’re talking about a world where CEOs are no longer making high level “the ship turns slowly” decisions based on heuristics, but a world where CEO AIs can make millions of highly informed micro-decisions that it would normally be irresponsible for a CEO to focus on. All while maintaining a focus on a handful of core tenets.


This is just saying “if an imaginary thing was good at being a CEO, it could replace a CEO.” Which is a tautology on top of a fantasy.


> I think this confuses the responsibilities a CEO may have (write memos, etc) with the responsibilities they must have (ultimate authority/responsibility for company decisions/direction).

The vast majority of articles about CEOs are populist rage-bait. The goal isn't to portray the actual duties and responsibilities of a CEO. The goal is to feed anti-CEO sentiment which is popular on social media. It's to get clicks.


Then why did the CEOs in the survey agree with the premise?


Do CEOs have much responsibility though? I've only ever seen them punished when they've done something intensely illegal, and even then they usually get off for lighter crimes.

Golden handshakes mean if they move on they win. They can practically suck a company dry & move on to another one thru MBA social circles.


I think there's a bad assumption in here, which is that pay should keep pace with productivity gains in the first place.

I'd argue the whole point of productivity gains is that they do outpace pay. The idea is the same work generates more value. Some of that extra value can be passed back to the employee, but if all of it is passed back to the employee, then the goods produced don't actually get cheaper. If nothing gets cheaper, there's no incentive for a business to invest in tech that makes employees more productive.

The data in the piece has a lot of problems with it too, but I think the core assumption is fundamentally off-base.


These are relative terms. Let's say in 1980 I was flipping burgers for $3 an hour and making $10 for the company. Let's say (just for the sake of argument, ignoring inflation) that in 2020 I'm making $20 for the company. For my pay to keep pace with productivity gains I'd be making $6 an hour, leaving my employer $7 better off (they were making $7, now $14). My pay would've kept pace with productivity - it doubled, my pay doubled.

When people say that "wages kept pace with productivity growth" that's what they're saying - that a 10% increase in productivity resulted in a 10% increase in pay, not that a $10 increase in productivity resulted in a $10 increase in pay.
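
To make the distinction concrete, here's a quick sketch using the hypothetical burger numbers above (purely illustrative, not real data):

  # Hypothetical numbers from the comment above, not real data.
  wage_1980, revenue_1980 = 3.0, 10.0   # $/hour paid to the worker, $/hour generated for the company
  revenue_2020 = 20.0                   # productivity doubled

  growth = revenue_2020 / revenue_1980  # 2.0, i.e. +100%

  # "Pay keeps pace with productivity growth": wage scales by the same percentage.
  wage_matched_growth = wage_1980 * growth            # $6/hour
  employer_cut = revenue_2020 - wage_matched_growth   # $14/hour, up from $7/hour

  # "Pay captures the entire productivity gain": the extra $10/hour goes to the worker.
  wage_captured_gain = wage_1980 + (revenue_2020 - revenue_1980)  # $13/hour

  print(wage_matched_growth, employer_cut, wage_captured_gain)    # 6.0 14.0 13.0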


This flies in the face of so many HN readers that claim that salaries are based on the value created by the employee.


> so many HN readers that claim that salaries are based on the value created by the employee.

That's because those people are sadly wrong. Salaries have never been based on value created. They have always been based on the minimum a company could pay for the talent they desire.


Value created is the ceiling, the floor is min(cost of employee's alternative, cost of employer's alternative).


> if all of it is passed back to the employee, then the goods produced don't actually get cheaper.

There's a spectrum between none and all.

Also, why wouldn't goods get cheaper? Scaling up production is easy so it's not a supply issue. There would be two outcomes: people would buy more goods overall (as has happened over time) or they would work less and enjoy more leisure time.


No bad assumption here: people say that productivity growth and wage growth ought to be the same. If this were not the case, then in the long run 100% of value added would go to capital! Of course this does not mean that the productivity gains should directly translate into wages.


I think you have to look at the division of the added value by the changes in productivity. Were it something closer to 50-50 that might be reasonable, but so far as I can tell based on changes in wages over time, that's not the case at all.


Totally. I don't think there's anything wrong with saying "Gee, this store is making way more money but its employees are being paid the same, that sucks". I think indexing the minimum wage to inflation makes sense. I think a lot of low-skill jobs should be higher-paid, and I think raising the minimum wage is a good tool in some cases (although I think the people pushing for national increases often overlook the effect that doubling the wage will have in a rural area, where the cost of labor really does impact the ability of a smaller store to stay open).

My point is just that I don't think "wages should keep pace with productivity" is true. If wages always rose with productivity, we'd be focusing all the gains on the people in the sectors where productivity is growing, and not lowering the cost of goods for everybody else.


The article argues that “the minimum wage should keep pace with productivity growth,” not “pay should keep pace with productivity gains.”

Suppose a business with a profit margin of 20% increases its revenue by 10% without increasing labour input. If the owner captures that 10%, the profit margin is now 27%. If the revenue is paid out as higher wages (“pay keeps pace with productivity gains”), the profit margin falls to 18%. If wages increase by 10% (“pay keeps pace with productivity growth”), the profit margin remains constant and profit can also increase by 10%.
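
A quick sketch of that arithmetic, using illustrative figures (revenue 100, costs 80 for the 20% margin):

  # Illustrative business: revenue 100, costs 80, so a 20% profit margin.
  revenue, costs = 100.0, 80.0
  new_revenue = revenue * 1.10      # revenue grows 10% with no extra labour input

  # Owner captures the whole increase: margin rises to ~27%.
  margin_owner = (new_revenue - costs) / new_revenue            # 30 / 110 ≈ 0.273

  # Whole increase paid out as higher wages: margin falls to ~18%.
  margin_wages = (revenue - costs) / new_revenue                # 20 / 110 ≈ 0.182

  # Wages rise by the same 10% as revenue: margin stays at 20%, profit also grows 10%.
  margin_matched = (new_revenue - costs * 1.10) / new_revenue   # 22 / 110 = 0.20

  print(margin_owner, margin_wages, margin_matched)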

What happens to the profit margin of individual businesses will vary. But across the entire economy, it’s reasonable to expect that the wage share will remain pretty constant, and until the 1980s, it did. https://en.wikipedia.org/wiki/Wage_share


  If nothing gets cheaper
Would anything need to get cheaper if everyone was making more money?


It depends on why productivity is increasing. Generally speaking, the employer is pushing productivity increases (implementing better processes like assembly lines, buying equipment that lets an employee do more, etc). If the employer is pushing the increases, they need some incentive to do that.

Often that incentive is making goods cheaper, so they're more competitive. That's a huge generalization, but it makes the point that there's nothing wrong with productivity gains outpacing wages.


If productivity doubles, and worker pay doubles, then the cut going to capital also doubles. Don’t see the problem with that. Your math is wrong.


But isn't this the cause of inequality? If you can't pay commensurate with productivity, a larger share of the value flows to capital owners and you stretch out the exponential curve. There are humans on the other side of the equation, and there are other costs besides labor involved with operations.

There's a reason why common good capitalism is an increasingly attractive model for a lot of folks. A society cannot maintain the maximize returns to capital model indefinitely. I think many people are realizing that Friedman's economics either cannot be sustained or lead to a place where many would prefer not to go. People, and therefore the invisible hand, are inherently flawed.


The problem is all that extra value that workers have been generating the past 50 years is going into the pockets of the rich.


Presumably, it is possible for a product to also get better.


Yes, and product improvement is largely attributable to R&D, rather than assembly-line-level production. Those gains disproportionately flow to high skill white collar workers.


(Benchling PM here)

As far as sweeping generalizations go, I think that's a pretty reasonable one :). I'd imagine that almost all of our users (including most lab admins who assign permissions) don't want to keep a complex permission system in their head.

What we've seen is that this system ends up leading to a small number of well-designed and well-named roles. Most users see the roles themselves ("DNA Designer"), but don't need to worry about exactly what the configuration behind it is.

Somebody needs to be aware of the powerful (although not quite Turing-complete) configuration system, but what we've seen in practice is that it's usually one or two technical admins whose job it is to gather requirements from the different teams and figure out how to translate those into a few digestible policies that everybody else can assign.

We certainly didn't invent this model (it's basically RBAC), but we've found it's a good way to address the often-complex demands of a big pharma (where IP is crazy regulated from like, 3-4 angles) without taxing the individual scientists too much.
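
For a rough idea of the shape of this, here's a hypothetical sketch (role names, permission strings, and code are illustrative, not our actual configuration or API):

  # Hypothetical RBAC sketch: one or two admins define a few named roles,
  # everyone else only ever sees the role names. All names are illustrative.
  from dataclasses import dataclass, field

  @dataclass(frozen=True)
  class Role:
      name: str
      permissions: frozenset

  ROLES = {
      "DNA Designer": Role("DNA Designer", frozenset({"sequence:read", "sequence:edit"})),
      "Lab Viewer":   Role("Lab Viewer",   frozenset({"sequence:read", "results:read"})),
  }

  @dataclass
  class User:
      email: str
      roles: list = field(default_factory=list)

      def can(self, permission: str) -> bool:
          return any(permission in role.permissions for role in self.roles)

  alice = User("alice@example.com", [ROLES["DNA Designer"]])
  print(alice.can("sequence:edit"))   # True
  print(alice.can("results:read"))    # False

The point is just that the complexity lives in how an admin composes the permission sets; everyone else only assigns and sees the role names.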


I think the author's complaint has merit, but I don't think tying this complaint to wanting Zelda to be "saved" makes sense.

The issue is genre. To take an extreme example, I might like the brutal open-world genre a la Dark Souls and complain that the Forza games are all garbage because they aren't that. Most would recognize this complaint as silly, because if I want a brutal open world with no hints I should find a game that purports to contain that, not go bashing racing games because I don't like that genre as much.

What the author's doing to Zelda obviously isn't as extreme, and it is a bit more grounded (Zelda games used to be more like what the author wanted). But it's the same type of argument: back when there were no open world games, the author played one called Zelda and really liked it. Since then (starting with the third), almost /every single title in the series/ has been an extremely dungeon-focused, puzzle-focused game. The overworld has always been a big part, and there have always been some secrets, but the defining aspect of the genre has become these dungeons and puzzles.

This isn't to say the complaints are invalid; there's nothing wrong with wanting a game that hides more, holds your hand less, and focuses more on the action and less on the gimmicks that let you solve a puzzle. But that's not asking for a better version of Zelda, that's asking for a different genre altogether. Focus on asking for new titles in that genre, and leave other genres in peace.


Where is the evidence that present hiring methodologies don't predict successful business outcomes? I recall seeing evidence re: resumes, but not on the overall process itself [1].

If this data is present (data showing that present hiring methodologies don't predict successful business outcomes), do you have data showing that hiring people who need jobs is any better? We /are/ different from each other -- as an example so obvious it borders on the ridiculous, people who have been programming for ten years will be much faster at it than people who haven't. At what point do you draw the line and say that people stop being different? And what data did you use to draw that line?

[1] http://blog.alinelerner.com/resumes-suck-heres-the-data/


I understand the paradigm paralysis - we have been led to believe that having the best CS zombies in the world is what makes success, but it's just not true. Meanwhile, jobs aren't filled and the people who need them are suffering.


Just one instance: https://twitter.com/mxcl/status/608682016205344768 . I've seen hundreds like this.


Honestly, if you weren't in the room you can't tell what happened. It's just as likely that he wasn't a good culture fit.


"Cultural fit" - I forgot this term existed outside of the HBO television show "Silicon Valley".


It's a nice catchall phrase for all kinds of discrimination that would be illegal if stated explicitly. If we call it a gut feeling or culture fit, it's suddenly legal.


Cannot ++ you enough. In my experience, people prefer to surround themselves with those similar to them. The same holds true for any software organization.


It is important to work with people you are comfortable working with.


Just as it was important to people in the pre-1960s southern US that their children only go to school with children whose families they were "comfortable with." Somehow, they've since (somewhat) figured out how to adjust to their discomfort.


Rational self interest might be a myth when applied to the extent that some economic models apply it, but at a basic level holds up. I know /tons/ of people that have taken one job over another because it pays more, even if it's one that they're less interested in.

Not everybody has the luxury of being passionate about something lucrative -- that's not something you really have control over (maybe some, but I don't think much. This is another issue entirely, though). I get that it might be lamentable that an industry isn't full of the most passionate people anymore, but the tradeoff is that it became a large-scale industry. If the original author wants the kind of deep, nerdy devotion he used to have, he can find it! There are plenty of research institutes and startups investing in long-term big bets that need this kind of thing. But being mad that /everybody/ isn't that way anymore seems childish.


I think there are two different problems here: building a good solution, and building a solution for a problem that people have to begin with.

If you /don't know/ if there will be any customers for your product (no matter how well-built it might be), spending time and money on building a very solid v1 is a waste. Sure you will have experience that you can leverage in your next venture, but that's an extremely expensive way to get it. And if you're bootstrapping, or have hired other people, you're potentially spending the financial stability of you or your employees to get this experience.

If you /do know/ you will have customers (either because you're sure it's a problem people have, or you've got people giving you money for basic R&D without any guarantee of returns), then I completely agree with you and the author -- build the product right and it will pay dividends later on.

In the prototype-then-throw-away model, you might not get the engineers' best development work, but you will get the best brainstorming and design work, because everybody's comfortable adjusting the product until they're confident they have something that people want. If you marry yourself to it beforehand, if you commit people's livelihoods to it, people will naturally try to rationalize what they're doing because they're committing so much to it, even if it's wrong. And if you've got the smartest people working with you, they'll be incredibly good at it. This creates a much bigger problem years down the line if it fails.


I actually asked Stallman about this once at a talk. It is precisely because we own our desktop. The analogy he used was a food truck. It's fine to eat something from a food truck that somebody else prepares, even if there's a secret sauce you don't know about. The issue is being given the food truck and then being told that you must still use the sauce whose contents are opaque to you.

That being said, I don't really care much about using non-GPL programs. But that's the rationale.


Your recitation of his response does not address the OP's point, and my smarmy comment was downvoted.

So, let's see if we can work up this wishy-washy analogy a bit.

I like to eat food. (I like to use software.)

Much of the food I eat is prepared by vendors who employ 100% documented citizens or immigrants with valid visas. (Much of the software I use is 100% Free.)

Some of the vendors employ undocumented immigrants. (Some of the software I use contains nonFree components.)

My ability to get this food from these vendors depends on a number of other vendors - even a simple sandwich needs bread, meat, vegetables and condiments all sourced from and delivered by companies that may or may not employ undocumented immigrants.

I may purchase a completely prepared meal and eat it at home. (I may install and run boxed software locally on my machine.) I may purchase ingredients and prepare a meal from scratch. (I may obtain source code and locally build software to run on my machine.) I may purchase both raw ingredients and some pre-cooked food to take on a picnic. (Modern software distribution is complex, and "using a website" is a vague reference umbrella.)

In each of these scenarios, to eat my desired meal, I may need to source an ingredient produced by a complex supply chain which may or may not have involved undocumented immigrants. Tracking this information down for every ingredient is tedious, but I agree it should absolutely be possible.

I could prepare all of my meals myself, in my own kitchen, and source all of the raw ingredients from a small number of vendors whom I trust. I would probably have a limited number of basic ingredients to work with at the beginning. I would probably be able to produce reasonably healthy, functional meals. I would not be able to produce a rich menu that appealed to a variety of tastes, however. (A basic GNU system is pretty functional but you've got to wire up most things yourself.) I would have to work with these vendors frequently and would spend much more time managing the ingredients in my kitchen and preparing meals. I would also likely be limited in what I could produce by my locality and kitchen equipment. (Data must be created locally and hardware must be self-managed.)

I might trust these same vendors to prefabricate side dishes (system libraries) or larger heat-at-home dishes (application source bundles) that I integrate into my day-to-day meals. I may gain some variety because they can prepare dishes that I do not have the skill or kitchen equipment to prepare myself. (The trust that I confer on these vendors for not using undocumented immigrant labor does not prevent me from contracting a case of salmonella. [Heartbleed.])

I can get food in many forms from many varieties of vendors - delivered meals, take-out meals from a restaurant eaten at home, sit-down meals at a restaurant, meals from a food truck eaten on a nearby patio, meals from a food truck eaten at home... (Apps, desktop, websites, servers. Simply browsing the modern web means code runs in response to your every input on hundreds of machines around the globe.)

Sometimes my act of eating employs undocumented immigrants. Crap.


So he dismissed the question with some hand-waving and mumbling about food trucks?


Do you see this system of the elite ruling as an ideal that will never be reached (a la Plato's Republic) or as a practical system? Because in the latter case, you've got an intractable problem on your hands.

The problem is a basic political systems one: at some point, you need the consent of the governed. There are no stable systems on record where a majority of those governed dislike the leadership. Either the oppressed are a minority (colonial Europe, slavery in the US) or the gov'ts are short-lived and unstable (on the order of 50-100 years; in the short term these can work). The most stable regimes that don't have majority support are ones that suppress free speech, which is hard to argue is operating "in everybody's best interest".

So, if you want a stable oligarchy that preserves some free speech, you need a group of "elites" that the whole society agrees is fit to lead them. I don't think the average voter believes themselves to be in the best position to run the country -- that's exactly why they vote for somebody else to do it.

But if you take voting out of the equation, you're trying to build a system where you have an elite that has support of the majority, without consulting the majority. At best, you have a system of guesswork that results in something similar to democracy (because the oligarchy are people the majority supports anyway). At worst, you have an unstable government.


It's a good point. But are there always only two groups, the elite and the majority, or could we create more? It seems to me power works best for everyone when it is more distributed, less uniform, and exists on a gradient from the elites down to the majority. Maybe a government structure could enforce that. For example, instead of just America's Executive, Legislative, and Judicial branches, we might also have additional branches for Education, Finance, Security, Business, etc., each having some (but not always equal) constitutional power over the other branches, and all trickling down in power to the majority that has voting rights.


What you're saying basically boils down to maximizing the complexity (integrated information) of the system. That is possible by increasing differentiation and integration. It is better for power to be divided into more fragments, each differentiated from the rest and then integrated with the others, so that they are forced into a dynamic equilibrium. That's what our brains are doing too, on a much more complex scale.

For reference, the concept of integrated information was developed by Giulio Tononi, but I'm applying it here to a different field.


Does this remind anybody else of: http://www.gocomics.com/calvinandhobbes/1993/05/26


