yafbum's comments | Hacker News

There's another similar thing needing correction, which is that the LEDs near the center sweep a much smaller volume than the ones at the edge, and should be dimmed in order to yield equivalent luminance. LEDs describing tiny circles very close to the center would need to be dimmed a lot since they'd essentially be stationary. Wouldn't it be better then to sweep slightly larger circles at the middle anyways?
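The radius-proportional dimming described above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`led_duty`, millimeter units, a 50 mm arm), not code from any actual POV display firmware:

```python
def led_duty(radius_mm, r_max_mm, base_duty=1.0):
    # An LED at radius r sweeps a circle of circumference 2*pi*r per
    # revolution, so its light is spread over a path proportional to r.
    # Scaling the PWM duty cycle linearly with r equalizes luminance
    # across the swept disk; the center LED (r = 0) would be fully
    # dimmed, matching the observation that it is essentially stationary.
    return base_duty * radius_mm / r_max_mm

# Duty cycles for LEDs at 0, 25, and 50 mm on a 50 mm arm:
print([led_duty(r, 50.0) for r in (0.0, 25.0, 50.0)])  # -> [0.0, 0.5, 1.0]
```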


If we're going there, note also that all the LEDs not on the edge are blocked by other LEDs (or the board) part of the time; LEDs on the edge are visible even past 90°.


Scientific fraud robs the community twice: the first time by wasting research funds on fraud, the second time because experiments with the appearance of success funnel more dollars to the fraudsters, taking funding away from other areas that might look less sexy but might have yielded a breakthrough if they'd been pursued. Grant issuers should insist on some form of end-to-end third-party data custodianship to prevent tampering with data during analysis. It seems better than the many billions of dollars being wasted on fraud and the subsequent missed opportunities.


Not to mention all of the students who end up dropping out or having to take Masters instead of PhDs because their theses built on the fraud fail.

Just like we should place failed results (when due to wrong science, not bad skills) on an equal level with successful results, we should place failed thesis projects on equal level with successful thesis projects as having added to the general knowledge (again, when the failed projects demonstrate a falsified theory, not when they fail due to mistakes or inability on the part of the student).


Great comment. I would encourage all PhD students to bomb-proof their thesis topic and build it on as solid a foundation as possible. I chose perhaps a less flashy, relevant, and lucrative topic for this reason but I would rather finish than be staring down the barrel of a "my entire work has been based off a lie"...


How do we know that a paper was based on false data?


I think the point is to pick a grad lab extending a well-trodden, or at least well-replicated, path, not something relatively new and flashy.


And a third time when they ask for research funding in the future and the public says "No, last time you used the money fraudulently"


And a fourth time when the scientific community says “this is our hard earned conclusion” and the public says “I don’t trust this.”


nor should they, since the public is indeed being lied to.


About…? By whom..?

Whatever it is, know that science as a whole is giant, diverse, and self correcting over the long term.


I don't agree.

When you have lots of people whose livelihoods depend on the gravy train, who can't be sure that what they are working on is fraudulent because they are so specialised, who would take that risk?

It's all about funding. And in the US basically all funding comes from the same sources - government, military, or corporations - what I think of as the governance system.


Over time, yes. But over, say, three weeks, science can do a lot of damage to an economy before it self-corrects.


It’s very weird to me that science now equals vaccines and stay-at-home mandates, and that it also equals advice given with little data. Science is way, way bigger; it is an idea, not a particular set of people or current beliefs. It also brought the untold value of taking us out of the dark ages, but apparently that’s been normalized so much it’s no longer valued.


And a third time: reputation

Dan Ariely and Francesca Gino were two of the most well-known behavioral economists. Hell, Ariely even published a board game about it (as well as a bunch of popular books). They've both been accused of data manipulation this year.

My friends in the field are worried that the whole field will be tainted. It grew out of a controversial idea - that despite what past models demand, people don't consistently behave in a rational way. If the two biggest practitioners of a new discipline are outed as frauds, what does that do to the reputation of the discipline as a whole? Will people be skeptical of any behavioral hypothesis that bucks tradition because Ariely was accused of fraud?


Ariely and Gino have been very credibly accused.

And they are far from the only ones: Diederik Stapel, Brian Wansink, the list goes on and on. (I'm not listing the many names, also at R1 universities, whose verdicts are still "in process.")

What NPR liked to call "replication crisis" was a combination of junk science and blatant fraud.

The whole field (slightly more broadly, of social psychology) is badly damaged for at least a generation.

Pete Judo has some consumer-friendly youtube videos on the topic.


So many problems in science could be alleviated if we changed the culture and processes to require replication: your paper can only be accepted into a journal if an unrelated group can replicate the results using only the paper and the associated submitted info.


This doesn’t scale. Sometimes only a small number of labs have the equipment needed to run an experiment, let alone the training on the equipment to do what needs to be done. This equipment can cost millions. Even if the results are correct, experimental science is very finicky by its very nature. Getting an experiment to work can take months.


Fine, let's just do it on those that scale. Not all science is done in high-end, expensive labs, and this type of control will help the young researchers who are publishing their first works to gain more credibility and develop healthy habits.

Sadly, as long as the tyranny of publishing exists, the other researchers will always prefer working on their own things than replicating someone else's not yet published experiment.


I'd say if no one can afford to, or knows how to, replicate the thing you did, what point is there in it? And just because such an approach has downsides, it's not clear the overall benefit of lessening the replication crisis and forcing scientists to provide enough information that their work can be replicated at all wouldn't be a huge win. Based on the experience of people like https://www.youtube.com/@AppliedScience, 9 out of 10 times there is some crucial piece of information missing in the papers they are trying to replicate.


> I'd say if no one can afford to

Because sometimes they can afford to build on your prior results, and advance the state of the art. Factoring in all of the malfeasance, what's the trade off between not publishing due to inability/unwillingness to replicate, and publishing bad results? We should know this tradeoff before significantly changing the status quo.

> or knows how to replicate the thing you did.

Agree completely.


Agreed - having done a PhD in an experimental neuroscience lab, there's a nontrivial number of things that nobody else in the world, even in my own lab, can do. I can teach the techniques to others, and do, but this is separate from scientific discovery. There are no incentives for someone else to spend their time replicating my work using unbelievably challenging and expensive methods. It just wouldn't work in practice without a more fundamental restructuring of the whole enterprise (which is also necessary but hard).


Like what? I can think of extremely few domains that any decent research university isn't well equipped to competently replicate. One of the very few benefits of the tuition explosion - even undergrads get relatively casual access to equipment worth millions of dollars.

Of course they can't replicate things like ultra high energy particle research, but these sort of obscure things make up a very negligible chunk of all science produced, even if it's quite an important little chunk.


I guess you're not thinking hard enough. I estimate >90% of research in most technical fields would require tens of years to replicate unless you are one of the few labs already working on the same topic.

For example, in one of the research areas I'm familiar with (optical communications), there are maybe 10 academic labs in Europe (and even fewer in the US) who have the equipment to reproduce some of our experiments. In our lab there is one PhD student who could pull off reproducing the more sophisticated experiments (because he is the one focusing on communications), and it took him 2 years to get to that stage.

This is a relatively easy area, i.e. equipment is largely off the shelf, very applied, with lots of industry involvement. There are plenty of published experiments which could only be done in 2 labs (both of them industrial), just due to the cost of the required equipment.

In other areas (e.g. with fabrication in the clean room) reproduction would require even more time investment.

Don't get me wrong, reproducing results is important, but what people don't realise is that it happens all the time when people adopt parts of published results into their own research. Mandatory reproduction of results would just create large overheads which would get us nowhere.


It's entirely possible there are fields with needs I am unaware of or not considering, but your response is not compelling and sounds like hand-waving. Exactly what equipment are you talking about?

I suspect you are likely grossly underestimating the available supplies at many research universities in the US. For instance, things like class 100 clean rooms are basic facilities. Many (and I want to say most) research universities also have partnerships with (if not ownership of) various specialized labs in the surrounding areas for more specific purposes. For instance, the NASA Jet Propulsion Lab is managed by Caltech.


Regarding equipment:

Real-time oscilloscope, at least 4 channels, >50 GHz bandwidth: $0.5M (for some research you need >=12 channels, so multiply that number).

Arbitrary waveform generator, 4 channels, >45 GHz bandwidth: $0.3M (again, you might need more than four channels).

RF amplifiers, electro-optic components, etc. easily cost $2000-$5000 each, and you need several (4-8 at least) of these. The RF cables and connectors/adapters easily cost several thousand dollars each.

Fibre components, subsystem components (e.g. a WSS, of which you will likely need 4 or so, is $50k).

And I will certainly not let a student without training touch the sensitive high-speed RF equipment.

Regarding your comment on the clean room: for many fabrication purposes, class 100 is not sufficient (and calling it a basic facility is quite rich). And the equipment in it is very expensive; LPCVD machines, e-beam and other lithography tools run into the tens of millions of dollars. Most universities I'm aware of require fees (typically paid from grants) of tens of thousands of dollars per year to use the facilities (and those are the reduced rates for university staff). The training/certification on the equipment typically takes about a year.

Regarding JPL: yes, it's managed by Caltech, but what do you think NASA will say if Caltech professors ask for a student to use the facilities to verify some paper? Sure, let's delay the next Mars mission a year or so to let some PhD students try stuff in the labs.

I think you seriously underestimate what the cost of using all that equipment is and how much training is involved to be allowed to use it. You definitely don't want anyone untrained near it.


When I say a class 100 cleanroom is a 'basic' facility, I mean it's one you'll find at any decent research university in the US, and it is. If there's something that can be reasonably expected to be required for cutting-edge research, you'll find it. As for lab access, my experience is in CS. I was granted access to a globally ranked supercomputing system alongside a paired large-scale audio-visual facility. The only requirements for access were to be whitelisted (involved in research) and registered. After that you of course needed to reserve a time slot, but it was otherwise freely and unconditionally available.

It's difficult to really explain how much money is spent in top US universities. It's as if there's a fear that revenues might manage to exceed costs. But one of the practical benefits of this is that bleeding-edge hardware and supplies, at costs far greater than anything you've listed, are widely and readily available.


I think the cost and complexity of reproducing work is somewhat overestimated, as is the specific expertise of individual researchers, though maybe your field is exceptional in this regard.

Primary research, pioneering new techniques and equipment to explore the unknown, is time-consuming and costly and requires a lot of original thought and repeated failure until success is achieved. However, reproducing that work doesn't involve much of this. It's taking the developed methodology and repeating the original work. That may well involve expensive equipment and materials, and developing the technical expertise to use them, but that does not involve doing everything from scratch and should not take anything like as long or cost as much.

I also believe that we far too readily overestimate the specific special skills which PhD students and postdoctoral researchers possess. Their knowledge and skills could likely be transferred to others in fairly short order. This is done in industry routinely. A PhD student is learning to research from scratch; very little of their expertise will actually be unique, and the small bit that is unique is unlikely to be difficult for others to pick up. I know we don't like to think of researchers as replaceable cogs, but for the most part they are.

My background is life sciences, and some papers comprise years of work, particularly those involving clinical studies. However, the vast majority of research techniques are shared between labs, and most analytical equipment is off the shelf from vendors, even the very expensive stuff. Custom fabrication is common--we had our own workshop for custom mechanical and electronic parts--but most of that could have been handled by any contract fabricator given the drawings. And the really expensive equipment is often a shared departmental or institutional resource. Most of the work undertaken by most of the biological and medical research labs worldwide could be easily replicated by one of the others given the resources.

Depending upon the specific field, there are contract research organisations worldwide which could pick up a lot of this type of work. For life sciences, there are hundreds of CROs which could do this.

As one small bit of perspective. In my lab a PhD student worked on a problem (without success) for over a year. We gave it to a CRO and they had it done in a week. For less than £1000. The world is full of specialists who are extremely competent at doing work for other people, and they are often far more technically competent and efficient than academic researchers.


It is a misconception that replication doesn't occur. In vanguard basic science, every new line of work starts by replicating the most recent results.


I'd like to learn more; could you please provide some sources?

Based on the recent Ranga Dias fraud, I'd think replication culture still has a long way to go.


Is that true? Do you have any citation? It sounds incredible. Every study?


It's broadly true that most research is eventually subject to attempts at replication. However the replication isn't explicitly attempting to reproduce the exact prior research, but to build on it.

If the foundation established by the prior research is flawed, attempts to build on it will usually fail.


You don't really need that (and it would be too expensive), you just need to fix the incentives so that people do more replication.


Someone will need to pay that other group though. Sometimes quite a lot.


I'd argue it's worth it.

Some practical ideas: when allocating money for research, 30% or so has to be kept back for replication. Public science is mostly government funded, and the government could allocate X% of the budget for replication work. It sounds like a solvable problem to me.


I agree in principle, but forcing institutions or governments to allocate money for this and enforce it is pretty much an unsolvable problem at this point, IMO.


Yeah, as I said it's also a question of culture and that's very difficult to change.


If this had been enacted, there would never have been any scientific progress. That should tell you something.


You base this claim on what exactly?


The title is misleading without context. TFA doesn't argue that RTO is a bad policy or is being abandoned. It only presents facts showing that the post-COVID RTO trend is flatlining, i.e. employees aren't currently RTO'ing any more or any less than they were last month (as a share of paid work days).


That is how I understood the headline.


Same. Unsure what confusion OP is referring to specifically.


Looking at other comments, evidently a lot of HN readers didn't!


Time zones are geopolitical artifacts just like borders, laws, currencies, etc. They can't really be "permanent"...


They can be "more permanent" than changing twice a year.


People need to stop using data as an excuse for making decisions, or apologizing for not having data, or arguing about the lack of data as an excuse for maintaining a status quo. As entrepreneurs, as managers, as individuals we make decisions with no perfect data all the time. It's called dealing with risk.

Sure, you could get a sense for how much active coding time or whatever BS metric you can track, and look at correlation with WFH, but that stuff is pretty minor in the grand scheme of things. How's new grad onboarding affected? How's team culture and conflict management affected? How is dealing with difficult health challenges affected? Working remotely is a big shift with likely long-term longitudinal effects on an organization that are hard to predict. You can't A/B test or analyze your way out of every single decision there. So I respect people making decisions without data on this; that's what they're being paid the big bucks for. As I'm sure they will respect my decision to leave the minute the terms of the arrangement no longer suit me, which will add a data point of N=+1 somewhere.

It's ok to ask to see the data if you're curious. But people are often asking for data not out of curiosity but just because they disagree with the consequences of a decision that affects them. I think that's disingenuous. You could deploy a team of data scientists to tell me that the org will definitely be 5% more productive if everyone worked from some office an hour away, and it wouldn't really matter to me. I still prefer having a remote work option and I won't commute more than 40 min to work. We don't need to ask for data about things we don't like, we can just disagree and either commit or create the attrition outcomes that will drive different behaviors in the future.


I'm not sure this doesn't just lead to blind rubber-stamping unless this is done very very rarely


The trick is to have good access controls so confirmations happen often enough to be useful, but not so often to be rubber-stamped


I guess most of the common task are scripted / automated. Running a "raw" sudo command should be very very rare.


News outlets based in Europe routinely pull this cookie wall crap. I guess they get a pass for very very principled reasons and not just because they are based in Europe whereas Facebook isn't... /s

Banging Facebook over the head might make Facebook suffer, but it isn't going to create an alternative privacy-conscious social network, or even the incentives to the existence of such an alternative. It's just going to further add cost to a bunch of properties that might have once been dominant and hegemonic, but aren't anymore (hello tiktok) and destroy value that would otherwise have accrued, primarily, to advertisers whose ads now will be much more crappily targeted.


The reason is that they are smaller and if you want to make an example to scare an industry into compliance it's better to go after big companies first. If Facebook gets dragged over the coals the smaller ones are next unless they adapt. This myth that only non-EU companies get pulled into court is nothing but propaganda, mostly spread by the poor, poor, US-based violators of privacy.

https://www.enforcementtracker.com/ has a list of enforcement actions, and lo and behold, most of them are against EU companies.

(The country filter is for the fining entity, not the fined entity, just in case anyone thinks this doesn't include US companies at all)


> The reason is that they are smaller and if you want to make an example to scare an industry into compliance it's better to go after big companies first.

They've had like five years of large European media companies doing this. That was the time to make an example out of someone, not hope that some even bigger company comes along.


I'm sure there's plenty of local enforcement, I'm just talking about my own cookie wall experience which is mostly happening when consuming Europe-based news outlets. I was really surprised to see that it was enabled by some national agencies which have explicitly okayed the cookie wall for such cases.


The fact that other companies do the same thing doesn't mean what Facebook is doing is OK. Probably nobody has (yet) filed a GDPR complaint about them.

Actually, going after the big players is the way to set a precedent. If Facebook is allowed to do this, then most likely everybody can.


At least in Germany, our DPAs okayed the news site behavior.


Still find it out as I thought you had to offer the same functionality/content whether they accept extra cookies or not


out = odd?

Here’s what the DPA conference had to say

https://datenschutzkonferenz-online.de/media/pm/DSK_Beschlus...

Deepl:

> Whether the payment option - e.g. a monthly subscription - is to be regarded as an equivalent alternative to consent to tracking depends in particular on whether users are offered equivalent access to the same service in return for a standard market fee. Equivalent access generally exists if the offers include the same service, at least in principle


The premise of founding OpenAI as a nonprofit with "nobler" goals than making money was that it would be a strong magnet to the right talent. Going to work for Microsoft (or any other tech company for that matter), from that point of view, is like crossing over to the dark side of the force. It will be interesting to see how many of OpenAI's employees were there because of its nonprofit status, and how many were there in spite of it.


I suspect very few people joined OpenAI for its noble non-profit mission after it introduced its for-profit subsidiary. OpenAI compensation was and still is top notch. Compare it to Signal, which is a true non-profit (and salaries are a lot lower).


Nonprofit status relates to the absence of investor payouts, and doesn't fundamentally have much to do with pay levels. Some employees can be on occasion willing to accept lower pay when the motives are altruistic, but most employees at nonprofits are paid (have to be paid) market rate.


> Nonprofit status relates to the absence of investor payouts

The people at the helm of the first organization which cracks AI won't have any need for money


I'm trying to find some of those rich people who don't need money/bitcoin - so they can send me some!


They can leverage AI to get power, they can leverage AI to build the best hedge funds, the possibilities are endless.


And what have we got other than hft from all the billions poured in so far?

Further, I suspect any ai capable of outperforming the market would be too rational, to the point that you wouldn't be able to market it.

If the ai decided that bitcoin wasn't the future, and in 20 years time was proved right, how many people would have maintained that patience?


But why would you need to market it?

There comes a point where if an organization has something unique there is no point in selling it, they'd just use it for their own gain and watch the AI multiply their initial investment.

It has happened already: the best hedge funds are not open to investors but are run as a pension scheme for employees and founders.


Signal employees make $400k to $600k a year. How much is OpenAI paying?

https://projects.propublica.org/nonprofits/organizations/824...


Interesting, but deceptive. Those are, as noted, "Key Employees and Officers." I just assume most employees with the title "Software Developer" aren't making over five times what Moxie is making.


Nov 13 article: "OpenAI recruiters are trying to lure Google AI employees with $10 million pay packets"

https://www.businessinsider.com/openai-recruiters-luring-goo...

also, this shows $900k+ for L5 w/ 3-4 yoe: https://www.levels.fyi/companies/openai/salaries/software-en...


Your number seems to be coming from a very small sample size (single digit N?), and GP's link is only about "key" employees like CxOs, VPs and top-ranking engineers.

I wonder what a median rank-and-file employee at these companies make.


I see your point. The fact that the small sample is for 3-4 YOE 'only', may indicate they do pay very well.

Prior discussion of median TC at OpenAI: https://news.ycombinator.com/item?id=36460082


What are the credentials for the entry-level paths to positions? A PhD?


... looks like this has been the case for a while (since inception)

https://www.nytimes.com/2018/04/19/technology/artificial-int...


Looks like double to triple of what the same level at MSFT fetches https://www.levels.fyi/?compare=Microsoft,OpenAI&track=Softw...


Few


700 out of 770 employees already signed an open letter saying they will consider changing jobs.


Remember all those Apple and Amazon employees who signed a letter that they're not going back to the office? Last I heard Apple was at 100% compliance

Make no mistake, if Microsoft is matching $10M PPUs with $10M Microsoft RSUs vesting over 4 years, every single employee will join. But I kind of doubt that this is their plan.


> Last I heard Apple was at 100% compliance

That doesn't tell you whether it was the numerator or the denominator that changed.
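The numerator/denominator point can be made concrete with a toy calculation (the numbers here are entirely hypothetical, purely to illustrate the ambiguity):

```python
def compliance_rate(complying, headcount):
    # Compliance as typically reported: share of *current* employees
    # following the policy.
    return complying / headcount

# Hypothetical numbers: 80 of 100 employees comply at first...
before = compliance_rate(80, 100)   # 0.8
# ...then the 20 holdouts quit. Nobody changed behavior, but the
# denominator shrank, so reported compliance jumps to 100%.
after = compliance_rate(80, 80)     # 1.0
print(before, after)  # -> 0.8 1.0
```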


It's at least the denominator and likely both. I personally know a non-zero number of people formerly there who found a different job or retired when Apple insisted on everyone returning to the office.


> Last I heard Apple was at 100% compliance

Do you have a source for this? At other tech companies I'm aware of, the numbers are still much lower than 100%, even after threats of performance impact.


There was no significant uptake on any letter at Apple.

One reason is that retaliation is very possible. The opacity of the executive team was not a feature of Steve Jobs’s Apple, but the Tim Cook-era opaqueness combines poorly with the siloed, top-down nature of Apple’s management, which _was_ inherited from Steve Jobs.

The opaqueness, I think, is a result of Tim Cook integrating the retail and corporate sides of the company; retail salespeople are treated better, but software engineers are treated a little more like retail salespeople.

Since the pandemic, Apple execs have seemed to be isolated in a bubble and are not listening to the rank and file anymore. The people they do listen to seem to be out of touch.


That kind of compensation skew will cause ripples in the company. It's possible that OpenAI is worth it, but it is a big gamble by Microsoft.

I think that is approximately L70 comp.


Oh yes, it's going to create waves. People who've had compensation stagnation at Microsoft, reduced bonuses, "belt tightening" now see that "well, we're willing to throw stupid amounts of money at those people over there, just not you".


That's capitalism WAI.

People in stagnant careers signal to their employers that they don't need to do anything at all to retain them.

People at a hot startup have all the options in the world.

Compensation isn't about what's fair at all. It's supply and demand.

You want to make more money? Make yourself in shorter supply and generate more demand for your service.


Isn't it like that at any company?


Yeah but in these kinds of big companies there are supposedly compensation "bands" by technical level. When salaries within each band are very different, it can be an impediment to talent mobility within the company.


> Last I heard Apple was at 100% compliance

Bit of a flaw in your logic there ;-)


The CTO of Microsoft tweeted this morning that they would hire any OpenAI employees who wanted to join MS with commensurate pay. For whatever that’s worth.


Do you have a link ;-)



Déjà vu


Derp


Are those PPUs really worth anything?


As others have pointed out, it's easier to sign a letter than actually go through with it. Besides that, wasn't there some employees who said something similar on Friday when this happened?


This is simply a matter of momentum. If enough of the signatories follow through, more will cross the bridge until there are too few people left to keep things going, and then there will be an avalanche. It all depends on the size of the initial wave and the kind of follow-on. If that stops at 200 people leaving, it will probably stay like that; if that number is 300 or even 400 out of the 700, then OpenAI will be dead, because the remainder will move as well.


I think peer pressure also plays a part. You want to be part of the majority in case things change, Sam comes back etc.

Actually going is a whole different thing. Why not go to Google or FB or Anthropic if you’re quitting anyway, and they can match the offer.


I don't doubt that a lot will, but it's also easy to sign a letter.


[unpopular comment removed]


At the point where the majority of employees have signed it I imagine it's much easier. Might even get a bit weird if you're one of a few holdouts

e: for what it's worth I thought it was a reasonable discussion point, not sure why you got downvoted on that one :(


It was a risky gambit for the first half of the signers. Was it not?


I have no idea what the process was for gathering the signatures, but one way to solve this problem is a mutual confidentiality pact that goes away with enough momentum: ask people if they're willing to sign, but if you can't get 50% of the people to sign the document, all the signatures go away.

Similar things are done when doing anonymous 360 surveys in the workplace. If not enough people in a certain pool respond, the feedback doesn't get shared.
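A sketch of that pact, with hypothetical names and a made-up 50% threshold: pledges are collected privately, and the signer list is revealed only once the threshold is met; below it, nothing surfaces at all.

```python
def reveal_if_threshold(pledges, headcount, threshold=0.5):
    # pledges: the set of people who privately agreed to sign.
    # Publish the list only if enough of the workforce is on board;
    # otherwise every pledge is discarded and nothing is revealed,
    # so no early signer is ever exposed alone.
    if len(pledges) / headcount >= threshold:
        return sorted(pledges)
    return []

# 3 of 4 pledge: threshold met, the letter goes out with all names.
print(reveal_if_threshold({"ada", "bob", "dan"}, headcount=4))  # -> ['ada', 'bob', 'dan']
# 1 of 4 pledge: below threshold, the signature never surfaces.
print(reveal_if_threshold({"ada"}, headcount=4))  # -> []
```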


Oh aye for sure - All unions are risky to start with, that's why you gotta keep them quiet until you have overwhelming support


Is it really burning a bridge though if >90% of your coworkers (and even bosses) agree?


Lot harder for the first N employees at the front of the line.


Unless they promised to withhold the letter until 50% signed.

Also I have enough savings and my skills are in demand enough that I wouldn’t consider signing such a letter much of a risk. The researchers at openAI are likely a good deal more in demand than I am.


While the financial future is significantly less certain than it was a week ago, many of those employees have RSU-equivalents potentially worth FU money. Even if you are going to land on your feet, it is still making a statement to walk away from that payday.


If you believe that your profit share is going to be worth much much more with the current board gone, then threatening to walk away to force them to resign isn’t really walking away from much.


Was your comment unpopular, or wrong?


How reliable is this information, could it be a deliberate rumor spread by interested parties?


I agree that it is surprising that the first big whiff of collective bargaining that we see in Big Tech is “let’s save this asshole CEO (who would probably try to bust any formal unionization),”[1] rather than trying to safeguard the workers in the industry as a whole. But I just attribute that to Silicon Valley being this weird hero-worship-libertarian-fantasy cosplay rather than outright conspiracy.

[1] Just to clarify, I don't know Sam and I am taking the usual labor viewpoint that most CEOs, in order to become CEOs, had to be a certain sort of asshole who would be likely to do other such asshole things. There are some indications that this sort of assholery is what he was fired for but it's kinda hard to read between the lines here.


Everything about this post is spot-on. The CEO got ousted because he lied to the board and went against the company's mission to make safe AGI. He tried to milk OpenAI for money and personal gain, and the board actually did something to stop it.

The downvotes here are pretty irrational. But that's the defining feature of class warfare: we're all closer to homelessness than a billion dollars, but we've been conditioned to believe the opposite.


> they will consider changing jobs.

Hmmmm. The stiff resolve of a spaghetti noodle. I "consider" changing jobs literally every single day.


They didn't say they were going to Microsoft, as far as I can tell. I presume many can get golden offers anywhere including academia and other institutions with stronger nonprofit governance track record.


Literally says "join the newly announced Microsoft subsidiary"

Source: https://www.nytimes.com/interactive/2023/11/20/technology/le...


What does this have to do with their willingness to join Microsoft?!?

This only signals a desire to leave OpenAI, nothing else.


This is a direct quote from the letter:

>We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join.

It does suggest more than their willingness to leave.


Thanks, couldn't have guessed that from the comment alone. Not like I'm following this fabricated soap opera very closely.


Very few people in tech are in it for noble reasons. Although, a nice pair of golden handcuffs can let you delude yourself into thinking what you are doing is noble. I can't imagine people working on shadow profiles at Facebook think what they are doing is noble.


Exactly this.

When challenged, they say ‘someone else would’ve done it anyway, so it might as well be me’. Which isn’t incorrect, I think.


You join OpenAI because if there is an open spot you’d take it. Plus it’s a famous company doing cutting-edge AI; sure, you can read the statement, but everyone wants to eat and get a better resume. Feeling noble is just a bonus.

In summary, nobody gaf


I would wager a very small % of them care about the legal structure of the business and just wanted to build really cool stuff with Sam.


If they were in it for purely noble reasons, they would have already left when it became NotOpenAI.


When your total compensation depends on the for-profit part, does it really matter?

People talk about the coherence of 700 people signing an open letter as a sign they are goal-aligned, but I see it more as mortgage-payment-aligned.


It didn't preclude a now-common cult-of-personality problem.


When reading the title I was instantly reminded of "Come to the dark side, we have cookies!"



TL;DR: The emperor has no clothes, and the OpenAI board are just a bunch of clowns.


Waiting for US govt to enter the chat. They can't let OpenAI squander world-leading tech and talent; and nationalizing a nonprofit would come with zero shareholders to compensate.


> They can't let OpenAI squander world-leading tech and talent

Where is OpenAI talent going to go?

There's a list and everyone on that list is a US company.

Nothing to worry about.


The issue is not that talent will defect, but that it will spiral into an unproductive vortex.


If it was nationalised all the talent would leave anyway, as the government can't pay close to the compensation they were getting.


You are maybe mistaking nationalization for civil servant status. The government routinely takes over organizations without touching pay (recent example: Silicon Valley Bank)


Ehh I don't think SVB is an apt comparison. When the FDIC takes control of a failing bank, the bank shutters. Only critical staff is kept on board to aid with asset liquidation/transference and repay creditors/depositors. Once that is completed, the bank is dissolved.


While it is true that the govt looks to keep such engagements short, SVB absolutely did not shutter. It was taken over in a weekend and its branches were open for business on Monday morning. It was later sold, and depositors kept all their money in the process.

Maybe for another, longer-lived example, see AIG.


The White House does have an AI Bill of Rights and the recent executive order told the secretaries to draft regulations for AI.

It is a great time to be a lobbyist.

