There's nothing in here you haven't already heard:
* He's close with Peter Thiel
* He's an ace networker (but, completing the stereotype, is also an introvert)
* He left YC, not entirely voluntarily, because he was too committed to OpenAI to do the YC job
* Echoing YC, the OpenAI board tried to oust him over concerns that he was being manipulative and pursuing outside interests (a chip firm) that conflicted with the mission
The article doesn't seem to have much to say past that. If you go to an Altman house party, you'll apparently get a blanket if you want one. But, like, you'll also get one of those on a Delta flight, so.
It feels like people are a bit obsessive about this guy.
OpenAI is only one of many companies building AI. While ChatGPT and DALL-E are impressive, there are domains where those models are pretty much useless and other AI solutions are needed.
For hosted models, I’ve noticed that Bard has been quietly improving. It was awful at first but now seems to be closing the gap.
Actually-open AI with open weights and open source is innovating rapidly too, especially when it comes to making models more efficient on smaller hardware. We have fast local mixtures of experts now. Rumor is that Llama 3 is in training as well, and that it may be some kind of MoE.
I feel like focusing on efficiency first for local models, rather than shooting to beat GPT-4, is the right approach, because without major efficiency improvements a shot at simple dominance will just yield a model only wealthier people can afford the hardware to run.
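For what it's worth, the efficiency argument for MoE comes down to sparse activation: only a few experts run per token, so compute scales with the active subset rather than the total parameter count. A minimal illustrative sketch (made-up sizes, plain NumPy, not any real model's routing):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 experts total, but only 2 active per token.
N_EXPERTS, TOP_K, D = 8, 2, 16

# Each "expert" is just a random linear map for illustration.
experts = [rng.standard_normal((D, D)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS))

def moe_forward(x):
    """Route a single token x through its top-k experts only."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]   # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the chosen experts only
    # Only TOP_K of N_EXPERTS weight matrices are touched, so per-token
    # compute is proportional to the active experts, not total parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(D))
```

That's why an MoE can carry a large total parameter count while staying runnable on modest local hardware: the per-token FLOPs look like a much smaller dense model.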
OpenAI still seems in the lead but it’s shrinking.
If you have the most powerful model by far, like they have, it's a radical advantage over competitors: your model can perform automations that theirs cannot.
OpenAI's lead is phenomenal: remember that they've kept GPT-4 on top of the leaderboard for a year and counting, while all others (except maybe Anthropic) struggle to keep a model in the top 3 for more than a few months.
I've definitely seen Bard beat ChatGPT once in a while. I don't use Bard much, but when ChatGPT gives me an answer I know is wrong, I go see if Bard did any better - and yes, sometimes Bard is better. Not often enough for me to switch... yet?
Seems to me like Sam Altman is really good at making the actual builders believe that they need him to make things happen, and really successful at making the world believe that he was the actual builder. On that note he just posted a thread on how much of a visionary he is.
I follow a lot of people on this topic, and even at the top of the food chain there are hundreds of people from all corners of the world competing with and improving each other's work. It's actually really fun and amazing to watch (especially the pace at which it happens) and very different from how Sam portrays it.
The reality is that the higher the pay, the less it stings when people screw you over. If OpenAI was a low pay startup, Ilya would probably feel a lot more bitter about what happened. People care for various reasons. You don't need to care, because you actually already have what I described above, because of three reasons:
1. the work that you did and the country in which you did it
2. the people you were surrounded with while you were doing it
3. the culture around it: the US especially is very vocal about what it achieves, which automatically becomes a huge (possibly unintentional) mutual marketing platform for you and your friends.
There are plenty of security people that have contributed to the security community just as much as you, but make a fraction of the money. Good for you, honestly, but not everyone has had those opportunities. For many of us (I, for example, grew up in Germany as an immigrant), the predominant culture is to not invest in people like us (very different from the valley), and to make sure to constantly tell them that they can only be successful if they are good worker bees. It took someone convincing me to move to the UK to realize that.
I want to learn and improve, because I have made several people and institutions rich, or contributed to their fame, through whatever work I did or whatever thing I reverse engineered, and ultimately got burned with zero credit.
LLMs would exist with or without OpenAI, but it's absolutely fascinating that Sam has somehow managed to convince the world (outside of a big scientific and engineering niche) that ChatGPT would not have happened without him, but that, conversely, everyone at OpenAI is disposable. Even if you don't care about Sam in general, it may be worth caring about this in some fashion when he's lobbying on Capitol Hill to limit access to non-regulated AI.
It's only natural for me to want to learn some of it, to at least protect myself in the future.
Sam Altman controls billions of dollars in assets, and attends Bilderberg and Bohemian Grove events. He's sort of the walking definition of social and economic power.
The argument is that Sam Altman is a central player among the small group of extremely wealthy people whose personalities and decision-making decide how you and I will live our lives.
They buy and sell our politicians, set policy directly or by proxy, and make decisions that lead to prosperity for some and mass misery and death for others.
None of this is subtle or a conspiracy theory; it's how our society is structured. It's not particularly confusing unless you choose for it to be.
"The political-science professors, perfectly sane men, look at me with wonder when I talk about the ruling class in America. They say, “You are one of those conspiracy theorists. You think there’s a headquarters and they get together at the Bohemian Grove and run the United States.” Well, they do get together at the Bohemian Grove and do a lot of picking of Secretaries of State, anyway. But they don’t have to conspire. They all think alike. It goes back to the way we’re raised, the schools we went to–after all, I’m a reluctant member of this group. You don’t have to give orders to the editor of The New York Times. He is in place because he will respond to a crisis the way you want him to, as will the President, as will the head of the Chase Manhattan Bank."
If you're trying to make a case that you're not a conspiracy theorist, your first cite probably shouldn't be Gore Vidal. But if you're just trying to be entertaining, he's hard to beat.
I don’t get it, are you saying that Sam Altman is not currently one of the most powerful and influential figures in the world, with the power to influence policy through deep pockets and connections? That he’s just an average Joe that has a CEO job like a million other CEOs?
I've largely nodded along with you over the years but from the little I've read of Vidal's later years he wasn't a hard core truther (of the "they cut the steel and demolitioned the towers" ilk), more his usual cynic political realist self:
I'm not a conspiracy theorist, I'm a conspiracy analyst.
Everything the Bushites touch is screwed up. They could never have pulled off 9/11, even if they wanted to. Even if they longed to.
They could step aside, though, or just go out to lunch while these terrible things were happening to the nation. I believe that of them.
It's entirely possible that he went further than accusing Bush et al. of being asleep at the wheel and being overly fond of Saudi oil, if so I missed that particular descent into the abyss.
Sure. We can probably amicably depart the thread here with the sentiment that Vidal is not the least controversial authority you could introduce to an argument about shadowy powers controlling the world. I like Vidal! I'm just saying, he's not like, the AskHistorians pick.
It's also just possible I'm wrong! Like, obviously? But I think we're all just hyperfixated on this guy because we can see ourselves wearing the same goofy blue sneakers as he did in the "Bilderberg" photo. It's a crab bucket thing. Our attention to him, I mean. He could end up being a supervillain! I just don't think he is right now; I think what he is is much more boring than that.
> I like Vidal! I'm just saying, he's not like, the AskHistorians pick.
We're in damn near furious agreement . . . although there's a few of the original AskHistorians academics from a decade back that would take Vidal over Jared Diamond in a heartbeat :)
Interested in this theory of middle class controlling the world. My personal yardstick is: “whose interests are most served by the system?”
For example, “middle class” doesn’t want wars, revolution, things shaking up. Which is why comrades hate them. Upper classes despise them and lower classes want to kill them. :) So seriously, is it the “middle class controlling the world” that has cash fire hoses gushing dollars “abroad” while “middle class” struggles to hold on to its shrinking perch?
Hah I just read about "Bohemian Grove." Do techies consider that sacred ground because Oppenheimer discussed the atomic bomb there? The idol worship present in SV is so thick I can hardly see anything else.
Not many of us can go from "nearly losing the job" to "getting 95% of the employees to demand your return, firing the board that tried to oust you, and then returning with absolute power".
There's something to be said about that, lol. Probably not even Musk has that kind of power.
That kind of "power" can turn on a dime. I'd be wary of it. In this case the mob learned they have influence when they work together. That doesn't mean they are loyal to Altman, rather that they are influential when united.
20 years ago, it was, "A billion here, a billion there, pretty soon it adds up to real money," but nowadays I question whether a billion here and there even realistically adds up to real money. The amount of assets Altman controls are unimpressive on the scale of "who controls power in society," and there are literally thousands of people who control more money.
If you think this AI hypejerk is where actual power comes from, you’re deluding yourself into thinking Silicon Valley is the source of actual societal power.
I care in that we are at one of the historic inflection points in tech history - so I want a clear view of what decisions (and by whom) are steering what is going to be the next era in the information age. I love it.
So, what I care about is a true lens into how we will be getting to the next step in the path.
Thats all I care about...
And obviously SA is a master at ensuring the engineers in his harem are well fed (paid).
But it's really important to note who the closest bodies in his orbit are. There is no question about Thiel's propensity for imbuing his interests with his views - and, personally, I'd really like to know what SA's goals and views and desires are, especially WRT how they are colored by those he trusts, or who advise him, such as Thiel.
It's critically important to see the path ahead - because there is no going back from the level of entanglement AI is causing, and will cause. I love how Tristan put it regarding AI, and where it is today:
>"It's the worst it will ever be" (meaning it's only going to get stronger and more entangled, and thus this is the only optimal time to check our rudder)
And SA and Thiel have a solid grasp on the inputs to that rudder.
I hope that all the youngin's in tech (like the actual teenagers, like my children) are really paying attention to where we are in tech history with AI in general - and with OpenAI in particular - as SA can attempt to look like a Carnegie but actually be a Rockefeller.
I do a lot of local politics work now, which means I pay attention to a lot of arguments that are isomorphic to these HN kinds of arguments, but that are about zoning or the kind of materials we're using to pave roads or whatever. This thread resembles to me nothing so much as the vibe in a debate about whether a neighboring municipality should be allowed to build a McDonalds right on our border. Strong opinions all around, lots of projected consequences, but in the end we don't have a say, there's nothing we can do about it directly.
Of course you have a say. Many policies are made above the municipal level, and California is a good example of a place where people eventually got tired of not having a say in zoning decisions of neighboring municipalities and did something about it.
And beyond a direct say any person of course can be an advocate for anything. And should be, if it affects their society.
Not sure you meant to make the exact opposite point of the one you stated, but convincing people they don't have a say, when in fact they do, is a pretty longstanding trick of the ruling class, so perhaps you've been convinced as well.
I came up with a term for the type of argument OP is making: 'Astro-Gassing'.
Astro-turfing/Gaslighting at the same time:
"The act of one or many feigning as though they don't care, that there is nothing to worry about, and nothing one can do - so just go along with it - when eventually the truth is revealed that there was in fact something to worry about, the astro-gassers stood to benefit from your complacency." Whether Thomas realizes it or not - he is in this class.
Further, there is an entire new city being bought up, planned, and developed in the greater Bay Area as a new tech city. I 100% guarantee you, it will be riddled with AI 'living augmentations'... it's being built by the technorati in SV. It will be interesting to see how that builds out.
> I hope that all the youngin's in tech (like the actual teenagers, like my children) are really paying attention to where we are in tech history with AI in general - and with OpenAI in particular - as SA can attempt to look like a Carnegie but actually be a Rockefeller.
You should hope that they go back to school and study history. All of this has happened before, many times over.
>All of this has happened before, many times over.
Sounds good on paper, but the nature of modern-day life is far different from what it was when other revolutions occurred.
Yes, revolutions in tech are just that - they up-end the old way of life, the dust settles and a new way of living emerges.
This is so basically obvious that it's not worth mentioning.
What is worth not just mentioning but truly paying attention to - aside from Thomas' analogy to a neighboring McDonald's, which is an insanely lazy look at this matter, especially coming from a security expert - is that the level of "entanglement" of massive core-AI systems, with capabilities like GPT-'N' has, into our daily life cannot be overstated.
When cars overtook horses, we could weaponize cars by putting guns on them and calling them tanks. But once you put a gun on a car and call it a tank, it can only shoot at one target at a time.
With AI, the substrate through which you interface with the rest of global reality can be easily weaponized, not just against you but against anyone and anything interfacing with it. So it's important to know who truly owns, or at least what incentives shape, the lens through which one interfaces with whatever information that system provides.
So, aside from 'doom-and-gloom' - it's really important to understand as much as possible. Rather than Thomas just throwing up his hands (which again is bizarre from a security guy just saying he doesn't care who runs the black box), I want to know how whatever that system is (OpenAI/whatever dominant AI) came to be, who made it such, and what agency I still retain in my digital life.
--
@Unity:
Entanglement of big business and government in our lives always was and always will be - but what big-core-AI provides both business and government (between which the membrane is vanishingly thin these days) is a complete solvent for any friction in that entanglement.
AI's capability for extracting individualized insights AT SCALE is what it provides to both, with zero recourse for the individuals. That is the "alignment" problem that Aza and Tristan are pushing back against.
Also - I do not "worship" or "idolize" SA, and especially Thiel - I am terrified of them. That's why I said that they may attempt to appear as a Carnegie, but are actually a Rockefeller (Carnegie attempted to wash his reputation with endowments - Rockefeller was just an evil oil-carpet-bagging C*Sucker and an evil person; he is the reason we have "fossil fuels" as manufactured scarcity for profit).
I have no idea why you think I idolize any of this - I am fully, and even further, in the Raskin-Harris camp... alignment regulation is going to get trampled - and to be honest, I have serious questions about SA's motives...
I'd bet he's already got a team working on the plans for his bunker/lair, and taking notes from the Zucks and Thiels on best design resources.
> the level of "entanglement" into our daily life with massive core-AI systems, such as the capabilities of GPT-'N' has, cannot be overstated.
Entanglement of big business and government is not new. Yes, we should be very concerned, but the solution is not to worship idols like business leaders or politicians. They will not solve that problem for us. You have to think carefully before handing more authority to individuals who've already been ascribed far more than is due.
> from a security guy just saying he doesn't care who runs the black box, is weird
One cannot select the "right" person to run that black box. We're all biased, flawed, etc. So yes, you should be concerned. But use that concern to drive your actions towards distributing authority– not centralizing it through more idol worship.
In response to your edit:
> I do not "worship" or "idolize" SA, and especially Thiel - I am terrified of them.
This is idol worship. You're ascribing a power to Altman and Thiel that they do not have. Bad behavior has consequences beyond even what the government can impose. If Altman or Thiel were to make a habit of targeting people, they or their adherents/descendants would eventually find themselves friendless in a house built on sand. Altman almost lost his job that way. The fact that consequences may play out slowly in some cases doesn't make it less true. Nobody can violate the golden rule indefinitely without eventually facing consequences.
It's not even about building it, it's about controlling it.
So if Altman is as smooth and ruthless and self interested as he appears, he's likely to end up in control of... something big.
Even if he "just" becomes the next "richest man in the world," that's still a title that carries with it a massive ability to shape society. It's no wonder people would like to understand what this potential oligarch is like behind closed doors.
This can sell another generation of smartphones once edge device AI chips hit the market and then people can use their mirror neurons for machine interface.
You missed the part about his Universal Basic Income experiment with 3,000 families, set to conclude in 2024. It will be interesting to see Sam go all-in, provided the results are positive.
It's nice to occasionally read a piece that reminds you what good journalism is.
On and off record sources, people as complex individuals with multiple truths, anchoring to key chronological events, and only moderate clickbaiting and defensible hyperbole.
Nothing particularly new, but a solid brief. Kudos to the Post and Dwoskin, Fisher, and Tiku.
Sam Altman is a pretty polarizing figure, seemingly by virtue of both who he is and what he's touched.
This article has a lot of pros and cons to the man, all of which seem to track with what other sources have reported about him.
E.g. the "if he cares about you, he really loves you; if he doesn't care about you, you don't exist" brutalist attention filter that a commenter here reported seeing (in one of the ad nauseam OAI CEO threads)
I'm surprised I don't recognize a single one of these companies. Isn't YC most famous for companies like AirBNB, Reddit, Twitch, Instacart, Dropbox, Doordash, etc. all things I use and that broke into pop culture? Or is that entirely from pre-Altman era and now it's just all tools for developers?
YC has been doing a lot more B2B for the last decade or so than in its early days. Many of the early unicorns (including all of those you listed) were B2C companies, so they're more well-known.
Oh ok. It looks like as a small business we use Gusto, so that's been useful for us. But it's kind of strange to me they had such a string of big B2C successes early on and then that side evaporated entirely.
The B2B stuff seems like things YC companies can just sell each other and make startup pain points easier, like pick axe stuff that doesn't require as much original vision or as much risk.
If you don't recognize any companies from that list (a number of which are very successful), then that's likely because you only learn about companies once they hit a truly massive scale, which takes time (and explains why the ones you know are older).
I have heard of that one, and I guess to be fair looking at the list again I've heard of OpenSea a few times too as an NFT marketplace. Just not the target customer I guess, I am not a developer (or into NFTs).
You’d need to consider baseline (wins/fails ratio) adjusted for macro conditions. And then look only at the subset of companies where Sam had impact.
A better way of looking at it would be to try analyzing the performance of Sam’s personal investments. And comparing it against a baseline (e.g. Sequoia or YC).
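To make that comparison concrete: one hedged way to check whether a personal track record beats a baseline portfolio is a two-proportion z-test on win rates. All counts below are made up for illustration, and "win" would need a real definition (e.g. exit above some valuation threshold):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts: Sam's personal deals vs. a baseline portfolio
# (e.g. Sequoia or YC over the same vintage years).
sam_wins, sam_total = 12, 40
base_wins, base_total = 180, 800

p1 = sam_wins / sam_total        # personal win rate
p2 = base_wins / base_total      # baseline win rate

# Pooled standard error under the null hypothesis of equal rates.
p_pool = (sam_wins + base_wins) / (sam_total + base_total)
se = sqrt(p_pool * (1 - p_pool) * (1 / sam_total + 1 / base_total))
z = (p1 - p2) / se

# Two-sided p-value: is the difference distinguishable from noise?
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
```

With samples this small the test is underpowered, which is part of the parent's point: separating individual skill from macro conditions and portfolio baseline is genuinely hard.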
Weird CEO cult going on again. Per the article Altman created this and that. In reality thousands of people created the stuff. Altman mostly did some management and marketing.
We sure love our CEO cults in the startup tech world. I don't understand why we give credit like that for people that high in the management chain, but we sure do.
(1) People who have actually worked in large tech / engineering companies, and implicitly understand that 'CEO' is an analogy for the organization
(2) People who have no idea how the sausage is made, and need things simplified down to one person
The former aren't misdirected, because they know the game. And the latter probably couldn't wrap their heads around the truth, even if it were explained to them.
So I'm not sure anyone who could plausibly understand otherwise is confused by the hero worship.
Except for new graduates... they're probably misunderstanding.
Wow, what a lovely puff piece. Known as a 'rehabilitator' in PR circles. The conversation:
Sama: "I'm getting a bit of knock-back over my management style from various sources."
PR Co: "No worries, we'll run a Premium PR package to restore your reputation. A couple of nice pieces in the WashPo and other places and you're good to go."
This kind of gives away the reason for his firing.
He was pursuing his vision of AI, including a chipset specific to LLM processing that would break Nvidia's hold on the market.
Apparently the BoD didn't like it. They should have just been more forthcoming about it.
Some CEOs would get fired for moonlighting. Others (Elon Musk, etc.) build massive companies and make their existing companies more resilient and lasting. The fact that employees closed ranks around him tells you which kind of CEO he is, and the BoD chose to ignore that at their loss.
>The fact that employees closed ranks around him tells you which kind of CEO he is
I can only conclude that the guys were afraid for their big bag of cash; I would expect they would allow anything that ensures they get an obscene amount of money.
the board was about to vaporize employee equity overnight - the secondary was days away and would have been life-changing for many employees and significant for all of them. SA is probably a great leader, but the financial incentives for the employees to support him and ensure the secondary went through were huuuuuuge
That's putting the cart before the horse. How did the valuation get there, particularly when it did so before M$oft, GOOG, etc.?
It was the team. Who put the team together?
We will never know if OpenAI would have succeeded w/o SA. That's unfalsifiable. But they got to a life changing equity because the CEO helped get it there.
My point: SA was likely putting the scaffolding of a 2nd home run in place, right as the board got scared of being on 3rd base going for the run.
And therein lies the problem. The BoD was not playing the long game.
Yep, the board's failure occurred when it allowed the profit motive to take control of the organization. The crisis was merely a manifestation of the inevitable.
It seems there's more to it than that [1]. Altman did a surprisingly good job of cultivating a "nice guy" image among the tech crowd, but I personally don't buy it.
It's unfortunate that everyone else was so comically bad at literally any kind of communication. I'm starting to feel like the general advice lawyers give - say nothing - is terrible advice when you have such a public-facing company. It's like what you need in that situation is a communication coach first and a lawyer second.
But as an engineer that also never learned this kind of narrative control, I too am struggling at where I can learn this from. Any hints are appreciated actually.
Sam Altman, on the other hand, has decades more experience on that playing field.
This sounds like snark but it’s not intended to be: do you really need sources to learn narrative control? Perceiving how narratives work seems to be a skill that almost all humans are very good at naturally. I acknowledge that shaping narratives is a different skill set (especially the parts that require putting yourself in a position to shape narratives effectively), but in my opinion if you’re already asking the question you already have all the foundational knowledge and awareness you need to start exercising those muscles as well.
I get the point, but the reality is that this goes way beyond business communication. And yes, given enough time almost anyone can learn almost anything, but we're also human with limited time and it simply makes no sense trying to learn everything from scratch. You gotta know when to ask people that are experts in their respective fields.
Not exactly what you asked but I think the same approach would work: I set the intention to gain insight into how stories work. This intention alone was sufficient to massively increase my abilities in this area, because during "idle time" my brain would mull over / analyze stories I'd heard recently.
So I think for PR or whatever the same thing would work, simply deciding to become skilled in this area and you will naturally begin to pick up the patterns already present in your reality.
When did I say it didn’t go way beyond business communication? That’s largely my point: this is a basic human skill in the modern world that I would think existing for 20-30 years would give you plenty of experience in navigating stories told by others as well as using your own narratives to steer outcomes.
Too much of a nice person image makes me really skeptical, especially if it seems at all cultivated.
People are people. Virtually anyone can be prickly or have some opinions you might not like. Show me the reality. I can take it because I don’t expect humans to be cardboard comic book caricatures.
In OpenAI's case there are millions of dollars on the line for the employees involved so I consider that the most likely reason to stand behind him. Discounting that, are there any other companies that he's involved with where you see "cult vibes"?
Apparently, but family drama feels a bit... weird to use for character assessment. Especially when a ton of money is involved.
There are a lot of inter-family dynamics and motivations that cause people to do things.
Not saying she's right. Not saying he's right. Just saying that whole situation is probably too close to home (literally) to opine globally outside it.
I am conflicted about that. When money is involved I think it amplifies character even more. Not everyone with a ton of money has these kinds of allegations.
My guess is that a lot of families with money have these kinds of allegations, true or not.
Money just usually changes hands to make sure we never hear about them.
People are fundamentally people, and all do the same sorts of things. And the line between "highly successful businessperson" and "sociopath" is pretty fine.
Altman's sister's 'memories' of abuse were developed in adulthood [1]. Recovered memories are deeply suspect, and a number of serious miscarriages of justice have happened because of them. Most infamously the 'satanic panic' trials of the eighties and early nineties. Elizabeth Loftus has done lots of good work on the dubious nature of eyewitness testimony, and recovered memories in particular [3]. Essentially, they're far more likely to be contrived than real - especially when 'recovered' decades later during psychotherapy. I trained as a psychoanalyst myself (in addition to studying pure psychology at undergrad), and while repression and denial are genuine and common defence mechanisms; there is a whole pseudoscientific cult around elevating them into sources of personal truth. This goes back to Freud's original sin - not believing the direct accounts of family sexual abuse from his female patients, and instead perceiving them as fantasies. Ironically this has often led to an overemphasis on the veracity of unconscious beliefs [2].
> Recovered memories are deeply suspect, [...] especially when 'recovered' decades later during psychotherapy.
> This goes back to Freud's original sin - not believing the direct accounts of family sexual abuse from his female patients, and instead perceiving them as fantasies.
I can't understand how these two statements are not contradictory, could you elaborate?
Sure - since talk therapy was pioneered it's vacillated wildly in approach between what we would today call depth and cognitive approaches. At times the emphasis has been entirely on teaching socialisation and behavioural strategies, at other times it's been delving into unconscious motivations. More specifically Freud (and successors, especially Anna Freud) identified a series of defence mechanisms, that have been widely validated and are now accepted far outside of analysis. Two of these are denial (literally refusal to accept or acknowledge reality) and repression.
Meanwhile, Freud's developmental framework had young children going through early stages of 'Oedipal' development involving attraction towards their opposite-sex parent, and fantasies involving them - which would usually be repressed and could emerge as neurotic symptoms.
So you have this concept of memory as a potentially hidden repressed desire or trauma, and also this idea of attraction to the primary caregiver. Psychoanalytic techniques - like hypnotism and, later, free association - were designed to uncover the unacceptable thoughts and primal trauma that lead to neurotic symptoms. So you have this delving for hidden desire and the one original source of trauma (this is another major issue with psychoanalysis, the idea that all trauma connects to an original primal trauma), combined with a fixation on the desire for the parent.
Over time techniques and theory have balkanised, so you have people simultaneously seeking desires in the unconscious and refusing to accept the surface level meaning of symptoms and patient communication - while simultaneously latching on to any 'recovered memory' of trauma. This contradiction can motivate patients to perform for their therapists, producing symptoms, dreams, memories etc which explain the 'origin' of issues which may have other causes - from personality disorders to physiological disease, to social contagions.
I think they are saying that belief in repressed memories, to the current extent, is a result of the pendulum swinging too far in the opposite direction from Freud's skepticism and denial.
Things are far less clear-cut than you make them out to be.
Yes, there is good science that shows that eyewitness testimony can be unreliable. There is also some okay science that shows that people can be prompted to create false memories.
However, there is no good science that shows people can be prompted to create false memories of deeply traumatic or personal events such as you are saying. There is definitely no good science showing that recovered memories of traumatic events are more likely to be false than true.
Much of the rhetoric around the potential for falsely recovered memories comes from a foundation that was founded by an alleged abuser and has a loose relationship with scientific truth and a fairly sketchy history of protecting child abusers. https://en.m.wikipedia.org/wiki/False_Memory_Syndrome_Founda...
> there is no good science that shows people can be prompted to create false memories of deeply traumatic or personal events such as you are saying. There is definitely no good science showing that recovered memories are more likely to be false than true.
You're absolutely correct that there's a poverty of experimental research on implanting / recovering memories of sexual abuse. It's difficult to devise experiments with any ecological validity in this area - for obvious reasons, such studies would be unethical. However, we have the existence proof of thousands of such allegations proven to be false [1]. And fewer than half of US clinical psychologists believe repressed memories can be recovered accurately, while 95% agree that recovered memories can sometimes be false [2].
Child sexual abuse is horrifically common. But given the age of the victim during the alleged offence, and the delay in recovery, this particular case seems extremely unlikely to be true - in the absence of any corroborating evidence.
> Much of the rhetoric around the potential for falsely recovered memories comes from a foundation that was founded by an alleged abuser and has a loose relationship with scientific truth and a fairly sketchy history of protecting child abusers
I'm much more familiar with the work of Elizabeth Loftus on the construction of memory than I am with any media facing foundation. Her lab research shows it's trivially easy to create memories of early childhood experience - although again, she never researched implanting memories of abuse. She also did some great work on demand characteristics in psychotherapy - i.e.: unconsciously producing behaviour performatively for a therapist, such as symptoms or false memories.
> However we have the existence proof of thousands of such allegations proven to be false [1]
That's nothing like what is alleged here. A mentally ill mother pressuring a son into giving false testimony is very different from an adult "recovering" a memory.
> Given the age of the victim during the alleged offence, and the delay in recovery, this particular case seems extremely unlikely to be true
Based on what?
> I'm much more familiar with the work of Elizabeth Loftus on the construction of memory than I am with any media facing foundation.
She was recruited as a founding board member of that foundation and makes a lot of money off of the idea of implanting false memories as an expert witness.
> Her lab research shows it's trivially easy to create memories of early childhood experience - although again, she never researched implanting memories of abuse.
Yet she gives expert testimony saying that it is common.
Replication of her work has shown that while yes, some people can have relatively minor memories implanted, like getting lost at a mall, it is much harder or impossible to implant a less plausible memory, such as getting an enema.
Family matters can be messy. I have a family member with an alternate reality where our parents never supported her, and she had to overcome bad parents. It's a lie. I was there. She just didn't like them asking her to be responsible. However, I could totally see everyone in her life thinking she came from such an abusive family. I'm one of like 4 people on earth who know the truth.
The point is, families can be messy, especially with money involved, so we shouldn’t even try to insert ourselves.
Recovered memories are not a valid source of evidence. We should always presume the accused is innocent, that's the basis of any reliable justice system. Especially when no evidence is presented beyond extremely dubious self report.
Um no we should not presume the accused is innocent; we're not a justice system at all, we're a commentariat. Nor should we presume the accused is guilty; the information we have is precisely what we know, which leaves the question open, and it would be wrong to presume an answer either way. We should assume the accuser has a probability of being correct and not invalidate their claims based on some spurious framework designed to justify doubting them.
Especially not when there's a HUGE and incredibly well-documented pattern of "ignoring accusations of abuse against powerful people" which our society is, like, right in the middle of trying to get better about (while for some reason you are advocating for undoing all that progress?).
Probably this, where she is discussing her first memories on a podcast:
> Annie then basically proceeds to mention that “panic attack” doesn’t quite feel like her first memory, but doesn’t decisively settle on a “first memory.” She concludes the article: “TBD on the first memory of that history. Here’s to exploring.”
> This becomes relevant later on, as Annie ends up remembering an earlier memory than her panic attacks—Sam sexually assaulting her.
Her presentation is poor, but it seems pretty clear something terrible happened to her and was invalidated. Her pattern of behavior is consistent with others with major childhood traumas. Most people are not having panic attacks and thinking of killing themselves before elementary school.
Her pattern of behaviour is also consistent with BPD, which frequently involves delusions of persecution. Suicidal ideation and severe anxiety are unfortunately not rare in adolescence either, and certainly don't evidence sexual abuse - when not reported or remembered at the time.
Not to mention the well-established phenomenon of repressing memories of traumatic events, as well as the general memory impairment effects of chronic stress.
Is your position then that if a pre-adolescent child does not immediately report abuse with objective evidence and coherent testimony that we can only conclude nothing happened?
Daniil Khlomov once said, that when a patient talks to him about traumatic scenario from their past, it doesn’t matter if it actually happened or not, the only thing that matters is that the patient wants to discuss this exact scenario.
So let me get this straight, your training as a psychoanalyst qualifies you to discredit a woman's testimony, despite you never having met with her? How wonderful.
By the way, have you read this?
> Today, the American Psychiatric Association (APA) reiterates its continued and unwavering commitment to the ethical principle known as "The Goldwater Rule." We at the APA call for an end to psychiatrists providing professional opinions in the media about public figures whom they have not examined, whether it be on cable news appearances, books, or in social media. Armchair psychiatry or the use of psychiatry as a political tool is the misuse of psychiatry and is unacceptable and unethical.
Just to be clear. I'm not a psychologist or psychoanalyst. I trained as a psychoanalyst (and not in the US) but don't practice. Also to note - the APA don't represent psychologists or psychoanalysts, but rather psychiatrists. Anyone discussing the motivations of any public figure is engaging in armchair psychoanalysis, and although I agree with the premise behind prohibiting professionals from doing so, psychiatrists often do in fact provide professional opinions to the media about people they have not examined (for example diagnosing certain political figures).
In any case, I don't think anyone is qualified to credit or discredit a claim in a medical or psychologically diagnostic sense without a clinical interview (something that few psychiatrists are qualified to perform by the way). That's not what I'm doing here. What I'm pointing out is the unreliability of recovered memory. Specifically someone claiming that they repressed sexual abuse at age 4, for decades, and then remembered it accurately. That's simply not how memory usually works.
> In any case, I don't think anyone is qualified to credit or discredit a claim in a medical or psychologically diagnostic sense without a clinical interview (something that few psychiatrists are qualified to perform by the way). That's not what I'm doing here. What I'm pointing out is the unreliability of recovered memory.
Uh huh... and in another comment in this discussion you've gone on to speculate that she has BPD, with the implication that her accusations shouldn't be believed for that reason.
It would be consistent with motivating the confused cognition (e.g.: her rambling and contradictory statements in the article linked by OP), and producing false or disingenuous allegations. However so could lots of other things. My point wasn't to diagnose this person, but to make clear "it seems pretty clear something terrible happened to her and was invalidated" is a dubious rationale for believing any kind of allegation.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
Which includes the Duke Lacrosse players -- who were actually innocent.
I read the (long!) New Yorker article and I was left with three impressions: psychologists and their families are really weird people, I don't like her as a person, and she is right that memories aren't very reliable.
> Which includes the Duke Lacrosse players -- who were actually innocent.
She did say she takes every case (although she drew the line at a Nazi). Memory is unreliable, but it’s tough to argue that was the issue in the lacrosse case.
Leveraging that against a person who is risking everything to get justice against a person in power is a lot to put on them. Preventing that pressure opens up a way for manipulators to get through. Gödel probably has something to say about all that.
As an aside it’s interesting that in specific comments referencing the recent Altman events I have been repeatedly downvoted to -3.
I was wondering: if I were hiring someone and there were these allegations against them, would I hire them? Would I go by the fact that these are unproven allegations and hire the person, or keep looking, afraid of being associated with them?
What about if that person was an investor, dangling cash in front of me? Is that much different from the hiring situation? I think I would rather be safe and avoid being associated with them.
It's so important in these situations to gather as much hard evidence as possible (audio/video recordings), from the start.
Voice recorders and storage are very cheap nowadays; keep them on you, running at all times, if you ever find yourself in any similar circumstances. That way, it's no longer a case of "he said, she said", and it's clear who the problem is.
Of course, this assumes it's legal to do so where you live.
This would have been good advice a decade ago, but with how good deepfakes are now, I would argue that any audio/video evidence recorded and submitted by either party should be considered as part of their testimony rather than valid evidence, unless its provenance can be cryptographically confirmed.
> unless its provenance can be cryptographically confirmed
How do you envision that working? All I can think of that could in theory be possible would be using crypto to prove a video existed at a particular point in time. But that's not that valuable in terms of proof. Plus you can get the same effect by just handing a copy to someone like a lawyer who has a reputation for honesty and has a lot to lose if they're found to be dishonest.
That's pretty much what I'm thinking too. And you don't even need to submit a copy, but rather just a cryptographic hash of your file.
But there's still the issue of proving that you haven't manipulated it before handing it in, and here I can't seem to think of anything better than the recording and hash submission being done by a DRM'd tool outside the individual's full control.
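The hash-commitment idea discussed above can be sketched in a few lines: at recording time you publish (or escrow with a trusted party) only a SHA-256 digest of the file; anyone later holding the file can recompute the digest and confirm it matches. This proves the file hasn't changed since the commitment, not that it was authentic to begin with. The file paths here are hypothetical.

```python
import hashlib

def file_digest(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks
    so large recordings don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, committed_digest: str) -> bool:
    """Check a file against a digest published/escrowed earlier.
    Any modification to the file will change its digest."""
    return file_digest(path) == committed_digest
```

Note this only binds the file's contents to a point in time (via wherever the digest was published); it says nothing about whether the recording was genuine when the digest was made, which is the gap the DRM'd-tool suggestion is trying to close.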
> Some CEOs would get fired for moonlighting. Others (Elon Musk, etc) build massive companies and make their existing companies more resilient and lasting.
I don't get why more CEO/founders don't found second and third companies. You build a great leadership team, then explore adjacent spaces that can reinforce your vision of the future. In a way, it can clear the space of potential competitors. Or build potential customers.
I think Altman and Musk are doing an amazing job of it. Dorsey wasn't doing a bad job, either.
Thanks to covid and the remote work revolution, this has actually become a very real scenario for many people working in tech. Of course this also further exposes Altman's hypocrisy, as he is one of the stronger proponents of the "return to office" movement. So he obviously doesn't want his employees doing the exact same thing he does himself.
I actually agree. While I more or less detest Musk and Dorsey, I do also appreciate, as you put it, that they were able to build leadership teams that can run the business while they go about acting as the face of the business. Unfortunately I don't think they have fully succeeded at this, because to me their public image has now started to hurt their businesses...
IDK supposedly executives are paid to be on top of things, be responsible if they go south (but boards and golden parachutes protect them), and work however many hours it takes to make the important decisions.
So if their time is divided then my guess is they won't be doing it very well. If they are all face and PR then perhaps AI will replace them.
IMHO, this is the difference between the Berkshire Hathaway model and the Apple model.
Good leaders should probably invest a lot of time in developing a deep and talented pipeline of other good leaders, so that the current leader can be replaced.
Bad (but still effective) leaders leave the company flying off the edge of a cliff without a successor when they exit. Usually because they played politics and blocked anyone's rise who could challenge them. (See: PLO)
It's almost always because of OpenAI-like situations.
The CEO reports to a board, and the board is too narrow-minded or risk-averse to allow for the CEO's ideas.
It's actually not very dissimilar from the public company vs private enterprise conflict paradigm: public rewards short-term thinking while private rewards long-term investment.
Unless the board is composed of visionaries or entrepreneurs, or the CEO has a track record of cultivating enterprises, you are going to have a really hard time.