Contrary to many people's expectations, the most typical profile of someone who kills themselves is a 33-to-44-year-old man who presents no warning signs and leaves no note.

This is a tool that could be extremely helpful for addressing a large portion of suicides.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3777349/

https://www.health.harvard.edu/blog/suicide-often-not-preced...

https://www.medpagetoday.com/meetingcoverage/cap/82375




You and I seem to have different definitions of "warning signs." From the medpagetoday story you linked:

- 239/657 suicides were people under psychiatric care (note they used the present tense; more of them might have had a history of care)

- 187/657 had a previous attempt (!!!)

- "About 22% of the deaths were accompanied by no known inciting event or identified life stressor" Phrased differently, 78% of suicides accompanied a life event or stressor.

Contrary to popular belief, risk prediction isn't usually rocket science. Suicide correlates are extremely well studied, and any experienced mental health professional can point out high-risk individuals to you if they ever cross paths with them.

The most challenging problem is often: what do you do about it? Can you get them the help they need? Can you navigate them through a big bureaucracy like the VA? Do you have grounds to forcibly keep them in in-patient psychiatry? Etc., etc.

Source: Relative is a psychologist @ the VA.


I think you are overstating the predictive power. You can find strong correlations looking backwards, but that doesn't mean that they are good or strong predictors looking forward.

To take just one of your data points as an example, 78% of suicides were preceded by a stressor. If you flip it to make a prediction, what percentage of life stressors lead to suicide? The number of people who experience major life stressors must be enormous. If tens or hundreds of millions of people have major stressors a year, that doesn't help you very much in targeting limited services.
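
To make that flip concrete, here's a rough back-of-the-envelope sketch in Python. The 78% figure comes from the numbers above and the ~45k annual US suicides figure is cited elsewhere in this thread; the number of people experiencing a major life stressor in a given year is purely an assumed value for illustration, since nobody knows it precisely.

    # Flipping "78% of suicides follow a stressor" into
    # "what fraction of people with a stressor die by suicide?"
    suicides_per_year = 45_000        # approximate US figure cited in this thread
    frac_with_stressor = 0.78         # from the MedPage Today numbers above

    # Assumption for illustration only: how many Americans experience a
    # major life stressor in a given year (unknown, but clearly huge).
    people_with_major_stressor = 50_000_000

    suicides_after_stressor = suicides_per_year * frac_with_stressor
    rate = suicides_after_stressor / people_with_major_stressor
    print(f"~{rate:.3%} of people with a major stressor die by suicide that year")
    # With these assumptions: roughly 0.07%, i.e. about 70 per 100,000.

Even with generous assumptions, knowing that someone had a stressor barely narrows things down on its own.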

The same holds true for psychiatric care. A prior suicide attempt is probably the exception there, but even that has only ~30% predictive power looking forward.


You're missing that the models (including mental models) that professionals use to predict suicidality are not univariate.

When a manic depressive with substance abuse issues and prior suicide attempts walks into a clinic, nobody needs a "new cognitive science tool" to tell them that they're a high suicide risk, and these cases are a larger proportion of suicides than most people think.

Also, that sort of misses the point of my comment, which is perhaps my fault for not being clear enough. Preventing suicide is a two-step process:

1) find high risk people

2) prevent their suicide attempts

My comment was simply relaying the POV of someone who does this professionally: as a society we are better than most people think at #1 and worse than most people think at #2.


I think we still might be talking past each other. My assumption would be that we do a pretty terrible job at both #1 and #2 on your list, and we should look at how we can do both better. I don't think it is a competition.

I have no doubt that professionals can predict suicide for some people with actually high confidence, e.g. >50%. Sometimes it is just obvious to anyone with eyes. My point is that despite this, it doesn't mean that they are good at finding suicidal people in general.

If 45k Americans kill themselves per year, how many of those were previously identified as "high risk"? How many people in total are identified as high risk?

Most of the things we consider "risk factors" are terrible predictors. Take depression or anxiety, which are considered major factors. They only increase the relative risk of suicide between 2 and 5x. That is crap. When your baseline suicide rate is 13/100,000 and someone with major depression is 5x more likely, they are still only at 65/100,000.

This is just a long-winded way of saying that if you want to claim we are good at predicting suicides, we have to take into account all of the false positives and all of the false negatives.

If someone gave top professionals the job of predicting which 13 people out of 100k would kill themselves, how many do you think they would get right and how many would they miss?
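
To put rough numbers on the false-positive problem, here is a minimal sketch in Python. The 13/100,000 base rate and the 2-5x relative risk are the figures above; the 7% prevalence of major depression is an assumed number purely for illustration.

    # Screen 100,000 people by flagging everyone with major depression,
    # using the figures discussed above.
    population = 100_000
    base_rate = 13 / 100_000    # overall annual suicide rate cited above
    relative_risk = 5           # upper end of the 2-5x range cited above
    prevalence = 0.07           # assumed prevalence of major depression (illustrative)

    # Split the overall rate so the flagged group is 5x the unflagged group.
    rate_unflagged = base_rate / (1 + prevalence * (relative_risk - 1))
    rate_flagged = relative_risk * rate_unflagged

    flagged = population * prevalence
    true_positives = flagged * rate_flagged
    false_positives = flagged - true_positives
    missed = (population - flagged) * rate_unflagged

    print(f"flagged as high risk:     {flagged:.0f}")
    print(f"suicides among flagged:   {true_positives:.1f}")
    print(f"false positives:          {false_positives:.0f}")
    print(f"suicides among unflagged: {missed:.1f}")
    # With these assumptions: ~7,000 people flagged, about 3-4 of the 13
    # suicides caught, ~6,996 false positives, and about 9 suicides missed.

Even taking the strong end of the relative risk, the flag catches only about a quarter of the suicides while sweeping in thousands of people who were never going to attempt.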


I meant more immediate warning signs, like giving away personal possessions or sudden happiness.

When my friend killed himself, it both was and wasn't a surprise. On one hand, he had a whole host of risk factors that everyone knew about. On the other hand, he'd gotten therapy and meds and seemed to be doing much better. His texts seemed normal. I'd even taken a training called "mental health first aid" - I knew that if I saw any signs I should ask him directly if he was thinking of suicide.

If there had been a magical way to know that he was in a period of high risk, there are a million things that could have been done besides in-patient. As one example, he had many friends who would have been by his side 24/7 (as had been done when we knew he wasn’t doing well).

I’m extremely skeptical that a technology like this could have worked and detected what was going on with him. I’m even more skeptical that the military would use it in an appropriate way. But still, I do see a glimmer of hope in it.

Of course, the obvious solution and the one that should be higher on everyone’s priority list is to simply stop sending soldiers into wars, stop letting parents get away with beating the shit out of their kids, and stop covering up rape. Unfortunately the mind reading machine is probably more likely to actually get done.

Just to add, I hear what you’re saying about how the hard part is preventing suicide more so than identifying risk. That lines up with what I’ve seen. I don’t think that negates the need to work on both problems.


My impression (I do research in this area, broadly speaking) is that this sort of tool is not really very strongly related to the reasons why so many people who die by suicide present no warning signs, etc.

Implicit measures tend, on replication and rigorous scrutiny, to be pretty broad-brush and not very specific. There is a literature on implicit measurement and suicide, and the findings seem to hold up, but they don't dramatically add to anything, and there are also tricky issues about the meaning of "preconscious" assessment. So, lots of false positives etc., and also maybe even bigger questions about the meaning of implicit processing relative to conscious opportunities for "revision".

I think the problem is, what good does it do if someone has some predilection toward suicide but isn't aware of it? If they were going to be resistant to addressing it without this tool, why would they be less resistant to it with it?

The problem with suicide prediction has never really been in predicting suicidal attitudes or cognitions, it's been predicting the actual follow-through with an attempt or success. That depends on a lot of stuff that exists outside of attitudinal space, for lack of a better way of putting it. Someone can basically feel disturbed by the idea until they don't, and then change rapidly in their feelings about it, or vice-versa.

I can't say I think this research line is a bad idea; I think the more information the better. But I'm really skeptical about how well it will hold up when rigorously evaluated, with lots of data, especially when you put it into practice. There are a lot more hurdles than people think.


> I think the problem is, what good does it do if someone has some predilection toward suicide but isn't aware of it? If they were going to be resistant to addressing it without this tool, why would they be less resistant to it with it?

(Not a researcher in this area, but someone who's dealt with depression.) This is a big concern of mine, and it doesn't seem different from the issues with a standard lie detector. A depressed person can feel fine one day and not the next. Things like anhedonia, grief, and guilt aren't constant day to day. A lot of suicides are preventable simply by getting a person through a day or two. This is the number one reason I'd argue for waiting periods on pistols (most gun deaths are suicides, btw). Would such an event even show up on this device? Timing becomes an extremely important factor here.

My understanding, too, is that we still don't even know how these diseases function at a chemical level in the brain (norepinephrine, dopamine, and serotonin). Categorically there can be too much, not enough, or a broken uptake mechanism; our main treatments are reuptake inhibitors (SSRIs or SNRIs), and those tools are really limited within those categories (which are very high level, fwiw, and need to be broken down substantially more to target good drug delivery).

I am also very concerned about how this technology can be abused. While I agree that there can be a lot of uses for it, that doesn't mean we shouldn't worry about the potential for misuse.


Even if it turns out not to be the perfect solution, this kind of measurement might be useful for things like estimating the effectiveness of medication, other medical interventions, or even lifestyle changes. Imagine if this kind of thing could be turned into a medical device. Many people who end up committing suicide have had previous suicidal episodes. A technology that people with known suicidal tendencies could use to help keep tabs on their own mental health could potentially be as useful to them as blood sugar monitoring is to diabetics.


The road to hell [...] good intentions.


I see other people in the comments saying that this might be used to limit people's rights. I don't view that as a good reason to not pursue this research.

We have existing legal frameworks for ensuring people's rights are protected. Doctors can't just kidnap someone; courts are involved. In many cases those rights and protections should be strengthened.

The fact that those protections and frameworks aren't perfect shouldn't stop doctors and medical researchers from doing their job, which is to treat patients.


I'd be fine with this being developed in a civilian clinical setting.

I am not fine with the military developing this and then considering its use on service members.

Particularly troubling are the lines "since patients will often tell their clinicians what they think the clinician wants to hear rather than how they are truly feeling" and "on aggregating preconscious brain signals to determine what someone believes to be true."

They see an issue with a voluntary clinical process and they want to remove the voluntary aspect of it. To me, it seems they are reacting to a process failure they haven't categorized correctly and are attempting to remove the patient from their own process of care.

If the intention is to use this on service members without their explicit request, this presents one of the slipperiest slopes I've ever seen.


"I am not fine with the military developing this and then considering it's use on service members."

This. The rights and protections you have in the military, as well as the military judicial system, are vastly different from the civilian world. I have very little confidence in even the civilian side (so many abuses and so much incompetence).


That's an extremely good point. It's easy for me to forget that DARPA is primarily in the business of war.

I hope there is a version of this that's used to help people, but I have changed my mind and now agree that it's inappropriate for a military to be developing a technology like this.


How can they train on a dataset knowing that their training participants may also have been telling clinicians what they wanted to hear rather than how they were truly feeling? In any event, the ground truth is unknowable, so it is wrong to assert it with authority.


It doesn't matter how many legal frameworks there are. Neither courts nor doctors can read people's minds so they are basically kidnapping people based on something that is barely more than a guess. There are thousands of stories of people getting sectioned for something stupid like having a dark humour and telling an off-colour joke. There are also thousands of stories of people who had been plotting their suicide for months, reached out to the hospital for one last attempt for help, got turned away for supposed attention-seeking, and then killed themselves.

Anything that can elevate institutionalization to more than mass guessing has to be a plus. Though we also do need to solve the problem that these institutions are so often nightmares to be in, so that suicidal people are getting what they need instead of just being imprisoned.


There is no framework whatsoever for neural monitoring. This is willfully dishonest, and more than a bit worrying to see trotted out as a defense of one of the most invasive technologies currently in development.


That's not true at all, at least in a civilian context. As others have pointed out, the fact that this is being developed by DARPA means that those legal restraints may not actually apply, which I agree is very disturbing.

First of all, it would be health data protected under HIPAA.

It could also be relevant to involuntary hospitalization, where the current standard is "clear and present danger." In general, you can't be involuntarily hospitalized for saying something like "I'm having thoughts of suicide," but you could be involuntarily hospitalized for talking about a specific plan for suicide or actually attempting suicide. The idea that this technology could legally demonstrate that someone is a clear and present danger to themselves is far-fetched. I'm not saying the legal system is perfect or even good, but it's not 100% stupid. Judges can and do distinguish between statistical and non-statistical evidence.

Red flag laws/ERPOs use a less stringent standard from what I understand, so it is somewhat more likely (although still unlikely overall, I'd argue) to be applicable in that case.


In Sweden they certainly can: if a person is a danger to themselves or others (as declared by two doctors), they can be put into closed mental care with a police escort and drugged to the extent that they can't speak for themselves, without any right to legal protection. The protection of their life is considered to be more important. I don't know about US law, but other countries probably have similar laws.



