Nowado's comments

Last time I tried it, the app literally just read papers, as in parsed arXiv PDFs fed through text-to-speech. It was an awful misunderstanding of the medium. Unless it has been rebuilt significantly over the last few months, it's just bad.


We built Oration (https://oration.app) to improve on issues like this. It also generates a summarized version.


I'm not sure if it's the most modern setup there is, but https://www.youtube.com/watch?v=UPtG_38Oq8o gives an exceptionally friendly explanation.


There's an even more fun aspect.

'Survival' for cancer tends to be defined as surviving five years after diagnosis. The earlier you catch it, the more time the patient had left to live anyway.
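
A minimal sketch of that effect (lead-time bias), with made-up dates just to show how moving the diagnosis earlier can flip the five-year statistic even when the date of death never moves:

    from datetime import date

    # Hypothetical patient: the date of death is fixed, only the diagnosis date moves.
    death = date(2030, 6, 1)
    late_diagnosis = date(2027, 6, 1)    # tumor found from symptoms
    early_diagnosis = date(2024, 6, 1)   # same tumor, found 3 years earlier by screening

    def counts_as_five_year_survivor(diagnosis: date, death: date) -> bool:
        """'Survival' here means living at least 5 years past diagnosis."""
        return (death - diagnosis).days >= 5 * 365

    print(counts_as_five_year_survivor(late_diagnosis, death))   # False: ~3 years from diagnosis
    print(counts_as_five_year_survivor(early_diagnosis, death))  # True: ~6 years, same death date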


Probably hyperbole, but a colleague told me about an 80/20 distribution: that as the population ages in the West, a decreasing share of spending goes to substantial life extension or quality-of-life improvement.

Most of that improvement comes from the basic, good old medical care invented 100 years ago, while dizzying amounts are spent on prolonging lives by very, very few years, often very late in life - efforts that, in effect, come very close to having done nothing, i.e. almost performative.

Is this true?


I don't know about that. Speaking to oncology, as I work in an NCI-designated cancer center (i.e. somewhere that spends dizzying amounts), the patient population skews younger than you might think these days.

I’m not sure what you mean by “very, very few years”. As a hypothetical would prolonging life for ~3-7 years in a 40-50 year old be considered “almost performative” to you?

“Good old medical care” often means 3-6 month survival for these patients.


Yes. The amount we spend to keep people alive who have little to no hope of ever recovering is immense. Of course it is cruel and leads to myriad bad outcomes if you were to even attempt a discussion about trying to change that (it is the slipperiest of slippery slopes).

There's probably no way to actually do anything concerted about it without turning society into Logan's Run, but having gone through it with a grandparent and a parent, it is clear something is broken at the end of life.


Spending on Medicare beneficiaries in their last year of life accounts for about 25% of total Medicare spending on beneficiaries age 65 or older.

So yes, it's true (although that includes the cost of hospital stays, which is where a lot of people end their lives).


Probably. Look at the people in the hospital - they’re old. Inpatient costs are astronomical, and seniors with poor social supports end up hospitalized at great expense over issues whose root causes are easily prevented… like dehydration.


Being old is a fairly long part of life nowadays. Old is not the same as hopeless or almost dying.

My grandma had a melanoma at the age of 74, which is "old" by most human standards. It was located on her earlobe and an operation helped her get rid of it.

She then lived to be 90, most of that extra time either fully or partially self-sufficient. Only in the last months of her life did she really deteriorate.

Basically, she gained almost a fifth of her life by that single operation performed when she was already old.


That’s awesome. I’m not suggesting that older folks not get care.

But because of the way our system works, we’ll happily pay $300k to hospitalize an otherwise healthy 70-year-old who is dehydrated and develops serious problems that could be solved by an aide or helper that would cost $20-30k.


You really don't want to have white text on light grey buttons.

Fun little thing otherwise!


To the OP: It looks like your `button`s have no `color` property, and are just relying on the user agent stylesheet to (hopefully) set a dark text color. This will vary for different users.


Do you have some user research you could share?

I remember thinking about this exact problem (branching conversations, in particular audio), but I couldn't find a reasonable consumption pattern.

Looking at how I consume podcasts, it's a completely passive experience - I probably have something in my hands and can't talk. Choosing paths is just too much interactivity.

I figured that maybe that's just the wrong mode to look at it, and people could consume the whole thing differently, not as a podcast. OK then, say I'm an obsessed power user/fan and I consume the whole thing, all branches. Given how human attention/memory works, that means returning to earlier parts of the recording after listening to a branch at least some of the time, repeatedly experiencing 'where did we start? Let me go back a bit. Oh, that topic was the starting point. Let me skip forward a bit now that I know it.' That's horrible, I think. You were at least more reasonable than me when thinking about it and decided to have only one level of branching ; )

In a similar vein, what happens when a comment gets added after I've already listened, and how do I know which parts are 'the definitive experience'? Unlike the previous two issues, these questions are answerable, but I'd still like to hear what you think the answers are!


I think there can be several ways to use it. Passive listening can be, and for most people will be, how it's used. But people who want to discuss and "add to a conversation" will have the option.

Definitely will be modifying the experience to be fully handsfree.

I think the context issue is what will make this actually work or not. Currently it's not built out, but my thinking is to have a short context of what was commented on, i.e. the 10 seconds before the comment. That way you can jump back into the conversation from a new comment that was left.

I think the context length can also be determined by how long it's been since you listened to the last audio - meaning a comment left after a week might have 60s of context, vs a comment after 10 mins might just have 5s.
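
A minimal sketch of that idea - scaling the replayed context with time since the listener last heard the recording. The 5s and 60s figures come from the comment above; the middle tier is an assumption:

    def context_seconds(seconds_since_last_listen: float) -> int:
        """How many seconds of prior audio to replay before a new comment.

        The longer it's been since the listener last heard the recording,
        the more context they get. Thresholds are illustrative only.
        """
        minutes = seconds_since_last_listen / 60
        if minutes <= 10:
            return 5            # just stepped away: a few seconds is enough
        if minutes <= 24 * 60:
            return 20           # came back within a day
        return 60               # a week (or longer) later: a full minute

    print(context_seconds(10 * 60))        # 5  - back after 10 minutes
    print(context_seconds(7 * 24 * 3600))  # 60 - back after a week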

And yes, the UI isn't great right now at showing what you've already listened to, but that needs to be obvious too.


The general tendency for internet content is a strong separation between creators and consumers, and in particular limited interest of consumers in other consumers (think Twitch chat: each message is valued very little compared to the streamer, to the point where streamers always read out the messages they respond to). That means unless something nudges people away from the central path of the audio by default, adding to the conversation isn't part of the canon content.

There could be a way for the responder to signal where the content they are answering starts, with some sort of fuzzy automation in the future. I have strong doubts about the actual experience of this for the listener, but maybe that's solvable.

I meant the situation where I've already consumed the whole recording, but it gets a response later on.

I do not have a mental model for context being logically attached to the response. Do you think of response+context together as a valid piece of content?


Yes, I think of a response as a valid piece of content in itself. For example, if you recorded the above message as audio and I wanted to respond after your first paragraph, then my response might be:

[context] (your audio) "adding to conversation isn't part of canon content."

(my audio) "I disagree I think adding to the conversation can be just as valuable especially if content is filtered correctly meaning you'd only see comments from people you followed"

Something like that. But yes I do agree this is a huge issue and basically if it can't be solved then the app will fail.

I think the use case won't be to replace a typical podcaster, but imagine if we're both in a group and discussing a podcast on the podcast itself - that's more how I see it.


I could see it working a bit better if the original podcast had fixed points where they allow comments. So they can say something like "ok now we will open this section up for comments, and afterwards we'll continue on to...".

Then as comments come in they can follow a dialog structure, or the original poster can come in and add some clarification as a reply.

edit: If there are a lot of comments coming in, you could set it to autoplay only the top X comments and their replies or whatever.

For reviewing afterwards, it would also be really helpful to have an auto transcription so you can quickly scroll through for anything you missed or want to go back to.
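
A rough sketch of the "autoplay only the top X comments and their replies" idea mentioned above, assuming each comment carries a score and a parent id (hypothetical fields, not anything from the actual app):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Comment:
        id: int
        parent_id: Optional[int]   # None for top-level comments
        score: int
        audio_url: str

    def autoplay_queue(comments: list[Comment], top_x: int) -> list[Comment]:
        """Pick the top X top-level comments by score, then append their replies."""
        top_level = [c for c in comments if c.parent_id is None]
        chosen = sorted(top_level, key=lambda c: c.score, reverse=True)[:top_x]
        chosen_ids = {c.id for c in chosen}
        replies = [c for c in comments if c.parent_id in chosen_ids]
        return chosen + replies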


I like this idea - give people control over where they allow others to comment. I'll think more deeply about this, thanks.


Interesting, do you happen to have some quantitative results on this/additional insights/etc?

I've interpreted transformer vector similarity as 'likelihood to be followed by the same thing' which is close to word2vec's 'sum of likelihoods of all words to be replaced by the other set' (kinda), but also very different in some contexts.


There's no simplified definition like that; vectors can even capture logical properties. It all comes down to what the model was tuned for: https://www.sbert.net/examples/training/nli/README.html
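
As a concrete illustration, a minimal sketch using the sentence-transformers library behind that link (the model name is just a commonly used default; what counts as "similar" depends entirely on what the chosen model was fine-tuned on):

    from sentence_transformers import SentenceTransformer, util

    # Any pretrained model will do; this one is a common general-purpose default.
    model = SentenceTransformer("all-MiniLM-L6-v2")

    sentences = [
        "A man is eating food.",
        "A man is eating a piece of bread.",
        "The girl is carrying a baby.",
    ]
    embeddings = model.encode(sentences, convert_to_tensor=True)

    # Cosine similarity between the first sentence and the other two.
    print(util.cos_sim(embeddings[0], embeddings[1]).item())
    print(util.cos_sim(embeddings[0], embeddings[2]).item())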


I was thinking about a similar solution (actually, textblaze, funded by YC, is pretty much that), but I didn't like the remembering-the-shortcuts part. So I made a different kind of indexing for it: https://discu.space/ The presentation uses 'what are you answering to' as the key, but you can use anything.

It currently exists as a Chrome extension (hopefully working for everyone - it could use more testing), but there's a universal API underneath. It could be run entirely locally if one were willing to give up portability.


++ with a mere MA in cog sci and psychology. If you really wanted to get EEG to work for typing, maybe you could train someone to map thinking about specific kinds of things to the keyboard, but that would be an extremely weird experience. The eyeball is going to give us the best signal about eyeball-related motor cortex that we can access.


Gives me a captcha loop : (


Update your browser. I recently got stuck in that with a slightly old version of Firefox. After updating, everything worked again.

I suspect it wasn't cached state or plugins, because I tried clearing those to no avail. My bank then informed me it didn't like my Firefox version (despite it being relatively recent).


I have the most recent version (116.0.3) and I'm also seeing a loop. This includes trying the site without plugins.


We detached this subthread from https://news.ycombinator.com/item?id=37274783.


Isn't this 'just' how Copilot works, except with comments? What's the advantage over Copilot?


Copilot doesn't continuously update your code when you make changes.


you get to delay code review

