Hacker News | ryanjamurphy's comments

There's an ethical/moral-luck dilemma at the heart of this.

If a AAA-tier podcast on the subject you want to hear about exists (and you know about it), then that's probably the better (and obvious) choice for your listening time.

However, if you want to listen to someone discuss or explain something and you don't know about a AAA-tier podcast, it's possible that a generated podcast is better than nothing.

On the other hand, it's also possible that the generated podcast will miss or hallucinate a key detail, and herein lies the dilemma. Is it better to listen to something that might get something wrong, or not to listen and perhaps someday to learn about the subject through some other form that is less likely to include mistakes?


> But it’s also a bit quaint, these days. To your typical 21st century epistemologist, that’s just not a very terrifying dilemma. One can even keep buying original recipe JTB [...]

Sorry, naive questions: what is a terrifying dilemma to a 21st-century epistemologist? What is the "modern" recipe?


Thanks for this comment. I think I found the article: https://www.pnas.org/doi/10.1073/pnas.1821936116

Layperson coverage: https://news.harvard.edu/gazette/story/2019/09/study-shows-t...


I suppose the extreme version of the parent comment's vision would be to develop entirely new neurological circuits that can process, interpret, and integrate some arbitrary new source of data in the world. I agree that that's kind of unimaginable now, but give infinite monkeys infinite typewriters and one of them will probably hook up the company's sales data to a new section of cortex just to see what would happen.

I read a more interesting takeaway, perhaps: that we can — and do — develop new "senses" for any given signal we can perceive. A possibly-shoddy example is what social media does to us: the social networks provided everyone with a novel social sense, and indeed everyone who uses social networks perceives and attunes to that sense in different ways.

This has practical implications: given that we don't have infinite cognitive capacity or even much moment-to-moment bandwidth, we should be careful about which of these digital senses have our attention.

There're obvious links here to "augcog" (augmented cognition; [1]), but also I feel like Ackoff's five assumptions about "management misinformation systems" are relevant somehow[2].

Interesting to think about!

[1]: Especially DARPA's work and similar — https://en.wikipedia.org/wiki/Augmented_cognition#DARPA's_Au... [2]: https://www.jstor.org/stable/2628680


The important thing I take away from this comment is that before you choose what to read, it's crucial to be able to identify which readings contain takeaways useful and valuable enough to be worth the effort of reading them. It's true that modern mental models of reading and writing train us to seek out only the stuff that's easy to read, but the real problem is that there's so much to read that we have to prioritize. That leads to a tendency to read the easy stuff, because it guarantees you'll get something out of it, and then to a sort-of market dynamic favouring the success of the easy stuff and the dismissal of the hard.

If that dynamic means that we miss out on the readings that are truly transformative, we've lost. So perhaps the strategic differentiator between readers is to actually have a really powerful theory of prioritization, and useful mechanisms to prioritize (such as the curated references of a good university course or a social network that shares only the most important resources, regardless of how difficult they are to understand).


I've recently heard #1 and #2 succinctly said as "Inspiration follows action."


Thanks for this recommendation. I've tried SteerMouse and other alternatives in the past and haven't stuck with them (can't recall why exactly) but would dearly like to get away from Logitech's software.


I am also curious about this!


It's not really moving on - I'd still love to have something like this, but it's an entire paradigm shift that I haven't had the capacity for, not to mention buy-in from other parties.

ericalexander0's comment above touches on some of it ("poor assumptions, prioritizing ideology over customer value, and misaligned shared mental models")

In my particular case, I was expanding a business and starting a new one, and had just discovered the whole "productivity" scene and had naive notions of using task management tools and Notion wikis to achieve some latent superpowers. But I got into a rabbit hole where nothing was good enough, there was always some element of lossiness as you moved between tools, and all the tools in the world are not a substitute for having clear mental models and actually just getting on with the job rather than thinking endlessly about the most beautiful and intuitive ways of getting it done.

Separately from these meta-concerns, building and navigating a model was not as fluid as a Workflowy/Dynalist situation (the latency was small but annoying, like the early days of Notion), and as your model built up and reorganised it was easy to lose track of things. There's still some value in graph-based knowledge management (e.g. Obsidian), but it's also important to remember that storing and having access to information (however aesthetically pleasing it might be) is not the same as knowing something.

Possibly a larger conversation lurking somewhere about productivity, management, the meaning of work, ADHD, and friction.


> It can be the great filter that leads to the Fermi paradox.

I'm increasingly of a similar view — that the great filter is something we're already past, due to the incredible combinations of constraints that led to where we are today.

Another infinitesimal probability may be the development of abstract intelligence. The conditions that led to our brand of intelligence being an evolutionary advantage seem particularly unique: https://www.ncbi.nlm.nih.gov/books/NBK210002/


I feel there might be intelligent species out there without either the resources to develop technology or the ability to do so. Sharks could become smarter than us one day, but without thumbs they aren't going to succeed in taking over the planet. Similarly, there could be a humanoid fish species under the thick ice sheets of Europa, and they would never guess there is anything more than their "ocean" in the universe.


Good one, I'll have a look. Are there any good videos on YouTube on the topic of origin of intelligence? I can't seem to find any.


Wish I could recommend one but I don't frequent YouTube, sorry!


It doesn't respond to the 2016 address, but this site [1] has a table of many of Christy's claims and evidence against them.

A central graph in the 2016 address (found on page 2) has numerous visualization issues (e.g., misalignment of the lines on the graph to exaggerate differences, no uncertainty ranges are provided, averaging data sets in the curves, leaving out data) [2].

[1]: https://skepticalscience.com/skeptic_John_Christy.htm

[2]: https://skepticalscience.com/climate-models-intermediate.htm

