
Zuckerberg's reasoning on AI/ML seemed very justifiable to me in the Dwarkesh podcast

IIRC,

- Having a better model is a competitive advantage in fighting spam.

- Better models enable Facebook itself to understand its code vulnerabilities, improve employee productivity, etc.

- Being at the frontier of open source keeps them at an advantage in terms of updates from the community.


Also Meta is in the business of sharing content. Instagram, Facebook, Threads, and WhatsApp are all about sharing content.

They’re highly incentivized to expand the options for generating content. OpenAI and Anthropic have to make money from the generation, which raises the barrier to creation. Meta can say fuck it, let everyone create and we’ll sell ads against what they create.


I understand you're trying to simplify every other job out there in a crass way. But,

- How many companies out there are tripling revenue? If a majority of companies are tripling their revenue YoY, is that healthy or cancerous for the economy?

- How many times does an industry "reinvent" itself?

- What use is all the other software without CRUD apps pulling money from the "other" economy into software?

Famously, computing got its foothold __because__ Lotus 1-2-3 was utilitarian. What I mean by that is that the ultimate utility of all software products comes from software interacting with the real world somewhere down the line.

It might not be hip or world-breaking, but it is the most important part.


Lotus 1-2-3 did affect the economy and was a breakthrough.

But that was 1983, closer to WWII than to today. No AI, no internet, no cellphones. We have vastly more powerful systems and dev tools now.


Vastly more powerful systems that are slower. Pretty much the only office tasks that are faster are crunching spreadsheets and sending data to a neighbouring office: everything from booting the machine to keying in data to making phone calls takes longer.


Maybe if we got some real serious inflation, could we triple our revenue? /s


Kahneman was such a fascinating personality. Other than "Thinking, Fast and Slow", I highly recommend "The Undoing Project" by Michael Lewis, about Kahneman and Tversky's incredible journey changing standard economic theory.

Some interesting talks with Daniel Kahneman

- https://www.edge.org/adversarial-collaboration-daniel-kahnem...

- https://replicationindex.com/2017/02/02/reconstruction-of-a-... Kahneman himself responds in the comment section to a very critical piece about his work.


Don't forget "Noise"; it was a great read and a great bridge between "Thinking, Fast and Slow" and "Nudge".


Nudge is based on a significant amount of messy science and smuggled policy preferences. Its concepts have also not really been borne out by the policy initiatives they inspired.


Sad that Tversky, despite being younger, predeceased him by nearly 3 decades.


Does behavioral economics offer an alternative theory with useful predictions? Like, do we have an option pricing model that holds up better than models coming out of standard theory?


Those option pricing models (mostly) work because everyone uses them. Because (almost) everyone is convinced they work.

Convince everyone to use a different model and it will work.
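
For concreteness, here's a minimal Python sketch of the standard-theory model being referenced, Black-Scholes for a European call. All inputs below are hypothetical:

    # Minimal sketch of the Black-Scholes price for a European call,
    # the canonical "standard theory" model. Inputs are hypothetical.
    from math import log, sqrt, exp
    from statistics import NormalDist

    def bs_call(S, K, T, r, sigma):
        N = NormalDist().cdf  # standard normal CDF
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * N(d1) - K * exp(-r * T) * N(d2)

    # spot 100, strike 105, 1 year to expiry, 2% rate, 20% vol
    print(round(bs_call(100, 105, 1.0, 0.02, 0.20), 2))  # ~6.70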


Kahneman's reply was buried deep inside the comment section. Reproduced below for others' convenience:

>> I [Kahneman] accept the basic conclusions of this blog. To be clear, I do so (1) without expressing an opinion about the statistical techniques it employed and (2) without stating an opinion about the validity and replicability of the individual studies I cited.

What the blog gets absolutely right is that I placed too much faith in underpowered studies. As pointed out in the blog, and earlier by Andrew Gelman, there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples. We also cited Overall (1969) for showing “that the prevalence of studies deficient in statistical power is not only wasteful but actually pernicious: it results in a large proportion of invalid rejections of the null hypothesis among published results.” Our article was written in 1969 and published in 1971, but I failed to internalize its message.

My position when I wrote “Thinking, Fast and Slow” was that if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional. This position still seems reasonable to me – it is why I think people should believe in climate change. But the argument only holds when all relevant results are published.

I knew, of course, that the results of priming studies were based on small samples, that the effect sizes were perhaps implausibly large, and that no single study was conclusive on its own. What impressed me was the unanimity and coherence of the results reported by many laboratories. I concluded that priming effects are easy for skilled experimenters to induce, and that they are robust. However, I now understand that my reasoning was flawed and that I should have known better. Unanimity of underpowered studies provides compelling evidence for the existence of a severe file-drawer problem (and/or p-hacking). The argument is inescapable: Studies that are underpowered for the detection of plausible effects must occasionally return non-significant results even when the research hypothesis is true – the absence of these results is evidence that something is amiss in the published record. Furthermore, the existence of a substantial file-drawer effect undermines the two main tools that psychologists use to accumulate evidence for broad hypotheses: meta-analysis and conceptual replication. Clearly, the experimental evidence for the ideas I presented in that chapter was significantly weaker than I believed when I wrote it. This was simply an error: I knew all I needed to know to moderate my enthusiasm for the surprising and elegant findings that I cited, but I did not think it through. When questions were later raised about the robustness of priming results I hoped that the authors of this research would rally to bolster their case by stronger evidence, but this did not happen.

I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware. There is adequate evidence for all the building blocks: semantic priming, significant processing of stimuli that are not consciously perceived, and ideo-motor activation. I see no reason to draw a sharp line between the priming of thoughts and the priming of actions. A case can therefore be made for priming on this indirect evidence. But I have changed my views about the size of behavioral priming effects – they cannot be as large and as robust as my chapter suggested.

I am still attached to every study that I cited, and have not unbelieved them, to use Daniel Gilbert’s phrase. I would be happy to see each of them replicated in a large sample. The lesson I have learned, however, is that authors who review a field should be wary of using memorable results of underpowered studies as evidence for their claims.
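
To make that power argument concrete, a quick back-of-the-envelope sketch (all numbers hypothetical):

    # If 20 independent studies each have only 50% power to detect the
    # true effect, unanimous significance is wildly improbable even
    # when the effect is real -- hence the inference of a file drawer.
    power = 0.5       # assumed per-study power (hypothetical)
    n_studies = 20    # assumed number of published studies (hypothetical)
    print(f"P(all {n_studies} significant) = {power ** n_studies:.1e}")
    # -> 9.5e-07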


I'm a person who hates lists and plans for personal tasks, but I'm learning to love them, because, as I often realize, things get missed in the myriad of things to remember.

The plan, to me, is just a way of reminding my future self what I thought of as the ideal output in the past, instead of infinitely changing my plans and chasing fireflies.


That's slightly different, IMO. Writing down things that need to get done as you think of them is not the same as writing down a plan.


Au is also the chemical symbol for gold; it's the short form of the Latin word aurum. This is probably what the authors intended, as suggested by the yellow tint on the website. I might be wrong though.


definitely not :)


I think, in simpler terms: there's always a Chief Information Security Officer (CISO), but there's rarely a Chief Productivity Officer. It's usually the CTO et al. fighting for dev productivity, if anyone does; otherwise nobody cares and you get impenetrable tarball software.


I'm not supporting the parent comment. But don't you think we as humans have a penchant for conflating the reel and the real, and end up reinforcing the stereotypes present in the world? If anything, we need fewer stereotypes. A caricature is fine, but after a certain point it just feels like tiresome pigeonholing into an idea.


> conflating the reel and the real

Love the sneaky word play! Does art imitate life or does life imitate art?

> ... reinforcing the stereotypes present in the world

The whole point of comedy is to take away the teeth behind these things. The act is meant to reshape culture and conversation. Stereotypes, beauty, and the perception of color are all very fungible, and comedy is just another tool!


The Bloomberg article seems to have been taken down and now returns a 404. https://www.bloomberg.com/opinion/articles/2023-12-07/google...


Just an error in the link, here's the corrected version: https://www.bloomberg.com/opinion/articles/2023-12-07/google...


and here's a readable version: https://archive.ph/ABhZi


I think it is Spaced Repetition Software/System.
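
For anyone curious, here's a minimal sketch of the SM-2-style interval rule that many SRS tools (Anki among them) approximate; the constants are the classic SM-2 defaults, simplified:

    # Minimal SM-2-style spaced-repetition scheduling sketch.
    def next_interval(reps, prev_interval, ease=2.5):
        # days to wait after the reps-th successful recall
        if reps == 1:
            return 1
        if reps == 2:
            return 6
        return round(prev_interval * ease)

    interval = 0
    for reps in range(1, 5):
        interval = next_interval(reps, interval)
        print(f"review #{reps}: wait {interval} days")
    # -> 1, 6, 15, 38 days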


The repository https://github.com/stephen/airsonos has the code and is surprisingly accessible.


The meat for the AirPlay side is here: https://github.com/stephen/nodetunes

Please excuse the code quality... I think I was still learning how to write js at the time.

