Simple. Metaphysics is not math. Ethics is not math. Really, the only intersection is formal logic (until a certain German/Austrian mathematician blew it all up with his annoying theorems).
But applied mathematics can have ethical impact -- e.g. the question of whether a human should trust the output of a particular language model. So GP's idea of 'trust' not applying because an object has its basis in math seems like a false dividing line. Ultimately everything can be grounded in things such as math as far as we know, although it's not useful to reason about e.g. ethics by thinking about the mathematics of neuronal behavior.
This is not true. Lots of things have no mathematical foundations because it is impossible to state them formally/symbolically. If you cannot specify it formally, then it is not mathematics. AI is mathematics because software/code/hardware is mathematics, so all the hullabaloo about "safety" makes absolutely no sense other than as a marketing gimmick. Even alignment has been co-opted by OpenAI's marketing department to sell more subscriptions.
But in any event, the endgame of AI is a machine god that perpetuates itself and keeps humans around as pets. That is the best-case scenario, because by most measures the developed world is already a mechanical apparatus and the only missing piece for its perpetuation is the mechanical brain.
As usual, I can build this mechanical brain for $80B so tell your VC friends.
I don't get this line of logic -- of course software has safety implications, because people use it for things in the real world. It isn't "math" that is cleanly separable from the rest of humanity; its training data comes from humanity, and it will be used towards human goals. AI is entangled with the rest of human dealings.
Whether or not AI poses existential threats to us, I'm open to either direction; but the fact that the experts (e.g. Hinton, LeCun) are divided is reason enough to be concerned.
The way safety is handled in real-world situations is through legal and monetary incentives. If the tanker you are driving to the gas station blows up, then people get fired (no pun intended) and face legal repercussions. This is the case for anything that must operate in the real world. Safety is defined and then legally enforced. AI safety is no different: if an AI system makes a mistake, then the operators of that system must be held liable. That's it; everything else about extinction and other sci-fi plots has no bearing on how these systems should be deployed and managed.
I have no idea what people talk about when they say LLMs must be safe. It generates words; what exactly about words is unsafe?
The long-term impact of this paper has confused me from a technical lens, although I get it from a political lens. I'm glad it brings up the risks from LLMs, but it makes technical/philosophical claims which seemed poorly supported and which empirically have not held up -- imo because the authors chose not to engage with RLHF at all (which was already deployed through GPT-3 at the time, and which enables grounding and getting around 'parrotness'), and because it uses over-the-top language ("stochastic parrot") that captures very poorly what it feels like to meaningfully engage with e.g. models like GPT-4.
You may be interested in this paper: https://arxiv.org/abs/2105.09352
As far as I know, it was the first to train a model on commit diffs to generate code mutations, in this case for bug fixing.
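For readers unfamiliar with that setup, here is a minimal sketch (my own illustration, not code from the paper) of how commit diffs might be turned into before/after training pairs for a bug-fixing model; the `CommitExample` and `to_seq2seq` names are hypothetical.

```python
# Hypothetical sketch: building (buggy, fixed) seq2seq pairs from commit diffs.
from dataclasses import dataclass

@dataclass
class CommitExample:
    before: str   # function body before the bug-fixing commit
    after: str    # function body after the commit
    message: str  # commit message, optionally usable as conditioning text

def to_seq2seq(example: CommitExample) -> dict:
    """Format one commit as a source/target pair for a code model."""
    return {
        "source": f"# fix bug\n{example.before}",
        "target": example.after,
    }

pairs = [to_seq2seq(CommitExample(
    before="def add(a, b):\n    return a - b\n",
    after="def add(a, b):\n    return a + b\n",
    message="Fix addition operator",
))]
print(pairs[0]["source"])
```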
Ideally, meditation would spur you on to action; that's the direct aim of engaged Buddhism [1]. But more broadly, many Buddhist schools aim to encourage a direct feeling of love for all sentient beings, which, if combined with the philosophy of something like effective altruism [2] (instead of woo), could contribute to effecting meaningful systemic change.
Also -- I believe Buddhism does not apply negative connotations to 'indignation' as opposed to raw anger, i.e. I don't think it is classified as a negative state of mind to be dissolved.
The mechanism of introducing an information bottleneck, e.g. by changing from prose to poetry and trying to recover the prose, seems similar to autoencoder techniques that are popular in machine learning.
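For concreteness, here is a minimal autoencoder sketch (assuming PyTorch; the dimensions are arbitrary) showing that bottleneck idea: the input has to squeeze through a narrow latent code and then be reconstructed, much like compressing prose into poetry and trying to recover the prose.

```python
# Minimal sketch, assuming PyTorch: a narrow hidden layer acts as the
# information bottleneck, so only salient structure survives the round trip.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, bottleneck_dim))
        self.decoder = nn.Sequential(nn.Linear(bottleneck_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)      # compress: the "poetry" representation
        return self.decoder(z)   # reconstruct: recover the "prose"

model = Autoencoder()
x = torch.rand(16, 784)                            # toy batch of inputs
loss = nn.functional.mse_loss(model(x), x)         # reconstruction error
loss.backward()
```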
The difference between sports and debates is that the post-processing in sports isn't going to change the most important "outcome," i.e. who won.
But post-processing when it comes to debates can mean overlaying information on top of the video that identifies clear falsehoods -- undermining a candidate's ability to play fast and loose with the truth to win, knowing that there's no real penalty for doing so.
So the "winner" might emerge differently if for example, news agencies didn't publish the live video, but each agency did independent fact checking (if the video were under embargo) and then each published annotated and unannotated versions.
You could still watch the vanilla version if you wanted to, but at least there would be widespread access to factually vetted versions as well.
Aren't there some set of claims that a candidate makes that nearly everyone can agree are objectively false? What's wrong with annotating a debate with that sort of information?
Sure, everyone can do their own research; but most won't.
There may be some bias to fact-checking, but at least it's better than relying on the candidates to fact-check themselves (which usually yields a hugely self-serving and distorted view of reality).
But by that line of reasoning, we should never hit dead ends in AI research at all -- so why has AI progress been so difficult? Would any field of research with many dimensions of variation truly never get stuck on the path towards its ultimate goals?
Couldn't some objective functions be structurally more difficult to optimize than others? No matter how high-dimensional the search space, trying to create a gaming laptop in the Middle Ages would have been a pretty frustrating experience.
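As a toy illustration (my own, not from the thread): with the same random-search budget in the same high-dimensional space, a smooth objective improves steadily while a "needle in a haystack" objective barely moves, which is one way an objective can be structurally harder to optimize.

```python
# Toy comparison: identical random-search budget, very different objectives.
import random

def smooth(x):        # informative structure everywhere
    return -sum(v * v for v in x)

def needle(x):        # flat except in a tiny region around the optimum
    return 1.0 if all(abs(v) < 0.01 for v in x) else 0.0

def random_search(objective, dim=20, steps=10_000):
    best = [random.uniform(-1, 1) for _ in range(dim)]
    best_score = objective(best)
    for _ in range(steps):
        cand = [v + random.gauss(0, 0.1) for v in best]
        score = objective(cand)
        if score > best_score:
            best, best_score = cand, score
    return best_score

print("smooth:", random_search(smooth))   # climbs steadily toward 0
print("needle:", random_search(needle))   # almost always stuck at 0.0
```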
The reduction of 'humility'/'humbleness' was across a broad sample of books, not only the self-help section, and was part of a broader study [1] describing the downward trend in many words associated with virtue.
You can indeed interpret these statistics in many ways, but you first need to know the statistics.