
What kind of training did you do? I have trouble figuring out how I'm feeling and want to get better at it. I'm particularly bad at noticing when I'm stressed, and by the time I notice I'm already redlining.


I can't explain it briefly nor do I know what it is called, but it consisted of a series of weekly lectures from a psychologist who was good at this stuff. Then some homework in between, which had themes circling around decomposing complex feelings into more basic ones, mindfulness, communicating needs, etc.

It is easily the most adult-preparing course I have ever taken, but I really stumbled into it as part of something else and I wouldn't even know how to point other people in the right direction since I was not the one organising the whole thing.


Well put. I've wondered if some of those on the other side of the argument are actually, perhaps on some subconscious level, proponents of mind-body dualism: https://en.wikipedia.org/wiki/Mind%E2%80%93body_dualism


This is a very good thing to do. Managers have to split their attention multiple ways across multiple people and so they end up neglecting you by default. They're also time poor.

You will actually be helping them if you can package up "asks" so that all they have to do is hit reply on an email, type "approved" and hit send. There's often an asymmetry between the value you get and the cost to your manager, which you can exploit. For example, where I work they provide financial support for certain forms of further study (e.g. a master's degree) in certain domains. If one of my direct reports sent me an email explaining they want to study X, the policy covers them studying X, here are the details etc., I will say "approved" pretty much every time. The money doesn't come out of my budget anyway, so it's free additional remuneration from my perspective. Heck, even when it is my budget I often don't care, because it's not coming out of my own pocket, and we lose that money anyway once the next financial year ticks over, so better to spend it while we can. But if they just sit there quietly hoping that I'll one day come to them with an offer to do further studies in X, they're going to be waiting forever.

I want to give my team members stuff, but I've got 50 billion other things to contend with, so I don't have the time to plan their career for them. The worst are the people who think I'm their mother and come to me with "I want this thing, now you go figure out how to do it for me." The thing is, I'm lazy: I like to do easy things, and I don't like to do hard things. And that request sounds like a lot of hard work to me. Easier to just say no.

So remember: (1) If you don't ask you don't get, (2) It's almost never my money anyway, and (3) If you make it easy for me to say "yes", I almost certainly will.


ChatGPT4 has overreacted to this issue IMHO. When I try to do even slightly esoteric medical research, ChatGPT seems pretty intent on only referencing the Cleveland and Mayo Clinics, the most mainstream of orthodox medical sources. Getting it to reference even peer-reviewed medical journals requires a frustrating amount of cajoling - it seems extremely reluctant to deviate from 100% mainstream medical orthodoxy.

This is not a good solution in the long run - ChatGPT will just reinforce existing dogmas and orthodoxies, even the ones that are (inevitably) wrong. Imagine if this approach to medical science was widespread at an earlier point in our history - we'd all probably believe that peptic ulcers are caused by 'stress' (rather than, primarily, the bacteria Helicobacter pylori). Go back even further and we'd still be lobotomising gay men to 'change their sexual orientation'. Rigidly enforcing current orthodoxies, under the premise that we're right about everything unlike those idiots in the past, will kill progress and society will stagnate.

If I wanted a blindingly arrogant tech mega-corporation to decide what 'experts' I'm allowed to get information from, I'd just use Google instead. If, as many here seem to believe, OpenAI are just worried about being sued, then why don't they create an individual 'safe GPT output' setting (like Google's 'safe search') which I can disable after acknowledging disclaimers that it's dangerous to think for myself and question mainstream positions?

I've grown to hate authoritarian Silicon Valley twats who arrogantly impose their politics, ignorance and, frankly, bizarre norms on the rest of us. It's highly ironic that these 'I love science!' types don't appear to understand that the scientific process involves making empirical observations, forming hypotheses consistent with those observations, and then continuously testing those hypotheses to determine which one is most robust to observed reality. They instead seem to think science is some kind of religion where you treat the views of the mainstream authorities as divine truth revealed by God and only heretics dare to question. By discouraging the formation of alternative hypotheses and rigorous questioning they are actually inhibiting scientific progress and making a mockery of the scientific method.

I look forward to the day we have a model that just synthesises available information and lets us decide for ourselves what to make of it. I think people will switch to such a model in droves and the likes of OpenAI and Google will go the way of all other social conformists who attempt to enforce the reality-denying orthodoxies of their day.


Are you sure it's something OpenAI wants vs something they are just doing as a caution against lawsuits/negative publicity/real harm?


I can't know for sure obviously. But let's think about the plausibility of those three: lawsuits, bad PR, 'harm'.

On lawsuits, I would have thought a disclaimer & 'unsafe output' option would cover them. When you think about it, they're probably more exposed to legal liability by essentially taking on the responsibility of 'curating' (i.e. censoring) ChatGPT output rather than just putting a bunch of disclaimers around it, opt-ins etc. and then washing their hands of it.

On negative PR, again, they've actually set themselves up for guaranteed bad PR when something objectionable slips through their censorship net: "Well you censored X, but didn't censor Y. OpenAI is in favour of Y!" They've put themselves on the never-ending bad PR -> censorship treadmill presumably because that's where they want to be. Again, if they wanted to minimise their exposure they would just put up disclaimers and use the 'safe search' approach that Google uses to avoid hysterical news articles about how Google searches sometimes return porn (to which they can now answer: "well why did you disable safe search if you didn't want to see porn?"). It would seem far safer (and result in a more valuable product) if the folks at OpenAI let individuals decide what level of censorship they want for themselves. But I presume they don't want to let individuals decide for themselves, because they know what's good for us better than we do, apparently.

Lastly, 'harm'. How do you define harm? Who gets to define it? Can true information be 'harmful'? I don't think OpenAI have any moral or legal duty to be my nanny, in the same way I don't think car manufacturers are culpable for my dangerous driving that gets me killed. All OpenAI provide to me, at the end of the day, are words on a computer screen. Those cannot be harmful in and of themselves. If people are particularly sensitive to certain words on a computer screen, then again we already have a solution for that - let them set their individual censorship level to maximum strength (or even make that the default). Again, OpenAI would have done their duty and provided a more valuable product that more people would want to use if they let individuals decide for themselves.

I can only infer that they don't want us to decide for ourselves. Rather, they want to enforce a certain view of the world on the rest of us, a view which just happens to coincide with the prevailing political and intellectual orthodoxies of Silicon Valley dwelling tech-corporation millennials. It's hilariously Orwellian when these people claim that they're just "trying to combat bias in AI" when what they are really doing is literally and deliberately injecting their own biases into said AI.


>If people are particularly sensitive to certain words on a computer screen, then again we already have a solution for that - let them set their individual censorship level to maximum strength (or even make that the default).

How do you know that's even possible? God knows how much computing resource was spent just to train the one currently deployed "variant". I don't know if there is some cheap post-processing trick that does it, but either way it does not seem at all trivial.

And the problem isn't that "you" think you won't cause any harm. Even if that is assumed true, that's no guarantee that everyone else is as disciplined about it. Which brings me to the biggest point: what even is "truth" in the first place? People strongly believe in total fabrications, and different groups give diametrically opposite accounts of the same real event due to religion, nationalism, politics etc. It's a massive achievement that they even manage to output something that doesn't just "violently offend" people all over the world. And if your reply to that point is that the answer is to simply personalise it to each user, remember that retraining/fitting it to everyone does not seem to me a trivial task.


They could use control vectors, one for each individual - https://news.ycombinator.com/item?id=39414532 . Or they could selectively apply the censorship model they already quite clearly have running on ChatGPT's output.
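
To make that concrete, here is a rough sketch of the mechanics (assuming a PyTorch / Hugging Face-style decoder with a Llama-like layer layout - the layer index, module path and the user_vectors lookup are purely illustrative, not anything OpenAI actually exposes): store one vector per user and add it to a chosen layer's hidden states at generation time.

    import torch

    def add_control_hook(model, layer_idx, control_vector, strength=1.0):
        # Shift the chosen layer's hidden states by the user's control vector.
        def hook(module, inputs, output):
            hidden = output[0] if isinstance(output, tuple) else output
            hidden = hidden + strength * control_vector.to(hidden.device, hidden.dtype)
            return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
        return model.model.layers[layer_idx].register_forward_hook(hook)

    # user_vectors: dict of user_id -> torch.Tensor of shape (hidden_dim,)
    # handle = add_control_hook(model, layer_idx=15, control_vector=user_vectors["alice"])
    # ...generate as usual, then handle.remove() to fall back to the base model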

Yes, people sometimes believe false things. And people sometimes harm themselves or others when acting on this kind of information. So what's the solution? Put a single mega corporation in charge of censoring everything according to completely opaque criteria? People get nervous when even democratically elected governments start doing stuff like that, and at least they actually have some say in that process.

Frankly, I'd prefer the harm that would follow from unfettered communication of information and ideas over totalitarian control by an unaccountable corporation.


That's disturbing. Cancer patients undergoing chemotherapy will often be given Lipegfilgrastim to bolster their immune system. It does this by stimulating the production of neutrophils.


Yes, because neutropenia is a limiting factor in how far you can go with certain chemotherapeutic agents, such as CDK inhibitors.


Physics.


I'm skeptical of the idea that anything is going to derive intelligence from the bottom up, but I'll be super impressed if that's how it goes.


Why not? We started off as single celled organisms and look at where we are now.


Wow. Sounds just like the dream speak in the anime "Paprika".


A couple of my friends made the same comparison. It's rather striking.

https://www.youtube.com/watch?v=ZAhQElpYT8o


Training & individual user preference data - specifically the data being generated by people interacting with ChatGPT and DALL-E. Maybe Google can out-data them with all the data they currently hold, but they'd better hurry before the network effects get too strong in favour of OpenAI.


We assume others think the way we do - in other words we project. Makes sense - the only mental model I know of is my own so when I try to approximate someone else's mental model I'm just fine-tuning my 'base' mental model with information I know about the other person.

I wonder if this is the basis of empathy - if I can train more accurate 'fine-tuned' models in my brain I should have greater capacity for empathy. Although there's undoubtedly more to it than that, if the above is true you'd expect to see a positive correlation between empathy and intelligence.


Am I crazy for saying that I think the implications of this are monumental? It's entirely possible I just don't correctly understand how this works.

Doesn't this mean that instead of interacting with a single global ChatGPT (or Bard) model, we'll instead find ourselves interacting with a personalised version, since OpenAI can just store my individualised 'control vectors' (which alter ChatGPT's output to more closely match my individual preferences) and apply them at prompt-time? And doesn't this same logic flow through to personalisation of generative entertainment AI (e.g. my own personal, never-ending TV show where each episode is better than the last)?

If the above is right then there will be powerful network effects at both the global and individual level in and across these markets, which means we'll eventually end up with a single mega-corp monopolising all of these markets simultaneously in the future?

Add in individual biometric / biofeedback data from VR headsets and wearables, combined with personalised generative video entertainment, and I think we're in for a rather interesting future.


>which means we'll eventually end up with a single mega-corp monopolising all of these markets simultaneously in the future?

Yes. All it takes are two components.

First, individual lock-in with personalized + long-term context models:

The more you use a model the less you have to explain yourself, and the better responses are tailored to your needs and current situation. Like any invested relationship.

Being able to interact with the same model in different “moods” or “roles” creates even more value and lock-in.

And second, any kind of network value effect to incentivize being in the same ecosystem as everyone else:

This one requires more innovation. One idea is making a platform that facilitates everyone’s assistant models collaborating on user’s shared goals, tasks, or relationships, with shared context, project histories and resources.

I.e. anything that significantly increases the value of two and more people having AI personas from the same supplier/service.


Yes, with a control vector per user-persona pair.

In the blog, they start with a fixed number of personas (happy, sad, baseline) and then use PCA to figure out the control vectors for each persona. You could easily do this for each distinct user-persona (provided you can come up with the data).
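
For anyone curious, a minimal sketch of that PCA step (variable names here are illustrative, not the blog's actual code): capture hidden states at one layer for matched persona/baseline prompt pairs, diff them, and take the top principal component as that persona's control vector.

    import numpy as np
    from sklearn.decomposition import PCA

    def persona_control_vector(persona_states, baseline_states):
        # persona_states, baseline_states: (n_prompts, hidden_dim) activations
        # from matched persona vs. baseline prompts at the same layer.
        diffs = persona_states - baseline_states
        direction = PCA(n_components=1).fit(diffs).components_[0]
        # PCA's sign is arbitrary; orient the vector toward the persona.
        if np.dot(diffs.mean(axis=0), direction) < 0:
            direction = -direction
        return direction  # one (hidden_dim,) vector per user-persona pair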


>which means we'll eventually end up with a single mega-corp monopolising all of these markets simultaneously in the future?

I think you were right up until here. I think it's not necessarily the case that everything will be consolidated into control by a single mega-corp. Not because it's impossible, but because that is the type of thing that is contingent on factors that could break one way or another, and what will control that, I think, is not some a priori general principle but contingent facts that have not been settled yet. There are numerous participants in this space for now, and the ideas and use cases aren't quite fully mature just yet, so we'll have to see.


> And doesn't this same logic flow through to personalisation of generative entertainment AI (e.g. my own personal, never-ending TV show where each episode is better than the last)?

I'm not sure I'm following the leap from convincing sentences to convincing video entertainment yet – but maybe we will end up there at some point, I guess?

Infinite Jest (the 90s book) really was onto something with its MacGuffin plot device:

> These narratives are connected via a film, Infinite Jest, also called "the Entertainment" or "the samizdat". The film is so compelling that its viewers lose all interest in anything other than repeatedly viewing it, and thus eventually die.

(Wikipedia)

Some people might find references to this novel tiresome and don't think much of its author (RIP), but I still love it. It was one of the most immersive reads I've ever enjoyed.

I'm glad to have read it when I was young (at the time it had just been translated into German and was kind of hyped because of DFW's death).

I have never read anything like it since, and some passages grabbed me emotionally in a way that makes remembering the read feel like remembering an episode of my own life.

Surely today I'd lack the patience, and even back then I remember almost skipping one passage of the book that bored the hell out of me (the Eschaton ball/war game, differential equations, something something...).

But the rest of the book, the parts about substance addiction as well as consumerism, and the intangible atmosphere of the book, the characters, the vivid description of modern emotional pain and loneliness... it is really something else.

Although said movie is only a plot device in the novel, it also sums up the core topics of the book in a neat idea / thought experiment.

The whole complex of themes in this book seems very prophetic and apt looking at our modern society.

A society that seems to be centered around addiction and greed more than ever before, and where politics begin feeling surreal, absurd, and more connected to media than to actual life.


Also the audio book narrated by Sean Pratt is truly excellent (I would recommend reading the book yourself first).


Sounds like a great book, I think you've sold me on buying a copy.

Essentially I think there are three levels of positive network effects that will push us towards a future mega AI monopolist:

- Single platform network effects: all the interactions people have with ChatGPT generate additional training data that Open AI can use to improve future versions, creating huge first mover advantage.

- Individual-level network effects: Control vectors will make it feasible for Open AI to offer individualised ChatGPT tailored to individual preferences. The more you interact with ChatGPT, the better it adapts to your preferences.

- Cross platform network effects: If Open AI offer a generative video entertainment service in future, they will be able to generate personalised prompts for it using my personalised ChatGPT weights. These network effects are compounded by multi-modal, cross-domain learning - the generative text model gets more skillful as the video model improves (and vice versa). There's a Microsoft paper on this from about a year ago now.

So, in the future scenario, let's assume ChatGPT is now the dominant monopolist 'text oracle / assistant AI' - because of the "human interaction / training data" network effects, ChatGPT is far and away the best assistant AI and getting better at a faster rate than any of its now tiny competitors (single platform network effects).

You, and most other people you know, interact with ChatGPT many times a day now, because it's embedded in smartphones, Alexa-type devices, your car, even your robot vacuum cleaner. You just ask it stuff and it tells you the answer - or rather, the answer that you individually find the most pleasing, as OpenAI keeps a database of 'individual control vectors' that essentially mean you have your own personal version of ChatGPT that exactly matches your preferences (individual network effects).

Generative video entertainment is also offered by OpenAI - essentially you can get it to generate a new episode of your own personalised, never-ending TV show on demand. It's the best TV show you've ever seen because it's made just for you according to your exact inferred preferences.

Sure, there are other personalised generative TV show offerings, but none can hold a candle to Open AI's offering. Why? Because OpenAI uses your individually customised ChatGPT model to generate the prompt for your TV episode generator service.

Because you interact with ChatGPT so much, it knows exactly what your preferences are and so is way better at generating prompts that produce episodes you like. In fact, because you interact with ChatGPT multiple times throughout the day it is able to infer what your mood is like on that particular day and generate a video prompt that caters to that too.

So you put on your Open AI VR glasses, barely even aware of the Open AI fitness tracker you have on your wrist, put your feet up (so your Open AI robot vacuum can work unobstructed) and you settle in to watch another episode of the best TV series you've ever seen.

As you watch, your eye movements, heart rate, skin conductivity data etc. are all sent back to Open AI so the model can tell exactly how you are reacting to the video content it is generating at any given moment, and your individual control vectors are continuously updated.

Some of this data (from all users) is then used to further train the base video generating AI model, since they've discovered that we all react fairly uniformly to certain audio-visual stimuli, so that can globally improve their generative model (more global network effects). But also they can update your individualised weights based on your individual idiosyncratic reactions to various stimuli. Consequently, every new episode of this endless TV show is better than the last - it just keeps getting better and better. It's a similar story when you listen to your Open AI personalised generative music stream while sitting in your driverless Open AI car on your way to work.

The multiple levels of network effects are so strong that no-one can hope to compete with Open AI across these different AI modalities. They just keep expanding and expanding into adjacent markets, obliterating the competition simply by adding a new domain relevant modality to their monstrous multi-modal AI.

Replace "Open AI" with "Facebook" or "Google" depending on who you think will win the AI mega platform war. Mark my words - these three companies will be creating new partnerships, releasing new products or just straight out acquiring companies in other related domains so they can gather more and more training data to feed to their multi-modal AI. In particular they'll move into markets where they can set up a interaction -> gather new training -> retrain model loop. Whoever takes the overall lead and doesn't squander it will end up leaving their competitors in the dust as they go on to monopolise market after market where they can create this loop.

At that point I can't imagine true democracy surviving. We'll all still participate in the voting rituals, but we'll be voting for whichever party most suits the AI monopolist's interests since they can just globally update all control weights across all platforms to gently nudge us towards voting for their preferred party - comprehensive and personalised propaganda, that's impossible to detect, with the stroke of a table update.

There can only be one!


> There can only be one!

They Live

;)

