Hacker News: rndphs's comments

> OpenAI is consistently the one coming up with new ideas first (GPT 4, o1, 4o-style multimodality, voice chat, DALL-E, …)

As far as I can tell, o1 was based on Q-star, which could likely be Quiet-STaR, a CoT RL technique developed at Stanford that OpenAI may have learned about before it was published. Presumably that's why they never used the Q-star name even though it had garnered mystique and would have been good for building hype. This is just speculation, but since OpenAI haven't published their technique, we can't know whether it really was their innovation.


The author's example for x^2 + x could be written with the first two symbols swapped. With that swap it looks fine to me. Putting the 2 first here is like putting the x first in "2x" so that it becomes "x2". Also, if the lines above and below had curved ends so you could see clearly where they start and end, this notation might not be so bad.


>When a person "has the concept `7`" they can reason thus: "6 eggs would be fewer", "7 is an odd number", "7 is a whole quantity", "with 7 less of 10, i'd have 3" etc.

I just input this into GPT-3. Its responses are in italics; this is the first try, with no rewriting or retrying anything:

This is a test of understanding of the concept of "the number 7".

Question: Is 7 odd? Answer: Yes, 7 is an odd number.

Question: Is 6 eggs fewer than 7 eggs? Answer: Yes, 6 eggs is fewer than 7 eggs.

Question: With 7 less of 10, what would I have? Answer: I would have 3.

Question: Is 7 a whole quantity? Answer: Yes, 7 is a whole quantity.

This is mostly a joke, because I think I understand where you are coming from (and that you are hypothesising that GPT-3's responses are an elaborate trick of sorts). But I don't believe AI has to take the same route as human intelligence, and I don't think we really understand what a concept is or how it behaves from a signal/data perspective, but that may be inconsequential for creating general AI.

Also, people can be really stupid sometimes and have failures too, and the concepts that people hold can be incorrect or flawed. So it may be useful to compare human failures with AI failures, rather than just AI failures with human successes.


I think the failures of people spouting hype and failing to deliver in ML have absolutely nothing to do with the real and immense progress happening in the field concurrently. I don't understand how one can look at GPT-3, DALL-E 2, AlphaGo, AlphaFold, etc. and think: hmmm, this is evidence of an AI winter. A balanced reading of the season, imo, suggests that we are in the brightest AI summer with no sign of even autumn coming. At least on the research side of things.


The difference between the two views could be summarized in a textbook intro from twenty years ago: here is a list of problems that are not (now) AI. Back then it would have included chess, checkers and other games that were researched for their potential to lead to AI. In the end they all fell to specific methods that did not provide general progress. While the current progress on image related problems is great, if it does not lead to general advances then an AI winter will follow.


I disagree. If we find a particular architecture is good for chess, and another for image generation, then so be it. We would still have solved important problems. We are seeing both general and specific approaches improving rapidly. I don't think the AI winter was defined by a failure to reach AGI, but rather by the field reaching a plateau and producing nothing of great commercial or even intellectual value for some years, while other computer science fields thrived. I would say the situation is the exact opposite right now.


> Back then it would have included chess, checkers and other games that were researched for their potential to lead to AI.

20 years ago (2002), Deep Blue beating reigning world chess champion Kasparov was already old news.

Unsolved problems were things like unconstrained speech-to-text, image understanding, open question answering on text etc. Playing video games wasn't a problem that was even being considered.

I was working in an adjacent field at the time, and at that point it was unclear if any of these would ever be solved.

> In the end they all fell to specific methods that did not provide general progress.

In the end they all fell to deep neural networks, with basically all progress made since the 2012 ImageNet revolution, when it was shown possible to train deep networks on GPUs.

Now, all these things are possible with the same NN architecture (Transformers), and in a few cases they are done in the same NN (e.g. DALL-E 2 understands both images and text; it's possible to extract parts of the trained NN and get human-level performance on both image and text understanding tasks).

> While the current progress on image related problems is great, if it does not lead to general advances then an AI winter will follow.

"current progress on image related problems is great" - it's much more broad than that.

"if it does not lead to general advances" - it has.


A very telling example, since we now have methods like Player of Games which apply a single general method to solve chess, checkers, ALE, DMLab-30, poker, Scotland Yard... And the diffusion models behind DALL-E apply to generative modeling of pretty much everything, whether audio or text or image or multimodal.


Crawl, walk, run. You can't go directly from crawl to run. You need the intermediate steps (pun not intended)


The arms race to pass the Turing test has begun.


Yeah I just tried google image searching to find something like the pikachu photo from https://mobile.twitter.com/gottapatchemall/status/1511777860...

But I can't find anything close to the realism that DALL-E 2 achieved here.


There was an abomination of a live action Pikachu movie some time ago. When I google "realistic pikachu" I get images exactly like this from the movie but not gross.

In fact this photo is exactly what you get when you photoshop the face of an ugly chihuahua onto a Pikachu plushie head and add a yellow brushed hamster body. And a cape. Literally that is what you're looking at.

It understood your prompt and amalgamated the right source photos into this nightmare fuel. Jesus wept.


Yeah, it's still impressive to be able to imitate those styles and add a blue cape that didn't exist in the movies, along with chihuahua eyes. It also appears to be higher definition than Detective Pikachu CG. I'm curious if you could do the same for all 150 original Pokemon, even those for which realistic CG representations don't exist. Would it be able to take the cartoon version of Farfetch'd or Psyduck or a more obscure one and achieve the same realism, without the reference from the deep dataset?


Well to my eye it's realism beyond anything that I could find. Mind you I didn't search for that long so there might be something there if I was to delve deeper.

I am pretty familiar with photoshop, and while I'm not an expert, I would find making something like this really difficult. Anything is possible with photoshop, but some things are very hard.


> In fact this photo is exactly what you get when you photoshop the face of an ugly chihuahua unto a Pikachu plushie head and add a yellow brushed hamster body. And a cape. Literally that is what you're looking at.

I guess some people are overhyped, but it's cool that this can do that. Previously, it took a trained human.


If this is the exact image you wanted and you are entirely satisfied with it, great. But what people are reacting to is that it is outputting interesting images at all.

What are you going to do with this cape wearing realistic Pikachu that is actually a picture of a hamster?

Typically the trained human has something specific in mind. And if the client isn't satisfied they will torture them with countless requests for adjustments. So right now this is of limited use.

To me what is far far far more interesting is that Dall-E possibly understands the concept of what a Pikachu is supposed to be. That is downright creepy, and fascinating. I suspect that this visual aspect to things after people get over the clipart generation might find more functional utility as a way to see through the "model eyes" so to speak. To visualize the model itself. That could unlock a lot of doors in how training is done.

Maybe in the future you could train it on textbooks and prompt it for a picture of a molecule. Now that would be something. Especially if you start feeding it data from experiments.


> Typically the trained human has something specific in mind. And if the client isn't satisfied they will torture them with countless requests for adjustments. So right now this is of limited use.

Confused as to why you think you cannot do this with DALL•E?


Human artists also do a whole lot of mimicry. One could look at art produced by many artists and say that it is just things stitched together from pre-existing art.

“Good artists copy, great artists steal.”


For example the “enterprise vector people” graphics you see on every corporate website. Most human art is extremely repetitive.

AI art seems to be coming from the opposite direction to human artists - from a starting position of maximum creativity and weirdness (e.g. early AI art such as Deep Dream looked like an acid trip) and advancements in the field come from toning it down to be less weird but more recognizable as the human concept of “art”.

And DALL-E is impressive exactly because it has traded some of that creativity/weirdness away. But it’s still pretty damn weird.


This is going to be mostly a rant on OpenAI's "safer than thou" approach to safety, but let me start by saying that I think this technology is really cool, amazing, powerful stuff. Dall-E (and Dall-E 2) is an incredible advance over GANs, and no doubt will have many positive applications. It's simply brilliant. I am someone who has been interested in and has followed the progress of ML-generated images for nearly a decade. Almost unimaginable progress has been made in the last five years in this field.

Now the rant:

I think if OpenAI genuinely cared about the ethical consequences of the technology, they would realise that any algorithm they release will be replicated in implementation by other people within some short period of time (a year or two). At that point, the cat is out of the bag and there is nothing they can do to prevent abuse. So really all they are doing is delaying abuse, and in no way stopping it.

I think their strong "safety" stance has three functions:

1. Legal protection

2. PR

3. Keeping their researchers' consciences clear

I think number 3 is dangerous because researchers are put under the false belief that their technology can or will be made safe. This way they can continue to harness bright minds that no doubt have ethical leanings to create things that they otherwise wouldn't have.

I think OpenAI are trying to have their cake and eat it too. They are accelerating the development of potentially very destructive algorithms (and profiting from it in the process!), while trying to absolve themselves of the responsibility. Putting bandaids on a tumour is not going to matter in the long run. I'm not necessarily saying that these algorithms will be widely destructive, but they certainly have the potential to be.

The safety approach of OpenAI ultimately boils down to gatekeeping compute power. This is just gatekeeping via capital. Anyone with sufficient money can replicate their models easily and bypass every single one of their safety constraints. Basically they are only preventing poor bad actors, and only for a limited time at that.

These models cannot be made safe as long as they are replicable.

To produce scientific research requires making your results replicable.

Therefore, there is no ability to develop abusable technology in a safe way. As a researcher, you will have blood on your hands if things go wrong.

If you choose to continue research knowing this, that is your decision. But don't pretend that you can make the algorithms safer by sanitizing models.


OpenAI is not the only AI shop. If they didn't make DALL-E someone else would, and control its release as they see fit.


At the molecular level, basically all photon modes at or below the thermal energy will already be thermally occupied. Taking E = hf = k_B*T/2 gives roughly 3 THz at room temperature, and thermal occupation remains significant up to the thermal IR peak around 30 THz. So on the microscopic level, any frequencies in this range are constantly irradiated by thermal fields anyway.

Edit: Furthermore, the Gibbs free energy of any molecular process determines the reversibility of that process at a given temperature. Any molecular process with a Gibbs free energy change lower than the thermal mean energy is going to be essentially a reversible equilibrium process, and stimulating it with radiation will only shift the equilibrium very slightly, I believe. I think this is why we don't see radio-catalysed reactions in chemistry, unlike photocatalysed reactions.
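As a quick back-of-envelope sketch of those frequencies (note that E = hf = k_B*T/2 itself gives roughly 3 THz; the ~30 THz figure corresponds to the Wien peak of 300 K thermal radiation):

```python
# Back-of-envelope thermal photon frequencies at room temperature (T = 300 K).
h = 6.626e-34    # Planck constant, J*s
kB = 1.381e-23   # Boltzmann constant, J/K
c = 3.0e8        # speed of light, m/s
T = 300.0

f_half_kT = (kB * T / 2) / h    # frequency of a photon with E = kT/2
lam_peak = 2.898e-3 / T         # Wien displacement law: peak wavelength, m
f_ir_peak = c / lam_peak        # corresponding thermal IR peak frequency

print(f"E = kT/2 photon: {f_half_kT / 1e12:.1f} THz")   # ~3 THz
print(f"thermal IR peak: {f_ir_peak / 1e12:.0f} THz")   # ~31 THz
```

The exact constants are rounded, but the orders of magnitude are what matter for the argument.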


I'm not talking about noise. I'm talking about a spike in the frequency spectrum.

If you can build a protein that can tune to e.g. 3 GHz (or whatever frequency a phone uses), and thus behave differently at that frequency, then that basically proves that radio waves can theoretically alter the reactions in the molecular soup that is a cell. All I'm saying is that I'm not so sure this can't be done.


I think, though, that any biological process using these sorts of energies at the molecular level will be swamped by noise and therefore wouldn't be a useful mechanism. 3 GHz is about 0.00001 eV. A process with a Gibbs free energy change of 10 ueV has an equilibrium constant of essentially 1 at room temperature, and so is almost completely reversible.
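To make those numbers concrete, a minimal check of both claims (the energy of a 3 GHz photon, and the equilibrium constant K = exp(-dG/kT) for a 10 ueV free energy change):

```python
import math

# Constants in eV units for convenience.
h_eV = 4.136e-15    # Planck constant, eV*s
kB_eV = 8.617e-5    # Boltzmann constant, eV/K
T = 300.0           # room temperature, K

E_photon = h_eV * 3e9                 # energy of a 3 GHz photon, eV
K = math.exp(-10e-6 / (kB_eV * T))    # equilibrium constant for dG = 10 ueV

print(f"3 GHz photon: {E_photon:.1e} eV")  # ~1.2e-5 eV, i.e. ~0.00001 eV
print(f"K = {K:.4f}")                      # ~0.9996, essentially 1
```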

The reason why we can make things interact with radio waves at all is essentially because electrical conductors provide coherent modes for low energy photons to couple to. Without conductors and their free electron cloud we would have a very hard time building anything to receive or transmit radio in any way that isn't thermal.

It is true that there is some degree of conductivity in cells, but without a non-thermal way of coupling between current and molecular processes, I don't see how radio waves could affect cells in a non-thermal manner.

Edit: I guess nerves have a non-thermal coupling mechanism from low frequency currents to molecular mechanisms, so it must be possible. But the machinery for that has been highly evolved for that specific task, I'm not sure if it follows that such machinery would appear commonly in cell processes.


Are single-photon models even useful here? What about aggregate photon effects? The sheer number of photons hitting you from a cell tower is enormous. Perhaps an "optical tweezer" type effect could happen?

And for the non-thermal effect discussion, have you considered voltage-gated ion-channels in cell-membranes?


You're gish galloping. Rather than continue to propose arguments without evidence of actual risk, find a citation that has a salient hypothesis that's tested that shows risk.

We aren't your Google Scholar, and you're just promoting FUD by asking into the ether "but couldn't X cause Y". Me typing this message COULD cause a butterfly effect that leads to an earthquake. In any "does X cause Y" scenario you have to answer two questions: what is the probability that X causes Y, and what is the impact if X does cause Y?

In RFR exposure terms, the question is what the probability is that RF below ionizing levels causes damage to DNA and promotes cancer. The vast majority of the research says no, and theoretical mechanisms for harm from RF below ionizing levels have never been demonstrated to anything close to statistical significance, or in ways that are reproducible. Even if they were, you'd still have to establish the impact. The OP study basically assumes there's some impact, studies the population broadly, and observes none.

Low probability, low impact, low or no risk.

Please present evidence for a high-risk argument that is backed by some research showing an increase in the probability and/or impact of RFR exposure causing DNA damage.

Until you do that, you're gish galloping. Please respond to our arguments (or consider if we're right) instead of declaring new ones with no references.


I'm usually very patient with the leftovers of the "ionizing only" crowd and what you call "gish galloping" (huh?!) was my attempt at nudging you to discover the science that shows that worldview to be outdated.

So when you write: "...and theoretical mechanisms for harm of RF below ionizing levels has never been proven to anything close to a statistical significance or in ways that are reproducible" ...I lose that patience with people not even interested in looking.

Look up Yakymenko et al. 2015 "Oxidative mechanisms of biological activity of low-intensity radiofrequency radiation". Full-text link: https://www.researchgate.net/publication/279863242_Oxidative...

Excerpt: "...among 100 currently available peer-reviewed studies dealing with oxidative effects of low-intensity RFR, in general, 93 confirmed that RFR induces oxidative effects in biological systems. A wide pathogenic potential of the induced ROS and their involvement in cell signaling pathways explains a range of biological/health effects of low-intensity RFR, which include both cancer and non-cancer pathologies."

Yes, the word "cancer" is in there along with "low-intensity RFR". The pathway is free-radical promotion in cells by RF and subsequent damage to proteins, DNA etc.

Keep believing the "ionizing only" line if you want. You're allowed to have an opinion. But then it's just you against the peer-reviewed & published data.


I've actually already heard of this study, and it's another proposed mechanism without any actual evidence in the wild that the proposed mechanism is happening or results in any significant health outcomes. It's a well-known study in science circles because of how bad it is at spreading FUD over RFR.

"While the evidence may support the notion that RFR can increase markers of oxidative activity in tissue, it does not establish that this increase is biologically important and can actually lead to specific diseases. It also does not establish that cell phone use causes any harm by this mechanism."

https://sciencebasedmedicine.org/about-that-cell-phone-and-c...

They used the word cancer but didn't provide any real data that linked the proposed mechanism to cancer. Please stop believing fear mongerers and demand not just a hypothesis but actual data that a mechanism causes harm.

Oh, and not all oxidative stress in the body is bad. There are oxidative compounds that benefit human health and too much antioxidant can produce adverse effects.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5551541/

Our bodies even produce their own antioxidants:

"Your body's cells naturally produce some powerful antioxidants, such as alpha lipoic acid and glutathione."

https://www.health.harvard.edu/staying-healthy/understanding....

Oxidative stress and free radicals are turning into buzzwords that ignore how our bodies balance them; just stating that something causes oxidative stress in vitro or ex vivo doesn't say whether our antioxidant system can handle it and in the end negate any potential harm. This is why in vivo studies are done, and the OP is a massive in vivo experiment that has been naturally happening since cell phones were first deployed.

I'm also going to highlight that I'm trying to persuade you that we know the risks, they're low (basically zero), and you don't have to be worried about them. By contrast, you're proposing unproven mechanisms for an uncertain risk that contradict the observations of the original post, while repeating arguments used by snake oil salesmen who sell Faraday cages for people's wifi routers.

Stop being afraid; the world is way less scary when it comes to RFR exposure than these fear mongerers want you to believe.


"Stop being afraid" ??

So a scientific discussion has melted down into an unsolicited pop-psych consultation.

Your argumentation is flawed. Have a nice day.


Last time I had an MRI scan, I had strong sensations throughout my body where exposed to the MRI's radio emissions. I rather enjoyed the sensation, it felt like a massage and I would have enjoyed it for longer.

I was surprised, as they didn't mention this before the scan. After, I asked about it, and they told me most people don't feel anything, but some do like me, and for a few it's so painful they have to stop the scan.

They told me it was my peripheral nervous system interacting with the radio emissions, not a physical (non-signal) effect as it felt like. From that conversation I learned there was about 10kW transmitted through my body during the scan.

MRIs have been studied for dangerous effects, of course, and all the evidence shows them to be extremely safe... provided there is no metal in the body which can heat up or be displaced by the field, and not counting risks from the contrast agents which are sometimes injected, which some people are more susceptible to than others.

I was never convinced by dismissive arguments that non-ionising radiation "can't" have any biological effect other than localised heating, or that the thermal background spectrum means infrared and below can't have an effect. (I know the physics pretty well; it's not lack of understanding.)

But after those sensations caused directly by the emissions, I'd experienced a biological, non-thermal effect from radio in the microwave-or-below frequency range directly and clearly. That was really interesting.

The body clearly does a lot of things based on countless subtle signalling pathways. Pretty much anything any pathway can sense could have an effect, even if it's not a conventional chemical reaction. One of the more interesting technological ideas around this is the use of high coherence terahertz signals that resonate with DNA molecular dynamics.


Oh yeah I don't doubt it. I think though that there is many orders of magnitude difference in the field strengths between cell phone radiation and MRI, and this makes all the difference.

THz radiation is a different story too as it has about enough energy such that it could influence irreversible processes.


I.e., at random: thus not inducing any coherent electric current, so irrelevant to the discussion.

The only other subjects that induce such confident statements of fact from the profoundly ignorant are economics and politics.


But the only electric current on the molecular level is coherent current...? Chemical reactions are not macroscale phenomena, and so it shouldn't really matter if the energy comes from a random distribution or not. Also please don't insinuate that I'm "profoundly ignorant", that certainly isn't relevant to the discussion.


Profound ignorance is insistence of certainty in the entire absence of knowledge of a subject.

Microwaves absorbed in tissue induce electrical currents carried by ions in solution. Just about everything that happens in your body involves ions moving in solution, one way or another. Details matter.


But the movement of ions in solution is almost completely dominated by thermal motion. Your signal doesn't matter if the signal to noise ratio is essentially zero.


In other words, life is impossible?

That will be surprising to those of us who, you know, exist.


No, the molecular machinery of cells uses energy level differences that are far above the thermal energy level at body temperature, which allows them to actually make changes to things irreversibly. Enzymes are a great example of this.

Try to use microwaves to move ions from one side of a container of salt solution to the other and then get back to me on the ability of microwaves to control ion movement. Hint: you basically can't without obscene levels of radiation. The thermal "pressure" due to the diffusion of ions is enormous.

For a sense of scale, the thermal velocity of water molecules at room temperature is about 500m/s. The drift velocity(average movement of charge carriers, i.e. coherent current) of typical electric currents is on the order of 1mm/s.
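For illustration, those two velocity scales can be computed directly (here using the rms speed from the Maxwell-Boltzmann distribution; the exact figure depends on which average you take, hence the ~500-650 m/s range):

```python
import math

kB = 1.381e-23            # Boltzmann constant, J/K
T = 300.0                 # room temperature, K
m_water = 18 * 1.66e-27   # mass of one water molecule, kg

v_rms = math.sqrt(3 * kB * T / m_water)  # rms thermal speed of a water molecule
v_drift = 1e-3                           # typical drift velocity, m/s (~1 mm/s)

print(f"thermal: ~{v_rms:.0f} m/s")      # ~645 m/s, same order as the ~500 m/s quoted
print(f"ratio: ~{v_rms / v_drift:.0e}")  # thermal motion ~6e5 times faster than drift
```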


For microwaves to produce currents that could plausibly have an effect, there would need to be rectification and resonance, so that current could ratchet up. Unfortunately, both are known to occur in living tissue, as may be observed in people whose dental fillings enable them to pick up AM radio broadcasts.

On top of rectification and resonance, the signal would need to be carried in a place where its current has a persistent effect, and the nature of the signal itself, the modulation, would need to be such as to drive some cellular-scale electrochemical process. It is not possible to predict what that would be for the signal in question, if indeed there are any.

We appear to have got lucky with previous generations, but that tells us nothing about the next.


So, getting quantitative, suppose you have a modulation that gives you, at the membrane, a nanoamp average current, rectified, with a modulation resonant with the nanostructure, so each cycle pumps ions in just one direction. A nanoamp of +1 ions is 6 billion of them moving per second.
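That ion count is easy to verify: 1 nA of singly charged ions is just the current divided by the elementary charge.

```python
e = 1.602e-19   # elementary charge, C
I = 1e-9        # 1 nA of +1 ions

ions_per_s = I / e
print(f"{ions_per_s:.2e} ions/s")   # ~6.24e9, i.e. about 6 billion per second
```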

Now, 6B ions is hardly any, in the grand scheme of things, but they are in a very small space, and another 6B are moving in the next channel over. The only places we know of (well, that I know of) where these nanoamp currents are important is in organizing healing, and in embryo development. Old people have a hard enough time mustering healing activity without anything disrupting the process. I don't know what other processes might involve such currents.

Again, we don't know whether 5G modulation will affect healing in old people, but it is certainly physically possible that it can. It will be very hard to measure, but that doesn't mean the physiological effect must be small.

If there is an effect, will we notice? Is anybody monitoring healing rates in old people, at the population level, today? How do you even measure that?


As someone who has spent a lot of time reading the science of these chemicals, their cultural impact, the experiences of people I know and have talked to, and many experiences I have read about online, I can say that this is such a bad take that it's almost funny.

It's like saying that a helicopter and a blender are the same device because they both have spinning blades. These two substances are completely different in just about every way that they affect people, aside from producing a mood boost (which is an effect also caused by things like morphine, cocaine, a few different psychedelics, alcohol, caffeine, nitrous oxide, and many many more substances which don't act via serotonin).

One of the main theories on the action of MDMA involves the sudden serotonin release inhibiting the amygdala and thus reducing the fear response. Prozac certainly does not do this. MDMA's effects are not even fully shared with other serotonin releasers such as fenfluramine or MDAI. And Prozac's effects differ somewhat from those of other SSRIs too.

The neurotransmitters and receptors in the brain are not a set of dials where you can ratchet one and always expect a given psychological effect, independent of the other receptors, the way it's ratcheted, or even the timing. The effect of these drugs is also highly dependent on context. Believe it or not, MDMA can sometimes produce dysphoria or an unpleasant/unhappy mental state. It is uncommon, but I have seen it happen.

Sorry for the rant, it's just this type of oversimplification really strikes a nerve with me.


You want to see funny? You said:

> One of the main theories on the action of MDMA involves the sudden serotonin release inhibiting the amygdala and thus reducing fear response. Prozac certainly does not do this.

{cough}

Fluoxetine Facilitates Fear Extinction Through Amygdala Endocannabinoids

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4832021/

Chronic fluoxetine prevents fear memory generalization and enhances subsequent extinction by remodeling hippocampal dendritic spines and slowing down systems consolidation

https://www.nature.com/articles/s41398-019-0371-3

Serotonin-2C receptors in the basolateral nucleus of the amygdala mediate the anxiogenic effect of acute imipramine and fluoxetine administration

https://academic.oup.com/ijnp/article/15/3/389/720875?login=...

You read some articles. That is not enough. You are being duped by more pharmaceutical company propaganda.

Have you ever asked yourself why they don't investigate the reason people are not producing enough serotonin? Or why white people lack serotonin?

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5337390/


Have you actually talked with people who have taken these drugs? The way prozac affects people is clearly much much different from the way MDMA affects people.

Also, from your first link: "In contrast to fluoxetine, citalopram treatment did not increase BLA eCBs or facilitate extinction." And your second link: "Importantly, other SSRIs such as citalopram have shown the opposite effect, disrupting acquisition and retention of fear extinction"

Yeah...

Plus "fear extinction" is not the same as immediate fear reduction/inhibition as seen with MDMA.

I think that we as humans like to believe that we know more about these sorts of things than we actually do.

Edit: Your third link talks about anxiogenic effects of fluoxetine, i.e. increases anxiety; literally the opposite of what you are arguing.

