Hacker News

Another discovery that something thought to be uniquely human really isn't that hard.



It depends on what you mean by "art".

There's different sorts of art for different purposes (commercial, fine, etc.), and for many of these sorts of art having an AI make it would be acceptable.

But when I think "art" without a qualifier, I think "fine art". And the purpose of fine art, for me, is human communication of ephemeral things that aren't possible to communicate in language.

I don't want AI generated fine art because it turns the art into just a pretty picture. It's no longer about another person communicating something significant and human to me. It removes its purpose for existing.

In that view, I totally understand why some art communities would ban AI-generated art. I don't see a problem with it. I'm sure other communities will not ban such art. That's fine, too.

But I know which community I would be more interested in paying attention to.


> It's no longer about another person communicating something significant and human to me.

There is a person thinking about the prompt, typing in the prompt, painstakingly selecting the image that matches what they imagined in their head.

It's just less elitist, as now even a child can create complex, meaningful, and beautiful images by leveraging the essence of human art (compressed into a single ML model).

I, for one, never had the time & chance to learn painting. Even if I did it as a hobby, it would take me up to a decade to become proficient at it.

And now I can ask AI to paint what I dreamt about last night (!!!), or create an impressionist painting almost exactly like I see it (or even better).

Fun factor is also there: I can see how an anime character could look in real life, put a movie character in a new context, or just generate cute corgis wearing funny hats and driving cars.


> There is a person thinking about the prompt, typing in the prompt, painstakingly selecting the image that matches what they imagined in their head.

Which does make this a type of art. It just isn't the same form of art as digital painting, the same way painting a tree is not the same type of art as growing 100 trees, choosing one of them, and taking a photograph of it. Photography is definitely an art form, but it's a different art form from digital painting, even if the end result of both can be a digital image. Not better or worse, but definitely different.


> There is a person thinking about the prompt, typing in the prompt, painstakingly selecting the image that matches what they imagined in their head.

True, which makes it more akin to photography. I'm not saying it's "not art" (I'm saying the exact opposite of that), but that it's not art that is of interest to me. Much like my interest in art photography is very limited. Different people have different tastes.

> It's just less elitist

I disagree. I think this whole subject is orthogonal to the concept of elitism. Sure, there are certainly art snobs who take an elitist perspective, but they're a very small segment of the art world.


How would you describe a Rothko in words? Can you create a Van Gogh painting by incantation?


Can't help but think about the obvious: maybe we should just skip all the drudgery of the natural convulsive progress and ruthless competition, push "accelerate" and get to the mature AGI civilization regime as soon as possible?

And there we should just ask our new creative oracle machines the obvious: "devise an understandable, minimally invasive method of uplifting our own cognition and extending our biological lifespan". Surely at least some gene/cell/whatever therapy which does what we need is theoretically possible - human beings aren't special enough for this not to be the case.

At least that's the zeitgeist I perceive among others who understand the significance of the unfolding progress.


Pushing "accelerate" costs a great deal. Investors will want a return on their capital: either in money pressed from unequal access to the resulting paradise or in direct power over this new god.


With billions upon billions spent on Big Science, asking the government to spend just 1 billion on an AI training run with guaranteed spectacular results doesn't sound too outlandish.

Why should we pay taxes, if the government won't even train AI for us, given such opportunity?


AGI is an existential threat to human-led governments. Why would they build something to displace them?


We all know that analogies from Hollywood classic scifi movies aren't productive or even at all useful in our mundane reality. The general-purpose AI is going to be a very useful, obedient, capable tool, and the great powers are going to seek ownership over it, like they do with various military and space technologies.


Current safety techniques are far from sufficient for accelerating AGI progress to be a good thing.


AI safety discourse is mostly still stuck in the 2000s, with outdated intuitions informing loads of rigid concepts specific to that time and community.

Large language models trained on cross-entropy loss for next token prediction aren't going to be dangerous per se, even in the limit - simply due to the nature of the objective and the distribution they are set to approximate.
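To make that concrete with a toy sketch (this assumes nothing about any particular model, just the objective itself): next-token training minimizes the cross-entropy between the data distribution and the model's predicted distribution, which bottoms out exactly when the model reproduces the data distribution - it is rewarded for imitation, nothing more.

```python
import math

def cross_entropy(p_data, q_model):
    """Expected negative log-likelihood of the data under the model."""
    return -sum(p * math.log(q) for p, q in zip(p_data, q_model) if p > 0)

# A hypothetical "true" next-token distribution over a 3-token vocabulary.
p = [0.7, 0.2, 0.1]

# The loss is minimized when the model matches the data distribution;
# the floor is the entropy of the data itself.
perfect = cross_entropy(p, p)
worse = cross_entropy(p, [0.4, 0.3, 0.3])
assert worse > perfect
```

In the limit, the best the objective can do is recover the training distribution - it doesn't, by itself, reward pursuing goals in the world.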

We should make sure these powerful tools don't fall into a small exclusive set of power-hungry interests, though. Transformative tools such as these should benefit all of humanity and let it prosper.


If large language models trained on cross entropy loss for the next token prediction in the limit aren’t dangerous, then they aren’t enough by themselves for AGI.

An AGI, by virtue of being at least as general as a human, would be able to effectively form a plan for how to act in the world to achieve fairly general goals, in fairly general conditions.

Even if it isn’t agentic, if a program can formulate these plans in a way people can use, then the combined program+user still implements the optimization process.


That's why we have to make sure the user (committee) is safe and aligned. The usual Silicon Valley tech execs or East Coast finance bros aren't going to cut it this time.

The democratic solution here looks better than the expected rule of the (very) few.


That isn’t enough if the plans which are enacted have unintended undesirable consequences.


Yeah, give me a second, singularity exists.


It seems to me that the human effort spent in creating the original images in an AI's data set is an important element to consider when thinking of the "difficulty" here.


Certainly, but those human artists were also trained on a data set their whole lives. Raise a baby with no training data set and they might produce a handprint on a wall or a rectangle of facepaint. There's a reason indigenous tribesmen don't spontaneously invent hyperrealism.


However, there's a major misconception about training of humans and AI.

Image-generating AIs are trained with massive amounts of images and text; image-generating humans, however, train with a much broader spectrum of experience.

Also, feeding a model tons of images created by humans (directly or by proxy) and claiming that the AI is generating something completely new is a bit naive IMHO. Humans mix a much broader and deeper experience pool to create things without prompts.

An AI model blurts out something derived from a corpus of images and text created by humans, that's all.

The technology is impressive for sure, and it marks a new era in terms of possibilities, but it doesn't take my breath away, sorry.


Further to your point, comparing humans to AI not only misunderstands AI, but also looks past how humans create and choose the styles which this AI is now reproducing through a statistical approach. Without human guidance the AI would be bland; AI is limited to the walls of whatever it has been exposed to, while humans are not.

The AI emulates. Humans create. - The significance here is not trivial.


Humans emulate as well, and produce bland art as well.

I've seen lots of human-made art (especially on Artstation) that failed to elicit any emotional response in me.

The reason people are having this discussion is a fear of AI creating aesthetically pleasing and emotionally deep art through a statistical approach.

And the deeper fear that AI can potentially "hack" our senses by providing us with exactly what we want to see, read or hear (statistically).

i.e. if some subset of art is pushing our biological buttons more than all other art, and the effect is generalizable over human population, what would that say about us?


I think the point has been missed. AI is bound. Humans are not.

To better illustrate that difference: Just because anecdotally some human might produce crap doesn't mean that is the limit of all humans.


Well put. AI is an evolutionary dead end, in a sense. True Intelligence is not; and at the point at which it is not, it is no longer artificial, and ethical concerns must then apply (rights for so-called AI citizens, etc.)


>AI is limited to the walls of whatever it has been exposed to - humans are not

So you can imagine new colors?


> Humans mix a much broader and deeper experience pool to create things without prompts.

That wide variety of experience changes the decision of what to create, but I think it has less influence on the actual output than one might expect. A person can have enormous life experience but if they have never seen a logo, a photo, a painting, a drawing, or any other kind of rendered image they will be no more sophisticated in their artistic output than a caveman. Modern artists stand atop a mountain that their ancestors had to climb inch by inch.


Actually, I don't agree. I take photographs and design logos for my pet projects sometimes, and I optimize for feeling most of the time.

I want my photos to make the viewer feel a particular way when viewing what I took. This affects colors, style, and everything.

Arguably, as long as the medium carries the message the creator intended, it's equal in my view. It can be a photo, a drawing, a digital painting or a physical painting with oils. They can create the same feeling if the artist aims for it.

This "optimization for feelings" is a result of my own non-photographic experiences. Sometimes a song, sometimes an event I went through, sometimes an art piece I saw on a medium I don't work.

From my understanding and observation, many artists work that way. They reflect their emotions from one domain (generally personal life) into another domain (the art they create). There was also a video, which I repeatedly fail to find, showing how three designers were influenced by the things they saw (deliberately planted into their minds slowly) during a 20-minute car trip.

Yes, we're building upon this great mountain of experience and knowledge, yet our output is affected by what we experience in other parts of life, unrelated to the art we create.

Consider the following experiment: the music you listen to changes your mood, your working speed, and what you feel, effectively changing how you operate and what you create. (Exclude work from this, which is by definition something you have to do unless you're literally dying.)


"yet our output is affected by what we experience in other parts of life, unrelated to the art we create."

Right, and what happens when you let that loop feedback to itself?

An asphalt road on every corner, so we build more asphalt roads; we design only what has been designed before. Amplify this thousands of times through computer-generated design, and the feedback loop is closed and unchanging. Unless your goal is actually to create an independent organism (a different moral issue), you are creating sophisticated feedback loops, not worthwhile content generation. Unless by content generation you mean endless remixing... which is not the same.

Sort of like changing the color of a 3D model and claiming it is a new race with new attributes, as is often done on the cheap, e.g. in videogames.


> claiming that AI is generating something completely new is a bit naive IMHO.

I don't see how AI art is less new than the vast majority of human art. Both can create unique compositions that are still deeply rooted in patterns and principles thousands of years old.

> Humans mix a much broader and deeper experience pool to create things without prompts.

AI does not have understanding, but it's stealing the underlying patterns that are the end result of that human experience.


Absolutely. Though I think that speaks to one of the ways we tend to discount the difficulty of developing human artistry, rather than a defense of AI gen being "not hard".


Yes, that 4 GB of data can't really be independently generated, but the fact that a large chunk of that effort can be compressed into 4 GB is still amazing.


If your life's work can be compressed down to 2 bytes, it wasn't that hard.

OTOH, if the 2 bytes is an index into a larger corpus of collected works, the question becomes more interesting.

OTOH, if the corpus is an ensemble of other 2-byte objects...


Who would've thought art would get automated before driving.


That's like saying "well, making knives is easy" while ignoring everything that is needed to produce the materials and then shape them into their final form.

It's only "easy" because bunch of people over years perfected every step of the process.



