Good Old Fashioned AI is dead, long live New-Fangled AI (billwadge.com)
164 points by isomorphy on Nov 15, 2022 | 89 comments



The "new-fangled" AI, as the article calls it, is often useful when the stakes are low, and you can accept mistakes in outcomes. Examples of such applications are: trying to determine which of your friends occur in a photo, which movies a subscriber may be interested in, or which action could lead to victory in a computer game. Getting a rough translation of a newspaper entry, as mentioned in the article, is also a good example.

As soon as you need reliable outcomes, such as certainty whether an erroneous state can arise in a program, whether a proof for a mathematical conjecture exists, or whether a counterexample exists, exhaustive search is often necessary.

The question then soon becomes: How can we best delegate this search to a computer, in such a way that we can focus on a clear description of the relations that hold between the concepts we are reasoning about? Which symbolic languages let us best describe the situation so that we can reliably reason about it? How can we be certain that the computed result is itself correct?

The article states: "The heart of GOFAI is searching – of trees and, more generally, graphs." I think one could with the same conviction state: "The heart of GOFAI is reasoning – about relations and, more generally, programs."


> As soon as you need reliable outcomes, such as certainty whether an erroneous state can arise in a program, whether a proof for a mathematical conjecture exists, or whether a counterexample exists, exhaustive search is often necessary.

Proof checking requires 100% reliability. But if you are searching the space of all possible proofs for a valid one, that process does not require 100% reliability. On the contrary, automated theorem provers rely on heuristics to guide their exploration of that space, none of which work 100% of the time. "Exhaustive search" is an infeasible strategy, because the search space is just too large. Finding proofs is the really hard part (NP-hard), and the part which most stands to benefit from "AI" techniques – checking their validity is a lot easier (polynomial time).
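
A toy illustration of that asymmetry (plain SAT rather than theorem proving, but the same shape): checking a candidate solution is a single linear pass over the clauses, while the naive search is exponential in the number of variables, which is exactly why heuristic guidance matters.

    # Toy SAT instance: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    from itertools import product

    clauses = [(1, -2), (2, 3), (-1, -3)]

    def check(assignment, clauses):
        # "proof checking": one linear pass over the clauses
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in clauses)

    def brute_force(n_vars, clauses):
        # "proof search": worst case 2^n candidate assignments
        for values in product([False, True], repeat=n_vars):
            assignment = dict(enumerate(values, start=1))
            if check(assignment, clauses):
                return assignment
        return None

    print(brute_force(3, clauses))  # e.g. {1: False, 2: False, 3: True}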

"New AI" deep-learning techniques can be used to augment automated theorem provers, by giving them guidance on which areas of the search space to target – see for example https://arxiv.org/abs/1701.06972 – that produced a seemingly modest improvement (3 percentage points) – but keep in mind how hard the problem is, a 3 percentage point improvement on a very hard problem can actually be a big deal – plus I don't know if any more recent research has improved on that.


Finding a proof of a mathematical conjecture is much harder than NP-hard, it's undecidable (the Entscheidungsproblem), because the search space is infinite (and worse, doesn't necessarily contain a proof even if the statement is true - Gödel's first incompleteness theorem). So even if we restrict to conjectures that are actually machine-provable, it's still much harder than NP decision problems, which are combinatorial and therefore have only an exponentially growing search space.


Remark: Deciding if a proof of a proposition exists* up to a certain length n is NP-complete. Deciding if a proof exists at all - which is the Entscheidungsproblem - is semi-decidable. So the parent wasn't entirely wrong.

* - In Peano arithmetic, let's say
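
To make "semi-decidable" concrete, here's a sketch of the obvious enumeration procedure (the `check_proof` function is a hypothetical stand-in for a fast proof checker, say for Peano arithmetic): it halts exactly when a proof exists, and bounding the length is what recovers the NP framing, since a valid short proof is a certificate the checker verifies quickly.

    # A semi-decision procedure for "does a proof exist?". `check_proof`
    # is a hypothetical stand-in for a fast (polynomial-time) proof checker.
    from itertools import count, product

    STEPS = ["axiom_1", "axiom_2", "modus_ponens"]  # toy proof-step alphabet

    def find_proof(statement, check_proof):
        for n in count(1):                      # candidate proofs of length 1, 2, ...
            for candidate in product(STEPS, repeat=n):
                if check_proof(statement, candidate):
                    return candidate            # halts iff some proof exists
        # if no proof exists, the loop above never terminates:
        # that is the "semi" in semi-decidable

    # trivial demo checker that "accepts" any length-3 derivation ending in modus ponens
    print(find_proof("p -> p", lambda s, c: len(c) == 3 and c[-1] == "modus_ponens"))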


The idea of chain-linking "AI to guide/choose AI for the next step" is where I expect more impressive results in the future. It will be important to understand the limitations of AI to be sure of proper placement.


> Finding proofs is the really hard part (NP-hard), and the part which most stands to benefit from "AI" techniques – checking their validity is a lot easier (polynomial time).

Note that this is true only for Boolean logic and equivalents. Anything stronger has either much higher complexity or is simply undecidable (though proof checking remains fast, which is the main point).


The only barrier for higher stakes applications is going to be the frequency of errors. Flying an airplane or running a factory has a lot less margin for error, but humans don't do those things perfectly either (Chernobyl, Three Mile Island, the Union Carbide Bhopal disaster). It doesn't have to be perfect, just better than humans. And in fact, I'd argue that having non-deterministic outcomes prevents systemic failure, like the single point of failure for all those drones in the crappy episodes of Star Wars.


> The only barrier for higher stakes applications is going to be the frequency of errors.

Frequency and strength. My issue with e.g. image classifiers is that when they’re wrong, they’re catastrophically wrong — they don’t misidentify a housecat as a puma, they misidentify a cat as an ostrich.


Or, put in the language of risk, probability and severity


> The only barrier for higher stakes applications is going to be the frequency of errors.

IE, "The only barrier to the software working perfectly is it's tendency to fail".

Which is to say, this sort of argument effectively assumes, without proof, that there are no structural barriers to improving neural network performance in the real world. The thing is, the slow progress on self-driving cars shows that reducing the "frequency of errors" can turn from a simple exercise in optimizing and pumping in more data into a decades-long debugging process.


>It doesn't have to be perfect, just better than humans.

I have a different opinion on this.

Humans don’t like uncertainty. We like to feel like our mental model of reality can predict future outcomes. When it doesn’t, we get very uneasy. It’s why we don’t like dealing with erratic humans.

Part of the problem with AI is its lack of interpretability. People aren't going to want to interact with AI if they can't intuit what it will do, even if you can show it's statistically better. The performance barrier is going to be much higher than just a little better than humans. We don't have that limitation when dealing with people because we can more easily infer their goals and actions.

Thinking that "a little better than humans" is the threshold is a rational position. But human trust is often irrational, and the latter often drives politics, which can regulate AI into a corner.


"Better than humans" is not really meaningful, as human skills have a very wide range and can even vary depending on the circumstances and available resources.

Also, "better than average" is not a great target either: sometimes it makes sense, but there are plenty of examples where we definitely don't want more merely average work.


But there's the rub. It's impossible to determine through testing whether a particular AI system will actually have a lower frequency of errors than humans. You can program an AI system to handle certain failure modes and test for those in simulation. But complex systems tend to have hidden failure modes which no one ever anticipated, so by definition it's impossible to test how the AI will handle those. Whereas an experienced human can often determine the correct course of action based on first principles.

For example, see US Airways Flight 1549. Airbus had never tested a double engine failure in those exact circumstances so the flight crew disregarded some steps in the written checklist and improvised a new procedure. Would an AI have handled the emergency as well? Doubtful.


If you understand the problem well enough that you know that the frequency of errors is stable and won’t blow up on out-of-sample data, then it seems like you’ve fully modeled what’s happening? That’s not any easier than solving the problem because it is solving the problem.

If don’t have good reasons to be confident that the error rate is stable, then you’re just guessing that you solved the problem, because it seems to work.


The problem is that humans fail in very different ways. I don't ever need to wonder if a human driver following behind a road maintenance truck visibly carrying traffic cones or stop signs is going to be a problem...


There's a weird induced-demand effect, like widening a freeway: a system that makes errors less often than a human but is more scalable can still increase the absolute number of errors. I guess in the case of self-driving cars, it could come from hordes of autonomous shipping trucks that outnumber existing truck drivers.


> As soon as you need reliable outcomes, such as certainty whether an erroneous state can arise in a program, whether a proof for a mathematical conjecture exists, or whether a counterexample exists, exhaustive search is often necessary.

Checking proofs is easier than finding proofs.

> The question then soon becomes: How can we best delegate this search to a computer, in such a way that we can focus on a clear description of the relations that hold between the concepts we are reasoning about? Which symbolic languages let us best describe the situation so that we can reliably reason about it?

These questions are largely answered. Or, at least, the methodology for investigating these types of questions is well-developed.

I think the more interesting question is co-design. What do languages and logics look like when they are designed for incorporation into new-fangled AI systems (perhaps also with a human), instead of for purely manual use?


So, there's an area of research that's under way called "AI Assurance" which seeks to answer many of these questions.

Some things they're attempting:

- Creating explainable outcomes by tracing the inner workings of ML models.

- Looking for biases in models by feeding them random inputs and checking for biased outputs.

- Using training sets with differently weighted models to find attacks and biases.

etc.


The tragedy is that GOFAI did all these things as built-ins. Procedural expert systems have been doing introspection, backtracing, declaring confidence intervals, etc., since the 1960s. Layering "assurance" on top of inherently jittery statistical/stochastic and neural systems seems to misunderstand how these models evolved, where they came from, and why there are alternatives.
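
For anyone who hasn't seen one, here's a minimal sketch of that built-in introspection (toy rules of my own, not any real historical system like MYCIN): every derived fact records the rule and premises that produced it, so answering "why?" is just walking the recorded trace.

    RULES = [
        # (conclusion, premises, rule confidence)
        ("flu",   ["fever", "cough"],  0.8),
        ("fever", ["temp_above_38"],   0.9),
    ]

    def infer(facts):
        """Forward-chain over RULES, keeping a derivation trace per fact."""
        derived = {f: ("given", [], 1.0) for f in facts}
        changed = True
        while changed:
            changed = False
            for concl, premises, conf in RULES:
                if concl not in derived and all(p in derived for p in premises):
                    # crude confidence combination (toy, not MYCIN's calculus)
                    c = conf * min(derived[p][2] for p in premises)
                    derived[concl] = ("rule", premises, c)
                    changed = True
        return derived

    def explain(fact, derived, depth=0):
        """Answer 'why?' by walking the recorded derivation."""
        kind, premises, conf = derived[fact]
        pad = "  " * depth
        if kind == "given":
            print(f"{pad}{fact}: given")
        else:
            print(f"{pad}{fact} (confidence {conf:.2f}) because:")
            for p in premises:
                explain(p, derived, depth + 1)

    explain("flu", infer({"temp_above_38", "cough"}))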


While horses don't consume fossil fuels, they also don't solve the problem of transoceanic flight. Yet for some reason every discussion of airplane design ends up dominated by a vocal contingent of buggy whip salesmen.


Turns out, we don't need transoceanic flight; we can put horses on stirrups, and have an elegant system of clockwork to allow them to power a boat.

https://www.newyorker.com/tech/annals-of-technology/the-past...


Carmack or someone said that all of the pieces needed for AGI are already solved; it's just a matter of someone scouring 40+ years of AI research, finding the right papers/techniques, and putting the concepts together.


I'm not entirely sure we have even identified all the problems of AGI yet; if we have, I'd be interested to see that list.


And while airplanes don't solve the "problem" of human connectedness, it's inevitably a conversation led by advocates of transoceanic flight still hankering for flying cars and living in the Jetsons world of the 1950s futurists - long after transoceanic flight has become an actual problem.

Same thing. Different parochial perspective.


And people who work on transoceanic flights advertise unicorns to people.


Another tragedy: GOFAI hasn't made a dent in any of these problems for a very long time.


I agree with your broad point however I think the proof example is off.

Finding a proof is the hard part. Verifying a proof is the easier part. So the AI could search for likely proofs and we can use other mechanisms to reliably verify them. This doesn’t require high accuracy in the search stage to be reliable overall.


I used to believe exactly this, and I think it is mostly correct. There are exceptions, though. The one that made me reconsider this proposition was Andrew Ng's early work: "new-fangled" AI to precisely control a helicopter! [1] You can't really accept mistakes or imprecision in that outcome. A lot of it was RL work. [2]

[1]: https://www.youtube.com/watch?v=M-QUkgk3HyE&ab_channel=Stanf...

[2]: https://papers.nips.cc/paper/2003/hash/b427426b8acd2c2e53827...


GOFAI was never more than a rules engine. If-then statements.

Agree with you about probabilistic AI being useful in low-stakes situations, at least at first.


> GOFAI was never more than a rules engine.

GOFAI "aged out" of the AI label and became: compilers, databases, search algorithms, planning algorithms, programming languages, theorem proving, a million things that are still commonly used in NLP/CV/robotics, comp arch, etc. Aka, most of what makes computers actually useful.

If something's over 30 years old and is still called AI, that's just shorthand for saying it's a failed idea that in the best case hasn't had its moment yet.

> If-then statements.

99.999% of the software I use is just if-then statements.

(Also, this is like saying that deep learning is Linear Algebra.)


If-then questions can lead to non-deterministic outputs with some simple feedback systems

Not disagreeing completely, but both the questions that are reasoned about and the code that reasons about those questions need more careful classification in order to make use of these new data methods.

Personally, I see the hype around deep learning for "find the pattern in varying digital content" (so clearly useful to the FAANG content feudal lords) as an investor shouting match that paves over the simple use cases where deep learning is really not appropriate.


Sorry, that's really wrong.

I wouldn't call theorem provers just "if-then" statements. By that logic, everything, even large models, is just if-then statements.


Technically, they are. There was a post on HN recently about how every model is equivalent to a decision tree.


I mean, if it can be specified by a computational process, that’s trivially true, right?


I'm not so sure about that.

Decision trees are a finite set of if-else branches and work on finite input - much like models. I assume many of the classic arbitrary-size-input algorithms cannot be represented by a decision tree.

Adding two arbitrary-sized numbers comes to mind - how would one model it as a decision tree?

    if (a[0] + b[0] == 0) {
        c[0] = 0
    } else if (a[0] + b[0] == 1) {
        c[0] = 1
    } else if ...
    ...
    } else if (a[0] + b[0] == 10) {
        c[0] = 0
        if (a[1] + b[1] == 0) {
            c[1] = 1
        } else ...
        ...
    }
    ...
You can see that for arbitrary-sized input, this decision tree would have to grow infinitely - which contradicts the finite nature of decision trees.

I can't think of a proof that such a computation cannot be represented by a decision tree, and it's possible that my definitions aren't quite correct. But intuitively, I think this is the way it works.

Of course, if I'm wrong, any explanation on why and how would be very welcome.


Well said


The author undersells himself - AlphaGo basically searches the game tree the same way he describes for GOFAI. Monte Carlo Tree Search is old yet essential; the neural network mainly improves the game-evaluation heuristic, using function approximation, which I'm sure the author is familiar with. Modern AI abilities are mind-boggling, but they're not that complicated to understand, especially with a GOFAI background!
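
For the curious, here is a compressed UCT/MCTS sketch showing the division of labour (a toy, not AlphaGo itself; the `game` interface and the `evaluate` hook are assumptions of this sketch). The `evaluate` call is exactly where the new-fangled part plugs in: classic MCTS uses a random rollout there, while AlphaGo-style systems use a learned value/policy network.

    import math, random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children, self.visits, self.value = [], 0, 0.0

    def uct_select(node, c=1.4):
        # UCB1: balance average value (exploitation) against visit count (exploration)
        return max(node.children,
                   key=lambda ch: ch.value / (ch.visits + 1e-9)
                   + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

    def search(root_state, game, evaluate, iterations=1000):
        # assumed interface: game.moves(s), game.play(s, m), game.is_terminal(s);
        # evaluate(s) -> value in [0, 1]: a random rollout in classic MCTS,
        # a learned value network in AlphaGo-style systems.
        root = Node(root_state)
        for _ in range(iterations):
            node = root
            while node.children:                  # 1. selection
                node = uct_select(node)
            if not game.is_terminal(node.state):  # 2. expansion
                for m in game.moves(node.state):
                    node.children.append(Node(game.play(node.state, m), node))
                node = random.choice(node.children)
            value = evaluate(node.state)          # 3. evaluation
            while node is not None:               # 4. backpropagation
                node.visits += 1                  # (toy: ignores the per-player
                node.value += value               #  sign flip a real game needs)
                node = node.parent
        return max(root.children, key=lambda ch: ch.visits).state  # most-visited move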


The author's attitude is a bit annoying to me. This same kind of complacency is a roadblock to progress.


On who or what benefits from generative AI (DALL-E, GPT-3, etc.), I'm reminded of both Veblen and Warhol:

Thorstein Veblen's The Engineers and the Price System looks at the relationship between art (engineering, in the original sense of the term) on the one hand and the nexus of management, ownership, and profits on the other.

Andy Warhol industrialised art. Or, more accurately, he applied 20th-century production methods to art; the practice of applying a given age's best mass-production techniques to the making of artworks long pre-dated him - look, for example, to Michelangelo's workshops churning out cultural affirmations for the newly established Medici dynasty in Renaissance Italy. In both cases, technical methods of production were applied to art.

Veblen asks cui bono. Warhol answers how. Both look to technology.

I'm being less coherent here than I'd like though I hope the gist of the issues I'm pointing at comes through.


"Finally, a vital question is, how will this affect today’s working artists? Here the answer is not so optimistic."

I have a different take on this. I think this technology will allow more people, not less, to make money as a living (so, professionally) in a visual arts related industry. So I'm broadening the field to include not just "artists" but "commercial art" as well (designers, commercial illustrators, video/film post-production, etc.).

The reason is that it changes and lowers the barrier to entry for these fields and automates away a lot of the labor-intensive work, thereby lowering the cost of production.

Whenever something becomes cheaper (in this case, labor for art), its consumption increases. So in the future, because producing commercial art is so much cheaper, it will be consumed a lot more.

At the same time, we're not at the point where we can actually remove humans entirely from the process. AI generated art is a different process and requires a different skillset, but it still requires skill and learning to do well.

The analogy would be something like a word processor reducing the number of secretaries needed in the workforce, but increasing the number of office workers. People no longer need someone to take notes / dictation, but all kinds of new workflows emerged on top of the technology, and almost all office workers need to know how to use something like a word processor.

Therefore, the opportunity here is to build tooling that makes it easier and more accessible for more people to work with AI image generation.

Disclaimer: I'm doing exactly that (building tooling to make content generation easier and more accessible) with https://synapticpaint.com/


> Whenever something becomes cheaper (in this case, labor for art), its consumption increases. So in the future, because producing commercial art is so much cheaper, it will be consumed a lot more.

I'm not sure how that would apply here. There's never been a shortage of art. Art has always had more supply than demand, and now we've just added even more supply to saturate the market. I was previously a more likely client for an artist than I am now that I can get my computer to spit out any image I want in like 30 seconds. But I have no more desire for art than I did before.


> Whenever something becomes cheaper (in this case, labor for art), its consumption increases. So in the future, because producing commercial art is so much cheaper, it will be consumed a lot more.

I have the opposite view. With a lower barrier to entry, it will get over-saturated and over-produced, and consumers will suffer from content fatigue, leading to less interest in AI-generated media as a whole.

An analogy is luxury goods: reducing the price of luxury goods decreases demand for them.


How about website/frontend design as an analogy?

It'd be great if that were made easier so that more folks could participate/make a living. Then again, I think every time it was made simpler (e.g. Flash, Dreamweaver's export-to-HTML, jQuery, ...) the result was a slew of crap.

So: the lower the barrier to entry, the more actual skill/artistry becomes important for a high quality result.

Phrased differently: once the drudge work becomes mechanised, the concept of quality is lifted to a new level. This highlights aspects that used to be stuck in the mud of the drudge work, enabling a more profound understanding... by those with the necessary skills to do so.


> I think this technology will allow more people, not less, to make money as a living (so, professionally) in a visual arts related industry.
> ...
> Whenever something becomes cheaper (in this case, labor for art), its consumption increases.

But not its price, and definitely not the compensation for the labour to produce it.

Making a living as a mediocre-to-good artist is already incredibly difficult; increasing the supply of poor-to-good artists through AI-assistance isn't going to make it any easier.

> The analogy would be something like a word processor reducing the number of secretaries needed in the workforce, but increasing the number of office workers.

Only if the word processor wrote documents without the assistance of a typist, or an author.


NFTs notwithstanding, people paying big money for art aren't usually just "buying" .JPG files. There's often some physical component, sometimes quite large, heavy and difficult to reproduce. Oil paintings, castings, mixed-media pieces larger than a car, etc.

These sorts of things are less likely to be subject to someone typing a command into an AI generator program. AI stuff will have impacts on grunt work, and will be used to come up with crazy ideas when artists are stuck, and there will be some people who will make a lot of money with a line of text and some software, but overall, the art world values uniqueness, the human touch or at least the human conceit that goes into "computer art" or "machine art".

Art is only worth what people (real live humans) will pay for it. The NFT market has finally started to crater as people have realized there's nothing there there.


"Working artists" (lol) are gonna be just as subservient with or without 2D typewriters, because the "creative industries" are to art basically what porn is to sex. (Anglophones have this linguisic quirk where half of the time "art" is synonymous with "graphics", which to other languages sounds like trying to tie your shoelaces with one hand fused to your face.)

Truth be told, I'm yet to see an AI-generated or AI-assisted work that provokes any internal experience other than thinking "hey, cool pixels/sounds/sentences/whatever". Not even the usual "somebody paid a lot of money to make this thing, so I better pay attention" which is the official function of commercial art.

It's certainly a very interesting academic exercise, and possibly a lucrative line of business. But if the value proposition is supposed to be "now it's easier for more people to create more complex stuff" (i.e. operate at a higher level of abstraction), why do I keep finding many "manually produced" works that speak to me on some level, even among the shitfountain of mass culture - while no AI-made thing has yet got me thinking anything other than "hey, cool tech?"

I'll start worrying when an AI chooses to ignore its incentives and, instead of doing something more rewarding, goes on to create a work of art just because. I'll also start worrying when people start becoming unreceptive to non-AI art, which I think is more likely to happen within our lifetimes.

Because what I see here is some cool tech for creating some ersatz sensory stimuli. And obviously, with enough compute you can make this advanced enough to confidently supplant the previous generation of tech for creating ersatz sensory stimuli. Somewhat less obviously, this reduces the risk of a spontaneous transcendental experience when perceiving AI art to a safe margin. Which is... probably good for business somehow?

At the end of the day, the human sensory system has a finite complexity, so you can create more and more compelling simulacra. Have at it. What'll happen is the next generation of humans will grow up awfully prone to Wile E. Coyote moments - tragic as well as comic.


>I'm yet to see an AI-generated or AI-assisted work that provokes any internal experience other than thinking "hey, cool pixels/sounds/sentences/whatever"

Maybe because you knew it was AI generated before you looked at it?


Or because I knew it was AI generated when I looked at it. Either way, that shouldn't be the differentiator.

An artwork is judged by how it exists within the totality of the context. We can consider "art" in the broadest sense, as "artifice" - and from that standpoint the algos themselves can be seen as staggering works of art in their own right. But on the other hand if we view the algos strictly as tools, and only consider individual pieces of content that are created through them, so far it's been only "meh" with a very occasional "hmm".

A similar example is the audio gear scene. There are many electromusical devices that are true works of art, worthy of historical study even - then someone picks 'em up and starts by making bleeps and bloops that may have some novelty value but don't create a lasting impression.

I try to keep an open mind - even Facebook can be seen as AI-facilitated performance art evoking a profound feeling of dread, but that's a much broader context.


> while no AI-made thing has yet got me thinking anything other than "hey, cool tech?"

That’s just, like, your opinion man.


That's exactly what it is. Show me some mind-blowing AI art :-)


People spend an awful lot of time talking about current successes in AI without often reflecting on how much (or, actually, how little) AI impacts their lives. Despite all of the energy put into current-gen AI, in terms of everyday impact the biggest things I can think of are:

- Spam filtering/email sorting

- Web search

- GPS/Wayfinding

- Voice assistants

These are the only practical applications of "AI" that I use more or less every day (I'd be happy to be reminded of others). Of these four, I personally have found spam filtering to be getting worse recently, as well as web search. The first three were all more or less solved over a decade ago, and, while I find Siri convenient, I wouldn't mind much at all if voice assistants completely disappeared tomorrow.

I'm not denying we've had an amazing decade of pushing the research needle further. There have been tons of impressive AI projects out there. However the practical, day-to-day improvements we've seen with the existence of AI seem to be few and far between, and this is even more true when you start asking about any AI work done in the last decade. I was happier with the state of "AI" in my life in 2006 than I am today.

I just find it a bit fascinating how much energy has gone into both generic data science as well as more serious AI research and yet how little the reach of AI has grown in the last 10 years. All of the cool AI that I use existed before data science was declared the "sexiest job".


Most novel AI serves its user perfectly well. And since this type of AI requires lots of labelled data, those users are typically large data-harvesting organisations.

Some other applications that you may use daily are translate and face unlock/recognition.

It’s interesting that you mention search and spam filtering which both include an adversarial component. It seems to me that the adversarial AI has become better, in line of expectation from the democratisation of AI tools and knowledge.


You might be surprised where AI shows up.

Use a credit card? Fraud monitoring, KYC, and other financial models run through (e.g. Early Warning service).

Log into a website? Application monitoring with anomaly detection.

Own a 401k with shares in a financial vehicle like an ETF? AI used to predict the market for in-the-money trades.

Gone to the ER? Risk levels of mortality, sepsis, etc. are constantly pushed to your medical record (in many top-tech hospitals, like Parkland Hospital in Dallas and similar).


Do any of those applications use neural nets in any non-trivial way? I'm pretty sure that kind of stuff is all classical statistical modeling.


Yep, they do. But remember, neural nets are a part of ML, not the universe of ML. The amount of sunshine between statistical learning and ML is mostly negligible.


I believe most of these applications are based on statistical methods, but not what is commonly known as AI/ML.


This is like when the Scooby Doo gang pulls the mask off the mayor. ML and statistical learning are broadly the same thing. There is arguably some, but not much, sunshine in the area of RL.

AI itself is a superset of whatever difference one may see between Stats and ML, as algos don't specifically require uncertainty.


I agree with your take mostly. But some things are getting better work-wise, like video/image editing: something that used to take an animator a week or two now takes all of a minute or less. Some startups in the motion-capture space are doing some wild things, and in a few years even small indie game studios will have mocap parity with AAAs.


>The heart of GOFAI is searching – of trees and, more generally, graphs.

and because of this, GOFAI and algorithmic solutions will, contrary to what the title suggests, continue to underpin a huge chunk of applications that don't fall into any "natural" or generative domain. When you have an algorithmically optimal or closed-form mathematical solution to a problem, trying to approximate it from data with some new method doesn't make sense just because it's cool.


> The image generator seems to understand that you can’t see through opaque objects

I thought this isn't the case for Stable Diffusion. Wasn't it the humans making the source images who understood things like that, and their knowledge became encoded in the latent space of the model? I'm not an expert. Please correct me here.


Hmm. Wonder what "astronaut riding a glass horse" would do then?


I just tried it out. This prompt (without any additional description) didn't produce a satisfying output with Stable Diffusion 1.5.

With some other keywords it generated some cool-looking images, but none where there was a clearly transparent horse and the visible legs of an astronaut or anything like that.

It is generally just very hard to compose a prompt where multiple subjects interact in a specific instructed way.


A lot of GOFAI was about symbolic reasoning, which is back in fashion thanks to areas like neurosymbolic AI and graph learning. One could argue that generative AI like Stable Diffusion is incorporating some of these techniques: what is prompt engineering and textual inversion if not the manipulation of symbols? Add more recent interest in areas like Answer Set Programming, and "what is old is new again". It's not like GOFAI disappeared. Even technologies like expert systems still see use under a new name. Sometimes you don't need the heuristic approximation of the rules that you get from ML approaches; you need exact representations, or you already know the rules and the challenge is inference.


One could very well argue that a Markov chain as seen in diffusion models is nothing more than following a graph from a point sampled from the prior back to the original distribution. The edges in said graph are generated either via a scoring model that predicts an estimate of the gradient directly, or by generating a conditional pdf and sampling from it.

This is no different from tree search, not really anyway; it's just that ANNs are used to handle the intractability of expanding an infinite and continuous space, or to provide the probabilities within an MCTS - which is what AlphaGo and AlphaZero do.

One could argue that LMs / causal Transformers are also doing tree search; indeed one of our evaluation metrics, perplexity, is a measurement of the model's uncertainty in the transition p(token_i | token_{i-1}, ..., token_0).
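
Concretely (a quick sketch of the standard definition, nothing model-specific): perplexity is the exponential of the average negative log-probability the model assigns to each transition, i.e. the effective branching factor of that walk through token space.

    # perplexity = exp(average negative log-probability per transition)
    import math

    def perplexity(token_log_probs):
        """token_log_probs[i] = log p(token_i | token_0 ... token_{i-1})."""
        avg_nll = -sum(token_log_probs) / len(token_log_probs)
        return math.exp(avg_nll)

    # a model that puts probability 0.25 on every next token is, in effect,
    # choosing among 4 equally plausible branches at each step
    print(perplexity([math.log(0.25)] * 10))  # ~4.0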


I'm like an old man yelling at ML. I lost interest in AI once ML took over. A*, simulated annealing, minimax, and propositional logic are not cool anymore. I feel quite obsolete in this field. Probably because of the need to make money: the businesses built on that kind of AI are still niche and the jobs are few. ML-based AI is getting out of its niche, and generative AI is mass-market now. But there are still only a few real business needs; I hope it gets past the gimmick stage and becomes a platform to build on top of.


Art is a lot more than JPEGs. I don't have JPEGs on my walls; I have paintings and etchings and collages and even a small tapestry. I have some small ceramics and a handmade charred wooden bowl. I have photographs printed on fibre-based paper, and funny little watercolours.

Art is going to be just fine.


It could affect commercial artists deeply, like game artists and commercial illustrators making logos and icons and whatnot.

But it won't affect studio artists at all. Studio art is not about "the image", it's about the practice, physical qualities of the artifacts, and an ongoing evolution of the artist.


The subreddit for Stable Diffusion has several examples of high-quality stitching of SD-generated images. If I am interpreting your vernacular of "studio artist" correctly, then yes, studio artists will be affected.

Artists that work in a "real" medium like charcoal, sculpting, etc. aren't directly affected yet, but could be in the future.

As always, there is a power law distribution when it comes to perceived value. It will be interesting to see how this evolves.


I’m thinking about the “art world”. People who invest in art bet on the artist continuing to put out work, evolve a narrative, grow their brand. The art is important because of who made it. “I own a Matisse”. I don’t see how diffusion models can disrupt that world.


As always, they mis-spelled the acronym for "Machine Learning". There's nothing "Artificial" or "Intelligent" here but a mathematical algorithm operating on an algorithmically-encoded dataset.

If anything, it's closer to an encryption algorithm where the keys can decrypt deterministic parts of the plaintext from the ciphertext and soften the edges a bit.


While I don't really endorse it, I understand the objection to a name with the word "Intelligence" as part of it.

I don't understand the objection to the word "Artificial".

Why do you say that there's nothing "Artificial" about these programs? Now, there may be contexts in which you could call a program "natural" in the sense of "the natural way to do something", but at the same time, are not all computer programs, in a different sense, artificial?


This is a long lost battle since AI has been a term used to describe far simpler things than that for over 60 years.


Eh, AWS or Azure are by all accounts not aerosols consisting of a visible mass of miniature liquid droplets, yet we call them Cloud.


I like to equate it to lossy compression.


Autoencoders really hit this home for me: you are trying to find an efficient compressed representation of the dataset, and the most efficient way to do that hopefully ends up learning useful rules about the data.
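
A toy illustration of that framing (a minimal linear autoencoder in plain numpy, my own example rather than anything from the thread): 8-dimensional data that secretly lives on a 2-d subspace gets squeezed through a 2-number code, and minimising reconstruction error pushes the code to discover that structure, so the reconstruction MSE drops as training proceeds.

    import numpy as np

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 2))            # the hidden 2-d structure
    X = latent @ rng.normal(size=(2, 8))          # observed 8-d data

    W_enc = rng.normal(scale=0.3, size=(8, 2))    # encoder: 8 numbers -> 2
    W_dec = rng.normal(scale=0.3, size=(2, 8))    # decoder: 2 numbers -> 8
    lr = 0.02

    for step in range(5001):
        code = X @ W_enc                          # the lossy "compressed notation"
        X_hat = code @ W_dec                      # reconstruction from the code
        err = X_hat - X
        if step % 2500 == 0:
            print(f"step {step}: reconstruction MSE = {np.mean(err**2):.4f}")
        grad_dec = (code.T @ err) / len(X)        # gradients of the squared
        grad_enc = (X.T @ (err @ W_dec.T)) / len(X)  # error, up to a constant
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc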


Compression is intelligence[1] - which goes to point out exactly how misguided the OP's attempted distinction between ML and AI is.

[1] http://prize.hutter1.net/#motiv


Also very confusing, they misspelled "Bad" right at the outset. (Though I do find that can be a slightly useful signal about the sort of reasoning to be expected and how much attention to pay to it)


I'm surprised the article doesn't mention Dreyfus and his infamous What Computers Can't Do: The Limits of Artificial Intelligence, especially given the bold title.

For me, GOFAI was simply the earliest attempts at experimenting with the Physical Symbol System Hypothesis: A physical symbol system has the necessary and sufficient means for general intelligent action - Newell & Simon[0]

And like any hypothesis, all we have to do is falsify it...

[0] https://en.wikipedia.org/wiki/Physical_symbol_system#Argumen...


I wonder how long it will take until you see a lot of AI-assisted services pop up. There is a lot of low-budget work that's constantly in demand, from thousands of flyers for events to small websites for independent businesses.

I think it might be just around the corner, a few years maybe.

I imagine a consultant interacting with an AI-powered solution to rapidly create a unique website, with text generated from bullet points and a design generated from a description and a short sketch. Those generated designs could be refined based on feedback from the client until a compromise is reached. I think it can be really low-cost.


Why are deep neural networks not considered statistical ML?


If you can do good old fashioned AI, you most definitely can do new fangled AI. In fact, I'm almost certain you'll have a better understanding of the fundamentals.

At the heart of AI is mathematics, and that will never change.


Why do the eyes in the generated images always look a little off? Most facial features usually appear photorealistic to me, but the eyes always have a little smudge or something in them that gives them away.


https://blog.oimo.io/2022/10/21/human-or-ai-walkthrough/

This is a blog post that investigates AI-generated eyes in particular and how to distinguish them from eyes drawn by human artists.

> However, in AI painting, the coloring of the left and right eyes and the way highlights are added will almost certainly differ. A human understands that "the left and right eyes have the same physical shape and sit in the same situation, so naturally there is consistency between them." The AI doesn't grasp that reasoning; its notion is closer to "I don't really know what an eye is, but it's something like this, placed around here, isn't it?" The result still looks enough like eyes for humans to read them as eyes, but the details contain many defects.

> The most distinctive tell is a highlight that melts into the pupil and breaks its shape. Humans know that "first there is the eyeball, the pupil sits within it, and then the surrounding light reflects off it to form a gloss," so it is obvious that a highlight should neither block part of the pupil nor deform the pupil itself. An AI that learns only by looking at finished illustrations never recognises the logical relationship between the whites of the eyes, the pupils, and the highlights - it can't, because it was never given that as data.

> Unnatural deformation of the pupil is another piece of evidence. Humans know that "the pupil is originally a perfect circle," but an AI trained only on completed illustrations doesn't know the pupil's original shape, so such errors occur.

> Another feature of AI drawings is that they often subtly change the color of the left and right eyes. Of course, there are characters deliberately designed with different eye colors (heterochromia), but in most cases such characters are clearly recognisable as having different colors. The AI tell is colors that look the same at first glance but turn out to be different on closer inspection.

> However, since such characters do exist, this alone is not a decisive criterion - and eye color naturally changes with the surrounding environment, so be careful not to misjudge.


Maybe because the training data contain a lot of bad photos with the red-eye effect.


I think Hofstadter's book GEB is a whirlwind tour of cool ideas in GOFAI -- that concludes that it's all hopeless for AI because of the liar paradox.

Written in 1979.


I feel like the article is building up to some interesting points and then it just ends.


Is this written by GPT-3 again?


Is deep blue really considered AI? I thought that was more of a brute force algo?


Well, it's definitely artificial, and looking at how it plays chess it is arguably a form of intelligence as well. Implementation details are secondary.

Also, why can't a brute-force algorithm be AI? Machine learning is arguably brute force too; a neural network just brute-forces some realistic sentences by doing a bunch of matrix multiplications, if you get down to it.



