AI startup claims to automate app making but actually just uses humans (theverge.com)
410 points by smohnot on Aug 15, 2019 | 289 comments



For all those people asking why the VCs did not catch this: what sort of diligence would you do?

I don't know the company or the details. Assume two scenarios:

1. The target company is willing to lie, fabricate code, mix in tensorflow etc.

2. The company will not outright lie, and will answer honestly. However, they are very optimistic about their chances, and about their ability to deliver some sort of AI-enabled solution.

Right now they have -- let's hypothesize -- some sort of funnel and they route bits of code to different developers. They think they will replace some of it. They are using various AI libraries.

Suppose you believe that even if the AI won't eventually code the whole app from scratch, it will make huge strides in certain areas that we don't even know about. These strides will dramatically reduce the cost of making an app (eventually, you believe). Suppose you think that this company is basically an exploration of those areas.

In other words, be generous to the diligence undertaker.

Now, how would you know? What steps would you take that you suspect these people did not?

(Because this is the internet and no one knows for sure: this is a real question, and not a rhetorical attack on people asking why VCs were "tricked")


>Now, how would you know? What steps would you take that you suspect these people did not?

Hire two or three ML people for the VC firm and have them audit the company's code. Anyone with some engineering experience will be able to tell whether the company is just showing you random tensorflow code or has actual data and a codebase.

With all these blockchain and AI mumbo-jumbo companies it would probably be a good idea to have more tech workers around rather than just business types.


Why would a company allow a VC firm that they haven't received money from to audit their code? Seems like a recipe for disaster.


Investments go through "due diligence" between signing a term sheet and closing. During this time, the VC gets to look at the company's financials, legal liabilities, and all the other stuff that they don't get to look at before then.

On the engineering side, it's common for investors to ask for a list of software vendors and open source licenses. It would seem pretty legit to me for an investor to ask an outside firm to audit whether a product's code matches what they claim during the pitch.


for the same reason any company accepts due diligence: they won't get the investment if they don't accept.

Of course all of these are negotiable and not all investors do their due diligence, but it is in a well-performing VC's interest to do so.


I don’t know how common it is.

All I know is that I had a short gig last year to do a code review on a potential acquisition, so this does happen at least sometimes.

This was for a much smaller product, though.


That is scenario #1. What about scenario #2?


Although I haven't quite cured cancer yet, I'm optimistic that if you give me a billion dollars I'll find a cure for cancer. So can I please have a billion dollars?


Sometimes the answer has to be yes, right? Otherwise how can you ever do something expensive which hasn't been done before?


Good VCs almost never fund the idea, but the team. So, if Atul Gawande comes up and asks that question, he most surely will get it.


> very optimistic about their chances, and about their ability

this is the very definition of due diligence...you don't take their opinions as fact. to answer how:

* meet with their senior engineers

* review code

* talk to former colleagues

* talk to former professors

* review any academic papers, blogs, github repos

* talk to customers or MVP users

If you don't have the first hand ability to directly evaluate their AI technology, or an extended network who can assist, you should not be investing in AI startups.


well, if they aren't lying and the investors have an accurate picture of what the company is actually up to, then there's no problem; that's how venture capital is supposed to work. VC investment is always risky.

But there's an increasingly large number of companies that are purely selling snake oil, way beyond any sort of grey area. You just need to look at the infosec scene.


I feel that artificial artificial intelligence / fake-it-until-you-make-it is a valid business model, unless you obfuscate your progress to investors.

If you can either hire an AWS cluster or an army of cheap human contractors for the same money, and your AI-process relies more on common sense over easy calculations, why would you not hire the human contractors?

They are basically taking Searle's Chinese Room, but instead of a robot, they are filling it with necktops/meatbags. If it works, it works. Why reverse the argument? The room is filled with humans, and not a robot, so it can't be (artificially) intelligent? Even if the humans do nothing more than following a tree-script?

I would ask them to write out/sketch out a complete project, from inception to code delivery: which steps require humans and which steps already have automation. They should already have this. If it seems suspect, walk through it in person.

Remember, the AI hype is a market for suckers. VCs who lose their minds over the AI hype, while they were level-headed when dealing with traditional software companies, deserve this.

From what it sounds like, it is a traditional IT outsourcing (sweatshop) company, with more focus on automation and structuring the process. If they can unify software building blocks (pagination, REST api's, user profiles, ...) and better match experts with glueing these building blocks together, they can be very efficient and low-margin.

Forward-thinking: In 10-20 years, there will be an outsourcing company that is highly reliant on decision and data science. Funding and market fit decides if it is this AI startup or another.


> The purpose of AGI is twofold: first to improve the efficiency of organizations of humans by facilitating communication and access to information, and second, to automate the economy with respect to those functions that require human labor.

> With respect to communication, the goal is to make information easy to find and publish. ... When enough narrowly intelligent experts are added to the network, you should not care (or may prefer) that your conversation be with machines rather than humans.

http://mattmahoney.net/agi2.html


> For all those people asking why the VCs did not catch this: what sort of diligence would you do?

I mean, it claims to do 80% of the work to create an app in an hour. I'd have them demonstrate that - "OK, have it generate one for this random concept I just came up with" - and provide the resulting code for an independent developer to look at.

If the answer is "that'll take a week", there's your tell.


> I mean, it claims to do 80% of the work to create an app in an hour

No, it claims that it will do that, not that it does do that. If it was “does”, yes, the VC would be able to verify it. If it is “will”—and that's what they are seeking funding to build out—it's a lot harder to verify.


Disclaimer - I'm a VP E at Engineer.ai

You're more or less correct in your understanding, we're trying to get to 80%, and are certainly not there yet. The other thing to keep in mind is that there's a lot more that goes into developing software than just code.

We've already been able to automate areas such as selecting the optimal creators (developers, QAs, designers) for a given project from our capacity network; pricing out and estimating timelines for a project (something that takes our competitors 2-3 weeks); onboarding and evaluating engineers on our platform; and predicting, arbitraging and scaling cloud infrastructure for our clients' projects, along with a bunch of other areas.

There's definitely a long way for us to go; however, we have already been able to show proven success in these problem areas.


but how is the accuracy of your quicker timeline estimates? being faster isn't the only important factor.

however i disagree with your third paragraph: you have not solved any problems in the problem area you are raising money for, which is AI-created software. best i can tell, based on your comments you are just streamlining onboarding, hiring and estimating. No offense, but none of this has anything to do with AI, IMO. Just a glorified project management and outsourcing company at this point.


If it claims it will do that, it sounds like they should be raising an Angel or Seed round, not a massive A round.

They raised $29.5 million in a Series A round. That sounds like find-product-market-fit money, possibly a bit of growth money -- NOT pre-product money. You don't need to raise $30M to go from pre-product to product.

Swiss VC firm Lakestar and Singapore’s Jungle Ventures led the financing, with participation from Softbank’s DeepCore. I'm surprised this would get past them.


> If it claims it will do that, it sounds like they should be raising an Angel or Seed round, not a massive A round.

That is what they claim, though they claim it's 80% done, too (they seem to like “80%” a lot.)


That is a good one. They should do that.

But, as I understand the claim, it doesn't do 80% of the work; they are 80% of the way there. Even if you can easily see that they are actually 10% of the way there, and this is just optimism/hyperbole, you may think the approach has merit.


> But, as I understand the claim, it doesn't do 80% of the work but are 80% of the way there.

No, they claim both that it will do 80% of the work to build the app within an hour and that they are 80% done with the app. (Both claims being 80% makes it easy to mistake them for a single claim, but both are stated separately in the article.)

Tangentially, the two 80% claims together seem to me to pretty forcefully bring to mind the 80/20 rule even before considering whether the claims are completely fraudulent.


For #2, ask them to give a process flow of how an app goes from customer request to delivery. Drill down on the differences they claim between their approach and "standard development" or their competitors. Clarify which differences are already realized, and which are optimistic projections.


Thanks for answering. I think this would be a sound approach.

I will point out that different reasonable people -- let's assume yourself and myself -- taking this approach can come to different conclusions as to whether it is worth investing.


It seems like VCs make their money by hoping someone dumber than them is willing to buy out the business. I feel as though generating revenue or sustainable business models is secondary to being able to run massive losses, convince someone else (or the market) that profitability is just a few quarters away so that they buy the business at an inflated price, and, when it collapses, use it as a tax write-off.


VCs don't care. It's expected that some investments will fail one way or the other. The VC approach is to invest in 10 startups hoping one will hit it big.


if the take is "VCs expect startups to fail so they don't do due diligence..."

I mean you can already tell how ridiculous that is just by laying it out


They do do due diligence. I meant they don't care if the due diligence turns out to be wrong because it's part of the risk profile of the investment. This was a response to GP's question of how could VC's due diligence miss the mark so badly.


It's not an unsolvable problem. Software is much easier to read than to write. It's much easier to see if something works than to make it yourself.


>Software is much easier to read than to write

If you mean software is easier to use than create, sure. But if you mean it's easier to understand existing code than write new code, countless rewrites suggest it's not so simple.


Yes, but let's say they are just exploring the problem but they haven't solved it.

You can see that they are breaking the problem into various cost estimates, and other features, and just running a simple regression to figure out which features they should target for automation.
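
A minimal sketch of what that might look like (all feature names and numbers here are hypothetical, just to illustrate the idea):

  # Hypothetical: regress past project cost against coarse features,
  # then see which features carry the most weight.
  import numpy as np
  from sklearn.linear_model import LinearRegression

  # Each row: [num_screens, num_endpoints, has_auth, has_payments]
  X = np.array([[5, 3, 1, 0],
                [12, 9, 1, 1],
                [3, 2, 0, 0],
                [8, 6, 1, 1]])
  y = np.array([12000, 48000, 6000, 30000])  # past project costs, $

  model = LinearRegression().fit(X, y)
  for name, w in zip(["screens", "endpoints", "auth", "payments"], model.coef_):
      print(name, round(w))  # biggest weights = best automation targets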

The problem with ML is that you don't know now if it will work. You just have an intuition that, eventually one of these areas will be able to be automated.

You notice that their approach to the front end is to make various modules (log in etc) that the AI can automatically deploy in various starter packages.

Kind of simplistic at the moment, but maybe it will yield something?


You have not read much Perl, or J for that matter, then :)

Of course there is plenty of unreadable code in any language.

Javascript after Babel and Webpack is done with it is pretty near unreadable as well.

In fact pretty much most autogenerated code is horribly annoying for humans to read.


Not quite sure that that's even true


Do a quick audit. Get a technical person to submit a sample, and find out what their AI returns. The result to examine should be almost instant.


You'd be surprised. Investor tech due diligence is often very, very light. More than likely nobody needed to be tricked.


if the company was lying to VC's, is it possible for them to recoup their investments outside of a civil lawsuit? I'm imagining some sort of criminal fraud investigation could happen, but don't really know.


>if the company was lying to VC's, is it possible for them to recoup their investments outside of a civil lawsuit?

It's all a game. The founders are friends with VCs, then when they fail out of the startups, they join the VCs.

From my perspective it is all a game against the foolish and greedy LPs.


> Duggal “was telling investors that Engineer.ai was 80% done with developing a product that, in truth, he had barely even begun to develop.”

Due diligence? I mean, I understand VCs are not the smartest bunch but if you are investing $30M, please do the due.


Also HN predicted the fraud when it was launched here 9 months ago https://news.ycombinator.com/item?id=18391280



> I understand VCs are not the smartest bunch but if you are investing $30M, please do the due.

Interesting how nonchalantly you mentioned that.

It wasn't the case with me. The Elizabeth Holmes/Theranos case opened my eyes to the fact that there exists a significant number of people who hold significant capital (millions) and are, hm, not the smartest bunch, as you politely put it.

It still somewhat amazes me. You'd think these people were careful, considerate, etc.


>You'd think these people were careful, considerate, etc.

I'm sure that on average, they're capable, intelligent people. But they're also a product of "right place, right time": many of them were "nerds" who were of the right age to get ahead of the internet and become filthy rich.

Now, many of them believe they have insight to offer on everything, from politics to philosophy. PG has complained about this on Twitter: why don't we listen to these successful people on other topics?

Because they aren't as smart as they think they are.


VCs use portfolio theory. Quantity over quality. Due diligence takes too much time, effort and money that can be better spent just getting into another deal.


>> VCs use portfolio theory.

I don't buy this argument. I used to help manage a large portfolio. Portfolio theory does not mean that you can put in garbage and magically get more than garbage (actually, it did with CDOs, since they were tranched, but even that ended up tragic, if you recall 2008).

Portfolio theory, especially with A-round and beyond VC where your portfolios are smaller (~15 to 30 entities), requires due diligence.


You don't know it's garbage. Disqualification is easier than qualification. Funds say "no" to the obviously bad or incompatible. Whether something is a "yes" takes research and is never certain.

The entire game is picking winners, so how much are you going to dedicate to predicting that (which is massively unpredictable) vs. just investing in another shot that may be a winner?


an apparatus of justification

when i played counter strike we called it spray and pray

i should call it the portfolio theory of ballistic delivery


Remind me to register Pseudo Venture Capital as a trade name... PVC has a nice ring.


You can get many many nice PVC rings worth more than that startup.


Must be easy money, then.


I wrote some details on why VCs do or don't do tech DD:

https://medium.com/startup-grind/technology-due-diligence-or...

> please do the due.

In conclusion... it's not your money, so why do you care?


Literally none of the news articles on HN have anything to do with you personally. Why do you read or comment on them?


Couple of comments:

1. Fake it 'til you make it is pretty accepted in the startup world. It's only a problem if you don't actually make it. If you do, then you're a hero---even if you made wildly unrealistic projections initially [and got lucky]. It's kind of unfair, but nobody said life is fair :)

2. Most software people (like me) assume that due diligence goes deep into software. I've been through DDs at several companies, including my own startup: it's not that deep. I would say growth metrics, financials, legal structure, executive team is more important.

3. If you haven't read the Theranos story, read it. It's a good example of what can happen in the extreme edge case.


"Fake it til you make it" should not extend as far as lying to your investors.

If "fake" for this company meant that they told customers there was AI and there wasn't, no big deal. Customers agree to a service at a certain price. Why do they care how the company accomplishes it?

Investors, however, do care about whether cost-saving "AI" works today vs. in 2045.


Ha! This company tried to recruit me a little bit ago. The CTO walked me through the business model, and it was pretty obvious they were just a typical agency. I pointed that out, and he got defensive and tried changing subjects.

In their defense, there is a slight twist in that they subcontract to hundreds of other agencies when those agencies have additional capacity. Essentially, they arbitrage on that.

But, yeah, the pitch that they use AI to build apps -- it's pretty ridiculous. They don't. Even with a very open mind to that phrasing, it's still a huge stretch.


Disclaimer - I'm a VP E at Engineer.ai

To clarify, while we intend to use AI to solve a variety of different problems, we're not using it for actual code synthesis (ie. building apps). Instead we are leveraging code reusability and programmatic stitching/merging for our software assembly line.

In addition to that, we are leveraging various AI/ML techniques throughout the rest of the product development lifecycle, for areas such as pricing/specing/ideation, infrastructure management/scalability, code reusability itself and matching, creator (developer/QA/design) resource matching, sequencing and dependency prioritization, and more.


Yeah, none of that sounds like AI. It sounds like standard features of IDEs and PaaS. I can't imagine you have a programmatic way to save much time on pricing/specing/ideation because machines can't do that yet.

Also, the clear message of the company was "AI writing code that would otherwise be written by humans".

Again, would strongly suggest you stop posting anything about this situation without consulting a lawyer. Based on your HN posts, you can't claim ignorance anymore.


I could imagine how their conversations with investors are going.

“When we said we use AI we meant An Indian”


You just made me spit protein shake onto my keyboard. You gonna buy me a new keyboard?


As someone who used to work at one of these places: shocking.

It pretty much always is this way. They pretend it is AI, then when it comes out that it is pretty much all humans, they pivot to admitting it is "human-assisted".

The humans were truly creating data that was being fed back in, that wasn't a lie. Engineers would have to poke at the bot a bit to get it out of corners it would get itself into occasionally.

The big issue is the VC nature of the business. You are fighting a shot clock on an extremely hard problem. So you have to rush things out to get to the next step, then realize at the next step all of the data you collected, oops, can't be used because there was a small issue.

Or maybe they realize a model was inaccurate and has to be rebuilt.

I truly don't think a VC-funded true AI company is possible, especially for hard and fairly unbounded problems (speech is one thing, engineering is just... that's insane).

If someone made a sustainable AI company that could run infinitely, that company would have a huge shot due to that financial position.


You could call it “artificial artificial intelligence”.


Yup, for many years that was Amazon Mechanical Turk's actual marketing slogan.

I haven't seen them use it lately, but I might have missed it somewhere.


I recently wrote two blog posts that touch on this. I honestly think many people cannot tell real automation from "a box full of little elves with a tech interface." (I often compare it to the MIB2 scene where Will Smith opens the automatic mail sorting machine and reveals a multi-armed alien rapidly flinging mail, not robotic parts.)

It's made me less aggravated with certain things to realize that. It also makes me wonder whether founders are intentionally being deceptive or just unclear where to draw that line themselves.

How much AI inside the box do you need to qualify as an AI company when advertising what you do and wooing VC money? I bet some people honestly don't know and some of those people may be in decision-making positions at such companies.

Serious tech people may be clear on that, but most companies involve more than just tech people. If your PR people don't really get it and your tech people don't have adequate power to insist "You cannot market the company this way," then it will get sorted out in ugly headlines and court cases and the like.


I think people come up with the idea of marketing things as having AI behind them, before the implementation is fully realized. Once they have funding and employees, they can't exactly back down. So they have to put humans behind the solution as a stopgap. In their minds, it's temporary: they're just gathering more data, they have real paying customers that they want to keep until their solution is ready, etc. The little lie becomes a big lie and sooner or later it will blow up for a lot of companies.

Uber's house of cards is a very transparent example, but there are many others who don't even disclose that humans are at the wheel.


On the upside, it means that dystopian dreams of automation taking all our jobs and creating an 80% permanent unemployment rate are laughable.

There will be plenty of paid tasks for people. They will just be online and remote, and we will need to sort out how to make this make financial sense for all involved parties so it doesn't turn into a permanent underclass.


"The company claims its AI tools are “human-assisted,” and that it provides a service that will help a customer make more than 80 percent of a mobile app from scratch in about an hour"

By the 80/20 rule, that would no doubt be the 80 percent that takes only 20 percent of the time to write; the remaining 20 percent that the tools can't do is what takes 80 percent of the time to write.
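
To make that concrete (assuming the classic split, with T the total effort to build the app):

  tool-built 80% of the app       ~ 0.2 * T   (the easy bulk)
  remaining 20% the tool can't    ~ 0.8 * T   (the hard part)

So "80% of an app in an hour" could still leave roughly 80% of the actual work on the table.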


The majority of AI startups start with a manual approach to generate a training data set for future algorithms...

Plus it is a faster way to validate demand for a given business model.


Are there any proven examples of a company doing this (starting with a manual approach), receiving investment, and eventually developing a working AI product?


Depends on how you define AI. If you're talking hands-off, learned-from-scratch ML and deep learning, I'm not sure. If you're talking "used a human to learn the steps and then slowly automated those steps", then you're describing basically every company that's ever existed.


>If you're talking "used a human to learn the steps and then slowly automated those steps", then you're describing basically every company that's ever existed.

So, obviously not that definition then....


That's a pretty weak definition then. Might be your issue right there.


There was a fairly hyped scheduling assistant which had human intervention to start...

Ah here [1] it is, X.ai.

[1] - https://news.ycombinator.com/item?id=11520681

* Interestingly, searching on Google for “x.ai human” led me back to the HN discussion as the 4th link! I’m seeing HN discussions surface in organic search results a lot more lately.


That's likely because your Google profile ranks HN threads as relevant to what you're searching for. Try using DDG for a less biased search result.


This did not occur to me. This would be mildly terrifying if true, except Incognito mode returns the same results.


The hn link is 5th result for me on google, but that link is the bottom of the second page on ddg (20th result I think?).

I think google is following you in your incognito mode!


Does incognito mode change your IP address? I wouldn't doubt google still "knows" it's you


seems like many self-driving car startups follow this model?


There is no such thing as a self-driving car, and there won't be for at least 20 more years.


thanks! that's clearly what the above comment was asking...


I think demand for VC-subsidised labor and real estate will always be high.


[flagged]


That second paragraph is unkind, unnecessary, and unwelcome.


So their founders should be charged with securities fraud and go to jail if they do not tell the investors that that is what they are doing.


One thing I've noticed in fundraising is that many potential investors almost expect you to have some AI-driven solution.

Many don't even know what AI is, and wouldn't be able to sniff out bullshit no matter how much due diligence is involved. Dumb money is flowing in, as long as you have a great pitch and a sleek presentation.


Any sufficiently advanced statistical model is indistinguishable from AI, at least for the purpose of VC dollars.


Smells like Theranos. It's surprising how easy it is to fool VCs these days.


Other than the scale of the funding and it being medical-related, which were the wild parts of the Theranos scandal.


At least it can't literally kill people…


Stress kills people


That's true of any job, though.


Contrarian view. Read the article top to bottom - there is no fraud here. This is exactly how it should be done.

VCs don't know shit about AI and you can't expect them to.

Anyone building a cutting edge AI product SHOULD NOT build it before selling it.

First use humans to build/sell the product and then in parallel train the AI to take over. Often the training phase is best done using the human taskers.

The CEO - 'Sachin Dev Duggal' is doing it exactly right. Anyone claiming otherwise, including the journalist who wrote this post, don't know what they are talking about.


> Anyone building a cutting edge AI product SHOULD NOT build it before selling it.

If they are selling a service and AI is part of the black-box implementation, sure.

If “its being done automatically by a machine” is your selling point, and you haven't built a product that does that when you sell the product, it's fraud, pure and simple.


If we are talking pure and simple, ALL business takes what you would call 'fraud' to get there.


I wouldn’t see it as a problem if human actions were systematically recorded into a structured dataset to be used as training data.

But it seems from the article that the labor is not used for this purpose at all.


Speaking more generally, I do think it is a systemic problem in the world of AI that we have to rely so much on human-labeled data, often done by low-paid workers in other countries (like Amazon's Mechanical Turk) or harvested off friction deliberately inserted into the human experience (like CAPTCHA).

The AI promise was that eventually the need for human labeling would end, but the curve currently is going in the opposite direction and it's reasonable to question whether it will ever reverse.


So, question:

Why is this fraud, but Uber isn't?

This company claims they're using humans to build apps while they develop an AI platform out of hand-wavium.

Uber claims they're using humans to drive cars while they develop self-driving cars out of hand-wavium.

Seems like the same model to me.


Yeah. This gives me about 0 pause.

In my personal opinion, 'AI' at this point is about augmentation of human action to reduce costs (time, materials, human attention, compute, etc), and actually, if you know what you're doing, it works and can make you money.

My group works extremely heavily in this space. We use a combination of human annotation and ML to speed up human annotation and improve the products of the ML component. Rinse, wash hands, recur until 95% of predictions are 95% accurate or better. Use ML to find the 5% of predictions that aren't up to snuff and lay hands on them (this is the part where you have to pay people). There is nothing shameful about including humans in the process.
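
For anyone curious, the general pattern is roughly this (a sketch of the loop, not our actual pipeline; the model, threshold and stubs are placeholders):

  import random

  CONF = 0.95  # illustrative confidence threshold

  def predict(model, x):       # placeholder: returns (label, confidence)
      return "label", random.random()

  def human_annotate(x):       # placeholder: the part you pay people for
      return "gold_label"

  def cycle(model, pool):
      labeled = []
      for x in pool:
          label, conf = predict(model, x)
          if conf >= CONF:
              labeled.append((x, label))              # trust the model
          else:
              labeled.append((x, human_annotate(x)))  # lay hands on the rest
      return labeled  # retrain on this, rinse, recur

  print(len(cycle(None, range(100))))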


Well, it just goes to show you, it's always something — if it ain't one thing, it's another.


Truth, and the fact that one is about goals vs. what they are currently doing. Having unrealistic goals isn't fraud if everything is disclosed properly, but making false claims about what you have now is.

Claiming that you plan on researching the cure for cancer when you flunked high school biology and intend to do your research by poring over ancient books would be an extraordinarily stupid investment, but if you applied the money towards said fool's errand there would be no fraud.

Claiming you have invented robots when they are really just metal suits with hired people inside would be fraud.


LOL, obvious #MagicalPixieDust peddler is obvious. Real AI is currently three-to-eight years away, just as it has been for the last 40 years. They shoulda just said it uses “computers”.

In the meantime, you know what does work here and now? Building up a domain-specific language to the level of that domain’s expert users, empowering those users to tell their machines what they want without requiring a CS degree to do it.

Small steps make Progress.


Don’t they all?


this is the right response.

if you see an early stage company using "AI", then assume they are manually doing most of the work right now.

They may have a clever way of making it smart in the future


But I'm using a linear regression, isn't that AI!?


Disclaimer - I'm a VP E at Engineer.ai

We actually wrote a blog post a little while ago that might answer a lot of the questions I'm seeing here: https://blog.engineer.ai/a-little-bit-about-ai-and-more-stra...


This post doesn't clear much up. The things you describe that are done by AI sound like project bootstrappers, libraries, or code-gen (in an IDE). None of those require "AI".

I just ran a tool that bootstrapped most of a CRUD app for me. Was it AI? No, because the program I ran didn't do any app-specific coding.

My honest advice is to talk to a lawyer and get this company off your resume ASAP.


I like where this is going. Almost daily now, we're seeing reports of "AI startups/companies/products/features" getting unmasked. Technical people knew it all along, but corporate-speak, prefabricated demos, half-baked products and puff pieces were slowly inflating that bubble. Glad it's bursting.


If I were using that company & found out after the fact that they were mostly people, I might feel a little misled, but I also kind of wouldn't care. AI is a hot buzzword, but what I really care about is can I input resources (time, money, unpolished diamonds, whatever) in one end of your black box and get predictable results out the other end. If the answer is yes, do whatever you want (in an ethical manner). Whatever you're building, whether it's powered by people, software, IBM Watson, or free range chickens pecking buttons for treats, I'm happy if it works at a price I care to pay.

Until we've truly built self-replicating machines, I just assume whatever you're selling me requires a lot of people to stay competitive anyway. There's no farm-to-table AI raised by AI farmers yet.


Any language or system sufficiently detailed to accurately describe the steps necessary to solve the problem turns into a programming language.

A very large number of companies have tried to automate software development with little success.

What is supposed to make these folks special?


Probably the AI part is in the configurator, at most spec generation. Development is still done by humans.

https://imgur.com/a/hlsALdj

So a fancy new SAP but with cheap consultants.


probably automated the selection of templates in Android Studio and Xcode, lol.


I'm pretty sure we already all knew they were using humans 9 months ago. Take a look at the comments here: https://news.ycombinator.com/item?id=18391280

It seems like they were fairly explicit about it, so I'm not sure if the outrage is justified. komali2 even noted explicitly, "There doesn't appear to be AI involved. A very good business model, but no AI."


VC funding needs to stop. It's a complete cancer on the software industry. All this money going towards half-baked promises that are completely overvalued, only to fund companies for decades that never turn a profit.

Maybe I'll hire an animator or something and go to VC firms and ask them for money by showing them an animation of a new flashy product I've never designed. Better than working an honest living it seems.


A similar approach is used by a lot of self-proclaimed self-driving car companies. They have a driver and an engineer in the front seats but can't stop themselves from saying they have self-driving cars on the streets :-) .

Also, the same pattern with Cloud-hosted companies. It might be true these days, but back in the day a lot of them were claiming to be hosted in the Cloud to look cool when actually they were using colo data centers.


I was going to register "soylent.ai" and put up a roll of shame, but the .ai registrars charge an arm and a leg. Perhaps someone else will.


so if humans do it, is it called 'Organic Intelligence'? which could then be called 'Artisanal Intelligence', aka AI.


> The company was sued earlier this year by its chief business officer, Robert Holdheim, who claims the company is exaggerating its AI abilities to get the funding it needed to actually work on the technology. According to Holdheim, Duggal “was telling investors that Engineer.ai was 80% done with developing a product that, in truth, he had barely even begun to develop.”

Ouch.


This reminds me of a couple of KYC companies, the ones that help you check a user's passport and other docs.

They talk a lot about algos; then when they demoed it to me, it came out that they actually send my picture to India for a human to look at. There's literally a 24h service with real people there doing the "image recognition".


This is yet another black mark on softbank. Seriously? This should be day 1 of DD for anyone looking into any AI companies.


The HN title is "AI startup that raised $30m claims to automate app making just uses humans". That's a painful and confusing sentence. The real title is "This AI startup claims to automate app making but actually just uses humans". Can someone set a more grammatical and accurate title?


They contacted me about a Software Engineer position 2 months back. I checked the Glassdoor reviews; the majority of them state that the CEO is not a person you would like to work with, and several of them say it is just manual labor, no AI, and that everything they market is fake. I am glad I trusted those reviews.


I was attempting something like this, but the company paying me to do it lost patience around 30 days in, when I was only able to identify widgets visually from mockups using past training data. That was a nice step, but knowing what to do with those widgets contextually got pretty rough.


You had 30 days to do an automated website builder with visual input? How can the stakeholders be that delusional?


I'm happy we are starting to move on from all the AI hype and BS. Hopefully some of that VC money will start shifting to something useful. Mitigating climate change, or educating children, or feeding children ... Nah. Just kidding. VCs just want to pretend they are Tony Stark.


Hmm, I met these folks in Lisbon late last year, at a web conference. They did tell me it’s humans building, and their play was to build MVPs quickly with AI APIs - which I thought was honest and useful. Of course, I’m not a VC :D


> The number of companies which include the .ai top-level domain from the British territory Anguilla has doubled in the last few years, the WSJ reports.

This sounds like some statistics manipulation. Why limit yourself to Anguilla?!


Maybe you're joking, but .ai is the ccTLD for Anguilla. I don't think they meant to imply the companies in question reside there.


I'm always amazed by the funds that companies manage to acquire from VCs without a (developed) product. Having recently read Bad Blood it's horrifying to see how often similar situations arise.


Did they update their web site? Because as it stands now, it’s clear that they‘re a standard agency connecting developers to people who want work done, with some vague stuff about AI helping to match them.


They already tried this kind of thing in the late 18th century:

https://en.wikipedia.org/wiki/The_Turk


At GitStart we use a global pool of devs and mention that upfront.

We have still deployed a ton of models to improve quality and SLAs, but embrace our human nature upfront.

This is bad faith to the extreme.


People are very creative at spinning consulting shops as AI software something or other to get a higher PE. The most prominent example being Palantir.


So they feed their AI with food instead of data


The automation they described sounds like automation of part of project management. Why aren't they selling that? :-D


Reminds me of Kite - AI Autocomplete and Docs for Programmers. Just always s/ai/marketing/


So it's an "I" startup? Almost like you get more than you paid for, haha.


It’s not a lie if you think ‘artificial intelligence’ means ‘pretending to be clever’


Relevant xkcd: https://xkcd.com/2173/

> "Yeah, I trained a neural net to sort the unlabeled photos into categories." [...] Engineering tip: when you do a task by hand, you can technically say you trained a neural net to do it.


They took "fake it till you make it" to another level.


I mean, "do things that don't scale," amirite?


$37k for an app, and it uses AI.

Didn't get through the bullshit test :)


Wizard of Oz prototypes aren't meant to scale...


their investors should have clarified what the "A" in AI stood for -- actual intelligence! :)


Is this securities fraud?


i mean they need to generate a training set first right?



Good grief.

When will people wake up and realize that AI today is just capable of "curve fitting"?

Yes, that is a bit of a simplification. But not far off.

Neural networks depend on back propagation. They are really just another type of optimizer for maximum likelihood, using gradient descent. They work better on high-dimensional, non-linear data than the methods that came before.

But if the function you are attempting to model is non differentiable, neural networks won't help you.
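
For a concrete picture of what "curve fitting by gradient descent" means, here is a one-weight toy (numbers made up):

  import numpy as np

  # Toy "network": y_hat = w * x. Fit w by gradient descent on squared error.
  x = np.array([1.0, 2.0, 3.0, 4.0])
  y = 3.0 * x + np.random.normal(0, 0.1, 4)  # data generated by y ~ 3x

  w, lr = 0.0, 0.01
  for _ in range(500):
      err = w * x - y
      grad = 2 * (err * x).mean()  # d/dw of mean squared error
      w -= lr * grad               # this step is the whole of "learning"
  print(w)  # ~3.0: we fit a curve, nothing more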

They certainly aren't capable of performing magic tricks like writing an app for you.


This is not true at all, and no offense, is a misunderstanding of how NNs can be used in other applications.

For one, what you are modeling itself does not need to be differentiable, only the network itself needs to be.

Second, using neural networks in combination with other techniques for program synthesis is an active area of research currently, and although it is currently at around 50 lines or so, your fundamental assertion here is wrong.

Third, there are a number of ways that deep learning could be leveraged in app development. The easiest way would be to make heuristic-type decisions around UI/UX or to build parts of said GUIs using existing code blocks. This has already been used to some extent in website design (e.g. https://arxiv.org/abs/1705.07962 and https://blog.floydhub.com/turning-design-mockups-into-code-w...). So it's certainly possible that it could be used in conjunction with templates to build common app types.

Now, this startup is clearly not doing that, but that doesn't mean that a) it's impossible to leverage AI for app development or that b) all of deep learning is "just curve fitting".


To be fair, neural program synthesis has been an active area of research for a while and it doesn't seem that it's going to ever take off for good, like machine vision or NLP.

I keep linking to this page:

https://blog.keras.io/the-limitations-of-deep-learning.html

But what Chollet says is still the case. Machine-learning a mapping from arbitrary specifications to programs is many, many times more difficult than classification. Unless someone comes up with a completely new architecture that is for neural program synthesis what CNNs are for vision and LSTMs for sequence learning, and then some, then there's not going to be any big advances in the field.

Source: I study algorithms that learn programs from examples for my PhD, and you need three things for it that neural nets lack: a) generalisation, b) the ability to learn recursive functions and c) higher-order representations (i.e. quantified variables).

Personally, I was very excited with DeepMind's differentiable neural computers, but it seems very hard to train on anything but toy problems.


> I study algorithms that learn programs from examples

What does work?


Simple, narrow stuff that requires expensive experts in AI to keep updating it, along with good hardware. Alternatively, you can pay a cheaper human to do more jobs even better with modern tools that make most of them easy. It's what almost all successful and almost-successful companies do.


> Simple, narrow stuff that requires expensive experts in AI to keep updating it, along with good hardware

This does seem to describe most machine learning.


Inductive Logic Programming works:

https://en.wikipedia.org/wiki/Inductive_logic_programming

(But that wikipedia article is a bit behind the times).


Late edit: Also, Inductive Functional Programming works. Check out Magic Haskeller:

http://nautilus.cs.miyazaki-u.ac.jp/~skata/MagicHaskeller.ht...


> For one, what you are modeling itself does not need to be differentiable, only the network itself needs to be.

No. In some simple cases you can model a non-differentiable function with a differentiable one accurately. But your model is likely not to perform well if the underlying relation between your parameters and observations is non-differentiable, because your model will likely be unstable.
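
A toy example of the simple case (a steep sigmoid standing in for the non-differentiable step function):

  import numpy as np

  step = lambda x: (x > 0).astype(float)             # not differentiable at 0
  smooth = lambda x, k=50: 1 / (1 + np.exp(-k * x))  # differentiable everywhere
  xs = np.linspace(-1, 1, 5)
  print(step(xs), smooth(xs).round(3))  # near-identical away from x=0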

> Second, using neural networks in combination with other techniques for program synthesis is an active area of research currently, and although it is currently at around 50 lines or so, your fundamental assertion here is wrong.

Sorry, but NNs haven't made any significant progress in the realm of program synthesis. I can set up a model with Transformers and BERT that spews out thousands of lines of code that stylistically look correct, but does not compile. NNs aren't the right tool for the job here.

> Third, there are a number of ways that deep learning could be leveraged in app development

That has nothing to do with the original post. There's no AI that's going to write an app for you.


>Sorry, but NNs haven't made any significant progress in the realm of program synthesis. I can set up a model with Transformers and BERT that spews out thousands of lines of code that stylistically look correct, but does not compile. NNs aren't the right tool for the job here.

Di.. did you even bother reading what I wrote? You wouldn't use NNs to directly output code, you'd use it to guide a search process.
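
A toy illustration of "the NN guides the search" (the scorer below is a stub standing in for a learned model; systems like DeepCoder train it to predict which primitives are likely, then let a search do the rest):

  from itertools import product

  # Tiny DSL: a program is a sequence of primitive functions.
  PRIMS = {"inc": lambda v: v + 1, "dbl": lambda v: v * 2, "neg": lambda v: -v}
  examples = [(1, 4), (2, 6)]  # target: f(x) = (x + 1) * 2

  def score(seq):              # stub for the learned prior over programs;
      return -len(seq)         # here it just prefers shorter candidates

  def run(seq, v):
      for name in seq:
          v = PRIMS[name](v)
      return v

  candidates = [seq for depth in range(1, 4)
                for seq in product(PRIMS, repeat=depth)]
  for seq in sorted(candidates, key=score, reverse=True):  # best-scored first
      if all(run(seq, x) == y for x, y in examples):
          print(seq)  # ('inc', 'dbl')
          break

The NN never emits code directly; it just decides which corners of the search space get explored first.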


>> When will people wake up and realize that AI today is just capable of "curve fitting"?

You are (possibly inadvertently) misquoting Judea Pearl and he was talking specifically about deep learning, not "AI":

“All the impressive achievements of deep learning amount to just curve fitting,” he said recently. [1]

"AI today" still means many more techniques and algorithms than deep learning. For example, SAT Solvers, logic programming and theorem provers, classical planning, classical search, adversarial search (MCTS) etc are alll AI techniques that have nothing to do with "curve fitting".

So could we please all not throw about big proclamations about what "AI today" is ("just" or not), without first making sure that we have a thorough understanding of what we are saying?

I thank us all in advance.

____________________

[1] https://www.quantamagazine.org/to-build-truly-intelligent-ma...


> For example, SAT solvers, logic programming and theorem provers, classical planning, classical search, adversarial search (MCTS) etc. are all AI techniques that have nothing to do with "curve fitting".

None of those techniques are new, nor are they fueling the AI hype cycle.

I purposely conflated AI with "deep learning" because it is the source of the hype. And in reality, what most AI startups claim to be using.


Deep learning is not new either. LSTMs are from the 1990s. CNNs are from the 1980's (and note well that most recent successes of deep learning are basically tweaks of these two architectures).

What is new is the hype around deep learning that took off after 2012, and because Google and Facebook decided to champion it.

In any case, as far as I can tell "AI today" is anything that is "AI" and that exists "today". How do you mean "AI today"?

>> I purposely conflated AI with "deep learning" because it is the source of the hype.

I don't understand why you would do that. You are aware that there is hype and that it is increased by misuse of the term AI. And you purposefully misuse the term AI in a way that increases the hype? Why?


At this point, the battle for terminology is lost. What the casual observer is talking about when they talk about "AI" is "Deep Learning" is Curve Fitting.

The war for terminology is more lost than differentiating "Hacker" from "Cracker" when referring to computer security. There's a specialist arena where the distinction is occasionally respected. This is not that forum.

Artificial General Intelligence is so far off that it's not a general conversation topic, it's not even a specialized conversation topic - It's a fantasy conversation topic.


> This is not that forum.

I think "Hacker News" is exactly that forum.


aside: 2012 is the date because people figured out how to use GPUs to do the training, which meant that instead of 3-4 tries at getting the thing to work in a day you could do 30-40. This made things a lot more practical...


yep yep yep, this is very much it, although if you recall it was also around the time when Andrew Ng at Google was doing things with DL on massive amounts of CPU computation too.


Where did I ever claim that deep learning is new? I am fully aware of its history. Before deep learning, research on neural networks dates all the way back to the 1950s.

You are completely missing the point: "deep learning" and AI are mostly synonymous in the current hype cycle, and it began with breakthroughs in deep learning.

> How do you mean "AI today"?

The AI industry that I work in.


> You are completely missing the point

Can you please edit swipes like that out of your comments here? They tend to degrade discussion. If you simply provide correct information, your comments will be stronger and their effect on the thread at large more salutary.

https://news.ycombinator.com/newsguidelines.html


That is hypocritical then. This is not a swipe from op?

> So could we please all not throw about big proclamations about what "AI today" is ("just" or not), without first making sure that we have a thorough understanding of what we are saying?

> I thank us all in advance


It's a matter of degree (https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...). I agree with you that that comment implied that the person being addressed didn't know what they were talking about, which is arguably a swipe and should have been replaced with something with no such insinuation. But "You are completely missing the point" is much worse.

It's easy to understand how a mild swipe provokes a more aggressive one; indeed it's hard to resist being carried by that current, but that's just what the site guidelines ask us all to do: https://news.ycombinator.com/newsguidelines.html.


No, that is not a swipe. That is me being very nice and polite in telling people to make sure they know what they're saying before saying it, followed by an awkward attempt at humour to avoid any misunderstanding of my tone.

Edit: Sorry, I see you are the OP in the thread. Note that I did not aim that specifically at you and I included myself in "us". I understand you may have felt frustrated that I challenged your knowledge of AI but I sincerely think that you could have researched the subject a bit better before stating what you think it is.

Edit II: At the very least you could have tried to talk a bit more about why you think that "AI today is just capable of curve fitting". Making such a strong statement without any attempt to back it up with some kind of explanation (I'm not saying you should reference sources and bring "evidence" or anything, just explain it) comes across as a bit, well, ill-informed. With respect.


I get (and am glad) that you intended to be nice and polite, but I don't think that's how that statement exactly lands with the reader. At least I didn't think so, and certainly not the person it was addressed to. It still carries the connotation that they are uneducated and ignorant on the topic. So does "you could have researched the subject a bit better before stating what you think it is".

If someone's comments seem ignorant or under-researched, the way to address that is not to put them down, however mildly, but to add correct information about the topic. This has the bonus effect that, in the case where they actually do know a lot about the topic but just have a very different view of it, you won't inadvertently insult them. Also, it's worth remembering that if X is the topic, then "the level of someone's knowledge about X" is actually already a step off topic. Stepping off topic can be great when the step is in a curious direction, but definitely not when it's in a provocative direction.


I just saw this. Thanks for the reply and the advice. I'll try to follow it. I think I'm a bit too hot-headed sometimes and I forget to read my own comments as if they were someone else's comments replying to myself. Also, I'm certainly carrying along bad conversation habits from other internet forums that didn't try as hard as HN to cultivate reasonable debate.

I'm actually annoyed at myself about this, so I'm definitely trying to remember to be more careful in my comments. Your level-headed moderation is a great help in that, thanks.

I have to say something though- curiosity is only one side of the coin (the coin being the pursuit of knowledge, I guess). The other half is passion. Passion is what causes heated debate, but it's also what causes people to debate in the first place. I think it's a hard balance to strike and we will all need an adult in the room, to help focus our conversations, for a long time to come. Probably not what you want to hear though :)


> that you could have researched the subject a bit better before stating what you think it is.

You just defended a swipe using another swipe.

Your argument is rooted in terminology. Yes, in academia, AI means more than deep learning. Practically speaking, AI and deep learning are synonymous in startup land. And yes, supervised NN techniques are just curve fitting, and are not practical for program synthesis. Which was the original subject of this monotonous thread.


If you believe that "AI" is used differently in academia and in startups, then making it plain that you use the term in one or the other way could have helped clarify your meaning.

I agree that this thread is dragging on a bit, but it started with a very bold proclamation expressed in strident language criticising peoples' apparent ignorance of the subject- by yourself: "Good grief" and "When will people wake up" rather set the tone of your comment. If you choose to open a conversation like that, with a broadside against "peoples'" ill-informed views I would expect you are prepared to take a bit of criticism regarding the lack of depth of your own views. If not and my criticism has upset you, I apologise, but in that case, maybe you can try to be less provocative in how you express your views in the future, because provocativeness tends to elicit robust reactions.

Edit: In any case I just wanted to say: I get that you're annoyed by our conversation but I'd like to thank you for keeping it civil (if a bit tense) and not resorting to personal attacks. Cheers.


That's a weird thing to tell someone. A person can't say that they're being misunderstood because that degrades a discussion? Degrades how? Are you setting yourself up as supreme judge of the situation? Reading through the back and forth, I agree that OP had some of their statements made into strawmen. Does it degrade to mention that I think they may be in the right? Maybe misworded a bit, yes. But the general point made throughout, I think, is valid. Is me disagreeing with you degrading your sensibilities?


Of course not, but there's a long road from "disagreeing" (which is no problem) to "saying you're being misunderstood" (may be fine, but usually already a step down in discussion quality) to "you're completely missing the point" (an outright swipe). Commenters here should stay in the former zone and avoid the latter. That's not hard to do.


> That's not hard to do.

As a third party, may I suggest that this statement amounts to a 'swipe'?


I don't see why? I say that because it sounds like the kind of thing that's hard to do, but actually isn't.


Well, if you meant it in that straightforward sense, then it was not a swipe.

But I've seen the same statement used in a derogatory manner in the sense of "get this done, it's not difficult [if you're not an idiot]" by a bad manager or two. To me it does carry that other sense.


I don't understand why you pointed out that SAT solvers etc are not new, if you were not comparing them to deep learning, for the purposes of discussing what is "AI now".

AI is most certainly not synonymous with deep learning. It is just people in the industry who do not know anything about AI and who recently jumped on the deep learning bandwagon, who think they are, and people in the tech press who don't have the time to do proper research. I don't see why we need to perpetuate their misconceptions.

Actually, we don't.


>> It is just people in the industry who do not know anything about AI and who recently jumped on the deep learning bandwagon, who think they are,

Yes, I suppose this is not the best way to say what I wanted to say without getting peoples' back up. I'm leaving it as it is since it's already been read a few times from what I can tell, but here's a less rash version.

What I mean is that, because of the tremendous recent success and public exposure of deep learning, many people have become interested in it who do not have a background in AI, or even in computer science, and who therefore enter the field with big gaps in their understanding of what "AI" means. That is my experience anyway.

Well, it's a shame to work in a field and not understand its history, not least the history of what has already been achieved and what has failed, and how, so that one does not have to repeat history. So it's in everyone's interest to avoid making statements with great certainty when this certainty is not backed up by long-term knowledge.

For the record, I'm a newcomer to the field myself. But I have a background in classic AI, specifically logic programming, so I do know the long story.


I don't know that anyone claims that deep learning is the only technology needed for AGI. The biggest proponents will say that it's the missing piece. But a full system will involve lots of ideas including tree search.


>What is new is the hype around deep learning that took off after 2012, and because Google and Facebook decided to champion it.

Have you seen what neural nets are now capable of? Speech synthesis/transcription, voice synthesis, image synthesis/labeling/infill, style transfer, music synthesis, and a host of other classes of optimization problems which have intractable explicit programmatic solutions.

The hype is justified, because ML has finally arrived, thanks primarily to hardware, and secondarily to the wealth of modern open research and the concentrations of leading researchers enabled by funding at Google, Facebook, etc.

The problems being solved by "curve fitting" ML were simply unsolvable by any practical, generalizable means before recently, and the revolution is just getting started.


Yes, I've seen what neural nets are now capable of- they are capable of exactly what they were always capable of, except "now" (in the last few years) we have more data and more compute to train them to actually do it. Says Geoff Hinton [1].

I have also seen what neural nets are incapable of. Specifically, generalisation and reasoning. Says François Chollet of Keras [2].

AI, i.e. the sub-field of computer science research that is called "AI" and that consists of conferences such as AAAI, IJCAI, NeurIPS, etc, and assorted journals, cannot progress on the back of a couple of neural net architectures incapable of generalisation and reasoning. We had reasoning down pat in the '80s. Eventually, the hype cycle will end, the Next Big Thing™ will come around and the hype cycle will start all over again. It's the nature of revolutions, see?

So hold your horses. Deep learning is much more useful for AI researchers who want to publish a paper in one of the big AI conferences, and for the FANG companies who have huge data and compute, than it is to anyone else. Anyone else who wants to do AI will need to wait their turn and hope something else comes around that has reasonable requirements to use, and scales well. Just as the original article suggests.

_________________

[1] http://techjaw.com/2015/06/07/geoffrey-hinton-deep-learning-...

  Geoffrey Hinton: I think it’s mainly because of the amount of computation
  and the amount of data now around but it’s also partly because there have
  been some technical improvements in the algorithms. Particularly in the
  algorithms for doing unsupervised learning where you’re not told what the
  right answer is but the main thing is the computation and the amount of
  data.
[2] https://blog.keras.io/the-limitations-of-deep-learning.html

  Say, for instance, that you could assemble a dataset of hundreds of
  thousands—even millions—of English language descriptions of the features of
  a software product, as written by a product manager, as well as the
  corresponding source code developed by a team of engineers to meet these
  requirements. Even with this data, you could not train a deep learning model
  to simply read a product description and generate the appropriate codebase.
  That's just one example among many. In general, anything that requires
  reasoning—like programming, or applying the scientific method—long-term
  planning, and algorithmic-like data manipulation, is out of reach for deep
  learning models, no matter how much data you throw at them. Even learning a
  sorting algorithm with a deep neural network is tremendously difficult.


> We had reasoning down pat in the '80s.

We had something, but if we really had reasoning "down pat", we would not now be reading an article about someone faking automated app development. Programming is all about reasoning.


That programming needs reasoning doesn't mean that reasoning is the only thing you need if you want to do (automated) programming, or that reasoning is only useful in (automated) programming.

That is relevant to your comment. The work on automated reasoning (or "inference", etc) really started in the '50s with Church and Turing, then reached a peak in the late '80s and '90s with work on automated theorem proving (there was a great big push at the time to solve very hard problems to do with the soundness and completeness of inference procedures, particularly resolution) and is still going on (for example with Constraint Programming and Answer Set Programming etc). The result of this work was logic programming. I'm leaving out all the work on functional programming, which was just another branch of the same tree, if you like, because I don't know it that well, but I'm sure others on this board can complete the picture. Then of course there was all the other classical AI stuff on planning, grammar learning, game playing etc that you can read about in Russell & Norvig.

Now, all this work could potentially be turned to the task of automated programming- but automated programming was never the goal of all that research. There was a lot of work on program synthesis, but that was just another AI sub-field with its own specific goals, that were not the overarching goals of the field as a whole. That is why we don't have automated programming at the push of a button, today: because it was never the main subject of AI research.

Edit: bit of a plug. Like I say in another comment, my PhD is on algorithms that learn logic programs from examples and background knowledge (both of which are also logic programs). That's Inductive Logic Programming. Our stuff works. We can learn recursive programs and even invent sub-programs that are necessary to complete a programming task and that are not provided by the user. We are making big leaps all the time and we're way, way ahead of neural program synthesis and the like. There's also a whole field of Inductive Functional Programming that does the same stuff but with functional programming languages. Automating app development with that sort of technique is mainly a matter of engineering- the research is out there. But, you haven't heard anything about it because the hullaballoo about deep learning is covering everything else up and most people don't even know there is AI outside of deep learning. Hence my comments in this thread (rather obviously).


I think there's significant equivocation going on here regarding the word "reasoning". Automated theorem proving is no more like human reasoning than a steam engine was like John Henry.

All the stuff you talk about is rigid and omits the thing that makes human "reasoning" valuable - context switching.

I think technical people are often blind to this because they don't do very much of it themselves, but it's the fundamental thing that makes people different from machines, and complementary.


Apologies for the confusing terminology. When I say "reasoning" I mean it in the way it's used in logic-based AI where it basically means the automation of propositional and first-order theorem proving. I would say that we do have that down pat, as per my comment.

I'm not sure what you mean by "context switching" but I will agree with you that following rigid inference rules is not how most people think most of the time. However, we do have the ability to think in this way and this way of thinking is very useful for certain problems where we can't just intuitively come up with a good solution. For instance, scientific thinking is of this kind.

Historically, what's really been missing from most attempts at simulating human reasoning is "common sense"- background knowledge about the way the world works. If we could successfully encode even a ten-year-old's worldly knowledge, we could probably build an automated reasoning system that would appear much smarter than a ten-year-old, by dint of it being a) much faster, b) much more accurate and c) much more, well, logical. But we have so far failed to instill common sense in our programs and models, so they remain at best idiot savants; if not simply idiots :/
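
To make that narrow sense of "reasoning" concrete, here is a minimal sketch (plain Python, all names my own invention) of propositional entailment decided by brute-force truth tables:

  from itertools import product

  def entails(premises, conclusion, symbols):
      # premises and conclusion are functions from an assignment to a bool
      for values in product([False, True], repeat=len(symbols)):
          env = dict(zip(symbols, values))
          if all(p(env) for p in premises) and not conclusion(env):
              return False  # found a countermodel
      return True

  # Modus ponens: {P, P -> Q} entails Q
  premises = [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]]
  print(entails(premises, lambda e: e["Q"], ["P", "Q"]))  # True

Real provers use resolution rather than enumeration, but the contract is the same: premises in, guaranteed-sound conclusions out.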


> That is why we don't have automated programming at the push of a button, today: because it was never the main subject of AI research.

I don't think that's the only reason at all. I think it's a much harder problem than you're giving it credit for. Great to hear that you're working on it, though.


Oh, I don't think it's an easy problem. It's just that there's a lot of research that is just begging for some good engineering to turn it into an actual application. The effort wouldn't be trivial and the result would not be fully automated- it would still require a lot of input and much tweaking from software devs. The developers' work would change from writing code to providing examples or coding specifications and then correcting or extending automatically generated programs, so you'd still need actual programmers who actually understand programming to do all this. Personally, I don't find that a bad thing. I like coding and I wouldn't rush to remove so many people's source of income (and source of pride in their work, also).

So perhaps my bad for using the turn of phrase "at the push of a button"- it suggests more automation than what I have in mind.

When I'm done with my PhD I might even consider launching a product :)


"AI" today isn't well defined. Many people mean deep learning. Others mean ML. Others mean even more. More than anything, "AI" is a marketing term, where vague buzzwords thrive.


Yes, I am aware that there is a gigantic feeding frenzy right now and people who have no idea what they are talking about are falling over each other to write "expert" articles where they make a complete mess of what is AI, what is machine learning and what is deep learning.

But, that there are people who don't know what they're talking about is no reason to copy them, or to perpetuate their mistakes.

Do you disagree?


Yes, I disagree. It's not (just) that those people don't know what they're talking about. The term doesn't have a standard usage/definition. Or at least not standard enough to decide whether the traveling salesman problem, linear regression, or a SAT solver are AI.


That AI doesn't have an agreed upon definition doesn't mean that we can say it is whatever we want to say it is!

For example, you may perhaps argue (at a very big stretch) that travelling salesman is an AI problem- that's arguing on the details. But to claim that all of AI is reducible to one sub-sub-field of AI, deep learning? That's just ignoring the fact that there is an entire research field that is commonly called "AI", with conferences and journals that have "AI" in their name, that are not about deep learning, with many thousands of researchers who consider their work "AI" and who do not work with deep learning, not to mention the 50 or so years and the mountain of work that this field considers its own, that also is not about deep learning.


> without first making sure that we have a thorough understanding of what we are saying?

> I thank us all in advance.

Off topic: I've seen this line a few times before, and somehow it always drips with condescension. As though the speaker assumes the understanding of everyone else is borne from laziness and not background.


It's good to be wary of hype, but this comment seems overly down on where we are by focusing on AI today doing 'curve fitting'; sure, supervised learning is at its core that, but when it achieves things like image captioning or super accurate translation, calling it just curve fitting is reductionist and downplays how powerful these techniques are.

Furthermore, the notion that neural networks can only be used as part of supervised learning (and in particular with backpropagation) is totally off. Just to cite a few examples: http://proceedings.mlr.press/v48/taylor16.pdf https://arxiv.org/pdf/1908.01580v1.pdf

And, well, all of deep RL and unsupervised learning is not 'curve fitting'. So yes, even if your 'model' (and really this should be 'objective' or 'loss'- the neural net is the model, but whatever) is non-differentiable, neural nets can indeed help you. Though it's true they can't magically start doing super complicated things like app development yet; the whole point is to understand how to use them and build hybrid systems that play to neural nets' strengths (as with e.g. self driving systems).


A lot of things can be cast as curve fitting! Speech is just a curve of air pressure over time, so a function that takes the curve from a microphone and sends a new curve to a speaker can, if the curves correspond in interesting ways, be a conversational AGI.

Curve fitting in millions of dimensions is qualitatively more interesting than the 2D graph paper exercise people think of.


For me, the bar is not high. If you can just replace the knowledge worker performing more than one standard deviation below the mean, that will be enough for me to say that it is General AI. What is a good test for this?:

In under 4 hours can you have a program where I write something like a very detailed email (maybe I spend 10 minutes trying to craft the email) where I give a direct link to an internal file folder where there is a .pdf file and a .xlsx file. Each file contains names and birthdates among other things. I need the program to combine the data in these files and output it to .xlsx for me, giving me a list of names and birthdates from each file, combined in some sort of coherent manner that I can make sense of.

If you can get a program to do something like that via a detailed email-like directive before lunch, not a bunch of python that constantly breaks every 6 months, that's beating about 30% of the workforce and might as well be General AI.


You will get a system that can do specifically that, or very similar tasks, given a carefully worded email which amounts to an English-like programming language, created by people that cost an order of magnitude more than the people that would have massaged the data into a spreadsheet.

This is actually probably a decent idea but it wouldn't be anything like general AI this year or next.


I agree totally. But I hope you get the drift here. To get to a GAI system, it has to be something like the test provided. You have to go into detail, but it's not something that you'd have to hard code either.

If I wanted the GAI to take in .docx files, or just first names, or to hop over to another directory and find things there, it should be able to do those things too. Basically, think of a task that you would give to a not-too-bright nephew that is in the firm for a summer internship. You'd need to give very clear instructions, but you should be able to coax him into getting some of the busy work done for you.


Anybody who's worked in an office knows that people that ask for things don't give sufficient, consistent, or logical instructions, because otherwise they would be programmers. Also, they just don't know what they want, at least at first. In some places, things are regimented so requests are made formally and kicked back if they are inadequate. But ordinarily people who work in the same office engage in dialogue, and knowing what questions to ask is orders of magnitude harder than processing the surface meaning of the original instructions. Even for a bright person with a lifetime of experience, I don't think it's unreasonable to take a few months to grasp how a business works, and think of how much data that would represent for an AI.


This Jakob Nielsen article is 5+ years old now, but it tends to suggest that someone who could complete that task is actually one standard deviation above average, rather than below: https://www.nngroup.com/articles/computer-skill-levels/


Oh dear.


I don’t want to sound critical, just to provide some information: deep learning models like RoBERTa (a type of BERT; I experimented with Facebook’s just-released large model this morning) can perform anaphora resolution (coreference), answer general questions, score whether two sentences contradict each other, etc. One model solves several very difficult problems whose solutions have evaded hand-coding efforts for decades.

There are experimental sequence models that transform paper text into figures, or joint models that transform figures and text into some code, but you are correct that these are not production ready.
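
For the curious, a minimal sketch of how such a model is queried, assuming the Hugging Face transformers library and its default pretrained question-answering model (downloaded on first run):

  from transformers import pipeline

  qa = pipeline("question-answering")  # loads a default BERT-style model
  result = qa(
      question="Who developed the source code?",
      context="The features were described by a product manager and the "
              "source code was developed by a team of engineers.",
  )
  print(result["answer"])  # expected: "a team of engineers"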


I think I read this paper because someone posted it on HN (or linked to it in a comment):

https://arxiv.org/abs/1907.07355

We are surprised to find that BERT's peak performance of 77% on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them. This analysis informs the construction of an adversarial dataset on which all models achieve random accuracy. Our adversarial dataset provides a more robust assessment of argument comprehension and should be adopted as the standard in future work.

So it seems to be the benchmarks that are flawed and not BERT and friends who are that good at text comprehension. And that is not surprising. It would be surprising if language understanding just arose spontaneously by training a very big net on a very big corpus.


Referencing that paper is a red herring. It's a flaw in one specific dataset, not with BERT or any model. Everything GP posted about BERT models answering questions, performing coreference resolution, etc, is still true and is not at all affected by the flaw in this one dataset (out of many that these models have been tested on). Heck, you can just try out any of these models yourself on completely novel questions you come up with, and see that they work.


>They certainly aren't capable of doing magic tricks like writing an app for you.

It's the money-men that need to realize it. Practitioners know that it's simply rebranded statistical methods. The usage is exactly the same. That said, some of the things these guys offer are useful; but branding it as "AI" is another story.

By the way, is SoftBank going to implode, or what?


I get the backlash against the hype, but to suggest that DL is simply "rebranded statistical methods" is just wrong. Yes, backprop existed before the recent explosion; that doesn't mean there haven't been major leaps forward.


It's not that DL is simply "rebranded statistical methods", it's that the companies with names ending in "dot AI" are simply using "rebranded statistical methods". Lots of money is being poured into anything that calls itself "AI" right now, and a lot of people are starting to roll their eyes when they hear that term.

I work with DL researchers on a daily basis, and I feel like there have been a lot of cool developments, but much of it is not ready for prime time. I would really like to see a list of successful DL deployments in the wild, with some info about accuracy if possible.


I work in NLP and the industry has pretty much completely shifted to DL based approaches (i.e. translation vendors pretty much only use neural engines now, and language modeling and classification are very often neural as well).

There's definitely a lot that's not quite ready for prime time (reinforcement learning comes to mind as something that is particularly promising but not working really yet).

> It's not that DL is simply "rebranded statistical methods", it's that the companies with names ending in "dot AI" are simply using "rebranded statistical methods". Lots of money is being poured into anything that calls itself "AI" right now, and a lot of people are starting to roll their eyes when they hear that term.

This is definitely true - but I took it as a bit different from your original comment.


>Yes, backprop existed before the recent explosion; that doesn't mean there haven't been major leaps forward.

I didn't mean to imply that things were not advancing, only that what we claim is "AI" are processes we've been doing in statistics forever. But, yes, of course the tools and techniques improve.


Softbank implode? Why?


Because they are a massive fund throwing money into every half-baked idea coming out of Silicon Valley?

https://www.zdnet.com/article/softbank-group-looking-to-ride...

That image was copped from an investor presentation. How can anyone take that seriously?


Presumably the OP is referring to the insane amount of money Softbank has invested in the AI space.


> But if the function you are attempting to model is non differentiable, neural networks won't help you.

What? Any classification function is non-differentiable because it can only ever take discrete values. Yet this is a task that neural networks do every day. You can approximate a non-differentiable function with a differentiable one.
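
A minimal numpy sketch of that idea: the labels are discrete, so the true labeling function is non-differentiable, but we fit a smooth logistic surrogate by gradient descent and only threshold at the very end:

  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.normal(size=(200, 2))
  y = (X[:, 0] + X[:, 1] > 0).astype(float)  # discrete 0/1 targets

  w = np.zeros(2)
  for _ in range(500):
      p = 1 / (1 + np.exp(-X @ w))       # smooth surrogate in (0, 1)
      w -= 0.1 * X.T @ (p - y) / len(y)  # gradient of cross-entropy loss
  labels = 1 / (1 + np.exp(-X @ w)) > 0.5  # hard decision only at the end
  print((labels == y).mean())              # accuracy close to 1.0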


That is not what I was referring to. Your relationship (neuron) needs to have a gradient, i.e. it needs to be differentiable, in order for backprop to work.

There are many situations that are not differentiable. Images work well because adjusting color values during backprop can easily be done with a gradient. For example, during backprop it is feasible to increase the level of blue by 10% by increasing its RGB value. Likewise for 3%, 15%, -7%, etc.

For other types of data, a gradient may not exist.


> For other types of data, a gradient may not exist.

You don't calculate a gradient on the data, you calculate a gradient on the parameters of the model. But I'm going to assume you mean that some types of data are not well represented by floating point numbers, which is true, but there are workarounds, e.g. vector embeddings for words or simply one-hot encodings for categorical values.

What is true is that you sometimes need to make a hard decision in one part of the network, for example deciding that the image is a cat, in order to further process that decision. That would not be differentiable. But we can represent objects in a non-discrete way, with embeddings, hidden state vectors etc. My point is, people are getting around this problem.

I also want to add that I agree that we are far off from building apps automatically, but I think you're giving way too little importance to the advances that have been made. Just take a look at the latest GANs, or the advances in machine translation using the Transformer and attention architectures, or Neural Turing Machines.
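
To illustrate the workarounds mentioned above, a minimal numpy sketch of a one-hot input, an embedding lookup, and a "soft" decision:

  import numpy as np

  categories = ["cat", "dog", "bird"]

  # One-hot encoding: a categorical value becomes a float vector.
  one_hot = np.eye(len(categories))[categories.index("dog")]  # [0., 1., 0.]

  # Embedding: the category indexes a row of a learned table; gradients
  # flow into the table's entries, never through the discrete index.
  table = np.random.default_rng(0).normal(size=(len(categories), 4))
  dog_vector = table[categories.index("dog")]

  # Soft decision: a differentiable distribution over classes
  # instead of a hard argmax.
  logits = np.array([2.0, 1.0, 0.1])
  probs = np.exp(logits) / np.exp(logits).sum()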


> You don't calculate a gradient on the data, you calculate a gradient on the parameters of the model.

Whether a gradient exists will depend on the type of data and what you're trying to predict.

> But we can represent objects in a non-discrete way, with embeddings, hidden state vectors etc. My point is, people are getting around this problem.

Great progress has been made using that, e.g. advancements in NLU. But it is not a complete workaround. Embeddings don't work if your relationship is non-differentiable. And you will lose the gradient for very long contexts (deep networks), even using new techniques like attention and transformers.


Can you give an example of a relationship which is non-differentiable?

It seems to me that there is no difficulty in using it to model fixed functions {0,1}^n -> {0,1}^m .

I don’t know for sure that all such functions can be learned reasonably efficiently, but I do know that there are feedforward standard neural nets which, with some weights, can approximate any such function arbitrarily closely.

But, again, I’m not saying that these functions can be learned effectively by backprop, idk if they can be.

Are these functions examples of relations you would say aren’t differentiable?

I’m confident that SOME of these functions can be learned by backprop, but I don’t know what would make these ones “more differentiable” than others, so I don’t see how differentiability could be the obstacle here.

But I may be misunderstanding.


The problem is that the true, hidden function being modeled is non-differentiable. We can try to approximate that hidden function with one that is differentiable, yes. But depending on the nature of the problem, no good approximation is guaranteed to exist. An example from OP:

> What is true is that you sometimes need to make a hard decision in one part of the network, for example deciding that the image is a cat, in order to further process that decision. That would not be differentiable.

CNNs can work for image classification because a smooth gradient will exist for incrementally "stepping" color values during backprop. But asking the question "is this an image of a cat" is much different than "what do I do if this is an image of a cat".

Also very deep neural networks (i.e. many, many time steps) essentially lose their gradient, or context. Even with attention and transformers. Which is partly why we haven't seen AIs that can write lengthy programs, books etc that are coherent and grammatically correct. And we probably never will by only relying on current "curve fitting" techniques.


"what do I do if this is an image of a cat"

You can train another NN to predict the best course of action for each decision of the first net. Or even train a single net to choose actions based on the initial input. Not sure what's the problem here.

> we haven't seen AIs that can write lengthy programs, books etc that are coherent and grammatically correct. And we probably never will

Have you completely missed the recent NLP breakthroughs (BERT, GPT-2, XLNet, etc)? OpenAI even refused to publish their model because it could generate long coherent and grammatically correct text.


What do you mean by "good approximation"? Arbitrarily close?

Can you give an example of such a function? They very well may exist. But I think we can agree that non-differentiability of the target function is not sufficient.

The function that describes the neural network itself has to be differentiable. Whether we can create a differentiable function/NN for any kind of input remains to be shown.


> Can you give an example of such a function?

Brownian motion. Try using a NN to model stock prices (not Brownian exactly, but same concept).


> Try using a NN to model stock prices

IIRC, the first neural network book I read in the 1990s had that as the big illustrative application in the latter part of the book.


Yes, but it doesn't work well.

There are plenty of other time series problems where NNs don't outperform classical methods such as ARIMA.
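
For anyone who wants to try, a minimal sketch (assuming numpy and statsmodels are installed; the "prices" are a simulated random walk, not real data):

  import numpy as np
  from statsmodels.tsa.arima.model import ARIMA

  rng = np.random.default_rng(0)
  prices = 100 + np.cumsum(rng.normal(size=500))  # Brownian-like walk

  fit = ARIMA(prices, order=(1, 1, 0)).fit()  # classical baseline
  print(fit.forecast(steps=5))  # near-flat forecast, as expected for a walk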


Once again I ask: what do you mean by "works well"? Just because some other ML method is better in practice doesn't mean that a NN cannot achieve the same degree of approximation in theory.


> What do you mean by "works well"

There is no empirical data to support that it is more effective than other methods.

Theory is not that valuable when evaluating papers in AI. It is all about the empirical results.


I'm wondering if the ARIMA models aren't a very simple case of NN (time convolution + regression)?


That may not be what you meant but it is what you said. The "function you are trying to model" does not have to be differentiable as we have already established. Only the model itself (which is the neural network) needs to be differentiable.


The model needs to be differentiable.

The function needs to be continuous.

Discontinuous functions are not differentiable.

You can approximate discontinuous functions with continuous ones (e.g. with logistic functions).

In the end, we are still talking about curve fitting and optimization, not artificial general intelligence.
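
A quick numpy sketch of that approximation: a logistic function approaches the (discontinuous, hence non-differentiable) step function as its steepness k grows, while remaining differentiable throughout:

  import numpy as np

  def logistic(x, k):
      return 1 / (1 + np.exp(-k * x))

  x = np.array([-1.0, -0.1, 0.1, 1.0])
  for k in [1, 10, 100]:
      print(k, logistic(x, k).round(3))
  # at k=100 the output is already ~[0. 0. 1. 1.]: nearly a step, still smooth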


When will people wake up and realize that computers today are just capable of "fetch decode execute" in a loop?


This one really hits the nail on the head:

"They are really just another type of optimizer for maximum likelihood"

But like always in our industry - we have to hype something beyond belief to keep new projects rolling.

I'll let people dream, but I know enough to see how it all will end.. like always..


Do you actually work in the ML industry? Or are you just saying software writ large?

If you actually work in the ML industry, I find it astonishing that you don't think that there have been major breakthroughs in the last 7 years


If only people directly working in ML notice these advancements, I don't think it makes sense to be astonished that some people do not consider them "major breakthroughs".


Wait, so is ML overhyped or not? First I thought that everybody was covering AI as the next big thing which is absurd. But now, "nobody notices these advancements"? Which is it?

My astonishment that you didn't think there had been any breakthroughs was only if you directly worked in ML, otherwise you are just misinformed.


Should I be astonished that some field which recently got a lot of traction* made some breakthroughs?

(*investors' money)

I'm just stating that, "again" in our industry, masses will now flock to the "new shiny thing" simply because it is hyped and investors throw money at it. Meanwhile managers will try to fit ML/DL/AI into pretty much anything, whether it makes sense or not.


> recently got a lot of traction

This flips causality - funding didn't really pick up until the last 5 years or so - about two years after the initial breakthrough.

Look, I agree with this comment - managers are super eager to apply ML to tasks it has no business being applied to (at least yet). But your original claim was that all the new techniques are just glorified maximum likelihood optimization. That's just false.


I don't feel any less right about that statement. When you dumb things down to a single sentence, current AI really is only what I cited..


I mean your original statement was only four sentences, but that's fine.


Neural nets are one way to program that makes use of the available computation power. It's a way to keep getting the benefits of Moore's law.

The hardware is co-evolving with the software. GPUs with SIMD architectures, or TPUs which are even more specialized, will give a computational advantage to methods like neural nets, which they have been designed for.

The magic comes from the memory/computation power combo.


I am ignorant, might be really wrong and stupid, but aren't most problems in the world just curve-fitting in high-dimensional spaces?

Isn't human education simply discovering spaces that somewhat resemble a reality with arbitrarily many dimensions, and finding functions that match curves/surfaces/bodies to some degree of precision and exactitude?


Yeah I broadly agree that there is a lot of confusion about capabilities.

Some years ago I was working somewhere and the management had caught the AI/ML bug and were obsessed with the idea of using ML to generate business "insights". They'd get some vague & unspecified data about a client's business operations, we'd input it into the ML and voila: "insights" about how to improve their business (and make us money).

They didn't know what these "insights" would be, they expected machine learning to magically generate them on its own.

I tried explaining that at a high level, ML can only really give you answers you already know are possibilities. It won't offer up some totally novel answer that you've not trained it for - i.e. you've got to know what the answers could be before you even start.

We got shut down by the parent company not long after that.


You can train a neural network without backpropagation while still using a gradient method (e.g. estimating the gradient with finite differences); it will just be slower.

There are plenty of derivative-free methods as well.

It’s not clear whether a function being non-differentiable means anything without further specification, since classification is practiced quite successfully in many cases.
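
A minimal sketch of one such derivative-free method - simple random search over the weights of a tiny network, pure numpy, no gradients anywhere:

  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.normal(size=(100, 3))
  y = (X.sum(axis=1) > 0).astype(float)  # toy classification target

  def loss(w):
      pred = 1 / (1 + np.exp(-X @ w))  # forward pass only
      return np.mean((pred - y) ** 2)

  w = np.zeros(3)
  best = loss(w)
  for _ in range(2000):
      candidate = w + 0.1 * rng.normal(size=3)  # random perturbation
      c = loss(candidate)
      if c < best:                              # keep only improvements
          w, best = candidate, c
  print(best)  # the loss shrinks without a single derivative being computed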


they really should drop the term ‘AI’ and call it what it is - statistics


Honestly, having studied neural networks for the past year-ish, I think you just said one of the smartest things in this forum.

I'd just like to say it should be complex statistics. But that's just me.


Would you say that AlphaGo is really just statistics?

Even AlphaZero, which, iirc, is trained entirely using self-play, with no starting data from other players?


If I recall correctly their neural network was mostly trained to judge board situations accurately. So yes, I suppose you could consider it statistics.

Of course the situation becomes rather interesting when you start training it against itself, but you're still fundamentally trying to find a good statistic to estimate your chances of winning.


My understanding of “statistics” is that it is about either probability distributions, or gleaning some aggregate information from an existing data set which is taken to be either a (not necessarily uniform) random sample from some distribution, or describing the entire population.

Perhaps I am not using the right definition of “statistics”?


Given that your definition of statistics contains nested levels of 'either or', it shouldn't really surprise you that it doesn't really generalize well.

More generally I'd consider statistics to be the applied version of probability theory. Of course in this case the very thing they were trying to compute also fit the definition of a 'statistic'.

If you consider this is to be too broad, then keep in mind that it's simply better when you can apply the concepts and techniques from probability theory to more things.


While AI learners are not actually perfect generalizers, so to speak, they are also quite clearly not purely statistical correlation machines, and there is a lot of evidence to show for this, such as the surprising similarities of ConvNets to low-level human perception, etc.

It seems to me that there is a category of people who are eager to dismiss deep learning altogether and say "iT's JuSt stAtIStics" even though there is a good amount of evidence to show that it isn't the case. That isn't real science, it's human bias.


> they are also quite clearly not purely statistical correlation machines, and there is a lot of evidence to show for this, such as the surprising similarities of ConvNets to low-level human perception, etc.

Wrong.

If you actually study signal processing, you will find out that CNNs aren't something magically rooted in something other than "statistical correlation machines." CNNs in fact work because they're used to calculate cross-correlation!

"Convolving" in terms of a CNN is a misnomer; it's the same as calculating cross-correlation in terms of signal processing.


Yeah, I'm aware that it's cross-correlation from the signal processing world, there are a lot of misnomers like that in the DL world, e.g. "deconvolutions", etc.

> Wrong.

The best way to respond to this, somewhat humorously would be "Wrong".


That wasn't the point. The point was, contrary to your claims, CNNs are indeed all about statistical correlation.


It seems odd that GANs would learn to generate images with precise yet lifelike shape and texture based purely on statistical correlation.


No it doesn't. Because that's what the CNNs used by the GANs are fitting on: cross-correlation.


Sounds to me like you're being overly reductive of statistics and probability.


There are a lot of folks in this thread who are AI "experts", fighting tooth and nail to defend their startup/PhD/career path, even if it means defending complete nonsense.


Yet these neural networks are capable of exceeding human performance on tasks that just a few years ago people like you were saying no machine could do.

Look: maybe ML is just curve fitting. But maybe human consciousness is too. As the scope of ML expands, we're going to have to confront the reality that there's nothing special about our own minds.


Maybe consciousness is just a bunch of conditional goto statements executed in parallel with stochastic lag?

My wild guess is that digital computers are not very efficient at AI. It would be really interesting if nets could be implemented in an analogue fashion.


You may want to read this paper. https://arxiv.org/abs/1904.01557


Disclaimer - I'm a VP E at Engineer.ai

AI is certainly not magic, and as an industry we're super far away from what would be considered real AI in the technical sense. That being said, AI has become a catch-all term for everything from something as simple as linear regression all the way through to neural networks.

We don't claim to be able to write apps using AI, we're a platform that is trying to use AI and general automation in order to optimize the traditional SDLC. Actual code generation/synthesis is years away in my opinion and there is far more impact that can be had by going after other manual aspects of software development.


> we're a platform that is trying to use AI and general automation in order to optimize the traditional SDLC

I don't think you can get away with corp-speak/buzzwords here this easily. Could you elaborate on how exactly you're using AI to "optimize" software development?


To me this sounds like their product is a pile of boilerplate / templates that can be combined to build an app.

If I were to take a guess at the flow: as a customer creates a "new app" they go through some wizard type of process that will start to narrow down which templates are needed and what information to prompt the customer to fill in.

Once they have all of that they take that bundle of templates and "content" and hand it off to some developer to glue it all together and then perhaps add some other automation to handle small changes by the customer later automatically.

Could be a clever way to speed up app development if you can narrow the scope down but "AI" it is not.

Just some speculation, armchair quarterbacking.


You are exactly correct. I saw them do a demo at the Collision conference in Toronto and the process involved going through a lengthy setup wizard about the project and its characteristics.


Disclaimer - I'm a VP E at Engineer.ai

Glad you were able to stop by at Collision! The process today is certainly not as user-friendly as we'd like and can be quite time-consuming. We're doing a revamp of the particular experience you saw in Toronto to streamline the process and also add the ability to create clickable prototypes automatically!

Though we're still developing that tool, we intend to unveil it at WebSummit this year. Hope to see you there and get your feedback on it!


Disclaimer - I'm a VP E at Engineer.ai

Happy to elaborate - in a nutshell what we're trying to do is automate as many parts of the traditional software development lifecycle as we can, and for whatever cannot be automated, put in place the right tooling to allow for repeatable results.

Our thesis is that most applications today have a huge amount of duplication at a code level, and process level. We're trying to use reusable building blocks (well-structured libraries, templated user stories, wireframes, common errors, etc.), in order to immediately solve that duplication. That being said, we're not talking about automatic code generation; it's more about being able to assemble these reusable building blocks together at the beginning of a project so you have a better starting point. There will always be customization required for any project, however, and that is a human-led process.

Apart from actual development, we're also trying to automate processes around project management, infrastructure management, and QA. For example, what we've already been able to do is automatically price and create timeline estimates for a project without any human involvement, determine which creators on our network are best suited for a given project, evaluate and onboard developers onto the network, set up developer environments, and a lot more!


Sorry if I'm missing something obvious, but it's not very clear to me how the first part significantly benefits from AI. Code re-use is just good software engineering practice; are you somehow able to figure out what libraries to use automatically? Isn't this trivial to perform by a human anyway?

The latter part, as far as figuring out what work to assign and estimating time-frames does seem like a legitimate AI use case though.


Disclaimer - I'm a VP E at Engineer.ai

We're attempting to tackle the problem holistically. That means that we're tackling every single step of the traditional product development process. All the way from how you ideate, price, and spec, to sourcing and managing developers through to QA and infrastructure management.

For example, today, our ideation/pricing/spec tools leverage applied ML, creator management leverages facial recognition for fraud prevention, and infrastructure management uses statistical modelling.

We're trying to make code re-use a repeatable and predictable process rather than just a best practice. Today in the industry it's purely led by developers, and very often is done solely at their discretion in a manual fashion. We're attempting to enforce code reuse at the platform level, across autonomous distributed teams and products. Apart from just deciding what the optimal building blocks for a project are, the actual assembly or intelligent merging of these building blocks in an automated way is non-trivial and mirrors modern automotive assembly lines.


Can you provide specific examples and validation? I too can write a program that "creates timeline estimates for a project without any human involvement" - doesn't mean that its estimate is accurate. Can you provide specific examples of how you are automating traditional SDLC using AI?

You are here on a forum of technical people, can you be appropriately technical?


Disclaimer - I'm a VP E at Engineer.ai

All of our project timelines are generated fully automatically. Today we are hovering at around a 90% accuracy on those estimates, and are moving more and more towards solving that last 10%.

We put our money where our mouth is - for example if our system generates a spec with a timeline of 10 weeks and a price of 10K, and we take 15 weeks, we do not charge more than 10K.

Unfortunately I can't reveal more details of how we generate those timelines automatically, apart from the fact that it uses NLP, CNNs, and regression analysis, as it is proprietary and core to our business.


Lots of companies don't charge for work that falls outside the estimated amount of time, you guys are far from the only ones doing that. It doesn't take AI to do that. And anyone would find your description of the methodology vague to the point of being useless.

> NLP, CNNs, and regression analysis

No one is asking you to reveal your algorithms in detail, but any information at all besides just naming 3 statistical methods would go a long way in convincing people of the validity of your assertions.

Maybe you're just using human estimators and are using NLP/CNN/regression analysis to compute their daily coffee supply.


Disclaimer - I'm a VP E at Engineer.ai

Apologies if it came across as vague. You're welcome to try out our pricing and timeline estimation system if you'd like to get a sense for how it works - it's all public (https://builder.engineer.ai).

That particular tool uses historical data from our user story management system and repository system to glean insights such as average amount of time taken on customizing features, complexity of features and the interactions between them, common errors, developer efficiency by feature grouping, etc. This is all then used as input data into our pricing and timeline estimation system.

Collecting this data was no small feat, we had to build a significant amount of project management and developer tooling in order to get the granularity of data required.

This is also why we're confident that we'll be able to improve our accuracy beyond 90% - as we build more projects, the data collected from that process will feed back into these models.
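
As a purely hypothetical illustration of what this kind of pipeline can look like in miniature (this is not our actual system), one could vectorize a text spec and regress it against historical timelines, e.g. with scikit-learn:

  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import Ridge
  from sklearn.pipeline import make_pipeline

  # Made-up historical (spec text, weeks taken) pairs.
  specs = ["chat app with login and push notifications",
           "static landing page with a contact form"]
  weeks = [10.0, 2.0]

  model = make_pipeline(TfidfVectorizer(), Ridge())
  model.fit(specs, weeks)
  print(model.predict(["marketplace app with login and payments"]))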


The problem I've always seen with structuring code is that text files are basically one-dimensional, meaning that you always run into conflicts between putting things close to each other that are similar in one dimension vs. those that are similar in another dimension.

I'm not lucid enough tonight to give good concrete examples, or be more specific about how this relates to repetition, but I feel like there are deep problems with designing software automatically even looking at mundane, small-scale stuff.


But how is any of this related to AI?


Code generation is quite possible. See for example the one-shot program induction in Excel's FlashFill:

https://people.csail.mit.edu/rishabh/papers/cacm12.pdf

Which btw is absolutely an artificial intelligence application albeit one that has nothing to do with neural nets and deep learning.

Perhaps, if your company has trouble with acquiring data and training large deep neural nets, you could benefit from looking at other techniques that do not have such stringent requirements and that are much better suited to smaller companies (i.e. anyone but Google, Facebook, Amazon, Netflix et al).
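
To give a flavour of what one-shot program induction means, here is a toy sketch in the spirit of FlashFill: a tiny hand-written DSL of string transforms, searched for a program consistent with a single example:

  # Toy DSL: each "program" is a named string transform.
  programs = {
      "upper": str.upper,
      "lower": str.lower,
      "first_word": lambda s: s.split()[0],
      "initials": lambda s: "".join(w[0] for w in s.split()),
  }

  def induce(example_in, example_out):
      # Return the first program consistent with the single example.
      for name, f in programs.items():
          if f(example_in) == example_out:
              return name, f
      return None

  name, f = induce("Ada Lovelace", "AL")  # one example is enough here
  print(name, f("Alan Turing"))           # initials -> "AT"

FlashFill's DSL and search are of course far richer, but the shape - induce a program from one example, then apply it to new inputs - is the same.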


I'm very curious how are you using AI to optimize software engineering. Do you have a linter that catches bugs? Do you have a system that generates code from a very high level language? I'm really curious what is the actual part of software development you guys managed to automate. Saying "we use AI and automation to optimize software development" doesn't make any sense if you can't explain exactly which problem you guys managed to solve. When you go to an aerospace engineer you don't say "Hey! I managed to optimize airplane building". You say "Hey! I made this software that given your jet engine, optimizes it for cost/safety/efficiency!"


Disclaimer - I'm a VP E at Engineer.ai

It's not just one problem we're tackling, it's actually more like 40 small issues that we're working on. You actually named a few right there - static code analysis, automatic UI generation from YAML. I also want to be clear that not all of it is AI or ML. For example how we price and spec out ideas (https://builder.engineer.ai/) is fully automated and leverages NLP and NNs, and how we handle developer verification uses facial recognition. However many of the problems we are going after don't require AI; heuristics-based approaches and statistical models can actually have better results in many cases.


> That being said, AI has become a catch all term for everything as simple as linear regressions, all the way through to neural networks.

And that's a problem. The problem is that this trend builds unrealistic expectations for pretty much anyone that doesn't know how the tech works.

Business people imagine some magic box that will just churn out stuff with (close to) zero workers involved.

Customers imagine the same magic box churning out tailored products built with AI fairy magic.

Then reality sets in, and people (investors included) start losing faith, and we're onto the next AI winter.


Great, since you're here: what's your comment on the allegation that engineer.ai defrauded investors with fake AI?


Disclaimer - I'm a VP E at Engineer.ai

We've been very transparent with our investors on where we are in the process of creating this platform both pre-investment and post. They actually responded to the WSJ article:

A spokeswoman for Deepcore said it has complete confidence in Mr. Duggal’s vision and team.

A spokesman for Jungle Ventures said it is a proud investor in Engineer.ai and its technology, adding that “the AI landscape is a varied spectrum.”

A Lakestar spokeswoman said it also has confidence in Engineer.ai and its team, adding that “growth in the AI space does not happen overnight.” It said Engineer.ai had been very careful in presenting its technology to Lakestar and other investors.


All due respect but I wouldn't expect a VC to say anything else. These guys want to be able to peddle their stake onto someone else in a future round. I'll be more interested in what those investors do in your next funding round.


I was trying to read up on the site to evaluate what you said, but the UI's horribly broken on Firefox latest. Not the best first impression.


Disclaimer - I'm a VP E at Engineer.ai

Apologies for that experience! We're currently working on a new iteration of that site and will be launching it very shortly. Come back soon and let us know what you think.


Happy to do so; thanks for the heads-up. I use Firefox exclusively and an increasing number of people I work with do as well, except for testing.


Anyone remember Spinvox?

https://kernelmag.dailydot.com/features/report/2573/spinvox-...

My eyes popped open when I read who the author of this was! Utterly loathsome - but apparently doing some journalism in 2012.


AI is the biggest fraud of the 21st century. Especially Deep Learning. Deep Learning is a bubble that has no application in reality. And I mean NONE. Even in cutting edge FAANG companies that claim to use modern AI techniques, Deep Learning is barely used. Because it's simply not reliable enough for real datasets. Classical statistical techniques, along with human domain expertise are what runs the world. Not new-fangled hyped up stuff.


> AI is the biggest fraud of the 21st century.

That's a bit much. ML techniques have large and proven market applications[1]. And there's a bunch of hangers on trying to spin the buzz into a quick buck. This seems like pretty boring, run of the mill fraud to me.

[1] Which, to be fair, tend to all fall within the realm of "do with a cheap computer what an expensive human can do easily", like looking at or listening to things. "Writing software" should have been an obvious cue that they were way beyond the known reach of the technology.


This is what makes me so angry at all the hype and fraud in the AI space. It's causing people to throw the baby out with the bathwater by ignoring all the real world progress that has been made over the last decade. Particularly in NLP and CV. The problem is that when the general public thinks AI, they think HAL9000 and Skynet, not Facebook auto tagging or speech recognition on their phone or home assistant, and unfortunately there are far too many people out there making money selling science fiction as reality.


I don't really agree with your comment. DL is used heavily in the semiconductor industry for image segmentation, defect classification, simulation speed cut-down, and plasma optimization. It's been a game changer, saving engineers up to 30% of their time.


This is absurdly wrong, DL is used in industry all the time.


Show me one example where Deep Learning is used in production.


See my comment above. Image processing alone has saved millions of dollars in engineering hours.


Google Translate?


AI isn't a fraud, which is exactly why there's a problem. A gray area exists that's easy to exploit.



