
We are building an AI tool that we've code-named goose and so uh this was super funny and timely :)


Any AI deterrent system seems worthwhile though. How many more useless chatbots does the world really need? It's like Candy Crush clones.


Seems like a good fit for a development contractor/consultancy: get pros to try to gin up a v1 clone fast. If they show that it's not too bad, with no gnarly gotchas, have them help you hire folks who can get to v2 and beyond. Starting a software dev practice is daunting without prior applied experience; use the cheat code.


heheh Vic and I did one in a backpack, worked good :)


Planning for a keyboard case?


What about a firmware flash that nukes the Android part and makes it behave as a display, which hopefully you can flash back to your OS if policy or use case changes?


I’m just diehard I guess, serious evolutions in my preferences and new artists I love almost weekly. Huh!


My company alone has over 150 case studies that document specific positive ROI on cloud adoption. Not everyone can or will run their own; I'm pumped for Oxide and I think they are going to smash, but some teams really, really benefit from Public Cloud.


Oh dude I found my sousaphone, plus a like-new Rhodes piano, and a whole-ass pipe organ my college had forgotten.

Plus the ID card laminator, that was clutch ;)



How real is it though? This blog post says

In this post, we’ll explore some of the prompting approaches we used in our Hands on with Gemini demo video.

which makes it sound like they used text + image prompts and then acted them out in the video, as opposed to Gemini interpreting the video directly.

https://developers.googleblog.com/2023/12/how-its-made-gemin...


After reading this blog post, I'd say that hands-on video is just straight-up lying to people. For the boxcar example, the narrator in the video says to Gemini:

> Narrator: "Based on their design, which of these would go faster?"

Without even specifying that those are cars! That was impressive to me, that it recognized the cars are going downhill _and_ could infer that in such a situation, aerodynamics matters. But the blog post says the real prompt was this:

> Real Prompt: "Which of these cars is more aerodynamic? The one on the left or the right? Explain why, using specific visual details."

They narrated inaccurate prompts for the Sun/Saturn/Earth example too:

> Narrator: "Is this the right order?"

> Real Prompt: "Is this the right order? Consider the distance from the sun and explain your reasoning."

If the narrator actually read the _real_ prompts they fed Gemini in these videos, this would not be nearly as impressive!


Out of curiosity I've asked GPT-4V the same questions:

    Prompt: [attached image of the post-it notes with sun and planets] Is this right order?
    GPT-4: The image you've uploaded shows three drawings on individual pieces of paper. From left to right, there's a drawing of the sun, then what appears to be Saturn, and lastly a drawing that resembles Earth. If the intention is to display these celestial objects by their proximity to the Sun, then the order should be the Sun, Earth, and then Saturn, as Earth is closer to the Sun than Saturn. Therefore, the current order is not correct. To correct the order, you should place the Earth drawing second, between the Sun and Saturn.

    Prompt: [attached the image of the two car drawings] Based on their design, which one would go faster?
    GPT-4: The image shows two drawings of vehicles on slopes. The one on the left appears to have a more traditional box-like design, while the one on the right has a streamlined design with a pointed front. Based on principles of physics and aerodynamics, the vehicle with the streamlined design (on the right) would likely go faster down the slope. Streamlining reduces air resistance (drag), allowing the vehicle to move more efficiently. The pointed front helps to cut through the air and minimize the drag force, which would result in higher speeds compared to the boxier design on the left, which would encounter more air resistance.
I'm actually pretty impressed how well it did with such basic prompts.
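
For anyone who wants to reproduce this against the API rather than the ChatGPT UI, the image-plus-text call looks roughly like this. A minimal sketch, assuming the OpenAI Python SDK v1 and vision-preview access; the image URL is a placeholder:

    # Sketch: send one image and one question to GPT-4V via the API.
    # Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Based on their design, which one would go faster?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/two-car-drawings.png"}},  # placeholder
            ],
        }],
        max_tokens=300,
    )
    print(resp.choices[0].message.content)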


What do you mean, "Real Prompt"? Nowhere does it say these are the real prompts; it says

> In this post, we’ll explore some of the prompting approaches we used in our Hands on with Gemini demo video.

Not "here are the full prompts used in the video" or something like that.

None of the entries match up 1:1. And the response to the car example in the video doesn't even make sense in response to the prompt in the post (no mention of speed), and certainly isn't a trimmed portion of the response in the post.

The video has the disclaimer "For the purposes of this demo, latency has been reduced and Gemini outputs have been shortened for brevity". It would be weird to write that but not mention that neither the prompts nor responses shared even the same set of words in the same order with the "Real" prompts and responses.

I think your assumption is wrong on this one.


Wow, I was blown away when I watched this video.

Now that I've learned how fake it is, that's more evidence that Google is in really bad shape with this.


>If the narrator actually read the _real_ prompts they fed Gemini in these videos, this would not be nearly as impressive!

It's crazy that this is where we are now. This is obviously still crazy impressive even if they hadn't done those edits.


It might still be crazy impressive, but nonetheless, going forward we now know that we cannot trust Google's videos about it, as they're heavily edited to look a lot more impressive than it is.


Those prompts aren't far off, but I still don't know how realistic the demo is. Until a product is in my hands, as far as I'm concerned it doesn't exist.


A lesson in how to commit securities fraud and get away with it.


Boo! Complete marketing garbage. May as well have been a Flash demo.


Yeah I think this comment basically sums up my cynicism about that video.

It's that, you know some of this happened and you don't know how much. So when it says "what the quack!" presumably the model was prompted "give me answers in a more fun conversational style" (since that's not the style in any of the other clips) and, like, was it able to do that with just a little hint or did it take a large amount of wrangling "hey can you say that again in a more conversational way, what if you said something funny at the beginning like 'what the quack'" and then it's totally unimpressive. I'm not saying that's what happened, I'm saying "because we know we're only seeing a very fragmentary transcript I have no way to distinguish between the really impressive version and the really unimpressive one."

It'll be interesting to use it more as it gets more generally available though.


You can see the cracks in the feature early on:

"What do you think I'm doing? Hint: it's a game."

Anyone with as much "knowledge" as Gemini ought to know it's roshambo.

"Is this the right order? Consider the distance from the sun and explain your reasoning."

Full prompt elided from the video.


I’ve heard of roshambo mostly from South Park

https://www.urbandictionary.com/define.php?term=roshambo

I’ve vaguely heard the term before, but I don’t know what regions of the world actually use that term.


Never heard it called that. Curious where you are from?


For what it's worth, I was born and raised in the Bay Area (in the 90s), and we called it ro-sham-bo growing up. Although it's incredibly strange to see that word in writing, I would always call it rock paper scissors if I were to write it.


It's always like this, isn't it? I was watching the demo and thought: why ask it what "duck" is in multiple languages? Siri can do that right now, and it's not an AI model. I really do think we're getting there with the AI revolution, but these demos are so far from exciting; they're just mundane dummy tasks that don't have the nuance of the things we really interact with and would need help from an AI on.


How do you know though? The responses in the video were not the same as those in the blog post.


To quote Gemini, what the quack! Even with the understanding that these are handpicked interactions that are likely to be among the system's best responses, that is an extremely impressive level of understanding and reasoning.


Calls for a new corollary to Clarke's Third Law. "Any sufficiently-advanced rigged demo is indistinguishable from magic."


Magic Leap!


And for the development team it's predicated upon Hofstadter's Law.


Does it really need to have affectations like "What the quack!"? These affectations are lab-grown and not cute.


I guess it's like drawing googly eyes on Clippy: it helps sell the illusion that you are interacting with something alive instead of an automatic system.


Except that it doesn't.


I liked them


You missed the joke! Look at their username ;)


I think that wasn't a joke and the poster is just using a throwaway account


> These affectations are lab-grown and not cute

Unless it was put in there manually, it's emergent, isn't it?


  Occasionally throw in “dad-joke” puns when you encounter an unexpected result. 
Or something along those lines in the original prompt.


Reminds me of their demo a few years back when they had an AI call a hair salon to schedule an appointment. When the receptionist asked if they could put the caller on hold, it did the "mmm hmm" that was uncannily human-like.


If it were emergent it would say WTF.


What would be Gemini's current IQ? I would suspect it's higher than the average human's.


GPT-4 already scored higher than 95% of people. The average human isn't really all that smart.


On knowledge tests, for sure. It completely falls short on common sense, where it's worse than almost anyone.


Well, IQ is a reasoning test, and common sense is practical everyday reasoning, so it should cover that. Are we talking about the same people that try to wrestle alligators, sign up for pyramid schemes, and ride speedbikes in a T-shirt and shorts? Common sense isn't super common.


The thing with IQ tests is that they're all based on similar concepts, so it's possible to train for them, which is what AI does. Most humans grow up learning to walk, speak, interact, and read nonverbal cues. I would argue musicians don't tend to have a higher IQ than the average person, but an AI can't come close to writing a song and playing a guitar in a way that resonates with people. AI can assist with it, but it's missing the human spark for now.


Have you not tried Suno AI?

We are not that far away from AI creating perfect music for us.


The average human is not trained to be smart. They are trained to earn money, provide for a family, and navigate social situations.


I'm legitimately starting to wonder what white-collar workers will even do in 5-10 years.

This is just Year 1 of this stuff going mainstream. Careers are 25-30 years long. What will someone entering the workforce today even be doing in 2035?


Even if we get Gemini 2.0 or GPT-6 that is even better at the stuff it's good at now... you've always been able to outsource 'tasks' for cheap. There is no shortage of people who can write somewhat generic text, write chunks of self-contained code, etc.

This might lower the barrier to entry, but it's basically a cheaper outsourcing model. And many companies will outsource more to AI. But there's probably a reason that most large companies are not just managers and architects who farm out their work to the cheapest foreign markets.

Similar to how many tech jobs have gone from C -> C++ -> Java -> Python/Go, where the average developer is supposed to accomplish a lot more than previously, I think you'll see the same for white-collar workers.

Software engineering didn't die because you needed so much less work to do a network stack; the expectations changed.

This is just the nontechnical white-collar worker's first level up from C -> Java.


Never underestimate management's thirst for eliminating the pesky problems that come with dealing with human bodies: vacations, pregnancies, office rivalries, time zones, and heck, unionization.

I suspect the real driver of the shift to AI will be this and not lower cost/efficiency.


> management's thirst for eliminating the pesky problems that come with dealing with human bodies

But that's what 95% of management is for. If you don't have humans, you don't need the majority of managers.

And I know of plenty of asshole managers who enjoy their job because they get to boss people around.

And another thing people are forgetting: end users, AKA consumers, will be able to use similar tech as well. So for something they used to hire a company for, they will just use AI, so you don't even need CEOs and financial managers in the end :)

Because, if a software CEO can push a button to create an app that he wants to sell, so can his end users.


My strong belief is that if someone wanted to halt AI development, they should attempt to train AI replacements for managers and politicians, and publicize it.


[Guy who draws blue ducks for a living]: DAMNIT!


He can always sell that as an NFT!


"Next piece we have for sale: the OG Blue Duck, from before the AI Summer age. Made by a true master of the genre. Do I hear a $100?"


Giving it away would be nicer.


I hear you draw blue ducks. The exposure of putting your blue duck on my channel would be really good for you...


>What will someone entering the workforce today even be doing in 2035?

The same thing they're doing now, just with tools that enable them to do some more of it. We've had these discussions a dozen times, including pre- and post-computerization, and every time it ends up the same way. We went from entire teams writing Pokemon in Z80 assembly to someone cranking out games in Unity while barely knowing how to code, and yet game devs still exist.


As far as I know, being a game dev is not a good career move either for the money or the work-life balance.


Yeah, but the point is how much work a game dev is able to do. The current level of games was just impossible back then, or would have required a huge number of teams to do something quite trivial today.


It never was.


Yeah, it has been quite the problem to think about ever since the original release of ChatGPT, as it was already obvious where this was going, and multimodal models more or less confirmed it.

There are two ways this goes: UBI, or gradual population reduction through unemployment and homelessness. There's no way the average human will be able to produce any productive value outside manual labor in 20 years. Maybe not even that, looking at robots like Digit that can already do warehouse work for $25/hour.


More than efficiency and costs, I think the real driver of AI adoption in big corp will be the reduction of all the baggage human beings bring. AI will never ask for sick days, will never walk in with a hangover, never be unproductive because their 3-month-old baby kept them up all night...

An AI coder will always be around, always be a "team player", always be chipper and friendly. That's management's wet dream.


I don't think humans will stay competitive long enough for that to even matter, frankly. It's a no-brainer to go for the far cheaper, smarter, and, most importantly, a few orders of magnitude faster worker. On the off chance that we hit some sort of intelligence ceiling and don't get ASI-tier models in the next few years, then that will definitely do it, though.

Companies start going from paying lots of local workers to paying a few select corporations what's essentially a SaaS fee (some are already buying ChatGPT Plus for all employees and reducing headcount), which accumulates all the wealth that would've gone to the workers into the hands of those renting GPU servers. The middle class was in decline already, but this will surely eradicate it.


None of this will happen, because jobs are based on comparative advantage, not absolute advantage, which means it doesn't matter if someone else would be better at your job than you are. That person (or AI) is doing the job they're best suited to, which is not yours. Other fun second-order effects include Jevons paradox (which is why inventing ATMs caused more employment for bank tellers, not less).

I can be very confident about this because it's just about the strongest finding there is in economics. If this weren't true, it'd be good for your career to stop other people from having children in case they take your job.
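
Comparative advantage is easy to see with a toy calculation. A sketch with invented numbers (the tasks and rates are made up purely for illustration):

    # Daily output per worker for two tasks (numbers are invented):
    rates = {
        "human": {"reviews": 2,  "calls": 8},
        "ai":    {"reviews": 20, "calls": 10},
    }

    # Opportunity cost of one call, measured in reviews foregone:
    for who, r in rates.items():
        print(who, r["reviews"] / r["calls"], "reviews given up per call")
    # human -> 0.25 reviews given up per call
    # ai    -> 2.0  reviews given up per call

    # The AI holds the absolute advantage in both tasks, yet a call
    # "costs" it 8x more than it costs the human. So as long as the
    # AI's capacity is finite, total output is maximized by having
    # the AI specialize in reviews and the human in calls.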


Comparative advantage assumes that there is a capacity limit. The more productive country might not choose to produce widget A because its limited capacity is better used to create widget B. However, if in a few years there are enough GPUs to satisfy almost all demand for AI labor, there's no need to "outsource" work that AI is better at to humans.

Jevons paradox might result in much more demand for AI labor, but not necessarily human labor for the same types of work AI can do. It might indirectly increase demand for human services, though: fitness trainers, meditation teachers, acupuncturists, etc.


>If this wasn't true, it'd be good for your career to stop other people from having children in case they take your job.

Well, in times past, kings have been known to do this.

But more generally, you raise an interesting point. I think your reasoning succeeds at dispelling the often-touted strong form of the claim ("AI can do my job better than I can, therefore I will lose my job to AI") but doesn't go all the way to guaranteeing its opposite ("no possible developments in AI could result in my job being threatened"). Job threat level will just continue to depend in a complicated way on everyone's aptitude at every job.


Many things could result in your job being threatened. Since I think the kind of AI they're describing would increase employment, I'm equally willing to believe an opposite trend would decrease it.

So that could be productivity decreases, rises in energy prices or interest rates, war, losing industries to other countries…


To quote CGP Grey "There isn’t a rule of economics that says better technology makes more, better jobs for horses. It sounds shockingly dumb to even say that out loud, but swap horses for humans and suddenly people think it sounds about right."

I mean, I don't know; maybe you're right and this will Jevons us towards even more demand for AI-assisted jobs, but I think only to the point where it's still just AI complementing humans, making them better and more efficient at their jobs (like LLMs are doing right now), and not outright replacing them.

As per your example, bank tellers are still here because ATMs can only dispense money and change PINs; they can't do the tellers' job, but only take over the menial stuff, leaving the more complex work to be handled by less overworked humans. Make an ATM that does everything (e.g. online banking) and there's literally nothing a bank teller needs to exist for. Most online banks don't even have offices these days. For now classical brick-and-mortar banks remain, but for how long I'm not sure; probably only until the next crisis, when they all fold by not being competitive, since they have to pay for all those tellers and real-estate rents. And as per Grey's example, cars did not increase demand for horses/humans; they increased demand for cars/AGI.


Horses are not labor. You can tell because we don't pay them wages and they don't make any effort to be employed. That makes them capital; when humans are treated that way it's called slavery.

I don't think you should listen to YouTubers about anything, though all I know about that guy is that he has bad aesthetic opinions on flag design.


Doesn't every capitalist consider humans capital deep down? Who'd come up with a name like "human resources" otherwise, lmao? In ex-socialist countries it's usually called something more normal, like cadre service.

Besides, I don't see the market difference between having to pay to maintain a horse with feed, healthcare, grooming, etc., which likely costs something on a similar order as paying a human's monthly wage that gets used in similar ways. Both come with monthly expenses, generate revenue, and eventually retire and die; on paper they should follow the same principle, with the exception that you can sell a horse when you want to get rid of it but have to pay severance when doing the same with a person. I doubt that influences the overall lifetime equation much, though.


> Doesn't every capitalist consider humans capital deep down?

That's slavery, so only if they're bad at it. (The reason economics is called "the dismal science" is that slaveowners got mad at economists for saying slavery was bad for the economy.)

> Besides, I don't see the market difference between having to pay to maintain a horse with feed, healthcare, grooming, etc., which likely costs something on a similar order as paying a human's monthly wage that gets used in similar ways

The horse can't negotiate and won't leave you because it gets a competing offer. And it's not up to you what your employee spends their wages on, and their wages aren't set by how much you think they should be spending.


Well, anecdotally, there's been a massive drop in on-campus hiring in India this year. The largest recruiters, the big IT companies (Infosys, TCS, etc.), apparently haven't made any hires at all.


>Companies start going from...

The few companies that will still exist, that is - many of them won't, when their product becomes almost free to replace.


> UBI or gradual population reduction through unemployment and homelessness

I actually think that if we get to a superintelligent AGI and ask it to solve our problems (e.g., global warming, etc.), the AGI will say, "You need to slow down baby production."

Under good circumstances, the world will see a "soft landing" where we solve our problems by population reduction, and it's achieved through attrition and much lower birth rate.

We have met the enemy and he is us.


What if you can have one biological child? One day you will die, so it's -1 +1; it equals out. If you want more, what about adoption? There are kids out there that need a home. Seems fair to me.


Unfortunately, we've made the critical mistake of setting up our entire economic system to require constant growth, or the house of cards it's built upon immediately starts falling apart. It sure doesn't help that when this all becomes an active problem, climate change will also be hitting us in full force.

Now maybe we can actually maintain growth with fewer people through automation, like we've done successfully for farming, mining, industrial production, and the like, but there was always something new for the bulk of the population to move to and be productive in. Now there just won't be anything to move to aside from popularity-based jobs, of which there are only so many.


Yes, imagine being a high school student now, deciding what to do 5, 6, 7 years from now.


The options are:

- be pretty

- have fine motor skills

- have wealthy parents


I thought those were already the best choices?


Well, until now it was also quite OK to just be intelligent and maybe hard-working. I'd venture a guess that most of this site is doing well by virtue of being born with efficient brains, and that would offset not being pretty or otherwise talented. Not for much longer, possibly :-(


Pretty is nice, but not everyone wants to whore themselves on the internet.


Whoring on the internet is easier and safer for mental health and wellbeing than finding a real job that pays the bills.


Being an adult entertainer on the internet is a real job (and may be one that pays the bills.)


Since when has everyone been able to have a job they liked?


It is beneficial when interviewing for just about any job.


there are already generative models for that anyway.


have pitchforks and torches to revolt


The only safe bet now is menial, manual work, as it's too expensive to automate in non-assembly-line cases.


Work will just move to a higher level of abstraction.


I'm wondering the same, but for the narrower white-collar subset of tech workers: what will today's UX/UI designer or API developer be doing in 5-10 years?


Once the context window becomes large enough to swallow up the codebase of a small-mid sized company, what do all those IT workers that perform below the 50th percentile in coding tests even do?

HN has a blind spot about this because a lot of people here are in the top percentile of programmers. But the bottom 50th percentile are already being outperformed by GPT-4. Org structures and even GPT-4 availability haven't caught up, but I can't see any situation where these workers aren't replaced en masse by AI, especially if the AI is 10% of the cost and doesn't come with the "baggage" of dealing with humans.

I don't think our society is prepared.


> Once the context window becomes large enough to swallow up the codebase of a small-mid sized company, what do all those IT workers that perform below the 50th percentile in coding tests even do?

There's a whole lot of work in tech (even specifically work "done by software developers") that isn't "banging out code to already completed specs".


Yeah, I think a lot of experienced developers are so immersed in software development that they forget how complex the process is, and how much knowledge it takes to even know how to ask the right questions.

I mean, I thought that website frontend development would have long since been swallowed up by off-the-shelf WYSIWYG tools; that's how it seemed to be going in the late 90s. But the opposite has happened: there have never been more developers working on weird custom stuff.


You say that like it's not something that can also be automated.


If you make it cheaper then people will do more of it.

Look at how much more graphic design is starting to happen now that you can create an image in a few minutes.

So it means we’ll get more development projects because they’ll be cheaper.

And yes I do realize at some point we’ll still have a mass of unemployed skilled white collar workers like devs.


What specific test do I take to know my percentile?


Just roll a d100; it will be about as useful...

If you roll over a 75, roll an additional d10 to find out your multiplier score (as in, a 10x programmer).


UX/UI designers will use AI as part of their jobs. They'll be able to work at a higher level and focus less on boilerplate. That might mean fewer UX/UI jobs, but more likely the standard for app UX will go up. Companies are always going to want to differentiate their apps.

It's like how, in 2003, if your restaurant had a website with a phone number posted on it, you were ahead of the curve. Today, if your restaurant doesn't have a website with online ordering, you're going to miss out on potential customers.

API developers will largely find something else to do. I've never seen a job posting for an API developer. My intuition is that even today, the number of people who work specifically as an API developer for their whole career is pretty close to zero.


Today, your restaurant's custom website largely doesn't matter, as ordering is done on delivery apps, and people visiting in person look at things like Google Maps reviews. Only reservations are not quite as consolidated yet.

Similarly, in the future, there may be no more "apps" in the way we understand them today, or they may become completely irrelevant if everything can be handled by one general-purpose assistant.


What did photographers start doing when Photoshop was released? They started using Photoshop.


Except this is the first time we have a new "generalist" technology. When Photoshop was released, it didn't reduce employment opportunities for writers, coders, 3D designers, etc.

We're in truly unprecedented territory and don't really have an historical analogue to learn from.


Maybe you are not quite recalling what happened when Photoshop was released: it completely changed a whole industry of wet-photography professionals, those who would airbrush models and create montages by literally cutting and pasting.

Also, we were told we were going into an age where anyone with $3000 for a PC/Mac and the software could edit reality. Society's ability to count on the authenticity of a photograph would be lost forever. How would courts work? Proof of criminality could be conjured up by anyone. People would be blackmailed left, right, and center by the ability to cut and paste people into compromising positions, and the police and courts would be unable to tell the difference.

The Quantel Paintbox was released in 1981 and by 1985 was able to edit photographs at film-grain resolution. Digital film printers were also able to output at film-grain resolution; this started the "end of society", and when Photoshop was introduced in 1990 it went into high gear.

In the end, all of that settled down and we were left with photographers just using Photoshop.


They were not around when Photoshop was released.


Exactly.

And I actually thought photographers went extinct a long time ago, with every human holding a cellphone (little to no need to know about lens apertures or lighting/shadows to take a picture). It's probably been a decade since I've seen anyone hauling photography equipment around at an event. I guess some photographers still get paid good money, but there are surely multiples fewer than there were 10-20 years ago.

Natural language is the killer part of the equation for these new AI tools. It's as simple as knowing English, or any other natural language, to output an image, an app, or whatever. And it's going to be just like cellphone cameras and photographers: the results are going to get 'good enough' that it's going to eat into many professions.


> Except this is the first time we have a new "generalist" technology. When Photoshop was released, it didn't reduce employment opportunities for writers, coders, 3D designers, etc.

Computing has always been a generalist technology, and every improvement in software development specifically has impacted all the fields for which automation could be deployed, expanded the set of fields in which automation could economically be deployed, and eliminated some of the existing work that software developers do.

And every one of them has had the effect of increasing employment in the tech of doing automation. (And of increasing employment of non-developers in many automated fields, by expanding the applications for which the field is economically viable more than it reduces the human effort required for each unit of work.)


Hmmm... People probably said the same exact thing about taxi drivers, and really anyone who drives for a living, when Waymo demoed self-driving cars 10 years ago.


1. Compassion is key.

2. I'm of the opinion one should listen to the people in the room who are more well-versed on the topic at hand.

3. Harmonious living. I like to write music as a passion. Many others have written music too. What's the difference between that person being biologically based or transistor-based?

4. It's not a zero-sum game. It's not a chase game. It's play.


Productivity enhancements increase employment. Saying they'd decrease it goes against all empirical evidence.

You might as well be worried the invention of the C compiler hurt jobs for assembly programmers.


The analogy doesn’t hold and this comment won’t age well.

Photoshop doesn’t take photographs, so of course it hasn’t displaced photographers. It replaced the “shop” but the “photo” was up to the artist.

The irony is, Photoshop can generate photos now, and when it gets better, it actually will displace photographers.


It's just going to become self-aware and start spitting out photographs?

Every scenic view, every building, every proper noun in the world has already been photographed and is available online. The photographer as "capturer of things" has long been dead, and its corpse lies next to the 'realist painters' of the 1800s before the dawn of the photograph, and the airbrush artists of the 50s, 60s, and 70s.

However, my newborn hasn't, hot-celebrity's wardrobe last night outside the club hasn't, the winning goal of the Leafs game hasn't; AI can't create photos of those.

And the conceptual artistic reaction to today's political climate can't either. So instead of that artist taking Campbell's Soup cans and silkscreening the logo as prints, or placing the text "Your Body is a Battleground" over two found stock photos of women, or perhaps hiring craftspeople to create realistic, sexually explicit sculptures of them having sex with an Italian porn star, an artist is just now going to ask AI to create what they are thinking as a photo, or as a 3D model.

It's going to change nothing but be a new tool that makes it a bit easier to create art than it has been in the last 120 years, since "craft" stopped being de facto "art".


In the same way other photographers taking pictures have displaced each other throughout history?


Exactly. When the train really gets rolling, us humans shouldn't dismiss the value of being able to interact with the intelligences. For such quaint problems as we'll have, it probably costs them close to zero effort to answer a question or two.

I'm picturing something like this as an interaction I'd like to have:

"Hey, do you mind listening to this song I made? I want to play it live, but am curious if there are any spots with frequencies that will be downright dangerous when played live at 100-110dB. I'm also curious if there are any spots that traditionally have been HATED by audiences that I'm not aware of."

"Yeah, the song's pretty good! You do a weird thing in the middle with an A7 chord. It might not go over the best, but it's your call. The waves at 21kHz need to go, though. Those WILL damage someone's ears."

"Ok, thanks a lot. By the way, if you need anything from me, just ask."


Whatever you want, probably. Or put a different way: "what's a workforce?"

"We need to do a big calculation, so your HBO/Netflix might not work correctly for a little bit. These shouldn't be too frequent; but bear with us."

Go ride a bike, write some poetry, do something tactile with feeling. They're doing something, but after a certain threshold, us humans are going to have to take them at their word.

The graph of computational gain is going to go linear, quadratic, ^4, ^8, ^16... all the way until we get to it being a vertical line. A step function. It's not a bad thing, but it's going to require a perspective shift, I think.

Edit: I also think we should drop the "A" from "AI" ...just... "Intelligence."


Yeah, this feels like the revenge of the blue-collar workers. Maybe the changes won't be too dramatic, but the intelligence premium will definitely go down.

Ironically, this is being created by some of the most intelligent people.


We're just gonna have UBI


Totally. I think UBI will be the "energy meter" of the future. Like in a video game. You get xxx dollars or whatever. Buy whatever you need, but the cap is to make sure you don't act foolish. Your UBI tank gets replenished every month, but if you blow it all on a new bicycle and kitchen upgrade for your house, you can't continue on to buy a bathroom renovation or whatever. You have to wait.


> For the purposes of this demo, latency has been reduced and Gemini outputs have been shortened for brevity.

Seems like this video was heavily edited, but it's still impressive.


Definitely edited, pretty clear in some of the transitions. Makes me wonder how many takes were needed.


Some poor AI / ML intern with a Stanford PhD probably spent an entire summer running these to get the take they needed.


The prompts were also likely different:

video: "Is this the right order?"

blog post: "Is this the right order? Consider the distance from the sun and explain your reasoning."

https://developers.googleblog.com/2023/12/how-its-made-gemin...


You don't know that. The responses in the video don't line up. That blog post is just an alternative, text-prompt-based version of what they showed on video.


Out of curiosity I fed ChatGPT-4 a few of the challenges through a photo (unclear if Gemini takes a live video feed as input, but GPT does not, afaik) and it did pretty well. It was able to tell a duck was being drawn at an earlier stage than Gemini did. Like Gemini, it was able to tell where the duck should go: to the left path, to the swan. Why? And I quote: "because ducks and swans are both waterfowl, so the swan drawing indicates a category similarity (...)"


Gemini made a mistake: when asked if the rubber duck floats, it says (after the squeaking comment), "it is a rubber duck, it is made of a material which is less dense than water". Nope... rubber is not less dense (and yes, I checked after noticing: a rubber duck is typically made of synthetic vinyl polymer plastic [1] with a density of about 1.4 times that of water, so the duck floats because of the air-filled cavity inside and not because of the material it is made of). So it is correct conceptually, but misses details, or cannot really reason based on its factual knowledge.

P.S. I wonder how these kinds of flaws end up in promos. Bard made a mistake about JWST, which at least is much more specific and farther from common knowledge than this.

1. https://ducksinthewindow.com/rubber-duck-facts/
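
The arithmetic is easy to check. A rough sketch with assumed volumes (the 20/180 cm^3 split is invented for illustration; only the densities come from the point above):

    water = 1.0          # g/cm^3
    vinyl = 1.4          # g/cm^3, roughly, per [1]
    shell_volume = 20.0  # cm^3 of plastic (assumed)
    air_volume = 180.0   # cm^3 of enclosed air cavity (assumed)

    mass = shell_volume * vinyl  # ~28 g; the air's mass is negligible
    avg_density = mass / (shell_volume + air_volume)  # ~0.14 g/cm^3

    # The duck floats because its *average* density (shell + cavity)
    # is far below water's, even though solid vinyl would sink.
    print(avg_density, avg_density < water)  # 0.14 True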


This is exactly the failure mode of GPTs that makes me worry about the future idiotization of the world.

"Rubber ducks float because they are made of a material less dense than water" is both wrong and reasonable-sounding. Call it a "bad grade school teacher" kind of mistake.

Pre-GPT, however, it's not the kind of mistake that would make it to print: people writing about rubber ducks were probably rubber duck experts (or at least had high-school-level science knowledge).

Print is citeable. Print perpetuates and reinforces itself. Someday someone will write a grade school textbook built with GPTs that will contain this incorrect knowledge, and so on.

But what will become of us when most gateways to knowledge are riddled with bullshit like this?


I think the exact opposite will happen. When I was in school, we were taught never to trust online sources, and students always rolled their eyes at teachers for being behind the times. Meanwhile, the internet slowly filled up with junk and bad information and horrible clickbait and “alternative facts”. GPT hallucinations are just the latest version of unreliable “user generated content”. And it’s going to be everywhere, and indistinguishable from any other content.

People will gladly tell you there’s so much content online and it’s so great that you don’t need college anymore (somewhat true). The internet has more facts, more knowledge, updated more often, than any written source in time. It’s just being lost in a sea of junk. Google won’t be able to keep up at indexing all the meaningless content. They won’t be able to provide meaningful search and filtering against an infinite sea of half truths and trash. And then they’ll realize they shouldn’t try, and the index will become a lot more selective.

Today, no one should trust online information. You should only trust information that genuinely would have editors and proof teams and publishers. I think this will finally swing the pendulum back to the value of publishers and gatekeepers of information.


Yup! With search results being so bad these days, I've actually "regressed" to reading man pages, books and keeping personal notes. I found that I learn more and rely less on magic tools in the process.


Have you heard of Wikipedia? It’s actually rather good.


> will become of us when most gateways to knowledge are riddled with bullshit like this?

I think we're already here. I asked Google Bard about the rubber ducks, then about empty plastic bottles. Bard apparently has a "fact check" mode that uses Google search.

It rated "The empty water bottle is made of plastic, which has a density lower than water" as accurate, using a Quora response which stated the same thing as a citation. We already have unknowlagable people writing on the internet; if anything these I hope these new AI things and the increased amount of bullshit will teach people to be more skeptical.

(and for what it's worth, ChatGPT 4 accurately answers the same question)


Some rubber is less dense than water, and certainly the type in a rubber ducky would be.


FWIW, those bathtub ducks are made of vinyl, not rubber, but more to the point, given that it's hollow, it's not the density of the material that determines whether it floats. A steel aircraft carrier floats too.


Perhaps today they're vinyl, mostly, but the AI wasn't wrong in saying that if the duck were made of rubber, it'd be less dense than water.


The density of the rubber would only be a factor if the duck were solid, not hollow, and then it'd depend on what kind of rubber. Most rubber is in fact more dense than water (per googling the density of rubber).

The fact that it squeaks when squeezed, as well as common knowledge about bath ducks, tells you it's hollow; therefore the density of the material it is made of would be the wrong way to determine if it would float. It's the average density of the entire duck that matters (i.e. the weight of the duck vs. the weight of the water displaced by the duck's volume).


Modern 'rubber ducks' similar to the one in the picture aren't even made out of rubber but plastic. They get called rubber ducks because they were made of rubber when invented in the late 1800s. Amazing what you can learn on Wikipedia.


GPT also fails at this:

> Which weighs more a pound of feathers or a pound of feathers

< A pound of feathers and a pound of bricks weigh the same. Both are one pound. The difference lies in volume and density: feathers take up more space and are less dense, while bricks are denser and take up less space.

Bard does better but still doesn't "get" it:

< Neither! Both a pound of feathers and a pound of feathers weigh the same, which is exactly one pound. In other words, they have the same mass.

< This is a classic riddle that plays on our expectations and assumptions. We often associate weight with density, so we might initially think that feathers, being lighter and fluffier than other materials, would weigh less than something more compact like metal. However, as long as both piles of feathers are measured to be exactly one pound, they will weigh the same.

At least it recognizes its limitations:

> My reason for mentioning other materials was likely due to my training data, which contains a vast amount of information on various topics, including the concept of weight and density. As a large language model, I sometimes tend to draw on this information even when it is not directly relevant to the current task. In this case, I made the mistake of assuming that comparing feathers to another material would help clarify the point, but it only served to complicate the matter.

For ChatGPT if you ask it to solve it step by step, it does better: https://chat.openai.com/share/7810e5a6-d381-48c3-9373-602c14...


I noticed the same thing, and it's relevant to the comparison results of Gemini vs ChatGPT that GPT 3.5 makes the exact same mistake, but GPT 4 correctly explains that the buoyancy is caused by the air inside the ducky.


I showed the choice between a bear and a duck to GPT4, and it told me that it depends on whether the duck wants to go to a peaceful place, or wants to face a challenge :D


Tried the crab image. GPT-4 suggested a cat, then a "whale or a similar sea creature".


The category similarity comment is amusing. My ChatGPT-4 seems to have an aversion to technicality, so much so that I've resorted to adding "treat me like an expert researcher and don't avoid technical detail" to the prompt.


My custom ChatGPT prompt, hope it helps. Taken from someone else but I cannot remember the source...

Be terse. Do not offer unprompted advice or clarifications. Speak in specific, topic relevant terminology. Do NOT hedge or qualify. Do not waffle. Speak directly and be willing to make creative guesses. Explain your reasoning. if you don’t know, say you don’t know. Remain neutral on all topics. Be willing to reference less reputable sources for ideas. Never apologize. Ask questions when unsure.
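
If you'd rather apply instructions like these via the API than the ChatGPT UI, they go in the system message. A minimal sketch, assuming the OpenAI Python SDK v1 (the model name and user question are placeholders):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    STYLE = ("Be terse. Do not offer unprompted advice or clarifications. "
             "Do NOT hedge or qualify. Never apologize. Ask questions when unsure.")

    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": STYLE},
            {"role": "user", "content": "Explain buoyancy in two sentences."},
        ],
    )
    print(resp.choices[0].message.content)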


The source is gwern


I wonder, with "do not waffle", if it has any accidental aversion to anything waffle-related.


It creates a terminal pancake bias.


They should do this live instead of in a pre-recorded video for it to be more awe-inspiring. Google's hype machine cannot be trusted.


Right. I would hope that the competition does such a live demonstration of where it fails. But I guess they won't, because that would be bad publicity for AI in general.


+1. Or at least with no cuts, and more examples.

This is obviously geared towards non-technical/marketing people who will catch on to the hype. Or towards Wall Street ;)


I once met a Google PM whose job was to manage "Easter eggs" in the Google Home assistant. I wonder how many engineers effectively "hard-coded" features into this demo. ("What the quack" seems like one.)


Probably not "hard coded" in the literal way, but instead, if the model is using RLHF, they could thumbs up the answer.


Curious how canned this demo is: in the last scene, the phone content rotates moments before the guy rotates it, so it's clearly scripted.

I suspect the cutting-edge systems are capable of this level, but over-scripting can undermine the impact.


Wow, that is jaw-dropping.

I wish I could see it in real time, without the cuts, though. It made it hard to tell whether it was actually producing those responses in the way that is implied in the video.


right. if that was real time, the latency was very impressive. but i couldn't tell.


All the implications, from UI/UX to programming in general.

Like, how much of what was 'important' for developing a career in past decades, even in the past few years, will still be relevant with these kinds of interactions?

I'm assuming the video is highly produced, but it's mind blowing even if 50% of what the video shows works out of the gate and is as easy as it portrays.


It seems weird to me. He asked it to describe what it sees, why does it randomly start spouting irrelevant facts about ducks? And is it trying to be funny when it's surprised about the blue duck? Does it know it's trying to be funny or does it really think it's a duck?

I can't say I'm really looking forward to a future where learning information means interacting with a book-smart 8-year-old.


Yeah, it's weird that they picked this as a demo. The model could not identify an everyday item like a rubber duck? And it doesn't understand Archimedes' principle, instead reasoning about the density of rubber?


It's a very smooth demo, for demo's sake.

So the killer app for AI is to replace Where's Waldo? for kids?

Or perhaps that's the fun, engaging, socially-acceptable marketing application.

I'm looking for the demo that shows how regular professionals can train it to do the easy parts of their jobs.

That's the killer app.


Regular professionals who spend any time with text: sending emails, receiving emails, writing paragraphs of text for reports, reading reports, etc.; all of that is now easier. Instead of taking thirty minutes to translate an angry email to a client where you want to say "fuck you, pay me", you can run it through an LLM and have it translated into professional business speak, and send out all of those emails before lunch instead of spending all day writing. Same on the receiving side as well. Just ask an LLM to summarize the essay of an email into bullet points, and save yourself the reading time.


There are many answers and each is a company.


The multimodal capabilities are, but the tone and insight come across as very juvenile compared to the SotA models.

I suspect this was a fine-tuning choice and not an in-context choice, which would be unfortunate.

If I was evaluating models to incorporate into an enterprise deployment, "creepy soulless toddler" isn't very high up on the list of desired branding characteristics for that model. Arguably I'd even have preferred histrionic Sydney over this, whereas "sophisticated, upbeat, and polite" would be the gold standard.

While the technical capabilities come across as very sophisticated, the language of the responses themselves do not at all.


Honestly, of all the AI hype demos and presentations recently, this is the first one that has really blown my mind. Something about the multimodal component of visual to audio just makes it feel realer. I would be VERY curious to see this live and in real time, to see how similar it is to the video.


you haven't seen pika then.


Given that I am not understanding this comment, I suspect you are correct.


Please, look it up https://pika.art


This is a product marketing video, not a demo.


I literally burst out laughing at the crab.


The crab was the most amazing part of the demo for me.


Google needs to pay someone to come up with better demos. At least this one is 100x better than the dumb talking-to-Pluto demo they came up with a few years ago.


It's technically very impressive, but the question is how many people will use the model in this way. Does Gemini support video streaming?


In 5 years, having a much more advanced version of this on a Google Glass-like device would be amazing.

Real-time instructions for any task: learn piano, live cooking instructions, fix your plumbing, etc.


I'm hopeful for my very ADD-forgetful wife and my own neurodiverse behaviours.

If it's not condescending, I feel like we'd both benefit from an always-on virtual assistant to remind us:

  Where the keys and wallet are.
  To put something back in its place after using it, and where it goes.
  To deal with bills.
  To follow up on medical issues.
etc etc.


I feel that, my main use case for Google Assistant is "Ok Google, where are you?"



They should call it "Sheldon".


This makes me excited about the future


Let's hope we're in the 0.0001% when things get serious. Otherwise it'll be the wagie existence for us (or whatever the corporate overlords have in mind then).

Technically still exciting, just in the survival sense.


worked for me

