I'm kinda confused why Unity keeps doing this - they keep putting out high-end demo after high-end demo, but that's not where their core userbase is. Their main users are people who build games for phones and indies, with basically zero usage in the AAA space. And Unity's performance/stability is still not that great, afaik.
It seems to me that they're trying to prove they're a serious 'AAA' engine, but these demos aren't that convincing to me - AAA is a lot more than putting fancy shaders on high-poly models. It's about handling huge numbers of objects in dynamic situations, streaming large worlds, having a workflow that accommodates every creative professional, and offering great performance and visuals even in very complex scenes.
I've heard that even these highly impressive demos are fake - for each one they built a ton of custom code that rebuilds core Unity features, meaning if you wanted to replicate this for yourself, you'd be in for a ton of development.
By comparison, UE5's Nanite demo showed off tech that's ready to go for production.
> Their main users are people who build games for phones and indies, with basically zero usage in the AAA space
They don't need to convince their already huge and already convinced core users, so there's not much point in building big tech demos that apply in that space. The aspirational AAA stuff that Unity puts out does 3 things I think:
- convince current indies with big dreams that their investment in Unity has long legs if they grow larger and more ambitious.
- open up to non-gaming gfx tech sectors: broadcast, movies, simulation, etc.
- push their engine to its limits so they know where it hurts the most, in places that actual users are not pushing. (It has many known rough edges, but Unity users already report those.)
These core users (indie developers) aren't "convinced" anymore; they're starting to be skeptical of Unity's management of the engine. Since Unity has failed to fix so many bugs and abandoned too many half-baked features, they're starting to think Unreal is the more stable choice in the long term. Gamedevs value engine stability and workflow improvements far more than shiny graphics and new features, because at the end of the day the most important thing is whether you can actually ship a game without problems, not how good it looks in trailers.
Also, Unreal has enough money to burn (from Fortnite) to venture off into new spaces. Unity is taking far more of a risk here, since these indie gamedevs are still the main revenue flow (especially considering that Unity sells their engine on a monthly seat-based model rather than a revenue-sharing model like Unreal's). If these core users move away and investors become skeptical that Unity's going to make money (they're still burning tons of investor money every year without turning a profit), what we're going to see is a total disaster.
I don't know why you are getting downvoted; I don't necessarily agree with everything you said, but I thought it at least contributed to the discussion.
I'm a full time Unity dev, but have many years of Unreal experience, and we occasionally look at what Unreal is doing now. To be honest, I just don't want to code in Blueprints or go back to C++.
If I were building a multiplayer game, or a big open world game, then I might be convinced to give up C#, and go back to Unreal.
With regard to management of the engine, I think there have been some less-than-optimal decisions (URP vs HDRP, Multiplayer, DOTS), but it's really a huge project, and I hate to criticize because I'm not sure I would have done anything differently.
"Coding" in Unity and UE4 simply isn't fun. UE4's C++ macro abomination is bad but I find that 90% of the value in amateur game development is in the art content. Since I'm not an artist, I've simply given up game dev as a hobby.
I believe Unity's management is correct here. To venture into new territory is crucial.
The revenue from mobile apps is quickly drying up now that we are getting stronger anti-gambling laws. Because let's be honest, mobile games make money by hooking rich gambling addicts. That's why the industry calls them "Chinese Whales" and the "99.9%-ers" who are allowed to play for free so that the actual customers have enemies to crush with their pay-to-win items.
That said, I have a very low expectation of Unity-the-company to pull this off successfully.
I'm not a game dev and just wanted to have a look at Unity, so take my opinion for what it is, but when I installed Unity the first thing I wanted was to change the size of the tiny fonts of the IDE on my 4K monitor. Well, it seems you cannot change the fonts. My mind exploded.
Since 2019.3 (at least), there's a setting under Edit > Preferences > UI Scaling that lets you adjust icon/text scaling. It defaults to using your desktop scaling settings.
It's been my opinion for a while that the $150/mo seat fee, which not all of their users pay, isn't covering their burn rate, and that they are losing market share in money-making projects to Unreal. Ideally they would like proper percentages, but since they lack a compelling advantage, they must price lower relative to Unreal. Feel free to prove me wrong with actual numbers (but: 1800 employees in SF, at an average of $200k/employee, I make that $360M/yr just on salary :) So, anyhow, I completely agree with this post. And, looking for cash flow and better rendering tech, they buy Weta .... maybe they will pull this off .. but maybe not, also.
I don't disagree with your post, but it's worth noting that most of Unity's revenue comes from Unity Ads and their other services. Software licenses/subscriptions are only about a quarter of their revenue.
> - open up to non-gaming gfx tech sectors: broadcast, movies, simulation, etc.
This is a big one, and I'd guess it will be true in the near future if it isn't already. If their tech becomes easy/common enough, some sites will benefit greatly from this: virtual museums/zoos, a better Google Street View interface, virtual space exploration, physics demonstrations for educational purposes, etc.
They have explicitly stated in earnings calls and their strategy documents that their focus is not only on gaming, but also on owning the toolsets of the metaverse.
They compete with UE, so whatever UE does, they must do as well, for at least the appearance of parity.
But make no mistake: Unity is the reason Unreal suddenly got cheaper and opened up its source a few years back. Both companies read The Innovator's Dilemma and both know WTF they are doing.
BTW, will Epic's ludicrously, mission-distortingly successful Fortnite, like Wrigley's, cause them to become The Fortnite Company? (Similarly, will Rockstar become the GTA V Company?)
They will always be the Unreal Tournament company to me. Unreal and Return to Na Pali are still two of my favorite games to this day. Honestly wish someone would make games like those again.
Unreal getting paid only when a user makes money [1] also compounds the dog-fooding. I believe that is how a company should be run - align your interests directly with the users'.
With Unity, the price per seat means only people who leave will send a signal, which will be too late. They are not incentivized to make day-to-day work efficient, which is why there are a million small bugs.
[1] - https://www.unrealengine.com/en-US/faq - "This license is free to use and incurs 5% royalties when you monetize your game or other interactive off-the-shelf product and your lifetime gross revenues from that product exceed $1,000,000 USD."
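For concreteness, the quoted terms reduce to something like this (a simplified sketch in C#; real agreements, especially for bigger clients, are negotiated and have more nuance than this):

    // Simplified model of the quoted terms: 5% royalty on lifetime gross
    // revenue beyond the first $1,000,000. Illustrative only - real
    // agreements (especially for bigger clients) are negotiated.
    static decimal UnrealRoyalty(decimal lifetimeGross)
    {
        const decimal threshold = 1_000_000m;
        const decimal rate = 0.05m;
        return lifetimeGross <= threshold
            ? 0m
            : (lifetimeGross - threshold) * rate;
    }

    System.Console.WriteLine(UnrealRoyalty(900_000m));    // 0
    System.Console.WriteLine(UnrealRoyalty(2_000_000m));  // 50000.00

The interesting property is that the cost is exactly zero until a game actually succeeds, which is the alignment the parent comment is pointing at.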
Licensing their engine is just a side business for Epic, and I'd bet on average they're not getting anything even close to 5% (all their bigger clients should have custom agreements). But that's fine for them, since they're making loads of money from Fortnite with much less effort (than you would need to make anything similar from just a game engine).
I’m not sure how the overall quality compares, but I’d expect Unity’s users to be way more critical even if it had more bugs just because of all the indie/small developers who’d fall below the 1mil threshold for Unreal but have to pay for Unity.
They are more or less the Fortnite company. The Fortnite money printer churns out cash directly into the maw of the Epic Store money incinerator. This might turn out to be a great strategy in the long run but either way, they wouldn't be able to pursue it on this scale without Fortnite.
Epic Games states that Fortnite has 350 million accounts and 2.5 billion friend connections. It seems like this is the next step of social networking, and I'm sure the connections are much more authentic than Facebook's.
>It seems like this is the next step of social networking
Many years ago I was playing Runescape and a friend quipped that MMOs are basically Myspace with a game attached to it. And I think he was right. MMOs live and die by the network effect. Runescape had 150 million user accounts something like a decade ago, if not even before that. I'm sure World of Warcraft, Lineage 2, and League of Legends had at least comparable numbers.
I think if this were the next step of social networking, then it would have already happened. Perhaps it already did? We just call them gaming communities. If that's the case, then video games have created competing social networks that exist at the same time and aren't just being eaten by the biggest one.
I would still expect that most people on FB have a higher number of ‘authentic’ relations there on average than someone who plays Fortnite has in-game. Unless all of your friends/family play it all the time as well.
I guess it depends on where you live, though. Where I'm from, Messenger is still the most popular messaging platform, so most people (unless they have very strong feelings about FB) have an account.
Location/demographic probably does have a rather large impact.
There are very few close relationships that I have on Facebook. I have it because I needed an account for a while, and haven't deleted it so that it can function as a placeholder should anyone try to impersonate me.
Meanwhile, almost all of my friends have a presence on Steam - including some whom I rarely if ever see in person.
The great thing about this is that all the code/features they dogfood in Fortnite get major improvements. Network play, for example, got a lot better simply because Fortnite ironed out lots of small latency and scalability issues.
I'm just starting to play with Unity, and the reason I'm playing with it at all is that I can essentially learn along the way with them, since they are still releasing things that I just don't quite need yet. This may not make sense to some people, but no offense to them... their opinions don't matter. We are the ones using this, not them.
> They don't need to convince their already huge and already convinced core users
I actually disagree with this on a much broader level: I think that every company should continue to convince their core users.
1. Their core users are always considering alternatives
2. The people using their competitor’s products will see how well you treat your core users and may look at their current company and see the lack of it. It’s like when a company chases new customers by giving discount after discount but never once offering a discount to great long-term customers. It leaves a bad taste in the mouth of the people who’ve carried your company.
In my sphere (of ~90%/10% Unity/Unreal devs), a lot of Unreal's recent releases have been extremely tempting reasons to start migrating from Unity and get more familiar with Unreal. They're inspiring, and they make a lot of upper-end Unity tech feel outdated by comparison. Demos like this are a nice reassurance that Unity's capable of whatever we need to wrangle it to do, even if we're not building hyper-realistic experiences like this now. It's nice to not feel like my preferred stack is falling behind the curve.
Unreal seems to be making these things easier to achieve through tooling for games, where I feel Unity is just trying to point out that you could still reach this level of fidelity if you wanted to, like you said.
I think if I were to start a high fidelity project with a big team, I'd start seriously considering the art pipeline benefits you're getting from Unreal. It looks like they're trying pretty hard to take some of the tedious work out of lighting and setting up LODs, which could be a big time savings.
I always have this quote from Garry Newman, of GMod & Rust (a Unity project) fame, at the back of my mind when hearing any news about Unity's new demos.
> The short movie demos aren't impressive. It's embarrassing when half your keynote is a group of artists explaining how they spent 6 months re-writing half the engine to render something that they could have made in Maya in a week.
I really wish Unity would stop being recommended as a great first choice for hobbyists with a bit of programming experience. In my experience I've found it a lot harder to get going with, and it requires dealing with a lot of systems in various stages of deprecation or early access compared to UE.
I'm just a hobby game dev, but I'm super glad I chose to stick with Godot. It has always felt very intuitive, and the alpha for version 4 is looking super great too. I don't know if Unity cares about 2D games, but to me it seems like Godot is a way better free choice for 2D, and Unreal is a way better choice for 3D.
> Show me a 100 player PUBG type game running at solid 60fps on a current gen console or mobile. We already know you can render a GTA3 cutscene.
So, as I understand it, he complains that the demo team spends more time making demos that look like movies instead of demos that focus more on the game parts.
I've been using Unity professionally for large indie/AA development for a few years now. Before that I did five years of AAA with Frostbite. I've also used a bunch of other engines, including Unreal, in my spare time.
Unity is definitely a very stable engine with a lot of features and very nice and simple to work with. I know several developers who see Unity as a better alternative than Unreal for any scale because the development process is so smooth. As a graphics programmer I feel liberated by how easy it is to set things up.
I've also looked into performance quite a lot and it's not bad. There are issues, especially on certain platforms, and a lot of gotchas. But their analysis tools are really mature and help solve a lot of issues without having to load up external tools. It's definitely possible to make beautiful and performant Unity games.
This is the view they want to promote. What they don't want you to know is that the engine keeps churning out half baked features which never get fully fleshed out or documented properly. A lot of indie developers also complain about support but at least with their enterprise support you often get swift and helpful answers.
What I feel really hurts whenever I use UE is iteration time. Everything takes too long. E.g. compilation (code, shaders), but there are lots of other examples.
That was my thought too - that the target of these demos wasn't just game devs. I always saw these high-end demos as a way to move into more traditional storytelling, like movies and TV. I think of them as a kind of skating to where the puck is going to be, as opposed to where the puck is (or where you want the market to move).
Just to add to what everyone else is saying, they could simply believe that showing these demos is what attracts the smaller devs.
People who are just getting started will see this and figure that Unity must surely be capable of everything they need - maybe even trivially - if it can also handle this.
It's also a piece of aspirational marketing. It doesn't make sense to show the reality of the mobile game space as that won't get anyone excited. It's what you could do, not what you will do.
This is my thinking as well. Unity needs to show that it possesses the prestige features that attract novice developers who aren't yet thinking about things like ease of use and production workflow.
They want to compete in the AAA space. The reason Unity has little usage in the AAA space is that, in the past, Unity hasn't been good enough performance-wise. They need to show developers and convince publishers that Unity is a safe and viable choice for console development. If they fail to accomplish this, they're ceding a huge amount of marketshare to their competition.
An anecdote from my studio:
My studio is work for hire with mostly Unity devs. The "big ticket" contracts are in Unreal, so the studio is starting to push for devs to learn Unreal, so we can start landing those contracts. In this case, the publisher perception of Unity/Unreal is creating a business need in our company to move away from Unity and into Unreal.
> I've heard that even these highly impressive demos are fake - for each one they built a ton of custom code that rebuilds core Unity features, meaning if you wanted to replicate this for yourself, you'd be in for a ton of development.
It's sadly even worse. To replicate most of this, you need access to the C++ part of the engine source code. If you're small (e.g. less than $10M annual revenue), they won't even discuss pricing with you.
> I'm kinda confused why Unity keeps doing this - they keep putting out high-end demo after high-end demo, but that's not where their core userbase is.
I fully agree. One of the top-voted comments on the YouTube demo video even captures this pretty well, in a humorous way:
> Can't wait to use this to make another low poly game
Check out Rust and Escape from Tarkov; they are a big step above your typical Unity games. So even if it's not their core userbase, they certainly have such titles already, and it seems that they want more.
As someone who bought it 4+ years ago in alpha, it's a game that has come a long, long way - same with Rust, really. It's not without its issues; it's hard to say which are engine problems and which are just bad decisions made by the devs, which is hard to blame them for, as their situation is a bit different compared to many other game companies.
If you actually read the case study, there are 0 mentions of Unity. This PDF is about Multiplay (acquired by Unity in 2017) and the backend; the game is on a highly modified Source engine.
Pretty sure they want to break out of the core userbase. I once interviewed there and they really stressed that they were a 3D platform and not just a game engine.
In that context it would make sense to make demos for the capabilities that people don't know that you have.
AAA definition for other lay-readers confused by the jargon:
games produced and distributed by a mid-sized or major publisher, which typically have higher development and marketing budgets than other tiers of games.
In my opinion, it mainly stands for low-risk productions where you recycle a proven core game mechanic with amazing and new graphics. For example, the Assassin's Creed series where each game looks unique and great, but they all play very similar.
That's a little backwards: AAA games are expensive, which makes them inherently risky. This is why most successful companies try to minimize risk by using proven franchises and gameplay.
However, you do see plenty of exceptions like the original Dark Souls which took serious risks. Which was then followed by two lower risk sequels.
I don't work in the industry, but as an outsider your question seems to answer itself. I would wager that Unity wants to take the step from the hobbyist space to the professional space — because surely that's where the bigger sums are made. Even if they're not there right now, consistently putting out content that indicates that you're working on it is a great way to captivate your audience over time and shift the perception of what your brand is about. In the startup space, this would be similar to building in the open — it helps you signal your brand, build a reputation and, hopefully, build a customer base that matches your sell.
Although you mean well, this sounds a lot like "why does <X poor country> launch satellites or do science when their population is starving?" Doing one doesn't contradict the other.
They can't afford to just keep narrowing down on their existing customer base. They need to keep expanding and competing, e.g. with Unreal: https://youtu.be/bErPsq5kPzE
Unity has been courting the VFX community for years for building prerendered scenes. Unreal is doing much better in that area than they are. Gaining more ground here was why they acquired WETA Digital. If Unity can gain a foothold, then they don't need a lot of users...just a few really big ones.
I was going to comment similarly. Unless Unity has a competitive answer to Lumen + Nanite, they're fucked out of the hyper realistic pipeline. They need to come up with an answer to those or double down on their actual strengths for indie development.
Something else I haven't seen mentioned in the replies: demos are shiny promotion-bait (both of the internal title-promotion style, and also the company itself hungry for any kind of engagement).
They are probably still hoping to compete with Epic in the real-time cinema engine market, although it seems that Unreal has pretty much become the standard now for big film productions.
They do it maybe for similar reasons some car manufacturers are involved in Formula 1: prestige, pushing their engineering competence, application of research that benefits their products, etc.
In his last keynote, John Carmack mentioned the possibility of the Metaverse being something like a Unity Plugin that federated platforms could tie into.
This is indeed rendered in realtime, but one thing to note is it's a "4D" capture, more-or-less meaning each frame of the animation is its own asset. This makes it possible to reproduce subtle physics like the lips sticking together slightly when the actor opens her mouth. The amount of storage space, alone, makes this impractical for anything other than demos. Unity claims they will be able to achieve this level of fidelity using a deep learning-based compression that will allow stuff like this to appear in game cutscenes, but all the movements will still be pre-baked. The only interaction possible will be moving the camera. At that point the technology will be very useful, but it's still a ways away from having such a realistic character that can react to you dynamically.
(Though whether that's just a couple years of software technology progress, or a decade+ for hardware progress, who can say?)
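For a rough sense of why storage is the bottleneck here, a back-of-envelope sketch; every number in it is my own assumption for illustration, not a figure from Unity:

    // Rough feel for why storage dominates "4D" capture: every animation
    // frame effectively ships its own mesh. All numbers below are my own
    // assumptions for illustration, not figures from Unity.
    const int vertices = 40_000;        // dense face mesh (assumed)
    const int bytesPerVertex = 3 * 4;   // xyz positions as 32-bit floats
    const int fps = 30;

    long bytesPerSecond = (long)vertices * bytesPerVertex * fps;
    System.Console.WriteLine($"{bytesPerSecond / 1e6:F1} MB/s raw");
    // ~14.4 MB/s, i.e. ~864 MB per minute - before normals, UVs, or
    // texture streams. Hence the interest in learned compression.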
Unity recently acquired Ziva, which specializes in the detailed animation of humans and other animals. They were known for their (not realtime) physics-based solutions, but now they have an ML model for faces, apparently. As far as I know, it's still in beta and not widely available. Unity says they will re-release this demo with the Ziva face in a matter of weeks and the quality will be even higher. And possibly allowing interactivity as well?? I guess we'll see in a few weeks.
Superresolution. You render a lower-resolution animation (fewer pixels = fewer calculations) and then use superresolution to turn that into a 4K image. This is reality right now on NVIDIA GPUs (it's called DLSS).
There is one out there from 5 years ago or so that is similar to Google's Seurat but for animated stuff, I think pre-baking triangle culling for different views within a limited volume. I can't remember the name of it, from the details I remember (there was a realistic orangutan or something like that rendered with fur) I should be able to find it on Google, but Google search has become degraded recently.
Nvidia DLSS is an important part of how they achieved 30Hz at 4k resolution, but that's more of a shading assist and doesn't affect the animation. The facial animation will be compressed with Ziva's ML solution.
Cutscenes work a lot better (more immersive) if they can correctly reflect runtime-defined assets, e.g. your own character with your customizations, gear and clothes, etc, or the dynamic state of the environment in which gameplay was happening: destruction debris, current time of day, and such.
Cause they want to push the limits and make their engine look amazing. Also, if they research hard enough, in-game becomes nearly as good as video to the point you can’t tell.
Movement won't be pre-baked: a physics-engine sim will be baked into the neural network, and movements will be another dimension for the deep learning network. And then all of that will be baked into an agent that has been trained to carry out motives (with a simulation of your character, etc). The same applies to speech as to movement. And the deep-learned compression rate will be magnificent.
No predictions, just an explainer of how AI agents are trained. For instance, RL is about presenting an environment via rules (gravity, etc), and letting the agent learn its way around, thus discovering what it can and cannot do (a policy for the environment).
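For anyone unfamiliar with that loop, here's a minimal tabular Q-learning sketch on a toy environment; everything in it is illustrative and engine-agnostic:

    // Tabular Q-learning on a toy 5-state walk: the "environment rules"
    // are just the transition and reward functions below; the agent
    // discovers a policy by trial and error. Purely illustrative.
    var rng = new System.Random(0);
    const int states = 5, actions = 2;        // action 0 = left, 1 = right
    var q = new double[states, actions];
    const double alpha = 0.1, gamma = 0.9, eps = 0.1;

    for (int episode = 0; episode < 500; episode++)
    {
        int s = 0;
        while (s != states - 1)               // rightmost state = goal
        {
            // Epsilon-greedy: mostly exploit, sometimes explore
            // (and break ties randomly so learning gets off the ground).
            int a = (rng.NextDouble() < eps || q[s, 0] == q[s, 1])
                ? rng.Next(actions)
                : (q[s, 0] > q[s, 1] ? 0 : 1);
            int next = a == 1 ? s + 1 : System.Math.Max(s - 1, 0);
            double reward = next == states - 1 ? 1.0 : 0.0;
            double best = System.Math.Max(q[next, 0], q[next, 1]);
            q[s, a] += alpha * (reward + gamma * best - q[s, a]);
            s = next;
        }
    }
    // After training, the greedy action in every state is "right":
    // the agent has discovered what it can do and what pays off.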
You didn't explain how anything actually works, you gave a very crude prediction with a lot of holes of how you think something will work in the future.
"Look at our demo of a hyper-realistic human character!"
eighteen million things happen around the character to distract the viewer's attention from the human character throughout most of the video
Not the most confident tech demo. Looks like a reasonable degree of evolution, but as they start to get up toward turn-of-the-century movie CGI in games, now you run into the same issues that you see in mocap for movies--including that a lot of details of an actor's expressions and movements actually aren't captured, so you have to have animators go back and laboriously add all those lost details back in to have something that looks convincing.
(Mind, I say turn-of-the-century, but man, the fundamental techniques for rendering skin convincingly have come a long way, even in current game engines.)
Someone who knows better can correct me, but I assume they put the character in a complex environment to show that they are rendering a highly realistic character while still rendering a complex environment. I remember very impressive character rendering tech demos from 10-20 years ago, but it was a single character in a static environment.
Also they have a character making weird facial expressions in a weird situation - I legitimately cannot tell whether the occasional uncanny valley effect I felt was due to intentional direction or just the limitation of the tech. I suspect this is intentional.
It isn't perfect yet, and there are subtle cues that this isn't a real person. The character and dialog do seem to be deliberately leaning into this so these flaws don't detract. To me, the character started moving her hands in a very unreal way at the start. And then I realized the movement I was seeing was almost certainly motion-captured and likely exactly as the actor performed it!
> eighteen million things happen around the character to distract the viewer's attention from the human character throughout most of the video
Yes. Why did they do all those crappy CG effects while showcasing photorealistic characters? Two people seated across a chessboard in a realistic room would have been more effective.
It's getting to the point that everybody is doing this. Unreal Engine has Metahuman Creator.[1] Even Second Life has reasonably good heads now.[2] And has facial tracking on their roadmap.
I didn't think I would care about face tracking, but Star Citizen uses a Face over IP system and man, it really enriches the online experience. You can communicate a lot more subtlety with it.
It also necessarily includes head tracking, so you can do TrackIR-style head tracking as well, which again adds a lot to the experience: being able to move your view around independently of the mouse, and in ways a mouse can't move.
I guess the simple reason is because then you wouldn't believe it was computer generated but it could just be a movie of two people playing chess. It needs to add a touch of the bizarre or fantastic to remind the viewer they are watching something that isn't real. Though, frankly, the talking portion was reminder enough of that.
They're showing off a lot of things at once here. The real-time raytraced lighting on their VFX is impressive. To do that in tandem with their human while maintaining 30fps is really something.
Everything draws from the same well, so to speak; a human that looked a little better but had to exist alone in a white room would have far less utility for real things.
This is really impressive assuming it is as automated/scalable as Unity implies.
The skin and hair look great which are both really challenging for their own reasons, but one particular detail that was surprising was at 1:36. If you watch her lips closely you can see where the top and bottom lip stick together as they open. If they can simulate that level of skin detail when the motion is backed by mocap data this could be a huge quality jump for character-driven and dialogue heavy games.
There are definitely still moments where the lips, teeth, and tongue look "3D" but I don't think I would notice if I weren't hyper focusing on the mouth and just enjoying the story.
Realism in regards to rendering software has nothing to do with realism in the final product.
Pixar and Disney have been among the most influential and revolutionary companies in terms of rendering techniques. And most of their products are stylized.
Stylized looks come from the assets, not from the rendering engine. The rendering engine gives you physically correct light interactions (which you almost always want, and even then you can turn knobs to make it non physically correct or outright do whatever you want using shaders), things like fur/hair rendering, fluid simulations, volumetrics etc. All these things can be tweaked to non realistic values by the artist.
If you want a movie/game where characters are made of wax, or fire, or smoke, or something even crazier with shaders, you want as a basis a renderer that is physically based and uses mathematical models based on the real world, and then tweak from there.
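As a tiny concrete example of "start physically based, then tweak": here's a hedged sketch in C# of bending a Lambert diffuse term into a toon ramp; it's illustrative only, not any engine's actual shading API:

    // Sketch of "physically based core, stylized on top": take a Lambert
    // diffuse term and quantize it into discrete bands for a toon look.
    // Illustrative C# - not Unity's or Unreal's actual shading API.
    static float StylizedDiffuse(float nDotL, int bands = 3)
    {
        float lambert = System.Math.Max(nDotL, 0f);   // physical base term
        float banded = (float)System.Math.Floor(lambert * bands) / (bands - 1);
        return System.Math.Min(banded, 1f);           // artist-chosen ramp
    }

    // StylizedDiffuse(0.55f) -> 0.5f: a flat mid band instead of the
    // smooth falloff a physically correct shader would produce.

The point being: the realistic term comes first, and the style is a knob turned on top of it, exactly as the parent describes.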
Then your art will always resemble the physical reality we already know. Computers and mathematics enable us to render literally anything. And all you can think of are things you see every day. I'm looking at wax, fire and smoke right now, there's a candle on my desk. If it's realism I wanted, I could just stare at that (and sometimes do).
The primary point of my argument is that we don't have things in mind that current rendering engines can't handle, precisely because our current engines are so hell-bent on replicating reality. Why dream of what your tools can't make?
An example would be good, but I think hypertele-Xii's point isn't about what engines can handle, but about where engine developers are (and aren't) spending their money and time, and what they're making easy for users to do --- that if they make realism easy, then that's where users will go.
Not OP, but I think rendering scenarios where light doesn't follow straight lines would be problematic for current rendering engines. Also, some hypothetical portal or other dimension scenarios.
I really enjoyed games like Red Dead Redemption 2 and Cyberpunk 2077. They are not per se the reality you and I live in today, but different lifestyles in different times.
I feel like these types of games really benefit from the work that is put into making humans and environments look realistic.
Heck, even if you're making some weird alien world, you're still going to benefit from realistic physics engines, for example, because that's just how our universe works.
So go and do that, but unless the way that universe works is defined as "whatever is easy for a computer to simulate and render" then you will still likely benefit from techniques that were originally developed to simulate the real world.
That's totally cool. We're all entitled to like and value different things. The awesome thing with video games is that because there are so many of them, one can pick from a spectrum of realism. It's not an all-or-nothing problem.
There are a ton of people who value photorealism, and that's why resources are being put into it. And it's certainly not seen as "wastefulness" by those people. Personally, I would absolutely love to be able to put on a pair of VR goggles and be unable to tell if I'm viewing a real-time render, or a remote stereoscopic camera feed of a real scene. Yet, I don't think that fantastical physically-unrealistic rendering is a "wasteful" use of resources.
If we were discussing a video game, I'd agree, but we're discussing a tool for creating video games. A tool that influences creators' understanding of what they can make. If the tool becomes opinionated, it will produce works accordingly. When Unity makes hyperrealistic humans easy to make, that's what we'll be seeing people make with it.
Even if we got to the point where the digital humans are indistinguishable from real footage, all your realism is gonna go out the window when you encounter a bug where a character is half-clipped into a wall. Or when an animation breaks and someone is standing with a backwards bent knee, and no reaction on their face.
The idea that the ultimate endgame for video games is to have them look like a playable movie, or be a real-world simulation just seems silly to me.
Photography gave painting (and the visual arts in general) a massive boost by virtue of stripping away the aspiration to "ultra-realism" in paintings. After all, that's what photography can accomplish much better and much more easily. This made painters look for other directions to push their art into, which freed up artists' bandwidth for a lot of other, previously overlooked and less popular genres, such as surrealism.
Not saying that photography is responsible for creating those genres, they existed before. But (relatively) cheap and accessible photography definitely diverted attention from ultra-realistic paintings to other painting genres that enjoyed greater growth due to that.
After all, what do you think will happen once it becomes very easy and cheap for anyone to have ultra-realistic 3D models and animation in videogames? It will become much less valued, as it is something that anyone can do very easily. So people doing 3D art will need to look in other directions to distinguish their work. I have already been observing it happen over the decades in gaming.
For present-day examples: Nintendo manages to nail the style of visuals in a lot of their first-party titles without even attempting photo-realism. Guilty Gear -STRIVE- (made by Arc System Works) is killing it with their over-stylized visuals. Atlus and their Persona series keep pushing more abstract and surreal depictions with each game, and they resonate well with the audience. And I've only listed AAA-tier examples. Once you dive into indies, you can go on forever with lots of well-executed products that pretty much intentionally went against photo-realism.
I think the plethora of filters and their popularity totally disagrees with this. Also, things like HDR processing to make things look eerily unnatural, desaturation f/x, etc are all ways of taking the reality into a new place.
As a serial buyer of games, I'd like to disagree (if only to prove that there is at least one). I often buy games just to marvel at the technical aspects and realism of them. I find realism and reality extremely interesting. Now, "realistic" doesn't need to be "everyday" looking. Take CP2077 for example.
Reality is also hard, and that's why people aim to reach it. Making an alien world is easy because the viewer has no reference point, so you can go bananas with it.
It is the opposite. Reality is easy because it is familiar. Our brains are tuned to it from the core. To make something truly alien is to make something you've never seen, heard, touched, smelled, nor felt before. That is incredibly difficult, has always been, and has the potential to birth whole new genres of art.
There will never be new genres of art or music. Everything has been discovered already. Like colours, they are all already known. Sure, you can have a new take on something, or maybe discover a new subgenre, but the majority is already there. And just because a genre would have a new "name" (just a branding thing anyway) doesn't mean it's not part of some other genre already.
You can't capture psychedelic and/or mystic experiences, which are very subjective anyway - and yes, even those are rooted in _reality_, or require a reference to _reality_, as far away from it as they are.
Do you believe that there were no genres of music born in the 20th century, or that we conveniently discovered all the genres just a few decades ago? Neither of those seems reasonable to me.
You can't automate art. AAA is in large part about converting money into more money by way of video game sales. That's why it shouldn't be synonymous with anything elite.
You can certainly pour all these resources being spent on rendering 'hyper-realistic digital humans' into developing tools, workflows, and processes that give artists power and utility for crafting wondrous worlds.
You could always say "I could have made that", but the important part is: you didn't.
No one cares if you automate the painting of a black square. They care about the intention of the mind that thought it was important enough to do. That's why it's art.
Eh, keep building this technology and give me the ability to push a script into an AI and have it render my world for me with a little tweaking here or there. We're not there yet, but it recalls a moment in one of Iain Banks' novels where one of the Minds "lies." Since it would be trivial to generate a false flag that had all the data necessary to prove an event "happened," you had to trust that the Mind wasn't lying about what it witnessed.
What is truly astonishing to me is we are close, not quite there, but close. I remember when The Spirits Within came out in 2001 and we were so blown away. I look at it now and it feels so amateurish in comparison.
Your post is being downvoted because blocking cookies does not solve the problem. It is blocking cookies that causes the issue. When cookies are blocked (in my case, third-party cookies are blocked), an overlay appears over the video explaining the website that hosts the video will not allow the video to be played without a targeting cookie. FWIW, removing the overlay was not enough to get the video to play on the website; but I haven't looked any further into it—easier to just go to YouTube to play the video.
What's interesting is that for most of my childhood (80s and 90s), aside from a few wow moments, I was pretty underwhelmed by the standard of the tech. It was like it was trying hard to be something it wasn't yet ready to be. I used to walk into television shops and think they all just looked crap. Computers used to frustrate me so much - crunch crunch to do anything, and 256 colours was deemed good (!?). The first music players where you might be able to get one whole album onto a memory card that was too expensive to put a price on in the RS catalogue (or perhaps too volatile a price). Anyway, tech was crap. Then around 2005 or something, it started becoming what it was meant to be. You could buy a computer and it could do everything you needed it to do; of course you always wanted more number crunching, but you could see it was only just around the corner. Then GPUs started doing computation, and computation stopped being thought about. Memory was super fast and copious. One now felt limited by programming capability, not hardware. I'm now genuinely excited by technology. As much as it pains me to say it, the television departments are places of wonder.
It's in that context that this post feels like another step towards achieving some promised vision. If this is realtime, it is truly fantastic.
Now to hope we can deal with climate change and despots and poverty so it's not all for nought.
As an aside, one early wow moment was the first time I saw a mini camcorder, then another for the first genuinely mobile phone; both Sony, I think. Also, though a bit later, I remember seeing an in-car GPS and deciding it was basically a perfect interface for its task.
If you're interested in tech becoming really magic, lookup the VRchat club scene. Real-life skilled DJs put out of clubs by covid, setup a twitch stream with a webcam on their DJ controller, while wearing full-body VR themselves, also streaming their DJ software and in-game view to an in-world stage screen in front of up to 80 guests, many if not most also wearing full-body VR, dancing along together, in any kind of digital world and wearing any kind of semi-humanoid (or not) 3D model you can imagine. The "metaverse" has already been running for a few years, but it's not Meta's, it's a thin multiplayer VR wrapper over Unity.
Yes, even voice chat feels dystopian if that's all your interaction with other humans is. Social platforms have ironically made us less social beings. Our real selves live in the digital world while our fake zombie selves do the mundane tasks in the real world. It does not feel right at all. It's easier to connect and find similar people through the internet, so I get why it is like this, but I think it's also a factor that's making us less mentally balanced, making the real world feel more foreign and harder to interact with. A drug of social interaction that's not real, which leaves you lonely in the end.
I would say the cause is that current internet social connection techniques don't have enough depth to them. The telephone doesn't let you see the other person's hand gestures or facial expressions. Twitter/Facebook is only a keyboard + reading text. On Instagram you can see a snapshot of an instant and people's thoughts about that instant. Etc...
VR is working to try and bridge all of these by essentially creating a transporter/teleporter to a shared physical space. Imagine if VRChat was a 1-to-1 replication of you. I feel like this is one of the end goals of VR.
> It's easier to connect and find similar people through the internet, so I get why it is like this, but I think it's also a factor that's making us less mentally balanced
This isn't true of just the internet; this is true of cities. People no longer care about one another; each person/bond is replaceable with another one, and dating apps only help prove this fact. We no longer need to rely on one another to survive and are therefore more independent. We are less willing to give up (sacrifice) parts of our independence and less tolerant of flaws in others. I think this is what leads to higher divorce rates now. How many people live in high-rise buildings and actually know many of their neighbors?
It’s that you’re alone in a room with a screen strapped to your face looking at pixels instead of seeing, touching, feeling, smelling, hearing, sweating, dancing, and sensing in the middle of a rave of physical human beings
It’s literally senseless in comparison, lossy digitalization of basic analog humanity ie dystopian
They are Hyper-Realistic Digital Stills of Humans.
There's a lot of room for improvement in the animation department, though. For me the worst offender is the mouth, it moves in a completely unrealistic manner. The second worst offender is the head/neck movement, which moves with robot-like precision. Finally, the eyes, which are (granted) not as "dead" as in other models, stare too much and communicate too little.
The hand movements were jerky too. Just like they were in the 1990s. I don't understand why we're still struggling to produce animation that isn't uncanny valley.
Most hand animations are done with keyframes, because mocapping hands is difficult. And with keyframe animation, much like modelling geometry traditionally vs photogrammetry, we can't produce the amount of detail needed for animations to look real.
Moreover, we are extremely good at spotting something off about hand movement.
Mouth movements were not 100% either. FWIW, mouth animation is actually quite hard to get right. I remember artists at a previous company I worked at (also doing super-realistic digital humans) spending hours trying to get it just right, and you could still always notice it being a bit off...
Electric cars/bicycles/scooters/motorcycles, computers in everyone's pockets, huge cheap TVs, fast wireless internet, robotic lawn mowers, radar cruise control, and the list goes on and on and on...
Drones, for sure. Only a few years ago, a decent digital camera cost $400. Now I can buy one just as good, for the same price, that fucking flies.
Anyone who doesn't experience a genuine "Holy shit, I'm actually living in the future" moment when they first encounter a Mavic Mini or similar modern consumer drone is probably in need of pharmaceutical help or talk therapy.
Well, high-quality digital is a whole different game from film. In the 1980s the closest thing was a VHS camcorder, and my 4K Micro Four Thirds digital camera absolutely blows a 1980s VHS camcorder out of the water.
Yes, the image quality of a 4K camera is technically less than that of a film camera, but show a 1980s movie producer a digital workflow and their mind would be blown. You shoot the picture and can then immediately review and even edit.
So 4K video beats home VHS on quality by leaps and bounds, and for anyone who would have been producing video on film, the digital workflow is amazing.
Hmmm, a 4K camera that can fly around for 30+ minutes with a controller attached to your cell phone (which in and of itself would blow the mind of any person in the 80s), with the visual clarity of an image shot on a 35mm motion picture camera weighing 70+ lbs that could only be used aerially by renting a professionally piloted plane/helicopter at a higher $$-per-hour rate than the drone you fly on your own costs?
No, I can't imagine any benefits of modern tech to someone from the 80s ;-)
Depends what you mean by convenience. The image quality of a 4K camera isn't that much better, but that's largely because it doesn't need to be. Cameras in the 80s already produced great pictures. The difference is that they can now weigh less than a pound and run for hours at a time without worrying about paying ridiculous amounts for film.
My mind was blown out the back of my head the first time I saw Super Mario Bros in an arcade in 1986-7. I wonder what would have happened to my sanity if someone had shown me a modern game one minute later.
The way people use computers and the Internet has been a huge change. Wikipedia sounds like something that couldn't possibly work, except it turns out that it (mostly) does. Self-landing rockets are pretty impressive. I think the rise of free and open source software would be surprising to most people. The fact that Russia and the United States haven't directly fought a war with each other in all this time and our cities haven't been reduced to rubble by nuclear weapons would seem pretty remarkable to an 80's person. (Though one might want to hold off a few weeks/months before declaring premature victory on that front.) Dystopian predictions about the environment were kind of right -- the effects are there, but American cities don't look like Blade Runner quite yet. Manipulation of society doesn't look like 1984 unless you're in an authoritarian country. Instead, big brother watches you from electronic devices that people voluntarily buy and use, and "big brother" is usually a private adtech company.
My parents weren't just released from prison after having been locked away from society since the 80s. Nor have they just awoken from 35 year comas. Sure, they'll have a different perspective than someone who doesn't remember the 80s but they'll surely also have a different perspective from someone with a more abrupt and recent introduction to the latest modernity.
They'll have watched The Jetsons, Blade Runner, 2001, 2010: The Year We Make Contact, and a host of other science fiction that all made the year 2000 look like the year 2100. In 1981, we sure didn't think the Berlin Wall would fall and the USSR would cease to exist a decade later (unless that came about by an apocalypse).
I mean, yeah, if they're satisfied with viewing demos on Youtube. The vast majority of people can't actually play the newest games at their best settings :D
Interesting comment from a game developer on /r/pcgaming [0]
> The problem with Unity cinematics is that they are much less representative of their tools and engine out-of-the-box capabilities than Unreal's demos.
> They use heavily customized assets specifically for a video and even make a lot of custom changes to the actual engine, held together by hot glue and duct tape, to make it work. Then you try to recreate only fragments of what was shown and nothing works. Even their released demos are usually not very portable or useful for devs, while in Unreal you can pretty much copy useful modules and most of it just works. This is why devs struggle to make things that look like demos Unity released 5 years ago...
> Unreal is less smoke and mirror. They focus more on things that are actually available inside the editor (eg. the main character in Matrix Awakens was taken straight from the metahuman editor instead of spending hundreds of thousands of dollars on custom scanned model rigged by a huge team of world experts).
> Unity does a much better job with their demos of impressing the general public, but look at the comments on their own forums and you will see over and over again how many devs are frustrated with their marketing tricks that don't turn into something real.
> Some devs say that this difference comes from the fact that the Unreal guys have to eat their own dog food, because they also make games (like Fortnite), while Unity is more detached from the actual needs of game devs; recently they became obsessed with the film industry and even acquired Weta Digital, which made LOTR and Avatar. It should be exciting and impressive, but the comments from their licensees (waiting for years for core issues to be fixed) were quite negative.
The "glued together" may be a little overstated. Their "Mega-City" demoed in 2019 is available to download and run as a standard unity project. I haven't looked at any of the code, but the fact that it runs on a mainline version of unity and not some forked, hacked together version should count for something.
> they tend to show off how tech will look 4-5 years down the line
Unity's tech demo "Adam" is now more than 5 years old, and I still don't see any Unity games coming even remotely close to it. Unity's tech demos are just hoaxes: carefully crafted demos with custom engine parts and basically hand-written stuff like shaders and controllers all over. The irony is that these demos are primarily aimed at developers, yet you can't take a single piece of them and just make it work in your own setup. In contrast, with Unreal it's usually just a Ctrl+C and Ctrl+V away from success...
Unity games? No, not to my knowledge. But the industry has certainly gotten extremely close to realizing the tech being leveraged in Adam. Some may even argue that some UE5 showcases with real samples have surpassed it, despite Adam being a non-realtime showcase.
I don't think this is because you can't just copy-paste an Adam asset into the Unity engine, however. AAA devs are going to leverage their own assets regardless.
I think that's more of an image problem than a "technologically possible" problem, however. Many of Unity's huge successes come from the indie side - the Oris, the Hollow Knights, the Cupheads - and it has a very big grasp on the mobile market. Even Pokemon Go decided to use Unity, despite the IP being backed by a publisher known for its in-house engines. So a AAA studio isn't thinking of Unity for their next Call of Duty, but maybe for their Call of Duty mobile title.
They seem to be trying to combat that sentiment with the likes of DOTS, HDRP, and whatever Ziva is trying to do, but those are still TBD.
Rendered hair gets better all the time, but it's still not there.
Sucks for me, because I'm weird and find it hard not to pay close attention to hair in real life sometimes. This makes flaws in rendered hair extremely obvious to me. :(
Definitely, the hair effects are still not there. Something is off, the hair moves as a block in an unnatural way. The rest of the face is pretty good though. The eyes are weirdly intense, but I suppose it was a deliberate choice and some people with blue eyes look like that.
I agree about the hair. It looked great when it didn't move. I thought the eyes were great; to the extent that they were intense, she's an intense looking lady; and they may have done so to amplify the effects.
I also thought the lips were great. It was the interplay between lips and teeth that really threw me off.
It occurred to me yesterday that at some point someone will inevitably advocate for the banning of deep fake pornography in their country by creating a porno starring the politicians of their lower house. I reckon it’d be illegal within the week.
Interesting, but I wonder how this would be enforced. I wouldn't be surprised if over half these cases involved people who live in entirely different countries. Would another country extradite a citizen over this? This has "technically" been possible for centuries, so I wonder if there's any case study surrounding it.
It is clear that the Unity devs spent much time modeling the bones, muscles, blood vessels, fat, and unique skin properties of the face. I would not be surprised if surgeons were consulted. Was this attention to detail given to other parts of the human body?
A lot of people are working on it. They mainly live on Patreon. For VR, specifically. It’s the only feasible way to approach 6dof porn, currently, AFAIK.
I mean, you could have hired a hyperrealist oil painter to paint porn characters 30 years ago, but it is so much more work and expense compared to using real people.
I don't see how a game engine is much different for the foreseeable future. It is so far away from undercutting humans in this domain in terms of time and expense.
The mouth movement[0] sold it for me. Mouths are incredibly hard to do in CG, because you have to have realistic interactions between lips, teeth, and tongue. Not sure how they scanned this so well (maybe it was hand animated?) but it looks excellent.
Really? Because that was the worst part for me. To me it looks absolutely and positively synthetic. I mean, it is better than Terrance and Phillip from South Park, but perhaps because it's "close to reality, but not there yet" it falls into the uncanny valley to me.
Perhaps this is similar to how some people aren't bothered by bad kerning (to which I'm fairly tolerant).
You're right. The mouth movement was awful. I have no idea what OP is on about saying it was good. Maybe my uncanny valley is also steep, but nothing about this demo struck me as good.
The tech maybe is good, but I wouldn't know it from this video. The animation and lighting are just awful.
The link in the OP states that screen-space global illumination was used, which is probably why the part where the character descends below the surface looks really fake. Still quite a bit behind Unreal Engine, unfortunately, which already supports ray-traced "real" global illumination.
Am I the only one who finds that lipsync is ALWAYS off in every single hyper-realistic game / demo? Is lipsync also subject to uncanny valley or what? Why is it so damn hard to sync audio with lips in these simulations?
Go to 1:34 in the video, slow it down to 0.25x on Youtube, and press play.
> Unity’s Demo productions drive advanced use of the Unity real-time 3D platform through autonomous creative projects, led by Creative Director Veselin Efremov.
I was going to say that the overlaps between one lock of hair to another, between hair and background/hair and skin, and the edges of lips and teeth looked a little poorly keyed, like the objects had high resolution textures but the surface map/motion rig was low-poly...which would be annoying but probably easy enough to ignore in an indie film.
But live? On a single (hard to get) consumer GPU? That's seriously impressive. It makes me wonder how much of this is hand-tuned rigging and how much is physics based; if you tried to shake hands with this digital human using a game controller or VR rig, how would that look?
Wait till you see Unreal's Matrix demo running live on a PS5 (even with interactive parts) instead of just some hand-crafted marketing material from Unity on YouTube yet again.
Yes, and did you actually take a look into that? I made the point in another comment: what they provide either consists of cherry-picked assets, doesn't work properly until you do quite a lot of plumbing, or shows how much is actually "hand-crafted" rather than using engine-specific features. For example, you import a captured model and animation that is basically just keyframed vertices, while they claim how cool and procedural their animation toolchain is... That's not how this works. Unreal does this kind of demo-specific customization too, but on a much smaller scale. Unity demos barely show actual engine features; they are just made to impress the general layperson.
I remember for a long while everything was "Toy Story" quality in real time, the PS2, the PS3, etc. It never really was.
But at some point, we definitely passed it. The room is nifty, but mostly been done before. The person, though, is pretty good. The lip sync is a bit off somehow... perhaps just too exaggerated in its motions. But from a still frame, I couldn't tell you that wasn't a real person in real clothes.
I also continue to find it amusing that we can build a person like that and sell it as commercial tech, but we still have to record people talking. (Though TTS has taken an interesting turn lately, after years of not much.)
> I also continue to find it amusing that we can build a person like that and sell it as commercial tech, but we still have to record people talking. (Though TTS has taken an interesting turn lately, after years of not much.)
I suspect part of the problem is that dialog in games is still largely "static"; if the writing is pre-canned, then it does not make much sense to develop advanced TTS to act it out. The situation will become interesting if we manage to produce a sufficiently dynamic dialog system where pre-recorded voice acting is no longer feasible.
Tangentially related: why can't movie studios use a Snapchat-like filter to replace the mouth movement in foreign-language films with the mouth movement of the voiceover actors? I feel like that technology definitely exists. There are so many great foreign movies and shows, but the voiceover plus unmatched mouth movement can be so distracting. Is it just too expensive?
I wouldn't want to watch something like that, I want to watch the original film. Then again I also never watch dubbed movies/tv, I prefer reading subtitles.
Yes, motion is highly uncorrelated. E.g. the whole body has no movement when she raises a hand with a chess piece. Humans don't do that. Everything would move from the chest up to assist the motion.
It's motion captured, that's probably just how the actress moved. Real stuff looks fake all the time. The parts that look wrong in the animations are mostly due to small imprecisions, especially in the facial ones.
In-engine is not in-game. I'm not that interested in what a game engine can do unless it's also constrained by doing all the other things a game engine has to do. Show me what's possible in Unity at 100 fps on a consumer system while also running a normal game loop.
I've noticed that the more "realistic" animated humans become, the less conventionally attractive they become. I began to notice this years ago when characters started to have skin blemishes etc. It struck me as a simple gimmick then. Like all they could do was increase the texture resolution and to show that off you need "interesting" facial features.
All of this is just trickery to distract you from the uncanny valley. The real test of this rendering is "perfect" faces like Charlize Theron or George Clooney. Those are real people too, by the way.
I also notice that the latest "hyper-realistic" models are almost always female too. I wonder why that is.
Most of the comments that I can see below are restricted to discussing the impact of the announcement and demo on the Unity vs. Unreal situation. However, there are other very relevant and important issues arising from the technological specifics behind the demo which, although they obviously bear directly on the rivalry, have potentially broader implications.
'Uncanny valley' issues around facial (and body-motion) realism, in both games development and video production, remain a (pun unavoidable) rapidly moving target despite recent, very significant advances; this demo highlights both substantial progress and serious, as-yet unresolved shortcomings.
The advances mostly relate to the introduction of machine learning, trained on 15 TB of data, into the content-production workflow, reducing a rendering job from 6 hours per 50 frames to 3 milliseconds per frame, as detailed in this article:
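For scale, here is my own back-of-the-envelope arithmetic from those figures (assuming "6 hours per 50 frames" means wall-clock render time):

```latex
\frac{6\,\mathrm{h}}{50\ \text{frames}}
  = \frac{21600\,\mathrm{s}}{50}
  = 432\,\mathrm{s/frame},
\qquad
\frac{432\,\mathrm{s}}{3\,\mathrm{ms}} \approx 1.4 \times 10^{5}
```

So ML buys roughly a 144,000x per-frame speedup here, which is what turns an offline pipeline into a real-time one.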
Returning for just a moment to the Unity vs. Unreal thing: those results were discussed in an article about a demo created using Ziva Dynamics's Real-Time Trainer (ZRT) and Unreal Engine 4.26, but Ziva was subsequently acquired by Unity.
The discussion I'd be interested to see here (or somewhere else!) is about how ZRT compares with (or perhaps even fits into?) Unreal Engine's Metahuman workflow and resulting content creation and realism results.
Perhaps more importantly, how might ZRT's ML-driven capabilities (will they continue to be available under Unreal as well as Unity?) and their impact on realism be expected to manifest themselves?
Here's a link to the ZRT 'preparation workflow', which a ZBrush user who was trialling the trainer setup says took them about an hour to produce a workable result:
How come the latest cutting edge tech demos showcasing photorealism are barely more ‘realistic’ than a PS4 video game cutscene from a few years ago? There wasn’t any part of the demo video where the guy looked or moved like a human.
Those cutscenes weren't rendered in real time; rendering took longer than one second per second of video. This one is rendered in real time. Rendering lifelike humans is impressive for sure, but doing so in real time is even more impressive.
Don't most games use real-time cutscenes these days? Nothing shown here is a gigantic leap over what you see in Death Stranding, for example. I'm not looking at her and seeing an actress; it still just looks like a very good 3D model.
Yes, but that's the point. They should have released it as an app that we can interact with: pan around, move the chess pieces. As it is, it might as well just be a video.
Actually they should have plugged in stockfish and let us play against their character.
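For what it's worth, wiring that up wouldn't be hard. Here's a minimal sketch in C# (hypothetical, my own illustration, not anything Unity shipped) that talks to a local Stockfish binary over the standard UCI protocol; it assumes a `stockfish` executable is on your PATH:

```csharp
using System;
using System.Diagnostics;

// Hypothetical sketch: drive a local Stockfish binary over the UCI protocol.
// Stockfish and UCI are real; this wiring is illustrative only and assumes
// a "stockfish" executable is reachable on PATH.
class StockfishDemo
{
    static void Main()
    {
        var psi = new ProcessStartInfo("stockfish")
        {
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            UseShellExecute = false,
        };
        using var engine = Process.Start(psi);
        engine.StandardInput.AutoFlush = true;

        // Standard UCI handshake, then ask for a reply to 1. e4.
        engine.StandardInput.WriteLine("uci");
        engine.StandardInput.WriteLine("position startpos moves e2e4");
        engine.StandardInput.WriteLine("go movetime 1000"); // think for 1 second

        string line;
        while ((line = engine.StandardOutput.ReadLine()) != null)
        {
            if (line.StartsWith("bestmove"))
            {
                Console.WriteLine(line); // e.g. "bestmove e7e5 ponder g1f3"
                break;
            }
        }
        engine.StandardInput.WriteLine("quit");
    }
}
```

From there you'd feed the returned move into whatever animation system drives the character.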
It does not look like they simulate the muscles underneath the skin which makes it feel uncanny unfortunately. Close though! Very close and impressive indeed. Static rendering is excellent.
Just google "Unity tutorial". If I remember correctly, Unity scripting uses C#.
You are not going to be doing anything close to this out of the gate, but I do recall it being a pretty easy environment for novices to learn.
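To give a feel for the scale of a first script, here's a minimal sketch of the kind of C# you'd write on day one (the class name and speed value are my own, not from any particular tutorial). You attach it to a GameObject in the editor and the object spins:

```csharp
using UnityEngine;

// Minimal first Unity script: rotates whatever GameObject it is attached to.
public class Spinner : MonoBehaviour
{
    // Rotation speed in degrees per second; editable in the Inspector.
    public float speed = 90f;

    // Update is called once per frame by the engine.
    void Update()
    {
        // Scale by Time.deltaTime so the spin rate is frame-rate independent.
        transform.Rotate(0f, speed * Time.deltaTime, 0f);
    }
}
```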
That’s great. Other studios, like Pearl Abyss, are working on realism and performance as well. The difference is, they actually have games using the tech.
I want to make a demo like this but I don’t even know where to start. Does it take thousands of hours of skills and training to become this good at demos?
Nope. Nope. Nope. Do not want. That’s too good. It’s creepy good. I need to know what’s real and what isn’t. This is so real that I’m freaking out a bit.