Artificial Intelligence Uses Videogame Footage to Recreate Game Engine (gatech.edu)
254 points by ingve on Sept 13, 2017 | 63 comments



This is reminiscent of the Tribes 2 development, when the developers revealed they lost the original Tribes source code, and thus the physics calculations.

Beta testers constantly complained that falling didn't feel right, and that jetting and skiing (the game's main movement mechanics) were slow and soupy. Players were leaking videos and photos showing the differences in motion.

At one point, Dynamix hired a top player to playtest, as he "remembered the best" how it felt. All to reclaim the physics calculations of some game from the Leisure Suit Larry company.


As a matter of fact, there is still a regular group of Tribes 2 players online. This past Memorial Day weekend, I booted up a VM, downloaded the ISOs, etc. and played a capture-the-flag match for a few hours... eventually with 16+ on the same server.

The game is 16 years old... old enough to drive a car and it's still being played!


REALLY?? I wonder if there's a Tribes 1 revival group...

Tribes 1 was an absolute favourite in my household growing up. It was the first real viable, extendable MMO fighter I'd played. I don't know if I've had as much fun in a similar format (and I enjoy [like the odd ice cream treat] some modern ones like BF1). Maybe it's just sentimentalism. It was the first game I got my younger siblings into, and they became obsessed. More than me... haha...

SHAZBOT!


Tribes 1 was open sourced a number of years ago and is free to play. Tons of mod servers and vanilla still exist and of course SHAZBOT!


You got me excited for a few minutes, but...no, Tribes 1 was not open sourced. The game was released as freeware though, along with a lot of the other Earthsiege and Tribes games.

They can be downloaded here as iso files: https://www.tribesuniverse.com/


Oh, apparently it wasn't open sourced, but the source has been floating around online for quite some time: http://www.tribalwar.com/forums/archive/index.php?t-679947.h...


Well... there goes my night.


Get it up and running / working. Snap Crackle Pub is the 'main' server nowadays it seems. You're in luck, as it seems tonight 8pm EST there's gonna be a special event going on?

-----

https://www.reddit.com/r/Tribes/comments/6vmsqu/tribes_2_sma...

Smackdown in Tribestown happening Tomorrow at 8pm EST on SNAP LAK Server. 67.222.138.13:28000

Midair guys are coming over to Tribes2 to play a few games and warm up for the Closed Beta Release of Midair. Should be a good time. If you need a copy of the game preconfigured check out my Tribes 2 Config here. Join us in Discord for more info.

------

...and download the map pack here => put in prog files / dynamix / tribes / maps / etc. => http://t2.branzone.com/


The best I got was BFBC2, BF3 wasn't as good for me (too much griefing, overpowered, non-fun, skill-mismatch). I have hope for BF1 but haven't gotten in to it.

The crazy thing is that with T2 technology, they technically support(ed) 128 players in the same map! Epic, epic, epic, epic, on a scale that can't be imagined today for melee / team combat.

Or maybe that's just my nostalgia and lack of a Windows PC kicking in. 8v8 doesn't do it for me compared to Tribes. Eve looks awesome but it's not as punchy, crunchy, or visceral.

Any random game recommendations to recapture that feeling?


BF1 can be downright hairy. That's what drew me in. You can't get 128 to my knowledge, but there are regularly games with 64 players. It's probably the closest thing. (plus a bit of nostalgia from BF Vietnam and 1942 with my siblings as well)

For altogether different reasons I really enjoy Star Wars Battlefront.

I'm not a huge gamer, though. There's probably something out there that I'm not aware of. I did buy a Windows PC/gpu just so I could do some gaming, though... (well, and for some basic VR dev project)


Tribes 1 was the first FPS I played after about...Star Wars Dark Forces, I think. It's the first game that I had the opportunity to play on a LAN, and it's the first game that I remember using mouselook in.

We had a community center near home, with about 10 gaming PCs. Tribes was a huge favorite, especially with all the different mods that were available.

We switched almost completely to AVP when that came out, though.


>We had a community center near home, with about 10 gaming PCs. Tribes was a huge favorite, especially with all the different mods that were available.

You just reminded me. I had a real slack-off sort of teacher(s) in my 9th and 10th grade communications labs, and Tribes would run on the old Dell boxes they had in there for whatever work. We had a whole system worked out with a watcher and everybody playing over LAN in the lab whenever the teacher stepped out of the room. I'd forgotten all about it.


I just downloaded AVP (the original), I think it was available on GoG / Steam? Works pretty well with Parallels on Mac, VM + Windows evaluation license!


Yep, I see it on GoG and Steam.

I had an interesting experience with that. I'd borrowed the game from a friend, and it didn't play nicely with the integrated SiS graphics chip on my computer (performance was fine, but a lot of elements of the game didn't render). I contacted Fox Interactive's support team. They asked for my address, and I got a "replacement" copy of the game a few months later. To be clear, I didn't ask for a replacement, and stated that I was borrowing the game from a friend. Still, a nice surprise.

Also, the engine itself was open-sourced years ago, and a Linux port was made. It should be perfectly possible to clean up some bitrot and get it working on modern systems, including natively on the (x86) Mac. Makes me curious if it would run on the PPC Mac that I've got in my closet.


Nothing else quite does it for me like T2 does/did. I know a lot of it is nostalgia, but the game is also just clean. Somehow it looks and feels better than many modern games to me. I still think the UI (buggy as it was sometimes) looks great; it still feels futuristic, and game UIs 10 years later look like trash.

Having in-game mail, IRC, and an actual clan system was, I think, awesome and ahead of its time as well.


If you're into team-based base-building games, check out http://tremulous.net, an open-source asymmetric-teams FPS + RTS.


I can remember playing it with my Voodoo card. T2 had Glide support and with that on, you could see much further than players with OpenGL cards.


Killer soundtrack, too!


I'm not surprised, because things get lost to history all the time, but do you have a source?


Love T2. I'm not affiliated directly (aside from being a KS backer), but if you liked the Tribes franchise, take a look at Midair, a game being developed by a lot of former T2 players (although from what I understand, it has more of a T1 feel, or a blend of the two): https://www.playmidair.com/


Wasn't the mountain surfing/skiing a physics bug, that they liked so much they just kept in the game?


Same thing with bunny hopping in Quake, Half-Life, and subsequently Counter-Strike. For an even deeper cut, at least the older CS games have a whole sub-genre of surf maps specifically about navigating large areas with lots of pitfalls using that feature (previously a bug).


I believe they took pains to prevent the bunny hopping in Quake 3.


Indeed they did try that with a patch, but it had unintended consequences with "regular" movement feeling clunky, so they reverted it. It seems bunny hopping is a tightly linked side-effect of the set of physics rules that make those games "feel" good.


Why wouldn't they just sample the game repeatedly to reverse engineer the physics?

The original Tribes certainly remained in play while Tribes 2 development was happening.


Yeah, I could see using this to match physics and gameplay feel for games. Their example uses Mega Man physics and is quite close. It might be a good analysis tool as well, to verify gameplay tweaks to physics, lighting, etc. between versions or when comparing changes, almost like a gameplay diff.


The Leisure Suit Larry company was Sierra. Loved playing those games! Hero’s Quest, later renamed to Quest for Glory was probably my favorite. The physics in those games were amazing for the time, so I’m not surprised.


Sierra were the producers, Dynamix were the developers. Of course, things seemed muddled when both Sierra salesmen and Dynamix developers were hanging out on the internet forums with us players. I think Sierra bought Dynamix at one point.


Sierra was both a publisher and a developer, led by the husband-and-wife team of Ken and Roberta Williams. The first Leisure Suit Larry came out in 1987. Sierra didn't acquire Dynamix until 1990, and Dynamix had nothing to do with Larry. Dynamix was known more for flight simulator games like Red Baron. The only forums around then were on Usenet and BBSes. I used to run one of those BBSes. Fun times!


Or they could just have disassembled the original to try to recover the original calculations.


If only they hadn't lost the source. It's apparently since been found at GarageGames, a game dev company started when Dynamix/Sierra fired all the devs after T2 was released.


Never expected to see tribes mentioned on hackernews!


Next up evercrack er Everquest.


It sounds like the only input is video. Video plus the user's controls might be much more interesting; there's a huge archive of input recordings available at tasvideos.org that could conceivably be used as a source, rather than making people actually play the damn things.

Or you could just take some of the other AIs designed to play games based on video and wire them up. Then just let your system learn to play AND run Mega Metroid Brothers.


The paper says that the input is video + the set of sprites used in each game. This simplifies the problem quite a bit. I tried to work from screenshots of Mario to recover the sprites, and it was a surprisingly challenging problem.
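As a toy illustration of why sprite recovery is harder than it looks, here is a minimal sketch (the function name, the synthetic frame, and the 16x16-grid assumption are all hypothetical, not from the paper) that deduplicates grid-aligned tiles from a frame. Real sprites move off-grid, which is exactly where this naive approach breaks down:

```python
import numpy as np

def extract_tiles(frame, tile=16):
    """Naively deduplicate grid-aligned tiles from a frame.

    Assumes every sprite sits on a fixed tile-sized grid -- the
    assumption that fails in practice, since moving sprites are
    rarely grid-aligned on any given frame.
    """
    h, w = frame.shape[:2]
    seen = {}
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = frame[y:y + tile, x:x + tile]
            # Key by raw bytes so identical tiles collapse to one entry.
            seen.setdefault(patch.tobytes(), patch)
    return list(seen.values())

# Synthetic 32x32 "frame" built from two distinct 16x16 tiles.
a = np.zeros((16, 16), dtype=np.uint8)
b = np.ones((16, 16), dtype=np.uint8)
frame = np.block([[a, b], [b, a]])
print(len(extract_tiles(frame)))  # 2 unique tiles
```

Even this trivial version only works because the synthetic frame is perfectly aligned; a real Mario screenshot would shear sprites across tile boundaries.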


>Then just let your system learn to play AND run Mega Metroid Brothers.

One of the many games on the triangle with vertices Mega Man, Super Metroid, and Mario Brothers?


I'd play something in that space. At least to see if it is good.


I don't have any experience in the field, but reading the paper, it seems impossibly weak and almost useless. This should only work for a very limited type of game, in which case it will never help anyone speed up game construction more than a simple sprite engine. It seems more an exercise in using openCV.

What am I missing here, because I'm positive that I am missing something?


Imagine this paired with a system like the one Bret Victor demos in "Inventing on Principle" [0] or in "Stop Drawing Dead Fish" [1], in which the user hand-animates what they want the game movement/overall gameplay to look like, adjusting as necessary. The system described in this paper could get you a decent way from that hand-animated mockup to a working game that looks and feels first and foremost like you intended, rather than working from a playable-but-bad-feeling prototype game engine and having to endlessly adjust to get the game feel "right".

[0]. https://youtu.be/PUv66718DII?t=29m20s [1]. https://youtu.be/ZfytHvgHybA


Looking at the paper the set of games it can build seems rather limited. It could work for cookie cutter platformers (the sort of games that engines that don't require programming let you build).


You're missing that not everything has to be a breakthrough. Sometimes fun research experiments are just cool and fun. I didn't get the impression that they were overselling the research.


Sure they are, "recreate game engine" when it actually is "optimally match a database of cooked if-then rules with simple linear functions". For one Mario level. Taking 2 weeks of runtime to learn it.

The set of facts is worthless for anything of complexity. It does not really generate the rules itself. (They are directly derived from the facts.)

What they did is only a small improvement over a typical expert system or CNN for a very limited case.

Choice quote: "Notably each fact can be linked back to the characteristics of a sprite that it arose from." Wrong. When you pick up a flower your sprite changes, but how does it know you can suddenly shoot bullets? Etc. And for more complex games a lot of data requires exploration well past the GUI. An action might change acceleration (suddenly nonlinear ice physics with momentum), or direction handling, or you can start flying, or many other things. What if the thing moves in a circle? What if there is just some probability that something results?

The approach will fail at modelling as soon as Mario level 1-4. (The one with rotating fireballs.) Or produce an insane representation of the engine. Note how it even cannot model the dampened triangle wave motion of the fireballs in the example - assumes they're a sparse line.
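To make the commenter's point concrete, here is a tiny sketch (the positions are made-up numbers, not data from the paper) showing that an oscillating triangle-wave motion cannot be matched by any single linear rule, which is roughly why a "sparse line" model of the fireballs goes wrong:

```python
import numpy as np

# Hypothetical fireball y-positions following a triangle wave:
# rises for 4 frames, falls for 4 frames, repeating.
t = np.arange(16)
y = np.abs((t % 8) - 4)

# Best single linear rule y = a*t + b (least squares).
a, b = np.polyfit(t, y, 1)
residual = np.max(np.abs(y - (a * t + b)))
print(residual > 1.0)  # True: no straight line tracks the oscillation
```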

The paper presents no way to reduce this huge number of "if-then rules" to something actually useful either.

Since this doesn't even attempt to explore the state space, it also requires a huge database.

Calling this "recreate game engine" is akin to saying that since we have an algorithm that can solve checkers, it will solve poker, go and also whodunit. And can play Jeopardy too.

I even suspect it's not useful as a preprocessor to something that can actually play a game, as it will break later cases.


I was pretty impressed by the result until reaching "a relatively simple search algorithm that searches through possible sets of rules".

CNNs have done such impressive things that "outperforms convolutional neural nets" sounds like an achievement, but CNNs have never been the pinnacle of accuracy - their key advantage is flexibility. Feature learning costs some reliability, but gives a huge advantage in saving human time and effort.

This appears to be exactly the opposite approach: an AI system that gains its accuracy by working from heavily pre-defined rulesets. Feature engineering is fine in a stable, well-understood domain, but it reduces the impressiveness of the 'AI' result. And more worryingly, it cripples the flexibility of the agent in an open domain like "video games".

Hand-authoring a set of functions required to derive the model means embedding a huge portion of the game engine in the engine-learning framework - what's left to learn is basically just parameter values. Mario without powerups is a game entirely defined by 2D movement, collisions, animation, and a tracking camera. That's the same feature list that had to be hand-defined for the engine.

I don't mean to attack the authors. This is still an interesting result, and they do acknowledge this in P2 of 'Limitations'. (Albeit with some lofty claims about eventually understanding real video - are they planning to encode physics as their ruleset?) But the article really oversells the capacity of a system that was spoon-fed the essentials of what it had to learn.


People generally are willing to forgo the cost/benefit analysis of a machine learning solution. There is an abiding faith in future improvements in cost although I am not so sure anymore.


>What they did is only a small improvement over a typical expert system or CNN for a very limited case.

So what you're saying is they came up with some new, albeit small, way to do it?

http://paulgraham.com/sun.html


An artificial intelligence observed a game being played and recreated it and you aren't impressed at all? Man, the future must be boring for people living in it.

Yes, the technique it uses only works for a certain space of possible games. That means there is an obvious path to increasing the size of that space.


"Observed" after being fed lots of sprites and actual ways on how to play it and actually win at it in the objective function. And it "played" only one kind of game. "Obvious path" riiiight.

In addition, it's wrong to call this new; such attempts have been made before, with even stronger results and generality. For example, this (relatively dumb) approach from 2013 generalized kinda well, much better than I've seen even a deep network generalize: http://www.cs.cmu.edu/~tom7/mario/

So yes, they are overselling it a lot. I am 100% not impressed by this paper as it lacks critical detail. That it can parse stuff from 2D frames is not interesting, it is basic motion analysis which can be done even by a supremely stupid algorithm, not even a CNN.

I mean, Google's best AI can play 15 rooms of a simple game...


You are comparing a system that learned to play a game (which indeed was very impressive), to a system that learned to make the game by observing the behavior from video. None of your points actually relate to the system described.


By "make" you meant "match some sort of a simple function approximation after hardcoding lots of knowledge about the system and the general function" right? Which is essentially what the neural networks and all the other optimization algorithms were made for?

(The algorithm as described will require a huge database for a game that is even slightly more complicated than Infinite Mario. And we don't even have the sources to try that.)

Even the object motion tracker part will choke in a 3D environment. (It is a greedy matcher, as they described it.)

Speaking of impressed, Google DeepMind's paper is far more feasible to actually improve upon, and richer in detail: https://arxiv.org/pdf/1606.01868v1.pdf Compare the two papers in straight quality. I understand why you'd publish any worthless junk in the current academic culture, but I don't agree that we should actually do it.


My complaint is that the path to improving their space is "humans hardcoding endless rule lists".

Section 3.1 of the paper outlines a list of 'hand-authored' functions the agent used to derive events from images. They include animation, sprite-entity relationships, motion, collision, and camera movement. Which is to say, every component of Super Mario level 1-1.

That doesn't mean the paper is uninteresting, or useless. Defining facts based on those possible rules is still an intriguing result. I'm having real trouble working out from the paper how well their agent understood conditional changes like size and fire flowers - if it accurately recreated those rules, then I am impressed.

But "modeled without accessing the code" is a dubious claim about an agent that started with a list of the core rules included in its code. The Engine Learning section (3.2) mentions that automatic derivation of possible facts is a key area for future work. That is to say "this would be flexible if it did feature learning instead of needing feature engineering". Unfortunately, that's the problem in agent design, and the value of CNNs isn't unbeatable performance but the capacity for flexible feature learning. The press release here elides the issue of feature learning entirely when comparing performance.


The point of this is what is known as model-based learning. Basically, the long-term goal is to be able to predict the output of a given action (jumping, walking left, etc.) by an AI agent. When you can do this, then the agent doesn't need to die to know that jumping down a hole will end the game-- it can predict it. Once you've done this, AI techniques like that of Watson can control robots. They won't need to kill someone to know that driving a pole through a head is no good. They'll be able to 'reason' it out.


They are showing an approach that works, on a semi-realistic problem.

The idea of producing a rule-based system from deep learning, while not exactly a breakthrough, is an interesting direction to take.

It is research. It is not designed to solve real-world problems but to give ideas to engineers. And really, I can see several simple systems that could be programmed with simple rules learned from inputs/outputs.


The examples had the character move precisely like in the reference video. This just looks like the AI recorded the original video and played it back.

I thought the AI created a playable game engine from the reference video? If so, why did it need to replicate the exact movement of the game character? Why not come up with its unique set of movements in a fully flexible game engine?


Maybe that was just to demonstrate the recreation. An engine is pure code, and a lot of the time closed source. Just recreating animation and similar surface behavior in that way does not recreate an engine, but it could recreate those aspects. Engines implement many aspects of game mechanics, many of which you do not really see directly but only as a byproduct.


But the impressive sell of this technique was that the AI discerned enough patterns and physics from the reference video to auto-generate a game engine that replicated/simulated the original game. That implies it understood the player character and can now control the sprite representing the player character via unique paths through the game map.

It's waaaayy less impressive if it's just programmatically processing video frames, tracking pixels to generate coalescence, generating a library of sprites based on pixel coalescence, and then playing back the same sequence of sprites programmatically...


More AI theater. There will be a moment in about five years when people are like "what happened to all that stuff about self-driving cars, evil AI, etc.?"


Siri, what happened to all that AI nonsense from 5 years ago?

I'm sorry, Dave, I'm afraid I can't answer that.


Seriously, tech journalism needs to get it's shit together because it's becoming annoying. If I see another article about a mediocre algorithm presented as AGI, I'm gonna start posting on my blog about how every failed experiment is really Skynet being lazy in its teenage years.


Agreed, although I think this case shows a different problem - it's not a tech news site but a university press release.

University press has a horrible tendency to oversell research in the name of getting news coverage, often completely burying possible flaws or limitations of the result. The University of Maryland infamously put out a major release on concussion treatment based on a study that didn't exist (1). Similar but lesser abuses appear to be almost constant. It seems like every worthwhile-but-unspectacular thesis result gets spun as a groundbreaking insight in its field.

1: http://nymag.com/scienceofus/2016/01/chocolate-milk-concussi...


Its* Stupid autocorrect


They wrote a blog post a couple of months ago with more details: https://medium.com/@mark_riedl/automated-game-understanding-...


"The technique relies on a relatively simple search algorithm that searches through possible sets of rules that can best predict a set of frame transitions"

So, their script/app doesn't reproduce a game engine at all; it instead analyzes pixel arrays from video frames and maintains rules about how the pixel arrays typically transform from one state to another. This sounds more like a useful video analytics tool than an "AI that makes game engines". If I were responsible for the headline/marketing, I would've gone with "Artificial Intelligence Can Predict the Ending of a Movie" or something (as long as the movie is 8-bit and only has 256 possible colors!)
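A toy sketch of what that kind of rule search might look like (all names, the candidate rules, and the observed "frames" here are hypothetical, not the paper's actual code): enumerate candidate update rules and keep the one that best predicts the observed frame-to-frame transitions.

```python
# Toy "frames": each frame is just the x-position of a single sprite.
frames = [0, 2, 4, 6, 8, 10]
transitions = list(zip(frames, frames[1:]))

# Candidate rules: hypothetical velocity updates the engine might apply.
# The double lambda captures each v by value.
candidate_rules = {f"vx={v}": (lambda v: lambda x: x + v)(v) for v in range(-3, 4)}

def score(rule):
    """Fraction of observed transitions the rule predicts exactly."""
    return sum(rule(a) == b for a, b in transitions) / len(transitions)

best = max(candidate_rules, key=lambda name: score(candidate_rules[name]))
print(best)  # 'vx=2' -- the sprite moves 2 pixels per frame
```

The obvious objection from the thread applies here too: everything interesting (what counts as a candidate rule) has to be hand-authored up front, and the search only picks parameters within that space.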

In other words, I don't think Unreal or Unity are worried about this tech.


The algo learned about the underlying rules of its environment - the physics - by watching video. Imagine applying this to more complex games, and then to the real world (the ultimate game!). Cool approach; to me it seems to really resemble how people learn in real life.



