
Not to take anything away from the work, but I think the title is misleading. A* is an optimal algorithm for graph search, so you cannot make it 100x faster. What this does is make pathfinding faster by changing the structure of the graph.



It's a very practical optimization (not just a theoretical algorithm) so I don't think it's really misleading in the context of game development.

That being said, there's an important gotcha: these techniques require a significant amount of precomputation (especially the bounding-box one), so they might not be practical if you have very dynamic maps.


The map in my game is completely dynamic and every pathfinding technique I've tried is way too slow. I've had to resort to a really stupid pathfinding algorithm where the enemies just look a few tiles ahead (meaning they get stuck very easily). Any pointers?


Check out "influence maps" or "potential fields" or "scents" or however they may be called -- google any of those phrases in combination with pathfinding, look at 1-2 videos and you'll grok the idea in a minute, I promise.

They may not solve all your problems or even any of them (hard to tell knowing nothing about your game, after all), but I was absolutely dumbfounded when I played around with them at how cheaply you can get some rather complex-seeming behaviors (e.g. put the player and some enemies in a maze, give the player a scent that strongly attracts the enemies, but also give the enemies a slight scent that makes them repel each other, and they'll automagically split up and surround the player).
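To make that concrete, here's a rough sketch of the scent idea in plain Python. The map, weights, and helper names are all made up for illustration: the player emits a strong attracting scent, each enemy a weak repelling one, and every enemy greedily steps onto whichever neighbouring tile scores best.

    # Minimal potential-field / "scent" sketch: square grid, 4-way movement,
    # walls marked '#'. Map and weights are arbitrary toy values.
    GRID = [
        "##########",
        "#........#",
        "#.##.###.#",
        "#........#",
        "##########",
    ]

    def walkable(x, y):
        return GRID[y][x] != "#"

    def potential(pos, player, others):
        """Lower is better: strong pull toward the player, mild push away
        from the other enemies so they spread out."""
        attract = abs(pos[0] - player[0]) + abs(pos[1] - player[1])
        repel = sum(1.0 / (1 + abs(pos[0] - ox) + abs(pos[1] - oy))
                    for ox, oy in others)
        return attract + 2.0 * repel          # 2.0 is just a tuning knob

    def step(enemy, player, enemies):
        """Greedy descent: move one tile to the lowest-potential neighbour."""
        others = [e for e in enemies if e != enemy]
        x, y = enemy
        candidates = [enemy] + [(x + dx, y + dy)
                                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                if walkable(x + dx, y + dy)]
        return min(candidates, key=lambda p: potential(p, player, others))

    player, enemies = (4, 3), [(1, 1), (8, 1)]
    for _ in range(6):
        enemies = [step(e, player, enemies) for e in enemies]
        print(enemies)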

At the very least, if you're stuck with what you're doing at the moment, play around with those, and maybe you can at least find a use for them that frees up some processing resources for precise path finding where nothing short of it will do.


Sometimes also called "flow fields." They are great if your agents are trying to reach a common goal.
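If there really is one shared goal, a single BFS outward from it is enough to build the field: every tile just remembers which way its BFS parent lies, and any number of agents can read it at no extra cost. A rough sketch under those assumptions (toy map, 4-way movement):

    from collections import deque

    GRID = [
        "#########",
        "#.......#",
        "#.###.#.#",
        "#...#...#",
        "#########",
    ]

    def flow_field(goal):
        """One BFS from the shared goal; each reachable tile stores the
        (dx, dy) step that moves an agent one tile closer to the goal."""
        field, queue = {goal: (0, 0)}, deque([goal])
        while queue:
            x, y = queue.popleft()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if GRID[ny][nx] != "#" and (nx, ny) not in field:
                    field[(nx, ny)] = (-dx, -dy)   # point back toward (x, y)
                    queue.append((nx, ny))
        return field

    field = flow_field(goal=(1, 1))
    agent = (7, 3)
    while agent != (1, 1):        # every agent just follows the stored arrows
        dx, dy = field[agent]
        agent = (agent[0] + dx, agent[1] + dy)
        print(agent)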


Have you seen any efficient algorithms that avoid local minima/maxima[0]? All my research leads to either algorithms resembling Dijkstra, or complicated research papers I'm not sure how to implement.

[0] Potential fields can't tell you a route is a dead end until you reach the dead end. Then they only tell you you're at a dead end, but don't tell you how to get out and find a viable route. Add-on exit-a-dead-end algorithms I've seen have very inefficient movement.

Algorithms to avoid the dead end in the first place have to look ahead so far that you might as well use Dijkstra/A*.

----

Some papers I've turned up:

Potential fields:

- http://www.gamedev.net/page/resources/_/technical/artificial...

- http://www.heikohoffmann.de/htmlthesis/node56.html

- http://www.researchgate.net/publication/222662144_Solving_th...

- https://randomaccessmaths.wordpress.com/2013/10/27/potential...

Harmonic Potential Functions (related to potential fields; they attempt to solve the local minima/maxima problem):

- http://repository.cmu.edu/cgi/viewcontent.cgi?article=1625&c...

- http://www.itst2007.eurecom.fr/site/var/html/h1053/file1221....

- http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=429591...

- http://ncsu.summon.serialssolutions.com/search?s.cmd=addFace...


You're way ahead of me, I really just dabbled, and I kinda embraced the inefficiency. Though when I did the calculation on the GPU the bottleneck kind of became reading pixels of the texture data for individual agents to make decisions on - otherwise, the potential field updated way faster than the agents could move, and local minima didn't exist long enough to matter. But of course that all really depends on the game map (resolution) and gameplay, and even just having requirements like map sections where some agents don't fit through, while others do, might make it kinda useless.

I think the main power is that potential fields reduce the complexity of the calculation to the map and the number of goals, with an infinite number of agents being able to use the result at little cost, but you pretty much always have to build additional pathfinding on top of it. For example, for some games it might be fine for agents to follow the potential field generally, but every tick one of them gets to use Dijkstra/A*, and if that points the other way it follows that path until coming close to another agent or something. It's also worth noting that too much efficiency can be off-putting, and can make agents too predictable and easy to exploit as a player. But of course, I'm kind of rationalizing my lack of deeper knowledge here; I'd love to be able to do it perfectly, and then tone that down for realism/gameplay. Alas :)


Assuming a game with square tiles and vector movement:

All shortest paths are either a straight line, or consist of the straight line from the original location to an outside corner, a series of straight lines between outside corners, and a straight line from an outside corner to the destination. (An outside corner is any 2x2 square with three walkable and one wall tile.)

You can easily find all the outside corners. You can find which pairs of outside corners can be walked between using N^2 ray casts. Once you've done that, you can detect when an outside corner has been removed and drop it from the map, or detect when one has been created and perform N ray casts to add it to your pathing list.

Running A* on the resulting structure is really fast, because it's smaller and more connected than a tile list. The slowest remaining part is finding the corners immediately reachable from your start/end points, which I think could also be sped up but I'd benchmark it before worrying about that.
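Roughly like this, as a Python sketch. The map is a toy, "corner = the walkable tile diagonal to the lone wall tile in the 2x2 block" is my reading of the above, and the sampled ray cast is a simplification of an exact grid traversal:

    import heapq, math

    GRID = [
        "##########",
        "#........#",
        "#..###...#",
        "#..#.....#",
        "#........#",
        "##########",
    ]
    W, H = len(GRID[0]), len(GRID)

    def walkable(x, y):
        return 0 <= x < W and 0 <= y < H and GRID[y][x] != "#"

    def outside_corners():
        """Every 2x2 block with exactly three walkable tiles and one wall:
        the walkable tile diagonal to the wall becomes a graph node."""
        corners = set()
        for y in range(H - 1):
            for x in range(W - 1):
                block = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
                walls = [p for p in block if not walkable(*p)]
                if len(walls) == 1:
                    wx, wy = walls[0]
                    corners.add((2 * x + 1 - wx, 2 * y + 1 - wy))
        return corners

    def line_of_sight(a, b, samples_per_tile=4):
        """Crude ray cast: sample points along the segment between tile
        centres. A production version would use an exact traversal."""
        ax, ay, bx, by = a[0] + 0.5, a[1] + 0.5, b[0] + 0.5, b[1] + 0.5
        steps = max(1, int(samples_per_tile * math.hypot(bx - ax, by - ay)))
        return all(walkable(int(ax + (bx - ax) * i / steps),
                            int(ay + (by - ay) * i / steps))
                   for i in range(steps + 1))

    def build_graph(corners):
        """N^2 ray casts between corners; edge weight = Euclidean distance."""
        graph = {c: [] for c in corners}
        for a in corners:
            for b in corners:
                if a < b and line_of_sight(a, b):
                    d = math.dist(a, b)
                    graph[a].append((b, d))
                    graph[b].append((a, d))
        return graph

    def shortest_path(graph, start, goal):
        """Plain A* on the corner graph with a straight-line heuristic;
        start and goal are temporarily linked to every corner they can see."""
        nodes = set(graph) | {start, goal}
        edges = {n: list(graph.get(n, [])) for n in nodes}
        for p in (start, goal):
            for c in graph:
                if line_of_sight(p, c):
                    d = math.dist(p, c)
                    edges[p].append((c, d))
                    edges[c].append((p, d))
        if line_of_sight(start, goal):
            edges[start].append((goal, math.dist(start, goal)))
        frontier = [(math.dist(start, goal), 0.0, start, [start])]
        seen = set()
        while frontier:
            _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in edges[node]:
                if nxt not in seen:
                    heapq.heappush(frontier, (g + w + math.dist(nxt, goal),
                                              g + w, nxt, path + [nxt]))
        return None

    corners = outside_corners()
    graph = build_graph(corners)
    print(shortest_path(graph, start=(1, 1), goal=(8, 4)))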


Change your map generation to generate a higher-level graph first, with guaranteed paths between nodes, and then generate the subareas so that they connect to the higher-level graph?


Maybe I misunderstood you but the players can change the map during gameplay so I don't think that will work.


How free-form are the map changes? Do you mean opening and closing doors - or completely changing the terrain as in for example Minecraft?

One option would be to look for strategies that "look intelligent" instead of computing perfectly optimal paths at every simulation frame.

Run slow pathfinding for an enemy and store the path. Have them follow the path until they hit a player-created obstacle, and only then re-run pathfinding. Besides being faster, this could let the players use the enemy behaviour to their advantage by creating traps or such.
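In code that's basically a per-enemy cache with cheap invalidation. A sketch, where find_path stands in for whatever full A*/Dijkstra search you already have and is_walkable is a cheap per-tile check (both names are placeholders):

    class PathCache:
        """Cache one full path per enemy; only re-plan when the next step
        turns out to be blocked (e.g. by a player-built obstacle)."""

        def __init__(self, find_path, is_walkable):
            self.find_path = find_path      # expensive search, e.g. full A*
            self.is_walkable = is_walkable  # cheap per-tile walkability check
            self.paths = {}                 # enemy id -> tiles still to visit

        def next_step(self, enemy_id, position, goal):
            # find_path(position, goal) returns the tiles after `position`,
            # ending at `goal` (empty list/None if unreachable).
            path = self.paths.get(enemy_id)
            if not path or path[-1] != goal:
                path = self.find_path(position, goal)   # slow, but rare
            if not path:
                return position                         # no route right now
            step = path[0]
            if not self.is_walkable(step):              # an obstacle appeared
                path = self.find_path(position, goal)
                if not path:
                    return position
                step = path[0]
            self.paths[enemy_id] = path[1:]             # consume one step
            return step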


I'm assuming that changes to any given position X on the map happen much less frequently than pathfinding operations that touch on said position X. Is this the case? If so, then investing the pre-calculation effort (updating the precalc matrix, not re-generating the whole thing) at map update time, limited to only those areas potentially benefiting, might be an effective strategy.


This may be a naive comment (I know very little about pathing), but one thing I've wanted to experiment with is the idea that enemies shouldn't have perfect pathing when the map changes. I always think, why do enemies know how to get from point A to point B? If they have some knowledge of the map, then you can pre-compute paths, but as the terrain gets modified, those paths will be destroyed. It seems to me that it might be more efficient to modify paths, iterating over the changes in the map rather than iterating over each of the enemies. If locations become unavailable, then "dumb" pathing techniques might be reasonable, possibly using some kind of swarming algorithm, or ant-like behaviour to get around local obstacles.


Are you path-finding every frame? Can you take a snapshot of the game state and run full-path finding on a background core just once every second?
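Roughly this pattern, sketched in Python (the search function and the snapshot are stand-ins; also note that CPython's GIL means a pure-Python worker thread won't truly occupy another core, so a real engine would push this onto a native job/thread or a separate process):

    import queue, threading, time

    def expensive_full_search(snapshot, start, goal):
        """Placeholder for a full A*/Dijkstra run over the copied map."""
        time.sleep(0.1)                      # pretend this is the slow part
        return [start, goal]                 # dummy result

    requests = queue.Queue()                 # (enemy_id, start, goal, snapshot)
    results, lock = {}, threading.Lock()

    def worker():
        while True:
            enemy_id, start, goal, snapshot = requests.get()
            path = expensive_full_search(snapshot, start, goal)
            with lock:
                results[enemy_id] = path     # game loop picks this up later

    threading.Thread(target=worker, daemon=True).start()

    # Roughly once per second, submit a request with a *copy* of the
    # walkability grid so the worker never races the live simulation.
    snapshot = [row[:] for row in [[0, 0, 1], [0, 0, 0]]]
    requests.put(("enemy-1", (0, 0), (2, 1), snapshot))

    time.sleep(0.3)                          # demo only: give the worker time
    with lock:
        print(results.get("enemy-1"))        # follow the latest path, if any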


It's not on every frame but depends on how often the enemy moves (every 150 ms or so). That's a good suggestion anyway!


Calculate the path once and cache it. Only recalculate it if the agent runs into an obstacle, or maybe at a random interval to keep it cheap.


Look into RRT, it works way too well for how simple it is, and it's great for reducing search complexity for pathfinding.
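For anyone unfamiliar, a bare-bones RRT really is just a few dozen lines. Here's a sketch in Python over a made-up continuous 2D map with circular obstacles (a real version would also collision-check the segment between nodes, not only the new endpoint):

    import math, random

    OBSTACLES = [((5.0, 5.0), 2.0), ((2.0, 7.0), 1.5)]   # (centre, radius)
    START, GOAL = (1.0, 1.0), (9.0, 9.0)
    STEP, GOAL_RADIUS = 0.5, 0.5

    def collides(p):
        return any(math.dist(p, c) <= r for c, r in OBSTACLES)

    def rrt(max_iters=5000):
        parents = {START: None}                   # tree: node -> parent
        for _ in range(max_iters):
            sample = (GOAL if random.random() < 0.05     # small goal bias
                      else (random.uniform(0, 10), random.uniform(0, 10)))
            nearest = min(parents, key=lambda n: math.dist(n, sample))
            d = math.dist(nearest, sample)
            if d == 0:
                continue
            if d <= STEP:
                new = sample
            else:                                  # step toward the sample
                new = (nearest[0] + (sample[0] - nearest[0]) * STEP / d,
                       nearest[1] + (sample[1] - nearest[1]) * STEP / d)
            if collides(new):                      # NB: endpoint check only
                continue
            parents[new] = nearest
            if math.dist(new, GOAL) < GOAL_RADIUS:
                path, node = [], new               # walk back up the tree
                while node is not None:
                    path.append(node)
                    node = parents[node]
                return path[::-1]
        return None

    print(rrt())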


There's pathfinding and pathfinding. This thread is about exact algorithms for rectangular grids, not approximate algorithms for a continuous space.


There's pathfinding and pathfinding, and then there's pathfinding. Social network analysis also requires route-planning for degree-of-separation computation, but since in those cases you just have a digraph, not a [continuous or discrete] metric space, all you get to use there is Dijkstra.

(Anyone who knows better, I'd be delighted to find out I'm wrong—for the sake of my own search runtimes.)


They're already using a heuristic, I gave them another one to try. It's easy to discretize random samples to a grid.


No, A* heuristics (including extreme ones like jump point search) do not compromise the optimality of the found shortest path, which is the same one Dijkstra's algorithm would find without heuristics. They are only "heuristic" with respect to the subordinate goal of finding the optimal path cheaply (ignoring as many graph nodes as possible), while the RRT heuristic produces a suboptimal result (not the absolute shortest path, but the shortest path that goes through arbitrarily chosen nodes).


Have you tried probabilistic roadmaps? Depending on how fast the map changes, you might be able to incrementally update a roadmap as you go.


Prison Architect has dynamic maps and uses a flow system to pathfind escape tunnels. Each cell on the map is simply given a direction telling the AI which way to dig, and it's updated periodically. Each tile determines the best direction to direct the AI, and when viewed at a distance it looks like a decent pathfinding routine. The benefit of this system is that you can cycle through the map over many frames, not every frame.
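That spread-over-frames trick is easy to sketch: keep the BFS frontier around and expand only a fixed budget of tiles each frame, so the direction grid converges over a few frames instead of stalling one of them (toy map and numbers, assumed names):

    from collections import deque

    GRID = [list(row) for row in (   # mutable so the map can change
        "#########",
        "#.......#",
        "#.#####.#",
        "#.......#",
        "#########",
    )]
    DIRECTIONS = {}        # tile -> (dx, dy) step toward the goal
    frontier = deque()

    def restart_flow(goal):
        """Call whenever the map (or the goal) changes."""
        DIRECTIONS.clear()
        DIRECTIONS[goal] = (0, 0)
        frontier.clear()
        frontier.append(goal)

    def update_flow(budget=8):
        """Expand at most `budget` tiles per frame; the field converges
        over several frames instead of blocking a single one."""
        while frontier and budget > 0:
            x, y = frontier.popleft()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if GRID[ny][nx] != "#" and (nx, ny) not in DIRECTIONS:
                    DIRECTIONS[(nx, ny)] = (-dx, -dy)
                    frontier.append((nx, ny))
            budget -= 1

    restart_flow(goal=(1, 1))
    for frame in range(6):          # pretend these are game frames
        update_flow()
    print(DIRECTIONS.get((7, 3)))   # direction a digger at (7, 3) would take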


How big is your map? If it's still too slow after you've amortized it over a frame or two then I'd guess it's an implementation issue or you're running it too often per entity. At least assuming your graph size is <1000x1000 or so, then the above should generally hold true... That said, flow fields are fantastic tools too.


Is the map dynamic as in it changes in an unpredictable way all the time? If that's the case, the best you can do may be to pathfind in the static parts, then try your best to follow the path while avoiding things nearby. If the map predictably changes between 2 or 3 states, you could precompute paths for each state and switch between them when the map moves.


I guess you have done it already, but have you looked into D* and its variants, more specifically D* Lite? They are A* variations for dynamic grids.


100x faster means improving the coefficients. A* is optimal in terms of asymptotic execution time, but that doesn't mean you can't improve the overall running time.


The classic example is matrix multiplication, where we know of increasingly sophisticated algorithms with increasingly better asymptotic performance, but in practice most matrix multiplications are for sub-100x100 (IIRC) matrices, where the naive algorithm we all learned as undergrads is fastest.
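(For reference, the naive schoolbook algorithm in question is just the triple loop below; Strassen-style algorithms have smaller exponents but constants big enough that at these sizes the straightforward version wins. Sketch only; a real workload would obviously call a tuned BLAS.)

    def matmul_naive(A, B):
        """Plain O(n^3) schoolbook multiply: C[i][j] = sum_k A[i][k] * B[k][j]."""
        n, m, p = len(A), len(B), len(B[0])
        C = [[0.0] * p for _ in range(n)]
        for i in range(n):
            for k in range(m):
                a_ik = A[i][k]
                for j in range(p):
                    C[i][j] += a_ik * B[k][j]
        return C

    print(matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
    # [[19.0, 22.0], [43.0, 50.0]]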


> in practice most matrix multiplications are for sub-100x100 (IIRC) matrices

I would say that you have the causation reversed: we just don't bother doing large matrix multiplications, because they're so expensive (and yet they still don't usually reach the crossover point where the "better" algorithms become worth it, due to those algorithms having ridiculous constant coefficients.)

I would love to run Floyd-Warshall on my 40-million point network graph—but it just ain't gonna happen.


> we just don't bother doing large matrix multiplications, because they're so expensive

Sure, I agree. I'm not a specialist in numerical lin.alg., but I've heard it said that if you're spending a lot of time on large dense matrix multiplications, you're probably approaching the problem in the wrong way.


A* is optimal in the sense that it is guaranteed to find the optimal solution (shortest path); there are no guarantees in terms of speed, IIRC.


I'm sure you're right in this case, but I wonder if it would be possible for an 'optimal' algorithm, where the optimality is in terms of high-level operations like compares required, to be out-performed by a non-optimal algorithm due to practical considerations such as cache utilisation. An order of magnitude more loads but in a cache oblivious pattern could be a win compared to a pathological pattern.


Certainly.

Jon Bentley had an article on two string reversal algorithms with the same algorithmic complexity. On a very large string, one would finish almost immediately; the other was pessimal for paged virtual memory and wouldn't finish for hours.


FTA: "It has been said that "pathfinding is a solved problem." Just because something is "solved" doesn't mean it can't be improved especially with regards to execution time."



