Title's misleading. A* is also 100x faster than A* ;) It depends a lot on the graph you give it.
JPS by itself (not JPS+ in this video) is notable for not requiring any precomputation to achieve some speedup.
If you're willing to precompute some things, there are lots of other techniques available for unweighted grid maps. http://www.cs.du.edu/~sturtevant/papers/GPPC-2014.pdf has an overview of techniques and some benchmarks that use real-world game maps (Baldur's Gate, Dragon Age).
I believe the "goal bounding" from the video is called "arc flags" in the literature.
Yes, I was just about to post "Exact Routing in Large Road Networks Using Contraction Hierarchies", which, after a one-time preprocessing step, returns an optimal route on a continent-scale network in milliseconds.
Only if the area changing is near the start of the search, not near the goal. (i.e. near the sensors of a mobile robot, rather than near the destination far from the robot).
D* can be pathologically slower than repeated A* searches under such conditions.
I've been using a modified D*. It increases the grid size as the path moves out of the robot's observable area. This allows a fast search even when the goal is in an environment with known dynamic obstacles.
Ideally speaking, you need to plan in higher dimensions to get an optimal path in dynamic environments.
Pearl, J. (1984), Heuristics, Addison-Wesley. In this text, Pearl proves that for a given heuristic, A* will perform computations on fewer (or the same number of) nodes than any other graph algorithm using the same heuristic.
The pre-computation JPS(+) does is cheating because it's essentially providing a quicker-to-evaluate heuristic than A* gets to use.
So yes, I agree - the title is misleading! However, it's still a pretty cool efficiency for situations where it's applicable.
It's misleading indeed. This version of JPS+ is basically a preprocessing optimization: it compacts the graph offline to compute a smaller representation that's faster to run a guided search on.
And of course there's a serious downside: if the graph changes at runtime (e.g. the player builds a new door, or blows up a bridge), the preprocessing has to be redone, and it's not cheap.
Oh man, Baldur's Gate pathing was abysmal, and not just because of moving obstacles (including the 6 members of your party interfering with each other while all moving greedily, not in formation, to the same general area).
With respect to the techniques that require precomputation, however, I wonder how they perform with such moving obstacles. Potentially quite terribly, I imagine.
A technique that is reportedly useful for dynamic situations is D*. Since 90% of the world is static and most dynamic content is clustered, you could probably get away with using D* in a radius around the agent and then fall back to the precomputed technique outside of that.
Not to take anything away from the work but I think the title is misleading. A* is an optimal algorithm for graph search so you cannot make it 100x faster. What this does is make path finding faster by changing the structure of the graph.
It's a very practical optimization (not just a theoretical algorithm) so I don't think it's really misleading in the context of game development.
That being said, there's an important gotcha: those techniques require a significant amount of precomputation (especially the bounding-box one), so they might not be practical if you have very dynamic maps.
The map in my game is completely dynamic and every pathfinding technique I've tried is way too slow. I've had to resort to a really stupid pathfinding algorithm where the enemies just look a few tiles ahead (meaning they get stuck very easily). Any pointers?
Check out "influence maps" or "potential fields" or "scents" or however they may be called -- google any of those phrases in combination with pathfinding, look at 1-2 videos and you'll grok the idea in a minute, I promise.
They may not solve all your problems or even any of them (hard to tell knowing nothing about your game, after all), but I was absolutely dumbfounded when I played around with them at how cheaply you can get some rather complex (seeming) behaviors (e.g. put the player and some enemies in a maze, give the players a scent that strongly attracts the enemies, but also give them a slight scent that makes them repulse each other, and they'll automagically split up and surround the player).
At the very least, if you're stuck with what you're doing at the moment, play around with those, and maybe you can at least find a use for them that frees up some processing resources for precise path finding where nothing short of it will do.
Have you seen any efficient algorithms that avoid local minima/maxima[0]? All my research leads to either algorithms resembling Dijkstra, or complicated research papers I'm not sure how to implement.
[0] Potential fields can't tell you a route is a dead end until you reach the dead end. Then they only tell you you're at a dead end, but don't tell you how to get out and find a viable route. Add-on exit-a-dead-end algorithms I've seen have very inefficient movement.
Algorithms to avoid the dead end in the first place have to look ahead so far that you might as well use Dijkstra/A*.
You're way ahead of me, I really just dabbled, and I kinda embraced the inefficiency. Though when I did the calculation on the GPU the bottleneck kind of became reading pixels of the texture data for individual agents to make decisions on - otherwise, the potential field updated way faster than the agents could move, and local minima didn't exist long enough to matter. But of course that all really depends on the game map (resolution) and gameplay, and even just having requirements like map sections where some agents don't fit through, while others do, might make it kinda useless.
I think the main power of potential fields is that they reduce the cost of the calculation to the size of the map and the number of goals; an effectively unlimited number of agents can then use the field at little extra cost, but you pretty much always have to build additional pathfinding on top of it. For example, in some games it might be fine for agents to follow the potential field generally, while every tick one of them gets to run Dijkstra/A*; if that points the other way, it follows that path until it comes close to another agent or something. It's also worth noting that too much efficiency can be off-putting: perfectly efficient agents are predictable and easy to exploit as a player. But of course, I'm kind of rationalizing my lack of deeper knowledge here; I'd love to be able to do it perfectly and then tone that down for realism/gameplay. Alas :)
Assuming a game with square tiles and vector movement:
All shortest paths are either a straight line, or consist of the straight line from the original location to an outside corner, a series of straight lines between outside corners, and a straight line from an outside corner to the destination. (An outside corner is any 2x2 square with three walkable and one wall tile.)
You can easily find all the outside corners. You can find which pairs of outside corners can be walked between using N^2 ray casts. Once you've done that, you can detect when an outside corner has been removed and drop it from the map, or detect when one has been created and perform N ray casts to add it to your pathing list.
Running A* on the resulting structure is really fast, because it's smaller and more connected than a tile list. The slowest remaining part is finding the corners immediately reachable from your start/end points, which I think could also be sped up but I'd benchmark it before worrying about that.
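The corner-extraction step described above can be sketched roughly as follows (a hypothetical `grid[y][x]` boolean layout where `True` is walkable is assumed; the N^2 ray-cast step that links mutually visible corners is omitted):

```python
def outside_corners(grid):
    """Find the walkable tile diagonal to the wall in every 2x2 block
    that contains exactly three walkable tiles and one wall tile."""
    h, w = len(grid), len(grid[0])
    corners = set()
    for y in range(h - 1):
        for x in range(w - 1):
            block = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
            walls = [p for p in block if not grid[p[1]][p[0]]]
            if len(walls) == 1:
                # The walkable tile diagonally opposite the wall is
                # the corner a shortest path can pivot around.
                wx, wy = walls[0]
                corners.add((2 * x + 1 - wx, 2 * y + 1 - wy))
    return corners
```

Picking the diagonal tile is one reasonable interpretation of "outside corner"; depending on your movement model you might instead record the 2x2 block itself or the two tiles adjacent to the wall.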
Change your map generation to generate a higher-level graph first, with guaranteed paths between nodes, and then generate subareas so that they connect to the higher-level graph?
How free-form are the map changes? Do you mean opening and closing doors - or completely changing the terrain as in for example Minecraft?
One option would be to look for strategies that "look intelligent", instead of being perfectly optimal paths at every simulation frame.
Run slow pathfinding for an enemy and store the path. Have them follow the path until they hit a player-created obstacle, and only then re-run pathfinding. Besides being faster, this could let the players use the enemy behaviour to their advantage by creating traps or such.
I'm assuming that changes to any given position X on the map happen much less frequently than pathfinding operations that touch on said position X. Is this the case? If so, then investing the pre-calculation effort (updating the precalc matrix, not re-generating the whole thing) at map update time, limited to only those areas potentially benefiting, might be an effective strategy.
This may be a naive comment (I know very little about pathing), but one thing I've wanted to experiment with is the idea that enemies shouldn't have perfect pathing when the map changes. I always think, why do enemies know how to get from point A to point B? If they have some knowledge of the map, then you can pre-compute paths, but as the terrain gets modified, those paths will be destroyed. It seems to me that it might be more efficient to modify paths, iterating over the changes in the map rather than iterating over each of the enemies. If locations become unavailable, then "dumb" pathing techniques might be reasonable, possibly using some kind of swarming algorithm, or ant-like behaviour to get around local obstacles.
There's pathfinding and pathfinding, and then there's pathfinding. Social network analysis also requires route-planning for degree-of-separation computation, but since in those cases you just have a digraph, not a [continuous or discrete] metric space, all you get to use there is Dijkstra.
(Anyone who knows better, I'd be delighted to find out I'm wrong—for the sake of my own search runtimes.)
No, A* heuristics (as long as they are admissible, which includes extreme optimizations like jump point search) do not compromise the optimality of the found shortest path; it is the same path that Dijkstra's algorithm would find without heuristics.
They are only "heuristic" with respect to the subordinate goal of finding the optimal path cheaply (ignoring as many graph nodes as possible). The RRT heuristic, by contrast, produces a suboptimal result: not the absolute shortest path, but the shortest path that goes through arbitrarily chosen nodes.
Prison Architect has dynamic maps and uses a flow system for pathfinding escape tunnels. Each grid cell on the map is simply given a direction telling the AI which way to dig, and it's updated periodically. Each tile stores the best direction for the AI, and when viewed at a distance this looks like a decent pathfinding routine. The benefit of this system is that you can spread the update across many frames instead of recomputing it every frame.
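The per-tile direction field described above (often called a flow field) can be built with a single BFS from the goal; a minimal sketch, not Prison Architect's actual code:

```python
from collections import deque

def flow_field(grid, goal):
    """BFS outward from the goal; every reachable walkable tile gets a
    (dx, dy) step pointing toward a neighbor one tile closer to the goal."""
    h, w = len(grid), len(grid[0])
    dist = {goal: 0}
    field = {}
    q = deque([goal])
    while q:
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                field[(nx, ny)] = (-dx, -dy)  # step back toward the goal
                q.append((nx, ny))
    return field
```

Any number of agents can then follow the same field for the cost of one dictionary lookup per step, and the BFS itself can be amortized over several frames.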
How big is your map? If it's still too slow after you've amortized it over a frame or two then I'd guess it's an implementation issue or you're running it too often per entity. At least assuming your graph size is <1000x1000 or so, then the above should generally hold true... That said, flow fields are fantastic tools too.
Is the map dynamic as in it changes in an unpredictable way all the time? If that's the case, the best you can do may be to pathfind in the static parts and then try your best to follow the path while avoiding things nearby.
If the map predictably changes between 2 or 3 states you could precompute paths on that and change between them when the map moves.
100x faster means improving the coefficients. A* is optimal in terms of asymptotic execution time, but that doesn't mean you can't improve the overall running time.
The classic example is matrix multiplication, where we know of increasingly sophisticated algorithms with increasingly better asymptotic performance, but in practice most matrix multiplications are for sub-100x100 (IIRC) matrices, where the naive algorithm we all learned as undergrads is fastest.
> in practice most matrix multiplications are for sub-100x100 (IIRC) matrices
I would say that you have the causation reversed: we just don't bother doing large matrix multiplications, because they're so expensive (and yet still don't usually cross the intercepts into making the "better" algorithms worth it, due to those algorithms having ridiculous constant coefficients.)
I would love to run Floyd-Warshall on my 40-million point network graph—but it just ain't gonna happen.
> we just don't bother doing large matrix multiplications, because they're so expensive
Sure, I agree. I'm not a specialist in numerical lin.alg., but I've heard it said that if you're spending a lot of time on large dense matrix multiplications, you're probably approaching the problem in the wrong way.
I'm sure you're right in this case, but I wonder if it would be possible for an 'optimal' algorithm, where the optimality is in terms of high-level operations like compares required, to be out-performed by a non-optimal algorithm due to practical considerations such as cache utilisation. An order of magnitude more loads but in a cache oblivious pattern could be a win compared to a pathological pattern.
Jon Bentley had an article on two string reversal algorithms with the same algorithmic complexity. On a very large string, one would finish almost immediately; the other was pessimal for paged virtual memory and wouldn't finish for hours.
FTA: "It has been said that "pathfinding is a solved problem." Just because something is "solved" doesn't mean it can't be improved especially with regards to execution time."
I wonder if any games use a landmark approach to bound their A* search.
The basic idea of landmarks is that you choose a small number of landmarks (e.g. four corners of a map) and for every landmark, you pre-compute the distances to every node. You can then use this information to get a lower-bound for the distance to the target via the triangle inequality: d(p,t) >= |d(p,LM) - d(t,LM)|.
This is obviously weaker than the goal bounding explained in the video, but the precomputing only takes O(n). We used it quite successfully in a global router (for EDA), where the heuristic lower-bound is often much better than a standard A-star heuristic based on geometric distances, because edges could have wildly different costs.
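A minimal sketch of the landmark lower bound described above (names are illustrative; `graph` maps each node to `(neighbor, cost)` pairs, and one Dijkstra run per landmark does the precomputation):

```python
import heapq

def dijkstra_from(source, graph):
    """Distances from source to every node; graph: node -> [(nbr, cost)]."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, c in graph[u]:
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

def landmark_heuristic(landmark_dists, p, t):
    """Lower bound on d(p, t) via the triangle inequality:
    d(p, t) >= |d(p, LM) - d(t, LM)| for every landmark LM."""
    return max(abs(d[p] - d[t]) for d in landmark_dists)
```

The bound is admissible, so plugging the larger of it and the geometric heuristic into A* preserves optimality while pruning more of the search.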
In this particular case not a PCB but microprocessors.
Global routing is a high-level routing approach: routing millions of nets (= connections usually between 1 output pin of a gate and N input pins of other gates) on a grid graph with potentially billions of nodes is an extremely difficult Steiner tree problem.
To get good solutions, a common approach is to first solve the problem on a much coarser graph where edges have capacities. I.e. instead of having to place all connections disjointly, edges in the graph can accommodate a number of parallel connections.
A later detailed routing step will produce the actual disjoint routes while following the outline that was obtained during global routing.
Even back 15 years ago, EAGLE's autorouter was pretty decent if you used it right. Lay out power traces, ground planes and anything high speed or sensitive (analogue signals etc.) yourself, then let it do the busywork of hooking up all your blinkenlights and push buttons.
There are certainly things you shouldn't delegate to an autorouter, but in the same way, the trend towards hair-shirt-wearing insistence that everything be done manually is a productivity cost.
It's no different from programmers who insist on hand-optimizing assembly: rarely necessary any more, and usually done more out of ego.
Yeah, I automatically went into what I know :/ It's impossible to route manually these days, which doesn't mean the automagic tools are even remotely good; the same goes for synthesis (FPGA).
Just watched the entire video - very interesting; however, there are two major caveats the speaker mentions that are worth noting.
1) This requires expensive precomputation. He mentioned it took 10 hours on one of the large StarCraft maps.
2) (I may have misunderstood this point) This does not necessarily respond well to map changes - so in maps where you have the map changing structure (new walls, etc) you will have to run the precomputation again.
With that said, very interesting stuff coming out of DigiPen.
I think the Goal Bounding Gate optimisation he talks about could help with both: it's still O(N²), but if N is limited to those gateway nodes that would cut down on costs tremendously. But that all depends on whether it's cheap to find those gateway nodes automatically.
So, basically, it is about a much better heuristic, not a general algorithm.
Of course, with a specialized heuristic you can get a lot of pruning. But in the general case, when there is no way to come up with a specialized heuristic, no way to jump (you have to take one step at a time), and no way to look ahead for special nodes, A* is still the king.
But in AI almost everything is about finding a good-enough heuristic.
JPS is great. I remembered it from a former talk a while ago.
From the little experience I have with pathfinding, it seems to be an interesting topic with many little details to explore: smoothing a path, tuning a path for the loss of "speed" when turning, avoiding dynamic obstacles like other "agents", "flocking" of groups, the various ways of representing a "map", and probably many more.
Contraction Hierarchies (CH) are also at least 100x faster than A*. CH has the nice properties of good memory efficiency, optimality, working on any graph, etc., and I am fairly sure JPS is not as generically applicable as CH. Still, all 'fast' routing algorithms have their own drawbacks: engineering complexity, preprocessing time and memory requirements, potential lack of time-dependent support, etc.
If you want the best summary of routing algorithms currently available, read 'Route Planning in Transportation Networks' (2015): https://arxiv.org/abs/1504.05140
And if you want fast implementations for routing algorithms like Dijkstra, A*, CH and Landmarks check out GraphHopper: https://github.com/graphhopper/graphhopper (note: I'm one of the founders)
I had an idea when I took my algorithms class: overlay a small graph of "highways" on a map and compute all-pairs shortest paths on that subgraph. Then when you do your search, you do BFS from both the start and the goal until you reach a "highway node", and then you have a short but not optimal path between the two points without too much computation.
Something like that is an essential optimization when pathfinding over long distances. You often need to answer the question can I go from A->B->C long before you need to say how exactly you do it.
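The highway-overlay idea above can be sketched as follows (all names hypothetical; `hw_dist` is the precomputed all-pairs distance table on the highway subgraph, and `neighbors` yields a node's adjacent nodes):

```python
from collections import deque

def bfs_to_highway(start, neighbors, highway):
    """BFS until the first highway node; return (node, hops) or (None, None)."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        if u in highway:
            return u, dist[u]
        for v in neighbors(u):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None, None

def approx_distance(a, b, neighbors, highway, hw_dist):
    """Approximate d(a, b): reach the nearest highway node from each end,
    then bridge the gap with the precomputed highway table."""
    ha, da = bfs_to_highway(a, neighbors, highway)
    hb, db = bfs_to_highway(b, neighbors, highway)
    if ha is None or hb is None:
        return None  # one endpoint can't reach the highway network
    return da + hw_dist[ha][hb] + db
```

The result over-estimates whenever the true shortest path doesn't pass near the chosen highway nodes, which is exactly the "short but not optimal" trade-off described above.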
I had trouble watching the video on GDC (it kept breaking when changing chapters, for some reason), so for anyone else with the same problem, here's a YouTube mirror:
In a sense the JPS+ algorithm seems to me to be turning the grid into a kind of nav mesh, and if you were to do this explicitly instead, you might actually get a better and more general solution that's also easier to update in real time.
The boundary-checking improvement would be simplified by dividing the map into islands separated by gates (per his closing remarks), limiting the damage to precalculated data from changes to the map (assuming they occur in one island at a time).
This seems to be specifically for grids. Of course essentially no modern AAA game uses grids for navigation anyway, they use convex polys in a nav mesh.
There was a paper by Microsoft Research where they find the shortest path from one point to another in time equivalent to 5 memory reads, and they also significantly sped up the precomputation and lowered memory requirements.