You preprocess the grid into connected components, marking every grid cell with the CC it belongs to. Then, before pathfinding, check that the start and end are in the same component. This is an O(1) check, albeit with potentially O(n log n) space required for storing the array of CC labels.
Until the algorithm runs out of nodes to explore, it can't know that the destination is unreachable. Checking whether a node is enclosed means determining whether its connected component is disjoint from the one the other node is in, and the only way to prove the two are disjoint is to explore every node in one of them.
In this case it's enough to make the algorithm bi-directional: if one side is enclosed, it will stop. This only breaks if both disjoint parts are infinite (for example, an infinite wall between start and end).
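A minimal sketch of that idea in Python (the grid representation and the `neighbors` function are assumptions of mine, not anything from the demo):

    from collections import deque

    def reachable(start, goal, neighbors):
        # Bidirectional BFS: returns False as soon as either frontier empties,
        # i.e. as soon as the smaller, enclosed region has been fully explored.
        if start == goal:
            return True
        frontiers = [deque([start]), deque([goal])]
        visited = [{start}, {goal}]
        while frontiers[0] and frontiers[1]:
            # Always expand the smaller frontier; an enclosed side runs dry quickly.
            side = 0 if len(frontiers[0]) <= len(frontiers[1]) else 1
            for _ in range(len(frontiers[side])):
                cell = frontiers[side].popleft()
                for nxt in neighbors(cell):
                    if nxt in visited[1 - side]:
                        return True               # the two searches met
                    if nxt not in visited[side]:
                        visited[side].add(nxt)
                        frontiers[side].append(nxt)
        return False                              # one side ran out of nodes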
Or rather, yes, sort of, but you only need to do it once (or once per update on a changing grid):
You find the connected components of the grid. This is easy: loop over the entire grid, and for any grid cell that has not yet been assigned a CC, assign the first free CC and flood fill from there. Then to check if a is reachable from b, just check that a and b are in the same connected component. This is O(1) time (albeit with O(n log n) worst-case spatial complexity for the CC structure required).
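A minimal sketch of that preprocessing in Python, assuming a boolean `grid` of walkable cells and 4-way connectivity (the function names are mine):

    from collections import deque

    def label_components(grid):
        # grid[y][x] is True for walkable cells. Returns a parallel array of
        # component ids (-1 for walls); one flood fill per unlabeled cell.
        h, w = len(grid), len(grid[0])
        comp = [[-1] * w for _ in range(h)]
        next_id = 0
        for sy in range(h):
            for sx in range(w):
                if grid[sy][sx] and comp[sy][sx] == -1:
                    queue = deque([(sx, sy)])
                    comp[sy][sx] = next_id
                    while queue:
                        x, y = queue.popleft()
                        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] and comp[ny][nx] == -1:
                                comp[ny][nx] = next_id
                                queue.append((nx, ny))
                    next_id += 1
        return comp

    def same_component(comp, a, b):
        # O(1) reachability check; a and b are (x, y) walkable cells.
        return comp[a[1]][a[0]] == comp[b[1]][b[0]]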
The problem with the naive version of this is that updates can, in the worst case, require a scan and update of arbitrarily close to the full grid. There are better versions for when updates are frequent, but they take longer to query (although still sublinear time!) and are substantially more complex.
Flood fill is the hard part - A* is basically flood fill too, just with a heuristic and a quick stop condition.
Preprocessing is cheating :). If you preprocess and floodfill the map anyway, you can go one step further, do Dijkstra, save the results using a trick, and you don't need run-time pathfinding at all.
And you can store full paths from each cell to each cell using just O(N * N) memory, no matter the path lengths - that's a neat trick BTW:
You store, for each cell pair (P, Q), the next cell you should go to from P if you want to eventually get to Q. That allows you to reconstruct the full path between any pair of cells quickly.
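Roughly like this in Python, using BFS from each destination on an unweighted grid (Dijkstra is the analogue for weighted costs); `cells` and `neighbors` are assumed helpers of mine:

    from collections import deque

    def build_next_hop(cells, neighbors):
        # next_hop[(p, q)] = the neighbor of p to step to on a shortest path
        # to q. One BFS per destination q fills in all entries for that q,
        # giving O(N * N) entries total regardless of path lengths.
        next_hop = {}
        for q in cells:
            queue = deque([q])
            seen = {q}
            while queue:
                cur = queue.popleft()
                for nxt in neighbors(cur):
                    if nxt not in seen:
                        seen.add(nxt)
                        next_hop[(nxt, q)] = cur   # from nxt, step to cur to approach q
                        queue.append(nxt)
        return next_hop

    def reconstruct_path(next_hop, p, q):
        # Walk the stored next hops from p to q; O(path length).
        # Assumes q is reachable from p.
        path = [p]
        while p != q:
            p = next_hop[(p, q)]
            path.append(p)
        return path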
EDIT to clarify - cheating is good, it's just that I don't think O(N log N) is a good tradeoff for the ability to return quicker, if you can do away with the whole pathfinding for O(N * N).
EDIT 2: why O(N log N)? It seems you could write the resulting component for each cell, so worst case O(N) space?
Why O(n log n) space? Because you require O(n) connected components in the worst case, which means that you need O(log n) space per cell in the worst case. Although now that I think about it that pushes the amount of time to O(log n).
In practice you can treat it as O(1) time and O(n) space, though.
Also, that preprocessing with result saving is substantially slower than the connectivity alone. The connectivity alone is O(n) / O(n log n) preprocessing time - you check each cell a maximum of 5 / 9 times (depending on 4 or 8 way connectivity).
Also, that preprocessing with result saving fails miserably when you do updates to the graph.
> I'm not sure we're talking about the same thing.
We are.
> But you can just write for each cell the index of component it belongs to. It can't belong to many, right?
Right. Each cell belongs to 1 and only 1 connected component.
However, there can be O(n) connected components overall. Think of a grid of cells all blocked off from each other, for instance.
As such, storing which connected component a cell belongs to technically requires O(log n) space, worst case. Which means overall you have n cells each requiring O(log n) space, or O(n log n) space in total.
In practice you can treat this as effectively O(n), however.
I use this in one of my games. The API could be slightly simpler/better though. But having abstractions like this available makes me so much more productive.
Now I'm just waiting for a library/API for hexagon maps ;)
So... with a screen sized to about 1280x580 (a comfortable size on my 13-inch MBA), the "Start Search" button starts completely off the screen. Any smaller than that, and the "Select Algorithm" box is partially clipped off the screen. In any case, there's no scroll bar, which really confused me.
That aside, this is really neat, and it's interesting to see a visual representation of the different algorithms.
This is really cool! Reminds me of the Berkeley Pacman AI project (http://ai.berkeley.edu/project_overview.html), but this seems easier to customize, try different edge cases on, and compare algorithms with.
On a 15-inch Mac, the visualization is very neat. Very good implementation of the algorithms. Regarding Zyxley's comments: this could be made responsive so that it works on all screen sizes.
You could use pretty much any pathfinding algorithm (A* for example) even if you don't have a grid based graph.
In the implementation, you would just need to modify how the algorithm traverses each valid walkable/unwalkable node (how to go from waypoint to waypoint) and possibly how much extra cost each node has (water or mountainous terrain could have increased cost, for example), instead of letting it go the usual up/down/left/right and possibly diagonals for grids.
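A sketch of what that looks like in Python: the graph shape lives entirely in the `neighbors`, `cost`, and `heuristic` callables you pass in (all names here are mine, not from any particular library):

    import heapq

    def a_star(start, goal, neighbors, cost, heuristic):
        # Generic A*: swap a grid for a waypoint graph by swapping `neighbors`
        # (who you can reach) and `cost` (e.g. higher for water/mountains).
        # Nodes are assumed hashable and comparable, e.g. (x, y) tuples or ids.
        open_heap = [(heuristic(start, goal), 0, start)]
        best_g = {start: 0}
        came_from = {}
        while open_heap:
            _, g, node = heapq.heappop(open_heap)
            if node == goal:
                path = [node]
                while node in came_from:
                    node = came_from[node]
                    path.append(node)
                return path[::-1]
            if g > best_g.get(node, float("inf")):
                continue                  # stale heap entry, already improved
            for nxt in neighbors(node):
                ng = g + cost(node, nxt)
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_heap, (ng + heuristic(nxt, goal), ng, nxt))
        return None                       # goal unreachable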
If the waypoint based approach isn't ideal for you, then you should look up "navigation mesh" [1] pathing which is a bit trickier to implement but works well when you have a bunch of large empty/walkable areas.
Directions algorithms on real life maps are quite different from what they teach you in CS classes. Just think of the amount of computation to get the best directions between SF and NYC, taking traffic into consideration. I'm pretty sure none of the services actually give you the best solution, but something that's pretty close to it.
Full USA map graph contains around 24 million nodes, 58 million edges. Your regular A* just doesn't cut it.
A* runs faster if you have better path cost estimates. To get the shortest paths, you want an estimate that's as close to the true cost as possible, but never greater than it. You can precalculate data to generate good estimates. Traffic makes the costs higher than normal, so those precalculated estimates are still usable (if you raise the costs because of traffic, the estimates remain lower than the true cost), and still much better than what you get without precalculation.
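One common flavor of that precalculation is a landmark (ALT-style) heuristic; a rough sketch, where `free_flow_dist[L]` is assumed to hold precomputed free-flow shortest travel times from landmark L to every node:

    def landmark_heuristic(v, goal, free_flow_dist, landmarks):
        # Triangle inequality: d(L, goal) <= d(L, v) + d(v, goal), so
        # d(L, goal) - d(L, v) is a lower bound on d(v, goal) in the free-flow
        # metric. Traffic only raises edge costs, so the bound stays admissible.
        best = 0.0
        for L in landmarks:
            d = free_flow_dist[L]
            best = max(best, d[goal] - d[v])
        return best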
http://kevanahlquist.com/osm_pathfinding/