Proof that there isn't a graph search algorithm that is complete with finite memory


Is there a proof that any graph-search algorithm capable of completely exploring any graph (connected, with an upper bound on the degree of each node, and with an ordering of the edges at each node, i.e. left to right) must require an arbitrarily large amount of memory? As claimed at www.cs.hmc.edu/csforall/Introduction/Introduction.html, at the very bottom of that page.


1 Answer


Think about it this way: if you don't keep any record of which descendants of a node you have already visited, how can you know whether you have visited all of them? This becomes a recursive problem: when you start visiting one child of a node, you must somehow determine that you have explored the entire subgraph rooted at that child; but once you reach that child, you must in turn determine that you have explored the subgraph rooted at each of *its* children, and so on.

For a directed acyclic graph, the memory-optimal systematic way to do this is depth-first search, but even then, for a binary tree of depth $n$ you must keep track of up to $n$ nodes along the path you are currently exploring, so the memory required grows with the graph.

The only way around this is rather unsatisfying: traverse each edge at the current node with equal probability (both forward and backward edges), and after enough iterations (depending on the number of nodes and the structure of the graph) you will, with high probability, have visited all nodes. This random walk requires only constant additional memory. But there is never any guarantee that you actually have visited all nodes, no matter how long you run it.
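The contrast above can be sketched in code. This is a minimal illustration (the graph data and function names are made up for the example): the DFS keeps an explicit stack whose peak size grows with the depth of the graph, while the random walk carries only the current node; the `seen` set in the walk exists purely so the demo can check coverage from the outside, since the walker itself has no way to know it is done.

```python
import random

# A small connected undirected graph as adjacency lists
# (hypothetical example data: a binary tree of depth 2, edges in both directions).
graph = {
    0: [1, 2],
    1: [0, 3, 4],
    2: [0, 5, 6],
    3: [1], 4: [1], 5: [2], 6: [2],
}

def dfs(graph, start):
    """Iterative depth-first search.

    Returns the visited set and the peak stack size; the stack is the
    memory that grows with the depth of the graph, so no fixed memory
    bound suffices for all graphs.
    """
    visited = {start}
    stack = [start]
    peak = len(stack)
    while stack:
        node = stack.pop()
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                stack.append(neighbour)
        peak = max(peak, len(stack))
    return visited, peak

def random_walk(graph, start, steps, rng):
    """Random walk keeping only the current node: O(1) extra memory.

    The walker cannot tell when it has seen everything; `seen` is
    external bookkeeping for the demonstration only.
    """
    node = start
    seen = {node}
    for _ in range(steps):
        node = rng.choice(graph[node])  # pick an incident edge uniformly
        seen.add(node)
    return seen

visited, peak = dfs(graph, 0)
print(len(visited), peak)  # DFS is guaranteed complete, at O(depth) memory cost

seen = random_walk(graph, 0, 1000, random.Random(0))
print(len(seen))  # very likely all nodes, but never guaranteed
```

Running the walk longer raises the coverage probability toward 1 but never to 1, which is exactly the completeness gap the answer describes.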