Several years ago, in an attempt to improve Freenet’s performance, I came up with a new idea for how to route requests in Freenet. Previously, Freenet would simply forward a request to the node that contained information most similar to what was being sought.
This “next generation routing” algorithm was much smarter. It would maintain detailed statistics about response times for different neighboring nodes, and forward a given request to whichever neighbor was most likely to provide the fastest response, based on past performance. The algorithm is described in detail on the Freenet website.
We implemented this; in fact, we implemented a much more sophisticated version of it, which took into account the latency, throughput, reliability, and other relevant qualities of each neighboring node before predicting its response time.
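The core idea can be sketched very simply: track an estimate of each neighbor's response time, and route each request to the neighbor with the lowest predicted time. The sketch below uses an exponentially weighted moving average as the estimator; the class and method names are hypothetical, and this is an illustration of the general technique, not Freenet's actual implementation (which, as noted, weighed several qualities, not just a single response-time average).

```python
class NGRouter:
    """Illustrative sketch of NG-style routing: forward each request to
    the neighbor predicted to respond fastest, based on past performance.
    (Hypothetical names; not Freenet's actual code.)"""

    def __init__(self, neighbors, alpha=0.2):
        self.alpha = alpha  # smoothing factor for the moving average
        # Start every neighbor at an optimistic 0.0 so each gets tried early.
        self.estimates = {n: 0.0 for n in neighbors}

    def choose_neighbor(self):
        # Route to the neighbor with the lowest predicted response time.
        return min(self.estimates, key=self.estimates.get)

    def record_response(self, neighbor, seconds):
        # Exponentially weighted moving average of observed response times.
        old = self.estimates[neighbor]
        self.estimates[neighbor] = (1 - self.alpha) * old + self.alpha * seconds


router = NGRouter(["a", "b", "c"])
router.record_response("a", 0.5)
router.record_response("b", 2.0)
router.record_response("c", 1.0)
print(router.choose_neighbor())  # "a" has the lowest estimate so far
```

A real deployment would also need to handle timeouts, decay stale estimates, and combine multiple signals (latency, throughput, reliability) into one prediction, which is where much of the real complexity lived.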
Unfortunately, we were never able to get it to work satisfactorily. At the time, we did not know whether this was due to a fundamental flaw in the concept, or some bug or bugs in the implementation. As has happened in the past, Freenet’s inherent complexity made diagnosing the problem akin to finding a needle in a haystack. Isolating problems in P2P networks is especially hard because often the problem doesn’t manifest itself at the level of individual peers, but only at the level of the entire network. This means that the most common debugging technique, isolating the bug through a process of elimination, isn’t effective.
Anyway, this ambitious approach eventually gave way to a much simpler and better-understood routing algorithm in Freenet 0.7, work on which began in 2005, but we never did figure out whether the fundamental idea behind NG routing was sound.
Well, it turns out that it was. Last year my friend and occasional collaborator Oskar Sandberg (now employed by Google) wrote a paper called Decentralized Search with Random Costs which investigated this question, and found that the technique is effective for efficient routing in a small-world network.
Perhaps some day we can revisit this in Freenet, although this time we’ll have to take a much more cautious approach to ensure we don’t wind up in the same “needle in a haystack” situation that we did last time.