I don’t think Senator Norm Coleman really understood what he was letting himself in for when he allowed British anti-war MP George Galloway to testify before Congress. Regardless of where you stand on the Iraq war, the result is very entertaining (click on the “Galloway’s Testimony” video link).
This guy has done a podcast giving a running commentary on his Indy experience. It’s pretty cool actually; his tastes are obviously different to mine, and Indy seems to be responding to that. I must say, after my last blog post, it is kinda fun to post blog entries about cool stuff you find through Indy, although this guy has taken it to a whole new level.
Here is where I live.
The US Supreme Court heard oral arguments in the MGM vs Grokster case today. For those living under a rock, at issue is the legal decision that prevented the movie industry from killing the VCR in the mid-80s (the “Sony-Betamax decision”). In retrospect the Supreme Court did them a big favour since most of the movie industry’s revenue now comes from video rentals. Unfortunately the movie industry has not learned its lesson.
“Secondary copyright infringement” is when you yourself don’t actually infringe copyright, but you somehow facilitate someone else doing it. I assume that this was originally intended as a way to get at the people that run “swap meets” where people exchange copies of software and CDs in violation of copyright law.
In the 1980s the Supreme Court said that the creator of a technology cannot be sued for secondary infringement if their technology is “capable of substantial non-infringing use”, in effect creating a “shield” against secondary liability for technology creators. In the case of Grokster, two previous court judgements have said that this doctrine protects decentralised P2P software, in the same way that it shielded the creator of the VCR. The movie industry would like to see this shield weakened enough that Grokster and similar P2P file-sharing networks are no longer protected by it.
Their opponents (myself included) fear that any weakening of this shield will create exactly the kind of legal uncertainty that can kill innovations before they have even made it out of the venture capitalist’s office (and as a veteran of a number of VCs’ offices, I can attest to the fact that nothing turns them off like the threat of a legal battle).
If you don’t mind Real Video, you can watch a great debate between Fred von Lohmann, Senior Staff Attorney for the Electronic Frontier Foundation, and Theodore Olson, Former Solicitor General for the Bush Administration (2001-2004) and Representative of the Recording Industry and Motion Pictures Association here.
The argument only took place a few hours ago, but you can read a good summary from someone that appears to know their stuff here. His assessment? It went better for Grokster than he expected, but it is extremely dangerous to draw any conclusions from the oral argument phase of a court case.
Back when I was studying AI at university, Genetic Algorithms were always something I found quite exciting – a way to allow computers to figure out how to do things all by themselves.
I decided it was time to toy around with some of these ideas again. The first task was to figure out the “building blocks” of our artificial creatures. I had to construct them in a way that was flexible enough that they could evolve novel strategies, but which was tolerant to mutation and capable of merging.
For example, most computer software is not tolerant to mutation. If you change a random character in a typical computer program, chances are that it will stop working completely. Similarly, computer software is not tolerant to merging. You can’t really take two computer programs, stick them together, and expect them to do anything sensible.
So I opted for an approach based on the way biological brains work. Essentially you have a bunch of “neurons”, each of which is connected to every other neuron by two connections, one in each direction. Every neuron has an “activation”, represented by a number between 0 and 1. For some neurons this activation is set externally; these are the inputs to our little brain. Similarly, some neurons are treated as “outputs”, their level of activation representing the output from the brain.
The brain proceeds in a series of discrete time steps. At each step, every neuron “transmits” its level of activation along all of its outgoing connections as an “impulse”. The stronger the activation, the stronger the impulse. Each neuron determines its level of activation for the next time step by adding up all of the incoming impulses from other neurons; the higher the total, the stronger the activation (the sum is mapped to an activation using a simple sigmoid function).
There is one last wrinkle: not all connections are equal. Each has a “weight” associated with it, represented by a positive or negative number, and any impulse travelling along the connection is multiplied by this number. So, for example, an impulse from a neuron with an activation of 0.5, travelling through a connection with a weight of 0.5, would only have a strength of 0.25 by the time it reached the destination neuron.
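The update rule described above can be sketched in a few lines of Python. This is just my own illustration of the scheme, not the original code, and the names (`sigmoid`, `step`) are my own:

```python
import math

def sigmoid(x):
    """Squash a summed impulse into the 0..1 activation range."""
    return 1.0 / (1.0 + math.exp(-x))

def step(activations, weights):
    """One discrete time step of the brain.

    activations: list of floats in [0, 1], one per neuron
    weights[i][j]: weight of the connection from neuron i to neuron j

    Each neuron sums the weighted impulses arriving from every other
    neuron, then passes the total through the sigmoid to get its new
    activation.
    """
    n = len(activations)
    new_acts = []
    for j in range(n):
        total = sum(activations[i] * weights[i][j]
                    for i in range(n) if i != j)
        new_acts.append(sigmoid(total))
    return new_acts
```

Note that the worked example from the text falls out directly: an activation of 0.5 through a weight of 0.5 arrives as an impulse of 0.25.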
It is by adjusting the weights of these connections that our genetic algorithm makes these creatures (hopefully) do something intelligent.
Next I wanted to test it on a simple problem. I decided to try to evolve simple brains which could solve the boolean “exclusive-or” function. This has two input neurons and a single output neuron. If the inputs are different (i.e. one is ‘1’ and the other is ‘0’) the output should be ‘1’, otherwise it should be ‘0’.
To illustrate the learning process, I set up a population of 10 creatures, each consisting of 4 neurons (this includes the input and output neurons), each of which runs through 4 iterations before we read the output.
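A toy version of this evolutionary loop might look as follows. I should stress this is a sketch under my own assumptions: the selection scheme (keep the best half, refill with mutated survivors), the mutation parameters, and the generation count are all my choices, not necessarily what the real code does.

```python
import math
import random

N_NEURONS = 4   # 2 inputs, 1 output, 1 spare (as in the post's setup)
N_STEPS = 4     # iterations of the brain before reading the output
POP_SIZE = 10

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_brain(weights, in_a, in_b):
    """Clamp the two input neurons, run N_STEPS updates, read neuron 3."""
    acts = [in_a, in_b, 0.0, 0.0]
    for _ in range(N_STEPS):
        new = []
        for j in range(N_NEURONS):
            total = sum(acts[i] * weights[i][j]
                        for i in range(N_NEURONS) if i != j)
            new.append(sigmoid(total))
        new[0], new[1] = in_a, in_b  # inputs are set externally each step
        acts = new
    return acts[3]

def fitness(weights):
    """Lower is better: squared error against XOR on the four corners."""
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    return sum((run_brain(weights, a, b) - want) ** 2
               for (a, b), want in cases)

def mutate(weights, rate=0.2, scale=0.5):
    """Randomly nudge some of the connection weights."""
    return [[w + random.gauss(0, scale) if random.random() < rate else w
             for w in row] for row in weights]

random.seed(0)
pop = [[[random.uniform(-1, 1) for _ in range(N_NEURONS)]
        for _ in range(N_NEURONS)] for _ in range(POP_SIZE)]
initial_best = min(fitness(p) for p in pop)
for gen in range(200):
    pop.sort(key=fitness)
    # elitism: keep the best half, refill with mutated copies of it
    pop = pop[:POP_SIZE // 2] + [mutate(p) for p in pop[:POP_SIZE // 2]]
best = min(pop, key=fitness)
```

Because the best half survives unchanged each generation, the best fitness can only improve (or stay put) over time, which is the behaviour the animation below illustrates.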
I then drew a graph of what output is given for all input values between 0 and 1. Recall that we only care about the input and output for values of 0 and 1, but by looking at the intermediate values, we get an idea of the creature’s internal thought process.
This output can be represented as a two dimensional image, where black indicates that the output was 0, and white indicates an output of 1. The goal is therefore that the top-left and bottom-right corners are black, while the other corners are white. You can see an animation representing one such “run” here:
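As a rough illustration of what that image encodes, here is how such a picture could be sampled as ASCII art. The helper `render_grid` and the threshold of 0.5 are my own invention, and the lambda below is the ideal XOR target rather than an evolved brain:

```python
def render_grid(fn, size=8):
    """Sample fn(x, y) over the unit square and print '#' where the
    output is near 1 (white in the post's images) and '.' where it is
    near 0 (black)."""
    rows = []
    for r in range(size):
        y = r / (size - 1)
        rows.append(''.join('#' if fn(c / (size - 1), y) > 0.5 else '.'
                            for c in range(size)))
    return '\n'.join(rows)

# The ideal target: white where the inputs disagree, black where they agree.
print(render_grid(lambda x, y: 1.0 if (x > 0.5) != (y > 0.5) else 0.0))
```

A perfectly evolved creature would produce a picture with this shape: the top-left and bottom-right corners dark, the other two corners light.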
You will note that it starts out grey, but quickly finds a way to achieve the desired goal.
That was my progress for this weekend, next weekend I will try to make these things do something far more interesting (I hope).
Update: More on Genetic Algorithms
A friend sent me a link to this Flash movie which presents an interesting vision of the future where collaborative filtering of news has made conventional news media obsolete.
Certainly that is a likely scenario – collaborative filtering gets people what they want, but not necessarily what they need.
As it pertains to news delivery, it would probably tend to shield people from facts that make them uncomfortable (think – not being told that George W has won the election because your collaborative filter has decided that it would make you unhappy ;).
It could thus lead to increased balkanisation of the population (as is starting to happen with blatantly partisan news organisations like Fox News) because different people will only be exposed to those facts that reinforce their existing world view.
Of course, I suspect there are probably technical solutions to this, the question is whether people will actually see it as a problem :-/
Those who have been curious as to what I have been working on for the past few weeks may be interested in this page, which describes “Dijjer”.
In short, it is to BitTorrent what Google was to Yahoo when it first appeared: an easy-to-use, elegant, standards-compliant P2P content distribution application that I modestly believe will blow the competition out of the water when it’s done.
I am blogging this here because I know my blog is relatively low-traffic. Please note the request on the Dijjer page regarding attracting too much publicity at this early stage; I want it to be stable and robust before the Slashdot hordes get their hands on it 😉
William Fisher is Larry’s guest blogger this week. William is the author of “Promises to Keep – Technology, Law, and the Future of Entertainment” in which he advocates the replacement of copyright with a government-run scheme of compulsory taxation, the proceeds of which are then allocated to creators.
Now the libertarian in me instinctively dislikes the notion of effectively nationalising an entire industry, so I thought it was time to dust off a proposal I put together several years ago which still seems rather relevant today. We called the idea FairShare; it describes a way to reward artists without coercion and without copyright, all while appealing to the self-interest of those from whom it extracts money.
Perhaps I am the last person on the planet to discover it, but I just stumbled on www.publicwhip.org.uk, an excellent website which contains detailed information about UK MPs’ voting records and other related information. Of particular interest is a Java map where they have used a clustering algorithm to visually represent the positions of MPs based on their voting records. One interesting thing of note is that while the Lib Dems were almost exactly halfway between the Tories and Labour in 1997, they are now closer to the Tories in their voting records. Even the fact that they are between the two at all is interesting, given that some argue that they have taken over from Labour as the major left-wing party. Not according to their voting records, it seems.
Wouldn’t it be great if the European Parliament did something useful with our money for a change and funded a similar system at the EU level?
Larry links to this campaign to oppose an Iraq draft. While the whole draft thing is a somewhat transparent scare tactic by those opposed to the war, what interests me is that they have a nice graphic tracking the progress of the email through the United States.
It makes me wonder why more people don’t use one of the many IP-address-to-location converters (there is at least one free open database which does this) to map things like website visitors and suchlike more often…