Category Archives: Java

Java 6 released for OS X Leopard – but it's 64-bit only :(

When Leopard came out, a lot of Java developers were dismayed that the developer preview of Java 1.6 no longer worked, and many even questioned Apple’s commitment to continued Java development on the Mac.

Well, great news: Apple has just released “developer preview 8” of Java 6, which runs on Leopard. Go to http://developer.apple.com/ – you may need to register or log in – and then find it in the downloads section.

Update: Turns out it’s only for 64-bit Intel Macs – arrgh!

Netflix Prize – is RMSE a good measurement?

The Netflix Prize is a pretty cool competition where Netflix has made available 100 million user ratings of movies, and you must use these to try to predict what movies users will like. The idea is to stimulate innovation in the field of “Collaborative Filters”. I won’t go into detail, you can read more at the link provided.

One important question is how one measures the success of a collaborative filtering algorithm. Netflix has opted to look at the average difference between what the algorithm predicts a user will rate a movie and what they actually rate it. To be more precise, they look at the square root of the average of the squared differences (aka Root Mean Squared Error, or RMSE). This is similar to a normal average, except it is swayed more by larger errors.
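As a concrete illustration, here is a minimal sketch of the RMSE calculation described above (the class and method names are my own, not Netflix’s):

```java
public class Rmse {

    // RMSE = sqrt(mean of squared (predicted - actual) differences).
    // Squaring before averaging penalises large errors more heavily than
    // small ones, which is why RMSE is "swayed" by larger differences.
    public static double rmse(double[] predicted, double[] actual) {
        double sumOfSquares = 0;
        for (int i = 0; i < predicted.length; i++) {
            double diff = predicted[i] - actual[i];
            sumOfSquares += diff * diff;
        }
        return Math.sqrt(sumOfSquares / predicted.length);
    }

    public static void main(String[] args) {
        // Toy predictions vs actual ratings for four movies.
        double[] predicted = {3.5, 4.0, 2.0, 5.0};
        double[] actual    = {4.0, 4.0, 1.0, 4.0};
        System.out.println(rmse(predicted, actual)); // prints 0.75
    }
}
```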

A naive approach that just looks at a user’s average rating, and the average rating on each movie gets an RMSE of about 1.02. Netflix themselves score around 0.95, and if you can get it down (lower is better) to 0.85, you are in line to win $1 million. Currently, the best performers are around 0.87, but as they get closer to the goal, progress gets much more difficult.
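A baseline of this kind can be sketched in a few lines. The formulation below is hypothetical – one common way to combine a user mean and a movie mean is to offset the global mean by each one’s deviation from it; I don’t know the exact formula behind the 1.02 figure:

```java
public class NaiveBaseline {

    // Hypothetical naive predictor: start from the global mean rating and
    // shift it by how far this user and this movie each deviate from it.
    // Algebraically this is userMean + movieMean - globalMean. It ignores
    // any interaction between a particular user and a particular movie.
    public static double predict(double userMean, double movieMean, double globalMean) {
        return globalMean + (userMean - globalMean) + (movieMean - globalMean);
    }

    public static void main(String[] args) {
        // A generous user (mean 4.2) rating a weak movie (mean 2.8),
        // against an overall mean of 3.6: prediction is about 3.4.
        System.out.println(predict(4.2, 2.8, 3.6));
    }
}
```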

Like seemingly every other geek on the planet, I decided I would try my hand at this. My goal was not to win, since most of the current top entrants work by blending predictions from a bunch of different algorithms. While this is well within the rules of the competition, it does rely much more on perspiration than inspiration, and thus I find it rather dull. Also, I would question whether the approach is fast enough for practical use (something the competition doesn’t consider).

My attempt is not a terribly original approach: I’m using a perceptron learning algorithm (perceptrons are simple precursors to what are today known as “neural networks”), and my best score to date is 0.905. I consider this respectable (especially given the simplicity of my approach and that it’s about two weekends of effort), but it’s not going to win any time soon (it’s around 300th on the leader-board).
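For the curious, the core idea can be sketched as a single linear unit trained with the delta rule. This is a toy version, not my actual code, and the feature vector here is a placeholder for whatever per-user and per-movie inputs one chooses:

```java
public class DeltaRulePerceptron {

    private final double[] weights;
    private final double learningRate;

    public DeltaRulePerceptron(int numFeatures, double learningRate) {
        this.weights = new double[numFeatures];
        this.learningRate = learningRate;
    }

    // The predicted rating is a weighted sum of the input features.
    public double predict(double[] features) {
        double sum = 0;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * features[i];
        }
        return sum;
    }

    // Delta rule: nudge each weight in the direction that shrinks the
    // error between the actual rating and the current prediction.
    public void train(double[] features, double actualRating) {
        double error = actualRating - predict(features);
        for (int i = 0; i < weights.length; i++) {
            weights[i] += learningRate * error * features[i];
        }
    }

    public static void main(String[] args) {
        DeltaRulePerceptron p = new DeltaRulePerceptron(3, 0.05);
        double[] features = {1.0, 0.5, 0.2}; // bias input plus two toy features
        for (int i = 0; i < 500; i++) {
            p.train(features, 4.0); // repeatedly show it a rating of 4
        }
        System.out.println(p.predict(features)); // converges towards 4.0
    }
}
```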

Then I started to think about what it means for an algorithm to score 0.905, or 0.95, or 0.85, or 1.02 – what bearing do these different scores have on the actual user experience?

Well, user experience is a hard thing to measure, but if we make some simplifying assumptions, we can get an idea. Most collaborative filtering algorithms are used to choose things (in Netflix’s case, movies), predict a user’s rating of those things, and then present the things with the highest predicted ratings to the user.

If the user finds the things they are presented with to be appealing, they should rate them highly; if not, they won’t. Thus, if we look at users’ actual ratings of the things with the highest predicted ratings, we should get a better idea of how much difference the algorithm makes to the user’s experience.

Here is a graph showing actual versus predicted ratings for two algorithms (click to see it full size). The top one is my algorithm halfway through its training, with an RMSE of 0.913; the bottom one is the algorithm based on simple averages of user and item ratings, which has an RMSE of 1.022.


You can see the predicted ratings along the bottom, and for each, you can see the proportion of ratings of 1 star, 2 star, 3 star, and so on. As one would hope, when the algorithm is predicting a rating of 1, the majority of the actual ratings are indeed 1, and when the algorithm is predicting a rating of 5, the majority of those ratings are actually 5.

But when you compare the two sets of predictions, you see something a bit surprising: there isn’t really a huge difference between the two algorithms, even though one uses a fairly naive average-based approach that doesn’t even consider a user’s individual likes and dislikes, and the other does significantly better than Netflix’s own approach.

The pie charts at the right show the proportion of actual ratings when the predicted rating is over 3.5 stars. This is intended to approximate what the average user might see.

For the smart algorithm, actual ratings of over 3 stars account for 98.9% of the ratings that were predicted to be over 3.5 stars.

But for the naive algorithm, the percentage is 94.4%, a difference of just 4.5 percentage points. Is that really so impressive? I think a user might have a hard time noticing such a difference in performance. I’m not the first to express concerns of this nature.

You can find the raw data in the form of an Excel spreadsheet here:
Netflix RMSE effectiveness data spreadsheet

Making Java better suited to implementing DSLs

Jonathan, Scott, and I recently had an interesting conversation about Domain Specific Languages, or DSLs. With the growing popularity of frameworks like Wicket and Rails, the concept of implementing a language on top of a language is growing increasingly relevant. One of the problems with implementing a framework on Java is that it does somewhat constrain the user (many would argue that it’s for the user’s own good), in ways that languages like Ruby do not (those same people would argue that this is one of the reasons why Ruby is so slow). Still, this leads to a situation where Java-based frameworks like Wicket can tend to get rather verbose.

Imagine an Eclipse plug-in that would allow you to create arbitrary new Java syntax, which is transparently translated to and from the underlying Java code by the IDE. The new syntax is essentially only a presentation layer thing, similar to syntax highlighting, it never makes it into the code on-disk.

You could use this to implement a much more convenient syntax for common idioms, similar to the concept of Groovy’s builders, but more generic. For example, you could define something specific for Wicket, a syntax that lets you build Wicket components much more conveniently, but when the .java file is saved, Eclipse transparently translates the custom syntax to vanilla Java. Similarly, when the file is loaded back in, it translates the vanilla Java code back into the convenient syntax (this would be non-trivial, as it would require some pretty nifty pattern matching, but it should be doable).
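To make the idea concrete, here is a purely hypothetical illustration. The Component class below is a stand-in for Wicket-style components (so the snippet is self-contained), and the compact syntax in the comment is imaginary:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a Wicket-style component tree node (illustrative only).
class Component {
    final String id;
    final List<Component> children = new ArrayList<Component>();
    Component(String id) { this.id = id; }
    Component add(Component child) { children.add(child); return this; }
}

public class DslSketch {
    public static void main(String[] args) {
        // What the editor might *display* (imaginary DSL syntax):
        //
        //   form "login" { field "username"; field "password" }
        //
        // What the .java file on disk would actually contain, and what the
        // IDE would pattern-match back into the compact form on load:
        Component form = new Component("login")
                .add(new Component("username"))
                .add(new Component("password"));
        System.out.println(form.children.size()); // prints 2
    }
}
```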

This way you essentially get to layer a domain-specific language on top of Java, but developers aren’t obliged to use it (or they can create their own according to their own tastes), and you benefit fully from Java’s speed and efficiency.

Eclipse freezing on SVN update on a Mac? I have the solution

I use the Subclipse plugin for the excellent Eclipse Java IDE, and one thing that has been really bugging me for the past few months is that Eclipse sometimes freezes when I do a Subversion update, requiring me to kill Eclipse and restart it.

Fortunately, I managed to find a solution thanks to Sam Halliday (who reported the same problem some months ago to a mailing list, I emailed him personally to see if he ever found a solution).

The problem is that Subclipse doesn’t really play well with command-line svn, so you need to install the JavaHL bindings. To get these, you can use Fink as follows:


$ fink install svn-javahl
$ cd /System/Library/Java/Extensions/
$ ln -s /sw/lib/libsvnjavahl-1.jnilib .
$ ln -s /sw/share/java/svn-javahl/svn-javahl.jar .

Next time you start up Eclipse, go to Window -> Preferences… -> Team -> SVN and select JavaHL (JNI) instead of JavaSVN (Pure Java).

Update (2007-05-22): Several weeks have now passed and the problem has not reoccurred, so I think it’s safe to say that this solution has been effective.

Karma at work

I read this essay by Jonathan Schwartz, President of Sun Microsystems, and was rather disappointed by his slavish support for the existence of software patents. In it he repeats the thoroughly discredited and naive argument that more IP is better because IP means innovation, and thus we need software patents. Not in my wildest dreams could I have imagined that this would be followed, just a few days later, by this: Kodak winning a suit against Sun in which it alleged that Java infringes on some of its patents (all of them classic examples of what is wrong with patent law), and now they want half of Sun’s operating profit from 1998 to 2001!

Hey Jonathan, why did Sun need to steal Kodak’s precious intellectual property? And if you didn’t, perhaps, having experienced the wrong end of US patent law, you can reconsider your position on software patents?