The purpose of software project management

I recently read the article The sad graph of software death by Gregory Brown.

Brown describes a software project wherein tasks are being opened faster than they are being closed in the project’s task tracker.  The author describes this as “broken”, “sad”, and “wasteful.”  The assumption behind the article seems to be that there is something inherently bad about tasks being opened faster than they are being closed.

The author doesn’t explain why this is bad, and to me this article and the confused discussion it prompted on Reddit are symptomatic of the fact that most people don’t have a clear idea of the purpose of software project management.

Another symptom is that so many software projects run into problems, causing tension between engineering, product, and other parts of the company.  This confusion is also the reason there is such a proliferation of tools that purport to help solve the problem of project management; none of them succeed, because they don’t start from a clear view of what exactly that problem is.

Two complementary goals

In my view, the two core goals of project management are prioritization and predictability.

Prioritization ensures that at any given time, the project’s developers are working on the tasks with the highest ratio of value to effort.

Predictability means accurately estimating what will get done and by when, and communicating that with the rest of the company.

A task tracker maintains a record of who is currently working on which tasks, which tasks have been completed, and which tasks might be tackled in the future.  By itself, this record doesn’t directly address either of the two core goals of project management.

I have actually thought about building a project management tool that addresses these goals, i.e. prioritization and predictability, much more directly than is currently the case with existing systems.  Unfortunately, to date the value to effort ratio hasn’t been high enough relative to other projects :)

When a task is created or “opened” in a task tracker, this simply means “here is something we may want to do at some point in the future.”

Opening a task isn’t, or shouldn’t be, an assertion that it must get done, or must get done by a specific time. Although this might imply that some tasks may never be finished, that’s ok. Besides, a row in a modern database table is very cheap indeed.

Therefore, the fact that tasks are opened faster than they are closed is not an indication of a project’s impending demise; it merely reflects the normal tendency of people to think of new tasks for the project faster than the developers can complete them.

Once created, tasks should then go through a prioritization or triage process; however, the output isn’t simply “yes, we’ll do it” or “no, we won’t.”  Rather, the output should be an estimate of the value that completing the task will provide, together with an estimate of the effort or resources required to complete it.  From these two estimates we can calculate a value/effort ratio for each task, and only then can we stack-rank the tasks.
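
To make that concrete, here is a minimal sketch of stack-ranking by value/effort.  The Task fields and the example numbers are hypothetical, purely to illustrate the idea:

    import java.util.*;

    // Hypothetical illustration: each task carries an estimated value and an
    // estimated effort, and the backlog is sorted by their ratio.
    public class StackRank {
        static class Task {
            final String name;
            final double value;   // e.g. estimated dollar value of completing it
            final double effort;  // e.g. estimated developer-days

            Task(String name, double value, double effort) {
                this.name = name;
                this.value = value;
                this.effort = effort;
            }

            double valuePerEffort() { return value / effort; }
        }

        public static void main(String[] args) {
            List<Task> backlog = new ArrayList<>(Arrays.asList(
                    new Task("Fix checkout bug", 5000, 2),
                    new Task("New reporting dashboard", 20000, 30),
                    new Task("Tweak homepage copy", 800, 0.5)));

            // Highest value-to-effort ratio first.
            backlog.sort(Comparator.comparingDouble(Task::valuePerEffort).reversed());

            for (Task t : backlog) {
                System.out.printf("%-25s value/effort = %.1f%n", t.name, t.valuePerEffort());
            }
        }
    }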

Estimating value and effort

Of course, this makes it sound much simpler than it is.  Accurately estimating the value of a task is a difficult process that may require input from sales, product, marketing, and many other parts of a business.  Similarly, accurately estimating the effort required to complete a task can be challenging for even the most experienced engineer.

There are processes designed to help with these estimates.  Most of these processes, such as planning poker, rely on the wisdom of crowds.  These are steps in the right direction.

I believe the ultimate solution to estimation will exploit the fact that people are much better at making relative, rather than absolute, estimates.  For example, it is easier to guess that an elephant is four times as heavy as a horse than to estimate that an elephant weighs 8,000 pounds.

This was recently supported by a simple experiment I conducted.  First, I asked a group of people to individually make a number of these relative or comparative estimates.  Then I used a constraint solver to turn them into absolute estimates.  The preliminary results are very promising.  This approach would almost certainly be part of any project management tool that I might build.
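
The actual solver isn’t shown here, but the general idea can be sketched as follows: treat each comparative judgment as a constraint in log space, fit the unknowns to those constraints, and then pin everything to a single known absolute value.  The animals, numbers, and the simple gradient-descent fit below are illustrative assumptions, not the actual experiment:

    import java.util.*;

    // Illustrative sketch: turn relative judgments ("a is r times as big as b")
    // into absolute estimates by fitting sizes in log space, then anchoring on
    // one known absolute value. Not the constraint solver used in the actual
    // experiment; just the general idea.
    public class RelativeEstimates {
        static class Comparison {
            final String a, b;
            final double ratio; // "a is ratio times as big as b"
            Comparison(String a, String b, double ratio) { this.a = a; this.b = b; this.ratio = ratio; }
        }

        public static void main(String[] args) {
            List<Comparison> judgments = Arrays.asList(
                    new Comparison("elephant", "horse", 4.0),
                    new Comparison("horse", "dog", 20.0),
                    new Comparison("elephant", "dog", 90.0));

            Map<String, Double> logSize = new HashMap<>();
            for (Comparison c : judgments) {
                logSize.putIfAbsent(c.a, 0.0);
                logSize.putIfAbsent(c.b, 0.0);
            }

            // Minimize the squared error of each constraint: log(a) - log(b) = log(ratio).
            double learningRate = 0.01;
            for (int iter = 0; iter < 10000; iter++) {
                for (Comparison c : judgments) {
                    double err = (logSize.get(c.a) - logSize.get(c.b)) - Math.log(c.ratio);
                    logSize.merge(c.a, -learningRate * err, Double::sum);
                    logSize.merge(c.b, learningRate * err, Double::sum);
                }
            }

            // Anchor: suppose we know a horse weighs roughly 2,000 lbs.
            double shift = Math.log(2000) - logSize.get("horse");
            for (Map.Entry<String, Double> e : logSize.entrySet()) {
                System.out.printf("%-10s ~%.0f lbs%n", e.getKey(), Math.exp(e.getValue() + shift));
            }
        }
    }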

Once we have good value/effort estimates, we can prioritize the tasks.  Using our effort estimates, combined with an understanding of the resources available, we can also come up with better time estimates.  This, in turn, improves the predictability we can share with the rest of the company.

Pivotal Tracker

I have had quite a bit of experience with Pivotal Tracker, which I would describe as the “least bad” project management tool. Pivotal Tracker doesn’t solve the prioritization problem, but it does attempt to help with the predictability problem.  Unfortunately, it does so in a way that is so simplistic as to make it almost useless.  Let me explain.

Pivotal Tracker assumes that for each task, you have assigned effort estimates which are in the form of “points” (you are responsible for defining what a point means).   It also assumes that you have correctly prioritized the tasks, which are then placed in the “backlog” in priority order.

Pivotal Tracker then monitors how many points are “delivered” within a given time period.  It then uses these points to project when future tasks will be completed.
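
In effect it performs a calculation along these lines.  This is a minimal sketch; the velocity, point values, and weekly iterations are made-up illustrations, not Pivotal Tracker’s actual code:

    // Sketch of a Pivotal-Tracker-style projection: given a velocity (points
    // delivered per iteration, measured from history) and a prioritized backlog
    // of point estimates, predict the iteration in which each task will be done.
    // All numbers are made up for illustration.
    public class VelocityProjection {
        public static void main(String[] args) {
            double velocity = 8.0;                      // points per week, from recent history
            int[] backlogPoints = {3, 5, 2, 8, 1, 5};   // estimates, in priority order

            double cumulativePoints = 0;
            for (int i = 0; i < backlogPoints.length; i++) {
                cumulativePoints += backlogPoints[i];
                int week = (int) Math.ceil(cumulativePoints / velocity);
                System.out.printf("Task %d (%d pts) projected to finish in week %d%n",
                        i + 1, backlogPoints[i], week);
            }
        }
    }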

The key problem with this tool is that it pretends the backlog is static, i.e. that new tasks won’t be added ahead of existing ones.  In reality, tasks are constantly being added to any active project, and these new tasks might go straight to the top of the priority list.

Nevertheless, the good news is that Pivotal Tracker could probably be improved to account for this addition of new tasks without much difficulty.  Perhaps a third party could make these improvements by using the Java library I created for integrating with PT’s API.   :)

Breaking down tasks

Most tasks start out quite large and need to be broken down into smaller tasks, both to make it easier to divide work among developers and to improve the accuracy of estimates.

However, there isn’t much point in breaking down tasks when nobody is going to start work on them for weeks or months.  For this reason, I advise setting time-horizon limits for task sizes.  For example, you might say that a task that is estimated to be started within three months can’t be larger than 2 man-weeks, and a task to be started within 1 month cannot be larger than 4 man-days.

As a task crosses each successive time horizon, it may need to be broken into smaller tasks (each of which will, presumably, be small enough until it hits the next time horizon).  In practice this can be accomplished with a weekly meeting, which can be cancelled if there are no tasks to be broken down.  We would assign one developer to break down each oversized task, and then the meeting would break up so that they could go and do that.  Typically each large task would be broken down into 3-5 smaller tasks.

This approach has the additional advantage that it spreads out the process of breaking down tasks over time and among developers.
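
For illustration, here is a sketch of what enforcing those horizons might look like, using the thresholds from the example above.  The Task fields and the specific tasks are hypothetical:

    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;
    import java.util.*;

    // Sketch of the time-horizon rule: the sooner a task is expected to start,
    // the smaller it must be, otherwise it gets flagged for breakdown at the
    // weekly meeting. Thresholds follow the examples in the text; the tasks
    // themselves are made up.
    public class HorizonCheck {
        static class Task {
            final String name;
            final double manDays;
            final LocalDate expectedStart;
            Task(String name, double manDays, LocalDate expectedStart) {
                this.name = name; this.manDays = manDays; this.expectedStart = expectedStart;
            }
        }

        public static void main(String[] args) {
            LocalDate today = LocalDate.now();
            List<Task> backlog = Arrays.asList(
                    new Task("Rework billing", 25, today.plusWeeks(6)),
                    new Task("Add CSV export", 3, today.plusWeeks(2)));

            for (Task t : backlog) {
                long daysAway = ChronoUnit.DAYS.between(today, t.expectedStart);
                double limit = Double.MAX_VALUE;
                if (daysAway <= 30) limit = 4;         // within 1 month: max 4 man-days
                else if (daysAway <= 90) limit = 10;   // within 3 months: max 2 man-weeks
                if (t.manDays > limit) {
                    System.out.printf("%s needs breaking down: %.0f man-days > %.0f man-day limit%n",
                            t.name, t.manDays, limit);
                }
            }
        }
    }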

Resource allocation

So how do you decide who works on what?  This is fairly simple under this approach.  Developers simply pick the highest priority task that they can work on (depending on skill set or interdependencies).

At OneSpot, when we broke down tasks, we left the subtasks in the same position in the priority stack as the larger task they replaced.  Since developers pull new tasks off the top of the priority list, this tends to get as many people as possible working on related tasks, which minimizes the number of projects (large tasks) in flight at any given time.

Conclusion

To conclude, without a clear view of the purpose of project management, it is not surprising that so many projects flounder and so many project management tools fail to hit the mark.  I hope I have provided the beginnings of a framework for thinking about project management in a goal-driven way.

Tungle: A wasted opportunity

Apparently Tungle has shut down development, although they still allow people to sign up.  It turns out their acquisition by RIM last year must have been an acqui-hire (technically an acquisition, but really an admission of defeat).

Tungle had an incredibly viral business model, perhaps the most viral I’ve seen since Plaxo, solving a problem I and many others encounter on a near-daily basis: helping people schedule meetings and calls with each other.

So what went wrong? Their usability SUCKED. I desperately wanted Tungle to work, but almost every time I tried using it to schedule a meeting with someone, something would screw up and we’d have to resort to manually scheduling via email.  This was embarrassing when it happened, but even so I tried over and over again.  Each of those attempts could have been an opportunity for Tungle to sign up a new user, if only their usability hadn’t been so bad.

So if there is anyone out there looking for an idea that could be the next LinkedIn-scale viral phenomenon, all you have to do is reimplement Tungle, but this time get the usability right.  If I weren’t already rather busy I’d be doing this myself.

The game theory of sealed-bid auctions

According to Wikipedia, a “frenemy” is someone who is simultaneously a partner and a competitor.

The term is typically used for social relationships, but frenemies can also exist in economic relationships.  I was fortunate enough to encounter such a situation in a project I’m working on, and I decided to explore it. The conclusions are directly applicable to at least one specific situation where real money is at stake (I describe this at the end). More interestingly, this may suggest a selfish rationale for unselfish behavior in a wider class of situations.

First, a little background:

Sealed Bid Auction

Consider a simplified version of eBay where everyone bids once on an item, nobody sees anyone else’s bid, and the highest bid wins. This is called a “first-price sealed-bid auction”.

One day you find a trustworthy guy called Bill who promises that he will pay you $200 for a Nexus One phone. You discover that Nexus One phones frequently sell for less than $200 on eBay, and that several of these phones are auctioned off every day. Bill hates eBay and won’t ever use it no matter what, and he doesn’t really care what you bid for the phones, so long as he gets them. You realize that there is an opportunity to make some money here.

So what do you bid when you see one of these phones? It’s a compromise: if you bid $200 you’ll definitely win the phone, but you’ll make no money. If you bid less, then you’ll make more money when you win, but you’ll win less frequently. If you bid too little, you’ll never win and you’ll make no money.

This is not a hard question to answer if you are reasonably smart and have a decent amount of information about past winning bids, since you’ll be able to estimate a function f1(b1) that tells you the probability of winning the auction for any given bid b1. Let’s assume that you are smart enough and that you do have enough information. If your cut of the profit is c1 then, applying some high school math, your expected profit is $200*c1*f1($200*(1-c1)).

From this equation you are able to decide the optimal bid (and therefore how big your cut is) to maximize your own profit. You can either do this through some fancy math, or just experimentally plug in different values for c1 until you find what works best, the latter being my preferred approach (because I’m lazy and computers aren’t).
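
Here is what the plug-in-different-values approach looks like in code. The win-probability function below is a made-up stand-in for f1; in practice you would estimate it from data about past winning bids, as described later:

    // Sketch of brute-forcing the optimal cut. winProbability() is a made-up
    // stand-in for f1(b1); in reality it would be estimated from historical
    // winning bids.
    public class OptimalCut {
        // Placeholder f1: assume winning bids cluster around $130, so the
        // probability of winning rises smoothly as our bid passes that level.
        static double winProbability(double bid) {
            return 1.0 / (1.0 + Math.exp(-(bid - 130.0) / 10.0));
        }

        public static void main(String[] args) {
            double price = 200.0;  // what Bill pays us when we win a phone
            double bestCut = 0.0, bestProfit = 0.0;
            for (double c1 = 0.0; c1 <= 1.0; c1 += 0.001) {
                // Expected profit = price * c1 * f1(price * (1 - c1))
                double expected = price * c1 * winProbability(price * (1.0 - c1));
                if (expected > bestProfit) {
                    bestProfit = expected;
                    bestCut = c1;
                }
            }
            System.out.printf("Best cut: %.1f%%, expected profit: $%.2f per auction%n",
                    bestCut * 100, bestProfit);
        }
    }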

Bill gets clever

Then Bill comes to you and tells you that he may not always be able to pay you $200 for a phone; sometimes he’ll pay more, sometimes he’ll pay less, but he will tell you what he will pay before you must place your bid. Fair enough, you think: you’ve got your function f1(b1), so you can determine the optimal bid for whatever Bill is willing to pay.

Weeks go by and you are making good money off this relationship. One night you go out for a beer with Bill, and he drops a bombshell. He tells you that actually, he is playing the same game you are. He knows a guy, Jim, who is buying the phones from him. He won’t tell you who Jim is (Bill isn’t an idiot), but it turns out that Jim is paying even more for these Nexus Ones than Bill is paying you! Worse still, it turns out that not only is Bill no idiot, he is at least as smart as you are. He has determined the probability of you winning your bid, and is choosing his cut to optimize his total profit, in exactly the same way that you are (although his function, f2(b2), won’t be the same as yours, because you are reducing his bid by your cut).

You go home drunk and don’t think about it much, but the next day you have a real headache, and it’s not just from all the beer you had with Bill last night. What exactly is the relationship between you and Bill? In one sense, you are on the same side: if your cut combined with Bill’s cut is too big, then both of you will make less money. But in another sense, you are on opposite sides: the smaller his cut, the bigger your cut can be, and vice versa.

Given this relationship, and assuming that you can’t deal directly with Jim, Bill can’t deal directly with eBay, and Bill is at least as smart as you are, how do you maximize your profit?

My Simulation

A mathematician or game theorist at this point would probably go into a dark room for a week, month, or year, and come out with the mathematically optimal answer. Since I don’t have the time or patience required for a rigorous mathematical treatment, I decided to do a few experiments instead. You can find the code here.

The Auction

In the results described here, Bill’s buyer pays $1 (ok, we’re somewhat abandoning the Nexus One analogy here) to Bill for a phone. I wanted to create a reasonably realistic auction, which I do in the bidWinProb() method on line 30. Basically I simulate 100,000 auctions, each consisting of 10 bidders whose bids are spread according to a Gaussian distribution with a mean of $0.50 and a standard deviation of 0.1. I then record all of the winning bids, which allows me to quickly estimate the probability of winning for any given bid.

Yes, I realize that I could do this much more efficiently mathematically, but speed doesn’t really matter here, and this solution is simple enough that bugs are unlikely.

It has the added benefit that you can just provide it with actual data from a real auction, rather than simulated data.
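
For the curious, the essence of that simulation looks something like this (a simplified sketch, not the actual bidWinProb() code):

    import java.util.Arrays;
    import java.util.Random;

    // Simplified sketch of the win-probability estimate described above:
    // simulate 100,000 auctions of 10 bidders whose bids are Gaussian
    // (mean $0.50, standard deviation 0.1), record the winning bids, and
    // estimate P(win | our bid) as the fraction of winning bids our bid
    // would have beaten. Not the actual bidWinProb() implementation.
    public class AuctionSim {
        public static void main(String[] args) {
            Random rand = new Random(42);
            int auctions = 100000, bidders = 10;
            double[] winningBids = new double[auctions];

            for (int a = 0; a < auctions; a++) {
                double best = Double.NEGATIVE_INFINITY;
                for (int b = 0; b < bidders; b++) {
                    best = Math.max(best, 0.5 + 0.1 * rand.nextGaussian());
                }
                winningBids[a] = best;
            }
            Arrays.sort(winningBids);

            // Fraction of simulated winning bids that a given bid would beat.
            double ourBid = 0.72;
            int idx = Arrays.binarySearch(winningBids, ourBid);
            int beaten = idx >= 0 ? idx : -idx - 1;
            System.out.printf("P(win | bid %.2f) ~ %.2f%n", ourBid, (double) beaten / auctions);
        }
    }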

Optimize Cut

The first step is that we want to be able to determine the optimal cut for ourselves given whatever cut Bill has, or the optimal cut for Bill given whatever our cut is. This is done in the optimizeCut() method on line 78. The approach is basic trial and error. While a more sophisticated, accurate, and efficient approach is certainly possible, the current implementation is more than adequate for our needs (not to mention easier to be confident that it does what it’s supposed to).

Experiment #1: Best combined cut

So what happens if Bill and I were entirely transparent with each other, and agreed that we’d find the best combined cut for both of us and then split it down the middle?

In this situation it turns out that Bill and I should collectively take $0.28 of the dollar Bill is paid if the auction is won (which, when we consider the possibility of losing the auction, is an expected revenue of $0.24). Split down the middle, this is an expected revenue of $0.12 each.

Experiment #2: Iterated selfish cut

But what if Bill and I can’t agree on an even split, and we are basically each left to our own devices to make as much as we each can?

One approach would be for me to optimize my cut based on whatever Bill is paying me (which will depend on his cut), and for Bill to optimize his cut based on his own estimate of the win probability given various payments to me (which will depend on my cut).

When I tried this the results were interesting. I started from a point where Bill isn’t taking any cut at all, and then each of us takes turns optimizing our own cut based on the other’s:

% Cut          $ Cut
Bill    Me      Bill    Me      Total   Half
0.000   0.280   0.000   0.244   0.244   0.122
0.100   0.220   0.080   0.159   0.239   0.120
0.140   0.190   0.109   0.128   0.237   0.118
0.160   0.180   0.119   0.112   0.231   0.115
0.160   0.180   0.119   0.112   0.231   0.115

This is interesting because they converge to where Bill takes a 16% cut, and I take an 18% cut. Further, when you look at the expected revenue, it is close to evenly split between us, but not exactly. Note that after this iterative process Bill and I collectively are making only $0.231, whereas we would be making $0.244 if we had worked together and split the difference.

Bill makes 3% more money if we work together and split the profit, and I make 9% more!
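
For reference, the shape of that iteration is roughly the following. The win-probability function here is a smooth stand-in for the simulated auction above, so the exact numbers won't match the table, but the take-turns-optimizing structure is the same:

    // Sketch of the iterated selfish cut: starting with Bill taking nothing,
    // Bill and I alternately pick the cut that maximizes our own expected
    // profit given the other's current cut. winProbability() is a smooth
    // stand-in for the simulated auction, so numbers will differ from the
    // table above.
    public class IteratedCuts {
        interface ProfitFn { double at(double cut); }

        static double winProbability(double bid) {
            // Rough approximation to the distribution of winning bids produced
            // by 10 Gaussian bidders (mean 0.5, sd 0.1).
            return 1.0 / (1.0 + Math.exp(-(bid - 0.655) / 0.035));
        }

        // Jim pays Bill $1; Bill passes (1 - billCut) to me; I bid what's left
        // after taking my own cut.
        static double myProfit(double billCut, double myCut) {
            double paidToMe = 1.0 - billCut;
            return paidToMe * myCut * winProbability(paidToMe * (1.0 - myCut));
        }

        static double billProfit(double billCut, double myCut) {
            double paidToMe = 1.0 - billCut;
            return billCut * winProbability(paidToMe * (1.0 - myCut));
        }

        // Brute-force search for the cut that maximizes a profit function.
        static double optimizeCut(ProfitFn p) {
            double best = 0.0, bestProfit = Double.NEGATIVE_INFINITY;
            for (double c = 0.0; c <= 1.0; c += 0.001) {
                if (p.at(c) > bestProfit) { bestProfit = p.at(c); best = c; }
            }
            return best;
        }

        public static void main(String[] args) {
            double billCut = 0.0, myCut = 0.0;
            for (int round = 0; round < 5; round++) {
                if (round > 0) {
                    final double mine = myCut;
                    billCut = optimizeCut(c -> billProfit(c, mine));
                }
                final double bills = billCut;
                myCut = optimizeCut(c -> myProfit(bills, c));
                System.out.printf("Bill %.3f  me %.3f  Bill $%.3f  me $%.3f%n",
                        billCut, myCut, billProfit(billCut, myCut), myProfit(billCut, myCut));
            }
        }
    }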

Experiment #3: Machiavellianism

My last two experiments I call “Evil Me” and “Evil Bill”. In “Evil Me” I exploit the fact that Bill will optimize his cut based on my cut, so rather than selecting my cut based on Bill’s cut, I select my cut to maximize my profit knowing that Bill will optimize for whatever is in his best interests given my cut. Evil Bill is the same but the other way around.

% Cut          $ Cut
Bill    Me      Bill    Me      Total   Half
0.100   0.270   0.055   0.134   0.189   0.094  <-- I'm evil
0.240   0.120   0.151   0.057   0.208   0.104  <-- Bill is evil

Wow, so in this situation being Machiavellian allows me to make $0.134 while poor Bill only makes $0.055! Bill reverses this if he is Machiavellian and I am not.

What is fascinating is that, while I increase my profit by 20%, Bill's drops to less than half of what it was before! One of us being Machiavellian helps that person a bit, but it hurts the other a lot.
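
In sketch form, "Evil Me" is just a nested optimization on top of the helpers from the previous sketch (this fragment assumes it is added to the IteratedCuts class above, reusing optimizeCut(), myProfit(), and billProfit()):

    // "Evil Me" as a nested optimization: pick my cut knowing that Bill will
    // then respond with whatever cut is best for him given mine.
    static double evilMyCut() {
        return optimizeCut(myCut -> {
            double billsResponse = optimizeCut(billCut -> billProfit(billCut, myCut));
            return myProfit(billsResponse, myCut);
        });
    }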

Conclusions

So what are our conclusions based on this one experiment?

  1. If our sole motivation is to maximize our own profit, we should adopt the Machiavellian approach, even though this will totally screw Bill.
  2. If we at least have some semblance of compassion for Bill, the next best approach is to pre-agree a split with Bill, and then optimize together, as in Experiment #1.
  3. The pre-agreed split should probably be 50:50, because if Bill and I can't agree on a split and were to optimize against each other as in Experiment #2, then we wind up making approximately the same amount anyway, but we both make less than if the split is pre-agreed.

An important caveat is that these conclusions are based on a single experiment that makes a variety of assumptions. These assumptions seem realistic enough that hopefully these results do generalize, but we can't say that for sure.

Unanswered questions

Some may feel dissatisfied by the fact that I've answered these questions experimentally, rather than exploring the math behind why we get these results. I would strongly encourage such people to explore it themselves, especially if they enjoy algebra and statistics. I'll gladly link to any serious attempts here.

Further, I've explored a situation where one participant is manipulative, while the other tries to make the best of whatever situation they are in. But what if both participants are manipulative? My suspicion is that in that case it degenerates into a Game of Chicken with an ultimate outcome that is far worse for both participants, but further investigation is definitely warranted.

Why is this problem relevant?

Realtime bidding is a recent innovation in advertising in which advertisers bid against each other for the right to show you an advertisement. The analogy with the situation described above is that the sealed-bid eBay is an ad exchange, I'm an ad network, and Bill is the advertiser. Bill is selling something with a (hopefully) known profit margin, and he must decide how much of this margin he will spend to sell his product. Similarly, I as the ad network must decide on my cut.

Note 26th July 2010 - This is a republish of an article from 24th July with significant additions and modifications.

A quick thought on “downsizing” strategies

Sometimes companies realize that they need to reduce their staffing costs, and typically they do this by getting rid of employees.

Two common approaches to this are:

  1. A layoff
  2. Freeze wages and hiring, and wait for people to leave

Option #2 is braindead.  With a layoff, you get rid of your worst people, but with a wage and hiring freeze, you’ll typically lose your best people.

$20k, 10 weeks in Austin, advice from 20 accomplished entrepreneurs: Capital Factory

A while back my good friend Josh Baer asked me if I’d like to be involved in Capital Factory, a really interesting project to help stimulate innovation in the tech space.  The idea is pretty simple – entrepreneurs submit their ideas to us, and we choose 10 of them.  They get a $20k investment in cash (on very reasonable terms), $20k in free stuff (legal advice, things like that), and most importantly, the support and advice of 20 successful and well-known mentors, typically people who have gained a lot of experience building companies.  Check out the list, they are an impressive bunch (I’m almost certainly the least qualified among them, and I’ve been around the block a few times! ;).

Applications are due on April 3rd, so if you are interested don’t waste any time.  Take a good look at CapitalFactory.com, and if you think it’s for you – apply!

Twitter clone Yammer wins TC50

Am I the only one to be a little surprised by the fact that the winner of the TechCrunch 50 competition, basically “American Idol” for tech startups, was a spectacularly unimaginative Twitter clone? I mean, even the idea to build a Twitter clone has been done to death (*cough* Pownce *cough*)!

Silicon Valley is starting to remind me of Hollywood: increasingly devoid of fresh ideas, with money being pumped into “It’s like the Care Bears Movie meets Robocop!”-style combinations of well-worn ideas.

If I had a dollar for every “It’s like X, but for the enterprise!” business plan, I could probably start a VC fund myself.

Thinking about a web startup? Read this

One of the big mistakes people make when thinking about a new web startup is to model their strategies on companies like Facebook and Google.  This is a little like modelling your career on that of a lottery winner. The people behind those companies were certainly smart, but they were also fantastically lucky.

I just stumbled on a great article which supports and elaborates on this point.  It should be mandatory reading for anyone with aspirations to start a web company.

Why the Bay Area isn’t necessarily the place to be

I really thought this recent post on the 37 Signals blog was great, explaining some of the reasons you might not want to build your tech startup in the Bay Area, and it’s something I wholeheartedly agree with.

The dominant Bay Area business approach is raising a few million in Venture Capital, hiring 15 employees and getting a nice office, not worrying much about actually generating revenue, all in the hope of getting acquired in 18 months by Google or Yahoo. I know because I’ve been down that path more than once.

This model obviously works spectacularly well for some people, but it is not a recipe for building a sustainable business.  I won’t repeat the entire post; suffice to say that it does a good job of articulating why I’m bootstrapping Uprizer Labs in Austin, and why I’m quite happy to stay here for the time being.

Don’t get me wrong, I’m not saying that there is anything wrong with venture capitalists; all the VCs I’ve worked with over the years have been good, honest people who genuinely want you to succeed.  Obviously their model works too, otherwise their limited partners wouldn’t be entrusting them with billions of dollars, as they do.  And some businesses can only work with the kind of cash injection at the outset that venture capital provides.

But the problem is that venture capital can allow entrepreneurs to delay or bypass the critical initial stages during which you must really validate whether there are customers out there for what you are building.  If you do that, then really all the venture capital does for you is pay you a salary while you march towards failure.

With my current project I made the decision early-on that I would go out and find customers for what I was building before I even started to build it.  That is what I did, and I was fortunate enough to find two great customers who have worked with me as I build SenseArray to ensure that it meets their needs.

Knowing that there are people out there ready to write you a check once you’ve built what you are building gives you a great sense of comfort, which I’ve sometimes lacked even after I’ve raised venture capital.  I can’t guarantee that I’ll be successful this time around, but so far I couldn’t ask for things to have gone any better.

My latest commercial project: SenseArray

I’ve just been putting the finishing touches to the website for my latest project; it’s called SenseArray.  It’s a stand-alone software application that companies can license, which allows them to easily integrate personalized recommendations into their service or website.  I’ve been working with several companies over the past few weeks and months that are deploying the SenseArray technology, and I’m now ready to cast a wider net.

The basic idea behind SenseArray is simple: the success of almost any website or service depends on your ability to figure out what your users want, and then give it to them as expeditiously as possible.  This can be the difference between success and failure, or between some success and huge success.  The problem is that it’s hard to figure out what your users want, and often they don’t all want the same thing.

I’m not the first to try to tackle this problem; indeed, it’s not even the first or second time I’ve tackled it myself, as I developed two collaborative filters for Revver and one for Thoof.

But building collaborative filters isn’t easy; it takes time, and a lot of trial and error.  I’ve done the hard work and successfully navigated the minefield.  The end result is that SenseArray scores more than 5% better than Netflix’s own algorithm on the Netflix Prize dataset.

Furthermore, all collaborative filters suffer from a common problem: they all suck when it comes to new users, because they don’t know anything about those users yet.  SenseArray doesn’t suffer from this problem, because it can utilize all kinds of data about users, much of which is available from the very first moment they visit a website (such as their IP address, their operating system, and the referring website).

Anyway, take a look at the new SenseArray website, let me know what you think, and tell your friends about it if you think it might be useful to them!

Orson Scott Card on software companies

An interesting essay by Orson Scott Card that I found via Thoof:

The environment that nurtures creative programmers kills management and marketing types – and vice versa. Programming is the Great Game. It consumes you, body and soul. When you’re caught up in it, nothing else matters. When you emerge into daylight, you might well discover that you’re a hundred pounds overweight, your underwear is older than the average first grader, and judging from the number of pizza boxes lying around, it must be spring already. But you don’t care, because your program runs, and the code is fast and clever and tight. You won. You’re aware that some people think you’re a nerd. So what? They’re not players. They’ve never jousted with Windows or gone hand to hand with DOS. To them C++ is a decent grade, almost a B – not a language. They barely exist. Like soldiers or artists, you don’t care about the opinions of civilians. You’re building something intricate and fine. They’ll never understand it.

BEEKEEPING

Here’s the secret that every successful software company is based on: You can domesticate programmers the way beekeepers tame bees. You can’t exactly communicate with them, but you can get them to swarm in one place and when they’re not looking, you can carry off the honey. You keep these bees from stinging by paying them money. More money than they know what to do with. But that’s less than you might think. You see, all these programmers keep hearing their parents’ voices in their heads saying “When are you going to join the real world?” All you have to pay them is enough money that they can answer (also in their heads) “Geez, Dad, I’m making more than you.” On average, this is cheap. And you get them to stay in the hive by giving them other coders to swarm with. The only person whose praise matters is another programmer. Less-talented programmers will idolize them; evenly matched ones will challenge and goad one another; and if you want to get a good swarm, you make sure that you have at least one certified genius coder that they can all look up to, even if he glances at other people’s code only long enough to sneer at it. He’s a Player, thinks the junior programmer. He looked at my code. That is enough. If a software company provides such a hive, the coders will give up sleep, love, health, and clean laundry, while the company keeps the bulk of the money.

OUT OF CONTROL

Here’s the problem that ends up killing company after company. All successful software companies had, as their dominant personality, a leader who nurtured programmers. But no company can keep such a leader forever. Either he cashes out, or he brings in management types who end up driving him out, or he changes and becomes a management type himself. One way or another, marketers get control. But…control of what? Instead of finding assembly lines of productive workers, they quickly discover that their product is produced by utterly unpredictable, uncooperative, disobedient, and worst of all, unattractive people who resist all attempts at management. Put them on a time clock, dress them in suits, and they become sullen and start sabotaging the product. Worst of all, you can sense that they are making fun of you with every word they say.

SMOKED OUT

The shock is greater for the coder, though. He suddenly finds that alien creatures control his life. Meetings, Schedules, Reports. And now someone demands that he PLAN all his programming and then stick to the plan, never improving, never tweaking, and never, never touching some other team’s code. The lousy young programmer who once worshiped him is now his tyrannical boss, a position he got because he played golf with some sphincter in a suit. The hive has been ruined. The best coders leave. And the marketers, comfortable now because they’re surrounded by power neckties and they have things under control, are baffled that each new iteration of their software loses market share as the code bloats and the bugs proliferate. Got to get some better packaging. Yeah, that’s it.

After founding 3 venture-backed software companies with mixed success, and running a mostly voluntary free software project for 7 or 8 years, I’m not sure that I agree with this essay, but I’m pretty sure nothing in my experience causes me to disagree with it.