26 August 2010

Lifehacker Pack for Android: Our List of the Best Android Apps

We're using the great app search and sync service AppBrain to create a Lifehacker Pack for Android. If you install the AppBrain App Market on your Android phone and sign in through the AppBrain site (using Google OAuth, so your password is never revealed), you can check off multiple apps from the list and install them all in one go.

Waste #2: Extra Features | Agile Zone

Our first best weapon against extra features is a short feedback cycle. Frequent product demos will expose features that we're working on that our customers no longer believe will give them a competitive advantage. Even better than frequent demos are frequent production deployments. Getting the software in the wild on a regular basis and then tracking feature usage can easily expose features that are not needed. Removing features from the system will reduce the complexity, maintenance load, and likelihood that things will go wrong going forward.

Our second weapon against extra features is a healthy dose of "YAGNI." YAGNI stands for "You Ain't Gonna Need It." This phrase represents one of the original principles of eXtreme Programming, that of only adding functionality when it is necessary to meet a clear and present need of the customer.

Wikipedia's article on YAGNI [3] provides the following useful summary of the disadvantages of extra features:

* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.

20 August 2010

Fun with the Anthropic Principle - Cedric's blog


One day, someone called Steve sends you an email in which he predicts that tomorrow, team A will win against team B. You don’t think much of that email and you delete it. The next day, you learn that team A did indeed win. A few days later, you receive another email from Steve which, again, makes a prediction for the result of an upcoming game. And again, the prediction turns out to be correct. After a while, you have received ten emails from Steve, each of which accurately predicted a game outcome. By now you are quite shocked and excited. What are the odds that this person would randomly guess ten matches correctly? 1 in 2^10 (1024), about 0.1%. That’s quite remarkable. In his next email, Steve says “I hope that by now, I have convinced you that I can guess the future. Here is the deal: send me $10,000, I’ll bet it on the next match and we’ll split the profits”. Do you send the money?

Surprisingly, a lot of people fall for this kind of scam on a daily basis. If you think about this a little bit, you can probably see how the scammer did it: he started by sending a prediction that A will win to 512 recipients and one where B will win to the other 512. After the game is finished, he repeats the process with the 512 that received the right result. Every time a new match result comes in, the number of recipients is divided by two, but the remaining recipients have all received 100% accurate predictions so far. By the time we reach the 10th match, there is only one recipient left — you. And to you, the sender of this email has proven that he has an uncanny ability to guess the future while all he did was walk through an entire solution space until he had reached a point where he could carry out his scam.
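The halving mechanism is easy to demonstrate. Here is a minimal sketch of the scam as a simulation; the numbers (1024 recipients, 10 games) are taken from the story, everything else is illustrative:

```python
# Simulate the prediction scam: start with 1024 recipients, tell half
# of them "A wins" and the other half "B wins", then keep only the half
# whose email happened to match the real outcome. Repeat per game.
import random

recipients = list(range(1024))
for game in range(10):
    half = len(recipients) // 2
    predicted_a = recipients[:half]   # these were told "A wins"
    predicted_b = recipients[half:]   # these were told "B wins"
    outcome = random.choice("AB")     # the actual result of the game
    recipients = predicted_a if outcome == "A" else predicted_b

# After 10 games, exactly one recipient is left, and that recipient
# has received ten correct predictions in a row.
print(len(recipients))  # -> 1
```

Whatever the actual game results are, 1024 / 2^10 = 1: someone always ends up holding a "perfect" track record, and from their point of view it looks like clairvoyance.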

How does this short story relate to the Anthropic Principle? Proponents of creationism and intelligent design usually make claims along the lines of observing the amazing complexity that lies around us and ascribe these observations to the existence of a god. This is very similar to yourself learning about this person who seems to be able to guess the future. Surely, only the existence of a supreme being can explain for such amazing feats, right? Here are other similar claims: “The eye is such a complex organ that it couldn’t have evolved to become what it is, it must have been created by someone”. “If the atmosphere mix of the Earth had been off by just a few percent, human life would not be possible”. “Our cosmos would not exist if the constants that underlie it were off by just a tiny fraction of a decimal”. Are all these numbers so remarkable that the only way to explain them is by the existence of a supreme being? Of course not.

There are millions of universes that are similar to ours, and which have all these microscopic variations in their constants. And if you’re not convinced, it’s easy: just go ask the people who live in these universes. Except that… you can’t, of course. Because life never emerged in these universes. What happened is that you got lucky: you were born in one of the few universes where life was possible. Unlucky people never realized that they were unlucky since they were never born, and as such, they were never able to ponder these questions.

This realization is the very definition of the anthropic principle: all these magic values that surround us and that make life possible are actually unremarkable, because they, and you, are the product of a statistical event. There is nothing so magical about our eyes that it can only be explained by the existence of a supreme being. The simple truth is that if the eye was not the complex organ that it is today, you wouldn’t be around to ask questions about it.

Interestingly, the Anthropic Principle is not mutually exclusive with the existence of a deity (actually, nothing really is, which is part of the problem). You can still believe that some god decided that you would be part of the lucky experiment. But the Anthropic Principle is certainly strong supporting evidence for the mechanism of evolution, and it shows that a lot of the seemingly magical properties that permeate the world around us can be explained very simply by high school level probability concepts.

19 August 2010

Belas Blog: Daisychaining in the clouds

Belas Blog: Daisychaining in the clouds. Interesting idea, although I would generally go for a more traditional fan-out model that looks like a tree rather than a daisy chain.
The idea is that, instead of sending a message to N-1 members, we only send it to our neighbor, which forwards it to its neighbor, and so on. For example, in {A,B,C,D,E}, D would broadcast a message by forwarding it to E, E forwards it to A, A to B, B to C and C to D. We use a time-to-live field, which gets decremented on every forward, and a message gets discarded when the time-to-live is 0.

The advantage is that, instead of taxing the link between a member and the switch with N-1 messages, we distribute the traffic more evenly across the links between the nodes and the switch. Consider an example where A broadcasts messages m1 and m2 in the cluster {A,B,C,D}:
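The ring forwarding described above can be sketched in a few lines. This is an illustrative in-memory model, not JGroups' actual API; the class and method names are made up for the example:

```python
# Daisy-chain broadcast: each node forwards a message only to its
# neighbor in the ring, with a time-to-live that starts at N-1 and is
# decremented on every hop; the message is dropped when TTL hits 0.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbor = None     # next node in the ring
        self.delivered = []      # messages this node has seen

    def broadcast(self, msg, cluster_size):
        # The originator delivers locally, then forwards once.
        self.delivered.append(msg)
        self.neighbor.forward(msg, ttl=cluster_size - 1)

    def forward(self, msg, ttl):
        if ttl == 0:             # message has already visited everyone
            return
        self.delivered.append(msg)
        self.neighbor.forward(msg, ttl - 1)

# Build the ring {A,B,C,D,E}
nodes = [Node(n) for n in "ABCDE"]
for i, node in enumerate(nodes):
    node.neighbor = nodes[(i + 1) % len(nodes)]

# D broadcasts m1: it travels D -> E -> A -> B -> C and stops,
# so every member delivers it exactly once.
nodes[3].broadcast("m1", cluster_size=len(nodes))
```

Note that the originator only ever sends one copy onto the wire, which is exactly the traffic-spreading property the excerpt describes; the trade-off is latency, since the last member only sees the message after N-1 sequential hops.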

Schmidt: Erase your identity to escape Google shame • The Register

Schmidt: Erase your identity to escape Google shame • The Register. Wow, that's a scary vision.
Increasingly bonkers Google governor Eric Schmidt has seen the future, and you might have to change your name to be a part of it.  According to the man in charge of the company de facto in charge of the web, young people's tendency to post embarrassing personal information and photographs to Googleable social networks means that in the future they will all be entitled to change their name on reaching adulthood.

02 August 2010

UtilityVsStrategicDichotomy: Martin Fowler

One of the most important ways in which these efforts differ is where the risks lie. For utility projects the biggest risk is some kind of catastrophic error - you don't want the sewage pipe to break, or to miss payroll. So you need enough attention to make sure that doesn't happen, but other than that you want costs to be as low as possible. However with strategic projects, the biggest risk is not doing something before your competitors do. So you need to be able to react quickly. Cost is much less of an issue because the opportunity cost of not doing something is far greater than the costs of software development itself.

This is not a static dichotomy. Business activities that are strategic can become a utility as time passes. Less often, a utility can become strategic if a company figures out how to make that activity a differentiator. (Apple did something like this with the design of personal computers.)

One way this dichotomy helps is in deciding between building custom software and installing a package. Since the definition of utility is that there's no differentiator, the obvious thing is to go with the package. For a strategic function you don't want the same software as your competitors because that would cripple your ability to differentiate.

Ross goes so far as to argue that there shouldn't be a single IT department that's responsible for both utility and strategic work. The mindset and management attitudes that are needed for the two are just too different. It's like expecting the same people who design warehouses to design an arts museum.