Thursday, December 31, 2009

Tablet thoughts

I don't know anything about Apple's tablet, and I generally don't pay much attention to the speculation about their unannounced products. However, John Gruber has a nice post today which discusses the hypothetical Apple tablet.

The best part is the core product question -- how does this new product fit in with all existing products? Revolutionary products are underestimated because we evaluate them relative to existing products. This quote gets it:
Like all Apple products, The Tablet will do less than we expect but the things it does do, it will do insanely well. It will offer a fraction of the functionality of a MacBook, but that fraction will be way more fun. The same Asperger-y critics who dismissed the iPhone will focus on all that The Tablet doesn't do and declare that this time, Apple really has fucked up but good.

That was certainly the case with the iPod, or as slashdot put it, "No wireless. Less space than a nomad. Lame."

When considering revolutionary new products, we cannot simply compare them with existing products, but must instead compare them with the products that don't yet exist, but should. For example, the PC was more than just an expensive, hard-to-use typewriter -- it was a whole new thing that just happened to have some typewriter features. Obviously this comparison is much more difficult than the "count the checkboxes" approach that we like to use when evaluating the "better mousetrap", but it's critical if we're going to understand or create anything truly new.

I have no idea what Apple is planning to release, but to me the revolutionary product need is in bridging the virtual and physical worlds. If you spend your entire day in front of the computer, this need may not seem real, but if you move between the two worlds you may notice that they are strangely disconnected. For example, imagine that I'm looking at a picture on my computer and want to give you a copy. In the physical world, I would simply hand you the print (I would have gotten double-prints), but with computers it's nearly impossible. Yes, there may be some complicated 10-step process that I can use to share the image, or maybe I can download and install some obscure software, but I'm not going to do that and neither are most other people. Imagine if I instead had a simple (built-in) gesture for passing the photo off to the person standing next to me, and it were just as easy as handing them a real photo.

Of course exchanging photos is just one small example of these physical/virtual interactions. It's a whole new category, so many of these interactions, including the most important ones, haven't been invented yet. However, you can get some ideas by thinking of the marketing cliche where two people are standing around a computer collaborating on something, taking quick notes, working off a recipe, etc. Those images occur in marketing because they are appealing, but they don't occur much in real life because our existing devices and software are awful. Current laptop computers are too bulky, awkward, and keyboard-centric (the ui needs to be gesture-centric), and the iPhone is too small and limited. I want something about the size of a notepad that can be used naturally while standing up and walking around, just like an actual pad of paper, except that it's fully integrated with the virtual world as well as the physical world.

I hope this is what Apple is building -- it would be a great product. (or someone else could build it, though honestly I can't imagine anyone besides Apple getting it right)

Saturday, November 28, 2009

So I finally tried Wave...

Last week, TechCrunch published a story about me not yet trying Google Wave ("Gmail Creator Thinks Email Will Last Forever. And Hasn't Tried Google Wave"). This is apparently unacceptable, or as one commenter put it, "Paul may have been trying to be cool and ironic, but really he should be ashamed for not having tried Wave yet." I'm not sure if this is because I have an obligation to try all new products, or because my views on the longevity of email will seem hopelessly naive once I try Wave, but either way, I mustn't disappoint the good people of TechCrunch :)

The Google Wave About page and video do a good job of summarizing what Wave is and how it works. If you want to learn more about Wave, I would start there and skip this post. That said, here are my thoughts on Wave:

First off, Wave is clever and full of interesting ideas.

Second, comparisons to Facebook and Twitter are nonsensical. If Twitter were CNN Headline News, Google Wave would be Microsoft Office. Wave is less of a social network and more of a productivity tool. It's Google Docs meets Gmail, or as Google puts it, "A wave is equal parts conversation and document. People can communicate and work together with richly formatted text, photos, videos, maps, and more."

Third, although Wave is very promising, it's clear that it still needs some refinement. This is why Google calls it a "preview release". The trouble with innovative new ideas is that not all of them are worth keeping. While developing Gmail, we implemented a lot of features that were either not released, or not released until much later. Some of the most interesting ideas (such as automatic email prioritization) never made it out because we couldn't find simple enough interfaces. Other ideas sounded good, but in practice weren't useful enough to justify the added complexity (such as multiple stars). Other features, such as integrated IM, simply needed more time to get right and were added later. Our approach was somewhat minimal: only include features that had proven to be highly useful, such as the conversation view and search. It's my impression that Wave was released at an earlier stage of development -- they included all of the features, and will likely winnow and refine them as Wave approaches a full launch. The Wave approach can be a little confusing, but it allows for greater public feedback and testing.

From what I've seen, the realtime aspects of Wave are both the most intriguing, and the most problematic. I think the root of the issue is that conversations need to be mostly linear, or else they become incomprehensible. IM and chat work because there is a nice, linear back-and-forth among the participants. Wave puts the conversation into little Gmail-like boxes, but then makes them update in realtime. The result is that people end up responding (in realtime) to things on other parts of the page, and the chronological linkage and flow of the conversation is lost. I suspect it would work better if each box behaved more like a little chat room. A single Wave could contain multiple chats (different sub-topics), but each box would be mostly self-contained and could be read in a linear fashion.

So now that I've tried Wave, do I expect it to kill email? No. The reason that nothing is going to kill email anytime soon is quite simple: email is universal (or as close to it as anything on the Internet). Email has all kinds of problems and I often hate it, but the fact is that it mostly works, and there's a huge amount of experience and infrastructure supporting it. The best we can do is to use email less, and tools like Wave and Docs are a big help here.

I don't know what Google has planned for Wave or Gmail, but if I were them I would continue improving Wave, and then once it's ready for the whole world to use, integrate it into Gmail. Moving Wave into Gmail would give it a huge userbase, and partially address the "email is universal" problem. They could use MIME multi-part to send both a non-Wave, HTML version of the message, and the Wave version. Wave-enabled mail readers would display the live Wave, while older mailers would show the static version along with a link to the live Wave.  
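To make the multi-part idea a little more concrete, here is a minimal sketch using Python's standard email library. The wave URL and the "x-wave" part subtype are invented for illustration; only the multipart/alternative mechanism itself is standard MIME:

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_wave_message(subject, static_html, live_wave_url):
    # multipart/alternative: readers pick the richest part they understand,
    # so the static HTML snapshot goes first and the (hypothetical) wave
    # part goes last.
    msg = MIMEMultipart('alternative')
    msg['Subject'] = subject

    # Older mail readers fall back to this static HTML version, which
    # includes a link to the live Wave.
    fallback = static_html + '<p><a href="%s">View the live Wave</a></p>' % live_wave_url
    msg.attach(MIMEText(fallback, 'html'))

    # Made-up "text/x-wave" part that a Wave-aware reader might use to load
    # the live version; older mailers simply ignore it and show the HTML.
    msg.attach(MIMEText(live_wave_url, 'x-wave'))
    return msg

print(build_wave_message('Trip planning', '<p>Latest snapshot of the wave...</p>',
                         'https://wave.example.com/w/12345').as_string())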

Friday, November 20, 2009

Open as in water, the fluid necessary for life

"Open" is a great thing. Everyone likes it. Unfortunately, nobody agrees what open is. There are many meanings, but in general, I think "open" must be the opposite of "closed". In the world of abstract things like software, protocols and society, closed is secret, hidden, or locked.

"Closed" limits our mobility, prevents discovery, and discourages new connections. Imagine being in a building where all of the doors are locked or guarded, and it's difficult to move from room to room or leave. A closed world is one where people are forced to stay in their place, sometimes because of physical constraints, but more commonly because they simply don't know where else to go. A closed world is giant prison.

In an open world, people are able to see more clearly, and more easily explore new ideas and possibilities. An open world is more fluid -- people and ideas easily flow over boundaries and other borders. This openness is what makes the Internet so powerful. The Internet is melting the world, but in a good way.

Open standards and open source software are important for making technology open and available to everyone, but it's important to remember that open goes beyond tech. Wikipedia makes knowledge open to everyone. Blogs and YouTube make broadcasting and mass communication open to everyone -- news and events that would have been suppressed in the past are now reaching the whole world.

These things have been discussed to death, but there's another "open" that still seems a little frivolous: our lives. We like to joke (or complain) about people who share every boring detail of their lives and thoughts on Facebook or Twitter, but they may be doing something important.

Most of our happiness and productivity comes from the everyday details of our lives: the people we live and work with, the books we read, the hikes we take, the parties we attend, etc. But how do we choose these things? How do we know what to do, and how do we know if we'll like it? The obvious answer is that we do and like whatever the TV tells us to do and like. I'm not certain that's the best answer though.

By sharing more of our own thoughts and lives with the world, we contribute to the global pool of "how to live", and over time we also get contributions back from the world. Think of it as "open source living". This has certainly been my experience with my blog and FriendFeed. Not only do people occasionally say that it has helped them, but I've also met interesting new people and gotten a lot of good leads on new ideas. These are typically small things, but our lives are woven from the small details of everyday living. For example, I saw a good TED talk on "The science of motivation", shared it on FriendFeed, and in the comments Laura Norvig suggested a book called Unconditional Parenting, which turns out to be very good.

The next step is for people to open more of their current activities and plans. This is often referred to as "real-time", but since real-time is also a technical term, we often focus too much on the technical aspect of it. The "real-time" that matters is the human part -- what I'm doing and thinking right now, and my ability to communicate that to the world, right now. We see some of this on Facebook, FriendFeed, and Twitter, and also location-aware apps such as Foursquare, but it's still fairly primitive and fringe. When this activity reaches critical mass, it should be very interesting for society. It dramatically alters the time and growth coefficients in group formation. It enables a much higher degree of serendipity and ad hoc socializing.

The basic pattern of openness is that better access to information and better systems lead to better decisions and better living. This general principle is broadly accepted, but we're just now discovering that it also applies to the minutiae of our lives.

Sharing your boring thoughts and activities may seem narcissistic and self-absorbed at first (I'm still kind of embarrassed about having a blog), but there is virtue and benefit in it. Naturally there will be challenges and fear along the way, but in the long term we're contributing to a more open, fluid society, where people are more able to find happy, productive lives. It also encourages us to be more accepting of others. Everyone is flawed, and the more we see that we aren't alone, the less we need to fear that truth.

People cannot truly live and thrive in a prison -- we require freedom and mobility. This may explain my incomprehensible analogy, "Open as in water, the fluid necessary for life".

Go forth and share.

Tuesday, October 13, 2009

Applied Philosophy, a.k.a. "Hacking"

Every system has two sets of rules: The rules as they are intended or commonly perceived, and the actual rules ("reality"). In most complex systems, the gap between these two sets of rules is huge.

Sometimes we catch a glimpse of the truth, and discover the actual rules of a system. Once the actual rules are known, it may be possible to perform "miracles" -- things which violate the perceived rules.

Hacking is most commonly associated with computers, and people who break into or otherwise subvert computer systems are often called hackers. Although this terminology is occasionally disputed, I think it is essentially correct -- these hackers are discovering the actual rules of the computer systems (e.g. buffer overflows), and using them to circumvent the intended rules of the system (typically access controls). The same is true of the hackers who break DRM or other systems of control.

Writing clever (or sometimes ugly) code is also described as hacking. In this case the hacker is violating the rules of how we expect software to be written. If there's a project that should take months to write, and someone manages to hack it out in a single evening, that's a small miracle, and a major hack. If the result is simple and beautiful because the hacker discovered a better solution, we may describe the hack as "elegant" or "brilliant". If the result is complex and hard to understand (perhaps it violates many layers of abstraction), then we will call it an "ugly hack". Ugly hacks aren't all bad though -- one of my favorite personal hacks was some messy code that demonstrated what would become AdSense (story here), and although the code was quickly discarded, it did its job.

Hacking isn't limited to computers though. Wherever there are systems, there is the potential for hacking, and there are systems everywhere. Our entire reality is systems of systems, all the way down. This includes human relations (see The Game for a very amusing story of people hacking human attraction), health (Seth Roberts has some interesting ideas), sports (Tim Ferriss claims to have hacked the National Chinese Kickboxing championship), and finance ("too big to fail").

We're often told that there are no shortcuts to success -- that it's all a matter of hard work and doing what we're told. The hacking mindset takes the opposite approach: There are always shortcuts and loopholes. For this reason, hacking is sometimes perceived as cheating, or unfair, and it can be. Using social hacks to steal billions of dollars is wrong (see Madoff). On the other hand, automation seems like a great hack -- getting machines to do our work enabled a much higher standard of living, though as always, not everyone sees it that way (the Luddites weren't big fans).

Important new businesses are usually some kind of hack. The established businesses think they understand the system and have set up rules to guard their profits and prevent real competition. New businesses must find a gap in the rules -- something that the established powers either don't see, or don't perceive as important. That was certainly the case with Google: the existing search engines (which thought of themselves as portals) believed that search quality wasn't very important (regular people can't tell the difference), and that search wasn't very valuable anyway, since it sends people away from your site. Google's success came in large part from recognizing that others were wrong on both points.

In fact, the entire process of building a business and having other people and computers do the work for you is a big hack. Nobody ever created a billion dollars through direct physical labor -- it requires some major shortcuts to create that much wealth, and by definition those shortcuts were mostly invisible to others (though many will dispute it after the fact). Startup investing takes this hack to the next level by having other people do the work of building the business, though finding the right people and businesses is not easy.

Not everyone has the hacker mindset (society requires a variety of personalities), but wherever and whenever there were people, there was someone staring into the system, searching for the truth. Some of those people were content to simply find a truth, but others used their discoveries to hack the system, to transform the world. These are the people that created the governments, businesses, religions, and other machines that operate our society, and they necessarily did it by hacking the prior systems. (consider the challenge of establishing a successful new government or religion -- the incumbents won't give up easily)

To discover great hacks, we must always be searching for the true nature of our reality, while acknowledging that we do not currently possess the truth, and never will. Hacking is much bigger and more important than clever bits of code in a computer -- it's how we create the future.

Or at least that's how I see it. Maybe I'll change my mind later.

See also: "The Knack" (and the need to disassemble things)

Sunday, September 13, 2009

Left brain, Right brain, and the other half of the story

In my head, this post and yesterday's post on risk and opportunity are deeply connected, but logically they needed to be split apart.

The theory of the left-brain / right-brain split is that the left hemisphere of our brain handles linear, logical processing (cold logic) while the right hemisphere is more emotional, intuitive, and holistic (evaluating the whole picture instead of considering things one component at a time). Naturally, some people are more left-brain dominant while others are more right-brain dominant. This divide is discussed quite a bit elsewhere -- I recommend starting with the TED talk by Jill Bolte Taylor, a neuroanatomist whose left hemisphere was damaged by a stroke, causing her to become right-brain dominant.

I'm actually somewhat skeptical that the left-brain / right-brain split is as real as people assume, but it seems to be metaphorically correct, so for my non-surgical purposes, it's "good enough".

To me, one of the most interesting aspects of this right/left divide is that many people seem to identify strongly with one side or the other, and actually despise the other half of their brain (see here for a few examples, and even Jill Taylor seems to be doing it to some extent). This seems kind of dumb. My theory is that both halves of our brain are useful, and that for maximum benefit and happiness, we should learn how to use each half to its maximum potential.

This is where I link in to yesterday's post on Risk and Opportunity. My suggestion was to simultaneously seek big, exciting opportunities ("dream big"), while carefully avoiding unacceptable risks ("don't be stupid"). In my mind, that is the right/left divide.

The left-brain ability to carefully double-check logic and evaluate the risks is very important because it helps to protect us from bad decisions. When we imagine the kind of person who believes things that are obviously false, falls for scams, ends up joining a cult, etc, we probably picture a stereotypically right-brain person.

However, what the left brain has in cold, efficient logic, it lacks in passion and grandiosity.

When I wrote about evaluating risks and opportunities, it was as though we use a logical process when making decisions, but of course that's not actually true, nor should it be. Our actual decision making is much more emotional (and emotions are just another mental process).

The right-brain utility is in integrating millions of facts (more than the left brain can logically combine) and producing a unified output. However, that output is in the form of an intuition, "gut feeling", or just plain excitement, which can sometimes be difficult to communicate or justify ("it seems like a good idea" isn't always convincing). Nevertheless, these intuitions are crucial for making big conceptual leaps, and ultimately providing direction and meaning in our lives.

So to reformulate yesterday's advice, I think we do best when using our right-brain skills to discover opportunity and excitement, while also engaging our left-brain abilities to avoid disasters, find tactical advantages, and rationalize our actions to the world. Left and Right are both stuck in the same skull, but not by accident -- they actually need each other. (the same could probably be said for politics, but that would be another post)

Coincidentally, I just saw another good TED talk that mentions these right-brain/left-brain issues in the context of managing and incentivizing creative people. It's worth watching.

Saturday, September 12, 2009

Evaluating risk and opportunity (as a human)

Our lives are full of decisions that force us to balance risk and opportunity: should you take that new job, buy that house, invest in that company, swallow that pill, jump off that cliff, etc. How do we decide which risks are smart, and which are dumb? Once we've made our choices, are we willing to accept the consequences?

I think the most common technique is to ask ourselves, "What is the most likely outcome?", and if that outcome is good, then we do it (to the extent that people actually reason through decisions at all). That works well enough for many decisions -- for example, you might believe that the most likely outcome of going to school is that you can get a better job later on, and therefore choose that path. That's the reasoning most people use when going to school, getting a job, buying a house, or making most other "normal" decisions. Since it focuses on the "expected" outcome, people using it often ignore the possible bad outcomes, and when something bad does happen, they may feel bitter or cheated ("I have a degree, now where's my job!?"). For example, most people buying houses a couple of years ago weren't considering the possibility that their new house would lose 20% of its value, and that they would end up owing more than the house was worth.

When advising on startups, I often tell people that they should start with the assumption that the startup will fail and all of their equity will become worthless. Many people have a hard time accepting that fact, and say that they would be unable to stay motivated if they believed such a thing. It seems unfortunate that these people feel the need to lie to themselves in order to stay motivated, but recently I realized that I'm just using a different method of evaluating risks and opportunities.

Instead of asking, "What's the most likely outcome?", I like to ask "What's the worst that could happen?" and "Could it be awesome?". Essentially, instead of evaluating the median outcome, I like to look at the 0.01 percentile and 95th percentile outcomes. In the case of a startup, the worst case outcome is generally that you will lose your entire investment (but learn a lot), and the best case is that you make a large pile of money, create something cool, and learn a lot. (see "Why I'd rather be wrong" for more on this)

Thinking about the best-case outcomes is easy and people do it a lot, which is part of the reason it's often disrespected ("dreamer" isn't usually a compliment). However, too many people ignore the worst case scenario because thinking about bad things is uncomfortable. This is a mistake. This is why we see people killing themselves over investment losses (part of the reason, anyway). They were not planning for the worst case. Thinking about the worst case not only protects us from making dumb mistakes, it also provides an emotional buffer. If I'm comfortable with the worst-case outcome, then I can move without fear and focus my attention on the opportunity.

Considering only the best and worst case outcomes is not perfect of course -- lottery tickets have an acceptable worst case (you lose $1) and a great best case (you win millions), yet they are generally a bad deal. Ideally we would also consider the "expected value" of our decisions, but in practice that's impossible for most real decisions because the world is too complicated and math is hard. If the expected value is available (as it is for lottery tickets), then use it (and don't buy lottery tickets), but otherwise we need some heuristics. Here are some of mine:
  • Will I learn a lot from the experience? (failure can be very educational)
  • Will it make my life more interesting? (a predictable life is a boring life)
  • Is it good for the world? (even if I don't benefit, maybe someone else will)
These things all raise the expected value (in my mind at least), so if they are mostly true, and I'm excited about the best-case outcome, and I'm comfortable with the worst-case outcome, then it's probably a good gamble. (note: I should also point out that when considering the worst-case scenario, it's important to also think about the impact on others. For example, even if you're ok with dying, that outcome may cause unacceptable harm to other people in your life.)
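Going back to the lottery example for a moment, here's the expected-value arithmetic spelled out as a tiny sketch. The jackpot size and odds are invented round numbers; the point is only that the ticket costs far more than it returns on average:

# Expected value of a hypothetical lottery ticket (all numbers invented).
ticket_price = 1.00            # the acceptable worst case: lose $1
jackpot = 50e6                 # the great best case: win millions
p_win = 1.0 / 200e6            # odds of hitting that best case

expected_value = p_win * jackpot - ticket_price
print("expected value per ticket: $%.2f" % expected_value)   # about -$0.75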

I've been told that I'm extremely cynical. I've also been told that I'm unreasonably optimistic. Upon reflection, I think I'm ok with being a cynical optimist :)

By the way, here's why I chose the 0.01 percentile outcome when evaluating the worst case: Last year there were 37,261 motor vehicle fatalities in the United States. The population of the United States is 304,059,724, so my odds of getting killed in a car accident are very roughly 1/10,000 per year (of course many of those people were teenagers and alcoholics, so my odds are probably a little better than that, but as a rough estimate it's good). Using this logic, I can largely ignore obscure 1/1,000,000 risks, which are too numerous and difficult to protect against anyway.
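For what it's worth, the raw division comes out closer to 1 in 8,000, so rounding to 1/10,000 already bakes in some of that "my odds are better than average" adjustment:

fatalities = 37261           # US motor vehicle deaths (figure from above)
population = 304059724       # US population (figure from above)

annual_risk = fatalities / float(population)
print("about 1 in %.0f per year" % (1 / annual_risk))   # roughly 1 in 8,160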

Also see The other half of the story

Friday, April 17, 2009

Make your site faster and cheaper to operate in one easy step

Is your web server using gzip encoding? Surprisingly, many are not. I just wrote a little script to fetch the 30 external links off news.yc and check if they are using gzip encoding. Only 18 were, which means that the other 12 sites are needlessly slow, and also wasting money on bandwidth.

Check your site here.
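If you'd rather check from the command line, something like the following works. This is not the script mentioned above, just a minimal sketch using Python's standard library: it requests each URL with "Accept-Encoding: gzip" and reports whether the server actually responds with gzip.

import sys
import urllib.request

def uses_gzip(url):
    # Ask for gzip; a well-configured server answers with
    # "Content-Encoding: gzip" in the response headers.
    req = urllib.request.Request(url, headers={
        'Accept-Encoding': 'gzip',
        'User-Agent': 'gzip-check/0.1',
    })
    with urllib.request.urlopen(req) as resp:
        return 'gzip' in resp.headers.get('Content-Encoding', '')

if __name__ == '__main__':
    for url in sys.argv[1:] or ['http://example.com/']:
        print('%-12s %s' % ('gzip' if uses_gzip(url) else 'NOT gzipped', url))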

Some people think gzip is "too slow". It's not. Here's an example (run on my laptop) using data from one of the links on news.ycombinator.com:
$ cat < /tmp/sd.html | wc -c
146117
$ gzip < /tmp/sd.html | wc -c
35481
$ time gzip < /tmp/sd.html >/dev/null
real    0m0.009s
user    0m0.004s
sys     0m0.004s

It took 9ms to compress 146,117 bytes of html (and that includes process creation time, etc), and the compressed data was only about 24% the size of the input. At that rate, compressing 1GB of data would require about 66 seconds of cpu time. Repeating the test with a much larger file yields about 42 sec/GB, so 66 sec is not an unreasonable estimate.

Inevitably, someone will argue that they can't spare a few ms per page to compress the data, even though it will make their site much more responsive. However, it occurred to me today that thanks to Amazon, it's very easy to compare CPU vs Bandwidth. According to their pricing page, a "small" (single core) instance costs $0.10 / hour, and data transfer out costs $0.17 / GB (though it goes down to $0.10 / GB if you use over 150 TB / month, which you probably don't).

Using these numbers, we can estimate that it would cost $1.88 to gzip 1TB of data on Amazon EC2, and $174 to transfer 1TB of data. If you instead compress your data (and get 4-to-1 compression, which is not unusual for html), the bandwidth will only cost $43.52.

Summary:
with gzip: $1.88 for cpu + $43.52 for bandwidth = $45.40 + happier users

without gzip: $174.00 for bandwidth = $128.60 wasted + less happy users
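Here's that arithmetic spelled out, in case you want to check the numbers or plug in your own (the prices are the 2009 EC2 figures quoted above, and 4-to-1 compression is assumed):

cpu_price = 0.10        # $ per small-instance hour (2009 EC2 pricing)
bw_price = 0.17         # $ per GB transferred out
gzip_rate = 66.0        # cpu seconds per GB compressed (conservative estimate)
compression = 4.0       # typical compression ratio for html
tb = 1024.0             # GB per TB

cpu_cost = tb * gzip_rate / 3600 * cpu_price    # ~$1.88
bw_gzipped = tb / compression * bw_price        # ~$43.52
bw_raw = tb * bw_price                          # ~$174.08

print("with gzip:    $%.2f cpu + $%.2f bandwidth = $%.2f"
      % (cpu_cost, bw_gzipped, cpu_cost + bw_gzipped))
print("without gzip: $%.2f bandwidth" % bw_raw)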

The other excuse for not gzipping content is that your webserver doesn't support it for some reason. Fortunately, there's a simple solution: put nginx in front of your servers. That's what we do at FriendFeed, and it works very well (we use a custom, epoll-based python server). Nginx acts as a proxy -- outside requests connect to nginx, and nginx connects to whatever webserver you are already using (and along the way it will compress your response, and do other good stuff).
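For reference, the gzip-proxy part of an nginx config is only a few lines. This is a generic sketch, not FriendFeed's actual configuration; the backend address and the list of types are placeholders you'd adjust for your own setup (nginx compresses text/html by default, so it isn't listed in gzip_types):

# Minimal nginx front-end that gzips responses from an existing backend.
worker_processes  1;
events { worker_connections 1024; }

http {
    gzip             on;
    gzip_comp_level  5;
    gzip_types       text/plain text/css application/x-javascript application/json;

    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080;   # your existing webserver
        }
    }
}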

Thursday, January 22, 2009

Communicating with code

Some people can sell their ideas with a brilliant speech or a slick powerpoint presentation.

I can't.

Maybe that's why I'm skeptical of ideas that are sold via brilliant speeches and slick powerpoints. Or maybe it's because it's too easy to overlook the messy details, or to get caught up in details that seem very important, but aren't. I also get very bored by endless debate.

We did a lot of things wrong during the 2.5 years of pre-launch Gmail development, but one thing we did very right was to always have live code. The first version of Gmail was literally written in a day. It wasn't very impressive -- all I did was take the Google Groups (Usenet search) code (my previous project) and stuff my email into it -- but it was live and people could use it (to search my mail...). From that day until launch, every new feature went live immediately, and most new ideas were implemented as soon as possible. This resulted in a lot of churn -- we re-wrote the frontend about six times and the backend three times by launch -- but it meant that we had direct experience with all of the features. A lot of features seemed like great ideas, until we tried them. Other things seemed like they would be big problems or very confusing, but once they were in we forgot all about the theoretical problems.

The great thing about this process was that I didn't need to sell anyone on my ideas. I would just write the code, release the feature, and watch the response. Usually, everyone (including me) would end up hating whatever it was (especially my ideas), but we always learned something from the experience, and we were able to quickly move on to other ideas.

The most dramatic example of this process was the creation of content targeted ads (now known as "AdSense", or maybe "AdSense for Content"). The idea of targeting our keyword based ads to arbitrary content on the web had been floating around the company for a long time -- it was "obvious". However, it was also "obviously bad". Most people believed that it would require some kind of fancy artificial intelligence to understand the content well enough to target ads, and even if we had that, nobody would click on the ads. I thought they were probably right.

However, we needed a way for Gmail to make money, and Sanjeev Singh kept talking about using relevant ads, even though it was obviously a "bad idea". I remained skeptical, but thought that it might be a fun experiment, so I connected to that ads database (I assure you, random engineers can no longer do this!), copied out all of the ads+keywords, and did a little bit of sorting and filtering with some unix shell commands. I then hacked up the "adult content" classifier that Matt Cutts and I had written for safe-search, linked that into the Gmail prototype, and then loaded the ads data into the classifier. My change to the classifier (which completely broke its original functionality, but this was a separate code branch) changed it from classifying pages as "adult", to classifying them according to which ad was most relevant. The resulting ad was then displayed in a little box on our Gmail prototype ui. The code was rather ugly and hackish, but more importantly, it only took a few hours to write!

I then released the feature on our unsuspecting userbase of about 100 Googlers, and then went home and went to sleep. The response when I returned the next day was not what I would classify as "positive". Someone may have used the word "blasphemous". I liked the ads though -- they were amusing and often relevant. An email from someone looking for their lost sunglasses got an ad for new sunglasses. The lunch menu had an ad for balsamic vinegar.

More importantly, I wasn't the only one who found the ads surprisingly relevant. Suddenly, content targeted ads switched from being a lowest-priority project (unstaffed, will not do) to being a top priority project, an extremely talented team was formed to build the project, and within maybe six months a live beta was launched. Google's content targeted ads are now a big business with billions of dollars in revenue (I think).

Of course none of the code from my prototype ever made it near the real product (thankfully), but that code did something that fancy arguments couldn't do (at least not my fancy arguments), it showed that the idea and product had real potential.

The point of this story, I think, is that you should consider spending less time talking, and more time prototyping, especially if you're not very good at talking or powerpoint. Your code can be a very persuasive argument.

The other point is that it's important to make prototyping new ideas, especially bad ideas, as fast and easy as possible. This can be especially difficult as a product grows. It was easy for me to stuff random broken features into Gmail when there were only about 100 users and they all worked for Google, but it's not so simple when there are 100 million users.

Fortunately for Gmail, they've recently found a rather clever solution that enables the thousands of Google engineers to add new ui features: Gmail Labs. This is also where Google's "20% time" comes in -- if you want innovation, it's critical that people are able to work on ideas that are unapproved and generally thought to be stupid. The real value of "20%" is not the time, but rather the "license" it gives to work on things that "aren't important". (perhaps I should do a post on "20% time" at some point...)

One of the best ways to enable prototyping and innovation on an established product is through an API. Twitter is possibly the best example of how well this can work. There are thousands of different Twitter clients, with new ones being written every day, and I believe a majority of Twitter messages are entered through one of these third-party clients.

Public APIs enable everyone to experiment with new ideas and create new ways of using your product. This is incredibly powerful because no matter how brilliant you and your coworkers are, there are always going to be smarter people outside of your company.

At FriendFeed, we discovered that our API does more than enable great apps, it also reveals great app developers. Gary and Ben were both writing FriendFeed apps using our API before we hired them. When hiring, you don't have to guess which people are "smart and gets things done", you can simply observe it in the wild :)


In my previous post, I asked people to describe their "ideal FriendFeed". Since then, I've been thinking about ideas for my "ideal FriendFeed". Unfortunately, it's very difficult for me to know how much I like an idea based only on words or mockups -- I really need to try it out. So in the spirit of prototyping, I've used my spare time to write a simple FriendFeed interface that prototypes some of the things I've been thinking about. This interface isn't the "future of FriendFeed", it's just a collection of ideas, some that I like, and some that I don't. One thing that's kind of cool about it (from a prototyping perspective) is that it's written entirely in Javascript running in the web browser -- it's just a single web page that uses FriendFeed's JSON APIs to fetch data. This also means that it's relatively easy for other people to copy and change -- you don't even need a server!

If you'd like to try it out, you can see everyone that I'm subscribed to (assuming their feed is public), or if you are a FriendFeed user, you can see all of your public subscriptions by going to http://paulbuchheit.github.com/xfeed.html#YOUR_NICKNAME_GOES_HERE. The complete source code (which is just several hundred lines of HTML and JS) is here. In this prototype, I'm experimenting with treating entries, comments, and likes all as simple "messages", only showing comments from the user's friends (which can be a little confusing), and putting it all in reverse-chronological order. As I mentioned, this interface isn't the "future of FriendFeed", it's just a collection of ideas that I'm playing with.

If you're interested in prototyping something, feel free to take this code and have your way with it. As always, I'd love to see your prototypes in action!

Tuesday, January 06, 2009

If you're the kind of person who likes to vote...

Now is your opportunity!

FriendFeed was nominated for three "Crunchies". Please vote for us in all three categories:

I can't promise that your vote will end the war, fix the economy, or save the environment (that one is here), but I can promise that your vote might be counted.

Sunday, January 04, 2009

Overnight success takes a long time

For some reason, this weekend has seen a lot of talk about what FriendFeed is/isn't/should be doing (see Louis Gray and others). One person even predicted that we would fail.

I considered writing my own list of complaints about FriendFeed. I think and care about it a lot more than most people, so my list of FriendFeed issues would be a lot longer. I may still do that, but there's something else also worth discussing...

One of the benefits of experience is that it gives some degree of perspective. Of course there's a huge risk of overgeneralizing (someone took a picture!), but with that in mind...

We started working on Gmail in August (or September?) 2001. For a long time, almost everyone disliked it. Some people used it anyway because of the search, but they had endless complaints. Quite a few people thought that we should kill the project, or perhaps "reboot" it as an enterprise product with native client software, not this crazy Javascript stuff. Even when we got to the point of launching it on April 1, 2004 (two and a half years after starting work on it), many people inside of Google were predicting doom. The product was too weird, and nobody wants to change email services. I was told that we would never get a million users.

Once we launched, the response was surprisingly positive, except from the people who hated it for a variety of reasons. Nevertheless, it was frequently described as "niche", and "not used by real people outside of silicon valley".

Now, almost 7 and a half years after we started working on Gmail, I see things like this:
Yahoo and Microsoft have more than 250m users each worldwide for their webmail, according to the comScore research firm, compared to close to 100m for Gmail. But Google's younger service, launched in 2004, has been gaining ground in the US over the past year, with users growing by more than 40 per cent, compared to 2 per cent for Yahoo and a 7 per cent fall in users of Microsoft's webmail.

And that probably isn't counting all of the "Apps for your domain" users. I still have a huge list of complaints about Gmail, by the way.

It would be a huge mistake for me to assume that just because Gmail did eventually take off, then the same thing will happen to FriendFeed. They are very different products, and maybe we just got lucky with Gmail.

However, it does give some perspective. Creating an important new product generally takes time. FriendFeed needs to continue changing and improving, just as Gmail did six years ago (there are some screenshots around if you don't believe me). FriendFeed shows a lot of promise, but it's still a "work in progress".

My expectation is that big success takes years, and there aren't many counter-examples (other than YouTube, and they didn't actually get to the point of making piles of money just yet). Facebook grew very fast, but it's almost 5 years old at this point. Larry and Sergey started working on Google in 1996 -- when I started there in 1999, few people had heard of it yet.

This notion of overnight success is very misleading, and rather harmful. If you're starting something new, expect a long journey. That's no excuse to move slow though. To the contrary, you must move very fast, otherwise you will never arrive, because it's a long journey! This is also why it's important to be frugal -- you don't want to starve to death half the way up the mountain.

Getting back to FriendFeed, I'm always concerned when I hear complaints about the service. However, I'm also encouraged by the complaints, because it means that people care about the product. In fact, they care so much that they write long blog posts about what we should do differently. It's clear that our product isn't quite right and needs to evolve, but the fact that people are giving it so much thought tells me that we are at least headed in roughly the right direction. I would be much more concerned if there were silence and nobody cared about what we are doing -- it would mean that we are "off in the weeds", as they say. Getting this kind of valuable feedback is one of the major benefits of launching early.

If you'd like to contribute (and I hope you do), I'd love to read more of your visions of "the perfect FriendFeed". Describe what would make FriendFeed perfect for YOU, and post it on your blog (or email post@posterous.com if you don't have a blog -- they create them automatically). Feel free to drop or change features in any way you like. Yes, technically you're doing my work for me, but it's mutually beneficial because we'll do our best to create a product that you like, and even if we don't, maybe someone else will (since the concepts are out there for everyone).

Saturday, January 03, 2009

The question is wrong

On "Coding Horror", Jeff Atwood asked this question:
Let's say, hypothetically speaking, you met someone who told you they had two children, and one of them is a girl. What are the odds that person has a boy and a girl?

He then argues that our intuition leads us to the "wrong" answer (50%) instead of the "correct" answer (2/3 or 67%).

However, the question does not include enough information to determine which of these answers is actually correct, so the only truly correct answer is, "I don't know" or "it depends". I skimmed through the comments on the post (there are about a million), and didn't see anyone addressing this issue (though someone probably did). They mostly argued about BG vs GB for some reason.

The reason this question is wrong is that it doesn't specify the "algorithm" for posing the question.

If we assume that boys and girls are born with equal probability (50/50, like flipping a coin), then families with two children will have two girls 25% of the time, two boys 25% of the time, and a boy and a girl 50% of the time.

If the algorithm for posing the question is:
  1. Choose a random parent that has exactly two children
  2. If the parent has two boys, eliminate him and choose another random parent
  3. Ask about the odds that the parent has both a boy and a girl
Then we can see that step two eliminated the "two boy" possibility, which leaves the 25% probability of two girls and the 50% probability of both a boy and a girl. Of course probabilities should add up to 100%, so the final probabilities are  25/75 (1/3) for two girls and 50/75 (2/3) for both a boy and girl. This is the "correct" answer described by Jeff, and it occurs because of the elimination performed at step two.

However, if the algorithm for posing the question was instead:
  1. Choose a random parent that has exactly two children
  2. Arbitrarily announce the gender of one of the children
  3. Ask about the odds that the parent has both a boy and a girl
Now we're back to having a 50% probability of there being both a boy and a girl. The difference is that there was no elimination at step two, and simply announcing the gender of one of the children does not affect the gender of the other child or change the probability distribution.
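If the difference between the two procedures still feels slippery, a quick simulation makes it concrete. This is just a sketch assuming 50/50 births and the two algorithms exactly as described above:

import random

def two_kids():
    return [random.choice('BG') for _ in range(2)]

def selective(trials=100000):
    # Algorithm 1: throw away the two-boy families, then ask.
    hits = total = 0
    while total < trials:
        kids = two_kids()
        if kids == ['B', 'B']:
            continue                     # eliminated at step two
        total += 1
        hits += kids[0] != kids[1]       # one boy and one girl
    return hits / float(total)

def arbitrary(trials=100000):
    # Algorithm 2: announce the gender of a randomly chosen child, keep only
    # the cases where we happened to be told about a girl, then ask.
    hits = total = 0
    while total < trials:
        kids = two_kids()
        if random.choice(kids) != 'G':
            continue                     # we were told about a girl
        total += 1
        hits += kids[0] != kids[1]
    return hits / float(total)

print("selective process:      %.3f (expect ~0.667)" % selective())
print("arbitrary announcement: %.3f (expect ~0.500)" % arbitrary())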

The problem with the question as originally posed was that it didn't specify which of these algorithms was being used. Were we arbitrarily told about the girl, or was a selective process applied?

By the way, if we're applying a selective process, then 100% is also a possibly correct answer, because at step two we could have eliminated all parents that don't have both a boy and a girl. Likewise, all other probabilities are also potentially correct depending on the algorithm applied.

Update: Surprisingly, some people are still thinking that my second algorithm yields 2/3 instead of 1/2 (see the confused discussion on news.yc). I think part of the reason is that I was somewhat imprecise with the concept of "elimination". The second algorithm does not eliminate any of the families, but if I announce that there is a boy, that does eliminate the possibility of two girls. This is where some people are getting lost and thinking that the boy+girl probability has become 2/3. The catch is that announcing the boy also reduced the boy+girl probability by an equal amount, so the result is still the same (it eliminated either BG or GB, I don't know which, but it doesn't matter).