Underestimating the ubiquity of data

Via FlowingData, I came across “Hal Varian on how the Web challenges managers” from the McKinsey Quarterly.

Varian, Google’s Chief Economist, speaks on a wide variety of issues, but all of them centre around the ubiquity of computing and free information. We are in a time of “combinatorial innovation”, where there’s an abundance of raw components, and innovation lies in using what is already available in the right combinations. In other words, we are standing at the start of a period of potential: we have what we need to innovate and now need to play around with it. Such periods revolve around a specific innovation (electronics in the 20s, integrated circuits in the 70s), and this time around, the fulcrum is the Internet.

This is similar to the point I suggested in a paper last term, where I argued that the ubiquity of tools positions us at the beginning of an “age of innovation” (borrowing the term from Felix Janszen). As more people become comfortable with computing and as tools for software innovation become more accessible, we have seen, and will continue to see, an acceleration in the realization of good ideas. Business practices and marketing, I predict, will take a back seat to quality and value to society. This is why the most successful online companies, such as Facebook, Twitter and Google, concentrate on the product first and the revenue stream later. I have seen this baffle traditional business types (and of course journalists), but a quality product is the only way a company can ensure that a better service created in some kid’s basement bedroom won’t pull the rug out from under it (as Facebook, Twitter, and Google have all done to incumbents themselves).