A colleague forwarded me an article from the New York Times, "Entrepreneurs See a Web Guided by Common Sense."
The essence of the discussion is the evolution of the Web, and by "web" I mean the content (the documents, images, sounds, descriptive data, and so on), not the technology of the Internet. The article refers to this next step as Web 3.0.
If Web 1.0 was about individuals dumping documents on the web to share with others, and Web 2.0 is about setting up communities that collaborate on the development of content, Web 3.0 seems to be about interpreting that content to make meaningful decisions. Will this transition be planned/managed or evolve "organically?" Probably both: the infrastructure can, to some extent, be planned to overcome current impediments, but the application of Web 3.0 will evolve organically as people's mindsets change and they come to understand what is newly possible.
The reference to Common Sense in the title refers to the ability to draw upon "metadata" contributed by a [Web 2.0] community, such as comments on a hotel or page rankings developed by Google, to draw some conclusions about the underlying material.
Whereas today's travel recommendation sites force people to weed through long lists of comments and observations left by others, the Web 3.0 system would weigh and rank all of the comments.
While the evolution of the Internet has a technology path, the real value comes from its application: the use of web content to address real-world problems. From that perspective, Web 1.0 was about making content easy to get to. Web 2.0 is mostly about the contribution of interpretations by a community. Web 3.0 is about consolidating this multiplicity of views into a coherent message with actionable decisions.
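To make that consolidation step concrete, here is a minimal sketch of what "weighing and ranking all of the comments" could look like once the community's contributions are available as structured data. The review data, the reputation scores, and the weighting scheme are invented for illustration; they are not a description of any particular Web 3.0 system.

```python
# Collapse many individual interpretations (hotel reviews, in this hypothetical
# example) into a single ranked, decision-ready list by weighting each rating
# by the reviewer's reputation.

from collections import defaultdict

reviews = [
    # (hotel, rating 1-5, reviewer reputation 0-1) -- illustrative data only
    ("Hotel Aurora",  5, 0.9),
    ("Hotel Aurora",  2, 0.2),
    ("Seaside Lodge", 4, 0.7),
    ("Seaside Lodge", 4, 0.8),
    ("Hotel Aurora",  4, 0.6),
]

def rank_hotels(reviews):
    """Average each hotel's ratings, weighted by reviewer reputation."""
    totals = defaultdict(lambda: [0.0, 0.0])  # hotel -> [weighted sum, weight sum]
    for hotel, rating, reputation in reviews:
        totals[hotel][0] += rating * reputation
        totals[hotel][1] += reputation
    scores = {hotel: s / w for hotel, (s, w) in totals.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for hotel, score in rank_hotels(reviews):
    print(f"{hotel}: {score:.2f}")
```

The point is not the arithmetic, which is trivial, but that the reader no longer has to weed through the raw comments: the system hands back a conclusion.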
To achieve this, some technical underpinnings are needed to facilitate such activity and make the next step feasible. In particular, standardization reduces the hurdles to integration, and thus to sharing, and thus to processing. The Internet made major strides at the network level, and the Web did the same for human access to content; Web services accomplish the same thing for computer (and thus automated) access to content. It is that last step that enables the deployment of computing power to solve problems.
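As a small illustration of why that matters: a program can call a standardized web service and act on structured data directly, instead of scraping pages designed for human readers. The endpoint URL and response shape below are hypothetical, used only to show the pattern.

```python
# Automated access to content via a (hypothetical) REST-style web service
# that returns JSON, e.g.:
# [{"name": "Hotel Aurora", "avg_rating": 4.1, "review_count": 132}, ...]

import json
from urllib.request import urlopen

def fetch_hotel_ratings(city):
    url = f"https://api.example.com/hotels?city={city}&format=json"  # hypothetical endpoint
    with urlopen(url) as response:
        return json.load(response)

if __name__ == "__main__":
    hotels = fetch_hotel_ratings("lisbon")
    # Because the data is structured, filtering and ranking are trivial for a machine.
    well_reviewed = [h for h in hotels if h["review_count"] >= 20]
    for h in sorted(well_reviewed, key=lambda h: h["avg_rating"], reverse=True):
        print(h["name"], h["avg_rating"])
```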
However, these are the technical elements of the equation; the semantic level will need to be addressed as well. Services such as del.icio.us, Digg, Flickr and others offer a means to tag information with meaningful descriptors, and it is through this tagging that the essential meaning is communicated without having to read the full document or study the photograph. Tagging removes the need for computers to undertake the task of interpretation. Web 2.0 organizes the masses to contribute interpretations (or meaning/semantics) of content; Web 3.0 collates these interpretations.
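A minimal sketch of what that collation might look like, assuming the tags are already collected: combine the tags many users applied to the same item, smooth over spelling variants, and surface the consensus description. The tag data and the normalization table are invented for illustration; real services such as del.icio.us or Flickr expose tags through their own APIs.

```python
# Collate community tags (the Web 2.0 contribution) into a consensus
# description of an item (the Web 3.0 step).

from collections import Counter

# Tags applied to the same bookmarked page by different users (illustrative).
user_tags = [
    ["semantic-web", "web3.0", "AI"],
    ["SemanticWeb", "metadata"],
    ["semantic web", "ai", "research"],
    ["metadata", "web3.0"],
]

# Consistency of use is the hard part: the same concept arrives in many spellings.
NORMALIZE = {
    "semanticweb": "semantic-web",
    "semantic web": "semantic-web",
    "ai": "artificial-intelligence",
}

def consensus_tags(tag_lists, top_n=3):
    counts = Counter()
    for tags in tag_lists:
        for tag in tags:
            key = tag.strip().lower()
            counts[NORMALIZE.get(key, key)] += 1
    return counts.most_common(top_n)

print(consensus_tags(user_tags))
# [('semantic-web', 3), ('web3.0', 2), ('artificial-intelligence', 2)]
```

Even this toy example shows why the next point matters: without some agreed way to reconcile "SemanticWeb", "semantic web" and "semantic-web", the collation falls apart.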
But for Web 3.0 to work, it will require consistency of use and consistency of interpretation before we can expect wide-scale and reliable results. The processing of metadata (regardless of what form it takes) is not in place now. This will likely be aided by the next technological step, and no doubt it requires some computing, maybe even AI. However, there are indications that some practical first steps can already be seen:
Smart Webcams watch for intruders, while Web-based e-mail programs recognize dates and locations. Such programs, the researchers say, may signal the impending birth of Web 3.0….
…in a lecture given at Google earlier this year, Mr. Lenat said, Cyc is now learning by mining the World Wide Web — a process that is part of how Web 3.0 is being built.
Separately, I.B.M. researchers say they are now routinely using a digital snapshot of the six billion documents that make up the non-pornographic World Wide Web to do survey research and answer questions for corporate customers on diverse topics, such as market research and corporate branding.
Research is being done by a number of organizations, including:
- Nova Spivack [blog], Radar Networks (currently in development)
- Oren Etzioni, University of Washington, KnowItAll
- W. Daniel Hillis, Metaweb Technologies
- Doug Lenat, Cycorp, Cyc
- Daniel Gruhl, I.B.M.’s Almaden Research Center in San Jose, Calif., Web Fountain
- Prabhakar Raghavan, Yahoo