Perhaps it’s somehow appropriate, considering the title of this journal, to start with a “release version” that isn’t. The latest hypewave running through technology circles is “Web 2.0”. It’s caught on to the point where it’s a venture capitalist buzzword and every hip business plan claims to be an integral part of it, whatever it is.
So what the heck is Web 2.0? Tim O’Reilly has taken a crack at defining it. To lift some key phrases, it’s “Web services”; building services that apply to a wide spectrum of Web sites — from your uncle Bob’s hand-coded home page to Amazon; community-driven and peer-based sites that leverage input from many sources (examples: eBay and Blogger); collating and enhancing data from a wide variety of sources to create value (Wikipedia, Amazon); the end of visible software release cycles; independence from specific delivery devices (desktops, laptops, wireless PDAs, and mobile phones)[1]; and lightweight user interfaces, development models, and business models.

It seems plain wrong that we’re using a “major release” version number to label a multifaceted, evolutionary development that hails the elimination of software release cycles. All of the things that O’Reilly identifies are good, and most of them are even right, but the “Web 2.0” term needs to die a swift death. Fortunately, now that the “assimilate anything that sounds cool” school of marketers has its hands on it, it will be overexposed into obscurity in no time at all.

From a pure technology viewpoint, there is just one development driving major changes in the Web: simple Web services, driven through HTTP and XML-RPC (and SOAP as well, although I refuse to apply the word Simple to SOAP). In fact, this is a pretty old capability: the remote procedure call. It’s novel only because it’s been deployed in an environment that is finally based on standards, openness, and true interoperability. To illustrate why the new is really old, here is a condensed history of how I see computing architectures evolving:

1: Put cards in central reader attached to large, expensive mainframe, wait for printed output.
2: Put cards in reader attached to remote mainframe, wait for printed output.
3: Log in, type characters into terminal attached to mainframe, wait for printed output.
4: Log in, type characters into video terminal, interact with mainframe, send most output to printer.
5: Log in, type characters into terminal with local storage and field validation capabilities, filling in a form defined by mainframe application. Press enter key to send validated data to mainframe, wait for response. Lots of transient data displayed on screen, printers become devices for “reports” rather than “output”.
6: Type characters into personal computer, perform local computations, print locally. Use terminal program to transfer data to and from mainframe.
7: Type characters into local application that stores information on a lower cost networked database server (client-server computing). Communicate results to others via printed reports.
8: Use mouse and keyboard to manipulate several applications and databases. Some databases interact in significant ways, but a lot of data is moved between systems manually. Rich, heavyweight user interfaces that display information graphically emerge. Communicate results via graphs, spreadsheets, and presentations, with paper copies for distribution and archival storage. Applications can receive events from other devices on the network and respond to them on demand.
9: Dawn of the Web. Click on hyperlinks to access static data that is rich in graphical information.
10: The data driven Web. Type characters into Web browser with local storage and field validation capabilities, filling in a form defined by Web server. Press submit button to send validated data to server, wait for response. (Sound familiar?) Use local storage to maintain session context with server. Output is rich in graphics, data storage capacities make it possible to retain source data. Paper becomes less important; information is stored in retained documents.
11: “Web 2.0”. The browser uses Web service calls to poll the server for new events and to send data to the Web server without explicit user requests. User interfaces that respond interactively begin to emerge, and Web pages start to look more like desktop applications.
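The polling pattern in step 11 can be simulated end to end in a few lines of Python — a sketch only, with the `/events` path and the JSON payload shape invented for illustration, standing in for whatever a real page and server would agree on:

```python
# Sketch of step 11: a client quietly polls a server for new events.
# The /events path and payload shape are invented for this example.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

EVENTS = ["order received", "order shipped"]  # pretend server-side state

class EventHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the full event list; a real service would accept a
        # "give me everything after event N" parameter instead.
        body = json.dumps({"events": EVENTS}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EventHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/events"
seen = 0
for _ in range(3):  # the poll loop a page would run on a timer
    with urllib.request.urlopen(url) as resp:
        events = json.load(resp)["events"]
    for event in events[seen:]:
        print("new event:", event)
    seen = len(events)

server.shutdown()
```

In a browser the same loop runs on a timer against the page’s own server; the point is that data moves without the user pressing a submit button.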

Hopefully the repetitiveness of this cycle is evident from the above. Web 2.0 and client-server architectures are closely analogous. What’s different is that we’re no longer dealing with disconnected, incompatible application architectures, but with standards-based mechanisms that are easy and inexpensive to deploy. Now we have an expectation that our business partner’s order entry systems will interact with our purchasing systems, even if we didn’t get our solutions from the same vendor.

Get ready for the re-emergence of the rich user interface and the partial displacement of the Web browser as the primary way of interacting with distributed applications. Applications, both inside and outside the browser, will bring back the best of the era of the interactive desktop application, but this time only the user interface will run on the client side, and a lot of that interface will be delivered from the server itself. The application will become a flexible set of distributed services, assembled as components to achieve a specific goal. Web 2.0 is as old as it is new; the real innovation is standardization.

[1]: There’s a whole argument here that all these devices are really just merging into one meta-device with varying capabilities, but that’s a subject for another post.
