Gmail Mass Email Deletions

Michael Arrington (and others) reported yesterday on a problem with Gmail, described here: Gmail Disaster: Reports Of Mass Email Deletions. Regardless of how this incident ultimately turns out, and without assigning either blame or praise to Google or anyone else who may or may not be involved, this is an incident that everyone interested in the future of network computing needs to take to heart. If I had been a guest on the (apparently now defunct) Gillmor Gang when Steve Gillmor launched into one of his “rich client is dead, long live the network” diatribes, I would have responded with something along these lines:
“I predict that sometime within the next year or two, there will be some kind of major incident–a serious security breach, a significant service outage, an accidental or deliberate release of data, a NOC screw-up, a government investigation, a service provider buyout or bankruptcy, whatever–that will cause anyone interested in moving to a thin-client/network-centric computing model to seriously reconsider their plans.”
This Gmail incident may turn out to be nothing, but consider all of the other incidents of the past few years: the AOL user data release, the security breach at that credit card processing company, the brief service outage at salesforce.com, the government porn investigation (need to find citations for these). Given all this, along with what can easily be imagined in the future, are corporations really going to entrust some of their most sensitive data to third-party service providers whose behavior and business practices are completely outside their control? Will individuals?
It’s worth remembering that we once operated on a centralized computing model built around mainframes, and we moved away from that model for good reasons (single point of failure, service degradation with increased usage, etc.). Centralized computing has significant benefits, but it carries significant risks and drawbacks as well. The same is true of the decentralized, client-based model.
IMHO, the best approach would be a hybrid model: data formats and communications protocols are open and standardized; data can reside on servers or on local client machines and can be easily and transparently moved or synchronized between the two as needed; and the applications used to view and edit that data can be client-based, server-based, or both. This way, individuals and corporations can choose the level of centralization they are comfortable with, and everybody wins. Except, perhaps, those companies interested in selling you servers (Sun) or thick-client operating systems (Microsoft, Apple).
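To make the "synchronized back and forth" idea concrete, here is a minimal sketch in Python of one simple way such a hybrid model could work: each record carries a modification timestamp, and a sync pass copies the newer version in whichever direction is needed (last-writer-wins). All names here (`sync`, `local`, `server`) are illustrative assumptions for this sketch, not any real product's API, and a real system would also have to handle conflicts, deletions, and authentication.

```python
def sync(local: dict, server: dict) -> None:
    """Last-writer-wins sync between two stores of {key: (timestamp, value)}.

    After the pass, both stores hold the newest known version of
    every record; records present on only one side are copied over.
    """
    for key in set(local) | set(server):
        l = local.get(key)
        s = server.get(key)
        if l is None:            # exists only on the server: pull it down
            local[key] = s
        elif s is None:          # exists only locally: push it up
            server[key] = l
        elif l[0] > s[0]:        # local copy is newer: push
            server[key] = l
        elif s[0] > l[0]:        # server copy is newer: pull
            local[key] = s

# Toy data: (timestamp, value) pairs on each side.
local = {"draft.txt": (10, "v1"), "notes.txt": (5, "old")}
server = {"notes.txt": (8, "new"), "inbox.mbox": (3, "mail")}
sync(local, server)
# Both stores now hold the newest version of every record.
```

Because either side can fall behind and catch up later, the same data stays usable from a thick client offline or from a server-based application online, which is exactly the freedom of choice argued for above.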