All posts by Scott Trotter

Scott’s Big Adventure

I bought a Vespa this past July. A GTS 250ie. Red. Dragon Red. I bought a whole bunch of accessories for it as well: GPS, satellite radio, hard case, saddle bags, and so on. I was ready, but ready for what?

With all this stuff, I figured that I now needed to actually go somewhere on it. At first it was going to be a weekend trip to the coast. Then it was a trip up to Seattle to visit my Dad. (I live in Portland, Oregon, BTW.) Somehow, it morphed from that into a month-long tour of the West. Go figure.

So here’s the general plan: Head east from Portland over to Yellowstone; south to Eastern Utah and Arches NP, Monument Valley, Four Corners and such; south some more to Painted Desert, Meteor Crater, Petrified Forest and a corner in Winslow, Arizona; still further south to Tombstone along the Mexican border; back west to London Bridge; east along Route 66 (going “backwards”) to Flagstaff; north to the Grand Canyon; north still past Lake Powell and over to Bryce NP and Zion NP in SW Utah; out into the middle of Nevada to Area 51 to see if anything interesting flies over; down to Vegas and Hoover Dam; back west to Death Valley; north to Mammoth Lakes and the back side of Yosemite; through Yosemite then southwest to the California coast near SLO (that’s San Luis Obispo); up Highway 1 to Big Sur, Carmel, Monterey, Santa Cruz, Half Moon Bay and on into San Francisco; then north again up the coast, through the Redwoods; back into Oregon coming north on 101 up to Astoria; then back east to Longview and home again to Portland. That’s the plan anyway. Subject to change.

I figure that it will take about a month to do. I’m planning on leaving in late September and being back some time before Halloween. I had wanted to leave earlier, but various things have delayed me. I’m going to try to post a blog entry every day so that anyone who is interested can follow along. I’m going to be taking lots of video and pictures along the way, and I’ll post those to YouTube and Flickr respectively, with links from these blog entries.

See you when I get back.

Gmail Mass Email Deletions

Michael Arrington (and others) reported yesterday on a problem with Gmail as described here: Gmail Disaster: Reports Of Mass Email Deletions. Regardless of how this incident ultimately turns out, and without assigning either blame or praise to Google or anyone else who may or may not be involved, this is an incident that everyone interested in the future of network computing needs to take to heart. If I had been a guest on the (apparently now defunct) Gillmor Gang when Steve Gillmor launched into one of his “rich client is dead, long live the network” diatribes, I would have responded with something along the lines of this:
“I predict that sometime within the next year or two, there will be some kind of major incident–a serious security breach, a significant service outage, an accidental or deliberate release of data, a NOC screw-up, a government investigation, a service provider buyout or bankruptcy, whatever–that will cause anyone interested in moving to a thin-client/network-centric computing model to seriously reconsider their plans.”
This Gmail incident may turn out to be nothing, but consider all of the other incidents that have happened in the past few years: the AOL user data release, the security breach at that credit card processing company, the brief service outage at salesforce.com, the government porn investigation (need to find citations for these). Considering all this along with what can be easily imagined in the future, are corporations really going to want to entrust some of their most sensitive data to third-party service providers whose behavior and business practices are completely outside of their control? Will individuals?
It’s worth remembering that we once operated on a centralized computing model based around mainframes, and we moved away from that model for good reasons (single point of failure, service degradation with increased usage, etc.). While there are significant benefits to centralized computing, there are significant risks and drawbacks as well. The same can be said for the decentralized, client-based model.
IMHO, the best approach would be a hybrid model in which data formats and communications protocols are open and standardized, data can reside either on servers or local client machines and can be easily and transparently moved or synchronized back and forth between the two as needed, and the applications used to view and edit that data can be either client-based, server-based or both. This way, individuals and corporations can choose the level of centralization that they are comfortable with, and everybody wins. Except, perhaps, those companies interested in selling you servers (Sun) or thick-client operating systems (Microsoft, Apple).

The Veteran Newbie, But Is It Legal?

Mobilizing the river of news
Dave Winer is working his magic again. The father of blogging, RSS, OPML, and XML-RPC was growing frustrated trying to get readable news feeds on a mobile device (Blackberry, Treo, Q, etc.), and whipped together a script for delivering clean, text-only (no graphics) feeds that update every 10 minutes.
I always find it amusing when someone who has been deeply immersed in one area of technology suddenly “discovers” a completely new area of technology and starts acting like a complete newbie. I’ll christen it “The Veteran Newbie Effect.” Dave Winer is certainly an experienced veteran of the tech industry, but I think that he’s been so completely absorbed by RSS and blogging and podcasting for the past several years that he’s vastly underinformed about many other areas of the industry. In this case, a little more research would have revealed that there are lots of ways to read news and browse websites using mobile devices. I’ve been doing it for many years now using various smartphones and Pocket PCs. I currently use an iPAQ 4700 Pocket PC with Egress for RSS reading and Opera for web browsing. I download podcasts directly to the PPC via WiFi and listen to them using PocketMusic. I’ve got a 6GB CF microdrive and a 4GB SD for storage. I also use it to watch video from time to time. I could even record, edit and upload podcasts using Resco Audio Recorder if I were into that sort of thing.
What Dave has done with the software he created is interesting because it’s an approach that hasn’t been tried before; it’s just not necessary.
More importantly though, I wonder if it’s legal. I think that what he’s doing is taking multiple content streams from one source, stripping out the graphics if any, and republishing them from his own server under his own domain name. If that is in fact what he is doing, isn’t that a copyright violation? And isn’t using a domain such as nytimesriver.com a trademark violation?
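If stripping graphics and republishing is in fact what the script does, the core of it is simple to sketch. Here’s a minimal Python illustration (my own guess at the approach, not Dave’s actual code) that removes image markup from each item’s description in an RSS 2.0 feed:

```python
import re
import xml.etree.ElementTree as ET

def strip_graphics(rss_text):
    """Strip <img> markup from every item description in an RSS 2.0
    feed, producing the kind of text-only feed described above."""
    root = ET.fromstring(rss_text)
    for desc in root.iter("description"):
        if desc.text:
            desc.text = re.sub(r"<img[^>]*>", "", desc.text)
    return ET.tostring(root, encoding="unicode")

# Feeds escape embedded HTML, so the <img> tag arrives as entities.
sample = ('<rss version="2.0"><channel><title>Example</title>'
          '<item><title>Story</title>'
          '<description>Text &lt;img src="a.gif"&gt; here.</description>'
          '</item></channel></rss>')
cleaned = strip_graphics(sample)  # description is now plain text
```

A real “river” service would fetch the upstream feed on a timer and re-serve the cleaned result, which is precisely where the copyright question below comes in.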
I’ve got a lot of respect for Dave’s contributions over the years, but I think that he may have misfired a bit on this one.

IE7 Backward Compatibility a Red Herring

IE7 and standards compliance – Microsoft’s Chris Wilson charts progress
Richard MacManus of ZDNet interviewed Microsoft’s Chris Wilson, the Group Program Manager for IE, to address the issue of whether Microsoft’s latest web browser IE7 is – and will be – CSS and Web standards compliant. At one point the conversation veers into the subject of backward compatibility–the supposed need for IE7 to continue rendering non-standards-compliant pages in the same way that IE5 and IE6 do. Chris talks about what a big challenge this is for Microsoft, but in this particular interview doesn’t say much about what they are going to do about it. However, in other public articles and postings, Microsoft IE Program Managers including Chris Wilson have frequently raised the notion that any lack of standards-compliance in IE7 will be due to the need to maintain backward compatibility.
I really think that all of this talk from Microsoft about backward compatibility is a red herring. It’s an excuse not to do the right thing. You’ll notice that they never provide any data or statistics to back up their claims that backward compatibility is needed. All they offer are anecdotal stories about somebody’s mother visiting her favorite website and finding that it doesn’t work or look right any more.
I browse the web using Firefox, which I’ve used since its inception, and the Mozilla suite before that. I’ve found that it is exceedingly rare these days to find a website that doesn’t look and behave correctly in Firefox, Opera or Safari. While I wouldn’t claim my usage to be a representative sample, the experience is enough to convince me that there are very few web pages or sites in common or widespread usage that are hard-coded to work specifically with IE6 and break when viewed with anything else. The vast majority of the web is machine-generated, and most of the templates used have been updated so that, while they may not be paragons of web standards goodness, at least they don’t use IE-only features that will break in the other major browsers.
What Microsoft needs to do is to fix all of their bugs and add CSS feature support up to the baseline established by those 3 browsers. Then any site which follows the current best-known practices in web development will render correctly in IE7. If you don’t understand what that last sentence means, then read Zeldman’s book, or Cederholm’s book, or Budd’s book, et cetera.
Under that scenario, the only websites that will fail to render correctly are those which (a) use the old, discredited technique of “browser-sniffing” specifically for IE, and (b) fail to check for versions later than version 6 and provide a pass-through. For those sites–which really need to be exposed and upgraded–Microsoft could provide a button in the UI which switches the rendering back and forth between standards-compliant mode and “IE6” mode, and give users instructions to click that button if the site they’re viewing doesn’t look right. For that matter, Microsoft could probably detect 99% of the problematic sites by examining the code before it’s rendered, and switch into IE6 mode. They could have each browser keep track of which sites only work in IE6 mode, then upload those site lists to a central repository accessible to all IE7 users. They could even send those sites’ developers an email detailing the virtues of standards-compliance… Naaah!
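To illustrate the kind of detection I have in mind, here’s a toy Python sketch. The patterns are my own illustrative guesses at common IE6-only markers, not anything Microsoft has described:

```python
import re

# Markers that often betray an IE6-only page: the proprietary
# document.all collection, version-capped conditional comments, and
# user-agent sniffing for "MSIE 6". Illustrative guesses only.
IE6_MARKERS = [
    re.compile(r"document\.all"),
    re.compile(r"\[if\s+IE\s+6\]", re.IGNORECASE),
    re.compile(r"MSIE\s+6", re.IGNORECASE),
]

def looks_ie6_only(page_source):
    """Return True if the page source appears hard-coded for IE6."""
    return any(p.search(page_source) for p in IE6_MARKERS)
```

A browser running something like this before rendering could flip into “IE6” mode automatically, then log the site for the central repository.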

Intel Sheds XScale Processor Unit

Intel sells handheld chip unit to Marvell
(AP)

AP – Intel Corp. said Tuesday it will sell its division that makes processors for handheld gadgets to Marvell Technology Group Ltd. for $600 million in cash, as the world’s biggest semiconductor maker focuses on its main business of supplying chips for PCs and computer servers.
There are basically two ways of looking at this: (1) Intel has yet again lost patience with a strategic investment that hasn’t immediately yielded a multi-billion-dollar revenue stream; or (2) Intel has plans to migrate its x86 architecture down into a mobile form factor, and it wants to rid itself of a competing architecture before it does so. The part of me that once believed in Intel’s strategic competence wants to believe the latter, but the part of me that sold all my Intel stock leans toward the former.
Although Intel doesn’t break out the performance of the division, analysts said it remained unprofitable as Intel overestimated its ability to break into a business that was outside its core competence.
Funny, I remember a time when designing, manufacturing and selling chips was Intel’s core competency.

WebVisions 2005 Wrapup

I attended the [5th annual WebVisions conference][webvisions] which was held this past Friday (7-15-05) in [Portland, Oregon][portland], and I must say that I have mixed feelings as to whether or not it was worth my while. On the one hand, I live in Portland and it’s not very expensive ($95), so other than missing a day of work, it’s almost a no-brainer. But on the other hand, I sure didn’t learn very much that I didn’t already know. I think that the basic problem is that WebVisions attracts a diverse audience, but the conference agenda doesn’t accommodate that diversity. Not only do you have end users, designers, developers, programmers and managers, but you also have experience levels ranging from newbie to net.god and everything in between. I think that the solution is that the conference organizers need to either narrow the focus of the conference so that the audience self-selects more appropriately, or re-organize the agenda so that the individual sessions are more targeted towards skill level and interest. Or both. Here are a couple of examples:
One session that I attended was called [Looking Beyond the Desktop][desktop session] which was presented by [Molly Holzschlag][mollys summary], a woman whose work I greatly admire. The basic thrust of the session was that website designers and developers need to target output devices other than the desktop web browser, in particular, handhelds (PDAs and cell phones), printers, projectors (i.e. slide shows) and screen readers. The session itself was fine as far as it went. The problem for me was that I would classify the session as targeted at a beginning to intermediate skill level, but I really wanted the advanced version. I wanted to see some examples of sites that were designed for handheld devices as well as some that weren’t, then to see them displayed on actual devices to see what works and what doesn’t. I wanted some discussion of the various strategies that might be employed when targeting handheld (and other) devices, and of the trade-offs encountered. I wanted to know which devices supported the “handheld” media type and which did not; which supported [XHTML MP][xhtml mp] and which did not; which supported Javascript and which did not; and so on. In other words, I wanted to get into the nitty-gritty details, but there simply wasn’t time for that in a session which included people who didn’t even know what the [“media” attribute][media attribute] is for.
Another example was the panel discussion entitled [The Future of Content][content session], where the panel was composed of [Nick Finck][nicks summary], [Molly Holzschlag][mollys summary], [Keith Robinson][keiths summary] and [Kevin Smokler][kevins site], all fine folks. I found this one particularly frustrating because there was hardly any actual discussion of the future of content. Instead, we spent about 10-15 minutes just trying to define the term “content,” another 10-15 minutes discussing how to extract content from clients, and even about 5 minutes answering the question “What is RSS?” Now, that’s a perfectly legitimate question in an entry-level session, but not in a session called The Future of Content at a conference called WebVisions, titles which both imply a certain advanced, forward-looking orientation. I happen to think that the future of content is a very interesting and timely topic for the year 2005. I had some very interesting post-panel discussions with some of the panelists, and I think I’ll do a more in-depth post on the topic sometime in the next few days. But still, I found the panel as a whole quite disappointing.
So to reiterate, I think that the conference organizers need to do one of three things: 1) Take the name WebVisions to heart and refocus the conference on the future and not cater at all to the newbies; 2) Reorganize the conference agenda into tracks which target various skill levels and interests; or 3) Don’t take the name WebVisions too seriously, and just be the local Portland web gathering.
[webvisions]: http://2005.webvisionsevent.com/ “WebVisions 2005”
[portland]: http://en.wikipedia.org/wiki/Portland%2C_Oregon “City of Portland, Oregon”
[desktop session]: http://2005.webvisionsevent.com/presentations/desktop/ “Session description”
[content session]: http://2005.webvisionsevent.com/presentations/future_content/ “Session description”
[xhtml mp]: http://www.developershome.com/wap/xhtmlmp/xhtml_mp_tutorial.asp?page=introduction “XHTML MP tutorial introduction”
[media attribute]: http://www.w3.org/TR/REC-CSS2/media.html “Media type section from CSS spec”
[mollys summary]: http://www.molly.com/2005/07/16/blurred-vision/ “Molly’s summary of WebVisions 2005 from molly.com”
[nicks summary]: http://www.digital-web.com/news/2005/07/webvisions_2005_aftermath/ “Nick’s summary of WebVisions 2005 from Digital Web Magazine”
[keiths summary]: http://www.7nights.com/asterisk/archives05/2005/07/webvisions-2005-roundup “Keith’s summary of WebVisions 2005 from asterisk*”
[kevins site]: http://www.kevinsmokler.com/ “Kevin’s site”

Gnomedex Wrapup

So now that [Gnomedex 5](http://www.gnomedex.com/ “Gnomedex home page”) is over and I’ve had a few days to think things over, what were the significant developments? I think that there were 3…well, better make that 2-1/2 actually.
#### One: Microsoft Announces RSS Support ####
This was far and away the most [important piece of news](http://news.com.com/Microsoft+confirms+RSS+plans/2100-1025_3-5759738.html “CNET article on Microsoft’s RSS plans”) to come out of the conference. From my point-of-view, there were 5 pieces to this announcement:
1. RSS reader as feature of IE7
2. Enclosure support for multiple file/media types
3. List extension namespace for RSS 2.0
4. RSS aggregator platform support in Longhorn
5. Implied RSS support in other Microsoft applications
**RSS Reader as a Feature of IE7.** This is essentially Microsoft playing catch-up with the other browsers such as [Safari](http://www.apple.com/macosx/features/safari/ “Apple’s Safari product page”) and [Firefox](http://www.mozilla.org/products/firefox/ “Mozilla’s Firefox product page”). When [Internet Explorer 7](http://blogs.msdn.com/ie/archive/2005/02/15/373104.aspx “Announcement of IE7 on IEBlog”) encounters a web page with a feed on it, it will “light up” a button on the toolbar, which is currently the familiar white-on-orange “RSS” icon. *(Caution: All user interface elements are subject to change before the final release.)* If the user clicks on that button, then the feed will be “previewed” in the browser window (more on that in a second). Feeds are identified on a web page via the HTML `<link>` element. If there is more than one such `<link>` element on a page, then it’s the first one found that gets previewed and used for subscription. I don’t know if there’s any way for the user to override this. All current RSS flavors will be supported, including [RSS 1.0](http://web.resource.org/rss/1.0/spec “RSS 1.0 spec”), [RSS 2.0](http://blogs.law.harvard.edu/tech/rss “RSS 2.0 spec”) and [Atom](http://www.atomenabled.org/developers/syndication/atom-format-spec.php “Atom 0.3 spec”).
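Feed discovery works through `<link rel="alternate">` elements in the page head. Here’s a small Python sketch, using only the standard library, that collects those links the way a browser presumably would, taking the first one found as the default:

```python
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedFinder(HTMLParser):
    """Collect feed URLs advertised via <link rel="alternate"> tags."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in FEED_TYPES):
            self.feeds.append(a.get("href"))

finder = FeedFinder()
finder.feed('<html><head><title>Blog</title>'
            '<link rel="alternate" type="application/rss+xml" href="/rss.xml">'
            '<link rel="alternate" type="application/atom+xml" href="/atom.xml">'
            '</head><body></body></html>')
first_feed = finder.feeds[0]  # the one a browser would preview by default
```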
Currently in IE6 or Firefox, if you click on an RSS button on a web page, you’ll get an [XML dump](http://www.trotternet.com/rss.xml “TrotterNet’s raw RSS 2.0 file”) of the feed file. Clicking on the RSS button on the IE7 toolbar will give you a preview of the feed itself, without all of the nitty-gritty details being displayed. I use [FeedDemon](http://www.bradsoft.com/feeddemon/ “Newsgator’s FeedDemon product page”) as my RSS reader, and from what I could see of the demo, this preview will look something like that product’s [Channel Newspaper](http://www.bradsoft.com/feeddemon/help/1.0/helpimg/newspaper.gif “Screenshot of a FeedDemon Group Newspaper, which is a little different from a Channel Newspaper”) output, which is to say, a nicely formatted chronological list of titles and articles. I don’t know if feed authors or users will be able to change the details of this formatting. There will be some kind of search feature on the preview page so that the user can quickly find something they might be interested in.
Also on the preview page, there will be a button on the toolbar which will subscribe the user to the feed being displayed. Currently, that button is a big “+” symbol, but that will certainly change before release. The demos didn’t show much in the way of subscription list management and retention policy, but that will be there in some form or another.
All of this functionality is going to be released with the **standalone version of IE7** so that users of current versions of Windows won’t have to upgrade to [Longhorn](http://msdn.microsoft.com/longhorn/ “Microsoft’s Longhorn developer page”) in order to take advantage of this. Microsoft didn’t make any commitment as to a ship date for IE7.
While none of this so far is particularly earth-shaking news since other browsers and aggregators already do all of this, the simple fact that Microsoft is doing it all in the standalone version of IE7 means that many more people are going to be exposed to the benefits of RSS feeds. In spite of the [rapid growth of Firefox](http://weblogs.mozillazine.org/asa/archives/2005_04.html “50,000,000 downloads and O’Reilly browser stats”) on the Windows platform, IE is still far and away the dominant browser, and for many people, if it doesn’t exist within Internet Explorer, then it doesn’t exist.
**Enclosure Support for Multiple File/Media Types.** I don’t remember them saying much about this at the Gnomedex live demo, but it comes through loud and clear on the [Channel 9 video](http://channel9.msdn.com/ShowPost.aspx?PostID=80533 “Blog entry containing interview video”) which was shot the night before the announcement.
The [RSS 2.0 `<enclosure>` element](http://blogs.law.harvard.edu/tech/rss#ltenclosuregtSubelementOfLtitemgt “RSS 2.0 enclosure tag documentation”) has historically–if 9 or 10 months counts as historical–been used mainly to deliver MP3 files to a [“podcatcher” application](http://www.bradsoft.com/feeddemon/help/1.0/enclosures/enc-sched.asp “FeedDemon’s companion application FeedStation”), which is an RSS aggregator that knows that enclosed MP3 files are podcasts, and handles them in such a way that they get automatically loaded onto the user’s media player, which is typically an [Apple iPod](http://www.apple.com/ipod/ “Apple’s iPod product page”) or [something](http://www.creative.com/products/mp3/zenmicro/ “Creative’s Zen Micro product page”) [similar](http://www.iriveramerica.com/prod/ultra/700/ifp_799.aspx “iRiver’s iFP-799 product page”). There’s no reason that this mechanism can’t be used for other kinds of enclosed files, and it’s exactly Microsoft’s intention to do just that.
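An enclosure is just a URL plus a length and a MIME type, so generalizing beyond MP3s is mostly a matter of dispatching on that type. A toy Python sketch (the handler names here are illustrative, not announced products):

```python
import xml.etree.ElementTree as ET

# Map enclosure MIME types to the kind of handler the demo implied:
# audio goes to the media player, calendar data to the calendar app,
# and so on. Handler names are illustrative, not announced products.
HANDLERS = {
    "audio/mpeg": "media player",
    "text/calendar": "calendar",
}

item = ET.fromstring(
    '<item><title>Show 42</title>'
    '<enclosure url="http://example.com/show42.mp3" '
    'length="12345678" type="audio/mpeg"/></item>'
)
enc = item.find("enclosure")
handler = HANDLERS.get(enc.get("type"), "download folder")
```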
The example that they showed in the demo was of an RSS feed item which contained 1 or more enclosures specifying [iCal calendar data files](http://developer.apple.com/internet/appleapplications/icalendarfiles.html “Apple iCal file format developer page”). (It wasn’t clear to me whether this was a single feed item with multiple enclosures, or multiple feed items each with a single enclosure. I guess it doesn’t really matter.) The iCal files were handed off to MS Outlook, where some special demo code inserted the calendar events automatically into the Outlook calendar. They weren’t making an official announcement concerning any future version of Outlook, but the implication was clearly that this would be a standard feature at some point.
In the Channel 9 video, the Microsoft guys said that just about any file type can and would be supported in this fashion. Screensaver files could be automatically installed; documents, presentations and spreadsheets could be distributed; and so on. Again, they didn’t make any official announcements, but they definitely indicated their intentions.
It wasn’t clear to me whether this was going to be a Longhorn feature, or if it would be available as part of the IE7 standalone release. I’m guessing that it’s a Longhorn feature.
**List Extension Namespace for RSS 2.0.** This is the feature that garnered the most attention and controversy because it’s the biggest and most obvious change. Microsoft has developed a specification for an XML *namespace* which, when used in conjunction with RSS 2.0, alters the semantics of RSS channels and their items or entries. An RSS channel or feed is implicitly a time-ordered list of what are essentially news articles. Microsoft’s [Simple List Extensions](http://msdn.microsoft.com/longhorn/understanding/rss/simplefeedextensions/ “list extensions spec”) basically remove the time-orderedness from the list, and allow the channel or feed to be simply a list of whatever you like.
If a reader or aggregator supports the extensions, then they need to support basic list operations such as sorting, insertion and deletion. If they don’t support the extensions, then they are simply ignored and the list is displayed as a normal RSS feed. The example that Microsoft demoed was of a wishlist on Amazon. They also talked about things like a wedding registry and other similar things.
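For the curious, here’s roughly what a list feed looks like and how an extension-aware aggregator might treat it, sketched in Python. (The namespace URI follows the Simple List Extensions spec as I understand it; verify against the published document before relying on it.)

```python
import xml.etree.ElementTree as ET

# Namespace from Microsoft's Simple List Extensions spec (URI as I
# understand the published draft -- verify against the spec itself).
CF = "{http://www.microsoft.com/schemas/rss/core/2005}"

feed = ET.fromstring(
    '<rss version="2.0" '
    'xmlns:cf="http://www.microsoft.com/schemas/rss/core/2005">'
    '<channel><title>Wishlist</title>'
    '<cf:treatAs>list</cf:treatAs>'
    '<item><title>Camera</title></item>'
    '<item><title>Book</title></item>'
    '</channel></rss>'
)

channel = feed.find("channel")
titles = [i.findtext("title") for i in channel.findall("item")]
if channel.findtext(CF + "treatAs") == "list":
    # A list-aware aggregator drops the time-ordering assumption and
    # may freely sort, insert, or delete; an unaware one simply
    # ignores the unknown element and shows a normal feed.
    titles.sort()
```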
The main points of controversy in the hall concerned whether or not this was a proper thing for Microsoft to be doing–that is, was this just a case of the standard MS strategy of Embrace, Extend, Extinguish–whether or not other developers **could** support these extensions, and whether or not they **would** support them. After much discussion, I think that the consensus was that it **was** okay for Microsoft to be doing this, and that developers **could** and **would** support the extensions. Microsoft has published the extensions under a [Creative Commons license](http://creativecommons.org/licenses/by-sa/2.5/ “Attribution ShareAlike license”), so anyone can implement them, and they showed a video of [Larry Lessig](http://www.lessig.org/blog/archives/002978.shtml “Lessig’s blog”) endorsing the concept.
My main question about all this is what the aggregator developers are going to do about it. Adding support for the extensions seems straightforward enough, but I’m thinking here at a higher level. Most aggregator applications are designed to be newsreaders. What should a newsreader do with a wedding registry list?
The other point of controversy occurred away from Gnomedex, on the blog postings of those who [support Atom in place of RSS 2.0](http://www.digital-web.com/news/2005/06/microsoft_to_take_rss_five_steps_backwards/ “Nick Finck’s opinion about all this”). Those people basically say something to the effect of “why extend RSS when you can just use Atom instead?” It’s not clear what Microsoft’s position on this is going to be. They’ve talked about supporting Atom, but mainly in the context of IE7 being able to consume and display Atom feeds. The List Extensions are clearly intended to be used with RSS 2.0, and they haven’t made any mention of supporting an equivalent feature in Atom.
**RSS Aggregator Platform Support in Longhorn.** All aggregators and readers maintain some sort of list of subscribed channels or feeds, as well as some sort of database which contains all of the items from the feeds along with other metadata such as whether or not an item has been read, how long it has been in the database, and so on. Aggregators also have to implement some sort of retention policy so that the database doesn’t grow over time to overwhelm the available disk space. Microsoft intends to implement these [basic structural functions within Longhorn](http://msdn.microsoft.com/Longhorn/understanding/rss/rsslonghorn/ “Microsoft developer page on RSS in Longhorn”) itself, and to provide an API for developers to use to access these services.
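As an illustration of that plumbing (and emphatically not the Longhorn API itself), here’s a minimal Python sketch of a feed store with a subscription list, an item database, and a retention policy:

```python
from datetime import datetime, timedelta

class FeedStore:
    """Toy model of aggregator plumbing: a subscription list, an item
    database with read/unread metadata, and a retention policy.
    An illustration of the concept, not the Longhorn API."""

    def __init__(self, retention_days=30):
        self.subscriptions = set()
        self.items = []          # dicts: feed, guid, fetched, read
        self.retention = timedelta(days=retention_days)

    def subscribe(self, feed_url):
        self.subscriptions.add(feed_url)

    def add_item(self, feed_url, guid, fetched=None):
        self.items.append({"feed": feed_url, "guid": guid,
                           "fetched": fetched or datetime.now(),
                           "read": False})

    def purge(self, now=None):
        """Drop old items so the database doesn't grow without bound."""
        now = now or datetime.now()
        self.items = [i for i in self.items
                      if now - i["fetched"] < self.retention]
```

A platform-level API would presumably expose operations like these to every aggregator, with the open/fair question being whether third parties see the same data Microsoft’s own applications do.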
In my opinion, this is **exactly** what a platform vendor should be doing with regard to a technology such as RSS: provide the basic structural support and allow developers to innovate on top of that. The big question here is whether or not this structure will be truly open and fair: will all of the data and metadata be fully accessible to any application, or will there be private data and/or private APIs which only Microsoft applications have access to?
**Implied RSS Support in Other Microsoft Applications.** There isn’t really much that can be said about this right now. The Microsoft people didn’t make any official announcements concerning other Microsoft products. But the basic message comes through loud and clear: Microsoft as a whole thinks that RSS is a Big Deal, and they are intending to support it in every way imaginable, plus a few that are unimaginable. They specifically mentioned that MSN was doing some fairly obvious things with RSS, and the various demos that were shown are clear harbingers of things to come. I would expect to see various groups announcing their plans for RSS over the next 18 months to 2 years, the timeframe when Longhorn and all of its supporting cast will be delivered. In the meantime, I think it’s safe to assume that anything that Microsoft can do with RSS, they will do in the near future.
**Bottom Line.** I think that the short-term things that Microsoft is doing in terms of the additions to IE7 are fairly obvious enhancements–along with many others–that they have to do in order to catch up to the competition. Having basically gone silent for the past 4 years in the browser space, Microsoft has a lot of work ahead of them just to draw even again with the likes of Firefox, Safari and Opera. I think that the proposed list enhancements are a nice addition to the functionality of RSS, that seem to be well thought-out and should be a benefit to everyone. And finally, I think that the platform additions to Longhorn are absolutely the right thing for Microsoft to be doing.
On the other hand, Longhorn won’t be released to the public for another year and a half (or possibly longer), it will be another year or two before it reaches any significant share of the Windows base, and many more years after that before it becomes the dominant OS. All during that time, aggregator developers will have to maintain two versions of their code, one which takes advantage of the Longhorn RSS support if it’s present, and one which uses the developer’s own infrastructure if running on something other than Longhorn. A lot can happen in that amount of time, and it may very well be the case that by the time that Longhorn becomes sufficiently established, RSS may have moved in an entirely different direction. Time will tell.
**Update: Here’s the real key to why I’m generally positive about this announcement. Microsoft has lost a lot of goodwill and credibility over the past decade or so, and they must realize that nobody trusts them. But having the trust and support of 3rd-party developers is critical to Microsoft’s future survival. So here they are, asking for developers’ trust and support for this initiative. They must know that everybody will be watching them on this one. It’s a test case. So far, they are saying and doing the right things. But if they revert to their old behavior, it’s game-over for them as far as developers go. No one will ever trust them again.**
#### Two: Adam Curry Announces Support for BitTorrent ####
Most podcasts are currently distributed via a direct file transfer from the author to the listener. When a new episode is posted by the author, there is typically a window of several hours in which a large number of listeners’ aggregators attempt to download the new episode all more-or-less at the same time. This in turn puts a tremendous strain on the author’s server. One possible method for easing this strain would be for the podcast file to be encapsulated as a [BitTorrent](http://www.bittorrent.com/ “BitTorrent home page”) file, which would spread the download burden over many different systems.
This topic was one of many under discussion on a live-at-Gnomedex session of the [Gillmor Gang](http://gillmorgang.podshow.com/ “Gillmor Gang home page”), with [Adam Curry](http://live.curry.com/ “Adam Curry’s blog”) being one of the panelists. When the subject came up, Adam said that he didn’t want to use BitTorrent to distribute his show, The [Daily Source Code](http://dailysourcecode.com/ “Daily Source Code home page”), because he didn’t want to be associated with a technology which itself is mainly associated with the piracy of copyrighted digital media files. The other panelists responded by urging him to take on the responsibility of banner-waver for BT exactly because his podcast is completely (well, mostly) non-infringing, and BitTorrent needs just such a high-profile, non-infringing application to give it credibility in the eyes of the law and the mainstream public. Adam said that he’d think about it.
Fortunately for everyone, Adam announced during his conference-ending keynote speech that he would indeed [begin supporting BitTorrent](http://radio.weblogs.com/0001014/categories/dailySourceCode/2005/06/26.html “Daily Source Code #200 from Gnomedex”). No further details were given, but hopefully it will start soon and hopefully it will include some or all of his other [PodShow](http://www.podshow.com/ “ka-ching!”) podcasts such as [The Dawn and Drew Show](http://dawnanddrew.podshow.com/ “Dawn and Drew home page”) and The Gillmor Gang.
The stakes were raised on this subject today when the Supreme Court of the U.S. [ruled against the good guys](http://seattletimes.nwsource.com/html/businesstechnology/2002350058_grokster28.html “Seattle Times article on court ruling”) in the Grokster case. Adam, I hope that this ruling doesn’t dissuade you, because we need your support now more than ever. I’m sure that if you do run into any trouble, you’ll have the full support of the EFF, Larry Lessig and the entire community. Good luck, and remember that we’re all behind you.
#### Three: Dave Winer Announces–But Sadly Doesn’t Ship–The OPML Editor ####
This is the 1/2 that I referred to at the top. [Dave Winer](http://www.scripting.com/ “Dave Winer’s blog”) has been working on [The OPML Editor](http://support.opml.org/member/login “OPML placeholder page”) for several months now, and had earlier telegraphed that it would be ready to ship in time for Gnomedex. Unfortunately, there are apparently enough unsquashed bugs still remaining that Dave felt it wasn’t ready for a general release at this time, so we’re going to have to wait for another month or so. Oh well. Still, it’s better to have to wait than to release a buggy product onto unsuspecting users. We’ve all had our fill of that, I’m sure.
But, the good news is that Dave did demo *The OPML Editor* during his lead-off keynote address on Friday morning. It looks and works a lot like the editor from within [Frontier](http://frontier.userland.com/ “Frontier home page”), which is not surprising. But, instead of operating on Frontier object database data, it creates, opens, edits and emits OPML files. OPML is an XML format for describing outlines. Additional features of the editor are that multiple users can share an outline file, making simultaneous updates which are immediately reflected to the other users, and it can be used to directly edit a weblog. I can hardly wait to get my hands on it!
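For anyone who hasn’t seen OPML before, the format itself is simple: an outline is just nested `outline` elements. A minimal, made-up example (the content here is purely illustrative) might look like this:

```xml
<!-- A minimal, hypothetical OPML document. The hierarchy of the
     outline is expressed by nesting <outline> elements in <body>. -->
<opml version="1.0">
  <head>
    <title>Trip Planning</title>
  </head>
  <body>
    <outline text="Pack the Vespa">
      <outline text="GPS" />
      <outline text="Saddle bags" />
    </outline>
    <outline text="Post daily blog entry" />
  </body>
</opml>
```

Because it’s plain XML, the same file can be round-tripped between different tools, which is what makes the shared-editing and weblog features plausible.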
One possible downside is that the user interface, at least in the initial release, looks a bit utilitarian. If you’ve ever seen *Frontier*, you’ll know what I mean. That’s probably okay if you’re a programmer or otherwise technically adept, but if you’re a more typical user like my mom or dad, then it might be a little intimidating. The good news is that Dave is releasing the code under an open source license–I think it was the GPL, but I’m not sure. That means that there’s an opportunity for someone to step up and take the code and turn it into a polished, commercial-quality outlining application. After all, if there was a decent market for ThinkTank and More 20 years ago, then surely there is an even bigger market for such a product today. There’s really nothing else like it out there.
![Thanks Dave](/shared/thanksdave2.jpg “My thanks to Dave Winer for his visionary role in the development of weblogs, RSS, podcasting, SOAP, XML-RPC, OPML, and outliners.”)

Intel Inside…Apple?

A number of people have recently asked me what I think about the Apple/Intel deal. This has been flogged to death on the net already, so I’ll just add a few brief observations.
* Apple is beginning a 2-year transition from IBM’s PowerPC chip architecture to some undisclosed Intel chip architecture. Most everyone is presuming that the Intel chip will be some current or future version of the Pentium 4 and Pentium M, and that seems like a good supposition. But I also wouldn’t be surprised if there were some other chip that Intel has under wraps that might be the subject of this deal.
* Apple will likely suffer a mild case of the “Osborne Effect” whereby it loses sales while customers stop purchasing the existing products in favor of waiting for the new ones, but I don’t think that it will be too severe. After it blows over, Apple’s sales will return to about where they are now. Switching to Intel CPUs will not boost Apple’s sales by any significant amount. There’s no advantage for a Windows user to buy a premium-priced system from Apple.
* Intel will be able to increase its sales by a few percentage points without having to take them away from AMD. In other words, the market for x86 chips expands, with 100% of that expansion going to Intel.
* Apple will design their Intel-based systems so that they are architecturally distinct from “IBM-compatible” PCs, and Apple will make sure that the MacOS only runs on genuine Apple hardware. People tend to forget that in the early 80’s there were several different flavors of 68000-based workstations around that wouldn’t run Mac software (Sun, Apollo, etc.), and there were x86-based systems that weren’t PC-compatible (DEC Rainbow, to name one), so this isn’t so hard to do.
* However, Microsoft (or some third-party) will make it possible to run Windows natively on Apple hardware. This MAY make Apple hardware more acceptable in a corporate environment due to the fact that it will be theoretically possible for that hardware to run Windows. But corporate standards tend to be pretty exact, so I’m inclined to doubt it.
* The more obvious way of running Windows and Windows applications on Apple hardware will be through emulators such as Virtual PC. I would be shocked if these emulators weren’t updated to run Windows apps at nearly full speed on a Mac.
* Due to the ubiquity of Windows applications, a Windows emulator such as Virtual PC will become almost standard equipment on an Intel-based Mac. This should result in a small increase in Microsoft’s Windows licensing revenue.
* In the longer term, this deal MAY result in fewer Mac-specific applications being built. Today’s applications are written to operating system API’s, not CPU instruction sets. Developers who are already committed to the MacOS API will likely continue to develop Mac-specific applications. But other developers will likely rely on the emulators to get their Windows API applications running on the Mac. As the cost of maintaining two separate code bases–one with a very large market share and the other with a very small one–continues, developers may decide to abandon the Mac API and concentrate on the Windows API exclusively, relying on emulation to cover the Mac users. This is especially likely if the emulators can be improved to the point where they can run applications without having to reveal an entire Windows operating environment. (If that last bit isn’t clear, send me an email at “scott at trotternet dot com” and I’ll try to clarify.)
So, to summarize:
Apple: Short-term loss of sales due to Osborne Effect, but recovering to current market share levels. In and of itself, the CPU switch probably won’t entice many current Windows users to switch to Mac.
Intel: Sales increase of 2-3% without having to battle AMD.
Microsoft: Windows license revenue increase due to increased use of emulators on Macs.
Windows developers: Slightly larger potential market for their products, but probably not enough to get them really excited.
Macintosh developers: May eventually abandon native Mac applications in favor of relying on Windows emulators.
Now, all this is based on the participants’ stated intentions at this point in time. The deal also opens up some interesting possibilities should the parties–mainly Apple–choose to take advantage of them.
* In spite of their stated intention to not allow the MacOS to run on non-Apple hardware, they could easily change their minds further down the line. This would effectively kill their hardware business and transform them into a pure software company much like Microsoft. I don’t think that this will happen as long as Steve Jobs is in charge. After all, one of Steve’s first actions upon resuming control of Apple was to kill off the Mac clone market.
* Apple might try to revive their “Switch” campaign by offering a limited version of the MacOS which would run completely from a CD-ROM, much the same way that some versions of Linux do, thereby allowing existing Windows users to “test-drive” the MacOS on their own systems before they (hopefully, from Apple’s POV) buy a Mac.
That’s all for now. Drop me a line and let me know what you think.

Cory Doctorow a Liability to EFF

I just finished listening to [Sound Policy with Denise Howell][1] from [IT Conversations][2] where the subject was Google’s Autolink feature on the latest edition of the [Google Toolbar][3]. I was shocked by the behavior of the [EFF’s Cory Doctorow][4]. He was rude to the other speakers, continually interrupting and shouting them down. He was disrespectful of the opposing point of view, labeling their concerns as “silly.” He was ineffective in promoting his own point of view in favor of Autolink, continually–and **loudly**–espousing wildly inaccurate and inappropriate analogies and examples.
But worst of all, Doctorow seems to be publicly advocating a position which, if it’s official EFF policy, may make everyone want to seriously reconsider their support for the EFF. I thought that the mission of the EFF was, at least in part, to restore the balance to copyright law which the entertainment cartel has stacked in their favor. But Doctorow seems to feel that there should be no copyright law at all, repeatedly stating that web authors have “no right” to have the integrity of their work respected. Well, if the EFF “doesn’t give a shit”–to use Doctorow’s words–about the rights of authors, then I no longer give a shit about the EFF.
[1]: http://www.itconversations.com/shows/detail438.html
[2]: http://www.itconversations.com/
[3]: http://toolbar.google.com/T3/
[4]: http://www.eff.org/about/staff/#cory_doctorow
**Update:** I should mention that prior to listening to this podcast, I was on the fence regarding Autolink. On the one hand, I can sympathize with the desire of authors not wanting to have the meaning–as opposed to the formatting–of their work altered by a third-party without their permission. On the other hand, Autolink does seem to be beneficial to the user under certain circumstances. For example, nothing irritates me more than to read an article on CNET describing some new company/product/service, but CNET refuses to provide any external links to the subject of the article in the mistaken belief that they can keep me trapped on their site in order to flash more ads at me. Wrong. They are just (A) pissing me off, and (B) forcing me to look it up the old-fashioned way. (BTW, I realize that the current incarnation of Autolink won’t “fix” this “problem” either.)
Cory Doctorow’s near-hysterical ranting certainly helped me make up my mind… I’m now firmly **opposed** to Autolink and anything like it. Hence, Doctorow’s ineffectiveness as an advocate in favor of Autolink.
By the way, there is another solution to the “problem” that Autolink is trying to address, but it’s getting late so I’ll write about it separately, probably tomorrow.