
WebVisions 2005 Wrapup

I attended the [5th annual WebVisions conference][webvisions], held this past Friday (7-15-05) in [Portland, Oregon][portland], and I must say that I have mixed feelings as to whether it was worth my while. On the one hand, I live in Portland and it’s not very expensive ($95), so other than missing a day of work, it’s almost a no-brainer. On the other hand, I sure didn’t learn very much that I didn’t already know. I think the basic problem is that WebVisions attracts a diverse audience, but the conference agenda doesn’t accommodate that diversity. Not only do you have end users, designers, developers, programmers and managers, but you also have experience levels ranging from newbie to net.god and everything in between. I think the solution is that the conference organizers need to either narrow the focus of the conference so that the audience self-selects more appropriately, or reorganize the agenda so that the individual sessions are more targeted toward skill level and interest. Or both. Here are a couple of examples:
One session that I attended was called [Looking Beyond the Desktop][desktop session], presented by [Molly Holzschlag][mollys summary], a woman whose work I greatly admire. The basic thrust of the session was that website designers and developers need to target output devices other than the desktop web browser, in particular handhelds (PDAs and cell phones), printers, projectors (i.e. slide shows) and screen readers. The session itself was fine as far as it went. The problem for me was that I would classify it as targeted at a beginning-to-intermediate skill level, but I really wanted the advanced version. I wanted to see examples of sites that were designed for handheld devices, as well as some that weren’t, and then to see them displayed on actual devices to learn what works and what doesn’t. I wanted some discussion of the various strategies that might be employed when targeting handheld (and other) devices, and of the trade-offs encountered. I wanted to know which devices supported the “handheld” media type and which did not; which supported [XHTML MP][xhtml mp] and which did not; which supported Javascript and which did not; and so on. In other words, I wanted to get into the nitty-gritty details, but there simply wasn’t time for that in a session which included people who didn’t even know what the [“media” attribute][media attribute] is for.
Another example was the panel discussion entitled [The Future of Content][content session], where the panel was composed of [Nick Finck][nicks summary], [Molly Holzschlag][mollys summary], [Keith Robinson][keiths summary] and [Kevin Smokler][kevins site], all fine folks. I found this one particularly frustrating because there was hardly any actual discussion of the future of content. Instead, we spent about 10-15 minutes just trying to define the term “content,” another 10-15 minutes discussing how to extract content from clients, and even about 5 minutes answering the question “What is RSS?” Now, that’s a perfectly legitimate question in an entry-level session, but not in a session called The Future of Content at a conference called WebVisions, both titles that imply a certain advanced, forward-looking orientation. I happen to think that the future of content is a very interesting and timely topic for the year 2005. I had some very interesting post-panel discussions with some of the panelists, and I think I’ll do a more in-depth post on the topic sometime in the next few days. But still, I found the panel as a whole quite disappointing.
So to reiterate, I think that the conference organizers need to do one of three things: 1) Take the name WebVisions to heart and refocus the conference on the future and not cater at all to the newbies; 2) Reorganize the conference agenda into tracks which target various skill levels and interests; or 3) Don’t take the name WebVisions too seriously, and just be the local Portland web gathering.
[webvisions]: “WebVisions 2005”
[portland]: “City of Portland, Oregon”
[desktop session]: “Session description”
[content session]: “Session description”
[xhtml mp]: “XHTML MP tutorial introduction”
[media attribute]: “Media type section from CSS spec”
[mollys summary]: “Molly’s summary of WebVisions 2005 from”
[nicks summary]: “Nick’s summary of WebVisions 2005 from Digital Web Magazine”
[keiths summary]: “Keith’s summary of WebVisions 2005 from asterisk*”
[kevins site]: “Kevin’s site”

Intel Inside…Apple?

A number of people have recently asked me what I think about the Apple/Intel deal. This has been flogged to death on the net already, so I’ll just add a few brief observations.
* Apple is beginning a 2-year transition from IBM’s PowerPC chip architecture to some undisclosed Intel chip architecture. Most everyone is presuming that the Intel chip will be some current or future version of the Pentium 4 and Pentium M, and that seems like a good supposition. But I also wouldn’t be surprised if there were some other chip that Intel has under wraps that might be the subject of this deal.
* Apple will likely suffer a mild case of the “Osborne Effect,” whereby it loses sales as customers stop purchasing the existing products in favor of waiting for the new ones, but I don’t think that it will be too severe. After it blows over, Apple’s sales will return to about where they are now. Switching to Intel CPUs will not boost Apple’s sales by any significant amount. There’s no advantage for a Windows user to buy a premium-priced system from Apple.
* Intel will be able to increase its sales by a few percentage points without having to take them away from AMD. In other words, the market for x86 chips expands, with 100% of that expansion going to Intel.
* Apple will design their Intel-based systems so that they are architecturally distinct from “IBM-compatible” PCs, and Apple will make sure that the MacOS only runs on genuine Apple hardware. People tend to forget that in the early 80’s there were several different flavors of 68000-based workstations around that wouldn’t run Mac software (Sun, Apollo, etc.), and there were x86-based systems that weren’t PC-compatible (DEC Rainbow, to name one), so this isn’t so hard to do.
* However, Microsoft (or some third party) will make it possible to run Windows natively on Apple hardware. This MAY make Apple hardware more acceptable in a corporate environment, since it will at least be theoretically possible for that hardware to run Windows. But corporate standards tend to be pretty exact, so I’m inclined to doubt it.
* The more obvious way of running Windows and Windows applications on Apple hardware will be through emulators such as Virtual PC. I would be shocked if these emulators weren’t updated to run Windows apps at nearly full speed on a Mac.
* Due to the ubiquity of Windows applications, a Windows emulator such as Virtual PC will become almost standard equipment on an Intel-based Mac. This should result in a small increase in Microsoft’s Windows licensing revenue.
* In the longer term, this deal MAY result in fewer Mac-specific applications being built. Today’s applications are written to operating system APIs, not CPU instruction sets. Developers who are already committed to the MacOS API will likely continue to develop Mac-specific applications. But other developers will likely rely on the emulators to get their Windows API applications running on the Mac. As the cost of maintaining two separate code bases–one with a very large market share and the other with a very small one–continues, developers may decide to abandon the Mac API and concentrate on the Windows API exclusively, relying on emulation to cover the Mac users. This is especially likely if the emulators can be improved to the point where they can run applications without having to reveal an entire Windows operating environment. (If that last bit isn’t clear, send me an email at “scott at trotternet dot com” and I’ll try to clarify.)
So, to summarize:
Apple: Short-term loss of sales due to Osborne Effect, but recovering to current market share levels. In and of itself, the CPU switch probably won’t entice many current Windows users to switch to Mac.
Intel: Sales increase of 2-3% without having to battle AMD.
Microsoft: Windows license revenue increase due to increased use of emulators on Macs.
Windows developers: Slightly larger potential market for their products, but probably not enough to get them really excited.
Macintosh developers: May eventually abandon native Mac applications in favor of relying on Windows emulators.
Now, all this is based on the participants’ stated intentions at this point in time. The deal also opens up some interesting possibilities should the parties–mainly Apple–choose to take advantage of them.
* In spite of their stated intention to not allow the MacOS to run on non-Apple hardware, they could easily change their minds further down the line. This would effectively kill their hardware business and transform them into a pure software company much like Microsoft. I don’t think that this will happen as long as Steve Jobs is in charge. After all, one of Steve’s first actions upon resuming control of Apple was to kill off the Mac clone market.
* Apple might try to revive their “Switch” campaign by offering a limited version of the MacOS which would run completely from a CD-ROM, in much the same way that some versions of Linux do, thereby allowing existing Windows users to “test-drive” the MacOS on their own systems before they (hopefully, from Apple’s POV) buy a Mac.
That’s all for now. Drop me a line and let me know what you think.

Cory Doctorow a Liability to EFF

I just finished listening to [Sound Policy with Denise Howell][1] from [IT Conversations][2] where the subject was Google’s Autolink feature on the latest edition of the [Google Toolbar][3]. I was shocked by the behavior of the [EFF’s Cory Doctorow][4]. He was rude to the other speakers, continually interrupting and shouting them down. He was disrespectful of the opposing point of view, labeling their concerns as “silly.” He was ineffective in promoting his own point of view in favor of Autolink, continually–and **loudly**–espousing wildly inaccurate and inappropriate analogies and examples.
But worst of all, Doctorow seems to be publicly advocating a position which, if it’s official EFF policy, may make everyone want to seriously reconsider their support for the EFF. I thought that the mission of the EFF was, at least in part, to restore the balance to copyright law which the entertainment cartel has stacked in their favor. But Doctorow seems to feel that there should be no copyright law at all, repeatedly stating that web authors have “no right” to have integrity of their work respected. Well, if the EFF “doesn’t give a shit”–to use Doctorow’s words–about the rights of authors, then I no longer give a shit about the EFF.
**Update:** I should mention that prior to listening to this podcast, I was on the fence regarding Autolink. On the one hand, I can sympathize with the desire of authors not wanting to have the meaning–as opposed to the formatting–of their work altered by a third-party without their permission. On the other hand, Autolink does seem to be beneficial to the user under certain circumstances. For example, nothing irritates me more than to read an article on CNET describing some new company/product/service, but CNET refuses to provide any external links to the subject of the article in the mistaken belief that they can keep me trapped on their site in order to flash more ads at me. Wrong. They are just (A) pissing me off, and (B) forcing me to look it up the old-fashioned way. (BTW, I realize that the current incarnation of Autolink won’t “fix” this “problem” either.)
Cory Doctorow’s near-hysterical ranting certainly helped me make up my mind… I’m now firmly **opposed** to Autolink and anything like it. Hence, Doctorow’s ineffectiveness as an advocate in favor of Autolink.
By the way, there is another solution to the “problem” that Autolink is trying to address, but it’s getting late so I’ll write about it separately, probably tomorrow.

IE7… Don’t Get Excited Just Yet

So, today Microsoft reversed their plan-of-record and announced that there will after all be another standalone version of Internet Explorer independent of Longhorn, the next slated version of Windows. You can read the details for yourself on the IE Blog. The Firefox primates have gotten the monolith’s attention. But don’t get too excited just yet.
There are 3 main areas in which Internet Explorer could use some updating and enhancements:
1. Security fixes
2. New user features (e.g. tabs)
3. Rendering engine fixes (e.g. full W3C standards compliance)
If you look carefully at their announcement, the only thing they’re talking about at this point is #1, fixing some (or all?) of IE’s security problems. Personally, I don’t care too much about this because I don’t use IE myself. For the 90% of the user population who still do, this is a big deal, but I would expect any software company to fix these kinds of problems with their product.
As for #2, adding new features, again I don’t care too much because I’m a very happy Firefox user. If Microsoft really wants to stop the loss of their IE users to Firefox, then they had better address #2, but their announcement doesn’t mention it.
As a professional web developer, I’m most concerned about #3, the continuing lack of standards compliance in Internet Explorer’s rendering engine. In the past few years, IE has become a significant roadblock to the continued development of the web, in much the same way that Netscape 4 was before that. And again, Microsoft’s announcement today says nothing about fixing this set of problems.
So for me, this announcement is a big yawner, not worthy of the headlines that it will garner. They are fixing the things that they have to fix in order to head off potential product liability lawsuits, but they aren’t doing anything about fixing the more strategic problems.

Copyright Cartel Strikes Again

I saw an item on Scripting News this morning about software which captures songs off of XM satellite radio and saves them as MP3 files on your PC’s hard drive. The software, Nerosoft TimeTrax, enhances a $50 XM accessory called the PCR, an XM radio receiver that attaches to and plays through your PC’s audio system. TimeTrax captures individual songs and saves them as MP3s, complete with file names and tags. I’ve been considering getting satellite radio for some time now (either XM or Sirius), and I thought that this little device–along with the software–would be just the thing to tilt the scales in favor of XM and lead me to subscribe.
That is until this afternoon, when I read on CNET that XM is pulling the device off the market. The article says that the RIAA didn’t pressure them to take that action, but I don’t believe that for a second. Pardon me, but didn’t the Betamax Decision twenty years ago specifically allow consumers to record broadcast signals for their own personal use? Just because this isn’t a VCR recording a television show shouldn’t make any difference. The principle is the same.
It’s time for some good old-fashioned Consumer Rights legislation which spells out exactly what we consumers have the right to do (and not do) with the copyrighted material which we buy. The entertainment industry is just going to get more and more restrictive in their licensing terms. It’s time we the people stood up and started seriously complaining about the loss of our ability to do what we please with the things that we purchase. The entire House of Representatives and 1/3 of the Senate are up for re-election this fall, so now’s the time to get their attention.

So Far, So Good

The conversion from Radio Userland over to Movable Type has gone very smoothly so far. Six Apart provides no support or directions for importing from Radio, as it does for a few other blogging systems, but I found a Python script by Krzysztof Kowalczyk which did 95% of the work of converting my Radio entries into the format that MT can import. Fixing up the other 5% didn’t take too long since I only had about 10 entries in Radio. If I had a more “normal” load of blog entries, the conversion would have been much more painful.
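For anyone contemplating a similar migration, the heart of such a script is just emitting Movable Type’s plain-text import format. Here’s a minimal sketch in Python; the entry fields and the intermediate list-of-dicts structure are my own illustration, not how Krzysztof’s script actually works:

```python
def to_mt_import(entries):
    """Render a list of post dicts in Movable Type's import format.

    `entries` is a hypothetical intermediate structure; a real script
    would first parse the posts out of Radio's own data files.
    """
    out = []
    for e in entries:
        out.append("TITLE: %s" % e["title"])
        out.append("AUTHOR: %s" % e["author"])
        # MT expects dates as MM/DD/YYYY HH:MM:SS AM|PM
        out.append("DATE: %s" % e["date"])
        out.append("-----")          # end of the metadata section
        out.append("BODY:")
        out.append(e["body"])
        out.append("-----")          # end of the BODY section
        out.append("--------")       # separator between entries
    return "\n".join(out) + "\n"
```

Feed the result to MT’s import mechanism and it creates one entry per `--------`-delimited block.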
I’ve poked around at the templates a bit, modifying things here and there to be more to my liking. I’ve exported them to external files so that I can use a normal editor to modify them. I found a Dreamweaver Extension by Shabbir J. Safdar which integrates those template files with Dreamweaver very nicely. One thing which puzzles me a bit is that they have the master CSS file set up as a template, even though (apparently) no conversion or substitution takes place when that template is “rebuilt.” It appears to be a simple copy operation, in which case, why did they bother making it a template?
I bought the book Teach Yourself Movable Type in 24 Hours by Molly Holzschlag and Porter Glendinning. Although it’s for an older version of MT, the format of the “24 Hours” series makes it very easy for an advanced user to skim through it and pick up the important points. I’ve read one other book by Molly and I think she’s a terrific author, on par with Zeldman and Meyer.
BTW, I don’t really need to buy and use books like this. I’ve got 30 years of experience with computers and programming, and I could easily figure this stuff out on my own. But one of the things that those years of experience have taught me is not to waste my time unnecessarily. This particular book cost $30, and if it saves me a half-hour in getting up to speed on Movable Type, then it’s well worth the cost. There are plenty of other things out there to learn that don’t have books written about them. Take advantage of other people’s experience when it’s available.
Browsing Amazon, I see that there is another book about MT due out this fall, Movable Type 3.0 Bible Desktop Edition by Rogers Cadenhead. Rogers is the author of a similar book about Radio Userland which I found similarly helpful when I was (trying to) learn about that blogging tool. I doubt I’ll need it by the time it’s published, but I’ll probably buy it anyway.

Life’s Too Short

I started this weblog last year, because I wanted a place where I could publish my thoughts and analysis on various topics related to my interests, hobbies and profession. I chose Radio Userland as my blogging tool because of my familiarity with Dave Winer and his earlier product Frontier. But I quickly discovered that Radio outputs the worst sort of tag-soup HTML that you could imagine, and as a web standards advocate, that would never do. Since I knew how to program in UserTalk, Radio’s underlying scripting language, I had thought that if I had the time, I could correct Radio’s output so that it would conform to the current W3C web publishing standards.
Well, I don’t have the time to fix Radio’s problems, and Userland doesn’t seem particularly interested in fixing them either. Life’s too short to sit around waiting for something like this to get fixed, so I’m switching to another tool, Movable Type. Why MT? Because I recently attended a web development conference, and “everyone” there who published a weblog was using Movable Type. So I checked it out and found that it seems to do everything I need, it outputs standards-compliant code, and the company behind it, Six Apart, seems to actually care about the product and support it.
So, so long Radio Userland, hello Movable Type. My first project will be to import my old Radio postings into MT. We’ll see how it goes…

Still exploring the guts of Radio Userland

Still exploring the guts of Radio Userland. I had hoped that I would be able to effect the changes I desired simply by modifying the template files, which are completely user-visible. Unfortunately, some of the macros that the templates invoke in order to place content into the published files use deprecated markup such as <br> instead of <p></p> to delimit paragraphs. That means that I’m going to have to rewrite at least some of the macros. The trick will be doing it in a way that is transparent to the normal functioning of Radio. That is, I don’t want to modify any of the code that is supplied by Userland. Rather, I want to either write my own macros which emulate the Userland ones, or else write pre- or post-processing macros which wrap around the corresponding Userland macro. Stay tuned.
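To illustrate the kind of post-processing step I have in mind (sketched in Python rather than UserTalk, purely for illustration), the wrapper would take a macro’s <br>-delimited output and rewrite it as proper <p> paragraphs:

```python
import re

def brs_to_paragraphs(html):
    """Rewrite <br>-delimited text as <p>-wrapped paragraphs."""
    # Split on runs of <br> (or XHTML-style <br />) tags
    chunks = re.split(r"(?:\s*<br\s*/?>\s*)+", html)
    # Wrap each non-empty chunk in a paragraph element
    return "\n".join("<p>%s</p>" % c.strip() for c in chunks if c.strip())
```

The wrapping macro would call the original Userland macro, pass its output through a function like this, and publish the result, leaving Userland’s code itself untouched.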