All posts by Scott Trotter

IE7… Don’t Get Excited Just Yet

So, today Microsoft reversed their plan-of-record and announced that there will after all be another standalone version of Internet Explorer independent of Longhorn, the next slated version of Windows. You can read the details for yourself on the IE Blog. The Firefox primates have gotten the monolith’s attention. But don’t get too excited just yet.
There are 3 main areas in which Internet Explorer could use some updating and enhancements:
1. Security fixes
2. New user features (e.g. tabs)
3. Rendering engine fixes (e.g. full W3C standards compliance)
If you look carefully at their announcement, the only thing they're talking about at this point is #1, fixing some (or all?) of IE's security problems. Personally, I don't care too much about this because I don't use IE myself. For the 90% of the user population who do still use IE, this is a big deal, but I would expect any software company to fix these kinds of problems with their product.
As for #2, adding new features, again I don’t care too much because I’m a very happy Firefox user. If Microsoft really wants to stop the loss of their IE users to Firefox, then they had better address #2, but their announcement doesn’t mention it.
As a professional web developer, I'm most concerned about #3, the continuing lack of standards compliance in Internet Explorer's rendering engine. In the past few years, IE has become a significant roadblock to the continued development of the web, in much the same way that Netscape 4 was before it. And again, Microsoft's announcement today says nothing about fixing this set of problems.
So for me, this announcement is a big yawner, not worthy of the headlines that it will garner. They are fixing the things that they have to fix in order to head off potential product liability lawsuits, but they aren’t doing anything about fixing the more strategic problems.

Copyright Cartel Strikes Again

I saw an item on Scripting News this morning about software which captures songs off of XM satellite radio and saves them as MP3 files on your PC's hard drive. The software, Nerosoft TimeTrax, enhances a $50 XM accessory called the PCR, which is an XM radio receiver that attaches to and plays through your PC's audio system. TimeTrax captures individual songs and saves them as MP3s, complete with file names and tags. I've been considering getting satellite radio for some time now (either XM or Sirius), and I thought that this little device, along with the software, would be just the thing to tilt the scales in favor of XM and lead me to subscribe.
That is, until this afternoon, when I read on CNET that XM is pulling the device off the market. The article says that the RIAA didn't pressure them to take that action, but I don't believe that for a second. Pardon me, but didn't the Betamax Decision twenty years ago specifically allow consumers to record broadcast signals for their own personal use? The fact that this is satellite radio rather than a VCR recording a television show shouldn't make any difference. The principle is the same.
It's time for some good old-fashioned Consumer Rights legislation which spells out exactly what we consumers have the right to do (and not do) with the copyrighted material that we buy. The entertainment industry is just going to get more and more restrictive in its licensing terms. It's time we the people stood up and started seriously complaining about the loss of our ability to do as we please with the things that we purchase. The entire House of Representatives and 1/3 of the Senate are up for re-election this fall, so now's the time to get their attention.

So Far, So Good

The conversion from Radio Userland over to Movable Type has gone very smoothly so far. Six Apart provides no support or directions for importing from Radio, as it does for a few other blogging systems, but I found a Python script by Krzysztof Kowalczyk which did 95% of the work of converting my Radio entries into the format that MT can import. Fixing up the other 5% didn't take too long since I only had about 10 entries in Radio. If I had a more "normal" load of blog entries, the conversion would have been much more painful.
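For anyone attempting the same migration, the target format is refreshingly simple: Movable Type imports plain text with KEY: value header lines, five dashes separating the sections of an entry, and eight dashes terminating each entry. Here's a minimal Python sketch of the output half of such a conversion (not Kowalczyk's actual script), assuming the Radio entries have already been parsed into dictionaries with hypothetical field names:

    # Minimal sketch: emit parsed entries in Movable Type's import format.
    # The entry dictionaries and their field names are illustrative; they
    # are not Radio Userland's actual storage schema.

    def to_mt_import(entries):
        """Render a list of entry dicts as an MT import file."""
        out = []
        for e in entries:
            out.append("TITLE: %s" % e["title"])
            out.append("AUTHOR: %s" % e.get("author", "Anonymous"))
            out.append("DATE: %s" % e["date"])  # MT wants MM/DD/YYYY HH:MM:SS
            out.append("-----")                 # header/body section divider
            out.append("BODY:")
            out.append(e["body"])
            out.append("--------")              # eight dashes end the entry
        return "\n".join(out) + "\n"

    if __name__ == "__main__":
        sample = [{"title": "Hello", "date": "08/18/2004 10:30:00",
                   "body": "<p>First post.</p>"}]
        print(to_mt_import(sample))

The hard 95% that the script handled, of course, is on the input side: digging the entries out of Radio's object database in the first place.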
I've poked around at the templates a bit, modifying things here and there to be more to my liking. I've exported them to external files so that I can use a normal editor to modify them. I found a Dreamweaver Extension by Shabbir J. Safdar which integrates those template files with Dreamweaver very nicely. One thing which puzzles me a bit is that they have the master CSS file set up as a template, even though (apparently) no conversion or substitution takes place when that template is "rebuilt." It appears to be a simple copy operation, in which case, why did they bother making it a template?
I bought the book Teach Yourself Movable Type in 24 Hours by Molly Holzschlag and Porter Glendinning. Although it’s for an older version of MT, the format of the “24 Hours” series makes it very easy for an advanced user to skim through it and pick up the important points. I’ve read one other book by Molly and I think she’s a terrific author, on par with Zeldman and Meyer.
BTW, I don't really need to buy and use books like this. I've got 30 years of experience with computers and programming, and I could easily figure this stuff out on my own. But one of the things that those years of experience have taught me is not to waste my time unnecessarily. This particular book cost $30, and if it saves me a half-hour in getting up to speed on Movable Type, then it's well worth the cost. There are plenty of other things out there to learn that don't have books written about them. Take advantage of other people's experience when it's available.
Browsing Amazon, I see that there is another book about MT due out this fall, Movable Type 3.0 Bible Desktop Edition by Rogers Cadenhead. Rogers is the author of a similar book about Radio Userland which I found similarly helpful when I was (trying to) learn about that blogging tool. I doubt I'll need it by the time it's published, but I'll probably buy it anyway.

Life’s Too Short

I started this weblog last year, because I wanted a place where I could publish my thoughts and analysis on various topics related to my interests, hobbies and profession. I chose to use Radio Userland as my blogging tool because of my familiarity with Dave Winer and his earlier product Frontier. But I quickly discovered that Radio outputs the worst sort of tag soup HTML that you could imagine, and as a web standards advocate, that would never do. Since I knew how to program in UserTalk, Radio's underlying scripting language, I thought that if I had the time, I could correct Radio's output so that it would conform to the current W3C web publishing standards.
Well, I don't have the time to fix Radio's problems, and Userland doesn't seem to be particularly interested in fixing them either. Life's too short to sit around waiting for something like this to get fixed, so I'm switching to another tool, Movable Type. Why MT? Because I recently attended a web development conference, and "everyone" there who published a weblog was using Movable Type. So I checked it out and found that it seems to do everything I need, it outputs standards-compliant code, and the company behind it, Six Apart, seems to actually care about the product and support it.
So, so long Radio Userland, hello Movable Type. My first project will be to import my old Radio postings into MT. We'll see how it goes…

Still exploring the guts of Radio Userland

Still exploring the guts of Radio Userland. I had hoped that I would be able to effect the changes I desired simply by modifying the template files, which are completely user-visible. Unfortunately, some of the macros that the templates invoke in order to place content into the published files use deprecated markup elements such as <br> instead of <p></p> to delimit paragraphs. That means that I'm going to have to rewrite at least some of the macros. The trick will be doing it in a way that is transparent to the normal functioning of Radio. That is, I don't want to modify any of the code that is supplied by Userland. Rather, I want to either write my own macros which emulate the Userland ones, or else write pre- or post-processing macros which wrap around the corresponding Userland macro. The sketch below shows the general shape of the idea. Stay tuned.
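Radio's macros are written in UserTalk, so the real fix has to live there, but the post-processing idea is easy to sketch in Python terms. The stock macro below is a stand-in I invented for illustration; the point is the wrapper that rewrites <br>-delimited output into proper paragraphs without touching Userland's code:

    import re

    def userland_macro():
        # Stand-in for a stock Radio macro that emits deprecated markup.
        return "First paragraph.<br><br>Second paragraph.<br>"

    def fix_paragraphs(html):
        """Rewrite <br>-delimited runs of text into <p>-wrapped paragraphs."""
        pieces = re.split(r"(?:<br\s*/?>\s*)+", html)
        return "\n".join("<p>%s</p>" % p.strip() for p in pieces if p.strip())

    def wrapped_macro():
        """Post-processing wrapper: call the stock macro, clean its output."""
        return fix_paragraphs(userland_macro())

    print(wrapped_macro())  # <p>First paragraph.</p> then <p>Second paragraph.</p>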

Another shot at it

Well, I'm going to take another shot at actually writing this blog. When I first started this enterprise last year, I was appalled at the quality of the HTML that was emitted by Radio Userland. It's the very worst 'tag soup' that you can imagine. Not only that, but when it uses CSS, it does so in the worst possible way, by including the entire set of style definitions at the top of each and every file that it publishes. This means that not only do you lose the benefit of reduced file size that CSS normally provides, but you also lose the ability to modify a style definition for the entire site in a single location. Yech!
Since I’m a big advocate of CSS and standards-based markup, the thought of having my own site published in this manner was both embarrassing and frustrating. On the other hand, I’m an old-time Frontier programmer, so I thought that I could just delve into the object database, and modify the publishing macros so that they emitted proper, modern markup. Unfortunately, just as I was about to embark upon that little project, things got really, really busy at work, so I had to shelve the idea. And rather than endure the embarrassment of having my blog published using old school markup, I decided to just shelve it as well.
So, now things are a little less hectic at work, and I’m going to take another look at the publication macros. I haven’t started that project yet, but I’m going to start writing anyway. Hopefully, by the time anyone actually reads these words, I’ll have gotten the markup to the point where I won’t have to apologize for it.
That’s the plan anyway. We’ll see how it goes…

Going on hiatus for a while

I know that nobody is reading this right now, so this is addressed to posterity. I haven’t posted in about 6 weeks because things at work have gotten WAY busy, and it’s likely to continue that way for a while longer. I hope to get back into the flow of things soon, but you never know. But since nobody’s reading anyway, it doesn’t really matter now, does it?

Concerning NewsMonster

Here is a copy of the rather lengthy comment that I posted on Ben Hammersley’s weblog concerning a discussion about a new news aggregator product called NewsMonster:
I haven’t tried NewsMonster yet, but based on the discussion, it appears that the functionality that it most closely resembles is the “Offline Web Pages” feature of Internet Explorer for Windows. It also would appear that most people contributing to this discussion have not used this feature before, and therefore don’t appreciate just how valuable it is. If you haven’t used it, here’s a quick overview:
Offline Web Pages drives Internet Explorer just as if a live user were driving it. It stores complete web pages and all linked images and other content elements in IE's regular cache. It's completely user-configurable: it can store complete sites or just single pages depending on the URL; it can recursively dive down up to 3 (I think) levels deep; it can follow links to "outside" sites or stay within the domain specified by the initial URL; it can run on a schedule, on system events like startup or shutdown, or on demand; and it can traverse and cache a single site, or a whole list of sites.
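To make the mechanics concrete, here's a rough Python sketch of that kind of depth-limited, optionally same-site crawl. This is my own illustration of the idea, not anything resembling Microsoft's implementation:

    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collects the href targets of anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_depth=3, same_site=True):
        """Fetch pages breadth-first to max_depth, optionally on one host."""
        home = urlparse(start_url).netloc
        seen, queue, pages = set(), [(start_url, 0)], {}
        while queue:
            url, depth = queue.pop(0)
            if url in seen or depth > max_depth:
                continue
            seen.add(url)
            try:
                html = urlopen(url).read().decode("utf-8", "replace")
            except OSError:
                continue  # skip unreachable pages
            pages[url] = html  # a real agent writes this to an on-disk cache
            collector = LinkCollector()
            collector.feed(html)
            for href in collector.links:
                target = urljoin(url, href)
                if same_site and urlparse(target).netloc != home:
                    continue  # the "stay within the domain" option
                queue.append((target, depth + 1))
        return pages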
From the user’s perspective, you just run IE, put it into offline mode, then browse the site(s) as you would normally. There’s no difference between that and browsing the site online, except that the offline experience is blazingly fast, much faster than browsing online even over DSL or other broadband.
The way I used to use this feature was as follows: I have a half-hour train ride to and from work every day. I had my laptop set to download a list of sites every weekday morning at 5 a.m. and again in the afternoon at 4 p.m. The sites included CNET, NYT-Tech, Wired, GMSV and a few others. I could then read the news on the train using my laptop with IE in offline mode. This was a tremendous time-saver for me. I’ve since switched to using a Pocket PC for the train ride, but I still use Offline Web Pages for a few sites that I look at in the evenings at home.
Remember that the vast majority of web users are still stuck with 56K dialup, and will be for years to come. Using Offline Web Pages vastly improves the experience of browsing the web in that environment, as well as extending the availability of the web into situations where it isn't currently accessible. Are Offline Web Pages inefficient from a server perspective? Certainly. Nevertheless, the feature is invaluable under certain circumstances.
KEY POINT: If Offline Web Pages obeyed the Robot Exclusion Protocol, it would render this valuable feature completely useless.
So what's the answer? The first step is to recognize that IE's Offline Web Pages and (apparently) NewsMonster are neither robots in the "classic" sense of search engines, nor are they flesh-and-blood users, but a hybrid of the two. The solution should be twofold:
First, the offline user agents need to be very smart and efficient. They shouldn't try to download content that they already have in their cache. (Sites like CNET which have multiple CMS-generated URLs that point to the same article complicate this.) And they should try to learn from the user's history and only download pages that the user is likely to actually read, which is easier said than done!
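HTTP already has the right mechanism for the cache half of this: the conditional GET. Here's a minimal Python sketch using the If-Modified-Since and If-None-Match (ETag) validators; the in-memory cache structure is made up for illustration, where a real agent would persist it to disk:

    from urllib.error import HTTPError
    from urllib.request import Request, urlopen

    def fetch_if_changed(url, cache):
        """Conditional GET: only transfer the body if the server says it changed."""
        headers = {}
        entry = cache.get(url)  # cache maps url -> {etag, last_modified, body}
        if entry:
            if entry.get("etag"):
                headers["If-None-Match"] = entry["etag"]
            if entry.get("last_modified"):
                headers["If-Modified-Since"] = entry["last_modified"]
        try:
            resp = urlopen(Request(url, headers=headers))
        except HTTPError as err:
            if err.code == 304:  # Not Modified: reuse the cached copy
                return entry["body"]
            raise
        body = resp.read()
        cache[url] = {"etag": resp.headers.get("ETag"),
                      "last_modified": resp.headers.get("Last-Modified"),
                      "body": body}
        return body

That handles re-downloading unchanged pages; it does nothing for the CNET-style duplicate URLs, which would require the agent to compare content rather than addresses.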
Second, the Robot Exclusion Protocol is ancient by Internet standards, and could probably use an update to better handle this situation. Perhaps it could redirect bots to an alternate URL which would allow them to operate more efficiently. Or maybe there's already some other technology which would be more appropriate.
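For reference, the classic protocol is easy to consult programmatically; Python even ships a parser for it in the standard library. A quick sketch, with a made-up user-agent string:

    from urllib.parse import urlparse
    from urllib.robotparser import RobotFileParser

    def allowed(url, agent="OfflineReader/1.0"):
        """Check a URL against the site's robots.txt."""
        parts = urlparse(url)
        rp = RobotFileParser()
        rp.set_url("%s://%s/robots.txt" % (parts.scheme, parts.netloc))
        rp.read()  # fetch and parse the site's robots.txt
        return rp.can_fetch(agent, url)

Any updated protocol would presumably keep this basic shape, while adding a way for a site to point offline readers at a more efficient alternate URL rather than giving a flat yes or no.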