Last week I bought a Specialized Sirrus bike. I've ridden it to work twice this week, and I expect it to become my standard transport to work. With gas alone at current prices, I will need to save about 250 gallons before the bike pays for itself. My tank holds about 15 gallons, so that makes about 17 fill-ups. At my standard of a fill-up roughly every two weeks, that makes it about a year before I'll break even. That's assuming I ride about 35 weeks a year, and it's also assuming gas doesn't go up. I expect those two assumptions to prove equally false.
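The break-even arithmetic above, sketched in Python (all the numbers are the rough figures from the paragraph, so the results are approximations):

```python
# Rough break-even math for the bike, using the estimates above.
GALLONS_TO_SAVE = 250       # gallons of gas the bike must save to pay for itself
TANK_SIZE = 15              # gallons per fill-up
WEEKS_PER_FILLUP = 2        # one fill-up roughly every two weeks
RIDING_WEEKS_PER_YEAR = 35  # assumed riding season

fillups = GALLONS_TO_SAVE / TANK_SIZE   # about 17 fill-ups
weeks = fillups * WEEKS_PER_FILLUP      # about 33 weeks of riding
years = weeks / RIDING_WEEKS_PER_YEAR   # just under a year
```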

In addition to saving money, I get exercise on my way to work. Gmaps Pedometer says I ride about five miles a day. It doesn't seem that far, but that may be because the scenery is still new. I expect I'll eventually buy an iPod to entertain myself on the ride.

The bike was the third most expensive purchase I've ever made, after computers and cars (in that order). But I expect that over the next year it will turn out to be a better deal than either.


Pandora is a neat tool. It creates a personal radio station for you based on the musical style of an artist of your choosing. You can then customize it further with feedback on individual songs it has chosen. I'd like it to do more to expand my musical interests rather than just continually focusing or shifting them. And I'd also like to see it incorporate non-label music. But as is, it's a great example of where radio is headed.


A few months ago I discovered that Flickr has no security for images. Flickr has a nice feature which allows users to assign variable levels of copyright protection on images, from full copyright to various Creative Commons licenses. Images with less restrictive licenses show an "All Sizes" link which will let visitors download the image in various sizes. Images with full copyright do not show this link.

But the copyrighted images are still freely available for everyone to download. Flickr is relying on obscurity over actual security to prevent downloads of copyrighted images. And it's not even very good obscurity at that. To find an image, we need to know which server it is on, the ID of the image, and a "secret" key that's added to the image address. A quick glance at the source of any Flickr photo page shows that this information is stored in JavaScript variables named "server," "id," and "secret" respectively. It would appear that they're not even trying to protect these images.
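The image address can be assembled directly from those three variables. Here's a sketch in Python; the URL pattern is an assumption based on Flickr's static-image addresses at the time, and the function name is my own:

```python
# Build a full-size Flickr image address from the three JavaScript
# variables exposed in the photo page's source. The "static.flickr.com"
# pattern is assumed, not documented by Flickr.
def flickr_original_url(server, photo_id, secret):
    return "http://static.flickr.com/%s/%s_%s_o.jpg" % (server, photo_id, secret)
```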

This isn't much of a problem as long as only people with enough technical knowledge to look at the source code of a page can find the images, but I suspect there will be a good number of Flickr users who will be a bit upset when anyone can download their full-size copyrighted images with a single click. It is with my sympathies for these users that I publish the following bookmarklet, which you can click when viewing any image page on Flickr to get the original full-size version of that image:

Get Flickr Original

My goal here isn't to facilitate the downloading of images that Flickr users don't want downloaded, but rather to point out that Flickr has already facilitated such downloading. This is a public service announcement: Flickr has no image security.


®¤©: music now has a podcast feed that will include new music as I put it on the server. I now have a recording setup using an iSight camera and a Mac Mini, which works surprisingly well because the Mac Mini is so quiet and the iSight's mic is much better than one might expect. So if you subscribe now, you'll get my new not-terrible-quality recordings when I start making them over the next few weeks.


There was a robbery in Des Moines yesterday afternoon. Police are looking for a bald man, a woman with an eyebrow piercing, and a large green parakeet. No word on whether any of the suspects had wooden legs or hooks. They were last seen sailing — wait, there's no sea near here. What are pirates doing in Iowa?


Christians like murder. Pastafarians like full pirate regalia. The choice is up to you.


Phil is my soon-to-be new co-worker.


URLs are Uniform Resource Locators. They're the addresses for stuff on the web. They commonly start with "http://" and beyond that, they range widely in format. Uniform Resource Locators aren't very uniform. Part of the lack of uniformity comes from having multiple URLs pointing to the same resource.

To see why this is a problem, you can take a trip to Des Moines and try to find my house on 16th Street. You may end up on 16th Street in West Des Moines, or South East 16th Street in Des Moines, but mine is the one that intersects with Crocker, just off Martin Luther King. Only Crocker is named Cottage Grove where it meets Martin Luther King, which is also named 19th Street or Fleur at various points on the same street.

If you find my street, and then my house, you'll still have some trouble, as I live in a duplex, with two entrances in both the front and the back. I could tell you I live on the left side, but that may not be your left when you're standing in front of (or behind) the house. This would all be a lot easier if you could just go to (Actually, you can, if you want to order some flowers, but that won't help you get here.)

Ideally, every resource on the web would have a single URL. is good at working toward this goal. If you go to or, you will end up at Other sites are not so good at this. An interesting auxiliary benefit of URL-based tagging sites like is that we can easily see when a single resource has multiple URLs pointing at it. For example, at this moment, the popular page has three different URLs listed for the exact same article on slashdot.

This isn't a problem if the only site you visit is slashdot, for the same reason I don't have trouble finding my house. But if you're out wandering the web, and you come across a link to one of these URLs, and you follow it, and a day later you come across a different link to a slightly different URL pointing to the same page, you won't get the visual cue most browsers and websites offer to tell you that you've already followed this link and seen this content, so you'll click it again and waste precious seconds of your life.
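A toy normalizer makes the duplicate-URL problem concrete: it collapses a few common variations (a "www." prefix, a trailing slash, a fragment) into one canonical address. The rules here are illustrative, not a complete canonicalization scheme, and the example URLs are made up:

```python
# Collapse superficially different URLs into one canonical form, so
# near-duplicates can be recognized as the same resource.
from urllib.parse import urlsplit

def normalize(url):
    parts = urlsplit(url)
    host = parts.hostname or ""
    if host.startswith("www."):
        host = host[len("www."):]
    path = parts.path.rstrip("/") or "/"
    query = "?" + parts.query if parts.query else ""
    # The fragment is dropped entirely; it never reaches the server anyway.
    return "%s://%s%s%s" % (parts.scheme, host, path, query)

variants = [
    "http://example.org/article.pl?sid=1234",
    "http://www.example.org/article.pl?sid=1234",
    "http://example.org/article.pl?sid=1234#comments",
]
# All three variants collapse to a single address.
assert len({normalize(u) for u in variants}) == 1
```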

Many web developers may not particularly care about a random user roaming around the internet. But it turns out, as Shelley recently pointed out, that Google creates its index of websites by acting as a random user roaming around the internet. When Google happens upon your second or third URL pointing to the same content, it starts to think "hmm...maybe this site is just spamming the search index with the same content over and over again." If you have a Google rank as high as slashdot's, Google will quickly dismiss this suspicion, but you probably don't want Google ever wondering whether your site is spamming the search index. Not even (or maybe especially) if you are spamming the search index.

The irony is that smaller sites can least afford Google's suspicion or visitor confusion, but smaller sites can also least afford to clean URLs. One of the most useful tools in URL cleaning is Apache's mod_rewrite, yet few smaller sites have access to mod_rewrite's URL cleaning power. For those who do have access to mod_rewrite, along with a healthy (unhealthy?) knowledge of regular expressions, the task of cleaning URLs is relatively quick and easy.
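For those who do have mod_rewrite, a canonicalizing rule is short. Here's a hypothetical example of the kind of cleaning I mean, redirecting the "www." variant of a hostname to a single canonical address (the domain is a placeholder):

```apache
# Send www.example.com traffic to example.com with a permanent
# redirect, so every resource has one URL.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
```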

I assume the creators of slashdot have both the access and the know-how to clean their URLs, so they're easy targets for finger pointing. However, I also have both the access and the know-how to clean my URLs, and you'll notice no shortage of cruft around these parts. So this is as much a self service as a public service announcement. Self and public: clean your URLs.


I made a page to show sample Graphite graphs. The two there now show that Firefox downloads continue at a steady pace, which doesn't surprise me, and Google's index of the web changes often and sometimes drastically, which does surprise me. It takes at least a few days for the graphs to get interesting, so I don't have a lot now, but I'll keep adding more, and if anyone else has any graphs to share, please send them my way.


I discovered a couple of flaws in Graphite. One just resulted in ugly graphs, but the other was an infinite loop that slowed the entire system. Both only happen after running the same graph for over a month, which no one but me could possibly have done.

0.2 beta fixes two flaws in 0.1 beta. You should replace 0.1 with 0.2 to avoid problems.

Earlier, I wrote about how I'd like to see something that allows widgets to function more like standard applications, bringing web applications to the desktop. Turns out Amnesty does that and more, and was released a full month before I asked for it.

So here's my prediction: a future release of OSX will provide similar functionality in the OS itself.


Apple's Dashboard has a developer mode that lets you pull a widget off the Dashboard and onto the desktop, where it hovers over everything else, except another widget that has been selected more recently. It occurred to me today that somewhere Apple must be storing information about whether or not each widget is on the desktop, what layer each widget is on (relative to other widgets), the location of each widget, and more.

It doesn't take much more to make a window manager. If someone could figure out where this information is stored, and how to edit it, they could create a runtime environment that allows widgets to run as desktop applications. It seems inevitable to me that eventually the functionality of web applications will be built into the OS, but it would be neat to see someone give Apple a little push in that direction.


is now being served from an iBook in my house, on a dynamic IP address at 7Mbps. This means everything on will be slower, and occasionally it may go down altogether. In testing the system, I haven't noticed a big speed difference and it hasn't gone down once, but we'll see how it holds up under real traffic. On the positive side, I no longer have space limitations for the music, which means I can host as many songs as I want. It will also be easier to add new songs, as I can just stick them on my local iBook, and don't need to wait through slower FTP uploads. I hope this will result in my offering more music more often.


Interesting websites I've found via

The last one links to a USA Today article about legal issues of podcasting radio stations, a bump in the road to internet radio I once suggested.

The signal-to-noise ratio on is surprisingly high, with very few broken or boring links. I hope that lasts.


I finally finished the widget I've been working on:

Graphite is a free widget for Apple's Dashboard. You give it a website address, and some text before and after a number, and it will track and graph changes to that number over time.
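The core extraction step Graphite performs can be sketched in a few lines of Python. This is my own illustration of the idea, not the widget's actual code, and the function and argument names are made up:

```python
# Pull a number out of a page's text, given the text that appears
# immediately before and after it. This is the kind of context-based
# extraction Graphite does on each refresh.
import re

def extract_number(page_text, before, after):
    pattern = re.escape(before) + r"\s*([\d,.]+)\s*" + re.escape(after)
    match = re.search(pattern, page_text)
    if match is None:
        raise ValueError("marker text not found")
    return float(match.group(1).replace(",", ""))
```

For example, with the text "Downloads so far: 1,234,567 total", the markers "Downloads so far:" and "total" would extract 1234567.0.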

If you are running OSX 10.4, please try it out and share your thoughts.


One of the best things my parents did for me as a child was not getting cable television. Since then, cable television has become more and more pervasive, so much so that broadcast television is scheduled to die next year. But I have never paid for cable television. I would have been more likely to pay for cable TV if it were less pervasive, but why pay for it when everyone else I know has it? Not only can I watch it on other people's TVs, but I also have a natural filter for all the worthless television, because I never hear about it from other people. No one walks around saying "Did you see that show last night? It wasn't worth watching," though plenty of people watch such shows.

I think many in the FCC and the rest of the TV industry are assuming broadcast TV will be replaced, to whatever extent it hasn't been already, by cable and satellite TV. But Douglas Rushkoff says, "The 'next big thing' in media will not happen on TV - or at least not primarily on TV. It will happen on or through the Internet." And I think it's reasonable to expect it will happen around the same time broadcast TV formally dies. And so cable TV will die soon after.

One can get a vague sense of this with the explosion of weblogs, and more recently podcasting. Participatory publishing is moving into higher-bandwidth media, and television is the next logical step. But why settle for a vague sense when Open Network TV makes the future much clearer?

ONTV's beta release of a video aggregator is pretty bad. It has user interface confusion all over the place, and there's just not enough content yet. But still, it's better than the cable TV alternative, Current TV, for the simple reason that I can watch whatever I want with I/ON. That this is the future of television changes from a hunch to completely obvious after using this tool for five minutes. Imagining how much better I/ON would (and will) be with the resources that have been poured into Current TV, I think Al Gore made a big mistake.

Current TV's about page starts by saying, "Right now, at this moment in history, TV is the most powerful medium in the world." I think that moment passed already.


The weblog has been prettied up a bit. I'm going to do other sections one by one so I can check if the fixed width causes problems. Eventually, the whole site will look more or less like the weblog does now. The background is from squidfingers. The general look is from various styles at CSS Zen Garden.


I was going to write a rant about terrible customer service I've experienced recently, but instead I present an orderly table of information about the three companies that have overcharged me in the past two months:

Company: Qwest / Sprint
Amount overcharged: ~$60 / ~$250 / $10.20
Hours spent resolving: ~15 / ~10 / ~2
Weeks from problem reported to problem solved: 6 / 8 / 1, so far
Level of disdain for company (1-10): 8 / 6 / 5 and rising
Lessons learned: Don't buy modems from broadband providers. They're just selling a modem someone else made, so buy it from that someone else, who can reliably ship and support it. / Go to a physical store to get help. Customer service reps pretend to help, but don't; Sprint store employees don't pretend, because you know where to find them. / Yell at people to get money. I'm normally more passive-aggressive, but a dozen hours on the phone brought me to the discovery that customer service reps are more eager to give refunds to an unpleasant customer.

Okay, a bit of ranting at the end. If I could easily do so, I would drop my contract with all of these companies. But each provides a valuable communications service I can't conveniently find elsewhere.


What Business Can Learn from Open Source is the most interesting thing I've read in a long time. Here's one of many smart insights:

You can't expect employers to have some kind of paternal responsibility toward employees without putting employees in the position of children. And that seems a bad road to go down.

I think this part is just a bit too broad, though. Not all responsibilities are paternal - some are just part of living. For example, it doesn't imply the same paternal relationship to say that employers are responsible for not harming employees. This is a responsibility we assign to everyone. Other than that, I think his point about the negative consequences of giving away responsibility applies equally to other contexts. Government and religion come to mind.

But this is just one small section. You should really read the whole essay.


Last month my company had a golf outing. I had never played golf, and I was relatively new to the company, so I went and did my best to fit in. I would have never guessed I'd be working at an advertising agency and going to golf outings.

Golf Outing

Nor would I have guessed the mohawks.


PiggyBank looks like an interesting tool, which bills itself as an extension to the Firefox web browser that turns it into a “Semantic Web browser”, letting you make use of existing information on the Web in more useful and flexible ways. It's just one step too many for me to actually install, but the description suggests something that might be more widely used if it were a bit simpler. At some point, I expect PiggyBank and other similar tools will be more widely used, and I wonder what will happen to the web then.

Given enough context, it's not difficult to force semantics onto any website. And if the context isn't provided on the publishing side, there's no reason the reader can't provide it. I know where the movie titles are on an IMDB page, even though it isn't marked <h1 class="movie_title">A Great Movie</h1>. This is a lesson I learned through working on disemployed, and I'm relearning through playing with tools like GreaseMonkey, my MySpace RSS feed tool, and the widget I'm working on (and hope to release within the next week). All of these tools use context to infer meaning from otherwise meaningless markup. There is more and more technology adopting this method, but where is this leading?
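A toy scraper shows what this context-based inference looks like in practice. The HTML pattern here is made up for illustration; the point is the fragility, since the scraper silently stops matching if the markup changes:

```python
# Guess a page's title by position rather than semantics: assume the
# first <h1> on the page is the title. Pure context, no semantic markup.
import re

def guess_title(html):
    match = re.search(r"<h1[^>]*>(.*?)</h1>", html, re.DOTALL)
    return match.group(1).strip() if match else None
```

If the site later switches its titles from h1 to, say, a styled div, this returns nothing, and no one is notified that the inference broke.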

Forcing semantics onto a website will only work so long as the website maintains enough predictable structure to know where to put the semantics. When the structure changes, everything breaks. PiggyBank, for example, will require new scrapers nearly every time a target site changes structure. This isn't stable. It's also not scalable. There are billions of websites, and it's just not going to work to write and maintain custom scripts for each one to make it more semantic. At some point website developers will need to start participating in the semantic web for it to work.

But the current trend seems to discourage such participation in two ways. First, tools like PiggyBank and GreaseMonkey, as they become more popular, provide disincentives to change website markup. This is good for the stability issue I mentioned, but it's bad for the transition to a more semantic web. Second, as forced semantics tools get better and better at converting non-semantic websites into something semantic, there is little reason for the websites to themselves become more semantic.

Maybe I'm wrong, and website developers will look at something like PiggyBank, see the benefit of semantics to users, and decide to start using more descriptive XHTML or more RDF. But it seems to me more likely that we're headed towards a "semantic web" in which the semantics are forced onto websites by browsers and other intermediaries. This isn't necessarily a problem, but it isn't what most people have in mind when they talk about the semantic web.