Tom Lee's Techdirt Profile

Posted on Techdirt - 6 November 2008 @ 04:28pm

Should We Worry About A National CTO?

Here in D.C. the town’s collective post-election hangover is lifting, and folks are beginning to ponder exactly what the new administration will mean for their respective corners of the world. Those of us working in technology are no exception, and a new blog post by Wayne Crews on OpenMarket.org has renewed discussion of President-elect Obama’s proposal for a national CTO.

Unfortunately, Crews’ post doesn’t offer much insight — he simply conflates “CTO” with “czar” (as in “drug czar”) and then decides that the track record of such positions means the initiative is a bad idea. As Jerry Brito noted in comments at the TLF, this rhetorical sleight of hand is a bit dishonest. The Obama campaign’s stated intention is for the CTO to “ensure the safety of our networks and will lead an interagency effort, working with chief technology and chief information officers of each of the federal agencies, to ensure that they use best-in-class technologies and share best practices.” That’s considerably less expansive than what Crews seems to fear.

Our own Tim Lee has weighed in on the idea before, defining two possible roles for a national CTO: one as a coordinator of federal systems (as described above) and another as an adviser on tech policy. As Tim notes, it’s important that President-elect Obama receive smart counsel on tech policy — and the Obama campaign’s association with people like Vint Cerf is encouraging on this score. But again, it’s not clear that such advising is within the purview of the CTO role as Obama conceives it.

So what about the other function? Tim isn’t enthusiastic about it, noting that the government probably already achieves what economies of scale it can, meaning that centralizing IT decisions would only result in reduced flexibility for individual agencies.

Speaking as a former government IT contractor, I’m not so sure about that. In my experience, IT procurement decisions within agencies are played very, very safe. The person making the purchasing decision is generally operating in CYA mode: the purchase is being made with an eye toward their career. There are no stock options or revenue sharing to consider — no upside — so the primary goal is to make decisions that minimize the potential for blame.

In practice this means buying from huge, established vendors, even when doing so isn’t really appropriate. I’ve seen projects buy massively expensive Oracle licenses when MySQL or PostgreSQL would’ve worked just fine, and would have cost far fewer dollars and man-hours. Why waste those resources? Because Oracle was seen as safe (particularly since Sun hadn’t yet acquired MySQL AB). It’s the same old problem that slowed private industry’s adoption of open-source software, except without the profit motive to push things along.

It’s possible to mount a justification for such a cautious approach by government, but “efficiency” isn’t likely to be part of that argument. And here’s where a national CTO really could make a difference: the high-profile, appointed nature of the position calls for a big name — someone with influence and a proven record of innovative ideas — rather than a cowering careerist. And that, in turn, might embolden the don’t-blame-me CTOs and CIOs further down the federal ladder. Desktop Linux springs to mind as the sort of technology that could save huge amounts of taxpayer money, but which is probably too intimidating for most agencies to undertake without direction from above.

What would this mean for you, me and the larger tech industry? In all likelihood, not very much. It’s not as if open-source technologies need the government’s stamp of approval to prove their viability; and every indication is that the important regulatory decisions that affect our industry will continue to be made at places like the FTC and FCC. A national CTO will be irrelevant to most of us, so time spent fretting over the office is probably time wasted. But that doesn’t mean that such a position isn’t a good idea — saving tax dollars usually is, and there’s reason to think that a national CTO could do just that.

Posted on Techdirt - 3 October 2008 @ 01:18pm

Let's Be Honest About Bandwidth Rationing

Earlier this week, GigaOM released a new paper on bandwidth caps by Muayyad Al-Chalabi. It’s been getting a fair amount of positive attention from around the web. I have to confess that I find this somewhat mystifying — I can only assume that those linking to it approvingly haven’t actually bothered to read it. Mr. Malik does a lot of good work, but this paper is beneath his usually high standards. It is at times confused, at others dishonest, and almost uniformly irrelevant.

The paper even starts off on the wrong foot with the standard, formulaic, credential-establishing pseudo-academic gibberish, in this case about scale-free networks. Al-Chalabi sagely notes that: “A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as P(k) ~ k^−γ, where γ is a constant whose value is typically in the range 2 < γ < 3.” Fascinating! But it does sound vaguely familiar… hmm… now where could I have read about scale-free networks before? Oh right! Wikipedia!

[Screenshot: the same passage in Wikipedia’s “Scale-free network” article]

There’s no shame in using Wikipedia as a reference (although there probably should be some shame in copying from it verbatim without a citation). No, the real problem is that there’s no indication that the author understands what any of this means, aside, perhaps, from it having something to do with graph theory. Still, at least the information on Wikipedia is correct. In the paper, the end of this section is the point at which things begin to go downhill.
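
As an aside, for anyone who wants a concrete sense of what that definition actually describes, here is a small, purely illustrative sketch of my own (not something from the paper): a preferential-attachment process, which is the textbook way scale-free degree distributions arise. Each new node links to a few existing nodes with probability proportional to their degree, so a handful of hubs pile up connections while most nodes stay near the minimum.

```typescript
// Purely illustrative, not taken from the paper: a simple preferential-
// attachment generator. Each new node links to m existing nodes chosen with
// probability proportional to their current degree, which is what produces
// the heavy-tailed, power-law-style degree distribution described above.
function preferentialAttachment(n: number, m: number): number[] {
  const degree: number[] = new Array(m).fill(0); // seed nodes 0..m-1
  const targets: number[] = []; // node ids, repeated once per incident edge
  for (let newNode = m; newNode < n; newNode++) {
    const chosen = new Set<number>();
    while (chosen.size < m) {
      // Sampling from `targets` is sampling proportionally to degree;
      // the seed round (empty `targets`) falls back to a uniform choice.
      const pick = targets.length === 0
        ? Math.floor(Math.random() * newNode)
        : targets[Math.floor(Math.random() * targets.length)];
      chosen.add(pick);
    }
    degree.push(0); // the new node starts with degree 0
    for (const t of chosen) {
      degree[t] += 1;
      degree[newNode] += 1;
      targets.push(t, newNode);
    }
  }
  return degree;
}

const degrees = preferentialAttachment(10_000, 3);
const sorted = [...degrees].sort((a, b) => b - a);
console.log("largest degrees:", sorted.slice(0, 5));        // a few big hubs
console.log("median degree:", sorted[sorted.length >> 1]);  // most nodes stay small
```

Run it and you'll see a few nodes with hundreds of links while the typical node has three or four. That, and nothing more mysterious, is all "scale-free" means.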

Al-Chalabi’s basic thesis is that the power users likely to be affected by Comcast’s 250GB monthly bandwidth cap are vital to the network’s overall health. To establish this, he talks about Skype — it’s questionable whether a VoIP app that uses sub-dialup amounts of bandwidth is germane to a discussion of high bandwidth use, but let’s press on. He mentions that machines on the Skype network can be classified as nodes, login servers or supernodes. This is true. Supernodes “exhibit higher degrees of connectivity than ordinary nodes”. Also true! But here the paper conflates “connectivity” and “bandwidth use” and everything goes badly wrong. Supernodes sound great — they’re super, after all! — but they’re really just nodes that aren’t sitting behind firewalls. That lets them help with NAT traversal and other network tasks, but it adds up to very little extra bandwidth use. Bandwidth limits are irrelevant to supernodes’ existence.
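
To make the distinction concrete, here is a purely illustrative sketch (mine, and certainly not Skype's actual selection logic): what qualifies a peer for the supernode role is reachability, not the volume of data it pushes.

```typescript
// Illustrative only (not Skype's real algorithm). The point: the supernode
// role hinges on whether a peer can accept inbound connections, i.e. it isn't
// stuck behind NAT or a firewall, not on how much bandwidth the peer itself
// consumes. The uptime threshold is an invented, hypothetical stand-in.
interface Peer {
  publiclyReachable: boolean; // can accept inbound connections directly
  uptimeHours: number;        // hypothetical stability requirement
  monthlyTrafficGB: number;   // note: never consulted below
}

function couldServeAsSupernode(p: Peer): boolean {
  return p.publiclyReachable && p.uptimeHours >= 1;
}

// A light user on an open connection qualifies; a 300GB/month "power user"
// behind a home router does not.
console.log(couldServeAsSupernode({ publiclyReachable: true, uptimeHours: 12, monthlyTrafficGB: 2 }));    // true
console.log(couldServeAsSupernode({ publiclyReachable: false, uptimeHours: 200, monthlyTrafficGB: 300 })); // false
```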

Al-Chalabi goes on to defend the utility of power users elsewhere, namechecking “web-based applications and networks”, “critical social network hubs” (p. 6) and Hulu, Netflix and iTunes (p. 7). But none of these applications rely on power users to relay their content through the network — they’re all standard client/server apps. In truth, Al-Chalabi’s power users are critical for the health of only one major type of network application: P2P. And that’s fine! P2P is important and has plenty of legitimate uses. But despite the dire predictions of bandwidth-capped startups smothered in their metaphorical cribs, the bandwidth hogs currently on the rise are non-P2P ventures backed by huge corporations. Let’s not pretend that we need to subsidize power users in order to maintain the health of our plucky web startups and their apps. It may well be that legitimate P2P-based apps like Joost and Steam will someday emerge as a meaningful force at odds with bandwidth limitations. At the moment, though, the initial wave of enthusiasm for integrating P2P into mainstream applications seems to have stalled.

The most startlingly dishonest part of the whitepaper comes at its end, when Al-Chalabi uses an estimate of data use by an average household in 2012: 200GB per month. He then picks Time Warner’s proposed tiered pricing scheme and uses it to arrive at a monthly bill of over $200. This is ridiculous. It’s quite obvious that bandwidth caps and pricing schemes will adapt as data demands grow, provider networks improve and competition forces carriers to respond to changing customer demands. I might as well assert that a minute of trans-Atlantic telephone communication costs $300 — that’d be the inflation-adjusted price, based on how much it cost when the service first became available. It’s also odd that Al-Chalabi uses Time Warner’s pricing scheme after primarily discussing Comcast in the rest of the paper. But then, his 2012 bandwidth use estimate falls under the 250GB cap that Comcast is instituting. Even if the cap didn’t go up over time, his hypothetical household’s bill would (theoretically) remain constant throughout his example.
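
For what it's worth, here is roughly how a number like that falls out, assuming (and these are my assumptions, not figures from the paper) the tier structure Time Warner was reported to be testing at the time: a top tier of roughly $55 per month for 40GB, plus $1 per GB of overage.

```typescript
// Back-of-the-envelope sketch. The tier numbers are assumptions based on
// Time Warner's reported trial pricing (~$55/month for a 40GB cap, $1/GB
// overage); treat them as illustrative rather than authoritative.
const monthlyUsageGB = 200;  // Al-Chalabi's 2012 estimate for an average household
const topTierPriceUSD = 55;  // assumed top-tier monthly price
const topTierCapGB = 40;     // assumed top-tier allowance
const overagePerGB = 1;      // assumed overage rate

const overageGB = Math.max(0, monthlyUsageGB - topTierCapGB);
const bill = topTierPriceUSD + overageGB * overagePerGB;
console.log(`hypothetical monthly bill: $${bill}`); // $215
```

Which is exactly why the exercise is silly: the calculation only works if you freeze a 2008 trial price list in place while letting demand quadruple.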

I’m sympathetic to where Al-Chalabi is coming from, and others here at Techdirt are even more so — Mike is on record as opposing bandwidth caps. But this whitepaper merely amounts to a complaint that a free lunch is ending. Bandwidth is clearly an increasingly limited resource. And in capitalist societies, money is how we allocate limited resources. The alternate solutions that Al-Chalabi proposes to the carriers on pages 6 and 8 — like P2P mirrors, improved service and “leveraging… existing relationships with content providers” — either assume that network improvements are free, would gut network neutrality, or are simply nonsense.

Yes, Comcast’s bandwidth cap is a drag. Instead of disconnection, there should be reasonable fees imposed for overages. They should come up with a schedule defining how the cap will increase in the future. And the paper’s suggestion of loosened limits during off-peak times is a good one. But the establishment of an actual, known limit does constitute a real improvement over the company’s frankly despicable past treatment of the issue and its customers. Hopefully the data carriers’ pricing schemes will continue to evolve in a more nuanced direction. Reasonable people can disagree about the specifics, but as this whitepaper accidentally proves, it’s hard to make an honest case that people shouldn’t have to pay for what they use.

Posted on Techdirt - 21 August 2008 @ 08:03pm

The First Step Is For Microsoft To Admit It Has A Problem

Ars Technica brings word of a pair of interesting efforts underway over at the Mozilla Project — both aimed at improving Internet Explorer, whether Microsoft likes it or not.

You may have heard of the first one already: ScreamingMonkey has gotten some press. It aims to make the core of Firefox’s next-generation Javascript engine (originally developed by Adobe) available in IE, providing advantages in speed and standards-compliance.

The other project is a bit more recent, and a bit more far-out: it’s an IE plugin created by Mozilla developer Vladimir Vukićević that implements the HTML5 <canvas> element — something that IE’s never gotten around to supporting. Canvas allows Javascript to draw 2D graphics on the client-side. You may have stumbled across it in the form of one or another nifty in-browser FPS demo. It’s a potentially powerful tool, but, as Ars notes, one that hasn’t achieved widespread adoption by web developers due to IE’s lack of support for it.
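
If you haven't run into canvas before, the appeal is easy to see: a few lines of script get you immediate-mode 2D drawing right in the page. Here is a minimal sketch, assuming the page contains a <canvas id="demo"> element:

```typescript
// Minimal canvas sketch. Assumes the page contains:
//   <canvas id="demo" width="200" height="200"></canvas>
const canvas = document.getElementById("demo") as HTMLCanvasElement;
const ctx = canvas.getContext("2d");
if (ctx) {
  ctx.fillStyle = "steelblue";
  ctx.fillRect(10, 10, 120, 80);          // a filled rectangle
  ctx.beginPath();
  ctx.arc(100, 140, 40, 0, Math.PI * 2);  // a circle outline
  ctx.stroke();
}
```

None of this works in stock IE, which is precisely the gap Vukićević's plugin tries to fill.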

Both of these projects are impressive pieces of technology. But unfortunately both attempts to improve IE are unlikely to succeed in the ways that their authors would like — and it’s easy to see why. It’s safe to say that IE users tend to be among the web’s least technically sophisticated. These are exactly the people who can least reasonably be expected to install modular improvements to their browser’s underlying technology. It’s hard to imagine anyone finding it easier to do this than to simply download and begin using Firefox — a task that’s already clearly too complicated for many people. And that’s to say nothing of the difficulty of getting the word out in the first place.

The right solution is the same as it’s always been: for Microsoft to fix its abysmally noncompliant browser. They wouldn’t even have to do it themselves! As Tom Raftery suggested some time ago, Microsoft could simply open-source IE. Superficially, this seems like a good fix: it’s not as if IE is a profit center for Microsoft, and Apple has already shown the viability of the approach with its open source WebKit HTML rendering engine. A bold step like that could go a long way to bolstering what has thus far been a fairly anemic stab at open source on Redmond’s part.

But of course it will never happen. As some of Raftery’s commenters pointed out, IE probably couldn’t be open sourced without revealing critical — and valuable — Windows code. More to the point, Microsoft wants a broken browser. Not supporting <canvas> means that no one will rely on it, which in turn means less competition for Microsoft’s rich client library Silverlight — created to solve the problem of missing <canvas>-like functionality (among other things). More broadly, a world of webapps that are perpetually forced to accommodate IE’s underachieving status means less time spent by users in the cloud, and consequently a bit more relevance for MS. Put simply, IE’s awfulness isn’t a bug, it’s a feature.

This is hardly an original observation, but that doesn’t make it any less true. And that means that the answer to IE’s persistence is the same as it’s always been: for Safari, Opera, Firefox et al to consistently provide a better browsing experience and thereby compel Microsoft to fix its mistakes — as it at least began to do with IE7. Unfortunately, that’s something those browser makers are going to have to do for themselves.

Posted on Techdirt - 29 July 2008 @ 01:50am

The Airlines' Ongoing Struggle With Price Aggregation Sites

It’s proving pretty difficult to figure out exactly what happened between American Airlines and Kayak last week. Last Wednesday TechCrunch reported that American Airlines was pulling its listings from the airfare search engine. Comments left by Kayak’s CEO Steve Hafner and VP Keith Melnick chalked the split up to Kayak’s display of AA fares from Orbitz: American had demanded that Kayak suppress the Orbitz listings, and Kayak refused.

Presumably one of two things is making American want to avoid comparison to Orbitz prices: either, as TechCrunch speculates, users clicking the Orbitz option put AA on the hook for two referral fees — one to Kayak and one to Orbitz; or AA has struck a deal with Orbitz that provides the latter’s users with cheaper fares than can be found on aa.com.

Either way, the news doesn’t appear to be as dire as it first sounded. It doesn’t seem that AA flights will be disappearing from Kayak — it’s just the links to buy them at aa.com that will go missing. As Jaunted points out, this might wind up costing flyers a few more dollars, but it shouldn’t be a major inconvenience for Kayak customers.

The more interesting aspect of this episode is how it reveals the stresses at play in the relationship between the airlines and travel search engines like Kayak. It’s no secret, of course, that the airlines are having a rough time as rising fuel prices put even more pressure on their perennially-failing business model. But while an airline attempting to control the distribution of its prices is nothing new, one can’t help but wonder whether ever-narrowing margins might lead to a shakeup of this market.

Kayak, like most travel search sites, gets its data from one of a handful of Global Distribution Systems (GDSes): businesses that charge airlines a fee to aggregate price and reservation information. Some airlines, like Southwest, opt out of the GDS system in order to avoid those fees. Others, like American, participate in the system but try to send as much online business as possible to their own sites. Presumably each airline tries to find the equilibrium point at which the business brought in by GDS participation, net of the fees involved, adds up to the most profit.

But so long as the financial temptation to retreat from the GDSes persists, GDS data will be less than complete. And that creates an opportunity for another kind of fare-aggregation business — one based upon scraping the data from the airlines’ websites. It’s been done before, after all, albeit on a limited scale. And since most people recognize that prices can’t be copyrighted, there doesn’t seem to be any legal barrier stopping such an aggregator from stepping in (nothing besides the need to write a lot of tedious screen-scraping software, that is). Of course, that won’t stop airlines from suing, but the legal basis for such suits seems pretty weak.

Whether such a business is likely to emerge and succeed, I couldn’t say. But it does seem certain that as fuel prices rise we’ll be seeing more and more travel industry infighting — and more and more hoops for online fare-shoppers to jump through.

Posted on Techdirt - 28 July 2008 @ 06:34am

As Expected, CW Realizes Gossip Girl Needs To Be Online

You’ll have to excuse the gloating, but, well: we told you so. Or Mike did, anyway, when back in April he explained why the CW Network’s decision to stop streaming Gossip Girl on its website was completely boneheaded.

The executives behind the decision were trying to force the show’s sizable online audience to watch the program on broadcast television instead. Unsurprisingly, the ploy didn’t work, and yesterday the network’s president confirmed that streaming will resume.

We should give credit where it’s due: the network brass recognized their experiment’s failure relatively quickly and called it off. And their original motivation is understandable — online advertising continues to lack the financial firepower of traditional media ads due to a variety of factors, only some of which can be blamed on the ad industry’s continued confusion over how to deal with the internet age.

But as Mike originally pointed out, this plan was doomed from the start. Limiting consumer choice is no longer a viable business strategy. Attempts to do so in the media realm are especially hopeless, and doubly so when, like the CW, you’ve already shown your users how much freedom they could be enjoying. Sure enough, the torrents for Gossip Girl are well-seeded. Given that, the CW’s decision to serve its viewers on their own terms is a wise one. Here’s hoping they do us all a favor and find a way to make their online ad inventory more financially viable.

Posted on Techdirt - 9 July 2008 @ 09:09am

Blaming The Flickr API For Copyright Infringement?

The Fourth of July is over, but for some Flickr users the holiday’s revolutionary spirit is still running strong. Apparently over the weekend a company called MyxerTones made Flickr’s entire photographic catalog available for sale as cellphone wallpaper — regardless of the license selected by each photo’s owner.

For Jim Goldstein, a photographer affected by the violation, this was the last straw. He’s posted a lengthy discussion of the issue in which he details other instances where his Flickr photos have been used without permission. Goldstein lays the blame at Flickr’s feet, saying that their API and RSS systems suffer from “security holes” and don’t properly protect users’ copyrights. His post has attracted over 100 comments and nearly as many inbound links.

So what’s the problem, exactly? In an early email to Flickr, Goldstein put it this way:

I want to be clear RSS feeds are not a problem for people to receive updates to view photos either in their RSS reader or through a web browser on their computer […] Personally I like that Flickr provides tags as a means of searching and organizing. I have no problem with using this functionality for all uses other than the unauthorized publication of my work.

In other words, use of his work via RSS is fine, except when it isn’t. How is Flickr supposed to know the difference? Well, it just is. And not by requiring Goldstein to mark his photos “friends only,” mind you — Goldstein doesn’t want to lose the promotional value of Flickr’s tag searches and RSS feeds. Flickr should know, somehow, that he doesn’t mind users viewing his photos in their RSS readers, but does mind when they view them, via RSS, on a Mac Mini connected to a TV that uses FlickrFan. Photos should be public, but not, you know, really public.

Needless to say, this is incoherent. If your work can be viewed on a computer it can also be copied — and, in fact, already has been. And, if the copier so desires, they can then reuse it. It’s sweet of Flickr to implement tricks like their one-pixel overlays, but only a fool would think they stop any but the laziest and most insignificant pirates.

Does Flickr’s API make unauthorized use of copyrighted material easier? Sure. But it doesn’t fundamentally change any of the operations that can be performed on photos through the website — it just simplifies a bit of the rigamarole associated with automating the process. In this respect it serves as a device that abets infringement. But that can’t reasonably be considered a flaw or mistake, for reasons we should all be familiar with by now.
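
To give a sense of how little rigamarole is actually left, here is a rough sketch of pulling public photo URLs through Flickr's documented flickr.photos.search method. The API key is a placeholder, and nothing below retrieves anything you couldn't already see (and save) in a browser.

```typescript
// Rough sketch only. FLICKR_API_KEY is a placeholder; a real key is required,
// and each photo's license still governs what you may do with the result.
const FLICKR_API_KEY = "YOUR_KEY_HERE";

async function publicPhotoUrls(tag: string): Promise<string[]> {
  const params = new URLSearchParams({
    method: "flickr.photos.search",
    api_key: FLICKR_API_KEY,
    tags: tag,
    format: "json",
    nojsoncallback: "1",
    per_page: "10",
  });
  const res = await fetch(`https://api.flickr.com/services/rest/?${params}`);
  const data = await res.json();
  // Each photo record carries enough fields to build a direct image URL.
  return data.photos.photo.map(
    (p: { server: string; id: string; secret: string }) =>
      `https://live.staticflickr.com/${p.server}/${p.id}_${p.secret}.jpg`
  );
}
```

Convenient, certainly. But nothing in it does anything a browser and a right-click can't.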

The Flickr API can be used to violate photograph owners’ rights. But the fault lies with those who misuse the tool, not with the tool itself. Goldstein doesn’t seem content with going after infringers, and I suppose that’s understandable — it’s a neverending battle, and an unsatisfying one. But what he’s asking from Flickr is both wrongheaded and technically impossible. He, and copyright owners everywhere, can choose to adapt to the rules of the digital age or to retreat from them entirely — but rewriting them is not an option.

Posted on Techdirt - 20 June 2008 @ 05:32pm

The March Of Mobile Phone Progress Isn't Always Smooth Or Direct

Tim Wu is discouraged. Writing in Slate last week, the telecom expert lamented the terms he’s facing as an aspiring iPhone 2 owner: a two-year AT&T contract (thanks to the handset’s newfound inability to be unlocked) and a move toward a more conventional subsidized-handset model. Wu sees this as emblematic of a shift in the mobile industry:

The fact that someone like me is switching to AT&T is a sign of the times in the telephone world. The wireless industry was once and is still sometimes called a “poster child for competition.” That kind of talk needs to end.

He’s right — but then, that kind of talk shouldn’t have been started in the first place. The mobile market was defined by long contracts, locked handsets and a lack of prepaid options long before Apple arrived on the scene. Now it appears that it’ll remain that way long after Apple.

Admittedly, this is a disappointment. Many looked at Apple’s choice of a second-rate carrier — one they could bully around — as a sign that everything was about to change. Finally a handset manufacturer had arisen that was powerful enough to break the industry’s self-serving revenue model and empower consumers! With the iPhone 2’s recently announced retreat toward conventional industry shadiness, those counting on Apple’s benevolent technological dictatorship have found themselves disappointed (as they have before, and no doubt will again). They were fooling themselves anyway: did anyone really think Apple was going to tolerate phone unlocking forever?

But the outlook isn’t all grim. As Wu notes, the Google-led Open Handset Alliance is trying to follow in Apple’s footsteps with its own game-changing, must-have handsets — only this time there seems to be a more expressly ideological slant to the effort. And Verizon’s Open Development Initiative, while less than perfect, is perhaps even more encouraging in that it shows the industry has begun to acknowledge the market’s need for more flexibility in data services.

And that’s the real reason for hope: the march of progress. Anyone who tries to paint the mobile industry as the picture of efficient market competition is either in denial or deeply dishonest. But wireless services will inevitably become more important and more available, whether thanks to WiMAX, revived municipal wifi projects (now without capital costs, thanks to the magic of bankruptcy!), spectrum freed by digital broadcasting, or some other wireless technology. The mobile carriers haven’t been great at competing amongst themselves, but you can bet they’ll begin responding once consumers have reasonable alternatives.

Posted on Techdirt - 19 May 2008 @ 11:39am

Is Net Neutrality Going To Kill You?

Techdirt does not have much of a history of awarding plaudits to Sonia Arrison. But this time we at least have to give her points for originality: in her latest essay opposing net neutrality she advances the indisputably original argument that net neutrality will kill you.

Well, alright: that’s a bit hyperbolic. But she does think that net neutrality legislation could lead to clogged networks that make pervasive health-monitoring applications unsafe, or at least untenable:

Technology like RFID tags connected with wireless networks can help create an “always on” health monitoring system, thereby transitioning society away from a “mainframe” medical model and redirecting it toward a smaller, more personalized, PC-type model. This is a great idea, yet the unspoken truth is that this type of communication requires healthy, innovative networks. That raises a key question about Net neutrality, an issue spun and respun by many.

It’s a neat trick, presuming that “non-neutral” and “healthy” are synonymous. But leaving aside that sleight of hand, Ms. Arrison’s position ignores an area where regulated networks have historically excelled: providing a minimal but guaranteed level of service. The telephone system’s better-than-five-nines level of reliability emerged while Ma Bell was at her most closed and monolithic. The ubiquity of the E911 system is the product of a federal mandate. And the highly-regulated public broadcast spectrum rarely sees dead air.
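
For context on how strong a guarantee that is, the arithmetic behind "five nines" works out to mere minutes of downtime a year:

```typescript
// "Five nines" = 99.999% availability, i.e. roughly five minutes of downtime
// per year.
const availability = 0.99999;
const minutesPerYear = 365.25 * 24 * 60;
const downtime = (1 - availability) * minutesPerYear;
console.log(`${downtime.toFixed(1)} minutes of downtime per year`); // ≈ 5.3
```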

The best arguments that net neutrality opponents have advanced concern the future of the network, not its present state. They maintain that treating a packet differently based on its business pedigree rather than its functional characteristics will ensure a competitive marketplace that provides new network services — more bandwidth and lower latency — and keeps prices low. Whether or not you agree with this conclusion, these posited advantages are exactly what low-bandwidth, latency-insensitive health monitoring systems don’t need.

That isn’t true of telerobotic surgery, of course, and that application is the other healthcare case that Arrison considers. And although she inexplicably implies that using the public network for it would be anything other than lunacy, she at least acknowledges that dedicated links can and likely would be used by hospitals for this sort of work. But then Arrison bizarrely notes that net neutrality legislation could cripple these privately-owned networks, too.

In 2001, professor Jacques Marescaux, M.D. and his team performed the first clinical robot-assisted remote telepresence surgery, operating on the gallbladder of a patient in Strasbourg, France — 4,000 miles away from their location in New York. What this type of procedure means to remote patients is life-changing, yet such an operation requires a stable and well-managed network, free from the binding hands of politics. Even if the doctors are using a dedicated network, it is still affected by whatever rules bureaucrats place on network operators as a whole.

I suppose this disastrous outcome is a possibility — but only in the sense that net neutrality legislation would also be a bad idea if it mandated that ISPs only allow traffic related to Facebook gifts and chain emails, or that cablemodem speeds not exceed 56k. It’s hard to imagine why legislators would do anything so daft. “Strengthening the case of anti-neutrality activists” is about the only reason I can come up with.

To be sure, there are real arguments to be made about the future of our networks and the appropriate role of the government, if any, in managing them. But net neutrality is not going to make you sicker.

Posted on Techdirt - 8 May 2008 @ 06:49pm

Does The GPL Still Matter?

The GNU General Public License heads to court again today, as Skype attempts to defend its distribution of Linux-enabled SMC hardware handsets that appear to be in violation of the operating system’s open source license. It’s easy to guess why Skype is fighting the suit, which was brought by GPL activists: the company relies on a proprietary protocol, and releasing the code could give competitors an advantage. You can’t blame them for trying. Although in the past few years the GPL has made important strides in establishing its legal enforceability, it’s still conceivable that a court could find something wrong with its unusual, viral nature.

Few think that this will be the court case that makes or breaks the GPL. Skype’s already lost early rounds of this fight, and the claims it’s now making seem so broad as to imply desperation. Besides, the case is being tried in the German legal system, which to date has proven friendly to the GPL.

But even if the license were invalidated, either in this case or another, there’s an argument to be made that the GPL has already served its purpose. Its impact on the world of open source software is undeniable: by ensuring that an open project would remain open, the license encouraged programmers to contribute to projects without fear of their work being co-opted by commercial interests. And by making it difficult, if not impossible, for a project derived from a GPLed project to go closed-source, it encouraged many programmers to license their efforts under open terms when they otherwise might not have.

But today, with open source firmly established as a cultural and commercial force, the GPL’s relevance may be waning. The transition to the third version of the license left many in the open source community upset and intent on sticking with its earlier incarnations. And an increasing number of very high-profile projects, like Mozilla, Apache and Open Office, have seen fit to create their own licenses or employ the less restrictive LGPL. The raw numbers bear out the idea of a slight decline in the GPL’s prominence, too: Wikipedia lists the percentage of GPLed projects on Sourceforge.net and Freshmeat.net, two large open source software repositories, as 68% and 65%, respectively, as of November ’03 and January ’06. Today, the most recently available numbers show that Sourceforge’s share has fallen to 65%, and Freshmeat’s share has fallen to 62%.

This is, of course, a small decline, and the GPL remains the world’s most popular open source license by a considerable margin. But it does seem as though there may be a slowly decreasing appetite for the license’s militant approach to copyleft ideals. I certainly don’t wish Skype well in its probably-quixotic tilt at the GPL, but if they were to somehow get lucky at least they’d be doing so at a point in the open source movement’s history when the GPL is decreasingly essential.

Posted on Techdirt - 25 April 2008 @ 11:54am

Comcast Cares — But Only About People Like You

You might’ve already heard about the Twitter account @comcastcares. Run by Comcast employee Frank Eliason, its purpose is to find upset customers before they even know they’re looking for help. I’d heard of Eliason’s project, but had completely forgotten about it when, on Sunday, I found my HD service mysteriously missing and broadcast my frustration to the Twitterverse. Frank’s immediate Twittered response was unexpected and reassuring. When the next day’s service call proved fruitless, he asked me to email him. Within a few hours I had received phone calls and emails from three different smart and seemingly concerned Comcast employees, and by the evening my problem was solved. I had been prepared to settle in for a weeks-long fight with the cable company. Instead, Frank’s quick intervention left me feeling oddly positive about a company that I had long considered to be more or less the embodiment of malevolent, slothful incompetence.

I’m not the only one who’s noticed Frank’s project, of course. Mike Arrington wrote about a positive encounter with @comcastcares earlier this month. And although Dave Winer remained peeved by his cable internet woes, it’s clear that he found @comcastcares helpful and worthwhile.

It certainly is those things. But is it anything more? Arrington is right, of course, when he says that more brands should be using Twitter as a buzz monitoring tool. But we should all keep in mind that the sort of concierge-style customer support offered by @comcastcares is unlikely to ever scale beyond the size of a PR exercise.

In this case Twitter’s chief virtue is its userbase: a collection of highly-wired early adopters whose online complaints about cable provider malfeasance frequently find their way into press accounts and Google results associated with the company. As handy a notification system as Twitter is, it’s not as if it offers a technological breakthrough that suddenly makes competent customer service possible: there’s been nothing preventing Comcast support from answering email, or getting on IM, or even just using a phone system that calls clients back rather than making them sit on the phone until they hang up or are driven mad by the hold music.

The reason Comcast and companies like it don’t do those things is simply that providing high-quality, personalized support is expensive. Providing high-quality support to influential users is expensive, too, but there are many fewer of them and they make a lot more noise, which makes it a better investment. I’m sure Frank isn’t undertaking his project cynically, but it’s hard to see how Twitter can change the economics of tech support.
