Posted on Techdirt - 6 November 2008 @ 4:28pm
Here in D.C. the town's collective post-election hangover is lifting, and folks are beginning to ponder exactly what the new administration will mean for their respective corners of the world. Those of us working in technology are no exception, and a new blog post by Wayne Crews on OpenMarket.org has renewed discussion of President-elect Obama's proposal for a national CTO.
Unfortunately, Crews' post doesn't offer much insight — he simply conflates "CTO" with "czar" (as in "drug czar") and then decides that the track record of such positions means the initiative is a bad idea. As Jerry Brito noted in comments at the TLF, this rhetorical sleight of hand is a bit dishonest. The Obama campaign's stated intention is for the CTO to "ensure the safety of our networks and will lead an interagency effort, working with chief technology and chief information officers of each of the federal agencies, to ensure that they use best-in-class technologies and share best practices." That's considerably less expansive than what Crews seems to fear.
Our own Tim Lee has weighed in on the idea before, defining two possible roles for a national CTO: one as a coordinator of federal systems (as described above) and another as an adviser on tech policy. As Tim notes, it's important that President-elect Obama receive smart counsel on tech policy — and the Obama campaign's association with people like Vint Cerf is encouraging on this score. But again, it's not clear that such advising is within the purview of the CTO role as Obama conceives it.
So what about the other function? Tim isn't enthusiastic about it, noting that the government probably already achieves what economies of scale it can, meaning that centralizing IT decisions would only result in reduced flexibility for individual agencies.
Speaking as a former government IT contractor, I'm not so sure about that. In my experience, IT procurement decisions within agencies are played very, very safe. The person making the purchasing decision is generally operating in CYA mode: the purchase is being made with an eye toward their career. There are no stock options or revenue sharing to consider — no upside — so the primary goal is to make decisions that minimize the potential for blame.
In practice this means buying from huge, established vendors, even when doing so isn't really appropriate. I've seen projects buy massively expensive Oracle licenses when MySQL or PostgreSQL would've worked just fine, and would have cost far fewer dollars and man-hours. Why waste those resources? Because Oracle was seen as safe (particularly since Sun hadn't yet acquired MySQL AB). It's the same old problem that slowed private industry's adoption of open-source software, except without the profit motive to push things along.
It's possible to mount a justification for such a cautious approach by government, but "efficiency" isn't likely to be part of that argument. And here's where a national CTO really could make a difference: the high-profile, appointed nature of the position calls for a big name — someone with influence and a proven record of innovative ideas — rather than a cowering careerist. And that, in turn, might embolden the don't-blame-me CTOs and CIOs further down the federal ladder. Desktop Linux springs to mind as the sort of technology that could save huge amounts of taxpayer money, but which is probably too intimidating for most agencies to undertake without direction from above.
What would this mean for you, me and the larger tech industry? In all likelihood, not very much. It's not as if open-source technologies need the government's stamp of approval to prove their viability; and every indication is that the important regulatory decisions that affect our industry will continue to be made at places like the FTC and FCC. A national CTO will be irrelevant to most of us, so time spent fretting over the office is probably time wasted. But that doesn't mean that such a position isn't a good idea — saving tax dollars usually is, and there's reason to think that a national CTO could do just that.
10 Comments
Posted on Techdirt - 3 October 2008 @ 1:18pm
Earlier this week, GigaOM released a new paper on bandwidth caps by Muayyad Al-Chalabi. It's been getting a fair amount of positive attention around the web. I have to confess that I find this somewhat mystifying — I can only assume that those linking to it approvingly haven't actually bothered to read it. GigaOM's Om Malik does a lot of good work, but this paper is beneath the site's usually high standards. It is at times confused, at others dishonest, and almost uniformly irrelevant.
The paper even starts off on the wrong foot with the standard, formulaic, credential-establishing pseudo-academic gibberish, in this case about scale-free networks. Al-Chalabi sagely notes that: "A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as P(k) ~ k^-γ, where γ is a parameter whose value is typically in the range 2 < γ < 3." Fascinating! But it does sound vaguely familiar... hmm... now where could I have read about scale-free networks before? Oh right! Wikipedia!
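For anyone curious what the quoted boilerplate actually says, it's easy to make concrete. Here's a quick sketch of a discrete power-law degree distribution using an illustrative exponent of 2.5 (values chosen for demonstration, not taken from the paper):

```python
# Illustrative sketch of the quoted definition: a discrete power-law
# degree distribution P(k) ~ k^-gamma, normalized over k = 1..K_MAX.
# gamma = 2.5 is a typical value for scale-free networks.
GAMMA = 2.5
K_MAX = 1000

weights = {k: k ** -GAMMA for k in range(1, K_MAX + 1)}
total = sum(weights.values())
p = {k: w / total for k, w in weights.items()}

# The defining property: doubling the degree cuts the probability
# by a constant factor of 2^gamma (about 5.66 here), so hub nodes
# are rare but never vanish entirely.
ratio = p[10] / p[20]
print(round(ratio, 2))  # 5.66
```

The takeaway is simply that hubs exist but are rare. Note that none of this says anything about how much bandwidth any given node consumes.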
There's no shame in using Wikipedia as a reference (although there probably should be some shame in copying from it verbatim without a citation). No, the real problem is that there's no indication that the author understands what any of this means, aside, perhaps, from it having something to do with graph theory. Still, at least the information on Wikipedia is correct. In the paper, the end of this section is the point at which things begin to go downhill.
Al-Chalabi's basic thesis is that the power users likely to be affected by Comcast's 250GB monthly bandwidth cap are vital to the network's overall health. To establish this, he talks about Skype — it's questionable whether a VoIP app that uses sub-dialup amounts of bandwidth is germane to a discussion of high bandwidth use, but let's press on. He mentions that machines on the Skype network can be classified as nodes, login servers or supernodes. This is true. Supernodes "exhibit higher degrees of connectivity than ordinary nodes". Also true! But here the paper conflates "connectivity" and "bandwidth use" and everything goes badly wrong. Supernodes sound great — they're super, after all! — but they're really just nodes that aren't sitting behind firewalls. That lets them help with NAT traversal and other network tasks, but it adds up to very little extra bandwidth use. Bandwidth limits are irrelevant to supernodes' existence.
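To make the distinction concrete, here's a toy model of supernode selection. The names and fields are invented for illustration; this is not Skype's actual protocol logic:

```python
from dataclasses import dataclass

# Toy model of the distinction drawn above: a Skype-style overlay
# promotes peers to supernode based on reachability, not on how
# much bandwidth they consume.  Names and fields are invented for
# illustration; this is not Skype's actual protocol logic.
@dataclass
class Peer:
    name: str
    publicly_reachable: bool  # not behind NAT or a firewall
    monthly_gb: float         # total bandwidth consumed

def is_supernode_candidate(peer: Peer) -> bool:
    # Eligibility turns on reachability alone: a peer that moves
    # very little data can still relay signaling for NAT traversal.
    return peer.publicly_reachable

light = Peer("dorm-box", publicly_reachable=True, monthly_gb=2.0)
heavy = Peer("seedbox", publicly_reachable=False, monthly_gb=400.0)

print(is_supernode_candidate(light), is_supernode_candidate(heavy))  # True False
```

The low-usage peer qualifies and the bandwidth hog doesn't, which is exactly why a 250GB cap has nothing to do with supernodes' existence.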
Al-Chalabi goes on to defend the utility of power users elsewhere, namechecking "web-based applications and networks", "critical social network hubs" (p. 6) and Hulu, Netflix and iTunes (p. 7). But none of these applications rely on power users to relay their content through the network — they're all standard client/server apps. In truth, Al-Chalabi's power users are critical for the health of only one major type of network application: P2P. And that's fine! P2P is important and has plenty of legitimate uses. But despite the dire predictions of bandwidth-capped startups smothered in their metaphorical cribs, the bandwidth hogs currently on the rise are non-P2P ventures backed by huge corporations. Let's not pretend that we need to subsidize power users in order to maintain the health of our plucky web startups and their apps. It may well be that someday legitimate P2P-based apps like Joost and Steam arise as a meaningful force at odds with bandwidth limitations. At the moment, though, the initial wave of enthusiasm for integrating P2P into mainstream applications seems to be somewhat stalled.
The most startlingly dishonest part of the whitepaper comes at its end, when Al-Chalabi uses an estimate of monthly data use by an average household in 2012: 200GB per month. He then picks Time Warner's proposed tiered pricing scheme and uses it to arrive at a monthly bill of over $200. This is ridiculous. It's quite obvious that bandwidth caps and pricing schemes will adapt as data demands grow, provider networks improve and competition forces carriers to respond to changing customer demands. I might as well assert that a minute of trans-Atlantic telephone communication costs $300 — that'd be the inflation-adjusted price, based on how much it cost when the service first became available. It's also odd that Al-Chalabi uses Time Warner's pricing scheme after discussing Comcast for most of the paper, especially since his 2012 usage estimate falls under the 250GB cap that Comcast is instituting. Even if that cap never rose, the hypothetical household's bill would (theoretically) remain constant throughout his example.
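For what it's worth, the arithmetic behind a number like that is simple enough, and that simplicity is the problem: it mechanically applies today's tiers to 2012's projected usage. A sketch with purely illustrative figures (not any carrier's actual 2008 price list):

```python
# Hypothetical cap-plus-overage billing, with purely illustrative
# figures -- not any carrier's actual 2008 price list.
def monthly_bill(usage_gb, cap_gb=40, base_fee=55.0, per_gb_over=1.0):
    overage = max(0.0, usage_gb - cap_gb)
    return base_fee + overage * per_gb_over

# A projected 200 GB/month household under a 40 GB tier:
print(monthly_bill(200))  # 55 + 160 * 1.00 = 215.0
```

Change any of those constants, as carriers inevitably will over four years of network growth and competition, and the scary headline number evaporates.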
I'm sympathetic to where Al-Chalabi is coming from, and others here at Techdirt are even more so — Mike is on record as opposing bandwidth caps. But this whitepaper merely amounts to a complaint that a free lunch is ending. Bandwidth is clearly an increasingly limited resource, and in capitalist societies money is how we allocate limited resources. The alternative solutions that Al-Chalabi proposes to the carriers on pages 6 and 8 — like P2P mirrors, improved service and "leveraging... existing relationships with content providers" — variously assume that network improvements are free, would gut network neutrality, or are simply nonsense.
Yes, Comcast's bandwidth cap is a drag. Instead of disconnection, there should be reasonable fees imposed for overages. They should come up with a schedule defining how the cap will increase in the future. And the paper's suggestion of loosened limits during off-peak times is a good one. But the establishment of an actual, known limit does constitute a real improvement over the company's frankly despicable past treatment of the issue and its customers. Hopefully the data carriers' pricing schemes will continue to evolve in a more nuanced direction. Reasonable people can disagree about the specifics, but as this whitepaper accidentally proves, it's hard to make an honest case that people shouldn't have to pay for what they use.
32 Comments
Posted on Techdirt - 21 August 2008 @ 8:03pm
Ars Technica brings word of a pair of interesting efforts underway over at the Mozilla Project -- both aimed at improving Internet Explorer, whether Microsoft likes it or not.
Both of these projects are impressive pieces of technology. But unfortunately both attempts to improve IE are unlikely to succeed in the ways that their authors would like -- and it's easy to see why. It's safe to say that IE users tend to be among the web's least technically sophisticated. These are exactly the people who can least reasonably be expected to install modular improvements to their browser's underlying technology. It's hard to imagine anyone finding it easier to do this than to simply download and begin using Firefox -- a task that's already clearly too complicated for many people. And that's to say nothing of the difficulty of getting the word out in the first place.
The right solution is the same as it's always been: for Microsoft to fix its abysmally noncompliant browser. They wouldn't even have to do it themselves! As Tom Raftery suggested some time ago, Microsoft could simply open-source IE. Superficially, this seems like a good fix: it's not as if IE is a profit center for Microsoft, and Apple has already shown the viability of the approach with its open source WebKit HTML rendering engine. A bold step like that could go a long way to bolstering what has thus far been a fairly anemic stab at open source on Redmond's part.
But of course it will never happen. As some of Raftery's commenters pointed out, IE probably couldn't be open sourced without revealing critical -- and valuable -- Windows code. More to the point, Microsoft wants a broken browser. Not supporting <canvas> means that no one will rely on it, which in turn means less competition for Microsoft's rich client library Silverlight -- created to solve the problem of missing <canvas>-like functionality (among other things). More broadly, a world of webapps that are perpetually forced to accommodate IE's underachieving status means less time spent by users in the cloud, and consequently a bit more relevance for MS. Put simply, IE's awfulness isn't a bug, it's a feature.
This is hardly an original observation, but that doesn't make it any less true. And that means that the answer to IE's persistence is the same as it's always been: for Safari, Opera, Firefox, et al. to consistently provide a better browsing experience and thereby compel Microsoft to fix its mistakes -- as it at least began to do with IE7. Unfortunately, that's something they're going to have to do for themselves.
32 Comments
Posted on Techdirt - 29 July 2008 @ 1:50am
It's proving pretty difficult to figure out exactly what happened between American Airlines and Kayak last week. Last Wednesday TechCrunch reported that American Airlines was pulling its listings from the airfare search engine. Comments left by Kayak's CEO Steve Hafner and VP Keith Melnick chalked the split up to Kayak's display of AA fares from Orbitz: American had demanded that Kayak suppress the Orbitz listings, and Kayak refused.
Presumably one of two things is making American want to avoid comparison to Orbitz prices: either, as TechCrunch speculates, users clicking the Orbitz option put AA on the hook for two referral fees -- one to Kayak and one to Orbitz; or AA has struck a deal with Orbitz that provides the latter's users with cheaper fares than can be found on aa.com.
Either way, the news doesn't appear to be as dire as it first sounded. It doesn't seem that AA flights will be disappearing from Kayak -- it's just the links to buy them at aa.com that will go missing. As Jaunted points out this might wind up costing flyers a few more dollars, but it shouldn't be a major inconvenience for Kayak customers.
The more interesting aspect of this episode is how it reveals the stresses at play in the relationship between the airlines and travel search engines like Kayak. It's no secret, of course, that the airlines are having a rough time as rising fuel prices put even more pressure on their perennially-failing business model. But while an airline attempting to control the distribution of its prices is nothing new, one can't help but wonder whether ever-narrowing margins might lead to a shakeup of this market.
Kayak, like most travel search sites, gets its data from one of a handful of Global Distribution Systems: businesses that charge airlines a fee to aggregate price and reservation information. Some airlines, like Southwest, opt out of the GDS system in order to avoid those fees. Others, like American, participate in the system but try to send as much online business as possible to their own sites. Presumably each airline tries to find the equilibrium point at which the business a GDS brings in, net of the fees associated with it, adds up to the most profit.
But so long as the financial temptation to retreat from the GDSes persists, GDS data will be less than complete. And that creates an opportunity for another kind of fare-aggregation business -- one based upon scraping the data from the airlines' websites. It's been done before, after all, albeit on a limited scale. And since most people recognize that prices can't be copyrighted, there doesn't seem to be any legal barrier stopping such an aggregator from stepping in (nothing besides the need to write a lot of tedious screen-scraping software, that is). That won't stop airlines from suing, of course, but the legal basis for their arguments seems pretty weak.
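The technical barrier really is mostly tedium. A bare-bones sketch of the approach, with invented HTML snippets standing in for real airline pages (which would need sturdier parsing and constant upkeep):

```python
import re

# Bare-bones fare scraping: pull dollar figures out of page text
# with a regex and keep the cheapest.  The HTML snippets below are
# invented for illustration; real pages would need real parsers.
FARE_RE = re.compile(r"\$(\d+(?:\.\d{2})?)")

def cheapest_fare(pages):
    fares = [float(m) for html in pages for m in FARE_RE.findall(html)]
    return min(fares) if fares else None

pages = [
    "<td>DCA-JFK</td><td>$189.00</td>",
    "<span class='fare'>$204.50</span>",
    "<div>From $179.00 one way</div>",
]
print(cheapest_fare(pages))  # 179.0
```

The hard part isn't the code; it's keeping a scraper like this working as dozens of airline sites change their markup week after week.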
Whether such a business is likely to emerge and succeed, I couldn't say. But it does seem certain that as fuel prices rise we'll be seeing more and more travel industry infighting -- and more and more hoops for online fare-shoppers to jump through.
21 Comments
Posted on Techdirt - 28 July 2008 @ 6:34am
You'll have to excuse the gloating, but, well: we told you so. Or Mike did, anyway, when back in April he explained why the CW Network's decision to stop streaming Gossip Girl on its website was completely boneheaded.
The executives behind the decision were trying to force the show's sizable online audience to watch the program on broadcast television instead. Unsurprisingly, the ploy didn't work, and yesterday the network's president confirmed that streaming will resume.
We should give credit where it's due: the network brass recognized their experiment's failure relatively quickly and called it off. And their original motivation is understandable -- online advertising continues to lack the financial firepower of traditional media ads due to a variety of factors, only some of which can be blamed on the ad industry's continued confusion over how to deal with the internet age.
But as Mike originally pointed out, this plan was doomed from the start. Limiting consumer choice is no longer a viable business strategy. Attempts to do so in the media realm are especially hopeless, and doubly so when, like the CW, you've already shown your users how much freedom they could be enjoying. Sure enough, the torrents for Gossip Girl are well-seeded. Given that, the CW's decision to serve its viewers on their own terms is a wise one. Here's hoping they do us all a favor and find a way to make their online ad inventory more financially viable.
6 Comments
Posted on Techdirt - 9 July 2008 @ 9:09am
The Fourth of July is over, but for some Flickr users the holiday's revolutionary spirit is still running strong. Apparently over the weekend a company called MyxerTones made Flickr's entire photographic catalog available for sale as cellphone wallpaper -- regardless of the license selected by each photo's owner.
For Jim Goldstein, a photographer affected by the violation, this was the last straw. He's posted a lengthy discussion of the issue in which he details other instances where his Flickr photos have been used without permission. Goldstein lays the blame at Flickr's feet, saying that their API and RSS systems suffer from "security holes" and don't properly protect users' copyrights. His post has attracted over 100 comments and nearly as many inbound links.
So what's the problem, exactly? In an early email to Flickr, Goldstein put it this way:
I want to be clear RSS feeds are not a problem for people to receive updates to view photos either in their RSS reader or through a web browser on their computer [...] Personally I like that Flickr provides tags as a means of searching and organizing. I have no problem with using this functionality for all uses other than the unauthorized publication of my work.
In other words, use of his work via RSS is fine, except when it isn't. How is Flickr supposed to know the difference? Well, it just is. And not by requiring Goldstein to mark his photos "friends only," mind you -- Goldstein doesn't want to lose the promotional value of Flickr's tag searches and RSS feeds. Flickr should know, somehow, that he doesn't mind users viewing his photos in their RSS readers, but does mind when they view them, via RSS, on a Mac Mini connected to a TV running FlickrFan. Photos should be public, but not, you know, really public.
Needless to say, this is incoherent. If your work can be viewed on a computer it can also be copied -- and, in fact, already has been. And, if the copier so desires, they can then reuse it. It's sweet of Flickr to implement tricks like its one-pixel overlays, but only a fool would think they stop any but the laziest and most insignificant pirates.
Does Flickr's API make unauthorized use of copyrighted material easier? Sure. But it doesn't fundamentally change any of the operations that can be performed on photos through the website -- it just simplifies a bit of the rigamarole associated with automating the process. In this respect it serves as a device that abets infringement. But that can't reasonably be considered a flaw or mistake, for reasons we should all be familiar with by now.
The Flickr API can be used to violate photograph owners' rights. But the fault lies with those who misuse the tool, not with the tool itself. Goldstein doesn't seem content with going after infringers, and I suppose that's understandable -- it's a neverending battle, and an unsatisfying one. But what he's asking from Flickr is both wrongheaded and technically impossible. He, and copyright owners everywhere, can choose to adapt to the rules of the digital age or to retreat from them entirely -- but rewriting them is not an option.
31 Comments
Posted on Techdirt - 20 June 2008 @ 5:32pm
Tim Wu is discouraged. Writing in Slate last week, the telecom expert lamented the terms he's facing as an aspiring iPhone 2 owner: a two-year AT&T contract, thanks to the handset's newfound resistance to unlocking and its move toward a more conventional subsidized pricing model. Wu sees this as emblematic of a shift in the mobile industry:
The fact that someone like me is switching to AT&T is a sign of the times in the telephone world. The wireless industry was once and is still sometimes called a "poster child for competition." That kind of talk needs to end.
He's right -- but then, that kind of talk shouldn't have been started in the first place. The mobile market was defined by long contracts, locked handsets and a lack of prepaid options long before Apple arrived on the scene. Now it appears that it'll remain that way long after Apple's arrival, too.
Admittedly, this is a disappointment. Many looked at Apple's choice of a second-rate carrier -- one they could bully around -- as a sign that everything was about to change. Finally a handset manufacturer had arisen that was powerful enough to break the industry's self-serving revenue model and empower consumers! With the recent declaration of the iPhone 2's retreat toward conventional industry shadiness, those counting on Apple's benevolent technological dictatorship have found themselves disappointed (as they have before, and no doubt will again). They were fooling themselves anyway: did anyone really think Apple was going to tolerate phone unlocking forever?
But the outlook isn't all grim. As Wu notes, the Google-led Open Handset Alliance is trying to follow in Apple's footsteps with its own game-changing, must-have handsets -- only this time there seems to be a more expressly ideological slant to the effort. And Verizon's Open Development Initiative, while less than perfect, is perhaps even more encouraging in that it shows the industry has begun to acknowledge the market's need for more flexibility in data services.
And that's the real reason for hope: the march of progress. Anyone who tries to paint the mobile industry as the picture of efficient market competition is either in denial or deeply dishonest. But wireless services will inevitably become more important and more available, whether thanks to WiMAX, revived municipal wifi projects (now without capital costs, thanks to the magic of bankruptcy!), spectrum freed by digital broadcasting, or some other wireless technology. The mobile carriers haven't been great at competing amongst themselves, but you can bet they'll begin responding once consumers have reasonable alternatives.
21 Comments
Posted on Techdirt - 19 May 2008 @ 11:39am
Techdirt does not have much of a history of awarding plaudits to Sonia Arrison. But this time we at least have to give her points for originality: in her latest essay opposing net neutrality she advances the indisputably original argument that net neutrality will kill you.
Well, alright: that's a bit hyperbolic. But she does think that net neutrality legislation could lead to clogged networks that make pervasive health-monitoring applications unsafe, or at least untenable:
Technology like RFID tags connected with wireless networks can help create an "always on" health monitoring system, thereby transitioning society away from a "mainframe" medical model and redirecting it toward a smaller, more personalized, PC-type model. This is a great idea, yet the unspoken truth is that this type of communication requires healthy, innovative networks. That raises a key question about Net neutrality, an issue spun and respun by many.
It's a neat trick, presuming that "non-neutral" and "healthy" are synonymous. But leaving aside that sleight of hand, Ms. Arrison's position ignores an area where regulated networks have historically excelled: providing a minimal but guaranteed level of service. The telephone system's better-than-five-nines level of reliability emerged while Ma Bell was at her most closed and monolithic. The ubiquity of the E911 system is the product of a federal mandate. And the highly-regulated public broadcast spectrum rarely sees dead air.
The best arguments that net neutrality opponents have advanced concern the future of the network, not its present state. They maintain that treating a packet differently based on its business pedigree rather than its functional characteristics will ensure a competitive marketplace that provides new network services -- more bandwidth and lower latency -- and keeps prices low. Whether or not you agree with this conclusion, these posited advantages are exactly what low-bandwidth, latency-insensitive health monitoring systems don't need.
That isn't true of telerobotic surgery, of course, and that application is the other healthcare case that Arrison considers. And although she inexplicably implies that using the public network for it would be anything other than lunacy, she at least acknowledges that dedicated links can and likely would be used by hospitals for this sort of work. But then Arrison bizarrely notes that net neutrality legislation could cripple these privately-owned networks, too.
In 2001, professor Jacques Marescaux, M.D. and his team performed the first clinical robot-assisted remote telepresence surgery, operating on the gallbladder of a patient in Strasbourg, France -- 4,000 miles away from their location in New York. What this type of procedure means to remote patients is life-changing, yet such an operation requires a stable and well-managed network, free from the binding hands of politics. Even if the doctors are using a dedicated network, it is still affected by whatever rules bureaucrats place on network operators as a whole.
I suppose this disastrous outcome is a possibility -- but only in the sense that net neutrality legislation would also be a bad idea if it mandated that ISPs only allow traffic related to Facebook gifts and chain emails, or that cablemodem speeds not exceed 56k. It's hard to imagine why legislators would do anything so daft. "Strengthening the case of anti-neutrality activists" is about the only reason I can come up with.
To be sure, there are real arguments to be made about the future of our networks and the appropriate role of the government, if any, in managing them. But net neutrality is not going to make you sicker.
9 Comments
Posted on Techdirt - 8 May 2008 @ 6:49pm
The GNU General Public License heads to court again today, as Skype attempts to defend its distribution of Linux-enabled SMC hardware handsets that appear to be in violation of the operating system's open source license. It's easy to guess why Skype is fighting the suit, which was brought by GPL activists: the company relies on a proprietary protocol, and releasing the code could give competitors an advantage. You can't blame them for trying. Although in the past few years the GPL has made important strides in establishing its legal enforceability, it's still conceivable that a court could find something wrong with its unusual, viral nature.
Few think that this will be the court case that makes or breaks the GPL. Skype's already lost early rounds of this fight, and the claims it's now making seem so broad as to imply desperation. Besides, the case is being tried in the German legal system, which to date has proven friendly to the GPL.
But even if the license was invalidated, either in this case or another, there's an argument to be made that the GPL has already served its purpose. Its impact on the world of open source software is undeniable: by ensuring that an open project would remain open, the license encouraged programmers to contribute to projects without fear of their work being coopted by commercial interests. And by making it difficult, if not impossible, for a project derived from a GPLed project to go closed-source, it encouraged many programmers to license their efforts under open terms when they otherwise might not have.
But today, with open source firmly established as a cultural and commercial force, the GPL's relevance may be waning. The transition to the third version of the license left many in the open source community upset and intent on sticking with its earlier incarnations. And an increasing number of very high profile projects, like Mozilla, Apache and OpenOffice.org, have seen fit to create their own licenses or employ the less restrictive LGPL. The raw numbers bear out the idea of a slight decline in the GPL's prominence, too: Wikipedia lists the percentage of GPLed projects on Sourceforge.net and Freshmeat.net, two large open source software repositories, as 68% and 65%, respectively, as of November '03 and January '06. Today, the most recently available numbers show that Sourceforge's share has fallen to 65%, and Freshmeat's to 62%.
This is, of course, a small decline, and the GPL remains the world's most popular open source license by a considerable margin. But it does seem as though there may be a slowly decreasing appetite for the license's militant approach to copyleft ideals. I certainly don't wish Skype well in its probably-quixotic tilt at the GPL, but if they were to somehow get lucky at least they'd be doing so at a point in the open source movement's history when the GPL is decreasingly essential.
21 Comments
Posted on Techdirt - 25 April 2008 @ 11:54am
You might've already heard about the Twitter account @comcastcares. Run by Comcast employee Frank Eliason, its purpose is to find upset customers before they even know they're looking for help. I'd heard of Eliason's project, but had completely forgotten about it when, on Sunday, I found my HD service mysteriously missing and broadcast my frustration to the Twitterverse. Frank's immediate Twittered response was unexpected and reassuring. When the next day's service call proved fruitless, he asked me to email him. Within a few hours I had received phone calls and emails from three different smart and seemingly concerned Comcast employees, and by the evening my problem was solved. I had been prepared to settle in for a weeks-long fight with the cable company. Instead, Frank's quick intervention left me feeling oddly positive about a company that I had long considered to be more or less the embodiment of malevolent, slothful incompetence.
I'm not the only one who's noticed Frank's project, of course. Mike Arrington wrote about a positive encounter with @comcastcares earlier this month. And although Dave Winer remained peeved by his cable internet woes, it's clear that he found @comcastcares helpful and worthwhile.
It certainly is those things. But is it anything more? Arrington is right, of course, when he says that more brands should be using Twitter as a buzz monitoring tool. But we should all keep in mind that the sort of concierge-style customer support offered by @comcastcares is unlikely to ever scale beyond the size of a PR exercise.
In this case Twitter's chief virtue is its userbase: a collection of highly-wired early adopters whose online complaints about cable provider malfeasance frequently find their way into press accounts and Google results associated with the company. As handy a notification system as Twitter is, it's not as if it offers a technological breakthrough that suddenly makes competent customer service possible: there's been nothing preventing Comcast support from answering email, or getting on IM, or even just using a phone system that calls clients back rather than making them sit on the phone until they hang up or are driven mad by the hold music.
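And that's the point: the monitoring half is technically trivial. A toy version of the idea, with an invented keyword list, might look like this:

```python
# Toy "buzz monitor": flag public messages that mention a brand
# alongside a complaint word.  The keyword list is invented for
# illustration; a real deployment would tune it endlessly.
COMPLAINT_WORDS = {"outage", "broken", "down", "terrible", "missing"}

def needs_outreach(message, brand="comcast"):
    words = set(message.lower().split())
    return brand in words and bool(words & COMPLAINT_WORDS)

msgs = [
    "my comcast HD service is mysteriously missing",
    "great weather in DC today",
    "comcast install went fine, thanks",
]
print([needs_outreach(m) for m in msgs])  # [True, False, False]
```

Nothing here requires Twitter; the same filter could run over email or forum posts. What Twitter supplies is the audience, not the technology.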
The reason Comcast and companies like it don't do those things is simply that providing high-quality, personalized support is expensive. Providing high-quality support to influential users is expensive, too, but there are far fewer of them and they make a lot more noise, which makes it a better investment. I'm sure Frank isn't undertaking his project cynically, but it's hard to see how Twitter can change the economics of tech support.
19 Comments | Leave a Comment..
Posted on Techdirt - 15 April 2008 @ 3:35pm
Yesterday's news that Gawker Media will be selling three of its sites caught many by surprise. Particularly shocking was the revelation that Wonkette was among them — it was an early site in the network, and one famous enough to be featured in the newly-reopened Newseum.
But more interesting than the news was the reasoning behind it, which was explained by Gawker chief Nick Denton in an email to Fishbowl NY:
[S]ince the end of last year, we've been expecting a downturn. Scratch that: since the middle of 2006, when we sold off Screenhead, shuttered Sploid and declared we were "hunkering down", we've been waiting for the Internet bubble to burst. No, really, this time. And, even if not, better safe than sorry; and better too early than too late.
Everybody says that the internet is special; that advertising is still moving away from print and TV; and Gawker sites are still growing in traffic by about 90 percent a year, way faster than the web as a whole. But it would be naive to think that we can merely power through an advertising recession. We need to concentrate our energies... on the sites with the greatest potential for audience and advertising... [T]hen, once this recession is done with, and we come up from the bunker to survey the Internet wasteland around us, we can decide on what new territories we want to colonize.
Say what you will about Denton — and many people do — but he's proven himself to be a shrewd businessman. As he notes, it's easy to find wishful thinking when it comes to online advertising's capacity to withstand the recession that most experts say is coming or already occurring. But, as that second link notes, there's no denying that advertising expenditures declined during past economic downturns, or that online ads have fared even worse than other media. So while Denton is just one businessman, it's a safe bet that he's not the only one girding for lean times.
This isn't to say that the organizations buying these Gawker properties are making a mistake. Mike has written before about the need to marry content and promotion in a way that's compelling to an audience. Idolator and Gridskipper in particular seem well-positioned to do just that, as they join newly-consolidated ventures from Buzznet and Curbed, respectively (Wonkette, which exists in a media environment filled with publications that largely subsist on donor largesse, may have a harder time of it).
But whatever the fate of these particular sites, a recession-sparked advertising downturn would clearly be bad news for the web. With so much of the internet economy built on top of ad models — Google's foremost among them — vulnerable startups may do well to follow Denton's lead and hedge their bets now.
4 Comments | Leave a Comment..
Posted on Techdirt - 10 April 2008 @ 11:57am
If you've heard of HuddleChat at all, you already know about its demise. Put together by a few Google engineers in their spare time, the web chat application was used to showcase Google's newly-announced App Engine offering. There was just one problem: it was nearly identical to 37Signals' Campfire, a well-known SaaS web chat application. 37Signals gave some petulant quotes to ReadWriteWeb about the situation, and shortly thereafter Google pulled the app down.
As Om Malik has pointed out, this is all a bit ridiculous. AJAX/Comet chat is a fairly simple feature to implement. If my fellow participants in the Web 2.0 economy are counting on earning their keep via a collective conspiracy to make our jobs look harder than they are, we're all in deep, deep trouble. There's additional potential irony here, too, given that 37Signals has been accused of ripping off others' work to create Campfire in the first place.
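To see just how simple the core of a Comet-style chat app is, consider this sketch of the long-polling pattern underneath it: instead of the client asking "anything new?" on a timer, its request blocks on the server until a message actually arrives. Everything here is in-process, with threads standing in for HTTP requests, to keep the toy self-contained; a real app would hold open actual HTTP connections.

```python
# Minimal long-polling sketch: a "client" request blocks until the room
# has news, rather than polling on a timer. Threads simulate the held-open
# HTTP connections a real Comet app would use.
import threading

class ChatRoom:
    def __init__(self):
        self.messages = []
        self.cond = threading.Condition()

    def post(self, user, text):
        with self.cond:
            self.messages.append((user, text))
            self.cond.notify_all()          # wake every waiting "client"

    def wait_for_new(self, seen_count, timeout=5.0):
        """Block until more than seen_count messages exist, then return the new ones."""
        with self.cond:
            self.cond.wait_for(lambda: len(self.messages) > seen_count, timeout)
            return self.messages[seen_count:]

room = ChatRoom()
results = []

def client():
    # Simulates a browser's held-open request: blocks until a message lands.
    results.extend(room.wait_for_new(0))

t = threading.Thread(target=client)
t.start()
room.post("frank", "how can I help?")
t.join()
print(results)  # [('frank', 'how can I help?')]
```

Wire that up to a message queue and a bit of AJAX on the front end and you have the essentials of HuddleChat or Campfire, which is rather the point.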
But while this incident may bode ill for the long-term prospects of the 37Signals business plan, it's hard to see how it could mean anything for Google. Breathless declarations that "many in the developer community [will] view Google App Engine as a Xerox machine for copycat product developers" are downright laughable. Google's decision to kill HuddleChat makes good PR sense, but it's inconceivable that many cost-conscious, Python-friendly startups would give up on App Engine over this minor blog imbroglio. As in many other respects, Amazon Web Services will likely provide the relevant template for these issues, and so far AWS has wisely avoided getting dragged into policing its users' apps.
Of course there's a lot of speculation that App Engine will include a free offering, and for that reason it may attract more troublesome users than EC2 currently does. But even if Google finds itself obligated to fight more griefers, phishers and spammers than Amazon does, it seems certain that they won't waste their time arbitrating squabbles over who called dibs on which trivial featureset. Sadly, that will remain for the courts to decide.
8 Comments | Leave a Comment..
Posted on Techdirt - 3 April 2008 @ 5:37pm
Comcast's decision to collaborate with Bittorrent, Inc. attracted a predictably huge amount of attention and analysis. But surprisingly little of it has speculated as to what Bittorrent, Inc. is actually going to do for Comcast. When guesses have been ventured, they've frequently suggested that the company will throw its weight around in order to alter the protocol and make it more friendly to Comcast's network. But this is unlikely for exactly the reasons Prof. Felten discusses at that link (though Felten actually argues that altering the protocol is the goal). Instead, I think there are reasons to believe that Bram Cohen's startup will be selling network appliances to Comcast.
Comcast faces two problems: (1) the expense that Bittorrent incurs in infrastructure demands and bandwidth bills, and (2) the public outcry and potential FCC action invited by its initial artless solution to that problem. Announcing the partnership with Bittorrent, Inc.; pledging to increase upload capacity (as it no doubt planned to anyway); and ceasing to forge RST packets all go a long way toward solving the second problem.
But the first problem -- the expense -- remains, and it may prove to be the area where the new partnership has the most to offer. Have a look at the quote that Torrentfreak got from Bittorrent, Inc.'s Ashwin Navin:
We decided to collaborate with Comcast because they agreed to stop using RSTs, increase upload capacity, and evaluate network hardware that accelerates media delivery and file transfers.
Bittorrent, Inc. has primarily been known for acquiring uTorrent and for working to pitch BT as a content distribution system. But it's also announced partnerships with various hardware manufacturers. And while some of these vendors are probably looking for little more than to be able to slap "Bittorrent approved!" stickers on their consumer-grade routers, others clearly have the expertise to make network appliances. This is what Bittorrent, Inc. may be selling to Comcast.
What will these theoretical boxes do? Despite Comcast's announced intention to be protocol-agnostic, it seems most likely that the devices would serve as P2P repeaters, keeping more of a given swarm inside Comcast's systems and thereby minimizing expensive trips across the network boundary. Contrary to all of the online wailing about bandwidth hogs degrading its neighbors' internet service, this expense was always the real issue: it's telling that forged RST packets were only ever sent for Bittorrent connections that extended beyond Comcast's network. Establishing a repeater product would also add nicely to the company's Bittorrent DNA offering.
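The repeater's core trick can be sketched in a few lines: given a swarm's peer list, prefer peers whose addresses fall inside the ISP's own network, so swarm traffic stays off expensive transit links. The prefixes below are illustrative RFC 5737 documentation ranges, not real Comcast address space.

```python
# A sketch of the peer-locality idea: reorder a swarm's peer list so that
# addresses inside the ISP's own prefixes are tried first, keeping traffic
# off the (expensive) network boundary. Prefixes are documentation ranges,
# purely illustrative.
import ipaddress

ISP_PREFIXES = [ipaddress.ip_network("198.51.100.0/24"),
                ipaddress.ip_network("203.0.113.0/24")]

def prioritize_peers(peers):
    """Return peers reordered so on-network addresses come first."""
    def on_net(addr):
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in ISP_PREFIXES)
    # sorted() is stable, so relative order within each group is preserved
    return sorted(peers, key=lambda p: not on_net(p))

swarm = ["192.0.2.7", "198.51.100.12", "192.0.2.80", "203.0.113.5"]
print(prioritize_peers(swarm))
# ['198.51.100.12', '203.0.113.5', '192.0.2.7', '192.0.2.80']
```

A dedicated appliance could go further, caching popular pieces on-network, but even simple peer selection like this would cut boundary-crossing traffic.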
Whatever the specifics, minimizing network expenses is a reasonable goal that Comcast is certain to continue to pursue. Hopefully Bittorrent Inc. will help them find a way to do so without antagonizing their customers.
6 Comments | Leave a Comment..
Posted on Techdirt - 6 March 2008 @ 12:40am
Regardless of what you think of his ideas about net neutrality, Tim Wu is unequivocally right about one thing: Ziphone is downright magical. Thanks to it I've been in possession of an unlocked iPhone for the past few weeks, and I've been quite pleased with it. The variety of things this little gadget can do is truly amazing.
But for the mobile carriers the sensation it prompts is probably closer to worry. These newfound apps are bandwidth-hungry, and not only for WiFi packets. iPhone Bittorrent is a rather extreme example; EDGE-capable podcatchers are a more plausible threat. But perhaps most striking -- and therefore menacing -- is iRadio, a native application that brings Shoutcast-based streaming audio to the platform. It's easy to imagine a lot of users wanting this functionality and using it heavily, particularly given how often I forget that I've left it playing.
Of course, the percentage of jailbroken handsets isn't likely to ever get particularly high. But that won't be enough to stop these applications. For one thing, most observers think that the SDK -- which is expected to be announced today -- will allow developers access to both the phone's EDGE and WiFi capabilities. For another, streaming audio has already come to the platform without the need for any new code at all. FlyTunes offers a number of radio channels through an iPhone web interface; it works great. Similarly, WFMU offers a specialized domain for listening to the station on your mobile. More of these apps are almost certainly on the way.
It's true that this is just one device, but it's already setting a standard for what consumers expect from a smartphone -- and proving that users and savvy developers will use every bit of bandwidth they can get to. This demand will only grow as Android arrives and the carriers' grip on the mobile platform inevitably loosens. I'm hardly longing for the days of per-kilobyte data charges, but it seems likely that many carriers will soon be faced with choosing between a return to metering or a flood of customers upset by unexpected transfer caps on their allegedly-unlimited data plans.
11 Comments | Leave a Comment..
Posted on Techdirt - 28 February 2008 @ 4:51pm
If you're not an OS X application developer, you can be forgiven for missing last week's debut of MGTwitterEngine. It is, admittedly, a bit arcane: a software component designed for use by developers that allows them to more easily interface with a proprietary messaging network. I wouldn't hold my breath for an Xbox version if I were you. But the software -- and the enthusiastic response it received -- are still worth noting as evidence of notification frameworks' potential for growth.
Many Mac users are familiar with Growl, the ambient notification system that tastefully alerts them of new emails, appointments, completed downloads or any of a huge variety of other system events. There are libraries that make it easy for developers to make their applications display messages through Growl, and many have. But while an ambient notification on your screen is great, an ambient notification that gets routed to whatever display you find most useful is better. So MGTwitterEngine makes it easy for developers to get their apps talking to Twitter (not that it was very hard to begin with -- Twitter's API is quite easy to use). If the idea catches on, soon you'll be able to get a Tweet when your DVD rip completes or as confirmation when your nightly backup succeeds. I wrote about "push" notification technology's resurgence a little while ago; when I did, these were some of the kinds of applications that I had in mind.
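Hooking a script up to Twitter's REST API really is that easy: circa 2008 it amounts to one authenticated HTTP POST. Here's a sketch of the nightly-backup idea, which builds the request without actually sending it; the account name and job details are made up.

```python
# A sketch of using Twitter as a notification bus: a backup script builds
# a POST to the statuses/update endpoint using HTTP basic auth, which is
# how the API worked in 2008. The request is constructed but not sent;
# credentials and message are hypothetical.
import base64
from urllib.parse import urlencode
from urllib.request import Request

def build_tweet_request(username, password, status):
    """Build (but don't send) a POST to Twitter's statuses/update endpoint."""
    body = urlencode({"status": status[:140]}).encode("ascii")  # 140-char limit
    req = Request("http://twitter.com/statuses/update.xml", data=body)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

req = build_tweet_request("backupbot", "hunter2",
                          "Nightly backup finished: 12.3 GB copied, 0 errors")
print(req.get_method(), req.full_url)
# POST http://twitter.com/statuses/update.xml
```

Run that at the end of a cron job and the confirmation lands wherever you read your tweets, which is precisely the routing flexibility that a desktop-bound system like Growl can't offer.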
Of course, I don't mean to simply boost Twitter. As others have pointed out in comments to previous posts, the service can be spotty, and these days it's far from unique. Twitter owes its current success to its pedigree, its developer-friendly API and its SMS capabilities; for those reasons it seems likely to be the first to gain significant traction in the application notification space. But it would be a shame if a proprietary solution won the day. For that reason it's worth keeping an eye on the occasional discussions hosted by Dave Winer about building a noncommercial, federated Twitter alternative (likely on top of XMPP).
Will those musings go anywhere? I have to admit that I have no idea — I'm skeptical, but wary of betting against such an endeavor after witnessing OpenID's come-from-behind success. Either way, it seems certain that more websites, applications and services will soon be sending me notifications through Twitter or something like it -- perhaps even making good on my Techdirt Insight Community colleagues' musings about how Twitter can be useful to companies.
5 Comments | Leave a Comment..
Posted on Techdirt - 21 February 2008 @ 1:55pm
With the possible exception of our allegedly-sexual-predator-filled social networks, it seems safe to say that there's no internet phenomenon that causes quite as much finger-wagging consternation as Wikipedia. Is it credible? Complete? A worthy reference material? Personally, I'm content to leave these questions to the world's concerned librarians.
One thing that's not in question is whether Wikipedia is successful. But why aren't its competitors? Linux News' Mick O'Leary discussed the issue yesterday, specifically examining why Veropedia and Citizendium's efforts to improve upon Wikipedia don't show much promise for attracting a following. O'Leary's diagnosis of the problems with the sites' underlying models is almost beside the point: despite Wikipedia's content being reproducible under a GPL-like license, neither project has decided to use a forked Wikipedia as a starting point. As a result they simply don't have the content to count as a viable alternative.
But, as Bennett Haselton convincingly argued on Slashdot last week, this is a problem that Google's upcoming Knol initiative is unlikely to face. The prospect of ad revenue (and page views supplied by a presumably friendly PageRank) will no doubt prompt a flurry of copy & pasting from Wikipedia. And although Google's Knol announcement is a little vague, their professed light-touch approach to content sounds likely to make Wikipedia-licensed content okay for Knol. Even without an automated forking process, it seems certain that Knol will wind up mirroring large parts of Wikipedia.
But after that initial land-grab will Knol be able to take the ball from Jimmy Wales' leviathan and run with it? It depends what Google is banking on. Veropedia and Citizendium's examples strongly imply that Knol's focus on authorial accountability won't be the deciding factor in its success. A human name and grinning headshot may be more immediately comforting than an inscrutable pseudonym, but they only confer modestly more meaningful vetting opportunities than does Wikipedia's contribution-tracking system. Seriously evaluating an author's background, perspective and credibility will be a time-consuming task no matter what the underlying system is.
But if Knol instead relies on Google's built-in promotional advantages -- aka search result dirty tricks -- it's got a real shot. Wikipedia is proof that a wiki reference tool's value is largely derived from the network effects it enjoys, and currently most of those effects are driven by the site's high placement in search results. What will happen if Google decides to put Knol on an equal footing? Given Wikipedia's liberal licensing scheme and Knol's plan for more aggressively attracting content, the coming wiki showdown may wind up being decided by pure brand power more than anything else.
18 Comments | Leave a Comment..
Posted on Techdirt - 6 February 2008 @ 11:46am
Recently I mentioned that I think services like Twitter are likely to help stitch together the variety of short-messaging options that are becoming available to mobile users. There's another story here, though: the shift in net architecture that's taking place in order to support these new services.
Jive Software has an interesting blog post about one of these technologies: XMPP, aka Jabber (ReadWriteWeb also has a thoughtful post on the matter here). Jabber is an increasingly popular XML protocol that powers instant messaging services like Google Talk -- and many other things. As the Jive post points out, Tivo is beginning to use XMPP to notify its customers' set-top boxes of schedule updates. The old alternative involved each Tivo polling a central server every so often to check for updates, the same way that email and RSS clients do. But that's inefficient, particularly if the polling needs to be done frequently. An IM protocol is ideally suited to delivering messages with little latency and in a lightweight manner (and will know how to traverse users' NAT routers, too). XMPP is particularly extensible and comprehensive, making it useful for many different applications.
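Some back-of-the-envelope arithmetic shows why polling scales so badly. The numbers below are illustrative assumptions, not Tivo's actual figures: a large fleet of set-top boxes polling on a timer generates a steady request load even when nothing has changed, while a push protocol like XMPP only sends traffic when there's actually news.

```python
# Illustrative (assumed) numbers comparing polling load to push load for
# a fleet of devices checking a central server for schedule updates.
devices = 1_000_000
poll_interval_s = 60            # each box polls once a minute

requests_per_second = devices / poll_interval_s
print(f"{requests_per_second:,.0f} req/s just to ask 'anything new?'")

# Under push, steady-state load is proportional to actual updates instead:
updates_per_day = 2             # e.g. a couple of schedule changes daily
push_msgs_per_second = devices * updates_per_day / 86_400
print(f"{push_msgs_per_second:,.1f} msg/s under push for the same fleet")
```

The polling load also can't be reduced without hurting latency: poll less often and updates arrive later, which is exactly the trade-off push protocols dissolve.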
And XMPP isn't the only technique being used to solve these problems. Comet is another emerging technology with a similar purpose, but focused specifically on the web. Instead of repeated polling, a Comet app keeps one very long-running HTTP connection open, along which messages can be sent without waiting for the browser to ask for them. This lets applications like Gmail and Meebo deliver performance that's virtually latency-free.
This trend fits pretty comfortably into the push/pull paradigm of the late 90s, though I'm tempted to avoid the baggage that comes with that label. There's good reason for the reticence: as anyone who's lived through periods of both thin and fat client triumphalism knows, enthusiasm for different technological approaches is cyclical, driven by whatever applications people consider most exciting at the time, with a lot of ill-suited apps shoehorned into the hot paradigm du jour along the way.
But this time the demand for push protocols is more than just a fad. It's also a sign of our increasing technological sophistication. Polling is no longer an option for a lot of reasons, but all of them have to do with computing's ubiquity: there are too many users, too many devices, and no patience for less than immediate performance. Broadcast was fine when technology was just entertainment; pull was fine when technology was just a supplement to our lives. But now it seems that the network is driving our daily activities, and we can't wait around for it to do so.
5 Comments | Leave a Comment..
Posted on Techdirt - 5 February 2008 @ 9:20am
Last week, Slashdot linked to an entertaining analysis of the cost of SMS messages. Noting that many carriers are raising their SMS prices despite increasing demand for the service — demand which should be spurring competition — the author of the post figures out the number of bits in a text message and concludes that transmitting data by SMS is about 15 million times more expensive than doing so over a commodity internet connection.
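The flavor of that calculation is easy to reproduce. The inputs below are my own assumptions, not the original post's exact figures, but with any plausible numbers the ratio lands in the same tens-of-millions ballpark:

```python
# Reproducing the flavor of the Slashdot per-byte comparison with assumed,
# illustrative inputs (per-message price, broadband price and speed are
# guesses, not the original post's numbers).
sms_price = 0.10                     # dollars per message (assumed)
sms_bytes = 140                      # payload of one SMS

broadband_price = 40.0               # dollars per month (assumed)
broadband_bps = 5_000_000            # 5 Mbit/s, assumed fully utilized
seconds_per_month = 30 * 86_400

sms_per_byte = sms_price / sms_bytes
net_per_byte = broadband_price / (broadband_bps / 8 * seconds_per_month)

print(f"SMS is ~{sms_per_byte / net_per_byte:,.0f}x the per-byte price")
```

The exact multiplier swings with the assumptions, but nothing reasonable brings it below seven figures.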
But of course this isn't really a fair comparison. A commodity internet connection doesn't afford the ubiquity that a cellular network does. Comparing the data rate and price of voice traffic is probably more instructive (although the two types of messages are admittedly not transmitted in the same manner across the network). Taking AT&T's overage charge of 45 cents per minute and 13kbps as a plausible bitrate for a GSM call, my calculator says that SMS data is a mere 316% more expensive than voice traffic.
That's still not great, though. And there's no question that SMS prices are going up even further — in the past year or so the Consumerist blog has been full of posts encouraging various carriers' users to escape their contracts thanks to those contracts' newly-increased SMS fees. It's an unfortunate situation: very few consumers select a carrier on the basis of its SMS offerings, and few will leave their carrier over them, either, blunting the consumer response to price increases. Plus, as the technology has gained popularity the mobile operators have lost the need to encourage its adoption through cheap rates. It's not very surprising to see them conclude that the most profitable price point for SMS is higher than the one they had been offering.
Fortunately for the rest of us, this state of affairs doesn't seem likely to last much longer. Although there's little reason to have faith in the mobile market's ability to bend the carriers to consumers' will, new technologies are going to inevitably dry up the SMS bonanza. We're on the verge of the iPhone SDK's release, and Google's Android seems likely to find its way into many cheaper handsets. These and other technologies mean that the average customer will have access to bulk data services on their handset soon if they don't already. And once bulk data can be consumed, so many options for short message communication become available that SMS's specialized role will disappear almost immediately. Between web interfaces, widgets, IM clients and email apps, there are a vast number of ways to send short strings of text. Services like Twitter that offer a variety of input modalities will no doubt help to stitch together this looming surplus of communication options.
Given how few bits are required to transmit those messages (and the generic nature of those bits), there'll be no way for the carriers to keep short message transmission as expensive as it currently is — not without pricing web browsing, email and other mobile data services into oblivion. I wouldn't expect SMS to disappear, but it seems safe to assume it'll start getting cheaper soon.
25 Comments | Leave a Comment..
Posted on Techdirt - 22 January 2008 @ 11:57am
The RIAA's website was hacked early Monday morning — their out-of-date CMS installation proved to be vulnerable to a number of SQL injection attacks — and as you might expect the internet has been having a good laugh about it since.
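For the uninitiated, SQL injection is the class of bug that comes from pasting user input straight into a query string. The schema and inputs below are made up for illustration, but the fix shown (parameterized queries) is the standard one that an up-to-date CMS would use:

```python
# Illustrative SQL injection demo on a made-up schema: a vulnerable
# string-built query versus the parameterized fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, title TEXT, hidden INTEGER)")
conn.execute("INSERT INTO posts VALUES (1, 'Press release', 0), (2, 'Internal memo', 1)")

def search_vulnerable(term):
    # BAD: user input interpolated straight into the SQL text
    sql = f"SELECT title FROM posts WHERE hidden = 0 AND title LIKE '%{term}%'"
    return [row[0] for row in conn.execute(sql)]

def search_safe(term):
    # GOOD: the driver handles quoting via a bound parameter
    sql = "SELECT title FROM posts WHERE hidden = 0 AND title LIKE ?"
    return [row[0] for row in conn.execute(sql, (f"%{term}%",))]

# An attacker-crafted term that escapes the quotes and comments out the
# rest of the query, defeating the hidden = 0 filter:
evil = "' OR 1=1 --"
print(search_vulnerable(evil))  # leaks the hidden row too
print(search_safe(evil))        # returns nothing: the input is just data
```

Real attacks go well beyond reading extra rows, of course; the same hole typically allows rewriting or deleting content, which is apparently what happened here.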
Well, ha ha. I won't pretend to be immune from a little schadenfreude at the expense of this particular blogospheric bête noire. But in a larger way, this incident validates the RIAA's existence. After all, it's not the RIAA's name that appears on lawsuits filed against P2P users: it's those of the record labels. The association serves a number of functions, but not least among them is its role as a consequence-free focal point for consumer backlash — backlash that most recently channeled itself into meaningless vandalism against a brochureware site that no one visits.
Of course, this displacement of blame works in both directions. It's considerably easier for copyfighting triumphalists to claim they're in the right when the enemy is a constituency-free trade group rather than a business that represents (however poorly) the artists whose work is being appropriated. For this reason, I wouldn't take too seriously the rumors of the RIAA's demise. So long as the labels choose to prosecute their war on filesharers, everyone concerned will have a use for a scapegoat.
6 Comments | Leave a Comment..
Posted on Techdirt - 15 January 2008 @ 6:53pm
Rick Falkvinge, the head of Sweden's Piratpartiet, has just given a new interview, and it's worth a read. As you might expect from the leader of a pro-piracy political party, he's rather bullish on the future of filesharing:
[A]nonymous encrypted P2P is just a few years off (and encrypted BitTorrent is already becoming ubiquitous). More interestingly, our cellphones are increasing in capacity dramatically. When P2P debuted with Napster in 2000, the average hard drive was the same size as my cell phone memory is today. Using technology already available, BlueTooth 2, I can share content from my cellphone anonymously — say, in a café or so. This will probably just accelerate, with cellphones being more and more capable, holding more and more data, and opening up to customized applications. I'm betting that a P2P app operating on Bluetooth is not far off for the iPhone, for example. Imagine the anonymous sharing that will happen in the background just on the average subway train! The possibilities are very, very encouraging.
File sharing will find new ways — any measure to stop it will be ineffective the instant it is in place.
I can't say that I agree with everything Falkvinge says here. Although it's true that Bittorrent encryption is fairly widespread, the technique is employed to avoid ISP throttling, not as a useful means of protecting filesharers' identities. And anyone who's paid any attention to Bluetooth's miserable security record — or who has just been frustrated when trying to get two devices to pair — can be forgiven for laughing wryly at the idea of the protocol evolving into something suitable for ad-hoc high-speed filesharing.
Falkvinge's optimism about anonymous P2P is perhaps the most interesting part of his filesharing triumphalism. In truth, it's a considerably harder problem than he implies: the internet is simply not designed for two-way communication with a truly unknown party. Sure, black hats can spoof IP addresses — but that's a technique that's only useful for a one-way communiqué, such as when flooding a target with junk packets in a denial of service attack. If you want a response you either need to reveal your identity or relay the traffic through a third party who can be counted on to keep everyone's identities secret.
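The relay approach can be layered so that no single relay learns both endpoints: the sender wraps the message in one layer of encryption per relay, and each relay peels off only its own layer. This is the "onion" structure Tor is named for. The sketch below is a toy: XOR with per-relay keys stands in for real cryptography, and is emphatically not secure, just structurally illustrative.

```python
# Toy onion-routing sketch: one "encryption" layer per relay, peeled in
# path order. XOR is a stand-in for real crypto and is NOT secure; the
# relay names and keys are invented for illustration.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relays = [("relay-A", b"key-A"), ("relay-B", b"key-B"), ("relay-C", b"key-C")]

def wrap(message: bytes) -> bytes:
    # Encrypt for the last relay first, so layers peel off in path order.
    for _, key in reversed(relays):
        message = xor(message, key)
    return message

def traverse(onion: bytes) -> bytes:
    # Each relay strips exactly one layer and forwards the remainder.
    for name, key in relays:
        onion = xor(onion, key)
    return onion  # what the final destination receives

packet = wrap(b"hello from nobody in particular")
assert packet != b"hello from nobody in particular"   # opaque in transit
print(traverse(packet))
```

In a real onion network each relay also learns only the next hop's address from its layer, which is what keeps any single node from linking sender to recipient.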
This sort of relay system has been successfully employed by the Relakks proxy service, as well as by the Freenet and Tor projects, the latter two of which also add encryption to limit the relay nodes' complicity. But if Falkvinge is counting on the lack of prosecutions against these projects as evidence of the technique's legal unassailability, he's dreaming. Given that both Freenet and Tor are widely rumored to be havens for child pornographers — and the understandable (if occasionally misguided) zeal with which such crimes are prosecuted — it seems like only a matter of time before someone operating a Tor node is arrested for facilitating illegal activity (the infamous Tor embassy hack has already attracted law enforcement's attention, of course).
But Falkvinge's larger point seems sound: there's no indication that P2P can be stopped. But this isn't because of some just-around-the-corner bulletproof technology; it's simply a matter of filesharers' overwhelming numbers — numbers that, as Falkvinge implies, may be better measured by the rapidly-expanding count of P2P-capable network interfaces than by the number of humans operating them.
14 Comments | Leave a Comment..
More posts from Tom Lee >>