Posted on Techdirt - 20 April 2009 @ 10:01am
There was a lot of attention paid last week to a new "cybersecurity" bill that would drastically expand the government's power over the Internet. The two provisions that have probably attracted the most attention are the parts that would allow the president to "declare a cybersecurity emergency" and then seize control of "any compromised Federal government or United States critical infrastructure information system or network." Perhaps even more troubling, the EFF notes a section that states that the government "shall have access to all relevant data concerning (critical infrastructure) networks without regard to any provision of law, regulation, rule, or policy restricting such access." Read literally, this language would seem to give the government the power to override the privacy protections in such laws as the Electronic Communications Privacy Act and the Foreign Intelligence Surveillance Act. Thankfully, Congress can't override the Fourth Amendment by statute, but this language poses a real threat to Fourth Amendment rights.
One clause that I haven't seen get the attention it deserves is the provision that would require a federal license, based on criteria determined by the Secretary of Commerce, to provide cybersecurity services to any federal agency or any "information system or network" the president chooses to designate as "critical infrastructure." It's hard to overstate how bad an idea this is. Cybersecurity is a complex and fast-moving field, and there's no reason to think the Department of Commerce has any special expertise in certifying security professionals. Indeed, security experts tend to be a contrarian bunch, and it seems likely that some of the best cybersecurity professionals will refuse to participate. Banning the government from soliciting security advice from anyone who hasn't jumped through the requisite bureaucratic hoops is therefore deeply counterproductive. Even worse, the proposal leaves the definition of "critical infrastructure" to the president's discretion, potentially allowing him to designate virtually any privately-owned network or server as "critical infrastructure," thereby limiting the freedom of private firms to choose cybersecurity providers.
When thinking about cybersecurity, it's important to keep in mind that an open network like the Internet is never going to be perfectly secure. Providers of genuinely critical infrastructure like power grids and financial networks should avoid connecting those systems to the Internet at all. Moreover, the most significant security threats on the Internet, including botnets and viruses, are already illegal under federal law. If Congress is going to pass cybersecurity legislation this session (and it probably shouldn't), it should focus on providing federal law enforcement officials with the resources to enforce the cybersecurity laws we already have (and getting the government's own house in order), not give the government sweeping and totally unnecessary new powers that are likely to be abused.
Posted on Techdirt - 25 March 2009 @ 9:24pm
Rather than simply wringing his hands about how the decline of the newspaper means that no one will report local news, Reason's Jesse Walker actually gives some thought to where local news coverage might come from in a post-newspaper world. He focuses on people and institutions that can provide hyper-local news: not just about a state or metropolitan area, but of a particular town or even a specific neighborhood. For example, most communities already have one or more local gadflies who regularly attend city council and school board meetings and are often the first to notice funny business by government officials. Traditionally, if a gadfly spotted something he thought the public should know about, he had to convince a reporter to cover his scoop. Now there's no filter: the gadfly can post the story to his blog. That won't necessarily mean that a lot of people will read his post, but it at least gives him the opportunity to be noticed by others online. Jesse notes that local activists, government insiders, and community organizations are also candidates to do much of the work that has traditionally been done by local reporters.
The striking thing about this list is how diverse it is. In the traditional, vertically integrated news business, a single institution oversees the entire news "supply chain," from the reporter attending the local city council meeting to the paper boy who delivers the finished newspaper to readers. The technological and economic constraints of newsprint meant that the whole process had to be done by full-time employees and carefully coordinated by a single, monolithic organization. But the Internet makes possible a much more decentralized model, in which lots of different people, most of them volunteers, participate in the process of gathering and filtering the news. Rather than a handful of professional reporters writing stories and an even smaller number of professional editors deciding which ones get printed, we're moving toward a world that Clay Shirky calls publish, then filter: anyone can write any story they want, and the stories that get the most attention are determined after publication by decentralized, community-driven processes like Digg, del.icio.us, and the blogosphere.
Decentralized news-gathering processes can incorporate small contributions from a huge number of people who aren't primarily in the news business. You don't need to be a professional reporter to write a blog post every couple of weeks about your local city council meeting. Nor do you need to be a professional editor to mark your favorite items in Google Reader. Yet if millions of people each contribute small amounts of time to this kind of decentralized information-gathering, they can collectively do much of the work that used to be done by professional reporters and editors.
Unfortunately, this process is hard to explain to people who don't have extensive experience with the Internet's infrastructure for decentralized information-gathering. Decentralized processes are counter-intuitive. Having a single institution promise to cover "all the news that's fit to print" seems more reliable than having a bunch of random bloggers cover the news in an uncoordinated fashion. The problem is that, in reality, newspapers are neither as comprehensive nor as reliable as they like to pretend. Just as a few dozen professionals at Britannica couldn't produce an encyclopedia that was anywhere near as comprehensive as the amateur-driven Wikipedia, so a few thousand newspaper reporters can't possibly cover the news as thoroughly as millions of Internet-empowered individuals can. This isn't to disparage the reporters and editors, who tend to be smart and dedicated. It's just that they're vastly outnumbered. As Jesse Walker points out, any news gathering strategy that doesn't incorporate the contributions of amateurs is going to be left in the dust by those that do.
Posted on Techdirt - 25 March 2009 @ 12:45pm
Computer science is cool again. At least, that's what the headline at Network World says. Apparently, CS enrollments are up for the first time in six years, driven by "teens' excitement about social media and mobile technologies." I'm a CS grad student, so you might expect me to be excited about this development, but I'm not actually sure it's such a good sign. It's great that there are more people considering careers in the IT industry, but I worry about people going into computer science for the wrong reasons. In my experience, if your brain works a certain way, you'll love programming and will have a successful career in the software industry. If it doesn't, there probably isn't much you can do to change that. So I'd love to see more kids explore CS, but if, after taking a couple of classes, they're not sure if CS is the right major for them, then frankly it probably isn't. If you don't enjoy programming, you're almost certainly not going to be a good programmer, and you're not going to be either successful or happy in that career. The fact that you like Facebook or your iPhone definitely isn't enough reason to be a CS major.
I think it would be better if colleges focused on expanding the computer training that non-CS majors receive. Almost every technical field involves manipulating large datasets, and so the ability to write basic computer programs will be a big productivity boost in a wide variety of fields, from economics to biology. Most people aren't cut out to be full-time programmers, but lots of people could benefit from a 1-semester course that focuses on practical data manipulation skills with a high-level scripting language like Perl or Python.
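To make the idea concrete, here's a hypothetical sketch of the kind of script such a one-semester course might build toward. The dataset and column names are invented for illustration; the point is that a few lines of a high-level language can group and summarize measurements, the sort of chore that comes up constantly in lab work:

```python
import csv
import io
from collections import defaultdict

# A tiny, made-up dataset of the sort a biology or economics student
# might need to summarize: one measurement per sample, grouped by species.
raw = """species,mass_g
mouse,21.3
mouse,19.8
rat,310.0
rat,295.5
"""

# Collect the measurements for each species.
samples = defaultdict(list)
for row in csv.DictReader(io.StringIO(raw)):
    samples[row["species"]].append(float(row["mass_g"]))

# Compute the mean mass per species and print a simple report.
means = {species: sum(vals) / len(vals) for species, vals in samples.items()}
for species in sorted(means):
    print(f"{species}: {means[species]:.2f} g")
```

Nothing here requires a CS degree, which is exactly the point: the practical payoff of basic scripting comes long before the material covered in a full programming curriculum.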
Posted on Techdirt - 23 March 2009 @ 10:10am
Longtime Techdirt readers may remember Alex Halderman, who conducted influential research into the problems created by CD-based DRM during his time as a grad student here at Princeton. He's now a professor at the University of Michigan, and he's working on a new project: seeking a DMCA exemption for security research related to defective DRM schemes that endanger computer security. We've seen in the past that DRM schemes can open up security vulnerabilities in users' computers, and Halderman argues that the public would benefit if security researchers could examine DRM schemes without being threatened with litigation under the DMCA for doing so.
The DMCA gives the Librarian of Congress the power to grant three-year exemptions for DRM circumventions that are perceived to be in the public interest, and one of the exemptions granted in the 2006 triennial review was for CD-based DRM schemes that create security problems. Alex points out in his filing that the most serious security vulnerabilities created by DRM since that rule-making have come not from CD-based DRM but from video game DRM, which has not been adequately studied by security researchers. A ton of prominent security researchers (including Alex's and my mutual advisor, Ed Felten) have endorsed Alex's request, arguing that the threat of DMCA liability hampers their research. We hope the Librarian of Congress is listening. If you live near Palo Alto or Washington, DC, you can sign up to testify about Alex's proposal (or others) by filling out this form.
Posted on Techdirt - 19 March 2009 @ 11:01pm
One of the interesting things about the end of the Seattle Post-Intelligencer's print edition, which Mike noted on Monday, is how much more flexibility the PI will have to adjust to changing economic conditions now that it's an online-only publication. I don't think it's generally appreciated how constraining the newspaper format is. Readers expect a daily paper to be a certain size every day, and to arrive on their doorstep at a certain time every morning. Meeting those requirements involves a ton of infrastructure and personnel: typesetters, printing presses, delivery trucks, paper carriers, and so forth. To meet these infrastructure requirements, a paper has to have a minimum circulation, which in turn requires covering a wide geographical area. All of which means that as a daily paper's circulation falls below a certain threshold, it can lead to a death spiral where cost-cutting leads to lower quality, which leads to circulation declines and more cost-cutting. Of course, some papers manage to survive with much smaller circulations than the PI, but these tend to be either weekly papers (which tend to have a very different business model) or papers serving smaller towns where they have a de facto monopoly on local news.
These economic constraints, in turn, greatly constrain what journalists can do. They have a strict deadline every evening, and there are strict limits on the word count they can publish. Because newspapers have to target a large, general audience with limited space, reporters are often discouraged from covering niche topics where they have the greatest interest or expertise. Moreover, because many newspaper readers rely on the paper as their primary source of news, people expect their newspaper to cover a broad spectrum of topics: national and international news, movie reviews, a business section, a comics page, a sports page, and so forth. Which means that reporters frequently get dispatched to cover topics they don't understand very well and that don't especially interest them. The content they produce on these assignments is certainly valuable, but it's probably not as valuable as the content they'd produce if they were given more freedom to pursue the subjects they were most passionate about.
The web is very different. Servers and bandwidth are practically free compared with printing presses and delivery trucks, so news organizations of virtually any size—from a lone blogger to hundreds of people—can thrive if they can attract an audience. And thanks to aggregation technologies such as RSS and Google News, readers don't expect or even want every news organization to cover every topic. Here at Techdirt, we don't try to cover sports, the weather, foreign affairs, or lots of other topics because we know there are other outlets that can cover those topics better than we could. Instead, we focus on the topics we know the most about—technology and business—and cover them in a way that (we hope) can't be found anywhere else. In the news business, as in any other industry, greater specialization tends to lead to higher quality and productivity.
Moving online will give the PI vastly more flexibility to adapt to changing market conditions and focus on those areas where they can create the most value. The PI says they'll have about 20 people producing content for the new web-based outlet. That's a lot fewer than the print paper employed, but it's enough to produce a lot of valuable content. And now that they're freed of the costs and constraints of newsprint, and the expectation to cover every topic under the sun, it'll be a lot easier to experiment and find a sustainable business model.
Posted on Techdirt - 18 March 2009 @ 7:54pm
The New York Times takes a look at the changing role of foreign correspondents in the Internet age. A generation ago, journalists who covered foreign countries could send reports back home without worrying about how their coverage would be perceived by the natives. This may have allowed more candid reporting, but it also meant coverage was less accurate because reporters never got feedback from the people they were covering. Now all that has changed. On the Internet, Indian readers can read the New York Times as easily as the Times of India. When reporters make mistakes, they get instant feedback from the subjects of their stories.
One question the story doesn't specifically discuss is whether there's a need for foreign correspondents at all in the Internet age. In the 20th century, newspapers needed foreign correspondents because the process of gathering and transmitting news across oceans was expensive and cumbersome. Having a foreign bureau gave a newspaper a competitive advantage because it allowed it to get fresher and more complete international news than its competitors. Now, of course, transmitting information around the world is incredibly cheap and easy. My local newspaper is no longer the only—or even the best—source of information about world events. Those who understand the language can get their news directly from foreign media outlets. And for the rest of us there are a ton of people who translate, filter, and interpret the news coming out of foreign countries for domestic consumption. Given these realities, it's not obvious how much value is added by having American newspapers send reporters to the far-flung corners of the globe.
Of course, there are still tremendous advantages to having people who can explain foreign events and put them in context for American readers. I can read India's newspapers, but I'm not going to pick up on all the nuances of the coverage. But there are lots of ways to provide this kind of context and analysis. For example, there are undoubtedly smart Indian journalists who went to college in the United States and then returned to India. Such journalists are going to possess a much deeper understanding of Indian culture than an American journalist could. Conversely, there may be American expats living in India (perhaps with day jobs other than journalism) who could provide an American perspective on Indian news. Most importantly, there are lots of people here in the United States who can read Indian news sources and then write about developments there from an American perspective. These include Indian immigrants and Americans who have spent time in India.
One of the things people frequently cite as evidence of the dire state of the news industry is the fact that newspapers are closing their foreign bureaus and laying off their foreign correspondents. Maybe this is a sign that journalism, as a profession, is in trouble. But another interpretation is that we've just found more efficient ways to get news about foreign events. American readers will continue to demand coverage of overseas events. But 21st century news organizations are likely to discover that shipping American journalists overseas is not the most efficient way to meet that demand.
Posted on Techdirt - 18 March 2009 @ 3:17am
Last month we covered Microsoft's patent infringement lawsuit against GPS device maker TomTom. As Mike noted, this is a pretty clear example of abusive patent litigation. The patents in question are so broad that it's virtually impossible to innovate in this space without first paying Microsoft for the privilege. Obviously, that prospect doesn't bother Microsoft's top patent lawyer very much, but it should be a serious concern for the rest of us. Since Mike wrote that post, another angle of the case has gotten a lot of attention from tech blogs: whether it's possible for TomTom to settle the lawsuit without running afoul of the GPL, the free software license that covers the Linux code that Microsoft claims infringes at least three of those patents.
A bit of background is helpful here. When the Free Software Foundation drafted version 2 of the GPL, it included a clause saying that if a vendor is forced to place restrictions on downstream redistribution of software covered by the GPL (due to a per-unit patent licensing agreement, for example), that vendor loses the right to distribute the software at all. This clause acts as a kind of mutual defense pact, because it prevents any firm in the free software community from making a separate peace with patent holders. A firm's only options are to either fight to invalidate the patent or stop using the software altogether. This clause of the GPL actually strengthens the hands of free software firms in their negotiations with patent holders. A company like Red Hat can credibly refuse to license patents by saying "we'd love to license your patent, but the GPL won't let us."
This creates a problem for a company like Microsoft that wants to extract licensing revenues from firms distributing GPLed software. Ordinarily, a patent holder sues in the hope that it will be able to get a quick settlement and a nice revenue stream from patent royalties. But the vendor of GPLed software can't settle. And if the patent holder wins the lawsuit, the defendant will be forced to stop distributing the software, depriving the patent holder of an ongoing revenue stream. Either way, the trial will generate a ton of bad publicity for the patent holder.
In a comment at the "Open..." blog, prominent Samba developer Jeremy Allison charged that Microsoft has tried to sidestep this provision by basically forcing companies to sign patent licensing agreements that violate the GPL under the cover of non-disclosure agreements. Allison argues that TomTom got sued because it was the first company to refuse to participate in this fraud. It's important to note here that Allison can't prove the existence of these agreements, so we should take his claims with a grain of salt. But if these charges are ever conclusively proven, they would have explosive consequences. The Free Software Foundation would likely insist that such firms either cancel their agreements with Microsoft (likely triggering a patent lawsuit) or stop distributing GPLed software altogether (which could be a death sentence for a firm that relies on such software).
Regardless, TomTom is now stuck between a rock and a hard place. The GPL has left the firm with only two options. It must either fight Microsoft's patents to the death (literally) or it must settle with Microsoft and immediately stop distributing GPLed software. Given how deeply-entwined GPLed software apparently is in TomTom's products, that second option may be no option at all. So expect a long and bloody fight in the courts.
One likely result will be to create a serious PR problem for Microsoft. Some people might remember the infamous GIF patent wars of the 1990s. When Unisys tried to collect patent royalties on the GIF format, the Internet community responded by switching in droves to the PNG format. In the process, Unisys earned a ton of bad press and a terrible reputation among computer geeks who care about software freedom. Microsoft risks a similar fate if it pursues this litigation campaign against Linux. And given that Microsoft is in a business where innovation is king, it's probably not a good idea to become a pariah in a community that includes many of the world's most talented software engineers.
Posted on Techdirt - 17 March 2009 @ 7:59pm
As network infrastructure has become an increasingly important part of our economy, there's been growing concern about the problems of cybersecurity. So far, the key debate is over whether the government should be involved in helping the private sector secure its networks or should focus on government networks. But another important question is which part of the government should be in charge of cyber-security. We're in the midst of a bureaucratic turf war between the Department of Homeland Security and the National Security Agency over who will be in charge of government cybersecurity policy. The NSA's head, Keith Alexander, is pushing the theory that cyber-security is a "national security issue," and that therefore an intelligence agency like the NSA ought to be in charge of it.
The problem with this is that the NSA has a peculiar definition of cyber-security. When most of us talk about cyber-security, we mean securing our communications against intrusion by third parties, including the government. Yet the NSA has made no secret of its belief that "cyber security" means being able to spy on people more easily. Moreover, as Amit Yoran, former head of the Department of Homeland Security's National Cyber Security Division, points out, the NSA's penchant for secrecy, and concomitant lack of transparency, will be counterproductive in the effort to secure ordinary commercial networks. Therefore, the fight between DHS and the NSA is more than just a bureaucratic squabble. There's plenty to criticize about the Department of Homeland Security, and reasons to doubt whether they should be helping to secure private sector networks at all. But at least DHS is relatively transparent, and (as far as we know) doesn't engage in the kind of indiscriminate, warrantless wiretapping for which the NSA has become notorious.
Posted on Techdirt - 17 March 2009 @ 5:13pm
The Utah legislature has seemed strangely obsessed with technology issues this session. Perhaps spurred on by a questionable BYU study on the problems created by video games, the Utah legislature has passed a bill promoted by disgraced lawyer and anti-videogame activist Jack Thompson to regulate the sale of video games to minors. The good news, as Ars Technica reports, is that the law was largely defanged during the legislative process. Under the final version of the bill, retailers would not be liable for selling M-rated video games to minors if they'd put their employees through a training program. They'd also not be liable if the children had gotten the games by lying about their age. With that said, there's still plenty to object to here. For starters, the legislation punishes retailers for failing to follow their published policy on video game sales. That means that a retailer that has a strong policy against selling to minors will face more liability if it breaks that policy than a retailer that doesn't have such a policy. This could have the perverse effect of discouraging retailers from adopting strong policies against selling violent video games to children. It will also force a lot of retailers to put their employees through "training" programs that may be completely unnecessary. But probably the most serious problem with this legislation is that it may be an opening wedge for future regulation of video game sales. Expect the same interest groups that pushed this legislation through to come back in future years with bills that would close the "loopholes" in this year's legislation.
Posted on Techdirt - 17 March 2009 @ 8:27am
Computer scientist Steven Bellovin notes a troubling trend: companies that republish public domain works are increasingly trying to use contract law to place restrictions on their use. For example, Google is apparently in the habit of "requesting" that people only use the out-of-copyright works they've scanned for "personal, non-commercial purposes." Even more troubling, works like this one that were produced by the US federal government—and have therefore never been subject to copyright—come with copyright-like notices stating that any use other than "individual research" requires a license. Fundamentally, this is problematic because copyright law is supposed to be a bargain between authors and the general public: we give authors a limited, temporary monopoly over their works, in exchange for those works being created. But in this case, the restrictions are being imposed by parties—Google and Congressional Research Services, Inc., respectively—who had nothing to do with the creation of the works. The latter case is particularly outrageous because taxpayers already paid for the works once, through our tax dollars.
With that said, there are a couple of reasons to think that things aren't as bad as Bellovin suggests. It's hardly unusual for companies to claim rights they don't have in creative works—that doesn't mean those claims will stand up in court. The fact that Google "requests" that users limit how works are used doesn't mean they can stop people who ignore their requests. And especially in the case of government works, there's a strong case to be made that copyright law's explicit exemption of government works from legal restrictions should trump any rights that private companies might claim to limit the dissemination of such works. Moreover, a few courts have recognized the concept of copyright misuse, the attempt to extend a copyright holder's rights beyond those that are specified in the law. So it's not at all clear that these purported contractual restrictions would actually be binding. Companies might say that you need permission to reproduce the works, but they're unlikely to try to enforce those requirements in court. Nevertheless, government officials and librarians should do a better job of policing these kinds of spurious claims. As Bellovin says, government agencies that hire firms to manage collections of public domain works should ensure that the private firms are contractually obligated not to place additional restrictions on downstream uses of those works.
Posted on Techdirt - 16 March 2009 @ 4:32pm
A couple of years back we noted that the Utah legislature was considering legislation that would have banned companies from buying search ads related to their competitors' brand names. EFF and others said the law was likely unconstitutional, but the legislature passed it anyway. The legislation was such a disaster that last year the Utah legislature repealed it. Incredibly, despite all the negative publicity the 2007 bill received, and despite assurances from legislators that they'd learned their lesson, the backers of the legislation haven't given up. This year they introduced yet another bill restricting keyword advertising that passed the Utah House but died in the Utah Senate a few days ago. Given the tenacity of the bill's sponsors—1-800-Contacts is reportedly the leading backer of the proposal—the proposal may very well come back in future years.
Proposals to regulate keyword advertising have come in for a lot of criticism, but one person who's willing to defend the Utah proposal is Harvard's Ben Edelman. He argues that the Utah bill is necessary to avoid consumer confusion. He suggests that when consumers search for a trademarked term (say, "Hertz"), they're expecting to see search results related to that company, not to the company's competitors. He argues that if a consumer really wanted results from a variety of different companies, she would have chosen a generic term like "car rental" rather than a specific brand name. But James Grimmelmann points out a couple of problems with this reasoning. First, it shows an awfully low opinion of the intelligence of the average consumer. More importantly, there are circumstances where a consumer wants to see ads for a firm's competitors. For example, a consumer may be considering buying a particular company's products, but might want to check out that company's competitors before making her decision. Searching for that company's name is a quick and easy way to find out which other companies consider themselves to be in the same market. In contrast, the customer may not know which generic terms precisely describe that company's market. In Grimmelmann's example, it might be easier to ask for all companies in the same market as "Godiva" or "Hershey's", rather than having to describe precisely which segment of the chocolate market we're interested in.
Posted on Techdirt - 16 March 2009 @ 2:04pm
Back in January, we noted that despite Steve Jobs's posturing on the music DRM front, Apple remains a big supporter and user of DRM and DRM-like schemes throughout their product lines. Over at the EFF blog, Fred von Lohmann suggests another potential example. The new iPod Shuffle has no buttons; the controls are on the included headphones. And if these folks are right (and there seem to be some doubts), the new shuffles won't work with the remote controls of any existing third-party headphones because the iPod looks for a special "authentication chip" that so far is only embedded in the headphones Apple bundles with the shuffle. This would be irritating to me personally because I hate earbuds and so if I bought a shuffle the first thing I'd want to do is swap out the Apple-supplied earbuds with third-party headphones.
Fred suggests that the purpose of this "authentication chip" is to trigger liability under the DMCA if anyone tries to reverse-engineer the chip. That's possible, but it's far from clear that that's what's going on. We don't know exactly what the chip does, but it seems unlikely that they'd embed enough computing power in the chip to do real crypto. And if there's no crypto, it becomes harder—although certainly not impossible—to invoke the DMCA's anti-circumvention provisions. Unfortunately, there's so little case law on the DMCA's anti-circumvention rules that we don't really know how it would apply in a case like this. And that uncertainty may be all Apple needs to discourage third parties from building unauthorized accessories. Update: It looks like we were right to be skeptical about the DRM angle. Fred updates to point to a Boing Boing report that there's no authentication in the new headphones. Which means that a DMCA claim probably wouldn't apply to third-party headphone makers.
Posted on Techdirt - 19 January 2009 @ 10:55am
If I'm right that, as I argued on Friday, there's a cultural gap between the patent bar and the technology industry on the subject of software patents, an interesting question is how we got software patents in the first place. After all, it wasn't that long ago that software was widely believed to be unpatentable, and major technology firms were hardly clamoring for patent protection. Peter Menell, a Berkeley law professor who spoke at last Wednesday's Brookings patent conference, had an interesting perspective on how this came about. He argues that the impetus for software patents came from patent attorneys within major software firms who spread the "gospel of patenting" within their companies. Not surprisingly, CEOs tend to delegate patent issues to their patent lawyers, and of course patent lawyers will tend to have more pro-patent views than their bosses. And so despite the fact that few technology executives were enthusiastic about patenting, the patent lawyers who worked for them pushed their firms in that direction. And of course, once some software firms started acquiring significant numbers of patents, it sparked the arms race that we've talked about here at Techdirt.
To be clear, I don't think that firms' patent attorneys were deliberately flouting their bosses' orders or working against their companies' interests. Rather, I think that patent lawyers genuinely believed (and still believe) that software patents would be good for their own firms and the broader software industry. This is similar to a phenomenon I noticed when I was researching eminent domain abuse: even lawyers who made their living defending property owners against abuses of the eminent domain system didn't think it should be illegal to take someone's property for private profit. Rather, they tended to think that the solution was to add additional layers of review to filter out the worst abuses. Obviously there's an element of self-interest here. Scaling back the number of eminent domain cases or software patents means fewer jobs for eminent domain or patent lawyers, respectively. But I think the far more important explanation is that when you have a hammer, everything looks like a nail. When you're an expert on the minutiae of a particular body of law, you're naturally going to think that the solution to any given problem is to fine-tune that body of law. Such experts tend not to think about reforms that would involve getting the lawyers out of the picture altogether.
I think the good news (if you can call it that) is that the patent system is getting so dysfunctional that it's starting to generate interest from corporate CEOs, most of whom are not patent attorneys. A Hill staffer who spoke on the same panel I did mentioned that he's seen an increasing trickle of tech companies coming to Capitol Hill to lobby for patent reform. As it becomes more obvious that software patents do little to promote innovation and are mostly a wealth transfer from the software industry to the patent bar, I think we'll see more tech industry CEOs paying attention to the patent problem. And most of them will be less committed to software patents than their patent lawyers are.
Posted on Techdirt - 16 January 2009 @ 6:30pm
One of the persistent themes I noticed at Wednesday's patent conference at the Brookings Institution is that most of the lawyers seemed to assume that if the legal system ultimately reaches the right conclusion—invalidating a bad patent, say—then the patent system is working well. Some panelists suggested that the Bilski decision, which struck down one particularly egregious "business method" patent, shows that there's not really a problem, because the courts are recognizing the problems with bad patents and correcting them. They seemed not to fully appreciate how slow and expensive the legal system is. One only has to think back to the great BlackBerry showdown to see that having the legal system eventually invalidate a bad patent may not be good enough. Even if the law is on the side of an accused patent infringer, the time, expense, and uncertainty of litigation can kill the firm before its rights can be vindicated in court.
I think the right way to think about patent reform is not whether the courts eventually reach the right result, but whether the system is predictable enough that you can tell in advance what the law requires, without hiring a patent lawyer. After all, this is how well-designed property rights systems work. I didn't need to hire a property lawyer to tell me who owns the apartment I'm living in—the rules of real property are predictable enough that I could figure it out on my own. The vast majority of property transactions are the same way—lawyers only get involved in exceptional cases that involve large sums of money or tricky legal issues. By the same token, if we're going to have patents on software (or in any other industry), they should be few enough and clear enough that a smart entrepreneur can figure out in advance, without the help of a lawyer, which patents he needs to license. If our current patent system isn't living up to that standard, the solution isn't to come up with ever-more-complex legal doctrines trying to separate the "good" vague patents from the bad ones. Rather, the solution is to restrict patenting to those fields where it's possible to make things clear and predictable. If that's not possible in some industry (and I suspect it's not in software), then that's a sign that patents aren't an effective way to promote innovation in that industry.
Posted on Techdirt - 16 January 2009 @ 10:10am
On Wednesday I attended the Brookings Institution's conference on "The Limits of Abstract Patents in an Intangible Economy." The conference was organized by software patent skeptics, so that perspective was well represented. But I was struck by the dramatic differences between the views of lawyers on the one hand (who made up the majority of the panelists and audience members) and the handful of technologists on the other. The first panel focused on the economics of abstract patents, and included a mix of technologists, economists, and lawyers. All of the panelists spoke about the serious problems being caused by patents in the software industry and argued for dramatic restrictions on software and business method patents. The tone of the second panel, which focused on legal issues, was rather different. All of the panelists were lawyers, and although they acknowledged that the patent system had problems, and that these problems are especially serious in the software industry, their focus was on abstruse details of patent law. None of them supported explicit restrictions on software patents, and few seemed to feel any urgency about the need to rein in patenting in the software industry. I think this contrast is reflected in the broader software patent debate—patent attorneys and law professors who write about patent law are overwhelmingly in favor of patents on software, and prefer to argue about how to fine-tune patent law to get fewer "bad" software patents without invalidating the "good" ones. In contrast, a lot of computer programmers simply wish the patent system would leave them alone.
There are a couple of ways you can view this split. On the one hand, it's possible that the economists and technologists on the first panel are naive and don't understand the complexities of patent law. Maybe broad restrictions on patenting of software or other abstract inventions would have unintended consequences in other parts of patent law that only one schooled in the minutiae of patent law can understand. On the other hand, the perspectives found on the second panel could be a reflection of the solipsism of the patent bar. Patent attorneys seem to have an unshakable faith that there's no sector of the economy that couldn't be improved by more patenting. I suspect that one reason for these different attitudes has to do with the role the two groups play in the software industry. Patent attorneys only interact with those parts of the software industry that participate in the patent system. When software engineers write useful software without seeking patents on it—a vastly more common occurrence—patent attorneys will, by definition, not be there. Therefore, patent lawyers are inevitably going to overestimate the importance of patents to the software industry. In contrast, the average programmer deals with the patent system infrequently. For a lot of entrepreneurs, patents are basically a nuisance—they have to get some for defensive purposes, but they're not an important part of their business plans. For employees at larger firms, patents are basically irrelevant to their day-to-day jobs. No programmer starts a programming project by consulting the patent database.
As a consequence, the two communities have radically different views of how well the patent system is working. The lawyers certainly acknowledge that there's a problem, but they seem to find it incomprehensible that there could be a major American industry that's better off without patent protections. Techies understand that patents are not an important part of the software industry, and so they're much more likely to say that their industry would be better off without them.
Posted on Techdirt - 29 December 2008 @ 7:02pm
Tyler Cowen points us to an interesting post on the future of the classical music market. Bill Stensrud predicts that the major record labels will soon exit the classical music business, leaving behind Naxos, a label that saves money by paying musicians very little. Stensrud urges classical musicians to give up on the idea of making money by selling recorded music, and instead think of recorded music as a promotional tool. He paints a pretty stark picture of the future of the music business, predicting that "live recordings will completely replace studio recordings."
It certainly seems like a reasonable prediction that we'll see growth in live performance relative to studio performance. But Stensrud's overall prediction seems unduly grim. There's plenty of evidence that the Internet has benefited classical music by introducing more people to the genre. And it seems pretty implausible that studio performances will disappear completely. If there's a demand for studio recordings, someone is going to figure out how to meet that demand profitably, whether that's through an ad-supported streaming service or as a way to promote the sale of products like musical instruments. Also, we should remember that most major orchestras depend on charitable contributions, so if it's really the case that it will be impossible to make studio recordings profitably (which seems unlikely), the same wealthy patrons who subsidize orchestras now are likely to step up to help pay for the costs of some studio recordings. Perhaps we'll see fewer studio recordings than we did in the 20th century, but studio recordings aren't going to disappear.
Still, Stensrud's fundamental point seems sound: in the 20th century, many classical musicians supported themselves by selling copies of recorded music. In the future, that's probably the wrong approach. Instead, musicians should give their music away in order to increase sales of other products and services, such as music lessons, live performances, and (for the most successful) product endorsements.
Posted on Techdirt - 24 December 2008 @ 2:00am
Over at his personal blog, occasional Techdirt contributor Tom Lee weighs in on an interesting discussion going on around the blogosphere about who, if anyone, is to blame for the precipitous decline of the newspaper business. My sympathies are with the pessimists: in principle, there are a lot of things newspapers could have done to better manage the transition to Internet-based news, but as a practical matter it's really difficult for large organizations to adapt to disruptive technologies. Tom makes some sensible points about the newspaper business, but then makes a claim about the broader advertising industry that I don't agree with. Tom suggests that the online advertising market may be fundamentally doomed because now that advertisers can more precisely measure the effects of advertising, they're discovering that it "just doesn't work very well."
I think there are a couple of problems with this. In the first place, advertising has never "worked very well" in the sense of any given ad impression reliably getting the viewer to run out and purchase the product being advertised. In the traditional advertising business, companies didn't know which specific ad would work on which specific viewer, so they adopted a scattershot approach where they exposed millions of customers to dozens of ads and hoped a few of them would have the desired effect. But despite our ignorance about precisely which ads "work" on which viewers, it's pretty clear that advertising "worked" in the aggregate. McDonald's and Coca-Cola clearly get some value from the millions of dollars they spend on TV and print ads.
On the Internet, the scattershot approach is no longer necessary. Digital media allows advertisers to be a lot more specific about the users they want to target and to collect a lot more data about their ads' effectiveness. Tom suggests that this is a bad thing because once companies discover their ads aren't working well, they'll stop spending money on them. But the flip side is that advertisers can measure when a particular ad is working, and that ad inventory becomes correspondingly more valuable. Better still, improved measurement means that the average ad should improve over time. Ads that don't work can get dropped more quickly, and the ones that perform well can be put on heavy rotation, emulated by other advertisers, and so forth. That can only be good for ad revenues.
Tom also suggests that advertising is doomed because the Internet makes advertising a lot easier to avoid. But people's hatred for advertising isn't inevitable. It's a consequence of the limitations of 20th century media technologies that required advertisers to adopt "scattershot" approaches to advertising. There was no way to target car ads at the 5 percent of the population that's in the market for a car at any given time, so the other 95 percent of us had to sit through endless car commercials. But online there are lots of ways to more narrowly target ads at people who are likely to be interested in them. In the long run, as we've said before, advertisers are going to have to realize that content is advertising. If you can make ads relevant, interesting, or entertaining, people aren't going to try as hard to avoid them. Search engines do this by only showing ads relevant to the particular keyword a user entered. Other advertisers have figured out that if they make their commercials fun to watch, people will be more willing to watch them. Of course, it's hard to predict whether the total amount of advertising revenue will go up or down over the next decade. But as long as people buy stuff, companies will be willing to spend significant amounts of money to influence their decisions.
Posted on Techdirt - 22 December 2008 @ 11:27am
Back in 2005, Mike coined the term "the Streisand Effect" to describe the situation where an attempt to suppress information generates increased publicity for that information. Our latest example comes via my friend Matthew Yglesias, who on Friday had some choice words for a center-left organization called Third Way. Matt blogs on a site run by the Center for American Progress (CAP), though with full editorial control over the posts on his particular blog on the site. Despite calling Third Way out, the post initially got little attention.
On Sunday, however, readers of Matt's blog were treated to this creepy post in which Matt's boss, Jennifer Palmieri, noted that his posts don't reflect the opinions of the Center for American Progress, and then insisted that CAP has "a great deal of respect for [Third Way's] critical thinking and excellent work product." This is a great illustration of the differences between traditional and web-based media. In a traditional paper publication, everything is subject to editorial control, and in all likelihood Matt would have been asked to tone down his criticism of Third Way before his writing hit the presses. But Matt's blog gets posted unfiltered, complete with curse words and spelling errors. The immediacy of Matt's blog is a big part of what keeps readers coming back to the site. And it's also what made Palmieri's post so damaging.
Although Matt's blog is hosted on CAP's site, it's Matt's blog, and readers expect to get Matt's unfiltered opinions. Having Matt's boss hijack his blog in order to publicly reprimand him is really jarring. And then there's the Streisand Effect. Everyone would have forgotten about Matt's original post within a few days had someone at Third Way not called Matt's boss and demanded an apology. Instead, the entire liberal blogosphere is talking about Matt's post... and about Third Way's thin skin. The backlash is going to do far more damage to Third Way's reputation than Matt's original post could have.
Posted on Techdirt - 10 November 2008 @ 2:04pm
Over at News.com, Declan McCullagh writes that Barack Obama's election as the next president of the United States has bolstered the hopes of those seeking to impose network neutrality regulations on the Internet. While Obama's key advisors have been cagey about precisely what the new administration's stance on the issue will be, it's a safe bet that we'll be hearing a lot about the issue in the coming months. This seems like a good time for a long-overdue conclusion to my ongoing series on network neutrality regulation.
One of the things that has been missing from the network neutrality debate is a sense of how it fits into the broader history of government regulation of network industries. It's easy to imagine that the Internet is so new and different that historical comparisons just aren't relevant. But as we've seen with copyright and patent debates, we can learn a lot from historical experiences that may not seem immediately applicable.
I think this is equally true in the network neutrality debate. While the specifics of network neutrality are unlike anything that has come before, the general principles involved—non-discrimination, competition, monopoly power, and so forth—have actually been with us for more than a century. Indeed, today's network neutrality debate bears a striking resemblance to the debate that led to the very first American regulatory agency: the Interstate Commerce Commission, which was created to regulate the railroad industry.
The railroad industry was the high-tech industry of its day, and it had many of the same kinds of transformative effects on the 19th-century American economy that the Internet is having today. As with today's Internet, some parts of the railroad market were highly competitive, while other markets were served by only one or two firms. And people had concerns about the behavior of the largest railroad firms that echoed those that people have about large Internet providers today: that they restricted competition and discriminated among customers.
In 1887, Congress passed legislation (you can read an abridged version here) that is strikingly similar to the proposed network neutrality legislation that we're debating today. The Interstate Commerce Act declared it illegal to charge different prices to different customers for "the transportation of a like kind of traffic under substantially similar circumstances and conditions." It also said that railroads may not "make or give any undue or unreasonable preference or advantage to any particular person, company, firm, corporation, or locality, or any particular description of traffic." Compare that to the leading network neutrality proposal during the last Congress, which would have required network providers to deliver content on a "reasonable and nondiscriminatory" basis without imposing "a charge on the basis of the type of content, applications, or services made available."
Unfortunately, the story of the Interstate Commerce Commission does not have a happy ending. Grover Cleveland appointed a railroad ally named Thomas M. Cooley as the first chairman of the ICC. The ICC was widely regarded as toothless for its first couple of decades, largely rubber-stamping railroad industry decisions. Things got even worse after the turn of the century, when the ICC began actively discouraging competition in the railroad industry. The ICC had the power to decide when new firms were allowed to enter the railroad industry, and by the 1920s, the commission was actively working to discourage competition and push up railroad rates. In the 1930s, the ICC gained authority over the infant trucking industry, and used that authority to slow the growth of the trucking industry to protect the railroads from competition. By 1970, things had gotten so bad that a Ralph Nader report described the ICC as "predominantly a forum at which transportation interests divide up the national transportation market."
What went wrong? The story is too long and complicated to fully describe in a blog post, but I think there are two key lessons. First, the authors of the Interstate Commerce Act dramatically underestimated the complexity of the railroad industry and the difficulty of government oversight. One of the reasons the ICC was relatively toothless in its early years is that it was completely overwhelmed with paperwork, as dozens of railroads sent it information about thousands of routes. The railroad industry was simply too complex and dynamic for a few Washington bureaucrats to even understand, to say nothing of regulating it effectively.
Second, the ICC's failure is a classic example of what economists call "regulatory capture": the ability of special interests to gain control of the regulatory process and use it to their advantage. Because the railroads cared more about railroad regulation than anyone else, they were adept at getting their allies appointed to key positions at the commission. Over time, the ICC not only ceased to be an effective watchdog of consumer interests, but actually began actively defending the interests of the railroads at consumers' expense. For about six decades—from about 1920 to 1980—the ICC pursued policies that reduced competition and raised prices in the railroad industry. And when trucking emerged as a potentially disruptive innovation, the ICC helped to limit its growth and slow the corresponding decline of the railroad industry.
The story of the ICC is not an isolated case. Similar stories can be told of the Civil Aeronautics Board, which limited competition in the airline industry until the 1970s. And, of course, there's the FCC, which actively promoted AT&T's monopoly in the telecommunications market until the company was broken up in 1984.
We can certainly hope that Congress has learned from the experiences of the 20th century and will avoid the most egregious mistakes of that era. But it's worth remembering that many of the conditions that led to the ICC's problems are still with us. Today's FCC, like the ICC of the 20th century, has a revolving door between the commission and the firms it regulates. And the Internet, like the railroad industry of the 19th century, is extraordinarily dynamic and complex. As a result, there's a real danger that if Congress gives the FCC the power to regulate the Internet, it will make things worse, either because it cannot keep up with the Internet's rapid evolution, or because industry incumbents will succeed in getting their own allies into key positions within the commission. Either way, the results could be very different from what network neutrality proponents are hoping for.
Posted on Techdirt - 16 September 2008 @ 5:58pm
In previous installments of my series on network neutrality, I've pointed out that the end-to-end principle is not as fragile as a lot of people assume. Technological platforms have a kind of momentum that makes them hard to change once they've become established, and so it's not at all obvious that major broadband providers have the ability to significantly change the Internet's architecture. In my view, this is one reason to be skeptical of making the FCC the nation's network neutrality cop.
Here's a good example of another reason for skepticism: Catherine Bohigian, chief of the Office of Strategic Planning and Policy Analysis at the Federal Communications Commission, stepped down effective September 5. Her next job will be with cable giant Cablevision. According to the Washington Post, Bohigian has worked closely with chairman Kevin Martin throughout his tenure. And before her tour of duty at the FCC, Bohigian—like Martin—worked at Wiley, Rein & Fielding, a private law firm specializing in communications law. In other words, Bohigian first worked at a law firm that regularly appears before the FCC, then she became one of the key decision-makers at the FCC, and now she's going to be working for a company that regularly appears before the FCC. It's reasonable to assume that she'll be using her intimate knowledge of the regulatory process—and, perhaps, her close ties to other FCC staffers—to gain regulatory advantages for her employer.
Now, this isn't illegal. It's not even unusual. But this kind of low-grade corruption does give us a window into how the regulatory process works. Theoretically, the FCC is supposed to be a neutral agency that enforces the law in the public interest. In practice, the revolving door between the commission, major telecom companies, and the high-priced law firms that represent those companies means that the people who staff the agency and the people who lobby the agency are largely the same people at different points in their careers. In the next few years, if Cablevision wants to make sure that a particular FCC decision comes out in a way that promotes their interests, they won't just be able to make their arguments via the formal legal process. They'll also be able to dispatch Bohigian to have lunch with key FCC staffers—many of whom will be her friends, and possibly her former employees—to personally plead Cablevision's case. And of course, many of those staffers will be thinking about what their next gig will be, and it will be obvious that their chances at getting a cushy job at a major telco or cable company will be enhanced if they're helpful to those companies while they're still with the Commission.
You could mitigate this somewhat with stricter lobbying rules. For example, Congress imposes a one-year time limit on Hill staffers lobbying their former colleagues after they take jobs in the private sector. Maybe the FCC should beef up its own conflict-of-interest rules. (Update: As some commenters have pointed out, senior FCC officials are already subject to a one-year cooling-off period. This restriction could obviously be broadened or extended in various ways, but it's not going to be feasible to write rules that would eliminate the influence of industry insiders at the FCC.) But shutting down the revolving door completely would be extremely difficult. The regulations the FCC enforces are complicated, and the FCC needs a pool of people with in-depth understanding of those rules in order to do its job. But for people with expertise in the areas of law the FCC administers, the only other use for those skills is representing clients before the FCC. A ban on former FCC staffers working for telecom firms or the law firms that represent them would make it extremely difficult for the FCC to recruit talent, because working at the FCC would essentially be a dead-end job. Once somebody had taken a job at the Commission and developed expertise in telecom law, she'd have no real options for using those skills.
All of which means that when we're debating new regulations of the telecom industry, we have to remember who will be enforcing the rules. If Congress passes network neutrality regulations, those regulations will be interpreted and enforced by an agency whose key staffers have close ties to the major telephone and cable incumbents, so the results are likely to be more incumbent-friendly—and less consumer-friendly—than network neutrality advocates expect. If Cablevision gets in hot water over a network neutrality violation, it will be able to dispatch Bohigian and others on its payroll to make sure the company doesn't get more than a slap on the wrist. And, as I'll explain in the next installment, not only can this sort of lobbying render regulations toothless, but in some cases it can actually make things worse by allowing incumbents to tie their competitors up in red tape.
Other posts in this series:
More posts from Timothy Lee >>