from the go-read-a-book dept
Well, you had to know this was coming. With the release of Nick Carr’s latest book, The Shallows — basically an extended riff on his silly and easily debunked article from The Atlantic a few years ago — Carr is now getting plenty of press coverage for his claims. However, like Jaron Lanier before him, this seems like yet another case of Carr pining for the good old days that never existed. As I’ve pointed out in the past, I think Carr is a brilliant writer, and a deep thinker, who is quite good at pulling interesting nuggets out of a diverse set of information. What I find absolutely infuriating about him, however, is that he lays down this cobblestone path of brilliance, making a good point backed up by evidence, followed by another good point backed up by evidence… and then at the end, after you’re all sucked in, he makes a logical leap for which there is no actual support. He does this over and over again, and his latest effort appears to be the same thing yet again.
The Wall Street Journal is running a bit of a “debate” between Carr and Clay Shirky, who each have new books out arguing essentially opposite positions. So the two of them each address the question of whether or not the internet is making us dumb. Carr’s column does a nice job highlighting a variety of studies showing that too much multitasking means you don’t concentrate very much on anything. Except… that seems a bit tautological, doesn’t it? The “key study” that he highlighted shows that “heavy multitaskers” did poorly on certain cognitive tests. But it fails to say which direction the causal relationship goes in. It could be that those who don’t do well in certain cognitive areas are more likely to spend their time multitasking, for example, since they get less enjoyment from bearing down on a single piece of information.
And, unfortunately, there’s lots of evidence to suggest that Carr is very clearly misreading the evidence he presents in his book. Jonah Lehrer at the NY Times, in his review of Carr’s book, highlights this point:
What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind. For instance, a comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. This surprising result led the scientists to propose that even simple computer games like Tetris can lead to “marked increases in the speed of information processing.” One particularly influential study, published in Nature in 2003, demonstrated that after just 10 days of playing Medal of Honor, a violent first-person shooter game, subjects showed dramatic increases in visual attention and memory.
Carr’s argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a “book-like text.” Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn’t making us stupid — it’s exercising the very mental muscles that make us smarter.
So the science doesn’t actually agree with what Carr says it does. All he’s left with, then, is the claim that, because of the internet, fewer people are reading books… and that’s somehow “bad.” This isn’t based on any evidence, mind you. It’s just based on Carr saying it’s bad:
It is revealing, and distressing, to compare the cognitive effects of the Internet with those of an earlier information technology, the printed book. Whereas the Internet scatters our attention, the book focuses it. Unlike the screen, the page promotes contemplativeness.
Reading a long sequence of pages helps us develop a rare kind of mental discipline. The innate bias of the human brain, after all, is to be distracted. Our predisposition is to be aware of as much of what’s going on around us as possible. Our fast-paced, reflexive shifts in focus were once crucial to our survival. They reduced the odds that a predator would take us by surprise or that we’d overlook a nearby source of food.
To read a book is to practice an unnatural process of thought. It requires us to place ourselves at what T. S. Eliot, in his poem “Four Quartets,” called “the still point of the turning world.” We have to forge or strengthen the neural links needed to counter our instinctive distractedness, thereby gaining greater control over our attention and our mind.
It is this control, this mental discipline, that we are at risk of losing as we spend ever more time scanning and skimming online.
This makes two important assumptions. First, that reading a book is somehow the absolute pinnacle of information consumption. There is no evidence that this is the case. In fact, in Shirky’s response piece, he notes similar misguided concerns about how mass-market books would make us dumber:
In the history of print, we got erotic novels 100 years before we got scientific journals, and complaints about distraction have been rampant; no less a beneficiary of the printing press than Martin Luther complained, “The multitude of books is a great evil. There is no measure of limit to this fever for writing.” Edgar Allan Poe, writing during another surge in publishing, concluded, “The enormous multiplication of books in every branch of knowledge is one of the greatest evils of this age; since it presents one of the most serious obstacles to the acquisition of correct information.”
But, as Shirky points out, society adapts. Each new technology brings along some good uses and some bad, but society, as a whole, seems to adapt to promote the good uses, such that they greatly outweigh the bad uses.
The second assumption that Carr falsely makes, of course, is that our internet time is taking away from our reading time. But, as Shirky notes in his piece and his book, our internet time seems to come mostly out of our TV time (remember TV?), and thus is allowing us to be more interactive and do more socially useful things with our time than just vegging out:
First, the rosy past of the pessimists was not, on closer examination, so rosy. The decade the pessimists want to return us to is the 1980s, the last period before society had any significant digital freedoms. Despite frequent genuflection to European novels, we actually spent a lot more time watching “Diff’rent Strokes” than reading Proust, prior to the Internet’s spread. The Net, in fact, restores reading and writing as central activities in our culture.
The present is, as noted, characterized by lots of throwaway cultural artifacts, but the nice thing about throwaway material is that it gets thrown away. The issue isn’t whether there’s lots of dumb stuff online — there is, just as there is lots of dumb stuff in bookstores. The issue is whether there are any ideas so good today that they will survive into the future. Several early uses of our cognitive surplus, like open source software, look like they will pass that test.
The past was not as golden, nor is the present as tawdry, as the pessimists suggest, but the only thing really worth arguing about is the future. It is our misfortune, as a historical generation, to live through the largest expansion in expressive capability in human history, a misfortune because abundance breaks more things than scarcity. We are now witnessing the rapid stress of older institutions accompanied by the slow and fitful development of cultural alternatives. Just as required education was a response to print, using the Internet well will require new cultural institutions as well, not just new technologies.
Oh, and as was noted well over a year ago, after decades upon decades of people reading fewer books (mainly because of TV), it turns out that people are now actually reading more books — directly contrary to Carr’s entire thesis.
Filed Under: clay shirky, internet, multi-tasking, nick carr, reading, smart