Information Overload?


Since I purchased my first laptop two years ago, one of my most common mental states has been distraction. Too often, I find myself immersed in virtual information landscapes teeming with curious, interesting, alluring, tugging things. Frequently, I find myself skimming several interesting sources at once, my attention distributed so broadly that it fails to penetrate much deeper than the surface of whatever source I’m attending to at that particular moment. Much of this distraction is my own fault. I subscribe to all kinds of RSS feeds (which I check infrequently) and dozens of podcasts (both talk and music), which are constantly delivering new information–text and audio for me to browse through and consume. Furthermore, I get a ceaseless stream of email, much of which comes from academic groups and lists that I’ve joined or signed up for updates from. It’s hard not to feel overwhelmed by the sheer volume of notices whenever I plug into my laptop–and equally hard at times to pull myself away from the computer once I’ve been sucked in.

In “Hyper and Deep Attention: The Generational Divide in Cognitive Modes,” one of the most interesting academic articles I can remember reading in the past few years, N. Katherine Hayles, now a Professor of Literature at Duke University, suggests that we are in the midst of a generational divide in cognitive modes. Younger people prefer the hyper attention stimulated, encouraged, and inculcated by the internet–a mode of engagement Hayles defines as “characterized by switching focus rapidly among different tasks, preferring multiple information streams, seeking a high level of stimulation, and having a low tolerance for boredom”–while many older people remain wedded to the deep attention traditionally associated with study in the humanistic disciplines (and, more broadly, with educational institutions in general). She argues that “as students move deeper into the mode of hyper attention, educators face a choice: change the students to fit the educational environment or change that environment to fit the students.” I see evidence that the educational environment is already changing to fit the students (a friend of mine at UW is hard at work on incorporating narrative into educational video game design, and the University recently funded a proposal by one of my professors to build DesignLab, a digital composition center).

I’m interested in many of these efforts, and I think most of them are probably beneficial on the whole. I’m aware of a great deal of scientific interest in neural plasticity, and vaguely aware that there are quite probably strong links between brain development and media consumption, including use of the internet. Matt Richtel’s front-page NYT story from June 2010 summarizes a great deal of contemporary neuroscience research and suggests that part of the appeal of constant face time with electronic devices is the surge of dopamine that this interaction provides. One problem, however, is that we don’t seem to be getting better at anything when we multitask. In fact, our stress levels are measurably higher and our ability to weed out insignificant new information seems to be diminished. These are distressing findings, especially considering how much time we spend in thrall to new information. Richtel notes in his article: “At home, people consume 12 hours of media a day on average, when an hour spent with, say, the Internet and TV simultaneously counts as two hours. That compares with five hours in 1960, say researchers at the University of California, San Diego. Computer users visit an average of 40 Web sites a day, according to research by RescueTime, which offers time-management tools.” While I don’t believe that hyper attention is an inherently inferior or even primarily negative mode of engagement, I do sometimes worry that my computer habits are eroding my capacity for deep attention–an erosion I find lamentable. I also worry that I’m positively reinforcing some hyper-attentive behaviors that I don’t find desirable or attractive when I reflect more carefully on them, a distaste that is particularly strong when I’m at some distance from my computer.
In short, I fear that the way I’m using technology may be making me “impatient, impulsive, forgetful and even more narcissistic,” as Tara Parker-Pope frets in one recent NYT article. According to the results of a recent nationwide poll of 855 adults commissioned by the NYT and CBS News, I’m not alone: “Almost 40 percent check work e-mail after hours or on vacation. … About a third of those polled said they couldn’t [imagine living without their computers] … One in seven married respondents said the use of these devices was causing them to see less of their spouses. And 1 in 10 said they spent less time with their children under 18.” Heavy users of technology are no longer as present for their loved ones as they might be, and this worries me.

My question isn’t exactly how to avoid sliding into a vacuous, Willie Mink-like state of Dylar-induced hyper attentiveness–I don’t yet resemble the protagonist of Richtel’s article, who “goes to sleep with a laptop or iPhone on his chest, and when he wakes, he goes online … escapes into video games during tough emotional stretches. On family vacations, he has trouble putting down his devices,” nor do I intend to abdicate my agency to such a degree that I ever do resemble this man. But I see some parallels, and I recognize some tendencies toward what Christine Pearson terms “incivility” in my use of technology in work and school settings. Pearson offers three clear, reasonable suggestions: 1) minimize your own use of electronic devices when interacting with others, and clearly explain any urgent need to respond to a technological pull when you’re with another person; 2) ask for face-to-face attention when the person you’re engaging seems more engrossed in technology than in you; and 3) consider some kind of no-screen policy in certain work situations, while allowing regular breaks for participants to reconnect to the virtual social networks that are most meaningful to them.

All this is sound advice, and I’m certain that my personal solution to the worrying trends I’ve already considered is plain enough–I merely need to set limits and honor them, turn off the electronic devices, unplug myself from the sources of distraction, cultivate healthy relationships with humans and the other meaningful components of my direct environment, and reorient myself in relation to the values that seem most significant in an ethically responsive life. One author, Susan Maushart, has gained some notoriety and attention for her recent book The Winter of Our Disconnect (you can read a brief excerpt on msnbc’s website), which relates an experiment she undertook with her three teenage children in which they spent six months of their lives without any screen technologies (i.e., no smartphones, iPods, computers, TVs, or video games). Another, Joshua Foer, has written a book entitled Moonwalking With Einstein: The Art and Science of Remembering Everything (excerpted at the NYT), in which he describes how he trained and developed his memory without embracing Luddism or technophobia. It’s clear that I still have a great deal of choice in how my brain develops and my cognitive skills are sharpened or neglected. I especially liked the responses to the Richtel article from an assembled panel of experts: most of them focus on the need to unplug, to moderate ourselves, and to practice restraint. Two offered particularly memorable advice. Stephen Yantis cautions that multitasking usually means switching among multiple tasks, and that each of those switches carries what he calls a “switch cost.” His advice: we must “recognize that the human mind, while amazingly adaptable, is nevertheless limited in what it can do — and that those limitations have to be respected.” Gloria Mark suggests working in what she calls “batch mode,” which entails using technology in scheduled time intervals.
I’ve started gravitating more and more toward this method, and in the last week I’ve toyed with adopting the Pomodoro Technique, in which I would work intensely on single tasks for 25-minute intervals followed by 5-minute breaks. I don’t yet have a tried-and-true method, but I’m interested in developing and adhering to one, and I would welcome any and all suggestions for things others have done to manage and control the role of information technology in their daily lives.
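For the programmers among my readers: the batching I have in mind can be sketched in a few lines of Python. This is purely my own illustration of the interval structure (the function name and default durations are mine, not drawn from Mark or from the Pomodoro literature), not a productivity tool anyone has endorsed:

```python
def pomodoro_schedule(cycles, work_min=25, break_min=5):
    """Return a list of (phase, minutes) tuples: focused work
    intervals separated by short breaks, as in the Pomodoro Technique."""
    schedule = []
    for i in range(cycles):
        schedule.append(("work", work_min))
        # No trailing break after the final work interval.
        if i < cycles - 1:
            schedule.append(("break", break_min))
    return schedule

# Example: roughly two hours of batched work.
for phase, minutes in pomodoro_schedule(4):
    print(f"{phase}: {minutes} min")
```

The point of writing it out is only to show how little structure is actually involved: the whole discipline lies in honoring the intervals, not in generating them.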

And with that, I’m getting up from the desk and moving on to something else.