
Cell Phones and Polling

As Nate Silver reported the other week:

Pew Research issued a study suggesting that the failure to include cellphones in a survey sample — and most pollsters don’t include them — may bias the results against Democrats. Pew has addressed this subject a number of times before, and in their view, the problem seems to be worsening. Indeed, this is about what you might expect, since the fraction of voters who rely on cellphones is steadily increasing: about 25 percent of the adult population now has no landline phone installed at all.

Clearly, this is a major problem in survey research — and one that, sooner or later, every polling firm is going to have to wrestle with. What isn’t as clear is how much of a problem it is right now.

He goes on to cover several of the key issues specific to this case and this moment, but I’ll focus for a minute on the larger-scale issues. I’ve talked about some of these ideas before, and indeed we were talking about cell-phone undercounting on the Dean campaign in 2003 and the Kerry campaign in 2004 (not, as it turned out, the biggest problem in either case). But as Nate says: this is a major problem that sooner or later everyone is going to have to deal with; it’s just a question of when.

Will that be this year? Hopes of Democrats aside, probably not – or at least, not provably, given the substantial problems in constructing likely voter screens this cycle. But when the dust settles and post-election analyses are done, all the pollsters are going to have to take a good, long look at their numbers against the actual results and, through the lens of Pew’s findings, begin to (or further) adjust their approaches. Because by 2012, an even larger share of the voting-age population will be living in cell-phone-only households, due both to the continued abandonment of landlines by older demographics and to the maturation of millions more who’ve never had a landline (and mostly never will).

This isn’t an impossible problem, but it’s also not solvable with a silver bullet. Polling, like any sort of research, is going to need to become more multi-modal, faster-thinking and -responding, in order to reflect anything like a generalizable sample of the population. This means working harder, thinking more and understanding better the ways in which all different sorts of people use different kinds of communications technologies.
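One standard adjustment – a partial fix, not a silver bullet – is post-stratification weighting: weight the respondents you do reach so that the sample’s phone-usage mix matches known population benchmarks. A minimal sketch in Python, with invented numbers (real benchmarks would come from sources like Pew’s or the CDC’s phone-usage estimates):

```python
# Post-stratification by phone-usage stratum: weight each respondent group so
# its share of the sample matches its share of the population. All numbers
# below are invented for illustration.

population_share = {"landline_only": 0.15, "dual_use": 0.60, "cell_only": 0.25}

# A landline-heavy poll badly underrepresents cell-only households:
sample_share = {"landline_only": 0.25, "dual_use": 0.70, "cell_only": 0.05}

weights = {s: population_share[s] / sample_share[s] for s in population_share}

# Hypothetical candidate support by stratum (again, invented):
support = {"landline_only": 0.45, "dual_use": 0.48, "cell_only": 0.58}

unweighted = sum(sample_share[s] * support[s] for s in support)
weighted = sum(sample_share[s] * weights[s] * support[s] for s in support)

print(f"unweighted estimate: {unweighted:.3f}")  # biased low: misses cell-only voters
print(f"weighted estimate:   {weighted:.3f}")    # reflects the population mix (~0.50)
```

The obvious catch is that weighting assumes the few cell-only respondents you do reach resemble the ones you don’t – exactly the kind of assumption those post-election analyses will need to test.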

Your Voices, Our Selves

One of the best ongoing investigations of thought and the universe is Radiolab, a show produced at WNYC by Jad Abumrad and Robert Krulwich (no small point of pride to me, both Oberlin grads). One of their very best and most mind-blowing episodes came a couple months back, called “Words.” I’d recommend you listen to the show in its entirety, and there are dozens of strands I could pull out and discuss all day. For now, I’d like to focus on the (intentionally) provocative claim made by Charles Fernyhough, a writer and psychologist at Durham University (UK):

“I don’t think very young children do think.”

Spinning this out in a later podcast led to (to my total delight) an in-depth discussion of L.S. Vygotsky’s theories of self and child development, especially on the internalization of speech – learning to refer to oneself as one refers to others.  The podcast focuses on non-normative variations in development – how sometimes, people internalize not just their voice but other voices as part of their internal monologue. Or dialogue. This can in its worst instantiations lead to things like schizophrenia, which is bad.

But I’d like to move one degree further, and think about how these issues relate to ideas of ourselves, and to our shifting media consumption and discussion habits.

Contra the much-discussed Death of Reading, the media landscape today in fact represents the apogee of reading in all of human history. More people are literate today than ever before, and they consume more written text than ever before. That they do not all do so through a printed medium called “book” or “newspaper” is beside the point, as is the fact that they also watch television. Words are being taken in and produced internally in simply staggering amounts, and a great deal of many people’s days – both in the developed world and in less-developed countries – is spent silently consuming and producing them.

What is the effect, then, of all these internal words on our own personal monologues? What is the effect, in particular, of the chatter of social media, where the voice is not our construction of some (possibly anonymous) authority at a Media Source, but that of people we know, whose actual voices – both written and spoken – we are familiar with?

One of the most elegant definitions of self, to my mind (also referenced in “Words”), is that it is nothing more than a continuous story we tell: one thing happened, then another, then another, all in the same voice, and that’s how I’m me. Schizophrenia and similar disorders are so terrifying because that basic premise is violated – all of these voices are competing for attention, and it becomes impossible to determine what is real, or who you are.

Pulling all of these threads together, then, the question becomes: what happens when the story of others becomes part of the story of ourselves? When the “I” is spending so much time with the “we” and the “they” inside our skulls? As a purely personal anecdote, I do know that while I know more specific and timely things than I used to, source attribution is often murky. Did I hear that on the radio, or when talking to a friend? Did I think it myself, or read it on a blog? Does it matter?

This is not a new question or problem, entirely – the tension between individualism and communitarianism stems from the same dynamic. But the scale of this shift in our internal voices is unprecedented, as is the breadth of effect in the day-to-day lives of people in our technologically-mediated culture. While I tend to eschew both Utopian and Dystopian readings of technology’s effects on us (the Internet being, like Soylent Green, made of people), I do think that it’s worth considering (agnostically) what the longer-term effects of a society-wide shift in the kinds of internal voices we maintain might entail. Probably a big deal.

“The world is changed… much that once was is lost, for none now live who remember it.”

I’ve lately had the sensation of living in the future – not the future of robots and flying cars (both in still-tragic short supply) but the future of my life, the future of something New and Different. This has caused me, in turn, to consider just what it is that is new or different, and just what is meant by Future, Past and Present.

We are all of us the epic heroes of our own tales, the centers of action and narrative direction, the most dramatis of personae. So it is fairly obvious why my internal librettist would determine this to be a turning point in the action: some months of great preparation leading to a grande moment, followed by a change of scene and a journey into the unknown. The curtain falls, the screen fades to black, End of Book Two – to resume in medias res some months or years along, once sufficient interest and tension have built for my next act.

Human that I am, I look for patterns to justify this perception, and believe that I have found them. From where I stand now, the 2000s look like a migraine-filled interregnum – citizen of a country making awful decisions, resident of a planet trundling into irreparable change, confused twentysomething unsure of my place in the world or in myself. The Bush years, even while ongoing, always had the eerie unreality of a dream state. That they were succeeded by the election as President of a black man named Barack Hussein Obama was no less hallucinatory, even if I have the pictures on my cell phone to prove it.

And now awake, and the dream was more and less true for good and bad, but we must live with what was wrought through sleepwalkery. I am an adult (or something like it) in this world after kidhood in the pleasant-smelling 1990s, but even while history spins around again the wheel’s not quite the same for its rotation. Anti-government zealots were just as crazy and well-funded in the bad old days of the Arkansas Project and Tim McVeigh, but today’s wingnuts are self-consciously the stars of their own reality television shows and the media an ever-more-efficient conduit for that effluent.

But then there are always authoritarians, aren’t there, no matter the names they use or the shirts they wear. My villains – copyright maximalists, seedbank patent-squatters and cynical political operatives – sure seem to be wearing black hats: everyone does in silhouette.

I can’t really worry about that, though – can’t have access to more than one subjectivity, can’t have the cut-shot pan-over Cinemascope wide angle. Acting As If is the best I can manage.

So for me, right now, I’ve arrived in the future. Things change always, but a period of flux is over, and a new dynamic will be the setting for our action over the next little while. It’s a world where the benefits of communications technology accrue in innumerable ways to increasingly huge numbers of the world’s people, but where material economic growth will remain stagnant for the foreseeable future – especially for those of us who already have more than our fair share (but not those with way, way, way more than their fair share). It’s a world where despite these unmistakable improvements to our everyday lives (all of us: next year or the one after, more than half of the citizens of Earth will be able to call each other on the phone; soon after, two out of three, and we haven’t even begun to think about How This Changes Everything), the main task of my professional career and political life will be fighting a rearguard action against Know-Nothings who reject a rationalist worldview: people for whom evidence is bias, or proof of its opposite. It’s a world where the institutions – national and international – that have done such a good job getting us Here (for good and for ill) are terribly ill-suited to getting us to some better definition of There. Some of those will get better; many will get worse.

But here we are, innit? And what is my role in this world, this Future? I’ll greatly enjoy figuring that out.

Planet Money reports:

Phones running Google’s Android operating system outsold the iPhone in the first quarter of this year. What’s more, BlackBerry phones outsold both iPhones and phones running Android.

BlackBerry phones, which run an operating system from Research In Motion, had 36 percent market share, according to NPD group, a research company. Android phones (including the widely advertised Droid) had a 28 percent share. And iPhones, which run Apple’s own operating system, had a 21 percent share.

It was the first time Android phones outsold iPhones.

…and iPhones will never again outsell Android phones, until and unless Google renames Android. Here’s why.

The iPhone is a great tool, and as Charles Stross has been pointing out for a while, Apple is making a big bet that their future is not as a hardware company with ancillary software but as a platform company with ancillary hardware.  Because Apple is Apple, they want to control this platform totally, so this means they make the hardware that runs the platform, and they control the price-point. This works up to a point, but simple economics on both ends (theirs and consumers’) dictates that there’s necessarily a ceiling for both the number of iPhones they can make and the number of people who might buy iPhones. Apple survives by making those numbers as similar as possible, but it’s never going to be approaching 100% – or 50, even. Twenty percent market share is pretty substantial, but I wouldn’t anticipate Apple’s share of any market getting bigger than that.

Research In Motion’s continued dominance of the smartphone market is pretty impressive, and they’ve wisely kept their sights firmly focused on doing one thing and doing it well: making a business- and email-centric device that just plain works, one its users stick with through multiple generations and structure their digital lives around. The BlackBerry appears to have staying power, but with a substantial caveat: it’s a perfect device for email (and texting) but not for Web 2.0 and the social web. That’s fine – there will always be a business and power-user market – but it’s tough to see RIM’s market share increasing much beyond where it is now (I’d expect it to shrink and then settle at a lower level), because as my research shows, young people don’t really have email as the central communications method of their digital lives. Phones are central, texting especially so, and the social web after that. Email is for professors and the professional world, so for those who head in that direction, BlackBerrys are in their future.

But Android really is the future of the mobile environment over the next several years. Like the Apple ecosystem, it’s an app-heavy and social-web-facilitating environment, but unlike Apple and RIM, Google is happy to let anyone (on any mobile network) make phones that run its OS – and thus experience the mobile web the way Google would like you to. Which is, for the moment at least, preferable: no censorship in its app store, and a wide (and widening) range of hardware choices on your preferred mobile carrier. Anything that fights against the Web turning into a set of walled gardens, I can heartily endorse. Android will also push prices down for all smartphones and for access to the mobile web by offering experiences comparable to the iPhone and BlackBerry without the single-system lock-in, and that’s (clearly) preferable, too. While the jury is still out on Google’s entrance into the hardware world, that’s not as important as their introduction and support of an open, high-quality platform that lets the Web move to mobiles without intermediation and without sacrificing its values and variety.

Forcing the Party Line

When telephones were first invented, you didn’t just call a number and get a person on the other end: usually, first you’d talk to an operator, who would then connect you to a local loop where the desired party resided.  There were special rings within each loop to distinguish who was getting the call, but if someone else on your loop or the loop you were calling wanted to listen in, you couldn’t stop them.  This was a function of cost – it was pretty expensive to get a residential telephone line before the federal government guaranteed universal access and then deregulated the phone companies.

This was, it’s pretty well agreed, a bad system notwithstanding the excellent fodder it produced for light farce.  The residential system that replaced it was pretty problematic, too, leading as it did to:

a 70 year or so period where for some reason humans decided it was socially acceptable to ring a loud bell in someone else’s life and they were expected to come running, like dogs.

So, not the best either, even though it too did produce some great songs.

The growth of electronically mediated and time-shifted communications may have a mixed record on a lot of issues, but it’s an unambiguous good in terms of individuals’ control over the method, mode and timing of their responses to communications. Communications where the sender is unsure of the extent of the audience, or where the receiver is potentially forced into confrontation, are not beneficial for either the clear conveyance of meaning or social cohesion.

Which is why Facebook’s recent actions are both troubling and perplexing. By making all connections public for all users, they are making audiences ambiguous and forcing potential confrontations (between managed identities, work and personal lives, etc.) on all their users. The shift in Facebook’s privacy settings takes as its central premise that the advances in telephone communications of the past century were a bad idea. It is forcing all of its users onto an always-on global party line, where the conversations are transcribed and sold to all interested parties. That’s not good.

Digital technologies allow us the ability to talk to basically whomever we want (and only them) whenever we want (and only then).  That Facebook would consider these to be bad things is deeply weird, and makes a compelling case against using it as a central mode of communication.

Police States are Bad Ideas

This is why:

Lower Merion School District employees activated the web cameras and tracking software on laptops they gave to high school students about 80 times in the past two school years, snapping nearly 56,000 images that included photos of students, pictures inside their homes and copies of the programs or files running on their screens, district investigators have concluded.

In most of the cases, technicians turned on the system after a student or staffer reported a laptop missing and turned it off when the machine was found, the investigators determined.

But in at least five instances, school employees let the Web cams keep clicking for days or weeks after students found their missing laptops, according to the review. Those computers – programmed to snap a photo and capture a screen shot every 15 minutes when the machine was on – fired nearly 13,000 images back to the school district servers.
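A quick back-of-envelope check (my arithmetic, assuming round-the-clock capture on those five machines) shows how the reported numbers hang together:

```python
# Sanity check on the reported figures - my arithmetic, not the district's.
interval_min = 15
images_per_cycle = 2                      # one webcam photo + one screenshot
cycles_per_day = 24 * 60 // interval_min  # 96 cycles/day if always on
images_per_machine_day = cycles_per_day * images_per_cycle  # 192 images/day

total_images = 13_000
machines = 5
days_each = total_images / machines / images_per_machine_day
print(f"~{days_each:.0f} days of continuous capture per machine")  # ~14 days
```

Roughly two weeks of continuous surveillance per machine – consistent with the “days or weeks” in the report.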

If authorities have the ability to behave badly, some of them always will. Which is why stuff like this is especially bad:

The MPAA and RIAA have submitted their master plan for enforcing copyright to the new Office of Intellectual Property Enforcement. As the Electronic Frontier Foundation’s Richard Esguerra points out, it’s a startlingly dystopian work of science fiction. The entertainment industry calls for:

  • spyware on your computer that detects and deletes infringing materials;
  • mandatory censorware on all Internet connections to interdict transfers of infringing material;
  • border searches of personal media players, laptops and thumb-drives;
  • international bullying to force other countries to implement the same policies;
  • and free copyright enforcement provided by Fed cops and agencies (including the Department of Homeland Security!).

The Fourth Amendment has been gutted pretty extensively over the past generation, but if “unreasonable search and seizure” has any meaning at all, it should mean that neither the government nor private corporations should be legally empowered to constantly monitor our activities through our own computers.

I’ve thought for a while that the coming divide in our politics is not going to be one of conservatism versus liberalism, but about authoritarianism versus a politics of individual liberty. News like this goes to reinforce that belief, as does the acrimony of our current political climate in the US. More on the latter, later, but I’ll reiterate the main point: it’s a bad idea to give authorities unlimited surveillance powers because they will always, always be abused.

Masculinity

Tom Shales notes what was unavoidable in last night’s Super Bowl:

An oddly recurring theme had to do with men asserting their masculinity, or attempting to assert it, as well as the perpetual male fear of emasculation. In an ad for a very portable television called FloTV, a man was seen being dragged through a torturous shopping trip by his girlfriend while sportscaster Jim Nantz ridiculed him… [this in particular disappointed me – shame, Jim Nantz]

Men in their underwear kept popping up — in a Coke ad, a man sleepwalks in the wilderness, clad in boxer shorts and a T-shirt. His odyssey ends only after he finds a cold bottle of Coke.

An ad for Dockers was keyed to the mantra “I wear no pants!” and featured men in their underwear romping around aimlessly. A funny ad for Career Builder.com, depicting the notion of Casual Friday run amok, showed men and women, most of them anything but physically fit, spending a day at the office in their undies.

Men and their traditional roles were also mocked, but somehow also celebrated, in ads introducing Dove for Men, a line of toiletries. A man raced through a recitation of the chores and good deeds he had obediently done to the tune of Rossini’s “William Tell Overture,” once the theme of “The Lone Ranger” on radio and TV.

An ad for Dodge Charger called the muscle car “Man’s Last Stand” after depicting a supposedly put-upon male who listed all the nice things he did for his female mate. Were these ads for a post-feminist age? They seemed to have a retro appeal — for better and worse. Probably worse.

Not coincidental are these numbers:

The top red line is unemployment among workers with less than a high school education; dark yellow is male unemployment; light yellow is female unemployment; and purple is unemployment for those with college educations or more.

Even a recovery will likely stabilize employment patterns along these lines rather than along the previous ones – the industries recovering first (service, health care) are traditionally and disproportionately female, whereas the industries hardest hit (construction, manufacturing) are traditionally and disproportionately male. Contra a lot of doomsaying, manufacturing is fine in the US – we just make more stuff with many fewer jobs than we used to, so even with huge manufacturing growth we’ll have a more robust sector with fewer jobs than before (see, e.g., Chris Anderson’s recent piece on distributed manufacturing).

Construction has been hit hard by the inflation and then rapid popping of the housing bubble, and we shouldn’t want those particular jobs to return. But there’s plenty of stuff to build: repairing and improving our electric grid and crumbling bridges, sewers and other basic infrastructure. And then there’s “green jobs,” new energy generation, etc.

But when anyone talks about what the jobs of the 21st Century are going to be, it’s all about the “knowledge economy”, science and research: jobs that require education. And the numbers aren’t on men’s side here, either:

So yes, there’s “something out there” that advertising firms (who are not dumb) are picking up on, a reworking of previous patterns of gender roles in our new economy. Backlash always comes first. And a lot of people talk about how to “fix” the problem of boys/men being left behind. But that presumes that it is a problem, an assumption which takes the patriarchal status quo ante as some combination of natural, just and correct.

I’m not arguing that a society where increasing numbers of men are un- or under-educated and -employed is a good or desirable thing: it’s pretty clear that over the long term that leads to undesirable outcomes, including but not limited to violence and reactionary political movements (the above set of trends is most definitely one of the things fueling the Tea Parties). But more women in greater positions of economic and political power in this country would be a good thing – and given the macroeconomic trends, it seems likely to be part of our future.

I’ll heartily second what Jim Fallows says here (though without rehashing my earlier anti-Kindle thoughts, I wouldn’t say it’s an argument for the Kindle per se so much as eReaders in general):

My main view on communications media is that new systems usually add to old ones, rather than displacing them. Radio didn’t eliminate books and newspapers — that would come later!; movies didn’t eliminate still photos; TV didn’t eliminate either movies or radio; and the internet has not (yet) eliminated TV. A few communications systems do disappear altogether, except for specialist/curio use: vinyl records, photos on real film, etc. Usually the field just becomes more crowded and the options more diverse.

So it will be, at least for a while, with e-readers like the Kindle versus “real” books.

To add to this a bit, what I think this kind of innovation in new communications channels does is to rationalize the kind of content on each. For all the nostalgia that some (e.g., me) have about obsolete forms, books do a better job at holding novels than newspapers, so we don’t see serialized novels anymore. Similarly, TV and movies do a better job at dramatic narrative than radio, so very few radio dramas still exist. But radio’s still excellent at talk shows and sports broadcasts (safer, too, if you’re driving), and the nature of the technology means that nowadays we can shove a radio into just about anything else (e.g., cell phones).

eReaders are going to perform a similar function – eventually (sooner rather than later) they will mostly eliminate the printing of many academic texts and monographs (and this is going to be a good thing for the people who write those texts, but more on that later). There’s probably a good place for magazines on eReaders, but I’m not quite sure what that is. Many of the books at the top of best-seller lists will find a lot of their sales (or in the Kindle’s case, rentals) moving very quickly to eReaders once there’s a critical mass – which makes sense for the most disposable (if fun) stories. Nobody’s really that well served by several dozen more Dan Brown books ending up in used book stores.

In the end, eReaders represent not a replacement for books but an overlapping-but-complementary form. They’ll absolutely cut into book sales, but there will be a new equilibrium whereby booksellers will be able to see more clearly what their market is, and isn’t.

Good post by Richard Nash on the future of publishing, most of which I agree with. I don’t agree at all, however, with one of the predictions:

3. Most predictions for 2020 based on models derived from controlling the supply side, that is, from the monopoly on the means of producing and distributing books, will be wrong. By which I mean, the supply chain book publishing and retail model is ending. The book retail chains will disappear, just like Circuit City, Sharper Image, Tower Records disappeared. And the corporate publishers will likely all but disappear just as Atari, Digital, Wang disappeared though the backlists will be spun off to private equity companies looking for semi-predictable IP-based cash flow, and a couple of front list publishing enterprises will likely be operating trying to emulate the Hollywood blockbuster model with just about enough success to be able to stay in business.

It certainly seems possible that Borders will not make it, but the idea that there will be literally no retail book chains is preposterous. Circuit City went out of business because they fired their best employees and destroyed whatever appeal they had as a place to get electronics; Tower Records went under because you can’t just sell CDs. But Best Buy is doing just fine, thankyewverymuch, because they have been flexible and now do all of what both Circuit City and Tower did, but better, and more.

Barnes & Noble is employing a not-dissimilar strategy: they knew early on that an online presence was key, and while they’re not Amazon, they’re well established online. Similarly, they know they’ve got to have an entrant in the eReader space, so even if the Nook doesn’t cut it, something will. B&N has also been pretty smart about store location; some of their mall and exurb locations may shut down, but they’ve got a strong college store presence and lots of very attractive downtown city real estate. There was a time when I wished the chains nothing but ill, but I can’t fault B&N on how they’ve played the last several years, and I don’t see them going away.

More on all that later, but I also think this is spot-on from Nash:

8. In 2020 the disaffected twentysomethings of the burgeoning middle classes of India, China, Brazil, Indonesia will be producing novels faster than any of us can possibly imagine.

Yup.

Research and Generalizability

A few weeks back I took an all-day seminar with Don Dillman, “How Visual Design and Layout Influence Responses to Questionnaires.” It was a great course, and I definitely recommend doing anything similar with Dillman or Odum if the opportunity presents itself.

In addition to some great walk-throughs on the power of design to elicit greater rates of survey response, and on the importance of harmonizing design elements across multiple modes (e.g., web, mail, phone), Dillman also made a pretty shocking (even to him!) point about what his latest research showed: namely, that mail surveys are (still? again?) the best method:

Postal delivery sequence file (DSF) provides all residential addresses and may now be our best household address frame.

When you give this a minute of thought, it’s not all that outlandish. Despite huge increases in Internet connectivity – even among older and rural populations – it remains far from universal, and any given online channel (e-mail, SNS) is only going to reach a relatively small and self-selecting share of the population. Further, there’s no centralized database of “online users,” and those with the biggest files (Facebook, MySpace, Google, Yahoo! and Microsoft) sure aren’t giving you access to them, Mr./Ms. Academic Researcher. Landline use continues to decline, and cell-phone-only households, with the protection of the Federal Do Not Call Registry, continue to move into a patchwork non-contactable space.

But nearly everyone still has a street address, and even if it’s not always reliably tied to a person’s name, it’s the best way to reach the biggest and most generalizable share of the population. So in a world where more and more of our interactions and identity are moving online and mobile, to spaces where we increasingly control access, how can researchers hope to build generalizable samples of the population?

Let’s step back for a minute and talk about the U.S. in five years. Just as most of the population now has a cell phone, most of the population will have a smart phone/iPhone-like device that will handle voice communications, e-mail, SNS, microblogging, etc. [A point for future discussion is just what this will do to the differential effects of media channel as observed in the media effects literature] It will be the pivot point for all of our communications and personal identity information – we’ll increasingly be using it as an identity storage and verification device for airport check-in, payment and receipt of payment, and a half-dozen other things that now seem outlandish and will soon seem mundane. It’ll be how we carry who we are, and how we tell others about that, for any manner of transactions and interactions.

But that identity will also be floating, a bit. There’ll be several big databases – the mobile companies, Apple, Google, Facebook, etc. – but, again, they won’t be distributing Yellow Pages. Your identity will be relational and transactional and contingent, always subject to change and shift depending on your satisfaction with the service provided. Which is good, but it presents an increasing challenge to anyone who wants to find some kind of “everyone” (e.g., Census takers, public opinion researchers). What’s needed is a tether that is likewise contingent and user-controlled, but that is based on a stable hook.

The DSF can provide that hook. Most people will continue to have a street address – indeed, even the homeless can often provide some manner of address that would interface with the DSF – even as the majority of their communications are mediated through shifting electronic interfaces. A user-controlled and -verified system for tying your various communication methods – contingently – to a physical address could give users better control over access to every mode of communication and contact. Physical address and solicitation could become tokens to be entered into whatever other interface you wished (Amazon for deliveries, Gallup for polls, IRS.gov for taxes), granting the third party only the permissions you desired while also providing the verification layer that you are indeed [a person]. Of course this would raise all sorts of new issues about interfaces and self-report data, but given Dillman’s very promising results – >50% response rates to online surveys via mail solicitation (and >70% via mail) – this is certainly worth thinking about more extensively.
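To make that a bit more concrete, here’s a purely speculative sketch – every name and structure in it is hypothetical, not a description of any existing system – of a physical-address tether issuing scoped, revocable tokens to third parties:

```python
# Speculative sketch of the "tether" idea: a verified physical address issues
# scoped, revocable tokens that third parties (a retailer, a pollster, the IRS)
# can redeem for exactly the contact permissions the user granted - and no more.
# All names and structures here are hypothetical illustrations.

import secrets
from dataclasses import dataclass, field


@dataclass
class AddressTether:
    street_address: str                   # the stable, DSF-verifiable hook
    grants: dict = field(default_factory=dict)

    def issue_token(self, party: str, permissions: set) -> str:
        """Grant a third party a scoped token, e.g. {'deliver'} or {'survey'}."""
        token = secrets.token_urlsafe(16)
        self.grants[token] = {"party": party, "permissions": set(permissions)}
        return token

    def check(self, token: str, action: str) -> bool:
        """Verify that a token permits an action, exposing nothing else."""
        grant = self.grants.get(token)
        return grant is not None and action in grant["permissions"]

    def revoke(self, token: str) -> None:
        """The tether stays contingent: any grant can be withdrawn at any time."""
        self.grants.pop(token, None)


tether = AddressTether("123 Main St, Anytown USA")
gallup_token = tether.issue_token("Gallup", {"survey"})

print(tether.check(gallup_token, "survey"))   # True  - the granted permission
print(tether.check(gallup_token, "deliver"))  # False - nothing beyond the grant
tether.revoke(gallup_token)
print(tether.check(gallup_token, "survey"))   # False - contingent and revocable
```

The design point is that the stable element (the address) never travels with the token; third parties hold only a revocable capability, which preserves exactly the user control that makes the scheme palatable in the first place.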