After a lot of buildup and false starts, Google is finally rolling out (well, soft-launching) a social networking something-or-other. Obviously Google is already a serious social hub, but the various attempts at Google-as-SNS haven’t quite caught fire (unless you’re Brazilian). This is clearly worthy of some substantial attention – not just because Google is one of the 700 lb. gorillas of the Web, but because it is one of the few firms in a position to challenge Facebook’s walled-garden Web from a running start (Google’s own vision for the future of the Web can be discussed later). In the announcement of Google+ they seem to be confronting this head-on:

+You: putting you first, all across Google
That’s the Google+ project so far: Circles, Sparks, Hangouts and mobile. We’re beginning in Field Trial, so you may find some rough edges, and the project is by invitation only. But online sharing needs a serious re-think, so it’s time we got started. There’s just one more thing—really the only thing: You.

You and over a billion others trust Google, and we don’t take this lightly. In fact we’ve focused on the user for over a decade: liberating data, working for an open Internet, and respecting people’s freedom to be who they want to be. We realize, however, that Google+ is a different kind of project, requiring a different kind of focus—on you. That’s why we’re giving you more ways to stay private or go public; more meaningful choices around your friends and your data; and more ways to let us know how we’re doing. All across Google.

This direct contrast hits on much of the criticism Facebook has received over the last few years for the choices it has made with privacy and disclosure settings (and indeed that Google itself received for its rollout of Buzz). And Google seems to have taken some of the flak it received to heart – as Fred Stutzman notes very succinctly: “Google Circles: What Google has learned from Goffman.” Liz Heron is a bit more skeptical, noting that “Google being more private than Facebook seems like a hard sell.”

Steven Levy does a deep dive on the development of Google+ (with the totally hilarious US-centric line “aside from capturing massive market shares in Brazil and India, Orkut is now a footnote”… dude, those are the 2nd and 5th most populous countries on Earth!) and TechCrunch gives a good bit of background on Google’s management of the rollout, including some insight from one of the project chiefs:

“We believe online sharing is broken. And even awkward,” Gundotra says. “We think connecting with other people is a basic human need. We do it all the time in real life, but our online tools are rigid. They force us into buckets — or into being completely public,” he continues. “Real life sharing is nuanced and rich. It has been hard to get that into software,” is the last thing he says before diving into a demo of Google+.

I tend to agree, and this tracks nicely with much of what Paul Jones has been discussing in his (excellent and fascinating) move into #noemail. Paul notes that,

“…small talk, important small talk, is going on in a lot of different environments. It’s as common as breathing. So common that like breathing, we don’t pay serious attention to it until there’s some serious problem. But that common talk enriches our lives and deepens our engagement with our co-workers and the world.”

Which tracks very nicely with what Gundotra is saying and with the critique of those buckets in #noemail.

From a personal standpoint, Google+ is exciting because it seems to more directly track how sociability works, rather than trying to corral it (as has been Facebook’s general movement over the years). From a research standpoint, it also seems to exemplify something I’ve thought for a while – that the Internet as a place is receding into the background, becoming the invisible infrastructure (think about plumbing – and then think about how much you don’t think about it) of our lives. That is: if Google+ works, it won’t be so much because it’s something shiny and exciting but because it’s something simpler and easier.

Update: Further thoughts from The Real Paul Jones on Google+.

Success and Failure

From an excellent Telegraph story on the cratering of the denim trade in China:

In one desolate room, a former factory boss sat on a stool in shame: having lost all of his family’s money, he was too ashamed to return home for the Chinese New Year holiday.

And from a recent episode of Planet Money:

the U.S. system makes it easier for people to start over, and to keep their financial lives going. Our financial system is set up to embrace failure.

I can’t really separate myself from national identity here – I’m an American through and through, and can neither deny my cultural immersion nor a certain degree of chauvinism on this point. I think it’s great that US bankruptcy laws (though less good than they used to be) allow people, by and large, to start again after things fall apart. Indeed, in Silicon Valley failure is often a badge of pride.

So I won’t make a normative judgment on this cultural difference but rather note that I think this is another example of entity versus incremental theories of self in action. To the extent that people can define their professional successes or failures not as self-confirming or self-indicting evidence of an essential self, but rather as a series of events from which they can learn and improve regardless of outcome, I think that’s a good thing.

Distributed Self-Criticism

Fascinating news from ESPN:

A panel of faculty from The Poynter Institute, which offers training to journalists, will serve as the latest ombudsman for ESPN.

The panel, known as the Poynter Review Project, will review ESPN content across all platforms and offer public comment on ESPN’s efforts in the form of monthly essays and additional timely responses as issues arise, ESPN and Poynter announced Thursday.

The panel also will address fans’ concerns during its 18-month tenure. Commentaries will be posted on ESPN.com, beginning with an introductory column in March.

The institute’s role expands the tradition of ESPN ombudsman, most recently held by television producer Don Ohlmeyer. His term was preceded by Le Anne Schreiber, a former New York Times sports editor-turned-author, and George Solomon, former sports editor of The Washington Post. [emphasis added]

The last part is the key. Poynter has been at the forefront of documenting, criticizing and analyzing online news reportage and dissemination, and however accomplished or ethical those previous ombudspersons have been, they were decidedly old media. This shift to not one person but a panel of experts who are at the top of the profession in its current state is great news for fans and readers.

ESPN clearly recognizes that the future of all its business – sports, journalism, commentary – is online. Its free online broadcasts of the World Cup last summer were the best yet done, and its recent acquisition of Michael Wilbon (further gutting the once-great Washington Post sports section) for online columns and chats is a further investment in the same. Bringing on board an ombudsboard that understands not just sports journalism but the emerging dynamics and ethics of online commentary and interaction is a great step forward for the colossus of sports coverage, and at least a potential step towards regaining the kind of credibility that journalism strives for.

Communications Segmentation

TechCrunch reports on a recent ComScore report highlighting changes in webmail usage:


In introducing his messaging platform last November Facebook CEO Mark Zuckerberg said one of the primary motivations behind Messages product strategy was that teenagers have given up on email, “High school kids don’t use email, they use SMS a lot. People want lighter weight things like SMS and IM to message each other.”

A comScore report on 2010 digital trends reinforces at least part of Zuckerberg’s claim. It’s inevitable: Innovative social messaging platforms like Facebook and Twitter as well as mobile communications continue to dominate our online time, and web email begins its steady decline. Total web email usage was down 8% in the past year (YOY), with a whopping 59% decline in use among people between the ages of 12-17. Cue Matt Drudge-style alarm.

Usage was also down 1% among 18-24 year olds, 18% among 25-35 year olds, 8% among 35-44 year olds and 12% among the 45-54 demographic. Because oldsters are continuing to migrate online in droves, web email use actually saw an uptick in the AARP-eligible sector, with 22% gains among 55-64 year olds and 28% among those 65 and older. Obviously this was not enough to offset the decline in youth usage.

Though the numbers don’t lie, “webmail is dying” is entirely the wrong way to look at it. My dissertation research found similar figures in terms of the pre-eminence of social communications methods: cell phones and texting are the center of young people’s (in that case, college students’) social universe, with Facebook messages more popular than email for social communications. Contra Zuck, IM is not used as frequently or centrally in their social communications, and it’s my hunch that for the most part it’s getting pushed out by texting.

But all of this changes in a professional context. Young people still use email for communicating with their parents and, in the context of college, want to use only email (and face-to-face meetings) to communicate with their professors: no cell, texting, IM, Facebook messages. Definitely not. This divide was further explicated in interviews where students explained that email was for professors, internships (and bosses there), and campus organizations – mailing lists and the like.

What’s clear is that while webmail and email are, among younger cohorts, losing their social centrality, they are not going away at all. Rather, email is becoming increasingly professionally branded. Old people (e.g., me) still use it (albeit at slightly decreasing rates) for social communications, and the ComScore report shows that the oldest cohorts are actually using it increasingly for those communications. But email has become the central tool for business communications, and as young people enter a workforce that is actually increasingly adopting webmail for professional purposes – notice the flat number among 18-24s and smaller decreases above that – email usage will endure. It just might get left at the office.

Human Freedom

Lost in Friday-news-dump-land among all the election mishegas was this:

Reversing a longstanding policy, the federal government said on Friday that human and other genes should not be eligible for patents because they are part of nature. The new position could have a huge impact on medicine and on the biotechnology industry.

The new position was declared in a friend-of-the-court brief filed by the Department of Justice late Friday in a case involving two human genes linked to breast and ovarian cancer.

“We acknowledge that this conclusion is contrary to the longstanding practice of the Patent and Trademark Office, as well as the practice of the National Institutes of Health and other government agencies that have in the past sought and obtained patents for isolated genomic DNA,” the brief said.

Regardless of whatever happens on Tuesday, this is a huge win for the future of human freedom and well-being. As we further our knowledge of genetics, leading to even greater advances in potential human wellness, those windfalls should not be the property of any individual or corporation, but rather should accrue to humanity in general. This one ruling won’t ensure that, but it reflects a necessary and welcome shift towards a more basically just future in what will be one of the most important industries and areas of development of this century.

Cell Phones and Polling

As Nate Silver reported the other week:

Pew Research issued a study suggesting that the failure to include cellphones in a survey sample — and most pollsters don’t include them — may bias the results against Democrats. Pew has addressed this subject a number of times before, and in their view, the problem seems to be worsening. Indeed, this is about what you might expect, since the fraction of voters who rely on cellphones is steadily increasing: about 25 percent of the adult population now has no landline phone installed at all.

Clearly, this is a major problem in survey research — and one that, sooner or later, every polling firm is going to have to wrestle with. What isn’t as clear is how much of a problem it is right now.

He goes on to cover several of the key issues that are specific to this case and time, but I’ll focus for a minute on the larger-scale issues. I’ve talked about some of these ideas before, and indeed we were talking about cell-phone undercounting on the Dean campaign in 2003 and Kerry in 2004 (not, as it turned out, the biggest problem in either of those cases). But as Nate says: this is a major problem that sooner or later everyone is going to have to deal with; it’s just a question of when.

Will that be this year? Hopes of Democrats aside, probably not – or at least, not provably, given the substantial problems in constructing likely voter screens this cycle. But when the dust settles and post-election analyses are done, all the pollsters are going to have to take a good, long look at their numbers and at results, and through the lens of Pew’s results, begin to (or further) adjust their approaches. Because by 2012, an even larger share of the voting-age population will be living in cell-phone-only households, due both to continued abandonment of landlines by older demographics and the maturation of millions more who’ve never had a landline (and mostly never will).

This isn’t an impossible problem, but it’s also not solvable with a silver bullet. Polling, like any sort of research, is going to need to become more multi-modal, faster-thinking and -responding, in order to reflect anything like a generalizable sample of the population. This means working harder, thinking more and understanding better the ways in which all different sorts of people use different kinds of communications technologies.
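
To make that kind of adjustment concrete, here is a minimal sketch (in Python, with entirely invented numbers) of post-stratification weighting – one standard way a pollster can correct for a sampling frame that under-covers cell-only households. None of the figures below come from Pew or any actual poll; they are purely illustrative.

```python
# Toy illustration of post-stratification weighting (not any pollster's
# actual method). All shares and support figures below are invented.

# Assumed true population shares of the phone-status strata.
population_share = {"landline": 0.75, "cell_only": 0.25}

# Shares actually reached by a hypothetical landline-heavy sample.
sample_share = {"landline": 0.95, "cell_only": 0.05}

# Hypothetical candidate support within each stratum.
support = {"landline": 0.48, "cell_only": 0.58}

# Naive estimate: average support over whoever the poll happened to reach.
unweighted = sum(sample_share[s] * support[s] for s in support)

# Post-stratified estimate: reweight each stratum to its population share.
weighted = sum(population_share[s] * support[s] for s in support)

print(f"unweighted estimate: {unweighted:.3f}")  # 0.485
print(f"post-stratified:     {weighted:.3f}")    # 0.505
```

The arithmetic is the easy part; the hard parts are knowing the true population shares for each stratum and actually reaching enough cell-only respondents to estimate their preferences in the first place – which is exactly why multi-modal approaches matter.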

Your Voices, Our Selves

One of the best ongoing investigations of thought and the universe is Radiolab, a show produced at WNYC by Jad Abumrad and Robert Krulwich (no small point of pride to me, both Oberlin grads). One of their very best and most mind-blowing episodes came a couple months back, called “Words.” I’d recommend you listen to the show in its entirety, and there are dozens of strands I could pull out and discuss all day. For now, I’d like to focus on the (intentionally) provocative claim made by Charles Fernyhough, a writer and psychologist at Durham University (UK):

“I don’t think very young children do think.”

Spinning this out in a later podcast led to (to my total delight) an in-depth discussion of L.S. Vygotsky’s theories of self and child development, especially on the internalization of speech – learning to refer to oneself as one refers to others.  The podcast focuses on non-normative variations in development – how sometimes, people internalize not just their voice but other voices as part of their internal monologue. Or dialogue. This can in its worst instantiations lead to things like schizophrenia, which is bad.

But I’d like to move one degree further, and think about how these issues relate to ideas of ourselves, and to our shifting media consumption and discussion habits.

Contra the much-discussed Death of Reading, the media landscape today in fact represents the apogee of reading in all of human history. More people are literate today than ever before, and they consume more written text than ever before. That they do not all do so through a printed medium called “book” or “newspaper” is beside the point, as is the fact that they also watch television. Words are being consumed and produced internally in simply staggering amounts, and a great deal of many people’s days – in the developed world and in less-developed countries alike – is spent consuming and producing words in their own heads.

What is the effect, then, of all these internal words on our own personal monologues? What is the effect, in particular, of the chatter of social media, where the voice is not our construction of some (anonymous or not) authority from a Media Source, but that of people we know, whose actual voices – both written and spoken – we are familiar with?

One of the most elegant definitions of self, to my mind (also referenced in “Words”), is that it is nothing more than a continuous story we tell: one thing happened, then another, then another, all in the same voice, and that’s how I’m me. Schizophrenia and similar disorders are so terrifying because that basic premise is violated – all of these voices are competing for attention, and it becomes impossible to determine what is real, or who you are.

Pulling all of these threads together, then, the question becomes: what happens to the story of ourselves when the “I” is spending so much time with the “we” and the “they” inside our skulls? As a purely personal anecdote, I do know that while I know more specific and timely things than I used to, source attribution is often murky. Did I hear that on the radio, or when talking to a friend? Did I think it myself, or read a blog? Does it matter?

This is not a new question or problem, entirely – the tension between individualism and communitarianism stems from the same dynamic. But the scale of this shift in our internal voices is unprecedented, as is the breadth of effect in the day-to-day lives of people in our technologically-mediated culture. While I tend to eschew both Utopian and Dystopian readings of technology’s effects on us (the Internet being, like Soylent Green, made of people), I do think that it’s worth considering (agnostically) what the longer-term effects of a society-wide shift in the kinds of internal voices we maintain might entail. Probably a big deal.

“The world is changed… much that once was is lost, for none now live who remember it.”

I’ve lately had the sensation of living in the future – not the future of robots and flying cars (both in still-tragic short supply) but the future of my life, the future of something New and Different. This has caused me, in turn, to consider just what it is that is new or different, and just what is meant by Future, Past and Present.

We are all of us the epic heroes of our own tales, the centers of action and narrative direction, the most dramatis of personae. So it is fairly obvious to see why my internal librettist would determine this to be a turning point in the action: some months of great preparation leading to a grande moment, followed by a change of scene and a journey into the unknown. The curtain falls, the screen fades to black, End of Book Two – resume in medias res some months or years along, when sufficient interest and tension have built for my next act.

Human that I am, I look for patterns to justify this perception, and believe that I have found them. From where I stand now, the 2000s look like a migraine-filled interregnum – citizen of a country making awful decisions, resident of a planet trundling into irreparable change, confused twentysomething unsure of my place in the world or in myself. The Bush years, even while ongoing, always had the eerie unreality of a dream state. That they were succeeded by the election as President of a black man named Barack Hussein Obama was no less hallucinatory, even if I have the pictures on my cell phone to prove it.

And now awake, and the dream was more and less true for good and bad, but we must live with what was wrought through sleepwalkery. I am an adult (or something like it) in this world after kidhood in the pleasant-smelling 1990s, but even while history spins around again the wheel’s not quite the same for its rotation. Anti-government zealots were just as crazy and well-funded in the bad old days of the Arkansas Project and Tim McVeigh, but today’s wingnuts are self-consciously the stars of their own reality television shows and the media an ever-more-efficient conduit for that effluent.

But then there’s always authoritarians, aren’t there, no matter the names they use or the shirts they wear. My villains of copyright maximalization, seedbank patent-squatters and cynical political operatives sure seem to be wearing black hats: everyone does in silhouette.

I can’t really worry about that, though – can’t have access to more than one subjectivity, can’t have the cut-shot pan-over Cinemascope wide angle. Acting As If is the best I can manage.

So for me, right now, I’ve arrived in the future. Things change always, but a period of flux is over and a new dynamic will be the setting for our action over the next little while. It’s a world where the benefits of communications technology accrue in innumerable ways to increasingly huge numbers of the world’s people, but where material economic growth will remain stagnant for the foreseeable future – especially for those of us who already have more than our fair share (but not those with way, way, way more than their fair share). It’s a world where despite these unmistakable improvements to our everyday lives (all of us: next year or the one after, more than half of the citizens of Earth will be able to call each other on the phone; soon after, two out of three, and we haven’t even begun to think about How This Changes Everything), the main task of my professional career and political life will be fighting a rearguard action against Know-Nothings who reject a rationalist worldview: people for whom evidence is bias or proof of its opposite. It’s a world where the institutions – national and international – that have done such a good job getting us Here (for good and for ill) are terribly ill-suited to getting us to some better definition of There. Some of those will get better: many will get worse.

But here we are, innit? And what is my role in this world, this Future? I’ll greatly enjoy figuring that out.

Planet Money reports:

Phones running Google’s Android operating system outsold the iPhone in the first quarter of this year. What’s more, BlackBerry phones outsold both iPhones and phones running Android.

BlackBerry phones, which run an operating system from Research In Motion, had 36 percent market share, according to NPD group, a research company. Android phones (including the widely advertised Droid) had a 28 percent share. And iPhones, which run Apple’s own operating system, had a 21 percent share.

It was the first time Android phones outsold iPhones.

…and iPhones will never again outsell Android phones, until and unless Google renames Android. Here’s why.

The iPhone is a great tool, and as Charles Stross has been pointing out for a while, Apple is making a big bet that their future is not as a hardware company with ancillary software but as a platform company with ancillary hardware.  Because Apple is Apple, they want to control this platform totally, so this means they make the hardware that runs the platform, and they control the price-point. This works up to a point, but simple economics on both ends (theirs and consumers’) dictates that there’s necessarily a ceiling for both the number of iPhones they can make and the number of people who might buy iPhones. Apple survives by making those numbers as similar as possible, but it’s never going to be approaching 100% – or 50, even. Twenty percent market share is pretty substantial, but I wouldn’t anticipate Apple’s share of any market getting bigger than that.

Research in Motion’s continued dominance of the smartphone market is pretty impressive, and they’ve wisely kept their sights firmly focused on doing one thing and doing it well: making a business- and email-centric device that just plain works, and that its users stick with through multiple generations and structure their digital lives around.  The Blackberry appears to have staying power, but with a substantial caveat: it’s a perfect device for email (and texting) but not for Web2.0 and the social web.  That’s fine – there will always be a business and power-user market – but it’s tough to see RIM’s market share increasing much beyond where it is now (I’d expect it to shrink and stay put at a lower level), because as my research shows, young people don’t really have email as the central communications method of their digital lives.  Phones are central, texting especially so, and the social web after that. Email is for professors and the professional world, so for those that head that direction, Blackberries are in their future.

But Android really is the future of the mobile environment, over the next several years. Like the Apple ecosystem, it’s an app-heavy and social-web-facilitating environment, but unlike Apple and RIM, Google is happy to let anyone (on any mobile network) make phones that run its OS – and thus experience the mobile web the way Google would like you to. Which is, for the moment at least, preferable: no censorship in its app store, and a wide (and widening) range of hardware choices on your preferred mobile carrier. Anything that fights against the Web turning into a set of walled gardens, I can heartily endorse. Android will also push prices down for all smartphones and for access to the mobile web by offering experiences comparable to the iPhone and Blackberry without the single-system lock-in, and that’s (clearly) preferable, too. While the jury is still out on Google’s entrance into the hardware world, it’s not as important as their introduction and support of an open and high-quality platform for the Web to move to mobiles without intermediation and without sacrificing its values and variety.

Forcing the Party Line

When telephones were first invented, you didn’t just call a number and get a person on the other end: usually, first you’d talk to an operator, who would then connect you to a local loop where the desired party resided.  There were special rings within each loop to distinguish who was getting the call, but if someone else on your loop or the loop you were calling wanted to listen in, you couldn’t stop them.  This was a function of cost – it was pretty expensive to get a residential telephone line before the federal government guaranteed universal access and then deregulated the phone companies.

This was, it’s pretty well agreed, a bad system notwithstanding the excellent fodder it produced for light farce.  The residential system that replaced it was pretty problematic, too, leading as it did to:

a 70 year or so period where for some reason humans decided it was socially acceptable to ring a loud bell in someone else’s life and they were expected to come running, like dogs.

So, not the best either, even though it too did produce some great songs.

The growth of electronically-mediated and time-shifted communications may have a mixed record on a lot of issues, but it’s an unambiguous good in terms of individuals’ control over the method, mode and timing of their responses to communications. Communications where the sender is unsure of the extent of the audience, or where the receiver is potentially forced into confrontation, are not beneficial for either the clear conveyance of meaning or social cohesion.

Which is why Facebook’s recent actions are both troubling and perplexing. By making all connections public for all users, Facebook is blurring the audience and forcing potential confrontations (between managed identities, work and personal lives, etc.) on every one of them. The shift in Facebook’s privacy settings takes as its central premise that the advances in telephone communications of the past century were a bad idea. It is forcing all of its users into an always-on global party line, where the conversations are transcribed and sold to all interested parties. That’s not good.

Digital technologies give us the ability to talk to basically whomever we want (and only them) whenever we want (and only then). That Facebook would consider these to be bad things is deeply weird, and makes a compelling case against using it as a central mode of communication.
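
As a purely illustrative aside: the design principle at stake – the sender chooses the audience per message, and nothing silently widens it later – is simple enough to sketch in a few lines of code. This is a toy model in Python, with made-up names; it is not Facebook’s or Google’s actual data model.

```python
# Toy model of audience-scoped sharing (the "buckets you choose" principle),
# as opposed to a single public-by-default graph. Illustrative only.

from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass(frozen=True)
class Post:
    author: str
    text: str
    audience: FrozenSet[str]  # the only people who can ever see this post

@dataclass
class Stream:
    posts: List[Post] = field(default_factory=list)

    def share(self, author: str, text: str, audience: set) -> None:
        # The author picks the audience per post; no global default
        # silently widens who can see it later.
        self.posts.append(Post(author, text, frozenset(audience)))

    def visible_to(self, viewer: str) -> List[str]:
        return [p.text for p in self.posts
                if viewer == p.author or viewer in p.audience]

stream = Stream()
stream.share("alice", "Weekend plans", audience={"bob", "carol"})
stream.share("alice", "Draft slides for work", audience={"dan"})

print(stream.visible_to("bob"))  # ['Weekend plans']
print(stream.visible_to("dan"))  # ['Draft slides for work']
```

Circles, at least as pitched, is essentially this: the audience is an attribute of each act of sharing, not a single global setting that applies to everything you have ever posted.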