Jim Fallows puts quite well just what it is we have lost, with the necrosis of Twitter:

Twitter, now X or Xitter, is in hospice. No one knows how long this stage will last. Perhaps no one will ever know whether it was on purpose, through narcissistic impulse, or by sheer incompetence that Elon Musk destroyed the most valuable function that Twitter over 15 years had evolved to serve.

That role, Twitter at its best, was as a near-instant, near-global nervous system that could alert people to events anywhere. It could be an earthquake, an outbreak, an uprising, a World Cup match: through its own version of AI, the old Twitter could direct attention to the people and organizations best positioned to comment about it. That early AI-before-the-name was known as “verification,” which helped you know at a glance which updates were coming from, say, the Ukrainian government after a rocket attack, or Martina Navratilova during a Grand Slam match, or Joan Baez after a concert or protest march. And which updates were not.

https://substack.com/inbox/post/135526939

I think that’s exactly right, and it’s also why, no matter how amusing or interesting or low-level-pleasantly time-wasting Bluesky or Mastodon or Threads (etc., etc.) might be, none of them is going to *replace* Twitter.

As Nilay Patel has elucidated repeatedly, the product of social media is moderation, and while Twitter had, let’s just say, an uneven record on this count, Fallows gets to the heart of the element of Twitter moderation that actually worked: a combination of official architecture and hive-mind aggregation that could, faster than any media or technology we’ve had since or probably will have for a while, communicate what was happening.

This function, it’s worth noting, was already breaking down before Elon started guiding the ship to the bottom of the ocean – bad actors of all sorts (cynical political operatives, crypto/NFT scammers, etc.) were leveraging Twitter’s centrality in determining thing-happening-ness to spread mis- and disinformation, run grifts, and generally pollute the information space. It’s a wicked problem and maybe one that some retro-future version of Twitter management could have handled, but – alas.

None of the nascent alt-Twitters, however, are offering a model that (at present) stands a chance of recreating the Twitter-that-was – Bluesky’s velvet-rope approach of a closed beta with limited invites is inimical to scaling, and its moderation leaves something to be desired; Mastodon seems focused on being a scoldy nerd clubhouse; Threads is explicitly not going to be for news and politics, to say nothing of Meta’s, uh, uneven past with content moderation and current willingness to narc on users exercising their bodily autonomy, and help send them to prison. Threads’ rollout of a “Following” tab where you can actually see posts from people you follow (and only them) in chronological order is good – but they can’t help themselves, as there’s no way to make this permanent (it reverts to a suggested, non-chronological timeline every so often). This commitment to non-chronological sorting as a key property of the app (and the lack so far of a desktop app) makes it an impossible solution to the Twitter-shaped hole in our networks.

And there is a hole, even if you weren’t on Twitter. Getting back to Fallows’ metaphor, Twitter did indeed function as “a near-instant, near-global nervous system” that communicated to other parts of the global body. Famously, members of the media were over-represented there (for good reason! it’s where you found out about and disseminated news!) but also there were members of many communities – Black Twitter, comedians, shitposters, human rights activists, sports fans, and others – who performed the function not only of producing content on Twitter but also of connecting it to communities of interest. These niche communities had their own internal logics and discourses, and were connected with other digital networks – surfacing trends from other social media (over time, variously Tumblr, TikTok, and especially networks not dominant in the US and Western world) and also pushing content and consensuses from Twitter back over to those communities.

Importantly, the withering of Twitter does not mean that these local communities cease to exist – but they now in many cases lack the connection to mass-ness that Twitter provided. That mass-ness resided not necessarily in Twitter itself – which topped out at about 30-something percent market share (dwarfed by Facebook and Instagram in the West, by other sites and protocols elsewhere) – but in its connection to the over-posters and media members that defined its audience. Twitter was never quite a public square, but it was an accelerant for discourse, and helped facilitate access to a megaphone for many groups that had never had that access or opportunity.

And most importantly, through that access and acceleration, it became, for a while, a place where you really could get more of a sense of the everything that was going on. Now: this mostly felt terrible. Twitter was the Hellsite for a reason – it’s hard to take on board all the news of the world, because so much of it is bad. But there was a moment of access and honesty to it all, a falling away of the scales from the eyes, the sense that you could for a minute see the system of the world.

Of course this wasn’t ever quite true, Twitter wasn’t real life, and so on. But it was more true, especially at moments of crisis, than it has been elsewhere – and Twitter’s lack of hard moderation let more of the uncomfortable truths bubble up. Threads very clearly doesn’t want that to happen – wants that not to happen. Mastodon wants to stick to its literal knitting. Bluesky wants all the jokes and fun of the top posters without the responsibility of mass scale. It’s not really even worth talking about the right-wing Twitter clones, which all inevitably fail because right-wing posters just want to harass and dunk on left-liberals, and don’t want to just hang out with each other. None of it quite works.

But I’m not sure we want it to work, right now, because I’m not sure there’s a coherent we that can bring together the combination of social knowledge and moderation theory, engineering expertise, capital, audience, and theory of the case. And maybe that’s fine. Google is actively and passively breaking search, money is rushing to “AI” tools that will eat the Internet, then themselves, and pollute the open Web with their excretions, Reddit is in the middle of a dramatic self-immolation, and journalism’s future looks bleak (with a few green shoots of possible futures). To say nothing of the ongoing epistemic crisis in the US and much of the rest of the world, with the underlying basis for determining truth increasingly divergent among communities.

There’s not a snappy ending here – we are kind of drifting in space. But I’ll end with a few questions, and endeavor to pursue those more in the future:

- What do you, personally, want an information ecosystem to look like? What would be good for you?

- What does a sustainable information ecosystem look like, in theory – and how does that square, or not, with current conditions?

- What can we learn from the current bust cycle of mass social media that can help inform whatever comes next?

More anon!

I predicted Google+ would (or could) succeed. Why? Because Google already had access to our social graph, through Gmail. It failed.

Meta also has access to social graphs through Instagram – does it follow that Threads will succeed because of that? No. Will it also fail because Google+ failed? No.

But it will fail unless Meta learns (lol) what Google didn’t, and what they’ve already failed to learn on previous product launches: that the social graph isn’t static but dynamic, and that it also includes both cruft (old, now inessential connections) and frivolity (people, or accounts, that we follow for fun in one context but don’t really care about). As discussed on the Vergecast, Meta did make the right move in not basing the social graph for Threads on Facebook, which even it has to know is full of what basically amount to broken links. But they’re not alone – Twitter, even pre-Elon, was taking on a similar feel, with a lot of existing linkages and dominant voices coasting on inertia but not being currently essential in the same way. Instagram is a main source of connection for many, but for most, I’d argue it’s a source of passive entertainment or at least pleasant-enough distraction – these are not necessarily your emergency contacts (though they may be in there, somewhere – and many aren’t there at all).

Does Meta know the difference between these kinds of connections? I think they’d say they do, that our behavior is revealed preference, but I’m not sure. IG shows you what you engage with, yes, but also pushes you to engage with the algorithmically determined “stickiest” content – building a self-similarity into everyone’s social graph, with content made to meet those specs churning in an endless tautology.

All of these links in a given social graph are contextual and may or may not map directly onto a social graph with a different underlying context, fulfilling different underlying needs. Do I “really” want to read text from a cloyingly cynical cute cat account? Probably not! But mapping IG’s social graph to Threads, including both connections and suggestions, means I’ll be opted into a system that thinks I want to.

Jason Gilbert nails the vibes, and the trajectory:

What does Threads feel like?

Threads feels like when a local restaurant you enjoy opens a location in an airport.

It feels like a Twitter alternative you would order from Brookstone.

It feels like if an entire social network was those posts that tell you what successful entrepreneurs do before 6AM.

It feels like watching a Powerpoint from the Brand Research team where they tell you that Pop Tarts is crushing it on social.

It feels like Casual Friday on LinkedIn.

Will Threads last? I don’t know. It is an app stuffed with verified users I’ve never heard of who have 7 million YouTube subscribers. They all do Epic Pranks and they spread Positive Vibes and they Don’t Talk Politics Here.

And similarly, others have pointed out that the Good Internet is where the freaks and weirdos hang, and that the (mostly accidental) trajectory of Twitter as the place where freaks and weirdos hung out – and seeded the culture to make everyone a bit more of a freak and a weirdo (also unhappy, etc.) – was what made Twitter special, for a little while.

Threads will never be fun, it will never be weird – as Gilbert notes, its culture is being seeded by the winners and dominant presences of a separate social platform with its own established culture. Will it succeed? Maybe. There have been plenty of times in our culture when the fun and weird were purged from the mainstream. Mark Zuckerberg has a vision of culture that is not fun, that is not weird, but that is deeply prudish and misogynistic (as Taylor Lorenz notes – no [women’s] nips on Threads); it’s disconnected from the material circumstances of our world (Meta is currently threatening to remove all news links from Canadian Facebook). His vision of the world is happy-clappy, PG-rated soft-focus positivity, with those who transgress thrown out of the garden with extreme prejudice, little explanation, and no recourse. We’ve certainly been in a similar place before, and maybe we’ll be there again (maybe we’re already there!).

But I hope one thing that comes out of this disruption is the freaks and the weirdos getting back to making their own fun, in their own spaces, for their own reasons. I don’t think that happens on a social media platform – or at least, not any of the ones we’re talking about now. But maybe at some point you’ll hear about it, and show up and lurk around the edges, and watch something new being made. 

Look at your phone. Go on, look at it. What is it?

It’s a clock. It’s a text-messaging glass slab. It’s a dynamically updating map/tracking device. It’s a ticket. It’s a late-night magazine. It’s an alarm clock. It’s a camera, photo album and publishing platform. It’s a gaming device, newsfeed(s), and a tether keeping work with you 24 hours a day.

Your laptop: it’s forty tabs open at once, word processing documents, music libraries (if you’re old), an EVEN BETTER gaming device, a TV and movie-watching platform, an audio editing suite, and, uh, other forms of entertainment.

You use these devices for dozens of different purposes, out of convenience and functional capacities. What I want you to think about is who you are in each of those purposes, and for whom you are in those purposes.

One of the most intriguing findings from my dissertation research (read it! become a member of a tiny club!) lo these four years ago was the degree to which students segregated audiences by medium. As I put it, they “use different communications technologies in their interactions with social, familial and academic audiences, in part as a manner of combatting the context collapse taking place on social network sites and mediated communications generally.” More directly: they talked to their friends via text message and Facebook message, called their parents on the phone, and only and ever talked to their professors in person and via email. That was, as they say, interesting, and something worthy of further study.

Well: I didn’t. But while the particular practices have shifted in the intervening time, these behaviors are no less intriguing or worthy of study and contemplation.

Cross-medium behavioral research is rare for a number of reasons. It’s expensive, difficult, time-consuming, methodologically fraught, ethically fraught. But I think the main limiting factor is that in any given moment, the incentives for any organization or individual performing research are to answer their central questions as quickly and cheaply as possible. For an advertising firm: how did a given campaign deliver on KPIs as promised to the client? For an academic researcher: how does X behavior impact my hopefully-tenure-securing line of research? For a membership organization: what were the A/B test results on a fundraising solicitation?

And to be crystal clear, this is NOT a problem solved by “Big Data.” Few but the most world-spanning organizations have the capacity to iteratively formulate hypotheses, expand data collection across boundaries, and act on findings. And the evidence suggests that even those world-spanning organizations don’t really know what to do with their endless reams of data. But, really, that’s neither here nor there: if you aren’t inside one of the world’s larger walled gardens of behavioral data, you’re still left with the same question. Namely: just who are your users, and who (and when, and how) are you to your users?

One of the foremost issues is attention. There are two ways of looking at attention: as something to maintain, and as something to be acquired. From your perspective, dear reader, you of course want to maintain sustained attention – on relationships, on work, on engaging culture. An advertiser, on the other hand, wants to capture your attention. Chartbeat – which makes a fantastic suite of products for publishers that I’ve used and enjoyed – is part of a tech vanguard that recognizes this. As they put it:

Online publishers know clicks don’t always reflect content quality.

But research shows more time spent paying attention to content does.

Advertisers know click-through rates don’t matter for display or paid content.

Research shows 2 things matter for getting a brand’s message across: the ad creative and the amount of time someone spends with it.

The Attention Web is about optimizing for your audience’s true attention.

From their perspective, attention equals quality, and a shift to focusing on quantifying attention means better quality content (oh and also more clients). It’s a compelling thesis – but then, it is your attention that they’re selling, to advertisers. Others are more interested in selling your attention to, well, you:

As our computing devices have become smaller, faster, and more pervasive, they have also become more distracting. The numbers are compelling: Americans spend 11 hours per day on digital devices, workers are digitally interrupted every 10.5 minutes, with interruptions costing the U.S. economy an estimated $650 Billion per year. That’s a lot of distraction.

Device makers have largely turned a blind eye to this issue, building distractions in to the very devices we need for work. We address this challenge with tools that simply and effectively reduce digital distractions. Our software interrupts the habitual cycle of distraction associated with social media, streaming sites, and games.

Attention is basically an adversarial dynamic: your devices and the advertiser-supported content therein yelling at you while you struggle to maintain concentration. Many or most of us are in this stage of managing our relationships with digital communicative prostheses – a struggle. It’s not a struggle without benefits, but nor is it one without costs – study after study shows the costs to both productivity and personal health and well-being of a consistently-interrupted existence.

A central part of this struggle is creating a hierarchy – either explicit or implicit – of attention. When do you respond to a text message? It depends when you receive it, and from whom. Do you return an email? Again: who sent it, work or personal, when did it get received? And then: what do you read, or listen to? That also depends – how did you get there? A link from a friend, an immediately-forgotten source on your social media timeline, through a series of unreproducible clicks? The depth, length, and quality of the attention devoted depends on all these factors and more – but I believe it’s impossible to understand the meaning of a given interaction without looking at how these hierarchies are created.

Big Data and Foxes

Earlier this week I went to an excellent discussion put on by danah boyd and her Data & Society Research Institute, entitled “Social, Cultural & Ethical Dimensions of ‘Big Data.’” Right off the top, I have to give major kudos to danah for organizing a fantastic panel that incorporated a great combination of voices – who, not for nothing (indeed, for a lot) were not just a bunch of white dudes (only one white dude, in fact) – from across different disciplines and perspectives. I’ll do a brief play-by-play to set the table for a couple of larger thoughts.

Following a rigorously on-message video from John Podesta and a fairly anodyne talk (well, except for this) from Nicole Wong of the White House Office of Science and Technology Policy, danah led off with introductory remarks and passed off to Anil Dash, who served excellently as moderator (mostly by staying out of the way, as he made a point of noting). Alondra Nelson from Columbia University was first up, giving an account, by turns moving, terrifying, and engaging, of the state of play and human consequences flowing from DNA databases – both those managed by law enforcement and the privately managed repositories that exploit loopholes to skirt privacy protections. She was followed by Shamina Singh from the MasterCard Center for Inclusive Growth, who provided several on-the-ground examples of working with governments, NGOs, and poor people to more efficiently deliver social benefits. In particular, she focused on a MasterCard program to provide direct transfers of cash to refugee populations, cutting out the vastly inefficient global aid infrastructure network.

Singh was followed by Steven Hodas from the New York City Department of Education, who laid out an illuminating picture of the lifecycle of data in education systems, the ways in which private actors subvert and undermine public privacy, and – not just a critic – offered a genuinely thought-provoking new way of thinking about how to regulate dissemination of private information. The excellent Kate Crawford batted cleanup, discussing predictive privacy harms and what she called “data due process.” Dash facilitated a very long and almost entirely productive audience question and discussion session (45 minutes, at the least), and I left with many more things on my mind than I entered with. I’d had the privilege of listening to eight different speakers, each from a background either subtly or radically different from one another. Not once did a speaker follow another just like them, and no small value came in the synthesis from those differing perspectives and those of the audience.

This week also saw the relaunch of FiveThirtyEight.com under its new ESPN/Disney ownership. It launched with a manifesto from founder Nate Silver, entitled “What the Fox Knows,” which is a bit meandering but generally comes down to setting FiveThirtyEight in opposition to both traditional journalism and science research, based on some fairly blithe generalizations of those fields. What it doesn’t quite do, oddly for a manifesto, is state just what FiveThirtyEight is for, other than a sort of process and attitudinal approach. Marx (or even Levine/Locke/Searls/Weinberger) it ain’t.

Silver has come in for no small criticism, and not just from his normal antagonists. Emily Bell laid out the rather less-than-revolutionary staffing makeup of the current raft of new-media startups, led by Ezra Klein, Glenn Greenwald, and Silver. And Paul Krugman detailed some rather serious concerns about Silver’s approach:

you can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking. If you think the data are speaking for themselves, what you’re really doing is implicit theorizing, which is a really bad idea (because you can’t test your assumptions if you don’t even know what you’re assuming.)

These two critiques are not unrelated. Bell called out Silver for his desire for a “clubhouse,” and rightly so, because groupthink clubhouses – whether of insiders or outsiders – are the most fertile breeding grounds for implicit theorizing. Krugman revisited and expanded his critique, saying:

I hope that Nate Silver understands what it actually means to be a fox. The fox, according to Archilochus, knows many things. But he does know these things — he doesn’t approach each topic as a blank slate, or imagine that there are general-purpose data-analysis tools that absolve him from any need to understand the particular subject he’s tackling. Even the most basic question — where are the data I need? — often takes a fair bit of expertise.

Which brings me around to the beginning of this post. The value in Monday’s discussion flowed directly from both the diversity – in professional background, gender, ethnicity – and the expertise of the speakers present. They each spoke deeply from a particular perspective, and while “Big Data” was the through-line connecting them, the content which animated their discussion, approach, and theorizing was specific to their experience and expertise. The systems that create data have their own biases and agenda, which only discipline-specific knowledge can help untangle and correct for. There is still no Philosopher’s Stone, but base metals have their own stories. Knowing their essential properties isn’t easy or quick, but little is easy that’s of lasting and real value.

There’s been a lot of hyperventilating over the recent news that Green Mountain is going to start cracking down on competitors designing coffee pods for their gross coffee makers:

With its single-serving coffee pods, Green Mountain Coffee has transformed the business of brew. Pop a capsule into one of the company’s Keurig machines, and the machine will instantly churn out your daily caffeine dose.

But Green Mountain doesn’t want copycats taking the business it pioneered away. That’s why CEO Brian Kelley says its new coffee makers will include technology that prevents people from using pods from other companies. The approach has been compared to DRM restrictions that limit the sharing of digital music and video online. But more than just curbing your coffee choices, Green Mountain’s protections portend the kind of closed system that could gut the early promise of the Internet of Things — a promise that hinges on a broad network of digital, connected devices remaking the everyday world.

Cory Doctorow thinks it’s a bridge too far and might end up promoting a backlash, or that Green Mountain’s overzealousness might end up with good court rulings against them. Dan Gillmor isn’t bothered by the coffee pods per se but takes a rather grander tone in conclusion:

We’re still in the early days of this war – and make no mistake, that’s what we face. The interests that want control over our lives and pocketbooks are wealthy and powerful. People are waking up to the threat. Now we all need to fight back.

I don’t disagree with either, really, but am a bit more sanguine in general, because of Microsoft Windows. Windows might be a bit of a punchline these days, but I’m old enough to remember when it was an existential threat to commerce and resulted in one of the biggest antitrust cases ever. Though the ruling ended up going against Microsoft, the remedies were, viewed from today, pretty minor. There were no Baby Bills, and Microsoft continued to have a crushing monopoly on the operating system and office productivity suite markets, and make money hand over fist.

And then… the world changed. Microsoft continued to fundamentally misunderstand the Internet, and Apple managed not to go out of business, then became the plucky underdog, and then one of the world’s most profitable corporations. Google won the search wars and then started eating everything remotely related (and not) in sight; Facebook was the best/last SNS. And Microsoft still had a massively profitable near-monopoly on the operating system and office productivity suite markets. The Beige Eminence of Redmond continues to have a massive role in day-to-day life around the world, often in problematic ways. But the last 15 years of tech history have shown that despite market control and massive profits, Microsoft cannot shape events entirely to their liking. They were chastened by U.S. v. Microsoft, sure, but remain far more chastened by the fact that Internet Explorer and Bing are terrible and nobody likes them. Billions of person-hours are spent annually in computers with Microsoft OSes and in Word and Excel – but that hasn’t changed the fact that Microsoft cannot understand, at a very basic level, the things that people spend most of their time doing on computers.

So: coffee pods. Coffee pods were a good idea as a response to legitimately terrible/inconsistent office coffee; to people who like coffee but don’t think about it too much and like not having to clean up coffee grounds (but seriously people, COMPOST, that is some good compost there); and a clever dumbed-down cashing-in on wider use of espresso machines. I briefly worked a job where the office had one – that was great, because otherwise the coffee would have been worse-than-mediocre, in all likelihood; I like it when I stay in a hotel and there’s a coffee pod machine there. But I’ve never for a second considered getting one, myself.

But mostly, coffee pods are a cheap plastic mediocrity, a Beige Eminence which performs the task of producing coffee product with little thought or distinction. The principle of defending markets from monopolistic domination is important within a system of regulated capitalism (see: US broadband access), but in this case, what is the principle really defending? The opportunity for other suppliers to make a slightly-thinner margin on a mediocre way of delivering coffee to office workers and lazy suburbanites? And there are plenty of other coffee pod standards. It’s also not as if there’s any lack of ways to make – much better! just as easily! – coffee. Or tea. Green Mountain’s power play is, ultimately, a small-time move that doesn’t really impact important innovation in coffee delivery systems. It’s also a natural impulse and end result of our particular market system, and isn’t really worth worrying about until and unless a number of lawsuits that haven’t happened yet have bad results.

The Death of Comments

I don’t know if this quite qualifies as a trend, but then, trend pieces don’t need real facts, so what the hey. Something I’ve noticed these last months across several major sites is a move away from the traditional, boring-but-understood approach to comments. While pretty much everyone already agreed that YouTube comments were the worst thing on the Internet, somehow Google managed to make them worse. Not more misogynistic, homophobic, racist, or violent – that’d be hard – but far more nonsensical. In necessitating a Google+ account (which, mea culpa, continues to be totally useless), it shut out many, and in re-threading the conversations based on “relevance” it took away the free-wheeling (often awful, but still) conversational threads of comment sections.

Similar complaints have followed on Gawker’s transition to Kinja, but perhaps the most ridiculous post-comment context has to be Voice Media Group’s new “My Voice Nation” system, which I was alerted to after it pulled in a second-order @ exchange I’d had about one of its stories. Not the tweet itself, but the conversation I’d had about the tweet, with a friend. The “comment section” thus becomes a random mash of unrelated, unconnected words – a documentation of buzz, perhaps, but in no way a conversation.

And of course: I never signed up for that. I did send that tweet, yes, and I suppose that’s public-ish, but again – not all publicly accessible data is meant to be publicized. I’m guessing most people won’t ever notice, but for me, I’ll just make sure to never send out or comment on a piece of Voice Media Group content unless it’s unavoidable (which is to say: never).

What’s curious about this Death of Comments is that they’re not being eliminated as a feature for principled reasons, or as a straight cost-benefit analysis (i.e., it doesn’t really make sense to have a community manager paid to make the comments not *quite* so execrable). Rather, the transition seems to be away from comments and towards a comment-like substance – words related to the content, written at some point in some medium, presented in some relation to the content. I’m not sure what the long game is on that, but it’s all a little lorem ipsumy for my taste.

Following on news from the Guardian that Facebook saw a nearly 2% decline in active UK users over the holidays, I thought I’d briefly cover some of the implications of this news, from my perspective.

  • Obviously this has been coming for quite a while in core markets. As the Guardian notes, in the UK Facebook has 53% market penetration, second only to the US at 54%; in terms of gross users, the US has 169M, Brazil 65M, India 63M. Clearly the play is hoping for further expansion in the latter two markets – but that proposition is tenuous, both because of the fast-growing but still-smaller middle classes there, and because,
  • Facebook still doesn’t get mobile. Its apps are still only-OK in terms of usability, and as witnessed by the Instagram terms-of-service clusterfcuk – which resulted in more than 50% decline in users – Facebook has a fairly poor understanding of the mobile user. Which is especially unfortunate for its future expansion in emerging markets – e.g., Brazil, India – as connectivity there is primarily through mobile devices, and not desktops.
  • Facebook as public company has always been a questionable proposition, as its whole model of ad-rate-growth-driven-by-traffic-growth-driven-by-user-growth is inherently untenable given that… at a certain point you run out of users. Also, the fact that every social network site so far has seen long-term time-on-site decline from its core users. Basically: if you’ve been shorting $FB, you’ve got to be feeling pretty good right now.
  • Facebook as social utility isn’t going anywhere anytime soon. Too many people, content providers, websites, and the general infrastructure of the Web have too much locked in for that to happen. But there are various ways that it can evolve from here. I’m still convinced that long-term there will be a competitive market for identity hosting, and that Facebook’s best move is to get in front of that in both setting open standards and providing a premium service; but we shall see.


I wrote a column for the Txchnologist, and it’s over here. If you like what I’ve been writing about here recently, you’ll like this too. A preview:

Social media, despite its centrality in our daily lives, still causes most businesses to tremble with fear. They fear liability over what employees may post in their official capacity. They fear embarrassing information posted by employees, both current and potential, in their off hours. They conduct social media “background checks” to ferret out anything that might reflect poorly on the business. Such is this fear that social media sites are discouraged or outright blocked at many workplaces.

As modes of business communication, social media channels are treated as loudspeakers, with messages painstakingly cleared through legal and public relations, polished to a perfect sheen and devoid of real meaning. Meanwhile, email remains the central trusted tool of business communications. Used internally, it is the official channel for directives, meeting planning and document-sharing. It is the central way to communicate anything that matters both within your organization and to any collaborators. For external communications, email lists are built, maintained and bombarded. Huge marketing dollars are spent formulating email segmentation strategies, word-smithing, and tracking open rates.

All of this is entirely backwards.

Read the whole thing!

Having had a few weeks to digest both my initial thoughts on Google+ and the experience of actually using it, I thought I’d step back and offer a 30,000-foot-view of where I think things are, and are going, in this space.

First: Google+ is still pretty nice, even if it doesn’t quite know what it is. That’s fine! It’s been a month in pseudo-beta, and has 10M users. The larger picture here, I think, is this:

Email is dying. Smart people are helping kill it. Google understands this, and understands that gMail is at this point a continually depreciating asset (something MSFT never recognized about Hotmail). Google+ is, among other things, about providing gMail users a bridge to a post-email digital social space with minimal transition costs (always the biggest barrier to entry for a new social service). The social graph is already built into gMail users’ contact lists – Google+ is just about bootstrapping a different interface onto it.

There are real privacy concerns about Google+, and the mass deletion of accounts shows that Google is still struggling with pseudonymity. But long-run, the proposition is clear for both Google and its users.

Let me backtrack and say that I don’t think email will die, exactly, but rather become a mode of communication used for some things, sometimes, but not everything all the time. Physical mail, similarly, isn’t disappearing anytime soon (well not until 3D printing really scales up), and while telephony might be wearing different hats these days (i.e., mobile and digital rather than locked-position and analog), the fundamental dynamics are the same.

What we’re seeing is a rationalization of communications mediums, with people – young people especially, who don’t have the cruft of legacy communications patterns built on top – only using what makes sense for a given use, at a given time. For quick and short communications, this means text messaging – and given the shift in mobile plans towards offering unlimited texting, this doesn’t make anyone money, because, save for the NSA and NewsCorp, nobody’s scanning and indexing your text messages for relevant advertising content. Likewise, the efforts of both Facebook and Google to incorporate SMS into their user interfaces just haven’t caught on – why make the easiest way of communicating less easy?

Advertisers don’t know how good they’ve got it right now – despite click-through rates of at best 0.1% on banner ads, they’ve never known more about their target audiences for less money. From here on out, as more and more communication moves from the public social web and indexable email back to a range of peer-to-peer communications (TXT, telephony, video chats), it’s only going to get more and more expensive to know what people are thinking and talking about. Because you’re going to have to actually ask the people.

This is why it’s a shame how comparatively little institutional support there’s been – and how slow IRBs have been in addressing the pace of online user research – for research around online sociability (no sour grapes here though! SILS is kicking butt!). Apart from the great work of a relatively small cohort of pioneering researchers who’ve been gathering data when and how they can this last decade, we’ve just lost a lot of data: the questions that went unasked, the data that went unscraped. And when perceptions of an interface aren’t captured in the moment, they’re not just gone, they’re gone-gone – who can remember what Facebook looked like 18 months ago?

But this is also an opportunity. Going forward into a world of multi-channel communications presents a fascinating set of new questions not just about why thus-and-so interface creates certain effects, but what people want to do, and how they realize those desires in different contexts. This is changing constantly, and shows no sign of slowing down. The real opportunity is to build on what we as researchers (both academic and market-oriented) already know about online sociability by asking questions focused not on the vagaries of changing interfaces and services, but rooted in the first principles of how people use communications technology. When we can keep asking those same questions over time – building real longitudinal data that can take into account the ebb and flow of seasons and services – we will build from knowledge to understanding. And when we understand, we can become the masters of our technologies, not the other way around.

It’s been less than 10 years since the initial rise of Friendster, the first mass-popularity online social network. Since its rise and (lamentable) fall, MySpace has grown and shrunk, and Facebook has pioneered an ever-upwards trajectory. Though the implementation and the particular social networks harnessed in each of these cases have been different, all have shared a similar initial launch strategy – focusing on the tightly-knit real-life networks of young people. For this reason, it’s become something of an article of faith that this is how social network services do, and must, build and grow.

This is the core of Henry Copeland’s skeptical take on Google+, which I encourage you to read if you’ve not yet done so. I’m on record as saying that Google+ is already a success, and Henry’s post has clarified for me exactly what I meant and where I see it going. Henry’s main points are that (and I’m paraphrasing here – do read the whole thing):

  • community is the value, not the interface
  • you can’t grow a social network from the top-down
  • elite, diffuse users are the wrong initial population
  • Google doesn’t know social and doesn’t have the patience to grow a social network
I don’t necessarily disagree with any of these, but I also think that in the case of Google+ much of it is beside the point. What I think is the point, is that Google already has a massive collection of embedded, real-life social networks. They don’t need to do a frontal assault on Facebook at college campuses, don’t have to do a below-the-radar launch in Silicon Alley (which would be, at this point, totally impossible for either Google or any other social network), and don’t need to worry as much (which isn’t the same as “not at all”) about the particular shape of their multiplier effects. What they do need to worry about is how to expand the range of social tools that their current massive user base of real-life social networks uses within the Googleverse.

Keith Kleiner has a very positive run-down of his reactions to Google+, and while in the long-run I think that the enthusiasm for enhanced privacy features and the multiple-context management of Circles will prove more popular among the technorati than the general public (as killer apps, anyways), I do think he hits the nail on the head here:

That secret weapon is everything that Google has that is not Google+. A formidable armada of Google products including Gmail, Picasa, Calendar, Docs, Maps, Search, News, Youtube, Chrome Web Browser, Blogger, Translation, Android, and more stands at the ready to assist and join Google+ in the battle for the future of social networking. These products are best in class, extremely difficult to replicate, and are used by more than a billion people across the planet. As these products are seamlessly integrated with Google+, we are about to witness an incredible two way explosion of value and utility. Google’s products will gain all of the powerful attributes that social networks deliver – virality, discovery, crowdsourcing, sharing, “liking”, and so much more. Meanwhile, Google+ will be given a steroid boost of products that deliver content, tools, and capabilities to its hungry hordes of social minions.

That’s exactly right, and I’d like to go a little further here. Google’s social network service won’t look like other social network services because it can’t – because of who Google is, and because of Facebook’s near-monopoly on certain sectors of the social graph. What it will look like will be different, and potentially more organic. As danah boyd excellently documented [PDF], in the early days of Friendster there were two main user communities who latched on to the service, both located in the Bay Area – gay men and Burners. When the two communities eventually discovered each other, the reaction was something akin to, “What are you doing here?” Social network services since then have been the story of this context collapse again and again, over many kinds of communities. There are many, many, many more communities and real-life social networks already extant and embedded in one or several of Google’s currently-owned suite of social services, from gMail to Blogger to YouTube. Google+ will be a success not if it displaces Facebook but if it can deepen usage of Google’s already-formidable user base on the social web. As Luis Suarez notes in an insightful post,

“Its unique opportunity to be pervasive enough to be part of Google’s entire ecosystem makes it tremendously powerful.”

Because of the varied nature of that ecosystem and where different users find value, the growth, shape and experience of Google+ will vary substantially between different networks and different users. But that’s okay! Different people have different needs, both socially and informationally, and an approach that views mediated sociability as best addressed by a suite of services and possibilities – which is how I conceive of Google+ and how I believe they do, as well – is fundamentally just different from the one-size-fits-all approach of Facebook (or Twitter, or LinkedIn, or any of the predecessor SNSes).

Anyway, it’s clear Google has turned a corner. They have now proven to everyone that they can do social and get on the playing field.

But they haven’t yet proven that they can convince your mom to use it and that’s just fine with me.

That all is a long way of saying that I really love Google+ and I don’t care what the average user thinks of it. I’m getting a ton of utility out of it and I am having a blast with it. Hope to see you there soon, but please leave yo momma over on Facebook, OK?

In point of fact, my mom will actually like Google+ just fine, but Scoble’s point is a good one and flows the other way, too. There are plenty of “average users” who will like parts of what Google+ does just fine, and won’t give a fig what Robert Scoble and assorted other nerds (e.g., myself) are using it for, if it can help them chat, share pictures, and video chat all in the same place more easily than they could before with just gMail, Facebook or Skype. Does that count as “beating” Facebook? I don’t know that it’s that simple – I expect Facebook to be around a while, but not at the current level of buzz or valuation – but if Google+ can give a better and more holistic social experience to its users, I would count that as a victory for everyone.