Sharepocalypse, and why social sharing is noisy

Mashable’s recent post on the social media Sharepocalypse has caught everyone’s attention. Author Nova Spivack breaks down the issues social media users face given the sheer volume and diversity of sharing activity across our favorite social networks, and comments on some of the resources and solutions that may be on offer if “social assistance” services can deliver effectively.

I was on a different topic recently, that of scaling and population, when I got to thinking about noise. Much of the sharepocalypse problem, I think, comes down to noise. Noise, because there are often motives behind social sharing: motives that suggest that the act of sharing often means more than meets the eye.

This is interesting, because if sharing produces content, and if the sharepocalypse concerns an excess of content and content sharing activity, then it’s not just the volume of content that needs addressing, but the intentions of those who share. Sharing, after all, is a social act.

There would be no sharing if there were no friends, peers, colleagues, and fans to “consume.” And likely much less sharing if there were no measurement of sharing activities: no new followers, friend requests, comments, likes, +1s and so on.

Not to mention the meta message of sharing metrics, of which Klout is the best example. Our activity and the responsiveness of our “networks” are transformed into a meaningful number — an “influence” metric, or klout.

Point being that the act of sharing is not just an act of sharing content. It’s a social act, and social acts solicit some amount of acknowledgment and recognition. Receiving that, they can become communication (as happens when any two or more people engage in an exchange).

Content, then, is often the vehicle for a communication not yet established. It’s the opening move, if you will: the statement or expression.

It belongs to human communication that we are able to distinguish an utterance from the thing uttered (the claim). We can tell the meaning expressed in talking from the actual sentences and expressions used. In the case of sarcasm, for example, we know that the meaning intended actually contradicts the expression.

And this applies, to some degree, in online sharing. Knowing our friends, and less so our peers and online social connections, we’re often able to tell what a person intends when they share. The content is the vehicle, not the conversation. And in fact, content often opens up comments and exchanges permitting all involved to relate something of their own.

Content shared then is often just the ice-breaking move in social exchange. It’s the starting point, the springboard, and the context. And it’s fine, generally, if talk moves past the content itself to other things.

Which brings us to noise. Noise is the problem. Some hope it can be filtered out, say algorithmically. Algorithms may be written to anticipate the individual and personal preferences of a user, or to collect information from aggregated social activity. So there are individual vs. social approaches.
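As a toy sketch of that distinction (the feed items, topics, and thresholds here are invented for illustration, not any real service’s API): an individual filter keeps what matches one user’s stated preferences, while a social filter keeps what the aggregate activity of the network has made popular.

```python
from collections import Counter

# Hypothetical feed items: (author, topic) pairs.
feed = [
    ("alice", "tech"), ("bob", "sports"), ("carol", "tech"),
    ("dave", "memes"), ("erin", "tech"), ("frank", "sports"),
]

def personal_filter(feed, liked_topics):
    """Individual approach: keep items matching this user's own preferences."""
    return [item for item in feed if item[1] in liked_topics]

def social_filter(feed, min_share=2):
    """Social approach: keep items whose topic is popular in aggregate."""
    counts = Counter(topic for _, topic in feed)
    return [item for item in feed if counts[item[1]] >= min_share]

print(personal_filter(feed, {"sports"}))  # only what this user likes
print(social_filter(feed))                # only what the crowd amplifies
```

The two approaches disagree: the personal filter can surface an item nobody else shares, while the social filter drops it, which is exactly the trade-off between personalized and aggregated noise reduction.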

Noise might also be reduced by means of services that sit on top of sharing networks. This is the social assistance idea noted by Spivack.

But there’s still the matter of noise and why it is an unavoidable byproduct of social sharing. This has implications for the feasibility of noise reduction.

Social networking platforms can be viewed as social systems — a combination of mediating technologies and the practices that emerge around them. They’re self-reproducing systems: that is, it’s the constant social activity of users that keeps them going.  My thought is that if a social system reproduces itself by means of mediated interactions and communication, different types of noise are produced.

The noise of redundancy that results from distribution of activity across tightly connected social networks — a kind of noise that would not trouble situated and co-located “real world” interactions. Call this the noise of amplification. It exists because content and communication rapidly escape the site of their original production and “appear” elsewhere. (Face-to-face talk is governed by the physical distance within which your voice can be heard.)

The noise produced by an attention economy. This being noise resulting from the online social condition that only activity can get attention. One has to post and share in order to have presence. Here the act of sharing is what matters, less so what is shared, for the act maintains presence and creates the contexts around which others can engage.

The noise of system self-reporting. This being notifications: system messages reporting on user activities but not authored by those users (“Bill is now following you”). Facebook was built on this (“Jill uploaded a photo” creates social activity by proxy, leading to more activity by those who respond to it).

The noise of bots and non-human accounts. Twitter is the most guilty of this, but wasn’t the first to allow it. (Remember Fakesters on Friendster?) This noise helps to circulate news, but results in a kind of tolerably false communication.

The noise of obligatory social etiquette. This is the noise created by adhering to online social norms and conventions, such as following back, adding to Circles, reblogging, liking, and so on. (Social gestures — likes — have communicative purpose.) Many of these acts are simply baseline social etiquette and, whether they pay off or not, are the online social equivalent of buying a lottery ticket: your chances of winning increase dramatically when you buy one. A social act that has potential.

So given these different types of noise, what are the prospects for smart noise reduction? Content shared is hardly just content shared, but is almost always a form of social action. Can the social acts be separated from their contents? Should filters be designed to sift out bots? Why not then sift out users whose social media use is primarily promotional?

Or the reverse: sift out content that’s intended just to network and connect, but which has little news or information value? There could be so many further ways to tweak filtration, based on person, content, genre, timing, status, relevance, personal preferences, social preferences, recent activity, and so on. It’s mind-boggling.
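One way to picture that tweaking problem is as a weighted scoring function over the signals just listed. A minimal sketch, with signal names, weights, and the cutoff all invented for illustration:

```python
# Hypothetical per-signal weights; each signal value is assumed to be in 0..1.
WEIGHTS = {
    "relevance": 0.4,      # match to the reader's interests
    "relationship": 0.3,   # closeness to the person sharing
    "recency": 0.2,        # timing of the share
    "novelty": 0.1,        # penalizes redundant reshares
}

def score(item):
    """Combine per-signal scores into a single ranking value."""
    return sum(WEIGHTS[k] * item.get(k, 0.0) for k in WEIGHTS)

def filter_feed(items, threshold=0.5):
    """Keep items scoring above the cutoff, ordered best-first."""
    kept = [i for i in items if score(i) > threshold]
    return sorted(kept, key=score, reverse=True)

items = [
    {"id": "a", "relevance": 0.9, "relationship": 0.2, "recency": 0.8, "novelty": 1.0},
    {"id": "b", "relevance": 0.1, "relationship": 0.9, "recency": 0.1, "novelty": 0.0},
]
print([i["id"] for i in filter_feed(items)])
```

Every weight and threshold here is a design decision about whose sharing counts as signal and whose as noise, which is the point: the filter encodes a theory of the social acts behind the content.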

Sharepocalypse is just the tip of the sharing iceberg. The flotsam and jetsam that drifts downstream in a medium that never stops flowing. But the currents beneath are deeply social and mean far more than meets the eye. It’s going to be hard to sort through all that noise. Because collect the empties as you will, more often than not, there’s a message in that bottle.

This entry was posted in Streams, SxD Theory.