From User Practices to Social Practices

Designers like to establish who their users are before commencing with a system’s engineering and architecture, and for obvious reasons. Knowing one’s user is tantamount to knowing how one’s product or service will be used. And yet it’s impossible to know one’s individual users, so generalizations have to be made in their stead. Which is to say that stereotypes must stand in for actual users, providing guidance during development for what it is hoped actual users will do with a technology or application.

A corrective to the generalization of user types, termed “personas” by its inventor Alan Cooper, has become quite popular as a heuristic by which designers can get closer to actual users and usage, and thus design better. But the personas model cannot simply be extended to social software, and for one simple reason: we’re dealing here neither with individual users nor with individual needs, goals, and objectives. We’re dealing instead with social phenomena, and that means we need to understand the manner in which social practices satisfy individuals and their pursuits. What we wish to model our systems on is not a single user’s (persona’s) agenda, but rather a social system and its (more complicated) organization.

The types of social practices a system is designed to support, and which it counts on for its own ongoing sustenance, will be manifest in a wide range of behaviors and modes of participation. Shortly we will cover the manner in which we might model some of these social practices. For now, simply consider some of the ways in which a view towards social practices exceeds the language and concerns we know from an orientation to user practices.

An emphasis on social practices, and designing towards social interaction, would shift our point of focus:

  • From users to groups
  • From individual use to community participation
  • From direct interaction (HCI) to social interaction

Our usability concerns would shift also, from first-order interaction design to second-order, social interaction design:

  • Interaction tools designed for the transparency of the interaction between users, rather than transparency between the user and the tool.
  • Attention to the way in which users might confuse interpersonal ambiguity for technical shortcomings or errors.
  • In organizing, archiving, and indexing discussions and other kinds of member contributions, concern for the contributor as well as for the text of his or her contribution.
  • An appreciation of the way in which users present themselves to others in their profiles and other contributions—that is, how they might second-guess their audience, attempt to “game the system,” and so on. Usability concerns here span the difference between choosing a photo of oneself (a social act) and uploading it with a form page (a technical one).
  • Constraints built into the system should involve not only technical and interface choices (or better, limiting those choices), but also the cultivation of social behaviors and normative restraints.
  • Error management and handling should focus on risk management as a social undertaking, rather than a technical one. The kinds of mistakes that happen among users of communication technologies are more often than not fraught with social repercussions.
  • Similarly, help systems and documentation should involve not only standard online help and troubleshooting. Expert users should be encouraged to help others, in the spirit of neighborliness and good will.
  • User competence will involve social considerations as much as technical ones, and designers should be aware of the difference. Many novice users will lurk before becoming actively engaged participants, seeking to understand how the community works and what it values before making their own contributions.
We could cite other examples of the shift from user-centric interaction design to social interaction design, but the points noted above should suffice.

As always, our systems should satisfy tried and tested success criteria. They should win points for their:

  • Effectiveness. In short, they should do what they were built to do, as well as possible.
  • Efficiency. They should require as little overhead as possible. Steps ought to be kept to a minimum, and users should not be required to absorb application shortcomings.
  • Safety. Users should be spared unnecessary embarrassment or other unwanted surprises. To the usual list of annoyances (crashes, mistakes, disappointments, misunderstanding and confusion, lack of confirmation/acknowledgment, etc.), we should add:
    • Our systems should not add to the ambiguity that can already exist among social transactions and encounters!
    • Our systems should avoid further obscuring a member’s identity or intentions, as well as how we are to interact with him or her.
    • Our systems should give us feedback, and quickly if possible, as to whether or not we have behaved properly, done the right thing, etc.
  • Utility. Systems should be useful, or at least users should find them useful!
  • Learnability. It should not require a PhD to navigate and learn a community’s or system’s architecture.
  • Memorability. Labeling systems, functions, features and so on should be transparent (no, not see-through), sensible, and familiar.