Friday, May 30, 2008

Trust in online communities

Next month the OECD is holding its annual ministerial meeting in Seoul on The Future of the Internet Economy. I've been invited to speak on "Trust in Online Communities" at the meeting. My extended abstract and slides are below — any comments are welcome, and might even make it into my presentation!

Trust in online communities

Social networking sites such as Facebook and MySpace have attracted huge numbers of Internet users over the last five years. Dutton and Helsper (2007) recently found that 42% of all UK students had created a social networking profile, while Datamonitor found in August 2007 that North Americans spent almost 450m hours in total visiting these sites each month.

Other types of online communities are equally popular. Massively Multiplayer Online Role Playing Games (MMORPGs) like World of Warcraft and virtual worlds such as Second Life collectively boast tens of millions of active users. Many more specialised communities also exist online, linking for example physicians or those coping with diseases such as breast cancer or HIV (Bray, Croxson, Dutton & Konsynski, 2008).

What public policy interests do online communities implicate? Most generally, they provide a mechanism by which citizens can build social capital by strengthening their networks of emotional and practical support. They also reduce geographical barriers to participation in society, giving rural users new means to interact and transact with those in more urban areas – particularly useful in countries such as the UK, where urban/rural divides in broadband connectivity are now being overcome (OFCOM, 2008).

Recent social psychology research suggests a number of ways in which community designers can increase the production of these social goods by improving user trust in sites and in other community members. Green (2007) suggested the provision of non-verbal cues, links to mutual acquaintances and mechanisms to allow users to verify information. Riegelsberger, Sasse and McCarthy (2007) advised that sites should aim to increase users’ temporal, social and institutional embeddedness. These features can already be seen in social networking sites, such as Facebook’s lists of shared friends, institutional networks and links to external websites.

Reputation measures are also popular within online communities. These show the extent to which community members have interacted with each other and with users’ own friends, sometimes with explicit ratings from peers. They can also identify users with similar activities and interests, a key predictor of trust (Jensen, Davies & Farnham, 2002).
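(To make the similarity point concrete, here is a toy sketch in Python of how a site might score interest overlap between two members. The Jaccard measure and the example profiles are illustrative assumptions of mine, not the method described by Jensen et al. or used by any particular site.)

```python
# Toy illustration: scoring shared interests between two members.
# The Jaccard measure and the example data are illustrative assumptions.

def interest_similarity(interests_a, interests_b):
    """Jaccard similarity: shared interests over all interests combined."""
    if not interests_a and not interests_b:
        return 0.0
    return len(interests_a & interests_b) / len(interests_a | interests_b)

alice = {"photography", "hiking", "open source"}
bob = {"hiking", "open source", "chess"}

print(f"{interest_similarity(alice, bob):.2f}")  # 0.50
```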

Lacohee, Phippen and Furnell (2006) found that education, openness and tools for experimentation increase trust. Users need to feel in control, and be provided with restitution when problems occur.

Because it is difficult to reverse harms caused by the revelation of private information, community members need to be supported in setting appropriate limits on access to their personal data. Their trust can be seriously damaged by unintended releases of this information (Adams & Sasse, 2001). Facebook has seen a number of these incidents due to over-permissive sharing defaults (allowing Oxford University, for example, to fine students who had shared photographs of friends taking part in banned exam celebrations more widely than they had intended). Facebook's insecure application platform also allows unauthorised access to user data (Brown, Edwards & Marsden, 2007). The crude binary nature of “friend” relationships in social networks does not match the subtle manner in which individuals share different aspects of their lives with partners, friends, family members, managers, staff, acquaintances and strangers (Boyd, 2004).
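(As a sketch of what a less crude model might look like: per-item audience tiers rather than a single "friend" bit. The tier names and profile fields below are invented for illustration.)

```python
# Sketch of per-item audience tiers instead of a binary "friend" flag.
# Tier names and fields are invented for illustration.
from enum import IntEnum

class Audience(IntEnum):
    """Closeness tiers, from most to least restricted."""
    ONLY_ME = 0
    CLOSE_FRIENDS = 1
    FRIENDS = 2
    NETWORK = 3   # e.g. an institutional network
    EVERYONE = 4

# Each profile item carries the widest audience allowed to see it.
profile_policy = {
    "name": Audience.EVERYONE,
    "employer": Audience.FRIENDS,
    "photos": Audience.CLOSE_FRIENDS,
}

def can_view(item, viewer_tier):
    """Items not explicitly set default to ONLY_ME: a restrictive default."""
    return viewer_tier <= profile_policy.get(item, Audience.ONLY_ME)

print(can_view("photos", Audience.NETWORK))        # False
print(can_view("photos", Audience.CLOSE_FRIENDS))  # True
```

Note that anything not explicitly set falls back to the most restrictive tier: the opposite of the over-permissive defaults criticised above.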

Governments have two general policy levers at their disposal to encourage the development of trustworthy online communities. They can use competition law to drive up the quality of community sites – for example, by mandating interoperability for dominant players to reduce the winner-takes-all nature of markets based on networks of individuals. Privacy law can be used to enforce users’ control over their personal information, particularly in jurisdictions that have implemented the OECD's 1980 Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.

Given the popularity of online communities with teenagers and younger children, governments have also come under pressure to intervene to reduce potential harms to young users. A recent review for the UK Department for Children, Schools and Families suggested that community sites targeted at young people should be encouraged to subscribe to a code of conduct that includes commitments to provide safety advice to users and to promptly take down “harmful and inappropriate” content when notified of its presence (Byron, 2008). Byron also suggested that children be encouraged to talk with their friends, siblings and parents about e-safety; that parents install filtering software on home PCs; and that Internet Service Providers be required to block access to illegal content.


Adams A & Sasse MA (2001) Privacy in Multimedia Communications: Protecting Users, Not Just Data. In Blandford A, Vanderdonckt J & Gray P (Eds.) People and Computers XV - Interaction without Frontiers. Joint Proceedings of HCI2001 and IHM2001, Lille, Sept. 2001. pp. 49-64

Boyd DM (2004) Friendster and publicly articulated social networking. Computer-Human Interaction ‘04, pp. 1279-1282

Bray DA, Croxson K, Dutton WH & Konsynski B (2008) Sermo: A Community-Based, Knowledge Ecosystem. Distributed Problem-Solving Networks Conference, Oxford. Available online (last accessed 30 May 2008)

Brown I, Edwards L & Marsden C (2007) Stalking 2.0: privacy protection in a leading Social Networking Site. GikII 2 — law, technology and popular culture, London. Available online (last accessed 30 May 2008)

Byron T (2008) Safer Children in a Digital World (Department for Children, Schools and Families) p. 64

Dutton WH & Helsper E (2007) Oxford Internet Survey 2007 Report: The Internet in Britain (Oxford Internet Institute) p. 52

Green MC (2007) Trust and social interaction on the Internet. In Joinson et al. (eds) The Oxford Handbook of Internet Psychology (Oxford University Press) pp.43-52

Jensen C, Davies J & Farnham S (2002) Finding Others Online: Reputation Systems for Social Online Spaces. Computer-Human Interaction ‘02, 4(1) p.449

Lacohee H, Phippen AD & Furnell SM (2006) Risk and Restitution: Assessing how users establish online trust. Computers & Security 25(7) pp.486-493

OFCOM (2008) The Nations & Regions Communications Market. Available online (last accessed 30 May 2008)

Riegelsberger J, Sasse MA & McCarthy JD (2007) Trust in mediated interactions. In Joinson et al. (eds) The Oxford Handbook of Internet Psychology (Oxford University Press) pp.53-70


Citizen Dave said...

Nice slides, but what about a mention for PETs? I think one of the key points is that since regulators don't know what PETs can deliver (eg, pseudonymity), they don't ask for it.

pangloss said...

Similar quick comment to above - interesting that your last slide canvasses law, norm and market solutions but not code ones. Recent Home Office report on SNSs poss worth citing - includes code solutions to a ltd extent, eg default of friends-only access for minors. Saw v g panel around Solove's book on reputation, privacy and gossip - will try to blog it shortly. Frank Pasquale sceptical on panel that interoperability would drive market competition - tendency to want to amass friends in one place, regardless of portability of profile. I think he is probably right... we are serial monogamists not polygamists of SNSs on the whole, at least for "fun" purposes.

Ian Brown said...

I would include the defaults issue within the privacy law enforcement - perhaps even for adult users.

On interoperability, I mean that dominant players should be required to allow users to link to "friends" on other SNSes, not just export profiles.

Ian Brown said...

On PETs: you could certainly imagine an SNS where public-key crypto was used to enforce access control to profile information, even against the site operator. That of course strongly goes against the economic interests of the operator in serving you targeted adverts.
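(For the curious, a minimal sketch of that idea using PyNaCl sealed boxes. The field contents and storage arrangement are invented for illustration, and a real design would also need key distribution and revocation.)

```python
# Client-side encryption of profile data to a friend's public key, using
# PyNaCl. The site operator stores only ciphertext, so it cannot read the
# field or target adverts on it. Illustrative sketch only.
from nacl.public import PrivateKey, SealedBox

# Bob generates a keypair on his own machine and publishes the public half.
bob_key = PrivateKey.generate()

# Alice's client encrypts her profile field to each authorised friend's
# public key before uploading it to the SNS.
field = b"relationship status: it's complicated"
ciphertext = SealedBox(bob_key.public_key).encrypt(field)

# Only Bob can decrypt, locally, with his private key.
assert SealedBox(bob_key).decrypt(ciphertext) == field
```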

Ian Brown said...

My colleague Prof Angela Sasse has kindly e-mailed the following comments and given permission for them to be published.
Couple of (possible) additional points, depending on which aspect you want to emphasise most

1. Jens' thesis (and the paper you cite) pointed out there is a very large number of indicators that can be used by systems to support users' ability to make correct trust decisions … institutional ones are only one set … which ones work best depends on the user community and the level and type of risk and uncertainty. Also, institutional cues have proved easy to fake/impersonate in a world of electronic credentials, and there is an increasing number of credentials that most people have no way of verifying …

2. Information on social networking sites is not just used for straightforward ID theft, but in sneakier ways for social engineering attacks — e.g. you use info from the site to impersonate someone whom the victim does not know directly, but who is once removed, i.e. has one person in common with them. "You don't know me but we are both friends of X, and he thought you might be interested in …" as a way of extracting further info, blagging a job interview, getting them to endorse something …

3. Merging of social networking and recommender systems seems inevitable (see Philip Bonhard's thesis).

This is a good thing, but makes manipulation (covert marketing etc.) a more likely threat. Unfortunately, most HCI research on trust has focussed on how to manipulate users' trust decisions (i.e. to trust), through media effects, as demonstrated in Jens' last paper on the topic (attached). This may be used directly (to get you to buy from me, or vote for me) or indirectly (so you give me a good rating for my reputation score, or vouch for me).
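(To illustrate the kind of merger Angela describes, here is a toy friend-weighted recommender in Python, in which ratings from people the user knows count for more. The weighting scheme and data are invented rather than taken from Bonhard's thesis; note how easily fake "friends" could game it, which is exactly the manipulation risk raised above.)

```python
# Toy social recommender: friends' ratings count friend_weight times as
# much as strangers'. Illustrative assumptions throughout; sybil "friends"
# could inflate the score, which is the manipulation risk discussed above.

def social_score(ratings, friends, friend_weight=3.0):
    """Weighted mean rating for one item."""
    total = weights = 0.0
    for rater, rating in ratings.items():
        w = friend_weight if rater in friends else 1.0
        total += w * rating
        weights += w
    return total / weights if weights else 0.0

ratings = {"carol": 5, "dave": 1, "eve": 2}
print(social_score(ratings, friends={"carol"}))  # 3.6, vs. unweighted 2.67
```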

I am coming to the conclusion that developments to move interactions with commercial and govt organisations online are really disadvantaging the most vulnerable groups in society — the older and less educated — because it's just too darn complicated for them. I think large-scale closure of high-street banks, post offices or local govt offices, and pushing people online who can't cope, is a recipe for disaster. Their choice is to withdraw from these types of system, or have an agent act on their behalf … You need a trustworthy infrastructure, simple rules of engagement and verification to make this work.