Category Archives: Research

Real name coercion: a survey that helps ‘make Facebook better’

I just read the recent news that Facebook, apparently through various iterations over the past few months, has been asking users to confirm whether or not their friends are using real names.

This news appears to have gained traction following a Tweet on September 20 that helpfully included a screenshot. An article on talkingpointsmemo.com (TPM) outlines the story well, along with some of the concerns, and includes a few official responses from Facebook. I share the concern that it’s unclear what Facebook is using these survey results for, but I think there’s more to this move than direct policing by the service itself.

First of all, I want to address a claim that appeared in the TPM article and has been adopted by others breaking the story. The article ends with a paragraph stating:

In general, Web users may prefer anonymity for reasons of personal safety. But Facebook is not alone in enforcing a real names policy: Google Plus provoked a backlash for employing a similar policy shortly after it launched in late June 2011, a response dubbed “Nym Wars” as in “pseudonym,” for the desire of some users to use pseudonyms. Google Plus has since backed away from this policy as well.

The final sentence suggests Google no longer enforces a real names policy. Interestingly, the first embedded link doesn’t suggest this at all. The second link discusses Google’s eventual move to allow pseudonyms in addition to real names within its service. Allowing the inclusion of pseudonyms does not mean Google is no longer enforcing a real names policy. (I wrote about this back in January.) I think it’s dangerous to paint Google as leading the way in social media privacy; that’s not what they have achieved here.

The second issue that isn’t really being raised is that of how this may affect engagement with these systems. TPM quotes a Facebook representative as stating “This isn’t so we can go and get that person in trouble […] None of our surveys are used for any enforcement action.” The story as discussed by this and other posts appears to centre around whether or not Facebook is sincere when making such statements, but I feel this is largely irrelevant. The simple suggestion of potential enforcement can change user practices much more than any actual enforcement system – which Google and Facebook both know is extremely difficult and all too easily results in a PR nightmare.

Say you use a pseudonym on Facebook and you get one of these notices asking you to confirm whether one of your friends is using a ‘real name’ (whatever that is). Regardless of what you do next, you’re going to be a little less comfortable risking pseudonymous engagement yourself now that you’re aware of the possibility that friends of yours could just as easily receive a similar message about your account. And if you’re not yet on Facebook and regularly read these sorts of alarmist articles, you’re going to feel even less confident signing up with a ‘fake name’.

Facebook enforces their ‘real names policy’ in a rather intelligent way. They don’t resort to heavy-handed bans like Google did last year; rather, they regularly publicise their stance on real names (in their official documentation, through interviews with Zuckerberg, etc.) to suggest an environment where there is a risk of account deactivation. All they need to do is occasionally ban an account when they receive a complaint from an enemy of that account – an individual, a company, a government (this is believed to be what happened to blogger Michael Anti last year) – and let the low-level news coverage do the work for them.

No one wants their account to be deactivated. Even if Facebook keeps to their word and does not act on data gathered by these name surveys, which I suspect might be the case, actions and stories like these serve to scare users into compliance.


Version control: In which Facebook makes improvements and my work suffers

In a paper I presented last year I discussed how hiding your sex/gender from your public Facebook profile didn’t actually remove the gendered language the system uses to refer to you. So, when leaving the box unselected, like so

'Hiding' sex status on Facebook

your public profile would still use sentences such as, “If you know Anne, add her as a friend or send her a message” (emphasis mine).

Facebook's gendered language

I see this as a concern for various reasons I’ve discussed elsewhere and I won’t go into them now. (Google+ also had a similar practice during its testing phase, which it changed fairly quickly after receiving a lot of user feedback. Frances Haugen from Google talks about the situation here, for anyone wanting a refresher.)

I mention this again this week because I’m in the process of finalising a paper which reviews gender and sex usage in social media systems and discusses potential social effects leading from these system design choices. It was going well and I had a great structure that appeared to work, but yesterday I went to check out Facebook’s system to confirm my statements and noticed that they appear to have addressed the above issue. Now, when looking at a person’s public profile, regardless of whether they’ve chosen to hide their declared gender/sex status, you see shorter sentences that avoid pronoun usage altogether.

Facebook system avoiding gendered pronouns

I think this is a great move! Eventually I’ll make time to look into other instances of this and evaluate how easily other systems can introduce similar changes (it’s difficult because I’m not that familiar with other languages/cultures). But for now I have to finish writing my paper. And this improvement throws a spanner into the works. (Or, rather, removes one that I was hoping to talk about a bit.)

I can work around it. I just need to do some work on changing the way I’ve structured everything. But it does raise one major difficulty I’ve been having throughout my research: being unable to easily record how the systems I discuss change over time when they are ‘closed’ systems.

Last year, when Google Profiles allowed users to hide their gender status, I kicked myself because I didn’t think to take a screenshot (a documented first-hand account that may constitute a better academic reference than a random person’s blog post) before the change was made – honestly, I didn’t think they’d fix it! I could keep taking regular screenshots of various interfaces, but that’s time consuming when I don’t actually know what I may want to focus on later. It’s the changes themselves that become interesting, and I have no forewarning of them. In this case, with Facebook removing gendered pronouns, I actually have screenshots; what I don’t have is any idea of when, exactly, within the twelve months between my two screenshots the change was made.
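
Looking back, even a crude archiving script would have saved me here. Below is a minimal sketch of the idea, in Python: fetch a public page on a schedule and keep a timestamped copy, so a later diff can pin down roughly when the wording changed. The URL is a placeholder, and the sketch assumes the page is visible without logging in (often not true for Facebook profiles), so treat it as illustrative rather than a working tool.

```python
# Minimal sketch: save a timestamped copy of a public page so that later
# changes can be dated by diffing snapshots. The URL is a placeholder.
import datetime
import pathlib

import requests  # third-party: pip install requests

ARCHIVE_DIR = pathlib.Path("profile-snapshots")
PROFILE_URL = "https://www.example.com/public-profile"  # placeholder, not a real profile

def snapshot(url: str = PROFILE_URL) -> pathlib.Path:
    """Fetch the page and write its HTML to a timestamped file."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
    out = ARCHIVE_DIR / f"snapshot_{stamp}.html"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    out.write_text(response.text, encoding="utf-8")
    return out

if __name__ == "__main__":
    print(f"Saved {snapshot()}")  # run daily via cron or a task scheduler
```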

Couple this with the fact that some users see different iterations of a system interface, depending on their location or the server they’re using, and documenting something accurately for later use becomes quite difficult.

And this is one important (though small) reason I love open source software: I can find out which version is running on a particular server and then look through the code personally to document it accurately. This allows one to take a historical look at these services.
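
To make that concrete, here is a rough sketch (again in Python) of the kind of check an open code history makes possible: grab the source at two released versions and grep the interface templates for gendered pronouns. The repository URL, tag names and template path are all hypothetical; the point is only that the history is there to be inspected.

```python
# Hypothetical example: compare two tagged releases of an open source
# platform and list template lines containing gendered pronouns.
import subprocess

REPO = "https://example.org/open-social-platform.git"  # hypothetical repository
TAGS = ["v1.2.0", "v1.3.0"]                            # the two releases to compare

def git(*args, check=True):
    """Run a git command and return its standard output."""
    result = subprocess.run(["git", *args], capture_output=True, text=True, check=check)
    return result.stdout

git("clone", "--quiet", REPO, "platform")
for tag in TAGS:
    git("-C", "platform", "checkout", "--quiet", tag)
    # git grep exits non-zero when nothing matches, so don't treat that as an error.
    hits = git("-C", "platform", "grep", "-nwE", "he|she|his|her",
               "--", "templates/", check=False)
    print(f"== {tag} ==")
    print(hits or "(no gendered pronouns found)")
```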

With all their talk about ‘transparency’, I wish Facebook and Google+ would take this direction. Just for me.

It would make my work a little easier.

Title select

At some point in the past few months I registered for an event where I was given an uncommonly large number of options to choose from for the ‘Title’ field. I’ve written before about how rare it is to have the option of a non-gendered title, and how even then the available options are mostly tied to official qualifications and roles, so they can’t legitimately be used by the average person.

However, this time while scrolling through the list I came across a new, totally unexpected option: ‘The Late’. Even though I didn’t physically qualify for it I just had to choose it.

Whether it was someone’s intentional stroke of genius or the options were copy/pasted from a list of possible titles without that final but important step of quality control, there’s at least one person who appreciates it.

I wanted to share it but couldn’t remember where I had encountered it. But tonight, while signing in at the UNSW Alumni’s Brainfood Lecture, ‘It Won’t Happen to Me’: Cybercrime Myths and Misconceptions, I saw that all-important field waiting to greet me on the paper.

So, for an example of a title field with many options – though it’s still quite limiting, because the field is required – the UNSW Alumni registration page is a good place to check out while it’s still up.

Is limited representation bad for advertisers?

I’m writing a chapter about the representation of individual persons this month so I’ve been thinking more than usual about the justifications for limiting profile options.

One common reason for, say, giving users of your system a limited set of predefined options to choose from (drop-down menus, toggle switches, ticky boxes, etc.) is that such strict categorisation allows for easy collection of specific data. You can more easily compare users. And often – for example, if you’re using a free corporate social media service – this categorisation ties right into the advertising side of the business: the service offers targeted advertising to other businesses wanting web traffic in order to remain financially viable.
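
As a toy illustration of that appeal (the user records below are invented; this is obviously not Facebook’s actual data model), strict categories make counting users and carving out an advertising segment trivial:

```python
# Invented user records with strictly categorised attributes.
from collections import Counter

users = [
    {"id": 1, "gender": "female", "age": 27, "relationship": "engaged"},
    {"id": 2, "gender": "male",   "age": 31, "relationship": "single"},
    {"id": 3, "gender": "female", "age": 24, "relationship": "engaged"},
]

# With a fixed set of values, aggregation is a one-liner...
print(Counter(u["gender"] for u in users))   # Counter({'female': 2, 'male': 1})

# ...and so is selecting a segment for an advertiser.
segment = [u["id"] for u in users
           if u["gender"] == "female" and u["relationship"] == "engaged"]
print(segment)                               # [1, 3]
```

A free-text field, by contrast, would need normalising before any of this works – which is the usual justification for keeping the options limited.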

I was looking at Facebook’s ad targeting page this week, which outlines the process of targeting specific demographics and later seeing detailed metrics that help you review your approach. (Check it out, it’s interesting!)

Another concept I’ve been looking at a lot this week is that of ‘Big Data’. I had some trouble defining it so I evaluated other attempts . . . which weren’t as helpful as I had hoped. It appears Big Data is one of those new buzz words that everyone is using and defining differently in relation to their own work or theoretical context.

However, the MIKE2.0 site offers an interesting understanding that emphasises the complexity of data, rather than its size, as the definitive property of Big Data. This appeals to me and my approach because it highlights the confusion and potential for new understandings that this phenomenon introduces. To complement this, I also like Mac Slocum’s description of social data as “an oracle waiting for a question” in this context. There is just so much data out there, and the new problems we face have less to do with accessing data than with asking the right questions to find out something new and exciting.

And this is where we return to Facebook. I’ve always thought the best argument for limited representation (gender/sex is always a good example, but other categories are just as relevant here) is that it helps the service’s advertisers. However, we now have technology that can produce rich datasets and give us more detail about individuals, so wouldn’t allowing a greater range of representation actually improve targeted advertising? Companies could include or exclude demographics and then review the effectiveness of their advertising in much more detail.

One example that Facebook gives is a success story in which a wedding photography business targeted women aged 24–30 who were recently engaged. Of course this works*, but it would be even more successful if advertisers could examine and target finer demographics within this quite broad segment. Imagine if Facebook simply allowed an ‘other’ gender/sex option – even this simple change would help many advertisers avoid paying to target ‘those crazy, politically correct hippies who probably wouldn’t want my products anyway’, and give many more advertisers the ability to more easily target this specialised group.
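
To sketch the point (this is not a model of Facebook’s actual ad tools; the records and the little targeting helper are made up), adding an ‘other’ option takes nothing away from the wedding photographer’s segment – it simply lets other advertisers include, or exclude, a group they currently can’t see:

```python
# Hypothetical targeting helper: an advertiser supplies an include-list of
# genders and an age range; richer gender categories only refine the filter.
from dataclasses import dataclass

@dataclass
class User:
    gender: str           # e.g. "female", "male", "other"
    age: int
    recently_engaged: bool

def audience(users, genders, age_range):
    """Return users whose gender is in the advertiser's include-list."""
    low, high = age_range
    return [u for u in users
            if u.gender in genders and low <= u.age <= high and u.recently_engaged]

users = [User("female", 26, True), User("other", 28, True),
         User("male", 29, False), User("female", 33, True)]

print(len(audience(users, {"female"}, (24, 30))))           # 1 - the original segment
print(len(audience(users, {"female", "other"}, (24, 30))))  # 2 - opting in to the 'other' group
```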

Big Data is complicated, but it enables us to uncover interesting and important details. And while corporations such as Facebook may benefit from expanding their user options, government-funded projects like the census – which still enforce a male/female binary and ignore non-standard religions, making us all miss out on exploring and better understanding our diverse population now and into the future, an important aspect of the census’s purpose – really have no excuse.

Admittedly, this is the very early stage of an idea so it’s not completely thought out, but it’s the first time I’ve been able to see Facebook’s binary gender/sex field as being detrimental to their business and thought this was important enough to share.

* As much as any online advertising works, at least. I’m always surprised to hear that people click on advertising links, because it counters my own practices so strongly. But I’m happy these people are out there, keeping the Internet alive!

Marginalisation remains in Google’s ‘more inclusive’ naming policy

In a post on Google+ today, Bradley Horowitz announced that Google+ have revised their handling of names in order to work “toward a more inclusive naming policy”. In itself, this sounds great, but I was right to be hesitant in my celebration.

Previous problems

There were many issues with Google+’s original ‘Real Names’ policy. Put simply, Google tells users they must use their real names on Google+ and, if it is suspected users are not complying with this, they may have their account suspended – unless they happen to be a high-profile celebrity, of course. Disregarding the obvious profitability that comes with accurate user data, we heard the typical arguments about how real names create accountability and make people play nice with one another. (I’m still far from convinced this is the case. Boing Boing has a nice, recent discussion on this debate if you’re interested.)

The Geek Feminism Wiki page, Who is harmed by a “Real Names” Policy?, which I keep linking everyone to, highlights the issues better than I can. Along with the simple technical issues – ‘Um, I don’t have exactly two names so I can’t fill in my real name in your system?’ – comes a long list of people who cannot use, or do not want to use, their real name for valid reasons such as safety, avoiding harassment, or not wanting their voice marginalised due to assumptions others make about them from their name. This is a real issue for a lot of people directly, and for the rest of us indirectly – we lose their voices in the conversation.

So any improvements on the policy should be positive, right?

The changes

As well as facilitating more languages (this is great!), Google now allows users to include a desired nickname along with their full ‘real name’. To be absolutely clear, there is no indication that users will ever be allowed to hide their real name from others. This is simply a feature that allows users to include additional information.


First and last names are still unable to be hidden on Google+.

I admit, this is a step forward, but it certainly is, as Horowitz states, “a small step”. They’re helping people use more complicated real names and they’re helping people be recognised alongside their more common pseudonyms. But the people for whom major changes are more urgent are not assisted at all here. Those victims of assault who don’t want to be located by their abusers? Those people who dare to prefer that their social presence not be easily searchable by banks and potential future employers? Citizens who want their words heard for what they say rather than for the gender or colour of the hands that type them? They still need to be comfortable listing their full, legal names or not use the service at all. In short, they’re still not welcome.

Statistics and justifications

And this is where it pains me to read the justifications for this system change. It is claimed that, because users submit three times more appeals to add a nickname than to use a pseudonym as their primary name, this is a reasonable response. However, people who do not want to declare their real names in the first place never become ‘users’ at all, so they are not counted in this statistic, even though they are precisely the people who want to be included. And if the statistic simply refers to users attempting to create a new account (the wording is a little unclear), it still excludes those who are aware of the real names policy and do not bother signing up as a result, or who join using a fake name that the system happens to let through. They go unrecorded.

Of course, there are other issues with the wording as it stands – just because someone doesn’t submit a name appeal (I haven’t!) doesn’t mean they have no opinion on this issue or would not be negatively affected by Google doing nothing – but the suggestion that allowing pseudonyms is an unimportant feature request, justified by some careful number gathering, appears to be an indication that they’re just going to keep avoiding this legitimate concern. They’ve “listened closely to community feedback” but decided to implement only those changes that don’t question the original real names policy.

In short, I believe the stated 0.02% of users who submit a name appeal to use a pseudonym is a strong under-representation of the number of users who would actually prefer this option – not to mention those who would simply like it to be available, even if they don’t change their own name to a pseudonym.

Every time I see Google implementing a new feature, I see ever more clearly who they really are.

I read Alan Moore’s V for Vendetta this afternoon while thinking about social media service exclusions. The following verse from V’s sardonic “This Vicious Cabaret” struck me as relevant here:

There’s thrills and chills and girls galore, there’s sing-songs and surprises!

There’s something here for everyone, reserve your seat today!

There’s mischiefs and malarkies . . .

but no queers . . . or yids . . . or darkies . . .

within this bastard’s carnival, this vicious cabaret.

So, I admit it may be a stretch to suggest Google is comparable to the fascist, post-apocalyptic governing body in power throughout most of the story. But the point is this: if these services do what they (as corporations) intend to and gain a strong user base, while also refusing service to significant demographics and important voices, they begin to erode those democratic elements of communication we were promised at the dawn of the Internet.

And this isn’t the world I want to live in.

Stick-figure sexism and user profiles or: my new favourite xkcd comic

My research became a little more complicated in November.  And by ‘complicated’, I always mean ‘interesting and fun’.

I was having a conversation with Nina Funnell about my work on gendered spaces and how this influences practices of social engagement. The idea is that enforced gender declaration, together with a limited range of responses and an imposed prominence of this attribute, creates issues for equality of participation. Users with perceived feminine profiles are often marginalised, their voices weakened through gender stereotyping and through experiences of harassment, whether direct or observed (one study found “female usernames received 25 times more threatening and/or sexually explicit private messages than those with male or ambiguous usernames” [1]).

Gender, I find, is a helpful example for explaining this issue of marginalisation and fairness of participation within communities. But it is a specific illustration of a broader issue: individuals are not treated as equals in practice – whether through silencing or through others quietly delegitimising their voices via stereotyping – and individuals have only limited control to diminish the negative stereotypes that work against them.

One clear solution is to give individuals more control over deciding which aspects of themselves they wish to reveal. Removing the mandatory gendering of social media spaces and allowing pseudonyms, for example, is a good step in the right direction. Users ‘a1’ and ‘a2’ (with no other declared attributes) are arguably far more equal than users ‘alison’ and ‘ben’. The more attributes added to the basic user ‘skeleton’, the less equal users become, depending on the viewers, their personal understandings of these attributes within a social context, and their processes of stereotyping. I argue that digital systems exert too much control by mandating the declaration of information that becomes publicly attributable to users.
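
As a minimal sketch of what that control could look like in a profile data model – everything beyond a handle is optional and hidden unless the user explicitly reveals it (the field names here are my own invention, not those of any existing system):

```python
# Sketch of an opt-in profile: only the handle is mandatory; every other
# attribute stays private unless the user adds it to the 'revealed' set.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Profile:
    handle: str                       # 'a1', 'a2', a pseudonym, a legal name...
    gender: Optional[str] = None      # undeclared unless the user chooses otherwise
    location: Optional[str] = None
    revealed: Set[str] = field(default_factory=set)  # attributes the user opts to show

    def public_view(self) -> dict:
        """Expose only the handle plus whatever the user has chosen to reveal."""
        view = {"handle": self.handle}
        for attr in self.revealed:
            value = getattr(self, attr, None)
            if value is not None:
                view[attr] = value
        return view

a1 = Profile(handle="a1")
alison = Profile(handle="alison", gender="female", revealed={"gender"})
print(a1.public_view())      # {'handle': 'a1'}
print(alison.public_view())  # {'handle': 'alison', 'gender': 'female'}
```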

Putting it simply, publicly categorising users within communities, such as gendering spaces through mandatory declaration, harms equality of participation.

Giving users more control over their public appearance may seem like a simple solution that fosters equality in community engagement. In fact, this was the direction my argument had been taking for much of the year. However! A real solution is much more complex.

Nina mentioned a concept called ‘stick-figure sexism’. It is where we are shown a simple stick figure and asked to describe the type of person we believe it represents. More often than not, the response outlines a middle-aged, able-bodied, white male. This, then, is considered a ‘normal’ person (in stick-figure land), and any divergence from it is represented through ‘add-ons’ such as long hair, coloured-in heads that represent different skin shades, walking canes, etc.

gethen blog has a fun, short post on stick-figure sexism, which describes xkcd comics as a “serial offender” [2]. On a related note, I’d like to share my new favourite xkcd comic, as published on that post.


Remixed by gethenhome from the xkcd original.

Hearing about this concept, I recognised important implications for my own work. If I’m advocating the removal of mandatory categorical fields within public user profiles, it is conceivable that some communities may be no better off – or even worse off. If we remove our focus on gender, say, it could give rise to the assumption that more users fall under whatever gender we imagine is more likely to participate in those communities. We may assume that all (or the vast majority of) ungendered, pseudonymous users are young, white, male Americans, and in doing so destroy the sense of diversity we are led to experience in real-world situations.

All in all, I believe the negative consequences of this (let’s give it a name) ‘profile sexism’ in practice will be small, especially when compared to the positive consequences that would be far more apparent.  However, it’s certainly important for me to address in my work.  To find a reasonable solution we need to look at both the technical and social gendering of spaces.

An early observation I noted was that, through many years’ engagement with communities on livejournal, I have personally experienced many situations where assumptions have not followed this idea of profile sexism. In fact, many communities lead me to perceive a large variety of cultures (I use the term very loosely) coming together to discuss a topic of shared interest. One reason for this is that I’ve been participating in these communities for so long that of course it would have sunk in that users come from different locations and account for a variety of cultural demographics. This suggests that the sense of multiculturalism and the acceptance of various views is learned over time. However, it’s hard to gauge the strength of this effect, because first impressions are difficult to remember and compare against present understandings.

Another reason for the sense of inclusiveness I register from these communities is that some aspect of their design – influenced by the livejournal system, the community moderators and the members themselves – may facilitate it. It may be that some element of these communities’ appearance makes them feel more welcoming and inclusive than, say, YouTube and reddit comments, or user responses to articles on smh.com.

In truth, it’s probably a mix of both: a learned understanding built through previous interaction, and particular design elements that help inform a sense of inclusiveness. In effect, what I’m now looking at (though this is only a small part of my research) is a way to determine better system design for various communities, based on the kind of interaction desired. This argues against the common privacy demarcation between, on one side, Zuckerberg’s and Schmidt’s “communities are better when everything is public [also: we can make more money from it]” and, on the other, my previous call for users to have full control over their public profiles and be discouraged from publicising anything that’s not relevant.

I still hold the latter view, of course.  But I seek to determine which elements of system design facilitate healthy interaction between users with different backgrounds and social identifications.  This can’t be answered simply by discussing ‘privacy’.

Oh, look, I appear to have summarised my thesis in a few sentences.

——————–

1. “Female-Name Chat Users Get 25 Times More Malicious Messages”, 9 May 2006, physorg.com, <http://www.physorg.com/news66401288.html>

2. “Stick-Figure Sexism”, 29 December 2009, gethenhome.wordpress.com, <http://gethenhome.wordpress.com/2009/12/29/stick-figure-sexism/>

Some important words for the holiday season

Here is a short but brilliant blog entry on Christmas.

It’s a made-up thing, it only exists because people believe in it. You can imagine that if you had been born into a culture that didn’t take any notice of Christmas, you could have lived a long and happy life and never even missed it.

But here in our culture, Christmas is a big deal. Even people who personally don’t find Christmas enjoyable or meaningful, even people who dislike it intensely, still get sucked into exchanging gifts and cards, going to Christmas parties and family Christmas dinners, and wishing “Merry Christmas” to friends, co-workers, and strangers on the street. In fact, for a person who was really determined to avoid Christmas, the only alternative would be to drop out of society altogether.

Gender is just like that.

“Gender” from gethen blog.

Please share on whatever social networking sites kids are using these days =)