Tag Archives: marginalisation

I’m published! – Unlike Us Reader out now

I got notification this morning that the new Unlike Us reader is now available. My essay, ‘None of Your Business? Analyzing the Legitimacy and Effects of Gendering Social Spaces Through System Design’ appears on pages 200-219.

You can read the release announcement at the networkcultures.org site here. The short trailer for the reader is also available on vimeo.

There are multiple ways you can get a copy of the reader. If you go to this page you can read it online using issuu. (It should be noted, however, that if you try to download it on the issuu site it requires you to register, and registration requires you to choose your gender as being either ‘male’ or ‘female’, displaying a perfect example of what I argue is a terrible practice in my essay. Needless to say, I don’t recommend downloading it through issuu!) You can also read and download it through scribd without having to register, or download it directly from the networkcultures.org site.

Of course, as the reader is shared using the CC BY-NC-SA 3.0 license, I can host it on my own server, too! (This is especially convenient because it appears the networkcultures.org site is currently down.) However, if you only want to download my essay, I’ve also uploaded an edited (remixed!) version that cuts out most of the other pages. Links to both versions are below.

Unlike Us Reader: Social Media Monopolies and Their Alternatives (3.9MB .pdf)

Andrew McNicol – None of Your Business?: Analyzing the Legitimacy and Effects of Gendering Social Spaces Through System Design (622kB .pdf)

Special thanks to Miriam Rasch and Geert Lovink who have done an amazing job with this release. I’m looking forward to checking out some of the other contributions once I get some free time.

Please share with anyone you think may be interested. And please feel free to comment or email with any constructive feedback you may have – I haven’t read through this in months, but I think there are a few sections I would change slightly.

Not everyone can sit on a chair

Facebook recently released a celebration/advertisement of their services to mark reaching one billion users*. It’s called The Things That Connect Us and I highly recommend watching it if you haven’t already been linked to it by friends wanting to have a laugh. The text from the video is as follows:

Chairs. Chairs are made so that people can sit down and take a break. Anyone can sit on a chair. And if the chair is large enough they can sit down together. And tell jokes. Or make up stories. Or just listen. Chairs are for people. And that is why chairs are like Facebook. Doorbells, airplanes, bridges. These are things people use to get together so they can open up and connect. About ideas. And music. And other things that people share. Dance floors. Basketball. A great nation. A great nation is something people build so that they can have a place where they belong. The universe. It is vast. And dark. And makes us wonder if we are alone. So maybe the reason we make all of these things is to remind ourselves that we are not.

Various spoofs have already appeared, as is the internet’s wont, as well as Facebook communities, tumblr accounts and articles discussing the confusing meaning behind the ad. While the responses are hilarious in some respects, I actually find the message a little disturbing.

First of all, for a company that talks about opening things up for us to make our own connections, and that regularly champions the concept of openness, I find it telling that they’ve removed the commenting and voting features on the YouTube video**.

My main concern is that the video presents Facebook as being for everybody – ‘Anyone can sit on a chair’, the use of wide-ranging demographics as subjects, and so on – while, behind the scenes, the company does absolutely nothing to address real concerns about its marginalising policies. I offer a revision:

Chairs are for people. As long as they feel comfortable using a name Facebook determines is valid for all of their interactions with others. And that is why chairs are like Facebook. Oh, wait . . .

And there’s this one great moment in the video (0:18) where a young person is putting a doll on a small chair. Presumably, now that Facebook has seen this they will move in and remove the doll for violating the chair’s policy of not using pseudonyms.

Actually, I believe one metaphor introduced in the short video is fairly accurate, though not for the reason intended: ‘airplanes’. They’re an exclusionary technology. They help you better connect with others, but there’s a significant cost involved – privacy, security, safety – making them realistically unavailable to many. Panics about security lead to the removal of civil liberties, and then the service becomes worse for everybody.

Facebook is getting bigger every day. According to their numbers, about one in every seven people worldwide uses the service, and this fraction is considerably higher in some locations. But at least with air travel there is the sense that an elected government*** has some level of regulatory control. Facebook is a company with complete, authoritarian control over the terms of discourse on a social platform that (apparently) one seventh of the global population uses.

Are we comfortable with this? I know I’m not.

——————–

* I don’t believe they’ve actually reached this number yet. See my recent discussion on Facebook numbers for some of the reasons why. Whatever the real number is, it’s still considerable, and it’s not that useful to focus on the details. But since I’m talking about it now anyway, I would conservatively estimate that at least one in every nine people globally, rather than one in every seven, holds a legitimate, personal Facebook account.

** I can’t be sure if it was like this originally, but both commenting and voting were disabled on October 9 when I first saw the video.

*** Obviously, this varies country to country.

ETA: Facebook say they reached ‘1 billion monthly active users on September 14 at 12.45 PM Pacific time’. Mentions of the ‘1 billion’ number I saw elsewhere just said ‘users’, not ‘active users’. The latter wording suggests the number of legitimate users is closer to the stated 1 billion than I estimated in my note above.

Percentages of Facebook users in Australia

In the comments of my previous post I mentioned the benefits of working out the percentage of people who are on Facebook by location, as this would help support an argument about the ethics of excluding people from such spaces. (If more people are on there, the social cost of opting out becomes greater.) I felt inspired to do a quick calculation of the numbers. Prepare yourself for an onslaught of numbers and tables!

Calculating Australia’s current population

We last held a census on August 9 2011 and the Australian Bureau of Statistics (ABS) website is a great resource for demographic information. This page helpfully displays census data for the Greater Sydney area, New South Wales and Australia overall, which I will be referring to here.

However, it has been a little over a year since the census and the population will have increased. To address this I will increase the values found on that page by 1.4%, the estimated rate of national population growth in 2011 from the previous year, as stated on this page. This won’t make my results exact (states and cities grow at different rates, and the past year’s increase may differ from 2011’s), but I think this is close enough for my purposes.

Estimates of the current Australian population by location:

Location Population
Sydney 4,453,157
New South Wales 7,014,505
Australia 21,808,825
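The adjustment above is simple to reproduce. A minimal sketch in Python (the 2011 census base counts here are the values implied by the table – each estimate divided by 1.014 – so treat them as approximate):

```python
# Scale 2011 census counts up by the estimated 1.4% annual population growth.
GROWTH = 1.014

# 2011 census base counts (back-calculated from the estimates above).
census_2011 = {
    "Sydney": 4_391_674,
    "New South Wales": 6_917_658,
    "Australia": 21_507_717,
}

estimates = {place: round(count * GROWTH) for place, count in census_2011.items()}
for place, count in estimates.items():
    print(f"{place}: {count:,}")
```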

Factoring in age restrictions

Facebook doesn’t (officially) allow users under the age of 13. Many ignore this restriction, and some are even assisted by their parents in setting up an account. For this brief study I’m going to pretend this (possibly significant?) number of underage users does not exist.

The problem, then, is removing people under 13 from my population table above so I can compare it with Facebook data. The census statistics page helpfully breaks down the population by age, but uses the ranges ‘10-14’ and ‘15-19’ rather than having a convenient break between ages 12 and 13. To address this I have taken the 2011 totals by location, removed the entire under-10 population and 3/5 of the 10-14 demographic (approximating ages 10-12), and then increased these values by 1.4% as above to estimate the current population.

Revised estimates of current populations to include only those over 13:

Location Population (13+)
Sydney 3,762,165
New South Wales 5,930,528
Australia 18,440,933
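That age adjustment can be expressed as a small formula. A sketch, where the age-band inputs shown are placeholders for illustration, not the actual ABS figures:

```python
def estimate_13_plus(total_2011, under_10, age_10_14, growth=1.014):
    """Estimate the current 13+ population from 2011 census age bands.

    Ages 10-12 are approximated as 3/5 of the 10-14 band, since the
    census ranges don't break conveniently at age 13.
    """
    over_13_in_2011 = total_2011 - under_10 - (3 / 5) * age_10_14
    return round(over_13_in_2011 * growth)

# Placeholder inputs for illustration only (not real ABS figures):
print(estimate_13_plus(total_2011=1_000_000, under_10=120_000, age_10_14=70_000))
```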

Getting data from Facebook

One thing I love about Facebook is that its advertising page allows you to find interesting demographic data about its users. You need to be logged in first (I used my testing account), but all you need to do is go to the ‘Advertise on Facebook’ page, put in a URL (anything – it doesn’t matter), and then play around with the ‘Choose your audience’ section that appears. For example, if you wanted to work out how many Australian accounts have no declared ‘gender’ (yet another example of interchangeability of terms), you select ‘Australia’ as the location and ‘All’ under ‘Gender’ and it displays the audience size to the right (11,624,680 – this and following values retrieved 24 September 2012). Subtract from this the number of ‘Men’ (5,336,740) and ‘Women’ (6,091,320) users and you find that 196,620 (1.69%) Australian accounts have not succumbed to Facebook’s insistence, since 2008, that they choose a sex/gender. (It’s impossible for new accounts to opt out now, of course.)
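The arithmetic in that last step is easy to script. A quick sketch using the audience figures quoted above (retrieved 24 September 2012):

```python
# Facebook ad-audience sizes for location 'Australia', 24 September 2012.
all_accounts = 11_624_680
men = 5_336_740
women = 6_091_320

# Accounts with no declared 'gender' are whatever is left over.
undeclared = all_accounts - men - women
pct = undeclared / all_accounts * 100
print(f"{undeclared:,} accounts ({pct:.2f}%) have no declared gender")
```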

However, there are issues with trusting this data. First of all, it isn’t clear how accurate the displayed values are, and – discounting a highly unlikely set of chance results – all values appear to be rounded to the nearest 10. More importantly, the results rely entirely on the accuracy of user-entered data.

The 11.6 million ‘Australia’ Facebook accounts counted here include, among those from legitimate users,

  • Accounts of deceased persons
  • Abandoned accounts
  • Accounts from those who have moved away and not updated the location on their profile
  • Additional accounts from those who have multiple accounts
  • Accounts from those who are under 13 (which presents a problem for my calculation)
  • Fake accounts (like mine)
  • Accounts made by others for non-humans such as groups, brands, non-human animals, children (who are human but not operating their own account), etc.
  • Closed accounts that are still archived (I have no way of determining whether these are part of the number)
  • Anything else … ?

I’ll call those ‘non-legitimate accounts’ for the purposes of this discussion.

In addition to this, the total number does not include Australian users who

  • Do not declare their location (for example, I calculated that 5.11% of ‘Australia’ Facebook users have not declared a state – it’s unclear how many users choose not to declare a country*)
  • Declare another (fictional or real) location

To an extent, items on these two lists cancel each other out, but it’s difficult to say whether the actual number of active Australians on Facebook is higher or lower than the 11.6M stated by the advertising page. (Any thoughts?)

For my purposes here I’m simply going to use the data given to me by Facebook, though I freely admit the issues raised here make any conclusions or results problematic.

Population of Australians on Facebook

By refining my Facebook advertising results by choosing the city ‘Sydney’, the state ‘NSW’, and the country ‘Australia’, and comparing them with the results of my earlier tables, I get the following results:

Location Estimated population (13+) Declared Facebook population Percentage on Facebook
Sydney 3,762,165 2,669,540 71.0%
New South Wales 5,930,528 3,792,440 63.9%
Australia 18,440,933 11,624,680 63.0%
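The percentage column is just the ratio of the two population columns, and can be recomputed directly from those figures:

```python
# (13+ population estimate, declared Facebook population) per location.
populations = {
    "Sydney": (3_762_165, 2_669_540),
    "New South Wales": (5_930_528, 3_792_440),
    "Australia": (18_440_933, 11_624_680),
}

for place, (pop_13_plus, fb_accounts) in populations.items():
    print(f"{place}: {fb_accounts / pop_13_plus:.1%}")
```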

It makes sense that Sydney has a higher percentage of Facebook users than the rest of the state and country, but I’m actually surprised it’s quite that high: I assumed fewer users would go to the extra trouble of declaring a city, let alone a state, on their profile. I also don’t know what the earlier Facebook profile interface was like, but I assume it didn’t categorise locations as well as the current one does, so many older accounts may list ‘Sydney’ as free text (or one of the many locations encapsulated within it) rather than as a tagged, searchable category option. I therefore suspect many accounts are missing from the Sydney total, though that may be countered by the number of non-legitimate accounts.

Conclusions

In short, what we’ve seen here is how one may use available data from Facebook and the ABS to determine an admittedly unreliable percentage of Facebook users by location.

This and similar studies could support arguments about the coerciveness of Facebook: because of the high prevalence of use within a population, Facebook and similar systems – despite their status as private enterprises that ‘should be able to do whatever they want because it’s their system’ – may actually have a responsibility to make their systems accessible to all by removing barriers such as ‘real identity’ requirements. Because so much of our social engagement now occurs online, and much of it on Facebook, those who are barred from entry lose, to a fair extent, the ability to meaningfully engage with their communities, both geographic and virtual. When systems impose marginalisation, and society silences diverse voices by adopting such systems for everyday interaction, ideas stagnate and we lose as a civilisation.

… And other things I won’t go into now.

Of course, if anyone is actually tempted to use my results or method in making these or other arguments, be prepared for others to dispute the numbers. There’s a wide range of uncertainty within the Facebook data that I’ve only just begun to address.

But that’s not necessarily a bad thing. As someone who advocates for profile systems making all fields optional, it makes me happy to recognise the major limitations of this data =)

——————–

* If I could be bothered, I could work out the declared Facebook populations of every country, add them together, subtract that from the total number of Facebook users, and then attribute a share of the remainder (proportional to the ratio of declared Australian users) to the Australian population … but that could take a while.

Postgraduate symposium abstract – Stranded Deviations

I’ve just submitted a finalised abstract for a twenty minute paper I’ll be giving at the UNSW postgraduate symposium on Monday September 3. (Specific time and location TBA.)

The symposium theme is ‘Making Tracks’ so, naturally, I’ll be using plenty of dinosaurs in my presentation.

Title and abstract are copied below.

I might actually blog about some of this stuff one day, though the rest of the year sees me quite busy writing other things so it may take a while =/

====================

Stranded deviations: Big Data and the contextually marginalised

Knowingly and otherwise, we all leave traces when we use digital technologies. As social and practical interactions moved to the digital realm, facilitated by technological breakthroughs and social pressures, many have become understandably concerned about user privacy. With the increased scale and complexity of stored information, commonly referred to as ‘Big Data’, the potential for another person to scrutinise our personal information in a way that makes us uncomfortable increases.

However, it can also be argued that because there is so much personal data stored in various digital systems our privacy is retained ‒ we all become lost in the noise. Attention is a finite resource so it becomes unlikely that we will experience a privacy breach by a real person. In practice our traces are most often treated as data, computationally analysed, rather than content, scrutinised by biological eyes.

‘Security through obscurity’ may appear to be an inadequate concept here because privacy breaches occur regularly. However, ‘cyber attacks’ are directed at targets who stand out from the noise, chosen based on some form of profiling. Therefore, within any context, certain individuals become disproportionately targeted. Those regularly contextually marginalised have the most to lose from participating in a culture of Big Data, raising issues of equal access.

In this paper I bring these ideas together to argue that the privacy discourse should not only focus on the potential for scrutiny of personal data, but also the systems in place, both social and technological, that facilitate an environment where some users are more safe than others.

Marginalisation remains in Google’s ‘more inclusive’ naming policy

In a post on Google+ today, Bradley Horowitz announced that Google+ have revised their handling of names in order to work “toward a more inclusive naming policy”. In itself, this sounds great, but I was right to be hesitant in my celebration.

Previous problems

There were many issues with Google+’s original ‘Real Names’ policy. Put simply, Google tells users they must use their real names on Google+ and, if it is suspected users are not complying with this, they may have their account suspended – unless they happen to be a high-profile celebrity, of course. Disregarding the obvious profitability that comes with accurate user data, we heard the typical arguments about how real names create accountability and make people play nice with one another. (I’m still far from convinced this is the case. Boing Boing has a nice, recent discussion on this debate if you’re interested.)

The Geek Feminism Wiki page, Who is harmed by a “Real Names” Policy?, which I keep linking everyone to, highlights the issues better than I can. Along with the simple technical issues – ‘Um, I don’t have exactly two names so I can’t fill in my real name in your system?’ – comes a long list of people who cannot or do not want to use their real name for valid reasons such as safety, avoiding harassment, or not wanting their voice marginalised due to assumptions others make about them from their name. This is a real issue for a lot of people directly, and for the rest of us indirectly – we lose their voices in the conversation.

So any improvements on the policy should be positive, right?

The changes

As well as facilitating more languages (this is great!), Google has allowed users to include a desired nickname along with their full ‘real name’. To be absolutely clear, there is no indication that users will ever be allowed to hide their real name from others. This is simply a feature that allows users to include additional information.


First and last names are still unable to be hidden on Google+.

I admit this is a step forward, but it certainly is, as Horowitz states, “a small step”. They’re helping people use more complicated real names, and helping people be recognised next to their more common pseudonyms. But the people for whom major changes are most urgent are not assisted at all here. Those victims of assault who don’t want to be located by their abusers? Those people who dare to prefer that their social presence not be easily searchable by banks and potential future employers? Citizens who want their words heard for what they say rather than for the gender or colour of the hands that type them? They still need to be comfortable listing their full, legal names, or not use the service at all. In short, they’re still not welcome.

Statistics and justifications

And this is where it pains me to read the justifications for this system change. It is claimed that, because users submit three times more appeals to add a nickname than to use a pseudonym primarily, this is a reasonable response. However, people who do not want to declare their real names in the first place never become ‘users’ – they are not counted in the very statistic used to justify the decision. And if the figure instead refers to users attempting to create a new account (the wording is a little unclear), it still excludes those who are aware of the real names policy and never bother signing up, or who join using a fake name the system happens to let through. They go unrecorded.

Of course, there are other issues with the wording as it stands – just because someone doesn’t submit a name appeal (I haven’t!) doesn’t mean they have no opinion on this issue, or that they would not be negatively affected by Google doing nothing. But the suggestion that allowing pseudonyms is an unimportant feature request, based on some careful number gathering, appears to indicate that Google will simply keep avoiding this legitimate concern. They’ve “listened closely to community feedback” but decided to implement only those changes that don’t question the original real names policy.

In short, I believe the stated 0.02% of users who submit a name appeal to use a pseudonym is a strong under-representation of the number of users who would actually prefer this option – not to mention those who would simply like it to be available, even if they don’t change their own name to a pseudonym.

Every time I see Google implementing a new feature, I see ever more clearly who they really are.

I read Alan Moore’s V for Vendetta this afternoon while thinking about social media service exclusions. The following verse from V’s sardonic, “This Vicious Cabaret”, struck me as relevant here:

There’s thrills and chills and girls galore, there’s sing-songs and surprises!

There’s something here for everyone, reserve your seat today!

There’s mischiefs and malarkies . . .

but no queers . . . or yids . . . or darkies . . .

within this bastard’s carnival, this vicious cabaret.

So, I admit it may be a stretch to suggest Google is comparable to the fascist, post-apocalyptic governing body in power throughout most of the story. But the point is this: if these services do what they (as corporations) intend and gain a strong user base, while also refusing service to significant demographics and important voices, they begin to erode those democratic elements of communication we were promised at the dawn of the Internet.

And this isn’t the world I want to live in.

Stick-figure sexism and user profiles or: my new favourite xkcd comic

My research became a little more complicated in November.  And by ‘complicated’, I always mean ‘interesting and fun’.

I was having a conversation with Nina Funnell about my work on gendered spaces and how this influences practices of social engagement.  The idea is that enforced gender declaration together with a limited range of response and an imposed prominence of this attribute creates issues for equality of participation.  Users with perceived feminine profiles are often marginalised, their voices weakened through experiences of harassment, whether direct or observed (one study found “female usernames received 25 times more threatening and/or sexually explicit private messages than those with male or ambiguous usernames”1), and gender stereotyping.

Gender, I find, is a helpful example for explaining this issue of marginalisation and fairness of participation within communities.  But it is a specific illustration of a broader issue: individuals are not treated as equal in practice – whether through silencing or through others internally delegitimising their voices via stereotyping – and they differ in the level of control they have to diminish the negative stereotypes that work against them.

One clear solution is to give individuals more control over deciding what aspects of themselves they wish to reveal.  Removing the mandatory gendering of social media spaces and allowing pseudonyms, for example, is a good step in the right direction.  Users ‘a1’ and ‘a2’ (with no other declared attributes) are arguably far more equal than users ‘alison’ and ‘ben’.  The more attributes added to the basic user ‘skeleton’, the less equal users become – depending on the viewers, their personal understandings of these attributes within a social context, and their processes of stereotyping.  I argue that digital systems exert too much control by mandating the declaration of information that becomes publicly attributable to users.

Putting it simply, publicly categorising users within communities, such as gendering spaces through mandatory declaration, harms equality of participation.

Giving users more control over their public appearance may seem like a simple solution that fosters equality in community engagement.  In fact, this was the direction my argument had been taking for much of the year.  However!, a real solution is much more complex.

Nina mentioned a concept called ‘stick-figure sexism’.  We are shown a simple stick figure and asked to describe the type of person we believe it represents.  More often than not, the response outlines a middle-aged, able-bodied, white male.  This, then, is considered a ‘normal’ person (in stick-figure land), and any divergence from it is represented through ‘add-ons’ such as long hair, coloured-in heads that represent different skin shades, walking canes, etc.

gethen blog has a fun, short post on stick-figure sexism, which describes xkcd comics as a “serial offender”2.  On a related note, I’d like to share my new favourite xkcd comic, as published on that post.


Remixed by gethenhome from the xkcd original.

Hearing about this concept, I recognised important implications for my own work.  If I’m advocating the removal of mandatory categorical fields within public user profiles, it is conceivable that some communities may be no better off – or even worse off.  If we remove our focus on gender, say, it could give rise to the assumption that most users fall under whatever gender we imagine is more likely to participate within those communities.  We may assume that all (or the vast majority of) ungendered, pseudonymous users are young, white, male Americans and, in doing so, destroy the sense of diversity we experience in real-world situations.

All in all, I believe the negative consequences of this (let’s give it a name) ‘profile sexism’ in practice will be small, especially when compared to the positive consequences that would be far more apparent.  However, it’s certainly important for me to address in my work.  To find a reasonable solution we need to look at both the technical and social gendering of spaces.

An early observation I noted was that, through many years’ engagement with communities on livejournal, I have personally experienced many situations where assumptions have not followed this idea of profile sexism.  In fact, many communities lead me to perceive a large variety of cultures (I use the term very loosely) coming together to discuss a topic of shared interest.  One reason for this is that I’ve been participating within these communities for so long that it has naturally sunk in that users come from different locations and a variety of cultural demographics.  This suggests that a sense of multiculturalism and the acceptance of various views is learned over time.  However, it’s difficult to determine the strength of this effect because it’s hard to recall first impressions to compare present understandings against.

Another reason for the sense of inclusiveness I register from these communities is that there is something about them, some aspect of the design – influenced by the livejournal system, the community moderators and the members themselves – that may facilitate this.  It may be possible that some element of these communities’ appearance suggests they are more welcoming and inclusive than, say, the feeling I get reading YouTube and reddit comments, or user responses to articles on smh.com.

In truth, it’s probably a mix of the two: a learned understanding built through previous interaction, and particular design elements that help inform a sense of inclusiveness.  In effect, what I’m now looking at (though this is only a small part of my research) is a way to determine better system design for various communities, based on the kind of interaction desired.  This cuts against the common privacy demarcation between, on one side, Zuckerberg’s and Schmidt’s “communities are better when everything is public [also: we can make more money from it]” and, on the other, my previous call for users to have full control over their public profiles and be discouraged from publicising anything that’s not relevant.

I still hold the latter view, of course.  But I seek to determine which elements of system design facilitate healthy interaction between users with different backgrounds and social identifications.  This can’t be answered simply by discussing ‘privacy’.

Oh, look, I appear to have summarised my thesis in a few sentences.

——————–

1. “Female-Name Chat Users Get 25 Times More Malicious Messages”, 9 May 2006, physorg.com, <http://www.physorg.com/news66401288.html>

2. “Stick-Figure Sexism”, 29 December 2009, gethenhome.wordpress.com, <http://gethenhome.wordpress.com/2009/12/29/stick-figure-sexism/>