Privacy and Surveillance


Simon Davies wrote an interesting blog post, “Why I’ve stopped caring about what the public thinks about privacy”, in which he explains the trap of advocates for any social benefit caring too much about whether there is majority support for their position. I agree with him that privacy advocates who understand the importance of privacy rights and privacy practices should not despair when faced with survey after survey, and experiment after experiment, in which many people, often a majority, either state that they don’t care very much about their privacy or demonstrate through their actions that, even if they do care, they are willing to give it up for a small perceived benefit. However, I think Simon’s article needs further consideration. Of course those of us who see the importance of privacy should not give up our advocacy. But we should understand where the apathy, or even hostility, towards privacy rights comes from. We need solid empirical research on this and good conceptual accounts of why it happens. Only then can we apply force to the right levers to improve everyone’s access to privacy.

My own work on the psychological impact of social network site affordances suggests some of the reasons why people’s survey responses indicate that they have scaled back their desire for privacy. It’s not that they don’t care; it’s that the only way to maintain their sanity, in the face of the peer pressure and network effects pushing them into using things like Facebook (including using their “legal name” and leaving most things open there), is to downgrade their privacy expectations. If we can push back against the privacy-invasive nature of these systems, giving people technological, legal and economic possibilities to connect without exposure, then I am sure that their reported perceptions of privacy will swing back.

The ethics of big data is generating a lot of discussion these days. I read an interesting article today showing that some managers in the health sector find the voracious attitude that “everything must go into the pot” “creepy”, while analytics professionals stress the benefits of more (good quality) data yielding more useful information. The article, though, was typical of the area in that it focussed on the US situation, where healthcare providers are driven by their revenue systems: it cites the “Revenue Cycle Management (RCM) systems” as the source of the data for big data health analytics, systems which capture data mostly so that the provider can charge the right (i.e. the legally/contractually allowed) price to the funder. It’s pretty much only the US that has this crazy system. Elsewhere there are fewer payers for healthcare for the majority of people, sometimes down to (almost) one in places like the UK. The way US healthcare is funded also raises large questions of its own, in that patients severely lack trust that the use of their data will not lead to significant individual problems, up to and including being sacked for being potentially too expensive to insure.

Of course this does not mean that in other countries there are no big ethical issues with big data for health analytics. The UK government’s proposals to limit or ignore patients’ ability to opt out of the care.data programme, through which private companies such as pharmaceutical firms would gain potentially significant private benefits alongside possible public health benefits, but with no guarantees of privacy or security for the data, raise questions similar to the century-plus debate about census data. Before WWII, ethnicity data in the US census was supposed to be inaccessible to the government at large; that guarantee was wiped away after Pearl Harbour, contributing to the disenfranchisement, loss of property and internment of over 100,000 people of Japanese descent, the majority of them American citizens.

Europe, with its more heterogeneous health funding systems, must explore the issues around all of these models and not be driven by US-centric concerns.

I know at least one of my LJ friends will have sympathy with this one. I’ve received the proofs for a new journal article(*). While most of the comments are reasonable, there’s a pair that are rather stupid when taken together. In the article we reference this paper:

Dick, A.R. and Brooks, M.J. (2003) Issues in automated visual surveillance. In: Sun et al (eds.).

which, as anyone who understands referencing can see, then cross-references:

Sun, C., Talbot, H., Ourselin, S. and Adriaansen, T. (eds.) (2003) Proceedings of the Seventh International Conference on Digital Image Computing: Techniques and Applications, DICTA 2003, 10–12 December 2003, Macquarie University, Sydney, Australia. CSIRO Publishing.

The copy editors have separately asked:

Please provide further publication details in the reference Dick and Brooks (2003).

and:

Reference Sun et al (2003) not cited in the text. Please cite in the text, else delete from the reference list.

Argh!

 

(*) From my web page “News” section about this paper: A joint paper with Dr James Ferryman of the School of Systems Engineering, University of Reading, has just been accepted by Security Journal. The pre-print of The Future of Video Analytics for Surveillance and Its Ethical Implications is available from The Open Depot.

A copy of a message I sent to Bugzilla today.

I would like to be able to report crashes on my system using Bugzilla. However, I will not sign up for an account on that service because it violates a basic principle of user privacy, and for no good reason as far as I can tell. It requires an email address to sign up, but that email address is then visible to everyone on any bug report submitted. The suggestion, made “helpfully”, is that users should use a “secondary” email account to avoid spam on their main account. This is a ridiculous suggestion. If I wish to use Bugzilla for more than just submitting automated bug reports, such as actually tracking the status of my bug, I’m going to want a “push” service reporting changes to the bug, and that means accessing the email account I registered with them. Whether that is a primary or secondary account is therefore beside the point: I’m still going to have to wade through any spam to get at the real contents, and the publication of the email address will pretty much ensure that it gets spammed. Why not follow the more-or-less standard approach of having users select a username which is visible to other users, and, if it’s really necessary to let users contact others for whom they don’t separately know an email address, provide a simple user-to-user personal message system? Since the purpose of Bugzilla is to allow community-minded users to report problems with software to the development community, discouraging them from doing so degrades the whole community effort.
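By way of illustration, here is a minimal sketch, in Python, of the account design suggested in that message. It is not Bugzilla’s actual data model; the Tracker class, field names and notification hook are all hypothetical, intended only to show that an email address can be kept private for notifications while other users only ever see a chosen username, with an internal message store handling user-to-user contact.

```python
from dataclasses import dataclass


@dataclass
class Account:
    username: str   # public handle shown on bug reports
    email: str      # private: used only for push notifications


@dataclass
class PersonalMessage:
    sender: str     # username of the sender
    recipient: str  # username of the recipient
    body: str


class Tracker:
    """Hypothetical bug-tracker account store (not Bugzilla's schema)."""

    def __init__(self):
        self.accounts = {}   # username -> Account
        self.inboxes = {}    # username -> list of PersonalMessage

    def register(self, username, email):
        self.accounts[username] = Account(username, email)
        self.inboxes[username] = []

    def public_profile(self, username):
        # Only the username is ever exposed to other users.
        return {"username": self.accounts[username].username}

    def send_message(self, sender, recipient, body):
        # User-to-user contact without revealing either email address.
        self.inboxes[recipient].append(PersonalMessage(sender, recipient, body))

    def notify(self, username, text):
        # Placeholder: a real system would email the private address here.
        print(f"notify {self.accounts[username].email}: {text}")


if __name__ == "__main__":
    tracker = Tracker()
    tracker.register("bug_hunter", "private@example.com")
    print(tracker.public_profile("bug_hunter"))  # no email in the output
```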

There has been a recent spate of reports regarding Research In Motion and their difficulties with various surveillance-oriented regimes (the UAE, Saudi Arabia, Pakistan) demanding access to the emails sent from the famous and popular Blackberry mobile communications system. The most recent addition to the countries demanding such access is India. I find it interesting that they are targeting the Blackberry in this way. Standard email protocols provide exactly the same facility as the proprietary systems used by Blackberry and many other smartphones: they can send and receive email via remote servers over encrypted connections, so that only if the user’s device is cracked, or the server is located in-country, can the government access the communications data (modulo claims about the encryption-cracking capabilities of the NSA and GCHQ).
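To make that point concrete, here is a minimal sketch in Python of what “standard email protocols” means in practice: an ordinary IMAP/SMTP client talking to a server that could be hosted anywhere in the world, over TLS-encrypted connections, with nothing Blackberry-specific involved. The hostnames, account name and password below are placeholders, not real services.

```python
import imaplib
import smtplib
from email.message import EmailMessage

# Hypothetical out-of-country mail provider (placeholders, not real hosts).
IMAP_HOST = "imap.example-overseas-provider.com"
SMTP_HOST = "smtp.example-overseas-provider.com"
USER, PASSWORD = "alice@example.com", "correct horse battery staple"

# Fetch mail over an encrypted (TLS) IMAP connection.
with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    status, data = imap.search(None, "UNSEEN")
    print("unread message ids:", data)

# Send mail over SMTP, upgrading the connection to TLS with STARTTLS.
msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = USER, "bob@example.org", "Hello"
msg.set_content("Sent over an encrypted connection to a remote server.")

with smtplib.SMTP(SMTP_HOST, 587) as smtp:
    smtp.starttls()
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)
```

Note that the encryption here protects the link between the handset and the server; as the post says, a government can still get at the data by compromising the device or by compelling a server located within its own jurisdiction.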

More flexible smartphones, such as the iPhone and Windows Mobile- or Android-based systems, can of course be set up to use standard email servers anywhere in the world. Are these the next target, or are the users of Crackberries seen as the most likely to be “misusing” email (according to the governments in question)? This attempted fragmentation and re-bordering of the internet was analysed by Goldsmith and Wu a few years ago in Who Controls the Internet? Will open platforms such as Android be banned in favour of iPhones, but only if Apple follows RIM’s example and limits email apps to in-country servers? What about travel to these countries? Will entry into Pakistan with an iPhone be followed by the revocation of any app allowing out-of-country encrypted communications?

In a clear abuse of the parliamentary process and a travesty of democracy, the Digital Economy Bill had its second reading in the House of Commons yesterday, a process which now allows the final passage of the bill to be pushed through in “wash-up”. The reason this is a travesty is that the wash-up process is supposed to be for bills with cross-party support and few concerns about detailed provisions needing further parliamentary scrutiny, so as to avoid clogging up the post-election parliamentary timetable with uncontroversial matters that would otherwise get in the way of (supposedly) the new government’s manifesto commitments. Neither of these is truly the case for the Digital Economy Bill. While the Conservative and Labour front benches may have whipped enough of their MPs into line, the bill did not have all-party support, and it was not (and is not) uncontroversial. Claims that it had received significant debate in the Lords ignore the constant cries from the current government about how undemocratic our Upper Chamber is. When the Lords blocks something the government doesn’t like, it’s undemocratic; but when it serves as a mechanism for the near-dictator Lord Mandelson to push through a piece of captured legislation, then it counts as sufficient democratic scrutiny for a major bill. The digital economy is incredibly important to the UK, and a bill to support and develop it needed to be put through the appropriate parliamentary scrutiny and crafted with balance between all sides of the discussion. Ramming something through with Henry VIII powers, via a lop-sided set of proposals which runs the risk of destroying significant chunks of internet access and business through chilling effects if not legal action, all because Lord Mandelson got his ear bent by a rich representative of a dinosaur industry, is not democracy; it is corruption and abuse of power.

Larry Lessig changed tack in the US from lobbying for more sensible copyright (and related rights) laws to tackling the corruption of US politics and the capture of the law-making process by small groups with large amounts of money. After the DEBill fiasco in the UK, it’s easy to see why he felt that move was necessary.

According to this article in The Grauniad, the UK government is set on ignoring the recommendations of yet another report it commissioned (this time the Digital Britain Report, last time the Gowers Report) and will introduce proposals for a two-strikes law on suspending or removing internet access from those accused by rights holders of illicitly sharing copyrighted material online (official government details). (more…)

The Counter Terrorism Act 2008 includes the provision:

76. Offences relating to information about members of armed forces etc

(1) After section 58 of the Terrorism Act 2000 (collection of information) insert:
“58A Eliciting, publishing or communicating information about members of armed forces etc

(1) A person commits an offence who:

(a) elicits or attempts to elicit information about an individual who is or has been:

(i) a member of Her Majesty’s forces,

(ii) a member of any of the intelligence services, or

(iii) a constable,

which is of a kind likely to be useful to a person committing or preparing an act of terrorism, or

(b) publishes or communicates any such information.”

This comes in addition to a statement in December 2008 in which the Home Secretary informed the National Union of Journalists that photography in public places may be restricted when it “may cause or lead to public order situations or inflame an already tense situation or raise security considerations”.
(more…)

I’ve painfully pushed my way through “Cult of the Amateur”, despite its huge flaws. As mentioned last time, the author repeatedly falls into the “broken window fallacy” in all of his economic arguments so far.

A couple of sections cover the issues of accountability in the press and the undermining of advertising. Keen offers up examples of where mainstream media have been caught out, including outright lies, poorly researched stories and so on. He presents these as evidence of the higher quality of that infrastructure, because of the sanctions then applied. However, the very fact that these failings exist in the mainstream media rather undermines his case, particularly as there’s no way of knowing how many flawed articles aren’t spotted. He also excoriates the self-reinforcing groups “talking only to themselves”. These groups are no worse than existing examples of biased media, for example “Fox News”. One of the differences between the mainstream media and the new online media is that the new media does not generally make the same claims to lack of bias, or to “authority”, that existing media make. (more…)

The UK government has commissioned a review of children’s access to online material. Are we about to see an attempt by the UK government to introduce CDA-, COPA- or ChIPA-style laws over here, without the protections of a constitutional guarantee of freedom of speech that led to those acts being substantially struck down by the US Supreme Court?

(more…)
