Category Archives: Data Privacy

Ad blocker detection under the new proposed e-Privacy Regulation

Whether ad blocker detection is caught by the consent requirements of the currently in force e-Privacy Directive has always been contentious.

The IAB wrote a helpful summary in summer 2016, available here. Broadly speaking, depending on how the e-Privacy Directive's consent requirements and exceptions are interpreted, and depending on the technical method used to detect ad blockers, it's arguable (at least as far as the European Commission and the Art 29 Working Party are concerned) that ad blocker detection requires a user's prior consent. This is because, in most cases, it constitutes the accessing of information from a user's device (the information being whether the user has installed an ad blocker).
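To make the technical point concrete, here's a rough sketch of one common way sites detect ad blockers (the class names, timings and wiring below are illustrative assumptions, not any particular publisher's implementation): the page inserts a "bait" element styled like an ad and then checks whether the browser's filter rules have hidden it. Reading back that rendered state is precisely the kind of accessing of information from the user's device that the consent requirement arguably catches.

```typescript
// Hypothetical illustration of a "bait element" ad blocker check.
// Class names and the 100ms delay are illustrative only.
function detectAdBlocker(): Promise<boolean> {
  return new Promise((resolve) => {
    const bait = document.createElement("div");
    bait.className = "ad ad-banner adsbox"; // class names commonly targeted by filter lists
    bait.style.cssText = "position:absolute;left:-9999px;height:1px;width:1px;";
    document.body.appendChild(bait);

    // Give any cosmetic filtering a moment to apply, then inspect the element.
    window.setTimeout(() => {
      const blocked =
        bait.offsetHeight === 0 ||
        window.getComputedStyle(bait).display === "none";
      bait.remove();
      resolve(blocked);
    }, 100);
  });
}

// Usage: the result might drive a polite prompt asking the user to whitelist the site.
detectAdBlocker().then((blocked) => {
  if (blocked) {
    console.log("Ad blocker appears to be active");
  }
});
```

Whatever the precise implementation, the site is inspecting the state of the user's device to infer whether an ad blocker is installed, which is why the legal analysis turns on the "accessing of information" wording rather than on cookies specifically.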

However, reading through the European Commission's proposal for the new e-Privacy Regulation, published on 10 January 2017 and due to come into force in May 2018 alongside the GDPR, you'd be forgiven for being confused (as I am) about the intended position on ad blocker detection under the new Regulation.

Following publication of the proposed Regulation, EMMA, the European Magazine Media Association, and ENPA, the European Newspaper Publishers' Association, issued a press release which stated (amongst other things) that they:

…deeply regret that the proposed Regulation does not foresee more exceptions than for the purpose of first-party analytics. Exceptions to the proposed prohibiting rule on accessing and storing data on a user’s device would however be necessary for such purposes as ad-block detection…

However, on the same day the FT published an article which stated, amongst other things, that:

in a proposed reform of the law on Tuesday, the commission attempted to clear up legal confusion by deciding that detection of an ad blocker would not break EU rules.

The above interpretations of the new Regulation seem to conflict. So what do the Commission and, more importantly, the new Regulation actually say?

Firstly, in the Q&A contained in its Fact Sheet, the Commission says the following:

…the proposal allows website providers to check if the end-user’s device is able to receive their content, including advertisement, without obtaining the end-user’s consent. If a website provider notes that not all content can be received by the end-user, it is up to the website provider to respond appropriately, for example by asking end-users if they use an ad-blocker and would be willing to switch it off for the respective website.

The above seems pretty clear: the Commission sees publishers checking whether a device can receive ads as not requiring consent. I interpret this as the Commission saying that accessing information from an end user's device for the purpose of ad blocker detection is an exception to the rule in the Regulation that any accessing of information from end user devices is prohibited unless prior consent (amongst other things) is given. This statement from the Commission probably formed the basis of the FT article referred to above.

So, on the basis of the above, why the deep regret from EMMA and ENPA in their press release? Well, whilst the Commission's position seems clear, the actual drafting of the Regulation itself is unfortunately not.

The Regulation doesn't refer expressly to ad blocker detection or ad blockers at all. This is unsurprising given its aim of being technologically neutral and future-proof. However, Recital 21 of the Regulation does say:

the mere logging of the fact that the end-user’s device is unable to receive content requested by the end-user should not constitute access to such a device or use of the device processing capabilities.

On the face of it, you might think that the Recital sets out the same position as the Commission's Fact Sheet. However, it's not that clear.

The Recital refers to "content requested by the end-user", and in its Fact Sheet the Commission includes ads within such content. However, when someone installs an ad blocker, they are not requesting ads. The precise reason they have installed the ad blocker is that they only want editorial content and specifically do not want advertising. The Recital therefore implies that merely logging the fact that the end-user's device is technically unable to receive the editorial content requested by the user won't constitute access to the device. However, if a publisher logs the fact that the user's device is unable to receive other content which the user has not requested (e.g. ads) and almost certainly doesn't want (because they have an ad blocker installed), then perhaps that does constitute accessing the device?

The other problem is that the point of Recitals is to provide guidance or background as to how the legislative provisions are to be interpreted; in most cases they also summarise the legislative provisions themselves. However, it's not clear where the above principle is actually covered in the Regulation itself.

The consent requirements for local data/device access are contained in Article 8. This states that the use of processing and storage capabilities, and the collection of information from users' devices, is prohibited unless one of the grounds in Articles 8(1)(a)-(d) applies. Art 8(1)(b) provides for the user's consent as a valid ground. Art 8(1)(d) gives "web audience measuring" (i.e. analytics) as another permitted ground. However, there does not appear to be an express ground permitting the collection of information from users' devices to log the fact that the end-user's device is unable to receive content requested by the end-user.

Perhaps this is because the Commission are not saying that this type of local access is expressly excluded from the prohibition by being a permissible "ground" (like "consent" and "web audience measuring"), but rather that it doesn't constitute local access at all. That doesn't seem to make sense, though, because it plainly does constitute local access – in which case, why not just include it as an additional ground in Article 8(1)?


Getting over it: New meanings of privacy

On Thursday last week I spoke at the SCL Policy Forum during the “Social Data” session – my talk was about privacy, social media, young people, social norms, regulation and all that kind of thing. Below is a rough transcript of what I said (including links to references etc):

The reference in the title of this presentation to “Getting over it” comes from a now infamous quote by the then CEO of Sun Microsystems, Scott McNealy, who’s reported to have said in an interview in 1999: “You have zero privacy. Get over it!”. When McNealy said that, I was 19 years old. Mark Zuckerberg was only 15 years old and 5 years away from launching Facebook in 2004.

11 years after that McNealy interview, Mark Zuckerberg was interviewed himself in 2010 and he said the following:

People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people… that social norm is just something that has evolved over time.

That quote of his has also become quite famous and led to a flurry of media attention about Facebook’s attitude to privacy. In that same interview, Mark Zuckerberg went on to say:

We [Facebook] view it as our role in the system to constantly be innovating and updating what our system is to reflect what the current social norms are.

It’s this last sentence of his that I think is particularly interesting because it raises various questions about whether it’s social media which drives the development of social norms relating to privacy, or whether the opposite’s the case, and it’s social norms which drive the direction of social media. In reality, we see a symbiotic relationship whereby they influence and are influenced by each other as well as various other factors which I’ll come onto.

By “social norms”, I’m talking about group-held beliefs or societal conventions which specify how individuals should behave in a given context. As a result they create certain expectations regarding that behaviour. Those expectations become significant from a policy or a regulatory perspective when they get used as the basis for legal tests. A relevant one in this case being the “reasonable expectation of privacy” which the English courts have used as a test in the various Article 8 cases around the “misuse of private information”.

Privacy, however, is a nebulous concept. It’s very difficult to pin down an accepted definition. In the late 19th century, the US lawyers Warren and Brandeis came up with their often quoted description of privacy as the “right to be let alone”. In Europe, Article 8 of the ECHR talks about privacy in terms of a right to respect for private and family life, home and correspondence.

So there are various aspects to privacy and they’re protected in different ways. There’s privacy relating to your property, relating to you physically, relating to your communications, and finally there’s “informational” privacy regarding information which relates to you. In this case I’m broadly focusing on informational privacy and its relationship with social media. Of course one of the ways that relationship is regulated is through data protection law which, as we all know, provides rights to data subjects and imposes obligations on data controllers in the context of the automated processing of personal data.

So what’s so special about social media? One of the things people use social media for is to fulfil the same role as a physical social space. So in the same way as people use a cafe to meet up, socialise and communicate, social media acts as an online social space where users socialise and interact. However, whilst these online social spaces may be used for the same purpose as physical social spaces, there are various fundamental differences which affect certain social norms relating to privacy and create certain risks. These are well documented so I won’t go into much detail.

For example, we all know that when you say something in a physical social space, your words remain only in the memory of the person you spoke to. In online social spaces, your words persist. That persistence becomes problematic from a privacy perspective if you say something you might regret later – particularly if it's discovered by, say, a university you're applying to or an employer you're interviewing with.

That risk is exacerbated by the fact that anything you say can be so easily copied, altered and re-published on a global scale. The potential exposure increases further because online social spaces allow us to be indexed and easily found.

There's also the issue of audiences. In physical spaces, you can generally see who's within earshot and so who can hear what you're saying. In online social spaces, the potential audience for your communications is invisible and potentially vast, and includes the proprietor of the online social space, whose business model is likely to be predicated on you sharing information publicly.

Whilst users of social media may attempt to control this audience with, for example, a selected "friend list" on Facebook, this can create what's been referred to as the "illusion of intimacy", because the notion of "friends" in an online social space may differ significantly from friendship in a physical social space.

Differing social pressures can also lead to an audience in an online social space taking a different form to that of a physical social space. For example, there aren’t yet well established social conventions regarding the acceptability of rejecting or accepting friend requests on Facebook – so the pressure a user may feel to accept a friend request could lead to a broader audience and the sharing of information with people who aren’t in fact your friends.

A key issue here is that in physical social spaces there are various well established physical social conventions people use as a tool to indicate the degree of privacy or publicity they expect to apply to a particular communication. The volume or tone of my voice for example, or my facial expression, or my body language. The difference with an online social space is that none of these physical social conventions are possible and as a result, in the absence of substitute tools to indicate the user’s intention, communications can end up being more “public” than the user wants or expects.

One of the things the regulatory regime seems to have been trying to do, with varying degrees of success, is place an obligation on the proprietor of an online social space to build functionality which provides equivalent tools to users as a substitute for those missing physical social conventions.

However, there's an inherent tension here for various reasons: not only because the service provider's business model is likely to favour the public sharing of information, but also because, firstly, it puts the onus on the consumer to learn, understand and use those tools, and secondly, physical social conventions are nuanced and complex, and simulating them online in a natural way is very difficult.

Before I look at some of these issues further, I want to look at it from a user perspective. It's particularly interesting when you look at younger social media users, because through their use of social media, I'd suggest that young people understand and value their privacy differently from their parents' generation, who grew up without social media. There's evidence of this when you look at young people's motivations for using social media.

A recurrent theme in relation to privacy is "control". Some interesting studies conducted by the US researcher danah boyd [sic] have found that whilst adults think of their "home" as private, it's a different experience for young people who live at home, because they don't exercise the same control over their personal space as their parents do. Young people may not feel they can control who comes into their house or their room, for example. As a result, online social spaces, where the young person feels he/she has more control, can feel more "private" than their home. So the increased sharing of information online by young people doesn't necessarily indicate a disinterest in privacy, but rather a search for privacy elsewhere.

A particularly well known piece of ongoing research into young people's use of social media and their attitudes towards privacy is the joint research by the Pew Research Center and Harvard University's Berkman Center. In May this year, they published a report in which they found that whilst young people are certainly sharing more personal information on their profiles than in the past, they're still mindful of their privacy.

Interestingly, the focus groups in that study showed that many of the teens had waning enthusiasm for Facebook because they disliked the increasing adult presence and the excessive sharing by other users, but they keep using it because it's such an important part of their social life. So again, it's not that they don't care about their privacy; it's that they feel they need to stay on Facebook in order not to miss out – the perceived social cost of leaving Facebook outweighs their desire for privacy.

Using Facebook as an example, 60% of teens in the study kept their profiles private. What the report refers to as "friend curation" was also an important part of the interviewed teens' perceived privacy management: for example, 74% of them had deleted people from their network or friends list.

A particularly interesting aspect of the study was that it showed that many teen social media users acknowledged that their communications on social media were public and as a result exchanged coded messages that only certain of their friends would understand as a way of creating a different sort of privacy.

It's easy to keep the focus on Facebook because of its dominance, and discussions of social media often group all the different services together under the single heading "social media". However, it's important to take other sites and services into consideration, and the different meanings that privacy has in relation to them, because of their differing perception, functionality and models.

For example, in the Harvard study referred to above, while the teens with Facebook profiles most often chose private settings, Twitter users, by contrast, were much more likely to have a public account. The fact that people use Twitter to broadcast their tweets to as many followers as possible means that different expectations of privacy may arise compared to, say, updates on Facebook, which users may anticipate sharing only with their "friends".

Different social media services provide people with the opportunity to present different personas or to share different aspects of their identities. What someone chooses to share on Facebook, may be different to what they share on Twitter and different still to what they share on LinkedIn. There’s also the issue of different devices and how social media usage varies on PCs, tablets and of course mobiles – but that’s a whole other talk in itself.

So whilst we have all these different conventions evolving on social media, what role can, should or does regulation play in all of this? I said earlier that one of the things the regulatory regime seems to have been trying to do is place an obligation on service providers to build functionality as a substitute for certain missing physical social conventions. I think the Irish Data Protection Commissioner’s audit of Facebook at the end of 2011 was a good example of this. As part of that audit Facebook’s privacy settings and functionality were examined in great detail and various recommendations were made.

However, as I also said earlier, physical social conventions are nuanced and complex and aside from the fact that a service provider’s business model will prefer the public sharing of information, it’s a massive challenge for an online service to try to emulate the sophistication and nuances of our physical social conventions in a way that consumers will understand and be inclined to use.

As a result, a tension is created: Facebook's privacy settings became increasingly complex as the company was pressured to give users more options, mirroring the granularity with which people understand the privacy of their communications in the physical world. Of course, the more complex the privacy settings get, the more the object is defeated, because users understand their options less – so the privacy settings then have to become simpler. But when you start to simplify the privacy settings, you lose the sophisticated and granular way in which people attach different levels of privacy to each of their communications depending on the audience, the context and so on.

I think that technology can make progress in resolving that tension, whereby the increasing sophistication of technology allows all the complexities and nuances of physical social conventions to be more naturally and intuitively mapped to social media. However, I think that leads to some important questions that I’d like to leave you with.

Firstly, what should the goal of regulating social media be? Do we actually need regulation to oblige service providers to try to map offline social conventions to the online world or should we just accept that they are fundamentally different?

Also, in this context, who should we actually be trying to regulate? Is it the platform or the users? If it’s the users, do we actually need more regulation? What’s the risk here? Perhaps there may already be sufficient protection from existing laws such as defamation, confidentiality or intellectual property?

Cookies – are we asking the right questions?

Last week marked the first anniversary of the ICO's decision to start enforcing the new cookie rules in the UK. If you're reading this, you'll almost certainly know that the law actually came into force two years ago as a result of changes to the E-Privacy Directive. The "old" rules operated on a notice and opt-out basis. Under the "new" rules, broadly speaking, notice and prior consent are required.

Ever since the law came into force, lots of questions have been asked by lots of different stakeholders. The main question I’ve been asked as a legal adviser in this area is what consent mechanism a website needs to implement to be compliant (implied consent notice, banner, pop-up etc?).
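By way of illustration only, and not as a statement of what compliance requires, here's a minimal sketch of what one prior-consent approach might look like in practice (the cookie names and banner wiring are hypothetical): strictly necessary cookies can be set regardless, while non-essential cookies wait for an affirmative click.

```typescript
// Hypothetical prior-consent gate: non-essential (e.g. analytics/advertising)
// cookies are only set once the user has actively agreed. Names are illustrative.
const CONSENT_COOKIE = "cookie_consent";

function hasConsent(): boolean {
  return document.cookie
    .split("; ")
    .some((c) => c === `${CONSENT_COOKIE}=granted`);
}

function setNonEssentialCookies(): void {
  // Only reached after consent. Strictly necessary cookies (e.g. a shopping
  // basket session) are exempt and can be set without consent.
  document.cookie = "analytics_id=abc123; path=/; max-age=31536000";
}

function showConsentBanner(): void {
  const banner = document.createElement("div");
  banner.textContent = "We use cookies for analytics and advertising. ";
  const accept = document.createElement("button");
  accept.textContent = "Accept";
  accept.onclick = () => {
    document.cookie = `${CONSENT_COOKIE}=granted; path=/; max-age=31536000`;
    setNonEssentialCookies();
    banner.remove();
  };
  banner.appendChild(accept);
  document.body.appendChild(banner);
}

if (hasConsent()) {
  setNonEssentialCookies();
} else {
  showConsentBanner();
}
```

Even a gate as simple as this immediately raises the interpretive questions discussed below: is the notice "clear and comprehensive", and is a click on "Accept" (or continued browsing) really informed consent?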

One of the much discussed problems with the prior consent rule is that everyone knows the average internet user does not understand and/or will not make the effort to try to understand what cookies are and how they’re used. The notion of the average internet user providing genuine, freely given, specific and above all “informed” consent in relation to cookies is therefore completely spurious.

I went to a seminar recently where Dave Evans from the ICO showed some statistics about the number of complaints the ICO had received about cookies since the rules came into force. According to the ICO, the number of complaints was very low compared to other data protection / privacy issues which they receive complaints about.

What is the point of asking how many people have complained about cookies? Does a low number of complaints indicate a successful regulatory regime or does it indicate a pointless one?  Why did the relevant people actually complain? Why did other people not complain? Is it because they don’t care about cookies? Is it because they didn’t know who to complain to? Is it because they do care about cookies but couldn’t be bothered to complain? Is it because they don’t care about cookies but enjoy complaining? Is it because they would care about cookies if they understood what the hell they were? And… so… on…

The legislation admits that prior consent is pointless for certain cookies (i.e. the ones that are strictly necessary for the site to offer a service requested by the user, such as an online shopping basket). The real target of the rules, as we have been continually told by the regulators, is online behavioural advertising (OBA).

In Opinion WP171 from June 2010, the Article 29 Working Party (an independent body made up of the various European data protection regulators) acknowledged that whilst there are "possible economic benefits to advertisers" through using OBA, these should not come at the expense of individuals' privacy rights. "Possible economic benefits"?! Surely that's an understatement. In any event, surely the implementation of a completely spurious notice and consent regime does nothing to safeguard individuals' privacy rights.

Omer Tene and Jules Polonetsky from the Future of Privacy Forum wrote an article last year in the Minnesota Journal of Law, Science & Technology in which they nicely summarised the regulatory conundrum we’ve found ourselves in:

By emphasizing “transparency and user consent,”… the current legal framework imposes a burden on business and users that both parties struggle to lift. Imposing this burden on users places them at an inherent disadvantage and ultimately compromises their rights. It is tantamount to imposing the burden of health care decisions on patients instead of doctors. Instead of repeatedly passing the buck to users, the debate should focus on the limits of online behavioral tracking practices by considering which activities are socially acceptable and spelling out default norms accordingly.

The purpose of OBA is to display adverts to people for products/services which they are more likely to be interested in and therefore buy. OBA and the development of real-time bidding and programmatic buying are the future (or even the present) of the internet. It seems that instead of spending all this time asking consumers to provide consent to something which they either don’t understand, don’t want to understand or don’t care about, the regulators should spend more time asking a fundamental question about what they are actually trying to regulate.

Surely attention should instead be focused on what businesses can/can’t do with people’s personal data and ensuring that online businesses do not abuse that data in a way which causes people either real distress, financial injustice or discrimination (e.g. unfairly increasing prices or denying financial services based on incorrect assumptions drawn from web browsing history). If you asked consumers whether they care about that stuff I know what their answer would be.

Legal issues of interest/confusion to digital marketers

A few weeks ago I went to an interesting event about content marketing hosted by Digital Doughnut in London's fashionable Shoreditch. It was interesting for various reasons. Firstly, there were three great presentations by marketers from the Guardian Digital Agency, NewsReach, and iTrigga; secondly, I was the only lawyer there (I think); and thirdly, when each person I spoke to discovered I was a lawyer, there was some consistency to the legal issues they were interested in and confused about.

These legal issues were (i) the ASA’s “digital remit”, (ii) the “fair dealing” exception under copyright law, and (iii) the applicability of UK data protection law.

The ASA’s digital remit

It’s actually been quite a while (over two years) since the ASA’s remit was extended to cover marketing on advertisers’ own websites and social network sites “under their control”. Prior to this extension of the CAP Code, the ASA’s digital remit only included online ads in paid-for space (e.g. banners, pop-ups, keyword ads on Google etc), as well as emails and SMSs.

The fact that content on a company’s Facebook page could potentially be within scope of the advertising regulations surprised some of the people I spoke to. Some people were particularly surprised that even UGC on a Facebook page could be covered if the content was incorporated into a marketing message.

The relevant part of the CAP Code is paragraph (h) of the introductory section, which states that the CAP Code covers content on companies' own websites, or in other non-paid-for space online under their control, that is directly connected with the supply or transfer of goods, services, opportunities and gifts. What that essentially means is that any content designed to sell something will be captured, as opposed to, for example, editorial, PR, press releases, and investor relations copy, which are outside the scope of the CAP Code.

Incidentally, when the remit extension was announced back in 2011, the ASA said that it would undertake a quarterly review of the extended digital remit with the intention of carrying out a comprehensive review in Q2 of 2013 – so that’s something to look out for…

Fair Dealing

Quite a few people I spoke to were interested in copyright issues and in particular the extent to which the “fair dealing” exception under copyright law meant they could “reuse” content (note that “fair use” is the similar, but not identical, exception under US copyright law).

In reality, the scope of the fair dealing exception in UK copyright law is much narrower than most people think. Under sections 29 and 30 of the Copyright Designs and Patents Act 1988 (CDPA), the fair dealing exceptions only apply to research, private study, criticism, review, or reporting current events. This means that the exception is highly unlikely to apply in the case of third party copyright works which are “borrowed” for marketing purposes.

In the case of research, broadly speaking, it has to be for a “non-commercial purpose” and it’s worth noting that the English courts have been willing to interpret what constitutes a commercial purpose broadly.

The point of the “reporting current events” exception is to protect the role of the media in informing the public about current events.

In terms of what constitutes "criticism" or "review", the English courts have been unimpressed by advertisers' attempts to incorporate third party content into ads and then rely on the fair dealing defence. For example, in IPC Media v News Group Newspapers, The Sun used a picture of the front page of IPC's "What's on TV" magazine in an ad comparing it to "TV Choice" (The Sun's listings magazine). The court held that this didn't constitute "criticism" within the meaning of the CDPA (because the criticism could have been made simply by referring to What's on TV).

Applicability of UK data protection law

In this globalised world of SaaS and cloud hosting, it can be unclear whether UK data protection law applies.

The basic rule is set out in section 5 of the Data Protection Act 1998 (DPA). If a company “controls” personal data and that company is (i) established in the UK and (ii) processes that personal data (which would include collecting it, storing it and even deleting it) in the context of that “establishment”, then UK data protection law will apply – regardless of whose data it is and where the data is stored.

“Establishment” is defined quite broadly in the DPA and includes UK registered companies, or even offices or branches in the UK – i.e. if a US company has an office in the UK and personal data is processed in connection with that branch, then that processing will need to be compliant with UK data protection law.

If there is no establishment in the UK, but a company uses “equipment” in the UK to process personal data (not including where it’s merely for the purposes of transit through the UK), UK data protection law will also apply – i.e. if a US company with no offices in the UK uses servers in the UK to process personal data, then that processing would also (strictly speaking) need to comply with UK data protection law.

It’s also worth noting that certain European data protection regulators have been inclined to take a broader view about what amounts to “using equipment”. The Article 29 Working Party (an independent body made up of representatives of the European data protection regulators) has even suggested that setting cookies on users’ devices could amount to using equipment so that the data protection law of the European country where the device is located would apply. This is controversial because, arguably, this would mean every single website in the world which can be accessed by Europeans would be subject to European data protection law!

The above is only a brief summary of the various legal issues which people at the event were interested in. The world of marketing can be a legal minefield. When marketing enters the digital domain the legal issues increase both in number and complexity!

The future of TV advertising – a lawyer’s perspective

Earlier this week I went to two sessions at the fantastic Future TV Advertising Forum. One session was about the “second screen” market and how the apps ecosystem in this area is taking shape. The other session was about TV VOD advertising, how to optimise advertising inventory, and the formats that work for TV VOD.

I noticed two recurrent themes from these sessions: (i) data and (ii) targeting – in particular, the potential to behaviourally target dynamically inserted TV VOD ads. The ability to collect data from TV viewing and dynamically target advertising based on that behaviour would have been a dream for advertisers years ago, but IPTV platforms now make it a possibility.

A particularly interesting legal/regulatory challenge in this area is how to assess the application of the cookie notification and consent rules (under the amended E-Privacy Directive) in the context of TV VOD behavioural advertising.

In the UK, the rules are implemented by Regulation 6 of the Privacy and Electronic Communications (EC Directive) Regulations 2003. Regulation 6(1) states the following:

…a person shall not store or gain access to information stored, in the terminal equipment of a subscriber or user unless the requirements… are met.

The first point to note is that there’s no reference to “cookies”. The relevant activity is “storing or gaining access to information stored”. This means the regulation covers any kind of customer-side storage and accessing of any information by a service provider. The second point to note is that there’s no reference to computers or even “devices”. All the regulation says is: “terminal equipment”. The rules therefore cover mobile phones, games consoles, tablets, and connected-TVs.

Whilst all the fuss about cookies has been focused on the web, it’s worth noting that collecting data from TV VOD viewing and using it to target ads would be captured by the regulation to the extent it involves setting, for example, a unique identifier on the TV and collecting data from it.
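Purely as a hypothetical sketch of how this might look in practice (the storage key, identifier scheme and reporting endpoint below are invented), a connected-TV VOD app could do something like the following – and it's the writing and subsequent reading back of the identifier in the TV's local storage that amounts to storing, and gaining access to, information in the terminal equipment:

```typescript
// Hypothetical connected-TV VOD app: persist a device identifier and report
// viewing events for ad targeting. Endpoint and names are illustrative only.
const DEVICE_ID_KEY = "vod_device_id";

function getOrCreateDeviceId(): string {
  // Gaining access to information already stored on the device.
  let id = window.localStorage.getItem(DEVICE_ID_KEY);
  if (!id) {
    // Crude illustrative identifier; a real platform would use something more robust.
    id = Math.random().toString(36).slice(2) + Date.now().toString(36);
    // Storing information in the terminal equipment.
    window.localStorage.setItem(DEVICE_ID_KEY, id);
  }
  return id;
}

function reportViewingEvent(programmeId: string): Promise<Response> {
  // Illustrative reporting call; the collected history could later drive targeting.
  return fetch("https://example-vod-platform.invalid/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      deviceId: getOrCreateDeviceId(),
      programmeId,
      timestamp: Date.now(),
    }),
  });
}
```

The mechanism is functionally the same as a web cookie, even though no "cookie" is involved, which is exactly why the technology-neutral wording of Regulation 6 catches it.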

So what are the requirements? Regulation 6(2) says the following:

The requirements are that the… user… (a) is provided with clear and comprehensive information about the purposes of the storage of, or access to, that information; and (b) has given his or her consent.

There have been seemingly endless debates about how to interpret the above regulation. Is it ok to simply display the “clear and comprehensive information” in a privacy/cookie policy linked to from the footer of the webpage? Do I need to implement a pop-up or a drop-down banner on my site? Is it ok to imply consent from the user’s continued use of the site? And so on.

These debates have virtually always been in the context of cookies set by websites. It will be interesting to see what happens when/if the regulators and legislators start to turn their eyes to the behavioural targeting of TV VOD ads which, although currently nascent, is a rapidly developing proposition.

TV is of course a fundamentally different user experience to using a computer (or a tablet). The screen is much further away, you use a remote control instead of a keyboard (or touchscreen), the ads take up the whole screen (as opposed to sitting inside a box on the side of the screen), and so on. All the debates about privacy policies, notices and consent may need to be recontextualised in accordance with this different user experience.

It is also worth considering the advertising industry’s self-regulatory solution involving an icon and opt-out mechanism for online behavioural advertising. Whilst this may have received criticism from the European data protection regulators, it has been specifically endorsed by the UK ASA through the addition of a new appendix in the CAP Code (with effect from February 2013). However, again this is a system designed for the online, as opposed to TV VOD, user experience. It will be interesting to see how this and/or other related solutions develop in the growing TV VOD advertising market.