Clearview AI Wins Appeal Against GDPR Fine 

Last week a Tribunal overturned a GDPR Enforcement Notice and a Monetary Penalty Notice issued to Clearview AI, an American facial recognition company. In Clearview AI Inc v The Information Commissioner [2023] UKFTT 00819 (GRC), the First-tier Tribunal (Information Rights) ruled that the Information Commissioner had no jurisdiction to issue either notice, on the basis that the GDPR/UK GDPR did not apply to the personal data processing in issue.  

Background 

Clearview is a US-based company which describes itself as the “World’s Largest Facial Network”. Its online database contains 20 billion images of people’s faces, together with data scraped from publicly available information on the internet and social media platforms all over the world. Customers can upload an image of a person to its app, which then identifies the person by checking the image against all the images in the Clearview database.  

In May 2022 the ICO issued a Monetary Penalty Notice of £7,552,800 to Clearview for breaches of the GDPR, including failing to use the information of people in the UK in a way that is fair and transparent. Although Clearview is a US company, the ICO ruled that the UK GDPR applied because of Article 3(2)(b) (territorial scope). It concluded that Clearview’s processing activities “are related to… the monitoring of [UK residents’] behaviour as far as their behaviour takes place within the United Kingdom.” 

The ICO also issued an Enforcement Notice ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems. (See our earlier blog for more detail on these notices.) 

The Judgement  

The First-tier Tribunal (Information Rights) has now overturned the ICO’s enforcement and penalty notices against Clearview. It concluded that although Clearview did carry out data processing related to monitoring the behaviour of people in the UK (Article 3(2)(b) of the UK GDPR), the ICO did not have jurisdiction to take enforcement action or issue a fine. Both the GDPR and the UK GDPR provide that acts of foreign governments fall outside their scope; it is not for one government to seek to bind or control the activities of another sovereign state. However, the Tribunal noted that the ICO could have taken action under the Law Enforcement Directive (implemented in the UK by Part 3 of the DPA 2018), which specifically regulates the processing of personal data in relation to law enforcement. 

Learning Points 

While the Tribunal’s judgement in this case reflects its specific circumstances, some of its findings are of wider application: 

  • The term “behaviour” (in Article 3(2)(b)) means something about what a person does (e.g., location, relationship status, occupation, use of social media, habits) rather than merely identifying or describing them (e.g., name, date of birth, height, hair colour).  

  • The term “monitoring” appears not only in Article 3(2)(b) but also in Article 35(3)(c) (which sets out when a DPIA is required). The Tribunal ruled that monitoring includes tracking a person at a fixed point in time as well as on a continuous or repeated basis.

  • In this case, Clearview was not monitoring UK residents directly as its processing was limited to creating and maintaining a database of facial images and biometric vectors. However, Clearview’s clients were using its services for monitoring purposes and therefore Clearview’s processing “related to” monitoring under Article 3(2)(b). 

  • A provider of services like Clearview may be considered a joint controller with its clients where both determine the purposes and means of processing. In this case, Clearview was a joint controller with its clients because it imposed restrictions on how clients could use the services (i.e., only for law enforcement and national security purposes) and determined the means of processing when matching query images against its facial recognition database.  

Data Scraping 

The ruling is not a green light for data scraping, the practice whereby publicly available data, usually from the internet, is collected and processed by companies, often without the data subject’s knowledge. The Tribunal ruled that this was an activity to which the UK GDPR could apply. In its press release reacting to the ruling, the ICO said: 

“The ICO will take stock of today’s judgment and carefully consider next steps.
It is important to note that this judgment does not remove the ICO’s ability to act against companies based internationally who process data of people in the UK, particularly businesses scraping data of people in the UK, and instead covers a specific exemption around foreign law enforcement.” 

This is a significant ruling from the First-tier Tribunal, with implications for the extra-territorial effect of the UK GDPR and the ICO’s powers to enforce it. It merits an appeal by the ICO to the Upper Tribunal. Whether this happens depends very much on the ICO’s appetite for a legal battle with a tech company with deep pockets.  

This and other GDPR developments will be discussed by Robert Bateman in our forthcoming GDPR Update workshop.  

Act Now Launches New RIPA E Learning Course


The Investigatory Powers Commissioner’s Office (IPCO), like its predecessor the Office of Surveillance Commissioners (OSC), undertakes inspections of public authorities to ensure their compliance with Part 2 of the Regulation of Investigatory Powers Act 2000 (RIPA).
A common feature of IPCO reports into councils is the highlighting of a lack of regular refresher training for those who undertake covert surveillance, including when using social media.  

The coronavirus pandemic, as well as decreasing council budgets, means that training staff is difficult to say the least. Social distancing and home working make face-to-face training impossible, and live online training may not always be cost-effective for those who need a quick refresher.  

Act Now Training is pleased to announce the launch of RIPA Essentials. This is a new e-learning course, consisting of an animated video followed by an online quiz, designed to update local authority employees’ knowledge of Part 2 of RIPA, which covers Directed Surveillance, Intrusive Surveillance and CHIS. Designed by our RIPA experts, Ibrahim Hasan and Steve Morris, it uses simple, clear language and animation to make the complex simple. 

In just 30 minutes your employees can learn about the main provisions of Part 2 of RIPA, including the different types of covert surveillance, the serious crime test and the authorisation process. It also covers how RIPA applies to social media monitoring and how to handle the product of surveillance having regard to data protection. All this at a time and in a place of your employees’ choosing. (See the full contents here.) 

Steve Morris said: 

“Ibrahim and I have over 40 years of experience in training and advising local authorities on covert surveillance and RIPA. We have used this experience, as well as the latest guidance from the Home Office and IPCO, to produce an online training course which is engaging, interactive and fun.” 

With full admin controls, RIPA Essentials will help you to build a RIPA compliance culture in your organisation and develop a workforce that is able to identify and address privacy risks when conducting surveillance. The course is specifically designed for local authority investigators including trading standards officers, environmental health officers, licensing officers, auditors and legal advisers.  

You can watch a demo of RIPA Essentials here. Prices start from as little as £69 plus VAT per user. For a bespoke quote, please get in touch.

RIPA Essentials follows the successful launch of GDPR Essentials, which has been used by our clients to train thousands of staff in the public and private sectors.

Lloyd v Google: Representative action for damages fails under the DPA 1998

 

As more individuals become aware of the way in which organisations such as Facebook and Google have used their personal data unlawfully, the prospect of litigation, and of class actions, seems increasingly likely.  However, the recent case of Lloyd v Google [2018] EWHC 2599 (QB) demonstrates that it does not necessarily follow that a clear breach of data protection legislation will result in a successful claim for damages.  The case shows that even if claimants can prove that there has been a breach of data protection legislation (now the GDPR and DPA 2018), they need to identify what harm the breach has caused and how that harm was caused by the breach. This will inevitably be a fact-specific exercise.

The background: the Safari Workaround and the DoubleClick Ad Cookie

The case concerned the use, by Google, of a cookie known as the “DoubleClick Ad cookie” between 2011 and 2012. Google allegedly used the cookie to secretly track the internet activity of iPhone users in the US and the UK. Ordinarily the Safari browser (developed by Apple) had a default setting that blocked the use of third-party cookies, such as the DoubleClick Ad cookie. However, Google was able to exploit certain exceptions to this default blockage and implement the so-called “Safari Workaround”, which enabled Google to set the cookie on an iPhone when the user used the Safari browser. This gave Google access to a huge amount of browser-generated personal information, including the address or URL of the website which the browser was displaying to the user. It was claimed that this information enabled Google to obtain or deduce other sensitive information about individuals, such as their interests and habits, race, ethnicity, class, political or religious views, health, age, sexuality and financial position. Google was also alleged to have aggregated this information to create lists of different types of people, such as “football lovers”, and to have offered these lists to subscribing advertisers.

Regulatory action was taken against Google in the USA, with Google agreeing to pay a US$25.5 million civil penalty to settle charges brought by the US Federal Trade Commission, and a further US$17 million to settle state consumer-based actions.  No such regulatory action was taken by the Information Commissioner even though the breach clearly affected UK iPhone users.

 The representative claim

The action against Google was brought by Mr Lloyd, who was the only named claimant. However, he brought the action as a representative of a much larger class of people. This is a novel type of litigation that allows a representative to sue in a representative capacity on behalf of a class of people who have “the same interest” in the claim. It was not entirely clear how big the class was, but estimates ranged between 4.4 and 5.4 million people. Google, not surprisingly, was keen that permission be denied, bearing in mind its estimated potential liability (if the case succeeded) of between £1 and £3 billion.

Mr Lloyd argued that he and each member of the group/class he represented had a right to be compensated “for the infringement of their data protection rights”. Specifically, it was alleged that Google had carried out the secret tracking and collation of personal data without the data subject’s consent or knowledge; that this was a breach of Google’s duty under s 4(4) of the DPA 1998 and that the data subjects were entitled to compensation under s 13 DPA 1998.

In other words, the fact of the contravention gave them a right to be compensated.  Neither Mr Lloyd nor any member of the group alleged or gave evidence of any financial loss, distress or anxiety. There were no individual allegations of harm. In fact, Mr Lloyd asserted that the claim was generic and claimed an equal, standard “tariff” award for each member of the class (£750 per person). This turned out to be fatal to the claim.

Litigation against a US based company

Any litigant, or group of litigants, considering an action against Apple or Google or any other such company based outside the UK first needs the permission of the High Court in order to serve a claim on a defendant outside the jurisdiction of the domestic courts. Before the court will grant permission, the claimant must prove three things: first, that the case falls within one of the listed “jurisdictional gateways”; second, that the case has a reasonable prospect of success; and finally, that England is the appropriate place to deal with the case.  The High Court had no difficulty deciding that England would be the natural jurisdiction for the case, since the claimants were all in the UK and the alleged damage had been incurred in the UK.  However, the High Court Judge found that Mr Lloyd’s case failed on the remaining two issues and denied permission for the case to proceed.

The Court identified that the relevant gateway in this case required the claimant to prove a good arguable claim in tort and that the damage was sustained in England and Wales.  The Judge was clear that a claim for damages under the DPA 1998 is a claim in tort. He was also satisfied that each member of the class was (for at least some of the relevant period) within the jurisdiction when they connected to the internet using the Safari browser.

However, the real and substantial issue in this case was whether the Safari Workaround had caused “damage” within the meaning of the DPA 1998.  The Court engaged in a lengthy analysis of the case law on DPA damages and concluded that the claimants had not sustained damages in this case. On this basis the court decided that Mr Lloyd did not have a good arguable case or a reasonable prospect of success.

Damages under the DPA 1998

Section 13 of the DPA 1998 provided that an individual who suffers damage by reason of any contravention by a data controller of any of the requirements of the DPA 1998 is entitled to compensation from the data controller for that damage.

The High Court decided that giving the words their natural meaning, this statutory right to compensation arises where

(a) there has been a breach of the DPA; and

(b) as a result, the claimant suffers damage.

These are two separate events connected by a causal link.  In short, the breach must cause the damage. Based on this logic, it necessarily follows that some breaches will not give rise to damages.  The High Court judge suggested some examples where a data controller processes personal data in breach of the DPA, but where the breach may not warrant an award of compensation, such as:

  • recording inaccurate data, but not using or disclosing it
  • holding, but not disclosing, using or consulting, personal data that are irrelevant
  • holding data for too long
  • failing, without consequences, to take adequate security measures.

Of course, this is not to say that these types of breaches could never give rise to a successful claim for damages, as much will depend on the context and facts of the case. However, the Court did suggest that data subjects had alternative remedies such as rectification, erasure and objection.

One of the key arguments presented by Lloyd was that the claimants had incurred damage because they lost control of their data. According to the Court, there will be circumstances where the loss of control may have significantly harmful consequences, such as in Vidal-Hall (Google Inc v Vidal-Hall and others & The Information Commissioner [2015] EWCA Civ 311). The focus in that case was on the significant distress caused to the claimants by the delivery to their screens of unwanted advertising material. However, that decision was very fact-specific; the type of information that had been secretly tracked and used to send targeted advertising was of a particularly private and sensitive nature, such that it would have caused harm to the claimants had anyone else seen their computer screens.

The High Court in Lloyd v Google also accepted that the delivery of unwanted commercial advertising can be upsetting in other ways, for example where repeated or bulk unwanted communications:

  • are so distressing that they constitute harassment, even if the content is inherently innocuous
  • infringe a person’s right to respect for their autonomy
  • represent a material interference with their freedom of choice over how they lead their life.

However, on the facts of the case the Court concluded that the claimants had not provided any particulars of any damage suffered. Rather, they seemed to be relying on the contention that they were entitled to be compensated because of the breach alone. The judge rejected this as a possibility.

A Court cannot award compensation just because the data protection rules have been breached. The Court also rejected the idea that the claimants should be compensated in order to “censure” the defendant’s behaviour. The Court likewise rejected any argument that damages under the DPA should be awarded on a sort of “restitutionary” basis, that is ‘calculated by reference to the market value of the data which has been refused’.

Representative action cases: what lessons can be learnt?

This was novel litigation: it involved one named claimant bringing an action on behalf of a large group. The action faced difficulties right from the start, not least in trying to identify the group. The Judge identified three real difficulties with this type of action:

  1. The representative (Lloyd) and the members of the class do not all have the “same interest” which is an essential requirement for any representative action. Some people may have suffered no damage and others different types of damage. Consequently, they did not all have the same interest in the action.
  2. Even if it was possible to define the class of people represented, it would be practically impossible to identify all members of the class.
  3. The court would not exercise its discretion to allow the case to go forward, particularly given the costs of the litigation, the fact that the damages payable to each individual (were the case to succeed) would be modest, and the fact that none of the class had shown any interest in, or appeared to care about, the claim.

Anyone contemplating pursuing this type of claim in future would be well advised to carefully consider and take on board the judge’s criticisms, and to seek to address them before pursuing an action.


Susan Wolf will be delivering the forthcoming GDPR workshop in Birmingham on the 19th November. Book your place now! 

 

Facebook Fan page administrators need to be GDPR compliant

 


By Susan Wolf

In our previous blog we considered the recent, and much awaited, decision of the Court of Justice of the European Union (CJEU) on the status of Facebook fan page users [1]. After protracted litigation in the German courts, the CJEU ruled on 5th June 2018 that the concept of data controller was wide enough to include a user of a fan page hosted on a social network (in this case Facebook).

Wirtschaftsakademie Schleswig-Holstein GmbH (a private training academy) operated a Facebook fan page, which it used to promote its activities. Facebook provided Wirtschaftsakademie with anonymised statistical data about people who visited the fan page. The German data protection authority for Schleswig-Holstein ordered Wirtschaftsakademie to deactivate the page or risk a fine. This is because visitors to the fan page were not warned that their personal data was being collected by Facebook, by means of cookies placed on the visitor’s hard disk. The purpose of that data collection was to compile viewing statistics for Wirtschaftsakademie and to enable Facebook to publish targeted advertisements.

Technically, the Court’s jurisdiction is limited to providing authoritative rulings on the interpretation of EU law, not determining the outcome of a case. However, in this case the Court made it very clear that Wirtschaftsakademie was a data controller responsible for processing personal data, jointly with Facebook Ireland. The ruling has much wider implications and could affect all organisations that use Facebook fan pages, or other similar online social media.

Joint Data Controllers Must have an Agreement that sets out respective responsibilities under the GDPR

 

The fact that an administrator of a fan page uses the platform provided by Facebook in order to benefit from the associated services does not mean it escapes any of the obligations concerning the protection of personal data. In short, as a joint data controller, the fan page user must comply with the GDPR.  Similarly, the fact that the fan page user acts as a joint controller, in that it decides to use Facebook as its platform, does not relieve Facebook of its obligations as controller either.  They are joint data controllers, a concept specifically acknowledged by Article 26 of the GDPR, which states:

“Where two or more controllers jointly determine the purposes and means of processing, they shall be joint controllers. They shall, in a transparent manner determine their respective responsibilities for compliance with the obligations under [the GDPR] in particular as regards the exercising of the rights of the data subject and their respective duties to provide the information referred to in Articles 13 and 14, by means of an arrangement between them unless….The arrangement may designate a contact point for data subjects.”

Joint controllers must enter into a specific agreement, or contract, that sets out their respective responsibilities under the GDPR.

Joint Controller does not necessarily mean ‘equal controller’

 

The fact that two entities are joint controllers does not mean that they are ‘equals’. The CJEU acknowledged that the existence of joint responsibility with an online social network, such as Facebook, does not necessarily imply equal responsibility.

Depending on the circumstances, different operators may be involved at different stages of the processing, and to different degrees.  For example, it is not necessary for a data controller to have complete control over all aspects of data processing. Indeed, data processing today is becoming much more complex and may involve several distinct processes and numerous parties, each exercising different degrees of control. With such complexity, it is even more important that roles and responsibilities are clearly defined and allocated.  However, Article 26 GDPR also requires that the allocation of responsibilities be transparent. The Article 29 Working Party (now the European Data Protection Board), in its 2010 Opinion on data controllers [2], emphasises that the complexities of joint control arrangements must not result in an unworkable distribution of responsibilities that makes it more difficult for data subjects to enforce their rights.

On 15th June Facebook issued a statement for users of Facebook fan pages. This acknowledges that ‘it does not make sense to impose an equal footing on page operators for the data processing carried out by Facebook’.  Accordingly, Facebook has indicated that it will update its own terms and conditions to clarify the respective data protection responsibilities of Facebook and fan page users. (The statement does not expressly refer to the GDPR.) However, at the time of writing nothing further has been issued.

A note of caution: Liabilities

The terms of any joint controller agreement will be very important because of the provisions of Article 82(4). This states that where more than one controller is involved in the ‘same processing’, and where they are responsible for any damage caused by that processing, each controller shall be held liable for the entire damage. This is to ensure the effective compensation of data subjects who suffer any ‘material or non-material’ damage as a result of a breach of the GDPR. However, Recital 146 of the GDPR states that where both controllers are joined in the same legal proceedings, compensation may be apportioned according to the responsibility of each controller (subject to the caveat that the data subject who has suffered damage is compensated in full).  Therefore, an agreement that specifically allocates responsibilities, and liabilities, should be regarded as essential.

What steps should Fan Page users be taking now?

Until Facebook clarifies its position on a joint controller agreement, it might be prudent for anyone thinking of opening a Facebook fan page to refrain from doing so.

However, existing fan page users do need to take steps to become GDPR compliant.

The Information Commissioner’s Office has not, as yet, issued any guidance to fan page users. However, the German Data Protection Authorities have issued a statement advising Facebook fan page users/operators that they must comply with the applicable provisions of the GDPR and specifically the following obligations:

  • The operator must provide information on processing activities by Facebook and by the operator itself transparently and in an understandable form.
  • The operator must ensure that Facebook provides the relevant information to enable the operator to fulfil its information obligations.
  • The operator must obtain opt-in consent for tracking visitors to a fan page (e.g., by using cookies or similar technologies).
  • The operator must enter into a co-controller agreement with Facebook.

Perhaps a more pragmatic solution is for fan page users to consider what steps an organisation would need to take, as data controller, if they had created their own website (other than via Facebook) and embedded cookies and implemented a tool similar to the Facebook Insights tool, in order to compile viewing statistics.

[1] Case C-210/16 Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH

[2] Article 29 Data Protection Working Party, Opinion 1/2010 on the concepts of “controller” and “processor”

 

Act Now provides a full GDPR course programme including one day workshops, e-learning, Healthchecks and our GDPR Practitioner Certificate. 

Book now to avoid disappointment! 

 

Decision: Facebook Fan Page Administrators are Data Controllers


By Susan Wolf

On 5th June 2018 the Court of Justice of the European Union (CJEU) delivered its long awaited Facebook fan page decision. The case concerned the definition of data controller under the now repealed Data Protection Directive 95/46/EC [1] and in particular whether the administrator user of a Facebook fan page was a data controller.

The fact that the Data Protection Directive has been replaced by the GDPR 2016 should not diminish the importance of this ruling, particularly for organisations that use Facebook or other social media platforms to promote their business or organisation.

We explain some of the issues raised in the case and consider the implications of the ruling for administrators of Facebook fan pages under the GDPR.

The case

The case involved Wirtschaftsakademie Schleswig-Holstein GmbH, a private training academy in Germany. The company provided business training for commerce and industry (including GDPR training).  It operated a Facebook fan page to make people aware of its range of services and activities.

Fan pages are user accounts that can be set up on Facebook by individuals or businesses. According to Facebook, a fan page is a place where businesses can create a space on Facebook to connect with people and tell them about their business.  Fan pages are not the same as Facebook profiles, which are intended purely for individuals’ personal use. Unlike a personal Facebook profile, a fan page is accessible to anyone using the internet.

Authors of fan pages must register with Facebook in order to use the online platform to post any kind of communication. At that time, fan page administrators could obtain, from Facebook, anonymous statistical information on visitors to the fan page, via a function called ‘Facebook Insights’. That information was collected by means of ‘cookies’, each containing a unique user code, which remained active for two years and were stored by Facebook on the hard disk of the computer or on other media of visitors to fan pages. The user code, which could be matched with the connection data of users registered on Facebook, was collected and processed when the fan pages were opened.

The service, which was provided free of charge under non-negotiable terms, was no doubt very useful to the German training academy.  Unfortunately, neither Wirtschaftsakademie nor Facebook Ireland notified anybody ‘visiting’ the fan page about the use of the cookies or the subsequent processing of the personal data.  The German data protection supervisory authority for the Schleswig-Holstein Land (region) took the view that by setting up its fan page, Wirtschaftsakademie had made an active and deliberate contribution to the collection by Facebook of personal data relating to visitors to the fan page, from which it profited by means of the statistics provided to it by Facebook.  The regulator concluded (in November 2011) that Wirtschaftsakademie was a data controller and consequently ordered it to deactivate its fan page, threatening a penalty payment if the page was not removed.

Wirtschaftsakademie challenged that order before the German administrative court. Its main argument was that it was not responsible under data protection law for the processing of the data by Facebook or for the cookies that Facebook installed, and that neither had it commissioned Facebook to process personal data on its behalf. This argument was successful before the administrative court. However, the regulator appealed, and what followed was lengthy, protracted litigation in the German courts. By 2016 the case had reached the Federal Administrative Court. The Federal Court also agreed that Wirtschaftsakademie was not responsible for the data processing as defined by Article 2(d) of the Data Protection Directive:

  • (d) ‘controller’ shall mean the natural or legal person, public authority, agency or any other body which alone or jointly with others determines the purposes and means of the processing of personal data.

Article 4 of the GDPR defines data controller in identical terms.

However, the Federal Court also decided that it was necessary to refer the question to the CJEU under the preliminary ruling procedure, particularly since the CJEU had previously ruled [2] that the concept of data controller should be given a broad interpretation in the interests of the effective protection of the right to privacy.

The CJEU Ruling

The CJEU had no difficulty in concluding that Facebook Inc. and Facebook Ireland were data controllers because they determined the purposes and means of processing the personal data of Facebook users and anyone visiting fan pages hosted on Facebook. However, the Court recalled that the definition includes entities that ‘alone or jointly with others’ determine the purposes and means of data processing. In other words, the purposes may be determined by more than one controller and may be determined by ‘several actors taking part in the processing’, with each being subject to the provisions of the Directive.

On the facts, the Court considered that the administrator of a Facebook fan page:

  • Enters into a contract with Facebook Ireland and subscribes to the conditions of use, including the use of cookies.
  • Is able to define the parameters of the fan page, which has an influence on the processing of personal data for the purposes of producing statistics based on visits to the fan page.
  • Could, with the help of filters made available by Facebook, define the criteria for statistical analysis of data.
  • Could designate the categories of persons whose personal data is to be made use of by Facebook.
  • Can ask Facebook for demographic data relating to its target audience, including age, sex, relationship and occupation, lifestyle and purchasing habits.

These factors pointed to the fact that the administrator of a fan page hosted on Facebook takes part in the determination of the purposes and means of processing the personal data of visitors to the fan page. Consequently the administrator of the fan page is to be regarded as a data controller, jointly with Facebook Ireland.

The Court rejected arguments that the Wirtschaftsakademie only received the statistical data in anonymised form because the fact remained that the statistics were based on the collection, by cookies, of the personal data of visitors to the fan page.

The fact that the fan page administrator uses the platform provided by Facebook does not exempt it from compliance with the Directive. The Court also added that non-Facebook users may visit a fan page; the administrator’s responsibility for the processing of their personal data is therefore even greater, as the mere consultation of the page automatically starts the processing of personal data.

[1] Case C-210/16 Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH

[2] Case C-212/13 František Ryneš v Úřad pro ochranu osobních údajů

 

We are running GDPR and DPA 2018 workshops throughout the UK. Head over to our website to book your place now. New Dates added for London!

Need to train frontline staff quickly? Try our extremely popular GDPR e-learning course.

Don’t forget about our GDPR Helpline; it’s a great tool for getting advice when you really need it.

Monitoring Staff Use of Social Networks: The Human Rights Implications


According to a recent FOI request made by BBC Radio 5 live, last year there was a rise in the number of UK council staff suspended after being accused of breaking social media rules. Many employers, both in the public and the private sector, now monitor staff use of social media within the office environment. The possibilities are endless but care must be taken not to overstep the legal limits.

All employers have to respect their employees’ right to privacy under Article 8 of the European Convention on Human Rights (ECHR).  This means that any surveillance or monitoring must be carried out in a manner that is in accordance with the law and is necessary and proportionate (see Copland v UK (3rd April 2007 ECHR))

A January 2016 judgment of the European Court of Human Rights shows that a careful balancing exercise needs to be undertaken when applying the law (Barbulescu v Romania (application no. 61496/08)). In this case, the employer had asked employees such as the applicant to set up Yahoo! messenger accounts for work purposes. Its policies clearly prohibited the use of such work accounts for personal matters. The employer suspected the applicant of misusing his account, so it monitored his messages for a period during July 2007 without his knowledge.

The employer accused the applicant of using his messenger account for personal purposes; he denied this until he was presented with a 45-page printout of his messages with various people, some of which were of an intimate nature. The employer had also accessed his private messenger account (though it did not make use of the contents).

The applicant was sacked for breach of company policy. When he challenged his dismissal before the courts, his employer relied on the print out of his messages as evidence. He argued that, in accessing and using those personal messages, the employer had breached his right to privacy under Article 8 ECHR.

The Court accepted that the applicant’s privacy rights were engaged in this case. However, it found that the employer’s monitoring had been limited in scope and proportionate: it is reasonable for an employer to verify that employees are completing their professional tasks during working hours. Key considerations were:

  • The emails at the centre of the debate had been sent via a Yahoo Messenger account that was created, at the employer’s request, for the specific purpose of responding to client enquiries.
  • The employee’s personal communications came to light only as a result of the employer accessing communications that were expected to contain only business related materials and had therefore been accessed legitimately.
  • The employer operated a clear internal policy prohibiting employees from using the internet for personal and non-business related reasons.

The case highlights the need for companies to have a clear internet and electronic communications policy, and the importance of communicating that policy to employees.

When monitoring employees, the employer will inevitably be gathering personal data about them, so consideration also has to be given to the provisions of the Data Protection Act 1998 (DPA). The Information Commissioner’s Office’s (ICO) Employment Practices Code includes a section on surveillance of employees at work. In December 2014, Caerphilly County Borough Council signed an undertaking after an ICO investigation found that the Council’s surveillance of an employee, suspected of fraudulently claiming to be sick, had breached the DPA.

Compliance with the DPA will also help demonstrate that the surveillance is human rights compliant, since protection of individuals’ privacy is a cornerstone of the DPA. Of course the data protection angle will bite harder when the new EU Data Protection Regulation comes into force in 2018. Failure to comply could lead to a fine of up to 20 million Euros or 4% of global annual turnover, whichever is higher.
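As a back-of-the-envelope illustration only (the function name is my own, and this is not legal advice), the “whichever is higher” upper cap can be sketched as:

```python
def max_gdpr_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative sketch of the GDPR's upper fine cap for the most
    serious infringements: the greater of EUR 20 million or 4% of
    global annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# A smaller firm's cap is the EUR 20m floor...
print(max_gdpr_fine_eur(100_000_000))    # 20000000.0
# ...while a large multinational's cap scales with turnover.
print(max_gdpr_fine_eur(2_000_000_000))  # 80000000.0
```

The point of the `max` is that the 4% figure only bites once global turnover exceeds €500 million; below that, the €20 million floor applies.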

Act Now has a range of workshops relating to surveillance and monitoring both within and outside the workplace. Our products include a RIPA polices and procedures toolkit and e-learning modules.

Facebook, Social Networks and the Need for RIPA Authorisations

By Ibrahim Hasan

Increasingly local authorities are turning to the online world, especially social media, when conducting investigations. There is some confusion as to whether the viewing of suspects’ Facebook accounts and other social networks requires an authorisation under Part 2 of the Regulation of Investigatory Powers Act 2000 (RIPA). In his latest annual report the Chief Surveillance Commissioner states (paragraph 5.42):

“Perhaps more than ever, public authorities now make use of the wide availability of details about individuals, groups or locations that are provided on social networking sites and a myriad of other means of open communication between people using the Internet and their mobile communication devices. I repeat my view that just because this material is out in the open, does not render it fair game. The Surveillance Commissioners have provided guidance that certain activities will require authorisation under RIPA or RIP(S)A and this includes repetitive viewing of what are deemed to be “open source” sites for the purpose of intelligence gathering and data collation.”

Careful analysis of the legislation suggests that whilst such activity may be surveillance, within the meaning of RIPA (see S.48(2)), not all of it will require a RIPA authorisation. Of course RIPA geeks will know that RIPA is permissive legislation anyway and so the failure to obtain authorisation does not render surveillance automatically unlawful (see Section 80).

There are two types of surveillance, which may be involved when examining a suspect’s Facebook or other social network pages; namely Directed Surveillance and the deployment of a Covert Human Intelligence Source (CHIS). Section 26 of the Act states that surveillance has to be covert for it to be directed:

“surveillance is covert if, and only if, it is carried out in a manner that is calculated to ensure that persons who are subject to the surveillance are unaware that it is or may be taking place” (my emphasis)

If an investigator decides to browse a suspect’s public blog, website or “open” Facebook page (i.e. where access is not restricted to “friends”, subscribers or followers), how can that be said to be covert? It does not matter how often the site is accessed, as long as the investigator is not taking steps to hide his/her activity from the suspect. The fact that the suspect is not told about the “surveillance” does not make it covert. Note the words in the definition of covert: “unaware that it is or may be taking place.” If a suspect chooses to publish information online they can expect the whole world to read it, including law enforcement and council investigators. If they want or expect privacy, it is open to them to use the available privacy settings on their blog or social network.

The Commissioner stated in last year’s annual report:

“5.31 In cash-strapped public authorities, it might be tempting to conduct on line investigations from a desktop, as this saves time and money, and often provides far more detail about someone’s personal lifestyle, employment, associates, etc. But just because one can, does not mean one should. The same considerations of privacy, and especially collateral intrusion against innocent parties, must be applied regardless of the technological advances.” (my emphasis)

I agree with the last part of this statement. The gathering and use of online personal information by public authorities will still engage Human Rights particularly the right to privacy under Article 8 of the European Convention on Human Rights. To ensure such rights are respected the Data Protection Act 1998 must be complied with. A case in point is the monitoring last year of Sara Ryan’s blog by Southern Health NHS Trust. Our data protection expert Tim Turner wrote recently about the data protection implications of this kind of monitoring.

Where online surveillance involves employees then the Information Commissioner’s Office’s (ICO) Employment Practices Code (part 3) will apply. This requires an impact assessment to be done before the surveillance is undertaken to consider, amongst other things, necessity, proportionality and collateral intrusion. Whilst the code is not law, it will be taken into account by the ICO and the courts when deciding whether the DPA has been complied with. In December 2014, Caerphilly County Borough Council signed an undertaking after an ICO investigation found that the Council’s surveillance of an employee, suspected of fraudulently claiming to be sick, had breached the DPA.

Facebook Friends – A Friend Indeed

Of course the situation will be different if an investigator needs to become a “friend” of a person on Facebook in order to communicate with them and get access to their profile and activity pages. For example, local authority trading standards officers often use fake profiles when investigating the sale of counterfeit goods on social networks. In order to see what is on sale they have to have permission from the suspect. This, in my view, does engage RIPA as it involves the deployment of a CHIS, defined in section 26(8):

“For the purposes of this Part a person is a covert human intelligence source if—

(a) he establishes or maintains a personal or other relationship with a person for the covert purpose of facilitating the doing of anything falling within paragraph (b) or (c);

(b) he covertly uses such a relationship to obtain information or to provide access to any information to another person; or

(c) he covertly discloses information obtained by the use of such a relationship, or as a consequence of the existence of such a relationship”  (my emphasis)

Here we have a situation where a relationship (albeit not personal) is formed using a fake online profile to covertly obtain information for a covert purpose. In the case of a local authority, this CHIS will not only have to be internally authorised but also, since 1st November 2012, approved by a Magistrate.

This is a complex area and staff who do not work with RIPA on a daily basis can be forgiven for failing to see the RIPA implications of their investigations. From the Chief Surveillance Commissioner’s comments (below) in his annual report, it seems advisable for all public authorities to have in place a corporate policy and training programme on the use of social media in investigations:

“5.44 Many local authorities have not kept pace with these developments. My inspections have continued to find instances where social networking sites have been accessed, albeit with the right intentions for an investigative approach, without any corporate direction, oversight or regulation. This is a matter that every Senior Responsible Officer should ensure is addressed, lest activity is being undertaken that ought to be authorised, to ensure that the right to privacy and matters of collateral intrusion have been adequately considered and staff are not placed at risk by their actions and to ensure that ensuing prosecutions are based upon admissible evidence.”

We have a workshop on investigating E-Crime and Social Networking Sites, which considers all the RIPA implications of such activities. It can also be delivered in-house.

In conclusion, my view is that RIPA does not apply to the mere viewing of “open” websites and social network profiles. However in all cases the privacy implications have to be considered carefully and compliance with the Data Protection Act is essential.

Ibrahim will be looking at this issue in depth in our forthcoming webinars.

Looking to update or refresh your colleagues’ RIPA knowledge? Try our RIPA e-learning course. Module 1 is free.

We also have a full programme of RIPA courses, and our RIPA Policy and Procedures Toolkit contains standard policies as well as forms (with detailed notes to assist completion).

Our survey said…

 


 

I bought a new car. On delivery day it was in the showroom draped in a royal blue cloth with a sign saying Reserved for Mr Onassis. The salesman before handing me the keys mumbled in an apologetic fashion “The Sales Manager likes to talk to every customer when they take delivery…”

The Sales Manager didn’t waste much time. He said that I’d shortly be receiving a call from a company who surveys new car buyers to find out what they thought of the dealership. Then he slipped in the hard sell. “They’ll ask you to mark us on a scale of 1 to 10. Only 9 and 10 are positive; anything below that is negative.”

The survey duly arrived. I declined to answer even though I was very happy with the car and the dealership.

Days later my bank called me: I would shortly be asked to rate my bank. From a list of phrases ranging from very displeased to very pleased, I had to choose the one that best described my experience. “Please be sure to say you’re very pleased with our service. Anything else is considered negative.” Again I declined to do the survey, even though my bank is pretty awful.

Last week a hotel that Act Now Training uses did the same thing. Please let us know what you think of our hotel. This time the hotel manager foolishly put his suggestion in an email: “Actually it’s a yes/no question; anything under 8 is negative. We need 9s and 10s.” Now we have evidence that the practice exists. Previously the conspiracy had survived only by word of mouth.

I haven’t answered yet.

What value does a survey have when the surveyees are primed to deliver the response the company wants? Is every survey result the product of a self-selecting group – the people who like to give high scores in surveys? Or is there another group, like me, who never participate because they see no value in a survey where the traditional Likert scale has been morphed into a 50/50 shot? Most Brits are stiff-upper-lip types who won’t take a survey if their views would be critical, in case someone contacted them afterwards.

Is the information age producing better information, or is the value of a survey subjective, objective or merely the result of a carefully orchestrated customer manipulation?
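As a purely illustrative sketch (the scores below are invented), here is how the same set of 1–10 responses looks under a conventional reading versus the dealer’s “only 9s and 10s count” reading:

```python
# Hypothetical survey responses on a 1-10 scale (invented for illustration).
scores = [6, 7, 8, 8, 9, 10, 7, 9, 8, 10]

# Conventional reading: the average score.
mean_score = sum(scores) / len(scores)

# The salesman's reading: only 9s and 10s are "positive";
# everything else is treated as negative.
positive_share = sum(s >= 9 for s in scores) / len(scores)

print(f"mean score: {mean_score}")                # 8.2 - sounds rather good
print(f"'positive' share: {positive_share:.0%}")  # 40% - sounds terrible
```

The same ten answers yield an average of 8.2 out of 10, yet a “positive” rate of only 40% – which is exactly why staff are primed to beg for 9s and 10s.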

This article already had 12,500 likes before I posted it. Find them on Ebay.

Paul Simpkins is a Director and Trainer at Act Now Training Ltd. He will be delivering the internationally recognised BCS Certificate in Data Protection in June. If you are interested in this or any other Act Now training courses on information governance, please visit our website www.actnow.org.uk

Controlling, Lying and Blocking: Ways for the individual to win the privacy arms race?

This is a version of Marion Oswald’s speech at the launch of the Centre for Law & Information Policy at the Institute of Advanced Legal Studies on 24 February 2015.

My talk is about controlling, lying and blocking. Could these activities enable an individual to win the privacy arms race against the data collection, surveillance, behavioural tracking and profiling abilities of search engines, marketers, social networking sites and others?

When we think about an arms race, we might imagine two sides evenly matched, both equally able to equip themselves with weapons and defences. But when it comes to individuals versus data collectors, the position is considerably unbalanced, the equivalent of a cavalry charge against a tank division.

It’s not however as if the individual is without protections. Let’s take consent, a key principle, as we know, of European data protection law. Consent based on privacy policies is rather discredited as an effective means of enforcing privacy rights over data held by commercial third parties. If I might quote Lillian Edwards, ‘consent is no guarantee of protection on Facebook and its like, because the consent that is given by users is non-negotiable, non-informed, pressurised and illusory.’[i] So what about regulatory enforcement? In the UK, it could be described as mostly polite, in the rest of Europe, sometimes a little more robust. The FTC in the US has had some notable successes with its enforcement action based on unfair practices, with Jessica Rich, Director of the FTC’s Bureau of Consumer Protection, advocating privacy as being part of the ‘bottom line.’[ii] It remains to be seen whether market pressures will drive good faith changes in privacy practices – alternative subscription, advertising-free business models have failed to make much headway in terms of market share. The so-called ‘right-to-be-forgotten’ has been much debated and I would question how much the Google Spain decision[iii] adds to the individual’s armoury, the original publication remaining unaffected. And as for personal data anonymisation, this could be subject of a whole afternoon’s debate in itself!

What can individuals do if they want to take matters into their own hands, and become a ‘privacy vigilante’?[iv] Here are three possibilities: first, personal data stores (or ‘personal information management services’) are said by their promoters to enable individuals to take back control over their personal data and manage their relationship with suppliers. Pentland from MIT describes a PDS as ‘a combination of a computer network that keeps track of user permissions for each piece of personal data, and a legal contract that specifies both what can and can’t be done with the data, and what happens if there is a violation of the permissions.’[v]

Secondly, blocking. Systems could prevent tagging of individuals by third parties and set privacy defaults at the most protective. Lifelogging technologies could prevent the display of any recognisable image unless that individual has given permission.[vi] Individuals could deploy a recently invented Google Glass detector, which impersonates the Wi-fi network, sends a ‘deauthorisation’ command and cuts the headset’s internet connection.[vii]

Finally, obfuscation, by which technology is used to produce false or misleading data in an attempt, as Murray-Rust et al. put it, to ‘cloud’ the lens of the observer.[viii] It’s the technological equivalent of what most of us will have already done online: missing off the first line of our address when we enter our details into an online form; subtly changing our birthday; accidentally/on-purpose giving an incorrect email address in exchange for a money-off voucher. A personal data store could, for instance, be used to add ‘chaff’ (adding multiple data points amongst the real ones), or simulating real behaviour such as going on holiday. Brunton & Nissenbaum describe obfuscation as a ‘viable and reasonable method of last-ditch privacy protection.’[ix] On the face of it, obfuscation may seem to be an attractive alternative approach, providing individuals with a degree of control over how much ‘real’ information is released and some confidence that profiling activities will be hampered.
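A toy sketch of the ‘chaff’ idea described above – the data, function name and API are entirely hypothetical, not drawn from any real personal data store:

```python
import random

def add_chaff(real_points, decoy_pool, ratio=3, seed=None):
    """Mix each real data point with `ratio` decoys drawn from a pool of
    plausible fakes, so an observer cannot tell which records are genuine.
    Purely illustrative - not a real personal data store API."""
    rng = random.Random(seed)
    mixed = list(real_points)
    mixed += [rng.choice(decoy_pool) for _ in range(ratio * len(real_points))]
    rng.shuffle(mixed)
    return mixed

real = ["home", "office"]                       # genuine location records
decoys = ["gym", "airport", "library", "cafe"]  # plausible chaff
obfuscated = add_chaff(real, decoys, ratio=3, seed=42)
print(len(obfuscated))  # 8 records, of which only 2 are real
```

The real records are still present – the observer simply cannot distinguish them from the decoys, which is what Murray-Rust et al. mean by ‘clouding the lens’.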

Are these methods ways for the individual to win the privacy arms race? As things stand, I have my doubts, although that is not to say that a legal and regulatory regime could not be created to support these methods. PDSs raise numerous questions about contract formation, incorporation, offers and counter-offers. Service providers would need to be prepared to change their business models fundamentally if PIMS are to fulfil their potential. In the short term, there appears to be little commercial incentive for them to do so.

In terms of blocking, systems could adopt protective measures but they don’t, because they don’t have to. Google Glass blockers may well fall foul of computer misuse legislation if used by members of the public rather than the network owner. In the UK, there would be a risk of a section 3 offence under the Computer Misuse Act 1990 – an unauthorised act with intent to impair the operation of any computer. Haddadi et al. suggest the ‘continuous broadcast of a Do-Not-Track beacon from smart devices carried by individuals who prefer not to be subjected to image recognition by wearable cameras’ although the success of this would depend on regulatory enforcement and whether device providers received and conformed to such requests.[x] It would be rather ironic, however, if one had to positively broadcast one’s presence to avoid image recognition.

As for obfuscation or lying on the internet, Murray-Rust et al. distinguish between official data, where obfuscation may be a criminal offence, and other data that can be obfuscated ‘without legal consequence.’[xi] The distinction is unlikely to be so clear cut: both on the civil side, and on the criminal side (fraud and computer misuse spring to mind), and this is something that I’ll be writing about in the future.

I would like to finish with this question about privacy vigilantism: by continuing to shift responsibility onto the individual, is this letting society off-the-hook for finding better solutions to privacy concerns?[xii] I think it probably is. Finding better solutions will require even closer interaction between computer scientists, lawyers and policy-makers.

Marion Oswald is a Senior Fellow and Head of the Centre for Information Rights at the University of Winchester (marion.oswald@winchester.ac.uk @_UoWCIR). This article was first published by the Society for Computers & Law and is reproduced with the author’s kind permission.

The 2nd Winchester Conference on Trust, Risk, Information & the Law on 21 April 2015 will be exploring the theme of the privacy arms race. To book, please click here.


[i] Lillian Edwards, Privacy, law, code and social networking sites, in Research Handbook on Governance of the Internet, (2013) Edward Elgar (Cheltenham) Ian Brown (Ed), 309-352, 324-328

[ii] Jessica Rich, Director, Bureau of Consumer Protection, Federal Trade Commission Beyond Cookies: Privacy Lessons for Online Advertising, AdExchanger Industry Preview 2015, January 21, 2015, 4 http://www.ftc.gov/system/files/documents/public_statements/620061/150121beyondcookies.pdf

[iii] Google Spain v AEPD and Mario Costeja Gonzalez (C-131/12), 13 May 2014

[iv] Marion Oswald, ‘Seek, and Ye Shall Not Necessarily Find: The Google Spain Decision, the Surveillant on the Street and Privacy Vigilantism’, Digital Enlightenment Yearbook 2014 (K. O’Hara et al. (Eds)), 99-115

[v] A. Pentland, Social Physics: How Good Ideas Spread – The Lessons from a New Science, The Penguin Press, New York, 2014

[vi] C. Gurrin, R. Albatal, H. Joho, K. Ishii, ‘A Privacy by Design Approach to Lifelogging’, Digital Enlightenment Yearbook 2014 (K. O’Hara et al. (Eds)), 49-73, 68

[vii] A. Greenberg, Cut Off Glassholes’ Wi-Fi With This Google Glass Detector, Wired, June 3, 2014, http://www.wired.com/2014/06/find-and-ban-glassholes-with-this-artists-google-glass-detector/

[viii] D. Murray-Rust, M. Van Kleek, L. Dragan, N. Shadbolt, ‘Social Palimpsests – Clouding the Lens of the Personal Panopticon’, Digital Enlightenment Yearbook 2014 (K. O’Hara et al. (Eds)), 75-96, 76

[ix] Finn Brunton, Helen Nissenbaum, ‘Vernacular resistance to data collection and analysis: A political theory of obfuscation’ First Monday, Volume 16, Number 5, 2 May 2011 http://firstmonday.org/article/view/3493/2955

[x] H. Haddadi, A. Alomainy, I. Brown, Quantified Self and the Privacy Challenge in Wearables, Society for Computers & Law, 5 August 2014 http://www.scl.org/site.aspx?i=ed38111

[xi] n viii, 90

[xii] n ix

Data Protection, the Law and Social Media: Keeping Your Boat Afloat


Paul Gibbons writes…

Social media have been good for me. Without my FOIMan blog and Twitter feed, I would never have been asked to deliver training for Act Now Training, or indeed offered many of the wonderful opportunities that have come my way in the last few years. I’ve made a whole new career off the back of them. Not only has my profile been raised by my use of these tools, but I’ve been able to learn from a whole range of knowledgeable people online – expanding my awareness and horizons way beyond anything I’d have considered possible just five years ago.

But even if I remove my FOIMan cape for a moment, social media has had a significant impact on me. I keep in touch with old friends via Facebook. My CV is widely available to hundreds of business contacts via LinkedIn. Before I book a holiday or dine out, I check Trip Advisor. If I want to know how decisions are made by my local council or indeed the Ministry of Justice, I can submit an FOI request via WhatDoTheyKnow. With an election on the way I can find out my MP’s voting record by consulting TheyWorkForYou, and perhaps write to them to ask what their position is on a particular issue. If I feel particularly strongly about that issue I might add my details to an online petition. Social media in their many forms pervade our lives. Many of us would be lost without them.

And it’s not just individuals who are becoming reliant on them. These tools provide novel ways to engage with the people who use them. Businesses have not been slow to exploit them for marketing and public relations purposes. Politicians – often accused of being remote from their electorate – have, with varying success, used them to speak directly to parts of that group. Academics conduct surveys, then disseminate their research, both via social media. A recent study found that 40% of students use social media as their primary form of communication with lecturers. Journalists use them to research and report on stories. No television broadcast is complete these days without a hashtag allowing viewers to interact. The police have used them to investigate and prosecute criminal acts. Central government encourages civil servants to embrace Twitter as a tool to communicate about public policy and gain insights into people’s reactions to it. Local government, too, has found social media a productive way to interact with local citizens. We’re only beginning to see the ways in which social media can benefit our businesses, government, work and lifestyles.

However, as with most things, there are downsides. There are the trolls lurking not under a bridge but under assumed names on Twitter, ready to spread their malice. It’s easy to get carried away and post in haste – repenting at our leisure. Just as social media can make careers and boost reputations, it can destroy them overnight. It empowers individuals, and many companies and public bodies have been keen to use it to give a human face to their corporate image. But those same individuals can use it intentionally or not to disfigure that public face. They can disclose confidential information more easily, expose the business to liability for breach of copyright or defamation, and breach the Data Protection Act by discussing personal matters relating to clients, customers or colleagues.

Don’t believe me? Take the social worker who posted information on Facebook about a child protection court case she was involved in, potentially allowing the family to be identified. Or the companies at the centre of Twitter storms, or sued for using a photographer’s images without permission. In a recent post on my FOIMan site, I highlighted an academic who posted internal correspondence relating to an FOI request on WhatDoTheyKnow, potentially damaging their institution’s reputation and their relationships with colleagues, and almost certainly causing their employer to breach the Data Protection Act’s first data protection principle (to handle personal data fairly and lawfully) in the process. Even those organisations whose employees should know better have had to take disciplinary action: between 2009 and 2014, 519 disciplinary actions were taken against police officers for social media related transgressions, and the Crown Prosecution Service reported that nine of its staff had been disciplined for similar reasons over that period. Not for nothing has the Ministry of Defence warned its employees that “Loose Tweets Sink Fleets”.

The temptation in the face of this litany of institutional and individual disaster is to adopt the ostrich position: ban your employees from using social media altogether and avoid their corporate use. This won’t work. For a start, you will miss out on all the benefits highlighted at the start of this piece, and more. Besides, it’s far too late for that: Pandora is not just out of the box but is running the show. You could impose contractual obligations on your staff requiring them not to use social media, or at least not to discuss their work there, but you may then find yourself losing staff who choose to work for a more progressive employer. In any case, it may be too late, as the Kent Police and Crime Commissioner discovered when she appointed a 17-year-old to the post of Youth Police and Crime Commissioner.

You can’t stop your customers or the public writing about you on social media, but if you’re not using it, you’ll only find out what they’re saying about you too late. You’ll have no way to react to adverse comment online save through the traditional media which may not go to press until your business has collapsed clothed only in the tatters of its reputation.

So if you can’t avoid the risks of social media altogether, what can you do? The next best thing is to mitigate those risks. Like any other tool that you use, you need policies setting out acceptable use. You need to secure your most valuable and sensitive information. You need to raise awareness of your policies and legal restrictions so that your employees understand what they are allowed (or even encouraged) to do using social media, and also what they shouldn’t do – and what the consequences of doing it will be.

Where can you find out more about the risks that social media poses to your organisation? Or indeed the opportunities it offers? What should you include in a social media policy? Do you need to keep records of your social media use, and if so, how?

Well, social media itself will offer many solutions if you’re brave enough to jump in. But if you want a guide, my new training course on Data Protection, the Law & Social Media will provide answers to the questions above, and will point you to resources to help your organisation and its employees use social media effectively whilst avoiding the pitfalls. The course runs for the first time in Manchester on 20 April, and in London on 22 April 2015, and can also be run as an in-house course for your Data Protection, Communications and other staff. Get in touch with Act Now Training now for more details or book through their website.