
Newsroom

  • In March 2018, The New York Times and The Guardian/Observer broke an explosive story that Cambridge Analytica, a British data firm, had harvested more than 50 million Facebook profiles and used them to engage in psychometric targeting during the 2016 US presidential election (Rosenberg, Confessore, & Cadwalladr, 2018). The scandal erupted amid ongoing concerns over Russian use of social media to interfere in the electoral process. The new revelations triggered a spate of congressional hearings and cast a spotlight on the role of digital marketing and “big data” in elections and campaigns. The controversy also generated greater scrutiny of some of the most problematic tech industry practices — including the role of algorithms on social media platforms in spreading false, hateful, and divisive content, and the use of digital micro-targeting techniques for “voter suppression” efforts (Green & Issenberg, 2016; Howard, Woolley, & Calo, 2018). In the wake of these cascading events, policymakers, journalists, and civil society groups have called for new laws and regulations to ensure transparency and accountability in online political advertising.

Twitter and Google, driven by growing concern that they will be regulated for their political advertising practices, fearful of being found in violation of the European Union’s General Data Protection Regulation (GDPR), and cognisant of their own culpability in recent electoral controversies, have each made significant changes to their political advertising policies (Dorsey, 2019; Spencer, 2019). US federal policymakers, by contrast, have engaged in a great deal of public hand wringing but failed to institute any effective remedies, although several states have enacted legislation designed to ensure greater transparency for digital political ads (California Clean Money Campaign, 2019; Garrahan, 2018). These recent legislative and regulatory initiatives in the US are narrow in scope and focused primarily on policy approaches developed for political advertising in more traditional media, failing to hold the tech giants accountable for their deleterious big data practices.

On the eve of the next presidential election in 2020, the pace of innovation in digital marketing continues unabated, along with its further expansion into US electoral politics. These trends were clearly evident in the 2018 election, which, according to Kantar Media, was “the most lucrative midterms in history”, with $5.25 billion USD spent on ads across local broadcast, cable TV, and digital — outspending even the 2016 presidential election. Digital ad spending “quadrupled from 2014” to $950 million USD for ads that primarily ran on Facebook and Google (Axios, 2018; Lynch, 2018). For the upcoming 2020 election, experts forecast overall spending on political ads of $6 billion USD, with an “expected $1.6 billion to be devoted to digital video… more than double 2018 digital video spending” (Perrin, 2019). Kantar (2019), meanwhile, estimates that the portion spent on digital media will be $1.2 billion USD in the 2019-2020 election cycle.

In two earlier papers, we documented a number of digital practices deployed during the 2016 elections, which were emblematic of how big data systems, strategies, and techniques were shaping contemporary political practice (Chester & Montgomery, 2017, 2018). Our work is part of a growing body of interdisciplinary scholarship on the role of data and digital technologies in politics and elections.
Various terms have been used to describe and explain these practices — from computational politics to political micro-targeting to data-driven elections (Bennett, 2016; Bodó, Helberger, & de Vreese, 2017; Karpf, 2016; Kreiss, 2016; Tufekci, 2014). All of these labels highlight the increasing importance of data analytics in the operations of political parties, candidate campaigns, and issue advocacy efforts. But in our view, none adequately captures the full scope of recent changes in contemporary politics. The same commercial digital media and marketing ecosystem that has dramatically altered how corporations engage with consumers is now transforming the ways in which campaigns engage with citizens (Chester & Montgomery, 2017).

We have been closely tracking the growth of this marketplace for more than 25 years, in the US and abroad, monitoring and analysing key technological developments, major trends, practices, and players, and assessing the impact of these systems in areas such as health, financial services, retail, and youth (Chester, 2007; Montgomery, 2007, 2015; Montgomery & Chester, 2009; Montgomery, Chester, Grier, & Dorfman, 2012; Montgomery, Chester, & Kopp, 2018). CDD has worked closely with leading EU civil society and data protection NGOs to address digital marketplace issues. Our work has included providing analysis to EU-based groups to help them respond critically to Google’s acquisition of DoubleClick in 2007 as well as Facebook’s purchase of WhatsApp in 2014. Our research has also been informed by a growing body of scholarship on the role that commercial and big data forces are playing in contemporary society. For example, advocates, legal experts, and scholars have written extensively about the data and privacy concerns raised by this commercial big data digital marketing system (Agre & Rotenberg, 1997; Bennett, 2008; Nissenbaum, 2009; Schwartz & Solove, 2011). More recent research has focused increasingly on other, and in many ways more troubling, aspects of this system. This work has included, for example, research on the use of persuasive design (including “mass personalisation” and “dark patterns”) to manage and direct human behaviours; on the discriminatory impacts of algorithms; and on a range of manipulative practices (Calo, 2013; Gray, Kou, Battles, Hoggatt, & Toombs, 2018; Susser, Roessler, & Nissenbaum, 2019; Zarsky, 2019; Zuboff, 2019). As digital marketing has migrated into electoral politics, a growing number of scholars have begun to examine the implications of these problematic practices for the democratic process (Bashyakarla et al., 2019; Gorton, 2016; Kim et al., 2018; Kreiss & Howard, 2010; Rubinstein, 2014; Tufekci, 2014).

The purpose of this paper is to serve as an “early warning system” — for policymakers, journalists, scholars, and the public — by identifying what we see as the most important industry trends and practices likely to play a role in the next major US election, and flagging some of the problems and issues they raise. Our intent is not to provide a comprehensive analysis of all the tools and techniques in what is frequently called the “politech” marketplace; the recent Tactical Tech publication, Personal Data: Political Persuasion, provides a highly useful compendium on this topic (Bashyakarla et al., 2019).
Rather, we want to show how the further growth and expansion of the big data digital marketplace is reshaping electoral politics in the US, introducing both candidate and issue campaigns to a system of sophisticated software applications and data-targeting tools that are rooted in the goals, values, and strategies of influencing consumer behaviours.1 Although some of these new digitally enabled capabilities are extensions of longstanding political practices that pre-date the internet, others are a significant departure from established norms and procedures. Taken together, they are contributing to a major shift in how political campaigns conduct their operations, raising a host of troubling issues concerning privacy, security, manipulation, and discrimination. All of these developments are taking place, moreover, within a regulatory structure that is weak and largely ineffectual, posing daunting challenges to policymakers.

In the following pages, we: 1) briefly highlight five key developments in the digital marketing industry since the 2016 election that are influencing the operations of political campaigns and will likely affect the next election cycle; 2) discuss the implications of these trends and techniques for the ongoing practice of contemporary politics, with a special focus on their potential for manipulation and discrimination; 3) assess both the technology industry responses and recent policy initiatives designed to address political advertising in the US; and 4) offer our own set of recommendations for regulating political ad and data practices.

The growing big data commercial and political marketing system

In the upcoming 2020 elections, the US is likely to witness an extremely hard-fought, under-the-radar, innovative, and in many ways disturbing set of races, not only for the White House but also for down-ballot candidates and issue groups. Political campaigns will be able to avail themselves of the state-of-the-art big data systems used in the past two elections, along with a host of recent advances developed by commercial marketers. Several interrelated trends in the digital media and marketing industry are likely to play a particularly influential role in shaping the use of digital tools and strategies in the 2020 election. We discuss them briefly below.

Recent mergers and partnerships in the media and data industries are creating new synergies that will extend the reach and enhance the capabilities of contemporary political campaigns. In the last few years, a wave of mergers and partnerships has taken place among platforms, data brokers, advertising exchanges, ad agencies, measurement firms, and companies specialising in advertising technologies (so-called “ad-tech”). This consolidation has helped fuel the unfettered growth of a powerful digital marketing ecosystem, along with an expanding spectrum of software systems, specialty firms, and techniques that are now available to political campaigns. For example, AT&T (n.d.), as part of its acquisition of Time Warner Media, has re-launched its digital ad division, now called Xandr (n.d.), and has also acquired the leading programmatic ad platform AppNexus.

Leading multinational advertising agencies have made substantial acquisitions of data companies, such as the Interpublic Group (IPG) purchase of Acxiom in 2018 and the Publicis Groupe takeover of Epsilon in 2019. One of the “Big 3” consumer credit reporting companies, TransUnion (2019), bought TruSignal, a leading digital marketing firm.
Such deals enable political campaigns and others to easily access more information with which to profile and target potential voters (Williams, 2019).

In the already highly consolidated US broadband access market, only a handful of giants provide the bulk of internet connections for consumers. The growing role of internet service providers (ISPs) in the political ad market is particularly troubling, since they are free from any net neutrality, online privacy, or digital marketing rules. Acquisitions made by the telecommunications sector are further enabling ISPs and other telephony companies to monetise their highly detailed subscriber data, combining it with behavioural data about device use and content preferences, as well as geolocation (Schiff, 2018).

Increasing sophistication in “identity resolution” technologies, which take advantage of machine learning and artificial intelligence applications, is enabling greater precision in finding and reaching individuals across all of their digital devices. The technologies used for what is known as “identity resolution” have evolved to enable marketers — and political groups — to target and “reach real people” with greater precision than ever before. Marketers are helping perfect a system that leverages and integrates, increasingly in real time, consumer profile data with online behaviours to capture more granular profiles of individuals, including where they go and what they do (Rapp, 2018). Facebook, Google, and other major marketers are also using machine learning to power prediction-related tools on their digital ad platforms. As part of Google’s recent reorganisation of its ad system (now called the “Google Marketing Platform”), the company introduced machine learning into its search advertising and YouTube businesses (Dischler, 2018; Sluis, 2018). It also uses machine learning for its “Dynamic Prospecting” system, which is connected to an “Automatic Targeting” apparatus that enables more precise tracking and targeting of individuals (Google, n.d.-a-b). Facebook (2019) is enthusiastically promoting machine learning as a fundamental advertising tool, urging advertisers to step aside and let automated systems make more ad-targeting decisions.

Political campaigns have already embraced these new technologies, even creating special categories in the industry awards for “Best Application of Artificial Intelligence or Machine Learning”, “Best Use of Data Analytics/Machine Learning”, and “Best Use of Programmatic Advertising” (“2019 Reed Award Winners”, 2019; American Association of Political Consultants, 2019). For example, Resonate, a digital data marketing firm, was recognised in 2018 for its “Targeting Alabama’s Conservative Media Bubble”, which relied on “artificial intelligence and advanced predictive modeling” to analyse in real time “more than 15 billion page loads per day”. According to Resonate, this process identified “over 240,000 voters” who were judged to be “persuadable” in a hard-fought Senate campaign (Fitzpatrick, 2018). Similar advances in data analytics are becoming available for smaller campaigns (Echelon Insights, 2019). WPA Intelligence (2019) won a 2019 Reed Award for its data analytics platform that generated “daily predictive models, much like microtargeting advanced traditional polling. This tool was used on behalf of top statewide races to produce up to 900 million voter scores, per night, for the last two months of the campaign”.
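To illustrate the kind of modelling behind such nightly “voter scores”, here is a minimal sketch in Python: a model is trained on past contact-and-response data, then every record in a voter file is scored for persuadability. The feature names, the synthetic data, and the choice of logistic regression are our own illustrative assumptions, not a description of WPA Intelligence’s actual system.

```python
# Minimal sketch of nightly "voter scoring": train a model on past
# contact/response data, then score every voter-file record for
# persuadability. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical voter-file features: age, turnout history (0-4 of the
# last four elections), a party-registration flag, and a commercial
# "interest" score appended from a data broker.
n = 10_000
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.integers(0, 5, n),     # past elections voted in
    rng.integers(0, 2, n),     # registered with a party
    rng.random(n),             # broker-supplied interest score
])
# Simulated label: whether a prior persuasion contact moved the voter.
logit = -2.0 + 0.03 * X[:, 0] - 0.4 * X[:, 1] + 1.5 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Nightly run: score the file and keep the most persuadable decile.
scores = model.predict_proba(X_test)[:, 1]
persuadable = np.argsort(scores)[-len(scores) // 10:]
print(f"Top-decile cutoff score: {scores[persuadable[0]]:.3f}")
```

Re-running such a pipeline against fresh response data each night, across many races at once, is what produces score volumes on the order of those claimed in the award citation.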
Deployment of these techniques was a key influence on spending in the US midterm elections (Benes, 2018; Loredo, 2016; McCullough, 2016).

Political campaigns are taking advantage of a rapidly maturing commercial geo-spatial intelligence complex, enhancing mobile and other geotargeting strategies. Location analytics enable companies to make instantaneous associations between the signals sent and received from Wi-Fi routers, cell towers, and a person’s devices and specific locations, including restaurants, retail chains, airports, stadiums, and the like (Skyhook, n.d.). These enhanced location capabilities have further blurred the distinction between what people do in the “offline” physical world and their actions and behaviours online, giving marketers greater ability both to “shadow” individuals and to reach them nearly anytime and anywhere.

A political “geo-behavioural” segment is now a “vertical” product offered alongside more traditional online advertising categories, including auto, leisure, entertainment, and retail. “Hyperlocal” data strategies enable political campaigns to engage in more precise targeting within communities (Mothership Strategies, 2018). Political campaigns are also taking advantage of the widespread use of consumer navigation systems. Waze, the Google-owned navigation firm, operates its own ad system but is also increasingly integrated into the Google programmatic platform (Miller, 2018). For example, in the 2018 midterm election, a get-out-the-vote campaign for one trade group used voter file and Google data to identify a highly targeted segment of likely voters, and then relied on Waze to deliver banner ads with a link to an online video (carefully calibrated to work only when the app signalled the car wasn’t moving). According to the political data firm that developed the campaign, it reached “1 million unique users in advance of the election” (Weissbrot, 2019, April 10).

Political television advertising is rapidly expanding onto unregulated streaming and digital video platforms. For decades, television has been the primary medium used by political campaigns to reach voters in the US. Now the medium is in the process of a major transformation that will dramatically increase its central role in elections (IAB, n.d.-a). One of the most important developments of the past few years is the expansion of advertising and data-targeting capabilities, driven in part by the rapid adoption of streaming services (so-called “over-the-top” or “OTT” television) and the growth of digital video (Weissbrot, 2019, October 22). Leading OTT providers in the US are actively promoting their platform capabilities to political campaigns, making streaming video a new battleground for influencing the public. For example, a “Political Data Cloud” offered by OTT specialist Tru Optik (2019) enables “political advertisers to use both OTT and streaming audio to target specific voter groups on a local, state or national level across such factors as party affiliation, past voting behavior and issue orientation. Political data can be combined with behavioral, demographic and interest-based information, to create custom voter segments actionable across over 80 million US homes through leading publishers and ad tech platforms” (Lerner, 2019).

While political advertising on broadcast stations and cable television systems has long been subject to regulation by the US Federal Communications Commission, newer streaming television and digital video platforms operate outside of this regulatory system (O’Reilly, 2018).
According to research firm Kantar, “political advertisers will be able to air more spots on these streaming video platforms and extend the reach of their messaging—particularly to younger voters” (Lafayette, 2019). These ads will also be part of cross-device campaigns, with videos showing up in various formats on mobile devices as well.

The expanding role of digital platforms gives political campaigns access to additional sources of personal data, including TV programme viewing patterns. For example, in 2018, Altice and smart TV company Vizio launched a partnership to take advantage of recently deployed targeted-advertising technologies, incorporating viewer data from nearly nine million smart TV sets into “its footprint of more than 90 million households, 85% of broadband subscribers and one billion devices in the U.S.” (Clancy, 2018). Vizio’s Inscape (n.d.) division produces technology for smart TVs, offering what is known as “automatic content recognition” (ACR) data. According to Vizio, ACR enables what the industry calls “glass level” viewing data, using “screen level measurement to reveal what programs and ads are being watched in near-real time”, and incorporating the IP address from any video source in use (McAfee, 2019). Campaigns have already demonstrated the efficacy of OTT targeting. AdVictory (n.d.) modelled “387,000 persuadable cord cutters and 1,210 persuadable cord shavers” (the latter referring to people who have cut back on pay TV in favour of streaming video) to make a complex media buy in one state-wide gubernatorial race that reached 1.85 million people “across [video] inventory traditionally untouched by campaigns”.

Further developments in personalisation techniques are enabling political campaigns to maximise their ability to test an expanding array of messaging elements on individual voters. Micro-targeting now involves a more complex personalisation process than merely using so-called behavioural data to target an individual. The use of personal data and other information to influence a consumer is part of an ever-evolving, orchestrated system designed to generate and then manage an individual’s online media and advertising experiences. Google and Facebook, in particular, are adept at harnessing the latest innovations to advance their advertising capabilities, including data-driven personalisation techniques that generate hundreds of highly granular ad-campaign elements from a single “creative” (i.e., advertising message). These techniques are widely embraced by the digital marketing industry, and political campaigns across the political spectrum are being encouraged to expand their use for targeting voters (Meuse, 2018; Revolution Marketing, n.d.; Schuster, 2015). The practice is known by various names, including “creative versioning”, “dynamic creative”, and “dynamic creative optimisation” (DCO) (Shah, 2019). Google’s creative optimisation product, “Directors Mix” (formerly called “Vogon”), is integrated into the company’s suite of “custom affinity audience targeting capabilities, which includes categories related to politics and many other interests”. This product, the company explains, is designed to “generate massively customized and targeted video ad campaigns” (Google, n.d.-c). Marketing experts say that Google now enables “DCO on an unprecedented scale”, and that YouTube will be able to “harness the immense power of its data capabilities…” (Mindshare, 2017).
Directors Mix can tap into Google’s vast resources to help marketers influence people in various ways, making it “exceptionally adept at isolating particular users with particular interests” (Boynton, 2018). Facebook’s “Dynamic Creative” can transform a single ad into as many as “6,250 unique combinations of title, image/video, text, description and call to action”, available to target people on its news feed, on Instagram, and outside of Facebook through its “Audience Network” ad system (Peterson, 2017).

Implications for 2020 and beyond

We have been able to provide only a partial preview of the digital software systems and tools that are likely to be deployed in US political campaigns during 2020. It is already evident that digital strategies will figure even more centrally in the upcoming campaigns than they have in previous elections (Axelrod, Burke, & Nam, 2019; Friedman, 2018, June 19). Many of the leading Democratic candidates, as well as President Trump, who has already ramped up his re-election campaign apparatus, have extensive experience and success in their use of digital technology. Brad Parscale, the campaign manager for Trump’s re-election effort, explained in 2019 that “in every single metric, we’re looking at being bigger, better, and ‘badder’ than we were in 2016”, including the role that “new technologies” will play in the race (Filloux, 2019).

On the one hand, these digital tools could be harnessed to create a more active and engaged electorate, with particular potential to reach and mobilise young voters and other important demographic groups. For example, in the 2018 US midterm elections, newcomers such as Congresswoman Alexandria Ocasio-Cortez, with small budgets but armed with digital media savvy, were able to seize the power of social media, mobile video, and other digital platforms to connect with large swaths of voters largely overlooked by other candidates (Blommaert, 2019). The real-time capabilities of digital media could also facilitate more effective get-out-the-vote efforts, targeting and reaching individuals much more efficiently than in-person appeals and last-minute door-to-door canvassing (O’Keefe, 2019).

On the other hand, there is a very real danger that many of these digital techniques could undermine the democratic process. For example, in the 2016 election, personalised targeted campaign messages were used to identify very specific groups of individuals, including racial minorities and women, and to deliver highly charged messages designed to discourage them from voting (Green & Issenberg, 2016). These kinds of “stealth media” disinformation efforts take advantage of “dark posts” and other affordances of social media platforms (Young et al., 2018).

Though such intentional uses (or misuses) of digital marketing tools have generated substantial controversy and condemnation, there is no reason to believe they will not be used again. Campaigns will also be able to take advantage of a plethora of newer and more sophisticated targeting and message-testing tools, enhancing their ability to fine-tune and deliver precise appeals to the specific individuals they seek to influence, and to reinforce those messages throughout an individual’s “media journey”.

But there is an even greater danger: that increasingly widespread reliance on commercial ad technology tools in the practice of politics will become routine and normalised, subverting the independent and autonomous decision making that is so essential to an informed electorate (Burkell & Regan, 2019; Gorton, 2016).
For example, so-called “dynamic creative” advertising systems are in some ways extensions of A/B testing, a longstanding tool in political campaigns. However, today’s digital incarnation of the practice makes it possible to test thousands of message variations, to assess how each individual responds to them, and to change the content in real time and across media in order to target and retarget specific voters. The data available for this process are extensive, granular, and intimate, incorporating personal information that extends far beyond the conventional categories to encompass behavioural patterns, psychographic profiles, and TV viewing histories. Such techniques are inherently manipulative (Burkell & Regan, 2019; Gorton, 2016; Susser, Roessler, & Nissenbaum, 2019). The increasing use of digital video, in all of its new forms, raises similar concerns, especially when delivered to individuals through mobile and other platforms, generating huge volumes of powerful, immersive, persuasive content and challenging the ability of journalists and scholars to review claims effectively. AI, machine learning, and other automated systems will be able to make predictions about behaviours and affect public decision-making, without any mechanism for accountability.

Taken together, all of these data-gathering, -analysis, and -targeting tools raise the spectre of a growing political surveillance system, capable of capturing unlimited amounts of detailed and highly sensitive information on citizens and using it for a variety of purposes. The increasing predominance of the big data political apparatus could also usher in a new era of permanent campaign operations, in which individuals and groups throughout the country are continually monitored, targeted, and managed.

Because all of these systems are part of the opaque and increasingly automated operations of digital commercial marketing, the techniques, strategies, and messages of the upcoming campaigns will be even less transparent than before. In the heat of a competitive political race, campaigns are not likely to publicise the full extent of their digital operations. As a consequence, journalists, civil society groups, and academics may not be able to assess them fully until after the election. Nor will it be enough to rely on documenting expenditures, because digital ads can be inexpensive, purposefully designed to work virally and aimed at garnering “free media”, resulting in a proliferation of messages that evade categorisation or accountability as “paid political advertising”.

Some scholars have raised doubts about the effectiveness of contemporary big data and digital marketing applications when applied to the political sphere, and about the likelihood of their widespread adoption (Baldwin-Philippi, 2017). It is true that we are in the early stages of the development and implementation of these new tools, and it may be too early to predict how widely they will be used in electoral politics, or how effective they might be. However, the success of digital marketing worldwide in promoting brands and products in the consumer marketplace, combined with the investments and innovations that are expanding its ability to deliver highly measured impacts, suggests to us that these applications will play an important role in our political and electoral affairs.
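To make the mechanics of such variation testing concrete, the following Python sketch enumerates ad variants from a handful of creative elements and then adapts which variant is served using a simple epsilon-greedy loop against simulated responses. The element counts reproduce the 6,250-combination figure cited above (5 × 10 × 5 × 5 × 5); the element names and the response model are our own illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch of dynamic creative optimisation (DCO): enumerate
# variants from creative elements, then learn which variant "works"
# via epsilon-greedy selection against simulated responses.
import itertools
import random

random.seed(0)

# Hypothetical creative elements; the counts multiply out to 6,250.
titles = [f"title_{i}" for i in range(5)]
images = [f"image_{i}" for i in range(10)]
texts = [f"text_{i}" for i in range(5)]
descriptions = [f"desc_{i}" for i in range(5)]
ctas = [f"cta_{i}" for i in range(5)]

variants = list(itertools.product(titles, images, texts, descriptions, ctas))
print(len(variants))  # 6250

# Simulated per-variant click probability (unknown to the optimiser).
true_ctr = {v: random.uniform(0.001, 0.05) for v in variants}

shows = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}

def observed_ctr(v):
    return clicks[v] / shows[v] if shows[v] else 0.0

# Epsilon-greedy: mostly serve the current leader, sometimes explore.
best = variants[0]
for impression in range(100_000):
    if random.random() < 0.1:
        v = random.choice(variants)   # explore a random variant
    else:
        v = best                      # exploit the current leader
    shows[v] += 1
    clicks[v] += random.random() < true_ctr[v]
    if observed_ctr(v) > observed_ctr(best):
        best = v

print(best, round(observed_ctr(best), 4))
```

The point of the sketch is the feedback loop: every impression is both an ad delivery and a measurement, so the system continuously reallocates delivery toward whatever combination of elements provokes the desired response.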
The digital marketing industry has developed an array of measurement approaches to document its impact on the behaviour of individuals and communities (Griner, 2019; IAB Europe, 2019; MMA, 2019). In the no-holds-barred environment of highly competitive electoral politics, campaigns are likely to deploy these and other tools at their disposal without restraint. There are enough indications from the most recent uses of these technologies in the political arena to raise serious concerns, making it particularly urgent to monitor them very closely in upcoming elections.

Industry and legislative initiatives

The largest US technology companies have recently introduced a succession of internal policies and transparency measures aimed at ensuring greater platform responsibility during elections. In November 2019, Twitter announced that it was prohibiting the “promotion of political content”, explaining that it believed “political message reach should be earned, not bought”. CEO Jack Dorsey (2019) was remarkably frank in explaining why Twitter had made this decision: “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale”.

That same month, Google unveiled policy changes of its own, including restricting the kinds of internal data capabilities available to political campaigns. As the company explained, “we’re limiting election ads audience targeting to the following general categories: age, gender, and general location (postal code level)”. Google also announced it was “clarifying” its ads policies and “adding examples to show how our policies prohibit things like ‘deep fakes’ (doctored and manipulated media), misleading claims about the census process, and ads or destinations making demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process” (Spencer, 2019).

It remains to be seen whether changes such as Google’s and Twitter’s will actually alter, in any significant way, the contemporary operations of data-driven political campaigns. Some observers believe that Google’s new policy will benefit the company, noting that “by taking away the ability to serve specific audiences content that is most relevant to their values and interests, Google stands to make a lot MORE money off of campaigns, as we’ll have to spend more to find and reach our intended audiences” (“FWIW: The Platform Self-regulation Dumpster Fire”, 2019).

Interestingly, Facebook, the tech company that has been subject to the greatest amount of public controversy over its political practices, had not, at the time of this writing, made similar changes to its political advertising policies. Though the social media giant has been widely criticised for its refusal to fact-check political ads for accuracy and fairness, it has not been willing to institute any mechanisms for intervening in the content of those ads (Ingram, 2018; Isaac, 2019; Kafka, 2019). However, Facebook did announce in 2018 that it was ending its participation in the industry-wide practice of embedding, in which platform sales teams worked hand-in-hand with leading political campaigns (Ingram, 2018; Kreiss & McGregor, 2017).
After a research article generated extensive news coverage of this practice, Facebook publicly announced it would cease the arrangement, instead “offering tools and advice” through a politics portal that provides “candidates information on how to get their message out and a way to get authorised to run ads on the platform” (Emerson, 2018; Jeffrey, 2018). In May 2019, the company also announced it would stop paying commissions to employees who sell political ads (Glazer & Horowitz, 2019). Such a move may not have a major effect on sales, however, especially since the tech giant has already generated significant income from political advertising for the 2020 campaign (Evers-Hillstrom, 2019).

Under pressure from civil rights groups over discriminatory ad-targeting practices in housing and other areas, Facebook has undergone an extensive civil rights audit, which has resulted in a number of internal policy changes, including some related to campaigns and elections. For example, the company announced in June 2019 that it had “strengthened its voter suppression policy” to prohibit “misrepresentations” about the voting process, as well as any “threats of violence related to voting”. It has also committed to making further changes, including investments designed to prevent the use of the platform “to manipulate U.S. voters and elections” (Sandberg, 2019).

Google, Facebook, and Twitter have all established online archives to enable the public to find information about the political advertisements that run on their platforms. But these databases provide only a limited range of information. For example, Google’s (2018) archive contains copies of all political ads run on the platform and shows the amount spent overall and on specific ads by a campaign, as well as the age range, gender, area (state), and dates when an ad appeared, but it does not share the actual “targeting criteria” used by political campaigns (Walker, 2018). Facebook’s (n.d.-b) Ad Library describes itself as a “comprehensive, searchable collection of all ads currently running across Facebook Products” and claims to provide “data for all ads related to politics or to issues of national importance” that have run on the platform since May 2018 (Sullivan, 2019). While the data include breakdowns of the age, gender, state, number of impressions, and spending for each ad, no details are provided to explain how the ad was constructed, tested, and altered, or what digital ad-targeting techniques were used. For example, Facebook (n.d.-a-e) permits US-based political campaigns to use its “Custom or Lookalike Audiences” ad-targeting product, but it does not report such use in its Ad Library. Though all of these new transparency systems and ad archives offer useful information, they also place a considerable burden on users. Many of these new measures are likely to be more valuable for watchdog organisations and journalists, who can use the information to track spending, identify emerging trends, and shed additional light on the process of digital political influence.

While these kinds of changes in platform policies and operations should help to mitigate some of the more egregious uses of social media by unscrupulous campaigns and other actors, they are not likely to alter in any major way the basic operations of today’s political advertising practices.
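One of those basic operations is list “onboarding”: matching a campaign’s voter file against a platform’s user records by exchanging hashed identifiers, the general mechanism underlying products such as the Custom Audiences tool mentioned above. The sketch below, using only Python’s standard library, shows the idea; the normalisation rule and the record layouts are illustrative assumptions rather than any platform’s documented specification.

```python
# Minimal sketch of hashed-identifier list matching ("onboarding"):
# both sides hash normalised emails, then match on the digests so
# raw addresses are not exchanged directly.
import hashlib

def normalise_and_hash(email: str) -> str:
    """Lowercase and trim the address, then SHA-256 hash it."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical campaign voter file and platform user base.
voter_file = ["alice@example.com", " Bob@Example.com ", "carol@example.org"]
platform_users = ["alice@example.com", "bob@example.com", "dave@example.net"]

uploaded_hashes = {normalise_and_hash(e) for e in voter_file}
platform_hashes = {normalise_and_hash(e): e for e in platform_users}

# The intersection is the matched "custom audience" a campaign can target.
matched = [user for digest, user in platform_hashes.items()
           if digest in uploaded_hashes]
print(matched)  # ['alice@example.com', 'bob@example.com']
```

Because the matching happens inside the platform and only aggregate results are reported back, nothing in the public ad archives reveals that a voter file was used at all, which is precisely the transparency gap noted above.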
With each tech giant instituting its own set of internal ad policies, there are no clear industry-wide “rules of the game” that apply to all participants in the digital ecosystem. Nor are there strong transparency or accountability systems in place to ensure that the policies are effective. Though platform companies may institute changes that appear to offer meaningful safeguards, other players in the highly complex big data marketing infrastructure may offer ways to circumvent these apparent restrictions. As a case in point, when Facebook (2018, n.d.-c) announced in the wake of the Cambridge Analytica scandal that it was “shutting down Partner Categories”, the move provoked alarm inside the ad-tech industry that a set of powerful applications was being withdrawn (Villano, 2018). The product had enabled marketers to incorporate data provided by Facebook’s selected partners, including Acxiom and Epsilon (Pathak, 2018). Despite the policy change, however, Facebook still enables marketers to bring a tremendous amount of third-party data onto the platform for targeting (Popkin, 2019). Indeed, shortly after Facebook’s announcement, LiveRamp offered assurances to its clients that no significant changes had been made, explaining that “while there’s a lot happening in our industry, LiveRamp customers have nothing to fear” (Carranza, 2018).

The controversy generated by recent foreign interference in US elections has also fuelled a growing call to update US election laws. However, the current policy debate over the regulation of political advertising continues to be waged within a very narrow framework, which needs to be revisited in light of current digital practices. Legislative proposals have been introduced in Congress that would strengthen the disclosure requirements for digital political ads regulated by the Federal Election Commission (FEC). For example, under the Honest Ads Act, digital media platforms would be required to provide information about each ad via a “public political file”, including who purchased the ad, when it appeared, and how much was spent, as well as “a description of the targeted audience”. Campaigns would also be required to provide the same information for online political ads that is required for political advertising in other media. The proposed legislation currently has the support of Google, Facebook, Twitter, and other leading companies (Ottenfeld, 2018, April 25). A more ambitious bill, the For the People Act, is backed by the new Democratic majority in the House of Representatives and includes similar disclosure requirements, along with a number of provisions aimed at reducing “the influence of big money in politics”. Though these bills are a long-overdue first step toward bringing transparency measures into the digital age, neither addresses the broad range of big data marketing and targeting practices already in widespread use across political campaigns. And it is doubtful whether either of these limited policy approaches stands a chance of passage in the near future.
There is strong opposition to regulating political campaign and ad practices at the federal level, primarily because of what critics claim would be violations of the free speech principle of the First Amendment to the US Constitution (Brodey, 2019).

While the prospects for regulating political advertising appear dim at present, there is a strong bipartisan move in Congress to pass federal privacy legislation that would regulate commercial uses of data, which could, in turn, affect the operations, tools, and techniques available to digital political campaigns. Google, Facebook, and other digital data companies have long opposed any comprehensive privacy legislation. But a number of recent events have combined to force the industry to change its strategy: the implementation of the EU General Data Protection Regulation (GDPR) and the passage of state privacy laws (especially in California); the seemingly never-ending news reports on Facebook’s latest scandal; massive data breaches of personal information; accounts of how online marketers engage in discriminatory practices and promote hate speech; and the continued political fallout from “Russiagate”. Even the leading tech companies are now pushing for privacy legislation, if only to reduce the growing political pressure they face from the states, the EU, and their critics (Slefo, 2019). Also fuelling the debate on privacy are growing concerns over digital media industry consolidation, which have triggered calls by political leaders and presidential candidates to “break up” Amazon and Facebook (Lecher, 2019). Numerous bills have been introduced in both houses of Congress, some incorporating strong provisions for regulating both data use and marketing techniques. However, as the 2020 election cycle gets underway, the ultimate outcome of this flurry of legislative activity remains up in the air (Kerry, 2019).

Opportunities for intervention

Given the uncertainty in the regulatory and self-regulatory environment, there is likely to be little or no restraint on the use of data-driven digital marketing practices in the upcoming US elections. Groups from across the political spectrum, including both campaigns and special interest groups, will continue to engage in ferocious digital combat (Lennon, 2018). Given the intense partisanship, fuelled by what is admittedly a high-stakes-for-democracy election (for all sides), as well as the current ease with which all of the available tools and methods can be deployed, no company or campaign will voluntarily step away from the “digital arms race” that US elections have become. With an extremely close race expected for the Electoral College, which determines US presidential elections, 2020 is poised to see both parties use digital marketing techniques to identify and mobilise the handful of voters needed to “swing” a state one way or another (Schmidt, 2019).

Campaigns will have access to an unprecedented amount of personal data on every voter in the country, drawing from public sources as well as the growing commercial big data infrastructure.
As a consequence, the next election cycle will be characterised by ubiquitous political targeting and messaging, fed continuously through multiple media outlets and communication devices.

At the same time, concerns over continued threats of foreign election interference, along with the ongoing controversy triggered by the Cambridge Analytica/Facebook scandal, have re-energised campaign reform and privacy advocates and engaged the continuing interest of watchdog groups and journalists. This heightened attention to the role of digital technologies in the political process has created an unprecedented window of opportunity for civil society groups, foundations, educators, and other key stakeholders to push for broad public policy and structural changes. Such an effort would need to be multi-faceted, bringing together diverse organisations and issue groups and taking advantage of current policy deliberations at both the federal and state levels.

In other western democracies, governments and industry organisations have taken strong proactive measures to address the use of data-driven digital marketing techniques by political parties and candidates. For example, the Institute of Practitioners in Advertising (IPA), a leading UK advertising organisation, has called for a “moratorium on micro-targeted political advertising online”. “In the absence of regulation”, the IPA explained, “we believe this almost hidden form of political communication is vulnerable to abuse”. Leading members of the UK advertising industry, including firms that work on political campaigns, have endorsed these recommendations (Oakes, 2018). The UK Information Commissioner’s Office (ICO, 2018), which regulates privacy, conducted an investigation of recent digital political practices and issued a report urging the government to “legislate at the earliest opportunity to introduce a statutory code of practice” addressing the “use of personal information in political campaigns” (Denham, 2018). In Canada, the Privacy Commissioner has offered “guidance” to political parties on their use of data, including “best practices” for requiring consent when using personal information (Office of the Privacy Commissioner of Canada, 2019). The European Council (2019) has adopted a similar set of policies requiring political parties to adhere to EU data protection rules.

We recognise that the United States has a unique regulatory and legal system, in which First Amendment protections of free speech have limited the regulation of political campaigns. However, the dangers that big data marketing operations pose to the integrity of the political process require a rethinking of policy approaches. A growing number of legal scholars have begun to question whether political uses of data-driven digital marketing should be afforded the same level of First Amendment protection as other forms of political speech (Burkell & Regan, 2019; Calo, 2013; Rubinstein, 2014; Zarsky, 2019).
“The strategies of microtargeting political ads”, explain Jacquelyn Burkell and Priscilla Regan (2019), “are employed in the interests not of informing, or even persuading voters but in the interests of appealing to their non-rational biases as defined through algorithmic profiling”.

Advocates and policymakers in the US should explore various legal and regulatory strategies, developing a broad policy agenda that encompasses data protection and privacy safeguards; robust transparency, reporting, and accountability requirements; restrictions on certain digital advertising techniques; and limits on campaign spending. For example, disclosure requirements for digital media need to be much more comprehensive. At the very least, campaigns, platforms, and networks should be required to disclose fully all the ad and data practices they use (e.g., cross-device tracking, lookalike modelling, geolocation, measurement, neuromarketing), as well as the variations of ads delivered through dynamic creative optimisation and similar AI applications. Some techniques — especially those that are inherently manipulative in nature — should not be allowed in political campaigns at all. Greater attention will also need to be paid to the uses of data and targeting techniques, articulating distinctions between those designed to promote robust participation, such as “get out the vote” efforts, and those whose purpose is to discourage voters from exercising their rights at the ballot box. Limits should also be placed on the sources and amount of data collected on voters. Political parties, campaigns, and political action committees should not be allowed unfettered access to consumer profile data, and voters should have the right to provide affirmative consent (“opt-in”) before any of their information can be used for political purposes. Policymakers should be required to stay abreast of fast-moving innovations in the technology and marketing industries, identifying the uses and abuses of digital applications for political purposes, such as the way WhatsApp was deployed for “computational propaganda” during recent elections in Brazil (Magenta, Gragnani, & Souza, 2018).

In addition to pushing for government policies, advocates should place pressure on the major technology industry players and political institutions through grassroots campaigns, investigative journalism, litigation, and other measures. If we are to have any reform in the US, there must be multiple and continuous points of pressure. The two major political parties should be encouraged to adopt a proposed new best-practices code. Advocates should also consider adopting the model developed by civil rights groups and their allies in the US, who negotiated successfully with Google, Facebook, and others to develop more responsible and accountable marketing and data practices (Peterson & Marte, 2016); similar efforts could focus on political data and ad practices. NGOs, academics, and other entities outside the US should also be encouraged to raise public concerns.

All of these efforts would help ensure that the US electoral process operates with integrity, protects privacy, and does not permit discriminatory practices designed to diminish debate and undermine full participation.

Citations are available via https://policyreview.info/articles/analysis/digital-commercialisation-us...

This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J.
Bennett and David Lyon: https://policyreview.info/data-driven-elections
  • A new report on how political marketing insiders and platforms such as Facebook view the “ethical” issues raised by the role of digital marketing in elections illustrates why advocates and others concerned about election integrity should make this issue a public-policy priority. We cannot afford to leave it in the hands of “politech” firms and political campaign professionals, who appear unable to acknowledge the consequences to democracy of their unfettered use of powerful data-driven online-marketing applications. “Digital Political Ethics: Aligning Principles with Practice” reports on a series of conversations and a two-day meeting last October that included representatives of firms that work either for Democrats or Republicans (such as Blue State, Targeted Victory, WPA Intelligence, and Revolution Messaging), as well as officials from both Facebook and Twitter. The goal of the project was to “identify areas of agreement among key stakeholders concerning ethical principles and best practices in the conduct of digital campaigning in the U.S.”

Perhaps it should not be a surprise that this group of people appears incapable of critically examining (or even candidly assessing) all of the problems connected with the role of digital marketing in political campaigns. Missing from the report is any real concern about how today’s electoral process takes advantage of the absence of any meaningful privacy safeguards in the U.S. A vast commercial surveillance apparatus with no bounds has been established. This same system that is used to market goods and services, driven by data brokers, marketing clouds, real-time ad-decision engines, geolocation identification, and other AI-based technologies, along with the clout of leading platforms and publishers, is now also used for political purposes. All of us are tracked and profiled 24/7, including where we go and what we do, with little location privacy anymore. Political insiders and data ad companies such as Facebook, however, are unwilling to confront this loss of privacy, given how valuable all this personal data is to their business models and political goals.

Another concern is that these insiders now view digital marketing as a normative, business-as-usual process—and nothing out of the ordinary. But anyone who knows how the system operates should be deeply concerned about the nontransparent and often far-reaching ways digital marketing is constructed to influence our decision-making and behaviors, including at emotional and subconscious levels. The report demonstrates that campaign officials have largely accepted as reasonable the various invasive and manipulative technologies and techniques that the ad-tech industry has developed over the past decade. Perhaps these officials are simply being pragmatic. But society cannot afford such a cynical position. Today’s political advertising is not yesterday’s TV commercial, nor is it purely an effort to “microtarget” sympathetic market segments. Today’s digital marketing apparatus follows all of us continuously, Democrats, Republicans, and independents alike.
The marketing ecosystem is finely tuned to learn how we react, transforming itself depending on those reactions and making decisions about us in milliseconds in order to use—and refine—various tactics to influence us, including entirely new ad formats, each tested and measured to have us think and behave one way or another. And this process is largely invisible to voters, regulators, and the news media. But for the insiders, microtargeting helps get the vote out and encourages participation. Nothing much is said about what happened in the 2016 U.S. election, when some political marketers sought to suppress the vote among communities of color, while others engaged in disinformation.

Some of these officials now propose that political campaigns should be awarded a digital “right of way” that would guarantee them unfettered access to Facebook, Google, and other sites, as well as ensure favorable terms and support. This is partly in response to the recent and much-needed reforms adopted by Twitter and Google that either eliminate or restrict how political campaigns can use their platforms, which many in the politech industry dislike. Some campaign officials see the FCC rules regulating political TV ads as an appropriate model on which to build policies for digital campaigning. That notion should alarm those who care about the role that money plays in politics, let alone the nature of today’s politics (as well as those who know the myriad failures of the FCC over the decades).

The U.S. needs to develop a public policy for digital data and advertising that places the interests of the voter and of democracy before those of political campaigns. Such a policy should include protecting the personal information of voters; limiting deceptive and manipulative ad practices (such as lookalike modeling); and prohibiting those contemporary ad-tech practices (e.g., algorithm-based real-time programmatic ad systems) that can unfairly influence election outcomes. Also missing from the discussion is the impact of the never-ending expansion of “deep-personalization” digital marketing applications designed to influence and shift consumer behavior more effectively. The use of biodata, emotion recognition, and other forms of what’s being called “precision data”—combined with a vast expansion of always-on sensors operating in an Internet of Things world—will provide political groups with even more ways to transform electoral outcomes. If civil society doesn’t take the lead in reforming this system, powerful insiders with their own conflicts of interest will shape the future of democratic decision-making in the U.S. We cannot afford to leave it to the insiders to decide what is best for our democracy.
  • Press Release

    Popular Dating, Health Apps Violate Privacy

    Leading Consumer and Privacy Groups Urge Congress, the FTC, State AGs in California, Texas, Oregon to Investigate

    For Immediate Release: Jan. 14, 2020

    Contact: David Rosen, drosen@citizen.org, (202) 588-7742; Angela Bradbery, abradbery@citizen.org, (202) 588-7741

    WASHINGTON, D.C. – Nine consumer groups today asked the Federal Trade Commission (FTC), congressional lawmakers and the state attorneys general of California, Texas and Oregon to investigate several popular apps available in the Google Play Store. A report released today by the Norwegian Consumer Council (NCC) alleges that the apps are systematically violating users’ privacy. The report found that 10 well-known apps – Grindr, Tinder, OkCupid, Happn, Clue, MyDays, Perfect365, Qibla Finder, My Talking Tom 2 and Wave Keyboard – are sharing information they collect on users with third-party advertisers without users’ knowledge or consent, a practice the European Union’s General Data Protection Regulation forbids.

    When it comes to drafting a new federal privacy law, American lawmakers cannot trust input from companies that do not respect user privacy, the groups maintain. Congress should use the findings of the report as a roadmap for a new law ensuring that the flagrant violations of privacy found in the EU are not acceptable in the U.S. The new report alleges that these apps (and likely a great many others) are allowing commercial third parties to collect, use and share sensitive consumer data in a way that is hidden from the user and involves parties that the consumer neither knows about nor would be familiar with. Although consumers can limit some tracking on desktop computers through browser settings and extensions, the same cannot be said for smartphones and tablets. As consumers use their smartphones throughout the day, the devices are recording information about sensitive topics such as their health, behavior, religion, interests and sexuality.

    “Consumers cannot avoid being tracked by these apps and their advertising partners because they are not provided with the necessary information to make informed choices when launching the apps for the first time. In addition, consumers are unable to make an informed choice because the extent of tracking, data sharing, and the overall complexity of the adtech ecosystem is hidden and incomprehensible to average consumers,” the letters sent to lawmakers and regulators warn.

    The nine groups are the American Civil Liberties Union of California, Campaign for a Commercial-Free Childhood, the Center for Digital Democracy, Consumer Action, the Consumer Federation of America, Consumer Reports, the Electronic Privacy Information Center (EPIC), Public Citizen and U.S. PIRG. In addition to calling for an investigation, the groups are calling for a strong federal digital privacy law that includes a new data protection agency, a private right of action and strong enforcement mechanisms.

    Below are quotes from groups that signed the letters:

    “Every day, millions of Americans share their most intimate personal details on these apps, upload personal photos, track their periods and reveal their sexual and religious identities. But these apps and online services spy on people, collect vast amounts of personal data and share it with third parties without people’s knowledge. Industry calls it adtech. We call it surveillance.
We need to regulate it now, before it’s too late.” Burcu Kilic, digital rights program director, Public Citizen “The NCC’s report makes clear that any state or federal privacy law must provide sufficient resources for enforcement in order for the law to effectively protect consumers and their privacy. We applaud the NCC’s groundbreaking research on the adtech ecosystem underlying popular apps and urge lawmakers to prioritize enforcement in their privacy proposals.” Katie McInnis, policy counsel, Consumer Reports “U.S. PIRG is not surprised that U.S. firms are not complying with laws giving European consumers and citizens privacy rights. After all, the phalanx of industry lobbyists besieging Washington, D.C., has been very clear that its goal is simply to perpetuate a 24/7/365 surveillance capitalism business model, while denying states the right to protect their citizens better and denying consumers any real rights at all.” Ed Mierzwinski, senior director for consumer programs, U.S. PIRG “This report reveals how the failure of the U.S. to enact effective privacy safeguards has unleashed an out-of-control and unaccountable monster that swallows up personal information in the EU and elsewhere. The long unregulated business practices of digital media companies have shred the rights of people and communities to use the internet without fear of surveillance and manipulation. U.S. policymakers have been given a much-needed wake-up call by Norway that it’s overdue for the enactment of laws that bring meaningful change to the now lawless digital marketplace.” Jeff Chester, executive director, Center for Digital Democracy “For those of us in the U.S., this research by our colleagues at the Norwegian Consumer Council completely debunks the argument that we can protect consumers’ privacy in the 21st century with the old notice-and-opt-out approach, which some companies appear to be clinging to in violation of European law. Business practices have to change, and the first step to accomplish that is to enact strong privacy rights that government and individuals can enforce.” Susan Grant, director of consumer protection and privacy, Consumer Federation of America “The illuminating report by our EU ally the Norwegian Consumer Council highlights just how impossible it is for consumers to have any meaningful control over how apps and advertising technology players track and profile them. That’s why Consumer Action is pressing for comprehensive U.S. federal privacy legislation and subsequent strong enforcement efforts. Enough is enough already! Congress must protect us from ever-encroaching privacy intrusions.” Linda Sherry, director of national priorities, Consumer Action “For families who wonder what they’re trading off for the convenience of apps like these, this report makes the answer clear. These companies are exploiting us – surreptitiously collecting sensitive information and using it to target us with marketing. It’s urgent that Congress pass comprehensive legislation which puts the privacy interests of families ahead of the profits of businesses. Thanks to our friends at the Norwegian Consumer Council for this eye-opening research.” David Monahan, campaign manager, Campaign for a Commercial-Free Childhood “This report highlights the pervasiveness of corporate surveillance and the failures of the FTC notice-and-choice model for privacy protection. Congress should pass comprehensive data protection legislation and establish a U.S. 
Data Protection Agency to protect consumers from the privacy violations of the adtech industry.” Christine Bannan, consumer protection counsel, EPIC
• Contact: Jeff Chester, CDD (jeff@democraticmedia.org; 202-494-7100); David Monahan, CCFC (david@commercialfreechildhood.org; 617-896-9397)

Groups Praise Sen. Markey and Google for Ensuring Children on YouTube Receive Key Safeguards

BOSTON, MA & WASHINGTON, DC—December 18, 2019—The organizations that spurred the landmark FTC settlement with Google over COPPA violations applauded the announcement of additional advertising safeguards for children on YouTube today. The Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) commended Google for announcing it would apply most of its robust marketing protections on YouTube Kids, including no advertising of food, beverages, or harmful products, to all child-directed content on its main YouTube platform. The groups also lauded Senator Markey for securing a public commitment from Google to implement these long-overdue safeguards. The advocates expressed disappointment, however, that Google did not agree to prohibit paid influencer marketing and product placement to children on YouTube as it does on YouTube Kids.

“Sen. Ed Markey has long been and remains the champion for kids,” said Jeff Chester, CDD’s executive director. “Through the intervention of Sen. Markey, Google has finally committed to protecting children whether they are on the main YouTube platform or using the YouTube Kids app. Google has acted responsibly in announcing that its advertising policies now prohibit any food and beverage marketing on YouTube Kids, as well as ads involving ‘sexually suggestive, violent or dangerous content.’ However, we remain concerned that Google may try to weaken these important child- and family-friendly policies in the near future. Thus we call on Google to commit to keeping these rules in place, and to implement other needed safeguards that children deserve,” added Chester.

Josh Golin, Executive Director of CCFC, said, “We are so grateful to Senator Markey for his leadership on one of the most crucial issues faced by children and families today. And we commend Google for implementing a robust set of advertising safeguards on the most popular online destination for children. We urge Google to take another critical step and prohibit child-directed influencer content on YouTube; if this manipulative marketing isn’t allowed on children’s TV or YouTube Kids, it shouldn’t be targeted to children on the main YouTube platform either.”

###
• Washington, December 11, 2019 – In comments filed today in response to the Federal Trade Commission’s review of COPPA, the Center for Digital Democracy, the Campaign for a Commercial-Free Childhood, the American Academy of Pediatrics, and a total of 19 advocacy groups faulted the FTC for failing to engage in sufficient enforcement and oversight of the children’s privacy law. The groups suggested how COPPA can better protect children’s privacy, and urged the Commission not to weaken the law to satisfy industry’s thirst for more data about kids. The advocates also urged the FTC first to investigate the “kid tech” market before it proposes any changes in how to implement its rules.

The following can be attributed to Jeff Chester, Executive Director, Center for Digital Democracy: “Children are at greater risk today of losing their digital privacy because the FTC has failed to enforce COPPA. For years, the Commission has allowed Google and many others to ignore the landmark bipartisan law designed to protect children under 13. It’s time for the FTC to stand up to the big data companies and put the interests of young people and families first.”

The following can be attributed to Josh Golin, Executive Director, Campaign for a Commercial-Free Childhood: “This is a critical moment for the future of children’s online privacy. The ink is barely dry on the FTC’s first major COPPA enforcement, and already industry is mobilizing to weaken the rules. The FTC should not make any changes to COPPA until it uses its authority to learn exactly how Big Tech is collecting and monetizing our children’s data.”

The following can be attributed to Kyle Yasuda, MD, FAAP, President, American Academy of Pediatrics: “Keeping children safe and healthy where they learn and grow is core to what pediatricians do every day, and today more than ever before that extends to the digital spaces that children inhabit. The Children’s Online Privacy Protection Act is a foundational law that helps hold companies accountable to basic standards of safety when it comes to children’s digital privacy, but it’s only as effective as its enforcement by the Federal Trade Commission. Before any major changes are made to COPPA, we must ensure that the FTC is doing its part to keep children safe wherever they engage online.”

The following can be attributed to Laura Moy, Associate Professor of Law, Director of the Communications and Technology Law Clinic, Institute for Public Representation at Georgetown University Law Center: “A recent survey showed that the majority of Americans feel that ‘the threat to personal privacy online is a crisis.’ We are at a critical point in our nation’s history right now—when we are deciding whether or not to allow companies to track, profile, and target us to an extent that compromises our ability to be and make decisions for ourselves. At the forefront of that discussion are children. We must protect the next generation from inappropriate tracking so that they can grow up with privacy and dignity. To make good on that, the FTC must thoroughly investigate how companies are collecting and using children’s data, and must enforce and strengthen COPPA.”
  • Press Release

    Leading child advocacy, health, and privacy groups call on FTC to Investigate Children’s Digital Media Marketplace Before Proposing any Changes to Privacy Protections for Children

    Threats to young people from digital marketing and data collection must be analyzed to ensure meaningful safeguards under the Children’s Online Privacy Protection Act (COPPA).

EMBARGOED UNTIL DECEMBER 5, 2019 AT 12:01 AM

Contact: Jeffrey Chester, CDD, jeff@democraticmedia.org, (202) 494-7100; Josh Golin, CCFC, josh@commercialfreechildhood.org, (617) 896-9369

Leading child advocacy, health, and privacy groups call on FTC to Investigate Children’s Digital Media Marketplace Before Proposing any Changes to Privacy Protections for Children

Threats to young people from digital marketing and data collection must be analyzed to ensure meaningful safeguards under the Children’s Online Privacy Protection Act (COPPA).

WASHINGTON, DC and BOSTON, MA – December 5, 2019 – A coalition of 31 advocacy groups is urging the Federal Trade Commission to use its subpoena authority to obtain information from leading digital media companies that target children online. In comments filed today by the Institute for Public Representation at Georgetown and organized by the Center for Digital Democracy (CDD) and the Campaign for a Commercial-Free Childhood (CCFC), the coalition described the opaque data and digital marketing practices that target kids. The comments were filed with the FTC as part of its early review of the rules protecting children under the Children’s Online Privacy Protection Act (COPPA). The advocates’ call was supported by Sesame Workshop, the leading producer of children’s educational programming, in a separate filing.

To better assess the impacts on children of today’s digital data-driven advertising system—including features such as cross-device tracking, artificial intelligence, machine learning, virtual reality, and real-time measurement—the advocates urge the Commission to gather and analyze data from leading companies that target children. Any proposed changes to COPPA must be based on empirical data, which is consistent with calls by Commissioners Wilson, Phillips, and Simons that rulemaking must be evidence-based.

In their comments, the organizations ask the FTC to use its authority under rule 6(b) to:

- Examine today’s methods of advertising to children and their impact, including their discriminatory effects
- Examine practices concerning data collection and retention
- Illuminate children’s presence on “general audience” platforms and those platforms’ awareness of children’s presence
- Identify how the data of children is being used by contemporary data platforms, including “marketing clouds,” “identity management” systems, in-house data management platforms, and data brokers
- Illuminate the efficacy—or lack thereof—of safe harbors

Groups that have signed the comments are Campaign for a Commercial-Free Childhood; Center for Digital Democracy; American Academy of Pediatrics; Badass Teachers Association; Berkeley Media Studies Group; Center for Science in the Public Interest; Children and Screens; Color of Change; Common Sense Media; Consumer Action; Consumer Federation of America; Consumer Federation of California; Consumer Reports; Consumer Watchdog; Corporate Accountability; Defending the Early Years; Electronic Frontier Foundation; Electronic Privacy Information Center; Obligation, Inc.; Parent Coalition for Student Privacy; Parents Across America; Parents Television Council; P.E.A.C.E. (Peace Educators Allied For Children Everywhere); Privacy Rights Clearinghouse; Public Citizen; Public Knowledge; The Story of Stuff; TRUCE (Teachers Resisting Unhealthy Childhood Entertainment); UnidosUS; United Church of Christ; and U.S. Public Interest Research Group (U.S. PIRG). ….
The following can be attributed to Kyle Yasuda, MD, FAAP, President, American Academy of Pediatrics: “As children become more digitally connected, it becomes even more important for parents, pediatricians and others who care for young children to understand how digital media impacts their health and development. Since digital technology evolves rapidly, so must our understanding of how data companies are engaging with children’s information online. As we pursue the promise of digital media for children’s development, we must design robust protections to keep them safe based on an up-to-date understanding of the digital spaces they navigate.”

The following can be attributed to Josh Golin, Executive Director of Campaign for Commercial-Free Childhood: “As kids are spending more time than ever on digital devices, we need the full power of the law to protect them from predatory data collection -- but we can’t protect children from Big Tech business models if we don’t know how those models truly work. The FTC must use its full authority to investigate opaque data and marketing practices before making any changes to COPPA. We need to know what Big Tech knows about our kids.”

The following can be attributed to Katharina Kopp, Director of Policy, Center for Digital Democracy (CDD): “Children are being subjected to a purposefully opaque ‘Big Data’ digital marketing system that continually gathers their information when they are online. The FTC must use its authority to understand how new and evolving advertising practices targeting kids really work, and whether these data practices are having a discriminatory, or other harmful impact, on their lives.”

The following can be attributed to James P. Steyer, CEO and Founder of Common Sense: “Kids and families have to be the priority in any changes to COPPA, and in order to do that, we must fully understand what the industry is and isn’t doing when it comes to tracking and targeting kids. Tech companies are never going to be transparent about their business practices, which is why it is critical that the FTC use its authority to look behind the curtain and shed light on what they are doing when it comes to kids, so that if any new rules are needed, they can be smart and well-informed.”

The following can be attributed to Katie McInnis, Policy Counsel, Consumer Reports: “We’re glad the FTC is asking for comments on the implementation of COPPA through the 2013 COPPA rule. But the Commission should have the fullest possible picture of how children’s personal information is being collected and used before it considers any changes. It’s well-documented that compliance with COPPA is uneven among apps, connected toys, and online services. The FTC must fully understand how kids’ personal information is treated before the 2013 rule can be modified, in order to ensure that children and their data are protected.”

The following can be attributed to Marc Rotenberg, President, Electronic Privacy Information Center (EPIC): “The FTC should complete its homework before it proposes changes to the regulations that safeguard children’s privacy. Without a clear understanding of current industry practices, the agency’s proposal will be ill-informed and counterproductive.”

The following can be attributed to Lindsey Barrett, Staff Attorney and Teaching Fellow, Institute for Public Representation, Georgetown Law: “The FTC should conduct 6(b) studies to shed light on the complex and evolving profiling practices that violate children’s privacy. Children are being monitored, quantified, and analyzed more than ever before, and the Commission cannot make informed decisions about the rules that protect them online based on limited or skewed information about the online ecosystem.”

The following can be attributed to Robert Weissman, President, Public Citizen: “The online corporate predators are miles ahead of the FTC, employing surveillance and targeting tactics against children that flout the protections enshrined in COPPA. The first thing the FTC should do is invoke its investigative powers to get a firm grasp on how Big Tech is systematically invading children’s privacy.”

The following can be attributed to Cheryl A. Leanza, Policy Advisor, UCC OC Inc.: “In the modern era, our data are our lives, and our children’s lives are monitored and tracked in more detail than any previous generation, to unknown effect. Parents seek to pass on their own values and priorities to their children, but feel subverted at every turn by unknown algorithms and marketing efforts directed to their children. At a minimum, the FTC must collect basic facts and trends about children and their data privacy.”

The following can be attributed to Eric Rodriguez, Senior Vice President, UnidosUS: “All children should have the right to privacy and to live free from discrimination, including in digital spaces. Latino children are targeted by digital marketing efforts, with real consequences to their health and wellbeing. UnidosUS urges the Commission to use its authority and study how children of color operate in the digital space, what happens to their personal data, and how well they are protected by COPPA. Only then can the Commission take effective and objective action to strengthen COPPA to protect an increasingly diverse youth population.”
    Jeff Chester
• In the aftermath of Google’s settlement with the FTC over its COPPA violations, some independent content producers on YouTube have expressed unhappiness with the decision. They are unclear about how to comply with COPPA, and believe their revenue will diminish considerably. Some also worry that Google’s recently announced system to meet the FTC settlement—under which producers must identify whether their content is child-directed—will affect their overall ability to “monetize” their productions even if they aren’t aiming primarily to serve a child audience. These YouTubers have focused their frustration on the FTC and have mobilized to file comments in the current COPPA proceedings. As Google has rolled out its new requirements, it has abetted this misdirected focus on the FTC and created much confusion and panic among YouTube content producers. Ultimately, their campaign, designed to weaken the lone federal law protecting children’s privacy online, could create even more violations of children’s privacy.

While we sympathize with many of the YouTubers’ concerns, we believe their anger and sole focus on the FTC is misplaced. It is Google that is at fault here, and it needs finally to own up and step up. The truth is, it is Google’s YouTube that has violated the 2013 COPPA rule pretty much since its inception. The updated rule made it illegal to collect persistent identifiers from children under 13 without parental consent. Google did so while purposefully developing YouTube as the leading site for children. It encouraged content creators to go all in and to be complicit in the fiction that YouTube is only for those aged 13 and above. Even though Google knew that this new business model was a violation of the law, it benefitted financially by serving personalized ads to children (and especially by creating the leading online destination for children in the U.S. and worldwide). All the while, small, independent YouTube content creators built their livelihoods on this illegitimate revenue stream. The corporate content brand channels of Hasbro, Mattel and the like, which do not rely on YouTube revenue, as well as corporate advertisers, also benefitted handsomely from this arrangement, which allowed them to market to children unencumbered by COPPA regulations.

But let’s review further how Google is handling the post-settlement world. Google chose to structure the solution to its own COPPA violation in a way that continues to place the burden and consequences of COPPA compliance on independent content creators. Rather than acknowledging wrongdoing and culpability in the plight of content creators who built their livelihoods on the sham that Google had created, Google produced an instructional video for content creators that emphasizes the consequences of non-compliance and the potential negative impact on the creators’ ability to monetize. The video also appears to have scared those who do not create “for kids” content. Google requires content creators to self-identify their content as “for kids,” and it will use automated algorithms to detect and flag “for kids” content. Google appears to have provided little useful information to content providers on how to comply, and confusion now seems rampant. Some YouTubers also fear that the automated flagging of content is a blunt instrument “based on oblique overly broad criteria.” Also, Google declared that content designated as “for kids” will no longer serve personalized ads.
The settlement and Google’s implementation of it are designed to assume the least risk for Google while maximizing its monetary benefits. Google will start limiting the data it collects on “made for kids” content – something it should have done long ago, obviously. As a result, Google said, it will no longer show personalized ads on this content. However, the incentives for content creators to self-identify as “for kids” are not great, given that disabling behavioral ads “may significantly reduce your channel’s revenue.” Although Google declares that it is “committed to help you with this transition,” it has shown no willingness to reduce its own significant cut of the ad revenue when it comes to children’s content. While the incentives for child-directed content creators to mislabel their content are high, and equally high for Google to encourage them in this subterfuge, the consequences for non-compliance now rest squarely with content creators alone.

Let’s be clear here. Google should comply with COPPA as soon as possible where content is clearly child-directed. Google has already developed a robust set of safeguards and policies on YouTube Kids to protect children from advertising for harmful products and from exploitative influencer marketing. It should apply the same protections to all child-directed content, regardless of which YouTube platform kids are using.

When CCFC and CDD filed our COPPA complaint in 2018, we focused on how Google was shirking its responsibilities under the law by denying that portions of YouTube were child-directed (and thus governed by COPPA). The channels we cited in our complaint were not gray-area channels that might be child-attractive but also draw lots of teen and adult viewers. Our complaint discussed such channels as Little Baby Bum, ChuChu TV Nursery Rhymes and Kids Songs, and Ryan’s Toy Reviews. We did not ask the FTC to investigate or sanction any channel owners, because Google makes the rules on YouTube, particularly with regard to personal data collection and use, and therefore it was the party that chose to violate COPPA. (Many independent content creators concur indirectly when they say that they should not be held accountable under COPPA. They maintain that they actually don’t have access to detailed audience data and do not know whether their YouTube audience is under 13 at all. Google structures what data they have access to.)

For other content, in the so-called “gray zone”—such as content for general audiences that children under 13 also watch, or content that cannot be easily classified—we need more information about Google’s internal data practices. Do content creators have detailed access to demographic audience data, and are they thus accountable, or does Google hold on to that data? Should accountability for COPPA compliance be shifted more appropriately to Google? Can advertising restrictions be applied at the user level, once a user is identified as likely to be under thirteen, regardless of what content they watch? We need Google to open up its internal processes, and we are asking the FTC to develop rules that share accountability appropriately between Google and its content creators.

The Google settlement has been a significant victory for children and their parents. For the first time, Google has been forced to take COPPA seriously, a U.S. law that was passed by Congress to express the will of the majority of the electorate.
Of course, the FTC is also complicit in this problem, as it waited six years to enforce the updated law. It watched Google’s COPPA violations increase over time, allowing a monster to grow. What’s worse, the quality of the kids’ content on YouTube was, to most observers and particularly to parents, more than questionable, and at times it even placed children seriously at risk. What parents saw in the offering for their children was quantity rather than quality. Now, however, after six years, the FTC is finally requiring Google and creators to abide by the law. Just like that.

Still, this change should not come as a complete surprise to content creators. We sympathize with the independent YouTube creators and understand their frustration, but they have been complicit in this arrangement as well. The children’s digital entertainment industry has discussed compliance with COPPA for years behind closed doors, and many knew that YouTube was in non-compliance with COPPA.

The FTC has failed to address the misinformation that Google is propagating among content creators, its recent guidance notwithstanding. Moreover, the FTC has allowed Google to settle its COPPA violation by developing a solution that lets Google abdicate any responsibility for COPPA compliance, while continuing to maximize revenue. It’s time for the FTC to study Google’s data practices and capabilities more closely, and to put the onus squarely on Google to comply with COPPA. As a result of the current COPPA proceedings, rules must be put in place to hold platforms like YouTube accountable.
• SUBJECT: CCFC and CDD statement on today’s YouTube inquiry by Senator Markey

Campaign for a Commercial-Free Childhood and the Center for Digital Democracy, whose complaint led to the FTC settlement which requires YouTube to change its practices to comply with federal children’s privacy law, applaud Senator Ed Markey for writing to Google to inquire about YouTube’s child-directed advertising practices.

“To its credit, Google has developed a robust set of safeguards and policies on YouTube Kids to protect children from advertising for harmful products and exploitative influencer marketing. Now that Google has been forced to abandon the fiction that the main YouTube platform is exclusively for ages 13 and up, it should apply the same protections on all child-directed content, regardless of which YouTube platform kids are using.” Josh Golin, Campaign for a Commercial-Free Childhood

“Google should treat all children fairly on YouTube and apply the same set of advertising and content safeguards it has especially developed for YouTube Kids. When young people view child-directed programming on YouTube, they should also be protected from harmful and unfair practices such as ‘influencer’ marketing, exposure to ‘dangerous’ material, violent content, and exposure to food and beverage marketing.” Jeff Chester, Center for Digital Democracy
  • Press Release

    Grading Digital Privacy Proposals in Congress

    Which digital privacy proposals in Congress make the grade?

Subject: Which digital privacy proposals in Congress make the grade?

Nov. 21, 2019

Contact: David Rosen, drosen@citizen.org, (202) 588-7742; Susan Grant, sgrant@consumerfed.org, (202) 387-6121; Caitriona Fitzgerald, fitzgerald@epic.org, (617) 945-8409; Katharina Kopp, kkopp@democraticmedia.org, (202) 836-4621

Campaign for a Commercial-Free Childhood · Center for Digital Democracy · Color of Change · Consumer Federation of America · Consumer Action · Electronic Privacy Information Center · Parent Coalition for Student Privacy · Privacy Rights Clearinghouse · Public Citizen · U.S. PIRG

NOTE TO REPORTERS

Grading Digital Privacy Proposals in Congress

When it comes to digital privacy, we’re facing an unprecedented crisis. Tech giants are spying on our families and selling the most intimate details about our lives for profit. Bad actors, both foreign and domestic, are targeting personal data gathered by U.S. companies – including our bank details, email messages and Social Security numbers. Algorithms used to determine eligibility for jobs, housing, credit, insurance and other life necessities are having disparate, discriminatory impacts on disadvantaged groups. We need a new approach.

Consumer, privacy and civil rights groups are encouraged by some of the bills that recently have been introduced in Congress, many of which follow recommendations in the groups’ Framework for Comprehensive Privacy Protection and Digital Rights in the United States. The framework calls for baseline federal privacy legislation that:

- Has a clear and comprehensive definition of personal data;
- Establishes an independent data protection agency;
- Establishes a private right of action allowing individuals to enforce their rights;
- Establishes individual rights to access, control and delete data;
- Puts meaningful privacy obligations on companies that collect personal data;
- Requires the establishment of algorithmic governance to advance fair and just data practices;
- Requires companies to minimize privacy risks and minimize data collection;
- Prohibits take-it-or-leave-it or pay-for-privacy terms;
- Limits government access to personal data; and
- Does not preempt stronger state laws.

Three bills attained the highest marks in the recent Privacy Legislation Scorecard compiled by the Electronic Privacy Information Center (EPIC):

- The Online Privacy Act (H.R. 4978), introduced by U.S. Reps. Anna Eshoo (D-Calif.) and Zoe Lofgren (D-Calif.), takes a comprehensive approach and is the only bill that calls for a U.S. Data Protection Agency. The bill establishes meaningful rights for individuals and clear obligations for companies. It does not preempt state law, but it lacks explicit anti-preemption language, which would make it more effective.
- The Mind Your Own Business Act (S. 2637), introduced by U.S. Sen. Ron Wyden (D-Ore.), requires companies to assess the impact of the automated systems they use to make decisions about consumers and how well their data protection mechanisms are working. It has explicit anti-preemption language and holds companies accountable when they fail to protect privacy. The private right of action should be broader, and the bill needs clear limits on data uses.
- The Privacy Rights for All Act (S. 1214), introduced by U.S. Sen. Ed Markey (D-Mass.), has important provisions minimizing data collection and delinking user identities from collected data, and prohibits bias and discrimination in automated decision-making. It also includes a strong private right of action and bans forced arbitration for violations. It does not preempt state law, but it lacks explicit anti-preemption language, which would make it more effective.

Two bills are plainly anti-privacy. The Information Transparency & Personal Data Control Act (H.R. 2013), introduced by U.S. Rep. Suzan DelBene (D-Wash.), falls woefully short. It provides few protections for individuals, contains overly broad exemptions and preempts stronger state laws. The Balancing the Rights of Web Surfers Equally and Responsibly (BROWSER) Act (S. 1116), introduced by U.S. Sen. Marsha Blackburn (R-Tenn.), is based on the old, ineffective take-it-or-leave-it terms-of-use model, does not allow agency rulemaking, is weak on enforcement and preempts state laws. Future federal privacy bills must make the grade.

Additional privacy bills are expected to be introduced by U.S. Sen. Maria Cantwell (D-Wash.) and U.S. Rep. Jan Schakowsky (D-Ill.). Separately, U.S. Sens. Richard Blumenthal (D-Conn.), Roger Wicker (R-Miss.) and Josh Hawley (R-Mo.) may release their own bills. These leaders should strive to meet the standards that the framework lays out.

Baseline privacy legislation must not preempt stronger state protections and laws – such as the California Consumer Privacy Act that takes effect in 2020, biometric data protection laws such as those in Illinois and Texas, and the data breach notification laws that exist in every state. States must be allowed to continue serving as “laboratories of democracy,” pioneering innovative new protections to keep up with rapidly changing technologies. In addition, federal privacy legislation must include a strong private right of action – a crucial tool consumers need to enforce their rights and change the behavior of powerful corporations – and establish safeguards against data practices that lead to unjust, unfair, manipulative and discriminatory outcomes.

For more information, see these fact sheets. Please contact any of the individuals listed above to speak with an expert.

###
• Big Tech companies are lobbying to undermine the only federal online privacy law in the US – one which protects children – and we need your help to stop them. Along with the Campaign for a Commercial-Free Childhood (CCFC), we ask for your help to urge the Federal Trade Commission to strengthen—not weaken—the Children’s Online Privacy Protection Act (COPPA). Please sign this petition, because your voice is essential to a future where children’s privacy is protected from marketers and others. Take action to protect the privacy of children now and in the future! Commercialfreechildhood.org/coppa
• Most parents can tell you the most popular website for kids is YouTube. But for years, while Google made millions luring children to YouTube, vacuuming up their sensitive information, and using it to target them with ads, Google told the Big Lie: “YouTube is not for kids. It says so right in our terms of service.” That has now changed, thanks to the advocacy of Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) and the support of a coalition of advocacy groups.

Google deliberately developed YouTube as the leading site for children, with programming and marketing strategies designed to appeal directly to kids. But it ignored the only federal law addressing commercial privacy online—the Children’s Online Privacy Protection Act (COPPA). Google’s behavior sent a message that a corporation as powerful and well-connected as Google is above the law—even laws designed to protect young people.

CCFC, CDD, and our attorneys at the Institute for Public Representation (IPR) at Georgetown University Law Center, with a broad coalition of consumer, privacy, public health and child rights groups, began filing complaints at the Federal Trade Commission (FTC) in 2015 concerning Google’s child-directed practices on YouTube and the YouTube Kids app. We kept up the pressure on the FTC, with the help of Congress and the news media. After we filed a complaint in April 2018 describing YouTube’s ongoing violations of COPPA, the FTC, under the leadership of Chairman Joe Simons, finally decided to take action. The result was the FTC’s September decision—which in many ways is both historic and a major step in the direction of protecting children online. Google was fined $170 million for its violations of children’s privacy, a record amount for a COPPA-connected financial sanction. The FTC’s action also implemented important new policies protecting children, most of which will go into effect by January 2020:

Children will no longer be targeted with data-driven marketing and advertising on YouTube programming targeted to kids: This is the most important safeguard. Google will no longer conduct personalized “behavioral” marketing on YouTube programming that targets children. In other words, it will stop the insidious practice of using kids’ sensitive information to target them with ads tailored for their eyes. Google will require video producers and distributors to self-identify that their content is aimed at kids, and it will also employ its own technology to identify videos that target young audiences.

Google will substantially curtail the data it collects from children watching YouTube videos: Since the main YouTube site has no age gate, Google will limit data collection and tracking of viewers’ identities for anyone watching child-directed content there to only the data “needed to support the operation of the service.” The same limitation will apply to videos on YouTube Kids.

Google is taking steps to drive kids from the main YouTube site to YouTube Kids, where parental consent is required: Google launched the YouTube Kids app in 2015. But the app never rivaled the main YouTube platform’s hold on children, and it was plagued with a number of problems, such as inadequate screening of harmful content. As a result of the FTC investigation, Google has launched a YouTube Kids website, and when kids watch children’s content on the main YouTube site, they get a pop-up suggesting they visit YouTube Kids. Google says it will more effectively curate different programming that will appeal to kids aged 4 through 12. This is a positive development because, while a number of concerns remain about YouTube Kids, children are better off using the Kids site rather than the Wild West of the main YouTube platform.

Google created a $100 million fund for “quality kids, family and educational content”: CCFC and CDD had proposed this, and we are gratified Google acknowledged it bears responsibility to support programming that enriches the lives of children. This is to be a three-year program to spur “the creation of thoughtful, original children’s content.”

Google has made changes to make YouTube a “safer platform for children”: The company is proactively promoting “quality” children’s programming on YouTube by revising the algorithm used to make recommendations. It is also not permitting comments and notifications on child-directed content. Google has told CCFC and CDD it will make these changes regarding data collection and targeted marketing worldwide.

Other questions remain to be answered. How will Google treat programming classified as “family viewing”—will it exempt such content from the new data targeting safeguards? (It should not be permitted to do so.) Will the new $100 million production fund commit to supporting child-directed non-commercial content (instead of serving as a venture for Google to expand its marketing to kids)? Will Google ensure that its other child-directed commercial activities—such as its Play Store—also reflect the new safeguards the company has adopted for YouTube? Google also permits the targeting of young people via “influencers,” including videos where toys and other products are unboxed. When such videos are child-directed, Google should put an end to them.

CCFC, CDD and our allies intend to play a proactive role in holding Google, its programmers, advertisers and the FTC accountable to make sure these new policies are implemented effectively. Our work in bringing about this change, and the work we will do to make other companies follow suit, is part of our commitment to ensuring that young people around the world grow up in a media environment that respects and promotes their health, privacy, and well-being.
  • Press Release

    Will the FTC Weaken Children’s Privacy Rules?

Invited Advocates Raise Concerns About Upcoming COPPA Workshop, Plans to Undermine Federal Protections for Kids; October 7 D.C. Lineup Dominated by Tech Industry Supporters

Contact: David Monahan, CCFC (david@commercialfreechildhood.org; 617-896-9397); Jeff Chester, CDD (jeff@democraticmedia.org; 202-494-7100)

Will the FTC Weaken Children’s Privacy Rules? Invited Advocates Raise Concerns About Upcoming COPPA Workshop, Plans to Undermine Federal Protections for Kids; October 7 D.C. Lineup Dominated by Tech Industry Supporters

WHAT: The Future of the Children’s Online Privacy Protection Act Rule (COPPA): An FTC Workshop
WHEN: October 7, 2019, 9:00 am ET
WHERE: Constitution Center, 400 7th St SW, Washington, DC

WORKSHOP PRESENTERS FOR CAMPAIGN FOR A COMMERCIAL-FREE CHILDHOOD (CCFC) AND CENTER FOR DIGITAL DEMOCRACY (CDD):

THE CHALLENGE: In 2012, the FTC approved new safeguards to protect children’s privacy in the digital era, heeding the advice of child advocates, consumer groups, privacy experts and health professionals. But now the Commission has called for comments on COPPA three years before a new review is mandated by statute. The questions posed by the Commission, as well as public comments made by FTC staff, make privacy advocates wary that the FTC’s goal is to roll back COPPA safeguards rather than strengthen protections for children. Concerns about the FTC creating new loopholes or supporting industry calls to weaken the rules are heightened by the FTC’s speaker list for this workshop, replete with tech and marketing companies and their lawyers and lobbyists, with just a few privacy and children’s advocates at the table. The advocates are also concerned that the FTC is contemplating this action just weeks after its most significant COPPA enforcement action to date—requiring major changes to Google’s data collection practices on YouTube—a move that could result in rules being changed before those new practices have even been implemented. Children and families need increased COPPA enforcement, not weaker rules. The key problems, the advocates note, are the lack of enforcement of the law by the FTC; the failure of the agency to protect children from unfair marketing practices, such as influencers; and the need to maintain the strongest possible safeguards—whether in the home, at school or on mobile devices.

Speakers at the workshop include:

Josh Golin, Executive Director, CCFC, will participate in a panel entitled Scope of the COPPA Rule.
Katharina Kopp, Ph.D., Deputy Director, Director of Policy, CDD, will participate in a panel entitled Uses and Misuses of Persistent Identifiers.
Laura M. Moy, Associate Professor of Law, Director, Communications & Technology Law Clinic, Georgetown University Law Center, will participate in a panel entitled State of the World in Children’s Privacy.

Josh, Katharina, and Laura are available for questions in advance of the workshop, and will also be available to speak with press on site.

See video of the Future of COPPA Workshop here: https://www.ftc.gov/news-events/audio-video/video/future-coppa-rule-ftc-...

###
• CDD, EPIC, USPIRG Opposition to Google/DoubleClick "Big Data" Merger

These 2007 FTC filings exemplify groups calling for antitrust, privacy, and other safeguards for the digital marketplace

Working closely with the Electronic Privacy Information Center (epic.org) and US PIRG, CDD led a campaign to oppose the acquisition of DoubleClick by Google. CDD opposed the deal on privacy, consumer protection and competition grounds. We all foresaw what would happen if Google were allowed to swallow a leading digital marketing giant: more data collection, industry consolidation, and a weakening of consumer and privacy rights. It all happened, of course, in part because the FTC has never been able to effectively oversee this marketplace. Here are two of the filings from this case.
    Jeff Chester
• I played a key role in helping get the Children’s Online Privacy Protection Act (COPPA) passed by Congress in 1998 (when I was executive director of the Center for Media Education). Since then, I have tried to ensure that the country’s only federal law addressing commercial privacy online was taken seriously. That’s why it has been especially egregious to witness Google violating COPPA for many years as it deliberately developed YouTube into the leading site for children. Google disingenuously claimed in its terms of service that YouTube was only meant for those 13 and older, while it simultaneously unleashed programming and marketing strategies designed to appeal directly to kids. Google’s behavior sent a message that any powerful and well-connected corporation could ignore U.S. privacy law, even when that law was specifically designed to protect young people.

In collaboration with our colleagues at the Campaign for Commercial-Free Childhood (CCFC), our attorneys at the Institute for Public Representation (IPR) at Georgetown University Law Center, and a broad coalition of consumer, privacy, public health and child rights groups, we began filing complaints at the FTC in 2015 concerning Google’s child-directed practices (on YouTube, its YouTube Kids app, and elsewhere). We also told top officials at the commission that Google was not abiding by COPPA, and repeatedly provided them documentation of Google’s child-directed business operations. CCFC, CDD and IPR kept up the pressure on the FTC, in Congress and with the news media (see attached, for example). For a variety of reasons, the FTC, under the leadership of Chairman Joe Simons, finally decided to take action. The result was last week’s decision—which in many ways is both historic and highly positive. Google was fined $170 million for its violations of children’s privacy, a record amount in terms of previous COPPA-connected financial sanctions. The FTC’s action also implemented important new policies protecting children:

Children will no longer be targeted with data-driven marketing and advertising on YouTube programming targeted to kids: This is the most important safeguard. Google announced that starting around January 2020, there would no longer be any form of personalized “behavioral” marketing permitted on YouTube’s programming that targets children. The “Official” YouTube blog post explained that Google “will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this content entirely….”

Google will require video producers and distributors to self-identify that their content is aimed at kids; it also committed to “use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.” Google also explained that child-directed programming on YouTube will receive an additional safeguard—it won’t permit any personalized targeting on its child-directed content.
Google committed to make substantial investments in its YouTube Kids service: Google launched the YouTube Kids “app” in 2015, claiming it was “the first Google product built from the ground up with little ones in mind.” But the app never rivaled the main YouTube platform’s hold on children, and it was plagued with a number of problems (such as harmful content). Now, as a result of the FTC investigation, Google announced that it will bring “the YouTube Kids experience to the desktop,” increase its promotion of the service to parents, and more effectively curate different programming that will appeal to more young people—with new tiers of content suitable for “Preschool (ages 4 & under); Younger (ages 5-7); and Older (ages 8-12).”

Google created a $100 million fund for “quality kids, family and educational content”: This is another proposal CCFC and CDD made, and we are gratified Google acknowledged it bears responsibility to support programming that enriches the lives of children. This is to be a three-year program designed for “the creation of thoughtful, original children’s content on YouTube and YouTube Kids globally.”

Google has made changes to make YouTube a “safer platform for children”: The company is proactively promoting “quality” children’s programming by revising the algorithm used to make recommendations. It is also not permitting comments and notifications on its YouTube child-directed content.

There are questions that still need to be answered about how Google will implement these new policies. For example, will the company prohibit the data targeting of children on YouTube worldwide? (It should.) How will it treat programming classified as “family viewing”—will it exempt such content from the new data targeting safeguards? (It should not be permitted to do so.) Will the new $100 million production fund commit to supporting child-directed non-commercial content (instead of serving as a venture investment strategy for Google to expand its marketing-to-kids plans)? Will Google ensure that its other child-directed commercial activities—such as its Play Store—also reflect the new safeguards the company has adopted for YouTube? Google also targets young people via so-called “influencers,” including videos where toys and other products are “unboxed.” Google needs to declare such content child-directed (and should refrain from these practices as well).

CCFC, CDD and our allies intend to play a proactive role in holding Google, its programmers, advertisers and the FTC accountable to make sure that these new policies are implemented effectively. These new FTC-forced changes to how Google serves children are part of our commitment to ensuring that young people around the world grow up in a media environment that respects and promotes their health, privacy, and well-being.
    Jeff Chester
• Contact: Jeff Chester, CDD (jeff@democraticmedia.org; 202-494-7100); David Monahan, CCFC (david@commercialfreechildhood.org; 617-896-9397)

Advocates Who Filed the Privacy Complaint Against Google/YouTube Laud Improvements, But Say FTC Settlement Falls Far Short

BOSTON, MA & WASHINGTON, DC—September 4, 2019—The advocates who triggered the Federal Trade Commission’s (FTC) investigation into YouTube’s violations of the Children’s Online Privacy Protection Act (COPPA) say the FTC’s settlement with Google will likely significantly reduce behavioral marketing to children on YouTube, but doesn’t do nearly enough to ensure children will be protected or to hold Google accountable.

In April 2018, Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD), through their attorneys at Georgetown Law’s Institute for Public Representation (IPR), filed an FTC complaint detailing YouTube’s COPPA violations. Twenty-one other privacy and consumer groups signed on to CCFC and CDD’s complaint, which detailed how Google profits by collecting personal information from kids on YouTube, without first providing direct notice to parents and obtaining their consent as required by law. Google uses this information to target advertisements to children across the internet and across devices, in clear violation of COPPA.

Today, the FTC and the New York Attorney General announced a settlement with Google, fining the company $170 million. The settlement also “requires Google and YouTube to develop, implement, and maintain a system that permits channel owners to identify their child-directed content on the YouTube platform so that YouTube can ensure it is complying with COPPA.” Content creators will be asked to disclose if they consider their videos to be child-directed; if they do, no behavioral advertising will be served to viewers of those videos.

“We are pleased that our advocacy has compelled the FTC to finally address YouTube’s longstanding COPPA violations and that there will be considerably less behavioral advertising targeted to children on the number one kids’ site in the world,” said CCFC’s Executive Director Josh Golin. “But it’s extremely disappointing that the FTC isn’t requiring more substantive changes or doing more to hold Google accountable for harming children through years of illegal data collection. A plethora of parental concerns about YouTube – from inappropriate content and recommendations to excessive screen time – can all be traced to Google’s business model of using data to maximize watch time and ad revenue.”

In a July 3, 2019 letter to the FTC, the advocates specifically warned that shifting the burden of COPPA compliance from Google and YouTube to content creators would be ineffective. The letter noted many children’s channels were unlikely to become COPPA compliant by turning off behavioral advertising, since Google warns that turning off these ads “may significantly reduce your channel’s revenue.” The letter also detailed Google’s terrible track record of ensuring COPPA compliance on its platforms; a 2018 study found that 57% of apps in the Google Play Store’s Designed for Families program were violating COPPA, despite Google’s policy that apps in the program must be COPPA compliant. And as Commissioner Rebecca Slaughter wrote in her dissent, many children’s content creators are not U.S.-based and therefore are unlikely to be concerned about FTC enforcement.
“We are gratified that the FTC has finally forced Google to confront its longstanding lie that it wasn’t targeting children on YouTube,” said CDD’s executive director Jeff Chester, who helped spearhead the campaign that led to the 1998 passage of COPPA. “However, we are very disappointed that the Commission failed to penalize Google sufficiently for its ongoing violations of COPPA and failed to hold Google executives personally responsible for the roles they played. A paltry financial penalty of $170 million—from a company that earned nearly $137 billion in 2018 alone—sends a signal that if you are a politically powerful corporation, you do not have to fear any serious financial consequences when you break the law. Google made billions off the backs of children, developing a host of intrusive and manipulative marketing practices that take advantage of their developmental vulnerabilities. More fundamental changes will be required to ensure that YouTube is a safe and fair platform for young people.”

Echoing Commissioner Rohit Chopra’s dissent, the advocates noted that unlike smaller companies sanctioned by the FTC, Google was not forced to pay a penalty larger than its “ill-gotten gains.” In fact, with YouTube earning a reported $750 million annually from children’s content alone, the $170 million fine amounts to less than three months of advertising revenue from kids’ videos. With a maximum fine of $41,484 per violation, the FTC easily could have sought a fine in the tens of billions of dollars.

“I am pleased that the FTC has made clear that companies may no longer avoid complying with COPPA by claiming their online services are not intended for use by children when they know that many children in fact use their services,” said Angela Campbell, Director Emeritus of IPR’s Communications and Technology Clinic at Georgetown Law, which researched and drafted the complaint. Campbell, currently chair of CCFC’s Board, served as lead counsel to CCFC and CDD on the YouTube and other complaints alleging COPPA violations. She, along with Chester, was responsible for filing an FTC complaint in 1996 against a child-directed website, which led to Congress’s passage of COPPA in 1998. COPPA gave the FTC expanded authority to implement and enforce the law, for example by including civil penalties.

About the proposed settlement, Campbell noted: “It’s disappointing that the FTC has not fully used its existing authority to hold Google and YouTube executives personally liable for adopting and continuing to utilize a business model premised on ignoring children’s privacy protection, to adopt a civil penalty substantial enough to deter future wrongdoing, or to require Google to take responsibility for ensuring that children’s content on YouTube platforms complies with COPPA.”

On the heels of a sweetheart settlement with Facebook, the advocates said the deal with Google was further proof the FTC wasn’t up to the task of protecting consumers’ privacy. Said Campbell, “I support Commissioner Slaughter’s call for state attorneys general to step up and hold Google accountable.”
Added Chester, “The commission’s inability to stop Google’s cynically calculated defiance of COPPA underscores why Congress must create a new consumer watchdog that will truly protect Americans’ privacy.”

Organizations that signed on to the CCFC/CDD 2018 FTC complaint were Berkeley Media Studies Group; Center for Media Justice; Common Sense; Consumer Action; Consumer Federation of America; Consumer Federation of California; Consumers Union, the advocacy division of Consumer Reports; Consumer Watchdog; Corporate Accountability; Defending the Early Years; Electronic Privacy Information Center (“EPIC”); New Dream; Obligation, Inc.; Parent Coalition for Student Privacy; Parents Across America; Parents Television Council; Privacy Rights Clearinghouse; Public Citizen; The Story of Stuff Project; TRUCE (Teachers Resisting Unhealthy Childhood Entertainment); and USPIRG.

###
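
The penalty comparisons in the release above are straightforward arithmetic; the following is a minimal back-of-envelope sketch in Python. The revenue, fine, and per-violation figures are those reported above; the one-million violation count is purely an illustrative assumption, since no official count was cited.

    # Back-of-envelope check of the penalty comparisons in the release above.
    annual_kids_ad_revenue = 750_000_000   # reported YouTube kids'-content ad revenue, USD/year
    fine = 170_000_000                     # FTC / New York AG settlement amount, USD
    max_penalty_per_violation = 41_484     # COPPA statutory maximum per violation, USD

    # Months of kids' ad revenue the fine represents (about 2.7, i.e., "less than three")
    months_of_revenue = fine / (annual_kids_ad_revenue / 12)
    print(f"Fine equals about {months_of_revenue:.1f} months of kids' ad revenue")

    # Illustrative assumption only: one million violations (e.g., one per affected child)
    assumed_violations = 1_000_000
    theoretical_max = max_penalty_per_violation * assumed_violations
    print(f"Statutory maximum at that count: ${theoretical_max / 1e9:.1f} billion")

At one million assumed violations the statutory maximum alone reaches roughly $41.5 billion, which is the basis for the “tens of billions” comparison.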
  • Press Statement: Google YouTube FTC COPPA Settlement

Statement of Katharina Kopp, Ph.D., Deputy Director, Center for Digital Democracy, August 30, 2019

It has been reported that Google has agreed to pay between $150 million and $200 million to resolve an FTC investigation into YouTube over alleged violations of a children’s privacy law. A settlement amount of $150–200 million would be woefully low, considering the egregious nature of the violation, how much Google profited from violating the law, and Google’s size and revenue. Google’s unprecedented violation requires an unprecedented FTC response. A small amount like this would effectively reward Google for engaging in massive and illegal data collection without any regard for children’s safety.

In addition to assessing substantial civil penalties, the FTC must enjoin Google from committing further violations of COPPA and impose effective means for monitoring compliance; the FTC must also impose a 20-year consent decree to ensure Alphabet Inc. acts responsibly when it comes to serving children and parents.

------

Background: In April 2018, the Center for Digital Democracy (CDD) and the Campaign for a Commercial-Free Childhood (CCFC), through their attorneys at Georgetown Law’s Institute for Public Representation (IPR), filed an FTC complaint detailing YouTube’s COPPA violations; twenty-one other privacy and consumer groups signed on to the complaint, which is described in the release above.
  • Blog

    CDD Memo to the FTC on Facebook Consent Decree Violations (2013)

    FTC has long ignored how the market operates; it still does in 2019

  • News

    Groups Join Legal Battle to Fight Ineffective FTC Privacy Decision on Facebook

    Statements from Campaign for Commercial-Free Childhood, CDD, Color of Change, Common Sense Media, Consumer Action, Consumer Federation of America, Open Markets, Public Citizen, USPIRG

    FOR RELEASE July 26, 2019

Consumer Privacy Organizations to Challenge Facebook Settlement

Statement from Groups: “The Settlement Fails to Provide Meaningful Relief to Facebook Users”

WASHINGTON, DC – Many of the nation’s leading consumer privacy organizations are urging a federal court in Washington, DC, to consider public comments before finalizing a proposed settlement between the Federal Trade Commission and Facebook.

“The Facebook settlement is both historic and controversial. Many believe the FTC failed to establish meaningful safeguards for consumer privacy. We believe the court overseeing the case should consider the views of interested parties,” said Marc Rotenberg, President of the Electronic Privacy Information Center.

Under the terms of the settlement, Facebook will pay a record-breaking $5 billion fine to the United States Treasury, but there will be no significant changes in Facebook’s business practices, and the FTC will release all pending complaints against the company. Typically, the public would have an opportunity to comment on a proposed FTC settlement before the agency finalizes the deal, but no such opportunity was provided in the Facebook settlement.

Many of the organizations joining the effort have also filed detailed complaints with the Federal Trade Commission alleging that Facebook has violated privacy laws, including the Children’s Online Privacy Protection Act. A Freedom of Information Act case revealed that more than 26,000 complaints against Facebook are currently pending at the Commission. In a similar case in 2012, the privacy group Consumer Watchdog challenged the FTC’s settlement with Google regarding the Safari hack. In other consumer privacy cases, courts have created opportunities for interested parties to file papers and be heard prior to a final determination on a proposed settlement.

The case is In the Matter of Facebook, No. 19-cv-2184 (D.D.C., filed July 24, 2019). EPIC filed with the court today: https://epic.org/2019/07/epic-challenges-ftc-facebook-s.html

Statements of Support:

Brandi Collins-Dexter, Senior Campaign Director, Color of Change: “Despite the large price tag, the FTC settlement provides no meaningful changes to Facebook’s structure or financial incentives. It allows Facebook to continue to set its own limits on how much user data it can collect, and it gives Facebook immunity for unspecified violations. The public has a right to know what laws Facebook violated. Corporations should face consequences for violating the public trust, not be given a rubber stamp to carry out business as usual. This settlement limits the ability of Black users to challenge Facebook’s misuse of their data and force real accountability, which is why the courts must review the fairness of this settlement.”

Susan Grant, Director of Consumer Protection and Privacy, Consumer Federation of America: “The FTC’s settlement with Facebook sells consumers short by failing to change the company’s mass surveillance practices and wiping away other complaints that deserved to be addressed. It needs to be stronger to truly protect our privacy.”

Linda Sherry, Director of National Priorities, Consumer Action: “The FTC’s pending Facebook settlement does not take adequate measures to limit the collection and sharing of consumers’ personal information, but appears to provide the company with extensive protections from even future violations.
Consumer Action respectfully urges the court to consider positions from interested parties who have related complaints filed with the FTC, to ensure that the most fair and comprehensive agreement is approved.”

Sally Hubbard, Director of Enforcement Strategy, Open Markets: “The FTC’s settlement is woefully insufficient in light of Facebook’s persistent privacy violations. The fine is a mere cost of doing business that makes breaking the law worth it for Facebook. Remedies must curb Facebook’s widespread data collection and promote competition. Otherwise Facebook will continue to fortify its monopoly power by surveilling users both on Facebook and off, and users can’t vote with their feet when Facebook violates their privacy. The public must have the opportunity to be heard on this negligent settlement.”

Robert Weissman, President, Public Citizen: “The FTC’s settlement amounts to Facebook promising yet again to adhere to its own privacy policy, while reserving the right to change that policy at any time. That approach will fail to protect users’ privacy. The court should reject the settlement and order the FTC to try again and do better.”

Josh Golin, Executive Director, Campaign for Commercial-Free Childhood: “Facebook has been exploiting kids for years, and this proposed settlement is essentially a get-out-of-jail-free card. It potentially extinguishes our children’s privacy complaints against Facebook, but offers absolutely no protections for kids’ privacy moving forward. It also sweeps under the rug a complaint detailing how Facebook knowingly and intentionally tricked kids into spending money on mobile games over several years, sometimes to the tune of thousands of dollars per child.”

James P. Steyer, CEO and Founder of Common Sense Media: “On behalf of families across the country, Common Sense fully stands behind EPIC’s motion. The proposed settlement is a ‘get out of jail free’ card for Facebook, purporting to absolve Facebook not only of liability for privacy abuses but for other – completely unaddressed and unexplored – Section 5 abuses. One such abuse, which the FTC is aware of and which court documents confirm, includes tricking kids into making in-app purchases that have put families out hundreds and even thousands of dollars – something the company has yet to meaningfully change its policies on to this day. Such a broad release is unprecedented, unjustified and unacceptable.”

Edmund Mierzwinski, Senior Director for Federal Consumer Programs, U.S. PIRG: “This laughable $5 billion settlement with the category-killer social media giant Facebook makes the much smaller Equifax settlement for sloppy security look harsh. Facebook intentionally collects and shares an ever-growing matrix of information about consumers, their friends and their interests in a mass surveillance business model. It routinely changes its previous privacy promises without consent. It doesn’t adequately audit its myriad business partners. The FTC essentially said to Facebook: ‘Pay your parking ticket but don’t ever change. Your fast-and-loose practices are okay with 3 of the 5 of us.’ Not changing those practices will come back to haunt the FTC, consumers and the world.”

Jeff Chester, Executive Director, Center for Digital Democracy: “The 3-2 Facebook decision by the FTC leaves millions of Americans vulnerable to all the problems unleashed by the Cambridge Analytica scandal.
The commission adopted a woefully inadequate remedy that does nothing to stem the fundamental loss of user privacy that led to our original 2009 complaint.”