CDD

Reports

  • Regulating the Digital Obesogenic Ecosystem

    Lessons from the 20-year Effort to Pass the United Kingdom’s Online Ban on Unhealthy Food and Beverage Advertising

    Regulating the Global Online Junk Food Marketing System: The UK Experience

    In a March 2023 report, the World Obesity Federation issued a dire prognosis and warning: “The majority of the global population (51%, or over 4 billion people) will be living with either overweight or obesity by 2035 if current trends prevail,” based on the latest figures. The greatest and most rapid increase is expected among young people between the ages of 5 and 19. Yet, despite these alarming trends, food and beverage companies around the world continue to push ads for junk food, sugar-sweetened sodas, and other harmful products to young people, using increasingly sophisticated and intrusive digital marketing campaigns, such as this one in Indonesia by McDonald’s.

    The Center for Digital Democracy’s 2021 report, Big Food, Big Tech and the Global Childhood Obesity Pandemic, described the far-reaching global digital media and marketing system that now targets children and teens across social media, gaming platforms, and mobile devices, and called for international advocacy efforts to address this threat. The World Health Organization and other international health bodies have urged nations to adopt strong policies to curb digital food marketing, and governments around the world have responded with a host of new restrictions in countries such as Chile, Mexico, Argentina, and Norway.

    Amid this growing momentum for regulation, the UK stands out as the country where some of the most comprehensive efforts to develop food marketing safeguards have been underway for more than two decades. These include a recently passed ban on online junk food advertising, which has triggered a powerful backlash from the industry, along with attempts to derail its implementation.
CDD’s latest report – Regulating the Obesogenic Ecosystem: Lessons from the 20-year Effort to Pass the United Kingdom’s Online Ban on Unhealthy Food and Beverage Advertising – offers a detailed case study of this campaign, chronicling the interplay among health advocates, researchers, government policymakers, and corporate lobbyists, and offering insights for other organizations around the world that are seeking to rein in the powerful global food/tech marketing complex.                        
    Kathryn C. Montgomery and Jeff Chester
  • Government Needs to Step up its Efforts to Provide Meaningful and Effective Regulation

    Under intensifying pressure from Congress and the public, top social media platforms popular with young people – Instagram, Snapchat, TikTok, Twitch, and YouTube – have launched dozens of new safety features for children and teens in the last year, according to a report from the Center for Digital Democracy (CDD). Researchers at CDD analyzed tech industry strategies to head off regulation in the wake of the 2021 Facebook whistleblower revelations and the rising tide of public criticism, Congressional hearings, and pressure from abroad. These companies have introduced a spate of new tools, default navigation systems, and AI software aimed at increasing safeguards against child sexual abuse material, problematic content, and disinformation, the report found. But tech platforms have been careful not to allow any new safety systems to interfere significantly with the advertising practices and business models that target the lucrative youth demographic. As a consequence, while industry spokespersons tout their concern for children, “their efforts to establish safeguards are, at best, fragmented and conflicted,” the report concludes. “Most of the operations inside these social media companies remain hidden from public view, leaving many questions about how the various safety protocols and teen-friendly policies actually function.” More attention should also be paid to advertisers, the report suggests, which have become a much more powerful and influential force in the tech industry in recent years.
Researchers offer a detailed description of the industry’s “brand safety” system – an “expanding infrastructure of specialized companies, technological tools, software systems, and global consortia that now operate at the heart of the digital economy, creating a highly sophisticated surveillance system that can determine instantaneously which content can be monetized and which cannot.” This system, which was set up to protect advertisers from having their ads associated with problematic content, could do much more to ensure better protections for children. “The most effective way to ensure greater accountability and more meaningful transparency by the tech industry,” the authors argue, “is through stronger public policies.” Pointing out that protection of children online remains a strong bipartisan issue, researchers identify a number of current legislative vehicles and regulatory proceedings – including bills that are likely to be reintroduced in the next Congress – which could provide more comprehensive protections for young people and rein in some of the immense power of the tech industry. “Tech policies in the U.S. have traditionally followed a narrow, piecemeal approach to addressing children’s needs in the online environment,” the authors note, “providing limited safeguards for only the youngest children, and failing to take into account the holistic nature of young people’s engagement with the digital media environment.” What is needed is a more integrated approach that protects privacy for both children and teens, along with safeguards that cover advertising, commercial surveillance, and child safety. Finally, the report calls for a strategic campaign that brings together the diverse constituencies working on behalf of youth in online media.
“Because the impacts of digital technologies on children are so widespread, efforts should also be made to broaden the coalition of organizations that have traditionally fought for children’s interests in the digital media to include groups representing the environment, civil rights, health, education, and other key stakeholder communities.”
    Jeff Chester
  • CDD's Jeff Chester contributed to the report's focus on online marketing practices, including the use of big data analytics, by alcoholic beverage companies

    (Excerpt from WHO release): Just as with tobacco, a global and comprehensive approach is required to restrict digital marketing of alcohol. Children and young people are especially at risk from the invasion of their social spaces by communication promoting alcohol consumption, normalising alcohol in all social contexts and linking it to the development of adult identities.

    “Current policies across the WHO European Region are insufficient to protect people from new formats of alcohol marketing. Age verification schemes, where they exist, are usually inadequate to protect minors from exposure to alcohol marketing. The fact that the vast majority of alcohol advertising online is “dark”, in the sense that it is only visible to the consumer to whom it is marketed, is challenging for policy makers, thus requiring new mechanisms and a new approach,” said Dr Carina Ferreira-Borges, Acting Director for Noncommunicable Diseases and Programme Manager for Alcohol and Illicit Drugs at WHO/Europe.

    Link to release
    Link to report
    Jeff Chester
  • Reports

    “Big Food” and “Big Data” Online Platforms Fueling Youth Obesity Crisis as Coronavirus Pandemic Rages

    New Report Calls for Action to Address Saturation of Social Media, Gaming Platforms, and Streaming Video with Unhealthy Food and Beverage Products

    The coronavirus pandemic triggered a dramatic increase in online use. Children and teens whose schools had closed relied on YouTube for educational videos, attended virtual classes on Zoom and Google Classroom, and flocked to TikTok, Snapchat, and Instagram for entertainment and social interaction. This constant immersion in digital culture has exposed them to a steady flow of marketing for fast foods, soft drinks, and other unhealthy products, much of it under the radar of parents and teachers. Food and beverage companies have made digital media ground zero for their youth promotion efforts, employing a growing spectrum of new strategies and high-tech tools to penetrate every aspect of young people’s lives.

    Our latest report, Big Food, Big Tech, and the Global Childhood Obesity Pandemic, takes an in-depth look at this issue. Below we outline just three of the many tactics the food industry is using to market unhealthy products to children and teens in digital settings.

    1. Influencer marketing – Travis Scott & McDonald’s

    McDonald’s enlisted rapper Travis Scott to promote the “Travis Scott Meal” to young people, featuring “a medium Sprite, a quarter pounder with bacon, and fries with barbecue sauce.” The campaign was so successful that some restaurants in the chain sold out of supplies within days of its launch. This and other celebrity endorsements have helped boost McDonald’s stock price, generated a trove of valuable consumer data, and triggered enormous publicity across social media.

    2. Gaming platforms – MTN DEW Amp Game Fuel & Twitch

    PepsiCo’s energy drink, MTN DEW Amp Game Fuel, is specifically “designed with gamers in mind.” Each 16 oz can of MTN DEW Amp Game Fuel delivers a powerful “vitamin-charged and caffeine-boosted” formula, whose ingredients of high fructose corn syrup, grape juice concentrate, caffeine, and assorted herbs “have been shown to improve accuracy and alertness.” The can itself features a “no-slip grip that mirrors the sensory design of accessories and hardware in gaming.” It is also “easier to open and allows for more uninterrupted game play.” To attract influencers, the product was featured on Twitch’s “Bounty Board,” a one-stop-shopping tool for “streamers,” enabling them to accept paid sponsorships (or “bounties”) from brands that want to reach the millions of gamers and their followers.

    3. Streaming and digital video – “It’s a Thing” campaign – Fanta

    Concerned that teens were “drinking less soda,” Coca-Cola’s Fanta brand developed a comprehensive media campaign to trigger “an ongoing conversation with teen consumers through digital platforms,” creating four videos based on the brand’s most popular flavors and targeting youth on YouTube, Hulu, Roku, Crackle, and other online video platforms. “From a convenience store dripping with orange flavor and its own DJ cat, to an 8-bit videogame-ified pizza parlor, the digital films transport fans to parallel universes of their favorite hangout spots, made more extraordinary and fantastic once a Fanta is opened.” The campaign, which was aimed at Black and Brown teens, also included the use of Snapchat’s augmented-reality technology to create immersive experiences, as well as promotional efforts on Facebook-owned Instagram, which generated more than half a million followers.
  • Reports

    Data Governance for Young People in the Commercialized Digital Environment

    A report for UNICEF's Global Governance of Children's Data Project

    TikTok (also known by its Chinese name, Dǒuyīn) has quickly captured the interest of children, adolescents, and young adults in 150 countries around the world. The mobile app enables users to create short video clips, customize them with a panoply of user-friendly special effects tools, and then share them widely through the platform’s vast social network. A recent industry survey of children’s app usage in the United States, the UK, and Spain reported that young people between the ages of 4 and 15 now spend almost as much time per day (80 minutes) on TikTok as they do on the highly popular YouTube (85 minutes). TikTok is also credited with helping to drive growth in children’s social app use by 100 percent in 2019 and 200 percent in 2020.

    Among the keys to its success is a sophisticated artificial intelligence (AI) system that offers a constant stream of highly tailored content and fosters continuous interaction with the platform. Using computer vision technology to reveal insights based on images, objects, and texts, along with natural-language processing, the app “learns” about an individual’s preferences, interests, and online behaviors so it can offer “high-quality and personalized” content and recommendations. TikTok also provides advertisers with a full spectrum of marketing and brand-promotion applications that tap into a vast store of user information, including not only age, gender, location, and interests, but also granular data sets based on constant tracking of behaviors and activities... TikTok is just one of many tech companies deploying these techniques… [full article attached; more from the series here]
  • Press Release

    USDA Online Buying Program for SNAP Participants Threatens Their Privacy and Can Exacerbate Racial and Health Inequities, Says New Report

    Digital Rights, Civil Rights and Public Health Groups Call for Reforms from USDA, Amazon, Walmart, Safeway/Albertson’s and Other Grocery Retailers - Need for Safeguards Urgent During Covid-19 Crisis

    Contact: Jeff Chester, jeff@democraticmedia.org, 202-494-7100; Katharina Kopp, kkopp@democraticmedia.org; https://www.democraticmedia.org/

    Washington, DC, July 16, 2020—A pilot program designed to enable the tens of millions of Americans who participate in the USDA’s Supplemental Nutrition Assistance Program (SNAP) to buy groceries online is exposing them to a loss of their privacy through “increased data collection and surveillance,” as well as risks involving “intrusive and manipulative online marketing techniques,” according to a report from the Center for Digital Democracy (CDD). The report reveals how online grocers and retailers use an orchestrated array of digital techniques—including granular data profiling, predictive analytics, geolocation tracking, personalized online coupons, and AI and machine learning—to promote unhealthy products, trigger impulsive purchases, and increase overall spending at check-out. While these practices affect all consumers engaged in online shopping, the report explains, “they pose greater threats to individuals and families already facing hardship.” E-commerce data practices “are likely to have a disproportionate impact on SNAP participants, which include low-income communities, communities of color, the disabled, and families living in rural areas.
The increased reliance on these services for daily food and other household purchases could expose these consumers to extensive data collection, as well as unfair and predatory techniques, exacerbating existing disparities in racial and health equity.” The report was funded by the Robert Wood Johnson Foundation, as part of a collaboration among four civil rights, digital rights, and health organizations: Color of Change, UnidosUS, Center for Digital Democracy, and Berkeley Media Studies Group. The groups issued a letter today to Secretary of Agriculture Sonny Perdue, urging the USDA to take immediate action to strengthen online protections for SNAP participants. USDA launched its e-commerce pilot last year in a handful of states, with an initial set of eight retailers approved for participation: Amazon, Dash’s Market, FreshDirect, Hy-Vee, Safeway, ShopRite, Walmart and Wright’s Market. The program has rapidly expanded to a majority of states, in part as a result of the current Covid-19 health crisis, in order to enable SNAP participants to shop more safely from home by following “shelter-in-place” rules. Through an analysis of the digital marketing and grocery e-commerce practices of the eight companies, as well as an assessment of their privacy policies, CDD found that SNAP participants and other online shoppers confront an often manipulative and nontransparent online grocery marketplace, which is structured to leverage the tremendous amounts of data gathered on consumers via their mobile devices, loyalty cards, and shopping transactions. E-commerce grocers deliberately foreground the brands and products that partner with them (which include some of the most heavily advertised, processed foods and beverages), making them highly visible on store home pages and on “digital shelves,” as well as through online coupons and well-placed reminders at the point of sale.
Grocers working with the SNAP pilot have developed an arsenal of “adtech” (advertising technology) techniques, including those that use machine learning and behavioral science to foster “frictionless shopping” and impulsive purchasing of specific foods and beverages. The AI and Big Data operations documented in the report may also lead to unfair and discriminatory data practices, such as targeting low-income communities and people of color with aggressive promotions for unhealthy food. Data collected and profiles created during online shopping may be applied in other contexts as well, leading to increased exposure to additional forms of predatory marketing, or to denial of opportunities in housing, education, employment, and financial services. “The SNAP program is one of our nation’s greatest success stories because it puts food on the table of hungry families and money in the communities where they live,” explained Dr. Lori Dorfman, Director of the Berkeley Media Studies Group. “Shopping for groceries should not put these families in danger of being hounded by marketers intent on selling products that harm health. Especially in the time of coronavirus when everyone has to stay home to keep themselves and their communities safe, the USDA should put digital safeguards in place so SNAP recipients can grocery shop without being manipulated by unfair marketing practices.” CDD’s research also found that the USDA relied on the flawed and misleading privacy policies of the participating companies, which fail to provide sufficient data protections. According to the pilot’s requirement for participating retailers, privacy policies should clearly explain how a consumer’s data is gathered and used, and provide “optimal” protections. A review of these long, densely worded documents, however, reveals the failure of the companies to identify the extent and impact of their actual data operations, or the risks to consumers. 
The pilot’s requirements also do not adequately limit the use of SNAP participants’ data for marketing. In addition, CDD tested the companies’ data practices for tracking customers’ behavior online, and compared them to the USDA’s requirements. The research found widespread use of so-called “third party” tracking software (such as “cookies”), which can expose an individual’s personal data to others. “In the absence of strong baseline privacy and ecommerce regulations in the US, the USDA’s weak safeguards are placing SNAP recipients at substantial risk,” explained Dr. Katharina Kopp, one of the report’s authors. “The kinds of e-commerce and Big Data practices we have identified through our research could pose even greater threats to communities of color, including increased commercial surveillance and further discrimination.” “Being on SNAP, or any other assistance program, should not give corporations free rein to use intrusive and manipulative online marketing techniques on Black communities,” said Jade Magnus Ogunnaike, Senior Campaign Director at Color of Change. “Especially in the era of COVID, where online grocery shopping is a necessity, Black people should not be further exposed to a corporate surveillance system with unfair and predatory practices that exacerbate disparities in racial and health equity just because they use SNAP. The USDA should act aggressively to protect SNAP users from unfair, predatory, and discriminatory data practices.” “The SNAP program helps millions of Latinos keep food on the table when times are tough and our nation’s public health and economic crises have highlighted that critical role,” said Steven Lopez, Director of Health Policy at UnidosUS. “Providing enhanced access to healthy and nutritious foods at the expense of the privacy and health of communities of color is too high of a price. Predatory marketing practices have been linked to increased health disparities for communities of color.
The USDA must not ignore that fact and should take strong and meaningful steps to treat all participants fairly, without discriminatory practices based on the color of their skin.” The report calls on the USDA to “take an aggressive role in developing meaningful and effective safeguards” before moving the SNAP online purchasing system beyond its initial trial. The agency needs to ensure that contemporary e-commerce, retail and digital marketing applications treat SNAP participants fairly, with strong privacy protections and safeguards against manipulative and discriminatory practices. The USDA should work with SNAP participants, civil rights, consumer and privacy groups, as well as retailers like Amazon and Walmart, to restructure its program to ensure the safety and well-being of the millions of people enrolled in the program. ###
  • In March 2018, The New York Times and The Guardian/Observer broke an explosive story that Cambridge Analytica, a British data firm, had harvested more than 50 million Facebook profiles and used them to engage in psychometric targeting during the 2016 US presidential election (Rosenberg, Confessore, & Cadwalladr, 2018). The scandal erupted amid ongoing concerns over Russian use of social media to interfere in the electoral process. The new revelations triggered a spate of congressional hearings and cast a spotlight on the role of digital marketing and “big data” in elections and campaigns. The controversy also generated greater scrutiny of some of the most problematic tech industry practices — including the role of algorithms on social media platforms in spreading false, hateful, and divisive content, and the use of digital micro-targeting techniques for “voter suppression” efforts (Green & Issenberg, 2016; Howard, Woolley, & Calo, 2018). In the wake of these cascading events, policymakers, journalists, and civil society groups have called for new laws and regulations to ensure transparency and accountability in online political advertising.

    Twitter and Google, driven by growing concern that they will be regulated for their political advertising practices, fearful of being found in violation of the General Data Protection Regulation (GDPR) in the European Union, and cognisant of their own culpability in recent electoral controversies, have each made significant changes in their political advertising policies (Dorsey, 2019; Spencer, 2019). Despite a great deal of public hand-wringing, however, US federal policymakers have failed to institute any effective remedies, even though several states have enacted legislation designed to ensure greater transparency for digital political ads (California Clean Money Campaign, 2019; Garrahan, 2018).
    These recent legislative and regulatory initiatives in the US are narrow in scope and focused primarily on policy approaches to political advertising in more traditional media, failing to hold the tech giants accountable for their deleterious big data practices.

    On the eve of the next presidential election in 2020, the pace of innovation in digital marketing continues unabated, along with its further expansion into US electoral politics. These trends were clearly evident in the 2018 midterms, which, according to Kantar Media, were “the most lucrative midterms in history”, with $5.25 billion USD spent for ads on local broadcast, cable TV, and digital — outspending even the 2016 presidential election. Digital ad spending “quadrupled from 2014” to $950 million USD for ads that primarily ran on Facebook and Google (Axios, 2018; Lynch, 2018). In the upcoming 2020 election, experts are forecasting overall spending on political ads will be $6 billion USD, with an “expected $1.6 billion to be devoted to digital video… more than double 2018 digital video spending” (Perrin, 2019). Kantar (2019), meanwhile, estimates the portion spent for digital media will be $1.2 billion USD in the 2019-2020 election cycle.

    In two earlier papers, we documented a number of digital practices deployed during the 2016 elections, which were emblematic of how big data systems, strategies and techniques were shaping contemporary political practice (Chester & Montgomery, 2017, 2018). Our work is part of a growing body of interdisciplinary scholarship on the role of data and digital technologies in politics and elections. Various terms have been used to describe and explain these practices — from computational politics to political micro-targeting to data-driven elections (Bodó, Helberger, & de Vreese, 2017; Bennett, 2016; Karpf, 2016; Kreiss, 2016; Tufekci, 2014).
    All of these labels highlight the increasing importance of data analytics in the operations of political parties, candidate campaigns, and issue advocacy efforts. But in our view, none adequately captures the full scope of recent changes that have taken place in contemporary politics. The same commercial digital media and marketing ecosystem that has dramatically altered how corporations engage with consumers is now transforming the ways in which campaigns engage with citizens (Chester & Montgomery, 2017).

    We have been closely tracking the growth of this marketplace for more than 25 years, in the US and abroad, monitoring and analysing key technological developments, major trends, practices and players, and assessing the impact of these systems in areas such as health, financial services, retail, and youth (Chester, 2007; Montgomery, 2007, 2015; Montgomery & Chester, 2009; Montgomery, Chester, Grier, & Dorfman, 2012; Montgomery, Chester, & Kopp, 2018). CDD has worked closely with leading EU civil society and data protection NGOs to address digital marketplace issues. Our work has included providing analysis to EU-based groups to help them respond critically to Google’s acquisition of DoubleClick in 2007 as well as Facebook’s purchase of WhatsApp in 2014. Our research has also been informed by a growing body of scholarship on the role that commercial and big data forces are playing in contemporary society. For example, advocates, legal experts, and scholars have written extensively about the data and privacy concerns raised by this commercial big data digital marketing system (Agre & Rotenberg, 1997; Bennett, 2008; Nissenbaum, 2009; Schwartz & Solove, 2011). More recent research has focused increasingly on other, and in many ways more troubling, aspects of this system.
    This work has included, for example, research on the use of persuasive design (including “mass personalisation” and “dark patterns”) to manage and direct human behaviours; discriminatory impacts of algorithms; and a range of manipulative practices (Calo, 2013; Gray, Kou, Battles, Hoggatt, & Toombs, 2018; Susser, Roessler, & Nissenbaum, 2019; Zarsky, 2019; Zuboff, 2019). As digital marketing has migrated into electoral politics, a growing number of scholars have begun to examine the implications of these problematic practices for the democratic process (Gorton, 2016; Kim et al., 2018; Kreiss & Howard, 2010; Rubinstein, 2014; Bashyakarla et al., 2019; Tufekci, 2014).

    The purpose of this paper is to serve as an “early warning system” — for policymakers, journalists, scholars, and the public — by identifying what we see as the most important industry trends and practices likely to play a role in the next major US election, and flagging some of the problems and issues raised. Our intent is not to provide a comprehensive analysis of all the tools and techniques in what is frequently called the “politech” marketplace. The recent Tactical Tech (Bashyakarla et al., 2019) publication, Personal Data: Political Persuasion, provides a highly useful compendium on this topic. Rather, we want to show how further growth and expansion of the big data digital marketplace is reshaping electoral politics in the US, introducing both candidate and issue campaigns to a system of sophisticated software applications and data-targeting tools that are rooted in the goals, values, and strategies for influencing consumer behaviours. Although some of these new digitally enabled capabilities are extensions of longstanding political practices that pre-date the internet, others are a significant departure from established norms and procedures.
    Taken together, they are contributing to a major shift in how political campaigns conduct their operations, raising a host of troubling issues concerning privacy, security, manipulation, and discrimination. All of these developments are taking place, moreover, within a regulatory structure that is weak and largely ineffectual, posing daunting challenges to policymakers.

    In the following pages, we: 1) briefly highlight five key developments in the digital marketing industry since the 2016 election that are influencing the operations of political campaigns and will likely affect the next election cycle; 2) discuss the implications of these trends and techniques for the ongoing practice of contemporary politics, with a special focus on their potential for manipulation and discrimination; 3) assess both the technology industry responses and recent policy initiatives designed to address political advertising in the US; and 4) offer our own set of recommendations for regulating political ad and data practices.

    The growing big data commercial and political marketing system

    In the upcoming 2020 elections, the US is likely to witness an extremely hard-fought, under-the-radar, innovative, and in many ways disturbing set of races, not only for the White House but also for down-ballot candidates and issue groups. Political campaigns will be able to avail themselves of the current state-of-the-art big data systems that were used in the past two elections, along with a host of recent advances developed by commercial marketers. Several interrelated trends in the digital media and marketing industry are likely to play a particularly influential role in shaping the use of digital tools and strategies in the 2020 election. We discuss them briefly below.

    Recent mergers and partnerships in the media and data industries are creating new synergies that will extend the reach and enhance the capabilities of contemporary political campaigns.
    In the last few years, a wave of mergers and partnerships has taken place among platforms, data brokers, advertising exchanges, ad agencies, measurement firms and companies specialising in advertising technologies (so-called “ad-tech”). This consolidation has helped fuel the unfettered growth of a powerful digital marketing ecosystem, along with an expanding spectrum of software systems, specialty firms, and techniques that are now available to political campaigns. For example, AT&T (n.d.), as part of its acquisition of Time Warner Media, has re-launched its digital ad division, now called Xandr (n.d.). It also acquired the leading programmatic ad platform AppNexus. Leading multinational advertising agencies have made substantial acquisitions of data companies, such as the Interpublic Group (IPG) purchase of Acxiom in 2018 and the Publicis Groupe takeover of Epsilon in 2019. One of the “Big 3” consumer credit reporting companies, TransUnion (2019), bought TruSignal, a leading digital marketing firm. Such deals enable political campaigns and others to easily access more information to profile and target potential voters (Williams, 2019).

    In the already highly consolidated US broadband access market, only a handful of giants provide the bulk of internet connections for consumers. The growing role of internet service providers (ISPs) in the political ad market is particularly troubling, since they are free from any net neutrality, online privacy or digital marketing rules. Acquisitions made by the telecommunications sector are further enabling ISPs and other telephony companies to monetise their highly detailed subscriber data, combining it with behavioural data about device use and content preferences, as well as geolocation (Schiff, 2018).

    Increasing sophistication in “identity resolution” technologies, which take advantage of machine learning and artificial intelligence applications, is enabling greater precision in finding and reaching individuals across all of their digital devices.

    The technologies used for what is known as “identity resolution” have evolved to enable marketers — and political groups — to target and “reach real people” with greater precision than ever before. Marketers are helping perfect a system that leverages and integrates, increasingly in real-time, consumer profile data with online behaviours to capture more granular profiles of individuals, including where they go, and what they do (Rapp, 2018). Facebook, Google and other major marketers are also using machine learning to power prediction-related tools on their digital ad platforms. As part of Google’s recent reorganisation of its ad system (now called the “Google Marketing Platform”), the company introduced machine learning into its search advertising and YouTube businesses (Dischler, 2018; Sluis, 2018). It also uses machine learning for its “Dynamic Prospecting” system, which is connected to an “Automatic Targeting” apparatus that enables more precise tracking and targeting of individuals (Google, n.d.-a, n.d.-b). Facebook (2019) is enthusiastically promoting machine learning as a fundamental advertising tool, urging advertisers to step aside and let automated systems make more ad-targeting decisions.

    Political campaigns have already embraced these new technologies, even creating special categories in the industry awards for “Best Application of Artificial Intelligence or Machine Learning”, “Best Use of Data Analytics/Machine Learning”, and “Best Use of Programmatic Advertising” (“2019 Reed Award Winners”, 2019; American Association of Political Consultants, 2019).
For example, Resonate, a digital data marketing firm, was recognised in 2018 for its “Targeting Alabama’s Conservative Media Bubble”, which relied on “artificial intelligence and advanced predictive modeling” to analyse in real time “more than 15 billion page loads per day”. According to Resonate, this process identified “over 240,000 voters” who were judged to be “persuadable” in a hard-fought Senate campaign (Fitzpatrick, 2018). Similar advances in data analytics for political efforts are becoming available for smaller campaigns (Echelon Insights, 2019). WPA Intelligence (2019) won a 2019 Reed Award for its data analytics platform that generated “daily predictive models, much like microtargeting advanced traditional polling. This tool was used on behalf of top statewide races to produce up to 900 million voter scores, per night, for the last two months of the campaign”. The deployment of these techniques was a key influence on spending in the US midterm elections (Benes, 2018; Loredo, 2016; McCullough, 2016).

Political campaigns are taking advantage of a rapidly maturing commercial geo-spatial intelligence complex, enhancing mobile and other geotargeting strategies.

Location analytics enable companies to make instantaneous associations between the signals sent and received from Wi-Fi routers, cell towers, a person’s devices and specific locations, including restaurants, retail chains, airports, stadiums, and the like (Skyhook, n.d.). These enhanced location capabilities have further blurred the distinction between what people do in the “offline” physical world and their actions and behaviours online, giving marketers greater ability both to “shadow” and to reach individuals nearly anytime and anywhere.

A political “geo-behavioural” segment is now a “vertical” product offered alongside more traditional online advertising categories, including auto, leisure, entertainment and retail.
“Hyperlocal” data strategies enable political campaigns to engage in more precise targeting in communities (Mothership Strategies, 2018). Political campaigns are also taking advantage of the widespread use of consumer navigation systems. Waze, the Google-owned navigation firm, operates its own ad system but is also increasingly integrated into the Google programmatic platform (Miller, 2018). For example, in the 2018 midterm election, a get-out-the-vote campaign for one trade group used voter file and Google data to identify a highly targeted segment of likely voters, and then relied on Waze to deliver banner ads with a link to an online video (carefully calibrated to work only when the app signalled the car wasn’t moving). According to the political data firm that developed the campaign, it reached “1 million unique users in advance of the election” (Weissbrot, 2019, April 10).

Political television advertising is rapidly expanding onto unregulated streaming and digital video platforms.

For decades, television has been the primary medium used by political campaigns to reach voters in the US. Now the medium is in the process of a major transformation that will dramatically increase its central role in elections (IAB, n.d.-a). One of the most important developments during the past few years is the expansion of advertising and data-targeting capabilities, driven in part by the rapid adoption of streaming services (so-called “Over the Top”, or “OTT”) and the growth of digital video (Weissbrot, 2019, October 22). Leading OTT providers in the US are actively promoting their platform capabilities to political campaigns, making streaming video a new battleground for influencing the public.
For example, a “Political Data Cloud” offered by OTT specialist Tru Optik (2019) enables “political advertisers to use both OTT and streaming audio to target specific voter groups on a local, state or national level across such factors as party affiliation, past voting behavior and issue orientation. Political data can be combined with behavioral, demographic and interest-based information, to create custom voter segments actionable across over 80 million US homes through leading publishers and ad tech platforms” (Lerner, 2019).

While political advertising on broadcast stations and cable television systems has long been subject to regulation by the US Federal Communications Commission, newer streaming television and digital video platforms operate outside of the regulatory system (O’Reilly, 2018). According to the research firm Kantar, “political advertisers will be able to air more spots on these streaming video platforms and extend the reach of their messaging—particularly to younger voters” (Lafayette, 2019). These ads will also be part of cross-device campaigns, with videos showing up in various formats on mobile devices as well.

The expanding role of digital platforms enables political campaigns to access additional sources of personal data, including TV programme viewing patterns. For example, in 2018, Altice and smart TV company Vizio launched a new partnership to take advantage of recent technologies now being deployed to deliver targeted advertising, incorporating viewer data from nearly nine million smart TV sets into “its footprint of more than 90 million households, 85% of broadband subscribers and one billion devices in the U.S.” (Clancy, 2018). Vizio’s Inscape (n.d.) division produces technology for smart TVs, offering what is known as “automatic content recognition” (ACR) data.
According to Vizio, ACR enables what the industry calls “glass level” viewing data, using “screen level measurement to reveal what programs and ads are being watched in near-real time”, and incorporating the IP address from any video source in use (McAfee, 2019). Campaigns have already demonstrated the efficacy of OTT. AdVictory (n.d.) modelled “387,000 persuadable cord cutters and 1,210 persuadable cord shavers” (the latter referring to people using various forms of streaming video) to make a complex media buy in one state-wide gubernatorial race that reached 1.85 million people “across [video] inventory traditionally untouched by campaigns”.

Further developments in personalisation techniques are enabling political campaigns to maximise their ability to test an expanding array of messaging elements on individual voters.

Micro-targeting now involves a more complex personalisation process than merely using so-called behavioural data to target an individual. The use of personal data and other information to influence a consumer is part of an ever-evolving, orchestrated system designed to generate and then manage an individual’s online media and advertising experiences. Google and Facebook, in particular, are adept at harvesting the latest innovations to advance their advertising capabilities, including data-driven personalisation techniques that generate hundreds of highly granular ad-campaign elements from a single “creative” (i.e., advertising message). These techniques are widely embraced by the digital marketing industry, and political campaigns across the political spectrum are being encouraged to expand their use for targeting voters (Meuse, 2018; Revolution Marketing, n.d.; Schuster, 2015). The practice is known by various names, including “creative versioning”, “dynamic creative”, and “Dynamic Creative Optimization”, or DCO (Shah, 2019).
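The scale involved is easy to see with a little arithmetic: a handful of interchangeable components multiplies into thousands of distinct ad variants. The sketch below is purely illustrative (the component names and pool sizes are hypothetical, chosen so the total matches the 6,250-variant figure reported for Facebook's Dynamic Creative product):

```python
from itertools import product

# Hypothetical component pools for a single "creative" (counts are invented).
titles = [f"title_{i}" for i in range(10)]
images = [f"image_{i}" for i in range(5)]
texts = [f"text_{i}" for i in range(5)]
descriptions = [f"desc_{i}" for i in range(5)]
calls_to_action = [f"cta_{i}" for i in range(5)]

# Each combination of components is a distinct ad variant that can be
# individually delivered, measured, and optimised.
variants = list(product(titles, images, texts, descriptions, calls_to_action))
print(len(variants))  # 10 * 5 * 5 * 5 * 5 = 6250
```

Because every added component pool multiplies the total, a single creative can yield thousands of micro-variations with no additional human design work.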
Google’s creative optimisation product, “Directors Mix” (formerly called “Vogon”), is integrated into the company’s suite of “custom affinity audience targeting capabilities, which includes categories related to politics and many other interests”. This product, it explains, is designed to “generate massively customized and targeted video ad campaigns” (Google, n.d.-c). Marketing experts say that Google now enables “DCO on an unprecedented scale”, and that YouTube will be able to “harness the immense power of its data capabilities…” (Mindshare, 2017). Directors Mix can tap into Google’s vast resources to help marketers influence people in various ways, making it “exceptionally adept at isolating particular users with particular interests” (Boynton, 2018). Facebook’s “Dynamic Creative” can help transform a single ad into as many as “6,250 unique combinations of title, image/video, text, description and call to action”, available to target people on its news feed, on Instagram, and beyond Facebook through its “Audience Network” ad system (Peterson, 2017).

Implications for 2020 and beyond

We have been able to provide only a partial preview of the digital software systems and tools that are likely to be deployed in US political campaigns during 2020. It’s already evident that digital strategies will figure even more centrally in the upcoming campaigns than they have in previous elections (Axelrod, Burke, & Nam, 2019; Friedman, 2018, June 19). Many of the leading Democratic candidates, as well as President Trump, who has already ramped up his re-election campaign apparatus, have extensive experience and success in their use of digital technology.
Brad Parscale, the campaign manager for Trump’s re-election effort, explained in 2019 that “in every single metric, we’re looking at being bigger, better, and ‘badder’ than we were in 2016,” including the role that “new technologies” will play in the race (Filloux, 2019).

On the one hand, these digital tools could be harnessed to create a more active and engaged electorate, with particular potential to reach and mobilise young voters and other important demographic groups. For example, in the US 2018 midterm elections, newcomers such as Congresswoman Alexandria Ocasio-Cortez, with small budgets but armed with digital media savvy, were able to seize the power of social media, mobile video, and other digital platforms to connect with large swaths of voters largely overlooked by other candidates (Blommaert, 2019). The real-time capabilities of digital media could also facilitate more effective get-out-the-vote efforts, targeting and reaching individuals much more efficiently than in-person appeals and last-minute door-to-door canvassing (O’Keefe, 2019).

On the other hand, there is a very real danger that many of these digital techniques could undermine the democratic process. For example, in the 2016 election, personalised targeted campaign messages were used to identify very specific groups of individuals, including racial minorities and women, delivering highly charged messages designed to discourage them from voting (Green & Issenberg, 2016). These kinds of “stealth media” disinformation efforts take advantage of “dark posts” and other affordances of social media platforms (Young et al., 2018). Though such intentional uses (or misuses) of digital marketing tools have generated substantial controversy and condemnation, there is no reason to believe they will not be used again.
Campaigns will also be able to take advantage of a plethora of newer and more sophisticated targeting and message-testing tools, enhancing their ability to fine-tune and deliver precise appeals to the specific individuals they seek to influence, and to reinforce those messages throughout each individual’s “media journey”.

But there is an even greater danger: that the increasingly widespread reliance on commercial ad technology tools in the practice of politics will become routine and normalised, subverting the independent and autonomous decision making that is so essential to an informed electorate (Burkell & Regan, 2019; Gorton, 2016). For example, so-called “dynamic creative” advertising systems are in some ways extensions of A/B testing, a longstanding tool in political campaigns. However, today’s digital incarnation of the practice makes it possible to test thousands of message variations, to assess how each individual responds to them, and to change the content in real time and across media in order to target and retarget specific voters. The data available for this process are extensive, granular, and intimate, incorporating personal information that extends far beyond the conventional categories to encompass behavioural patterns, psychographic profiles, and TV viewing histories. Such techniques are inherently manipulative (Burkell & Regan, 2019; Gorton, 2016; Susser, Roessler, & Nissenbaum, 2019). The increasing use of digital video, in all of its new forms, raises similar concerns, especially when delivered to individuals through mobile and other platforms, generating huge volumes of powerful, immersive, persuasive content and challenging the ability of journalists and scholars to review claims effectively. AI, machine learning, and other automated systems will be able to make predictions about behaviour and shape public decision-making without any mechanism for accountability.
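The real-time test-and-retarget loop described above is mechanically similar to the multi-armed bandit algorithms commonly used in commercial ad optimisation: serve variants, observe responses, and shift delivery toward whatever is working. A toy epsilon-greedy sketch, with invented variant names and response rates:

```python
import random

random.seed(42)

# Hypothetical message variants with unknown "true" response rates.
true_rates = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.01}
shown = {v: 0 for v in true_rates}
clicked = {v: 0 for v in true_rates}

def observed_rate(v):
    # Unseen variants get priority so every variant is tried at least once.
    return clicked[v] / shown[v] if shown[v] else float("inf")

def choose(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(true_rates))   # explore a random variant
    return max(true_rates, key=observed_rate)    # exploit the best so far

for _ in range(10_000):  # each iteration simulates one ad impression
    v = choose()
    shown[v] += 1
    if random.random() < true_rates[v]:  # simulated user response
        clicked[v] += 1

# Delivery concentrates on whichever variant draws the most responses,
# with no human reviewing the thousands of individual serving decisions.
print(shown)
```

The point of the sketch is not the algorithm itself but the accountability gap described above: such a system converges on the most "effective" message automatically, regardless of whether that message is accurate or fair.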
Taken together, all of these data-gathering, -analysis, and -targeting tools raise the spectre of a growing political surveillance system, capable of capturing unlimited amounts of detailed and highly sensitive information on citizens and using it for a variety of purposes. The increasing predominance of the big data political apparatus could also usher in a new era of permanent campaign operations, in which individuals and groups throughout the country are continually monitored, targeted, and managed.

Because all of these systems are part of the opaque and increasingly automated operations of digital commercial marketing, the techniques, strategies, and messages of the upcoming campaigns will be even less transparent than before. In the heat of a competitive political race, campaigns are not likely to publicise the full extent of their digital operations. As a consequence, journalists, civil society groups, and academics may not be able to assess them fully until after the election. Nor will it be enough to rely on documenting expenditures, because digital ads can be inexpensive, purposefully designed to work virally, and aimed at garnering “free media”, resulting in a proliferation of messages that evade categorisation or accountability as “paid political advertising”.

Some scholars have raised doubts about the effectiveness of contemporary big data and digital marketing applications when applied to the political sphere, and about the likelihood of their widespread adoption (Baldwin-Philippi, 2017). It is true that we are in the early stages of development and implementation of these new tools, and it may be too early to predict how widely they will be used in electoral politics, or how effective they might be.
However, the success of digital marketing worldwide in promoting brands and products in the consumer marketplace, combined with the investments and innovations that are expanding its ability to deliver highly measured impacts, suggests to us that these applications will play an important role in our political and electoral affairs. The digital marketing industry has developed an array of measurement approaches to document its impact on the behaviour of individuals and communities (Griner, 2019; IAB Europe, 2019; MMA, 2019). In the no-holds-barred environment of highly competitive electoral politics, campaigns are likely to deploy these and other tools at their disposal, without restraint. There are enough indications from the most recent uses of these technologies in the political arena to raise serious concerns, making it particularly urgent to monitor them very closely in upcoming elections.

Industry and legislative initiatives

The largest US technology companies have recently introduced a succession of internal policies and transparency measures aimed at ensuring greater platform responsibility during elections. In November 2019, Twitter announced it was prohibiting the “promotion of political content”, explaining that it believed that “political message reach should be earned, not bought”. CEO Jack Dorsey (2019) was remarkably frank in explaining why Twitter had made this decision: “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale”.

That same month, Google unveiled policy changes of its own, including restricting the kinds of internal data capabilities available to political campaigns. As the company explained, “we’re limiting election ads audience targeting to the following general categories: age, gender, and general location (postal code level)”.
Google also announced it was “clarifying” its ads policies and “adding examples to show how our policies prohibit things like ‘deep fakes’ (doctored and manipulated media), misleading claims about the census process, and ads or destinations making demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process” (Spencer, 2019). It remains to be seen whether changes such as Google’s and Twitter’s will actually alter, in any significant way, the contemporary operations of data-driven political campaigns. Some observers believe that Google’s new policy will benefit the company, noting that “by taking away the ability to serve specific audiences content that is most relevant to their values and interests, Google stands to make a lot MORE money off of campaigns, as we’ll have to spend more to find and reach our intended audiences” (“FWIW: The Platform Self-regulation Dumpster Fire”, 2019).

Interestingly, Facebook, the tech company that has been subject to the greatest amount of public controversy over its political practices, had not, at the time of this writing, made similar changes in its political advertising policies. Though the social media giant has been widely criticised for its refusal to fact-check political ads for accuracy and fairness, it has not been willing to institute any mechanisms for intervening in the content of those ads (Ingram, 2018; Isaac, 2019; Kafka, 2019). However, Facebook did announce in 2018 that it was ending its participation in the industry-wide practice of embedding, in which sales teams worked hand-in-hand with leading political campaigns (Ingram, 2018; Kreiss & McGregor, 2017).
After a research article generated extensive news coverage of this industry-wide marketing practice, Facebook publicly announced it would cease the arrangement, instead “offering tools and advice” through a politics portal that provides “candidates information on how to get their message out and a way to get authorised to run ads on the platform” (Emerson, 2018; Jeffrey, 2018). In May 2019, the company also announced it would stop paying commissions to employees who sell political ads (Glazer & Horowitz, 2019). Such a move may not have a major effect on sales, however, especially since the tech giant has already generated significant income from political advertising for the 2020 campaign (Evers-Hillstrom, 2019).

Under pressure from civil rights groups over discriminatory ad targeting practices in housing and other areas, Facebook has undergone an extensive civil rights audit, which has resulted in a number of internal policy changes, including some related to campaigns and elections. For example, the company announced in June 2019 that it had “strengthened its voter suppression policy” to prohibit “misrepresentations” about the voting process, as well as any “threats of violence related to voting”. It has also committed to making further changes, including investments designed to prevent the use of the platform “to manipulate U.S. voters and elections” (Sandberg, 2019).

Google, Facebook, and Twitter have all established online archives to enable the public to find information on the political advertisements that run on their platforms. But these databases provide only a limited range of information. For example, Google’s (2018) archive contains copies of all political ads run on the platform and shows the amount spent overall and on specific ads by a campaign, as well as age range, gender, area (state) and dates when an ad appeared, but it does not share the actual “targeting criteria” used by political campaigns (Walker, 2018).
Facebook’s (n.d.-b) Ad Library describes itself as a “comprehensive, searchable collection of all ads currently running across Facebook Products”. It claims to provide “data for all ads related to politics or to issues of national importance” that have run on its platform since May 2018 (Sullivan, 2019). While the data include breakdowns of an ad’s audience by age, gender and state, along with impressions and spending, no details are provided to explain how the ad was constructed, tested, and altered, or what digital ad targeting techniques were used. For example, Facebook (n.d.-a-e) permits US-based political campaigns to use its “Custom or Lookalike Audiences” ad-targeting product, but it does not report such use in its Ad Library. Though all of these new transparency systems and ad archives offer useful information, they also place a considerable burden on users. Many of these new measures are likely to be more valuable for watchdog organisations and journalists, who can use the information to track spending, identify emerging trends, and shed additional light on the process of digital political influence.

While these kinds of changes in platform policies and operations should help to mitigate some of the more egregious uses of social media by unscrupulous campaigns and other actors, they are not likely to alter in any major way the basic operations of today’s political advertising practices. With each tech giant instituting its own set of internal ad policies, there are no clear industry-wide “rules of the game” that apply to all participants in the digital ecosystem. Nor are there strong transparency or accountability systems in place to ensure that the policies are effective. Though platform companies may institute changes that appear to offer meaningful safeguards, other players in the highly complex big data marketing infrastructure may offer ways to circumvent these apparent restrictions.
As a case in point, when Facebook (2018, n.d.-c) announced in the wake of the Cambridge Analytica scandal that it was “shutting down Partner Categories”, the move provoked alarm inside the ad-tech industry that a set of powerful applications was being withdrawn (Villano, 2018). The product had enabled marketers to incorporate data provided by Facebook’s selected partners, including Acxiom and Epsilon (Pathak, 2018). However, despite the policy change, Facebook still enables marketers to bring a tremendous amount of third-party data to the platform for targeting (Popkin, 2019). Indeed, shortly after Facebook’s announcement, LiveRamp offered assurances to its clients that no significant changes had been made, explaining that “while there’s a lot happening in our industry, LiveRamp customers have nothing to fear” (Carranza, 2018).

The controversy generated by recent foreign interference in US elections has also fuelled a growing call to update US election laws. However, the current policy debate over regulation of political advertising continues to be waged within a very narrow framework, which needs to be revisited in light of current digital practices. Legislative proposals have been introduced in Congress that would strengthen the disclosure requirements for digital political ads regulated by the Federal Election Commission (FEC). For example, under the Honest Ads Act, digital media platforms would be required to provide information about each ad via a “public political file”, including who purchased the ad, when it appeared, and how much was spent, as well as “a description of the targeted audience”. Campaigns would also be required to provide the same information for online political ads that is required for political advertising in other media. The proposed legislation currently has the support of Google, Facebook, Twitter and other leading companies (Ottenfeld, 2018, April 25).
A more ambitious bill, the For the People Act, is backed by the new Democratic majority in the House of Representatives, and includes similar disclosure requirements, along with a number of provisions aimed at reducing “the influence of big money in politics”. Though these bills are a long-overdue first step toward bringing transparency measures into the digital age, neither of them addresses the broad range of big data marketing and targeting practices that are already in widespread use across political campaigns. And it is doubtful whether either of these limited policy approaches stands a chance of passage in the near future. There is strong opposition to regulating political campaign and ad practices at the federal level, primarily because of what critics claim would be violations of the free speech principle of the US First Amendment (Brodey, 2019).

While the prospects for regulating political advertising appear dim at the present time, there is a strong bipartisan move in Congress to pass federal privacy legislation that would regulate commercial uses of data, which could, in turn, affect the operations, tools, and techniques available to digital political campaigns. Google, Facebook, and other digital data companies have long opposed any comprehensive privacy legislation. But a number of recent events have combined to force the industry to change its strategy: the implementation of the EU General Data Protection Regulation (GDPR) and the passage of state privacy laws (especially in California); the seemingly never-ending news reports on Facebook’s latest scandal; massive data breaches of personal information; accounts of how online marketers engage in discriminatory practices and promote hate speech; and the continued political fallout from “Russiagate”. Even the leading tech companies are now pushing for privacy legislation, if only to reduce the growing political pressure they face from the states, the EU, and their critics (Slefo, 2019).
Also fuelling the debate on privacy are growing concerns over digital media industry consolidation, which have triggered calls by political leaders as well as presidential candidates to “break up” Amazon and Facebook (Lecher, 2019). Numerous bills have been introduced in both houses of Congress, some incorporating strong provisions for regulating both data use and marketing techniques. However, as the 2020 election cycle gets underway, the ultimate outcome of this flurry of legislative activity is still up in the air (Kerry, 2019).

Opportunities for intervention

Given the uncertainty in the regulatory and self-regulatory environment, there is likely to be little or no restraint in the use of data-driven digital marketing practices in the upcoming US elections. Groups from across the political spectrum, including both campaigns and special interest groups, will continue to engage in ferocious digital combat (Lennon, 2018). With the intense partisanship fuelled by what is admittedly a high-stakes-for-democracy election (for all sides), as well as the current ease with which all of the available tools and methods can be deployed, no company or campaign will voluntarily step away from the “digital arms race” that US elections have become. Given what is expected to be an extremely close race for the Electoral College that determines US presidential elections, 2020 is poised to see both parties use digital marketing techniques to identify and mobilise the handful of voters needed to “swing” a state one way or another (Schmidt, 2019).

Campaigns will have access to an unprecedented amount of personal data on every voter in the country, drawing from public sources as well as the growing commercial big data infrastructure.
As a consequence, the next election cycle will be characterised by ubiquitous political targeting and messaging, fed continuously through multiple media outlets and communication devices.

At the same time, the concerns over continued threats of foreign election interference, along with the ongoing controversy triggered by the Cambridge Analytica/Facebook scandal, have re-energised campaign reform and privacy advocates and engaged the continuing interest of watchdog groups and journalists. This heightened attention to the role of digital technologies in the political process has created an unprecedented window of opportunity for civil society groups, foundations, educators, and other key stakeholders to push for broad public policy and structural changes. Such an effort would need to be multi-faceted, bringing together diverse organisations and issue groups, and taking advantage of current policy deliberations at both the federal and state levels.

In other western democracies, governments and industry organisations have taken strong proactive measures to address the use of data-driven digital marketing techniques by political parties and candidates. For example, the Institute of Practitioners in Advertising (IPA), a leading UK advertising organisation, has called for a “moratorium on micro-targeted political advertising online”. “In the absence of regulation”, the IPA explained, “we believe this almost hidden form of political communication is vulnerable to abuse”. Leading members of the UK advertising industry, including firms that work on political campaigns, have endorsed these recommendations (Oakes, 2018). The UK Information Commissioner’s Office (ICO, 2018), which regulates privacy, conducted an investigation of recent digital political practices and issued a report urging the government to “legislate at the earliest opportunity to introduce a statutory code of practice” addressing the “use of personal information in political campaigns” (Denham, 2018).
In Canada, the Privacy Commissioner offered “guidance” to political parties on their use of data, including “Best Practices” for requiring consent when using personal information (Office of the Privacy Commissioner of Canada, 2019). The European Council (2019) adopted a similar set of policies requiring political parties to adhere to EU data protection rules.

We recognise that the United States has a unique regulatory and legal system, in which First Amendment protections of free speech have limited regulation of political campaigns. However, the dangers that big data marketing operations pose to the integrity of the political process require a rethinking of policy approaches. A growing number of legal scholars have begun to question whether political uses of data-driven digital marketing should be afforded the same level of First Amendment protection as other forms of political speech (Burkell & Regan, 2019; Calo, 2013; Rubinstein, 2014; Zarsky, 2019). “The strategies of microtargeting political ads”, explain Jacquelyn Burkell and Priscilla Regan (2019), “are employed in the interests not of informing, or even persuading voters but in the interests of appealing to their non-rational biases as defined through algorithmic profiling”.

Advocates and policymakers in the US should explore various legal and regulatory strategies, developing a broad policy agenda that encompasses data protection and privacy safeguards; robust transparency, reporting and accountability requirements; restrictions on certain digital advertising techniques; and limits on campaign spending. For example, disclosure requirements for digital media need to be much more comprehensive.
At the very least, campaigns, platforms and networks should be required to disclose fully all the ad and data practices they used (e.g., cross-device tracking, lookalike modelling, geolocation, measurement, neuromarketing), as well as variations of ads delivered through dynamic creative optimisation and other similar AI applications. Some techniques — especially those that are inherently manipulative in nature — should not be allowed in political campaigns. Greater attention will need to be paid to the uses of data and targeting techniques as well, articulating distinctions between those designed to promote robust participation, such as “Get Out the Vote” efforts, and those whose purpose is to discourage voters from exercising their rights at the ballot box. Limits should also be placed on the sources and amount of data collected on voters. Political parties, campaigns, and political action committees should not be allowed to gain unfettered access to consumer profile data, and voters should have the right to provide affirmative consent (“opt-in”) before any of their information can be used for political purposes. Policymakers should be required to stay abreast of fast-moving innovations in the technology and marketing industries, identifying the uses and abuses of digital applications for political purposes, such as the way that WhatsApp was deployed during recent elections in Brazil for “computational propaganda” (Magenta, Gragnani, & Souza, 2018).

In addition to pushing for government policies, advocates should place pressure on the major technology industry players and political institutions, through grassroots campaigns, investigative journalism, litigation, and other measures. If we are to have any reform in the US, there must be multiple and continuous points of pressure. The two major political parties should be encouraged to adopt a proposed new best-practices code. 
Advocates should also consider adopting the model developed by civil rights groups and their allies in the US, who negotiated successfully with Google, Facebook and others to develop more responsible and accountable marketing and data practices (Peterson & Marte, 2016). Similar efforts could focus on political data and ad practices. NGOs, academics, and other entities outside the US should also be encouraged to raise public concerns.

All of these efforts would help ensure that the US electoral process operates with integrity, protects privacy, and does not engage in discriminatory practices designed to diminish debate and undermine full participation.

Citations available via: https://policyreview.info/articles/analysis/digital-commercialisation-us...

This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon: https://policyreview.info/data-driven-elections
  • Online political misinformation and false news have already resurfaced in the 2018 midterm elections. CDD has produced a short e-guide to help voters understand how online media platforms can be hijacked to fan political polarization and social conflict. Enough Already! Protect Yourself from Online Political Manipulation and False News in Election 2018 describes the tactics that widely surfaced in the last presidential election and how they have evolved since, and deconstructs the underlying architecture of online media, especially social networks, that has fueled the rise of disinformation and false news. The e-guide tells readers what they can do to try to take themselves out of the targeted advertising systems developed by Facebook, Twitter, YouTube and other big platforms. The guide also describes the big-picture issues that must be addressed to rein in the abuses unleashed by Silicon Valley’s big data surveillance economy and advertising-driven revenue machine.
  • Reports

    The Influence Industry - Contemporary Digital Politics in the United States

    researched and written by Jeff Chester and Kathryn C. Montgomery

  • Computational politics—the application of digital targeted-marketing technologies to election campaigns in the US and elsewhere—are now raising the same concerns for democratic discourse and governance that they have long raised for consumer privacy and welfare in the commercial marketplace. This paper examines the digital strategies and technologies of today’s political operations, explaining how they were employed during the most recent US election cycle, and exploring the implications of their continued use in the civic context.

    For the full journal, please visit https://policyreview.info/node/773/pdf.
  • Reports

    Health Wearable Devices Pose New Consumer and Privacy Risks

    Lack of Regulation Fostering Unchecked Use of Personal Health Data. Debate over Future of Health Care System Must Address Need for Safeguards.

    Personal health wearable devices that consumers are using to monitor their heart rates, sleep patterns, calories, and even stress levels raise new privacy and security risks, according to a report released today by researchers at American University and the Center for Digital Democracy. Watches, fitness bands, and so-called “smart” clothing, linked to apps and mobile devices, are part of a growing “connected-health” system in the U.S., promising to provide people with more efficient ways to manage their own health. But while consumers may think that federal laws will protect their personal health information collected by wearables, the report found that the weak and fragmented health-privacy regulatory system fails to provide adequate safeguards. The report, Health Wearable Devices in the Big Data Era: Ensuring Privacy, Security, and Consumer Protection, provides an overview and analysis of the major features, key players, and trends that are shaping the new consumer-wearable and connected-health marketplace.

    “Many of these devices are already being integrated into a growing Big Data digital-health and marketing ecosystem, which is focused on gathering and monetizing personal and health data in order to influence consumer behavior,” the report explains. As the use of these devices becomes more widespread, and as their functionalities become increasingly sophisticated, “the extent and nature of data collection will be unprecedented.”

    The report documents a number of current digital-health marketing practices that threaten the privacy of consumer health information, including “condition targeting,” “look-alike modeling,” predictive analytics, “scoring,” and the real-time buying and selling of individual consumers. The technology of wearable devices makes them particularly powerful tools for data collection and digital marketing. 
    For example, smartphones and other mobile devices already provide access to users’ location information, enabling marketers to target individuals wherever they are, based on analyses of “visitation patterns” and a host of other behavioral and demographic data.

    The report also explains how an emerging set of techniques and Big Data practices are being developed to harness the unique capabilities of wearables—such as biosensors that track bodily functions, and “haptic technology” that enables users to “feel” actual body sensations. Pharmaceutical companies are poised to be among the major beneficiaries of wearable marketing.

    The report offers suggestions for how government, industry, philanthropy, nonprofit organizations, and academic institutions can work together to develop a comprehensive approach to health privacy and consumer protection in the era of Big Data and the Internet of Things. These include: clear, enforceable standards for both the collection and use of information; formal processes for assessing the benefits and risks of data use; and stronger regulation of direct-to-consumer marketing by pharmaceutical companies.

    “The connected-health system is still in an early, fluid stage of development,” explained Kathryn C. Montgomery, PhD, professor at American University and a co-author of the report. “There is an urgent need to build meaningful, effective, and enforceable safeguards into its foundation.” Such efforts “will require moving beyond the traditional focus on protecting individual privacy, and extending safeguards to cover a range of broader societal goals, such as ensuring fairness, preventing discrimination, and promoting equity,” the report says.

    “In the wake of the recent election, the United States is on the eve of a major public debate over the future of its health-care system,” the report notes. 
    “The potential of personal digital devices to reduce health-care spending will likely play an important role,” as lawmakers deliberate the fate of the Affordable Care Act. However, unless there are adequate regulatory safeguards in place, “consumers and patients could face serious risks to their privacy and security, and also be subjected to discrimination and other harms.”

    “Americans now face a growing loss of their most sensitive information, as their health data are collected and analyzed on a continuous basis, combined with information about their finances, ethnicity, location, and online and off-line behaviors,” said Jeff Chester, Executive Director of the Center for Digital Democracy and another co-author of the report. “Policy makers must act decisively to protect consumers in today’s Big Data era.”

    The Robert Wood Johnson Foundation provided funding for the report.

    The three authors of the report—Kathryn Montgomery, Jeff Chester, and Katharina Kopp—have played a leading role on digital privacy issues, and were responsible for the campaign during the 1990s that led to enactment by Congress of the Children’s Online Privacy Protection Act (COPPA).

    Full report attached.
    Kathryn Montgomery, Jeff Chester, Katharina Kopp
  • Blog

    New Report: Health Wearable Devices Pose New Consumer and Privacy Risks

    Lack of Regulation Fostering Unchecked Use of Personal Health Data. Debate over Future of Health Care System Must Address Need for Safeguards.

  • The new report by Cracked Labs, titled “Corporate Surveillance in Everyday Life,” with contributions by CDD’s Katharina Kopp, provides a powerful overview of the commercial surveillance infrastructure, key players, and trends. The report is available as a ten-part overview online and as a free, detailed 93-page PDF.
  • This report examines trends in digital marketing to youth that uses “immersive” techniques, social media, behavioral profiling, location targeting and mobile marketing, and neuroscience methods. It recommends principles for regulating inappropriate advertising to youth.
  • This report describes and provides examples of the types of digital marketing research utilized by the food and beverage industry and the potential effects it has on the health of children and adolescents. Researchers found that the food and beverage industry, together with the companies it contracts, is conducting three major types of research: 1) testing and deploying new marketing platforms, 2) creating new research methods to probe consumers’ responses to marketing, and 3) developing new means to assess the impact of new digital research on marketers’ profits. Researchers also found that the industry puts this research into action, specifically through its efforts to target communities of color and youth.
    Jeff Chester
  • This report summarizes how the online lead generation (or “lead gen”) business works. Companies that look as if they are offering you a loan are actually (often deceptively) collecting information about you to sell your profile (a “lead”) to the highest-bidding loan company (and often to fraudulent firms, too). At the end of the report, we offer consumer tips on what you can do to protect yourself. This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Project

    Private For-Profit Colleges and Online Lead Generation

    Private Universities Use Digital Marketing to Target Prospects, Including Veterans, via the Internet

    This report summarizes how companies that specialize in recruiting students to enroll at for-profit colleges use online lead generation (or “lead gen”) and other targeting tools. Websites that look like news sites or even colleges themselves are actually (often deceptively) collecting information about you to sell your profile (a “lead”) to the highest-bidding for-profit school. Many lead generators specialize in targeting veterans, because the schools will pay a higher fee to obtain access not only to federal student loan funds but also to federal veterans’ benefits, as we explain below. Many of these schools are under investigation or have even been shut down by government agencies for fraudulent practices. At the end of the report, we offer consumer tips on what you can do to protect yourself. This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Data-driven tools enable marketers and financial firms to specifically target any group, from students and veterans to ethnic groups. This report examines digital targeting and marketing to Hispanics, especially younger Hispanics, due to their growing economic clout and early adoption of mobile smartphones, which enables precision targeting based on behavior, geolocation, and language. Unfortunately, as the report explains, the outsized digital footprint of young Hispanics enables some of the worst elements of the digital economy – from predatory payday lenders to debt settlement companies – to target Hispanics through online lead generator schemes. This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Groups File Report with the White House “Big Data” Review Proceeding

    Washington, DC: U.S. PIRG Education Fund and the Center for Digital Democracy (CDD) released a comprehensive new report today focused on the realities of the new financial marketplace and the threats and opportunities its use poses to financial inclusion. The report examines the impact of digital technology, especially the unprecedented analytical and real-time actionable powers of “Big Data,” on consumer welfare. The groups immediately filed the report with the White House Big Data review headed by John Podesta, who serves as senior counselor to the President. The White House is to issue a report in April addressing the impact of “Big Data” practices on the public, including the possible need for additional consumer safeguards.

    In addition to the undeniable convenience of online and mobile banking, explains the report, the new financial environment poses a number of challenges, especially for lower-income consumers. Increasingly, the public confronts an invisible “e-scoring” system that may limit their access to credit and other financial services. “We are being placed under a powerful ‘Big Data’ lens, through which, without meaningful transparency or control, decisions about our financial futures are being decided,” the report explains.

    “Will big data tools be used to help banks and other financial firms offer lower-cost products that help the unbanked and underbanked join the insured financial system and build assets, or will big data simply make it easier for payday lenders and others seeking to extract money from consumers to win?” asked U.S. PIRG Education Fund Consumer Program Director Ed Mierzwinski. 
    “We intend the report to stimulate a healthy debate among policymakers, industry and consumer and civil rights leaders.”

    Among the issues examined in the report, “Big Data Means Big Opportunities and Big Challenges: Promoting Financial Inclusion and Consumer Protection in the ‘Big Data’ Financial Era,” are the following: the plight of “underbanked and unbanked consumers,” who face special challenges in the new financial marketplace; the impact of data collection and targeted advertising on all Americans, most of whom have no idea that their personal data shape the offers they receive and the prices they pay online; the use of murky “lead generation” practices, especially by payday lenders and for-profit trade schools, to target veterans and others for high-priced financial and educational products; and the need for new regulatory oversight to protect consumers from potentially discriminatory and deceptive practices online.

    The report, co-authored by Ed Mierzwinski, Consumer Program Director of the U.S. PIRG Education Fund, and CDD Executive Director Jeff Chester, reflects on the role that online financial marketing played in the recent economic crisis, and provides a blueprint for how such problems can be avoided in the future. “Technological advances that collect, analyze, and make actionable consumer data,” the report concludes, “are now at the core of contemporary marketing. The public is largely unaware of these changes and there are few safeguards in this new marketplace. Economically vulnerable consumers, and especially youth, will be continually urged to spend their limited resources. Conversely, there are opportunities to use the same tools to urge consumers to budget, save and build assets.”

    “Consumers increasingly face a far-reaching system that uses data about them to predict and determine the products and services they are offered in the marketplace. 
    Federal safeguards that protect privacy and ensure members of the public are not subject to unfair and discriminatory financial practices are long overdue,” explained CDD’s Jeff Chester. “The White House ‘Big Data’ report should call for strong measures to ensure that the changing financial services marketplace operates in a fair and equitable manner.”

    A copy of the new report is available at www.democraticmedia.org and www.uspirgedfund.org.

    The Center for Digital Democracy is a nonprofit group working to educate the public about the impact of digital marketing on financial services, public health, consumer protection, and privacy. It has played a leading role at the FTC and in Congress to help promote the development of legal safeguards against behavioral targeting and other potentially invasive online data collection practices.

    U.S. PIRG Education Fund works to protect consumers and promote good government. We investigate problems, craft solutions, educate the public and offer Americans meaningful opportunities for civic participation.