CDD

Program Area: Digital Citizen

  • Press Release

    Advocates call for FTC action to rein in Meta’s abusive practices targeting kids and teens

    Letter from 31 organizations in tech advocacy, children’s rights, and health supports FTC action to halt Meta’s profiting off of young users’ sensitive data

    Contact:
    David Monahan, Fairplay: david@fairplayforkids.org
    Katharina Kopp, Center for Digital Democracy: kkopp@democraticmedia.org

    BOSTON / WASHINGTON, DC – June 13, 2023 – A coalition of leading advocacy organizations is standing up today to support the Federal Trade Commission’s recent order reining in Meta’s abusive practices aimed at kids and teens. Thirty-one groups, led by the Center for Digital Democracy, the Electronic Privacy Information Center (EPIC), Fairplay, and U.S. PIRG, sent a letter to the FTC saying “Meta has violated the law and its consent decrees with the Commission repeatedly and flagrantly for over a decade, putting the privacy of all users at risk. In particular, we support the proposal to prohibit Meta from profiting from the data of children and teens under 18. This measure is justified by Meta’s repeated offenses involving the personal data of minors and by the unique and alarming risks its practices pose to children and teens.”

    Comments from advocates:

    Katharina Kopp, Director of Policy, Center for Digital Democracy: “The FTC is fully justified to propose the modifications of Meta’s consent decree and to require it to stop profiting from the data it gathers on children and teens. There are three key reasons why. First, due to their developmental vulnerabilities, minors are uniquely harmed by Meta’s failure to comply repeatedly with its 2012 and 2020 settlements with the FTC, including its non-compliance with the federal children’s privacy law (COPPA); two, because Meta has failed for many years to even comply with the procedural safeguards required by the Commission, it is now time for structural remedies that will make it less likely that Meta can again disregard the terms of the consent decree; and three, the FTC must affirm its credibility and that of the rule of law and ensure that tech giants cannot evade regulation and meaningful accountability.”

    John Davisson, Director of Litigation, Electronic Privacy Information Center (EPIC): “Meta has had two decades to clean up its privacy practices after many FTC warnings, but consistently chose not to. That’s not ‘tak[ing] the problem seriously,’ as Meta claims—that’s lawlessness. The FTC was right to take decisive action to protect Meta’s most vulnerable users and ban Meta from profiting off kids and teens. It’s no surprise to see Meta balk at the legal consequences of its many privacy violations, but this action is well within the Commission’s power to take.”

    Haley Hinkle, Policy Counsel, Fairplay: “Meta has been under the FTC’s supervision in this case for over a decade now and has had countless opportunities to put user privacy over profit. The Commission’s message that you cannot monetize minors’ data if you can’t or won’t protect them is urgent and necessary in light of these repeated failures to follow the law. Kids and teens are uniquely vulnerable to the harms that result from Meta’s failure to run an effective privacy program, and they can’t wait for change any longer.”

    R.J. Cross, Director of U.S. PIRG’s Don’t Sell My Data campaign: “The business model of social media is a recipe for unhappiness. We’re all fed content about what we should like and how we should look, conveniently presented alongside products that will fix whatever problem with our lives the algorithm has just helped us discover. That’s a hard message to hear day in and day out, especially when you’re a teen. We’re damaging the self-confidence of some of our most impressionable citizens in the name of shopping. It’s absurd. It’s time to short circuit the business model.”

    ###
  • The Honorable Joseph R. Biden
    President of the United States
    The White House
    1600 Pennsylvania Avenue NW
    Washington, DC 20500

    May 23, 2023

    Dear President Biden:

    The undersigned civil rights, consumer protection, and other civil society organizations write to express concern about digital trade negotiations underway as part of the proposed Indo-Pacific Economic Framework (IPEF).

    Civil society advocates and officials within your own administration have raised increasing concern about discrimination, racial disparities, and inequities that may be “baked into” the algorithms that make decisions about access to jobs and housing, health care, prison sentencing, educational opportunity, insurance rates and lending, deployment of police resources, and much more. To address these injustices, we have advocated for anti-discrimination protections and algorithmic transparency and fairness. We have been pleased that these concepts are incorporated into your recent Executive Order on racial equity [1], as well as the White House’s AI Bill of Rights [2] and many other policy proposals. The DOJ, FTC, CFPB, and EEOC also recently released a joint statement underscoring their commitment to combating discrimination in automated systems [3]. Any trade agreement must be consistent with, and not undermine, these policies and the values they are advancing.

    Now, we have learned that the U.S. may be considering proposals for IPEF and other trade agreement negotiations that could sabotage efforts to prevent and remedy algorithmic discrimination, including provisions that could potentially preempt executive and Congressional legal authority to advance these goals. Such provisions may make it harder or impossible for Congress or executive agencies to adopt appropriate policies while also respecting our international trade commitments. For example, trade provisions that guarantee digital firms new secrecy rights over source code and algorithms could thwart potential algorithmic impact assessment and audit requirements, such as testing for racial bias or other violations of U.S. law and regulation. And because the trade negotiations are secret, we do not know how the exact language could affect pivotal civil rights protections. Including such industry-favored provisions in trade deals like IPEF would be a grievous error and undermine the Administration’s own policy goals.

    We urge the administration to not submit any proposals that could undermine the ability to protect the civil rights of people in the United States, particularly with regard to digital trade. Moreover, there is a great need for transparency in these negotiations. Text already proposed should be made public so the civil rights community and relevant experts can challenge any provisions that could undermine administration goals regarding racial equity, transparency, and fairness. We know that your administration shares our goals of advancing racial equity, including protecting the public from algorithmic discrimination. Thank you for your leadership in this area.

    For questions or further discussion, please contact Harlan Yu (harlan@upturn.org), David Brody (dbrody@lawyerscommittee.org), and Emily Peterson-Cassin (epetersoncassin@citizen.org).

    Sincerely,

    American Civil Liberties Union
    Center for Democracy & Technology
    Center for Digital Democracy
    Data & Society Research Institute
    Demand Progress Education Fund
    Electronic Privacy Information Center (EPIC)
    Fight for the Future
    Lawyers’ Committee for Civil Rights Under Law
    The Leadership Conference on Civil and Human Rights
    NAACP
    National Urban League
    Public Citizen
    Sikh American Legal Defense and Education Fund
    Upturn

    CC: Secretary of Commerce Gina Raimondo
    U.S. Trade Representative Katherine Tai
    National Economic Council Director Lael Brainard
    National Security Advisor Jake Sullivan
    Domestic Policy Council Director Susan Rice
    Incoming Domestic Policy Council Director Neera Tanden
    Domestic Policy Council Deputy Director for Racial Justice and Equity Jenny Yang

    [1] Exec. Order No. 14091, 88 Fed. Reg. 10825, Feb. 16, 2023, available at https://www.federalregister.gov/documents/2023/02/22/2023-03779/further-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal.
    [2] The White House, Blueprint for an AI Bill of Rights, Oct. 22, 2022, available at https://www.whitehouse.gov/ostp/ai-bill-of-rights.
    [3] Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, CFPB, DOJ, EEOC, FTC, April 25, 2023, available at https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf.
  • Commercial Surveillance expands via the “Big” Screen in the Home

    Televisions now view and analyze us—the programs we watch, what shows we click on to consider or save, and the content reflected on the “glass” of our screens. On “smart” or connected TVs, streaming TV applications have been engineered to fully deliver the forces of commercial surveillance. Operating stealthily inside digital television sets and streaming video devices is an array of sophisticated “adtech” software. These technologies enable programmers, advertisers and even TV set manufacturers to build profiles used to generate data-driven, tailored ads for specific individuals or households. These developments raise important questions for those concerned about the transparency and regulation of political advertising in the United States.

    Also known as “OTT” (“over-the-top,” since the video signal is delivered without relying on traditional set-top cable TV boxes), the streaming TV industry incorporates the same online advertising techniques employed by other digital marketers. This includes harvesting a cornucopia of information on viewers through alliances with leading data brokers. More than 80 percent of Americans now use some form of streaming or smart TV-connected video service. Given such penetration, it is no surprise that streaming TV advertising is playing an important role in the upcoming midterm elections. And streaming TV will be an especially critical channel for campaigns to vie for voters in 2024.

    Unlike political advertising on broadcast television or much of cable TV, which is generally transmitted broadly to a defined geographic market area, “addressable” streaming video ads appear in programs advertisers know you actually watch (using technologies such as dynamic ad insertion). Messaging for these ads can also be fine-tuned as a campaign progresses, to make the message more relevant to the intended viewer. For example, if you watch a political ad and then sign up to receive campaign literature, the next TV commercial from a candidate or PAC can be crafted to reflect that action. Or, if your data profile says you are concerned about the costs of healthcare, you may see a different pitch than your next-door neighbor who has other interests. Given the abundance of data available on households, including demographic details such as race and ethnicity, there will also be finely tuned pitches aimed at distinct subcultures, produced in multiple languages.

    An estimated $1.4 billion will be spent on streaming political ads for the midterms (part of an overall $9 billion in ad expenditures). With more people “cutting the cord” by signing up for cheaper, ad-supported streaming services, advances in TV technologies to enable personalized data-driven ad targeting, and the integration of streaming TV as a key component of the overall online marketing apparatus, it is evident that the TV business has changed. Even what’s considered traditional broadcasting has been transformed by digital ad technologies. That’s why it’s time to enact policy safeguards to ensure integrity, fairness, transparency and privacy for political advertising on streaming TV. Today, streaming TV political ads already combine information from voter records with online and offline consumer profile data in order to generate highly targeted messages.
By harvesting information related to a person’s race and ethnicity, finances, health concerns, behavior, geolocation, and overall digital media use, marketers can deliver ads tied to our needs and interests. In light of this unprecedented marketing power and precision, new regulations are needed to protect consumer privacy and civic discourse alike. In addition to ensuring voter privacy, so personal data can’t be as readily used as it is today, the messaging and construction of streaming political ads must also be accountable. Merely requiring the disclosure of who is buying these ads is insufficient. The U.S. should enact a set of rules to ensure that the tens of thousands of one-to-one streaming TV ads don’t promote misleading or false claims, or engage in voter suppression and other forms of manipulation. Journalists and campaign watchdogs must have the ability to review and analyze ads, and political campaigns need to identify how they were constructed—including the information provided by data brokers and how a potential voter’s viewing behaviors were analyzed (such as with increasingly sophisticated machine learning and artificial intelligence algorithms). For example, data companies such as Acxiom, Experian, Ninth Decimal, Catalina and LiveRamp help fuel the digital video advertising surveillance apparatus. Campaign-spending reform advocates should be concerned. To make targeted streaming TV advertising as effective as possible will likely require serious amounts of money—for the data, analytics, marketing and distribution. Increasingly, key gatekeepers control much of the streaming TV landscape, and purchasing rights to target the most “desirable” people could face obstacles. For example, smart TV makers– such as LG, Roku, Vizio and Samsung– have developed their own exclusive streaming advertising marketplaces. Their smart TVs use what’s called ACR—”automated content recognition”—to collect data that enables them to analyze what appears on our screens—“second by second.” An “exclusive partnership to bring premium OTT inventory to political clients” was recently announced by LG and cable giant Altice’s ad division. This partnership will enable political campaigns that qualify to access 30 million households via Smart TVs, as well as the ability to reach millions of other screens in households known to Altice. Connected TVs also provide online marketers with what is increasingly viewed as essential for contemporary digital advertising—access to a person’s actual identity information (called “first-party” data). Streaming TV companies hope to gain permission to use subscriber information in many other ways. This practice illustrates why the Federal Trade Commission’s (FTC) current initiative designed to regulate commercial surveillance, now in its initial stage, is so important. Many of the critical issues involving streaming political advertising could be addressed through strong rules on privacy and online consumer protection. For example, there is absolutely no reason why any marketer can so easily obtain all the information used to target us, such as our ethnicity, income, purchase history, and education—to name only a few of the variables available for sale. Nor should the FTC allow online marketers to engage in unfair and largely stealth tactics when creating digital ads—including the use of neuroscience to test messages to ensure they respond directly to our subconscious. 
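To make the ACR mechanism described above a bit more concrete, here is a minimal sketch of how a fingerprint-and-lookup design for automated content recognition might work. It is purely illustrative: the function names, the reference index, and the use of a simple hash are my own assumptions, not any TV manufacturer's actual implementation.

```python
# Illustrative sketch of ACR-style viewing detection (hypothetical, not any
# vendor's real code). A smart TV periodically fingerprints what is on screen
# and looks the fingerprint up in a reference index of known programs and ads;
# matches become viewing events tied to the household.
import hashlib
from typing import Optional

def fingerprint(frame_pixels: bytes) -> str:
    # Real ACR systems use perceptual hashes that tolerate scaling and
    # compression; a cryptographic hash stands in here only for illustration.
    return hashlib.sha256(frame_pixels).hexdigest()[:16]

# Hypothetical reference index: fingerprint -> (content_id, offset_seconds)
REFERENCE_INDEX: dict[str, tuple[str, int]] = {}

def recognize(frame_pixels: bytes, household_id: str, second: int) -> Optional[dict]:
    match = REFERENCE_INDEX.get(fingerprint(frame_pixels))
    if match is None:
        return None
    content_id, offset = match
    # This viewing event is what gets logged "second by second" and later
    # joined with voter files and consumer profiles for ad targeting.
    return {"household": household_id, "content": content_id,
            "offset_seconds": offset, "observed_at": second}

# Populate the index from reference footage (hypothetical content ID), then match.
REFERENCE_INDEX[fingerprint(b"frame-from-known-ad")] = ("example_ad_30s", 12)
print(recognize(b"frame-from-known-ad", household_id="hh-001", second=3605))
```

Even in this toy form, the point is visible: the raw material ACR produces is a per-household, second-by-second log of what appears on the screen, which can then be joined with voter files and consumer profiles.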
The Federal Communications Commission (FCC), which has largely failed to address 21st-century video issues, should conduct its own inquiry “in the public interest.” There is also a role here for the states, reflecting their laws on campaign advertising as well as ensuring the privacy of streaming TV viewers.

This is precisely the time for policies on streaming video, as the industry becomes much more reliant on advertising and data collection. Dozens of new ad-supported streaming TV networks are emerging—known as FAST channels (Free Ad-Supported TV)—which offer a slate of scheduled shows with commercials. Netflix and Disney+, as well as Amazon, have adopted or will soon adopt ad-supported viewing. There are also coordinated industry-wide efforts, involving advertisers, programmers and device companies, to perfect ways to more efficiently target and track streaming viewers. Without regulation, the U.S. streaming TV system will be a “rerun” of what we historically experienced with cable TV—dashed expectations of a medium that could be truly diverse—instead of a monopoly—and also offer both programmers and viewers greater opportunities for creative expression and public service. Only those with the economic means will be able to afford to “opt out” of the advertising and some of the data surveillance on streaming networks. And political campaigns will be allowed to reach individual voters without worrying about privacy or the honesty of their messaging. Both the FTC and FCC, and Congress if it can muster the will, have an opportunity to make streaming TV a well-regulated, important channel for democracy. Now is the time for policymakers to tune in.

***

This essay was originally published by Tech Policy Press. Support for the Center for Digital Democracy’s review of the streaming video market is provided by the Rose Foundation for Communities and the Environment.
    Jeff Chester
  • Time for the FTC to intervene as marketers create new ways to leverage our “identity” data as cookies “crumble”

    For decades, the U.S. has allowed private actors to basically create the rules regarding how our data is gathered and used online. A key reason that we do not have any real privacy for digital media is precisely because it has principally been online marketing interests that have shaped how the devices, platforms and applications we use ensnare us in the commercial surveillance complex. The Interactive Advertising Bureau (IAB) has long played this role through an array of standards committees that address everything from mobile devices to big data-driven targeting to ads harnessing virtual reality, to name a few.

    As this blog has previously covered, U.S. commercial online advertising, spearheaded by Google, The Trade Desk and others, is engaged in a major transformation of how it processes and characterizes data used for targeted marketing. For various reasons, the traditional ways we are profiled and tracked through the use of “cookies” are being replaced by a variety of schemes that enable advertisers to know and take advantage of our identities, but which they believe will (somehow!) pass muster with any privacy regulations now in force or potentially enacted. What’s important is that, regardless of the industry rhetoric that these approaches will empower a person’s privacy, at the end of the day they are designed to ensure that the comprehensive tracking and targeting system remains firmly in place.

    As an industry trade organization, the IAB serves as a place to generate consensus, or agreed-upon formats, for digital advertising practices. To help the industry’s search for a way to maintain its surveillance business model, it has created what’s called “Project Rearc” to “re-architect digital marketing.” The IAB explains that Project Rearc “is a global call-to-action for stakeholders across the digital supply chain to re-think and re-architect digital marketing to support core industry use cases, while balancing consumer privacy and personalization.” It has set up a number of industry-run working groups to advance various components of this “re-architecting,” including what’s called an “Accountability Working Group.” Its members include Experian, Facebook, Google, Axel Springer, Nielsen, Pandora, TikTok, Publicis, Group M, Amazon, IABs from the EU, Australia, and Canada, Disney, Microsoft, Adobe, News Corp., Roku and many more (including specialist companies with their own “identity” approaches for digital marketing, such as Neustar and LiveRamp).

    The IAB Rearc effort has put out for “public comment” a number of proposed approaches for addressing elements of the new ways to target us via identifiers, cloud processing, and machine learning. Earlier this year, for example, it released for comment proposed standards on a “Global Privacy Platform,” an “Accountability Platform,” “Best Practices for User-Enabled Identity Tokens,” and a “Taxonomy and Data Transparency Standards to Support seller-defined Audience and Context Signaling.” Now it has released for public comment (due by November 12, 2021) a proposed method to “Increase Transparency Across Entire Advertising Supply Chain for New ID usage.” This proposal involves critical elements on the data collected about us and how it can be used.
It is designed to “provide a standard way for companies to declare which user identity sources they use” and “ease ad campaign execution between advertisers, publishers, and their chosen technology providers….” (A purely hypothetical sketch of what such a declaration might look like appears at the end of this post.) This helps online advertisers use “different identity solutions that will replace the role of the third-party cookie,” explains the IAB. While developed in part for a “transparent supply chain” and to help build “auditable data structures to ensure consumer privacy,” its ultimate function is to enable marketers to “activate addressable audiences.” In other words, it’s all about continuing to ensure that digital marketers are able to build and leverage numerous individual and group identifiers to empower their advertising activities, and withstand potential regulatory threats about privacy violations.

The IAB’s so-called public comment system is primarily designed for the special interests whose business model is the mass monetization of all our data and behaviors. We should not allow these actors to define how our everyday experiences with data operate, especially when privacy is involved. The longstanding arrangement in which the IAB and online marketers set many of the standards for our online lives should be challenged—by the FTC, Congress, state AGs and everyone else working on these issues.

We—the public—should be determining our “digital destiny”—not the same people that gave us surveillance marketing in the first place.
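As promised above, here is a purely hypothetical sketch of what a supply-chain declaration of "user identity sources" could look like. It is not the IAB's actual schema or terminology; every field name below is invented for illustration only.

```python
# Hypothetical example of a publisher or ad-tech firm declaring which identity
# sources it passes into the advertising supply chain. This is NOT the IAB's
# real format; the fields and values are assumptions made for this sketch.
declared_identity_sources = [
    {
        "source": "example-id.test",              # hypothetical identity provider
        "id_type": "hashed_email",                 # e.g. an email-derived identifier
        "used_for": ["bid_requests", "measurement"],
        "consent_signal_required": True,
    },
    {
        "source": "device-graph.test",
        "id_type": "cross_device_graph_id",
        "used_for": ["bid_requests"],
        "consent_signal_required": True,
    },
]

for entry in declared_identity_sources:
    print(f'{entry["source"]}: {entry["id_type"]} -> {", ".join(entry["used_for"])}')
```

Whatever the real format turns out to be, a declaration standard of this kind tells supply-chain partners which identifiers are in play; it does not, by itself, limit what data is collected or how people are profiled.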
    Jeff Chester
  • Blog

    The Big Data Merger Gold Rush to Control Your “Identity” Information

    Will the DoJ ensure that both competition and consumer protection in data markets are addressed?

    There is a digital data “gold rush” fever sweeping the data and marketing industry, as the quest to find ways to use data to determine a person’s “identity” for online marketing becomes paramount. This is triggered, in part, by the moves made by Google and others to replace “cookies” and other online identifiers with new, allegedly pro-privacy data-profiling methods that get the same results. We’ve addressed this privacy charade in other posts. In order to better position themselves in a world where knowing who we are and what we do is a highly valuable global currency, companies are pursuing an increasing number of mergers and acquisitions in the digital marketing and advertising sector.

    For example, last week data-broker giant TransUnion announced it is buying identity data company Neustar for $3.1 billion, to further expand its “powerful digital identity capabilities.” This is the latest in TransUnion’s buying spree to acquire data services companies that give it even more information on the U.S. public, including what we do on streaming media, via its 2020 takeovers of connected and streaming video data company Tru Optik and the data-management-focused Signal.

    In reviewing some of the business practices touted by TransUnion and Neustar, it’s striking that so little has changed in the decades CDD has been sounding the alarm about the impacts data-driven online marketing services have on society. These include the ever-growing privacy threats, as well as the machine-driven sorting of people and the manipulation of our behaviors. So far, nothing has derailed commercial Big Data marketing.

    With this deal, TransUnion is obtaining a treasure trove of data assets and capabilities. For Neustar, “identity is an actionable understanding of who or what is on the other end of every interaction and transaction.” Neustar’s “OneID system provides a single lens on the consumer across their dynamic omnichannel journey.” This involves “data management services featuring the collection, identification, tagging, tracking, analyzing, verification, correcting and sorting of business data pertaining to the identities, locations and personal information of and about consumers, including individuals, households, places, businesses, business entities, organizations, enterprises, schools, governments, points of interest, business practice characteristics, movements and behaviors of and about consumers via media devices, computers, mobile phones, tablets and internet connected devices.”

    Neustar keeps close track of people, saying that it knows that “the average person has approximately 15 distinct identifiers with an average of 8 connected devices” (and noting that an average household has more than 45 such distinct identifiers). Neustar has an especially close business partnership with Facebook, which enables marketers to better analyze how their ads translate into sales made on and spurred by that platform. Its “Customer Scoring and Segmentation” system enables advertisers to identify and classify targets so they can “reach the right customer with the right message in the right markets.”

    Neustar has a robust data-driven ad-targeting system called AdAdvisor, which reaches 220 million adults in “virtually every household in the U.S.” AdAdvisor “uses past behavior to predict likelihood of future behavior” and involves “thousands of data points available for online targeting” (including the use of “2 billion records a month from authoritative offline sources”). Its “Propensity Audiences” service helps marketers predict the behaviors of people, incorporating such information as “customer-level purchase data for more than 230 million US consumers; weekly in-store transaction data from over 4,500 retailers; actual catalog purchases by more than 18 million households”; and “credit information and household-level demographics, used to build profiles of the buying power, disposable income and access to credit a given household has available.” Neustar offers its customers the ability to reach “propensity audiences” in order to target such product categories as alcohol, automotive, education, entertainment, grocery, life events, personal finance, and more. For example, companies can target people who have used their debit or credit cards, by the amount of insurance they have on their homes or cars, or by their “level of investable assets,” including whether they have a pension or other retirement funds. One also can discover people who buy a certain kitty litter or candy bar—the list of AdAdvisor possibilities is far-reaching.

    Another AdAdvisor application, “ElementOne,” comprises 172 segments that can be “leveraged in real time for both online and offline audience targeting.” The targeting categories should be familiar to anyone who is concerned about how groups of people are characterized by data brokers and others. For example, one can select “Segment 058” (high-income rural younger renters with and without children), “Segment 115” (middle-income city older homeowners without children), or any segment from 151–172 to reach “low income” Americans who are renters, homeowners, have or don’t have kids, live in rural or urban areas, and the like.

    Marketers can also use AdAdvisor to determine the geolocation behaviors of their targets, through partnerships that provide Neustar with “10 billion daily location signals from 250+ million opted-in consumers.” In other words, Neustar knows whether you walked into that liquor store, grocery chain, hotel, entertainment venue, or shop. It also has data on what you view on TV, streaming video, and gaming. And it’s not just consumers whom Neustar tracks and targets. Companies can access its “HealthLink Dimensions Doctor Data” to target 1.7 million healthcare professionals who work in more than 400 specialties, including acute care, family practice, pediatrics, and cardiovascular surgery.

    TransUnion is already a global data and digital marketing powerhouse, with operations in 30 countries and 8,000 clients that include 60 of the Fortune 100. What it calls its “TruAudience Marketing Solutions” is built on a foundation of “insight into 98% of U.S. adults and more than 127 million homes, including 80 million connected homes.” Its “TruAudience Identity” product provides “a three-dimensional, omnichannel view of individuals, devices and households… [enabling] precise, scalable identity across offline, digital and streaming environments.” It offers marketers and others a method to secure what it terms an “identity resolution,” which is defined as “the process of matching identifiers across devices and touchpoints to a single profile [that] helps build a cohesive, omnichannel view of a consumer….” (A simplified, purely illustrative sketch of how such matching works appears at the end of this post.)

    TransUnion, known historically as one of the Big Three credit bureaus, has pivoted to become a key source of data and applications for digital marketing. It isn’t the only company expanding what is called an “ID Graph”—the ways all our data are gathered for profiling. However, given its already vast storehouse of information on Americans, it should not be allowed to devour another major data-focused marketing enterprise.

    Since this merger is now before the U.S. Department of Justice—as opposed to the Federal Trade Commission—there isn’t a strong likelihood that, in addition to examining the competitive implications of the deal, there will also be a focus on what it really means for people, in terms of further loss of privacy, their autonomy and their potential vulnerability to manipulative and stealthy marketing applications that classify and segment us in a myriad of invisible ways. Additionally, the use of such data systems to identify communities of color and other groups that confront historic and current obstacles to their well-being should also be analyzed by any competition regulator.

    In July, the Biden Administration issued an Executive Order on competition that called for a more robust regime to deal with mergers such as TransUnion-Neustar. According to that order, “It is also the policy of my Administration to enforce the antitrust laws to meet the challenges posed by new industries and technologies, including the rise of the dominant Internet platforms, especially as they stem from serial mergers, the acquisition of nascent competitors, the aggregation of data, unfair competition in attention markets, the surveillance of users, and the presence of network effects.”

    We hope the DOJ will live up to this call and address mergers such as this one; the quest for data is a key reason why these kinds of buyouts happen with regularity. There should also be a way for the FTC—especially under the leadership of Chair Lina Khan—to play an important role evaluating this and similar transactions. There’s more at stake than competition in the data-broker or digital advertising markets. Who controls our information and how that information is used are the fundamental questions that will determine our freedom and our economic opportunities. As the Big Data marketplace undergoes a key transition, developing effective policies that protect both privacy and competition is precisely why this moment is so vitally important.
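As a purely illustrative aside, the "matching identifiers across devices and touchpoints to a single profile" that Neustar describes can be sketched in a few lines. This is a toy union-find over an identity graph built on my own assumptions, not TransUnion's or Neustar's actual system.

```python
# Minimal, hypothetical sketch of "identity resolution": merging identifiers
# (cookies, device IDs, hashed emails, smart-TV IDs) that are observed together
# into one person/household profile. Not any vendor's actual code.
class IdentityGraph:
    def __init__(self):
        self.parent: dict[str, str] = {}

    def _find(self, x: str) -> str:
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def link(self, id_a: str, id_b: str) -> None:
        # Called whenever two identifiers are seen together, e.g. a login
        # event ties a hashed email to a device ID.
        self.parent[self._find(id_a)] = self._find(id_b)

    def profile_key(self, any_id: str) -> str:
        # Every identifier in the same connected component resolves to the
        # same profile: the "single lens on the consumer."
        return self._find(any_id)

# Example: a cookie, a mobile ad ID, and a smart-TV ID end up in one profile.
g = IdentityGraph()
g.link("cookie:abc123", "hashed_email:7f9c")
g.link("hashed_email:7f9c", "mobile_ad_id:XYZ")
g.link("mobile_ad_id:XYZ", "smart_tv_id:TV42")
assert g.profile_key("cookie:abc123") == g.profile_key("smart_tv_id:TV42")
```

The toy version also shows why scale matters: once billions of such links are ingested from partners and data brokers, almost any identifier a person leaves behind resolves back to the same household or individual profile.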
    Jeff Chester
  • Blog

    Surveillance Marketing Industry Claims Future of an “Open Internet” Requires Massive Data Gathering

    New ways to take advantage of your “identity” raise privacy, consumer-protection and competition issues

    The Trade Desk is a leading AdTech company, providing data-driven digital advertising services to major brands and agencies. It is also playing an outsized role responding to the initiative led by Google to create new, allegedly “privacy-friendly” approaches to ad targeting, which include ending the use of what are called “third-party” cookies. These cookies enable the identification and tracking of individuals, and have been an essential building block for surveillance advertising since the dawn of the commercial Internet.

    As we explained in a previous post about the so-called race to “end” the use of cookies, the online marketing industry is engaged in a full-throated effort to redefine how our privacy is conceptualized and privately governed. Pressure from regulators (such as the EU’s GDPR) and growing concerns about privacy from consumers are among the reasons why this is happening now. But the real motivation, in my view, is that the most powerful online ad companies and global brands (such as Google, Amazon and The Trade Desk) don’t need these antiquated cookies anymore. They have so much of our information that they collect directly, plus data available from countless partners (such as global brands). Additionally, they now have many new ways to determine who we are—our “identity”—including through the use of AI, machine learning and data clouds.

    “Unified ID 2.0” is what The Trade Desk calls its approach to harvesting our identity information for advertising. Like Google, they claim to be respectful of data protection principles. Some of the most powerful companies in the U.S. are supporting the Unified ID standard, including Walmart, Washington Post, P&G, Comcast, CBS, Home Depot, Oracle, and Nielsen. But more than our privacy is at stake as data marketing giants fight over how best to reap the financial rewards of what is predicted eventually to become a trillion-dollar global ad marketplace. This debate is increasingly focused on the very future of the Internet itself, including how it is structured and governed.

    Only by ensuring that advertisers can continue to successfully operate powerful data-gathering and ad-targeting systems, argues Trade Desk CEO Jeff Green, can the “Open Internet” be preserved. His argument, of course, is a digital déjà vu version of what media moguls have said in the U.S. dating back to commercial radio in the 1930s. Only with a full-blown, ad-supported (and regulation-free) electronic media system, whether it was broadcast radio, broadcast TV, or cable TV, could the U.S. be assured it would enjoy a democratic and robust communications environment. (I was in the room at the Department of Commerce back in the mid-1990s when advertisers were actually worried that the Internet would be largely ad-free; the representative from P&G leaned over to tell me that they never would let that happen—and he was right.) Internet operations are heavily shaped to serve the needs of advertisers, who have reworked its architecture to ensure we are all commercially surveilled. For decades, the online ad industry has continually expanded ways to monetize our behaviors, emotions, location and much more.

    Last week, The Trade Desk unveiled its latest iteration using Unified ID 2.0—called Solimar (see video here). Solimar uses “an artificial intelligence tool called Koa, which makes suggestions” to help ensure effective marketing campaigns. Reflecting the serial partnerships that operate to provide marketers with a gold mine of information on any individual, The Trade Desk has a “Koa Identity Alliance,” a “cross-device graph that incorporates leading and emerging ID solutions such as LiveRamp Identity Link, Oracle Cross Device, Tapad Device Graph, and Adbrain Device Graph.” This system, they say, creates an effective way for marketers to develop a data portrait of individual consumers.

    It’s useful to hear what companies such as The Trade Desk say as we evaluate claims that “big data” consumer surveillance operations are essential for a democratically structured Internet. In its most recent Annual Report, the company explains that “Through our self-service, cloud-based platform, ad buyers can create, manage, and optimize more expressive data-driven digital advertising campaigns across ad formats and channels, including display, video, audio, in-app, native and social, on a multitude of devices, such as computers, mobile devices, and connected TV (‘CTV’)…. We use the massive data captured by our platform to build predictive models around user characteristics, such as demographic, purchase intent or interest data. Data from our platform is continually fed back into these models, which enables them to improve over time as the use of our platform increases.”

    And here’s how The Trade Desk’s Koa process is described in the trade publication Campaign Asia: “…clients can specify their target customer in the form of first-party or third-party data, which will serve as a seed audience that Koa will model from to provide recommendations. A data section provides multiple options for brands to upload first-party data including pixels, app data, and IP addresses directly into the platform, or import data from a third-party DMP or CDP. If a client chooses to onboard CRM data in the form of email addresses, these will automatically be converted into UID2s. Once converted, the platform will scan the UID2s to evaluate how many are ‘active UID2s’, which refers to how many of these users have been active across the programmatic universe in the past week. If the client chooses to act on those UID2s, they will be passed into the programmatic ecosystem to match with the publisher side, building the UID2 ecosystem in tandem. For advertisers that don’t have first-party data… an audiences tab allows advertisers to tap into a marketplace of second- and third-party data so they can still use interest segments, purchase intent segments and demographics.” (A simplified, purely illustrative sketch of this email-to-identifier conversion appears at the end of this post.)

    In other words, these systems have a ton of information about you. They can easily get even more data and engage in the kinds of surveillance advertising that regulators and consumer advocates around the world are demanding be stopped. There are now dozens of competing “identity solutions”—including those from Google, Amazon, data brokers, telephone companies, etc. (See visual at bottom of page here.) The stakes here are significant—how will the Internet evolve in terms of privacy, and will its core “DNA” be ever-growing forms of surveillance and manipulation?
How do we decide the most privacy-protective ways to ensure meaningful monetization of online content—and must funding for such programming only be advertising-based? In what ways are some of these identity proposals a way for powerful platforms such as Google to further expand its monopolistic control of the ad market? These and other questions require a thoughtful regulator in the U.S. to help sort this out and make recommendations to ensure that the public truly benefits. That’s why it’s time for the U.S. Federal Trade Commission to step in. The FTC should analyze these advertising-focused identity efforts; assess their risks and the benefits; address how to govern the collection and use of data where a person has supposedly given permission to a brand or store to use it (known as “first-party” data). A key question, given today’s technologies, is whether meaningful personal consent for data collection is even possible in a world driven by sophisticated and real-time AI systems that personalize content and ads? The commission should also investigate the role of data-mining clouds and other so-called “clean” rooms where privacy is said to prevail despite their compilation of personal information for targeted advertising. The time for private, special interests (and conflicted) actors to determine the future of the Internet, and how our privacy is to be treated, is over.
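As noted above, the Campaign Asia description has CRM email addresses being "automatically converted into UID2s." The sketch below shows the general idea of deriving a pseudonymous identifier from a normalized, hashed email. It is a simplified illustration based on my own assumptions, not The Trade Desk's actual Unified ID 2.0 specification; the normalization rules and salting here are invented for the example.

```python
# Simplified illustration of converting a CRM email address into a
# pseudonymous ad identifier, as the UID2 description above suggests.
# NOT the actual Unified ID 2.0 algorithm; details are assumptions.
import hashlib

def normalize_email(email: str) -> str:
    # Lowercase and trim so variants of the same address collapse together.
    local, _, domain = email.strip().lower().partition("@")
    return f"{local}@{domain}"

def to_pseudonymous_id(email: str, salt: str = "rotating-salt") -> str:
    # Hashing makes the identifier look anonymous, but anyone holding the
    # same email and salt can reproduce it and match records across parties.
    digest = hashlib.sha256((normalize_email(email) + salt).encode()).hexdigest()
    return digest[:32]

crm_emails = ["Jane.Doe@example.com", " jane.doe@EXAMPLE.com "]
ids = {to_pseudonymous_id(e) for e in crm_emails}
print(ids)  # both variants collapse to one identifier, i.e. one addressable person
```

That collapsing of every variant of an email address into one stable identifier is what turns a marketer's CRM file into an "addressable audience" that can be matched on the publisher side.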
    Jeff Chester
  • Press Release

    Against surveillance-based advertising

    CDD joins an international coalition of more than 50 NGOs and scholars calling on the EU to ban surveillance-based advertising in its Digital Services Act and on the U.S. to enact a federal digital privacy and civil rights law

    International coalition calls for action against surveillance-based advertising

    Every day, consumers are exposed to extensive commercial surveillance online. This leads to manipulation, fraud, discrimination and privacy violations. Information about what we like, our purchases, mental and physical health, sexual orientation, location and political views are collected, combined and used under the guise of targeted advertising.

    In a new report, the Norwegian Consumer Council (NCC) sheds light on the negative consequences that this commercial surveillance has on consumers and society. Together with [XXX] organizations and experts, the NCC is asking authorities on both sides of the Atlantic to consider a ban. In Europe, the upcoming Digital Services Act can lay the legal framework to do so. In the US, legislators should seize the opportunity to enact comprehensive privacy legislation that protects consumers.

    “The collection and combination of information about us not only violates our right to privacy, but renders us vulnerable to manipulation, discrimination and fraud. This harms individuals and society as a whole,” says the director of digital policy at the NCC, Finn Myrstad.

    In a Norwegian population survey conducted by YouGov on behalf of the NCC, consumers clearly state that they do not want commercial surveillance. Just one out of ten respondents was positive toward commercial actors collecting personal information about them online, while only one out of five thought that ads based on personal information are acceptable.

    “Most of us do not want to be spied on online, or receive ads based on tracking and profiling. These results mirror similar surveys from Europe and the United States, and should be a powerful signal to policymakers looking at how to better regulate the internet,” Myrstad says.

    Policymakers and civil society organisations on both sides of the Atlantic are increasingly standing up against these invasive practices. For example, the European Parliament and the European Data Protection Supervisor (EDPS) have already called for phasing out and banning surveillance-based advertising. A coalition of consumer and civil rights organizations in the United States has called for a similar ban.

    Significant consequences

    The NCC report ‘Time to ban surveillance-based advertising’ exposes a variety of harmful consequences that surveillance-based advertising can have on individuals and on society:

    1. Manipulation: Companies with comprehensive and intimate knowledge about us can shape their messages in attempts to reach us when we are susceptible, for example to influence elections or to advertise weight loss products, unhealthy food or gambling.

    2. Discrimination: The opacity and automation of surveillance-based advertising systems increase the risk of discrimination, for example by excluding consumers based on income, gender, race, ethnicity, sexual orientation or location, or by making certain consumers pay more for products or services.

    3. Misinformation: The lack of control over where ads are shown can promote and finance false or malicious content. This also poses significant challenges to publishers and advertisers regarding revenue, reputational damage, and opaque supply chains.

    4. Undermining competition: The surveillance business model favours companies that collect and process information across different services and platforms. This makes it difficult for smaller actors to compete, and negatively impacts companies that respect consumers’ fundamental rights.

    5. Security risks: When thousands of companies collect and process enormous amounts of personal data, the risk of identity theft, fraud and blackmail increases. NATO has described this data collection as a national security risk.

    6. Privacy violations: The collection and use of personal data is happening with little or no control, both by large companies and by companies that are unknown to most consumers. Consumers have no way to know what data is collected, who the information is shared with, and how it may be used.

    “It is very difficult to justify the negative consequences of this system. A ban will contribute to a healthier marketplace that helps protect individuals and society,” Myrstad comments.

    Good alternatives

    In the report, the NCC points to alternative digital advertising models that do not depend on the surveillance of consumers, and that provide advertisers and publishers more oversight and control over where ads are displayed and which ads are being shown.

    “It is possible to sell advertising space without basing it on intimate details about consumers. Solutions already exist to show ads in relevant contexts, or where consumers self-report what ads they want to see,” Myrstad says.

    “A ban on surveillance-based advertising would also pave the way for a more transparent advertising marketplace, diminishing the need to share large parts of ad revenue with third parties such as data brokers. A level playing field would contribute to giving advertisers and content providers more control, and keep a larger share of the revenue.”

    The coordinated push behind the report and letter illustrates the growing determination of consumer, digital rights, human rights and other civil society groups to end the widespread business model of spying on the public.
  • The Center for Digital Democracy and 23 other leading civil society groups sent a letter to President Biden today asking his Administration to ensure that any new transatlantic data transfer deal is coupled with the enactment of U.S. laws that reform government surveillance practices and provide comprehensive privacy protections.
  • The Whole World will Still be Watching You: Google & Digital Marketing Industry “Death-of-the-Cookie” Privacy Initiatives Require Scrutiny from Public Policymakers

    One would think, in listening to the language used by Google, Facebook, and other ad and data companies to discuss the construction and future of privacy protection, that they are playing some kind of word game. We hear terms such as “TURTLEDOVE,” “FLEDGE,” “SPARROW” and “FLoC,” all presented as privacy-protective. Such claims should be viewed with skepticism, however. Although some reports make it appear that Google and its online marketing compatriots propose to reduce data gathering and tracking, we believe that their primary goal is still focused on perfecting the vast surveillance system they’ve well established.

    A major data marketing industry effort is now underway to eliminate—or diminish—the role of the tracking software known as “third-party” cookies. Cookies were developed in the very earliest days of the commercial “World Wide Web,” and have served as the foundational digital tether connecting us to a sprawling and sophisticated data-mining complex. Through cookies—and later mobile device IDs and other “persistent” identifiers—Google, Facebook, Amazon, Coca-Cola and practically everyone else have been able to surveil and target us—and our communities. Tracking cookies have literally helped engineer a “sweet spot” for online marketers, enabling them to embed spies into our web browsers, which help them understand our digital behaviors and activities and then take action based on that knowledge. Some of these trackers—placed and used by a myriad of data marketing companies on various websites—are referred to as “third-party” cookies, to distinguish them from what online marketers claim, with a straight face, are more acceptable forms of tracking software—known as “first-party” cookies. According to the tortured online advertiser explanation, “first-party” trackers are placed by websites on which you have affirmatively given permission to be tracked while you are on that site. These “we-have-your-permission-to-use” first-party cookies would increasingly become the foundation for advances in digital tracking and targeting. Please raise your hand if you believe you have informed Google or Amazon, to cite the two most egregious examples, that they can surveil what you do via these first-party cookies, including engaging in an analysis of your actions, background, interests and more.

    What the online ad business has developed behind its digital curtain—such as various ways to trigger your response, measure your emotions, knit together information on device use, and employ machine learning to predict your behaviors (just to name a few of the methods currently in use)—has played a fundamental role in personal data gathering. Yet these and other practices—which have an enormous impact on privacy, autonomy, fairness, and so many other aspects of our lives—will not be affected by the “death-of-the-cookie” transition currently underway. On the contrary, we believe a case can be made that the opposite is true. Rather than strengthening data safeguards, we are seeing unaccountable platforms such as Google actually becoming more dominant, as so-called “privacy preserving” systems actually enable enhanced data profiling.

    In a moment, we will briefly discuss some of the leading online marketing industry work underway to redefine privacy. But the motivation for this post is to sound the alarm that we should not—once again—allow powerful commercial interests to determine the evolving structure of our online lives. The digital data industry has no serious track record of protecting the public. Indeed, it was the failure of regulators to rein in this industry over the years that led to the current crisis. In the process, the growth of hate speech, the explosion of disinformation, and the highly concentrated control over online communications and commerce—to name only a few—now pose serious challenges to the fate of democracies worldwide. Google, Facebook and the others should never be relied on to defer their principal pursuit of monetization out of respect to any democratic ideal—let alone consumer protection and privacy.

    One clue to the likely end result of the current industry effort is to see how they frame it. It isn’t about democracy, the end of commercial surveillance, or strengthening human rights. It’s about how best to preserve what they call the “Open Internet.” Some leading data marketers believe we have all consented to a trade-off: that in exchange for “free” content we’ve agreed to a pact enabling them to eavesdrop on everything we do—and then make all that information available to anyone who can pay for it—primarily advertisers. Despite its rhetoric about curbing tracking cookies, the online marketing business intends to continue to colonize our devices and monitor our online experiences. This debate, then, is really about who can decide—and under what terms—the fate of the Internet’s architecture, including how it operationalizes privacy—at least in the U.S. It illustrates questions that deserve a better answer than the “industry-knows-best” approach we have allowed for far too long. That’s why we call on the Biden Administration, the Federal Trade Commission (FTC) and the Congress to investigate these proposed new approaches for data use, and ensure that the result is truly privacy protective, supporting democratic governance and incorporating mechanisms of oversight and accountability.

    Here’s a brief review of some of the key developments, which illustrate the digital “tug-of-war” ensuing over the several industry proposals involving cookies and tracking. In 2019, Google announced that it would end the role of what’s known as “third-party cookies.” Google has created a “privacy sandbox” where it has researched various methods it claims will protect privacy, especially for people who rely on its Chrome browser. It is exploring “ways in which a browser can group together people with similar browsing habits, so that ad tech companies can observe the habits of large groups instead of the activity of individuals. Ad targeting could then be partly based on what group the person falls into.” This is its “Federated Learning of Cohorts” (FLoC) approach, where people are placed into “clusters” based on the use of “machine learning algorithms” that analyze the data generated from the sites a person visited and their content. Google says these clusters would “each represent thousands of people,” and that the “input features” used to generate the targeting algorithm, such as our “web history,” would be stored on our browsers.
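A minimal sketch of the cohort idea described above: browsing history is reduced to a cluster label, and only that label is exposed for ad targeting. This is an illustrative toy under my own assumptions (plain k-means over site-visit vectors), not Google's actual FLoC code.

```python
# Toy illustration of cohort-style ad targeting: each browser's site-visit
# history becomes a vector, vectors are clustered, and only the cluster
# ("cohort") ID is shared for ad targeting. Not Google's real FLoC.
from collections import Counter
import random

SITES = ["news", "sports", "finance", "parenting", "travel", "gaming"]

def history_vector(visited: list[str]) -> list[float]:
    counts = Counter(visited)
    total = sum(counts.values()) or 1
    return [counts[s] / total for s in SITES]

def assign_cohorts(vectors: list[list[float]], k: int = 3, rounds: int = 10) -> list[int]:
    # Plain k-means; a real system would run locally in the browser and add
    # noise / minimum-cohort-size rules before exposing the label.
    centers = random.sample(vectors, k)
    labels = [0] * len(vectors)
    for _ in range(rounds):
        labels = [min(range(k), key=lambda c: sum((v - centers[c][i]) ** 2
                  for i, v in enumerate(vec))) for vec in vectors]
        for c in range(k):
            members = [vec for vec, lab in zip(vectors, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

browsers = [["news", "finance", "news"], ["sports", "gaming"],
            ["parenting", "news"], ["gaming", "gaming", "sports"]]
cohorts = assign_cohorts([history_vector(b) for b in browsers])
print(cohorts)  # advertisers would see only these cohort IDs
```

Even in this toy form, the targeting signal survives the aggregation: the cohort label is designed to remain useful to advertisers.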
    There would be other techniques deployed, to add “noise” to the data sets and engage in various “anonymization methods” so that the exposure of a person’s individual information is limited. Its TURTLEDOVE initiative is designed to enable more personalized targeting, where web browsers will be used to help ensure our data is available for the real-time auctions that sell us to advertisers. The theory is that by allowing the data to remain within our devices, as well as by using clusters of people for targeting, our privacy is protected. But the underlying goal—to have sufficient data and effective digital marketing techniques—is still at the heart of this process. Google recently reported that “FLoC can provide an effective replacement signal for third-party cookies. Our tests of FLoC to reach in-market and affinity Google Audiences show that advertisers can expect to see at least 95% of the conversions per dollar spent when compared to cookie-based advertising.”

    Google’s 2019 announcement caused an uproar in the digital marketing business. It was also perceived (correctly, in my view) as a Google power grab. Google operates basically as a “Walled Garden” and has so much data that it doesn’t really need third-party cookies to home in on its targets. The potential “death of the cookie” ignited a number of initiatives from the Interactive Advertising Bureau, as well as competitors and major advertisers, who feared that Google’s plan would undermine their lucrative business model. They include such groups as the Partnership for Responsible Addressable Media (PRAM), whose 400 members include Mastercard, Comcast/NBCU, P&G, the Association of National Advertisers, the IAB and other ad and data companies. PRAM issued a request to review proposals that would ensure the data marketing industry continues to thrive, but could be less reliant on third-party cookies. Leading online marketing company The Trade Desk is playing a key role here. It submitted its “Unified ID 2.0” plan to PRAM, saying that it “represents an alternative to third party cookies that improves consumer transparency, privacy and control, while preserving the value exchange of relevant advertising across channels and devices.” There are also a number of other ways now being offered that claim both to protect privacy yet take advantage of our identity, such as various collaborative data-sharing efforts.

    The Internet standards group the World Wide Web Consortium (W3C) has created a sort of neutral meeting ground where the industry can discuss proposals and potentially seek some sort of unified approach. The stated goal of the [get ready for this statement] “Improving Web Advertising Business Group” is “to provide monetization opportunities that support the open web while balancing the needs of publishers and the advertisers that fund them, even when their interests do not align, with improvements to protect people from the individual and societal impacts of tracking content consumption over time.” Its participants are another “Who’s Who” in data-driven marketing, including Google, AT&T, Verizon, NYT, IAB, Apple, Group M, Axel Springer, Facebook, Amazon, Washington Post, and Criteo. DuckDuckGo is also a member (and both Google and Facebook have multiple representatives in this group). The sole NGO listed as a member is the Center for Democracy and Technology.

    The W3C’s ad business group has a number of documents about the digital marketing business that illustrate why the issue of the future of privacy and data collection and targeting should be a public—and not just data industry—concern. In an explainer on digital advertising, they make the paradigm so many are working to defend very clear: “Marketing’s goal can be boiled down to the ‘5 Rights’: Right Message to the Right Person at the Right Time in the Right Channel and for the Right Reason. Achieving this goal in the context of traditional marketing (print, live television, billboards, et al) is impossible. In the digital realm, however, not only can marketers achieve this goal, they can prove it happened. This proof is what enables marketing activities to continue, and is important for modern marketers to justify their advertising dollars, which ultimately finance the publishers sponsoring the underlying content being monetized.” Nothing I’ve read says it better.

    Through a quarter century of work to perfect harvesting our identity for profit, the digital ad industry has created a formidable complex of data clouds, real-time ad auctions, cross-device tracking tools and advertising techniques that further commodify our lives, shred our privacy, and transform the Internet into a hall of mirrors that can amplify our fears and splinter democratic norms. It’s people, of course, who decide how the Internet operates—especially those from companies such as Google, Facebook, Amazon, and those working for trade groups such as the IAB. We must not let them decide how cookies may or may not be used or what new data standard should be adopted by the most powerful corporate interests on the planet to profit from our “identity.” It’s time for action by the FTC and Congress.

    Part 1.

    (1) For the uninitiated, TURTLEDOVE stands for “Two Uncorrelated Requests, Then Locally-Executed Decision On Victory”; FLEDGE is short for “First Locally-Executed Decision over Groups Experiment”; SPARROW is “Secure Private Advertising Remotely Run On Webserver”; and FLoC is “Federated Learning of Cohorts.”

    (2) In January 2021, the UK’s Competition and Markets Authority (CMA) opened an investigation into Google’s privacy sandbox and cookie plans.
    Jeff Chester
  • CONSUMER AND CITIZEN GROUPS CONTINUE TO HAVE SERIOUS CONCERNS ABOUT GOOGLE FITBIT TAKEOVER

    Joint Statement on Possible Remedies