CDD

Newsroom

  • Press Release

Transatlantic Consumer Dialogue (TACD) Calls on White House and Administration to Take Immediate Action on Generative AI

Transatlantic Consumer Dialogue (TACD), a coalition of the leading consumer organizations in North America and Europe, asks policymakers on both sides of the Atlantic to act

The Honorable Joseph R. Biden
    President of the United States
    The White House
    1600 Pennsylvania Avenue NW
    Washington, DC 20500

    June 20, 2023

    Dear President Biden,

    We are writing on behalf of the Transatlantic Consumer Dialogue (TACD), a coalition of the leading consumer organizations in North America and Europe, to ask you and your administration to take immediate action regarding the rapid deployment of Generative Artificial Intelligence in a growing number of applications, such as text generators like ChatGPT, and the risks these entail for consumers. We are calling on policymakers and regulators on both sides of the Atlantic to use existing laws and regulations to address the problematic uses of Generative Artificial Intelligence; adopt a cautious approach to deploying Generative Artificial Intelligence in the public sector; and adopt new legislative measures to directly address Generative Artificial Intelligence harms. As companies rapidly develop and deploy this technology, outpacing legislative efforts, we cannot leave consumers unprotected in the meantime.

    Generative Artificial Intelligence systems are already widely used by consumers in the U.S. and beyond. For example, chatbots are increasingly incorporated into products and services by businesses. Although these systems are presented as helpful, saving time, costs, and labor, we are worried about the serious downsides and harms they may bring about. Generative Artificial Intelligence systems are incentivized to suck up as much data as possible to train the AI models, leading to the inclusion of personal data that may be irremovable once the sets have been established and the tools trained. Where training models include data that is biased or discriminatory, those biases become baked into the Generative Artificial Intelligence's outputs, creating increasingly biased and discriminatory content that is then disseminated. The large companies making advances in this space are already establishing monopolistic market concentration. Running Generative Artificial Intelligence tools requires enormous amounts of water and electricity, leading to heightened carbon emissions. And the speed and volume of information creation with these technologies accelerates the generation and spread of misinformation and disinformation.

    Three of our members (Public Citizen, the Electronic Privacy Information Center, and the Norwegian Consumer Council) have already published reports setting forth the specific harms of Generative Artificial Intelligence and proposing steps to counter them; we would be happy to discuss these with you. In addition, TACD has adopted policy principles which we believe are key to safely deploying Generative Artificial Intelligence. Our goal is to provide policymakers, lawmakers, enforcement agencies, and other relevant entities with a robust starting point to ensure that Generative Artificial Intelligence does not come at the expense of consumer, civil, and human rights.

    If left unchecked, these harms will become permanently entrenched in the use and development of Generative Artificial Intelligence. We are calling for actions that insist upon transparency, accountability, and safety in these Generative Artificial Intelligence systems, including ensuring that discrimination, manipulation, and other serious harms are eliminated. Where uses of Generative Artificial Intelligence are clearly harmful, or likely to be, they must be barred completely.
In order to combat the harms of Generative Artificial Intelligence, your administration must ensure that existing laws are enforced wherever they apply. New regulations must be passed that specifically address the serious risks and gaps in protection identified in the reports mentioned above. Companies and other entities developing Generative Artificial Intelligence must adhere to transparent and reviewable obligations. Finally, once binding standards are in place, the Trade and Technology Council must not undermine them.

    We welcome the administration's efforts on AI to protect Americans' rights and safety, particularly your efforts to center civil rights via executive action. Furthermore, we are encouraged to see the leading enforcement agencies underscore their collective commitment to leverage their existing legal authorities to protect the American people. But more must be done, and soon, especially for those already disadvantaged and the most vulnerable, including people of color and others who have been historically underserved and marginalized, as well as children and teenagers. We want to work with you to ensure that privacy and other consumer protections remain at the forefront of these discussions, even when new technology is involved.

    Sincerely,

    Finn Lützow-Holm Myrstad
    Director of Digital Policy, Norwegian Consumer Council
    European Co-Chair of TACD's Digital Policy Committee

    Calli Schroeder
    Senior Counsel and Global Privacy Counsel, EPIC
    U.S. Co-Chair of TACD's Digital Policy Committee

    Transatlantic Consumer Dialogue (TACD)
    Rue d'Arlon 80, B-1040 Brussels
    Tel. +32 (0)2 743 15 90
    www.tacd.org
    @TACD_Consumers
    EC register for interest representatives: identification number 534385811072-96
  • Press Release

    Advocates call for FTC action to rein in Meta’s abusive practices targeting kids and teens

    Letter from 31 organizations in tech advocacy, children’s rights, and health supports FTC action to halt Meta’s profiting off of young users’ sensitive data

Contact:
    David Monahan, Fairplay: david@fairplayforkids.org
    Katharina Kopp, Center for Digital Democracy: kkopp@democraticmedia.org

    BOSTON / WASHINGTON, DC – June 13, 2023 – A coalition of leading advocacy organizations is standing up today to support the Federal Trade Commission's recent order reining in Meta's abusive practices aimed at kids and teens. Thirty-one groups, led by the Center for Digital Democracy, the Electronic Privacy Information Center (EPIC), Fairplay, and U.S. PIRG, sent a letter to the FTC saying "Meta has violated the law and its consent decrees with the Commission repeatedly and flagrantly for over a decade, putting the privacy of all users at risk. In particular, we support the proposal to prohibit Meta from profiting from the data of children and teens under 18. This measure is justified by Meta's repeated offenses involving the personal data of minors and by the unique and alarming risks its practices pose to children and teens."

    Comments from advocates:

    Katharina Kopp, Director of Policy, Center for Digital Democracy: "The FTC is fully justified in proposing the modifications to Meta's consent decree and in requiring it to stop profiting from the data it gathers on children and teens. There are three key reasons why. First, due to their developmental vulnerabilities, minors are uniquely harmed by Meta's repeated failures to comply with its 2012 and 2020 settlements with the FTC, including its non-compliance with the federal children's privacy law (COPPA); second, because Meta has failed for many years to comply even with the procedural safeguards required by the Commission, it is now time for structural remedies that will make it less likely that Meta can again disregard the terms of the consent decree; and third, the FTC must affirm its credibility and that of the rule of law and ensure that tech giants cannot evade regulation and meaningful accountability."

    John Davisson, Director of Litigation, Electronic Privacy Information Center (EPIC): "Meta has had two decades to clean up its privacy practices after many FTC warnings, but consistently chose not to. That's not 'tak[ing] the problem seriously,' as Meta claims—that's lawlessness. The FTC was right to take decisive action to protect Meta's most vulnerable users and ban Meta from profiting off kids and teens. It's no surprise to see Meta balk at the legal consequences of its many privacy violations, but this action is well within the Commission's power to take."

    Haley Hinkle, Policy Counsel, Fairplay: "Meta has been under the FTC's supervision in this case for over a decade now and has had countless opportunities to put user privacy over profit. The Commission's message that you cannot monetize minors' data if you can't or won't protect them is urgent and necessary in light of these repeated failures to follow the law. Kids and teens are uniquely vulnerable to the harms that result from Meta's failure to run an effective privacy program, and they can't wait for change any longer."

    R.J. Cross, Director of U.S. PIRG's Don't Sell My Data campaign: "The business model of social media is a recipe for unhappiness.
We're all fed content about what we should like and how we should look, conveniently presented alongside products that will fix whatever problem with our lives the algorithm has just helped us discover. That's a hard message to hear day in and day out, especially when you're a teen. We're damaging the self-confidence of some of our most impressionable citizens in the name of shopping. It's absurd. It's time to short-circuit the business model."

    ###
  • "By clarifying what types of data constitute personal data under COPPA, the FTC ensures that COPPA keeps pace with the 21st century and the increasingly sophisticated practices of marketers," said Katharina Kopp, Director of Policy at the Center for Digital Democracy. "As interactive technologies evolve rapidly, COPPA must be kept up to date and reflect changes in the way children use and access these new media, including virtual and augmented realities. The metaverse typically involves a convergence of physical and digital lives, where avatars are digital extensions of our physical selves. We agree with the FTC that an avatar's characteristics and behavior constitute personal information. And as virtual and augmented reality interfaces allow for the collection of extensive sets of personal data, including sensitive and biometric data, this data must be considered personal information under COPPA. Without proper protections, this highly coveted data would be exploited by marketers and used to further manipulate and harm children online."
  • Contact: Katharina Kopp, kkopp [at] democraticmedia.org

    "We welcome the FTC's action to address the rampant commercial surveillance of children via Internet of Things (IoT) devices, such as Amazon's Echo, and to enforce existing law," said Katharina Kopp, Director of Policy at the Center for Digital Democracy. "Children's data is taken from them illegally and surreptitiously on a massive scale via IoT devices, including their voice recordings and data gleaned from kids' viewing, reading, listening, and purchasing habits. These violations in turn lead to further exploitation and manipulation of children and teens: they violate children's privacy, manipulate them into being interested in harmful products, undermine their autonomy and hook them on digital media, and perpetuate discrimination and bias. As Commissioner Bedoya's separate statement points out, with this proposed order the FTC warns companies that they cannot take data from children and teens (and others) illegitimately to develop even more sophisticated methods of taking advantage of them. Both the FTC and the Department of Justice must hold Amazon accountable."
  • FACT SHEET: Summary of the Kids Online Safety Act

    As Congressional hearings, media reports, academic research, whistleblower disclosures, and heartbreaking stories from youth and families have repeatedly shown, social media platforms have exacerbated the mental health crisis among children and teens, fostering body image issues, creating addiction-like use, promoting products that are dangerous for young audiences, and fueling destructive bullying. The Kids Online Safety Act (KOSA) provides children, adolescents, and parents with the tools, safeguards, and transparency they need to protect against threats to young people's health and wellbeing online. The design and operation of online platforms have a significant impact on these harms, such as recommendation systems that send kids down rabbit holes of destructive content, and weak protections against relentless bullying.

    KOSA would provide safeguards and accountability by:

      • Creating a duty of care for social media platforms to prevent and mitigate specific dangers to minors in the design and operation of their products, including the promotion of suicidal behaviors, eating disorders, substance use, sexual exploitation, advertisements for tobacco and alcohol, and more.
      • Requiring social media platforms to provide children and adolescents with options to protect their information, disable addictive product features, and opt out of algorithmic recommendations. Platforms are required to enable the strongest settings by default.
      • Giving parents new tools to help support their children and providing them (as well as schools) a dedicated reporting channel to raise issues (such as harassment or threats) with the platforms.

    How Online Harms Impact LGBTQ+ Communities

    Social media can be an important tool for self-discovery, expression, and community. However, online platforms have failed to take basic steps to protect their users from profound harm and have put profit ahead of safety. Companies have engineered their products to keep young users on their sites for as long as possible, even when the means of driving that use are harmful. In documents provided by a whistleblower, Facebook's own researchers described Instagram as a "perfect storm" that "exacerbates downward spirals" and produces hundreds of millions of dollars in revenue annually. This "perfect storm" has been shown by academic research and surveys to weigh most heavily on LGBTQ+ children and adolescents, who are more at risk of bullying, threats, and suicidal behaviors on social media. Some harms, and examples of the protections KOSA would provide, include:

    LGBTQ+ youth are more at risk of cyberbullying and harassment.
      • LGBTQ+ high school students consistently report higher rates of cyberbullying than their heterosexual peers, and suffer more severe forms of harassment, such as stalking, non-consensual imagery, and violent threats.
      • Surveys have found that 56% of LGBTQ+ students had been cyberbullied in their lifetime, compared to 32% of non-LGBTQ+ students.
      • One in three young LGBTQ+ people have said that they had been sexually harassed online, four times as often as other young people.

    LGBTQ+ youth are more at risk for eating disorders and substance use.
      • Young LGBTQ+ people experience significantly greater rates of eating disorders and substance use compared to their heterosexual and cisgender peers.
      • Transgender and nonbinary youth are at even higher risk for eating disorders, and Black LGBTQ+ youth are diagnosed at half the rate of their white peers.
      • Prolonged use of social media is linked with negative appearance comparison, which in turn increases the risk of eating disorder symptoms.
      • Engagement-based algorithms feed extreme eating disorders by recommending more eating disorder content to vulnerable users (every click or view sends more destructive content to a user). For example, TikTok began recommending eating disorder content within 8 minutes of a new account being created, and Instagram was found to deluge a new user with eating disorder recommendations within one day.

    How KOSA Will Help:
      • KOSA would require that platforms give users the ability to turn off engagement-based algorithms, or options to influence the recommendations they receive. A user would be able to stop recommendation systems that are sending them toxic content.
      • KOSA's duty of care requires platforms to prevent and mitigate cyberbullying. It also requires that platforms give users options to restrict messages from other users and to make their profiles private.
      • It would require platforms to provide a point of contact for users to report harassment, and mandates that platforms respond to these reports within a designated time frame.

    LGBTQ+ youth are more at risk of suicide and suicidal behaviors.
      • Exposure to hateful messaging online, in tandem with self-harm material on social media, increases the risk of suicidal behaviors and/or suicide. These risks are exacerbated when platform recommendation systems amplify hateful content and self-harm content. For example, after a new teen account was created on TikTok, suicide content was recommended in under three minutes.
      • Surveys have found that 42% of LGBTQ+ youth have seriously considered attempting suicide, including more than half of transgender and nonbinary youth.
      • Moreover, eating disorders, depression, bullying, substance use, and other mental health harms that fall harder on LGBTQ+ communities further increase the risks of self-harm and suicide.

    How KOSA Will Help:
    In addition to the core safeguards and options provided to kids, such as controls and transparency over algorithmic recommendation systems, KOSA's duty of care would require platforms to consider and address the ways in which their recommendation systems promote suicide and suicidal behaviors, creating incentives for platforms to provide self-help resources, uplift information about recovery, and prevent their algorithms from pushing users down rabbit holes of harmful and deadly content.

    Protections for LGBTQ+ Communities

    The reintroduction of the Kids Online Safety Act takes into account recommended edits from a diverse group of organizations, researchers, youth, and families. The outcome of this work by experts in the field and those with lived experience is a thoughtful and tailored bill designed to be a strong step in advancing a core set of accountability provisions that provide children, adolescents, and families with a safer online experience. Below is a summary comparing concerns with the previous bill text and the changes made for reintroduction.

    Concern with previous draft: The "duty of care" is too vague, creating liability for broad and undefined harms to children and teens.
    How the current draft protects LGBTQ+ youth: The duty of care is now limited to a set of specific harms that have been shown to be exacerbated by online platforms' product designs and algorithms. Specific harms are focused on serious threats to the wellbeing of young users, such as eating disorders, substance use, depression, anxiety, suicidal behaviors, physical violence, sexual exploitation, and the marketing of narcotics, tobacco, gambling, and alcohol. The terms used to describe those harms are linked to clinical or legal definitions where there is a perceived risk of misuse. In addition, the duty of care includes a limitation to ensure it is not construed to require platforms to block access to content that a young user specifically requests, or to evidence-informed medical information and support resources.

    Concern with previous draft: The inclusion of "grooming" in the duty of care could be weaponized against entities providing information about gender-affirming care.
    How the current draft protects LGBTQ+ youth: "Grooming" was cut from the bill. Sexual exploitation and abuse are now defined using existing federal criminal statutes to prevent politicization or distortion of terms.

    Concern with previous draft: The duty of care to prevent and mitigate "self-harm" or "physical harm" could be weaponized against trans youth and those who provide information about gender-affirming care.
    How the current draft protects LGBTQ+ youth: The specific reference to "self-harm" has been removed from the duty of care. "Physical harm" has been changed to "physical violence" to enhance clarity. Other covered harms related to "self-harm" are addressed using terminology that is anchored in a medical definition.

    Concern with previous draft: KOSA will allow non-supportive parents to surveil LGBTQ+ youth online.
    How the current draft protects LGBTQ+ youth: The legislation clarifies the tools available to protect kids and accounts for the developmental differences between children and young teens. KOSA has always required that children and adolescents be notified if parental controls are turned on, and that kids know before parents are informed about the creation of a new account. For teens, the bill requires platforms to give parents the ability to restrict purchases, view metrics on how much time a minor is spending on a platform, and view, but not change, account settings. It does not require the disclosure of a minor's browsing behavior, search history, messages, or other content or metadata of their communications.

    Concern with previous draft: KOSA will lead to privacy-invasive age verification across the internet.
    How the current draft protects LGBTQ+ youth: KOSA never required age verification or age gating, nor did it create liability for companies if kids lie about their age. The bill explicitly states that companies are not required to age-gate or collect additional data to determine a user's age. Additionally, a knowledge standard is applied more consistently across the bill to clarify that companies are not liable if they have no knowledge of whether a user is a child or adolescent.

    Concern with previous draft: KOSA will affect access to sexual health information, schools, or nonprofit services.
    How the current draft protects LGBTQ+ youth: KOSA's requirements apply only to commercial online platforms, such as social media and games, which have been the largest source of issues for kids online. Nonprofits, schools, and broadband services are exempt from KOSA, and a previous reference to "educational services" was removed from the bill's "covered platform" definition. KOSA does not apply to health sites or other information resources.
  • The Honorable Joseph R. Biden
    President of the United States
    The White House
    1600 Pennsylvania Avenue NW
    Washington, DC 20500

    May 23, 2023

    Dear President Biden:

    The undersigned civil rights, consumer protection, and other civil society organizations write to express concern about the digital trade negotiations underway as part of the proposed Indo-Pacific Economic Framework (IPEF).

    Civil society advocates and officials within your own administration have raised increasing concern about discrimination, racial disparities, and inequities that may be "baked into" the algorithms that make decisions about access to jobs and housing, health care, prison sentencing, educational opportunity, insurance rates and lending, deployment of police resources, and much more. To address these injustices, we have advocated for anti-discrimination protections and algorithmic transparency and fairness. We have been pleased that these concepts are incorporated into your recent Executive Order on racial equity,[1] as well as the White House's AI Bill of Rights[2] and many other policy proposals. The DOJ, FTC, CFPB, and EEOC also recently released a joint statement underscoring their commitment to combating discrimination in automated systems.[3] Any trade agreement must be consistent with, and not undermine, these policies and the values they are advancing.

    Now, we have learned that the U.S. may be considering proposals for IPEF and other trade agreement negotiations that could sabotage efforts to prevent and remedy algorithmic discrimination, including provisions that could preempt executive and Congressional legal authority to advance these goals. Such provisions may make it harder or impossible for Congress or executive agencies to adopt appropriate policies while also respecting our international trade commitments. For example, trade provisions that guarantee digital firms new secrecy rights over source code and algorithms could thwart potential algorithmic impact assessment and audit requirements, such as testing for racial bias or other violations of U.S. law and regulation. And because the trade negotiations are secret, we do not know how the exact language could affect pivotal civil rights protections. Including such industry-favored provisions in trade deals like IPEF would be a grievous error and would undermine the Administration's own policy goals.

    We urge the administration not to submit any proposals that could undermine the ability to protect the civil rights of people in the United States, particularly with regard to digital trade. Moreover, there is a great need for transparency in these negotiations. Text already proposed should be made public so the civil rights community and relevant experts can challenge any provisions that could undermine administration goals regarding racial equity, transparency, and fairness. We know that your administration shares our goals of advancing racial equity, including protecting the public from algorithmic discrimination. Thank you for your leadership in this area.
For questions or further discussion, please contact Harlan Yu (harlan@upturn.org), David Brody (dbrody@lawyerscommittee.org), and Emily Peterson-Cassin (epetersoncassin@citizen.org).

    Sincerely,

    American Civil Liberties Union
    Center for Democracy & Technology
    Center for Digital Democracy
    Data & Society Research Institute
    Demand Progress Education Fund
    Electronic Privacy Information Center (EPIC)
    Fight for the Future
    Lawyers' Committee for Civil Rights Under Law
    The Leadership Conference on Civil and Human Rights
    NAACP
    National Urban League
    Public Citizen
    Sikh American Legal Defense and Education Fund
    Upturn

    CC:
    Secretary of Commerce Gina Raimondo
    U.S. Trade Representative Katherine Tai
    National Economic Council Director Lael Brainard
    National Security Advisor Jake Sullivan
    Domestic Policy Council Director Susan Rice
    Incoming Domestic Policy Council Director Neera Tanden
    Domestic Policy Council Deputy Director for Racial Justice and Equity Jenny Yang

    [1] Exec. Order No. 14091, 88 Fed. Reg. 10825, Feb. 16, 2023, available at https://www.federalregister.gov/documents/2023/02/22/2023-03779/further-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal.
    [2] The White House, Blueprint for an AI Bill of Rights, Oct. 22, 2022, available at https://www.whitehouse.gov/ostp/ai-bill-of-rights.
    [3] Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, CFPB, DOJ, EEOC, FTC, April 25, 2023, available at https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf.
  • CDD urges Congress to adopt stronger online safeguards for kids and teens

    Contact: Katharina Kopp, kkopp [at] democraticmedia.org

    The Children's Online Privacy Protection Act (COPPA 2.0), introduced by Senators Markey and Cassidy, will provide urgently needed online safeguards for children and teens. It will enact real platform accountability, limit the economic and psychological exploitation of children and teens online, and thus address the public health crisis they are experiencing. By banning targeted ads to young people under 16, the endless streams of data collected by online companies to profile and track them will be significantly reduced, and the ability of digital marketers and platforms to manipulate, discriminate against, and exploit children and teens will be curtailed. COPPA 2.0 will also extend the original COPPA law's protections for youth from age 12 to 16. The proposed law provides the ability to delete children's and teens' data with a click of an "eraser button." And with the creation of a new FTC "Youth Marketing and Privacy Division," COPPA 2.0 will ensure young people's privacy rights are enforced.
  • Meta's Virtual Reality-based Marketing Apparatus Poses Risks to Teens and Others

    Whether it's called Facebook or Meta, or known by its Instagram, WhatsApp, Messenger or Reels services, the company has always seen children and teens as a key target. The recent announcement opening up the Horizon Worlds metaverse to teens, despite calls to first ensure it will be a safe and healthy experience, is lifted out of Facebook's well-worn political playbook: make whatever promises necessary to temporarily quell any political opposition to its monetization plans. Meta's priorities are intractably linked to its quarterly shareholder revenue reports. Selling our "real" and "virtual" selves to marketers is its only real source of revenue, a higher priority than any self-regulatory scheme Meta offers while claiming to protect children and teens.

    Meta's focus on creating more immersive, AI/VR, metaverse-connected experiences for advertisers should serve as a wake-up call for regulators. Meta has unleashed a digital environment designed to trigger the "engagement" of young people with marketing, data collection and commercially driven manipulation. Action is required to ensure that young people are treated fairly and are not exposed to data surveillance, threats to their health, and other harms. Here are a few recent developments that should be part of any regulatory review of Meta and young people:

    Expansion of "immersive" video and advertising-embedded applications: Meta tells marketers it provides "seamless video experiences that are immersive and fueled by discovery," including the "exciting opportunity for advertisers" offered by its short-video "Reels" system. Through virtual reality (VR) and augmented reality (AR) technologies, we are exposed to advertising content designed to have a greater impact by influencing our subconscious and emotional processes. With AR ads, Meta tells marketers, they can "create immersive experiences, encourage people to virtually try out your products and inspire people to interact with your brand," including encouraging "people who interact with your ad… [to] take photos or videos to share their experience on Facebook Feed, on Facebook and Instagram Stories or in a message on Instagram." Meta has also been researching uses of AR and VR that will make its ad and marketing messaging even more compelling.

    Expanded integration of ads throughout Meta applications: Meta allows advertisers to "turn organic image and video posts into ads in Ads Manager on Facebook Reels," including adding a "call-to-action" feature. It permits marketers to "boost their Reels within the Instagram app to turn them into ads…." It enables marketers "to add a 'Send Message' button to their Facebook Reels ads [that] give people an option to start a conversation in WhatsApp right from the ad." This follows last year's release of Meta's "Boosted Reels" product, which allows Instagram Reels to be turned into ads as well.

    "Ads Manager" "optimization goals" that are inappropriate when used for targeting young people: These include "impressions, reach, daily unique reach, link clicks and offsite conversions." "Ad placements" to target teens are available for the "Facebook Marketplace, Facebook Feed, … Facebook Stories, Facebook in-stream video (mobile), Instagram Feed, Instagram Explore, Instagram Stories, Facebook Reels and Instagram Reels."

    The use of metrics for delivering and measuring the impact of augmented reality ads: As Meta explains, it uses:

      • Instant Experience View Time: The average total time in seconds that people spent viewing an Instant Experience. An Instant Experience can include videos, images, products from a catalog, an augmented reality effect and more. For an augmented reality ad, this metric counts the average time people spent viewing your augmented reality effect after they tapped your ad.
      • Instant Experience Clicks to Open: The number of clicks on your ad that open an Instant Experience. For an augmented reality ad, this metric counts the number of times people tapped your ad to open your augmented reality effect.
      • Instant Experience Outbound Clicks: The number of clicks on links in an Instant Experience that take people off Meta technologies. For an augmented reality ad, this metric counts the number of times people tapped the call-to-action button in your augmented reality effect.
      • Effect Share: The number of times someone shared an image or video that used an augmented reality effect from your ad. Shares can be to Facebook or Instagram Stories, to Facebook Feed or as a message on Instagram.

    These ad effects can be designed and tested through Meta's "Spark Hub" and ad manager. Such VR and other measurement systems require regulators to analyze their role and impact on youth.

    Expanded use of machine learning/AI to promote shopping via Advantage+: Last year, Meta rolled out "Advantage+ shopping campaigns, Meta's machine-learning capabilities [that] save advertisers time and effort while creating and managing campaigns. For example, advertisers can set up a single Advantage+ shopping campaign, and the machine learning-powered automation automatically combines prospecting and retargeting audiences, selects numerous ad creative and messaging variations, and then optimizes for the best-performing ads." While Meta says that Advantage+ isn't used to target teens, it deploys it for "Gen Z" audiences. How Meta uses machine learning/AI to target families should also be on the regulatory agenda.

    Immersive advertising will shape the near-term evolution of marketing, where brands will be "world agnostic and transcend the limitations of the current physical and digital space." The Advertising Research Foundation (ARF) predicts that "in the next decade, AR and VR hardware and software will reach ubiquitous status." One estimate is that by 2030, the metaverse will "generate up to $5 trillion in value."

    In the meantime, Meta's playbook in response to calls from regulators and advocates is to promise some safeguards, often focused on encouraging the use of what it calls "safety tools." But these tools do not ensure that teens aren't reached and influenced by AI- and VR-driven marketing technologies and applications. Meta also knows that today, ad targeting is less important than so-called "discovery," where its purposeful melding of video content, AR effects, social interactions and influencer marketing will snare young people into its marketing "conversion" net.

    Last week, Mark Zuckerberg told investors his vision of bringing "AI agents to billions of people," as well as into his "metaverse," which will be populated by "avatars, objects, worlds, and codes to tie" online and offline together. There will be, as previously reported, an AI-driven "discovery engine" that will "increase the amount of suggested content to users."

    These developments reflect just a few of the AI- and VR-marketing-driven changes to the Meta system. They illustrate why responsible regulators and advocates must be at the forefront of holding this company accountable, especially with regard to its youth-targeting apparatus. Please also read Fairplay for Kids' account of Meta's long history of failing to protect children online.
    Jeff Chester
  • Reining In Meta's Digital 'Wild West' as FTC protects young people's safety, health and privacy

    Contacts:
    Jeff Chester, CDD, 202-494-7100
    David Monahan, Fairplay, 781-315-2586

    Children's advocates Fairplay and the Center for Digital Democracy respond to today's announcement that the FTC proposes action to address Facebook's privacy violations in practices impacting children and teens. And see the important new information compiled by Fairplay and CDD, linked below.

    Josh Golin, executive director, Fairplay: The action taken by the Federal Trade Commission against Meta is long overdue. For years, Meta has flouted the law and exploited millions of children and teens in its efforts to maximize profits, with little care as to the harms faced by young users on its platforms. The FTC has rightly recognized that Meta simply cannot be trusted with young people's sensitive data and has proposed a remedy in line with Meta's long history of abuse of children. We applaud the Commission for its efforts to hold Meta accountable and for taking a huge step toward creating the safe online ecosystem every young American deserves.

    Jeff Chester, executive director, Center for Digital Democracy: Today's action by the Federal Trade Commission (FTC) is a long-overdue intervention into what has become a huge national crisis for young people. Meta and its platforms are at the center of a powerful commercialized social media system that has spiraled out of control, threatening the mental health and wellbeing of children and adolescents. The company has not done enough to address the problems caused by its unaccountable data-driven commercial platforms. Amid a continuing rise in shocking incidents of suicide, self-harm and online abuse, as well as exposés from industry whistleblowers, Meta is unleashing even more powerful data gathering and targeting tactics fueled by immersive content, virtual reality and artificial intelligence, while pushing youth further into the metaverse with no meaningful safeguards. Parents and children urgently need the government to institute protections for the "digital generation" before it is too late. Today's action by the FTC limiting how Meta can use the data it gathers will bring critical protections to both children and teens. It will require Meta/Facebook to engage in a proper due diligence process when launching new products targeting young people, rather than its current "release first and address problems later" approach. The FTC deserves the thanks of U.S. parents and others concerned about the privacy and welfare of our "digital generation."

    NEW REPORTS:
    META HAS A LONG HISTORY OF FAILING TO PROTECT CHILDREN ONLINE (from Fairplay)
    META'S VIRTUAL REALITY-BASED MARKETING APPARATUS POSES RISKS TO TEENS AND OTHERS (from CDD)
  • Advocates Fairplay, Eating Disorders Coalition, Center for Digital Democracy, and others announce support of the newly reintroduced Kids Online Safety Act

    Contact: David Monahan, Fairplay (david@fairplayforkids.org)

    Advocates pledge support for landmark bill requiring online platforms to protect kids and teens with a "safety by design" approach

    BOSTON, MA and WASHINGTON, DC — May 2, 2023 — Today, a coalition of leading advocates for children's rights, health, and privacy lauded the introduction of the Kids Online Safety Act (KOSA), a landmark bill that would create robust protections for children and teens online. Among the advocates pledging support for KOSA are Fairplay, the Eating Disorders Coalition, the American Academy of Pediatrics, the American Psychological Association, and Common Sense. KOSA, a bipartisan bill from Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN), would make online platforms and digital providers abide by a "duty of care" requiring them to eliminate or mitigate the impact of harmful content on their platforms. The bill would also require platforms to default to the most protective settings for minors and enable independent researchers to access "black box" algorithms to assist in research on algorithmic harms to children and teens.

    The reintroduction of the Kids Online Safety Act coincides with a rising tide of bipartisan support for action to protect children and teens online amidst a growing youth mental health crisis. A February report from the CDC showed that teen girls and LGBTQ+ youth are facing record levels of sadness and despair, and another report, from Amnesty International, indicated that 74% of youth check social media more than they'd like.

    Fairplay Executive Director Josh Golin: "For far too long, Big Tech have been allowed to play by their own rules in a relentless pursuit of profit, with little regard for the damage done to the children and teens left in their wake. Companies like Meta and TikTok have made billions from hooking kids on their products by any means necessary, even promoting dangerous challenges, pro-eating disorder content, violence, drugs, and bigotry to the kids on their platforms. The Kids Online Safety Act stands to change all that. Today marks an exciting step toward the internet every young person needs and deserves, where children and teens can explore, socialize and learn without being caught in Big Tech crossfire."

    National Alliance for Eating Disorders CEO and EDC Board Member Johanna Kandel: "The Kids Online Safety Act is an integral first step in making social media platforms a safer place for our children. We need to hold these platforms accountable for their role in exposing our kids to harmful content, which is leading to declining mental health, higher rates of suicide, and eating disorders. As both the CEO of an eating disorders nonprofit and a mom of a young child, I believe these new laws would go a long way in safeguarding the experiences our children have online."

    Center for Digital Democracy Deputy Director Katharina Kopp: "The Kids Online Safety Act (KOSA), co-sponsored by Senators Blumenthal and Blackburn, will hold social media companies accountable for their role in the public health crisis that children and teens experience today. It will require platforms to make better design choices that ensure the well-being of young people. KOSA is urgently needed to stop online companies from operating in ways that encourage self-harm, suicide, eating disorders, substance use, sexual exploitation, patterns of addiction-like behaviors, and other mental and physical threats. It also provides safeguards to address unfair digital marketing tactics. Children and teens deserve an online environment that is safe. KOSA will significantly reduce the harms that children, teens, and their families experience online every day."

    Children and Screens: Institute of Digital Media and Child Development Executive Director Kris Perry: "We appreciate the Senators' efforts to protect children in this increasingly complicated digital world. KOSA will allow access to critical datasets from online platforms for academic and research organizations. This data will facilitate scientific research to better understand the overarching impact social media has on child development."

    ###
  • Statement from Children's Advocacy Groups on New Social Media Bill by U.S. Senators Schatz and Cotton

    Washington, D.C., April 26, 2023 – Several children's advocacy groups expressed concern today with parts of a new bill intended to protect kids and teens from online harms. The bill, "The Protecting Kids on Social Media Act," was introduced this morning by U.S. Sens. Brian Schatz (D-HI) and Tom Cotton (R-AR). The groups, including Common Sense Media, Fairplay, and the Center for Digital Democracy, play a leading role on legislation in Congress to ensure that tech companies, and social media platforms in particular, are held accountable for the serious and sometimes deadly harms related to the design and operation of these platforms. They said the new bill is well-intentioned in the face of a youth mental health crisis and has some features that should be adopted, but that other aspects of the bill take the wrong approach to a serious problem.

    The groups said they support the bill's ban on algorithmic recommendation systems for minors, which would prevent platforms from using minors' personal data to amplify harmful content to them. However, they object that the bill places too many new burdens on parents, creates unrealistic bans, and institutes potentially harmful parental control over minors' access to social media. By requiring parental consent before a teen can use a social media platform, vulnerable minors, including LGBTQ+ kids and kids who live in unsupportive households, may be cut off from access to needed resources and community. At the same time, kids and teens could pressure their parents or guardians to provide consent. And once young users make it onto a platform, they will still be exposed to addictive or unsafe design features beyond algorithmic recommendation systems, such as endless scroll and autoplay. The bill's age verification measures also introduce troubling implications for the privacy of all users, given the requirement for covered companies to verify the age of both adult and minor users. Despite its importance, there is currently no consensus on how to implement age verification without compromising users' privacy.

    The groups said that they strongly support other legislation that establishes important guardrails on platforms and other tech companies to make the internet a healthier and safer place for kids and families, such as the Kids Online Safety Act (KOSA) and COPPA 2.0, bipartisan legislation that was approved last year by the Senate Commerce Committee and is expected to be reintroduced this year.

    "We appreciate Senators Schatz and Cotton's effort to protect kids and teens online and we look forward to working with them as we have with many Senators and House members over the past several years. But this is a life or death issue for families and we have to be very careful about how to protect kids online. The truth is, some approaches to the problem of online harms to kids risk further harming kids and families," said James P. Steyer, founder and CEO of Common Sense Media. "Congress should place the onus on companies to make the internet safer for kids and teens and avoid placing the government in the middle of the parent-child relationship. Congress has many good policy options already under consideration and should act on them now to make the internet healthier and safer for kids."

    "We are grateful to Senators Schatz, Cotton, Britt and Murphy for their efforts to improve the online environment for young people, but are deeply concerned their bill is not the right approach," said Josh Golin, Executive Director of Fairplay. "Young people deserve secure online spaces where they can safely and autonomously socialize, connect with peers, learn, and explore. But the Protecting Kids on Social Media Act does not get us any closer to a safer internet for kids and teens. Instead, if this legislation passes, parents will face the exact same conundrum they face today: Do they allow their kids to use social media and be exposed to serious online harms, or do they isolate their children from their peers? We need legislative solutions that put the burden on companies to make their platforms safer, less exploitative, and less addictive, instead of putting even more on parents' plates."

    "It's critical that social media platforms are held accountable for the harmful impacts their practices have on children and teens. However, this bill's approach is misguided. It places too much of a burden on parents, instead of focusing on the platforms' business practices that have produced the unprecedented public health crisis harming our children's physical and mental well-being. Kids and teens should not be locked out of our digital worlds, but should be allowed online where they can be safe and develop in age-appropriate ways. One of the unintended consequences of this bill will likely be a two-tiered online system, where poor and otherwise disadvantaged parents and their children will be excluded from digital worlds. What we need are policies that hold social media companies truly accountable, so all young people can thrive," said Katharina Kopp, Ph.D., Deputy Director of the Center for Digital Democracy.
  • Citing research that illustrates a number of serious risks to children and teens in the Metaverse, advocates say Meta must wait for more research and root out dangers before targeting youth in VR.

    BOSTON, MA, WASHINGTON, DC and LONDON, UK — Friday, April 14, 2023 — Today, a coalition of over 70 leading experts and advocates for health, privacy, and children's rights is urging Meta to abandon plans to allow minors between the ages of 13 and 17 into Horizon Worlds, Meta's flagship virtual reality platform. Led by Fairplay, the Center for Digital Democracy (CDD), and the Center for Countering Digital Hate (CCDH), the advocates underscored the dearth of research on the impact of time spent in the Metaverse on the health and wellbeing of youth, as well as the company's track record of putting profits ahead of children's safety. The advocates' letter maintained that the Metaverse is already unsuitable for use by children and teens, citing March 2023 research from CCDH which revealed that minors already using Horizon Worlds were routinely exposed to harassment and abuse, including sexually explicit insults and racist, misogynistic, and homophobic harassment, and other offensive content. In addition to the existing risks present in Horizon Worlds, the advocates' letter outlined a variety of potential risks facing underage users in the Metaverse, including magnified risks to privacy through the collection of biomarkers, risks to youth mental health and wellbeing, and the risk of discrimination, among others.

    In addition to Fairplay, CDD, and CCDH, the 36 organizations signing on include Common Sense Media, the Electronic Privacy Information Center (EPIC), Public Citizen, and the Eating Disorders Coalition. The 37 individual signatories include Richard Gephardt of the Council for Responsible Social Media, former Member of Congress and House Majority Leader; Sherry Turkle, MIT professor and author of Alone Together and Reclaiming Conversation; and social psychologist and author Jonathan Haidt.

    Josh Golin, Executive Director, Fairplay: "It's beyond appalling that Mark Zuckerberg wants to save his failing Horizon Worlds platform by targeting teens. Already, children are being exposed to homophobia, racism, sexism, and other reprehensible content on Horizon Worlds. The fact that Mr. Zuckerberg is even considering such an ill-formed and dangerous idea speaks to why we need Congress to pass COPPA 2.0 and the Kids Online Safety Act."

    Katharina Kopp, PhD, Deputy Director, Center for Digital Democracy: "Meta is demonstrating once again that it doesn't consider the best interest of young people when it develops plans to expand its business operations. Before it considers opening its Horizon Worlds metaverse operation to teens, it should first commit to fully exploring the potential consequences. That includes engaging in an independent and research-based effort addressing the impact of virtual experiences on young people's mental and physical well-being, privacy, safety, and potential exposure to hate and other harmful content. It should also ensure that minors don't face forms of discrimination in the virtual world, which tends to perpetuate and exacerbate 'real life' inequities."

    Mark Bertin, MD, Assistant Professor of Pediatrics at New York Medical College, former Director of Developmental Behavioral Pediatrics at the Westchester Institute for Human Development, and author of The Family ADHD Solution, Mindful Parenting for ADHD, and How Children Thrive: "This isn't like the panic over rock and roll, where a bunch of old folks freaked out over nothing. Countless studies already describe the harmful impact of Big Tech products on young people, and it's worsening a teen mental health crisis. We can't afford to let profit-driven companies launch untested projects targeted at kids and teens and let families pick up the pieces after. It is crucial for the well-being of our children that we understand what is safe and healthy first."

    Imran Ahmed, CEO of the Center for Countering Digital Hate: "Meta is making the same mistake with Horizon Worlds that it made with Facebook and Instagram. They have prioritized profit over safety in their design of the product, failed to provide meaningful transparency, and refused to take responsibility for ensuring worlds are safe, especially for children. Yet again, their aim is speed to market in order to achieve monopoly status, rather than building truly sustainable, productive and enjoyable environments in which people feel empowered and safe. Whereas, to some, 'move fast and break things' may have appeared swashbuckling from young startup entrepreneurs, it is a brazenly irresponsible strategy coming from Meta, one of the world's richest companies. It should have learned lessons from the harms its earlier products imposed on society, our democracies and our citizens."
    Jeff Chester
  • Reports indicate FTC plans to advance case against Amazon for violation of kids' privacy after advocates' 2019 complaint.

    BOSTON, MA and WASHINGTON, DC — Friday, March 31, 2023 — Following a groundbreaking investigation of Amazon's Echo Dot Kids by Fairplay and the Center for Digital Democracy (CDD), the Federal Trade Commission is preparing to advance a case against Amazon to the Department of Justice for the company's violations of children's privacy law. According to new reporting from Politico, the case centers on Amazon's violations of the Children's Online Privacy Protection Act (COPPA) through its Alexa voice assistant. In 2019, privacy advocates Fairplay and CDD called for the FTC to take action against Amazon after an investigation of the company's Echo Dot Kids smart home assistant, a candy-colored version of Amazon's flagship home assistant with Alexa voice technology. The investigation revealed a number of shocking illegal privacy violations, including Amazon's indefinite retention of kids' sensitive data even after parents requested that it be deleted. Now, reports indicate that the FTC is acting on the advocates' calls for investigation.

    "We're thrilled that the Federal Trade Commission and Department of Justice are close to taking action against Amazon for its egregious violations of children's privacy," said Josh Golin, Executive Director of Fairplay. "We know it's not just social media platforms and apps that misuse children's sensitive data. This landmark case would be the first time the FTC sanctioned the maker of a voice-enabled device for flouting COPPA. Amazon and its Big Tech peers must learn that COPPA violations are not just a cost of doing business."

    "It is time for the FTC to address the rampant commercial surveillance of children via Internet of Things (IoT) devices, such as Amazon's Echo, and enforce existing law," said Katharina Kopp, Director of Policy at the Center for Digital Democracy. "Children are giving away sensitive personal data on a massive scale via IoT devices, including their voice recordings and data gleaned from kids' viewing, reading, listening, and purchasing habits. These data practices violate children's privacy, manipulate them into being interested in harmful products, undermine their autonomy, and perpetuate discrimination and bias. Both the FTC and the Department of Justice must hold Amazon accountable."

    [See attached for additional comments.]
    Jeff Chester
  • Consumer Advocates Urge Action Walmart Deceptively Marketing to Kids on RobloxConsumer Advocates Urge ActionMADISON, CONN. January 23, 2023 – A coalition of advocacy groups led by ad watchdog truthinadvertising.org (TINA.org) is urging the Children’s Advertising Review Unit (CARU) – a BBB National Program – to immediately audit the Walmart Universe of Play advergame, a recent addition to the self-regulatory group’s COPPA Safe Harbor Program and bearer of one of the Program’s certification seals. According to a letter from TINA.org, Fairplay, Center for Digital Democracy and the National Association of Consumer Advocates, a copy of which was sent to Walmart, Roblox and the FTC, the retail giant is exposing children to deceptive marketing on Roblox, the online gaming and creation platform used by millions of kids on a daily basis.Walmart’s first foray into the Roblox metaverse came last September, when it premiered two experiences, Walmart Universe of Play and Walmart Land, which collectively have been visited more than 12 million times. Targeted at – and accessible to – young children on Roblox, Universe of Play features virtual products and characters from L.O.L. Surprise!, Jurassic World, Paw Patrol, and more and is advertised to allow kids to play with the “year’s best toys” and make a “wish list” of toys that can then be purchased at Walmart.As the consumer groups warn, Walmart completely blurs the distinction between advertising content and organic content, and simultaneously fails to provide clear or conspicuous disclosures that Universe of Play (or content within the virtual world) are ads. In addition, as kids’ avatars walk through the game, they are manipulated into opening additional undisclosed advertisements disguised as surprise wrapped gifts.To make matters worse, Walmart is using the CARU COPPA Safe Harbor Program seal to convey the false message that its children’s advergame is not only in compliance with COPPA (Children’s Online Privacy Protection Act), but CARU's Advertising Guidelines and truth-in-advertising laws, as well as a shield against enforcement action.“Walmart’s brazen use of stealth marketing directed at young children who are developmentally unable to recognize the promotional content is not only appalling, it’s deceptive and against truth-in-advertising laws. We urge CARU to take swift action to protect the millions of children being manipulated by Walmart on a daily basis.” Laura Smith, TINA.org Legal Director“Walmart's egregious and rampant manipulation of children on Roblox -- a platform visited by millions of children every day -- demands immediate action. The rise of the metaverse has enabled a new category of deceptive marketing practices that are harmful to children. CARU must act now to ensure that children are not collateral damage in Walmart's digital drive for profit.” Josh Golin, Executive Director, Fairplay“Walmart’s and Roblox’s practices demonstrate that self-regulation is woefully insufficient to protect children and teens online. Today, young people are targeted by a powerful set of online marketing tactics that are manipulative, unfair, and harmful to their mental and physical health. Digital advertising operates in a ‘wild west’ world where anything goes in terms of reaching and influencing the behaviors of kids and teens. 
Congress and the Federal Trade Commission must enact safeguards to protect the privacy and well-being of a generation of young people.” – Katharina Kopp, Director of Policy, Center for Digital Democracy

To read more about Walmart’s deceptive marketing on Roblox, see: /articles/tina-org-urges-action-against-walmarts-undisclosed-advergame-on-roblox

About TINA.org (truthinadvertising.org): TINA.org is a nonprofit organization that uses investigative journalism, education, and advocacy to empower consumers to protect themselves against false advertising and deceptive marketing.

About Fairplay: Fairplay is the leading nonprofit organization committed to helping children thrive in an increasingly commercialized, screen-obsessed culture, and the only organization dedicated to ending marketing to children.

About Center for Digital Democracy: The Center for Digital Democracy is a nonprofit organization using education, advocacy, and research into commercial data practices to ensure that digital technologies serve and strengthen democratic values, institutions, and processes.

About National Association of Consumer Advocates: The National Association of Consumer Advocates is a nonprofit association of more than 1,500 attorneys and consumer advocates committed to representing consumers’ interests.

For press inquiries contact: Shana Mueller at 203.421.6210 or press@truthinadvertising.org.

walmart_caru_press_release_final.pdf
  • Josh Golin, executive director, Fairplay:

The FTC’s landmark settlement against Epic Games is an enormous step forward towards creating a safer, less manipulative internet for children and teens. Not only is the Commission holding Epic accountable for violating COPPA by illegally collecting the data of millions of under-13-year-olds, but the settlement is also a shot across the bow against game makers who use unfair practices to drive in-game purchases by young people. The settlement rightly recognizes not only that unfair monetization practices harm young people financially, but that design choices used to drive purchases subject young people to a wide array of dangers, including cyberbullying and predation.

Today’s breakthrough settlement underscores why it is so critical that Congress pass the privacy protections for children and teens currently under consideration for the Omnibus bill. These provisions give teens privacy rights for the first time, address unfair monetization by prohibiting targeted advertising, and empower regulators by creating a dedicated youth division at the FTC.

Jeff Chester, executive director, Center for Digital Democracy:

Through this settlement with Epic Games, the FTC has used its vital power to regulate unfair business practices to extend long-overdue and critically important online protections to teens. This tells online marketers that from now on, teenagers cannot be targeted using unfair and manipulative tactics designed to take advantage of their young age and other vulnerabilities. Kids should also have their data privacy rights better respected through this enforcement of the federal kids’ data privacy law (COPPA). Gaming is a “wild west” when it comes to its data gathering and online marketing tactics, placing young people, who are among the half of the US population that plays video games, at especially great risk. While today’s FTC action creates new safeguards for young people, Congress has a rare opportunity to pass legislation this week ensuring all kids and teens have strong digital safeguards, regardless of what online service they use.
    Jeff Chester
  • Consumer financial safeguards for online payments needed, say U.S. PIRG & CDD

Big Tech Payment Platforms
Supplemental Comments of USPIRG and the Center for Digital Democracy
CFPB-2021-0017
December 7, 2022

United States Public Interest Research Group (USPIRG) and the Center for Digital Democracy (CDD) submit these additional comments to further inform the Bureau’s inquiry. They amplify the comments USPIRG and CDD submitted last year.[1] We believe that since we filed our original comment, the transformation of “Big Tech”-operated digital payment platforms has significantly advanced, underscoring the need for the Bureau to institute much-needed consumer protection safeguards.

We had described how online platform-based payment services seamlessly incorporate the key elements of “commerce” today—including content, promotion, marketing, sales and payment. We explained how these elements are part of the data-driven “surveillance” and personalized marketing system that operates as the central nervous system for nearly all U.S. online operations. We raised the growing role that “social media commerce” plays in contemporary payment platforms, supporting the Bureau’s examination of Big Tech platforms and consumer financial payment services. For example, U.S. retail social media commerce sales will generate $53 billion in 2022, rising to $107 billion by 2025, according to a recent report by Insider Intelligence/eMarketer. Younger Americans, so-called “Generation Z,” are helping drive this new market—an indicator of how changing consumer financial behaviors are being shaped by the business model and affordances of the Big Tech platforms, including TikTok, Meta and Google.[2]

In order to meaningfully respond to the additional questions raised by the Bureau in its re-opening of the comment period, in particular regarding how the payment platforms handle “complaints, disputes and errors” and whether they are “sufficiently staffed…to address consumer protection and provide responsible customer service,” USPIRG and CDD offer below some further analysis of the structural problems of contemporary platform payment systems.[3]

First, payment services such as those operated by Google, Meta, TikTok and others have inherent conflicts of interest. They are, as the Bureau knows, primarily advertising systems designed to capture the “engagement” of individuals and groups using a largely stealth array of online marketing applications (including, for example, extensive testing to identify ways to engage in subconscious “implicit” persuasion).[4] Our prior comment and those of other consumer groups have already documented the extensive use of data profiling, machine learning, cross-platform predictive analysis and “identity” capture, which are just a few of the current platform monetization tactics. The continually evolving set of tools available for digital platforms to target consumers has no limits—and raises critical questions when it comes to the financial security of US consumers.

The build-out of Big Tech payment platforms leveraging their unique capabilities to seamlessly combine social media, entertainment and commerce with sophisticated data-driven contemporary surveillance has transformed traditional financial services concepts. Today’s social media giants are also global consumer financial banking and retail institutions. For example, J.P. Morgan has “built a real-time payments infrastructure” for TikTok’s parent company ByteDance “that can be connected to local clearing systems.
This allows users, content producers, and influencers to be paid instantaneously and directly into their bank accounts at any day or time. ByteDance has enabled this capability in the U.S. and Europe, meaning it covers approximately one-fifth of TikTok’s 1 billion active users worldwide.”[5]

J.P. Morgan also helped ByteDance replace its host-to-host connectivity with banks with “application programming interfaces (API) connectivity that allows real-time exchange of data” between ByteDance and Morgan. This allows ByteDance to “track and trace the end-to-end status through the SWIFT network, see and monitor payments, and allow users to check for payments via their TikTok or other ByteDance apps in real time.” Morgan also has “elevated and further future-proofed ByteDance’s cash management through a centralized account structure covering all 15 businesses” through a “virtual account management and liquidity tool.”[6]

Google’s Pay operations also illustrate how distinct digital payment platforms are from previous forms of financial services. Google explains to merchants that by integrating “with Google Wallet [they can] engage with users through location-based notifications, real-time updates” and offers, including encouraging consumers to “add offers from your webpage or app directly to Google wallet.” Google promotes the use of “geofenced notifications to drive engagement” with its Pay and Wallet services as well. Google’s ability to leverage its geolocation and other granular tracking, and to make that information available to merchants through a package of surveillance and engagement tools that drive financial transactions in real time, is beyond the ability of a consumer to effectively address.

A further issue is the growing use of “personalization” technologies to make financial services offerings even more compelling. Google has already launched its “Spot” service to deliver “payment enabled experiences” for users, including “fully customized experiences,” in Google Pay. Although currently available only in India and Singapore, Google’s Spot platform, which allows consumers with “a few simple taps…to search, review, choose and pay” for a product, is an example of how payment services online are continually advanced—and require independent review by consumer financial regulators. It also reflects another problem regarding protecting the financial well-being of US consumers: what are the impacts on financial security when there is no distance—no time to reflect—while seamless, machine- and socially-driven marketing and payment operations are at work?[7]

A good example of the lack of meaningful protections for online financial consumers is Google Pay’s use of what’s known as “discovery,” a popular digital marketing concept meaning to give enhanced prominence to a product or service. Here’s how Google describes how that concept works in its Spot-enabled Pay application: “We understand that discovery is where it starts, but building deep connections is what matters the most - a connection that doesn’t just end with a payment, but extends to effective post sale engagement. The Spot Platform helps merchants own this relationship by providing a conversational framework, so that order updates, offers, and recommendations can easily be surfaced to the customer.
This is powered by our Order API which is specialised to surface updates and relevant actions for users’ purchases, and the Messaging API which can surface relevant messages post checkout to the user.”[8]

Meta (Facebook), along with ad giant WPP, also relies on the growing use of “discovery” applications to promote sales. In a recent report, they explain that “digital loyalty is driven by seamless shopping experiences, convenience, easy discovery, consistent availability, positive community endorsement and personal connections.”[9]

Since Google and other payment platforms have relationships with dozens of financial institutions, and also have an array of different requirements for vendors and developers, USPIRG and CDD are concerned that consumers are placed at a serious disadvantage when it comes to protecting their interests and seeking redress for complaints. The chain of digital payment services relationships, including with partners that conduct their own powerful data-driven marketing systems, requires Bureau review. For example, PayPal is a partner with Google Pay, while the PayPal Commerce Platform has Salesforce as one of many partners.[10]

See also PIRG’s recent comments to the FTC for an extensive discussion of retail media networks and data clean rooms:[11] “Clean rooms are data platforms that allow companies to share first party data with one another without giving the other party full access to the underlying, user-level data. This ability to set controls on who has access to granular information about consumers is the primary reason that data clean rooms are able to subvert current privacy regulations.” (A minimal illustrative sketch of this matching-and-aggregation pattern appears after this filing.)

Another important issue for the Bureau is the ability of the Big Tech payment platforms to collect and analyze data in ways that allow them to identify unique means of influencing consumer spending behaviors. In a recent report, Chinese ecommerce platform Alibaba explained how such a system operates: “The strength of Alibaba’s platforms allows a birds-eye view of consumer preferences, which is combined with an ecosystem of tactical solutions, to enable merchants to engage directly and co-create with consumers and source suppliers to test, adapt, develop, and launch cutting-edge products…helps merchants identify new channels and strategies to tap into the Chinese market by using precise market analysis, real-time consumer insights, and product concept testing.”[12]

Such financial insights are part of what digital payment and platform services provide. PayPal, for example, gathers data on consumers as part of their “shopping journey.” In one case study for travel, PayPal explained that its campaign for Expedia involved pulling “together data-driven destination insights, creative messaging and strategic placements throughout the travel shoppers’ journey.” This included a “social media integration that drove users to a campaign landing page” powered by “data to win.” It also included the growing use of what’s euphemistically called “first-party data” from consumers, for which permission to target an individual has allegedly been granted.
Few consumers will ever review—or have the ability to influence—the PayPal engine that is designed for merchants to “shape [their] customer journey from acquisition to retention.” This includes applications that add “flexible payment options…right on product pages or through emails”; present a “relevant Pay Later offer to customers with dynamic messaging”; “increase average order value” through “proprietary payment methods”; or “propose rewards as a payment option to help inspire loyalty.”[13]

The impact of data-driven social commerce on promoting the use of consumer payments should also be assessed. For example, Shopify’s “in-app shopping experience on TikTok” claims that the placement of its “shopping tabs” by vendors on posts, profiles and product catalogs unleashes “organic discovery,” creating “a mini-storefront that links directly to their online store for check out.” A TikTok executive explains how the use of today’s digital payment services is distinct—“rooted in discovery, connection, and entertainment, creating unparalleled opportunities for brands to capture consumers’ attention…that drives [them] directly to the digital point of purchase.”[14] TikTok has also partnered with Stripe, helping it “become much more integrated with the world of payments and fintech.”[15]

TikTok’s integration with Square, meanwhile, enables “sellers to send fans directly from TikTok videos, ads, and shopping tabs on their profiles to products available in their existing Square Online store, providing a streamlined shopping experience that retains the look and feel of their personal brand.”[16] The Square/TikTok payment alliance illustrates the role that data-driven commercial surveillance marketing plays in payment operations, such as the use of the “TikTok pixel” and “advanced matching.”[17] In China, ByteDance’s payment services reflect its growing ability to leverage its mass customer data capture for social media-driven marketing and financial services.[18] We urge the Bureau to examine TikTok’s data and marketing practices as it transfers U.S. user information to servers in the U.S., the so-called “Project Texas,” to identify how “sensitive” data may be part of its financial services offerings.[19]

Apple’s payment services deserve further scrutiny as the company reintroduces itself as a digital advertising network, leveraging its dominant position in the mobile and app markets.[20] PayPal recently announced that it will be “working with Apple to enhance offerings for PayPal and Venmo merchants and consumers.” Apple is also making its payment service available through additional vendors, including the giant Kroger grocery chain’s stores in California.[21]

Amazon announced in October 2022 that Venmo was now an official payment method: during checkout, users can choose “Select a payment method” and then “Add a Venmo account,” which redirects them to the Venmo app to complete authentication. Users can also make Venmo their default payment method for Amazon purchases on that screen.[22] Amazon’s AWS partners with fintech provider Plaid, another example of the far-reaching partnerships restructuring the consumer financial services market.[23]

Conclusion

USPIRG and CDD hope that both our original comments and these additional comments help the Bureau to understand the impact of rapid changes in Big Tech’s payments network relationships and partnerships.
We believe urgent CFPB action is needed to protect consumers from the threat of Big Tech’s continued efforts to breach the important wall separating banking and commerce, and to ensure that all players in the financial marketplace follow all the rules. Please contact us with additional questions.

Sincerely yours,

Jeff Chester, Executive Director, Center for Digital Democracy
Edmund Mierzwinski, Senior Director, Federal Consumer Program, U.S. PIRG

[1] /comment/CFPB-2021-0017-0079
[2] /what-s-behind-social-commerce-surge-5-charts
[3] We also believe that the Bureau’s request for comments concerning potential abuse of terms of service and use of penalties merits discussion. We look forward to additional comments from others.
[4] /business/en-US/blog/mediascience-study-brands-memorable-tiktok; see Google, Meta and TikTok as well: https://www.neuronsinc.com/cases
[5] /content/dam/jpm/treasury-services/documents/case-study-bytedance.pdf
[6] /content/dam/jpm/treasury-services/documents/case-study-bytedance.pdf
[7] /about/business/checkout/; /pay/spot; /about/business/passes-and-rewards/
[8] /pay/spot
[9] /news/meta-publishes-new-report-on-the-importance-of-building-brand-loyalty-in-on/625603/
[10] See, for example, the numerous bank partners of Google in the US alone: /wallet/answer/12168634?hl=en. Also: /payments/apis-secure/u/0/get_legal_document?ldo=0&ldt=buyertos&ldr=us; /wallet/retail; /wallet/retail/offers/resources/terms-of-service; /us/webapps/mpp/google-pay-paypal; /products/commerce-cloud/overview/?cc=dwdcmain
[11] /wp-content/uploads/2022/11/PIRG-FTC-data-comment-no-petitions-Nov-2022.pdf
[12] /article/how-merchants-can-use-consumer-insights-from-alibaba-to-power-product-development/482374
[13] /us/brc/article/enterprise-solutions-expedia-case-study; /us/brc/article/enterprise-solutions-acquire-and-retain-customers
[14] /scaling-social-commerce-shopify-introduces-new-in-app-shopping-experiences-on-tiktok#
[15] /financial-services-finserv/tiktok-partners-fintech-firm-stripe-tips-payments
[16] /us/en/press/square-x-tiktok
[17] /help/us/en/article/7653-connect-square-online-with-tiktok; /help/article/data-sharing-tiktok-pixel-partners
[18] /video/douyin-chinas-version-tiktok-charge-093000931.html; /2021/01/19/tiktok-owner-bytedance-launches-mobile-payments-in-china-.html
[19] /a/202211/16/WS6374c81ea31049175432a1d8.html
[20] /news/newsletters/2022-08-14/apple-aapl-set-to-expand-advertising-bringing-ads-to-maps-tv-and-books-apps-l6tdqqmg?sref=QDmhoVl8
[21] /231198771/files/doc_financials/2022/q3/PYPL-Q3-22-Earnings-Release.pdf; /2022/11/08/ralphs-begins-accepting-apple-pay/
[22] /2022/10/25/amazon-now-allows-customers-to-make-payments-through-venmo/
[23] /blogs/apn/how-to-build-a-fintech-app-on-aws-using-the-plaid-api/

pirg_cdd_cfpb_comments_7dec2022.pdf
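To make two of the mechanisms cited in the filing above more concrete, here is a minimal, hypothetical Python sketch of (a) “advanced matching,” in which customer identifiers are normalized and hashed before being shared for ad matching, and (b) a clean-room style join, which matches two parties’ first-party records on those hashes but releases only an aggregate count above a minimum audience threshold. The dataset names, threshold value, and helper functions are all illustrative assumptions, not any vendor’s actual API.

import hashlib

def hashed_id(email):
    # "Advanced matching" pipelines share identifiers only after
    # normalizing (trim, lowercase) and hashing them with SHA-256,
    # so raw emails are not exchanged directly.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Illustrative, hypothetical first-party data only.
merchant_purchasers = {hashed_id(e) for e in ["a@example.com", "b@example.com"]}
platform_ad_viewers = {hashed_id(e) for e in ["b@example.com", "c@example.com"]}

MIN_AUDIENCE = 50  # illustrative suppression threshold, chosen arbitrarily

def clean_room_overlap(a, b):
    # A clean room joins on hashed IDs but returns only an aggregate
    # count, suppressed when the matched audience is too small;
    # user-level rows are never exposed to either party.
    overlap = len(a & b)
    return overlap if overlap >= MIN_AUDIENCE else None

print(clean_room_overlap(merchant_purchasers, platform_ad_viewers))  # None here

The point of the sketch is the control surface the PIRG comment describes: each party learns only an aggregate overlap (or nothing at all when the matched cell is small), even though the join itself runs over detailed user-level records.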
    Jeff Chester
  • Coalition of child advocacy, health, safety, privacy and consumer organizations documents how data-driven marketing undermines privacy and welfare of young people

Children and teenagers experience widespread commercial surveillance practices that collect data used to target them with marketing. Targeted and personalized advertising remains the dominant business model for digital media, with the marketing and advertising industry identifying children and teens as a prime target. Minors are relentlessly pursued while, simultaneously, they are spending more time online than ever before. Children’s lives are filled with surveillance, involving the collection of vast amounts of personal data of online users. This surveillance, informed by behavioral science and maximized by evolving technologies, allows platforms and marketers to profile and manipulate children.

The prevalence of surveillance advertising and targeted marketing aimed at minors is unfair in violation of Section 5 of the FTC Act. Specifically, data-driven marketing and targeted advertising cause substantial harm to children and teens by:

violating their privacy;
manipulating them into being interested in harmful products;
undermining their autonomy; and
perpetuating discrimination and bias.

Additionally, the design choices tech companies use to optimize engagement and data collection in order to target marketing to minors further harm children and teens. These harms include undermining their physical and mental wellbeing and increasing the risk of problematic internet use. These harms cannot reasonably be avoided by minors or their families, and there are no countervailing benefits to consumers or competition that outweigh them.

Surveillance advertising is also deceptive to children, as defined by the Federal Trade Commission. The representations made about surveillance advertising by adtech companies, social media companies, apps, and games are likely to mislead minors and their parents and guardians, and these misrepresentations and omissions are material. Many companies also mislead minors and their guardians by omission because they fail to disclose important information about their practices. These practices impact the choices of minors and their families every day as they use websites, apps, and services without an understanding of the complex system of data collection, retention, and sharing that is used to influence them online. We therefore urge the Commission to promulgate a rule that prohibits targeted marketing to children and teenagers.

Groups filing the comment included: the Center for Digital Democracy, Fairplay, #HalfTheStory, American Academy of Pediatrics, Becca Schmill Foundation, Berkeley Media Studies Group, Children and Screens: Institute of Digital Media and Child Development, Consumer Federation of America, Consumer Federation of California, CUNY Urban Food Policy Institute, Eating Disorders Coalition for Research, Policy & Action, Enough is Enough, LookUp.live, Lynn’s Warriors, National Eating Disorders Association, Parents Television and Media Council, ParentsTogether, Peace Educators Allied for Children Everywhere (P.E.A.C.E.), Public Citizen and UConn Rudd Center for Food Policy & Health.

Fairplay’s executive director Josh Golin said: “Big Tech’s commercial surveillance business model undermines young people’s wellbeing and development. It causes kids and teens to spend excessive time online, and exposes them to harmful content and advertising targeted to their vulnerabilities.
The FTC must adopt a series of safeguards to allow vulnerable youth to play, learn, and socialize online without being manipulated or harmed. Most importantly, the Commission should prohibit data-driven advertising and marketing to children and teens, and make clear that Silicon Valley profits cannot come at the expense of young people’s wellbeing.”

CDD’s Jeff Chester underscored this, saying: “Children and teens are key commercial targets of today’s data-driven surveillance complex. Their lives are tethered to a far-reaching system that is specifically designed to influence how they spend their time and money online, and uses artificial intelligence, virtual reality, geo-tracking, neuromarketing and more to do so. In addition to the loss of privacy, surveillance marketing threatens their well-being, health and safety. It’s time for the Federal Trade Commission to enact safeguards that protect young people.”

[full filing attached]
  • FTC Commercial Surveillance Filing from CDD focuses on how pharma & other health marketers target consumers, patients, prescribers

“Acute Myeloid Lymphoma,” “ADHD,” “Brain Cancer,” “High Cholesterol,” “Lung Cancer,” “Overweight,” “Pregnancy,” “Rheumatoid Arthritis,” “Stroke,” and “Thyroid Cancer.” These are just a handful of the digitally targetable medical condition “audience segments” available to surveillance advertisers. While health and medical condition marketers—including pharmaceutical companies and drug store chains—may claim that such commercial data-driven marketing is “privacy-compliant,” in truth it reveals how vulnerable U.S. consumers are to having some of their most personal and sensitive data gathered, analyzed, and used for targeted digital advertising. It also shows how the latest tactics leveraging data to track and target the public—including “identity graphs,” artificial intelligence, the surveillance of connected or smart TV devices, and a focus on so-called permission-based “first-party data”—are now broadly deployed by advertisers, including pharma and medical marketers. Behind the use of these serious medical condition “segments” is a far-reaching commercial surveillance complex of giant platforms, retailers, “Adtech” firms, data brokers, marketing and “experience” clouds, device manufacturers (e.g., streaming), neuromarketing and consumer research testing entities, “identity” curation specialists, and advertisers...

We submit the treatment of medical condition and health data as representative of today’s commercial surveillance complex. It incorporates many of the features that can answer the questions the Commission seeks to address. There is widespread data gathering on individuals and communities, across their devices and applications; the techniques used to solicit information are intrusive, non-transparent, and beyond meaningful consumer control; and these methods come at a cost to a person’s privacy and pocketbook, with potentially significant consequences for their welfare. There are also societal impacts here, for the country’s public health infrastructure as well as for the expenditures the government must make to cover the costs of prescription drugs and other medical services...

Health and pharma marketers have adopted the latest data-driven surveillance-marketing tactics—including targeting on all of a consumer’s devices (which today also includes streaming video delivered by smart TVs); the integration of actual consumer purchase data for more robust targeting profiles; leveraging programmatic ad platforms; working with a myriad of data marketing partners; using machine learning to generate insights for granular consumer targeting; conducting robust measurement to help refine subsequent re-targeting; and taking advantage of new ways to identify and reach individuals, such as “identity graphs,” across devices.

[complete filing for the FTC’s Commercial Surveillance rulemaking attached]

cddsurveillancehealthftc112122.pdf
    Jeff Chester
  • At every turn, young people face tricks and traps to keep them online for hours and sharing sensitive data.

Contact:
David Monahan, Fairplay: david@fairplayforkids.org
Jeff Chester, Center for Digital Democracy: jeff@democraticmedia.org

Advocates to FTC: Write rules to protect kids from harmful manipulative design online

BOSTON, MA and WASHINGTON, DC – November 17, 2022 – A coalition of leading health and privacy advocates filed a petition today asking the Federal Trade Commission to promulgate a rule prohibiting online platforms from using unfair design features to manipulate children and teens into spending excessive time online. Twenty-one groups, led by Fairplay and the Center for Digital Democracy, said in their petition: “When minors go online, they are bombarded by widespread design features that have been carefully crafted and refined for the purpose of maximizing the time users spend online and activities users engage in.” They urged the FTC to establish rules of the road that define when these practices cross the line into unlawful unfairness.

The advocates’ petition details how the vast majority of apps, games, and services popular among minors generate revenue primarily via advertising, and many employ sophisticated techniques to cultivate lucrative long-term relationships between minors and their brands. As a result, platforms use techniques like autoplay, endless scroll, and strategically timed advertisements to keep kids and teens online as much as possible, which is not in their best interests.

The petition also details how manipulative design features on platforms like TikTok, Twitter, YouTube, Facebook, Instagram, and Snapchat undermine young people’s wellbeing. Excessive time online displaces sleep and physical activity, harming minors’ physical and mental health, growth, and academic performance. Features designed to maximize engagement also expose minors to potential predators, online bullies, and age-inappropriate content, harm minors’ self-esteem, and aggravate risks of disordered eating and suicidality. These manipulative tactics also undermine children’s and teens’ privacy by encouraging the disclosure of massive amounts of sensitive user data.

The advocates’ petition comes just months after California passed its Age Appropriate Design Code, a law requiring digital platforms to act in the best interests of children, and as momentum grows in Congress for the Kids Online Safety Act and the Children and Teens’ Online Privacy Protection Act. The petition was drafted by the Communications and Technology Law Clinic at Georgetown University Law Center.

Haley Hinkle, Policy Counsel, Fairplay:
“The manipulative tactics described in this Petition that are deployed by social media platforms and apps popular with kids and teens are not only harmful to young people’s development, they’re unlawful. The FTC should exercise its authority to prohibit these unfair practices and send Big Tech a message that manipulating minors into handing over their time and data is not acceptable.”

Katharina Kopp, Deputy Director, Center for Digital Democracy:
“The hyper-personalized, data-driven advertising business model has hijacked our children’s lives. The design features of social media and games have been purposefully engineered to keep young people online longer and satisfy advertisers. It’s time for the FTC to put an end to these unfair and harmful practices.
They should adopt safeguards that ensure platforms and publishers design their online content so that it places the well-being of young people ahead of the interests of marketers.”

Jenny Radesky, MD, Associate Professor of Pediatrics, University of Michigan and Chair-elect, American Academy of Pediatrics Council on Communications and Media:
“As a pediatrician, helping parents and teens navigate the increasingly complex digital landscape in a healthy way has become a core aspect of my work. If the digital environment is designed in a way that supports children’s healthy relationships with media, then it will be much easier for families to create boundaries that support children’s sleep, friendships, and safe exploration. However, this petition highlights how many platforms and games are designed in ways that actually do the opposite: they encourage prolonged time on devices, more social comparisons, and more monetization of attention. Kids and teens are telling us that these types of designs actually make their experiences with platforms and apps worse, not better. So we are asking federal regulators to help put safeguards in place to protect against the manipulation of children’s behavior and to instead prioritize their developmental needs.”

Professor Laura Moy, Director, Communications & Technology Law Clinic at Georgetown Law, and counsel for Center for Digital Democracy and Fairplay:
“As any parent or guardian can attest, games and social media apps keep driving kids and teens to spend more and more time online, in a way that neither minors nor their guardians can reasonably prevent. This is neither accidental nor innocuous—it’s engineered and it’s deeply harmful. The FTC must step in and set some boundaries to protect kids and teens. The FTC should clarify that the most harmful and widespread design features that manipulate users into maximizing time online, such as those employed widely by social media services and popular games, are unlawful when used on minors.”

Groups signing on to the petition include: Center for Digital Democracy; Fairplay; Accountable Tech; American Academy of Pediatrics; Becca Schmill Foundation, Inc.; Berkeley Media Studies Group; C. Everett Koop Institute at Dartmouth; Center for Humane Technology; Children and Screens: Institute of Digital Media and Child Development; Eating Disorders Coalition; Electronic Privacy Information Center (EPIC); LookUp.live; Lynn’s Warriors; Network for Public Education; Parent Coalition for Student Privacy; ParentsTogether Action; Protect Young Eyes; Public Citizen; Together for Girls; U.S. Public Interest Research Group; and UConn Rudd Center for Food Policy and Health.

###

ftc_engagement_petition_pr1.pdf, unfair_design_practices_petition_for_rulemaking_final_combined_filing.pdf
  • https://www.ftc.gov/system/files/ftc_gov/pdf/R307000_RULE_MAKING_PETITION_TO_PROHIBIT_THE%20_USE_ON_CHILDREN_OF_DESIGN_FEATURES.pdf
    Jeff Chester