The US Federal Trade Commission (FTC) on August 11 announced that it is exploring rules to crack down on harmful commercial surveillance and lax data security and is seeking public comments on the same. “Specifically, the Commission invites comment on whether it should implement new trade regulation rules or other regulatory alternatives concerning the ways in which companies (1) collect, aggregate, protect, use, analyze, and retain consumer data, as well as (2) transfer, share, sell, or otherwise monetize that data in ways that are unfair or deceptive,” the FTC notice stated.
“Firms now collect personal data on individuals at a massive scale and in a stunning array of contexts. The growing digitization of our economy—coupled with business models that can incentivize endless hoovering up of sensitive user data and a vast expansion of how this data is used—means that potentially unlawful practices may be prevalent.” — FTC Chair Lina M. Khan.
Why does this matter? Despite being home to the tech companies that amass the most user data worldwide, the US has no law regulating what data is collected and how it is used. For years, lawmakers have tried to pass a federal privacy law, but to no avail, leaving the US lagging behind the EU, China, and others on this front. Some states like California, Virginia, and Colorado have their own privacy laws, and there are sector-specific laws as well, but these protect only a subset of US citizens or use cases. A new federal privacy bill was introduced by lawmakers in June, but it has a long way to go before it becomes law. The FTC’s rules might see the light of day sooner than these other efforts.
What is commercial surveillance?
“Commercial surveillance is the business of collecting, analyzing, and profiting from information about people,” the FTC explained. Key features of commercial surveillance include:
- Collection of data: “The FTC is concerned that companies collect vast troves of consumer information, only a small fraction of which consumers proactively share. Much of this data is collected through secret surveillance practices. Companies can track every aspect of consumers’ engagement online. Companies can also surveil consumers while they are connected to the internet – their family and friend networks, browsing and purchase histories, location and physical movements, and a wide range of other personal details. Companies can collect data in other ways too, such as buying it from data brokers or pulling it from public sources,” FTC explained.
- Analysis of data: Companies then use algorithms and automated systems to analyse the information they collect and build consumer profiles and make inferences about consumers to predict their behaviour and preferences. “Some companies, moreover, reportedly claim to collect consumer data for one stated purpose but then also use it for other purposes,” FTC noted.
- The monetisation of data: “Companies may use some of the information they collect to provide products and services, but they can also use it to make money. For example, they may sell the information through the massive, opaque market for consumer data, use it to place behavioural ads, or leverage it to sell more products,” FTC explained. “These practices also appear to exist outside of the retail consumer setting. Some employers, for example, reportedly collect an assortment of worker data to evaluate productivity, among other reasons—a practice that has become far more pervasive since the onset of the COVID-19 pandemic,” FTC added.
“Whether they know it or not, most Americans today surrender their personal information to engage in the most basic aspects of modern life. When they buy groceries, do homework, or apply for car insurance, for example, consumers today likely give a wide range of personal information about themselves to companies, including their movements, prayers, friends, menstrual cycles, web-browsing, and faces, among other basic aspects of their lives.” — FTC
What concerns does commercial surveillance pose?
According to the FTC, commercial surveillance poses the following concerns:
- Exposure to data thieves and hackers: The volume of information that companies collect requires a commensurate level of data security to keep it safe, but many companies do not sufficiently or consistently invest in securing the data from hackers and data thieves. “Despite widely accepted risk mitigation standards, they fail to use encryption techniques and other protective measures,” FTC observed. “Fraud and identity theft cost both businesses and consumers billions of dollars, and consumer complaints are on the rise. For some kinds of fraud, consumers have historically spent an average of 60 hours per victim trying to resolve the issue. Even the nation’s critical infrastructure is at stake, as evidenced by the recent attacks on the largest fuel pipeline, meatpacking plants, and water treatment facilities in the United States,” FTC added.
- Harms to children: “There is a growing body of evidence that surveillance-based services are addictive to children and lead to a wide variety of mental health and social harms. With the expansion of technologies that are directed at kids and the education system’s growing reliance on digital tools, children and teens face greater risks of immediate and long-term dangers,” FTC explained.
- Retaliation for not consenting: Many companies may deny service to consumers who do not wish to have their personal information shared with other parties or require consumers to pay a premium to keep their information private. “These data practices, and the lack of meaningful alternatives, raise questions about whether consumers are really consenting,” FTC stated. Further, “studies have shown that most people do not generally understand the market for consumer data that operates beyond their monitors and displays. Most consumers, for example, know little about the data brokers and third parties who collect and trade consumer data or build consumer profiles that can expose intimate details about their lives and, in the wrong hands, could expose unsuspecting people to future harm,” FTC added.
- Surveillance creep: “Some companies reserve the right to change their privacy terms after consumers sign up for a product or service. Consumers who want to maintain access may have no choice but to accept those updated terms, even those that materially break previous privacy promises. Companies may couch their updates in legal language that masks the new ways they will collect, analyze, and monetize consumers’ information. They can then use data collected for one purpose for a wide variety of new purposes. And consumers may not have a way to say no,” FTC remarked.
- Inaccuracy: Companies do not publicly disclose how their algorithms and automated systems work, but research suggests that algorithms are prone to errors, bias, and inaccuracy. “These flaws often stem from the design process, such as the use of unrepresentative datasets, faulty classifications, or flawed problem analysis, a failure to identify new phenomena, and lack of context and meaning,” FTC observed.
- Bias and discrimination: “Some commercial surveillance practices may discriminate against consumers based on legally protected characteristics like race, gender, religion, and age. Some companies may use these categories to deny consumers access to housing, credit, employment, and other critical services,” FTC pointed out. “For example, some employers’ automated systems have reportedly learned to prefer men over women. Meanwhile, a recent investigation suggested that lenders’ use of educational attainment in credit underwriting might disadvantage students who attended historically Black colleges and universities,” FTC cited as examples.
- Dark patterns: “Companies increasingly employ dark patterns or marketing to influence or coerce consumers into choices they would otherwise not make,” FTC noted, “including burying privacy settings behind multiple layers of the user interface and making misleading representations to “trick or trap” consumers into providing personal information.”
Questions for public comment
Here are a few selected questions that FTC has posed to the public. The full list of about 100 questions can be found here.
The public can submit comments on the following questions via https://www.regulations.gov within 60 days of the notice and/or share their input on these topics during a virtual public forum on September 8, 2022.
- Which practices do companies use to surveil consumers?
- Which measures do companies use to protect consumer data?
- How, if at all, do these commercial surveillance practices harm consumers or increase the risk of harm to consumers?
- How should the Commission identify and evaluate these commercial surveillance harms or potential harms? On which evidence or measures should the Commission rely to substantiate its claims of harm or risk of harm?
- Which kinds of data should be subject to a potential trade regulation rule? Should it be limited to, for example, personally identifiable data, sensitive data, data about protected categories and their proxies, data that is linkable to a device, or non-aggregated data? Or should a potential rule be agnostic about kinds of data?
- Which, if any, commercial incentives and business models lead to lax data security measures or harmful commercial surveillance practices? Are some commercial incentives and business models more likely to protect consumers than others? On which checks, if any, do companies rely to ensure that they do not cause harm to consumers?
Harm to children
- Are there practices or measures to which children or teenagers are particularly vulnerable or susceptible? For instance, are children and teenagers more likely than adults to be manipulated by practices designed to encourage the sharing of personal information?
- What types of commercial surveillance practices involving children and teens’ data are most concerning? For instance, given the reputational harms that teenagers may be characteristically less capable of anticipating than adults, to what extent should new trade regulation rules provide teenagers with an erasure mechanism in a similar way that COPPA provides for children under 13? Which measures beyond those required under COPPA would best protect children, including teenagers, from harmful commercial surveillance practices?
- In what circumstances, if any, is a company’s failure to provide children and teenagers with privacy protections, such as not providing privacy-protective settings by default, an unfair practice, even if the site or service is not targeted to minors?
- Which sites or services, if any, implement child-protective measures or settings even if they do not direct their content to children and teenagers?
- Do techniques that manipulate consumers into prolonging online activity (e.g., video autoplay, infinite or endless scroll, quantified public popularity) facilitate commercial surveillance of children and teenagers? If so, how?
- To what extent should trade regulation rules distinguish between different age groups among children (e.g., 13 to 15, 16 to 17, etc.)?
- Given the lack of clarity about the workings of commercial surveillance behind the screen or display, is parental consent an efficacious way of ensuring child online privacy?
- How extensive is the business-to-business market for children and teens’ data?
- How would potential rules that block or otherwise help to stem the spread of child sexual abuse material, including content-matching techniques, otherwise affect consumer privacy?
Balancing costs and benefits
- The Commission invites comment on the relative costs and benefits of any current practice, as well as those for any responsive regulation. How should the Commission engage in this balancing in the context of commercial surveillance and data security? Which variables or outcomes should it consider in such an accounting? Which variables or outcomes are salient but hard to quantify as a material cost or benefit? How should the Commission ensure adequate weight is given to costs and benefits that are hard to quantify?
- What is the right time horizon for evaluating the relative costs and benefits of existing or emergent commercial surveillance and data security practices? What is the right time horizon for evaluating the relative benefits and costs of regulation?
- To what extent would any given new trade regulation rule on data security or commercial surveillance impede or enhance innovation?
- Would any given new trade regulation rule on data security or commercial surveillance impede or enhance competition?
- Should the analysis of cost and benefits differ in the context of information about children? If so, how?
Data security
- Should, for example, new rules require businesses to implement administrative, technical, and physical data security measures, including encryption techniques, to protect against risks to the security, confidentiality, or integrity of covered data? If so, which measures? How granular should such measures be?
- Should new rules codify the prohibition on deceptive claims about consumer data security, accordingly authorizing the Commission to seek civil penalties for first-time violations?
- To what extent, if at all, should the Commission require firms to certify that their data practices meet clear security standards? If so, who should set those standards, the FTC or a third-party entity?
Collection, use, retention, and transfer of consumer data
- How do companies collect consumers’ biometric information? What kinds of biometric information do companies collect? For what purposes do they collect and use it? Are consumers typically aware of that collection and use? What are the benefits and harms of these practices?
- Should the Commission consider limiting commercial surveillance practices that use or facilitate the use of facial recognition, fingerprinting, or other biometric technologies? If so, how?
- To what extent, if at all, should the Commission limit companies that provide any specifically enumerated services (e.g., finance, healthcare, search, or social media) from owning or operating a business that engages in any specific commercial surveillance practices like personalized or targeted advertising?
- How accurate are the metrics on which internet companies rely to justify the rates that they charge to third-party advertisers? To what extent, if at all, should new rules limit targeted advertising and other commercial surveillance practices beyond the limitations already imposed by civil rights laws? If so, how? To what extent would such rules harm consumers, burden companies, stifle innovation or competition, or chill the distribution of lawful content?
- How cost-effective is contextual advertising as compared to targeted advertising?
- To what extent would data minimization requirements or purpose limitations protect consumer data security?
- To what extent would data minimization requirements or purpose limitations unduly hamper algorithmic decision-making or other algorithmic learning-based processes or techniques?
Automated decision-making systems
- How prevalent is algorithmic error?
- What are the best ways to measure algorithmic error?
- Does the weight that companies give to the outputs of automated decision-making systems overstate their reliability?
- To what extent, if at all, should new rules require companies to take specific steps to prevent algorithmic errors? If so, which steps?
- To what extent, if at all, do consumers benefit from automated decision-making systems?
- Could new rules help ensure that firms’ automated decision-making practices better protect non-English speaking communities from fraud and abusive data practices? If so, how?
- If new rules restrict certain automated decision-making practices, which alternatives, if any, would take their place?
- What would be the effect of restrictions on automated decision-making in product access, product features, product quality, or pricing?
Discrimination based on protected categories
- How prevalent is algorithmic discrimination based on protected categories such as race, sex, and age?
- How should the Commission evaluate or measure algorithmic discrimination?
- How should the Commission address such algorithmic discrimination? Should it consider new trade regulation rules that bar or somehow limit the deployment of any system that produces discrimination, irrespective of the data or processes on which those outcomes are based?
- How, if at all, would restrictions on discrimination by automated decision-making systems based on protected categories affect all consumers?
Consumer consent
- To what extent is consumer consent an effective way of evaluating whether a practice is unfair or deceptive?
- To what extent should new trade regulation rules prohibit certain specific commercial surveillance practices, irrespective of whether consumers consent to them?
- To what extent should new trade regulation rules give consumers the choice of withdrawing their duly given prior consent?
- Should the Commission require different consent standards for different consumer groups (e.g., parents of teenagers (as opposed to parents of pre-teens), elderly individuals, individuals in crisis or otherwise especially vulnerable to deception)?
- Have opt-out choices proved effective in protecting against commercial surveillance?
Notice, transparency, and disclosure
- What kinds of information should new trade regulation rules require companies to make available and in what form?
- In which contexts are transparency or disclosure requirements effective?
- The Commission invites comment on the nature of the opacity of different forms of commercial surveillance practices. On which technological or legal mechanisms do companies rely to shield their commercial surveillance practices from public scrutiny? Intellectual property protections, including trade secrets, for example, limit the involuntary public disclosure of the assets on which companies rely to deliver products, services, content, or advertisements. How should the Commission address, if at all, these potential limitations?
- To what extent should trade regulation rules, if at all, require companies to explain (1) the data they use, (2) how they collect, retain, disclose, or transfer that data, (3) how they choose to implement any given automated decision-making system or process to analyze or process the data, including the consideration of alternative methods, (4) how they process or use that data to reach a decision, (5) whether they rely on a third-party vendor to make such decisions, (6) the impacts of their commercial surveillance practices, including disparities or other distributional outcomes among consumers, and (7) risk mitigation measures to address potential consumer harms?
- Should new rules, if promulgated, require plain-spoken explanations? How effective could such explanations be, no matter how plain? To what extent, if at all, should new rules detail such requirements?
- To what extent should the Commission, if at all, make regular self-reporting, third-party audits or assessments, or self-administered impact assessments about commercial surveillance practices a standing obligation?
- To what extent do companies have the capacity to provide any of the above information? Given the potential cost of such disclosure requirements, should trade regulation rules exempt certain companies due to their size or the nature of the consumer data at issue?
- The Commission is alert to the potential obsolescence of any rulemaking. As important as targeted advertising is to today’s internet economy, for example, it is possible that its role may wane. Companies and other stakeholders are exploring new business models. Such changes would have notable collateral consequences for companies that have come to rely on the third-party advertising model, including and especially news publishing. These developments in the online advertising marketplace are just one example. How should the Commission account for changes in business models in advertising as well as other commercial surveillance practices?
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.