Appeared in MartechCube
In a study commissioned by the IAB, a Harvard professor estimated that eliminating access to consumer data would result in a cascade of protective responses by brands and agencies. The impact would consolidate and shrink the $110 billion digital marketing industry by an estimated $40 billion.
“Concern over the banning of online data and targeting is already driving the marketplace into the hands of top internet platforms who are pulling access to third-party cookies and consolidating their consumers’ data for their own use behind walled gardens.”
Consumers using these services are forced into a data-for-services value exchange. In turn, advertisers are forced to spend only on these platforms in order to reach qualified consumers and the data about them.
Regulations should exist for the benefit of the people.
After a long and somewhat contentious process, CCPA is a reality. The regulations have finally been released. Unlike the Fair Credit Reporting Act or HIPAA, which clearly identified harmful behaviors and the data risks involved, CCPA and other state actions seem to be less about identifying behaviors that actually put consumers at risk and more about controlling the influence of select platforms.
Innovations and strategic evolutionary decisions, like the move to walled gardens, will always be able to outflank rules based on regulating tactics like cookies, location, AI, or predictive models.
Unfortunately, with CCPA, the regulators seem more interested in blocking specific technologies that “sound bad” in a press release than actually pursuing online behaviors that are in bad faith and lead to harm.
Clearly, marketers, social networks, and digital entertainment platforms haven’t done a good job of explaining privacy to consumers. For years, there has been an explicit value exchange: free content in exchange for the limited and anonymous use of a consumer’s data in advertising. Until the Cambridge Analytica scandal, a polarized electorate, and activist responses like CCPA, this had been a civil conversation between the platforms and the consumer. Studies show that most consumers understand and accept this trade-off.
The online industry has been largely good about guarding the privacy rights of the consumer in this exchange. While not perfect, certifications and industry-wide notice-and-consent policies for the proper handling of personal information are widely adopted. Brands and advertisers have every reason in the world to protect user privacy and act in good faith.
CCPA may change all that by conflating Facebook and Google business practices with bad actors like Cambridge Analytica and Julian Assange.
Consumer Choice is at Risk
One of the rights that CCPA potentially takes away from the consumer is the right to participate in this value exchange. Data is not always bad. Traditionally, the battle over personal information was about what data is truly sensitive and must be treated with care. On one end of the spectrum is personal health and financial information. On the other, aggregated and de-identified information that is part of the public record. In the middle is anonymous consumer behavior, device activity, points of interest, and ownership data that are not risky in any practical sense, yet are critical for fueling online commerce.
It is in this middle ground where damage will be done. CCPA and protectionist responses like walled gardens potentially take away both the consumer’s rights to make a value exchange and the marketer’s right to use data to optimize and create value for consumers.
Information collected about our demographic profiles, social networking, and even our commercial exchanges with vendors is overwhelmingly used to create opportunities of choice for consumers. These opportunities provide individuals with a more seamless and cost-effective online experience, informing them of current events and providing savings on the goods and services they need.
What does a reasonable standard for data and privacy look like?
People having the ability (and the tools) to control information about themselves is reasonable and completely appropriate. The standard of individual choice should always take precedence.
Consumers routinely accept cookies, make online transactions, and volunteer their participation in social exchanges knowing that it will not be harmful and that they can opt out at any time. Consumers benefit from exposing their data, opinions, and image in return for greater opportunity, lower costs, and improved efficiency. A line is crossed when personal data is used for illicit activity or drawn from sensitive data that the consumer did not explicitly sanction. Content that is controversial or that falls in a “protected class,” like health or financial data, should always be protected.
Wouldn’t it support a reasonableness standard if all content and all advertising carried “breadcrumbs” that an independent third party monitored and graded against an IAB, ANA, or FCC framework detailing the provenance of the digital asset to be advertised against?
If we can monitor email spam, we can certainly monitor contextual and advertising abuse. With the onset of COVID-19 and even the Black Lives Matter protests, advertisers have been quick to use data to navigate around, and sometimes toward, that online content.
Turning that same workflow around, publishers and platforms would have no problem blocking ads that are not validated, or the postings of bad actors. Let the market render its judgment and support advertisers with guidelines that allow them to make a market-based decision.
As with HIPAA, regulating the use of data online should recognize where the consumer’s data fits on a spectrum of sensitivity and be transparent about the purpose of its use. Much clearer guidance than what CCPA imposes is needed to allow advertisers and publishers to continue to navigate the intersection of sensitivity and reasonable purpose in good faith. To protect consumers from actual harm, we need more transparency than we get from driving data use behind walled gardens.