European privacy advocates claim that online behavioral advertising relies on a complicated bidding process that poses a significant threat to consumer privacy.
Behavioral advertising tracks what a user reads, listens to, or watches in order to serve ads tailored to their specific interests. New documents filed this Monday with regulators in Poland, the UK, and Ireland claim that the handling of personal data while matching advertisements to ad slots flouts the European Union’s General Data Protection Regulation (GDPR), a strict set of consumer-privacy rules in effect since May 2018.
The documents focus on the categories that the biggest ad-tech companies use to instantly match advertisers with appropriate users or content, pointing out that the list of labels approved by the Interactive Advertising Bureau (IAB) includes sensitive categories such as incest, abuse support, gay life, hate content, substance abuse, and AIDS/HIV.
The advocacy effort, led by the privacy-focused browser Brave, claims that over time the cookies and related technologies used to track a user’s browsing history can fold these labels into user profiles. “Labels about what you read and watch online stick to you for a long time,” said Johnny Ryan, chief policy officer at Brave, citing a December report from the New Economics Foundation which claims that a typical UK user’s profile is broadcast by ad-tech companies a whopping 164 times a day. The groups add that the profiles are then passed around the internet ad ecosystem in violation of many of the rules laid down by the GDPR.
Personal identifiers ‘about the human user of the device,’ the ‘User’ attributes, are ‘strongly recommended’ to be involved in a bid request – Ravi Naik, Brave’s lawyer
The IAB, for its part, has said that the categories were established by the IAB Tech Lab after consulting academics, ad-measurement companies, and relevant IAB members. A blog post from November 2017 describes the lab’s goal in creating the current set of categories as helping content creators build “relevant, brand-safe, and effective advertising” and supporting “audience analysis and segmentation.” The list includes categories for special-needs kids, autism, incontinence, and infertility, as well as religious categories for Islam, Hinduism, and other religions. Dennis Buchheim, senior vice president and general manager of the IAB Tech Lab, said in an email that it is the ad-tech companies, and not the categories created for their benefit, that bear any legal obligations under the GDPR, adding that the categories are “used by organizations, at their sole discretion, to categorize the type of content that a website contains.”
Google’s ‘Authorized Buyer Programme’ has a similar list of categories used in real-time bid requests, including sexual abuse, unwanted body and facial hair removal, sexually transmitted diseases, male impotence, and right-wing and left-wing politics, while excluding categories for substance abuse, including the use of steroids and performance-enhancing drugs, and alcohol treatment. For their part, publishers can opt out of the list, known as “Publisher Verticals,” which is generated automatically from keywords used on a webpage. The index draws on webpage content rather than personal information to facilitate contextual ads, allowing advertisers to make content-based choices, such as letting a liquor company keep its ads away from pages aimed at pregnant women. The categories are used to give bidders an idea of the ad space up for auction in real time.
We have strict policies that prohibit advertisers on our platforms from targeting individuals on the basis of sensitive categories such as race, sexual orientation, health conditions, pregnancy status, etc. If we found ads on any of our platforms that were violating our policies and attempting to use sensitive interest categories to target ads to users, we would take immediate action – A spokesperson for Google
A complaint filed last September by Ryan and two others about the online advertising system in the UK and Ireland alleges that the process risks exposing user locations and tracking identifiers, which can then be used to build long-term profiles. These profiles can in turn be combined with sensitive offline data, such as a user’s income, social media influence, gender, political leaning, and sexual orientation.
As a demonstration, the complaint pointed to a sample bid request on Google’s developer blog, which included specific information about the person’s latitude and longitude, zip code, device details, and tracking ID. Ryan adds that this illustrates the human dimension of behavioral targeting, a process that can seem abstract because of its pervasive and opaque nature. A complaint filed in Poland by the privacy-focused non-profit Panoptykon Foundation includes the allegations made in the UK and Ireland as well as new claims regarding the content categories.
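The kind of bid request described in the complaints can be sketched as a small JSON structure. In the Python sketch below, the field names follow the IAB’s publicly documented OpenRTB convention (`device`, `geo`, `user`), but every value is an invented placeholder, not data from any real request:

```python
import json

# Illustrative sketch of the data a real-time bid request can carry.
# Field names follow the IAB's OpenRTB convention; all values are
# placeholders invented for this example.
bid_request = {
    "id": "example-auction-id",  # auction identifier (placeholder)
    "device": {
        "ua": "Mozilla/5.0 (Linux; Android 9)",  # device/browser details
        "ifa": "00000000-0000-0000-0000-000000000000",  # advertising/tracking ID
        "geo": {
            "lat": 53.35,   # latitude (placeholder)
            "lon": -6.26,   # longitude (placeholder)
            "zip": "D02",   # postal/zip code (placeholder)
        },
    },
    "user": {
        # the 'User' attributes the complaints say are "strongly recommended"
        "id": "example-user-id",
    },
}

# A structure like this is broadcast to many bidders during each auction.
print(json.dumps(bid_request, indent=2))
```

Because the request is sent to every participating bidder, each of these fields, location, device, and tracking ID, is visible to many companies at once, which is the exposure the complaints describe.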
Last week, France’s privacy watchdog fined Google $57 million for violating the GDPR by failing to obtain proper user consent when personalizing ads.