This month, Google came under fire after a report revealed that it had given an anti-abortion organisation about $218,089 worth of free advertising. Google has a thorny history of providing these kinds of organisations with powerful platforms to reach and mislead women seeking abortion services, but the tech giant has just updated its ad policy to help curb this deception.
Starting next month, advertisers in the U.S., UK, and Ireland running ads with abortion-related keywords will need to apply for certification and state explicitly whether or not they provide abortions. Once an organisation is certified, its ads will carry a disclosure reading either “Provides abortions” or “Does not provide abortions”, and these disclosures will roll out across all Search ad formats, according to a Google post.
Advertisers running abortion-related ads can submit their applications starting today, and Google recommends they do so, since the policy goes into effect in June.
It’s unclear why the policy will apply only to the U.S., UK, and Ireland. Google did not immediately respond to a request for comment, but we’ll update this post when we receive a reply.
For people who turn to the most powerful search engine for information about abortion services, this is a crucial policy update: it helps them distinguish legitimate medical providers from crisis pregnancy centres that aim to dissuade women from having an abortion. And these crisis pregnancy centres have weaponised Google products for years through deceptive advertising practices on Google Maps and Search.
By forcing crisis pregnancy centres to declare explicitly whether or not they provide abortions, and then labelling their ads accordingly, the policy reduces the risk that a woman with an unplanned pregnancy will be manipulated online and steered into a traumatising, medically inaccurate in-person experience.
Google’s policy change is belated (activists urged Google to remove crisis pregnancy centres from its Search results a year ago), but it marks an acknowledgement that the tech giant’s tools and services can be exploited, and that those exploits have serious consequences for users’ mental and physical health. And the remedy is as simple as forcing manipulative organisations to clearly state the facts.