Mumbai: Google has released its annual Ads Safety Report, which looks at how the company worked to create a safer ad experience for users over the previous year. It noted that billions of people around the world rely on Google products for relevant and trustworthy information, including ads, which is why it has thousands of people working around the clock to safeguard the digital advertising ecosystem. In 2023, Google stopped 5.5 billion bad ads. This included over one billion ads blocked or removed for violating its policy against abusing the ad network, which covers promoting malware; 206.5 million ads for violating its misrepresentation policy, which covers many scam tactics; and 273.4 million ads for violating its financial services policy.
The key trend in 2023, it noted, was the impact of generative AI. This new technology has introduced significant and exciting changes to the digital advertising industry, from performance optimisation to image editing. Of course, generative AI also presents new challenges.
Just as importantly, generative AI presents a unique opportunity to significantly improve enforcement efforts. Google's teams are embracing this transformative technology, specifically Large Language Models (LLMs), so that they can better keep people safe online. It said that its policies are designed to support a safe and positive experience for users, which is why it prohibits content that it believes to be harmful to users and the overall ad ecosystem.
Gen AI bolsters enforcement: Google said that its safety teams have long used AI-driven machine learning systems to enforce policies at scale. It is how, for years, it has been able to detect and block billions of bad ads before a person ever sees them. But, while still highly sophisticated, these machine learning models have historically needed to be trained extensively: they often rely on hundreds of thousands, if not millions, of examples of violative content.
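To make that contrast concrete, here is a minimal sketch of the kind of conventional supervised classifier the report describes, one that only becomes useful after being fitted on a large corpus of labelled examples. The data, labels and model choice below are invented for illustration and are not Google's actual pipeline.

```python
# Illustrative only: a conventional supervised text classifier of the kind
# the report describes, which needs many labelled examples before it works
# well. All ads and labels here are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice this corpus would hold hundreds of thousands of labelled ads.
ad_texts = [
    "Double your money in 7 days, guaranteed!",   # violative (scam)
    "Open a savings account with 4% interest.",   # legitimate
]
labels = [1, 0]  # 1 = policy-violating, 0 = compliant

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(ad_texts, labels)

# Score a new ad before it is served.
print(classifier.predict(["Get rich quick with our secret crypto method"]))
```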
LLMs, on the other hand, are able to rapidly review and interpret content at high volume, while also capturing important nuances within that content. These advanced reasoning capabilities have already resulted in larger-scale and more precise enforcement decisions on some of Google's more complex policies. Take, for example, its policy against Unreliable Financial Claims, which includes ads promoting get-rich-quick schemes. The bad actors behind these types of ads have grown more sophisticated. They adjust their tactics and tailor ads around new financial services or products, such as investment advice or digital currencies, to scam users.
To be sure, traditional machine learning models are trained to detect these policy violations. Yet the fast-paced and ever-changing nature of financial trends makes it, at times, harder to differentiate between legitimate and fake services, and to quickly scale automated enforcement systems to combat scams. LLMs are more capable of quickly recognising new trends in financial services, identifying the patterns of bad actors who are abusing those trends, and distinguishing a legitimate business from a get-rich-quick scam. This has helped Google's teams become even more nimble in confronting emerging threats of all kinds.
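By contrast, the LLM-based approach the report alludes to can be sketched as a prompt-and-parse loop that hands the policy text and the ad to a model and asks for a reasoned verdict. Everything below is assumed for illustration: the policy wording, the JSON response format, and especially call_llm, which is a hypothetical stand-in for a real model endpoint, not an actual API.

```python
# Illustrative only: a sketch of LLM-based policy review as the report
# describes it conceptually. `call_llm` is a hypothetical placeholder,
# not a real API; the policy text and JSON schema are invented.
import json

POLICY = (
    "Unreliable Financial Claims: ads may not promise guaranteed returns, "
    "promote get-rich-quick schemes, or misrepresent financial products."
)

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to a real model endpoint. Returns a canned
    # response so the sketch runs end to end.
    return '{"violates": true, "reason": "Promises guaranteed returns."}'

def review_ad(ad_text: str) -> dict:
    """Ask the model whether an ad violates the policy, with a reason."""
    prompt = (
        f"Policy: {POLICY}\n"
        f"Ad: {ad_text}\n"
        'Reply only with JSON: {"violates": true|false, "reason": "..."}'
    )
    return json.loads(call_llm(prompt))

print(review_ad("Turn $100 into $10,000 in one week with our AI trading bot"))
```

Unlike the classifier above, nothing here is fitted on labelled examples; the policy itself travels in the prompt, which is what lets the approach adapt to a new financial trend without retraining.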
To put the impact of AI on this work into perspective: last year, more than 90% of Google's publisher page-level enforcement started with the use of machine learning models, including the latest LLMs. Google added that it has only just begun to leverage the power of LLMs for ads safety. Gemini, launched publicly last year, is Google's most capable AI model, and Google has started bringing its sophisticated reasoning capabilities into its ads safety and enforcement efforts.
Work to prevent fraud and scams: In 2023, scams and fraud across all online platforms were on the rise. Bad actors are constantly evolving their tactics to manipulate digital advertising in order to scam people and legitimate businesses alike. To counter these ever-shifting threats, Google updated policies, deployed rapid-response enforcement teams, and sharpened its detection techniques.
- In November, it launched its Limited Ads Serving policy, which is designed to protect users by limiting the reach of advertisers with whom it is less familiar. Under this policy, it has implemented a “get-to-know-you” period for advertisers who don’t yet have an established track record of good behaviour, during which impressions for their ads might be limited in certain circumstances, for example when there is an unclear relationship between the advertiser and a brand they are referencing (a hypothetical sketch of this kind of gating logic follows this list). Ultimately, Limited Ads Serving, which is still in its early stages, will help ensure well-intentioned advertisers are able to build up trust with users, while limiting the reach of bad actors and reducing the risk of scams and misleading ads.
- A critical part of protecting people from online harm hinges on Google's ability to respond to new abuse trends quickly. Toward the end of 2023 and into 2024, it faced a targeted campaign of ads featuring the likeness of public figures to scam users, often through the use of deepfakes. When Google detected this threat, it created a dedicated team to respond immediately. It pinpointed patterns in the bad actors’ behaviour, trained its automated enforcement models to detect similar ads, and began removing them at scale. It also updated its misrepresentation policy to better enable it to rapidly suspend the accounts of bad actors.
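Returning to the Limited Ads Serving policy above: the report does not describe how the “get-to-know-you” period is implemented, but purely as a hypothetical sketch, gating logic of that general shape might look like the following, where every field, threshold and number is invented for illustration.

```python
# Hypothetical sketch of "limited ads serving" gating logic. The report
# does not describe an implementation; all fields and thresholds here
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Advertiser:
    days_active: int            # how long the account has advertised
    policy_strikes: int         # past confirmed policy violations
    verified_brand_link: bool   # clear relationship to brands referenced

TRUST_PERIOD_DAYS = 90  # invented length of the get-to-know-you period

def impression_cap(adv: Advertiser, requested: int) -> int:
    """Limit impressions for advertisers without an established record."""
    if adv.policy_strikes > 0:
        return 0  # stop serving entirely for known bad actors
    if adv.days_active < TRUST_PERIOD_DAYS and not adv.verified_brand_link:
        return min(requested, 1_000)  # throttle unproven advertisers
    return requested  # established advertisers serve at full volume

print(impression_cap(Advertiser(10, 0, False), 50_000))  # -> 1000
```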
The fight against scam ads is an ongoing effort, as Google sees bad actors operating with more sophistication, at greater scale, and with new tactics such as deepfakes to deceive people. It added that it will continue to dedicate extensive resources, making significant investments in detection technology and partnering with organisations like the Global Anti-Scam Alliance and Stop Scams UK to facilitate information sharing and protect consumers worldwide.
Investing in election integrity: Political ads are an important part of democratic elections. Candidates and parties use ads to raise awareness, share information, and engage potential voters. In a year with several major elections around the world, Google wants to make sure voters continue to trust the election ads they may see on its platforms. That’s why it said it has long-standing identity verification and transparency requirements for election advertisers, as well as restrictions on how these advertisers can target their election ads. All election ads must also include a “paid for by” disclosure and are compiled in its publicly available transparency report.
In 2023, it verified more than 5,000 new election advertisers and removed more than 7.3 million election ads that came from advertisers who did not complete verification. Google also said that, last year, it was the first tech company to launch a new disclosure requirement for election ads containing synthetic content. As more advertisers leverage the power and opportunity of AI, it wants to make sure it continues to provide people with the transparency and information they need to make informed decisions.
Additionally, it has continued to enforce its policies against ads that promote demonstrably false election claims that could undermine trust or participation in democratic processes.
Staying nimble and looking ahead: When it comes to ads safety, a lot can change over the course of a year, from the introduction of new technology such as generative AI to novel abuse trends and global conflicts. And the digital advertising space has to be nimble and ready to react. That’s why it is continuously developing new policies, strengthening enforcement systems, deepening cross-industry collaboration, and offering more control to people, publishers, and advertisers.
In 2023, for example, it launched the Ads Transparency Center, a searchable hub of all ads from verified advertisers, which helps people quickly and easily learn more about the ads they see on Search, YouTube and Display. It also updated its suitability controls to make it simpler and quicker for advertisers to exclude topics they wish to avoid across YouTube and Display inventory. Overall, it made 31 updates to its Ads and Publisher policies. Though it does not yet know what the rest of 2024 has in store, Google said it is confident that its investments in policy, detection and enforcement will prepare it for any challenges ahead.