OpenAI’s launch of ChatGPT created a massive buzz in the technology world. That buzz continues, but it is now accompanied by stronger concerns about the impact of Generative Artificial Intelligence (GenAI) and the ways it can be misused.
Compared to the true capabilities of AI, GenAI is surface level. AI research dates back to the 1950s, but the technology became a household name only in 2022. Everyone, from Big Tech to startups, wanted to capitalise on this popularity. The result is a hot mess in which, as content consumers, we grow less sure every day of what is genuine and what is not.
McAfee found that 64% of Indians believe AI has made it harder to spot online scams. For a country like India, where only 24.7% of the population is computer literate but digital penetration is slightly over the 50% mark, everyone is vulnerable to these scams, especially now that GenAI tools are easily accessible and face little to no regulation.
The need of the hour is some form of mechanism to clearly distinguish what is AI-generated from what is not. The AI Labeling Act of 2023 (16), introduced in the United States, laid out that developers of generative AI systems would be required to include “clear and conspicuous” disclosures indicating that content was produced using AI. To show their commitment to the cause, seven leading companies including OpenAI, Google and Meta pledged to introduce ‘robust’ technology to mitigate this blurring of lines. TikTok has also joined the Coalition for Content Provenance and Authenticity (C2PA) and now embeds tamper-proof markers in assets generated on its platform, an approach Meta has likewise introduced and is urging others to adopt.
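To make the idea of a tamper-proof marker concrete, here is a minimal sketch of how a cryptographically signed provenance manifest can bind a label to a specific asset, so that any later edit breaks the signature. This is a simplified illustration of the principle behind C2PA-style Content Credentials, not the actual specification (which uses certificate-based signatures and embedded metadata); the key, the generator name and the function names below are hypothetical.

```python
# Illustration only: a provenance manifest bound to the exact bytes of an asset.
# Real C2PA Content Credentials use asymmetric keys and embedded metadata;
# this sketch uses an HMAC over a SHA-256 hash to show the tamper-evidence idea.

import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical; real systems use certificates


def attach_manifest(asset_bytes: bytes, generator: str) -> dict:
    """Create a provenance manifest tied to the asset's exact content."""
    manifest = {
        "generator": generator,  # e.g. "ExampleGenAI v1" (hypothetical name)
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the asset is unmodified and the signature is valid."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    )


if __name__ == "__main__":
    image = b"...raw image bytes..."
    manifest = attach_manifest(image, generator="ExampleGenAI v1")
    print(verify_manifest(image, manifest))              # True: asset untouched
    print(verify_manifest(image + b"edit", manifest))    # False: tampering detected
```

The point the sketch captures is that the label travels with a hash of the exact content it describes: editing the asset or altering the label without the signing key is detectable, which is what makes such a marker ‘tamper-proof’ in the sense the platforms claim.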
There is considerable scepticism about these tamper-proof claims, and it will clearly take time for other companies to adopt such measures. There are also reports of Meta mislabelling original work as AI-generated content on its social media platforms, Facebook and Instagram. OpenAI, too, shut down its AI classifier in June this year. The classifier was supposed to tell human writing from AI writing but, per the company’s own statement, its results yielded false positives.
A study by the MIT Schwarzman College of Computing lays out the motivations, challenges and impact of AI-labelling. It states that before implementing AI-labelling, stakeholders must set the objective of the exercise, and it lists two main objectives: informing people about the origin of content, and controlling misinformation. However, the study acknowledges the gap between what needs to be done and what is being done. Whether it is Meta or TikTok, the underlying message of their statements on AI-labelling is, in a nutshell, that AI-labelling is only half effective, and that is not good enough.
The onus of labelling has now been passed on to users. YouTube, Instagram and many other platforms are urging their users to declare whether they are posting AI-generated content, and according to reports, users who fail to label such content can be penalised. Such dependence on an honour system of self-declaration seems hardly adequate in an age when it is clear that malicious actors will continue to exploit the technology.
The most concerning part of all this is how the safety measures taken by the propagators of GenAI feel like an afterthought. As a tech company, we do our best to examine multiple use cases and scenarios before we even start building a feature. How the biggest players in the field did not see this coming and put contingency plans in place is puzzling.
With the biggest players in the AI field shrugging off hard questions about GenAI, it is only safe to assume that we are on our own, and that we are light years away from the utopia where AI-generated content is easily distinguishable and the internet is a safe place for many, if not all, whether that comes through AI-labelling or something completely novel.