Artificial intelligence (AI) became the talk of the town in November 2022, when OpenAI released ChatGPT for public use. Since then, technology giants such as Microsoft and Google have invested heavily in developing AI chatbots, and the impact of this new-age technology has been undeniable. Businesses across sectors have increasingly adopted AI in their services, particularly in India, where the AI adoption rate in key sectors touched 48% in FY24.
In the domestic context, AI is being used in sectors ranging from food delivery and beauty to energy and manufacturing. The use of Generative AI (GenAI) has peaked in the creative domains, from copywriting and graphic design to advertising and public relations. In public relations, GenAI is being used widely for everything from content creation to idea generation. However, concerns about the authenticity of AI-generated content quickly rose to prominence, with as many as eight American newspapers suing OpenAI and Microsoft for copyright infringement in April 2024. Other ethical concerns around AI use, including privacy, have also surfaced over the past year. This makes it urgent to understand the considerations involved in using GenAI in public relations.
Accuracy and reliability
While the impact of GenAI on the public relations domain cannot be overstated, PR practitioners need to remember that the technology remains a work in progress. Content or ideas generated by GenAI are frequently inaccurate, infringing or otherwise unfit for business use. Numerous lawsuits have been filed around the world over infringement of existing creative works, factual errors and related issues. Experts suggest these errors often arise because GenAI tools fail to differentiate between the events or situations for which the content is being created.
For example, a GenAI-generated idea for an article by a manufacturing expert might suggest writing about technologies that are either non-existent or of no practical use. It is important to understand that AI chatbots are typically trained on data gathered from the internet, without the accuracy of that data being verified. Creative outputs from GenAI may also contain trademarks, and using them without permission exposes the user to legal proceedings. PR professionals relying on GenAI must therefore vet the content carefully before using it for professional purposes.
Data security and confidentiality
One of the most important responsibilities of any firm is protecting valuable data about its brand, products and business dealings. PR professionals are often given access to a company's confidential data, and if that data is used to generate press releases, reports or other content through GenAI, the firm is exposed to the risk of a data breach. GenAI tools often do not adhere to an organization's data security and confidentiality standards, and may store the data entered in prompts and surface it in responses to other users. This breaches the organization's confidentiality and can expose it to considerable losses, and even lawsuits.
Numerous companies around the globe have fallen victim to data and confidentiality breaches linked to AI usage. The most prominent incident occurred in November 2022, when a prominent German-owned telecom company operating in the US suffered the theft of personal data belonging to 37 million customers. Investigations revealed that the use of an application equipped with AI capabilities led to the theft.
Plagiarism & copyright issues
As mentioned earlier, several US newspapers sued OpenAI and Microsoft in 2024 for copyright violations. The plaintiffs accused these technology giants of using copyrighted news articles, without express permission or payment, to train their AI chatbots. Such instances are not limited to the US; similar cases have been filed around the world. GenAI tools typically generate content derived from existing works, sometimes reproducing passages verbatim, which leaves users vulnerable to plagiarism and copyright infringement claims if the output is used commercially.
Recent studies have found that up to 80% of PR professionals feel confident using GenAI tools to produce content such as reports and press releases. This content is often not checked for plagiarism or copyright infringement, leaving clients vulnerable to reputational damage and lawsuits.
The ethical implications
While it is imperative for PR professionals to consider the legal implications, they must also weigh the ethical implications of using GenAI for professional purposes. GenAI models are often trained on data from sources that have not been vetted for biases or prejudices against individuals or entities. If a GenAI tool generates content built on malicious or biased source material, it creates ethical exposure for the user and, by extension, the client.
PR professionals must assess GenAI-generated content in depth and only then decide how, or whether, to integrate it ethically.
Conclusion
The modern world is increasingly driven by AI-generated narratives. Used with control and verified against the considerations outlined above, GenAI can offer efficient, data-driven communication. PR professionals work intensely to place their clients' narratives in front of the right audience; all of that effort becomes vulnerable through the unregulated use of GenAI. PR organisations must lay down an exhaustive framework before authorising the use of GenAI in a professional capacity, placing special emphasis on accuracy, data security, confidentiality, and legal and ethical implications. GenAI offers PR professionals a new horizon of success, but these considerations must be reviewed before it is used in a professional capacity.
Article authored by Anubhav Singh, Founder of Bridgers.