Google has announced the launch of its experimental conversational AI service, Bard, billed by many as a rival to OpenAI’s ChatGPT. Even before becoming widely available to the public (expected in the coming weeks), Bard made headlines for a factual error in its very first demo.
In a post shared by Google on Twitter, Bard is seen answering the question: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”
In response, Bard gave three bullet points as answers, including one stating that the telescope “took the very first pictures of a planet outside of our own solar system.”
Many Twitter users familiar with astronomy pointed out the error: the first image of an exoplanet was actually taken in 2004.
Bard is an experimental conversational AI service, powered by LaMDA. Built using our large language models and drawing on information from the web, it’s a launchpad for curiosity and can help simplify complex topics → https://t.co/fSp531xKy3 pic.twitter.com/JecHXVmt8l
— Google (@Google) February 6, 2023
A few days ago, ChatGPT apologised to Microsoft Executive Chairman and CEO Satya Nadella for suggesting biryani as a South Indian “tiffin” option.
On February 8, Microsoft announced its integration of OpenAI’s GPT-4 model into Bing, which will provide a ChatGPT-like experience within the search engine.
Google has also announced that Bard will be integrated into its search engine after a testing phase, providing users with a personalised response to their queries rather than a list of relevant websites.
As inaccuracies in these conversational AI services come to light, the question to be asked is: can they be relied on in newsrooms, where fact-checking and accuracy are critical to the credibility of a news brand?
Experts believe that they can be used for research, but the output cannot be trusted without human analysis and verification.
“Human civilisation goes through path-breaking innovations from time to time. Looks like ChatGPT is one such moment where massive industrial changes might be underway. Like in many other fields, journalism might also be under the threat of artificial intelligence,” observes Vinod K Jose, former Executive Editor of The Caravan.
“I see publishers, content companies and disinformation factories using it on a massive scale. It will be detrimental to newsrooms the way it is understood,” Jose added.
Not everyone agrees.
“The way ChatGPT is now, I don’t think anybody from any field can use it as it is; rather, it can be used for some basic research,” says Sabyasachi Mitter, Founder and Managing Director, Fulcro.
He adds, “It can be used as a shortcut to Google search-based research. The output cannot be trusted for any decision without human analysis or confirmation. Newsrooms are more sensitive; any fact or information that is picked up without verification can create a huge problem for the organisation. The credibility of the organisation will be at stake.”
Mitter believes that newsrooms will continue to need humans for research, verification, and fact-checking.
“They can use technologies like ChatGPT and Bard as augmentation tools for ideation, or to get story angles or story pegs; beyond that, I don’t think they are mature enough to be trusted cent per cent,” Mitter adds.
Prof. Ujjwal Anu Chowdhury, Strategic Adviser, Daffodil International University (Dhaka) and Adamas University (Kolkata), believes that ChatGPT and Bard are a good fast route to backgrounders.
“They can give background matter on news stories but not the actual final story, which depends on cross-verification. Students’ assignments and background information can well be procured from OpenAI bots. But cross-verification, analysis, future applications etc. need human interpretation for sure,” Chowdhury explains.
NP Ullekh, Executive Editor of Open Magazine and author, says, “Technology is making a lot of inroads into creative fields, including writing and journalism. I treat them as hi-tech plagiarism.”
He explains, “If you Google some information and then copy and paste it into your articles, it is termed plagiarism. The same applies to writing articles using OpenAI bots. Whoever is using these technologies can use them to understand and learn things, but ChatGPT cannot provide accurate answers as far as journalism is concerned.”
“With such AI tools making a grand entry, it won’t be long before we have more advanced vetting software that can detect plagiarism done using ChatGPT and the like. So you can’t have ChatGPT or Bard write for you and then sit back with no fear of being detected,” he added.
Ullekh says that ChatGPT and Bard cannot process information as humans can.
“An article written using ChatGPT or Bard might be free from grammatical errors or punctuation mistakes, but facts will remain a problem,” he added.
PJ George, Digital Editor, The Hindu, believes that AI tools such as ChatGPT and Bard can be valuable in the newsroom.
“They can help journalists with research as well as simplifying data. There are many journalists who are already using ChatGPT to create the first draft of their articles. However, these tools are not foolproof. These systems have been trained on various datasets, and the ‘garbage in, garbage out’ principle applies here. Their output still needs the critical eye of a journalist before being published,” George elucidates.
Jose adds another perspective: Indian journalism is already in ethical decay, and ChatGPT is not going to be its biggest problem. But it might be the final nail in the coffin, he notes.
He adds, “But if journalism revives itself to do more original reporting, more old-fashioned investigations, it will survive.”
But that’s another story. For now, the verdict is that AI cannot be trusted to replace humans in the newsroom, but it can help them in good measure.