'Claims fact checked by Robots' possible, but human supervision mandatory, says Charlie Beckett of POLIS
New Delhi | Pratyush Ranjan: As the tools of fact-checking continue to modernise, so do the nefarious methods of the forces spreading fake news and misinformation/disinformation. One such advancement is the ‘deepfake’ - an AI-generated fake image or video in which a person is replaced with someone else's likeness.
Deepfakes are becoming quite common around the world and, growing ever more realistic, it will not be long before they set off alarm bells in India too. Combating deepfakes will be a daunting challenge for Indian fact-checkers, as newsrooms are still experimenting with artificial intelligence. AI-powered tools will play a vital role in helping newsrooms spot deepfake images and videos.
Jagran New Media’s Senior News Editor Pratyush Ranjan discussed the anticipated menace and the importance of artificial intelligence in newsrooms with Professor Charlie Beckett, the founding director of POLIS - the think-tank for research and debate around international journalism and society in the Department of Media and Communications at London School of Economics (LSE).
Leading the POLIS JournalismAI project, Beckett is an expert on artificial intelligence and the future of news. He has delivered a series of public lectures and seminars for journalists and the public, and runs a programme of fellowships and research on the use of AI in newsrooms.
In an e-mail conversation with Pratyush Ranjan, Beckett said that headlines like 'Claims Fact Checked by Robots' are possible, but only with mandatory human supervision. Beckett also highlighted the challenges fact-checkers face in countering various forms of fake news, and the importance of AI literacy and AI-powered tools in newsrooms dealing with the flow of misinformation.
Here are the excerpts from the discussion with Professor Charlie Beckett:
Pratyush Ranjan: Fighting fake news is a complicated challenge. Would it be unreasonable to expect current artificial intelligence technologies to fully automate the fight against fake news?
Charlie Beckett: Yes. Automated moderation of misinformation is always going to be a partial and imperfect solution because the difficult cases of moderation always require human judgement. No algorithm can replicate the nuanced judgement that has to be made around language, ethics and ideas of offence. But of course, it can help filter potential misinformation to slow it down and help to make the human moderation process much more efficient.
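Beckett's point about AI filtering content to make human moderation more efficient could be sketched as follows. This is a minimal, hypothetical illustration - the thresholds, field names and scoring function are assumptions for the example, not any real platform's system:

```python
# Minimal sketch of AI-assisted moderation triage (hypothetical thresholds).
# A model score filters the clearest cases automatically and refers
# uncertain ones to a human review queue, shrinking the moderators' workload.

def triage(posts, score_fn, auto_block=0.95, refer=0.60):
    """Split posts into auto-blocked, human-review, and passed-through lists."""
    blocked, review, passed = [], [], []
    for post in posts:
        score = score_fn(post)  # estimated probability the post is misinformation
        if score >= auto_block:
            blocked.append(post)   # high confidence: slow it down automatically
        elif score >= refer:
            review.append(post)    # uncertain: requires human judgement
        else:
            passed.append(post)
    return blocked, review, passed

# Toy scoring function standing in for a trained model.
def toy_score(post):
    return post["model_score"]

posts = [
    {"id": 1, "model_score": 0.97},
    {"id": 2, "model_score": 0.70},
    {"id": 3, "model_score": 0.10},
]
blocked, review, passed = triage(posts, toy_score)
```

The key design point matches Beckett's answer: only the middle band - the genuinely difficult cases - reaches a human, while the automation handles the unambiguous extremes.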
Before creating an AI system that can fight fake news, we must first understand the requirements of verifying the veracity of a claim.
Pratyush Ranjan: What would be the first steps towards implementing AI insights in the fact-checking process, to debunk claims across almost all formats of communication?
Charlie Beckett: The first step is to understand the limits of AI. Then, when you put in a system, be clear about the process you are creating and be transparent: tell someone why a post has been removed and how they can appeal it, and explain clearly the terms and conditions you are operating within.
Pratyush Ranjan: Which initial step would be better for a fact-checking organisation -
A. Creating an end-to-end AI-powered fake-news detector that labels a piece of news as either “fake” or “real”?
B. Creating an AI algorithm that detects signals indicating a certain claim may be fake news?
Charlie Beckett: 'Fake' is a bad word to use. Usually it is not so binary. Sometimes content is clearly false but usually it is more nuanced. I would suggest other filters as well. Is this news from a credible source? It is much more useful to talk about misinformation or disinformation.
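Beckett's preference for graded signals - such as source credibility - over a binary "fake"/"real" verdict could be illustrated with a small sketch. The signal names and weights below are invented for the example and do not come from any established method:

```python
# Illustrative sketch: combine credibility signals into a graded score
# instead of a binary "fake"/"real" verdict. Weights are made-up assumptions.

SIGNALS = {
    "known_credible_source": -0.5,    # credible source lowers suspicion
    "previously_debunked_claim": 0.6,
    "no_named_author": 0.2,
    "emotive_headline": 0.3,
}

def suspicion_score(item):
    """Sum the weights of the signals present; higher means more suspect."""
    return sum(weight for signal, weight in SIGNALS.items() if item.get(signal))

def verdict(item):
    """Map the graded score to a nuanced label rather than fake/real."""
    score = suspicion_score(item)
    if score >= 0.6:
        return "likely misinformation"
    if score >= 0.3:
        return "needs review"
    return "no strong signals"

item = {"previously_debunked_claim": True, "emotive_headline": True}
```

The output is deliberately not "fake" or "real": as Beckett says, content is usually more nuanced than a binary label allows, and a graded score keeps the final judgement with the fact-checker.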
Pratyush Ranjan: How do you see the idea of integrating AI tools into human-controlled procedures (read: the fact-checkers), so that humans have first-hand insight to validate the process?
Charlie Beckett: The AI is there as a tool. It can be automated to filter out untrustworthy information and to refer content for review. It is up to the fact-checkers to decide the parameters. How much capacity do they have, realistically, to process content? It depends very much on the purpose of your 'fact-checking'. Is it to validate factual information, or is it to provide a balanced narrative for the reader?
Pratyush Ranjan: Fact-checking is an important part of journalism, in which editorial guidelines and ethics play an important role in drawing any conclusion in an article. Do you think blindly trusting machine learning algorithms to make decisions about the truth of a potentially fake claim would be a mistake?
Charlie Beckett: Yes, of course. No use of AI or any other technology should be done 'blindly'. That's why it is so important to have technology skills in your newsroom to understand how it works.
Pratyush Ranjan: Do you think using or creating AI-powered tools to support fact-checking work will be seen by many as an attempt to replace fact-checkers with technology?
Charlie Beckett: Yes. There is a lot of hostility against the social networks because when they use AI to moderate content they will inevitably make mistakes and it can take time for the human moderators to correct them. But with so much content on open platforms being created so quickly, we have to accept that we either close down those networks to manage the misinformation or we allow them to stay open and accept that there will always be some untrustworthy communicators. We can see that when we close down open networks it tends to favour authoritarian governments. Is that a price worth paying?
Pratyush Ranjan: We have seen many headlines like - "Robot writing headlines", "Machines writing stories" in the recent times. Will we also see headlines like - "Claims Fact Checked by Robots" in the coming days?
Charlie Beckett: Yes - why not? It would be good to be clear and transparent if you are using the technology - but always make sure it is supervised by humans.
Pratyush Ranjan: Vishvas News is the IFCN-certified fact-checking website of Jagran New Media in India. It does fact-checking in Hindi, English and 10 other Indic languages, producing 200+ fact-checked stories in 12 languages every month. The amount of data (text, images and videos) on fake claims being fact-checked is huge. What is your suggestion for empowering the fact-checking team to become an AI-powered team? Where and how can that journey be started?
Charlie Beckett: That's a big question, and only you can answer it, because your mission and your brand are decided by you alone. The complexity of the information ecosystem you have described is remarkable. I would suggest being selective - specialise by subject and target the sources of fake claims - rather than trying to clean up the entire information pipeline across both mainstream media in India and social media!
About the Author:
Pratyush Ranjan works as Senior Editor with Jagran New Media. Pratyush is a certified fact-checker and is associated with the GNI India training network. He represented JNM in the prestigious six-month-long JournalismAI global project, in which leaders from top newsrooms across the world discussed and studied the role of Artificial Intelligence (AI) in the future of journalism.
Pratyush Ranjan represents Jagran New Media in Global JournalismAI Project by LSE media think-tank POLIS
Posted By: Pratyush Ranjan