Search This Blog

Monday, January 22, 2024

Artificial Intelligence and the future of journalism

[This piece was sent to The Assam Tribune on 7th January 2024 with a request for publication. Apparently, they did not find it worth publishing.]

A correction: It seems the article was not so bad after all. The Tribune published it on 7th March 2024. I am sharing the jpg file at the bottom of this write-up.

We have seen numerous discussions about the enormous possibilities that artificial intelligence (AI) has opened up before us – both desirable and not so desirable. There have been voices warning of the possible replacement of humans in various fields, which may cause large-scale job loss, voices calling for regulation of the use of AI, and praise for the possible revolutionary applications of AI in health, agriculture, industry, defense, education, entertainment and other sectors for the benefit of mankind. However, a lawsuit filed by The New York Times against Microsoft and OpenAI (the company that pioneered text-generative AI with ChatGPT) just after Christmas has opened up a whole new front for discussion. The lawsuit is the culmination of more than seven months of failed negotiations over the terms and conditions of licensing the newspaper's content to AI companies.

The two companies allegedly used millions of the newspaper's articles to train their Large Language Model (LLM) text-generating AI applications – ChatGPT and Copilot – which now produce similar content in the same style as the newspaper, thereby competing with the newspaper itself as a source of reliable information. The New York Times was one of the early integrators of print and online journalism after the proliferation of the internet and the decline of newspaper circulation in the West. It had invested heavily in building a subscription-based model of online journalism, which had been largely successful so far. The lawsuit alleges copyright violation through the unauthorized use of the newspaper's unique content, causing it significant losses.

To fathom the issue, we need to understand how generative AI works with LLMs, and go back to the year 2012, when Google used 16,000 computer processors to process 10 million digital images found on YouTube and identify a cat! The neural network running on those 16,000 computers actually identified the cat, with about 16% accuracy. Work on creating machines that could act like the human brain had, however, started in the 1950s, leading to systems of connected computers that spend days, weeks or even months identifying patterns in large amounts of digital data. For example, after analyzing names and addresses scribbled on hundreds of envelopes, such a system could read handwritten text.

By 2012, a neural network could be trained to recognize common objects like flowers and cars. The same basic technology of learning by analyzing patterns was used by Google to identify the cat, and is now used by various AI applications to generate text, images or sound. A Large Language Model, or LLM, is essentially a neural network that learns by analyzing an enormous amount of digital text. Once trained on the digital text, it can produce text on its own. In 2015, Elon Musk helped fund a company called OpenAI, which in 2020 released the first widely known text-generating AI application, then called GPT-3. Today, so many companies, applications and search engines are using generative AI that it has become really difficult to believe it has been only three years since artificial intelligence came out of the lab.
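The idea of "learning patterns from text and then producing text" can be illustrated with a toy sketch. This is emphatically not how an LLM works – real systems use neural networks trained on vast corpora – but a simple word-pair model captures the core intuition: it records which word tends to follow which, then generates new text from those learned patterns.

```python
import random
from collections import defaultdict

def train(text):
    """Learn, for each word, the words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)  # remember every observed successor
    return model

def generate(model, start, length=8):
    """Produce new text by repeatedly picking a learned continuation."""
    word, output = start, [start]
    for _ in range(length):
        successors = model.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = "the model learns patterns and the model produces text from patterns"
model = train(corpus)
print(generate(model, "the"))
```

Scaled up from word pairs to billions of learned parameters, this is – very loosely – what an LLM does: the "truth" it produces is only ever a recombination of the text it was trained on, which is exactly why the quality of the training data matters so much.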

While the debate goes on over whether AI will elevate the world or destroy it, let us face another question - what will happen to journalism if AI, instead of human beings, continues to generate both online and print text? It is almost impossible to differentiate between actual truth and ‘truth’ generated by AI. Our older generation keeps saying that Google is not always right, and they are correct to some extent. The truth or fact generated by AI is really just the truth or fact used to train it. What will happen if the text used to train an LLM is biased or fake in the first place? We have already experienced the menace of fake news, fake videos and doctored photos on social media, and have seen it spill over to both print and electronic media. Despite the onslaught, the print media has somewhat held on to its credibility. If AI trained on ‘fake truth’ is used to generate textual or visual content for journalism, the implications are horrifying to even imagine.

AI is not so intelligent if we go by the true meaning of intelligence – the ability to differentiate between what is right and what is correct in a particular situation in time or space. For example, imagine a law enforcer who finds a ninety-year-old, visibly deranged person entering a restricted area where the enforcer has instructions to shoot any violator. An AI would shoot the person, but an intelligent human being would perhaps use all his or her faculties to take a humane decision. Journalism, too, is a world where the intelligence to differentiate between what is right and what is correct is at a premium.

How does AI transform communication itself? Generative AI – whether for text or for images - helps people express themselves better. AI image generation enables one to express oneself more vibrantly and imaginatively than one could previously. Tools like Midjourney can create an entire imagined landscape – Marine Drive in Mumbai, a Yeti in the Himalayas, an adorable portrait of a dog and a cat together, or a scene in an American town from the 1950s! Just imagine how easy and fruitful communication would be if one could create a picture of whatever one imagined. As for generating text for communication, we have already seen that AI tools can write better communicative text than most humans.

The legal battle between The New York Times and OpenAI may prove to be the proverbial tip of the iceberg, as several other publishers – including Gannett, the largest U.S. newspaper company; Rupert Murdoch’s News Corp; The Daily Beast; and the magazine publisher Dotdash Meredith – are also trying to negotiate with AI companies to make the latter pay for using the former’s content.







KEVIN CARTER'S 'THE VULTURE AND THE CHILD'



Solicitor General of India Tushar Mehta was making a spirited submission before the Supreme Court on May 28 this year, attacking journalists as ‘prophets of doom’ for highlighting the desperation and plight of workers who had been trying to reach home amid the lockdown. You can read the details of Mr Mehta's arguments here, here and here.

On June 1, Mehta said he had never referred to journalists as "prophets of doom". In his defence, the Times of India carried an interesting rebuttal in which Mr Mehta reportedly said that he had used the term 'prophets of doom' against "certain NGOs and so-called activists who filed a slew of PILs and intervention applications before the courts to look into migrant workers and other problems. These were the same people who contributed nothing to mitigate the problems of any section of the society during the pandemic."

On March 31, the same Mr Mehta had told the Supreme Court that "No one is now on road. Anyone who was outside has been taken to available shelters" – a blatant lie, about which you can read here and here. A day before, hearing two petitions by advocates Alakh Srivastava and Rashmi Bansal seeking immediate redressal of the "heart-wrenching and inhuman plight of thousands of migrant workers walking back home", the SC had asked the Central Government to submit a status report on the measures taken to prevent the exodus of migrant workers in the wake of the 21-day lockdown that began on March 22 midnight.

After 63 days of the lockdown and many hearings on the two petitions, the SC finally took suo motu cognisance of the issue of migrant workers on May 27 and asked the Central Government to file a reply in two days. While appearing before the court, Mr Mehta referred to the iconic photograph of a malnourished child and a vulture in Sudan, taken in 1993 by South African photo-journalist Kevin Carter, which won him a Pulitzer.

This has brought into sharp focus a decades-old debate on the ethical dilemmas faced by journalists. While submitting that the government was doing a lot for the workers walking home, the SG accused journalists of spreading negativity. What the SG failed to tell the court was that he quoted verbatim from a social media post (mainly on WhatsApp) that had been doing the rounds since around 19th May.


Various well-crafted translations of the post in regional languages were also being circulated. ‘Remember the picture? The name of the picture was The Vulture & the Little Girl’ – the post read. The child in the picture was actually a boy, but the SG said it was a girl – just as the social media post claimed. He also told the court that the photo was taken in 1983, missing the actual date by exactly a decade.


The social media post described how Carter committed suicide after being called a vulture for choosing to take the photo instead of helping the child. ‘There was(sic) two vultures in the picture, one with a camera’ – the post screamed. The depiction of Carter as the second vulture sought to draw a parallel between Carter and the media’s role in reporting the migrant workers’ crisis.

 

“Today, 26 years later, the vultures are still returning home from all over India with cameras in their hands, busy taking pictures of workers walking thousands of kilometers... These vultures are more concerned with gathering news, with increasing channel TRP, than with worrying about workers' deaths. They are busy collecting breaking news by pouring spices on the bodies of dead workers and children”. These were the venomous words in the post being forwarded.

 

Carter’s photo has a story behind it. Sudan was experiencing a ‘silent famine’ during the 1990s, following a protracted civil war, and hundreds were dying of starvation and malnutrition – something the world at large was barely aware of. The United Nations recognized the need for massive humanitarian support and wanted to highlight the plight of the people. Carter was one of the journalists engaged to document this human tragedy.

 

On that day in March 1993, Carter had completed the day’s work and was returning to the waiting aircraft at the small airstrip in the Sudanese village of Ayod when he noticed a child resting mid-crawl, trying to reach the United Nations food distribution centre, run by an NGO called Doctors of the World, situated near the airstrip. The NGO had divided the population in need of assistance into two categories – T for severe malnutrition and S for supplementary feeding. People in the T category were to reach the food distribution centre first. If one looks carefully, a plastic band inscribed with T3 can be seen on the wrist of the child in Carter’s photo.

 

It is not that Carter simply clicked the photo and left the child to the vulture, as the social media posts insinuated. What he did as a professional photo-journalist may seem even more revolting to many readers who do not understand the profession. Carter later recollected that he had, in fact, waited almost twenty minutes for the vulture to open its wings for a more impactful photograph. When he realised that was not going to happen, he took the image anyway, shooed the bird away, and left.

 

The question of the child’s fate remained unanswered for another 18 years, until the Spanish newspaper El Mundo tracked down the boy’s father in 2011. The severely malnourished boy, named Kong Nyong, was the third to reach the UN food centre that day; he survived the famine but died of ‘fever’ in 2007.

   

 

It is a dilemma faced not only by journalists but by almost every professional in their life - whether to be professional or to be humane. Many journalists face this moral and ethical question – whether to intervene, or to observe and tell. Carter's death gave a new dimension to the ethical debate. There is no conclusive evidence as to why he committed suicide; it is just a convenient conjecture being spread that he felt remorse for not helping the child in Sudan. Maybe there were other reasons! He had a failed relationship and had fathered a daughter out of wedlock. He had little money and many obligations. His closest friend had died a few months earlier. Maybe these were the underlying causes of his suicide.

 

When the Solicitor General told the Supreme Court that Carter’s work spelled “doom”, he was wrong on at least two counts. First – as a representative of the executive, his tasteless remarks on journalists and others highlighting the crisis amounted to one pillar of democracy attacking another, which is ominous. His reference to Carter as the second vulture is derogatory to the entire profession of journalism. Second – his long-winded submission, based almost in toto on a WhatsApp forward, was not only factually inaccurate but also symptomatic of a greater malaise in society. By replacing sound legal arguments with propaganda material to question the motives and credentials of people highlighting the crisis, the Solicitor General erred in professionalism, to say the least.


A shorter edited version of this post was published in The Assam Tribune on 3 June 2020.