And people are worried about AI misinformation when we have crap mainstream media 🤦‍♂️

But, yes, the “liar’s dividend” will be in full swing this election cycle.

One view of the crash site of the IL-76 transport aircraft (verified by several OSINT sources)

25 January 2024 (Brussels, Belgium) – On Wednesday, Russia accused Ukraine of deliberately shooting down an IL-76 Russian military transport plane, saying “it was carrying 65 captured Ukrainian soldiers to a prisoner exchange in what is a barbaric act of terrorism that has killed a total of 74 people. These Ukrainian soldiers were to be used in a prisoner swap”.

Every mainstream media outlet repeated the Russian narrative verbatim – that there were Ukrainian soldiers on board – without challenging the story.

Here is a video of the crash site, verified by several of my OSINT sources:

Some notes from my OSINT sources:

  • if the aircraft was full of prisoners, the field would have been full of bodies
  • numerous OSINT sources show the plane was flying AWAY from Ukraine, towards the northeast, and had arrived in Russia from Egypt. This lends strong support to military intelligence claims that the aircraft was carrying S-300 missiles (which it regularly picked up in Egypt), which would certainly explain the giant explosion.
  • Russia released an already debunked “list of prisoners on board”. At least 17 of the names on the list belong to free men, released in earlier exchanges.
  • Prisoners of war are rarely transported by air to war fronts (flights could get shot down, as in this case). They always go over land by trucks.
  • And you certainly can’t exchange prisoners by flying AWAY from the front.
  • Russia claimed 74 people were on board in total: 65 Ukrainian prisoners plus 9 Russian soldiers. That is far too few soldiers for an actual prisoner transport.
  • In the released footage (from the Russians themselves and from OSINT sources), only one body can be found in the photographs and videos.

Disproving, once again, Moscow’s nonsense. Just lies and propaganda.

I have no doubt AI is destabilizing the concept of truth itself and will ramp up exponentially in the 2024 election. Trump is among a growing cadre of politicians around the world blaming AI for damning photos, videos and audio.

Last month, at a symposium on computational law, AI and the media that focused on how AI will (and will not) affect the legal and media industries, several data scientists with expertise in artificial intelligence said AI-generated content will muddy the waters of perceived reality. Weeks into a pivotal election year, AI confusion is already on the rise.

We have seen politicians around the globe swatting away potentially damning pieces of evidence – grainy video footage of hotel trysts, voice recordings criticizing political opponents – by dismissing them as AI-generated fakes. At the same time, AI deepfakes are being used to spread misinformation.

On Monday, in the U.S., the New Hampshire Justice Department said it was investigating robocalls featuring what appeared to be an AI-generated voice that sounded like President Biden telling voters to skip the Tuesday primary – the first notable use of AI for voter suppression this campaign cycle.

And this month, Trump dismissed an ad featuring actual, real video of all his well-documented public gaffes – including his struggle to pronounce the word “anonymous” in Montana and his visit to the California town of “Pleasure,” a.k.a. Paradise – claiming the footage was generated by AI:

That’s all A.I., Fake television commercials, in order to make me look as bad and pathetic as Crooked Joe Biden. Not an easy thing to do.

AI creates a “liar’s dividend”: when you actually do catch a police officer or politician saying something awful, they have plausible deniability. Welcome to the age of AI.

So AI can truly destabilize the concept of truth itself. If everything could be fake, and if everyone’s claiming everything is fake or manipulated in some way, there’s really no sense of ground truth. Politically motivated actors, especially, can take whatever interpretation they choose.

Trump is not alone in seizing this advantage. Around the world, AI is becoming a common scapegoat for politicians trying to fend off damaging allegations. Late last year, a grainy video surfaced of a ruling-party Taiwanese politician entering a hotel with a woman, suggesting he was having an affair. Commentators and other politicians quickly came to his defense, saying the footage was AI-generated – though it remains unclear whether it actually was.

In April, a 26-second voice recording was leaked in which a politician in the southern Indian state of Tamil Nadu appeared to accuse his own party of illegally amassing $3.6 billion, according to reporting by Rest of World. The politician denied the recording’s veracity, calling it “machine generated”; experts have said they are unsure whether the audio is real or fake.

AI companies have generally said their tools shouldn’t be used in political campaigns, but enforcement is all but impossible.

It will play out like this:

Last week, social media users began circulating an audio clip they claimed was a Baltimore County, Md., school principal on a racist tirade against Jewish people and Black students. The union that represents the principal has said the audio is AI-generated.

Several signs do point to that conclusion, including the uniform cadence of the speech and indications of splicing, said experts who analyzed the audio. But without knowing where it came from or in what context it was recorded, it’s impossible to say for sure.

On social media, commenters overwhelmingly seem to believe the audio is real, and the school district says it has launched an investigation. A request for comment sent to the principal through his union was not returned.

These claims hold weight because AI deepfakes are more common now and better at replicating a person’s voice and appearance. Deepfakes regularly go viral on X, Facebook and other social platforms. Meanwhile, the tools and methods to identify an AI-created piece of media are not keeping up with rapid advances in AI’s ability to generate such content.

Tech and social media companies say they are looking into creating systems to automatically check and moderate AI-generated content purporting to be real, but have yet to do so. Meanwhile, only experts possess the tools and expertise to analyze a piece of media and determine whether it’s real or fake.

That leaves too few people capable of truth-squadding content that can now be generated with easy-to-use AI tools available to almost anyone. You don’t have to be a computer scientist. You don’t have to be able to code. There’s no barrier to entry anymore.

And it is the same old story. Technology companies have the tools to regulate the problem: they could watermark audio to create a digital fingerprint or join a coalition meant to prevent the spreading of misleading information online by developing technical standards that establish the origins of media content. Most importantly, they could tweak their algorithms so they don’t promote sensational but potentially false content.

But so far, tech companies have mostly failed to take action to safeguard the public’s perception of reality. Because as long as the incentives continue to reward engagement-driven sensationalism and conflict, those are the kinds of content – deepfake or not – that are going to be surfaced.