U.S. elections in turmoil, again.
It is déjà vu back to 2016, as a network of Russia-based websites masquerades as local American newspapers and pumps out AI-generated fake stories targeting the November U.S. elections. Between May 2014 and November 2016, Russia carried out a comprehensive and sustained operation to undermine U.S. elections. It was a multi-pronged attack on four fronts: hacking into and releasing confidential documents and emails from Clinton campaign staff; probing voter registration systems in all 50 states and stealing voter information from some of them; deploying a "troll farm" of thousands of social media accounts that used the stolen information to target false messages disparaging Hillary Clinton; and allegedly funneling $30 million through the NRA to fund Trump's election campaign.
Fake news, turbocharged with AI
2024 is starting to see another wave of misinformation. It began with a post alleging that the first lady of Ukraine bought a rare Bugatti Tourbillon sports car for €4.5M while visiting Paris for the D-Day commemorations in June, supposedly paid for with American military aid money. The report is completely false, and Bugatti Paris has categorically denied it as fake news. But before the truth had a chance to catch up, the story spread like wildfire on X, as pro-Russia, pro-Trump junkies retweeted a picture of a fake invoice, seen by at least 12M users. At the heart of it is former U.S. cop John Mark Dougan - one among several mules spreading misinformation. “For me it’s a game," he said. “And a little payback.” To Dougan and an alarming number of others like him, this is just a game of one-upmanship.
Attacks more nuanced this time
The misinformation campaign has gone beyond social media and is now targeting local news. With the U.S. losing more than two local newspapers a week in 2023, fake news syndicates are filling the void with AI-generated newspapers spreading misinformation. They use American-sounding names like "Houston Post" and "Boston Times" to appear credible, with some even reviving defunct papers like "The Chicago Chronicle" - which went out of business decades ago. These fake outlets plagiarize real news, rewritten with AI prompts to vilify Democrats and glorify Republicans and Trump. The work is so shoddy that in some cases the prompts were left intact in the published articles. With over half of U.S. counties having just one or no local news outlet, those communities will now be served a healthy dose of fake news instead.
Social media platforms seem deliberately defenseless
Twitter, which once aimed at the lofty goal of being an independent communication platform, has been rapidly devolving into a social media dumpster. Along with rebranding it as X, owner Elon Musk abruptly dismantled its Trust and Safety Council, established back in 2016. The council of volunteer civil rights leaders, academics, and advocates played a key role in addressing hate speech, terrorism, child exploitation, and misinformation - precisely the sort of team that could have mounted a sustained effort against deepfakes.
Facebook's measures, where they existed, have mostly been incremental. Its policy of reducing visibility for "repeat offender" websites has cut engagement on posts from misinformation groups by an estimated 16% to 31%. However, Facebook doesn't proactively show fact-checks to users who have already seen misinformation, so the damage goes unchecked. Furthermore, internal documents show Facebook rolled back safeguards it had implemented ahead of the 2020 elections, allowing right-wing conspiratorial content to fester in the weeks leading up to the January 6 riot at the U.S. Capitol.
What's next?
The relative ease of creating convincing fake content and the unchecked virality on social media platforms are enabling the fake news mafia to spread its wings further. The operation is already in motion, spreading false stories about UK and French politics ahead of this week's general elections, as well as about the 2024 Paris Olympics. As with anything nefarious, the cat-and-mouse game continues, but for now the mouse is winning while the cat naps. We will continue tracking this story as it unfolds over the next several weeks.
AI Bites
AI-generated exam papers go undetected in 94% of cases: A recent study revealed that AI-generated papers submitted for exams are almost impossible to detect. Researchers at the University of Reading tested UK university exam systems and found that nearly all AI-generated submissions went unnoticed. Ironically, these papers often received higher grades than those written by real students. Published in PLOS ONE, the study showed only 6 out of 100 papers were flagged. The finding highlights the challenge schools face in banning AI tools like ChatGPT. With AI becoming more integrated into education, universities might need to adapt: exploring novel questions that don't rely on regurgitating facts, improving exam supervision, and incorporating AI into learning as the new normal.
Texas pedophile arrested after using a deepfake program to undress an underage girl: A 30-year-old Houston man, Roman Shoffner, has been arrested following a two-month investigation into his use of artificial intelligence to digitally remove clothing from photos of a 17-year-old girl. The Montgomery County Precinct 3 Constable's Office began investigating after receiving a tip from someone who claimed to have seen the altered image on Shoffner's phone. It highlights yet another proliferating menace of deepfakes, one that is rending the societal fabric even faster.
Ukraine deploys autonomous killer drones: In Kyiv, Vyriy, a Ukrainian drone company, is developing autonomous drones that use AI to track and follow targets without human pilots. The innovation, demonstrated by CEO Oleksii Babenko, is part of a broader effort by Ukrainian firms to revolutionize military technology amid the war with Russia. Backed by investment and government support, these companies are creating affordable, accessible autonomous weapons using everyday components like the Raspberry Pi. While these advancements push the boundaries of modern warfare, they also spark concerns about autonomous AI systems going wrong.
AI companies firmly oppose new California regulations: California lawmakers have advanced a bill requiring AI companies to test and add safety measures to prevent their systems from being misused, such as for attacks on the electric grid or building chemical weapons. Authored by Democratic state Sen. Scott Wiener, the bill targets AI systems costing over $100 million in computing power. However, the bill has met with stiff opposition from bigwigs like Meta and Google, who argue it unfairly targets developers instead of malicious users. The bill, backed by AI researchers, also proposes a state agency for oversight. Additional measures being considered aim to prevent AI-driven discrimination and protect minors' data on social media.
Apple Intelligence rejects Meta but gives the nod to Gemini: Apple Inc. (AAPL) has rejected Meta Platforms Inc.'s proposal to integrate its AI chatbot, Llama, into the iPhone, due to concerns about Meta's privacy practices. Despite preliminary discussions in March, the two companies are not currently negotiating this potential partnership. This decision coincides with Apple's ongoing negotiations to integrate Microsoft-backed OpenAI's ChatGPT and Alphabet Inc.'s Gemini into its products. Apple has announced a ChatGPT agreement and hinted at future Gemini integration while exploring partnerships with AI startup Anthropic. Set to launch later this year, Apple Intelligence will feature tools for summarizing notifications, transcribing voice memos, and creating custom emojis.
AI Tools you can use
pi.ai - a personal assistant that aims to provide human-like conversation. I tried it myself and found it easy to talk to; the voice and responses sound natural, making it more immersive. Check out my post about AI and mental health.
eduaide.ai - a lesson planner that aims to lessen administrative overhead for educators. It helps users with paperwork and procedural tasks, as well as lets teachers build assessments personalized for their class.
krisp.ai - an AI-based noise canceling app that removes an impressive range of noises from audio recordings. I've been trying out podcasting and this app has been quite impressive. The free tier gives 60 minutes of noise canceling per day.