Apple has removed a number of AI image generation apps from the App Store after they were found to be advertising the ability to create nonconsensual nude images.
Apple’s action comes after an investigation by @404mediaco, which found adverts for these apps being pushed on Instagram. The apps were removed only after 404 Media provided Apple with links to them and their related ads, indicating “the company was not able to find the apps that violated its policy itself.”
Check out the latest "Smashing Security" podcast from yours truly and Carole Theriault, looking at Indian election deepfakery, the kindness of the Canadian rail system, Leicester's ransomware attack, and 12 Angry Men!
Thanks to our sponsors Kolide by 1Password, Vanta, and Sonrai Security for their fab support!
A federal judicial panel has met in Washington, DC, to discuss the rising challenge of policing AI-generated evidence in court trials.
The eight-member panel heard from computer scientists and academics about the potential risks of AI-manipulated images and videos disrupting a trial, and will be responsible for drafting evidence-related amendments to the Federal Rules of Evidence.
Not all on the panel feel this is necessary, though, writes @arstechnica, with one judge stating, “I'm not sure that this is the crisis that it's been painted as.”
«Facebook has a serious problem containing financial-scam #DeepFakes that exploit the faces of Italian politicians and well-known journalists.»
«Now it is the turn of Giorgia Meloni and of Enrico Mentana with TgLa7; the post has already been reported to the competent authorities.»
The post by @alexorlowski shows these videos circulating as sponsored messages (which makes Facebook not merely a victim of the scam, but an accomplice...)
Here's an #AI tool that arrives just in time: The PRISA media group has created an audio #verification and #deepfake detection service for voices in Spanish.
Their goal: Support #journalists (e.g. when covering election campaigns) and foster trust in the face of #disinformation.
Remarkable: The creators also address the problem of powerful people claiming their voices have been cloned after saying something embarrassing.
A deepfake featuring Rafał Brzoska on Facebook. Partway through the video something breaks down... and a Russian (?) accent slips in
One of our readers alerted us to this scam. The video uses the likeness of Rafał Brzoska. The whole thing is available here (warning, once again – this is a scam; it has been reported to FB) https://www.facebook.com/100084457186405/videos/315721031520526/ The video shows poor (practically nonexistent) synchronization between the voice and the lip movements, but judging from the thumbnail alone, someone might quite easily...
💻 🤖 Fascinating and frightening: with OpenAI's Sora software, simple text prompts can be used to create videos that look deceptively real. The risk of misuse is high - especially in a "super election year" like this one.
The ability of #AI tools to readily generate highly convincing "#deepfake" text, audio, images, and (soon) video is, arguably, one of the greatest near-term concerns about this emerging technology. Fundamental to any proposal to address this issue is the ability to accurately distinguish "deepfake" content from "genuine" content. Broadly speaking, there are two sides to this ability:
Reducing false positives. That is, reducing the number of times someone mistakes a deepfake for the genuine article. Technologies to do so include watermarking of human and AI content, and digital forensics.
Reducing false negatives. That is, reducing the number of times one believes content that is actually genuine content to be a deepfake. There are cryptographic protocols to help achieve this, such as digital signatures and other provenance authentication technology.
Much of the current debate about deepfakes has focused on the first aim (reducing false positives), where the technology is quite weak (AI, by design, is very good at training itself to pass any given metric of inauthenticity, as per Goodhart's law); also, measures to address the first aim often come at the expense of the second. However, the second aim is at least as important, and arguably much more technically and socially feasible, with the adoption of cryptographically secure provenance standards. One such promising standard is the C2PA standard https://c2pa.org/ that is already adopted by several major media and technology companies (though, crucially, social media companies will also need to buy into such a standard and implement it by default to users for it to be truly effective).
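The provenance approach described above can be sketched in a few lines. This is a toy illustration, not the C2PA protocol: real C2PA manifests are embedded in the media file and signed with asymmetric, certificate-backed keys, whereas the shared-secret HMAC below merely stands in as the "signature" so the example runs on Python's standard library alone (all names here are hypothetical).

```python
# Toy provenance check: a publisher attaches a signed manifest to content,
# and a verifier detects (a) tampered content and (b) forged manifests.
# Assumption: HMAC with a shared demo key replaces the asymmetric
# signatures a real standard like C2PA would use.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-demo-key"  # hypothetical key, not part of any spec

def make_manifest(content: bytes, creator: str) -> dict:
    """Bind the creator's identity to a digest of the content."""
    digest = hashlib.sha256(content).hexdigest()
    claim = json.dumps({"creator": creator, "sha256": digest}, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(content: bytes, manifest: dict) -> bool:
    # Reject a manifest whose claim was not signed with the publisher's key...
    expected = hmac.new(SIGNING_KEY, manifest["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    # ...and reject content that no longer matches the signed digest.
    claim = json.loads(manifest["claim"])
    return claim["sha256"] == hashlib.sha256(content).hexdigest()

photo = b"raw image bytes"
manifest = make_manifest(photo, "Example Newsroom")
print(verify(photo, manifest))           # True: untouched content
print(verify(b"edited bytes", manifest)) # False: provenance check fails
```

Note the asymmetry this sketch captures: verification can only ever confirm that content *is* what a known publisher signed (reducing false negatives about genuine content); it says nothing about unsigned content, which is why provenance complements rather than replaces deepfake detection.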
#Disinformation is usually no laughing matter, but some fakes are so bizarre and far out, they will most certainly make you chuckle. Like the alleged Swedish Sex Championship – or the #Deepfake Doc telling you how to cure diabetes with chia seeds.
Our colleague Kathrin Wesolowski recently listed the "strangest fakes of 2023":
"Our findings demonstrate a dampening effect on perceptual, emotional, and evaluative processing of presumed deepfake smiles, but not angry expressions, adding new specificity to the debate on the societal impact of AI-generated content."
Teen girls are being targeted with deepfake pornography created using technology such as AI. Perpetrators are putting pictures of a victim's face onto an image or a video of a naked person. The FBI has warned that these images are being used for the harassment and sextortion of young people, and sadly there are limited ways to seek accountability.
@arstechnica Speaking of google, today was the first time I got one of those deepfake Elon Musk videos about something something cryptocurrency as an ad before a youtube video. And I could find no way to flag or report an ad. I've reported those abominations when they were normal videos, but as ads? #nogoogle #google #malvertising #badvertising #youtube #elonmusk #deepfake
So some of you might remember this post (and the subsequent demonstration on national news) of using a voice cloning tool (AI, Audio Deep Fake) by @racheltobac
(If you haven't seen it, go watch it. Rachel is amazing.)
I'd never needed to do a similar attack before, but! I was just tasked yesterday with researching it.
Asked some friends for a turn-key solution to clone voices. Got pointed to a website. Signed up for $1 a month (first month... then it goes to $5 a month thereafter).
Pulled some audio of my mark down from a youtube interview (a podcast works great too).
Only needed a minute's worth of audio.
Uploaded it to the website for cloning.
Typed out a quick script for the voice to read.
30 seconds later, I had my cloned audio.
It was so good that it even included natural voice inflections AND!!! verbal pauses like umms and uhhs that matched the mark's original presentation. I can't tell the difference between the cloned voice and the original person.
Y'all... voice cloning and audio deep fakes are well past the ease of "script-kiddie" level. Anyone can do it.
#Zoom just changed their terms and conditions to include using anyone's video and audio for training #AI, with no option for opting out. You too can help train #deepfakes!
Living with a disabled spouse, I used Zoom a lot to get through the ongoing global pandemic.
What alternatives are out there for remote teaching/meetings? #BoycottZoom
The "grandchild trick", in which fraudsters pose as relatives to get money, is well known. Now the voices themselves can be imitated deceptively well - with the help of artificial intelligence, in so-called "audio deepfakes".
The technology opens up many opportunities, for example for audiobooks or newspaper articles read aloud, but it also carries risks, because criminals can likewise use it to impersonate someone.