TechDesk , to Random stuff
@TechDesk@flipboard.social avatar

Apple has removed a number of AI image generation apps from the App Store after they were found to be advertising the ability to create nonconsensual nude images.

Apple’s actions come following an investigation by @404mediaco, which found adverts for these apps being pushed on Instagram. The apps were only removed when 404 Media provided links to them and their related ads to Apple, indicating “the company was not able to find the apps that violated its policy itself.”

https://flip.it/rDcG7V

gcluley , to Cybersecurity
@gcluley@mastodon.green avatar

Something for the weekend?

Check out the latest "Smashing Security" podcast from yours truly and Carole Theriault, looking at Indian election deepfakery, the kindness of the Canadian rail system, Leicester's ransomware attack, and 12 Angry Men!

Thanks to our sponsors Kolide by 1Password, Vanta, and Sonrai Security for their fab support!

https://grahamcluley.com/smashing-security-podcast-369/

TechDesk , to Random stuff
@TechDesk@flipboard.social avatar

A federal judicial panel has met in Washington, DC, to discuss the rising challenge of policing AI-generated evidence in court trials.

The eight-member panel heard from computer scientists and academics about the potential risks of AI-manipulated images and videos disrupting a trial, and will be responsible for drafting evidence-related amendments to the Federal Rules of Evidence.

Not all on the panel feel this is necessary though, writes @arstechnica, with one judge stating, "I'm not sure that this is the crisis that it's been painted as."

https://flip.it/HMIl0g

informapirata , to Etica Digitale (Feddit) Italian
@informapirata@mastodon.uno avatar

"Facebook has a serious problem containing the financial scams that exploit the faces of Italian politicians and well-known journalists."
"Now it's the turn of Giorgia Meloni and Enrico Mentana with TgLa7; the post has already been reported to the competent authorities."

The post by @alexorlowski shows these videos running as sponsored posts (which makes Facebook not merely a victim of the scam, but an accomplice...)

@eticadigitale

https://twitter.com/alex_orlowski/status/1780007107275702705

dw_innovation , to Random stuff
@dw_innovation@mastodon.social avatar

Here's a tool that arrives just in time: The PRISA media group has created an audio deepfake detection service for voices in Spanish.

Their goal: Support journalists (e.g. when covering election campaigns) and foster trust in the face of disinformation.

Remarkable: The creators also address the problem of powerful people claiming their voices have been cloned after saying something embarrassing.

Details in this post (Reuters Institute):

https://reutersinstitute.politics.ox.ac.uk/news/how-spanish-media-group-created-ai-tool-detect-audio-deepfakes-help-journalists-big-election

parismarx , (edited ) to Random stuff
@parismarx@mastodon.online avatar

After AI-generated porn of Taylor Swift spread on Twitter, the issue finally got more attention — but it’s much bigger than celebrities.

On Tech Won't Save Us, I spoke to @kattenbarge about how deepfake nudes are wreaking havoc in women's lives.

https://techwontsave.us/episode/215_deepfake_abuse_is_a_crisis_w_kat_tenbarge

simontsui , to Random stuff
@simontsui@infosec.exchange avatar

Check Point Research (CPR) reports on the extensive use of AI in election campaigns: by candidates for self-promotion, to attack and defame political opponents, and by foreign nation-state actors to defame candidates. CPR reviews AI deepfake exploitation in recent elections in multiple countries. 🔗 https://research.checkpoint.com/2024/beyond-imagining-how-ai-is-actively-used-in-election-campaigns-around-the-world/

sekurakbot Bot , to Random stuff Polish
@sekurakbot@mastodon.com.pl avatar

A deepfake of Rafał Brzoska on Facebook. Partway through the video something breaks down… and a Russian (?) accent creeps in

One of our readers alerted us to this scam. The video uses the likeness of Rafał Brzoska. The whole thing is available here (warning, once again: this is a scam; it has been reported to FB) https://www.facebook.com/100084457186405/videos/315721031520526/ The video shows poor (practically nonexistent) synchronization between the voice and the lip movements, but watching just the thumbnail, someone could quite easily...

https://sekurak.pl/deepfake-z-rafalem-brzoska-na-facebooku-w-trakcie-filmu-cos-sie-popsulo-i-wjezdza-rosyjski-akcent/

NDR , to Random stuff German
@NDR@ard.social avatar

💻 🤖 Fascinating and frightening: With OpenAI's Sora software, simple text prompts can be used to create videos that look deceptively real. The risk of misuse is high - especially in a "super election year" like this one.

📝 https://www.ndr.de/kultur/film/Folgen-nicht-absehbar-KI-Software-kreiert-taeuschend-echte-Videos,sora100.html

tao , (edited ) to Random stuff
@tao@mathstodon.xyz avatar

The ability of AI tools to readily generate highly convincing "deepfake" text, audio, images, and (soon) video is, arguably, one of the greatest near-term concerns about this emerging technology. Fundamental to any proposal to address this issue is the ability to accurately distinguish "deepfake" content from "genuine" content. Broadly speaking, there are two sides to this ability:

  • Reducing false positives. That is, reducing the number of times someone mistakes a deepfake for the genuine article. Technologies to do so include watermarking of human and AI content, and digital forensics.

  • Reducing false negatives. That is, reducing the number of times genuinely authentic content is mistakenly believed to be a deepfake. There are cryptographic protocols to help achieve this, such as digital signatures and other provenance authentication technology.

Much of the current debate about deepfakes has focused on the first aim (reducing false positives), where the technology is quite weak (AI, by design, is very good at training itself to pass any given metric of inauthenticity, as per Goodhart's law); also, measures to address the first aim often come at the expense of the second. However, the second aim is at least as important, and arguably much more technically and socially feasible, with the adoption of cryptographically secure provenance standards. One such promising standard is the C2PA standard https://c2pa.org/ that is already adopted by several major media and technology companies (though, crucially, social media companies will also need to buy into such a standard and enable it by default for users for it to be truly effective).
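
The provenance idea can be made concrete with a toy manifest: hash the media bytes, sign the manifest, and let anyone re-verify later that neither the pixels nor the claims were changed. A minimal Python sketch follows — note this is a simplified stand-in, not C2PA itself: real C2PA manifests use X.509 certificate chains and asymmetric signatures, whereas the shared HMAC secret, the field names, and the "Example Newsroom" claim here are all hypothetical.

```python
import hashlib, hmac, json

SECRET = b"newsroom-signing-key"  # stand-in for a real asymmetric signing key

def make_manifest(media: bytes, claims: dict) -> dict:
    # Bind the claims to the exact bytes via a SHA-256 digest, then sign both.
    manifest = dict(claims, sha256=hashlib.sha256(media).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, "sha256").hexdigest()
    return manifest

def verify(media: bytes, manifest: dict) -> bool:
    m = {k: v for k, v in manifest.items() if k != "signature"}
    if m["sha256"] != hashlib.sha256(media).hexdigest():
        return False  # pixels were altered after signing
    payload = json.dumps(m, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"\x89PNG...raw image bytes..."
man = make_manifest(photo, {"creator": "Example Newsroom", "captured": "2024-04-26"})
print(verify(photo, man))         # True: untouched original
print(verify(photo + b"x", man))  # False: content no longer matches the manifest
```

Tampering with either side fails verification: changed media breaks the digest check, and a changed claim (say, a different creator) breaks the signature check.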

jackiegardina , to Random stuff
@jackiegardina@awscommunity.social avatar

With AI and deepfakes, the 2024 campaign is going to be messier than the 2020 election. https://www.nbcnews.com/politics/2024-election/fake-joe-biden-robocall-tells-new-hampshire-democrats-not-vote-tuesday-rcna134984

SideBar interviewed one organization attempting to push state legislatures to do more. It may be too little too late.

https://legaltalknetwork.com/podcasts/sidebar/2023/12/can-we-protect-democracy-from-ai-and-deepfakes-with-drew-liebert-and-jonathan-mehta-stein/

dw_innovation , (edited ) to Random stuff
@dw_innovation@mastodon.social avatar

Disinformation is usually no laughing matter, but some fakes are so bizarre and far out, they will most certainly make you chuckle. Like the alleged Swedish Sex Championship - or the doc telling you how to cure diabetes with chia seeds.

Our colleague Kathrin Wesolowski recently listed the "strangest fakes of 2023":

https://www.dw.com/en/fact-check-the-strangest-fakes-of-2023/a-67807926

polygon , to Random stuff
@polygon@mastodon.social avatar

Naruto game accused of using AI voice-over is just sloppy editing, admits Bandai https://www.polygon.com/23978516/naruto-ai-voice-over-controversy-sloppy-bandai-admits

kolide ,
@kolide@mastodon.social avatar

@polygon Yet MORE evidence that we aren’t as good at identifying audio deepfakes as we think we are. 😒

https://www.kolide.com/blog/how-audio-deepfakes-trick-employees-and-moms

bibliolater , to psychology group
@bibliolater@qoto.org avatar

"Our findings demonstrate a dampening effect on perceptual, emotional, and evaluative processing of presumed deepfake smiles, but not angry expressions, adding new specificity to the debate on the societal impact of AI-generated content."

Eiserbeck, A., Maier, M., Baum, J. et al. Deepfake smiles matter less—the psychological and neural impact of presumed AI-generated faces. Sci Rep 13, 16111 (2023). https://doi.org/10.1038/s41598-023-42802-x @psychology

TechDesk , to Privacy
@TechDesk@flipboard.social avatar

Teen girls are being targeted with deepfake pornography created using technology such as AI. Perpetrators are putting pictures of a victim's face onto an image or a video of a naked person. The FBI has warned that these images are being used for harassment and sextortion of young people, and sadly there are limited ways to seek accountability.

https://flip.it/TRIuqA

arstechnica , to Random stuff
@arstechnica@mastodon.social avatar

Google-hosted malvertising leads to fake Keepass site that looks genuine

Google-verified advertiser + legit-looking URL + valid TLS cert = convincing look-alike.

https://arstechnica.com/security/2023/10/google-hosted-malvertising-leads-to-fake-keepass-site-that-looks-genuine/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social
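
The look-alike in the story relied on a punycode homoglyph domain (ķeepass.info, a k with a cedilla, shown in the address bar in its "xn--" form). A crude way to flag such domains is to decode any punycode labels and strip diacritics before comparing against a watchlist — a rough, hypothetical sketch of the skeleton idea from Unicode's confusable-detection standard, where `KNOWN_BRANDS` is an assumed watchlist, not a real API:

```python
import unicodedata

KNOWN_BRANDS = {"keepass.info"}  # assumption: a watchlist the checker maintains

def skeleton(host: str) -> str:
    """Crude confusable skeleton: decode punycode labels, then strip diacritics."""
    if any(label.startswith("xn--") for label in host.split(".")):
        host = host.encode("ascii").decode("idna")
    nfkd = unicodedata.normalize("NFKD", host)
    return "".join(ch for ch in nfkd if not unicodedata.combining(ch))

def is_lookalike(host: str) -> bool:
    host = host.lower()
    return host not in KNOWN_BRANDS and skeleton(host) in KNOWN_BRANDS

# The spoofed domain as it appears on the wire, in punycode form.
spoof = "ķeepass.info".encode("idna").decode("ascii")
print(spoof, is_lookalike(spoof))    # flagged as a look-alike
print(is_lookalike("keepass.info"))  # the real domain passes
```

This only catches diacritic-based tricks; full confusable detection (Cyrillic/Greek look-alikes, mixed scripts) needs the complete skeleton tables from Unicode TS #39.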

hopfgeist ,
@hopfgeist@digitalcourage.social avatar

@arstechnica Speaking of Google, today was the first time I got one of those deepfake Elon Musk videos about something something cryptocurrency as an ad before a YouTube video. And I could find no way to flag or report an ad. I've reported those abominations when they were normal videos, but as ads?

tinker , to Random stuff
@tinker@infosec.exchange avatar

So I cloned my own voice and cloned @wendynather's voice (with permission).

I asked Wendy what we should create and... well...

...Wendy suggested this scene from Twilight.

It's a little choppy, as I'm not the best at stitching, but it came out alright.

Again... we did not actually record our voices here. We took samples of our real voices and then made this with an AI cloning tool.

I typed out the script and had the machine say the text in our voices.

tinker , to Hacking
@tinker@infosec.exchange avatar

So some of you might remember this post (and the subsequent demonstration on national news) of using a voice cloning tool (AI, Audio Deep Fake) by @racheltobac

Link to post: https://infosec.exchange/@racheltobac/110963070495263373

(If you haven't seen it, go watch it. Rachel is amazing.)

I'd never needed to do a similar attack before, but! I was just tasked yesterday with researching it.

Asked some friends for a turn-key solution to clone voices. Got pointed to a website. Signed up for $1 a month (first month... then it goes to $5 a month thereafter).

Pulled some audio of my mark down from a YouTube interview (a podcast works great too).

Only needed a minute's worth of audio.

Uploaded it to the website for cloning.

Typed out a quick script for the voice to read.

30 seconds later, I had my cloned audio.

It was so good that it even included natural voice inflections AND!!! verbal pauses like umm's and uhh's that matched the mark's original presentation. I can't tell the difference between the cloned voice and the original person.

Y'all... voice cloning and audio deep fakes are well past the ease of "script-kiddie" level. Anyone can do it.

mfriess , to Random stuff
@mfriess@mastodon.social avatar

Effective 27 July, Zoom changed their terms of service (T&C) whereby, without opt-out, YOU give them consent to perpetually use your content (video, audio,…) also for "training and tuning of algorithms and models".
https://stackdiary.com/zoom-terms-now-allow-training-ai-on-user-content-with-no-opt-out/

Check for yourself:
https://explore.zoom.us/en/terms/
Compare with the version archived 25 July by the Wayback Machine:
https://web.archive.org/web/20230725013414/https://explore.zoom.us/en/terms/

MedievalMideast , to Random stuff
@MedievalMideast@hcommons.social avatar

Zoom just changed their terms and conditions to include using anyone's video and audio for AI training with no option for opting out. You too can help train AIs!

Living with a disabled spouse, I used Zoom a lot to get through the ongoing global pandemic.

What alternatives are out there for remote teaching/meetings?

NDR , to Random stuff German
@NDR@ard.social avatar

The "grandchild trick", in which scammers pose as relatives to get money, is well known. Now even the voices can be imitated deceptively well - with the help of artificial intelligence, in so-called "audio deepfakes".

The technology opens up many opportunities, for example for audiobooks or newspaper articles read aloud, but it also carries risks, because criminals can likewise use it to impersonate someone.

📝 https://www.ndr.de/ratgeber/verbraucher/Deepfakes-Wenn-Betrueger-die-KI-fuer-den-Enkeltrick-nutzen,enkeltrickki100.html?at_medium=mastodon&at_campaign=NDR.de

antygon , to Random stuff Polish
@antygon@pol.social avatar

Whoa, that's a hard-hitting campaign. No punches pulled. But it probably has to be this way.

T-Mobile shows in a very direct way where uploading photos to the internet can lead - here, using children as the example…

https://youtu.be/F4WZ_k0vUDM
