cosmos4u , to Random stuff
@cosmos4u@scicomm.xyz avatar

Recording of an Town Hall by NASA from a few hours ago: https://www.youtube.com/watch?v=n3LH7Hd0L5s - hardly any concrete examples demonstrated but one panel member read a limerick and a haiku about LLMs composed by an LLM ...

FlipboardMagazines , to The Age of AI
@FlipboardMagazines@flipboard.social avatar

For the latest news about artificial intelligence and its impact on society, check out these 5 Magazines to follow from the Flipboard community.

AI Revolution: Curated stories about what's on the horizon with artificial intelligence technologies.
@ai-revolution-West1118

Artificial Intelligence and Misinformation by The Literacy Project: Here, we'll share articles on what to watch out for to avoid sharing misinformation about AI or created by AI.
@artificial-intelligence-and-mi

Artificial Intelligence by The 74: Rolling coverage of how artificial intelligence and tools such as ChatGPT are changing education in America
@the-74-artificial-intelligence

The Age of AI: Capturing the biggest need-to-know stories about artificial intelligence technology and what its rapid advancement means for our future. Stories collected by Flipboard's editors.
@the-age-of-ai-tech

The AI Economy: Exploring artificial intelligence's impact on business, work, society and technology.
@the-ai-economy-thekenyeung

Wondering what a Magazine is? A Flipboard Magazine is a curated feed of posts about a specific topic or interest that is followable, just like a profile.

JustCodeCulture , to Space & Science
@JustCodeCulture@mastodon.social avatar

New Review Essay on @lmesseri tremendous new book, ethnography & tech, social hopes, & false dreams of tech solutionism. Also discussing work of Andrew Brock, Zeynep Tufekci & Kelsie Nabben on Black Twitter, Twitter & ethnographies of DAOs.

@histodons
@commodon
@anthropology
@sociology

https://z.umn.edu/EthnographicSublime

brad262run , to Random stuff
@brad262run@mastodon.online avatar

“most striking about the tale of the Bell rocket belt is the shape of the deception that Moore and Bell pulled off”
“exactly what the car bros did over the past decade to convince us all that the human driver was already obsolete. The playbook was nearly identical”
“we have vested an alarming amount of power in the hands of

https://pluralistic.net/2024/05/17/fake-it-until-you-dont-make-it/ by @pluralistic

metin , to Random stuff
@metin@graphics.social avatar

𝘾𝙝𝙖𝙩𝙂𝙋𝙏 𝙘𝙤𝙣𝙨𝙪𝙢𝙚𝙨 25 𝙩𝙞𝙢𝙚𝙨 𝙢𝙤𝙧𝙚 𝙚𝙣𝙚𝙧𝙜𝙮 𝙩𝙝𝙖𝙣 𝙂𝙤𝙤𝙜𝙡𝙚

https://www.brusselstimes.com/1042696/chatgpt-consumes-25-times-more-energy-than-google

parismarx , to Random stuff
@parismarx@mastodon.online avatar

Remember when tech CEOs whipped us into a frenzy about generative AI changing everything? Those days are long gone.

Google and OpenAI’s latest demos show the bubble is deflating, but they’re still going to seize as much power as they can before the crash.

https://disconnect.blog/ai-hype-is-over-ai-exhaustion-is-setting-in/

thecwordpodcast , to Podcast
@thecwordpodcast@glammr.us avatar

New episode! 🤖

Solange, Phedra, Jenny and guest host Luisa Casella explore the world of AI and what it might mean for conservation professionals. 🧠 (It's techy but not very, don't fret!)

Listen here: https://thecword.show/2024/05/15/s14e05-using-ai/

sabret00the , to Random stuff
@sabret00the@mas.to avatar

Collectively the world doesn't give a fuck about . We all accept it's going to be a part of our lives, but it's not the focus of our lives. It's a background helper at best. Big tech, being so out of touch with reality is hellbent on shoving it down our throats though. Fuck your keynote!

bibliolater , to Podcast
@bibliolater@qoto.org avatar

Backstabbing, bluffing and playing dead: has AI learned to deceive? – podcast

“Dr Peter Park, an AI existential safety researcher at MIT and author of the research, tells Ian Sample about the different examples of deception he uncovered, and why they will be so difficult to tackle as long as AI remains a black box.”

https://www.theguardian.com/science/audio/2024/may/14/backstabbing-bluffing-and-playing-dead-has-ai-learned-to-deceive-podcast

@science

attribution: Orion 8, Public domain, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Icon_announcer.svg

TechDesk , to Random stuff
@TechDesk@flipboard.social avatar

OpenAI has announced the launch of GPT-4o, an iteration of its GPT-4 model that powers ChatGPT — and the rollout starts today.

The latest update “is much faster” and improves “capabilities across text, vision, and audio,” according to a livestream announcement by OpenAI CTO Mira Murati. It’ll be free for all users, and paid users will continue to “have up to five times the capacity limits” of free users, reports @theverge.

https://flip.it/BfWfrr

bibliolater , to Space & Science
@bibliolater@qoto.org avatar

"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."

Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633

@science @technology

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater , to Space & Science
@bibliolater@qoto.org avatar

"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."

Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633

@science @technology

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater , to Space & Science
@bibliolater@qoto.org avatar

"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."

Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633

@science @technology

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater , to Space & Science
@bibliolater@qoto.org avatar

"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."

Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633

@science @technology

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

gcluley , to Random stuff
@gcluley@mastodon.green avatar

Delighted to receive my copy of "How AI Ate the World" by @stokel through the post this morning.

Can't wait to sink my teeth into it.

ScienceDesk , to Space & Science
@ScienceDesk@flipboard.social avatar

You already know not to take an AI chatbot seriously. But there may be reason to be even more cautious. New research has found that many AI systems have already started to deliberately present human users with false information. Science Alert explains why "AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception.”
https://flip.it/ZbnJtj

bibliolater , to Space & Science
@bibliolater@qoto.org avatar

AI deception: A survey of examples, risks, and potential solutions

"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

bibliolater , to Space & Science
@bibliolater@qoto.org avatar

AI deception: A survey of examples, risks, and potential solutions

"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater , to science group
@bibliolater@qoto.org avatar
AI deception: A survey of examples, risks, and potential solutions

Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems.

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater , to science group
@bibliolater@qoto.org avatar
AI deception: A survey of examples, risks, and potential solutions

Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems.

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

bibliolater , to science group
@bibliolater@qoto.org avatar
AI deception: A survey of examples, risks, and potential solutions

"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

bibliolater , to science group
@bibliolater@qoto.org avatar
AI deception: A survey of examples, risks, and potential solutions

"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

JustCodeCulture , to Space & Science
@JustCodeCulture@mastodon.social avatar

Congratulations to Harvard University History of Science doctoral candidate Aaron Gluck-Thaler on the 2024-25 CBI Tomash Fellowship. We are thrilled to have Aaron as a fellow in the upcoming academic year!

@histodons
@sociology
@commodon

https://z.umn.edu/2024-25-Tomash

geekgrrl , to Random stuff
@geekgrrl@infosec.exchange avatar
renwillis , to Random stuff
@renwillis@mstdn.social avatar

Just finished the re-watch of Person of Interest. Like 10 minutes ago. God damn what a show. Still teary eyed. Needed a good cry after some heavy existential stuff I’ve been going through. The show did not hold back.

From a somewhat campy start with interesting philosopical implications to it’s incredibly heavy, dark, and impactful finish. Glad we re-watched it. Felt more relevant than ever.

Anyways. Highly recommended.

  • All
  • Subscribed
  • Moderated
  • Favorites
  • Mordhau
  • WatchParties
  • Rutgers
  • steinbach
  • Lexington
  • cragsand
  • mead
  • RetroGamingNetwork
  • mauerstrassenwetten
  • loren
  • xyz
  • PowerRangers
  • AnarchoCapitalism
  • kamenrider
  • supersentai
  • itdept
  • neondivide
  • space_engine
  • AgeRegression
  • WarhammerFantasy
  • Teensy
  • learnviet
  • bjj
  • khanate
  • electropalaeography
  • MidnightClan
  • jeremy
  • fandic
  • All magazines