metin , to Random stuff
@metin@graphics.social avatar

ChatGPT consumes 25 times more energy than Google

https://www.brusselstimes.com/1042696/chatgpt-consumes-25-times-more-energy-than-google

nixCraft , to Random stuff
@nixCraft@mastodon.social avatar

Someone actually built a Chrome extension to "Hide annoying Google AI Overviews". LOL. Right now it only has 2,000 users, but I wouldn't be surprised if it reaches millions in a few months when SGE (Search Generative Experience) is rolled out globally. https://chromewebstore.google.com/detail/hide-google-ai-overviews/neibhohkbmfjninidnaoacabkjonbahn DuckDuckGo is also releasing similar tech using AI.

rpsu ,
@rpsu@mas.to avatar

@nixCraft Even better would be to prevent the LLM results from being generated. That would not just save us from the crap but also prevent the emissions from being created at all (maybe).

dw_innovation , to Random stuff
@dw_innovation@mastodon.social avatar

As news publishers negotiate with companies that have built LLMs, they are starting to think about how to assign a dollar value to their content.

There are three parts to this problem:

  1. Understanding what can be licensed
  2. Setting a price
  3. Getting the companies to agree to pay

Interesting article by Anya Schiffrin for Poynter: https://www.poynter.org/reporting-editing/2024/google-search-ai-effect-news-publishers-deals/

rotnroll666 , to Random stuff
@rotnroll666@mastodon.social avatar

Absolutely unbelievable, but here we are. Slack is using messages, files etc. to build and train AI models, enabled by default, and opting out requires a manual email from the workspace owner.

https://slack.com/intl/en-gb/trust/data-management/privacy-principles

What a time to be alive in IT. 🤦‍♂️

thecwordpodcast , to Podcast
@thecwordpodcast@glammr.us avatar

New episode! 🤖

Solange, Phedra, Jenny and guest host Luisa Casella explore the world of AI and what it might mean for conservation professionals. 🧠 (It's techy, but not very techy, so don't fret!)

Listen here: https://thecword.show/2024/05/15/s14e05-using-ai/

scottjenson , to Random stuff
@scottjenson@social.coop avatar

@simon
Simon, I'm working with @homeassistant a bit and we just had a fascinating discussion about 'nanoLLMs' that could run locally. They would NOT need the sum-total-of-all-human-knowledge but would really just be there as a smart parser for speech-to-text commands, keeping everything local. This is clearly still not trivial but hopefully one way to reduce the model size.

Do you know of any 'reduced' LLMs that could work in this more limited context?
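
For what it's worth, the "smart parser" idea can already be prototyped with off-the-shelf pieces. A minimal sketch, assuming llama-cpp-python and a small quantized model; the model filename and the intent schema are illustrative placeholders, not anything Home Assistant ships:

```python
# A minimal sketch of the "nanoLLM as smart parser" idea: a small local
# model is asked only to map a transcribed voice command to a structured
# intent, never to answer open-ended questions.
import json

from llama_cpp import Llama  # pip install llama-cpp-python

# Assumed model file; any sub-1B instruction-tuned GGUF model would do.
llm = Llama(model_path="tinyllama-1.1b-chat.Q4_K_M.gguf", verbose=False)

PROMPT = """You convert smart-home voice commands into JSON.
Reply with only a JSON object with keys "action" and "entity".
Command: {command}
JSON:"""

def parse_command(command: str) -> dict | None:
    # temperature=0 keeps the parse deterministic; we want structure,
    # not creativity.
    out = llm(PROMPT.format(command=command), max_tokens=64, temperature=0.0)
    text = out["choices"][0]["text"]
    try:
        return json.loads(text[text.index("{"): text.rindex("}") + 1])
    except ValueError:
        return None  # fall through to a rule-based parser

print(parse_command("turn on the kitchen light"))
# e.g. {"action": "turn_on", "entity": "light.kitchen"}
```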

alex ,
@alex@digittante.com avatar

@scottjenson @simon @homeassistant

Step 1: pick a small open-source LLM, less than 1 billion parameters

Step 2: assemble a training dataset of every lightbulb in the world and how to turn it on and off...

Step 3: profit!

woozle , to Random stuff
@woozle@toot.cat avatar

I'd like to see an LLM trained on the contents of the Debian apt database (as well as any public discussion of same), so I can ask it questions about what's available or which package to install if I want something.
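
Short of training anything, the same itch could be scratched with retrieval: pull candidate package records out of the local apt cache and hand them to whatever model you have as context. A sketch, assuming a Debian-family system; ask_llm() is a hypothetical placeholder, not a real library call:

```python
# A sketch of the idea without any training: retrieve package records
# from the local apt database and let an LLM answer from that context.
import subprocess

def apt_candidates(query: str, limit: int = 5) -> list[str]:
    """Search the apt cache and return the full records of the top hits."""
    hits = subprocess.run(["apt-cache", "search", query],
                          capture_output=True, text=True).stdout.splitlines()
    names = [line.split(" - ")[0] for line in hits[:limit]]
    return [subprocess.run(["apt-cache", "show", name],
                           capture_output=True, text=True).stdout
            for name in names]

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: wire this to any local or hosted model.
    raise NotImplementedError

def answer(question: str, search_terms: str) -> str:
    context = "\n---\n".join(apt_candidates(search_terms))
    return ask_llm("Using only these Debian package records:\n"
                   f"{context}\nAnswer the question: {question}")

# answer("Which package gives me a lightweight PDF viewer?", "pdf viewer")
```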

scottjenson , (edited ) to Random stuff
@scottjenson@social.coop avatar

I don't think we're talking enough about the UX limitations of LLMs. A chat interface is just too primitive, and the 'randomization' aspect of each generation makes refinement nearly impossible. As a thought experiment, I'm suggesting there are three levels of LLM UX:

Level 1: Chat in and out
Existing chat UX

Level 2: Masked Chat
Still chat based, but hidden behind an app that turns visual actions into chat prompts and renders the results back to the user (see the sketch below).

1/3
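
To make Level 2 concrete, here is a toy sketch of a "masked chat" layer. The templates and call_model() are illustrative assumptions, not any particular product's API; pinning temperature (and a seed, where the backend supports one) is also one way to tame the randomization problem mentioned above:

```python
# A toy sketch of "Level 2: Masked Chat": the user clicks UI controls,
# the app compiles them into hidden prompts, and the reply is rendered
# back into the interface. The user never sees a chat transcript.
TEMPLATES = {
    "shorten":   "Rewrite the following text in half as many words:\n{text}",
    "formalize": "Rewrite the following text in a formal tone:\n{text}",
}

def call_model(prompt: str) -> str:
    # Placeholder for any chat backend; a fixed temperature/seed here
    # would make repeated refinements less of a dice roll.
    raise NotImplementedError

def on_button_click(action: str, selected_text: str) -> str:
    """A toolbar press becomes a hidden prompt; the result replaces the
    selection, so the user only ever sees the document, never the chat."""
    return call_model(TEMPLATES[action].format(text=selected_text))
```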

bibliolater , to Space & Science
@bibliolater@qoto.org avatar

AI deception: A survey of examples, risks, and potential solutions

"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

TechDesk , to Random stuff
@TechDesk@flipboard.social avatar

Stack Overflow, a popular forum for programmers and software developers, announced a partnership with OpenAI earlier this week, selling the site’s data, including users’ forum posts, to train ChatGPT.

Now unhappy users are finding themselves banned for editing their popular posts in protest, and even finding those posts reverted by admins – "a reminder that anything you post on any of these platforms can and will be used for profit," as one user concluded. Futurism has more.

https://flip.it/IVR89a

crecente , to law group
@crecente@games.ngo avatar

Assume a website plans to use user-contribution content to train LLMs. The license for the content is CC BY-SA.

❓ Would the output from the resulting LLMs be required to provide attribution?

@law

scottjenson , to Random stuff
@scottjenson@social.coop avatar

Color me surprised.
Basically a walkthrough of a recent paper showing that classifier performance flattens out as you add more data, which means things are NOT going to exponentially explode into general intelligence (using current models).
https://www.youtube.com/watch?v=dDUC-LqVrPU
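
The flattening is easy to picture with a toy saturating power law; all constants below are invented for illustration, and each extra 10x of data buys a smaller accuracy gain:

```python
# Toy illustration of the flattening described in the video: if error
# falls as a saturating power law in dataset size, returns diminish.
def accuracy(n_samples: float, ceiling: float = 0.92,
             scale: float = 1e3, exponent: float = 0.35) -> float:
    # Accuracy approaches `ceiling` but never exceeds it.
    return ceiling * (1.0 - (scale / (scale + n_samples)) ** exponent)

for n in [1e4, 1e5, 1e6, 1e7, 1e8]:
    print(f"{n:>13,.0f} samples -> accuracy ~ {accuracy(n):.3f}")
# Gains per decade of data shrink: ~+0.21, +0.10, +0.05, +0.02
```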

mcc , to Random stuff
@mcc@mastodon.social avatar

Hard to imagine a more intense signal that a website is a rugpull than banning users for trying to delete their own posts

https://www.tomshardware.com/tech-industry/artificial-intelligence/stack-overflow-bans-users-en-masse-for-rebelling-against-openai-partnership-users-banned-for-deleting-answers-to-prevent-them-being-used-to-train-chatgpt

Like just incredible "burning the future to power the present" energy here

chris ,
@chris@strafpla.net avatar

@mcc So developers will stop sharing information on Stack Overflow, and future ChatGPTs and friends will be forever stuck in the past, answering questions about historically relevant frameworks and languages.

emill1984 , to Random stuff Polish
@emill1984@101010.pl avatar

It used to be "anyone can be a programmer and earn 15k a month."

Today it's "anyone can be a prompt engineer" xD I wonder when it all blows up ;)

https://www.theverge.com/2024/5/8/24151847/microsoft-copilot-rewrite-prompt-feature-microsoft-365

SztucznaInteligencja

huey , to Law
@huey@social.ketupat.me avatar

"When I was asked to beta test its AI research bot, I informed a major legal research provider that it worse than sucked. It was dangerous. Not only did it hallucinate... but it conflated almost all the critical distinctions that make law work. It failed to distinguish between jurisdictions, both states and state and federal, as well as majority, concurrences and dissents. To AI, it was all the same, words about law..."

https://www.lexblog.com/2024/05/03/all-rise-for-judge-ai/

snoopy , (edited ) to Forum Libre in I spend 4/5 of my time working on language models (LLMs, sometimes called AIs) and 2/5 on open-hardware robotics, AMA
@snoopy@mastodon.zaclys.com avatar

Hi fediverse,

@keepthepace_ is doing an Ask Me Anything on @forumlibre

The theme: language models and open-hardware robotics. If you're interested in discovering a side of these other than Skynet and the money printer,

I invite you to read this post where he talks about his background:
https://jlai.lu/post/6554057

Then ask your questions. Happy reading!

Don't hesitate to share :3

TechDesk , to Random stuff
@TechDesk@flipboard.social avatar

A former Amazon executive has accused the company of telling her to violate copyright law in order to compete with other tech giants in AI, reports Business Insider.

As part of a wider lawsuit against the company, in which Viviane Ghaderi claims she was discriminated against and ultimately fired for taking maternity leave, Ghaderi says she was told to "ignore legal advice and Amazon's own policies to get better results" when developing its large language models.

https://flip.it/oCwttM

scottjenson , to Random stuff
@scottjenson@social.coop avatar

I've just started using limitless.ai. I find the experience pretty set-and-forget, and I love it. In my meetings I can focus on the person instead of worrying about note-taking. Better yet, in addition to the transcript you get notes and a summary. Still testing it out, but it's interesting.

Even better, it's web based, so I can use it from nearly any device. That means a lot to me. Ultimately I'd prefer this tech to be local, but unlike others, I'm not knee-jerk against cloud services.

GossiTheDog , to Random stuff
@GossiTheDog@cyberplace.social avatar

My Mastodon server, cyberplace.social, has received a legal threat in an attempt to have a user's thread deleted. It is styled as a cease and desist.

I have published the email here:
https://github.com/GossiTheDog/Cyberplace/blob/main/LegalThreats/Cease%20and%20Desist%20Order%20-%20Felix%20Juhl

chanakya ,
@chanakya@social.screamingatmyscreen.com avatar

@GossiTheDog are we sure that Ian Watt is not an AI? At least that's in his name.

LMWStuttgart , to museum group German
@LMWStuttgart@xn--baw-joa.social avatar

Don't miss it tomorrow: the conference "No risk, no innovation? Künstliche Intelligenz in der Museumspraxis" (artificial intelligence in museum practice) looks at AI-based technologies in the museum field. Here at the museum, we too are exploring the possible uses of AI technologies. Our AI ethics guidelines lay the groundwork for this: https://github.com/LMWStuttgart/KI-Ethik

@museum
