Someone actually built a Chrome extension to "Hide annoying Google AI Overviews". LOL. Right now it only has 2,000 users, but I wouldn't be surprised if it reaches millions in a few months when SGE (Search Generative Experience) is rolled out globally. https://chromewebstore.google.com/detail/hide-google-ai-overviews/neibhohkbmfjninidnaoacabkjonbahn DuckDuckGo is also releasing similar tech using AI.
@nixCraft Even better would be to prevent the LLM results from being generated. That would not just save us from the #LLM crap but also prevent the #CO2 emissions from being created at all (maybe).
Absolutely unbelievable, but here we are. #Slack is using messages, files, etc. to build and train #LLM models, enabled by default, and opting out requires a manual email from the workspace owner.
Solange, Phedra, Jenny and guest host Luisa Casella explore the world of AI and what it might mean for conservation professionals. 🧠 (It's techy but not very, don't fret!)
@simon
Simon, I'm working with @homeassistant a bit and we just had a fascinating discussion about 'nanoLLMs' that could run locally. They would NOT need the sum-total-of-all-human-knowledge but would really just be there as a smart parser for speech-to-text commands, keeping everything local. This is clearly still not trivial but hopefully one way to reduce the model size.
Do you know of any 'reduced' LLMs that could work in this more limited context?
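Roughly what I'm imagining, sketched with the llama-cpp-python bindings and a hypothetical tiny quantized model (the model file, prompt, and output format are placeholders, not a recommendation):

```python
# Rough sketch of a 'nanoLLM' used purely as an intent parser: it never
# needs world knowledge, only to map a transcribed command to an intent.
# Assumes the llama-cpp-python bindings and a hypothetical small
# quantized model file ("tiny-intent.gguf") -- both are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="tiny-intent.gguf", n_ctx=512, verbose=False)

PROMPT = (
    "Turn the spoken command into a one-line JSON intent.\n"
    'Command: "{command}"\n'
    "JSON:"
)

def parse_command(command: str) -> str:
    """Return a JSON intent string for a transcribed voice command."""
    out = llm(PROMPT.format(command=command), max_tokens=64, stop=["\n"])
    return out["choices"][0]["text"].strip()

# e.g. parse_command("turn on the kitchen lights")
# might yield: {"intent": "light.turn_on", "area": "kitchen"}
```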
I'd like to see an #LLM trained on the contents of the Debian apt database (as well as any public discussion of same), so I can ask it questions about what's available or which package to install if I want something.
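Even without training a model on that data, the retrieval half could look something like this sketch: pull candidates from apt-cache and hand them to whatever local model you have (ask_llm here is just a placeholder, not a real API):

```python
# Sketch of the retrieval half of this idea: grab candidate packages
# from the apt database and hand them to any local LLM as context.
# ask_llm() is a placeholder, not a real API.
import subprocess

def apt_candidates(query: str, limit: int = 20) -> str:
    """First `limit` lines of `apt-cache search` output for the query."""
    out = subprocess.run(
        ["apt-cache", "search", query],
        capture_output=True, text=True, check=True,
    )
    return "\n".join(out.stdout.splitlines()[:limit])

def build_prompt(question: str, query: str) -> str:
    context = apt_candidates(query)
    return (
        "Answer the question using only this Debian package list:\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# prompt = build_prompt("Which package gives me a command-line JSON tool?", "json")
# answer = ask_llm(prompt)  # whatever local or hosted model you use
```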
I don't think we're talking enough about the UX limitations of LLMs. A chat #UI is just too primitive and the 'randomization' aspect of each generation makes refinement nearly impossible. As a thought experiment, I'm suggesting there are three levels of #LLM #UX:
Level 1: Chat in and out
Existing chat UX
Level 2: Masked Chat
Still chat based, but hidden behind an app that turns visual actions into chat prompts and renders the response back to the user (see the sketch below).
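A minimal sketch of what Level 2 could look like under the hood, assuming a placeholder call_model backend (the actions and templates are made up):

```python
# Minimal sketch of "masked chat": the user clicks a button, the app
# turns the click into a hidden prompt, and the reply comes back as UI
# content rather than as a chat bubble. call_model() is a placeholder
# for whatever LLM backend is used; the actions and templates are made up.
from dataclasses import dataclass
from typing import Callable

@dataclass
class UiAction:
    kind: str          # e.g. "summarize", "shorten"
    target_text: str   # the text the user selected or clicked on

TEMPLATES = {
    "summarize": "Summarize the following text in two sentences:\n{text}",
    "shorten": "Rewrite the following text at half the length:\n{text}",
}

def action_to_prompt(action: UiAction) -> str:
    """The 'mask': a visual action becomes a prompt the user never sees."""
    return TEMPLATES[action.kind].format(text=action.target_text)

def handle_action(action: UiAction, call_model: Callable[[str], str]) -> str:
    """Send the hidden prompt and return text ready to render in the UI."""
    return call_model(action_to_prompt(action))
```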
AI deception: A survey of examples, risks, and potential solutions
"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."
Stack Overflow, a popular forum for programmers and software developers, announced a partnership with OpenAI earlier this week, selling the site’s data, including users’ forum posts, to train ChatGPT.
Now unhappy users are finding themselves banned for editing their popular posts in protest, and even finding those posts changed back by admin – “a reminder that anything you post on any of these platforms can and will be used for profit,” concluded one. Futurism has more.
Color me surprised.
Basically a walk-through of a recent paper showing that classifier performance flattens out as you add more data, which basically means things are NOT going to exponentially explode into general intelligence (using current models). https://www.youtube.com/watch?v=dDUC-LqVrPU #LLM #ChatGPT #gai
@mcc So developers will stop sharing information on #StackOverflow and future #Copilot and friends will be forever stuck in the past, answering questions about historically relevant frameworks and languages. #LLM #StuckOverflow
"When I was asked to beta test its AI research bot, I informed a major legal research provider that it worse than sucked. It was dangerous. Not only did it hallucinate... but it conflated almost all the critical distinctions that make law work. It failed to distinguish between jurisdictions, both states and state and federal, as well as majority, concurrences and dissents. To AI, it was all the same, words about law..."
The theme: language models and open-hardware robotics. If you're interested in discovering a side of it other than Skynet and the money printer,
A former Amazon executive has accused the company of telling her to violate copyright law in order to compete with other tech giants in AI, reports Business Insider.
As part of a wider lawsuit against the company, in which Viviane Ghaderi claims she was discriminated against and ultimately fired for taking maternity leave, Ghaderi says she was told to “ignore legal advice and Amazon’s own policies to get better results” when developing its large language models.
I've just started using limitless.ai. I find the experience pretty set-and-forget and I love it. For my meetings, I can focus on the person instead of worrying about note-taking. Better yet, in addition to the transcript, I get notes and a summary. Still testing it out, but interesting.
Even better, it's web-based, so I can use it from nearly any device. This means so much to me. Ultimately, I'd prefer this tech to be local, but unlike others, I'm not knee-jerk against cloud services. #UX #LLM #Web
Don't miss this tomorrow: the conference "No risk, no innovation? Künstliche Intelligenz in der Museumspraxis" looks at AI-based technologies in the museum sector. At #LMWStuttgart we are also exploring how AI technologies can be used in museums. Our AI ethics guidelines lay the groundwork for this: https://github.com/LMWStuttgart/KI-Ethik
I work 4/5 of my time on language models (LLMs, sometimes called AIs) and 2/5 on open-hardware robotics. AMA ( jlai.lu )
Hello!...