Futurism wanted to know: what kind of company creates fake authors for a newspaper or magazine and operates them like sock puppets? What they discovered "should alarm anyone who cares about a trustworthy and ethical media industry." https://flip.it/oxcxAC #Tech #Technology #AI #Journalism
In this week’s Disconnect Roundup, Apple delivered an insult to life itself by crushing all human creativity into an iPad Pro in its recent ad. Plus, recommended reads, labor updates, and other news you might have missed.
Google’s mobile platform will have to look a little different to compete in the AI era. And, Allison Johnson writes, “If the past 12 months is any indication, it’s going to be a little messy.” Read more from @theverge. https://flip.it/xwIUGs #Tech #Technology #AI #Google #Android
Gaza Death Toll Hits 34,971 Amid Ongoing Israel-Hamas Conflict
Hamas Releases Disturbing Video of Israeli Hostage as Psychological Warfare Intensifies
Israel Faces Demands for Clarification on Treatment of Palestinian Detainees Following CNN Exposé
You already know not to take an AI chatbot seriously. But there may be reason to be even more cautious. New research has found that many AI systems have already started to deliberately present human users with false information. Science Alert explains why “AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception.” https://flip.it/ZbnJtj #Science #AI #ArtificialIntelligence #Chatbot #Tech
AI deception: A survey of examples, risks, and potential solutions
"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."
Charles Babbage, inventor of the first mechanical computer, has a particular quote that's always stuck with me:
"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."
Babbage made this observation almost 200 years ago, and he'd certainly say the same thing about modern AI proponents who expect that feeding all of the internet into an over-glorified autocomplete will produce factual results.
Generative AI is not just teaching cyber bad guys new tricks — it’s also making it easier for anyone to become a bad guy, according to Cybersecurity and Infrastructure Security Agency (CISA) chief Jen Easterly.
“I look at AI: how fast it’s moving, how unpredictable it is, how powerful it is,” Easterly told @AxiosNews. “I think it’ll make people who are less sophisticated actually better at doing some of the bad things that they want to do.” Here’s more from the interview.
"Consumer AI is just the new search" anecdote: [1/3]
Yesterday some casual, non-techy coworkers were talking about using Excel reports to analyze data, and it turns out two of them use #ChatGPT to figure out how to do things in Excel.
"Consumer AI is just the new search" anecdote: [2/3]
So, before this #AI stuff, if you typed "how do I do X in Excel" into Google, you'd get a bunch of hits, have to wade through the results to see which link was actually what you were looking for, and then test whether that solution even worked.
"Consumer AI is just the new search" anecdote: [3/3]
There are over 1 billion websites with over 30 billion web pages out there on the internet, and regular search absolutely sucks now. It's no wonder normies see #ChatGPT as magic when it can take those 30 billion+ pages and give you one answer that's most likely what you are looking for.
@bagder
Great action 👍🏻. My hope is that we get more specialized pages/blogs instead of central places that sooner or later get way too much power, especially when that power is built on contributions from the community. For this reason, I decided to revamp my blog (https://linux-audit.com/), specialize, and still allow #AI to crawl it. After all, if it is going to exist anyway, I'd rather it draw on higher-quality knowledge.
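For anyone wondering what "allowing AI to crawl it" looks like in practice, here's a minimal robots.txt sketch. The crawler names are illustrative examples, not a claim about this blog's actual config (GPTBot is OpenAI's documented crawler user agent; Google-Extended is Google's AI-training control token); a site would list whichever bots it actually wants to permit or block:

# Sketch only: explicitly permit some known AI crawlers
User-agent: GPTBot
Allow: /

User-agent: Google-Extended
Allow: /

# Everyone else (search engines, feed readers) remains allowed
User-agent: *
Allow: /

Since robots.txt permits by default, the explicit Allow lines are redundant in the strict sense; spelling them out just documents the choice, and the same file is where you'd add Disallow rules for bots you don't want.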