What jobs are we preparing students for by boosting their writing productivity with AI? After shedding 40% of its workforce, the gaming site Gamurs posted an ad last June for an editor to write 250 articles per week. That’s a new article every 10 minutes, at $4.25 per article.
As @novomancy has noted, AI is only the accomplice here. This clickbait nightmare is the logical conclusion of the ad-supported web.
Surprise, surprise... Legislation regulating AI-enabled HR tech is being ghostwritten by HR tech firm Workday, and will allow the company and its clients to covertly run discriminatory AI models, with no practical recourse for those facing unfair treatment.
Remember when AI was used to generate a George Carlin comedy routine? That didn’t happen. Generative AI isn’t that good.
After George Carlin’s estate sued, a representative of the show admitted that the routine was human-written. The claim that it was produced by an AI trained on Carlin’s material appears to have been far from the truth, used instead as a way to garner attention.
Cory Doctorow frequently reminds us that these stories of magical AI are peddled by boosters and critics alike. Critics make the mistake of assuming that AI really is that good, and frame it as a bad use of AI. This unnecessarily inflates the idea that AI can do things it really can’t, adding fuel to magical thinking.
Given how effective AI has become as a marketing vehicle, it’s probably a good idea to ask more often whether AI is even in the picture. Was there AI involved? Perhaps, but not to the extent that salespeople would have you imagine.
And yes, there’s a name for this kind of criticism: “criti-hype,” a term coined by Lee Vinsel. Read more in Doctorow’s blog post, which is, as always, littered with links for further reading:
I remain quite bullish on the roles Generative AI and other Machine Learning tools can play in political campaigns. We already see strong, positive use cases for things like internal operations and asynchronous voter communication. The tools are incredibly useful at tasks like helping us draft and edit content, or acting as a co-pilot in data analysis. But...
... One of my concerns lies in the temptation to inject them into our synchronous voter communications. It’s not just about deception or hallucinations. We’re seeing campaigns experiment with things like chatbots and AI-robocallers. But we have to remember the slogan of Democratic campaigning is to “meet voters where they are.” Relegating voters to talking only to a machine is not “meeting them where they are,” even when we disclose that it’s AI ...
So if I’m advising any campaign on this tech, from Biden or the DNC all the way down to a local city council race, I’m reminding them of our commitment to treat voters and supporters with dignity and respect. And that means that if voters bother to take time out of their day to talk to us, then the least we can do is actually show up to that conversation.
So, no, I would not recommend this. This is exactly the kind of tactic people inside and outside of politics feared would start happening. My team and I are going to continue to work with campaigns, committees, and electoral organizations to adopt voter-positive uses of AI. Not this. If you want to talk to voters, then talk to voters. Otherwise, just go buy ads.
An important point I’m going to hit consistently: Using a bot for voter contact is a bad strategic decision. We can debate whether it's "wrong" or "dangerous" (hopefully it’s clear that I have opinions!). But ultimately, it's just not smart.
The terrible human toll in Gaza has many causes.
A chilling investigation by +972 highlights one of them: efficiency.
An engineer: “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed.”
An AI outputs “100 targets a day.” Like a factory, but one that delivers murder:
"According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”"
"The third is “power targets,” which includes high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices."
A person who took part in previous Israeli offensives in Gaza said:
“If they would tell the whole world that the [Islamic Jihad] offices on the 10th floor are not important as a target, but that its existence is a justification to bring down the entire high-rise with the aim of pressuring civilian families who live in it in order to put pressure on terrorist organizations, this would itself be seen as terrorism. So they do not say it.”
#IDI stands for the Intelligence Division of the Israeli army. Here is some praise of its use of technology:
May 2021 "is the first time that the intelligence services have played such a transformative role at the tactical level.
This is the result of a strategic shift made by the IDI [in] recent years. Revisiting its role in military operations, it established a comprehensive, “one-stop-shop” intelligence war machine, gathering all relevant players in intelligence planning and direction, collection, processing and exploitation, analysis and production, and dissemination process (PCPAD)".
“Levy describes a system that has almost reached perfection. The political echelon wants to maintain the status quo, and the military provides it with legitimacy in exchange for funds and status.”
“Levy points out the gradual withdrawal of the old Ashkenazi middle class from the ranks of the combat forces[…]:
• the military’s complete reliance on technology as a decisive factor in warfare;
• the adoption of the concept […] of an army that is “small and lethal”;
• the obsession with the idea of #deterrence, which is supposed to negate the other side’s will to fight; and
• the complete addiction to the status quo as the only possible and desirable state of affairs.”
Here is a follow-up to Yuval Abraham's investigation:
"The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties" https://www.972mag.com/lavender-ai-israeli-army-gaza/
It was easier to locate the individuals in their private houses.
“We were not interested in killing operatives only when they were in a military building or engaged in a military activity. On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”
Last week's spectacular #OpenAI soap opera hijacked the attention of millions of normal, productive people and nonconsensually crammed them full of the fine details of the debate between #EffectiveAltruism (#doomers) and #EffectiveAccelerationism (AKA e/acc), a genuinely absurd debate that was allegedly at the center of the drama.
White points out that there's another, very distinct side in this AI debate, as different and distant from Dee and Dum as the Beamish Boy is from the Jabberwock. This is the side of #AIEthics, the side that worries about "today’s issues of ghost labor, algorithmic bias, and erosion of the rights of artists and others."
The "effective altruism" and "effective accelerationism" ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let's try something else.
Both effective altruism and effective accelerationism embrace as a given the idea of a super-powerful artificial general intelligence being just around the corner, an assumption that leaves little room for discussion of the many ways that AI is harming real people today.
Effective accelerationism has found an ally in Marc Andreessen, but his recent manifesto exposes that he just wants to go back to the old days when tech founders were uncritically revered, and when obstacles between him and staggering profits were nearly nonexistent.