"Even though it was hoped that machines might overcome human bias, this assumption often fails due to a problematic or theoretically implausible selection of variables that are fed into the model and because of small size, low representativeness, and presence of bias in the training data [5]."
Suchotzki, K. and Gamer, M. (2024) 'Detecting deception with artificial intelligence: promises and perils,' Trends in Cognitive Sciences [Preprint]. https://doi.org/10.1016/j.tics.2024.04.002.
The social norm is constructed: not naturally occurring but created by the society in which it is found.
Hence there are no actions that are in themselves inherently #abnormal or universally condemned by all societies at all times. Deviance is thus situational and contextual.
Emotion #AI draws on signals such as vocal tone, facial expressions, data from wearable devices, text, and how people use their computers, with a big promise: to detect and predict how someone is feeling.
It’s used in contexts both mundane, like entertainment, and high stakes, like the workplace, #hiring and #healthcare.
"Claims made by Israeli politicians, journalists and the Israeli Defence Force (IDF) have been amplified and accepted as truth without verification. Some of these have subsequently been proven to be false, yet have not been corrected nor have they been challenged when repeated."
Unitary management:
"A good decision is no longer a legally well-supported decision, but one made within a reasonable time, which will not be appealed and whose application will incur fewer costs."
Risk tools are supposed to improve our chances. A study shows:
➤ Risk assessment reduces the likelihood of incarceration for relatively affluent defendants,
➤ Risk assessment increases the likelihood of incarceration for relatively poor defendants.
Nothing is more annoying than influencers giving themselves permission to dehumanize Palestinians. Here are two examples, but I've seen many more. These are definitely two of the most defensive, lol.
Maybe it says more about the pitfalls of influencer culture (than the individuals), the need to be seen by millions as "authentic," ...
“The repeated stylizations of the body—everyday acts and gestures—are themselves performatives, producing the gendered identity of which they are thought to be the expressions.” (Alberti, 2013, p. 95)
I know Mastodon is designed to keep everything nice and to shield us from the horrors of the world, and that it is good for us to only look at cat pictures all day and cheer each other up. But honestly: sometimes I also think that that is just a lot of crap, and that everyone who turns away and continues with their nice privileged life as if all is OK is complicit. #Gaza
@pvonhellermannn
The text linked below may help. Sarah Aziza, through a stirring mix of personal reflection and philosophical reckoning, disabuses the Western witness of its self-gratifying power, instead – amid Israel’s openly broadcast yet unimpeded march towards genocide in Gaza – unmasking the impotence, deceit and hollowness that witnessing currently entails. More than a collective indictment or last-gasp scream of defiance into the void, Aziza’s own testimony guides the reader towards a form of witness no longer elevated in angelic, uncompromised distance, but instead manifest in the embodied, intimate, ego-displacing position of “sacrifice, mourning and resisting.” https://jewishcurrents.org/the-work-of-the-witness
And most (neural network style) "#AI" or #LLM systems cannot even tell you WHY they produced the result they give. It's all in the training data. Huge "garbage in, garbage out" risks/biases!
The terrible human toll in Gaza has many causes.
A chilling investigation by +972 highlights efficiency:
An engineer: “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed.”
An AI outputs "100 targets a day", like a factory whose deliverable is murder:
"According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”"
"The third is “power targets,” which includes high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices."
#IDI stands for the Intelligence Division of the Israeli army. Here is some of its own praise of technology usage:
May 2021 "is the first time that the intelligence services have played such a transformative role at the tactical level."
This is the result of a strategic shift made by the IDI [in] recent years. Revisiting its role in military operations, it established a comprehensive, “one-stop-shop” intelligence war machine, gathering all relevant players in intelligence planning and direction, collection, processing and exploitation, analysis and production, and dissemination process (PCPAD)".
It was easier to locate the individuals in their private houses.
“We were not interested in killing operatives only when they were in a military building or engaged in a military activity. On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”
"New doubts are emerging about the New York Times’s coverage of sexual violence in the October 7 attack. The paper must explain why it broke its own rules by hiring a clearly biased writer who endorsed racist and violent rhetoric toward Palestinians."
Ben Norton wrote: 'Instead of investigating how it published a fake story on supposed Hamas "mass rapes", the NY Times is investigating... employees who leaked info about how its editors hired a racist Israeli propagandist to write the fake story to justify Israel's genocide.'