- cross-posted to:
- BoycottUnitedStates@europe.pub
- fuck_ai@lemmy.world
In 2012, Palantir quietly embedded itself into the daily operations of the New Orleans Police Department. There were no public announcements. No contracts were made available to the city council. Instead, the surveillance company partnered with a local nonprofit to sidestep oversight, gaining access to years of arrest records, licenses, addresses, and phone numbers, all to build a shadowy predictive policing program.
Palantir’s software mapped webs of human relationships, assigned residents algorithmic “risk scores,” and helped police generate “target lists,” all without public knowledge. “We very much like to not be publicly known,” a Palantir engineer wrote in an internal email later obtained by The Verge.
After years spent quietly powering surveillance systems for police departments and federal agencies, the company has rebranded itself as a frontier AI firm, selling machine learning platforms designed for military dominance and geopolitical control.
"AI is not a toy. It is a weapon,” said CEO Alex Karp. “It will be used to kill people.”
I’m not very surprised. Even old-school face recognition measures things like the distance between your eyes and nose, and that kind of geometry (your skull) doesn’t change much over 10 years of adulthood. The real danger is that they connect all of that information. And as you said, it’s everywhere these days; they have a lot of sources and, of course, it’s in the wrong hands. I say “of course” because I don’t think there are many applications where this actually helps with anything. That technology is mainly good for oppression. Predictive policing and social scores are supposed to be content for Black Mirror episodes or old sci-fi movies, not reality.
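Just to illustrate what I mean by “measuring distances”: here’s a minimal toy sketch of that kind of geometric matching. The landmark coordinates and the tolerance are made up for the example; a real system would get landmarks from a detector (e.g. dlib’s 68-point model) and use far more points.

```python
# Toy sketch of "old-school" face matching via landmark geometry.
# Coordinates and tolerance are invented for illustration only.
import numpy as np

def landmark_signature(landmarks: np.ndarray) -> np.ndarray:
    """All pairwise distances between landmarks, divided by the
    eye-to-eye distance so the result is roughly scale-invariant."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    upper = np.triu_indices(len(landmarks), k=1)  # each pair once
    scale = np.linalg.norm(landmarks[0] - landmarks[1])  # eyes
    return dists[upper] / scale

def same_face(a: np.ndarray, b: np.ndarray, tol: float = 0.05) -> bool:
    """Match if the normalized skull geometry barely differs."""
    diff = np.abs(landmark_signature(a) - landmark_signature(b))
    return float(np.mean(diff)) < tol

# (x, y) pixel positions: left eye, right eye, nose tip, chin
face_2015 = np.array([[100, 120], [160, 120], [130, 160], [130, 210]])
face_2025 = np.array([[102, 118], [161, 121], [131, 161], [129, 212]])

print(same_face(face_2015, face_2025))  # True: the geometry is stable
```

The point of the example: the signature is built from bone-structure proportions, which is exactly why a photo taken a decade later still matches.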