  • African-American is pretty awkward, but it fits the similarly awkward model of Irish-American and Italian-American. The reason those are more specific should be obvious, and horrifying - the vast majority of black Americans have little record of their ancestry before cross-Atlantic transportation. It would be nice if Americans just focused on the American part, but these labels were often imposed on them from outside before they were adopted as a matter of spiteful pride from inside. Like LGBTQ Pride, St Patrick’s Day parades originally had an element of defiance and protest.

    It’s useful in AAVE though because it is specifically American as opposed to just “black”. There are black slang/vernaculars in the Caribbean, Britain and France for example. Some of it bleeds into AAVE/Global English too - e.g. fam, bruv.

  • The people doing this - very, very many of them literal school children - are not training image AI models, or even LoRAs or whatever, on their home servers by feeding them images of a person from multiple angles with different parts exposed. They’re just taking a single image and uploading it to some dodgy Android store app or, y’know, Grok, which then colours in the part it identifies as clothes with a perfectly average image from the Internet (read: heavily modified in the first place and skewed towards unrealistic perfection). The process is called in-painting. The same models use the same technique if you just want to change the clothes, and people find that a brief amusement. If you want to replace your bro’s soccer jersey with the jersey of a team he hates to wind him up, you are not carefully training the AI to understand what he’d look like in it. You just ask the in-painter to do it, and assuming the model has already been fed the statistically average combination of pixels for “nude girl” or “Rangers jersey”, it applies a random seed and starts drawing, immediately and quickly.
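    The core trick in-painting relies on is simpler than it sounds: pixels outside the mask are kept from the original photo, and pixels inside the mask are filled with freshly generated content. Here is a toy numpy sketch of just that compositing step - not a real diffusion model, and the function name `inpaint_composite` is my own label for illustration:

    ```python
    import numpy as np

    def inpaint_composite(image: np.ndarray, mask: np.ndarray,
                          generated: np.ndarray) -> np.ndarray:
        """Composite generated pixels into an image under a mask.

        image, generated: (H, W, 3) float arrays.
        mask: (H, W) boolean array - True where the region
        (e.g. the jersey) should be replaced.
        """
        # Keep original pixels where mask is False,
        # take generated pixels where mask is True.
        return np.where(mask[..., None], generated, image)

    # Tiny demo: a 2x2 black photo, an all-white "generated" patch,
    # and a mask covering the diagonal.
    photo = np.zeros((2, 2, 3))
    patch = np.ones((2, 2, 3))
    mask = np.array([[True, False],
                     [False, True]])
    result = inpaint_composite(photo, mask, patch)
    ```

    In a real tool, `generated` comes from a diffusion model that is conditioned on the unmasked pixels and a text prompt, which is why no per-person training is needed: the model already carries a statistical prior for whatever the prompt names.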

    That’s the problem. It has always been possible to make a convincing fake nude of someone. But there was a barrier to entry - Photoshop skills, or money to pay someone with Photoshop skills, time, and a footprint (you’re not going to be doing this on dad’s PC).

    Today that barrier to entry has been massively reduced, which puts this means of abuse in the hands of every preteen with a smartphone, in a matter of seconds. And then it’s shared with their whole peer group, in a matter of seconds more.

    It’s the exact same logic that means I occasionally find a use for image generation tools. Yes, I can probably draw an orc with a caltrop stuck up his nose, but I can’t do that mid-session of D&D, and if it’s for a 10-second bit, why bother? Being able to create and share it within seconds is a large part of the selling point of these tools. Did I just steal from an artist? Maybe. Was I going to hire an artist to do it for me? No. Was I going to Google the words “orc” and “caltrop” and overlay the results for a cheap laugh? Maybe. Is that less stealing? Maybe. Am I getting way off the point, which is that these people aren’t training image generation AIs with fragments of photos in order to make a convincing fake? Yes.