Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this…)

  • corbin@awful.systems · 2 days ago

    I went into this with negative expectations; I recall being offended in high school that The Flashbulb was artificially sped up, unlike my heroes of neoclassical guitar and progressive-rock keyboards, and I’ve felt that their recent thoughts on newer music-making technology have been hypocritical. That said, this was a great video and I’m glad you shared it.

    Ears and eyes are different. We deconvolve visual data in the brain, but our ears perform a genuine Fourier decomposition in physical hardware (the cochlea). As a result, psychoacoustics is a real and non-trivial science, used e.g. in MP3; it limits what an adversary can do to frustrate classification or learning, because the result still has to sound like music in order to get any playtime among humans. Meanwhile I’m always worried that these adversarial groups will accidentally propagate something like the McCollough effect, a genuine cognitohazard that leaves edges color-coded in the visual cortex for up to months after a few minutes of exposure; that kind of harm by definition defies automatic classification.
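    To make the Fourier point concrete: a few lines of numpy recover the component tones of a mixed signal, the same decomposition the cochlea performs mechanically. (A minimal sketch; the 440 Hz and 1000 Hz tones are arbitrary picks of mine, not anything from the video.)

```python
import numpy as np

sr = 8000                          # sample rate in Hz (arbitrary choice)
t = np.arange(sr) / sr             # one second of samples
# Mix two pure tones: 440 Hz at full amplitude, 1000 Hz at half amplitude.
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Real FFT gives the magnitude spectrum; rfftfreq labels each bin in Hz.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

# The two largest peaks land exactly on the tones we mixed in.
peaks = freqs[np.argsort(spectrum)[-2:]]
print([float(f) for f in sorted(peaks)])   # prints [440.0, 1000.0]
```

    With whole-number cycle counts there is no spectral leakage, so the peaks are exact; real audio smears energy across neighboring bins, which is exactly the wiggle room psychoacoustic codecs and adversarial perturbations both exploit.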

    HarmonyCloak seems like a fairly boring adversarial tool for protecting the music industry from the music industry. Their code is incomplete and likely never going to get properly published; again we’re seeing an industry-capture research group taking and not giving back to the Free Software community. I think all of the demos shown here are genuine, but he fully admits that this is a compute-intensive process which I estimate is going to slide back out of affordability by the end of 2026. This is going to stop being effective as soon as we get back into AI winter, but I’m not going to cry for Nashville.

    I really like the two attacks shown near the end, starting around 22:00. The first attack, if genuinely not audible to humans, is likely a Mosquito-style frequency that is above hearing range and physically vibrates the components of the microphone. Hofstadter and the Tortoise would be proud, although I’m concerned about the potential long-term effects on humans. The second attack is again adversarial but specific to models on home-assistant devices which are trained to ignore some loud sounds; I can’t tell spectrographically whether that’s also done above hearing range or not. I’m reluctant to call for attacks on home assistants, but they’re great targets.

    Fundamentally this is a video that doesn’t want to talk about how musicians actually rip each other off. The “tones and rhythms” that he keeps showing with nice visualizations have been machine-learnable for decades, ranging from beat-finders to frequency-analyzers to chord-spellers to track-isolators built into our music editors. He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.

    • scruiser@awful.systems · 2 days ago

      > which I estimate is going to slide back out of affordability by the end of 2026.

      You don’t think the coming crash is going to drive compute costs down? I think the VC money for training runs drying up could drive down costs substantially… but maybe the crash hits other aspects of the supply chain and cost of GPUs and compute goes back up.

      > He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.

      Yeah this shit grates so much. Copyright is so often a tool of capital to extract rent from other people’s labor.

      • corbin@awful.systems · 9 hours ago

        It’s the cost of the electricity, not the cost of the GPU!

        Empirically, we might estimate that a single training-capable GPU can pull nearly 1 kilowatt; an H100 board is rated at 700 W of thermal dissipation on its own, and it pulls more than that when memory is active. I happen to live in the Pacific Northwest, near lots of wind, river, and solar power, so electricity is barely 18 cents per kilowatt-hour, and even so it costs at least a dollar to run such a GPU at full load for six hours. Also, I estimate that the GPU market currently offers a 50% discount on average for refurbished/like-new GPUs with about five years of service, and the H100 is about $25k new, so it might depreciate at around $2,500/yr. Finally, I picked the H100 because it’s around the peak of efficiency for this particular AI season; local inference is going to look even more expensive once we compare apples to apples in units like tokens per joule.

        In short, with bad napkin arithmetic, an H100 costs at least $4/day to operate while depreciating only about $6.85/day; operating costs approach the depreciation rate, and exceed it wherever power is pricier. This leads to a hot-potato market where reselling the asset is worth more than operating it. In the limit, assets whose depreciation is negligible relative to opex get treated like securities, and we’re already seeing multiple groups squatting like dragons upon piles of nVidia products while the cost of renting cloudy H100s has jumped from roughly $2/hr to $9/hr over the past year. VCs are withdrawing, yes, and they’re no longer paying the power bills.
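        The napkin arithmetic is easy to redo in code; every input below is one of my own estimates from this thread (18 ¢/kWh, ~1 kW draw, $25k new, ~50% refurb residual at five years), not a measured figure:

```python
# Opex: whole-board draw under sustained training load, run around the clock.
draw_kw = 1.0                  # estimated kW for one training-capable GPU
price_kwh = 0.18               # Pacific Northwest electricity, USD/kWh
opex_per_day = draw_kw * 24 * price_kwh

# Depreciation: new price minus estimated refurb resale value, over five years.
new_price = 25_000.0           # H100, approximate street price in USD
residual = 0.5 * new_price     # ~50% discount observed for 5-year-old units
depreciation_per_day = (new_price - residual) / (5 * 365)

print(f"opex ${opex_per_day:.2f}/day, depreciation ${depreciation_per_day:.2f}/day")
# prints: opex $4.32/day, depreciation $6.85/day
```

        So under these assumptions the power bill alone runs about two-thirds of the depreciation, and the ratio flips past 1 anywhere electricity costs more than about 28 ¢/kWh.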