Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this…)

    • corbin@awful.systems · 2 days ago

      I went into this with negative expectations; I recall being offended in high school that The Flashbulb was artificially sped up, unlike my heroes of neoclassical guitar and progressive-rock keyboards, and I've felt that their recent thoughts on newer music-making technology have been hypocritical. That said, this was a great video and I'm glad you shared it.

      Ears and eyes are different. We deconvolve visual data in the brain, but our ears actually perform a Fourier decomposition with physical hardware. As a result, psychoacoustics is a real and non-trivial science, used e.g. in MP3, which limits what an adversary can do to frustrate classification or learning, because the result still has to sound like music in order to get any playtime among humans. Meanwhile I'm always worried that these adversarial groups are going to accidentally propagate something like the McCollough effect, a genuine cognitohazard that causes edges to become color-coded in the visual cortex for (up to) months after a few minutes of exposure; it's a kind of possible harm that by definition defies automatic classification.
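      To make the ear analogy concrete, here's a minimal sketch (NumPy, with made-up tone frequencies) of the kind of Fourier decomposition the cochlea performs mechanically: mix two tones, then recover both from the spectrum.

```python
import numpy as np

# Synthesize one second of a 440 Hz + 880 Hz chord, then recover the two
# component frequencies with an FFT -- a digital stand-in for what the
# cochlea does with physical hardware. The tone choices are arbitrary.
sample_rate = 44100                        # samples per second
t = np.arange(sample_rate) / sample_rate   # one second of time points
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.abs(np.fft.rfft(signal))                 # magnitude per bin
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)  # bin -> Hz

# The two loudest bins correspond to the two tones we mixed in.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))
```

With exactly one second of signal the bins land on integer hertz, so the peaks come out at the mixed-in frequencies.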

      HarmonyCloak seems like a fairly boring adversarial tool for protecting the music industry from the music industry. Their code is incomplete and likely never going to get properly published; again we're seeing an industry-capture research group taking and not giving back to the Free Software community. I think all of the demos shown here are genuine, but he fully admits that this is a compute-intensive process which I estimate is going to slide back out of affordability by the end of 2026. This is going to stop being effective as soon as we get back into AI winter, but I'm not going to cry for Nashville.

      I really like the two attacks shown near the end, starting around 22:00. The first attack, if genuinely not audible to humans, is likely a Mosquito-style frequency that is above hearing range and physically vibrates the components of the microphone. Hofstadter and the Tortoise would be proud, although I'm concerned about the potential long-term effects on humans. The second attack is again adversarial but specific to models on home-assistant devices which are trained to ignore some loud sounds; I can't tell spectrographically whether that's also done above hearing range or not. I'm reluctant to call for attacks on home assistants, but they're great targets.

      Fundamentally this is a video that doesn't want to talk about how musicians actually rip each other off. The "tones and rhythms" that he keeps showing with nice visualizations have been machine-learnable for decades, ranging from beat-finders to frequency-analyzers to chord-spellers to track-isolators built into our music editors. He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.

      • scruiser@awful.systems · 2 days ago

        which I estimate is going to slide back out of affordability by the end of 2026.

        You don't think the coming crash is going to drive compute costs down? I think the VC money for training runs drying up could drive down costs substantially… but maybe the crash hits other aspects of the supply chain and the cost of GPUs and compute goes back up.

        He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.

        Yeah this shit grates so much. Copyright is so often a tool of capital to extract rent from other people's labor.

        • corbin@awful.systems · 18 hours ago

          It's the cost of the electricity, not the cost of the GPU!

          Empirically, we might estimate that a single training-capable GPU can pull nearly 1 kilowatt; an H100 board is rated at 700W thermal design power on its own, and the board pulls more than that when memory is active. I happen to live in the Pacific Northwest near lots of wind, rivers, and solar power, so electricity is barely 18 cents/kilowatt-hour, and I'd say that it costs at least a dollar to run such a GPU at full load for 6 hours. Also, I estimate that the GPU market is currently offering a 50% discount on average for refurbished/like-new GPUs with about 5 years of service, and the H100 is about $25k new, so they might depreciate at around $2500/yr. Finally, I picked the H100 because it's around the peak of efficiency for this particular AI season; local inference is going to be more expensive once we compare apples to apples in units like tokens/watt.

          In short, with bad napkin arithmetic, an H100 costs at least $4/day to operate while depreciating around $6.85/day; operating costs approach the depreciation rate. This leads to a hot-potato market where reselling the asset is worth more than operating it. In the limit, assets with no depreciation relative to opex are treated like securities, and we're already seeing multiple groups squatting like dragons upon piles of Nvidia products while the cost of renting cloudy H100s has jumped from roughly $2/hr to $9/hr over the past year. VCs are withdrawing, yes, and they're no longer paying the power bills.
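          For anyone who wants to check the napkin, here's the same arithmetic as a short Python sketch; every figure is an estimate copied from the comment above, not a measurement.

```python
# Reproduce the back-of-napkin H100 economics from the comment above.
# All numbers are the comment's own estimates, not measured data.
power_kw = 1.0          # assumed full-load draw, board plus memory
price_per_kwh = 0.18    # Pacific Northwest electricity, USD
new_price = 25_000.0    # approximate H100 price new, USD
resale_fraction = 0.50  # assumed resale value after 5 years of service
service_years = 5

# Electricity to run the card around the clock.
opex_per_day = power_kw * 24 * price_per_kwh

# Straight-line depreciation: value lost over the service life, per day.
loss = new_price * (1 - resale_fraction)
depreciation_per_day = loss / (service_years * 365)

print(f"opex ~ ${opex_per_day:.2f}/day, "
      f"depreciation ~ ${depreciation_per_day:.2f}/day")
```

At these assumed figures, a day of electricity ($4.32) is the same order of magnitude as a day of depreciation ($6.85), which is the hot-potato point: holding the asset costs about as much as running it earns back in avoided depreciation.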