• BlueMonday1984@awful.systems · 1 point · 8 hours ago

    Related toot:

    insurers rely on the world being predictable: actuarial tables let them profit from quantifiable risks. Non-deterministic AI produces outcomes that can’t be predicted. For an insurer, that is terrifying: they could potentially lose unlimited amounts of money. But I’m sure they’re thrilled that genAI proliferation gives them sweeping new ways to exclude most business activities with a single discreet sentence, while maintaining the same premiums as before. In the next couple of years, AI adopters are going to find out their liability coverage has become utterly worthless, because their activities are so contaminated by non-determinism, which no one wants to cover.

  • nfultz@awful.systems · 3 points · 1 day ago

    Cybersecurity insurance was a topic last term at the tech/law group on campus; see also Josephine Wolff, https://direct.mit.edu/books/oa-monograph/5373/Cyberinsurance-PolicyRethinking-Risk-in-an-Age-of

    This month, I found out my business insurance split cybersec out of the general policy a couple of years ago and never told me, so I had to pay a $300 upcharge for a new contract that specifically required it. Also a new $7 terrorism fee.

    Probably you can s/cyber/AI/g and guess where things are heading.

  • cornflake@awful.systems · 3 points · 1 day ago

    Ever since the Great Recession I’ve felt that the executive class is the most parasitic of all, regularly working against even the shareholders’ interests, let alone anyone else’s. Risk management is a huge part of this disconnect; these execs simply do not care about the downsides.

  • lukematthewsutton@awful.systems · 13 points · 2 days ago

    I have an anecdote not directly related to insurance, but to liability.

    I was involved in re-negotiating a Master Services Agreement with a tech consulting firm. The sticking point was a set of terms where they essentially said “we might use AI, we won’t tell you if we do, and if we do and it goes wrong, we accept no liability”. They would not budge on that.

    I quit before it got hashed out, but I bet it got signed anyhow. People are so blasé about anything AI.

  • Tar_Alcaran@sh.itjust.works · 9 points · 2 days ago

    I recently heard a director at a major contractor say that they’ll start using AI to design things as soon as the AI company accepts the same liability as traditional (read: actual) design companies.

    Which is, of course, never.