• 9 Posts
  • 232 Comments
Joined 6 months ago
Cake day: August 23, 2025








  • Hippie culture had elements of entrepreneurship and self-improvement, and that was definitely a time in the USA when the boss might invite you for drinks or to a strip club while you were a candidate for promotion

    I think if you are comfortable saying “I use substances to turn my brain off / experiment with other ways of seeing the world / because it’s fun,” you tend to drift away from orthodox LW


  • I don’t know much about Michael Vassar’s career after MetaMed went bankrupt in 2015. It sounds like he remained close to CFAR after it pivoted from rationality training to AI doom, but I don’t know much about them after 2016 either. I don’t know of a long profile of him like the RationalWiki articles and podcasts on other leading figures.

    A characteristic of rationalist groups is that leaders refuse to state clear norms around everyday human frailties like substance use, hiring friends and lovers, and dating people you have power over. The Leverage staffer defended her boss by saying that they had no policy around senior staff dating employees.

    Substance use, intense sexual play, tangled relationship chains, narcissists with reality distortion fields, and mental illness make it hard to establish facts.

    Some of the woo-curious LessWrongers get grouped under postrationalism.

    I doubt they have read “The Tyranny of Structurelessness” and subsequent debates because it was written by a feminist.



  • Posting for archival and indexing purposes: u/GorillasAreForEating found an Urbit post titled “Quis cancellat ipsos cancellores?” which complains that Aella takes it upon herself to exclude people and movements from the broader LessWrong/Effective Altruist community. The poster says that Aella was the anonymous person who pushed CFAR to finally do something about Brent Dill, because she was roommates with “Persephone.” He or she does not quite say that any of the accusations were untrue, just that “an anonymous, unverified report” says that some details were changed by an editor, and that her Medium post was of “dramatically lower fidelity, but higher memetic virulence” than Brent’s buddies investigating him behind closed doors (Dill posted about domming a 16-year-old whom he met when she was 15). The poster accuses Aella of using substances and BDSM games to blur the line of consent.

    The post names Joscha Bach as someone Aella tried to exclude. We recently talked about Bach’s attempt to get Jeffrey Epstein to fund an event where our friends would speak.

    Often, people in messed-up situations point at a very similar situation and say “at least we are not like that.” I hope that all of these people find friends who can give them the perspective that none of these communities are healthy or just. Whether you are into bull sessions or polyamory, there are healthy communities to explore in any medium-sized city!





  • News story from 2015:

    (Some people might have been concerned to read that) almost 3,000 “researchers, experts and entrepreneurs” have signed an open letter calling for a ban on developing artificial intelligence (AI) for “lethal autonomous weapons systems” (LAWS), or military robots for short. Instead, I yawned. Heavy artillery fire is much more terrifying than the Terminator.

    The people who signed the letter included celebrities of the science and high-tech worlds like Tesla’s Elon Musk, Apple co-founder Steve Wozniak, cosmologist Stephen Hawking, Skype co-founder Jaan Tallinn, Demis Hassabis, chief executive of Google DeepMind and, of course, Noam Chomsky. They presented their letter in late July to the International Joint Conference on Artificial Intelligence, meeting this year in Buenos Aires.

    They were quite clear about what worried them: “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

    “Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populations, warlords wishing to perpetrate ethnic cleansing, etc.”

    The letter was issued by the Future of Life Institute which is now Max Tegmark and Toby Walsh’s organization.

    People have worked on the general pop culture that inspired TESCREAL, and on the current hype, but less on earlier attempts to present machine minds as a clear and present danger. This letter already has the ‘arms race’ narrative and the ‘research ban’ as proposed solution, but it focuses on smaller dangers.



  • I like this reply on Reddit:

    I do my PhD in fair evaluation of ML algorithms, and I literally have enough work to go through until I die. So much mess, non-reproducible results, overfitting benchmarks, and worst of all this has become a norm. Lately, it took our team MONTHS to reproduce (or even just run) a bunch of methods to just embed inputs, not even train or finetune.

    I see maybe a solution, or at least help, in closer research-business collaboration. Companies don’t care about papers really, just to get methods that work and make money. Maxing out drug design benchmark is useless if the algorithm fails to produce anything usable in real-world lab. Anecdotally, I’ve seen much better and more fair results from PhDs and PhD students that work part-time in the industry as ML engineers or applied researchers.

    This can go a good way (most of the field becomes a closed circle like parapsychology) or a bad way (people assume the results are true and apply them, like social priming or Reinhart and Rogoff’s economics paper with the Excel error).