

huh, interesting


huh, interesting


well, with the recent Microsoft CEO statement that “we have to find a use for this stuff or it won’t be socially acceptable to waste so much electricity on it”, they have some level of awareness, but only a very surface-level one


what the fuck?? really??


This is Scott “Richard Lynn was right actually” Alexander we’re talking about here; the Chinese not being able to catch up without resorting to spies is absolutely a part of his agenda


good point, but also that sentence makes me want to take a long walk outside
maybe Yud just hasn’t met a partner “high IQ” enough


kinda depressing seeing people fall for Yud’s shtick without realising all the other bullshit (though in fairness the average person is not aware of the many years of rationalism lore). thankfully, people in the comment section are more skeptical but still cautious, which I think is a fair reaction to all this


considering his confidence in an AI apocalypse, I heavily doubt it


feels like a good enough place to dump my other observations of this book’s reviews
-It’s currently sitting at a 3.99 on Goodreads, with 4K+ ratings and 757 reviews
-higher on Amazon with a 4.5, though fewer reviews, only 313 (I could’ve sworn it was 800 earlier, but whatever)
-it received several high-profile endorsements, all listed on the Wikipedia page. only 7 of these endorsers work in the compsci field, and only one of them is an AI expert (Yoshua Bengio)


the new hit anime coming real soon to a simulation near you


“Tellingly, although the authors acknowledge at the start of the book that LLMs seem “shallow,” they do not ever mention hallucinations, the most significant problem that LLMs face”
christ, it’s that bad?


that post got way funnier with Eliezer’s recent Twitter post about how “EAs developing more complex opinions on AI other than it’ll kill everyone is a net negative and cancelled out all the good they ever did”


the key assumption here is that these “superbabies” will naturally hold the “correct” moral values, which they will then program into a superintelligent AI system that elevates humanity into a golden age where we get to live in a techno-utopia amongst the stars.
which is pretty weird and has some uncomfortable implications
smart people are still capable of being pieces of shit. Eliezer’s whole “we need to focus everything on augmenting human intelligence” thing pretty much glosses over this. It only takes one group of superbabies/intelligence-augmented humans getting into some fascist shit for this to blow up in his face.


late reply, but yes, Eliezer has avoided hard dates because “predictions are hard”
the closest he’s gotten is his standing bet with Bryan Caplan that it’ll happen before 2030 (when I looked into this bet, Eliezer himself said he made it so he could “exploit Bryan’s amazing bet-winning ability and my amazing bet-losing ability” to ensure AGI doesn’t wipe everyone out before 2030). he said in a 2024 interview that if you put a gun to his head and forced him to give probabilities, “it would look closer to 5 years than 50” (unhelpfully vague, since it puts the ballpark at like 2-27 years), but he did say in a more recent interview that he thinks 20 years is starting to push it (possible, but he doesn’t think so)
So basically, no hard dates but “sooner rather than later” vagueness
LMFAOO THIS IS GOLD