Posts: 44 · Comments: 1,350 · Joined: 2 yr. ago

  • The artillery branch of most militaries has long been a haven for the more brainy types. Napoleon was a gunner, for example.

  • Oh, but LW has the comeback for you in the very first paragraph:

    Outside of niche circles on this site and elsewhere, the public's awareness about AI-related "x-risk" remains limited to Terminator-style dangers, which they brush off as silly sci-fi. In fact, most people's concerns are limited to things like deepfake-based impersonation, their personal data training AI, algorithmic bias, and job loss.

    Silly people! Worrying about problems staring them in the face, instead of the future omnicidal AI that is definitely coming!

  • LessWronger discovers that the great unwashed masses, who inconveniently still indirectly affect policy through outmoded concepts like "voting" instead of writing blogs, might need some easily digested media pablum to be convinced that Big Bad AI is gonna kill them all.

    https://www.lesswrong.com/posts/4unfQYGQ7StDyXAfi/someone-should-fund-an-agi-blockbuster

    Cites such cultural touchstones as "The Day After Tomorrow", "An Inconvenient Truth" (truly a GenZ hit), and "Slaughterbots", which I've never heard of.

    Listen to the plot summary:

    • Slowburn realism: The movie should start off in mid-2025. Stupid agents. Flawed chatbots, algorithmic bias. Characters discussing these issues behind the scenes while the world is focused on other issues (global conflicts, Trump, celebrity drama, etc.). [ok so basically LW: the Movie]
    • Explicit exponential growth: A VERY slow build-up of AI progress such that the world only ends in the last few minutes of the film. This seems very important to drill home the part about exponential growth. [ah yes, exponential growth, a concept that lends itself readily to drama]
    • Concrete parallels to real actors: Themes like "OpenBrain" or "Nole Tusk" or "Samuel Allmen" seem fitting. ["we need actors to portray real actors!" is genuine Hollywood film talk]
    • Fear: There's a million ways people could die, but featuring ones that require the fewest jumps in practicality seem the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure. [so basically people will watch a conventional thriller except in the last few minutes everyone dies. No motivation. No clear "if we don't cut these wires everyone dies!"]

    OK so what should be shown in the film?

    compute/reporting caps, robust pre-deployment testing mandates (THESE are all topics that should be covered in the film!)

    Again, these are the core components of every blockbuster. I can't wait to see "Avengers vs the AI" where Captain America discusses robust pre-deployment testing mandates with Tony Stark.

    All the cited URLs in the footnotes end with "utm_source=chatgpt.com". 'nuff said.

  • At this point in time, having a substack is in itself a red flag.

  • The targets are informed, via a grammatically invalid sentence:

    Sam Kriss (author of the ‘Laurentius Clung’ piece) has posted a critique. I don’t think it’s good, but I do think it’s representative of a view that I ever encounter in the wild but haven’t really seen written up.

    FWIW the search term 'Laurentius Clung' gets no hits on LW, so I'm left to assume everyone there is also Extremely Online on Xitter and instantly knows the reference.

    https://www.lesswrong.com/posts/3GbM9hmyJqn4LNXrG/yams-s-shortform?commentId=MzkAjd8EWqosiePMf

  • Remember FizzBuzz? That was originally a simple filter exercise someone recruiting programmers came up with to weed out candidates with multi-year CS degrees but zero actual programming experience.
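
    For anyone who missed that era, a minimal sketch of the kind of thing candidates were asked to write (Python here, purely illustrative; there's no canonical version):

      # FizzBuzz: print 1..100, substituting "Fizz" for multiples of 3,
      # "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
      for n in range(1, 101):
          if n % 15 == 0:
              print("FizzBuzz")
          elif n % 3 == 0:
              print("Fizz")
          elif n % 5 == 0:
              print("Buzz")
          else:
              print(n)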

  • The argument would be stronger (not strong, but stronger) if he could point to an existing numbering system that is little-endian and somehow show it's better.
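
    For the unfamiliar: a little-endian numeral puts the least significant digit first, so 1234 would be written "4321". A throwaway sketch (Python, purely illustrative):

      def to_little_endian(n: int, base: int = 10) -> str:
          """Render a non-negative integer with its least significant digit first."""
          digits = []
          while True:
              n, d = divmod(n, base)
              digits.append(str(d))
              if n == 0:
                  break
          return "".join(digits)

      print(to_little_endian(1234))  # -> "4321"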

  • So here's a poster on LessWrong, ostensibly the space to discuss how to prevent people from dying of stuff like disease and starvation, "running the numbers" on a Lancet analysis of the USAID shutdown and, having failed to replicate its projection of millions of resulting deaths, basically concluding it's not so bad?

    https://www.lesswrong.com/posts/qgSEbLfZpH2Yvrdzm/i-tried-reproducing-that-lancet-study-about-usaid-cuts-so

    No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other gov expenditures, nor the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering, just run the numbers the right way!

    Edit: ah, it's the dude who tried to prove that most Catholic cardinals are gay because heredity; I think I highlighted that post previously here. Definitely a high-sneer vein to mine.

  • No replies and somehow that screen name just screams "troll" to me.

    Not that I really care, git can go DIAF as far as I'm concerned.

  • janitorai - which seems to be a hosting site for creepy AI chats - is blocking all UK visitors due to the OSA

    https://blog.janitorai.com/posts/3/

    I'm torn here: the OSA seems to me to be massive overreach, but perhaps shielding limeys from AI is worth it.

  • Here's an example of normal people using Bayes correctly (rationally assigning probabilities and acting on them) while rats Just Don't Get Why Normies Don't Freak Out:

    For quite a while, I've been quite confused why (sweet nonexistent God, whyyyyy) so many people intuitively believe that any risk of a genocide of some ethnicity is unacceptable while being… at best lukewarm against the idea of humanity going extinct.

    (Dude then goes on to try to game-theorize this, I didn't bother to poke holes in it)

    The thing is, genocides have happened, and people around the world are perfectly happy to advocate for them in diverse situations. Probability-wise, the risk of genocide somewhere is very close to 1, while the risk of "omnicide" is much closer to zero. If you want to advocate for eliminating something, working to eliminate the risk of genocide is much more rational than working to eliminate the risk of everyone dying.

    At least one commenter gets it:

    Most people distinguish between intentional acts and shit that happens.

    (source)

    Edit: never read the comments (again). The commenter referenced above obviously didn't feel like a pithy one-liner adhered to the LW ethos, and instead added an addendum wondering why people were more upset about police brutality killing people than about traffic fatalities. Nice "save", dipshit.

  • yeah but have you considered how much it's worth that gramma can vibecode a todo app in seconds now???

  • Haven't really kept up with the pseudo-news of VC-funded companies acquiring each other, but it seems Windsurf (previously courted by OpenAI) is now gonna be purchased by the bros behind Devin.

  • I found out about that too when I arrived at Reddit and it was translated to Swedish automatically.

  • This isn't an original thought, but a better point of comparison for the ideology (such as it is) of the current USG is not Nazi Germany but pre-war US right-wing obsessions: anti-FDR and anti-New Deal.

    This appears in weird ways, like this throwaway comment regarding the Niihau incident, where two ethnic Japanese inhabitants of Niihau helped a downed Japanese airman immediately after Pearl Harbor.

    Imagine, if you will, one of the 9/11 hijackers parachuting from the plane before it crashed, asking a random muslim for help, then having that muslim be willing to immediately get himself into shootouts, commit arson, kidnappings, and misc mayhem.

    Then imagine that it was covered in a media environment where the executive branch had been advocating for war for over a decade, and voices which spoke against it were systematically silenced.

    (src)

    Dude also credits LessOnline with saving his life due to unidentified <<ethnics>> shooting up his 'hood when he was there. Charming.

    Edit: nah, he's a neo-Nazi (or at least very concerned about the fate of German PoWs after WW2):

    https://www.lesswrong.com/posts/6BBRtduhH3q4kpmAD/against-that-one-rationalist-mashal-about-japanese-fifth?commentId=YMRcfJvcPWbGwRfkJ

  • NotAwfulTech @awful.systems

    Advent of Code 2024 - the home stretch - it's been an aMAZEing year

    NotAwfulTech @awful.systems

    Advent of Code Week 3 - you're lost in a maze of twisty mazes, all alike

    NotAwfulTech @awful.systems

    Advent of Code 2024 Week 2: this time it's all grids, all the time

    TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 1 September 2024

    Buttcoin @awful.systems

    Person who exercises her free association rights at conferences incites ire in Jameson Lopp

    Buttcoin @awful.systems

    Butters do a 180 regarding statism as Daddy Trump promises to use filthy Fed FIAT to buy and hodl BTC

    Buttcoin @awful.systems

    Martin Shkreli claims to have been behind a Donald Trump memecoin

    Buttcoin @awful.systems

    In an attempt to secure the libertarian vote, Trump promises to pardon Dread Pirate Roberts (while calling for the death penalty for other drug dealers)

    TechTakes @awful.systems

    Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher

    TechTakes @awful.systems

    Turns out that the basic mistakes spider runners fixed in the late 90s are arcane forgotten knowledge to our current "AI" overlords

    TechTakes @awful.systems

    AI grifters con the US gov that AGI poses "existential risk"

    TechTakes @awful.systems

    "The Obscene Energy Demands of A.I." - hackernews discussion

    TechTakes @awful.systems

    Elon Musk’s legal case against OpenAI is hilariously bad

    TechTakes @awful.systems

    Some interesting tidbits in this ElReg story about "AI Dean Phillips"

    bless this jank @awful.systems

    cannot login using mobile Firefox

    NotAwfulTech @awful.systems

    Looking for: random raytracing program

    NotAwfulTech @awful.systems

    The official awful.systems Advent of Code 2023 thread

    NotAwfulTech @awful.systems

    Any interest in an Advent of Code thread?

    TechTakes @awful.systems

    ScottA is annoyed EA has a bad name now

    TechTakes @awful.systems

    We don't even have Universal Basic Income yet but libertarians are already arguing it's too large