Posts 7 · Comments 679 · Joined 2 yr. ago

  • I know cryptocurrency people have a weirdly high tolerance for getting scammed and blaming the victim, but the twitter spam is constant now. You'd think they'd get tired of it at some point and switch to a platform that lets them moderate better.

  • So remember when Google Domains got sold off to Squarespace because it wasn't profitable enough and Google has the attention span of a squirrel?

    Well that meant bye bye MFA for anyone who didn't check their email diligently enough, allegedly leading to a number of cryptocurrency domains getting hacked.

    The cryptocurrency aspect is mostly just funny, but Google and Squarespace should know better than to effectively disable MFA out from under people. Tech companies put profit over people all the time. And then everyone blames the people for not being hyper-vigilant about computer security.


    Edit: The tweet linked in that bleepingcomputer article is funny if this was indeed the issue: https://twitter.com/pendle_fi/status/1811683909509558562

    Some "defi" company realized this could be a problem 22 hours before they were hacked. Even had time to write a tool to mitigate the impact of getting hacked. Got hacked anyway. Did they uhh... IDK change their password? Make sure MFA was set up? They don't say.

  • Quite likely yeah. There's no way they don't have a timeout on the backend.
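
    Something like this minimal sketch is what I mean by a backend timeout (purely hypothetical names and numbers, in Python): the server puts a hard deadline on the model call itself, so however long the frontend spinner keeps spinning, the backend isn't actually still working past that deadline.

      import asyncio

      async def call_model(prompt: str) -> str:
          # Stand-in for whatever actually generates the response;
          # here it simply hangs to simulate a stuck request.
          await asyncio.sleep(600)
          return "..."

      async def handle_request(prompt: str) -> str:
          try:
              # Server-side deadline: cancel the generation after 30 seconds,
              # regardless of what the client is showing.
              return await asyncio.wait_for(call_model(prompt), timeout=30)
          except asyncio.TimeoutError:
              return "Request timed out."

      print(asyncio.run(handle_request("hello")))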

  • Sloppy LLM programming? Never!

    In completely unrelated news I've been staring at this spinner icon for the past five minutes after asking an LLM to output nothing at all:

  • It's real, all of it. John Titor the time traveler? He's real. AI gods? We could build them.

    John Titor came back in time to stop the creation of a superintelligence. He does this by secretly founding, co-founding, or co-co-founding various Silicon Valley startups that don't actually do anything, but that sound good to venture capitalists with too much money.

    The money is secretly funneled to good causes like food banks, adopting puppies, and maintaining the natural habitat of burrowing owls. Thus averting the end of the world. Encultured AI is part of this plan. They do nothing-- for the good of the earth.

  • Here's what they write:

    AI alignment via the power of videogames:

    We're starting with a singular focus on video game development, because we think that will offer the best feedback loop for testing new AI models. Over the next decade or so, we expect an increasing number of researchers — both inside and outside our company — will transition to developing safety and alignment solutions for AI technology, and through our platform and products, we’re aiming to provide them with a rich and interesting testbed for increasingly challenging experiments and benchmarks.

    Healthcare pivot:

    Originally, when Encultured was founded as a gaming-oriented AI research company, our immediate goal was to make research progress on human–AI interaction that would ultimately benefit humanity well beyond the entertainment sector. Since then, we've considered healthcare as a likely next step for us after gaming.

    Couldn't find any details beyond that. Perhaps one of them read way too much Friendship is Optimal but didn't actually have any gaming chops, so they never got anywhere.

    EDIT: More details here: https://www.lesswrong.com/posts/ALkH4o53ofm862vxc/announcing-encultured-ai-building-a-video-game

  • Oh hey a blog: https://www.encultured.ai/blog.html -- Because of course they're a rationalist AI alignment / AI gaming startup pivoting to healthcare

    Part of their bold vision: AI agents that heal not just your cells, but also your society :')

    Our vision for 2027 and beyond remains similar, namely, the development of artificial general healthcare: Technological processes capable of repairing damage to a diverse range of complex systems, including human cells, organs, individuals, and perhaps even groups of people. Why so general? The multi-agent dynamical systems theory needed to heal internal conflicts such as auto-immune disorders may not be so different from those needed to heal external conflicts as well, including breakdowns in social and political systems.

    We don't expect to be able to control such large-scale systems, but we think healthy is the best word to describe our desired relationship with them: As a contributing member of a well-functioning whole.

    Translation: they don't know the first thing about healthcare but want the big US healthcare grifting dollars anyway.

  • What happens when your spurned ex is a devoted archivist, a Wikipedia administrator, and perhaps the most online man the world has ever known?

    I already thought he was cool, you don't have to sell me on it.

  • So the rationalists got this weird idea that good thinkers can deduce everything from first principles. Not just from experiments or literature review or looking at evidence, but by thinking real hard and LARPing at bayesian statistics.

    Thought leader Yudkowsky once wrote that a super-intelligence could deduce general relativity from three photos of a falling apple. They call themselves the cult of Bayes. There's lots of talk about epistemology, and about updating priors after seeing evidence, i.e. reading each other's blog posts.

    In short: you are looking into the abyss. Do take care.

    Self-named

    A god AI; they want to invent a nice one and avoid an evil one.

  • That doesn't imply cloud computing is a hard requirement, just that a server might be a requirement.

    In a different universe where the cloud / SaaS never took over the market, Cat-GTPurr could be distributed on mail-order Blu-ray discs or (in the worst case) a spinning drive or two, or downloaded once via bittorrent, and then hosted locally. The cost of such a distribution would be a rounding error for most big tech companies.

  • You can practically taste the frustration in the "prompt engineering" here. Just one more edge case bro, one more edge case and then the prompt will be perfect!

  • Case in point, or the exception that proves the rule: Is being a trans woman (or just low-T) +20 IQ?

    Warning: This post might be depressing to read for everyone except trans women.

    Actual warning: This post and its comments are a particularly bad example of rationalists being red-pilled sexists. Even by rationalist standards. Don't say I didn't warn you.


    But yeah this goes way back and is really enmeshed in their worldview. Robin Hanson has been blogging terrible takes about gender for almost 20 years on Overcoming Bias, which Lesswrong split off from.

  • Data and stats can tell you whatever story you want to promote

    Seen this so many times at my work. There's some bone-headed decision and the people in charge are like "look guys, we ran the numbers". But the methodology is messed up somehow, or they ignored or misinterpreted the numbers while pretending to follow the data, or it doesn't bear out in the real world, etc.

    When data and common sense disagree, you'd better be damn sure about the data.

  • When I eventually shuffle off this mortal coil (not any time soon don't worry!) I'll do so with a cell-phone in my hand, open to whatever god-forsaken replacement for the American healthcare system Silicon Valley dreams up.

    My family would find me lying there unconscious. The animated corporate mascot, looking slightly uncertain but still happy, would just repeat in a sing-song cartoonish voice "You might want to check your blood pressure!" and "You still haven't completed today's tasks for Health+ points! But don't worry, there's still time!"

    Eventually they'd manage to shut Healthy Bob up, but they'd keep getting birthday reminders from Pinstagrambook long after my passing, with no setting to turn them off.

  • Couldn't make it through the video, but this level of argument has been hashed and rehashed over literally the past thousand years. It's in the training data out the wazoo. And yet the youtube commenters act like it's the most insightful thing they've ever heard, and not copy-pasted from a mediocre philosophy 101 textbook.

    "each perspective is given its best possible representation."

    "This was a quite nuanced debate, better than what most humans are capable of."


    I feel like we are nearing to the end of the times. We humans are losing faith in ourselves… -- Hayao Miyazaki

  • Oh my god. The AI chatbots which were designed to mimic human writing are saying stuff exactly like the sci-fi stories I read online. They must be alive.

  • Tired: The earth is doomed due to climate change :(

    Wired: Ignore that stuff; the cosmos are at stake unless we burn our planet generating bad AI "poetry"

    Inspired: Oh wait, oh no. Oh no. This is where Vogon poetry came from, isn't it? Burn it all down.

  • The utter contempt doesn't cost extra? Sweet!

  • Me: Mom I want AI mayor!

    Mom: Shush now Saturn, we have AI mayor at home.

    AI mayor at home: