
  • This was an excellent read if you're aware of the emails but never bothered to read his citations or to dig into what the blather about object-level and meta-level problems was specifically about, which is presumably most people.

    So, a deeper examination of the email paints 2014 Siskind as a pretty run-of-the-mill race realist who's really into "black genes are dumber, you guys" studies, and who thinks that higher education institutions not taking those studies seriously means they are deeply broken and untrustworthy, especially on anything to do with pushing back against racism and sexism. Oh, and he is also very worried that immigration may destroy the West, or at least he gently urges you to get up to speed with articles coincidentally pushing that angle and to draw your own conclusions based on pure reason.

    Also it seems that in private he takes seriously stuff he has already debunked in public, which makes it basically impossible to ever take anything he writes in good faith.

  • Plus he's gay so if he dies hell awaits, or so the evangelical worldview tends to go.

  • It's like a one-and-a-half-page article that also comes in audio and video form, don't be lazy.

  • Oh no, you must have missed the surprise incelism, let me fix that:

    And as the world learned a decade ago, I was able to date, get married, and have a family, only because I finally rejected what I took to be the socially obligatory attitude for male STEM nerds like me—namely, that my heterosexuality was inherently gross, creepy, and problematic, and that I had a moral obligation never to express romantic interest to women.

  • Modern "move money between pockets for profit" economics seems to give The Hitchhiker's Guide's bistromathics a run for its money.

  • I wonder what this means for US GDP

    Don't worry, unchecked inflation and increasing housing costs will keep the GDP propped up at least for a while longer.

  • Zitron taking every opportunity to shit on Scott's AI2027 is kind of cathartic, ngl

  • He has capital-L Lawfulness concerns. About the parent and the child being asymmetrically skilled in context engineering. Which apparently is the main reason kids shouldn't trust LLM output.

    Him showing his ass with the memory comment is just a bonus.

  • I feel dumber for having read that, and not in the intellectually humbled way.

  • This hits differently in light of the recent news that ChatGPT encouraged and aided a teen's suicide.

  • Not who you asked, but both Python and JavaScript have code smell as a core language feature, and we are stuck with them by accident of history, not because anyone in particular thought it would be such a great idea for them to overshoot their original purpose to such a comical degree.

    Also, there's a long history of languages meant as an introduction to coding being spun off into ridiculously verbose enterprise equivalents that everyone then had to deal with (see Delphi and Visual Basic), so there's certainly a case for refusing to cede any more ground to dollar-store editions of useful stuff under the guise of education.

  • AI innovation in this space usually means automatically adding stuff to the model's context.

    It probably started out as just appending the failed build output to the context on every iteration, but it's entirely possible to feed the LLM debugger data from a runtime crash and hope something usable happens (rough sketch of the loop below).
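
    Just to illustrate the mechanism, here's a minimal sketch of what such a loop might look like in Python; ask_llm, the build command handling, and the retry cap are made-up placeholders for illustration, not any particular vendor's API:

    ```python
    import pathlib
    import subprocess


    def ask_llm(context: str) -> str:
        """Stand-in for whatever chat-completion call the tool actually makes."""
        raise NotImplementedError("plug in an actual LLM client here")


    def fix_until_it_builds(path: pathlib.Path, build_cmd: list[str], max_iters: int = 5) -> bool:
        """Run the build; on failure, append the error output to the context and retry."""
        context = f"Make this file build cleanly:\n{path.read_text()}\n"
        for _ in range(max_iters):
            result = subprocess.run(build_cmd, capture_output=True, text=True)
            if result.returncode == 0:
                return True  # build passed, declare victory
            # the "innovation": shovel the failed build output back into the context
            context += f"\nBuild failed with:\n{result.stderr}\n"
            path.write_text(ask_llm(context))  # hope something usable happens
        return False  # out of retries
    ```

    The runtime-crash variant is the same loop with a stack trace or debugger output appended instead of compiler errors.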

  • When I was at computer toucher school around the start of the century, the things taught under the AI moniker were (I think) fuzzy logic, incremental optimization and graph algorithms, and neural networks.

    AI is a sci-fi trope far more than it ever was a well-defined research topic.

  • Anyone who said this about their product would almost certainly be lying, but these guys are extra lying.

    For sure, a blockchain-based agentic LLM that learns as it goes sounds like someone describing a flying elephant wearing an inflatable life jacket.

  • Nobody's using datasets made of copyrighted literature and 4chan to teach robots how to move, what are you even on about.

  • Risk checks for financial services: $1M saved annually on outsourced risk management

    Since I doubt they had time to use the tools for a full year, this is probably just the one month's ~$83K ($1M / 12) they saved by firing or ending partnerships with the humans involved in risk assessment, multiplied by twelve.

    In the long run I'm betting that exclusively using software that not only can't do basic math but actually treats numbers as words for risk assessment isn't going to be a net positive for their bottom line, especially if their customers also get it in their heads that they could ditch the middleman and directly use a chatbot themselves.