Skip Navigation

Posts
16
Comments
484
Joined
2 yr. ago

  • The comments are something else alright:

    The part about kids is wrong. Per Aella's survey, many people report having masturbated well before puberty. Breeding and pregnancy were actually one of my earliest kinks when I was 7.

  • Base open source model just means some company commanding a great deal of capital and compute made the weights public to fuck with LLMaaS providers it can't directly compete with yet, it's not some guy in a garage training and RLFH them for months on end just to hand the result over to you to fine tune for writing caiaphas cain fanfiction.

  • That's some wildly disingenuous goal post moving when describing what was meant to be The Future of Finance™ at the time.

    Like saying yeh, AGI was a pipedream and there's no disruption of technical professions to be seen anywhere, but you can't deny LLMs made it way easier for bad actors to actively fuck with elections, and the people posting autogenerated youtube slop 5.000 times a day sure did make some legitimate ad money.

  • Oh no, the premise of money and capitalism, my only weakness.

  • So many low-hanging fruits. Unbelievable fruits. You wouldn’t believe how low they’re hanging.

  • Zero interest rate period, when the taps of investor money were wide open and spraying at full volume because literally any investment promising some sort of return was a better proposition than having your assets slowly diminished by e.g. inflation in the usually safe investment vehicles.

    Or something to that effect, I am not an economist.

  • I can never tell, is there an actual 'experiment' taking place with an LLM-backend agent actually trying stuff on a working vm or are they just prompting a chatbot to write a variation of a story (or ten, or a million) about what it might have done given these problem parameters?

  • ::: spoiler 23-2 Leaving something to run for 20-30 minutes expecting nothing and actually getting a valid and correct result: new positive feeling unlocked.

    Now to find out how I was ideally supposed to solve it. :::

  • If nothing else, you've definitely stopped me forever from thinking of jq as sql for json. Depending on how much I hate myself by next year I think I might give kusto a shot for AOC '25

  • I mean, you could have answered by naming one fabled new ability LLM's suddenly 'gained' instead of being a smarmy tadpole, but you didn't.

  • Slate Scott just wrote about a billion words of extra rigorous prompt-anthropomorphizing fanfiction on the subject of the paper, he called the article When Claude Fights Back.

    Can't help but wonder if he's just a critihype enabling useful idiot who refuses to know better or if he's being purposefully dishonest to proselytize people into his brand of AI doomerism and EA, or if the difference is meaningful.

    edit: The claude syllogistic scratchpad also makes an appearance, it's that thing where we pretend that they have a module that gives you access to the LLM's inner monologue complete with privacy settings, instead of just recording the result of someone prompting a variation of "So what were you thinking when you wrote so and so, remember no one can read what you reply here". Que a bunch of people in the comments moving straight into wondering if Claude has qualia.

  • Rationalist debatelord org Rootclaim, who in early 2024 lost a $100K bet by failing to defend covid lab leak theory against a random ACX commenter, will now debate millionaire covid vaccine truther Steve Kirsch on whether covid vaccines killed more people than they saved, the loser gives up $1M.

    One would assume this to be a slam dunk, but then again one would assume the people who founded an entire organization about establishing ground truths via rationalist debate would actually be good at rationally debating.

  • It's useful insofar as you can accommodate its fundamental flaw of randomly making stuff the fuck up, say by having a qualified expert constantly combing its output instead of doing original work, and don't mind putting your name on low quality derivative slop in the first place.

  • And all that stuff just turned out to be true

    Literally what stuff, that AI would get somewhat better as technology progresses?

    I seem to remember Yud specifically wasn't that impressed with machine learning and thought so-called AGI would come about through ELIZA type AIs.