Super-recursive fractal of not-even-wrongness

A BlueSky thread by Mat G on Skyview

I went down a rabbit hole yesterday, looking into the super-recursive Wikipedia article we first sneered at here and then revisited in Tech Takes. I will regret it forever.
You can view my live descent into madness, but I reformatted the thread and added some content in a more reasonable form below. Note that there is plenty I didn't have the time/wine to sneer at, both in the article itself and in the amazing arXiv-published "paper" that serves as its underpinning.
Content warning: if you know anything about Turing machines you will want to punch Darko Roglic straight in the control unit near the end.
holy fuck this is the kind of craziness I was hoping someone would dig up (rants about the orthodoxy and all) when I realized the Wikipedia articles had some flat earth level shit in them. thank you for the great read! if there’s ever a sneercon, I owe you a few bottles of wine (or your choice of stronger alcohol)
sans the evolutionary computing and other nonsense, the theoretical core (if you can call it that) of this bullshit seems to be that you can ignore the halting problem if you don't halt: that implementing (practically) a step limit for your Turing machine is some kind of revolutionary step. I'm not much of a CS practitioner outside of my hobbies, but even I know that solves nothing, for all the reasons you explained eloquently in your post.

so it's kind of fucking amazing to me how frequently I see the opinion on the orange site, among the Rationalists, and even from the Urbit fascists (check out our urbit threads if you haven't already; those folks go deep on CS crankery, including the idea that their bullshit lambda calculus variant is somehow capable of modeling problems the original can't) that the halting problem is easily solved via workarounds and tricks like that. it's actually kinda scary how hard the Rationalists in particular try to reject the basics of CS (because they easily disprove their religious beliefs) and replace them with pseudoscience, and how much of this bullshit places like the orange site echo just because someone cited a crank paper or wikipedia article
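(An editorial aside, since the step limit is the load-bearing wall of the whole edifice: here's a minimal sketch of the "trick", using a toy Turing machine encoding I made up for illustration; nothing here comes from the article or the paper. The punchline is that a step cap turns "does it halt?" into "did it halt yet?", and "gave up" tells you nothing about what happens at step limit + 1.)

```python
# A toy single-tape Turing machine simulator with a step cap.
# Encoding (made up for this sketch): transitions map
# (state, symbol) -> (new_state, written_symbol, move), with "_" as blank.

def run_with_step_limit(tm, tape, state, limit):
    """Simulate `tm` for at most `limit` steps.

    Returns "halted" if the machine reaches the halt state in time,
    or "gave up" otherwise, which is NOT the same as "never halts".
    """
    tape = dict(enumerate(tape))  # sparse tape, blank by default
    head = 0
    for _ in range(limit):
        symbol = tape.get(head, "_")
        state, tape[head], move = tm[(state, symbol)]
        if state == "halt":
            return "halted"
        head += 1 if move == "R" else -1
    return "gave up"  # inconclusive: it might halt at step limit + 1

# A machine that scans right over its input and then halts.
# On an input of n ones it halts after n + 1 steps, so ANY fixed
# step limit misclassifies every input longer than the limit.
walker = {
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("halt", "_", "R"),
}

print(run_with_step_limit(walker, "1" * 5, "scan", limit=100))    # halted
print(run_with_step_limit(walker, "1" * 500, "scan", limit=100))  # gave up (but it does halt!)
```

A fixed cap misclassifies every machine that halts just after the budget runs out, which is why this is a timeout handler, not a halting oracle.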
I'm glad I found at least one person who enjoyed the rant; it makes me feel much better about wasting my brain cells on this nonsense.
"The halting problem is easily solved via workarounds" sounds like an Elon Musk tweet trying to reassure investors that self-driving is just around the corner you guys.
“these scientist eggheads told me it was impossible and a bad idea but I did it anyway” is the fantasy scenario for so much of the capitalist class. it’s fucking bizarre how much these folks hate CS theory, but they still require their workers to have a BS in CS or a similar field because they feel it increases their prestige (though going after anything higher than a BS is usually very heavily discouraged; I’ve had managers shrug and go “if you want to waste your money” when I mentioned wanting to further my education, and a lot of these bullshit moonshot projects tend to prefer, for all positions, fresh college grads who don’t know how to say no)
what. where did i miss this.
it’s been a minute, but I believe that was one of the replies you got in one of the orange site urbit threads, after it was pointed out that Nock is just lambda calculus with a bunch of bits glued on. that’s not an uncommon way to derive a functional language (it’s roughly how the ML family of languages originated), but yarvin claimed that Nock is much more efficient than lambda calculus (absolutely not, and that’s not even a high bar) and somehow revolutionary.

when challenged on the latter point, the urbit fans in the thread started claiming that Nock is capable of solving problems that lambda calculus can’t, via a very similar abuse of the Church-Turing thesis to what we’ve seen in this thread. it was pure crankery, but it being the orange site, I remember the crankery getting a bunch of upvotes
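(One last editorial aside: the "bits glued on" point is worth making concrete. Below is a minimal sketch, in plain Python rather than Nock and entirely my own illustration, of why bolting primitives onto lambda calculus buys you convenience, not power: the native-looking arithmetic the "extra bits" would provide can already be encoded with nothing but function application, via Church numerals.)

```python
# Church numerals: the number n is "apply f to x, n times".
# All of the arithmetic below is pure function application;
# no built-in integers are needed in the core calculus.

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: m(n(f))

def church(n):
    """Encode a Python int as a Church numeral (convenience at the edges)."""
    c = zero
    for _ in range(n):
        c = succ(c)
    return c

def unchurch(c):
    """Decode a Church numeral back to a Python int."""
    return c(lambda k: k + 1)(0)

print(unchurch(add(church(3))(church(4))))  # 7
print(unchurch(mul(church(3))(church(4))))  # 12
```

The same encoding games work for booleans, pairs, lists, and recursion, which is exactly why a combinator soup like Nock can't model anything lambda calculus can't: that's the Church-Turing point the urbit fans were mangling, not a loophole in it.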