
Posts: 51 · Comments: 1,232 · Joined: 2 yr. ago

  • For some reason, the news of Red Lobster's bankruptcy seems like a long time ago. I would have sworn that I read this story about it before the solar eclipse.

    Of course, the actual reasons Red Lobster is circling the drain are more complicated than a runaway shrimp promotion. Business Insider’s Emily Stewart explained the long pattern of bad financial decisions that spelled doom for the restaurant—the worst of all being the divestment of Red Lobster’s property holdings in order to rent them back on punitive leases, adding massive overhead. (As Ray Kroc knows, you’re in the real estate business!) But after talking to many Red Lobster employees over the past month—some of whom were laid off without any notice last week—what I can say with confidence is that the Endless Shrimp deal was hell on earth for the servers, cooks, and bussers who’ve been keeping Red Lobster afloat. They told me the deal was a fitting capstone to an iconic if deeply mediocre chain that’s been drifting out to sea for some time. [...] “You had groups coming in expecting to feed their whole family with one order of endless shrimp,” Josie said. “I would get screamed at.” She already had her share of Cheddar Bay Biscuit battle stories, but the shrimp was something else: “It tops any customer service experience I’ve had. Some people are just a different type of stupid, and they all wander into Red Lobster.”

  • Yeah, Krugman appearing on the roster surprised me too. While I haven't pored over everything he's blogged and microblogged, he hasn't raised any red flags that I recall. E.g., here he is in 2009:

    Oh, Kay. Greg Mankiw looks at a graph showing that children of high-income families do better on tests, and suggests that it’s largely about inherited talent: smart people make lots of money, and also have smart kids.

    But, you know, there’s lots of evidence that there’s more to it than that. For example: students with low test scores from high-income families are slightly more likely to finish college than students with high test scores from low-income families.

    It’s comforting to think that we live in a meritocracy. But we don’t.

    And in 2014:

    There are many negative things you can say about Paul Ryan, chairman of the House Budget Committee and the G.O.P.’s de facto intellectual leader. But you have to admit that he’s a very articulate guy, an expert at sounding as if he knows what he’s talking about.

    So it’s comical, in a way, to see [Paul] Ryan trying to explain away some recent remarks in which he attributed persistent poverty to a “culture, in our inner cities in particular, of men not working and just generations of men not even thinking about working.” He was, he says, simply being “inarticulate.” How could anyone suggest that it was a racial dog-whistle? Why, he even cited the work of serious scholars — people like Charles Murray, most famous for arguing that blacks are genetically inferior to whites. Oh, wait.

    I suppose it's possible that he was invited to an e-mail list in the late '90s and never bothered to unsubscribe, or something like that.

  • From the documentation:

    While reasoning tokens are not visible via the API, they still occupy space in the model's context window and are billed as output tokens.

    Huh.
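
    To put numbers on that "Huh": the hidden tokens do show up in the usage accounting, just not in the message text. A minimal sketch, assuming the OpenAI Python SDK and an o1-style reasoning model; the completion_tokens_details / reasoning_tokens field names are my assumption about how the usage object is shaped and may differ:

        # Sketch only: count the invisible "reasoning" tokens you were billed for.
        # Assumes the OpenAI Python SDK and an o1-style model; the
        # completion_tokens_details / reasoning_tokens fields are assumptions.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        resp = client.chat.completions.create(
            model="o1-mini",
            messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
        )

        usage = resp.usage
        details = getattr(usage, "completion_tokens_details", None)
        reasoning = (getattr(details, "reasoning_tokens", 0) or 0) if details else 0

        # The reasoning tokens never appear in resp.choices[0].message.content,
        # but they are counted (and billed) as output tokens.
        print("visible output tokens: ", usage.completion_tokens - reasoning)
        print("hidden reasoning tokens:", reasoning)
        print("billed output tokens:  ", usage.completion_tokens)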

  • The New Yorker gamely tries to find some merit, any at all, in the writings of Dimes Square darling Honor Levy. For example:

    In the story “Little Lock,” which portrays the emotional toll of having to always make these calculations, the narrator introduces herself as a “brat” and confesses that she can’t resist spilling her secrets, which she defines as “my most shameful thoughts,” and also as “sacred and special.”

    I'm really scraping the bottom of the barrel for extremely online ways to express the dull thud of banality here. "So profound, very wow"? "You mean it's all shit? —Always has been."

    She mixes provocation with needy propitiation

    Right-click thesaurus to the rescue!

    But the narrator’s shameful thoughts, which are supposed to set her apart, feel painfully ordinary. The story, like many of Levy’s stories, is too hermetically sealed in its own self-absorption to understand when it is expressing a universal experience. Elsewhere, the book’s solipsism renders it unintelligible, overly delighted by the music of its own style—the drama of its own specialness—and unable to provide needed context.

    So, it's bad. Are you incapable of admitting when something is just bad?

  • I often use prompts

    Well, there's your problem

  • and hot young singles in your area have a bridge in Brooklyn to sell

    on the blockchain

  • So many techbros have decided to scrape the fediverse that they all blur together now... I was able to dig up this:

    "I hear I’m supposed to experiment with tech not people, and must not use data for unintended purposes without explicit consent. That all sounds great. But what does it mean?" He whined.

  • When you don’t have anything new, use brute force. Just as GPT-4 was eight instances of GPT-3 in a trenchcoat, o1 is GPT-4o, but running each query multiple times and evaluating the results. o1 even says “Thought for [number] seconds” so you can be impressed how hard it’s “thinking.”

    This “thinking” costs money. o1 increases accuracy by taking much longer for everything, so it costs developers three to four times as much per token as GPT-4o.

    Because the industry wasn't doing enough climate damage already.... Let's quadruple the carbon we shit into the air!
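
    For what it's worth, the "running each query multiple times and evaluating the results" approach the quote describes is just best-of-n sampling. A minimal sketch of that idea (not a claim about o1's actual internals, which aren't public); generate_candidate and score_candidate are hypothetical stand-ins for a model call and a judge:

        # Sketch of best-of-n sampling: ask the same question n times, score each
        # answer, keep the best. Every candidate burns tokens, which is where the
        # 3-4x cost (and the extra carbon) comes from.
        import random
        from typing import Callable

        def best_of_n(
            prompt: str,
            generate_candidate: Callable[[str], str],      # hypothetical model call
            score_candidate: Callable[[str, str], float],  # hypothetical judge
            n: int = 4,
        ) -> str:
            candidates = [generate_candidate(prompt) for _ in range(n)]
            return max(candidates, key=lambda c: score_candidate(prompt, c))

        if __name__ == "__main__":
            # Toy demo with stand-in functions: "generation" picks a canned answer
            # at random and the "judge" just prefers longer answers.
            answers = ["42", "forty-two", "it depends on the question"]
            best = best_of_n(
                "What is the answer?",
                generate_candidate=lambda p: random.choice(answers),
                score_candidate=lambda p, c: float(len(c)),
                n=4,
            )
            print(best)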

  • (smashes imaginary intercom button) "Who is this 'some guy'? Find him and find out what he knows!!"

  • Elon Musk in the replies:

    Have you read Asimov’s Foundation books?

    They pose an interesting question: if you knew a dark age was coming, what actions would you take to preserve knowledge and minimize the length of the dark age?

    For humanity, a city on Mars. Terminus.

    Isaac Asimov:

    I'm a New Deal Democrat who believes in soaking the rich, even when I'm the rich.

    (From a 1968 letter quoted in Yours, Isaac Asimov.)

  • ... "Coming of Age" also, oddly, describes another form of novel cognitive dissonance; encountering people who did not think Eliezer was the most intelligent person they had ever met, and then, more shocking yet, personally encountering people who seemed possibly more intelligent than himself.

    The latter link is to "Competent Elites", a.k.a., "Yud fails to recognize that cocaine is a helluva drug".

    I've met Jurvetson a few times. After the first I texted a friend: “Every other time I’ve met a VC I walked away thinking ‘Wow, I and all my friends are smarter than you.’ This time it was ‘Wow, you are smarter than me and all my friends.’”

    Uh-huh.

    Quick, to the Bat-Wikipedia:

    On November 13, 2017, Jurvetson stepped down from his role at DFJ Venture Capital in addition to taking leave from the boards of SpaceX and Tesla following an internal DFJ investigation into allegations of sexual harassment.

    Not smart enough to keep his dick in his pants, apparently.

    Then, from 2006 to 2009, in what can be interpreted as an attempt to discover how his younger self made such a terrible mistake, and to avoid doing so again, Eliezer writes the 600,000 words of his Sequences, by blogging “almost daily, on the subjects of epistemology, language, cognitive biases, decision-making, quantum mechanics, metaethics, and artificial intelligence”

    Or, in short, cult shit.

    Between his Sequences and his Harry Potter fanfic, come 2015, Eliezer had promulgated his personal framework of rational thought — which was, as he put it, “about forming true beliefs and making decisions that help you win” — with extraordinary success. All the pieces seemed in place to foster a cohort of bright people who would overcome their unconscious biases, adjust their mindsets to consistently distinguish truth from falseness, and become effective thinkers who could build a better world ... and maybe save it from the scourge of runaway AI.

    Which is why what happened next, explored in tomorrow’s chapter — the demons, the cults, the hells, the suicides — was, and is, so shocking.

    Or not. See above, RE: cult shit.

  • Something tells me they’re not just slapping chatGPT on the school computers and telling kids to go at it; surely one of the parents would have been up-to-date enough to know it’s a scam otherwise.

    If people with money had that much good sense, the world would be a well-nigh unfathomably different place....

  • I actually don’t get the general hate for AI here.

    Try harder.

  • We have had readily available video communication for over a decade.

    We've been using "video communication" to teach for half a century at least; Open University enrolled students in 1970. All the advantages of editing together the best performances from a top-notch professor, moving beyond the blackboard to animation, etc., etc., were obvious in the 1980s when Caltech did exactly that and made a whole TV series to teach physics students and, even more importantly, their teachers. Adding a new technology that spouts bullshit without regard to factual accuracy is necessarily, inevitably, a backward step.