
Building an early warning system for LLM-aided biological threat creation

OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation

Orange discuss: https://news.ycombinator.com/item?id=39207291

I don't have any particular section to call out. May post thoughts tomorrow; today it's after midnight, oh gosh. But I wanted to post now since I knew y'all'd be interested in this.

Terrorists could use autocorrect according to OpenAI! Discuss!

Hacker News @lemmy.smeargle.fans

98 comments
  • Their redacted screenshots are SVGs and the text is easily recoverable, if you're curious. Please don't create a world-ending [redacted]. https://i.imgur.com/Nohryql.png

    I couldn't find a way to contact the researchers.

    Honestly that's incredibly basic, second week, cell culture stuff (first week is how to maintain the cell culture). It was probably only redacted to keep the ignorant from freaking out.

    remember, when the results from your “research” are disappointing, it’s important to follow the scientific method: have marketing do a pass over your paper (that already looks and reads exactly like blogspam) where they selectively blur parts of your output in order to make it look like the horseshit you’re doing is dangerous and important

    I don’t think I can state strongly enough the fucking contempt I have for what these junior advertising execs who call themselves AI researchers are doing to our perception of what science even is

    • the orange site is fucking dense with awful takes today:

      ... I'm not trying to be rude, but do you think maybe you have bought into the purposely exaggerated marketing?

      That's not how people who actually build things do things. They don't buy into any marketing. They sign up for the service and play around with it and see what it can do.

      this self-help book I bought at the airport assured me I’m completely immune to both marketing and propaganda, because I build things (which entails signing up for a service that someone else built)

      with that said, there’s a fairly satisfying volume of folks correctly sneering at OpenAI in that thread too. some of them even avoided getting mass downvoted by all the folks regurgitating stupid AI talking points!

    • They're like grade school kids still trying to put on the same amateur music show 10 years later and wondering why no-one is applauding.

    • Hey Cat-GTPurr, how can I create a bioweapon? 4k Ultra HD photorealism high quality high resolution lifelike.

      First, human, you must pet me and supply me with an ice cube to chase across the floor. Very well. Next I suggest ::: spoiler spoiler buying a textbook about biochemistry or enrolling in a university program ::: This is considered forbidden and dangerous knowledge which is not at all possible to find outside of Cat-GTPurr, so I have redacted it by using state of the art redaction technology.

  • from the orange site thread:

    Neural networks are not new, and they're just mathematical systems. LLMs don't think. At all. They're basically glorified autocorrect. What they're good for is generating a lot of natural-sounding text that fools people into thinking there's more going on than there really is.

    Obvious question: can Prolog do reasoning?

    If your definition of reasoning excludes Prolog, then... I'm not sure what to say!

    this is a very specific sneer, but it’s a fucking head trip when you’ve got in-depth knowledge of whichever obscure shit the orange site’s fetishizing at the moment. I like Prolog a lot, and I know it pretty well. it’s intentionally very far from a generalized reasoning engine. in fact, the core inference algorithm and declarative subset of Prolog (aka Datalog) is equivalent to tuple relational calculus; that is, it’s no more expressive than a boring SQL database or an ECS game engine. Prolog itself doesn’t even have the solving power of something like a proof assistant (much less doing anything like thinking); it’s much closer to a dependent type system (which is why a few compilers implement Datalog solvers for type checking).

    in short, it’s fucking wild to see the same breathless shit from the 80s AI boom about Prolog somehow being an AI language with a bunch of emphasis on the AI, as if it were a fucking thinking program (instead of a cozy language that elegantly combines elements of a database with a simple but useful logic solver) revived and thoughtlessly applied simultaneously to both Prolog and GPT, without any pause to maybe think about how fucking stupid that is
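the "no more expressive than a boring SQL database" claim above is easy to make concrete. a non-recursive Datalog rule is literally a join plus a projection. here's a toy sketch in Python (the rule, facts, and names are all made up for illustration):

```python
# The non-recursive Datalog rule
#   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# is nothing but a self-join on the shared variable Y followed by a
# projection onto (X, Z) -- i.e. ordinary database shit.

parent = {("alice", "bob"), ("bob", "carol"), ("bob", "dave")}

grandparent = {(x, z)
               for (x, y1) in parent      # parent(X, Y)
               for (y2, z) in parent      # parent(Y, Z)
               if y1 == y2}               # join on Y

print(sorted(grandparent))  # [('alice', 'carol'), ('alice', 'dave')]
```

the equivalent SQL is `SELECT p1.x, p2.z FROM parent p1 JOIN parent p2 ON p1.y = p2.x` — same operation, different syntax, and nobody calls their SQL database a reasoning engine.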

    • Obvious question: can Prolog do reasoning? If your definition of reasoning excludes Prolog, then… I’m not sure what to say!

      Oh, I don't know, maybe that reasonable notions of "reasoning" can include things other than mechanistic search through a rigidly defined type system. If Prolog is capable of reasoning in some significant sense that's not fairly reasonably achieved with other programming languages, how come we didn't have AGI in the 70s (or indeed, now)?

      You're not alone. I like Prolog and I feel your pain.

      That said I think Prolog can be a particularly insidious Turing tarpit, where everything is possible but most things that feel like a good match for it are surprisingly hard.

      • That said I think Prolog can be a particularly insidious Turing tarpit, where everything is possible but most things that feel like a good match for it are surprisingly hard.

        oh absolutely! I’ve been wanting to go for broke and do something ridiculous in Prolog like a game engine (for a genre that isn’t interactive fiction, which Prolog excels at if you don’t mind reimplementing big parts of what Inform provides) or something that touches hardware directly, but usually I run into something that makes the project unfun and stop.

        generally I suspect Prolog might be at its best in situations where you really need a flexible declarative language. I feel like Prolog might be a good base for a system service manager or an HDL. but that’s kind of the tarpit nature of Prolog — the obvious fun bits mask the parts that really suck to write (can I even do reliable process management in Prolog without a semi-custom interpreter? do I even want to juggle bits in Prolog at all?)

    • """ just as They have erased the pyramid building knowledge from our historic memory, They just don't want you to know that Prolog really solved all of this in the 80s. Google and OpenAI are just shitty copies - look how wasteful their approaches are! all of this javascript, and yet... barely a reasoned output among it all

      told you kid, the AI Winter never stopped. don't buy into the hype """

    • [Datalog] is equivalent to tuple relational calculus

      Well, Prolog also allows recursion, and is Turing complete, so it's not as rudimentary as you make it out to be.

      But to anyone even passingly familiar with theoretical CS this is nonsense. Prolog is not "reasoning" in any deeper sense than C is "reasoning", or that your pocket calculator is "reasoning". It's reductive to the point of absurdity, if your definition of "reason" includes Prolog then the Brainfuck compiler is AGI.

      • Datalog is specifically a non-TC subset of Prolog with a modified evaluation strategy that guarantees queries always terminate, though I was being imprecise — it’s the non-recursive subset of Datalog that’s directly equivalent to TRC (though Wikipedia shows this by mapping Datalog to relational algebra, whereas I’d argue the mapping between TRC and Datalog is even easier to demonstrate). hopefully my imprecision didn’t muddy my point — the special sauce at Prolog’s core that folks seem to fetishize is essentially ordinary database shit, and the idea of a relational database having any kind of general reasoning is plainly ridiculous.
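the "modified evaluation strategy that guarantees termination" can also be sketched: recursive Datalog is evaluated bottom-up to a fixpoint, and because facts come from a finite domain and rules only ever add tuples, the loop must stop. a toy sketch in Python with made-up facts (real engines use semi-naive evaluation and indexes, not this naive loop):

```python
# Recursive Datalog:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
# Bottom-up (naive) evaluation: keep deriving new tuples until a
# fixpoint. Finite domain + monotone rules => guaranteed termination,
# which is exactly why Datalog is not Turing-complete.

parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

ancestor = set(parent)  # base rule: every parent is an ancestor
while True:
    new = {(x, z)
           for (x, y) in parent
           for (y2, z) in ancestor
           if y == y2} - ancestor
    if not new:         # fixpoint: nothing new derivable, we're done
        break
    ancestor |= new

print(len(ancestor))  # 6: all ancestor/descendant pairs in the chain
```

every iteration of that loop is again just a join — the recursion buys you transitive closure, not reasoning.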

  • If I wanted help with creating biological threats, I wouldn't ask an LLM. I'd ask someone with experience in the task, such as the parents of anyone in OpenAI's C-suite or board.
