At the same time, most participants felt the LLMs did not succeed as a creativity support tool, producing bland and biased comedy tropes akin to "cruise ship comedy material from the 1950s, but a bit less racist".
The phrasing "a bit less racist" suggests a nonzero level of racism in the output, yet the participants also complain that the censorship makes the bot refuse to discuss sensitive topics. Sounds like these LLMs can only be boringly racist.
Spam machines are only ever funny or interesting by accident. The more they smooth out the wrinkles, the more creatively useless they become. The tension is sort of fascinating.
Like I've always been interested in generative poetry and other manglings of text, and ChatGPT's so fucking dull compared to putting a sentence through Babelfish a few times.
Before the big AI boom, I actually did a project where I used InferKit to generate text for the comedy factor, because the unhinged nightmare garbage it spit out was extremely entertaining. I just can't imagine using ChatGPT in the same way; it's so boring.
Read through the paper looking for sample jokes, found none. :(
But this was the issue with the George Carlin bit that was online. I listened to it, it was a reasonable approximation of George's voice and intonation.
But when it got to the part where it said "I think we can all agree, there's one comedian better off as AI... Bill Cosby," I went "OK, AI did not write that." AI doesn't get subversion.
The adverse impacts section was just the comedians saying “we’ve already lost friends, everyone hates us” but the conclusion was “here’s how comedians should use our tool.”
I can imagine a comedian using an LLM to check if a joke or punchline has been done before, but that would require the LLM to actually work and give accurate information. Also if you are a comedian using an LLM, you probably don’t actually care about whether or not you are plagiarising someone, so I guess this is all moot.
My favorite LLM move is when you ask for a source for their last response, and instead of saying they aren't capable of providing one, they just invent fictitious URLs.
I've been experimenting with creative writing tools with a bunch of writer friends, and the setup described in this paper is frankly shit. I mean they come up to ChatGPT on v3.5 (or Bard lmao) and expect it to write comedy? Jeez, talk about setting yourself up for failure. That's like walking up to a junior screenwriter and yelling "GIVE ME A JOKE" at them. I don't understand why people keep repeating that mistake; they design experiments where they expect the model to be the source of creativity, but that's just stupid.
If you want output that is not entirely mediocre, you need something like a Dramatron architecture, where you decouple the various tasks (fleshing out characters, outlining at the episode level, outlining at the scene level, writing dialogue, etc.) and maintain internal memory of what is being worked on. It is non-trivial to set up, but it gets there sometimes; even the authors of this paper recognize that this would probably have produced better results. You also need a user able to provide good ideas that the model can work with; you can't expect the good creative stuff to come from the robot.
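To make the "decouple the tasks + keep memory" point concrete, here's a rough Python sketch of the shape I mean. complete() is a hypothetical stand-in for whatever model API you're calling, and this is not Dramatron's actual code, just the prompt-chaining idea:

    # Rough shape of a staged pipeline: each step has its own prompt, and
    # earlier outputs are carried forward as shared "memory" so later steps
    # stay consistent instead of improvising from scratch.
    def complete(prompt: str) -> str:
        """Hypothetical stand-in for whatever model API you're calling."""
        return "..."

    def write_episode(premise: str) -> dict:
        memory = {"premise": premise}

        # 1. Flesh out characters from the premise.
        memory["characters"] = complete(
            f"Premise: {premise}\nList the main characters, one comic flaw each.")

        # 2. Outline at the episode level, conditioned on premise + characters.
        memory["outline"] = complete(
            f"Premise: {premise}\nCharacters: {memory['characters']}\n"
            "Write a beat-by-beat episode outline.")

        # 3. Break the episode outline into scene-level outlines.
        memory["scenes"] = complete(
            f"Outline: {memory['outline']}\nSplit this into scenes with goals and settings.")

        # 4. Write dialogue scene by scene, re-feeding the shared memory each time.
        memory["dialogue"] = complete(
            f"Characters: {memory['characters']}\nScenes: {memory['scenes']}\n"
            "Write the dialogue for scene 1.")

        return memory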
Instinctively I'd say you have to treat the model like your own junior writer, and how do you make a junior writer useful? By teaching them to "yes, and" in a writing room with better writers (in this case, the user). In that context, with a good experienced user at the helm, it can definitely bring value. Nothing groundbreaking, but I can see how a refined version of this could help, notably with consistency, story beats, pacing, the boring stuff. GPTs are better critics than they are writers anyway.
That being said, I never really pursued "pure comedy" on LLMs, as it sounds like a lost battle. In my mind it's kind of like tickling: if a machine pokes your ribs you don't get the tickles; that only works when a human does it. I doubt they can fix that in the short or mid term.
I don't understand why you're getting downvoted. You should read the room, delete your posts, and leave forever. Then you wouldn't be getting downvoted.
No, I'm saying comedy (as in writing your jokes for you) is not something you should expect from language models. As a general rule, there is no tool that will make you a good writer, only (potentially) tools that can help you do more with your qualities as a writer. But it will never be funnier or more talented than you are.
That's why I personally experiment with writing tools. Writing standup is one thing, but imagine you're writing a sitcom or any form of serialized work. That's a lot of fucking work, and obviously if you're starting out you can't exactly afford to pay for assistant writers to do the menial labour that comes with it. Language models can come in handy in that scenario, but again you can't expect them to be the genius in the room; if you want a good show you have to bring the good ideas and the funnies. It's a power tool, and power tools don't draw the plans for the house; they just grind where you need grinding.