[Attached image: screenshot of the ChatGPT conversation]
ChatGPT's language model fails entirely when the scenario involves a man being a nurse. (h/t @nilsreiter@social.cologne)
While I am fascinated by the power of chatbots, I always make a point of reminding people of their limitations. This is the screenshot I'll show everyone now.
This doesn’t seem very damning. I tried it with GPT-4 and it’s still wrong at first, but it gets it right once it’s established who the chancellor actually is.
Imagine asking a human this question. Don’t you think that most people would make the same assumption? ChatGPT is simply picking up on our human bias.
Also, this whole dialog is a contrived gotcha. If you ask real questions and are mindful of the implicit biases you may be encoding, you’re going to get great results.