In response, the guidelines regulate the labeling of AI-generated online content throughout its production and dissemination processes, requiring providers to add visible marks to their content in appropriate locations.
My understanding is that this is meant more as a set of legal guidelines than a technical mandate. I'm not a legal scholar, but since China has a history of enforcing information-related laws, I'd assume they can "legally" enforce it.
On the technical side, there is a subfield of LLM research that focuses on "watermarking", i.e. making LLM-generated outputs reliably identifiable after the fact, so in theory it might be enforceable (rough sketch of the idea below).
As for whether it will actually be enforced in practice... who knows (facepalm).
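To make the watermarking point concrete, here's a minimal toy sketch of the "green-list" style scheme that line of research proposes (in the spirit of Kirchenbauer et al.'s "A Watermark for Large Language Models"): the generator secretly nudges sampling toward a pseudo-random subset of the vocabulary at each step, and a detector that knows the seeding rule checks whether that subset shows up more often than chance. Everything below (vocabulary size, bias strength, hashing scheme) is made up for illustration and assumes you control the model's sampling step; it is not how any specific provider actually implements labeling.

```python
import hashlib
import math
import random

VOCAB_SIZE = 1000   # toy vocabulary; real tokenizers have tens of thousands of tokens
GREEN_FRACTION = 0.5
BIAS = 4.0          # logit boost applied to "green" tokens during generation

def green_list(prev_token: int) -> set[int]:
    """Pseudo-random half of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    ids = list(range(VOCAB_SIZE))
    rng.shuffle(ids)
    return set(ids[: int(GREEN_FRACTION * VOCAB_SIZE)])

def sample_watermarked(logits: list[float], prev_token: int) -> int:
    """Sample the next token after boosting green-list logits (softmax sampling)."""
    greens = green_list(prev_token)
    boosted = [l + BIAS if i in greens else l for i, l in enumerate(logits)]
    m = max(boosted)
    weights = [math.exp(l - m) for l in boosted]
    return random.choices(range(VOCAB_SIZE), weights=weights)[0]

def detect_z_score(tokens: list[int]) -> float:
    """z-score of how often tokens fall in their predecessor's green list.
    Unwatermarked text should score near 0; watermarked text scores high."""
    n = len(tokens) - 1
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    expected = GREEN_FRACTION * n
    variance = GREEN_FRACTION * (1 - GREEN_FRACTION) * n
    return (hits - expected) / math.sqrt(variance)

# Toy demo: the "model" emits uniform logits, so the watermark is the only signal.
prev, generated = 0, []
for _ in range(200):
    nxt = sample_watermarked([0.0] * VOCAB_SIZE, prev)
    generated.append(nxt)
    prev = nxt
print("watermarked z-score:", round(detect_z_score(generated), 2))   # large
print("random-text z-score:", round(detect_z_score(
    [random.randrange(VOCAB_SIZE) for _ in range(200)]), 2))          # near 0
```

The catch, and why the "in practice" part is the hard bit: detection only works if you know the seeding rule, the watermark degrades under paraphrasing or translation, and open-weight models let anyone skip the watermarking step entirely.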