The current policy makes it very difficult to discipline users merely for posting bad answers, even, or perhaps especially, when those bad answers are the systematic output of large language models.
However, the mere fact of using LLMs does not exempt a user from the other policies around the Code of Conduct, trolling, and so forth. This user has also been writing hostile comments on various sites, accusing people who critique their answers of "reading out of their posterior" or of being unintelligent and envious. They have also been reposting answers after their previous ones were deleted, as with this very question. Some of the ChatGPT errors have even led them to claim credit for the work of others, in violation of the Code of Conduct, such as one answer that implies they invented the theory of relativity.
Finally, some of these answers stray so far from the question as to be "not an answer" on their own merits, as with their most recent answer here, which appears to consist of pseudo-Buddhist musings largely unconnected to what was asked.
Any of these activities could be subject to moderation whether or not ChatGPT was involved. A user engaging in this sort of behavior without AI would certainly be sent to cool down for a bit, and I think the same should apply to users who violate site rules while incidentally using LLM content generators.