I have big doubts about this part of the presented data:
Yet, at the same time, actual GPT posts on the site have fallen continuously since release
The implication is that posts with a small number of drafts are occurring less and less on the site, and therefore that we have fewer ChatGPT answers.
I would argue that the only conclusion that can be drawn is that we have fewer posts that are blind copies from ChatGPT, pasted without reading or without edits to apply basic formatting.
I ran a small experiment: I went to one of my latest answers, copied the question into ChatGPT, and attempted to create an answer from its output. It took me three drafts to paste and reformat the answer so that it would meet my personal standards.
I don't have the exact number of drafts required to create my original answer, but since it consisted of three paragraphs of rather conservative length, plus ready-made code copied from an IDE, I'm pretty sure it took me fewer than 6 drafts.
I'm open to the idea that while this study was conducted there were additional internal indicators of answers being AI-made that were applied, but the published part has, in my opinion, major flaws regarding this point on drafts.