As to the part about the future of the site, I think it's imperative to be able to detect AI-generated answers. Whilst it is true that voting can take care of hallucinations and plainly incorrect answers, the problem lies in one of the core principles of the site: gamification through tying credibility to reputation.
Getting reputation for AI-generated content is akin to getting reputation for plagiarised content: it is not your own. Reputation gives you on-site credibility and trust in the form of privileges. More crucially, it also gives you credibility in the eyes of other users. An answer posted by a user with 100k reputation is trusted more by readers than the exact same answer posted by a user with 1 reputation. Thus, at some point people would be seen as credible and trustworthy, when in fact it is the AI that should be getting the reputation.
This is in fact a known problem, and it has already happened in a slightly different form, when a US teenager who did not speak any Scots started editing Wikipedia articles in Scots. They gained moderation privileges, allowing them to e.g. roll back corrective edits to their articles. At some point, even translation algorithms, such as Google Translate, started to draw on their partly or mostly incorrect articles.
Thus, if anything, fully AI-generated content should still be disallowed, and a detection mechanism should be found.
Note that I am not completely against the use of AI in all forms. Especially for non-native speakers, having an AI clean up their language to make it more understandable and grammatically correct is a good use in my opinion (much like spellcheckers have been doing for the past decades). Where the line should be drawn is up for discussion; e.g. is it acceptable for a user to answer a programming question with their own code, but have an AI write the complete explanation?