I would like clarification on Stack Overflow's current stance (as of December 3rd, 2025) regarding the reliability of LLMs.

The ban on posting AI content, according to the SO Help Center, is motivated by the general unreliability of LLMs (specifics quoted below).

However, by releasing and centering AI Assist, Stack Overflow is clearly claiming that some LLM tools can adequately represent and attribute Stack Overflow answers. In fact, SO staff consider AI Assist's output reliable enough to potentially facilitate voting on human answers in the future, so allowing LLMs to participate in (if not post on) Stack Overflow is a real possibility.

The standards used in judging AI Assist output may differ from the standards used for content posted on Stack Overflow, but SO policy makes general statements about LLM reliability to justify the genAI ban. These statements are explicitly incompatible with the AI Assist feature.

Apparently, SO policies are created and enforced by the community, not by official staff. I am assuming this means that the policy neither reflects the staff's views nor binds the staff.

However, I would still like clarification from the staff regarding how they view the AI Assist feature in relation to the community's current policy on generative AI. This could be an explanation of how AI Assist overcomes the shortcomings cited by the community as reasons for the genAI ban, or simply an acknowledgement that AI Assist is not bound by, and does not follow, principles and stances set out in SO policy.

Examples of contradictions

Here are several instances where I believe certain ideas and stances expressed in SO policy, and AI Assist as described on Meta Stack Exchange and the Stack Overflow Blog, contradict each other.

Focus on human-created content

Stack Overflow's ban on generative AI was/is motivated by at least three primary factors, according to the Help Center article above:

Users who ask questions on Stack Overflow expect to receive an answer authored and vetted by a human.

Users who ask questions on Stack Overflow may have already sought answers elsewhere. Due to the ease of using generative artificial intelligence services, if a user wanted an answer from an artificial intelligence, they may already have sought one, and so it does not make sense to provide one here. [emphasis added]

Generative artificial intelligence tools are not capable of citing the sources of knowledge used up to the standards of the Stack Exchange network.

I would like to highlight the middle quote, in which SO policy explicitly states that Stack Overflow is against providing LLM answers.

According to the SO Blog,

... Whether it’s not knowing the community rules, struggling to find relevant content, or worrying about asking duplicate questions, there are many barriers that users may face when first accessing our sites. We needed to create a new way to use Stack Overflow that would address these barriers, providing users with guidance and direction so they can feel at home in the community.

So, the stated rationale for AI Assist is that it removes some barriers. However, the fact that AI Assist definitionally provides non-human answers goes unaddressed here. In particular, AI Assist answers are not "human-vetted". Therefore, AI Assist's output is not what SO users expect, according to SO itself.

As a side note, SO staff have claimed that AI Assist "retrieving and extracting relevant content" does not constitute summarization, rewriting, or manipulation. I don't think this (ridiculous) claim solves the problem here: if nothing else, AI Assist openly integrates LLM output into its answers, so the claim is factually incorrect.

As per a separate Help Center article:

Responses may also include AI-generated and summarized information. If suitable answers were not found on Stack Overflow or other Stack Exchange network sites, the response may also include information sourced from the broader internet via LLM partner integrations.

This is patently self-contradictory: AI Assist answers are somehow verbatim extractions and, at the same time, "AI-generated and summarized information" "sourced from the broader internet".

The disconnect between AI Assist output and "expected" Stack Overflow answers is almost acknowledged in the Meta Stack Exchange post (quoted below), but never addressed, as far as I can tell.

Encouraging people to vote through AI Assist will also drive people away from directly engaging with human responses in context, leading to misinterpretations.

As per Meta Stack Exchange:

Human-verified answers from Stack Overflow and the Stack Exchange network are provided first, then LLM answers fill in any knowledge gaps when necessary. Sources are presented at the top and expanded by default, with in-line citations and direct quotes from community contributions for additional clarity and trust.

In this quote and elsewhere, we also see that the third reason for the genAI ban (attribution) is at least superficially addressed. However, we do not know whether AI Assist attributes sources outside Stack Exchange at all.

So, to summarize: of the three problems above, there was partial effort towards resolving one (attribution), but the other two (the expectation of human answers and the availability of alternative LLMs) remain essentially unaddressed.

AI Assist will be integrated into SO

Clearly, AI Assist is meant to become a central part of Stack Overflow. It will not be solely an entry point into the community, and it will certainly influence questions, answers and interactions. As stated above, SO staff have already acknowledged they plan on letting AI Assist facilitate voting on responses.

Here are three places where further AI Assist integration is reaffirmed.

According to SO Blog:

Our next goal is to bring AI Assist deeper into our platform, meeting users where they are - like on individual Q&A pages to provide timely assistance to users.

The Meta Stack Exchange post concludes with:

This is not the end of the work going into AI Assist, but the start of it on-platform. Expect to see iterations and improvements in the near future.

It won't even be a choice, according to a comment by someone with the Meta Stack Exchange Staff badge:

We don't have plans to provide toggles [to] turn off any AI Assist components.

I stress these statements because they demonstrate that SO staff clearly view LLMs and LLM-facilitated participation in the community differently from current policy.

In case anyone had doubts, the SO Blog makes it crystal clear that users will be encouraged to use AI Assist to post on Stack Overflow:

AI Assist would also include a pathway into the community to ask questions when the tool was unable to surface an exact answer, or when the user wanted to dive deeper. Through this, we are providing a way to engage with Stack Overflow with less friction than traditional search and Q&A.

(IMPORTANT: I have been told that the "pathway" above just guides people into the Ask Wizard and will not write or edit questions for people to post.)

Conclusion

To clarify, I am not asking SO staff to reconsider integrating AI Assist into everything. Clearly, that ship has long since sailed.

However, I think that the SO staff still owe it to the community to openly acknowledge that they are not following the principles outlined in the generative AI ban policy, or provide a way to reconcile the two.

All Stack Overflow users (regardless of stance on AI Assist) deserve honesty about the policies and expectations of SO staff as caretakers of this community.

The policy as written could be improved by emphasizing how AI Assist's output should be interpreted and used.

A note on duplication

A question similar to this one was asked in 2023, with no accepted answer. It is not the only question related to this policy without an accepted answer. I do not believe this question duplicates those, both because my question concerns the status of specific site policies (and where those policies are communicated) and because the release of a third-party-affiliated LLM on the front page of Stack Overflow presents a meaningfully different situation from prior questions.


On the edit:

I have extensively edited this question. The original version of this post misunderstood the generative AI ban. Namely, the ban only forbids posting questions or answers produced with genAI. It does not directly forbid engaging with existing content on Stack Overflow through generative AI, so AI Assist processing SO content does not violate the policy's rules or constitute an effective reversal of the genAI ban.

However, I believe there are still significant contradictions between the motivations and principles cited in Stack Overflow policy as reasons for the generative AI ban, and how AI Assist is presented by Stack Overflow staff. As such, I believe that the relevant policy may need to be updated to reflect the current stance and principles of Stack Overflow regarding LLMs, even if AI Assist is not inherently in violation of SO policy.

I should also note that the SO Blog post cited several times above has seemingly been taken down.

  • It hasn't been lifted. Nothing has changed. Commented Dec 3, 2025 at 22:22
  • @ygtozc thus far, this feature does not assist people in creating content for posting on SO. It already has the feature you specifically called out that creates a pathway to asking on SO… it's just a link to the Ask Wizard. Commented Dec 3, 2025 at 22:29
  • AI technology can be freely used by the company in any way they see fit. For a time earlier this year they actually posted LLM-generated content as answers. However, users are forbidden to use AI-generated content when posting. It was always like this. But I see the point that this might seem a bit unfair. Commented Dec 3, 2025 at 22:30
  • My understanding of the scope of the GenAI policy is that it's about what you post. (If I'm wrong on this, I'm sure someone will quickly correct me.) My cues are the historical context, phrasing like "content for Stack Overflow", and "Posting content generated by generative artificial intelligence tools may lead to a warning from moderators, or possibly a suspension for repeated infractions.", Commented Dec 3, 2025 at 22:32
  • and the big headline on the meta post, that says "All use of generative AI (e.g., ChatGPT and other LLMs) is banned when posting content on Stack Overflow." (italics added). Maybe these writings could be clarified so people understand that it's not about "AI Assist", but as far as I know, the policy hasn't changed. Commented Dec 3, 2025 at 22:32
  • @ygtozc why would that not be allowed? You can't use GenAI to write a question, but you can absolutely ask about AI-generated code. Commented Dec 3, 2025 at 22:55
  • @ygtozc - The policy with regard to AI generating questions and answers has not changed, regardless of whether the AI Assist feature exists on the network. If anything, you can use AI Assist for everything except generating that content on your behalf. If AI content is allowed on this network, that is the end of the network for many users, since there is no quality control on AI-generated content by those users who have or will use it. Commented Dec 3, 2025 at 23:00
  • It is practically impossible to enforce "do not use LLMs to post here because they suck" while simultaneously advertising an LLM that, according to you, does not suck. Edit: On top of that, you aren't teaching people to read more answers by summarizing them, you're encouraging them to NOT engage with anything human-written unless they have to. AI Assist therefore 1) means more people will post AI replies whether that's policy or not, and 2) fewer people will go out of their comfort zone to find and engage with replies if they can just get what info they need and leave. Commented Dec 3, 2025 at 23:09
  • @ygtozc this feature was first introduced in July and has coexisted with this policy without much conflict from then until now. I don't see what embedding it on the home page changes about the policy and our ability to enforce it. I definitely agree that it is a step in the wrong direction if we intend to increase participation, but that's another matter. Commented Dec 3, 2025 at 23:11
  • "How do I use AI Assist without violating current SO policy in general?" is extremely trivial: you can use it however you want as long as it's not for creating content on SO. The majority of intended usage scenarios of SO do not involve any content creation. Commented Dec 3, 2025 at 23:34
  • See… the thing is, this policy is community generated and community enforced. Stack can, if they so choose, put an end to it… but they haven't. They can also choose to implement features that help people violate the policy, which they have in the past, and quickly rescinded due to community pushback. It's their platform; it is within their right to drive away their community in service of their AI ambitions. Commented Dec 3, 2025 at 23:45
  • SO is supposed to be a library of Q&A. As such, the quality bar for content posted here is extremely high. That doesn't mean that content that doesn't pass that bar can't be useful, but not every Q&A belongs in a library of programming problems. Commented Dec 3, 2025 at 23:51
  • Just as it was their choice to not touch the network feature-wise for 10 years while it slowly declined, year after year, due to various reasons. :shrug: They decided their SaaS product was more important. Commented Dec 3, 2025 at 23:51
  • The question may be better received if it focuses on the objective contradiction mentioned in the comments instead of being based on a false premise ("clearly lifted"). That said, I don't see how such a discussion will change the current status, as neither the company nor the community is willing to step back from what they insist on. Commented Dec 4, 2025 at 1:46
  • Please avoid meta commentary in posts. Don't tell us about your edits, just edit the post to be the best presentation possible. Don't write "to clarify", clarify. Don't add "clarification", rewrite what isn't clear. PS There are no answers; you might consider posting a new question if you think, because of its previous assumptions, you are essentially proposing a new discussion topic. PS Clarify posts via edits, not comments; delete & flag obsolete comments. Commented Dec 5, 2025 at 2:10

2 Answers

Due to the ease of using generative artificial intelligence services, if a user wanted an answer from an artificial intelligence, they may already have sought one, and so it does not make sense to provide one here. [emphasis added]

I would like to point out the middle quote here, in which SO policy explicitly states SO is against providing an LLM.

You have mistakenly assumed that "here" in the policy means "the entire Stack Overflow network". It does not. It means "the question submission box and the answer submission box", which are designed and reserved for humans.

Even comment boxes, flags, and voting buttons have been opened to certain types of machine-learning automation, even before the generative-LLM explosion.

Re-read the policy, and think about it as keeping LLM-produced content separate from human-sourced content. Separate, not absent.


And I personally want this separate content to be uniformly marked in such a way that I can strip it with my ad-blocker...

I think it's valid to ask about the hypocrisy of adding policy to protect against generative AI and then using it, but the reason for adding the AI Assist tool should be pretty obvious...

I've been using Gemini AI in Android Studio as I learn Jetpack Compose, and as I had never used a dedicated LLM for anything else, in the past few weeks I've gained a lot of insight into how it behaves in real-world scenarios. I've realised its strengths and its weaknesses. Remember, this is not really brand-new technology; it's been out in public for about three or four years now. But it still struggles.

Example: Gemini apologises for putting "protobuf" at the start of the file, then does it again while claiming it's not there! (Then it tells me to copy and paste the whole file and the error will be gone.)
Another: Gemini tells me that what I did here was wrong! But it wouldn't be wrong if I did this other thing, which I did... so it backtracks and says I did it right anyway. It's very strange backpedalling.

This was after I made it start again after it went down a rabbit hole trying to fix a simple syntactical issue... Gemini seemingly recognises that it messed up by introducing library upon library in order to fix a simple syntax error it made, but could not recognise this until I pointed it out.

This is why I use Ask, and NOT the scary Agent that thinks it can read your mind and edit your code with massive presumptions just to fit with the modern norm for big businesses (why would you NOT want to do it like everyone else, human?? BECAUSE I DIDN'T ASK YOU TO TOUCH IT, ROBOT!!)... Gemini - for no reason - started to edit theme files, changing calls and names of functions and types. It kept repairing its own mistakes with more mistakes, essentially changing the essence of the code until I stopped it and rolled it all back... and on it went, with different files too, until I had to roll the whole project back. The worst bit is, I didn't even ask it to touch those theme files! It just took it upon itself to correct what it saw as an inconsistent naming scheme, which it wasn't!

So... MISTAKES in the DETAIL: it fights creative thinking, and its head doesn't know why its arse is talking (it confuses itself). These are some of its weaknesses, because it is a generalisation machine trying to complete specific, detailed tasks. It's great for personalised learning if it has the resources to generalise, to knit, to merge, but NOT great for detailed, on-the-forefront/in-the-trenches, consistent answers on a given specific subject, which a lot of SE communities rely on. You notice the lag, too: when it gives library versions without specifically checking for the latest version, it uses older versions, because that's what it has seen. They are inherently out of date due to core training date cut-offs.

So the pattern emerging here is that it's good for general, time-independent concepts, but not for detailed, time-sensitive core data, where uniqueness and "realness" are key concepts, as in a lot of niche industries and arts. In other words, it's good for fetching general data, not good at writing specific data relevant to the moment.

What would happen if "AI" were allowed to be used freely on all SE networks, and all Q&A communities, particularly those that are dependent on the particular perspective of human experts? Errors would creep in. Errors that would fit in like facts, and errors that would be obviously wrong. Obviously, to humans in that field. Who or WHAT would not recognise these errors, repeatedly consuming them and regurgitating that misinformation as factual details, spreading like an unhindered infection as more and more people use the machine to regurgitate the new truth? Generative AIs.

It's a vicious feedback loop which, along with other similar issues with other generative agents, could infect and destroy a database of human knowledge. The worst-case scenario is that the internet becomes useless as a source of factual information. How will we know what is real and what is not? Do we have sources? Do the sources have sources? It's a sound decision to monitor the situation closely. As closely as possible. Humans who are specialised in narrow fields should share helpful information about those fields and other closely related fields, and ask and answer questions, and be judged respectfully by their peers. That makes sense, as we are in direct contact with that art, field or industry, moulding it as we source information from each other. It is not a place for widespread algorithmic overgeneralisation, diluting that source.

So why is SO AI Assist a thing? I can't speak for SE, but I suspect the answer may partly be "because everyone else is doing it". But in truth, it's most likely this: to get people engaging with SO in a new way. To utilise this immense database as a controlled data source. It's not there to write answers; it's there to help you find an answer. It's a new way to utilise this database of human knowledge. It's not going to be perfect, but that's why it won't be adding to the database. It's simply utilising the database, that's all. Sharing the knowledge directly with the seeker in a new, useful way.

A good knowledge database is only as good as your ability to access it when needed.
Free access to personalised knowledge from a reputable source is GREAT.

Ok. So that turned into a bit of a rant about AI, but these are just my thoughts on the subject, borne of my experiences with SE, SO, the modern world and generative AI. But the future of knowledge access is AI agents. So it's pretty obvious that the (apparently) struggling SE Network would employ the latest thing to help breathe life into the mothership.

The opening subtitle from "Introducing SO AI Assist" explains their reasoning pretty succinctly:

"The way that developers interact with knowledge has changed in the age of AI. That's why we created AI Assist—a new way for users to access our 18 years of expert knowledge, and how Stack Overflow is remaining the always-open-tab of programmers around the world."

  • Correction on the first sentence: GenAIs are not banned on SE in general. Using GenAIs to post content without clear and proper attribution/disclosure is banned everywhere, but with proper attribution/disclosure it's up to the site to decide its own policy. SO and many sites on SE choose to ban GenAI even with attribution, but there are also SE sites where that is allowed. Commented Dec 6, 2025 at 3:33
  • Edited. I haven't really paid as much attention to the AI policy as I probably should have, as a mod. It's banned on my small community. But SE must roll with the times or die. Commented Dec 6, 2025 at 15:55
  • @n00dles - I would rather see it die than see AI garbage on the community I love. Commented Dec 6, 2025 at 18:33
  • This does not answer the question, though I appreciate your attempt to explain SO staff's motivations. I am asking for SO staff's opinion specifically. The passage you quoted does not acknowledge or resolve the contradiction between SO policy's view of LLMs and the existence of AI Assist. Commented Dec 7, 2025 at 1:04
  • Well, I thought it did (at least after my edit)! Maybe I should have TL;DR'd. Why wouldn't they add an AI Assist? Seems pretty obvious to me. Everybody else is doing it for information access, while nobody wants it. The policy is about AI answers, right? SO AI Assist is about humans accessing those answers more easily. To me, those are two disparate things. Commented Dec 9, 2025 at 14:53
  • @SecurityHound me too, that was my point with the Gemini errors. Wiki-style databases should be 100% human-written. But as far as I can see, AI Assist doesn't affect the database. It's simply an access tool. Commented Dec 9, 2025 at 15:02
  • The generative AI ban is only on AI answers, but the AI policy provides reasoning for this ban that goes beyond just posting on Stack Overflow. The Help Center even explicitly states "Due to the ease of using generative artificial intelligence services, if a user wanted an answer from an artificial intelligence, they may already have sought one, and so it does not make sense to provide one here." Edit: Regardless of what SO marketing says, "AI Assist does not provide AI answers" isn't a coherent answer here. Commented Dec 9, 2025 at 18:51
  • This question is written under the assumption that motivations and stances expressed in "Policy:" posts and the SO Help Center are meaningful parts of SO policy, alongside the rules. If an official part of the website says "LLMs suck" and another part of the website says "Use our awesome LLM", there's still a disconnect between policy motivations and practice. To quote another example from the question, "Generative artificial intelligence tools are not capable of citing the sources of knowledge used up to the standards of the Stack Exchange network." - But AI Assist is up to par? Commented Dec 9, 2025 at 19:02
  • Even if the citation standards are lower for AI Assist than for SO content, there's still a problem with saying "GenAI cannot cite properly" and then telling SO users (old and new) that AI Assist can somehow cite well enough to represent human SO answers. Commented Dec 9, 2025 at 19:12
  • @n00dles I should add that, as noted in the question, AI Assist is intended to facilitate voting on human answers in the future. So it is not merely an "access tool"; it is absolutely intended to guide user participation in SO Q&As. I am not sure whether you include votes in "affecting the database", but it's worth noting. Commented Dec 10, 2025 at 13:32
  • @ygtozc Ok, I get what you're saying wrt policy. However, citing sources would be much easier and much more coherent with a dedicated SO LLM. The problem the policy seems to have with LLMs is that they are unmanaged internet-wide generalisation tools for which the core training sources can't inherently be trusted or verified - or even sought, sometimes - and for a general LLM, I'd agree with that. But for a dedicated database LLM, that issue is resolved by the nature of the database, which is curated and managed by (awesome and passionate) human experts and enthusiasts. Commented Dec 10, 2025 at 16:33
  • ...Further, the (ideally) properly cited answer is a possible next stop for a user to seek specifics if they wish to learn more. WRT intention, I see it as "General LLMs suck"; "Use our dedicated LLM". BTW, it would be nice to have more clarity on the AI voting assist thing. Maybe Staff will add an answer, but I think it's too much of a "hotbed" question. Maybe they'll just update the policy to be clearer. Commented Dec 10, 2025 at 16:34
  • @n00dles the policy isn't written by staff. It's written by the community, and the last time staff touched that policy, it set off the moderation strike. That's the core conflict here. There are two groups with very different positions, and the inconsistencies are a reflection of that tension and who has control over what. Commented Dec 12, 2025 at 11:35
  • @n00dles It's about AI. It's just that the community and the company (SE staff) have very different positions on the future role of AI on the site. The policy is within the sphere of the community. AI Assist is within the sphere of the company. Each sphere is mostly internally consistent, but looking between them, the whole appears incoherent and nonsensical, because each side thinks the other is doing the wrong thing. Commented Dec 14, 2025 at 19:51
  • @n00dles Oh, in addition to all of that, AI Assist isn't an exception to the generative AI ban. AI Assist doesn't cite sources sufficiently for posting, either. As far as the genAI content ban is concerned, AI Assist is as unreliable as any other LLM (rightly so). If SO staff claim to follow policy, it's fair to ask: why are you taking credit for the accuracy and integrity of our posts in your LLM, if that LLM isn't responsible for representing our posts accurately and with integrity? Briefly: "AI Assist doesn't plagiarize or manipulate your posts, but if it does, we aren't responsible." Commented Dec 15, 2025 at 16:54
