
Timeline for answer to No, I do not believe this is the end by Greg Burghardt

Current License: CC BY-SA 4.0

Post Revisions

13 events
2 days ago comment added Serge Ballesta BTW, have you ever tried to publish content on Wikipedia? IMHO both communities work the same way: they build a nice repository of high-quality information (which AI companies use at no cost...) that anybody can use. Most questions that are badly received here simply should not have been asked if the poster had done minimal research... You simply do not have to ask a question on Wikipedia (and anyway cannot...)
Jan 26 at 16:26 comment added l4mpi Also, for an example closer to home: I tried the SO LLM on a small code review (~20 lines of Java) last week for fun while waiting for a CI pipeline. I already had unit tests covering all cases, so I knew the code was correct. It found three things "wrong" with the code; two were trivial but didn't apply (relating to empty input, which is impossible in my case). The third "error" was a hallucination by the LLM, complaining about an unhandled case which was in fact handled. After I pointed out it got that part wrong, the LLM concurred and generated an "apology". Supremely useful!
Jan 26 at 16:12 comment added l4mpi @Lundin the LLM was clearly going for comedy. Battery with 2 plus poles, check. Nonsensical "V" circle (voltmeter?) in the top-left diagram with only one connected pole and a "current" arrow not going through a cable, check. Two separate instances of R1 in one diagram, check. "V oui", check (what's "check" in French?). Nonsensical formulas, check - highlights: "I=" and "R=" in the lower left sharing a right-hand side; "I=V/R=R", so I=R and R=V/R - seems legit; and that Vout term at bottom center that's several symbols short of being an expression. Confusing parallel and series might be the least wrong part :D
Jan 23 at 10:16 comment added Lundin Yeah, AI is great. I just asked the latest and greatest ChatGPT 5 some very basic beginner-level electronics questions: demonstrate Ohm's Law, parallel resistance, and voltage dividers - something that most students can probably answer. i.sstatic.net/65x9zG9B.png. Except... that's series resistance, not parallel... Where is the evidence that it can even answer beginner-level questions?
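(Editorial aside, not part of the original timeline: for readers outside electronics, the distinction the two comments above hinge on is elementary. Ohm's law relates voltage, current, and resistance, and the two basic resistor combinations behave oppositely - series resistances add, while parallel resistances combine reciprocally, so a parallel pair is always smaller than its smallest member:)

```latex
% Ohm's law
V = IR
% Two resistors in series: total resistance grows
R_\text{series} = R_1 + R_2
% Two resistors in parallel: total resistance shrinks
\frac{1}{R_\text{parallel}} = \frac{1}{R_1} + \frac{1}{R_2}
\quad\Longrightarrow\quad
R_\text{parallel} = \frac{R_1 R_2}{R_1 + R_2}
```

A quick sanity check distinguishes them: for two equal resistors R, the series value is 2R while the parallel value is R/2 - which is why labeling a series diagram "parallel" is an immediately checkable error.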
Jan 22 at 8:27 comment added Cerbrus You've lost me at your first line. "I think Q&A's days are numbered. I see sooooo many long-time members willing to die on the hill of Q&A, but the rest of humanity has simply moved on to a different hill". This is AI-bro rhetoric. It follows the whole vibe that AI is the magic bullet that solves everything. The users here who are against the use of AI simply see AI's limitations, and are arguing to apply AI where it HELPS, not just everywhere.
Jan 21 at 23:43 comment added Karl Knechtel LLMs are extremely good at answering beginner-level questions as asked. The problem is that beginners are not good at asking the question they should ask.
Jan 21 at 21:48 history edited halfer CC BY-SA 4.0
Spelling
Jan 21 at 20:17 comment added Dharman Mod "churning out low quality content in an endless river of sewage expecting us experts to filter it into drinkable water" That is certainly how it felt. So many users thought they could just post whatever and get a solution. No regard for our quality standards. It was like wading through sewage.
Jan 21 at 20:15 comment added Dharman Mod @tylerh It's equally difficult to get them answered here as they would probably be closed as too broad.
Jan 21 at 17:22 comment added user400654 I believe there is some truth to this, but at the end of the day, LLMs need sources. We can be a source, or we can fold and just settle on being irrelevant. I don't believe the community is inherently toxic; we're just using the tools we were given. We currently have no better way of ensuring low-quality content gets deleted and/or isn't used as a source than preventing it from getting an answer. That isn't a toxic-community problem.
Jan 21 at 17:12 comment added TylerH So, naturally, I disagree with the premise of this argument and the quote it begins with.
Jan 21 at 17:12 comment added TylerH It may be easy to get genAI to correctly answer basic programming questions, but from what I have seen it is not easy to get it to correctly answer questions that have even a moderate level of complexity (read: a level of competency that I would expect from someone with 6+ months working in a language/technology, or perhaps someone with 5+ years of experience as a programmer, but not with the language/technology in question).
Jan 21 at 16:49 history answered Greg Burghardt CC BY-SA 4.0