Newest Questions
541 questions
1 vote · 1 answer · 10 views
Seeking an open-source analogue of Windsurf
Having used Windsurf on an employer-issued MacBook, I'm most impressed -- and would like to have the same capabilities when working on my own projects at home.
My home computers run FreeBSD, however, and ...
-1 votes · 1 answer · 17 views
Why doesn’t GPT-5.2 pro list a cached-input price?
I noticed that in the OpenAI pricing tables, GPT-5.2 and GPT-5 mini both show a cached input price (e.g., $0.175/1M for GPT-5.2 and $0.025/1M for GPT-5 mini), but GPT-5.2 pro shows a dash (-) instead ...
0 votes · 1 answer · 20 views
How does ChatGPT deep research compare with Perplexity deep research?
With the release of dedicated "Deep Research" modes in both ChatGPT and Perplexity, I wonder: how does ChatGPT's Deep Research compare with Perplexity's?
0 votes · 0 answers · 6 views
How can I select which model to use for a deep research query with the Perplexity Android application?
How can I select which model to use for a deep research query with the Perplexity Android application? I don't see the option on Android:
unlike on the Perplexity website where I do see the option ...
1 vote · 1 answer · 35 views
How can I know which "best" model was used in Perplexity?
I used the "best" model for a query in Perplexity:
How can I know which "best" model was used for deep research in Perplexity?
0 votes · 0 answers · 20 views
How can I improve the accuracy of custom GPTs when they rely on uploaded knowledge bases? [closed]
I created a custom GPT with a detailed knowledge base (PDFs, docs).
However, answers sometimes:
Miss key information
Cite irrelevant parts of documents
Give shallow or generic summaries
Contradict ...
0 votes · 0 answers · 24 views
Why do LLMs still hallucinate even when using RAG with high-quality retrieved documents? [closed]
I’m experimenting with different RAG (Retrieval-Augmented Generation) pipelines, and I still observe hallucinations even when the retrieved context is clearly relevant.
Specifically:
The model ...
0 votes · 0 answers · 7 views
In ChatGPT Deep Research, what does "X searches" mean? Web searches, in-page searches, or something else?
When using ChatGPT’s Deep Research mode, the interface displays a counter such as:
“Reviewing comments and exploring further suggestions… 168 searches”
It is unclear to me what the term “searches” ...
0 votes · 0 answers · 10 views
How do I set a custom GPT to read private GitHub company knowledge?
I set up GitHub company knowledge for the team.
I want to wrap a prompt into a custom GPT to read the company knowledge folder and teach it the naming conventions of all specific files and folders.
...
-1 votes · 3 answers · 57 views
What are the most reliable strategies to reduce hallucinations in retrieval-augmented generation (RAG) systems?
I am building a retrieval-augmented generation (RAG) pipeline using vector search and an LLM for answering factual queries over domain-specific documents.
Even when relevant context is retrieved and ...
1 vote · 1 answer · 28 views
How can I list all the Deep Research queries I made in ChatGPT?
I have made several hundred Deep Research queries in ChatGPT. I would now like to review some of them, and to do so I need a complete list of all the Deep Research queries I have submitted.
On https://...
1 vote · 1 answer · 23 views
Where can I view the price of each query in Cursor?
Earlier today I was able to see the price of each query in Cursor via https://cursor.com/dashboard:
But now it only displays some used Cursor credits:
Where can I view the price of each query in ...
0 votes · 0 answers · 11 views
Where can I see the model used for a Deep Research query in ChatGPT?
Where can I see the model used for a Deep Research query in ChatGPT?
I don't see the information:
0 votes · 0 answers · 36 views
When running a query on chatgpt.com with thinking or deep research turned on, how can I see how many tokens the query used?
When running a query on https://chatgpt.com/ with thinking or deep research turned on, how can I see how many tokens the query used?
I know via the https://www.openai.com/ API one can use the usage ...
7 votes · 5 answers · 4k views
Why do LLMs confidently answer incorrect factual questions even when explicitly instructed to “only use provided information”?
I’ve been experimenting with different LLMs and noticed a consistent issue:
even when I provide a small, controlled set of facts and explicitly instruct the model to only answer using that information,...